Incomplete Assembly of the Dystrophin-Associated Protein Complex in 2D and 3D-Cultured Human Induced Pluripotent Stem Cell-Derived Cardiomyocytes
Human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CM) are increasingly used to study genetic diseases on a human background. However, the lack of a fully mature adult cardiomyocyte phenotype in hiPSC-CM may limit the scope of these studies. Muscular dystrophies and their concomitant cardiomyopathies result from mutations in genes encoding proteins of the dystrophin-associated protein complex (DAPC), a multi-protein, membrane-spanning complex. We examined the expression of DAPC components in hiPSC-CM that underwent maturation in 2D and 3D culture protocols, and compared the results with human adult cardiac tissue and isolated cardiomyocytes. We found that, similarly to adult cardiomyocytes, hiPSC-CM express dystrophin, in line with previous studies on Duchenne muscular dystrophy. β-dystroglycan was also expressed, but, contrary to findings in adult cardiomyocytes, neither the sarcoglycans nor α-dystroglycan were, despite the presence of their mRNA. In conclusion, despite the robust expression of dystrophin, the absence of several other DAPC protein components cautions against reliance on commonly used hiPSC-CM maturation protocols for functional assessment of the complete DAPC.
INTRODUCTION
Muscular dystrophies are genetically inherited degenerative disorders with progressive impairment of skeletal, respiratory, and cardiac function (Mercuri et al., 2019). The most prevalent muscular dystrophies involve proteins of the dystrophin-associated protein complex (DAPC), with dystrophin, sarcoglycans, dystroglycans, and laminin as core components (Figure 1A, left). The DAPC has mechanical and signaling roles in muscle cells, providing a link between the extracellular matrix and the intracellular cytoskeleton (Cohn and Campbell, 2000; Ozawa, 2010). Studies in animal models of muscular dystrophies have provided insights into the mechanistic pathways leading to the development of cardiomyopathy (loss of membrane integrity, increase in cell permeability, cardiomyocyte cell death, and replacement fibrosis) (Ikeda et al., 2000; Heydemann et al., 2001; Lapidos et al., 2004; Fraysse et al., 2010; Law et al., 2020).
However, because cardiac biopsies from these patients are unavailable, there remains a knowledge gap in the understanding of the cellular mechanisms underlying cardiomyopathy in humans, hampering clinical translation. To overcome this limitation, human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CM) are increasingly used as a model. A leading example is Duchenne muscular dystrophy, resulting from loss of dystrophin, which has been studied extensively with important translational insights (Long et al., 2018; Kamdar et al., 2020; Moretti et al., 2020; Pioner et al., 2020; Mekies et al., 2021). Notwithstanding the advantages of using human cells, a general limitation of the approach is that hiPSC-CM lack several features of adult cardiomyocytes, presumably due to incomplete maturation, resulting in a fetal or neonatal phenotype (Guo and Pu, 2020; Karbassi et al., 2020). Multiple strategies have been presented to promote hiPSC-CM maturation (reviewed in Ahmed et al., 2020). These rely on hormonal treatment, imposing load and pacing, or a 3D environment. HiPSC-CM generated via some of these methods have been used in the study of Duchenne muscular dystrophy (Long et al., 2018; Pioner et al., 2020), yet it has also been suggested that dystrophin is needed for hiPSC-CM maturation (Pioner et al., 2020), and it is presently unknown whether the DAPC in hiPSC-CM forms a complete functional complex.
The present study examines the presence of the DAPC in hiPSC-CM, using maturation protocols that are accessible and commonly used (Figure 1B). The first is the well-established technique of creating a small engineered heart tissue by culturing the cells in a 3D microenvironment. Cells are embedded in a fibrin/Matrigel hydrogel connected to silicone posts that exert a tension force, mimicking the preload tension on a muscle fiber (Jackman et al., 2016; Breckwoldt et al., 2017; Tiburcy et al., 2017; Leonard et al., 2018; Goldfracht et al., 2020). The second maturation method is a protocol that has been shown to structurally improve the hiPSC-CM membrane, with the presence of transverse tubules as an important hallmark of cardiomyocyte maturity, by stimulating 2D-cultured hiPSC-CM with thyroid hormones and glucocorticoids (Parikh et al., 2017; Huang et al., 2020). The data are compared with hiPSC-CM differentiated in 2D without an intensified maturation protocol, and with adult human cardiac tissue.
METHODS
Human Induced Pluripotent Stem Cell Lines
We used a commercial hiPSC line from ThermoFisher Scientific (A18945, lot 1793435) and three additional non-commercial hiPSC lines: one derived at the Stem Cell Institute, KU Leuven (HC1), and two generated at the University Medical Center Hamburg laboratory (ERC001 and ERC018).
Proteasome Inhibition Test
Three-dimensional cultured hiPSC-CM were incubated at 37 °C with 10 µM MG-132 (Merck-474787) in the culture medium for 8 h. After 8 h, the 3D hiPSC-CM were snap frozen in liquid nitrogen for further analysis by immunoblot. Proteasome inhibition efficiency was confirmed by immunoblotting for ubiquitinated proteins using an anti-ubiquitin antibody.
Adult Human Cardiomyocyte Isolation
Use of tissue from unused human donor hearts conforms with ethical guidelines, and permission for the study was obtained from the Ethical Committee of UZ Leuven (permit number S58824). Hearts were collected in an ice-cold solution containing (in mM): 130 NaCl, 27 KCl, 6 N-2-hydroxyethylpiperazine-N-2-ethanesulfonic acid (HEPES), 1.2 MgSO4, 1.2 KH2PO4, and 10 glucose (pH adjusted to 7.2 with NaOH) and transported in this solution from the hospital to the laboratory. A coronary artery from a wedge of the left ventricle was cannulated. The wedge was then perfused for 30 min with a Ca2+-free solution at 37 °C, bubbled with O2 and containing (in mM): 130 NaCl, 5.4 KCl, 6 HEPES, 1.2 MgSO4, 1.2 KH2PO4, and 20 glucose; pH was adjusted to 7.2 with NaOH. After this washing step, the wedge was perfused for 40 min with the same solution containing around 0.4 U/ml of Collagenase A (Merck-10103586001) and 0.1 mg/ml Protease XIV (Merck-P5147). When the tissue appeared digested, it was perfused for 20 min with a low-Ca2+ solution (Ca2+-free solution with 0.18 mM CaCl2). The mid-myocardium from the digested, perfused area was cut into small pieces and triturated for 5 min in the low-Ca2+ solution. Isolated cardiomyocytes were then filtered through a 250 µm mesh and resuspended in low-Ca2+ solution until use.
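The recipes above are given as millimolar concentrations. As a convenience only, the sketch below (our own illustration in Python, not part of the published protocol) shows the standard conversion to weigh-out masses, mass = molarity × volume × molecular weight; the molecular weights are approximate, assume anhydrous reagents, and should be checked against the actual catalog items (hydrated salts differ).

```python
# A minimal sketch (our own illustration, not part of the published protocol) of the
# standard conversion from the millimolar recipe above to weigh-out masses:
#   mass (g) = concentration (mol/L) x volume (L) x molecular weight (g/mol)
# Molecular weights are approximate, assume anhydrous reagents, and should be checked.

transport_solution_mM = {"NaCl": 130, "KCl": 27, "HEPES": 6,
                         "MgSO4": 1.2, "KH2PO4": 1.2, "glucose": 10}
approx_mw_g_per_mol = {"NaCl": 58.44, "KCl": 74.55, "HEPES": 238.30,
                       "MgSO4": 120.37, "KH2PO4": 136.09, "glucose": 180.16}

volume_L = 1.0  # prepare 1 liter
for reagent, conc_mM in transport_solution_mM.items():
    grams = conc_mM / 1000.0 * volume_L * approx_mw_g_per_mol[reagent]
    print(f"{reagent}: weigh {grams:.3f} g for {volume_L:.1f} L")
```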
Electrophysiology
Coverslips containing the cells (isolated from 2D hiPSC-CM, 3D constructs, or adult human hearts) were mounted in a chamber perfused with normal Tyrode solution warmed to 37 °C and containing (in mM): 137 NaCl, 5.4 KCl, 1.8 CaCl2, 0.5 MgCl2, 5.5 glucose, and 10 HEPES; pH was adjusted to 7.4 with NaOH. Patch-clamp pipettes (2-3 MΩ) (GB200-8P, Science Products) were filled with a solution containing (in mM): 120 K-Asp, 20 KCl, 10 HEPES, 5 Mg-ATP, 10 NaCl, and 0.05 Fluo-4 (ThermoFisher Scientific-F14200); pH was adjusted to 7.2 with KOH. Cells were patched in the whole-cell configuration, and action potentials were measured using an Axon 200B amplifier and Digidata 1550B (Molecular Devices) in current-clamp mode. Stimulated action potentials were recorded after a 5 ms pulse of 0.1 nA at a 1 Hz frequency. To measure voltage-gated calcium currents (ICaL), the amplifier was switched to voltage-clamp mode. A train of seven 250 ms pulses from −70 to +10 mV was followed by a sodium channel activation pulse of 750 ms from −70 to −40 mV, and ICaL was then recorded with 250 ms steps increasing in 10 mV increments from −50 to +60 mV.
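For readability, a minimal sketch of the ICaL command sequence described above is given below; it is our own illustration in Python, the −70 mV holding potential is inferred from the step descriptions, and it is not the acquisition software actually used.

```python
# Illustrative sketch (our own data structure) of the ICaL voltage-clamp command
# sequence described above; not the acquisition software actually used.

def ical_protocol(holding_mV=-70):
    """Return the ordered list of steps as (label, duration_ms, from_mV, to_mV)."""
    steps = []
    # Conditioning train: seven 250-ms pulses from -70 to +10 mV
    for _ in range(7):
        steps.append(("conditioning pulse", 250, holding_mV, 10))
    # Prepulse: 750 ms from -70 to -40 mV (sodium channel activation step)
    steps.append(("prepulse", 750, holding_mV, -40))
    # Test pulses: 250 ms each, from -50 to +60 mV in 10-mV increments
    for v_mV in range(-50, 61, 10):
        steps.append(("test pulse", 250, holding_mV, v_mV))
    return steps

for label, duration_ms, v_from, v_to in ical_protocol():
    print(f"{label}: {duration_ms} ms, {v_from} mV -> {v_to} mV")
```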
Immunostaining
Snap-frozen adult human heart tissue embedded in optimal cutting temperature compound (OCT) was cut using a cryostat (Leica) and directly fixed with 4% paraformaldehyde for 10 min (Santa Cruz Biotechnology-sc-281692). HiPSC-CM in 2D monolayers were cultured directly in imaging plates (Ibidi-82406) and fixed with 4% paraformaldehyde for 15 min. HiPSC-CM in 3D constructs were fixed while still attached to the silicone posts with 4% paraformaldehyde for 20 min.
Immunoblot
Adult human heart samples and 3D-cultured hiPSC-CM were snap frozen in liquid nitrogen and stored at −80 °C until use. Homogenization of samples was done on ice using a tissue grinder (Wheaton) in the following solution: 10 mM Tris-HCl pH 7.5, 100 mM NaCl, 1 mM EDTA, 1 mM Na3VO4, 1% sodium deoxycholate, 1% Triton X-100, 1% NP-40, 0.1% sodium dodecyl sulfate (SDS), 10 mM NaF, 1 mM phenylmethylsulfonyl fluoride (PMSF), and protease inhibitor tablets (ThermoFisher Scientific-A32963). Protein concentration was estimated using the bicinchoninic acid (BCA) assay from ThermoFisher Scientific (23225), and aliquots were stored at −80 °C until use. For de-glycosylation of proteins, a PNGase kit was used (New England BioLabs-NEB P0704S). Homogenized samples (30 µg) were loaded on a home-made Tris-acetate 3-15% gel, as described (Cubillos-Rojas et al., 2012). After an overnight liquid transfer (4 °C at 40 V for 19 h) of the gel to a polyvinylidene difluoride (PVDF) membrane, the membrane was blocked for 45 min with 4% non-fat dry milk (Bio-Rad-1706404) diluted in PBS (pH 7.4) with 0.05% Tween-20. The membrane was cut at around the 160 kDa marker into two pieces: the top part was used to probe for dystrophin and the lower part for sarcoglycans and dystroglycans (Supplementary Figure 3). Membranes were then incubated overnight at 4 °C with the primary antibodies diluted in 2% milk (same antibodies as used for immunostainings, 1:1,000 dilution). The next day, after three washes in PBS, membranes were incubated for 2 h at room temperature with a secondary antibody, goat anti-mouse IgG Alexa 680 (1:10,000, ThermoFisher Scientific-A28183). Membrane immunofluorescence was quantified with a LI-COR Odyssey CLx infrared imaging system.
Polymerase Chain Reaction
Adult human heart samples and 3D-cultured hiPSC-CM were snap frozen in liquid nitrogen and stored at −80 °C until use. Homogenization of samples was done using ceramic beads (MP Biomedicals-116913050-CF) in 1 ml of TRI Reagent (Merck-93289) with a FastPrep-24 grinder (MP Biomedicals) at a speed of 6 m/s for 20 s, twice. Chloroform (0.2 ml) was added per milliliter of TRI Reagent and incubated for 3 min at room temperature. After centrifugation at 12,000 g for 15 min at 4 °C, the upper phase containing RNA was collected. To this, 0.5 ml of isopropanol per milliliter of TRI Reagent was added and incubated for 5 min at room temperature. Samples were then centrifuged at 12,000 g for 10 min at 4 °C and the supernatant removed. The pellet was washed with 1 ml of 75% ethanol, centrifuged at 7,600 g for 5 min at 4 °C, and the supernatant discarded. The RNA pellet was resuspended in 20 µl of DNase/RNase-free water. cDNA was generated from the extracted RNA by reverse transcription using a kit (ThermoFisher Scientific-4368814). The cDNA was then amplified by polymerase chain reaction (PCR) using the Platinum Taq DNA Polymerase High Fidelity kit (ThermoFisher Scientific-11304011) with the following primers: α-SG (TGAGGTCACAGCCTACAATCG and AACTCGGCTTGGTATGGCAG), β-SG (AGCAAAGTTCCAATGGTCCTG and TCATCAATCGGAATGTATCCAGC), γ-SG (GAGCAGTACACTACAGCCACA and CGCAGTCCATCTTTTGTTACACA), and δ-SG (GCGGAAACGATGCCTGTATTT and TGGCGTAGAGAGGTTGTAAGAA). The PCR products were resolved on a 2% agarose gel for 30 min at 50 V using SYBR Safe DNA Gel Stain (ThermoFisher Scientific-S33102) and visualized by UV light exposure using a GelDoc Imaging System (Bio-Rad). For RT-qPCR, Platinum SYBR Green qPCR SuperMix-UDG was used (ThermoFisher Scientific-11733038) and run on a ViiA 7 Real-Time PCR System (ThermoFisher Scientific). Gene expression was normalized to housekeeping genes (GAPDH and RPL13A), and values were expressed as 2^−ΔΔCt as a fold difference relative to adult.
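As a worked illustration of the 2^−ΔΔCt normalization mentioned above, the short Python sketch below uses hypothetical Ct values; averaging the two housekeeping genes is our assumption about how GAPDH and RPL13A were combined, not a statement of the authors' exact calculation.

```python
import numpy as np

# A minimal sketch of the 2^-(delta-delta Ct) (Livak) normalization described above.
# Ct values are hypothetical; normalizing to the mean of the two housekeeping genes
# is our assumption, not necessarily the authors' exact procedure.

def ddct_fold_change(ct_target, ct_housekeeping, ct_target_ref, ct_housekeeping_ref):
    """Fold change of a target gene in a sample relative to the adult reference."""
    dct_sample = ct_target - np.mean(ct_housekeeping)       # delta Ct, hiPSC-CM sample
    dct_ref = ct_target_ref - np.mean(ct_housekeeping_ref)  # delta Ct, adult reference
    return 2.0 ** (-(dct_sample - dct_ref))                 # 2^-(delta-delta Ct)

# Example with made-up Ct values for an alpha-SG-like target:
fold = ddct_fold_change(ct_target=28.1, ct_housekeeping=[19.8, 20.3],
                        ct_target_ref=25.6, ct_housekeeping_ref=[19.9, 20.1])
print(f"Expression relative to adult: {fold:.2f}-fold")
```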
Statistics
Graphs were prepared and data analyzed using GraphPad Prism software version 9. Normality was tested with Shapiro-Wilk. Except for resting membrane potential, the data did not pass the normality test, and hence groups were compared using Kruskal-Wallis with Dunn's multiple comparison. For the analysis of the resting membrane potential, we used Welch ANOVA, with Dunnett T3 for multiple comparisons. P-values are indicated above each graph. Individual data points are displayed in the graphs with the mean and the standard error of the mean as error bars.
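A sketch of the non-parametric branch of this workflow (Shapiro-Wilk, then Kruskal-Wallis with Dunn's post hoc test) is shown below in Python with invented example groups; scikit-posthocs and the Holm p-adjustment are our choices and may differ from what GraphPad Prism applies, and the Welch ANOVA / Dunnett T3 branch is omitted.

```python
from scipy import stats
import scikit_posthocs as sp
import pandas as pd

# Illustrative groups (made-up values), not the study's data.
groups = {
    "2D hiPSC-CM": [210, 250, 190, 230, 205],
    "3D hiPSC-CM": [320, 290, 350, 310, 305],
    "adult":       [380, 420, 400, 395, 410],
}

# 1) Shapiro-Wilk normality test per group
for name, values in groups.items():
    _, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

# 2) Non-normal data: Kruskal-Wallis across the three groups
_, p_kw = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis p = {p_kw:.4f}")

# 3) Dunn's multiple-comparison test between group pairs (Holm adjustment assumed)
df = pd.DataFrame([(g, v) for g, vals in groups.items() for v in vals],
                  columns=["group", "value"])
print(sp.posthoc_dunn(df, val_col="value", group_col="group", p_adjust="holm"))
```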
RESULTS AND DISCUSSION
In adult cardiomyocytes, the core proteins of the DAPC [dystrophin, dystroglycans (α and β), and sarcoglycans (α, β, γ, and δ)] were present at the membrane, both in the external plasmalemma and in transverse tubules, but not at the intercalated discs (Figure 2A, right). In contrast, hiPSC-CM generated using a common 2D monolayer protocol expressed an incomplete DAPC, with only dystrophin and β-dystroglycan present (Figure 2A, left). We investigated whether a further maturation process could improve the expression of the proteins that comprise the DAPC, especially the sarcoglycans as important mediators of dystrophy-related cardiomyopathies. To this end, two protocols were used: a first in which we cultured hiPSC-CM in a 3D microenvironment, and a second in which we combined a treatment with triiodo-L-thyronine and glucocorticoid for 14 days before seeding the cells on 2D Matrigel mattresses. However, neither maturation protocol improved DAPC expression above that seen in 2D hiPSC-CM, which only express dystrophin and β-dystroglycan but not sarcoglycans or α-dystroglycan (Figure 2A, middle). These findings were confirmed by immunoblots of the 3D-cultured hiPSC-CM (Figure 2B, panels a and b). All data presented here are from the ThermoFisher Scientific hiPSC line, and similar data were obtained in 3D cultures from three different hiPSC lines, one from the Leuven lab and two from the Hamburg lab (Supplementary Figures 1, 2). The specificity of the sarcoglycan antibodies used was further verified by deglycosylating the proteins in adult cardiac homogenates with PNGase F; as expected, all sarcoglycans decreased in molecular weight after deglycosylation (Figure 2B, panel b). Inhibition of the proteasome by treatment with MG-132 (10 µM) for 8 h to reduce protein degradation had no effect and could not uncover sarcoglycan expression (Figure 2B, panel c). Yet, hiPSC-CM expressed sarcoglycans at the mRNA level (Figure 2C). Additional RT-qPCR experiments showed differences in expression of components of the DAPC between 3D-cultured hiPSC-CM and adult cardiac tissue (Figure 2D).
We examined proxies for maturation in the present experiments, focusing on aspects of excitation-contraction coupling as a key feature of cardiomyocytes. The increased sarcomeric organization in hiPSC-CM cultured in 3D and with hormonal treatment in 2D supported the assumption of advanced maturation of the myocyte phenotype under these conditions (Figure 3A). We also evaluated how 3D culture influences the electrophysiological properties of hiPSC-CM, compared to cells cultured in 2D monolayers and to adult human ventricular cardiomyocytes. Figure 3B, panel a shows representative examples of single-cell action potentials. Both 2D- and 3D-cultured hiPSC-CM were smaller than adult ventricular cells, in cell perimeter and electrical capacitance, though the latter was higher in 3D-cultured hiPSC-CM (Figure 3B, panel b). In addition, compared to 2D-cultured cells, 3D-cultured hiPSC-CM had a more negative resting membrane potential and greater action potential amplitude and duration, with values closer to those in adult ventricular cardiomyocytes (Figure 3B, panel c). Considering we did not correct for junction potentials, these values for resting membrane potential are comparable to those previously reported (Horvath et al., 2018). Of note, the resting membrane potential measured with microelectrodes in hiPSC-CM within the connected 3D micro-tissue is more negative than that after isolation (Horvath et al., 2018). These features seen in 3D culture (lower resting membrane potential and longer action potential duration) are considered characteristic of a more adult and ventricular-like cardiomyocyte phenotype. Compared to adult cardiomyocytes, both 2D- and 3D-cultured hiPSC-CM had a higher density of L-type voltage-gated calcium channel current (ICaL) (Figure 3C), probably related to the absence of T-tubules and the consequent smaller membrane surface area. Adult cardiomyocytes typically have a fast inactivation phase of ICaL caused by Ca2+ release from the internal store, followed by a slow phase. ICaL in 2D-cultured hiPSC-CM typically has a single inactivation phase, while ICaL in 3D-cultured hiPSC-CM can have either type of inactivation time course. These data highlight that the link between calcium influx and sarcoplasmic reticulum release of calcium may improve in 3D but remains poorly developed.
FIGURE 2 | (A) From left to right: confocal images of immunostained 2D monolayer-cultured hiPSC-CM, 3D-cultured hiPSC-CM tissue, 2D-cultured hiPSC-CM seeded on Matrigel mattresses and treated with T3+dexamethasone, and cryosections of adult human heart tissue. Adult heart sections were counterstained with wheat germ agglutinin (WGA) for membrane, shown in magenta. hiPSC-CM were counterstained with cardiac troponin T (cTnT) or α-actinin, shown in magenta. Nuclei were labeled with DAPI in blue. The DAPC components are in green. Results were replicated in three independent hiPSC-CM differentiations. Scale bar, 10 µm. (B) (a) Immunoblot of dystrophin (Dys) and dystroglycan (DG). (b) Immunoblot of sarcoglycans (SG) and deglycosylation tests (+PNGase treatment) in adult cardiac homogenates and in 3D-cultured hiPSC-CM. Results were replicated in seven independent hiPSC-CM differentiations for α-SG, δ-SG, and γ-SG; 10 independent differentiations for β-SG and dystrophin; and 5 independent differentiations for dystroglycan (DG). (c) Immunoblot for sarcoglycans from 3D-cultured hiPSC-CM treated for 8 h with MG-132 (10 µM). Proteasome inhibition efficiency was confirmed by immunoblotting of lysates with an anti-ubiquitin antibody (right). Results are from four 3D constructs prepared from one hiPSC-CM differentiation. Straight lines separate adult vs. hiPSC-CM, and dotted lines separate control vs. treatment (PNGase or MG-132), from the same blot. The expected molecular weights for glycosylated and deglycosylated forms of sarcoglycans are indicated in the figure. The β-SG band at 50 kDa was considered non-specific as its molecular weight did not decrease with deglycosylation (a similar band was also seen in β-SG-null mouse heart; Supplementary Figure 2B). (C) mRNA detection by reverse transcription PCR of sarcoglycan expression in 2D- and 3D-cultured hiPSC-CM and in adult human heart lysates. Results were replicated in three independent hiPSC-CM differentiations. (D) RT-qPCR of sarcoglycans in 3D-cultured hiPSC-CM and adult human heart lysates. Values are expressed as 2^−ΔΔCt, normalized as a fold difference to adult.
Taken together, our findings show that despite evidence for a more advanced maturation, 3D-cultured hiPSC-CM lack the complete DAPC seen in adult cardiomyocytes: dystrophin and β-dystroglycan are present, but sarcoglycans and α-dystroglycan are not (Figure 1B). The lack of a full DAPC in hiPSC-CM, even after additional culture in 3D or with hormonal treatment, may reflect the incomplete maturity of the cells. Interestingly, during the early stages of human fetal development, the heart expresses sarcoglycans at the mRNA but not at the protein level, expressing only dystrophin and β-dystroglycan (Mora et al., 1996; Fougerousse et al., 1998), and this is in line with the immature or "fetal-like" phenotype of hiPSC-CM. The expression of dystrophin in hiPSC-CM, already present even in 2D monolayer-differentiated cells, supports the use of hiPSC-CM to study Duchenne muscular dystrophy. However, the lack of sarcoglycans undermines the use of hiPSC-CM as a model for sarcoglycanopathies and suggests caution in the interpretation of the dystrophin studies. The absence of α-dystroglycan, as recently observed (Kamdar et al., 2020), would prevent linking of the complex to laminin and the extracellular matrix, thereby potentially affecting mechanosensing signaling, which is important for cell adaptation and maturation. However, we cannot rule out that we failed to detect α-dystroglycan in our samples because of its release into the culture medium, as this extracellular protein could be poorly retained in an immature DAPC. It is conceivable that the incomplete DAPC is one of the hurdles to progression to an adult phenotype of hiPSC-CM. Recent protocols using co-cultures with fibroblasts and endothelial cells may further improve maturation but, because of their complexity, are not yet widely adopted (Giacomelli et al., 2020).
CONCLUSION
In conclusion, because of the unique insight into patient-specific genetic and functional background they provide, hiPSC-CM are a highly relevant model to study genetic cardiac diseases. However, our findings indicate that it is important to recognize the limitations of the hiPSC-CM model for the study of dystrophy-related cardiomyopathies. Further understanding of the mechanisms that govern the stabilization of sarcoglycans and α-dystroglycan within the DAPC can improve the use of hiPSC-CM as a model system and as a bridge to medical applications such as regenerative medicine and drug screening.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethical Committee of UZ Leuven (permit number S58824). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
"Biology",
"Medicine"
] |
Does wild-type Cu/Zn-superoxide dismutase have pathogenic roles in amyotrophic lateral sclerosis?
Amyotrophic lateral sclerosis (ALS) is characterized by adult-onset progressive degeneration of upper and lower motor neurons. Increasing numbers of genes are found to be associated with ALS; among those, the first identified gene, SOD1, encoding the Cu/Zn-superoxide dismutase protein (SOD1), has been regarded as the gold standard in research on the pathomechanism of ALS. Abnormal accumulation of misfolded SOD1 in affected spinal motor neurons has been established as a pathological hallmark of ALS caused by mutations in SOD1 (SOD1-ALS). Nonetheless, the involvement of wild-type SOD1 remains quite controversial in the pathology of ALS with no SOD1 mutations (non-SOD1 ALS), which accounts for more than 90% of total ALS cases. In vitro studies have revealed post-translationally controlled misfolding and aggregation of wild-type as well as of mutant SOD1 proteins; therefore, SOD1 proteins could be a therapeutic target not only in SOD1-ALS but also in the more prevalent non-SOD1 ALS. In order to search for evidence of misfolding and aggregation of wild-type SOD1 in vivo, we reviewed pathological studies of mouse models and patients and then summarized arguments for and against possible involvement of wild-type SOD1 in non-SOD1 ALS as well as in SOD1-ALS.
Background
Amyotrophic lateral sclerosis (ALS) is an adult-onset neurodegenerative disease classically characterized by loss of motor neurons in the central nervous system, including the motor cortex, brainstem, and spinal cord [1]. The loss of motor neurons leads to an inability to control voluntary muscles and ultimately results in respiratory failure. Only two drugs, riluzole and edaravone, are currently available, but their therapeutic effects are limited to the extent that survival can be extended by at most a few months [2]. Together with full elucidation of the pathomechanism, therefore, development of efficient cures for this devastating disease has long been demanded.
In 1993, mutations in the gene encoding Cu/Zn-superoxide dismutase (SOD1) were first reported as a cause of ALS [3], and since then, more than 30 genes responsible for ALS have been identified [1]. A genetic cause or predisposition still remains unclear in most ALS cases (~80%), and SOD1 mutations account for only approximately 3% of total ALS cases (called SOD1-ALS) [4]. Nonetheless, pathological examinations of SOD1-ALS cases provide us with important clues to understand disease mechanisms; namely, SOD1 proteins abnormally accumulate and form inclusions selectively in affected motor neurons [5]. Based upon such pathological observations, furthermore, a mechanism has been proposed in which SOD1 proteins assume an abnormal conformation (or misfold) because of an amino acid substitution corresponding to a pathogenic mutation, accumulate as oligomers/aggregates, and then exert toxicity to kill motor neurons [6]. Several researchers have attempted to extend the pathological roles of SOD1 misfolding in SOD1-ALS to more prevalent ALS cases in which no mutations in the SOD1 gene are confirmed (non-SOD1 ALS). In other words, wild-type SOD1 could cause ALS when it somehow misfolds. Nonetheless, experimental results on the involvement of wild-type SOD1 in non-SOD1 ALS are not consistent among different research groups, making this issue highly controversial. In order to discuss SOD1 proteins as a potential target for the development of therapeutics for ALS, we comprehensively reviewed reports on possible roles of wild-type SOD1 in the pathology of ALS.
Misfolded forms of SOD1 as a pathological hallmark of SOD1-ALS
SOD1 is a metalloenzyme that catalyzes the disproportionation of superoxide anion into hydrogen peroxide and molecular oxygen [7]. The enzymatic activity in most patients with SOD1 mutations was about half of that in healthy controls [8], which had initially been considered to trigger pathological changes in ALS. Indeed, homozygous and even heterozygous knockout of the Sod1 gene in mice produced a wide range of phenotypes relevant to ALS, such as slowly progressive motor deficits [8]. Recently, furthermore, human patients with a homozygous truncating variant c.335dupG (p.C112Wfs*11) in the SOD1 gene that leads to total absence of the enzymatic activity were reported, and the resulting phenotype was marked by progressive loss of motor abilities [9,10]. Heterozygous carriers of the c.335dupG variant had an approximately halved SOD1 activity compared to normal controls but appear not to develop symptoms of ALS [10]. Also, the Sod1-knockout mice did not develop ALS-like pathologies [8]; instead, overexpression of mutant SOD1 in mice reproduces ALS-like pathological changes with a significant increase in the SOD1 enzymatic activity [11]. While any reduction in the SOD1 enzymatic activity might modify the ALS pathomechanism, mutant SOD1 is considered to cause the disease not through a loss of the enzymatic activity but by a gain of new properties exerting toxicity to motor neurons.
As a pathological hallmark of SOD1-ALS, SOD1 proteins are known to abnormally accumulate in motor neurons (e.g. [5]), leading to the prevailing idea that pathogenic mutant SOD1 gains toxicity through its misfolding into non-native conformations. While the abnormal accumulation of SOD1 in motor neurons does not necessarily mean the misfolding of SOD1, biophysical examinations in vitro using recombinant SOD1 proteins have strongly supported conformational changes of SOD1 by amino acid substitutions due to the pathogenic mutations. SOD1 is functionally and conformationally matured through post-translational processes including copper and zinc binding and disulfide formation [12]. The bound copper ion acts as a catalytic center, whereas the bound zinc ion and the intramolecular disulfide bond play roles in stabilizing the native structure [13-15]. Pathogenic mutations decrease the affinity of SOD1 toward the metal ions and/or the stability of the disulfide bond [16,17], thereby disturbing the native conformation of SOD1. In other words, the post-translational maturation appears to be hampered in the mutant SOD1 proteins, resulting in an increased propensity of SOD1 to misfold into oligomers and aggregates. Indeed, in transgenic mice expressing human SOD1 with ALS-causing mutations (G37R and G93A), oral administration of the copper complex Cu(II)(atsm) facilitates the copper binding of mutant SOD1 in their spinal cords and improves the neurological phenotype and survival [18-20]. Also, further expression of CCS, a copper chaperone assisting the maturation of SOD1 in vivo [21,22], remarkably extends the survival of the transgenic mice administered with Cu(II)(atsm) [23]. In the absence of Cu(II)(atsm) administration, overexpression of CCS in the transgenic mice (G37R and G93A) is known to dramatically reduce the mean survival (from 242 days to 36 days), to which mitochondrial dysfunction appears to contribute owing to the perturbation of intracellular copper dynamics [24,25]. Increased amounts of CCS would supply most of the intracellular copper ions to overexpressed mutant SOD1 proteins; therefore, copper ions are not recruited to other copper-requiring enzymes such as cytochrome c oxidase in mitochondria. Indeed, overexpression of CCS did not influence the disease phenotypes of the transgenic mice expressing human SOD1 with the L126Z or murine SOD1 with the G86R mutation [24], which are considered to be unable to bind a copper ion. Also notably, marked acceleration of disease in the transgenic mice (G93A) with CCS overexpression was not observed when the mice had an additional mutation, H80G, in the SOD1 (G93A) transgene [26]. This is probably because zinc binding in G93A-mutant SOD1 was compromised by substitution of a zinc ligand (His80) to Gly. Given the important roles of zinc binding in conformational stabilization of SOD1 [14,27], H80G/G93A-mutant SOD1 was not able to receive a copper ion from the overexpressed CCS. Misfolding of SOD1 proteins in vivo as well as in vitro will hence be circumvented through post-translational maturation, which would eventually reduce the toxicity of mutant SOD1 proteins.
Pathological roles of wild-type human SOD1 in transgenic mouse models of SOD1-ALS
Given that wild-type SOD1 is misfolded in vitro when losing the bound metal ions and/or the conserved disulfide bond [28], SOD1 could exert the disease-causing toxicity even without the pathogenic amino acid substitutions. Actually, co-expression of wild-type human SOD1 in transgenic mice expressing ALS-linked mutant human SOD1 (G37R, G85R, G93A, and L126Z) is known to accelerate the disease onset, suggesting the toxicity of wild-type human SOD1 [29-35]. Also, mice did not develop ALS-like symptoms upon expression of A4V-mutant human SOD1, but co-expression of wild-type human SOD1 in the A4V-SOD1-expressing mice did trigger the progression of ALS-like disease [29]. Taking advantage of distinct electrophoretic mobilities of wild-type and mutant SOD1 proteins (G85R and L126Z), furthermore, wild-type human SOD1 was found to accumulate as detergent-insoluble aggregates with the mutant proteins in transgenic mice [29,31,33,34], although the interactions in the aggregates would not be simply a co-assembly of mutant and wild-type proteins [33]. A mechanism for the disease-accelerating effects of wild-type SOD1 remains unclear, but heteromeric interactions between wild-type and mutant SOD1 appear to aggravate the aggregation and toxicity in cultured cell models [36] and correlate with the disease severity [37]. It should also be noted that, in some studies, overexpression of wild-type human SOD1 did not affect the onset or duration of disease in mice expressing G85R-mutant human SOD1 [5] or G86R-mutant murine SOD1 [38]. Furthermore, disease-related phenotypes were not observed in transgenic mice expressing human SOD1 that has multiple mutations including those at copper and zinc binding sites (H46R/H48Q/H63G/H71R/H80R/H120G) and two free Cys residues (C6G/C111S) together with an ALS-linked mutation, H43R, and co-expression of wild-type human SOD1 did not cause the disease [35]. Such apparent discrepancies would, nonetheless, indicate that expression levels of SOD1 as well as interactions between wild-type and mutant SOD1 play key roles in exerting toxicity of wild-type human SOD1.
Even in the absence of ALS-causing mutant SOD1, overexpression of wild-type human SOD1 alone can exert motor neuron toxicity in mice. In hemizygous transgenic mice expressing wild-type human SOD1, lifespan was not affected, but neurodegenerative changes appeared in old age, including mitochondrial vacuolization, axonal degeneration, and a moderate loss of spinal motor neurons [32,39,40]. Upon decreasing glutathione levels, the mice developed overt motor symptoms, and their lifespan was decreased [41]. Also, spinal cord homogenates from the hemizygous wild-type human SOD1 transgenic mice were found to contain age-dependent, progressive formation of high-molecular-weight SOD1 aggregates [40,42], which would be caused by oxidation of a unique tryptophan in SOD1 upon endoplasmic reticulum stress [42]. Furthermore, homozygous wild-type human SOD1 transgenic mice significantly increased the expression levels of wild-type human SOD1 and thereby developed an ALS-like syndrome with formation of aggregated SOD1 in spinal cord and brain [43]. Even without any amino acid substitutions, therefore, wild-type human SOD1 could exert motor neuron toxicity in model animals under certain experimental conditions.
Possible involvement of wild-type SOD1 in pathological inclusions of SOD1-ALS patients
In contrast to the mouse models, pathological involvement of wild-type SOD1 is highly controversial in SOD1-ALS as well as non-SOD1 ALS patients. While most SOD1-ALS patients express both wild-type and mutant SOD1 proteins, it is difficult to biochemically and immunohistochemically distinguish between wild-type and mutant SOD1 in tissues. In that sense, the involvement of wild-type SOD1 was examined in a SOD1-ALS patient with the G127insTGGG (G127X) mutation; such a truncated G127X-mutant SOD1 can be discriminated from the wild-type protein because of the difference in size and also because of a non-native procession of the five amino acids following Gly127 in the variant [44,45]. Wild-type SOD1 was detected in a detergent-insoluble (0.1% Nonidet P-40-insoluble) fraction of the cervical ventral horn of the G127X patient, although no control patients were examined [45]. Also, G127X patients had aggregates in glial cell nuclei of spinal cords, some of which were stained with an antibody (Chi 131-153 ab) raised against a peptide sequence absent in G127X-mutant SOD1 (Asn131-Gln153) [46]. Those Chi 131-153 ab-positive aggregates were not stained with a G127X-mutant-specific antibody directed to the non-native, C-terminal sequence of the five amino acids, suggesting pathological aggregation of wild-type SOD1 that is not co-localized with G127X-mutant proteins. As discussed later, however, even in control patients, significant amounts of wild-type SOD1 were present in the 0.1% Nonidet P-40-insoluble fraction [47]. Also, the same research group has published a paper showing that G127X-mutant but not wild-type SOD1 in the ventral horn of lumbar spinal cord of a G127X patient was sedimented by density gradient ultracentrifugation [44], implying no involvement of the wild-type protein in the mutant SOD1 aggregates. Some of the pathogenic full-length as well as truncated mutant SOD1 proteins are known to exhibit distinct electrophoretic mobilities from that of the wild-type protein [48]; therefore, further biochemical analysis of tissue samples from SOD1-ALS patients will reveal any involvement of wild-type SOD1 in the abnormal accumulation of SOD1 proteins in spinal cord.
Controversies on pathological involvement of wild-type SOD1 in non-SOD1 ALS
Also in non-SOD1 ALS cases, which are much more prevalent than SOD1-ALS, there are sharp controversies on the pathological roles of wild-type SOD1. While few studies have examined the metal binding and/or disulfide status of wild-type SOD1 in ALS, the lack of such post-translational processes is expected to result in a decrease of its enzymatic activity. Indeed, SOD1 activity in brain homogenates of sporadic ALS cases was reported to be decreased [49], but another study found little difference in the activity in several parts of the central nervous system between sporadic ALS cases and non-ALS controls [50]. It should be noted that only the activity but not the amount of SOD1 was compared in those previous reports; therefore, it remains to be concluded whether wild-type SOD1 becomes misfolded and enzymatically inactive under the pathological conditions of ALS.
SOD1 is ubiquitously and highly (10-100 μM) expressed as a soluble protein [51-53] (Human Protein Atlas, available from http://www.proteinatlas.org) and is diffusely detected in most subcellular compartments, including the cytoplasm [54], mitochondria [55], nucleus [56], and endoplasmic reticulum [57]. Based upon many studies using mouse models as well as purified proteins (e.g. [14,58]), a consensus has been reached on the significantly reduced solubility of SOD1 caused by ALS-causing mutations, which leads to the formation of detergent-insoluble SOD1 aggregates. It should, however, be noted that only a few studies have confirmed the solubility changes of SOD1 proteins in spinal cord tissues of ALS patients (even in those of SOD1-ALS patients).
Bosco et al. prepared insoluble pellets from spinal cord homogenates in detergent-free lysis buffer, where comparable levels of SOD1 proteins were detected among a SOD1-ALS case (A4V mutation), four sporadic ALS cases, and four non-neurological controls [59]. No differences were observed in the amount of 0.1% Nonidet P-40-resistant SOD1 among two SOD1-ALS patients with the homozygous D90A mutation and two controls [47]. In contrast, when spinal cord homogenates were treated with 0.5% Nonidet P-40, significantly greater amounts of SOD1 were detected in the insoluble fraction of a SOD1-ALS case (A4V mutation) than in those of two familial ALS cases with unknown genetic causes, 12 sporadic ALS cases, and three controls [60]. Significantly greater amounts of SOD1 were also detected in the 1% Nonidet P-40-insoluble pellets from two sporadic ALS cases (a non-SOD1 ALS case and a case with a C9orf72 mutation) as well as two SOD1-ALS cases (A4V and G72C mutations) than in those of three Alzheimer's disease cases and four non-neurological controls [61]. Furthermore, a filter-trap assay using a 0.22 μm cellulose acetate membrane was used to detect SOD1 aggregates in spinal cord homogenates containing Nonidet P-40 and sodium dodecyl sulfate; wild-type SOD1 aggregates trapped on the membrane were significantly augmented in the lumbar spinal cord of sporadic ALS cases (4 positive/7 total) compared with control subjects (0 positive/6 total) [42]. It is thus possible that SOD1 proteins form detergent-insoluble aggregates in pathological conditions of ALS cases even without SOD1 mutations (Fig. 1, left), but more studies will be required for firm conclusions.
Given that SOD1 is highly expressed in most intracellular compartments, an immunohistochemical method using anti-SOD1 antibodies may be suitable for detecting pathological changes occurring in wild-type SOD1 only if the protein is densely accumulated in inclusion bodies. Indeed, a subset of Lewy body-like (hyaline) inclusions in the anterior horn cells of 10 out of 20 sporadic ALS patients (albeit with no test for SOD1 mutations) were immunoreactive to anti-SOD1 antibodies, while skein-like inclusions and Bunina bodies were not [62-64]. Also, SOD1-immunoreactive inclusions were discerned against background staining in spinal cord motor neurons of a familial ALS patient without SOD1 mutation [50]. In another study, however, no SOD1 immunoreactivity was confirmed in the hyaline inclusions of all sporadic ALS cases examined (17 cases, again with no mention of SOD1 mutations) [65]. While such a sharp discrepancy among those studies remains to be resolved, different SOD1 antibodies were used for immunohistochemical analysis: a rabbit or sheep polyclonal antibody was raised against a holo form of human SOD1 in the former two studies [66], and a rabbit polyclonal antibody was raised against a SOD1 peptide corresponding to Asp124 to Lys136 in the latter [67]. These ideas are challenged by a report showing no SOD1-positive inclusions in non-SOD1 ALS cases with a rabbit polyclonal anti-SOD1 antibody or a mouse monoclonal anti-SOD1 antibody [68]. Nonetheless, misfolding of SOD1 is well expected to affect epitope availability; therefore, the choice of antibodies is still a key factor in detecting misfolded forms of SOD1 proteins in vivo. Indeed, increasing numbers of studies have examined non-SOD1 ALS cases with conformation-specific antibodies that can discriminate misfolded SOD1 from the natively folded protein in vitro (called misfolded-SOD1 antibodies hereafter).
Immunohistochemical examination on non-SOD1 ALS cases with misfolded-SOD1 antibodies
As summarized in a recent comprehensive paper [69] as well as in an excellent review [70], a number of misfolded-SOD1 antibodies have been used for examination of sporadic ALS cases, and the results are sharply divided. In this review, we performed an extensive search of previous reports describing immunohistochemical and/or immunofluorescence examinations of human spinal cord tissues with misfolded-SOD1 antibodies, which is summarized in Table 1. As colored cyan in Table 1, some studies have claimed positive immunostaining of spinal cords (motor neurons and glial cells) selectively in sporadic and familial ALS with misfolded-SOD1 antibodies [46,50,59,61,69,73-75]. As reviewed later in detail, a misfolded-SOD1 antibody (α-miSOD1) designed based on an antibody from healthy elderly subjects was also found to stain spinal cord of sporadic as well as familial ALS patients but not of non-neurological controls [71]. In other studies (colored orange in Table 1), however, no difference in the staining pattern was observed between ALS and non-ALS controls [72,74,76-79]. Some of the misfolded-SOD1 antibodies in Table 1 (in particular, the ones reported from one research group: SEDI, USOD, AJ10, B8H10, 4A1, and A5E5) were found to immunostain spinal motor neurons in SOD1-ALS but not in non-SOD1 ALS, which might simply mean that misfolded conformations of wild-type SOD1 in non-SOD1 ALS are not the same as those of mutant SOD1 in SOD1-ALS. Immunostaining results using the mouse monoclonal C4F6, 3H1, and 10E11C11 antibodies and a rabbit polyclonal Ra 131-153 antibody have been reported from more than two research groups but still did not reach a consensus on the detection of misfolded SOD1 in non-SOD1 ALS cases (Table 1).
Fig. 1 Schematic representation of possible changes of wild-type SOD1 in ALS. (Left) Natively folded SOD1 binds copper and zinc ions and forms an intramolecular disulfide bond. Pathological conditions might disrupt intracellular metal homeostasis and augment oxidative stress/ER stress, facilitating the formation of misfolded SOD1 even without any disease-causing mutations. Disulfide-crosslinked oligomers and insoluble aggregates of wild-type SOD1 have been detected in spinal cords of sporadic ALS. (Right) SOD1 is known to be constitutively secreted into extracellular fluid such as ISF and CSF, and recently, toxic wild-type SOD1 in abnormally misfolded conformations was detected in CSF of sporadic ALS. Misfolded SOD1 appears to be cleared by the humoral immune response and/or the glymphatic/intramural peri-arterial drainage systems, and their failure might contribute to the disease.
Table 1 footnotes (excerpt): There is no mention of the non-neurological controls in the paper. (f) In this review, the cases with cytoplasmic granular staining, rare round deposits, abundant round deposits, and globular inclusions are counted as misfolded-SOD1 positive, while the cases with no signal, sparse diffuse staining, and abundant diffuse staining are counted as misfolded-SOD1 negative. (g) Not available (no mention in the paper). (h) The control cases are described as "non-ALS controls". (i) In the paper, it was described that "no or only weak immunoreactivity was observed in motor neurons of most of the 41 spinal cord tissue samples from NNC patients".
Much effort has been directed toward resolving those discrepancies, which could be caused by differences in experimental procedures, including tissue fixation, antigen retrieval, and working concentrations of primary antibodies [69]. Indeed, antigen retrieval treatments in a citrate buffer with heat (boiling, steaming, microwave) are considered to denature SOD1 proteins, which could efficiently expose the epitope for misfolded-SOD1 antibodies [72], but this appears not to explain the discrepancy in the immunohistochemical detection of misfolded SOD1 (Table 1).
In immunohistochemical/immunofluorescence analysis of tissues, the experimental procedures and conditions are often not described in detail; in particular, the working concentration of a primary antibody is usually indicated as a dilution factor rather than as a concentration in many studies. This prevents us from comparing the previously reported staining results in detail; based upon Table 1, however, a trend can be found that a strong dilution of the misfolded-SOD1 antibodies fails to detect non-SOD1 ALS-specific immunostaining. The antibody C4F6 is commercially available from MediMabs, and its concentration was found to be < 0.05 mg/mL in our hands. Ayers et al. [72] and Da Cruz et al. [77] have reported the absence of C4F6-positive staining in sporadic ALS cases using the C4F6 antibody from MediMabs at 500-fold and 200-fold dilution (Table 1), which would correspond to working concentrations of < 0.1 and < 0.25 μg/mL, respectively. Instead, Bosco et al. successfully detected C4F6-positive staining with 1.0 μg/mL C4F6 in some sporadic ALS cases but not in non-neurological controls (Table 1) [59]. Also, in the three papers by Grad et al. [61], Pokrishevsky et al. [75], and Da Cruz et al. [77], we have supposed that they used the antibodies 3H1 and 10E11C11 originating from the same source for immunohistochemical examination of misfolded SOD1 (we further assumed the same concentration of the original antibody solution in their studies). Successful detection of misfolded SOD1 in ALS tissues with a lower dilution rate of the antibodies was reported by Grad et al. [46,50,69,73]. The antibody was then distributed to another research group and used for immunohistochemical examination; however, Ra 131-153-positive immunostaining was observed not only in ALS but also in non-neurological control cases [77], which might be due to an antigen retrieval step using a Tris-EDTA-based solution [69]. Collectively, further investigations with more quantitative, detailed descriptions of the experimental procedures (the working concentration of antibodies, in particular) will definitely be required for evaluating immunohistochemical evidence of misfolded SOD1 proteins in non-SOD1 ALS cases.
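The dilution arithmetic used above is made explicit in the short sketch below (Python, our own illustration); the 0.05 mg/mL stock value is the upper bound quoted in the text, not a manufacturer specification.

```python
# Worked version of the dilution arithmetic above: converting an antibody stock
# concentration (the < 0.05 mg/mL measured for C4F6 in the authors' hands) and a
# dilution factor into a working concentration in ug/mL.

def working_concentration_ug_per_ml(stock_mg_per_ml, dilution_factor):
    return stock_mg_per_ml * 1000.0 / dilution_factor  # mg/mL -> ug/mL, then dilute

stock = 0.05  # mg/mL, upper bound quoted in the text for the MediMabs C4F6 antibody
for dilution in (500, 200):
    c = working_concentration_ug_per_ml(stock, dilution)
    print(f"1:{dilution} dilution -> < {c:.2f} ug/mL")
# Compare with the 1.0 ug/mL working concentration used by Bosco et al. [59].
```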
Immunoprecipitation from spinal cords of non-SOD1 ALS with misfolded-SOD1 antibodies
Immunohistochemical examinations require several harsh treatments of tissue samples (deparaffinization, antigen retrieval, etc.) that can significantly affect protein conformations; therefore, the presence or absence of misfolded wild-type SOD1 proteins in tissues may not be accurately evaluated. Instead, more accurate evidence of misfolded wild-type SOD1 in ALS could be provided by immunoprecipitation (IP) from unfixed spinal cord homogenates with misfolded-SOD1 antibodies, which is summarized in Table 2. Again, experimental details required for testing reproducibility were not fully described in most of the papers, and the results are sharply divided. Mutant SOD1 in all SOD1-ALS cases examined was successfully immunoprecipitated with any of the misfolded-SOD1 antibodies listed in Table 2, and wild-type SOD1 in sporadic ALS cases without SOD1 mutations was also immunoprecipitated in the studies by Grad et al. [61] and Paré et al. [69]. In contrast, the other studies, by Liu et al. [78], Kerman et al. [76], and Da Cruz et al. [77], concluded that no wild-type SOD1 proteins are immunoprecipitated from spinal cords of sporadic ALS cases with misfolded-SOD1 antibodies. Nonetheless, we note that the interpretation of the immunoprecipitation results appears somewhat different among those studies; namely, no SOD1 proteins were observed in immunoprecipitates from sporadic ALS with the SEDI (Liu et al. [78]) and USOD (Kerman et al. [76]) antibodies, while the misfolded-SOD1 antibodies (3H1, 4A1, A5E5) used in the Da Cruz et al. paper did immunoprecipitate SOD1 proteins in sporadic ALS cases but also in non-neurological controls [77]. Using the 3H1 antibody, furthermore, Grad et al. immunoprecipitated wild-type SOD1 from spinal cords of sporadic ALS cases but not from those of non-neurological controls [61]. Again, it is highly possible that differences in experimental procedures influence the detection of misfolded wild-type SOD1 in sporadic ALS tissues, and many more studies with detailed descriptions of IP methods are definitely required.
It is also important to note that wild-type SOD1 immunopurified with anti-SOD1 antibody from spinal cord homogenates of sporadic ALS inhibited anterograde but not retrograde fast axonal transport in the assay using isolated squid axoplasm through a mechanism possibly involving specific activation of p38 MAPK [59]. Such inhibition was no longer observed when the immunopurified SOD1 proteins were first mixed with the misfolded-SOD1 antibody C4F6 and then perfused into squid axoplasm. These results have thus supported toxic and pathogenic roles of misfolded wild-type SOD1 in sporadic ALS (Fig. 1, left).
Misfolded forms of SOD1 in cerebrospinal fluid of ALS
As described, SOD1 is localized mostly in the cytoplasm (Human Protein Atlas, see above), and the intraneuronal inclusions containing SOD1 are the pathological hallmark of SOD1-ALS [5]. Many researchers have thus focused on the toxic/conformational properties of SOD1 within cells, even though SOD1 proteins were reported to be present also in the extracellular space through their active and constitutive secretion from cells (Fig. 1, upper) [80,81]. Recently, misfolded/aggregated proteins have been considered to propagate between cells, which would contribute to the pathological progression in many neurodegenerative diseases including SOD1-ALS [82-86]. For example, premature motor neuron disease in transgenic mice expressing human SOD1 with the G85R mutation is triggered by inoculation of detergent-resistant fractions of SOD1 from a SOD1-ALS patient (G127Gfs*7) into the lumbar spinal cord [83]. Also, much attention has been paid to the glymphatic system [87] and the intramural peri-arterial drainage pathway [88], by which misfolded/aggregated proteins in the interstitial fluid (ISF) of the brain and spinal cord could be drained into cerebrospinal fluid (CSF) and then cleared [89]. Regarding SOD1-ALS, indeed, the disease duration of transgenic mice expressing ALS-linked mutant SOD1 was shortened by deletion of aquaporin-4 [90], a water channel playing central roles in extracellular clearance through the glymphatic system [87]. Furthermore, pathologies and amyloid-β accumulation in transgenic mouse models of Alzheimer's disease were aggravated by disrupting meningeal lymphatic vessels, which are proposed as a drain of macromolecules from ISF and CSF [91]. Therefore, SOD1 proteins that are secreted from neurons and glia and then possibly drained into CSF will be important in understanding the pathology of ALS.
Indeed, SOD1 is well known as a constituent of CSF, and amounts of SOD1 in CSF tended to increase as a function of age, albeit with a low correlation coefficient (r² = 0.1-0.2) [92-94]. In most studies, total SOD1 levels in CSF appear not to differ significantly between ALS and neurological/non-neurological controls [92-96]. Alternatively, absolute levels of SOD1 in CSF were reported to show substantial variability among individuals but little variability in each individual over time [97]. In the same study [97], ALS cases and neurological controls were characterized by slightly higher levels of SOD1 in CSF compared to those of healthy controls; however, the amount of SOD1 in CSF did not correlate with the severity of ALS. In CSF, significant fractions of SOD1 were also reported to be N-terminally truncated, but the amount of such truncated proteins did not differ between ALS and controls, suggesting little pathological role of the truncated SOD1 in ALS [93,95]. In electrophoretic analysis of CSF, furthermore, neither SOD1-positive smears nor high-molecular-weight ladders were observed, indicating that detergent-resistant oligomers/aggregates were not evident in CSF of ALS [93,95]. Based upon those reports, SOD1 in CSF appears to have no pathological role in ALS. Nonetheless, it is quite notable that, in rats overexpressing wild-type human SOD1, the half-life of the SOD1 protein was significantly longer in CSF (14.9 days) as well as in spinal cord (15.9 days) than in liver and kidney (1.7 and 3.4 days, respectively) [98]. Also in CSF of human subjects, the turnover rate of SOD1 was found to be significantly slower (half-life: 25.0 ± 7.4 days) than that of total proteins (half-life: 3.6 ± 1.0 days) [98]. Accordingly, the slow turnover rate of SOD1 in CSF as well as in spinal cord would allow sufficient time for SOD1 to become misfolded and to contribute to the development of pathological changes.
To test whether SOD1 becomes misfolded in the CSF of ALS, CSF samples from 96 ALS cases (57 sporadic ALS, 22 SOD1-ALS, 17 non-SOD1 familial ALS) and 38 neurological controls were examined with sandwich ELISA using misfolded-SOD1 antibodies (Ra 24-39, Ra 57-72, and Ra 111-127) [94]. Signals indicating the presence of misfolded SOD1 were found in all samples, but no significant differences were confirmed between ALS with and without SOD1 mutations, nor between the combined ALS cases and the controls [94]. In contrast, by using other types of misfolded-SOD1 antibodies, we recently showed that wild-type SOD1 proteins were misfolded in CSF of sporadic ALS cases as well as of a SOD1-ALS case [95]. More precisely, sandwich ELISA was performed on CSF from 21 ALS cases (20 sporadic ALS, 1 SOD1-ALS) and 40 controls by using misfolded-SOD1 antibodies (C4F6, UβB, EDI, apoSOD, 24-39 and SOD1int). Among those, C4F6, UβB, EDI, and apoSOD were found to give significantly higher signals in CSF of ALS cases compared to those of controls; in contrast, no differences were observed with 24-39 and SOD1int. It was also surprising to us that large fractions of SOD1 in CSF of sporadic ALS cases were immunoprecipitated with the C4F6 antibody [95]. CSF collected from ALS patients has been known to exert toxicity toward motor neuron-like NSC-34 cells [99], and we revealed that this toxicity was alleviated by removing the misfolded SOD1 from CSF by immunoprecipitation with the C4F6 antibody [95]. It is also notable that misfolded SOD1 immunoreactive to C4F6 and UβB was observed, albeit in smaller amounts, in CSF of a subset of patients with Parkinson's disease (PD) and progressive supranuclear palsy (PSP). Therefore, not all types of misfolded-SOD1 antibodies could detect pathological forms of wild-type SOD1 in CSF, but our study suggests that wild-type SOD1 in CSF adopts a misfolded, toxic conformation(s) in pathological conditions of ALS and also in a subset of PD and PSP. In that sense, it is important to note that levels of SOD1 in CSF of SOD1-ALS patients were reduced by oral medication with pyrimethamine [100].
Misfolding of wild-type SOD1 in the oxidative environment of spinal cord and CSF
Another important issue to be resolved is where SOD1 is misfolded; in other words, it remains to be tested whether SOD1 is misfolded in CSF, or whether misfolded SOD1 in affected spinal cord (or some other tissues) is drained into CSF (Fig. 1). As of now, we do not have an answer to this question; nonetheless, one of the notable features observed commonly in spinal cord and CSF of ALS patients is significantly elevated levels of oxidative markers, which has been summarized in an excellent review [101]. It is thus plausible that the oxidative environment in the spinal cord/CSF of ALS is important to understand any pathological changes occurring in SOD1.
In accordance with this, we have detected abnormal SOD1 oligomers cross-linked via intermolecular disulfide bonds in spinal cord of SOD1-ALS cases as well as of transgenic mice expressing human SOD1 with ALS mutations (G37R, G93A, and L126Z) [31,102]. While disulfide-crosslinked SOD1 oligomers were not evident in CSF of sporadic ALS cases or of a SOD1-ALS case [95], reductant (DTT)-sensitive aggregates of wild-type SOD1 were detected in the affected spinal cord of sporadic ALS cases [42]. Furthermore, Xu et al. reported oxidation of Cys111 in SOD1 to a sulfenic acid (−SOH) in CSF of a subset of sporadic ALS cases [103], and we also found that Cys111 was oxidized to a sulfonic acid (−SO3H) in CSF of a subset of ALS, PD, PSP, and AD cases [95]. In our in vitro experiments [104], after sulfenylation of Cys111 in metal-bound SOD1 with H2O2, dissociation of the bound metal ions from the protein allowed another free Cys residue (Cys6) to attack the sulfenylated Cys111. SOD1 has a canonical intramolecular disulfide bond between Cys57 and Cys146; therefore, oxidation with H2O2 led to the formation of an abnormal SOD1 species (SOD1-2xS-S) with two intramolecular disulfide bonds (Cys6-Cys111 and Cys57-Cys146), and SOD1-2xS-S was prone to aggregation and toxic to motor neuron-like NSC-34 cells [104].
As summarized above, Cys is considered the amino acid residue most susceptible to oxidation and would hence be a key residue for oxidative modification under pathological conditions. Notably, several other oxidized forms of SOD1 have also been reported in cell lines, transgenic mice, and purified SOD1 proteins. For example, SOD1 proteins with oxidized carbonyl groups were detected in lymphoblasts derived from sporadic ALS with bulbar onset [105]. SOD1 oxidized at tryptophan 32 (Trp32) was found to accumulate in microsomal fractions purified from spinal cord of transgenic mice expressing wild-type human SOD1 [42] and was also detected in human blood and in blood from transgenic mice expressing wild-type or ALS-linked mutant human SOD1 [106]. Furthermore, several His residues as well as Trp32 are also susceptible to oxidation, which has been proposed to trigger the aggregation of SOD1 in vitro [107–110]. It remains to be tested, however, whether His and/or Trp oxidation occurs on SOD1 in ALS patients.
Misfolded SOD1 in extracellular fluid as a potential immunotherapeutic target
As reviewed above, the formation of misfolded and plausibly toxic SOD1 species in extracellular fluid is expected as a pathological change in ALS. This could in turn open a way to alleviate the disease by removing such extracellular SOD1 proteins through the humoral immune response. Indeed, the survival of transgenic mice expressing ALS-linked mutant SOD1 was extended by vaccination with full-length misfolded SOD1 proteins [111,112] and with peptides corresponding to regions exposed only in misfolded SOD1 [113,114]. Passive immunization with several misfolded-SOD1 antibodies was also reported to benefit SOD1-ALS model mice [112,115–117], with the exception of one study [118]. Furthermore, sera from sporadic ALS patients were found to contain IgM antibodies reacting with misfolded SOD1 (recombinant SOD1 oxidized with 10 mM H2O2), and sporadic ALS cases with higher levels of these IgM antibodies (n = 153) exhibited a longer survival of 6.4 years compared with subjects lacking such antibodies (n = 127) [119].
Notably, Maier et al. screened human memory B cell repertoires from a large cohort of healthy elderly subjects and successfully generated a monoclonal antibody (α-miSOD1) that reacts selectively with misfolded/oxidized SOD1 but not with native SOD1 [71]. Based on the presence of B cell memory against misfolded SOD1 in a majority of those healthy elderly subjects, Maier et al. suggested that misfolding of SOD1 and the subsequent humoral immune response are frequent events in the elderly [71]. This antibody, α-miSOD1, stained motor neurons in spinal cord samples from ALS, including sporadic as well as familial cases with and without SOD1 mutations, but not from non-neurological controls (Table 1) [71]. Furthermore, intracerebroventricular infusion and intraperitoneal injection of the α-miSOD1 antibody into transgenic mice expressing ALS-linked mutant human SOD1 (G37R and G93A) delayed the onset of motor symptoms and extended survival [71]. Therefore, clearance of misfolded SOD1 by harnessing the immune system could be a potential treatment for patients with sporadic as well as familial ALS; nonetheless, it should also be noted that, in sera of sporadic ALS subjects, higher levels of IgG antibodies reacting with normal wild-type SOD1 were associated with a shorter survival of 4.1 years [119]. For successful immunotherapy to treat ALS, it will be critical to develop antibodies that specifically recognize toxic, misfolded SOD1 and/or to design antigens that efficiently elicit such antibodies.
Conclusions
While misfolding of ALS-linked mutant SOD1 is an established pathological change in SOD1-ALS, the role of wild-type SOD1 in the more prevalent non-SOD1 ALS has long been debated. Even in SOD1-ALS, the involvement of wild-type SOD1 in the pathology remains obscure. As reviewed above, our extensive literature search found that a number of studies support the presence of misfolded wild-type SOD1 in spinal cord and CSF of non-SOD1 ALS cases (Fig. 1). Nonetheless, not all studies detected misfolded wild-type SOD1 proteins in non-SOD1 ALS, possibly reflecting the importance of experimental conditions in their immunohistochemical and immunochemical detection. Also, some misfolded-SOD1 antibodies gave positive signals in SOD1-ALS but not in non-SOD1 ALS, which may indicate distinct conformations of misfolded SOD1 in SOD1-ALS versus non-SOD1 ALS. As we recently reported [95], CSF of non-SOD1 ALS contains misfolded forms of wild-type SOD1. This misfolded SOD1 in CSF was toxic to cultured cells, but it remains to be tested whether it is a pathogenic species causing degeneration of motor neurons. Quite notably, misfolding of SOD1 may occur in the healthy elderly, and the humoral immune response to misfolded SOD1 may be key to preventing ALS. Consistent with the beneficial results of immunization-based treatment in transgenic mouse models, immunological modulation of misfolded SOD1 in extracellular fluids such as CSF would therefore be a promising strategy to delay onset and/or relieve symptoms of ALS. | 8,687 | 2020-08-19T00:00:00.000 | [
"Medicine",
"Biology"
] |
cGAS–STING pathway in ischemia-reperfusion injury: a potential target to improve transplantation outcomes
Transplantation is an important life-saving therapeutic choice for patients with organ or tissue failure once all other treatment options are exhausted. However, most allografts become damaged over an extended period, and post-transplantation survival is limited. Ischemia-reperfusion injury (IRI) tends to be associated with a poor prognosis; the resultant severe primary graft dysfunction is the main cause of transplant failure. Targeting the cGAS–STING pathway has recently been shown to be an effective approach for improving transplantation outcomes: when the cGAS–STING pathway is activated or inhibited, IRI can be alleviated by regulating the inflammatory response and programmed cell death. Thus, continuing efforts to develop selective agonists and antagonists may bring great hope to post-transplant patients. In this mini-review, we review the role of the cGAS–STING pathway in transplantation and summarize the crosstalk between this pathway, the inflammatory response, and programmed cell death during IRI, aiming to provide novel insights into the development of therapies to improve patient outcomes after transplantation.
Introduction
The innate immune response mediated by the cyclic guanosine monophosphate–adenosine monophosphate (cGAMP) synthase–stimulator of interferon (IFN) genes (cGAS-STING) pathway has long been the front-line defense against pathogens such as bacteria, parasites, DNA viruses, and retroviruses (1, 2). However, owing to its sequence-independent recognition of double-stranded DNA (dsDNA), research on the cGAS-STING pathway has indicated that its cellular function extends beyond resisting invasion by foreign pathogens, and unnecessary activation by accidental sensing of self-derived DNA or by mutations can lead to autoinflammatory diseases (3). For example, STING-associated vasculopathy with onset in infancy (SAVI) commonly develops in patients with gain-of-function mutations in TMEM173 (4, 5). Aicardi-Goutières syndrome mainly presents with aberrant generation of type I IFN, and accumulation of DNA damage may be an important driver of STING-related inflammatory responses (6, 7).
During organ procurement, preservation, and transplantation, ischemia-reperfusion injury (IRI) exacerbates damage to donor graft tissues when blood flow is restored after a period of ischemia. A deficient arterial blood supply invariably leads to a redox imbalance and creates a hypoxic environment in donor graft tissues, and surgical blood reperfusion can then cause severe oxidative damage and an inflammatory response upon reoxygenation. This series of events aggravates allograft injury and may lead to primary graft dysfunction, which is associated with high mortality and morbidity (8, 9). The cellular and molecular events that occur during IRI are complex and involve innate immune system activation and programmed cell death (PCD) (10, 11); however, their interplay is still not clearly understood. In the context of limited treatment options, there is an urgent need to develop less toxic and more specific immunosuppressants to better control graft rejection and avoid toxicity-related mortality (12).
In the present review, we offer an overview of the cGAS-STING pathway and highlight its role in patients who have undergone transplantation. We then summarize the pharmacological basis for targeting the cGAS-STING pathway to treat IRI and explore potential treatment approaches for IRI following transplantation.
Overview of the cGAS-STING pathway
Cyclic GMP-AMP synthase (cGAS) is a cytosolic DNA sensor that stimulates IFN production by binding abnormal DNA within the cytoplasm and activating STING; this activation then triggers host innate immunity in response to "danger signals" (13, 14). An overview of the cGAS-STING signaling pathway is illustrated in Figure 1. Binding of abnormally accumulated dsDNA to cGAS in the cytoplasm induces a phase transition during which cGAS is activated; recognition is independent of any specific DNA sequence (15–18). Owing to dsDNA-induced oligomerization of cGAS, a dimerized cGAS-dsDNA complex is formed (19–21). Activated cGAS undergoes conformational changes in its catalytic pocket that allow cyclization of GTP and ATP as substrates for conversion into the second messenger cGAMP (22, 23).
Important sources of cytoplasmic DNA include damage-associated DNA released by nuclear and mitochondrial leakage, as well as exogenous pathogen-associated DNA resulting from microbial infection (24, 25). Compared with bacterial cyclic dinucleotides, the product of dsDNA-activated cGAS contains a 2'-5' phosphodiester linkage between GMP and AMP that effectively activates human STING (26, 27). Furthermore, higher DNA-binding valences and longer, packed DNA structures facilitate cGAMP production and innate immune signaling (15). Via gap junctions, receptor-based transport, and membrane fusion, cGAMP can be transferred from the original cells to bystander cells as additional routes to induce downstream signaling cascades (28, 29), as illustrated in Figure 2. In canonical cGAS-STING signaling, cGAMP is produced as an agonist of STING, and its binding to STING located in the endoplasmic reticulum (ER) induces a 180° rotation of the ligand-binding domain relative to the transmembrane domain, which unlocks the right-handed cross-over connections. During this rearrangement of the STING dimer, the ligand-binding pocket closes. This important conformational transition enables the oligomerization and release of STING from its anchoring proteins, and STING is further translocated by incorporation into cytoplasmic coat protein II complex vesicles (30–32). Cytoplasmic coat protein I-mediated retrograde membrane trafficking is also important for STING activation (33, 34). With the assistance of ADP-ribosylation factor GTPases and cytoplasmic coat protein II, the higher-order STING complex is transferred from the ER through the ER-Golgi intermediate compartment to the Golgi apparatus. There, STING and the transcription factor IFN regulatory factor 3 (IRF3) are phosphorylated by recruited TANK-binding kinase 1 (TBK1), and nuclear factor κB is activated simultaneously (35–38). Phosphorylated IRF3 further oligomerizes and migrates into the nucleus together with nuclear factor κB, and both synergistically initiate the expression of type I IFN and inflammatory cytokines, contributing to the innate immune response (39, 40).
cGAS-STING pathway and transplantation outcomes
Few studies have tried to elucidate the role of the cGAS-STING pathway in transplantation models. Some preclinical and clinical studies have demonstrated that STING is an effective therapeutic target for graft-versus-host disease following allogeneic hematopoietic stem cell transplantation (41). However, research on solid organ transplantation is still in its infancy, and more attention should be paid to this field.
The traditional Chinese herbal medicine ingredient ginsenoside Rb3 can alleviate the oxidative stress caused by ischemia-reperfusion damage (42, 43). Li et al. used ginsenoside Rb3 to suppress adhesion molecule expression in endothelial cells (ECs) and improve the microcirculation of murine transplanted skin flaps, and confirmed that the protection against IRI resulted from inhibition of STING-IRF3 signaling (44). In addition, Yang et al. demonstrated that expression of tumor necrosis factor-α-induced protein-8 like 2 (TIPE2), a negative immunoregulator of immune homeostasis, correlated positively with apoptosis in the graft, which might activate ferroptosis-mediated transplant rejection. TIPE2−/− mice that had undergone heart transplantation showed insufficient IFN-γ production through the TBK1 signaling axis and increased expression of glutathione peroxidase 4 compared with wild-type mice. Mechanistically, TIPE2 deficiency may inhibit IFN-γ generation in T cells by suppressing the TBK1 signaling axis, prevent lipid peroxidation, and relieve ferroptotic cell death in the injured allograft (45, 46). Mesenchymal stromal cell therapy combined with low-dose tacrolimus is a feasible and safe therapeutic regimen (47, 48). Chen et al. revealed that combination treatment with low-dose tacrolimus (FK506) and mesenchymal stem cells benefits graft survival, possibly by weakening graft inflammation through suppression of IFN-γ production and TBK1/IRF3 phosphorylation (49). The generation of STING-deficient mice through gene deletion will further improve our understanding of the cGAS-STING pathway and its importance in transplantation, and clinical trials are urgently needed.
SAVI is an autoinflammatory disease arising from gain-of-function mutations in the STING1 gene, leading to overproduction of type I IFN (5). Three patients diagnosed with SAVI who underwent solid organ transplantation have been reported. The first was a 1-year-old infant with SAVI who underwent liver transplantation and immunosuppressive therapy but developed severe multiple biliary cysts and cholangitis in the transplanted liver at the age of 3; intensive tacrolimus, hydroxychloroquine, prednisolone, and mycophenolate mofetil were administered, but the patient experienced fatal gastrointestinal bleeding 1 year later (50). The second was a 34-year-old woman with SAVI who underwent double-lung transplantation but experienced acute primary graft dysfunction with acute liver and systemic vasculature complications; she ultimately died of multiple organ failure (51). The third was a 17-year-old girl with SAVI who underwent lung transplantation and developed systemic inflammatory symptoms within 4 months; she was treated with three immunosuppressants (mycophenolate mofetil, tacrolimus, and prednisolone), but her symptoms relapsed during prednisolone dose reduction (52). Although these are individual case reports, they indicate that abnormal activation of STING in transplant recipients probably leads to extremely poor outcomes, even when immunosuppressive therapy is administered. Although the relationship between SAVI and the cGAS-STING pathway is not yet fully defined, inhibiting STING may be beneficial for improving the outcome of transplant recipients. Research in this area is still lacking and is an important direction for future studies.
4 Crosstalk between cGAS-STING pathway and IRI
IRI: an important player in allograft injury
IRI is a major issue in transplantation, mostly because there is still no effective treatment. It has recently been demonstrated that targeting the cGAS-STING pathway may be a feasible approach to improve transplantation outcomes by alleviating IRI. In general, the inflammatory response and PCD following ischemia and reperfusion play vital roles in triggering transplant rejection (53, 54). Transplantation and non-transplantation models of IRI share these key pathogenic mechanisms, which increase morbidity and mortality, and the cGAS-STING pathway participates in their regulation (2, 55). This suggests a possible relationship between post-transplant IRI and the cGAS-STING pathway. We have therefore summarized the regulatory mechanisms of the cGAS-STING pathway in IRI in Figure 3 and Table 1, providing new insights for the development of new treatment strategies. We also highlight potential therapeutic agents for treating IRI that may help improve transplant prognosis.
Inflammatory response in IRI
Although this remains to be fully explored, it would not be surprising if the inflammatory response of post-transplant IRI were triggered by the cGAS-STING pathway. Damage-associated molecular patterns (DAMPs) generated by injury, particularly self-DNA, are recognized by cGAS, which triggers immune-mediated inflammation. In general, targeting the cGAS-STING pathway may help reduce the inflammatory response in transplantation models.
Increased cytosolic DNA, especially mitochondrial DNA (mtDNA), can be recognized by the pattern recognition receptor cGAS and trigger STING activation-induced inflammation. Elevated mtDNA accumulation caused by IRI is related to delayed graft function (24, 79, 80). Phosphoglycerate mutase 5-mediated Bax dephosphorylation triggers mtDNA release, activating the cGAS-STING pathway and causing acute kidney injury following IRI (56). Furthermore, kidney IRI increases receptor-interacting protein 3 levels to facilitate mtDNA damage and leakage, and then activates the cGAS-STING-p65 pathway by increasing cytosolic mtDNA, thereby promoting transcription of pro-inflammatory factors (57). In contrast, knockout of the pseudokinase mixed lineage kinase domain-like (MLKL) significantly enhances PTEN-induced kinase 1-mediated mitophagy, alleviating oxidative stress in hepatocytes and thereby inhibiting macrophage cGAS-STING activation and liver IRI (58). These findings make mtDNA an important target for inhibiting cGAS-STING pathway activation. In addition, a flap endonuclease 1 inhibitor has been shown to inhibit mtDNA fragment release and cGAS-STING pathway activation (81), which should be further verified in transplantation models. Notably, IRI, and apoptosis in particular, is often associated with mitochondrial damage and mtDNA release, shaping a positive feedback circuit (82, 83). Thus, combination therapy inhibiting cGAS-STING signaling together with mtDNA fragment release may have a synergistic effect. Some preclinical studies have already made progress: the STING inhibitor H-151 can prevent extracellular cold-inducible RNA-binding protein, a potent DAMP, from activating STING and causing intestinal and distant organ injuries (59). Moreover, by transcriptionally upregulating cGAS expression, histone deacetylase 3 (HDAC3) activates the cGAS-STING pathway in a p65-dependent manner, triggering tissue inflammation and injury; accordingly, HDAC3 inhibitors such as trichostatin A and MS-275 can reverse this detrimental effect (60). Post-translational modifications of cGAS play critical roles in regulating its activity and stability (84, 85), and thus regulation of cGAS expression and function may be another intervention approach.
Macrophage-mediated innate immunity has recently attracted increasing attention as a crucial player in allograft injury (86), and editing macrophage effector function may be an adjuvant therapy to alleviate inflammation. At present, most studies on the regulation of macrophages by the cGAS-STING pathway have used liver IRI as the main model. Aged mice are more susceptible to aggravated hepatic injury following ischemia-reperfusion, with stronger NLRP3 activation and pro-inflammatory activity of macrophages, probably because aged parenchymal cells release more extracellular DNA than younger ones, which triggers stronger STING/TBK1 signaling in macrophages. Consistent with these findings, older donors are associated with reduced recipient and graft survival rates (61, 87, 88). Myeloid-specific knockout of STING reduces liver IRI and the inflammatory response, indicating that STING activation may promote the pro-inflammatory response of monocyte-derived macrophages in liver transplantation (62). In addition, Kupffer cells, the tissue-resident macrophages of the liver, also play an important role in liver IRI, but the effect of the cGAS-STING pathway on them is still unclear. Furthermore, the promotion of microglial M1 polarization by IRI-induced mtDNA release can be attenuated by the STING inhibitor C-176 (63).
It is well known that the T cell-mediated immune response is closely related to post-transplant IRI and is an important factor affecting transplantation prognosis. The cGAS-STING pathway activates adaptive T cell responses by regulating dendritic cells and macrophages (64). When cGAS-deficient donor tissue was used, activation of CD8+ T cells in the graft and the proportion of effector memory lymphocytes in the spleen were reduced, and graft survival was significantly prolonged, providing a basis for immunosuppressive therapy targeting T cells (65). In addition, the induction of transplantation tolerance depends on the presence of regulatory T cells (Tregs). Surprisingly, activation of the cGAS-STING pathway can increase production of the regulatory cytokine IL-10 and promote the inhibitory activity of Tregs. Therefore, graft damage caused by T cell-mediated adaptive immunity triggered by the cGAS-STING pathway may be the result of immune imbalance (89). Interestingly, although it has been speculated that STING gain-of-function mutations cause disease through abnormal type I interferon signaling, another study suggests that T cell-mediated adaptive immunity may be the main pathogenic factor, although more research is needed to confirm this (90). ECs form the primary barrier between the host and solid organ allografts and are essential for inducing cell-mediated acute rejection following transplantation (91). Mitochondrial exposure may upregulate EC adhesion molecules and enhance inflammatory responses by activating ECs (92). Mitochondrial transplantation was able to reduce the risk of primary graft dysfunction in lung transplant recipients during ex vivo lung perfusion (93). Mitochondrial transplantation therapy has shown promise in clinical practice, but research on the underlying molecular mechanisms is still lacking. A recent study suggested that exposing murine heart ECs to exogenous mitochondria triggers internalized mitochondria-activated IFI16-STING-NF-κB signaling; subsequently, STING-dependent mitophagy stabilized the endothelium and weakened apoptotic activity, and activated ECs promoted T cell-mediated, costimulation blockade-resistant rejection (94). The cGAS-STING pathway possibly plays a significant role in this process.
FIGURE 3 Mechanisms underlying cGAS-STING activity in ischemia-reperfusion injury (IRI). When IRI occurs, self-DNA recognition is the primary determinant of cGAS-STING activity. mtDNA leakage, executed by RIP3 and PGAM5, is an important source of cGAS stimulation during IRI, whereas mitophagy mediated by PINK1 and IFI16 helps to counteract mtDNA stress. In addition, HDAC3 can upregulate cGAS expression. cGAMP can be transmitted from parenchymal cells to neighboring macrophages through intercellular transmission to activate the immune response. PGAM5, phosphoglycerate mutase family member 5; RIP3, receptor-interacting protein 3; IFI16, interferon gamma-inducible protein 16; PINK1, PTEN-induced kinase 1; NLRP3, NOD-like receptor protein 3.
Communication between the ER and mitochondria bridges ER stress and activation of the innate immune system (66). Inhibition of the cGAS-STING pathway suppresses ER stress, thereby attenuating lung IRI (95). ER stress-induced NLRP3 inflammasome activation may also be a pivotal driver of post-transplant IRI (67). Moreover, the macrophage TXNIP-mediated CYLD-NRF2-OASL1 axis has a regulatory effect, and TXNIP disruption suppresses STING-mediated TBK1 activation and subsequent inflammation (96). However, these regulatory effects on macrophages should be further verified in transplantation models.
The cGAS-STING pathway is also involved in controlling energy metabolism. Activated cGAS-STING signaling is accompanied by systemic and cellular metabolic abnormalities, including increased nutrient metabolism and decreased mitochondrial respiration (68). In adipocytes, TBK1 attenuates AMP-activated protein kinase (AMPK) activation to increase energy reserves while inhibiting respiration, and increased tissue inflammation is observed in adipocyte-specific TBK1 knockout models (97). Notably, by activating AMPK signaling, the STING inhibitor C-176 improves intestinal IRI-induced acute lung injury (98). Metabolic disorders are often accompanied by extensive mitochondrial damage (69). It therefore remains to be determined whether modulating cGAS-STING pathway-driven metabolic changes is beneficial for controlling inflammation.
Programmed cell death in IRI
The cGAS-STING pathway participates in a variety of cell death pathways, including pyroptosis, ferroptosis, necroptosis, apoptosis, and autophagy, without obvious specificity for any of them. During IRI following transplantation, multiple types of PCD may coexist, and the cGAS-STING pathway serves as a target that provides further insight into their relationship.
Strong STING signaling is associated with apoptosis induction (99, 100). Scutellarin plays a protective role in IRI through downregulation of the NLRP3 inflammasome and inhibits the Bcl-2/Bax/caspase-3 axis and the cGAS-STING pathway, thereby ameliorating graft dysfunction and apoptosis (101, 102); STING antagonists could probably be combined with it in transplant models. Additionally, in vitro upregulation of miR-24-3p relieves cardiomyocyte apoptosis following IRI (71). This protective effect may result from miR-24-3p targeting STING, thereby dampening the STING-IRF3 activation-mediated inflammatory response and cellular apoptosis (103). Currently, clinical practice lacks non-invasive biomarkers that can be used to predict transplant prognosis (72). Although miRNAs that are significant in tissues are of limited use as non-invasive biomarkers themselves, they can inspire later research on biological fluids. More studies are needed to discover correlations between miRNAs and transplantation prognosis.
Recent studies have indicated that autophagy is involved in IRI regulation, and direct regulation of autophagy alleviates IRI following transplantation (104). Ferritinophagy is a type of autophagy that targets ferritin to maintain balanced intracellular iron levels. NCOA4-mediated ferritinophagy plays a vital role in IRI, and suppression of the cGAS-STING pathway can diminish ferritinophagy, thus ameliorating IRI (105). Activin A, a well-known neuroprotective factor, is also involved in IRI alleviation through inhibition of cGAS/STING-mediated autophagic cell death (73). 25-Hydroxycholesterol, an oxidized cholesterol involved in the pathophysiology of cholesterol homeostasis, immune responses, and cell survival, alleviates IRI by inhibiting STING and excessive autophagy-induced cell death (74, 106). Combining these agents with STING antagonists might help lower the doses of immunosuppressive drugs and reduce toxicity. Interestingly, cGAS-mediated autophagy has been shown to relieve liver IRI, and this novel protective effect is independent of STING (75). The cGAS-STING pathway thus participates in the regulation of autophagic cell death in a bidirectional manner. These conflicting results may be related to the existence of multiple noncanonical cascades (76). Future studies should therefore aim to define the precise mechanism of this selective activation and determine whether it applies equally to other pathogenic processes.
Ferroptosis is triggered under conditions of excessive oxidative stress, such as IRI (107, 108), and ferroptotic cell death contributes to inflammatory responses following transplantation (109). Mechanistically, this form of PCD is induced by activation of the cGAS-STING pathway via lipid peroxidation, and the damage can be reversed by the anti-lipid peroxidation drug liproxstatin-1 (110). Surprisingly, lipid peroxidation induced by cellular stress can also specifically weaken the STING pathway (70); more precise experiments are therefore required to clarify their relationship. Additionally, inhibition of ubiquitin-specific protease 7 could reverse ferroptosis-induced IRI, probably through suppression of TBK1 degradation and of DNA methyltransferase 1 (DNMT1)-mediated methylation of FMRP translational regulator 1 (77, 111).
Both pyroptosis and necroptosis are inflammatory forms of cell death (112, 113). The cGAS-STING-IFN pathway is responsible for maintaining expression of MLKL, a key component for initiating necroptosis (114). Interestingly, mtDNA released from ECs during intestinal IRI activates the STING pathway and triggers necroptosis through collaborative IFN and tumor necrosis factor-α signaling (115). The cGAS-STING-NLRP3 axis has been shown to be a default mode of inflammasome activation and pyroptosis: STING activation induces lysosomal cell death and triggers the classic mode of NLRP3 activation (78). STING deficiency in macrophages can inhibit pyroptosis and the subsequent intense inflammatory response during liver IRI; this protective effect is probably due to reduced calcium-dependent caspase 1-GSDMD processing in macrophages (116). However, these results need to be confirmed and extended in transplantation models.
Outlook and future perspective
At present, immunosuppressants effectively control transplant rejection; however, considerable issues, such as opportunistic infections, a higher incidence of malignancy, and drug toxicity, are linked to their use. Compared with immunosuppressants, the breadth of the cGAS-STING pathway's involvement in inflammation and PCD is its greatest advantage, giving it the potential to serve as a multifunctional therapeutic target. Because the mechanism of IRI after transplantation is very complex, a single immunosuppressant has limited effect and combinations are required to achieve the desired therapeutic effect, but the superimposed drug toxicity inevitably worsens prognosis. The most serious side effect of immunosuppressive drugs in transplantation is severe infection caused by excessively low host immunity (117). Activation of the cGAS-STING pathway is highly cooperative, partial rather than complete blockade appears sufficient to produce anti-inflammatory effects, and appropriate activity of the cGAS-STING pathway can be preserved in the presence of overt infection. It therefore seems feasible to retain the necessary anti-infective capacity while achieving anti-inflammatory effects, and achieving this balance could help improve prognosis. A key task for the future is to better understand the minimum level of inhibition required for therapeutic benefit. In addition, personalized treatment is the biggest challenge facing the clinical application of immunosuppressants. Considerable effort is being devoted to developing small-molecule inhibitors targeting the cGAS-STING pathway, and precise treatment based on this pathway may become an important component of future clinical organ transplantation (118).
Conclusion
IRI is currently a serious complication after transplantation, mainly because there is still no effective therapy to manage it. Utilizing the cGAS-STING pathway as a potential target can provide new insights and help develop treatment approaches for post-transplant IRI. So far, many drugs targeting the cGAS-STING pathway have shown therapeutic effects in IRI through mechanisms involving inflammation and PCD. The next steps may include further evaluation of these agents in transplantation models and the collection of more convincing evidence to establish their clinical translational value. In addition, the detailed molecular mechanisms of the cGAS-STING pathway are not yet clear, and preventing unexpected and adverse cascade reactions is another issue that needs to be addressed.
FIGURE 2
FIGURE 2 Intercellular communication in cGAS-STING signaling. (A) Gap junctions, (B) receptor-based transport, and (C) membrane fusion could serve as routes for intercellular transmission of cGAMP.
TABLE 1 cGAS-STING pathway-based regulation involved in IRI-related mechanisms. | 5,090.4 | 2023-09-21T00:00:00.000 | [
"Medicine",
"Biology"
] |
Randomized controlled phase III trial of adjuvant chemoimmunotherapy with activated cytotoxic T cells and dendritic cells from regional lymph nodes of patients with lung cancer
A randomized controlled trial of adjuvant chemoimmunotherapy for lung cancer indicated a significant advantage for patients receiving immunotherapy. Herein we report the final results and immunological analysis with a median follow-up of 59.6 months. Patients with post-surgical lung cancer were randomly assigned to receive either chemoimmunotherapy (group A, immunotherapy arm) or chemotherapy (group B, control arm). The immunotherapy comprised the adoptive transfer of autologous activated killer T cells and dendritic cells (AKT-DC). The 2- and 5-year overall survival (OS) rates were 96.0 and 69.4% in group A and 64.7 and 45.1% in group B, respectively. Multivariate analysis revealed a hazard ratio of 0.439. The 2- and 5-year recurrence-free survival rates were 70.0 and 57.9% in group A and 43.1 and 31.4% in group B, respectively. Subgroup analysis of OS between treatment groups indicated that younger patients (≤ 55 years: HR 0.098), males (HR 0.474), patients with adenocarcinoma (HR 0.479), patients with stage III cancer (HR 0.399), and those who did not receive preoperative chemotherapy (HR 0.483) had lower HRs than the other subgroups. Immunological analysis of cell surface markers in regional lymph nodes of subjects receiving immunotherapy indicated that the CD8+/CD4+ T-cell ratio was elevated in survivors. Patients with non-small-cell lung cancer benefited from adoptive cellular immunotherapy as an adjuvant to surgery. Patients with stage III cancer, those with adenocarcinoma, and those not receiving preoperative chemotherapy were good candidates. Lastly, cytotoxic T cells were important for a favorable chemoimmunotherapy outcome.
Introduction
Progress in diagnostic procedures and surgical technology has considerably improved the prognosis of lung cancer surgery [1,2]. In advanced cases, however, patient outcomes remain poor despite progress in adjuvant chemotherapy [3,4] and molecular-targeted therapy [5]. Patients with stage IIIB and IV lung cancer in whom malignant pleural effusion, micrometastasis to mediastinal lymph nodes, or intrapulmonary metastasis is identified after thoracotomy often recur shortly after surgery and die early. Previously, we recruited post-surgical patients with advanced lung cancer and poor prognoses to test whether their prognosis could be improved by immunotherapy
in combination with adjuvant chemotherapy or molecular-targeted therapy [6]. We present here the results at a median follow-up of 59.6 months and the associated statistical and immunological analyses.
Patients and methods
The patients and methods are described in our previous report [6].
Study design and inclusion criteria
Patients with post-surgical NSCLC were randomly assigned to receive either adjuvant chemoimmunotherapy (group A, immunotherapy arm) or adjuvant chemotherapy (group B, control arm). Immunotherapy consisted of the adoptive transfer of activated cytotoxic killer T cells and dendritic cells (AKT-DC) derived from the regional lymph nodes of patients with lung cancer. The patient inclusion criteria of this study were as follows: post-surgical patients, < 76 years; Eastern Cooperative Oncology Group performance status (PS), 0 or 1; adequate bone marrow, liver, and renal function; histology, primary NSCLC, including combined-type small-cell carcinoma; pathological stage, IB with a tumor size > 5 cm or with severe vessel invasion and II-IV (TNM staging system version 6). Patients with clinical stage I and II cancer received surgery and were pathologically stratified as group a: stage IB, group b: stage II, group c: stage IIIA, and group d: stage IIIB, IV. Patients with clinical stage IIIA cancer (single station N2 or T3N1) received two courses of induction chemotherapy and were stratified as group e: stage IIIA and group f: stage IIIB, IV diagnosed after thoracotomy (pathological stages). Patients with stages IIIB or IV cancer and malignant pleural effusion, micrometastasis to mediastinal lymph nodes, or intrapulmonary metastasis identified after thoracotomy were also included. Patients who underwent non-curative resection were included, but those with exploratory thoracotomies or macroscopic residual tumors were excluded from this study.
Chemotherapy
We used platinum doublet regimens containing third-generation drugs for induction and adjuvant chemotherapy. Both groups received four courses of adjuvant chemotherapy after surgery (groups a, b, c, and d). Patients with clinical stage IIIA cancer (groups e and f) received two courses of induction and adjuvant chemotherapy. After confirmation of recurrence, chemotherapy was resumed and EGFR-mutation-positive patients received an EGFR-TKI. Immunotherapy was continued or resumed with the patient's consent in combination with chemotherapy.
Preparation of activated killer T cells and dendritic cells from regional lymph nodes
The procedure for preparing AKT-DC has been previously described [6]. One to two grams of tumor-draining lymph nodes (TDLN), from intrapulmonary to mediastinal lymph nodes with no metastasis, were transferred to a sterile Petri dish and aseptically minced into 1-mm³ tissue fragments. The tissue preparation was then suspended in 50 ml of ALyS (ALyS505N; Cell Science and Technology Institute, Inc., Sendai, Japan) serum-free lymphocyte medium containing 400 IU/ml human recombinant interleukin-2, transferred to a 75-cm² culture flask, and incubated at 37 °C in air containing 5% CO2. When the TDLN started to release AKT-DC, the tissues and cells were transferred to culture bags. The AKT-DC were separated from the TDLN tissue by filtering through a nylon mesh, transferred to another set of bags, split two to three times, and harvested. Cells were suspended in the cryoprotective agent CP-1 (Kyokuto Pharm. Co., Tokyo, Japan) with 4% human albumin and stored at 5-10 × 10⁹ cells/bag (freeze bag F-100A: NIPRO, Osaka, Japan) at −80 °C until use. AKT-DC were intravenously infused 1 week after each course of chemotherapy; infusions were then continued once a month for the first 6 months after resection and every 2 months thereafter until 2 years after surgery.
Immunological analysis
We selected patients in group A who died within 3 years of recurrence (n = 7) and compared their cell surface markers with those of the other group A patients who were alive at 3 years (n = 42). Mononuclear cells obtained from the regional lymph nodes of the patients after surgery were stained by immunofluorescence and analyzed with a flow cytometer before, and again 1-2 months after, the initiation of in vitro culture in IL-2, when the cells were actively proliferating.
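As a rough illustration of how such a two-group comparison of CD8+/CD4+ ratios could be run, the sketch below applies a nonparametric Mann-Whitney U test to placeholder data. The original report does not state which test was used, and the values and group labels here are hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-patient CD8+/CD4+ ratios measured 1-2 months after the
# start of IL-2 culture; "alive" = alive at 3 years (n = 42), "deceased" =
# died within 3 years of recurrence (n = 7), matching the group sizes above.
rng = np.random.default_rng(0)
alive_ratio = rng.lognormal(mean=1.0, sigma=0.4, size=42)    # placeholder data
deceased_ratio = rng.lognormal(mean=0.5, sigma=0.4, size=7)  # placeholder data

stat, p_value = mannwhitneyu(alive_ratio, deceased_ratio, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```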
Statistical analysis
The population for analysis was defined as randomly assigned patients who were eligible before treatment. Overall survival (OS) was defined as the time from random assignment to death from any cause. Recurrence-free survival (RFS) was defined as the time from randomization to confirmation of recurrence by the trial cancer board. Survival curves were estimated using the Kaplan-Meier method, and survival rates with 95% confidence intervals (CIs) were calculated. Survival between the treatment arms was compared using the log-rank test, and hazard ratios (HRs) were calculated using the Cox proportional hazards model with and without the following covariates: age, sex, histology, stage, and preoperative chemotherapy. The significance level of the two-tailed statistical tests was 0.05. Statistical analyses were performed by the Translational Research Informatics Center (TRI: TRILC1304) and by the Foundation for Biomedical Research and Innovation using SAS (version 9.3; SAS Institute, Cary, NC, USA). An interim analysis was scheduled for 5 years after the initiation of the study regardless of the number of enrolled patients.
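The sketch below illustrates, in Python with the lifelines package, how Kaplan-Meier estimation, the log-rank test, and univariate/multivariate Cox models of the kind described above can be set up. The file name, column names, and covariate encoding are assumptions for illustration; they do not reproduce the SAS analysis actually performed.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical analysis dataset: one row per randomized patient.
# Columns (all numeric, 0/1 indicators for categorical covariates):
#   os_months, os_event, immunotherapy, age, male, adeno, stage3, preop_chemo
df = pd.read_csv("trial_data.csv")

a = df[df.immunotherapy == 1]   # group A (chemoimmunotherapy)
b = df[df.immunotherapy == 0]   # group B (chemotherapy)

# Kaplan-Meier estimates and log-rank comparison of the two arms
km_a = KaplanMeierFitter().fit(a.os_months, event_observed=a.os_event, label="group A")
km_b = KaplanMeierFitter().fit(b.os_months, event_observed=b.os_event, label="group B")
print(km_a.median_survival_time_, km_b.median_survival_time_)

lr = logrank_test(a.os_months, b.os_months,
                  event_observed_A=a.os_event, event_observed_B=b.os_event)
print(f"log-rank p = {lr.p_value:.4f}")

# Cox proportional hazards models: treatment only, then with covariates
cph_uni = CoxPHFitter().fit(df[["os_months", "os_event", "immunotherapy"]],
                            duration_col="os_months", event_col="os_event")
cph_multi = CoxPHFitter().fit(df[["os_months", "os_event", "immunotherapy",
                                  "age", "male", "adeno", "stage3", "preop_chemo"]],
                              duration_col="os_months", event_col="os_event")
cph_uni.print_summary()    # exp(coef) column = unadjusted hazard ratio
cph_multi.print_summary()  # adjusted hazard ratios with 95% CIs
```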
Consort diagram
As shown in Fig. 1, 453 of the 556 patients who underwent surgery for NSCLC between April 2007 and July 2012 were excluded, and the remaining 103 patients were selected for randomization. Of the ineligible patients, 79 were excluded because of age (> 76 years) and 303 because of early-stage tumors. Among 62 patients with stage IIIB and IV cancer, a sufficient number of AKT-DC (> 7 × 10⁹) for a course of treatment could not be obtained because of immunosuppression in 35 cases (56.5%), and these patients were excluded from the study. Nine patients were excluded because of hepatitis virus infection or refusal to provide informed consent for immunotherapy.
One patient each from groups A and B was excluded from the study due to a study violation or leukemic conversion of AKT-DC after randomization.
Overall survival
The difference in OS rates between the treatment arms was noted to be statistically significant (log-rank test, P = 0.0005) and in agreement with our initial findings from 4 years ago [6] (Fig. 2). The 2-, 5-, and 7-year OS rates were 96.0% (95% CI 84. 9 (Fig. 3). These differences in the RFS rates between the two treatment groups were also noted to be statistically significant (log-rank test, P = 0.0044). The HRs were 0.473 (95% CI 0.280-0.801) by univariate and 0.473 (0.275-0.812) by multivariate analysis in favor of group A.
OS using Cox proportional hazards model for subgroup analyses and treatment interactions
The HRs by subgroup analysis of OS between treatment groups that were significantly lower than 1.0 in favor of
RFS using Cox proportional hazards model for subgroup analyses and treatment interactions
As shown in Table 2.
Cell surface markers and survival
The CD8+/CD4+ T-cell ratio analyzed 1-2 months after the initiation of in vitro culture was higher in the survivors than in the deceased (p = 0.013; Fig. 4). Additional analyses of cell surface markers determining the percentages of CD8+, CD4+, CD80+, CD83+, HLA-DR+, B7-H1+, and Treg (CD4+CD25+) cells before and after in vitro culture failed to show significant relationships with survival.
Discussion
This study is part of a series of adjuvant immunotherapy trials in patients with post-surgical lung cancer extending over 20 years, starting with lymphokine-activated killer (LAK) cells and continuing with AKT-DC. The first RCT, conducted between 1986 and 1992 using LAK cells, was reported in the journal Cancer [7]. The results of a phase II study conducted between 1998 and 2004 using AKT-DC obtained from the regional lymph nodes of patients with primary lung cancer predicted a promising outcome for a phase III study of this approach [8,9]. The results of this phase III study clearly demonstrate that adoptive cellular immunotherapy benefits patients with lung cancer as an adjuvant to surgery. Subgroup analysis of OS, comparing the treatment arms using Cox models, indicated that younger patients, male patients, and patients with adenocarcinoma or stage III tumors are good candidates for immunotherapy. The prognosis for stages I and II is better than that for stage III in patients with NSCLC; however, chemoimmunotherapy improved the prognosis for stage III more efficiently than for stages I and II, and this finding was observed in both male and female patients. While the prognosis was better for females than for males, it was significantly improved by immunotherapy in males. Stage IIIB and IV tumors have evolved mechanisms for escaping the immune response in the tumor microenvironment [10–13]. The immune system is either inefficient or tolerant of tumor growth in stages IIIB and IV, which likely leads to ineffective chemoimmunotherapy outcomes in those patients. Lung cancer patients with stage I and II tumors are good candidates for surgery; however, surgery is not indicated in stage IIIA cases because of poor prognosis. If the prognosis of advanced NSCLC can be improved by cell-mediated immunotherapy, surgery may be added to the current treatment modalities for patients with stage IIIA cancer.
The assessment of histological types in this RCT showed that the HR of OS for adenocarcinoma was lower than that for squamous cell carcinoma, demonstrating that patients with adenocarcinoma benefit from immunotherapy. We speculate that metastatic tumors arising from circulating and disseminating tumor cells (CDTC), the primary residual tumor constituents in these adenocarcinoma patients [14,15], cannot escape immune surveillance and are eventually eliminated by cell-mediated immunotherapy [10,16,17]. Conversely, the residual pattern of squamous cell carcinoma includes residual edges of the resected primary tumor margin rather than CDTC. These residual tumors, like the original tumors, are capable of escaping immune surveillance and blocking the immune response. We excluded cases with macroscopically residual tumors; however, microscopically residual tumors, such as those with positive bronchial or chest wall margins, were included in the present trial. Squamous cell carcinoma invading the surrounding tissues that remain after resection induces immunosuppression and blocks immune responses, and may also prevent effective cell-mediated immunotherapy. Patients who did not receive preoperative chemotherapy had lower HRs and benefited from immunotherapy, whereas patients who received preoperative chemotherapy did not benefit significantly. Specific immune responses may be abrogated by preoperative chemotherapy: cytotoxic anticancer drugs may negatively affect immune responses in regional lymph nodes, dampening the effect of immunotherapy. Immunological analysis using cell surface markers of cultured lymphocytes indicated that the CD8+/CD4+ T-cell ratio was elevated in survivors, whereas analysis of other lymphocyte surface markers before and after in vitro culture showed no significant correlation with survival. These results indicate that CD8+ cytotoxic T cells were more effective than CD4+ helper T cells under the adjuvant therapy conditions of this study; the direct effect of cytotoxic killer T cells appears more important than indirect support from helper T cells in eliminating CDTC.
Most cancer recurrences result from CDTC, which are clinically undetectable at the time of resection of the primary carcinoma [14,15]. The immune response against CDTC released from primary tumors differs from that against the original tumors with respect to immunosuppression, which is induced by several immune escape mechanisms. The target of immunotherapy in this trial was therefore not the primary lesion, but the undetectable tumor cells remaining after resection of the primary carcinoma of the lung.
The phenotypic diversity of disseminated cells resulting from intra-tumor heterogeneity [18,19] gives rise to clones that are resistant to chemotherapy and prevents tumor cell eradication by chemotherapy. This heterogeneity also enables tumor cells to escape molecular-targeted therapy [20,21]. The regional lymph nodes of patients with lung cancer are the organ where the first adaptive immune response against the cancer develops [22,23]. Dendritic cells at the tumor site internalize antigens, migrate to lymph nodes, and induce naive T cells to become antigen-specific cytotoxic T cells [24]; they act as messengers between the innate and adaptive immune responses. From initiation to progression of cancer, tumor cells give rise to various antigens, which are recognized by dendritic cells. Regional lymph nodes therefore represent one of the front-line defense mechanisms against cancer, coping with heterogeneous cancer cells. Cell-mediated immunotherapy derived from regional lymph nodes, as a source of dendritic cells and cytotoxic T cells, may ultimately eradicate heterogeneous tumor cell clones that disseminate throughout the body carrying a wide variety of antigens, before immunosuppression develops in micrometastases. An important point to consider is whether a sufficient number of AKT-DC can be obtained from the patient for this cell-mediated immunotherapy, as we observed that AKT-DC could not be obtained in the required quantities in nearly 56% of stage IIIB and IV cases.
Although our results suggest the clinical value of cell-mediated immunotherapy combined with chemotherapy for patients with lung cancer, this study has certain limitations. It was carried out with a relatively small group of patients (only 103), at a single institution, and only in Japan. In addition, it was not blinded, and the included patients were a heterogeneous population. A large-scale, double-blind, randomized, multi-institutional trial is essential for ascertaining the efficacy of the adjuvant cellular immunotherapy procedure described here and its clinical application. The skills required to culture regional lymph nodes must also be successfully passed on to successors, which demands experience, time, and financial resources. Close cooperation and collaboration to extend the study protocol nationwide would be immensely beneficial to patients with lung cancer awaiting this cellular immunotherapy.
Fig. 4 Cell surface markers and survival. The relationship between cell surface markers and survival was examined; the CD8+/CD4+ ratio was elevated in survivors | 3,644.2 | 2018-05-31T00:00:00.000 | [
"Medicine",
"Biology"
] |
Physical and Chemical Properties of Highland Bamboo (Yushania alpina) Culms Grown in Ethiopia
• Bamboo is the fastest growing plant currently known on earth, a property that makes it the best alternative as a future source of wood fiber. This study investigated the effect of site and culm height on the physical and chemical properties of Yushania alpina culms grown in Ethiopia. Mature 3- to 5-year-old Yushania alpina samples were harvested from the Hagere-Selam and Rebu-Gebeya sites. The culms were subdivided into three equal lengths (bottom, middle, and top), and the variations in physical and chemical properties between the two sites and along the culm height of Yushania alpina were investigated. The results showed that the average values of MC, basic density, and tangential and longitudinal shrinkage of Yushania alpina culms for the Hagere-Selam and Rebu-Gebeya sites were (91.78 and 80.32 %), (0.65 and 0.63 g/cm³), (6.63 and 5.84 %), and (0.63 and 0.56 %), respectively. The average values of cellulose, lignin, extractive, and ash contents in the culms for the Hagere-Selam and Rebu-Gebeya sites were (52.84 and 50.71 %), (26.55 and 26.04 %), (8.41 and 8.02 %), and (1.95 and 2.17 %), respectively. The results revealed that site affected the MC, basic density, cellulose, lignin, extractive, and ash contents of Yushania alpina culms but not the tangential and longitudinal shrinkage. Culm height affected MC, basic density, tangential shrinkage, longitudinal shrinkage, cellulose, lignin, extractive, and ash contents. At both sites, the highest percentages of MC, tangential and longitudinal shrinkage, and ash content were observed at the base and the lowest at the top of the culms. Conversely, at both sites the highest values of basic density, cellulose, and extractive content were observed at the top and the lowest at the base of the culms. The variations in physical and chemical properties between sites and along the culm height influence the utilization of Yushania alpina culms for industry and end products.
1 INTRODUCTION
1. UVOD
Bamboo is the fastest growing plant currently known on earth, a property that enables it to be the best alternative as a future source of wood fiber (Liese and Köhl, 2015). Unlike timber, bamboo culms need only a short rotation (3-5 years) to mature before they can be harvested and utilized (Liese and Köhl, 2015). Bamboo is a perennial plant that belongs to the subfamily Bambusoideae of the family Poaceae (Gramineae), and it contains more than 1,500 species. It provides more than 1,500 applications, from traditional utilization in rural areas to industrial production, construction, and other versatile uses (Liese, 1987; Zhaohua, 2011; Hinde and Kaba, 2018). Bamboo has a higher strength-to-weight ratio than wood, enabling easy harvesting, transporting, and manufacturing of products (Wahab et al., 2009; Anokye et al., 2014). These characteristics of bamboo have encouraged and intensified bamboo research in recent years.
The utilization of bamboo for various applications is governed by its properties, as with any wood material. Moisture content, density, and shrinkage are physical properties that influence the dimensional stability, toughness, strength, working properties, and durability of bamboo and bamboo products (Liese and Köhl, 2015). The basic chemical constituents of bamboo are cellulose, hemicellulose, and lignin, which influence its utilization for different applications (Liese, 1985). The properties of bamboo culms are mainly affected by culm position, age, topography, and climate (Liese and Köhl, 2015; Tolessa et al., 2019). The physical and chemical properties of bamboo vary between species, sites, ages, and different culm positions (Santhoshkumar and Bhat, 2015; Liese and Köhl, 2015; Tolessa et al., 2019). Information on the physical and chemical properties of bamboo is necessary for assessing its suitability for various end products (Kamruzzaman et al., 2008; Tolessa et al., 2019). Such information will also enable increased utilization of bamboo species as a substitute for solid wood in wood-based industries.
Recently, Yushania alpina (Y. alpina) culms and splints have been used to construct traditional houses, rudimentary furniture, handcrafts, mats, fencing, beehives, and household utensils (baskets, winnowing trays) (Desalegn and Tadesse, 2014; Desalegn, 2015). Highland bamboo (Y. alpina) is indigenous to Ethiopia and is also found in the highlands of Kenya, Sudan, Zambia, Zaire, Burundi, Rwanda, Cameroon, and Tanzania (Embaye et al., 2003). This bamboo species is the most cultivated and widespread in the country. It is preferred mainly because of its suitability and ease of processing and conversion into different products.
In Ethiopia, the multiple industrial applications of bamboo are not being exploited to their full economic advantage, and its utilization is limited to domestic uses (Mulatu and Kindu, 2010; Zenebe et al., 2014). This is due to insufficient basic information on its properties. Previously, only a few studies have addressed its physical and chemical properties (Muche and Degu, 2019; Tsegaye et al., 2020; Dessalegn et al., 2021), and there is still limited information on the variation of physical and chemical properties between sites, within culms, and along the culm height of Y. alpina grown in Ethiopia. Therefore, this study investigated the effect of site and culm height on the physical and chemical properties of Y. alpina grown at Hagere-Selam (Sidama Region) and Rebu-Gebeya (Amhara Region).
2 MATERIALS AND METHODS
A total of twenty matured Yushania alpina culm samples (3-5 years old) were harvested from potential sites of Hagere-Selam (Sidama Region) and Rebu-Gebeya (Amhara Region). First, the bamboo culms were harvested and their branches were removed from the top parts of the culms, leaving the entire length to be about 9 m. After that, the culms were subdivided into three equal lengths, labeled bottom, middle, and top portions, with lengths of 3 m (Liese and Köhl, 2015).
2.1 Determination of physical properties
2.1.1 Determination of moisture content
Three-centimeter-long specimens representing the two sites (Hagere-Selam and Rebu-Gebeya) and three culm heights (base, middle, and top) were cut from fresh Y. alpina culms to determine the initial moisture content. The green weight of each specimen was measured using an analytical balance with an accuracy of 0.01 g (IS 6874, 2008). The specimens were then oven-dried at a temperature of (103±2) °C until attaining constant weight. The moisture content was calculated using Eq. 1:

MC (%) = ((Wg - Wod) / Wod) × 100    (1)

where Wg is the green weight of the specimen and Wod is the oven-dry weight of the specimen.
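As a purely illustrative aid (not part of the original study), Eq. 1 can be evaluated with a short script; the specimen weights used below are hypothetical:

def moisture_content(green_weight_g, oven_dry_weight_g):
    # Moisture content (%) on an oven-dry basis, as in Eq. 1
    return (green_weight_g - oven_dry_weight_g) / oven_dry_weight_g * 100.0

# hypothetical specimen: 15.2 g green, 7.9 g after oven-drying at (103±2) °C
print(round(moisture_content(15.2, 7.9), 2))  # 92.41 %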
2.1.2 Determination of basic density
Three-centimeter-long specimens (representing the two sites and three culm heights) were cut from fresh Y. alpina culms for the determination of basic density. The green weights of all specimens were measured using an analytical balance with an accuracy of 0.01 g. The water displacement method was used to determine the green volume of each specimen. The specimens were then oven-dried at a temperature of (103±2) °C, and the weight was recorded repeatedly until a constant weight was reached. Basic density was determined based on ISO 22157-2:2004 and IS 6874 (2008) and calculated using Eq. 2:

BD (g/cm3) = Wod / Vg    (2)

where Wod is the oven-dry weight of the specimen and Vg is its green volume.
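The basic density of Eq. 2 can be computed analogously (an illustrative sketch only; the values are hypothetical):

def basic_density(oven_dry_weight_g, green_volume_cm3):
    # Basic density (g/cm3) = oven-dry weight / green volume (water displacement), as in Eq. 2
    return oven_dry_weight_g / green_volume_cm3

# hypothetical specimen: 7.9 g oven-dry, 12.3 cm3 green volume
print(round(basic_density(7.9, 12.3), 2))  # 0.64 g/cm3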
2.1.3 Determination of shrinkage
Specimens representing the two sites and three culm heights, 3 cm in length, were prepared from round-shaped Y. alpina culms to determine the tangential and longitudinal shrinkage. For each specimen, the green weight and the dimensions of the wall thickness at four points and of the length at four points in the green condition were measured using an analytical balance and a digital caliper with accuracies of 0.01 g and 0.01 mm, respectively (Figure 1). The specimens were oven-dried at a temperature of (103±2) °C, and the weight was recorded repeatedly until a constant weight was reached. Shrinkage was determined based on ISO 22157-2:2004 and IS 6874 (2008) and calculated using Eq. 3.
S (%) = ((Di - Df) / Di) × 100    (3)

where Di is the initial dimension of the specimen before oven-drying (mm) and Df is the final dimension of the specimen after oven-drying (mm).
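Eq. 3 applies to either direction (tangential wall thickness or length); below is an illustrative sketch with hypothetical dimensions:

def shrinkage(initial_dimension_mm, final_dimension_mm):
    # Shrinkage (%) from the green to the oven-dry condition, as in Eq. 3
    return (initial_dimension_mm - final_dimension_mm) / initial_dimension_mm * 100.0

# hypothetical wall thickness: 6.10 mm green, 5.72 mm oven-dry
print(round(shrinkage(6.10, 5.72), 2))  # 6.23 % tangential shrinkage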
2.2 Determination of chemical composition
The harvested bamboo culms were dried and converted into small strips suitable for further milling. Thereafter, they were processed in a hammer mill and a Wiley mill to reduce them to the appropriate size. The milled bamboo powder was then sieved using 40 mesh (425 μm) and 60 mesh (250 μm) sieves. The particles were stored in airtight containers labeled with the appropriate codes for chemical analysis.
The chemical composition, including extractive (alcohol-toluene solubility), ash, and lignin contents, was determined using the standard procedures of the American Society for Testing and Materials (ASTM) (Table 3). Cellulose content was determined by alkali extraction according to the Kürschner-Hoffer method (Brown, 1975). Toluene was used instead of benzene, and the result is reported as alcohol-toluene extractive (Tolessa et al., 2017). The amounts were expressed on a percentage basis of the starting oven-dry mass. The lignin content test was performed on extractive-free bamboo derived from the alcohol-toluene extraction.
2.3 Statistical analysis
The data were analyzed statistically to assess significant differences between the two sites (Hagere-Selam and Rebu-Gebeya) and along the three culm heights (base, middle, and top) using descriptive statistics and analysis of variance (ANOVA) in R software, version 4.1.3. The least significant difference (LSD) test was used for mean comparison at p<0.05.
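The analysis was run in R; an equivalent two-way ANOVA can be sketched in Python with statsmodels, shown here only to make the site × culm height model explicit (the data frame and its values are hypothetical):

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# hypothetical measurements: one row per specimen
df = pd.DataFrame({
    "site":    ["Hagere-Selam", "Hagere-Selam", "Rebu-Gebeya", "Rebu-Gebeya"] * 3,
    "height":  ["base", "middle", "top"] * 4,
    "density": [0.61, 0.64, 0.62, 0.60, 0.66, 0.65, 0.69, 0.63, 0.64, 0.62, 0.67, 0.66],
})

# two-way ANOVA with interaction, mirroring the site x culm-height design
model = ols("density ~ C(site) * C(height)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

The LSD procedure used for mean separation corresponds to unadjusted pairwise t-tests on the level means, applied once the corresponding ANOVA term is significant.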
3 RESULTS AND DISCUSSION
3.1.1 Moisture content
The results revealed that Y. alpina culms harvested from Hagere-Selam had a higher value of initial MC than those harvested from Rebu-Gebeya (Table 4). These differences may be associated with the age and season of felling of the bamboo culms (Liese, 1985). For this study, culms 3 to 5 years old were harvested during the dry season but in different months.
The results revealed that the values of initial MC for the two sites decreased from the base to the top position of the culms (Table 4). Similar trends were seen in Bambusa balcooa, Bambusa tulda, Bambusa salarkhanii, and Melocanna baccifera grown in Bangladesh (Kamruzzaman et al., 2008). Other researchers also found similar variations in their studies (Wahab et al., 2009). According to Wahab et al. (2009), the variation of MC along the culm height was due to differences in anatomical structure and chemical composition between locations along the bamboo culms. On the other hand, the decreasing trend of initial MC might be due to a smaller proportion of vascular bundles at the bottom when compared to the top position of the culms (Anokye et al., 2014).
3.1.2 Basic density
Density is among the main factors that affect the utilization of bamboo culms as raw material. The overall mean values of basic density for the Hagere-Selam and Rebu-Gebeya sites were 0.65 and 0.63 g/cm3, respectively. This shows that the culms harvested from the Hagere-Selam site were significantly denser than those from the Rebu-Gebeya site (Table 4). Generally, the density of bamboo ranges from about 0.4 to 0.9 g/cm3, depending on the anatomical structure reflected in the quantity and distribution of fibers around the vascular bundles (Zakikhani et al., 2017). The densities found in this study were within the generally recognized range of bamboo density. The results show that the culm height had a significant effect on the basic density at the p<0.001 level, whereas the site had a significant effect on basic density at the p<0.01 level (Table 4). However, the same table shows that the interaction effect between the site and culm height did not significantly affect the basic density at the p>0.05 level (Table 4).
The results revealed that the basic density significantly increased along the height of Y. alpina culms for both sites (Figure 2). Many researchers have reported similar trends, finding an increase in basic density with increasing height of bamboo culms from the base to the top (Wahab et al., 2009; Santhoshkumar and Bhat, 2015; Vetter et al., 2015). The increase of basic density from the base towards the top position of the culms was also reported for thirteen bamboo species grown in Malaysia (Siam et al., 2019). This variation is associated with anatomical structure variations at different bamboo culm heights (Liese, 1985). According to Santhoshkumar and Bhat (2015), the increase in the magnitude of density from the bottom to the top position of the bamboo culm was due to the increase in the proportion of fibrous tissue and the increased frequency of occurrence of vascular bundles.
3.1.3 Shrinkage
Shrinkage is another main factor that affects the utilization of bamboo culms as a raw material in different wood industries. Unlike wood, bamboo begins to shrink from the very beginning of drying. The results revealed that the overall mean tangential (culm-wall) shrinkage for the Hagere-Selam and Rebu-Gebeya sites was 6.63 % and 5.84 %, respectively (Figure 5). The overall mean longitudinal shrinkage for the Hagere-Selam and Rebu-Gebeya sites was 0.63 % and 0.56 %, respectively (Figure 5). The shrinkage of the Y. alpina culms grown at Hagere-Selam was higher in both the tangential (culm-wall thickness) and longitudinal directions when compared to the Rebu-Gebeya site (Figure 5). This variation may be related to the initial MC and the culm-wall thicknesses of the bamboo culms (Siam et al., 2019).
The tangential and longitudinal shrinkage of Y. alpina culms at different heights for the Hagere-Selam and Rebu-Gebeya sites are presented in Figure 3 and Figure 4, respectively. The statistical analysis of variance revealed that site and culm height had a highly significant effect on tangential shrinkage at p<0.001 (Table 4). The same table shows that the culm height had a significant effect on longitudinal shrinkage at p<0.001, whereas the site had a significant effect on the longitudinal shrinkage at p<0.05. However, the interaction between the site and the culm height position had an insignificant effect on the tangential and longitudinal shrinkage at p>0.05 (Table 4).
The results showed that the tangential and longitudinal shrinkage of the culms decreased from the base to the top positions of Y. alpina for both sites (Figure 3, Figure 4). A similar trend was observed in Bambusa balcooa, Bambusa tulda, Bambusa salarkhanii, and Melocanna baccifera species grown in Bangladesh (Kamruzzaman et al., 2008). These variations are associated with a higher parenchyma cell content and fewer vascular bundles at the base positions, whereas a lower parenchyma cell content and a higher number of vascular bundles are found at the top positions of bamboo culms (Wahab et al., 2009).
3.2.1 Cellulose content
Cellulose is the main constituent of lignocellulosic wood material, and it is located predominantly in the secondary cell wall. The analysis of variance shows that the site and culm height had a significant (p<0.001) effect on the cellulose content of Y. alpina bamboo (Table 5). The same table shows that the interaction effect between site and culm height did not show a significant (p>0.05) effect on cellulose content (Table 5). The overall mean value of cellulose content in Y. alpina culms grown at Hagere-Selam was 52.84 %, which is statistically higher than that of the bamboo culms collected from the Rebu-Gebeya site, which was 50.71 %. The results obtained from this study were higher than those found in the previous study on the same bamboo species (46.76 %) (Tsegaye et al., 2020). The cellulose content found in this study was within the range reported for softwoods (40 - 52 %) and hardwoods (38 - 56 %). Normally, the cellulose content of bamboo ranges from 40 - 50 % (Fengel and Wegener, 1984). The results obtained in this study were in the range of the values mentioned above for woods and other bamboo species. Bamboo species with cellulose content in this range are suitable for pulp and papermaking, bioenergy, and bio-based composite production (Hammett et al., 2001; Li et al., 2007). They can also be used for applications similar to those of softwoods and hardwoods.
In the case of both sites, statistically, the highest cellulose contents were observed at the top, followed by the middle, and the lowest at the bottom position (Figure 8, Figure 9). The percentage cellulose content of Y. alpina culms for both sites thus showed an increasing trend from the base to the top (Figure 8, Figure 9). The same trend was reported for the same bamboo species at the age of three (Tolessa et al., 2019).
3.2.2 Klason lignin content
Lignin is a phenolic substance consisting of an irregular array of variously bonded hydroxyl- and methoxy-substituted phenylpropane units. Statistically, the overall mean value of lignin content of Y. alpina culms collected from Hagere-Selam (26.55 %) differed insignificantly from that of the culms collected from the Rebu-Gebeya site (26.04 %) (Figure 7). The lignin content in different tropical bamboo species was reported in the range of 24.84 to 32.65 % (Razak et al., 2013). Fengel and Wegener (1984) reported the ranges of lignin content for softwoods (24 - 37 %) and hardwoods (17 - 30 %). The results obtained from this study were in the range of the above reports for bamboo, softwood, and hardwood. According to Zhang et al. (2022), the high lignin content of bamboo culms can provide excellent physical and mechanical properties. On the other hand, the high lignin content contributes to bamboo rigidity and makes it suitable for structural applications, such as construction and furniture-making.
The analysis of variance shows that the culm height had a significant effect on lignin content at p<0.01 (Table 5), while the site showed a significant (p<0.1) effect on lignin content (Table 5). However, the interaction effect between the site and culm height did not show a significant (p>0.05) effect on the lignin content of the culms (Table 5). The results revealed that the highest percentage of lignin content was observed at the bottom, followed by the middle, and the minimum at the top culm position for both sites (Figure 8, Figure 9). A similar variation pattern was observed in the same bamboo species (Tolessa et al., 2019), and a similar pattern was also reported for Melocanna baccifera (Hossain et al., 2022).
3.2.3 Extractive content
Non-structural chemical compounds found in wood and bamboo materials are known as extractives. Extractives in bamboo are non-cell-wall components with diverse chemical compositions, such as resins, lipids, waxes, tannins, pentosans, hexosans, starch, and silica (Fengel and Wegener, 1984). According to the analysis of variance, site and culm height showed a significant (p<0.001) effect on the extractive content (Table 5). However, the interaction effect between the site and culm height did not show a significant effect on the extractive content of Y. alpina at p>0.05 (Table 5).
The overall mean value of extractive content in Y. alpina culms collected from Hagere-Selam was 8.41 %, which is significantly higher than that of the culms collected from the Rebu-Gebeya site, which was 8.02 %. Extractives in bamboo can enhance the structural rigidity of its cell wall and effectively resist diseases and pests/decay (Zhang et al., 2022).
Statistically, in the case of both sites, the highest extractive contents were observed at the bottom position, followed by the middle, and the lowest at the top position (Figure 8, Figure 9). The same variation pattern was reported in Y. alpina culms (Tolessa et al., 2019).
3.2.4 Ash content
Ash is a term generally used to refer to inorganic substances such as silicates, sulfates, carbonates, or metal ions. The analysis of variance shows that the main effects of site and culm height had a significant effect on ash content at p<0.001 (Table 5). However, the interaction effect between the site and culm height did not show a significant effect on ash content at p>0.05 (Table 5). The overall mean value of ash content in Y. alpina culms collected from Rebu-Gebeya was 2.17 %, which is significantly higher than that of the culms collected from the Hagere-Selam site, which was 1.95 %. Liese and Köhl (2015) stated that bamboo growth sites affect the amount of ash in bamboo. This difference might be due to the topography, soil, and climate of the area where the bamboo culms are grown. Values lower than those determined in this study were reported for highland bamboo, namely 1.87 % (Tolessa et al., 2019). Statistically, in the case of both sites, the highest ash contents were observed at the bottom position, followed by the middle, and the lowest at the top position (Figure 7, Figure 8). The same variation was reported in Y. alpina culms at the age of three (Tolessa et al., 2019). According to Tolessa et al. (2019), the ash content variation along the culm height varied with moisture content.
4 CONCLUSION
This study investigated the variation of physical and chemical properties of Yushania alpina culms with site and culm height. The Y. alpina culms grown at Hagere-Selam had higher values of green moisture content (MC) and basic density compared to the culms collected from Rebu-Gebeya. The green MC of Y. alpina culms decreased from the base toward the top positions for both sites. Similarly, the tangential and longitudinal shrinkage from green to oven-dry conditions of Y. alpina culms showed a decreasing trend from the base to the top for both sites. In contrast, the basic density showed the opposite tendency, increasing from the base to the top positions of the bamboo culms at the two sites. Density is the most important parameter affecting the practical utilization of bamboo; based on density, most of the other characteristics of bamboo culms can be predicted. A significant effect of site and culm height on the chemical properties was observed. The cellulose and lignin contents found were in the range of wood and other bamboo species. Consequently, Y. alpina culms can be a potential source of pulp and paper and bioenergy, and are suitable for structural applications such as construction, furniture-making, and bio-based composite production. Further studies should be carried out on other potential bamboo species abundantly available in Ethiopia, such as lowland bamboo (Oxytenanthera abyssinica) and other introduced and exotic bamboo species.
Table 4. Analysis of variance for basic density and shrinkage at different sites and culm heights of Yushania alpina.
"Materials Science"
] |
Freudenthal Gauge Theory
We present a novel gauge field theory, based on the Freudenthal Triple System (FTS), a ternary algebra with mixed symmetry (not completely symmetric) structure constants. The theory, named Freudenthal Gauge Theory (FGT), is invariant under two (off-shell) symmetries: the gauge Lie algebra constructed from the FTS triple product and a novel global non-polynomial symmetry, the so-called Freudenthal duality. Interestingly, a broad class of FGT gauge algebras is provided by the Lie algebras "of type e7" which occur as conformal symmetries of Euclidean Jordan algebras of rank 3, and as U-duality algebras of the corresponding (super)gravity theories in D = 4. We prove a No-Go Theorem, stating the incompatibility of the invariance under Freudenthal duality and the coupling to space-time vector and/or spinor fields, thus forbidding non-trivial supersymmetric extensions of FGT. We also briefly discuss the relation between FTS and the triple systems occurring in BLG-type theories, in particular focusing on superconformal Chern-Simons-matter gauge theories in D = 3.
geometry, to its N > 2 generalization and to the so-called effective black hole potential governing the scalar flows has been discussed in [24].
At any rate, FGT, in its simplest setup presented in this paper, can be regarded as the simplest gauge theory admitting F-duality as global symmetry. Despite the No-Go theorem proved in Sec. 4.2, a slight generalization of the FGT will be presented in a companion paper [33].
Intriguingly, as discussed in Sec. 5, FGT shares the same symmetry structures as the "quaternionic level" of Faulkner's construction [34], which relates triple systems to pairs (g, V) of a metric Lie algebra g and a suitable representation V. Following the treatment of [35,36], an interesting similarity between FGT and the bosonic sector of N = 3, D = 3 superconformal (SC) Chern-Simons-matter (CSM) gauge theories can be envisaged. An important difference lies in supersymmetry, which in FGT, as discussed in Sec. 4, is essentially spoiled by the enforcement of global invariance under F-duality; this also affects other terms in the Lagrangian, e.g. the scalar potential (quartic in FGT, sextic in BLG-type theories).
All in all, we can observe that, with some important differences pointed out along the present investigation, the same symmetry structures are shared (with different implementations and physical meanings) by three (a priori very different) classes of theories, namely : (D = 3) FGT (nonsupersymmetric), D = 4 MESGT (with various amounts of local supersymmetry) and D = 3 SC CSM gauge theory (with N = 3 global supersymmetry). Further details and results will be reported in a companion paper [33]. This paper is organized as follows. We start by recalling the relation between FTS, rank-3 Euclidean Jordan algebras and exceptional Lie algebras (Sec. 2.1); the treatment is then generalized in Sec. 2.2. The axiomatic definition of a FTS and the general symmetry of its structure constants are then discussed in Secs. 2.3 and 2.4. The Freudenthal duality for a generic FTS is introduced in Sec. 2.5, along with a discussion of its basic properties.
The global transformation constructed from the FTS triple product is introduced in Sec. 3.1, and its gauging is discussed in Sec. 3.2. Then, in Sec. 3.3 we propose a bosonic Lagrangian density that exhibits both FTS gauge transformations and (global) F-duality as off-shell symmetries, and we provide a detailed proof of its invariance under such symmetries. The class of FGT gauge Lie algebras of type e 7 is considered in Sec. 3.4, and the intriguing relation between the corresponding FGT and D = 4 MESGT's with U -duality symmetry given by such Lie algebras of type e 7 is discussed in Sec. 3.5. The possible generalization of the simplest FGT Lagrangian introduced in Sec. 3.3 is discussed in Sec. 4, in which the FTS K is coupled to the most general algebraic system, and the mathematical structure required for a consistent definition of F-duality is investigated (Sec. 4.1); a No-Go theorem is proved in Sec. 4.2.
The intriguing similarities (and important differences) between FGT and (the bosonic sector of) N = 3 SC CSM gauge theories in D = 3 are discussed in Sec. 5.
The concluding Sec. 6 contains a summary, along with some remarks and an outlook of further developments.
Three Appendices conclude the paper. Apps. A and B respectively contain details on the F-duality and on the FGT scalar kinetic term, whereas App. C lists the induced axioms needed for the discussion of the generalization of FGT and in the proof of the No-Go theorem of Sec. 4.2.
As mentioned above, further results and a more detailed analysis of some topics mentioned along the paper will be reported in a companion work [33].
2 Freudenthal Triple Systems (FTS's)
2.1 Rank-3 Jordan Algebras and Lie Algebras
The Freudenthal Triple System (FTS) K was first introduced by Freudenthal in his study of exceptional Lie algebras [37,38,39] (see also [40]). In the original construction, K is defined to be the direct sum of two copies of a Jordan Triple System (JTS) J and two copies of the real numbers R (see Footnote 3):

K(J) = R ⊕ R ⊕ J ⊕ J.

Over the vector space K(J), one can introduce a symplectic invariant 2-form, as well as a triple product. The latter is defined via the completely symmetric tri-linear form (also known as the cubic norm) of the JTS J, and it can be re-interpreted as a linear map L_{φ_I φ_J} over K parametrized by a pair of elements φ_I, φ_J ∈ K (cfr. definition (13)).
In Freudenthal's construction of exceptional Lie algebras, the JTS J is restricted to a rank-3 simple Euclidean Jordan algebra J, namely J = R or J = J^A_3 ≡ H_3(A), where H_3(A) stands for the algebra of Hermitian 3 × 3 matrices with entries taking values in one of the four normed division algebras A = R (real numbers), C (complex numbers), H (quaternions), O (octonions) (see e.g. [41]). Then, by introducing in K(J) a suitable submanifold M_J (2), the five exceptional (finite-dimensional) Lie algebras G = g_2, f_4, e_6, e_7, e_8 arise as the direct sum of the algebra Inv(M_J) that keeps M_J invariant, together with a copy of su(2) and two copies (namely, an su(2)-doublet) of K(J) [37,42], schematically

G = Inv(M_J) ⊕ su(2) ⊕ (2 ⊗ K(J)).    (3)

As a vector space, K(J) may be regarded as the representation space of a non-trivial (see Footnote 4) symplectic representation R of the algebra Inv(M_J) itself, introduced in (3):

K(J) ∼ R(Inv(M_J)).    (4)

At least for R irreducible, Inv(M_J) is maximally (and non-symmetrically) embedded into the symplectic algebra sp(K(J)) through the Gaillard-Zumino (GZ) embedding [43] (see also e.g. [75] for a recent review):

sp(K(J)) ⊃ Inv(M_J).    (5)

This can be regarded as a consequence of the following Theorem by Dynkin (Th. 1.5 of [44], more recently discussed e.g. in [45]): every irreducible group of unimodular linear transformations of the N-dimensional complex space (namely, a group of transformations which does not leave invariant a proper subspace of such a space) is maximal either in SL(N) (if the group does not have a bilinear invariant), or in Sp(N) (if it has a skew-symmetric bilinear invariant), or in O(N) (if it has a symmetric bilinear invariant). Exceptions to this rule are listed in Table VII of [45]. For later convenience, we introduce the number f as (cfr. (4))

f ≡ dim_R(K(J)),    (6)

which is even whenever the symplectic 2-form on K(J) is non-degenerate (as we will assume throughout).

From (3) and (5), it thus follows that the invariance subalgebra Inv(M_J) can be equivalently defined as the intersection of two Lie algebras, the symplectic one sp(K(J)) in (5) and the exceptional one G in (3):

Inv(M_J) = sp(K(J)) ∩ G.    (7)

Footnote 3: Namely, the ground field was chosen to be R. Other choices are of course possible (such as Z or C), but we will not deal with them in the present investigation.

Footnote 4: Such a representation is not necessarily the smallest one. A counter-example is provided e.g. by sp(6) = Inv(M_{J^R_3}), whose smallest non-trivial symplectic irrep. is the fundamental 6. However, K(J^R_3) has dimension 14, and it is based on the rank-3 completely antisymmetric irrep. 14′, which exhibits a completely symmetric rank-4 invariant structure. However, a suitable FTS K on the 6 can also be constructed; see point 2 in Sec. 5.
2.2 General Case
Within Freudenthal's formulation, the above construction can be repeated for a generic FTS K, by generalizing (2) to the submanifold M_J (8) and thus introducing its invariance algebra Inv(M_J). It is however worth remarking that, in this general case, neither Inv(M_J) nor the corresponding Lie algebra (9) (the latter generalizing (3) to a generic JTS J), along with their possible non-compact real forms, are necessarily simple. Nonetheless, it still holds that, as a vector space, K(J) may be regarded as the representation space of the relevant symplectic representation R of the invariance subalgebra Inv(M_J) of M_J (8):

K(J) ∼ R(Inv(M_J)).    (10)

Before proceeding to analyze the axiomatic definition of the FTS, we remark that, as mentioned in Footnote 1, in the mathematics literature there are several different notions of FTS, which differ by the symmetry structure of the corresponding triple product (see for instance [22,40,46]). All of these "FTS's" are closely inter-related by simple redefinitions; however, because they exhibit different symmetry properties, some algebraic properties of the FTS are manifest only within a specific formulation.
2.3 Axiomatic Definition
We define an FTS to be a particular Symplectic Triple System [47,48], namely a symplectic vector space K equipped with a (not necessarily completely symmetric) triple product

T : K × K × K → K.    (11)

In the following, for brevity's sake, we will denote T(φ_I, φ_J, φ_K) ≡ φ_I φ_J φ_K. By introducing the symplectic form

⟨·, ·⟩ : K × K → R,    (12)

in an FTS the triple product (11) satisfies four defining axioms (i)-(iv) (rewritten in index form in Sec. 2.4), in which λ is an arbitrary (real) constant. By introducing, for any pair φ_L, φ_M ∈ K, a linear operator L_{φ_L φ_M} ∈ gl(K) acting on φ_K ∈ K as

L_{φ_L φ_M} φ_K ≡ φ_L φ_M φ_K,    (13)

axiom (iii) yields that L_{φ_I φ_J} is a derivation with respect to the FTS triple product T (11).
On the other hand, axiom (i) implies

L_{φ_I φ_J} = L_{φ_J φ_I},    (14)

which justifies the symmetric tensor product of K's in the definition (13) itself. By virtue of the definition (13), one can reformulate axioms (iii) and (iv) as (iii′) and (iv′), respectively. In particular, the reformulation (iv′) of axiom (iv) makes manifest the fact that the symplectic form ⟨·, ·⟩ (12) is invariant under L_{φ_I φ_J}. Thus, L_{φ_I φ_J} is valued in a certain Lie algebra g, which exhibits a symplectic bilinear invariant structure in the relevant representation R to which φ_I belongs. At least when such a representation space is irreducible, through the GZ embedding [43], or equivalently through the abovementioned Dynkin Theorem [44], one has that g is a subalgebra of the symplectic algebra sp(K) (cfr. (15)-(16)). Within Freudenthal's construction, an important class of algebras is given by g = Inv(M_J) introduced above. The Lie algebra g will be identified below as the gauge Lie algebra of the Freudenthal gauge theory.
It is worth remarking here that for λ ≠ 0 axiom (iv) can actually be derived from axioms (i)-(iii). Mathematically, whenever λ ≠ 0 axiom (ii) yields a compatibility condition that constrains the structure of the triple product (11) and the symplectic form (12), and hence the non-trivial algebraic structure of the FTS itself. We anticipate that axiom (iii) can be regarded as the "FTS counterpart" of the so-called "fundamental identity" of Lie-3 algebras (see Sec. 5). On the other hand, for λ = 0 axioms (i)-(iii) reduce to the defining properties of a Lie-3 algebra over Grassmannian numbers, which in general is not an FTS. Hence, in order to restore the algebraic structure of the FTS K, one has to further impose axiom (iv) as a compatibility condition between the (now totally symmetric) triple product (11) and the symplectic form (12).
At any rate, in the present investigation we regard an FTS K as a Symplectic Triple System [47,48] with λ = 0, and we include (iv) (or equivalently (iv ′ )) as part of the defining axioms, so that the most generic situation will be considered.
2.4 FTS Structure Constants and their Invariance
In order to make our treatment more explicit, yet basis-dependent, it is convenient to introduce a basis {e_a} of K, such that φ = φ^a e_a (a = 1, ..., f; f = dim_R(K), cfr. (6)). Thus, one can define the symplectic metric ω_ab and the FTS (triple product) structure constants f_abc^d respectively as

ω_ab ≡ ⟨e_a, e_b⟩,    e_a e_b e_c ≡ f_abc^d e_d.    (17)

As mentioned above, ω_ab is invariant under g (recall (15) and (16)). Furthermore, when ω_ab is non-degenerate (which we will always assume to hold true in this paper), an isomorphism is defined between the vector space K and its dual space, and hence one can lower the last index of the FTS structure constants by means of ω_ab, obtaining the rank-4 tensor f_abcd (18). By virtue of the definitions (17), the defining axioms (i)-(iv) of the FTS K can be rewritten as follows:
(i) f_abcd = f_bacd;
(ii) f_abcd = f_acbd + 2λ ω_ad ω_bc − λ ω_ca ω_bd − λ ω_ab ω_cd;
(iv) f_abcd = f_abdc.
It is worth stressing here that the non-complete symmetry of the FTS triple product T (11) (as yielded by axioms (i) and (ii)) implies the non-complete symmetry of the rank-4 tensor of FTS structure constants f_abcd (18). However, note that axioms (i), (ii), and (iv) imply the structure constants to be symmetric also under the exchange of the first and the last pair of indices,

f_abcd = f_cdab,    (19)

a property which will be important in the construction of a Chern-Simons action for the gauge fields of the "Freudenthal gauge theory" (see next Sections). Summarizing, the general symmetry properties of f_abcd, as implied by axioms (i), (ii) and (iv), are given by

f_abcd = f_((ab),(cd)).    (20)
f_abc^d and f_abcd are rank-4 invariant tensors of the Lie algebra g (15)-(16). Under certain further restrictions (see point 2 in Sec. 5), the symmetry can be extended to sp(K) itself. It is here worth recalling that Kantor gave a complete classification of the finite-dimensional triple systems that can arise in Lie algebras [49] (see also [50]); in particular, Kantor and Skopets showed that there is a one-to-one correspondence between simple Lie algebras and simple FTS's with a non-degenerate bilinear form [51].
2.5 Freudenthal Duality
Whenever the completely symmetric part of f_abcd is non-vanishing, from the definition of the FTS triple product (11) and of the symplectic form (12) one can define a quartic g-invariant structure Δ(φ) for any φ ∈ K (see Footnote 8; cfr. (25c) of [23]; T(φ) ≡ φφφ), as

Δ : K → R,    Δ(φ) ∝ ⟨T(φ), φ⟩,    (21)

with the normalization fixed as in [23,24]. Such a quartic form has appeared in the physics literature e.g. in the formula for the Bekenstein-Hawking [31,32] entropy of spherically symmetric, asymptotically flat, static, extremal black hole solutions of D = 4 supergravity theories whose U-duality Lie algebra is a particular non-compact, real form of Inv(M_J), namely the conformal Lie algebra g = conf(J) of J itself (see e.g. [19] and [53] for a review, and a list of Refs.).
Interestingly, ∆ also occurs in the duality-invariant expression of the cosmological constant of some AdS 4 vacua (and of the corresponding central charge of the dual CFT's) of general N = 2 gauged supergravities underlying flux compactifications of type II theories [76].
The fact that f_(abcd) ≠ 0, which allows for the existence of a (primitive) quartic g-invariant structure Δ(φ), characterizes the pair (g = conf(J), R) as a (non-degenerate) Lie algebra of type e_7, defined axiomatically by the axioms (a)-(c) of [22]: R is a representation space of g such that
(a) R possesses a non-degenerate, skew-symmetric bilinear g-invariant form (cfr. (12) and (17));
(b) R possesses a completely symmetric, rank-4 g-invariant structure q, given by the completely symmetric part f_(abcd) of (18);
(c) defining a ternary product T(x, y, z) on R through q and the symplectic form (22), one then has

3 ⟨T(x, x, y), T(y, y, y)⟩ = ⟨x, y⟩ q(x, y, y, y).    (23)
Note that, from (22) and (23), T(x, y, z) is the completely symmetric part of the triple product T (11) on K ∼ R.
Recently, the role of Lie algebras of type e_7 was investigated in supergravity in some detail (see Sec. 3.5). In Sec. 5, Brown's definition of Lie algebras of type e_7 [22] will be discussed in relation to FTS and Freudenthal gauge theory.

Footnote 8: Even if f_abcd is not (necessarily) completely symmetric in the present framework, we adopt the same normalization of [23] and [24].
From the FTS axioms discussed in Subsecs. 2.3 and 2.4, one can show that Δ(φ) is invariant under the following transformation (in the normalization of [23,24]),

F : φ ↦ φ̃ ≡ T(φ)/√|Δ(φ)|,    (25)

namely that

Δ(φ̃) = Δ(φ).    (26)

The proof can be found in App. A (which generalizes the treatment of [23], in turn referring to [22], to an FTS defined by axioms (i)-(iv); see also [24]). In the physics literature, the map F (25) has been called "Freudenthal duality" (or F-duality for short); it was first observed in [23] as a symmetry of the Bekenstein-Hawking [31,32] entropy-area formula for black holes, and then further generalized in [24].
In the rest of this Subsection, we list some brief remarks; further details will be reported in a forthcoming paper [33].
(I) Anti-Involutivity. The F-duality F (25) is an anti-involution in K [22,23,24]:

F(F(φ)) = −φ.    (27)

This holds whenever φ is an element of M^c_J, the complement in K of the submanifold M_J (recall (8) and (28)). In addition to this, for λ = 0 and for any φ ∈ K, the F-duality map and its image φ̃ (namely, the "F-dual" scalar field) are defined iff Δ(φ) ≠ 0. Whenever Inv(M_J) is non-empty and thus its corresponding action determines a stratification of the symplectic vector space K(J) ∼ R(Inv(M_J)) (cfr. (10)), this can also be equivalently stated as the requirement that φ belongs to the rank-4 orbit of K under the action of Inv(M_J) itself.
(II) Z 4 -Grading. The anti-involutivity (27) of F yields a Z 4 -grading of the symplectic vector space K. This interesting property will be investigated in [33].
(III) F-Duality is not an FTS Derivation. The non-linear map over K provided by F-duality (25) is not a derivation with respect to the triple product (11) over K. Thus, such a mathematical structure cannot be consistently used to define an infinitesimal transformation. This means that the invariance (26) is rather a global symmetry ("duality") of K, and thus a global (off-shell) symmetry of the corresponding gauge theory; see next Sections.
3 Freudenthal Gauge Theory (FGT)
In the present Section, we will introduce the gauge theory based on the FTS discussed in Sec. 2. As anticipated, this theory, whose consistent (bosonic) Lagrangian density is proposed in Subsec. 3.3, will be named "Freudenthal Gauge Theory" (FGT). As will become clear, our construction closely resembles that of the BLG theory [10,11]. However, we present here a detailed analysis, also in order to make several remarks addressing the differences between FGT (and thus FTS) and the triple-system-related gauge theories, especially in D = 3 (see the discussion in Sec. 5).
3.1 From Global Symmetry...
We consider a real scalar field φ(x) valued in a FTS K over R, and we aim at constructing a Lagrangian density functional L [φ(x)] with the desired symmetry.
Clearly, L[φ(x)] must be a K-scalar, and thus all its terms must be of the form of a symplectic pairing α(φ(x)) = ⟨f(φ(x)), g(φ(x))⟩ (cfr. (29)-(30)). At each point x in space-time, f(φ(x)) and g(φ(x)) are elements of the subalgebra K_{φ(x)} ⊂ K generated by the element φ(x) ∈ K (31). More precisely, elements of K_{φ(x)} are homogeneous polynomials of odd degree in φ(x), with the multiplication defined by the non-associative (cfr. axiom (iii)) triple product T (11) over K.
The FTS axiom (iii) (or equivalently (iii′)), along with the definition (13), allows for a consistent definition of an infinitesimal transformation L_Λ ∈ sp(K) (recall (16)), acting as in (32), where the parameters of the transformation are collectively denoted by Λ ∈ K ⊗_s K (33). Note that only elements in the symmetric tensor product K ⊗_s K can generate a transformation L_Λ, because the antisymmetric part K ⊗_a K is projected out by the symmetry property under the exchange of the first two entries of the triple product T (cfr. axiom (i)). Crucially, axiom (iv) (or equivalently (iv′)) states that, for any f(φ), g(φ) ∈ K, the symplectic product ⟨f(φ), g(φ)⟩ (defined in (12) and in (17)) is invariant under L_Λ:

⟨L_Λ f(φ), g(φ)⟩ + ⟨f(φ), L_Λ g(φ)⟩ = 0.    (34)

By the same argument, all K-scalar real functions α(φ) (30) are necessarily of this form, namely α(φ) = ⟨h(φ), l(φ)⟩ (35), for some functions h(φ) and l(φ) of the same kind as f(φ) and g(φ) defined in (31).
Thus, one can conclude that any Lagrangian density functional L of the form (29) is invariant 10 under the infinitesimal transformation (32). In other words, by the four axioms (i)-(iv) of FTS, any Lagrangian L of the form (29) is guaranteed to be invariant under the global symmetry generated by L Λ (32).
It should also be remarked here that the definitions (21) and (25) imply that the F-dual field φ̃(x) is also an element of K_{φ(x)}. Therefore, φ̃(x) transforms in the very same way as φ(x) under the global symmetry L_Λ (32).
As already pointed out above, the invariance (34) of the symplectic product ⟨·, ·⟩ (12) in K under the action of the infinitesimal transformation L_Λ implies that the latter is not simply an element of gl(K), but rather it generally belongs to the Lie algebra g (15)-(16).
3.2 ...to Gauge Symmetry
We will now proceed to gauge the global symmetry introduced in Subsec. 3.1, by promoting the infinitesimal generator Λ (33) to be a function Λ(x) over space-time. Correspondingly, this will identify g (15)-(16) as the gauge algebra.
As done in Subsec. 2.3, by adopting a basis {e_a} for K, one can generally write down the gauge transformation of a K-valued scalar field φ(x) = φ^a(x) e_a in the form (36) (recall (17)), where Λ^ab(x) denotes the rank-2 tensor generating the gauge transformation itself. Note that axiom (i) of the FTS implies that such a tensor is symmetric (cfr. (14)),

Λ^ab(x) = Λ^ba(x),    (37)

which is consistent with (33). When Λ^ab is constant over space-time, one consistently re-obtains the global symmetry considered in Subsec. 3.1. By recalling (16), one can define the linear operator Λ̂ ∈ g as in (38), such that the gauge symmetry transformation (36) of a field φ(x) is nothing but a matrix multiplication by the linear operator Λ̂:

δ_Λ φ(x) = Λ̂(x) φ(x).    (39)

As discussed at the end of Subsec. 3.1, the gauge transformation of the F-dual field φ̃(x) (25) is, by construction, the one given in (40). Next, we introduce a gauge field A_μ^ab(x) (41), which is a 1-form valued in K ⊗_s K. Correspondingly, a g-valued gauge covariant derivative D_μ acting on the scalar field φ^a(x) can be defined as in (42), where Â_μ (43) is the corresponding 1-form linear operator in g.
It is worth remarking that both definitions (38) and (43) can respectively be regarded as images of the rank-2 symmetric tensor Λ^ab(x) (33) of infinitesimal gauge parameters and of the corresponding rank-2 symmetric tensor A_μ^ab(x) (41) of 1-form gauge potentials, under a map (dubbed the "hat" map) defined through the FTS structure constants f_abc^d (17), cfr. (44). The "hat" map (44) allows one to implement the (generally g-valued) infinitesimal gauge transformation L_Λ, defined via the FTS triple product, in terms of standard matrix multiplication (in gl(K)). As such, this map provides an explicit matrix realization of the gauge Lie algebra g of the FGT, by means of an embedding (local in space-time) analogous to the local embedding K_{φ(x)} ⊂ K mentioned below (31). Then, the requirement that D_μ φ(x) transforms under the gauge symmetry L_Λ in the same way as φ(x) does (45) consistently fixes the gauge transformation of A_μ(x), cfr. (46); namely, A_μ(x) transforms as a g-valued 1-form.
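As an illustration of how the "hat" map turns the FTS data into ordinary matrix algebra, a small numerical sketch follows. It is not part of the source: the arrays are random (they do not satisfy the FTS axioms), and the index placement of the structure constants and the sign in the covariant derivative are assumptions of this sketch only.

import numpy as np

rng = np.random.default_rng(0)
f_dim = 4  # toy dimension of K

# toy symplectic metric (antisymmetric, generically invertible in even dimension)
omega = rng.standard_normal((f_dim, f_dim))
omega = omega - omega.T
omega_inv = np.linalg.inv(omega)

# stand-in for the lowered structure constants f_{abcd}; raise the last index with omega
f_low = rng.standard_normal((f_dim,) * 4)
f_up = np.einsum("abce,ed->abcd", f_low, omega_inv)   # f_{abc}^{d} (index convention assumed)

def hat(param):
    # "hat" map: symmetric rank-2 parameter Lam^{ab} -> matrix (Lam-hat)^{d}_{c} = Lam^{ab} f_{abc}^{d}
    return np.einsum("ab,abcd->dc", param, f_up)

Lam = rng.standard_normal((f_dim, f_dim))
Lam = Lam + Lam.T                 # gauge parameter, symmetric as in (37)
phi = rng.standard_normal(f_dim)  # K-valued scalar at a space-time point

delta_phi = hat(Lam) @ phi        # infinitesimal gauge transformation: matrix multiplication by Lam-hat
# for a gauge field A_mu^{ab}(x), the covariant derivative would read d_mu phi - hat(A_mu) @ phi (sign assumed)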
To proceed further, we introduce the gauge field strength 2-form F_μν(x) (47), whose infinitesimal gauge transformation can consistently be computed, cfr. (48). The matrix embedding of L_Λ into g provided by the "hat" map (44) also ensures that the "trace" of the field strength F_μν(x) (47) is g-gauge invariant; in the next Subsection, this fact will be used to work out a bosonic Lagrangian for FGT.
3.3 The Lagrangian
We are now going to propose a consistent bosonic Lagrangian for the FGT. By recalling the definitions (21) and (25) and considering the lowest possible order in the scalar field φ(x), one can introduce a (generally non-polynomial) term (49) which is homogeneous of degree 2 in φ(x). As discussed in Subsec. 3.2, the gauge covariant derivatives of both φ(x) and its F-dual field φ̃(x) transform as vectors under the gauge transformation L_Λ; therefore, a consistent kinetic term for the scalar fields can be built from them, cfr. (50), whose gauge invariance is guaranteed by the FTS axioms (i)-(iv), by (34), and by the very treatment of Subsec. 3.2.
From axiom (iv) (or equivalently (34)) and (49), it follows that, for any sufficiently smooth function V, V(Δ(φ)) is a gauge-invariant real function of φ (cfr. (51); see Footnote 13), which therefore can be taken as a gauge-invariant potential in the bosonic FGT action.

Footnote 13: Actually, by recalling the definitions (30) and (35), a more general gauge-invariant potential term could be considered. However, the invariance also under F-duality F (25), which we do impose in FGT (see further below), further restricts the choice to V(Δ(φ)), as given by (51).
By exploiting the matrix embedding of the g-valued Freudenthal gauge transformations L_Λ (realized by the "hat" map (44)), one can construct a gauge-invariant Maxwell kinetic term for the gauge field A_μ(x).
By introducing the Minkowski metric η_μν and a function N(Δ(φ)) coupling the vector and scalar fields, for D ≥ 4 a kinetic Maxwell term (53) can be constructed. The gauge invariance of (53) results from a simple computation (54), in which (52) has been used for the function N, the field strength gauge transformation property (48) has been recalled, and the cyclicity of the trace has been exploited.
Thus, by merging (50), (51) and (53), a (bosonic) Lagrangian (55) for the "Freudenthal gauge theory" (FGT) can be written down, whose simplest ("minimal") version (56) corresponds to setting V(Δ(φ)) = Δ(φ) (quartic scalar potential) and N(Δ(φ)) = 1. Remarkably, the FGT Lagrangian density functional (56) is not only invariant under the off-shell gauge Lie algebra g introduced in Subsecs. 3.1-3.2, but also under the F-duality F (25), which acts as a global (off-shell) symmetry (see Footnote 14). In order to check this, one should simply recall (26), as well as the anti-involutivity (27) of F (25) itself and the anti-symmetry of the symplectic product used to construct the scalar kinetic term (50). In particular, the F-invariance of the latter can be verified as in (57), where in the second line one does not necessarily have to use the symmetry of the Minkowski space-time metric η_μν, because the scalar kinetic term is symmetric under the exchange of its space-time indices (58), as shown in App. B.
Footnote 14: From point (IV) of Subsec. 2.5, the Freudenthal duality F (25) is not a derivation with respect to the FTS triple product (11) over K, and thus with respect to the FTS-based gauge transformation introduced above.
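For orientation, the Lagrangian described above can be summarized schematically by assembling the scalar kinetic term (50), the potential (51) and the Maxwell term (53). The display below is only a sketch: the relative signs, overall normalizations and the precise form of the covariant derivative are those fixed by the source's equations and are not asserted here.

\mathcal{L}_{\mathrm{FGT}} \;\sim\; \big\langle D_{\mu}\phi\,,\,D^{\mu}\tilde{\phi}\big\rangle \;-\; V\!\big(\Delta(\phi)\big) \;-\; \mathrm{Tr}\!\left(N\!\big(\Delta(\phi)\big)\,\widehat{F}_{\mu\nu}\widehat{F}^{\mu\nu}\right),
\qquad
D_{\mu}\phi \;=\; \partial_{\mu}\phi \;-\; \widehat{A}_{\mu}\,\phi \,.

The minimal version (56) corresponds to V(\Delta(\phi)) = \Delta(\phi) and N(\Delta(\phi)) = 1; in D = 3 the Maxwell term is traded for the Chern-Simons term discussed below.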
It should be remarked here that in the above construction the dimension D of space-time does not necessarily need to be specified. As mentioned, the (φ-coupled) Maxwell kinetic vector term (53) is well defined in D ≥ 4. Moreover, in D = 4 a topological (theta) term (59) can also be introduced, along with its vector-scalar coupling function M(Δ(φ)); its gauge invariance and F-invariance once again follow from (52), (48), (26) and the cyclicity of the trace. Thus, in D = 4, the bosonic Lagrangian density (56) can be completed accordingly, cfr. (60). Even if in the above construction the dimension D of space-time does not necessarily need to be specified, it should be stressed that in D ≥ 4 the FGT is non-unitary whenever the gauge Lie algebra g is non-compact (and thus with a Cartan-Killing metric which is not positive-definite). Indeed, we recall that in the present investigation we consider the FTS to be defined over the ground field R (cfr. Footnote 1); this constrains the pair (g, R) such that R is a real representation space of the real algebra g. The latter, at least in the examples related to conformal symmetries of rank-3 Jordan algebras (treated in Sec. 3.4 and reported in Table 1), is non-compact.
On the other hand, in D = 3 space-time dimensions this does not hold any more, and the noncompactness of the (real) gauge Lie algebra g is not inconsistent with unitarity of the theory. Indeed, R is always assumed to possess a positive-definite inner product (for unitarity of the corresponding gauge theory), but the gauge fields are not propagating (and they are in Adj(g)), and therefore g does not necessarily have to be endowed with a positive-definite product, thus allowing for non-compact (real) forms of g itself. As we discuss in Sec. 5, this is particularly relevant for the connection between D = 3 FGT and (the bosonic sector of) superconformal Chern-Simons-matter gauge theories in D = 3.
Moreover, in D = 3 a Chern-Simons (CS) term (62) for the gauge sector can be considered, with the same form as in the BLG theory (cfr. (45) of [10]); its consistency in FGT follows from the FTS axioms (i) and (iv). The F-invariance of the CS term (62) is trivial (it does not depend on φ at all), while its gauge invariance can be easily proved by exploiting the symmetry property (19) of the FTS structure constants f_abcd. Thus, in D = 3 one can propose the bosonic FGT Lagrangian density (63).
3.4 Gauge Algebras of Type e7
An interesting class of gauge algebras g (15)-(16) for the FGT can be obtained by considering symmetry algebras of the Jordan algebras J themselves. Indeed, a particular non-compact, real form of the decomposition (3) holds, schematically

qconf(J) = conf(J) ⊕ sl(2,R) ⊕ (2 ⊗ K(J)),    (64)

where conf(J) and qconf(J) respectively denote the conformal and quasi-conformal (see Footnote 15) Lie algebras of rank-3 simple Euclidean Jordan algebras J. Note that conf(J) is nothing but a particular non-compact, real form of Inv(M_J); this is also consistent with the fact that conf(J) is nothing but the automorphism Lie algebra of K(J) itself, cfr. (65). Analogously, formulae (4)-(7) also hold at the suitable non-compact real level, by respectively replacing Inv(M_J) and sp(K(J)) with conf(J) and sp(f, R) (see Footnote 16). In particular, (7) can be recast as in (66). The decompositions (3) and (64), as well as the whole treatment above, also hold for rank-3 semi-simple Euclidean Jordan algebras of the type

J = R ⊕ Γ_{m,n},    (67)

where Γ_{m,n} is a rank-2 Jordan algebra with a quadratic form of pseudo-Euclidean signature (m, n), i.e. the Clifford algebra of O(m, n) [77]. However, in this case the corresponding Lie algebra G in (3) (or qconf(J) in (64)) is a classical Lie algebra, namely a (pseudo-)orthogonal algebra. Table 1 lists the entries of (64) for rank-3 Euclidean Jordan algebras, also including the cases J = J^As_3. It is also worth recalling here that the Lie algebra Inv(M_J) (or equivalently conf(J)) is "of type e_7" [22], as recalled in Sec. 2.5, and in the mathematical literature its symplectic (real) representation R is sometimes called the minuscule irrep. (see e.g. [54]).
3.5 FGT and Supergravity
Summarizing, a class of gauge algebras (and representations) for FGT is provided by the conformal Lie algebras conf(J) of the (simple and semi-simple) Euclidean, rank-3 algebras J listed in Table 1, along with their (real) symplectic representation R. The pair (conf(J), R) characterizes conf(J) as a Lie algebra of type e_7 [22].
Indeed, within such a class of theories, the decomposition (64) can be further interpreted as the Cartan decomposition of qconf(J) (the U-duality algebra in D = 3; see Footnote 17) with respect to conf(J) (the U-duality algebra in D = 4). In particular, R(conf(J)) listed in Table 1 is the representation in which the 2-form field strengths of the D = 4 Abelian vector potentials sit, along with their duals. As mentioned above, conf(J) is nothing but Inv(M_J), possibly specified as a suitable non-compact real algebra (see Footnote 18).

Footnote 15: The novel, non-linear geometric quasi-conformal realizations of groups were first discovered by Günaydin, Koepsell and Nicolai in [16], by exploiting the underlying FTS, and showing that they extend to the complex forms and hence to different real forms of the corresponding groups. In the subsequent papers [17] and [18], the quasi-conformal realizations of the D = 3 U-duality groups of Maxwell-Einstein supergravity theories, respectively with 8 and with at least 16 supersymmetries, have been determined. See e.g. [19] for a review and a list of Refs.

Footnote 16: Note that sp(f, R) is the maximally non-compact (split) real form of sp(K(J)).

Footnote 17: Here U-duality is referred to as the "continuous" symmetries of [29]. Their discrete versions are the U-duality non-perturbative string theory symmetries introduced by Hull and Townsend [30].

Table 1: Conformal conf(J) and quasi-conformal qconf(J) Lie algebras associated to rank-3 Euclidean Jordan algebras. The relevant symplectic irrep. R of conf(J) is also reported. In particular, 14′ denotes the rank-3 antisymmetric irrep. of sp(6, R), whereas 32 and 32′ are the two chiral spinor irreps. of so*(12). Note that conf(J^As_3) and qconf(J^As_3) are the maximally non-compact (split) real forms of the corresponding compact Lie algebras. M_{1,2}(O) is the JTS generated by 2 × 1 vectors over O [14,15]. Note the Jordan algebraic isomorphisms Γ_{1,1} ∼ R ⊕ R and Γ_{1,0} ∼ R. The number of spinor supercharges N of the corresponding supergravity theory in D = 4 (cfr. Subsec. 3.5) is also listed.
At least in D = 3, 4, 5, 6, the theories of this class all exhibit (Abelian vector multiplets') scalar manifolds which are symmetric cosets 19 . In particular, the coset Lie generators in D = 4 and D = 3 Lorentzian space-time dimensions are respectively given by conf( J) and qconf( J) modded out by their maximal compact subalgebra (mcs).
The number of spinor supercharges N of the D = 4 supergravity theory is reported in Table 1. In particular, the theories associated to J = J^A_3 ≡ H_3(A) are usually dubbed "magical" MESGT's [14,15], whereas the N = 2, D = 4 theories corresponding to J = R, R ⊕ R and R ⊕ R ⊕ R are the so-called T^3, ST^2 and STU models [57,58]. It should also be remarked that J = J^H_3 is related to both N = 2 and N = 6 theories, which in fact share the very same bosonic sector [14,15,59,60,61].
As discussed in Subsec. 2.1, FTS's K(J) (with J simple) exhibit a close relationship with exceptional Lie algebras, as given by (3). As listed in Table 1, when considering suitable non-compact, real forms, (3) enjoys the reinterpretation (64): in other words, exceptional Lie algebras occur as quasi-conformal Lie algebras of the corresponding simple Jordan algebras J [37,42]. In this respect, it is worth adding that classical (namely, pseudo-orthogonal) Lie algebras also occur as quasi-conformal Lie algebras of rank-3 semi-simple Euclidean Jordan algebras of the type (67) [18]. These facts provide an indication of possible links between FGT and Yang-Mills (exceptional) gauge theories.

Footnote 18: In fact, as a maximal subalgebra of qconf(J), in this framework the Lie algebra Inv(M_J) can be compact (with commuting subalgebra su(2)) or non-compact (with commuting subalgebra sl(2,R)), depending on whether the Kaluza-Klein reduction from D = 4 to D = 3 is performed along a space-like or a time-like direction, respectively; in turn, this mathematically corresponds to performing a c-map [78] or a c*-map (see e.g. [79]) on the D = 4 (vector multiplets') scalar manifold.

Footnote 19: A particular case is given by M_{1,2}(O), which (cfr. the caption of Table 1) is a JTS generated by 2 × 1 vectors over O [14,15]. It is related to supergravity with 20 local supersymmetries, which exists only in D = 4 (N = 5 [55]) and in D = 3 (N = 10; see e.g. [56] and Refs. therein).
At the bosonic level, differences and similarities between the FGT and the class of MESGT's under consideration can be observed by comparing e.g. the D = 3 FGT Lagrangian density (63) with the bosonic sector of the (ungauged) D = 4 MESGT Lagrangian density (cfr. e.g. the treatment in [62], and Refs. therein). Besides the presence of the Einstein-Hilbert term, there are crucial differences: in the FGT the scalar fields φ fit into R(g) and the vectors arise from the gauging of the FTS triple product symmetry algebra g; as a consequence, the derivatives acting on φ are covariantized, as discussed in Secs. 3.2 and 3.3. On the other hand, in the corresponding (D = 4) supergravity framework, the Abelian two-form field strengths fit into R(g = conf(J)), while the scalar fields are in a suitable representation of the maximal compact subalgebra mcs(g). Furthermore, as discussed above, in FGT the gauge algebra g = conf(J) and the corresponding global Freudenthal duality are off-shell symmetries of the theory, whereas in the MESGT's under consideration g = conf(J) is only an on-shell symmetry. It is also worth pointing out that on the gravity side supersymmetry seems to be an accidental feature; indeed, we recall that for J = J^Cs_3 and J^Hs_3, the corresponding theories of gravity coupled to Maxwell and scalar fields are not supersymmetric; a possible supersymmetrization of FGT will be discussed in Sec. 4.
It will be interesting to investigate these relations in future studies; see also the discussion in Sec. 5.
4 Generalization?
In the previous Section, we have constructed a consistent Lagrangian for the Freudenthal gauge theory (FGT), based on the FTS K (J), with K-valued scalar field φ(x), admitting both (off-shell) FTS gauge symmetry and (off-shell) global Freudenthal-duality symmetry F.
The most important kind of generalization would concern an FGT-type Lagrangian involving some vector fields and/or spinor fields, which is again invariant under both the FTS gauge symmetry and the Freudenthal duality symmetry; indeed, this would be a necessary condition for a non-trivial supersymmetric extension of FGT. Moreover, such a generalization is of interest to physicists, since it might potentially define a sigma-model type theory if the space-time considered in this paper is regarded as the world-volume of some extended objects (for instance, M2-branes), with the vector fields correspondingly conceived as the image of the world-volume in some target space.
However, in Subsecs. 4.1-4.2 we shall prove that, within some minimal reasonable assumptions, such a generalization is not possible.
4.1 Coupling to a Vector Space
Let us start the analysis by coupling a generic FTS K to a generic vector space V, over which one can introduce suitable algebraic structures and make it into an algebra; for instance, spinors can be regarded as vectors with an anti-symmetric binary product that yields the Fermi statistics. In this way, our discussion for the formal algebraic system V will cover the most generic space that couples to K.
Thus, we are considering an extended vector space N ≡ K ⊗ V (69) whose elements, denoted by Φ, are tensor products of an element φ ∈ K and an element v ∈ V, i.e. Φ = φ ⊗ v (70).
In order to be able to construct a Lagrangian density functional L[Φ(x)] for the fields Φ(x) ∈ N, obtained by promoting an element Φ ∈ N to an N-valued space-time field Φ(x), one starts by introducing a bilinear form (namely, the metric) ⟨·, ·⟩_N (71), defined for any two Φ_{I,J} = φ_{I,J} ⊗ v_{I,J} in N. Via direct evaluation, (71) induces a metric on V itself,

⟨Φ_I, Φ_J⟩_N = ⟨φ_I, φ_J⟩ × (v_I, v_J)_V,    (72)

where "×" is here multiplication by a scalar (real) factor, and (·, ·)_V (73) is the induced metric over V. Note that the symmetry property of (·, ·)_V (73) is to be determined by the required symmetry property of the metric ⟨·, ·⟩_N (71) over N (by also recalling the anti-symmetry of the symplectic form (12) over K). Furthermore, in order to consistently define the Freudenthal duality F of this extended theory, one needs to introduce a triple product T : N × N × N → N (74), defined for any three elements Φ_I, Φ_J, Φ_K ∈ N, which would then induce a tri-linear triple product [·, ·, ·]_V on V itself (75). In order to proceed further, we make here a plausible conjecture, namely that the Freudenthal duality F can be defined only for algebraic systems satisfying the axioms (i)-(iv) of an FTS, introduced in Subsec. 2.3. As a consequence, we require the metric (71) to be an anti-symmetric bilinear form (and append this as axiom (o)), thus obtaining five axioms (o)-(iv) for the algebra N, in which µ plays the role of the real parameter λ introduced above for the FTS K. Then, by repeating for the algebra N the very same construction discussed in Sec. 3 for the FTS K, one gets the most general Lagrangian density functional L[Φ(x)] invariant under the two desired symmetries, namely under both the (off-shell) FTS gauge symmetry and the (off-shell) global Freudenthal duality symmetry F.
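To make the structure of the induced operations more transparent, one can note the factorized form that the above definitions suggest; the display below is a sketch of this natural ansatz (the precise definitions are eqs. (71)-(75) of the source), not a verbatim reproduction of them:

T_{N}\big(\phi_I\otimes v_I,\ \phi_J\otimes v_J,\ \phi_K\otimes v_K\big) \;=\; T(\phi_I,\phi_J,\phi_K)\otimes\big[v_I,v_J,v_K\big]_{V}\,.

With such a factorization, the axioms (o)-(iv) imposed on N translate into the induced axioms on (·, ·)_V and [·, ·, ·]_V collected in App. C, which are the ones exploited in the No-Go argument of the next Subsection.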
A No-Go Theorem
However, this seemingly smooth construction of an extended FGT coupled to vector and/or spinor fields suffers from some severe constraints, which actually spoil the above generalization.
Indeed, axioms (o)-(iv) of N induce a set of corresponding axioms for the metric (73) and the triple product (75) induced on V (in addition to the ones already introduced for other physical reasons, such as the ones yielded by the Bose and/or Fermi statistics for the fields v_I ∈ V); the reader can find the full set of such axioms for V in App. C.
Among them, axiom (B. iii), induced from the derivation property of N, leads to a particularly strong constraint. In order to realize this, let us restrict to the subalgebra K_φ ⊗ V, where K_φ is the subalgebra in K generated by a single generator φ ∈ K (see also Subsec. 3.1). Then, by taking five elements restricted to this subalgebra and inserting them into axiom (B. iii) of App. C, a simplified (weaker) condition (78) on the algebraic structure of V is achieved, where the simplification comes from the fact that over the subalgebra K_φ, L_{φT(φ)} and L_{T(φ)φ} act as annihilation operators, whose proof can be found in App. A. Moreover, we observe that, as holds for K (cfr. definition (13)), for any two elements v_L, v_M ∈ V one gets a linear operator L_{v_L v_M} (generally gl(V)-valued, whenever it is non-zero), whose action is evaluated by the triple product (75) as in (79). Then, by using definition (79), the weaker form (78) of axiom (B. iii) can be recast as a condition (80) on the matrix commutator in gl(V). Under the assumption that the metric (71) in N is non-degenerate (which we understand throughout; cf. Footnote 22), the condition (80) can be satisfied in only two instances: [I] when dim_R V = 1, which is the case of a single K-valued (real) scalar field discussed in Secs. 2-3; [II] when the set of operators L_{v v′} (v, v′ ∈ V) is a subset of the Cartan subalgebra of gl(V) (cf. Footnote 23, and recall definitions (73) and (79)). The triple product [·, ·, ·]_V (75) defined by (83) satisfies the strong form of axiom (B. iii) and most of the other axioms of App. C. However, at least within the assumption of non-degeneracy of the metric of the algebra N (cfr. Footnote 19), it is refuted by axiom (B. ii) whenever K is larger than a single-generator algebra K_φ.
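Schematically (our paraphrase; the precise form and normalization of (80) are not reproduced here), the obstruction can be written as a mutual-commutativity condition in gl(V),

[L_{v_1 v_2}, L_{v_3 v_4}] = 0   for all v_1, v_2, v_3, v_4 ∈ V,

which is trivially satisfied for dim_R V = 1, and otherwise forces the operators L_{v v′} to lie in an Abelian (Cartan) subalgebra of gl(V), as in case [II].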
This completes the proof of the following
No-Go Theorem
Assuming the metric of the algebraic system N (69) to be non-degenerate and the Freudenthal duality F to be defined only for N satisfying all four FTS axioms introduced in Subsec. 2.3, it is not possible to construct a Lagrangian density functional L[Φ(x)] for a K-valued vector/spinor field Φ(x) which admits both (off-shell) FTS gauge symmetry and (off-shell) global F-duality symmetry F.
FGT and (N = 3, D = 3) SC CSM Gauge Theories
We will now briefly make some observations on the relation between Freudenthal gauge theory (FGT) (based on Freudenthal triple systems (FTS 's)) and the intense research on triple systems and gauge theories, in which remarkable advances were achieved after the seminal papers of Bagger and Lambert [10] and Gustavsson [11]. A more detailed analysis will be reported in [33].
Here, we will focus on the relation to superconformal (SC) Chern-Simons-matter (CSM) gauge theories in D = 3 (in which the R-symmetry structure is richer); we will mainly refer to the mathematical treatment of [35] and [36] (see also [66]); for an extensive list of Refs. on BLG theories and related developments, besides [35,36,66], we refer the reader e.g. to the recent comprehensive review [12].
We anticipate that the symmetry properties (20) of the FTS structure constants on which the FGT is based are generally different from the ones pertaining to the structure constants on which the BLG-type theories (such as the ones investigated e.g. in [67,68], among others) rely. Among SC CSM D = 3 gauge theories, the symmetry (20) is indeed consistent only with N = 3 (see e.g. [36], and Refs. therein). Disregarding the global (off-shell) Freudenthal duality, (D = 3) FGT could be viewed as an alternative, purely bosonic sector of the corresponding N = 3, D = 3 SC CSM gauge theory. In fact, as analyzed in Sec. 3.3, in FGT the non-vanishing of f (abcd) allows for terms in the Lagrangian which differ from the usual ones in BLG theories; for instance, the simplest FGT scalar potential is quartic in the scalar fields (essentially given by ∆ (21); see (57)), whereas in BLG theories it is of order six (see e.g. (19) of [10]).
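To make the last remark concrete, a schematic comparison of the two scalar potentials (our normalizations are arbitrary and index placements only indicative):

V_FGT(φ) ∝ Δ(φ) ∝ f_{(abcd)} φ^a φ^b φ^c φ^d      (quartic; non-vanishing only if f_(abcd) ≠ 0),
V_BLG(X) ∝ f^{abcd} f^{efg}{}_d X^I_a X^J_b X^K_c X_{I e} X_{J f} X_{K g}      (sextic; cf. (19) of [10]),

so the presence of a completely symmetric part f_(abcd) is precisely what allows the FGT potential to stop at quartic order in the scalars.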
We start by observing that the set of axioms (i), (iii) and (iv) defining an FTS (as given in Sec. 2.4) matches the set of axioms (a), (b) and (c) defining the triple systems based on quaternionic unitary representations W of a metric Lie algebra g, as discussed in [35] and [36] (see e.g. App. A.2.4 of [36], and axioms (125)-(127) therein); in particular, the FTS axiom (iii) is nothing but the so-called fundamental identity of the triple system (see e.g. (127) of [36]). In turn, the treatment of [35] and [36] is based on a construction due to Faulkner [34,69], which essentially builds triple systems from pairs (g, V), where V is a suitable representation of g (cf. Footnote 24) [35].
The g-irreducible decomposition of the rank-4 g-invariant structure in W is given by (124) of [36] (also, cfr. Table 2 therein). In tensor notation, a reformulation of (84) reads as in (85), with two real parameters a, b ∈ R (cf. Footnote 25). (85) is consistent with the general symmetry of the FTS structure constants' tensor f_abcd given by (20); furthermore, Freudenthal duality F (25) can be consistently introduced whenever f_(abcd) ≠ 0. It is worth remarking that Brown's definition of a Lie algebra (g, R) of type e_7 [22] (cfr. (a)-(c) in Sec. 2.5) can be extended to include also the not completely symmetric part ω_{a(c} ω_{d)b} of (85) as follows: R is a representation space of g such that (a′) R possesses a non-degenerate, skew-symmetric bilinear g-invariant form ω (cfr. (12) and (17)); (b′) R possesses a rank-4 g-invariant structure f_abcd (85), which allows to define (c′): defining a ternary product T(x, y, z) on R through ⟨T(x, y, z), w⟩ ≡ q(x, y, z, w), one has 3 ⟨T(x, x, y), T(y, y, y)⟩ = ⟨x, y⟩ q(x, y, y, y).
By enhancing f_abcd = f_(abcd) to a not completely symmetric f_abcd given by (85), one can conclude that, by virtue of (a′), the real parameters a and b can always be chosen such that the inclusion of ω_{a(c} ω_{d)b} in Brown's definition [22] yields nothing but an equivalent definition of a Lie algebra of type e_7; however, as pointed out below, the presence or absence of the term ω_{a(c} ω_{d)b} matters in order to make contact with FTS's.
Note that the λ-dependent FTS-defining axiom (ii) has not been mentioned so far. However, at least for the class of pairs (g, R) = (conf(J), R) reported in Table 1, the parameters a and b can be fixed consistently with axiom (ii), by further elaborating (85) into (89). For pairs (g, R) = (conf(J), R) with g simple, both (89) and the parameter λ acquire a very simple group-theoretical meaning. Indeed, exploiting the results of [70], (89) can be rewritten as in (90), where t^α_{ab} = t^α_{(ab)} is the (g-invariant) realization of the generators of g in R; the indices α and a respectively are in Adj and R of g, whose Cartan-Killing metric is g_αβ. Therefore, f_abcd can be defined as the adjoint-trace of the product of two realizations of generators of g in its representation R. Moreover, the parameter τ defined in (91) [70] expresses the ratio between the sets of indices α and ab = (ab) of t^α_{ab} (in the treatment above, we set dim_R R(g) ≡ f; cfr. (6)). By virtue of the Gaillard-Zumino embedding (5) [43] (or, equivalently, of the aforementioned Theorem by Dynkin [44,45]), τ expresses the fraction of generators of sp(f, R) which generate its maximal (generally non-symmetric) sub-algebra g. By a suitable generalization of the analysis of [80], explicitly worked out in [68], the choice of f_abcd given by (90) can be made also for the pairs (g, R) = (conf(J), R) with g semi-simple. However, in these cases the last step of (90) does not hold: in fact, the explicit expression of t^α_{ab} t_{α|cd} for these cases has been computed in [68], and it is such that [67] g_αβ t^α_{(ab} t^β_{c)d} = 0.
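In other words (this is our schematic reconstruction of (90)-(91), with the overall normalization left unspecified), for simple g one may write

f_{abcd} = g_{αβ} t^α_{ab} t^β_{cd} ,      τ ≡ dim_R Adj(g) / dim_R S²R = 2 dim_R g / [f (f + 1)] ,

i.e., f_abcd is the adjoint-trace of two symmetric generators, and τ is the fraction of the generators of sp(f, R) (which span S²R) that belong to the sub-algebra g ⊂ sp(f, R); for g = sp(f, R) itself this consistently gives τ = 1.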
Thus, the FTS (the triple system on which the FGT is based) turns out to be related to the quaternionic level of Faulkner's construction [34] of triple systems from pairs (g, V), which has been recently re-analyzed by [35,36,66] within D = 3 SC CSM gauge theories.
An important difference with the latter framework is the fact that, in the treatment of the present paper, the FTS is defined over the ground field R (recall Footnote 1); this constrains the pair (g, V) = (g, K) such that V is a real representation space of the (non-compact) real algebra g; some examples, related to conformal symmetries of JTS's J, are reported in Table 1. As mentioned in Sec. 3.3, we point out that this is not inconsistent with the physical constraint on matter representations in D = 3 SC CSM gauge theories; indeed, V = W is always assumed to possess a positive-definite inner product (for unitarity of the corresponding gauge theory), but CS gauge fields are not propagating (and they are in Adj(g)), and therefore g does not necessarily have to be endowed with a positive-definite product, thus allowing for non-compact (real) forms of g.
The expression (85) of the FTS structure constants' tensor f_abcd (or, equivalently, of the rank-4 g-invariant structure in W in the (g, V = W)-based Faulkner construction of triple systems [34]) entails two "extremal" cases: 1. The case in which f_abcd is completely symmetric (and therefore Freudenthal duality F (25) can be consistently introduced). This corresponds to b = 0 and (up to redefinition) a = 1 in (85), yielding (94), which characterizes Brown's definition [22] of (g, W) as a Lie algebra of type e_7 (cfr. axiom (b) in Sec. 2.5). The corresponding triple system has been called a quaternionic Lie triple system (qLTS) in [36]. However, this triple system is not relevant for application to (BLG-type) gauge theories. Indeed, for positive-definite W (as assumed for unitarity of the corresponding gauge theory), f_abcd is nothing but the Riemann tensor of a symmetric hyper-Kähler manifold, which is Ricci-flat; however, any homogeneous Ricci-flat Riemannian manifold is actually Riemann-flat [81,82]. Thus, a positive-definite W in a qLTS (94) is necessarily the trivial representation (cfr. Corollary 6 in [36]). Remarkably, this result has a consistent interpretation in the FTS framework. Indeed, it can be checked that (94), when plugged into the FTS axiom (iii) (fundamental identity) and contracted with x^a x^b y^c y^e y^f y^g, does not yield the axiom (c) which defines a Lie algebra of type e_7 [22]. In other words, (g, W) of type e_7 [22] is not consistent with the FTS introduced in Secs. 2.4-2.5; in particular, the fundamental identity (iii) is not consistent with axiom (c) of Lie algebras of type e_7 [22]. As a consequence, the limit of the defining axioms (i)-(iv) in which f_abcd is taken to be completely symmetric (94) is ill-defined; a non-trivial λ → 0 limit in (i)-(iv) can still be implemented, but it yields an FTS which does not fulfill the symmetry condition (94) [33].
2. The case in which f_abcd lacks its completely symmetric part. This corresponds to a = 0 and (up to redefinition) b = 1 in (85), yielding (95). In this case the Freudenthal duality F (25) cannot be consistently introduced. The corresponding triple system has been called an anti-Lie triple system (aLTS) in [36]; it characterizes N = 4 and N = 5 SC CSM gauge theories in D = 3, as thoroughly analyzed in [36] (see also Table 6 therein), by elaborating on previous literature (see Refs. therein). A prototypical case (treated in Example 1 of [40]) is provided by a consistent limit of (89) (cf. Footnote 26, and recall (6)), given by g = sp(f, R) and W = f (fundamental irrep.). Since the completely symmetric rank-4 product of the fundamental irrep. is irreducible in sp(f, R) and contains no singlets, it follows that f_(abcd) = 0. On the other hand, since Adj(sp(f, R)) = S²f ≡ (f × f)_s, the definition (91) also yields τ = 1, and therefore (95) is recovered from (89). It is worth remarking that in this case the resulting FTS is not endowed with a manifestly JTS-covariant structure (1) as in the original Freudenthal formulation [37,38,39]; the corresponding (super)gravity theory in D = 4 can have at most N = 1 local supersymmetry (cf. Footnote 27), and has a (non-special) Kähler scalar coset with algebra sp(f, R) ⊖ u(f/2) (upper Siegel half-plane).
The general triple system under consideration, which interpolates between qLTS (94) and aLTS (95), is endowed with an f abcd given by (85) with both a and b non-vanishing. As anticipated, among SC CSM gauge theories in D = 3, this is consistent only with N = 3 (see e.g. [36], and Refs. therein), which is thus the only amount of (global) supersymmetry for which Freudenthal duality F (25) could a priori be implemented, even if its enforcement as a global (off-shell) symmetry is in contrast with supersymmetry itself, as implied by the No-Go theorem proved in Sec. 4.2.
It is worth observing that this general case is also consistent with the "extension" of the definition of Lie algebras of type e_7 (based on axioms (a′)-(c′) above); indeed, up to some redefinitions, the real parameters a and b can always be chosen such that (85), when plugged into the FTS axiom (iii) and contracted with x^a x^b y^c y^e y^f y^g, does yield the axiom (c′) introduced above; the term ω_{a(c} ω_{d)b} plays a key role in this result.
The above treatment hints at the existence of a class of N = 3, D = 3 SC CSM gauge theories in which the gauge Lie algebra and its matter representation are given by (97), namely, respectively, by the conformal symmetries of rank-3, Euclidean Jordan algebras, and by their relevant symplectic irreps. R, as reported in Table 1. In this respect, by recalling Sec. 3.5, N = 3, D = 3 SC CSM gauge theories based on (97) share the same symmetry (with different physical meanings) as two other distinct classes of theories: • D = 4 Maxwell-Einstein (super)gravity theories (ME(S)GT) (with various amounts N of local supersymmetry) having symmetric scalar manifolds, as discussed in Sec. 3.5 (and reported in Table 1); • D = 3 Freudenthal gauge theories (FGT's) based on an FTS K ∼ R(conf(J)). The question of consistency with (global) supersymmetry marks an important difference between FGT and N = 3 SC CSM gauge theories. Indeed, the No-Go Theorem proved in Sec. 4.2 essentially states that global (off-shell) Freudenthal duality is not consistent with a non-trivial coupling to space-time vector/spinor fields, which in turn is a necessary condition for supersymmetry.
These relations among N = 3, D = 3 SC CSM gauge theories, D = 4 ME(S)GT's and FGT's can actually be extended to the general case in which the pair (g, V = W) defines a generic FTS (based on axioms (i)-(iv)) corresponding, in the sense outlined above, to the "quaternionic level" of Faulkner's construction [34,69,35,36,66].
We plan to investigate this interesting interplay of symmetries in future work [33] (also in view of possible AdS/CFT applications). In particular, as anticipated above, when disregarding the global (off-shell) Freudenthal duality, it would be interesting to consider the consistency of (D = 3) FGT as an alternative, purely bosonic sector of the corresponding N = 3, D = 3 SC CSM gauge theory. In fact, as analyzed in Sec. 3.3, in FGT the non-vanishing of f (abcd) allows for terms in the Lagrangian which differ from the usual ones in BLG theories; for instance, the simplest FGT scalar potential is quartic in the scalar fields (essentially given by ∆ (21); see (57)), whereas in BLG theories it is of order six (see e.g. (19) of [10]).
Concluding Remarks
In this paper, we have introduced the Freudenthal Gauge Theory (FGT), a gauge theory invariant under two off-shell symmetries: a local, gauge symmetry constructed from a Freudenthal Triple System (FTS ) K, and a global symmetry based on the so-called Freudenthal Duality (F-duality) F.
We have presented the most general bosonic action invariant under these two symmetries, containing a single K-valued scalar field φ(x) and a gauge field A^{ab}_µ(x) ∈ K ⊗_s K. The algebraic structure of the FTS ensures that the FGT is well defined and has the required properties.
One of the building blocks of FGT is the F-duality F, which is a non-linear anti-involutive duality (F² = −Id) giving, up to a sign, a one-to-one pairing of elements in K.
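Explicitly (schematically, with the normalization conventions of (21) and (25) left unspecified here), the F-dual of φ ∈ K can be written in terms of the FTS triple product and the quartic polynomial as

φ̃ ≡ F(φ) ∝ T(φ, φ, φ) / √Δ(φ) ,      with   Δ(φ̃) = Δ(φ)   and   F(F(φ)) = −φ ,

so that F is well defined wherever Δ(φ) > 0 (consistently with the absence of absolute-value signs noted in App. B).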
In Sec. 4, we have also analyzed the possibility of generalizing the simple setup presented in Sec. 3 by coupling to space-time vector and/or spinor fields, which is a necessary condition for supersymmetry and is usually a relatively simple step in the construction of gauge theories. Within the assumption (cf. Footnote 28) that Freudenthal duality F can be defined only for algebraic systems satisfying the FTS axioms (i)-(iv) (see Subsec. 2.3), we have proved a No-Go theorem (which holds true if the metric of the system is non-degenerate), which essentially forbids the coupling to space-time vector and/or spinor fields.
However, we point out that such a coupling is possible at least if one relaxes the requirement of invariance under F-duality. Despite the fact that in our treatment there is, a priori, no restriction on the space-time dimension D, non-compact gauge Lie algebras g generally yield non-unitary theories in D ⩾ 4 (cfr. the remark below (61)). However, in D = 3 this is no longer a problem, and the resulting (non-Freudenthal-invariant) FGT can contain both bosonic and fermionic degrees of freedom together with the Chern-Simons term.
In D = 3, some intriguing similarities (and important differences) between FGT and superconformal (SC) Chern-Simons-matter (CSM) gauge theories with N = 3 global supersymmetry have been discussed in Sec. 5. Indeed, among SC CSM gauge theories in D = 3, a generic FTS is only consistent for N = 3 (see e.g. [36], and Refs. therein), which is thus the only amount of (global) supersymmetry for which Freudenthal duality F (25) could a priori be implemented, even if its enforcement as a global (off-shell) symmetry is in contrast with supersymmetry itself, as implied by the No-Go theorem proved in Sec. 4.2.
It is worth recalling here that our treatment hints at the existence of a class of N = 3, D = 3 SC CSM gauge theories in which the gauge Lie algebra and matter representation are given by (97), namely by the conformal algebras g = conf(J) of rank-3, Euclidean Jordan algebras, and by their relevant symplectic irreps. R, as reported in Table 1. In this respect, such N = 3, D = 3 SC CSM gauge theories share the same symmetry (with different physical meanings) as two other distinct classes of theories: I] D = 4 Maxwell-Einstein (super)gravity theories (ME(S)GT) (with various amounts N of local supersymmetry) with symmetric scalar manifolds, as discussed in Sec. 3.5 (and reported in Table 1); II] D = 3 FGT's based on an FTS K ∼ R(conf(J)).
These relations among N = 3, D = 3 SC CSM gauge theories, D = 4 ME(S)GT's and D = 3 FGT's can actually be extended to the general case in which the pair (g, V = W) defines a generic FTS (based on axioms (i)-(iv)) corresponding, as discussed in Sec. 5, to the "quaternionic level" of Faulkner's construction [34,69,35,36,66].
We plan to investigate this interesting interplay of symmetries in future work [33] (also in view of possible AdS/CFT applications). In particular, when disregarding the global (off-shell) Freudenthal duality, it will be interesting to consider the consistency of D = 3 FGT as an alternative, purely bosonic sector of the corresponding N = 3, D = 3 SC CSM gauge theory. In fact, as analyzed in Sec. 3.3, in FGT the non-vanishing of f (abcd) allows for terms in the Lagrangian which differ from the usual ones in BLG theories; for instance, the simplest FGT scalar potential is quartic in the scalar fields (essentially given by ∆ (21); see (57)), whereas in BLG theories it is of order six (see e.g. (19) of [10]).
The close relation between the particular class K(J) of FTS's and exceptional Lie algebras g (discussed in Secs. 2.1 and 3.4) could also be used to investigate the possible relation (duality?) between FGT and Yang-Mills gauge theory with exceptional gauge Lie algebra g. This is certainly possible, but one should recall that exceptional Lie groups are not naturally realized as standard matrix groups, and thus the resulting Yang-Mills theory would not have the standard Maxwell term constructed from a trace over matrices. Geometrically, a better way to understand this model is by noting that the exceptional Lie groups can be realized as matrix groups over the octonions O [71]; thus, the K(J)-based FGT would be dual to a standard Yang-Mills theory over O (cf. Footnote 29).
The present investigation proved the quartic polynomial ∆ (21) to be invariant not only under Freudenthal duality F (25), but also under the (global or gauged) transformation based on the FTS triple product (11). It will be interesting to investigate the physical meaning of such an invariance of ∆ e.g. within black hole physics [23] and flux compactifications [76], in which ∆ occurs in relation respectively to the Bekenstein-Hawking [31,32] black hole entropy and to the cosmological constant. Interesting recent advances on Freudenthal duality [84,65] might also lead to further developments in understanding FGT.
Finally, we would like to point out that the FTS has another intriguing geometrical interpretation in terms of the so-called metasymplectic geometry, introduced decades ago by Freudenthal [37,85]. In such a geometric framework, two points can define, instead of a line passing through them as in standard geometry, two more relations, called interwoven and hinged. Furthermore, to each set of points there corresponds a set of dual geometrical objects called symplecta, satisfying relations which are dual to the aforementioned three ones among the points. In this unconventional geometrical setup, the FTS axioms acquire a natural geometrical interpretation, and the relation to the exceptional Lie algebras becomes more transparent. We leave the possible physical interpretation of such a fascinating geometry within FGT for future investigation.
Since this equation is true for any element φ_K ∈ K, it is true as an operator equation for L_{φ_I φ_J}. Setting I = J = L = M, we find an expression in which the FTS axiom (i) of Subsec. 2.3 has been used. Since the commutator of an operator with itself must vanish, the above expression must be equal to zero. This means, again by the derivation property of L, that both L_{T(φ)φ} and L_{φT(φ)} act like annihilation operators on any element φ_K ∈ K.
B Space-Time Symmetry of Scalar Kinetic Term
In order to prove the symmetry (59) of the FGT kinetic scalar term under the exchange of its space-time indices, one needs to re-write it only in terms of the K-valued scalar field φ(x), by recalling the definitions (21) and (25) of the quartic polynomial ∆(φ) and of the F-dual field φ̃(x). One starts by computing the FTS gauge covariant derivative of φ̃(x); as an aside, notice that the ∆(φ) appearing in the denominator of its last term does not have absolute-value signs attached to it. Plugging this expression into the kinetic term (prior to contraction with η_µν) yields its explicit re-writing (106) only in terms of φ(x). While the second and third terms of (106) are manifestly symmetric under µ ↔ ν, the symmetry of the first term can be proved by a direct manipulation, thus implying the result (59).
C Axioms of V
As discussed in Subsec. 4.2, we report here the five axioms induced on V by the five axioms (o)-(iv) of the algebra N (in addition to the ones already introduced on V for other physical reasons, such as the ones required by the Bose and/or Fermi statistics for the fields v_I ∈ V). In particular, in the proof of the No-Go Theorem in Subsec. 4.2, a crucial role is played by axioms (B. iii) and (B. ii).
"Physics"
] |
PrgE: an OB-fold protein from plasmid pCF10 with striking differences to prototypical bacterial SSBs
Enterococcal PrgE, from the conjugative plasmid pCF10, is a non-typical SSB that not only binds ssDNA in a filamentous manner but also binds dsDNA equally well as ssDNA.
Introduction
Horizontal gene transfer is an important way for bacteria to spread genetic information between populations, for example, for the propagation of antibiotic resistance or virulence genes (Von Wintersdorff et al, 2016). Conjugation is one type of horizontal gene transfer, which allows for the transfer of plasmids from donor to recipient cells via type IV secretion systems (T4SSs) (Waksman, 2019). These systems are increasingly well understood in Gram-negative bacteria, where recent cryo-EM structures provide an understanding of the mating channel at a molecular level (Macé et al, 2022;Costa et al, 2024). In contrast, our current understanding of Gram-positive T4SSs is much more limited as such detailed information is not available (Grohmann et al, 2018).
One of the best studied Gram-positive T4SSs is from the conjugative plasmid pCF10 (Hirt et al, 2005;Dunny & Berntsson, 2016). This plasmid is a clinical isolate from Enterococcus faecalis, a commensal pathogen that often causes hospital-acquired infections and is frequently multiresistant to antibiotics (Palmer et al, 2010;Gilmore et al, 2013;Mikalsen et al, 2015;Weiner-Lastinger et al, 2020). pCF10 is a pheromone-inducible plasmid with a complex regulation (Kohler et al, 2019;Lassinantti et al, 2021). All T4SS proteins on pCF10 are encoded on a single operon, controlled by the P_Q promoter. This operon thus contains the genes that code for (i) some of the regulatory proteins, (ii) the adhesin proteins that facilitate mating pair formation, (iii) the proteins that form the mating channel, and (iv) the DNA transfer and replication (Dtr) proteins, including ATPases and relaxosome proteins (Fig 1) (Dunny, 2013;Grohmann et al, 2018). The relaxosome is made up of an accessory factor PcfF and the relaxase PcfG, which nicks and binds covalently to the origin of transfer and gets transferred together with the single-stranded plasmid DNA into the recipient cell (Guzmán-Herrador & Llosa, 2019;Rehman et al, 2019).
Many conjugative plasmids encode additional proteins that are not directly involved in conjugation, but have various functions that confer competitive advantages to the plasmid (Cooke & Herman, 2023). PrgE is a small soluble protein that is encoded roughly one-third into the P_Q operon, in between genes encoding for the mating channel (Fig 1). PrgE has not been previously characterized, and its role in type IV secretion is therefore unknown, but it has been suggested that PrgE is a single-stranded DNA-binding protein (SSB), based on its sequence homology of 37% to an SSB in a lactococcal phage (Desiere et al, 2001;Hirt et al, 2005).
SSBs are involved in all molecular mechanisms that require manipulation of single-stranded (ss) DNA, such as DNA replication, recombination, and repair, and can be found in all kingdoms of life (Marceau, 2012). Generally, SSBs share a structural motif, the oligosaccharide/oligonucleotide-binding (OB) fold. The motif consists of a five-stranded beta-barrel followed by a single alpha-helix. However, there is a lot of variability in the loops between the beta-strands, the length of OB domains can range from 70 to 150 amino acids, and they often have a low primary sequence identity of 5-25% (Theobald et al, 2003;Mishra & Levy, 2015). Although the topology of the OB-fold is well conserved, the quaternary organization of SSBs varies between the different kingdoms of life. The Escherichia coli SSB, which is the prototype for bacterial SSBs, forms a homotetramer with two distinct DNA-binding modes, depending on salt and protein concentrations. In the first binding mode, E. coli SSB interacts with ssDNA with only two of its subunits, whereas the ssDNA wraps around the full tetramer in the second DNA-binding mode (Lohman & Ferrari, 1994;Raghunathan et al, 2000;Shereda et al, 2008). In eukaryotes, the prototypical SSB is replication protein A (RPA). RPA forms a heterotrimer consisting of RPA70, RPA32, and RPA14, with each subunit containing at least one OB-fold (Liu & Huang, 2016;Nasheuer et al, 2024). When it comes to archaea, some phyla have SSBs that resemble bacterial SSBs, whereas others are more similar to eukaryotic RPA (Taib et al, 2021). There are not only viruses that rely exclusively on host SSBs, but also those that encode their own proteins, with a large diversity of characteristics, some of which act as monomers (Shokri et al, 2009;Oliveira & Ciesielski, 2021). However, there is also variation within the kingdoms, as many bacterial and eukaryotic species have more than one type of OB-fold protein, which can vary significantly from their respective prototypes (Richard et al, 2008;Flynn & Zou, 2010;Yadav et al, 2012;Oliveira & Ciesielski, 2021).
In addition to chromosomal SSBs, many prokaryotes carry conjugative plasmids that encode SSBs (Golub & Low, 1985;Ruvolo et al, 1991). These are believed to contribute to plasmid maintenance, and are thought to be important for protecting ssDNA during conjugation (Ruvolo et al, 1991;Jones et al, 1992;Couturier et al, 2023). Many plasmid SSBs can complement deficiencies in genomic SSBs (Golub & Low, 1985). Recently, it was shown that the F plasmid-encoded T4SS can translocate plasmid SSB into recipient cells where they function to suppress the mating-induced SOS response (Al Mamun et al, 2021). However, it is not known whether SSBs encoded on conjugative plasmids from Gram-positives are functionally analogous.
In this study, we show that PrgE plays no essential role in conjugation, but that it has very unusual DNA-binding properties. Crystal structures of apo and DNA-bound PrgE show that PrgE has the characteristic OB-fold of SSBs, but that it binds ssDNA in a filamentous way, which is further supported by in vitro experiments. We also present data that show that PrgE unexpectedly binds both ssDNA and dsDNA equally well.

Searching for E. faecalis proteins in the AlphaFold database (AFDB50) only resulted in uncharacterized proteins or proteins with low sequence identity to PrgE. This suggests that PrgE differs from previously studied SSBs.
PrgE has an OB-fold
PrgE was produced in E. coli and purified to homogeneity. We solved the crystal structure of apo PrgE to 2.7 Å, using the AlphaFold2 model of PrgE as a template for molecular replacement. The asymmetric unit contained two copies of the protein in the space group P2₁2₁2₁. Both copies were modeled from residues 1-130, with residues 34 and 35 missing in loop 1 of chain A (Fig S2). For both chains, the remaining C-terminal part (residues 131-144) is missing in the density. PISA analysis shows that this dimer has an interface area of 680 Å², with nine H-bonds and three salt bridges. The overarching fold of the protein corresponds to an oligosaccharide/oligonucleotide-binding (OB) fold, characterized by five beta-strands that form a beta-barrel with a 1-2-3-5-4-1 topology, which is only partially closed between strands 3 and 5 (Fig 2A). PrgE also has a 42-residue-long region between strands 3 and 4 that forms two alpha-helices, of which the first seemingly contributes to the opening in the barrel between strands 3 and 5. The apo structure overall aligns very well with the predicted AlphaFold2 model of PrgE, having an RMSD of 0.48 Å over 113 residues.
We used DALI (Holm, 2020) and Foldseek (Van Kempen et al, 2024) to search the PDB for the closest structural homolog to PrgE.
PrgE binds ssDNA in a filamentous manner
We also crystallized PrgE together with a single-stranded poly-A 60-mer DNA in a molar ratio of 1:3. The obtained crystallographic data were refined in the space group P2₁2₁2₁ with the asymmetric unit containing three copies of the protein sitting on a string of 15 ssDNA bases. Although there are only 15 bases in the asymmetric unit, the ssDNA shows a continuous density throughout the crystal packing (Fig S3A). Compared with the apo structure of PrgE, a few more residues are visible at the C-terminal end (up to residue 136 of 144), continuing as an alpha-helix as predicted by the AlphaFold2 model. The DNA does not get wrapped around PrgE, like it does with E. coli SSB (Raghunathan et al, 2000); rather, PrgE interacts with the DNA like beads on a string, with the N-terminal tail of one PrgE binding to the neighboring PrgE, using interactions between polar side chains (Fig 3A). PISA analysis shows that the interaction areas between the PrgE subunits in the DNA-bound structure are between 600 and 800 Å².
PrgE binds to the ssDNA between loops 1 and 4, where the beta-barrel is partially open. Each subunit binds to five DNA bases. The binding also bends the ssDNA between the protein-binding sites, resulting in a kink at every fifth base. The kinks between subunits C′-A and A-B form the same angle. However, the N-terminal tail of chain B bends at a smaller angle, and the kink in the DNA chain between subunits B and C is therefore also slightly less pronounced (Fig S3B).
The different PrgE subunits bind to the ssDNA in a similar, but not identical, manner. Many interactions with the phosphate backbone of the ssDNA are the same within all subunits, including with residues Ser33, Gln34, and Asn37 in loop 1 that form H-bonds with the DNA backbone at the fourth and fifth phosphate of each stretch of five bases (Fig 3B-D). Additional phosphate binding can be found with Lys111 and Tyr110 in loop 4 in chains A and C, but not B. Interestingly, this loop interacts with the phosphate of the second base of the DNA-binding cassette that is primarily bound by the neighboring copy of PrgE.
In addition to hydrogen bonding with the phosphate backbone, pi-pi interactions between the aromatic rings of the DNA and two tyrosine residues are of major importance for DNA binding. Tyr110 stacks on the fifth DNA base in the binding cassette in all subunits. In contrast, the orientation of Tyr62 varies. For chains A and B, Tyr62 points inward toward the bases, whereas it is oriented toward the DNA backbone for chain C. Accordingly, the exact orientation of the first DNA base varies between the binding cassettes. In the third binding cassette in the asymmetric unit, base 11 stacks on top of the following four bases and forms two H-bonds with PrgE chain C (Asn120 and Asn66). In the other two cassettes (bound to chains A and B), this base is tilted away and only forms one H-bond with Asn120. Other than these interactions, hydrogen bonding with the DNA bases seems to be less important, consistent with the lack of sequence specificity in DNA binding. In our structure, only Gln108 of chain B interacts with adenine 9, with the other copies of Gln108 being close to the DNA but not within hydrogen-bonding distance. In conclusion, PrgE binds to ssDNA with a high degree of plasticity.
PrgE quaternary structure resembles viral SSBs
The overall quaternary structure of PrgE binding to ssDNA is different from that of bacterial or eukaryotic SSBs, where ssDNA commonly wraps around a homotetramer in bacterial SSBs (Fig 4A) and eukaryotic RPA binds DNA as a heterotrimer (Fig 4B). Instead, it appears more similar to that of viral SSBs, which have monomers as a functional unit in DNA binding (Fig 4C). Each PrgE monomer binds fewer DNA bases (5), which are more neatly stacked on top of each other, compared with other SSBs that have a larger interaction area (Fig 4D-F). The exact DNA-binding mechanisms share some similarities in that stacking interactions with aromatic residues play an important role. However, in PrgE, the responsible residues are tyrosines, whereas they are phenylalanines and tryptophans for E. coli SSB and RPA, and the viral SSB uses both tyrosines and phenylalanines.

Based on the DNA-bound crystal structure, we hypothesized that the N-terminal tail of PrgE could play an important role in oligomerization. We therefore created a deletion variant where we removed the 12 first residues of PrgE (ΔN-PrgE). This variant eluted significantly later on SEC than the WT protein; however, we still observed differences in elution volume in different salt concentrations (Fig 6A). To explore these differences in more detail, we performed SEC-MALS in 300 mM NaCl, which resulted in a molecular weight of 16.5 ± 0.6 kD, which is close to the theoretical molecular weight of a ΔN-PrgE monomer (15.5 kD) (Fig 6B). In addition, we performed SEC-MALS in 50 mM NaCl, where ΔN-PrgE was found to form a dimer (molecular weight of 33.1 ± 4.7 kD) (Fig 6C). These results show that the N-terminal tail of PrgE is a major contributor to oligomerization.
PrgE binds ssDNA and dsDNA with comparable affinities
Given the suggested function of PrgE as an SSB, we performed DNA-binding experiments with both WT and ΔN-PrgE. Binding affinities were compared for random single-stranded (ss) and double-stranded (ds) DNA molecules (Table S2), by determining the dissociation constant (K_d) by fluorescence anisotropy (Table 1 and Figs 7 and S4). Surprisingly, PrgE bound ssDNA and dsDNA with similar affinities, with a K_d of 0.3 μM for 60-mer ssDNA and 0.5 μM for 60-mer dsDNA in 50 mM NaCl (Fig 7A and B). ΔN-PrgE also bound ssDNA and dsDNA equally well, but it showed a roughly one order of magnitude lower affinity than WT, with 4.5 μM for 60-mer ssDNA and 5.6 μM for 60-mer dsDNA (Table 1 and Fig 7A and B). Notably, WT PrgE bound with higher affinity to the longer DNA substrate, whereas ΔN-PrgE did not show this difference (compare Fig 7A and B with Fig 7C and D). For WT PrgE, we also tested binding in 100 mM NaCl, where the same binding patterns were observed as in lower salt, albeit with somewhat lower affinities (Table 1 and Fig S4A and B). All fluorescence anisotropy data could be fitted using a quadratic equation (Equation (2)) with R² > 0.9. In addition, we also fitted the data using the Hill equation (Equation (3)), which accommodates cooperativity. For most data, there were no signs of positive cooperativity. However, for PrgE binding to the 60-mer ssDNA, the Hill equation with a Hill coefficient of ca. 1.5 fits the data well, suggesting mild positive cooperativity (Fig S4C). This positive cooperativity was not seen with ΔN-PrgE (Fig S4C). All DNA substrates used behaved as expected on an agarose gel (Fig S4D). Taken together, these experiments confirm that the DNA-binding properties of PrgE differ considerably from other SSBs, as PrgE binds both ssDNA and dsDNA. They also highlight the importance of the N-terminal tail for DNA binding.
PrgE is not essential for conjugation
Given that PrgE is a soluble protein in the T4SS operon that binds DNA, we speculated that it might interact with the DNA transfer and replication proteins PcfF (accessory factor [Rehman et al, 2019]) and/or PcfG (relaxase [Chen et al, 2007]), which form the relaxosome at the origin of transfer of plasmid pCF10. We therefore conducted pull-down experiments where untagged PrgE was incubated with either the His-tagged PcfG (Fig 8A) or the GST-tagged PcfF (Fig 8B). However, neither of the proteins co-eluted with PrgE, indicating that they do not strongly interact.
Because PrgE is likely not part of the relaxosome, we wanted to know whether it is essential for conjugation in another way. We therefore created an E. faecalis knockout strain (OG1RF:pCF10ΔprgE) to explore the function of PrgE in vivo by comparing the conjugation efficiency between the mutant and WT. We tested conjugation both during the exponential phase, when cells were actively dividing, and in the stationary phase, when cells are no longer dividing and the availability of other, genome-encoded, SSBs in E. faecalis may be different. We observed a decrease in efficiency between exponentially growing cells and cells in the stationary phase, but there was no significant difference between ΔprgE and WT in either condition (Fig 9). We further considered whether multiple conjugative events would be needed to observe an effect. We therefore passaged the plasmids several times between donor and recipient cells, using transconjugant cells as new donor cells. However, also here we did not observe any difference within four passages between ΔprgE and WT (Fig 9). We conclude that PrgE does not play an essential role in conjugation under the tested conditions.
Discussion
Many conjugative plasmids, with different incompatibility groups, encode (at least) one SSB protein, which can often complement the genome-encoded SSB (Golub & Low, 1985). In conjugation, SSBs have been proposed to be important for protecting plasmid ssDNA both in donor and in recipient cells and to evade the SOS response (Howland et al, 1989;Jones et al, 1992;Al Mamun et al, 2021;Couturier et al, 2023). However, all of the available research has been done on SSBs from Gram-negative T4SSs. Here, we characterized the proposed SSB PrgE from the Gram-positive conjugative plasmid pCF10.
By crystallizing PrgE, we showed that it indeed has the typical OB-fold of SSBs, but that its structure has important differences when compared to other SSB proteins. PrgE has three alpha-helices that are positioned differently from other SSBs, and it also differs in the beta-sheet where the DNA-binding regions are. The differences became even more apparent when we analyzed the DNA-bound structure. Each monomer binds DNA in a way that is to be expected, relying on interactions with the DNA backbone and stacking interactions with the bases to achieve DNA binding in a sequence-independent manner. However, PrgE does not bind DNA as the typical bacterial SSB, which commonly forms homotetramers around which the ssDNA is wrapped. It is also very different from how eukaryotic SSBs, like RPA, bind the ssDNA as heterotrimers. Instead, PrgE binds the ssDNA in a filamentous manner, like beads on a string (Fig 3). Between each binding site, the DNA gets bent (Fig S3B). Whether the exact angles are due to crystal packing or are also the ones found in solution is not known. The oligomerization in the DNA-bound structure is supported by the N-terminal tail of PrgE, which interacts with the neighboring monomer in the DNA-bound structure (Fig 3), a feature that is not found in the prototypical bacterial SSBs. Further supporting the filamentous oligomerization are the different oligomerization states that were observed for PrgE in solution (Fig 5). The N-terminally truncated variant of PrgE (ΔN-PrgE), which was predominantly monomeric and showed capacity to dimerize only in low salt conditions, confirms the role of the N-terminus in oligomerization that was suggested by the DNA-bound crystal structure (Fig 6).
Most of our data from the fluorescence anisotropy experiments fit best to a standard quadratic binding curve that does not account for cooperativity (Figs 7 and S4). However, for the single-stranded 60-mer substrate, the Hill equation with a positive Hill coefficient fits the data well and indicates cooperativity in the binding (Fig S4C). This cooperative binding was lost for ΔN-PrgE, suggesting that the N-terminal tail does promote cooperative binding on longer DNA substrates. Surprisingly, we found that PrgE bound dsDNA equally well as ssDNA (Figs 7 and S4 and Table 1). Most characterized SSBs have a high affinity and specificity for ssDNA (Oliveira & Ciesielski, 2021). As an example, RPA binds mixed ssDNA with affinities of 10-40 nM, albeit displaying a preference for pyrimidines, and with K_d values for ssDNA up to three orders of magnitude lower than for dsDNA (Brill & Stillman, 1989;Wold et al, 1989;Kim et al, 1992). To our knowledge, only one studied SSB-like protein shares PrgE's feature of binding equally well to both ssDNA and dsDNA, namely one from the archaeon Nanoarchaeum equitans (Olszewski et al, 2015). When PrgE binds dsDNA, the DNA must be in a different conformation than in our ssDNA-bound structure. This makes it difficult to speculate exactly how PrgE would structurally bind dsDNA, beyond the observation that the residues interacting with the ssDNA phosphate backbone are likely also important for dsDNA binding. Given these data, it is clear that PrgE is not a typical SSB, and we therefore refer to it simply as an OB-fold protein.
Given these unexpected characteristics of PrgE, it is tempting to speculate about its evolutionary origin. Despite being present in the middle of a T4SS operon on a bacterial conjugative plasmid, PrgE does not behave at all like a bacterial SSB. No close structural homologs could be identified via DALI (Holm, 2020) and Foldseek (Van Kempen et al, 2024). PrgE's oligomerization behavior in DNA binding, where PrgE monomers can be added like beads on a string in a non-cooperative manner, is reminiscent of some viruses whose SSBs have a monomer as a functional subunit that can be added onto ssDNA (Dekker et al, 1997;Shokri et al, 2009). We did find similarities regarding DNA-binding affinities with an archaeal SSB, which is described as resembling viral SSB-like proteins (Olszewski et al, 2015;Oliveira, 2021). Indeed, the C-terminally truncated Enc34 phage SSB has been shown to bind dsDNA (Cernooka et al, 2017). Furthermore, the Enc34 SSB was also suggested to be able to bind DNA in a filamentous manner, similar to what we here observe for PrgE (Cernooka et al, 2017). In addition, PrgE was originally annotated as an SSB protein based on its 37% sequence similarity to a lactococcal phage SSB (Desiere et al, 2001). We therefore find it likely that PrgE at some point has been introduced to pCF10 via horizontal gene transfer mediated by a phage.
What then is the function of PrgE for the T4SS and in conjugation? PrgE is expressed as part of the P_Q operon of pCF10, surrounded by proteins that are essential for its T4SS (Fig 1). This means that PrgE will be produced only when transcription of the P_Q operon has been induced, and its production will be quickly shut down again, just like the rest of the proteins encoded by the P_Q operon (Lassinantti et al, 2021). Our first hypothesis was that PrgE might interact with other important DNA-binding components of type IV secretion, the relaxosome proteins PcfG and PcfF, as SSBs can be important players in recruiting proteins to DNA (Bianco, 2017;Antony & Lohman, 2019). However, PrgE does not seem to interact strongly with either of them. Secondly, we speculated that PrgE was important for conjugation in other ways, potentially by protecting the conjugative ssDNA in either the donor or recipient strain, or maybe by aiding the establishment of the plasmid in the recipient cells (Couturier et al, 2023). To test this, we created a knockout of PrgE (pCF10:ΔprgE). However, no significant differences in conjugation efficiency could be observed, neither in the exponential phase nor in the stationary phase. It also did not affect the efficiency during multiple serial conjugation events. This is in line with what was observed in previous studies on an F-plasmid, where knocking out a plasmid-encoded ssb also did not reduce mating rates (Al Mamun et al, 2021). However, these experiments were performed under laboratory conditions, and it is possible that PrgE does contribute to conjugation efficiency under other, less ideal, circumstances.
Conjugative plasmids retain many proteins that are not strictly required for conjugation itself, but provide various other advantages, for example, competitiveness against other conjugative elements or replacement of host functions that allows plasmids to use a wider host range (Cooke & Herman, 2023). The F-plasmid encodes an SSB that gets transferred into the recipient cell where it suppresses the SOS response (Al Mamun et al, 2021). One potential avenue to explore would be whether PrgE can also be transferred through the T4SS and serve a similar function in the E. faecalis recipient cell. However, we deem it unlikely that PrgE has a homologous function, given that the F-plasmid SSB is a typical bacterial SSB that can compensate for genomic SSB deficiencies (Chase et al, 1983;Kolodkin et al, 1983), whereas PrgE is very different from E. faecalis SSB and has very unusual DNA-binding characteristics. In addition, it has yet to be demonstrated whether the pCF10 T4SS can transfer proteins other than DNA-coupled relaxases. The ability of PrgE to bind both ssDNA and dsDNA broadens the range of potential functions to any cellular process involving DNA. Understanding the exact function of PrgE remains an exciting prospect for future research.
Conjugative plasmids have been studied for many decades now, ever since the R1 conjugative plasmid was first isolated from a clinical isolate in 1963 (Datta & Kontomichalou, 1965). Genes encoding OB-fold proteins are part of these plasmids, but our understanding of their specific function within conjugation remains very limited and is almost exclusively based on T4SSs from Gram-negative bacteria. Here, we have shown that PrgE from the Gram-positive conjugative plasmid pCF10 behaves differently to the more well-studied SSBs. It binds ssDNA by attaching PrgE monomers to the DNA like beads on a string, instead of wrapping the DNA around a globular oligomer like E. coli SSB, and it binds dsDNA equally well as ssDNA. Its oligomerization behavior and DNA-binding mechanism instead provide insight into a class of OB-fold proteins that has been very poorly characterized.
The sequence encoding prgE was PCR-amplified from the pCF10 plasmid using primers PrgE_FX_F or ΔN-PrgE_FX_F and PrgE_FX_R and cloned into the intermediate vector pINIT_kan after digestion by SapI, using the FX cloning system (Geertsma & Dutzler, 2011). It was subcloned into the expression vector p7XC3H, which provides a C-terminal 10xHis-tag and a 3C protease cleavage site, before transformation of E. coli ArcticExpress (DE3) cells. The sequence encoding pcfG was PCR-amplified using the primers PcfG_F and PcfG_R and cloned into a pET24d vector after digestion with Eco31I, which provides an N-terminal 10xHis-tag and a SUMO-tag, before transformation into E. coli BL21 (DE3) cells.
The E. faecalis PrgE-deleted strain, OG1RF:pCF10ΔprgE, was obtained by allelic exchange and counter-selection using a pCJK218 plasmid (Vesić & Kristich, 2013), leaving the nucleotides encoding the first and last five amino acids of the protein. About 800 bp of the upstream and downstream regions of PrgE was PCR-amplified using the primer pairs PrgE-UF-F/PrgE-UF-R and PrgE-DF-F/PrgE-DF-R, respectively. The products were digested by BamHI/SalI for the upstream region and SalI/NcoI for the downstream region, before cloning into the pCJK218 digested by BamHI/NcoI. The resulting plasmid was used to transform E. faecalis OG1RF:pCF10 by electroporation (Bae et al, 2002). The PrgE-deleted transformants were obtained by switching temperature to induce allelic exchange as described by Vesić and Kristich (2013), and the gene deletion was subsequently confirmed by sequencing.
Protein production
Proteins were expressed using the LEX system (Large-scale EXpression system, Epiphyte 3). PrgE and ΔN-PrgE were transformed into E. coli ArcticExpress (DE3) cells and cultivated in TB medium supplemented with 0.4% glycerol. The cultures were grown at 30°C until an OD 600 of 0.8, then cooled down to 12°C before 0.4 mM IPTG was added to induce protein expression. After 24 h, cells were centrifuged at 4,000g for 20 min. PcfF was produced the same way, with the exception that BL21 (DE3) cells were used, and cultures were grown at 37°C before lowering the temperature to 18°C before induction, and harvested after 20 h. PcfG was produced in Origami (DE3) cells using autoinduction TB media. Cultures were grown at 37°C until an OD of 0.6 was reached, followed by 24 h at 25°C without the addition of IPTG.
The GST-PcfF supernatant was incubated for 1 h with glutathione resin (GE Healthcare) at 4°C and subsequently washed with 50 CV wash buffer (20 mM Hepes, pH 7.5, 200 mM NaCl) before elution with 20 mM Hepes, pH 7.5, 200 mM NaCl, 30 mM glutathione. The protein was concentrated with Amicon Ultra centrifugal filters with a molecular weight cutoff of 10 kD before SEC in 20 mM Hepes, pH 7.5, 200 mM NaCl on a Superdex 200 Increase 10/300 GL column using an ÄKTA pure (Cytiva).
Crystallization and structure determination

SEC-purified PrgE, at a concentration of 11 mg/ml, was used for crystallization trials. Crystals appeared after 2-5 d, at 20°C, using the vapor diffusion method in a condition with 0.2 M LiSO4, 0.1 M potassium phosphate-citrate, pH 4.2, 20% wt/vol PEG 1000 in a 2:1 ratio. For the DNA-bound structure, 117 μM of single-stranded poly-A 60-mer was added to 6 mg/ml PrgE and mixed in a 1:2 ratio with a reservoir solution containing 15% vol/vol PEG 400, 50 mM MES, pH 6.5, 80 mM Mg acetate, 15 mM MgCl2. Crystals were flash-frozen in liquid nitrogen without an additional cryoprotectant. X-ray diffraction data were collected at the ID30A-3 (apo) or ID23-1 (DNA-bound) beamlines at the ESRF, France, and processed using XDS (Kabsch, 2010). The space group of both crystals was P2₁2₁2₁, and the phase problem was solved in Phenix Phaser (McCoy et al, 2007) using molecular replacement with an AlphaFold2 (Jumper et al, 2021) model of PrgE where the flexible extremities of the protein had been removed, generated using ColabFold version 1.5.2 with default settings (Mirdita et al, 2022). The asymmetric unit of the crystal contained two copies of PrgE for the apo structure. The asymmetric unit of the DNA-bound protein contained three copies of the protein and a 15-nucleotide stretch of the single-stranded DNA. The chosen asymmetric unit thus contains only a quarter of the full ssDNA that the protein was crystallized with. We chose to do so because the ssDNA has continuous density throughout the crystal packing, and this greatly simplified the refinement process. The structures were built in Coot (Emsley & Cowtan, 2004) and refined at 2.7 Å using Refmac5 (Vagin et al, 2004), and we obtained R_work/R_free values of 23.45 and 27.77 for the apo structure and 23.05 and 25.23 for the DNA-bound structure. Further refinement statistics can be found in Table S3.
SEC-MALS
For analysis of the oligomeric state of PrgE, 150-300 μl of 1 mg/ml PrgE or ΔN-PrgE (with a theoretical mass of 17 or 15.5 kD, respectively) was loaded on a Superdex 200 Increase 10/300 GL column, equilibrated in buffer (20 mM Hepes, pH 7.5, and 300 mM NaCl), via an ÄKTA pure (Cytiva) that was coupled to a light scattering (Wyatt TREOS II) and refractive index (Wyatt Optilab T-Rex) detector to determine the molecular weight of the elution peak via SEC-MALS. Data were analyzed using the Astra software (version 7.2.2; Wyatt Technology).
Crosslinking
PrgE crosslinking experiments were performed by incubating 30 μg of protein with 2 mg of disuccinimidyl suberate in 20 mM Hepes, pH 7.5, and 300 mM NaCl for 30 min at 20°C. The reaction was quenched by adding 100 mM Tris-HCl, pH 8.0, at least 10 min before analysis using SDS-PAGE with Coomassie Brilliant Blue staining.

(Fig 9 legend) Conjugation rates of E. faecalis donor cells carrying WT pCF10 or pCF10:ΔprgE either in the exponential phase or in the stationary phase. In the exponential phase, serial passaging was performed, where transconjugants from one passage were used as donor cells in the following passage. ns stands for not significant.
Preparation of DNA substrates
Oligonucleotides were purchased from Eurofins and are listed in Table S2. For double-stranded substrates, one nmol of each oligonucleotide was annealed to an equimolar amount of its complementary strand by denaturing at 95°C for 5 min in TE buffer (50 mM Tris-HCl, pH 8.0, 1 mM EDTA) containing 100 mM NaCl, and allowing the reaction mixture to cool to RT. The DNA was separated on a 15% acrylamide gel in 0.5× TBE (15 mM Tris, 44.5 mM boric acid, 1 mM EDTA), stained with 3× GelRed (Biotium) for 30 min, and visualized using a ChemiDoc (Bio-Rad). The bands corresponding to double-stranded molecules were excised with a clean razor blade, eluted from crushed gel slices into TE buffer (10 mM Tris-HCl, pH 8.0, 1 mM EDTA), and purified by phenol-chloroform extraction and isopropanol precipitation.
Fluorescence anisotropy assay
Single-stranded and double-stranded oligonucleotides of 30 or 60 nt with a 5′ FITC label (Table S2) were diluted to 20 nM in binding buffer (20 mM Hepes, pH 7.5, 50 or 100 mM NaCl, as indicated). Before use, the single-stranded oligonucleotides only were boiled for 5 min at 95°C and chilled on ice. Fluorescence anisotropy reactions containing 10 nM oligonucleotide and 0-20 μM PrgE or ΔN-PrgE in binding buffer were pipetted in duplicates onto black, shallow 384-well microplates (OptiPlate-F, PerkinElmer) and incubated in the dark for 30 min at RT. Fluorescence intensities were collected from above on a CLARIOstar Plus plate reader (BMG Labtech) with excitation and emission wavelengths of 480 and 520 nm, respectively. Fluorescence anisotropy in millianisotropy units (mA) was calculated using the MARS Data Analysis Software (BMG Labtech) according to Equation (1), where F∥ and F⊥ are the parallel and perpendicular emission intensity measurements corrected for background (buffer). PrgE alone exhibited no fluorescence. The dissociation constant (K_d) was determined by fitting data to a quadratic equation by non-linear regression analysis in GraphPad Prism software (GraphPad Software, Inc.) using Equation (2), where Y is the anisotropy value at protein concentration X, X is the concentration of PrgE in μM, B_0 and B_max are the specific anisotropy values associated with free DNA and the total DNA-PrgE complex, respectively, and D is the concentration of DNA in μM.
For 60-nt ssDNA, the data were in addition fitted to the Hill equation by non-linear regression analysis in GraphPad Prism software (GraphPad Software, Inc.) using Equation (3): where Y is the anisotropy value at protein concentration X, X is the concentration of PrgE in μM, B max is the specific anisotropy value associated with total DNA-PrgE complex, and h is the Hill coefficient.
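Outside Prism, the same fits can be reproduced with standard curve-fitting tools. The sketch below assumes the quadratic and Hill forms given above; the protein concentrations and anisotropy values are illustrative, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def quadratic_binding(X, B0, Bmax, Kd, D=0.01):
    """Equation (2): quadratic (ligand-depletion) model.
    X: PrgE concentration (uM); D: total DNA concentration (uM, fixed at 10 nM)."""
    s = X + D + Kd
    return B0 + (Bmax - B0) * (s - np.sqrt(s ** 2 - 4 * X * D)) / (2 * D)

def hill(X, Bmax, Kd, h):
    """Equation (3): Hill model."""
    return Bmax * X ** h / (Kd ** h + X ** h)

# Illustrative titration: PrgE concentration (uM) vs anisotropy (mA)
X = np.array([0.02, 0.05, 0.1, 0.25, 0.5, 1, 2, 5, 10, 20])
Y = np.array([42, 46, 55, 70, 88, 108, 126, 140, 146, 148])

popt_q, _ = curve_fit(quadratic_binding, X, Y, p0=[40, 150, 1.0], bounds=(0, np.inf))
print("Quadratic fit: B0=%.1f, Bmax=%.1f, Kd=%.2f uM" % tuple(popt_q))

popt_h, _ = curve_fit(hill, X, Y, p0=[150, 1.0, 1.0], bounds=(0, np.inf))
print("Hill fit: Bmax=%.1f, Kd=%.2f uM, h=%.2f" % tuple(popt_h))
```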
Pull-down experiments with relaxosome components
PrgE pull-down experiments were performed in 20 mM Hepes, pH 7.5, and 200 mM NaCl by mixing either 2 nmol GST-PcfF or PcfG-His (baits) with 4 nmol untagged PrgE (prey) and 100 μl of resin (glutathione resin [GE Healthcare] for PcfF and Ni-NTA [Protino] for PcfG). The proteins were incubated for 15 min at 4°C before collecting the flow-through, washing with 5 × 5 CV wash buffer, and eluting with 2 × 5 CV elution buffer. For GST-PcfF pull-downs, 20 mM Hepes, pH 7.5, and 200 mM NaCl was used as wash buffer, and 20 mM Hepes, pH 7.5, 200 mM NaCl, and 30 mM glutathione as elution buffer. For His-PcfG pull-downs, the wash buffer contained 20 mM Hepes, pH 7.5, 200 mM NaCl, and 30 mM imidazole, and the elution buffer contained 20 mM Hepes, pH 7.5, 200 mM NaCl, and 500 mM imidazole. The samples were analyzed by SDS-PAGE and stained with Coomassie Brilliant Blue.
Conjugation assays
Donor (OG1RF:pCF10 or OG1RF:pCF10ΔprgE) and recipient (OG1ES) strains were inoculated with the indicated antibiotics and incubated overnight at 37°C with agitation. The next day, the overnight cultures were refreshed in BHI media without antibiotics at a 1:10 ratio. For conjugation assays in the exponential phase, cells were directly induced to express the T4SS with 5 ng/ml cCF10 for 1 h at 37°C without agitation. For conjugation assays in the stationary phase, cultures were first incubated for 3 h at 37°C with agitation before induction. Donor and recipient cells were then gently mixed in a 1:10 ratio and incubated for 30 min at 37°C without agitation. To disrupt the ongoing conjugation, cells were vortexed and placed on ice for 10 min. A serial dilution was performed with cold media, and 10 μl of the appropriate dilutions was spotted in triplicate on top of a square BHI agar plate, which was placed in an upright position to allow the drops to run down the plate and facilitate counting of the colonies. To select donor cells, BHI agar contained 10 μg/ml tetracycline and 25 μg/ml fusidic acid; to select for transconjugant cells, BHI agar contained 10 μg/ml tetracycline and 20 μg/ml erythromycin. The plates were incubated for ~24 h at 37°C before colonies were counted and enumerated as colony-forming units (CFU). The frequency of DNA transfer is presented as the number of transconjugants per donor. Experiments were done in triplicate and are reported with their SD. For the serial passaging, conjugation assays were performed in the exponential phase as described above. Three colonies from the transconjugant plates of passage 1 were picked to start new overnight cultures, which were then used as donor cells for the following passage. In passage 2, donor cells were therefore OG1ES:pCF10, and OG1RF without a plasmid served as recipient cells. Three transconjugant colonies from passage 2 served as donor cells for passage 3 with OG1ES as recipient cells, and transconjugant cells from passage 3 were donors for passage 4 with OG1RF as recipient cells. Donor and transconjugant cells were selected as described above for passages 1 and 3. For passages 2 and 4, BHI agar containing 10 μg/ml tetracycline and 20 μg/ml erythromycin was used to select for donor cells, and BHI agar containing 10 μg/ml tetracycline and 25 μg/ml fusidic acid was used to select for transconjugants.
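The transfer frequency itself is a simple ratio of CFU counts; a minimal sketch follows (the colony counts and dilution factors are illustrative, not experimental values):

```python
def cfu_per_ml(colonies, dilution_factor, spot_volume_ml=0.01):
    """Back-calculate CFU/ml from a 10 ul spot of a given dilution."""
    return colonies * dilution_factor / spot_volume_ml

# Illustrative counts from donor- and transconjugant-selective plates
donor_cfu = cfu_per_ml(colonies=85, dilution_factor=1e5)
transconjugant_cfu = cfu_per_ml(colonies=42, dilution_factor=1e2)

# Frequency of DNA transfer, reported as transconjugants per donor
frequency = transconjugant_cfu / donor_cfu
print(f"{frequency:.2e} transconjugants/donor")
```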
All in vivo data are from three biological replicates and are plotted with their SD using GraphPad Prism (version 10.2) (GraphPad Software). Statistical significance was analyzed with one-way ANOVA.
PrgE is not a homolog of a genome-encoded E. faecalis SSB
To compare PrgE with other proteins, we performed sequence-based homology searches. These yielded little insight, beyond the prediction that PrgE is an SSB found only in Enterococci and other related species from the order Lactobacillales. We performed multiple sequence alignment of PrgE with SSBs encoded on the E. coli and E. faecalis genomes (Fig S1A). PrgE has only very low sequence identity to both (24% to the aligned regions of E. faecalis SSB and 19% to E. coli SSB). We also created AlphaFold2 models to investigate structural homology. Genomic SSB from E. faecalis strongly resembles typical bacterial SSBs, and the model aligns with E. coli SSB with an RMSD of 0.59 Å over 83 residues (Fig S1B). In contrast, the PrgE model differs significantly. It superimposes with an RMSD of 5.4 Å over 80 residues onto the model of the genome-encoded E. faecalis SSB, with differences in the part of the beta-sheet that is involved in DNA binding in typical bacterial SSBs. It also differs in the N- and C-terminal regions, and contains more alpha-helices than typical OB-folds (Fig S1C). Structural homology searches with the AlphaFold2 model of PrgE using Foldseek (Van Kempen et al, 2024) did not yield better information than the sequence-based searches. Top hits in the Protein Data Bank (PDB) database were only distantly related proteins with an OB-fold, with high E-values or low TM scores (Table
Figure 1 .
Figure 1. Schematic overview of the genes included in the P Q operon of pCF10. Each arrow represents one gene, colored by its proposed function in the T4SS. Genes coding for proteins involved in T4SS regulation are shown in orange, surface adhesins in green, the mating channel in purple, DNA transfer and replication (Dtr) proteins in blue, and genes of unknown function in gray. The length of the arrows is approximately to scale with the corresponding genes. prgE is highlighted in yellow.
Figure 2 .
Figure 2. Apo structure of PrgE. (A) Crystal structure of PrgE colored in rainbow colors from the N-terminus (blue) to the C-terminus (red). All secondary structure elements are marked in the figure. (B) Superimposition of PrgE (green) with the C-terminal domain of RadD (gray, PDB: 7R7J). The beta-sheet superimposes relatively well, but there are larger differences in the orientation of the alpha-helices.
Figure 3 .
Figure 3. DNA-bound structure of PrgE. (A) In the asymmetric unit, there are three PrgE molecules bound to the ssDNA. (B, C, D) Enlarged views of the regions indicated in panel (A), highlighting the residues that are important for DNA binding for each of the three monomers. Black dotted lines show potential hydrogen bonds. The orientation of panels (B, C, D) is not the same as in (A), to increase clarity and allow easier comparison.
Figure 4 .
Figure 4. Comparison between PrgE and other single-stranded DNA-binding proteins (SSBs). (A) E. coli homotetrameric SSB bound to ssDNA (PDB: 1EYG). (B) Yeast heterotrimeric RPA bound to ssDNA (PDB: 6I52). (C) SSB from Enterobacter phage Enc34 (PDB: 5ODL). (D, E, F) Superposition of DNA-bound PrgE (brown) with the proteins shown in panels (A, B, C). The view in panel (D) is rotated 45° on the x-axis compared to panel (A) for clarity; the views in panels (E, F) are the same as in (B, C). In panel (E), PrgE is aligned to chain C of RPA as it has the highest structural homology to PrgE.
Figure 5 .
Figure 5. Oligomerization of PrgE. (A) Size-exclusion chromatogram of PrgE (on a Superose 6 column) shows that the elution volume, which is coupled to protein radius, depends on the salt concentration. (B) Size-exclusion chromatogram of PrgE (on a Superdex 200 column), at the same salt concentration but with different protein concentrations, shows that the elution volume decreases with increasing protein concentration. (C) SEC-MALS analysis of 60 μM PrgE in 300 mM NaCl. The black line, plotted on the left axis, indicates the Rayleigh ratio, which is directly proportional to the intensity of the scattered light in excess of the buffer. The orange line, plotted on the right axis, indicates the molecular weight of the protein measured throughout the peak. The average molecular weight was 51.1 ± 2.8 kD. (D) SDS-PAGE of PrgE, with or without crosslinking with disuccinimidyl suberate. Source data are available for this figure.
Figure 6 .
Figure 6. Oligomerization of ΔN-PrgE. (A) ΔN-PrgE (solid lines) elutes significantly later than WT (dotted lines, the same as in Fig 5B) on size-exclusion chromatography, but its elution volume is still dependent on the salt concentration. (B) SEC-MALS analysis of ΔN-PrgE in 300 mM NaCl, with the Rayleigh ratio indicated in black on the left axis and the molecular weight in orange on the right axis. The calculated weight was 16.4 ± 0.6 kD, which is close to that of a monomer. (C) SEC-MALS analysis of ΔN-PrgE in 50 mM NaCl gave a calculated molecular weight of 33.1 ± 4.7 kD, which is close to that of a dimer.
Figure 8 .
Figure 8. PrgE does not interact with the main components of the pCF10 relaxosome. (A) Pull-down experiment with the relaxase PcfG, showing the input protein, washes, and elution, in which His-PcfG (bait) was unable to pull down PrgE (prey). (B) Pull-down experiment in which the relaxosome accessory factor GST-PcfF (bait) was unable to pull down PrgE (prey). Source data are available for this figure.
ml DNase I. Resuspended cells were lysed in Cell Disruptor (Constant Systems) at 25 kPsi and centrifuged at 30,000g for 30 min at 4°C.
Figure 9 .
Figure 9. PrgE is not essential for conjugation. Conjugation rates of E. faecalis donor cells carrying WT pCF10 or pCF10:ΔprgE either in the exponential phase or in the stationary phase. In the exponential phase, serial passaging was performed, where transconjugants from one passage were used as donor cells in the following passage. ns stands for not significant.
Table 1 .
Kd values and standard deviations (n = 3) for PrgE and ΔN-PrgE binding to ssDNA or dsDNA oligonucleotides in 50 or 100 mM NaCl, as determined by fluorescence anisotropy using Equation (2) (quadratic fit).

| 9,671.2 | 2024-03-13T00:00:00.000 | ["Biology"] |
NLRG at SemEval-2021 Task 5: Toxic Spans Detection Leveraging BERT-based Token Classification and Span Prediction Techniques
Toxicity detection in text has been a popular NLP task in recent years. In SemEval-2021 Task 5, Toxic Spans Detection, the focus is on detecting toxic spans within English passages. Most state-of-the-art span detection approaches can be broadly classified into Token Classification or Span Prediction approaches. In our paper, we explore simple versions of both approaches and their performance on the task. Specifically, we use BERT-based models - BERT, RoBERTa, and SpanBERT - for both approaches. We also combine these approaches and modify them to bring improvements for Toxic Spans prediction. To this end, we investigate results on four hybrid approaches - Multi-Span, Span+Token, LSTM-CRF, and a combination of predicted offsets using union/intersection. Additionally, we perform a thorough ablative analysis and examine our observed results. Our best submission - a combination of SpanBERT Span Predictor and RoBERTa Token Classifier predictions - achieves an F1 score of 0.6753 on the test set. Our best post-eval F1 score is 0.6895, obtained from the intersection of predicted offsets of the top-3 RoBERTa Token Classification checkpoints. These approaches improve performance by 3% on average over the shared baseline models - RNNSL and SpaCy NER.
Introduction
Offensive language can include various categories such as threats, vilification, insults, calumniation, discrimination and swearing (Pavlopoulos et al., 2019). Detection of such language is necessary for ease of moderation of content on social media. Despite their popularity, toxicity detection tasks have focused mainly on sequence classification rather than sequence tagging. Finding which spans make a comment or document toxic is crucial for explaining the reasons behind its toxicity. Additionally, such attributions would allow for more efficient, semi-automated, quality-based moderation of content, especially for verbose documents, in comparison to quantitative toxicity scores.
In SemEval-2021 Task 5, Pavlopoulos et al. (2021) provide a dataset of 10k English texts filtered from the Civil Comments (Borkan et al., 2019) dataset. Each text is crowd-annotated with the character offsets that make the text toxic. The task is to predict these character offsets given the text. The work presented in this paper aims to provide a comprehensive analysis of simple Token Classification (TC) and Span Prediction (SP) methods across multiple BERT-based models - BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and SpanBERT (Joshi et al., 2020). Additionally, we experiment with a few hybrid approaches - Multi-Span (MSP), where the model is trained on multiple spans simultaneously; Span+Token (SP-TC), where the model is trained on both kinds of tasks simultaneously; LSTM-CRF (LC), which uses an LSTM and a CRF layer on top of BERT-based models; and a combination of predicted offsets from the above techniques using union/intersection. In Section 2, we perform a compendious literature survey. Section 3 elucidates our approach, including the modelling aspect, the various variants of the base model, and the different Hybrid Systems. In Section 4, we describe our experimental setup and the hyperparameters used for our methods. Lastly, in Section 5 we analyze our results and perform ablative analysis of our systems.
Background
Before the advent of research pertaining specifically to toxic texts, Warner and Hirschberg (2012) modeled hate speech detection as a word sense disambiguation problem, using an SVM for classification. Later work used an RNN language model with character- and token-based methods to classify such text. Recently, however, toxic text detection has garnered a lot of attention (Nobata et al., 2016; Park and Fung, 2017; Pavlopoulos et al., 2017; Wulczyn et al., 2017). The increase in offensive language research can partly be credited to various workshops such as Abusive Language Online 1 (Waseem et al., 2017), as well as other fora, such as GermEval for German texts, 2 or TRAC (Kumar et al., 2018) and Kaggle challenges 3.
Hanu and Unitary team (2020) introduced Detoxify, a toxic comment detection library built using HuggingFace's transformers (Wolf et al., 2020) to identify inappropriate or harmful text online, developed as a result of participation in three such challenges. In contemporary work, Pavlopoulos et al. (2020) discuss the context requirements of toxicity detection.
In SemEval 2020 Task 11 (Da San Martino et al., 2020), the first sub-task - Span Identification - aims at detecting the beginning and end offsets of propaganda spans in news articles. This sub-task is similar to SemEval 2021 Task 5. The approaches proposed for that sub-task can be broadly classified into Span Prediction or Token Classification. Most teams use multi-granular transformer-based systems for token classification/sequence tagging (Khosla et al., 2020; Morio et al., 2020; Patil et al., 2020). Inspired by Souza et al. (2019), Jurkiewicz et al. (2020) use RoBERTa-CRF based systems. Li and Xiao (2020) use a variant of a SpanBERT span prediction system.
Baseline Models
From the models already provided with the dataset, we use RNNSL and SpaCy NER Tagging baselines for token-wise classification.
The RNNSL model is a combination of a single Bi-LSTM layer with a randomly initialized embedding layer. It uses a three-label classification task for each word in the sentence, with the labels: special token, non-toxic word, and toxic word. For each word predicted to be toxic, the corresponding offsets are added to the predicted spans. A word containing any toxic offset is marked as toxic during training. The SpaCy NER Tagging model is an NER classifier built on SpaCy Language Models. It is used to predict the entities labelled as TOXIC in the text using the spans provided.

1 https://sites.google.com/site/abusivelanguageworkshop2017/
2 https://projects.fzai.h-da.de/iggsa/
3 Jigsaw Toxic Comment Classification Challenge
BERT-based Token Classification Models
These models comprise a BERT-based model and a classification layer over each final token embedding which predicts whether a token is toxic or not. Based on these classifications, we add the offsets for those tokens (not words) which are marked as toxic by the model. Figure 1a represents a Token Classification Model.
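A minimal sketch of this token-to-offset mapping using a HuggingFace fast tokenizer is shown below; the function name and example labels are ours (in practice the labels come from the argmax of the toxicity logits):

```python
from transformers import AutoTokenizer

def toxic_char_offsets(text, token_labels, tokenizer):
    """Map per-token toxic (1) / non-toxic (0) labels back to character offsets."""
    enc = tokenizer(text, return_offsets_mapping=True, truncation=True)
    offsets = set()
    for (start, end), label in zip(enc["offset_mapping"], token_labels):
        if label == 1 and end > start:  # (0, 0) spans belong to special tokens
            offsets.update(range(start, end))
    return sorted(offsets)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
text = "See a shrink you pathetic troll."
n_tokens = len(tokenizer(text)["input_ids"])
labels = [0] * n_tokens  # placeholder; replace with the classifier's predictions
print(toxic_char_offsets(text, labels, tokenizer))
```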
BERT-based Span Prediction Models
We use BERT-based Span Prediction models (Figure 1c) based on Extractive Question Answering systems, similar to work on SQuAD (Rajpurkar et al., 2016) and MRQA (Fisch et al., 2019). In these systems, the output at each token is a start logit and an end logit denoting whether that token is a start token or an end token of the span, depending on the softmax value. Since a Toxic Spans text can have multiple toxic spans, we take the different contiguous spans from the given offsets and make several 'samples' out of each example. Each span becomes an 'answer' for the particular text sample. We use the word 'offense' as a dummy question. Thus, each contiguous span leads to one 'sample' for every example (Table 1). We store the start index of the text, similar to the SQuAD (Rajpurkar et al., 2016) dataset, and process the data to provide start and end token positions during training. The classifier layer on top of the encoder embeddings performs a binary classification task for start and end positions. A span is scored using the sum of its predicted start and end logits. From the top-K start and end logits, valid predicted answer spans 4 are chosen during post-processing. A union of all the corresponding offsets is taken to give the final prediction for the example. A threshold is learned on the span scores using the resulting dev set F1 score on offsets, which is then used for test set prediction. All spans with a score above the threshold are considered to be toxic spans.
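A simplified sketch of this post-processing (top-K selection, validity filtering, scoring, thresholding); the variable names and default values are illustrative, not the authors' code:

```python
import numpy as np

def predict_toxic_offsets(start_logits, end_logits, offset_mapping,
                          top_k=20, max_span_tokens=30, threshold=0.0):
    """start_logits/end_logits: 1D numpy arrays over tokens.
    Returns the union of character offsets of valid spans scoring above threshold."""
    top_starts = np.argsort(start_logits)[::-1][:top_k]
    top_ends = np.argsort(end_logits)[::-1][:top_k]
    predicted = set()
    for s in top_starts:
        for e in top_ends:
            if e < s or (e - s + 1) > max_span_tokens:
                continue  # invalid span
            if start_logits[s] + end_logits[e] < threshold:
                continue  # below the tuned score threshold
            predicted.update(range(offset_mapping[s][0], offset_mapping[e][1]))
    return sorted(predicted)
```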
Multi-Spans
In Section 3.2, we allow each context to have multiple single-span answers during training. This is counter-intuitive, as the model is only trained to handle a single span at a time, yet is expected to predict multiple spans at prediction time. Two toxic spans in a text are equally important to predict and thus should not be shown at different times during training. To mitigate this issue, we try an approach which we refer to as the 'Multi-Spans' (MSP) approach. Here, we take all the ground-truth start and end token positions during training, and use Binary Cross Entropy on each of the start/end logits. This essentially treats the task as a multi-label classification problem. Hence, during training, all the ground-truth spans are used in the same iteration with the example, and only one 'sample' per example is generated. Figure 1d depicts a representation of the system; note that two tokens - dumb and pathetic - are marked as start tokens, and similarly both ignorant and troll are marked as end tokens. A sketch of the corresponding multi-label targets is shown after the footnote below.

4 Valid spans are those which have an end index greater than the start index, and a length less than a maximum span length.
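A sketch of how the multi-hot start/end targets and the binary cross-entropy loss could be set up (a simplification with illustrative tensor sizes, not the authors' implementation):

```python
import torch
import torch.nn as nn

def multi_span_targets(seq_len, spans):
    """spans: list of (start_token, end_token) pairs for one example.
    Returns multi-hot start and end target vectors."""
    start_t, end_t = torch.zeros(seq_len), torch.zeros(seq_len)
    for s, e in spans:
        start_t[s], end_t[e] = 1.0, 1.0
    return start_t, end_t

bce = nn.BCEWithLogitsLoss()
seq_len = 16
start_logits = torch.randn(seq_len)  # from the start classifier head
end_logits = torch.randn(seq_len)    # from the end classifier head
start_t, end_t = multi_span_targets(seq_len, spans=[(3, 5), (9, 12)])
loss = bce(start_logits, start_t) + bce(end_logits, end_t)
```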
LSTM-CRF
A recently popular approach in Named-Entity Recognition tasks has been to use Conditional Random Fields (CRF) with BERT-based models. Inspired by the CRF-based approaches (Souza et al., 2019;Jurkiewicz et al., 2020), we use BERT-based models with a single BiLSTM layer and a CRF layer. During training, the CRF loss is used and during prediction, Viterbi Decoding is performed. Though CRF is generally used for word-level classification, we do not mask inner and end tokens for a word as it degrades dev set performance for our systems. Hence, all the tokens of a word are considered for classification.
Spans+Token
For this system, we use a combination of the two tasks - Token Classification and single-span Span Prediction. We use two classification layers on the token-wise embeddings - one for start and end prediction, and the other for token classification. Training is done simultaneously on both tasks, and the cross-entropy loss of each classifier is weighted, where s_t, e_t, and p_t are the labels of the start, end and token classifiers for token t, while ŝ_t, ê_t and p̂_t are the corresponding predictions. The weighting is chosen to equally scale the SP and TC task losses. During prediction, we consider the top-K start and end scores. For each valid span, a score is calculated from the average of the start and end logits together with the mean of the toxicity logits over the span under consideration, where i_s and i_e are the start and end indices, ŝ_{i_s} and ê_{i_e} are the start and end logits at those indices, and t̂_k is the toxicity logit at index k. A threshold, as in Section 3.2, is tuned on the dev set. The predicted offsets taken from the predicted spans are considered to be toxic.
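The two equations referenced above were lost in extraction; a plausible reconstruction consistent with the surrounding description (equal scaling of the SP and TC losses; averaging of boundary logits and of per-token toxicity logits over the span) is sketched below - the exact weights used by the authors may differ.

```latex
% Plausible overall loss (SP and TC terms scaled equally; the weights are an assumption):
\mathcal{L} = \tfrac{1}{2}\left[\mathrm{CE}(s_t,\hat{s}_t) + \mathrm{CE}(e_t,\hat{e}_t)\right] + \mathrm{CE}(p_t,\hat{p}_t)

% Plausible span score at prediction time:
\mathrm{score}(i_s,i_e) = \frac{\hat{s}_{i_s}+\hat{e}_{i_e}}{2} + \frac{1}{i_e-i_s+1}\sum_{k=i_s}^{i_e}\hat{t}_k
```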
Combination of Offset Predictions
Chen et al. (2017) proposed using the predictions from the top few checkpoints and averaging the results to achieve better classification scores. Following a similar line of thought, we also combine the predicted spans from various checkpoints of a model, as well as across different models, using union or intersection.
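Combining character-offset predictions across checkpoints reduces to set operations; a minimal sketch (the checkpoint predictions shown are illustrative):

```python
from functools import reduce

def combine_offsets(predictions, mode="intersection"):
    """predictions: list of toxic character-offset lists, one per checkpoint/model."""
    sets = [set(p) for p in predictions]
    op = set.intersection if mode == "intersection" else set.union
    return sorted(reduce(op, sets))

ckpt_preds = [[10, 11, 12, 13, 20], [10, 11, 12, 21], [10, 11, 12, 13]]
print(combine_offsets(ckpt_preds, mode="intersection"))  # [10, 11, 12]
print(combine_offsets(ckpt_preds, mode="union"))          # [10, 11, 12, 13, 20, 21]
```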
Hardware Requirements
The training and evaluation of systems were performed on Google Colab's free GPU (NVIDIA K80/P100). The training time varies with the model; for each model, it is around 4-6 hours, which is well within the 12-hour limit of Colab.
Models & Hyperparameters
For RNNSL, a Keras-based BiLSTM model is provided. We use a max length of 192, a batch size of 32 and a dropout of 0.1. Training is done using the Adam optimizer with early stopping (patience = 3), which in our case halts at 5 epochs. The embedding/hidden state size used is 200. A threshold on the predicted toxic-word probability is used to classify a word as toxic; this threshold is tuned on the trial dataset. For SpaCy, the en_core_web_sm model is used with 30 iterations. For all BERT-based models, we use HuggingFace's transformers (Wolf et al., 2020) in PyTorch. For CRF, we use the pytorch-crf (Kurniawan, 2018) library. We use a batch size of 4, train for 3 epochs, use linear learning rate decay, and an AdamW optimizer with a weight decay of 0.01. The initial learning rate is 2e-5. During tokenization, the maximum length allowed is 384, with the exception of RoBERTa Span+Token where it is 512. We use LARGE models for all - BERT, RoBERTa and SpanBERT - unless otherwise specified.
For Token Classification, we add a label for the [CLS] token if the percentage of toxic offsets in text is greater than 30% in order to provide a proxy text classification objective for the system. For span-based models, the K used for top-K start and top-K end logit selection is 20, and the maximum allowed answer length is 30 tokens. For LSTM-CRF systems, a dummy label is used for the [CLS] token, while the prediction mask for other special tokens is set to 0. A dropout of 0.2 is used. For Span Prediction systems, the overlapping stride is set to 128.
The training dataset used is tsd_train.csv and the dev set used is tsd_trial.csv, unless otherwise specified. For all systems, we evaluate the F1 scores using the provided script on the checkpoints which give the lowest dev set loss.
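The task metric is a character-offset F1 computed per example and averaged over the dataset; a sketch of that metric is given below (our re-implementation of the standard formulation, not the organizers' script):

```python
def char_f1(pred_offsets, gold_offsets):
    """Per-example F1 over character offsets; empty-vs-empty scores 1."""
    pred, gold = set(pred_offsets), set(gold_offsets)
    if not gold and not pred:
        return 1.0
    if not gold or not pred:
        return 0.0
    overlap = len(pred & gold)
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 0.0 if overlap == 0 else 2 * precision * recall / (precision + recall)

def dataset_f1(all_preds, all_golds):
    scores = [char_f1(p, g) for p, g in zip(all_preds, all_golds)]
    return sum(scores) / len(scores)
```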
In Table 2, we report the scores for our approaches. The scores are computed after the evaluation phase, using the hyperparameters mentioned in Section 4.2. We observe that the highest score is obtained by SBT-TC (0.6856). The baseline scores (RNNSL/SpaCy) are good (≈0.65) considering that these models are not pre-trained. Notably, SP systems perform worse than their TC counterparts. A likely reason is the self-attention used in BERT-based models: since the interaction is between tokens, and not spans, each token is well represented and less consideration is given to the span representation around a single token. The reason why SBT-TC performs best out of all the LARGE models could be the random-spans Masked Language Modeling objective used in SpanBERT pre-training. Combining predicted offsets across checkpoints (Table 3), we get our best scoring system - RBTa-TC(3,∩) - which achieves a score of 0.6895. However, our best official submission 7 was a variant of the third best combination - RBTa-TC(3,∪)∩SBT-SP (0.6765). It is also observed that intersection approaches perform better than the corresponding union and single-checkpoint approaches, while union approaches perform worse than single checkpoints. This means that the individual checkpoints are predicting some extra offsets to be toxic. In Table 4, we present results on TBT 8 and TRBTa 9 for the TC and SP approaches. These are BASE models fine-tuned on the Civil Comments dataset. Since the Toxic Spans dataset contains similar text data, we expect these models to perform better than BASE models. We observe that TBT-TC and TRBTa-SP perform slightly better than BT-TC and RBTa-SP, despite being BASE models. Also, BT-SP and RBTa-TC are only slightly better than their 'Toxic' counterparts. Yet, in comparison, BASE models - BT-B and RBTa-B - without any multi-stage pre-training perform better than their 'Toxic' counterparts, and are comparable to, if not better than, their LARGE counterparts. This suggests that there is not enough data for LARGE models, and hence they tend to overfit. However, the reason behind the worse performance of the 'Toxic' systems is unclear. We also evaluate scores for a few systems on the test set after 3 epochs of training on both train and trial data (-TT). We observe that the performance on both the train and trial datasets increases significantly (≈7-10%), showing that these datasets have a similar distribution. However, the performance on test decreases for RBTa-TC-TT and RNNSL-TT in comparison to Table 2, which suggests that the test set distribution might be slightly different for the TC task. For SBT-SP-TT, we see a slight increase, showing scope for improvement of SP systems with more data. Lastly, we evaluate the token-based predictions and span-based predictions of SBT SP-TC separately. Surprisingly, token predictions alone achieve an F1 score of 0.6522 on the test set, which is much better than using both tokens and spans (0.5959). However, span-based predictions alone only achieve an F1 score of 0.1510. This means that the system is focusing heavily on token-based predictions. Hence, we need to re-evaluate our architectural decisions in order to successfully incorporate both tokens and spans together.
Conclusion
Based on our results and analysis, we conclude that Token Classification systems have an edge over Span Prediction methods on this task. BASE models perform better than LARGE models in either approach, which could imply the need for more data to train LARGE models. Our Multi-Span approach performs poorly, but the Span+Token approach shows some promise, and we need to re-evaluate our architectural choices. The reason why ToxicBERT/ToxicRoBERTa perform worse than BASE models is also an avenue for further analysis. Finally, our individual BERT-based models tend to predict extra offsets for the task. While checkpoint ensembling using intersection is a good way to address this issue, we will explore other remedies in future work.
A Official Submissions
During the evaluation period, we performed a 'cleaning' of the data by removing leading/trailing whitespace and punctuation characters in spans. Additionally, we included in spans those partial words which had more than half of their characters inside the span, and discarded the remaining partial words from spans. We considered these versions of tsd_train.csv and tsd_trial.csv to be 'clean train' and 'clean trial', respectively. During the post-eval period, we found potential issues with the cleaning, and thus we use the original files. Additionally, since the distribution of tsd_test.csv is expected to be similar to tsd_train.csv and tsd_trial.csv, the scores are much better for models trained on the tsd_train.csv file instead of clean_train.csv. However, some of our official submissions were from systems trained on the 'clean train' data. Keeping that in mind, we report the official scores for our top few approaches in Table 5.
B Integrated Gradients
We use Integrated Gradients (Sundararajan et al., 2017) from the Captum (Kokhlikyan et al., 2020) library for qualitative analysis of the predictions of the SpanBERT-SP and RoBERTa-TC models. We calculate Integrated Gradients of the targets with respect to the embedding layer outputs. The Riemann Right numerical approximation method is used, with n_steps=50. Following Ramnath et al. (2020), we calculate token-wise and word-wise importance distributions for a few examples. We refer the reader to that paper for more details.
For the Token Classification model, the targets are softmax outputs of toxicity logits of those tokens which the model predicts to be toxic, with a score greater than 0.5. For all such toxicity logits as targets, we calculate attributions with respect to the embedding layer outputs for all the tokens, and average them to get token-wise importance scores. For the Span Prediction model, we find start and end indices for all the predicted spans, and calculate respective attributions, add them, and then average them to get token-wise importance scores.
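A sketch of how such attributions can be computed with Captum is shown below; the model/tokenizer names, the target token index, and the forward function are illustrative and would need to match the actual fine-tuned classifier:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
from captum.attr import LayerIntegratedGradients

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForTokenClassification.from_pretrained("roberta-base", num_labels=2)
model.eval()

def forward_func(input_ids, attention_mask, token_index):
    # Softmax toxicity probability of a single target token
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    return torch.softmax(logits, dim=-1)[:, token_index, 1]

enc = tokenizer("See a shrink you pathetic troll.", return_tensors="pt")
lig = LayerIntegratedGradients(forward_func, model.roberta.embeddings)
attributions = lig.attribute(
    inputs=enc["input_ids"],
    additional_forward_args=(enc["attention_mask"], 5),  # target token index is illustrative
    n_steps=50,
    method="riemann_right",
)
token_importance = attributions.sum(dim=-1).squeeze(0)  # one score per token
```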
Text: offense See a shrink you pathetic troll .
Ground Spans: [ 'pathetic troll' ] Predicted Spans: [ 'pathetic troll' ]
We observe in Figure 2a that the Span Prediction model makes the correct prediction. However, on average, the word 'shrink' gets higher importance than 'pathetic troll'. This is in contrast with Figure 2b, where the Token Classification model misses out the space (because it only considers tokens) and focuses more on the words 'pathetic' and 'troll'. However, the word 'shrink' seems to be important in both cases. This means that while Token Classification models perform better, there are cases which are missed by these approaches. Additionally, some words outside of the span may contribute to the toxicity of a particular span. We will analyze such words in future work.
C Model Predictions
The predictions of the various systems for one example from the test set are listed in Table 6. The example provides the following intuitions about the data and the systems: • The spaces in between the words are, predictably, ignored by the token-based models. Moreover, conjunctions like 'and' are ignored as well. This means that additional post-processing of the data will lead to improvements in the performance of token classification systems.
• Sometimes, random words like 'go' and 'on' are selected to be toxic, which means that these types of prepositions and verbs can be removed by exact matching in the string, unless they form parts of larger spans.
• The best checkpoints of the span-based models tend to predict empty spans for the selected example. However, when using checkpoint ensembling, we see that union models return accurate spans.
• The ground spans are not entirely correct and are ambiguous. For example, it is not clear whether the word 'ignorant' should be considered toxic. The models, based on other examples, predict 'ignorant' to be toxic, but it is not present in the ground spans. This means that finding the toxic spans is not a trivial task for humans, and annotation cannot be performed easily by crowd-workers.
• In some cases, one of the occurrences of the word 'ignorant' is considered to be toxic, while the other is predicted to be benign. The first instance of 'ignorant' does not seem to be as toxic as the second instance, and therefore more analysis needs to be done to determine the 'degree' of toxicity of the spans. This can be a good direction for future research.

| 4,872 | 2021-02-24T00:00:00.000 | ["Computer Science"] |
Loss of Kat2A Enhances Transcriptional Noise and Depletes Acute Myeloid Leukemia Stem-Like Cells
Acute Myeloid Leukemia (AML) is an aggressive hematological malignancy with abnormal progenitor self-renewal and defective myelo-monocytic differentiation. Its pathogenesis comprises subversion of transcriptional regulation, through mutation and by hijacking normal chromatin regulation. Kat2a is a histone acetyltransferase central to promoter activity that we recently associated with stability of pluripotency networks, and identified as a genetic vulnerability in AML. Through combined chromatin profiling and single-cell transcriptomics, we demonstrate that Kat2a contributes to leukemia propagation through homogeneity of transcriptional programs and preservation of leukemia stem-like cells. Kat2a loss reduces transcriptional bursting frequency in a subset of gene promoters, generating enhanced variability of transcript levels but minimal effects on mean gene expression. Destabilization of target programs shifts cellular equilibrium out of self-renewal towards differentiation. We propose that control of transcriptional variability is central to leukemia stem-like cell propagation, and establish a paradigm exploitable in different tumors and at distinct stages of cancer evolution.
INTRODUCTION
Acute Myeloid Leukemia (AML) is the most prevalent leukemia in adults with a dismal prognosis of less than 30% 5-year survival (Dohner et al., 2017). It is a heterogeneous disease, clinically and pathologically, with common cellular themes of myeloid differentiation block, and recurrent molecular targeting of chromatin and transcriptional regulation. Effects on transcription are reflected in the AML mutational spectrum (Cancer Genome Atlas Research et al., 2013), as well as through the implication of general transcriptional co-regulators in AML pathogenesis, in the absence of specific mutation events (Roe and Vakoc, 2013). Examples of these are specific AML dependencies on BRD4 (Dawson et al., 2011;Zuber et al., 2011), LSD1 (Harris et al., 2012) or DOT1L (Bernt et al., 2011;Daigle et al., 2011). Moreover, chemical inhibitors exist to target these regulators and have progressed to clinical trials (Gallipoli et al., 2015). More recently, TFIID and SAGA subunit TAF12 was shown to be critical for MYB protein stability and transcriptional activity in AML cells through its participation in the TFIID complex (Xu et al., 2018).
In a recent CRISPR drop-out screen of genetic dependencies in AML, we identified several members of the SAGA complex, including the histone acetyl-transferase KAT2A, as being required for AML maintenance (Tzelepis et al., 2016). KAT2A was suggested to impact cell survival and differentiation status, but its precise molecular mechanisms of action remain to be elucidated, and it is unclear whether it is required for AML initiation as well as maintenance. Kat2a is a mammalian orthologue of the yeast histone acetyl-transferase Gcn5, and is required for H3K9 acetylation (H3K9ac) (Jin et al., 2014), a modification that fine-tunes, rather than initiates, locus-specific transcriptional activity. Kat2a is required for specification of mesodermal derivatives during early embryonic development (Lin et al., 2007) (Wang et al., 2018). Cell-to-cell variability in gene expression reflects the bursting nature of gene expression (Chubb and Liverpool, 2010): for most if not all loci, transcriptional activity is not continuous, but burst-like or episodic, with locus-specific rates of locus activation (κON), inactivation (κOFF), and RNA production (κRNA), as well as RNA degradation (Raj et al., 2006), contributing to the net effect. The frequency of bursting depends on the κON rate, whilst κRNA impacts the burst size (Cai et al., 2006). Both parameters contribute to mean gene expression, whilst transcriptional noise is more strictly dependent on, and anti-correlated with, burst frequency. In yeast, the size and frequency of bursts are influenced by histone acetylation in gene bodies and promoters, respectively (Weinberger et al., 2012).
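For orientation, the way these rates combine in the two-state (telegraph) model can be written as follows; these are standard relations quoted for reference (with δ denoting the mRNA degradation rate, a symbol not used in the original text), not results of this study:

```latex
% Burst size, and burst frequency in the bursty limit (\kappa_{OFF} \gg \kappa_{ON}):
b = \frac{\kappa_{RNA}}{\kappa_{OFF}}, \qquad f \approx \kappa_{ON}

% Mean expression depends on both activation and production rates:
\langle m \rangle = \frac{\kappa_{RNA}}{\delta} \cdot \frac{\kappa_{ON}}{\kappa_{ON} + \kappa_{OFF}}
```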
In functional terms, transcriptional noise has been directly implicated as a mechanism of cell fate choice in yeast (Blake et al., 2006) and bacteria (Suel et al., 2006), and recurrently associated, albeit correlatively, with cell fate transitions in mammalian systems (Moris et al., 2016). We had previously shown that normal transitions into hematopoietic lineage specification associate with cell-to-cell heterogeneity in gene expression (Pina et al., 2012;Teles et al., 2013). More recently, we have inhibited the activity of Kat2a in mouse embryonic stem cells, and observed an increase in transcriptional heterogeneity that impacted the stability of pluripotency with reconfiguration of correlation gene regulatory networks (GRNs) (Moris et al., 2018). Whilst we have not mechanistically linked enhanced heterogeneity with the loss of pluripotency, we observed propagation of variability of transcriptional levels through the GRNs downstream of nodes with differential H3K9ac.
Cancer, and in particular leukemia, can be perceived as an imbalance between self-renewal and differentiation in favor of self-renewal. We postulated that enhancing transcriptional variability in AML cells would increase the probability of cell fate transitions out of self-renewal into differentiation, with loss of leukemia stem-like cells (LSC).
Conditional loss of Kat2a does not affect normal hematopoiesis and allows MLL-AF9-driven transformation
We sought to investigate Kat2a requirements in vivo during early leukemia initiation (Fig. 1A). We obtained locus excision by treatment of experimental and control mice with a course of intra-peritoneal polyinosinic-polycytidylic acid (pIpC) injections, as described (Chan et al., 2011). Excision was tested 4-6 weeks after injection and consistently achieved values greater than 80% in stem and progenitor cell compartments (Fig. 1B), reflected in a profound loss of gene expression, including amongst myeloid-biased (LMPP) and committed (GMP) progenitors critical for AML initiation (Goardon et al., 2011) (Fig. 1C). Of note, locus excision generates an in-frame product that joins the first 2 and the last exons (Supplementary Fig. 1A); this product is transcribed (Supplementary Fig. 1B), but should not code for catalytic or acetyl-binding activity (Supplementary Fig. 1A). In agreement with a previous report (Bararia et al., 2016), Kat2a was dispensable for HSC maintenance and function, as assessed by BM composition acutely after excision and throughout aging (Supplementary Fig.). Transformation of progenitor-enriched, lineage-depleted (Lin-) cells with an MLL-AF9 fusion transcript was initially assessed in vitro through serial re-plating of WT and KO cells in semi-solid medium-based colony-forming assays. Transformation was observed for cells of both genotypes, with similar efficiency (Fig. 1D). Locus excision (Fig. 1E) and gene expression loss (Fig. 1F) were maintained or even increased during transformation, suggesting that loss of Kat2a does not impede the initial selection of a leukemia-transformed clone.
Kat2a depletion impairs establishment of MLL-AF9-initiated leukemia
We investigated the longer-term effects of Kat2a loss on transformation progression in vitro by continued serial re-plating. Whilst no differences were seen in re-plating ability (Fig. 2A), we noted that the colonies obtained had a clear component of differentiated cells (dubbed mixed or type II colonies (Johnson et al., 2003)) (Fig. 2B), which could also be observed in colonies initiated from primary BM transformed colonies (Fig. 2C). Accordingly, Kat2a KO colonies showed increased levels of the differentiation marker CD11b (Supplementary Fig. 2A). Interestingly, establishment of clonal liquid cultures from in vitro transformed cells revealed a relative advantage in culture initiation from WT cells (Fig. 2D), suggesting an imbalance between self-renewal and differentiation in the KO setting.
We probed the effect of Kat2a on MLL-AF9-driven transformation in vivo by injecting lethally-irradiated recipients with WT and KO Lin- BM cells transduced for 2 days with retrovirus encoding the MLL-AF9 oncogenic fusion. Animals developed leukemia 3 months after transplantation, as previously described, with a modest survival advantage for recipients of KO cells (Fig. 2E). At the point of culling, no differences in leukemia burden were apparent between genotypes. We inspected the pattern of distribution of H3K9ac at promoter and enhancer elements in MLL-AF9 primary leukemias initiated by Kat2a KO or WT cells. Although global H3K9ac was minimally changed between genotypes, there was a specific depletion of H3K9ac peaks at promoters in regions devoid of the concomitant H3K27ac activation mark (Fig. 3A). Conversely, H3K9ac was mildly increased at candidate active enhancer regions marked by the presence of H3K27ac (Fig. 3B), suggesting a possible imbalance of H3K9ac regulation between promoters and enhancers.
We focused on those promoter peaks with unique loss of H3K9ac upon Kat2a depletion, and used the ENCODE database (Auerbach et al., 2013) to confirm enriched experimental binding of KAT2A (aka GCN5) in other model systems ( Fig. 3C and Supplementary File 1).
Similar to a previous study of the effects of Kat2a and H3K9ac loss in embryoid bodies (Wang et al., 2018), we also found evidence for increased representation of targets of MYC, a known Kat2a-interacting protein (Hirsch et al., 2015). Genes associated with differentially-acetylated promoter peaks fell into 3 main categories.
Differential H3K9ac subsequent to Kat2a loss results in transcriptional variability
The role of yeast Gcn5 as a regulator of locus-specific intrinsic transcriptional noise has been described previously (Field et al., 2008; Tirosh and Barkai, 2008).
Kat2a regulates transcriptional bursting activity in cells with stem-like characteristics
Having established that loss of Kat2a associates with increased cell-to-cell variability in the expression levels of a subset of directly-targeted genes, we asked whether this variability reflected differential regulation of locus transcriptional bursting, and hence modulation of transcriptional noise. We made use of the D3E code developed by the Hemberg lab (Delmans and Hemberg, 2016). Modelling of Kat2a targets using the cells in cluster 7 revealed a significantly lower frequency of bursting and an associated high CV in KO cells (Fig. 5C and Supplementary Fig. 5C). Again, we observed a mild gain in burst size (Fig. 5C), which associates with unchanged mean expression levels (Supplementary Fig. 5C). In contrast, modelling of cells in cluster 6, with the lowest STEM-ID score, revealed no differences in transcriptional parameters between WT and KO cells (Fig. 5D). Of note, Kat2a targets had lower average gene expression in cluster 6 (Supplementary Fig. 5D). Overall, the data suggest that Kat2a target genes associate with candidate stem-like clusters and that Kat2a regulates their expression through buffering of transcriptional variability.
Kat2a regulates the activity of translation-associated genes
Having established that Kat2a loss results in deregulation of transcriptional activity with a decrease in bursting frequency, we asked if this effect was biased towards particular classes of genes. Indeed, we found that translation-associated genes, including ribosomal protein genes and translation initiation factors, were significantly overrepresented.
Kat2a loss depletes functional MLL-AF9 leukemia stem-like cells
Finally, we asked whether the enhanced transcriptional variability observed in STEM-ID-high cells affected leukemia stem cell function. Interestingly, we note a dissociation between surface phenotype and stem-like function, suggesting that the classical L-GMP surface antigen phenotype (Krivtsov et al., 2006) may not absolutely associate with function. Also, importantly, we, like others, did not observe that loss of Kat2a introduced changes to normal hematopoiesis (Bararia et al., 2016), particularly in the HSC, LMPP or GMP compartments that directly contribute to MLL-AF9 transformation. Repeated 5-FU treatment or secondary transplantation (data not shown) also failed to reveal a stem cell function defect, indicating a specific dependency of leukemia stem-like cells on expression of Kat2a. We used single-cell transcriptional analysis to capture cell-to-cell heterogeneities within seemingly phenotypically equivalent primary AML of both genotypes. Whilst differences between genotypes were minimal in terms of average gene expression, we identified a clear distinction in cell-to-cell variability of transcript levels that was specific to a subset of promoters characterized by H3K9, but not H3K27, acetylation, and which was dependent on bursting activity, which correlates with transcriptional noise. The number of mRNA molecules produced by each burst, or burst size, on the other hand, was not changed or was even mildly increased upon loss of Kat2a, suggesting a mechanistic link between H3K9ac and bursting frequency. This recapitulates findings in yeast linking H3K9ac at gene promoters with noise, but not level, of gene expression (Weinberger et al., 2012), and provides mechanistic insight into the cell-to-cell heterogeneity elicited by Kat2a loss. In a recent study, the Naëf lab has shown that locus-specific manipulation of promoter, but not distal or enhancer, H3K27 acetylation can change transcriptional bursting frequency (Nicolas et al., 2018). Whilst the association with H3K27ac is unclear in our study, there is a clear contribution of H3K9 acetylation to bursting frequency, which matches the association of promoter H3K9ac, in addition to H3K27ac, with frequency of locus activation in the Naëf study (Nicolas et al., 2018). The mild gain in burst size, although unproductive in terms of transcriptional level, could reflect the differential reconfiguration of H3K9ac at promoters and enhancers upon Kat2a loss, and will be interesting to follow up in subsequent studies.
Indeed, our lab has recently developed a KAT2A-Cas9 fusion capable of catalyzing targeted acetylation events (data not shown), which will be instrumental in answering these questions.
In linking the promoter-specific effects of Kat2a on H3K9ac and frequency of transcriptional bursting to the observed depletion of leukemia stem-like cells, we found that general metabolic categories, in particular those related to RNA processing, rather than known leukemia-associated programs, were affected in their chromatin signature and frequency of bursting.
While we cannot exclude that our focus on loss, rather than reduction, of H3K9ac, combined with current limitations of single-cell RNA sequencing in capturing low-expressed genes, may have missed individual candidates, the agreement between the two levels of analysis, and indeed the ontology overlap with published studies of Kat2a-depleted or inhibited ES cells (Hirsch et al., 2015;Wang et al., 2018), including ours (Moris et al., 2018), suggest that Kat2a may regulate pervasive, rather than cell specific programs. The identification of a candidate nucleosome displacement motif in Kat2a target promoters also indicates specific regulation of highly and widely expressed genes. Amongst these, we found that translation as a category was targeted by Kat2a depletion, and demonstrated that not only is the assembly of polysomes perturbed by Kat2a inhibition, but that perturbation of the translational machinery can re-capture defects in in vitro propagation of leukemia-initiating cells akin to those imposed by Kat2a depletion. In agreement, Morrison and collaborators (Signer et al., 2014) have reported that impaired protein synthesis upon genetic depletion of the ribosomal protein machinery impedes leukemia self-renewal, whilst having non-linear dose-dependent effects on normal hematopoiesis, mimicking our own observations in the Kat2a KO setting.
Future studies directing Kat2a histone acetylation activity to single or multiple loci will illuminate individual vs. global target gene contributions to the leukemia phenotype.
However, it is tempting to speculate that the generic nature of the programs impacted by
Isolation of mouse BM stem and progenitor cells
BM was isolated from mouse long bones as described before (Pina et al., 2015). Following red blood cell lysis, total BM suspension was depleted of differentiated cells using a cocktail of biotinylated lineage antibodies (Table B) and streptavidin-labeled magnetic nanobeads (Biolegend), according to manufacturers' instructions. Cells were directly used in transplants, colony-forming assays or flow cytometry for analysis of normal hematopoiesis. For leukemia studies, cells were cultured overnight at 37°C 5% CO2 in RPMI supplemented with 20% Hi-FBS (R20), 2mg/mL L-Glutamine, 1% PSA, 10 ng/mL of murine Interleukin 3 (mIL3), 10 ng/mL of murine Interleukin 6 (mIL6), and 20 ng/mL of murine Stem Cell Factor (mSCF) (cytokines from Peprotech) (supplemented R20), followed by retroviral transduction.
Colony forming cell (CFC) assays
For analysis of normal progenitors, sorted mouse BM cells were plated at a density of 200- cells/plate in duplicate, in MethoCult GF M3434 (STEMCELL Technologies). Colonies were scored at 7-9 days. For analysis of MLL-AF9 leukemia, retroviral-transduced BM cells were plated in M3434 at an initial density of 10,000 cells/condition and scored and re-plated every 6-7 days. Re-plating was performed up to passage 9, with 4,000 cells/condition used from plate 3. CFC assays from mouse MLL-AF9 transformed lines were seeded in M3434 and scored 6-7 days later. RPS6K inhibition studies were set up by adding 3.3 μL DMSO, either as vehicle or containing a final concentration of 3.5 μM PF4708671 (Tocris), directly to the methylcellulose medium, with mixing prior to cell addition.
In vivo analysis of leukemia initiation and engraftment
For analysis of normal hematopoiesis, 10^6 Kat2a WT or Kat2a KO cKit+ cells were intravenously injected via the tail vein into lethally irradiated (2 × 5.5 Gy) CD45.1 recipient mice.
Retroviral transduction
Retroviral construct MSCV-MLL-AF9-IRES-YFP was previously described (Fong et al., 2015). For viral particle production, Human Embryonic Kidney (HEK) 293T cells were seeded at 2.5 × 10^6 cells per 10-cm dish in DMEM supplemented with 10% Hi-FBS, 2 mg/mL L-Glutamine and 1% PSA, and cultured overnight at 37°C 5% CO2. The following day, a transfection mix [per plate: 47.5 μL of TransIT (Mirus), 5 μg of packaging plasmid psi-Eco vector, 5 μg of retroviral vector and 600 μL of Optimem Medium (Gibco)] was prepared according to the manufacturer's instructions and added dropwise to cells, followed by plate swirling and overnight culture at 37°C 5% CO2. Medium was replaced with R20 the next day. At 24 and 48 hours after R20 replacement, medium was collected and filtered. BM cells from 6-10-week-old Kat2a WT and Kat2a KO mice were collected and Lineage-depleted as described above (Isolation of mouse BM stem and progenitor cells), and cultured overnight at 37°C 5% CO2 in supplemented R20. For viral transduction, BM cells were briefly centrifuged at 400 g for 5 minutes, and viral particle suspension medium supplemented with 10 ng/mL mIL3, 10 ng/mL mIL6, and 20 ng/mL mSCF was added to a final density of 10^6 cells/mL. Cells were plated in 6-well plates and centrifuged for 1 hour at 2000 rpm, 32°C. Afterwards, cells were incubated for 4 hours at 37°C 5% CO2. A second round of viral transduction was performed, with the post-centrifugation incubation performed overnight. The next day, cells were collected, pelleted and washed three times with PBS (2x) and R20 (1x). YFP level was assessed by Flow Cytometry on a Gallios Analyser (Beckman Coulter).
Establishment of MLL-AF9 transformed cell lines
MLL-AF9 clonal liquid cultures were set up using MLL-AF9 retrovirus-transduced primary BM cells (see Retroviral Transduction section). Transformed cells enriched in vitro by 3 rounds of serial plating (CFC assays) were maintained in R20 supplemented on alternate days with mSCF, mIL3 and mIL6, all at 20 ng/mL. Cells were cultured at 2 × 10^5 cells/ml and passaged when they reached a density of 1 × 10^6 cells/ml.
Flow Cytometry
Cell surface analysis of BM and Sp was performed using a panel of antibodies described in Table B, according to the sorting strategies detailed in Table C. Data were acquired using a Gallios Analyser (Beckman Coulter) and analysis was performed in Kaluza software (Beckman Coulter). For sorting, an Influx or an AriaII BD sorter was used.
Quantitative Real time PCR (Q-RT-PCR)
Total RNA was extracted using Trizol Reagent (Invitrogen). RNA from equal numbers of cells was reverse-transcribed using Superscript II (Invitrogen), following the manufacturer's instructions. Complementary (c)DNA was analyzed in duplicate or triplicate by qPCR using Taqman gene expression assays (Table D) and Taqman Gene Expression Mastermix (Applied Biosystems). Gene expression levels were determined by the Pfaffl method following normalization to a reference gene, as stated. For exon 2-18 in-frame products, qPCR using Sybr Green Master Mix (Applied Biosystems) was performed in triplicate.
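For reference, a sketch of the Pfaffl calculation is shown below; the Ct values and primer efficiencies are illustrative, not data from the study:

```python
def pfaffl_ratio(e_target, e_ref,
                 ct_target_control, ct_target_sample,
                 ct_ref_control, ct_ref_sample):
    """Relative expression ratio (Pfaffl method):
    ratio = E_target^dCt_target / E_ref^dCt_ref, with dCt = Ct(control) - Ct(sample)."""
    d_ct_target = ct_target_control - ct_target_sample
    d_ct_ref = ct_ref_control - ct_ref_sample
    return (e_target ** d_ct_target) / (e_ref ** d_ct_ref)

# Illustrative values; an efficiency of 2.0 corresponds to perfect doubling per cycle
print(pfaffl_ratio(2.0, 2.0,
                   ct_target_control=25.0, ct_target_sample=27.5,
                   ct_ref_control=20.0, ct_ref_sample=20.1))
```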
After centrifugation (Beckman SW40Ti rotor) at 260,900 g for 3 hours at 4°C, gradients were fractionated at 4°C using a Gilson Minipulse 3 peristaltic pump with continuous monitoring (A254nm), and polysome profiles were recorded using a Gilson N2 data recorder.
Chromatin Immunoprecipitation sequencing (ChIP-seq)
Total BM cells from duplicate pools of MLL-AF9 Kat2a WT and Kat2a KO primary leukemia samples were crosslinked with 1% Formaldehyde Solution (Sigma Aldrich) for 10 min at room temperature (RT) with gentle rotation (50 rpm). Fixation was stopped with Glycine, and cells were incubated for 5 min at RT with gentle rotation (50 rpm), followed by two washing steps in ice-cold PBS. Cell pellets were resuspended in Lysis buffer (Table E), followed by nuclei preparation. Chromatin pellets were sheared in a Bioruptor Pico Plus (Diagenode) in TPX tubes, using 3 runs of 11 cycles (cycle: 30 sec ON, 30 sec OFF) on the high setting. A short spin was performed between runs and samples were transferred to new TPX tubes. 1:10 of the total sheared chromatin was kept as input reference. Immunoprecipitation was set up using Dilution Buffer, Protease Inhibitor cocktail, and the respective antibody (Table F), and the sheared chromatin was incubated overnight at 4°C with rotation. On the following day, protein A/G magnetic beads were pre-cleared with Dilution Buffer supplemented with 0.15% SDS and 0.1% BSA, then mixed with the immunoprecipitation mix and incubated for at least 4 hours at 4°C with rotation. Chromatin-antibody-bead mixes were sequentially washed with ChIP Wash1, ChIP Wash2 and ChIP Wash3 (Table E).
Raw ChIP-seq reads were analyzed on the Cancer Genomics Cloud (CGC) platform (Lau et al., 2017). Reads were aligned to the mouse mm10 genome obtained from the UCSC genome browser using the Burrows-Wheeler Aligner (BWA). Peaks were called from the aligned reads using the MACS2 peak calling algorithm with a significance q-value of 0.05. The deepTools bamCoverage command (Ramirez et al., 2016) was used to compare the enrichment of reads in the ChIP-seq samples relative to corresponding controls. ChIP-seq samples with distinct separation between control and sample pairs for a given mark were retained, with exclusion of one H3K4me1 and one H3K27ac replicate. To analyze the changes in acetylation patterns at promoter and enhancer elements, H3K4me3 and H3K4me1 peaks from WT and KO were crossed with H3K9ac-only peaks, H3K27ac-only peaks and dual H3K9ac/H3K27ac peaks from the corresponding genotypes. The H3K9ac-only peaks associated with H3K4me3 (promoter elements) were used for further analysis. Genomic peaks were obtained for Kat2a WT and Kat2a KO genotypes separately using Bedtools intersect (Quinlan and Hall, 2010), and H3K4me3/H3K9ac peaks exclusive to the WT genotype were retained as putative Kat2a peaks. Peak locations were converted to fastq sequences using the UCSC table browser tool (Karolchik et al., 2004). The Genomic Regions Enrichment of Annotations Tool (GREAT) (McLean et al., 2010) was used to assign gene identities to the fastq sequences associated with putative Kat2a peaks. Using the GREAT tool, the genomic region for gene identification was restricted to 1 kb upstream and 500 bp downstream of the transcription start site (TSS) to infer genes regulated at the promoter level. We used the ENCODE ChIP-Seq Significance Tool (Auerbach et al., 2013) to obtain putative transcription factors regulating these targets, as well as lists of genes experimentally bound by GCN5/KAT2A, to confirm the identity of putative Kat2a targets. The MEME-ChIP tool version 4.12.0 (Bailey et al., 2009) was used for motif analysis. Seurat (Butler et al., 2018) was used for pre-processing the single-cell count-matrix data and obtaining differential gene expression between the two genotypes. The RaceID/StemID (Grun et al., 2016) algorithms were used for clustering using t-SNE and for obtaining a pseudo-temporal arrangement of clusters based on entropy information and cluster stem scores. Parameters of stochastic gene expression were fitted to the two-state promoter model using the D3E algorithm (Delmans and Hemberg, 2016). R scripts were written for plotting the results as boxplots and for bootstrapping of the distance-to-median measure between Kat2a WT and KO.
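The distance-to-median bootstrap can be rendered as follows (a Python sketch of the described analysis; the window size, pseudocounts and matrix layout are our assumptions, not taken from the original R scripts):

```python
import numpy as np

def distance_to_median(expr, half_window=25):
    """expr: genes x cells matrix. Returns, per gene, log2(CV^2) minus the
    running median of CV^2 over genes ranked by mean expression."""
    mean = expr.mean(axis=1)
    cv2 = expr.var(axis=1) / (mean ** 2 + 1e-12)
    order = np.argsort(mean)
    med = np.empty_like(cv2)
    for rank, g in enumerate(order):
        lo, hi = max(0, rank - half_window), min(len(order), rank + half_window)
        med[g] = np.median(cv2[order[lo:hi]])
    return np.log2(cv2 + 1e-12) - np.log2(med + 1e-12)

def bootstrap_dm_difference(expr_wt, expr_ko, gene_idx, n_boot=1000, seed=0):
    """Bootstrap the KO-minus-WT difference in mean distance-to-median over a
    gene set (e.g., Kat2a targets). Returns the 2.5th/50th/97.5th percentiles."""
    rng = np.random.default_rng(seed)
    dm_wt = distance_to_median(expr_wt)[gene_idx]
    dm_ko = distance_to_median(expr_ko)[gene_idx]
    diffs = []
    for _ in range(n_boot):
        i = rng.integers(0, len(gene_idx), len(gene_idx))
        diffs.append(dm_ko[i].mean() - dm_wt[i].mean())
    return np.percentile(diffs, [2.5, 50, 97.5])
```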
Statistical analysis
Statistical tests performed are specified in the figure legends. Differences were considered significant at p < 0.05. All analyses were performed in the R statistical language (version 3.4.4).
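For illustration only, a comparison of the kind used in the noise analysis above (a Wilcoxon rank sum test plus a bootstrap of the distance-to-median measure) might look like the following Python sketch; the function and argument names are hypothetical, and scipy's ranksums only approximates R's wilcox.test with continuity correction.

```python
# Illustrative sketch, not the original R analysis.
import numpy as np
from scipy.stats import ranksums

def compare_noise(wt_values, ko_values, n_boot=1000, seed=0):
    """Rank-sum p-value and a bootstrap CI of the WT-KO distance-to-median difference."""
    stat, pval = ranksums(wt_values, ko_values)     # Wilcoxon rank sum test (approximation)
    rng = np.random.default_rng(seed)

    def dm(x):                                      # mean absolute distance to the median
        x = np.asarray(x)
        return np.abs(x - np.median(x)).mean()

    boot_diff = [dm(rng.choice(wt_values, len(wt_values), replace=True))
                 - dm(rng.choice(ko_values, len(ko_values), replace=True))
                 for _ in range(n_boot)]
    return pval, np.percentile(boot_diff, [2.5, 97.5])
```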
Data deposition
All single-cell RNAseq data and ChIPseq data were deposited in GEO (SuperSeries GSE118769).
DECLARATION OF INTERESTS
S.P. is a co-founder of Noncodomics, a data analysis company.
ACKNOWLEDGEMENTS
The Kat2a parameters for genes in the Robust gene set in Kat2a WT and KO primary leukemic cells. Parameters were derived by applying the D3E algorithm to single-cell RNAseq data. (E) Estimated burst frequency (top) and burst size (bottom) parameters for Kat2a target genes. In (D) and (E), *p < 0.05, ***p < 0.001, computed with the Wilcoxon rank sum test with continuity correction. | 5,331.2 | 2018-10-18T00:00:00.000 | [
"Biology"
] |
Applying an Idea Management System (IMS) Approach to Design and Implement a Collaborative Environment in Public Service-Related Open Innovation Processes
Novel ideas are the key ingredients of innovation processes, and an Idea Management System (IMS) plays a prominent role in managing ideas captured from external stakeholders and internal actors within an open innovation process. Considering a specific case study, Lecce (Italy), we have designed and implemented a collaborative environment that provides an ideal platform for government, citizens, and other actors to share ideas and co-create the value of innovative public services in Lecce. In this study, the application of an IMS with six main steps (idea generation, idea improvement, idea selection, refinement, idea implementation, and monitoring) shows that this approach helps service providers to exploit the intellectual capital and initiatives of regional stakeholders and citizens, and to stay in line with the needs of society. Moreover, we have developed two support tools to foster collaboration and transparency: a sentiment analysis tool and a gamification application.
Introduction
Value co-creation is an approach to create innovative services. Co-creation is the process by which products, services, and experiences are developed jointly by companies, their stakeholders, and final customers, opening up a new world of value [1]. It offers a new way of conceiving the provision of public services in a mutual relationship among service providers, professionals, service users, and citizens, making these services much more effective, efficient, and far more sustainable [2].

Progress in technologies such as the Web 2.0 phenomenon [3] offers an ideal platform for service providers, users, and other actors to communicate and interact with each other, exchanging ideas and opinions, which is necessary (but not sufficient) to foster the process of value co-creation.

Great ideas are key ingredients of the innovation process for organizations and communities. Ideas flowing without a proper management mechanism to evaluate, categorize, and prioritize them would not assist the innovation process. As stated by Geoff Mulgan [4], "Innovation is often given complex definitions," but he prefers the simple one: "new ideas that work" [4].
A review of the related literature shows the importance of ideas in innovation processes. As an example, the European Foundation for Quality Management (EFQM) defines innovation as "the practical translation of ideas into new or improved products, services, processes, systems or social interactions".

Van de Ven and Poole (1990) [5] argue that "invention is the creation of a new idea, but innovation is more encompassing and includes the process of developing and implementing a new idea. The development of innovation is not a linear process (a pipeline of sequential processes), but it needs a systemic approach". Therefore, innovation starts with the 'management of ideas' [6].

A formal process such as an Idea Management System (IMS), structuring the aforementioned stages of capturing, filtering, evaluating, and implementing the best ideas, therefore seems essential. The lack of such a system may cause superfluous innovation efforts [7].

An open innovation process consists of the complex interactions among many individuals, organizations, and their operating environment [8], [6]. Chesbrough defines open innovation as "the use of purposive inflows and outflows of knowledge to accelerate internal innovation, and expand the markets for external use of innovation, respectively. Open innovation is a paradigm assuming that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology" [9]. Its purpose is to drive structural changes far beyond the scope of what any organization could do alone. Open innovation was first defined by Henry Chesbrough as opening up the process of innovation to external stakeholders. He notes that global competition leads to shorter innovation cycles, making it imperative for companies to tap expertise and creativity outside the organization [10], [11].
The Open Government (OG) concept, which emphasizes including citizens and society, as well as administration members, within governmental processes, is a translation of open innovation to governmental processes. OG seeks to engage citizens in order to increase efficiency within the political/organizational decision process, leading to society's satisfaction [12].

In this study, we propose a conceptual framework describing the idea life-cycle and the tools enabling collaboration between citizens and the Public Administration, with particular focus on the Idea Management System and its role in each step. Moreover, we propose a sentiment analysis within the context of the public administration, with the purpose of providing reliable estimates and analysis of what citizens think about the institutions, the efficiency of services and infrastructures, and the degree of satisfaction with special events.
The proposed IMS is not only a tool for collecting ideas, but also a collaborative tool for decision support and a tool to support a democratic society where people participate proactively.
To meet the needs of public administration dictated by the new standards of Open Government, the IMS proposed in this paper achieved the following results: (1) transparency in the decisions made by the Public Administration; (2) a bottom-up approach; (3) citizens' active participation; and (4) the development of services more responsive to users' needs.

The paper is structured as follows. The problem description is given in the next section. The conceptual framework is discussed in Section 3. The Lecce 2019 - IMS case study is described in Section 4. Related works are discussed in Section 5. Section 6 concludes the paper.
Problem Description
According to Edelmann, governments are aware of the significance of citizens' engagement in decision-making processes, integrating their potential into the innovation process and achieving better outcomes [12], which reflects a paradigm shift in public administration. However, as stated by Collm and Schedler [13], the innovation process in the public sector has, up to now, occurred in closed-off processes, mainly handled by the internal public administration and sometimes with consultancy support [13].

Public administration has understood the need to encourage stakeholders and citizens to participate; nevertheless, it still has not found its role in the virtual sphere [12].
The ubiquitous presence of ICT together with the recent willingness of citizens to participate and contribute online can enable government agencies to restructure their interaction with citizens in order to achieve better collaboration results [14].
IMS has been successfully implemented in the private sector with the purpose of identifying real demands in order to generate services and products based on them [7]. However, the current discussion on open innovation has hardly touched upon the public sector. For example, Brunswicker has investigated the possibility of applying crowdsourcing platforms in the governmental context. These studies showed that design principles derived from open innovation projects in the corporate world may not be directly applied in the governmental context: they need to be adjusted and integrated [15].

Recently, Twitter, one of the most popular micro-blogging tools, has gained significant popularity among social network services. Micro-blogging is an innovative form of communication in which users can express, in short posts, their feelings or opinions about a variety of subjects or describe their current status.

Much of the research is particularly interested in the Sentiment Analysis of Twitter messages. The brevity of these texts (tweets cannot be longer than 140 characters) and the informal nature of social media lead to the use of slang, abbreviations, new words, URLs, etc. These factors, together with frequent misspellings and improper punctuation, make it more complex to understand people's opinions and feelings.
In this paper we present the Idea Management System developed in the Puglia@service project, supporting the co-creation activities in the initiative for Lecce's candidacy as European Capital of Culture 2019.
Conceptual Framework
The proposed idea life-cycle is characterized by the following six steps (Figure 1):
1. Idea Generation;
2. Idea Improvement;
3. Idea Selection;
4. Refinement;
5. Implementation;
6. Execution and Monitoring.
Each step is carried out in collaboration with citizens or between citizens and the public administration; it is supported by tools that allow the person responsible for each step to perform its functions in a collaborative way.

The IMS, starting from the process designed in BPM, gives users the opportunity to create a social network where they can share, vote on, and promote ideas. This environment is designed around local government and citizen needs and provides an engagement approach that is more efficient and effective than the usual BPM interfaces.

Idea Generation. This is the phase in which ideas are input by users. It can take place according to two techniques: Push (ideas about particular topics are requested by the public administrator) and Pull (citizens can suggest ideas to the Public Administration). The actors involved are the Public Administration and citizens. The importance of this phase lies in the free expression of citizens, who are able to generate ideas of common and public utility, encouraging the co-creation of services and participation in the "res nostra".

The Idea Management System supports idea collection and contest creation, and allows ideas to be shared on the most important social networks in order to encourage discussion and promotion of the IMS. Tags and categorization of ideas simplify their organization and retrieval.
Idea Improvement. This is the phase of collaboration and collective development of ideas. Once generated, ideas are shared and improved thanks to continuous collaboration between users, who may, through the Idea Management System, contribute to the enrichment of ideas with comments, pictures, links, etc. In this way, from one or more initial ideas a process of co-creation, socialization, and exchange of experience and knowledge is triggered.

Ideas are made available to the whole community, which collaborates to transform them into a structured project. Therefore the community, properly supported, can improve ideas, exploiting the know-how and multiple perspectives emerging from the system. To encourage the engagement of citizens and to create participatory behavior, gamification tools were developed.

Idea Selection. This phase supports the evaluation, selection, and ranking of ideas. The Idea Management System allows voting for ideas, which leads to a ranking. This ranking points out ideas with greater priority or ideas considered by users to be better than others. The indicators used for the evaluation are, for example, the number of threads or the vitality index, which expresses how long the idea remains active over time. In addition, it is possible to make an indirect analysis of ideas through sentiment analysis, which allows identifying the issues particularly important for the citizen/user. The output of this phase is the selection of ideas to be analyzed in detail by studying their sustainability. Charts show the most popular ideas and suggest the most active members of the community. In this way, the most read, commented, or appreciated ideas emerge and are highlighted more than others. The actors involved are both citizens and the Public Administration (PA). In addition to the Idea Management System, a sentiment analysis tool and a dashboard that intuitively shows the collected data (both to the PA and to citizens) have been implemented. Figure 2 illustrates the sentiment analysis dashboard, useful to select the most popular ideas.

Refinement. In this phase the selected ideas are refined thanks to the involvement of expert users (citizens or employees of the PA) able to describe in detail all the steps and evolutionary processes that accompany the idea. The expert group is formed through social office tools, by matching the roles specified by the author of the idea and the co-authors as necessary for the execution and implementation of the idea itself. The skills available in the profile of users are entered during Idea Management System registration. To assess the social and economic sustainability of the idea, the methodology [25] based on the Value Network Analysis [26], supported by simulation tools, is applied. Cooperation and communication between the two actors and the Idea Management System become essential to support this functionality. The output of this phase is the transformation of the idea into a product/service that is sustainable in technological, economic, and social terms. Therefore, it is important to identify the role of the user through the tools of the social office.
Implementation. The actors involved in this phase are both the PA and citizens, experts and non-experts.

When the ideas require the development of an application and/or an information service, the IMS provides a collaborative tool allowing the user to report the needs that are relevant to developing the service. This notification can be accompanied by documentation and models created using the tool. The technologist will try to implement the new service by integrating existing applications from the marketplace. Where it is necessary to develop a new application that is not in the marketplace, the tool allows the user to report these needs to the technologist (in this case the notification shall be accompanied by documentation and models that clarify the functionality).

During each phase, in order to engage and encourage users to continue their collaboration, the IMS allows the collaborative resolution of problems that emerge during the implementation of the idea. Moreover, the IMS transparently associates additional information with each phase, as follows:
• updates on the status of implementation of the idea;
• resources (technical, human, ...) associated with implementation of the idea;
• information about any problems encountered in the implementation phase;
• financial data;
• timelines.

Execution and Monitoring. The final stage of the process of co-creation is to run the service and continuously monitor the results. The monitoring phase is very important because it allows evaluating and monitoring the success or failure of the new service through the feedback received from users and the PA.

Monitoring techniques are questionnaires, interviews, surveys, reviews, and the feedback collected by the Idea Management System. In this phase sentiment analysis tools are also used. The functionality of "Analysis and tracking of ideas" provides statistics and graphs that depict the performance of the Idea Management System over time. All contents of the system can be displayed in the form of a summary table, together with the frequency of interactions within the community.
The actors involved are: PA, citizens, and community of users.The feedback of the users and the data collected allow generating suggestions for improvement and new ideas that will reopen the cycle.
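A minimal, purely hypothetical sketch of how the idea life-cycle phases and the per-phase transparency information listed above could be represented as a data structure is shown below; the field names are illustrative and do not reflect the actual IMS schema.

```python
# Hypothetical model of an idea moving through the six life-cycle phases.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

class Phase(Enum):
    GENERATION = 1
    IMPROVEMENT = 2
    SELECTION = 3
    REFINEMENT = 4
    IMPLEMENTATION = 5
    EXECUTION_AND_MONITORING = 6

@dataclass
class Idea:
    title: str
    author: str
    co_authors: List[str] = field(default_factory=list)
    phase: Phase = Phase.GENERATION
    status_updates: List[str] = field(default_factory=list)   # implementation status
    resources: Dict[str, str] = field(default_factory=dict)   # technical / human resources
    problems: List[str] = field(default_factory=list)         # issues met during implementation
    financial_data: Dict[str, float] = field(default_factory=dict)
    timeline: Dict[str, str] = field(default_factory=dict)

    def advance(self) -> None:
        """Move the idea to the next life-cycle phase, if any."""
        if self.phase.value < Phase.EXECUTION_AND_MONITORING.value:
            self.phase = Phase(self.phase.value + 1)
```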
Case Study "Lecce 2019 -IMS"
The first four steps of the proposed framework were tested in the city of Lecce on the occasion of the initiative for Lecce's candidacy as European Capital of Culture 2019.

Counting 90,000 citizens, Lecce is a mid-sized city, which represents the most important province of the Salento region, located in the "heel" of the Italian "boot". Even though Lecce is known for its great cultural, artistic, and naturalistic heritage, it can also be considered a typical example of a southern Italian city from a socio-economic point of view: poor in infrastructure, with high and increasing unemployment rates. However, despite this disadvantageous context, remarkable investments in the university research and tourism sectors have been taking place over the last few years, making Lecce an area of attraction at the international level as well.
The Municipality of Lecce (Italy) has decided to change the approach for the creation of a shared path towards a social model, in which the direct participation and collaboration of the citizens is included in order to generate innovation.
Public administration and citizens are generally not coordinated with each other, since the traditional approach to urban planning is top-down and often does not meet citizens' needs. In the guidelines of the Bid Book for the candidacy of Lecce as European Capital of Culture 2019, one of the main evaluation criteria is "the city and citizens", referring to concrete initiatives that must be launched to attract the interest and participation of local, neighboring, and foreign citizens.

Citizens' involvement and the definition of their needs are important elements for Lecce. For these reasons the Municipality of Lecce organized LUACs (urban, open, creative laboratories): a kind of informal debate aiming to foster citizens' participation. "Lecce 2019 - Idea Management System" was adopted to integrate the LUACs and other initiatives that enable interaction between citizens.

As regards access to the platform, the correlation between the number of visits and the interest shown by citizens and local associations towards the Lecce 2019 initiative is evident. The launch of the website, which took place in July 2013, was followed by a steady increase in visits, with a peak in September, close to the deadline for the submission of the bid book. Thereafter there was a decline in November, after the announcement of the results of the first phase of selection; this shows that the number of accesses and interactions is strongly influenced by the diffusion and maturity of the various initiatives (see Table 1). In this regard, Caritas Diocesana of Lecce proposed, within the IMS, the idea of creating a network of solidarity aimed at collecting and distributing food. This idea was voted on and commented upon by other voluntary associations (Red Cross of Lecce, Comunità Emmanuel, etc.) and by some local shops, all enthusiastic and ready to participate.

The Municipality of Lecce, considering the idea interesting for the local community and evaluating, thanks to sentiment analysis indicators, the interest shown on the web for this topic, intervened by proposing itself as guarantor and coordinator of this network. Meetings and focus groups were organized in order to create a "network of solidarity" involving several actors, such as voluntary associations, Confcommercio, Confesercenti, Confindustria, and the managers of the Puglia@service project. The latter have offered to implement a web/mobile application able to facilitate the matching of demand and supply of unsold food. The execution and monitoring phase is in progress.
Idea Management System's Main Functions
Our research on IMS has mainly focused on three macro-categories in order to select the best solution for public administration usage.
1. Solutions derived from European research;
2. Solutions based on commercial (market) IMS tools;
3. Solutions based on Open Source tools.
Having investigated the solutions in all the aforementioned categories, the implementation of "Lecce 2019 - IMS" was carried out using the Gi2MO IdeaStream tool. It consists of a set of modules that customize Drupal [27] in order to implement it as an Idea Management system.

The decision to use Open Source (OS) software offers the following benefits:
• Cost-effectiveness: since the cost of proprietary software is a considerable expense, OS software also gives the possibility to switch to other suppliers to receive support.
• Security and reliability: the software is more secure because users can view the source code and improve it; the product is, therefore, more stable and always updated.
• Freedom: OS software allows interaction between multiple systems in a simple and fast way, since the source code is always available.

The implemented Idea Management functions are based on two fundamental entities, Contests and Ideas. Contests represent areas of interest requiring new solutions and approaches to improve the condition of public goods.
Ideas represent the possible solutions proposed to satisfy the needs expressed by the Contests.

These two entities are the prerogative of two types of users: on the one hand, Contests can be expressed by people representing the Public Administration; on the other hand, Ideas can be expressed freely by the citizens.

The two categories (Contests and Ideas), despite being conceptually different, are characterized by several common elements: both can be represented by a title and a textual description; both can have documents and files as attachments; they can belong to categories that allow a better definition of their scope and context; they can have associated tags; they can be shared on social networks; and, finally, both contests and ideas have a single author.
In addition to these common elements, Contests and Ideas are characterized by some distinctive features.
In particular, a Contest has a starting date, from which the contest becomes visible and available to the community, and an end (closure) date, after which nobody can contribute new ideas to it. Ideas have some unique characteristics: the main author and the co-authors can edit the content of an idea, but co-authors are prevented from adding or removing other collaborators (this remains the prerogative of the author of the idea).
As mentioned earlier, in order to better manage the ideas proposed by the community, the IMS provides idea life-cycle management; to realize it, a management team, called the "board", is provided. This team is composed of the idea/contest author and a group of selected users.
The main functionalities that IMS offers to its users can be divided into four categories, namely, systems administration tool, co-creation tools for public administration, co-creation tools for citizen users, and information tools for citizen users (see Table 2).
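For illustration only, the contest visibility window and the author/co-author editing rules described above could be modelled as in the following sketch; the class and method names are assumptions rather than the actual IMS implementation.

```python
# Hypothetical illustration of the Contest/Idea rules described above.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Contest:
    title: str
    start: date
    end: date

    def is_open(self, today: date) -> bool:
        """New ideas can be submitted only while the contest is open."""
        return self.start <= today <= self.end

@dataclass
class IdeaPermissions:
    author: str
    co_authors: List[str] = field(default_factory=list)

    def can_edit_content(self, user: str) -> bool:
        # Author and co-authors may edit the idea content.
        return user == self.author or user in self.co_authors

    def can_manage_collaborators(self, user: str) -> bool:
        # Only the author may add or remove collaborators.
        return user == self.author
```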
Sentiment Analysis Case Study
To test the effectiveness of the Sentiment Analysis tool developed to support the Idea Management System, we considered the tweets about the event Lecce 2019 and analyzed their sentiment. We collected a corpus of tweets using the Twitter search API between September 2, 2014 and November 17, 2014, the period in which there were relatively many Twitter messages about the event. We extracted tweets using a query-based search for "Lecce2019" and "noisiamolecce2019", the hashtags most used for this topic. The resulting dataset contains 5000 tweets, of which 3560 are re-tweets. Duplicates were automatically removed, leaving a set of 2000 tweets with the class distribution shown in Table 3. In order to obtain a training set for creating a language model useful for sentiment classification, a step of manual annotation was performed using the following three labels:
• Positive: tweets that carry positive sentiment towards the topic Lecce 2019.
• Neutral: tweets that do not carry any sentiment towards the topic Lecce 2019, or tweets that have no mention of or relation to the topic.
• Negative: tweets that carry negative sentiment towards the topic Lecce 2019.

Before performing the sentiment classification, the text was processed by the pre-processing component. Seven classification runs were performed, each with a different approach to feature selection and a different fixed training-set size. An accuracy of 77.6% was achieved using an individual sentiment classification algorithm and unigram features with stop-word and repeated-letter elimination [28].
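Purely as an illustration of the kind of set-up described (unigram features, stop-word removal, collapsing of repeated letters, three labels), a classifier along these lines could be sketched as follows; the original tool's exact algorithm and features are not reproduced here, and all names are assumptions.

```python
# Illustrative unigram sentiment classifier sketch, not the original tool.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def squeeze_repeated_letters(text: str) -> str:
    # Collapse runs of 3+ identical characters, e.g. "bellissimoooo" -> "bellissimoo".
    return re.sub(r"(.)\1{2,}", r"\1\1", text.lower())

def build_classifier(stop_words):
    return make_pipeline(
        CountVectorizer(preprocessor=squeeze_repeated_letters,
                        stop_words=stop_words,      # e.g. an Italian stop-word list
                        ngram_range=(1, 1)),        # unigram features only
        LogisticRegression(max_iter=1000),
    )

# Usage (labels: "positive", "neutral", "negative"):
# clf = build_classifier(stop_words=italian_stopwords)
# clf.fit(train_tweets, train_labels)
# predictions = clf.predict(test_tweets)
```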
Idea Management System
The literature on Idea Management (IM) is predominantly associated with innovation management in organizations [29]. As reported by Baumgartner, innovation management practices are not new and were introduced in several organizations well before the explosion of IT systems (e.g., the 30-year history of innovation management in Toyota had always been oriented towards the capture of novel ideas) [30]. However, what is nowadays known as 'idea management' in the IT sector refers to systems that appeared in the late 90s [31]. Idea Management Systems are tools for collecting a community's ideas for innovation purposes. In order to evaluate captured ideas precisely, Westerski et al. [10] have tried to address this problem by introducing the annotation of ideas, through which the characteristics of ideas can be described, highlighting their distinctive features.

A review of the IT-related literature shows the development of IM through applications of IMS. Xie and Zhang, for instance, designed an IMS to support the process of idea generation, evaluation, improvement, and implementation [32]. The work of Westerski et al. [33] deals with the development of IMS and extends it from being nothing more than a box where employees could submit their ideas on a piece of paper to Web 2.0 techniques. Such a transformation allows complex submission and handling of data in IMS. They also suggest the use of semantic web principles to link organizational systems for better idea assessment [34]. An IMS can also be considered a sharing point among users and organizations [35]; in this manner it can be utilized as a managing and controlling tool for open innovation [36]. An example of an Idea Management System is OpenIDEO, which enables people to collaborate in developing innovative solutions to pressing social and environmental challenges. OpenIDEO is based on a collaborative process characterized by six steps. These tools, already adopted within enterprises, are able to avoid failures due to the implementation of products or services that do not suit market needs.

An Idea Management System can be defined, therefore, as a process of needs recognition, idea generation, and evaluation [7], [37]. These platforms aim to support all the aforementioned practices of idea management and allow organizations to collect community ideas during enterprise procedures [38]. The main contribution of this paper is to develop an approach, based on the idea life-cycle, which uses the concept of open innovation. We apply the proposed approach in the context of the Public Administration in order to co-create innovative public services. In this approach, all steps of the life-cycle are supported by the Idea Management System, which interacts with a number of technological and methodological tools to facilitate collaboration and co-creation.
Sentiment Analysis
Sentiment Analysis (SA) and Opinion Mining study, analyze, and classify documents containing opinions expressed by people about a product, service, event, organization, or person. The objective of this area is the development of linguistic analysis methods that allow identifying the polarity of opinions.

In the last decade, SA has developed strongly, thanks to the large and increasing number of user-generated documents on the World Wide Web and to the diffusion of social networks.

In 2001 the papers of Das [39] and Tong [40] began to use the term "sentiment" in reference to the automatic analysis of evaluative text and the tracking of predictive judgments. Since that time, awareness of the research problems and opportunities related to SA and Opinion Mining has been growing. The growing interest in SA and Opinion Mining is partly due to the different application areas: in the commercial field, the analysis of product reviews [41]; in the political field, the identification of the electorate's mood and therefore of voting (or abstention) trends [42]; etc.
In social environments, SA can be used as a survey tool that allows understanding existing points of view: for example, to understand the opinion that people have about a subject, to predict the impact of a future event, or to analyze the influence of a past event on a recent one [43].

Big data technologies, observation methods, and the analysis of behavior on the web make SA an important decision-making tool for the analysis of social networks, able to capture relations, culture, and sociological debate.

SA carried out on social networks allows public administrations to identify and meet users' needs, and also enables citizens to affect service delivery, to participate in the creation of new services, or even to identify innovative uses of existing services [44].
Results and Future Developments
The proposed approach was used in the context of a Public-Private Partnership for a charitable cause. This need was expressed by citizens through the IMS platform and was taken into consideration by the Local Government. The idea was to create a "food bank" for collecting excess food from restaurants and supermarkets and distributing it among the needy. Based on this idea, a specific platform, which enables both donors and poor citizens to interact, has been developed. Such a system reduces food waste and, at the same time, increases support for needy citizens. It is one of the significant preliminary results of the implemented system, achieved through the exploitation of the IMS platform.

The IMS tool could be adapted to different contexts and to a variety of interactions between government and citizens, but it requires further improvements. A more user-friendly interface and a mobile version could be valuable additions. A moderator can be in charge of preserving the essence of the platform (an aggregator of ideas and perspectives), and could have two different roles: checking for spamming activities and detecting inappropriate content. However, it is crucial to pay attention to the second role, since it could be seen as an attempt to restrain freedom, which is one of the fundamental characteristics of the IMS tool.

Another lesson learned during the test of the IMS concerns its adoption by citizens. The outcome of the statistical analysis has shown that the use of "Lecce 2019 - IMS" was closely correlated with the candidacy of Lecce as European Capital of Culture 2019. In the future, the IMS will be further developed in order to stimulate its daily usage as a social innovation tool.

A new extension, called the "Social sentiment index", is currently under development. This extension aims at exploiting the potential of sentiment analysis to identify the topics of greatest interest to the community. However, the use of an Idea Management System to support strategic planning in an open environment, such as urban areas, introduces a problem: administrators need further tools to prioritize interventions in the urban context efficiently. For this reason, we are working to extend the capabilities of the Idea Management System by introducing an algorithm that calculates user participation. The Social sentiment index will be calculated from a set of input parameters, drawn not only from the Idea Management System but also from the major social networks such as Facebook, Twitter, Google+, and LinkedIn. In parallel, sentiment analysis tools, using specific algorithms as well as semantic functions, will have the purpose of simplifying and categorizing the content. Founded on the concept of interoperability, the project proposes a number of solutions using metadata and providing new methods of evaluation: metrics based on opinion mining, taxonomy and categorization of innovation, as well as metrics based on reports about the ideas.
Figure 1. Main steps of the idea life-cycle.
Table 1. Detailed data of the idea collection process.
Table 3. Class distribution. | 6,713 | 2015-12-30T00:00:00.000 | [
"Computer Science"
] |
Modelling of pool boiling on structured surfaces using the Lattice Boltzmann method
The process of boiling on spatially structured surfaces is simulated using a hybrid model based on the Lattice Boltzmann Method and the heat transfer equation. The model permits the study of heat transfer during boiling over a wide range of surface superheats for different surface structural characteristics. The regimes of natural convection, nucleate boiling, and transition to film boiling are studied. Boiling curves for surfaces with different structural and wetting properties are obtained. It is shown that the onset of nucleate boiling occurs at a lower wall superheat on structured surfaces than on a smooth surface. However, at high wall superheats the heat flux and the critical heat flux on the modified surfaces are lower. It is also shown that a combined modification of both the structural and the wetting properties of the heat exchange surface permits a higher removal heat flux as well as a higher critical heat flux.
Introduction
Boiling is one of the most efficient mechanisms of heat transfer from a heated surface. Thus, it is widely used in various technological applications related to heat and mass transfer, e.g. in cooling systems. The main parameters characterizing the efficiency of heat transfer during boiling are the removal heat flux and the critical heat flux (CHF), at which the transition to film boiling occurs. In order to simultaneously enhance the removal heat flux and increase the CHF, various methods of modifying the heat exchange surface are used, which involve changing the surface wettability and introducing structural spatial modifications [1].

The surface can be modified by structuring, using mechanical processing or laser texturing, and in some works micro- and nanostructured or porous coatings are used to increase the efficiency of heat transfer [1,2]. The addition of protrusions and depressions increases the number of vaporization centres. This leads to an earlier onset of nucleate boiling (ONB), permits an increase of the CHF by preventing vapour bubbles from merging, and enhances the efficiency of heat transfer.

However, the search for the optimal configuration of a structured surface using experimental studies is a rather laborious task. First, in order to fully realize the desired surface characteristics, specialized equipment is required. There are works that consider all the morphological parameters of the surface at once, including porosity, roughness, and wettability. It is difficult to analyse the influence of each effect separately [3], due to the large number of parameters that need to be considered and the fast speed of the process.
Along with experimental studies, methods for the numerical simulation of the boiling process are currently being actively developed, which make it possible to determine the optimal surface parameters more effectively without carrying out expensive experimental studies. It has been proven that computational fluid dynamics (CFD) methods are a good alternative to experiments for studying two-phase flows [4,5]. However, additional models should be used for tracking the gas-liquid interface. Moreover, they do not allow modelling of the nucleation process, which is important for obtaining the temperature of the onset of boiling and the density of nucleation sites. Thus, it is impossible to calculate the boiling curves for an ensemble of bubbles that form on extended surfaces.

One alternative to CFD for gas and liquid flows is the Lattice Boltzmann method (LBM). It is based on the calculation of the distribution function of pseudoparticles over discrete velocities and space. In recent years, this method has become popular in modelling liquid boiling [6-10]. One of the features of the method is the possibility of modelling the generation of the vapour phase without specifying additional initial conditions based on empirical relationships. In addition, in LBM it is easy to set a solid boundary of arbitrary shape with different contact (wetting) angles.

The effect of contrast wettability on a smooth surface was considered in previous papers [9,10]. The aim of this work is to determine the optimal configuration of a structured surface in terms of heat transfer enhancement during boiling, using a hybrid model based on the Lattice Boltzmann method and the solution of the heat transfer equation in a two-phase medium over a wide range of wall superheats. The model used here was thoroughly described elsewhere [9,10]; only its main features are presented below. In the presented model the time evolution of the boiling process is described through discrete velocity distribution functions $f_i$. These functions are calculated with the LBM approach by the following set of linear equations:
$$f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\, t + \Delta t) = f_i(\mathbf{x}, t) + \Omega_i(\mathbf{x}, t) + S_i(\mathbf{x}, t),$$
where $\mathbf{x}$ is the position of a cell on a regular grid, $t$ is the dimensionless time, $\Omega_i$ is the collision operator, and $S_i$ is the source term accounting for forces. In the model, a two-dimensional D2Q9 lattice is used; hence the discrete set of velocities is chosen with $|\mathbf{c}_i| = (0, 1, \sqrt{2})$, $i = \{1, 2 \ldots 5, 6 \ldots 9\}$. The collision operator is taken in the BGK form
$$\Omega_i = -\frac{f_i - f_i^{\mathrm{eq}}}{\tau},$$
where $f_i^{\mathrm{eq}}$ is the equilibrium distribution function and $\tau$ is the relaxation time.

The source term is calculated by the exact difference method [11,12]:
$$S_i = f_i^{\mathrm{eq}}(\rho,\, \mathbf{u} + \Delta\mathbf{u}) - f_i^{\mathrm{eq}}(\rho,\, \mathbf{u}),$$
where $\Delta\mathbf{u} = \tau \mathbf{F}/\rho$. The total force $\mathbf{F}$ acting on the particles consists of the following components:
$$\mathbf{F} = \mathbf{F}_{\mathrm{SC}} + \mathbf{F}_{\mathrm{solid}} + \mathbf{F}_{\mathrm{gravity}},$$
where $\mathbf{F}_{\mathrm{SC}}$ is the fluid-gas interaction force, $\mathbf{F}_{\mathrm{solid}}$ describes the fluid-gas interaction with the solid surface, and $\mathbf{F}_{\mathrm{gravity}}$ is the gravity force. These forces are determined through the interaction coefficients $G$ and a pseudopotential $\psi$, where the pressure is determined by the Peng-Robinson equation of state with its acentric factor. In the calculations, dimensionless variables are used for the temperature $T$, pressure $P$, and density $\rho$; they are expressed in units of $T_c$, $P_c$, and $\rho_c$, which correspond to the fluid parameters at the critical point.
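For reference, a minimal sketch of the Peng-Robinson pressure as it is commonly written in the pseudopotential LBM literature is given below; the parameter values (a, b in lattice units, acentric factor of about 0.344 for water) are illustrative assumptions and not necessarily the values used in this paper.

```python
# Minimal sketch of the Peng-Robinson equation of state (assumed parameters).
import numpy as np

def peng_robinson_pressure(rho, T, R=1.0, a=2.0/49.0, b=2.0/21.0, omega=0.344):
    """Pressure from the Peng-Robinson EOS.

    rho: density, T: temperature; a, b are attraction/covolume parameters
    (values often used in lattice units); omega is the acentric factor
    (about 0.344 for water).
    """
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    Tc = (0.0778 / 0.45724) * a / (b * R)          # critical temperature implied by a and b
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    return (rho * R * T / (1.0 - b * rho)
            - a * alpha * rho**2 / (1.0 + 2.0 * b * rho - b**2 * rho**2))
```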
The evolution of the spatial temperature distribution $T(\mathbf{x}, t)$ in the computational domain is calculated from the heat conduction equation, taking into account diffusion, convection, the work of pressure forces, and the phase transition, where $\mathbf{u}_f = \mathbf{u} + 0.5\Delta\mathbf{u}$ is the physical fluid velocity, and $c_v$ and $\lambda$ are the heat capacity and the thermal conductivity coefficient.
The model algorithm is as follows:
0. The initial spatial distributions of density and velocity are set. In accordance with these distributions, the discrete velocity distribution functions $f_i^{\mathrm{eq}}$ are calculated.
1. According to the spatial distribution of forces (6) and the source term (4), the collision and streaming equations (1) are computed.
2. The boundary conditions are enforced, which in this model are given by the bounce-back approximation.
3. According to the current distribution of the physical fluid velocity, the spatial distribution of temperature is calculated.
4. Return to step 1.
This calculation is carried out until the desired boiling evolution is obtained.
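A structural sketch of steps 1-4 is given below for a D2Q9 lattice with single-relaxation-time collision and an exact-difference-method-style source term. It is not the authors' code: the pseudopotential forces, the coupled heat equation, and the bounce-back boundaries are omitted or left as inputs, and all names are assumptions.

```python
# Skeleton of a D2Q9 collision-streaming step (illustrative only).
import numpy as np

c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def feq(rho, u):
    """Equilibrium distributions for D2Q9 (rho: (ny, nx), u: (ny, nx, 2))."""
    cu = np.einsum("qa,yxa->qyx", c, u)
    usq = np.einsum("yxa,yxa->yx", u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau, force):
    """One collision + streaming update; `force` has shape (ny, nx, 2)."""
    rho = f.sum(axis=0)
    u = np.einsum("qa,qyx->yxa", c, f) / rho[..., None]
    du = tau * force / rho[..., None]                 # velocity shift from the force
    S = feq(rho, u + du) - feq(rho, u)                # exact-difference-method source term
    f = f - (f - feq(rho, u)) / tau + S               # BGK collision
    for q in range(9):                                # streaming along each lattice direction
        f[q] = np.roll(f[q], shift=(c[q, 1], c[q, 0]), axis=(0, 1))
    return f, rho, u
```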
The computational domain is shown in Fig. 1. The liquid/vapour medium is shown in red/blue, while the metal heater is shown in grey. The spatial and time steps are equal to Δx = 30×10⁻⁶ m and Δt = 5×10 sec, respectively. In this work, 800 and 500 spatial cells are used in the horizontal (x axis) and vertical (y axis) directions, respectively. A metal heater with a thickness of 30 spatial cells is placed at the bottom boundary. Different rectangular steps made of the same metal were placed on its surface. Thus, we obtain a structured heat exchange surface with a number of rectangular caverns. In the calculations, the height h and width l of the caverns, as well as their number N on the heater surface, can be varied. For example, five caverns, N = 5, each 40 spatial cells in size, h = l = 1.2 mm, are presented in Fig. 1. At the left and right boundaries of the solution region, periodic boundary conditions are applied. A constant temperature T0 = 0.9 Tc and a pressure P0 in accordance with the equation of state are specified at the top boundary. Inside the metal heater only the heat diffusion equation (6) is solved. A constant temperature Th is applied at the bottom of the metal heater. The calculations were performed for different values of the wall superheat ΔT = Th - T0, which is equal to the temperature difference between the top and bottom boundaries of the solution region. The thermal conductivity and heat capacity of the metal heater are set to those of copper. The heat capacity, thermal conductivity, and viscosity of the fluid are taken for water.
Results
The numerical simulations were performed for different configurations of the structured surface. The key parameters of the structures are the height and width of the caverns, h and l, and their total number N on the surface. In this paper the width of the surface was a fixed parameter equal to 24 mm. Depending on the number N of caverns on the surface, the pitch distance p between the caverns varies. At the first stage, simulations of pool boiling on a surface with square caverns uniformly distributed over it were performed. It was found that nucleation of the gas phase occurs even in caverns with small sizes, i.e. h = l = 0.12 mm, corresponding to 4 spatial cells. In the paper, the regimes with h = 0.12, 0.24, and 0.48 mm were considered. The numbers of caverns on the surface for these sizes are N = 100, 50, and 25, respectively. These regimes were compared with the boiling process on a smooth surface, i.e. h = 0 mm. All these calculations were made for a surface with neutral wetting properties, i.e. a 90° contact angle. Finally, a regime with a combined modification of the surface was studied: the wetting properties of the bottoms of the caverns were taken to be hydrophobic (110° contact angle), while the rest of the heating surface was hydrophilic (67° contact angle).
Boiling curves
At the initial stage, the boiling curves <q>(ΔT′) were calculated, where ΔT′ = <Tw> - T0 is the surface superheat and T0 is the saturation temperature (Fig. 2). The average heater surface temperature <Tw> was obtained by averaging the temperature Tw(x,t) over time and over the heater surface. To obtain each point of the dependence <q>(ΔT′), a constant temperature Th was set on the lower wall of the heater. In turn, the time- and surface-averaged heat flux <q> through the metal heater at the height Hh = 0.9 mm was determined as <q> = -λh (<Tw> - Th)/Hh. In all considered regimes, Hh = 0.9 mm corresponded to the bottom of the caverns.
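As a small illustration (not code from the paper), one point of the boiling curve could be assembled from simulated temperature data along the lines of the definitions above; the variable names are assumptions.

```python
# Illustrative evaluation of one boiling-curve point from simulated data.
import numpy as np

def boiling_curve_point(Tw_surface_time, Th, T0, lambda_h, Hh):
    """Return (wall superheat, mean removal heat flux) for one fixed Th.

    Tw_surface_time: array of instantaneous surface temperatures Tw(x, t).
    """
    Tw_mean = np.mean(Tw_surface_time)           # average over time and over the surface
    q_mean = -lambda_h * (Tw_mean - Th) / Hh     # conduction through the heater layer
    return Tw_mean - T0, q_mean
```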
As described previously [10], at low wall superheat the heat transfer occurs in the regime of single-phase natural convection. That is why the boiling curves <q>(ΔT′) are there approximately independent of the surface configuration. After the onset of boiling, the slope of the <q>(ΔT′) curves sharply increases, which corresponds to an increase in the intensity of heat transfer with the transition from natural convection to nucleate boiling. It can be seen that the temperature corresponding to ONB is lower for structured surfaces than for the smooth surface. In the region of low surface superheat, the removal heat flux increases with increasing size of the caverns. However, the situation changes dramatically in the region of high wall superheats. The value of the CHF, as well as the removal heat flux, decreases significantly with an increase in the cavern size. It should be noted that after the CHF is exceeded, all the curves fall onto the line corresponding to the film boiling regime.

The obtained absolute values of the removal heat flux <q> are smaller than in experiments on water boiling on a smooth surface. This is connected both with the limitations of the LBM (a liquid-to-vapour density ratio of 30 in the present work) and with the fact that the calculations consider perfectly smooth surfaces, whereas in real experiments some roughness is always present on the heater surface, which can increase the heat flux due to some stabilization of the vapour bubbles and the presence of vaporization sites.
Boiling on the surfaces with evenly distributed square caverns
To explain this behaviour of the boiling curves, let us consider the density contour plots for the described regimes. Let us compare the boiling process on the smooth surface and on the structured surface with N = 100 caverns for the same wall superheat ΔT = 0.09 Tc. The corresponding regimes are denoted in Fig. 2 by points A and B, respectively. In Fig. 3 the boiling process on the smooth neutral surface is presented. At the same wall superheat, the boiling process on the structured surface is completely different (see Fig. 4). The bubbles merge together, possibly due to the presence of the caverns in which the vapour phase is easily formed. Large areas of the heater surface are covered with vapour films that impede heat transfer from the surface because of the low thermal conductivity of the vapour. It should be noted that degradation of heat transfer on modified surfaces compared to an unmodified lyophilic surface has also been observed experimentally [13,14].
Boiling on specially modified surface
At the next stage, the simulation of pool boiling was performed for a specially modified surface. The previous results showed that at low superheats the earlier onset of nucleate boiling and the intensification of the removal heat flux occur due to the presence of caverns on the heat exchange surface, while the number of caverns only slightly influences these parameters. Moreover, at high surface superheats, the presence of caverns leads to the formation of areas covered with vapour films. To avoid bubble merging, it was decided to increase the pitch size, i.e. the distance between neighbouring caverns. The regime with N = 5 caverns of size h = l = 0.3 mm was considered. As was shown and discussed in the previous paper [10], the pitch size between the hydrophobic patterns should be of the order of the bubble departure diameter to achieve heat transfer enhancement. In this section p = 4.8 mm was used, which slightly exceeds the mean value of the bubble departure diameter Dd ≈ 3 mm for our conditions. Moreover, in order to prevent bubble merging on the top boundary of the heater, these areas were chosen to be hydrophilic. To reduce the temperature threshold of ONB, the bottom parts of the caverns were set to be hydrophobic. The obtained boiling curve for the modified surface is presented in Fig. 2. The density contour plot for point C in Fig. 2 is presented in Fig. 5. It is seen that the heat removal flux is higher than in all previously considered regimes over the whole range of surface superheats. Moreover, the CHF on the specially modified surface is the highest.
Conclusion
With the help of the hybrid Lattice Boltzmann method, the process of boiling on structured surfaces was studied. Different spatial structures and wetting parameters of the heat exchange surface were investigated. Heat transfer enhancement and an increase in the CHF at boiling on a structured surface with alternating wetting properties were demonstrated. Further investigations will be performed to obtain optimal conditions for heat transfer enhancement.
Fig. 1. Scheme of the solution region. Red colour is liquid, blue colour is vapour, grey colour is the metal heater. h and l are the height and width of the caverns, N is the number of caverns on the surface of the heater, Dd is the bubble departure diameter, and p is the distance between caverns.
Fig. 5. Density contour plot illustrating the boiling process on the surface with specially distributed caverns (point C in Fig. 2), ΔT = 0.09 Tc. | 3,474.8 | 2023-01-01T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Structural reliability under uncertainty in moments: distributionally-robust reliability-based design optimization
This paper considers structural optimization under a reliability constraint, where the input distribution is only partially known. Specifically, when we only know that the expected value vector and the variance-covariance matrix of the input distribution belong to a given convex set, we require that, for any realization of the input distribution, the failure probability of a structure should be no greater than a specified target value. We show that this distributionally-robust reliability constraint can be reduced equivalently to deterministic constraints. By using this reduction, we can treat a reliability-based design optimization problem under the distributionally-robust reliability constraint within the framework of deterministic optimization, specifically, nonlinear semidefinite programming. Two numerical examples are solved to show the relation between the optimal value and either the target reliability or the uncertainty magnitude.
Introduction
Reliability-based design optimization (RBDO) is a crucial tool for structural design in the presence of uncertainty [2,36,45,49]. It adopts a probabilistic model of uncertainty, and evaluates the probability that a structural design satisfies (or, equivalently, fails to satisfy) performance requirements. An underlying premise is that complete knowledge of the statistical information of uncertain parameters is available. In practice, however, it is often difficult to obtain statistical information with sufficient accuracy. This has motivated the recent intensive study of RBDO with incomplete statistical information [9,10,16,21,22,24,25,35,37,38,46,50,51,53].
Another methodology dealing with uncertainty in structural design is robust design optimization [6,20,31]. Although there exist several different concepts in robust design optimization, in this paper we focus attention on the worst-case optimization methodology, which is called robust optimization in mathematical optimization community [4]. This methodology adopts a possibilistic model of uncertainty, i.e., specifies the set of possible values that the uncertain parameters can take. We call this set an uncertainty set. Then, the objective value in the worst case is optimized, under the condition that the constraints are satisfied in the worst cases. This paper deals with RBDO when the input distribution is only partially known. Specifically, we assume that the true expected value vector and the true variancecovariance matrix are unknown (i.e., the true values of the first two moments of the input distribution are unknown), but they are known to belong to a given closed convex set. For example, suppose that the input distribution is a normal distribution, and we only know that each component of the expected value vector and the variance-covariance matrix belongs to a given closed interval. Then, for each possible realization of pair of the expected value vector and the variance-covariance matrix, there exists a single corresponding normal distribution. The set of all such normal distributions is considered as an uncertainty set of the input distribution. 1 As another example, suppose that distribution type of the input distribution is also unknown. Then the uncertainty set is the set of all probability distributions, the expected value vector and the variance-covariance matrix of which belong to a given set. 2 Among probability distributions belonging to a specified uncertainty set defined as above, the worst-case distribution is the one with which the failure probability takes the maximum value. Our methodology is that we require a structure to satisfy the reliability constraint evaluated with the worst-case distribution. In other words, for any probability distribution belonging to the uncertainty set, the failure probability should be no greater than a specified target value. Thus, the methodology guarantees robustness of the structural reliability against uncertainty in the input distribution. 3 The major contribution of this paper is to show, under some assumptions, that this structural requirement is equivalently converted to a form of constraints that can be treated in conventional deterministic optimization. As a result, a design optimization problem under this structural requirement can be solved with a deterministic nonlinear optimization approach.
Recently, RBDO methods with uncertainty in the input distribution have received considerable attention, because in practice it is often that the number of available samples of random variables is insufficient. For example, Gunawan and Papalambros [16] and Youn and Wang [50] proposed Bayesian approaches to compute the confidence that a structural design satisfies a target reliability constraint, when both a finite number of samples and probability distributions of uncertain parameters are available. Noh et al. [37,38] proposed Bayesian methods to adjust an input distribution model to limited data, with a given confidence level. When intervals of input variables are given as input information, Zaman et al. [52] and Zaman and Mahadevan [51] use a family of Johnson distributions to represent the uncertainty. Cho et al. [9] and Moon et al. [35] assume that the input distribution types and parameters follow probability distributions. The failure probability is therefore a random variable, the confidence level of a reliability constraint, i.e., the probability that the failure probability is no greater than a target value, is specified. To reduce computational cost of this method, Jung et al. [25] proposed a so-called reliability approach, inspired by the performance measure approach [33,34]. Subsequently, to further reduce computational cost, Wang et al. [46] proposed to use the second-order reliability method for computation of the failure probability. Ito et al. [21] assume that each of random variable follows a normal distribution with the mean and the variance modeled as random variables, and show that RBDO with a confidence level can be converted to a conventional form of RBDO by altering the target reliability index value. Zhang et al. [53] proposed to use the distributional probability box (the distributional p-box) [41] for RBDO with limited data of uncertain variables. Kanno [29,30] and Jekel and Haftka [23] proposed RBDO methods using order statistics. These methods, based on the order statistics, do not make any assumption on statistical information of uncertain parameters, and use random samples of uncertain parameters directly to guarantee confidence of the target reliability.
As reviewed above, most of existing studies on RBDO with uncertainty in the input distribution [9,21,25,35,46] consider probabilistic models of input distribution parameters and/or distribution types. Accordingly, a confidence level evaluates how the satisfaction of structural reliability is reliable. In contrast, in this paper we consider a possibilistic model of input distribution parameters. Hence, what this approach guarantees is a level of robustness [3] of the satisfaction of structural reliability. A possibilistic model might be, in general, less information-sensitive, and hence useful when reliable statistical information of input distribution parameters is unavailable.
From another perspective referring to Schöbi and Sudret [41], the uncertainty model treated in this paper can be viewed as follows. Uncertainty in a structural system is often divided into aleatory uncertainty and epistemic uncertainty [39]. Aleatory uncertainty, i.e., natural variability, is reflected by an (uncertain) input distribution. Epistemic uncertainty, i.e., state-of-knowledge uncertainty, is reflected by uncertainty in the input distribution moments. Thus, in our model, aleatory uncertainty is probabilistic, while epistemic uncertainty is possibilistic. In other words, state-of-knowledge uncertainty is represented as an uncertainty set of the input distribution moments.
Throughout the paper, we assume that only design variables possess uncertainty, and that variation of a performance requirement can be approximated as a linear function of uncertain perturbations of the design variables. Also, we do not consider an optimization problem with variation of structural topology. As for an uncertainty model of moments of the input distribution, we consider two concrete convex sets. We show that the robust reliability constraint, i.e., constraint that the structural reliability is no less than a specified value for any possible realizations of input distribution moments, can be reduced to a system of nonlinear matrix inequalities. This reduction essentially follows the idea presented by El Ghaoui et al. [12] for computing the worst-case value-at-risk in financial engineering. 4 We can deal with nonlinear matrix inequality constraints within the framework of nonlinear semidefinite programming (nonlinear SDP) [48]. In this manner, we can convert an RBDO problem under uncertainty in the input distribution moments to a deterministic optimization problem. It is worth noting that there exist several applications of linear and nonlinear SDPs, as well as eigenvalue optimization, to robust design optimization of structures [5, 17-19, 26, 28, 32, 43, 44].
The paper is organized as follows. In section 2, we consider the reliability constraint when the input distribution is precisely known, and show some fundamental properties. Section 3 presents the main result; we consider uncertainty in the expected value vector and the variance-covariance matrix of the input distribution, and examine the constraint that, for all possible realizations of the input distribution, the failure probability is no greater than a specified value. Section 4 discusses some extensions of the obtained result. Section 5 presents the results of numerical experiments. Section 6 presents some conclusions.
In our notation, a superscript ⊤ denotes the transpose of a vector or matrix. All vectors are column vectors. We use I to denote the identity matrix. For two matrices X = (X_ij) ∈ R^{m×n} and Y = (Y_ij) ∈ R^{m×n}, we denote by X • Y the inner product of X and Y defined by X • Y = Σ_{i,j} X_ij Y_ij. For a vector x = (x_i) ∈ R^n, the notation ‖x‖_1, ‖x‖_2, and ‖x‖_∞ designates its ℓ1-, ℓ2-, and ℓ∞-norms, respectively, i.e., ‖x‖_1 = Σ_i |x_i|, ‖x‖_2 = (Σ_i x_i^2)^{1/2}, and ‖x‖_∞ = max_i |x_i|. For a matrix X = (X_ij) ∈ R^{m×n}, define the matrix norms ‖X‖_{1,1}, ‖X‖_F, and ‖X‖_{∞,∞} by ‖X‖_{1,1} = Σ_{i,j} |X_ij|, ‖X‖_F = (Σ_{i,j} X_ij^2)^{1/2}, and ‖X‖_{∞,∞} = max_{i,j} |X_ij|. Let S^n denote the set of n × n symmetric matrices. We write Z ⪰ 0 if Z ∈ S^n is positive semidefinite. Define S^n_+ by S^n_+ = {Z ∈ S^n | Z ⪰ 0}. For a positive definite matrix Z ∈ S^n, the notation Z^{1/2} designates its symmetric square root, i.e., Z^{1/2} ∈ S^n satisfying Z^{1/2} Z^{1/2} = Z. We use Z^{-1/2} to denote the inverse matrix of Z^{1/2}. We use N(µ, Σ) to denote the multivariate normal distribution with expected value vector µ and variance-covariance matrix Σ. For a random variable x ∈ R, its expected value is denoted by E[x].
Reliability constraint with specified moments
In this section, we assume that the expected value vector and the variance-covariance matrix of the probability distribution of the design variable vector are precisely known. We first recall the reliability constraint, and then derive its alternative expression that will be used in section 3 to address uncertainty in the probability distribution.
Let x ∈ R^n denote a design variable vector, where n is the number of design variables. Assume that a performance requirement in the design optimization problem is written in the form
g(x) ≤ 0, (1)
where g : R^n → R is differentiable. For simplicity, suppose that the design optimization problem has only one such constraint; the case where more than one constraint exists will be discussed in section 4.
Assume that x is decomposed additively as
x = x̄ + ζ, (2)
where ζ is a random vector and x̄ is a constant (i.e., non-random) vector. Therefore, in the design optimization problem considered in this paper, the decision variable to be optimized is x̄. We use µ ∈ R^n and Σ ∈ S^n to denote the expected value vector and the variance-covariance matrix of ζ, respectively, i.e., µ = E[ζ] and Σ = E[(ζ − µ)(ζ − µ)^⊤]. We assume that Σ is positive definite. Throughout the paper, we assume that, among the parameters in the structural system, only ζ possesses uncertainty. Also, we restrict ourselves to optimization without change of structural topology; i.e., we do not consider topology optimization. For simplicity and clarity of discussion, we assume ζ ∼ N(µ, Σ) in section 2 and section 3. In fact, the results established in these sections can be extended to the case where the type of probability distribution is unknown; we then require that the reliability constraint be satisfied for any probability distribution with moments belonging to a specified set. We defer this case until section 4.
Since x is a random vector, g(x) is a random variable. Therefore, constraint (1) should be considered in a probabilistic sense, which yields the reliability constraint
P[g(x) > 0] ≤ ε. (3)
Here, ε ∈ ]0, 1] is the specified upper bound for the failure probability. Let g_lin(x) denote the first-order approximation of g(x) centered at x = x̄, i.e.,
g_lin(x) = g(x̄) + ∇g(x̄)^⊤ ζ.
Throughout the paper, we consider the corresponding approximation of constraint (3), i.e.,
P[g_lin(x) > 0] ≤ ε. (5)
Therefore, the corresponding RBDO problem has the following form:
Minimize f(x̄) subject to (5) and x̄ ∈ X.
Here, f : R^n → R is the objective function, X ⊆ R^n is a given closed set, and the constraint x̄ ∈ X corresponds to, e.g., side constraints on the design variables. From the basic property of the normal distribution, we can readily obtain the following reformulation of the reliability constraint.
Theorem 2.1. Define κ = Φ^{-1}(1 − ε), where Φ is the (cumulative) distribution function of the standard normal distribution N(0, 1). Then, x̄ ∈ X satisfies (5) if and only if it satisfies
g(x̄) + ∇g(x̄)^⊤ µ + κ ‖Σ^{1/2} ∇g(x̄)‖_2 ≤ 0. (7)
Proof. Since g_lin(x) follows a normal distribution, it is standardized by
z = (g_lin(x) − E[g_lin(x)]) / Var[g_lin(x)]^{1/2} ∼ N(0, 1).
By using this relation, we can eliminate g_lin(x) from (5) as
P[ z > −E[g_lin(x)] / Var[g_lin(x)]^{1/2} ] ≤ ε.
This inequality is equivalently rewritten by using the distribution function Φ as
E[g_lin(x)] + Φ^{-1}(1 − ε) Var[g_lin(x)]^{1/2} ≤ 0.
By direct calculations, we see that the expected value of g_lin(x) is
E[g_lin(x)] = g(x̄) + ∇g(x̄)^⊤ µ,
and the variance is
Var[g_lin(x)] = ∇g(x̄)^⊤ Σ ∇g(x̄) = ‖Σ^{1/2} ∇g(x̄)‖_2^2,
which concludes the proof.
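To make the reformulation concrete, the following Python sketch (not part of the original paper; all numbers are illustrative) compares a Monte Carlo estimate of P[g_lin(x) > 0] with the closed-form feasibility check of the form g(x̄) + ∇g(x̄)^⊤µ + Φ^{-1}(1 − ε)‖Σ^{1/2}∇g(x̄)‖_2 ≤ 0.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative data (assumed, not taken from the paper)
g_bar = -1.0                      # g(x_bar)
grad = np.array([0.8, -0.5])      # grad g(x_bar)
mu = np.array([0.1, 0.05])        # expected value of zeta
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])  # covariance of zeta
eps = 0.01                        # target failure probability

# Monte Carlo estimate of P[g_lin(x) > 0] with zeta ~ N(mu, Sigma)
zeta = rng.multivariate_normal(mu, Sigma, size=1_000_000)
g_lin = g_bar + zeta @ grad
p_fail_mc = np.mean(g_lin > 0.0)

# Closed-form reformulation: feasible iff lhs <= 0
kappa = norm.ppf(1.0 - eps)
sd = np.sqrt(grad @ Sigma @ grad)          # = ||Sigma^{1/2} grad||_2
lhs = g_bar + grad @ mu + kappa * sd

print(f"MC failure probability : {p_fail_mc:.2e}")
print(f"exact failure prob.    : {norm.cdf((g_bar + grad @ mu) / sd):.2e}")
print(f"reformulated lhs       : {lhs:.4f}  (<= 0 means constraint satisfied)")
```

With the data above the failure probability is far below ε and, consistently, the reformulated left-hand side is negative.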
In section 3, we deal with the case in which µ and Σ are known imprecisely. To do this, we reformulate κ Σ 1/2 ∇g(x) 2 in (7) into a form suitable for analysis. The following theorem is obtained in the same manner as El Ghaoui et al. [12,Theorem 1].
Theorem 2.2. For κ > 0, Σ ∈ S n + , and ∇g(x) ∈ R n , we have Proof. We first show that the left side of the equation can be reduced to To see this, we apply the Lagrange multiplier method to the equality constrained maximization problem on the right side of (8). Namely, the Lagrangian L 1 : R n × R → R is defined by where µ ∈ R is the Lagrange multiplier. The stationarity condition of L 1 is By solving this stationarity condition, we can find that are optimal. Hence, the optimal value is which is reduced to the left side of (8).
Next, observe that the right side of (8) is further reduced to Here, the last equality follows from the fact that the positive semidefinite constraint is equivalent to the nonnegative constraint on the Schur complement of Σ in the corresponding matrix, i.e., κ 2 − ζ Σ −1 ζ ≥ 0; see [7, appendix A.5.5]. It is worth noting that the last expression in (9) is an SDP problem.
Finally, we shall show that the right side of the proposition in this theorem corresponds to the dual problem of the SDP problem in (9). Since this dual problem is strictly feasible, the proposition follows from the strong duality of SDP [8, section 11.3]. We can derive the dual problem of (9) as follows. The Lagrangian is defined by where z ∈ R, λ ∈ R n , and Λ ∈ S n are the Lagrange multipliers. Indeed, since the positive semidefinite cone satisfies [27,Fact 1.3.17] inf we can confirm that the SDP problem in (9) is equivalent to The dual problem is defined by Since (10) can be rewritten as Therefore, the dual problem in (12) corresponds to the right side of the proposition of the theorem.
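The dual SDP representation underlying Theorem 2.2 can be checked numerically. The sketch below is illustrative only: the arrangement of the block matrix is our reconstruction of a standard form of this identity (the paper's exact layout may differ), and the data are random. It verifies that minimising κ²z + Σ • Λ subject to the linear matrix inequality reproduces κ‖Σ^{1/2}∇g(x̄)‖_2.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 3
B = rng.normal(size=(n, n))
Sigma = B @ B.T + 0.5 * np.eye(n)    # a positive definite covariance
a = rng.normal(size=n)               # plays the role of grad g(x_bar)
kappa = 2.0

# SDP: minimise kappa^2 * z + Sigma . Lambda
#      subject to [[Lambda, a/2], [a^T/2, z]] >> 0
z = cp.Variable((1, 1))
Lam = cp.Variable((n, n), symmetric=True)
M = cp.bmat([[Lam, (a / 2).reshape(-1, 1)],
             [(a / 2).reshape(1, -1), z]])
prob = cp.Problem(cp.Minimize(kappa**2 * z[0, 0] + cp.trace(Sigma @ Lam)),
                  [M >> 0])
prob.solve(solver=cp.SCS)

closed_form = kappa * np.sqrt(a @ Sigma @ a)   # kappa * ||Sigma^{1/2} a||_2
print(prob.value, closed_form)                 # the two values should agree
```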
Worst-case reliability under uncertainty in moments
In this section, we consider the case that the moments (in this paper, the expected value vector and the variance-covariance matrix) of the design variable vector are uncertain, or not perfectly known. Specifically, they are only known to be in a given set, called the uncertainty set. We require that a structure satisfies the reliability constraint for any moments in the uncertainty set. In other words, we require that the failure probability in the worst case is not larger than a specified value. We show that this requirement can be converted to a form of conventional constraints in deterministic optimization.
Convex uncertainty model of moments
Let U µ ⊂ R n and U Σ ⊂ S n + denote the uncertainty sets, i.e., the sets of all possible realizations, of µ and Σ, respectively. Namely, we only know that µ and Σ satisfy Assume that U µ and U Σ are compact convex sets. For notational simplicity, we write Recall that we are considering the reliability constraint in (5) with a linearly approximated constraint function. The robust counterpart of (5) against uncertainty in µ and Σ is formulated as i.e., we require that the reliability constraint should be satisfied for any normal distribution corresponding to possible realizations of µ and Σ. This requirement is equivalently rewritten as That is, the reliability constraint should be satisfied in the worst case.
The following theorem presents, with the aid of Theorem 2.1 and Theorem 2.2, an equivalent reformulation of (14). (14) if and only if there exists a pair of z ∈ R and Λ ∈ S n satisfying Proof. It follows from Theorem 2.1 that (14) is equivalent to Furthermore, application of Theorem 2.2 yields In the expression above, we see that U is compact and convex, and the feasible set for the minimization is convex. Also, the objective function is linear in µ and Σ for fixed z and Λ, and is linear in z and Λ for fixed µ and Σ. Therefore, the minimax theorem [8,Theorem 8.8] asserts that (18) is equivalent to This inequality holds if and only if there exists a feasible pair of z ∈ R and Λ ∈ S n satisfying which concludes the proof.
The conclusion of Theorem 3.1 is quite abstract in the sense that concrete forms of U_µ and U_Σ are not specified. To use this result in design optimization in practice, we have to reduce the maximization terms over U_µ and U_Σ appearing in (15) to tractable forms. This is carried out in section 3.2 and section 3.3, where we consider two specific models of U_µ and U_Σ.
Uncertainty model with ∞ -norm
Letμ ∈ R n andΣ ∈ S n denote the best estimates of µ and Σ, respectively, whereΣ is positive definite. In this section, we specialize the results of section 3.1 to the case that the uncertainty sets are given as Here, z 1 ∈ R m and Z 2 ∈ S k are unknown vector and matrix reflecting the uncertainty in µ and Σ, respectively, A ∈ R n×m and B ∈ R n×k are constant matrices, and α and β are nonnegative parameters representing the magnitude of uncertainties.
This means that the expected value vector µ belongs to a hypercube centered at the origin, with edges parallel to the axes and with an edge length of 2α. In other words, each component µ_j of µ can take any value in [−α, α]. Similarly, a simple example of the uncertainty set in (20) is the one with B = I and k = n, i.e., Σ ranges over the symmetric positive semidefinite matrices whose components deviate from those of Σ̃ by at most β. This means that, roughly speaking, the variance-covariance matrix Σ has componentwise uncertainty. More precisely, for each i, j = 1, . . . , n we have
Σ̃_ij − β ≤ Σ_ij ≤ Σ̃_ij + β, (21)
and besides Σ should be positive semidefinite. It is worth noting that, even if Σ̃ and β satisfy Σ̃ − β11^⊤ ⪰ 0 and Σ̃ + β11^⊤ ⪰ 0 (here, 1 denotes an all-ones column vector), (21) does not necessarily imply Σ ⪰ 0. Indeed, as an example with n = 2, consider
Σ̃ = [3 2; 2 3], β = 2.
Then we have
Σ̃ − β11^⊤ = [1 0; 0 1] ⪰ 0, Σ̃ + β11^⊤ = [5 4; 4 5] ⪰ 0,
and, for example, we see that a matrix such as Σ = [1 4; 4 1], whose components satisfy (21), is not positive semidefinite. To derive the main result of this section, stated in Theorem 3.2, we need two technical lemmas. Lemma 3.1 explicitly computes the value of max{∇g(x̄)^⊤µ | µ ∈ U_µ} in (15). Lemma 3.2 converts max{Σ • Λ | Σ ∈ U_Σ} to a tractable form.
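The counterexample above is easy to verify numerically. The following sketch (illustrative only; the perturbed matrix Σ is one admissible choice satisfying the componentwise bounds) checks the eigenvalues with numpy.

```python
import numpy as np

Sigma_tilde = np.array([[3.0, 2.0],
                        [2.0, 3.0]])
beta = 2.0
ones = np.ones((2, 1))

# Both "corner" matrices are positive semidefinite ...
print(np.linalg.eigvalsh(Sigma_tilde - beta * ones @ ones.T))  # [1, 1]
print(np.linalg.eigvalsh(Sigma_tilde + beta * ones @ ones.T))  # [1, 9]

# ... yet a matrix whose entries deviate from Sigma_tilde by at most beta
# need not be positive semidefinite:
Sigma = np.array([[1.0, 4.0],
                  [4.0, 1.0]])
print(np.abs(Sigma - Sigma_tilde).max() <= beta)   # True: within the box
print(np.linalg.eigvalsh(Sigma))                   # [-3, 5]: indefinite
```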
Proof. Substitution of (19) into the left side yields It is known that the dual norm of the ∞ -norm is the 1 -norm [7, appendix A.1.6], i.e., max t∈R n {s t | t ∞ ≤ 1} = s 1 .
Therefore, we obtain which concludes the proof. Proof. We shall show that the right side corresponds to the dual problem of the SDP problem on the left side. Therefore, this proposition follows from the strong duality of SDP [8, section 11.3], because the dual problem is strictly feasible.
As preliminaries, for a convex cone defined by K = {(s, S) ∈ R × S k | S 1,1 ≤ s}, observe that its dual cone is given by [7,Example 2.25] By using definition (20) of U Σ , the left side of the proposition of this theorem is reduced to The Lagrangian of this optimization problem is defined by where v ∈ R, V ∈ S k , and Ω ∈ S n are the Lagrange multipliers. Indeed, by using (11) and (22), we can confirm that problem (23) is equivalent to The dual problem is then defined by Since (24) can be rewritten as Therefore, the dual problem in (25) is explicitly written as follows: Minimize v∈V, Λ∈S k , Ω∈S kΣ Constraint B (Λ+Ω)B 1,1 ≤ v becomes active at an optimal solution, which concludes the proof.
We are now in position to state the main result of this section. By using Theorem 3.1, Lemma 3.1, and Lemma 3.2, we obtain the following fact.
Theorem 3.2. Let U µ and U Σ be the sets defined by (19) and (20), respectively. Then, x ∈ X satisfies (14) if and only if there exists a pair of z ∈ R and W ∈ S n satisfying Proof. It follows from Lemma 3.1 and Lemma 3.2 that (15) and (16) in Theorem 3.1 are equivalently rewritten as Put W = Λ + Ω to see that this is reduced to This is straightforwardly equivalent to (26) and (27).
It should be emphasized that Theorem 3.2 converts the set of infinitely many reliability constraints in (13) to two deterministic constraints, i.e., (26) and (27). The latter constraints can be handled within the framework of conventional (deterministic) optimization.
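Although Theorem 3.2 gives an exact deterministic reformulation, a brute-force check of the worst-case reliability is often useful for validation. The sketch below is illustrative (the constraint data are invented): it samples the ℓ∞-norm uncertainty set directly, rather than using the authors' reformulation, and reports the worst value found for the left side of (7).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 2
g_bar = -1.0
grad = np.array([0.8, -0.5])
mu_tilde = np.zeros(n)
Sigma_tilde = np.array([[0.04, 0.01],
                        [0.01, 0.09]])
alpha, beta, eps = 0.05, 0.005, 0.01
kappa = norm.ppf(1.0 - eps)

worst = -np.inf
for _ in range(20_000):
    mu = mu_tilde + alpha * rng.uniform(-1.0, 1.0, size=n)
    Z = rng.uniform(-1.0, 1.0, size=(n, n))
    Z = (Z + Z.T) / 2.0                       # symmetric, entries in [-1, 1]
    Sigma = Sigma_tilde + beta * Z
    if np.min(np.linalg.eigvalsh(Sigma)) < 0.0:
        continue                              # outside U_Sigma: Sigma must be PSD
    lhs = g_bar + grad @ mu + kappa * np.sqrt(grad @ Sigma @ grad)
    worst = max(worst, lhs)

# The design is (approximately) distributionally-robust feasible
# if the worst sampled value stays non-positive.
print(f"worst sampled lhs of (7): {worst:.4f}")
```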
Uncertainty model with 2 -norm
In this section, we consider the uncertainty sets defined by Example 3.2. As a simple example, putμ = 0 and A = I with m = n to obtain This means that the expected value vector µ belongs to a hypersphere centered at the origin with radius α. Similarly, putting B = I and k = n we obtain This means that the variance-covariance matrix Σ satisfies and is symmetric positive semidefinite.
In a manner parallel to the proofs of Lemma 3.1 and Lemma 3.2, we can obtain Here, the facts have been used. Accordingly, analogous to Theorem 3.2, we obtain the following conclusion:x ∈ X satisfies (14) if and only if there exists a pair of z ∈ R and W ∈ S n satisfying
Truss optimization under compliance constraint
In this section, we present how the results established in the preceding sections can be employed for a specific RBDO problem. As a simple example, we consider a reliability constraint on the compliance under a static external load. We assume linear elasticity and small deformation. For ease of comprehension, consider design optimization of a truss. In this context, x j denotes the cross-sectional area of truss member j (j = 1, . . . , n), where n is the number of members. We attempt to minimize the structural volume of the truss, c x, under the compliance constraint, where c j denotes the undeformed member length. Let π(x) denote the compliance corresponding to a static external load. The first-order approximation of the compliance constraint is written as whereπ (> 0) is a specified upper bound for the compliance. Accordingly, the design optimization problem to be solved is formulated as follows: Here, the specified lower bound for the member cross-sectional area, denoted byx j (j = 1, . . . , n), is positive, because in this paper we restrict ourselves to optimization problems without variation of structural topology. As for uncertainty sets of the moments, consider, for example, U µ and U Σ studied in section 3.3. For simplicity put A = B = I so as to obtain From the result in section 3.3, we see that problem (28) is equivalently rewritten as follows: Here,x ∈ R n , z ∈ R, and W ∈ S n are variables to be optimized. It is worth noting that problem (29) is a nonlinear SDP problem. The remainder of this section is devoted to presenting a method for solving problem (29) that will be used for the numerical experiments in section 5.
The method sequentially solves SDP problems that approximate problem (29), in a fashion similar to sequential SDP methods for nonlinear SDP problems [32,48]. Letx k denote the incumbent solution obtained at iteration k − 1. Define h k ∈ R n by h k = ∇π(x k ).
At iteration k, we replace ∇π(x) in (29c) and (29d) with h k . Moreover, to deal with π(x) in (29c), we use the fact that s ∈ R satisfies π(x) ≤ s if and only if is satisfied [27, section 3.1], where K(x) ∈ S d is the stiffness matrix of the truss, p ∈ R d is the external load vector, and d is the number of degrees of freedom of the nodal displacements. It is worth noting that, for trusses, K(x) is linear inx. Therefore, (30) is a linear matrix inequality with respect tox and s, and hence can be handled within the framework of (linear) SDP. By this means, we obtain the following subproblem that is solved at iteration k for updatingx k tox k+1 : Since this is a linear SDP problem, we can solve this problem efficiently with a primaldual interior-point method [1].
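A minimal sketch of the compliance linear matrix inequality in (30) is given below. It is illustrative only: the member geometry, loads, and bounds are made-up data, and the robust terms of problem (29) are omitted, so that only the structure of the deterministic volume-minimisation subproblem (a linear SDP) is shown.

```python
import numpy as np
import cvxpy as cp

# Toy ground structure: 2 bars meeting at one free node (2 dofs).
# Truss stiffness: K(x) = sum_j x_j * (E / L_j) * gamma_j gamma_j^T
E = 20e9                                       # Pa
L = np.array([1.0, np.sqrt(2.0)])              # member lengths, m
gamma = np.array([[0.0, 1.0],                  # direction cosines, member 1
                  [np.sqrt(0.5), np.sqrt(0.5)]])  # direction cosines, member 2
c = L.copy()                                   # volume = c^T x
p = np.array([0.0, -100e3])                    # external load, N
pi_bar = 100.0                                 # compliance bound, J
x_min = 2e-4                                   # lower bound on areas, m^2

x = cp.Variable(2)
s = cp.Variable((1, 1))
K = sum(x[j] * (E / L[j]) * np.outer(gamma[j], gamma[j]) for j in range(2))

# pi(x) <= s  <=>  [[K(x), p], [p^T, s]] >> 0  (Schur complement)
M = cp.bmat([[K, p.reshape(-1, 1)],
             [p.reshape(1, -1), s]])
prob = cp.Problem(cp.Minimize(c @ x),
                  [M >> 0, s[0, 0] <= pi_bar, x >= x_min])
prob.solve(solver=cp.SCS)
print("areas [m^2]:", x.value, " volume [m^3]:", prob.value)
```

In the sequential method described above, this linear SDP would be augmented with the (frozen-gradient) robust reliability terms and re-solved at each iteration.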
Extensions
This section discusses some extensions of the results obtained in section 3.
Robustness against uncertainty in distribution type
An important extension is that the obtained results can be applied to the case where not only the moments but also the type of probability distribution is unknown. In this case, we consider any combination of all types of probability distributions and all possible moments (expected value vectors and variance-covariance matrices) in the uncertainty set, and require that the failure probability is no greater than a specified value. This robustness against uncertainty in distribution type is important because the input distribution in practice is not necessarily known to be a normal distribution. Recall that, in section 2 and section 3, we assumed that the design variable vector x follows a normal distribution. Then we considered the robust reliability constraint in (14). For the sake of clarity, we restate this problem setting in a slightly different manner. We have assumed that the random vector ζ can possibly follow any normal distribution satisfying µ ∈ U_µ and Σ ∈ U_Σ. We use P_N to denote the set of such normal distributions, i.e.,
P_N = { N(µ, Σ) | µ ∈ U_µ, Σ ∈ U_Σ }. (32)
In other words, P_N is the set of all possible realizations of the input distribution. We write p ∈ P_N if p is one such realization. With this new notation, (14) can be rewritten equivalently as
P_{ζ∼p}[ g_lin(x) > 0 ] ≤ ε, ∀p ∈ P_N. (33)
For U_µ and U_Σ defined in section 3.2, Theorem 3.2 shows that (33) is equivalent to (26) and (27).
We are now in a position to consider any type of probability distribution. All we assume is that the input distribution satisfies µ ∈ U_µ and Σ ∈ U_Σ, where, for a while, we consider U_µ and U_Σ defined in section 3.2. We use P to denote the set of such distributions, i.e.,
P = { p | the expected value vector and the variance-covariance matrix of p belong to U_µ and U_Σ, respectively }. (34)
Then, instead of (33), we consider the following constraint:
P_{ζ∼p}[ g_lin(x) > 0 ] ≤ ε, ∀p ∈ P. (35)
That is, we require that the reliability constraint be satisfied for any input distribution p satisfying p ∈ P. A main assertion of this section is that, by simply redefining κ as in (36), constraint (35) is equivalent to (26) and (27) in Theorem 3.2.
We can show this fact in the following manner. Let P(µ, Σ) denote the set of probability distributions whose expected value vector and variance-covariance matrix are µ and Σ, respectively. Observe that, with P(µ, Σ), (35) can be rewritten equivalently as
sup{ sup_{p ∈ P(µ,Σ)} P_{ζ∼p}[ g_lin(x) > 0 ] | µ ∈ U_µ, Σ ∈ U_Σ } ≤ ε.
Regarding the inner supremum, consider the condition
sup_{p ∈ P(µ,Σ)} P_{ζ∼p}[ g_lin(x) > 0 ] ≤ ε. (38)
El Ghaoui et al. [12, Theorem 1] show that (38) holds if and only if (7) of Theorem 2.1 holds with κ defined by (36). Therefore, all the subsequent results established in section 2 and section 3 hold by simply replacing the value of κ with the one in (36). Thus, the robust reliability constraint with unknown distribution type is also reduced to the form in (26) and (27).
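The effect of dropping the normality assumption is essentially a change of the safety factor κ. In the sketch below we assume, following El Ghaoui et al. [12], that the distribution-free value is κ = sqrt((1 − ε)/ε); this is our reading of (36) and is stated here as an assumption. The snippet compares it with the Gaussian value Φ^{-1}(1 − ε), showing why the distribution-free constraint is more conservative.

```python
import numpy as np
from scipy.stats import norm

for eps in (0.05, 0.01, 0.001):
    kappa_normal = norm.ppf(1.0 - eps)        # kappa for a known normal input
    kappa_free = np.sqrt((1.0 - eps) / eps)   # assumed distribution-free kappa
    print(f"eps={eps:6.3f}  normal kappa={kappa_normal:6.3f}  "
          f"distribution-free kappa={kappa_free:7.3f}")
```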
Multiple constraints
In section 2 and section 3, we have restricted ourselves to the case that the design optimization problem has a single performance requirement, (1). In this section, we discuss treatment of multiple constraints.
Suppose that the performance requirement is written as The first-order approximation yields where g lin i (x) = g i (x)+∇g i (x) ζ (i = 1, . . . , m). Suppose that we impose a distributionallyrobust reliability constraint for each i = 1, . . . , m independently, i.e., Here, P is the set of possible realizations of the input distribution (i.e., P here is either P N in (32) or P in (34)). It is worth noting that in (39) the worst case distributions are considered independently for each i = 1, . . . , m. Constraint (39) can be straightforwardly dealt with in the same manner as section 3.
In contrast, suppose that we consider a single (i.e., common) worst-case distribution for all i = 1, . . . , m. Then the distributionally-robust reliability constraint is written as Treatment of this constraint remains to be studied as future work. It is worth noting that constraint (39) is conservative compared with constraint (40).
Numerical examples
In section 3.4 we have seen that an optimization problem of trusses under the compliance constraint is reduced to problem (29). In this section we solve this optimization problem numerically.
Example (I): 2-bar truss
Consider a plane truss depicted in Figure 1. The truss has n = 2 members and d = 2 degrees of freedom of the nodal displacements. The elastic modulus of the members is 20 GPa. A vertical external force of 100 kN is applied at the free node. The upper bound for the compliance isπ = 100 J.
We first consider the uncertainty model with the ℓ∞-norm, studied in section 3.2, with the uncertainty sets in (19) and (20). The magnitude of uncertainty is α = 0.2 and β = 0.01. The specified upper bound for the failure probability is ε = 0.01. The optimal solution obtained by the proposed method is listed in the row "ℓ∞-norm unc." of Table 1, where "obj. val." means the objective value at the obtained solution. For comparison, the optimal solution of the nominal optimization problem (i.e., the conventional structural volume minimization under the compliance constraint without considering uncertainty) is also listed. The optimization result was verified as follows. We randomly generate µ ∈ U_µ and Σ ∈ U_Σ, and then generate 10^6 samples drawn as ζ ∼ N(µ, Σ). Figure 2a and Figure 2b show the samples of x = x̄ + ζ generated in this manner. Figure 2c shows the values of the linearly approximated constraint function for these samples. Therefore, the ratio of the number of samples for which these function values are positive to the total number of samples (i.e., 10^6) should be no greater than ε (= 0.01). We computed this ratio for each of 10^4 randomly generated samples of µ ∈ U_µ and Σ ∈ U_Σ, where the continuous uniform distribution was used to generate samples of the components of µ and Σ. Figure 3a shows the histogram of the values of this ratio computed in this manner, i.e., it shows the distribution of the failure probability estimated by double-loop Monte Carlo simulation. It is observed in Figure 3a that, for every one of the 10^4 probability distribution samples, the failure probability is no greater than ε. Thus, it is verified that the obtained solution satisfies the distributionally-robust reliability constraint in (14). Indeed, among these samples of the failure probability, the maximum value is 0.009054 (< ε). For reference, Figure 3b shows the histogram of failure probabilities computed for the constraint function values without applying the linear approximation, i.e., g(x) = π(x) − π̄. It is observed in Figure 3b that only in rare cases does the failure probability exceed the target value ε (= 0.01). Figure 4a shows the variation of the optimal value with respect to the upper bound for the failure probability, ε, where α = 0.2 and β = 0.01 are fixed. As ε decreases, the optimal value increases.
In contrast, Figure 4b shows the variation of the optimal value with respect to α and β, where ε = 0.01 is fixed. Although in Figure 4b only values of α are shown, values of β ∈ [0, 0.02] are also varied in a manner proportional to α. As the magnitude of uncertainty increases, the optimal value increases. We next consider the uncertainty model with the ℓ2-norm, studied in section 3.3. The uncertainty set is defined with the A, B, μ̃, Σ̃, α, and β used above. The specified upper bound for the failure probability is ε = 0.01. The obtained optimal solution is listed in the row "ℓ2-norm unc." of Table 1. It can be observed that the objective value is smaller than that of the solution with the ℓ∞-norm uncertainty model. This is natural, because, with the common values of α and β, the uncertainty set with the ℓ2-norm is included in the uncertainty set with the ℓ∞-norm. The optimization result is verified in the same manner as above. Namely, Figure 5 shows 10^4 samples of the failure probability, each of which was computed with 10^6 samples of ζ. Among these samples, the maximum failure probability is 0.009850 (< ε), which verifies that the obtained solution satisfies the distributionally-robust reliability constraint (14). Figure 6a and Figure 6b show the variations of the optimal value with respect to the upper bound for the failure probability, ε, and the magnitude of uncertainty, α and β, respectively. These variations show trends similar to the ones with the ℓ∞-norm uncertainty model in Figure 4a and Figure 4b.
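The double-loop Monte Carlo verification described above can be sketched as follows. This is illustrative only: the constraint data and the uncertainty-set sampling are simplified stand-ins for the actual linearised compliance quantities of the truss examples.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_worst_failure_prob(g_bar, grad, mu_tilde, Sigma_tilde,
                                alpha, beta, n_dist=200, n_mc=50_000):
    """Outer loop: sample (mu, Sigma) from the l_inf uncertainty set.
    Inner loop: Monte Carlo estimate of P[g_lin(x) > 0] under N(mu, Sigma)."""
    n = len(grad)
    worst = 0.0
    for _ in range(n_dist):
        mu = mu_tilde + alpha * rng.uniform(-1.0, 1.0, size=n)
        Z = rng.uniform(-1.0, 1.0, size=(n, n))
        Sigma = Sigma_tilde + beta * (Z + Z.T) / 2.0
        if np.min(np.linalg.eigvalsh(Sigma)) < 0.0:
            continue                      # reject: Sigma must stay PSD
        zeta = rng.multivariate_normal(mu, Sigma, size=n_mc)
        p_fail = np.mean(g_bar + zeta @ grad > 0.0)
        worst = max(worst, p_fail)
    return worst

# Illustrative constraint data standing in for the linearised compliance
worst = estimate_worst_failure_prob(
    g_bar=-0.6, grad=np.array([0.8, -0.5]),
    mu_tilde=np.zeros(2),
    Sigma_tilde=np.array([[0.04, 0.01], [0.01, 0.09]]),
    alpha=0.05, beta=0.005)
print(f"worst estimated failure probability: {worst:.4f}")
```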
Finally, as discussed in section 4.1, we consider not only the normal distributions but all the probability distributions with µ and Σ belonging to the uncertainty set. That is, the set of possible realizations of probability distributions is given by (34). Figure 7 collects the variations of the optimal value with respect to the upper bound for the failure probability, ε, and the magnitude of uncertainty, α and β (in the same manner as above, values of β ∈ [0, 0.02] are varied in a manner proportional to α). Compared with the results for normal distributions in Figure 4 and Figure 6, the optimal value in Figure 7 is large, as expected. Moreover, as ε decreases, the optimal value in Figure 7a increases.
Example (II): 29-bar truss
Consider a plane truss depicted in Figure 8, where n = 29 and d = 20. The elastic modulus of the members is 20 GPa. Vertical external forces of 100 kN are applied at two nodes as shown in Figure 8. The upper bound for the compliance isπ = 1000 J. The lower bounds for the member cross-sectional areas arex j = 200 mm 2 (j = 1, . . . , n).
As for the uncertainty model, we consider both the model with the ℓ∞-norm and the model with the ℓ2-norm; in the definitions of the uncertainty sets, 1 ∈ R^n denotes an all-ones column vector. The magnitude of uncertainty is α = 0.2 and β = 0.01. The specified upper bound for the failure probability is ε = 0.01.
The optimization results obtained by the proposed method are listed in Table 2. Figure 9a and Figure 9b show the variations of the optimal value with respect to the failure probability and the magnitude of uncertainty, respectively.
As in section 4.1, we next require that the reliability constraint be satisfied for all the probability distributions satisfying µ ∈ U_µ and Σ ∈ U_Σ, i.e., for any probability distribution belonging to P in (34). For the ℓ2-norm uncertainty, Figure 10a and Figure 10b report the variations of the optimal value with respect to the failure probability and the magnitude of uncertainty, respectively. Figure 11 collects the optimal solutions of the optimization problem without uncertainty, as well as the distributionally-robust RBDO problems with the two uncertainty models. Here, the width of each member in the figures is proportional to its cross-sectional area.
Conclusions
This paper has dealt with reliability-based design optimization (RBDO) of structures, in which knowledge of the input distribution that the design variables follow is imprecise. Specifically, we only know that the expected value vector and the variance-covariance matrix of the input distribution belong to a specified convex set, and do not know their true values. Then we attempt to optimize a structure, under the constraint that, even for the worst-case input distribution, the failure probability of the structure is no greater than the specified value. This constraint, called the distributionally-robust reliability constraint, is equivalent to infinitely many reliability constraints corresponding to all possible realizations of the input distribution. Provided that change of a constraint function value is well approximated as a linear function of uncertain perturbations of the design variables, this paper has presented a tractable reformulation of the distributionally-robust reliability constraint. This paper has established the concept of distributionally-robust RBDO, and developed fundamental results. Much remains to be studied. For instance, in this paper we have considered uncertainty only in the design variables. Other sources of uncertainty in structural optimization can be explored. Also, as discussed in section 4.2, multiple performance requirement in the form of (40) remains to be studied. Extension to topology optimization is of great interest. Moreover, this paper relies on the assumption that quantity of interest is approximated, with sufficient accuracy, as a linear function of uncertainty perturbations of the design variables. Extension to nonlinear cases can be attempted. Finally, development of a more efficient algorithm for solving the optimization problem presented in this paper can be studied. | 8,939 | 2021-06-16T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
Low-Energy Heavy-Ion Reactions and the Skyrme Effective Interaction
The Skyrme effective interaction, with its multitude of parameterisations, along with its implementation using the static and time-dependent density functional (TDHF) formalism, has allowed for a range of microscopic calculations of low-energy heavy-ion collisions. These calculations allow variation of the effective interaction along with an interpretation of the results of this variation informed by a comparison to experimental data. Initial progress in implementing TDHF for heavy-ion collisions necessarily used many approximations in the geometry or the interaction. Over the last decade or so, the implementations have overcome these restrictions, and studies have begun to be made where details of the effective interaction are being probed. This review surveys these studies in low-energy heavy-ion reactions, finding significant effects on observables from the form of the spin-orbit interaction, the use of the tensor force, and the inclusion of time-odd terms in the density functional.
Introduction
Heavy-ion collisions combine the rich dynamics of a many-body, out-of-equilibrium, open quantum system with the complexities of the residual part of the strong interaction which leaks out of the small, but neither fundamental nor point-like, nucleons, causing them to stick loosely together some of the time, and to fall apart at others. Understanding heavy-ion reactions across all energy scales is necessary to understand stellar nucleosynthesis [1], the synthesis of superheavy nuclei [2,3], the properties of nuclear matter [4][5][6], the QCD phase diagram [7,8], as well as the reaction mechanisms themselves [9][10][11][12][13].
Among the theoretical techniques used to study heavy-ion reactions, methods based on time-dependent Hartree-Fock have recently achieved the status of having sufficiently mature implementations, free of limiting approximations and running at a suitable speed, such that systematically varying the effective interaction in the calculations is possible. It is such studies that form the main subject of the present review. The practical implementations, using the Skyrme interaction, are in some sense parameter-free, in that one has a framework using an effective interaction fitted to ground state data and nuclear matter properties, with no further adjustment to the dynamics. Structure and reaction effects are determined together, self-consistently, from the interaction, subject to the approximations of the mean field, with no further adjustment. In another sense, the choice among the available sets of effective interactions is itself a parameter of the calculations. We attempt to summarise here what has been learnt from exploring different Skyrme force parameterisations within low-energy heavy-ion reaction calculations.
Overlapping this subject area are other recent review articles, to which the reader is referred: A review in which extensive coverage of theoretical approaches to dynamics of heavy-ion collisions in TDHF and its extensions is presented by Simenel and Umar [14]. This review extensively covers the detail of the calculational framework, which we cover in less detail here, instead concentrating more on the role of the effective interaction. Spin-dependent aspects of the effective interaction and their role in heavy-ion reactions at low and higher energy have recently been reviewed by Xu et al. [15]. Recent developments in experimental studies of heavy-ion fusion reactions are covered by Back et al. [16].
The border between the kind of calculations we have included in this review, and those not, is a somewhat arbitrary choice. Using other theoretical approaches such as transport theory [17][18][19], suitable for higher energy collisions (above a few hundred MeV/A), also required the use of an effective interaction, varying which produces different outcomes that can be compared with nature. We concentrate on the mean-field + Skyrme approach as a lowest order, and self-consistent, first step to address the role of the effective interaction in low-energy heavy-ion collisions.
The review is laid out as follows: We give a brief summary of the TDHF approach, noting the availability of recent detailed reviews, in section 2. Section 3 covers in some detail the Skyrme effective interaction, and its implementation in time-dependent mean-field approaches. The range of available works in which aspects of the effective interaction are systematically studied is surveyed in section 4.
Time-dependent Hartree-Fock
The Time-dependent Hartree-Fock (TDHF) method, as originally posited by Dirac [20], is the basic microscopic quantal approximation to nuclear dynamics with effective nucleon-nucleon interactions [14,[21][22][23][24][25][26]. It can be derived as a truncation of the hierarchy of dynamical equations which couple together all many-body density matrices, limiting to the one-body density matrix, and assuming that the two-body density can be expressed as an antisymmetrised product of one-body matrices [21]. Alternatively, the TDHF equations can be derived from the principle of least action within a space of Slater Determinant wave functions [25], or from a more general variational principle in which both the state of the system and the desired observable are optimised, with TDHF arising as the result when the expectation value of one-body observables are optimised. This more general variational principle is due to Balian and Vénéroni [27].
One derivation for the TDHF equations, following references [28,29], begins from the time-dependent Schrödinger equation One then considers the time-evolution of the one-body density matrix as From (1), its adjoint, and the Hermiticity ofĤ, the time-derivative of the one-body density matrix becomes i ρ βα = Ψ| a † α a β ,Ĥ |Ψ , using the dot to notate a time-derivative. Now, one supposes a Hamiltonian of the form where the kinetic energy is and are the two-body interaction matrix elements. Using (5) in (4) and the anticommutation relationships for fermion creation and annihilation operators gives i ρ βα = δ (t βδ ρ δα − ρ βδ t δα ) where the two-body density matrix is The equation of the time-evolution of the one-body density matrix, (8), thus links the one-body density matrix to the two-body density matrix via the two-body interaction. Similarly, if one follows the same procedure, higher-order equations couple together each successive N -body density matrix, leading to the BBGKY hierarchy.
To truncate the hierarchy and retrieve the TDHF equations, the two-body density matrix is approximated as ρ Substituting this into (8) and defining the one-body (Hartree-Fock) potential as gives with In shorthand, one can then write the compact form of the TDHF equations as Practical implementations of TDHF work in a representation of single particle states that make up the Slater Determinant wave function where a † i creates a particle in state i and is given by One can then show [29] that the TDHF equation (14) can be satisfied if each |ψ i evolves in time according to In practice, one works in a coordinate representation; ψ i (r r rsτ ) = r r rsτ |ψ i (18) and solves the equation (17) by time-evolution of initial wave functions in small increments of time. Details of practical numerical solution of the TDHF equations can be found elsewhere [24,30], including implementations in which the full code is published [31,32]. We mention also that the closely-allied time-dependent relativistic mean field has been implemented with published code [33], which is restricted to collective motion of a single nucleus, such as the case of giant resonances, but not set up for the calculation of heavy-ion collisions. Results using this code have been presented in which the external field is used to directly simulate Coulomb excitation as if from a projectile [34] but in lieu of calculations with varied interactions that can be compared to the wider literature we do not include it subsequently in the discussion. Earlier implementations of time-dependent relativistic mean-field have been reported [35] in which a brief indication of TDHF-like behaviour is made before concentrating on relativistic energies beyond the scope of this review, and as an exemplar for the density-constrained TDHF method [36]. The existing time-dependent relativistic mean field codes are implemented in the so-called no-sea approximation in which states in the Dirac sea are ignored, and it is suggested [37] that a full (and technically-challenging) implementation of the Dirac sea is needed for the study of dynamics within the relativistic energy density functional / mean-field approach. [38]. Error bars show the energy intervals in which the transition between fusion and not-fusion is found.
Heavy-ion reactions in TDHF
In order to describe a heavy-ion reaction in TDHF, one must start with a suitably-prepared initial condition. This is usually two nuclei in their ground states, calculated with a particular effective interaction. The two nuclei are placed in a computational box in coordinate space, such that the wave functions from each nucleus do not overlap (or barely overlap and are re-orthogonalised) and combined into a single Slater Determinant. Each single particle wave function is then given a Galilean boost such that nucleus 1 is moving with momentum P P P 1 and nucleus 2 with momentum P P P 2 . The initialisation process can be written [31] ψ α,1 (r r r, s; t = 0) = e ip p p 1 ·r r r ψ (stat) which gives the transformation from the stationary solutions, indicated by ψ (stat) , shifted to R R R 1 and R R R 2 and boosted by p p p 1 and p p p 2 . A 1 and A 2 are the mass numbers of the two nuclei. P P P 1 and P P P 2 are set up so that P P P 1 = −P P P 2 . One typically specifies a total centre of mass energy for a collision, along with an impact parameter and appropriate values for the initial momenta are calculated assuming a Rutherford trajectory from infinity to the initial nuclear placement on the grid. Following a collision, the final state of the system can be analysed. The results of a single TDHF calculation give a final state in which different channels are mixed. Further interpretation can require post-processing, e.g. in the form of projection onto good quantum numbers [39,40]. An accessible outcome of a standard TDHF calculation is whether the collision resulted in fusion or not-fusion. In the first case, one must run the calculation long enough to see that following the collision, the compound nucleus undergoes at least one full oscillation of the internal motion without separation into fragments.
It can still be the case that later in the calculation fission might occur, but it gives an adequate operational definition of fusion.
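The boost applied at initialisation is a simple phase factor on each single-particle state. The sketch below is illustrative: it is one-dimensional, uses made-up Gaussian wave functions rather than actual Hartree-Fock ground states, and ignores the Coulomb correction from infinity to the initial separation. It only demonstrates the shift-and-boost step on a coordinate grid.

```python
import numpy as np

hbar = 197.327          # MeV fm / c
mass = 939.0            # MeV / c^2 (nucleon mass)

# 1D coordinate grid and two made-up "ground-state" single-particle states
x = np.linspace(-40.0, 40.0, 512)       # fm
dx = x[1] - x[0]

def gaussian_state(center, width):
    psi = np.exp(-(x - center) ** 2 / (2.0 * width ** 2))
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Place fragment 1 at -12 fm and fragment 2 at +12 fm (non-overlapping)
psi1 = gaussian_state(-12.0, 2.0)
psi2 = gaussian_state(+12.0, 2.0)

# Choose equal and opposite momenta from a chosen relative kinetic energy
E_rel = 20.0                             # MeV, illustrative
mu_red = mass / 2.0                      # reduced mass per nucleon, equal fragments
p = np.sqrt(2.0 * mu_red * E_rel)        # MeV/c
k = p / hbar                             # fm^-1

# Galilean boost: multiply by exp(i p x / hbar), opposite signs for the two nuclei
psi1_boosted = np.exp(+1j * k * x) * psi1
psi2_boosted = np.exp(-1j * k * x) * psi2

# Expectation value of momentum confirms the boost (~ +p and -p)
def mean_momentum(psi):
    dpsi = np.gradient(psi, dx)
    return np.real(np.sum(np.conj(psi) * (-1j * hbar) * dpsi) * dx)

print(mean_momentum(psi1_boosted), mean_momentum(psi2_boosted))
```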
Reactions in which fusion does not take place result in more than one fragment in the final state. In this case, the reaction may be a below-barrier approach with Coulomb excitation, a grazing reaction, transfer, fusion-fission, quasi-fission, a deep-inelastic collision, or a mixture of a combination of these. Figure 1 shows the region of the E CM -b plane in which fusion occurs for 16 O+ 16 O calculations using the SkM* [38] interaction, giving one a typical idea of the fusion landscape that arises in TDHF calculations in terms of the regions of fusion and not-fusion.
From such calculations, one can extract a fusion cross-section based on a sharp-cutoff formula [22,41,42], arising from the fact that in TDHF at a given energy and impact parameter the probability of fusion is either 0 or 1:
σ_fus(E_CM) = (πħ²)/(2µE_CM) Σ_{l=l_<}^{l_>} (2l + 1) ≈ π(b_>² − b_<²). (20)
Here, µ is the reduced mass of the dinuclear system, E_CM is the centre of mass energy, l_< is the minimum angular momentum at which fusion occurs and l_> the maximum angular momentum at which fusion occurs at the given energy. b_< and b_> are the corresponding minimum and maximum impact parameters. The approximate equality in (20) comes from taking the quantised angular momentum over to a semiclassical limit as a function of the continuous variable b. Examples of such calculations for specific effective interactions are shown in section 4. From the calculations leading to Figure 1, one sometimes reduces the information by characterising the upper and lower lines of the locus delineating fusion and not-fusion for comparison between different interactions [43][44][45].
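As a quick illustration of the sharp-cutoff formula, the following snippet (with an invented angular-momentum window l_< and l_>) evaluates both the quantised sum and its semiclassical impact-parameter approximation.

```python
import numpy as np

hbar_c = 197.327        # MeV fm
amu = 931.494           # MeV

def fusion_cross_section(E_cm, A1, A2, l_low, l_high):
    """Sharp cutoff: sigma = (pi hbar^2)/(2 mu E_cm) * sum_{l_low}^{l_high} (2l+1)."""
    mu = amu * A1 * A2 / (A1 + A2)                  # reduced mass, MeV/c^2
    lam2 = hbar_c ** 2 / (2.0 * mu * E_cm)          # (reduced wavelength)^2, fm^2
    l = np.arange(l_low, l_high + 1)
    sigma_fm2 = np.pi * lam2 * np.sum(2 * l + 1)
    return sigma_fm2 * 10.0                         # 1 fm^2 = 10 mb

# Illustrative numbers for 16O + 16O (the l window is chosen arbitrarily)
E_cm = 34.0     # MeV
sigma = fusion_cross_section(E_cm, 16, 16, l_low=0, l_high=20)

# Semiclassical version with impact parameters b = l * hbar / p
mu = amu * 16 * 16 / 32.0
p = np.sqrt(2.0 * mu * E_cm)                        # MeV/c
b_low, b_high = 0.0, 20.0 * hbar_c / p              # fm
sigma_semi = np.pi * (b_high ** 2 - b_low ** 2) * 10.0
print(sigma, sigma_semi)   # mb; the two estimates are close
```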
Frozen HF approximation
Without invoking the full complexity of a TDHF calculation, one can bring information from the effective interaction to bear using methods designed to extract a nucleus-nucleus (NN) potential from the microscopic interaction [46][47][48]. In particular, one can begin from static Hartree-Fock ground state calculations and make use of the so-called frozen Hartree-Fock approximation. One uses the ground-state densities from Hartree-Fock calculations to generate a nucleus-nucleus (NN) potential and defines the nuclear part of the NN potential as [49,50]
V_N(R) = E_HF[ρ_1 + ρ_2](R) − E_HF[ρ_1] − E_HF[ρ_2],
in which R is the radius vector between the two nuclei, and E_HF[ρ_1] and E_HF[ρ_2] are the Hartree-Fock energies for the nuclei with given densities ρ_1 and ρ_2. These are defined for the Skyrme interaction in the next section (28), but may be written schematically as
E_HF[ρ] = ∫ E(r) dr,
in which E(r) is an energy density functional. The total interaction energy is defined in terms of the same functional as
E_HF[ρ_1 + ρ_2](R) = ∫ E[ρ_1(r) + ρ_2(R − r)] dr.
From the NN potential, one can read off the barrier height (the maximum in the potential) or use it as input for two-body scattering or fusion calculations, with e.g. a coupled-channels method [51].
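A frozen-density calculation is conceptually simple: evaluate the same energy density functional on the sum of the two frozen densities and subtract the isolated energies. The sketch below uses a deliberately toy one-dimensional functional (not a Skyrme functional) and made-up density profiles, purely to show the structure of V_N(R) = E[ρ_1 + ρ_2] − E[ρ_1] − E[ρ_2].

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 1200)   # fm, 1D grid
dx = x[1] - x[0]

def density(center, A=16.0, R0=3.0, a=0.5):
    """Toy Fermi-shaped density profile normalised to A 'nucleons'."""
    rho = 1.0 / (1.0 + np.exp((np.abs(x - center) - R0) / a))
    return rho * A / (np.sum(rho) * dx)

def energy(rho, c2=-1000.0, c3=800.0):
    """Toy energy density functional E = c2*rho^2 + c3*rho^3 (illustrative only)."""
    return np.sum(c2 * rho**2 + c3 * rho**3) * dx

rho_isolated = density(0.0)
E1 = energy(rho_isolated)    # isolated fragment energy (equal fragments here)
E2 = E1

for R in np.arange(4.0, 16.1, 1.0):
    rho1 = density(-R / 2.0)
    rho2 = density(+R / 2.0)
    V_N = energy(rho1 + rho2) - E1 - E2
    print(f"R = {R:5.1f} fm   V_N = {V_N:10.2f} (toy units)")
```

At large separations the densities no longer overlap and V_N tends to zero, as expected for a frozen-density potential.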
Density-Constrained TDHF
An improvement of the Frozen Hartree-Fock approximation involves allowing the densities of the incoming nuclei to change as a function of separation distance to account for the Pauli exclusion principle as the nuclei begin to overlap [52]. The principal approach along these lines is the Density-Constrained Time-Dependent Hartree-Fock approach [3,36,[53][54][55].
In DC-TDHF, the densities are computed by a single TDHF calculation at an energy above the Coulomb barrier. At each point along the trajectory, a density-constrained Hartree-Fock calculation is performed to find the energy of a nucleus with the given density but without the internal excitations associated with the TDHF calculation. One then extracts a NN potential in which effects such as necking, shape changes, re-ordering of single-particle states, and the Pauli principle are taken into account. From the potential one can solve a two-body Schrödinger equation with incoming wave boundary conditions [56] to obtain interaction cross sections. The complexity of a coupled-channel calculation is not needed as the DC-TDHF potential implicitly includes exited state information.
The Skyrme Interaction
The Skyrme interaction was suggested by its eponymous proposer as an effective two-and threebody interaction for use in the independent particle model [57] 1 . A link between it and more realistic interactions can be made by, for example, the density-matrix expansion method of Negele and Vautherin [58], which implicitly makes the link via nuclear matter, or alternatively with more direct approaches [59,60]. One can re-formulate the Skyrme interaction as a energy density functional (EDF) [61,62]. The EDF formalism is strictly the correct way to approach the problem for irreducibly density-dependent versions of the Skyrme interaction [63]. However, here we use the language of the interaction as the starting point for the derivation of the EDF since it is the basis of most available comparisons of the underlying forces in heavy-ion collisions within this mean-field framework.
The original Skyrme interaction may be written as a potential as [57,61,64,65] The two and three body Skyrme interactions, in a form essentially the same as that originally given, can be written as and υ respectively. Here σ σ σ are Pauli spin matrices, k k k = 1 2i (∇ ∇ ∇ 1 − ∇ ∇ ∇ 2 ) acting to the right, and k k k = , acting to the left.
A widely-used variant of the Skyrme interaction replaces the three-body interaction with a two-body density-dependent form [63], which adds a new exchange parameter x 3 along with a parameter α which is allowed to take on noninteger values, hence breaking the link between the "interaction" and a force, and formally requiring and EDF picture. From this, one derives [67][68][69] a Hamiltonian density, or density functional, of Here, the summation index t runs over values 0 for isoscalar densities (ρ 0 = ρ p + ρ n and similarly for the other densities) and 1 for isovector densities (ρ 1 = ρ p − ρ n etc), the set {C t } are the coefficients of the functional and the densities are defined in terms of the density matrix with the particle density matrix being and the spin density matrix The further densities found in the functional (28) are given by Here, as in (28), the Greek letter indices run over the Cartesian coordinates x, y, z.
The densities ρ, τ , and J are time-even 2 (identical upon reversal of the sign of the time coordinate), while s, T , and F are time-odd (change sign upon change of the sign of t). Terms in the Hamiltonian density (28) are all time-even, and are made of bilinear products of either two time-even densities or two time-odd densities [70]. Time-odd densities are identically zero in the ground states of even-even nuclei and are essentially unconstrained by fits of the Skyrme interaction parameters, which are made to ground states of even-even nuclei and to nuclear matter properties. While all the terms in the Hamiltonian are time-even, the phrase "time-odd terms" is used to mean those terms made of time-odd densities.
In making the derivation from the interaction to the functional, there is a fixed link between the two sets of coefficients [65,67]. One can choose to either break this link or not, and to fit either set of parameters directly. So far, the majority of fitted sets of parameters in the literature [71] keep the link and fit at the level of the interaction parameters. Note that the terms in the functional (28) which feature derivatives of the spin density apparently give rise to instabilities [72,73] and are not usually included in actual calculations.
If linking the interaction parameters to the density functional coefficients, one has the choice of using only those terms in the density functional which are really constrained at the fitting stage -i.e. those that are associated with non-zero terms in the ground states of even-even nuclei or nuclear matter (or indeed, the subset of these terms which were actually considered at the fitting stage) -or one may choose to activate all terms in the functional. Both methods are used in the literature. Particular terms in the functional are obliged to be grouped together due to Galilean invariance. For example, the spin-orbit interaction consists of a time-even term C ∇·J t ρ t ∇ · J t and a time-odd term C ∇·J t ∇ × j t with the same coefficient. If these two terms are allowed to have different coefficients, then Galilean invariance is broken and, for example, a calculation translating a nucleus through space will fail to conserve energy [74].
Since the focus of this review is on the effect of the interactions, we give here coefficients of the functional in terms of those of the interaction, so that one may clearly see from which terms in the interaction (24) the terms in the functional (28) arise: From the energy density, one attempts to find the optimal solution by varying with respect to each of the densities: Here, we have switched to a form in which neutron and proton densities (labelled by q) are treated separately, rather than as isoscalar and isovector sums and differences. This reflects the usual computational implementation strategy. The partial derivatives are conventionally written in symbolic form as Since the densities are made up of single-particle wave functions, the variation of each kind of density amounts to the variation of the single particle wave functions. Combining a minimisation of the energy along with a Lagrange multiplier constraint to ensure normality of each single particle wave function, one arrives at the Kohn-Sham equations which represent the particular method of approaching DFT in which one considers the density to comprise single particle wave functions: where the quantities in these terms are given in terms of Skyrme force parameters in Appendix A.
The quantity between large square brackets acting on the left hand side on the single particle wave function is thus identified with the single particle Hamiltonian as used in the HF and TDHF equations. This is a complete specification of the Skyrme-Kohn-Sham Hamiltonian making no assumptions for symmetries and including the tensor terms in the Skyrme interaction. Actual Skyrme parameter sets used in the literature may have been fitted using a subset of this full Hamiltonian, and one should be aware of the detailed form of the interaction used when fitting a parameter set before making use of it oneself. A derivation of the Hamiltonian assuming time-reversal and axial symmetry was originally given by Vautherin and Brink [75]. Engel et al. [67] extended the derivation to allow time-reversal symmetry breaking, as necessary for any dynamic calculation and for triaxial and odd-mass static calculations. Their version of the Skyrme interaction assumed x 1 = x 2 = t o = t e = 0. A complete specification of the mean-field without detailed derivation was given by Perlińska et al. [68]. Full derivations of the expressions given in Appendix A are available in unpublished theses [65,76], the most recent of which, while unpublished, is freely available from the awarding institute's online repository.
Pairing
The pairing interaction is important in the determination of ground-state properties of open-shell nuclei. Its role in most aspects of heavy-ion reaction dynamics is thought to be relatively unimportant, however, its role has been studied in heavy-ion collisions [77] and has been shown to be significant in transfer reactions [78] and in other large-amplitude collective motion, such as fission [79]. Systematic studies of the variation of the effective pairing interaction on the behaviour of heavy-ion dynamics has not been extensively studied.
The BKN interaction
In early TDHF calculations, a simplified version of the Skyrme interaction, which became known as the BKN interaction, was used [80]. It takes the t_0 and t_3 terms of the original Skyrme interaction (24) and replaces the momentum-dependent terms with a finite-range Yukawa potential, with exchange coefficients constructed to yield an action in the mean field solely in the direct term. This results in an energy density functional of [80]
H(r) = a_0 τ(r) + (3/4) t_0 ρ(r) + (3/16) t_3 ρ^2(r),
together with the finite-range Yukawa term described above. Note that the spin-orbit interaction is specifically not included in the BKN force.
[Caption of Figure 2, reproduced from [21]: "Force I" is the BKN force with relaxed isospin symmetry and "Force SII" is the SII full Skyrme interaction [75]. Note that Ref. 16 of [21] is [81].]
Early TDHF calculations
The first nuclear TDHF calculations were made in the 1970s [21,80,82,83], featuring simplified versions of the Skyrme interaction, and/or restricted geometries. Figure 2, from an early review paper [21], shows a comparison between the experimental fusion cross-section for 40 Ca + 40 Ca collisions compared with two different implementations of the BKN force, and the SII [75] Skyrme interaction. One sees that there are noticeable effects in the calculated cross sections both from the specific choice of force parameters, as well as the allowed symmetries underlying the implementation. By relaxing the isospin symmetry with the BKN force, the cross section increases, thanks to the ability for the initial translational kinetic energy to transfer into internal collective excitation modes permitted through the relaxation of symmetry.
The use of the BKN force vs the full Skyrme force was motivated by the relative ease of implementation, though the genuine finite range of the Yukawa terms may be considered more physical than the zero-range momentum-dependent terms. As well as omitting those terms from the energy density functional whose coefficients feature the t_1 and t_2 parameters, the lack of the momentum-dependent terms gave a fixed effective mass of m*/m = 1.
An early study including the numerically complicated momentum-dependent terms brought them in at the level of the density-dependent effective mass [84]. Here, versions of the Skyrme forces SII [75], SIII, SIV, SV, and SVI [85] were used in which Yukawa terms replace the momentum-dependent terms except for the effective mass (i.e., (A.1) is implemented in full, retaining the t_1 and t_2 contributions to the effective mass, but with t_1 = t_2 = 0 elsewhere in the Skyrme mean field). These Yukawa versions of the Skyrme forces are re-fitted to agree with the original Skyrme forces in nuclear matter. The authors of this study found that in head-on collisions of 16 O+ 16 O the upper fusion threshold was strongly dependent on m*/m, which has a strong influence on the time-scale of the first reflection of single-particle wave functions from the potential wall following collisions. The reflected wave functions then 're-flood' the neck, thus acting against the separation of the two fragments.
Spin-orbit interactions
The earliest Skyrme-like TDHF calculations did not include the spin-orbit interaction, owing to the complication of its implementation and the desire to at least make calculations of e.g. spin-orbitsaturated 16 O collisions without the spin-orbit force to learn the first results from semi-realistic TDHF calculations.
The first implementation of the Skyrme interaction's spin-orbit force came in the mid-1980s by Umar, Strayer and Reinhard [43], with further elaboration coming from these authors plus collaborators [86,87]. Inclusion of the spin-orbit interaction has a dramatic effect of the dynamics of heavy-ion reactions, since it couples together the spatial motion of the nucleons with the spin degree of freedom, and gives a mechanism for kinetic energy of the incoming nuclei to strongly excite internal spin degrees of freedom. The spin-orbit force is responsible for resolving the so-called "fusion window anomaly" which was found in the earliest calculations, whereby TDHF calculations gave conspicuous transparency for central collisions. Such transparency was not observed despite extensive searches motivated by the theoretical results [88][89][90][91][92]. Figure 3 shows the fusion landscape in the E cm -b plane for the SkM* force both with and without the spin-orbit force. The shaded region shows the locus of fusion for the full SkM* interaction, while the lines indicated by "SkM*-nols" show the smaller region for fusion when the spin-orbit force is absent. One notices at small impact parameter that fusion occurs in the absence of the spin-orbit interaction only over a very limited range of energies. For the most peripheral reactions that result in fusion -i.e. for large impact parameter b -the effect of the spin-orbit force is much diminished. This is because very little kinetic energy is being turned into internal inelastic excitation, but the capture and fusion depends more upon the tail of the densities being able to form a neck to form a rotating compound nucleus with relatively little internal spin excitation. The striking increase observed with the spin-orbit interaction for small b effectively resolved the fusion window anomaly. The original work [43] examined the dependence upon the Skyrme interaction by using forces SII [75] and SkM* [38] both with and without spin-orbit, and found similarly large significant effects in both cases.
The fusion window anomaly has subsequently been revisited within the TDHF picture to assess the extent to which the TDHF approximation itself, with its restriction to one-body dynamics, might be responsible for the unwonted transparency. Tohyama and Umar [93] used an extended form of TDHF, known as TDDM [94][95][96], in which certain aspects of the dynamics of the two-body density matrix, and explicit two-body collisions, are taken into account. They found that the extra dissipation allowed by the TDDM approximation was almost as significant as the spin-orbit interaction: the upper fusion threshold increased from 30 MeV to 69 MeV due to the spin-orbit interaction and from 30 MeV to 66 MeV due to two-body collisions (but in the absence of spin-orbit). With both effects included, the increase is to 80 MeV.
Table 1 presents details of all known TDHF calculations that map out the upper fusion limit for 16O+16O at zero impact parameter, including cases in which the spin-orbit force has been deliberately switched on or off, and including the TDDM results.
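The threshold energies collected in Table 1 are typically pinned down by repeating TDHF runs at a series of centre-of-mass energies and bracketing the boundary between fusion and re-separation. A minimal sketch of such a search is given below; it is purely illustrative, and the `fuses` predicate stands in for a full head-on TDHF evolution (it is not part of any published code).

```python
def find_upper_fusion_threshold(fuses, e_lo, e_hi, tol=0.5):
    """Bisect for the upper fusion threshold (MeV) at zero impact parameter.

    `fuses(E)` is a stand-in for running a head-on TDHF collision at
    centre-of-mass energy E and reporting whether a compound nucleus forms
    (True) or the fragments re-separate (False).  `e_lo` must fuse and
    `e_hi` must not, so that the threshold is bracketed.
    """
    assert fuses(e_lo) and not fuses(e_hi)
    while e_hi - e_lo > tol:
        e_mid = 0.5 * (e_lo + e_hi)
        if fuses(e_mid):
            e_lo = e_mid  # still fusing: threshold lies above e_mid
        else:
            e_hi = e_mid  # transparent: threshold lies below e_mid
    return 0.5 * (e_lo + e_hi)

# Toy example with a fake predicate whose threshold is 70 MeV:
print(find_upper_fusion_threshold(lambda e: e < 70.0, 40.0, 100.0))
```

Each evaluation of the predicate costs a full TDHF run, so in practice only a handful of energies are computed and the threshold is quoted to a few MeV at best.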
The standard form of the spin-orbit potential arising from the Skyrme interaction is (see (A.3))

$$\mathbf{B}_q = \tfrac{1}{2}W_0\,\nabla\bigl(\rho + \rho_q\bigr).$$

This one-parameter form has a fixed isospin dependence, and its posited form is motivated in part by its simplicity. In relativistic mean field (RMF) approaches [97], in which the spin-orbit term arises naturally, the isospin dependence of the spin-orbit potential comes out proportional to ∇ρ rather than Skyrme's ∇(ρ+ρ_q), while the strength acquires a density dependence. Various extensions of the Skyrme mean field have been proposed to explore more general spin-orbit forces [15,98,99], motivated by the RMF. The simplest extension allows one extra parameter to vary the isospin dependence as [100]

$$\mathbf{B}_q = b_4\,\nabla\rho + b_4'\,\nabla\rho_q,$$

with the standard Skyrme form recovered for b_4 = b_4' = W_0/2. Forces of this extended form [100] have been used for the study of fusion barriers [101], ternary fusion [102], the study of equilibration within TDHF [103], and giant resonance calculations [104,105].

Of relevance for the case of heavy-ion reactions, Vo-Phuoc et al. [101] compared the barrier energy as calculated with TDHF for the SLy4d [30] and UNEDF1 [106] Skyrme interactions. These were chosen for comparison since they both treat the centre-of-mass correction in the same way (no correction is included at the Hartree-Fock level, in the spirit that an EDF should be capable of including such correlations in the fit), but they differ in the form of the spin-orbit interaction. The comparison of the fusion barrier energies for the two Skyrme interactions is reproduced in figure 4. The authors calculate the barrier energy in the frozen HF approximation [47,107] and in full TDHF. One sees a systematic difference between the two interactions in the frozen HF approximation, and a reduction of this difference in full TDHF dynamics. The kink in the barrier energy at the N=28 magic number is visible in the frozen HF densities, but washed out in TDHF, presumably due to the deformation induced in the dynamics, which allows orbitals on either side of the magic number to be explored. For most values of A the TDHF barrier is lower than the frozen HF barrier. This is to be expected, since more degrees of freedom that enhance fusion open up in TDHF compared to frozen HF. On the other hand, for very large A in the ^A Ca chain, the frozen HF barrier is lower. This is attributed to N/Z equilibration within TDHF during the approach of the fragments, driven by the nuclear force but increasing the Coulomb barrier; such equilibration is missing from the frozen HF approach.

The SQMC parameterisation is a parameterisation of the Skyrme interaction fitted to reproduce as closely as possible the mean field of the QMC (quark-meson coupling) model. The QMC model [108][109][110][111] is a confined-quark-level meson-exchange interaction, from which a QMC EDF may be derived. It differs slightly in functional form from the Skyrme EDF in that the QMC EDF has density-dependent couplings where the Skyrme EDF has point couplings, and its spin-orbit term comes out naturally from the QMC approach, with a fixed form depending on the meson couplings and masses.
As a first step towards exploring the QMC model in heavy-ion reactions, the Skyrme-QMC (SQMC) [112] parameterisation is an attempt to map the QMC energy density functional, with its parameters fixed largely by the underlying quark-meson dynamics, onto the Skyrme EDF. In particular, McRae et al. [113] explored the spin-orbit properties of the SQMC parameter set. In the mean-field spin-orbit potential (54), the standard SQMC parameter sets have b_4'/b_4 = 1.78 (in contrast to the standard Skyrme value of 1.0), and a comparison is made with the UNEDF1 functional, which has b_4'/b_4 = 1.86. A plot of the frozen Hartree-Fock nucleus-nucleus potentials for SQMC with its natural b_4'/b_4 = 1.78 dependence, SQMC with a forced b_4'/b_4 = 1.0, and UNEDF1 is reproduced in Figure 5 for 40Ca + 132Sn. The conclusion is that the spin-orbit isospin dependence per se does not have a strong influence on the barrier height or location, at least as far as the frozen Hartree-Fock approximation goes. One might suspect that details arising in the single-particle spectrum could become more evident in the DC-TDHF method, but further studies are called for before reaching a stronger conclusion. Effects on radius isotope shifts have already been noted for forces with the extended spin-orbit form [100,114,115], and one can generally expect matter radii to affect the nucleus-nucleus potential. A full TDHF implementation of the QMC EDF will be necessary to fully explore its properties in heavy-ion collisions.
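To make the distinction between the standard one-parameter spin-orbit form and the two-parameter (b_4, b_4') extension concrete, the following sketch evaluates both form factors from neutron and proton densities on a one-dimensional grid. The density profiles and coupling values are invented for illustration and are not taken from any fitted parameter set.

```python
import numpy as np

def spin_orbit_form_factor(rho_n, rho_p, dx, b4, b4p):
    """Return (B_n, B_p) with B_q = b4 * d(rho)/dx + b4' * d(rho_q)/dx.

    The standard Skyrme form is recovered for b4 = b4' = W0/2, in which
    case B_q is proportional to the gradient of (rho + rho_q)."""
    grad_rho = np.gradient(rho_n + rho_p, dx)
    b_n = b4 * grad_rho + b4p * np.gradient(rho_n, dx)
    b_p = b4 * grad_rho + b4p * np.gradient(rho_p, dx)
    return b_n, b_p

# Illustrative Woods-Saxon-like density profiles (fm^-3) on a radial grid.
r = np.linspace(0.0, 10.0, 201)
dr = r[1] - r[0]
rho_p = 0.07 / (1.0 + np.exp((r - 3.5) / 0.6))
rho_n = 0.09 / (1.0 + np.exp((r - 3.7) / 0.6))

# Ratio b4'/b4 = 1.0 (standard Skyrme) versus an RMF-motivated ratio of ~1.8.
std = spin_orbit_form_factor(rho_n, rho_p, dr, 60.0, 60.0)
ext = spin_orbit_form_factor(rho_n, rho_p, dr, 60.0, 108.0)
```

The difference between the two choices is concentrated in the nuclear surface, where the density gradients peak.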
Figure 5: Frozen Hartree-Fock potentials for 40Ca + 132Sn for two forms of the SQMC Skyrme functional, which differ only in their spin-orbit parameters, and for the UNEDF1 functional. From [113].

Dai et al. [116] studied dissipation (the transfer of energy from relative motion to internal excitation) in 16O+16O collisions, paying particular attention to the role of the spin-orbit interaction. They used the SLy4, SkM* and UNEDF1 Skyrme interactions, and implemented the full set of mean-field terms arising from the spin-orbit force, including the time-even spin-orbit term (ρ∇·J in the functional (28)) and the time-odd spin-orbit term (s·∇×j in (28)). They examined reactions above the upper threshold for fusion in order to explore the partial transfer of the initial relative kinetic energy into a combination of final relative motion and internal excitation. A measure of dissipation was given as

$$P_{\mathrm{dis}} = \frac{E_{\mathrm{CM}} - E_{\mathrm{fin}}}{E_{\mathrm{CM}}},$$

where E_fin is the final relative kinetic energy between the two fragments and E_CM is the initial centre-of-mass energy. When E_fin = 0 the nuclei are below the threshold for separation and remain fused, indicating total dissipation from collective kinetic energy into modes internal to the compound nucleus. Figure 6 shows the enhanced dissipation caused by the inclusion of the spin-orbit interaction, as well as the increased importance of the time-odd spin-orbit force at higher initial energies (see also fig. 4 of [116]). Following the comparison of versions of the SLy4 parameter set with and without the time-odd and time-even spin-orbit forces, Dai et al. go on to look at the proportion of the dissipated energy which arises from the spin-orbit force, defined as

$$P^{\mathrm{ls}}_{\mathrm{dis}} = \frac{P^{(\mathrm{full\text{-}ls})}_{\mathrm{dis}} - P^{(\mathrm{no\text{-}ls})}_{\mathrm{dis}}}{P^{(\mathrm{full\text{-}ls})}_{\mathrm{dis}}},$$

where P^(no-ls)_dis and P^(full-ls)_dis refer to the blue lines with triangular points and the black lines with square points in figure 6, respectively. This proportion of dissipated energy due to the spin-orbit force is shown in figure 7 for SkM*, SLy4, and UNEDF1. There is a striking difference between UNEDF1 on the one hand and SkM* and SLy4 on the other, with much less of the UNEDF1 dissipation being due to its spin-orbit interaction. It is possible that the b_4'/b_4 value, as discussed by McRae et al. [113] and above, causes this remarkable effect.
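Under the definitions above, the bookkeeping that produces figures 6 and 7 reduces to a few lines. The sketch below simply encodes the two ratios with made-up numbers; it is not tied to any particular TDHF output.

```python
def dissipated_fraction(e_cm, e_fin):
    """P_dis = (E_CM - E_fin) / E_CM: fraction of the initial relative
    kinetic energy converted into internal excitation.  E_fin = 0 means
    the fragments never re-separate (complete dissipation / fusion)."""
    return (e_cm - e_fin) / e_cm

def spin_orbit_share(p_full, p_nols):
    """Proportion of the dissipated energy attributable to the spin-orbit
    force, comparing full and no-spin-orbit runs at the same E_CM."""
    return (p_full - p_nols) / p_full

# Illustrative numbers only: E_CM = 100 MeV, with final fragment kinetic
# energies of 35 MeV (full spin-orbit) and 60 MeV (spin-orbit removed).
p_full = dissipated_fraction(100.0, 35.0)   # 0.65
p_nols = dissipated_fraction(100.0, 60.0)   # 0.40
print(spin_orbit_share(p_full, p_nols))     # ~0.38
```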
In [117], Iwata examines dissipation mechanisms by extracting a collective potential energy and following it as a function of time in collisions of 16O and 16O at 40 MeV centre-of-mass energy. The SLy4d [30] and SkM* [38] Skyrme interactions are used. The author performs calculations for each of these forces with the spin-orbit interaction turned off, as well as on, reproducing the result that the spin-orbit interaction is crucial for fusion at this energy thanks to its role in dissipation. The results for SLy4d and SkM* are discussed as being qualitatively identical, in the sense that fusion does not occur with either force if spin-orbit is removed, but does occur when it is included. From the plots provided, though, one sees differences of the order of 30 MeV in the collective potential energy between the two effective interactions at the point where the two fragments are touching.
Tensor interaction
The tensor terms in the original Skyrme interaction (25) were omitted in the original Hartree-Fock implementation [75], since that work was restricted to the ground states of spherical nuclei, where the extra degrees of freedom allowed by the tensor terms affect only details of the spin-orbit splitting; these were deemed beyond the necessities of a first implementation in which the basic spin-orbit force seemed adequate. The effect of including the tensor interaction was first studied through its effect on single-particle levels [119], where the authors concluded that it gave only minor improvements in the reproduction of observed spin-orbit splittings. The authors of this original work have since followed up with further explorations [120,121].
While the tensor terms had occasionally been included in implementations of the Skyrme interaction [122,123], a general renaissance in the use of the tensor part of effective interactions came from the interacting shell model [124]. New explorations with the Skyrme tensor force followed, including a series of papers [69,125,126] in which a selection of tensor parameterisations was introduced, each fitted with the same protocol as the SLy parameter sets [127][128][129] but with different choices of isospin dependence, set by the strengths of the tensor parameters. Colò et al. introduced a Skyrme-tensor parameterisation [130] based on perturbatively adding the tensor terms to the SLy parameter set SLy5 [128]. A recent review of the tensor force in effective interactions by Sagawa and Colò [131] gives further details of the use of tensor terms across many observables, as well as a historical summary of its implementation.
Inclusion of the Skyrme tensor interaction in heavy-ion collisions has been implemented with one of two philosophies. One is to include the effects of the tensor terms on the mean field only through the spin-orbit interaction, the argument being that this is presumably their dominant effect. Moreover, particular terms arising in the energy density functional from the Skyrme interaction can be, and have been, treated as individual terms whose inclusion is never mandatory. This basic inclusion of the tensor interaction gives rise to a term in the energy density which, for spherical even-even nuclei, takes the form

$$\Delta\mathcal{E} = \tfrac{1}{2}\alpha\left(\mathbf{J}_n^2 + \mathbf{J}_p^2\right) + \beta\,\mathbf{J}_n\cdot\mathbf{J}_p,$$

where J is the antisymmetrised part of the full J tensor, as defined in (32). Corresponding to this is a contribution to the spin-orbit potential (A.3) of

$$\Delta\mathbf{B}_n = \alpha\mathbf{J}_n + \beta\mathbf{J}_p, \qquad \Delta\mathbf{B}_p = \alpha\mathbf{J}_p + \beta\mathbf{J}_n.$$
In fact, these terms already exist functionally in the spin-orbit potential as derived from the t_1 and t_2 terms of the central part of the Skyrme force; explicit expressions for the parameters α and β, combining the contributions of the central (t_1, t_2, x_1, x_2) couplings and the tensor couplings, are given in [132].

Iwata and Maruhn [133,134] studied the effect of the tensor terms on the spin-orbit interaction specifically, to understand the relative contributions of the tensor and spin-orbit terms and their role in dynamic spin polarisation. They use a range of Skyrme parameterisations, including a series labelled SV-tls [135] which includes a parameter allowing the t_1 and t_2 contribution to the spin-orbit potential to be dialed on (η_tls = 1) or off (η_tls = 0). All the SV forces allow a fully free value of the spin-orbit b_4'/b_4 ratio when fitting. Iwata and Maruhn defined a time-dependent ratio of the strength of the spin-orbit field arising from the J terms to that arising from the ∇ρ terms, W^T_q/W^LS_q(t), and found that the size of this ratio at an indicative time varied between ∼1% and ∼22% depending on the interaction. A rather strong mass dependence was found, with the ratio increasing as mass increased, at least for the three systems shown in figure 10. The tensor force is shown to be able to enhance or hinder the transfer of centre-of-mass motion into spin excitation during a heavy-ion collision, depending on the way in which the tensor and central parameters combine: if α + β is negative, the dissipation into spin modes is enhanced.

Stevenson et al. [45,132,136] used the full tensor interaction, with all EDF terms except the unstable spin-dependent terms, to study reactions of 16O+16O at the upper threshold between fusion and deep-inelastic reactions. At b = 0 a large variation in the upper fusion threshold was found, ranging between 61 MeV (T12) and 87 MeV (T46) amongst the forces considered. This study complemented previous studies of this benchmark value, and a compilation of all known results is presented in Table 1. The authors analysed the contributions to the total energy from the different terms in the functional, and found typical changes due to the tensor interaction to be of the order of a few hundred keV. The most pronounced changes were in the J^2 term, justifying the first approximation of including only these terms when adding the tensor force. The contribution from the J^2 terms to the total energy can be positive (decreasing binding) or negative (increasing binding). Figure 8 shows the energy contribution from the J^2 terms as a function of time during the collisions.

Long and Guo [139] use the full tensor interaction (i.e. with the full EDF, but with A^∇s and A^Δs again set to zero, as always) and explore the fusion barrier for 16O+16O using all 36 of the TIJ tensor-force parameter sets along with SLy5 and SLy5t within the frozen Hartree-Fock approximation. For a selection of these forces, they calculated the barrier energy with full TDHF. They found the height of the barrier to be uniformly too low, lying in the narrow range 9.96-10.12 MeV, compared to an experimental value [140] of 10.61 MeV. The narrow range will in part be due to the spin-saturated nature of 16O and the lack of dynamical effects in the frozen HF approximation, which render the ground-state calculation quite insensitive to the tensor interaction. Including the dynamic effects afforded by TDHF reduces the barrier heights in each case by a small amount, ranging from a frozen HF → TDHF reduction of 10.08 → 10.05 MeV (T22) to 10.02 → 9.90 MeV (T44).
The radial separation of the nuclei at the Coulomb barrier is also systematically at variance with the data: the frozen HF approximation gives between R = 8.50 fm and R = 8.58 fm, compared with an experimental value of R = 7.91 fm. This suggests that the tensor force does not provide the right degrees of freedom to overcome any deficiencies in the ability of Skyrme-TDHF to reproduce the fusion barrier in 16O+16O correctly. However, this conclusion may be too hasty, given that all the forces used in this study were fitted with a centre-of-mass correction but were necessarily used without it for the two-body study.
A more positive prospect for the role of tensor parameters in fusion reactions comes from a recent study by Guo et al. [141]. They considered reactions of 40Ca + 40Ca, 40Ca + 48Ca, 48Ca + 48Ca, 48Ca + 56Ni, and 56Ni + 56Ni, making a detailed comparison between forces for 48Ca + 48Ca as a representative example. Figure 11 shows the frozen HF and TDHF barriers (upper panel) across the forces SLy5, SLy5t, T22, T26, T44, and T62. As expected, the TDHF barriers are lower than those from frozen HF. Depending on the Skyrme parameterisation, the barrier height can be rather well reproduced in TDHF.

Figure 11: The upper panel shows the fusion barrier for a selection of Skyrme-tensor forces using the frozen HF (FHF) approximation or full TDHF, compared with the experimental value [142]. The lower panel shows the collective quadrupole (2+) and octupole (3-) states which contribute to the lowering of the barrier in TDHF compared to FHF, with lower-energy collective states correlating with a stronger reduction of the barrier height in TDHF. Figure from [141].
Calculations of the cross section using the sharp cut-off formula (20) in TDHF are shown for the set of Skyrme forces considered, reproduced in figure 12. While all interactions overestimate the cross section, the scale of the variation between forces is such that the discrepancy between calculation and experiment, shown in the lower panel of the figure, varies markedly, and is much lower for those forces which best reproduce the barrier height in TDHF. This work therefore provides an example in which the Skyrme tensor force has a sufficiently large effect on reaction dynamics around the Coulomb barrier to make changes of the order of the typical discrepancy between experiment and model calculations.
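For orientation, the sharp cut-off estimate used for these cross sections converts the largest fusing partial wave (or, equivalently, the largest fusing impact parameter found from a sequence of TDHF runs) directly into a cross section. A minimal sketch with purely illustrative inputs is given below; the prefactor is the standard quantal sharp cut-off expression σ = πħ²(l_max + 1)²/(2μE).

```python
import numpy as np

HBARC = 197.327  # MeV fm
AMU = 931.494    # MeV

def sharp_cutoff_xsec(l_max, e_cm, a1, a2):
    """Sharp cut-off fusion cross section (mb) from the largest fusing
    partial wave l_max at centre-of-mass energy e_cm (MeV), for colliding
    nuclei with mass numbers a1 and a2."""
    mu = AMU * a1 * a2 / (a1 + a2)                 # reduced mass, MeV/c^2
    lambda_bar_sq = HBARC**2 / (2.0 * mu * e_cm)   # (reduced wavelength)^2, fm^2
    sigma_fm2 = np.pi * lambda_bar_sq * (l_max + 1) ** 2  # sum of (2l+1) up to l_max
    return 10.0 * sigma_fm2                        # 1 fm^2 = 10 mb

# Illustrative only: 48Ca + 48Ca with l_max = 40 at E_cm = 55 MeV.
print(sharp_cutoff_xsec(40, 55.0, 48, 48))
```

In TDHF the cut-off l_max is itself interaction-dependent, which is the route by which the tensor (and spin-orbit) terms feed through to the cross sections compared in figure 12.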
Dai et al. [118] studied the effect of tensor interactions on dissipation in 16O+16O collisions in a similar manner to their work on the spin-orbit interaction [116], looking at the energy transfer from initial relative motion to internal excitation in deep-inelastic scattering. They found that the tensor interactions could either reduce or enhance dissipation compared to the SLy5 fit. The T11 tensor interaction decreased the dissipation, presumably by resisting the transfer of energy into the J^2 terms, and thus gave a reduced cross section compared to the other interactions studied (SLy5, SLy5+T, T11, T13, T31, T33). The authors computed the sharp cut-off fusion cross section in TDHF at 70.5 MeV centre-of-mass energy in order to compare with an experimental point of σ_fus = 1056 ± 125 mb [143]. They found that the results for the tensor parameterisations (all fitted to the same set of ground-state data) varied between 1161 mb for T11 (inside the error bar of the experimental point) and 1327 mb for T33. Studies of the tensor force have therefore found significant effects around the fusion barrier, at the top of the fusion region, and beyond into deep-inelastic energies.
Variation of nuclear matter properties
The SV range of Skyrme parameterisations [135] were each fitted in the same manner, with each having a specific nuclear matter property (incompressibility K, isoscalar effective mass m*/m, symmetry energy J, or Thomas-Reiche-Kuhn sum-rule enhancement factor κ_TRK) varied with respect to a "basis" parameter set, SV-bas. The SV-bas set has K = 234 MeV, m*/m = 0.9, J = 30 MeV, and κ_TRK = 0.4. These parameter sets thus provide a family of interactions which can be used to study the role of nuclear matter properties in heavy-ion collisions within the Skyrme-TDHF framework. In [144], the authors study fusion barriers and cross sections for 48Ca + 48Ca using a set of the SV Skyrme interactions in order to understand the extent to which fusion cross sections are sensitive to nuclear matter properties. In part this was motivated by the known link between the symmetry energy and the neutron skin thickness [145], which is currently a key driver of experimental determinations of neutron radii [146,147]. Figure 13 shows the ratio of the fusion cross section for 48Ca + 48Ca between various SV forces and the SV-bas base version. The calculations are made from nucleus-nucleus potentials obtained by the DC-TDHF method (see sec. 2.4). The largest differences in cross section were seen when varying the symmetry energy, as expected, with the rather modest range of J = 28-34 MeV (consistent with observation [71]) spanning around a factor of 3 in cross section just below the fusion barrier. One also sees from the figure a comparison with experimental data, showing that this modest variation of nuclear matter parameters is enough for different model predictions to differ from each other by much more than the experimental error bars (around and below the barrier, at least), and that none of the SV forces alone fits the data across all energies.
Other studies
Umar and Oberacker [137] made the first exploration of terms in time-odd densities that are not mandated by Galilean invariance, namely the terms in the functional (28) in s^2, s·∇^2 s and (s·T − J^2). Note that they did not include tensor parameterisations, so they did not need the (∇·s)^2 or s·F terms. They observed noticeable effects in the position of the upper threshold between fusion and deep inelastic scattering when activating the time-odd terms, highlighting the need to at least consider them for inclusion in one's density functional.
In [148], versions of the SV-bas force [135] are generated in which whole terms in the interaction are turned on or off, to examine the resulting transparency during collisions and the corresponding connection to treating the travelling quantum-mechanical wave packet representing the nuclei as a soliton [149][150][151]. The soliton-like nature of the wave packet is confirmed by a high degree of transparency when (and if) the colliding nuclei pass through each other without change. In collisions of 4He on 8He, the transparency was found to be highly energy-dependent, with an incident energy of around 30 MeV giving high transparency and hence soliton-like behaviour. In terms of the interaction, it is the momentum-dependent terms in the Skyrme interaction (the t_1, t_2 and spin-orbit terms) that suppress transparency the most.
Figure 13: Cross sections for the fusion of 48Ca + 48Ca, calculated using the density-constrained TDHF method. Shown is the ratio of the calculated cross section for a variety of different Skyrme forces from the SV set [135] to the basic SV parameterisation SV-bas. Points indicate the experimental cross section, with data from [142]. Figure from [144].

Loebl et al. followed up a study of dissipation in Skyrme-TDHF using the Wigner transformation [103], which concluded that full equilibration does not occur in TDHF, with a further study to see if there is any dependence on the choice of parameterisation, or on the use of time-odd terms not usually activated [152]. In particular, they performed calculations with the SLy4 [127] force in its standard form, and also with the s^2 and s·T − J^2 terms from the functional (28) activated. While no difference in the equilibration was found, differences in the long-time outcome near the upper fusion threshold were observed, with the location of the threshold being sensitive to the change in dissipation coming from the extra terms.
Godbey, Umar, and Simenel [153] took a single Skyrme interaction (SLy4) and separately calculated the contributions from the isovector and isoscalar terms in the EDF (i.e. the t = 1 and t = 0 terms, respectively, in the sum in equation (28)) as applied to the DC-TDHF potential. In so doing, they were able to quantify the isovector contribution to the ion-ion potential. For fusion reactions in which transfer channels are active, the authors showed that an isovector reduction of the potential exists, demonstrating a fusion enhancement due to transfer. In principle one should then expect these calculated results to depend upon the isospin character of a particular Skyrme parameterisation.
Conclusion
The role of the effective interaction in the dynamics of heavy-ion reactions has been surveyed. Within mean-field dynamics, varying the effective interaction between reasonable limits (i.e., using only those interactions which are available in the literature and which fit ground-state data well) produces qualitatively and quantitatively variable behaviour in heavy-ion collisions at energies below the Coulomb barrier, in the fusion region, and in the deep-inelastic region up to the upper energy limits where mean-field dynamics may still be supposed a reasonable approximation. One concludes, therefore, that the effective interaction is instrumental in understanding the details of reaction dynamics, and that results from heavy-ion reactions in turn inform us about the details of the effective interaction. In the case of the Skyrme-tensor interaction, both the structure of individual nuclei and their dynamics as they collide can be affected; further study is needed on the interplay between these two aspects.
Factors affecting follower responses to movement calls in cooperatively breeding dwarf mongooses
In social species, individuals maximize the benefits of group living by remaining cohesive and coordinating their actions. Communication is key to collective action, including ensuring that group members move together; individuals often produce signals when attempting to lead a group to a new area. However, the function of these signals, and how responses to them are affected by intrinsic characteristics of the caller and extrinsic factors, has rarely been experimentally tested. We conducted a series of field-based playback experiments with habituated wild dwarf mongooses, Helogale parvula, a cooperatively breeding and territorial species, to investigate follower responses to movement calls. In our first experiment, we found that focal individuals were more likely to respond to playback of 'movement calls' than control 'close calls', indicating that movement calls function as recruitment signals. In a second experiment, we found that focal individuals responded similarly to the movement calls of dominant and subordinate groupmates, suggesting that dominance status (an intrinsic factor) does not influence receiver responses. In a final experiment, we found that individuals responded to the simulated presence of a rival group, but that this outgroup conflict (an extrinsic factor) did not affect responses to movement calls compared to a control situation. This may be because attention is instead focused on the potential presence of an imminent threat. By using playbacks to isolate the acoustic signal from physical movement cues, our results provide experimental evidence of how movement calls help leaders to attract followers and thus add to our understanding of recruitment signals more generally.
To maximize the benefits of group living (e.g. resource defence and reduced predation risk), group members need to act collectively; they must remain cohesive and coordinate with one another (Conradt & Roper, 2005; Krause & Ruxton, 2002; Ioannou et al., 2019). Since groups are composed of a heterogeneous mix of individuals whose interests do not perfectly align (Conradt & Roper, 2005), communication is often crucial to ensure collective action (Bradbury & Vehrencamp, 2011). Signals relating to collective movement can be produced at two stages of the process, which are not necessarily mutually exclusive. Individuals may produce a signal to indicate their readiness to move and/or when they attempt to initiate group movement, either following earlier signals of readiness or independently (Bousquet et al., 2011; Sperber et al., 2017; Turbé, 2006). For instance, in wild dogs, Lycaon pictus, observational work indicates that a threshold of 'sneezing' individuals is needed to initiate group movements from a resting period (Walker et al., 2017), while 'moving calls' from several individuals are similarly required in meerkats, Suricata suricatta, for the group to change from one foraging patch to another (Bousquet et al., 2011). In some species, or certain contexts, a single individual may attempt to move elsewhere; attracting followers will avoid them becoming isolated and thus putative leaders may use movement signals to enhance the likelihood that they are joined. For example, meerkats also produce a distinct 'lead call', which is used when a potential leader attempts to initiate movement from a sleeping burrow to start foraging (Turbé, 2006). In white-faced capuchins, Cebus capucinus, backward glances seem to be important in recruiting others when shifting from resting to foraging, as the number of followers increases after a glance from a moving individual (Meunier et al., 2008). The faster 'grunt' rates of leaders compared to followers in redfronted lemurs, Eulemur rufifrons, when moving throughout the day suggest that this call may function as a movement signal (Sperber et al., 2017), and vocalizing when leaving the group increases the chances of an individual green woodhoopoe, Phoeniculus purpureus, being followed by its groupmates when changing foraging patches (Radford, 2004). While movement signals appear to be important in coordinating the actions of group members, there has been little experimental testing of the proposed function to recruit followers (for an exception, see Teixidor & Byrne, 1999), or of how follower responses differ depending on intrinsic characteristics of the signaller (e.g. their identity; but see Preston, 2020) and on extrinsic factors (e.g. the level of outgroup threat).
On hearing a movement signal, individuals might use information about the dominance status of the leader when deciding whether to follow. In principle, dominant individuals could be more likely to be followed if subordinates gain some benefit from doing so; for instance, if following increases future social tolerance or social-bonding opportunities (King et al., 2008; Smith et al., 2015). Dominant individuals could also be considered more reliable sources of information. For example, if they have greater knowledge of the environment, they may be more likely to lead individuals to better foraging patches (Brent et al., 2015; McComb et al., 2001). Alternatively, if group decisions are more evenly distributed across group members (Leca et al., 2003), then both dominants and subordinates could elicit similar responses from followers (Jacobs et al., 2011; Leca et al., 2003; Wang et al., 2016). Most work to date has investigated how dominance status affects the likelihood of leading. For example, in chacma baboons, Papio ursinus, the dominant individual tends to arrive at experimental food patches first, with subordinates following behind (King et al., 2008), while observations of Tibetan macaques, Macaca thibetana, suggest that dominance rank does not affect who leads the group away from depleted foraging patches (Wang et al., 2016). Far less work has examined how individuals respond to movement signals depending on the rank of the caller. One exception is an observational study of meerkats showing that dominant females producing a 'lead call' were more likely to be followed by group members than dominant males or subordinates producing the same call (Turbé, 2006), but experimental tests are needed.
Extrinsic factors can also affect follower decisions – for instance, simulated predator attacks on captive house sparrows, Passer domesticus, have been shown to reverse leader–follower positions relative to an exploratory context (Tuliozi et al., 2021) – but the influence of outgroup conflict in this regard has been little considered. Members of social species often interact with outside groups or individuals, which can pose a threat. For example, rival groups may be attempting to steal territory or resources (Dyble et al., 2019; Kelly, 2005), while individual outsiders may be seeking mating opportunities or a breeding position (Braga Goncalves & Radford, 2019; Mares et al., 2012). Contests with outsiders can have immediate consequences, such as physical injury or death (Dyble et al., 2019; Morris-Drake et al., 2022), while the threat of outgroup conflict can cause significant changes to within-group behaviour, including elevated levels of grooming, contact or aggression (Arseneau-Robar et al., 2018; Birch et al., 2019; Radford, 2008). Subsequent movement patterns and collective decision making have also been shown to be influenced by outgroup conflict (Christensen et al., 2016; Dyble et al., 2019; Morris-Drake, Linden et al., 2021; Radford & Fawcett, 2014). Deciding to follow another individual under conflict scenarios could have significant fitness implications; for instance, banded mongoose, Mungos mungo, males that follow a dominant female into violent contests suffer an increased mortality cost (Johnstone et al., 2020). When there is the prospect of an imminent outgroup contest, group members may want to stay more cohesive due to heightened anxiety or to prime for battle (Birch et al., 2019; Morris-Drake et al., 2019), and thus could be more receptive to movement signals from leaders.
Dwarf mongooses, Helogale parvula, are an ideal species in which to investigate experimentally the responses of group members to movement calls. They live in cooperatively breeding groups that each defend a year-round territory (Rasa, 1987), with group members spending most of the day foraging together throughout their territory before returning to a communal burrow to sleep (Rasa, 1987). Dwarf mongooses are highly vocal, maintaining contact during foraging by producing sporadic 'close' calls (Rasa, 1987). When departing or returning to a sleeping burrow, and when moving from one foraging patch to another, individuals move cohesively at a heightened pace, usually following a leader that has initiated the movement while producing a 'movement call': a fast burst of multiple close calls. Prior to movement from a resting position (e.g. from a sleeping burrow) there is also a gradual increase in the frequency of close calls, which may indicate an increasing willingness to move (Sperber et al., 2017). By contrast, when dwarf mongoose groups move from one foraging patch to another, there is no obvious predeparture behaviour; instead, an individual attempts to initiate group movement by moving at pace while producing a movement call. We focus on the latter behaviour in this paper.
Dwarf mongoose groups comprise a dominant breeding pair and subordinate helpers (all other adults); group members can obtain information about dominance status and individual identity from various calls (Kern et al., 2016; Morris-Drake, Kern et al., 2021; Sharpe et al., 2013). Previous work reported that dwarf mongoose movement decisions are despotic in nature, with the dominant female always leading the group (Rasa, 1987), but recent observations show that over half of group movements are led by subordinates (Cobb et al., 2022). Groups come into conflict with conspecific rivals, both neighbours and those from further afield (Christensen et al., 2016; Rasa, 1987), on average once every 2 weeks in the study population (Cobb, 2022); groups encounter faecal deposits of rival groups much more regularly (Christensen et al., 2016). Intergroup interactions (IGIs) involve a combination of group members looking at each other, vocalizing and, on some occasions, escalation to physical fights (Rasa, 1987). Individuals forage closer to their nearest neighbour after the simulated threat of a rival group (Morris-Drake et al., 2019), which could proximately be a response to heightened anxiety about imminent conflict (Radford et al., 2016), and ultimately represent priming behaviour to ensure the most collective response to outsiders (Birch et al., 2019; Radford, 2011).
We investigated subordinate group member responses to dwarf mongoose movement calls in three related field experiments. First, we tested whether the call functions to attract followers. We predicted that, compared to control close calls, movement calls would elicit a 'follow' response, with the focal individual becoming more vigilant, vocalizing and moving towards the loudspeaker. Second, we tested whether individuals respond differently to movement calls from dominant and subordinate group members, predicting either a stronger response to movement calls from dominant individuals, or for there to be no clear difference in response to movement calls from dominant versus subordinate individuals. Third, we tested how the threat of a nearby rival group affects the response to movement calls. We predicted that, compared to a control stimulus, the simulation of an intergroup threat would result in heightened responses to movement calls, such that the group would remain cohesive in case a contest occurred imminently.
Study Site and Population
We carried out the research at the Dwarf Mongoose Research Project (DMRP) in Limpopo Province, South Africa (24°11′S, 30°46′E); see Kern and Radford (2013) for more details. Eight wild but habituated groups, each comprising 4–12 adults (individuals >1 year old), were used in experiments during the study period (April–August 2020). Groups are habituated to close human presence (<5 m) and individuals are uniquely dye-marked (Kern & Radford, 2013). The dominance status (dominant or subordinate; identifiable from the outcome of aggressive interactions such as foraging displacements) and sex (identifiable from anogenital grooming bouts) of all individuals is known from the long-term observations (Kern & Radford, 2013, 2016). We considered only adults for playback experiments because individuals less than 1 year old rarely lead the group (Cobb, 2022).
Experimental Overview
We conducted three playback experiments to investigate the responses of focal subordinate individuals to the movement call of another group member. In experiment 1 (10 April–8 June 2020), we determined the baseline responses to the movement call of a dominant individual by comparing them to the responses elicited by close calls (given while foraging) of the same dominant group member. In experiment 2 (27 April–25 June 2020), we tested whether responses differed depending on the dominance status of the caller, comparing those elicited by movement calls of dominant and subordinate group members of the same sex (the focal individual was not necessarily sex-matched to the signallers). In experiment 3 (10 July–16 August 2020), we tested how the simulated presence of a rival group affected responses to movement calls. Experiment 3 involved two parts: an initial playback of close calls and 'lost' calls (high-pitched vocalizations usually produced while foraging, particularly when an individual becomes isolated) from a non-neighbouring rival group or control herbivore sounds, and then playback of the same movement call of a dominant group member. All three experiments had matched-pairs designs, with each focal subordinate in an experiment receiving two treatments in a counterbalanced order (N = 18 individuals from six groups for experiments 1 and 2; N = 16 individuals from eight groups for experiment 3).
Recordings and Playback Tracks
We recorded calls ad libitum within 3 m of an individual in calm conditions, using a Marantz PMD661MKII solid-state recorder (Marantz, Kanagawa, Japan) and a Sennheiser MKE600 shotgun microphone (Sennheiser, Wedemark, Germany) coupled with a Rycote Softie windshield (Rycote Microphone Windshields, Stroud, Gloucestershire, U.K.). As all groups are well habituated to close human presence, the behaviour and vocalizations of individuals were not impacted during recordings. We recorded individual close and lost calls while groups were foraging throughout the day, and we recorded individual movement calls when a group moved collectively (sometimes excluding individuals such as babysitters; B. Cobb, personal observation) from a sleeping burrow to a foraging site, from one foraging patch to another, or to a sleeping burrow before sundown (example recordings available in Supplementary Material). Collective group movements are initiated by one individual moving quickly away from the group while producing a movement call; those following often produce movement calls too.
To construct playback tracks, we used Audacity 2.3.3. For all tracks, we superimposed good-quality recordings of calls (e.g. no overlapping sounds such as conspecific calls) onto recordings of ambient sound recorded in calm conditions in the centre of a group's territory when no dwarf mongooses were present. We used a HandyMAN TEK 1345 sound meter (Metrel U.K. Ltd; Epsom, Surrey, U.K.) to standardize playback volume of calls to match natural vocalizations, as well as amplifying calls in Audacity where needed. We applied a high-pass filter (filtering out frequencies below 300 Hz) in all tracks to improve signal-to-noise ratio and to standardize background sound. The same ambient-sound recording was used for both playbacks within a pair (i.e. the two treatments to a focal individual in a given experiment). Movement calls, which are composed of fast-repeating close call elements, are often preceded by infrequent close calls (Maier et al., 1983). To replicate this combination and to standardize track length, movement call tracks for all three experiments consisted of 25 s of ambient sound, with two close calls (one at 2 s and one at 8 s after the start of the track) followed by a movement call commencing 14 s from the start of the track (Fig. 1, bottom). We standardized movement calls to be 10 close call elements within 6–7 s based on early analysis of a subset of recordings during the field season (mean ± SE call rate = 1.5 ± 0.1 close call elements/s, range 0.4–3.6); thus, the movement call playback rate ranged from 1.4 to 1.6 close call elements/s. For all experiments, both female and male vocalizations were used for playbacks. The same calls were sometimes used across experiments.
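The track assembly described above amounts to pasting call recordings onto an ambient-sound bed at fixed offsets, something that was done by hand in Audacity. As a rough illustration of the same recipe in script form (the file names, mono format and the `soundfile` dependency are assumptions, not part of the published methods):

```python
import soundfile as sf

def overlay(bed, call, start_s, sr):
    """Mix `call` into `bed` (both mono arrays) starting at `start_s` seconds."""
    i0 = int(start_s * sr)
    n = min(len(call), len(bed) - i0)
    bed[i0:i0 + n] += call[:n]

# Hypothetical input files: 25 s of ambient sound, one close call, one movement call.
ambient, sr = sf.read("ambient_25s.wav")
close_call, _ = sf.read("close_call.wav")
movement_call, _ = sf.read("movement_call.wav")

track = ambient.copy()
overlay(track, close_call, 2.0, sr)      # close call at 2 s
overlay(track, close_call, 8.0, sr)      # close call at 8 s
overlay(track, movement_call, 14.0, sr)  # movement call commencing at 14 s

sf.write("movement_call_track.wav", track, sr)
```

High-pass filtering and amplitude standardization (to 50–55 dB at 1 m) would still need to be applied, as described above.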
In experiment 1, we compared responses to movement call and control tracks from the same dominant individual. Control tracks comprised 25 s of ambient sound with four close calls at 2, 8, 14 and 20 s from the start of the track (Fig. 1, top). We standardized both close calls and movement calls to 50–55 dB from 1 m. Within the experiment, a given individual was used as a source of calls no more than three times (mean = 1.8), and a given call was only used once in playback tracks.
In experiment 2, we compared responses to movement call tracks from a dominant and subordinate individual. A given individual was used as a source of calls no more than three times (mean = 1.4). We standardized calls to 50–55 dB from 1 m, and used a given call once within the experiment. The two playbacks to a focal individual were of calls from individuals of the same sex as each other (e.g. a dominant male and a subordinate male) to ensure the sex of the caller had no effect on responses.
Experiment 3 involved two parts. For part 1 (the rival group or herbivore control playback), we created tracks using similar methodology to Morris-Drake et al. (2019). Herbivore control tracks contained sounds from blue wildebeest, Connochaetes taurinus, giraffe, Giraffa camelopardalis giraffa, and waterbuck, Kobus ellipsiprymnus. We pasted four herbivore sounds onto 12 s of ambient sound, to create four different sequences. We then pasted these sequences into a 1 min track (one sequence being used twice) in a random order, which we duplicated to make a 2 min herbivore track. Rival group tracks each contained calls from a single other group: close calls from four individuals, including at least one dominant, and lost calls from two individuals. We inserted four close calls (one from each individual) into a 3 s sequence. Four sequences were constructed, each with a randomized order of caller. We then inserted these four sequences into 12 s blocks of ambient sound, to make five 12 s blocks, with each block having a randomized sequence order. These blocks were then combined to make a 1 min track, and five calls were removed at random to create a call rate of 75/min, as per the natural call rate of a foraging group and in line with previous experimental work (Morris-Drake et al., 2019; Sharpe et al., 2013). In this 1 min segment, four lost calls from two individuals (two each) were then inserted into the track at random time stamps within the first 30 s, alternating between individuals. As lost calls are difficult to predict and record, some recordings from previous field seasons from individuals no longer in the group were used. As we were playing back calls from non-neighbouring groups, we did not expect this to affect responses of the focal group. We then duplicated each 1 min track to make 2 min tracks. We faded rival group tracks so that the maximum amplitude (50–55 dB at 1 m for close calls and 60–65 dB at 1 m for lost calls) was reached at 1 min, to simulate a rival group approach. Previous work has shown that individuals are able to distinguish between calls of their own group and those of a rival group (Morris-Drake et al., 2019).
Some close calls and herbivore sounds were used more than once within part 1 of the experiment, but the component parts of each track were arranged randomly in a different order to generate unique tracks. We used the same group for playback construction no more than four times (mean = 2.3), with a maximum of three focal individuals per group receiving playbacks (mean = 2). The same rival group was used for playback on a maximum of two focal individuals from the same group. As rival tracks were from non-neighbouring groups (and thus all rivals were unknown outsiders from the perspective of a focal group), it is unlikely that group identity affected focal responses, and a 2-week gap was left between trials on different individuals within the same group to avoid habituation to the calls (see Experimental Protocol below for further details).
For part 2 (the movement call playback), a given individual was used as a source of calls no more than twice (mean = 1.2), with different calls used for different focal individuals. Calls were standardized to 50–55 dB from 1 m. After receiving the playback track in part 1, a focal individual received a movement call track from a given dominant individual within its group. The same movement call track was used following a herbivore or rival group track to ensure differences in movement calls had no effect on responses.
Experimental Protocol
For all three experiments, we conducted trials during the day when the group was foraging, in calm weather conditions and at least 10 min after a group movement, latrine behaviour, snake mob or other disturbance. If an IGI occurred, at least 30 min was left before running a trial in experiments 1 and 2; for experiment 3, trials were carried out on a different day to IGIs. We started trials when the focal individual was foraging at least 2 m from other individuals.
We carried out experiments 1 and 2 using a similar experimental protocol. We placed a loudspeaker (Rokono B10 or Rokono BASS+ Mini, Boundless Technology Limited, Devon, U.K.) connected to an MP3 device (either a Moto G 5 phone; Motorola Inc, Chicago, IL, U.S.A., or a Kubik Evo; Kubik Digital Electronics) 3 m perpendicular from the focal individual (chosen randomly before visiting the group), hidden in vegetation. Trials to the same individual were separated by at least 1 day and performed at a similar time of day. Within a group, at least 30 min was left between trials on different individuals. If a trial was disturbed (e.g. due to conspecific alarm calls or the focal individual moving into vegetation and out of view), it was abandoned (experiment 1: N = 4; experiment 2: N = 7) and repeated that day or at a later date, but with the order of the treatments reversed. The playback track in the abandoned trial was therefore not used more than once on the same day, to avoid habituation.
For experiment 3, we used two loudspeakers, one for each part. To avoid disturbing the focal individual during loudspeaker set-up, a small amount of egg was used to attract it to an area where the two loudspeakers were already positioned. When playback started, the focal individual was thus 5 m from the first loudspeaker (used to broadcast either the rival group or herbivore track) (Morris-Drake et al., 2019). The second loudspeaker (used for the movement call playback) was placed diagonally ca. 3 m from the first loudspeaker so that, if the focal individual approached the first loudspeaker, the second loudspeaker would be positioned to one side of the individual. Following initial playback of a rival group or herbivore track, the movement call track was started at least 30 s, and no more than 5 min, later. Variation in time between playbacks was due to individuals moving out of view, for example into dense vegetation, before the movement call track could be started, but there was no difference between treatments (mean ± SE time after a rival group track = 110 ± 22 s, herbivore track = 112 ± 21 s). Trials to the same focal individual were separated by at least 1 day, and at least 2 weeks were left before conducting trials on another individual in the same group, to avoid habituation. Trials abandoned due to disturbances (e.g. alarm calls or the focal individual going out of view) were repeated with different rival group or herbivore tracks at least 2 days later (N = 7).
For all experiments, we recorded the following responses to movement calls (and close calls in experiment 1): (1) whether the focal individual looked (head raised and directed towards the loudspeaker), orientated (whole body turned to face the loudspeaker) and/or approached (after orientating, moved at least 50 cm towards the loudspeaker); (2) whether they vocalized (gave either close calls and/or movement calls); (3) the rate and proportion of time spent vigilant (head raised). These responses were collected from 14 s after the start of the playback (i.e. once the movement call period had commenced; see Recordings and Playback Tracks), and focal individuals were observed for a minimum of 25 s after the playback finished. We analysed data for a maximum of 60 s response time, as we assumed that individuals would not be responding to movement calls after this point. Chi-square tests were performed to show that there were no differences between treatments in the response time analysed: experiment 1 (χ²₁ = 0, P = 1), experiment 2 (χ²₁ = 1.45, P = 0.229) and experiment 3 (χ²₁ = 0, P = 1). For part 1 of experiment 3 (the rival group or herbivore playback), we recorded whether the individual looked, orientated and approached the loudspeaker during the 2 min playback period, to ensure individuals were responding to rival group calls as expected from Morris-Drake et al. (2019). All trials were filmed using a GoPro Hero 7 strapped to the head of the observer, who also narrated responses into a Dictaphone (Sony ICD-PX370) while standing ~3 m away from the focal individual and loudspeaker to avoid disturbances.
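For illustration, the vigilance measures defined in (3) can be computed from a simple event table of behaviour bouts. The sketch below assumes a table with 'behaviour', 'start' and 'stop' columns in seconds from playback onset; these column names are invented for the example and do not reflect the actual BORIS export format.

```python
import pandas as pd

def vigilance_measures(events, t0=14.0, window=60.0):
    """Proportion of time vigilant and vigilance rate (bouts/min) within the
    analysis window starting at t0 (the onset of the movement call period)."""
    t1 = t0 + window
    vig = events[events["behaviour"] == "vigilant"].copy()
    vig["start"] = vig["start"].clip(lower=t0, upper=t1)  # clip bouts to window
    vig["stop"] = vig["stop"].clip(lower=t0, upper=t1)
    vig = vig[vig["stop"] > vig["start"]]
    proportion = (vig["stop"] - vig["start"]).sum() / window
    rate_per_min = len(vig) / (window / 60.0)
    return proportion, rate_per_min

# Toy example: two vigilance bouts within the 60 s analysis window.
demo = pd.DataFrame({
    "behaviour": ["vigilant", "forage", "vigilant"],
    "start": [16.0, 25.0, 40.0],
    "stop": [22.0, 40.0, 50.0],
})
print(vigilance_measures(demo))  # (~0.27, 2.0)
```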
Ethical Note
All work was conducted with permission from the Limpopo Department of Economic Development, Environment and Tourism (permit number: 001-CPM403-00013), the Ethical Committee of the University of Pretoria, South Africa and the Ethical Review Group of the University of Bristol, U.K. (University Investigator Number: UIN/17/074). Only those individuals comfortable with close presence of experimenters were included in the study. To minimize anxiety, rival group playbacks were limited to a maximum of three focal individuals per group.
Statistical Analysis
We extracted data using BORIS 7.9.19 (Friard & Gamba, 2016). Video footage from GoPro recordings was used where quality was sufficient, but where recordings failed, or quality was poor (e.g. due to dense vegetation), only Dictaphone audio was used for both treatments in a pair. We used R v.4.0.3 for statistical analyses (R Core Team, 2020) and 'ggplot2' to construct figures (Wickham, 2016). McNemar tests (with continuity corrections) were used for paired responses with a binary outcome. Paired t tests were used for continuous response variables where assumptions were met (paired differences and residuals being normally distributed, checked visually with histograms and Q–Q plots). Where assumptions were violated, Wilcoxon signed-rank exact tests were performed. To compensate for an increased likelihood of type I error due to multiple testing, we used sequential Bonferroni corrections (Rice, 1989) for tests within three grouped response variables for each experiment: (1) physical response (look, orientate, approach); (2) vocal response (close call, movement call) and (3) vigilance response (proportion of time vigilant, vigilance rate).
Adjusted α levels are given within each grouping where at least one significant result is reported. The data and R code used for analysis are available in the Supplementary Material.
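The analysis itself was run in R; purely as an illustration of the same pipeline, a rough Python equivalent with made-up paired data could look like the following (the numbers are invented and carry no relation to the real dataset).

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

# Paired binary outcome (looked towards the loudspeaker: 1 = yes, 0 = no)
# for the same individuals under the control and movement-call treatments.
control  = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
movement = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 1])
table = [[np.sum((control == 1) & (movement == 1)), np.sum((control == 1) & (movement == 0))],
         [np.sum((control == 0) & (movement == 1)), np.sum((control == 0) & (movement == 0))]]
print(mcnemar(table, exact=False, correction=True).pvalue)  # continuity-corrected McNemar

# Paired continuous outcome (proportion of time vigilant in each treatment).
vig_control  = np.array([0.10, 0.05, 0.20, 0.12, 0.08, 0.15])
vig_movement = np.array([0.35, 0.22, 0.40, 0.18, 0.30, 0.28])
print(stats.ttest_rel(vig_movement, vig_control))  # paired t test (if assumptions hold)
print(stats.wilcoxon(vig_movement, vig_control))   # Wilcoxon signed-rank alternative

def sequential_bonferroni(pvals, alpha=0.05):
    """Rice's (1989) sequential Bonferroni procedure within a grouping:
    compare the smallest P to alpha/k, the next to alpha/(k-1), and so on."""
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (len(pvals) - rank):
            reject[idx] = True
        else:
            break
    return reject

print(sequential_bonferroni([0.003, 0.04, 0.20]))  # e.g. within the physical-response grouping
```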
DISCUSSION
In response to movement call playbacks compared to control playbacks, dwarf mongoose individuals were more likely to look and approach the loudspeaker and were more vigilant (experiment 1), suggesting that movement calls function as recruitment calls. Focal subordinates responded similarly to playbacks of movement calls from dominants and subordinates (experiment 2), suggesting that the dominance rank of the caller (an intrinsic factor) may not influence a decision on whether to follow another individual. The playback of a rival group caused individuals to look, orientate and approach the loudspeaker more than when played a control herbivore track, but this heightened outgroup conflict (an extrinsic factor) did not translate into a difference in response to movement calls (experiment 3). Using playback experiments allowed us to eliminate confounding factors, such as physical movement cues, and thus isolate the importance of the acoustic movement call in follower decision making.
Much observational work suggests that signals are important in coordinating group movements in a variety of taxa (Conradt & Roper, 2005; Sperber et al., 2017). Here, we have shown experimentally that a movement call alone is sufficient to elicit a movement response in a nearby group member. While foraging for prey, dwarf mongooses spend the majority of their time with their heads down (Rasa, 1989), and vegetation can be dense, meaning that purely visual cues of a lead attempt may be obscured or missed. Thus, a salient acoustic signal of recruitment is likely useful in attracting the attention of other group members and increasing the likelihood of recruiting followers so that the putative leader is not left isolated. Similar vocalizations have been observed in other species and may be important both for recruiting followers and for coordinating movement among group members (Sperber et al., 2017); distinct vocalizations may exist for these somewhat different functions. In meerkats, for example, a 'lead call' is produced by a potential leader seemingly to attract followers (Bousquet et al., 2011); this is similar in context to the dwarf mongoose movement call that we studied. Meerkats also exhibit predeparture behaviour when changing foraging patches, with several group members giving 'moving calls', possibly to ensure a foraging patch is depleted before leaving (Bousquet et al., 2011). In dwarf mongooses, any potential 'voting' process, whereby individuals contribute to a group decision, is perhaps more likely to occur when changing activities, rather than when moving during foraging (the context that we investigated): prior to leaving a sleeping burrow or returning in the evening, there is a gradual increase in the frequency of close calls before an individual first produces a movement call and moves off (B. Cobb, personal observation). In our first experiment, there was a nonsignificant tendency for individuals to produce close calls more in response to movement call playbacks than in response to close call playbacks. This might be an indication that followers are signalling to the leader their intention to follow, although individuals did not produce movement calls more in response to movement call playbacks than in response to close call playbacks. The lack of a strong vocal response might perhaps be due to the use of a static loudspeaker in our experiment, which likely represents a weaker stimulus than a natural lead event involving a physical cue too; future experimental work could use a moving loudspeaker (Gall & Manser, 2017). Interactive playbacks (King, 2015) could also help our understanding of how followers and leaders vocally interact with one another to coordinate movements; for example, whether vocal feedback from followers is required to initiate a group movement (Bousquet et al., 2011).
In experiment 2, we found no significant differences in response to dominant versus subordinate movement calls, but responses for both were similar to those in the movement call treatment of experiment 1. In principle, one explanation could be that movement calls do not convey information on individual identity or dominance status. However, previous work on dwarf mongooses has shown that individuals respond differently to sentinel calls depending on the dominance status of the caller (Kern et al., 2016). Furthermore, Sharpe et al. (2013) showed that, in response to close calls of higher-ranked versus lower-ranked individuals of similar ages, focal individuals with a food item were more vigilant, suggesting discrimination based on social rank. We therefore suggest that individuals were still responding to movement calls, but with no preference in following individuals of different dominance status. Where within-group conflict is frequent, such as in chacma baboons, dominant leadership patterns have been observed, and following a dominant and maintaining social bonds with them could ease anxiety or reduce the chance of receiving aggression (Kalbitzer et al., 2015; King et al., 2008). In dwarf mongooses, there are relatively low levels of within-group conflict, perhaps in part because aggressors receive less grooming at the evening sleeping burrow (Morris-Drake, Kern et al., 2021). Rather than dominance status per se, other factors such as nutritional requirements may be more important (Sueur et al., 2013). If movement calls are a form of honest signal, in that they are often produced by individuals with the highest needs (Conradt et al., 2009; Rands et al., 2003), then other group members could respond to them regardless of the relative social rank of the caller due to inclusive fitness benefits (Hamilton, 1964). As playbacks were conducted while foraging, the experiments could mimic a situation whereby the caller is motivated to move to another foraging patch due to the current one being depleted. If the receiver's foraging success was low at the time, it could also be in their best interest to respond to movement calls, in anticipation of a richer foraging patch. Alternatively, other individual attributes regardless of status could be important. For example, individuals could be more likely to respond to those groupmates to whom they are more strongly bonded, as previous work in dwarf mongooses has demonstrated for snake mob calls (Kern & Radford, 2016).

For our final experiment, which entailed an initial playback of either a rival group track or control herbivore track, we found a stronger response towards the former, in line with previous work (Morris-Drake et al., 2019). But we found no difference in response towards a subsequent dominant movement call, in contrast to our prediction of a heightened response. One explanation is that there could be no increase in response towards a movement call after simulated rival group presence due to heightened anxiety and alertness for rivals; rather than being more likely to respond to a movement call, the immediate threat of a rival group demands more attention from a given individual and thus movement calls might not elicit a different response, or might even elicit a weaker response. It would be interesting to conduct similar experiments during the breeding season, in which we might expect a stronger response to rival group calls. In pied babblers, Turdoides bicolor, for example, groups respond to rival group calls more strongly in the breeding season, likely due to increased
food availability and having more energy to invest (Golabek et al., 2012). However, the lack of difference between treatments in our experiment could also be due to methodological reasons. In contrast to experiments 1 and 2, movement call playback in our control treatment elicited a weaker response. This could be due to the use of egg prior to playback to get focal individuals into position: it is possible individuals were less likely to respond to a movement call in both treatments if they anticipated more food in the area. The presence of a rival group would clearly demand more immediate responses from individuals despite the presence of food, which we found, but responses to a subsequent movement call may have been subdued. We also found no difference in vigilance levels during the movement call playback, despite previous work showing increased vigilance following rival group playback (Morris-Drake et al., 2019). As we gave egg to a single individual, rather than to the whole group as in Morris-Drake et al. (2019), the incentive for food may have been larger in our study and affected behaviour more. Conflict has previously been shown to affect movement decisions across taxa, with groups or individuals either staying in an area to defend their territory, or moving elsewhere to avoid any further costly contests (Christensen et al., 2016; Descovich et al., 2012; Radford & Fawcett, 2014; Yi et al., 2020). As the costs and opportunities of contests differ between group members, conflict is likely to affect leaders and followers differently (Johnstone et al., 2020). Further work should look to use these conflicts of interest to investigate variation in responses to movement signals, and communication more generally, while under threat.
Our current work has focused on movement decisions, but recruitment signals are widespread in the animal kingdom and occur in a variety of contexts. In dwarf mongooses alone, three different recruitment signals exist: in addition to the movement call investigated here, there is a lost call and a snake mob call (Kern & Radford, 2016; Rubow et al., 2017). Different calls likely exist because different responses are required from the receivers in each context. Across species, there are a variety of other contexts in which recruitment signals may be produced, such as attracting groupmates to foraging patches (Hauser et al., 1993; Radford & Ridley, 2006). Similar or different intrinsic and extrinsic factors could affect how individuals respond to different recruitment signals. As we learn more about recruitment signals and follower responses, comparative studies will allow us to investigate this variety in more detail.
Figure 1. Spectrogram of close call control track (top) and movement call track (bottom). Blue indicates low-amplitude noise; red indicates higher-amplitude noise. Taken and adapted from Audacity 2.3.3.
Figure 2. Number of individuals that (a) looked, (b) approached and (c) orientated towards the loudspeaker, and that gave (d) movement calls and (e) close calls in response to playback of close calls and movement calls. White bars indicate no response, grey bars show a positive response. (f) Proportion of time spent vigilant and (g) vigilance rate in response to playback of close calls and movement calls. Box plots show medians and quartiles, whiskers show upper and lower quartiles (± 1.5 times the interquartile range). Dotted lines link data points from the same individuals in the two treatments (circles). *P < 0.05; ***P < 0.001. N = 18 individuals receiving paired trials.
Figure 3. Number of individuals that (a) approached and (b) gave close calls, and (c) the vigilance rate of individuals in response to playback of dominant and subordinate movement calls. For (a) and (b), white bars indicate no response, grey bars show a positive response. For (c), box plots show medians and quartiles, whiskers show upper and lower quartiles (± 1.5 times the interquartile range) and dotted lines link data points from the same individuals in the two treatments (circles). N = 18 individuals receiving paired trials.
Figure 4. Number of individuals that (a) looked, (b) orientated and (c) approached towards the loudspeaker in response to playback of rival group or herbivore sounds. White bars indicate no response, grey bars show a positive response. *P < 0.05. N = 16 individuals receiving paired trials. | 8,516.2 | 2022-08-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Towards Convergence of IoT and Blockchain for Secure Supply Chain Transaction
Supply chain management (SCM) is essential for a company’s faster, efficient, and effective product life cycle. However, the current SCM systems are insufficient to provide product legitimacy, transaction privacy, and security. Therefore, this research proposes a secure SCM system for the authenticity of products based on the Internet of Things (IoT) and blockchain technology. The IoT-enabled Quick Response (QR) scanner and the blockchain-integrated distributed system allow all the SCM stakeholders to conduct secure and private transactions for their products or services. As a result, the consumer will receive an authentic and genuine product from the original producer. A lightweight asymmetric-key encryption technique, namely elliptic curve cryptography (ECC), and Hyperledger Fabric-based blockchain technology with on-chain smart contracts are applied to the distributed IoT devices to make the authentication process faster and lighter. Each SCM stakeholder is registered by the service provider and receives corresponding public and private keys, which are used for the authentication process of the participants and IoT devices. The authenticated QR scanner records all transactions on the blockchain. Consequently, there will be no human intervention in the SCM transactions. The security and scalability analysis demonstrates that the proposed system is more secure and robust than other state-of-the-art techniques.
Introduction
The internet and technology have developed so rapidly that the whole world is experiencing the fourth industrial revolution (Industry 4.0) [1] in all aspects of humankind, in which the Internet of Things (IoT) [2] plays a significant role owing to its diverse adoption. IoT is a network of interlinked physical objects (e.g., devices, machines, and appliances) equipped with sensors, software, and electronics and provided with unique identifiers. IoT sensors can exchange data over the internet without human intervention, generate information about the associated objects, examine it, and make decisions. IoT has enormous potential to deliver various exciting services across numerous domains, from industry, healthcare [3], smart homes [4], smart cities, and social media to the supply chain. IoT devices have revolutionized the supply chain management (SCM) system [5].
SCM is the management of the movement of goods through various parties such as manufacturers, distributors, retailers, and customers [3]. It helps to track the flow of products and information without added complexity. A supply chain involves a series of steps to get a product or service to the customer. These steps include moving and transforming raw materials into finished products and transporting and distributing them to the end user.
IoT devices can be attached to a product to confirm the product's authenticity and investigate its origin and quality. Moreover, IoT devices can ensure real-time tracking, traceability, and visibility of a product in the supply chain. A recent survey reveals that Australian retailers have integrated IoT devices into their supply chains. These include internet-based barcode technology, sensors and scanners, palm-held tablets/smart devices, smartphones, mobile apps, GPS-based location awareness, and internet-based security and surveillance systems [6].
There is no doubt regarding the advantages of the IoT in the supply chain. Despite the benefits, some concerns are related to the IoT-integrated supply chain. IoT devices generate a large amount of data stored on a centralized server, i.e., in a cloud, as plaintext. As a result, there is a chance that the centralized server might act dishonestly and make fallacious use of users' sensitive data. There is a severe threat to the privacy and security of user data in the centralized IoT infrastructure [7]. Moreover, most existing supply chains are not IoT-integrated, and because of human intervention [8], there is a high risk to the privacy and security of product and user data.
Besides the studies discussed in Section 2, there are other investigations in which IoT and blockchain [9] are integrated into the supply chain, but no studies focus on combining the asymmetric-key encryption technique elliptic curve cryptography (ECC), IoT, and the supply chain. Moreover, none of the earlier studies discussed in Section 2 focuses on key distribution and key agreement for authenticating IoT devices. Blockchain is a decentralized and distributed network of peers that share the same ledger of transactions without any central server.
The transaction records in the blockchain ledger are immutable, which assures authenticity, transparency, traceability, security, and visibility among supply chain entities. The immutable nature of the blockchain platform ensures the authenticity and security of SCM transaction data, but it does not ensure data privacy. Therefore, users' sensitive data need to be protected from disclosure. Due to the resource limitations (i.e., small memory, limited battery power, and insufficient processing capability) of IoT devices, conventional PC-based cryptographic solutions are not appropriate for most IoT devices [10]; a lightweight cryptographic protocol is therefore required for the system. This research converges IoT, lightweight asymmetric-key cryptography (i.e., ECC), and Hyperledger Fabric for secure and trusted supply chain transactions to mitigate the existing supply chain problems. A lightweight key agreement scheme based on ECC is introduced to ensure the authenticity of IoT devices, and Hyperledger Fabric provides faster and private supply chain transactions between participating entities. All products or services carry a quick response (QR) code from the time of their production. The proposed system scans QR codes with an IoT-enabled QR scanner, and the transaction data are stored in the blockchain automatically and securely. Every participant's (e.g., manufacturer, distributor, and retailer) QR scanner is registered through the lightweight authentication process in the blockchain network. After registration and successful mutual authentication between the IoT devices of two entities, the product information scanned by the QR scanner is stored in the blockchain. The proposed approach serves as a peer-to-peer, trusted, distributed supply chain that introduces real-time tracking and traceability of products and guarantees product information authenticity and confidentiality with an authenticated IoT device. Integration of IoT in the blockchain-based supply chain will enhance the supply chain's flexibility, traceability, transparency, real-time auditability, autonomy, and transaction privacy.
The main contributions of this paper are as follows:
• IoT and blockchain are used to reduce human intervention when recording supply chain transactions;
• An ECC-based key distribution and key agreement scheme is developed for SCM; ECC is used for the cryptographic operations and for lightweight authentication of entities;
• Hyperledger Fabric-based blockchain technology ensures transaction data privacy and security;
• A security and privacy analysis illustrates the efficiency of the proposed method.
The rest of the article is structured as follows. Related works are analyzed in Section 2. Preliminaries, System Overview, and Model Construction are delineated in Sections 3-5, respectively. Section 6 illustrates the Performance Evaluation. Finally, Section 7 concludes this article.
Related Work
This section briefly reviews previous works and also discusses their limitations and the novelty of these works.
Privacy by Design
Protecting information through the design of the technology itself is called privacy by design. This concept embeds privacy at the development and production level. It is better to employ a proactive approach to data security before breaches occur, instead of waiting until a breach happens [11,12]. This concept enables end-to-end security across the entire data lifecycle: all data are processed securely and also destroyed securely when they are no longer needed. Specifying the privacy context is necessary to defend user privacy. Recent studies [13][14][15][16][17] defined privacy terms necessary for cyberspace, such as intruders, receivers, and senders. Pfitzmann and Hansen [15][16][17] illustrate a privacy setting that specifies the relationships among these terms. Privacy by design is therefore important for information security.
IoT and Blockchain in Supply Chain
Malik [18] proposed TrustChain, a three-layered trust management framework for SCM integrated with blockchain. Tsang [19] presented a blockchain- and IoT-enabled food traceability system called BIFTS, which incorporates IoT, fuzzy logic, and blockchain for complete traceability of perishable food. Shi [20] designed and developed an IoT- and blockchain-integrated pharmaceutical supply chain management system to mitigate concerns of trust, safety, traceability, and inefficiency. Caro [21] proposed a comprehensively decentralized traceability system for agricultural food supply chain management, which incorporates different IoT sensor devices into the supply chain. Abdel-Basset [22] proposed a framework based on RFID technologies for supply chain management that automates the identification of products and traces and tracks products globally.
Cui [23] proposed a Hyperledger Fabric-based blockchain framework to trace and track every electronic chip in the supply chain. All the supply chain entities could benefit from this framework since it helps to protect the supply chain from forged devices. Cocco [24] proposed a blockchain- and IoT-based system for the supply chain management of Carasau bread to ensure the product's transparent and auditable traceability. In their suggested system, every supply chain party can check the condition of the products and their compliance with the prescriptions about hygienic-sanitary conditions along the chain. Matteo [25] presented a DL-Tags solution based on IoT and blockchain that allows privacy-preserving, decentralized, and verifiable management of commodities labeled with Smart Tags. All product consumers and stakeholders can check a product's authenticity without disclosing their identity. Their recommended solution proves the product's source and journey throughout the supply chain while preventing label replication and manipulation. Bhutta [26] proposed a supply chain management framework for agricultural food supply that ensures secure traceability, identification, and real-time tracking of transportation using IoT and blockchain. Grida (2020) [27] addresses the uncertainty of evaluating IoT-based supply chain outcomes by blending the pathogenic set with the Vlse Kriterijumska Optimizacija Kompromisno Resenje (VIKOR) and Best-Worst schemes in a judgment-making framework for this field. Yadav (2020) [28,29] proposes an IoT-based framework for managing the performance of agricultural SCM and develops an effective IoT-based system, following natural outbreaks, for advancing the coordination mechanism in agricultural supply chain management. Zhang (2020) [30] presents a thorough review of existing SCM-related studies. Table 1 summarises the state-of-the-art techniques alongside the proposed study. Most of the investigations utilized IoT and blockchain in SCM, and some of them used cryptographic technologies that are not lightweight. None of them addressed the authentication of the entities in terms of privacy and security, and only a few focused on transaction data confidentiality. These studies utilized IoT devices to track products' real-time information, such as product quality and location, without considering security and privacy issues. Some studies employed a transaction privacy module, but without a security proof. In contrast, the proposed framework addresses all the limitations of the studies mentioned earlier, and it is lighter, more secure, and faster for supply chain transactions.
Preliminaries
This section describes all the notation, shown in Table 2, and the technologies related to the system.
Asymmetric-Key Encryption
The asymmetric encryption technique is also known as public-key cryptography. This cryptographic system uses key pairs, i.e., public and private keys. The public keys are declared openly, while the private keys are kept secret by their owners. The generation of these keys relies on cryptographic algorithms built on hard mathematical problems, such as those involving large prime numbers, to obtain one-way functions [31]. There are different types of asymmetric-key cryptography, such as Diffie-Hellman, Rivest-Shamir-Adleman (RSA), elliptic curve cryptography (ECC), and ElGamal. Among these, ECC is a lightweight asymmetric-key cryptosystem for data encryption and decryption [32].
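As an illustration of the key-pair concept described above, the following minimal sketch generates an ECC public/private key pair with the widely used Python cryptography package; the curve choice and serialization format are our illustrative assumptions, not choices prescribed by this paper.

```python
# Minimal sketch of asymmetric (public/private) key generation with ECC using
# the Python "cryptography" package. The curve (SECP256R1) is illustrative.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import serialization

private_key = ec.generate_private_key(ec.SECP256R1())  # kept secret by the owner
public_key = private_key.public_key()                  # declared openly

# Serialize the public key for distribution (e.g., publishing on a ledger)
pem = public_key.public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(pem.decode())
```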
Blockchain and Smart Contract
Blockchain is an immutable distributed ledger technology in which transactions are open to every node of a peer-to-peer (P2P) network. It permits untrusted participants to interact and broadcast transactions among each other in a secure way, with no trusted third party needed. Figure 1 represents the smart contract and blockchain system. A blockchain is an ordered list in which cryptographic hashes are used to identify each block; a chain of blocks is created, where each block references the block that came before it, and every block contains a group of transactions [9]. In turn, executable code that operates on the blockchain to aid, execute, and dictate the terms of an agreement is known as a smart contract. Its goal is to execute the terms of an agreement automatically if specific requirements are fulfilled. Its capability depends entirely on the programming language used to express the contract, not on the underlying technology. It has private storage, executable code, and an account balance. This study uses Practical Byzantine Fault Tolerance (PBFT) [33] as the consensus protocol. PBFT is a way for a distributed network to reach consensus on the blockchain even if some nodes are malicious. It is used in Hyperledger in the transaction approval process to avoid malicious decisions. When a Hyperledger transaction is made, the transaction details are sent to the nodes in the network. Some nodes will approve the transaction and some will not; the majority of nodes have to approve the transaction for it to be completed. To keep the system secure, PBFT requires 3f + 1 nodes in the system, where f is the maximum number of faulty nodes that the system can tolerate. Therefore, for the group of nodes to make any decision, approval from 2f + 1 nodes is required.
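The quorum arithmetic above can be captured in a few lines. The following is a minimal illustrative sketch of the 3f + 1 / 2f + 1 rule described in the preceding paragraph; the function names are ours, not part of PBFT or Hyperledger Fabric.

```python
# Sketch of the PBFT quorum rule: with n = 3f + 1 replicas the network tolerates
# f faulty nodes, and a transaction commits once at least 2f + 1 approvals arrive.

def max_faulty(n_nodes: int) -> int:
    """Largest f such that n_nodes >= 3f + 1."""
    return (n_nodes - 1) // 3

def is_committed(n_nodes: int, approvals: int) -> bool:
    """A transaction is accepted once 2f + 1 replicas approve it."""
    f = max_faulty(n_nodes)
    return approvals >= 2 * f + 1

print(max_faulty(7))        # 2 faulty nodes tolerated out of 7
print(is_committed(7, 5))   # True: 5 >= 2*2 + 1
print(is_committed(7, 4))   # False
```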
Elliptic Curve Cryptography
Elliptic curve discrete logarithm problem (ECDLP) [34]: a 160-bit ECDLP is commonly used in cryptosystems; given Q = uP for P, Q ∈ E(F_p) and u ∈ Z_q*, an adversary A cannot compute u. Elliptic curve computational Diffie-Hellman (ECDH) problem [34]: because a 160-bit ECDLP is secure [34], an adversary A cannot compute uvP given uP, vP ∈ E(F_p), where u, v ∈ Z_q*.
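These hardness assumptions are what elliptic-curve key agreement relies on in practice. The following hedged sketch shows a generic ECDH exchange (not the paper's exact protocol Γ): each party keeps its scalar secret and publishes only the corresponding point, and both derive the same shared secret, which an eavesdropper would have to solve the ECDH problem to recover.

```python
# Illustrative ECDH exchange with the Python "cryptography" package: M keeps u
# secret and shares uP, D keeps v secret and shares vP; both compute uvP.
from cryptography.hazmat.primitives.asymmetric import ec

u_key = ec.generate_private_key(ec.SECP256R1())   # party M: secret u, shares uP
v_key = ec.generate_private_key(ec.SECP256R1())   # party D: secret v, shares vP

shared_m = u_key.exchange(ec.ECDH(), v_key.public_key())  # M derives uvP
shared_d = v_key.exchange(ec.ECDH(), u_key.public_key())  # D derives uvP
assert shared_m == shared_d
```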
System Overview
This section discusses the system model, threat model, and security goals.
System Model
This study envisages a blockchain- and IoT-based, data-driven supply chain ecosystem, which is shown in Figure 2. In this system, the registration protocol, consensus mechanism, and authentication protocol are studied in detail. The entities involved in this system are the Manufacturer (M), Distributor (D), Retailer (R), Customer (C), and Service Provider (SP). Their roles are described in Table 3. Table 3. Individual entities and their roles.
Entities and their roles:
Manufacturer (M): produces the product and sells it to the Distributor.
Distributor (D): purchases the product from M and sells it to the Retailer.
Retailer (R): buys the product from D and sells it to the Customer.
Customer (C): the end user, who purchases the product from R.
Service Provider (SP): responsible for registering M, D, and R into the system.
Formally, the proposed system in Figure 2 consists of n manufacturers, distributors, and retailers, respectively. Participants M, D, R, and SP perform their tasks by executing protocols Φ and Γ, which are used for the registration and authentication processes, respectively. These protocols ensure the privacy, security, and authenticity of the participants, as described in Section 5.
The entire system is divided into two parts: the registration process and the authentication process. In the registration process, M requests registration from SP, and SP approves the request and completes the registration of M. Similarly, D and R complete the registration process. Each of M, D, and R follows protocol Φ during registration and receives its public and private keys from SP. All interactions are handled by the smart contract and the transactions are recorded on chain 1. Any participant within the network can obtain the public key of any other participant. The registration process and protocol Φ are discussed in detail in Section 5. In the authentication process, M, D, and R authenticate each other and their IoT devices by following protocol Γ. Consider a scenario where M and D want to participate in the authentication: based on asymmetric ECC encryption, they authenticate each other. Their smart contract handles all the interactions, and the transactions are recorded on chain 2. All participants in the authentication part also authenticate the public keys from chain 1 through their smart contract. The detailed authentication process and protocol Γ are discussed in Section 5. Lastly, C can buy a product from R by scanning the QR code of the product with C's smartphone, but C does not participate in any of the above-mentioned protocols.
Threat Model
Participants in protocols Φ and Γ do not trust each other, except for SP. The others are semi-honest adversaries (A), who follow the protocol honestly but are also interested in the private data of other participants [35]. A can also be a man-in-the-middle adversary: it can dominate the public channel by intercepting, modifying, and forging messages, yet A cannot infer information from the private channel. With respect to forward secrecy, A's attack has only a negligible probability of success against participants. In particular:
• A might obtain all messages exchanged between two entities by initiating a passive attack.
• A might execute any operation by initiating an active attack.
• A might forge any message in the key agreement stage.
• A might retrieve the session key of an entity.
Security Goals
The privacy-preserving protocols Φ and Γ satisfy the following security requirements of the supply chain: SP is the only trusted entity in the entire system, and A cannot succeed with either a passive or an active attack.
Model Construction
This section describes the entire system in detail. The scheme mainly consists of two parts, i.e., registration and authentication.
System Setup
This section focuses only on the system setup. Here, SP selects an elliptic curve E(F_p), where F_p is a finite field determined by a prime p. It also selects a generator P on the curve with order q and a master (secret) key SK_SP. It publishes the public parameters PK_SP = SK_SP·P, P, p, q, and the hash functions h_i(·) (i = 1, 2, 3), where h_1, h_2 : {0, 1}* → Z_q* and h_3 : {0, 1}* → {0, 1}^n. Here, Z_q* is the multiplicative group of integers modulo q.
Registration
This section describes the registration process and protocol Φ in detail, covering the registration of M, D, and R with SP. All these participants follow protocol Φ at the time of interaction. The registration of M with SP is described below; the registration of D and R follows the same protocol. M submits its identity ID_M to SP. SP generates a nonce r_M ∈ Z_q* and works out PK_M = r_M·P, X_M = h_1(ID_M ∥ PK_M), and SK_M = r_M + SK_SP·X_M. Then, SP sends {PK_M, SK_M} to M secretly. Figure 3 shows the entire registration process of M. During the registration stage through protocol Φ, SP generates the hash of the PK of M, D, and R and encrypts them with SK_SP in order to generate a digital signature (DS). SP then concatenates the PKs of M, D, and R, the DS, and its sign SN_SP, which are publicly available, and commits the concatenated information to the blockchain by calling the smart contract. Algorithm 1 shows the working process of the smart contract for registration, where the functions gen() and reg() stand for the generation of keys and for writing data into chain 1, respectively. The procedure is described in detail below, where SP utilizes Equation (1)
Similarly, SP generates DS_D and DS_R, and then (PK_D ∥ DS_D ∥ SN_SP) and (PK_R ∥ DS_R ∥ SN_SP), respectively. The publicly available information from chain 1 is as follows:
• the public keys of the entities;
• the verifiable digital signatures of the entities;
• the sign of the service provider.
Clearly, none of this information can be used to infer any private data of other participants. Therefore, if A is a semi-honest adversary, it would not be able to infer any private information of other participants from these data. Again, if A is a dishonest outside adversary, it might try to take control of the network and infer data, but that is not possible because the interactions take place within the blockchain network. On the other hand, SP is a trusted entity. Lastly, it is important to discuss the security and privacy issues related to the public ledger of chain 1. The public view, which can also be seen by A, is view_A^Φ = (PK_M, PK_SP, DS_M, SN_SP). Now, PK_M, PK_SP, DS_M, and SN_SP raise no security concerns as they are just addresses. Thus, protocol Φ is secure in the presence of semi-honest and dishonest adversaries for Figure 3.
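For concreteness, the following is a minimal sketch of the key issuance step of protocol Φ as described above (PK_M = r_M·P, X_M = h_1(ID_M ∥ PK_M), SK_M = r_M + SK_SP·X_M); the curve, hash function, and serialization are our illustrative assumptions rather than the paper's exact parameters.

```python
# Hedged sketch of the registration key derivation in protocol Phi:
# PK_M = r_M*P,  X_M = h1(ID_M || PK_M),  SK_M = r_M + SK_SP*X_M (mod q).
# Curve (secp256k1) and SHA-256 are illustrative choices for this sketch.
import hashlib
import secrets
from ecdsa import SECP256k1

P, q = SECP256k1.generator, SECP256k1.order

def point_bytes(point) -> bytes:
    return point.x().to_bytes(32, "big") + point.y().to_bytes(32, "big")

def h1(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % q

sk_sp = secrets.randbelow(q)        # service provider master key SK_SP
PK_sp = P * sk_sp                   # published as PK_SP

def register(identity: bytes):
    """SP issues (PK_M, SK_M) for a manufacturer/distributor/retailer."""
    r = secrets.randbelow(q)                       # nonce r_M
    PK = P * r                                     # PK_M = r_M * P
    X = h1(identity, point_bytes(PK))              # X_M = h1(ID_M || PK_M)
    SK = (r + sk_sp * X) % q                       # SK_M = r_M + SK_SP * X_M
    return PK, SK

PK_m, SK_m = register(b"manufacturer-001")
# Anyone can check the issued key pair: SK_M * P == PK_M + X_M * PK_SP
X_m = h1(b"manufacturer-001", point_bytes(PK_m))
assert P * SK_m == PK_m + PK_sp * X_m
```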
Authentication
This section describes the authentication process and protocol Γ in detail, covering the authentication of M with D and of D with R. All these participants follow protocol Γ at the time of interaction. The authentication process of M with D is illustrated in this section, and the others follow the same protocol.
Verification of P K and Corresponding SP
This section describes the verification of participants' (M, D, and R) PK, whereby any participant can identify the corresponding SP for any PK. Consider a scenario where D attempts to verify the PK of M and identify its corresponding SP; Figure 4 illustrates the entire process. D retrieves M's PK_M along with DS_M and SN_SP from chain 1 and recognizes PK_SP from SN_SP. It decrypts DS_M with PK_SP and obtains h(PK_M)_SP, which was generated by SP. It then computes its own hash h(PK_M)_D, denoted H, and compares H with h(PK_M)_SP; if they match, PK_M is verified against SP. All participants use this same process, following protocol Γ, to verify the PK of other participants.
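A hedged sketch of this verification step is given below. The paper describes DS_M as the hash of PK_M encrypted with SK_SP and checked with PK_SP; in this sketch, a standard ECDSA signature over PK_M plays that role, which is our substitution for illustration rather than the paper's exact construction.

```python
# Sketch of PK verification: SP publishes PK_M together with DS_M, and any
# participant (e.g., D) checks DS_M against PK_M using PK_SP. Here DS_M is
# modelled as an ECDSA signature over PK_M (an illustrative stand-in).
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

sp_key = ec.generate_private_key(ec.SECP256R1())       # SK_SP (held by SP)
PK_sp = sp_key.public_key()                            # PK_SP (published)

def issue_ds(pk_m_bytes: bytes) -> bytes:
    """SP produces DS_M over the manufacturer's public key (stored on chain 1)."""
    return sp_key.sign(pk_m_bytes, ec.ECDSA(hashes.SHA256()))

def verify_pk(pk_m_bytes: bytes, ds_m: bytes) -> bool:
    """Any participant checks PK_M against DS_M using PK_SP."""
    try:
        PK_sp.verify(ds_m, pk_m_bytes, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

pk_m = b"example-serialized-PK_M"          # placeholder public-key bytes
assert verify_pk(pk_m, issue_ds(pk_m))
```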
Authentication between M and D
This process consists of three phases and is shown in Figure 5. D sends its IoT device ID to M using asymmetric encryption.
Blockchain Based Data Sharing (via Chain 2)
During the authentication stage through protocol Γ, all participants verify the authenticity of the other participants' PK. In the case of Figure 5, M generates the hash of MA_1 and commits it to the blockchain by calling the smart contract, along with its PK_M. On the other hand, D generates the hash of MA_2 and [[ID_iot^D]]_PK_M and commits them to the blockchain by calling the smart contract, along with its PK_D. Algorithm 2 shows the working process of the smart contract for authentication, where the functions auth() and reg() stand for authentication and for writing data into chain 2, respectively. The procedure is described in detail below:
• D generates (PK_D ∥ H_D) using Equation (3).
Again, in the case of the authentication between D and R, D generates the hash of MA_1 and commits it to the blockchain by calling the smart contract, along with its PK_D. On the other hand, R generates the hash of MA_2 and [[ID_iot^R]]_PK_D and commits them to the blockchain by calling the smart contract, along with its PK_R. The procedure is described in detail below:
• R generates (PK_R ∥ H_R) using Equation (5).
The publicly available information from chain 2 is as follows:
• the public keys of the entities;
• the hashes of the shared messages.
The exchange in Figure 5 is therefore secure against adversaries A.
Proof of Proposition 2. In protocol Γ, three entities (M, D, and R) are mainly involved in two scenarios. The actions and processes in both are the same; therefore, if one scenario is secure, the other is also secure. This section considers the scenario of Figure 5. The function is F. The view of each M is as follows. In the ideal case, A cannot infer any information from PK_M, PK_SP, PK_D, H_M, and H_D, since the PKs are addresses and hash values cannot be inverted. Considering the threats from the threat model, A has far more ability and visibility than the publicly available data, so it is also important to analyze the security against those threats. It is clear that ID_iot^M and ID_iot^D are protected by the hash values h_2(u·PK_M + u·h_1(ID_iot^M ∥ PK_M)·PK_SP) and h_2(v·PK_D + v·h_1(ID_iot^D ∥ PK_D)·PK_SP), respectively. Forging these hash values, directly or indirectly, requires SK_SP or SK_M, and SK_SP or SK_D, and these keys are private to their respective owners. Regarding forward secrecy, suppose A breaks in and obtains all the long-term secret keys of M and D, such as SK_M and SK_D; A still cannot infer past session keys, because all of them are generated based on the ECDH problem. Since u, v, and P cannot be recovered exactly, forward secrecy is preserved. For an impersonation attack, if A intends to forge any message at the time of key agreement, it requires SK_SP, SK_M, or SK_D; according to the assumptions on A, it cannot obtain any of them, so it will fail to construct the entire message and the attack will fail. Lastly, in the case of a replay attack, all individuals use fresh random numbers u and v every time, and A will not be able to solve the ECDH problem from (uP, v_old·P) or (u_old·P, vP), even if a message is replayed. Thus, protocol Γ is secure in the presence of semi-honest and dishonest adversaries for Figure 5.
Experimental Analysis
This section describes the test apparatus and analyzes the performance of the proposed scheme.
Score and Scalability Evaluation Metric
This subsection depicts the measures used to analyze the outcomes.
Evaluation Metrics
The outcomes of the suggested framework are evaluated based on execution time (E_T), average latency (AL), and average throughput (AT).
• E_T: the total amount of time (in seconds) consumed by the system to perform all transactions for a certain corpus, as shown in Equation (6), where N is the total number of transactions and T_1 and T_2 represent the time when the transaction was made and the time when the blockchain verified the transaction, respectively.
• AL: the average latency is the mean of the differences between T_2 and T_1 over a batch of transactions, as shown in Equation (7).
• AT: the average throughput is the mean number of successful transactions per second over the execution time, as shown in Equation (8).
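Since Equations (6)-(8) are not reproduced in the text, the following sketch encodes one plausible reading of these three metrics from per-transaction submission (T_1) and confirmation (T_2) timestamps; the exact formulas should be taken from the original equations.

```python
# Illustrative computation of execution time, average latency, and average
# throughput from lists of submission (t1) and confirmation (t2) times, in
# seconds. The definitions here are assumptions standing in for Eqs. (6)-(8).

def execution_time(t1, t2):
    """E_T: wall-clock time from the first submission to the last confirmation."""
    return max(t2) - min(t1)

def average_latency(t1, t2):
    """AL: mean of (T2 - T1) over the batch of transactions."""
    return sum(b - a for a, b in zip(t1, t2)) / len(t1)

def average_throughput(t1, t2):
    """AT: successful transactions per second over the execution time."""
    return len(t1) / execution_time(t1, t2)

t1 = [0.0, 0.2, 0.4, 0.6]        # submission times of four transactions
t2 = [1.1, 1.4, 1.5, 1.9]        # corresponding confirmation times
print(execution_time(t1, t2), average_latency(t1, t2), average_throughput(t1, t2))
```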
Result Evaluation
This section demonstrates the result analysis of the system and a detailed analysis of protocols Φ and Γ. The proposed system is evaluated in three ways: execution time, average latency, and average throughput. Figure 6 illustrates the performance analysis of Hyperledger Fabric and Ethereum. This study examines the variation in time consumption by altering the number of transactions in Figure 6a for two types of blockchain technology, Ethereum and Hyperledger Fabric. The x-axis exhibits the transaction counts, running from 1 to 1000, and the y-axis presents the total time consumption for various groups of transactions in seconds. The graph is represented on a linear scale. The execution time is proportional to the number of transactions. In this scenario, Ethereum barely completes 980 transactions. The analysis shows that Hyperledger Fabric consistently consumes less time than Ethereum, and the difference in execution time between the two grows larger as the number of transactions increases. In Figure 6b,c, we assessed the latency and throughput, respectively, by varying the number of transactions with Ethereum and Hyperledger Fabric. The x-axis of both figures shows the number of transactions, which varies from 1 to 1000. The y-axis of Figure 6b shows the average latency in seconds for every set of transactions, while the same axis in Figure 6c shows the average throughput in transactions per second (tps) for individual transaction sets. The performance analysis reveals that the latency of Hyperledger Fabric is consistently lower and its throughput consistently higher in comparison to Ethereum, which shows that Hyperledger Fabric is faster than Ethereum. In summary, the proposed system provides more reliable performance in Hyperledger Fabric than in Ethereum in terms of scalability. Another important feature of Hyperledger Fabric is that it is a private network, whereas Ethereum is public; therefore, transaction privacy can also be achieved with Hyperledger Fabric. Figure 7 illustrates the performance analysis of protocols Φ and Γ on Hyperledger Fabric, with execution time shown in Figure 7a. The result analysis of this study shows that the execution time on Hyperledger Fabric is quite practical. When the number of transactions is 100, protocol Φ consumes 2.71235371 s and protocol Γ consumes 3.39351912 s. When the number of transactions is 500, protocol Φ consumes 4.51649065 s and protocol Γ consumes 3.37417463 s. When the number of transactions is 1000, protocol Φ consumes 3.626718443 s and protocol Γ consumes 3.386043616 s.
We again assessed the average throughput by altering the transaction counts in Figure 7c with Hyperledger Fabric. The x-axis and y-axis show the same parameters as in Figure 6c. When the number of transactions is 100, protocol Φ achieves 0.00372358891 tps and protocol Γ achieves 0.0029467935 tps. When the number of transactions is 500, protocol Φ achieves 0.000442821685 tps and protocol Γ achieves 0.0005927375 tps. When the number of transactions is 1000, protocol Φ achieves 0.0002757313576 tps and protocol Γ achieves 0.0002953299 tps.
After the analysis of transaction time, it is important to examine the execution time. Table 4 shows the execution time analysis, focusing on each entity's time consumption. There is no previous work whose results are directly comparable with the proposed system. In the proposed system, the entities SP, M, D, and R consume 2.049688 ms, 4.534202 ms, 4.011596 ms, and 4.373648 ms, respectively. The ECC time consumption of the proposed system shows better performance, but the total execution time is somewhat higher due to the time expenditure of the blockchain. The performance of the proposed method can also be compared with methods from other domains in terms of computational cost and the number of exchanged messages; the analysis is shown in Table 5 and Figure 8. The proposed method also outperforms the methods of other domains, requiring 1260 bits of communication cost and only 3 message exchanges.
Conclusions
Integrating IoT devices in a centralized manner exacerbates the transaction data privacy and security issues of the supply chain management system. Therefore, this paper proposed a unified solution combining distributed ledger technology, i.e., Hyperledger Fabric, IoT, and elliptic curve cryptography, to protect the transaction data from privacy and security breaches. ECC ensured lightweight cryptographic operations and the authentication of IoT devices. The authenticated IoT scanner guarantees error-free supply chain transactions, enabling a trusted immutable ledger among all participants. Rigorous implementation of the proposed system on the Hyperledger Fabric network confirmed that the system works smoothly in a multi-party setup. The result and security analyses show that the proposed system is robust and secure for real-life applications.
In future research, we want to integrate self-sovereign identity (SSI) with the distributed ledger technology for faster and more reliable peer-to-peer authentication processes for all supply chain entities. The decentralized SSI module will guarantee frictionless supply chain transactions where data privacy and security can also be ensured.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,513.8 | 2022-01-03T00:00:00.000 | [
"Computer Science"
] |
Machine Learning Model Stability for Sub-Regional Classification of Barossa Valley Shiraz Wine Using A-TEEM Spectroscopy
With a view to maintaining the reputation of wine-producing regions among consumers, minimising economic losses caused by wine fraud, and achieving the purpose of data-driven terroir classification, the use of an absorbance–transmission and fluorescence excitation–emission matrix (A-TEEM) technique has shown great potential based on the molecular fingerprinting of a sample. The effects of changes in wine composition due to ageing and the stability of A-TEEM models over time had not been addressed, however, and the classification of wine blends required investigation. Thus, A-TEEM data were combined with an extreme gradient boosting discriminant analysis (XGBDA) algorithm to build classification models based on a range of Shiraz research wines (n = 217) from five Barossa Valley sub-regions over four vintages that had aged in bottle for several years. This spectral fingerprinting and machine learning approach revealed a 100% class prediction accuracy based on cross-validation (CV) model results for vintage year and 98.8% for unknown sample prediction accuracy when splitting the wine samples into training and test sets to obtain the classification models. The modelling and prediction of sub-regional production area showed a class CV prediction accuracy of 99.5% and an unknown sample prediction accuracy of 93.8% when modelling with the split dataset. Inputting a sub-set of the current A-TEEM data into the models generated previously for these Barossa sub-region wines yielded a 100% accurate prediction of vintage year for 2018–2020 wines, 92% accuracy for sub-region for 2018 wines, and 91% accuracy for sub-region using 2021 wine spectral data that were not included in the original modelling. Satisfactory results were also obtained from the modelling and prediction of blended samples for the vintages and sub-regions, which is of significance when considering the practice of wine blending.
Introduction
Understanding the value of wine requires an appreciation of the influence of terroir: the interaction of physical, biological, and cultural aspects related to provenance and distinctive traits that influence product image, style, and quality. From its creation in the 1960s to today, the term terroir has endured and has even become the focus of investigations that aim to relate terroir to the properties of grapes and wine [1][2][3][4][5][6][7]. Considering its underpinnings, terroir necessarily encompasses research disciplines ranging from microbiology, plant and soil science, and oenology to marketing, consumers, humanities, and philosophy. Aside from the influence on grape and wine composition, the complex interactions contributing to terroir lead to a degree of recognition of wine produced in a certain (especially renowned) region, known as its 'sense of place' [8]. Despite the complexity and remaining scientific need to elaborate on the influence of terroir, wine producers endow terroir with commercial value and stimulate the potential institutional nature of terroir, making it a valuable marketing tool [1,9,10]. The institutionalisation of terroir has arisen because production regions are controlled according to Protected Designation of Origin (PDO) or Geographical Indication (GI) regulations, which aim to guarantee the authenticity and quality of a wine from a delimited region [1,8]. As reflected in the sub-regionalisation of wine regions, the subdivision of production zones has become a necessary means for the development of a wine-producing area because it is related to the reputation of the region and the interests of local enterprises [11][12][13]. This can be exemplified by the Barossa Valley, a typical Shiraz-producing region with a long history in Australia, which has stood at a historic turning point in the development of wine sub-regions. Five different potential sub-regions, Northern Grounds (NG), Central Grounds (CG), Eastern Ridge (ER), Southern Grounds (SG), and Western Ridge (WR), have been delineated and are beginning to be recognised within the industry, becoming a tool to assist in the marketing of Barossa Valley wines [14,15].
In addition to the pursuit of quality, the sub-regionalisation of wine production regions is intended to be used as a means of wine marketing, thereby further increasing a wine's value from a certain production area (with its associated terroir). This can be seen in the division of land into village appellations, which led to the rise in wine prices in Champagne and Burgundy [2,9,12,16]. Wine is thus viewed as a value-added luxury product and an important contributor to the global beverage market, with the European wine sector alone generating billions of dollars in revenue each year [17,18]. In this significant global market with substantial economic benefits, cases of profiteering through wine counterfeiting are common and have long plagued wine producers and local governments [19]. According to an EUIPO report in 2016, the existence of counterfeit wine in the European Union leads to an estimated annual revenue loss of about USD 1.3 billion, equivalent to 3.3% of total sales and an employment loss of about 4800 jobs [20]. As a mainstay of the global wine industry, Australia has also been affected by wine fraud, with counterfeit wine under the famous Penfolds brand, for example, flowing freely in overseas markets [21]. Within Australia, wine fraud more likely relates to honesty around wine label information, the underlying details of which could be altered during the winemaking stage in terms of vintages, varieties, and regional blends, especially in relation to the 85% blending principle [22].
As a wine authentication method, the absorbance-transmission and fluorescence excitation-emission matrix (A-TEEM) approach is based on absorbance and fluorescence spectroscopy [23,24] using an Aqualog instrument with right-angle optical geometry for fluorescence detection [25,26]. This method can generate multidimensional spectral information in the UV-Vis range for all chromophores and fluorophores and simultaneously combines absorbance-transmittance data with an excitation-emission matrix (EEM) to provide unique molecular fingerprints of wine [27]. The total EEM data obtained with this technology involve a set of emission spectra across different wavelengths (λem), recorded within a range of excitation wavelengths (λex). This provides information on fluorescent substances in each wine sample and is effective for comparing samples with small compositional differences [27][28][29]. Ranaweera et al. [30] explored the use of A-TEEM for encapsulating the influence of biophysical and cultural factors associated with wine terroir by tracking wines through the winemaking process, and A-TEEM yielded impressive results for discriminating regions within a single GI [31]. In a study on the origin traceability and authenticity verification of Chinese wine, the combination of EEM and chemometrics was once again proven effective as a wine traceability technology [32]. In addition, compared with traditional wine analysis techniques like high-performance liquid chromatography (HPLC), specific natural isotope fractionation-nuclear magnetic resonance (SNIF-NMR), and isotope ratio mass spectrometry (IRMS) [33][34][35], acquiring A-TEEM data is relatively simple and accessible, especially for those working in winery laboratories who are already familiar with UV-Vis spectrophotometry.
In terms of spectral data analysis, principal component analysis (PCA) and parallel factor analysis (PARAFAC) are commonly used chemometric techniques. A-TEEM data can also be combined with a machine learning algorithm, such as extreme gradient boosting discriminant analysis (XGBDA), to classify wines from different vintages, varieties, and regions, and even different sub-regions in Barossa Valley. Such chemometric methods demonstrate the ability to analyse nuanced EEM data. PCA and XGBDA can find patterns in datasets and classify samples based on similarities and differences in unsupervised and supervised modes, respectively. As an auxiliary technique, PARAFAC can effectively identify the type and concentration of fluorophores underpinning a dataset classification [24,26,27,31].
Previous work has undoubtedly provided encouraging results for terroir classification at the sub-regional level, with the discrimination of close geographical regions being difficult to accomplish with other analytical methods [31]. The present study aimed to further explore the ultimate depth that this method can reach and consider scenarios encountered in the practical application of the method in the industry. The Barossa Valley GI remained the target region, with an additional vintage of 2021 added along with an assessment of stored wines used in the previous study [31], to explore the influence of bottle ageing. XGBDA was used for machine learning classification modelling, including for the prediction of unknown samples using models developed with a split dataset. The newly collected sample data for the stored wines were assessed with the prediction model established by Ranaweera, Bastian, Gilmore, Capone, and Jeffery [31] two years prior to determine how relevant that previous model was for classifying the current dataset. In addition, samples from the four vintages and five sub-regions were mixed in certain proportions to investigate the practice of wine blending, thereby providing relevance to a typical winemaking scenario.
Chemicals
High-purity water was obtained with the Milli-Q purification system (Elga Labwater, Woodridge, USA). Absolute ethanol for chromatography and analytical-grade hydrochloric acid (HCl, 37% w/v) were purchased from Rowe Scientific (Lonsdale, SA, Australia).
Wine Samples
Shiraz research wines (n = 217) produced in 2018, 2019, 2020, and 2021 from fruit collected in 20 vineyards were available from a previous project that investigated Barossa Shiraz terroir [36]. As reported before, the wines were made with 100% single-site Shiraz grapes, with fruit parcels obtained from four sites within each of the five sub-regions of Barossa Valley, South Australia, across the four vintages, as follows: Sites 1-4, Northern Grounds (NG) = 42 wines; Sites 5-8, Central Grounds (CG) = 36; Sites 9-12, Eastern Ridge (ER) = 48; Sites 13-16, Southern Grounds (SG) = 45; Sites 17-20, Western Ridge (WR) = 46. Replicate wines were available for each site (A, B, C), as shown in Table S1 of the Supporting Information. Winemaking was undertaken by WIC Winemaking Services and wines were bottled in 750 mL glass bottles with screw caps. Bottled wines were stored in a wine cellar under controlled temperature and humidity. Wines from 2018-2020 analysed in the present work (excluding Eden Valley given the focus on Barossa Valley) had aged for an additional 2 years since their previous A-TEEM analysis reported by Ranaweera, Bastian, Gilmore, Capone, and Jeffery [31] (total ageing time of 3-5 years), whereas 2021 wines had been cellared for only 2 years and were analysed for the first time.
Sample Preparation and A-TEEM Procedure
Wine samples (1 mL) obtained from freshly opened bottles were centrifuged (Eppendorf 5415D, Adelab Scientific, Thebarton, SA, Australia) at 9300× g for 10 min. The supernatant (40 µL) was obtained and diluted 1:150 with degassed and filtered (0.45 µm PTFE membrane) 50% aqueous ethanol adjusted to pH 2 with HCl, according to Ranaweera, Gilmore, Capone, Bastian, and Jeffery [23]. After dilution, samples were mixed with a benchtop vortexer (MS1 Minishaker IKA) for 60 s and sonicated for 15 min (SONICLEAN 250HD, Rowe Scientific, Lonsdale, SA, Australia) to remove air bubbles. Samples were analysed in duplicate using Hellma type 1FL (1 cm path length) macroscopic fluorescence cuvettes (Sigma-Aldrich, Castle Hill, NSW, Australia) and an Aqualog spectrophotometer (UV-800-C, HORIBA Scientific, Quark Photonics, Adelaide, SA, Australia). The settings consisted of 0.2 s integration time, excitation wavelength range 240-800 nm in 5 nm increments, emission range 242-824 nm in 4.66 nm increments, saturation mask width 10 nm, medium detector gain, and automatic spectral pre-processing including the correction of inner filter effects and Rayleigh masking. The EEMs were normalised by the measurement of a standard, sealed, high-purity water cuvette each time the instrument was used, as previously reported [26,31]. The diluted wine sample was stirred in the 1FL cuvette within the sample holder for 120 s with a stir bar before the start of the analysis. Dilution solvent blanks were recorded in the same way prior to sample analysis for auto-subtraction from each sample in the batch [37]. Absorption spectra (240-700 nm) and EEMs were recorded using Aqualog software (version 4.3, HORIBA Scientific, Quark Photonics).
Preparation of Wine Blends
Wines were prepared in a 12 mL glass vial with silicone/PTFE screw cap (Agilent Technologies, Santa Clara, CA, USA), following the mixture proportions shown in Tables S2-S4, to obtain a final volume of 10 mL of mixed wine sample (in duplicate). The approach was similar to that reported by Ranaweera et al. (2022). After thoroughly mixing vials for 60 s using a benchtop vortex, samples were prepared as above (i.e., centrifuged at 9300× g for 10 min in a 10 mL centrifuge tube; 40 µL of supernatant diluted 150-fold with dilution solvent; samples vortexed to mix and finally sonicated) and analysed to obtain A-TEEM data.
Data Fusion (Multi-Block Modelling)
The 3D EEMs and corresponding 2D absorbance datasets from A-TEEM were combined to enhance the classification and prediction accuracy [24]. The 3D EEM data were reshaped into a two-way data array (unfolded along multiway mode 1) and joined with the absorbance data. Fused data were used in statistical analyses requiring a 2D dataset, namely, PCA and extreme gradient boosting (XGBoost) modelling.
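A minimal sketch of this fusion step is shown below, using NumPy and randomly generated stand-in arrays (the array sizes are illustrative, not the instrument's actual grid): each sample's EEM is unfolded into a single row and concatenated with its absorbance spectrum.

```python
# Sketch of multi-block data fusion: unfold each sample's 3D EEM
# (excitation x emission) into one row and append the 1D absorbance spectrum,
# producing a single 2D matrix for PCA/XGBoost. Data here are stand-ins.
import numpy as np

n_samples, n_ex, n_em, n_abs = 434, 113, 125, 461
eem = np.random.rand(n_samples, n_ex, n_em)      # stand-in EEM data
absorbance = np.random.rand(n_samples, n_abs)    # stand-in absorbance data

eem_unfolded = eem.reshape(n_samples, n_ex * n_em)   # unfold along mode 1
fused = np.hstack([eem_unfolded, absorbance])        # multi-block (fused) matrix
print(fused.shape)                                   # (434, n_ex*n_em + n_abs)
```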
Unsupervised Data Analysis
PCA and PARAFAC were variously applied to analyse the different datasets collected using the A-TEEM method. For PCA, fused data were auto-scale pre-processed, with five principal components selected to classify the five different sub-regions for each of the four vintages. PARAFAC was used to decompose the 3D EEM data of the wine samples into the most dominant fluorophores. For pre-processing, normalisation of spectra to 1 (default) and EEM filtering were applied, with ±16 nm and ±32 nm for the first-order and second-order Rayleigh filters, respectively [38]. Non-negativity constraints were imposed in all three modes (intensity, emission, and excitation wavelengths) of the EEM data, and components were selected based on split-half analysis results [38].
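The following sketch approximates these two unsupervised analyses with open-source tools (scikit-learn and TensorLy) on random stand-in data; it omits the Rayleigh filtering and normalisation steps described above and is not the commercial software workflow used in the study.

```python
# Illustrative auto-scaled PCA on fused 2D data and a four-component
# non-negative PARAFAC on the 3D EEM tensor. All arrays are random stand-ins.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(0)
eem = rng.random((434, 113, 125))     # stand-in EEM tensor (samples x ex x em)
fused = rng.random((434, 2000))       # stand-in fused (EEM-unfolded + absorbance) matrix

# Auto-scaled PCA with five principal components on the fused data
scores = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(fused))

# Non-negative PARAFAC: factors[0] = sample scores; factors[1]/factors[2] =
# excitation/emission loadings of the four tentative fluorophore components.
weights, factors = non_negative_parafac(eem, rank=4, n_iter_max=200)
print(scores.shape, [f.shape for f in factors])
```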
Data Analysis with Machine Learning
XGBDA was applied as a classification machine learning algorithm to build the wine authentication models with the fused dataset according to vintage, sub-region, and the specified blends. XGBDA was applied with partial least squares (PLS) compression, using a maximum of 10 latent variables (LVs) for vintage and vintage blending classification and 20 LVs for sub-region and sub-region blending classification (blends as specified in Table S2 of the Supporting Information). The models developed for vintage and sub-region were applied to the blends specified in Tables S3 and S4, respectively. The number of LVs was selected according to the cross-validation (CV) result accuracy when comparing 10-45 LVs.
Pre-processing was undertaken with mean centring, autoscaling, and generalised least squares weighting (GLSW) with the declutter threshold at 0.02 to calibrate and cross-validate (Venetian blinds procedure, k = 10). The xgboost algorithm and gbtree booster of XGBDA had eta = 0.3, max_depth = 1, and num_round = 200. Model testing for both vintage and sub-region was taken further by splitting the data into about 80% used for calibration (n = 354) and about 20% used for validation (n = 80) (keeping the replicates together), using the same XGBDA approach as just described. Further validation was obtained by loading a random subset of the multi-block sample data obtained in the present work (for vintage based on 2018-2020 and for sub-region using 2018 and 2021 as examples) into the previously established model (based on the combination of vintage and sub-region) [31] to test prediction accuracy with newly recorded data for the wines that had aged for a further 2 years and for 2021 wines that were not used before in the classification modelling.
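As an illustration of this supervised workflow (PLS compression followed by gradient-boosted trees, with an approximately 80/20 split), the sketch below uses scikit-learn and the open-source xgboost package on stand-in data; the GLSW step and the exact XGBDA routine are not reproduced, and the hyperparameters simply mirror those quoted in the text.

```python
# Sketch of PLS compression (20 latent variables) followed by an XGBoost
# classifier, with an 80/20 calibration/validation split. Data are stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
fused = rng.random((434, 2000))          # stand-in fused spectral matrix
y = rng.integers(0, 5, size=434)         # stand-in sub-region labels (5 classes)

X_tr, X_te, y_tr, y_te = train_test_split(
    fused, y, test_size=0.2, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)
# PLS compression: fit against one-hot class targets, keep the X-scores
pls = PLSRegression(n_components=20).fit(scaler.transform(X_tr), np.eye(5)[y_tr])
T_tr = pls.transform(scaler.transform(X_tr))     # latent-variable scores
T_te = pls.transform(scaler.transform(X_te))

clf = XGBClassifier(learning_rate=0.3, max_depth=1, n_estimators=200)
clf.fit(T_tr, y_tr)
print("held-out accuracy:", clf.score(T_te, y_te))
```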
According to the most probable prediction rule, which assigns samples to the class with the highest probability overall, the validity of the model's prediction results was evaluated by the confusion matrix score probability. The scoring probabilities included true positive (TP), false positive (FP), true negative (TN), and false negative (FN). The magnitude of the probability was expressed as a number from 0 to 1 and as a percentage.
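A small sketch of this confusion-matrix bookkeeping is shown below using scikit-learn; the labels and predictions are illustrative stand-ins, not results from the study.

```python
# Per-class TP/FP/FN/TN counts derived from a multi-class confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array(["NG", "CG", "ER", "SG", "WR", "NG", "CG", "ER"])
y_pred = np.array(["NG", "CG", "ER", "SG", "WR", "NG", "ER", "ER"])
labels = ["NG", "CG", "ER", "SG", "WR"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
for i, cls in enumerate(labels):
    tp = cm[i, i]
    fp = cm[:, i].sum() - tp
    fn = cm[i, :].sum() - tp
    tn = cm.sum() - tp - fp - fn
    print(cls, dict(TP=int(tp), FP=int(fp), FN=int(fn), TN=int(tn)))
```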
Molecular Fingerprints (EEMs)
Figure 1 shows examples of molecular fingerprints for experimental Shiraz wines from the Barossa Valley GI, indicating the variance between the 2018 and 2021 vintages for the five sub-regions (NG, CG, ER, SG, WR). The vintage difference can primarily be seen through the gross differences in the EEM fingerprints. Each panel in the first row from vintage 2018 (Figure 1a-e) had only one intense peak at around λex/λem 270/310 nm, with panels in the second row from vintage 2021 (Figure 1f-j) having two intense peaks with λex/λem at around 270/310 nm and 250/370 nm. Comparing the fingerprints of vintage 2021 wines, Northern Grounds (Figure 1f) and Eastern Ridge (Figure 1h) tended to have a similar fingerprint, as did Central Grounds (Figure 1g) and Western Ridge (Figure 1j), whereas Southern Grounds (Figure 1i) had a more unique fingerprint. In contrast, spectra for the 2018 wines (5 years old) were more similar across the sub-regions. The differences between vintages 2018 and 2021 and the sub-region differences within vintage 2021 could be explained on the basis of climatic data such as growing season rainfall, mean January temperature, and growing degree days, as well as other terroir influences [31,39].
The EEMs were generally representative of the spectral fingerprints obtained with the A-TEEM approach, and, as seen with the sub-regions for vintage 2018, differences may not have been easily discernible by simple visual inspection. This demonstrates the importance of using chemometrics with these datasets to identify subtle patterns in the EEM fingerprints, as elaborated in subsequent sections.
PARAFAC Decomposition of EEMs
PARAFAC was undertaken to tentatively identify the main fluorophores that characterised the samples. These results could then be used to provide some understanding of the possible compositional drivers underpinning sub-regional classification. A four-component model comprising all sub-regions and vintage years was selected based on a split-half analysis result of 97%. PARAFAC modelling (Figure 2) yielded a component 1 peak at 270/305 nm (λex/λem), a component 2 peak at 265/345 nm, a component 3 peak at 255/375 nm, and a component 4 peak at 315/375 nm. Components were tentatively assigned to respective compound classes: 1. flavan-3-ols [28,40]; 2. anthocyanins, aromatic amino acids, and hydroxybenzoic acids [28,40,41]; 3. phenolic acids/aldehydes and flavonols [28]; 4. caffeic and p-coumaric acids [40] and stilbenes like resveratrol and trans-piceid [41,42], or perhaps grape seed oils from maceration during fermentation (e.g., tocopherols and tetraenes) [43]. The PARAFAC score plots (Figure 3a-d) revealed how the vintages differed based on the tentatively assigned fluorophores. The PARAFAC components related to ordinary red wine constituents that are influenced by grape growing conditions and terroir more broadly, which could be variable among the sub-regions used in this study. Components 2 (anthocyanins, amino and hydroxybenzoic acids) and 4 (hydroxycinnamates, stilbenes) showed less fluctuation according to vintage than components 1 (flavan-3-ols) and 3 (phenolic acids, flavonols). The seasonal climate could be a particular factor contributing to the variability in some vintage years more than others, with differences in growing season rainfall and temperature for 2018-2021 according to the vintage reports from Barossa Australia [44]. As noted earlier, the gross differences observed in the EEMs presented in Figure 1 that underpin the PARAFAC results could also be related to this observation. Relative wine age could also exert some influence based on the evolution of phenolic profiles over time. Depending on vintage year, greater differences among the sub-regions were also evident, especially for components 1 and 3 (Figure 3a,c).
PCA Decomposition of A-TEEM Data
Dimensionality reduction with PCA was applied to multi-block A-TEEM data (i.e., combined absorbance and EEM datasets) to explore the separation of Barossa sub-regions (Figure 4).The first three principal components accounted for a total variance explained of 35.1%, 31.4%,28.2%, and 30.2% for vintages from 2018 to 2021, respectively.Vintage 2018 in particular showed an impressive result, with each sub-region tightly grouped and almost completely separated from each other (Figure 4a).This was reminiscent of the results obtained for this vintage in the previous work [31].However, apart from NG (red diamonds) in 2019 (Figure 4b), and 2020 to a lesser extent (Figure 4c), the other vintages did not exhibit significant differentiation of sub-regions according to PCA.SG (light blue inverted triangles) and WR (lilac stars) were similar to each other in vintages 2019-2021, whereas NG/CG/SG showed an obvious degree of separation in the four vintages, although less so in 2021 (Figure 4d), especially for NG and CG (green squares).The separation of WR and ER (dark blue triangles) from the other three sub-regions largely depended on the vintage, and WR and ER were themselves separated to a degree in vintages 2018 and 2021 (Figure 4a,d).These results were consistent with the study published by Ranaweera, Bastian, Gilmore, Capone, and Jeffery [31]: besides climate factors across vintages (and regions) as mentioned in previous sections, which could lead to more or less differentiation, localised factors of terroir such as soil properties and topography across sub-regions could play a role [15,39] via their influences on grape (and, thus, wine) composition.
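A rough outline of the multi-block PCA step is sketched below with scikit-learn: each block (absorbance and unfolded EEM) is autoscaled and then concatenated before decomposition. The block sizes, the simple concatenation without block weighting, and the data are assumptions, not the exact procedure of the commercial chemometrics software used in the study.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_samples = 110                                   # e.g. one vintage's sample spectra (placeholder)
absorbance = rng.random((n_samples, 300))         # absorbance block: wavelengths as columns
eem_unfolded = rng.random((n_samples, 40 * 100))  # EEM block unfolded to one row per sample

# Autoscale each block, then concatenate into a single multi-block matrix
X = np.hstack([StandardScaler().fit_transform(absorbance),
               StandardScaler().fit_transform(eem_unfolded)])

pca = PCA(n_components=3)
scores = pca.fit_transform(X)                     # PC scores used for the score plots
print(pca.explained_variance_ratio_.sum())        # total variance captured by PC1-PC3
```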
Ageing could be another factor that correlated with the separation of sub-regions in the PCA plot (greater separation for older wines), although vintage differences might have a more pronounced influence, considering that the separation of sub-regions for 2018-2020 wines was more or less maintained upon re-analysis of the wines after several years of bottle ageing.This is an important result from an implementation perspective-despite the compositional changes associated with red wines as they age, which can impart changes in wine EEM fingerprints and absorbance values, the original differentiation among the sub-regions according to PCA was still evident several years later.Even so, PCA with multi-block spectral data for these aged wines from different vintages was not sufficient to consistently separate the sub-regions, although k-means clustering was able to resolve vintage year in the previous work [31].Improvement in sub-regional classification across multiple vintages was necessary, with supervised methods and particularly machine learning algorithms providing a possible solution, as evidenced previously [24,31].
Vintage and Sub-Region Validation
XGBDA, an effective machine learning classification algorithm [23,30], was applied in an attempt to improve classification across the vintages and sub-regions. The XGBDA approach with CV afforded an excellent classification result (Figure 5 and Table S5 of the Supporting Information), with 100% accuracy for the vintage model (Figure 5a) and 99.5% accuracy (2 misclassified out of 434 sample spectra, Figure 5b) for the sub-region model. These exemplary results were consistent with the work of Ranaweera, Bastian, Gilmore, Capone, and Jeffery [31] and remarkable, considering the proximity of the sub-regions. A further step of splitting the datasets into about 80% for calibration (n = 354) and about 20% for validation (n = 80) led to slightly lower accuracy than the CV model results shown in Figure 5: in this case, 1 out of 80 samples was misclassified for vintage, giving a 98.8% classification accuracy (Figure S1a-d of the Supporting Information), and 5 out of 80 samples were misclassified at a sub-region level, giving a 93.8% classification accuracy (Figure S2a-e). Ideally, greater sample numbers would be used for splitting the datasets, but this outcome still highlights the robustness of the A-TEEM approach for classifying wine samples, with the accuracy easily being equal to other authentication techniques. Again, though, it is worth remarking that this study considered wines from sub-regions within a GI (as little as several km apart), thus highlighting the ability to authenticate at a fine scale and the potential of the approach for helping to objectively define unique terroirs within regions.
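XGBDA as implemented in commercial chemometrics software is not publicly available, but a rough open-source analogue of the workflow described above (cross-validated classification followed by an approximately 80/20 calibration/validation split) can be sketched with the xgboost classifier. The feature matrix, labels, and hyperparameters below are placeholders, not the study's data or model settings.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import accuracy_score

# Placeholder multi-block A-TEEM features and sub-region labels (0-4 for the five sub-regions)
rng = np.random.default_rng(2)
X = rng.random((434, 4300))
y = rng.integers(0, 5, size=434)

clf = xgb.XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.1,
                        objective="multi:softprob")

# Cross-validated class assignments, analogous to the CV confusion results in Figure 5
y_cv = cross_val_predict(clf, X, y, cv=10)
print("CV accuracy:", accuracy_score(y, y_cv))

# Approximate 80% calibration / 20% validation split, as described for the external check
X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf.fit(X_cal, y_cal)
print("Validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))
```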
Furthermore, in view of the possible effects of bottle ageing on the A-TEEM data mentioned earlier, and to obtain a deeper understanding of the impact of bottle ageing on this authentication method, the multi-block A-TEEM data obtained in the present study were tested against the previous model developed by Ranaweera, Bastian, Gilmore, Capone, and Jeffery [31] using a subset of the same wine samples from 2018-2020, as well as wines from vintage 2021, which were not analysed previously. According to Figure S3a of the Supporting Information, the prediction of vintage for 2018, 2019, and 2020 wines with the previous model still showed 100% classification accuracy. For the prediction of sub-region, highlighted for 2018 and 2021 wines (excluding Eden Valley, which was not analysed in the present work), the model not only achieved prediction of the 2018 vintage samples with an accuracy of 92% (Figure S3b, four misclassified samples) but also achieved a prediction accuracy of 91.2% for the samples from vintage 2021 (Figure S3c, three misclassified samples), which had not been involved in the development of the previous model. This was a highly encouraging result as it not only meant that the method was largely unaffected by bottle ageing (i.e., wines measured several years later could still be accurately predicted with the originally developed models) but also demonstrated the sub-region predictive ability using data from an 'unknown' vintage (i.e., wines from 2021).
Blended Wine Validation
Considering that commercial wines typically consist of blends, it was worthwhile examining the performance of the A-TEEM and XGBDA classification method upon wine blending. Previously, wines were tracked through the winemaking process and XGBoost regression (XGBR) modelling highlighted the sensitivity of the approach to blending one varietal wine with as little as 1% of another [30]. As an extension, selected samples in the present study were blended across different vintages and separately for different sub-regions. The blending ratio for sub-regions was set as 50:50 and also 15:85, according to the Australian wine industry 85% principle [22], as well as 50:50, 10:90, and 5:95 for vintage (Tables S2-S4 of the Supporting Information).
As the first step, multi-block A-TEEM data of 50:50 mixed samples (Table S2) were selected to establish the model and explore the predictive power of XGBDA with CV. Figure 6a shows the class CV predicted results of 12 blended samples (analysed in duplicate) from combinations of the four vintages, with only one wine misclassified and achieving 95.8% overall classification accuracy (Table S5); the combination of 2018 + 2021 was classified as 2019 + 2020.Figure 6b shows the class CV predicted results of 20 samples (analysed in duplicate) from blending combinations of the five sub-regions, showing that three samples were misclassified (92.5% accuracy, Table S5): one from CG + WR was classified as CG + SG, one from NG + ER was classified as NG + SG, and one from CG + ER was classified as NG + WR.Despite the limited selection of data, the results were consistently quite outstanding for both vintage and sub-region blending.
A further step was carried out by using the XGBDA models established in Section 3.4.1 of the Results and Discussion with multi-block A-TEEM data from selected samples (prepared in duplicate for a single analysis of each) using a stricter blending ratio of 10:90 and 5:95 for vintages (Table S3 of the Supporting Information), along with 15:85 and 50:50 for sub-regions (Table S4), to predict the probable class of each.Table 1 shows the prediction probability based on the vintage and sub-region blending.For wine samples comprising 95% vintage 2018 and 5% vintage 2021 wines (S1 and S2 for vintage), applying the model developed for vintage for all wines gave an average probability of 97.5% that the wine was from the 2018 and a 1.2% probability it was from 2021.For blends containing 90% 2018 wine and 10% 2021 wine (S3 and S4 for vintage), the model predicted that the samples were from 2018 and 2021 with averages of 89.95% and 8.75% probability, respectively.One sample containing 85% SG and 15% WR (S1 for sub-region) was predicted to consist of SG and WR wine with 89% and 6% probability, respectively, but the prediction of S2 for sub-region was extremely low, with only 6.9% and 1.1% probability of the sample coming from SG and WR, respectively.This was an anomalous result without an apparent explanation.Blends of 50% SG and 50% WR (S3 and S4 for sub-region, Table 1) were predicted to come from SG and WR with averages of 49.8% and 33.3% probability, respectively.Although these results did not automatically imply that class prediction should equal the percentage in the blend, the modelling did reasonably well (errant result aside) in reflecting the main vintage or sub-region component where one predominates, and at least indicates that any blend was not mistaken as a single vintage or individual sub-region.
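The probability readout used for the blended samples can be outlined in the same spirit: instead of taking the hard predicted class, the per-class probabilities are inspected so that a blend is not forced into a single origin. Again, the training data, blend spectra, and model settings below are placeholders rather than the wines or models of the study.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(3)
X_train = rng.random((434, 4300))          # placeholder training spectra (unblended wines)
y_train = rng.integers(0, 5, size=434)     # sub-region labels 0-4
X_blend = rng.random((4, 4300))            # placeholder spectra for e.g. 85:15 and 50:50 blends

clf = xgb.XGBClassifier(n_estimators=200, max_depth=3, objective="multi:softprob")
clf.fit(X_train, y_train)

# One probability per candidate sub-region for each blended sample (cf. Table 1)
for probs in clf.predict_proba(X_blend):
    print({f"sub_region_{k}": round(float(p), 3) for k, p in enumerate(probs)})
```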
Table 1. Class-predicted probability from XGBDA modelling of multi-block A-TEEM data for blended samples based on vintage (2018 and 2021, prepared as outlined in Table S3) and sub-region (Southern Grounds and Western Ridge, prepared as outlined in Table S4).

The results presented in Table 1 nicely supplement the work of Ranaweera, Gilmore, Bastian, Capone, and Jeffery [30], who reported the use of XGBR modelling of the percentage of grape variety in a blend, with the present study addressing some gaps related to vintage and sub-region blending using A-TEEM data and XGBDA. Importantly, the results in Table 1 were obtained by loading blended wine sample data into the models established using the entire wine datasets for vintage and sub-region that did not involve any blends, thus providing further insight into the potential for the application of this wine authentication methodology in an industry-relevant context.
Conclusions
Overall, reliable results have been obtained for the classification of Shiraz research wines arising from adjacent areas within the Barossa Valley GI.The capability of A-TEEM and XGBDA has again shown its worth, with the important contribution of identifying subtle differences among wine samples after a period of bottle ageing but still being able to accurately classify such wines.This adds further weight to the utility of this approach regarding the stability of models over time, which is critical from a classification perspective as wines age.Notably, there was discernment of closely located vineyards even after wine ageing, thus highlighting the conservation of terroir influences on the spectral fingerprints of the wines.The ability of the technique to differentiate wine blends was also an important development in this work, considering that the blending of wine is a widespread and often necessary practice, but one that can be open to manipulation (through the falsification of region or variety, for example).Future improvements on the present outcomes can be envisaged by increasing the size of the dataset or, indeed, creating models for specific blends (potentially allowing for the detection of an unauthorised variety of a PDO wine).In addition, research could be extended to the analysis of commercial wines and the development of an authentication database over numerous vintages based on A-TEEM with machine learning classification.
Figure 2. Loadings from parallel factor analysis (PARAFAC) decomposition modelling of 3D EEM data for Shiraz wine samples from all sub-regions and vintages showing (a) excitation wavelengths (nm) and (b) emission wavelengths (nm) of components 1-4.
Figure 6. Class CV predicted from XGBDA modelling of multi-block A-TEEM data for different blends (50:50, prepared as outlined in Table S2) according to (a) vintage and (b) sub-region. Samples were blended in duplicate, and each was analysed in duplicate. NG, Northern Grounds; CG, Central Grounds; ER, Eastern Ridge; SG, Southern Grounds; WR, Western Ridge. Samples outlined in red were misclassified.
Serotonergic modulation of face-emotion recognition
Facial expressions of basic emotions have been widely used to investigate the neural substrates of emotion processing, but little is known about the exact meaning of subjective changes provoked by perceiving facial expressions. Our assumption was that fearful faces would be related to the processing of potential threats, whereas angry faces would be related to the processing of proximal threats. Experimental studies have suggested that serotonin modulates the brain processes underlying defensive responses to environmental threats, facilitating risk assessment behavior elicited by potential threats and inhibiting fight or flight responses to proximal threats. In order to test these predictions about the relationship between fearful and angry faces and defensive behaviors, we carried out a review of the literature about the effects of pharmacological probes that affect 5-HT-mediated neurotransmission on the perception of emotional faces. The hypothesis that angry faces would be processed as a proximal threat and that, as a consequence, their recognition would be impaired by an increase in 5-HT function was not supported by the results reviewed. In contrast, most of the studies that evaluated the behavioral effects of serotonin challenges showed that increased 5-HT neurotransmission facilitates the recognition of fearful faces, whereas its decrease impairs the same performance. These results agree with the hypothesis that fearful faces are processed as potential threats and that 5-HT enhances this brain processing.
Introduction
Anxiety disorders have been related to abnormalities in brain processes underlying defensive responses to environmental threats (1,2). The neurotransmitter serotonin (5-hydroxytryptamine, 5-HT) seems to play a significant role in modulating defensive behavior. For instance, it has been suggested that 5-HT would facilitate risk-assessment behavior elicited by a potential threat, which has been related to anxiety, by acting on the amygdala. In turn, 5-HT would inhibit fight or flight reactions to a proximal threat related to panic by acting on the midbrain periaqueductal gray matter (3,4). Later, McNaughton and Corr (5) argued that two defense systems - an approach defense system that deals with potential threats and approach-avoidance conflict and is related to anxiety, and an avoidance defense system that commands withdrawal from a distal threat (related to fear) and proximal danger (related to panic) - are longitudinally distributed along the brain, although the former is largely represented in the forebrain and the latter in the hindbrain. In this view, the hypothesis cited above on the dual role of 5-HT in defense was preserved, with the approach defense system being stimulated and the avoidance defense system being inhibited by 5-HT.
The ability to identify facial expressions of emotion is important for social functioning and adaptation, and its study may contribute to the knowledge of the neurobiology of emotions. Neuroimaging studies have provided substantial data about the neural substrate of emotional face recognition. The amygdala has been consistently activated by the perception of fearful faces (6)(7)(8)(9), although it can also be activated by other kinds of facial expressions, such as disgust (7,10), sadness (11,12), happiness (9,11), and anger (8,9,13). However, in direct contrast, activations of the amygdala in response to fearful faces were greater than in response to angry faces (8). The orbitofrontal cortex (Brodmann area 47) also seems to be a common brain region involved in the processing of different emotional faces (10).
To understand the functional meaning of the brain activation patterns depicted by neuroimaging evidence, it would be necessary to know the adaptive meaning of each facial expression, which is by no means an easy task.Considering the processing of anxiety and related emotions, the facial expressions of fear and anger are likely to be of particular interest.It has been proposed that faces expressing fear could be considered to be an ambiguous stimulus (14), warning other people about a potential threat in the environment.On the other hand, an angry face directed toward a particular individual may represent a proximal threat.If these assumptions are true, the above mentioned hypothesis of a dual role of 5-HT in defense allows the following predictions: 1) increased 5-HT function should facilitate the identification of fearful faces, whereas reduced 5-HT function should impair the same process; 2) increased 5-HT should impair, whereas lack of 5-HT should enhance, the identification of angry faces.
In order to test these predictions, we carried out a review of the literature about the effects of pharmacological probes that affect 5-HT-mediated neurotransmission on the perception of fearful and angry faces.For comparison, drug effects on the perception of other emotions are also described.
Review protocol
A computer-based search of the literature indexed in MEDLINE was made using the key words: face(s), facial, expression, emotion, and serotonin, with no time limit.In the first section, we have focused our survey on studies that objectively measured the performance (accuracy and speed) of healthy volunteers in the recognition of the emotion portrayed in the facial expression, considering the effect of a serotonergic drug compared to placebo.In the second section, we have analyzed the results of neuroimaging studies, some of which include performance data that have been discussed in the preceding section.
We excluded studies that evaluated the performance of volunteers with current psychiatric disorders and studies that adopted faces to evaluate other cognitive functions, such as memory. The survey was complemented with the bibliography of the reviewed articles. For the discussion, some references outside the above criteria have also been included.
Behavioral data
The effects of serotonergic drugs on the identification of facial expressions are summarized in Table 1.
The reduction of brain serotonergic function induced by the acute intake of a mixture of essential amino acids free of tryptophan, the precursor of 5-HT synthesis, impaired the recognition of fearful facial expressions by healthy women, without any effect on other emotional faces or on male volunteers (20).The same effect occurred in s-carriers of the 5' promoter region (5-HTTLPR) of the serotonin transporter of both genders, but not in LL homozygotes (23).On the other hand, acute dietary supplementation with tryptophan increased the perception of facial expressions of happiness and fear in healthy female volunteers (15).Chronic supplementation with tryptophan (14 days) did not change the perception of fearful faces, but facilitated the perception of happy expressions and decreased the recognition of disgusted faces in female, but not in male volunteers (27).
In the same direction, acute intravenous administration of the selective serotonin reuptake inhibitor (SSRI) citalopram (10 mg) facilitated the recognition of fearful and happy faces by women (21). More recently, a similar effect regarding fearful faces has been shown in both genders after a single oral dose of citalopram (20 mg) (17). In contrast, healthy female participants submitted to subchronic treatment (7 days) with oral citalopram (20 mg/day) showed impairment of the perception of facial expressions of fear, anger and disgust compared to placebo (25). A further study by the same research group (26) confirmed the reduction of fearful face recognition after a 7-day treatment with citalopram in both genders.
Euthymic women with a previous history of major depressive episodes recognized facial expressions of fear more easily than healthy women without a history of depression. The acute intravenous administration of citalopram (10 mg) normalized the ability of the volunteers with past depressive episodes to recognize fearful faces and increased the capacity of women without previous depression to recognize the same emotion (16).
In a naturalistic study, the acute administration of 3,4-methylenedioxymethamphetamine (MDMA, "ecstasy") to volunteers of both genders increased the perception of fearful expressions. However, after 4 days of drug withdrawal, the opposite effect was observed, i.e., a decrease in the recognition of fearful faces (28).
No effect of the 5-HT 3 antagonist ondansetron on the modulation of facial emotional expressions has been found (22).
It is important to highlight that most of the reviewed studies did not report changes in subjective feelings of anxiety during the experimental procedures with serotonergic challenges.
Neuroimaging data
Pharmacological functional magnetic resonance imaging has become a very useful technique for investigating the effects of drugs on brain metabolic activity through the changes in the blood oxygen level-dependent signal.Nevertheless, only a few studies have investigated the pharmacological modulation of the hemodynamic response to facial expressions so far.Neuroimaging results are summarized in Table 2.
In an unconscious perception paradigm, where the volunteers were asked just to make a gender categorization of faces, Cools et al. (18) observed that tryptophan depletion in healthy male volunteers enhanced amygdala activation in response to fearful faces compared to happy and neutral faces as a function of self-reported threat sensitivity measured by BIS/BAS scales.These scales have been developed to test the constructs of the behavioral inhibition system and behavioral activation system proposed by Gray and McNaughton (2).
In a similar paradigm of gender categorization, a single dose (7.5 mg) of intravenous citalopram attenuated the hemodynamic response of the right amygdala and right orbitofrontal cortex to aversive (angry, disgusted and fearful) faces compared to neutral faces in male volunteers (19). Pretreatment with citalopram (20 mg) for seven days attenuated amygdala activation to fearful faces compared to happy faces in volunteers of both genders (26). In these neuroimaging studies, no drug effect was observed on the task performed during scanning or on subjective measures, except for a reduction of self-rated hostility perception and behavior, evaluated by the Buss-Durkee Hostility Inventory, which was reported after 7 days of oral citalopram (26).

Table 1. Effects of serotonergic probes on the perception of basic emotional facial expressions by healthy volunteers (columns: Reference; Method; Emotion - Happiness, Sadness, Anger, Fear, Disgust, Surprise).
More recently, van der Veen et al. (24) replicated in females the results obtained by Cools et al. (18), showing a significant correlation between threat sensitivity (BIS scale) and higher right amygdala activation by fearful faces in contrast to happy faces under tryptophan depletion.This study has also shown that the mood depression caused by tryptophan depletion in healthy females with a family history of depression was associated with impaired performance of gender categorization of negative facial expressions (fear, sadness and disgust) and with increased activation of the right amygdala.
Conciliating the data from serotonergic challenges on facial emotional expressions
Although their number is small, the articles reviewed here point to a serotonergic modulation of the identification of basic emotional facial expression by healthy volunteers.Also, the lack of effect on subjective anxiety suggests that the serotonergic modulation of facial expression processing can occur independently of changes in and/or conscious recognition of the feelings aroused by emotional faces.
The results obtained with drugs that interfere with serotonergic neurotransmission reinforce the role of 5-HT in the processing of anxiety and fear (3), since 9 of 11 studies that measured the behavioral effects of serotonergic probes found changes in the ability to identify fearful faces.In fact, in one study (22), the lack of effect could be hypothesized a priori since, according to the Deakin and Graeff's hypothesis (3) on the dual role of 5-HT in defense, the 5-HT 3 receptor is not supposed to modulate anxiety or fear processing.The processing of other emotions, such as happiness and disgust, also seems to be under the influence of the 5-HT system, although the results reported so far are less consistent.Two studies (20,27) suggested a sexual dimorphism in the 5-HT modulation of emotional processing since the effects of either tryptophan depletion or tryptophan supplementation were observed only in female volunteers.However, it is impossible to further explore this hypothesis, given that most of the studies carried out so far with serotonergic manipulation have included just females in their samples.
Although MDMA releases dopamine and noradrenaline, its main mechanism of action is by serotonergic neurotransmission, inhibiting 5-HT reuptake in the neuron membrane and stimulating the release of 5-HT stored in pre-synaptic vesicles (29).Moreover, MDMA decreases 5-HT synthesis by means of tryptophan-hydroxylase inhibition, causing 5-HT depletion during the days following its administration.Correlating with these actions, in the results reviewed, MDMA facilitated the recognition of fearful faces after acute administration, whereas four days later the perception of fearful faces was impaired (29).
The effects of manipulation of the levels of tryptophan, the precursor of serotonin synthesis, in the diet show a similar pattern, at least in healthy female volunteers (15) and in volunteers of both genders carrying the s genotype of the serotonin transporter (23).Thus, intake of tryptophan, and the consequent increase in 5-HT availability have been shown to facilitate the identification of fearful faces (15,23), whereas the decrease of 5-HT function by tryptophan depletion impairs the identification of fear expressions (20).
Acute intravenous administration of citalopram has been shown to facilitate the perception of fearful faces (21), an effect similar to that of tryptophan intake and of acute administration of ecstasy, and opposite to that of tryptophan depletion and to that seen on the fourth day after ecstasy. Taken together, these data suggest that intravenous injection of citalopram increases 5-HT availability. The clinical efficacy of the SSRIs is attributed to enhanced serotonergic neurotransmission, which in turn depends on the desensitization of 5-HT1A autoreceptors occurring nearly two weeks after repeated daily administration of these drugs (30). In fact, the beginning of treatment with SSRIs is supposed to reduce 5-HT function, a fact that may be associated with the worsening of anxiety symptoms commonly observed in clinical practice (31). In contrast, sub-chronic treatment with citalopram for 7 days has been shown to reduce the recognition of fearful faces (25). These results may be related to downstream neuroadaptive changes that occur with repeated administration of antidepressants, such as the down-regulation of specific 5-HT receptors.
Experimental data have shown that, following the administration of SSRIs, there is a greater increase in extracellular 5-HT in the raphe nuclei than in the cortex (32). Therefore, it is possible that a low acute dose of an SSRI would preferentially increase 5-HT concentration near the cell bodies of serotonergic neurons, reducing their firing rate due to the activation of somatodendritic autoreceptors (33) and, hence, decreasing 5-HT release and lowering synaptic 5-HT concentration postsynaptically. On the other hand, microdialysis studies in animals have shown increases in cortical extracellular 5-HT following acute SSRI administration (34)(35)(36). Moreover, intravenous injection of low doses of citalopram in healthy volunteers has resulted in plasma cortisol and prolactin increases (37,38), taken as an indirect measure of 5-HT function in the brain.
Reported neuroimaging results are more difficult to interpret since few published studies have evaluated the pharmacological modulation of hemodynamic responses provoked by facial expression recognition.Moreover, the differences in the features of the samples studied, in the procedure of image acquisition and analysis, and in the paradigms of psychological activation used impair the comparison of the results reported.
Although behavioral data showed that acute citalopram increased, while chronic citalopram decreased the identification of fearful faces, both the 7-day administration of citalopram (21) and the acute intravenous dose of citalopram (19) decreased amygdala activation in response to aversive faces, an opposite effect to that of tryptophan depletion (18,24).Nevertheless, the effect of tryptophan depletion occurred only in threat-sensitive volunteers and therefore these seemingly contradictory results between behavioral and neuroimaging data may be due to the interference of personality traits with emotional processing.
Assuming a direct correlation between neuronal activation measured by functional magnetic resonance imaging and the performance of emotional face recognition, we would expect increased 5-HT function to enhance the hemodynamic response of the amygdala to fearful faces.However, the reported results have shown that increasing 5-HT availability with citalopram decreased (19,21), whereas decreasing 5-HT with tryptophan depletion enhanced, amygdala activation (18,24).
In seeming contrast to the above conclusion, clinical studies have pointed to higher activation of the amygdala by emotional facial expressions in anxiety-prone healthy volunteers (39) and in patients with anxiety disorders (40) than in controls, and there is evidence from studies with depressive patients that antidepressant treatment normalizes the enhanced activation of the amygdala in response to negative faces (41,42). Even so, reported data have shown that the changes in fearful face recognition caused by 5-HT probes have not been associated with changes in subjective measures of anxiety. Hence, we do not know whether the improvement in the recognition of fearful faces is associated with higher levels of anxiety or whether its impairment is related to less subjective anxiety of the perceiving subject.
Another weakness is that there are important differences between the experimental paradigms of the neuroimaging studies reviewed above.For instance, while subchronic treatment with citalopram (21) has included volunteers of both genders, acute administration of citalopram (19) has been made only in male volunteers, and there is considerable evidence pointing to sexual dimorphism in emotional processing (43).Also, the features of the functional magnetic resonance imaging technique itself have to be taken into account when interpreting these results.In fact, the meaning of the recorded changes in the hemodynamic response of specific brain areas caused by pharmacological modulation of neuropsychological tasks is not straightforward.Increases in the blood oxygen level-dependent signal are thought to be an index of increased neuronal metabolism measured by oxygen consumption.The enhancement of neuronal metabolism caused by a pharmacological challenge can reflect both an improvement of the performance ("working better") and the need for an extra effort to achieve the same level of function ("working harder").More studies correlating neuroimaging and performance data are needed to clarify this issue.
Conclusion
The behavioral data reviewed here indicate that increased 5-HT neurotransmission facilitates the recognition of fearful faces, whereas its decrease impairs the same performance, without a significant change in subjective anxiety.These results are in agreement with the hypothesis that fearful faces are processed as potential threats and that 5-HT facilitates such brain processing.
In contrast, the hypothesis that angry faces would be processed as a proximal threat and, as a consequence, their recognition would be impaired by an increase of 5-HT function was not supported by the results reviewed since only 1 of 10 studies with serotonergic probes found a significant drug effect on the perception of angry faces.
Finally, the neuroimaging results reported thus far on the effect of 5-HT probes on the brain processing of emotional face recognition are rather inconsistent, more studies being needed in this field.
Table 1 legend: ↑ = increase of the identification of the facial emotion (lower number of errors and/or lower response time) in comparison to placebo; ↓ = decrease of the identification of the facial emotion (higher number of errors and/or higher response time) in comparison to placebo; - = no differences between groups; * differences only among females; ** significant only in s carriers. MD = major depression.
Table 2. Effects of serotonergic probes on the BOLD-fMRI signal provoked by the perception of basic emotional facial expressions. ↑ = increase of the neuronal response by the serotonergic probe; ↓ = decrease of the neuronal response by the serotonergic probe. BOLD-fMRI = blood oxygen level-dependent functional magnetic resonance imaging; OFC = orbitofrontal cortex.
"Psychology",
"Biology"
] |
Right to reply: Power and ethics in humanities research: A response to Stolp
In the spirit of open engagement, we respond to the article published in the last issue of Acta Academia by Mareli Stolp entitled "Report to the Academy: Power and ethics in humanities research". This article raises many important issues but also requires, in our opinion, the presentation of an alternative perspective or narrative of the events chronicled. In responding to Stolp's discussion of this incident, four aspects will be discussed: (1) the conceptual delineation of the scope of research misconduct, research integrity and research ethics, (2) research ethics and integrity at Stellenbosch University and the allegation that it is used as a managerial tool to suppress academic freedom, (3) the investigation process itself, and finally (4) the question of innocence or guilt. In conclusion, we believe that a limited knowledge and understanding of research ethics, particularly as it applies to autoethnography, a context of intra-departmental conflict, and a specific historical context led to the conflation of numerous issues and to this series of events.
Introduction
The article "Report to the Academy: Power and ethics in humanities research" published in 2016 by Acta Academia, (Stolp 2016) raises important questions about managerialism and ethics in uni ver sities, questions which should be debated and considered carefully.These issues are complex and multi-layered as the author, Her practice-based research resulted in a doctoral dissertation that included much "subjective and often autobiographical information" and that had a significant narrative component (Stolp 2016:12).Her research was partly autoethnographic; she was embedded in the narrative that formed this dissertation.Certain parts of this narrative included critical descriptions of persons who were in-effect participants in the research process and who were easily identifiable, either because they were named or because they were identifiable due to their specific occupational roles.It was this particular aspect of the dissertation that was identified by the complainant as problematic and it was this that became the focus of the ensuing investigation.From the perspective of those tasked with trying to resolve the complaint, this matter was about research ethics within the complex context of narrative research.When research involves other living humans in some capacity as material sources, certain principles, primarily the principle of 'respect for persons,' need to be observed as involved persons serve as a means to a research end.Guillemin & Gillam explain this fairly bluntly by stating that research involving human participants starts from a point of ethical tension as it invariably involves "a violation of the Kantian maxim that people should never be used merely as a means to someone else's end" (Guillemin, Gillam 2004:271).These authors continue to explore the important notion of reflexivity in qualitative research and how it can contribute to ensuring that research is ethical.Reflexivity is described by McGraw et al as "a process whereby researchers place themselves and their practices under scrutiny, acknowledging the ethical dilemmas that permeate the research process and impinge on the creation of knowledge" (McGraw, Zvonkovic & Walker 2000:68).
The authors do not suggest that academic freedom must be sacrificed on the altar of research ethics, or that a dissertation that aims to provide rigorous critique of an institution and its members cannot be presented in such a way that the principles of research ethics are nonetheless upheld. However, presenting such work ethically does require that the researcher reflects deeply on the ethical dimension of her narrative and her position in this narrative. CEELBAS, a collaboration between several UK universities supporting doctoral and post-doctoral research in Eastern Europe, explores the issue of power and ethics in qualitative and ethnographic research on its website and comments: Thus, for a researcher it is not possible to claim a neutral research identity during fieldwork and it is vital to critically examine a researcher's subjectivities. In addition, an important issue for the complex relationships between the researcher and his or her 'field' is ethical responsibility, which is integral to any research project…. The politics of research [are] revealed by the choice of one's research topic, the methods utilised and the social context in which the research takes place (CEELBAS 2016).
In responding to Stolp's article, four aspects will be discussed: (1) the conceptual delineation of the scope of research misconduct, research integrity and research ethics, (2) research ethics and integrity at Stellenbosch University and the allegation that it is used as a managerial tool to suppress academic freedom, (3) the investigation process itself, and finally (4) the question of innocence or guilt.
Research Ethics, Research Integrity and Research Misconduct: Concept clarification
Research institutions world-wide recognise that the ORI definition, while useful to this agency in fulfilling its mission to ensure the integrity and validity of research funded by the US government, is narrow, and that there are many other instances of wrongdoing within the context of the development, implementation and reporting of research that do occur and do undermine the integrity of research. The use of the term "other irresponsible research practices" in the Singapore statement reflects this stance. Particular note should be made of the fact that the ORI, while using this narrow definition for its own investigative purposes, is active in promoting a far wider agenda when providing resources for the promotion of capacity development in the field of Responsible Conduct of Research, or RCR, as it is now widely known. Available resources for RCR capacity development on the ORI website are wide ranging and include such topics as mentorship, peer review, data management, collaborative science, conflict of interest and conflict of commitment, and human subject research protections, among others (Office for Research Integrity, Department of Health and Human Services (HHS), USA).
The ethical aspects of research involving humans and animals are often referred to as research ethics, and the bodies that review and regulate this research as research ethics committees (Kruger, Ndebele & Horn 2014). Research ethics thus falls under, or is a subset of, the broader field of research integrity. Nicholas Steneck, co-chair of the forthcoming 5th World Conference on Research Integrity and a world leader in the field of education in research integrity, has developed fully integrated training courses in research integrity appropriate to various domains, including the Biomedical sciences, Engineering, Social Sciences, Arts and Humanities (Steneck 2003, Steneck). Each course includes modules on the ethics of animal or human research as applicable. This integrated approach is increasingly regarded as essential in ensuring the integrity and validity of all forms of research, including those that involve humans and animals. Hence Stolp's assertion that SU had conflated "two separate issues, research misconduct and research ethics" (p. 15) seems to be based on a conceptual distinction not shared by many who work in the field of research ethics and integrity. Researchers should respect themselves, their colleagues, the scientific and academic community, their animal and human research subjects, the environment and the public at large".
Research Ethics and Integrity at Stellenbosch University
The SU 2011 procedural document used to investigate problems related to research studies did indeed refer to 'Research Misconduct'. However, in contrast, this document clearly defined research misconduct broadly and did not confine itself to the narrow definition of 'Falsification, Fabrication or Plagiarism'. Rather, the document states: "Misconduct in research includes acts of omission as well as acts of commission. Research misconduct includes but is not limited to: ix. Improper allocation of authorship or the lack of allocation of deserved authorship; x. Failure to comply with national statutory, professional or legal requirements." Hence the document clearly and intentionally places breach of research ethics principles as a subset of activities or incidents that could fall under an allegation of research misconduct, and in this instance it did. There was never any question about whether or not Stolp was being accused of data fabrication, falsification or plagiarism. The answer to this is absolutely not; this idea or allegation was never on the table. However, the complaint received revolved around a breach of the broad principle of 'respect for persons' and more particularly vi.(a) and vii. as referred to above. Of note, after this particular incident, the 2011 procedural document was extensively revised and is now called Stellenbosch University's Procedure for the investigation of allegations of breach of research ethics norms and standards (Division for Research Development, Stellenbosch University 2014).
It is also important to note that Stolp's research proposal did not go through any formal process of ethics review or approval, despite the fact that the SU Policy in place at the time stated the following: "International guidelines for the need for ethics approval of non-health-related research, e.g. social science research involving human participants, are less clear. However, research involving direct interaction with human subjects or the capturing of any personal information should be approved by an ethics committee. […] Research involving human participants must comply with the following principles: […] ensure research participants are well informed on the purpose of the research and how the research results will be disseminated and have consented to participate, where applicable; ensure research participants' rights to privacy and confidentiality are protected; ensure the fair selection of research participants; be preceded by a thorough risk-benefit analysis." This lack of formal ethics approval was not part of the complaint or investigation, as it was deliberately decided to view this as a development opportunity for both student and supervisor. It is important to note that all current and future studies involving human participants from this environment have been and are now required to undergo ethical review. Stolp's dissertation also did not include a section on ethical considerations related to her chosen research methodology, and it does seem that she did not consider the persons that she named and reported on in her dissertation to be research participants. This is perhaps the issue at the heart of this incident: the complainant and those involved in investigating the complaint did and still do regard many of the persons mentioned in Stolp's dissertation as research participants and thus deserving protection by research ethics principles implemented by a responsible and accountable researcher. It is important that all researchers (including Stolp) consider their own role and agency in relation to all those they represent in their work. As Tisdale puts it, "we must negotiate ethics; we must ask difficult questions of ourselves and our work" (Tisdale 2004).
Stellenbosch University does undoubtedly use research ethics processes as a tool to attempt to ensure that the research produced by our university adheres to accepted international principles of research ethics. This is required by most international publishers and funding bodies. We do, however, also try to ensure that, wherever possible, the widest range of research is approved for implementation. The overarching purpose of a REC is to safeguard the interests of those involved in the research process as participants. Current policy requires that all projects that involve interaction with humans, and where the recording of that interaction contributes in some way to the research data or content, undergo formal ethical review (Senate Research Ethics Committee 2013). In the broad humanities (i.e. all projects not considered biomedical), projects are reviewed according to a guided ethical risk assessment that the student and supervisor undertake together. Low ethical risk projects are reviewed at department level by Departmental Ethics Screening Committees (DESCs), and medium or high ethical risk projects are referred to the central REC. The REC: Humanities specifically uses eight widely accepted benchmarks to review research: Social value and relevance, Scientific validity, Stakeholder engagement, Fair recruitment of research participants, Informed consent, On-going respect for participants (including the protection of privacy and confidentiality), an acceptable Risk-benefit assessment and Researcher competence (National Health Research Ethics Council, South Africa 2015, Wassenaar, Mamotte 2012). We believe that the implementation of the above ethical benchmarks across all research projects can only improve the robustness of the research and does not hamper academic freedom in any way that could be considered unreasonable. We do not agree with Stolp, or with those who have contended in the past, that social science researchers should not need to submit their projects for formal REC review and that such processes represent the biomedicalisation of the social science space. Contrary to this view, we have embraced the perspective of authors such as Wassenaar, Mamotte, Slack, Guillemin and Gillam, who have argued rigorously and, in our view, successfully for the value of REC review in the social sciences (Wassenaar, Mamotte 2012, Mamotte, Wassenaar 2009, Wassenaar, Slack 2016, Guillemin, Gillam 2004). Since January 2012 the REC: Humanities has reviewed almost 600 applications. About half of these were approved directly, or with minor stipulations. Only just over 30 projects were deferred, meaning that the project required major revision or did not contain sufficient information for ethical aspects to be adequately reviewed. A good number of these projects were regarded as ethically high risk because they involved controversial and sensitive topics or vulnerable populations. In almost all instances this research was subsequently approved, even if certain revisions were made to improve research participant protection. No projects have been rejected outright. Recently a good number of research projects have been approved that seek to explore various controversial issues at SU, including #FeesMustFall, the language issue, transformation, and hostel and campus culture. Many if not most of these projects seek specifically to explore the perspective of students coming from previously disadvantaged and previously excluded backgrounds.
The investigation process
The complainant in this particular instance was the Chair of the Department of Music at Stellenbosch University. Of particular note is that the complaint came from an environment where certain collegial relationships have broken down. The final report of the three-person Investigation Committee (IC), made up of faculty-based academics (in this instance three senior professors), reflected this fact. Those tasked with receiving and investigating this complaint were initially completely unaware of this context. The complaint had two components, the first being concerns regarding ethical aspects vis-à-vis the protection of the identity of research participants in a dissertation, which will be discussed in more detail later. The second complaint concerned potential copyright infringement, as it was reported that recordings related to the dissertation had been uploaded with the dissertation and were now available in the public domain without the necessary permissions in place. It was primarily this latter allegation that led to the decision to place an urgent temporary embargo on the dissertation while the complaint was investigated. This decision was reasonable from a perspective concerned with reducing immediate institutional risk. The possible infringement of copyright is indeed internationally regarded as a valid case for immediate embargo (UCL Library Services 2014). However, in retrospect, we recognise that placing an embargo on a dissertation without first contacting the author and supervisor and explaining why such an action was urgently warranted was incorrect and led directly to the unfortunate break of trust in the process that followed. This was a mistake on the part of the RIO and SD: R&I for which they take full responsibility.
Conditions under which a dissertation may be embargoed were not included in the 2011 Investigation of Research Misconduct procedural document to which Stolp refers. This incident led to a revision of this procedural document, and the current procedure now states: "Should an allegation involve a thesis that is in the public domain (i.e. on SU's Sun Scholar database), SU may, at its own discretion, place a temporary embargo on the thesis from the time that a formal investigation is instituted, until such time as the investigation has been finalised, to avoid any damage and/or risk to SU's reputation. Prior to placing such an embargo on a thesis, the RIO must notify the Respondent and his/her supervisor of the thesis of this intention. Under exceptional circumstances SU reserves the right to place an embargo on a thesis earlier in this process, for example in cases where either SU or other parties are placed at risk by privacy or intellectual property issues. Wherever possible all concerned will be notified as soon as reasonably possible." (Division for Research Development, Stellenbosch University 2014) In this particular case, the placing of the temporary embargo on the dissertation without immediate communication to the respondents (Stolp and her supervisor) raised suspicion in their minds about the neutrality of the investigation. Several examples at this institution exist, however, where temporary embargoes were placed on material in the public domain during such investigations, in order to address any potential risks (to the institution and individual respondents), without resulting in suspicion or the breaking down of trust. It could therefore be speculated that Stolp's suspicion and lack of trust arose to a large degree from the history of conflict associated with her PhD studies. Communication from this point onwards became antagonistic and acrimonious, with the space for fruitful dialogue being compromised. A decision was made to appoint an independent investigation committee (IC), comprising three senior academics in the social sciences. This was viewed by those managing the case as the fairest and most effective way of addressing the allegation. The decision to appoint such a committee was apparently interpreted by the respondents as a presumption of guilt, which it was not, as clearly stipulated in SU's procedure. The RIO and others involved in managing cases of this nature are obliged to play entirely neutral roles with respect to complainant and respondent, which was indeed the case here. Research ethics, and in particular the stipulations of the SU Policy and the internationally recognised Singapore Statement on Research Integrity, formed the only basis of this investigation. No one is presumed guilty at the start of such an investigation, and some examples indeed exist at this institution where similar investigations have completely exonerated the individuals involved. It could therefore again be speculated that the complex and contested history of Stolp's PhD study as a whole was the fundamental cause of this presumption, and not the specific actions taken by those managing the complaint. As mentioned previously, the procedural document in question has since been extensively revised to ensure clarity in the sequence of events in any future investigations, as it cannot be assumed that these interactions will be based on trust or reasonable reactions. These revisions were done primarily to remove any potential for differences in the interpretation of the sequence in which events should occur after receiving research integrity-related complaints.
One final point that needs to be made is that after the IC had been appointed, but prior to it initiating its investigation, and because of the level of discontent expressed with the process by the respondents, the possibility of halting the investigation and winding it right back to the beginning was discussed by those involved in the case. However, the DVC: Research invited Stolp to Stellenbosch and discussed the matter directly with her, and it was then decided that the IC process would continue, with the IC interviewing Stolp. This fact is reflected in the IC's final report, dated 28 August 2013.
It is the view of the committee that there were indeed deviations from university policy in terms of this matter, and it notes further that it was not party to the discussions between the Vice-Rector and Dr Stolp.We believe that though mistakes were made materially, that none were made in bad faith and that none was of a nature to impede the independent workings of the committee, and we were thus satisfied that with the cooperation of the main parties concerned we could continue with our work.
It is perhaps important to point out again that the above-mentioned deviation refers to the SU procedure (not policy), and specifically to the sequential following of steps in the procedural document that has since been revised and improved in this respect.
Innocent or guilty? And of what?
Stolp states categorically in her article that "it was determined that I was not guilty of either research misconduct or a breach of ethical principles" (Stolp 2016:3). This statement is, in the opinion of the authors of this response, somewhat misleading. What is true is that Stolp was not found guilty of research misconduct in the narrow sense (fabrication, falsification or plagiarism). However, the IC did find that she had breached certain ethical norms and principles. The following statements are taken directly from the IC report and reflect the opinions of the three senior, experienced researchers and academics, all from the broader humanities and social sciences environment (psychology, education, social work), who were tasked with investigating this matter: In conversation with Dr Stolp, the committee was able to see how her perception of the way she was treated as a student by some members of the department, a perception largely shared by [her supervisor] but not shared by [the complainant], helped facilitate a view of herself as a student as relatively powerless within a hierarchical power system. This perception, which was clearly acutely felt by Dr Stolp, provides some of the context for the manner in which she conducted herself during the process of data collection and for the tone in which she chose to write up her thesis.
From the committee's conversation with Dr Stolp, and from the way in which the thesis itself was written, the committee came to the view that Dr Stolp appears to have conflated two issues. There is a difference between taking subjectivity seriously and giving it due weight, and of selectively privileging the subjective experiences of the author. Though it is correct to say that a subjective interpretation of events is important to understand and to respect and embrace, this is not the same as implying that the views of the author (in this case Dr Stolp) should not be subject to the same skeptical scrutiny as those of others. Dr Stolp does address this issue distally in her early chapters, but there are occasions when she discusses her findings that she does not seem to entertain as seriously as she could the possibility that her interpretation is but one of many ways of understanding what has occurred. This is a difficult issue, as it is her right methodologically and intellectually to use her own subjectivity as data, but the problem here is the privileging of this subjectivity. It was clear from our discussions with her that Dr Stolp felt to some degree victimised by the Department of Music, and this was indeed part of her experience. What she seems to have taken less cognisance of, in her writing of the thesis, was her own power and agency (admittedly within the context of asymmetrical power relationships in which she was structurally in a less powerful position) […] In research of this nature, it is not uncommon for people about whom the author is writing to be given sight of what the author intends to write, and to reply. The author does not have to agree with the opinions of others about her interpretations, but does have a responsibility to reflect the fact that her own views, like the views of all others, are necessarily partial, and to give due weight to the possibility that she herself may have made errors of interpretation.
The reflections in the above extract are echoed by authors writing and teaching in the field of narrative research (Johnson-Bailey 2004; Lapan 2003). The IC thus concluded that there were people identified or identifiable in the dissertation who were not fully aware of the role they had played in this research or of what was going to be said about them in this dissertation. The IC recommended that the dissertation be made available only via request to other scholars in this field. However, the final management decision taken was that the dissertation could be made available on the Sun Scholar repository provided that the names of those in the thesis be removed or blacked out and that the Chair of the Department of Music would have an opportunity to write a rebuttal; this rebuttal would be uploaded with the dissertation. In addition to this, the issue of copyright infringement and the attachment of a copyrighted recording to the thesis that was initially available in the open domain was the third matter addressed in the university's requirements. Contrary to Stolp's statement in the article (see footnote 18 on page 10), the removal of these recordings from the public domain, or the instruction to obtain the necessary approval from copyright owners, was the third issue that the university management required in their letter to Stolp and her supervisor. The statement in footnote 18 on page 10 is therefore a misrepresentation of the facts.
As is clear in Stolp's article, the actions taken with respect to this dissertation after the investigation was concluded were regarded as 'censure and censorship'. We do not agree with this perspective and, as stated previously, believe that the principles of both academic freedom and research ethics could have been fulfilled simultaneously in this dissertation. This dissertation could have levelled a powerful critique at both Stellenbosch University and the Department of Music, including commentary on the apartheid legacy of the Music Department and issues related to transformation, without making this critique personal to the point where individuals were either directly or easily identifiable.
Conclusion
This has been a most unfortunate incident for all those involved, including Stolp. We believe that a limited knowledge and understanding of research ethics, a context of intra-departmental conflict and a particular historical context led to the conflation of numerous issues and to this series of events. The fact that the researcher, supported by her supervisor, cast herself as the underdog in this research process led, we believe, to an inadequate appraisal of her own agency and ethical responsibility. While she may well have justifiably owned some level of victim status during this process, this does not mean that ethical accountability in the research process was no longer required.
As mentioned in the Post Script to the article by Stolp, respondents in this case, including the internal examiner, persisted in their view that the university had censored this dissertation because it did not like the political critique regarding transformation, and eventually submitted a complaint to the university ombudsman who, we understand, recommended the lifting of all embargoes on the dissertation.We have not been given access to the ombudsman's report, so we are unable to comment on the reasoning of the ombudsman in this matter.
of data.
v. Failures to follow accepted research procedures or to exercise due care in carrying out research (negligence).
vi. Breach of responsibilities for avoiding unreasonable risk or harm to: a. humans; b. animals used in research and teaching; and c. the natural and cultural environment.
vii. Breach of principles for the proper handling of privileged or private information of individuals collected during research.
viii. Improper management of research funds and/or other resources.
Stolp's article demonstrates considerable confusion around the above concepts, including the scope of the field and the conventional use of terms. The Singapore Statement on Research Integrity is an international statement that resulted from the 2nd World Conference on Research Integrity and was first published in 2010 (Second World Conference on Research Integrity 2010). The initial group of signatories represented more than 50 countries, and the statement has subsequently been broadly accepted globally. Stellenbosch University incorporated it into its revised 2013 Policy for the Promotion of Responsible Research (Senate Research Ethics Committee 2013). It is a broad statement of four principles and 14 responsibilities that cover all aspects of research. Of particular note are the four principles: honesty in all aspects of research, accountability in the conduct of research, professional courtesy and fairness in working with others, and good stewardship of research on behalf of others. The document does also refer to both
"Philosophy",
"Sociology"
] |
Calculus of the fractional order operators in a discrete time domain
The article presents the elementary theory of differential and integral operators of fractional order in a discrete-time approach. The notion of a simple proper-fraction operator is introduced; this is done for the time equivalent by applying the Taylor series. On this basis, a new theory of more complex operators is formed, which includes differential operators of fractional order. A somewhat more general approach is presented in the later part of the article by introducing a rational power of the convolution operator. Both approaches to the fractional operators are realized by non-recursive digital filters with infinite impulse responses. The stability of such filters is also considered. The article also contains an application to the theory of distributed-parameter electrical circuits.
Introduction
The fractional-order differential-integral calculus has been known for over 300 years. Eminent mathematicians took part in the creation of this calculus, from Leibniz and Newton [27] (and their famous calculus wars [3]) to more recent figures such as Hadamard [6]. The history of the creation and development of fractional calculus can be found in the literature [4]. However, the application of the fractional-order differential-integral calculus is a relatively recent part of this mathematical discipline, in which [2,7,11,12] can be considered groundbreaking work and [1,5,8-10,13,26,28] significant achievements.
This publication presents the mathematical convolution method as well as the digital-filter impulse-response method, with application to the analysis of distributed-parameter systems in the theory of electrical circuits. The authors' previous studies in this field can be found in publications [14-23].
The rest of this work is organized as follows. Section 2 introduces the theory of a fractional order differential operator using digital filters (discrete time domain) as well as discussion on stability of such filters. Section 3 proposes the use of differential operator in the theory of electric transmission line. Section 4 extends the fractional order differential operator to the rational order differential operator.
Theory of a differential operator of a noninteger order
The differential operator of an integer order $n$ ($n$ a natural number) may be presented in the following discrete-time form:
$$\left(1 - z^{-1}\right)^{n} \qquad (1)$$
where $z^{-1}$ is a unit delay operator ($z$ a complex variable). As a matter of fact, this is the well-known Newton binomial formula [27] and also a FIR digital filter, which is always BIBO-stable.
For this operator becomes an integral operator which can be calculated from the inverse filter: The above implies a recursive formula for the impulse samples of the inverse filter: (2) Applying the inverse formulas (2) for the filter, the integral filter is obtained: In fact, the formula (3) may be calculated from a geometric sequence which is convergent for or by applying the Taylor series: In fact, the function: used in the formula (4) gives the result: The integral filter is not a recursive one. It is not a FIR filter; thus it is not BIBO-stable (it has an infinite impulse response). However, it may be a stable filter when it is assumed that the input signal is limited to a finite time period. In such a case the transient operator has the form of a finite convolution: (5) where the input signal fulfills the condition: A detailed development of the formula (5) renders: The expansion of (6) implies that, in order to ensure the stability of the filter (5), it is sufficient to limit the impulse output; absolute summability is not necessary. The generalization of the expansion (1) as a Taylor series (4) allows for a definition of a differential operator of the fractional order ( ): where: (8) for , or in a recursive way: The BIBO stability of such a non-recursive filter (1), which is not a FIR filter this time, depends on the summation of the following series: The examination of the formula above is done through the integral criterion: (9) where: In order to estimate the value of the coefficient, the following expression is expanded in a power series: and after the integration: The remaining elements in (9) behave under the condition in the following way: and: Thus, the integral criterion, when applied to the function (9), renders: It means that the digital filter (7), as a discrete-time model of a differential operator of a fractional order, is BIBO-stable. By applying the Taylor expansion in the series (7) for a definition of a fractional-order integral operator, it is obtained: (10) where: and for : (11) Each element in the product (11) belongs to the range . It means that the sequence is limited.
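The coefficient recursion described above can be sketched numerically. The snippet below is a minimal illustration, assuming the operator is the binomial (Taylor) expansion of (1 − z⁻¹)^α truncated to a finite number of taps; the function names, the NumPy dependency and the normalisation by the sample time are illustrative choices, not the paper's notation.

```python
import numpy as np

def gl_coeffs(alpha, n_taps):
    """Impulse response of (1 - z^-1)**alpha truncated to n_taps samples.
    alpha > 0 gives the fractional differentiator, alpha < 0 the integrator.
    Uses the recursive form of the binomial coefficients."""
    h = np.empty(n_taps)
    h[0] = 1.0
    for k in range(1, n_taps):
        h[k] = h[k - 1] * (k - 1 - alpha) / k
    return h

def fractional_filter(x, alpha, dt=1.0):
    """Apply the order-alpha operator to a finite-length signal x sampled with step dt."""
    h = gl_coeffs(alpha, len(x)) / dt**alpha
    return np.convolve(h, x)[:len(x)]
```

Because the coefficients decay only algebraically, the resulting filter is not FIR; restricting the input to a finite time window, as in the text, keeps every output sample a finite sum.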
However, it is not absolutely summable. Thus, the digital filter (10) is only conditionally stable. A periodic differential filter defined by the operator ( ) is a certain generalization: where: (13) and an appropriate integral filter ( ): where: (15) Both filters (12) and (14) are BIBO-stable when: .
The expressions (12)-(15) may be treated together for differential filters ( ): where: for and: for . The joined results of (1) and (7)-(8) help to find a differential operator of an improper fraction order: (16) where is a positive integer (the integer part of the order of the operator) and is the proper fraction part of the operator ( ). When applying a partial decomposition: (17) where: and: (18) where: and after composing the filters (17) and (18) the filter (16) is obtained, in which the convolution: Because one of the filters is a FIR filter, each element is calculated as a finite sum. Yet the resulting filter is not a FIR filter but a BIBO-stable one (a composition of BIBO-stable filters is still BIBO-stable).
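The composition of the integer-order FIR factor with the proper-fraction factor can be sketched as follows, reusing `gl_coeffs` from the earlier snippet. The decomposition order = n + α with 0 ≤ α < 1 is assumed here; the helper names are illustrative, not the paper's.

```python
import numpy as np
from math import comb

def integer_diff_coeffs(n):
    """FIR taps of (1 - z^-1)**n, i.e. Newton's binomial formula."""
    return np.array([(-1)**k * comb(n, k) for k in range(n + 1)], dtype=float)

def improper_order_coeffs(order, n_taps):
    """Impulse response of the order = n + alpha operator, obtained by
    convolving the FIR integer-order part with the proper-fraction part."""
    n = int(np.floor(order))
    alpha = order - n
    return np.convolve(integer_diff_coeffs(n), gl_coeffs(alpha, n_taps))[:n_taps]
```

Since the integer-order factor is FIR, every coefficient of the composed filter is a finite sum, and a composition of BIBO-stable filters remains BIBO-stable, as noted above.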
For example, for a differential operator of order ½, a digital filter with the following impulse response is obtained: The square of the differential operator of the ½ order should render an operator of the first order. This means that the square of the convolution is obtained:
Fig. 1. Impulse responses of the operators: a) differential of the 1st order, b) differential of the ½ order, c) integral of the 1st order, d) integral of the ½ order.
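The self-consistency check mentioned above (the ½-order operator convolved with itself should give the first-order difference) can be verified directly on the truncated coefficient sequences; this is an illustrative check using the earlier sketch, not the data behind Fig. 1.

```python
import numpy as np

h_half = gl_coeffs(0.5, 64)
h_first = np.convolve(h_half, h_half)[:64]
# h_first is [1, -1, 0, 0, ...] up to floating-point error,
# i.e. the impulse response of the first-order difference (1 - z^-1).
assert np.allclose(h_first[:3], [1.0, -1.0, 0.0], atol=1e-12)
```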
Linear equations with differential operators of non-integer order and their application in the analysis of distributed parameters circuits
It is difficult to find a general formula for linear equations with a differential operator of the fraction order. The reason is that the polynomial method which is used in a classical theory of differential operators does not work here. The notion of a rational function does not apply here either. For linear equations with differential operator as a combination of a product of operators (12) which map the digital filters of the impulse responses denoted by: (19) for . This combination is made of the integral operator (14) for which: (20) for and the operators of the integer order, i.e. the integer (positive and negative) power of the operator for which .
It may be an equation with a linear combination of the fractional operators of the type: where , are any given positive numbers. In this way, a differential equation may take the following form: (22) where are fraction functions of the (21) type and , are the input and output signals. In order to solve equation (22) the impulse function should be found which will correspond to the fraction (21). It will be a combination of functions (19) and (20). Next, there should follow a juxtaposition of the impulse functions which create the operator on the left side of the equation (22). In order to find the solution of the differential equation (22) the inverse operator should also be determined according to the inverse formula (2).
The operator of the wave impedance.
Differential equations of the transmission line: take the operator forms: (23) or the form of one operator equation: or: The integration of the differential equation (24) will render a formula with the wave impedance operator: When applying the unit time delay with a sample time , the wave impedance operator takes the form: where: So, the wave impedance operator may be written with differential and integral operators of the ½ order: where: is a wave resistance. The equation (25) takes now a form of an equation with differential operator of the ½ order: A similar result for the wave impedance operator can be obtained by applying a bilinear transformation: where: and:
The propagation operator.
In the continuous-time domain the propagation operator, which appears in the solution of differential equations (23) has a form: (26) where: in which is the velocity of light and is geometric length of transmission line.
When applying the bilinear transformation it is obtained: where: , The solutions of differential equations (23) with the wave impedance, the propagation operator and the boundary conditions as signals , can be presented in the following form: (27) For the adjusted circuit at the end of the transmission line: The exponential operator (wave operator) from equation (29) can be expanded into the following series: Thus, from (26) it is obtained: The wave impedance operator takes the form: where the components (differential): and (integral): are collected in Table 2.
Table 2. Collection of integral and differential components of the wave impedance.
The operator, which is a convolution of other operators, is defined by the sequence:
Fig. 2. Diagram of an infinite homogeneous electric circuit containing operators: horizontal (resistance type) and vertical (conductance type).
in which: , , It is convenient to designate the series recursively: An algorithm for determining the parametric convolution is shown in Table 3.
Rational power of the convolution operator
Assuming that is a deterministic signal of the discrete-time type, such as: The cube convolution is given by the following expression: By introducing a symbol of the -times convolution power ( is a positive integer): the following identity can be obtained: which can be proved by induction: Assuming that (which does not affect the mathematical generality of the equation), an inductive proof of the formula (30) is obtained. By means of the definition of the convolution power, a convolution root of the operator is defined: (35) thus: The identity (30) applied to the equation (31) implies the following recursive expression: In particular, for the power of the ½ order: As an example, the expansion of the convolution formula (32) was calculated for the differential operator : and for the integral operator : The expansion of the differential operator according to the DF impulse response formula (19) gives: and according to the convolution formula (33) gives: As for the integral operator, the results are presented below, according to the DF impulse response method (20) on the left and according to the convolution method (33) on the right:
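The convolution root defined above can also be computed numerically. The sketch below handles the square-root case (the ½ power) by a term-by-term recursion; this is a standard way of inverting a discrete convolution square, assumed here for illustration, and is not copied from the paper's formula (35).

```python
import numpy as np

def conv_sqrt(c, n_taps=None):
    """Find b such that (b * b)[n] = c[n] for the discrete convolution,
    assuming c[0] > 0. Each coefficient is solved term by term."""
    n_taps = n_taps or len(c)
    b = np.zeros(n_taps)
    b[0] = np.sqrt(c[0])
    for n in range(1, n_taps):
        cross = sum(b[k] * b[n - k] for k in range(1, n))
        c_n = c[n] if n < len(c) else 0.0
        b[n] = (c_n - cross) / (2.0 * b[0])
    return b
```

Applied to the first-order difference [1, -1, 0, ...], this recursion reproduces the ½-order coefficients 1, -0.5, -0.125, -0.0625, ... obtained earlier from the impulse-response method, consistent with the statement that the two approaches coincide.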
Results and Discussion
Using the methods presented in the article, it is possible to determine the differential-integral operator of rational order with digital filters, i.e. in the discrete-time domain. The results obtained by the digital-filter impulse-response method coincide with the results of the convolution method. It should be noted that the convolution method is more complex and needs more computing power, but unlike the digital-filter impulse-response method it can be used to obtain an operator of rational order.
The application of differential-integral operators in electrical engineering was also presented. In the theory of electric systems with distributed parameters, the differential and integral operators of the ½ order were used in order to obtain the wave impedance and the propagation operator, as well as the discrete impedance of the infinite electric circuit.
In further research, the authors will focus on extending the theory of rational-order operators and their application in the theory of electric transmission line equations in the discrete-time domain.
Conclusion
The current electric transmission line theory allows the phenomena to be described at most for a lossless or non-distorting line loaded with a resistance [24][25]. The discrete-time analysis presented in the article allows this limitation to be overcome. The operators appearing in the transmission line theory can be presented as a combination of differential-integral operators. The examples shown in Section 3 use the ½ order differential and integral operators. In future work it will be shown that it is possible to use operators of any rational order in electric transmission line theory. Perhaps this generalized theory will find application in other fields of science, in particular those involving the propagation of waves.
Availability of data and materials
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
"Mathematics"
] |
Adaptive evolution of West Nile virus facilitated increased transmissibility and prevalence in New York State
ABSTRACT West Nile virus (WNV; Flavivirus, Flaviviridae) was introduced to New York State (NYS) in 1999 and rapidly expanded its range through the continental United States (US). Apart from the displacement of the introductory NY99 genotype with the WN02 genotype, there has been little evidence of adaptive evolution of WNV in the US. WNV NY10, characterized by shared amino acid substitutions R1331K and I2513M, emerged in 2010 coincident with increased WNV cases in humans and prevalence in mosquitoes. Previous studies demonstrated an increase in frequency of NY10 strains in NYS and evidence of positive selection. Here, we present updated surveillance and sequencing data for WNV in NYS and investigate whether NY10 genotype strains are associated with phenotypic change consistent with an adaptive advantage. Results confirm a significant increase in prevalence in mosquitoes through 2018, and updated sequencing demonstrates a continued dominance of NY10. We evaluated NY10 strains in Culex pipiens mosquitoes to assess vector competence and found that the NY10 genotype is associated with both increased infectivity and transmissibility. Experimental infection of American robins (Turdus migratorius) was additionally completed to assess viremia kinetics of NY10 relative to WN02. Modelling the increased infectivity and transmissibility of the NY10 strains together with strain-specific viremia demonstrates a mechanistic basis for selection that has likely contributed to the increased prevalence of WNV in NYS.
Introduction
West Nile virus (WNV; Flavivirus, Flaviviridae) is a mosquito-borne single-stranded, positive-sense RNA virus with a genome of approximately 11 kb encoding a single open reading frame (ORF) consisting of three structural genes (C, prM, and E) and seven non-structural genes (NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5) [1]. WNV was first isolated from a febrile viremic patient in Uganda in 1937 and subsequently caused isolated outbreaks in Africa, the Middle East, and Australia, where the disease was rarely found to be neuroinvasive [2]. In the mid-1990s the intensity of outbreaks and WNV disease increased, marked by rising prevalence in Eastern Europe and Northern Africa [3]. WNV is now the most geographically widespread arbovirus and has been classified into as many as five genetically disparate lineages that differ by as much as 20-25% nucleotide identity [4]. The introduction of lineage 1 WNV to the United States (US) commenced in New York State (NYS) in 1999 [5]. There have been over 55,000 human cases diagnosed in the US since 1999, including over 2600 deaths [6]. Although most cases of WNV are subclinical, roughly 20% of cases progress to acute febrile illness, and 1% of cases progress to central nervous system (CNS) infection [7]. CNS infection results in a far more severe course of disease, marked by a range of clinical outcomes including encephalitis, meningitis, acute flaccid paralysis and death [8]. Given both the high proportion of subclinical infections and the fact that West Nile fever cases often go undiagnosed, the true number of infections in the US has likely exceeded 6 million over the last twenty years [9].
The exploitation of a naïve and permissive host environment together with highly competent vectors in North America facilitated rapid spread and establishment of WNV as the most prevalent arboviral pathogen in the US. WNV is maintained in an enzootic cycle between mosquitoes, primarily of the Culex genus, and avian hosts. The primary vector in Northeast US is Culex pipiens. Most passerine songbirds serve as reservoir hosts that amplify the virus to viremia levels sufficient for transmission back to the mosquito vector [10]. Despite the wide host breadth of WNV, American robins (Turdus migratorius) are known to play a disproportionally large role in amplification and dispersal, both because of their competence and the blood-feeding preferences of Culex spp. mosquitoes, as well as their migratory habits and short distance movements [11][12][13][14].
As an RNA virus with no proofreading mechanisms and a high rate of replication, WNV has enormous evolutionary potential, yet estimates of evolutionary rate ranging from 3.6×10⁻⁴ to 8.2×10⁻³ substitutions/site/year stand in contrast to this lack of fidelity in WNV genome replication [15][16][17]. While WNV has been relatively stable genetically, high levels of variability with largely uncharacterized phenotypic consequences have been noted over various temporal and geographic scales using geographically focal datasets [18][19][20]. In addition, evidence of adaptive evolution of WNV is scant, with notable exceptions. The invasive WNV strain, introduced to the US in 1999, possessed a characteristic amino acid substitution, NS3 T249P, which increased virulence and susceptibility in avian hosts [21]. Displacement of the previous NY99 genotype by the WN02 genotype, characterized by a single amino acid change in the envelope protein, V449A, likely contributed to the rapid dispersal of the virus across the US [22]. The WN02 genotype was found to be more infectious to mosquitoes, demonstrating earlier dissemination and a shorter extrinsic incubation period (EIP) in Culex tarsalis, which is widespread in the US west of the Ohio River [23]. An additional genotype, SW/WN03, characterized by the amino acid substitutions NS5 K314R and NS4A A85T, has been circulating in the US since 2003 [24].
Recent phylogenetic studies of WNV identified multiple mutations with evidence of positive selection and novel genotypes that have increased in prevalence in recent years in NYS [25]. In particular, the NY10 genotype, characterized by two shared amino acid substitutions with evidence of positive selection, R1331 K (NS2A R188 K) and I2513M (NS4B I240M), emerged in NYS in 2010 and increased in prevalence through 2015. Importantly, this displacement occurred in concert with increased WNV activity in the state, a trend that continued through 2018. Here, we sequenced an additional 48 WNV strains isolated from 2015-2018 to confirm the continued dominance of NY10. To test the hypothesis that adaptive evolution contributed to increased WNV transmission and prevalence, we characterized NY10 strains in vivo in both Cx. pipiens and American robins. Using these results, we modelled WNV transmissibility and demonstrated a clear role for viral genotype in driving WNV activity in the region.
West Nile virus mosquito surveillance and sample preparation
Mosquitoes were collected in Centers for Disease Control (CDC) light traps by NYS county health departments and speciated pools were submitted to the NYS Arbovirus Laboratory for processing and testing. Pools consisted of 15–60 Cx. pipiens and/or Cx. restuans females in 1 mL mosquito diluent [MD, 20% heat-inactivated fetal bovine serum (FBS) in Dulbecco's phosphate-buffered saline (PBS) plus 50 μg/mL penicillin/streptomycin, 50 μg/mL gentamicin, and 2.5 μg/mL Fungizone] with 1 steel bead (Daisy Outdoor Products, Rogers, AR). Pools were processed by homogenization for 30 s at 24 Hz in a Mixer Mill MM301 (Retsch, Newtown, PA), followed by centrifugation at 6000 rcf for 5 min. WNV-positive pools were identified by quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) [26]. WNV prevalence was determined using maximum likelihood estimation (MLE) based on mosquito surveillance pool sizes using an Excel Add-In (https://www.cdc.gov/westnile/resourcepages/mosqSurvSoft.html). Geographically and temporally representative pools (Table 1) were amplified on Vero cell culture (African green monkey, Chlorocebus sabaeus, ATCC, Manassas, VA) and the resulting supernatant was saved for subsequent characterization [27]. RNA was extracted on the MagMax-96 Express robot (Applied Biosystems, Foster City, CA) with the MagMax Viral isolation kit (ThermoFisher Scientific, Waltham, MA), according to the manufacturer's recommendations with modifications. Briefly, 50 μL of supernatant samples were added to 130 μL of lysis buffer containing 20 μL of RNA binding beads that were diluted 1:1 with wash buffer 1. RNA was eluted in 90 μL of elution buffer. Primer pairs, AGTAGTTCGCCTGTGTGAGCTGAC, GAGAGCCCCCAGCAATCC, and CCTTGCAAAGTTCCTATCTC, CTCTGCCAGCCCTCCGACGAT, and GGACCAACCAGGAGAACATTT, GATCCGAGTACACCCTGGCGTCAA, and CAAGGCGAGCAGGGTGAT, GAAGCTCGACTCACCCAATACAT, and GCTCTGCCCCTACATGCC
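The pooled-sample MLE referred to above can be reproduced with a small routine. This is a sketch of the standard pooled-prevalence likelihood (each pool of size m is positive with probability 1 − (1 − p)^m), not the CDC Excel add-in itself; the function name and SciPy dependency are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pooled_prevalence_mle(pool_sizes, pool_positive):
    """MLE of the per-mosquito infection rate p from pooled testing results.
    pool_sizes: iterable of pool sizes; pool_positive: iterable of booleans."""
    m = np.asarray(pool_sizes, dtype=float)
    pos = np.asarray(pool_positive, dtype=bool)

    def neg_log_lik(p):
        q = 1.0 - p
        ll = np.log(1.0 - q ** m[pos]).sum() + (m[~pos] * np.log(q)).sum()
        return -ll

    res = minimize_scalar(neg_log_lik, bounds=(1e-6, 0.999), method="bounded")
    return res.x  # often reported as an infection rate per 1,000 mosquitoes, i.e. 1000 * res.x
```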
Sequencing and genetic analyses
Sequencing was performed on the Illumina MiSeq platform (San Diego, CA). Paired-end reads were assembled to a WN02 genotype reference (DQ164190) deploying Geneious Pro's reference mapping tool using high sensitivity and free end gaps with 10 iterations of fine tuning, trimming paired read overhangs. The same parameters were used to map reads to the consensus assembly. The newly sequenced strains were submitted to GenBank and assigned the accession numbers MT967988–MT968032 and OK631659–OK631661. All alignments were performed using MAFFT alignment in Geneious Pro, with the algorithm set to the slow and accurate L-INS-I alignment algorithm, with the scoring matrix set to 200PAM/K = 2. The gap open penalty was set to 1.53, and the offset value set to 0.123. Phylogenetic analyses were carried out using BEAST2 and all available NYS WNV sequences containing a full open reading frame (ORF) and assigned collection dates using available metadata (n = 590). Evolutionary rates were estimated using the Bayesian Markov chain Monte Carlo method implemented in the programme BEAST2 [28]. The GTR + I substitution model was found to be the best fit for this dataset using bModelTest, and all subsequent Bayesian analyses used these parameters [29]. A Gamma site model was assigned to the dataset, and a general time reversible (GTR) model was used to estimate substitution rates. A relaxed lognormal clock was used to estimate the evolutionary rate. A coalescent Bayesian skyline model was applied to the dataset and run for 800,000,000 generations, sampling every generation and discarding the first 10% of generations as burn-in. This number of generations was sufficient to ensure convergence and effective sample sizes (ESS) of all parameters of >200.
Viruses
The WN02 strain used was isolated in 2003 (DQ164189) from an American crow (Corvus brachyrhynchos) found in Albany County, which was initially amplified on Vero cells for sequencing, and then later amplified on C6/36 cells (Aedes albopictus, ATCC, Manassas, VA) for downstream use. Distinct NY10 genotype isolates were amplified on C6/36 cells to generate virus stocks for characterization. The NY10A (KX547330) and NY10B (KX547391) strains used were isolated from Culex mosquito surveillance pools from Erie County in 2013 and 2010, respectively. NY10C (KX547356), was isolated from a Culiseta melanura pool from Oswego County in 2012. Each strain possesses the signature, shared NY10 mutations in addition to unique nonsynonymous mutations (Table 2). After 5 days of amplification on C6/36 tissue culture, following an infection at ∼1.0 multiplicity of infection (MOI), culture supernatant was harvested and stored in 20% FBS at −80°C.
Vector competence assays and infectivity studies
To assess WNV strain infectivity for NY10A, NY10B, NY10C and WN02, we used Cx. pipiens, originally colonized from egg rafts collected in Pennsylvania in 2004 and subsequently maintained at the NYS Arbovirus Laboratory Insectary. Four-to-seven-day-old adult females were collected and fed on doses of WNV ranging from 5 to 8 log10 pfu/mL. Bloodmeals consisted of a 1:1 mixture of diluted virus stock and chicken blood (Colorado Serum Company, Denver, CO), and a final concentration of 2.5% sucrose. Following one hour of feeding using an artificial feeding chamber (Hemotek, Blackburn, UK) at 37°C, mosquitoes were anesthetized, and the engorged females were collected and held at 27°C for 11 days post-infection (DPI). Individual mosquitoes were saved with a 4.5 mm zinc-plated steel ball (BB) (Daisy, Dallas, TX) in 1.0 mL MD at −80°C. To determine infectivity, thawed samples were homogenized at 24 Hz for 30 s and subsequently tested by WNV-specific qRT-PCR [30]. A total of 50 mosquitoes were tested for each strain and dose combination. Infectivity curves were generated by plotting proportion infected against dose and fitting log-linear curves using Graphpad Prism 9. Doses at which 50% of mosquitoes are infected (ID50s) were determined by extrapolating from these curves. Slopes were compared using linear regression analyses and proportions infected at individual doses were compared using ANCOVA tests via Graphpad Prism 9.
Table 2. Polyprotein position and unique amino acid substitutions in each of the West Nile virus strains utilized for experimental infections. The NY01 mutation is denoted here with an asterisk.
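A compact way to reproduce the ID50 extrapolation described above is a straight-line fit of proportion infected against log10 dose; this is one plausible reading of the "log-linear" fit done in Prism, and the exact model Prism used may differ. The example values are hypothetical, not the paper's measurements.

```python
import numpy as np

def id50_from_log_linear(log10_doses, prop_infected):
    """Fit prop_infected = a * log10(dose) + b and solve for the dose
    (log10 pfu/mL) at which 50% of mosquitoes become infected."""
    slope, intercept = np.polyfit(log10_doses, prop_infected, 1)
    return (0.5 - intercept) / slope

# Hypothetical example: id50_from_log_linear([5, 6, 7, 8], [0.10, 0.35, 0.70, 0.95])
# returns roughly 6.4 log10 pfu/mL.
```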
For vector competence assays, all WNV strains were diluted to 7.3 log 10 pfu/mL in chicken blood and engorged females were held for 5 or 11 DPI and assayed for infection, dissemination, and transmission [23]. Legs were removed and stored at −80°C with a BB and 500 μL MD to assess dissemination. Transmission was determined by collecting saliva from anesthetized mosquitoes using in vitro transmission assays. Following 30 min of forced salivation, transmission fluid (1:1 FBS: 50% sucrose) was ejected into 150 μL MD and stored at −80°C. All samples were tested by plaque screening on Vero cells and proportions of infected, disseminated, and transmitting were compared using Fisher's exact tests using Graphpad Prism 9.
Avian inoculations and viremia kinetics
All procedures and methods were approved by the Wadsworth Center Institutional Animal Care and Use Committee and trapping was completed and approved by Federal and State Scientific Collection Permits (SC1386, MB194270), and Master Banding Permit (#23269). Twenty-two hatch-year (HY) American robins were captured during fall migration from 12 -22 October 2018 using mist nets (36 mm mesh; 12 m x 2.6 m) in Laingsburg, MI (42.82 -84.38). Upon capture, the condition of each bird was assessed for body mass (± 0.1 g), sex, wing length, and presence of ectoparasites. Initially, birds were placed in individual wire cages (30 × 38 × 38 cm) until they acclimated, at which point they were moved to small aviaries (183 × 61 × 274 cm) with 2-4 robins in each. On 25 October 2018, robins were placed into bird holding boxes and transported via car from East Lansing, MI to Albany, NY. They were housed in one ABSL3 room at the NYS Arbovirus Laboratory in individual cages as described above. Room temperature was maintained at an average of 20-21°C with 60% relative humidity and a 13-hour light: 11-hour dark photoperiod. All birds were fed a mixed diet appropriate for the species. Birds were provided ad libitum access to water throughout the entire experimental period. Prior to group assignment, all robins were screened for previous exposure to WNV by plaque reduction neutralization test (PRNT) upon capture and prior to experiments (∼14 days post capture). The blood samples were stored at 4°C until antibody titres were assayed. For the PRNT testing, sera were diluted in BA-1 [M199 medium with Hank's salts, 1% bovine albumin, TRIS base (tris [hydroxymethyl] aminomethane), sodium bicarbonate, 2% FBS, and antibiotics] and heat-inactivated at 56°C for 30 min. Sera were screened at a 1:10 dilution for WNV. Antibody titre was expressed as the inverse dilution of blood that neutralized 90% of the virus inoculum as compared to the virus-only control (no antibody) well [31]. Birds were randomly assigned to either WN02 (n = 10) or NY10 (n = 12) exposure groups and were subsequently inoculated subcutaneously in the cervical region with 0.1 mL of 5 log 10 pfu/mL of infectious WNV (WN02 1986, NY10A, or NY10C), diluted in a sterile PBS diluent (PBS with 1% FBS).
To assess viral titres, 0.05 mL blood was collected daily through 6 DPI from the ulnar vein using a 25-gauge needle [31]. Blood was dispensed in BA-1 and stored at −80°C. Viremia levels were subsequently quantified using the Vero cell plaque assay and compared among groups at each timepoint using t-tests [32]. At 14 DPI, all WNV-infected birds (control birds were held for a subsequent experiment not described here) were euthanized via CO2 asphyxiation.
Infectivity and transmissibility indices
To assess the relative differences in the capacity for maintenance and spread of WN02 and NY10 strains, indices of infectivity and transmissibility were calculated. Avian infectivity (i_a) was quantified using the viremia values for WN02 and NY10 strains (Figure 2B). Specifically, to account for the magnitude and duration of viremia, the area under the curve for each individual bird and strain was quantified for viremia levels of 4-5, 5-6, 6-7, 7-8 and >8 log10 pfu/mL, and mean values for WN02 and NY10 strains were obtained. Mosquito infectiousness (i_m), which was quantified from mosquito infectivity experiments, was determined by extrapolation of mean levels of infection at the same blood meal titres from the linear relationship between dose and infection rates for each strain (Figure 4B). The product of i_m and i_a is infectiousness (i) at a given titre. The infectivity index, I (Figure 7A), is defined as the sum of all i terms. Using transmission data from the vector competence experiments (Figure 6), a mosquito transmission term is introduced, t_m, which is the proportion of infected mosquitoes transmitting either WN02 or NY10. The product of I and t_m equates to transmissibility, t, at a given titre, and the sum of all t terms equates to the overall transmissibility index, T (Figure 7B). This assessment of the infectivity and transmissibility indices allows us to estimate the relative capacity for emergent genotypes to displace other genotypes in the transmission cycle of WNV.
Updated sequencing efforts allowed for expansion upon previously observed trends in the genetic record established through mosquito surveillance in NYS. An initial displacement of the NY99 genotype, and fixation of the WN02 genotype, established a permanent change in the genetic record of circulating WNV genomes in the US, and the only selective sweep documented in North American strains of WNV. The characteristic WN02 mutation (E V449A) is present in all strains sequenced after 2003 and is the established genetic "backbone" of circulating WNV strains in the Americas. Of the 48 newly sequenced isolates, 33 were found to have the shared NY10 genotype amino acid substitutions (K1331R and I2513M) and 13 were found to have the shared NY07 genotype amino acid substitutions (T1195I, L1238F, S1838T, and S2287I). The NY10 genotype appears in 2010 and persists through 2018 (Figure 2). Three years after the emergence of NY10 it became the dominant genotype, a trend that has continued through 2018 (Figure 3). The prevalence of NY07 genotype strains has been more variable, yet there was an increase from 2016 to 2018. Other strains that were previously recognized as either of increasing prevalence in past years, or as showing evidence of mutations under positive selection, such as NY01 and SW/WN03, appear more ephemeral in their frequency yet persist through 2018 (Figure 3).
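The index construction described above (avian time at each titre bin, weighted by mosquito infection rates, summed to I, and scaled by the transmitting fraction to give T) can be sketched as below. The bin edges, the representative titre per bin, and the use of time-in-bin as the area term are assumptions made here for illustration; the published calculation may use a different interpolation.

```python
import numpy as np

TITRE_BINS = [(4, 5), (5, 6), (6, 7), (7, 8), (8, 12)]  # log10 pfu/mL

def infectivity_transmissibility(days, viremia, infection_at_titre, t_m):
    """Sketch of the infectivity (I) and transmissibility (T) indices for one bird.
    days, viremia: daily sampling times and log10 titres;
    infection_at_titre: callable giving the expected fraction of mosquitoes
    infected when feeding at a given log10 titre (from the dose-response fit);
    t_m: fraction of infected mosquitoes that transmit."""
    days = np.asarray(days, dtype=float)
    viremia = np.asarray(viremia, dtype=float)
    I = 0.0
    for lo, hi in TITRE_BINS:
        in_bin = ((viremia >= lo) & (viremia < hi)).astype(float)
        i_a = np.trapz(in_bin, days)             # approximate days spent in this titre bin
        i_m = infection_at_titre((lo + hi) / 2)  # representative titre for the bin
        I += i_a * i_m
    return I, I * t_m
```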
Increased infectivity of West Nile virus NY10 genotype strains in Cx. pipiens
When considering the dose-dependent effects of individual strains on mosquito infection, a clear trend emerged in the proportion of infected mosquitoes resulting from peroral infection using NY10 strains relative to the ancestral WN02. Each NY10 strain infected a greater proportion of mosquitoes than WN02 at every dose tested (ANCOVA, p < 0.01, Figure 4). NY10C was found to be the most infectious strain, with an ID50 >1 log10 pfu/mL lower than that of WN02 and a minimal infectious dose of 4.0 log10 pfu/mL. The mean ID50 for NY10 strains, 6.05 log10 pfu/mL, was 0.80 log10 pfu/mL lower than that of WN02.
Vector competence of WNV NY10 strains in Culex pipiens
At 5 DPI, the infection rate for the NY10 strains was on average 30% greater than that of WN02, with a significantly greater proportion of mosquitoes with disseminated infections (Fisher's exact test, p < 0.0001, Figure 5). Mosquitoes infected with the NY10 strains also showed earlier transmission than WN02, which together with the significant increase in dissemination suggests a shorter EIP. This trend was more pronounced at 11 DPI, when significantly enhanced transmission was measured in Cx. pipiens infected with NY10 strains (Fisher's exact test, p < 0.001, Figure 5). These highly significant differences demonstrate a phenotypic advantage that NY10 strains have over WN02 in terms of competence in Cx. pipiens mosquitoes.
West Nile viremia kinetics in American robins
Overall viremia kinetics were statistically similar for birds inoculated with WNV NY10 strains relative to WN02 (1-way ANOVA, p = 0.9971; Figure 6A). However, peak viremia was extended by an average of one day in individuals infected with NY10 strains (Figure 6A). In addition, there was high variability among individuals, yet 5 of 6 birds with the highest peak viremia levels were infected with NY10 strains (Figure 6B). Total viremia (Figure 6C), viremic peak (Figure 6D) and days infectious (Figure 6E) were all higher for NY10 strains compared to WN02, though these results were not statistically significant. The individual bird with the highest viremia in the WN02 group represented a significant outlier (paired t-test, p = 0.0102). In fact, if this outlier was removed, mean and peak viremia levels would be significantly higher for the birds infected with NY10 strains (paired t-test, p < 0.05; Figure 6). While viremia for NY10 and WN02 strains was not independently statistically distinct, when differences in the threshold for mosquito infectivity are considered, days of infectious viremia were significantly higher for NY10 strains (Figure 6F, p = 0.044, Mann-Whitney test).
Increased infectiousness and transmissibility of NY10 strains drives genotype displacement
To determine the extent to which phenotypic variation identified for NY10 strains could drive displacement, we quantified infectiousness and transmissibility indices for each experimental infection. Considering the distinct viremia kinetics and increased infectivity of mosquitoes (Figures 4 and 6), the mean infectiousness index for NY10 strains is 2.7 times greater than that of WN02 (Student's t-test, p < 0.01, Figure 7). Further, incorporation of transmission data from the vector competence results (Figure 5) demonstrates that the mean transmissibility index for NY10 strains is 8.1 times greater than that of WN02 (Student's t-test, p < 0.01, Figure 7). Together, these data demonstrate a clear mechanism for displacement of WN02, and increased activity of WNV in NYS since the emergence of NY10.
Discussion
The markedly increased WNV activity from 2010 to 2018 in NYS coincides with a greater number of human cases, and the rise in prevalence of new, emergent genotypes of WNV, with the most obvious and striking trend being the dominance of the NY10 genotype. The mutations that define this genotype, R1331 K (NS2A R188 K) and I2513M (NS4B I240M), occurred separately in years before 2010, but after 2010 were found largely in tandem, suggesting an adaptive linkage. Based on the distinct clades that NY10 occurs in, it has been selected on at least 2 different backgrounds [25]. Recently, there has been identification of the NY10 genotype on distinctly different, and geographically distant backgrounds (nextstrain.org/wnv/na). While further mechanistic studies are required to fully define the molecular mechanisms resulting in increased vector competence and altered viremia kinetics, the flavivirus genes NS2A and NS4B are known to play important roles in replication, virion assembly and immune evasion in both vertebrate and invertebrate hosts [32][33][34]. NS2A is a documented suppressor of RNA interference (RNAi) through direct binding and sequestration of the Dicer-2 enzyme in vertebrate hosts and mosquitoes [35]. Increased capacity to act in this regard could enhance viral replication and transmission, particularly in mosquitoes that rely on RNAi as a primary immune response to arboviruses [36]. While the primary phenotype identified here is increased infectivity and transmissibility of the NY10 genotype in Culex pipiens mosquitoes, it is possible modest changes in avian viremia could be related to strain-specific variability in the interferon response. There is a documented role for the NS4B as an interferon antagonist, which is known to be strain-dependent [37][38][39][40]. Though the position of the NS4B gene substitution has not been previously attributed to this function, it is certainly possible that this could influence the interferon response [38]. The flavivirus protein NS4B, although composed of just 255 amino acids, was previously found to possess the highest number of shared non-synonymous consensus mutations among sequenced WNV isolates, including three positions with evidence of positive selection [25]. The NS4B is known to interact with numerous host and viral proteins with diverse roles in viral replication and host immunity. Concordantly, individual substitutions in this protein are well documented to have the capacity to result in significant changes to host-specific fitness and pathogenesis [37]. Similar to NS2A, NS4B has been identified as an RNAi suppressor [41]. Additionally, NS4B likely contributes to both evasion and/or suppression of the cellular stress response [42,43]. Lastly, because NS4B interacts directly with the replication complex, substitutions could additionally perturb replication kinetics [40,44].
Flavivirus evolution is driven primarily by stochastic change within and between hosts and seasons, and purifying selection in the host and vector, with limited evidence of positive selection or selective sweeps, with the exception of the displacement of the NY99 genotype by the WN02 genotype [19,20,[45][46][47]. The comparison of the WN02 and NY10 genotype strains in Culex pipiens clearly demonstrates increased competence for NY10. A similar phenomenon was observed with WN02 strains relative to NY99 strains in Culex mosquitoes, which was attributed primarily to the V449A substitution in the envelope gene [16,23]. Surprisingly, additional genotypes possessing enhanced infectivity or transmissibility in Culex species mosquitoes have not been observed. The NY10 genotype strains of WNV tested here each differ in their amino acid sequences, with unique substitutions in both structural and non-structural proteins. Included in these differences is the G2377E substitution in the NS4B found in isolate 10C, a mutation previously attributed to the NY01 genotype with evidence of positive selection [25]. Additional distinct substitutions were identified in the prM, E, NS2 and NS5 proteins. While each of these could certainly contribute to altered viral fitness or transmissibility, none were shared among NY10 genotype isolates. Since NY10 strains were all associated with increased competence of Cx. pipiens relative to WN02, the presence of the signature NY10 mutations is most likely to be primarily responsible for the fitness advantage. Additionally, while transmission at 5 dpi was detected in individuals in all three groups infected with NY10 strains, none of the WN02 infected mosquitoes transmitted at 5 dpi, indicating a shorter mean EIP and an additional advantage for the propagation of NY10 genotype strains.
The fact that examples of adaptive evolution of WNV and other arboviruses are rare is often attributed to adaptive trade-offs imposed by the disparate selective pressures of vertebrate and invertebrate hosts [48][49][50][51][52]. Here, although the increased fitness in mosquitoes is more pronounced, we additionally demonstrated that more birds infected with NY10 genotype strains had higher mean viremia and longer mean sustained viremic periods. While the protracted viremic period is modest, on a population level it would have a substantial effect on infectiousness, particularly in the context of the significant increase in infectivity to mosquitoes. If such differences exist in other avian hosts, an additional consequence of these mutations could be expansion of host range, where species generally thought to be poorly competent (i.e. doves, non-Turdus thrushes, and catbirds) could ultimately play a much larger role in virus amplification [53][54][55].
Adaptive evolution of WNV occurring in North America two decades after its introduction is somewhat surprising, and perhaps a result of environmental changes and/or shifts in host or vector populations. Previous studies have reported that distinct interactions between viral genotype, mosquito population and temperature influence vector competence [14,56]. Vector populations have relatively fast generation times, restricted geographic ranges, and proposed mechanisms of overwintering in mosquitoes can drive the emergence of new genotypes [57]. It remains unclear whether NY10 strains are additionally more adaptive to Cx. quinquefasciatus, Cx. tarsalis, or other populations of Cx. pipiens. Previous studies did suggest a geographical bias among WNV genotypes, with NY10 more likely to occur in northern NY and the NY07 genotype strains having increased prevalence in downstate NY [25]. The question of differential vector competence between Culex species and populations is highly relevant, as these differences in infectivity can greatly shape the nature and magnitude of the spread of WNV, as was the case with Cx. tarsalis in the US and other Culex species implicated in enhanced WNV transmission in Europe [23,58,59]. Additionally, changes to the environment, particularly increases in temperature associated with climate change, may facilitate broad adaptation, perhaps expanding the host and geographical range of the virus [60]. Importantly, while our previous studies only identified the NY10 genotype in NYS, it is now present throughout the continental US, suggesting that it may have a broad adaptive advantage that could drive similar increases in WNV activity in other regions (nextstrain.org/wnv/na).
Figure 7. A) Infectivity indices: the average infectivity of NY10 strains was 2.7 times greater than that of WN02 (Student's t-test, p < 0.01). B) Transmissibility indices of NY10 strains compared to WN02: enhanced transmissibility of NY10 strains further increased the mean difference between genotypes (Student's t-test, p < 0.01).
These data demonstrate the importance of analyzing spatially or temporally distinct datasets. For traditional phylogenetic studies, the focus is often placed on the size of the dataset, the idea being that more sequence data equate to increased capacity for inference of selection and detection of adaptive change. If selection is variable over space and time, a shortcoming of analyzing an expansive dataset is that signals of positive selection or displacement occurring over shorter periods or in discrete regions could be diluted by data from other regions when traditional counting methods of selection analysis are used. This can result in relevant evolutionary events being pushed into statistical insignificance.
While NY10 remains the dominant genotype in NYS and has now spread throughout the US, NY07, another previously identified emergent genotype, has also increased in prevalence since its emergence. Further phenotypic characterization and continued WNV surveillance will help elucidate if this and other novel genotypes could facilitate additional regional or national expansions to WNV transmission and disease. | 6,785.8 | 2022-03-23T00:00:00.000 | [
"Biology"
] |
Depth Map Improvement by Combining Passive and Active Scanning Methods
The paper presents a new method of more precise estimation of the depth map in 3D videos. The novelty of the proposed approach lies in sophisticated combination of partial results obtained by selected existing passive and active 3D scanning methods. The aim of the combination is to overcome drawbacks of individual methods and this way to improve the accessible precision of the final depth map. The active method used is incoherent profilometry scanning which fails on surface discontinuities. As a passive method, a stereo pair matching is used. This method is currently the most widely applied method of depth map estimation in the field of 3D capturing and is available in various implementations. Unfortunately, it fails if there is a lack of identifiable corresponding points in the scanned scene. The paper provides a specific way of combining these methods to improve the accuracy and usability. The proposed innovative technique exploits the advantages of both approaches. Specifically, the more accurate depth profiles of individual discontinuous objects obtained from the active method, and information about mean depths of the objects from the stereo pair are combined. Two implementations of the passive method have been tested for combination with active scanning: matching from stereo pair, and SIFT. The paper includes a brief description of the active and passive methods used and a thorough explanation of their combination. As an example, the proposed method is tested on a simple scene whose nature enables straight assessment of the achieved accuracy. The choice of a suitable implementation of the passive component is also shown and discussed. The obtained results of individual existing methods used and of the proposed combined method are given and compared. To demonstrate the contribution of the proposed combined method, also a comparison with the results obtained with a commercial solution is presented with significantly good results.
Introduction
3D video capturing can be realized by various camera systems working on many physical principles. We can observe two paths of development, namely active and passive 3D capturing systems. Active capturing systems utilize the projection of a measurement pattern onto the scanned scene, which can be in the visible light spectrum [1], [2], in the near infrared field [3], or projected by a focused laser spot [4]. In the most widely used passive system, the depth of a pixel is determined from its disparity in a stereo pair. The depth can also be estimated from a multicamera facility [5], a depth field camera [6], or from monoscopic camera auto-focusing parameters [7].
Regardless of the variety of capture system principles, most of them have a similar output format (2D + a depth map). Based on this output format, it is possible to render more views by Depth Image Based Rendering (DIBR). These views are then compressed by Multiview Video Coding (MVC) and used also in television broadcasting [8]. Some current 3D video shooting systems are based on combinations of several depth acquisition methods, such as a Time-of-Flight (TOF) IR camera with a stereo pair, where the depth image is rectified to the color camera images [2]. In some other approaches, the stereo pair is enhanced by combining advanced and conventional methods of image segmentation [9], or several monocular cues are combined to estimate the depth map [10]. Another combination is a profilometry scanning system with two cameras [1], which is a very similar system design to the one proposed in this paper, but with completely different data processing. The approach proposed in this article was first mentioned in our previous contribution [11], where two possible modifications were outlined.
The aim of this paper is to introduce a new specific capture system for depth map estimation in 3D TV. The idea is based on a combination of active scanning with a passive method in which depth information is estimated from a stereo pair. A precise 3D model of the scene providing the true depth map was created to demonstrate the good accuracy of the proposed system. The relevance of our method is also demonstrated by comparing the results with a professional (commercial) 3D active system.
The rest of the paper is organized as follows. Section 2 contains a brief description of the active and passive 3D capturing methods of interest. Section 3 deals with the definition of the theoretical depth accuracy. The proposed algorithm for the depth information synthesis is described in Sec. 4. Analysis of practical implementations of the proposed method and evaluation of the obtained depth maps are presented in Sec. 5 and Sec. 6, respectively. Finally, Section 7 concludes the paper.
Current Methods of Depth Map Generation
As mentioned above, there is a huge variety of depth map estimation methods. Before dealing with the proposed combined method, a brief description of the existing methods of interest is given in this section, together with the available technical information about the commercial Kinect™ system.
Depth Map Estimation from a Stereo Pair
Most of today's 3D capture systems use a passive method for depth map estimation, based on stereo pair analysis.
There are mainly two types of this method. Firstly, classic and older approaches are referred to as area-based methods [4]. In most cases, well-matched camera parameters are assumed, namely focal length, depth of field and resolution. The description of the epipolar geometrical parameters is epitomized in a fundamental matrix that has to be found. Then rectification [3] is performed, meaning a transformation of the input stereo pair images is carried out so that the epipolar lines of the output images correspond to the same image rows. After that, the corresponding points are sought only along these lines. The basic algorithms for Disparity Space Imaging (DSI) are the Sum of Squared Differences (SSD), the Sum of Absolute Differences (SAD) and the Normalized Cross Correlation (NCC) [4].
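The area-based matching just described can be made concrete with a short sketch. The following Python fragment only illustrates brute-force SAD matching along the image rows of a rectified pair; the window size, disparity range and function name are placeholders and are not values used in the paper.

```python
import numpy as np

def sad_disparity(left, right, max_disp=64, block=7):
    """Brute-force SAD block matching on a rectified stereo pair.

    left, right: 2D grayscale arrays of equal shape.
    Returns an integer disparity map (0 where no match was evaluated).
    """
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch_l = left[y - r:y + r + 1, x - r:x + r + 1]
            best_d, best_cost = 0, np.inf
            # Search along the same image row (epipolar line after rectification).
            for d in range(0, min(max_disp, x - r) + 1):
                patch_r = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(patch_l - patch_r).sum()   # SAD cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Practical implementations vectorize this search and add cost aggregation and sub-pixel refinement; the brute-force form is kept here only for readability.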
The second category consists of feature-based methods. They can find corresponding points within the whole images of the stereo pair. The Scale-Invariant Feature Transform (SIFT) and the Speeded Up Robust Features (SURF) are algorithms which assign a descriptor to each characteristic pixel. Corresponding points are then found on the basis of this description [4].
Problems occur when objects of the scene have a large monochromatic surface, so that characteristic points cannot be identified in order to find the correspondences. A similar situation can happen when the surface of the photographed object has a fine periodical structure in the horizontal direction. Although the algorithms for depth estimation are "best effort", meaning they choose the most probable variant of the depth map, such an inaccuracy or error cannot be detected or reduced.
Profilometry Scanning
Profilometry is a very common method for accurate surface topography measurement. It can use coherent light, but in macroscopic scanning systems, incoherent methods (such as Fourier profilometry, phase-shifting profilometry or moiré topography) are usually used.
In this work, phase-shifting profilometry is used because it is very easy to implement [12], [13]. It should be noted that, assuming ideal profilometry functionality, it is not important which implementation is applied for our purposes.
Incoherent methods are based on triangulation of the measured system. On its way from the source to the detector, the reflection of a particular ray from the measured surface carries single-valued information about the depth. The intensity of each ray (pixel) is modulated by a sample of the sine pattern which is projected onto the scene. The pattern is phase-shifted in time. This basic principle also yields an advantage which is utilized in the proposed method: in the case of a continuous surface, profilometry provides continuous information about the depth, i.e. a depth value for each visible pixel [14].
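Since the paper does not state how many phase steps were projected, the following sketch assumes the common four-step variant of phase-shifting profilometry; it only illustrates how the wrapped phase used later (Sec. 4) can be recovered from the captured images.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase from an assumed 4-step phase-shifting sequence.

    i0..i3: images captured with the projected sine pattern shifted by
    0, pi/2, pi and 3*pi/2.  Returns the phase wrapped to (-pi, pi].
    """
    return np.arctan2(np.asarray(i3, float) - np.asarray(i1, float),
                      np.asarray(i0, float) - np.asarray(i2, float))
```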
Professional Active Scanning System
To show the practical usability of the designed system, described in Sec. 4, it is useful to put its parameters into context with a commercial solution. For comparison, the commercial Kinect device with depth sensors by PrimeSense was used. Unfortunately, the producers have not published details, but experimentally obtained parameters can be found in the report [15].
The sensor combines two methods of active scanning: structured light analysis and depth from focus. The first one is a triangulation method, just as profilometry is. Nevertheless, the classical profilometry approach codes information by a specific pattern to identify the position of each projected scanning point, whereas Kinect structures the infrared light into speckles of randomly spread points. The information about the correspondence between projected and observed light spots has to be added in another way.
Depth from focus is one of the classical monocular cues of depth; it rates the blurring of an image projected outside the focus plane [16]. The producer PrimeSense uses astigmatic lenses for the structured light projection. This solution is based on the change of the geometrical parameters of the projected spots along the depth dimension. The combination of the mentioned methods joins the high accuracy of structured light scanning on continuous surfaces of scanned objects with a robust approach to the detection of their mutual position.
Theoretical Achievable Accuracy of the Described Methods
In this section, first, the term depth map accuracy is defined. Then the interval of depth values to which the true depth value of a particular pixel belongs is discussed. In this section, attention is focused on the accuracy of the depth estimation of individual pixels, disregarding the specific depth profile of the scene as a whole. An example of particular depth maps obtained by various methods is discussed in the following section.
Depth Obtained from a Stereo Pair
In the following explanation, the passive method with full-pixel accuracy is assumed. In modern algorithms using n-sub-pixel accuracy [17], the final depth error intervals could be reduced n-times.
Figure 1 shows the corresponding pixel P which is viewed by the left and right camera. Both cameras have a finite horizontal resolution h_r. The transformation of a pixel's width to the y-depth object plane is d_p. It can be seen that the real corresponding point, which is sampled as pixel P, could lie anywhere in the Δy interval. The formula for the width of the pixel d_p at a particular depth y follows from the geometry in Fig. 1. Perfectly matched inner parameters of the cameras and parallel optical axes are assumed. Geometrical parameters such as the cameras' stereo base d, the horizontal viewing angle and the depth value y are defined (see Fig. 1). The depth uncertainty Δy can then be calculated from these quantities. The practical graphical interpretation of the previous equations is presented in Fig. 2.
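The pixel-width and depth-uncertainty relations referenced above did not survive extraction. Under the stated assumptions (parallel optical axes, horizontal resolution h_r, stereo base d, and a horizontal viewing angle denoted here as α, a symbol of our choosing), the standard forms would be the following; the paper's exact notation may differ.

```latex
% Assumed standard stereo-geometry forms, not the paper's verbatim equations.
d_p = \frac{2\,y\,\tan(\alpha/2)}{h_r}, \qquad
\Delta y \approx \frac{2\,y^{2}\tan(\alpha/2)}{d\,h_r} = \frac{y\,d_p}{d}
```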
Depth from Profilometry
Profilometry scanning with sine phase-shifted sets of patterns is quite a simple method, but it has an essential disadvantage compared to alternative patterns: an embedded ambiguity of depth. Figure 3 demonstrates the mentioned problem. Black lines illustrate light rays for a particular phase of the projected pattern. From the camera point of view, it is not possible to distinguish from which of the green planes the light has been reflected. Examples of four such planes are depicted in Fig. 3. Their distance corresponds to the period of the projected pattern. In other words, the periodicity of the projected pattern results in depth ambiguity: the same depth information is assigned to objects at a particular distance y and also at (y − Δl).
The function (3) maps the phase shift between the measurement pattern projected on the reference plane and the pattern projected on the observed surface to the distance h between the reference plane and the observed surface. For a phase shift equal to 2kπ, the following formula expresses the dependency of the depth ambiguity interval Δl on the parameters of the profilometry capture system, i.e. the period of the measurement pattern p, the distance of the camera and the projector focal points d′, and the distance between the camera and the reference plane l. Figure 4 shows this dependence for l = 2 m and p = 1·10⁻² m.
Professional Solution Application
The analysis of Kinect parameters is not straightforward, because the depth range, linearity and resolution can be influenced by the specific software variant, even if the same hardware is used. The hardware specifications [15], [18] define the sensor's nominal depth range from 0.8 m to 3.5 m. However, from practical applications, it is obvious that the sensor works from 0.5 m up to 15 m under specific conditions [18]. The same problem applies to the depth resolution, which is declared as 1 cm at a 2 m distance.
The depth quantization step q as a function of the depth y has been found in [15] as approximately q(y) = 2.73·10⁻³ y² + 7.4·10⁻⁴ y − 5.8·10⁻⁴ (all quantities in meters). The quantization step is a parameter which is comparable with the depth uncertainty Δy in the case of the depth from a stereo pair. The theoretical accuracies of the mentioned methods are compared in Fig. 5.
The Proposed Procedure: Combination of Two Methods
This section describes the implementation of the proposed combined system. The basic idea is the combination of the two above-mentioned methods. The static scene is captured using each of them and the information is combined to improve the relevance of the final depth map. All the objects are assumed to be illuminated both by the ambient light and by the measurement pattern.
A flowchart of the proposed procedure is given in Fig. 6. Phase unwrapping is the most difficult step of the active scanning method. The output of the block "Calculation of wrapped phase" is the phase structure within the range −π to π, in which wraps (rapid phase jumps by 2π) occur. We adopt the method Unwrapping via Graph [14] for unwrapping. However, the algorithm failed because a rapid change of phase occurs too often in the shadow regions. Therefore, first, shadows must be detected and their influence eliminated.
Shadow Detection
For the shadow detection, a formerly presented algorithm is used [19]. Its flowchart is shown in Fig. 7. The input "Stereo depth map" carries information about the topography obtained from the stereo pair and the 2D image of the captured scene.
In the L*a*b color space, the background of the scene is thresholded and Suspicions for shadow (S_S) are found. The shadows are then excluded in the areas of objects. Suspicions for objects (S_O) are detected from the smoothed depth map.
In the next step, the data from both images (S_O, S_S) are combined. The basic assumption says that a pixel cannot be simultaneously assigned to the foreground and to the shadow, because none of the objects is hidden in a shadow. In accordance with this assumption, the assignment of each pixel to a shadow region is confirmed as expressed by the pseudo-code sketched below. In the final step, small disturbing artifacts are removed by morphological operations and the MATLAB function bwareaopen. In the resultant shadow map of the scene, pixels belonging to shadow regions are labeled by logic 1 values.
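The pseudo-code referenced above is missing from the extracted text. The sketch below merely encodes the stated rule (a pixel is confirmed as shadow only if it is a shadow suspicion and not an object suspicion), followed by a bwareaopen-like removal of small components; the minimum area and all names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label

def confirm_shadows(s_s, s_o, min_area=50):
    """Combine shadow suspicions (s_s) and object suspicions (s_o).

    A pixel is confirmed as shadow only if it is suspected to be shadow
    and is NOT suspected to belong to a foreground object.  Small
    connected components are then removed, similarly to MATLAB's bwareaopen.
    """
    shadow = np.logical_and(s_s, np.logical_not(s_o))
    labeled, n = label(shadow)
    for i in range(1, n + 1):
        component = labeled == i
        if component.sum() < min_area:      # drop small disturbing artifacts
            shadow[component] = False
    return shadow
```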
Combination of Depth Maps
The main part of the proposed procedure consists in combining the two obtained depth maps. Inputs to this algorithm are the depth map achieved by the stereo method, the depth map obtained by the phase-shifting profilometry, the shadow map and the original image of the scene.
The process of the combination is based on the properties of each depth map. The stereo depth map provides good information about the mutual positions of objects, but the profile of each object is inaccurate. On the contrary, the profilometric depth map has a precise profile of each object but does not provide the relationship among the positions of the objects. Therefore, it is necessary to obtain the profile of each object from the profilometric depth map and to transform it to the range given by the stereo map.
Firstly, individual objects in the image must be found. For this purpose, the shadow map and the profilometric depth map are used. This step is based on the assumption that an object belongs to the foreground, hence its values in the depth map will be high. Concurrently, objects are assumed not to stand in the shadow. In consequence, we use the condition combining these two requirements. A pixel which satisfies this condition belongs to an object and its value in the new matrix Object is logic 1.
In the following step, objects are registered (labeled). The registration of an image means that for each object, the linked pixels are defined. As a result, the matrix Class_objects (1920 × 1080) is obtained, whose elements are integers i = 1, 2, …, n defining the assignment of each pixel to one of the n registered objects. In the next step, the range of the depth of each object is found. All the pixels belonging to the object are sorted according to their depth. Subsequently, the upper and lower thresholds (th_low, th_up) are determined as the values corresponding to 95 and 5 percent of the depth of the object. This way, the range of depth of each object in the stereo depth map is obtained. This range is used as the range of the object's depth in the final depth map DM. The minimum and maximum depths of each object in the profilometric depth map are also found (min, max). Thus, each of the n different objects is characterized by the parameters (th_low, th_up, max, min). Then, the profilometric depth map DM_prof is transformed separately for each object, as sketched below.
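The transformation formula itself did not survive extraction; the following sketch shows the per-object rescaling implied by the description, i.e. the profilometric profile linearly mapped into the stereo-derived range [th_low, th_up]. Function and variable names are illustrative, and whether the 5th or the 95th percentile plays the role of th_low depends on the depth convention.

```python
import numpy as np

def combine_depth_maps(dm_stereo, dm_prof, class_objects):
    """Per-object fusion of the stereo and profilometric depth maps (sketch).

    class_objects: integer label image (0 = background, 1..n = objects).
    For every object, the accurate profilometric profile is linearly
    re-scaled into the depth range that the stereo map assigns to it.
    """
    dm = dm_stereo.astype(np.float64).copy()       # background keeps the stereo depth
    for i in range(1, int(class_objects.max()) + 1):
        mask = class_objects == i
        if not mask.any():
            continue
        th_low, th_up = np.percentile(dm_stereo[mask], [5, 95])
        p_min, p_max = dm_prof[mask].min(), dm_prof[mask].max()
        if p_max > p_min:
            scale = (th_up - th_low) / (p_max - p_min)
            dm[mask] = th_low + (dm_prof[mask] - p_min) * scale
        else:                                      # degenerate (flat) object profile
            dm[mask] = 0.5 * (th_low + th_up)
    return dm
```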
Implementations and Verification of the Idea
To verify the depth map accuracy improvement, a laboratory setup was prepared. The positions of three simple geometrical objects (two cylinders made of paper, one sphere made of white glass), of the cameras, and of the projector in the static scene are shown in Fig. 8.
The photo of the scanning equipment is taken from the perspective of the 3D scene (see Fig. 9). Starting from the right, a projector for the sinusoidal pattern projection, a stereoscopic camera with a reduced stereo base, an active Kinect camera and, finally, a PC to record and process the captured signals can be seen.
One of the possible principles of combining the two scanning methods is plotted in Fig. 10. The DLP data projector, which projects the measurement pattern by unpolarized light, is complemented with a linear polarizing filter. This filter is oriented vertically. Besides the projector, the scene is illuminated by another source of light (a spotlight). A second polarization filter with horizontal polarization is added to the left objective of the stereo camera.
To actively scan and to record a stereo pair simultaneously, the measurement pattern can be projected by the projector and captured by the right camera, in which the light intensity of this pattern is added to the background intensity of the spotlight. The left camera then captures just the scene without the pattern. During profilometry scanning, by means of signal processing, it is possible to separate the measurement pattern from the ambient light. This filtered image forms the second image of the stereo pair. This system of measurement pattern separation has been tested and works fairly well for metal objects or objects with metalized surfaces. However, most dielectric surfaces do not retain the polarization of the reflected light.
A wider practical application of such a system could be expected with near-infrared (NIR) light projection. The NIR projector, nowadays readily available, produces the measurement pattern. Its reflection from the scanned objects, with the ambient visible light added, is captured by the right camera. The left camera has an IR filter installed so that it is insensitive to the measurement pattern. The main motivation for the described methods of measurement pattern separation is movement in the scene. For static scenes, a time multiplex is sufficient for the separation of the measurement pattern and the image itself. In such a way, the results presented below have been collected.
The True Depth Map
A comparison of the results of the proposed combined method and the results of the individual sub-methods is described in the following. For rating the efficiency of the new method, the true (exact) depth map is needed.
For this purpose, an experimental scene has been designed and its accurate 3D model (with a potential deviation of less than 0.05%) has been prepared in MATLAB. Based on the known intrinsic and extrinsic parameters of the real camera, the perspective projection onto the Camera 1 sensor plane has been computed (Fig. 11 a). The true depth map (Fig. 12 a) has also been calculated from the precise 3D model, as the distance from the modeled object surface to the virtual camera's focal plane. The achievable accuracy of the 3D model and of the true depth map derived from it is the main reason why quite a simple scene has been chosen for this experiment.
Metrics for Depth Map Error Estimation
In general, the depth map is a function which maps the pixels of the image onto a 3D surface (generally discontinuous), where R is the space in the coordinate system of the original image, in which the original image and also the depth map are placed. Output values of the function are depth values for a particular camera setup. These values should be in units of length, expressing the distance from the camera focus plane orthogonally to the mapped point on the object surface. This particular depth map is referred to as the absolute depth map (DM_A). For later processing (e.g. compression, etc.) and TV broadcasting, it is not important to preserve the information about the absolute depth and scale. All realizations DM_R = a·DM_A + b with arbitrary real coefficients a, b can be considered as true depth maps (8), where DM_R is the depth map in a relative scale.
The proposed method has an inbuilt segmentation into n blocks with continuous surfaces (objects, see Sec. 4.2) and the background region.
Profilometry scanning provides a set of depth maps DM_i of each object's surface, while the coefficients a_i, b_i are obtained from the information provided by the conventional depth map estimator (from stereo pairs). The resulting map combines information from the two methods, and their inaccuracies influence the final values. The described combination of the methods assumes the condition f_1 = f_2 = … = f_n = f_true, where the functions f_1 … f_n are just windowed parts (for the sets R_1, …, R_n) of the true depth map mapping function f_true. The error of this assumption is caused by the error of the profilometry scanning. The second source of error is the premise that the stereo pair matching provides true information about the minimum and maximum depth value for each object even if it does not have enough information about the surface. As shown further, this claim is not true, because both sets of coefficients (a_i, b_i, i = 1, 2, …, n) are set on the basis of an inexact prerequisite. Both errors multiply, which is one of the disadvantages of the proposed combination of methods (Sec. 4).
We have used two objective methods for the depth map evaluation. An objective method means that the influence of incorrectness in the depth map on the stereo/multi-view Quality of Experience (QoE) was not determined. In the first method, the mean values of depth for each segment R_i in the evaluated depth map are compared with the true depth map. In the second method, the minimum mean square error (MMSE) between the evaluated depth map (DM_E) and the true one (DM_T) is found, as sketched below.
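The MMSE expression referenced above is missing from the extracted text. The sketch below implements the two metrics as described, assuming that the MMSE is minimized over the affine freedom a, b allowed by the relative-depth convention of Eq. (8); this assumption is ours, not a statement from the paper.

```python
import numpy as np

def mean_depth_error(dm_eval, dm_true, class_objects):
    """First metric: per-object mean depths compared with the true map."""
    errors = {}
    for i in range(1, int(class_objects.max()) + 1):
        mask = class_objects == i
        errors[i] = dm_eval[mask].mean() - dm_true[mask].mean()
    return errors

def mmse(dm_eval, dm_true):
    """Second metric: mean square error between the evaluated and true depth
    maps, minimised over the affine scaling a, b permitted by Eq. (8)."""
    x = dm_eval.ravel()
    t = dm_true.ravel()
    a, b = np.polyfit(x, t, 1)          # least-squares affine fit (a*x + b ~ t)
    return np.mean((a * x + b - t) ** 2)
```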
Alternative Finding of Mean Depth Value
The proposed system for depth map generation, as described above, is very sensitive to the accuracy of each object's extreme depth map values.
The first improvement which can suppress this drawback is the usage of the 5% and 95% quantiles of the depth value distribution for the calculation of (a_i, b_i, i = 1, 2, …, n) in (10), instead of the negative and positive peak values, respectively. This approach filters out extreme values which can occur due to noise on edges or due to inaccurate object segmentation.
If the camera and the projector are focused to infinity (see Fig. 3), the multiplicative factors of the depth map's segments can be assumed to be the same for all sets R_i in profilometry scanning, i.e. a_1 = a_2 = … = a_n. Then there is no need to search for the multiplicative factors and only the additive factors b_i need to be found from the mean depth values. In this case, the errors surely increase with decreasing focal lengths and also with differences in b_i.
The experiments have shown that better data on the mean depth are needed than those provided by the conventional implementation of depth from stereo pair matching (by SW Triaxes Stereo Tracker [20], [21], Fig. 12 c). That is why the horizontal parallax of corresponding points has been used to estimate the mean values of depths. The Scale-Invariant Feature Transform (SIFT) is a well-known method which provides 128-dimensional features for specific image points. These features are invariant or "almost" invariant to many geometrical image transformations and they are also usable for finding corresponding points in a stereo pair.
In this work, the implementation from the free MATLAB toolbox described in [20] was used. The corresponding points of both halves of the stereo pair, lying on the objects' surfaces and simultaneously having a high probability of correspondence, are shown in Fig. 11 d).
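The original work used a MATLAB SIFT toolbox; purely as an illustration, the sketch below uses OpenCV's SIFT implementation to estimate a mean horizontal parallax per segmented object. The ratio-test threshold, the vertical tolerance and all names are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def mean_disparity_per_object(img_left, img_right, class_objects, ratio=0.75):
    """Mean horizontal parallax per object from SIFT matches (rectified pair).

    img_left, img_right: grayscale uint8 images; class_objects: label image.
    """
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(img_left, None)
    kp_r, des_r = sift.detectAndCompute(img_right, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_l, des_r, k=2)
    disparities = {}
    for pair in matches:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance >= ratio * n.distance:        # Lowe's ratio test
            continue
        xl, yl = kp_l[m.queryIdx].pt
        xr, yr = kp_r[m.trainIdx].pt
        if abs(yl - yr) > 1.5:                      # keep (almost) horizontal pairs
            continue
        obj = int(class_objects[int(round(yl)), int(round(xl))])
        if obj > 0:                                 # ignore background matches
            disparities.setdefault(obj, []).append(xl - xr)
    return {obj: float(np.mean(d)) for obj, d in disparities.items()}
```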
Comparison of Various Methods for Depth Map Generation
Examples of the resulting depth maps can be seen in Fig. 12. As mentioned above, the first map (Fig. 12 a) is the true depth map which has been computed as the perspective projection of the 3D model (Sec. 5.1). Figure 12 b) presents the depth map provided by the professional device Kinect. This depth sensor maps a 16-bit dynamical range of depth to three 8-bit color components. For further processing, only the 3 parts of the dynamic range in which the surfaces of the objects lie are used.
The depth map from the stereo pair matching (provided by SW Triaxes Stereo Tracker [20], [21]) is shown in Fig. 12 c). The figure illustrates the shortcomings of this sub-method in estimating depth, caused by problems with finding correspondences. The obtained values are acceptable around edges, but the algorithm obviously fails in almost all monochromatic areas. Unfortunately, this failure is not cured by the combination with profilometry scanning, and the error of the passive method manifests itself also in the final depth map. The result affected by these dynamic range errors can be observed in Fig. 12 d). As depicted in Fig. 12 e), a much better depth map is obtained if profilometry scanning is combined with parallaxes from SIFT. The corresponding points have been chosen from the SIFT significant points on the basis of three criteria. Firstly, the pairs with the minimal Euclidean distance of their SIFT feature values have been chosen as corresponding points. Secondly, the corresponding points have been chosen according to the fact that they have to belong to the same set R_i, and thirdly, according to the fact that the straight lines for all corresponding point pairs should be parallel (in the case of rectified images they are parallel and exactly horizontal).
Tab. 1. Relative mean values of depth for 3 objects in the depth map.
Figure 12 f) is presented just for comparison.The depth map of the scene is obtained from the profilometry scanning combined with accurate information about the objects' mean depths (ideal coefficients b i applied).
Final Score
In this subsection, the benefits of the proposed system are demonstrated by comparing its results with those of individual methods.Furthermore, we also show competitiveness with the commercial depth sensor Kinect.
Table 1 compares the ratio among the mean depths of the three scanned objects R_1–R_3 (colored red, green and cyan in the 3D model image, shown in Fig. 11 a, b). The biggest deviation from the true depth map can be observed if the commercial implementation of stereo pair matching provided by SW Triaxes Stereo Tracker [20], [21] is applied. The algorithm is not able even to order the objects correctly. The Kinect device also has a problem with this basic task. This is due to the intentional setting for scanning within a very small part of its dynamic range. However, it has to be mentioned that its error is smaller compared to the map from the stereo pair. Predictably, the proposed combination of depth from stereo pairs with profilometry suffers from the same problem. The results from the parallax of corresponding points provided by SIFT with subpixel accuracy are presented in the last row of Tab. 1. These results are the best estimation of the mean depths of the objects among the tested primary methods. Table 2 sums up the MMSE values of the particular depth maps relative to the true one. The first and second rows are calculated from the maps resulting from the stereo pair matching and from Kinect. The last three rows represent the errors in the case of depth map combinations. An ideal mapping of the depth maps of the segments obtained from profilometry scanning to the true depth dynamical range is performed and described by the error value in the third row. This value also determines the minimum achievable error of our setting of the profilometry scanning system. The fourth row of Tab. 2 gives the results obtained from the original version of the proposed method (Sec. 4) combining the commercial implementation of stereo pair matching provided by SW Triaxes Stereo Tracker [20], [21] with profilometry scanning. As explained above, this combination suffers from the vague inputs resulting from the stereo pair matching. The last value, in the fifth row of Tab. 2, refers to an alternative source of the mean depth value obtained by SIFT (see Sec. 5.3). This result is obviously the best and demonstrates the contribution of the proposed combination of individual sub-methods introduced in this paper.
Conclusion and Future Work
This paper describes in detail the combination of two depth map construction methods and compares this combination with the results of the commercial depth sensor Kinect. Our method has been tested in a laboratory environment to prove that it gives better results than the partial methods and that it is competitive with a contemporary depth camera. From various scanned scenes, a simple one has been chosen to demonstrate the obtained results, to compare them mutually and also with exactly defined real data. A significant improvement has been achieved by the proposed combination of the profilometry scanning with the stereo pair matching based on SIFT.
In our future work, we would like to modify the system parameters for cases with movement within the scene and a moving camera. For dynamic scenes, the time multiplex of the depth scanning method should be replaced by one of the other mentioned multiplexing methods. Near-infrared projection of the measurement pattern seems to be promising. It also solves the problem with ambient light conditions and shifts the proposed combined method from the laboratory towards practical usage. It could work sufficiently well in almost the whole dynamic range of the used cameras. Nevertheless, in practice, the price of a device would definitely be an important aspect. So, to avoid the utilization of the NIR projector, a system with dichroic filters in the visible light range could be tested to separate the measurement patterns from the ambient light. Perhaps also time-multiplexed scanning methods could be used even in scenes with moving objects or cameras, if the scanning rate is increased sufficiently. In any case, further analyses, computations and testing are planned to refine the proposed combined method of depth map estimation, to adapt it to moving scenes, to judge its feasibility, its advantages and drawbacks under various conditions, and, last but not least, to take into account possible economical aspects of its practical applicability.
Fig. 1. The maximum theoretical accuracy of the depth value calculated from a stereo pair in relation to the cameras' parameters and configuration.
Fig. 2. Maximum theoretical accuracy of the depth estimation from a stereo pair in the case of variable cameras' horizontal resolution, viewing angle and stereo base (viewing angle 30°, d = 6.3·10⁻² m). It demonstrates how the course of the function Δy = f(y) depends on the mentioned parameters. Particular values in our examples are h_r = 1920 pix (704 pix), viewing angle 30°, d = 63 mm.
Fig. 3. The ambiguity of the depth representation caused by the periodical repetition of phase-coded depth information.
Fig. 6. The flowchart of the proposed procedure: active scanning profilometry with depth from a stereo pair.
Fig. 8. The ground plan of the experimental scene.
Fig. 11. The image of the scene: a) captured by the left half of the stereo camera, b) captured from the model in MATLAB, c) after removal of the shadow, d) with highlighted corresponding points in the stereo pair.
Fig. 12. The resulting depth maps: a) the true depth map, b) Kinect, c) stereo pair matching, d) the proposed combination with the stereo depth map, e) the proposed combination with SIFT parallaxes, f) profilometry combined with the accurate mean object depths.
The first author was born in Sternberk, Czech Republic, in April 1985. He graduated from the Faculty of Electrical Engineering and Communication (FEEC), Brno University of Technology (BUT), in 2010. The fields of his interest include image processing, quality evaluation and photostereometric systems. Ladislav POLAK was born in Sturovo, Slovakia, in 1984. He received the M.Sc. degree in 2009 and the Ph.D. degree in 2013, both in Electronics and Communication, from the Brno University of Technology (BUT), Czech Republic. Currently he is an assistant professor at the Department of Radio Electronics (DREL), BUT. His research interests are Digital Video Broadcasting (DVB) standards, wireless communication systems, signal processing, video image quality evaluation and the design of subjective video quality methodologies. He has been an IEEE member since 2010. Tomas KRATOCHVIL was born in Brno, Czech Republic, in 1976. He received the M.Sc. degree in 1999, the Ph.D. degree in 2006 and the Assoc. Prof. position in 2009, all in Electronics and Communications, from the Brno University of Technology. He is currently an associate professor at the Department of Radio Electronics, Brno University of Technology. His research interests include digital television and audio broadcasting, its standardization, and video and multimedia transmission including video image quality evaluation. He has been an IEEE member since 2001.
Tab. 2. Minimum mean square error (MMSE) of the estimated maps relative to the true depth map. The last two rows demonstrate the influence of two different implementations of the passive method giving the mean depth. | 7,610.4 | 2016-09-15T00:00:00.000 | [
"Computer Science"
] |
Maximum Power-Point Tracking and Stall Control with Eddy Current Brake System on Small-Scaled Wind Turbines and its Application on Agricultural Harvesting
Article history: Received: 21 May, 2020; Accepted: 21 June, 2020; Online: 12 July, 2020. Abstract: This research aims to enhance the generated power of a small-scaled wind turbine using an eddy current brake system and the Maximum Power Point Tracking (MPPT) control method. We analyzed the behavior of the generated power and the power factor, with and without the MPPT control implemented by the eddy current brake system. Also, the feasibility of the system was investigated using different wind conditions, such as strong and calm wind conditions. The load data show different voltage responses of the system, since the load conditions depend on the day/night load pattern, weather conditions and soil moisture. Moreover, an analogical experiment on small-scaled wind turbine blade destruction is analyzed to determine the maximum penetration value of mechanical power, in order to retrieve an optimal angular velocity which provides the maximum possible power to the loads. At the same time, the emergency brake is operated when the angular velocity reaches the critical speed, in order to avoid destruction. In the simulation, we used real load data collected from a mango farm in Okinawa prefecture, Japan. The results were analyzed through simulations for the different wind conditions. At the end of the simulation, we verified that both the maximum power point control and the emergency control are activated correspondingly.
Introduction
As renewable energy generating equipment, small-scaled wind turbines are in rising demand to fulfill modern energy requirements. Small-scaled wind turbines have several merits, such as the small area needed for setting up the entire system, low-cost maintenance and easy assembly, which are advantages for household use [1]. Compared to conventional large-scale wind turbines, which produce up to 8 MW of power, small-scaled wind turbines produce a small amount of power. Therefore, small-scaled wind turbines are suitable for households and small-scaled greenhouses and farms.
To supply a stable electricity output, small-scaled wind turbines use several control methods, such as pitch control [2][3], yaw control and brake control [4][5][6]. Regarding brake control, research has mainly focused on brake pad-based systems for the safety of the wind turbine [7], while research on contactless brake systems is currently ongoing. In [8] we proposed to realize contactless angular velocity control in the emergency case by the eddy current method; however, optimal control was not considered. In this research, we focus on enhancing the power through maximum power point tracking (MPPT) using the eddy current brake system, and on using the eddy current brake system for emergency rotational control.
Since the wind speed is always unpredictable, it is a difficult task to generate the maximum power continuously and keep the generated power in a stable state. In this regard, MPPT control can be considered an important control method to generate the maximum power. Many types of MPPT control methods are used for generating maximum power, such as fuzzy logic controllers [9], adaptive control [10], etc. In this research, we use the eddy current brake as the main controller for the MPPT control. Here, the MPPT operation conveys the maximum possible power from the turbine to the generator; therefore, this operation has power enhancement characteristics. Meanwhile, an emergency brake is implemented as well to avoid any destruction or malfunction of the entire system, since the small-scaled wind turbine's angular velocity rises drastically during high wind penetration.
In this study, the mango greenhouse is a mango cultivating farm located in Okinawa, Japan. The mango plants are required to be grown in a controlled environment in order to provide the best conditions for cultivation. Also, the conditions are monitored using Internet of Things (IoT) technology. Therefore, the farmer can analyze the real-time conditions of the mango plants, such as soil moisture, farm temperature, CO2 conditions, light conditions and humidity.
In this paper, first we explain about the structure of the proposed system including proposed brake system, control diagram, simulation block diagram. Next, we explain the mathematical modeling of the system. Afterwards, we describe the simulation and results. Finally, we describe the conclusion of our research. This research is an extended work of [11].
Blade destruction analysis
This section explains the analogical experiment of the blade destruction. We performed this experiment to analyze the blade destruction for different penetration values; in other words, to decide the safety margins of the small-scaled wind turbine. In Figure 1, the broken small-scaled wind turbine blade from the experiment is displayed. In equation (1), F_max means the maximum allowable force on the wind turbine blade and m means the mass of the turbine's blades. Also, ω represents the blade destruction angular velocity. Table 1 shows the specification of the blade. The blade is made of Fiber Reinforced Plastic (FRP). In Figure 2, point A stands for the maximum force safety margin. If the penetration exceeds point A, then the turbine's blades start to break. When the penetration value reaches nearly 28 kN (point D), the turbine's blades are destroyed. Therefore, to protect the turbine, we set point C as the primary safety margin, which has a penetration value of 2.5 kN. Point B marks the secondary safety margin, at which 4 kN is penetrated. For the secondary safety margin the angular velocity is 25 rad/s, and the angular velocity for the primary safety margin is 20 rad/s. Hence, for the simulation purpose, we set the primary safety margin angular velocity as ω = 20 rad/s.
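As a small illustration, the following sketch only encodes the safety thresholds quoted above (20 rad/s primary, 25 rad/s secondary); the underlying force-speed relationship comes from the authors' experiment and is not reproduced here, and the function and constant names are our own.

```python
# Encodes only the safety thresholds stated in the text; the force-speed
# curve itself comes from the authors' blade-destruction experiment.
PRIMARY_MARGIN_RADS = 20.0     # point C, ~2.5 kN penetration
SECONDARY_MARGIN_RADS = 25.0   # point B, ~4 kN penetration

def rotation_status(omega):
    """Classify the current angular velocity against the stated safety margins."""
    if omega < PRIMARY_MARGIN_RADS:
        return "normal operation"
    if omega < SECONDARY_MARGIN_RADS:
        return "primary margin exceeded - apply emergency brake"
    return "secondary margin exceeded - risk of blade destruction"
```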
Structure of the system
It is necessary to consider the electrical and mechanical aspects of the proposed system structure. For the mechanical section, we consider the brake system controlling structure. Here, the emergency rotation control eddy current brake system and the MPPT control eddy current brake work independently, as illustrated in Figure 3. They always compare the reference value of the angular velocity ω_ref and the angular velocity output ω. When the controller receives the angular velocity of the turbine, it sends its signal to two servo motors, which operate as the emergency rotation control and MPPT control units, to place the magnets near the copper plates and thereby control the rotation.
When the wind turbine rotates, the signal controller receives the angular velocity measurement and sends the signals to the emergency rotation control eddy current brake and the MPPT control eddy current brake. According to the current rotation situation, the two above-mentioned brake systems control f_emergency(ω), which corresponds to the emergency rotation control eddy current brake, and f_MPPT(ω), which corresponds to the MPPT control eddy current brake. This process is illustrated in Figure 4. Figure 5 shows the entire block diagram of the small-scaled wind turbine system for realization in hardware. Here, the 1/s symbol represents an integrator. In this system, the generator is considered a DC generator. Figure 6 displays the equivalent circuit of the DC generator, including the protection diode, the load and the battery.
Regarding the electrical aspect of the proposed system, the important conditions are explained as follows. When the battery charge becomes low, the additional current from the generator charges the battery. On the other hand, if the generator is unable to supply the demanded current, then the battery supplies the required current to the load. If the battery is full and the required power is supplied to the load, then the turbine rotation is reduced using the eddy current brake system. A minimal sketch of these rules is given below.
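The power-flow rules just described can be summarized by the following illustrative sketch; the function name, the state-of-charge representation and the textual actions are assumptions made only for readability.

```python
def power_flow_action(i_gen, i_load, battery_soc, soc_full=1.0):
    """Qualitative power-flow rules of the proposed system (illustrative only).

    i_gen, i_load: generator and load currents [A]; battery_soc in [0, 1].
    """
    if i_gen > i_load and battery_soc < soc_full:
        return "charge the battery with the surplus current"
    if i_gen < i_load:
        return "battery supplies the missing current to the load"
    if battery_soc >= soc_full and i_gen >= i_load:
        return "reduce the turbine rotation with the eddy current brake"
    return "balanced operation"
```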
Structure of eddy current brake system
Electromagnetic brake systems can be seen in modern applications such as Maglev train brake systems, gym instruments, elevators, etc. Thanks to its contactless braking operation, the eddy current brake system can be considered an efficient brake system [12].
Here, let us describe the eddy current brake system specifically. When the rotor shaft rotates at an angular velocity ω, the magnetic flux between the two magnets changes due to the interaction with the copper plate. Due to this magnetic flux change, an eddy current is induced in the copper plate according to Faraday's law of induction [13]. Due to the induced current, an opposing force, or in the rotational case a torque, is generated against the rotation. This torque acts as the eddy current brake torque. In Figure 7, q_m means the magnitude of the magnetic charge on the magnetic poles and q_c means the magnitude of the induced magnetic charge on the copper plate. The induced voltage is expressed by the following equation, which is based on Faraday's law of induction: ε = −dφ/dt, where ε stands for the electromotive force and φ for the magnetic flux.
Here, R_c is the resistance of the copper plate and I_eddy is the induced eddy current in the copper plate. The relation between R_c and I_eddy is shown in (3), which is based on Ohm's law: ε = R_c I_eddy (3). From the above equations, we can write down the eddy current as I_eddy = ε / R_c = −(1/R_c)·dφ/dt.
Due to the occurrence of the above current, the brake system is activated. The next section describes the dynamics of the whole system.
Mathematical modeling of the system
This section introduces the mathematical modeling of the entire system. Therefore, it is important to consider the dynamics of the mechanical and electrical parts of the proposed system. The parameters used for the mechanical section are displayed in Table 2 and those for the electrical section in Table 3, along with the combined electrical and mechanical equations.
Equation (5) displays the summation of the inertia moments of the wind turbine and blades
J = J_ω + J_G (5). The conventional mechanical and electrical dynamics of the system are displayed in (6), whose left-hand side is J dω(t)/dt. This formula stands for controlling the brake plate rotation in a contact manner. However, in [8] we proposed a contactless brake system, which is shown in the following equation.
Here, µ stands for the permeability of free space, and q_m and q_c stand for the magnitudes of the magnetic charges of the magnet and of the copper plate, respectively. This formula is based on the Gilbert model of magnetic force [14].
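For reference, the general Gilbert-model force between two magnetic charges has the following form; the exact constants and geometry used in the authors' Eq. (7) may differ, so this is only a hedged reminder of the underlying relation, with r denoting the separation (air gap).

```latex
% General Gilbert-model attraction/repulsion between two magnetic charges.
F = \frac{\mu}{4\pi}\,\frac{q_m\,q_c}{r^{2}}
```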
Next, we are going to model the electrical part of the system. In the following table the parameters are described.
Generated power is displayed in (8).
The electrical torque of the generator is displayed in (9), T_e = P_e / ω (9), where e stands for the induced voltage and i for the armature current. Equation (10) displays the battery electrical charge dynamics, starting from its initial capacity. Conversely, the current exchange among the battery, the turbine and the loads can be retrieved as the derivative of equation (10), which gives (11). Therefore, whenever equation (11) becomes zero, the consuming and generating currents are balanced; if i_R is greater than zero, the system is simultaneously charging the battery and providing current to the loads; conversely, if i_R is less than zero, then the battery alone provides the power to the loads, which is the discharging mode.
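A form of the battery charge dynamics consistent with the description of Eqs. (10)-(11) would be the following; the symbols i_gen and i_load are our placeholders for the generating and consuming currents, and the exact notation of the paper may differ.

```latex
% Assumed form consistent with the textual description of Eqs. (10)-(11).
Q(t) = Q_0 + \int_0^{t} i_R(\tau)\,d\tau, \qquad
\frac{dQ(t)}{dt} = i_R(t) = i_{gen}(t) - i_{load}(t)
```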
From equivalent circuit of DC generator in Figure 6, the electrical dynamics of the system is obtained as follows.
The purpose of placing a diode at the output terminal of the DC generator is to prevent the DC generator from entering motor mode when there is no wind penetration, since the battery is connected to the generator's terminals. Thus, the diode characteristic should be considered as well. The voltage drop of the diode is shown in the following equation.
Here, I_d corresponds to the armature current, as given by the following expression (14).
Next, the induced electromotive force (emf) of the generator is displayed in (15).
Here, K e stands for Induced emf constant.
Up to here, the electrical model was described. From the next equation on, the mechanical parts, including the brake terms, are expressed.
The following equation shows the mechanical torque.
The equation below is based on the conventional brake method; note that C_p is variable with respect to the tip speed ratio.
Then we can reformulate the above equation for the proposed method by adding the eddy current brakes for the emergency brake and the MPPT control, as shown in (18).
In the above equation, f_MPPT and f_emergency have different values, and f_emergency is larger than f_MPPT, since the emergency brake requires more braking force to control the over-rotation of the turbine than the force used to realize MPPT.
The task of equation (19) is to work as a switch that keeps the output power at its maximum, which means keeping the angular velocity ω in the condition ω < ω_ref, i.e. in the MPPT operation status. However, if ω exceeds ω_ref (ω > ω_ref), then the output power is no longer at the maximum power state. Nevertheless, the system still generates power, since ω has not yet reached ω_lim, which is the safety margin of the wind turbine. If this situation occurs, the emergency brake starts to reduce ω in order to keep the state ω < ω_lim. Therefore, the task of equation (20) [15] is to work as a switch that enforces the condition ω < ω_lim. For compatibility with the small-scaled turbine, we designed the mathematical expression of C_P as given below in (22); the C_P–λ relationship is shown in Figure 8.
Here, the condition on the coefficients is a > 0 and β > α. As shown in Figure 7, the output terminal of the generator is connected to a battery and the battery is connected to the loads in parallel. Therefore, in order to verify the battery voltage, we have designed a battery capacity charging model from which the battery voltage can be obtained. Equation (24) represents the characteristic of the accumulated consumed current.
Notice that the load voltage and the battery voltage are identical, since, as mentioned, they are connected in parallel.
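Before turning to the simulations, the switching behavior attributed to Eqs. (19)-(20) can be summarized in a short sketch. The proportional gains and the function name below are placeholders; only the switching conditions (ω above ω_ref for the MPPT brake, ω at or above ω_lim for the emergency brake) are taken from the text.

```python
def brake_commands(omega, omega_ref, omega_lim, k_mppt=1.0, k_emergency=5.0):
    """Illustrative switching of the two eddy current brakes.

    The MPPT brake acts only while omega exceeds the optimal reference
    omega_ref; the emergency brake acts only when omega reaches the
    structural limit omega_lim.  The gains are placeholders, not the
    paper's Eqs. (19)-(20).
    """
    f_mppt = k_mppt * (omega - omega_ref) if omega > omega_ref else 0.0
    f_emergency = k_emergency * (omega - omega_lim) if omega >= omega_lim else 0.0
    return f_mppt, f_emergency
```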
Simulation and results
In this section, we conduct the simulation in two steps. First, we simulated the entire system for different average wind velocities of 7 m/s, 14 m/s, 21 m/s and 28 m/s, respectively, and plotted the stall control charts for these velocities. We did this step to analyze the wind turbine's stall control behavior for different wind velocities. Afterwards, we plotted the mechanical power (Pm) and the power coefficient of the system for the different conditions. In the second step, we simulated the entire system for calm-day and storm-day wind conditions over one day and analyzed the behavior of the outputs of the system.
Simulation conditions
This section explains the conditions of the simulation and system we used in order to conduct the simulation. The load data for the system is applied using real mango greenhouse system in Okinawa [16]. Load pattern is displayed in Figure 9. This load consists of Light emitting diode (LED) lights, Compact Fluorescent Light (CFL) lights, Electrical Fans. According to the Figure 9 the highest load value is 12.8A. Here, the sample time for the simulation is 0.01s. MATLAB/Simulink software platform is used for the simulation. For the simplicity, simulation time is set to be 100s from Figure 10 to Figure 21. From Figure 22 to Figure 33, simulation time is one day long. Values of the parameters for simulation are displayed specifically in Table 4.
Results
Figures 10 to 13 display the stall control charts for 7 m/s, 14 m/s, 21 m/s and 28 m/s, respectively. According to these stall control figures, the emergency control of the wind turbine angular velocity is valid up to a velocity of 28 m/s. Therefore, the eddy current brake system can control the over-rotation of the wind turbine up to 28 m/s. Table 5 shows ω_ref, which is the optimal angular velocity to realize the maximum power operation, and ω_stall, which is the actual angular velocity output. As is obvious from the table, the output angular velocity for the input wind is lower than the optimal angular velocity, since during high wind penetration the emergency brake is activated. Specifically, compared to ω_ref, the output value becomes lower as the wind velocity increases. That means that when the input wind velocity is 28 m/s and the value is 25.199, the MPPT control is stopped because ω > ω_ref; however, the wind turbine is still operating using the emergency eddy current brake, since ω remains near that value. Next, Figures 14 to 17 show the behavior of the instantaneous angular velocity deviation Δω between ω_ref and ω for 7 m/s, 14 m/s, 21 m/s and 28 m/s average wind velocities, respectively. According to Figure 18, when the average wind velocity is 7 m/s, the mechanical power with the MPPT and emergency brake has a higher value compared to the case without the MPPT and emergency brake. Likewise for Cp: when the MPPT and emergency brake are active, it has a higher value, whereas without the MPPT and emergency brake control the value is low.
Figure 19, Figure 20 and Figure 21, which correspond to average wind velocities of 14 m/s, 21 m/s and 28 m/s, show the same comparison results as mentioned above for 7 m/s. Therefore, it is obvious that when the system is under the controlled state for MPPT and over-rotation, the mechanical power of the wind turbine and the power factor (Cp) have their maximum values. We have artificially created two wind patterns for the simulation process. First, we simulated the calm-day condition and then the storm-day condition. The calm-day and storm-day wind patterns are shown in Figure 22 and Figure 23. In the first phase of the calm day, the velocity gradually increases from 7 m/s to 14 m/s. Afterwards, it decreases gradually from 14 m/s to 10 m/s. Then the wind velocity remains stable at 10 m/s for a few hours. Afterwards, the velocity decreases to 8 m/s. Then it increases from 8 m/s to 12 m/s, and again from 12 m/s to 14 m/s gradually. Finally, the velocity decreases from 14 m/s to 8 m/s. For the calm day the highest wind speed is approximately 14.3 m/s, and for the storm day the highest wind speed is approximately 30 m/s. The lowest wind speeds for the calm day and the storm day are 7 m/s and 10 m/s, respectively. Next, Figure 24 and Figure 25 show the angular velocity deviation Δω values, which change with respect to the wind condition. Some peaks occur, corresponding to the wind pattern in Figure 22. These peaks happen due to the emergency braking for sudden wind condition changes. When Δω is nearly zero, the wind turbine produces its maximum power. On the calm day, the angular velocity does not reach the critical value of 20 rad/s. Nevertheless, according to the angular velocity deviation in Figure 24, it is attempting to reach the angular velocity that operates at the maximum power point. The same holds for the storm day: the angular velocity tries to operate at its maximum power point.
However, as can be seen in Figure 25, the deviation of the angular velocity has two negative peaks over certain periods. Correspondingly, it is obvious in Figure 27 that during the peaks of the angular velocity deviation the output angular velocity is suppressed to 20 rad/s. Thus, the emergency brake is also activated in the storm situation.
The angular velocity is always kept under ω_lim = 20 rad/s for both the calm-day and storm-day stall control patterns, which are shown in Figure 26 and Figure 27. The electrical power (Pe) and electrical torque (Te) for the calm day, and Pe, Pm, Te and Tm for the storm day, are plotted as well. According to those figures, Pe, Pm, Te and Tm have higher values on the storm day than on the calm day when the emergency brake and the MPPT brake are activated. Therefore, the small wind turbine is capable of generating high power even on a storm day. As shown in Figure 30 and Figure 31, the battery voltage for the storm day performs better than the battery voltage for the calm day; thus, in both cases we can say that a sufficient amount of voltage is available. Figure 32 and Figure 33 show the generator output current for the calm day and the storm day. For the storm day, the generator output current increases significantly compared to the calm-day current in the MPPT and emergency brake-controlled condition. Nevertheless, even on the calm day a sufficient battery capacity is maintained, which is sustainable for providing power to the greenhouse environmental loads. Thus, we can conclude that the efficiency of power extraction is improved in cooperation with the emergency operation.
Conclusion
We have conducted the simulation for different wind conditions and analyzed the behavior of the system when it is under the control of the MPPT and eddy current brake. Moreover, the analogical experiment on the small-scaled wind turbine blades was performed to decide the maximum penetration value and the maximum angular velocity the wind turbine blades can withstand. Therefore, we can conclude that the behavior of the system is kept in a controlled state by using the above-mentioned control methods. We believe the eddy current brake system can be implemented in a small-scaled wind turbine system for the purpose of MPPT and for controlling the over-rotation caused by strong wind. As future work, we will add pitch control to make the system more robust. Also, the real system will be implemented. | 4,859.4 | 2020-07-01T00:00:00.000 | [
"Engineering"
] |
Discrete Train Speed Profile Optimization for Urban Rail Transit: A Data-Driven Model and Integrated Algorithms Based on Machine Learning
Energy-efficient train speed profile optimization in urban rail transit systems has attracted much attention in recent years because of the requirement of reducing operation cost and protecting the environment. Traditional methods for this problem mainly focused on formulating kinematical equations to derive the speed profile and calculate the energy consumption.
Introduction
In recent years, urban rail transit has developed rapidly around the world due to its high capacity, safety, superior energy performance, and reliable service with sufficient punctuality [1], and it is becoming increasingly important for the development of large cities [2]. For example, 35 cities in China had urban rail transit with a total length of over 4750 km in 2017 [3]. According to the Web of China Rail Transit, there will be more than 50 cities operating urban rail transit in the next few years. In 2020, the total mileage of urban rail transit in China will reach 6000 km, making rail systems an important component of urban public transportation. Around the world, more and more cities are orienting travel toward public transportation. As shown in Figure 1 (taken from the Global Cities Public Transit Usage Report of Moovit), urban rail transit has attracted much attention in recent years, especially in some large cities, and accounts for a high proportion of public transportation. However, the quick expansion of urban rail transit networks has led to the problem of larger energy consumption. Taking Beijing rail transit as an example, in 2011 the total electric consumption of Beijing urban rail transit was 750 million kwh, of which 470 million kwh was used for traction, a proportion as high as 55%, which has attracted tremendous attention in recent years (Yin et al. [4]). In 2015, it reached 1.4 billion kwh, accounting for 40% of the total operating cost of the metro [5], which was equivalent to the annual electricity consumption of 730,000 households (annual electricity consumption of one household is based on the 2016 Beijing Statistical Yearbook from the Beijing statistical information website). In the European Union (EU), for instance, transport causes approximately 31% of total greenhouse gas (GHG) emissions; within this sector, metropolitan transportation is responsible for about 25% of total CO2 emissions (González-Gil et al. [6]). Therefore, energy saving has become an important issue in real train operation, both to reduce the operation cost and to satisfy the requirement of environmental protection.
To reduce the energy consumption in urban rail transit, many models have been developed in recent years, mainly considering the train control between two stations based on kinematic equations. There are three types in general: mathematical optimization models, simulation methods, and data-based models such as multiple linear regression and neural networks. Although much work has been done on optimizing speed profiles, existing methods have some limitations: (1) The mathematical optimization models are theoretically sound; however, the actual situation is often more complex, and the theory may not perform well when actual conditions are taken into consideration. (2) The establishment of a simulation model (e.g., agent-based simulation [7]) is complicated and costly, and there is a certain deviation between simulation results and actual measurement data. (3) The relationship between traction energy consumption and its influencing factors is not linear, so the precision of a multiple linear regression model is limited; a neural network relies too much on the empirical information extracted from historical data, is prone to overfitting, may be hard to generalize, and can easily fall into local optima. In contrast, from the view of data-driven optimization on the basis of machine learning theories, these limitations can be avoided. Firstly, real-world data that contain the influences of actual factors can be utilized well. Secondly, machine learning has been applied successfully in many fields and provides a way to study the existing information in data, acquire new information, and improve performance on a data set; the process of mapping input data (real-world profiles) to output data (energy consumption) is easy to realize. Thirdly, machine learning is stable: for instance, RFR and SVR have stable performance on the data set and have been widely used in many fields, such as biology, medicine, economy, management, and so on [8]. Therefore, it becomes possible to optimize the train speed profile in the urban rail transit system once their effectiveness is verified. The main contributions of this research can be summarized as follows: (1) A data-driven optimization model (DDOM) is proposed to optimize the speed profile in urban rail transit systems. The traditional speed profile optimization model is easy to analyze theoretically; in this paper, the train speed profile is instead optimized from the viewpoint of a discrete profile, which can be applied in practice easily.
(2) Based on actual data obtained by experimental measurements, a novel method of utilizing machine learning algorithms to calculate the energy consumption of a speed profile is proposed, which avoids modeling longitudinal train dynamics. Besides, the calculation error of the machine learning algorithms (RFR and SVR) on speed profile energy is verified.
(3) To solve the proposed model, an integrated heuristic optimization algorithm based on RFR and SVR is developed. In comparison with the real data, the results show an average energy reduction of 2.84%.
The framework of this paper is shown in Figure 2.
Literature Review
In recent years, many studies have focused on the energy-efficiency analysis of train traction; Scheepmaker et al. [23] summarized and reviewed the field from two aspects: (1) optimizing the speed profiles and driving strategies to reduce the energy consumption (e.g., Howlett [24,25]; Albrecht et al. [12]; Scheepmaker and Goverde [26]; Yang et al. [18,27]; Tian et al. [28]; Sun et al. [17]; Yang et al. [29]) and (2) optimizing the timetable by means of utilization of regenerative energy with minimum energy consumption (e.g., Chevrier et al. [30]; Li and Lo [19,20]; Wang and Goverde [31]; Wang et al. [32]; Zhao et al. [33]). Some typical publications about energy-efficient research are listed in Table 1. In essence, energy consumption is related to the train traction process, so improving the speed profiles is fundamental work. Over the past 25 years, the challenges in train speed profile optimization have resulted in a variety of analysis frameworks. (1) Mathematical optimization models. The modern theory of optimal train control was developed during the years 1992-2014 by the Scheduling and Control Group (SCG) at the University of South Australia in a collection of papers. For example, Howlett and Cheng [9] built a discrete control model and confirmed the fundamental optimality of the accelerate-coast-brake strategy for energy-efficient train operation; on the basis of the Pontryagin maximum principle, if no energy is recovered during braking, it becomes an optimal switching strategy. Wong and Ho [11] showed that a genetic algorithm was more robust in the calculation process. After reformulating the necessary conditions for optimal switching, Howlett et al. [34] proposed a less general model in which the optimal switching points for each steep section can be found by minimizing an intrinsic local energy function. Albrecht et al. [13] used the Pontryagin principle to find necessary conditions on an optimal strategy and showed that a strategy of optimal type uses only a limited set of optimal control modes: Maximum Power, HoldP (Hold using Power), Coast, HoldR (Hold using Regenerative braking), and Maximum Brake. Albrecht et al. [14] developed general bounds on the position of optimal switching points, proved that an optimal strategy always exists, and established an intrinsic local energy minimization principle for determining optimal switching points, which shows that the optimal strategy is unique. Huang et al. [35] proposed an integrated approach for the energy-efficient driving strategy and timetable, which was solved by a particle swarm optimization (PSO) algorithm. Yang et al. [36] employed an energy-efficient approach through the Taylor approximation. The authors of [37] modeled electric train energy consumption using neural networks, providing a reliable estimation of the consumption along a specific route when fed with input data such as train speed, acceleration, or track longitudinal slope. Big data analytics (BDA) has increasingly attracted strong attention from analysts, researchers, and practitioners in the railway transportation and engineering field [38]. From a data-driven view, this paper mainly focuses on how to obtain the optimal speed profile based on well-developed machine learning algorithms; there is still little research aiming at the optimal speed profile by the proposed method.
Data Analysis and Preprocessing
Data Overview. During the operation of the subway, the most widely used power source is electricity. Some of it is used by facilities on the train, such as air conditioning and lighting; the rest is used for traction of the metro trains. Our data are formed by the urban rail transit train running state and the corresponding energy consumption, derived from the Changping Line of Beijing urban rail transit. The operational section of the Changping Line runs from Xi'erqi station to Changpingxishankou station, with an operating mileage of 31.9 kilometers and a total of 12 stations (as illustrated in Figure 3). In order to accurately capture the actual traction power consumption during operation, we installed sensors and computers on the train. Both the total energy consumption and the energy consumption of the various electrical appliances on the train are recorded. The energy consumed by the electrical appliances is then subtracted from the total consumption, and the remainder is the energy consumed by traction. The provided data cover a running period of 4 months, with two circle running tests every night in the up and down directions. The types of recorded data are shown in Table 2.
Data Preprocessing
Symbols: n is the number of sections into which an operational section is discretized; V_i^0 is the i-th speed point of the original profile; ∇ is the time interval used to record the speed and displacement data during train traction.
Using these recorded data, we can trace the running process of an urban rail transit train. Taking the MingTombs-Changpingxishankou section in the down direction as an example (shown in Figure 4), the train operation process is divided into three stages: the first stage is acceleration until approaching the maximum speed limit; the second stage is fluctuation in the high-speed zone; the third stage is deceleration braking until the train stops. Normally, differences in track conditions are caused by construction and geological reasons, so there are speed limits at different locations in each section of the urban rail transit. In this section there are three speed-limited subsections, each with its own maximum speed limit.
The train running state format is shown in Table 3 (m: the number of data records on an original speed profile). A speed profile has three elements: speed, time, and distance. The time interval between records in the table is 0.2 seconds, while the running time between two stations varies from almost one hundred to several hundred seconds. This means that a speed profile may be made up of thousands of records. We need to calculate the energy consumption from the profile, that is, to find the relationship between energy consumption and thousands of data records, which is the so-called "high-dimensional" data problem in statistics.
Although machine learning algorithms in the era of big data are suitable for dealing with high-dimensional data, extremely high-dimensional situations require large amounts of training data, and calculation precision is hard to attain [39]. Therefore, given the limited data quantity, we choose dimensionality reduction. Not only can the algorithm then achieve a good training effect, but the accuracy of the original high-dimensional data can also be preserved.
The process of reducing the dimension is as follows: (1) The section length S_0 can be obtained from the records; S_0 is then divided into n small sections (the uniform segmentation method is chosen in this paper). Thus, the (n+1) points are represented by {s_i | i = 0, 1, ..., n}, with s_0 = 0 and s_n = S_0 (the section total length). Taking the MingTombs-Changpingxishankou section of the down direction as an example, as shown in Figure 5, uniform intervals of 50 m and 5 m are selected for the discretization. In Figure 5(a), the number of speed profile records drops to 26, giving 26 control points during the train traction; in Figure 5(b), the number of speed profile records is 247, and the density of control points is higher.
(2) For each discretized position s_i, find the previous and subsequent positions in the original profile within the ∇ interval, recorded as s_i^- and s_i^+. From the original velocity profile, we obtain the velocities and times corresponding to s_i^- and s_i^+, recorded as v_i^-, v_i^+, t_i^-, and t_i^+. In the small section from s_i^- to s_i^+, the train is assumed to be in a uniformly accelerated state. As shown in Figure 6, using v_i^-, v_i^+, t_i^-, and t_i^+, the speed v_i can be obtained. Therefore, we get the sequence {v_0, ..., v_i, ..., v_n}, where v_0 = v_n = 0. Figure 6(a) indicates that the speed profile can be represented by fewer points; Figure 6(b) shows that the error between the simplified profile and the original one can be ignored when compared over the whole length of the section.
The discretized space-speed sequences {s_i - v_i} and the traction energy consumption of each sequence are extracted; the data are shown in Table 5 (q: the number of processed data records). To eliminate dimensional effects, the data are then normalized. The extracted data are divided into two parts: 80% as the training set and 20% as the test set.
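For illustration, the discretization described above could be sketched in Python roughly as follows; this is a minimal sketch, not the authors' code, and the synthetic profile, the helper name discretize_profile, and the use of simple linear interpolation over distance (in place of the paper's uniform-acceleration interpolation between neighboring records) are assumptions made for brevity.

```python
import numpy as np

def discretize_profile(dist_m, speed_kmh, n_sections):
    """Reduce a recorded (distance, speed) profile to n_sections+1 control points
    at a uniform spatial interval, interpolating the speed at each control point
    from the surrounding records (approximated here by linear interpolation)."""
    s_points = np.linspace(0.0, dist_m[-1], n_sections + 1)   # s_0 = 0, s_n = section length
    v_points = np.interp(s_points, dist_m, speed_kmh)         # speed at each control point
    v_points[0], v_points[-1] = 0.0, 0.0                      # the train starts and ends at rest
    return s_points, v_points

# Synthetic stand-in for one recorded run sampled every 0.2 s (placeholder, not real data)
t = np.arange(0.0, 103.4, 0.2)                # seconds
speed = np.sin(np.pi * t / t[-1]) * 80.0      # km/h
dist = np.cumsum(speed / 3.6 * 0.2)           # metres travelled

# Case-2-style reduction: 26 sections, i.e. 27 control points
s_pts, v_pts = discretize_profile(dist, speed, n_sections=26)

# Each run then yields one normalized space-speed row; the full set of rows and their
# measured traction energies is split 80%/20% into training and test sets.
row = (v_pts - v_pts.min()) / (v_pts.max() - v_pts.min() + 1e-12)
```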
Formulation
In this section, a data-driven optimization model (DDOM) is proposed to optimize the urban rail transit traction energy consumption; it discretizes the velocity profile and describes the relation between the velocity profile and the energy consumption as a complex mapping relation.
The notation includes: the minimum and maximum speed limits corresponding to each position s_i; the minimum and maximum acceleration limits in the operational section; and the minimum and maximum running time limits in the operational section.
Assumption. During the process s_i^- → s_i → s_i^+, because the interval is small enough, the train is assumed to be in uniform acceleration. According to the speed-displacement relationship for uniformly accelerated motion, a quadratic relation can be written.
From formulas (1)-(3), we obtain the velocity sequence {v_0, ..., v_i, ..., v_n}.
Train Operation Constraints. During the run from one station to a neighboring station, some constraints should be satisfied. Speed limit (SL) constraints: the speed limit of the section at each position s_i should be satisfied.
The minimum and maximum speed limits are determined by the actual speed limits of the section.
Acceleration constraints: in order to ensure passenger comfort, the acceleration needs to be kept within a suitable range. As shown in formulas (7)-(8), the acceleration bounds are determined by empirical parameters, with the upper bound positive and the lower bound negative.
Train operation time constraints: transportation efficiency should also be taken into account. Therefore, the train running time also needs to be within a certain range, as shown in formula (9).
where the minimum and maximum running times are determined by the service level and operational conditions.
Train operation distance constraints: to ensure that the train can reach the station accurately, the total displacement of the train in the section must be equal to the length of the section.
Objective Function. When the section running time of the train is fixed, the corresponding energy consumption E has a complicated relationship with the sequence of velocity points, i.e., E = f({s_0 - v_0}, ..., {s_i - v_i}, ..., {s_n - v_n}), i = 0, 1, ..., n. The optimization of the urban rail transit speed profile is to minimize the energy consumption while satisfying the transportation task, and the objective function of the data-driven optimization model (DDOM) is shown in (11).
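Since the numbered formulas (1)-(11) do not survive in this extracted text, the following is only a sketch of the form the DDOM takes as implied by the surrounding description; the symbols (E, f, v_i, s_i, the limits, and the time expression) are assumptions introduced here for readability rather than the authors' original notation.

```latex
\begin{align}
\min_{v_1,\ldots,v_{n-1}} \quad
  & E = f\bigl((s_0,v_0),(s_1,v_1),\ldots,(s_n,v_n)\bigr) \\
\text{s.t.}\quad
  & v_i^{\min} \le v_i \le v_i^{\max}, \qquad i = 0,1,\ldots,n \\
  & a^{\min} \le \frac{v_{i+1}^{2}-v_i^{2}}{2\,(s_{i+1}-s_i)} \le a^{\max},
      \qquad i = 0,1,\ldots,n-1 \\
  & T^{\min} \le \sum_{i=0}^{n-1} \frac{2\,(s_{i+1}-s_i)}{v_i+v_{i+1}} \le T^{\max} \\
  & s_0 = 0,\quad s_n = S_0,\quad v_0 = v_n = 0
\end{align}
```

Here f is the data-driven mapping learned by RFR/SVR, the middle expression follows from the uniform-acceleration assumption within each small section, and the sum approximates the running time as segment length divided by the average segment speed.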
A Greedy Heuristic Algorithm for the Model
In this section, two energy consumption calculation methods based on machine learning algorithms are first introduced. Then, by analyzing their characteristics, an integrated optimization flow is developed that combines their merits.
5.1. Energy Consumption Calculation Based on Machine Learning Algorithms
From the data-driven view, an urban rail transit train running within a section produces a traction speed profile that corresponds to an energy consumption value. Although the factors affecting the energy consumption of each train are not related only to the speed profile, the external factors are determined once the operational section is fixed. Moreover, the transmission characteristics of the train are determined once the train type is selected; the energy consumption is then related only to the speed profile during the traction process. Therefore, the speed profile becomes the key to the energy consumption of train traction.
In this paper, two typical machine learning algorithms (RFR and SVR) are introduced. RFR is utilized to obtain the importance degrees of the velocity points at different positions, that is, to identify the space-speed pairs with a major contribution to the energy consumption; SVR is employed to calculate the energy consumption of the profile. The programming environment is Python 3 with the scikit-learn machine learning module.
5.1.1. Random Forest Regression (RFR) Algorithm Module
Random forest is a kind of ensemble learning algorithm that uses multiple trees for training and prediction; it can be used for classification and also for regression [40]. Based on decision trees combined with aggregation and bootstrap ideas, random forests were introduced by Breiman in 2001 and add an additional layer of randomness to bagging. In addition to constructing each tree using a different bootstrap sample of the data, random forests change how the classification or regression trees are constructed. They are a powerful nonparametric statistical method that allows regression problems to be considered within a single and versatile framework [41]. The random forest optionally produces two additional pieces of information: a measure of the importance of the predictor variables and a measure of the internal structure of the data (the proximity between different data points). In this paper, we take advantage of this module to obtain the importance degree of the velocity points at different positions, which is used in the heuristic solution process for the model.
Evaluation and Analysis of RFR.
In the utilization of the RFR algorithm, two important parameters should be calibrated: the number of split attributes (Mtry) and the number of decision trees (Ntree). For simplicity, the enumeration method is used to traverse the two parameters. The convergence process over ten experiments is shown in Figure 7. We can see that when Ntree ≥ 50, the average error is close to 0.1 kwh. The errors for different Mtry values are shown in Figure 8(a), and there is an acceptable convergence range in Figure 8(b); when Mtry = 2 or 3, the error is minimal. Therefore, the optimal parameter combination used in this paper is Mtry = 2 or 3 and Ntree ≥ 50. Using the RFR algorithm, the average error of the traction energy consumption evaluation is less than 0.1 kwh and within a range of 1%.
In addition to this high-precision evaluation ability, we also obtain the importance degrees of the velocity at different displacements for the traction energy consumption of the urban rail transit. We can find the positions at which the speed is more significant for the energy consumption of a section, which indicates the contribution of each space-speed pair. For instance, for the MingTombs-Changpingxishankou section, whose length is 1230 m, the importance degrees at different positions are shown in Figure 9.
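As a rough illustration of this calibration step, the following is a minimal scikit-learn sketch (the paper's stated environment is Python 3 with scikit-learn); the placeholder data, the parameter grids, and the variable names are assumptions, not the authors' actual code or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder data standing in for the normalized space-speed rows
# (27 control points per run, i.e. a 26-section discretization) and measured energies in kwh.
rng = np.random.default_rng(0)
X, y = rng.random((500, 27)), rng.random(500) * 30.0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Enumerate Ntree (n_estimators) and Mtry (max_features), as in the calibration above.
best = None
for ntree in (10, 50, 100, 200):
    for mtry in (2, 3, 5):
        rfr = RandomForestRegressor(n_estimators=ntree, max_features=mtry, random_state=0)
        rfr.fit(X_tr, y_tr)
        err = mean_absolute_error(y_te, rfr.predict(X_te))
        if best is None or err < best[0]:
            best = (err, ntree, mtry, rfr)

err, ntree, mtry, rfr = best
importance = rfr.feature_importances_   # importance degree of the speed at each position
```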
5.1.2. Support Vector Machine Regression (SVR) Algorithm Module
The support vector machine (SVM) algorithm comes from statistical learning theory (SLT) and is based on the structural risk minimization principle, which can avoid over-learning problems and ensure the generalization ability of the model. In essence, it solves a convex quadratic programming problem and avoids falling into local minima. It can be applied not only to classification problems but also to regression [42]; accordingly, it can be divided into support vector classification (SVC) and support vector regression (SVR). Because of its solid theoretical foundation and complete theoretical derivation, the support vector machine is an effective tool for dealing with small-sample, nonlinear, and local problems. In this paper, it is applied to calculate the energy consumption based on the real data.
Before using the SVR, the first step is to determine the kernel function; the second step is to optimize the parameters corresponding to the different kernel functions. In this paper, three typical kernel functions are evaluated: the radial basis kernel function (RBF), the linear kernel function (LINEAR), and the polynomial kernel function (POLY).
(1) For RBF, the calibration parameters include the penalty factor C and the gamma value. As shown in Figure 10(a), the convergence of RBF is very fast. When C ≥ 20, the error drops to a lower level, and when C ≥ 100, the average error of traction energy consumption reaches about 0.1 kwh. The best parameter combination found is a penalty factor of at least 30 together with a gamma value of 3.
(2) For LINEAR, the calibration parameter is the penalty factor C. As shown in Figure 10(b), the convergence is slow. Only when C ≥ 900 does the average error of traction energy consumption reach about 0.1 kwh, which means it takes somewhat longer to reach the minimum error.
(3) For POLY, the calibration parameter is the penalty factor C. As shown in Figure 10(c), the average error fluctuates up and down around 0.1 kwh and is not stable, failing to achieve good convergence.
Comparing the performance of the three kernel functions, the average error of the RBF kernel is the best, which means that the traction energy consumption can be calculated accurately under the optimal parameter conditions.
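The kernel comparison can be illustrated with the following minimal scikit-learn sketch; the placeholder data and the specific parameter values are assumptions chosen to echo the ranges reported above, not the authors' exact configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder rows standing in for the normalized profiles and measured energies.
rng = np.random.default_rng(0)
X, y = rng.random((500, 27)), rng.random(500) * 30.0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

results = {}
for kernel, params in [
    ("rbf", {"C": 100.0, "gamma": 3.0}),   # converges quickly once C is large enough
    ("linear", {"C": 900.0}),              # converges slowly, needs a much larger C
    ("poly", {"C": 100.0, "degree": 3}),   # reported as unstable in the experiments above
]:
    svr = SVR(kernel=kernel, **params)
    svr.fit(X_tr, y_tr)
    results[kernel] = mean_absolute_error(y_te, svr.predict(X_te))

best_kernel = min(results, key=results.get)   # the RBF kernel in the paper's experiments
```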
5.1.3. Analysis of the Two Machine Learning Algorithms. The RFR algorithm has stable performance on the data set, and its evaluation results are satisfactory. More importantly, the importance degrees of the velocity points at different positions can be sorted, which provides valid guidance for the optimization control of the speed profile; for example, we can adjust the speeds with high importance degrees during the speed profile optimization process. As for the SVR algorithm, although its performance is poor with some kernels, its calculation ability with the RBF kernel is serviceable enough. To optimize the speed profile of an urban rail transit train, we need to find a speed profile whose energy consumption is no greater than, and ideally lower than, the existing one. However, the RFR algorithm has a fatal flaw: a random forest cannot produce outputs beyond the range of the training data, which may lead to overfitting when modeling specific noisy data. Therefore, the design of the urban rail transit speed profile optimization algorithm should combine the virtues of SVR and RFR.
5.2. Optimization Process. From the view of discrete train speed profile optimization, the key problem is how to design a method to obtain a more energy-efficient profile; thus a group of space-speed combinations {s_i - v_i} (i = 0, 1, ..., n) should be found. The velocity v_i at every position can lie within a range, so the number of possible combinations is enormous. It is therefore necessary to discretize the speed adjustment, using a step size for changing the speed; a simple and effective step size is the unit of the recording instrument (in our experiment, 0.001 km/h). Further, a heuristic process can be used to reduce the number of combinations: we utilize the importance degrees from RFR to adjust the velocities in a fixed order, which makes an energy-saving profile easier to obtain. As shown in Figure 11, within one operational section of the real-world data there are many profiles with the same running time but different energy consumptions. Under every running time condition, we try to find a satisfactory profile at that fixed running time; the best among the different fixed running times is then taken as the optimal solution. Based on this, we develop an integrated greedy heuristic algorithm combining RFR and SVR.
Parameters
The notation used in the algorithm includes: the set of speed index values with the importance degrees arranged in descending order; the set of speed index values with the importance degrees arranged in ascending order; and, within each ordering, the speed index value corresponding to the k-th importance degree.
Step 1. With the optimal parameters, the random forest regression (RFR) algorithm module (Section 5.1.1) is used to obtain the importance degrees of the speed series {s_i - v_i}. These are then sorted in descending order (the importance degrees of the endpoint pairs {s_0 - v_0} and {s_n - v_n} are zero, so they are excluded), and the speed points of the top m% (i.e., the first n*m/100 in descending order of importance) are selected. Similarly, the speed points of the bottom m% are selected in ascending order of importance.
Step 2. Initialize the operation time of the urban rail transit train to the recorded running time. The minimum and maximum running times are determined from the data, and the discretized unit of time is ∇. The counters of the adjustment loop are then initialized.
Step 3. A new profile is obtained after adjusting one high-importance speed point and one low-importance speed point. The support vector machine regression (SVR) module (Section 5.1.2) is used to calculate its energy consumption. The velocities are adjusted until the stopping condition is met, and the minimum energy consumption found during the adjustment process, together with the corresponding speed profile, is recorded. Formulas (12) and (13) show the calculation of the displacement changes corresponding to the upward and downward velocity adjustments; to ensure the balance of displacement, the two displacement changes are set equal.
Step 4. Collect the energy consumptions obtained under all candidate running times and take the minimum as the global optimal solution. Finally, the algorithm flow is shown in Figure 13. We take the MingTombs-Changpingxishankou section of the Changping Line in the down direction as a numerical experiment to explain the optimization process, with the section parameters listed above. Two cases with different discretization intervals are considered; a complete operation state is shown in Figure 14.
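To make the loop structure of these steps concrete, here is a minimal Python sketch of a greedy adjustment of this kind; it assumes the trained models from the earlier sketches (the RFR importances and an SVR energy predictor), and the helper name, step size handling, and constraint checks are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def greedy_optimize(v, energy_model, importance, m_pct=100, step=0.001,
                    v_min=None, v_max=None, max_rounds=200):
    """Greedily perturb interior speed points in order of RFR importance,
    keeping a change only if the predicted energy decreases."""
    v = np.asarray(v, dtype=float).copy()
    interior = np.arange(1, len(v) - 1)                          # endpoints stay at zero speed
    k = max(1, int(len(interior) * m_pct / 100))
    up_order = interior[np.argsort(-importance[interior])][:k]   # most important first
    down_order = interior[np.argsort(importance[interior])][:k]  # least important first
    best_e = energy_model.predict(v.reshape(1, -1))[0]

    for _ in range(max_rounds):
        improved = False
        for i_up, i_down in zip(up_order, down_order):
            trial = v.copy()
            trial[i_up] += step      # raise a high-importance speed point
            trial[i_down] -= step    # lower a low-importance one to roughly balance
            if v_min is not None and np.any(trial < v_min):
                continue
            if v_max is not None and np.any(trial > v_max):
                continue
            e = energy_model.predict(trial.reshape(1, -1))[0]
            if e < best_e:
                v, best_e, improved = trial, e, True
        if not improved:
            break
    return v, best_e

# Example usage with the earlier placeholder models:
#   v_opt, e_opt = greedy_optimize(row, svr, rfr.feature_importances_, m_pct=100)
```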
Optimization Result
Case 1. The discretization points are set at a uniform interval of 5 m, with v_0 = v_246 = 0, s_0 = 0, and s_246 = 1230. The operation time is 103.4 s. The results after optimization are shown in Figure 15. We can see that the optimal profile is not smooth; it suddenly increases or decreases in some places. Apparently, the practical usability of the optimized profile is not sufficient.
Case 2. The discretization points are set at a uniform interval of 50 m, with v_0 = v_26 = 0, s_0 = 0, and s_26 = 1230. Figure 16 shows the optimal results when m = 50% (Figure 16(a)) and m = 100% (Figure 16(b)). In this case, the operation time is also 103.4 s, and the optimized energy consumption can be reduced by 0.65 kwh. The speed profile is much smoother than in Case 1, with a rate of energy reduction of 3.1% (0.65/21 * 100%). In Figure 16(a), for m = 50%, the acceleration stage after optimization is slightly flatter. However, in Figure 16(b), when m = 100%, the whole speed profile is flatter compared with the original profile, which is more valuable in practice.
Operation sections with different lengths should not share the same discretization interval: for a longer section, the interval can be bigger. For example, the distance of Xi'erqi-Life Science Park is 5455 m, and the interval could be 200 m.
In addition, the comparison of profiles before and after optimization is shown in Figures 17(a)-17(j), and the optimization results of other operation sections are listed in Table 6. We can see that the maximum energy saving is achieved in the section Shahe to Shahe University Park, which is a good performance, and that for the 31.9 km line with 12 stations the overall energy saving is 2.84%. The improvement may look modest when compared with previous research (most of which claims energy savings above 4%). However, our improvement is measured against a real-world result that had already been subject to optimal control (traditional optimal train control on the basis of the Pontryagin maximum principle). There is an ATO (automatic train operation system, equipped with optimal control) on the Beijing Changping Line and Yizhuang Line; the Yizhuang Line and Changping Line share similar features, such as train type, train formation, passenger intensity, and power supply mode.
According to the operator's statement, a well-designed method applied in the real world on the Yizhuang Line achieves an average energy saving below 3%. Therefore, it is reasonable that an improvement measured against an ATO profile looks modest. Besides, different sections show different improvements. The results may be driven by many factors, such as the external environment of each section (radius of curvature, slope, air humidity, and so on). The quality of the existing optimized control in each section determines the room for improvement; if the room for improvement is limited, the achievable improvement is also limited. Therefore, there is no quantitative result to illustrate the different improvements in each section.
Conclusion
Reducing train traction energy consumption is one of the efficient ways to cut energy costs in urban rail transit systems, and, to protect the environment, the optimization of urban rail transit traction energy conservation has become a significant task in urban rail transit operation and management. The traction energy consumption of a single train is related to the speed profile between stations. When energy-efficient profiles are applied in every section, there is a positive effect on reducing the energy consumption of the whole urban rail transit system. Therefore, train speed profile optimization is fundamental work.
In this paper, the speed profile optimization problem is discretized, and the decision variables of the speed profile become a series of space-speed points. From this viewpoint, a data-driven urban rail transit train speed profile optimization model (DDOM) is proposed to describe the relationship between profiles and energy consumption. Two machine learning algorithms, namely random forest regression (RFR) and support vector regression (SVR), are employed. RFR is applied to obtain the importance degrees of the velocity at different positions, and these degrees are utilized as heuristic information to decide the order in which velocities are adjusted. SVR is used to calculate the energy consumption of profiles with high accuracy (95%). Combining the advantages of the two algorithms, an integrated greedy heuristic optimization algorithm is developed to solve the model, which can reduce energy consumption by 2.84%. In some theoretical research, the reported energy conservation percentage is higher than our results; however, few of those results are verified on real-world data. Furthermore, our method is quite simple and can be applied in practice easily.
Nevertheless, because the data samples are far from sufficient, the range of velocity change is limited when adjusting velocities at different positions to obtain a new profile, and there is still room for improvement beyond the present optimization results. Although there are many different views, the data-driven method is new to this problem, and applying machine learning algorithms to the field of energy saving in urban rail transit is the innovation. Future research can focus on the following areas. Firstly, a further improved algorithm with a different heuristic strategy could be studied. For instance, based on the data-driven machine learning method, the regenerative energy produced in the braking process may be reused by trains in neighboring sections; thus, instead of optimizing a single train speed profile in each section separately, train speed profiles from neighboring sections should be considered jointly. Secondly, in urban rail transit networks, if the power supply at network nodes (transfer stations) comes from the same transformer substation, the energy-saving optimization of trains can be extended to the whole urban rail transit network.
Figure 1: Proportions of public transportation and urban rail transit.
Figure 8: Convergence process and errors in RFR. (a) Errors for different Mtry values. (b) Convergence range.
Figure 9: Importance of velocity at different locations in the section.
Figure 12: Explanation of changes of velocity and displacement.
Figure 17: The obtained profiles in different sections; sections (a)-(j) are listed in Table 6.
Table 1: Some typical publications about energy-efficient research. I: speed profiles/driving strategy; II: energy-efficient timetable.
Table 2: Overview of measurement characteristics.
Table 3: Part of the types of the original data.
Table 4: Part of the velocity series after being processed.
Table 5: Data format of training and testing set.
Table 6: Optimization results of other sections. | 7,480.2 | 2019-05-02T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
Recovering The Principles of Humane Experimental Technique
The 3Rs, or the replacement, reduction, and refinement of animal research, are widely accepted as the best approach to maximizing high-quality science while ensuring the highest standard of ethical consideration is applied in regulating the use of animals in scientific procedures. This contrasts with the muted scientific interest in the 3Rs when they were first proposed in The Principles of Humane Experimental Technique (1959). Indeed, the relative success of the 3Rs has done little to encourage engagement with their original text, which remains little read and out of print. By adopting a historical perspective, this article argues that one explanation for this disjunction may be found in another, more celebrated, event of 1959: C. P. Snow’s Rede lecture on The Two Cultures. The moral outlook of The Principles of Humane Experimental Technique derived from an earlier ethos wherein humanistic and scientific values occupied a shared culture. While the synthetic style of The Principles has hindered its readership, this article concludes that there is value to recovering the notion that the humanities and social sciences can contribute to the improvement of animal research.
Introduction
Today, the 3Rs, or the replacement, reduction, and refinement of animal research, have become established worldwide as the ethical approach to governing animal-dependent science. In spite of the rapid rise of the 3Rs to recognition and implementation from the 1990s, few are familiar with their history. Indeed, a curious characteristic of the 3Rs is that one needs apparently to know nothing of their history in order to follow their precepts. Far from being required reading, the original text in which the 3Rs were introduced, The Principles of Humane Experimental Technique (W. M. S. Russell and Burch 1959), is out of print as it has been, almost continuously, since publication of the first editions by Methuen & Co. in 1959 (UK) and Charles C. Thomas in 1960 (United States). 1 This article adopts a historical perspective to explain this incongruity through reference to the ways in which relations between scientific and human values have changed over time within predominantly British or British-influenced culture. It concludes by considering implications for animal research today.
Contemporary commentators who have engaged with The Principles tend to report that text to be unclear and confusing, noting that the multiple definitions of the 3Rs today depart in significant ways from their original formulation (e.g., Tannenbaum and Taylor Bennett 2015). One explanation for this, explored by this article, is that The Principles is shaped by a historically specific moral outlook, which assumed that a shared set of values united work across the "sciences" and "humanities." Readers will be well aware this view is less prevalent today, indeed ST&HV's aims and scope state that over time "more and more, human values come into conflict with scientific advancement." Even at the time of writing, harmonious relations between humanities and the sciences were waning as the latter increasingly challenged the former for prestige, power, and societal leadership. In sum, The Principles is a difficult text to read because it is grounded in a specific formulation of scientific humanism that lent itself to a complex and eclectic multidisciplinary style and approach that has little traction today.
The notion that human and scientific values apparently conflict, which motivates ST&HV and drives the ongoing controversy around animal research, assumes and perpetuates a historically bounded distinction: that modern intellectual society is divided into the "two cultures" of the sciences and humanities. 2 Similarly, it can often seem to follow that humanism and the sciences operate with distinct knowledge forms, the former evaluative, the latter factual. In contrast, The Principles assumes the very opposite: that the humanities and sciences reinforce each other and would continue to do so into the future through their shared values and broadly shared epistemology. This latter is rooted in a specifically Victorian scientific humanism described as a "common context" wherein "scientists believed that the practice of science itself, with its openness to truth, was a model for good citizenship" (p. 237). Central to Victorian and Edwardian moral values was the notion of individual character, which, within the life sciences as in wider culture, operated as the ethical guarantor of scientific behavior, providing moral justification for scientific research as "biology explains character, and that character, or the capacity for individuality, is both the key to social advance and an ethical ideal" (Smith 2003, 178). Although The Principles is coauthored by W. M. S. Russell and Rex Burch, it was Russell, an Oxford-trained biologist and polymath, who was responsible for the style, form, and content of the text. Russell was the son of the marine biologist Sir Frederick Stratten Russell, through whom he would have been exposed to and embedded within the scientific humanism framework that dominated the elite intellectual culture of late nineteenth- to mid-twentieth-century Britain.
That this distinctive moral outlook was already fading by the 1950s and has lost all traction today can explain why The Principles was and remains a challenging text to understand. For instance, a contemporary reader may well be perplexed as to why The Principles gives no consideration whatsoever to the views and concerns of the lay public (cf. Hobson-West, this volume). However, such an approach follows logically from the "common culture" of interwar Britain that carefully balanced a commitment to social modernization through science with a conservative resistance to mass society (cf. Smith 2003, 236-38). As we shall see, these values allowed scientists "to present themselves as both special, a legitimate part of the traditional elite, and democratic, through their contribution to the wellbeing of all" (Smith 2003, 236). 3 Such an outlook precluded the value of democratic "inclusion" within the practice or governance of science, a view that shaped not only The Principles but also the broader work of the Universities Federation for Animal Welfare (UFAW), which funded the work.
This article is premised on the claim that the original formulation of the 3Rs can only be properly understood in the context of a scientific humanism that was inherited from the Victorian period and already in sharp decline at the time of their constitution. In part 1, the origins of the The Principles and the 3Rs are located within an effort to mobilize the common culture of humanities and the sciences to establish a science of animal welfare during the mid-twentieth century. Part 2 reviews a tension at the heart of the common culture that led to a schism in which humanities and sciences famously came to be understood as two cultures. Finally, part 3 examines how the common culture that sustained synthesized moral and scientific epistemology of The Principles stood in tension with the growing schism between the humanities and the sciences in the wake of the two cultures. In concluding, a case for the continued relevance to animal research of understanding the historical background of the 3Rs is presented.
Making a Science of Animal Welfare-The Common Context in Action
In the interwar period, the UFAW, a self-styled "scientific" animal advocacy organization, worked to establish animal welfare as an applied science. UFAW was committed to scientific meliorism: the ethos, derived from the common context of scientific humanism, that science was the best route to improving society. The British Science Guild, for instance, saw itself as a politically and ideologically neutral body existing to promote the application of science to the improvement of society (MacLeod 1994). Prior to UFAW, no systematic efforts had been made to mobilize scientific meliorism to improve the place of animals in society. Animal advocacy, as it then stood, was focused less on "improving" the lot of animals than on preventing cruelty toward them (e.g., Beers 2006). In practice as much as in rhetoric, animal advocacy was political, framed by a discourse that historians have shown to be shaped as much by human concerns for nation, gender, class, and race as by concern for the animal in and of itself (Ritvo 1989;Kean 1998). Interwar animal advocacy was widely perceived to be opposed to science, largely through its association with antivivisectionism, which by the close of the nineteenth century had effectively polarized scientific and social values among those committed to the cause. In what remains the most comprehensive analysis of the late nineteenth-century British vivisection controversy, French (1975) concluded that "[a]ntivivisectionists foresaw the cold, barren, alienation of a future dominated by the imperatives of technique and expertise. It was not experiments on animals they were protesting, it was the shape of the century to come" (p. 412). Antipathy to scientific values, technique, and expertise was characteristic of early twentieth-century antivivisection, which constrained the possibility for scientific meliorism to take hold within this wider animal advocacy movement. It was in response to this problem that Charles Hume, Honorary Secretary of the British Science Guild, established the University of London Animal Welfare Society (ULAWS) in 1926.
Like the Guild, which served as inspiration and model, ULAWS was founded on the premise that science, being free of emotion and sentiment, was better placed to improve society than democracy and public debate alone. Hume believed scientists, veterinarians, and the elite intelligentsia were all hampered from acting to improve the welfare of animals through fear of being tarnished by association with the rampant emotionalism of radical animal advocates and antivivisectionists. ULAWS, renamed the UFAW as it grew from a local group to a national network of university-based branches, adopted a distinctive position within the animal advocacy movement by refusing to invoke the politics of mass emotionalism. Rather than appealing to the public, UFAW aimed to educate the educators by appealing "to scientific men in our universities to be generous enough to devote some part of their thought and effort to means for diminishing the hardships to which animals are subjected in their contacts with human civilization" (Hume 1939, 39). In the spirit of democratic elitism characteristic of scientific humanism, UFAW sought to harness scientific expertise and directly apply it to the improvement of "animal welfare." Hitherto, animal advocacy had tended to focus on the prevention of cruelty rather than the promotion of "welfare." The latter tended to be used to capture an existential but still political condition, explicit, for instance, in the naming of the National Council for Animals' Welfare. UFAW, in contrast, approached animal welfare as an "object" that could be scientifically identified, measured, manipulated, and improved.
Rather than intervening upon the traditional landscape of emotive public appeals and political campaigns, UFAW focused on applying science to achieve demonstrable improvements to animal welfare. The focus on practical action as opposed to political rhetoric was reflected in the organization's mission "to diminish, by methods appropriate to its special character as a university organization, the sum total of pain and fear inflicted by man on animals" (ULAWS 1934, 2). UFAW's approach reframed animal welfare as a scientific rather than political object with the intention of moving animal advocacy away from political campaigning toward the development of animal welfare as a science. As such, UFAW pioneered a middle ground where scientists and veterinarians could deploy their respective expertise to improve the welfare of animals with "a maximum of sympathy but a minimum of sentimentality" (Hume [1962] 1982). One consequence of this logic was that UFAW had no objection to the killing of animals. Their objective was to improve welfare by reducing suffering; thus a painless death was "humane." Consequently, UFAW's interwar activities involved enrolling veterinary, medical, biological, zoological, engineering, and related scientific expertise to establish humane techniques of killing animals, with a focus on stray animals (Vinter 1950), vermin (Wright 1936), and livestock (Hume 1927). Hume, for instance, having endorsed the electrical stunning of pigs (Hume 1935), became deeply skeptical of the dramatic uptake of electrocution as a humane method of killing in the wake of the Slaughter of Animals Act, 1933. In order to ascertain the humanity of electrocution for other animals, particularly dogs and cats, he worked with electrical engineers, uncovering a method of determining whether a procedure was humane or cruel that had "appeared in an engineering journal and being scarcely intelligible to anybody but an electrical engineer or a physicist [had] been almost entirely ignored by veterinary surgeons and by lay humanitarians" (Hume 1939, 153). The example of electrocution is emblematic of UFAW's vision for a science of animal welfare; it was less to be a discipline than an eclectic cross-disciplinary field. To this end, UFAW worked to connect disciplines that otherwise did not interact, seeking to catalyze collaborative approaches to problems that had escaped rigorous attention and could only be resolved by collaborative efforts across multiple forms of expertise.
UFAW worked to establish a culture of communication spanning science, society, and animal advocacy as well as expert specialisms that hitherto had not knowingly shared an object of interest. Establishing a science of animal welfare required the fostering of cross-disciplinary spaces of communication: "Opponents of the gin-trap must understand agriculture as well as trapping; the problem of cruel poisons is one for the chemist; the use of electricity both for killing and for immobilising animals raises questions on which the electrical engineer no less than the veterinary surgeon has a point of view; the mechanical engineer has already provided the separator as a partial solution of the problem of oil pollution, and the casting-pen and the pig trap for the slaughter-house; the zoologist, physiologist, and veterinarian, the theologian and the psychologist, the historian and the economist, have all some special knowledge bearing upon man's relationship with animals" (ULAWS 1936, 164). In arguing that the "study of animal welfare" should be "a department of sociology which has been neglected in its scientific aspect," UFAW acknowledged that communication across disciplines had been a problem in itself (ULAWS 1936, 164). Nevertheless, making the study of "man's dealings with the animals" a "scientific sociology" was a "task worthy of our universities and demands contributions from ethics and theology, economics, psychology, zoology, veterinary science, and various other branches of learning" (ULAWS 1931, 1). Such an inclusive academic vision tied to social progress reflected and embodied the interwar common context of a "world of values shared by scientists and non-scientific people expressed in scientific thought and practice as well as in other walks of life" (Smith 2003, 212). Here, the plight of animals became both a collective agenda and a shared object of interest, around which UFAW established a science of animal welfare in which specialists with otherwise diverse technical and scientific knowledge could come together.
UFAW's interwar activities were diverse, addressing aspects of animal welfare from the humane slaughter of agricultural animals to the education of children in animal care. However, one critical area was absent: UFAW was "precluded by its constitution from engaging on either side in controversies relating to scientific experiments on animals" (ULAWS 1938, 167). Animal experimentation, or "vivisection" as it was still commonly known, proved a divisive subject that had threatened the viability of the society when established in 1926. Rather than serving as a platform upon which consensus and practical action could be cultivated to improve the welfare of animals, UFAW's claim to be a scientific society for the promotion of animal welfare initially made it a target for all sides. Antivivisectionists and the scientific community each associated the new organization with their perceived opponents and attacked it accordingly. Only by adopting a strong noncommittal position was the society saved and the goodwill of scientists secured (Worden 1951). As a result of the initial vehemence against UFAW caused by the vivisection controversy, it remained silent on the question of laboratory animal welfare for a considerable time after it had established substantive credibility and trust within the scientific community. It was not until 1942 that the society felt sufficiently established to address the question, and even then it did so indirectly and with strategic care. This shift in stance was a response to concerns emerging from within the scientific community about the suitability of animals then available for research. Hitherto, animals had been procured for experimental research ad hoc from commercial breeders who were more interested in the fancy, fur, and pet trades than in the needs of science. In 1942, a coalition of British scientific stakeholders called for government action to establish a systematic means for the production and provision of "standard" laboratory animals. Reluctantly, the Medical Research Council took responsibility for what was increasingly perceived as a threat to scientific research: the absence of healthy, reliable laboratory animals of known backgrounds (Kirk 2008). For UFAW, this presented an opportunity. By aligning the scientific need for healthy animals with the promotion of animal welfare, UFAW recast its agenda, promoting animal welfare as an essential component of reliable animal research.
Importantly, UFAW's point of entry was animal care, not experimental practice per se. Helpfully, UFAW's pragmatic raison d'être to reduce the "sum total" of suffering inflicted on animals by society allowed the organization to circumvent the contentious political question of the moral legitimacy of using animals for scientific purposes. Nevertheless, intervening in the practice of animal research was both a risky and challenging move for UFAW as an animal advocate organization (albeit a self-styled scientific one). Accordingly, UFAW strategically focused on practices of animal care rather than animal experimentation. This history is important because it reveals how the promotion of laboratory animal welfare and the recognition of the epistemological importance of care for science originated from a hierarchical division between the labor of the "animal house" and the experimental work of the "laboratory." In the 1940s, the care and management of animals for scientific research remained largely unskilled and undervalued labor with little import for the actual work of scientific research beyond the provision of required resources. It was, in short, a safe target that allowed UFAW to promote laboratory animal welfare without any suggestion of an attempt to instruct scientists on how they should approach animal research. On the contrary, UFAW presented its work as intended to improve the extrascientific labor of the animal house so as to better serve the needs of animal research. Animal care was also a strategic target as it posed the familiar challenge of having been hindered in its development because knowledge and technical expertise were scattered across literatures and disciplines that lacked a shared position or concern to act as a cohesive gathering point. Through meetings and publications such as the landmark UFAW Handbook on the Care and Management of Laboratory Animals (Worden 1947), UFAW constituted a new space where diverse parties could work to improve laboratory animal welfare without challenging the legitimacy, authority, or credibility of animal research. In particular, the UFAW Handbook established itself as a standard text, positively reviewed within the scientific literature as an "authoritative work" remarkable for having "persuaded such a large number of people to meet on a common ground of humanity and utility" (Elton 1948, 87). 4 This work contributed to the professionalization of animal care, the emergence of the role of the "animal technician," and the increasing transformation of the animal house from an ad hoc space to a highly regulated scientific environment analogous to the laboratory in postwar Britain. Consequently, the care and welfare of laboratory animals came to be recognized as integral to scientific epistemology and experimental practice. As historians and science studies scholars such as Asdal (2012) have persuasively argued, to properly understand science, "contexts" should be seen as "integral to the very action" of performing science (p. 388). Arguably, the wider social, cultural, and political contexts surrounding the scientific use of animals should, then, be seen as integral to the transformation of animal care and experimental practices over time. This historical process might best be understood as the animal research nexus.
Importantly, UFAW's strategic exploitation of the social and spatial separation of animal care and management from scientific research proper further entrenched the assumption that animal care and welfare concerned the animal house and not the laboratory. Research scientists had no objection to improving the work of the former, in large part because it made little obvious demand on the working practices of the latter. Indeed, the work of animal care and management could safely be ignored entirely, providing animals were available as research activities required them. Recognizing this, and buoyed by the success of the UFAW Handbook, UFAW initiated a second phase of work, which led to the 3Rs. In the first stage, interventions such as the UFAW Handbook (Worden 1947) located concern for the care and welfare of animals in the animal house. By improving husbandry and management practices, and embedding the promotion of animal welfare into the newly imagined role of expert animal technician, UFAW established concern for animal well-being as integral to, but a preliminary step away from, the experimental research in the laboratory. In the 1950s, UFAW turned its attention to experimental practice itself and the work of scientists proper, initiating the work that produced The Principles of Humane Experimental Technique (1959). This strategic, social, and spatial distinction between the work of the animal house and the laboratory goes some way to explain why the language of care is almost entirely absent from The Principles. Instead, what came to be known as "humane experimental technique" established concern for the animal by grounding scientific practice within humanist values. The Principles was not, however, intended to be a work of moral philosophy: as the UFAW Handbook focused on the material labor of animal care, so too did The Principles address the practical work of experimental science. However, it did so in a number of ways, not all of which coalesced into a cohesive and comprehensible study. For instance, very little systematic knowledge existed regarding the present or future needs of the animal research community. Consequently, a preliminary step toward addressing animal welfare in the laboratory was to conduct what today would be seen as a social science investigation of animal research. UFAW's vision for the development of humane experimental technique was, from the start, rooted in a common context that brought together perspectives and approaches from what are now separated across the humanities, social sciences, and life sciences.
In 1955, William Moy Stratton Russell and Rex Burch were employed to conduct this work, the former leading the project as "UFAW Research Fellow" and the latter acting as "field assistant," conducting quantitative and qualitative mapping of animal research in the UK. Russell, a recent Oxford University zoology graduate, appeared ideally suited to UFAW's cross-disciplinary ethos. An experienced animal researcher, Russell also possessed diverse interests that ranged across the biological, psychoanalytic, behavioral, historical, and sociological sciences. His synthetic approach was evident from the start as he embarked on a "historical study of the factors underlying the development and utilisation of new techniques in experimental biology . . . with special reference to changes in technique making for greater humanity." Importantly, Russell considered "the power and precision of experimental methods . . . to be relevant to humanity, since they will tend to reduce the numbers of experimental animals required for obtaining results of a given accuracy." 5 This illustrated how social and scientific values were approached as one, integrated as opposed to separate, reflecting the ethos of the common context that shaped interwar intellectual life in Britain. Russell was a characteristic British interwar scientist, seamlessly weaving biological and cultural history into his narratives and producing synthetic accounts of evolutionary, societal, and human progress (cf. Smith 2003). Not only did this approach lend itself to an unproblematic synthesis of Russell's study of the past and Burch's focus on the present, but it also assumed without question that the practical "factors making in general for increased humanity in the laboratory" were in large part social in character. As such, the improvement of animal welfare in experimental science required, for Russell, approaches and research techniques that today would be associated with the humanities and social sciences. While this interdisciplinary conjunction was ultimately productive, eventually producing the 3Rs that are now widely established as ethical principles governing animal research, it equally hindered the intelligibility and reception of the ethos of humane experimental technique.
The Two Cultures Context
As UFAW continued to work within the common context to develop coherent and pragmatic approaches to the promotion of laboratory animal welfare, by the late 1950s, the shared value system that fostered the seamless weaving of humanities, social, and life sciences approaches was beginning to unravel. Writing in 1956, the lapsed chemist and active novelist C. P. Snow (1956) lamented that a "separation between the two cultures has been getting deeper under our eyes; there is now precious little communication between them, little but different kinds of incomprehension and dislike" (p. 413). Snow described how the loss of a unitary intellectual culture produced a schism in which the humanities and sciences had become increasingly hostile and alienated from each other. In the British intellectual atmosphere of the 1950s, cultural authority was perceived to rest with the "traditional" academic disciplines of the humanities. However, the dominance of the humanities was threatened by a confident and expansive scientific culture. This scientific challenge to the cultural authority of the humanities, which was equally seen to be a challenge to humanist values, was what Snow famously named a schism of the two cultures.
Snow's two cultures were not presented as a literal or essential incommensurable gulf between the humanities and sciences. Rather, he coined the term as a useful metaphor for a series of perceived sociocultural differences. Snow ([1959] 2014) acknowledged as much by describing the phrase as "more than a dashing metaphor, but a good deal less than a cultural map" (p. 9). One significant ambiguity was the place of the "social" sciences in Snow's thesis. At the time, the social sciences had yet to establish a clear identity for themselves within the British academic system and were still relatively novel in American universities (Ross 2003). To the extent that Snow ([1963] 2014) acknowledged the social sciences at all he seemed to have in mind "social historians" who, forming a potential "third culture," had discredited through rigorous scholarship the literary notion that science and technology had had only a deleterious effect upon society and its values (pp. 78-84). Nevertheless, Snow's perceived schism within intellectual culture, shaped by social, political, economic, and intellectual concerns, turned most forcefully on the question of humane values. When Snow later presented his thesis as a public lecture in 1959, he initiated a vociferous debate that continues to this day (Ortolano 2009).
The two cultures thesis was presented in the 1950s as a "new crisis in the universities" because within academia the "split seems to be deepening as each side blames the other" (Lovell 1959, 68). From the scientific perspective, the challenge was to obtain equity and authority within a university system long dominated by the humanities. From the humanist perspective, however, the institutionalization of the sciences had placed "universities in mortal peril" by eroding humanist values in favor of "scientific proofs: model satellites, and a dog martyred in a flying kennel" (Mansell Jones 1959, 11). The two cultures debate was shaped by social concerns over the class makeup of the educated elite, access to economic resources, and the relative worth of disciplinary knowledge. The rhetoric of the two cultures placed at stake the authority to speak on the "human condition" and, in doing so, transformed both the epistemological approach to and the ontological presumptions about that condition. Snow's initial 1956 presentation of the two cultures schism addressed what was lost in the schism, which, in sum, was a fuller understanding of the human condition. Science was "driving down into the problems of will and cause and motive" and so "those who do not understand the [scientific] method will not understand the depths of their own culture" (Snow 1956, 413). Science was revealing a radically different portrait of what it was to be human than that which the humanities had traditionally assumed. Snow (1956) explained that where humanists had widely identified the human condition as fundamentally tragic, the "impulse behind the scientists drives them to take nothing as tragic that can conceivably lie within men's will" (Snow 1956, 413). Science offered "nothing but contempt" for the "defeatist" humanist view of the human condition. Tragedy, Snow observed, sustained a sociocultural politics of conservatism that served humanists whose vested interest was to maintain their cultural authority "somewhere near the top" (Snow 1956). In this early rendition of the two cultures thesis, Snow focused upon the intellectual and moral characteristics of humanists and scientists. He concluded that it was the "moral health of the scientists which, in the last few years, the rest of us have needed most; and of which, because the two cultures scarcely touch, we have been most deprived" (Snow 1956). In other words, the most damaging consequence of the two cultures schism was that it deprived society of the progressive moral values inherent in the sciences.
Yet it was the very values of science that humanists most detested. Or, at least, this became the case following the 1962 retort of the British literary critic F. R. Leavis. Leavis was affronted not so much by the two cultures thesis as by the fact that Snow had had the audacity to speak on the subject. For Leavis, Snow could have no claim to be a scientist or a humanist. To speak for either was a symptom of a technocratic intellectual decrepitude that derived from science and threatened British culture. Leavis ([1962] 2003) dismissed Snow's naive commitment to social progress, demanding to know how one could isolate a "social condition" from the "individual condition" as the worth of "individual lives cannot be aggregated or dealt with quantitatively" (pp. 65-66). Responding to Snow's laudatory account of the industrial revolution as the motor for societal improvement, Leavis cited Ruskin, for whom: well-being or welfare could not conceivably be matters of merely material standards of living, with the advantages of technology and scientific hygiene. And there we have the gap-the gap that is the emptiness beneath Snow's ignorance-between Snow and not only Ruskin, but the great creative writers of the century before Snow: they don't exist for him; nor does civilisation. (Leavis [1962] 2003) Civilization, which for Leavis translated as human worth or what he termed the "third realm," was not found in abstract scientific concepts but rather in the "collaborative-creative process . . . in the living present, in the creative response of individuals, who collaboratively renew and perpetuate what they participate in-a cultural community of consciousness" (Leavis [1962] 2003). This third realm, of which for Leavis literature was the example par excellence, was neither private nor public but something else: what we might perhaps think of as a site of future-oriented creative collective becoming. Where Snow located "social hope" in the technocratic promise of science to improve societal conditions over generations, Leavis abhorred any suggestion that moral value could be reduced to and measured by the material improvement of a population. It was this intellectual climate in which The Principles of Humane Experimental Technique was developed and presented. As such, the soil was unlikely to be receptive to the seed.
The Principles of Humane Experimental Technique in the Context of the Two Cultures
Where Snow and Leavis saw two cultures in conflict, W. M. S. Russell retained a notion of a common culture, albeit one made up of distinctive ways of thinking and working that continuously demanded the labor of synthesis. The Principles did not directly engage the challenge of the two cultures, though the problem clearly influenced, albeit incoherently, Russell's thinking on the challenge to communication posed by disciplinary specialism. Certainly, Russell's adoption of cybernetics was driven by the hope that it could provide a shared interdisciplinary language to overcome the splintering of disciplines into ever more specialist and divided areas. Specialism, for Russell, was the problem, and the solution was the cultivation of a common context. This can be discerned from a short story he penned in response to a challenge set by The Observer to imagine life in 2500 AD. 6 In Russell's story, three brothers competed to retrieve a mysterious object, the winner earning the right to marry the daughter of the "system coordinator" (the ruler of this highly rational future society). The first, Cathodus, was a distinguished physicist (i.e., scientist) and the second, Census, a leading socio-psycho-technologist (for which we might read sociologist and humanist). The third, Biophile, was in the view of the first two a failure who owed his existence to an emotional weakness in their father, who should have followed societal convention in having him euthanized as a child. Biophile was uneducated, irrational, and unable or unwilling to use technology. He spent his time isolated from rational society seeking to learn to live in natural harmony with wild animals (an activity he excelled at, though it had no societal value). As the story unfolded, Cathodus's faith in technology was shown to be misplaced and Census's expertise led him to repeatedly misjudge the alien cultures he encountered. Only Biophile progressed, rescuing his brothers en route, and eventually winning the prize and thus the hand of the system coordinator's daughter. Biophile's success was achieved through his collaboration with a range of animal species. Wealthy, respected, and married to a gifted wife, Biophile established the first academic study of "animal behavior," instantiating intellectual respect for nontechnological knowledge of the natural world and establishing something akin to a common culture. The salient message, underlined by the disdain that the educated Cathodus and Census showed their father and brother, was that perceived cultural divides, whether between disciplines or between intellectual and moral values, needlessly limited individual and collective ambition. Moreover, the pursuit of science and technology without recognition of their inherent humane value damaged society by eroding respectful interest in all life, human and animal alike.
An analogous valorization of synthesis framed The Principles, which dismissed the common misconception of "an irreconcilable conflict between the claims of science and medicine and those of humanity in our treatment of the lower animals" (W. M. S. Russell and Burch 1959, 3). Adopting the then-voguish discourse of cybernetics, which provided a synthetic interdisciplinary language attentive to fostering stability from chaotic systems via reflective "feedback," Russell argued that "the humanest possible treatment of experimental animals, far from being an obstacle, is actually a prerequisite for successful animal experiment" (W. M. S. Russell and Burch 1959, 4). The core message-that a respectful interest in animals and their welfare was a necessary condition for scientific epistemology-was identical to that espoused by Biophile. In a sweeping move that was intended to establish a clear agenda, Russell identified a need: to create a new discipline of applied science. Now that specializations are multiplying with unheard-of rapidity, the creation of yet a new one may cause many hearts to sink; but this new science has the virtue of being a synthetic one, which brings together under a common view-point a vast variety of facts and ideas from a multitude of existing fields. (W. M. S. Russell and Burch 1959, 6) The new discipline was that of humane experimental technique, and it was presented as three principles, the replacement, reduction, and refinement of animal research, or the "3Rs." The 3Rs function as the pragmatic heart of The Principles, the point at which the project of humane experimental technique most clearly aligned with the successful "how to" pragmatism of the UFAW Handbook. This is not to say there was any attempt to meet the style of an instructional handbook. On the contrary, the 3Rs were presented conceptually, as Russell himself admitted: "we have made no attempt to begin the cataloguing of special techniques . . . we have sought only to establish the general principles of this new subject" (W. M. S. Russell and Burch 1959, 6). The 3Rs were the "methods" and later "modes" of "diminishing inhumanity in experimentation" (W. M. S. Russell and Burch 1959, 7), which Russell took to be the goal of humane experimental technique. Diminishing inhumanity was the "criterion of humanity" (W. M. S. Russell and Burch 1959, 157). Humanity entailed a humane disposition toward animals premised upon kindness and benevolence, which in practice required action to diminish suffering and distress. Rather than being a tightly defined normative value, humanity was a general descriptive term, which evoked a specific comportment toward other forms of life that was at once humanistic and scientific. Russell attempted to define humanity objectively, in terms of the causing or diminishing of suffering in the other. This presumed that the experiential state of the animals could be known and measured. At the same time, he retained-even valorized-the commonsense association between humane and moral values. Rather than a disadvantage, the long-standing moral sense of humanity was important because it "reflects the fact that man surpasses all other species in his capacity for social-cooperation" (W. M. S. Russell and Burch 1959, 14).
In this somewhat convoluted and certainly incoherent way, Russell sought to establish the characteristics of good scientific practice through a complex mixture (particularly to modern readers) of scientific, social scientific, and humanist understandings of behavior that framed humane experimental technique as both a human performance and a collaboration across species.
Russell's attempts to provide an objective understanding of humanity, without reducing complexity or evoking normative values, produced a wide-ranging and sometimes incoherent narrative style that preferred the addition of further layers of complexity over a progression toward clarity. For instance, a basic indicator of inhumanity was pain and fear in the animal, which collectively could be thought of as distress. Drawing on recent research on stress, Russell proposed that "anatomical responses to hormones are drastically affected by what might appear to be trivial factors disturbing the 'peace of mind' of the animal." 7 Russell's point was that the social relation between human and animal directly affected animal physiology and as such had consequences for experimental practice. However, his mode of making this point progressed from demonstrable physiological responses to presumptions of animal consciousness. Attribution of consciousness to animals was highly controversial at this time because consciousness was thought to be outside the bounds of experimental scientific inquiry. Yet Russell was unworried by his presumption. Drawing on psychoanalysis, another fashionable form of knowledge (though not so fashionable that the average pharmacologist would be familiar with its finer aspects), Russell claimed that to deny "consciousness in non-human animals" betrayed a "pathological" psyche resulting from traumatic experiences in childhood (W. M. S. Russell and Burch 1959, 15). Psychoanalytic statements such as this, which uncritically integrated moral values, scientific knowledge, and psychosocial health, offered the reader little more than a Hobson's choice: accept animal consciousness or reveal oneself to be psychologically unsound. In the absence of a scientific argument, Russell drew on psychoanalysis to pathologize a moral choice, a narrative style which held together only within the common context of scientific humanism.
Russell's forays through social science, psychoanalysis, and moral philosophy were a disappointment to Hume, who struggled to follow the argument and had, in any case, envisaged a text more in the style of the UFAW Handbook: pragmatic, applied, and actionable. Hume was equally worried by Russell's tendency to criticize rather than develop a positive case for reform. An early draft, possibly the first, read "it is somewhat remarkable that the realization of these facts by experimental workers (other than those specifically studying them) should be so slow and gradual; but so it is . . . and the laboratory is apparently the last stronghold of resistance to the unreserved acceptance." 8 Hume reminded Russell that as an animal welfare society, albeit one closely aligned with scientific culture, UFAW remained an "outsider" and so could not risk outright criticism pushing the potential reader to ask "who are you to tell me how to do my job?" 9 By publication, this modest criticism had been replaced by a capitalized assertion that reference to humanity must "NOT BE TAKEN TO IMPLY ETHICAL CRITICISM OR EVEN PSYCHOLOGICAL DESCRIPTION OF PERSONS PRACTICING ANY GIVEN PROCEDURE" (W. M. S. Russell and Burch 1959, 14). If the humane treatment of animals was the guarantor of reliable scientific research, as The Principles argued, then to suggest that animals had not been or were not being treated as such was to cast doubt on the value of scientific work past and present. Such an adversarial position was anathema to the cause; it had to be assumed that science was sound and that scientists sought to treat animals humanely. While Russell attempted to subtly assert the need for further work on the technical aspects of learning to differentiate humane from inhumane practices, the absence of clear criticism allowed The Principles to be read, in the main, as a vindication of existing animal research rather than a manifesto for changed practices.
Disregarding Hume's concerns, Russell retained his complex interdisciplinary style and, in the final chapter of The Principles, marshalled cutting-edge approaches from the social sciences to further integrate humane, scientific, and psychiatric (psychoanalytic) values. Appropriating the recent work of Theodor W. Adorno, Else Frenkel-Brunswik, Daniel Levinson, and Nevitt Sanford of the University of California, Berkeley, which had identified the so-called authoritarian personality type as being most susceptible to fascism, Russell correlated the authoritarian personality with inhumane dispositions toward animals (C. Russell and Russell 1958). Further, he argued that the pathological "authoritarian" personality was incompatible with thinking "in terms of many variables" and thus was incompatible with science itself (C. Russell and Russell 1958, 154). Regardless of what we might make of such claims today, it is significant that humane experimental technique was premised on the assumption that scientific practice, moral values, and the psychological makeup of human and animal were co-constituted and fundamentally integrated. 10 It followed that humane concern for the animal was integral to both science and scientific identity. Humanist values were aligned with and assimilated within scientific epistemology. Good science demanded good conscience.
The Principles wove together the humanities, the social sciences, and the life sciences, shifting from history to psychoanalysis, through biochemistry, pharmacology, ethology, physiology, cybernetics, psychosomatics, and psychosocial health, often without explicit acknowledgment of the change in register. Russell's shifting disciplinary perspectives were less an experiment in interdisciplinary writing than an exercise in transdisciplinarity. His writing freely took a concept from one discipline (say the biological phenomenon of heterosis) and used it to diagnose and pathologize a social phenomenon (such as the two culture schism) before reaching a quasimoralistic conclusion (the two cultures schism is bad because it threatens academic vitality). Yet viewed through the lens of the common context such imaginative leaps were unexceptional. For scientific humanists, physiological knowledge of how biological life regulated, stabilized, and organized the individual appeared necessarily to have implications for how social life achieved the same for the group (Smith 2003, 219-20). Russell's propensity to explain one discipline's problem with another's knowledge or method, for example, using a biological process to explain a historical trend, was made doubly obtuse by a tendency to draw on cutting-edge work that was not necessarily widely known within, never mind without, its respective discipline. Russell's apparent assumption that the reader was not only a polymath, but one who was up to date with the very latest ideas across a huge range of fields troubled and frustrated Hume, who complained the narrative style was: high-falutin', complicated, obscure, and too long winded. The references to psychoanalysis are of great interest to psychoanalysts, but hardly interesting to readers who have no knowledge of psychoanalysis, who will be in the majority. 11 Hume had a point. Such was the challenge of communicating across disciplines; one had to be familiar with, if not an expert in, each specialism to recognize the importance of each point and how the combination constructed a whole. Few animal-dependent scientists would have been familiar with psychoanalysis, the scientific credentials of which were much disputed. Yet psychoanalytic concepts provided the core justification for Russell's claim that humanist values, scientific epistemology, and a healthy psyche were integral guarantors of good animal research.
Russell's determination to transcend disciplinary boundaries and interlace distinct disciplinary knowledges, which included the physiological science of Ivan Pavlov, the ethological studies of Desmond Morris, and the sociohistorical lessons of classical history (Cleisthenes being a favored source when writing against the two cultures schism), demanded too much of the target audience: animal-dependent scientists. The latter were further distanced by Russell's preference for a theoretical as opposed to a practical approach. In any case, the attempt to locate humanist values at the heart of scientific practice was antithetical to the wider climate. Against the context of the two cultures schism, scientists had little interest in humanism and humanists distrusted science as having little to no meaningful value. A prominent review in Nature concluded that The Principles was not sufficiently informative to be used as a guide to details of experimental design or to husbandry of experimental animals. Perhaps its chief purpose is to stimulate thought on both of these topics, and it is to be hoped that it will succeed in doing so. (Weatherall 1959, 1676) More problematically, the strategic decision to shy away from explicit critique allowed The Principles to be read as endorsing the status quo. If humanity were an epistemological condition for the possibility of reliable science, then it necessarily followed that existing science already met this criterion, since the vast majority of published science was evidently reliable. Where, then, was the need for reform? This was the message taken by most scientific readers of The Principles. One reviewer, for instance, concluded that the book made difficult reading and would likely be "left on the shelf," only to be taken down on occasion to respond to antivivisectionist criticism (Anon 1959).
Conclusions: The Two Cultures as a Challenge to a Culture of Care
Historians of science and medicine have emphasized the importance of understanding context less as an explanatory device and more as an integral part of science (Asdal 2012). At the same time, the language of common context or common culture (Anderson 2014) has been invoked as a means to understand how social, political, economic, and cultural values are embedded in knowledge, whether such knowledge is derived from the natural sciences, social sciences, humanities, or a composite (such as The Principles). Reading The Principles today, we might conclude that it lends support to the argument that the humanities and social sciences have value when aligned with the natural sciences, or that they can make meaningful contributions to laboratory animal research. The contrast between the rise to prominence of the 3Rs as an ethical framework for animal research in recent years and the relative lack of interest in The Principles, both at publication and today, illustrates both the gains and risks of transdisciplinarity (working across several disciplines to produce coherent knowledge and approaches shared by the collective). As disciplinary constraints weaken and new possibilities for asking and answering questions open up, the pathway to, and relevance for, a given audience can diminish in kind. Sensitivity to this tension, as much as to the importance of disciplinary language, would be critical to any endeavor to revive Russell's vision that the humanities and social sciences have something positive to contribute-not only to understanding but also to the practice of animal research (cf. Davies et al. 2016).
As the sciences usurped the cultural authority of the humanities in twentieth-century Britain, the distinctive common context that shaped Russell's world view broke up as the academic topology fractured into two cultures. As a result, The Principles failed to find an audience within the scientific community, in large part because it was embedded within an ethos inherited from Victorian culture that had lost traction and meaning. In the absence of a common context, Russell's eclectic transdisciplinary style, combined with the intent to establish humanist values at the heart of scientific epistemology, held little practical or political value for animal research. Why, then, have the 3Rs survived? One reason is that the principle of "replacement" offered a fresh approach to the antivivisectionist movement. A century of antivivisectionist campaigning had failed to make any tangible impact on the growth of animal research, which in Britain had risen from the hundreds to the millions by the 1960s. For antivivisectionists, the third "R" of replacement provided a practical as opposed to political approach to reversing the exponential growth of animal research. Moreover, working to replace animals in scientific research overcame the long-standing disagreement over gradual or immediate abolition that had fractured the British antivivisectionist movement. In 1961, inspired by The Principles, the National Antivivisection Society, the British Union for the Abolition of Vivisection, and the Scottish Society for the Prevention of Vivisection established the Lawson Tait Memorial Trust to fund research into the development and promotion of "alternatives" to animal research. 12 The goal of replacement, renamed alternatives, was taken up by antivivisectionism as a collective strategy to curtail animal research by funding the development of superior, nonanimal-based approaches to medical research. The antivivisectionist roots of the alternatives movement hindered the ability of early organizations to build trust and credibility with the scientific community. However, following the establishment of further organizations with less obvious ties to antivivisectionism-such as the Fund for the Replacement of Animals in Medical Experiments in 1969, which strategically appropriated scientific as opposed to antivivisectionist discourse-the 3Rs were eventually recovered and found an audience within the scientific community. For some, the 3Rs were the practical tools to bring about an abolitionist agenda, while for others, they stated in a formal sense what was implicit to animal research: a desire to reduce, refine, and, where possible, replace the use of animals. As such, the 3Rs became useful ground on which to build a typical British compromise, where a variety of otherwise opposed positions could coalesce and broadly agree on a way forward. However, in the process, the 3Rs lost their originary humanist roots and became rational procedures capable of aligning moral and scientific values within a pragmatic ethical framework.
Recovering the broader humanistic ethos that shaped The Principles invites reflection on the capacity of the humanities and social sciences to shape animal research. In one explicit retort to the separation of the humanities and sciences, Russell asserted: The other great progressive human activity is art, which is so closely related to science as to be virtually the same activity. Thus it comes that the greatest scientific experiments have always been the most humane and the most aesthetically attractive, conveying that sense of beauty and elegance which is the essence of science at its most successful. (W. M. S. Russell and Burch 1959, 157) Russell believed that the development of humane experimental technique would allow the "aesthetic aspect of experimentation" to "take its place among what are curiously and selectively called the humanities" (W. M. S. Russell and Burch 1959, 163). Here, perhaps, Russell had in mind an exploration of the affinities between the aesthetics and ethics of science. This was to act as a corrective to the incremental alienation of the humanities from the sciences through specialization. In his deeply idiosyncratic style, Russell explained the danger of specialism by mobilizing biological phenomena to reach sociological conclusions. While "harmless in itself," specialism became harmful over time by inhibiting communication across fields, thereby causing "whole streams of science to come to halt for lack of what we might properly call hybrid vigour" (W. M. S. Russell and Burch 1959, 159). Here again, Russell invoked the notion that biological populations that bred only with themselves declined in vigor, productivity, and general health over time, and applied it to specialism and, by extension, the two cultures schism. In Russell's wider research on human behavior, the two cultures schism became an intellectual pathology attributed to a "neurotic taboo" that forbade "the exploration of human feelings and social interactions" (C. Russell and Russell 1957, 196). Integrating ethological, psychoanalytic, neuropsychological, and social sciences, alongside literary and historical argument, was the only appropriate response to the challenge of the "scientific society," which must ensure "the development of social institutions which can oppose any permanent rigid specialization, while permitting maximal variance of behaviour" (C. Russell and Russell 1957, 197).
In writing The Principles, Russell explained that he "sought only to limn the barest of outlines," leaving it to others to "fill in the interior" and develop humane experimental technique (W. M. S. Russell and Burch 1959, 167). Today, the work of filling in the interior is well under way. Yet the relevance of The Principles to such work is not obvious unless the text is understood within its historical context. It was through reference to interiority-both human and animal-that humane experimental technique established its ethical guarantor. Moral values were not to be imposed upon the scientist from without via social pressure or law but rather were embodied in the very epistemology and practice of science. Russell mobilized scientific humanism to embed ethical behavior within the individual personality of the scientist. By investing moral values in scientific identity, Russell placed the scientist as knowing subject with as much at stake in the practice of experimental research as the animal subject. Within humane experimental technique, cross-species epistemological cooperation was elided with ethical codependency: to safeguard their identity as human and scientist, the researcher had to care for the animal. While Russell chose not to develop or articulate the moral and practical importance of his notion of cross-species cooperation in any pragmatic or systematic sense, he did present it as an area of critical importance.
For instance, one suggested criterion for assessing the humanity of a procedure was "that of the animal's behaviour toward the experimenter" (W. M. S. Russell and Burch 1959, 32, emphasis original). This acknowledges that any successful culture of care is subjective, situated, and contingent. As such, humane experimental technique and the 3Rs were never intended to be institutionalized. Rather, they were to be internalized: embodied in the human scientist and enacted in scientific research.
Drawing on the ethologist Konrad Lorenz, Russell insisted that experimentalists must ensure their animal "subjects" possessed mens sana in corpore sano, a healthy mind in a healthy body, noting that we "will not get the one without the other" (W. M. S. Russell and Burch 1959, 13). This established a clear obligation toward domesticated animals that: have lost many of their original responses, and suffered disruption of a formerly well-organized and dove-tailed behaviour system, in connection with their long history in a new kind of environment, one in which many of their needs may be supplied by man . . . We have often to supplement their behaviour, for we are now an essential part of their world. (W. M. S. Russell and Burch 1959, 32) As the animals' environment was a "man-made ecology," the question of animal welfare was less of "an intrinsic problem, as those of wild animals are to some extent, but a problem of human sociology; for they are determined by human needs and decisions" (W. M. S. Russell and Burch 1959, 32-33). In the laboratory encounter, human and animal were codependent: shaped by and shaping each other. The salient difference was that environmental factors within the laboratory were more open to human control. "[S]cience," Russell wrote, was "indissolubly linked to the social activity of co-operation, which will find its expression in relation to other animals no less than to our fellow humans" (W. M. S. Russell and Burch 1959, 157). Once it is accepted that the experimental animal's world is in large part a human construction, it follows that the humanities and social sciences have a role to play in improving animal research and welfare.
Acknowledgments
The author thanks the editors of Science, Technology, & Human Values, alongside the anonymous reviewers, for their advice, constructive criticism, and support, which collectively improved this work. Finally, I gratefully acknowledge the Wellcome Trust for their generous support of this research (via grant 205393/C/16/Z).
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Wellcome Trust (grant 205393/C/16/Z).
"Philosophy"
] |
On-Site Validation of a Microwave Breast Imaging System, before First Patient Study
This paper presents the Wavelia microwave breast imaging system that has been recently installed at the Galway University Hospital, Ireland, for a first-in-human pilot clinical test. Microwave breast imaging has been extensively investigated over the last two decades as an alternative imaging modality that could potentially bring complementary information to state-of-the-art modalities such as X-ray mammography. Following an overview of the main working principles of this technology, the Wavelia imaging system architecture is presented, as are the radar signal processing algorithms that are used in forming the microwave images in which small tumors could be detectable for disease diagnosis. The methodology and specific quality metrics that have been developed to properly evaluate and validate the performance of the imaging system using complex breast phantoms that are scanned at controlled measurement conditions are also presented in the paper. Indicative results from the application of this methodology to the on-site validation of the imaging system after its installation at the hospital for pilot clinical testing are thoroughly presented and discussed. Given that the imaging system is still at the prototype level of development, a rigorous quality assessment and system validation at nominal operating conditions is very important in order to ensure high-quality clinical data collection.
Introduction
Microwave imaging for medical applications has been of interest for many years. The microwave images are maps of the electrical property distributions in the body. The electrical properties of various tissues may be related to their physiological state; notably, there has been some evidence of changes in the properties of cancerous tissues when compared to normal tissues. Cancer detection with microwave imaging is based on this contrast in electrical properties. Microwave imaging, as an alternative imaging modality to X-ray mammography for breast cancer detection, has interested many researchers during the last 20 years [1][2][3][4][5].
Among them, at least four research teams have performed clinical testing of their experimental prototypes [6][7][8][9][10][11], demonstrating numerous positive results and a potential added value of the microwave technology toward a better specificity and/or sensitivity in breast cancer diagnosis when combined with the state-of-the-art modalities. The potential for regular follow-up of the patient during breast cancer treatment has also been envisaged using the microwave technology [8]. The interested reader is directed to a series of review papers that have been recently published [12][13][14]; these papers provide an extensive overview of the microwave breast imaging system prototypes that have been developed to date.
As mentioned in the introduction, the device consists of the microwave breast imaging subsystem and the optical breast contour detection subsystem. The microwave breast imaging subsystem is an active device that illuminates the breast with non-ionizing, low-power electromagnetic waves in the microwave frequency spectrum, which penetrate the breast under examination. The subsystem collects the scattered electromagnetic waves and recovers pertinent information about the breast tissue consistency based on the dielectric contrast of these tissues. The optical breast contour detection subsystem serves to provide the total volume and boundary contour of the breast, as a priori information for the microwave breast imaging subsystem.
During the examination, the patient will be lying in a prone position on the examination table. A dedicated circular opening on the examination table will permit the immersion of the breast in a specific liquid, which will serve as a coupling (transition) medium between the imaging system and the breast. The coupling liquid has been appropriately manufactured such that it has electromagnetic properties favoring the penetration of the electromagnetic wave in the breast.
The intended performance of the device is to unambiguously detect the presence of breast malignant lesions and estimate their 3D location within a given level of accuracy. While the ultimate goal is the diagnosis of breast cancer at an early stage of development, in the course of the pilot firstin-human trial, the achievable performance of the device will only be verified against benchmark cases of prediagnosed palpable cancers. To this extent, co-registration of the imaging results with available images from reference modalities (X-ray mammogram and/or ultrasound scans) will be performed. Thus, a "ground truth" will be available to assess the performance of the prototype device under test.
In Figure 2, a top view of the Wavelia microwave breast imaging subsystem examination table, as well as a zoomed view on the transition liquid in which the breast is immersed during the scan, are shown.
Microwave Breast Imaging at Prone Position: The Principle
The microwave imaging scan is performed using a network of 18 wideband Vivaldi-type antennas in a horizontal circular configuration. The sensors are located outside a container that hosts the coupling liquid. The sensors are piloted to perform a vertical motion such that the full breast volume is appropriately illuminated during the scan. The scan takes approximately 10 min for a breast of medium size, such as the breast phantom used for the on-site validation of the system.
A schematic description of the prone examination setup is shown in Figure 3. The technology is very safe. The emitted microwave power level inside the breast is limited physically by the capacity of the radiofrequency components, such that the maximum radiated level inside the breast is always lower than 50 mW. Calculations have been performed for the localized specific absorption rate (SAR) in the breast. The maximum localized SAR in the breast complies with the International Commission on Non-Ionizing Radiation Protection (ICNIRP) recommendations and the European Union (EU) Directive 1999/519/CE on the limitation of exposure of the public to electromagnetic fields (compliance with a safety factor of four). The Radio-Frequency (RF) front end is based on a vector network analyzer architecture. The resulting emission/reception RF chain has a dynamic range of 75 dB.
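To make the power and dynamic-range figures concrete, the sketch below runs a rough link-budget check. Only the 50 mW transmit cap and the 75 dB dynamic range come from the description above; the round-trip attenuation value and the function names are illustrative assumptions, not Wavelia design data.

```python
# Illustrative link-budget check (values other than the 50 mW cap and the
# 75 dB dynamic range are placeholder assumptions, not device specifications).

import math

def dbm(power_mw: float) -> float:
    """Convert a power in milliwatts to dBm."""
    return 10.0 * math.log10(power_mw)

tx_power_dbm = dbm(50.0)           # 50 mW cap -> ~17 dBm (stated in the text)
dynamic_range_db = 75.0            # emission/reception chain dynamic range (stated in the text)
assumed_round_trip_loss_db = 60.0  # hypothetical loss through liquid, skin and tissue, both ways

echo_level_dbm = tx_power_dbm - assumed_round_trip_loss_db
weakest_usable_dbm = tx_power_dbm - dynamic_range_db  # floor implied by the dynamic range

print(f"Tx power:        {tx_power_dbm:6.1f} dBm")
print(f"Echo level:      {echo_level_dbm:6.1f} dBm")
print(f"Usable floor:    {weakest_usable_dbm:6.1f} dBm")
print("Echo within dynamic range:", echo_level_dbm >= weakest_usable_dbm)
```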
From the perspective of microwave imaging, the anatomy of the breast can be simplified to the following [14]:
• An adipose layer directly below the skin. This layer consists of vesicular cells filled with fat, which are aggregated into lobules and separated by Cooper's ligaments;
• The mammary glands: the innermost tissue of the breast consists of about 15-20 sections, termed lobes, with many smaller sections of mammary glands, which are arranged in a circular fashion. These lobes and ducts are also surrounded by Cooper's ligaments, which have the function of maintaining the inner structure of the breast and supporting the tissue attached to the chest wall;
• Posterior to the breast is the major pectoral muscle, as well as ribs two to six.
Breast tumors typically originate in the glandular tissue. The increased volume of water within the cancerous tissue is responsible for the strong electromagnetic scattering associated with microwave imaging. The increase of sodium and water, particularly bound water, within the tumor cells leads to the greater conductivity and permittivity of the tumorous tissues [18,19].
Several studies have examined the dielectric properties of normal and cancerous breast tissue. Indicatively, in 1992, Campbell and Land measured in vitro the complex permittivity of female breast tissue at 3.2 GHz [20]. They reported a significant dielectric contrast between normal (fatty tissue and all other healthy breast tissues) and tumor tissue. They also suggested that due to the similarity in the dielectric properties of malignant and benign tumors, it might not be possible to distinguish between the two based on dielectric properties alone. Some additional characteristics that are inherent to benign and malignant tumors have the potential to be helpful for tumor classification using microwave imaging, such as the tumor shape and surface texture [21][22][23]. Malignant tumors usually present the following characteristics: irregular and asymmetric shapes, blurred boundaries (lack of sharpness), rough and complex surfaces with spicules or microlobules, non-uniform permittivity, distortion in the structure of the breast, and irregular tissue density (due to masses and calcifications). Conversely, benign tumors tend to have the following characteristics: spherical, oval, or at least present well-circumscribed contours, compactness, and a smooth surface.
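The frequency dependence of such tissue dielectric properties is commonly captured with dispersive models fitted to measured data. The snippet below evaluates a single-pole Debye model over the microwave band as a minimal illustration; the parameter values are placeholders chosen only to show the qualitative contrast between high-water and low-water tissues, not the values reported in the cited studies.

```python
# Minimal single-pole Debye sketch of frequency-dependent complex permittivity.
# Parameter values are illustrative placeholders, not measured tissue data.

import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def debye(freq_hz, eps_inf, delta_eps, tau_s, sigma_s):
    """Complex relative permittivity of a single-pole Debye medium with static conductivity."""
    omega = 2.0 * np.pi * freq_hz
    return eps_inf + delta_eps / (1.0 + 1j * omega * tau_s) + sigma_s / (1j * omega * EPS0)

freqs = np.linspace(1e9, 4e9, 7)  # microwave band of interest for breast imaging
tumour_like = debye(freqs, eps_inf=7.0, delta_eps=45.0, tau_s=10e-12, sigma_s=0.7)
adipose_like = debye(freqs, eps_inf=3.0, delta_eps=2.5, tau_s=13e-12, sigma_s=0.05)

for f, t, a in zip(freqs, tumour_like, adipose_like):
    print(f"{f/1e9:.1f} GHz   tumour-like eps'={t.real:5.1f}   adipose-like eps'={a.real:4.1f}")
```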
In the Wavelia microwave breast imaging device, multistatic radar detection technology [24,25] is employed. In multistatic radar imaging systems, each element of a fixed-element array illuminates the imaging scene in turn, while the other antennas record scattering at various angles from the transmitter boresight. Due to the spatial diversity of the receiving antennas, the multistatic approach acquires enhanced information about the scatterers, using received signals that propagate outwards via different routes. The number of illuminating paths is limited by the array geometry.
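A small bookkeeping sketch of the multistatic acquisition described above: with an 18-element circular array, each antenna transmits in turn while the others receive, giving 18 x 17 ordered transmit/receive pairs per scan position (half as many if reciprocal pairs are merged). The array radius used for the coordinates is an assumption made purely for illustration.

```python
# Multistatic pair bookkeeping for a circular array. The 18-element count comes
# from the text; the 0.1 m radius is an illustrative assumption.

import itertools
import math

N_ANTENNAS = 18
RADIUS_M = 0.10  # assumed array radius, for illustration only

positions = [
    (RADIUS_M * math.cos(2 * math.pi * k / N_ANTENNAS),
     RADIUS_M * math.sin(2 * math.pi * k / N_ANTENNAS))
    for k in range(N_ANTENNAS)
]

# Ordered bistatic pairs: transmitter index != receiver index.
pairs = [(tx, rx) for tx, rx in itertools.permutations(range(N_ANTENNAS), 2)]

print("ordered Tx/Rx pairs:", len(pairs))          # 306
print("unique unordered pairs:", len(pairs) // 2)  # 153
print(f"adjacent antenna spacing: {math.dist(positions[0], positions[1]) * 100:.1f} cm")
```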
Due to the dielectric contrast between the different breast tissues at the microwave frequency range [19,20,26], back-scattered radar signals are physically generated. The received radar echoes are appropriately processed in order to detect and localize any significant scatterers (tumors) in the breast. An increased level of coherence of reflections originating from a given location results in the high intensity of the radar image at the given location in the breast, thus suggesting the presence of a significant scatterer.
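The coherent-summation idea can be illustrated with a minimal delay-and-sum beamformer: for every candidate voxel, each received echo is sampled at the delay expected for its transmit/receive pair and the samples are summed, so that reflections originating at that voxel add up in phase. This generic textbook sketch assumes a single uniform propagation speed for the whole medium and is not the Wavelia reconstruction algorithm; all names are hypothetical.

```python
# Minimal delay-and-sum (coherent summation) imaging sketch. A uniform
# propagation speed is assumed, which greatly simplifies the layered
# liquid/skin/tissue path; this is a generic illustration only.

import numpy as np

C0 = 3e8  # speed of light in vacuum, m/s

def delay_and_sum(signals, t_axis, tx_pos, rx_pos, voxels, eps_r_background=25.0):
    """signals[k]: 1D time-domain echo for pair k; tx_pos[k]/rx_pos[k]: antenna
    coordinates as numpy arrays (m); voxels: (M, 3) candidate points (m).
    Returns one intensity value per voxel."""
    v = C0 / np.sqrt(eps_r_background)  # assumed uniform speed in the medium
    image = np.zeros(len(voxels))
    for m, p in enumerate(voxels):
        acc = 0.0
        for k, s in enumerate(signals):
            # Expected two-way travel time for this transmit/receive pair.
            tau = (np.linalg.norm(tx_pos[k] - p) + np.linalg.norm(p - rx_pos[k])) / v
            acc += np.interp(tau, t_axis, s)  # sample the echo at that delay
        image[m] = acc ** 2  # coherent sum -> intensity
    return image
```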
Prior to radar imaging of the interior of the breast, pre-processing of the backscattered signals is performed to remove artifacts in order to accentuate the useful radar echoes of weak power level. The strong artifacts mainly consist of direct coupling between the antennas, skin reflections, and antenna reverberation. Following artifact removal, an effective radar-imaging algorithm is employed to unambiguously detect the presence of tumors and accurately localize them, while simultaneously suppressing clutter due to the normal heterogeneity of breast tissue.
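One simple artifact-removal strategy used in the radar breast-imaging literature is to subtract the average response across channels, since the direct antenna coupling and the skin reflection are nearly identical on every channel while the tumor echo is not. The sketch below shows only this generic step; the pre-processing actually implemented in the device is not specified at this level of detail.

```python
# Generic artifact-removal illustration: subtract the across-channel average
# from every received trace. Direct coupling and skin reflections, being nearly
# common to all channels, are largely cancelled, while channel-dependent echoes
# survive. This is not necessarily the device's actual pre-processing.

import numpy as np

def remove_common_artifact(traces: np.ndarray) -> np.ndarray:
    """traces: (n_channels, n_samples) time-domain signals for one scan position."""
    common = traces.mean(axis=0, keepdims=True)  # estimate of coupling + skin response
    return traces - common
```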
Apart from using reflected microwave energy to reconstruct images of the breast, the radar target signatures may contain additional information on the shape, size, and other features of the tumor. This information could potentially be exploited for discrimination between benign and malignant lesions.
The Breast Phantoms
During the design phase, but also for the on-site validation, the imaging device has been deployed with phantoms that simulate the real breast. These phantoms have been manufactured considering:
• Realistic breast shapes extracted from a publicly available database of real MRI breast images [27];
• The state-of-the-art knowledge of the dielectric properties of normal and malignant breast tissues in the frequency range of interest [26,28-30];
• Realistic asymmetric tumor shapes and sizes [22,23,31].
The manufactured breast phantoms have been presented in further detail in [32,33] by A. Fasoula et al.
The breast phantom repository, as published by the University of Wisconsin [27], has been used to define MRI-based realistic breast geometries. Based on this, rigid plastic molds have been 3D-printed for the breast outer surface, as well as for the segmented fibroglandular tissue in the breast MRI image, after the minimum required simplification, such that the geometry is printable in a limited number of compartments. For the imaging tests, both molds are filled with liquids mimicking the adipose and fibroglandular tissue [34]. Each liquid is poured into its corresponding mold compartment; the compartment walls are sufficiently thin to avoid a significant impact on electromagnetic wave propagation. Solid mixtures of graphite, carbon black, and urethane are used to manufacture the skin and tumor phantoms. The formula published in [35] by J. Garrett et al. has been slightly adjusted to achieve solid mixtures with appropriate dielectric properties mimicking the corresponding types of tissues.
In Figure 4, the geometry of one of the breasts of class ACR3 (heterogeneously dense) that has been selected from the database for the on-site validation of the imaging system, as well as the corresponding 3D-printed molds, are depicted. The selected adipose tissue-mimicking liquid has a mean dielectric constant εr = 5, while the fibroglandular tissue-mimicking liquid has a mean dielectric constant εr = 36.
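As a quick sanity check, the dielectric constants quoted for the two tissue-mimicking liquids set very different in-medium wavelengths, which in turn bound the spatial scale that the radar images can resolve. The calculation below assumes a mid-band frequency of 2.5 GHz purely for illustration.

```python
# In-medium wavelength implied by the phantom liquids' dielectric constants,
# evaluated at an assumed mid-band frequency of 2.5 GHz (illustrative choice).

import math

C0 = 3e8  # m/s

def wavelength_mm(freq_hz: float, eps_r: float) -> float:
    """Wavelength in a low-loss dielectric, in millimetres."""
    return C0 / (freq_hz * math.sqrt(eps_r)) * 1e3

for label, eps_r in [("adipose-mimicking liquid", 5.0), ("fibroglandular-mimicking liquid", 36.0)]:
    print(f"{label} (eps_r={eps_r:g}): {wavelength_mm(2.5e9, eps_r):.0f} mm")
```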
As depicted in Figure 4, a 2-mm thick skin layer with mean dielectric constant εr = 38 is attached to the breast outer surface mold. The selected material, apart from its adequate mean dielectric properties, also has a dispersive profile of complex permittivity well-fitting to the skin dielectric properties, as reported in the relevant literature [28].
A tumor is simulated by use of a microlobulated solid having a diameter of 14 mm and a mean dielectric constant εr = 52 within the frequency band of interest. The shape of the tumor phantom is based on a Gaussian random sphere (GRS) model of the breast lesions [21][22][23]. Aside from its adequate mean dielectric properties, the selected material used for this phantom has a dispersive profile of complex permittivity that fits well with the one of malignant breast tissue, as reported in the literature [29].
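The microlobulated shape can be pictured with a crude numerical sketch in which the radius of a 7 mm sphere is perturbed by a few random angular harmonics. This is only meant to convey the idea of an irregular, lobulated surface; it does not reproduce the Gaussian random sphere statistics actually used for the phantom design, and all values other than the 14 mm diameter are arbitrary.

```python
# Crude lobulated-surface sketch: perturb a sphere's radius with a few random
# low-order angular harmonics. Not the actual Gaussian random sphere model;
# only the 7 mm base radius (14 mm diameter) comes from the text.

import numpy as np

rng = np.random.default_rng(0)

def lobulated_radius(theta, phi, base_r=7e-3, n_modes=6, rel_amplitude=0.15):
    """Radius (m) versus spherical angles for a randomly lobulated surface."""
    r = np.full_like(theta, base_r)
    for _ in range(n_modes):
        l = rng.integers(2, 6)                   # angular frequency of the lobe pattern
        a = rel_amplitude * base_r * rng.standard_normal()
        psi = rng.uniform(0, 2 * np.pi, size=2)  # random phase offsets
        r += a * np.cos(l * theta + psi[0]) * np.cos(l * phi + psi[1])
    return r

theta, phi = np.meshgrid(np.linspace(0, np.pi, 60), np.linspace(0, 2 * np.pi, 120), indexing="ij")
r = lobulated_radius(theta, phi)
print(f"radius range: {r.min() * 1e3:.1f} mm to {r.max() * 1e3:.1f} mm")
```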
A photo of the tumor phantom used during the validation tests of the microwave breast imaging device is shown in Figure 5. Two snapshots from the preparation of the experimental setup, before a typical validation test of the device, are shown in Figure 6.
The Physical Considerations and Modeling
In order to properly design the data processing algorithms for such a device, it is fundamental to take into consideration the anatomy of the human female breast and translate it into an electromagnetic wave propagation problem to be resolved. As depicted in Figure 7, the breast skin layer, the fat, and the network of glandular tissue lobules and ducts of the human female breast have been considered to model the wave propagation path through the breast, before potentially reaching a tumor. As stated earlier, a network of sensors encircles a cylinder about which a vertical microwave imaging scan is performed.
The cylinder is filled with a coupling transition liquid into which the breast is immersed. The transition liquid optimizes the transmission of the electromagnetic waves from the antennas into the breast (a function similar to that of the gel used in ultrasound echography to optimize the transmission of the ultrasound waves from the probe to the interior of the body). Thus, the transition liquid has been designed to have a real permittivity that closely matches the permittivity of the human skin, as specified by Lazebnik et al. [28]. At the same time, the conductivity of the liquid has been designed to introduce non-negligible propagation losses, thus mitigating the strong multipath waves that propagate in the cylinder without ever entering the breast, as initially suggested in [36] by P. Meaney et al. The real permittivity of the transition liquid ranges between 25 and 30, and its conductivity ranges between 0.2 S/m and 1.2 S/m in the working frequency band F = [1–4] GHz. The liquid is based on organic oil and deionized water, mixed at a proportion such that the desired dielectric properties are achieved.
Given the above considerations, the data processing algorithms of such a device should be designed such that useful information for breast imaging is acquired when the electromagnetic wave emitted from one sensor is received by another sensor of the network in a bistatic configuration, after propagating through a chain of non-planar layers with distinct dielectric properties (real permittivity and conductivity): the transition liquid, the skin, the adipose tissue, and the fibroglandular tissue (Figure 7). The contrast in the dielectric properties of consecutive layers determines the intensity of the echoes that are generated as the electromagnetic wave transitions through the respective layers. Thus, a significant tumor echo is evoked only if sufficient dielectric contrast exists between the normal glandular tissue and the cancerous tissue.
In addition, both the breast tissue and the transition liquid are materials of non-negligible conductivity that introduce noticeable radar wave propagation losses. This means that even if sufficient dielectric contrast exists to evoke significant reflection from tumors, the propagation losses along the path between the sensors and the tumor will lead to reflected signals of weak intensity compared to the unwanted reflections originating closer to the sensors. Namely, it is the interaction (coupling) between the antennas themselves, as well as the reflections that are generated by the skin layer once the electromagnetic wave impinges on the external surface of the breast, which represent signals that are several orders of magnitude larger in intensity than the weak reflections originating from the interior breast tissues.
Given the above principles, which are related to the physical nature of the problem, an imaging algorithm that is carefully customized for the application has been designed.
The Data Pre-Processing Steps
Several pre-processing steps are applied to the data measured by each couple of transmitting/receiving antennas before this data can be efficiently used for imaging. The objective of the pre-processing steps is to mitigate the strong coupling between the antennas and the strong interference originating from the skin and other inner-wall reflections close to the breast surface. In the actual experimental setup, the effective employment of the data pre-processing steps reveals useful radar target echoes 30-40 dB below the raw measured data power level. However, the data pre-processing steps, being directly linked to the nature of the measured signal, are susceptible to evolving once the imaging system is employed in the clinical setting.
•
Data calibration in the presence of the breast

As a first step, drift correction, with respect to a reference channel, is applied to the raw data measured by each couple of transmitting/receiving antennas; any time-varying drifts are thus eliminated before further processing of the signal.
The presence of the breast in close vicinity to the sensor network significantly modifies the measured coupling between sensors. For this reason, a calibration process is employed to dynamically estimate the coupling signal based on a subset of data from the scan that has been measured under similar conditions. The data-driven estimation is performed in the frequency domain at each vertical scan position and for each Tx/Rx couple in the network. The estimated coupling signal is then subtracted from the drift-corrected raw data Dat_DriftCorr,Txi/Rxj,Hn(f), yielding the calibrated data DCal_Txi/Rxj,Hn(f).
A multiplicative compensation factor PhCen_Corr,Txi/Rxj(f, ε_r,trans(f)) is then applied to the calibrated data in order to geometrically align the data. The phase-center compensation term is computed for each Tx/Rx couple in the network at each operating frequency point and depends on the dielectric constant ε_r,trans(f) of the transition liquid, as a function of frequency. Conditioned on temperature preservation within the operating limits of the device, such that the transition liquid dielectric properties are known, this term does not require dynamic data-driven estimation; it is defined a priori and stored during the system characterization at the factory.

A separate estimation module uses as input a reduced set of data from the microwave breast imaging system, which, after calibration for removal of the strong antenna coupling, is used to reconstruct the external surface of the breast with limited accuracy. The calibrated data is used in conjunction with an active contour model to estimate a simple closed contour representing the skin return boundary, based on bistatic wave-front detection, at each vertical scan position. The algorithm has been presented in more detail in [37] by P. Lawrence et al.
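As an illustration of this calibration chain, a minimal sketch follows. It is not the Wavelia implementation: the array shapes, the median-based coupling estimator, and the plane-wave phase-center model are assumptions made only for illustration.

```python
# Minimal sketch (not the Wavelia implementation) of the calibration chain described
# above: drift correction against a reference channel, data-driven coupling removal,
# and a frequency-dependent phase-center compensation. Shapes and estimators are
# illustrative assumptions only.
import numpy as np

def drift_correction(raw, ref):
    """raw, ref: complex arrays (n_heights, n_freq); divide out the reference drift."""
    return raw * (ref[0:1, :] / ref)

def estimate_coupling(drift_corrected):
    """Estimate the Tx/Rx coupling as a robust average over the vertical scan positions."""
    return np.median(drift_corrected.real, axis=0) + 1j * np.median(drift_corrected.imag, axis=0)

def phase_center_compensation(freqs_hz, eps_r_trans, d_phase_center_m):
    """Multiplicative factor realigning the antenna phase centers (assumed plane-wave form)."""
    c0 = 299792458.0
    k = 2 * np.pi * freqs_hz * np.sqrt(eps_r_trans) / c0
    return np.exp(1j * k * d_phase_center_m)

# Illustrative usage with synthetic numbers
freqs = np.linspace(1e9, 4e9, 101)
rng = np.random.default_rng(0)
raw = rng.standard_normal((8, freqs.size)) + 1j * rng.standard_normal((8, freqs.size))
ref = 1.0 + 0.01 * rng.standard_normal((8, freqs.size))
dcorr = drift_correction(raw, ref)
dcal = (dcorr - estimate_coupling(dcorr)) * phase_center_compensation(freqs, 28.0, 0.02)
```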
•
Independent Component Analysis, in the frequency domain

Independent component analysis (ICA) is a well-known method for finding underlying factors, or components, from multivariate statistical data [38,39]. The ICA method has been used extensively in various application domains, including medical imaging, for feature extraction and selection, or even pathology identification [40,41].
What distinguishes ICA from other methods is that it looks for components that are both statistically independent and non-Gaussian. Given a set of observations of stochastic processes x_1(t), x_2(t), …, x_m(t), where t denotes the sample index, assume that they are generated as a linear mixture of independent components y = W·x, where W is some unknown matrix. Independent component analysis consists of estimating the mixing matrix W such that the non-Gaussianity of the components y_i(t) is maximized. The kurtosis and the negentropy are two of the most commonly employed measures of non-Gaussianity for estimating the mixing matrix W [38].
In the case of radar signals, ICA can be performed either in the time domain or in the frequency domain [42][43][44][45]. In our data processing chain, we have opted for the frequency-domain ICA, applied to the calibrated data DCal_Txi/Rxj,Hn(f) per Tx/Rx couple at each vertical scan position H_n.
Segmentation of the data vector into frames of appropriate length, via application of a sliding window in frequency, is initially applied. Principal component analysis (PCA) is subsequently performed for data pre-whitening and dimensionality reduction [46], prior to input into the ICA algorithm. The selected sliding step in frequency is an important parameter that is directly linked to the spectral properties of the underlying signal and the principal modes to be preserved after pre-processing. The ICA operation is denoted in Equation (2), where Dat_PCA−TAB,Txi/Rxj,Hn (of size M × N_f) is the block of M principal modes that is provided as input to the ICA algorithm, Dat_ICA−TAB,Txi/Rxj,Hn (of size M × N_f) is the block of M ICs, as estimated by the algorithm, M corresponds to the number of sliding windows in frequency that is initially selected, and N_f is the number of frequency samples in the measured data vector.
Dat_ICA−TAB,Txi/Rxj,Hn = W^H · Dat_PCA−TAB,Txi/Rxj,Hn, ∀ H_n and Tx_i/Rx_j   (2)

• Data filtering: IC Selection with Appropriate Spectral and Geometry-based Features

The function of the ICA data pre-processing step is to classify and separate useful radar target echoes from interference (strong clutter components), based on:
the distinct spectral properties of the various radar target echoes, i.e., frequency dispersion is normally translated into higher kurtosis [47,48];
the estimated location from which each IC radar echo originates: an inverse fast Fourier transform (IFFT) is applied to transform each IC from the frequency domain to the time domain; the correspondence between time and distance is established using as input the prior estimate of the breast contour, the known dielectric properties of the transition liquid, and an assumption on the average dielectric properties in the interior of the breast (directly derived from an assumption on the percentage of fibroglandular versus adipose tissue in the breast).
Given the above considerations, two filtering steps are sequentially applied to the data:
Filtering out ICs with a spectral profile incompatible with radar target echoes originating from the breast tissues, given the expected level of frequency dispersion; in the future, additional pattern features may be identified and employed at this filtering step, based on measurements with real breast tissues.
Filtering out ICs associated with radar target echoes originating from either very short distances (residual coupling) or very long distances (multipath) with respect to the sensors; the ICs filtered out at this step cannot physically correspond to the breast tissues, in terms of geometry.
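A minimal sketch of the sliding-window segmentation, PCA pre-whitening, ICA separation, and the two IC filtering steps is given below. It is a simplification: sklearn's FastICA handles real-valued data only, so the complex spectrum is split into real and imaginary parts, and the kurtosis threshold and bin-to-distance mapping are placeholder assumptions rather than the system's characterized values.

```python
# Minimal sketch of the sliding-window PCA + ICA separation and the kurtosis/distance
# based IC filtering discussed above. Thresholds and mappings are illustrative.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import PCA, FastICA

def sliding_frames(x, win, step):
    """Segment a 1-D complex spectrum into overlapping frequency frames (M x win)."""
    idx = np.arange(0, x.size - win + 1, step)
    return np.stack([x[i:i + win] for i in idx])

rng = np.random.default_rng(1)
n_f = 256
dcal = rng.standard_normal(n_f) + 1j * rng.standard_normal(n_f)      # calibrated channel
frames = sliding_frames(dcal, win=64, step=16)
X = np.vstack([frames.real, frames.imag])                             # real-valued mixture

X_white = PCA(n_components=6, whiten=True).fit_transform(X.T)         # pre-whitening
ics = FastICA(n_components=6, random_state=0).fit_transform(X_white)  # independent components

# Spectral filtering: keep ICs whose kurtosis is compatible with dispersive target echoes.
kurt = kurtosis(ics, axis=0, fisher=False)
keep_spectral = kurt > 3.5                                            # illustrative threshold

# Geometry filtering: reject ICs whose time-domain peak maps to an implausible distance.
ic_time = np.abs(np.fft.ifft(ics, axis=0))
peak_bin = ic_time.argmax(axis=0)
dist = peak_bin * 0.01                                                # toy bin-to-metre mapping
keep_geometry = (dist > 0.03) & (dist < 0.20)

ic_retained = np.where(keep_spectral & keep_geometry)[0]
print("retained ICs:", ic_retained)
```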
• Propagation Loss Compensation
In order for the imaging algorithm to work properly, it is important to compensate for the electromagnetic wave propagation losses, which vary significantly along the working frequency band in the case of the highly-dispersive breast tissues.
Given the estimate of the distance from which the radar target echo associated with each IC originates (as estimated for the purpose of the distance-based filtering), a multiplicative propagation loss compensation term that is both frequency and distance dependent is applied to each IC. A characterization of the propagation loss model applicable to this specific near-field radar imaging setup is required to perform a good compensation. For now, an estimate that achieves partial compensation of the propagation losses is applied; it is planned to be refined in the future.
The energy focusing level retrieved in the images is expected to be degraded for target sources for which the propagation loss compensation has not been properly performed at this pre-processing step. Since the propagation loss compensation term depends on the distance between the sensors and the target location associated with each IC, it also depends on an assumption on the percentage of fibroglandular tissue pc_fib present along the specific bistatic radar path.
Given all of the above considerations, a filtered version of DCal_Txi/Rxj,Hn(f) is reconstructed using the ICs retained after the two filtering stages. A multiplicative propagation loss compensation term is applied separately to each IC before concatenation. In Equation (3), IC_rem denotes the set of IC indices that have been maintained after the two-step filtering, while d_i denotes the bistatic radar distance of the target echo that has been associated with the i-th IC.
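The following sketch illustrates how a per-IC, frequency- and distance-dependent loss compensation term of the form LossComp(f, d_i, pc_fib) could be applied. The simple attenuation model (dB/cm/GHz scaled by the assumed pc_fib) is a placeholder, not the characterized loss model of the system.

```python
# Minimal sketch of the per-IC propagation loss compensation of Equation (3). The loss
# model below is an illustrative placeholder only.
import numpy as np

def loss_comp(freqs_hz, d_i_m, pc_fib, alpha_adipose=0.3, alpha_fibro=2.0):
    """Multiplicative compensation for two-way losses over the bistatic distance d_i."""
    alpha_db_per_cm_per_ghz = (pc_fib / 100.0) * alpha_fibro + (1 - pc_fib / 100.0) * alpha_adipose
    loss_db = alpha_db_per_cm_per_ghz * (freqs_hz / 1e9) * (d_i_m * 100.0)
    return 10.0 ** (loss_db / 20.0)

freqs = np.linspace(1e9, 4e9, 101)
ic_spectrum = np.ones(101, dtype=complex)            # one retained IC (placeholder)
compensated = ic_spectrum * loss_comp(freqs, d_i_m=0.06, pc_fib=40)
```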
The TR-MUSIC (Time-Reversal Multiple Signal Classification) Imaging Algorithm
After pre-processing, the signals measured by the various combinations of transmitting/receiving antennas (thus in various bistatic configurations) are combined in a multistatic radar imaging algorithm to generate an image of the interior breast tissues. The combination of multistatic radar paths in the same imaging algorithm enhances the angular diversity of the input information, thus making the algorithm more robust against clutter (unwanted distributed interference echoes from the interior of the breast) and enhancing the focusing of the image energy on small pronounced targets.
The imaging algorithm that is used is the time-reversal multiple signal classification (TR-MUSIC) algorithm, which was originally conceived for the detection of obscured radar targets in heavily cluttered environments, in the case of surveillance and tracking defense radars [49]. The original definition of the algorithm works optimally for a finite collection of point targets, as is the case when small targets are observed by a radar with limited spatial resolution, or when the first-order Born approximation is valid for the scattering mechanisms that dominate the imaging scene [50]. Further studies have subsequently been performed to generalize the algorithm to cases of multiple scattering phenomena [51] or extended targets, as is the case when a target is large relative to the size of the radar resolution cell [52]. More recently, the algorithm has also been proposed for breast cancer detection in dense breasts [53][54][55][56][57][58], albeit limited to simulations, without experimental data.
The main steps of our implementation of the algorithm are outlined as follows: A limited number N_fsel of frequency points is selected from the total of measured frequency points in the operating band. Sectorization is performed, such that multiple images are formed at each frequency and each vertical position of the sensor network, each time using a different sector of the circular network. The selected number of sensors in the sector is further denoted as N_s. The total number of sectors required to scan over the full 360° around the breast is denoted as N_sect.
Both the selection of specific frequency points, and the physical size and number of elements in the sub-arrays (sectors) used for the elementary image formation, can be critical to the achievable system performance in terms of unambiguous target (tumor) detection in the breast.
Monochromatic (single frequency) images are formed for each selected frequency point and each sector of sensors as follows:
•
The multistatic frequency response matrix (MFRM) K_sect(f) is formed using the calibrated and filtered data at the specific frequency, where each entry [K_sect(f)]_{i,j} is the filtered calibrated datum measured for the couple Tx_i/Rx_j of the sector. The time-reversal operator is subsequently formed as

T_sect(f) = K_sect^H(f) · K_sect(f),

with H denoting the Hermitian transpose.
• Eigenvalue decomposition is performed on T_sect(f), and an appropriate model order selection criterion is used to separate the resulting eigenspace into a signal subspace Q_s, spanned by the eigenvectors associated with the M_ord largest eigenvalues, and a noise subspace Q_n, spanned by the remaining eigenvectors [59], where M_ord is the selected model order. The separation can be a challenging task if there are multiple interacting non-point targets in the imaging scene, as is typically the case for breast imaging. The effective separability between the signal and noise subspaces has a significant direct impact on the final imaging result, given that the principle for the formation of this type of image is the orthogonality between the two subspaces.
The image, or so-called TR-MUSIC pseudospectrum, at the pixel p and the frequency f, when using the sector of sensors sect at the vertical scan position h_j, is formed as

Im_sect,hj(p, f) = 1 / ||Q_n^H(f) · G_sect(p, f)||^2,   (9)

where

G_sect(p, f) = [g_0(p_TRxsect,1, p, f), …, g_0(p_TRxsect,Ns, p, f)]^T   (10)

is the illumination vector of the sector sensor array sect at the frequency f and the pixel location p in the imaging zone. In Equation (10), g_0(p_TRxsect,i, p, f) denotes the elementary Green function (i.e., the impulse response function of the propagation path) from the individual antenna at position p_TRxsect,i to the arbitrary point p in the scanning region at the frequency f, while T denotes the matrix transpose.
The TR-MUSIC pseudospectrum in Equation (9) is maximized, thus highlighting a target presence, at the pixel location p at which the orthogonality constraint between the sensor array illumination vector and the noise subspace is best met. This arises from the assumption that, at a true target location, a linear decomposition of the illumination vector G_sect(p, f) in the signal subspace Q_s exists, such that

G_sect(p, f) = Q_s · a, with a = [a_1, …, a_Mord]^T a set of linear coefficients,   (11)

which, combined with the orthogonality constraint in Equation (8), makes the projection of G_sect(p, f) onto the noise subspace vanish at the target location.
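The following self-contained sketch illustrates the monochromatic TR-MUSIC step on a synthetic point target: building the MFRM under the Born approximation, forming the time-reversal operator, separating the subspaces with a fixed model order, and evaluating the pseudospectrum on a pixel grid. The geometry, the free-space 2D Green function, and the model order are illustrative assumptions, not the system's actual configuration.

```python
# Minimal sketch of one monochromatic TR-MUSIC image for a single synthetic point target.
import numpy as np
from scipy.special import hankel1

c0 = 299792458.0
f = 2.0e9
eps_r = 28.0
k = 2 * np.pi * f * np.sqrt(eps_r) / c0

n_s = 8                                                    # sensors in the sector
ang = np.linspace(-np.pi / 3, np.pi / 3, n_s)
sensors = 0.07 * np.c_[np.cos(ang), np.sin(ang)]           # 7 cm radius arc (assumed)

target = np.array([0.02, 0.0])                             # synthetic point target
def green(src, dst):
    return hankel1(0, k * np.linalg.norm(dst - src, axis=-1) + 1e-9)

K = np.array([[green(sensors[i], target) * green(sensors[j], target)
               for j in range(n_s)] for i in range(n_s)])  # Born-approximation MFRM
T = K.conj().T @ K                                         # time-reversal operator

eigval, eigvec = np.linalg.eigh(T)
order = np.argsort(eigval)[::-1]
m_ord = 1                                                  # assumed model order
Q_noise = eigvec[:, order[m_ord:]]                         # noise subspace

xs = ys = np.linspace(-0.05, 0.05, 101)
pseudo = np.zeros((ys.size, xs.size))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        g_vec = green(sensors, np.array([x, y]))           # illumination vector G(p, f)
        proj = Q_noise.conj().T @ g_vec
        pseudo[iy, ix] = 1.0 / (np.linalg.norm(proj) ** 2 + 1e-12)

print("peak at:", np.unravel_index(pseudo.argmax(), pseudo.shape))
```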
The Composite Image Formation
The monochromatic (single-frequency) image, as defined in Equation (9), may be difficult to exploit as such for unambiguous and comprehensive interpretation of the imaging scene, due to inevitable corruption of the signal by residual noise and interference, even after pre-processing. Frequency diversity is commonly employed to mitigate the presence of frequency-dependent clutter (unwanted interference) radar echoes. The multi-frequency TR-MUSIC image at the sector sect and the vertical scan position h_j is defined in Equation (12) as the integration of the monochromatic pseudospectra over the N_fsel selected frequency points:

Im_sect,hj(p) = Σ_{f ∈ F_sel} Im_sect,hj(p, f)   (12)

In order to assure visibility of the breast over the full azimuth domain of 360°, integration is performed on multiple partial images, computed per sector of sensors all around the breast. The composite image that is formed using all the N_sect elementary multi-frequency images at a given vertical position of the sensor network is defined in Equation (13):

Im_hj(p) = Σ_{sect=1}^{N_sect} Im_sect,hj(p)   (13)

The composite image of Equation (13) is the first type of image that is used for the validation of the imaging system using a well-controlled breast phantom, imaged at a single vertical position of the sensor network, in the vicinity of the tumor phantom.
Integration of multiple partial images of the complete imaging scene, computed all along the vertical scan of the sensor network, is further applied to form the full 3D image of the breast.
The composite image using data from multiple vertical scan positions of the sensor network is defined in Equation (14) as the concatenation of the coronal slice images formed at each scan position:

Im_TOT(p) = { Im_hj(p), j = 1, …, N_h }   (14)

where N_h is the number of vertical scan positions of the sensor network that are used to form the full 3D image.
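A minimal sketch of the image integration chain of Equations (12)-(14) follows. The summation across frequencies and sectors is an assumption based on the "integration" wording above; the per-height concatenation then stacks the coronal slices into a 3D volume.

```python
# Minimal sketch of the composite image formation: sum monochromatic pseudospectra over
# selected frequencies (Eq. 12), sum sector images over the azimuth (Eq. 13), and stack
# the per-height coronal slices into a 3-D volume (Eq. 14). Data here are placeholders.
import numpy as np

n_freq_sel, n_sect, n_h, nx, ny = 5, 6, 4, 64, 64
rng = np.random.default_rng(2)
mono = rng.random((n_h, n_sect, n_freq_sel, nx, ny))   # Im_sect,hj(p, f) placeholders

im_sect_h = mono.sum(axis=2)                           # Eq. (12): multi-frequency image
im_h = im_sect_h.sum(axis=1)                           # Eq. (13): composite per height
im_tot = np.stack([im_h[j] for j in range(n_h)])       # Eq. (14): concatenated 3-D volume
print(im_tot.shape)                                    # (n_h, nx, ny)
```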
The Focusing Metrics, as a Means of Adjustment of the Breast Mean Permittivity
In order to map the multistatic radar echoes to the imaging grid under investigation, a model for the electromagnetic wave propagation modes is employed, as defined in Equations (9) and (10).
In the actual version of the microwave imaging device, propagation in two homogeneous lossless media is considered in the model. The lossless assumption is justified, given that loss compensation has been applied to the pre-processed signals before entering the imaging algorithm, as defined in Section 2.4.2.
Separation of the space into two media is assumed, given that the heterogeneous distribution of the tissues in the interior of the breast is unknown and is sought to be estimated by the imaging algorithm. Thus, the two media that are provided a priori to the imaging algorithm are: the transition liquid between the antennas and the exterior breast surface, and the interior of the breast, associated with an "average" dielectric permittivity that remains homogeneous per coronal slice of the breast. The breast external surface has been estimated beforehand, using a subset of the calibrated data, as mentioned briefly in Section 2.4.2; this information is exploited here to define the border between the two distinct media of propagation.
The elementary Green function g_0(p_TRxsect,i, p, f) involved in Equation (10) is further defined as

g_0(p_TRxsect,i, p, f) = H_0^(1)( k_trans(f) · |p − p_TRxsect,i| + Δk_breast(f) · d_InBreast,i,p ),   (15)

where:
H_0^(1) is the Hankel function of first kind and zero order;
k_trans(f) = (2πf/c_0) · √ε_r,trans(f) is the wavenumber for propagation in the transition liquid;
Δk_breast(f) = (2πf/c_0) · ( √ε̂_r,InBreast(f) − √ε_r,trans(f) ) is an 'average' differential wavenumber for propagation in the breast;
c_0 is the speed of light in vacuum;
ε_r,trans(f) is the known dielectric constant of the transition liquid;
ε̂_r,InBreast(f) is an estimate of the average equivalent dielectric constant of the breast;
d_InBreast,i,p is an estimate of the propagation path in the breast, in the case of a wave propagating from the sensor TRx_sect,i to the pixel p, knowing the wavefront corresponding to the external surface of the breast.
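A minimal numeric sketch of the two-media Green function of Equation (15) is shown below; a fixed in-breast path length replaces the wavefront-based estimate used by the system, and all numerical values are placeholders.

```python
# Minimal sketch of the two-media elementary Green function of Equation (15): phase
# accumulated in the transition liquid over the full sensor-to-pixel distance, corrected
# by a differential wavenumber over the assumed in-breast path length.
import numpy as np
from scipy.special import hankel1

c0 = 299792458.0

def g0(sensor, pixel, f, eps_trans, eps_in_breast, d_in_breast):
    d_total = np.linalg.norm(pixel - sensor)
    k_trans = 2 * np.pi * f * np.sqrt(eps_trans) / c0
    dk_breast = 2 * np.pi * f * (np.sqrt(eps_in_breast) - np.sqrt(eps_trans)) / c0
    return hankel1(0, k_trans * d_total + dk_breast * d_in_breast)

val = g0(np.array([0.07, 0.0]), np.array([0.01, 0.02]), 2.5e9,
         eps_trans=28.0, eps_in_breast=15.0, d_in_breast=0.03)
print(abs(val))
```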
The "average" equivalent dielectric constant of the breast is defined in Equation (16) as a function of the dielectric constants of the adipose and fibroglandular tissue, mixed at proportion pc_fib (expressed in percent):

ε̂_r,InBreast(f) = [ pc_fib · ε̂_r,fibroglandular(f) + (100 − pc_fib) · ε̂_r,adipose(f) ] · 10^−2   (16)

ε̂_r,InBreast(f) is plotted in Figure 8 for various assumptions of pc_fib, while considering for the adipose and fibroglandular tissue dielectric properties those of the corresponding tissue-mimicking liquids used to fill the breast phantom molds of the Wavelia microwave breast imaging system, as defined in Section 2.3. Parametric images are generated under varying assumptions of the percentage of fibroglandular tissue pc_fib along the propagation path from a given transmitting antenna, to the breast and back to a given receiving antenna. The parameter pc_fib impacts both the estimate of the lossless elementary Green function g_0(p_TRxsect,i, p, f), as defined in Equation (15), and the computation of the propagation loss compensation term LossComp(f, d_i, pc_fib) in Equation (3) of the data pre-processing chain. The generated set of parametric images is further evaluated in terms of focusing, using appropriate image focusing measures [60][61][62]. The optimal pc_fib assumption is automatically selected based on maximization of the focusing capability of the imaging algorithm under the specific pc_fib assumption. Given the varying consistency of the heterogeneous breast along the vertical scan, the focusing operation is performed per vertical position of the sensor network, thus on the image type defined in Equation (13).
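For illustration, the mixing rule of Equation (16) can be evaluated for a set of candidate pc_fib values as follows; the adipose and fibroglandular values used here are placeholders for the dispersive tissue-mimicking liquid properties.

```python
# Minimal sketch of the mixing rule of Equation (16): the assumed average in-breast
# dielectric constant as a function of the candidate fibroglandular percentage pc_fib.
import numpy as np

def eps_in_breast(pc_fib, eps_fibro, eps_adipose):
    return (pc_fib * eps_fibro + (100.0 - pc_fib) * eps_adipose) * 1e-2

for pc in np.arange(30, 55, 5):
    print(pc, eps_in_breast(pc, eps_fibro=36.0, eps_adipose=5.0))
```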
For the analysis presented in this paper and used for the on-site validation of the Wavelia imaging system before its pilot clinical test, the image curvature, as defined in [60] by S. Pertuz et al., is used as the focusing metric (FM) for the parametric images. The intensity of the TR-MUSIC pseudospectrum of Equation (13) is interpolated by means of a quadratic surface f(x, y) = c0·x + c1·y + c2·x^2 + c3·y^2, where the vector of coefficients C = [c0 c1 c2 c3]^T is computed through least squares by applying two convolution masks, as defined in [60] by S. Pertuz et al. The curvature of the quadratic surface is used as the focusing metric (FM) for the image:

FM = c0 + c1 + c2 + c3   (17)

The quadratic surface fitting and FM computation is actually performed per regions of interest (ROIs) of limited size on the image. The selected ROI size is related to the image resolution, as well as the size of detectable scattering objects in the radar imaging scene. The maximal image curvature (FM) over all of the ROIs is computed per parametric image. The pc_fib associated with the image with overall maximal curvature is selected as optimal at a given vertical section of the breast (coronal breast slice) in front of the sensor network. The composite multi-height image, as defined in Equation (14), is automatically formed via concatenation of all the coronal slices with maximal curvature (FM).
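The sketch below illustrates the curvature-based focusing metric and the automatic pc_fib selection. It uses a direct least-squares fit per ROI instead of the convolution masks of [60], and the ROI size and the synthetic parametric images are placeholders.

```python
# Minimal sketch of the curvature-based focusing metric: per ROI, fit the quadratic
# surface f(x, y) = c0*x + c1*y + c2*x^2 + c3*y^2 by least squares and take
# FM = c0 + c1 + c2 + c3; the image-level FM is the maximum over ROIs.
import numpy as np

def roi_curvature(roi):
    ny, nx = roi.shape
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.c_[x.ravel(), y.ravel(), x.ravel() ** 2, y.ravel() ** 2]
    c, *_ = np.linalg.lstsq(A, roi.ravel(), rcond=None)
    return c.sum()                        # FM = c0 + c1 + c2 + c3

def focusing_metric(image, roi_size=8):
    ny, nx = image.shape
    fms = [roi_curvature(image[i:i + roi_size, j:j + roi_size])
           for i in range(0, ny - roi_size + 1, roi_size)
           for j in range(0, nx - roi_size + 1, roi_size)]
    return max(fms)

rng = np.random.default_rng(3)
parametric_images = {pc: rng.random((64, 64)) for pc in (30, 35, 40, 45, 50)}
best_pc = max(parametric_images, key=lambda pc: focusing_metric(parametric_images[pc]))
print("optimal pc_fib:", best_pc)
```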
At the current stage of system development, the image formation is performed offline. It may take a few hours for the focusing algorithm to run the multiparametric (multi-pc_fib), multi-sector images for all the vertical (coronal) slices of the breast. The total duration for the composite image formation will depend on the size of each breast (i.e., the number of coronal slices to be processed) and the number of assumptions on the background breast permittivity under test (the size of the parameter set for pc_fib).
The actual implementation is valid, as such, in the case of a single dominant target (tumor) in each coronal slice of the breast. Both the breast phantoms and the clinical setting for the pilot first-in-human testing of the device comply with such a physical assumption. An appropriate extension of the algorithm is planned for the near future in order to properly handle the realistic case of multiple lesions being present, to be detected and accurately localized per coronal slice of the breast.
An example of the computed FM for a set of five pc_fib-parameterized images, as well as the result of the optimal pc_fib selection, is shown in Figure 9. The FM values are appropriately rescaled by the algorithm, such that the resulting values are comparable among various coronal cross-sections of the breast. The depicted images are normalized to maximum intensity.
The Optical Breast Scan and Metrology
As mentioned in Section 2.1, the Wavelia medical device consists of two subsystems, both performing a non-invasive examination: the microwave breast imaging subsystem, which is the main part of the system, and the optical breast contour detection subsystem, which plays an auxiliary role. The objective of the optical subsystem is threefold:
Compute the volume of the patient's breast, thus indirectly deriving the required volume of transition liquid such that the container of the microwave breast imaging subsystem is optimally filled after immersion of the breast; Compute the vertical extent of the pendulous breast, in order to optimally dimension the vertical scan of the microwave breast imaging system;
Reconstruct fully the external envelope of the breast, with high precision; such information will further serve to control the potential level of deformation of the breast due to immersion in the transition liquid during the microwave imaging scan. It may also serve as an intermediate step when registering the 3D microwave image with reference to the 2D mammographic projections of the patient's breast, for comparison and validation of the microwave breast imaging modality.
The optical scan of the breast will be performed just before the microwave imaging scan during the clinical testing of the Wavelia system. In order for the optically reconstructed breast envelope to be useful a priori information for the microwave imaging system, it is important that the patient lies in the same prone position during both examinations. Thus, an examination table identical to the one used for the microwave imaging, shown in Figure 1, is integrated with the optical breast contour detection subsystem as well.
The patient is lying on the examination table, with her breast under examination inserted in the circular opening of the examination table. For this examination, there is no coupling liquid, as shown in Figure 2 for the microwave imaging system. The breast is in the air, hanging below the examination table. A 3D infrared camera is placed below the examination table at a distance of several tens of centimeters below the breast. A motorization system enables the azimuthal motion of the camera in one single horizontal plane. The azimuthal scan of the 3D camera permits reconstructing the external envelope of the breast with sub-millimetric precision.
In Figure 10, the reconstructed outer surface of the breast phantom that has been specified in detail in Section 2.3 and is used for the validation of the Wavelia imaging system on site is shown. Both a side view and a bottom view are indicatively shown, as provided to the system user for acceptance of the scan.
In Figure 11, the reconstructed outer surface of a second breast phantom of different shape and a significantly bigger size is illustrated.
In Table 1, the measurement results for both breast phantoms are given, for one optical scan performed at the factory and another scan performed after the installation of the system on site. Reproducible results have been achieved with very good accuracy; these results served for the site acceptance of the optical system at the hospital. The achievable level of accuracy for both the computation of the breast volume and the computation of the vertical extent of the pendulous breast is compatible with the expected values and independent of the breast size and shape, as long as the breast is within the limits of acceptable sizes specified in the clinical protocol NCT03475992 [63].

At this stage of prototype development, the imaging system is required to operate in a controlled environment for the nominal system performance to be assured. The examination room temperature should range between 20 and 25 °C during the full examination, which takes approximately 1 h, including: the optical and microwave scans of both breasts of the patient, all the intermediate system preparation steps, the transition liquid preparation steps, and the system quality checks.
In order to assure compliance with these temperature limits during the examination, it is recommended that the room temperature does not exceed 21-22 °C at the beginning of the examination.
For the system on-site validation tests with breast phantoms, the temperature is monitored both at the beginning and at the end of each test. The monitoring is performed at the following control points:
container filled with transition liquid: measurement at the center and close to the borders of the container;
breast mold compartments filled with fibroglandular tissue-mimicking liquid: measurement at three different points (one per compartment).
In Table 2, the temperature monitoring data for an on-site system validation test, which has been marked as compliant with the nominal operating conditions, is indicatively provided.
System Stability Verification
A series of systematic tests is regularly performed at system installation in order to assess the repeatability of the measuring capability of the system. The assessment of the repeatability before performing an RF scan is fundamental to assure that a reliable and exploitable measurement can be performed.
A procedure for quantitative assessment of the system reliability has been developed. A reduced version of this is also performed automatically by the system before the examination of each patient. It consists of repeating a dummy (no breast immersion) measurement several times and performing three tests to quantify the level of variability of the complex measurements, both in terms of amplitude and phase.
•
Verify that the amplitude envelope of the raw measured data remains consistent with the lower and upper-level masks, as predefined at the factory;
• Perform first and second-order statistics on the raw measured data after drift correction: evaluate the stability, both in amplitude and phase, of the reference channel;
• Perform first and second-order statistics on the calibrated data: evaluate the multi-run stability, both in amplitude and phase, on a limited set of Tx/Rx couples.
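A minimal sketch of such a multi-run stability check follows; the masks, thresholds, and synthetic measurements are placeholders for the factory-defined values.

```python
# Minimal sketch of the multi-run stability statistics: repeat a dummy (no-breast)
# measurement, check the amplitude envelope against factory masks, and evaluate
# first/second-order statistics of amplitude and phase across runs.
import numpy as np

rng = np.random.default_rng(4)
n_runs, n_freq = 10, 101
runs = (1.0 + 0.01 * rng.standard_normal((n_runs, n_freq))) * \
       np.exp(1j * 0.01 * rng.standard_normal((n_runs, n_freq)))

amp_db = 20 * np.log10(np.abs(runs))
lower_mask, upper_mask = -1.0, 1.0                         # factory-defined masks (placeholder)
mask_ok = np.all((amp_db > lower_mask) & (amp_db < upper_mask))

amp_std = amp_db.std(axis=0).max()                         # worst-case amplitude spread (dB)
phase_std = np.degrees(np.angle(runs)).std(axis=0).max()   # worst-case phase spread (deg)
print(mask_ok, amp_std, phase_std)
```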
Imaging Test with Complex Breast Phantom at Two Azimuthal Rotational Positions
For the on-site validation of the system imaging performance after installation, a controlled test with a complex breast phantom is performed. A tumor phantom is included at a given known position in the breast. Quantitative evaluation of a series of metrics is performed for the quality assessment and validation of the scan. For this reason, it is important that repeated testing with the exact same breast and tumor location configuration has previously been performed and thoroughly characterized at the factory. The breast and tumor phantoms that are used for the on-site system validation have been defined in Section 2.3.
The scan is repeated for two distinct azimuthal rotations of the phantom (azimuthal rotation of both the breast and tumor by 180 • , such that the relative location of the tumorous inclusion in the breast remains constant). The purpose of the breast rotation is to identify and characterize any "non-symmetries" in the system imaging performance, due to residual uncalibrated imperfections of the system circular network. For the definite on-site validation of the system after installation, a follow-up of the system imaging performance, as evaluated on the two azimuthal rotations of the breast phantom, is performed over several days.
After the system acceptance on site, and while the pilot clinical test is running on patients, the scan of the two breast phantom positions is recommended to be repeated and evaluated at regular intervals in time (e.g., regular monthly, or bi-monthly, interventions by the device manufacturer on site for control and maintenance). It is important to put into place such a regular follow-up in order to better assure the pilot clinical trial data quality, using a system at the prototype level of development.
In Figure 12, a top view of the Wavelia examination table, after installation of the breast phantom for the regular validation test, is shown. The breast phantom is maintained at the known position using a supporting ring structure. The tumor is inserted at the predefined 3D location using a rigid string of known length, inserted via a hole at a precisely known (x, y) position on the phantom support structure. A photo of the two azimuthal rotation positions of the phantom, as used for system validation, is shown in Figure 12a.
Centering Assessment of the Reconstructed Breast Outer Surface
The breast-centering quality test is performed each time on a single breast contour that is associated with a single vertical scan position of the sensor network predefined by the user. A coronal slice close to the middle vertical extent of the pendulous breast is normally selected for the evaluation of the centering of the breast with respect to the imaging zone.
Given the breast contour estimate gc chosen for the breast-centering assessment, at each point x along this test contour, the minimum bistatic distance r_gc(x) between this point and any pair of RF sensors (among the reduced set of pairs preselected for use with this estimation module) that can "see" that point, is computed.
In order to assess the centering quality of the estimated contour, an ideally centered reference contour is derived by translating the estimated contour by a varying amount x_T around the 2D region of interest until it yields the largest value of the integral ∫_{gc_T} r_{gc_T}(gc_T(s)) ds, where gc_T = gc + x_T is the translated contour and r_{gc_T}(x) denotes the minimum bistatic distance to a point x on this translated contour. For this ideally centered reference contour, the minimum bistatic distances associated with each point x of this curve, denoted by r_c(x), are calculated in the same way as for the estimated contour. For brevity, the notation x is used in the sequel of this section to refer to a given point along the estimated breast contour, and also to refer to the corresponding point x + x_T on the ideally centered reference contour.
The centering assessment is then performed by comparing the bistatic ratio b_r, computed on the estimated contour, to the threshold value B_r · p_dist_thresh, where B_r is the largest bistatic ratio of any possible translation of the estimated contour, and p_dist_thresh is a user-defined parameter, which is by default set to 0.85 for the system validation test. If b_r exceeds B_r · p_dist_thresh, then the estimated breast contour is marked as remarkably off-centered, and the breast-centering confidence level P is set to a minimal value that is preset via the parameter perc_at_max_distance. Otherwise, the ratio b_r/(B_r · p_dist_thresh) is used to compute the breast-centering confidence level, as defined in Equation (19). To this extent, the centering assessment has a confidence level percentage ranging between a maximum of 100 (if the test contour is coincident with the ideally centered contour) and a minimum of 100 · perc_at_max_distance. For the on-site validation tests, the minimal value of 50% has been used for all of the centering assessment tests.
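The sketch below illustrates the general logic of the centering assessment: computing minimum bistatic distances along the contour, searching for the ideally centered translation, and mapping a ratio to a confidence level. Since the exact bistatic ratio and Equation (19) are not reproduced in the text, the ratio definition and the linear mapping used here are assumptions for illustration only.

```python
# Minimal sketch of the breast-centering assessment logic (assumed ratio and mapping).
import numpy as np

def min_bistatic_distance(points, sensors):
    d = np.linalg.norm(points[:, None, :] - sensors[None, :, :], axis=2)  # point-to-sensor
    pair_d = d[:, :, None] + d[:, None, :]                                # bistatic sums
    return pair_d.reshape(points.shape[0], -1).min(axis=1)

ang = np.linspace(0, 2 * np.pi, 16, endpoint=False)
sensors = 0.09 * np.c_[np.cos(ang), np.sin(ang)]
t = np.linspace(0, 2 * np.pi, 180, endpoint=False)
contour = np.c_[0.015 + 0.04 * np.cos(t), 0.005 + 0.03 * np.sin(t)]       # estimated contour

shifts = [np.array([dx, dy]) for dx in np.linspace(-0.03, 0.03, 13)
          for dy in np.linspace(-0.03, 0.03, 13)]
scores = [min_bistatic_distance(contour + s, sensors).sum() for s in shifts]
best_shift = shifts[int(np.argmax(scores))]                                # ideal centering shift

b_r = min_bistatic_distance(contour, sensors).sum() / max(scores)          # assumed bistatic ratio
p_dist_thresh, perc_at_max_distance = 0.85, 0.5
confidence = 100 * max(perc_at_max_distance, min(1.0, b_r / p_dist_thresh))
print("centering confidence (%):", round(confidence, 1))
```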
In Figure 13a,c the breast surface contour estimate chosen for the breast centering assessment (depicted in blue) and the associated ideally centered contour (depicted in cyan) are shown for the tests at the breast rotational position #1, on Test Date 1 and Test Date 2, correspondingly. The red dots depict the location of the sensors, while the black circle represents the inner wall of the transition liquid container.
The associated spatial map of breast-centering assessment is shown in Figure 13b,d for the two test dates of the breast rotational position #1, correspondingly.
In either figure, the purple square represents the location of the center of mass of the breast surface contour estimate that was used for the breast-centering assessment; the resulting breast-centering confidence level is marked on the title of each figure. The black circle depicts the inner wall of the transition liquid container.
The results for the breast rotational position #2 are given in Figure 14a,b for the data recorded on Test Date 1, and in Figure 14c,d for Test Date 2. All of the notations are consistent with the definitions provided earlier as explanation to Figure 13.
The breast-centering confidence levels, as computed for all four test cases, are comparatively presented in Table 3.
It is shown that the natural off-centering of the breast phantom at the selected coronal slice is repeatedly identified with a fair level of accuracy, associated with centering confidence levels varying between 83% and 87.4%. It is worth noting that a slight offset is repeatedly identified between the estimates for the two distinct rotational positions of the breast phantom. This is an indication of a slight non-symmetry in the reconstructed geometry, introduced either by the sensor network itself, or possibly by a non-homogeneous thermal distribution in the interior of the examination table.
2.6.5. Image Quality Assessment (QA) Metrics for System Performance Acceptance

The microwave breast imaging system evaluation and acceptance is performed based on a series of quality metrics, which are computed on multistatic radar images of the breast phantoms, as defined in Equations (13) and (14).
• QA Metric 1: Focusing Metric (FM), evaluated on the composite image formed at a single vertical position of the sensor network (as per Equation (13)), in front of the tumor. The FM is evaluated on a series of images, parameterized by the assumed percentage of fibroglandular tissue pc_fib in the breast.
•
Acceptance Criterion (AC) #1: The optimal pc_fib value, for which the focusing measure is maximized, should remain constant at every repetition of the controlled imaging test, and for every rotational position of the breast phantom (testing with pc_fib intervals equal to 5%).
• AC#2: The value of the focusing measure, for the optimal pc_fib, should exceed a preset threshold value thr_QA_1.
• QA Metric 2: Intensity of the TR-MUSIC pseudospectrum at the tumor location (Im_max), evaluated on the composite image formed at a single vertical position of the sensor network (as per Equation (13)), in front of the tumor. The Im_max is evaluated for a series of images parameterized by the assumed percentage of fibroglandular tissue pc_fib in the breast.
• AC#3: The optimal pc_fib value for which Im_max is maximized should remain constant at every repetition of the controlled imaging test, and for every rotational position of the breast phantom (testing with pc_fib intervals equal to 5%).
• AC#4: The value of Im_max, for the optimal pc_fib value, should exceed a preset threshold value thr_QA_2.
• AC#5: The two patterns FM(pc_fib) and Im_max(pc_fib) should be consistent with each other, meaning that maximization and identical slope(s) are observed for the same pc_fib values on both patterns.
• QA Metric 3: Variation of the maximal achievable focusing FM over the height, evaluated for images formed using various vertical scan positions of the sensor network (as per Equation (14)).
• AC#6: The maximal FM should be observed at the same height, the one closest to the tumor, at every repetition of the controlled imaging test, and for every rotational position of the breast phantom.
• AC#7: The contrast between the maximal FM and the FM achievable at all of the other heights should exceed a given threshold thr_QA_3.
• QA Metric 4: Ratio between the average image intensity at the exterior of the breast and Im_max in the interior of the breast, evaluated on the composite image formed using data from multiple vertical scan positions of the sensor network (as per Equation (14)). The multi-height image is formed via concatenation of the single-height images, with pc_fib automatically selected to allow optimal image focusing independently per height.
• AC#8: The ratio should not exceed a preset upper-limit value UL_QA_4.
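As an illustration of how these acceptance criteria can be checked programmatically, a minimal sketch for QA metrics 1 and 2 follows; the thresholds and the example FM(pc_fib) and Im_max(pc_fib) patterns are placeholders, not measured values.

```python
# Minimal sketch of the acceptance-criteria bookkeeping for QA metrics 1 and 2.
import numpy as np

pc_grid = np.array([30, 35, 40, 45, 50])
fm_runs = np.array([[0.8, 1.1, 1.6, 1.4, 1.0],      # Test Date 1 (placeholder values)
                    [0.7, 1.0, 1.5, 1.55, 1.1]])    # Test Date 2 (placeholder values)
immax_runs = np.array([[0.5, 0.7, 0.9, 0.8, 0.6],
                       [0.5, 0.6, 0.85, 0.84, 0.6]])
thr_qa_1, thr_qa_2 = 1.2, 0.75                       # placeholder thresholds

opt_fm = pc_grid[fm_runs.argmax(axis=1)]
opt_im = pc_grid[immax_runs.argmax(axis=1)]

ac1 = np.all(opt_fm == opt_fm[0])                    # same optimal pc_fib on every repetition
ac2 = np.all(fm_runs.max(axis=1) > thr_qa_1)         # FM above threshold
ac3 = np.all(opt_im == opt_im[0])
ac4 = np.all(immax_runs.max(axis=1) > thr_qa_2)
ac5 = np.all(opt_fm == opt_im)                       # consistent FM / Im_max patterns
print(dict(AC1=ac1, AC2=ac2, AC3=ac3, AC4=ac4, AC5=ac5))
```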
Results
In this section, indicative results are presented from the test campaign that has been recently carried out for the site acceptance of the Wavelia microwave breast imaging system after its installation at the Galway University Hospital for a pilot first-in-human clinical test [63]. The series of image quality assessment (QA) metrics, as defined in Section 2.6.5, have been evaluated on four scans of a realistically complex breast phantom, as detailed in Section 2.6.3.
Breast Rotational Position #1
In Figure 15, the experimental setup for the tests at the rotational position #1 of the breast phantom is illustrated. For these tests, the microlobulated tumor of average size (14 mm) has been immersed in the fibroglandular tissue-mimicking liquid, at the location (x, y, z) = (20, 0, 110) mm (the center of the tumorous lesion).
The test has been repeated on two distinct dates. Imaging results from the two identical tests are presented and compared in this section.
In Figure 16, the composite TR-MUSIC pseudospectra, as formed using Equation (13) and data from a single vertical scan position of the sensor network, are depicted for the two data snapshots recorded on two different dates. The full imaging domain, both in the interior and the exterior of the breast phantom, is evaluated. The objective of such a visualization is to highlight the absence of any significant artifact radar echoes at the exterior of the breast, in the case of both measurements.
These composite images have been formed with the integration of monochromatic (single-frequency) TR-MUSIC pseudospectra, computed as per Equations (9)-(13), using: a given number N_fsel of frequency points, uniformly spanning the working frequency band, and a given number N_sect of sectors of antenna sub-arrays spanning the full 360° azimuth domain around the breast.
The images that have been formed under the pc_fib assumption that resulted in maximized focusing are depicted here. The optimal pc_fib has been automatically selected with the method defined in Section 2.4.5.
The applied data processing chain is meant to result, ideally, in the formation of very spiked images indicating the probability of the target presence on each pixel of the imaging domain. The unambiguously detected and accurately localized targets are expected to be associated with constellations of very small bright spots, highlighting the target position in an overall dark spatial map. In Figure 16a,b, a clear and pronounced peak of the TR-MUSIC pseudospectrum is visible on both images in the vicinity of the ground truth location of the tumor.
It is noticeable that the intensity of the TR-MUSIC pseudospectrum is slightly higher on Test Date 1, as compared to Test Date 2. In addition, two secondary radar echoes (of significantly lower intensity compared to the dominant echo, which is clearly attributed to the tumor) are present on the image of Test Date 1. These secondary echoes can be attributed to a "cavity" of adipose tissue that is formed in between the three compartments of the mold filled with fibroglandular tissue-mimicking liquid in the breast phantom. This adipose 'cavity', which has a significant negative dielectric contrast with respect to the surrounding fibroglandular tissue, is visible in Figure 17a, and can be spatially correlated with the secondary radar echoes seen in Figure 16a.
In Figure 17a,b, the same images as in Figure 16a,b are shown, but after having filtered out the parts corresponding to the exterior of the breast phantom. The breast external contour has been a priori extracted from the data, as defined in Section 2.4.2, and is used here for spatial filtering so that the image is more easily interpretable from a physical point of view. The borders of the fibroglandular tissue-mimicking molds, which are a priori known, and a red sphere with a diameter equal to the average size of the microlobulated tumor have also been superimposed on the images in Figure 17a,b. The objective of this second visualization is a straightforward linking of the bright spots in the images of Figure 16a,b with the experimental setup.
In Figure 17c,d, an alternative viewpoint is provided for the same images of the breast interior. The selected viewpoint corresponds to a front-side view of the breast, with the patient in the standing position. The borders of the fibroglandular tissue-mimicking molds have not been superimposed on the images in this third visualization.
Clean images that can be clearly associated with unambiguous detection of the tumor have been retrieved from both tests at rotational position 1 of the breast phantom.
The evaluation of QA metric 1 is shown in Figure 18a,b for the two images, formed on Test Date 1 and Test Date 2, correspondingly. It can be observed that maximal focusing is achieved for pc_fib = 40% on Test Date 1, while on Test Date 2, the optimal pc_fib value is 45%. Acceptance test (AC) #1 would strictly fail in such a case. However, given the proximity of the 'average' breast tissue dielectric properties that are associated with the two pc_fib values, as depicted in Figure 8, and also considering the constrained and yet non-optimized stability and robustness of both the imaging system and the transition liquid itself against slight variations in the nominal environmental operating conditions (e.g., slight temperature variations), such a variation in the optimal pc_fib value, in terms of focusing, is still considered acceptable for the on-site validation tests of the current version of the imaging system prototype.
The threshold value for the optimal focusing metric (FM) per image has been set to thr_QA_1 = 0.0004. This is valid for the specific experimental setup, which has been reproduced both at the factory and after system installation on-site. This is the threshold value used with acceptance test #2 all along the on-site validation of the imaging system. Both tests at rotational position 1 of the breast are thus validated in terms of AC #2.
The evaluation of QA metric 2 is shown in Figure 18c,d for the two images, formed on Test Date 1 and Test Date 2, correspondingly. It can be observed that the maximal intensity Im_max of the TR-MUSIC pseudospectra is maximized for the same pc_fib values as the FM. Concerning acceptance test #3, the same considerations hold as for AC #1. In terms of acceptance test #4, the threshold value for the image intensity at the target (tumor) position has been set to thr_QA_2 = 0.0001, while performing tests with the same experimental setup as at the factory. This is the threshold value used with AC #4 all along the on-site validation of the imaging system. Both tests at rotational position 1 of the breast are thus validated in terms of AC #4. Finally, the two patterns FM(pc_fib) and Im_max(pc_fib) remain consistent with each other, as far as the dependence on pc_fib is concerned, with the exception of the outlier point Im_max(pc_fib) at pc_fib = 30%. Acceptance test #5 is validated in such a case of similarity between the two patterns at the given prototype state of the imaging system.
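As a minimal illustration of how QA metrics 1 and 2 and the associated acceptance checks could be automated, the sketch below assumes a set of images reconstructed for candidate pc_fib values and a focusing_metric() callable implementing the FM of Section 2.6.5 (not reproduced here); the function names, data structures, and example numbers are hypothetical.

```python
def select_optimal_pc_fib(images_by_pc, focusing_metric):
    """Return the pc_fib candidate whose image maximizes the focusing metric, plus all scores."""
    scores = {pc: focusing_metric(img) for pc, img in images_by_pc.items()}
    return max(scores, key=scores.get), scores

def acceptance_tests_1_2_4(scores_date1, scores_date2, tumor_intensity,
                           thr_qa_1=0.0004, thr_qa_2=0.0001):
    """Illustrative transcription of AC #1, AC #2 and AC #4 as described in the text."""
    pc1 = max(scores_date1, key=scores_date1.get)
    pc2 = max(scores_date2, key=scores_date2.get)
    return {
        "AC1": pc1 == pc2,                                                       # same optimal pc_fib on both dates
        "AC2": scores_date1[pc1] >= thr_qa_1 and scores_date2[pc2] >= thr_qa_1,  # FM above threshold
        "AC4": tumor_intensity >= thr_qa_2,                                      # pseudospectrum intensity at the tumor
    }

# example FM scores per candidate pc_fib (values are illustrative only)
date1 = {30: 0.00031, 35: 0.00038, 40: 0.00046, 45: 0.00041}
date2 = {30: 0.00029, 35: 0.00037, 40: 0.00040, 45: 0.00043}
print(acceptance_tests_1_2_4(date1, date2, tumor_intensity=0.00012))
```

In this toy example the optimal pc_fib differs between the two dates (40% versus 45%), so AC #1 fails strictly while AC #2 and AC #4 pass, mirroring the situation discussed above.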
Breast Rotational Position #2
In this section, the same QA metrics 1 and 2 are evaluated for the two imaging tests that have been performed at rotational position 2 of the same breast phantom on two distinct dates: Test Date 1 and Test Date 2. The breast phantom is rotated by 180° with respect to the first two tests, which have been thoroughly evaluated and validated in terms of QA 1 and QA 2 in the previous section. In Figure 19, the experimental setup for the tests at rotational position #2 of the breast phantom is illustrated. For these tests, a microlobulated tumor of average size (14 mm) has been immersed in the fibroglandular tissue-mimicking liquid at the location (x, y, z) = (−20, 0, 110) mm (center of the tumorous lesion).
In Figure 20, the composite TR-MUSIC pseudospectra, as formed using Equation (13) and data from a single vertical scan position of the sensor network, are depicted for the two data snapshots recorded on two different dates. These images have been formed in exactly the same way as detailed in Section 3.1.1 for the images in Figure 16. A clear and pronounced peak of the TR-MUSIC pseudospectrum is visible in both images in the vicinity of the ground-truth location of the tumor. However, when comparing these images with the ones in Figure 16, it is noticeable that the maximal intensity of the TR-MUSIC pseudospectrum in Figure 20b is lower than the maximal intensity in the three other images. The dominant peak that is unambiguously associated with the tumor multistatic radar echo is also slightly misplaced with respect to the ground-truth location of the tumor. The observed shift can be better seen in Figure 21b. The four images in Figure 21 have been formed in exactly the same way as the corresponding images in Figure 17 in Section 3.1.1.
Clean images that can be clearly associated with unambiguous detection of the tumor have been retrieved from both tests at rotational position 2 of the breast phantom. The imaging performance is slightly degraded on Test Date 2; however, such a level of degradation lies within the limits of acceptable variability in the system performance at this stage of development. All four datasets presented in the article are thus examples of test data that have served the on-site validation of the imaging system. The quantified evaluation of the system performance, in terms of the QA metrics 1 and 2, is shown in Figure 22 for the two tests at rotational position 2 of the breast phantom.
The result representation in Figure 22 is identical to the one in Figure 18 for the two tests at rotational position 1 of the breast phantom, which has been detailed in Section 3.1.1. It can be observed in Figure 22a,b that maximal focusing is achieved for pc_fib = 35% on both test dates. The optimal pc_fib value remains constant between the two test dates, as required by acceptance test #1; however, this value is lower than the optimal value identified for rotational position 1 of the breast phantom. This phenomenon of a slightly shifted optimal pc_fib, depending on the orientation of the breast phantom with respect to the sensor network, has been consistently observed on further validation test datasets of the imaging system, and could be attributed to a slight inhomogeneity in the spatial distribution of temperature in the interior of the device, in its current version. This is accepted as such, and validated for the clinical pilot testing of the system; thermoregulation of the device interior is planned to be put in place when upgrading the device design in the future, such that this type of inhomogeneity can be avoided. The 'average' breast tissue dielectric properties that are associated with each pc_fib value are defined in Figure 8.
Considering the threshold value thr_QA_1 = 0.0004 for the optimal focusing metric (FM) per image, as defined in Section 3.1.1, acceptance test #2 is clearly validated on Test Date 1, but only barely met on Test Date 2, as can be observed in Figure 22a,b.
The evaluation of QA metric 2 is shown in Figure 22c,d. The maximal intensity Im_max of the TR-MUSIC pseudospectra is maximized for the same pc_fib values as the FM, such that AC #3 is validated on both test dates. Given the threshold value for the image intensity at the target (tumor) position, thr_QA_2 = 0.0001, as specified in Section 3.1.1, AC #4 is clearly validated on Test Date 1 and only just met on Test Date 2.
The two patterns FM(pc_fib) and Im_max(pc_fib) remain consistent with each other, as far as the dependence on pc_fib is concerned; acceptance test #5 is validated on both test dates.
QA Metrics 3 and 4: Images Formed at Multiple Vertical Positions of the Sensor Network
In Figure 23, the maximal focusing metric (FM), as extracted from Figures 18a,b and 22a,b for the four test datasets at the single height H = 118 mm (sensor network in front of the tumor), is plotted as evaluated on images that have been formed using six different vertical scan positions of the sensor network (vertical sampling step = 5 mm). The result, which is QA metric 3 as defined in Section 2.6.5, is plotted in Figure 23a,b for breast rotational position 1, Test Date 1 and Test Date 2, correspondingly. In Figure 23c,d, QA metric 3 is plotted for breast rotational position 2, Test Date 1 and Test Date 2, accordingly. The maximal FM is observed at the same height, H = 118 mm, for both rotational positions of the breast phantom and for both repetitions of either of the two controlled imaging tests. AC #6 is validated based on the results presented for the four test datasets, as shown in Figure 23.
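A minimal sketch of how QA metric 3 and the AC #6/AC #7 checks described here and in the following paragraph could be evaluated is given below; the fm_by_height mapping and the numerical values are illustrative only, and the contrast definition (best FM over the best FM at any other coronal slice) is an assumption consistent with the description in the text.

```python
def qa_metric_3(fm_by_height, expected_height_mm=118, thr_qa_3=1.1):
    """Evaluate AC #6/#7-style criteria on the FM per vertical scan position (sketch)."""
    best_h = max(fm_by_height, key=fm_by_height.get)
    other_best = max(v for h, v in fm_by_height.items() if h != best_h)
    contrast = fm_by_height[best_h] / other_best        # max FM versus the best FM at any other slice
    return {
        "best_height_mm": best_h,
        "contrast": contrast,
        "AC6": best_h == expected_height_mm,            # maximum at the expected coronal slice
        "AC7": contrast >= thr_qa_3,                    # contrast above the set threshold
    }

# example with six heights sampled every 5 mm (values are illustrative only)
fm = {103: 0.00030, 108: 0.00033, 113: 0.00036, 118: 0.00045, 123: 0.00038, 128: 0.00034}
print(qa_metric_3(fm))
```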
Ideally, the overall contrast between the maximal FM (at H = 118 mm, the coronal slice of the breast on which the tumor is best 'seen' by the sensor network) and the FM achievable at any other coronal breast slice should exceed a given threshold thr_QA_3 = 1.2, as is the case in Figure 23b for breast rotational position 1 on Test Date 2. While such a case represents the goal in terms of unambiguous retrieval of the tumor echo along the vertical scan of the heterogeneous breast, AC #7 is validated also in the cases of Figure 23a,c, where the contrast in terms of FM exceeds the value thr_QA_3 = 1.1. In the case of Figure 23d, the computed contrast is 1.08. It has been concluded in the course of the on-site validation of the imaging system that the first three test datasets are validated in terms of AC #7, while the fourth test dataset barely meets the set threshold value. It is interesting to notice that the breast rotational position 2-Test Date 2 scan is the only one that has been marked as invalid (or potentially critically valid) by a total of three quantitative evaluation tests: AC #2, AC #4, and AC #7.

In Figure 24, a top and a side view of the composite image formed using the data from the six vertical scan positions of the sensor network (as per Equation (14)) are shown for breast rotational position 1 on Test Date 1. The ground-truth location of the tumor phantom is illustrated with a spherical inclusion with a diameter of 14 mm, equal to the average size of the microlobulated tumor, superimposed on the images. This type of multi-height composite image has been formed via concatenation of the single-height images with automatically selected pc_fib, to allow optimal image focusing independently per height, as explained in Section 2.4.5. The sensor positions, as mapped on the inner wall of the container filled with transition liquid, are illustrated with the purple dots overlaid on the images. Overlapping zones exist in the 3D imaging domain among the elementary images formed from data at a single vertical scan position of the sensor network. Intensity normalization operations are also involved in the concatenation of the elementary images for the formation of the composite multi-height image; this is the reason why the scaling of the intensity is different for the single-height images (formed as per Equation (13)) and the multi-height images (formed as per Equation (14)). The difference in scaling depends on the number of integrated vertical scan positions and the amount of overlap among the elementary images. These parameters are not detailed any further in this paper.
In Figure 25, a top and side view of the composite image formed using the data from the six vertical scan positions of the sensor network (as per Equation (14)) are shown for breast rotation position 1 on Test Date 2.
By comparing the imaging results in Figures 24 and 25, it is clear that while unambiguous detectability of the tumor in the breast interior is assured all along the six vertical scan positions, the maximal intensity of the composite TR-MUSIC pseudospectrum in the breast (the tumor constellation of echoes) is lower in Figure 25 than in Figure 24. A few spots of unfiltered clutter/interference close to the sensor network are also visible in the images in Figure 25. It can be seen in the side view in Figure 25b that the unfiltered interferers appear slightly higher than the tumor (i.e., closer to the examination table).

In Figures 26 and 27, a top and a side view of the composite images formed using the data from the same six vertical scan positions of the sensor network (as per Equation (14)) are shown for breast rotational position 2, on Test Date 1 and Test Date 2, correspondingly. The image intensity associated with the constellation of tumor radar echoes is somewhat lower on both test dates, as compared to the images in Figure 24. A constellation of more than a single peaked spot is associated with the tumor in the TR-MUSIC pseudospectra of Figure 26. This is acceptable, given the size and irregular shape of the target, as seen in Figure 5.
In both Figures 26 and 27, a slightly higher level of overall intensity in the exterior of the breast phantom (the level of residual interferer echoes) is observed, as compared to the images in Figures 24 and 25. This is an indicator of the slightly degraded imaging performance of the system in the case of breast rotational position 2 on both test dates. This is quantifiable by means of QA metric 4, i.e., the ratio between the average image intensity in the non-focused image in the exterior of the breast and the maximal image intensity in the focused image in the interior of the breast (clearly associated with the tumor radar echo on all the presented images). The values of QA metric 4 are given in Table 4 for all four composite images in Figures 24-27. Ideally, QA metric 4 should not exceed the upper limit value UL_QA_4 = 10% for on-site acceptance of a test scan using the specific controlled imaging scenario. The scans of both breast rotational positions on Test Date 2 (this is not the same date for both scans) are thus at the limit of being marked as incompatible in terms of AC #8.
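The computation of QA metric 4 can be sketched as follows, assuming a reconstructed intensity image and a boolean mask of the breast interior; the mask construction itself (from the a priori extracted breast contour) is not shown, and the example arrays are synthetic.

```python
import numpy as np

def qa_metric_4(image, breast_mask, ul_qa_4=0.10):
    """QA metric 4: mean intensity outside the breast over max intensity inside it (sketch)."""
    exterior_mean = image[~breast_mask].mean()
    interior_max = image[breast_mask].max()
    ratio = exterior_mean / interior_max
    return ratio, ratio <= ul_qa_4          # AC #8: the ratio should not exceed 10%

# toy example: uniform low-level clutter with one bright "tumor" pixel inside the mask
img = np.full((50, 50), 0.01)
mask = np.zeros_like(img, dtype=bool)
mask[15:35, 15:35] = True
img[25, 25] = 1.0
print(qa_metric_4(img, mask))
```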
Discussion and Conclusions
In Table 5, a summary of the acceptance test results is reported for the four breast phantom scans, which have been thoroughly analyzed in Section 3.
Site acceptance of the imaging system is suggested if more than N_test/2 valid tests are consistently reported, at every scan repetition, during a one-week test validation period (N_test = 8 is the total number of acceptance tests performed and evaluated after each scan, as defined in Section 2.6.5). This summarized result presentation makes clear the degradation observed for the scan at breast rotational position #2 on Test Date 2, when compared to the other three scans, in terms of the defined QA metrics. This is indicative of the expected and acceptable level of variability in the performance of the imaging system prototype.
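The site-acceptance rule stated above can be expressed compactly as in the following sketch, where each scan is summarized by a dictionary of boolean acceptance-test outcomes; the data layout and example values are hypothetical.

```python
def site_acceptance(ac_results_per_scan, n_tests=8):
    """Suggest site acceptance if every scan passes more than n_tests/2 of its acceptance tests (sketch)."""
    return all(sum(results.values()) > n_tests / 2 for results in ac_results_per_scan)

# example: two scans, each with 8 boolean AC outcomes (illustrative only)
scans = [
    {"AC1": True, "AC2": True, "AC3": True, "AC4": True, "AC5": True, "AC6": True, "AC7": True, "AC8": True},
    {"AC1": True, "AC2": False, "AC3": True, "AC4": False, "AC5": True, "AC6": True, "AC7": False, "AC8": True},
]
print(site_acceptance(scans))   # the second scan passes 5 of 8 tests, so the overall result is True
```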
At this stage of system development, and toward pilot clinical testing, all the "critically valid" AC test results in Table 5 have been considered acceptable. On-site acceptance of the imaging system is validated with such results, provided that such performance is consistently achieved throughout the one-week validation period.
In the case of the breast phantom defined in Section 2.3 and used for the validation tests of the system, even if a single tumor model is inserted in the breast phantom under test, the complex geometry of the plastic molds, filled with either adipose or fibroglandular tissue-mimicking liquid, may result in unfiltered radar echoes originating from the corners of the mold surface, which may be erroneously interpreted as "scattering objects of interest". This complexity renders the test scenario used for system validation particularly challenging (from a radar point of view), and does not necessarily correspond to a physical complexity that is expected to be found in the real breast; less discontinuous transitions and a less structured multi-layered configuration are naturally expected in the real breast, but cannot be easily reproduced in a phantom.
On the other hand, it is clear that any perturbation that may be introduced in the microwave breast scan, due to either an intentional motion of the patient or unintentional 'micromotions' of living body cells during the scan, has not been considered so far, and its impact will be investigated based on clinical data only. Interference due to blood flow in the breast, or due to the interface between the examination table and the patient's chest wall, has not been investigated either. The inherent inter-patient anatomical variability and its impact on the pre-processing modules for sensor coupling and breast skin echo suppression will also need to be carefully investigated during the pilot clinical test. Finally, the inter-patient variability in terms of normal and cancerous breast tissue dielectric properties and associated contrasts, breast density, and skin texture depending on age are all examples of physical phenomena that have not been modeled by the phantoms used for the system design and validation.
Given the above considerations on potential sources of mismodeling of the breast with the available phantoms, it is inevitable that some adjustments of both the hardware and software modules of the system architecture may need to be performed as a conclusion of the planned pilot clinical testing. Such adjustments may be required such that the intended imaging performance, as validated with the indicative results presented in this paper, is assured and validated when processing the clinical data from patient breast scans as well.
"Engineering",
"Medicine"
] |
Operational Dst index prediction model based on combination of artificial neural network and empirical model
In this paper, an operational Dst index prediction model is developed by combining empirical and Artificial Neural Network (ANN) models. ANN algorithms are widely used to predict space weather conditions. However, they require a large amount of data for machine learning, and large-scale geomagnetic storms have not occurred frequently enough over the last 20 years, the operation period of the Advanced Composition Explorer (ACE) and Deep Space Climate Observatory (DSCOVR) missions. Conversely, empirical models are based on numerical equations derived from human intuition and are therefore suitable for extrapolation to large storms. In this study, we distinguish between Coronal Mass Ejection (CME) driven and Corotating Interaction Region (CIR) driven storms, estimate the minimum Dst values, and derive an equation describing the recovery phase. The combined Korea Astronomy and Space Science Institute (KASI) Dst Prediction (KDP) model achieved better performance than the ANN model alone. This model could be used practically for space weather operations by extending the prediction time to 24 h and updating the model output every hour.
Introduction
Large-scale interplanetary disturbances from the Sun interact with Earth's magnetic field, resulting in severe space weather events, such as geomagnetic storms (Gonzalez et al., 1994; Ohtani et al., 2000; Bhaskar & Vichare, 2019). The Dst index is a representative index of geomagnetic activity in the space weather community (Sugiura, 1964; Rangarajan, 1989; Daglis et al., 1999; Wanliss & Showalter, 2006), and there have been numerous attempts to predict the Dst index (Lyons, 1998; Birn et al., 2001; Raeder & Maynard, 2001; Rastätter et al., 2013; Eastwood et al., 2017). Empirical and artificial neural network (ANN) models have been treated as important methods for predicting the Dst index (Murayama, 1982; Feldstein, 1992; Boyle et al., 1997; Kugblenu et al., 1999; Weigel, 2010). Many previous studies established that the southward component of the interplanetary magnetic field (IMF) is the main cause of geomagnetic storms (e.g., Burton et al., 1975; Gonzalez et al., 1994; Echer et al., 2005). Burton et al. (1975) described the Dst index using an empirical formula of the form dDst*/dt = Q(t) − Dst*/τ, with Dst* = Dst − b√P_dyn + c, where Dst* is the Dst index corrected for solar wind dynamic pressure. The term Q(t) is a function that expresses the rate at which the ring current is intensified by the duskward solar wind electric field (represented in the geocentric solar magnetospheric coordinate system); P_dyn is the solar wind dynamic pressure, τ is the decay time over which the ring current weakens by 1/e, and b and c are coefficients determined from observational data. As Q(t) is determined solely by the solar wind, we can accurately predict the Dst index using these formulas if we can observe or predict solar wind conditions. Many Dst models have been derived from this model (Lundstedt et al., 2002), and there have been numerous studies based on the above empirical formula (O'Brien & McPherron, 2000; Wang et al., 2003) since Burton et al. (1975). The model for predicting the Dst index 1 h ahead was significantly improved by Temerin & Li (2002). In the 1990s, advances in computer technology led to the use of ANNs for predicting the Dst index (Freeman & Nagai, 1993; Lundstedt & Wintoft, 1994; Watanabe et al., 2002). These studies, which used solar wind data as input, initially utilized a simple feedforward ANN. Wu & Lundstedt (1996) applied a more complex Elman recurrent neural network. More data from the solar wind observatories Wind and the Advanced Composition Explorer (ACE) have been archived and used to improve the results obtained from neural networks. Currently, we can predict the Dst index 1 h ahead with relatively high accuracy. However, it is not clear whether this is sufficient for space weather operations, as it is not easy to determine the forecast lead time required for space weather applications. Obviously, more than 1 h of forecast time is required for space weather applications such as changing scheduled satellite operations and air routes in the polar regions, and sending warning signals to complex communication systems.
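To make the role of this formulation concrete, the following Python sketch integrates a Burton-type equation hour by hour. The injection function uses a threshold-linear form in the spirit of O'Brien & McPherron (2000) with a 0.5 mV/m cutoff, and the pressure-correction coefficients follow the values quoted later in this paper (7.26 and 11); the decay time and all numerical values are illustrative placeholders, not the coefficients of any model presented here.

```python
import numpy as np

def integrate_burton(vbs_mV_m, p_dyn_nPa, dst0_star=0.0, tau_h=7.7, b=7.26, c=11.0, dt_h=1.0):
    """Hour-by-hour integration of a Burton-type equation dDst*/dt = Q(t) - Dst*/tau (sketch).

    vbs_mV_m  : hourly duskward electric field V*Bs (mV/m)
    p_dyn_nPa : hourly solar wind dynamic pressure (nPa)
    All numerical coefficients here are illustrative placeholders.
    """
    def q(vbs):
        # threshold-linear injection term (assumed form), in nT/h
        return -4.4 * max(vbs - 0.5, 0.0)

    dst_star = dst0_star
    dst = []
    for vbs, p in zip(vbs_mV_m, p_dyn_nPa):
        dst_star += (q(vbs) - dst_star / tau_h) * dt_h
        dst.append(dst_star + b * np.sqrt(p) - c)   # undo the pressure correction: Dst = Dst* + b*sqrt(P) - c
    return np.array(dst)

# toy event: 5 h of strong southward IMF followed by quiet conditions
print(integrate_burton([0, 3, 5, 5, 4, 2, 0, 0, 0, 0], [2.0] * 10))
```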
As Earth's magnetic field is disturbed almost simultaneously with solar wind variations, it is not easy to predict long-term changes in Earth's magnetic field using only the solar wind data observed at the Lagrange point L1, without forecasting the solar wind itself. Lazzús et al. (2017) created a model that combined an ANN with particle swarm optimization and predicted the Dst index 1-6 h ahead. Gruet et al. (2018) combined a long short-term memory recurrent neural network with a Gaussian process to predict the Dst index up to 6 h ahead. Naturally, the prediction accuracy tends to decrease sharply as the prediction time increases. In this study, we also try to extend the prediction lead time up to 24 h, using a different approach.
The ACE mission, launched in 1997 and replaced by the Deep Space Climate Observatory (DSCOVR) satellite in 2015 for real-time operations, collected considerable solar wind data that can be used by ANN algorithms to predict the Dst index. Nevertheless, it is doubtful whether sufficient data are available for artificial intelligence and machine learning. Since geomagnetic storms with a Dst index of less than −100 nT occur only a few times a year during the solar maximum and significantly less often during the solar minimum, more than 20 years of solar wind data appear to be insufficient for predicting geomagnetic storms (Watari, 2017; Makarov, 2018). This paper presents three Dst prediction models: a simple ANN model, an empirical model, and a combination model (the Korea Astronomy and Space Science Institute (KASI) Dst Prediction, KDP, model). The simple ANN is the baseline model for comparison. Here, we simply follow an existing neural network algorithm, so there are no significant methodological advances, and it shows a prediction performance comparable to previous models, as shown in Section 3. The empirical model (equations) indicates the intensity of a magnetic storm according to the degree of solar wind changes. Two kinds of geomagnetic storms are considered separately, coronal mass ejection (CME) driven and corotating interaction region (CIR) driven storms, in Section 4. This preemptive information can help improve Dst prediction accuracy because a space weather operator can determine in advance whether a CIR or a CME will cause a future geomagnetic storm. Finally, the present study proposes a combination of the empirical and ANN models in Section 5. The values predicted by the empirical model are used as an input parameter to the ANN model. Thus, we could achieve better prediction performance than with the other models.
The method combining empirical and ANN models is similar to that of Revallo et al. (2014, 2015); the difference is in the empirical model. They used time histories of the solar wind-magnetosphere interaction proposed by Romashets et al. (2008) as neural network input data. They constructed an analytical representation of the magnetic fields in the region where the solar wind interacts with Earth's magnetosphere and used the results as an input parameter for predicting the Dst index. On the other hand, we directly derive the predicted values of the Dst index from the empirical equations and apply them to the ANN model. Thus, the KDP model can extend the prediction time up to 24 h, while Revallo et al. (2014, 2015) predict the Dst index just 1 h ahead. See and compare the model performance in Figure 9.
Data
For the development of the model, we use the solar wind data collected by the ACE and DSCOVR satellites and the Dst index as input values for the prediction model from 1999 to 2017. The Dst index is obtained from NASA (https://omniweb.gsfc.nasa.gov/ow.html), and the hourly averaged solar wind data are acquired from CDAWeb (https://cdaweb.gsfc.nasa.gov/index.html). For the model's operation, the current Dst and solar wind data are downloaded from the World Data Center for Geomagnetism, Kyoto (http://wdc.kugi.kyoto-u.ac.jp/dst_realtime/presentmonth/index.html) and NOAA (https://services.swpc.noaa.gov/products/solar-wind/). Tables A.1 and A.2 in the Appendix list the CME- and CIR-driven geomagnetic storms from February 1999 to September 2017. Here, a geomagnetic storm is defined by a minimum Dst index of less than −50 nT. The event start time is defined as the time of the maximum Dst in the main phase, and the event end time is the time at which the recovering Dst index exceeds −30 nT. These storm lists are mainly obtained from published papers (referenced therein), and some CIR-driven storms are added in this study. The effects of CME and CIR are known to be different (Liemohn et al., 2010). CMEs can potentially generate large geomagnetic storms, while CIRs generally trigger minor storms. The empirical model developed in this study distinguishes between CME- and CIR-driven storms, and the empirical model results are used as an input parameter of the combination model.
Artificial neural network (ANN) model
This section introduces a simple ANN model and shows its performance. The model is combined with an empirical model in Section 5. The ANN used in this study is a feed-forward network (Gardner & Dorling, 1998), known as the simplest ANN-type algorithm (Haykin, 1998). The weights and biases are trained using an error backpropagation learning algorithm with gradient descent. The basic structure of this ANN is composed of an input layer (I), a hidden layer (H), and an output layer (O), as shown in Figure 1. The hidden layer uses a nonlinear hyperbolic tangent function, H = tanh(I), and the output layer uses a linear function of the form O = H. These layers can be expressed as

H_j = tanh( sum_{i=1..N} wh_{i,j} I_i + b_j ),
O_k = sum_{j=1..M} wo_{j,k} H_j + b_k,

where H_j and O_k are the nodes of the hidden and output layers, respectively, wh_{i,j} and wo_{j,k} are the weights of the hidden and output layers, and b_j and b_k are the biases, respectively. The suffixes i, j, and k denote the node numbers in the input, hidden, and output layers, and N and M denote the total number of nodes in the input and hidden layers, respectively. We use the input parameters listed in Table 1 for the input layer. Note that the simple ANN model uses the data collected during both storm and quiet times and does not distinguish between CME- and CIR-driven storms. We use the current observations and the differences between the current observations and those obtained 1 or 2 h earlier. The use of these differences as input data helps predict the trend of the Dst index and therefore plays an important role in predicting the Dst value more than 1 h ahead. The differences are estimated from hourly averaged solar wind data. We use the Dst index 1-24 h later as the target values of the output layer. The output layer has one node, as shown in Figure 1, which means that the simple ANN model consists of 24 models, one for each prediction hour. We train the model by adjusting hyperparameters, such as the node numbers of the input and hidden layers, the learning rate, and the learning cycles, to optimize the neural network. The optimization selects the best condition by calculating the correlation coefficient and root mean square error (RMSE) between the predicted and actual values for all hyperparameter settings:

RMSE = sqrt( (1/N) sum_n (Y_pre,n − Y_real,n)^2 ),
R = sum_n (Y_pre,n − <Y_pre>)(Y_real,n − <Y_real>) / sqrt( sum_n (Y_pre,n − <Y_pre>)^2 × sum_n (Y_real,n − <Y_real>)^2 ),

where Y_pre is a predicted value, Y_real is a measured value, and N is the number of values.

Table 1. Input parameters of the ANN model (parameter: description).
N(tc): current solar wind density.
N(tc) − N(tc − 1): difference between the current solar wind density and the value obtained 1 h earlier.
V(tc): current solar wind speed.
V(tc) − V(tc − 1): difference between the current solar wind speed and the value obtained 1 h earlier.
Bt(tc): current total IMF.
Bt(tc) − Bt(tc − 1): difference between the current total IMF and the value obtained 1 h earlier.
Bz(tc): current IMF Bz.
Bz(tc − 1): IMF Bz obtained 1 h earlier.
Bz(tc − 2): IMF Bz obtained 2 h earlier.
Bz(tc) − Bz(tc − 1): difference between the current IMF Bz and the value obtained 1 h earlier.
Bz(tc − 1) − Bz(tc − 2): difference between the IMF Bz values obtained 1 and 2 h earlier.
Dst(tc): current measured Dst index.
Dst(tc − 1): measured Dst index obtained 1 h earlier.
Dst(tc − 2): measured Dst index obtained 2 h earlier.

For performance optimization, all hidden-layer node numbers from 7 to 50, learning rates of 0.0001-0.01, and learning cycles of 1200-1500 were trained and validated (see Table 2). For the ANN training, we used the data observed in , 2007, 2011, 2012, 2016.
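A minimal NumPy transcription of this feed-forward structure and of the two evaluation metrics is given below; the layer sizes, random weights, and data are illustrative only, and no training (backpropagation) is included.

```python
import numpy as np

def forward(x, wh, bh, wo, bo):
    """Feed-forward pass: H_j = tanh(sum_i wh_ij I_i + b_j); O_k = sum_j wo_jk H_j + b_k."""
    h = np.tanh(x @ wh + bh)
    return h @ wo + bo

def rmse(y_pred, y_true):
    return np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))

def corr(y_pred, y_true):
    return np.corrcoef(y_pred, y_true)[0, 1]

rng = np.random.default_rng(1)
n_in, n_hidden = 14, 20        # 14 inputs as in Table 1; the KDP model later adds the empirical prediction as a 15th
wh, bh = 0.1 * rng.standard_normal((n_in, n_hidden)), np.zeros(n_hidden)
wo, bo = 0.1 * rng.standard_normal((n_hidden, 1)), np.zeros(1)
x = rng.standard_normal((32, n_in))      # a batch of 32 hourly samples (illustrative)
y_true = rng.standard_normal(32)         # target Dst values k hours ahead (illustrative)
y_pred = forward(x, wh, bh, wo, bo).ravel()
print(rmse(y_pred, y_true), corr(y_pred, y_true))
```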
For the model validation, we use the data from 2000, and for the test, we use 2003, 2005, 2009, and 2015. A total of 53,839, 28,526, and 25,421 data points are used for training, validation, and testing, respectively. The data cover both storm and quiet times.
The red dots (ANN model) in Figure 2 show the RMSE and correlation coefficients obtained by comparing the prediction results with the observed Dst index during the test period. The statistical analysis is conducted for the entire period, without dividing it into storm and quiet periods and regardless of the distinction between CME- and CIR-driven geomagnetic storms. For comparison with other models, this figure also shows the results from the various ANN models reported in the literature (Lazzús et al., 2017). As seen in Figure 2, many models predict the Dst index just 1 h ahead, and more advanced models (Wu & Lundstedt, 1997; Stepanova et al., 2005) can predict up to 6 h ahead, while our model attempts to predict the Dst index up to 24 h ahead. This initial version of the ANN shows similar or slightly lower performance than the other models because no improved techniques are used.
Empirical model
The relationship between the solar wind and the Dst index can be derived using an arbitrary empirical formula (McPherron & O'Brien, 2001). In deriving the equations, we regard a geomagnetic storm as having only one main phase and one recovery phase, to simplify the relationship between the measured Dst index and the input parameters. However, this model can also be used to predict multi-peak storms when combined with the ANN model. The empirical model consists of three equations: one predicting the minimum Dst index, one finding the time at which the Dst minimum is reached, and one expressing the recovery phase.
Equations (7) and (8) are used to determine the minimum Dst index (Dst_min) from the current solar wind conditions for CME- and CIR-driven storms, respectively. While Burton et al. (1975) expressed the ring current injection rate in terms of the solar wind speed (V) and the southward IMF (Bs), in this work we add the solar wind density (N), the total IMF (Bt), and the current Dst index to complete the equation. The solar wind parameters on the right-hand side of equations (7) and (8) are the measurements available when the model starts to run each hour. Recall that this model is designed to produce new prediction values every hour for space weather operations. These equations were derived with a polynomial fitting algorithm in commercial software for the storm periods, with exponential terms added to adjust the trend for large storms. In deriving these equations, we fixed some arbitrary coefficients and fitted a function to the measurements to find the unfixed coefficients. We do not claim to have found the optimal description of the minimum Dst, because the equations have six input parameters; note that there is still a large inconsistency between predictions and measurements in Figure 3. Nevertheless, these empirical equations are effective in building the combination model, as shown in Section 5. At the initial storm phase, the predicted Dst_min is mainly controlled by the solar wind measurements, because Dst_current is generally a small value and the term e^(|Dst_current|/450) does not contribute significantly to the Dst_min prediction. However, as a big storm progresses, solar wind data alone are not enough to describe the development of the geomagnetic storm, and the exponential term adjusts the estimate of Dst_min. Figure 3 shows the minimum Dst predicted using equations (7) and (8) (blue symbols), the measured minimum Dst (black symbols), and the trend (red line) of equations (7) and (8) under the specific conditions Bt = Bs, V = 500 km/s, and N = 30/cm³. In these figures, the predicted Dst_min is calculated at the initial phase of the storm, when V × Bs just exceeds 0.49 mV/m, and the minimum Dst measurement is obtained several hours later. As shown in Figure 3, the frequency of smaller geomagnetic storms is higher. For prediction algorithms based on artificial intelligence, a larger weighting factor is effectively applied to more frequent storms; therefore, the ANN algorithm shows worse prediction accuracy for larger storms. In contrast, the empirical models are derived by human intuition and attempt to find a correlation between the minimum Dst index and the input parameters, regardless of the amount of data.
Next, we calculate the time (T_min) required to reach the minimum Dst obtained from equations (7) and (8). If we can estimate the Dst variation per hour (dDst), we can estimate T_min as T_min = (Dst_min − Dst_current) / dDst. As the Dst index represents the variation in the geomagnetic field due to ring current development, the amount of change in the Dst index is proportional to the injection rate of ring current particles. In this study, we adopt the empirical model of Wang et al. (2003) to derive the rate of change in Dst per hour (dDst) during the storm main phase.
Dst* = Dst − 7.26√P + 11 (nT), where P (in nPa) is the solar wind dynamic pressure, and P_0 and c are parameters provided by Wang et al. (2003). These parameters depend on the Dst index, as shown in Table 3. Here, the Dst index refers to the measurement at the current time.
For more information about the model, refer to Wang et al. (2003). Now that we have obtained dDst, the prediction equation for the main phase can be expressed as Dst(tc + Δt) = Dst_current + dDst × Δt, for 0 ≤ Δt ≤ T_min. Here, we assume that the Dst index decreases linearly from Dst_current to Dst_min. While the Dst index changes during the main phase can be predicted using equation (11), an equation for the recovery phase still needs to be derived. Figure 4 is a superposed-epoch plot of the Dst index for the recovery phases of the storms listed in Tables A.1 and A.2 (Appendix); here, the minimum Dst is normalized to −1. We examined several fitting functions to derive an equation describing the Dst index trend in the recovery phase. We found that an equation represented by two exponential functions, such as equation (12), effectively follows the recovery phase trend. Here, equation (12), presented by the red line, is not an average of Figure 4 but simply a plausible expression derived by intuition; other researchers may obtain a different equation.
From equation (11), we obtain the profile of Dst variation for the storm main phase, and from equation (12) for the recovery phase, from 1 h to 24 h ahead. While a storm is progressing, Dst_min is updated at every time step; if the predicted Dst_min is less than the current Dst, the empirical model recognizes that the storm is in the recovery phase. Otherwise, if the predicted Dst_min is larger than the current Dst, the algorithm predicts another storm main phase. Thus, this model can generate multi-peak storm predictions. These values are used as input parameters of the ANN model.
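A compact sketch of how such a 1-24 h empirical profile could be assembled is shown below. Dst_min and dDst are treated as given inputs (they would come from equations (7)/(8) and from the Wang et al. (2003) injection model), and because the coefficients of equation (12) are not reproduced in this text, the recovery shape uses a generic double exponential with placeholder constants.

```python
import numpy as np

def empirical_profile(dst_current, dst_min, ddst_per_h, horizon_h=24,
                      a=0.6, tau1_h=6.0, tau2_h=30.0):
    """1-24 h empirical Dst profile: linear main phase down to dst_min, then a double-exponential recovery.

    dst_min and ddst_per_h are assumed to be supplied by equations (7)/(8) and by the Wang et al. (2003)
    model; a, tau1_h and tau2_h are placeholder recovery constants, not the equation (12) values.
    """
    if dst_min >= dst_current or ddst_per_h >= 0:
        t_min = 0.0                                    # predicted minimum not deeper than now: recovery only (assumption)
    else:
        t_min = (dst_min - dst_current) / ddst_per_h   # hours needed to reach the minimum
    profile = []
    for dt in range(1, horizon_h + 1):
        if dt <= t_min:                                # main phase: linear decrease
            profile.append(dst_current + ddst_per_h * dt)
        else:                                          # recovery phase: normalized double exponential
            tr = dt - t_min
            profile.append(dst_min * (a * np.exp(-tr / tau1_h) + (1 - a) * np.exp(-tr / tau2_h)))
    return np.array(profile)

# example: current Dst of -50 nT, predicted minimum of -200 nT, main-phase slope of -30 nT/h
print(empirical_profile(-50.0, -200.0, -30.0))
```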
Combination model
The KASI Dst prediction (KDP) model effectively predicts the Dst index by combining the empirical and ANN models. First, we estimate the Dst index from 1 h to 24 h ahead using empirical equations (11) and (12). The prediction results from the empirical model are then combined with the measurements listed in Table 1. For example, the input parameters of the 6th ANN model are the 6 h ahead prediction value from equations (11) and (12) and the measurements in Table 1. Hence, there are effectively 24 independent models, one for each prediction hour. Even though the predicted Dst from the empirical model is just one among 15 inputs, the ANN gives greater weight to the empirical predictions. Therefore, simply adding the empirical model outputs to the ANN's input data improves the final prediction performance. The method of training and validation is the same as for the original ANN model. Table 4 shows the combination model's hyperparameters. Figure 5 shows an example of how the KDP model predicts the Dst index during geomagnetic storms. A halo CME was observed at 08:06 UT on 18 November 2003 by the Solar and Heliospheric Observatory (SOHO) Large Angle and Spectrometric Coronagraph (LASCO) C2. At this time, the CME's speed was approximately 1150 km/s, and it was expected to reach the Earth after approximately 32 h, obtained by simply dividing the Sun-Earth distance (150,000,000 km) by the solar wind speed (1150 km/s). However, at this moment there were large uncertainties as to whether a geomagnetic storm would be caused by this CME, how much the Dst index would decrease, how fast it would reach the minimum Dst, and how long the recovery would take. At this moment, space weather operators set the KDP to its CME mode and wait for the CME shock arrival. The ACE satellite observed the CME shock at 07:26 UT on 20 November, indicating that the arrival time was approximately 47 h, approximately 15 h later than expected. However, the model did not predict a geomagnetic storm at that point, because the IMF Bz component was northward at the shock arrival. In the bottom panel of Figure 5, the colored dots indicate the Dst values predicted 1 h ahead at each hour, and the lines show the subsequent predicted values for the following 24 h at one-hour intervals. Note that the first red dot is around zero, and no significant Dst changes are expected for the next 24 h. As time goes on, the colored dots and lines show the magnetic storm proceeding.
As the IMF Bz component rapidly turned southward at 10:45 UT on 20 November, the KDP model started to predict a minor storm event, in which a Dst minimum of −140 nT would be reached around 13:00 UT. As the IMF enhanced southward and the solar wind speed increased, the KDP model produced new predictions over time and provided results similar to the actual Dst values. At 12:00 UT, the KDP model predicted that a Dst minimum of −430 nT would be reached around 20:00 UT. This value was similar to the measured value of −422 nT at 21:00 UT. At 21:00 UT on 20 November, the KDP model predicted that the storm would recover to −100 nT after one day. In this manner, space weather operators can deliver information about space weather conditions to users several hours before the minimum Dst is reached. Figure 6 shows another example of KDP model prediction, for the magnetic storm caused by a CIR on 5 August 2019. Generally, a CIR can be identified from the coronal holes in solar UV images and produces minor magnetic storms. From 03:00 UT on 5 August, the solar wind speed increased slowly to about 700 km/s, a speed similar to that of the CME storm shown in Figure 5. However, the IMF Bz remained larger than −10 nT while the storm was progressing, and caused a storm of a different scale from the CME-driven storm described in Figure 5. At the initial stage of the storm, the KDP model predicted the Dst index well. However, as the solar wind speed increased extraordinarily, the model overestimated the minimum Dst at −80 nT, while the measurement remained above −50 nT. Nevertheless, the model successfully predicted the minor storm caused by the CIR.

Table 3. Parameters used for calculating the ring current injection rate, Q (per Dst index range).
Dst > −50 nT: P_0 = 3.3, c = 0.2.
−100 nT < Dst < −50 nT: P_0 = 3.1, c = 0.19.
−150 nT < Dst < −100 nT: P_0 = 3.5, c = 0.25.
Dst < −150 nT: P_0 = 3.5, c = 0.18.
Regarding the KDP model performance, Figure 7 shows the correlation coefficient and RMSE, derived from equations (5) and (6), for the model used in space weather operation. In other literature, space weather model performance has been reported for both storm and quiet periods. Such performance figures are dominated by the quiet period because it is considerably longer than the storm period. From a space weather operator's perspective, however, the important thing is how accurately storm events are predicted. We believe a better operational model should perform well in predicting storm events. Thus, these results are obtained for the storm events in the test period. Figure 8 shows the KDP model's correlation coefficients and RMSEs for the training, validation, and test groups to illustrate the model's prediction capability. The higher correlation coefficient and smaller RMSE show that the Dst index is predicted more accurately by the combination model than by the empirical and ANN models.
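For reference, the two evaluation metrics restricted to storm periods can be computed as in the following sketch; the storm-period selection mask is an assumption of this example, not part of the paper's code.

```python
import numpy as np

def storm_metrics(dst_obs, dst_pred, storm_mask):
    """Correlation coefficient and RMSE restricted to storm periods.

    storm_mask: boolean array selecting the hours that belong to storm events.
    """
    obs = np.asarray(dst_obs, dtype=float)[storm_mask]
    pred = np.asarray(dst_pred, dtype=float)[storm_mask]
    corr = np.corrcoef(obs, pred)[0, 1]
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    return corr, rmse
```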
Discussion and conclusion
This paper introduces the KDP model, which combines an ANN and an empirical model. The combination model showed better performance than either model alone in predicting the Dst index from 1 h to 24 h ahead, as shown in Figure 7. Therefore, we conclude that better space weather prediction can be achieved by combining models.
While this study aims to show the improvement of an ANN model obtained by combining it with an empirical model, as shown in Figure 7, there have been demands to compare the KDP model's performance with that of other models. Although the ANN in the KDP model is trained only on storm periods, the input parameters are also available in quiet time, so the KDP model can produce non-storm predictions. Figure 9 shows the storm-time and quiet-time results, suggesting that the combination model has better prediction performance than other published models. However, such a comparison does not tell us which model is better or worse, because the test periods and conditions differ. The persistence model shown in Figure 9 assumes that the predicted values are the same as the current Dst. Figure 9 clearly shows that the KDP model performs better than the persistence model, and this model might therefore be useful in space weather operations.
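A persistence baseline of this kind is straightforward to reproduce; a minimal sketch is given below (the alignment convention between predictions and targets is the only assumption here).

```python
import numpy as np

def persistence_forecast(dst_series, horizon):
    """Persistence baseline: predict Dst(t + horizon) as the current Dst(t).

    Returns predictions aligned with the targets dst_series[horizon:].
    """
    dst = np.asarray(dst_series, dtype=float)
    return dst[:-horizon] if horizon > 0 else dst.copy()
```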
In addition to the persistence model, prediction efficiency is also used to validate space weather models. Figure 10 shows the prediction efficiency of the KDP model, calculated as

PE = 1 − Σ_{s=1}^{M} (Dst_s − Dst_s^{nn})² / Σ_{s=1}^{M} (Dst_s − ⟨Dst⟩)²,

where Dst_s, Dst_s^{nn}, and ⟨Dst⟩ stand for the measured, predicted, and mean Dst values, respectively, and M is the length of the record. The prediction efficiency tests the ability of the model to predict the variation of Dst around the mean. A prediction efficiency of 1 indicates perfect agreement at all times. Prediction efficiencies less than or equal to zero do not provide useful predictions of the observations' time variation. Figure 10 shows that the prediction efficiency decreases as the prediction time increases, while the model remains useful in predicting the 24-hour Dst index.
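A short sketch of the prediction-efficiency calculation, following the definition above:

```python
import numpy as np

def prediction_efficiency(dst_obs, dst_pred):
    """PE = 1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2)."""
    obs = np.asarray(dst_obs, dtype=float)
    pred = np.asarray(dst_pred, dtype=float)
    residual = np.sum((obs - pred) ** 2)
    variance = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - residual / variance
```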
It should be noted that the ANN and empirical models used in this study were developed with conventional methods. The ANN is a commonly used feed-forward network, and the empirical models are implemented with roughly estimated equations. As shown in Figure 2, the ANN model is not superior to other existing models. As shown in Figure 3, the minimum Dst values are spread widely, and the empirical equations do not fit the data well. Nevertheless, the combined model yielded better results, which means the combination model could be improved significantly by adopting a modernized ANN and more sophisticated empirical models. Further research will improve the empirical equations for predicting the Dst index. If two improved models are combined, the prediction accuracy should increase further. Thus, this paper encourages researchers to improve their neural network models by incorporating empirical models.
In addition, by categorizing geomagnetic storms into CME-driven and CIR-driven storms, we improved the model performance. Through such categorization, we can increase the correlation between the Dst index and the input parameters. In a future study, the solar cycle and seasonal effects could also be considered to build an improved empirical model. Space weather operators can easily recognize from solar images whether a CME or a CIR will cause a magnetic storm. However, a CME can sometimes be embedded in a CIR, resulting in a compound storm event (Al-Shakarchi & Morgan, 2018). In this case, the decision to select CME or CIR mode is left to the operators. Even if they fail to identify whether a storm is CME- or CIR-driven, the KDP model still predicts the Dst index from the solar wind parameters and the current Dst.
The KDP model has been implemented to predict the Dst index 24 h ahead, and the prediction output is updated every hour for space weather operations. In general, space weather information is delivered to professional user groups, such as radio communication companies, satellite operators, and flight dispatchers. Space weather users state that they can cope with severe space weather conditions given a reasonable prediction lead time. For example, flight dispatchers (private communication with Korean Air dispatchers) say they can change routes 3 h before a flight. It is well known that satellite launches can be delayed just a few minutes before launch. GEO satellite operators monitor their satellites around the clock and can act quickly in emergencies. As shown in Figure 7, the KDP model can predict Dst storms 6 h ahead with a correlation coefficient of 0.8 and an RMSE of 24 nT or less. Figure 11 shows the correlations between measurements and predicted values 3, 6, 12, and 24 h ahead. The KDP model shows good correlations up to 6 h ahead. We emphasize that these are acceptable characteristics for space weather operation. Thus, this model can be used as an operational model for delivering warning signals to space weather users.
We have noted that space weather users want to know not only the minimum Dst index but also the recovery time of storms. Users want to know when they can return to their normal activities. Even though the 24-hour prediction correlation coefficient of the KDP model is not satisfactory, at 0.38, we attempted to track the storm's progress using the 24-hour prediction model and to predict the recovery phase. Although the recovery of the Dst index does not mean that a space weather event has ended completely, it would be helpful information for space weather alerts.
The Dst index and solar wind data used in this study were downloaded from the World Data Center for Geomagnetism, Kyoto (http://wdc.kugi.kyoto-u.ac.jp/dst_realtime/presentmonth/index.html) and NOAA (https://services.swpc.noaa.gov/products/solar-wind/). This work was supported by the National Meteorological Satellite Centre (NMSC) of the Korea Meteorological Administration (KMA) through the "Geostationary Meteorological Satellite Ground Segment Development" research project. The editor thanks two anonymous reviewers for their assistance in evaluating this paper. | 7,117.6 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Spectral data of tropical soils using dry-chemistry techniques (VNIR, XRF, and LIBS): A dataset for soil fertility prediction
Proximal soil sensing technologies, such as visible and near infrared diffuse reflectance spectroscopy (VNIR), X-ray fluorescence spectroscopy (XRF), and laser-induced breakdown spectroscopy (LIBS), are dry-chemistry techniques that enable rapid and environmentally friendly soil fertility analyses. The application of XRF and LIBS sensors, individually or in combination, for soil fertility prediction is quite recent, especially in tropical soils. The shared dataset presents spectral data from VNIR, XRF, and LIBS sensors, as well as the characterization of key soil fertility attributes (clay, organic matter, cation exchange capacity, pH, base saturation, and exchangeable P, K, Ca, and Mg) of 102 soil samples. The samples were obtained from two Brazilian agricultural areas and have a wide variation of chemical and textural attributes. This is a pioneer dataset of tropical soils, with potential to be reused for comparative studies with other datasets, e.g., comparing the performance of sensors, instrumental conditions, and/or predictive models on different soil types, soil origins, concentration ranges, and agricultural practices. Moreover, it can also be applied to compose soil spectral libraries that use spectral data collected under similar instrumental conditions.
Value of the Data
• The techniques applied for data acquisition allow rapid, non-destructive, and reagent-free analysis; studies involving these techniques are still incipient in the context of proximal soil sensing, and this database is among the pioneers for tropical soils.
• This database can be used for comparative studies with other datasets, e.g., comparing the performance of sensors, instrumental conditions, and/or predictive models on different soil types, soil origins, concentration ranges, and agricultural practices.
• This database can also be used to evaluate predictive models little explored in the literature that exploit the synergies among sensors.
• This database could also be used to compose soil spectral libraries that use data collected under similar instrumental conditions.
Data Description
The dataset contains spectral data and characterizations of key soil fertility attributes of 102 soil samples. These samples are from two Brazilian agricultural areas, whose soils are classified as Lixisol (Field 1) and Ferralsol (Field 2) [1]. Both types of soil are commonly found in Brazil's tropical regions [2], as well as in tropical regions of Africa, Asia, and Oceania [1]. Agricultural soils in the Brazilian tropical region are generally acidic and have low natural fertility; this characteristic makes soil fertility analysis fundamental for the correct prescription of fertilizers [3]. It is estimated that about 7 million samples are analysed annually in traditional analytical laboratories; furthermore, Brazil is the fourth largest consumer of fertilizers in the world [4]. The chosen fields have different soil matrices due to considerable contrast in texture and total elemental composition. Regarding the fertility attributes, the two-field dataset presents wide ranges of variability, as shown in Fig. 1. After the soil fertility tests, the samples were scanned with the following direct analysis techniques: (i) visible and near infrared diffuse reflectance spectroscopy (VNIR), (ii) X-ray fluorescence spectroscopy (XRF), and (iii) laser-induced breakdown spectroscopy (LIBS). The components of the shared dataset are schematized in Fig. 2.
The shared dataset contains four tables (shared in both .txt and .xlsx formats) named "soil fertility data", "VNIR data", "XRF data", and "LIBS data", which respectively contain the data from the soil fertility analysis and the VNIR, XRF, and LIBS spectroscopies. The tables "soil fertility data", "VNIR data", and "XRF data" are organized as dataframes in long format (i.e., observations in rows and variables in columns), and the table containing the LIBS data is a dataframe in wide format (i.e., observations in columns and variables in rows). All datasets have 102 observations and have as primary key the variable ID (the first variable of all datasets), which identifies the samples (observations) with sequential numbers. The second variable of all datasets is named "Field" and contains the category "1" for samples from Field 1 (n = 58) and the category "2" for samples from Field 2 (n = 44). The other variables of each dataset are specified below.

Fig. 1. Boxplot of the clay, organic matter (OM), cation exchange capacity (CEC), pH, base saturation (V), and extractable (ex-) P, K, Ca, and Mg content (n = 102 soil samples from Fields 1 and 2), which are the soil fertility attributes to be used as Y-variables in predictive modelling. The coefficient of variation (CV) for each attribute is also shown, expressed in %. This figure was modified from Tavares et al [5].

Fig. 2. Framework of the shared dataset. In this study, 102 soil samples collected from tropical agricultural fields were scanned using visible and near infrared diffuse reflectance spectroscopy (VNIR), X-ray fluorescence spectroscopy (XRF), and laser-induced breakdown spectroscopy (LIBS), and were also sent to a commercial laboratory for determining clay, organic matter (OM), cation exchange capacity (CEC), pH, base saturation (V), and extractable (ex-) P, K, Ca, and Mg content. Soil spectra can be used as X-variables and soil fertility attributes as Y-variables in predictive modelling. This figure was modified from Tavares et al [5].
• "Soil fertility data'': from column 3 to 11 (9 variables) are the contents of clay, organic matter (OM), cation exchange capacity (CEC), pH, base saturation (V), exchangeable (ex-) P, ex-K, ex-Ca, and ex-Mg, respectively. The values are given in g dm −3 for clay and OM; in mmol c dm −3 for CEC, ex-K, ex-Ca, and ex-Mg; in % for V; and, for ex-P, it was given in mg dm −3 .
Soil samples, fertility analysis, and sample preparation for spectroscopic analyses
A total of 58 samples were collected in Field 1, located in the municipality of Piracicaba, State of São Paulo, Brazil. The remaining samples (n = 44) were collected in Field 2, situated in the municipality of Campo Novo do Parecis, State of Mato Grosso, Brazil. All samples were collected from 0 to 20 cm depth. The soil samples were subjected to laboratory analyses, which provided the contents of clay, OM, CEC, pH, V, ex-P, ex-K, ex-Ca, and ex-Mg. These determinations followed the methods described by Van Raij et al [6]: clay was determined using the Bouyoucos hydrometer method; extractable nutrients by ion exchange resin extraction; OM by oxidation with potassium dichromate solution; and pH in calcium chloride solution. The CEC was calculated as the sum of the soil potential acidity and the sum of bases (ex-K + ex-Ca + ex-Mg); in turn, the soil potential acidity was determined via the SMP buffer solution. The V was also calculated and represents the percentage of bases in the CEC.
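The two derived quantities follow directly from those definitions; a minimal sketch (units as given above, mmolc dm−3 for the bases and the potential acidity):

```python
def cec_and_v(ex_k, ex_ca, ex_mg, potential_acidity):
    """CEC = potential acidity + sum of bases; V = percentage of bases in the CEC."""
    sum_of_bases = ex_k + ex_ca + ex_mg      # mmolc dm-3
    cec = potential_acidity + sum_of_bases   # mmolc dm-3
    v = 100.0 * sum_of_bases / cec           # %
    return cec, v
```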
For VNIR and XRF data acquisition, loose powder soil samples (air-dried, grain size ≤ 2 mm) were used, whereas pelletized samples were used for LIBS data acquisition. For pelletizing, samples were comminuted in a planetary ball mill with 10% w w−1 of binder material (microcrystalline cellulose, Merck, Darmstadt, Germany) and then pressed, as detailed by Tavares et al [7].
VNIR data acquisition
The Veris MSP3 device (Veris Technologies, Salina, Kansas, USA) was used for VNIR data acquisition. This system consists of a tungsten halogen lamp as the energy source and a detection system composed of two spectrometers: (i) a CCD array (USB4000, Ocean Optics, Largo, FL, USA) and (ii) an InGaAs photodiode array (C9914GB, Hamamatsu Photonics, Hamamatsu, Japan). This spectrometer set records spectra from 343.00 to 2222.00 nm with a spectral resolution of ±5 nm. The VNIR spectrometer automatically checks the measured reflectance behaviour using four reference materials with known spectral behaviour. In addition, it self-calibrated by taking dark and white reference measurements before each spectrum acquisition. The sample holder isolates the sample from ambient light. Each sample was scanned in triplicate, repositioning the sample after each reading, and the replicates were then averaged. The spectral edges (from 343.00 to 431.59 nm and from 2153.11 to 2222.00 nm) were removed due to the high level of noise.
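A sketch of the replicate averaging and edge trimming described above; the array shapes and the wavelength grid are assumptions of this example.

```python
import numpy as np

def preprocess_vnir(replicates, wavelengths, keep=(431.59, 2153.11)):
    """Average triplicate scans and drop the noisy spectral edges.

    replicates: array of shape (3, n_wavelengths) for one sample.
    wavelengths: array of shape (n_wavelengths,) in nm.
    keep: wavelength window (nm) retained after trimming the edges.
    """
    mean_spectrum = np.mean(replicates, axis=0)
    mask = (wavelengths >= keep[0]) & (wavelengths <= keep[1])
    return wavelengths[mask], mean_spectrum[mask]
```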
XRF data acquisition
A portable energy dispersive X-ray fluorescence spectrometer, Tracer III-SD model (Bruker AXS, Madison, Wisconsin, USA), equipped with a 4 W Rh X-ray tube and a Peltier-cooled silicon drift detector (with 2048 channels and a gain of ∼20 eV/channel), was used for XRF data acquisition. The following instrumental conditions were used: (i) X-ray tube voltage of 35 kV and current of 7 μA; (ii) dwell time of 90 s; (iii) no filter; and (iv) scans performed under atmospheric pressure. Three measurements were taken from each soil sample at three different spots and then averaged.
LIBS data acquisition
For LIBS data acquisition, a benchtop LIBS system was used, composed of a pulsed Nd:YAG laser (Brilliant, Quantel, France) and an ESA 3000 spectrometer (LLA Instruments GmbH, Berlin, Germany). The laser operates at 1064 nm, generating 5 ns pulses of up to 365 mJ in a 6 mm diameter beam at a 10 Hz repetition rate. The laser pulse was focused on the sample surface by a plano-convex lens with 2.54 cm diameter and 20 cm focal length. Pressed pellets were placed into a plastic sample holder positioned on a two-axis, manually controlled translation stage, movable in the plane orthogonal to the laser direction. A laminar stream of argon (5.0 L min−1) was continuously fed from the bottom of the sample holder in order to displace the atmospheric air around the sample surface. The emission from the plasma was collected using a telescope (positioned about 25° from the laser axis) composed of 50 and 80 mm focal length fused silica lenses and coupled to the entrance slit of the spectrometer through an optical fiber. The spectrometer is equipped with Echelle optics (focal length of 25 cm with a numerical aperture of 1:10) and an ICCD camera detector comprising a Kodak KAF 1001 full-frame CCD array of 1024 × 1024 pixels, enabling spectra to be recorded from 200 to 780 nm with a resolution varying from 5 pm at 200 nm to 19 pm at 780 nm.
LIBS instrumental conditions were optimized in initial tests to obtain the maximum signal-to-noise ratio for the emission lines of interest, as suggested by Nunes et al [8]. The experimental conditions used for data acquisition were: 65 mJ laser pulses at a fluence of 225 J cm−2 (65 mJ per pulse at a 180 μm laser spot size and 19.5 cm lens-to-sample distance), 15 accumulated laser pulses per site, 2.0 μs delay time, and 7.0 μs integration time gate. The pressed pellets were sampled at 21 different sites in order to account for the micro-heterogeneity of the analytes in the samples; the replicates were then averaged.
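As one example of the predictive modelling the dataset is intended for (Fig. 2), a partial least squares regression relating a spectral matrix X to one fertility attribute y could be set up as below. PLS is a common choice for spectral data but only one of many possible models, and the number of components and cross-validation folds here are arbitrary.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

def evaluate_pls(X, y, n_components=10, cv=10):
    """Cross-validated PLS regression of one fertility attribute on spectra.

    X: (102, n_spectral_variables) spectra; y: (102,) attribute, e.g. clay.
    """
    model = PLSRegression(n_components=n_components)
    y_pred = cross_val_predict(model, X, y, cv=cv).ravel()
    rmse = np.sqrt(mean_squared_error(y, y_pred))
    return r2_score(y, y_pred), rmse
```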
Ethics Statement
The authors declare that the work does not involve the use of human subjects, animal experiments, or data collected from social media platforms, and is therefore exempt from an ethics approval process.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships which have or could be perceived to have influenced the work reported in this article.
Data Availability
Spectral data of tropical soils using dry-chemistry techniques (VNIR, XRF, and LIBS): a dataset for soil fertility prediction (Original data) (Mendeley Data Repository). | 2,902.2 | 2022-03-01T00:00:00.000 | [
"Environmental Science",
"Chemistry"
] |
Highly Strain‐Stable Intrinsically Stretchable Olfactory Sensors for Imperceptible Health Monitoring
Abstract Intrinsically stretchable gas sensors possess outstanding advantages in seamless conformability and high-comfort wearability for real-time detection of skin/respiration gases, making them promising candidates for health monitoring and non-invasive disease diagnosis and therapy. However, strain-induced deformation of the sensitive semiconductor layers can cause sensing signal drift, preventing reliable gas detection. Herein, the surprising result that stretchable organic polymers present a universal strain-insensitive gas sensing property is shown. All the stretchable polymers with different degrees of crystallinity, including indacenodithiophene-benzothiadiazole (PIDTBT), diketo-pyrrolo-pyrrole bithiophene thienothiophene (DPPT-TT) and poly[4-(4,4-dihexadecyl-4H-cyclopenta[1,2-b:5,4-b′]dithiophen-2-yl)-alt-[1,2,5]thiadiazolo[3,4-c]pyridine] (PCDTPT), show almost unchanged gas response signals in different stretching states. This outstanding advantage enables the intrinsically stretchable devices to adhere imperceptibly to human skin and conform well to versatile deformations such as bending, twisting, and stretching, while maintaining highly strain-stable gas sensing. The intrinsically stretchable PIDTBT sensor also demonstrates excellent selectivity toward the skin-emitted trimethylamine (TMA) gas, with a theoretical limit of detection as low as 0.3 ppb. The work provides new insights into the preparation of reliable skin-like gas sensors and highlights the potential applications in the real-time detection of skin gas and respiration gas for non-invasive medical treatment and disease diagnosis.
The gas permeability of the semiconductor is confirmed by sealing a container containing the gases (including TMA, NO2, and NH3) with a polymer semiconductor film, and then checking the degree of gas leakage. Photographs of the containers sealed by the polymer semiconductor are shown in Fig. S2a, c.
Herein, a stable atmospheric-pressure gas environment can be obtained by slowly injecting the target gas into the container and then closing the two outlets. In order to avoid the influence of disturbances from the ambient gas on the leaked gas, we placed the semiconductor-film-sealed container into a larger airtight tank (7000 ml). Then a low-flow vacuum pump (~500 ml/min, calibrated by comparison with mass-flow-meter bubble velocity) was used to pump the gas from the tank into another gas detection chamber to detect changes in concentration. Owing to the low flow rate, gas replenished from outside the tank over a short period (100 s) has little effect on the gas concentration inside.
Two types of semiconductor thin films were used in the specific experiments.
First, suspended semiconductor films were used to confirm the gas permeability of the unstretched film; these can be obtained via a peel-off process using a PDMS film with small holes. As shown in Figure S2b, when the gas was injected into the semiconductor-sealed container, the sensor quickly exhibited a strong response, and a slow recovery when the container was removed. This directly demonstrates the excellent gas permeability of unstretched polymer semiconducting films. However, stretching the suspended films further requires PDMS films (self-supporting elastomeric films) with regular, small pores and smoother pore/film interfaces. It is currently still challenging to obtain high-quality through-holes in an elastomer with a thickness of tens of micrometers, which is not the focus of this work.
The air permeability of stretched semiconductor films was determined using semiconductor films that were attached to ultra-thin and complete PDMS substrates.
This is because PDMS, as an amorphous polymer, has exhibited gas permeability in numerous previous works. As shown in Figure S2d, a significant response was also observed in the black line (unstretched film), but the time point of the response lagged behind the time point of gas injection. This may be because the PDMS thickness reaches tens of microns, and gas molecules need to diffuse or dissolve slowly before they can pass through. When the film was stretched at 30% strain, the sensor's response increased slightly, by ~10% (red line), compared to the former. This may be because the smaller thickness caused by stretching shortens the path for gas permeating the film. In summary, polymer semiconductor films are gas-permeable regardless of whether they are stretched; that is, gas molecules can enter the interior of the film through free diffusion and then pass through the film.
Further, the gas permeability of the polymer film to NH3 and NO2 was visually demonstrated using pH test paper. Specifically, a container containing a solution of concentrated nitric acid and ammonia was sealed with a polymer film, and pH paper was placed on top of the film. As NH3 and NO2 permeate the film, the pH test paper undergoes an obvious discoloration corresponding to the alkalinity or acidity, as shown in Figure S2e. The color change of the pH test paper placed on the stretched and unstretched films was consistent, which may be related to the limited resolution of this test. Nevertheless, these results still provide intuitive evidence that gases can permeate polymer semiconductor thin films. Figure S3 shows that the morphology of the PIDTBT film is uniform without any cracks, even after stretching to 90%. DPPT-TT starts to crack at ~20% strain. As the strain increases, cracks with larger gaps appear. PCDTPT starts to crack at ~5% strain.
As the strain increases, dense cracks with small gaps appear.
Figure S1. The detailed fabrication process of the intrinsically stretchable gas sensor.
Figure S2. Photographs of the containers sealed by the two types of polymer semiconductor film (a, c) and the corresponding response curves to gas leakage (b, d).
Figure S3. Optical micrographs of the polymer semiconductors in different stretched states. Insets show the crack-onset strains of DPPT-TT and PCDTPT, respectively.
"Materials Science",
"Engineering",
"Chemistry"
] |
Complete mitochondrial genome sequences from five Eimeria species (Apicomplexa; Coccidia; Eimeriidae) infecting domestic turkeys
Background Clinical and subclinical coccidiosis is cosmopolitan and inflicts significant losses to the poultry industry globally. Seven named Eimeria species are responsible for coccidiosis in turkeys: Eimeria dispersa; Eimeria meleagrimitis; Eimeria gallopavonis; Eimeria meleagridis; Eimeria adenoeides; Eimeria innocua; and, Eimeria subrotunda. Although attempts have been made to characterize these parasites molecularly at the nuclear 18S rDNA and ITS loci, the maternally-derived and mitotically replicating mitochondrial genome may be more suited for species level molecular work; however, only limited sequence data are available for Eimeria spp. infecting turkeys. The purpose of this study was to sequence and annotate the complete mitochondrial genomes from 5 Eimeria species that commonly infect the domestic turkey (Meleagris gallopavo). Methods Six single-oocyst derived cultures of five Eimeria species infecting turkeys were PCR-amplified and sequenced completely prior to detailed annotation. Resulting sequences were aligned and used in phylogenetic analyses (BI, ML, and MP) that included complete mitochondrial genomes from 16 Eimeria species or concatenated CDS sequences from each genome. Results Complete mitochondrial genome sequences were obtained for Eimeria adenoeides Guelph, 6211 bp; Eimeria dispersa Briston, 6238 bp; Eimeria meleagridis USAR97-01, 6212 bp; Eimeria meleagrimitis USMN08-01, 6165 bp; Eimeria gallopavonis Weybridge, 6215 bp; and Eimeria gallopavonis USKS06-01, 6215 bp). The order, orientation and CDS lengths of the three protein coding genes (COI, COIII and CytB) as well as rDNA fragments encoding ribosomal large and small subunit rRNA were conserved among all sequences. Pairwise sequence identities between species ranged from 88.1% to 98.2%; sequence variability was concentrated within CDS or between rDNA fragments (where indels were common). No phylogenetic reconstruction supported monophyly of Eimeria species infecting turkeys; Eimeria dispersa may have arisen via host switching from another avian host. Phylogenetic analyses suggest E. necatrix and E. tenella are related distantly to other Eimeria of chickens. Conclusions Mitochondrial genomes of Eimeria species sequenced to date are highly conserved with regard to gene content and structure. Nonetheless, complete mitochondrial genome sequences and, particularly the three CDS, possess sufficient sequence variability for differentiating Eimeria species of poultry. The mitochondrial genome sequences are highly suited for molecular diagnostics and phylogenetics of coccidia and, potentially, genetic markers for molecular epidemiology.
Background
As many as seven Eimeria species, Eimeria dispersa, Eimeria meleagrimitis, Eimeria gallopavonis, Eimeria meleagridis, Eimeria adenoeides, Eimeria innocua and Eimeria subrotunda, can cause coccidiosis in the turkey, Meleagris gallopavo [1]. Coccidiosis is widespread and pathogenic with considerable economic losses to the poultry industry [2,3]. These parasites possess morphotypes of oocysts with overlapping biological features that make identification, characterization and diagnosis challenging [4,2]. Delimiting individual species using morphological features, even when supplemented by 18S rDNA or internal transcribed spacer (ITS) sequence data, has been reported to be less than ideal for coccidia, especially for closely related parasites [5][6][7][8][9][10]. Sequences from the mitochondrial cytochrome c oxidase subunit I gene (mtCOI) have been shown to be reliable for delimiting closely related species [9] and the mtCOI locus appears to lack paralog issues associated with rDNA of these parasites [10].
A single, complete mitochondrial (mt) genome copy for parasites within the Apicomplexa is about 6 KB long [11,12]. Genome organisation varies considerably among eukaryotes in general and also within the Apicomplexa [13,14]. Among apicomplexan parasites, genome structures that have been reported include linear concatemers [15,16], linear genomes with terminal inverted telomeric repeats [17,18] and circular genomes [12,19]. Regardless of overall genome structure, all apicomplexan mt genomes examined to date possess three genes encoding cytochrome c oxidase subunit I (COI), cytochrome c oxidase subunit III (COIII) and cytochrome b (CytB), as well as numerous fragments of discontinuous and scrambled small subunits (SSU) and large subunit (LSU) rDNA. The specific LSU and SSU rDNA fragments found in the mt genome of apicomplexan parasites differ among distantly related parasites. Unlike many eukaryotic mt genomes, apicomplexan mt genomes do not encode 5S rRNA or tRNAs [20][21][22][23][24].
Parasites
Six single oocyst-derived lines of five Eimeria species were used in this study. A description of the origins of the original isolates from which each line was derived is provided by El Sherry et al. [25]. The lines used were as follows: 1) Eimeria adenoeides Guelph strain (see [26], in submission, for biological features of the line); 2) Eimeria dispersa Briston strain; 3) E. meleagrimitis USMN08-01 strain (see [27,28] for biological features); 4) E. meleagridis USAR97-01 strain (see [29] for biological features); 5) E. gallopavonis Weybridge strain (see [30] for biological features); and 6) E. gallopavonis USKS06-01 strain. All lines were derived from parent isolates using the method of Remmler and McGregor [31], with the modification that agar plugs carrying a single oocyst were given orally within gelatin capsules to specific-parasite-free poults. All animal experimentation was conducted in SPF birds at the Campus Animal Facility (University of Guelph, Guelph ON, Canada); all experimental procedures were reviewed and approved by the University of Guelph's Animal Care Committee and complied with the Canadian Council on Animal Care's Guide to the Care and Use of Experimental Animals (2nd edition).
DNA extraction and long PCR amplification
Purification of oocysts and genomic DNA extraction was carried out as previously described by Ogedengbe et al. [9,24]. Mitochondrial whole genome amplification for all five Eimeria species was initiated using two sets of specific primers that generated overlapping PCR fragments: 1) Cocci_MT-WG-F (5′-TACACCTAGCCAACACGAT-3′) and Cocci_MT-WG-R (5′-GCAGCTGTAGATGGATGCTT-3′); and 2) Inv_COI_262R (5′-AAWGCGGCATCRTAGAATTG-3′) and Inv_COI_461F (5′-CTAGCYATGGGATGTATTACTG-3′). Primers were designed from highly conserved regions within publicly available mitochondrial genome sequences for Eimeria species infecting chickens (see Ogedengbe et al. 2013 for the species used in the primer design). The primer pair Inv_COI_461F and Inv_COI_262R annealed 148 bp apart at bp 2069-2090 and bp 1920-1901, respectively, and the primer pair Cocci_MT-WG-F and Cocci_MT-WG-R annealed 97 bp apart at bp 6322-6340 and bp 6224-6205, respectively, on the published mitochondrial genome sequence of Eimeria mitis [GenBank: KF501573]. Each pair of primers was used independently in a 50 μl reaction. PCR reactions were performed using the QIAGEN LongRange PCR kit (QIAGEN, Valencia, CA, USA) protocol according to the manufacturer's instructions, with the modification that an additional 1.5 mM MgCl2 was added to the PCR buffer provided by the manufacturer. For each Eimeria species, long PCR reactions consisted of ~200 ng genomic DNA template (when using the Inv_COI_461F/Inv_COI_262R primers) or 25 ng genomic DNA template (when using primers Cocci_MT-WG-F and Cocci_MT-WG-R), 1× LongRange PCR buffer, 4 mM MgCl2, 500 μM of each dNTP, 2 U LongRange PCR enzyme mix and 0.4 μM of each primer. The PCR reaction profile consisted of denaturing at 93°C for 3 min followed by 35 cycles of 93°C for 15 s, 50°C for 30 s, and 68°C for 6 min, with a final extension cycle of 68°C for 10 min in an MJ Mini thermal cycler (Bio-Rad, CA, USA). PCR products were electrophoresed at 50 V through a 0.8% agarose gel prepared with 1× TAE buffer containing ethidium bromide. DNA bands were viewed using UV transillumination (Spectronics Corporation, New York, USA) and their sizes were compared to a 100 bp to 10 kb DNA ladder (Bio Basic Inc., Mississauga ON, Canada). DNA bands were excised from the gel and purified using a QIAquick gel extraction and purification kit (Qiagen, Toronto ON, Canada) according to the manufacturer's instructions.
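The primer annealing positions quoted above can be verified in silico against a published mitochondrial genome (e.g. the E. mitis sequence KF501573); a small Biopython-based sketch is shown below. The degenerate-base handling and reverse-complement search are standard, but the code is only an illustrative example, not the workflow used by the authors.

```python
import re
from Bio.Seq import Seq

# IUPAC ambiguity codes expanded to regex character classes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "W": "[AT]", "S": "[CG]",
         "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
         "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

def find_primer_sites(primer, genome):
    """Return 0-based start positions of a primer on the forward and reverse strands."""
    fwd_pattern = "".join(IUPAC[b] for b in primer.upper())
    rev_pattern = "".join(IUPAC[b] for b in str(Seq(primer).reverse_complement()))
    fwd = [m.start() for m in re.finditer(fwd_pattern, genome)]
    rev = [m.start() for m in re.finditer(rev_pattern, genome)]
    return fwd, rev

# Hypothetical usage: genome = str(SeqIO.read("KF501573.fasta", "fasta").seq)
# find_primer_sites("CTAGCYATGGGATGTATTACTG", genome)   # Inv_COI_461F
```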
Sequencing
Purified PCR products were sequenced in both directions using a primer-walking strategy to generate near-complete mitochondrial genomes, essentially as described by Ogedengbe et al. [24]. Sequencing was carried out using the ABI PRISM 7000 Sequence Detection System (Applied Biosystems Inc., Foster City, CA, USA) at the Laboratory Services Division, University of Guelph (Guelph, ON, Canada).
Sequence data assembly and analysis
The de novo sequence assembler within the Geneious bioinformatics software (Version 6.1 and later versions, available from http://www.geneious.com) was used to trim and assemble Sanger sequencing chromatograms into high-quality contigs for the primary PCR product from each species. To complete each mt genome, PCR products were generated using a reverse primer downstream of the original forward primer (i.e. Cocci_MT-WG-F or Inv_COI_461F) and a forward primer upstream of the original reverse primer (i.e. Cocci_MT-WG-R or Inv_COI_262R, respectively); primers were designed such that a minimum of 100 bp of the resulting fragment overlapped the original long PCR product at each end. Each resulting PCR product was sequenced in both directions and the resulting consensus sequence was used to fill in the region between the two original long PCR amplification primers. The coding genes and rDNA fragments were first mapped by comparison with other Eimeria mt genomes (i.e. E. tenella AB564272, annotated by Hikosaka et al. [22], and E. mitis KF501573, annotated by Ogedengbe et al. [24]). Additional putative rDNA fragments were identified by comparing well-conserved unannotated regions found in all of the aligned Eimeria sp. genomes to the mt genome of Plasmodium falciparum (M76611). Conserved regions with sequence identity greater than 60% to rDNA fragments from P. falciparum were mapped as putative rDNA fragments. Putative start and stop codon positions for each of the coding DNA sequences (CDS) were identified following methods previously described by Ogedengbe et al. [24]. Translations using the mold/protozoan mitochondrial genetic code (i.e. translation table 4) were searched using Blastp against the non-redundant sequence database to confirm the identity of the translation product produced by each CDS. Base compositions and nucleotide changes within the CDS among the six mt genome sequences were analysed from within the Geneious software package.
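A minimal sketch of two of the routine checks mentioned here, using Biopython: translating a CDS with the mold/protozoan mitochondrial code (translation table 4) and computing base composition. This is generic illustrative code, not the Geneious workflow used in the study.

```python
from collections import Counter
from Bio.Seq import Seq

def translate_cds(cds_sequence):
    """Translate a mitochondrial CDS with the mold/protozoan code (table 4)."""
    return str(Seq(cds_sequence).translate(table=4, to_stop=True))

def base_composition(sequence):
    """Fraction of each nucleotide (A, C, G, T) in the sequence."""
    counts = Counter(sequence.upper())
    total = sum(counts[b] for b in "ACGT")
    return {b: counts[b] / total for b in "ACGT"}
```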
Phylogenetic analyses
The six newly generated, PCR-based mt genome sequences of Eimeria spp infecting turkeys (Eimeria dispersa Briston strain; E. meleagrimitis USMN08-01 strain; E. meleagridis USAR97-01 strain; E. adenoeides Guelph strain; E. gallopavonis Weybridge strain; and E. gallopavonis USKS06-01 strain) were aligned with the 10 publicly available complete mt genome sequences from seven Eimeria spp. infecting chickens and from Eimeria magna, which infects rabbits (i.e. all available apicomplexan taxa that have the same genome structure). Three sequences of Eimeria mitis (KC409029, KC409030 and KC409031) generated from clones were not included in the phylogenetic analysis because of the likelihood of PCR artifacts in these sequences, as documented by Ogedengbe et al. [24]. GenBank sequence accession numbers are indicated on the trees.
To permit whole genome alignments, all mt genome sequences were linearized at the same position, 85-87 nt upstream of the small subunit rDNA fragment SSU/A corresponding to the binding site of the Cocci_MT-WG-F primer. Linearized sequences were aligned based on the primary structure using the multiple sequence alignment algorithm implemented from within Geneious 6.1; indels downstream of the Cocci_MT-WG-R primer binding site made unambiguous alignment in that region unlikely so the short sequences downstream of this primer binding region (44 to 97 bp depending on the Eimeria sp.) were not included in subsequent phylogenetic analyses using whole genome sequences.
Regions between rDNA fragments contained frequent indels that made unambiguous alignment of these regions difficult, and the CDS for the three genes contained the majority of the genetic diversity found within the mt genomes. For these reasons, we chose to use concatenated CDS for CytB, COI and COIII (or their corresponding amino acid sequences) as datasets for phylogenetic analyses. The sequence data were thus partitioned into 3 datasets as follows: 1) a global nucleotide sequence data set for all 16 whole genome sequences (after removal of the short regions downstream of the Cocci_MT-WG-R primer); 2) concatenated DNA sequences for the 3 CDS; 3) concatenated amino acid (aa) translations of the 3 CDS.
Data set one, consisting of whole mt genome nucleotide sequences (excluding the short regions downstream of the Cocci_MT-WG-R primer), for all 16 whole genome sequences and data set two, consisting of concatenated CDS were analysed using all three tree-building methods (BI, ML or MP). Selection of the best fit evolutionary model for the BI and ML analyses was evaluated in both MrModeltest v2.3 (Nylander J. A. A. 2004. MrModeltest v2. Program; distributed by the author, Evolutionary Biology Centre, Uppsala University) and MEGA [35]. For the Bayesian analyses, Markov Chain Monte Carlo was performed for 1,000,000 generations with four chains and heated chain temperature of 0.2. The burn-in length was set at 400,000 and subsample frequency of 1000 [32,36]. For the ML analyses, 500 bootstrap replicates were calculated to estimate node support. In the MP analyses, characters were unordered and given equal weight; trees were searched using the branch and bound search algorithm.
Data set three, consisting of concatenated amino acid (aa) translations of the 3 CDS was analysed with the same three tree-building methods. The empirical Jones-Taylor-Thornton (JTT) model of amino acid substitution with gamma distribution frequency (G + F) for all sites by Jones et al. [37] was selected for the ML and BI analyses. Substitution models were assessed in MEGA [35]. Where outgroup rooting was appropriate, the Eimeria magna mitochondrial genome sequence [GenBank: KF419217] was used as the functional outgroup.
Although sequences were truncated to remove the short indel-rich region downstream of the reverse primer for the phylogenetic analyses, complete genome sequence alignments were used within Geneious for calculating the pairwise genetic distances and number of nucleotide differences among the six newly sequenced genome sequences.
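Pairwise identities of this kind can be computed directly from an alignment; a short sketch is given below. Counting positions where only one sequence has a gap as differences is one of several possible conventions and is an assumption of this example rather than the Geneious setting used by the authors.

```python
def pairwise_identity(aligned_seq1, aligned_seq2):
    """Percent identity between two rows of a multiple sequence alignment."""
    assert len(aligned_seq1) == len(aligned_seq2), "rows must be the same length"
    pairs = list(zip(aligned_seq1.upper(), aligned_seq2.upper()))
    matches = sum(a == b and a != "-" for a, b in pairs)
    compared = sum(not (a == "-" and b == "-") for a, b in pairs)
    return 100.0 * matches / compared
```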
Six mt genomes from five Eimeria species infecting turkeys
The six complete mitochondrial genome sequences obtained from direct sequencing of PCR products from five Eimeria spp infecting turkeys varied modestly in their lengths (6165-6238 bp). Pairwise comparisons (Table 2) indicate a high degree of sequence identity among these new genome sequences; 5311 nucleotide positions (84.9% of the aligned sequence lengths) were invariant among all 6 genome sequences. There was no intraspecific variation noted between the two strains of E. gallopavonis (Weybridge and USKS06-01), whose complete mt genomes were identical. The two closely related Eimeria spp. causing 'cecal coccidiosis' in turkeys (i.e. E. adenoeides and E. meleagridis) demonstrated a genetic distance of 1.8% from each other, and each was 3.1% divergent from the two strains of E. gallopavonis (Table 2). The physical form of the mitochondrial genomes was not directly assessed in this study; however, the mt genomes of these Eimeria species must be either linear concatemers or circular to permit successful PCR amplification of near-full-length mt genomes.
Genome organization
Genome content and organisation of all six Eimeria sp. mt genomes consisted of three protein-coding genes (COI, COIII and CytB) interspersed with 15 LSU and 11 SSU rDNA fragments (Figure 1). Pairwise sequence alignments between individual rDNA regions annotated in the Eimeria sp. genomes and the corresponding rDNA fragments of P. falciparum (M76611) identified by Feagin et al. [19] demonstrated pairwise sequence identities that ranged from 68.5% to 93.8%.
Searching conserved regions along the aligned mt genomes in the present study against the annotated P. falciparum mt genome identified three additional regions that are putative rDNA. The first two regions had high sequence identity to a single rDNA of P. falciparum encoding RNA14 (SSU/1) that appears to have been further fragmented on the Eimeria sp. mt genomes; the two resulting smaller fragments were found to map to two widely separated regions on these genomes. The first 29 bp of RNA14 from the P. falciparum mt genome (bp 5576-5548 of M76611) has high pairwise sequence identity (~79%) to a region designated RNA14a on the mt genomes of all Eimeria spp. The following 41 bp of RNA14 from the P. falciparum mt genome (bp 5547-5508 of M76611) has high pairwise sequence identity (75.6%) to a region designated RNA14b on the six mt genomes reported in the present study. The newly annotated rDNA fragment RNA14a (SSU/1a) was found in reverse orientation starting at bp 5141-5110 (varies in each Eimeria sp. mt genome) and RNA14b (SSU/1b) was found in forward orientation starting at bp 3104-3111 (varies in each Eimeria sp. mt genome). The remaining conserved region for which high sequence identity was discovered with the mt genome of P. falciparum corresponded to RNA5 (SSU/9) annotated by Feagin et al. [19]. This putative rDNA fragment was found in reverse orientation starting at bp 6130-6199 (varies in each Eimeria sp. mt genome) and corresponded to bp 4724-4802 on the P. falciparum mt genome. Although the pairwise sequence identity between the complete RNA5 (SSU/9) regions on the Eimeria spp. and P. falciparum mt genomes was only 63.2%, both the 5′ and 3′ ends of these regions were highly conserved (i.e. 80-85% sequence identity in the 20 bp at each end of the region).
The COIII CDS was most divergent (76.3% identical sites across the six mt genome sequences). The COI and CytB CDS showed 81.2% and 81.8% identical sites, respectively. Of the 272 sites demonstrating variation among the 6 COI CDS examined, 239 were synonymous (K S ) changes and 33 were non-synonymous (K A ) changes. The COIII CDS had 179 sites with variation (74 K S and 40 K A ) and the CytB CDS had 197 variable sites (167 K S and 30 K A changes). The CDS were more divergent than the rDNA fragments (80.2% sequence identity over the 3279 bp of the genomes identified as CDS versus 95.9% sequence identity over the 1880 bp identified as rDNA regions). Nucleotide differences and indels were observed within some fragmented rDNA regions but were most commonly observed within intergenic regions (i.e. between regions annotated as CDS or rDNA).
Start codon determinations for COI, COIII and CytB
Start codon assignments were made by comparison with 13 publically available complete mt genomes from various Eimeria species and subsequent confirmation of appropriate open-reading frames. In the six mt genomes obtained in this study, an ATG start codon for the CytB CDS beginning 214 or 215 bp downstream of the start of the Cocci_MT-WG-F primer binding site was shared among all Eimeria spp. for which complete mitochondrial sequences have been obtained. Preceding the ATG start codon was a poly-T-rich 'GTTTATGTTTA' motif that was conserved in all Eimeria spp. of turkeys with the exception of Eimeria meleagrimitis USMN08-01. The latter sequence had a single substitution of 'T' with a 'C' producing a slightly different motif 'GTTTATGTTCA'. A single stop codon, TAA, terminated the CDS for CytB, COI and COIII in all six mt genome sequences. Potential start codons for the COI CDS identified upstream of the highly conserved ' Asn-His-Lys' motif associated with the start of the heme-copper oxidase subunit I core region of COI were numerous for most of the new genome sequences except for E. meleagrimitis that had only 2 ORF's that start upstream of that functionally conserved region c.f. [24]. Of these two potential start codons for COI in the E. meleagrimitis sequence, only one potential start codon was shared among all Eimeria species; this ATD (ATG or ATA or ATT) start codon is located 27 bp upstream of the ' Asn-His-Lys' site in all Eimeria species sequenced to date. The start codon for the COIII CDS was determined to be a TTA codon located 14-20 bp downstream of the LSU/1 (LSUA) region. Use of this conserved start codon produces a COIII product of 252 aa. In all Eimeria species studied thus far there is a poly-A-and poly-T-rich region located upstream of both the COI and COIII start codons.
Phylogenetic analyses
After trimming the alignment of whole genome sequences to remove the short indel-rich region downstream of the Cocci_MT-WG-R primer, the alignment of 16 available mt whole genome sequences used for phylogenetic analyses was 6416 bp in length, including gaps. The general time reversible model with discrete Gamma distribution (GTR + I + G) of nucleotide substitution [38] was determined to be optimal for the BI and ML analyses. Figure 2 illustrates the phylogenetic relationships based on Bayesian inference (BI) and Maximum likelihood (ML) among the 10 publicly available complete mt genome sequences from eimeriid coccidia and the six newly generated complete mt genome sequences from Eimeria spp. infecting turkeys. Phylogenetic relationships inferred using a Maximum parsimony (MP) model are illustrated in Figure 3. The Eimeria magna mt genome sequence was used as a functional out-group in all phylogenetic analyses. Phylogenetic trees generated from aligned concatenated CDS for COI, COIII and CytB are illustrated in Additional file 1: Figure S1 for the BI and ML analyses and Additional file 2: Figure S2 for the MP analysis. Trees generated using concatenated amino acid translations of the CDS matched the trees based on the concatenated CDS dataset under the same phylogenetic inference model (data not shown).
In the BI and ML trees, for global complete mitochondrial nucleotide sequences and the concatenated CDS, all Eimeria species causing 'cecal coccidiosis' in turkeys (i.e. E. meleagridis, E. gallopavonis and E. adenoeides) formed a monophyletic clade that was the sister group to E. meleagrimitis; the latter species infects the intestinal tract of turkeys excluding the ceca. The Eimeria species causing 'cecal coccidiosis' in chickens (i.e. E. tenella and E. necatrix) formed a monophyletic clade that was the sister clade to these four Eimeria species infecting turkeys. In the MP trees based on the same DNA sequences (complete genome or concatenated CDS), Eimeria meleagrimitis was the sister taxon to a monophyletic clade consisting of species causing 'cecal coccidiosis' in chickens and turkeys. In none of the analyses did all Eimeria species infecting turkeys form a monophyletic group; in all phylogenetic analyses E. dispersa branched near the base of the tree and was the sister taxon to all other Eimeria species within the functional ingroup. The Eimeria spp infecting chickens, excluding E. tenella and E. necatrix, formed a monophyletic clade in all analyses and all datasets (DNA- and AA-based); however, the branching order within this monophyletic clade varied among analyses. All of these parasites (i.e. E. acervulina, E. brunetti, E. mitis, E. praecox and E. maxima) infect the intestinal tract of chickens outside of the cecal pouches.

Figure 2. Bayesian inference and maximum likelihood phylogenetic reconstructions using mitochondrial genome sequences of 16 Eimeria species. The analyses included 5 species infecting turkeys and 7 species infecting chickens and used Eimeria magna (a parasite of rabbits) as the functional outgroup to root the tree. Node support is indicated for BI (posterior probability, first number) and for ML (% bootstrap, second number) for all nodes with greater than 0.5 posterior probability. Neither the Eimeria species infecting chickens nor the Eimeria species infecting turkeys formed monophyletic groups. Both the BI and ML analyses supported monophyly of the 5 Eimeria species of chickens that do not usually invade the cecal pouches, but branching order among these parasites was poorly resolved in both.
Discussion
The six newly reported mt genome sequences obtained in this study varied modestly in genome length (6165-6238 bp) and were comparable to the lengths (6148 bp to 6408 bp) of the mt genomes of Eimeria spp infecting chickens [22-24,39] and rabbits [40]. Genome organization of all mt genome sequences is highly conserved among eimeriid coccidia; however, eimeriid mt genome organization differs markedly from that of other apicomplexan mt genomes, e.g. [12,17,20]. No sequence differences (100% sequence identity) were recorded between the two strains of Eimeria gallopavonis (i.e. Weybridge strain and USKS06-01 strain) analysed in this study, despite their being isolated from different geographical regions. The two E. tenella sequences, isolated from two geographical areas (Japan and China), also did not differ in their sequences [22,23]. In comparison, the two E. mitis isolates from the US and China showed sequence differences at 32 positions; perhaps the longer domestication of the chicken host has permitted greater genetic variation in its parasites compared to the domesticated turkey.
The number, direction and lengths of the three CDS were identical in all six mt genome sequences obtained in the present work. Although the COI, COIII and CytB CDS have been annotated inconsistently in the publically available mt genome sequences, alignment of all 16 complete mt genomes from 13 Eimeria species demonstrated conserved CDS using the start codons identified by Ogedengbe et al. [24] for E. mitis USDA50. An assessment of the three CDS across all six genome sequences yielded large numbers of nucleotide substitutions scattered within each gene.
Fragmented rDNA (from 16 to 188 bp in length) annotated in the present study were more highly conserved than the CDS, possibly due to functional constraints in the former. A single rDNA fragment (SSUA (SSU/4)) was found upstream of the CytB and COI genes, fifteen rRNA fragments were located between the COI and COIII genes, and the ten remaining rRNA fragments were found between COIII and the end of the mt genome. In addition to rDNA fragments identified in P. falciparum that had previously been annotated as putative homologs on Eimeria sp. mt genomes (see [24]), three regions of each Eimeria sp. mt genome had high sequence identities with rDNA fragments encoding RNA14 (SSU/1) or RNA5 (SSU/9) in P. falciparum (see [19]). A nearly complete rDNA encoding RNA5 (SSU/9) was located near the 3′ end of each genome and includes the binding site for the Cocci_MT-WG-R primer. The remaining two regions had high sequence identities to two portions of the rDNA encoding RNA14 (SSU/1) in P. falciparum (see [19]). However, this rDNA fragment appears to have been further fragmented on the Eimeria sp. mt genomes, and the two resulting smaller fragments (29 bp and 41 bp) were found to map to two widely separated regions on these genomes, which we annotated as RNA14a (SSU/1a) and RNA14b (SSU/1b), respectively, on all six mt genomes reported in the present study.

(Figure legend fragment: Eimeria adenoeides (Guelph strain) - KJ608415; Eimeria meleagridis (USAR97-01 strain) - KJ608418.)
All putative ribosomal fragments (fragmented LSU and SSU rDNA) were highly conserved among all Eimeria spp [22][23][24]39,40]; present study. These putative rDNA fragments showed high sequence identity (from 62% to 93.8% pairwise identity) to functionally annotated rRNA fragments of P. falciparum M76611 [19]. Occurrence of fragmented and incomplete rRNA genes is not an uncommon phenomenon in apicomplexan parasites; similar fragmented rRNA genes have been reported in all other apicomplexan mt genomes examined to date e.g. [12,17,19,22,23]. Although three additional conserved regions were annotated as putative rDNA fragments in the present study, other highly conserved regions in the six genome sequences remain unannotated but these comparatively conserved regions may represent as yet uncharacterized rDNA fragments.
Phylogenetic analyses under Bayesian, Maximum likelihood and Maximum parsimony evolutionary models using complete mt sequences or concatenated sequences from the three CDS from each mt genome did not support the conclusion that all Eimeria species infecting turkeys evolved from a common ancestor. Instead, although many turkey coccidia apparently share a common ancestor, at least one, E. dispersa was found branching as the sister taxon to all other Eimeria spp in the functional ingroup. It is possible that E. dispersa may not have evolved within turkeys but rather arrived in that host via a host switch from some other avian host. Eimeria dispersa has been shown to infect both Bobwhite quail (Colinus virginianus) and turkeys [41], and perhaps other hosts as well [42]. In addition, the mt genome sequences suggest that the cecal coccidia of chickens (E. tenella and E. necatrix) are distantly related to the other Eimeria of chickens and are more closely related to some of the Eimeria spp that infect turkeys; this had been previously suggested on biological [43] and molecular [44] grounds. Analyses of the mt genome sequence data support the suggestion that Eimeria spp in chickens represent two distinct ancestral colonisations of the intestine. In one, E. tenella and E. necatrix, that appear closely related to a number of coccidia infecting turkeys, invaded the ceca of chickens; the remaining five Eimeria spp. infecting chickens are closely related using nu 18S rDNA [24,43,44], partial mt COI sequences [24,44] or complete mt genome sequences (current study) and all of these species colonize regions of the intestine excluding the cecum.
Complete mt genome sequences could easily differentiate closely related parasites. For example, the pairwise sequence identity between E. adenoeides and E. meleagridis of turkeys and between E. tenella and E. necatrix of chickens was 98.2% and 98.4%, respectively. Interestingly, the partial COI sequences of E. adenoeides (KCH strain) and E. adenoeides (KR strain) [8] are 100% identical to the COI CDS of E. adenoeides (Guelph strain) and E. meleagridis (USAR97-01 strain), respectively, suggesting that the KCH and KR strains of E. adenoeides of Poplstein and Vrba [8] are distinct species rather than strains of a single species.
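As a minimal illustration of the kind of pairwise comparison described above, the sketch below computes percent identity between two sequences taken from the same alignment. The function name, gap handling and the toy sequences are illustrative assumptions, not the exact procedure or data used in the study.

```python
# Minimal sketch: percent identity between two aligned, equal-length sequences.
def pairwise_identity(aligned_a: str, aligned_b: str) -> float:
    """Return percent identity over aligned sequences (gap-only columns skipped)."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("sequences must come from the same alignment")
    compared = matches = 0
    for x, y in zip(aligned_a.upper(), aligned_b.upper()):
        if x == "-" and y == "-":      # skip columns that are gaps in both sequences
            continue
        compared += 1
        if x == y and x != "-":
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# Toy example (not real Eimeria sequences):
print(round(pairwise_identity("ATGCATGCAT", "ATGCATGGAT"), 1))  # 90.0
```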
Conclusions
The mt genomes of Eimeria species infecting turkeys are similar to those of all other Eimeria species with respect to genome size, organisation, start codon positions and overall base composition. Complete mitochondrial genome sequences possess sufficient sequence variability for differentiating Eimeria species infecting turkeys or chickens and, in the three cases where more than one complete mt genome is available from a single species (i.e. E. mitis, E. tenella and E. gallopavonis), the intraspecific variation between mt genomes was much smaller (0-0.5%) than the genetic distance between that species and the most closely related Eimeria species (1.6-3.2%). Genetic variability is concentrated within the three CDS encoding COI, COIII and CytB. This makes these mt genes of Eimeria spp. suitable (either as individual genes or as concatenated sequences) for species delimitation studies and phylogenetic analyses without the confounding presence of paralogous genome copies encountered with nu rDNA sequences (e.g. [24,45]). The nature of the mt genome sequences, and particularly the CDS regions, of Eimeria spp. makes the mt genome highly suited for the development of diagnostic assays as well as, potentially, genetic markers for molecular epidemiology and phylogenetics of coccidia. | 6,806.8 | 2014-07-17T00:00:00.000 | [
"Biology"
] |
HITSZ-ICRC: A Report for SMM4H Shared Task 2019-Automatic Classification and Extraction of Adverse Effect Mentions in Tweets
This is the system description of the Harbin Institute of Technology Shenzhen (HITSZ) team for the first and second subtasks of the fourth Social Media Mining for Health Applications (SMM4H) shared task in 2019. The two subtasks are automatic classification and extraction of adverse effect mentions in tweets. The systems for the two subtasks are based on bidirectional encoder representations from transformers (BERT) and achieved promising results. Among the systems we developed, the best F1-score for subtask 1 was 0.6457; for subtask 2, the best relaxed F1-score and the best strict F1-score were 0.614 and 0.407, respectively. Our system ranked first among all systems on subtask 1.
Introduction
Adverse drug reaction (ADR), namely adverse drug effect, is one of the leading causes of post-therapeutic deaths (Saha, Naskar, Dasgupta, & Dey, 2018). Nowadays, more and more people share information on social platforms, including health information such as drugs and their ADRs. Twitter, as one of the most popular social platforms, has attracted a great deal of attention from researchers in the medical domain. Some methods, such as HTR_MSA (Wu et al., 2018) and Neural DrugNet (Nikhil & Mundra, 2018), have been proposed to detect tweets mentioning ADRs and medicine intake. In order to facilitate the use of social media for health monitoring and surveillance, the health language processing lab at the University of Pennsylvania has organized the Social Media Mining for Health Applications (SMM4H) shared task four times. In 2019, the fourth SMM4H shared task comprised four subtasks: (1) automatic classification of adverse effect mentions in tweets, (2) extraction of adverse effect mentions, (3) normalization of adverse drug reaction (ADR) mentions, and (4) generalizable identification of personal health experience mentions (Weissenbacher et al., 2019).
We participated in subtask 1 and subtask 2, and developed two systems based on bidirectional encoder representations from transformers (BERT) (Devlin, Chang, Lee, & Toutanova, 2018), one for each subtask. The system for subtask 1 achieved the best F1-score of 0.6457, ranking first. Among the systems we developed for subtask 2, the best relaxed F1-score and the best strict F1-score were 0.614 and 0.407, respectively.
Task 1: Automatic Classifications of Adverse Effect Mentions in Tweets
Task 1 was formulated as follows: given a tweet, determine whether it mentions a drug adverse effect, labelling the tweet 1 if it does and 0 otherwise. The organizers provided a training dataset consisting of 25,678 tweets for all participants to develop their systems, and a test dataset consisting of 4,575 tweets to evaluate the performance of all systems.
Task 2: Extraction of Adverse Effect Mentions
Task 2, a follow-up of Task 1, was formulated as follows: given a tweet, identify the text spans of adverse effect mentions. The challenge of task 2 is to distinguish adverse effect mentions from similar non-ADR expressions. A training set of 3,225 tweets annotated with 1,830 adverse effect mentions was provided for system development, and a test set of 1,573 tweets was provided for system evaluation. The statistics of the training and test datasets are listed in Table 2.
Methods
Our systems for both task 1 and task 2 were based on BERT, an unsupervised language representation method that obtains deep bidirectional representations of sentences from free text by jointly conditioning on both left and right context in all layers. Below we describe in detail the methods for the two tasks.
Task 1: BERT and BERT+Knowledge Base
In this task, we designed two methods, BERT and BERT+Knowledge Base. The model architecture is shown in Fig. 1.
BERT:
Following the original BERT set-up, we took the final hidden state of the first input token [CLS] as the representation of a tweet, and then applied a softmax layer over the output to classify the tweet. Denoting the representation vector as $h$, the predicted label $\hat{y}$ is computed as $\hat{y} = \mathrm{softmax}(W h + b)$, where $W$ and $b$ are the parameters of the fully connected layer.
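The following minimal sketch shows this classifier using the Hugging Face `transformers` package. The model name, label count and the example tweet are placeholder assumptions, not the team's actual configuration.

```python
# Sketch of the Task 1 classifier: take the final hidden state of [CLS],
# apply a fully connected layer and softmax.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertTweetClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.fc = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]           # representation of [CLS]
        return torch.softmax(self.fc(cls), dim=-1)  # predicted label distribution

tok = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["oxycodone gave me a terrible rash"], padding=True,
            truncation=True, return_tensors="pt")
probs = BertTweetClassifier()(batch["input_ids"], batch["attention_mask"])
```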
BERT+Knowledge Base: Inspired by Li et al. (2018), we tried to combine the BERT output with features from knowledge bases to improve system performance. We first extracted the drugs appearing in the training dataset that are listed in SIDER 4.1 (a side effect resource containing information on marketed medicines and their recorded adverse drug reactions), obtaining a drug lexicon of 538 drugs. We then extracted the corresponding adverse effects in SIDER according to the drug lexicon, obtaining 4,411 <drug, ADR> pairs. For each tweet, a binary feature was built according to the presence of a <drug, ADR> pair. We incorporated this binary feature into the representation vector of the tweet: the final representation of a tweet is the concatenation of its BERT output and the lexicon feature. A fully connected layer then fuses the information from the different feature spaces, and a softmax layer is applied on top to classify tweets. Denoting the BERT output as $h_1$ and the lexicon feature as $h_2$, the predicted label $\hat{y}$ of a tweet is computed as $\hat{y} = \mathrm{softmax}(W[h_1; h_2] + b)$, where $W$ and $b$ are the parameters of the fully connected layer. The loss function for training both models is the cross-entropy $L = -\sum_{i=1}^{N}\sum_{j=1}^{C} y_{ij}\log \hat{y}_{ij}$, where $y_{ij}$ and $\hat{y}_{ij}$ are the gold and predicted labels for the $i$-th sample in the $j$-th label category, $N$ is the number of samples in a batch, and $C$ is the number of label categories.
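A minimal sketch of the feature fusion is given below. The tiny in-code lexicon and the string-matching rule stand in for the SIDER-derived <drug, ADR> pairs; they are assumptions for illustration only.

```python
# Sketch of BERT+Knowledge Base: a binary feature marks whether the tweet
# contains any <drug, ADR> pair; it is concatenated with the [CLS] vector
# before the fully connected layer.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

DRUG_ADR_PAIRS = {("oxycodone", "nausea"), ("pristiq", "insomnia")}  # placeholder lexicon

def lexicon_feature(tweet: str) -> float:
    text = tweet.lower()
    return 1.0 if any(d in text and a in text for d, a in DRUG_ADR_PAIRS) else 0.0

class BertKBClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.fc = nn.Linear(self.bert.config.hidden_size + 1, num_labels)

    def forward(self, input_ids, attention_mask, kb_feature):
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        fused = torch.cat([cls, kb_feature.unsqueeze(-1)], dim=-1)  # concat feature spaces
        return torch.softmax(self.fc(fused), dim=-1)

tok = BertTokenizer.from_pretrained("bert-base-uncased")
tweets = ["oxycodone is giving me nausea all day"]
batch = tok(tweets, padding=True, truncation=True, return_tensors="pt")
kb = torch.tensor([lexicon_feature(t) for t in tweets])
probs = BertKBClassifier()(batch["input_ids"], batch["attention_mask"], kb)
```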
Task 2: BERT and BERT+CRF
In task 2, we still took BERT as the basic architecture and designed two methods. The model architecture is shown in Fig. 2.
BERT: This method is very similar to the first method in Task 1. The difference is that we feed the final hidden representation of each token into a classification layer over the NER tag set, because we need the predicted tag of each input token.
BERT+CRF: This method builds on the previous one. With the plain BERT method, the predictions are not conditioned on the surrounding predictions. A CRF layer has a state transition matrix as parameters (Huang, Xu, & Yu, 2015); with such a layer, the system can efficiently use past and future tags to predict the current tag. Therefore, we applied a CRF layer on top of the classification layer. Denoting the output sequence of the classification layer as $H = [h_1, h_2, \ldots, h_n]$, the score of a tag sequence $y = [y_1, y_2, \ldots, y_n]$ is $s(H, y) = \sum_{t=1}^{n} \left( h_{t, y_t} + A_{y_t, y_{t+1}} \right)$, where $h_{t, y_t}$ is the score of predicting tag $y_t$ at the $t$-th position and $A_{y_t, y_{t+1}}$ is the score of transitioning from $y_t$ to $y_{t+1}$; the predicted tag sequence is the one with the highest score.
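A minimal sketch of such a tagger is shown below. It assumes the third-party `pytorch-crf` package (torchcrf) for the CRF layer; the BIO tag set size, model name, and the handling of special tokens are illustrative assumptions.

```python
# Sketch of BERT+CRF for Task 2: per-token emission scores from BERT feed a
# CRF layer that scores whole tag sequences.
import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf

class BertCrfTagger(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_tags=3):  # B, I, O
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.emission = nn.Linear(self.bert.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        emissions = self.emission(hidden)           # per-token tag scores
        mask = attention_mask.bool()
        if tags is not None:                        # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)  # inference: best tag sequence
```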
Experiments
For task 1, we compared BERT and BERT+Knowledge Base with two classic deep learning methods, TextCNN (Kim, 2014) and LSTM (Hochreiter & Schmidhuber, 1997), and also investigated the effect of different BERT models, including the publicly released BERT model (Devlin et al., 2018; https://github.com/google-research/bert), denoted BERT_noRetrained, and a BERT model further retrained on a large-scale unlabeled tweet corpus starting from the previous model, denoted BERT_Retrained. The unlabeled corpus consisted of 1,500,000 tweets crawled from Twitter according to 150 drug names collected from the training set. For task 2, we only used the retrained BERT model.
In our experiments, we set the batch size to 32 and the learning rate to 5e-5 when training all models. The number of epochs was set to 8 for BERT retraining and 20 for the other models. The dimension of the word embeddings used in TextCNN and LSTM was set to 200. We split out about 10% of the training set as a validation set for parameter optimization. The performance of all methods for the two tasks was measured by precision, recall and F1-score, calculated with the official tools provided by the organizers. For task 2, there were two criteria for system evaluation: relaxed and strict. Table 3 and Table 4 show the performance of our systems for task 1 and task 2 on the test set, respectively.
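For illustration, the sketch below computes span-level precision, recall and F1 under both criteria. It assumes the common convention that strict matching requires exact span boundaries while relaxed matching accepts any overlap; the official SMM4H tools remain the reference implementation.

```python
# Illustrative precision/recall/F1 for span extraction under strict and relaxed matching.
def span_prf(gold, pred, relaxed=False):
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    match = lambda a, b: overlaps(a, b) if relaxed else a == b
    tp_pred = sum(any(match(p, g) for g in gold) for p in pred)
    precision = tp_pred / len(pred) if pred else 0.0
    recall = (sum(any(match(g, p) for p in pred) for g in gold) / len(gold)) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold_spans = [(10, 18)]            # character offsets of an annotated ADR mention
pred_spans = [(12, 18)]            # a predicted span overlapping but not exact
print(span_prf(gold_spans, pred_spans, relaxed=False))  # strict: no match
print(span_prf(gold_spans, pred_spans, relaxed=True))   # relaxed: overlap counts
```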
Results
For task 1, among the systems we developed, "BERT_Retrained" achieved the best F1-score of 0.6457 and the best recall of 0.6885 on the test set, while "BERT_Retrained+Knowledge Base" achieved the best precision of 0.6916 on the test set. Compared with TextCNN and LSTM on the validation set, the BERT-based methods showed much better performance. As officially reported, "BERT_Retrained" ranked first among all systems.
For task 2, among the systems we developed, "BERT_Retrained+CRF" achieved the best relaxed F1-score of 0.614 and the best strict F1-score of 0.407, outperforming "BERT_Retrained" by 0.024 in relaxed F1-score and 0.060 in strict F1-score.
Discussion
For Task 1, the distribution of labels 0 and 1 is highly imbalanced: 90% of samples are negative and 10% are positive. When we used CNN and LSTM without addressing this imbalance, their performance was quite poor and most tweets were classified as 0. To balance the numbers of positive and negative samples, we randomly divided the negative samples into five equal parts and combined each part with all positive samples to form a new training dataset. After this operation, we obtained five balanced training datasets. We then trained five models on them and ensembled the five models. The ensembled model brought an increase of about 8% in F1-score. However, when applying this operation to BERT and "BERT_Retrained", we obtained little increase in F1-score. By analyzing the results of "BERT_Retrained", we found that the main error is that ADR mentions cannot be completely distinguished from mentions of the reason for taking a drug. For example, in "oxycodone just took my headache away so fast", "headache" is the reason for taking oxycodone, not an adverse effect of oxycodone, yet the tweet was wrongly classified as 1.
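A minimal sketch of the balancing and ensembling strategy described above is given below. The helper names and the probability-averaging ensemble are assumptions for illustration; `predict_proba` is a placeholder for whichever trained classifier is used.

```python
# Split negatives into five folds, pair each fold with all positives, train one
# model per balanced set, then ensemble by averaging predicted probabilities.
import random

def balanced_subsets(positives, negatives, k=5, seed=42):
    rng = random.Random(seed)
    neg = negatives[:]
    rng.shuffle(neg)
    folds = [neg[i::k] for i in range(k)]          # five roughly equal parts
    return [positives + fold for fold in folds]    # five balanced training sets

def ensemble_predict(models, tweet, predict_proba):
    probs = [predict_proba(m, tweet) for m in models]   # P(label = 1) per model
    return int(sum(probs) / len(probs) >= 0.5)          # average, then threshold
```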
Implicit adverse effect mentions are also difficult to identify, for example "pristiq and im livin in a cold world" and "uhh my gabapentin does went up today and I don't even know what planet i'm on. i hope i adjust to this quickly ... #endometriosis".
For task 2, because the CRF layer takes full advantage of the relations between neighboring labels, "BERT_Retrained+CRF" could avoid invalid tag sequences such as "I-B-B-O-O". The main errors appearing in task 2 are the same as in task 1.
For further improvement, a possible direction is dealing with task 1 and task 2 at the same time using joint learning methods.
Conclusion
In this paper, we developed systems for task 1 and task 2 of the SMM4H shared task in 2019. Our systems were based on BERT and achieved promising results, especially ranking first on task 1. | 2,451.4 | 2019-08-01T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Combined RIS and EBG Surfaces Inspired Meta-Wearable Textile MIMO Antenna Using Viscose-Wool Felt
In this paper, we present a textile multiple-input–multiple-output (MIMO) antenna designed with a metamaterial-inspired reactive impedance surface (RIS) and electromagnetic bandgap (EBG) using viscose-wool felt. A rectangular RIS was used as a reflector to improve the antenna gain and bandwidth and to address a well-known, crucial challenge: maintaining gain while reducing mutual coupling in MIMO antennas. The RIS unit cell was designed to achieve inductive impedance at the center frequency of 2.45 GHz with a reflection phase of 177.6°. An improved bandwidth of 170 MHz was achieved by using a square-shaped RIS under a rectangular patch antenna, and this also helped to attain an additional gain of 1.29 dBi. When the antenna was implemented as MIMO, a split ring resonator backed by a strip-line-type EBG was used to minimize the mutual coupling between the antenna elements. The EBG offered a sufficient bandgap region from 2.37 GHz to 2.63 GHz. Prior to fabrication, bending analysis was carried out to validate the performance of the reflection coefficient (S11) and transmission coefficient (S21). The results of the analysis show that bending conditions have very little impact on antenna performance in terms of S-parameters. The effect of the strip-line-supported SRR-based EBG was further analyzed with the fabricated prototype to clearly show the advantage of the designed EBG towards mutual coupling reduction. The designed MIMO-RIS-EBG array-based antenna revealed an S21 reduction of 9.8 dB at 2.45 GHz, with an overall S21 of <−40 dB. The results also indicated that the proposed SRR-EBG minimized the mutual coupling while keeping the mean effective gain (MEG) variations at <3 dB in the desired operating band. The specific absorption rate (SAR) analysis showed that the proposed design is not harmful to the human body, as the values are below the regulated SAR limits. Overall, the findings in this study indicate the potential of the proposed MIMO antenna for microwave applications in a wearable format.
Introduction
Antenna design for on-body applications has been popular in the past few decades. On-body antennas are mainly known for wireless body area networks (WBAN) and are designed for various applications such as emergency rescue services, global positioning systems (GPS) [1] and health monitoring [2]. The performance of a WBAN antenna is crucial because the antenna is placed close to the human body. As a result, several considerations usually apply to antennas for on-body applications: (1) deformation analysis is carried out to check the performance of the antenna under various bending conditions (if the antenna is designed using flexible material), (2) specific absorption rate analysis is performed to verify the safety of the antenna's electromagnetic (EM) effect on the human body, and (3) antenna characteristics such as S11 and gain are analysed under on-body conditions.
The choice of material becomes essential whenever an antenna is designed using flexible materials. Using polymer as a flexible material has been a common practice in wearable antenna designs [3,4]. The comprehensive review conducted in [4] revealed that the EM radiation characteristics of flexible polymer-based antennas are strongly affected when they undergo bending. In general, antenna designs consist of a dielectric material as the substrate and a conductive material as the patch that serves as the radiating element. Metals mixed with fabrics and conductive inks are some examples of conductive materials adopted in previous works [5]. Polymers have been widely used as conductive materials in antenna design, in the form of conductive threads [6], conductive polymers [7] and conductive textiles [8]. In addition, polymers are also commonly used as the dielectric material or substrate in antenna design. In [9], viscose-wool felt was adopted as the antenna substrate since it provides easier fabrication with sufficient flexibility while enabling strong adhesion with the conductive textile Shieldit Super TM. Apart from antenna design, polymers are also widely used in applications such as energy harvesting [10], supercapacitors [11][12][13], tissue engineering [14], immunosensors [15] and gas sensors [16]. As for on-body applications, flexible wideband antennas based on polymer technology have been proposed for medical imaging systems [17,18].
Recent research has shown that multiple-input-multiple-output (MIMO) antennas have also been designed for WBAN applications to overcome the multipath fading that can affect on-body communication links [19][20][21]. Reflections or scatterings around the human body or the surrounding environment cause multipath fading. As a result, the reliability of multi-signal communication and the performance of a WBAN system are reduced [22]. A diversity technique such as MIMO is needed to maintain effective communication under the influence of multipath fading. Therefore, designing a MIMO antenna with low mutual coupling between antenna elements becomes significant for overcoming the multipath fading issue. In the last decade, a few works on MIMO wearable antennas have been reported. In [23], a dumbbell-shaped stub on the ground was used in a wearable MIMO antenna to reduce mutual coupling. For wearable 5G devices, a folded-dipole MIMO antenna was developed in [20]. Likewise, an ultra-wideband (UWB) MIMO antenna was designed for wearable devices with C-shaped slots to improve isolation [21]. Although these works designed MIMO antennas, the antennas were not directly intended for on-body applications and the materials used are not flexible. The design and analysis of a MIMO wearable antenna can be found in [24], which reported important on-body results such as bending and specific absorption rate (SAR). Although that design outperforms other related works in terms of isolation with a limited gap between elements, the MIMO was not implemented with a common ground. MIMO designs need a single ground plane to ensure the system has a common reference level (zero for ground) so that all the signals in the system can be interpreted properly [25].
The use of metamaterials or metasurfaces for on-body antenna design has attracted interest as a way to attain various performance improvements. Antenna characteristics such as gain, bandwidth and directivity are usually improved with the use of metamaterials. For instance, the artificial magnetic conductor (AMC) is used to improve gain and bandwidth [1,26]. In [27], a via-less EBG was designed for a wearable antenna to increase the antenna gain and front-to-back ratio (FBR). However, few works have adopted metamaterials to solve the mutual coupling issue in flexible/textile MIMO antennas. A recently reported work [28] adopted an electromagnetic bandgap (EBG) to improve the isolation between dual-band MIMO antennas, but that work lacks analyses such as deformation and SAR examination.
This research work presents a wearable textile MIMO antenna featuring two types of metasurface. First, a reactive impedance surface (RIS) array was designed to improve the antenna bandwidth and gain. Then, a split ring resonator (SRR) backed with strip line based EBG was implemented to reduce the mutual coupling between the multiple antennas. To the best of the authors' knowledge, the use of different types of metasurface in a MIMO wearable antenna has never been investigated, especially when flexible materials are used. The metasurfaces and the antenna were designed using viscose-wool felt. The following sections present a comprehensive insight into the proposed work's design stages.
Flexible Polymer-Based Meta-Wearable Antenna Design
Polymer-based flexible material was adopted for the proposed meta-wearable antenna. Flexible polymers have been studied in many recent antenna research works [4]. The three main components of this work, namely the RIS, the EBG and the antenna, were all designed using a flexible polymer. Shieldit Super TM with a thickness of 0.17 mm and a conductivity of 1.18 × 10^5 S/m was used as the ground plane of the RIS structure, the EBG structure and the radiator. The commercially available Shieldit Super TM is made from a rugged rip-stop polyester substrate with conductive nickel and copper plating. The other side of the sheet is coated with a non-conductive hot melt adhesive, which ensures the sheet is easily ironed onto textile substrates. Meanwhile, a viscose-wool felt with a thickness of 3 mm, a dielectric constant of 1.44 and a loss tangent of 0.044 was employed as the substrate. Existing EM-related works using this felt with Shieldit Super TM have shown good performance, where the simulated and measured results were approximately the same [4,9,18]. The felt consists of 70% wool and 30% viscose, which forms a good composition of fibers with a density of 0.25 g/cc. This property ensured the Shieldit Super TM can be easily ironed and attached to the viscose-wool felt. Apart from that, it meets British Standard 4060 for pressed wool felts for reliability and quality tests, thus it was adopted as the substrate for the metasurfaces and the antenna design. Computer Simulation Technology (CST) Microwave Studio Suite (MWS) was used to model and simulate the metasurfaces and the meta-inspired antenna. The analyses of these structures are reported in the following subsections.
Reactive Impedance Surface Design with Rectangular Patch Antenna
In this work, a square-shaped RIS unit cell [29,30] with a dimension of a × a was modeled and simulated. The square shape was adopted due to its simplicity in design and fabrication of the antenna using flexible materials. The optimized dimension values are a = 18 mm and a gap between unit cells of g = 3 mm. As shown in Figure 1a, the RIS unit cell was backed by a perfect electric conductor (PEC) and interacted with a transverse electromagnetic (TEM) wave from the +z direction, establishing PEC and perfect magnetic conductor (PMC) boundaries perpendicular to the incident electric (E) and magnetic (H) fields. The resonant frequency of the RIS substrate is f_RIS = 4.9 GHz, at which point the substrate acts like a PMC (open circuit). The RIS acts as an inductor below this resonance frequency. In particular, as shown in Figure 1b, at 2.45 GHz the RIS acts as an inductor with a reflection phase of 177.6°. In this range, the surface can store magnetic energy, and this magnetic energy compensates for the electric energy associated with a patch antenna. Figure 2 presents the structure of a grounded dual-layer substrate with similar relative permittivity and height. We adopted a rectangular patch antenna in this design, while the proposed RIS metasurface was modeled as an array on top of the lower layer, i.e., at the interface between both substrates. The coaxial cable was connected at the edge of the line, whose width and length were set to match the antenna at 50 Ohm. These two parameters and the patch length were optimized to increase the gain, widen the bandwidth and miniaturize the antenna size at 2.45 GHz. We optimized the patch antenna and RIS dimensions to attain the best performance of the MIMO antenna design, which is described in the next section. Table 1 lists the optimized dimensions of the patch antenna.
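For orientation, the sketch below estimates starting dimensions for a rectangular patch at 2.45 GHz using the standard transmission-line model from textbook formulas, with the substrate values quoted earlier (eps_r = 1.44, h = 3 mm). This is only an initial estimate and not the authors' full-wave optimization in CST.

```python
# Rough rectangular-patch starting dimensions at 2.45 GHz (transmission-line model).
import math

c = 3e8                      # speed of light, m/s
f = 2.45e9                   # design frequency, Hz
eps_r, h = 1.44, 3e-3        # relative permittivity, substrate height (m)

W = c / (2 * f) * math.sqrt(2 / (eps_r + 1))                     # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
     ((eps_eff - 0.258) * (W / h + 0.8))                          # fringing length extension
L = c / (2 * f * math.sqrt(eps_eff)) - 2 * dL                     # patch length

print(f"W ≈ {W*1e3:.1f} mm, L ≈ {L*1e3:.1f} mm, eps_eff ≈ {eps_eff:.2f}")
# Yields a length close to the ~48 mm quoted later for the patch without RIS.
```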
Electromagnetic Band-Gap Design
The EBG unit cell simulation was conducted using the Eigenmode Solver in CST MWS. The dispersion diagram method recommended in [31] was used to examine the properties of the EBG unit cell. We chose the SRR structure as it is via-less, thus making it easier for fabrication and integration with viscose-wool felt. The EBG structure with vias may increase the fabrication complexity when using textile-based materials. However, a via-less EBG without any splits on the structure could increase the frequency of mode I [27]. Therefore, the SRR-based EBG was implemented to maintain mode I at a lower frequency and mode II at a higher frequency to obtain a sufficient stop band. Figure 3 shows the unit cell structure of the EBG. The two split rings modelled in the SRR structure are capable of controlling mode I and mode II of the EBG. The parameters of the designed EBG are shown in Table 2. The EBG unit cell simulation was essential to ensure the desired stop band or bandgap region is suitable for the developed MIMO antenna. Figure 4 shows the dispersion diagram of the Brillouin Triangle (Γ-X-M) [27] that corresponds to the Eigenmode simulation of the EBG unit cell shown in Figure 3. Mode I and II are the fundamental modes of transverse magnetic (TM) and the higher mode of transverse electric (TE) polarized waves, respectively. The black dotted lines represent the light lines (no dispersion case). The EBG characteristic is obtained between mode I and mode II under the graph area of the light lines. From Figure 4 it can be seen that the bandgap region obtained is from 2.37 GHz to 2.63 GHz. This bandgap is sufficient for the operating frequency of the MIMO antenna, where the operating frequency range is from 2.4 GHz to 2.5 GHz. The EBG characteristic was expected to reduce the mutual coupling of the MIMO antenna; in other words, the proposed EBG could reduce the S12 or S21 magnitude.
MIMO Antenna Design Geometry and Configurations
The MIMO-RIS-EBG antenna design flow is illustrated in Figure 5. Since the MIMO antenna consists of two antenna elements, the overall antenna size was increased to Xs × Ys, where Xs = 190 mm and Ys = 104 mm. Apart from this, Lp was fine-tuned to 43 mm to obtain optimum performance in terms of S11 when the antenna works as MIMO. To enable a strip line for the EBG, a slot was created on the ground plane with a size of Xslot × Yslot, where Xslot = 18 mm and Yslot = 99 mm. The EBG array was placed between the antenna elements as shown in Figure 4. To accommodate the EBG array, the RIS array that overlaps with the EBG substrate was removed, as depicted in Figure 4e. Although the bottom layer of the EBG has a strip line, this should not create a direct split on the antenna's ground plane. Therefore, a common ground plane was ensured at the edge of the strip line as shown in Figure 4f. This is because in a real system the signal should have a single/common ground (GND). Certainly, a direct split can improve the isolation of the antenna elements, but this is not a recommended practice [25].
The steps used to fabricate the prototype are shown in Figure 6. These steps were adopted from the literature that used similar polymer and conductive materials [9,32]. After finalizing the modeling in the simulation, the structures were printed using computer aided design (CAD) software. The dielectric polymer material (viscose-wool felt) and conductive material (Shieldit Super TM) of the prototype were then cut. An iron with medium heat was then used to paste the Shieldit Super TM onto the viscose-wool felt. Alternatively, the Shieldit Super TM can be sewn to the viscose-wool felt; this could ensure the bonding between them remains strong even after washing or repeated bending. The final fabricated prototype is shown in Figure 7. The figure clearly shows the layers of the antenna: the top layer with the patch antenna and EBG array (no patch is underneath this layer); the middle layer that consists of the RIS array; and the bottom layer that consists of the strip line associated with the EBG array of the top layer.
Results and Discussion
This section presents the related results at each stage of the design. First, the simulated results in terms of S11 and gain are presented for the single element patch antenna developed with the RIS. The rest of the section discusses the MIMO-RIS-EBG antenna results in various terms such as S-parameter, gain, radiation pattern and mutual coupling analysis.
Advantages of RIS for Patch Antenna
From Figure 8, it is evident that the S11 bandwidth of a single patch antenna with RIS is 2.292 GHz-2.632 GHz (340 MHz). Meanwhile, the operating bandwidth without RIS is 2.354 GHz-2.535 GHz (181 MHz). An additional bandwidth of almost 170 MHz is obtained with the RIS. Therefore, it is evident that the use of RIS provides significant benefits in terms of bandwidth enhancement.
Apart from this, the RIS also gives advantages in terms of size reduction. Without RIS, the length of the antenna Lp was 48 mm. With the RIS, Lp could be reduced to 39.5 mm. A size reduction of approximately 18% was attained with the use of RIS.
The radiation pattern is another important result that was investigated to see the advantages of using RIS. Figure 9 shows that the gain of the antenna with RIS is greater than the gain of the antenna without RIS. The attainable antenna gain without RIS is 4 dB, while the inclusion of the RIS layer provided an improved gain of 5.29 dB. This attribute was mainly contributed by the RIS layer acting as an inductor at 2.45 GHz: the RIS surface stored magnetic energy, and this magnetic energy compensated for the electric energy associated with the patch antenna. This helped the EM radiation be further reflected toward the +z direction while the back-lobe radiation was reduced. Overall, an additional gain of 1.29 dB could be obtained with the use of RIS. It can be noted that the use of RIS not only improved the bandwidth and reduced the size, but also increased the antenna gain.
Performance Enhancement by Stripline Backed SRR-EBG
S-parameter results were then investigated for the final design (MIMO-RIS-EBG). Motivated by previous work showing that an EBG is capable of reducing mutual coupling between antenna elements [33], we designed and analyzed a new EBG structure to deploy in the RIS-based MIMO antenna. Therefore, it was necessary to investigate the effect of the EBG cautiously. Additionally, the bottom part of the EBG consists of a strip line, thus a careful analysis was carried out to isolate the effect of the EBG on the performance enhancement in terms of mutual coupling reduction. Figure 10 shows a study comparing the S21 results for the following two conditions. Condition 1: simulation of the MIMO antenna with the top part of the EBG (as shown in Figure 5c) removed.
Condition 2: simulation of the MIMO antenna with the full EBG, with both the top and bottom parts present. The results in Figure 10 clearly show that the full model of the EBG with the stripline outperforms the antenna without the top EBG structure but with the stripline at the bottom. Without this analysis, one could claim that the S21 reduction is due to the defected ground structure formed by the stripline. The use of the SRR EBG backed by a strip line reduced the S21 magnitude by approximately 9.8 dB. Therefore, this investigation provides clear evidence of the EBG's performance.
The measurements were conducted using the Agilent E5071C Network Analyzer (Agilent Technologies, Bayan Lepas, Penang, Malaysia) to validate the performance of the antenna. Figure 11 illustrates the experimental setup to measure the proposed antenna. To measure the S-parameters of the antenna, the coaxial probes from the antenna were connected to port 1 (P1) and port 2 (P2) of the network analyzer. The radiation pattern measurement was conducted with the aid of an Anechoic Chamber and a commercialized double-ridged horn antenna. The P2 of the network analyzer was connected to the antenna under test (AUT) which acts as the receiver. The double-ridged horn antenna (transmitter) was connected to P2. The data from the network analyzer were transferred to the computer using General Purpose Interface Bus (GPIB) cable.
Figure 11. Experimental setup to measure the proposed antenna performance.
Figure 12 shows the complete S-parameter results for both simulation and measurement. The simulated and measured S11 indicate good agreement. The fabricated antenna's resonant frequency is slightly shifted both in terms of S11 and S22. However, in terms of bandwidth, both MIMO elements can cover the wireless body area network and Wi-Fi bandwidth. Apart from that, the measured S21 and S12 results, indicating the performance of the antenna in terms of mutual coupling reduction, also show reasonable agreement with the simulated results. The measured S11 bandwidth is from 2.16 GHz to 2.66 GHz. Meanwhile, the measured S21 and S12 magnitudes are less than −40 dB over the frequency range from 2.36 to 2.52 GHz.
The 3D radiation pattern results shown in Figure 13 indicate that the EBG provides sufficient mutual coupling reduction to ensure the antenna gain is not affected. First, the single antenna gain was improved using RIS from 4 to 5.29 dB, as shown in Figure 9. With the implementation of MIMO-RIS, the attainable antenna gain was 5.93 dB. Interestingly, the mutual coupling was further reduced using the proposed EBG, and thus the MIMO-RIS-EBG antenna finally achieved 6.15 dB.
Figure 14 shows simulated and measured polar radiation pattern results for the MIMO-RIS-EBG antenna. The comparison with simulated results shows that the radiation pattern beamwidth is slightly affected for the antenna at port 2. The other antenna results exhibit a good agreement with the simulated results. The measured gain was also approximately 5.8 dB for antenna elements at both ports.
MIMO Properties of the Proposed Antenna
The performance of the proposed MIMO antenna was evaluated in various terms, such as S11, S21 and the radiation pattern. Additionally, the envelope correlation coefficient (ECC) and mean effective gain (MEG) properties of the antenna were also investigated [34,35]. ECC is a measure of how closely the antenna elements are coupled to each other, and it was calculated from the far-field radiation patterns using Equation (1):
$$\rho_e = \frac{\left| \iint_{4\pi} \vec{M}_i(\theta,\varphi) \cdot \vec{M}_j^{\,*}(\theta,\varphi)\, d\Omega \right|^2}{\iint_{4\pi} \left| \vec{M}_i(\theta,\varphi) \right|^2 d\Omega \; \iint_{4\pi} \left| \vec{M}_j(\theta,\varphi) \right|^2 d\Omega} \quad (1)$$
where $M_i$ and $M_j$ represent the antenna elements, $\varphi$ represents the azimuth angle (0-360 degrees), $\theta$ represents the elevation angle, $\vec{M}_i(\theta,\varphi)$ describes the far-field radiation pattern when element/antenna $i$ is excited, $\vec{M}_j(\theta,\varphi)$ describes the 3D radiation pattern when element/antenna $j$ is excited, and $\Omega$ represents the solid angle. The acceptable value for ECC is <0.3 [36].
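As a numerical sketch of Equation (1), the code below correlates the complex vector patterns of two elements over the full sphere. The random patterns are placeholders standing in for exported far-field data, and the grid resolution is an assumption.

```python
# Far-field ECC: correlate two complex vector patterns over the sphere.
import numpy as np

def ecc_far_field(E1, E2, theta):
    """E1, E2: complex arrays (n_theta, n_phi, 2) holding E_theta/E_phi components;
    theta: 1-D array of elevation samples in radians."""
    w = np.sin(theta)[:, None]                       # solid-angle weight sin(theta)
    dot = np.sum(E1 * np.conj(E2), axis=-1)          # M_i . M_j* at each angle
    num = np.abs(np.sum(dot * w)) ** 2
    den = np.sum(np.sum(np.abs(E1) ** 2, axis=-1) * w) * \
          np.sum(np.sum(np.abs(E2) ** 2, axis=-1) * w)
    return num / den                                 # grid spacings cancel in the ratio

theta = np.linspace(0, np.pi, 181)
rng = np.random.default_rng(0)
E1 = rng.standard_normal((181, 361, 2)) + 1j * rng.standard_normal((181, 361, 2))
E2 = rng.standard_normal((181, 361, 2)) + 1j * rng.standard_normal((181, 361, 2))
print(ecc_far_field(E1, E2, theta))   # well below the 0.3 criterion for uncorrelated patterns
```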
In addition to the ECC, the MEG ratios |MEG_i/MEG_j|, where i and j denote specific antenna elements, were computed to quantify the imbalance levels of the diverse propagation branches [37]. The MEG is given by expression (2), where it was assumed that the channel is uniform Rayleigh with equal vertical and horizontal polarization power densities [36]; in other words, the MEG is then equal to half of the radiation efficiency:
$$\mathrm{MEG}_i = 0.5\,\eta_{i,\mathrm{rad}} = 0.5\left[1 - \sum_{j=1}^{M} \left| S_{ij} \right|^2\right] \quad (2)$$
where $\eta_{i,\mathrm{rad}}$ is the radiation efficiency, $M$ represents the total number of antenna elements and $S_{ij}$ denotes the related scattering parameters.
The MEG variation is quantified by $K = \left| \mathrm{MEG}_i / \mathrm{MEG}_j \right|$ (3), where K must remain below 3 dB to have comparable MEGs. Figure 15 shows the MEG results of the proposed antenna. It can be noticed that the maximum MEG variation is 1.4 dB, at a frequency of 1.1 GHz. In the desired operating range (2.4 to 2.5 GHz), the MEG variation is less than 3 dB. With this, good power balance and low diversity loss can be guaranteed.
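The short sketch below evaluates Equations (2) and (3) from S-parameters. It assumes the common S-parameter approximation of the radiation efficiency (material losses neglected), and the S-parameter values are placeholders rather than measured data.

```python
# MEG from S-parameters and the MEG-balance check K = |MEG_1/MEG_2| in dB.
import numpy as np

def meg_from_s(S, port):
    """S: 2x2 complex S-matrix at one frequency; port: 0 or 1."""
    eta = 1.0 - np.sum(np.abs(S[port, :]) ** 2)   # eta_i,rad ≈ 1 - sum_j |S_ij|^2
    return 0.5 * eta                              # uniform Rayleigh assumption (Eq. 2)

S = np.array([[10 ** (-15 / 20), 10 ** (-42 / 20)],     # |S11| = -15 dB, |S12| = -42 dB
              [10 ** (-42 / 20), 10 ** (-14 / 20)]], dtype=complex)

meg1, meg2 = meg_from_s(S, 0), meg_from_s(S, 1)
K_dB = abs(10 * np.log10(meg1 / meg2))                  # Eq. (3), expressed in dB
print(f"MEG1={meg1:.3f}, MEG2={meg2:.3f}, K={K_dB:.2f} dB (target < 3 dB)")
```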
The performance of the proposed antenna was also validated under deformation analysis. The deformation analysis was conducted with bending conditions applied to the antenna along the x and y axes. Figure 16 displays the bending analysis carried out on the x axis from 30 degrees to 120 degrees. It can be noted that the bending does not affect the antenna results critically. The changes in the S11, S21 and ECC results are very small under the bending conditions, thus the antenna performance is expected not to be affected severely for on-body application.
On the other hand, Figure 17 shows the bending effect when the antenna is bent along the y axis. The analysis indicates that the antenna S11 is shifted when the bending range is increased. It can also be noted that for 120 degrees, the S21 result was primarily affected, where it was reduced to −30 dB. Therefore, it can be concluded that bending along the y axis can increase the antenna's mutual coupling. This information is important because, when the antenna is deployed on the human body, bending along the y axis should be avoided or minimized. The analyzed ECC value is less than 0.01 regardless of bending conditions, which indicates the mutual coupling reduction is effective with the EBG. Specific Absorption Rate (SAR) analysis was also conducted in addition to the deformation analysis since the antenna could be used for wearable applications. The SAR results should be lower than the regulated SAR values: 1.6 W/kg taken over 1 g of tissue that absorbs most EM energy and 2 W/kg taken over 10 g of tissue that absorbs EM energy. The location of the antenna was selected to be on the chest of the body since the antenna size is considerably large with the MIMO method. Figures 18 and 19 show that the peak SAR is 0.37 W/kg and 0.207 W/kg for 1 g and 10 g, respectively. The antenna at port 2 yields the maximum value for both SAR regulations. However, all SAR results are still below the regulated SAR values. These findings indicate that the proposed antenna is safe for on-body application. The use of RIS in this antenna also helps reduce the antenna's back-lobe radiation; hence, proper implementation of the RIS structure helped reduce the SAR value.
A comparative analysis of the proposed high-performance textile antenna with previously investigated MIMO textile antennas is presented in Table 3 in terms of material used, operating frequency band, techniques used, antenna gain and isolation performance. The comparison shows that some works adopt materials that are difficult for fabrication, such as jeans as a substrate and copper sheet as the radiating element. None of the existing works has deployed two types of metamaterials in a single design to achieve different performance attributes as proposed in this work.
Conclusions
A metasurface-inspired textile MIMO antenna featuring both RIS and EBG surfaces was proposed and studied. The combined metasurface-inspired antenna prototype was fabricated using a flexible polymer dielectric, viscose-wool felt, as it enables easier fabrication with Shieldit Super TM. The RIS array mainly helped increase the gain and bandwidth of the patch antenna. On the other hand, the proposed strip-line-backed SRR EBG exhibited band-stop properties in the desired frequency range, from 2.4 GHz to 2.5 GHz. A via-less EBG was chosen because vias could complicate the fabrication process of a textile antenna. The implementation of the RIS and EBG in the antenna design was proven experimentally, where they improved the antenna gain and bandwidth while reducing the mutual coupling effects. Measurements showed an S11 bandwidth from 2.16 GHz to 2.66 GHz for magnitude < −10 dB, with a peak gain of 5.8 dBi. The S21 was < −40 dB over the frequency range from 2.36 to 2.52 GHz. Apart from this, the proposed antenna exhibits acceptable results in terms of MEG and ECC. The bending analysis also showed that the effect on antenna performance is very minimal when the antenna is bent along the x axis. The overall findings indicate that the proposed design has the potential to be applied in wearable applications, as the SAR analysis also showed a good result, with SAR values less than 1.6 W/kg averaged over 1 g of tissue and less than 2 W/kg averaged over 10 g of tissue absorbing EM energy. | 10,600.8 | 2022-05-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Identification of Potential Risk Genes and the Immune Landscape of Idiopathic Pulmonary Arterial Hypertension via Microarray Gene Expression Dataset Reanalysis
Gene dysfunction and immune cell infiltration play an essential role in the pathogenesis of idiopathic pulmonary arterial hypertension (IPAH). We aimed to investigate the immune landscape and novel differentially expressed genes (DEGs) of IPAH. In addition, potential druggable molecular targets for IPAH were also explored. In this study, the GSE117261 dataset was reanalyzed to explore the immune landscape and hub DEGs of IPAH. Lasso Cox regression analysis and receiver operating characteristic curve analysis were performed to detect the predictive value of IPAH. Additionally, the underlying drug targets for IPAH treatment were determined by drug–gene analysis. IPAH was significantly associated with the transforming growth factor-β (TGF-β) signaling pathway and Wnt signaling pathway as well as energetic metabolism dysfunction. We identified 31 upregulated and 39 downregulated DEGs in IPAH patients. Six hub genes, namely, SAA1, CCL5, CXCR1, CXCR2, CCR1, and ADORA3, were related to IPAH pathogenesis regardless of sex differences. Prediction model analysis showed that the area under the curve values of the hub DEGs except CXCR2 were all above 0.9 for distinguishing IPAH patients. In addition, the relative proportions of 5 subtypes of immune cells, namely, CD8+ T cells, CD4+ memory resting T cells, γ delta T cells, M1 macrophages, and resting mast cells, were significantly upregulated in the IPAH samples, while 6 subtypes of immune cells, namely, CD4+ naive T cells, resting NK cells, monocytes, M0 macrophages, activated mast cells, and neutrophils, were downregulated. Additionally, a total of 17 intersecting drugs targeting 5 genes, CCL5, CXCR1, CXCR2, CCR1, and ADORA3, were generated as potential druggable molecular targets for IPAH. Our study revealed the underlying correlations between genes and immune cells in IPAH and demonstrated for the first time that SAA1, CCL5, CXCR1, CCR1, and ADORA3 may be novel genetic targets for IPAH.
Introduction
Pulmonary arterial hypertension (PAH), defined as a mean pulmonary artery pressure ≥ 25 mmHg and pulmonary capillary wedge pressure ≤ 15 mmHg on resting right heart catheterization, is a progressive disease that may lead to right heart failure and hemodynamic disorder [1]. Despite the use of targeted drugs in the clinic, PAH remains a life-limiting disease. High pressure in the pulmonary artery is attributed to vasoconstriction, pulmonary vascular remodeling and vascular inflammation, and current research focuses on exploring more novel pathogenic mechanisms to reverse PAH; however, this research is still far from clinical practice [2]. Several genetic targets and immune patterns of PAH have been revealed [3][4][5]. The gene spectrum and immune landscape have gained great attention for their value in reversing PAH.
PAH without a cause or associated condition is called idiopathic PAH (IPAH). Although genetic dysfunction is commonly regarded as the basic pathogenesis of IPAH, several known targeted genes explain only 15-30% of IPAH cases. Moreover, although recent studies have demonstrated that both bone morphogenetic protein (BMP) 9 [2] and prostacyclin synthase [6] genetic variants may also be involved in the pathogenesis of IPAH, they still fail to fully explain the cause of IPAH in patients, indicating that the genetic basis of IPAH needs further investigation. In addition, immune and inflammatory cells play essential roles in the pathology of IPAH [2], and studies of the immune landscape may be valuable for developing novel approaches to treat IPAH.
Bioinformatic research has been used to investigate the potential pathogenic mechanisms of cardiovascular diseases [7]. In this study, the GSE117261 dataset profiles produced by Stearman et al. [8] were acquired from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/). GSE117261 contains gene expression data from the complete transcriptomics analysis of IPAH and control lung biopsy tissues. To date, few data-based studies have been performed to analyze the potential genes and immune cell infiltration of IPAH. We analyzed the transcriptome differences and immune landscape of IPAH patients as well as potential druggable molecular targets for IPAH treatment, which may provide novel insights for disease development. Figure 1 shows the flowchart of the analysis procedure.
Figure 1. The overview of the analysis procedure. GSE117261 dataset profiles were downloaded from the Gene Expression Omnibus database, and Gene Set Enrichment Analysis (GSEA) was conducted to investigate the potential biological pathways using the entire gene set. Thirty-one common upregulated differentially expressed genes (DEGs) and 39 common downregulated DEGs were identified. The DAVID database, ClueGo and Clupedia were used to perform GO and pathway enrichment of the DEGs, and STRING was used to construct the PPI network. The hub genes were detected by Cytoscape software. The immune landscape in the dataset samples was determined by the CIBERSORT algorithm. Lasso Cox regression analysis and ROC analysis were performed to build the IPAH prediction model. Additionally, drug-gene analysis was conducted to explore underlying drug targets for IPAH treatment. GSEA: Gene set enrichment analysis; DEGs: Differentially expressed genes; GO: Gene ontology; PPI: Protein-protein interaction; ROC: Receiver operating characteristic.
Data Resources
We downloaded the normalized gene expression profiles from the GEO database (https://www.ncbi.nlm.nih.gov/geo/) [9]. The GSE117261 dataset, tested on the GPL6224 platform based on the Affymetrix Human Gene 1.0 ST Array, included gene expression data from the complete transcriptomics analysis of PAH and control lung tissues. The dataset, produced by Stearman et al. [8], contained 58 PAH and 25 control lung tissue samples. After excluding samples from patients diagnosed with other types of PAH, 32 IPAH samples and 25 normal control samples from failed donors were finally included in the subsequent analysis. To start, 33,297 gene probes were matched to the corresponding official gene symbols after the platform description matrix files were downloaded. After collapsing multiple probes that matched to one gene by retaining the probe with the most significant gene expression value (adjusted p value) and deleting the non-mRNA probes, 23,307 genes were identified. The following procedures were performed based on the matched matrix file.
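As an illustration of the probe-collapsing step described above, the following pandas sketch keeps one probe per gene symbol; the variable and column names (probe_expr, probe_anno, adj_p) are hypothetical, since the actual processing was done from the GEO platform annotation files rather than with this code.

```python
import pandas as pd

def collapse_probes(probe_expr, probe_anno):
    """Map probes to gene symbols and keep one probe per gene.

    probe_expr: DataFrame indexed by probe ID (samples as columns) that also
    carries an 'adj_p' column from a prior differential test; probe_anno:
    Series mapping probe ID -> official gene symbol. Non-mRNA probes have no
    symbol and are dropped. When several probes map to one gene, the probe
    with the smallest adjusted p value is retained, mirroring the text.
    """
    df = probe_expr.join(probe_anno.rename("symbol"), how="inner")
    df = df.dropna(subset=["symbol"]).sort_values("adj_p")
    return df.drop_duplicates(subset="symbol", keep="first").set_index("symbol")
```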
Screening and Identification of Differentially Expressed Genes
We used the limma package to screen for differentially expressed genes (DEGs) between IPAH patients and healthy controls on the R platform (R-project.org). The fold change (FC) value was obtained by calculating the ratio of the expression level of each gene between IPAH and control samples, and base-2 logarithms were used to make comparisons easier. Genes with |log2FC| ≥ 1 were considered DEGs, and to further limit the number of DEGs to facilitate the construction of the prediction model, an adjusted p value < 0.01, corrected by the Benjamini-Hochberg method, was considered the threshold value. DEGs with log2FC < 0 were considered downregulated, whereas those with log2FC > 0 were considered upregulated. The results were further validated by GEO2R, an online R-based web application supported by the GEO database [10].
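The same thresholds can be expressed compactly in code. The sketch below is a simplified Python stand-in for the limma workflow (a plain per-gene t-test replaces limma's moderated statistics), assuming the expression matrix is already on a log2 scale.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def screen_degs(expr, ipah_cols, ctrl_cols, lfc_cut=1.0, p_cut=0.01):
    """Illustrative DEG screen on a log2-scale expression matrix (genes x samples).

    Because `expr` is assumed log2-transformed, the group-mean difference
    equals log2FC. Thresholds mirror the paper: |log2FC| >= 1 and
    Benjamini-Hochberg adjusted p < 0.01.
    """
    log2fc = expr[ipah_cols].mean(axis=1) - expr[ctrl_cols].mean(axis=1)
    _, pvals = stats.ttest_ind(expr[ipah_cols].to_numpy(),
                               expr[ctrl_cols].to_numpy(), axis=1)
    adj_p = multipletests(pvals, method="fdr_bh")[1]
    res = pd.DataFrame({"log2FC": log2fc, "adj_p": adj_p}, index=expr.index)
    degs = res[(res["log2FC"].abs() >= lfc_cut) & (res["adj_p"] < p_cut)]
    return degs.assign(direction=np.where(degs["log2FC"] > 0, "up", "down"))
```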
Functional Analysis of the Expression Profiles
Gene Set Enrichment Analysis (GSEA) was performed to investigate the relevant biological pathways from an overall perspective using the original probe-matched matrix file of IPAH and normal control samples. GSEA software v4.0.3 was downloaded from the official website of the Broad Institute (http://www.broadinstitute.org/gsea) [11], and the analysis was conducted using the Molecular Signatures Database (MSigDB) of KEGG gene sets (c2.cp.kegg.v7.2.symbols). The normalized enrichment scores (NES) and nominal p values were generated by running GSEA. |NES| ≥ 1 and nominal p value < 0.05 were considered significant [12]. The GO enrichment analysis of the DEGs was performed by the ClueGO (version 2.5.7) and CluePedia (version 1.5.7) tool kits, which can decipher functionally grouped gene ontology (GO) and pathway annotation networks with a hypergeometric test and analyze functional correlations among pathways via Cytoscape software (version 3.7.1) [13][14][15]. To further validate and investigate the results of GO analysis, the biological process (BP), cellular component (CC), molecular function (MF), and KEGG pathway annotations of the hub genes were conducted via DAVID ( http://david.ncifcrf.gov/, version 6.8). In particular, Homo sapiens was selected to limit the annotation of the species. A p value < 0.05 was considered the threshold value to explore more comprehensive GO results.
Protein Interaction and Module Analysis
The Search Tool for the Retrieval of Interacting Genes/Proteins (STRING, http://string-db.org/, version 11.0) was used to establish the protein-protein interaction (PPI) network of the DEGs [16]. The STRING database contains multisource information, including the integration of text mining in PubMed, experimental/biochemical evidence, coexpression, and database association to provide functional interactions between proteins. The DEGs were entered, and Homo sapiens was selected as the organism. To further narrow the candidate gene field, the highest confidence level of 0.90 was used. Then, the PPI network was constructed using Cytoscape software. The Molecular Complex Detection (MCODE, version 1.6.1) plug-in, a well-known automated kit based on topology to identify densely connected regions as molecular complexes in large PPI networks, was used to screen the modules of the PPI network. The MCODE parameter criteria were set by default as follows: Degree cutoff = 2, node score cutoff = 0.2, max depth = 100 and k-score = 2.
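The hub-gene criterion applied later (node degree ≥ 5) can be illustrated with a small graph computation. The sketch below uses networkx on an exported edge list and only mirrors the degree-based selection, not the actual STRING/Cytoscape/MCODE workflow; the example edges are hypothetical.

```python
import networkx as nx

def hub_genes(edges, min_degree=5):
    """Degree-based hub selection in a PPI network.

    `edges` is a list of (geneA, geneB) interaction pairs, e.g. exported from
    STRING at the 0.90 confidence level. Returns (gene, degree) pairs with
    degree >= min_degree, sorted by decreasing degree.
    """
    g = nx.Graph()
    g.add_edges_from(edges)
    return sorted(((n, d) for n, d in g.degree() if d >= min_degree),
                  key=lambda x: -x[1])

# Hypothetical usage:
# print(hub_genes([("SAA1", "CCL5"), ("SAA1", "CCR1"), ("CCL5", "CCR1")]))
```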
Evaluation of Immune Cell Infiltration
The normalized gene expression data with gene symbols were analyzed to infer the relative proportions of infiltrating immune cells of the selected samples via the CIBERSORT algorithm, a computational method for quantifying immune cell fractions from bulk tissue gene expression profiles based on gene expression reference values from a signature matrix of 547 genes in 22 types of immune cells. The modified expression file of GSE117261 was uploaded to the CIBERSORT website (http://cibersort.stanford.edu/), with the algorithm run by setting the default signature matrix at 1000 permutations. CIBERSORT generates a p value for the deconvolution for each sample using Monte Carlo sampling, offering a measure of confidence in the results. Significant alterations in immune cells between IPAH and control samples were identified according to the threshold of the Wilcoxon test at a p value < 0.05.
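A minimal sketch of the downstream group comparison is shown below; it assumes the CIBERSORT output has already been collected into a samples-by-cell-types table, and the variable names are illustrative.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def differential_fractions(fractions, labels, alpha=0.05):
    """Per-cell-type comparison of CIBERSORT fractions between groups.

    `fractions` is a samples x cell-types DataFrame of CIBERSORT outputs and
    `labels` is a Series of 'IPAH'/'control' tags aligned to the same samples.
    The Wilcoxon rank-sum (Mann-Whitney U) test is applied to each column,
    mirroring the p < 0.05 threshold used in the text.
    """
    ipah = fractions[labels == "IPAH"]
    ctrl = fractions[labels == "control"]
    pvals = {cell: mannwhitneyu(ipah[cell], ctrl[cell],
                                alternative="two-sided").pvalue
             for cell in fractions.columns}
    pvals = pd.Series(pvals, name="p_value").sort_values()
    return pvals[pvals < alpha]
```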
Prediction Model Analysis
The glmnet package in R software was utilized to calculate and select the linear models and preserve valuable variables by Lasso Cox regression analysis. According to the binary output variable in the processed data, we used a binomial distribution variable in the LASSO classification as well as the 1 standard error of the minimum criteria (the 1-SE criteria) lambda value in order to build the model with decent performance but the least number of variables. The expression level of the hub genes and the diagnosis of the 57 samples were obtained from the probe-matched matrix file. The drawing of the receiver operating characteristic (ROC) curves and the calculation of the area under the curve (AUC) were conducted by the ROC package in R, and the samples were randomly assigned to the training or testing cohort in an approximately 2:1 ratio. Thus, we investigated the feasibility of the hub genes in prediction via the AUC value.
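The closest scikit-learn analogue of this glmnet-based procedure is an L1-penalized logistic regression with a cross-validated regularization strength. The sketch below is an approximation (it does not implement the 1-SE lambda rule) and uses hypothetical variable names for the hub-gene expression matrix and diagnosis labels.

```python
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def lasso_roc(X, y, seed=0):
    """L1-penalized logistic model on hub-gene expression with a ~2:1 split.

    X: samples x hub-genes expression values; y: binary IPAH/control labels.
    Returns the fitted coefficients and the training/testing ROC AUC values.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=1 / 3, stratify=y, random_state=seed)
    model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=5)
    model.fit(X_tr, y_tr)
    auc_tr = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
    auc_te = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return model.coef_.ravel(), auc_tr, auc_te
```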
Drug-Gene Interaction Analysis
The hub genes also served as potential targets in the search for drugs through the Drug-Gene Interaction database (DGIdb, http://www.dgidb.org/, version 3.0.2-sha1 ec916b2). This web-based database provides relevant drug-gene interaction data and gene druggability information from multiple sources, including clinical trial databases, web resources, and scientific papers in NCBI PubMed.
Functional Annotation and Enrichment of the Expression Profiles
To explore the possible biological mechanisms to uncover the collective behavior of gene expression in states of IPAH and normal controls, GSEA was utilized to interpret the genes distributed across the entire network. IPAH was significantly associated with the transforming growth factor-β (TGF-β) and Wnt signaling pathways ( Figure 2A,B, p value < 0.01) as well as relatively downregulated activities in energetic metabolism, including the citrate cycle, tricarboxylic acid cycle, glycolysis, gluconeogenesis, and starch and sucrose metabolism in IPAH samples compared with normal controls ( Figure 2C,E, p value < 0.01). Additionally, we found that IPAH shared a number of KEGG pathways with cardiomyopathy, viral myocarditis, and melanogenesis, and details are shown in Figure S1, Figure S2, and Supplementary File 2. GO analysis of the DEGs was conducted via the ClueGO and CluePedia tool kits in Cytoscape. As shown in Table 2, a total of 149 significant GO terms (p value < 0.01, see Supplementary File 3 for details) were classified into 11 groups according to Cohen's kappa score based on the shared genes between the terms [11]. The ontology relations between different GO terms are shown in Figure 2F.
Evaluation of Immune Cell Infiltration
The CIBERSORT algorithm was used to investigate the infiltration percentages of 22 subpopulations of immune cells in the IPAH and control samples from GSE117261. The relative percentage of each cell type in the 32 IPAH and 25 control samples is shown in Figure 3A. Moreover, as shown in Figure 3B, the relative proportions of 11 subtypes of immune cells were significantly different between the IPAH and control samples.
Protein Interaction and Module Analysis
To construct the PPI network of DEGs, the STRING online database and Cytoscape software were utilized. A total of 70 DEGs were filtered into the PPI network, which included 29 nodes and 39 edges (Figure 4A). Based on the confidence level of 0.90, 41 genes were not included in the PPI network. According to the node degree ≥ 5 criterion, the 6 hub genes were SAA1 (degree = 8), CCL5 (degree = 5), CCR1 (degree = 5), CXCR2 (degree = 5), CXCR1 (degree = 5), and ADORA3 (degree = 5). The MCODE plug-in was used to analyze the significant modules, and a module with 6 nodes and 15 edges was selected from the PPI network (Figure 4B), showing that the results were consistent with the 6 hub genes. We also conducted the Wilcoxon rank-sum test to investigate the expression values of these hub genes in different samples based on sex, and as shown in Figure 4C, the expression of CCL5 (p = 0.077), CCR1 (p = 0.31), CXCR1 (p = 0.76), CXCR2 (p = 0.23), SAA1 (p = 0.19), and ADORA3 (p = 0.51) was not significantly different between males and females. The significant functional annotations, including BP, CC, MF, and KEGG pathways, are shown in Table 3.
Exploring Candidate Biomarkers by Lasso Regression and Receiver Operating Characteristic Curves
First, a Lasso regression model for the hub DEGs of IPAH and control samples from GSE117261 was established to determine an optimum linear combination for predicting IPAH (Figure 5A,B), with coefficients of −0.5826, 0.5619, −0.4437, −0.1321, and −0.028 for CXCR1, CCL5, ADORA3, CCR1, and SAA1, respectively. Then, ROC curve analysis of the Lasso regression model was conducted separately to predict IPAH in the training cohort, testing cohort, and combined cohort, and the AUC values were all above 0.9, which suggests that the genes in the model might have outstanding potential as biomarkers for distinguishing IPAH patients (Figure 5C).
Drug-Gene Interaction Analysis
The drug-gene interaction network of the hub genes was screened via the DGIdb database (http://www.dgidb.org/), aiming to identify druggable targets. As shown in Table 4, a total of 17 intersecting drugs targeting 5 genes, CCL5, CXCR1, CXCR2, CCR1, and ADORA3, were generated as potential druggable molecular targets for IPAH.
Table 4. Potential druggable molecular targets for idiopathic pulmonary arterial hypertension (IPAH).
[Table 4 lists, for each of the 5 genes (CCL5, CXCR1, CXCR2, CCR1, and ADORA3), the interacting drugs and interaction types; for example, CHEMBL472925 is listed as an agonist.]
Discussion
IPAH, a rare but life-threatening disease, remains challenging in terms of its diagnosis and treatment, leading to a 5-year survival rate of approximately 50% [17,18] even with the administration of targeted drugs. Investigations of effective treatment strategies and the underlying methods are still needed. Genetic dysfunction is commonly recognized as the underlying pathogenesis of IPAH, and immune disorders also play an essential role in disease progression [2]. Recently, the exploration of gene dysfunction and the immune landscape in IPAH has received unprecedented attention due to their great potential for reconstructing therapeutic ideas, which may improve the unsatisfactory treatment situation of IPAH. Stearman et al. [8] conducted the largest PAH lung transcriptome study to date to provide insights into therapies and generate novel hypotheses for preclinical testing. Their study included patients diagnosed with group 1 PH, including IPAH, associated PAH, heritable PAH, and others, for analysis. Here, we specifically focused on the transcriptome differences and immune landscape between IPAH and control samples.
Our study demonstrated that IPAH was significantly associated with upregulation of both the TGF-β signaling pathway and the Wnt signaling pathway. The TGF-β signaling pathway, closely related to inflammation, plays a vital role in numerous biological processes by regulating cell growth, differentiation, apoptosis, and cellular homeostasis, and its dysfunction is associated with the occurrence of cancer, immune disease as well as cardiovascular diseases [19,20]. An increasing amount of evidence has demonstrated the essential role of inflammation in the pathogenesis of IPAH [21,22], and the underlying links between the TGF-β signaling pathway and IPAH are being actively explored. TGF-β/activin/nodal signaling, one of the TGF-β signaling pathways, branches through Smad2/3. After pSmad2/3 oligomerizes with Smad4, the complex translocates into the nucleus to regulate the transcription of target genes, exerting effects on pulmonary vascular remodeling and pulmonary artery smooth muscle cell proliferation. In addition, dysregulated TGF-β/activin/nodal signaling enables the activation of the extracellular-signal-regulated kinase, nuclear factor-κB and Rho kinase pathways, which may also promote PAH [23]. A clinical study [24] noted that a higher level of TGF-β1 could be identified in patients with IPAH compared with the control group. The Wnt signaling pathway is of utmost importance in regulating proliferation and differentiation [25]. Upregulation of the Wnt signaling pathway is regarded as part of the pathogenesis of both IPAH and heritable PAH [26]. Meanwhile, after analyzing human lung fibroblasts, Hemnes et al. [27] also detected higher stimulated Wnt signaling pathway activity in IPAH patients than in the control group. Moreover, dysfunction of energetic metabolism, in terms of downregulation of the tricarboxylic acid cycle in IPAH, has also been demonstrated, which identifies IPAH as an energy metabolism-related disease.
The immune landscape provides a deeper understanding of the inflammatory components in the pathogenesis of IPAH, which helps in the investigation of novel treatments. Our results showed that CD8+ T cells, CD4+ memory resting T cells, γ delta T cells, M1 macrophages, and resting mast cells were upregulated, while CD4+ naive T cells, resting NK cells, monocytes, M0 macrophages, activated mast cells, and neutrophils were downregulated in IPAH samples. It has been reported that varying degrees of perivascular inflammatory infiltrates, such as T- and B-lymphocytes, mast cells, macrophages, and dendritic cells, occur in PAH patients or animal models [28]. Similar to our findings, Marsh et al. [22] also showed that CD4+, CD8+, and γ delta T-cell subsets were increased in the lungs of patients with IPAH. CD4+ and CD8+ T cells are able to induce proinflammatory cytokine release, leading to pulmonary artery injury. γ delta T cells, serving as a link between innate and adaptive immune responses, support tissue homeostasis and wound healing by releasing insulin-like growth factor-1, which exerts a proliferation-promoting effect on smooth muscle cells to induce IPAH [9,29]. M1 macrophages, also called classically activated macrophages, produce proinflammatory cytokines such as IL-1β, TNF, IL-12, and IL-18 [30], and increased inflammatory markers can exacerbate damage to pulmonary vessels. One study showed that inflammation and vascular smooth muscle cell phenotypic switching induced by activated M1 macrophages are related to the increased expression of carbonic anhydrase 2, and the use of carbonic anhydrase inhibitors exerts an immunomodulatory effect to treat macrophage-mediated inflammation [31]. Mast cells were the first immune cells recognized in pulmonary vascular lesions in IPAH patients and can release cysteinyl leukotriene C4 and endothelin to enhance lung vascular remodeling and PH pathogenesis [32]. Our results showed that resting mast cells, not activated mast cells, were increased in IPAH patients. Similarly, Wang et al. [33] revealed that resting mast cells were increased in idiopathic pulmonary fibrosis, which is regarded as another immune disorder disease, and the role of resting mast cells in the pathogenesis of lung immune diseases such as IPAH may be worthy of exploration. Recently, reductions in NK cells in both PAH mouse models and PAH patients were identified. The dysfunction of NK cells has been regarded as an important regulator of angiogenesis and vascular remodeling, potentially through the induction of angiogenic factors and chemokines [34,35]. A clinical study [36] showed that after 1 year of follow-up, PAH patients, including IPAH patients, with deficiencies in NK cells and cytotoxic CD8+ T cells had died, while patients with normal lymphocyte profiles were all alive. Decreased NK cells are linked to a high risk of death, but the underlying mechanisms are still unclear. This suggests that NK cell depletion may be a consequence of or a predisposing factor for PAH, and the association between programmed death-1 expression on NK cells and disease progression needs further investigation [35]. Monocytes induce cytokines to promote inflammation and remodeling [37,38]. Neutrophils release proteolytic enzymes that modulate the activity of cytokines and degrade the extracellular matrix, releasing growth factors to promote vascular remodeling.
Moreover, proteolytic enzymes may also alter the inflammatory environment, enhance leukocyte responses, and exacerbate inflammatory effects [39,40]. It seems that monocyte and neutrophil infiltration contributes to the pathogenesis of PAH, while our study revealed relatively lower fractions of these cells in IPAH samples. PAH-targeted drugs can inhibit inflammatory effects [41,42], and the reduction in inflammatory cells, including monocytes and neutrophils, may be attributed to the use of PAH drugs in our samples. Novel insights into the underlying correlations and mechanisms are needed. Research on the immune landscape provides prospective evidence of IPAH pathogenesis, which can be used to explore more treatment strategies via further investigation.
In our study, a total of 70 DEGs were identified. Similar to Stearman's study [8], we found that upregulated genes such as CCL5, VCAM1, and EDN1 and downregulated genes including CXCR2 were also identified in IPAH. We demonstrated for the first time that the dysregulation of SAA1 (log2FC = −1.57), CCR1 (log2FC = −1.04), CXCR1 (log2FC = −1.24), and ADORA3 (log2FC = −1.03) may also play essential roles in the pathogenesis of IPAH. After reanalyzing the GSE117261 series matrix dataset, 6 hub DEGs of IPAH, namely SAA1 (degree = 8), CCL5 (degree = 5), CCR1 (degree = 5), CXCR2 (degree = 5), CXCR1 (degree = 5), and ADORA3 (degree = 5), were identified in our study, and no significant difference was found in the comparison of female and male patients, indicating that these hub DEGs were not a result of sex-associated discrepancies. SAA1, located on the short arm of chromosome 11, showed the highest degree of connectivity with IPAH. The SAA protein, encoded by SAA1, is highly induced during the acute-phase response and plays an important role in lipid metabolism, bacterial clearance, and tumor pathogenesis [43,44]. Recently, the function of the SAA protein in regulating inflammation has been discussed extensively, and studies have consistently demonstrated that SAA [45,46] induces the expression of proinflammatory factors. One study suggested that recombinant human SAA acts as an inflammatory cytokine, although it does not belong to any family of known chemokines and inflammatory cytokines due to its different structure [47]. In addition, anti-inflammatory factors such as the IL-1R antagonist and IL-10 can also be induced by recombinant human SAA, suggesting that the primary role of SAA during inflammation may be homeostatic [48][49][50]. Our study showed that downregulation of the SAA1 gene was related to IPAH, and we hypothesized that the underlying mechanism may be attributed to the dysfunction of inflammation homeostasis. However, ongoing and future studies are warranted to provide stronger evidence on the association between SAA1 and IPAH. Other potential gene targets of IPAH, including CCL5, CCR1, CXCR2, CXCR1, and ADORA3, are all associated with inflammatory-immune regulation. IPAH is a kind of immune-mediated inflammatory disease [28]. Similar to previous studies, we revealed that the upregulated CCL5 gene was a risk factor for the pathogenesis of PAH [51][52][53]. CCL5 is one of the members of the CC-chemokine family, having a complex impact on immune cells such as monocytes, T lymphocytes, and NK cells [54]; it is strongly expressed on vascular endothelial cells and exerts vasoconstriction and remodeling effects on the lung tissue of patients with PAH [52,55]. Interestingly, CCL5 interacts with bone morphogenetic protein receptor type-2 (BMPR2), which is regarded as an identified IPAH pathogenic gene [56,57]. Nie et al. [58] showed that PAH patients with decreased BMPR2 expression have higher expression levels of CCL5 in pulmonary artery endothelial cells. The deletion of CCL5 inhibited pulmonary vascular remodeling in mice by restoring BMPR2 and activating the phosphorylation of BMP target proteins. In addition, the reduction in CCL5 improved pulmonary artery endothelial cell survival and suppressed the proliferation of pulmonary artery smooth muscle cells to reverse IPAH through BMPR2 signal enhancement. The adenosine A3 receptor (A3R), which is coupled to Gi proteins and encoded by ADORA3, is a regulator of inflammatory responses [59]. The role of this receptor in the pathophysiology of inflammation is complex.
Putten et al. [60] pointed out that A3R-mediated signaling induced proinflammatory cytokines, while another study showed the protective effect of preventing excessive immune response and immune-mediated damage after A3R activation [61]. To date, research on downregulated ADORA3 and IPAH pathogenesis is insufficient, and the underlying mechanisms still need to be explored.
More importantly, the ROC curve analysis of the Lasso regression model showed that the AUC values were all above 0.9, indicating the outstanding potential of the 5 hub DEGs, namely, CXCR1, CCL5, ADORA3, CCR1, and SAA1, as biomarkers for distinguishing IPAH patients, which has significant clinical feasibility in auxiliary diagnosis and disease classification. In Stearman's study [8], differentially regulated drug targets in PAH, including EDN1, EDNRA, PDE5A, GUCY1B1, PTGIR, PTGIS, and CACNA1C, which are related to the endothelin pathway, the phosphodiesterase family, prostanoid pathway proteins, and voltage-gated calcium channels, were demonstrated. However, in this study, we showed that CCL5, CXCR1, CXCR2, CCR1, and ADORA3, associated mainly with inflammatory and immune pathways, were all identified as potential druggable molecular targets for IPAH, which might also reflect the inflammatory and immune pathogenesis of IPAH. Currently, IPAH pharmacotherapy, apart from classic targeted drugs related to the nitric oxide pathway, endothelin pathway, and prostacyclin pathway, is still limited. The development of a treatment strategy relies on full insight into the pathogenesis of IPAH. Here, we highlight that dysfunction of these 6 genes may contribute to IPAH, and investigational drugs targeting CCL5, CXCR1, CXCR2, CCR1, and ADORA3 may provide a prospective direction for the treatment of IPAH. Our work is promising with regard to advancements in treating this devastating condition, which may significantly improve the prognosis of patients.
However, several limitations remain in our study. First, our results were based on GSE117261, and other datasets or clinical data are needed for further research. In this study, all of the samples were collected from Caucasians, which may cause selection bias. Second, smoking exposure could alter the inflammatory status of the lungs; SAA is also reported to be an indicator of the inflammatory status of the lung associated with an increased risk of developing lung cancer in heavy smokers. However, no smoking or cancer data are provided in this dataset. Third, the biomarkers explored in our study have not been verified in external IPAH cohorts, and future studies in other cohorts are needed to test the candidate biomarkers identified here. In addition, biological experiments are needed for further verification.
Conclusions
In conclusion, therapeutic strategies for IPAH are currently limited due to the complex pathogenesis of IPAH. The dysregulation of genes and immune cell infiltration are regarded as important mechanisms that promote disease progression. Our study demonstrated related signaling pathways and the immune landscape of IPAH as well as identified 6 hub genes, which might help to further provide novel insights for candidate biomarker exploration and treatment development in IPAH.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,044.8 | 2021-01-01T00:00:00.000 | [
"Biology"
] |
Modulation of the Microstructure and Enhancement of the Photocatalytic Performance of g-C3N4 by Thermal Exfoliation
Introduction
As industrialization progresses, environmental pollution issues, particularly those involving heavy metal ion contamination, have become an increasingly severe global challenge. Hexavalent chromium (Cr(VI)) is a typical heavy metal pollutant that mainly originates from the wastewater discharge of industries such as leather processing, electroplating, printing, and pigments [1][2]. Due to its high toxicity, Cr(VI) must be removed from wastewater; although conventional treatment methods can achieve this to some extent, they are often accompanied by high costs, high energy consumption, and the potential for secondary pollution. Therefore, it is particularly important to develop a treatment technology that is both economical and environmentally friendly. Photocatalytic technology, which can utilize the energy of natural sunlight to drive the reduction of Cr(VI), is characterized by low cost and environmental friendliness, showing its potential in treating Cr(VI) contamination. Through photocatalytic technology, Cr(VI) can be converted into less toxic Cr(III), making it an effective and environmentally friendly pollution control strategy [1][2][3][4][5][6].
However, photocatalytic technology still needs to improve in practical applications, mainly due to the need for high-performance, cost-effective, and environmentally friendly photocatalysts. Therefore, the development of high-performance photocatalysts is an important research direction in environmental science. Graphitic carbon nitride (g-C3N4) is a layered photocatalyst with remarkable photocatalytic potential, which possesses non-toxicity, good chemical stability, and a suitable band gap structure [7][8][9][10][11][12]. Nevertheless, due to disordered growth and interlayer intermolecular interactions, the conventional thermal polymerization method usually yields g-C3N4 with a bulk structure (bulk-g-C3N4). Its structural defects manifest in a small specific surface area, fewer surface active sites, and a high recombination rate of the photogenerated electron-hole pairs [7,8]. These factors limit its activity in visible-light photocatalytic reactions [7][8][9][10][11][12].
Recent investigations have employed various strategies to enhance the performance of g-C3N4 to address these limitations, including morphological control [9], doping modification [8,10], heterojunction construction [11][12][13][14], chemical and thermal exfoliation [15][16][17], and dye sensitization [18]. For instance, Yang et al. [8] successfully synthesized g-C3N4 nanosheets with excellent photocatalytic degradation performance for Rhodamine B through the synergistic effect of dual-element doping and secondary calcination. Nguyen et al. [11] prepared Ag/ZnO/g-C3N4 through a physical mixing calcination method, enhancing its visible light photocatalytic degradation activity for methylene blue (MB). Wang et al. [11] constructed a TiO2@C/g-C3N4 heterojunction for efficient removal of NO. Zhang et al. [15] used an aqueous sodium hydroxide solution to treat g-C3N4 to improve its photocatalytic activity for reducing Cr(VI) under visible light. On the other hand, Medeiros et al. [16] examined the effects of chemical and thermal exfoliation on the physicochemical and optical properties of carbon nitride and the underlying reasons.
Among the various modification methods for g-C3N4, thermal exfoliation has been widely studied for its simplicity, effectiveness, and minimal alteration of the material structure [8,17,19,20]. However, in existing works, the heat treatment temperature is higher than 590 °C [19], and gas protection is required [20], increasing energy consumption and costs. Additionally, further research is needed on applying such materials to the photocatalytic reduction of heavy metals. In this work, we modulated the microstructure of g-C3N4 by low-temperature thermal exfoliation (500-540 °C) in air and determined the effects of the thermal exfoliation temperature on grain size and band gap structure. By comparing the visible-light photocatalytic Cr(VI) reduction activity of bulk-g-C3N4 and CN, and combining electrochemical tests with the band structure, we propose a mechanism for the photocatalytic reduction of Cr(VI) by CN and discuss the possible reasons for the enhanced photocatalytic activity.
Synthesis
Dicyandiamide (3 g) was weighed, placed into a capped crucible, and put into a muffle furnace. It was heated at a rate of 10 °C/min to a reaction temperature of 540 °C and held for 2 h. The resulting product is denoted as bulk-g-C3N4. The homemade bulk-g-C3N4 was put into the muffle furnace again and heated at 500 °C, 520 °C, or 540 °C with the same heating rate for 2 h. The products were labeled CN-500, CN-520, and CN-540. It should be noted that as the temperature increases, the yield of CN obtained by thermal oxidation decreases; considering the yield, the maximum exfoliation temperature was set at 540 °C.
Characterizations
The composition of the synthesized materials was analyzed using an X-ray powder diffractometer (XRD, Ultima IV X, Rigaku Corporation, Japan), a Fourier-transform infrared spectrometer (FT-IR, ALPHA, Bruker, Germany), and an X-ray photoelectron spectrometer (XPS, Thermo Escalab 250Xi, Thermo Fisher Scientific, USA). The morphological characteristics of the materials were obtained using a scanning electron microscope (SEM, SU8600, Hitachi, Japan). The ultraviolet-visible-near-infrared diffuse reflectance spectrum of the synthesized photocatalyst was obtained using a UV-Vis DRS spectrometer (Lambda750, PerkinElmer, USA). The optical properties of the samples were measured using a photoluminescence spectrometer (PL, F-2700, Hitachi, Japan). Transient photocurrent response curves (i-t), electrochemical impedance spectroscopy (EIS), and Mott-Schottky (M-S) curves were obtained using an electrochemical workstation (CHI 660E, Chenhua Instruments Co., Ltd., Shanghai, China, using a three-electrode system with Ag/AgCl as the reference electrode). The mineralization rate of organic pollutants was measured by a total organic carbon/total nitrogen analyzer (TOC, Model TNM-L, Shimadzu, Japan).
Cr(VI) Reduction Experiments
The experiments for the visible-light photocatalytic reduction of aqueous Cr(VI) by g-C3N4 were conducted using a GHX-Z photochemical reaction apparatus. The experimental conditions were as follows: a 250 W Xe lamp (filtered to remove UV light with wavelengths less than 420 nm), a reaction temperature of 25 °C, 1 mL of 0.5 mol/L citric acid as the hole scavenger, and 300 mg of photocatalyst added to 300 mL of 10 mg/L K2Cr2O7 solution.
First, an adsorption-desorption experiment was carried out for 40 min in the dark. During the photocatalytic reaction after turning on the light source, the reaction solution was pipetted at fixed intervals, and the post-reaction clarified Cr(VI) solution was obtained by filtering through a fiber filter membrane with a pore size of 0.22 µm. The concentration of the aqueous Cr(VI) was determined using a spectrophotometer. The removal rate of Cr(VI) was obtained using Equation (1), removal rate (%) = (c0 − ct)/c0 × 100, where c0 and ct represent the concentration of aqueous Cr(VI) at 0 and t min, respectively.
TC-HCl and RhB Degradation Experiments
The selectivity of CN-540 was examined through adsorption and photocatalytic experiments with 10 mg/L TC-HCl and RhB, with experimental parameters consistent with those of the Cr(VI) reduction experiments. Rhodamine B (RhB) is an artificially synthesized rose-red, cationic dye commonly found in industrial wastewater from the printing, textile, and food industries and is a common pollutant in such wastewater [21,22]. Tetracycline hydrochloride (TC-HCl) is a water-soluble polar compound and a widely used antimicrobial drug in clinical settings. Due to the low effective utilization rate of TC-HCl, 75% of it is excreted as metabolites [23], posing a threat to human health and the ecological environment. Given the high chemical stability of TC-HCl and RhB [21], these organic pollutants are typically not directly oxidized by O2 in the air. Utilizing photocatalytic technology to purify organic wastewater is a potentially feasible strategy [21,24].
Structural and Compositional Characterization
Figure 1 shows the XRD patterns of bulk-g-C3N4 and the CN samples obtained through thermal exfoliation. Compared with the standard card (JCPDS 87-1526) [8], all samples exhibit the two characteristic peaks of graphitic carbon nitride. The strong peak at 27.4° is attributed to the (002) plane of graphitic carbon nitride, formed by the stacking of aromatic rings [17,25]. The weaker peak at 13.1° belongs to the 3-s-triazine units within the planar structure, corresponding to the (100) plane of g-C3N4 [17,25]. Notably, the diffraction angle of bulk-g-C3N4 on the (002) plane is 27.47°, while the diffraction angle of the exfoliated CN photocatalyst on the same plane is 27.64°. According to the change of the diffraction angle and Equation (2) [26], it can be inferred that the interlayer spacing decreases.
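As a worked example of Equation (2) (Bragg's law, nλ = 2d sinθ), the shift of the (002) peak can be converted into interlayer spacings. The calculation below assumes Cu Kα radiation (λ ≈ 1.5406 Å), which is typical for this type of diffractometer but is not stated explicitly in the text, so the absolute values are indicative only.

```python
import numpy as np

# Interlayer spacing from the (002) peak via Bragg's law, d = lambda / (2 sin(theta)).
wavelength = 1.5406  # Angstrom, Cu K-alpha (assumed)
for label, two_theta in [("bulk-g-C3N4", 27.47), ("exfoliated CN", 27.64)]:
    d = wavelength / (2 * np.sin(np.radians(two_theta / 2)))
    print(f"{label}: 2theta = {two_theta} deg -> d(002) ~= {d:.3f} Angstrom")
# Output: ~3.244 A for bulk-g-C3N4 and ~3.225 A for CN, i.e. the interlayer
# spacing shrinks slightly after thermal exfoliation.
```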
Figure 2(a) provides the FT-IR spectra of bulk-g-C3N4, CN-500, CN-520, and CN-540, which are similar to each other, indicating that the thermal exfoliation in the air atmosphere did not destroy the basic structure of g-C3N4. However, there are changes in the intensity of the characteristic peak of the typical breathing vibration mode of the triazine ring at 810 cm−1, as well as the vibrational modes of C−N hybridization within the range of 1200 cm−1 to 1400 cm−1 (Figure S1, Supporting Information). These variations are attributed to the adjustment of the interlayer spacing. The absorption peak observed at 810 cm−1 is characteristic of the bending vibration of the triazine ring [5,13,27]. The absorption bands in the range of 1200 cm−1 to 1600 cm−1 are typical of the stretching vibrations of the aromatic CN heterocycles, with absorption peaks at 1230 cm−1, 1315 cm−1, and 1400 cm−1 attributed to the stretching vibrations of the aromatic C−N single bonds [5,13,27]. Additionally, the absorption peaks at 1560 cm−1 and 1629 cm−1 are attributed to the stretching vibrations of −C=N and C=O [5,13,27]. The broad absorption peak near 3200 cm−1 is attributed to the stretching vibrations of O−H or N−H bonds [17].
Using X-ray photoelectron spectroscopy (XPS), we conducted a detailed analysis of the chemical states of the elements on the surface of the photocatalyst. As shown in Figure 2(b), the survey spectrum indicates that both bulk-g-C3N4 and CN-540 consist of carbon (C), nitrogen (N), and oxygen (O) elements, with the presence of oxygen mainly due to the adsorption of CO2 and H2O on the surface of the photocatalyst. Further high-resolution XPS analysis shows that, in Figure 2(c), the C1s peak is fitted to two peaks located at 284.9 eV and 288.2 eV, corresponding to the C−C bonds of adventitious carbon and the carbon atoms in the N=C−N2 units of the g-C3N4 molecular structure, respectively [10,13]. The N1s peak in Figure 2(d) is fitted to two peaks, with the binding energies at 398.5 eV and 399.9 eV for CN-540 corresponding to the nitrogen atoms in the sp2-hybridized C=N−C bonds [20] and the nitrogen atoms in N−(C)3, respectively. It is particularly noteworthy that compared to bulk-g-C3N4, the C1s binding energy of CN-540 is higher, while the N1s binding energy is lower. The peak areas of the N1s spectra of bulk-g-C3N4 and CN-540 were calculated with XPS peak-fitting software, and the N content of N−(C)3 in CN-540 decreased from 29.4% to 24.3%. This experimental result suggests that during the thermal exfoliation process in the air atmosphere, CN-540 may have undergone the removal of nitrogen atoms, thereby introducing nitrogen vacancies [28,29], which could significantly affect the photocatalytic performance of the photocatalyst.
Figure 3 presents the SEM images of the prepared photocatalysts. Specifically, Figure 3(a) provides the morphology of bulk-g-C3N4, which exhibits a smooth bulk structure on its surface. Figures 3(b) to (d) display images of the CN photocatalysts, indicating that as the temperature of the thermal exfoliation treatment in the air atmosphere increases, the degree of surface porosity and sponginess of CN increases. Figures 3(e) and 3(f) present the EDX elemental distribution maps for bulk-g-C3N4 and CN-540, respectively, showing the uniform distribution of C and N elements. The nitrogen content in CN-540 decreased from 45.55% to 37.35%, indicating the removal of nitrogen atoms from the structure of these materials.
Ultraviolet-visible diffuse reflectance absorption spectroscopy is an excellent means of evaluating the light-harvesting ability of photocatalysts. Figure 4(a) shows the UV-Vis spectra of bulk-g-C3N4, CN-500, CN-520, and CN-540, where CN-500, CN-520, and CN-540 all exhibit enhanced visible light absorption compared to bulk-g-C3N4. g-C3N4 is an indirect band gap semiconductor [13]; by plotting (αhν)^(1/2) versus hν [13], Figure 4(b) can be obtained. Extrapolating the linear part of the plot to y = 0 gives the band gap energies of bulk-g-C3N4, CN-500, CN-520, and CN-540, which are 2.70 eV, 2.69 eV, 2.67 eV, and 2.66 eV, respectively. It can be seen that the band gap energies of CN-500, CN-520, and CN-540 are all smaller than that of bulk-g-C3N4, and the band gap energy decreases with increasing thermal exfoliation temperature.
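The band gap extraction described above is a Tauc extrapolation for an indirect-gap semiconductor. A minimal sketch of that procedure is given below; the linear fitting window is an input that must be chosen by inspecting the plot, and the variable names are illustrative.

```python
import numpy as np

def tauc_band_gap(hv, alpha, fit_window):
    """Estimate an indirect band gap from UV-Vis data by Tauc extrapolation.

    hv: photon energies (eV); alpha: absorption coefficients; fit_window:
    (low, high) energy range over which (alpha*hv)^(1/2) is approximately
    linear. The x-intercept of the linear fit gives Eg, as done for
    bulk-g-C3N4 (2.70 eV) and CN-540 (2.66 eV) in the text.
    """
    y = np.sqrt(alpha * hv)                     # (alpha*h*nu)^(1/2), indirect gap
    mask = (hv >= fit_window[0]) & (hv <= fit_window[1])
    slope, intercept = np.polyfit(hv[mask], y[mask], 1)
    return -intercept / slope                   # energy where the fit crosses y = 0
```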
Photocatalytic Performance
Figure 5(a) compares the visible-light photocatalytic reduction of Cr(VI) by bulk-g-C3N4, CN-500, CN-520, and CN-540. CN-540 achieved the highest removal, with a Cr(VI) reduction rate of 96.9%, whereas the reduction rate by bulk-g-C3N4 without citric acid was only 9.5% under the same conditions. This indicates that the photocatalytic performance of g-C3N4 can be improved by thermal exfoliation in the air atmosphere.
A pseudo-first-order kinetic equation (Eq. 3) was employed to further analyze the photocatalytic reduction process. Based on the relationship between ln(ci0/cit) and t shown in Figure 5(b), the reaction rate constants for the photocatalytic reduction of Cr(VI) by the different catalysts were determined. The results show that the photocatalytic reaction rate constant of CN-540 for Cr(VI) is about 0.0298 min−1, 6.21 times that of bulk-g-C3N4, indicating a faster reaction rate.
Equation (3) is ln(ci0/cit) = kt, where cit and ci0 denote the concentration of the Cr(VI) solution when the light irradiation time is t and 0 min, respectively, and k is the apparent rate constant.
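The rate constant can be recovered from the measured concentrations by a zero-intercept linear fit of ln(ci0/cit) against t. The sketch below illustrates this; the example values in the comment are back-calculated from the reported constant and ratio rather than taken from raw data.

```python
import numpy as np

def rate_constant(t_min, c_over_c0):
    """Pseudo-first-order rate constant from ln(c0/ct) = k*t (Eq. 3).

    t_min: irradiation times (min); c_over_c0: measured ct/c0 ratios.
    Returns k in min^-1 from a zero-intercept least-squares fit.
    """
    t = np.asarray(t_min, dtype=float)
    y = np.log(1.0 / np.asarray(c_over_c0, dtype=float))   # ln(c0/ct)
    return float(np.sum(t * y) / np.sum(t * t))

# Consistent with the reported values: k(CN-540) ~ 0.0298 min^-1 is about
# 6.2x k(bulk-g-C3N4) ~ 0.0048 min^-1.
```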
Figures 5(c-e) show the removal of TC-HCl and RhB by CN-540. For TC-HCl, the intensity of its characteristic absorption peak decreased with illumination time, while the peak position (in nm) remained unchanged. These results indicate that TC-HCl removal by CN-540 under light is a photocatalytic process. The inset of Figure 5(e) shows the adsorption and removal of RhB (40.2%) by CN-540 under dark conditions within 140 min. It was observed that the change in the absorbance of RhB was gradual, and the position of the absorption peak near 550 nm hardly moved. This suggests that the removal of RhB by CN-540 under dark conditions is mainly through adsorption. However, as the time of visible light illumination increases, the efficiency of CN-540 photocatalytic degradation of RhB gradually increases, and the absorption peak near 550 nm shifts towards a smaller wavelength. This shift may be due to changes in the molecular structure of RhB. Figure 5(f) illustrates the mineralization rates of CN-540 during the adsorption and photocatalytic removal processes of TC-HCl, which were calculated based on the measured TOC and are 17.7% and 51.5%, respectively. Similarly, the adsorption and photocatalytic mineralization rates for RhB by CN-540 are 22.3% and 46.6%, respectively. The experimental results indicate that CN-540 exhibits varying catalytic capabilities towards different pollutants. It can be observed from the figures that as the illumination time increases, the absorption peak intensities of the functional groups decrease, indicating that the characteristic functional groups have been cleaved.
Photocatalytic Mechanism
To reveal the active species in the photocatalytic reduction of Cr(VI) by the CN-540 photocatalyst, ammonium oxalate (AO), isopropanol (IPA), and benzoquinone (BQ) were introduced to capture holes (h+), hydroxyl radicals (•OH), and superoxide radicals (•O2−), respectively. Figure 6(a) shows that after the addition of AO, IPA, and BQ, the reduction rates of CN-540 for Cr(VI) decreased to 12.6%, 7.82%, and 26.5%, respectively, indicating that h+, •OH, and •O2− all play essential roles in the reduction process of Cr(VI). Therefore, it can be inferred that h+, •OH, and •O2− act as active radicals in the photocatalytic reaction.
It has been shown that a relatively weak PL peak typically indicates a lower recombination rate of photogenerated electron-hole pairs in semiconductor photocatalysts [30,31]. The PL emission intensities shown in Figure 6(b) decrease in the order bulk-g-C3N4 > CN-500 > CN-520 > CN-540, indicating that thermal etching in the air atmosphere can improve the separation efficiency of photogenerated electrons and holes. Transient photocurrent (TPC) and electrochemical impedance spectroscopy (EIS) measurements were used to evaluate the generation and separation of interfacial charges and to further investigate the separation and migration of photogenerated electrons and holes in the photocatalytic process. In Nyquist plots, the smaller the semicircle diameter, the smaller the carrier transfer resistance of the sample [30]. The Nyquist plots shown in Figure 7(a) indicate that the charge transfer resistances of the CN-500, CN-520, and CN-540 samples are smaller than that of bulk-g-C3N4, with CN-540 having the smallest charge transfer resistance. The transient photocurrent results in Figure 7(b) show that the current density of the CN-540 material is the highest, confirming its fastest separation efficiency of photogenerated carriers.
Figure 8 presents the Mott-Schottky (M-S) plots for bulk-g-C3N4 and CN-540. The positive slopes in the plots indicate that both bulk-g-C3N4 and CN-540 are n-type semiconductors [32] and that the thermal exfoliation in the air atmosphere has not changed the semiconductor type of g-C3N4. Using the extrapolation method where 1/C² = 0, the flat band potentials (EFB) of bulk-g-C3N4 and CN-540 (vs. Ag/AgCl) are determined to be −0.75 and −0.76 eV, respectively. According to EFB (vs. NHE) = EFB (vs. Ag/AgCl) + 0.222 + 0.0592 × pH [33], where the potential of the Ag/AgCl reference electrode is 0.222 V, and considering that the conduction band edge (ECB) is typically 0.10 to 0.3 eV lower than the flat band potential (EFB) of n-type semiconductors (0.3 eV is taken here), the ECB (vs. NHE) values of bulk-g-C3N4 and CN-540 are calculated to be −0.414 eV and −0.424 eV, respectively. Furthermore, based on the equation EVB = Eg + ECB, the valence band potentials (EVB) of bulk-g-C3N4 and CN-540 (vs. NHE) are calculated to be +2.286 and +2.236 eV, respectively.
Conclusions
This study has regulated the microstructure of g-C3N4 through thermal exfoliation in an air atmosphere, significantly enhancing its photocatalytic performance. The experimental results indicate that the CN obtained through thermal exfoliation has a reduced crystal grain size, a decreased band gap width, a certain amount of nitrogen-vacancy defects, and increased surface oxygen content, which enhances its visible light absorption capability. In particular, the CN-540 sample exhibited a 96.9% reduction rate of Cr(VI), and its reaction rate constant was 6.25 times that of the original g-C3N4. The photocatalytic degradation and mineralization rates for TC-HCl by CN-540 were 66.7% and 51.5%, respectively; similarly, those for RhB were 60.6% and 46.6%. This indicates that CN-540 possesses good photocatalytic oxidation and reduction performance. Transient photocurrent response and electrochemical impedance spectroscopy tests confirmed that CN-540 has the best separation and transport efficiency of photogenerated carriers. This study provides experimental evidence for optimizing the photocatalytic performance of g-C3N4 through thermal exfoliation strategies and offers new ideas for developing efficient photocatalysts.
Xinshan Zhao and Yuanyuan Luo: Investigation, original draft, and writing. Junwei Yu and Tingyu Meng: Formal analysis and investigation. Zhao Li and Lin Tian: Revision and funding. Yanzhen Fu and Limei Sun: Review and editing. Jing Li: Supervision, design, revision and funding.
Figure 5. (a) Comparative activity chart of visible-light photocatalytic reduction of Cr(VI) by bulk-g-C3N4, CN-500, CN-520, and CN-540, (b) the pseudo-first-order reaction kinetics of the samples, (c) activity comparison of CN-540 for different pollutants, the UV-vis absorption spectra of CN-540 for the degradation and adsorption of the organic pollutants (d) TC-HCl and (e) RhB, (f) the mineralization removal rate of TC-HCl and RhB by CN-540.
Figure 9. Schematic diagram of the photocatalytic Cr(VI) reduction by CN-540 under visible light. | 4,184.6 | 2024-08-16T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Bacterial Cellulose Production in The Overripe Guava Juice by Acetobacter xylinum as A Solution to Reduce Organic Waste
The exploration of a new cost-effective carbon source with a shorter fermentation process for high-yield BC production is still needed. In this study, bacterial cellulose (BC) was synthesized by Acetobacter xylinum using overripe guava juice as a carbon source. The results showed that A. xylinum was able to grow on overripe guava juice at different concentrations and produced BC after two days of incubation. Hereafter, this BC is referred to as nata de guava. The overripe guava juice containing 23 g/L of reducing sugar (substrate 100%) at pH 4 produced the thickest BC (1.267 cm). This study showed that, owing to the high reducing sugar and protein contents of overripe guava, BC could be formed without the addition of carbon and nitrogen from external sources. Considering the huge amount of overripe guava fruit waste disposed of in Java, the present study provides an alternative methodology to synthesize BC. Most importantly, this study provides new insight into managing organic waste, specifically from overripe guava fruit, rather than letting the waste be thrown away and add to organic waste generation. Keywords: bacterial cellulose, Acetobacter xylinum, overripe guava, organic waste
INTRODUCTION
Indonesia, as a tropical country, produces a variety of fruits, including guava (Psidium guajava L.). Due to its climacteric nature, guava is one of the horticultural products that is easily damaged mechanically, chemically, and microbiologically [1] [2]. During transportation and sale, guava fruit ripens and then ages (senescence). As a result, the fruit becomes overripe, which decreases its selling value [3] and makes it a potential source of organic waste. Based on data from the Central Bureau of Statistics [4], guava production on the island of Java was quite high in the last two years, amounting to 170,339 tons in 2019 and 287,570 tons in 2020. Thus, it is likely that the island of Java also produces a large amount of guava fruit waste.
In fact, overripe guava still contains reducing sugar, acid, and nutrients. The reducing sugar content of overripe guava is about 6% [5] [2]. Thus, it is a potential substrate for nata production, which requires a reducing sugar content of 2-10% [6] [7]. Nata is a white, insoluble, gelatin-like solid consisting of a thin layer of cells and polysaccharides, namely bacterial cellulose (BC), formed by Acetobacter xylinum [6], the most efficient bacterial cellulose producer [8]. During the fermentation process, A. xylinum converts glucose into a layer of extracellular cellulose, which is released and gradually covers the surface [9]. Hereafter, the nata is called nata de guava. In addition, the high reducing sugar content of overripe guava means that no sugar needs to be added during fermentation.
Nowadays, the production of bacterial cellulose or nata is increasing, since it can be used as a reinforcement in high-quality papers, textiles, electronic displays, and audio membranes [10][11][12][13][14]. However, high cost and low-yield production have limited the industrial production of BC and its commercial application [15]. Therefore, the exploration for a new cost-effective carbon source with a shorter fermentation process for high-yield BC production is still needed. The study by Aulia et al. [16] utilized fresh red guava juice for making BC; that study used additional sugar (as a carbon source) and Za (as a nitrogen source) in BC production. However, research on making BC from overripe guava without the addition of carbon and nitrogen sources has not yet been reported. On the other hand, the study by Syamsiah and Gunawan [17] utilized fruit waste for biogas production; before such waste is made into biogas, its juice can be extracted for making BC. Therefore, the purpose of this study is to utilize overripe guava fruit as an alternative substrate for bacterial cellulose (BC) production by Acetobacter xylinum. The results provide insight into reducing the amount of organic waste, especially fruit waste, and into the optimal utilization of food resources.
EXPERIMENTAL SECTION 2.1 Material
This study used overripe red guava fruit of the species Psidium guajava L., Bangkok variety, obtained from Gamping fruit market, Sleman. The overripe fruit is yellow to brown in color, with skin that tears and collapses easily. The juice of the fruit was used as a substrate for A. xylinum in making bacterial cellulose (BC) or nata. The inoculum of A. xylinum used in this study was purchased from CV. Agroprima industry, Sedayu, Bantul, Yogyakarta. The chemicals used in this study were 3,5-dinitrosalicylic acid (DNS) reagent, 0.1 N NaOH, distilled water, neutral distilled water, bromothymol blue (BTB) indicator, potassium carbonate, potassium dichromate, anhydrous glucose, and vaseline.
Instrumentation
The tools used in this study were glass jars of identical diameter (6 cm) and height for the growth of A. xylinum (Figure S1), an autoclave (Harvard/LTE) for sterilizing overripe guava juice, a UV-VIS spectrophotometer for reducing sugar analysis, a Conway diffusion dish for alcohol content analysis, and glassware.
Inoculum preparation
The inoculum of A. xylinum was prepared using Schramm-Hestrin (SH) and Peptone Glucose Yeast Extract (PGY) media. Schramm-Hestrin (SH) medium contains, per liter, 20 g glucose, 5 g peptone, 5 g yeast extract, 2.7 g anhydrous disodium phosphate, and 1.5 g citric acid monohydrate. For solid SH medium, 1.5% agar is added. The characteristics of A. xylinum were confirmed using glucose and sucrose media.
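For convenience, the per-liter recipe above can be scaled to any working volume. The helper below is a minimal sketch using only the quantities quoted in the text; the 250 ml example volume is arbitrary.

```python
# Minimal helper for scaling the Schramm-Hestrin (SH) recipe quoted above to an
# arbitrary working volume; component amounts are per litre as listed in the text.

SH_PER_LITRE = {  # grams per litre
    "glucose": 20.0,
    "peptone": 5.0,
    "yeast extract": 5.0,
    "disodium phosphate (anhydrous)": 2.7,
    "citric acid monohydrate": 1.5,
}

def sh_medium(volume_ml, solid=False, agar_percent=1.5):
    """Return grams of each component needed for `volume_ml` of SH medium."""
    litres = volume_ml / 1000.0
    recipe = {name: round(g * litres, 3) for name, g in SH_PER_LITRE.items()}
    if solid:  # 1.5% (w/v) agar for solid medium, as stated in the text
        recipe["agar"] = round(agar_percent / 100.0 * volume_ml, 3)
    return recipe

print(sh_medium(250, solid=True))  # e.g. a 250 ml batch of solid SH medium
```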
Substrate preparation
The overripe guava fruits were mashed using a blender with the addition of distilled water in the ratios of 1:1, 1:2, 1:3, and 1:4. Next, the mash was filtered through a filter cloth to obtain the juice without pulp, which resulted in overripe guava juice at different concentrations, namely 100%, 75%, 50%, and 25%. These were used as treatments in this study: substrate 100% (reactor A1), substrate 75% (reactor A2), substrate 50% (reactor A3), and substrate 25% (reactor A4) (Fig. S1). The filtrates were used as growth medium for A. xylinum without the addition of either carbon or nitrogen from external sources. A total of 100 ml of overripe guava filtrate juice from each treatment was placed into a 250 ml glass bottle with a diameter of approximately 6 cm and a height of 12 cm. The bottles were covered with parchment paper, tied with rubber bands, and then sterilized by autoclaving at 121°C for 15 minutes.
Fermentation
Fermentations were performed under static conditions (batch culture). Briefly, a total of 20 ml of inoculum (containing 1×10^7 cells/ml) was inoculated into 100 ml of sterilized filtrate juice. The experiments were performed in triplicate for each treatment. Samples were collected at regular intervals (every 2 days) over 2 weeks of fermentation to monitor A. xylinum growth, substrate consumption (reducing sugar content), bacterial cellulose production (the rate of nata formation), pH, and the alcohol and total acid concentrations.
The A. xylinum growth analysis
The growth of A. xylinum was analyzed using the Total Plate Count (TPC) method, following Cappucino and Sherman [18]. Briefly, 1 ml of sample was put into a test tube and serially diluted to 10^-7. A total of 1 ml of each of the 10^-5, 10^-6, and 10^-7 dilutions was spread with a sterilized spreader on a petri dish containing Hestrin-Schramm (HS) agar medium. The plates were then incubated for 5 days at 30 °C. From the number of colonies per petri dish, the number of bacteria per ml of material was determined by multiplying the colony count by the reciprocal of the dilution. The TPC calculation must satisfy the counting requirements described by Cappucino and Sherman [18].
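The colony-count arithmetic described above amounts to a one-line computation. The sketch below illustrates it; the 25-250 countable-colony range is a common plate-counting convention and is an assumption here, not a rule stated in the text.

```python
# Minimal sketch of the TPC calculation described above: CFU/ml is the colony
# count multiplied by the reciprocal of the dilution (and of the plated volume).
# The 25-250 countable range is a common convention and an assumption here.

def cfu_per_ml(colonies, dilution, plated_volume_ml=1.0, countable=(25, 250)):
    """Return CFU/ml for one plate, or None if the count is outside the countable range."""
    low, high = countable
    if not (low <= colonies <= high):
        return None
    return colonies / (dilution * plated_volume_ml)

# Example: 68 colonies on the 10^-6 plate, 1 ml plated -> 6.8 x 10^7 CFU/ml
print(cfu_per_ml(68, 1e-6))
```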
Substrate consumption or reducing sugar content analysis
The substrate consumption, i.e. the reducing sugar content, was determined spectrophotometrically using the 3,5-dinitrosalicylic acid (DNS) method [19]. Briefly, 3 ml of DNS reagent was added to 1 ml of sample in a test tube. A blank containing 1.0 ml of distilled water and 3 ml of DNS was run in parallel. The tubes were heated in a boiling water bath for 15 min. After the tubes had cooled to room temperature, 8 ml of distilled water was added to each, and the absorbance was read at 575 nm. The reducing sugar concentration was determined from a glucose standard curve and multiplied by the dilution factor.
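The back-calculation from absorbance to sugar concentration runs through a linear glucose standard curve. The sketch below illustrates the idea; the standard-curve points, the sample absorbance, and the dilution factor are hypothetical placeholders.

```python
# Hedged sketch: converting A575 readings to reducing-sugar concentration via a
# glucose standard curve, as in the DNS procedure above. All numerical values
# below are hypothetical placeholders, not data from this study.
import numpy as np

std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # glucose, g/L (hypothetical)
std_abs = np.array([0.00, 0.15, 0.31, 0.46, 0.60, 0.76])    # A575 (hypothetical)

slope, intercept = np.polyfit(std_conc, std_abs, 1)          # linear standard curve

def reducing_sugar(a575, dilution_factor=1.0):
    """Back-calculate sugar concentration (g/L) from absorbance at 575 nm."""
    return (a575 - intercept) / slope * dilution_factor

print(round(reducing_sugar(0.42, dilution_factor=50), 1))    # e.g. a 1:50 diluted sample
```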
Bacterial cellulose production analysis
The bacterial cellulose production or the formation of nata was analyzed by measuring its thickness using vernier calipers.
Alcohol content analysis
The alcohol content was determined quantitatively using the Conway chamber (microdiffusion) method [20]. A Conway unit is used for alcohol detection in this procedure. Briefly, 1 ml of potassium dichromate was placed in the center well of the Conway unit and the sample was placed around the center. The unit was then covered with a glass plate and incubated at 30 °C for 2 hours to allow the reaction to proceed. Water and alcohol slowly evaporated, came into contact with the potassium dichromate, and were oxidized. Alcohol continued to evaporate until all of it had left the fermented dilute solution and reacted with the dichromate. After 2 hours, 0.5 ml of the solution in the center of the Conway unit was taken and diluted with 4.5 ml of distilled water. The solution was then measured in a spectrophotometer at a wavelength (λ) of 480 nm, with distilled water as the blank: in one Conway unit, 1 ml of distilled water was used in place of the sample. The standard curve was prepared using standard alcohol at concentrations of 0.08, 0.06, 0.04, and 0.02 g/L.
Total Acid content analysis
The total acid concentration was determined by titration [21]. Briefly, 1 ml of sample was placed in a 25 ml Erlenmeyer flask, and 10 ml of neutral distilled water and 3 drops of bromothymol blue (BTB) indicator were added. The mixture was then titrated with 0.1 N NaOH until the color changed from clear to blue-green. The total acid content of the sample is expressed as a percentage of acetic acid, calculated by the formula:
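The formula itself is not reproduced in the text above. The sketch below implements the standard titratable-acidity expression for acetic acid (milliequivalent weight 0.060 g) with 0.1 N NaOH; it is an assumption that the authors used this exact form.

```python
# Standard titratable-acidity expression (percent acetic acid, w/v). It is an
# assumption, not a statement from the source, that the authors used this form.

def total_acid_percent(v_naoh_ml, sample_ml=1.0, n_naoh=0.1, meq_weight=0.060):
    """% acetic acid (w/v) = V_NaOH * N_NaOH * meq_weight * 100 / V_sample."""
    return v_naoh_ml * n_naoh * meq_weight * 100.0 / sample_ml

print(round(total_acid_percent(0.8), 2))  # e.g. 0.8 ml titrant for a 1 ml sample -> 0.48%
```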
pH analysis
The pH of substrate during fermentation was analyzed using a pH meter.
The nutritional content (proximate) analysis of red overripe guava.
The proximate analysis of the red overripe guava juice used in this study was conducted at the Food and Nutrition Laboratory, Universitas Gadjah Mada, Yogyakarta. The analysis included water content, protein, fat, and ash.
RESULTS AND DISCUSSION
Growth parameters of A. xylinum in the overripe guava substrates were observed as the decrease in reducing sugar (Fig. 1), the rate of bacterial cellulose (BC) formation (Fig. 2), and the decrease in pH (Fig. 5). The alcohol concentration (Fig. 3) and the total acid concentration (Fig. 4) were also monitored during fermentation. The results showed that A. xylinum was able to grow on the medium containing overripe guava juice at different concentrations and produced BC or nata, hereafter called nata de guava. The medium containing overripe guava juice with a reducing sugar content of 23 g/L (substrate 100%) at pH 4 produced the thickest BC (1.267 cm) (Fig. 1) (Table S1). Thickness can serve as an indicator of the quality of BC production by A. xylinum. The thickness of the BC in each overripe guava substrate increased according to the growth pattern of A. xylinum (Fig. 1) and with increasing incubation time (Fig. 2). At the beginning of incubation, there was accelerated growth of A. xylinum in all reactors, indicated by a sharply rising curve (Fig. 1). A. xylinum began to grow without a clear lag phase, indicating that the microorganism did not require an adjustment (lag) phase for replication on the carbon source contained in the overripe guava juice. This can occur because the inoculum used in this study was mature enough and met the requirements to enter the exponential/log phase, reaching 1×10^7 cells/ml. Our results (Fig. 1) showed that in reactor A3, on day 2, there was a decrease in sugar content accompanied by an increase in the growth of A. xylinum and the formation of the nata pellicle, which is the primary metabolite of A. xylinum. Growth and nata thickness continued to increase until day 12. In reactors A2 and A4, they continued to increase until day 6, and in reactor A1 until day 4. In this phase, A. xylinum had entered the exponential/log phase, in which the reducing sugars that serve as its substrate are used very efficiently to produce energy for growth and to convert glucose into cellulose (nata formation) [22]. In reactors A2 and A4, the growth of A. xylinum decreased on day 8 but slowly increased again until day 12, while in reactor A1 it decreased on day 4 but slowly increased again from day 8 to day 12. In this phase, the growth of A. xylinum did not increase drastically, but the thickness of the nata continued to increase. This probably occurred because more energy was used to convert glucose into cellulose than to increase biomass. In all reactors, from day 12 to day 14, neither the growth of A. xylinum nor the thickness of the nata increased. This means that A. xylinum had entered the stationary phase, the phase in which nutrients are decreasing or even depleted, dissolved oxygen is insufficient, and many secondary metabolites have formed, thereby inhibiting the growth of A. xylinum [22]. One of the important factors for successful BC or nata formation is to maintain the growth of A. xylinum on a suitable medium so that it continues to produce cellulose. During BC formation, A. xylinum performs metabolic activities to obtain energy, which include aerobic respiration (the normal pathway) and alcoholic fermentation, which continues into acetic acid fermentation. The reducing sugar, in the form of glucose, in the medium is used by
A. xylinum as an energy source to carry out its metabolic activities. The bacteria convert glucose into a precursor (UDP-glucose) at the cell membrane, which is then excreted outside the cell and polymerized into cellulose by enzymes [22]. Our results show that, with longer incubation time, there is a further decrease in the reducing sugar content (the nutrients in the substrate decrease) (Fig. 1; Fig. 2). This is due to the use of reducing sugars for growth (Fig. 1) and the conversion of glucose into cellulose (Fig. 2).
During the fermentation of BC or nata by A. xylinum, alcohol is synthesized. Our results show that the alcohol content produced by A. xylinum is relatively low (0.2 to 0.6%). Over the 14 days of observation, the alcohol content fluctuated. After 8 days of incubation, the alcohol content in all treatments dropped dramatically (Fig. 3), indicating that the alcohol was further oxidized into acetic acid. This condition also explains why, in reactors A1, A2, and A4, there was a decrease in cell growth on days 6 and 8. Under conditions of alcoholic fermentation followed by oxidation to acetic acid, the energy produced is less than under ordinary aerobic conditions [22]. Therefore, the available energy is used mainly for survival on the medium rather than for increasing biomass. Then, from day 10 to day 14, there was an increase in alcohol levels. When the alcohol in the medium begins to decrease, the bacteria resynthesize it. Alcohol synthesis by the bacteria occurs when dissolved oxygen in the medium is limited. Under these conditions, the aerobic bacterium A. xylinum cannot use glucose through the respiratory pathway to meet its energy needs, so it switches to the fermentation pathway, which continues to supply ATP, although in smaller amounts than respiration [22].
The alcohol formed triggers alcoholic fermentation, in which the alcohol is oxidized to acetic acid. Alcohol fermentation is a pathway for the bacteria to obtain energy through the pentose phosphate pathway [22]. The level of acid formed in this study is shown in Fig. 4. At the beginning of the incubation period, from day 0 to day 4, acid levels increased sharply, presumably due to the use of glucose for acid formation. This is supported by the data in Fig. 1, where the reducing sugar content decreased drastically from day 0 to day 4. Meanwhile, at that time, the nata formed on the substrates containing 45 g/L and 30 g/L reducing sugar was still thin, while on the substrates containing 23 g/L and 20 g/L reducing sugar, nata had not yet formed (Fig. 2). This suggests that acid formation from glucose serves to obtain more energy for cell growth rather than for BC formation. The oxidation/respiration of glucose yields 36 ATP [22]. The bacteria need this large amount of energy because, at the beginning of incubation, bacterial growth is in the logarithmic phase (Fig. 1). At the end of the incubation period, from day 10 to day 14, acid levels tended to decrease, presumably because the reducing sugar content was almost exhausted. The acid formed during cultivation also plays a role in maintaining the pH stability of the medium so that cultivation can proceed optimally. The low pH helps maintain the environmental conditions of the medium and prevents contamination by fungi, yeasts, and other bacteria that could interfere with the activity of A. xylinum in producing cellulose. According to [7] [3], the pH required for making nata is 3-5, i.e. an acidic environment. The pH measurements during incubation (Fig. 5) showed that the pH of the medium was maintained at 3-5, an acidic environment, throughout fermentation. Cellulose production by A. xylinum has been reported from many kinds of substrates, such as coconut water [23], papaya [24], tobacco waste [25], and coconut shell hydrolysate [26]. However, to our knowledge, production from overripe guava has not yet been reported. A study by Setiani et al. [27] did report the production of nata de guava; however, that study used a different species of guava, namely Syzygium samarangense, and still required the addition of a carbon source (sucrose) and a nitrogen source (bean sprout extract) at levels up to 15%. In contrast, our study showed that nata could be formed without the addition of carbon and nitrogen sources. On the other hand, studies by Aulia et al. [16] and Mardhatillah [28] reported the production of nata de guava, but the substrate used was not overripe guava. Furthermore, the nata produced in that study was 0.98 cm thick, thinner than the nata produced in our study (1.27 cm). The results of the proximate analysis showed that the total sugar and protein contents of the overripe guava used in this study were 4.5% and 0.9%, respectively (Table S2). Thus, nitrogen and carbon from other sources are not needed.
This could be one of the advantages of using overripe guava as a substrate for making BC or nata, because there is no additional cost for carbon and nitrogen from external sources.
Bacterial cellulose (BC) has better characteristics for most uses than cellulose from plants: higher purity, tensile strength, porosity, water-holding capacity, biodegradability, and biological adaptability [29] [30] [31] [32]. However, the high production cost of BC, especially the cost of the fermentation medium, remains one of the challenges to its general application in industrial and academic fields. To address this challenge, the exploration of alternative media from various industrial by-products and agroforestry wastes is ongoing. Our results show that bacterial cellulose can be produced by growing A. xylinum on an overripe guava substrate, which would otherwise become organic waste, without any addition of carbon and nitrogen from external sources. Thus, our findings corroborate those ongoing studies. Moreover, our results give a new insight into managing organic waste, specifically overripe fruit, rather than letting it be thrown away and add to organic waste generation.
CONCLUSION
Overripe guava can be used as a promising substrate for bacterial cellulose (BC) production by Acetobacter xylinum. The medium containing 23 g/L of reducing sugar (substrate 100%) is the most recommended, as it produced the thickest BC (1.27 cm); its high reducing sugar and protein contents remove the need for additional carbon and nitrogen, and its pH remained stable. Instead of being disposed of as waste, overripe guava has the potential to be a low-cost feedstock for BC production. Thus, the use of overripe guava can be considered an alternative solution for reducing organic waste from fruits and supporting the concept of eco-friendly technology. | 5,231.8 | 2021-11-09T00:00:00.000 | [
"Environmental Science",
"Materials Science"
] |
The Role of the Gut Microbiome in Youth with Polycystic Ovary Syndrome: A Systematic Review
Background: Polycystic ovary syndrome (PCOS) is a common endocrine disorder that affects women of reproductive age and female adolescents. The diagnosis of PCOS is difficult during puberty because the diagnostic criteria overlap with normal variations of menstruation in this age period. There are insufficient data on the gut microbiome in PCOS and on potential mechanisms linking the two. The present systematic review aimed to detect dysbiosis patterns in youth with PCOS compared with healthy controls. Methods: One hundred seventy-eight studies were identified by a database search and sixty-eight underwent full-text assessment for eligibility; four were included in the systematic review and underwent quality control. Results: The results of the included studies were heterogeneous, in line with findings from the literature. A change in gut microbiome α diversity was found in PCOS adolescents, with no significant alterations in β diversity. Almost all studies found Firmicutes, Bacteroidetes, and Actinobacteria in abundance in both groups, with changes in family composition and fluctuations at the phylum level. A statistically significant association between these changes and clinical or biochemical features of the syndrome was described. Conclusions: This systematic review confirmed gut microbiota dysbiosis in youth with PCOS. However, further data are needed to clarify these changes and to build a strategy to prevent the syndrome.
Introduction
Polycystic ovary syndrome (PCOS) is a common endocrine disorder affecting women of reproductive age. The syndrome is usually established during adolescence, especially 2 years after the first menstruation. Data from around the world report that the disease prevalence varies between 6% and 9% of the population [1]. Accordingly, PCOS appears to be a frequent diagnosis among adolescent females, with a prevalence ranging from 3.4% to 11% depending on the diagnostic criteria used to establish the diagnosis [2].
Diagnostic criteria for PCOS include biochemical and/or clinical androgen excess, ovarian dysfunction, and ultrasonographic assessment of the polycystic ovarian morphology.Based on these criteria, known as the Rotterdam criteria, used to confirm the diagnosis of a female with PCOS, at least two of the aforementioned three criteria should be present [3].
In youth, the use of these diagnostic criteria is questionable due to the common presence of irregular and anovulatory menstrual cycles, acne (as a sign of hyperandrogenism), and pleiocystic ovarian morphology at this age [4]. Thus, the latest consensus on the diagnosis of PCOS during adolescence suggests evaluating the coexistence of ovarian dysfunction, expressed as menstrual disturbances/oligomenorrhea, and biochemical hyperandrogenism in order to confirm the diagnosis of PCOS in youth [4,5].
Common ovarian pathologies in childhood and youth are reported as usually asymptomatic and they emerge only when complications occur, such as acute abdomen or a palpable mass in the ovarian lodge [6].Unlike these situations, PCOS manifestation and complications include a broad group of manifestations, affecting both fertility and the metabolic profile.These complications change during the lifespan, beginning as menstrual irregularities, hirsutism, and infertility and then continuing as metabolic complications (glucose intolerance, type 2 diabetes), cardiac complications, and an increased incidence of endometrial cancer [7,8].
The quote "all diseases begin in the gut", attributed to the Ancient Greek physician Hippocrates approximately 2500 years ago, seems to fit perfectly in the case of the role of the human gut in the pathogenesis of PCOS.Four dominant phyla of bacteria appear to colonize the human gut.The Firmicutes (Gram-positive, anaerobic/aerobic, saprophytic spore-forming bacteria, mainly represented by the genera Clostridium, Faecalibacterium, Blautia, Ruminococcus, Enterococcus, and Lactobacillus) and Bacteroidetes (Gram-negative, aerobic or anaerobic, non-spore-forming bacteria, represented by Bacteroides and Prevotella) [9,10] constitute approximately 90% of the normal bacterial flora of the small and large intestine [11].The remaining two phyla that colonize the gastrointestinal tract are the Actinobacteria (Gram-positive bacteria, with the species of Bifidobacterium being the dominant species in the microflora of the newborn up to the first 1000 days of life) [12] and the Proteobacteria (Gram-negative bacteria, which show heterogeneity in morphology and physiological characteristics and consist of six different subclasses) [9].
However, the human gut microbiota is a growing and evolving ecosystem shaped by several factors during the lifespan, including the aging process, dietary habits, perinatal factors, sexual dimorphism, and hormonal factors [13].Most diseases related to the gut microbiome result from either gut inflammation or dysbiosis [14].
It has been known for over two decades that the human gut microbiome plays a key role in the pathogenesis of PCOS, known as the DOGMA hypothesis (dysbiosis of the gut microbiota) [15]. Possible mechanisms include increased intestinal mucosal permeability in obese populations or those on diets low in sugar, lipids, or dietary fibers, leading to increased circulating lipopolysaccharides and, thus, insulin resistance and ovarian dysfunction [16]. Consequently, increased insulin levels lead to increased testosterone levels [17].
Few studies have so far focused on gut microbiome dysbiosis among youth with PCOS. Furthermore, different measures are often used to describe microbiome samples across studies, hindering the evaluation of the available evidence on gut microbiota changes during PCOS. These measures may not provide information on the abundance of specific taxa, but they reflect a change or difference in the composition of microorganisms. Gut microbiota diversity corresponds to the number of different species present in an individual. Alpha and beta diversity are the most specific indicators describing the status of the gut microbiota among different populations. The estimate of diversity within a single sample is called alpha diversity. Beta diversity analysis quantifies the similarity or distance between pairs of microbiome samples [18]. Thus, alpha diversity measures the diversity of a particular population within a sample, while beta diversity measures estimate the similarity between samples [18,19]. A systematic review in adult women with PCOS has demonstrated a significant alteration of the gut microbiota [20].
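To make the two notions concrete, the sketch below computes a common alpha-diversity metric (the Shannon index) within single samples and a common beta-diversity metric (Bray-Curtis dissimilarity) between two samples. The taxon counts are toy values, and these particular metrics are illustrative choices rather than necessarily the ones used by the reviewed studies.

```python
# Illustrative sketch of the alpha/beta-diversity concepts described above.
# The taxon counts are toy values; real analyses use dedicated pipelines.
import math

def shannon(counts):
    """Alpha diversity: Shannon index H' = -sum(p_i * ln p_i) within one sample."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def bray_curtis(a, b):
    """Beta diversity: Bray-Curtis dissimilarity between two count vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / (sum(a) + sum(b))

pcos_sample = [120, 300, 40, 10]      # hypothetical counts per taxon
control_sample = [200, 180, 90, 60]   # hypothetical counts per taxon

print(round(shannon(pcos_sample), 3), round(shannon(control_sample), 3))
print(round(bray_curtis(pcos_sample, control_sample), 3))
```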
The aim of this study is to systematically review the available data on gut microbiome dysbiosis in female young people with PCOS.
Materials and Methods
The current systematic review was designed following a predefined protocol, according to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines, which is registered in the PROSPERO database under the identification number: CRD42023337850.
Search Strategy and Information Sources
The search for studies was based on a predefined P.I.C.O. (Population, Intervention, Comparison, Outcome) model. On this basis, the search included articles comparing young people (population) with PCOS (intervention) and those without (comparison), in which changes in gut microbiome composition (outcome) were investigated. Studies were searched in the PubMed, MEDLINE, and Cochrane Library databases, and all papers published between January 2012 and July 2023 were included. The search was limited to studies published in English. Specific keywords were used in the search filters: ("microbiome" OR "microbiota") AND ("PCOS" OR "polycystic ovary syndrome"). Relevant articles from the reference sections of the screened articles that could be consistent with the subject of the systematic review were also included. Where more than one article had been published from the same study, data were extracted from the most recent and complete article.
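For readers who wish to rerun the stated query programmatically, the sketch below shows one way to do so with Biopython's Entrez utilities. The contact e-mail and retmax value are placeholders, and this is not necessarily how the authors executed their search.

```python
# Hedged sketch: running the stated PubMed query and date window through
# Biopython's Entrez utilities. The e-mail and retmax are placeholders.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact e-mail

query = '("microbiome" OR "microbiota") AND ("PCOS" OR "polycystic ovary syndrome")'
handle = Entrez.esearch(db="pubmed", term=query,
                        mindate="2012/01/01", maxdate="2023/07/31",
                        datetype="pdat", retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records;", len(record["IdList"]), "PMIDs retrieved")
```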
Study Population Rationale and Eligibility Criteria
Studies considered eligible were randomized controlled trials (RCTs), cross-sectional studies, and cohort studies. According to the World Health Organization (WHO), the United Nations Population Fund (UNFPA), and the United Nations International Children's Emergency Fund (UNICEF), young people are defined as people between the ages of 10 and 24 years [21,22]. Thus, eligible studies included female patients with polycystic ovary syndrome up to 24 years of age.
Exclusion criteria related to study design, type of participants, and type of outcome were defined for the systematic review.Reviews (systematic or narrative), case reports or case series, case control studies, lead articles/opinion articles/commentary articles (editorial/commentary), and letters to the editor were excluded.In addition, studies involving an adult population, such as the age group over 24 years, were excluded.The maximum age of 24 years in the PCOS patient group was selected as a criterion in cases where the study design involved a youth group (referred to as either adolescents, young people, or youth).Studies performed in animals (mammals or rodents) were also excluded.Finally, studies where the end result was microbiome changes in systems other than the gut, such as changes in oropharyngeal or gynecological microbiome composition, were excluded from the present review.
Study Outcomes
The primary outcome was any reported change in the intestinal microbiome of patients with PCOS compared to controls.Secondary set outcomes were anthropometric factors and the hormonal and metabolic profile of the PCOS patients compared to controls.
Screening and Data Collection
Two authors with expertise in systematic reviewing screened all titles and abstracts for eligibility in a completely independent manner.Full texts were reviewed by the two reviewers and discrepancies were resolved with the involvement of a third reviewer.Reasons for exclusion were recorded for all studies excluded at the title, abstract, or full-text level of the review process.Data were extracted from full texts of studies into a predefined worksheet.Data collection was performed by two reviewers independently and then verified by a third according to the predefined datasheet.Any disagreements were resolved by discussion with the third investigator.
Quality Assessment of Included Studies
The risk of bias of the included studies was assessed using the ROBINS-I tool (Version 1 August 2016) (Risk of Bias in Nonrandomized Studies for Interventions) and the Cochrane RoB 2 tool for randomized trials [23,24].ROBINS-I assesses seven different domains and scores studies as low, moderate, severe, and critical risk of bias [23].According to the RoB2 tool, five domains are assessed and each domain is scored as low risk of bias, some concern, high risk of bias, or no information [24].
Results
Initially, the systematic search yielded 200 studies. After 51 duplicate studies were excluded, the abstracts of all remaining studies were screened. Those that met the entry criteria remained in the study and their full texts were read thoroughly, while those that did not meet the criteria were excluded from further analysis. Of the remainder, 75 were rejected on the basis of a title unrelated to the search topic or after reading the abstract. The total number of papers assessed at the full-text level was 74, of which 69 were rejected. Reasons for rejection included a different type of study design (39), adult population studies (26), rat studies (1), outcomes different from the study outcome (2), and data based on a previous study already included in the process (1). From the remaining set, one further paper was excluded; although it studied the target population and the selected outcome, it did so only indirectly.
More specifically, in the study by Eyupoglu et al. [25], potential changes in the intestinal microbiome composition of female adolescents with PCOS were studied by measuring trimethylamine N-oxide (TMAO), a gut microbiome-dependent metabolite [18].In the study, the authors wanted to assess changes in the gut microbiome, expressed by variations in serum TMAO levels and its precursors [25].
Thus, four papers were finally included in the systematic review. The detailed PRISMA-compliant flow chart for inclusion and exclusion of potential published papers is shown in Figure 1. A summary of the main characteristics of the studies included in the systematic review is presented in Tables 1-4. Of the four studies, two involved female youth with PCOS from Turkey [26,27], one involved a similar population from Spain [28], and another involved female adolescents from the United States of America [29].
In two of the four studies, the diagnosis of PCOS was made based on the Rotterdam criteria [1,2]. In the study by Jobira et al. [29], the National Institutes of Health (NIH) criteria were used, adapted to female adolescents with oligomenorrhea at least 2 years after menarche and the presence of clinical and/or biochemical signs of hyperandrogenism. Garcia-Beltran et al. [28] used as inclusion criteria the presence of hirsutism, oligomenorrhea, and at least 2 years since menarche [4]. Regarding study design, all were cross-sectional studies except for the study by Garcia-Beltran et al., which was a randomized controlled clinical trial [28]. Age is presented as median (min-max). BMI is presented as mean ± standard deviation, median (min-max), or median (25%, 75%), as appropriate.
Table 3. Synopsis of significant phyla differences between PCOS and control groups of included studies.
The age of patients with PCOS ranged from 15.5 to 22.5 years, while the age of controls ranged from 14.1 to 27 years. In Table 2, demographic and anthropometric characteristics of the included studies are presented according to the age category of participants: adolescents or young people only. Finally, the BMI of the young participants with PCOS ranged from 19.7 to 39.7 kg/m², while that of the control group ranged from 20 to 39.3 kg/m².
In the randomized study by Garcia-Beltran et al. [28], gut microbiota dysbiosis was studied in female adolescents with PCOS, aiming for individualized therapeutic intervention.
The authors describe changes in α diversity in terms of both evenness and diversity (p = 0.03 and p = 0.04, respectively), and they also show changes in β diversity between the groups. Regarding the bacteria at the genus level, changes were observed in genera of family XI of the Firmicutes (p = 0.002), which were found in abundance in the group of adolescents with PCOS, in contrast to the control group. The Prevotellaceae of the Bacteroidetes (p = 0.0006), Prevotella of the Firmicutes (p = 0.0001), and Senegalimassilia of the Actinobacteria (p < 0.0001) were also found in abundance.
Similar findings of microbiome differentiation were also found in the study by Jobira et al. [29].In this study, patients with PCOS showed reduced α diversity at the level of uniformity (p = 0.0052 and p = 0.045) compared to healthy individuals, without changes at the level of variety (p = 0.655).
The β diversity differed between the two groups (p < 0.001); regarding the phyla, adolescents with PCOS showed a greater abundance of Actinobacteria (p = 0.027), a lower presence of Bacteroidetes (p = 0.004), and similar levels of Firmicutes and Proteobacteria compared to the healthy subjects. At the family level, adolescents with PCOS had a lower abundance of Bacteroidaceae (p < 0.001) and Porphyromonadaceae (p = 0.024), and a higher abundance of Streptococcaceae (p = 0.047). At the genus level, adolescents with PCOS had a higher abundance of Prevotella, Finegoldia, and Lactobacillus, but lower abundances of Bacteroides and Parabacteroides. In this study, levels of total testosterone, ALT, triglycerides (TG), and HOMA-IR were positively correlated with these changes. Additionally, changes that could potentially be used as predictors of PCOS were reported, such as the phyla Bacteroidetes (AUC 0.73 ± 0.06) and Actinobacteria (AUC 0.68 ± 0.07), and the families Lactobacillaceae (AUC 0.75 ± 0.08), Bacteroidaceae (AUC 0.81 ± 0.06), Porphyromonadaceae (AUC 0.68 ± 0.07), and Streptococcaceae (AUC 0.66 ± 0.07), with the family Bacteroidaceae being the strongest predictor (sensitivity of 62% and specificity of 86%).
However, the study by Eyupoglu et al. [26] presents opposite results, with α diversity showing no changes between the two compared groups (p = 0.27, p = 0.79, and p = 0.97). No change was shown in β diversity either. At the phylum level, there was also no differentiation between the groups, with Bacteroidetes and Firmicutes abundant in both groups, along with the presence of Proteobacteria and Actinobacteria. The only difference was found in the family Ruminococcaceae, which was more abundant in the PCOS group than in the control group (p = 0.006). The abundance of this family was positively correlated with the score on the Ferriman-Gallwey scale (p = 0.01) [26].
The study by Mammadova et al. [27] in lean young women with PCOS is consistent with the findings of the previous study, in that there was no difference between the two groups in α or β diversity (p = 0.78, p = 0.51, and p = 0.93, respectively). Regarding individual taxa, patients with PCOS appeared to have a greater abundance of Proteobacteria (p = 0.039), Gammaproteobacteria (p = 0.039), Erysipelotrichia (p = 0.013), and Verrucomicrobia (p = 0.05) compared to controls. In contrast, the genera Clostridium sensu stricto and Roseburia appeared to be less abundant in the PCOS group than in controls (p = 0.04 and p = 0.021, respectively) [27], as shown in Table 3.
Differences in α and β diversity are detailed in Table 4; however, it is important to underline that a pattern of significant differences is only reported among studies focusing on adolescence [28,29].Interestingly, studies presenting data from young adults [26,27] failed to demonstrate any significant change between patients and controls, in either α diversity or β diversity.
Regarding study quality, three out of the four selected studies were assessed as low risk of bias by the ROBINS-I tool.The summary risk of the bias assessment using the ROBINS-I tool is reported in Table 5.One study was assessed as a randomized controlled trial by the RoB 2 tool, as shown in Table 6.
Discussion
In the present systematic review, an attempt was made to capture the existing research studies on intestinal microbiome variations in young female people with PCOS.The total number of women with PCOS that were included was 108, a relatively small sample, which also reflects the scarcity of studies on this specific topic.The small patient sample also reflects the difficulty of finding and conducting such studies in adolescent, non-adult populations.The difficulty is related both to the nature of the syndrome, where it is established in adolescents and young women after 2 years of menarche and therefore affects young women of older age in the majority, and to the lower prevalence of the syndrome in youth under 24 years of age.
In addition to the fact that PCOS is predominantly diagnosed in female youth rather than adolescents per se, the origin of the syndrome is clearly rooted in the metabolic profile of early adolescence [30].This is why the manifestations of the syndrome preoccupy and trouble primarily the pediatrician dealing with the adolescent girl, rather than the adult medical provider who can easily establish the diagnosis.The rationale of the present study was to shed light on the etiological origin of the syndrome during adolescence through the pooling of the available evidence on gut dysbiosis in the context of PCOS.The data provided will mainly address the pediatric clinical perspective on the disease, reinforcing the underlying link of gut dysbiosis and the occurrence of PCOS.The aim of the researchers is to focus on the adolescent disorders associated with the syndrome in order to provide evidence for the design of effective interventions before the PCOS fully manifests during adult life [31].Thus, the selection of the age range of the study population was based on the definitions of adolescents and youth according to global health promotion organizations and health stakeholders.According to the World Health Organization (WHO), the United Nations Population Fund (UNFPA), and the United Nations International Children's Emergency Fund (UNICEF), young people are defined as people between the ages of 10 and 24 years.Thus, eligible studies included the age range of female patients with polycystic ovary syndrome up to 24 years.
The study design was initially based on setting the age of 24 as an entry criterion in order to include the population group that belongs to young adults (youth); in this way, the age spectrum of the onset of PCOS was extended to include changes in the microbiome, which are imminent to the aforementioned onset.However, even among the studies included here, a pattern of differences in findings was evident in relation to age variation.Studies in younger adolescents (under 17 years of age) tended to report significant differences in the diversity of the explored microbiota, while no significant alterations were found in more advanced age populations (young to 24 years of age) [26][27][28][29].Age variation is a recognized factor that interferes with both phyla and their diversity in healthy conditions [32].The entire age from infancy to the elderly is already known to be characterized by a different microbiome profile in healthy humans [32,33].It can therefore be hypothesized that the effect of age variation on the microbiome may modify the investigated differences not only in health but also in disease.
It is well known that the study of the gut microbiome has been at the center of attention for some time, with data emerging highlighting changes initiated both by sexual dimorphism and by variations in the individual's lifestyle and metabolic and endocrine profile, additionally modified after iatrogenic interventions [34].Indeed, data from rodent studies support that the composition of the gut microbiome differs between sexes.The intestinal microbiome of the adult female mice appears similar to that of preadolescent mice, while adult male mice develop an intestinal environment distinct from that of preadolescent mice, regardless of sex [35,36].
Although studies on the effect of sexual dimorphism in humans are still scarce, according to the available data, the composition of the intestinal microbiome between women and men is reported to be diverse [37].These differences may be due to the direct effect as well as the indirect influence of sex hormones on inflammatory and metabolic factors, such as short-chain fatty acids (SCFAs) and neurotransmitters.Furthermore, the diversity and composition of the microbiome, in addition to being related to age, also adapts to the effects of hormones.Mayneris-Perxachs et al. [38], in an attempt to investigate the changes in the composition of the gut microbiome between the two sexes (men and women), highlighted the differences in β diversity between premenopausal women and men, which is based on steroid biosynthesis, whereas on the contrary, no differences were observed between postmenopausal women and men.
These differences, however, were evident in non-obese subjects, disappeared in the obese population, and were strengthened by the positive correlation with sex steroids, progesterone, and testosterone levels.Gender differences were observed between premenopausal and postmenopausal women and men.Males showed greater abundance in Bacteroidaceae and Prevotellaceae, a finding reinforced by a possible positive covariance with testosterone.In contrast, the genera Actinobacteria, Proteobacteria, Firmicutes, and Verrucomicrobia were not associated with testosterone levels.Furthermore, estrogen levels differed in obese postmenopausal women compared to lean postmenopausal women; obese postmenopausal women have higher estrogen levels due to peripheral estrogen synthesis [38].
In the field of PCOS, however, many studies have attempted to highlight the changes occurring in the gut microbiome [39][40][41].
The particular difficulty in studying PCOS is related to its pathophysiology itself, where it is characterized by hyperandrogenemia, a factor that contributes to changing the composition of the intestinal microbiome but also to the frequent coexistence of insulin resistance and overweight or obesity (reported as up to 88% in adult PCOS) [42].Recent meta-analysis provides evidence that there is a multiple relative risk of being diagnosed with either obesity, overweight, or central obesity in the setting of PCOS compared to healthy controls [43].
Studies in rodents have shown that changes in the gut microbiome, such as an increase in the genus Firmicutes, correlate with changes in the regulation of insulin levels, such as the presence or progression of obesity, type 2 diabetes, and the metabolic syndrome [44].
Zeng et al. reported changes in the functional and structural profile of the intestinal microbiome between women with PCOS and insulin resistance or without insulin resistance [39].As a result, the presence of insulin resistance appears to directly affect the gut microbiome and act, possibly synergistically with PCOS, to further diversify the microbiota [45].
PCOS itself appears to enhance inflammation and insulin resistance due to a reduction in the abundance of beneficial bacteria for the microbiome (such as Faecalibacterium of the genus Firmicutes), thereby reducing the production of SCFAs that result in intestinal barrier disturbances [46].
Torres et al. conducted a study in healthy women, women with PCOS, and women with polycystic ovary morphology; they highlighted the link between hyperandrogenism and the changes that occur in the gut microbiome of women with PCOS.Differences occurred in four genes known to produce SCFAs, which were found at a lower rate in women with PCOS compared to the other examined groups [47].
Moreover, there is research on the presence of changes in the intestinal microflora of females with PCOS and overweight/obesity or lean weight.In a recent study by Liang et al. [48], gut microbiome changes were observed in Chinese women with PCOS and in healthy subjects and analyzed in relation to BMI levels.
In this study, the authors showed that the changes were present in both lean and obese women with PCOS.Especially, they showed statistically significant differences in bacterial relative abundance of the genera Bacteroidetes, Proteobacteria, and Parabacteroides in the entire sample of women with PCOS, regardless of BMI [43].
In addition, several research protocols are investigating the effect of contraceptive pills on the intestinal microflora.Hua et al. [49] reported that microbiome changes were observed before and after oral contraceptive administration.The results highlighted differences between the sexes and genera of the intestinal microbiome of women over time and the effect of contraception, particularly in the increase in the genera Actinobacteria and Firmicutes [49].
Another aspect of therapeutic personalization research is emerging with the primary goal of balancing microbiome diversity.The available data indicate that there is a tendency to direct therapeutic approaches targeting the relationship between microbiome and circulating androgens levels [50].
Regarding changes in the intestinal microbiome of female adolescents with PCOS, there is a great heterogeneity of the reported findings as a result of the limited number of relevant studies as well as the interpretation of physiological changes due to puberty.
One of these studies on adolescents with PCOS by Eyupoglu et al. [25] aimed to find changes in gut microbiome diversity by measuring serum levels of trimethylamine N-oxide (TMAO) for potential targeted therapy.TMAO is known to be produced from the metabolism of dietary choline and L-carnitine by the gut microbiome, and many studies have shown that this important product inhibits cholesterol metabolism, induces platelet aggregation and thrombosis, and promotes atherosclerosis.Moreover, TMAO levels, in addition to atherosclerosis, are associated with type 2 diabetes and gestational diabetes [51].
Studies in adult populations with PCOS have shown that TMAO levels are elevated, even without the clinical presence of hyperandrogenism [52].
The results of the study [25] were encouraging, as it was found that the elevated TMAO levels found in the group of adolescents with PCOS were reduced after short-term oral contraceptive therapy (3 months) combined with lifestyle changes.In addition, body weight loss and a decrease in circulating androgen levels were also positively correlated [25].
Another study also regarding adolescent females with PCOS reported that the coexistence of obesity and fatty liver infiltration may be related to the altered microbiome [53].This study concluded that adolescents with PCOS and obesity and fatty liver infiltration have a different gut microbiota composition compared to those with PCOS and obesity [53].
The results obtained from the studies included in this review show great heterogeneity in terms of species diversity, findings that are in agreement with those reported in the literature. In most of these studies, when a change is described, it mainly concerns a reduction in α diversity in women with PCOS [28,29].
Regarding β diversity and species diversity of the intestinal microbiome, studies by Garcia-Beltran et al. [28] and Eyupoglu et al. [26] were the ones that showed the most changes, especially in the genus Firmicutes, with differences in families and genera.
It is also important to point out that two of the studies included in the present systematic review studied the changes in the intestinal microbiome of female adolescents with PCOS and, in the process, sought to elucidate its changes after treatment.
The first study by Garcia-Beltran et al. [28] demonstrated gene-level changes, as mentioned above in family XI, which appear to be abundant in female adolescents with PCOS and decreased to return to normal levels after administration of a combination of drugs such as spironolactone, pioglitazone, and metformin for a time period of 1 year; however, when oral combined contraceptive therapy was administered, similar results were not observed.The significance of this finding is related to the role of family XI microbes in inflammatory liver diseases, hyperandrogenemia, and central fat distribution [28].
The second study by Eyupoglu et al. [26] examined the effect of oral contraceptives in female adolescents with PCOS. The results generally failed to show significant differences in the gut microbiome after 3 months of contraceptive treatment. Nevertheless, a decreasing trend for Actinobacteria was reported in female young people with PCOS and obesity, accompanied by a decrease in body weight and androgen levels [26]. However, the authors point out the beneficial effect of oral contraceptive administration in reducing the abundance of the phylum Actinobacteria on the basis of other studies [29,54,55], in which an increase in this phylum had been observed, a finding not reproduced in their own study.
Conclusions
Polycystic ovary syndrome is a common disease among young female people.The pathophysiological pathways leading to the clinical manifestations of the syndrome are still under investigation.Gut microflora dysbiosis has been explored as a major factor contributing to the pathogenesis of PCOS.Until now, studies focusing on gut microbiota changes in the young population with PCOS are few.The reduction in α diversity, but also changes in β diversity of the gut microbiome, in different families and genera, especially in the phylum Firmicutes, is confirmed in PCOS young people.Further data describing gut dysbiosis during PCOS in youth are of major importance, in order to build a strategy to prevent the syndrome.
Figure 1 .
Figure 1. Flowchart of studies from databases.
Table 1 .
Characteristics of the studies.
Table 2 .
Characteristics of the population involved in the study.
Table 4 .
Synopsis of diversity assessments in the included observational studies.
Table 5 .
Risk of bias assessment of included non-RCT studies.
Table 6 .
Risk of bias assessment of included RCT study. | 6,895 | 2023-11-29T00:00:00.000 | [
"Medicine",
"Biology"
] |
Reconstitution of a tandem Co- and post-translational processing pathway with rat liver subcellular fractions.
Previously we showed that smooth microsomes from a variety of tissues effectively cleaved, sequestered, and "core" glycosylated nascent chains of secretory proteins. To further characterize the role of smooth membranes in the biosynthesis of secretory polypeptides, rat liver smooth microsomes were separated into smooth endoplasmic reticulum and Golgi fractions. Membranes of the smooth endoplasmic reticulum cleaved the signal peptide of pre-placental lactogen, attached the high mannose core to the alpha subunit of chorionic gonadotropin, and sequestered the processed proteins. None of these processing steps were performed by Golgi membranes. However, processing of asparagine-linked oligosaccharides and the coincident addition of terminal sugars was performed by Golgi but not by smooth endoplasmic reticulum membranes. The properties of this post-translational reaction are very similar to those described for the reactions in vivo. These observations demonstrate that the enzymes for co-translational (pre-protein processing) and posttranslational (oligosaccharide maturation) processing events are localized in the endoplasmic reticulum and Golgi apparatus, respectively. This functional differentiation of Golgi and endoplasmic reticulum membranes is an important feature of the secretory process in eukaryotic cells. Restriction of the recognition and transport of nascent secretory proteins to the endoplasmic reticulum establishes the polarity necessary for the ordered sequence of post-translational steps involved in the synthesis and maturation of secretory proteins.
Eukaryotic secretory proteins travel through a number of subcellular compartments prior to their exit into the extracellular space (38). During this transit through the cell, secretory proteins frequently undergo one or more structural modifications by specific enzymes localized within the various organelles. For example, while the synthesis and translocation of the nascent chain is taking place, the NH2-terminal 15-30 amino acids, termed the signal or pre-peptide, are proteolytically removed by an activity present in the endoplasmic reticulum membrane. In the case of secretory glycoproteins, a preassembled oligosaccharide unit (30,31) is transferred from a lipid carrier to appropriate asparagine residues in the growing polypeptide chains (22,40,49). This primary, or "core," glycosylation also occurs prior to completion of the polypeptide chains (2,42). Upon release of the completed polypeptide from the ribosomes, the oligosaccharide is processed primarily by enzymes localized within the Golgi complex (40,43). (This work was supported by a grant from the Public Health Service, HD-13481.)
The initial stages of protein secretion, i.e. the binding and translocation of nascent chains by the ER membrane, proteolytic removal of the pre-peptide, and core glycosylation, have been generally regarded to occur only in RER and not in SER. However, we have recently shown that smooth microsomes prepared from rat liver effectively cleave, glycosylate, and translocate nascent chains of hPL and hCGa (4).
Smooth microsomes isolated from Krebs II ascites tumor cells and bovine adrenal cortex also processed and translocated nascent chains of hPL and hCGa very efficiently. Both of these tissues contain significantly less RER than is found in rat liver, providing strong evidence that the processing observed with smooth microsomes is not due to contamination by stripped RER that may have formed during membrane preparation. The data from the adrenal cortex are particularly important in this regard since the tissue is virtually devoid of RER (33,39).
Smooth microsomal preparations typically contain, in addition to membranes derived from the SER, fragments of the Golgi apparatus, plasma membranes, and secretory granules. The heterogeneity of smooth microsomes raises the question of whether the pre-protein processing observed with this fraction is limited to the ER or whether other smooth vesicles are also capable of processing pre-proteins. To examine this question we have subfractionated smooth microsomes into SER and Golgi fractions that are virtually free of cross-contamination. Membranes derived from the SER, when added co-translationally to ascites tumor lysates, effectively cleaved nascent chains of both hPL and hCGa and, in the case of hCGa, transferred two high mannose oligosaccharide units to the nascent chain. Partially purified Golgi membranes did not cleave or core glycosylate either protein. Furthermore, we describe a cell-free assay for the Golgi-dependent modification of the high mannose oligosaccharide added to hCGa in the ER. The in vitro processing of oligosaccharide appears to follow the same pathway described for the in vivo maturation of complex type oligosaccharide (25).
EXPERIMENTAL PROCEDURES
The "Experimental Procedures" are presented in miniprint.
RESULTS
To examine the respective roles of SER and Golgi membranes in protein secretion, a rat liver homogenate was fractionated according to the scheme shown in Fig. 1. This fractionation involves an initial isolation of crude Golgi and SER membranes by isopycnic centrifugation, followed by a second, velocity gradient centrifugation to achieve further purification. A total of six membrane fractions were identified and collected.
The distribution of the marker enzymes galactosyltransferase (Golgi) and glucose-6-phosphatase (ER) and their specific activities are given in Table I. Based on these data, the different membrane fractions were tentatively identified as follows: f1, crude preparation of Golgi apparatus (GA); f2, crude SER preparation (SER); f3, final Golgi-rich fraction (GA); f4, membranes of both Golgi and SER origin; f5, SER; the origin of f6 is unclear.
The SER preparation (fraction f5) contains very little galactosyltransferase activity, indicating less than 10% contamination with Golgi membranes. The overall recovery of SER is comparable to that reported by Kruppa and Sabatini (26). An estimate of ER contamination in the Golgi preparation (fraction f3) is provided by its glucose-6-phosphatase activity, which is about 40% of that found in the SER preparation studied. The exact level of ER contamination is difficult to assess since glucose-6-phosphatase is apparently present throughout the cell (12,21). In this regard it is important that the second gradient results in a further 2-fold purification of the Golgi membranes, as indicated by a comparison of both marker enzymes in fractions f1 and f3 (Table I).
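As an illustration of the arithmetic behind these contamination and purification estimates, the short sketch below uses hypothetical specific activities; the function and all numbers are assumptions, not the paper's data.

```python
# Rough sketch of how marker-enzyme specific activities translate into
# cross-contamination and fold-purification estimates (hypothetical numbers).

def percent_contamination(marker_in_fraction, marker_in_reference):
    """Contamination of a fraction by the reference compartment,
    estimated as the ratio of the marker's specific activities."""
    return 100.0 * marker_in_fraction / marker_in_reference

# Hypothetical specific activities (units per mg protein)
galtase_in_SER = 0.5     # galactosyltransferase (Golgi marker) in the SER fraction
galtase_in_Golgi = 6.0   # same marker in the Golgi-rich fraction
g6pase_in_Golgi = 2.0    # glucose-6-phosphatase (ER marker) in the Golgi fraction
g6pase_in_SER = 5.0      # same marker in the SER fraction

golgi_contam_of_SER = percent_contamination(galtase_in_SER, galtase_in_Golgi)  # ~8%
er_contam_of_Golgi = percent_contamination(g6pase_in_Golgi, g6pase_in_SER)     # ~40%

# Fold-purification of Golgi membranes across the second gradient (f1 -> f3)
galtase_f1, galtase_f3 = 3.0, 6.0
fold_purification = galtase_f3 / galtase_f1                                    # ~2-fold

print(golgi_contam_of_SER, er_contam_of_Golgi, fold_purification)
```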
The low yield of the Golgi marker enzyme (less than 10%) in fraction f3 is in sharp contrast to the 25-90% recoveries reported by other investigators (28,53). This discrepancy may be related to the use of two separate homogenizations in the Golgi preparation. (The Golgi complex is very sensitive to mechanical rupture.) In studies where preparation of a postnuclear supernatant, and thus the second homogenization, was eliminated, the yield of Golgi membranes increased 2- to 3-fold, with a concomitant 2-fold decrease in the level of ER contamination (Table II, Method B). Recovery of smooth endoplasmic reticulum was decreased by about 50%. (Golgi membranes prepared by Method B were indistinguishable from the Golgi material isolated by Method A in terms of the assays described below.) Pre-protein Processing Activities in Subfractions of Smooth Membranes-Each of the major smooth microsomal subfractions was examined for its ability to process nascent secretory proteins in reconstituted ascites tumor cell-free lysates. In this system, term and first trimester human placental RNA directs the synthesis of the pre-forms of hPL and hCGa, respectively (Figs. 2 and 3 (2,46)). The co-translational addition of total ascites microsomes results in the cleavage of pre-hPL to hPL (Fig. 2, lane 3 (46)). In the case of hCGa, the presegment is removed, and two mannose-rich oligosaccharide precursors are attached to the peptide chain when translation is performed in the presence of ascites microsomes (2). This results in a product with decreased electrophoretic mobility (Fig. 3, lane 2 (37)).
Messenger RNA-dependent protein synthesis was reduced dramatically when Golgi membranes were included in the translation mixture. The inhibition was rapid and complete even in the presence of membrane concentrations of less than 0.1 mg of protein/ml. (Translation is inhibited 20-30% in the presence of 1 mg/ml of rat liver smooth membranes.) This Golgi-dependent inhibition of protein synthesis is prevented by the addition of a neutral ribonuclease (RNase) inhibitor prepared from human placenta (5-7). Addition of the RNase inhibitor to cell-free translations had no effect on the ability of microsomes to process nascent secretory proteins (Figs. 2 and 3, lanes 3 and 2, respectively).
In the presence of SER-containing fractions, i.e. fractions f1, f2, f4, and f5, pre-hPL was converted to a protein that comigrates with authentic hPL (Fig. 2). These same membrane fractions also cleaved and glycosylated pre-hCGa (Fig. 3). The amounts of membranes added were in the linear range of co-translational activity for each of the fractions. The highly enriched Golgi membranes of fraction f3 did not process either pre-protein, even at concentrations 2-fold higher than f5 (data not shown). The small amount of pre-protein processing observed with the crude Golgi fraction (fraction f1) is presumably due to the presence of contaminating SER fragments. There was no apparent inhibitor in the Golgi fraction since addition of f3 (1 mg/ml) to a reaction mixture containing f5 did not affect the co-translational processing of hCGa (data not shown).

TABLE II. Comparative yields of Golgi and SER membranes prepared by either double (Method A) or single (Method B) homogenization. Experimental details were as described under "Experimental Procedures." Data are expressed as units of enzyme activity per g of liver. For glucose-6-phosphatase, 1 unit corresponds to 1 µmol of phosphate released per min at 37 °C; for galactosyltransferase, 1 unit corresponds to 1 nmol of galactose transferred to N-acetylgalactosamine per min at 31 °C; protein is in milligrams. GA, Golgi membranes; SER, smooth endoplasmic reticulum.

Fig. 3 legend: Conditions for the assay were the same as described in the legend to Fig. 2, except that the translation products were immunoprecipitated with hCGa subunit-specific antisera. Mobilities of the cleaved and glycosylated subunit ((CHO)-hCGa) and the precursor (pre-hCGa) are shown in the left-hand margin. Other abbreviations are as described in Fig. 2. Equal amounts of radioactivity (25,000 cpm) were applied to each lane.
It is interesting that membranes of fraction f6 also do not process nascent secretory proteins. These membranes, however, are of unknown origin, and the reason for this absence of pre-protein processing remains unclear.
Processing of nascent secretory proteins is coupled to their sequestration within the lumen of microsomal vesicles (4, 8).
While the vectorial transport of the nascent chain is apparently necessary for the processing events, it has recently been shown (20) that cleavage of the pre-peptide is not required for sequestration. To determine if Golgi membranes will translocate nascent secretory proteins, the pre-hCGa that accumulated in the presence of these membranes was tested for susceptibility to trypsin. Protection of the completed protein from proteolysis is indicative of sequestration within the membrane vesicles. As shown in Fig. 4, Golgi membranes (lane 7) do not protect pre-hCGa from proteolysis. However, those membranes that process hCGa also protect the processed form from trypsin digestion (Fig. 4, lanes 5, 6, 8, and 9). Thus, translocation of nascent secretory proteins does not occur in Golgi membranes. The data do not exclude the possibility that the activities responsible for proteolytic removal of the pre-peptide or attachment of the mannose-rich unit of nascent secretory glycoproteins are present in Golgi membranes.
In Vitro Processing of the Asparagine-linked Mannose-rich Oligosaccharide Precursor by Golgi Membranes-The biosynthesis of these glycoproteins is initiated by the transfer of a mannose-rich oligosaccharide precursor to the nascent polypeptide. The maturation of the oligosaccharide occurs by the stepwise removal of monosaccharides to yield a core structure common to all complex type oligosaccharides (22,40,49). The terminal sugars N-acetylglucosamine, galactose, and sialic acid are subsequently added by specific glycosyltransferases located within the Golgi apparatus (43,44). Our goal in the following experiments was to determine 1) if the Golgi membranes were capable of processing the oligosaccharide precursor and adding terminal sugars to the core structure, and 2) if the compartmentation of co- and post-translational modifications of the nascent secretory proteins could be demonstrated in SER and Golgi membranes, respectively. In the course of answering these questions we have developed a cell-free system capable of forming complex type oligosaccharides from the high mannose nascent oligosaccharide.

Fig. 4 legend (fragment): ... µl of reaction mixture) after 60 min of incubation, and the incubation was continued an additional 30 min. Soybean trypsin inhibitor (15 µg/assay) was added, and the labeled proteins were immunoprecipitated. Abbreviations are as described in Fig. 2 and Fig. 3. Equal amounts of protein were electrophoresed.

Direct structural analysis of in vitro-synthesized glycoprotein substrates is complicated by the relatively low amounts of material available for analysis. We have used the enzyme Endo-β-N-acetylglucosaminidase H (Endo H) as an analytical probe for following the processing of asparagine-linked oligosaccharides. While Endo H will cleave the mannose-rich oligosaccharide precursor of newly synthesized glycoproteins, it will not attack oligosaccharides containing terminal sugars (50-52). Oligosaccharides which are intermediates in processing exhibit intermediate sensitivities (40,50,52) to Endo H.
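The decision rule behind this probe can be stated compactly; the toy sketch below merely encodes the sensitivity pattern described above and is not part of the original work.

```python
# Toy encoding of the Endo H decision rule used as an analytical probe:
# high-mannose chains are cleaved, chains carrying terminal sugars are not,
# and partially processed intermediates show intermediate sensitivity.

def endo_h_sensitivity(oligosaccharide_type):
    if oligosaccharide_type == "high-mannose":
        return "sensitive (cleaved; faster gel mobility after digestion)"
    if oligosaccharide_type == "complex":
        return "resistant (terminal GlcNAc/Gal/sialic acid block cleavage)"
    if oligosaccharide_type == "processing intermediate":
        return "intermediate sensitivity"
    raise ValueError("unknown oligosaccharide type")

for glycan in ("high-mannose", "processing intermediate", "complex"):
    print(glycan, "->", endo_h_sensitivity(glycan))
```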
First trimester RNA was translated in the presence of SER (fraction f5) from rat liver, and the membrane vesicles, containing the cleaved and glycosylated form of hCGa, were isolated by centrifugation. The pellets were resuspended in the presence of Golgi or SER membranes, Triton X-100, and nucleotide sugars. The products were immunoprecipitated and digested with Endo H. Endo H sensitivity is reflected by an increased electrophoretic mobility due to the loss of carbohydrate (2,3).
After incubation in the presence of membranes derived from the Golgi complex, hCGa is converted to an Endo H-resistant form (Fig. 5, lane 2). hCGa incubated in the absence of additional membranes (Fig. 5, lane 4), or in the presence of equivalent amounts of SER (lane 6), remained sensitive to Endo H. These results show that oligosaccharide processing and addition of terminal sugars occurs in the presence of Golgi membranes, but not SER, and provide further evidence regarding the subcellular origin of these fractions. UDP-GlcNAc was specifically required for conversion of hCGa to the Endo H-resistant form (Fig. 6, lanes 4 and 5), providing strong evidence that the in vitro processing and addition of terminal sugars occurs in accordance with the proposed in vivo pathway (see Ref. 25). The processing observed in the presence of both UDP-GlcNAc and UDP-Gal was more efficient than that observed when UDP-GlcNAc alone was present. No Endo H-resistant hCGa was observed in the presence of UDP-Gal alone (lane 6). The in vitro processing of oligosaccharide required the presence of Triton X-100 (Fig. 7, compare lanes 2 and 4). It has been previously shown (3) that the asparagine-linked oligosaccharide precursor remains essentially intact while contained within the ER vesicle (the glucose residues are presumably removed in the ER, see under "Discussion"). Further processing of the mannose-rich oligosaccharide required exposure to another cellular component. Thus, the absolute requirement for detergent in our assay probably reflects the compartmentation of the co-translational (addition of the oligosaccharide precursor) and post-translational (processing and terminal sugar addition) events in the ER and Golgi complex, respectively.

Figure legend (fragment): Post-translational processing of asparagine-linked oligosaccharides in vitro. Co-translational reaction mixtures containing first trimester placental RNA and 150 µg of rat liver SER were centrifuged at 135,000 × g for 15 min to pellet the membrane vesicles. The pellets were resuspended in post-translational assay media containing 0.8% Triton X-100, 2 mM UDP-GlcNAc, 2 mM UDP-Gal, and 100 mM PIPES-KOH (pH 6.5). These reactions were incubated for 3 h at 37 °C in the absence (lanes 4, 5) or presence of 100 µg of either Golgi (GA; lanes 1, 2) or SER (lanes 5, 6) membranes. The reactions were supplemented with Triton and deoxycholate to final concentrations of 1% each and then centrifuged to remove ribosomes. Immunoprecipitation of the supernatant was carried out as described (14). The immunoprecipitates were dissolved in acetate buffer (pH 5.9) and incubated overnight in the presence or absence of Endo H. The products were precipitated with trichloroacetic acid and analyzed by SDS-polyacrylamide gel electrophoresis. Approximately 10,000 cpm of [35S]methionine was loaded in each lane. All other abbreviations are as previously described (Fig. 5). Each lane contains approximately 5,000 cpm of [35S]methionine.
DISCUSSION
Co-translational Processing of Nascent Secretory Proteins-At least four activities are known to be associated with the early stages of secretion. These are: 1) binding of nascent chains to sites in the ER membrane; 2) movement of the growing polypeptide across the ER membrane; 3) proteolytic removal of the NH2-terminal signal or pre-peptide; and 4) core glycosylation of nascent chains of secretory glycoproteins.
In a previous report we showed that three of these activities were present in smooth microsomes of ascites tumor cells, rat liver, and adrenal cortex. Nascent pre-peptides of hPL and hCGa were cleaved at the appropriate peptide bond, sequestered, and in the case of hCGa, glycosylated by these fractions. The fourth activity, binding of nascent chains to the membrane, is implied by the presence of the other three. These results strongly suggested that SER and RER cannot be distinguished on the basis of these activities.
However, in the absence of further subcellular fractionation we could not state unequivocally that the pre-protein processing observed in smooth microsomes was limited to the ER. In this study we have subfractionated smooth microsomes from rat liver into a Golgi-rich fraction and a smooth ER-containing fraction. The Golgi membranes are characterized by a high specific activity for galactosyltransferase and a low activity of glucose-6-phosphatase, and are not active in pre-protein processing or sequestration. The SER-containing fraction, on the other hand, exhibits much higher levels of glucose-6-phosphatase activity, very little galactosyltransferase activity, and is very efficient in the proteolytic processing, core glycosylation, and sequestration of nascent chains of hPL and hCGa. These data provide strong evidence that the pre-protein processing previously observed in smooth microsomes can be attributed solely to the presence of SER.
The inability of membranes derived from the Golgi complex to translocate and sequester nascent pre-forms of placental peptide hormones illustrates an important feature of the subcellular organization of eukaryotic cells and of the secretory process. The organellar components involved in the recognition and transport of nascent secretory proteins are restricted to the endoplasmic reticulum. This ensures that entry of these proteins into the secretory pathway can occur only in the ER and thus establishes the polarity necessary for the ordered sequence of co-translational and post-translational events involved in the production of secretory proteins.
Post-translational Processing: Golgi-dependent Processing of Asparagine-linked Oligosaccharides-A great deal of new information has accumulated over the past few years regarding the biosynthesis of asparagine-linked, complex type oligosaccharides. The major processing intermediates have been identified, and a scheme for the sequence of events in oligosaccharide processing has been proposed (Fig. 8).
A high molecular weight oligosaccharide with the composition Glc3Man9GlcNAc2 is assembled in the ER while attached to a dolichol lipid carrier (29,40,54). The oligosaccharide is a branched structure with the three glucose residues arranged in a linear sequence at the nonreducing end of one branch (30,31).
Within minutes of transfer of this intermediate to the nascent chains, processing of the asparagine-linked oligosaccharide (see Fig. 8) begins with the removal of the glucose residues (22,40,49) by glucosidases located within the lumen of the ER (18,45). This intermediate is then converted to a Man5GlcNAc2 form (Fig. 8, structure III), which involves the sequential removal of four mannose residues by an α1,2-specific mannosidase localized in the Golgi complex (49). Further mannose removal is dependent upon the GlcNAc-transferase-I-catalyzed addition of a GlcNAc residue as shown in Fig. 8 (structure IV) (19,47). Subsequent to this step, additional trimming occurs (49) until the intermediate GlcNAc2Man3GlcNAc2-Asn is converted to a complex oligosaccharide by the addition of galactose and sialic acid residues by specific glycosyl transferases located within the Golgi apparatus (43).
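As a purely illustrative aid (not part of the original study), the sequence of trimming and addition steps named above can be tracked as composition changes; the residue counts below are simplified and follow only the steps stated in the text.

```python
# Schematic tracker for asparagine-linked oligosaccharide maturation,
# following only the steps named in the text (compositions are simplified).

from collections import Counter

glycan = Counter({"Glc": 3, "Man": 9, "GlcNAc": 2})   # Glc3Man9GlcNAc2, transferred in the ER

def remove(glycan, sugar, n):
    assert glycan[sugar] >= n, "cannot trim more residues than are present"
    glycan[sugar] -= n

def add(glycan, sugar, n):
    glycan[sugar] += n

remove(glycan, "Glc", 3)    # ER glucosidases
remove(glycan, "Man", 4)    # Golgi alpha-1,2-mannosidase -> Man5GlcNAc2
add(glycan, "GlcNAc", 1)    # GlcNAc-transferase I
remove(glycan, "Man", 2)    # further trimming to the Man3 core
add(glycan, "GlcNAc", 1)    # additional branch GlcNAc
add(glycan, "Gal", 2)       # galactosyltransferase (Golgi)
add(glycan, "NeuAc", 2)     # sialyltransferase (Golgi)

print(dict(glycan))         # complex-type composition built on the Man3GlcNAc2 core
```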
We have previously demonstrated (2) that, in membrane-supplemented ascites tumor cell-free extracts, first trimester placental mRNA directs the synthesis of a glycosylated form of hCGa. Based on the ability to specifically bind concanavalin A and the sensitivity of the glycoprotein to Endo-β-N-acetylglucosaminidases H and CII, and to α-mannosidase, it was concluded that the protein contained the mannose-rich oligosaccharide precursor shown in Fig. 8 (structure II). The glycoprotein was not sensitive to glucosidase digestion. We also showed that the oligosaccharide could undergo post-translational processing by an α-mannosidase activity present in the ascites S-100 (3). Mannose removal required the presence of 0.04% Triton X-100, suggesting that the α-mannosidase activity is localized in some subcellular component other than the ER. The oligosaccharide processing was not associated with the addition of terminal sugars. The data presented here clearly show that the glycosylated forms of hCGa are resistant to Endo H after incubation in the presence of Golgi membranes and nucleotide sugars. Endo H resistance was not conferred by ER membranes under identical conditions. These results are significant in at least two respects. First, the synthesis of secretory glycoproteins can be reconstituted in vitro in a manner that appears to parallel in vivo synthesis. Secondly, the data provide confirmatory evidence regarding the subcellular origin of the fractions isolated and used to examine both the co-translational and post-translational events of glycoprotein biosynthesis.
While the exact composition and structure of the various oligosaccharide intermediates of the in vitro processing were not determined, it seems likely that the same intermediates that have been observed in vivo were generated during in vitro processing. This conclusion is based on the following lines of evidence.
1) In vitro oligosaccharide processing and terminal sugar addition was demonstrable only when membranes derived from the Golgi complex were present. Oligosaccharide processing in vivo also occurs in the Golgi complex (23,35).
2) UDP-GlcNAc is specifically required for conversion of the oligosaccharide intermediate to an Endo H-resistant form.
Endo H resistance was enhanced when both UDP-GlcNAc and UDP-galactose were present.

Footnote: M. Bielinska and I. Boime, unpublished observations.

Fig. 8 legend (fragment): ... transferred en bloc from a dolichol lipid carrier to appropriate asparagine residues on the nascent polypeptide chain. Oligosaccharide transfer and removal of glucose residues (conversion of structure I to ...). G, glucose; M, mannose; Gal, galactose; GlcNAc, N-acetylglucosamine; NA, sialic acid; Asn, asparagine residue in the polypeptide. (Adapted from Kornfeld et al. (25).)
Fries and Rothman (17) have recently described the in vitro processing and terminal sugar addition of the asparagine-linked oligosaccharides attached to the membrane glycoprotein (G protein) of vesicular stomatitis virus. They found that cell-free processing required energy in the form of ATP. This requirement was attributed to a need for energy in the transport of G protein to the Golgi complex, rather than in the actual processing reactions, since the enzymes involved in oligosaccharide processing are not energy dependent (49).
Transport of glycoprotein in vivo has previously been shown to be energy dependent (38).
The in vitro processing described here does not require ATP. However, there was an absolute requirement for detergent. Thus, while it was not possible to demonstrate transport with our assay, the compartmentation of co-translational and post-translational processing events that occur in distinct membrane preparations is evident.
The experiments described here have used glycoproteins as substrates for oligosaccharide processing and terminal sugar addition. Other in vitro studies of oligosaccharide maturation have used free oligosaccharides or glycopeptides. Thus, while the presence of a protein backbone can affect the maturation of an otherwise identical oligosaccharide precursor (41,55), the extent of these effects is not known. The in vitro assay for oligosaccharide processing described here provides a unique system well suited to the study of these problems. | 5,639.2 | 1982-04-25T00:00:00.000 | [
"Biology",
"Computer Science",
"Chemistry"
] |
SxtA and sxtG Gene Expression and Toxin Production in the Mediterranean Alexandrium minutum (Dinophyceae)
The dinoflagellate Alexandrium minutum is known for the production of potent neurotoxins affecting the health of human seafood consumers via paralytic shellfish poisoning (PSP). The aim of this study was to investigate the relationship between the toxin content and the expression level of the genes involved in paralytic shellfish toxin (PST) production. The algal cultures were grown both in standard f/2 medium and under phosphorus/nitrogen limitation. In our study, LC-HRMS analyses of PST profile and content in different Mediterranean A. minutum strains confirmed that this species synthesizes mainly the saxitoxin analogues Gonyautoxin-1 (GTX1) and Gonyautoxin-4 (GTX4). The average cellular toxin content varied among strains and between growth phases, with a decreasing trend from the exponential to the stationary phase in all culture conditions tested. The absolute quantities of intracellular sxtA1 and sxtG mRNA were not correlated with the amount of intracellular toxins in the analysed A. minutum strains, suggesting that toxin production may be regulated by post-transcriptional mechanisms and/or by the concerted action of other genes belonging to the PST biosynthesis gene cluster. Therefore, sxtA1 and sxtG gene expression likely does not reflect PST accumulation in Mediterranean A. minutum populations under the standard and nutrient-limiting conditions examined.
the STX-synthesis in cyanobacteria. This gene has a polyketide synthase (PKS)-like structure characterized by four catalytic domains with predicted activities of a S-adenosyl-methionine-(SAM) dependent methyltransferase (sxtA1), GCN5-related N-acetyltransferase (sxtA2), acyl carrier protein (sxtA3) and a class II aminotransferase (sxtA4) [19]. In both eukaryote and prokaryote organisms, STX appears to be synthesized by similar processes; in fact, incorporation patterns of precursors (as arginine, acetate and methionine) and toxin stereochemistry are identical in both cyanobacteria and dinoflagellates [25][26][27].
In recent years, A. minutum has been studied for identifying genes and expression patterns involved in critical pathways, such as those for toxin production [28]. To identify sxt genes from two STX-producing Alexandrium species, A. minutum and A. fundyense, different molecular approaches were applied using high-throughput sequencing technology of a large number of transcripts, in silico transcriptome analyses, rapid amplification of cDNA ends (RACE), qPCR and conventional PCR coupled with Sanger sequencing. These multiple approaches successfully identified the genes required for STX-synthesis in dinoflagellates. This demonstrated that STX-synthesis is to be ascribed to dinoflagellates and not to co-cultured bacteria as previously hypothesized [29]. The Alexandrium spp. transcripts of the sxtA gene have the same domain structure as those from cyanobacterial homologs, but the dinoflagellate transcripts are monocistronic; they occur in multiple copies and contain typical dinoflagellate spliced-leader sequences. Furthermore, investigation of STX-producing and non-producing dinoflagellate strains of six species showed the presence of the sxtA gene and STX-synthesis, with exception of four A. tamarense strains for which sxtA was amplified without evidence of STX or derivatives [29].
Additionally, in the cyanobacteria, the product of polyketide synthase is the substrate for the amidinotransferase, encoded by the gene sxtG, which is proposed to incorporate an amidino group from an arginine molecule into the STX intermediate [30]. Recently, the characterization of the second core gene of the STX pathway in dinoflagellates, sxtG, was performed [31].
The aim of this study was to investigate the relationship between toxin content and expression level of the sxtA1 and sxtG genes in the Mediterranean A. minutum.
It is known that the production of toxins in some Alexandrium spp. can be influenced by nutritional conditions. In particular, low levels of nitrate cause the decrease of toxicity, while low levels of phosphorus increase it [32][33][34][35]. Therefore, we conducted experiments also in conditions of nutrient depletion to check how these conditions could affect the toxins produced in A. minutum isolated from the Mediterranean Sea and if these nutritional factors could affect the regulation of sxtA and sxtG gene expression.
Toxin Content in Standard Condition
Liquid chromatography coupled with high resolution mass spectrometry (LC-HRMS) was used to check the presence of all the major STX derivatives. The HILIC-MS/MS method for PSP toxins developed by Dell'Aversano et al. on a triple quadrupole MS [36] was slightly modified to make it suitable for HRMS detection. All the analyzed strains were found to produce only GTX1 and GTX4, which differ from each other only in stereochemistry at one chiral center. They were produced at higher levels in the exponential phase than in the stationary phase. The intracellular content of toxins varied among strains in the two growth phases (p < 0.05). The CBA57 strain was the most productive (GTX1 + GTX4 = 3.45 fmol cell−1 in the exponential phase and 1.77 fmol cell−1 in the stationary phase), while the CBA53 strain was the least productive (GTX1 = 0.04 fmol cell−1 in the exponential phase and 0.01 fmol cell−1 in the stationary phase). In the AMIB5 and CBA57 strains, the toxin content declined significantly during cell growth from the exponential to the stationary phase (p < 0.05). A decrease of GTX content from the exponential to the stationary phase was also observed in the other two strains, although it was not statistically significant (Figure 1). In almost all the analyzed samples, the GTX4 concentration was higher than that of GTX1, with the exception of the CBA53 strain, which produced only GTX1. In particular, in the exponential phase the content of GTX4 was three times higher than GTX1 in the AMIB5 and AMI2OL strains, and 4.5 times higher in the CBA57 strain. In the stationary phase, only GTX4 declined significantly in the CBA57 strain, and the same trend was observed in the other strains. In Alexandrium strains, the composition of toxins is related to the phenotypic trait, but the amounts are variable among strains. Members of the A. minutum group (as well as A. ibericum, A. lusitanicum, A. angustitabulatum) produce primarily gonyautoxins, such as GTX1 and GTX4 [14,37]. Average cellular toxin content of toxigenic Alexandrium isolates varies considerably (up to an order of magnitude) among different growth phases and environmental regimes in batch cultures [38,39]. Moreover, the toxin content of Alexandrium strains isolated from the same geographical area can be extremely variable (from undetectable levels to >100 fmol cell−1 of toxins) [36]. The GTX toxin content and composition of the Mediterranean A. minutum strains used in this study were in agreement with previous observations [40,41].
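Cellular toxin quotas of this kind are obtained by dividing the toxin amount quantified against standards by the number of cells extracted; the sketch below shows that arithmetic with made-up values (the molar mass is approximate and the other numbers are illustrative, not data from the study).

```python
# Minimal sketch of converting an LC-HRMS quantification into a cellular toxin quota.
# All numbers below are illustrative, not data from the study.

def toxin_per_cell_fmol(toxin_ng_per_ml, extract_volume_ml, molar_mass_g_per_mol, n_cells):
    total_ng = toxin_ng_per_ml * extract_volume_ml
    total_fmol = total_ng * 1e-9 / molar_mass_g_per_mol * 1e15   # ng -> g -> mol -> fmol
    return total_fmol / n_cells

# Example: a toxin quantified at 60 ng/mL in a 1 mL acetic-acid extract of 5e6 cells
gtx4_molar_mass = 411.4   # g/mol, approximate
print(round(toxin_per_cell_fmol(60, 1.0, gtx4_molar_mass, 5e6), 3), "fmol cell-1")
```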
The sxtA1 and sxtG Gene Expression in Standard Condition
The use of endogenous housekeeping genes (actin or 5.8S rRNA) for relative quantification analyses did not produce reliable results due to the high expression variability of these genes between the two growth phases. Therefore, an absolute quantification approach was adopted using standard curves constructed with scalar dilutions of the sxtA1 and sxtG PCR products. A fixed amount of human RNA was spiked in each reverse transcription reaction and human β2M gene transcript was amplified in each cDNA sample to control the efficiency of reverse transcription and to indicate possible inhibitory effects during the synthesis of the cDNA. Moreover, the β2M was also used as an exogenous housekeeping gene for relative quantification. Using this exogenous reference, the relative quantification data showed the same trend obtained with absolute quantification (data not shown). The sxtA1 and sxtG standard curves showed efficiency of 98% and 99% and good linear regression (R 2 = 0.99) (Supplementary Figure S1).
The number of sxtA1 and sxtG transcripts was calculated by plotting the Ct of each sample on the standard curve. The data were normalized per µg of total RNA (Figure 2). In the exponential phase, the CBA53 strain showed a sxtA1 mRNA content significantly higher than that of the other strains (p < 0.05). Although a reduction of sxtA1 expression in the stationary phase compared to the exponential phase was evident for all strains, a statistically significant difference was observed only for strain CBA53 (p < 0.05) (Figure 2A).
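Absolute quantification of this kind amounts to inverting the linear fit of Ct against log10(copy number) and normalizing to the RNA input; the sketch below uses illustrative slope and intercept values, not the study's fitted curves.

```python
# Sketch of absolute qPCR quantification from a standard curve
# (slope and intercept values are illustrative, not the study's fits).

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def copies_per_ug_rna(ct, slope, intercept, rna_input_ug):
    return copies_from_ct(ct, slope, intercept) / rna_input_ug

slope, intercept = -3.35, 38.0   # illustrative fit; a slope near -3.32 implies ~100% efficiency
print(round(copies_per_ug_rna(ct=27.5, slope=slope, intercept=intercept, rna_input_ug=0.9)))
```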
Curiously, sxtG was not detected in the most toxic strain, CBA57, or in the CBA53 strain. This finding was also confirmed by experiments with an internal positive control (see 3.4 in the Experimental Section), demonstrating that the negative result was not an artifact due to inhibition of the PCR reactions. Therefore, the sxtG gene expression analyses were performed on the AMI2OL and AMIB5 strains only. No significant differences in sxtG transcript abundance were observed between the two strains in either growth phase, but a significant difference was found between the two growth phases in the AMI2OL strain (p < 0.05) (Figure 2B). Moreover, sxtG expression did not correlate with sxtA1 gene expression in strains AMI2OL and AMIB5.
Unlike sxtA gene, the presence of sxtG gene is not exclusively specific of the Alexandrium species reported to produce saxitoxins [31]. In fact, it was observed that the sxtG amidinotransferase was present and transcribed in Alexandrium species where the sxtA gene and STX synthesis have not been identified [13]. Therefore, this gene could also be involved in other biochemical pathways or these Alexandrium spp. could have lost the capacity of STX synthesis [31]. In this study, the absence of sxtG gene in CBA57 and CBA53 strains of A. minutum could be related to the presence of a homolog of amidinotransferase that was not amplified by our primers. In fact, a second dinoflagellate amidinotransferase that groups more distantly to sxtG with homologous actinobacterial and cyanobacterial cylindrospermopsin aoaA and cyrA sequences has been identified suggesting that multiple amidinotransferases have been acquired by horizontal gene transfer (HGT) in parallel or separate events during Alexandrium evolution [31].
Each amount of sxtA1 or sxtG mRNA measured in the two growth phases was compared to the amount of toxins (GTX1/4) produced. No significant correlations were found between amount of mRNAs and intracellular toxin content in all the strains. Unexpectedly, the less toxic CBA53 strain showed the highest level of sxtA1 gene expression. Moreover, the mRNA transcripts and toxins content were compared along the different growth phases. In all the strains, even if no significant correlation was found, the mRNA transcripts and toxins showed the highest and lowest contents in the exponential and stationary phases, respectively.
The absence of correlation between the gene expression and toxin content could be due to the fact that the PSP toxin biosynthesis enzymes are long-living enzymes with a slow turn-over and may be regulated by post-translational mechanisms [30]. Moreover, in the cyanobacteria C. raciborskii T3, the saxitoxin biosynthetic pathway is encoded by a gene cluster of more than 35 kb, and comparative sequence analysis assigns 30 catalytic functions to 26 proteins [19]. A cluster of 14 genes, defined as "core" genes (sxtA-sxtI, sxtP-sxtS and sxtU) is common between the STX-pathways of several cyanobacterial genera [11,42]. Eight of these genes (sxtA, sxtB, sxtD, sxtG, sxtH/T, sxtI, sxtS and sxtU) seem to be directly implicated in STX-synthesis [19]. The STX biosynthesis pathway appears conserved between cyanobacteria and dinoflagellates: it involves arginine, SAM synthetase and acetate, with the addition of the methyl group of SAM into the final molecule [43]. It is likely that in A. minutum, saxitoxin and its derivatives are the result of the synergistic action of several enzymes homologous to those of cyanobacteria [13,29]. Of the 14 "core" STX genes, 10 dinoflagellate homologues, or candidate genes, are presently identified (sxtA, sxtB, sxtD, sxtF-I, sxtQ, sxtS and sxtU) [29,44]. However, sequence conservation might be so low that reliable homologue identification is impossible or, if several homologues are indeed missing in the dinoflagellates, alternative dinoflagellate genes could have substituted their functions in the SXT pathway. Alternatively, the STX biosynthetic pathway could have evolved independently in cyanobacteria and dinoflagellates [44]. Moreover, dinoflagellates have large genomes, a considerable number of unknown genes and a high frequency of repeats making genomic studies very hard [27]. Stüken et al. (2011) characterized the sxtA gene showing a comparable domain structure to its cyanobacterial homologue [29]. SxtA encodes a polyketide synthase, the first enzyme in the metabolic pathway, but it is not clear which other genes are involved and the extent of their activity. This fact is to be considered when the amount of intracellular toxins needs to be correlated to gene expression. Also, the subsequent characterization of sxtG, the second "core" gene in the STX pathway [31], may indicate a massive transfer of toxin-related genes from bacteria to dinoflagellates [45]. However, in contrast to cyanobacteria, most of the genes involved in STX-synthesis in dinoflagellates have remained elusive.
The sxtA1 Gene Expression and Toxin Content under Phosphorus Limitation
Strains of CBA57, AMI2OL and CBA53 were grown in phosphorus limitation as described in the Experimental Section. Under this condition, all strains were characterized by a short exponential phase; therefore, all withdrawals were made at the fifth or sixth day (exponential phase) and at the 12th day (stationary phase) (Supplementary Figure S2). The concentrations of dissolved inorganic phosphorus varied from 0.16 ± 0.1 μM (day 1) to 0.13 ± 0.04 μM (day 12) (Supplementary Table S1). The daily decrease in dissolved phosphorus is extremely low compared to the variation of the standard condition. This could be due to the fact that the algal biomass that develops in these limiting conditions is much lower compared to the standard conditions. In the phosphorus limitation, the toxin content tended to decrease from exponential to stationary phase confirming the same trend of the standard conditions ( Figure 3A). However, this decrease was not significant with the exception of AMI2OL strain (p < 0.05).
Under phosphorus limitation, the sxtA1 gene expression decreased significantly between the exponential and stationary phases only in the CBA53 strain. No significant variation of sxtA1 gene expression was observed in the AMI2OL strain (Figure 3B). Furthermore, in the CBA57 strain, which expressed the lowest sxtA1 copy number under the standard condition, the expression of the sxtA1 gene was undetectable, in contrast to its higher GTX1/4 content relative to the other A. minutum strains.
The toxin content and mRNA levels at each stage of growth for all strains were compared with the values obtained in the standard growth condition. No significant differences of GTX1/4 content were observed between standard and phosphorous depletion for strains CBA53 and AMI2OL, while the CBA57 strain showed a significant toxin content decrease (p < 0.05) in phosphorous limitation both in exponential and stationary phases. It is noteworthy that in phosphorous depletion, GTX2 and GTX3 were detected in trace amounts in the CBA57 strain only (data not shown). As for the sxtA1 gene expression, a significant decrease of the mRNAs was found in CBA53 strain compared to the standard nutritional condition. In particular, in the CBA57 strain, which expressed the lowest sxtA1 copy number under the standard condition, the expression of sxtA1 gene was undetectable along with an evident decreasing of toxins. Instead, in the AMI2OL strain, the sxtA1 gene expression did not vary much in both the exponential and stationary phases compared to the standard growth conditions. However, the Spearman's correlation analysis confirmed the independence of gene expression and GTX1/4 intracellular content for each strain. Some authors reported accumulation of intracellular PST in phosphorus limiting conditions [33][34][35]46,47]. However, in these studies, almost all the A. minutum strains were not from Mediterranean areas with the exception of the A. minutum A5 strain. This strain in the phosphorous limitation did not show significant effect on toxin production [47]. In that study, a rapid and substantial increase in PST levels was observed in A. minutum strains in the presence of waterborne grazers, suggesting that secondary metabolism of dinoflagellates is not only dependent on resource availability, but also on the predation pressure. In fact, it was also suggested that PSTs may be produced in phosphate-limited conditions in order to redirect the grazing pressure toward alternative non-toxic competitors [32]. In the nutritional conditions of our study, the strains were not subjected to pressure from predators. Furthermore, since phosphorus is involved in the energetic metabolism and in the regulation of intracellular functions, its deficiency in the medium should negatively affect the nucleotide synthesis, as well as the energy reserves of the cell. As a consequence, it is logical to suppose that cell energy is likely employed for the maintenance of basic and essential cellular functions [35]. Hence, the activation of energetically costly pathways, such as that used for saxitoxin synthesis, would not be activated if not necessary. This could explain the down-regulation of the sxtA1 gene in our Mediterranean A. minutum strains during phosphorus depletion treatment. The different behaviour of the A. minutum AMI2OL could be due to the biological/genetic variability of the strain [48], and this has to be further investigated by using a higher number of Mediterranean A. minutum strains.
The sxtA1 Gene Expression and Toxin Content under Nitrogen Limitation
The sxtA1 gene expression under nitrogen limitation was tested on the CBA57 and AMIB5 strains. As observed for A. minutum strains grown in phosphorus limitation, the growth was characterized by a short exponential phase; therefore, sample withdrawals were made at the fifth or sixth day (exponential phase) and the 12th day (stationary phase) (Supplementary Figure S2). The concentration of total dissolved nitrogen varied from 77.5 ± 6.5 μM at the inoculation time to 0.73 ± 0.1 μM at day 12 (Supplementary Table S1).
Both strains produced similar toxin amounts in exponential and stationary phases with no significant differences ( Figure 4A). For the CBA57 strain, a significant reduction in toxin intracellular content was found in both exponential and stationary phases with respect to the standard conditions (p < 0.05), while in the AMIB5 strain this difference was not significant. In the CBA57 strain, the sxtA1 gene was constantly expressed under nitrogen limitation, either in the exponential and stationary phase ( Figure 4B). In the AMIB5 strain, the sxtA1 expression was strongly down regulated in the stationary phase. With respect to the standard conditions, expression of sxtA1 in strain CBA57 did not change significantly, while it was strongly down regulated in the AMIB5 strain (p < 0.05).
The effect of nitrogen limitation in toxin production by the CBA57 strain was consistent with previous studies [35,47]. This effect is reasonable because toxins are nitrogen-rich molecules, which might be synthesized as a by-product of amino acids or nitrogen excess. In fact, in nitrogen limitation conditions, the intra-cellular pools of nitrogen would mainly be allocated to the production of nucleotides and amino acids in order to maintain basic and essential cellular functions, while the activation of nitrogen demanding metabolic pathways, such as PST biosynthesis, would not be favoured [46]. On the other hand, the fact that intracellular content of GTX1/4 did not change significantly with respect to the standard conditions in AMIB5 may be due to low amounts of toxin production. In this case, the amount of nitrogen in the medium may be sufficient to maintain the toxin levels observed in the standard conditions. Figure 4. Intracellular toxin content in the Mediterranean A. minutum strains during the exponential and stationary growth phases (means ± SD, n = 3) (A) and sxtA1 gene expression as mRNA copy per µg RNA −1 (B) under nitrogen limitation (means ± SD, n = 3). In the A. minutum AMIB5 strain, the sxtA1 expression was not detected in the stationary phase.
Strain Cultures
Alexandrium minutum monoclonal strains used in this study were isolated from surface seawater in the Mediterranean Sea. The isolation sites in the Ionian Sea and Tyrrhenian Sea are closed areas characterized by freshwater inputs; the coastal north-western Adriatic Sea is strongly influenced by the Po river input, which determines the trophic conditions of its seawaters [49,50]. Cultures were maintained in 2 L bottles, using standard conditions of f/2 medium minus silicate [51], at 21.5 ± 1 °C, light irradiance of 100 µmol m−2 s−1 and a 12:12 h light-dark cycle. In addition to these standard conditions, the CBA53 and AMI2OL strains were also cultured under phosphorus starvation, the AMIB5 strain under nitrogen starvation, and the CBA57 strain under both phosphorus and nitrogen starvation. The nutrient concentrations and growth conditions are summarized in Table S1. Every two days each culture was assessed for cell concentration and growth rate by the Utermöhl method [52] using an inverted microscope (Axiovert 40 CFL, Zeiss, Göttingen, Germany).
The stock cultures to be further inoculated in batch cultures were grown with antibiotics (50 μg mL −1 ampicillin, 33 μg mL −1 gentamicin, 10 μg mL −1 ciprofloxacin, 1.13 μg mL −1 chloramphenicol and 0.025 μg mL −1 streptomycin sulphate) using sterile handling techniques to minimize bacterial influence, as described in [40]. The antibiotic treatment was stopped at inoculation of the cultures while always maintaining aseptic handling techniques to avoid any bias introduced by this handling. The cells were analysed in the exponential (growth rate 0.21 ± 0.06 µ day −1 ) and stationary (growth rate 0.03 ± 0.01 µ day −1 ) growth phases (Supplementary Figure S3). The duration of culture experiments was between 12 and 31 days, depending on the growth rates. Cells were harvested by filtration using 3 μm pore-sized filters, rinsed with sterilised seawater and centrifuged 10 min at 1200× g. The pellets were immediately stored at −80 °C until total RNA extraction or PSP toxin analysis. The eluates were stored at −20 °C and used for the nutrient analyses.
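Growth rates such as the µ values quoted above are conventionally obtained from successive cell counts; the short sketch below shows that calculation with illustrative counts, not data from the study.

```python
# Sketch of estimating the specific growth rate (mu, day^-1) from two cell counts,
# as is standard for batch cultures counted by the Utermoehl method.
# The counts below are illustrative, not data from the study.

import math

def specific_growth_rate(n0, n1, days):
    return math.log(n1 / n0) / days

mu_exponential = specific_growth_rate(n0=2.0e3, n1=3.1e3, days=2)   # cells/mL, 2 days apart
print(round(mu_exponential, 2), "day^-1")                           # ~0.22, close to the 0.21 reported
```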
Pellet Extraction
Pellets of the four different strains of A. minutum (CBA53, CBA57, AMIB5 and AMI2OL) were extracted separately. Each pellet was suspended in 1 mL of aqueous 0.1 M acetic acid and sonicated for 10 min in pulse mode, while cooling continuously in an ice bath. The mixture was centrifuged at 4835× g for 10 min and the supernatant was decanted so as to obtain 1 mL of extract that was directly analysed by LC-HRMS.
LC-HRMS
All LC-HRMS analyses were performed on an Agilent 1100 LC binary system (Palo Alto, CA, USA) which included a solvent reservoir, in-line degasser, binary pump and refrigerated autosampler coupled to a hybrid linear ion trap LTQ Orbitrap XL™ Fourier transform mass spectrometer (FTMS), equipped with an ESI ION MAX™ source (Thermo-Fisher, San Josè , CA, USA).
LC-HRMS analyses were performed in positive ion mode, in collision-induced dissociation (CID) MS2 experiments, using the following source settings: spray voltage = 4.2 kV, capillary temperature = 440 °C, capillary voltage = 29 V, sheath gas = 35 and auxiliary gas = 10 (arbitrary units), tube lens voltage = 70 V. In all experiments, a resolving power of 30,000 was used. Full scan (FS) spectra were collected in the mass range m/z 200-500; MS/MS spectra were acquired using the parameters reported in Table 1, with an activation Q of 0.250 and an activation time of 30 ms. Extracted ion chromatograms (XIC) were obtained from MS/MS spectra by selecting the fragment ions reported in Table 1 and used for quantification against PSP toxin standards. Table 1. Precursor ion, formula, mass range, and collision energy (%) used in the chemical analyses, and limit of detection (LOD) and quantification (LOQ) measured. Calculation of elemental formulae was performed by using the mono-isotopic ion peak of each ion cluster. A mass tolerance of 5 ppm was used and the isotopic pattern of each ion cluster was considered. For each PSP toxin, the limit of detection (LOD) was measured and corresponded to the lowest concentration level that can be determined at 3 < S/N ratio < 10. The limit of quantification (LOQ) was measured and corresponded to the lowest concentration level that can be determined at S/N ratio > 10 (Table 1).
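Ion assignment at a 5 ppm tolerance reduces to a relative-error test on measured versus theoretical m/z; the sketch below illustrates the check with example values only (the m/z figures are hypothetical).

```python
# Sketch of the 5 ppm mass-tolerance test used when assigning ion formulae
# from high-resolution spectra (the m/z values here are examples only).

def ppm_error(measured_mz, theoretical_mz):
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def within_tolerance(measured_mz, theoretical_mz, tol_ppm=5.0):
    return abs(ppm_error(measured_mz, theoretical_mz)) <= tol_ppm

theoretical = 412.0881   # hypothetical [M+H]+ value
measured = 412.0895
print(round(ppm_error(measured, theoretical), 2), "ppm,",
      "accepted" if within_tolerance(measured, theoretical) else "rejected")
```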
Matrix Effect
An A. minutum sample, containing only GTX1/4, was spiked with a pure GTX2/3 standard to obtain a concentration level of 63 ng/mL. Matrix effect was calculated by comparing the peak area of the matrix matched (MM) standard with that of a matrix free (MF) GTX2/3 standard.
The ion suppression or enhancement effect was assessed as: 100 -(peak area of MM standard/peak area of MF standard) × 100.
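The calculation above translates directly into a one-line function; the peak areas in the sketch below are illustrative values, not measurements from the study.

```python
# Minimal sketch of the matrix-effect calculation described in the text.
# Peak areas are illustrative values, not measurements from the study.

def ion_suppression_or_enhancement(peak_area_matrix_matched, peak_area_matrix_free):
    """Positive values indicate suppression, negative values enhancement."""
    return 100.0 - (peak_area_matrix_matched / peak_area_matrix_free) * 100.0

print(round(ion_suppression_or_enhancement(peak_area_matrix_matched=8.2e5,
                                            peak_area_matrix_free=1.0e6), 1))   # ~18% suppression
```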
RNA Extraction and Reverse-Transcription
Each strain was analysed for absolute and relative quantification of sxtA and sxtG mRNAs content. RNA was extracted from 3.0 × 10 6 cells at different growth phases using TRIzol Reagent (Ambion, Life Technologies, Carlsbad, CA, USA) following manufacturer's instructions with few modifications: cell lysis was performed at 60 ± 1 °C for 10 min in a water bath, and there was a 10 min shaking step with 0.5 mm zirconia-silica beads (400 mg) contained in the sample tube. The resulting RNA pellet was dissolved in 100 μL RNAse-free water and purified with the RNeasy mini kit (Qiagen, Hilden, Germany) including on-column DNase digestion with the RNase-Free DNase Set (Qiagen, Hilden, Germany). Finally, RNA was eluted with 40 μL RNase-free water. The concentration, integrity and purity of RNA was tested with a PharmaSpec UV-1700 spectrophotometer (Shimadzu, Kyoto, Japan), measuring absorbance at 260, 280 and 230 nm, and with electrophoresis analysis in an agarose gel. Only samples with intact RNA were taken into account for reverse-transcription and qPCR. The purified RNA (900 ng) was spiked with 100 ng of human RNA derived from human MCF7 cells, to be used as an exogenous reverse transcription control. The cDNA was prepared using SuperScript ® III First-Strand Synthesis SuperMix for qRT-PCR (Invitrogen, Life Technologies, Carlsbad, CA, USA).
Primer Design and qPCR Conditions
The primers were designed using Primer Express 2.0 software (Applied Biosystems, Life Technologies, Carlsbad, CA, USA). The sequences used for designing primers specific for genes of interest were: a long isoform precursor mRNA of sxtA (GenBank Accession number: JF343268) and mRNA sxtG (GenBank Accession number: JX995121) both from Alexandrium minutum CCMP113. These primers and condition were used for amplification of both cDNA and genomic DNA for assessing the presence of target sequence in the strain genomes. Experiments that gave a negative result were repeated by including within the same reaction 100 copies of purified PCR product as a positive control in order to verify the absence of inhibition.
The actin and 5.8S RNA genes were initially considered as housekeeping genes (HKG). However, due to their significant differences in expression observed at the different growth phases, the human β2M gene was used as a control for relative quantification of sxtA and sxtG expression, and to check the reproducibility and efficiency of reverse transcription. The actin primers were designed using the A. minutum actin gene as reference (JN402307). The specificity of all primers was examined in silico using BLAST. The primers specific for 5.8S RNA were 5.8S3′ and 5.8S5′ [53]. The primers β2M f and β2M r specific for human β2-microglobulin were from [54].
The primer sequences and concentrations used in each qPCR reaction are shown in Table 2. The qPCR protocols were performed on a StepOne Real-time PCR system (Applied Biosystems, LifeTechnologies, Carlsbad, CA, USA) and have been optimized in order to obtain reaction efficiencies close to 100%. Reactions were run in 25 µL volumes with Hot-Rescue Real-Time PCR Kit-SG Mix (1×) containing Sybr Green (Diatheva, Fano, Italy). All qPCR amplification protocols started with a 10 min activation step at 95 °C, followed by 40 cycles including 15 s at 95 °C, and 1 min at 60 °C for annealing and extension. The qPCRs were followed by a dissociation protocol from 60 °C -95 °C and melting curve analysis. All samples were run in three biological replicates and each of those were run with two technical replicates. The standard curves were constructed from a six point ten-fold dilution series of purified sxtA1 and sxtG PCR products (from 2 to 1.0 × 10 6 copies) generated from DNA of A. minutum AMI2OL. The PCR product was purified with the MinElute Gel Extraction Kit (Qiagen, Hilden, Germany) and quantified with a Qubit (Invitrogen, LifeTechnologies, Carlsbad, CA, USA). The amplification efficiency of the qPCR assays was estimated from standard curves using the StepOne software version 2.3 (Applied Biosystems, LifeTechnologies, Carlsbad, CA, USA).
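Amplification efficiency is conventionally derived from the slope of the standard curve as E = 10^(-1/slope) - 1; the sketch below shows this with illustrative Ct values, not the study's measurements.

```python
# Sketch of estimating qPCR efficiency from a ten-fold dilution series
# (Ct values below are illustrative, not the study's measurements).

import math

copies = [1e6, 1e5, 1e4, 1e3, 1e2, 1e1]
cts = [17.1, 20.5, 23.8, 27.2, 30.5, 33.9]

# Least-squares slope of Ct versus log10(copies)
x = [math.log10(c) for c in copies]
mean_x = sum(x) / len(x)
mean_ct = sum(cts) / len(cts)
num = sum((xi - mean_x) * (ci - mean_ct) for xi, ci in zip(x, cts))
den = sum((xi - mean_x) ** 2 for xi in x)
slope = num / den

efficiency = 10 ** (-1.0 / slope) - 1.0
print(round(slope, 2), round(100 * efficiency, 1))   # ~ -3.35 and ~99 (% efficiency)
```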
Statistical Analyses
The Shapiro-Wilk test was used to check the data for departures from normality. Accordingly, the non-parametric Mann-Whitney and Kruskal-Wallis tests were used for the comparison of medians among strains and growth phases, for the gene expression data and the determination of intracellular toxin contents. Spearman's test was used for the correlation between levels of gene expression and intracellular toxin content. All tests were performed with PAST ver. 2.09 [56], with p < 0.05 determining significance.
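For reference, the same workflow (normality check, non-parametric comparisons, rank correlation) can be reproduced with standard statistical library calls; the sketch below uses toy data and is not part of the study.

```python
# Hedged sketch of the statistical workflow described above, using toy data.
from scipy import stats

toxin_exp = [3.4, 3.5, 3.3]    # illustrative fmol/cell, exponential phase
toxin_stat = [1.8, 1.7, 1.9]   # illustrative fmol/cell, stationary phase
mrna = [120, 95, 140]          # illustrative sxtA1 copies per ug RNA

print(stats.shapiro(toxin_exp))                     # normality check
print(stats.mannwhitneyu(toxin_exp, toxin_stat))    # two-group comparison
print(stats.kruskal(toxin_exp, toxin_stat, mrna))   # multi-group comparison
print(stats.spearmanr(toxin_exp, mrna))             # rank correlation
```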
Conclusions
This is the first study of sxtA1 and sxtG gene expression and their correlation with toxin content in Mediterranean A. minutum. The expression levels and intracellular toxin accumulation were studied in A. minutum strains grown in enriched medium and under nutrient limitation. In standard medium conditions, A. minutum produced exclusively GTX1/4, and toxin production decreased from the exponential to the stationary phase. Although both sxtA1 gene expression and intracellular toxin content tended to decrease from the exponential to the stationary phase, their correlation was not significant. The sxtG gene was detected in only two strains. Its expression followed the same trend as sxtA1, but the absolute mRNA quantity was not correlated with either the toxins or sxtA1. Under phosphorus or nitrogen limitation, the toxin content displayed a significant reduction in A. minutum CBA57 only. In these nutrient-depleted conditions as well, the correlation between gene expression and toxin content was not significant.
Hence, the monitoring of expression level of sxtA1 and sxtG did not appear sufficient to predict toxicity in Mediterranean A. minutum. It would be necessary to increase knowledge regarding the expression and function of other genes involved in the PST biosynthesis pathway to improve the ecological interpretation of toxin production, as well as to provide the possibility of using new molecular markers for the monitoring of toxin presence in seawater and accumulation in farmed shellfish. | 6,885.2 | 2014-10-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
AN OPTIMAL CONTROL PROBLEM IN ECONOMICS
The first problem in the economics of natural resources is to find the rate at which to extract the resource in order to optimize its value when there are no extraction costs. It is shown that the existence of an optimal extraction path is not guaranteed by a utility function that is merely (strictly) concave, but that the additional requirement of asymptotic nonlinearity will assure the existence of the desired optimum.
INTRODUCTION.
The first problem in the economics of exhaustible natural resources is to establish the extraction rate of the resource. This problem is formulated in the language of the calculus of variations or, more commonly, as an optimal control problem: choose an extraction rate q(t) to maximize the total discounted utility of the resource, $\int_0^\infty U(q(t))\,e^{-\delta t}\,dt$, subject to appropriate initial conditions. The subject of this paper is the existence of a solution to this problem. Existence theorems for optimal control problems are abundant in the literature, but the simplicity of the problem considered here demands a simple answer.
It is well known that even in the simple setting described above, the existence of an optimal solution is not always assured. Indeed, if the utility function is linear, a solution does not exist that is "usable" for a resource extractor. Although concavity is the natural hypothesis for the utility function U, the existence of an optimal extraction path is not guaranteed by the strict concavity of U. It will be shown that the additional geometric requirement of "asymptotic nonlinearity" will resolve these two difficulties and assure the existence of the desired optimum.
Optimal extraction problems have been discussed extensively in the literature. Although more sophisticated models have been developed, the one here is always the first discussed; see, for example, the text by Dasgupta and Heal [1] and the references therein.
Two recent works that include a discussion of existence for this simple resource extraction problem are Epstein's 1983 paper [2] and Lozada's 1987 thesis [3]. Both these works consider the optimal control problem from the point of view of an unbounded horizon problem and so are more general than the paper of Yaari [4] on the finite horizon problem. More importantly, as both Epstein and Lozada realize, the key to existence is exhaustion. The example at the end of section 2 will illustrate this. The condition of asymptotic nonlinearity introduced here is a simple geometric condition on the utility function U(q), saying essentially that U(q) does not become too linear for large q. This is sufficient to guarantee exhaustion and hence, using a result of Toman [5], existence is assured. It is easy to show that this condition is in fact equivalent to Epstein's two integral conditions in his Lemma 1, but the condition here is more easily verified and has a simple geometric interpretation. The condition of asymptotic nonlinearity is distinct from Lozada's conditions, as examples will show. Finally, recent results of Botteron and Dacorogna [6] in the theory of optimal foraging provide conditions similar in spirit to ours. Their results are complementary to ours in that their problem has a fixed (finite) terminal time and a fixed terminal state.
In section 2, notation and preliminary results will be established. A brief proof of a "folk theorem" is given which provides a simple criterion to determine if extraction occurs on a finite or infinite interval. An example is given at the end of section 2 of a strictly concave utility function for which the associated optimal control problem does not have a solution.
In section 3 it is shown that if the utility function is strictly concave and "asymptotically nonlinear" then an optimal solution exists. The theorem is compared with the results of Epstein [2], Lozada [3], and Botteron and Dacorogna [6].
STATEMENT OF THE PROBLEM AND PRELIMINARY RESULTS.
A utility function is a map $U:\mathbb{R}_+\to\mathbb{R}$ satisfying the following conditions: i) U is twice continuously differentiable; ii) $U \ge 0$; iii) $U' > 0$; and iv) $U'' < 0$.
The objective is to maximize the present value of the utility of an exhaustible resource. Let x(t) be the amount of the resource remaining in situ at time t. The initial amount of the resource is $x_0$. Let q(t) denote the extraction rate. The problem is to choose q (and implicitly an extraction time $T \le \infty$) in order to maximize $\int_0^T U(q(t))\,e^{-\delta t}\,dt$ subject to the conditions $dx/dt = -q$, $x(0) = x_0$, $x(T) \ge 0$, $q \ge 0$. Since U is increasing and nonnegative, if an optimal path exists, the resource will always be exhausted: $x(T) = 0$. This fact will imply the non-existence of an optimal extraction path for the utility function constructed at the end of this section. The following proposition summarizes some well known necessary conditions for optimal extraction paths. PROPOSITION 2.1. A) If x(t) is optimal, then $\lim_{t\to T} x(t) = 0$.
Finally, it is important in problems of resource extraction to know if the time to exhaustion is finite or infinite. This is especially critical since the transversality conditions often differ in the finite and infinite cases. In the following theorem, it is shown that the value of the derivative of the utility function at $q = 0$ provides a simple test to determine the extent of the extraction horizon. THEOREM 2.2. Assume an optimal solution exists. The extraction horizon is finite if and only if the derivative $U'$ is bounded (at $q = 0$). PROOF. The Hamiltonian for the problem is $H = U(q)\,e^{-\delta t} - \lambda q$. The Maximum Principle guarantees the existence of the constant $\lambda$. Apply the Mean Value Theorem to H as follows. Let q be a feasible extraction function (nonnegative and piecewise continuous).
Fix t. Then $H(0) - H(q(t)) = [U(0) - U(q(t))]\,e^{-\delta t} + \lambda q(t) = U'(\xi)\,(-q(t))\,e^{-\delta t} + \lambda q(t)$ by the Mean Value Theorem, where $0 < \xi < q(t)$; that is, $H(0) - H(q(t)) = -q(t)\,[U'(\xi)\,e^{-\delta t} - \lambda]$. If $U'(0) = +\infty$ (i.e. $\lim_{q\to 0^+} U'(q) = +\infty$), we want to show that it is not optimal to have $q(T) = 0$ for any finite T. Fix T and construct a feasible path q with $q(T) > 0$ but so small that $\lambda e^{\delta T} < U'(q(T))$. Then $H(0) - H(q(T)) < 0$, so by the Maximum Principle, the optimal solution cannot be zero at T.
On the other hand, if $U'(0) < \infty$, we want to show that the optimal solution will be zero for large t. Since $U'(0)$ is bounded, for any extraction function q, the value of t can be chosen so large that $U'(q(t))\,e^{-\delta t} - \lambda < 0$. Thus for $q(t) > 0$, $H(0) - H(q(t)) > 0$, and so $q(t) > 0$ is not optimal. □
The final item of this section is an example showing that the Maximum Principle fails to yield the optimum extraction path for a class of utility functions satisfying assumptions (i)-(iv). A linear function is a trivial example of a utility function for which the Maximum Principle yields no information, although such a function also fails to satisfy conditions (iii) and (iv). With a linear utility function, the present value of utility is always increased by extracting the same quantity of resource over a shorter time. (So the "optimal" extraction decision would be to extract all of the resource at the first instant. Although point masses are well known to arise as solutions to optimal control problems, they are not solutions that can be implemented by a resource extractor.) The problem with a linear utility function would disappear if the feasible extraction rates were uniformly bounded. However, without introducing other considerations into the model (e.g. costs), there are no a priori upper bounds on the extraction rates which are justified from the economics of the problem. This issue can also arise for utility functions which are strictly concave, as shown in the following example.
The example is similar to one by Yaari [1964].
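The specific utility function of the example did not survive the source formatting; judging from the first order condition, the value-function integrand, and the solution path that follow, it appears to be $U(q) = 1 + q - e^{-q}$. The display below, written under that assumed reading, collects the computation that the next few sentences refer to.

```latex
% Assumed utility for the example (inferred, not stated explicitly in the source):
U(q) = 1 + q - e^{-q}, \qquad U'(q) = 1 + e^{-q}, \qquad U''(q) = -e^{-q} < 0 .

% First order condition U'(q)\,e^{-\delta t} = \lambda from the Maximum Principle:
\bigl(1 + e^{-q(t)}\bigr)e^{-\delta t} = \lambda
\;\Longrightarrow\;
q(t) = -\ln\bigl(\lambda e^{\delta t} - 1\bigr) .

% The terminal condition q(T) = 0 gives \lambda = 2e^{-\delta T}, hence
q(t) = -\ln\bigl(2e^{-\delta (T-t)} - 1\bigr),
\qquad
x_0 = \int_0^T -\ln\bigl(2e^{-\delta (T-t)} - 1\bigr)\,dt .

% Note that U(q) - q = 1 - e^{-q} \to 1, so this utility is strictly concave but
% not asymptotically nonlinear, consistent with the failure of existence for
% large stocks shown below.
```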
Applying the Maximum Principle yields $\lambda = e^{-\delta t}\,(e^{-q} + 1)$. Solving this for q gives $q = -\ln(\lambda e^{\delta t} - 1)$. Since $q(T) = 0$ if an optimal solution exists, it follows that $\lambda = 2e^{-\delta T}$ and $q(t) = -\ln\bigl(2e^{-\delta(T-t)} - 1\bigr)$. The issue here is the existence of an upper bound for q. From the form of q it is seen that the existence of an optimal extraction rate depends on the terminal time T. Such a time T is found by solving, if possible, the following equation representing exhaustion of the resource: (2.1) $x_0 = \int_0^T q(t)\,dt = \int_0^T -\ln\bigl(2e^{-\delta(T-t)} - 1\bigr)\,dt$.
Thus equation (2.1) can be solved for all stock sizes $x_0$ less than some maximum $x_m$. Since $T_m$ is known, $x_m$ can be computed explicitly in this example by a change of variables: $x_m = \int_0^{T_m} -\ln\bigl(2e^{-\delta(T_m - t)} - 1\bigr)\,dt = \pi^2/(12\delta)$.
So for small stock sizes ($x_0 \le x_m$), the optimal extraction path is determined by the first order conditions. Moreover, the value of the resource is given by $V(x_0) = \int_0^T \bigl(1 + q - e^{-q}\bigr)\,e^{-\delta t}\,dt = \delta^{-1}\bigl[1 - e^{-q(0)} - q(0)\,e^{-q(0)}\bigr]$. However, for $x_0 > x_m$ a solution T does not exist for equation (2.1). Thus for stock sizes greater than $x_m$ no solution will exhaust the resource, and so by Proposition 2.1, no optimal solution exists.
Finally, note that in case $x_0 = x_m$ the stock constraint (2.1) is exactly satisfied and the first order condition implies that q is always nonnegative. However, equation (2.1) implies that the initial rate of extraction is $q(0) = +\infty$. Further, although an upper bound exists for the value function $V(x_0)$ (in fact $V(x_0) < 1/\delta$), that limit is not reached if $q(0)$ is finite. Since extraction rates are, by definition, finite everywhere, no optimal solution exists in case $x_0 = x_m$.
EXISTENCE OF OPTIMAL EXTRACTION PATHS.
It is routine to check that if the admissible extraction functions q(t) are required to take on values in a bounded set (e.g. $0 \le q(t) \le B$) then, with the usual assumptions on U, there exists an optimal path. (See Theorem 2 of Toman [5].) The preceding example shows that the boundedness requirement is nontrivial. Moreover, in the absence of extraction costs, there are no a priori bounds on the extraction rate. However, as suggested by Toman [5, 7], it may be possible for bounds to be inferred from other aspects of the problem. This is done next. The goal is to show that the resource will be exhausted. This will imply that the initial extraction rate is finite and hence q(t) is bounded since the extraction function is monotone. One additional assumption is needed on the utility function U. The assumption is basically that the asymptotic behavior of U is not linear. This assumption is used to guarantee that the resource will be exhausted (i.e. $\int_0^T q(t)\,dt = x_0$). This in turn provides the required boundary conditions needed to solve the optimal control problem.
In order to state the main theorem, two pieces of notation are needed. Let $\beta = \lim_{q\to 0^+} U'(q)$, if this limit exists; otherwise let $\beta = +\infty$. Similarly, let $\alpha = \lim_{q\to\infty} U'(q)$. Since $U'$ is monotone, this limit always exists. Abusing notation slightly, write $U'(0) = \beta$ and $U'(\infty) = \alpha$. The condition that U be asymptotically nonlinear is simply that $\lim_{q\to\infty}\,[U(q) - \alpha q] = +\infty$. This means that the utility function is "not too linear" for large values of q.
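For concreteness, the condition can be checked directly on the utility functions that appear in the comparison with Lozada's results in section 3; the second function below is the partially garbled counterexample from that passage, read here as $\sqrt{q^2+q}$, so it should be treated as an assumed reading.

```latex
% U(q) = \ln(q+1) + q:  here \alpha = 1, \beta = 2, and
U(q) - \alpha q = \ln(q+1) \longrightarrow +\infty ,
% so the asymptotic nonlinearity condition holds.

% U(q) = \sqrt{q^2 + q} (assumed reading):  U'' < 0 and \alpha = 1, but
U(q) - \alpha q = \sqrt{q^2+q} - q \longrightarrow \tfrac12 ,
% so the condition fails even though U is strictly concave.
```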
THEOREM 3.1. Assume U satisfies conditions (i)-(iv). Let q(t) satisfy the first order conditions from the Maximum Principle. In case $\alpha > 0$, assume additionally that U satisfies $\lim_{q\to\infty}\,[U(q) - \alpha q] = +\infty$. Then the stock will be exhausted. Furthermore, $q(0) < \infty$.
PROOF. Since q(t) satisfies the first order conditions, the Hamiltonian $H = U(q)\,e^{-\delta t} - \lambda q$ is maximized as a function of q. Therefore $q(t) = W(\lambda e^{\delta t})$, where $W = (U')^{-1}$.
There are two constants left to be determined: $\lambda$ and T. The terminal condition on q, $\lim_{t\to T} q(t) = 0$, from the first order conditions allows for computation of T as a function of $\lambda$. (It is convenient for the proof to find T and $\lambda$ in this order.) Finally, the domain of W is the interval $(\alpha,\beta)$, so that $\alpha < \lambda e^{\delta t} < \beta$ for all $t \in [0,T]$ and so $\lambda \in [\alpha,\beta]$.
To determine $\lambda$, use the stock constraint: $x_0 = \int_0^T q(t)\,dt$.
The problem of existence then reduces to the solution of the stock constraint for the multiplier $\lambda$: (3.1) $x_0 = \int_0^T W(\lambda e^{\delta t})\,dt$. To show that (3.1) has a solution $\lambda$, observe first that the integral $\int_0^T W(\lambda e^{\delta t})\,dt$ is a continuous function of $\lambda$. To show that the stock can be exhausted, we show that i) $\lim_{\lambda\to\alpha}\int_0^T W(\lambda e^{\delta t})\,dt = +\infty$ and ii) $\lim_{\lambda\to\beta}\int_0^T W(\lambda e^{\delta t})\,dt = 0$. To verify i) and ii) it is helpful to change variables: $v = \lambda e^{\delta t}$, so that $\int_0^T W(\lambda e^{\delta t})\,dt = \delta^{-1}\int_{\lambda}^{\beta} W(v)\,dv/v$. Since $\beta$ may be infinite, it will be convenient to fix $\tau \in (\alpha,\beta)$.
Assertion i). The assertion is easily verified if $\alpha = 0$: in this case $\int_{\lambda}^{\tau} W(v)\,dv/v \ge W(\tau)\int_{\lambda}^{\tau} dv/v \to \infty$ as $\lambda \to 0$.
In case $\alpha > 0$, use a similar estimation, but with the roles of $W(v)$ and $1/v$ interchanged, and apply the "asymptotic nonlinearity" hypothesis. Fix $\tau \in (\alpha,\beta)$ and define $q_\tau$ and $q_\lambda$, respectively, by $U'(q_\tau) = \tau$ and $U'(q_\lambda) = \lambda$. Comparing areas under the graphs of $U'$ and its inverse W, we see that $\int_{\lambda}^{\tau} W(v)\,dv/v \ge \tau^{-1}\bigl[(U(q_\lambda) - \lambda q_\lambda) - (U(q_\tau) - \lambda q_\tau)\bigr]$. Assertion i) now follows, since $q_\lambda \to \infty$ as $\lambda \to \alpha$ and, by hypothesis, $\lim_{q\to\infty}[U(q) - \alpha q] = +\infty$.
Assertion ii). In case $\beta < \infty$, fix $\lambda_0 \in (\alpha,\beta)$ and observe that for $\lambda > \lambda_0$, $\int_{\lambda}^{\beta} W(v)\,dv/v \le \lambda_0^{-1}\int_{\lambda}^{\beta} W(v)\,dv \to 0$ as $\lambda \to \beta$.
In case $\beta = \infty$, compare the areas under $U'$ and W: with $q_0$ defined by $U'(q_0) = \lambda_0$, $\int_{\lambda_0}^{\infty} W(v)\,dv \le \int_0^{q_0}\bigl(U'(q) - \lambda_0\bigr)\,dq \le \int_0^{q_0} U'(q)\,dq = U(q_0) - U(0) < \infty$. Now for $\lambda > \lambda_0$, $\int_{\lambda}^{\infty} W(v)\,dv/v \le \lambda_0^{-1}\int_{\lambda}^{\infty} W(v)\,dv$, and so as $\lambda \to \beta$ the integral on the left approaches zero. □
The issue of existence of an optimal solution to the resource extraction problem can now be resolved. Suppose an initial stock size $x_0$ and a utility function U satisfying a standard set of assumptions (i)-(iv) are given. Under the additional assumption of asymptotic nonlinearity, an extraction function $q(t) = W(\lambda e^{\delta t})$ has been produced which satisfies the necessary conditions of the Maximum Principle. Such a function is unique, but is it optimal?
To see that it is, let $B = q(0)$. Theorem 3.1 implies that admissible extraction functions can be limited to those satisfying $q(t) \in [0,B]$, so that assumption A12 of Toman [5] is satisfied. The other conditions of Toman's Theorem 2 are easily verified and therefore an optimal solution exists. This discussion is summarized in the following theorem.
THEOREM 3.2. Assume U satisfies the conditions (i)-(iv). In case $\alpha > 0$, assume additionally that U satisfies $\lim_{q\to\infty}\,[U(q) - \alpha q] = +\infty$. Then there exists an optimal solution to the problem of section 2.
Finally, we relate our results to those in the literature. Lozada's thesis [3], of course, contains much more than existence results for the simple model discussed here. However, his result on the problem considered here is basically that if $\alpha = 0$ or if $\inf W(v) > 0$ then the resource will be exhausted. So his theorem does not apply (and Theorem 3.1 does) to utility functions such as $U(q) = \ln(q+1) + q$, $U(q) = \sqrt{q} + q$, or $U(q) = \tan^{-1}q$, while our theorem would not apply to functions such as $U(q) = \sqrt{q^2 + q}$. Thus our results are distinct from Lozada's.
Epstein's paper [2] is also not principally concerned with existence results for optimal control problems. So some of his conditions are phrased to be more applicable to his topic of analyzing risk aversion. Lemma 1 of [2] is the result of relevance here. The two conditions which assure existence of an optimal solution are integral conditions on the utility function. By changing variables ($x = U'(c)$), it follows that these two conditions are essentially the two assertions in the proof of Theorem 3.1. In case $\alpha$ (= $U'(\infty)$) is positive, (IC1) is equivalent to the condition that the utility function is asymptotically nonlinear. On the other hand, if $\alpha = 0$ then Epstein's conditions hold automatically. Clearly the conditions of Theorem 3.1 are more intuitive and geometrical than the integral conditions of Epstein. Lastly, Botteron and Dacorogna [6] prove an existence theorem for the problem of minimizing $\int g(t, v'(t))\,dt$ for functions $v \in C^1([0,1])$ satisfying $v(0) = 0$, $v(1) = S$, and $v'(t) \ge 0$. The hypotheses on g fall into 2 groups. The first group includes regularity and convexity conditions and is more general than our conditions. (This is due partly to our more specialized economic applications. Our theorems can be generalized to include discounting functions more varied than $e^{-\delta t}$.) The second hypothesis on g in [6] is a condition on $\partial g(t,y)/\partial y$. While similar in spirit to our conditions on $U'$, the conditions are quite distinct due to the fact that the problem in [6] has a fixed (finite) terminal time and a fixed terminal state. In our optimal extraction problem the final time, in particular, is one of the choice variables. (Especially in the case that $T = +\infty$, the condition in [6] is inapplicable to our problem.) As a result, the conditions of [6] and the current paper are different and in fact are complementary in that, taken together, they cover both the free and fixed endpoint problems. | 3,889.2 | 1991-12-01T00:00:00.000 | [
"Economics",
"Mathematics"
] |
A look at the performance of barrel and wedge assembly in cable bolts applications
Pretensioning is one of the most common practices in cable bolting. A barrel and wedge assembly is typically used at the free end of the cable to hold the pretension load. This study investigates the performance of the barrel and wedge in large-scale laboratory cable bolt pull-out tests. Twenty-five experiments were completed, covering various barrel and wedge and cable sizes under different loading conditions, namely monotonic and cyclic. The results indicated that barrel and wedges undergo continual displacement throughout the experiment. The cyclic tests suggest that the barrel and wedge assembly displacement is almost entirely non-reversible. Two distinct behaviours, namely exponential and deflection-point based, were observed. The study concludes that barrel and wedge assemblies can significantly influence the performance of cable bolts under axial load.
To the authors' knowledge, no comprehensive dedicated studies on barrel and wedge performance have been conducted, at least in recent years. Most of the knowledge pool dates back to the twentieth century. In this research the performance of barrel and wedges is tested in 25 pull-out experiments on six different cable bolts under monotonic and cyclic loading patterns. The following sections detail the experimental plan and methodology proposed to study the barrel and wedge assembly. The results are then presented, and the outcomes are analysed.
Experiment design
The barrel and wedges used in this study were utilized in a large-scale laboratory pull-out testing campaign 10,11. The pull-out tests comprised cables encapsulated in large 300 mm by 450 mm long concrete cylinders with a UCS of 40 MPa, mixed at an approximate ratio of 1.0:1.5:3.0:0.6 (cement:sand:aggregate:water) with a slump of circa 100 mm. In such experiments, the borehole surface is rifled to mimic field conditions and promote failure at the cable/bonding agent interface 12,13. The concrete cylinders were confined by a thick metal pipe to provide outer confinement and maintain integrity during the tests. On top of the cylinders, a base plate distributed the loads from the 1000 kN hollow ram jack while reacting against the barrel and wedge (Fig. 3).
Care was taken in choosing the cables for this study in order to represent a variety of the major cable designs used in practice. Superstrand and Indented Superstrand cables are relatively small and without a bulb. The Goliath cable is a thicker, heavy, unbulbed cable. Furthermore, 9-strand, 10-strand, and 12-strand cables were also used in the experiment. These cables had bulbs along their length (one inside the concrete cylinder and one inside the anchor tube), which govern their behaviour. Tables 1 and 2 showcase the cables and the respective barrel and wedges. Furthermore, Fig. 4 illustrates the cross-sections of the cables. Note that the 9, 10 and 12 strand cables are hollow core due to the presence of grout tubes.
Testing procedure
Prior to the experiment, the concrete samples were grouted inside the metal outer confinement. After curing, the anti-rotation base plate was lowered onto the top of the sample, followed by a 1000 kN hollow ram jack connected to a low-speed electric hydraulic pump, and a 1000 kN load cell. On top of the load cell, a reaction plate was placed to transfer the load to the barrel and wedge. Before the commencement of the test, the wedge was hammered into the barrel to the extent that further hammering would have resulted in cracking/breakage of the wedge (Fig. 5). This method, though non-scientific, ensured that the slippage of the wedge during the experiment was minimized in the laboratory. A 225 mm LVDT was fixed to the side of the jack to measure the total vertical displacement in the test, while a 150 mm LVDT was fixed to the cable to measure the displacement of the barrel relative to the cable (Fig. 5). The displacement read by this sensor includes the potential slippage of the wedge plus the barrel displacement during the test. The two LVDTs and the load cell data were recorded using a data acquisition system at 10 Hz.
In total, 25 tests were conducted on the cables, of which 18 were done in a monotonic manner and seven in a cyclic manner. The cyclic loading pattern consisted of five equally spaced full unload/reload cycles during otherwise monotonic loading. To define the unloading load values, the average peak load value from the monotonic tests of each cable was divided by five. Thus, the loading steps were 1/5, 2/5, 3/5, 4/5 and 5/5 of the average monotonic peak load. After the last unload/reload cycle, the sample was loaded until failure; a sample could also fail before the last unload/reload cycle.
Both monotonic and cyclic tests were capped at 120 mm of vertical displacement. Figure 6 illustrates a typical output from the three data channels (i.e., ram stroke LVDT, barrel and wedge LVDT, and load cell) during a monotonic loading test. As seen, as the load is increased during the test (in this case monotonically), the cable pull-out length is recorded via the ram stroke LVDT. In Fig. 6, the stroke LVDT values are the sum of the pull-out length, cable elastic elongation, and barrel and wedge movement. The barrel and wedge LVDT value is the sum of the slippage of the wedge on the cable and the penetration of the wedge inside the barrel.
Performance under monotonic loading
In this section, the results of the testing campaign are plotted as the total load of the experiment (from the load cell) versus the barrel displacement (from the BW LVDT) up to the peak load value before sample failure. Plotting the load in kilonewtons and the displacement in millimetres means that the area beneath each graph corresponds to the energy (in joules) spent on the barrel and wedge in reversible and irreversible deformations and displacements under the applied load. In the following figures, the prefixes M and C denote monotonic and cyclic loading, SS and IDS stand for the Superstrand cables (plain and indented), and Gol stands for the Goliath cable. 9S, 10S, and 12S represent the bulbed cables.
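Because the text equates the area beneath each load-displacement curve with the energy spent on the barrel and wedge, that quantity can be recovered numerically from the two logged channels; the sketch below uses made-up load-cell and BW-LVDT values (not data from the study) and relies on the kN × mm = J bookkeeping stated above.

```python
import numpy as np

# Placeholder records: load from the load cell (kN) and barrel-and-wedge
# displacement from the BW LVDT (mm), already trimmed to the peak load.
load_kN = np.array([0.0, 50.0, 120.0, 210.0, 320.0, 430.0, 500.0])
bw_disp_mm = np.array([0.0, 0.8, 1.9, 3.2, 4.9, 6.8, 8.5])

# Area under the load-displacement curve by trapezoidal integration.
# 1 kN * 1 mm = 1 J, so no unit-conversion factor is needed.
energy_J = np.trapz(load_kN, bw_disp_mm)
print(f"Energy spent on the barrel and wedge: {energy_J:.0f} J")
```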
As seen in Fig. 7, there is no conclusive trend in the displacement values measured for the smooth Superstrand and Indented Superstrand cables. The overall behaviour of the system resembles a second-order polynomial. Most importantly, though, the graph clearly shows that displacement of the barrel continued throughout the whole experiment and never stopped. Figure 8 showcases the 10 and 12-strand cables during the monotonic tests. The plots suggest that although the barrel and wedges used for these two cables are identical, the slight difference in the shape of the cable (lay angle and lay length due to strand count) has resulted in variation in the expected behaviour. The 10-strand cable shows a deflection point at a certain displacement, at which the behaviour suddenly becomes significantly stiffer. The 12-strand cable, on the other hand, behaves similarly to the Superstrand cables, albeit at higher load and displacement. Interestingly, up to 9 mm of displacement is observed for up to 500 kN of load.
The performance of the 9-strand and Goliath cables is portrayed in Fig. 9. Similar to the other cables, large displacements are visible throughout the experiments. That said, their behaviour seems to be more in line with the 10-strand cable than with the Superstrand cables. The common point of the 9, 10, 12-strand and Goliath cables is their larger diameter (Tables 1 and 2). This suggests the diameter of the cable can be important for the performance of the barrel and wedge (except for the 12 strand). The deflection point in the behaviour of these cables indicates that most of the displacement occurs at the lower loads. As mentioned above, the hollow-core 9-strand cable and the full-core Goliath cable both showed the deflection-point-based characteristic, suggesting the grout tubes in the middle of the bulbed cables have a minimal adverse effect on the barrel and wedge behaviour.
Cyclic performance
Figure 10 illustrates the barrel and wedge assembly response in the cyclic tests. As seen, in all cables, regardless of the amount of displacement and load, there was an almost instantaneous rebound after the unloading steps. This is seen in the vertical load pickups and suggests that minimal elastic displacement (energy) is stored in the system. Barrel and wedges need to be able to withstand various loading events due to the unpredictable and dynamic environments of underground mines, where changes in stress regime, blasting and seismic events such as earthquakes or rockbursts are common.
Another point worth mentioning is the observation of the deflection-point-based behaviour for the 9 and 10 strand cables, while the Superstrand and 12-strand cables exhibit slight exponential behaviour (similar to the monotonic tests). The 9 and 10-strand cables show significantly larger displacements for a given load compared to the rest of the cables tested, a phenomenon also seen, more or less, in the monotonic experiments. Interestingly, the 12-strand cable (a bulbed, large, heavy cable) performed very similarly to the Superstrand and Indented Superstrand cables (unbulbed smaller cables).
Conclusions
This study presented findings on the performance of the barrel and wedge assembly used in cable bolting. A dedicated LVDT, fixed on the free end of the cable, measured the relative displacement of the barrel during large-scale laboratory monotonic and cyclic pull-out tests. Multiple bulbed and unbulbed cables were tested, and the results suggested the following:
• Cables of similar diameter can perform differently, as seen with the 10 and 12 strand cables. The difference between these two cables was in the lay angle and lay length, with the 10 strand cable having a longer lay length. As a result, the 10 strand cables showed much higher displacement for a given load value compared to the 12 strand cables. Moreover, the 10 strand cables had a distinct deflection point at around 4-5 mm displacement, at which the behaviour suddenly became stiffer. For a given barrel and wedge and cable, the longer the lay length, the larger the contact area of a single strand with a single wedge; conversely, each strand has a higher chance of being in contact with multiple wedges for the same cable with a shorter lay length.
• The deflection-point-based behaviour was also witnessed in the 9 and 10 strand and Goliath cables. This suggests larger cables tend to behave similarly while smaller cables have exponential behaviour. In saying this, however, the 12 strand cable showed exponential behaviour in both monotonic and cyclic tests.
• The 9, 10 and 12 strand cables all have a metal grout tube in the centre, which was kept hollow for the experiments. Comparing their performance with the full-core large Goliath cable suggests the grout tube does not affect the results in a meaningful way.
• All cables showed reasonable performance in cyclic loading with an almost vertical load pickup, meaning that the displacement of the barrel and wedge assembly for all loads larger than 25 kN was permanent. The 9 and 10 strand cables showed higher displacement than the 12 strand and Superstrand cables, a behaviour also seen more or less in the monotonic tests.
As shown, the barrel and wedge can influence the design of cable bolts. For any given displacement, various load values could be expected. Nevertheless, one should acknowledge that the values in the graphs occurred alongside up to 120 mm of total pull-out displacement. This makes the < 10 mm of barrel and wedge displacement only a fraction (< 10%) of the total displacement of the system. Moreover, as seen in the cyclic tests, the energy seems to be spent on permanent deformation of the barrel and wedge assembly.
It is suggested that more dedicated and comprehensive studies be conducted on barrel and wedges, perhaps with more loading variations such as instantaneous loading (drop tests) or fatigue testing (seismicity and blasting). Moreover, corrosion studies could also help indicate how long a barrel and wedge can act as intended in the design over time.
A unified way of testing, both in the laboratory and in the field, would also be helpful. Another limitation observed in this study was the absence of a standard way of initializing the assembly on the cable. In this study, this was done by hammering the wedge into the barrel until no further displacement was possible, to make sure no excessive slippage happened at the beginning of the tests. However, this technique was far from scientific and repeatable.
Figure 1. (a) Barrel and wedge design and activation method in the field 2 after Thompson 3, (b) failure at barrel and wedge 4.
Figure 2. Application of barrel and wedge in other disciplines, (a) soil anchors 7, (b) cable and suspension bridges 8, (c) schematic of an instrumented anchor head to monitor axial load over time 9.
Figure 7. Barrel and wedge performance for Superstrand and Indented Superstrand in monotonic loading.
Figure 9. Barrel and wedge performance for 9 strand and Goliath cables in monotonic loading.
Table 1. Specifications of the cables 1. | 2,993.2 | 2024-02-23T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Miniaturized probe for femtosecond laser microsurgery and two-photon imaging
Combined two-photon fluorescence microscopy and femtosecond laser microsurgery has many potential biomedical applications as a powerful “seek-and-treat” tool. Towards developing such a tool, we demonstrate a miniaturized probe which combines these techniques in a compact housing. The device is 10 × 15 × 40 mm in size and uses an aircore photonic crystal fiber to deliver femtosecond laser pulses at 80 MHz repetition rate for imaging and 1 kHz for microsurgery. A fast two-axis microelectromechanical system scanning mirror is driven at resonance to produce Lissajous beam scanning at 10 frames per second. Field of view is 310 μm in diameter and the lateral and axial resolutions are 1.64 μm and 16.4 μm, respectively. Combined imaging and microsurgery is demonstrated using live cancer cells. ©2008 Optical Society of America OCIS Codes: (190.4180) Multiphoton Processes; (170.1020) Tissue Ablation; (170.2150) Endoscopic Imaging. References and links 1. U. K. Tirlapur and K. König, "Targeted transfection by femtosecond laser," Nature 418, 290-291 (2002). 2. M. F. Yanik, H. Cinar, H. N. Cinar, A. D. Chisholm, Y. S. Jin, and A. Ben-Yakar, "Functional regeneration after laser axotomy," Nature 432, 822-822 (2004). 3. A. Vogel, J. Noack, G. Hüttman, and G. Paltauf, "Mechanisms of femtosecond laser nanosurgery of cells and tissues," Appl. Phys. B 81, 1015-1047 (2005). 4. N. Shen, D. Datta, C. B. Schaffer, P. LeDuc, D. E. Ingber, and E. Mazur, "Ablation of cytoskeletal filaments and mitochondria in live cells using a femtosecond laser nanoscissor," Mech. Chem. Biosyst. 2, 17-25 (2005). 5. A. A. Oraevsky, L. B. Da Silva, A. M. Rubenchik, M. D. Feit, M. E. Glinsky, M. D. Perry, B. Mammini, M., W. Small, IV, and B. C. Stuart, "Plasma mediated ablation of biological tissues with nanosecond-tofemtosecond laser pulses: Relative role of linear and nonlinear absorption," IEEE J. Sel. Top. Quantum Electron. 2, 801-809 (1996). 6. I. Ratkay-Traub, I. E. Ferincz, T. Juhasz, R. M. Kurtz, and R. R. Krueger, "First clinical results with the femtosecond neodynium-glass laser in refractive surgery," J. Refract. Surg. 19, 94-103 (2003). 7. W. Denk, J. H. Strickler, and W. W. Webb, "2-photon laser scanning fluorescence microscopy," Science 248, 73-76 (1990). 8. P. T. C. So, C. Y. Dong, B. R. Masters, and K. M. Berland, "Two-photon excitation fluorescence microscopy," Annu. Rev. Biomed. Eng. 2, 399-429 (2000). 9. W. R. Zipfel, R. M. Williams, and W. W. Webb, "Nonlinear magic: Multiphoton microscopy in the biosciences," Nat. Biotechnol. 21, 1368-1376 (2003). 10. P. Theer and W. Denk, "On the fundamental imaging-depth limit in two-photon microscopy," J. Opt. Soc. Am. A 23, 3139-3149 (2006). 11. P. Theer, M. Hasan, and W. Denk, "Two-photon imaging to a depth of 1000μm in living brains by use of a Ti:Al2O3 regenerative amplifier," Opt. Lett. 28, 1022-1024 (2003). 12. N. Nishimura, C. B. Schaffer, B. Friedman, P. S. Tsai, P. D. Lyden, and D. Kleinfeld, "Targeted insult to subsurface cortical blood vessels using ultrashort laser pulses: Three models of stroke," Nature Methods 3, 99-108 (2006). 13. K. König, O. Krauss, and I. Riemann, "Intratissue surgery with 80 MHz nanojoule femtosecond laser pulses in the near infrared," Opt. Express 10, 171-176 (2002). #95706 $15.00 USD Received 2 May 2008; revised 8 Jun 2008; accepted 10 Jun 2008; published 20 Jun 2008 (C) 2008 OSA 23 June 2008 / Vol. 16, No. 13 / OPTICS EXPRESS 9996 14. E. Zeira, A. Manevitch, A. Khatchatouriants, O. Pappo, E. Hyam, M. Darash-Yahana, E. Tavor, A. Honigman, A. 
Lewis, and E. Galun, "Femtosecond infrared laser—an efficient and safe in vivo gene delivery system for prolonged expression," Mol. Ther. 8, 342-350 (2003). 15. L. Sacconi, I. M. Tolic ́-Nørrelykke, R. Antolini, and F. S. Pavone, "Combined intracellular threedimensional imaging and selective nanosurgery by a nonlinear microscope," J. Biomed. Opt. 10, 014002014001 014002-014005 (2005). 16. K. König, I. Riemann, F. Stracke, and R. Le Harzic, "Nanoprocessing with nanojoule near-infrared femtosecond laser pulses," Med. Las. Appl. 20, 169-184 (2005). 17. F. Helmchen, M. S. Fee, D. W. Tank, and W. Denk, "A miniature head-mounted two-photon microscope: High-resolution brain imaging in freely moving animals," Neuron 31, 903-912 (2001). 18. J. C. Jung and M. J. Schnitzer, "Multiphoton endoscopy," Opt. Lett. 28, 902-904 (2003). 19. W. Göbel, J. N. D. Kerr, A. Nimmerjahn, and F. Helmchen, "Miniaturized two-photon microscope based on a flexible coherent fiber bundle and a gradient-index lens objective," Opt. Lett. 29, 2521-2523 (2004). 20. B. A. Flusberg, J. C. Jung, E. D. Cocker, E. P. Anderson, and M. J. Schnitzer, "In vivo brain imaging using a portable 3.9 gram two-photon fluorescence microendoscope," Opt. Lett. 30, 2272-2274 (2005). 21. M. T. Myaing, D. J. MacDonald, and X. Li, "Fiber-optic scanning two-photon fluorescence endoscope," Opt. Lett. 31, 1076-1078 (2006). 22. L. Fu, A. Jain, C. Cranfield, H. Xie, and M. Gu, "Three-dimensional nonlinear optical endoscopy," JBO Lett. 12, 0405011-04050113 (2007). 23. K. König, A. Ehlers, I. Riemann, S. Schenkl, R. Bückle, and M. Kaatz, "Clinical two-photon microendoscopy," Microsc. Res. Tech. 70, 398-402 (2007). 24. D. Lee and O. Solgaard, “Two-axis gimbaled microscanner in double SOI layers actuated by self-aligned vertical electrostatic combdrive” in Proceedings of the Solid-State Sensors, Actuators and Microsystems Workshop, Hilton Head Island, Hilton Head Island, South Carolina, June 6-10, 2004, 352-355. 25. H. Ra, W. Piyawattanametha, Y. Taguchi, D. Lee, M. J. Mandella, and O. Solgaard, "Two-dimensional MEMS scanner for dual-axes confocal microscopy," J. Microelectromech. Syst. 16, 969-976 (2007). 26. W. Piyawattanametha, R. P. J. Barretto, T. H. Ko, B. A. Flusberg, E. D. Cocker, H. Ra, D. Lee, O. Solgaard, and M. J. Schnitzer, "Fast-scanning two-photon fluorescence imaging based on a microelectromechanical systems two-dimensional scanning mirror," Opt. Lett. 31, 2018-2020 (2006). 27. K. C. Maitland, H. J. Shin, H. Ra, D. Lee, O. Solgaard, and R. Richards-Kortum, "Single fiber confocal microscope with a two-axis gimbaled MEMS scanner for cellular imaging," Opt. Express 14, 8604-8612 (2006). 28. J. B. Guild, C. Xu, and W. W. Webb, "Measurement of group delay dispersion of high numerical aperture objective lenses using two-photon excited fluorescence," Appl. Opt. 36, 397-401 (1997). 29. D. L. Dickensheets and G. S. Kino, "Micromachined scanning confocal optical microscope," Opt. Lett. 21, 764-766 (1996). 30. M. M. Dickens, M. P. Houlne, S. Mitra, and D. J. Bornhop, "Method for depixelating micro-endoscopic images," Opt. Eng. 38, 1836-1842 (1999). 31. J. W. Goodman, Introduction to Fourier Optics, 3rd Edition (Roberts & Co., Englewood, 2005). 32. B. A. Flusberg, E. D. Cocker, W. Piyawattanametha, J. C. Jung, E. L. M. Cheung, and M. J. Schnitzer, "Fiber-optic fluorescence imaging," Nature Methods 2, 941-950 (2005). 33. F. Bourgeois and A. Ben-Yakar, "Femtosecond laser nanoaxotomy properties and their effect on axonal recovery in C. Elegans," Opt. 
Express 15, 8521-8531 (2007). 34. Urey, H., "Spot size, depth-of-focus, and diffraction ring intensity formulas for truncated Gaussian beams," Appl. Opt., 43 620-625 (2004) 35. K. König, P. T. C. So, W. W. Mantulin, and E. Gratton, "Cellular response to near-infrared femtosecond laser pulses in two-photon microscopes," Opt. Lett. 22, 135-136 (1997). 36. K. König, T. W. Becker, P. Fischer, I. Riemann, and K. J. Halbhuber, "Pulse-length dependence of cellular response to intense near-infrared laser pulses in multiphoton microscopes," Opt. Lett. 24, 113-115 (1999). 37. H. J. Koester, D. Baur, R. Uhl, and S. W. Hell, "Ca fluorescence imaging with pico- and femtosecond two-photon excitation: Signal and photodamage," Biophys. J. 77, 2226-2236 (1999). 38. A. Hopt and E. Neher, "Highly nonlinear photodamage in two-photon fluorescence microscopy," Biophys. J. 80, 2029-2036 (2001). 39. H. F. Wang, T. B. Huff, D. A. Zweifel, W. He, P. S. Low, A. Wei, and J. X. Cheng, "In vitro and in vivo two-photon luminescence imaging of single gold nanorods," Proc. Natl. Acad. Sci. U. S. A. 102, 15752-15756 (2005). 40. N. J. Durr, T. Larson, D. K. Smith, B. A. Korgel, K. Sokolov, and A. Ben-Yakar, "Two-photon luminescence imaging of cancer cells using molecularly targeted gold nanorods," Nano Lett. 7, 941-945 (2007). 41. M. J. Mandella, J. T. C. Liu, W. Piyawattanametha, H. Ra, P.-L. Hsiung, L. K. Wong, O. Solgaard, T. D. Wang, C. H. Contag, and G. S. Kino, "Compact optical design for dual-axes confocal endoscopic microscopes," Proc. SPIE 6443, E1-E9 (2007).
Introduction
In recent years, femtosecond laser microsurgery (FLMS) has emerged as a superior technique for ablation of cells and subcellular structures and offers the highest precision for microsurgery in three-dimensional (3D) tissue [1][2][3][4]. In FLMS, laser absorption is confined to the focal volume as a result of nonlinear interactions. The high peak intensities and short time duration of the pulses lead to efficient and rapid ionization of tissue before energy can be lost to heat diffusion. As a result, femtosecond lasers require much less energy for ablation and lead to significantly less heating of surrounding tissue (especially for repetition rates < 1 MHz) when compared to ablation with nanosecond or longer duration laser pulses [3,5]. Owing to these advantages, FLMS has been gradually moving from the laboratory to the physician's office, most notably in ophthalmology, where femtosecond laser systems produced by IntraLase Corp. have been clinically used for LASIK surgery since 2003 [6].
To fully realize the potential of FLMS in many clinical applications, however, it must be guided and monitored by an equally precise and penetrating 3D imaging technique, such as two-photon microscopy (TPM) [7][8][9]. Two-photon microscopy utilizes near infrared (NIR) femtosecond laser pulses similar to FLMS and provides similar advantages, such as inherent optical sectioning and penetration depth in excess of 1 mm [10,11]. By combining FLMS with TPM, physicians can guide precise surgical tools with microscopic imaging capabilities deep inside scattering tissue. Indeed, the use of femtosecond lasers for both imaging and manipulation of biological samples has been demonstrated in laboratory settings using large table-top systems [12][13][14][15][16]. This combined tool can be used for diagnosis and treatment of various diseases as well as for in vivo monitoring of disease progression.
Many potential clinical applications of FLMS and TPM require packaging these systems in a small and flexible device. Miniaturized two-photon microscopy probes have been developed since 2001 [17], primarily for neurological research [18][19][20], though recently probes have been developed toward clinical examination of diseased tissue [21,22]. To date, there have been no known flexible probes for FLMS, due to the difficulties of delivering high peak intensities through optical fibers and miniature optics caused by nonlinear effects and material damage. In this work, we present a new miniaturized probe that addresses these challenges and serves as a first step towards a clinical TPM/FLMS endoscope.
Probe design
The design of the probe, shown in Fig. 1, enables both nonlinear optical imaging and microsurgery. The major components of the probe are an air-core photonic crystal fiber (PCF) (Fig. 1(c)), two-axis microelectromechanical systems (MEMS) scanning mirror (Fig. 1(d)), miniature relay lens system, and gradient index (GRIN) objective lens. These components provide the advantages of compact size and the ability to handle high peak intensity laser pulses to enable FLMS and TPM in a miniaturized system. This design uses the same optical pathways for both microsurgery and fluorescence excitation, thus providing visualization and guidance at the exact location of ablation. Though several of these optical components have been utilized in previous miniature two-photon microscope designs, the incorporation of amplified high peak-intensity pulses for microsurgery led to a design that not only enables FLMS but improves imaging capabilities as well. Specifically, (1) the air-core fiber allows delivery of high peak intensity femtosecond pulses for microsurgery, (2) the relay lenses image the scanning mirror to the back aperture of the objective lens, thus providing a large field of view (FOV) with uniform excitation, (3) the use of individual relay lenses avoids the internal foci that occur in multiple pitch length GRIN relay lenses, thus allowing for delivery of high peak intensities without material damage [23], (4) the relay lenses also expand the beam, thus allowing the use of a small and fast MEMS scanning mirror for high frame rates, while still overfilling the objective lens aperture for improved resolution, (5) both axes of the MEMS scanner are driven at resonance, allowing the use of low driving voltages to scan a large FOV, and (6) the collection pathway is separated from the excitation fiber and uses a large numerical aperture (NA) fiber, providing improved collection efficiency. An air-core PCF (one meter long, Air-6-800, Crystal Fiber A/S) delivers femtosecond pulses for both imaging and microsurgery into a 10 × 15 × 40 mm³ Delrin® housing (see Fig. 1(c)). Pulses for imaging (at 80 MHz repetition rate from Mai Tai, Spectra Physics) were delivered at the minimum-dispersion wavelength of the PCF near 753 nm. The pulse duration was measured to be 152 fs after the fiber for a 117 fs input pulse duration using a homebuilt interferometric autocorrelator. Pulses for microsurgery (at 1 kHz repetition rate from Spitfire, Spectra Physics) were delivered near 780 nm, the operation wavelength of the chirped pulse amplifier. Microsurgery pulses were prechirped using the compressor in the amplifier to compensate for the fiber dispersion at this wavelength, resulting in a pulse duration of 178 fs exiting the fiber. The beam coming out of the fiber was collimated by a gradient index (GRIN) lens (0.46 NA, 1.8 mm diameter). The fiber tip and its collimating lens were held in a micropositioning stage that was aligned to send the collimated laser beam into the probe housing.
The laser beam is scanned inside the probe housing using a two-axis gimbal MEMS scanning mirror driven by vertical electrostatic combdrives (see Fig. 1(d)) [24,25]. The reflective surface of this mirror is bare silicon, which provides a reflectance of ~30%. The 500 × 500 μm² mirror exhibits resonance frequencies of 1.54 kHz and 2.73 kHz. Maximum optical beam deflections of ±10.5° for the outer axis and ±10° for the inner axis are achieved by driving the mirrors with sinusoidal voltage signals at their resonant frequencies using peak voltage values of 80 volts. The corresponding number of resolvable spots, a key figure of merit for scanning devices, is approximately 172 × 232 for this device [26,27]. The collimated beam on the scanning mirror is imaged onto the back aperture of a GRIN lens (0.46 NA, 210 μm working distance, and 1.8 mm diameter) through an aspherical lens pair which also serves as a 3.4× beam expander. The total group delay dispersion contribution from the miniature optics in the probe is estimated to be ~1230 fs², similar to that of standard microscope objectives used in two-photon microscopy [28].
The maximum power that could be delivered at the sample is limited to 120 mW near the 753 nm imaging wavelength. The total transmission efficiency of the probe is about 12% as a result of a ~65% coupling efficiency of the fiber, the ~30% reflectance of the MEMS mirror, and the insertion losses of the remaining optics. For the microsurgery laser, the maximum pulse energy deliverable to the sample was found to be 350 nJ, above which the laser intensity at the entrance to the photonic crystal fiber began to damage the fiber and decrease coupling efficiency. The power at the sample for both lasers can be increased in the future through use of a metallic-coated MEMS mirror. The deliverable energy of the microsurgery laser can also be increased in the future through additional pre-chirping, thus further decreasing the peak laser intensity during fiber coupling.
Fluorescence emission is collected by a 2-mm core plastic optical fiber (0.51 NA).The collection fiber is positioned directly behind a 5 × 5 mm 2 dichroic mirror which reflects wavelengths above 715 nm.The collected fluorescence is delivered through 1 meter of the fiber and focused into a photomultiplier tube (H7422-40, Hamamatsu) by a 4 mm focal length lens with a Schott BG38 filter blocking scattered laser light.
For imaging, the laser beam is scanned in a Lissajous pattern by a LabVIEW® program driving both axes of the MEMS mirror at resonant frequencies with sinusoidal voltage signals [17,20,29]. The emission signal is collected at a 1 MHz rate (1 μs dwell time per pixel) and processed in real-time to display a 256 × 256 pixel image at 10 frames per second (fps). The program also incorporates a variable pixel delay to compensate for phase delay between the driving voltage and the acquired signal, as well as phase delay between mirror axes. Further improvement of the images can be achieved during post-processing through frame averaging and spatial filtering. Low-pass spatial filtering using a fast Fourier transform (FFT) algorithm and a 1.2 cycles/μm cut-off frequency in the x and y directions effectively eliminates isolated pixels which are not sampled during the Lissajous scan and appear as sub-resolution zero-value pixels aligned in vertical and horizontal rows in the center of the image [30,31]. The cut-off frequency was based on the measured resolution of the system, so as to not filter out any useful signal, similar to the filtering implemented in Dickens et al. [30].
Imaging characterization
The size and curvature of the FOV was investigated by imaging 1 μm fluorescent beads (Invitrogen) deposited onto a microscope slide. This provided a fairly uniform and flat sample, which was translated by piezoelectric stages in the x and y directions during imaging to calibrate image scale. Using peak driving voltages to the MEMS between 20 and 80 volts, the diameter of the FOV could be varied between 36 and 310 μm. The maximum FOV is shown in Fig. 2(a). The limiting aperture for the maximum FOV was found to be the clear aperture of the second relay lens in the beam path. We observed a fairly constant intensity over much of the field. The resolution of the probe was measured experimentally by imaging 100 nm fluorescent beads (Invitrogen) in an agar gel across a 100 μm FOV to obtain the 3D two-photon point-spread function (PSF). The full widths at half-maximum of the Gaussian fits to the bead images (Fig. 2(b)) were found to be 1.64 ± 0.09 μm and 16.4 ± 1.0 μm, for lateral and axial dimensions, respectively. The measurements were averaged across 10 beads and the reported errors correspond to the standard error of the mean. The extended axial resolution has been previously attributed to spherical aberration from the GRIN lens and is similar to what has been observed in other studies [19][20][21][22][32].
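The lateral resolution quoted above comes from Gaussian fits to images of 100 nm beads; the following sketch illustrates such a fit and the conversion of the fitted width to a full width at half-maximum, using a synthetic intensity profile rather than the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, offset):
    """1D Gaussian model for a fluorescent-bead intensity profile."""
    return amp * np.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2)) + offset

# Placeholder line profile across a 100 nm bead (positions in micrometres)
x_um = np.linspace(-4.0, 4.0, 81)
true_sigma = 1.64 / 2.355                     # a 1.64 um FWHM expressed as sigma
profile = gaussian(x_um, 1.0, 0.0, true_sigma, 0.02)
profile += np.random.normal(scale=0.02, size=x_um.size)  # measurement noise

# Fit the model and convert the fitted sigma to a FWHM: FWHM = 2*sqrt(2*ln 2)*sigma
popt, _ = curve_fit(gaussian, x_um, profile, p0=(1.0, 0.0, 1.0, 0.0))
fwhm_um = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
print(f"Fitted lateral FWHM: {fwhm_um:.2f} um")
```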
Using the probe, two-photon images of fluorescently labeled pollen grains (30-4264, Carolina Biological Supply Co.) were acquired to demonstrate the resolving power of the probe at the micrometer scale; they can be seen in Fig. 2(c).
Cellular imaging and microsurgery
The combined imaging and microsurgery capabilities of the probe were demonstrated using breast carcinoma cells (MDA-MB-468) grown in a single cell layer and labeled with the fluorescent cell viability dye, calcein AM. During these experiments, the cells were imaged before and immediately following ablation. For microsurgery, the flipping mirror that directs the imaging beam was lowered to direct the microsurgery beam into the fiber while the imaging laser was blocked. The MEMS mirror was static and undeflected during microsurgery, thus targeting the center of the FOV. By bringing the microsurgery beam collinear with the imaging beam and triggering it through the imaging program, simultaneous imaging and microsurgery of off-axis targets is also possible with this system.
Figures 3(a) and 3(b) present two-photon images before and after femtosecond laser microsurgery with a single pulse at 280 nJ pulse energy. Ablation of the targeted cell is evidenced by the loss of its fluorescence signal, where the abrupt signal loss suggests that the membrane of the targeted cell was ruptured, releasing all of the calcein dye. In this set of experiments, the pulse energy was increased incrementally from 160 nJ until loss of cellular fluorescence was observed at 280 nJ. Ablation using this energy level was found to be very repeatable. As only a single laser pulse was used in this experiment, we expect that the loss in fluorescence is due to ablation rather than photobleaching of the calcein inside the cell. Because the cells are much larger (~15-20 μm diameter) than the focal spot, longer exposures would be necessary to photobleach the total volume of the cell. Note that the high precision of fs-laser ablation allowed disintegration of the target cell while adjacent cells remained intact. We also investigated two-photon imaging and ablation of cells within a three-dimensional (3D) tissue phantom consisting of breast carcinoma cells embedded in a collagen matrix. Cells were labeled using calcein AM and imaged using 17 mW average power measured at the sample. Axial steps of 6.6 μm were made by moving the sample on a piezoelectric stage. Figure 4 presents images where a cell approximately 125 µm deep was targeted for ablation. Here, 5000 pulses at 213 nJ per pulse were used, as the scattering collagen media reduces the total energy reaching the focal plane. In this case, we again observed the immediate loss of cellular fluorescence after irradiation with the microsurgery laser while the cells closest to the target remained intact. Because the targeted cell is completely embedded in collagen, which restrains the motion of cells in 3D, these experiments show that the immediate loss of fluorescence upon irradiation is due to ablation of the cell and not to displacement of the cell out of the FOV.
Discussion
The single-pulse cellular ablation demonstrated with the TPM/FLMS probe in Fig. 3 compares well to cellular ablation studies conducted using high-NA table-top systems [1,4,33]. To compare between these studies, we can estimate peak intensities based on the focused spot size and the reported pulse duration. The spot size at the focal plane of our probe is ~3.8 μm, as calculated by multiplying the 1/e² width of the measured PSF by √2, because the PSF represents the intensity-squared distribution. Thus, the 280 nJ pulse energy used for ablation corresponds to a peak laser intensity of 14 TW/cm² when accounting for the spot size of ~3.8 μm and the pulse duration of ~178 fs. This intensity is comparable to the 9.6 TW/cm² single-pulse ablation threshold that we have found for nanosurgery in C. elegans using a 1.4-NA objective lens [33]. Also, it is well known that the threshold for femtosecond laser ablation decreases as larger numbers of pulses are used as a result of an incubation effect. For example, when the number of pulses was increased to 1,000 in our C. elegans nanoaxotomy study, we observed a decrease in threshold from 9.6 TW/cm² to 1.8 TW/cm² [33]. When comparing to the multiple-pulse cellular ablation studies, Shen et al. [4] achieved ablation of individual mitochondria within cells with 1,000 pulses at 9.1 TW/cm², as estimated based on the reported NA of 1.4 using the spot-size formula $d = 0.925\,\lambda/\mathrm{NA}$ [34]. Additionally, Tirlapur and König succeeded in optoporating cells using over one million pulses at 2.4 TW/cm² at an 80 MHz pulse repetition rate and NA of 1.3 [1]. For comparison, during ablation within the collagen tissue phantom, the 5000 pulses at 213 nJ correspond to ~11 TW/cm² per pulse. Given the scattering that occurs within the turbid tissue-like media of the phantom, the intensity used in this experiment compares well to the intensities used in other multiple-pulse experiments. In light of these results for both single- and multiple-pulse experiments, the observed cellular ablation threshold using the TPM/FLMS probe suggests that we were able to achieve femtosecond laser microsurgery with efficiency comparable to high-NA table-top systems. Furthermore, the laser dosages used here for two-photon imaging are estimated to be at a safe level for cell vitality, which is crucial for sensitive clinical applications (see Table 1). Cell viability will depend on both the incident peak laser intensity and the number of pulses at this intensity that the cell receives. Thus for comparison, the number of overlapping consecutive pulses was estimated as well as the peak intensity. The number of overlapping pulses is defined here as the laser repetition rate divided by the product of the spot size and the scanning speed. For the probe, the slow-axis MEMS scanning frequency and the ~116 μm FOV of this experiment were used to arrive at a conservative estimate of scanning speed. Looking at peak intensity, the maximum average power used during cell imaging (17 mW used for imaging in the tissue phantom) corresponds to a peak intensity at the sample of ~13 GW/cm², which is below the maximum peak intensities found to be safe for long-term two-photon imaging in recent studies [35][36][37][38]. In addition, the fast scanning speed used in the probe results in far fewer consecutive pulses delivered per spot at this intensity, which further reduces the overall laser dosage to the sample when imaging with the probe. The favorable comparison shown in Table 1 indicates that the collection efficiency of the probe is sufficiently high to
enable safe cellular two-photon imaging using conventional fluorophores.
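For readers who wish to reproduce the intensity figures quoted in this discussion, the sketch below performs the same arithmetic, approximating peak power as pulse energy divided by pulse duration and spreading it over the focal-spot area; the function name and the flat-top spot approximation are ours, not the paper's.

```python
import math

def peak_intensity_TW_cm2(pulse_energy_nJ: float,
                          pulse_duration_fs: float,
                          spot_diameter_um: float) -> float:
    """Approximate peak intensity of a focused ultrashort pulse.

    Peak power is taken as pulse energy / pulse duration and divided by the
    area of a circle with the given spot diameter (a flat-top approximation).
    """
    peak_power_W = (pulse_energy_nJ * 1e-9) / (pulse_duration_fs * 1e-15)
    area_cm2 = math.pi * (spot_diameter_um * 1e-4 / 2.0) ** 2
    return peak_power_W / area_cm2 / 1e12  # TW/cm^2

# Single-pulse ablation condition reported above: 280 nJ, ~178 fs, ~3.8 um spot
print(f"{peak_intensity_TW_cm2(280, 178, 3.8):.1f} TW/cm^2")  # ~14 TW/cm^2
```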
Conclusion
In conclusion, we have developed a miniaturized TPM/FLMS probe that can perform microsurgery through precise femtosecond laser ablation and provide visualization of the operation region through two-photon imaging. This miniaturized probe measures only 10 × 15 × 40 mm³ and serves as a flexible test bench towards development of a clinical TPM/FLMS endoscope. Imaging is accomplished by Lissajous scanning a two-axis gimbal MEMS mirror, while microsurgery is accomplished using prechirped femtosecond laser pulses with energies up to 280 nJ through an air-core PCF. Future design improvements such as a metallic-coated high-reflectivity MEMS mirror and a high-NA miniature objective lens, which will provide increased power delivery and improved collection efficiency, respectively, can enable imaging of cellular autofluorescence with the probe. An increase in numerical aperture will also be beneficial for precise FLMS inside bulk tissue, where tighter focusing can help to reduce the pulse energy and avoid collateral damage arising from nonlinear effects [3]. Meanwhile, novel two-photon contrast agents, such as bright luminescent gold nanorods, can be used to reduce the required excitation power by a couple of orders of magnitude in addition to providing molecularly specific imaging [39,40]. Improvements in axial resolution would also improve optical sectioning, and could allow for 3D imaging of tissue if paired with a mechanism for axial scanning. Several such systems have been devised, including translating individual components or the system by a micromotor, piezoelectric actuator, or MEMS device for fully automated three-dimensional imaging [20,41]. These improvements, combined with further miniaturization through the use of smaller-diameter relay optics and optimized packaging, can lead to a powerful clinical TPM/FLMS device.
Fiber-coupled systems with near-video rate imaging and high precision surgery capabilities such as the one presented here can be used for live animal studies for developing clinical techniques.The optical design approach presented in this paper shows great promise and could find applications in such disparate fields as oncology, dermatology, and neurosurgery.
Fig. 1 .
Fig. 1. The 10 × 15 × 40 mm³ miniaturized two-photon microscope and femtosecond laser microsurgery probe. (a) The model includes 1) air-core PCF and GRIN collimating lens, 2) two-axis MEMS scanner, 3) miniature aspheric relay lenses, 4) mirror, 5) dichroic mirror, 6) 0.46-NA GRIN objective lens, and 7) 2-mm-core plastic optical fiber. (b) The photograph shows the miniature probe as built, without the delivery fiber and the lid that was used to seal the probe. The PCF and its collimating GRIN lens were mounted separately and aligned to the probe during experiments. (c) SEM micrographs of the PCF core and cladding structure and (d) of the MEMS scanning mirror design. The scale bars are 15 μm in (c) (3 μm in inset) and 600 μm in (d) (120 μm in inset).
Fig. 2 .
Fig. 2. Two-photon fluorescence imaging characterization of the miniature TPM/FLMS probe.(a) 1 μm fluorescent beads on glass, demonstrating 310 μm maximum FOV.Laser power at the sample was measured to be 8.2 mW.(b) A representative lateral point spread function from 100 nm fluorescent beads in agar (shown in inset).Black dots represent measured intensity values while the red line is the Gaussian curve fit.(c) Pollen grains imaged using 9.0 mW average power at the sample.Image (a) is averaged over 0.6 seconds, while images (b) and (c) are averaged over 5 seconds, all at 10 fps.Images (a) and (c) were spatially filtered.Scale bars are 50 μm in (a), 5 μm in (b), and 25 μm in (c).
Fig. 3 .
Fig. 3. Combined two-photon microscopy and femtosecond laser microsurgery of a single layer of breast carcinoma cells. (a) Two-photon image of a single layer of live breast carcinoma cells after uptake of calcein AM, taken prior to irradiation with high-intensity pulses. (b) The same FOV as (a), immediately after irradiation with a single pulse at 280 nJ pulse energy. The average laser power used for imaging in both images was 10 mW. Both images were averaged over 5 seconds at 10 fps and spatially filtered. Note that the targeted cell has lost fluorescence while the cell touching the targeted cell is left intact. Scale bars are 20 μm.
Figure 4 .
Figure 4. Combined two-photon microscopy and femtosecond laser microsurgery of breast carcinoma cells in a collagen tissue phantom.(a) Lateral slices with FOV of 116 × 160 μm 2 depicting a cell targeted for ablation as well as cells above it.The distance between the center of the targeted cell and those of the two cells above it is ~35 μm.(b) The same cells shown in (a) after irradiation of the targeted cell with 5000 pulses at 213 nJ pulse energy.(c, d) Vertical slice reconstruction through the same targeted cell and the cells above it before and after laser irradiation, respectively.Total imaging depth was 210 μm and the axial spacing between lateral slices was 6.6 μm.Scale bars are 20 μm.
Table 1 .
Comparison of imaging conditions proven not to affect cell viability to imaging conditions used in this study.
"Engineering",
"Medicine",
"Physics"
] |
N- and L-Type Voltage-Gated Calcium Channels Mediate Fast Calcium Transients in Axonal Shafts of Mouse Peripheral Nerve
In the peripheral nervous system (PNS) a vast number of axons are accommodated within fiber bundles that constitute peripheral nerves. A major function of peripheral axons is to propagate action potentials along their length, and hence they are equipped with Na+ and K+ channels, which ensure successful generation, conduction and termination of each action potential. However, little is known about Ca2+ ion channels expressed along peripheral axons and their possible functional significance. The goal of the present study was to test whether voltage-gated Ca2+ channels (VGCCs) are present along peripheral nerve axons in situ and mediate rapid activity-dependent Ca2+ elevations under physiological circumstances. To address this question we used mouse sciatic nerve slices, the Ca2+ indicator Oregon Green BAPTA-1, and 2-photon Ca2+ imaging in fast line scan mode (500 Hz). We report that transient increases in intra-axonal Ca2+ concentration take place along peripheral nerve axons in situ when axons are stimulated electrically with single pulses. Furthermore, we show for the first time that Ca2+ transients in peripheral nerves are fast, i.e., occur in a millisecond time domain. Combining Ca2+ imaging and pharmacology with specific blockers of different VGCC subtypes we demonstrate that Ca2+ transients in peripheral nerves are mediated mainly by N-type and L-type VGCCs. Discovery of fast Ca2+ entry into the axonal shafts through VGCCs in peripheral nerves suggests that Ca2+ may be involved in regulation of action potential propagation and/or properties in this system, or mediate neurotransmitter release along peripheral axons as it occurs in the optic nerve and white matter of the central nervous system (CNS).
INTRODUCTION
In the peripheral nervous system (PNS) a vast number of axons are accommodated within fiber bundles that constitute peripheral nerves. The major function of peripheral axons is to propagate action potentials along their length, and therefore axons are equipped with voltage-gated Na+ and K+ channels which ensure successful generation, conduction and termination of each action potential. In addition to Na+ and K+ channels, voltage-gated Ca2+ channels (VGCCs) are expressed on peripheral axons. However, only a few groups have so far directly studied these channels, and very little is known about their subtypes, developmental regulation, and function. A Ca2+ conductance probably mediated by VGCCs was detected in rat preganglionic cervical sympathetic nerves (Elliott et al., 1989) and in unmyelinated fibers of biopsied human sural nerve (Quasthoff et al., 1995, 1996). An increase in intra-axonal Ca2+ level along peripheral axons was reported during prolonged (0.3-10 s) repetitive electrical stimulation of rat vagus nerve and of biopsied human sural nerves (Wächtler et al., 1998; Mayer et al., 1999), yet the exact channel subtypes mediating Ca2+ influx remain unknown. In mouse postganglionic sympathetic axonal bundles Ca2+ transients could be detected not only during train stimulation but also in response to a single stimulus (Jackson et al., 2001). VGCCs located along axonal shafts in the PNS could be of great significance for modulation of action potential conduction velocity and/or frequency (François et al., 2015), fast axonal transport (Chan et al., 1980), or release of neuropeptides (Eberhardt et al., 2008; Spitzer et al., 2008). To play a modulatory role during these fast cellular processes, Ca2+ transients in the peripheral axons should occur in a millisecond time domain. Yet, it remains unclear whether rapid activity-dependent Ca2+ elevations take place along peripheral axons under physiological conditions, because in the previous studies either image acquisition was done using a relatively slow frame scanning mode and low sampling rate (Wächtler et al., 1998; Mayer et al., 1999; Jackson et al., 2001), or Ca2+ conductance was measured with blockers of K+ channels in the bath or a strongly elevated extracellular K+ concentration (Elliott et al., 1989; Quasthoff et al., 1995, 1996). It is also unclear which types of VGCCs mediate rapid Ca2+ elevations in the peripheral axons. Remarkably, in the central nervous system (CNS) VGCCs are present along the axons in several structures including the retina (Sargoy et al., 2014), cerebellum (Callewaert et al., 1996; Forti et al., 2000), corpus callosum (Kukley et al., 2007), optic nerve (Lev-Ram and Grinvald, 1987; Fern et al., 1995; Sun and Chiu, 1999; Brown et al., 2001; Zhang et al., 2006; Alix et al., 2008) and spinal dorsal column (Ouardouz et al., 2003). They open in a millisecond time domain upon action potential arrival and mediate fast Ca2+ transients which are similar to those observed in presynaptic nerve terminals at conventional neuronal synapses (Lev-Ram and Grinvald, 1987; Sun and Chiu, 1999; Kukley et al., 2007). Axonal Ca2+ transients in the CNS are involved in synaptic signaling between axons and oligodendrocyte progenitor cells (Kukley et al., 2007; Ziskin et al., 2007), modulation of axonal excitability, and regulation of intracellular Ca2+ level during axonal growth (Sun and Chiu, 1999; Bucher and Goaillard, 2011).
A major goal of the present study was to test whether VGCCs mediate rapid (millisecond time domain) activity-dependent Ca2+ elevations along mammalian peripheral nerve axons in situ under physiological conditions. Answering this question is of great importance for follow-up research on the functional role of VGCCs in peripheral nerves in situ and in vivo, and is also of clinical and pharmaceutical relevance. Using 2-photon Ca2+ imaging in line scan mode we found that action potentials trigger fast Ca2+ transients along peripheral nerve axons in situ; these Ca2+ transients involve activation of N- and L-type VGCCs.
Animals
C57BL/6N mice were originally obtained from Charles River and bred in house. All experiments were performed in accordance with the guidelines of the Animal Care and Use Committee at the University of Tübingen.
Ca 2+ Indicator Injection
Individual sciatic nerve slices were loaded with the high-affinity Ca2+ indicator Oregon Green BAPTA-1 (OGB-1 AM) or the low-affinity indicator Magnesium Green, as described previously (Regehr, 2000). Briefly, 50 µg of Ca2+ indicator was dissolved in 20 µl of pluronic acid in dimethyl sulfoxide (20% w/v). Four hundred microliters of normal ACSF were added to this solution. The final Ca2+ indicator concentration was ∼100 µM. A volume of 5 µl of the indicator solution was loaded into a glass micropipette (diameter ∼3-6 µm), which was lowered into a nerve slice; a small positive pressure was applied for 10-15 min. Subsequently the slice was washed with ACSF for ∼15 min.
Ca 2+ Imaging with Two-Photon Excitation Microscopy
Individual nerve slices filled with Ca2+ indicator were transferred to a recording chamber mounted on the stage of a 2-photon laser-scanning microscope (LaVision BioTec, Germany) and perfused with ACSF containing 2.5 mM Ca2+. Axons were stimulated with a monopolar glass electrode (3-6 µm tip diameter) filled with normal ACSF. Single pulses (pulse length 200-500 µs, pulse amplitude 50 V) were applied every 30 s using an isolated pulse stimulator (ISO-STIM 01D, NPI Electronic, Germany). To achieve high temporal resolution, line scanning was performed perpendicular to the orientation of the axons at a frequency of 500 Hz. The dye was excited at 790 nm (Spectra-Physics MaiTai HP laser) and fluorescence signals were detected using a high-sensitivity photomultiplier H7422-40 (Hamamatsu, Japan) after filtering with a DCLP dichroic mirror (>500 nm). A laser-scanning system (TriM Scope II, LaVision BioTec, Germany) coupled to an upright microscope (Olympus, Japan) equipped with a 20×, NA 1.1 water-immersion objective (Zeiss, Germany) was controlled using ImSpector Pro software (version 4.0, LaVision BioTec, Germany), which also allowed online analysis of the data. The scan head and stimulator were synchronized using Igor Pro 6.2 software (WaveMetrics, Lake Oswego, OR, USA) and an external trigger system (SyncUnit, LaVision BioTec, Germany).
Analysis of Ca 2+ Imaging Data
The amplitude of the Ca2+ fluorescence signal was measured in parts of axonal bundles positioned in the focal plane, as the ratio of the difference between the peak fluorescence and the resting fluorescence (∆F = F − F0) to the resting fluorescence (F0), after background subtraction. The background region was chosen as the least bright area in the field of view (FOV), not more than 10 µm away from the recorded axon. The analysis was performed using custom-written macros for IgorPro (WaveMetrics, Lake Oswego, OR, USA). The 10-90% rise-time of the Ca2+ transients was measured manually using hairline cursors in IgorPro. To determine the decay time constant, a mono-exponential function was fitted to the decaying part of the transient, from the peak until ∼500 ms after the peak. The graphs show mean ± standard error of the mean (SEM).
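A minimal sketch of the ΔF/F0 measurement and mono-exponential decay fit described above is shown below. It assumes a single fluorescence trace sampled at 500 Hz, uses SciPy rather than the IgorPro macros mentioned in the text, and the array names and stimulus index are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def analyze_transient(f_axon, f_background, fs=500.0, stim_idx=100):
    """Compute dF/F0 of a line-scan trace and fit a mono-exponential decay.

    f_axon, f_background: 1-D fluorescence traces (axon ROI and background).
    fs: line-scan rate in Hz; stim_idx: sample index of the stimulus.
    """
    f = np.asarray(f_axon, float) - np.asarray(f_background, float)  # background subtraction
    f0 = f[:stim_idx].mean()                     # resting fluorescence
    dff = (f - f0) / f0                          # delta-F over F0
    peak_idx = stim_idx + int(np.argmax(dff[stim_idx:]))
    # fit the decay from the peak to ~500 ms after the peak
    n_fit = min(int(0.5 * fs), len(dff) - peak_idx)
    t = np.arange(n_fit) / fs
    y = dff[peak_idx:peak_idx + n_fit]
    popt, _ = curve_fit(lambda t, a, tau: a * np.exp(-t / tau), t, y, p0=(y[0], 0.3))
    return dff[peak_idx], popt[1]                # peak amplitude, decay tau (s)
```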
Measurement of Axon Diameter
To estimate the diameter of single axons within small axonal bundles from which Ca 2+ transients were recorded, 3D line-scan pictures (z-stack) of the axons loaded with OGB-1 were recorded. The brightness profile of the line-scans was plotted, and the diameter of single axons was measured in the plane where the axons were in focus. ImSpector Pro Software (LaVision Biotech, Germany) was employed for these measurements.
Immunohistochemistry and Confocal Laser Scanning Microscopy
Newborn mouse pups (P0-2) were sacrificed by decapitation without anesthesia and both sciatic nerves were isolated. The nerves were transferred to a Petri dish and maintained for ∼15 min in ice-cold high-Mg2+ ACSF containing, in mM: 124 NaCl, 1.25 NaH2PO4, 10 MgSO4, 2.7 KCl, 26 NaHCO3, 2 CaCl2, 2 ascorbic acid, 18.6 glucose. The nerves were fixed for 1 h in 4% paraformaldehyde (PFA) prepared in phosphate-buffered saline (PBS). PBS contained, in mM: 4.3 Na2HPO4, 1.6 NaH2PO4, 150 NaCl. Afterwards the nerves were washed with PBS (3 × 15 min) and transferred to a 30% sucrose solution in PBS, where they were kept overnight at 4°C. The nerves were then embedded in Tissue-Tek (Sakura Finetek Europe, Netherlands) and frozen at −80°C. Ten-micrometre-thick slices were prepared with a Leica CM3050S cryotome and Leica 819 microtome blades, and transferred onto glass slides. The slices were washed (3 × 15 min) with tris-buffered saline (TBS) and incubated in blocking solution for 2 h at room temperature. TBS contained, in mM: 100 Sigma 7-9, 154 NaCl. Blocking solution contained 3% bovine serum albumin and 0.2% Triton-X in TBS. The slices were incubated with primary antibody overnight at 4°C, washed in TBS (3 × 15 min), and incubated with secondary antibody coupled to a fluorescent dye for 3-4 h at room temperature. For double and triple immuno-labeling the antibodies were applied sequentially, i.e., first primary followed by first secondary, followed by second primary, followed by second secondary, etc. All antibodies were applied in the blocking solution. Washing of the slices with TBS (3 × 15 min) was performed after incubation with each antibody. At the end of the immuno-labeling procedure the counterstain 4′,6-diamidino-2-phenylindole (DAPI, 5 mg/ml) was applied for 5 min at room temperature. The slices were washed with water, dried, covered with Vectashield (Vector Laboratories, Inc., Burlingame, CA, USA), and sealed with nail polish. The list of antibodies used in this study is given in Table 1. Confocal images were acquired with a confocal laser scanning microscope LSM-710 (Zeiss, Germany) equipped with a 40× objective (Plan-Apochromat 40×/1.3 Oil DIC M27, Zeiss, Germany). The dyes were excited with the following laser lines: 405 nm for DAPI, 488 nm for Alexa Fluor 488, 568 nm for rhodamine-red-X (RRX) or Cy3, and 633 nm for Alexa Fluor 633 or Cy5. The pinhole was set to 34-38 µm depending on the wavelength and was adjusted so that the optical section for each channel was 0.9 µm. Images for multiple channels were acquired sequentially, and care was taken that the parts of the emission spectra from which light was collected for the different dyes did not overlap. Images were further analyzed with ZEN software (Zeiss, Germany).
Statistics
Statistical analysis was performed using SPSS Statistics software (version 23.0, IBM Corp., Armonk, NY, USA). Statistical significance of the drug effect was determined with a paired-samples t-test. All values are shown as the mean ± SEM. Differences were considered significant at p < 0.05 (*p < 0.05, **p < 0.01, ***p < 0.001).
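As an illustration of the paired comparison described above, the snippet below uses SciPy's paired-samples t-test in place of SPSS; the amplitude values are invented placeholders, not data from the study.

```python
from scipy import stats

# Peak Ca2+-transient amplitudes (dF/F0) before and during blocker application
# for the same axons -- illustrative numbers only.
control = [0.21, 0.18, 0.25, 0.20]
drug    = [0.12, 0.11, 0.16, 0.13]

t_stat, p_value = stats.ttest_rel(control, drug)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # significant if p < 0.05
```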
Electrical Stimulation of Nerve Bundles Triggers Ca 2+ Transients Along Sciatic Nerve Axons
The first goal was to test whether activity-dependent Ca2+ transients occur along mouse sciatic nerve axons in a millisecond time domain, and to assess whether a high- or low-affinity indicator works best to measure these transients. We performed 2-photon Ca2+ imaging in nerve slices filled with the high-affinity Ca2+ indicator OGB-1 AM (Kd = 170 nM) or the low-affinity Ca2+ indicator Magnesium Green (Kd = 6 µM), while stimulating axons electrically (Figures 1A,D). We aimed to image small axonal bundles which had a constant diameter (in the range of 3-12 µm) over a length of tens of micrometers (Figure 1B). We estimated that the diameter of the thin axons comprising these bundles was in the range of 0.6-2.4 µm (Figure 1E). Each region of interest (ROI) was selected as a line placed perpendicular to the orientation of the axons (Figure 1A). We avoided imaging cellular structures appearing as varicosities and potentially representing growth cones or cut-and-resealed axons. To ensure that we recorded Ca2+ transients selectively in axons, and not in developing Schwann cells, we acquired all scans far from the indicator injection site (>300 µm). This was important, as we observed that at the injection site both Schwann cells and axons took up the dye, while far from the injection site only axons were stained with the indicator and no glial cells were labeled (Figure 1A left, B). Based on previous studies (Thaxton et al., 2011) and our own unpublished observations, the end-to-end length of a Schwann cell in the sciatic nerve slice prepared from a neonatal mouse is no longer than 300 µm. In addition, Schwann cells in the neonatal sciatic nerve are not coupled via gap junctions (own unpublished observation). Hence, at a distance of >300 µm from the injection site, which exceeds the length of a Schwann cell in our preparation, we could selectively image the axons. While stimulating sciatic nerve axons electrically with single pulses every 30 s, we repeatedly executed fast line scans (500 Hz) perpendicular to the orientation of the axons and tracked changes in the fluorescence of OGB-1 (Figure 1A right) or Magnesium Green (not shown). Stimulation of the axons led to a fast increase of Ca2+-dye fluorescence which then decayed back to baseline, indicating that changes in Ca2+ level occur in sciatic nerve axons upon electrical activity (Figure 1A right). The peak amplitude of Ca2+ transients in axons loaded with OGB-1 and stimulated with a single pulse was typically several times larger than in axons loaded with Magnesium Green, and the signal-to-noise ratio was much better with OGB-1 than with Magnesium Green (Figure 1D). Further, with Magnesium Green we usually had to stimulate the axons with trains (e.g., 3-50 pulses at 25-100 Hz) rather than with single pulses in order to detect Ca2+ transients. The 10-90% rise-time and the decay time constant of Ca2+ transients recorded with OGB-1 were 7.73 ± 0.56 ms (n = 6) and 323 ± 30 ms (n = 7), respectively (Figure 1C). Ca2+ transients recorded with Magnesium Green were very small upon single-pulse stimulation, therefore it was difficult to estimate rise and decay times reliably even when several sweeps were averaged. We could do this only in one experiment, where the 10-90% rise-time was 4.48 ms and the decay time constant was 166 ms (Figure 1D).
Based on these findings we decided to use a high-affinity Ca 2+ indicator OGB-1 for our experiments, aiming for higher signal sensitivity but keeping in mind that OGB-1 likely reports an overestimate of rise-and decay time of Ca 2+ transients along the axons (Regehr, 2000).
Ca 2+ Transients Along Sciatic Nerve Axons Depend on TTX-Sensitive Action Potentials
In brain slices, electrical stimulation of gray and white matter axons results in activation of VGCCs located in presynaptic boutons or along axonal shafts (Koester and Sakmann, 2000;Kukley et al., 2007). This activation depends on action potentials mediated by TTX-sensitive Na + channels. As peripheral nerves contain both TTX-sensitive and TTX-resistant Na + channels (Kostyuk et al., 1981), we tested whether Ca 2+ transients in sciatic nerve axons are inhibited by TTX. We stimulated the axons electrically with single pulses at 0.033 Hz and acquired line-scans as described above. After verifying that the amplitude of evoked Ca 2+ transients remains stable for at least 10 min, we applied TTX (1 µM) via the bath. TTX reduced the peak amplitude of Ca 2+ transients by 97 ± 6% (Figures 2A,B) indicating that Ca 2+ transients along the axons depend on action potentials mediated by TTX-sensitive Na + channels. However, in one experiment we found that the amplitude of Ca 2+ transients was decreased only by 68% upon TTX application (not shown), suggesting that TTX-resistant Na + channels and/or Na + -action-potential independent mechanisms may partially mediate evoked Ca 2+ increase along sciatic nerve axons.
Ca 2+ Transients Along Sciatic Nerve Axons Involve Ca 2+ Influx from the Extracellular Space
To investigate the origin of Ca2+ transients in peripheral nerve axons, we perfused the slices with ACSF containing a reduced Ca2+ concentration (1.8, 1.2 or 0.5 mM Ca2+ instead of 2.5 mM). The total divalent concentration was maintained constant by adjusting the level of Mg2+ ions in the bath. Under these conditions, the peak amplitude of Ca2+ transients was reversibly reduced by 11 ± 2% in 1.8 mM Ca2+ (Figures 3A,D; n = 3), 29 ± 3% in 1.2 mM Ca2+ (Figures 3B,D; n = 3), and 55 ± 3% in 0.5 mM Ca2+ (Figures 3C,D; n = 3). These findings suggest that evoked Ca2+ transients along sciatic nerve axons involve Ca2+ influx from the extracellular space.
Figure 3: The Ca2+ concentration in the control solution was 2.5 mM. Horizontal bars indicate the time period when the solution with reduced Ca2+ concentration was applied. The amplitude of the Ca2+ transients is normalized to the amplitude of the transients obtained in the presence of 2.5 mM external Ca2+. "a", "b", and "c" indicate the time periods from which 10 successive sweeps were averaged (the corresponding averages are shown to the right of each time course). Ca2+ transients were recorded in axons filled with OGB-1. (A-C) Right: example traces recorded before (black), during (red), and after (gray) bath application of the reduced-Ca2+ solution.
Notably, in the experiments with various Ca2+ concentrations in the bath we observed that the relationship between the peak amplitude of Ca2+ transients (∆F/F) and the extracellular Ca2+ concentration is not linear; rather, Ca2+ influx tends to saturate with increasing Ca2+ level in the bath (Figure 3D). This non-linearity may be explained by the fact that the Ca2+-binding site(s) at the membrane surface or within the channel pore, to which Ca2+ ions have to bind in order to pass through the channel, become saturated at higher extracellular Ca2+ concentrations (Augustine and Charlton, 1986; Mintz et al., 1995). Alternatively, the observed non-linearity may be explained by slight saturation of OGB-1 when Ca2+ transients are recorded with 2.5 mM Ca2+ in the bath.
Figure 4: Horizontal bar indicates the time period of VGCC blocker application. (B-E) Right: example traces recorded before (black) or during (red) bath application of a VGCC blocker. Each example trace represents an average of 10 successive sweeps. The black vertical bar indicates the time point of electrical stimulation. (F) Summary bar graphs showing the effect of specific VGCC blockers on the amplitude of Ca2+ transients vs. control. The amplitude was reduced in the presence of the N-type VGCC blocker ω-conotoxin GVIA (1 µM, n = 4, ***p < 0.001), as well as in the presence of the L-type VGCC blocker nisoldipine (1 µM, n = 5, **p < 0.01). In contrast, the amplitude was unaffected by the P/Q- and T-type VGCC blockers, ω-agatoxin IVA (500 nM, n = 4) and TTA-P2 (1 µM, n = 5), respectively.
These results indicate that Ca 2+ influx along sciatic nerve axons is partially mediated by N-and L-type VGCCs while P/Q and T-type VGCCs are not involved.
Immunohistological Evidence for VGCCs in the Mouse Sciatic Nerve
To obtain additional independent evidence for the presence of VGCCs in the developing mouse sciatic nerve, we performed immunohistochemistry. We found that both L-and N-type VGCCs were present in the nerve, but their localization was different. L-type VGCCs appeared on bundles of thin axons which often showed weaker labeling with neurofilament (NF200) than the other axons in the nerve (n = 3 animals, Figures 5A-H).
The axons expressing L-type VGCCs showed no co-labeling with myelin basic protein (MBP; n = 3 animals, Figures 5I-L) or choline acetyltransferase (ChAT), a marker of motor axons (n = 3 animals, Figures 5M-P). These findings suggest that L-type VGCCs are expressed by non-myelinated sensory fibers. N-type VGCCs appeared on myelinated axons (n = 3 animals, Figures 6E-H) which were also positive for NF200 (n = 3 animals, Figures 6A-D). However, the resolution of our confocal system did not allow us to conclude reliably whether N-type VGCCs were expressed solely on the axonal membrane or on the myelin as well. Some axons positive for N-type VGCCs co-labeled with ChAT (n = 3 animals, Figures 6I-O), while other axons expressing N-type VGCCs were negative for ChAT (n = 3 animals, Figures 6I-L,P-R). These data indicate that N-type VGCCs are present on myelinated sensory and motor fibers.
DISCUSSION
The first important finding of the present study is that transient increases in axoplasmic Ca2+ concentration take place in axonal shafts of the neonatal mouse peripheral nerve when axons are stimulated electrically with single pulses. Further, we show for the first time that Ca2+ transients in peripheral nerves in situ are fast, i.e., occur in a millisecond time domain. Up to now, only a few studies have reported transient activity-dependent Ca2+ elevations along peripheral nerve axons in situ (Elliott et al., 1989; Quasthoff et al., 1995, 1996; Wächtler et al., 1998; Mayer et al., 1999; Jackson et al., 2001). However, no firm conclusion regarding the kinetic parameters of Ca2+ transients can be drawn from these studies, because the time course of the Ca2+ transients was rate-limited by slow acquisition, i.e., a slow frame scanning mode and low sampling rate (∼2.5 Hz; Jackson et al., 2001). At the same time, Ca2+ transients with fast kinetics have been reported in axons of dorsal root ganglion neurons in culture, but the involvement of VGCCs in these Ca2+ elevations has not been investigated (Lüscher et al., 1996). In the present study we used a fast acquisition mode, i.e., line scanning at 500 Hz, and found that action potentials in mouse sciatic nerve axons in situ trigger axoplasmic Ca2+ elevations which rise relatively fast (10-90% rise-time ∼7.7 ms) and decay back to baseline with a slower time constant τ of ∼320 ms, as estimated with the high-affinity Ca2+ indicator OGB-1. These values are quite similar to those obtained with a Ca2+ indicator of comparable Kd in other preparations, including mouse cerebellar mossy fiber boutons (Delvendahl et al., 2015) and presynaptic terminals of the rat calyx of Held (Borst et al., 1995). Remarkably, aiming for sufficient sensitivity and a good signal-to-noise ratio during imaging of small axons in neonatal mouse nerve, we selected the high-affinity Ca2+ indicator OGB-1 (Kd = 170 nM) for our experiments. The shortcoming of this experimental design is that OGB-1 may be too slow to precisely follow rapid changes in intra-axonal Ca2+ concentration, and most likely also adds some buffer capacity to the axoplasm (Regehr and Atluri, 1995). Hence, the actual activity-dependent Ca2+ dynamics in the axoplasm is likely to be even faster than reported by OGB-1. Taken together, our findings indicate that transient activity-dependent Ca2+ elevations along peripheral nerve axons can occur on a rapid time scale, similar to what happens at synaptic boutons or along axonal shafts in the CNS.
The second important finding of our study is that activity-dependent Ca2+ transients along peripheral nerve axons in the neonatal mouse depend on Ca2+ influx from the extracellular space and involve activation of N- and L-type VGCCs. We found that a blocker of N-type VGCCs, ω-conotoxin GVIA, reduced the amplitude of Ca2+ transients by ∼40%, the L-type VGCC blockers nisoldipine or nifedipine caused a ∼15% reduction, while the blockers of P/Q- and T-type channels were ineffective. Furthermore, the results of our immunohistological experiments suggest that in the developing mouse sciatic nerve L-type VGCCs are present on non-myelinated sensory fibers, while N-type channels appear on myelinated motor and sensory axons. To the best of our knowledge, this is the first report on the subtypes of functional VGCCs present along peripheral nerve axons in neonatal mice. Interestingly, in the neonatal rodent central (optic) nerve L- or N-type VGCCs were likewise suggested to be of functional significance (Sun and Chiu, 1999; Alix et al., 2008), while P/Q-type channels seem to become involved later during development (Alix et al., 2008). L- and/or N-type VGCCs also mediate Ca2+ influx in the adult optic nerve during pathological conditions (Fern et al., 1995; Brown et al., 2001). When we compared our findings on VGCC subtypes in the neonatal mouse sciatic nerve to another preparation of peripheral nerve, i.e., the adult mouse postganglionic sympathetic axon bundle, it turned out that in those axons ∼40% of the total Ca2+ influx is also carried by N-type VGCCs; however, in contrast to our findings, L-type VGCCs were not involved (Jackson et al., 2001). In adult mouse C-fibers T-type VGCCs have been suggested to play a role in modulating action potential conduction velocity (François et al., 2015); however, in the neonatal mouse we could not find a T-type VGCC contribution to Ca2+ influx along the axons. Remarkably, at the mammalian neuromuscular junction, where some of the peripheral nerve axons terminate, the P/Q-type represents the major VGCC subtype, although L- and N-type VGCCs also play a role during development, re-innervation or pathological conditions (Katz et al., 1996; Nudler et al., 2003). At the distal nerve endings, in turn, T-type VGCCs have been found in addition to other VGCC subtypes (François et al., 2015). Hence the specific subtypes of VGCCs are likely targeted differently to different functional compartments of the same axon, and may also be regulated differently in developing and adult animals, as well as during pathological conditions. In addition to the known VGCC subtypes, other as yet unidentified subtypes of VGCCs, or alternative routes (e.g., a reversed Na+/Ca2+ exchanger, release from internal stores), may contribute to activity-dependent Ca2+ entry into the axoplasm of peripheral nerve axons. In line with this idea are our findings that Ca2+ transients in the mouse sciatic nerve are reduced only partially by specific blockers of VGCCs. Furthermore, in the unmyelinated nerve fibers of the rat vagus nerve neither L- nor N-type nor P/Q-type VGCCs mediated Ca2+ entry along the axons, although Ca2+ transients in that preparation were largely inhibited by Cd2+ (Wächtler et al., 1998). Hence, more experiments in various preparations of central and peripheral nerves/white matter are required to clarify this issue. Why do peripheral nerve axons express VGCCs along their shafts, and what could be the functional significance of activity-dependent axonal Ca2+ transients under physiological circumstances?
Ca2+ is involved in the majority of cellular functions. Importantly, as cells keep the free cytosolic Ca2+ level very low (∼100 nM), what determines the specificity and the functional output of each Ca2+-dependent process is the amplitude, the time course, and the spatial domain of a transient change in intracellular Ca2+ concentration (Berridge et al., 2003). Fast (microseconds to milliseconds) Ca2+ transients are usually involved in fast cellular processes, e.g., synaptic transmission, opening of Ca2+-dependent channels, muscle contraction, etc. (Berridge et al., 2003). At axonal synaptic terminals, for example, highly localized (nano- or microdomains), rapidly rising (<1 ms) and large (∼20-fold) Ca2+ elevations mediated by VGCCs trigger rapid release of synaptic vesicles and ensure high precision of synaptic signaling (Kandel et al., 2000). In turn, more global residual Ca2+ changes, which are also slower and smaller in amplitude, contribute to modulation of transmitter release, e.g., synaptic potentiation (Swandulla et al., 1991; Wang and Augustine, 2015). We want to emphasize that, as the Ca2+ transients recorded along peripheral nerve axons in our study rise within a few milliseconds (10-90% rise time ∼7.7 ms), and this time probably underestimates the true speed of Ca2+ influx into the axon upon action potential propagation, these Ca2+ transients are well suited to trigger and/or modulate relatively fast axonal processes. For example, a transient increase in axoplasmic Ca2+ concentration may be involved in regulation of action potential conduction or frequency through, e.g., activation of Ca2+-dependent K+ and/or Cl− channels, or inactivation of Ca2+ channels (Jirounek et al., 1991; Lüscher et al., 1996; Sun and Chiu, 1999; Alix et al., 2008). Another possible function of VGCCs and fast Ca2+ entry in peripheral axons, rarely considered in the literature, could be a contribution to neurotransmitter release (vesicular or non-vesicular) along axonal shafts. Rapid (few milliseconds) increases in Ca2+ concentration mediated by VGCCs take place along axonal shafts in the white matter of the CNS (Lev-Ram and Grinvald, 1987; Sun and Chiu, 1999; Kukley et al., 2007). They result in the buildup of axonal Ca2+ microdomains which are involved in triggering fast vesicular release of glutamate at synaptic-like junctions between axons and glial cells (Kukley et al., 2007; Ziskin et al., 2007). Intriguingly, peripheral axons also appear capable of releasing neurotransmitters (glutamate and acetylcholine) from their shafts, at least in two experimental paradigms: (a) when nerves are dissected from an animal, placed in Ringer solution and stimulated electrically (Lissak, 1939; Vizi et al., 1983); or (b) when dissected nerves are pre-loaded with labeled neurotransmitters (e.g., 14C-glutamate, tritiated choline, D-2,3-(3)H-aspartic acid) and stimulated electrically or magnetically (Wheeler et al., 1966; DeFeudis, 1971; Weinreich and Hammerschlag, 1975; Vizi et al., 1983; Wieraszko and Ahmed, 2009). The mechanisms of neurotransmitter release from peripheral nerve axons in situ or in vivo remain largely uninvestigated. But it is tempting to speculate that peripheral axons utilize a similar mechanism of release as callosal and optic nerve axons, i.e., VGCCs located along axonal shafts mediate Ca2+ influx followed by fusion and release of neurotransmitter-filled vesicles. Subsequently, the released neurotransmitter may bind to its receptors on neighboring Schwann cells.
In line with this hypothesis are recent findings in cell culture demonstrating that vesicular release of glutamate occurs along the axons of dorsal root ganglion neurons and mediates axon-glia communication important for myelination (Wake et al., 2011, 2015). Notably, the L- and N-type VGCCs expressed in peripheral nerve axons are the VGCC subtypes involved in neurotransmitter release at ribbon synapses and at conventional synapses between neurons, respectively (Catterall, 2011). A few older studies also show that axonal release in peripheral nerves resembles axon terminal release in many respects, e.g., it depends on extracellular Ca2+ and is stimulated by elevated extracellular K+ (Dettbarn and Rosenberg, 1966; Vizi et al., 1983; Wieraszko and Ahmed, 2009). Yet, other investigators do not support these findings and suggest that the mechanism of axonal release in peripheral nerves differs from release at synapses (Weinreich and Hammerschlag, 1975).
Finally, evidence is currently accumulating that multiple subtypes of VGCCs may contribute to injury mechanisms of central white matter axons (Fern et al., 1995;Brown et al., 2001;Tsutsui and Stys, 2013). In light of those findings it is likely that in addition to their physiological role, also VGCCs located along peripheral nerve axons may be of significance during pathological conditions, e.g., nerve injury, pain, or peripheral neuropathy.
AUTHOR CONTRIBUTIONS
All experiments were conducted in the laboratory of MK at the Centre for Integrative Neuroscience, University of Tübingen. MK and RB designed experiments. RB and FP performed experiments and analyzed data. MK and RB interpreted the findings, prepared the figures, and wrote the manuscript.
FUNDING
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) grants: KU2569/1-1 to MK, and PF574/5-1 to FP. This work was also supported by the Werner Reichardt Centre for Integrative Neuroscience (CIN) at the Eberhard Karls University of Tübingen. The CIN is an Excellence Cluster funded by the DFG within the framework of the Excellence Initiative (EXC 307).
"Biology"
] |
Physical Characteristics of Amorphous and Crystalline Coconut Sugar Powder with the Addition of Tricalcium Phosphate (TCP) as an Anticaking Agent
Coconut sugar powder produced by vacuum drying and by the conventional method has high hygroscopicity due to its high sugar content (mostly sucrose); caking therefore occurs easily during storage. An anticaking agent such as tricalcium phosphate was therefore added to the powder to maintain its stability. The purpose of this research was to determine the physical characteristics of amorphous and crystalline coconut sugar after the addition of tricalcium phosphate (TCP) at different concentrations. The two types of coconut sugar were prepared by the conventional method, which gave a predominantly crystalline structure, and by the vacuum drying method, which gave a mainly amorphous structure. TCP at concentrations of 0%, 0.5%, and 1% was added to both types of coconut sugar. The addition of the anticaking agent affected the water sorption of coconut sugar by decreasing the monolayer water content for both types of coconut sugar. TCP seemed to have a more significant effect on decreasing the hygroscopicity of the crystalline coconut sugar than of the amorphous one, while similar trends were obtained in increasing the flow ability of both types of coconut sugar. The capacity of TCP to cover the surface of the host coconut powder is proposed as the mechanism by which TCP decreases hygroscopicity and increases the flow ability of the host powder.
Introduction
Coconut sugar is commonly produced by evaporating coconut sap (known as neera). Neera is the sweet, oyster-white sap tapped from the immature inflorescence of the coconut palm. It is obtained from an immature inflorescence which is about to burst, and tapping can be done 12 to 15 times [1]. The main component of neera is sucrose, at more than 80% (per total solid), followed by small amounts of glucose and fructose (about 2.3% per total solid) [2]. Coconut sugar powder is produced conventionally by heating the coconut sap until a saturated solution is reached, after which crystalline coconut sugar powder is formed. Coconut sugar has also been produced by drying the coconut sap using spray drying and vacuum drying [2]. The dried coconut sugar produced in this way has a mainly amorphous structure, in contrast to the crystalline structure of coconut sugar obtained with the conventional method [2].
In the drying of coconut sugar, maltodextrin is added as a drying aid to raise the anhydrous glass transition temperature above ambient temperature. The addition of maltodextrin, which has a high glass transition temperature, can increase the process and storage stability of solid foods by reducing caking and stickiness and increasing flow ability. The addition of maltodextrin at a ratio of 50% (of total solid) was needed to create a significant impact on the glass transition temperature when producing coconut sugar powder by vacuum drying [2]. Both types of coconut sugar were hygroscopic, although the dried amorphous coconut sugar was more hygroscopic than the conventional coconut sugar powder [2].
Common problems that occur in food powders during storage and affect quality and functionality include caking due to water absorption. Therefore, the addition of an anticaking agent is needed to maintain powder stability [3]. The mechanisms by which anticaking agents function have been explained as (1) competing with the host powder for moisture, (2) creating moisture-protective barriers on the surface of hygroscopic particles or physical barriers between particles, (3) smoothing surfaces to eliminate interparticle friction, and (4) inhibiting the crystal growth important in solid bridge formation [4]. Some of the anticaking agents used in food powders are tricalcium phosphate (TCP), silicon dioxide, calcium stearate, etc. [5]. TCP is commonly used in sugar, salt, and spices [5]. The concentration of anticaking agent used is typically in the range of 1-2% [5]. Lipasek et al. [4] studied the use of anticaking agents on deliquescent materials (sucrose, sodium chloride, fructose, and citric acid) and found that the addition of an anticaking agent reduced moisture uptake and delayed the deliquescence point. Moreover, Nurhadi and Roos [6] added an anticaking agent to amorphous dried honey powder, which reduced its hygroscopicity and increased its flow ability. The effect of an anticaking agent on the same material with different structures has not yet been studied. Thus, the aim of the present work was to compare the properties of coconut sugar produced by two different methods, conventional and vacuum drying, having predominantly crystalline and amorphous structure, respectively, with the addition of the anticaking agent tricalcium phosphate.
Material and Methods
2.1. Materials. Coconut sap was obtained from Kertamukti village, Pangandaran District, West Java, Indonesia (170 km from the lab). Previously, before being delivered to the laboratory, the coconut sap had been boiled and then stored in a closed container. During transportation, the sample was kept cool in an ice box. In the lab, the coconut sap was kept frozen (GEA Chest Freezer, China) at -28°C and later thawed at room temperature before being used in further treatments. Maltodextrin DE 10-12 (Qinhuangdao Lihua Starch Co., Ltd., China) was used as drying aid. Tricalcium phosphate as the anticaking agent was obtained from PT Tigaka Distrindo Perkasa (Jakarta, Indonesia). The chemicals for water sorption determination in this study were lithium chloride (LiCl), magnesium chloride (MgCl 2 ), potassium carbonate (K 2 CO 3 ), magnesium nitrate (Mg(NO 3 ) 2 ), sodium nitrite (NaNO 2 ), sodium chloride (NaCl), potassium iodide (KI), and potassium sulphate (K 2 SO 4 ) (Merck, Germany). Silica gel and aluminium foil packaging were also used to complete the experiment.
Coconut Sugar Production by the Conventional Method.
Coconut sap was heated in a pan and continuously stirred using a spatula until it was boiling (110°C). After the coconut sap was boiled, the stirring process was speeded up to achieve a high viscosity until granules were formed [2]. The sample was then ground to reduce its size using a grinder (Getra IC-044, Indonesia) and then sieved with a 60-mesh sieve to get a homogeneous size of coconut sugar. The anticaking agent (TCP) was then added to the coconut sugar powder at different concentrations, viz., 0%, 0.5%, and 1% (per weight).
Coconut Sugar Production by the Vacuum Drying Method.
The coconut sugar powder was dried using a vacuum dryer (Binder VD 23, Tuttlingen, Germany), and the drying conditions followed the method developed by Nurhadi et al. [2]. First, the coconut sap was mixed with maltodextrin (50% per total solid) using a magnetic stirrer on a hot plate (Thermo Scientific Cimarec™ Stirring Hotplate, USA). The solid content of the coconut sap was first determined with a refractometer (Atago, Japan). Water was then added to the solution to reach a total solid concentration of 40%. The solution was then poured into a silicone baking tray to a thickness of ±3 mm. The temperature for vacuum drying was set to 70°C for 6 hours at an absolute pressure of 5 mmHg [7]. After the coconut sap was dried, the samples were put into a desiccator to cool to ambient temperature. The dried sample was ground to reduce its size using a grinder and then sieved using a 60-mesh sieve to obtain a homogeneous size of coconut sugar. Then, the anticaking agent TCP was added to the resulting coconut powder at the same concentrations as in the previous experiment.
The amorphous and crystalline contents of the two types of coconut sugar were assessed by X-ray diffraction (XRD D8 Advance, Bruker, Germany): the vacuum-dried sugar was about 75.6% amorphous, whereas the conventional sugar was more than 90% crystalline. These findings are in line with a previous study reported by Nurhadi et al. [2].
Water Sorption Isotherm (WSI).
Water sorption isotherms were determined using the static gravimetric method. Seven saturated salt solutions were prepared to vary the relative humidity in the desiccators. The salts used were LiCl, MgCl2, K2CO3, NaNO2, NaCl, KCl, and K2SO4 (Merck, Germany), giving water activities of 0.14, 0.23, 0.45, 0.65, 0.75, 0.84, and 0.95, respectively. The analysis was carried out in triplicate. Samples of 1 g of coconut sugar powder were weighed into vials and equilibrated over the saturated solutions. The samples were weighed periodically over three weeks at a constant temperature (25 ± 1°C) [8]. The water content of the samples was then measured by drying the samples in an oven at 100°C for 6 h. The Guggenheim-Anderson-de Boer (GAB) equation was used to relate water activity (a_w) and moisture content:
X = (X_m · C · K · a_w) / [(1 − K·a_w)(1 − K·a_w + C·K·a_w)], where X is the water content (g water/g dry solid), a_w is the water activity, X_m is the monolayer water content, and C and K are constants.
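A minimal sketch of fitting the GAB equation to equilibrium moisture data is given below; the moisture values are illustrative placeholders, and SciPy's curve_fit is our own choice of tool (the fitting procedure actually used in the study is not specified here).

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, Xm, C, K):
    """Guggenheim-Anderson-de Boer isotherm X(a_w)."""
    return Xm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# Equilibrium moisture content (g water / g dry solid) at each water activity
aw = np.array([0.14, 0.23, 0.45, 0.65, 0.75, 0.84, 0.95])
X  = np.array([0.02, 0.03, 0.05, 0.09, 0.13, 0.20, 0.35])  # placeholder values

(Xm, C, K), _ = curve_fit(gab, aw, X, p0=(0.05, 10.0, 0.9), maxfev=10000)
print(f"monolayer water content Xm = {Xm:.3f} g/g dry solid")
```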
Particle Size Analysis (PSA).
The particle size of the sample powder was measured using a particle size analyser (Beckman Coulter, LS, USA). The particle size was expressed as the mean volumetric size [9]. The samples were placed in a test tube and dispersed in a solvent: TCP was dispersed in aquadest, and coconut sugar powder was dispersed in isopropyl alcohol [10]. The tube was then inserted into the PSA device, and the instrument was controlled with computer software. The PSA output was a particle size distribution graph. After obtaining the particle size, the hypothetical surface area of the coconut sugar powder and the anticaking agent was calculated using the equation supplied by Earle [11]: A = 6λw/(ρD), where A is the area of the particles, λ is the shape factor (1.75), w is the particle mass, ρ is the particle density, and D is the diameter of the particles from the PSA result. The particle density of the coconut sugar and the anticaking agent was determined with a pycnometer (Pyrex Iwaki 2 ml). Coconut powder of known weight was filled into the pycnometer up to 2/3 of its volume. Isopropyl alcohol was then added to fill the test volume of the pycnometer until there were no more air bubbles. The pycnometer was left for 30 minutes at 25°C and then weighed. The particle density was calculated as ρ_s = (m_s − m_o)·ρ / [(m_1 − m_o) − (m_s1 − m_s)], where m_s is the weight of the pycnometer filled with the powder, m_o is the weight of the empty pycnometer, ρ is the density of the liquid (isopropyl alcohol), m_1 is the weight of the pycnometer filled with the liquid, and m_s1 is the weight of the pycnometer filled with both the solid and the liquid [12].
Figure 1: The coconut sugar powder moisture content exposed to different relative humidities during the WSI experiment (db = dry basis).
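The two calculations above can be summarized in a short sketch; it assumes the reconstructed formulas given in the text (A = 6λw/(ρD) and the pycnometer relation), and the numerical inputs are placeholders rather than the measured values reported in Table 2.

```python
def particle_surface_area(w, rho, D, shape_factor=1.75):
    """Hypothetical surface area of a powder sample, A = 6*lambda*w/(rho*D)."""
    return 6.0 * shape_factor * w / (rho * D)

def pycnometer_density(m_o, m_s, m_1, m_s1, rho_liquid):
    """Particle density from pycnometer weighings.

    m_o: empty pycnometer; m_s: pycnometer + powder;
    m_1: pycnometer + liquid; m_s1: pycnometer + powder + liquid.
    """
    powder_mass = m_s - m_o
    displaced_liquid_mass = (m_1 - m_o) - (m_s1 - m_s)
    powder_volume = displaced_liquid_mass / rho_liquid
    return powder_mass / powder_volume

# Illustrative use: 1 g of powder with density 0.9 g/cm^3 and mean diameter 50 um
print(particle_surface_area(w=1.0, rho=0.9, D=50e-4))  # cm^2, with D in cm
print(pycnometer_density(m_o=15.0, m_s=16.0, m_1=16.57, m_s1=16.69,
                         rho_liquid=0.785))            # g/cm^3, placeholder weights
```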
Scanning Electron Microscopy (SEM).
The surface morphology of the coconut sugar powder microspheres was examined by means of a JSM-IT300 InTouchScope™ scanning electron microscope (Japan), using a tilt angle of 40° and an accelerating voltage of 10 kV (modified from Hollenbach et al. [13]).
Hygroscopicity Rate.
Hygroscopicity was expressed as the rate of water absorption by the sample during storage at high relative humidity. Coconut powder (approximately 0.5 g) was placed in plastic vials and equilibrated over a saturated solution of NaCl at a relative humidity (RH) of 75%.
The weight change of sample was recorded at certain intervals for 4 hours [14].
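A minimal sketch of turning the recorded weights into a hygroscopicity rate is shown below; it simply regresses percent weight gain against time, and the weights are illustrative placeholders rather than measured data.

```python
import numpy as np

# Sample weights (g) recorded over 4 h at 75% RH -- placeholder values
times_h = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
weights = np.array([0.500, 0.504, 0.508, 0.515, 0.521, 0.527])

gain_pct = (weights - weights[0]) / weights[0] * 100   # % weight gain
rate = np.polyfit(times_h, gain_pct, 1)[0]             # slope of linear fit, %/h
print(f"hygroscopicity rate ~ {rate:.2f} % weight gain per hour")
```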
2.2.5. Angle of Repose. The angle of repose (Figure 2) is a parameter commonly used for the determination of the flow ability of a powder. The simplest method is the "poured" angle method. First, 10 grams of the sample was put into a Buchner funnel with its open end closed. Next, the bottom of the funnel was opened and the sample was allowed to fall onto a flat surface to form a conical heap [5]. The pouring of the sample was stopped when the heap reached a predetermined height or width. The angle of repose (α) was then calculated as illustrated in Figure 2: the average radius of the conical heap and the maximum height of the heaped material were measured, and the angle of repose was determined as the arctangent of the ratio of the maximum height to the average radius.
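The arctan rule described above amounts to a one-line computation; the heap dimensions in this sketch are illustrative placeholders.

```python
import math

def angle_of_repose_deg(max_height, avg_radius):
    """Angle of repose from heap height and average base radius (same units)."""
    return math.degrees(math.atan(max_height / avg_radius))

print(angle_of_repose_deg(max_height=1.5, avg_radius=2.8))  # ~28 degrees
```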
2.2.6. Colour Analysis. Colour characteristics of coconut sugar powder were determined with a spectrophotometer (Konica Minolta CM-5 Sensing Singapore Pte Ltd). In the standard method, the spectrophotometer was used and the results were expressed as L * , a * , and b * (L is the lightness; black, L = 0; white, L = 100; +a is redness, −a is greenness; +b is yellowness, −b is blueness).
Results and Discussion
3.1. Water Sorption Isotherm. The change in water content during storage for both coconut sugar powders, produced by the conventional and vacuum drying methods, with and without the addition of TCP, is presented in Figure 1. The curves show the water content increasing until it reaches its equilibrium value at each water activity. The coconut sugar powder obtained by the conventional method showed less water sorption at each a_w compared to the coconut sugar powder from the vacuum drying treatments.
Increasing the TCP concentration affected the time needed to reach the equilibrium state in the WSI experiment. Both the vacuum-dried and the conventional coconut sugar powder showed the same trend at increased TCP concentrations (Figures 1 and 3). This might be because increasing the TCP content strengthens ion-dipole interactions involving Ca2+, which decreases the water adsorption capacity. Increasing the amount of TCP increases the number of ion-dipole bonds; as a result, the addition of 1% TCP reduces the water adsorption capacity more than the addition of 0.5% TCP. The equilibrium water content data and the corresponding a_w values were then fitted to the Guggenheim-Anderson-de Boer (GAB) model. The GAB model is considered the best fit for food materials over a wide range of water activity and was used to correlate the WSI data. The results of fitting the GAB model to the experimental equilibrium moisture content at different water activities are presented in Table 1. The value of the monolayer moisture content (X_m) is of particular interest since it indicates the amount of water that is strongly adsorbed at specific sites on the food surface, and it is considered the optimum value to assure food stability, especially microbial stability [15]. The X_m value of the coconut sugar powder produced by the vacuum drying method was higher than that of the powder produced by the conventional method, and the X_m values decreased as TCP was added for both types of coconut sugar powder. From Figure 3, it can be seen that the vacuum-dried coconut sugar, with its predominantly amorphous structure, showed higher water sorption than the conventional one. Amorphous sugar adsorbs more water than its crystalline counterpart [16]. The WSI of the crystalline structure showed a "J shape" with a deliquescence point, which corresponds to the phase transition from solid to saturated liquid [16]. From Figure 3, the addition of the anticaking agent seemed to decrease water sorption for both types of coconut sugar. The anticaking agent might cover the hygroscopic surface of its host powder, resulting in less water absorption from the environment by the host [5].
The addition of the anticaking agent might also inhibit caking in the amorphous sugar [2]. Caking is basically the recrystallization of the amorphous sugar structure, and it is initiated by water sorption [17]. From Figure 4, the normal vacuum-dried coconut sugar started to cake at a_w 0.65, as indicated by the formation of a hard sugar texture, while this did not occur in the vacuum-dried coconut sugar with TCP addition even at higher relative humidity. The covering of the host powder particles with TCP seemed
to inhibit the formation of sinter bridges between the host powder particles, thus preventing caking [5].
While caking was not observed in the crystalline coconut sugar formed by the conventional method, the deliquescence started to occur at a w 0.75 for both normal crystalline sugar and the sugar with TCP addition. Lipasek et al. [4] reported that the anticaking agent might delay the deliquescence point.
Particle Size Analysis (PSA).
One of the mechanisms of the anticaking agent in maintaining the storage stability of hygroscopic host powder is by covering its hygroscopic surface area thus protecting the host from absorbing water [5].
Because the amount of anticaking agent allowed is very small (less than 2% of total weight), to be able to cover the surface area of the host material the anticaking agent should have a very small particle size. The smaller the particle size, the higher the resulting surface area. From Figure 5, it can be seen that the anticaking agent TCP had a much smaller particle size than both coconut sugars used as hosts. The particle sizes and surface areas of TCP and the two types of coconut sugar can be seen in Table 2.
We proposed that the added anticaking TCP would cover the surface area of the coconut sugar host material and thereby inhibit the absorption of moisture from the environment. The surface area calculation was performed as explained by Earle [11]. The shape factor chosen was 1.75 for ground material, compared to 1 for a cube or sphere [11]. The calculated surface areas of both types of sugar and of TCP are presented in Table 2. From Table 2, it can be seen that the addition of the anticaking agent at up to 1% could not completely cover the surface area of the coconut sugar host powder.
The increasing concentration of TCP increased the resulting surface area. TCP should be added hypothetically to cover completely at a concentration of 1.59% and 1.53% for the vacuum-dried coconut sugar and conventional coconut sugar, respectively. From Table 2, the TCP density is 0.317 g/ml, which is lower than the density of coconut sugar. Therefore, TCP would be at the top of the host surface and compete with the host powder to adsorb moisture from the environment [18].
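A minimal sketch of this kind of coverage estimate is given below; it assumes a simple Earle-type specific-surface relation A = 6*shape_factor/(density*diameter), and all numerical inputs are placeholders rather than the measured values of Table 2.

# Specific surface area per gram of powder; inputs are hypothetical placeholders.
specific_surface <- function(d_um, rho_g_cm3, shape = 1.75) {
  d_cm <- d_um * 1e-4                 # particle diameter in cm
  6 * shape / (rho_g_cm3 * d_cm)      # cm^2 per gram
}

A_sugar <- specific_surface(d_um = 300, rho_g_cm3 = 1.40)   # host sugar (assumed size and density)
A_tcp   <- specific_surface(d_um = 10,  rho_g_cm3 = 0.317)  # TCP (density taken from the text)

# TCP mass fraction (% of total) whose surface area would match the host's surface area
100 * A_sugar / (A_sugar + A_tcp)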
Scanning Electron Microscopy (SEM).
Photomicrographs of the coconut sugar powders from the two methods and of tricalcium phosphate (TCP) are shown in Figures 6 and 7. TCP has a very fine size and tends to aggregate, forming soft agglomerates with a very nonuniform size [13]. As previously stated, one of the anticaking mechanisms is covering the surface area of the host material, in this case the coconut sugar [5]. The presence of TCP on the surface of the coconut sugar powder is seen more clearly on the sample from the conventional method than on that from the vacuum drying method (Figures 6 and 7). From Figure 7, the finer TCP particles stuck to the surface of the conventional coconut sugar. Covering the host powder with anticaking TCP would prevent the host from absorbing moisture from the surroundings and consequently delay powder caking.
Hygroscopic Rate.
From Figure 8, the hygroscopicity of the vacuum-dried coconut sugar powder was higher than that of the conventional coconut sugar powder. This is due to the predominantly amorphous structure of the coconut sugar obtained from vacuum drying [2]. The amorphous material has a greater pore size than the crystalline material and thus a greater water sorption [19]. As can be seen from the WSI curve (Figure 3), the amorphous coconut sugar adsorbs more water than the conventional one. The amorphous structure has molecules that are not arranged regularly, are more open, and occupy a large volume; therefore, it binds water from the environment more easily. Meanwhile, the crystalline structure is nonhygroscopic, stable, and free flowing, so its water absorption only occurs on the external surface of the crystals [20].
The addition of an anticaking agent could decrease the hygroscopicity of both types of coconut sugar: the higher the TCP addition, the lower the hygroscopicity. A higher TCP concentration covers more of the host (coconut sugar) surface, which makes it more difficult for water vapor to be adsorbed into the material. From Figure 8, the addition of the anticaking agent seemed to have a more pronounced effect on decreasing the hygroscopicity of the conventional coconut sugar, with its predominantly crystalline structure, than on the amorphous vacuum-dried one.
3.5. Angle of Repose.
Free-flowing and granular materials poured through a funnel onto a flat surface produce a cone with a small angle of repose (35° or less), while cohesive powders, in contrast, have a higher angle of repose (above 55°) [5]. From Table 3, all the coconut sugar powders showed free-flowing properties, with angle of repose values in the range of 26.1-32.8°. The results followed the same trend as the hygroscopicity results: the more hygroscopic the sample, the lower its ability to flow. The addition of an anticaking agent such as TCP could reduce the angle of repose of both the amorphous and the crystalline sugar powder (Table 3). The smaller the angle of repose, the higher the flowability of the product. The addition of an anticaking agent to honey powder has also been reported to increase its flowability [2].
3.6. Colour Analysis.
The colour of the coconut sugar for all treatments can be seen in Figure 9. The coconut sugar obtained from vacuum drying showed a brighter colour due to the addition of maltodextrin (50% of total weight), which is itself white. The darker colour of the coconut sugar powder produced by the conventional method might be due to the Maillard reaction and caramelization occurring during processing at high temperature (Table 4). The Maillard reaction occurs between reducing sugars and amino acids, while caramelization occurs through the interaction of sugars at high temperatures (80°C) [21]. The addition of the anticaking agent did not seem to affect the colour of the coconut sugar. The anticaking agent TCP is white, but the concentration used was very small (maximum 1% of total weight) and thus did not make a significant colour difference.
Conclusions
Coconut sugar was obtained by two different methods, conventional and vacuum drying. The conventional coconut sugar had a predominantly crystalline structure, while the vacuum-dried coconut sugar had a mainly amorphous structure (75.6%). The anticaking agent tricalcium phosphate (TCP) was added to maintain the stability of the coconut sugar during storage and to increase its flowability. The TCP addition seemed to affect the water sorption of both types of coconut sugar by decreasing their monolayer water content (X_m). The addition of TCP resulted in decreased hygroscopicity and increased flowability for both types of coconut sugar. The stabilizing mechanism of TCP is likely related to its covering of the surface of the host material (coconut sugar), which prevents the host from absorbing water. The TCP addition had a more significant effect on decreasing the hygroscopicity of the conventional coconut sugar, with its predominantly crystalline structure, than on the amorphous vacuum-dried coconut sugar.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request. | 5,412.4 | 2020-09-14T00:00:00.000 | [
"Agricultural and Food Sciences",
"Materials Science"
] |
Electric Field-Modulated Surface Enhanced Raman Spectroscopy by PVDF/Ag Hybrid
Electrically modulated surface enhanced Raman scattering (E-SERS) can regulate the plasmon resonance peak of metal nanostructures and further improve the detection sensitivity of the SERS substrate. However, E-SERS substrates require auxiliary equipment to provide the electrical potential, and most of them are non-flexible structures, which limits the application of E-SERS in portable, in-situ and fast detection. Here, we developed an electric field-modulated SERS substrate based on the piezoelectric effect by combining PVDF (piezoelectric-modulated layer) and Ag nanowires (AgNWs) (SERS active layer), and investigated its SERS activity experimentally and theoretically. The enhanced electric field and the tunable plasmon resonance induced by the piezoelectric effect provide additional enhancement of the SERS signal. Furthermore, we fabricated a SERS-active ring with the piezoelectric field-modulated substrate and achieved in-situ detection of glucose with a non-invasive method. This work provides an innovation for E-SERS and could greatly promote the development of in-situ, wearable and intelligent sensors.
Surface-enhanced Raman spectroscopy (SERS) 1-3, which combines molecular fingerprint specificity and potential single-molecule sensitivity, has been widely used in surface science, electrochemistry, biology, materials science and other fields due to its high sensitivity and extremely fast response [4][5][6][7][8]. It is known that the SERS enhancement effect is caused by the electromagnetic mechanism (EM) [9][10][11][12] and the chemical mechanism (CM) [13][14][15], which are ascribed to the enhancement of local electric fields assisted by surface plasmon resonance and to charge transfer, respectively. In recent years, how to prepare SERS substrates with excellent activity by simple methods has attracted increasing attention [16][17][18][19]. Among the numerous SERS techniques, electrically modulated SERS (E-SERS) is particularly noteworthy by virtue of its versatile ability to regulate the plasmon resonance of metal nanostructures through an electric field [20][21][22][23]. Besides, by adjusting the electrical signal, E-SERS can be utilized to distinguish and investigate the corresponding enhancement mechanisms (EM or CM) in complex spectra [24][25][26]. The key factor limiting the rapid development of E-SERS technology is the complex equipment, as the measurements are mostly carried out in a sample cell. In addition, the demands on the SERS substrate are rather special: the substrate should not only possess a nano-scale topography with enhanced activity but also excellent conductive properties, which greatly hinders rapid, in-situ detection 27.
The piezoelectric effect, discovered in 1880 28, produces opposite charges on a material's surfaces under mechanical force and has attracted much attention. Among the numerous piezoelectric materials, flexible polyvinylidene fluoride (PVDF), with its remarkable piezoelectric effect, has been widely researched as a representative material 29,30. Compared with schemes that rely on making a flexible insulating polymer conductive, PVDF can provide an internal electric field under pressure, which makes it an ideal E-SERS substrate 31,32. Besides the excellent flexibility for in-situ detection, the low Raman cross-section of PVDF means that it has little influence on the identification of the probe molecules. However, up to now, most of the reported research on PVDF-based SERS has focused on flexibility and ignored the piezoelectric effect 32. Thus, providing deeper insight into the role of mechanical force in E-SERS, especially piezoelectric-effect-modulated Raman spectroscopy, is crucial for better understanding the enhancement mechanism.
Inspired by this, in this paper we developed an electric field-modulated Raman substrate based on the piezoelectric effect by combining PVDF (piezoelectric-modulated layer) and Ag nanowires (AgNWs) (SERS active layer). As exhibited in Fig. 1(a), the crisscrossed AgNWs are deposited on both sides of the PVDF film with a simple spin-coating method, in order to facilitate integration with other devices, such as wearable devices 33. The top-layer crisscrossed AgNWs act as the SERS active layer and can excite and produce hot spots from the in-plane and interfacial AgNWs with the assistance of light-induced plasmonic resonances. The bottom-layer crisscrossed AgNWs are not strictly required for the electric field-modulated SERS substrate but can serve, together with the top layer, as a flexible conductive electrode. The pivotal point of the designed electric field-modulated SERS substrate [Fig. 1(b)] is the introduction of the flexible piezoelectric-modulated PVDF layer, whose internal electric field produced under pressure can effectively regulate the surface plasmonic properties of the AgNWs and further modulate the distribution of the hot spots around the AgNWs. This work can provide an innovation for E-SERS and will greatly promote the development of in-situ, wearable and intelligent sensors.
Materials and Methods
Preparation of AgNWs/PVDF/AgNWs. The silver nanowire solution (synthesized by a solvothermal method; diameter: 60 nm, length: 20 μm), dispersed in ethyl alcohol, was ultrasonically treated for 20 minutes. Then the PVDF film with a thickness of 10 μm was cleaned with ethyl alcohol to remove dust and impurities. After that, the AgNWs were evenly coated on both sides of the PVDF film by spin coating. To test the piezoelectric effect of the substrate, conductive copper tape was bonded on all sides of the AgNWs/PVDF/AgNWs and the acoustic response was recorded by an audio analyzer (U8903A).
Raman detections.
Raman spectra were collected with a Horiba HR Evolution 800 Raman microscope system using a 532 nm laser at a power of 0.048 mW; the diffraction grating and integration time were set to 600 gr/mm and 8 s, respectively, throughout the experiment.
FDTD Simulations. In the FDTD simulations, the Drude-Lorentz (DL) model was considered. According to the DL model, the metal's permittivity can be written as
ε(ω) = ε∞ − ωp²/(ω² + iγDω) − fΩL²/(ω² − ΩL² + iγLω),
where ε∞ is the frequency-independent dielectric constant, ω and ωp are respectively the frequency of the incident light and the plasma frequency of the metal, ΩL represents the oscillator strength of the Lorentz oscillators, which is proportional to ωp (ΩL = c0·ωp, where c0 is a constant), f can be interpreted as the Lorentz weighting, and γD and γL are, respectively, the damping constants related to free and bound electrons. One interesting point is that in this model ε(ω) is closely related to the electron density N, since ωp can be represented as
ωp = sqrt(Ne²/(ε0·m)),
where e and m are the electron charge and effective mass. When the electron density changes by ΔN, the changed plasma frequency and oscillator strength become
ω′p = ωp·sqrt((N + ΔN)/N) and Ω′L = c0·ω′p,
where ω′p and Ω′L are the changed plasma frequency and oscillator strength of the Lorentz oscillators, and ΔN is the change in electron density. In our simulation, the values of all the parameters appearing in Eqs. (1) to (4) were obtained from refs. 31,34-36. In the simulation, two orthogonal AgNWs with a length of 1 µm and a diameter of 60 nm were adopted, and the gap between them was set to 5 nm according to the SEM image. The incident light and the monitoring wavelength were both set to 532 nm in the simulation of the electric field distribution. To obtain the simulated LSPR peak of the AgNWs from the absorption cross-section, incident wavelengths from 300 nm to 700 nm were used.
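To make the role of the transferred charge concrete, the sketch below evaluates a Drude-Lorentz permittivity of the form given above at the 532 nm excitation and rescales ωp (and hence ΩL = c0·ωp) for an assumed 5% increase in electron density; all numerical parameter values are placeholders, not the fitted Ag parameters of refs. 31,34-36.

# Drude-Lorentz permittivity; parameter values below are illustrative placeholders.
eps_dl <- function(omega, eps_inf, omega_p, Omega_L, f, gamma_D, gamma_L) {
  drude   <- omega_p^2 / (omega^2 + 1i * gamma_D * omega)
  lorentz <- f * Omega_L^2 / ((omega^2 - Omega_L^2) + 1i * gamma_L * omega)
  eps_inf - drude - lorentz
}

omega_532 <- 2 * pi * 2.998e8 / 532e-9   # angular frequency of the 532 nm laser (rad/s)

p <- list(eps_inf = 2.0, omega_p = 1.4e16, Omega_L = 8.0e15,
          f = 0.1, gamma_D = 3.0e13, gamma_L = 9.0e14)

eps_uncharged <- eps_dl(omega_532, p$eps_inf, p$omega_p, p$Omega_L,
                        p$f, p$gamma_D, p$gamma_L)

# An assumed relative change dN/N = 5% rescales omega_p (and Omega_L = c0*omega_p) by sqrt(1 + dN/N)
s <- sqrt(1.05)
eps_charged <- eps_dl(omega_532, p$eps_inf, p$omega_p * s, p$Omega_L * s,
                      p$f, p$gamma_D, p$gamma_L)

c(uncharged = eps_uncharged, charged = eps_charged)  # added electrons shift Re(eps), moving the LSPR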
Results and Discussion
The optical and scanning electron microscope (SEM) images of the pure PVDF film are shown in Fig. S1, and the AgNWs SEM image in Fig. 2(a) clearly exhibits the crisscrossed structure of the nanowires, with lengths up to 20 μm and diameters of ca. 60 nm. What should be noted here is that the crisscrossed structure involves not only in-plane intersections but also interfacial intersections, forming a multidimensional coupling system. The clear and well-distributed Ag signal in the energy dispersive spectrometry (EDS) mapping in the inset of Fig. 2(b) demonstrates the successful fabrication of the AgNWs. The high-resolution transmission electron microscopy (HRTEM) of the AgNWs in Fig. 2(b) exhibits a 0.231 nm lattice fringe spacing corresponding to the (111) planes, which, combined with the distinct and bright sixfold-symmetric spot pattern in the selected area electron diffraction (SAED) [see Fig. S2], proves the high crystallinity of the AgNWs. The distribution of the AgNW diameters is shown in Fig. 2(c). As shown in Fig. S3, the absorption spectrum indicates that the LSPR of the pristine AgNWs has a main peak at 378 nm and a shoulder at 350 nm. The Fourier transform infrared (FTIR) spectra of the PVDF and AgNWs/PVDF were collected as shown in Fig. 2(d). The α and β phases coexist, and the latter is responsible for the piezoelectric response of the PVDF film. The content of the β phase of PVDF increases after the addition of AgNWs, which, combined with the good acoustic response collected experimentally [Fig. S4], demonstrates an enhanced piezoelectric response. R6G molecules at a concentration of 10^-6 M were chosen to investigate the electric field-modulated SERS activity of the fabricated AgNWs/PVDF substrate in this work. We can see clearly in Fig. 3(a) that the intensity of the SERS signal of R6G on the AgNWs/PVDF substrate increases markedly when a 50 g weight is placed on its surface. By contrast, under the same conditions, the enhancements of the SERS signal on the normal flexible AgNWs/PET and rigid AgNWs/SiO2 substrates are almost negligible, which demonstrates that the increase of the SERS signal on the AgNWs/PVDF substrate can be attributed to changes introduced by the applied weight. [Fig. S5 presents the corresponding Raman spectra of R6G on the pure PVDF, PET and SiO2 substrates.]
To explore the effect of different pressures on the substrate, the enhancement of the SERS signal as a function of pressure was further investigated. We collected the SERS spectra of R6G on the AgNWs/PVDF substrate under different weights (from 1 to 100 g), where all the weights were placed at the same position on the substrates. As exhibited in Fig. 3(b), with increasing weight the intensity of the SERS signal increases markedly, which indicates an obvious pressure effect on the Raman activity. For a more convenient observation of the spectral changes, comparisons of the peak at 613 cm−1 under different weights are shown in Fig. S6.
To give deeper insight into this phenomenon, the potential distribution of the PVDF film under different pressures was first simulated using the FDTD method. From Fig. 3(c), we found that a positive potential appeared on the surface around the weight after a weight was placed there. The greater the pressure applied to the PVDF material, the greater the potential generated. Next, the electric fields near the AgNWs under different pressures were obtained, as shown in Fig. 3(d). The electric field in the gap region among the AgNWs was greatly enhanced under the 532 nm incident laser compared with the sample without a weight. We suspect that this phenomenon is caused by charge transfer between the piezoelectric layer and the plasmonic structure. The AgNWs near the PVDF can be charged through electrostatic induction by the positive potential 27,30. As shown in Fig. 3(e), with increasing weight the number of electrons transferred to the AgNWs keeps increasing, which further enhances the electric field for the EM. Thus, the maximum intensity of the electric field in the gap between two adjacent AgNWs increased from 13.01 to 35.09. More information about the potential distribution of the PVDF films and the electric field strength of the SERS substrate can be seen in Figs. S7 and S8.
What's more, as shown in Fig. 3(f), these induced electrons change ε(ω) and further lead to a redshift of the plasmon resonance of the AgNWs, which matches better with the incident laser (532 nm) used in the Raman measurements and benefits the regulation of the LSP properties of the SERS substrate 31,35,36.
According to the fourth power law, the EF of the SERS substrate under a weight of 100 g can reach 1.5 × 10^6. Based on the discussions above, it is reasonable to conclude that the SERS enhancement of the AgNWs/PVDF substrate can be attributed to the enhanced electric field and the tunable SPR induced by the piezoelectric effect.
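As a quick arithmetic check of this estimate (assuming the quoted field maximum is the normalized field |E/E0| and applying the |E/E0|^4 approximation), the simulated maximum of 35.09 indeed reproduces the quoted enhancement factor:

# Fourth-power estimate of the SERS enhancement factor from the simulated field maximum
E_max <- 35.09
E_max^4   # ~1.5e6, consistent with the EF quoted for the 100 g weight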
In addition, we also found that the arrangement of the AgNWs can be divided into two types: in-plane intersecting and multidimensionally crisscrossed. The electric field analysis for these two cases is exhibited in Fig. S9. By contrast, only lateral hot spots exist in the former structure, whereas the multidimensionally crisscrossed structure possesses both vertical and lateral hot spots. The SERS performance of the two structures was also tested experimentally: the SERS signal from the multidimensionally crisscrossed structure is much stronger than that from the in-plane intersecting AgNWs substrate, as shown in Fig. S10. Figure 4(a) exhibits the schematic illustration of the piezoelectric effect. The mechanical deformation introduced by impact or bending creates the piezoelectric potential (field). Here, we carried out a detailed investigation of the distance and bending dependence of the SERS signals. The SERS measurement process is schematically represented in Fig. 4(b). As the distance between the weight and the incident laser spot increases from 10 to 30 mm, the peak intensities decay markedly [as shown in Fig. 4(c) and Fig. S11]. Further increasing the distance to 40 mm, the intensity decreases further and becomes similar to that without any weight. The reason for this phenomenon is that the piezoelectric potential decays as the distance increases, as shown in Fig. S12. The difference in piezoelectric potential at different positions leads to differences in the number of electrons transferred to the AgNWs, which in turn affects the electric field and the plasmon resonance of the AgNWs, as discussed for Fig. 3. What's more, interestingly, in Fig. 4(d) and Fig. S13, with increasing bending curvature of the AgNWs/PVDF substrate, the SERS activity of the substrate is enhanced 2.5-fold compared with the unbent case, which can be attributed to the piezoelectric potential introduced by the bending of the substrate and is further discussed in the following section.
By virtue of the excellent flexibility and high transmissivity of the PVDF film, the designed piezoelectric field-modulated substrate has great potential for in-situ detection [inset in Fig. 4(a)]. The incident laser can pass through the PVDF, excite the hot spots around the AgNWs and thereby enhance the SERS signal of R6G, as shown in Fig. 5(a). Detection of glucose in sweat is an essential method for monitoring diabetes. Furthermore, as schematically shown in Fig. 5(b), we fabricated a SERS-active ring with the AgNWs/PVDF substrate and achieved in-situ detection of glucose at a concentration of 20% by a non-invasive method, as shown in Fig. 5(c). Figure 5(d) presents the enhancement of the electric field distribution of the designed SERS ring under different bending conditions, which indicates the promising prospects of the SERS ring for in-situ detection.
Conclusions
In summary, a piezoelectric field-modulated Raman substrate based on the piezoelectric effect, combining PVDF and AgNWs, was investigated experimentally and theoretically. The enhanced electric field and the tunable plasmon resonance induced by the piezoelectric effect are demonstrated to be the reason for the additional SERS enhancement. This work presents promising prospects for E-SERS and will greatly promote the development of in-situ, wearable and intelligent sensors. | 3,570 | 2020-03-24T00:00:00.000 | [
"Physics",
"Chemistry"
] |
On the Tree Augmentation Problem
In the Tree Augmentation problem we are given a tree $T=(V,F)$ and an additional set $E \subseteq V \times V$ of edges, called "links", with positive integer costs $\{c_e:e \in E\}$. The goal is to augment $T$ by a minimum cost set of links $J \subseteq E$ such that $T \cup J$ is $2$-edge-connected. Let $M$ denote the maximum cost of a link. Recently, Adjiashvili introduced a novel LP for the problem and used it to break the natural $2$-approximation barrier for instances when $M$ is a constant. Specifically, his algorithm computes a $1.96418+\epsilon$ approximate solution in time $n^{{(M/\epsilon^2)}^{O(1)}}$. Using a simpler LP, we achieve ratio $\frac{12}{7}+\epsilon$ in time $2^{O(M/\epsilon^2)}$. In particular, this gives a ratio better than $2$ for logarithmic costs, and not only for constant costs as in other work.
Introduction
We consider the following problem:
Tree Augmentation
Input: A tree T = (V, F ) and an additional set E ⊆ V × V of edges, called links, with positive integer costs c = {c e : e ∈ E}. Output: A minimum cost link set J ⊆ E such that T ∪ J is 2-edge-connected.
The problem was studied extensively, c.f. [10,14,3,19,7,4,17,5,16,2,15]. For a long time the best known ratio for the problem was 2 for arbitrary costs [10] and 1.5 for unit costs [7,16]. It is also known that the integrality gap of a standard LP-relaxation for the problem, the so-called Cut-LP, is at most 2 [10] and at least 1.5 [4]. Several LP and SDP relaxations were introduced to show that the algorithm in [7,8,16] achieves a ratio better than 2 w.r.t. these relaxations, c.f. [2,15]. For additional algorithms with ratio better than 2 for restricted versions of the problem see [5,17].
Let M denote the maximum link cost. Recently, Adjiashvili [1] introduced a novel LP for the problem, the so-called k-Bundle-LP, and used it to break the natural 2-approximation barrier for instances when M is bounded by a constant. To introduce this result we need some definitions.
An equivalent formulation of the Tree Augmentation problem is as follows. Let T_uv denote the unique uv-path in T. We say that a link uv covers an edge f if f ∈ T_uv. Then T ∪ J is 2-edge-connected if and only if J covers T. For an edge set B ⊆ F let cov(B) denote the set of links in E that cover some f ∈ B, and τ(B) the minimum cost of a link set in E that covers B. The standard LP for the problem, which we call the Cut-LP, seeks to minimize c^T x = Σ_{e∈E} c_e x_e over the Cut-Polyhedron Π_Cut ⊆ R^E defined by the constraints: Σ_{e∈cov(f)} x_e ≥ 1 for every f ∈ F, and x_e ≥ 0 for every e ∈ E. The k-Bundle-LP of Adjiashvili [1] adds to the standard Cut-LP the constraints Σ_{e∈cov(B)} c_e x_e ≥ τ(B) for every forest B in T that has at most k leaves, where k = Θ(M/ε²). The algorithm of [1] computes a (1.96418 + ε)-approximate solution w.r.t. the k-Bundle-LP in time n^{k^{O(1)}}. For unit costs, Adjiashvili designed a modification of the algorithm that achieves ratio 5/3 + ε.
Here we observe that it is sufficient to consider just certain subtrees of T instead of forests. Root T at some node r. The choice of r defines an ancestor/descendant relation on V. The leaves of T are the nodes in V \ {r} that have no descendants. For any subtree S of T, the node s of S closest to r is the root of S, and the pair S, s is called a rooted subtree of T, r; we will not mention the roots of trees if they are clear from the context. We say that S is a complete rooted subtree if it contains all descendants of s in T, and a full rooted subtree if for any non-leaf node v of S the children of v in S and T coincide; see Fig. 1 (a) and (b), respectively. A branch of S, or a branch hanging on s, is a rooted subtree B of S induced by the root s of S and the descendants in S of some child s′ of s; see Fig. 1 (c). We say that a subtree B of T is a branch if it is a branch of a full rooted subtree, or if it is a full rooted subtree with root r. Equivalently, a branch is a union of a full rooted subtree and the parent edge of this subtree. For k ≥ 3 let B_k denote the set of all branches in T with less than k leaves. The k-Branch-LP seeks to minimize c^T x = Σ_{e∈E} c_e x_e over the k-Branch-Polyhedron Π_Br,k ⊆ R^E defined by the Cut-LP constraints together with the constraints Σ_{e∈cov(B)} c_e x_e ≥ τ(B) for every B ∈ B_k. The constraints of the k-Branch-LP are a subset of the constraints of the k-Bundle-LP of Adjiashvili [1]; hence the k-Branch-LP is both more compact and its optimal value is no larger than that of the k-Bundle-LP. The main result of this paper is:
Theorem 1. For any ε > 0, Tree Augmentation admits an algorithm that computes a solution of cost at most (ρ + ε) times the optimal value of the k-Branch-LP with k = Θ(M/ε²), in time 2^{O(M/ε²)} · poly(n), where ρ = 12/7 for arbitrary costs and ρ = 1.6 for unit costs.
We note that recently Fiorini et al. [9] augmented the k-Bundle-LP of [1] by additional constraints, so-called {0, 1/2}-Chvátal-Gomory cuts, to achieve ratio 1.5 + ε in n^{(M/ε²)^{O(1)}} time, thus matching the best known ratio for unit costs [16]. Our work, done independently, shows that already the k-Bundle-LP has integrality gap closer to 1.5 than to 2. Our version of the algorithm of [1] is also simpler than the one in [9]. In fact, combining our approach with [9] enables achieving ratio 1.5 + ε in 2^{O(M/ε²)} · poly(n) time. Note that this is a substantial improvement, as it allows a ratio better than 2 for logarithmic costs, and not only for constant costs.
In the rest of this section we briefly describe our algorithm, which is a modification of the algorithm of Adjiashvili [1]; we emphasize the differences. We use the k-Branch-LP instead of the k-Bundle-LP of [1]. But, unlike [1], we do not solve our LP at the beginning; instead we start with an optimal solution to the Cut-LP. We design an algorithm (see Algorithm 1) that either returns a solution within the stated bound or returns a k-branch constraint violated by our current solution x; we show that this can be done in time 4^k · poly(n), rather than in time n^{k^{O(1)}} as in [1]. This gives a 4^k · poly(n) time separation oracle for the k-Branch-LP (if a violated k-branch constraint is found). Since the ellipsoid algorithm makes a polynomial number of calls to the separation oracle, after a polynomial number of iterations there will be an iteration in which no k-branch constraint violated by x is found. At this iteration the algorithm will find a solution of cost at most (ρ + ε)c^T x, where ρ is as in Theorem 1 and x is an optimal solution to the LP formed by the cut constraints and the k-branch constraints found during the algorithm.
In the algorithm, we repeatedly take a certain complete rooted subtree S of T, and either find a k-branch constraint violated by some subtree of S, or a "cheap" cover J_S of S; in the latter case, we add J_S to our partial solution, contract S, and iterate. When covering S, we replace every link uv with u ∈ S and v ∉ S by two links us and sv. This makes the problems of covering S and T\S independent, but increases the fractional cost Σ_{e∈E} c_e x_e of x. This increase is bounded by Σ_{e∈cov(f)} c_e x_e, where f is the parent edge of S. If we choose S such that the quotient of Σ_{e∈cov(f)} c_e x_e over the fractional cost of the links with both endnodes in S is small, then this incurs only a small loss in the ratio. We call an edge f ∈ F λ-thin if x(cov(f)) ≤ λ, and λ-thick otherwise. We say that a complete rooted subtree S of T is a (k, λ)-subtree if S has at least k leaves and if either the parent edge f of the root s of S is λ-thin or s = r. For λ = Θ(1/ε) and k = Θ(M/ε²) we choose S to be an inclusionwise minimal (k, λ)-subtree.
Then Σ_{e∈cov(f)} c_e x_e = Θ(M/ε), the fractional cost of the links with both endnodes in S is at least (k−λ)/2 = Θ(M/ε²), and the above quotient is O(ε). Now let us focus on the problem of covering such an S, see Fig. 2(a), where the λ-thick edges are shown by bold lines. We contract the inclusionwise maximal subtree containing s that consists of λ-thick edges, see Fig. 2(b). It is shown in [1] that all λ-thick edges can be covered by cost (2/λ)·c^T x = ε·c^T x (see Lemma 1), so we postpone covering these edges to the end of the algorithm. Now, after the contraction, every branch B hanging on s has less than k leaves, by the minimality of S; hence it has a corresponding constraint in the k-Branch-LP. A link uv is called a cross-link if u and v belong to distinct branches hanging on s, and an in-link otherwise. As in [1], we choose the better outcome of two procedures.
The first procedure is identical to the one in [1], but we show that it can be implemented in time 4^k · poly(n). We compute an optimal cover J_B of each branch B hanging on s. If for some branch B we get x(cov(B)) < τ(B), then a k-branch constraint violated by x is found. Else, the union of the computed covers J_B is a cover of S of cost at most 2C_cr + C_in, where C_cr and C_in denote the fractional cost of cross-links and in-links, respectively.
In the second procedure we replace each in-link uv by two up-links ua and va of the same cost as uv, where a is the least common ancestor of u and v. Then we compute an extreme point solution of the Cut-LP in the modified instance; this part differs from the one in [1]. We show that in the case when every in-link is an up-link, such a solution is always half-integral, see Lemma 4. We round such a solution to an integral solution within a factor of 4/3 using the algorithm of [3], and get a solution of cost (4/3)(2C_in + C_cr); in the case of unit costs we improve this to 2C_in + (4/3)C_cr.
The main algorithm (Theorem 1)
To prove Theorem 1, we prove the following theorem in the next section.
Theorem 2. Suppose that we are given an instance of Tree Augmentation and x ∈ Π_Cut such that any proper complete rooted subtree of the input tree has less than k leaves. Then there exists a 4^k · poly(n) time algorithm that either finds a k-branch inequality violated by x, or computes a solution of cost ≤ ρ·Σ_{e∈E\R} c_e x_e + (4/3)·Σ_{e∈R} c_e x_e, where ρ = 12/7 for arbitrary costs and ρ = 1.6 for unit costs, and R is the set of edges in E incident to the root.
In the rest of this section we will show that Theorem 2 implies Theorem 1. Recall that given x ∈ R^E we say that an edge f ∈ F is λ-thin if x(cov(f)) ≤ λ, and f is λ-thick otherwise. We need the following simple lemma.
Lemma 1 ([1]). There exists a polynomial time algorithm that, given x ∈ Π_Cut and a set F′ ⊆ F of λ-thick edges, computes a cover J of F′ of cost ≤ (2/λ)·c^T x.
Proof. Since all edges in F′ are λ-thick, x/λ is a feasible solution to the Cut-LP for covering F′. Thus any polynomial time algorithm that computes a solution J of cost at most 2 times the optimal value of the Cut-LP for covering F′ has the desired property. There are several such algorithms, see [10,11,13].
We say that a complete rooted subtree S of T is a (k, λ)-subtree if S has at least k leaves and if either the parent edge f of the root s of S is λ-thin or s = r. Given such an S with s ≠ r we apply the following natural operation. Cutting a link e = uv at a node s means that if s is an internal node of T_uv then we replace e by the two links us and sv, setting x_us ← x_us + x_e, x_sv ← x_sv + x_e, and x_e ← 0. This operation keeps x a feasible solution to the Cut-LP or to the k-Branch-LP and increases c^T x by at most c_e x_e. Cutting S means sequentially cutting at s every link e = uv such that s is an internal node of T_uv. We denote by T/S the tree obtained from T by repeatedly contracting every edge of S. Note that this defines a new Tree Augmentation instance, where contraction of an edge uv leads to shrinking u, v into a single node in the graph (V, E) of links.
The algorithm of Theorem 1 works in iterations. Each iteration starts with a feasible solution x to a subset of constraints of the k-Branch-LP that includes all the constraints of the Cut-LP. At each iteration the algorithm either finds a k-branch inequality violated by x, or returns a solution of cost at most (ρ + (8/3)·λM/(k−λ) + 2/λ)·c^T x and terminates. In parallel, we apply the ellipsoid algorithm, for the case that a k-branch inequality violated by x is found and added to the maintained subset of constraints. The following procedure describes the main iteration of the algorithm. Note that any proper complete rooted subtree of the tree S/S′ considered at step 5 of the algorithm has less than k leaves, and thus Theorem 2 indeed applies. Also, at step 7 the edges in F′ are all λ-thick and thus Lemma 1 applies. We will now analyze the performance of the algorithm assuming that no k-branch inequality violated by x was found. Let δ(S) denote the set of links with exactly one endnode in S and γ(S) the set of links with both endnodes in S. Let f be the parent edge of S. Since f is λ-thin, cutting S increases c^T x by at most Σ_{e∈cov(f)} c_e x_e ≤ λM. Since x(δ(v)) ≥ 1 for every leaf v of S, c_e ≥ 1 for every e ∈ E, and S is a (k, λ)-subtree, the fractional cost of the links in γ(S) is at least (k−λ)/2. Since after S is cut all the cut links are incident to s, and since ρ ≥ 4/3, we obtain the bound stated above.
Assume that we are given an instance of Tree Augmentation as in Theorem 2. It is known that Tree Augmentation instances where T is a path can be solved in polynomial time. This allows us to assume that the graph (V, E) of links is a complete graph and that c_uv = τ(T_uv) for all u, v ∈ V. Let us say that a link uv ∈ E is: a cross-link if r is an internal node of S_uv; an in-link if r does not belong to S_uv; an r-link if r = u or r = v; an up-link if one of u, v is an ancestor of the other.
For a set E′ ⊆ E of links, the E′-up vector of x is obtained from x as follows: for every non-up link e = uv ∈ E′ increase x_ua and x_va by x_e and then reset x_e to 0, where a is the least common ancestor of u and v. The fractional cost of a set B of links w.r.t. c and x is defined by Σ_{e∈B} c_e x_e. Let C_in^x, C_cr^x, and C_r^x denote the fractional cost of in-links, cross-links, and r-links, respectively, w.r.t. c and x. We fix some x* ∈ Π_Cut and denote by C_in, C_cr, and C_r the fractional cost of in-links, cross-links, and r-links, respectively, w.r.t. c and x*. We give two rounding procedures, given in Lemmas 2 and 3.
Lemma 2. There exists a 4^k · poly(n) time algorithm that either finds a k-branch inequality violated by x*, or returns an integral solution of cost at most C_in + 2C_cr + C_r.
Proof. Let B be the set of branches hanging on r. For every B ∈ B compute an optimal solution J_B. If for some B ∈ B we have x*(cov(B)) < τ(B) then a k-branch inequality violated by x* is found. Else, the algorithm returns the union J = ∪_{B∈B} J_B of the computed edge sets. We show that c(J) ≤ C_in + 2C_cr + C_r. Let E_cr be the set of cross-links and let x′ be the E_cr-up vector of x*. Then x′ satisfies the k-branch inequalities of the branches in B, has value C_in + 2C_cr + C_r, and x′_e = 0 for every cross-link e; in particular, every link e with x′_e > 0 belongs to at most one set cov(B). Thus c(J) ≤ Σ_{e∈E} c_e x′_e ≤ C_in + 2C_cr + C_r, as required. It remains to show that an optimal solution in each branch of r can be computed in time 4^k · poly(n). More generally, we will show that Tree Augmentation instances with k leaves can be solved optimally within this time bound. Recall that we may assume that the graph (V, E) of links is a complete graph and that c_uv = τ(T_uv) for all u, v ∈ V. We claim that then we can assume that T has no node v with deg_T(v) = 2. This is a well known reduction, c.f. [18]. In more detail, we show that any solution J can be converted into a solution of no greater cost that has no link incident to v, and thus v can be "shortcut". If J has links uv, vw then it is easy to see that (J \ {uv, vw}) ∪ {uw} is also a feasible solution, of cost at most c(J). Applying this operation repeatedly we may assume that deg_J(v) ≤ 1. If deg_J(v) = 0, we are done. Suppose that J has a unique link e = vw incident to v. Let vu and vu′ be the two edges of T incident to v, and assume that vu′ is not covered by e. Then there is a link e′ ∈ J that covers vu′. Since e′ is not incident to v, it must be that e′ also covers vu. Replacing e by the link wu gives a feasible solution without increasing the cost.
Consequently, we reduce our instance to an equivalent instance with at most 2k − 1 tree edges. Now recall that Tree Augmentation is a particular case of the Min-Cost Set-Cover problem, where the set F of edges of T are the elements and {T_e : e ∈ E} are the sets. It is known that the Min-Cost Set-Cover problem can be solved in 2^n · poly(n) time via dynamic programming, where n is the number of elements [6]. Thus our reduced Tree Augmentation instance can be solved in 2^{2k−1} · poly(n) ≤ 4^k · poly(n) time.
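For concreteness, one standard subset dynamic program that achieves such a bound (the exact formulation in [6] may differ) is the following, where U is the set of elements, here the tree edges, and T_e is the set of elements covered by link e:

OPT(∅) = 0, and OPT(S) = min{ c_e + OPT(S \ T_e) : e ∈ E, f_S ∈ T_e } for every ∅ ≠ S ⊆ U,

where f_S is an arbitrary fixed element of S; restricting to links covering f_S loses nothing, since some chosen set must cover it, and each of the 2^{|U|} states is evaluated in poly(n) time.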
For the second rounding procedure, Adjiashvili [1] proved that for any λ > 1 one can compute in polynomial time an integral solution of cost at most 2λC_in + (4/3)·(λ/(λ−1))·C_cr. We prove:
Lemma 3. There exists a polynomial time algorithm that computes a solution of cost at most (4/3)(2C_in + C_cr + C_r), and a solution of size at most 2C_in + (4/3)C_cr + C_r in the case of unit costs.
Consider the case of arbitrary bounded costs. If C_in ≥ (2/5)C_cr we use the rounding procedure from Lemma 2, and the rounding procedure from Lemma 3 otherwise. In both cases we get c(J) ≤ (12/7)(C_in + C_cr) + (4/3)C_r. In the case of unit costs, if C_in ≥ (2/3)C_cr we use the rounding procedure from Lemma 2, and the procedure from Lemma 3 otherwise. In both cases we get c(J) ≤ 1.6(C_in + C_cr) + C_r.
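To see where the threshold comes from, note that (ignoring the C_r term, which is charged separately) Lemma 2 gives a bound of C_in + 2C_cr while Lemma 3 gives (4/3)(2C_in + C_cr) = (8/3)C_in + (4/3)C_cr. The two bounds coincide exactly when C_in = (2/5)C_cr, where

(C_in + 2C_cr) / (C_in + C_cr) = ((2/5)C_cr + 2C_cr) / ((2/5)C_cr + C_cr) = (12/5)/(7/5) = 12/7,

and choosing the cheaper of the two procedures on either side of the threshold can only decrease this ratio. The unit-cost case is analogous: balancing C_in + 2C_cr against 2C_in + (4/3)C_cr at C_in = (2/3)C_cr gives (8/3)/(5/3) = 8/5 = 1.6.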
In the rest of this section we prove Lemma 3. The proof relies on properties of extreme points of the Cut-Polyhedron given in Lemmas 4 and 5; these properties are of independent interest. Note that although Π Cut is not a polytope, the Cut-LP always has an optimal solution x that is an extreme point or a basic feasible solution of Π Cut . Geometrically, this means that x is not a convex combination of other points in Π Cut ; algebraically this means that there exists a set of |E| inequalities in the system defining Π Cut such that x is the unique solution for the corresponding linear equations system. These definitions are known to be equivalent and we will use both of them.
A set family L is laminar if any two sets in the family are either disjoint or one contains the other. Note that Tree Augmentation is equivalent to the problem of covering the laminar family of the node sets of the full rooted proper subtrees of T, where a link covers a node set A if it has exactly one endnode in A. In particular, note that the constraint Σ_{e∈cov(f)} x_e ≥ 1 is equivalent to the constraint x(δ(A)) ≥ 1, where A is the node set of the full rooted subtree with parent edge f. Let us say that an instance of Tree Augmentation is star shaped if every in-link in E is an up-link.
Lemma 4. If the instance is star shaped, then any extreme point x of Π_Cut is half-integral.
Proof. Let L be a laminar family such that x is the unique solution to the equation system {x(δ(A)) = 1 : A ∈ L}, where |L| = |E|.
Claim. Every A ∈ L has at most 2 children in L and |δ(A)| = 2.
Proof. For every uv ∈ E put one token at u and one token at v; the total number of tokens is 2|E|. Let t(A) denote the number of tokens placed at nodes in A that belong to no child of A. Since L is laminar, Σ_{A∈L} t(A) ≤ 2|E|. We will show that for every A ∈ L the following holds: (i) t(A) ≥ ch(A), where ch(A) denotes the number of children of A in L; (ii) if |δ(C)| = 2 for every child C of A (in particular, if ch(A) = 0), then t(A) ≥ |δ(A)|; (iii) t(A) ≥ 2. Note that (i) and (ii) imply that t(A) ≥ 3 for any inclusionwise minimal set A ∈ L for which the claim does not hold, giving together with (iii) the contradiction Σ_{A∈L} t(A) > 2|E|. We prove (i). Consider a child C of A. Let E_C = δ(C) \ δ(A) denote the set of links in E that cover C but not A. The assumption that every in-link is an up-link implies that there are no links between the children of A. Thus every link in E_C contributes 1 to t(A). Moreover, E_C ≠ ∅ by linear independence and since x(δ(A)) = x(δ(C)) = 1. This implies (i).
We prove (ii). If ch(A) = 0 then t(A) ≥ |δ(A)|, since every link in δ(A) contributes a token to t(A). Assume that ch(A) ≥ 1 and that |δ(C)| = 2 for every child C of A. Let E_A denote the set of links in δ(A) that cover no child of A. Note that every link in E_A contributes 1 both to |δ(A)| and to t(A). Every link in δ(A) is either in E_A or in δ(C) ∩ δ(A) for some child C of A. The links in E_C contribute |E_C| ≥ 1 to t(A), while |δ(C) ∩ δ(A)| ≤ 2 − |E_C| ≤ 1. This implies (ii).
From (i) and (ii) it follows that to prove (iii) it is sufficient to consider the cases ch(A) = 1, 2. If A has a unique child C then, since x(δ(A)) = x(δ(C)) = 1 and by linear independence, E_A ≠ ∅; thus t(A) ≥ |E_A ∪ E_C| ≥ 2. If A has exactly two children, say C and C′, then t(A) ≥ |E_C ∪ E_{C′}| ≥ 2.
From the above claim it follows that the equation system {x(δ(A)) = 1 : A ∈ L} has exactly two variables in each equation. By [12], the solution of such systems is always half-integral.
The algorithm that computes an integral solution of cost (4/3)(2C_in + C_cr + C_r) is as follows. We obtain a star shaped instance by removing all non-up in-links and compute an optimal extreme point solution x to the Cut-LP. By Lemma 4, x is half-integral. Cheriyan, Jordán & Ravi [3] showed how to round a half-integral solution to the Cut-LP to an integral solution within a factor of 4/3. Thus we can compute a solution J of cost c(J) ≤ (4/3)·c^T x. We claim that c^T x ≤ 2C_in + C_cr + C_r. To see this, let E_in be the set of in-links and let x′ be the E_in-up vector of x*. Then x′ is a feasible solution, of value 2C_in + C_cr + C_r, to the Cut-LP of the Tree Augmentation instance obtained by removing all non-up in-links. But since x is an optimal solution to the same LP, we have c^T x ≤ c^T x′ = 2C_in + C_cr + C_r. This concludes the proof of Lemma 3 for the case of arbitrary costs.
For the case of unit costs we prove:
Remark. Corollary 1 holds also for arbitrary costs, but in this case the proof is much more involved. Specifically, we use the following statement, which we do not prove here since it currently has no application; there the indices are taken modulo k.
Let x be as in Corollary 1 and let x′ be the E_in-up vector of x. Note that x′ ∈ Π_Cut, since x ∈ Π_Cut. We will show how to compute a solution J of size c(J) ≤ x′(E) ≤ 2C_in + (4/3)C_cr + C_r. While there exists a pair of links e = uv and e′ = u′v′ such that x′_e, x′_{e′} > 0 and T_{u′v′} ⊂ T_{uv}, we set x′_e ← x′_e + x′_{e′} and x′_{e′} ← 0. Then x′ remains a feasible solution to the Cut-LP without changing its value (since we are in the case of unit costs). Hence we may assume that there is no such pair of links. Let E′ be the support of x′. If every leaf of T has some cross-link in E′ incident to it, then by the assumption above there are no up-links. In this case, since E′ is a forest, x′_e ≥ 1 for every e ∈ E′ and E′ is a solution as required.
Otherwise, there is a leaf v of T such that no cross-link in E′ is incident to v. Then there is a unique up-link e incident to v, and x′_e ≥ 1. We take such an e into our partial solution, updating x′ and E′ accordingly. Note that some cross-links may become r-links, but no up-link can become a cross-link, and the set of cross-links remains a forest. Applying this as long as such a leaf v exists, we arrive at the previous case, where adding E′ to the partial solution gives a solution as required. This concludes the proof of Lemma 3.
Conclusions
In this paper we presented an improved algorithm for Tree Augmentation, obtained by modifying the algorithm of Adjiashvili [1]. A minor improvement is that the algorithm is slightly simpler, as it avoids a technical discussion of so-called "early compound nodes", see [1]. A more important improvement is in the running time: 4^k · poly(n) instead of n^{k^{O(1)}}, where k = Θ(M/ε²). This allows a ratio better than 2 also for logarithmic costs, and not only for costs bounded by a constant. These two improvements are based, among others, on a more compact LP for the problem. Another important improvement is in the ratio: 12/7 + ε instead of 1.96418 + ε in [1]. This algorithm is based on a combinatorial result for star shaped Tree Augmentation instances, in which all in-links are up-links. We showed that for star shaped instances the extreme points of the Cut-Polyhedron are half-integral, and thus Tree Augmentation on such instances can be approximated within 4/3. As was mentioned, a related recent result of [9] shows that for star shaped instances, augmenting the cut constraints by {0, 1/2}-Chvátal-Gomory cuts gives a polyhedron with integral extreme points; thus Tree Augmentation on such instances can be solved optimally. Overall we get that star shaped instances behave like Tree Augmentation instances where T is a star (this is essentially the Edge-Cover problem): the extreme points of the Cut-LP are half-integral, while augmenting it by {0, 1/2}-Chvátal-Gomory cuts gives an integral polyhedron. The description of the {0, 1/2}-Chvátal-Gomory cuts in [9] is somewhat complicated, and we ask whether adding the simpler constraints suggested earlier by the author, namely x(cov(A)) ≥ |A|/2 for every A ⊆ E with |A| odd such that no 3 edges in A lie on the same path, gives the same result. In the case when T is a star, the last condition on A is void, and it is known that augmenting the Cut-LP by these constraints gives an integral polyhedron, c.f. [20], where an equivalent Edge-Cover problem is considered. | 7,144.2 | 2017-03-01T00:00:00.000 | [
"Mathematics"
] |
Advancement of artificial intelligence techniques based lexicon emotion analysis for vaccine of COVID-19
Emotions are a vital and fundamental part of life. Everything we do, say, or do not say somehow reflects some of our feelings, perhaps not immediately. To analyze humans' most fundamental behavior, we must examine these feelings using emotional data, also known as affect data; text, voice, and other types of data can be used. Affective Computing, which uses this emotional data to analyze emotions, is a scientific field. Emotion computation is a difficult task; significant progress has been made, but there is still scope for improvement. With the introduction of social networking sites, it is now possible to connect with people from all over the world, and many people are attracted to examining the text available on these various social websites. Analyzing this data from across the Internet means exploring entire communities and cultures along the way. This paper analyzes the text emotions of Iraqi people about COVID-19 using data collected from Twitter. Based on a lexicon, people's opinions can be classified into separate categories of feelings (anticipation, anger, trust, fear, sadness, surprise, disgust, and joy) as well as two distinct emotions (positive and negative), which can then be visualized using charts to find the most prevalent emotion using lexicon-based analysis.
Introduction
A series of security incidents have recently happened around the world, demonstrating the wide range of crises in which today's ordinary people are effectively using their mobile communication devices [1]. The majority of significant news events now include real-time social media comments [2]. Social networks have emerged as a research topic in which experts from all backgrounds seek inspiration. As a matter of fact, social networks, and particularly social network analysis (SNA), which is backed by computer science, offer the opportunity to widen other fields of knowledge. Many fields have adopted the notions of social networks and social network analysis, including cooperation networks of scientists and other professionals, family networks, student friendship networks, company director networks, consumer networks, the labor market, public health, psychology, and so on. This has recently become part of a new branch of science known as computational social science [3]. Social networks are characterized by social scientists as groups of people who share a common interest, such as familial relationships, political action, information, views, or geographic location [4]. Microblogging is defined as the activity of posting small amounts of digital content, such as text, images, links, short videos, or any other type of media, to the internet. Microblogging platforms, like other social networking sites, attempt to build a sense of online community. These platforms let users exchange information about their lives, activities, opinions, and status in a light-weight, simple manner. Twitter is one of the most widely used microblogging sites [5]. Recent events have brought Twitter to the fore as a new medium to study. Users can follow or be followed on Twitter. Unlike most online social networking sites, such as Facebook or MySpace, the relationship of following and being followed does not require reciprocation: a user can follow any other user without having to follow them back [6].
Artificial intelligence (AI)
Several computer systems have been constructed over the last few decades to perform numerous human mental functions, such as arithmetic, designing computer programs, and interpreting languages, all of which are thought to require "intelligence." Some computer systems are able to analyze electronic circuits, solve formulas, diagnose diseases, and comprehend a limited amount of human speech and natural language text. The majority of the work in developing these systems has been done in the field of "AI" [7]. AI is the branch of engineering and science concerned with the theory and practice of creating systems that possess the characteristics we associate with intelligence in human behavior, such as natural language processing (NLP), perception, planning and problem solving, adaptation and learning, and acting on the environment [8]. NLP is a branch of AI and of computer science; it is concerned with how computers interact with human natural languages, and in particular with programming computers to analyze huge amounts of natural language data [9].
Social media
Any medium of communication that allows two-way engagement is referred to as social media (SM). It allows users to share and consume content in a variety of formats, including text, image, and video. People utilize social media in their daily lives in a variety of ways, from text messaging to online dating. Users of social media can contact friends, family, and organizations all around the world using interactive services [10]. Users of these services have formed a type of virtual society known as online social networks (OSN), also called virtual communities [3]. These social networks have grown in popularity in recent years, offering a more efficient and user-friendly way to maintain social connections and communicate information in a variety of formats and mediums, including microblogging, status updates, mobile text alerts, blogs, instant messaging, and forums. Microblogging is the activity of posting small amounts of online content to the internet, which can take the shape of text, images, links, brief videos, or any other form of media. Microblogging has become extremely popular among groups of friends and professional colleagues who often update their material and follow one another's updates. Because posts are brief and easy to analyze, this style of blogging is seen as more informative and accurate for marketers. Twitter, Jaiku, and Pownce are just a few of the services that provide microblogging. These platforms let users exchange information about their lives, activities, opinions, and status in a light-weight, simple manner. Twitter is one of the most widely used microblogging sites [5]. Twitter keeps track of the most popular phrases, words, and hashtags and posts them under the heading "trending topics" on a regular basis. A hashtag is a Twitter convention for starting and following a discussion thread by prefixing a word with the '#' character. Twitter displays a list of the top ten trending topics in a right sidebar on every user's homepage [6]. This data is open to the public. As a result, it can be used as raw data primarily for opinion extraction, customer satisfaction analysis, and grading alternative government schemes, as well as sentiment analysis [11].
Twitter application programming interfaces (APIs)
APIs are the modern software systems' electrical sockets. It defines how software components should communicate with one another. This interface, at a high level, contains a list of commands that a first component can use to access functionality in a second component, as well as the particular format in which the first component should provide those commands to the second component. The user can see some program components, such as the user interface of a web browser. There are many more components that are hidden but serve important duties. Various software components, for example, are in charge of delivering and receiving web page data over the Internet, interpreting that data and rendering it in a graphical style, and handling persistent data (e.g., browser cookies) kept by websites. The relationship between these components is defined via APIs [12]. There are two forms of API sorting available on Twitter. Developers can read and write Twitter data using the REST-APIs. These APIs are useful for researchers because they allow them to search for messages that have been posted recently and meet certain criteria, such as the inclusion of specific keywords, hashtags, or user names. Developers can use the streaming APIs to access Twitter's global data stream. These APIs are useful for academics since they enable for the real-time capture of data matching certain criteria. Researchers must first construct a Twitter application that handles requests to Twitter's database in order to gain access to these APIs. To obtain the login details required to access data using Twitter's API, you must first create a Twitter application. The steps for establishing a Twitter application are outlined below. It must first have a Twitter account that is active. Then go to Twitter's app registration page and follow the instructions there. Following the creation of the application, it must generate four tokens that will allow the scripts to collect data on Twitter. Access token, API-secret, API-key, and Access token secret are the four. These keys are extremely important, and the user should treat them as if they were email passwords. Any use of these credentials to access Twitter's databases can be traced [13].
Gathering data from Twitter
To retrieve data from Twitter, a user must first have access to the Twitter API (Twitter may alter the procedure for granting API access to users [14]). The OAuth package in R is used to perform Twitter API authentication. The steps for using OAuth to access the Twitter API are shown in Figure 1. To use the Twitter API, a Twitter application must be established; its keys are used to construct a Twitter link that starts the authentication procedure. Twitter then verifies the user's identity and issues a PIN (also known as a verifier). This PIN must be provided by the user to the application.
In the next step, the application uses this PIN to obtain from the Twitter API an access token and secret that are unique to the user.
The information about the token and secret key is cached for future use. GetUserAccessKeySecret can be used to accomplish this.
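As a minimal illustration of this authentication flow in R (the twitteR interface and the placeholder key strings below are assumptions; the paper only names the OAuth package), the whole sequence can be wrapped in a single call followed by a search:

# Minimal sketch of Twitter authentication and data collection in R;
# the key strings are placeholders, and the query and filters are illustrative only.
library(twitteR)

api_key       <- "YOUR_API_KEY"
api_secret    <- "YOUR_API_SECRET"
access_token  <- "YOUR_ACCESS_TOKEN"
access_secret <- "YOUR_ACCESS_TOKEN_SECRET"

setup_twitter_oauth(api_key, api_secret, access_token, access_secret)

# Collect recent Arabic-language tweets matching a COVID-19 keyword
tweets    <- searchTwitter("#COVID19", n = 1000, lang = "ar")
tweets_df <- twListToDF(tweets)   # convert the list of status objects to a data frame
head(tweets_df$text)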
Gathered data preprocessing
By reducing data errors, preprocessing can improve sentiment analysis. It is a method of removing undesirable parts from the data. Sentiment analysis methods that do not use data preprocessing may miss crucial terms, reducing the accuracy of the results. On the other hand, preprocessing can also result in the loss of critical information; the elimination of punctuation, which could be valuable to the analysis, is one example of erroneous over-preprocessing. The general preprocessing procedures are: 1) Filtering: Non-Arabic tweets are removed using the filtering method. The filter() function from the dplyr package in the RStudio environment is used to subset a data frame, keeping the rows that satisfy the constraints. It works with both grouped and ungrouped data; most data operations are performed on groups defined by variables. group_by() turns an existing table into a grouped table in which actions are carried out "by group," and ungroup() removes the grouping. On ungrouped data, filtering is frequently much faster.
Simple typing errors, such as repeated letters and misspellings, are corrected by the filtering mechanism. It used dictionaries in this operation. Predefined dictionary words replace acronyms and abbreviations. The following are some examples of simple forms of errors: a) Errors created by the sound of the language, such as "ظالم" can be transcribed as "ضالم" and errors generated by switching letters, such as "بيت" can be written as "يبت" b) Eliminate the repetition of characters, such as " كثيييييييييير " being replaced by " كثير " by deleting the vowels repetition.
c) Correct spelling errors such as: Issues with the space bar; either no space (as in " كيفحالك "), or incorrect space (as in حبا" مر ").
Letter closeness; for example, the word "كنت" might be written as "منت" because the letters and are close to each other on the keyboard. The confusion produced by similar characters, such as the words "كتير" and "كثير" [15].
2) Removing URLs: URLs are links to other webpages and websites, and they supply no information for the analysis. Being of no use, they were eliminated from all tweets using R's tm_map() function, and every sentence or subpart of a sentence beginning with "http" was replaced by blank spaces [16].
3) Removing emoticons, numbers, and punctuation: The emoticons, numbers, and punctuation marks in the tweets are removed in this phase [17]. One might wonder why emoticons were removed; it is because, when retrieved, they appeared as square boxes rather than genuine emoticons. The gsub() function in R was used to delete the unwanted emoticon values [16].
4) Removing stopwords: This step removes commonly used terms that are meaningless and unhelpful for text classification. It decreases the size of the corpus without sacrificing crucial information [18].
5) Stemming: Stemming is a necessity for many natural language processing operations and is critical in most information retrieval systems. Its basic goal is to reduce a word's various grammatical forms, such as its noun, adjective, verb, and adverb forms, to its root form, and in some cases to reduce derivationally related forms to a common base form. R's "stem" function is used to complete this task [19]. A compact Python sketch of these cleaning stages is given below.
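The steps above are described in terms of R (dplyr's filter(), tm's tm_map(), gsub()). For illustration, a rough Python equivalent of the same cleaning stages is sketched here; the regular expressions, the tiny stop-word list, and the crude prefix-stripping "stemmer" are assumptions, not the authors' exact settings.

```python
import re

# A tiny illustrative Arabic stop-word list; the authors' actual list is not given.
STOPWORDS = {"في", "من", "على", "عن", "الى", "هذا", "ان"}

def preprocess(tweet: str) -> list:
    text = re.sub(r"http\S+", " ", tweet)             # 2) remove URLs
    text = re.sub(r"[0-9\u0660-\u0669]+", " ", text)   # 3) remove Latin and Arabic-Indic digits
    text = re.sub(r"[^\w\s]", " ", text)               # 3) remove punctuation / emoticon residue
    text = re.sub(r"(.)\1{2,}", r"\1", text)           # 1) collapse repeated letters (كثيييير -> كثير)
    tokens = [t for t in text.split() if t not in STOPWORDS]  # 4) remove stop words
    # 5) "stemming" here is only a crude definite-article strip, as a placeholder
    tokens = [t[2:] if t.startswith("ال") and len(t) > 4 else t for t in tokens]
    return tokens

print(preprocess("كثيييييييييير من الروابط http://example.com !!!"))
```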
Sentiment and emotion analysis
Sentiment analysis is a procedure that applies NLP to automate the extraction of opinions, attitudes, and emotions from text, audio, perspectives, tweets, and database sources; subjectivity analysis, opinion mining, and assessment extraction are other terms for it [20]. A sentiment classifier can detect whether a sentence has a positive or negative connotation by determining its polarity. Given a sample of texts that discuss the same issue, a general opinion on the topic can be derived by averaging the polarity of the individual texts. For example, a consensus view on a product, i.e., whether the product is popular with consumers or not, can be determined by gathering a collection of reviews. The state of the art in sentiment categorization is generally split into two approaches, one lexicon-based and the other a learning-based classifier [21]. To discover the emotions contained in a text, emotion analysis employs NLP, text analysis, and a variety of computational techniques. This analysis can be done at a number of levels, including the document, sentence, word, and aspect levels. The two basic approaches for classifying emotions in sentiment analysis are emotional dimensions and emotional categories [22]. As shown in Figure 2 (Procedure for analyzing emotions), the emotion analysis of given input data consists of the corresponding sequence of steps.
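As a minimal illustration of the lexicon-based branch described above, the polarity of a cleaned token list can be scored against a dictionary. The toy lexicon below is a stand-in for a real sentiment or emotion lexicon, which the excerpt does not specify.

```python
# A toy lexicon; a real system would load a full sentiment/emotion lexicon instead.
LEXICON = {"جيد": +1, "ممتاز": +2, "آمن": +1, "سيء": -1, "خطير": -2}

def polarity(tokens: list) -> str:
    """Sum the lexicon weights of the tokens and map the total to a polarity label."""
    score = sum(LEXICON.get(t, 0) for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity(["اللقاح", "آمن", "و", "ممتاز"]))  # -> positive
```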
Experiment
This research is based on the use of lexicon-based analysis as an analytical tool for natural language processing. After collecting data from Twitter, the text was pre-processed and filtered, and a series of procedures was then applied to reveal individual sentiments as well as the most prevalent emotion among Iraqis regarding the corona virus vaccine. Figure 3 shows the algorithm that was used to analyze the sentiment of the Iraqi populace.
The analysis procedure was as follows:
1- After obtaining the Twitter API and Google Maps API credentials, the keys are used to generate tokens that authenticate the browser. The Iraqi trend "كورونا" from November 2020 is used in this research; this hashtag is used to perform a Twitter search and collect the data.
2- Non-Arabic tweets are removed from the fetched tweets.
3- URLs, numbers, punctuation, emoticons, and stopwords are removed from the filtered data.
4- A stemming process is used to reduce the words to their roots.
5- A list of the most frequently used words is compiled.
6- The TDM (term-document matrix), which describes the frequency of the words found in the cleaned tweets, is created.
7- The words are arranged in decreasing order of frequency.
8- The sentiments are summarized, and the most frequent emotion for the chosen hashtag is calculated and displayed in a plot.
A small Python sketch of steps 5-7 is given after this list.
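Steps 5-7 above (frequent words, the term-document matrix, and ordering by frequency) can be sketched as follows; scikit-learn's CountVectorizer is used here only as a convenient stand-in for the R term-document-matrix construction the authors describe, and the example documents are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer

cleaned_tweets = [
    "اللقاح آمن",
    "اللقاح ممتاز و آمن",
    "خائف من اللقاح",
]  # placeholder documents after the cleaning stage

# Document-term matrix (the transpose of the TDM described above):
# rows = tweets, columns = terms.
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(cleaned_tweets)

# Total frequency of each term, arranged in decreasing order.
counts = dtm.sum(axis=0).A1
freq = sorted(zip(vectorizer.get_feature_names_out(), counts),
              key=lambda pair: pair[1], reverse=True)
print(freq[:10])
```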
The most positive and most negative feelings are shown in Figure 7 (most populated positive and negative emotions): positive sentiments appear in the right-hand corner, while negative sentiments are displayed in the bottom left corner. Finally, using the hashtag "كورونا", we were able to determine the most prevalent emotion in the Iraqi population's opinion about the corona vaccine, which was trust; Figure 8 depicts this result.
Conclusions
In this research, a sentiment analysis method was developed to examine Iraqis' perceptions of the corona virus vaccine. In this work we were able to create a Twitter developer account and a Twitter application that gives access to the Twitter API, allowing Twitter data to be collected for a selected hashtag, and to examine the public's positive and negative sentiments as well as their most prevalent emotions on the chosen issue. | 3,747.8 | 2021-10-07T00:00:00.000 | [
"Computer Science"
] |
The Existence and Uniqueness of an Entropy Solution to Unilateral Orlicz Anisotropic Equations in an Unbounded Domain
The purpose of this work is to prove the existence and uniqueness of solutions for a class of nonlinear unilateral elliptic problems (P) in an arbitrary domain, governed by a lower-order term and non-polynomial growth described by an N-uplet of N-functions satisfying the ∆2-condition. The source term is merely integrable.
Introduction
Let Ω be an arbitrary domain of R N , (N ≥ 2). In this paper, we investigate the existence and uniqueness solution of the following problem: ( a i (x, u, ∇u) ) x i is a Leray-Lions operator defined onW 1 B (Ω) (defined as the adherence space C ∞ 0 (Ω)) into its dual; B(t) = (B 1 (t), · · · , B N (t)) are N-uplet Orlicz functions that satisfy ∆ 2 -condition; the obstacle ψ is a measurable function that belongs to L ∞ (Ω) ∩W 1 B (Ω); and for i = 1, · · · , N, b i (x, s, ξ) : Ω × R × R N −→ R are Carathéodory functions (measurable with respect to x in Ω for every (s, ξ) in R × R N , and continuous with respect to (s, ξ) in R × R N for almost every x in Ω) that do not satisfy any sign condition and the growth described by the vector N-function B(t). Take f ∈ L 1 (Ω) too. Statement of the problems: Suppose they have non-negative measurable functions φ, ϕ ∈ L 1 (Ω); andā,ã are two constants, positive, such that for ξ = (ξ 1 , · · · , ξ N ) ∈ R N and ξ = (ξ 1 , · · · , ξ N ) ∈ R N , we have and withB(t) being the complementary function of B(t), h ∈ L 1 (Ω) and l : R −→ R + being a positive continuous function such that l ∈ L 1 (Ω) ∩ L ∞ (Ω). We recall that in the last few decades, tremendous popularity has been achieved by the investigation of a class of nonlinear unilateral elliptic problem due to their fundamental role in describing several phenomena, such as the study of fluid filtration in porous media, constrained heating, elastoplasticity, optimal control, financial mathematics and others; for those studies, there are large numbers of mathematical articles; see [1][2][3][4] for more details.
When Ω is a bounded open set of R N , we refer to the celebrated paper by Bénilan [5], who presented the idea of entropy solutions adjusted to Boltzmann conditions. For more outcomes concerning the existence of solutions of this class in the Lebesgue Sobolev spaces (to be specific B(t) = |t| p ), we cite [6,7]. We cite [4,8,9] for the Sobolev space with variable exponent. In the case of Orlicz spaces, we have some difficulties due to the non-homogeneity of the N-functions B(t) and a rather indirect definition of the norm. It is generally difficult to move essentially L p techniques to Orlicz spaces. For more work within this framework, we quote [10][11][12][13].
On the other hand, when Ω is an unbounded domain, namely without imposing any assumptions on the behavior as |x| → +∞, Domanska [14] investigated the well-posedness of nonlinear elliptic systems of equations generalizing the model equation with corresponding indices of nonlinearity p_i > 1 (i = 0, ..., n). In [15], Bendahmann et al. solved the problem (P) with b(x, u, ∇u) = div(g(u)), where g(u) has polynomial growth like u^q, in L^p-spaces. For more results we refer the reader to [16]. We mention [17][18][19] for the Sobolev space with variable exponent, and [20][21][22][23][24][25][26] for the classical anisotropic space. The novelty of our present paper is to continue in this direction and to show the existence and uniqueness of an entropy solution for the problem (P), governed by growth described by an N-uplet of N-functions satisfying the ∆2-condition, within the framework of anisotropic Orlicz spaces. Besides, we address the challenges that arise from the absence of some topological properties, such as the density of bounded or smooth functions.
The outline of this work is as follows. In Section 2, we recall some definitions and properties of N-functions and the space of Sobolev-Orlicz anisotropic solutions. In Section 3, we prove the Theorem of the existence of the solutions in an unbounded domain with the help of some propositions; to be demonstrated later. In Section 4, we show the uniqueness of the solution to this problem, which is expected for strictly monotonic operators at least for a broad class of lower-order terms. Finally, there is Appendix A.
Mathematical Background and Auxiliary Results
In this section, we introduce the notation, recall some standard definitions and collect necessary propositions and facts that are used to establish our main result. A comprehensive presentation of Sobolev-Orlicz anisotropic space can be found in the books of M.A Krasnoselskii and Ja. B. Rutickii [23] and in [20,25].
This N-function B admits the representation B(z) = ∫_0^{|z|} b(t) dt, where b is a right-continuous, non-decreasing function with b(0) = 0, b(z) > 0 for z > 0, and b(z) → ∞ as z → ∞.
Its conjugate is noted byB(z) = | z | 0 q(t) dt with q also satisfying all the properties already quoted The Young's inequality is given as follows: This definition is equivalent to, ∀k > 1, ∃ c(k) > 0 such that Definition 3. The N-function B(z) satisfies the ∆ 2 -condition as long as there exist positive numbers c > 1 and z 0 ≥ 0 such that for | z | ≥ z 0 we have Additionally, each N-function B(z)satisfies the inequality We consider the Orlicz space L B (Ω) provided with the norm of Luxemburg given by According with [23] we obtain the inequalities and Moreover, the Hölder's inequality holds and we have for all u ∈ L B (Ω) and v ∈ LB(Ω) In [23,25], if P(z) and B(z) are two N-functions such that P(z) B(z) and meas Ω < ∞, then L B (Ω) ⊂ L P (Ω); furthermore, Additionally, for all N- We define for all N-functions B 1 (z), · · · , B N (z) the space of Sobolev-Orlicz anisotropicW 1 B (Ω) as the adherence space C ∞ 0 (Ω) under the norm Remark 1. Since B satisfies the ∆ 2 -condition, the modular convergence coincides with the norm convergence.
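Several of the displayed formulas referred to in the passage above did not survive extraction. For orientation only, the standard statements (in the form usually given in Krasnosel'skii and Rutickii [23]) are recalled below; they are the classical definitions, not necessarily the exact displays of the original paper.

```latex
% Young's inequality for a complementary pair (B, \bar B):
zy \le B(z) + \bar{B}(y), \qquad z, y \ge 0.

% \Delta_2-condition (for large arguments):
B(2z) \le c\, B(z) \quad \text{for } |z| \ge z_0, \ c > 1.

% Luxemburg norm on the Orlicz space L_B(\Omega):
\|u\|_{B} = \inf\Big\{ \lambda > 0 : \int_{\Omega} B\!\Big(\tfrac{|u(x)|}{\lambda}\Big)\,dx \le 1 \Big\}.

% H\"older's inequality in Orlicz spaces:
\Big| \int_{\Omega} u\,v \,dx \Big| \le 2\, \|u\|_{B}\, \|v\|_{\bar B},
\qquad u \in L_B(\Omega), \ v \in L_{\bar B}(\Omega).
```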
Remark 2.
If the doubling condition is imposed on the modular function but not on its conjugate, then the space in which the solutions exist is non-reflexive in general. For this reason, we will assume in the remainder of this article that B satisfies both conditions, the ∆2-condition and the ∇2-condition, so that Propositions 1 and 2 remain true.
with B being the right derivative of the N-function B(z) .
Proof. By (6), we take y = B (z); then we obtain and by Ch. I [23], we get the result.
In the following we will assume that for each N-function B i (z) = with b > 1 checks the ∆ 2 -condition and (22).
The Existence of an Entropy Solution
This section is devoted to the proofs of our main results which will be split into different steps. For m ∈ N * , we define the truncation at height m, T m (u) : R −→ R by
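The display defining the truncation did not survive extraction; the standard definition, consistent with the use made of T_m and sg_m below, is the following.

```latex
T_m(s) =
\begin{cases}
 s, & |s| \le m,\\[2pt]
 m\,\dfrac{s}{|s|}, & |s| > m,
\end{cases}
\qquad
\operatorname{sg}_m(s) = \frac{T_m(s)}{m}.
```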
Definition 6.
A measurable function u is said to be an entropy solution for the problem (P ), if u ∈W 1 B (Ω) such that u ≥ ψ a.e. in Ω and in Ω }, and sg m (s) = T m (s) m . We and for all v ∈W 1 B (Ω), we consider the following approximate problem: Theorem 1. Assume that conditions (1)-(4) and (22) hold true, then there exists at least one solution of the approximate problem (P m ).
and we assume that 1 0 h(t) t dt converge, so we consider the N-functions B * (z) defined by Step 1. A priori estimate of { u m }: and for a small enough η we deduce that v ≥ ψ. Thus v is an admissible test function in (P m ) and we get for by (2) and (4), we obtain where c is a constant such that 0 < c < 1, and since h, f m , φ ∈ L 1 (Ω) we deduce that (Ω), and by (8), (3) and (6) and the fact that exp(G(±∞)) ≤ exp where c 2 (k) is a positive constant which depends only on k.
Step 2. Almost everywhere convergence of { u m }: Firstly, we prove that meas{ x ∈ Ω : | u m | ≥ k } → 0. According to Lemma 2, we have with c being a positive constant and (k) → 0 when k → ∞. By (31) we obtain Thus, we deduce that Hence Secondly we show that for all {u m } measurable function on Ω such that In the beginning with α → g(α, k) is a decreasing map; then and according to (34) and (35) we have like [28] we obtain lim k→∞ g(0, k) = 0. Hence We have now to demonstrate that the almost everywhere convergence of { u m : } (Ω(R + 1)), and by embedding Theorem, for an N-function P with P B we have and since η R = 1 in Ω(R), we have: in Ω.
Lemma 3 ([29])
. Let an N-functionB(t) satisfy the ∆ 2 -condition and u m , m ≥ 1 and u be two functions of Then, u m u weakly in L B (Ω) as m → ∞. Hence, Step 3. Weak convergence of the gradient: implies the local convergence in measure and, therefore, the local Cauchy property of u m in measure Proving that ∇u m −→ ∇u locally in measure as m → ∞.
For that, we borrow ideas from Evans [13], Demangel-Hebey [12] and Koznikova L. M. [21,22]. Let δ > 0 be given. By Egoroff's Theorem, there exists E δ,k,α ⊂⊂ Ω such that Then, by Lemma 3 and (33) we obtain that According to (1) and the fact that a continuous function on a compact set achieves the lowest value, there exists a function θ(x) > 0 almost everywhere in Ω, such that, for holds. Writing (P m ) twice for { u m } and { u n }, and by subtracting the second relation from the first and according to (23), (27), (29) and (36) we obtain Consider the following test function: Further on, by applying (40), we get Since B(u) satisfies the ∆ 2 -condition, by (14) we have According to Lemma 3, we get and Additionally, using (14) and (3) we have Hence, , and according to (42), (43), (44) and (15) we obtain that Then, For any arbitrary δ > 0 for fixed m and α, by choosing k from (45) we establish the following inequality By applying Lemma 1, for any > 0, we find In addition, according to (37), we have By combining (39), (46) and (47) we deduce the inequality Hence, the sequence { ∇u m } is fundamental in measure on the set Ω(R) for any R > 0. This implies (38) and the selective convergence, Then, we obtain for any fixed k > 0 Applying Lemma 3, we have the following weak convergence
Proposition 4. Suppose that Conditions
(1)-(4) are satisfied and let (u m ) m∈N be a sequence inW 1 B (Ω(R)) such that Proof. Let > 0 fixed, and η > ; then from (1) we have using the condition (c) we get proceeding as in [28], and we obtain ∇u m −→ ∇u; by letting −→ ∞ we get ∇u m χ −→ ∇u, from (2), and the vitali's Theorem, we get Consequently, by Lemma 2.6 in [11] and (48), we get thanks to lemma 1 (see [20]) and (48), we have Step 4. Strong convergence of the gradient: In this step we consider again the following test function: by (2) and (4) we get we then obtain By (2) we get According to (27), (29); and T k (u m ) Then, By Lebesgue dominated convergence theorem, we have T k (u m ) −→ T k (u) strongly inW 1 B,loc (Ω) and ∇T k (u m ) ∇T k (u) weakly inW 1 B (Ω); then the terms on the right hand side of (50) go to zeros as k, j, m tend to infinity, which gives By Proposition 4 and the diagonal process, we deduce for k −→ ∞ that Hence, we obtain for a subsequence ∇u m −→ ∇u a.e. in Ω. (53) Step 5. The equi-integrability of b m i (x, u m , ∇u m ) : In this step we will show that Therefore, it is enough to show that b m i (x, u m , ∇u m ) is uniformly equi-integrable. We take the following We have By (2) and (4) we get Since a m i (x, u m , ∇u m ) is bounded inW 1 B (Ω), and η j (|u m |) ≥ 0 then by (27), (29) we obtain LetV(Ω(R)) be an arbitrary bounded subset for Ω; then, for any measurable set E ⊂V(Ω(R)) we have We conclude that ∀E ⊂V(Ω(R)) with meas(E) < β( ), and Finally, by combining the last formulas we obtain giving the assumed results.
Step 6. Passing to the limit: Let ϕ ∈W 1 B (Ω) ∩ L ∞ (Ω); we take the following test function: By Fatou's Lemma we get and the fact that weakly inW 1 B (Ω). Additionally, since ψ k T k (u m − ϕ) ψ k T k (u − ϕ) weakly inW 1 B (Ω), and by (53) we obtain and and so we get now passing to the limit to infinity in k, we obtain the entropy solution of the problem.
Uniqueness of the Entropy Solution
Theorem 3. Suppose that conditions (1)-(3) are true, and b i (x, u, ∇u) : Ω × R × R N −→ R are strictly monotonic operators, at least for a broad class of lower order terms. Then, the problem (P ) has a unique solution.
Proof. Let u and ū in K_ψ ∩ L^∞(Ω) be two solutions of problem (P) with u ≠ ū.
Author Contributions: All authors performed all the steps of the ideas and proofs in this research. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest. | 3,486 | 2020-09-01T00:00:00.000 | [
"Mathematics"
] |
RSCNN: A CNN-Based Method to Enhance Low-Light Remote-Sensing Images
: Image enhancement (IE) technology can help enhance the brightness of remote-sensing images to obtain better interpretation and visualization effects. Convolutional neural networks (CNN), such as the Low-light CNN (LLCNN) and Super-resolution CNN (SRCNN), have achieved great success in image enhancement, image super resolution, and other image-processing applications. Therefore, we adopt CNN to propose a new neural network architecture with end-to-end strategy for low-light remote-sensing IE, named remote-sensing CNN (RSCNN). In RSCNN, an upsampling operator is adopted to help learn more multi-scaled features. With respect to the lack of labeled training data in remote-sensing image datasets for IE, we use real natural image patches to train firstly and then perform fine-tuning operations with simulated remote-sensing image pairs. Reasonably designed experiments are carried out, and the results quantitatively show the superiority of RSCNN in terms of structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) over conventional techniques for low-light remote-sensing IE. Furthermore, the results of our method have obvious qualitative advantages in denoising and maintaining the authenticity of colors and textures.
Introduction
Remote-sensing images play a significant role in large-scale spatial analysis and visualization, including climate change detection [1], urban 3D modelling [2], and global surface monitoring [3]. However, due to the effects of remotely sensed devices, undesirable weather conditions, such as haze, blizzards, storms, clouds, etc. [4], have a great negative impact on the visibility and interpretability of remote-sensing images. Low-light images create more difficulties for many practical tasks such as marine disaster monitoring and night monitoring. Therefore, it is a great necessity to enhance the contrast and brightness of low-light images automatically when we want to achieve a high-quality remote-sensing image dataset with large scale and long time series.
The purpose of image enhancement (IE) is to improve the visual interpretation of images and to provide better clues for further processing and analyzing [4][5][6]. Over time, many low-light IE methods have been proposed and achieved great success in image processing and remote-sensing fields. Histogram Equalization (HE) [7] and its variants such as Dynamic Histogram Equalization (DHE) [8], Brightness Protecting Dynamic Histogram Equalization (BPDHE) [9], and Contrast Constrained Adaptive Histogram Equalization (CLAHE) [10] are classic traditional contrast-enhancement methods. The purpose of HE is to increase the contrast of the entire image by expanding the dynamic range of the image. It is a global adjustment process without considering the change in brightness, which is prone to local overexposure, color distortion, and poor denoising. This kind of method can automatically obtain images with stronger contrast and better brightness. enhanced image with better sharpness, but it may cause worse denoising. Besides, a proper layer size is required to adequately capture the characteristics of training data while reducing the risk of a vanishing gradient as much as possible. Low-light CNN (LLCNN) [26] firstly introduces CNN convolutional layers into low-light IE and achieves better result in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) compared to LLNet and many other traditional methods. LLCNN utilizes a specially designed convolutional module and residual learning to achieve a deeper network while coping with the vanishing gradient problem. It adopts SSIM as the training loss to obtain better texture preservation. The same as in [24], a gamma degradation method with the parameters randomly set in the range (2,5) is used to generate low-light images for training. Multi-branch Low-light Enhancement Network (MBLLEN) [27] uses a CNNbased module to extract and enhance feature maps at different levels and fuses them to obtain the final result. The authors of [28] trained pure fully convolutional end-to-end networks, which operate on raw sensor data of extreme low-light images directly to obtain an enhanced result.
With respect to remote-sensing low-light image enhancement, most researchers still focus on traditional and machine learning methods. For example, the authors of [29] applied HE for contrast enhancement, that of [30] used dominated brightness level analysis and adaptive intensity transformation to enhance remote-sensing images, the authors of [21] proposed DWT-based methods for remote-sensing IE tasks, and the work in [4] enhanced low-visibility aerial images using the Retinex representation method. Deep learning methods have not received enough attention yet.
According to a previous discussion, obviously, convolutional network has shown its great superiority in low-light image processing. Therefore, in this paper, we proposed a purely CNN-based architecture called remote-sensing CNN (RSCNN) for low-light remotesensing IE. Different kernels in the RSCNN are used to capture various features such as the textures, edges, contours, and deep features of low-light images. Then, all the feature maps are integrated to obtain the final images which have been enhanced properly. It is well known that definition of the loss function of a neural network is very crucial. The L1 loss is very popular for measuring the whole similarity of two images. In addition, the SSIM loss is also applied in this paper to retain more accurate image textures. The sum of the L1 and SSIM loss functions are adopted as the overall loss function to take advantage of the two loss functions. With respect to the lack of training dataset for remote-sensing IE, we adopt transfer-learning from the pretrained RSCNN model for the natural image enhancement dataset and fine-tune it for remote-sensing IE with simulated low-light and normal-light remote-sensing image pairs.
Reasonable experiments are carried out with two datasets. Compared to 10 baselines, both quantitative and qualitative results illustrate that RSCNN has great advantages over other methods for low-light remote-sensing IE.
Formatting of Mathematical Components
The framework of RSCNN is shown in Figure 1. A deep CNN-based model extracts the abstract features and learns the detailed information from the input low-light images. Since CNN-based models can directly process multi-channel images without color space conversion, all information of input images can be retained and the complex nonlinear relation patterns between low-light and normal-light image pairs can be well learned, thereby generating images with proper light, stronger contrast, and natural textures. In detail, there are four main different types of components in the deep learning network, as described below.
(1) Convolution layer The whole network has 8 convolution layers. Each layer consists of multiple kernels, and the weights of these kernels do not change during the convolution process, i.e., there is weight sharing. With the convolution operation, RSCNN extracts the different features of the input images at different convolution levels. The output of the first CNN layer roughly depicts the location of low-level features (edges and curves) in the original image. On this basis, another convolution operation is carried out, and the output will be the activation map representing higher-level features [31]. Such features can be semicircles (a combination of curves and lines) or quadrilateral (a combination of several lines). The more convolution layers, the more complex feature activation map will be obtained. There are several parameters that need to be determined for each layer, such as the kernel size K, padding P, and stride S. The number of kernels N is the number of output feature maps. W and H denote the width and height of images, respectively. Thus, the size of the output feature maps can be calculated as follows: Since we want to fix the tensor size of the input and the output for each convolution layer, we set K = 5, S = 1, and P = 2 for the first convolution layer and K = 3, S = 1, and P = 1 for the rest.
(2) Activation layer The activation layer is vital in a deep CNN because the nonlinearity of the activation layer introduces nonlinear characteristics to a system which has just undergone linear computation, gives RSCNN a stronger representational power, and avoids the occurrence of gradient saturation during training. We adopt rectified linear unit (ReLU) for its advancement in improving the training speed of RSCNN without obvious changes in accuracy. The activation layer is applied over the output of the previous layer.
Every value obtained from upper stream convolution layer should be activated by ReLU before it is input into the downstream convolution layer.
(3) Upsampling operation Inspired by the CNN for super resolution methods [32][33][34], in the RSCNN, we adopt bilinear interpolation to magnify the image by two times for a better receptive field and then add another CNN layer after that in order to learn more complex features with different scales. We use Bicubic as the interpolation method in this operation to help preserve clearer edges [35].
(4) Max-pooling operation
We adopt the pooling operation in RSCNN with two purposes: Firstly, the pooling operation is helpful to reduce the number of parameters and to resize the image to the Remote Sens. 2021, 13, 62 5 of 13 original patch image size, decreasing the training cost by a meaningful extent. Secondly, the pooling operation can cut down the possibility of overfitting, helpful to suppress noise.
In RSCNN, we set the kernel size to 2 for each max-pooling operation.
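The excerpt gives the layer hyper-parameters (eight convolution layers, a 5×5 kernel with padding 2 in the first layer and 3×3 kernels with padding 1 afterwards, stride 1, ReLU activations, one ×2 upsampling step, and a 2×2 max-pooling step), but the exact layer ordering and channel widths are not fully spelled out. The PyTorch sketch below is therefore only one plausible arrangement, not the authors' code: the channel count of 64 and the position of the upsample/pool pair are assumptions, and bicubic interpolation is used even though the text mentions both bilinear and bicubic.

```python
import torch
import torch.nn as nn

class RSCNNSketch(nn.Module):
    """Illustrative 8-conv-layer network in the spirit of RSCNN (not the authors' code)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # First layer: 5x5 kernel, stride 1, padding 2.
        self.head = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=5, stride=1, padding=2),
            nn.ReLU(inplace=True))
        # Six intermediate 3x3 layers, stride 1, padding 1.
        body = []
        for _ in range(6):
            body += [nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
                     nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*body)
        # x2 interpolation followed by 2x2 max-pooling restores the patch size.
        self.up = nn.Upsample(scale_factor=2, mode="bicubic", align_corners=False)
        self.pool = nn.MaxPool2d(kernel_size=2)
        # Eighth convolution layer maps back to 3 output channels.
        self.tail = nn.Conv2d(channels, 3, kernel_size=3, stride=1, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.head(x)
        y = self.up(y)      # learn features at a magnified scale
        y = self.body(y)
        y = self.pool(y)    # back to the original patch size
        return self.tail(y)

if __name__ == "__main__":
    out = RSCNNSketch()(torch.rand(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```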
Loss Function
A combination of the SSIM loss function and the L1 loss function is adopted in RSCNN. The L1 loss function, noted as L l1 , is given as Equation (3).
where p and P represent the index of the pixel and the patch, respectively. o(p) and e(p) represent the values of the pixels in the processed patch and target ones, respectively. L1 loss can preserve pixel-wise relations between the target images and the enhanced ones of every training pair, helping enhanced images have similar light intensity to the target one. However, it gives less consideration to the overall structure of the whole image, resulting in a lack of textural details. Additionally, low-light capture usually causes structural distortions such as blurs and artifacts, which is visually salient but cannot be well handled by pixel-wise loss functions such as the mean squared error.
The SSIM loss function, however, is helpful in this situation. The SSIM value for patch P is defined as Equation (4), where x is the original normal-light image, y is the enhanced one, µ_x and µ_y are the respective pixel value averages, σ²_x and σ²_y are the respective variances, σ_xy is the covariance, and c_1 and c_2 are constants that prevent the denominator from being zero. A larger SSIM means better quality of the processed images. Therefore, L_ssim is defined as 1 − SSIM.
For L, we combine L ssim and L l1 as Equation (5).
The value of p is set to 0.1 in L. The training target is to minimize L.
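Equations (3)-(5) are referenced above but their displays did not survive extraction. The sketch below shows one common way to combine an L1 term with (1 − SSIM), weighting the L1 term by p = 0.1 as stated; both the exact form of the authors' Equation (5) and the simplified single-scale SSIM used here (global patch statistics rather than a windowed SSIM) are assumptions.

```python
import torch

def ssim_global(x: torch.Tensor, y: torch.Tensor,
                c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Single-scale SSIM computed from whole-patch statistics (a simplification)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def combined_loss(enhanced: torch.Tensor, target: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    l1 = torch.mean(torch.abs(enhanced - target))   # pixel-wise L1 term
    l_ssim = 1.0 - ssim_global(enhanced, target)    # structural term
    return p * l1 + l_ssim                          # assumed weighting; Eq. (5) itself is not shown

loss = combined_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(float(loss))
```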
Training
(1) Datasets There are two datasets that are used in this work: the DeepISP dataset [36] and the UCMerced dataset [37]. Their descriptions are as follows.
DeepISP: A total of 110 pairs of normal exposure and low-light exposure images are included, 77 for training and 33 for testing. The scenes captured include indoor and outdoor images, and sun light and artificial light with a Samsung S7 rear camera. The image pairs are almost the same, except that the low-light one has 1/4 of the exposure time of the normal one. The resolution of each image is 3024 × 4032. Original images are divided into patches with sizes of 256 × 256. Figure 2 illustrates the representative images of every type. This dataset is named Dataset1.
UCMerced: Figure 3 shows some representative images of the UCMerced dataset (Figure 3. The representative images of UCMerced for every type [37]); this dataset is named Dataset2.
As far as we know, there is no specific open dataset for low-light remote-sensing image enhancement training. With respect to this dilemma, a set of natural low-light and normal-light image pairs generated from an ordinary image dataset, that is Dataset1 in this paper, is adopted for the pretrained training.
Then, because the light source angle and camera angle of remote-sensing imaging equipment have their own obvious characteristics compared with natural images, it is not proper to directly apply a model that was trained using natural image pairs to remotesensing images. Therefore, a fine-tuning process is indispensable. First, we choose "dense residential" images from the UCMerced dataset because, compared with other categories, these images have more diverse features, richer textures, more complex shadows, and blurrier boundaries. These complex conditions make low-light images more difficult to enhance. Then, we follow the methods of [19] and [29] to set the original image as the ground truth and use the degradation method to generate the corresponding low-light image. A pair of low-light images and the corresponding one is used as the input and label for RSCNN training and testing. A random gamma adjustment is used to simulate the low-light images. The parameter gamma is randomly set in the range of (2, 5), enabling RSCNN to adaptively enhance the image and to have better generalization. Finally, a total of 100 pairs of normal exposure and low-light exposure images is used. They are split into 80 pairs for training and 20 pairs for testing, respectively. This dataset is named Dataset2.
(2) Evaluation criteria PSNR, SSIM [11], and CIEDE2000 [38] are used to evaluate the performance of RSCNN. Since SSIM has already been described, here, we briefly describe the PSNR evaluator as follows.
where X is the normal-light image and Y is the enhanced one generated from the low-light image, and MAX represents the maximum signal value that exists in X. The higher the PSNR, the better RSCNN performs. According to Equation (6), PSNR is a variant of the mean squared error (MSE). It is a pixel-wise full-reference quality metric, computed by averaging the squared intensity differences between the enhanced result and the reference image pixels [11]. It is easy to calculate and has a clear physical meaning, but it is not sensitive to changes in image structure and does not fully accord with human visual characteristics. SSIM compensates for this. According to Equation (4), SSIM focuses on image structure similarity and measures image similarity in terms of brightness (µ_x, µ_y), contrast (σ²_x, σ²_y), and structure (σ_xy). PSNR and SSIM are widely used to evaluate the performance of low-light image-processing methods [22,24,26,39,40] and remote-sensing image-processing methods [20,41,42]. With the help of PSNR and SSIM, we can effectively evaluate the color retention and structural differences between enhanced images and reference images.
Furthermore, we adopt CIEDE2000 as the evaluation criteria. It is a color difference equation based on CIE's lab color space (CIELAB) and is published by the International Commission on Illumination (CIE) in Publication 142-2001. It can help us evaluate the degree of color difference between the ground-truth image and the enhanced image. The smaller CIEDE2000 is, the closer the result image is to the ground-truth image. We use the "imcolordiff" function in Matlab 2020b for CIEDE2000. It is based on [43].
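A quick way to reproduce the two main quantitative criteria is shown below; scikit-image's metric functions (available in recent versions under skimage.metrics) are used as stand-ins, the image arrays are placeholders, and no equivalent is sketched for CIEDE2000, which the paper computes with Matlab's imcolordiff.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference: np.ndarray, enhanced: np.ndarray):
    """reference/enhanced: HxWx3 uint8 arrays of the ground-truth and enhanced images."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
    ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=255)
    return psnr, ssim

# Placeholder images standing in for a ground-truth / enhanced pair.
ref = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
out = np.clip(ref.astype(int) + np.random.randint(-5, 6, ref.shape), 0, 255).astype(np.uint8)
print(evaluate(ref, out))
```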
Implementation Details
There are 3 kinds of CONVs: 1-D-CNNs, 2-D-CNNs, and 3-D-CNNs. Since we want to treat the input image patches as a whole with spatial information, we choose a 2-D-CNN as the CONV in our network [44]. The configuration of each convolution layer is shown in Figure 1. The weights of each CONV layer are initialized using kaiming_normal [45].
During training, the patch-size is set to 256 × 256 and the depth of the whole network is 8. In addition, Adam optimization is adopted with a weight decay of 0.0001. The base learning rate is 0.001, and the batch size is 8. Our model is trained using PyTorch.
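The training hyper-parameters listed above (Adam, learning rate 0.001, weight decay 0.0001, batch size 8, Kaiming-normal initialization) translate directly into PyTorch. The tiny stand-in model, the random batch, and the plain L1 objective below are assumptions used only to keep the snippet self-contained; they are not the authors' training script.

```python
import torch
import torch.nn as nn

# A stand-in model; the earlier architecture sketch would be used in practice.
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 3, 3, padding=1))

# Kaiming-normal initialization of every convolution layer, as stated in the paper.
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
        if m.bias is not None:
            nn.init.zeros_(m.bias)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

# One illustrative optimization step on a random batch of 8 patches of size 256x256;
# an L1 loss stands in here for the combined L1 + (1 - SSIM) objective.
low, target = torch.rand(8, 3, 256, 256), torch.rand(8, 3, 256, 256)
optimizer.zero_grad()
loss = nn.functional.l1_loss(model(low), target)
loss.backward()
optimizer.step()
print(float(loss))
```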
Baselines
Ten different methods, which are shown in Table 1, are compared with our proposed method.
As observed, different types of models are considered. The models that are used apply the default settings suggested by the authors.
Comparison Results on Dataset1
The experiment is first carried out on Dataset1, and 9 different methods are compared with RSCNN. Detailed results are presented in Table 2. In the experimental results, the SSIMs of DHE and CLAHE are significantly improved compared to ordinary HE, and the PSNR result of DHE is the best of these. Compared with the histogram equalization algorithms, the Retinex algorithms achieve better indicator results. Among them, the SSIM of the MSRCR method is about 12% higher than that of DHE but, because its adjustment is not a global pixel-wise operation, its PSNR is 8% lower than that of DHE. LIME and BIMEF, compared with the traditional histogram methods and the Retinex methods, are better at maintaining both the overall visual characteristics and the pixel-wise results. DWT-SVD is often used for low-light remote-sensing image enhancement, and its results are similar to those of the enhancement algorithms based on luminance estimation. Obviously, from the perspective of the quantitative indicators, RSCNN obtains better results than the various traditional low-light enhancement algorithms and can be applied to low-light remote-sensing image enhancement tasks. For example, the SSIM of RSCNN is 0.825, which is 0.2 higher than that of the widely used DWT-SVD algorithm. As for the PSNR, our method achieved 28.123 dB, which is much better than all these baselines, whose PSNRs are lower than 20 dB.
As shown in Figure 4, in general, all the methods are able to obtain brighter images with stronger contrast. However, the results of many methods are not sufficient and satisfactory. For example, HE-based methods such as HE, DHE, and CLAHE can inappropriately enhance the dark background (too bright or too dark) and can cause color distortions. SRbased methods (i.e., SSR, MSR, and MSRCR) and LIME are able to appropriately enhance the dark background, but the color distortions are also very severe and the background is enhanced to be blue instead of actual dark.
As for color distortion, CLAHE, DWT-SVD, and RSCNN work relatively better, and the backgrounds of the enhanced images are very close to those of the target images. However, DWT-SVD and CLAHE suffer from over-enhanced and insufficient brightness, respectively, in the high-contrast region, which is not as natural as that of our proposed RSCNN. In addition, the HE and DHE enhanced images have significant noise, and SSR and MSR generate images that appear to be covered by haze. Meanwhile, the images that are enhanced by our proposed method are sharper and have better brightness than those of other methods thanks to its powerful feature extraction ability and learning ability.
Comparison Results on Dataset2
To evaluate the performance of RSCNN on the low-light remote-sensing images, we fine-tuned the trained model and tested it using Dataset2. The results are presented in Table 3. In addition, Figure 5 shows the visual results to compare the proposed method with other methods. In remote-sensing image enhancement, preserving accurate textural and structural information is very important for many applications including scene classification [46] and object detection [47]. In addition, obtaining images with natural colors is also of great significance for visual discrimination and further analysis. As we can see from Table 3, the comparison results indicate that RSCNN has the best performance compared to all other low-light image enhancement methods. Specifically, the SSIM, PSNR, and CIEDE2000 of our method are 0.791, 20.936 dB, and 19.496, respectively. To comprehensively support the qualitative conclusions of the superiority of RSCNN, visual comparison and analysis are also needed. Figure 5 shows the image-enhancement results obtained using different methods for qualitative comparison. In addition, the patches in the two red boxes are enlarged to show detailed information. As shown in Figure 5, all the methods obtain images with stronger contrast and brightness. However, the results of CLAHE, BIMEF, and DWT-SVD may not be sufficiently enhanced since the brightness is still somewhat dim. In addition, different methods have different characteristics, resulting in different effects.
For example, in terms of the image colors, the buildings obtained by HE, DHE, and LIME are enhanced to be different colors, which are far from the standard natural images. The estimated images generated by SSR, MSR, and RSCNN are much better than other methods. As for detailed information such as edges and textures in dark regions, HE, DHE, and LIME are able to obtain clear cars. However, several other methods cannot accurately replicate the detailed information. For example, the textures of cars that are generated by CLAHE, BIMEF, and DWT-SVD are very dark and blurred, which make it hard to figure out the shape, and even the trees cannot be visually recognized since they are nearly black. Additionally, although the results from MSR and SSR are free of apparent color distortion, they suffer from apparent grid-like veins, which can be avoided by using our method. As a whole, the visual effects of the RSCNN are the closest to the original image in both color and texture. For instance, RSCNN preserves the details of trees and cars and enhances remote-sensing image with little information loss, thus making the images more realistic than those of other methods.
Conclusions
An end-to-end RSCNN model is proposed in this paper to obtain brighter images from degraded low-light images and is applied to remote-sensing images. A CNN architecture is used to achieve end-to-end enhancement for low-light remote-sensing images. The upsampling and downsampling operators are designed to learn deep features from different scales; in this way, the enhanced images retain more detailed features. Compared to traditional methods, our method achieves more natural results with more realistic textures and vivid details while revealing the edge and structural features as much as possible. It can help a lot with subsequent high-level remote-sensing image information-discovery tasks. | 6,832 | 2020-12-26T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Feature level fine grained sentiment analysis using boosted long short-term memory with improvised local search whale optimization
Background In the modern era, Internet-based e-commerce world, consumers express their thoughts on the product or service through ranking and reviews. Sentiment analysis uncovers contextual inferences in user sentiment, assisting the commercial industry and end users in understanding the perception of the product or service. Variations in textual arrangement, complex logic, and sequence length are some of the challenges to accurately forecast the sentiment score of user reviews. Therefore, a novel improvised local search whale optimization improved long short-term memory (LSTM) for feature-level sentiment analysis of online product reviews is proposed in this study. Methods The proposed feature-level sentiment analysis method includes ‘data collection’, ‘pre-processing’, ‘feature extraction’, ‘feature selection’, and finally ‘sentiment classification’. First, the product reviews given from different customers are acquired, and then the retrieved data is pre-processed. These pre-processed data go through a feature extraction procedure using a modified inverse class frequency algorithm (LFMI) based on log term frequency. Then the feature is selected via levy flight-based mayfly optimization algorithm (LFMO). At last, the selected data is transformed to the improvised local search whale optimization boosted long short-term memory (ILW-LSTM) model, which categorizes the sentiment of the customer reviews as ‘positive’, ‘negative’, ‘very positive’, ‘very negative’, and ‘neutral’. The ‘Prompt Cloud dataset’ is used for the performance study of the suggested classifiers. Our suggested ILW-LSTM model is put to the test using standard performance evaluation. The primary metrics used to assess our suggested model are ‘accuracy’, ‘recall’, ’precision’, and ‘F1-score’. Results and Conclusion The proposed ILW-LSTM method provides an accuracy of 97%. In comparison to other leading algorithms, the outcome reveals that the ILW-LSTM model outperformed well in feature-level sentiment classification.
INTRODUCTION
In recent times, the growth of web applications like 'social networks', 'e-commerce websites', 'blogs', and 'online forums' has allowed people to express their views on products, events, and services (Rosa et al., 2018). Similarly, sentences with positive connotations may indicate 'happiness', 'contentment', or 'pleasure'. As a result, these data must be analyzed to extract pertinent data and polarized views, respectively. Furthermore, businesses and individuals must be able to determine whether a user's opinion is beneficial or detrimental in real time. This reveals the significance of people's opinions in society. Therefore, it must be formulated and expressed appropriately (Selvakumar & Lakshmanan, 2022;Sharma, Chaurasia & Srivastava, 2020). Sentiment analysis plays a vital role in identifying and converting people's thoughts about a certain subject or item into a positive, negative, or neutral polarity or even into a score, or star rating (Sun et al., 2019). Based on the granularity preferred grade, sentiment analysis classification has been done at three levels that are 'document level', 'phrase level', and 'feature or aspect level'. The objective at the document level is to identify whether an entire opinion article shows a positive or negative attitude (Mutanov, Karyukin & Mamykova, 2021). Whereas in the phrase or sentence level, each statement is examined to determine whether it provided a positive, negative, or neutral attitude. On the aspect level, the finer-grained analysis is carried out and it investigates the viewpoint alone (Basiri et al., 2021). The assessments at the document and sentence levels do not specifically show what people liked and disliked. An opinion has two components that are an emotion which may be positive or negative and a target of opinion. The vast majority of sentiment analysis algorithms in use today concentrate on phrase-and document-level sentiment analysis (Ma et al., 2018). These two granularity-based sentiment analysis methods have partially overcome several issues, but they are still unable to satisfy the needs of the vast majority of current applications. Because of this, there has been a lot of interest in the expansion of fine-grained aspect-level sentiment analysis algorithms in the field of vision (Sadr, Pedram & Teshnehlab, 2019). Based on the methodology used, sentiment analysis at the aspect level may be further divided into lexicon-based, features-based, unsupervised, and more recently, deep learning-based categories (Kumar et al., 2020). Natural language processing (NLP) is the actual processing of text components. It converted the text element into a machinereadable format. The fundamental intention of NLP is sentiment analysis, which has several applications in 'web mining', 'text mining', and 'data mining' (Abdi et al., 2019). The quantity of false positives in text categorization is decreased by the application of deep learning algorithms (Souma, Vodenska & Aoyama, 2019;Wang, Niu & Yu, 2019). These deep learning methods have lately been used for several NLP problems for the categorization of small text (Zhai et al., 2020). However, identifying the proper situations for particular qualities is the key challenge. The majority of earlier methods that merged attentional processes with recurrent neural networks eventually increased noise and decreased prediction accuracy. 
Another challenge for attention systems is that the mood of some context words varies based on a variety of variables and cannot be inferred only from their appearance (Rehman et al., 2019). The remaining sections of the work carried out are organized as follows: Section 2 contains the literature survey, the research gap is described in Section 2.1, and the research methodology is discussed in Section 3. The final section discusses the probable outcomes and assessment metrics involved in this study.
LITERATURE SURVEY
This section is a detailed evaluation of the research work done previously on the topic of sentiment analysis.
Numerous e-commerce and social media platforms allow customers to write large numbers of product evaluations online, which provides developers with invaluable information for building new items. Therefore, Vijayaragavan, Ponnusamy & Aramudhan (2020) have utilized a cluster-based classification method for online product evaluations. Here, a support vector machine (SVM) classification method was employed to categorize the online customer product reviews. The second stage involves extracting the traits using an emotional analysis strategy. Finally, the effectiveness of the customer's capacity to successfully buy the items is evaluated using fuzzy-based soft set theory. On the other hand, Wang & Wan (2019) have offered SentiGAN and C-SentiGAN as tools for obtaining emotional intelligence. This model includes a multi-class discriminator and various generators. Here, a penalty-based aim was employed to motivate generators to produce a range of instances for a particular mood label. Similarly, Ishaq, Asghar & Gillani (2020) have presented a genetic algorithm with a convolutional neural network to provide a sentiment analysis classification method. Here the feelings were examined by combining three processes, such as mining semantic characteristics, utilizing Word2vec to transform the retrieved corpus, and deployment of CNN for opinion mining. Likewise, Shuang et al. (2020) have established a feature distillation network (FDN) for minimizing irrelevant data (noise) and extracting feature-related emotional information. The relationships among aspects and their respective contexts are implemented at a high resolution using a unique double gate technique. Xu et al. (2019) have presented an improved word representation strategy that builds weighted word vectors using the well-known TF-IDF algorithm and sentiment analysis. Bidirectional long short-term memory (BiLSTM) receives the weighted word vectors and correctly collects context information, improving the representation of comment vectors.
Ikram & Afzal (2019), on the other hand, introduced aspect-based sentiment classification to detect hidden patterns in large academic data by recognizing aspect-level sentiments. Similarly, Ye, Xu & Luo (2021) presented an ALBERTC-CNN-based aspect-level sentiment analysis, in which the upgraded ALBERTC network extracts global phrase information and local emotion data while representing the initial aspect-level text as a word vector. Additionally, Gao et al. (2019) suggested the CE-HEAT approach to extract uncommon sentiment terms; it has two hierarchical attention units, the first collecting sentiment characteristics from the sentiment attention layer and the second making use of the aspect characteristics obtained via the aspect attention layer. Boussaïd, Lepagnot & Siarry (2013) provided a survey of several key metaheuristics, describing the components and concepts used in numerous metaheuristics in order to compare and contrast their similar and distinct characteristics. Similarly, Dokeroglu et al. (2019) distinguished fourteen novel and noteworthy metaheuristics invented in the last 20 years, in addition to classic ones such as genetic algorithms, tabu search, and particle swarm optimization; their study addresses critical metaheuristic issues as well as recommendations for potential research opportunities and open challenges of nature-inspired population-based optimization algorithms. Hussain et al. (2019) conducted a survey of metaheuristic research covering 1,222 publications from 1983 to 2016; based on the evidence gathered, their article explored four aspects of metaheuristic research: the introduction of new algorithms, comparisons and analysis, modifications and hybrids, and future directions and research gaps. Baydogan & Alatas (2021) proposed an automatic hate speech detection system based on a metaheuristic approach, developing ant lion optimization (ALO) and moth flame optimization (MFO) algorithms for the hate speech detection problem. Baydogan (2021) presented a novel swarm intelligence-based social spider algorithm (SSA), which models the collaborative behaviour of spiders and was first applied to sentiment analysis (SA) on Twitter data, introducing a new application domain for optimization algorithms.
Research gap
The domain dependence of sentiment terms is the major obstacle to opinion mining and sentiment analysis: a feature set may perform exceptionally well in one domain while performing very poorly in another. For sentiment analysis to be effective, opinion words must be interpreted together with implicit information, since the implicit information determines how sentiment-bearing phrases actually work. People express their ideas in many ways; every person holds unique opinions and has a distinct manner of thinking and expressing themselves, and typographical flaws can occasionally make it difficult to gather opinions.
Natural language overhead, such as ambiguity, co-reference, implicitness, and inference, makes sentiment analysis tools more difficult to apply. Classifying sentences as positive, negative, very positive, very negative, or neutral in sentence-level opinion mining (OM) is difficult because every writer has a distinct writing style and a single sentence may include positive, negative, very positive, very negative, and neutral concepts at once.
Addressing these gaps would allow enterprises to process vast amounts of textual data, thereby boosting service quality and generating enormous profits.
The overall pipeline comprises data collection; data pre-processing (tokenization, lemmatization, stemming, and removal of stop words); and feature extraction using log term frequency-based modified inverse class frequency (LFMI).
Proposed methodology
In this article, ILW-LSTM, a deep learning approach, is suggested for feature-level sentiment analysis. The method includes five main stages: data acquisition, data pre-processing, feature extraction, feature selection, and, finally, sentiment classification. First, the data is gathered from the chosen dataset, and the textual data is taken for sentiment analysis to determine the essential aspects. The textual data is subjected to pre-processing, which includes whitespace tokenization, lemmatization, and snowball stemming. Tokenization is a technique for dividing text into smaller units: large amounts of text are broken down into words or phrases, and, depending on the problem, precise criteria are defined to divide the text content into pertinent tokens. Lemmatization is one of the most widely used pre-processing techniques. The suggested method uses log term frequency-based modified inverse class frequency for textual feature extraction. Subsequently, the Levy flight-dependent mayfly optimization algorithm (LFMO) is employed to improve classification accuracy: it is applied to the training data and chooses the optimal set of features. The final step is to feed the selected features to the proposed ILW-LSTM for polarity classification, whose output assigns the input data to the classes 'positive', 'negative', 'very positive', 'very negative', or 'neutral'. Figure 1 displays the general flow diagram of the suggested feature-level sentiment analysis; a code-level skeleton is sketched below.
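To make the stage ordering concrete, the following minimal Python skeleton mirrors the five stages; all function and class names here (preprocess, extract, select, classifier) are illustrative placeholders rather than the authors' implementation, and each stage is elaborated in the subsections that follow.

def preprocess(review):
    # stage 2 placeholder: tokenization, lemmatization, and stemming
    return review.lower().split()

def run_pipeline(raw_reviews, labels, extract, select, classifier):
    # stage 1: data acquisition happens upstream (raw_reviews, labels)
    tokens = [preprocess(r) for r in raw_reviews]   # stage 2: pre-processing
    features = extract(tokens, labels)              # stage 3: LTF-MICF weighting
    subset = select(features, labels)               # stage 4: LFMO feature selection
    classifier.fit(subset, labels)                  # stage 5: ILW-LSTM training
    return classifier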
Data acquisition
A rating is created by combining information from the review about the subjects of its sentences and the attitudes expressed toward them. The pre-processing steps applied to the customer reviews before further processing are described below.
Pre-processing
Pre-processing procedures are employed to remove undesirable data from the collection. In this case, pre-processing is carried out as part of the data preparation procedure for sentiment analysis. The three pre-processing operations are whitespace tokenization, lemmatization, and snowball stemming.
Tokenization
To understand the context or build the NLP model, this method separates text into words or tokens whenever it encounters a whitespace character. To determine the meaning of the text, the word order is then examined.
Lemmatization
The process of lemmatization uses morphological analysis and a vocabulary to determine the lemma, yielding the dictionary form of the word. Morphological analysis identifies the possible grammatical forms of a word and relates each surface form in a phrase to its base lexeme. Additionally, part-of-speech (POS) labeling aims to improve the consistency of review processing by focusing on recall and accuracy; where possible, it can be applied to the lexicons under investigation to eliminate groups of terms judged unnecessary or disruptive to document identification.
Snow-ball stemming
Stemming is an NLP technique that reduces inflected words to their root forms so that words of a similar kind cluster under a single stem. The snowball stemmer (SBS), also known as the Porter2 stemming algorithm because it is an improved adaptation of the Porter stemmer, is utilized here. It transforms terms into stems by applying rule-based suffix stripping to word-ending sequences. Stemming has historically removed the final characters of a word without considering its meaning, whereas lemmatization maps words to meaningful dictionary forms without blindly eliminating characters.
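The three operations map directly onto standard NLTK components. The sketch below assumes NLTK's WhitespaceTokenizer, WordNetLemmatizer, and SnowballStemmer (the WordNet resources must be downloaded once beforehand); the sample sentence and its output are only illustrative.

from nltk.tokenize import WhitespaceTokenizer
from nltk.stem import WordNetLemmatizer, SnowballStemmer

tokenizer = WhitespaceTokenizer()
lemmatizer = WordNetLemmatizer()
stemmer = SnowballStemmer("english")   # Snowball = "Porter2" stemmer

def preprocess(review):
    tokens = tokenizer.tokenize(review.lower())       # split on whitespace
    lemmas = [lemmatizer.lemmatize(t) for t in tokens]  # dictionary forms
    return [stemmer.stem(t) for t in lemmas]            # rule-based stems

print(preprocess("The cameras on these phones were amazing"))
# e.g. ['the', 'camera', 'on', 'these', 'phone', 'were', 'amaz']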
Feature extraction
Feature extraction is crucial to sentiment analysis. It is mainly employed to convert raw data into a low-dimensional feature representation. In this part, the characteristics are extracted from the text for sentiment analysis; our suggested method uses a log term frequency-based modified inverse class frequency (LFMI) for further evaluation. Additionally, this method incorporates testing and training, with the extraction of textual elements occurring during training.
Log term frequency-based modified inverse class frequency. Term weighting of the input reviews is carried out after the pre-processing operation. The frequency of a word in a review document is measured by the term frequency T_fq (Zhao et al., 2021). However, T_fq alone is insufficient, since a text would then be dominated by the terms that appear most frequently. The use of class information from reviews in supervised term weighting techniques has drawn increasing interest, so T_fq is combined here with a supervised term weighting method. The inverse class frequency I_fq relates the total number of classes to the number of classes in which the term appears in the training reviews. Log normalization is first applied to T_fq, giving the log term frequency
LT_fq(t_m) = log(1 + T_fq(t_m, r_x)),
where T_fq(t_m, r_x) is the total frequency of term t_m over the set of review documents r_x. The modified form of I_fq, known as MICF, is then computed for each word. This modification is carried out because the distinct class-specific scores of each term must contribute differently to the overall term score; various weights are therefore applied to the class-specific scores, and the weighted sum of all class-specific scores is used as the final term score. The combined weighting scheme is written as Eq. (1),
LTF-MICF(t_m) = LT_fq(t_m) x MI_fq(t_m), with MI_fq(t_m) = SUM_n [w_mn x s_mn],
where w_mn stands for the particular weighting factor of term t_m for class c_n and s_mn is the class-specific score. The class-specific score is computed from four counts: r_xt*, the number of reviews in class c_n that contain t_m; r_xt', the number of reviews in other classes that contain t_m; r_x~t, the number of reviews in other classes that do not contain t_m; and r_xt, the number of reviews in class c_n that do not contain t_m. Negative weights are disregarded using the constant 1, and to prevent the zero-denominator problem in the worst case, the denominator is set to a minimum of 1 when the corresponding count is 0. The inverse class frequency may be written as
I_fq(t_m) = log(N / C(t_m)),
where C(t_m) is the number of classes that include the word t_m and N denotes the total number of classes in the set of review documents. After term weighting, F_x = {F_1, F_2, ..., F_p} denotes the feature set of the dataset, where F_1, ..., F_p are the weighted terms from the pre-processed dataset.
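Because the extracted equations above are only partially recoverable, the following Python sketch implements a simplified LTF-ICF variant of the scheme: log-normalized term frequency multiplied by an inverse-class-frequency factor, with the class-specific weights w_mn of Eq. (1) left out.

import math
from collections import Counter, defaultdict

def ltf_icf_weights(docs, labels):
    # docs: list of token lists; labels: class label per document
    n_classes = len(set(labels))
    classes_with_term = defaultdict(set)
    for tokens, label in zip(docs, labels):
        for t in set(tokens):
            classes_with_term[t].add(label)   # C(t_m): classes containing t_m
    weights = []
    for tokens in docs:
        tf = Counter(tokens)
        weights.append({
            t: math.log(1 + tf[t])                               # LT_fq
               * math.log(n_classes / len(classes_with_term[t])) # I_fq
            for t in tf
        })
    return weights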
LFMO-based feature selection
The study implements feature selection using the optimization technique known as LFMO, described as follows. The mayfly algorithm was inspired by the way mayflies interact with one another, particularly during mating. Once the eggs hatch, the mayflies are immediately regarded as adults, and regardless of lifespan only the fittest mayflies tend to survive. Every mayfly in the search space has a position that corresponds to a candidate solution of the problem. The traditional mayfly method uses random functions to generate new variables, which can lead it into local optima. Here, Levy flight is coupled with the mayfly algorithm to increase the mayflies' ability to search for and determine the best solution. According to the Levy flight notion, a Levy flight-based technique provides a fast convergence rate, requires no derivative information, and improves the avoidance of local trapping around the ideal solution. The proposed mayfly optimization method proceeds through the following stages. Stage 1: Two sets of mayflies are created, one for the male population and one for the female. Every mayfly is randomly positioned in the problem space as a candidate solution, indicated by the d-dimensional vector Q_Gx = (Q_G1, Q_G2, ..., Q_Gd); performance is then evaluated against the established objective function F_CT(Q_Gx) (Nagarajan et al., 2022; Zervoudakis & Tsafarakis, 2020).
Stage 2: A mayfly's velocity vel = (vel_1, ..., vel_d) is initialized for positional changes. Its path is determined by a hybrid interplay of individual and social flying experience: every mayfly adapts its route toward its current personal optimal position (Pbest) and toward the best position any mayfly in the swarm has reached so far (Gbest). Stage 3: The population of male mayflies Q_Gmx (x = 1, 2, ..., IG) is initialized with velocities vel_mx. Male mayflies congregate in swarms, and each mayfly's position changes according to its personal experience and that of its neighbors. Taking Q_Gx^T as mayfly x's current location in the search space at time step T, the position is updated by adding the velocity to the current position, Q_Gx^(T+1) = Q_Gx^T + vel_x^(T+1).
Male mayflies, initialized as Q_Gxm^0 ~ U(Q_Gmmin, Q_Gmmax), are assumed to hover a few meters above the water, engaged in a nuptial dance. Since they are always moving, they do not develop remarkable speeds. The velocity of male mayfly x is therefore determined as
vel_xy^(T+1) = g * vel_xy^T + m_1 * exp(-beta * r_p^2) * (Pbest_xy - Q_Gmxy^T) + m_2 * exp(-beta * r_g^2) * (Gbest_y - Q_Gmxy^T),
where m_1 and m_2 are positive attraction constants that scale the contributions of the cognitive and social components, vel_xy^T is mayfly x's velocity in dimension y = 1, ..., d at time step T, and Q_Gmxy^T is the mayfly's position in dimension y at time step T. Likewise, r_g stands for the Cartesian distance between Q_Gx and Gbest, whereas r_p represents the distance between Q_Gx and Pbest_x, the best location mayfly x has visited so far. The personal optimal position Pbest_xy at the following time step T + 1 is updated according to the minimization problem under consideration.
The Gbest position at time step T is simply the best of all personal-best positions in the swarm, Gbest = argmin{F(Pbest_1), ..., F(Pbest_IG)}. The Cartesian distance is computed as the Euclidean norm of the difference of the two vectors, r = ||Q_Gx - X|| = sqrt(SUM_y (Q_Gxy - X_y)^2). For the algorithm to work as intended, the finest mayflies in the swarm must repeatedly execute their up-and-down nuptial dance; the velocity of these best mayflies is therefore continuously adjusted as
vel_xy^(T+1) = g * vel_xy^T + d * b,
where d stands for the nuptial dance coefficient and b is a random value in [-1, 1].
Stage 4: The population of female mayflies Q_Gfx (x = 1, 2, ..., IG) is initialized with velocities vel_fx. Female mayflies do not swarm as males do; instead, a female usually flies toward a male peer to mate. Taking Q_Gfx^T as the location of female mayfly x in the search space at time step T, its position changes by adding the velocity vel_x^(T+1) to the current position. The initialization Q_Gxf^0 ~ U(Q_Gfmin, Q_Gfmax) does not randomize the attraction process itself; attraction is agreed to be a deterministic process. For the minimization problems considered, the female velocities are calculated as
vel_fxy^(T+1) = g * vel_fxy^T + m_2 * exp(-beta * r_mf^2) * (Q_Gmxy^T - Q_Gfxy^T) if the paired male has the better objective value, and vel_fxy^(T+1) = g * vel_fxy^T + Fl * b otherwise,
where b stands for a random value in the interval [-1, 1], Fl stands for the random walk coefficient, and r_mf, the Cartesian distance between the male and female mayflies, is obtained from Eq. (15).
Stage 5: The velocity of a mayfly candidate solution is refined in this stage using the Levy flight method, and the location of the global best component is updated accordingly. Although Levy flight is usually applied for exploration, here it is connected to a focused search. The step length is determined using
levy(k) = 0.01 * (r_5 * sigma) / |r_6|^(1/lambda),
where levy(k) specifies the step length, r_5 and r_6 are random numbers, sigma incorporates the variance of the Levy distribution, and the Levy exponent satisfies 1 < lambda < 3. Stage 6: The gravitational coefficient g can be treated as a constant in (0, 1], or reduced gradually as
g = g_max - ((g_max - g_min) / iteration_max) * iteration,
where iteration represents the algorithm's current iteration, iteration_max denotes the maximum number of iterations, and g_max, g_min signify the maximum and minimum values that may be considered for the gravity coefficient, respectively. Stage 7: Mayflies mate and the offspring are inspected. The crossover operator models the mating behavior of the mayflies: each parent is chosen from the male and female populations using the same selection method, namely the attractiveness of females to males. Parents may be chosen based on the fitness function or at random; under fitness-based selection, the best female mates with the best male, the second-best female with the second-best male, and so on. The crossover produces two offspring,
offspring1 = L * male + (1 - L) * female, offspring2 = L * female + (1 - L) * male,
where male and female stand for the two parents and L is a random value falling inside a certain range. The offspring's starting velocity is set at zero. Finally, this step yields a new subset of candidate solutions with additional informative elements. Figure 2 depicts the flow chart of the LFMO-based feature selection algorithm; a condensed code sketch of the core updates follows.
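The two updates at the heart of LFMO, the male-mayfly velocity rule and the Levy-flight step, can be sketched as below; the hyper-parameter values (g, m1, m2, beta, and the Levy exponent lam) are assumptions for illustration, not values taken from this article.

import math
import numpy as np

def male_velocity(vel, pos, pbest, gbest, g=0.8, m1=1.0, m2=1.5, beta=2.0):
    # gravity-damped velocity plus cognitive (Pbest) and social (Gbest) pulls
    rp = np.linalg.norm(pos - pbest)
    rg = np.linalg.norm(pos - gbest)
    return (g * vel
            + m1 * np.exp(-beta * rp ** 2) * (pbest - pos)
            + m2 * np.exp(-beta * rg ** 2) * (gbest - pos))

def levy_step(dim, lam=1.5):
    # Mantegna-style Levy flight step, scaled by 0.01 as in the text
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return 0.01 * u / np.abs(v) ** (1 / lam)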
Sentiment classification
The selected features from the feature selection process are fed into the suggested ILW-LSTM classifier to classify sentiment. LSTM is a special variety of RNN created to address the vanishing and exploding gradient problems that recurrent neural networks encounter. Like other RNN types, LSTMs produce their output from the data of the current time step and the output of the previous time step, then transmit the current output to the subsequent time step. Each LSTM unit consists of a memory cell that can preserve its state for any length of time, together with three non-linear gates: an input gate in_t, a forget gate fg_t, and an output gate out_t. Information entering and leaving the memory cell is managed by these gates. Let tanh(.) denote the hyperbolic tangent function, (*) the element-wise product, and sigma(.) the element-wise sigmoid function; at time t, x_t and hid_t represent the input and hidden state vectors, gate weight matrices are displayed by X and Y, and bias vectors are denoted by bias. By producing a number in the range [0, 1], the forget gate determines what data has to be forgotten:
fg_t = sigma(X_fg * x_t + Y_fg * hid_(t-1) + bias_fg).
(The ILW search loop summarized in the accompanying algorithm figure proceeds as follows: evaluate each search agent's fitness value; record the best search agent D*; then, for each search agent, update the values of b, P, Q, r, and l, repeating until the maximum number of iterations iter_max is reached.) The following equations are used by the input gate to compute in_t and c~_t, combine them, and decide what additional information should be stored:
in_t = sigma(X_in * x_t + Y_in * hid_(t-1) + bias_in),
c~_t = tanh(X_c * x_t + Y_c * hid_(t-1) + bias_c),
c_t = fg_t (*) c_(t-1) + in_t (*) c~_t.
The following equations determine which components of the cell state are output by the output gate:
out_t = sigma(X_out * x_t + Y_out * hid_(t-1) + bias_out),
hid_t = out_t (*) tanh(c_t).
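A minimal Keras version of such a five-class LSTM classifier is sketched below; the vocabulary size, embedding width, and unit count are illustrative assumptions, and the ILW weight search described next is not shown.

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),  # learned word vectors
    layers.LSTM(128),                                   # gated memory cell
    layers.Dense(5, activation="softmax"),              # very neg ... very pos
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])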
The optimization of the LSTM using ILW is described as follows.
Optimization of LSTM based on ILW
The foundation of WOA is the humpback whales' hunting method. The humpback whale, the biggest member of the baleen whale family, uses a rare and unique hunting technique: the whales recognize the victim and encircle it. Equations (24) and (25) express the encircling movement,
dist = |Q (*) D*(i) - vec(i)|, vec(i + 1) = D*(i) - P (*) dist,
where P and Q are coefficient vectors, (*) designates element-by-element multiplication, vec(i) is the current position, and D* is the position of the best outcome attained so far. The coefficient vectors P and Q are calculated via
P = 2b * randv - b, Q = 2 * randv,
where b is decreased linearly from 2 to 0 across the iterations and randv is a random vector in [0, 1]. The humpback whale bubble-net approach can be expressed mathematically in two ways; the shrinking encircling mechanism lowers the value of b in Eq. (26). The spiral updating position strategy first evaluates the separation between the prey's position at (i*, j*) and the whale's position at (i, j); the spiral equation connecting the position of the whale and its prey, replicating the helix-shaped movement of humpback whales, is then calculated as shown in Eq. (28),
vec(i + 1) = dist' * exp(b_s * l) * cos(2 * pi * l) + D*(i),
where dist' is the distance to the prey, b_s defines the shape of the logarithmic spiral, and l is a random value in [-1, 1]. Equation (29) updates the solution using either the shrinking encircling technique or the spiral method, selected by a random value r in [0, 1]. In the exploration phase, the coefficient vector P takes random values in the range [-1, 1] to drive search agents away from the reference whale, and the position is updated with respect to a randomly chosen position vector D_random selected among the available solutions rather than the best one. The solution representation of the proposed algorithm is shown in Fig. 3. Initially, the features within the chromosome are chosen; the chromosome length is 10, corresponding to the ten aspects depicted in Fig. 3. Within the chromosome, each feature is assigned a binary value: one when selected and 0 otherwise. Original chromosomes are then chosen at random to form the population, after which several search agents are selected for mating to produce offspring chromosomes based on the fitness value associated with each solution (i.e., chromosome). A fitness function is employed to calculate the fitness value; the lower the fitness value, the better the solution. The search agents are then selected from the current population based on the fitness function. The essence of the ILW-LSTM lies in the hypothesis that mating the two best solutions can generate an optimal solution.
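One WOA position update, combining the shrinking-encircling and spiral branches described above, can be sketched as follows; spiral_c (the logarithmic-spiral shape constant) is an assumed value.

import numpy as np

def woa_update(pos, best, b, spiral_c=1.0):
    # pos: current agent position; best: best agent D*; b decreases from 2 to 0
    r = np.random.rand()
    if r < 0.5:                                  # shrinking encircling branch
        P = 2 * b * np.random.rand(*pos.shape) - b
        Q = 2 * np.random.rand(*pos.shape)
        dist = np.abs(Q * best - pos)
        return best - P * dist
    l = np.random.uniform(-1, 1)                 # spiral updating branch
    dist = np.abs(best - pos)
    return dist * np.exp(spiral_c * l) * np.cos(2 * np.pi * l) + best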
Fitness function
The major contribution of this work is to improve classification accuracy within a limited amount of computation time. The fitness of each solution is evaluated after initialization and saved for further use. The proposed ILW modifications are designed to reduce the error between the predicted and actual LSTM output. The advantage of traditional WOA in evaluating unimodal functions is its improved coordination of exploitation; it also excels at exploring multimodal functions and improves convergence speed over the iterations. Alongside these strengths, WOA has certain disadvantages: it cannot handle all optimization problems, and its search for the global optimum may converge slowly. The planned ILW is used to address these issues. In traditional WOA, the variable r is drawn at random from [0, 1]; to further increase performance, the r value here depends on a formulation involving two criteria.
Here, f(i - 1) and f(i) denote the fitness values of the solution in the previous and current iterations, respectively, and max(f(i)) denotes the maximum of all fitness values. If the fitness value of the current solution is greater than that of the prior solution, the r value for that solution is computed from Eq. (32); otherwise, the value of r is computed from Eq. (33).
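Since the concrete forms of Eqs. (32) and (33) are not recoverable from the text above, the sketch below only illustrates the idea of fitness-dependent r selection; both branch expressions are hypothetical placeholders, not the authors' equations.

import numpy as np

def adaptive_r(f_prev, f_curr, f_max):
    # hypothetical ILW-style rule: bias r by fitness progress instead of
    # drawing it uniformly from [0, 1] as in classical WOA
    if f_curr > f_prev:              # improvement: Eq. (32) branch (placeholder)
        return min(1.0, f_curr / f_max)
    return np.random.rand()          # no improvement: Eq. (33) branch (placeholder)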
EXPERIMENT
The experiment was implemented to evaluate the ILW-LSTM model for feature-level sentiment analysis on the PromptCloud dataset. The experimental design, data pre-processing, hyper-parameter settings, and performance measures are all described in this section.
Dataset description
PromptCloud generated this dataset, which has been combined into a usable form. It includes phone reviews collected from Amazon. The majority of reviewers awarded unlocked mobile phones four-star and three-star ratings. To identify patterns in reviews, ratings, and price, as well as the relationships between them, PromptCloud examined 400,000 customer reviews of unlocked mobiles available on Amazon.com. The fields are listed below.
Product title
Brand
Price
Rating
Review text
Number of consumers who found the review helpful
The data was obtained in December 2016 by the crawlers used to provide our data extraction services. The reviews are around 230 characters long on average. The analysis also revealed that longer reviews are frequently rated as more helpful and that there is a positive relationship between price and rating.
Hyper-parameters setting
The accuracy was optimized and the hyper-parameters were tuned using the improvised local search whale optimization approach. The hyper-parameter settings of the suggested approach are shown in Table 1.
Experimental setup
A variety of tools and libraries are available for building deep learning models; Keras was the preferred tool here, with TensorFlow as its backend since it is GPU-compatible. The deep learning experiments were executed in Python on a computer with 12 GB of RAM and an Intel Core i3-6100 CPU running at 3.70 GHz. The model was built and trained using Python, and the system's performance is demonstrated by comparing its evaluation metrics with those of other existing systems.
Evaluation metrics
To demonstrate the accuracy of the methodology, the classification results of sentiment classification are compared with confusion matrix measurements, as in similar studies. The confusion matrix is employed to determine the values of accuracy, precision, recall, and F1-score. In two-class categorization problems, one class must be designated as positive and the other as negative; the test set then consists of both positive and negative samples. The goal of every classifier is to assign a class to each sample, although certain classifications might not be correct. The suggested ILW-LSTM model is put to the test using the following standard performance measures, with Tp, Tn, Fp, and Fn denoting true positives, true negatives, false positives, and false negatives.
Accuracy: the percentage of all correctly classified instances in the overall count of instances,
Accuracy = (Tp + Tn) / (Tp + Tn + Fp + Fn).
Precision: the proportion of correctly classified positive instances to the total number of instances predicted as positive,
Precision = Tp / (Tp + Fp).
Recall: the fraction of correctly categorized positive instances from the total count of positive instances,
Recall = Tp / (Tp + Fn).
F1-score: the harmonic mean of precision and recall,
F1-score = 2 x (Precision x Recall) / (Precision + Recall).
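The same four measures are available directly in scikit-learn; the label arrays below are illustrative five-class examples (0 = very negative ... 4 = very positive), not results from the paper.

from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [0, 1, 2, 3, 4, 4, 2, 1]
y_pred = [0, 1, 2, 3, 4, 3, 2, 1]
print(accuracy_score(y_true, y_pred))                     # 0.875
print(precision_score(y_true, y_pred, average="macro"))   # macro-averaged
print(recall_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="macro"))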
Experimental results
Accuracy, recall, precision, and F1-score are calculated from the confusion matrix given in Fig. 4. We noticed during testing that the classification results of the suggested hybrid models can differ between runs; in other words, an identical input may be categorized as negative by one of the suggested models. The accuracy of the training and validation datasets over the training epochs is shown in Fig. 5, with epoch values on the x-axis and accuracy values on the y-axis; the accuracy of the proposed model was found to be 97 percent. To assess the false positive and true positive rates of the method, the receiver operating characteristic (ROC) curves are drawn in Fig. 6.
The ROC curve for the suggested method is displayed in Fig. 6. The ROC curves demonstrate how well the model performs at various thresholds between 0 and 1. Based on the ROC of the ILW-LSTM approach, the suggested technique shows good effectiveness for feature-level sentiment analysis. Class-based categorization metrics for the suggested models are shown in Table 2. Aside from accuracy, measures such as F1-score, recall, and precision were also considered for evaluating the model, because the dataset used in the research is imbalanced. The suggested models performed best in terms of class-based F1-score, precision, and recall for the negative class. Figure 7 illustrates the precision, F1-score, and recall values for the classification outcome.
Comparative analysis
To demonstrate the efficacy of the proposed model, the findings of the suggested work are compared with those of several current methodologies. These findings clearly prove the value of the ILW-LSTM technique and encourage further research on sentiment categorization systems based on deep learning algorithms. Consequently, the results of the suggested model are compared with those of existing methods in this section; a comparison of the different methods with the suggested methodology is shown in Table 3 for clarity. Accuracy, precision, recall, and F1-score are the factors used in the comparative analysis. The suggested methodology attains 97.61% accuracy, 97.24% precision, 99.35% recall, and 98.27% F1-score, compared with CNN, LSTM, Bi-LSTM, CNN-LSTM, and ConvBi-LSTM. On the PromptCloud dataset, the ILW-LSTM model performed far better than the other deep learning models. Figure 8 shows a comparison of the accuracy, precision, F1-score, and recall of the several methodologies.
CONCLUSION
Multi-class sentiment analysis has consistently been a complex problem that has attracted academic interest owing to its broad range of applications. In this study, ILW-LSTM is proposed to determine the polarity of consumer reviews, with experiments on the PromptCloud dataset. The suggested model begins with data pre-processing to transform raw data into an understandable format. The pre-processed data then undergo feature extraction using the log term frequency-based modified inverse class frequency algorithm. The mayfly method was chosen for feature selection because of its excellent exploration capacity; by leveraging Levy flight as a reliable hybrid strategy, it also attains greater exploitation capacity. Finally, the selected features are passed to the ILW-LSTM, which categorizes the sentiment of customer reviews as 'positive', 'negative', 'very positive', 'very negative', or 'neutral'. On evaluation metrics such as precision, recall, F1-score, and accuracy, the proposed ILW-LSTM approach is compared with the existing CNN, LSTM, CNN-LSTM, Bi-LSTM, and ConvBi-LSTM techniques. The outcome shows that, compared with existing sentiment classifiers, the ILW-LSTM achieves the highest performance on the dataset, reaching a classification accuracy of about 97%, which is more effective than existing techniques.
Regarding future scope, the multiclass sentiment classification of the present study can still be extended. The proposed method could be strengthened with a novel hybrid optimization technique; additional feature selection techniques and the incorporation of a dataset containing emojis may also be considered to further improve the classification performance.
ADDITIONAL INFORMATION AND DECLARATIONS Funding
The authors state that this work has not received any funding. | 8,824.2 | 2023-04-24T00:00:00.000 | [
"Computer Science"
] |
CHROMOSOMAL POLYMORPHISM IN 12 POPULATIONS OF Mikania micrantha (COMPOSITAE)
Mikania micrantha is a climbing perennial weed of the family Asteraceae, with a vast distribution from South America to the southern United States. This species is widely distributed throughout Brazil, where it shows little morphological variation. Mitotic chromosomes of 12 populations of M. micrantha derived from several Brazilian sites were studied using Feulgen staining and C-banding. The populations included eight diploid (2n = 36 and 42) and four tetraploid (2n = 72) cytotypes. Chromosome numbers of 2n = 36 and 2n = 42 are reported for the first time for M. micrantha. These populations had a secondary constriction in the middle of the larger arm of chromosome pair 1, following the same pattern described for all Mikania species analyzed so far. Numerical and structural variation of the chromosomes was quite common among the karyotypes, and nearly all cytotypes differed from each other in some aspect. Most of the chromosomal differentiation may be attributed to inversions and the addition or deletion of DNA fragments. C-banding, applied to three of the 12 populations, also revealed polymorphism in the distribution of heterochromatin. Additionally, one to 14 supernumerary or B-chromosomes were observed. The Bs were detected in six of the 12 populations and varied in size, number, and structure among karyotypes and also among cells of the same root meristem. The B chromosomes were also heterochromatic, showing a C-banding pattern similar to the A chromosomes and suggesting that they may be derived from chromosomes of the A complement. Departamento de Biologia Geral, CCB, Universidade Estadual de Londrina, 86051-990 Londrina, PR, Brasil. Send correspondence to P.M.R. Departamento de Biologia da Universidade Estadual Paulista, Assis, SP, Brasil. Departamento de Botânica, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brasil.
INTRODUCTION
Mikania micrantha H.B.K. is a climbing perennial weed of the family Asteraceae distributed throughout tropical and subtropical regions of the American continent. It is a pioneer species, frequently found in disturbed environments or in changing communities where the original vegetation has been destroyed for crop introduction. It holds an advantage over many other pioneer species because of its vigorous vegetative and sexual reproduction (Swamy and Ramakrishnan, 1987). M. micrantha is of great ecological importance due to its participation in the recolonization of degraded or newly opened habitats. It is widespread throughout many Brazilian regions, where it grows in environments such as forest borders, roadsides, along fences, and in newly changed habitats.
Numerous cytotypes have been reported in M. micrantha, including n = 19 in a population from Colombia and n = 19 and 20 in a West Indian population (Powell and King, 1969a,b). Chromosome numbers of n = 17 and n = 19 were reported for populations from Ecuador (King et al., 1976) and Argentina (Turner et al., 1979 and Waisman et al., 1984). Nineteen bivalents were found in one population of M. micrantha from Mexico (Strother, 1983) and another from Jamaica (Keil et al., 1988). The karyotype of a polyploid population (2n = 72) from Londrina, Brazil, was analyzed by Ruas and Ruas (1987). The present study examines the karyotypes of 12 populations of M. micrantha collected from different Brazilian sites to help provide a better understanding of chromosome variation in this species.
MATERIAL AND METHODS
The specimens of M. micrantha were obtained from 12 sites in Brazil (Table I) and cultivated in a greenhouse. At least five samples were collected from each population. Root tips were collected from potted plants, pretreated with 0.002 M 8-hydroxyquinoline for 4 h at 8°C, fixed in 3:1 ethanol-glacial acetic acid overnight, transferred to 70% alcohol, and stored in a refrigerator until used. The conventional Feulgen method with modifications described by Nogueira et al. (1995) was used for chromosome preparations. C-banding was obtained with the method of Schwarzacher et al. (1980).
The morphological chromosome data used in this study included: 1) absolute length of individual chromosomes and haploid chromosome length, both measured in µm; 2) relative length of each chromosome; 3) arm ratio for each chromosome (long arm/short arm).
The chromosomes were classified according to the nomenclature of Levan et al. (1964). Diploid numbers were determined by counting the chromosomes in at least 10 metaphases of every sample of each population. Supernumerary or B chromosomes were analyzed in 25 cells of each population. The average chromosome length for each chromosome pair was obtained from measurements of five well-spread metaphases and was used for the analysis and for the construction of the idiograms.
Karyotype asymmetry was determined using the total form percent (TF%) index according to Huziwara (1962). The asymmetry was also analyzed by plotting the values generated by the Zarco index (Zarco, 1986). The TF%, the size of the largest and smallest chromosomes, the haploid chromosome length and relative length, and the arm ratio values were compared by one-way analysis of variance and Tukey's test (Steel and Torrie, 1960).
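Huziwara's TF% is simply the pooled short-arm length expressed as a percentage of total complement length, so it is straightforward to compute from arm measurements; the Python sketch below uses made-up arm lengths, not values from Tables II or III.

def tf_percent(arms):
    # arms: (short_arm, long_arm) pairs, in micrometres, for the haploid set
    short = sum(s for s, _ in arms)
    total = sum(s + l for s, l in arms)
    return 100.0 * short / total

print(tf_percent([(0.8, 1.4), (0.7, 1.1), (0.6, 0.9)]))  # ~38.2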
Karyotype characterization
The cytotypes of M. micrantha derived from 12 collection sites were studied (Table I). Their karyotypes and respective idiograms are in Figures 1-4. Eight cytotypes were diploid, seven with 2n = 2x = 36 and one with 2n = 2x = 42 chromosomes; four were tetraploid with 2n = 4x = 72. Numbers of 2n = 36 and 2n = 42 are documented here for the first time. Comparisons among the diploid cytotypes (2n = 36) showed that, whereas the karyotypes followed similar patterns, minor structural differences were detected in almost all corresponding chromosomes (Table II; Figures 1 and 3). Pair 1 was the most inconstant, showing variation in both size and structure. The largest chromosomes were in cytotypes from Estrela do Norte and Praia de Ipanema, with haploid sets of 32.5 ± 1.62 µm and 31.3 ± 0.36 µm, respectively (Table II; Figure 1E,F). The population from Praia Grande, on the other hand, exhibited the smallest chromosomes and a haploid set of 19.9 ± 1.55 µm (Figures 1C and 3C). The differences in chromosome sizes encompass almost all chromosome pairs, reflecting a gain of genetic material that may have derived through mechanisms of DNA amplification. Two other diploid cytotypes (populations from Alfredo Guedes and Campinas) with 2n = 36 showed the same karyotype formula and similar haploid sets (Figures 1D,G and 3D,G; Tables I, II), but they differed slightly in the arm-ratio values, which were probably modified by inversions. Similar conclusions can be reached in the analysis of the four tetraploid cytotypes with 2n = 4x = 72 chromosomes (Tables III, IV; Figures 2 and 3). In this group, only the population from Apucarana (PR) differed in the value of the haploid chromosome length (Table III). Most of the variation among the tetraploids was represented by centromeric shifts (Table V), even though the general karyotype pattern described for the genus (Ruas and Ruas, 1987; Ruas and Aguiar-Perecin, 1997) was maintained.
The longest chromosome in the karyotypes of all populations of M. micrantha studied had a secondary constriction located in the middle of the long arm (Figures 1-3). This is a conservative pattern which has been described in many Mikania species. Ruas and Ruas (1987) studied six species of Mikania and suggested that the secondary constriction might be considered a cytological marker for the genus, which could help in the identification of the species. Those results were fully supported by the work of Ruas and Aguiar-Perecin (1997), in which the same karyotype pattern was revealed in 10 other Mikania species.
Karyotype asymmetry was also determined for all cytotypes. The TF% and Zarco index (Tables II, III; Figure 4) showed that the population from Campinas had the karyotype with the highest degree of symmetry, whereas the karyotypes of the populations from Praia de Ipanema, Joinville, and Petrópolis were the most asymmetrical. The highest value of interchromosomal asymmetry (Tables II, III) was found in the population from Estrela do Norte. This value reflects the huge size of chromosome 1, which may have resulted from duplication of parts of this chromosome pair. Less significant differences in the Zarco index were also present in the other populations. The tetraploid cytotypes had similar values of TF%; however, the Zarco index permitted the detection of slight differences among them. Chromosome variation among populations of the same species has been observed in many plant groups. In the genus Serjania (Nogueira et al., 1995), two populations of S. laruotteana, two of S. fuscifolia, and three of S. gracilis with the same chromosome number (2n = 24) had different chromosome rearrangements, as observed in M. micrantha (Tables II, III; Figures 1-3). Whereas the presence of chromosome races in many groups certainly involves biological adaptation to different habitats, nearly every population of M. micrantha studied showed differences in the karyotypes (Tables IV and V). These differences do not seem to be related to adaptive variables, since populations occupying similar environments bear distinct karyotypes. Similarly, karyotype variation was not accompanied by modification in plant morphology. For example, the polyploid cytotype from Apucarana had the largest leaves (14.0 cm in length, 7.0 cm in width), while the polyploid from Salto Apucaraninha had small leaves (5.0 cm long and 3.5 cm wide). The diploid population from Campinas (2n = 36) had leaves 10.0 cm long and 7.0 cm wide. These data suggest no correlation between leaf size and ploidy level. Inflorescence types were also constant among all cytotypes, with only small variation in size.
C-banding analysis
C-banding was applied to three diploid cytotypes of M. micrantha, including the populations from Piracicaba, Campinas (Figure 5A,B), and Praia Grande (data not shown). C-band analysis revealed a variable pattern in the amount and distribution of heterochromatin in these cytotypes. The cytotype from Piracicaba showed a large heterochromatic block near the secondary constriction of chromosome 1. Three other chromosome pairs had small centromeric bands (Figure 5A). Similarly, another cytotype (Campinas) exhibited a block of heterochromatin located near the secondary constriction of the large arm of chromosome 1, and several other chromosomes showed centromeric C-bands (Figure 5B). The cytotype from Praia Grande (data not shown), on the other hand, had only three pairs of very small centromeric bands and a total absence of heterochromatin in chromosome 1. Variation in heterochromatic blocks has been observed in several groups of plants, such as in populations of Trillium kamtaschaticum (Kurabayashi, 1957), Tulbaghia leucantha (Vosa, 1973), and Gibasis karwinskyana (Kenton, 1991). The differences in the C-band patterns observed among the three cytotypes of M. micrantha may be associated with the small detectable variation in the size of the haploid set (Table II). Therefore, at least for the diploid cytotypes, the small differences in haploid chromosome length may reside in unique and repetitive sequences of DNA.
Besides the variation in number from cell to cell and among cytotypes, the B-chromosomes of M. micrantha diverged in size (from micro-size to about 0.8 µm) and morphology (Figure 6). According to Jones (1995), at least 65 plant species have two or more forms of B chromosomes. Metacentric (m) and telocentric (t) B chromosomes were observed in Aegilops mutica (Mochizuki, 1960), and large and micro-sized B chromosomes were verified in Brachycome dichromosomatica (Smith-White and Carter, 1970). Loidl (1982) showed that the B chromosomes of Allium flavum can be distinguished by their overall size, arm ratio, or Giemsa C-banding. In M. micrantha, the six cytotypes with Bs show three morphological types, namely m, submetacentric (sm), and subtelocentric (st), which also vary from micro-sized in some cells to larger telocentrics in others (Figure 6A-D). A variable number of very small m-type Bs were found in many cells of the cytotype from Piracicaba. The m-type Bs may explain the origin of micro-Bs by centromere misdivision of a single unpaired B, giving rise to two different-sized chromosomes and further derivatives by deletion of parts of the arms. The sm-type B-chromosomes were predominant in the other populations. Jones and Rees (1982) suggested that the variation in the frequency of Bs among populations may be of adaptive value in a stress situation. The distribution of Bs did not follow any specific pattern in the cytotypes of M. micrantha. Their frequency varied among cells of individual plants as well as among populations occupying different or similar environments, such as sea shores (Praia de Ipanema and Praia Grande) and high altitudes (Petrópolis). Therefore, the presence of Bs could not be associated with any adaptive requirement in M. micrantha.
C-banding (Figure 5A,B) in the diploid cytotypes from Piracicaba, Campinas, and Praia Grande (data not shown) showed almost totally heterochromatic B chromosomes. No detectable differences were observed in the C-band pattern between A and B chromosomes, suggesting that homologous repetitive sequences may be present in both A and B chromosomes of M. micrantha.
Table V - Tukey test for arm ratio and relative length in the chromosome complement of four tetraploid populations of Mikania micrantha with 2n = 72. a Means within each column followed by different lowercase letters are significantly different at the 5% level by the Tukey test.
Table I - Sites of origin, collection number, chromosome number, and karyotypic formula of 12 populations of Mikania micrantha.
Table III - Chromosome length (µm), haploid chromosome length (HCL), ratio of longest/shortest chromosomes (L/S), and karyotype symmetry using total form (TF%) and Zarco index of four tetraploid cytotypes of Mikania micrantha with 2n = 72 chromosomes. a Means within each column followed by different lowercase letters are significantly different at the 5% level by the Tukey test.
Table II - Chromosome length (µm), haploid chromosome length (HCL), ratio of longest/shortest chromosomes (L/S), and karyotype symmetry using total form (TF%) and Zarco index of cytotypes of Mikania micrantha, seven with 2n = 36 and one with 2n = 42. a Means within each column followed by different lowercase letters are significantly different at the 5% level by the Tukey test.
Table IV - Tukey test for arm ratio values and relative length of the karyotypes of seven populations of Mikania micrantha with 2n = 36 and one with 2n = 42 chromosomes. a Means within each column followed by different lowercase letters are significantly different at the 5% level by the Tukey test. | 3,208.8 | 1999-09-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Single-crystalline nanoporous Nb2O5 nanotubes
Single-crystalline nanoporous Nb2O5 nanotubes were fabricated by a two-step solution route: the growth of uniform single-crystalline Nb2O5 nanorods and the subsequent ion-assisted selective dissolution along the [001] direction. The Nb2O5 tubular structure was created by preferentially etching the (001) crystallographic planes and has a nearly homogeneous diameter and length. Dense nanopores with diameters of several nanometers were created on the shells of the Nb2O5 tubular structures, which also retain the crystallographic orientation of the Nb2O5 precursor nanorods. The present chemical etching strategy is versatile and can be extended to different-sized nanorod precursors. Furthermore, these as-obtained nanorod precursors and nanotube products can also be used as templates for the fabrication of 1D nanostructured niobates, such as LiNbO3, NaNbO3, and KNbO3.
Introduction
Nanomaterials, which have received wide recognition for their size- and shape-dependent properties, as well as for practical applications that might complement their bulk counterparts, have been extensively investigated since the last century [1-8]. Among them, one-dimensional (1D) tubular nanostructures with hollow interiors have attracted tremendous research interest since the discovery of carbon nanotubes [1, 9-14]. Most of the available single-crystalline nanotubes structurally possess layered architectures; nanotubes with a non-layered structure have mostly been fabricated by employing porous membrane films, such as porous anodized alumina, as templates, and are either amorphous, polycrystalline, or obtainable only in ultrahigh vacuum [13,14]. The fabrication of single-crystalline semiconductor nanotubes is advantageous for many potential nanoscale electronics, optoelectronics, and biochemical-sensing applications [1]. In particular, microscopically endowing these single-crystalline nanotubes with a nanoporous feature can further broaden their practical applications in catalysis, bioengineering, environmental protection, sensors, and related areas due to their intrinsic pores and high surface-to-volume ratio. However, it remains a long-term challenge to develop simple and low-cost synthetic technologies to fabricate 1D nanotubes as functional elements of future devices.
Recently, the authors rationally designed a general thermal oxidation strategy to synthesize polycrystalline porous metal oxide hollow architectures, including 1D nanotubes [15]. In this article, a solution-etching route for the fabrication of single-crystalline nanoporous Nb2O5 nanotubes is presented, with NH4F as the etching reagent; the nanotubes are easily transformed from Nb2O5 nanorod precursors.
As a typical n-type wide-bandgap semiconductor (Eg = 3.4 eV), Nb2O5 is the most thermodynamically stable phase among the various niobium oxides [16]. Nb2O5 has attracted great research interest due to its remarkable applications in gas sensors, catalysis, optical devices, and Li-ion batteries [9-11, 16-21]. Although monoclinic Nb2O5 nanotube arrays were successfully synthesized through a phase-transformation strategy accompanied by void formation [10], these exist only as non-porous polycrystalline nanotubes. In this study, a new chemical etching route for the synthesis of single-crystalline nanoporous Nb2O5 nanotubes, exploiting the preferential growth habit of Nb2O5 nanorods along [001], is reported. The current chemical etching route can be applied to the fabrication of porous and tubular features in single-crystalline oxide materials.
Experimental section
Materials synthesis
Nb2O5 nanorod precursors
Nb2O5 nanorods were prepared via a hydrothermal technique in a Teflon-lined stainless steel autoclave. In a typical synthesis of 1D Nb2O5 nanorods, freshly prepared niobic acid (the detailed synthesis of niobic acid from Nb2O5 has been described in the authors' previous studies [22-25]) was added to a mixture of ethanol and deionized water. Subsequently, the white suspension was transferred into a Teflon-lined stainless steel autoclave. The autoclave was maintained at 120-200°C for 12-24 h without shaking or stirring during the heating period and then naturally cooled to room temperature. A white precipitate was collected and washed with deionized water and ethanol. The nanorod precursors were dried at 60°C in air.
Single-crystalline nanoporous Nb2O5 nanotubes
In a typical transformation, 0.06-0.20 g of the obtained Nb2O5 nanorods was added to 20-40 ml of deionized water at room temperature, and 2-8 mmol NH4F was then added while stirring. Afterward, the mixture was transferred into a Teflon-lined stainless steel autoclave and kept in an electric oven at 120-180°C for 12-24 h. Finally, the resulting Nb2O5 nanotubes were collected, washed with deionized water and ethanol, and dried at 60°C in air.
Materials characterization
The collected products were characterized by X-ray diffraction (XRD) on a Rigaku DMax 2400 diffractometer equipped with graphite-monochromatized Cu Kα radiation at a scanning rate of 0.02° s-1. Scanning electron microscopy (SEM) analysis was carried out using a JEOL-5600LV scanning electron microscope, during which energy-dispersive X-ray spectroscopy (EDS) microanalysis of the samples was performed. The structures of the nanorod precursors and nanotube products were investigated by transmission electron microscopy (TEM, Philips Tecnai G2 20). UV-Vis absorption spectra were recorded on a UV-Vis-NIR spectrophotometer (JASCO V-570). The photoluminescence (PL) spectrum was measured at room temperature using a Xe lamp with a wavelength of 325 nm as the excitation source.
Results and discussion
The typical XRD pattern of the Nb2O5 nanorod precursors obtained from the ethanol-water system, shown in Figure 1, exhibits diffraction peaks corresponding to orthorhombic Nb2O5 with lattice constants of a = 3.607 Å and c = 3.925 Å (JCPDS no. 30-0873). No diffraction peaks arising from impurities such as NbO2 were detected, indicating the high purity of these precursor nanorods. The morphology of these precursor products was observed by means of SEM and TEM, which revealed nanorods with diameters of 300-600 nm and lengths of 2-4 μm. The bottom inset of Figure 2b shows a typical TEM image of a single solid Nb2O5 nanorod with a diameter of ~300 nm and a length of approximately 2 μm, in agreement with the SEM observations. The HRTEM image (top inset of Figure 2b) taken from the squared area exhibits clear lattice fringes, indicating that the nanorod is highly crystallized. The spacing of 0.39 nm corresponds to the (001) planes of Nb2O5, showing that these precursor nanorods grow along the [001] direction.
After the hydrothermal process, along with an interface reaction, Nb2O5 nanotubes were obtained by F--assisted etching treatment. The XRD pattern shown in Figure 3a reveals a pure phase, and all diffraction peaks are consistent with those of the nanorod precursors and the reported XRD profile of orthorhombic Nb2O5 (JCPDS no. 30-0873). EDS analysis was used to determine the chemical composition of an individual nanotube. The result shows that these nanotube products contain only Nb and O, with an atomic ratio of about 2:5, in agreement with the stoichiometry of Nb2O5. The EDS results clearly confirm that F was not doped into these nanotubes (Figure 3b).
The morphology and structure of the final nanoporous nanotubes were first evaluated by SEM observation. The representative SEM image in Figure 4a implies that the finally formed nanotubes closely resemble the shape and size of the Nb2O5 nanorod precursors. Detailed structural information is provided by the high-magnification image in Figure 4b, which shows typical nanotubes with thin walls. To accurately reveal the microstructure of these nanotubes, TEM observation was performed. Figure 5a shows a typical TEM image of these special Nb2O5 nanostructures: the nanotubes have a hollow cavity and two closed tips. A magnified TEM image of some Nb2O5 nanotubes is presented in Figure 5b; the nanotube surface is highly nanoporous and coarse, composed of dense nanopores. The SAED pattern obtained by TEM shows that they are single-crystalline, as seen in the typical pattern in the inset of Figure 5b. The nanoporous character of these single-crystalline nanotubes was further verified by a higher-magnification TEM image (Figure 5c). The single-crystalline nature of the nanotubes is further indicated by the Nb2O5 lattice, clearly seen in the HRTEM image of the surface of a nanoporous nanotube. Though direct observation by TEM is difficult, since the observed image is a two-dimensional projection of the nanotubes, Figure 5d shows dense nanopores around which the Nb2O5 lattice is continuous. The diameter of the nanopores appears to be 2-4 nm, and the growth direction of these nanoporous nanotubes is [001], the same as the nanorod precursors. During the hydrothermal treatment of the Nb2O5 nanorod precursors, the formation of single-crystalline nanoporous nanotubes can be ascribed to preferential etching of the single-crystalline nanorods. In hydrothermal aqueous NH4F solution, HF is formed by the hydrolysis of NH4+ and further reacts with Nb2O5 to form soluble niobic acid. The etching of the nanorods preferentially begins at the central site of the nanorod, which might be because the central site has high activity or defects both for growth and for etching. Further etching at the center of the nanorod leads to its splitting, and atoms in the (001) planes are removed in the next stage, causing the formation of the tubular structure. Furthermore, during the etching process, the newly generated soluble niobic acid diffuses into the reaction solution from the center of the precursor nanorods, leaving dense nanopores on the shells of nanotubes with closed tips. To verify this preferential-etching formation mechanism, HF solution was directly adopted as the etching reagent. Figure 6 shows the morphology and structure of the resulting Nb2O5 products: hollow tube-like nanostructures can also be achieved, but the as-obtained products are broken or collapsed nanotubes, which is ascribed to the fast etching rate of the HF reagent. The diameter of the nanoporous nanotubes can be tuned by adjusting the diameter of the precursor nanorods, so Nb2O5 nanotubes of different diameters can be obtained to meet various practical demands. For example, when Nb2O5 nanorods with a smaller diameter (approximately 200 nm) were adopted as precursors, correspondingly sized Nb2O5 nanotubes were achieved (Figure 7).
These Nb2O5 nanotubes and nanorods can be used as versatile templates to fabricate MNbO3 (M = Li, Na, K) nanotubes and nanorods. For example, when Nb2O5 nanorod precursors were reacted directly with LiOH at high temperature, LiNbO3 nanorods were readily obtained. As shown in Figure 8a, b, the morphology of the Nb2O5 templates is preserved. The XRD pattern of the calcination products (Figure 8c) clearly shows pure-phase ferroelectric LiNbO3. These LiNbO3 nanorods were obtained by calcining Nb2O5 with an appropriate ratio of LiOH at 500°C for 4 h. This calcination method is general and versatile, and it can be applied to fabricate other niobate materials such as NaNbO3 and KNbO3. The optical properties of these Nb-based nanomaterials (LiNbO3, NaNbO3, and KNbO3) are shown in Figure S1 in Additional file 1. UV-Vis absorption measurements were used to reveal the energy structure and optical properties of the as-prepared Nb2O5 nanorods and the final porous nanotube products. The UV-Vis absorption spectra of the Nb2O5 nanorods and nanotubes are presented in Figure 9a. It can be seen from Figure 9a that the structural transformation from solid nanorods to nanoporous nanotubes is accompanied by distinct changes in the UV-Vis spectra because of the significant difference in shape between the nanorod precursors and the nanotube products. For a direct band gap semiconductor, the optical absorption near the band edge follows αhν = A(hν − E_g)^{1/2}, where α, ν, E_g, and A are the absorption coefficient, light frequency, band gap energy, and a constant, respectively [16,26]. The band gap energy (E_g) of Nb2O5 can be determined by extrapolating the rising part of the plots to the photon-energy axis. The estimated band gaps of the Nb2O5 nanotubes and nanorods are 3.97 and 3.72 eV, respectively (Figure 9b), both larger than the reported value (3.40 eV) for bulk crystals [10]. The blue shift (approximately 0.25 eV) of the absorption edge for the porous nanotubes relative to the solid nanorods suggests a possible quantum size effect in the orthorhombic nanoporous Nb2O5 nanotubes [10]. The wavelength and intensity of the absorption spectra of Nb2O5 nanocrystals depend on their size, crystal type, and morphology; smaller crystallites give a blue-shifted absorption spectrum. The spectral changes observed here arise from the formation of nanoporous thin-walled tubular nanomaterials, consistent with a previous report [10].
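As a concrete illustration of how the band gap is extracted from such spectra, the short Python sketch below performs a Tauc-type extrapolation of (αhν)² against photon energy for a direct-gap material; the use of absorbance as a proxy for α and the 20-80% window used to pick the linear rising edge are illustrative assumptions, not values taken from this work.

```python
import numpy as np

def estimate_direct_band_gap(wavelength_nm, absorbance):
    """Estimate a direct band gap (eV) via a Tauc-type extrapolation.

    Absorbance is used as a proxy for the absorption coefficient alpha
    (constant path length), which is sufficient for locating the intercept
    of (alpha*h*nu)^2 with the photon-energy axis.
    """
    h_nu = 1239.84 / np.asarray(wavelength_nm, dtype=float)   # photon energy in eV
    tauc = (np.asarray(absorbance, dtype=float) * h_nu) ** 2  # (alpha*h*nu)^2, arb. units

    # Pick the steep rising edge near the band edge: points between 20% and
    # 80% of the maximum Tauc value (a pragmatic, adjustable criterion).
    mask = (tauc > 0.2 * tauc.max()) & (tauc < 0.8 * tauc.max())

    # Linear fit of the edge and extrapolation to tauc = 0 gives E_g.
    slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
    return -intercept / slope
```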
Conclusions
In summary, we have demonstrated a new preferential-etching synthesis of single-crystalline nanoporous Nb2O5 nanotubes. The shells of the resulting nanotubes possess dense nanopores a few nanometers in size. The formation of the single-crystalline nanoporous nanotubes is attributed mainly to preferential etching along the c-axis and slower etching along the radial directions. The as-obtained Nb2O5 nanorod precursors and nanotube products can be used as templates for the synthesis of 1D niobate nanostructures. These single-crystalline nanoporous Nb2O5 nanotubes might find applications in catalysis, nanoscale electronics, optoelectronics, and biochemical-sensing devices.
Additional material
Additional file 1: Figure S1. UV-Vis (a) and PL (b) spectra of Nb-based nanomaterials. PL spectra were obtained with an excitation wavelength of 325 nm, measured at room temperature. | 3,166.8 | 2011-02-14T00:00:00.000 | ["Materials Science", "Chemistry"] |
Molecular Mechanism of Ciprofloxacin Translocation Through the Major Diffusion Channels of the ESKAPE Pathogens Klebsiella pneumoniae and Enterobacter cloacae
Experimental studies on the translocation and accumulation of antibiotics in Gram-negative bacteria have revealed details of the properties that allow efficient permeation through bacterial outer membrane porins. Among the major outer membrane diffusion channels, OmpF has been extensively studied to understand the antibiotic translocation process. In a few cases, this knowledge has also helped to improve the efficacy of existing antibacterial molecules. However, extending these strategies to enhance the efficacy of other existing and novel drugs requires comprehensive molecular insight into the permeation process and an understanding of how antibiotic and channel properties influence the effective permeation rates. Previous studies have investigated how differences in antibiotic charge distribution can influence the observed permeation pathways through the OmpF channel and have shown that the dynamics of the L3 loop can play a dominant role in the permeation process. Here, we perform all-atom simulations of the OmpF orthologs OmpE35 from Enterobacter cloacae and OmpK35 from Klebsiella pneumoniae. Unbiased simulations of the porins and biased simulations of the ciprofloxacin permeation processes through these channels provide insight into the differences in the permeation pathway and energetics. In addition, we show that, similar to the OmpF channel, antibiotic-induced dynamics of the L3 loop are also operative in the orthologs. However, the sequence and structural differences influence the extent of the L3 loop fluctuations, with OmpK35 showing greater stability in unbiased runs and subdued fluctuations in simulations with ciprofloxacin.
■ INTRODUCTION
The increasing global prevalence of antimicrobial resistance poses a significant risk of future epidemics in human populations. At the same time, prevention and treatment of common bacterial infections are becoming less effective against strains that have developed multidrug resistance and, in some cases, extreme drug resistance. 1 The World Health Organization has emphasized the urgent need to address the emergence of resistance in clinical pathogens, most prominently the ESKAPE pathogens, which are the leading cause of resistance-associated deaths. 1 Resistance in these species significantly increases the morbidity and mortality associated with nosocomial infections. The recent Global Antimicrobial Resistance and Use Surveillance System (GLASS) report draws attention to the worrying increase in resistance rates among bacterial pathogens, such as a 42% resistance rate of Escherichia coli and a resistance rate of more than 59% of Klebsiella pneumoniae to third-generation cephalosporins. 2 The development of new drugs against resistant pathogens faces significant hurdles, not only in identifying candidates that demonstrate effectiveness during in vitro assessment but also in ensuring their in vivo efficacy and safety. Identification of drug candidates that are effective in in vitro studies on pathogenic isolates is a primary challenge. Low accumulation of drug molecules inside the bacterial cell has been identified as a reason for failure at this stage.−15 At the same time, a process that counteracts the accumulation of drug molecules within bacterial cells, i.e., efflux, is active and is attributed to the action of ATP-driven efflux pumps. The net accumulation is therefore largely determined by the influx and efflux rates across the bacterial membrane. Naturally, in drug development efforts, strategies have been considered to improve the influx and reduce the efflux rates of existing or novel drug molecules.
Mechanistic insights into the antibiotic permeation processes through porins are expected to aid in the development of drugs with improved influx properties.−21 Porins usually exist as trimers, wherein each monomer is a β-barrel formed by antiparallel β-sheets (see Figure 1). The external hydrophobic surfaces serve as intermonomer contacts, along with an extracellular loop L2 that forms polar interactions with a groove of the neighboring monomer. Several long loops form the extracellular opening of the channel, except for loop L3, which folds inward into the channel lumen to create a narrow constriction region (CR) that is responsible for the size-exclusion property of the channel. In the case of E. coli, the porins OmpF and OmpC have almost circular constriction zones with diameters of approximately 6.5−7 and 5.5−6 Å, respectively. The CR is characterized by a transverse electric field generated by positively and negatively charged residues that decorate opposite sides of the CR (Figure 1). 16,22 In a previous study on OmpF and its homologues, 23 a transverse field of around 0.15 to 0.30 V/nm in the CR was reported. This transverse electric field in the CR has been shown to play a critical role in the permeation of polar solutes through the pore. 9,23 It aids in orienting solutes with an internal dipole into configurations that maximize engagement with the charged residues in the CR. This strong orientation and the interactions with the residues in the CR help the solute molecule partly offset the potential barrier that arises from the steric restriction in accommodating bulky solutes. Mutations of the charged residues in the CR have been implicated in the development of resistance. 22−31 These factors were also considered in the development of a quantitative scoring function that can be used to predict the permeability of a given channel for a set of antibiotics. 23−35 Detailed simulations of the OmpF channel have uncovered the mechanistic basis for the preference for molecules with a positive charge and the role of antibiotic-induced loop dynamics in the translocation of antibiotics. 36 Using simulations of the permeation of antibiotics with different charges, that study showed that an accessible positively charged moiety can interact with acidic residues on the L3 loop and can therefore potentially induce transient conformational shifts in the flexible L3-FS segment (F118−S125) of the loop during permeation through the narrow CR. This mechanism has been termed L3-dynamics-dependent (L3D-D) translocation. In contrast, molecules with only negatively charged moieties prefer to interact with the basic residues of the barrel wall and do not induce significant conformational
fluctuations in the L3 loop, resulting in an L3-dynamics-independent (L3D-I) translocation mechanism. The corresponding permeation model provides a possible explanation for the findings from previous empirical investigations regarding the importance of a sterically accessible positively charged group and is also consistent with reports of fast permeation of zwitterionic antibiotics. 7,16,23,32,33 While these studies have focused on the OmpF channel as a model, the implied role of L3 loop dynamics in antibiotic translocation calls for an examination of a similar role of the loop dynamics in porin orthologs from other pathogens of clinical significance. Moreover, notable structural variations among the orthologs have been suggested to govern the experimentally observed differences in the permeation rates of a given antibiotic through them. A detailed atomistic description of the antibiotic permeation process through the OmpF orthologs and comparison with OmpF may help clarify the effect of the detailed pore structure and dynamics on the exact permeation pathway.
With this objective, the present work focuses on the orthologs of OmpF, namely OmpE35 from Enterobacter cloacae and OmpK35 from K. pneumoniae. To this end, we begin with an analysis of the available crystal structures 23 to examine the pore structure and the amino acid variations in and around the CR of the orthologs that may influence pore dynamics and antibiotic permeation, providing a structural basis for the expected differences in the dynamics of the L3 loop. Unbiased MD simulations of the channels were performed to examine the intrinsic L3 loop stability in the absence of an antibiotic. This part is followed by detailed simulations of ciprofloxacin (CIP) permeation through the orthologs. CIP was chosen as the molecule of interest because it is a rigid, zwitterionic molecule, and its permeation pathways through OmpF are associated with significant conformational fluctuations of the L3 loop. 31 Since the permeation process is essentially a rare event on practically accessible simulation time scales, we employed an enhanced sampling scheme to investigate it. Our approach uses the temperature-accelerated sliced sampling 37 (TASS) method, which, like umbrella sampling, applies a series of harmonic bias potentials to sample antibiotic configurations along the channel axis, but additionally improves, within each simulation window, the sampling of the antibiotic translational and rotational degrees of freedom by boosting the sampling along the associated collective variables (CVs). The method has previously been used to calculate free energies for the permeation of different antibiotics through OmpF and provided qualitative insights into antibiotic-induced L3 dynamics. 31,36,38 Structural analyses and simulations with the antibiotic indicate that the observed sequence variation between the homologues determines the stability and (antibiotic-induced) dynamics of the L3 loop.
■ MATERIALS AND METHODS
System Setup. The simulation systems were prepared using the atomic coordinates of OmpF (E. coli) (PDB ID: 2ZFG) and its orthologs OmpK35 (K. pneumoniae) (PDB ID: 5O77) and OmpE35 (E. cloacae) (PDB ID: 6ENE) obtained from the Protein Data Bank. The trimeric forms of these channels were embedded into a lipid bilayer with the help of the Membrane Builder module of the CHARMM-GUI server. 39,40 All titratable residues were modeled in their standard protonation states at pH 7.0, except residue E296 in OmpF, D285 in OmpE35, and residues E102 and E110 in OmpK35 (see Supporting Information Section S1 for details). The lipid bilayer consists of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (POPE) molecules. TIP3P water molecules were used to solvate the protein−membrane system, and neutralization was achieved by adding potassium ions. For the unbiased simulations, additional K+ and Cl− ions were added to obtain a net concentration of 0.15 M. The details of these systems are provided in Table S1. The systems were simulated using the CHARMM36 force field, 41,42 with the short-range electrostatics and van der Waals interactions calculated using a cutoff of 12 Å and a switching distance of 10 Å. The long-range electrostatics was treated using the particle-mesh Ewald approach 43 with a grid spacing of 1 Å. All bonds were constrained using the parallel LINCS algorithm. 44 A minimization step was performed using the steepest-descent algorithm, and equilibration was performed in steps for a total of 50 ns. Final production runs were performed in the NPT ensemble. The temperature was set to 300 K using the Nosé−Hoover thermostat with a 1 ps coupling constant, and the pressure was maintained at 1 bar using a semi-isotropic scheme with the Parrinello−Rahman barostat. The unbiased production simulations were run for a total of 150 ns. For the biased simulations, we used a virtual-site setup 45−47 that enabled a 5 fs time step. To set up the virtual-site systems, the pre-equilibrated all-atom system topology was converted to the virtual topology using the pdb2gmx tool in GROMACS. Thereafter, an equilibration step was performed with position restraints initially applied to all heavy atoms of the protein and the antibiotic molecule, as well as to the phosphate atoms of the lipids. The equilibration was performed with a gradual, stepwise release of the restraints. Simultaneously, the time step was increased from 1 to 2 fs and finally to 5 fs in the last equilibration step, as described in a previous study. 48 All simulations were performed with GROMACS 2019 49 patched with the PLUMED plugin version 2.4. 50 The force field parameters for the CIP molecule were obtained from a previous study. 48 UCSF Chimera, 51 VMD 52 and in-house Python scripts were used to analyze the trajectory data and to create the images for this work.
Choice of Collective Variables. The biasing strategy used in this study enables accelerated sampling along multiple CVs. The principal CV z is defined as the projection onto the z-axis of the vector between the center of mass of the antibiotic molecule and the Cα atoms of the β-strands of the channel. This CV approximates the direction of permeation through the channel and has been used in previous studies. 38,53,54 In addition, CVs describing the rotation and translation of the antibiotic and the antibiotic−solvent interactions were included in the sampling scheme. The translational CVs x and y describe the motion of the antibiotic molecule orthogonal to the pore axis in the x- and y-directions. Mathematically, these are the projections of the vector between the center of mass of the antibiotic and the Cα atoms of the channel onto the x- and y-axes, respectively (Figure S2A). The rigid-body rotation of the antibiotic molecule is described through additional CVs. The CVs z_ij and x_ij denote the projections of the internal antibiotic vector r_ij, between the carbonyl carbon atom (C16) and the nitrogen atom of the piperazine ring, onto the z- and x-axes, respectively. Moreover, y_kl and z_kl describe the projections of the internal antibiotic vector r_kl, between the C2 and C4 atoms of the quinolone ring, onto the y- and z-axes, respectively. The two internal vectors are shown in Figure S2B. The rotation of the antibiotic about a specified axis is defined as θ = cos−1(p_ab/∥r_ab∥), where p_ab denotes the projection of the vector r_ab onto the given axis. In practice, we employed the linear projections themselves as CVs, as these are convenient proxies for the nonlinear cosine function. Finally, the coordination number for the antibiotic−water interactions, CN_CIP−WAT, was used as a CV as well. All CV definitions and the associated parameters are provided in Table S3.
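To make the geometric definition of the principal CV concrete, the following sketch computes z along a trajectory with MDAnalysis; the file names, the residue name CIP for the antibiotic, and the use of all protein Cα atoms (rather than only the β-strand Cα atoms used in the actual CV) are assumptions made for illustration.

```python
import MDAnalysis as mda

# Hypothetical file names and selections; adjust to the actual system at hand.
u = mda.Universe("porin_cip.gro", "porin_cip.xtc")
antibiotic = u.select_atoms("resname CIP")          # ciprofloxacin (assumed residue name)
channel_ca = u.select_atoms("protein and name CA")  # in practice, only beta-strand C-alphas are used

z_nm = []
for ts in u.trajectory:
    # z-component of the vector from the channel C-alpha center of mass to the
    # antibiotic center of mass; MDAnalysis works in Angstrom, so convert to nm.
    dz = antibiotic.center_of_mass()[2] - channel_ca.center_of_mass()[2]
    z_nm.append(dz / 10.0)
```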
Setup for Temperature Accelerated Sliced Sampling Simulations. To take advantage of the trimeric arrangement of the channels under investigation, we simultaneously applied three separate bias forces to study antibiotic permeation through the three monomers, as done in previous studies. 38,48 The biased simulations were performed using the TASS method. 37 Within this scheme, the principal CV z is sampled using a series of harmonic bias potentials in the range z ∈ [−2.4, 2.0] nm for the OmpE35 channel and z ∈ [−3.0, 2.0] nm for the OmpK35 channel. The simulation windows were generated in a stepwise fashion, wherein the final equilibrated configuration at position i − 1 was used as the input configuration for the equilibration at umbrella position i. A 10 ns equilibration was performed at each umbrella position. Within each simulation window, we subsequently performed simulations using the harmonic bias potential along the principal CV z (the same as used for the equilibration runs) and temperature acceleration along the orthogonal CVs. 55,56 Technically, in the TASS scheme, all biases are applied to a set of fictitious variables that are tightly coupled to the real CVs. These additional variables are introduced within an extended space that is maintained at a higher temperature. This scheme allows the simultaneous inclusion of a large number of CVs, most recently demonstrated in a ligand dissociation study in which up to 22 CVs were considered. 57 For our simulations, the extended temperature was set to 900 K using a Langevin thermostat. We prepared a total of 75 windows along the principal CV z. The windows were positioned 1.0 Å apart in the extracellular (EC) and periplasmic (PP) vestibules at both ends of the channel and 0.5 Å apart in the CR region. The harmonic force constants range from 2000 kcal mol−1 nm−2 at the channel ends to 6000 kcal mol−1 nm−2 in the CR for the OmpK35 channel, and from 2000 to 5500 kcal mol−1 nm−2 for the OmpE35 channel. The small pore size in the CR restricts the free rotation of bulky solutes. Thus, such antibiotic molecules can only assume two possible orientations as they pass through the CR. In the case of CIP, we see either the amino group ahead (orientation I) or the carboxyl group ahead (orientation II) as the molecule crosses the channel from the EC to the PP side (see Figure S3). During the generation of the initial configurations for the TASS sampling, care was taken to ensure that the antibiotic molecules are in orientation I in all umbrella windows that sample the CR. Orientations of the antibiotic molecule belonging to path II had to be sampled using an additional set of umbrella windows, in which we ensured that the molecule is in orientation II during the equilibration step. Twenty-eight additional umbrella windows were employed to sample path II, and the obtained data were merged with those belonging to path I prior to the estimation of the free energy. The TASS simulations for the OmpE35 and OmpK35 porins amounted to cumulative simulation times of 27 and 30 μs, respectively.
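A minimal sketch of how such a window layout could be generated is given below; the CR boundaries, the linear ramp of the force constant between the channel ends and the CR, and the resulting window count are illustrative assumptions, with only the spacings (1.0 Å in the vestibules, 0.5 Å in the CR) and the force-constant range (2000-6000 kcal mol−1 nm−2) taken from the text.

```python
import numpy as np

def umbrella_windows(z_min=-3.0, z_max=2.0, cr=(-0.5, 0.5),
                     dz_vestibule=0.10, dz_cr=0.05,
                     k_ends=2000.0, k_cr=6000.0):
    """Return (center, force constant) pairs along the principal CV z (nm).

    Spacing is finer and the restraint stiffer inside the constriction region
    (CR). The CR boundaries and the linear ramp of the force constant are
    illustrative choices; only the spacings and the k range follow the text.
    """
    centers = []
    z = z_min
    while z <= z_max + 1e-9:
        centers.append(round(z, 3))
        z += dz_cr if cr[0] <= z <= cr[1] else dz_vestibule

    mid = 0.5 * (cr[0] + cr[1])
    span = max(abs(z_min - mid), abs(z_max - mid))
    windows = []
    for c in centers:
        frac = 1.0 - min(abs(c - mid) / span, 1.0)   # 0 at the channel ends, 1 at the CR midpoint
        windows.append((c, k_ends + frac * (k_cr - k_ends)))
    return windows

# Example: window layout for the OmpK35 range used here, z in [-3.0, 2.0] nm.
print(len(umbrella_windows()))
```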
Estimation of Free Energy. The free energy surface (FES) estimation was performed using the TASS mean-force approach, as described previously. 38,58 Individual 1D free energy estimates for each of the monomers were compared, after suitably aligning the profiles, to assess the convergence of the FES estimates from independent simulations. Note that the choice of the overlapping points for the free energy curves can affect the calculated error. In the present work, the alignment points were chosen to yield roughly the best fit between the independent estimates. Subsequently, the average 1D free energy profile from the three TASS simulations was obtained using a bootstrapping procedure based on the whole-histogram bootstrapping method implemented in g_wham, 59 employing the chosen overlap point to align the bootstrap estimates. As described previously, 38 for a given umbrella position the approach involves randomly picking a single histogram (with replacement) from H histograms (here, H is the number of independent samples; H = 3 for the present calculations). This procedure is used to pick U histograms (here, U is the number of umbrella windows employed for the TASS calculations), and the set is used to generate a single bootstrap estimate of the potential of mean force (PMF). For each histogram within a bootstrap sample, uncorrelated samples are obtained and the PMFs are estimated using the mean-force TASS method. In this way, we can generate X bootstrap estimates that are used to obtain an average PMF and the associated error. For the present calculations, we used X = 100 bootstrap samples. The minimum free energy paths along the average 2D FES were determined using the zero-temperature string method. 48,53,60

■ RESULTS AND DISCUSSION

Structural Variations among OmpF Orthologs. As a start, we analyzed the structural variations among the different OmpF orthologs. While several crystal structures of OmpF have been available for quite a while, structures of the OmpE35 and OmpK35 porins have only been reported more recently. 23 The previous structural characterization based on all-atom MD simulations showed that OmpE35 has the same average pore radius in the CR, 3.1 Å, as the OmpF channel. 23 In contrast, OmpK35 has a wider radius at the CR of 3.6 Å. The wider pore radius of OmpK35 results in a slightly higher ion conductivity in electrophysiology experiments compared to OmpF and OmpE35. 23 In the case of bulkier solutes such as antibiotic molecules, the permeation rates are governed by a large number of factors related to the channel and the respective molecule. Certainly, the structural and sequence variations among the orthologs are also expected to influence the relative permeation rates of a given antibiotic molecule. Moreover, keeping in mind the role of the hydrogen-bond network in the stabilization of the L3 loop and its connection to the antibiotic-induced loop dynamics, we were interested in how the sequence variations might influence the fluctuations of the L3 loop. The sequence alignment of OmpF with OmpE35 and OmpK35 shows that OmpE35 has a greater sequence identity of 77% (84% similarity) with OmpF than OmpK35 with 55% (67% similarity). If one considers only the residues in the CR, we find a sequence identity of 81% (90% similarity) between OmpE35 and OmpF. In the case of OmpK35, the sequence identity is 55% (72% similarity) for the same residues in the CR (Figure S3). Thus, in OmpE35 the CR has a large degree of residue conservation, and most of the differences are present in the channel vestibules. For OmpK35, the divergence from OmpF is more pronounced and is observed throughout the channel. Although there is a high degree of conservation between OmpE35 and OmpF in the CR, the divergence is greater between OmpK35 and OmpF. Note that the internal electric field is expected to vary based on the differences in the distribution of charged residues at the CR. The electrostatic potential across the constriction zone has been estimated to be 160 mV for OmpF and OmpE35 and 110 mV for OmpK35. 23 This difference between OmpF (or OmpE35) and OmpK35 is plausible considering the lower sequence similarity between the pores, as stated earlier. The notable variations between the orthologs are discussed below in more detail.
Figure 2A shows the structural superposition of the OmpF and OmpE35 crystal structures, focusing on the prominent residues in the CR. The conformation of loop L3 is very similar in both orthologs. However, we find differences in the residues that form stabilizing interactions with the loop. The most prominent difference is the loss of the hydrogen bond D121-Y32 that is present in OmpF. Such an interaction with the equivalent D116 residue is missing in OmpE35 due to a significant shortening of loop L1 and the loss of the tyrosine residue (see Figure 2A, inset 1). The loss of this stabilizing interaction can result in larger fluctuations in the flexible L3-FS segment (residues F113-S120) of OmpE35. Note that residue D121 in OmpF also forms another hydrogen bond, with residue Y294. This hydrogen bond is retained in OmpE35 in the form of the D116-Y283 interaction. Apart from this, we also observed a substitution of the arginine residue R167 in OmpF by the hydrophobic leucine L162 in OmpE35. The R167 residue forms a hydrogen bond with the L3 backbone at residue S125 in OmpF. Figure 2A, inset 2, shows this part of the protein; in OmpE35, a compensatory mutation in the form of residue R234, however, replaces the function of R167. Finally, the backbone of the L3 tip is stabilized in OmpF by a network of hydrogen bonds involving residues E296 and D312. In OmpE35, a similar stabilizing network is maintained by residues D286 and D301 (see Figure 2A, inset 3). Overall, the loop conformations and dynamics in OmpE35 are expected to be similar to those in OmpF, with a potentially slightly greater flexibility of the L3-FS segment in the former.
The structural superposition of OmpK35 crystal structure with that of OmpF shows larger variations in critical residues in the CR (see Figure 2B).Comparing the structure of the L3 loop, one immediately notices differences in the form of a shift at the L3 tip at position 110 and a larger loop bulge due to the insertion of a tryptophan residue at position 116 of OmpK35.Moreover, the side chain of residue W116 interacts with the adjacent barrel wall, possibly leading to a stabilizing effect on the L3-FS segment in OmpK35.In OmpK35, the observed shift of the L3 tip toward the barrel wall appears to be due to a difference in the residues that stabilize the tip. Figure 2B, inset 1, shows that in the case of OmpK35, the L3 backbone is stabilized directly by the residue E290 rather than through a network formed by E296 and D312 as observed in OmpF.The loss of the aspartate residue in OmpK35 leads to a shift of the L3 tip toward the residue E290.Additional stabilization to the L3 tip in this position is achieved through the interactions between the residues E110 and E20.Due to the proximity of the residues E110 and E20, one of the residue has a high probability to be in a protonated state that enables a hydrogen bond interaction (see Section S1).Altogether, these differences result in an increase in the pore radius of the OmpK35 channel.Other key differences are in the hydrogen bonds that stabilize the L3-FS segment.Similar to OmpE35, in OmpK35 we note a loss of a tyrosine residue located on loop L1 stabilizing L3-FS (see Figure 2B, inset 2).At the same time, the D114-Y288 interaction is retained, which is equivalent to the D121-Y294 interaction in OmpF.Another difference between OmpF and OmpK35 is that the residue R162 (equivalent to R167 in OmpF) does not form a hydrogen bond
with the backbone of loop L3 (see Figure 2B, inset 3). While these differences are expected to increase the flexibility of the L3-FS segment in OmpK35, the W123 interaction with the barrel wall might compensate and provide additional stability to the L3-FS segment.
To examine whether the aforesaid variations affect the L3-FS stability, we performed unbiased simulations of these channels. Figure 3 depicts the root-mean-square deviation (RMSD) of the L3-FS region, showing that in OmpE35 this segment has a large propensity for backbone fluctuations, similar to that of OmpF. In contrast, the L3-FS segment in OmpK35 is found to be quite stable in the unbiased simulations. In addition to the backbone fluctuations, the L3-FS-stabilizing hydrogen bond in OmpE35 (D116-Y283) also undergoes fluctuations. The corresponding hydrogen bond in OmpK35 (D114-Y288), however, remains stable throughout. The pore radii calculated from these trajectories at the narrowest section of the pore are about 3.32 Å in the case of OmpF and OmpE35, and about 3.59 Å in the case of OmpK35 (Figure S5). The pore dynamics leads to fluctuations around these mean values of about 0.21 Å in all cases. However, calculations of the pore radii in the region around the L3-FS segment, lying in the preorientation region (PR), show larger fluctuations of 0.34 Å in OmpF and 0.30 Å in OmpE35, but of only 0.18 Å in OmpK35. Overall, these results indicate that the variations in and around the CR significantly influence the fluctuations of the pore size and the stability of loop L3.
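A minimal sketch of the type of RMSD analysis shown in Figure 3, written with MDAnalysis, is given below; the file names and the alignment selection are assumptions, while the L3-FS residue range (F118-S125 in OmpF numbering) follows the definition used here.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Hypothetical file names; L3-FS for OmpF corresponds to residues F118-S125 (see text).
u = mda.Universe("ompf.gro", "ompf_unbiased.xtc")
ref = mda.Universe("ompf.gro")

r = rms.RMSD(u, ref,
             select="protein and name CA",                               # alignment on all C-alphas (illustrative)
             groupselections=["protein and name CA and resid 118-125"])  # L3-FS segment
r.run()

# Column 3 of the results array holds the RMSD of the first group selection (Angstrom);
# in MDAnalysis versions before 2.0, use r.rmsd instead of r.results.rmsd.
l3fs_rmsd = r.results.rmsd[:, 3]
```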
Free Energy Calculations for CIP Permeation Suggest Faster Permeation through OmpK35. The 1D and 2D FES for CIP permeation through OmpE35 and OmpK35 are depicted in Figures 4 and 5, analogous to those shown in Figure S9 for OmpF. The 1D FES calculated along the CV z suggests that the permeation barriers for CIP in the case of OmpK35 (11.8 ± 1.16 kcal/mol) and OmpE35 (12.9 ± 1.77 kcal/mol) are in a similar range. However, due to the larger pore diameter of OmpK35, one would expect a relaxation of the steric restrictions to permeation and possibly a lower barrier. The 2D FES provides a more detailed view of the permeation, with additional information on the orientation through the CV z_ij. The 2D-FES plots in Figure 5 show that the CIP molecule can permeate via two possible pathways in both OmpE35 and OmpK35. The two pathways are related to the two possible orientations a bulky antibiotic can attain in the CR: one with the amino group going ahead (path I) and the other with the carboxylate group going ahead (path II) as the antibiotic traverses the CR. For OmpE35, however, the 2D FES has a significantly undersampled region within the configuration space. This is an entropically forbidden region that appears due to the narrow diameter at the center of the channel. Such a feature was also present in the 2D-FES estimates in previous studies on the OmpF channel. 38,61 Notably, in the case of OmpK35 this forbidden region is significantly reduced. This result was to be expected, considering the larger minimum pore diameter in the CR of the OmpK35 channel compared to that of OmpE35 (see Figure S4). Furthermore, we calculated the minimum free energy paths associated with paths I and II using a zero-temperature string method, as also shown in Figure 5. A comparison of the free energies for translocation through OmpE35 along path I and path II suggests that for both paths the molecule encounters a free energy barrier of around 12 kcal/mol. Thus, permeation can occur via either of the two paths with similar probabilities. For OmpK35, we find that path I has a greater feasibility due to a lower barrier of 10.5 kcal/mol compared to path II with a barrier height of 13 kcal/mol. It must be pointed out that in the case of OmpF, path I was found to be energetically more feasible due to a lower barrier of 11.5 kcal/mol compared to that of path II with 13.5 kcal/mol. 31 Considering the high sequence identity between OmpF and OmpE35, especially for the residues in the CR, the difference in barriers seems unexpected. With the reported errors in the free energy of >1.0 kcal/mol, it is not possible to conclusively comment on the relative difference in the CIP permeation rates between OmpF and OmpE35 based on the free energy values.
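To put such barrier differences and their uncertainties into perspective, one can assume an Arrhenius-like dependence of the permeation rate on the barrier height (an assumption made here purely for illustration, not a result of the free energy calculations): at 300 K, a 1 kcal/mol change in the barrier already corresponds to roughly a fivefold change in rate, as the short sketch below shows.

```python
import math

R_KCAL = 1.987e-3   # gas constant, kcal/(mol K)
T = 300.0           # K, the simulation temperature

def rate_ratio(dG1, dG2):
    """Relative permeation rate k1/k2, assuming k ~ exp(-dG/RT) with equal prefactors."""
    return math.exp(-(dG1 - dG2) / (R_KCAL * T))

print(rate_ratio(10.5, 11.5))   # OmpK35 vs OmpF path I barriers: ~5x under this assumption
print(rate_ratio(0.0, 1.0))     # a 1 kcal/mol shift alone corresponds to a factor of ~5
```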
Next, we examined the sampled CIP configurations within the two channels to obtain a molecular picture of the permeation process. Antibiotic molecules can assume a myriad of configurations in the wide EC vestibule. However, the channel gets narrower toward the CR, thus limiting the accessible configurational space. Moreover, within the EC region toward the CR, also termed the PR, a molecule with an internal dipole preferentially aligns with the electric field transverse to the pore axis. This PR region lies roughly in the range z ∈ [−1.6, −0.5] nm. Notably, simulations of CIP permeation through OmpF showed that the PR serves as a region for a possible path-switching maneuver, where the molecule can switch from path I to path II and vice versa. 31 As shown in Figure 6 for OmpE35, we find a similar switching region involving a transition from position Ia, where the CIP molecule interacts with K75 and D116, to position IIa, in which the molecule interacts with residues K75 and E112. This transition involves a shift of the piperazine amine group of CIP from D116 to E112, with the K75 interactions with the carboxylate moiety acting as a pivot. From here on, the molecule passes through the CR via either path I or path II. Figure S6 shows prominent poses of CIP as it crosses the CR via path I and path II, from Ia to If and from IIa to IIf, respectively. The configurations along the two paths are similar to those previously observed for CIP permeation through OmpF. 31 The CIP molecule moves through the CR along a track of positively charged residues on one side and negatively charged residues on the other, in either the path I or the path II orientation, and subsequently exits the CR.
For the OmpK35 channel, the PR was largely populated by states with orientations corresponding to path II, as is also apparent from the 2D-FES in Figure 5.As the molecule enters the CR, it reorients to align with the internal electric field (Figure S7, pose P1).However, we find that in the case of OmpK35 the CIP molecule enters further into the CR along path II before possibly undergoing a transition toward path I.The switching transition is depicted in Figure S7 as poses P2 to P5.The charged amine group of CIP that initially interacts with D114 residue in pose P2, undergoes a transition to interact with the E110 residue as shown in poses P3 and P4 and finally shifts to pose P5 where it interacts with residue D106, completing the switch to path I.This late switch in the CR is feasible due to the wider pore in OmpK35.Thereafter, the molecule crosses and exits the CR along path I as shown in poses P6 to P9.
Antibiotic Induced L3-FS Conformational Dynamics in OmpF Orthologs.A key feature of the CIP permeation mechanism through the porin OmpF was the observed L3 dynamics, particularly in the L3-FS segment, associated with permeation. 31From unbiased simulations, even in the absence of an antibiotic molecule, the L3-FS of OmpF already shows some backbone fluctuations as well as fluctuations in the hydrogen bonds that stabilize the L3-FS segment as shown in Figure 3.Given this observation, antibiotic-induced changes in the L3-FS conformation are to be expected.The L3-FS segment of OmpE35 behaves similarly to that in OmpF in unbiased simulations.Thus, CIP-induced L3-FS dynamics appears to be feasible.Our analysis of the TASS trajectories shows that the passage of CIP through the PR and CR is associated with L3-FS fluctuations, as can be discerned from the L3-FS RMSD plot in Figure 7.While the unbiased simulations for OmpK35 show a stable loop (see Figure 3), the TASS trajectories show that in this case as well, the L3-FS undergoes induced conformational fluctuations.However, in comparison to OmpF and OmpE35, we see smaller conformational fluctuations associated with the permeation event based on the lower RMSD values.A structural analysis suggests that the subdued L3-FS backbone fluctuation in OmpK35 can be attributed to the stabilizing effect of the indole ring of the W116 residue interacting with the barrel wall.This stabilizing effect can be seen in the RMSF plot in Figure 7, where the RMSF values decrease sharply in the case of OmpK35 for residues from position 122 onward.Note that residue W116 is actually an insertion that has been omitted in the RMSF plot, but corresponds to the position between residues 122 and 123 according to the OmpF numbering used in the plot.This apart, the stability of the L3-FS segment in OmpK35 is also apparent from the results of the unbiased simulations in Figure 3.The RMSF plot of OmpE35 is interesting as well, as it shows that the loop fluctuations are limited to the L3-FS (shaded in gray).In contrast for OmpF, one can also see a peak in the region around the residues 112 to 114.In the OmpF study, 31 this peak was attributed to conformational changes in the residue D113 that also provides key interactions to the amine moiety of the CIP molecule during translocation.It is interesting to note that fluctuation in this residue is marginal in the equivalent D108 residue of OmpE35 and more so in the case of the D106 residue of OmpK35.In the case of OmpK35, such a difference may be due to the wider pore that enables passage of CIP without the need for a conformational transition of the D106 residue.However, OmpE35 is closely related to OmpF and has a similar pore diameter.In the analysis of trajectories, we only found marginal fluctuations in the D108 side chain associated with the passage of the CIP molecule.Overall, we note that the conformational dynamics induced by the antibiotic molecule is also an important factor in the permeation mechanism through OmpF orthologs.Moreover, the extent of the loop dynamics associated with a translocation event depends on the particular channel and in particular on the stabilization of loop L3.
■ CONCLUSION
Experimental studies have examined the structural and physicochemical aspects of the process of antibiotic influx into Gram-negative bacterial cells. Early studies on the permeation of a range of antibiotics revealed a notably higher permeation rate for zwitterionic antibiotics than for mono- and dianionic antibiotics. 7,16 These investigations suggested a dominant influence of solute charge distribution, hydrophobicity, and size in determining the effective permeation rate through porins. Zwitterionic antibiotics have also been found to bind more strongly in the CR than anionic antibiotics. 18 Electrostatics plays a major role in permeation, as suggested by the observation of strong current blockages in electrophysiology studies that indicate the presence of binding sites in the CR, 62 by the binding site observed in the OmpF structure cocrystallized with ampicillin, 11 and by biased metadynamics simulations of various antibiotics in which the most prominent affinity sites involve interactions with charged residues in the CR. 18,27,29,48,53,63 Prominently, a systematic study of different OmpF orthologs and a representative set of β-lactam antibiotics suggested that successful permeation involves achieving a balance between electrostatic and steric factors. 23 Notably, this study suggested a scoring function that takes into account the statistical averages of various channel and antibiotic properties, as well as their thermal fluctuations. Later on, detailed biased simulations of OmpF with the zwitterionic ciprofloxacin molecule highlighted the possibility of antibiotic-induced fluctuations of the L3 loop at the pore CR. 31,38 Based on these results, it was suggested that permeation depends not only on thermal fluctuations about the statistical averages but perhaps more critically on the induced fluctuations of the L3 loop during antibiotic passage through the CR. However, a direct confirmation of the role of loop backbone fluctuations through mutations in the L3 loop is not straightforward. Previous studies have shown that the channel permeation properties are sensitive to mutations, and any mutation aimed at restricting backbone fluctuations in the L3-FS would also affect other properties such as pore size and electrostatics. 24,64 In a study of molecules with different charge distributions, it was reported that antibiotic-induced fluctuations are observed only for zwitterionic and cationic antibiotics with a positive charge that is accessible for interactions with the negatively charged residues of the L3 loop. 36 It is worth mentioning that in the case of enrofloxacin (ENR), which differs from CIP in that the former has an ethyl cap on the positively charged amine group, interactions of the amine group of ENR with the negatively charged residues of the L3-FS are not feasible due to steric restriction, and thus the molecule does not induce loop fluctuations. Besides this, the conformational flexibility of an antibiotic and the accessible conformational space at the CR are bound to be another dominant factor, possibly intimately related to the empirically deduced role of the number of internal rotatable bonds in the antibiotic molecule. 32
The present study aimed at extending our current molecular-level understanding of the permeation process and of the role of antibiotic-induced pore fluctuations during permeation through OmpF orthologs. More specifically, the objective was to see whether the induced L3 dynamics is a general feature of all porins or whether it is determined by ortholog-specific structural variations. To this end, we first discussed the differences in sequence and structure between OmpF, OmpE35, and OmpK35 and how these variations might affect the permeation process. The observed variations suggest possible differences in the pore dynamics and in the extent of L3 stabilization, a finding that was also supported by unbiased simulations of the three porins. To further examine how these differences influence the permeation of a given antibiotic, we studied the permeation of the antibiotic CIP through the porins OmpE35 and OmpK35, while OmpF was already investigated in an earlier study. 31 The simulations indicate that the observed differences in pore structure, i.e., in the minimum pore radius and in the L3 stabilization, influence the feasibility of the two possible CIP orientations during translocation. The difference is particularly striking between OmpF (or OmpE35) and OmpK35 in terms of the observed structural variations, L3 dynamics, and the permeation mechanism. In OmpK35, an additional stabilization of the unstructured L3-FS segment leads to greater stability and rigidity both in the absence and in the presence of the zwitterionic antibiotic molecule. In OmpF, the transient conformational fluctuations of L3-FS induced by an antibiotic molecule carrying a positive charge were suggested to aid the permeation of bulky antibiotics by reducing the entropic contribution to the barrier. In OmpK35, however, the greater rigidity of the loop appears to diminish the mechanistic role of L3-FS dynamics in the permeation process. At the same time, the larger pore radius of OmpK35 makes up for the loss of L3-FS flexibility. Energetically, CIP has a translocation barrier through OmpK35 of 10.5 kcal/mol, which is smaller than that for OmpF with 11.5 kcal/mol and for OmpE35 with 12 kcal/mol. This result is in line with the trend previously reported for penicillins, 65 while no experimental trend has been reported so far for the antibiotic CIP. We speculate, however, that the relative permeation characteristics of OmpF (or OmpE35) and OmpK35 may differ for bulkier zwitterionic antibiotics. A larger pore size in OmpK35 from K. pneumoniae compared to OmpF from E. coli may not always result in faster permeation of antibiotics through the former. As the size of the antibiotic molecule increases, the OmpK35 pore would present a higher permeation barrier, while the greater rigidity of the L3-FS segment suggests that antibiotic-induced L3 dynamics would not play a dominant role in the translocation process. For such bulkier zwitterionic drugs, the OmpF pore could still present a more efficient permeation path than the OmpK35 pore. For such bulkier drugs, it may be advantageous to contain internal rotatable bonds for more flexibility. In the present work, we have not studied bulkier drugs through MD simulations due to significant sampling issues with increasing size of the solute under investigation. 66−69
Based on the studies thus far, the TASS method presents itself as a suitable approach for studying such complex systems. Work in the direction of extending the investigations to larger antibiotic molecules is in progress. Nonetheless, the insights obtained from the present simulations can help future computational investigations of antibiotic permeation through these channels. The present as well as recent investigations on porins using the TASS scheme have focused only on the role of the channel in antibiotic permeation and did not study the effect of lipopolysaccharides (LPS) in the extracellular leaflet of the outer membrane.−74 It would be interesting to compare the permeation energetics and channel dynamics during antibiotic permeation in the presence of modeled LPS on the EC side. A recent study reports that LPS does not markedly influence the internal electric field at the porin constriction, a dominant factor influencing permeation. 75 However, the reported differences in the dynamics of loop L3 and of the extracellular loops of the channel in the presence and absence of LPS imply a significant influence on the effective permeation rates.
In the context of understanding permeation in the bacterium K. pneumoniae, further studies would also need to focus on the OmpK36 channel, which has an important physiological role in the survival of pathogenic strains. Interestingly, mutations in the L3 loop of OmpK36 have been reported that improve the fitness of pathogenic strains of K. pneumoniae. 76 While data on the accumulation and MIC values of different antibiotics are available, the role of efflux rates complicates the derivation of correlations between antibiotic properties and accumulation. Furthermore, most of the studies on permeation have focused on OmpF as a model system. Previous investigations, for instance, have sought to decouple the influx and efflux processes to study the accumulation of antibiotics and have identified antibiotic substituents that are critical determinants for permeation through OmpF. 13 Similar studies that systematically examine the permeation rates of antibiotics with different sizes and charge profiles through the ortholog systems may be necessary to further understand the differences in the permeation behavior of antibiotics among the various orthologs.
Additional analysis of the protonation states and of the MD simulations is provided (PDF).
Figure 1. General structural features common to OmpF and its orthologs. These porins are composed of 16-stranded β-barrels arranged in the form of a trimer. The β-strands are connected via long loops toward the extracellular side of the barrel. The L3 loop is folded back into the lumen of the barrel, partially occluding the channel and leading to an hourglass shape with a CR at the center of the pore. The CR is also characterized by the presence of a strong transverse electric field that arises due to the presence of charged residues of opposite polarity.
Figure 2. Structural superposition of the CRs of (A) the OmpE35 channel and (B) the OmpK35 channel with respect to the OmpF channel. The OmpF structure is depicted in orange, OmpE35 in green, and OmpK35 in blue. Prominent residues in the constriction zone are highlighted. The residue labels are colored in the same color code as the respective structures. The insets zoom into differences in the residues that provide stabilizing effects to the L3 loop by hydrogen bonds.
Figure 3. (Top row) Root-mean-square deviation of the Cα atoms of the residues within the L3-FS region of OmpF, OmpE35, and OmpK35, calculated from a 150 ns-long unbiased all-atom MD simulation. L3-FS corresponds to the L3 loop segment F118 to S125 in OmpF, F113 to S120 in OmpE35, and W111 to T119 in OmpK35. (Bottom row) Plots of the hydrogen-bond distances for the D121-Y294 bond (in OmpF) and its equivalent hydrogen bonds, i.e., D116-Y283 in OmpE35 and D114-Y288 in OmpK35. All analyses were performed on the three monomers individually, as depicted by the different colors.
Figure 4. One-dimensional free energy plots for CIP permeation through OmpE35 and OmpK35 calculated using TASS simulations. The principal CV z is the projection of the center-of-mass distance between CIP and the channel monomer along the z-axis. The free energy estimates for permeation through the three individual monomers are shown in blue, green, and magenta. The average free energy estimate (in black) and the associated standard error (shaded region) were calculated using a histogram bootstrapping approach.
Figure 5. Two-dimensional free energy estimates for CIP permeation through OmpE35 and OmpK35 are shown in the upper panels. The CV z is the projection of the center-of-mass distance between CIP and the channel monomer along the z-axis, and the CV z_ij is the projection of the longest axis of CIP along the z-axis. The two possible permeation paths, I and II, are calculated using a zero-temperature string method. The free energy along the two paths is depicted in the lower panels. The respective plots for OmpF are shown in Figure S9.
Figure 6. Path-switching point in the PR of OmpE35 that allows the CIP molecule to transition between path I and path II configurations during permeation. The black arrow in the upper panel shows the switching path along the 2D FES estimated using TASS. The lower panels depict the prominent conformations involved in the switch between path I and path II. The L3 loop is shown in yellow, and the charged residues are labeled.
Figure 7. Fluctuations of the L3-FS segment during CIP translocation through OmpF (yellow), OmpE35 (green), and OmpK35 (blue). L3-FS corresponds to the L3 loop segment F118 to S125 in OmpF, F113 to S120 in OmpE35, and W111 to T119 in OmpK35. The upper panel shows the Cα RMSD values calculated for the L3-FS region from the TASS trajectories sampling CIP configurations at different positions along the channel. The lower panel depicts the per-residue RMSF values for the backbone of loop L3 calculated from the simulation windows sampling the CR. The residue numbering follows that of the OmpF porin. Note that the L3 loop of OmpK35 has an additional insertion in the L3-FS region at position 116. This residue has been omitted in the RMSF plot. | 10,571.4 | 2024-08-23T00:00:00.000 | ["Medicine", "Chemistry", "Biology"] |
Freezing of Solute-Laden Aqueous Solutions: Kinetics of Crystallization and Heat- and Mass-Transfer-Limited Model
Following an earlier study, we reexamined the latent heat of fusion during freezing at 5 K/min of twelve different pre-nucleated solute-laden aqueous solutions using a Differential Scanning Calorimeter (DSC) and correlated it with the amount of initially dissolved solids or solutes in the solution. In general, a decrease in the DSC-measured heat release (in comparison to that of pure water, 335 mJ/mg) was observed with an increasing fraction of dissolved solids or solutes, as in the earlier study. In addition, the kinetics of ice crystallization was also obtained in three representative biological media by performing additional experiments at 1, 5 and 20 K/min. A model of ice crystallization based on the phase diagram of a water–NaCl binary solution and a modified Avrami-like model of kinetics was then developed and fit to the experimental data. Concurrently, a heat and mass transfer model of the freezing of a salt solution in a small container is also presented to account for the effect of the cooling rate as well as the solute concentration on the measured latent heat of freezing. This diffusion-based model of heat and mass transfer was non-dimensionalized, solved using a numerical scheme and compared with experimental results. The simulation results show that the heat and mass transfer model can predict the experimental results to within ±10%.
Introduction
The latent heat of fusion in solute-laden aqueous solutions is an important parameter in the modeling and optimization of various low-temperature applications in biomedicine, as well as in the food industry. It is widely reported in the literature that the amount of "freezable" water (or water that changes phase during freezing) is less than the total water content by an amount denoted as the "bound" or "unfreezable" water [1][2][3][4][5][6]. Fennema et al. [7] stated that during the freezing of food substances, the latent heat of fusion should be assumed to be~80% of the expected heat release based on the total water content. Thus 20% of the total water content is "bound" and does not freeze in various foods. Further, Cooke and Kuntz [6] reported that as much as 0.8 g of water/g of dissolved solids (to as low as 0.3 g of water/g of dissolved solids) is "bound" or does not freeze in various biological systems (membranes, lipids, intact ribosomes, muscle cells and polypeptides). Thus, there is a need to determine the magnitude of the latent heat of fusion during the freezing of solute-laden aqueous solutions commonly used in cryobiological applications (and, by extension, the amount of "bound" or "non-freezable" water in these biological media) to optimize a variety of freezing applications (including cryopreservation and cryosurgery).
The objective of this study is two-fold: first, to re-assess the original results from Devireddy et al. [1], which showed that the latent heat of fusion during the freezing of various pre-nucleated aqueous solutions is correlated with the amount of dissolved solids or solutes in the biological media. These measurements were taken using a newer model of Differential Scanning Calorimeter (DSC-Diamond), as opposed to the older instrument (DSC-Pyris 1) used in the earlier study. As in the earlier study, the kinetics of ice crystallization was also obtained in three representative biological media (1 × PBS, 10 × PBS and 1 M glycerol in 1 × PBS) by performing additional experiments at 1, 5 and 20 K/min. Unlike Devireddy et al. [1], where a full set of heat and mass transfer equations was used to model the process, here we developed and present a simple model of ice crystallization based on the phase diagram of a water-NaCl binary solution [2] and a modified Avrami-like model of kinetics [3,4,10,60,61]. In the second part of this study, we developed an additional numerical model to describe the heat and mass transfer diffusion problem during the freezing of a salt solution in a small cylindrical container (i.e., a DSC sample pan). This numerical model differs from the one in Devireddy et al. [1] in that no additional experimental parameters are needed to complete the heat and mass transfer model, and the model is self-consistent. Predicted simulation results from the heat and mass transfer model were compared with the corresponding experimental results and show a high degree of agreement.
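For illustration, the sketch below fits the classical Avrami expression X(t) = 1 − exp(−k tⁿ) to a crystallized-fraction curve; the data values are synthetic placeholders, and the kinetic model used in this study is a modified Avrami-like form rather than this textbook version.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, k, n):
    """Classical Avrami expression for the crystallized fraction X(t)."""
    return 1.0 - np.exp(-k * t**n)

# Synthetic placeholder data: crystallized fraction vs time, e.g. obtained by
# normalizing the cumulative DSC heat release during freezing.
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])      # time, min
X = np.array([0.05, 0.17, 0.52, 0.78, 0.92, 0.99, 1.0])

popt, pcov = curve_fit(avrami, t, X, p0=[0.1, 2.0], bounds=(0.0, np.inf))
k_fit, n_fit = popt   # fitted rate constant and Avrami exponent
```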
Aqueous Solutions: Biological Media
The experiments were conducted using a DSC-Diamond machine (Perkin-Elmer Corporation, Newark, CT, USA). The temperature scale of the instrument was calibrated by the melting point of pure ice (273.15 K or 0 °C) and indium (156.7 °C for 99.9% purity), while the enthalpy scale was based on the heat of fusion of pure ice (335 mJ/mg), as described earlier [34,35,39]. The latent heat of fusion during freezing was obtained using the DSC in the following solute-laden solutions: (i) 1 × (isotonic), 5 × and 10 × Phosphate-Buffered Saline (PBS) solutions (Celox, Inc., Hopkins, MN, USA); (ii) serum-free RPMI culture media (Celox, Inc., Hopkins, MN, USA); (iii) cell culture media: RPMI with 20% Fetal Bovine Serum (FBS) and 1% penicillin-streptomycin (Sigma Chemical Co., St. Louis, MO, USA); and (iv) 0.05 M, 0.1 M, 0.5 M and 1.0 M glycerol in 1 × PBS solutions. Thus, the latent heat of fusion was obtained for nine different aqueous solutions, as described below.
Differential Scanning Calorimeter (DSC) Experiments
The DSC experiments were conducted by placing approximately 9 to 10 mg of each solution in a standard aluminum DSC sample pan (Perkin-Elmer Corporation, Norwalk, CT, USA). The sample was cooled at 5 K/min from 4 °C until ice nucleated in the solution, typically from −6 to −12 °C (observed as a sharp negative peak on the DSC thermogram; Figure 1). The sample was then equilibrated at the phase change temperature (obtained from a separate control experiment as the temperature at which a frozen sample thaws when heated at 5 K/min). The pre-nucleated or "site-saturated" sample was then cooled at 5 K/min to −50 °C to obtain the magnitude and the temperature dependence of the heat release (i.e., the thermogram; see Figure 1). In the cases of 1 × PBS, 5 × PBS, 10 × PBS and 1 M glycerol in 1 × PBS solutions, experiments were also conducted at two additional cooling rates of 1 and 20 K/min. Note that the higher cooling rate of 20 K/min was chosen as a conservative estimate of the fastest cooling rate (40 K/min) at which the DSC can accurately reproduce heat release signatures [35,44,52]. We found that for cooling rates greater than 40 K/min, the DSC heat release measurement spreads out and increases in value [35,44,52]. This inaccuracy could be due to the limitation of the rate at which the phase change process proceeds due to ice crystal growth, as well as the nonlinearity of the resistance within the instrument [62][63][64][65]. Six separate DSC experiments were performed with each solution for each cooling rate studied. The percentage of dissolved solids in the solution was obtained by measuring the difference in weight between hydrated and fully dehydrated solutions (dried in an oven for 3 to 4 d at 50 to 60 °C), the procedure of which was described in an earlier study [1] and, in the interest of brevity, is not repeated here (Table 1).
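For illustration, the short sketch below shows how the dissolved-solids fraction obtained from the hydrated and oven-dried weights translates into the heat release that would be expected if all of the remaining water froze with the latent heat of pure ice; the pan weights used are hypothetical placeholders rather than values from Table 1, and comparing such an estimate with the DSC-measured value indicates the apparent amount of "bound" water.

```python
# Minimal sketch: dissolved-solids fraction and the "all free water freezes" estimate.
# The pan weights below are hypothetical placeholders, not values from Table 1.

L_PURE_ICE = 335.0  # mJ/mg, heat of fusion of pure ice used for the enthalpy calibration

def solids_fraction(hydrated_mg: float, dried_mg: float) -> float:
    """Mass fraction of dissolved solids from hydrated and oven-dried sample weights."""
    return dried_mg / hydrated_mg

def expected_heat_release(hydrated_mg: float, dried_mg: float) -> float:
    """Heat release (mJ/mg of solution) if every gram of water in the sample froze."""
    water_fraction = 1.0 - solids_fraction(hydrated_mg, dried_mg)
    return L_PURE_ICE * water_fraction

if __name__ == "__main__":
    hydrated, dried = 9.6, 0.12  # mg, hypothetical 1x-PBS-like sample
    print(f"solids fraction: {solids_fraction(hydrated, dried):.3f}")
    print(f"expected heat release if all water froze: "
          f"{expected_heat_release(hydrated, dried):.0f} mJ/mg")
```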
Magnitude of DSC-Measured Heat
The integrated area under the DSC thermograms (assumed to correspond to the latent heat of fusion) was obtained using the DSC software (Perkin-Elmer Corporation, Norwalk, CT, USA) with either a sigmoidal or linear baseline, as shown in Figure 1 (as described in the DSC manual). The choice of the baseline influences the integrated area under the thermogram (i.e., the measured value of latent heat), and although more accurate baseline selections are reported in the literature [62][63][64][65], the simpler sigmoidal and linear baselines were used in this study because of their ease of use and the importance of trends (Table 1). The sigmoidal baseline was drawn between the phase change temperature and ~−22 °C, while the linear baseline was drawn between the phase change temperature and ~−40 °C, as described in the DSC-Diamond manual.
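To make the influence of the baseline choice concrete, the following sketch integrates a thermogram after subtracting a straight baseline drawn between two anchor temperatures. It is a minimal illustration only: the instrument software's sigmoidal baseline is not reproduced, and the example thermogram is synthetic rather than measured.

```python
# Minimal sketch: area under a DSC exotherm after subtracting a linear baseline.
# The synthetic thermogram below stands in for exported DSC data (temperature in
# deg C, heat flow in mW); it is not a measured trace.
import numpy as np

def linear_baseline_area(temp_C, heat_flow_mW, cooling_rate_K_min,
                         t_start_C, t_end_C, sample_mass_mg):
    """Integrate the heat flow between two anchor temperatures after removing a
    straight baseline drawn between them; returns the area in mJ/mg."""
    mask = (temp_C <= t_start_C) & (temp_C >= t_end_C)   # cooling run: T decreases
    T = temp_C[mask]
    q = heat_flow_mW[mask]
    # Linear baseline through the two anchor points.
    baseline = np.interp(T, [T.min(), T.max()], [q[T.argmin()], q[T.argmax()]])
    excess = q - baseline                                  # mW above the baseline
    dt_s = np.abs(np.gradient(T)) / cooling_rate_K_min * 60.0  # seconds per sample
    return float(np.sum(excess * dt_s)) / sample_mass_mg       # mJ/mg

if __name__ == "__main__":
    T = np.linspace(0.0, -50.0, 2001)
    # Synthetic exotherm centred a few degrees below the phase change temperature.
    q = 0.05 + 8.0 * np.exp(-((T + 3.0) / 4.0) ** 2)
    print(f"{linear_baseline_area(T, q, 5.0, -0.54, -40.0, 9.5):.0f} mJ/mg")
```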
Note that the sample was equilibrated at the phase change temperature to mimic the behavior of a freezing process in a biological system. For example, water in a cell suspension or a tissue system is compartmentalized into either the intracellular or the extracellular (vascular in the case of tissues) space. During freezing, ice almost always nucleates in the extracellular space (due to the nature of nucleation processes and the fraction of water that is available for phase change). To model the freezing process in such a system, it is necessary to understand the temperature and time dependence of the latent heat released upon subsequent cooling [66]. Hence, the choice was made to pre-nucleate the sample, equilibrate it at the phase change temperature and, subsequently, impose a constant cooling rate on the solution.

Although the total magnitudes of the latent heat release (L) were found to be within 2% of each other for all of the cooling rates studied (Figure 2), there was a time dependence of the measured value of the latent heat release. A function that accounts for the experimentally measured temperature and time dependence of the latent heat release was therefore sought. For simplicity, it was assumed that the temperature and time dependence of the latent heat release are independent of one another and can be represented by the combination of a temperature dependence function, α(T), and a time dependence function, β(t) (Equation (1)). This is a very simple assumption that allows for the development of a crystallization kinetics model. Several empirical models were previously fit to the experimentally determined temperature and time dependence of the latent heat release (an example of a purely empirical fit is shown in [64]), and the two functions α(T) (Equation (2)) and β(t) (Equation (3)) were chosen because of their possible physical or "mechanistic" significance. In these functions, "A", "k" and "n" are constants that need to be obtained by curve fitting to the experimentally determined data; T_ph is the phase change temperature of the solute-laden aqueous solution (T_ph = T − Osm × 1.858, where T is the equilibrium freezing temperature of pure water, 273.15 K, and Osm is the osmolality of the solution); and t is the time in seconds, which can be represented as t = ((T_ph − T) × 60)/B, where B is the cooling rate (K/min).

The function for the temperature dependence, α(T), was chosen primarily because the temperature dependence of the 1 K/min latent heat release for the 1 × PBS solution was found to be very similar to the release profile defined by the phase diagram of an isotonic water-NaCl binary solution, as described previously [2]. Thus, a function very similar to the one detailed earlier [2] was also chosen in this study. In fact, if A = 0.53, then Equation (2) corresponds exactly to the function described by Hayes et al. [2] for isotonic water-NaCl solutions.

The time dependence function, β(t), shown in Equation (3), was selected primarily because it represents the kinetics of transformation processes that require nucleation and/or growth, as developed by Avrami [3,60,61] and described in detail by Christian [4]. It should be pointed out that several other models have been reported in the literature that describe crystal growth rates under a variety of conditions in aqueous and other solutions, for example, the studies by Boutron [10], Kubota and Mullin [67] and Hey and MacFarlane [11] and the review by Long et al. [68].

The Avrami-like model was chosen over other models in this study because of its simplicity and because, by plotting the data as previously suggested by MacFarlane et al. [69], the DSC-measured heat releases at 5 and 20 K/min were found to follow Avrami-like kinetics of transformation (Figure 3). The Avrami model of kinetics was originally developed for isothermal crystallization processes and was later extended to non-isothermal processes by Cahn [70,71] and Ozawa [72], with essentially the same formulation as shown above, the exception being that for non-isothermal processes (similar to the experiments performed in the present study) the constant, k, is a function of the temperature, T. To simplify the curve-fitting process and the Avrami-like model, this dependence of k on T was neglected in the present study. Clearly, this assumption represents a first-order approximation of the traditional Avrami models of crystallization. The two constants in the current Avrami-like model, "k" and "n", are a rate constant and the time exponent, respectively. The time exponent, n, is characteristic of the nucleation type and the growth geometry, and the constant, k, is related to the nucleation and growth rates [4]. Briefly, in the case of diffusion-controlled growth, the time exponent, n, is expected to be 1.5 for "site saturation" (all nuclei present at the beginning of the transformation with negligible initial dimensions) or 2.5 when the nucleation rate is constant during the transformation process. When the growth takes place in large plates or cylinders, the exponent is reduced to 0.5 and 1, respectively.
For polymorphic changes, discontinuous precipitation, eutectoid reactions and interface-controlled growth, the time exponent, n, can vary between 1 and 4, depending on the conditions of growth (for "site-saturation" or "a nucleation process in which all nucleation sites are exhausted early in the transformation", it varies between 1 and 3; for a constant nucleation rate, it takes a value of 4, and it has a value between 3 and 4 for a decreasing nucleation rate). The constant, k, is a measure of the crystal growth rate, and a higher value of k signifies a faster rate of crystal growth or transformation. Note that the predicted value for the constant k in Table 2 ranges from 3.3 to 2.3, with the presence of glycerol decreasing the value, suggesting that the presence of glycerol saturates the nucleation sites when compared to plain PBS solutions.
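The Avrami-like form β(t) = 1 − exp(−k·t^n) can be checked against fractional heat-release data by fitting the linearized form used later for Figure 3 (ln[−ln(1 − β)] versus ln t, whose slope is n and whose intercept is ln k). The sketch below performs that straight-line fit; the data points are synthetic stand-ins for the DSC-derived fractions, and the time axis simply follows whatever unit convention is used for k.

```python
# Minimal sketch: check Avrami-like kinetics, beta(t) = 1 - exp(-k * t**n),
# via the linearized form ln(-ln(1 - beta)) = ln(k) + n * ln(t).
# The (t, beta) points are synthetic placeholders for DSC-derived fractions.
import numpy as np

def avrami_line_fit(t, beta):
    """Least-squares fit of the linearized Avrami form; returns (n, k)."""
    keep = (beta > 0.0) & (beta < 1.0) & (t > 0.0)     # avoid log singularities
    x = np.log(t[keep])
    y = np.log(-np.log(1.0 - beta[keep]))
    n, ln_k = np.polyfit(x, y, 1)
    return n, np.exp(ln_k)

if __name__ == "__main__":
    t = np.linspace(0.01, 2.0, 60)                       # time, same units assumed for k
    beta_true = 1.0 - np.exp(-3.3 * t ** 1.5)            # k = 3.3, n = 1.5 (Table 2 values)
    n_fit, k_fit = avrami_line_fit(t, beta_true)
    print(f"n = {n_fit:.2f}, k = {k_fit:.2f}")
```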
The constant "A" in the temperature dependence function, L(T), was obtained using the 1 K/min data for the three different solutions investigated (1 × PBS, 10 × PBS and 1 M glycerol in 1 × PBS) by using a least-square minimization technique [73]. After determining the best-fit value of "A" to the 1 K/min data, the function L(T,t) was fit to the 5 and 20 K/min data, and the two remaining unknowns in the model (constant, k, and time exponent, n) were obtained for the three different solutions studied. In addition, the "combined best-fit" values of the constant, k, and time exponent, n, that best fit to the 5 and 20 K/min data concurrently were also obtained (a similar procedure was previously applied for other functions [36,44,54,74]). All of the curve-fitting results presented have an R 2 value greater than or equal to 0.95, indicating that there was good agreement between the experimental data points and the fit calculated using the estimated constants.
Heat- and Mass-Transfer-Limited Model of Freezing of a Salt Solution in a Small Container
We start by considering the problem of freezing a salt solution in a small container (i.e., a DSC sample pan) as a coupled problem for the temperature and the salt concentration. Let H be the height of the salt solution, λ be the mean thickness of the dendritic fingers (assumed to be 5 × 10⁻⁵ m) and R be the radius of the cylinder (2 × 10⁻³ m); the volume of the fluid is V = πR²H. Since heat exchange with the instrument occurs through the base of the pan (axial direction), we model the temperature T and the concentration c as having radial symmetry, i.e., T(r, z, t) = T(z, t) and c(r, z, t) = c(z, t). Let Z(t) be the position of the freezing front at time t. The governing dimensional equations are the one-dimensional diffusion equations for heat and solute, ∂T_i/∂t = D_{T,i} ∂²T_i/∂z² and ∂c_i/∂t = D_{c,i} ∂²c_i/∂z², where i = 1 denotes the frozen region 0 ≤ z ≤ Z(t), and i = 2 denotes the unfrozen region Z(t) ≤ z ≤ H. At the free boundary z = Z(t), the temperature equals the concentration-dependent phase change (liquidus) temperature, the solute rejected by the advancing front is balanced by diffusion into the liquid, and the latent heat L released at the front enters the balance of conductive heat fluxes on either side (the Stefan condition),
where T(Z±(t), t) = lim_{z→Z±(t)} T(z, t) denotes the one-sided limits of the temperature at the front. The boundary conditions at the ends z = 0 and z = H are: T(0, t) = T_ph − Bt (the temperature at the base decreases linearly from the phase change temperature at the imposed constant cooling rate B); c(0, t) = 0 (the frozen fraction is pure ice); and ∂T/∂z(H, t) = 0 and ∂c/∂z(H, t) = 0 (Neumann conditions). The initial conditions are T(z, 0) = T_ph = 273.15 − mc_0 and c(z, 0) = c_0. Note that m is a constant that defines the phase change temperature based on the initial solute concentration, c_0, and is approximately equal to 1.858 (if c_0 is given in Osm/L; for an isosmotic solution corresponding to a NaCl concentration of 0.9 wt%, the c_0 value is 0.3 Osm/L). Assuming that diffusion in the frozen region is negligible (D_{c,1} = 0 and c(z, t) = 0 for 0 ≤ z ≤ Z(t)), so that the frozen fraction is pure ice, we non-dimensionalize the equations using the length scale l = H and the time scale τ = T_ph/B, with the non-dimensional quantities T′ = T/T_ph, c′ = c/c_0, z′ = z/l and t′ = t/τ. Dropping the primes and simplifying, we obtain the non-dimensional system ε_i ∂T/∂t = ∂²T/∂z² and γ_2 ∂c/∂t = ∂²c/∂z², where ε_i = BH²/(T_ph D_{T,i}) and γ_2 = BH²/(T_ph D_{c,2}) are the inverse thermal and compositional diffusivities, respectively. The corresponding non-dimensional boundary conditions involve the Stefan number St = L/(T_ph c_l) ∼ 10⁻², where c_l is the heat capacity of the liquid, and the constant M = mc_0. The initial conditions are T(z, 0) = 1 and c(z, 0) = 1. We now estimate the non-dimensional coefficients: ε_i = BH²/(T_ph D_{T,i}) ∼ B·2 × 10⁻⁴ min/K, assuming H = 5 × 10⁻⁵ to 5 × 10⁻⁴ m and D_{T,i} ∼ 10⁻⁵ to 10⁻⁷ m²/s. Similarly, γ_2 = BH²/(T_ph D_{c,2}) ∼ B·2 × 10⁻¹ min/K, assuming D_{c,2} ∼ 10⁻⁹ m²/s. Note that ε_i is quite small, even for moderate cooling rates, while γ_2 is only negligible for very low cooling rates, and that St·ε_i ≪ ε_i. This suggests that we can consider a further reduced system in which the heat conduction is quasi-steady, while the full diffusion equation is retained for the solute concentration; that is, ∂²T/∂z² = 0 for i = 1, 2, and γ_2 ∂c/∂t = ∂²c/∂z² for Z(t) ≤ z ≤ 1. The initial data are T(z, 0) = c(z, 0) = 1. The solution for the temperature field is trivial: T(z, t) = 1 − t. Thus, from the first boundary condition of the reduced system (the liquidus condition at the front), we find that c(Z⁺(t), t) = 1 + t/M. Finally, the system reduces to solving only the diffusion equation for c in the liquid fraction, Z(t) ≤ z ≤ 1, with the coefficient γ_2 (Equation (14)). This is the model that was solved numerically by reformulating the equation in a fixed domain. Rather than solving the diffusion Equation (14) on the variable domain Z(t) ≤ z ≤ 1, we mapped the problem onto a fixed domain, 0 ≤ δ ≤ 1. This is carried out as follows: first, we define the parametrization z = z(δ, t) such that z(δ, t) is uniform in δ; we further suppose that z(0, t) = Z(t) is the interface position and z(1, t) = 1, so that z(δ, t) = (1 − Z(t))δ + Z(t). Transforming Equation (14) into the (δ, t) variables yields Equation (15) for c(δ, t), written in terms of ∂c/∂δ and ∂²c/∂δ². Equation (15) was solved with the boundary conditions (Equation (16)) using a scheme that is first order in time and second order in space.
The initial and boundary conditions for the discrete problem follow Equation (16); for the boundary value, the ghost point c_{−1} = 3c_0 − 3c_1 + c_2 (quadratic extrapolation) is used. For the inverse non-dimensional coefficient γ_2(t), we begin by considering the dimensional diffusion coefficient D_c(T) = (k/(6πR))·(T/µ); with D(298 K) = 2.83 × 10⁻¹¹ m²/min and a viscosity (µ) of 1 cP, we obtain k/(6πR) = 9.17 × 10⁻¹⁴. For the viscosity model, we assume the well-known Arrhenius-type formulation µ = A·e^{F/(RT)}, with A = 6.627 × 10⁻⁴, F = 1.807 × 10⁴ and R = 8.314, and use it to calculate the non-dimensional inverse diffusion coefficient γ_2(T) = BH²/(T_ph D_c(T)). Several studies have shown that there is a strong influence of solute concentration on solution viscosity [75][76][77][78][79][80]. This effect was not considered in the present study, to reduce the complexity of the model and also to assess whether a simpler approach would suffice. For the fully discrete scheme (in time), we let ∆t be the time step and let t_n = n∆t; the concentration profile c^n is then advanced to c^{n+1} by applying the first-order-in-time, second-order-in-space discretization of Equation (15) at each step.
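As a concrete illustration of the reduced model, the sketch below advances the mapped diffusion equation with a backward-Euler diffusion step and an upwinded advection term arising from the moving map, while the base temperature is lowered at the prescribed cooling rate. Two ingredients are assumptions made for this sketch only, not details taken from the paper: the closure used to move the interface (the solute rejected by the pure-ice front is balanced by diffusion into the liquid) and the placeholder layer thickness H (set to the quoted dendrite-finger scale). The Stokes-Einstein and Arrhenius viscosity constants are the ones quoted above.

```python
# Minimal numerical sketch of the reduced model: quasi-steady temperature
# (T = T_ph - B*t imposed at the base and, to leading order, throughout),
# solute diffusion in the shrinking liquid layer mapped onto the fixed domain
# 0 <= delta <= 1 via z = (1 - Z)*delta + Z, an implicit (backward Euler)
# diffusion step and an upwinded advection term from the moving map.
# Assumptions made for this sketch only: the interface law (solute rejected by
# the pure-ice front balanced by diffusion into the liquid) and the placeholder
# layer thickness H; neither is taken from the paper's own discretization.
import numpy as np

M_LIQ, C0 = 1.858, 0.3                 # liquidus slope (K per Osm/L), isotonic conc.
T_PH = 273.15 - M_LIQ * C0             # phase change temperature of 1x PBS, K
A_V, F_V, R_GAS = 6.627e-4, 1.807e4, 8.314
K_SE = 9.17e-14                        # lumped Stokes-Einstein constant (D in m^2/min)

def diffusivity(T_K):
    """Stokes-Einstein diffusivity (m^2/min) with the Arrhenius viscosity (cP)."""
    return K_SE * T_K / (A_V * np.exp(F_V / (R_GAS * T_K)))

def freeze(B=5.0, H=5e-5, n=101, T_drop=40.0):
    """Freeze at B K/min; return (T_ph - T, % latent heat released = Z/Z_final)."""
    delta = np.linspace(0.0, 1.0, n)
    d = delta[1] - delta[0]
    c = np.ones(n)                     # non-dimensional concentration c/c0
    Z, t, hist = 1e-3, 0.0, []
    t_end = T_drop / T_PH              # non-dimensional time at which T = T_ph - T_drop
    while t < t_end:
        T_K = T_PH * (1.0 - t)
        g2 = B * H**2 / (T_PH * diffusivity(T_K))      # gamma_2(T)
        c_if = (273.15 - T_K) / (M_LIQ * C0)           # liquidus concentration at z = Z
        # Assumed interface law: c * dZ/dt = -(1/gamma_2) * dc/dz at the front.
        dcdz = (c[1] - c_if) / (d * (1.0 - Z))
        Zdot = max(-dcdz / (g2 * c_if), 0.0)
        dt = min(5e-4, 0.5 * d * (1.0 - Z) / max(Zdot, 1e-9))   # advective CFL
        # Upwinded advection from the moving map (characteristics run toward delta = 0).
        u = (1.0 - delta) * Zdot / (1.0 - Z)
        rhs = c.copy()
        rhs[1:-1] += dt * u[1:-1] * (c[2:] - c[1:-1]) / d
        rhs[0], rhs[-1] = c_if, 0.0
        # Backward-Euler diffusion step: (I - kappa*D2) c_new = rhs.
        kappa = dt / (g2 * (1.0 - Z)**2 * d**2)
        A = np.eye(n)
        for i in range(1, n - 1):
            A[i, i - 1] = A[i, i + 1] = -kappa
            A[i, i] = 1.0 + 2.0 * kappa
        A[-1, -2] = -1.0                                # zero-flux far boundary
        c = np.linalg.solve(A, rhs)
        Z = min(Z + dt * Zdot, 0.995)
        t += dt
        hist.append((T_PH - T_K, Z))
    dT, Zs = np.array(hist).T
    return dT, 100.0 * Zs / Zs[-1]

if __name__ == "__main__":
    dT, pct = freeze(B=5.0)
    for target in (5.0, 10.0, 20.0, 40.0):
        i = min(np.searchsorted(dT, target), len(pct) - 1)
        print(f"T_ph - {target:4.1f} K: {pct[i]:5.1f} % of latent heat released")
```

Run as-is, the computed fraction Z(t)/Z(t_final) lags the phase-diagram limit more strongly at higher cooling rates, which is the qualitative behavior shown in Figure 5; the quantitative values depend on the assumed H and interface closure.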
Results
As stated earlier, we sought to reproduce the earlier results for the magnitude of the latent heat during freezing obtained by Devireddy et al. [1]. Table 1 shows the DSC-measured heat release readings for the various aqueous solutions investigated. In general, these results are within ±1% of the results obtained by Devireddy et al. [1] and show a decrease in heat release as the amount of dissolved solids (solutes) increases. This trend/result is consistent with earlier studies [1,34]. Note that, as expected, the dissolved fraction of solids increases with the solute concentration, i.e., from 1 × PBS to 10 × PBS. As demonstrated in the earlier study [1] and reconfirmed in this study, although serum-free RPMI has approximately the same fraction of dissolved solids as the cell culture media, it has a considerably smaller magnitude of latent heat released (261 vs. 221 mJ/mg). As stated in earlier studies [18,19,75], this decrease in latent heat release might be due to the decrease in the latent heat of water (at lower temperatures) and/or due to the presence of "bound" water. Figure 2A–C show the experimentally determined temperature dependence of the latent heat release from the 1 × PBS, 10 × PBS and 1 M glycerol in 1 × PBS solutions, respectively. In each figure, the experimentally determined fraction of heat release at various sub-zero temperatures is shown for 1 °C/min (filled circles), 5 °C/min (open squares) and 20 °C/min (filled triangles). Note that the heat release profile of the 1 × PBS solution for a cooling rate of 1 °C/min is very similar to the release profile defined by the phase diagram for a water-NaCl binary solution (solid line in Figure 2A). Figure 2 also shows that the heat release profiles for 5 and 20 °C/min significantly lag behind the 1 °C/min profile for all three solutions studied. The constant "A" obtained by curve fitting to the 1 °C/min data was found to be 0.53 for all three solutions investigated (Table 2). Thus, the temperature dependence function shown in Equation (2), when applied to the 1 × PBS solution, is exactly the same as the one described previously by Hayes et al. [2] for a binary (water-NaCl) isotonic solution. The solid lines in Figure 2B,C represent the temperature dependence function shown in Equation (2), with A = 0.53, for 10 × PBS (T_ph ≈ −5.3 °C or ~267.85 K) and for 1 M glycerol in 1 × PBS (T_ph ≈ −2.4 °C or ~270.75 K), and are shown to accurately predict the fraction of heat release measured at 1 °C/min. Figure 3 shows the plots of the 1, 5 and 20 °C/min data for the 1 × PBS solution, as previously suggested by MacFarlane et al. [69]. Equation (3) can be recast in the form ln[−ln(1 − β(t))] = ln(k) + n·ln(t), and thus plots of ln[−ln(1 − β(t))] against ln(t) should fall on a straight line when the transformation process follows the model, with the slope of the line being equal to the time exponent "n" and the y-intercept being equal to the natural log of the constant, k, or ln(k). Such a plot is shown in Figure 3 and shows that the 1 °C/min (filled circles), 5 °C/min (open squares) and 20 °C/min (open triangles) data points can be represented by three separate straight lines. Thus, Figure 3 suggests that the transformation process follows the Avrami-like model reasonably well and validates our choice of β(t) shown in Equation (3). The three straight lines, from left to right, have a slope of 1.5 (i.e., the time exponent, n = 1.5).
The 1 °C/min (filled circles), 5 °C/min (filled squares) and 20 °C/min (open triangles) data points are found to be reasonably well represented by these lines. The time exponent, n, was then set to 1.5 for two reasons: (1) it corresponds to the physical situation of the DSC experiment, i.e., at time t = 0, all nuclei are present or "site-saturated" [4], and (2) as is shown later (Figure 4), the value of "n" that best fits the 1, 5 and 20 °C/min data concurrently was also found to be equal to 1.5 (the "combined best-fit" value of "n" was also found to be 1.5 for the other two solutions studied, as shown in Table 2 and Figure 4).
Figure 4A–C show contour plots of the goodness-of-fit parameter, R², in the constant (k) and time exponent (n) space that "fit" the 5 and 20 °C/min data for the 1 × PBS, 10 × PBS and 1 M glycerol in 1 × PBS solutions, respectively. Any combination of "k" and "n" within a contour "fits" the experimentally determined data at that cooling rate with an R² value ≥ 0.95; the contour plot for 1 K/min is completely enclosed by the contour plot corresponding to 5 K/min for all media and is not shown in the interest of clarity. The common region between the contours indicates the combinations of "k" and "n" that fit the data at both 5 and 20 K/min with an R² ≥ 0.95. The predicted "combined best-fit" values of "k" and "n" are denoted by a star (*) in each panel and fall within the two contours: n = 1.5 and k = 3.3 for the 1 × and 10 × PBS solutions, and n = 1.5 and k = 2.3 for the 1 M glycerol in 1 × PBS solution. The "combined best-fit" values of the constant, k, and the time exponent, n, are shown in Table 2, along with the "best-fit" values of the constant "A", for the three different solutions investigated in this study (1 × PBS, 10 × PBS and 1 M glycerol in 1 × PBS solutions).
Heat- and Mass-Transfer-Limited Model Results: In Figure 5, we present comparisons between the numerical and experimental results for the percentage of heat released as a function of −(T − T_ph), obtained using the mathematical model described earlier (Equations (10) and (11)). Given that the magnitude of the heat of fusion cannot be easily calculated from the heat and mass transfer model, the percentage of heat released was calculated directly from the interface position by taking t_final for each simulation as the time at which T(t_final) = T_ph − 40. The numerical percentage of latent heat released is then: % latent heat released (t) = Z(t)/Z(t_final). This is consistent with the experimental value if the latent heat produced by freezing an infinitesimal amount of ice does not vary with temperature/time.
In Figure 5, there are three plots. In each plot, a different initial concentration of salt/PBS is used; Figure 5A represents 1 × PBS, and 5B shows 5 × PBS, while 5C shows the 10 × PBS solution. In each graph, the results are shown using cooling rates of 1, 5 and 20 K/min, as well as the corresponding experimental results. Additionally, the solid curve in each plot corresponds to the % of latent heat released that is predicted by using the solution in the phase diagram, which assumes that the concentration is equilibrated (constant). Observe that as the cooling rate is decreased, the numerical solution converges to that predicted by the phase diagram. Moreover, the % of latent heat released is a decreasing function of the cooling rate in all cases investigated. There is good agreement between the numerical and experimental results with 1 ×, 5 × and 10 × PBS. It is intriguing to note that the shape of the experimental curves is insensitive to the initial concentration. The experimental curves seem to be merely shifts of one another, with the shift reflecting the lowering of the phase change temperature with the increase in the initial concentration. A slightly different behavior is seen in the numerical solution, with the simulations at higher cooling rates under-predicting (~6 to 10%) the amount of latent heat released. Given the simplicity of the assumed numerical model, this error was deemed to be acceptable. However, increased agreement between the heat and mass transfer limited model simulations and the experimental results could possibly be achieved by varying/optimizing the mean thickness of the dendritic fingers between the 1 ×, 5 × and 10 × PBS solutions, as described in [1]. Additional improvements to the model are presented in the discussion section.
Discussion
The latent heat data presented in this study suggest that the previously described recommendation by Fennema et al. [7] that ~80% of the total water content freezes in food substances results in an over-prediction of the latent heat of fusion of aqueous solutions (by as much as ~45% for 5 × PBS) and also that the suggested value of bound water (~0.3 to 0.8 g of water/g of dissolved solids in membranes, lipids, intact ribosomes, muscle cells and polypeptides) by Cooke and Kuntz [6] is a very conservative estimate (~10 times lower for 1 × PBS). An important cryosurgical implication of the lower latent heat of fusion of aqueous solutions (relative to pure water) is that, for a specified cooling load, larger ice balls form during the freezing of a solute-laden aqueous solution than in pure water [81].
The magnitude of the latent heat of freezing reported in the current study is in good agreement with a similar study performed in 2002 by Devireddy et al. [1] and is also in agreement with the previously published literature. For example, values of 275 to 250 mJ/mg were reported for phosphate and sodium buffer solutions by Murase and Franks [23] and were obtained using a Differential Scanning Calorimeter (DSC-2). Similarly, values ranging from 29.3 to 218 mJ/mg were reported by Iijima [12] when solutions containing glycerol (60 to 10% wt/v ratio) were thawed at 10 K/min in a DSC-7. Similar results were obtained by Han and Bischof [34] utilizing a DSC-Pyris 1 machine. Other investigators have reported similar trends [82][83][84][85][86][87][88][89][90]. Several theories have been proposed to explain the inverse relationship between the solute concentration and measured latent heat values, including temperature effects, unfreezable or bound water, the solute distribution and the associated heats of dissolution and entropic effects, and are not repeated here. The interested reader is referred to the primary sources and review articles [5][6][7]75,84,85].
A function based on the phase diagram of a binary solution and the modified Avrami-like model of kinetics was developed to predict the measured temperature and time dependence of the latent heat release. The modified Avrami model of kinetics, as a first-order approximation, disregards the temperature dependence of k; in the traditional model of Avrami kinetics, this represents a process that is isothermal. An additional simplifying assumption is in Equation (1), where it is assumed that the temperature and time dependence of the latent heat release are independent of each other and that the combined response can be modeled as a superposition of the two effects. The validity of these assumptions is supported by the ability of the model to predict the experimental results, but clearly, this does not suggest that the simplified Avrami-like model presented here fully captures the underlying mechanisms/physics of the freezing process. However, the presented model is quite simple and straightforward to apply in numerical schemes of biological freezing rather than incorporating the full equations of heat and mass transfer; see, for example, Devireddy et al. [66] for an example of such a coupled freezing problem in tissues. The model as developed is applicable to PBS-based solutions as well as PBS-glycerol solutions.
In theory, this model should be applicable to all solute-laden solutions. However, additional experimental data are needed to verify this assumption/claim. Three different model constants were obtained by curve fitting to the experimental data to complete the function (Equation (1)). The constant in the temperature dependence function, A, was found to be 0.53 from the 1 K/min data for the three different solutions studied. Significantly, the "combined best-fit" value of the time exponent, n, also remained constant at 1.5 for all three solutions studied (Table 2 and Figure 5) and is presumably due to the fact that, in the model, n = 1.5 correlates with the physical situation of the DSC experiment, i.e., pre-nucleated solutions at time t = 0 or "site-saturated" conditions [4]. Another interesting observation is that the value of "k" remains constant between 1 × PBS and 10 × PBS solutions at 3.3 and falls to 2.3 for 1 M glycerol in 1 × PBS solution ( Table 2). As mentioned earlier, the constant, k, can be thought of as a measure of the crystal growth rate or the rate of the transformation process, and a lower value of "k" implies a slower rate of growth. Thus, the reduction in the value of "k" between the PBS (1 × and 10 ×) and glycerol in 1 × PBS solutions suggests that the crystal growth rate or the rate of transformation is slower in the latter (glycerol solution) in comparison with the former solutions (1 × and 10 × PBS). The slower crystal growth rate in the glycerol solution is presumably due to its higher viscosity in comparison to the PBS solutions; i.e., the "type" of solute affects the value of "k" and not the "amount" (since "k" is constant between 1 × and 10 × PBS solutions).
A closer examination of Figure 2 shows that the fit of Equation (1) to the data exhibits a cooling rate dependence. This suggests a model limitation that is clearly due to the use of an isothermal model for a non-equilibrium process. Specifically, at the fastest cooling rate studied, 20 K/min, Equation (1) under-predicts the fraction of heat released for 1 × PBS (−15 °C < T < T_ph), while the opposite is true for both the 10 × PBS (−22 °C < T < −12 °C) and 1 M glycerol in 1 × PBS (T < −12 °C) solutions. No satisfactory explanation is available at this time for this observation, apart from the ones stated earlier. Note that a similar trend of under-prediction is seen in the numerical results as well (Figure 5). It might be that both the Avrami-like model and the numerical model are missing a fundamental piece of the puzzle, or the experimental data are showing an artifact of unknown origin. For example, the choice of the model diffusion coefficient (or the viscosity) could be incorrect. However, a sensitivity analysis found that the model predictions will "match" the experimental data only if the diffusion coefficient is lowered by a factor of 10¹⁰. Such a decrease in the diffusion coefficient is neither supported by experiments nor realistic. Effects such as the temperature dependence of the solute diffusivity and the temperature dependence of the latent heat were considered but were not significant enough to change the model results, unless unrealistic assumptions that are unsupported in the literature were made, in a similar fashion to the diffusion coefficient. Additional modifications and model improvements were considered but were deemed to be too impractical, as they required further assumptions with several unknown variables and also increased the complexity of the model. Such improvements include accounting for ice crystal interactions, nucleation models to assess the ice crystal size and distribution, irregularly shaped and sized ice crystals, the formation of partial and/or complete eutectics at the advancing ice front, the inclusion of salts in the frozen fraction, and instabilities at the advancing ice front.
Conclusions
The latent heat of fusion during the freezing of different pre-nucleated solute-laden aqueous solutions was obtained using a Differential Scanning Calorimeter (DSC) and correlated with the amount of initially dissolved solids or solutes in the solution. In general, a decrease in DSC-measured heat release (in comparison to that of pure water, 335 mJ/mg) was observed with an increasing fraction of dissolved solids in the solution, a fact that has been well established in the published literature. A model based on the phase change diagram of a water-NaCl binary solution and a modified Avrami-like model of kinetics was developed and fit to the observed data to obtain three model parameters ("A", "k" and "n"). The model was found to simulate the temperature and time dependence in the DSC-measured heat release data reasonably well (the goodness-of-fit parameter R 2 ≥ 0.95). A mathematical model of the freezing of a salt solution in a small container is also presented to further describe the experimental measurements. This model was non-dimensionalized, solved using a numerical scheme and compared with experimental results. The results show that the simulations of the mathematical model are in good agreement (± 10%) with the experimental results.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
The C-RORC PCIe card and its application in the ALICE and ATLAS experiments
The ALICE and ATLAS DAQ systems read out detector data via point-to-point serial links into custom hardware modules, the ALICE RORC and ATLAS ROBIN. To meet the increase in operational requirements both experiments are replacing their respective modules with a new common module, the C-RORC. This card, developed by ALICE, implements a PCIe Gen 2 x8 interface and interfaces to twelve optical links via three QSFP transceivers. This paper presents the design of the C-RORC, its performance and its application in the ALICE and ATLAS experiments.
1 Introduction

1.1 ALICE online architecture in Run 1 and Run 2

ALICE [1] is the heavy-ion experiment at the CERN LHC dedicated to the study of the physics of strongly interacting matter. It has been designed to cope with the high particle densities produced in central Pb-Pb collisions. The data captured from all 18 subdetectors are read out by the ALICE Data Acquisition (DAQ) system via around 500 serial optical links called Detector Data Links (DDLs) [2]. The data sent via DDLs from the cavern to the counting rooms is received in custom FPGA based DAQ Read-Out Receiver Cards (D-RORCs). These boards are installed in servers acting as Local Data Concentrators (LDCs). For each DDL an exact copy of the incoming data is forwarded within the D-RORC FPGA to another DDL towards the High-Level Trigger (HLT). A simplified overview of the read-out architecture is shown in figure 1.
The HLT is the first system in ALICE where data from all detectors is combined and reconstructed. This compute cluster is comparable in size to the DAQ cluster and additionally contains Graphics Processing Units (GPUs). The interface nodes are equipped with custom FPGA based HLT Read-Out Receiver Cards (H-RORCs), receiving the detector data via DDLs and performing first reconstruction steps. In addition to software based data processing on the nodes, the computing power of the HLT could significantly be enhanced by implementing pre-processing algorithms in the H-RORC firmware and offloading computations to GPUs [3]. Output nodes pass the processed data back to the DAQ system via H-RORCs and DDLs.
The HLT decisions for each event are read out by the DAQ, using the DDLs as for any other detector. The sub-events from the detector LDCs and the HLT decision are then sent over the Event Building Network for global processing and finally into long term storage.
The Read-Out Receiver Cards for DAQ and HLT have similar requirements; however, they have been developed and maintained as independent projects. The H-RORC contains a Xilinx Virtex-4 FPGA and connects to DDLs via pluggable add-on boards hosting the optical links. The interface to the host machine is implemented with PCI-X. The D-RORCs have been used in two different revisions: one with PCI-X and one with PCIe interfacing to the host machine. These boards use Altera APEX or Stratix II FPGAs and have two optical interfaces per board. During Run 1 around 400 D-RORCs and around 240 H-RORCs were used in the DAQ and HLT systems. The read-out architecture described will remain the same for Run 2. LHC luminosities after Long Shutdown 1 are expected to be in the range of 1–4 × 10²⁷ cm⁻² s⁻¹ with a center-of-mass energy of 5.1 TeV for Pb-Pb collisions. The expected data rates require that the read-out system as deployed during Run 1 is upgraded. The Time Projection Chamber (TPC) is replacing its Readout Control Unit with a redesign for higher detector bandwidth and increased output link rate (RCU2). The Transition Radiation Detector (TRD) is implementing a higher read-out link rate with the existing Global Tracking Unit (GTU) hardware. Therefore, the original version of the DDL (also referred to as DDL1) has been upgraded to the DDL2 [4], which supports higher link rates. The increasing data rates and read-out changes also affect the systems of DAQ and HLT and in particular the Read-Out Receiver Cards.
Both types of RORCs used during Run 1 are limited in their optical read-out capabilities by the DDL1 link rates. Additionally, the PCI-X host interface is obsolete and increasingly rare in recent server PCs. These facts require a replacement of the Run 1 Read-Out Receiver Cards.
1.2 ATLAS: upgrade of the ReadOut system
The focus of the ATLAS experiment [5] at the LHC is the study of high-energy proton-proton collisions at high luminosities. The experiment makes use of a trigger system consisting of three levels to reduce the event rate to a manageable level. The first level consists of dedicated hardware. Data from events accepted by this level are transferred from the front-end electronics to the ReadOut Drivers (RODs). These are sub-detector specific modules, located in an underground service area adjacent to the cavern in which the experiment is installed. An important task of the RODs is to build event fragments and output these to the ReadOut System (ROS). For each first-level trigger accept each ROD outputs one event fragment. Each fragment contains an identifier, the L1Id, which is, apart from resets, monotonically increasing for consecutive fragments. A supervisor selects a higher-level trigger processing node for handling the event and forwards the same L1Id and additional information, provided by the first-level trigger, to it. The additional information is used by the second-level trigger for requesting only part of the event data from the ROS. The L1Id is forwarded as part of each request for data associated with that L1Id via the Ethernet network connecting the nodes and the ROS. The ROS responds by sending the requested data. For Run 1 the second level of triggering was implemented using a dedicated set of server PCs. Upon acceptance by this level, full event building was performed by another dedicated set of server PCs known as the Event Builder, which like the second-level trigger processors requested the event data from the ROS, but instead of a fraction all data were requested. Full events were then built and forwarded to the highest trigger level, known as the Event Filter and running on another dedicated set of server PCs. For Run 2 the same approach will be used, but all processing of an event, i.e. for second-level triggering, event building and Event Filter processing, will be done on the same processing node. As in Run 1 event fragments will be discarded in the ROS upon delete requests that are broadcast to the ROS by a supervisor. This occurs after a second-level trigger reject or after successful building of the full event (or of a partial event in case of certain types of events, in particular calibration events). A diagram of the structure of the Trigger and DAQ (TDAQ) system for Run 2, with data volumes and trigger rates indicated, is presented in figure 2.
The event fragments are transferred from the RODs to the ROS via dedicated point-to-point links in the form of optical fibers, using the S-link protocol [6] and running at either 160 MB/s or 200 MB/s maximum throughput. For Run 1 about 1600 of these links were deployed, this number increases to about 1800 for Run 2. The ROS as deployed during Run 1 was built from about 150 server PCs, with typically 4 ROBINs [7] installed per PC. ROBINs are PCI plug-in cards with three inputs for the point-to-point links via which the RODs output their data. Each PC was also equipped with a PCIe plugin card connecting via two ports to the data collection network, implemented with 1 Gb Ethernet technology. Each ROBIN contained a 64 MB paged memory buffer for each of the three inputs, a Xilinx Virtex-II FPGA, a PCI interface chip and a PowerPC processor keeping track, for each buffer and together with the FPGA, of the association between page number and L1Id of each fragment stored. Requests were forwarded by the PC to a ROBIN via its 64-bit 66 MHz PCI interface, requested data was written to the memory of the host via DMA.
The increase of the number of ROD-to-ROS links for Run 2 made a reduction of the rack space used per link desirable. Furthermore 64-bit PCI technology is becoming obsolete, motherboards with four PCI slots, similar to those installed in the ROS PCs used in Run 1, are not readily available for the current generation of CPUs (Ivy Bridge or Haswell architecture). A PCIe solution was therefore required. In addition the higher luminosity and collision energies of Run 2, the higher maximum average level-1 accept rate of 100 kHz (instead of about 70 kHz for Run 1), and updated trigger conditions will result in more data being sent to and requested from the ROS. Therefore it was decided to replace the ROS used in Run 1 by a more compact ROS with PCIe based ROBINs and capable of handling requests for event fragment data for at least 50% of the fragments received via the ROD-to-ROS links. With the CPU power available in modern server PCs it was considered feasible to move the tasks of the on-board processor of the ROBIN to the CPU of the ROS PC, simplifying the design of the ROBIN and also simplifying support, as both the software and the development environment for the on-board processor no longer have to be maintained. This new version of the ROBIN is known as the RobinNP, "NP" refers to "No Processor". The custom board developed by the ALICE collaboration, the C-RORC, described in the next section, provides all functionality required for the RobinNP, as discussed in section 3.3.
2 The Common Read-Out Receiver Card (C-RORC)
The lack of suitable commercial platforms to replace the Run 1 Read-Out Receiver Cards deployed in ALICE led to the development of a custom board. Even though the development was driven by ALICE requirements, the target platform was kept as generic as possible. A photo of the final board with the major components annotated is shown in figure 3. The board is a full-width, full-height PCIe card according to the PCIe specification. The height of the components is kept within the specification to allow installation of boards into adjacent PCIe slots. The boards are powered from 6-pin GPU power cables.
The central component on the board is a Xilinx Virtex-6 FPGA. This FPGA already comes with a PCIe hard block for up to eight lane PCIe generation 2 (8x 5.0 Gbps). A measurement of the usable PCIe bandwidth with a maximum payload size of 256 byte per PCIe packet on a recent IvyBridge server is shown in figure 4. This example uses a custom DMA engine and two DMA buffers as described in section 3.2. The transfer rate for the plain event payload to the host buffer is shown (lowest rates). The rate taking into account the transfer of the report words and the overall throughput at the PCIe transaction level packet interface including all transaction protocol headers (highest rates) are also shown. The throughput is quite close but not equal to the theoretical limit of 4 GB/s, as there is still a portion of bandwidth required for the link level protocols including crediting.
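For orientation, the headline numbers can be reproduced with a back-of-the-envelope calculation: PCIe Gen 2 runs at 5 GT/s per lane with 8b/10b encoding, and each transaction layer packet (TLP) carries framing and header overhead in addition to its payload. The 24 bytes of per-TLP overhead used below is a typical assumption (framing, sequence number, header and LCRC), not a figure taken from the measurement in figure 4.

```python
# Back-of-the-envelope PCIe throughput estimate for the C-RORC host interface.
# The 24-byte per-TLP overhead (start/end framing, sequence number, 3-DW header,
# LCRC) is an assumed typical value; flow-control (credit) and other DLLP traffic
# is not modeled, so the result is an upper bound on the payload rate.
LANES = 8
GEN2_GT_PER_S = 5.0e9          # transfers per second per lane
ENCODING = 8.0 / 10.0          # 8b/10b line encoding used by PCIe Gen 1/2
MAX_PAYLOAD = 256              # bytes per TLP, as configured on the host
TLP_OVERHEAD = 24              # bytes per TLP (assumed typical value)

raw_bytes_per_s = LANES * GEN2_GT_PER_S * ENCODING / 8.0   # 4.0 GB/s
efficiency = MAX_PAYLOAD / (MAX_PAYLOAD + TLP_OVERHEAD)
print(f"raw link rate : {raw_bytes_per_s / 1e9:.1f} GB/s")
print(f"TLP efficiency: {efficiency:.1%}")
print(f"payload bound : {raw_bytes_per_s * efficiency / 1e9:.2f} GB/s")
```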
The board interfaces to 12 serial full duplex optical links via three QSFP modules, with each QSFP module connecting to four optical links. Break-out fibers are available to connect to the existing fiber installations. The serial links are directly connected to the transceivers of the FPGA (GTX), which limits the maximum serial link rate to around 6.6 Gbps. An on-board configurable reference clock oscillator makes it possible to use almost any link rate within the supported range. On-board DDR3 memory can be installed in two SO-DIMM sockets. The required memory controllers can be implemented in the FPGA and allow operation of single ranked modules up to 1066 Mbps and dual ranked modules up to 606 Mbps. Both interfaces have been tested with a variety of different modules up to 2 × 8 GB total capacity. FPGA configuration files can be stored in on-board synchronous flash memories for fast auto-configuration of the board upon power-on. Additionally, there is enough memory to store multiple FPGA configurations. A configuration microcontroller can be accessed by the host machine via SMBus even if the PCIe link is down. This allows implementation of a safe firmware upgrade procedure by always keeping a known-to-be-working configuration in the flash memory.
The large scale production of the boards was organized as a common effort between ALICE and ATLAS. Extensive hardware tests have already been conducted by the contractor. More application specific tests have been done by ALICE and ATLAS at CERN. At the time of this writing 359 boards have successfully been produced, tested and delivered to CERN, of which most have been installed in the ALICE DAQ and HLT and ATLAS DAQ systems.
3 Applications of the C-RORC in ALICE and ATLAS
With the C-RORC there is now a common hardware platform for three applications in two LHC experiments: ALICE Data Acquisition, ALICE High-Level Trigger and ATLAS TDAQ Read-Out System. Even though the platform is the same, each application has to interface to existing application-specific hardware and software infrastructure. For this reason firmware for each of the three applications is developed independently. Nevertheless, common building blocks are reused and approaches are shared. The following sections describe the applications in more detail.
3.1 ALICE data acquisition
The ALICE DAQ system handles the data flow from the detector to permanent data storage in the CERN computing center and is responsible for uploading configuration data to the detectors [8]. The interface to the DDLs in the DAQ Read-Out Receiver Card firmware is therefore providing two operating modes: data taking and detector configuration.
In data taking mode the receiving channel of each read-out link is used to transfer event data from the detector electronics to the DAQ farm. The transmitting channel is used for flow control. In detector configuration mode the transmitting channel is used to send configuration data to the front end electronics. The receiving channel is used for acknowledgments from the front end electronics.
The ALICE DAQ Run 2 setup is a mixed installation consisting of C-RORCs for all TPC, TRD and HLT-to-DAQ links. The previous D-RORC boards are still in use with the remaining detectors. The C-RORCs use six optical links to receive detector data and the other six links to send a copy of the data to the HLT. The copy process between the links is directly implemented in the RORC firmware. The DDL protocol has been ported to the higher DDL2 rates to support the detectors that upgrade their read-out for Run 2. The firmware interface to the host server via PCIe is based on a PLDA DMA engine [9] for six data channels. This is the same interface as already used for the D-RORC boards, which allows a common device driver and software interface for both types of boards.
The host memory for DMA operations is managed with the physmem driver and divided into page-like segments with known physical start addresses and lengths. These buffer descriptors are pushed into a FIFO in the RORC firmware and then used as start addresses for DMA transfers. For each descriptor used for a DMA transfer, the RORC writes an entry into a second DMA buffer in the host memory to inform the software of new data. The DAQ farm for Run 2 will consist of a cluster of around 130 servers with 10 Gb Ethernet interconnect, in which 59 C-RORCs are installed.
3.2 ALICE High-Level Trigger
In the ALICE HLT one C-RORC replaces three to six of the previous H-RORC boards, thus allowing a much denser integration of the optical links into the cluster. Up to 12 links per board are used to receive data from the DAQ system. The optical link protocol is identical to that used for ALICE DAQ: DDL at different link rates depending on the detector. For Run 2, 74 C-RORCs have been installed into 2U dual socket IvyBridge servers together with GPUs and 56 Gb InfiniBand interconnect. The overall HLT for Run 2 consists of 180 compute nodes, each with two 12-core CPUs and a GPU, and some infrastructure machines. A schematic picture of the node configuration and an overview of the dataflow inside the HLT C-RORC firmware is shown in figure 5.
The existing HLT data transport framework assumes one process per DDL. With 12 links per board this requires DMA engine firmware that is able to operate 12 DMA channels independently. This was not possible with any available commercial PCIe DMA core for the given FPGA architecture, so a custom DMA engine was developed. This DMA engine handles scatter-gather DMA descriptor lists provided by the host system and thus allows the standard Linux memory subsystem to be used for buffer allocation and mapping. The possibly scattered physical memory fragments are mapped into a contiguous virtual memory region by a user space device driver library. The DMA buffers are used as ring buffers, with each DMA channel using two: EventBuffer and ReportBuffer. Detector data from the optical links are directly written into the EventBuffer. Once an event is fully transferred, an entry is written into the ReportBuffer containing the offset and length of the event in the EventBuffer. All hardware access is performed from user space using the Portable Driver Architecture (PDA, [10]) library together with a user space device driver. The PDA allows memory-mapping the DMA buffer twice to consecutive virtual memory addresses, which allows a transparent handling of the wrap-around effects of the ring buffers.
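The EventBuffer/ReportBuffer handshake can be summarized with a small behavioral model: the firmware writes event payloads into one ring buffer and, once an event is complete, posts its offset and length into a second ring that the software consumes. The sketch below is an illustrative software model of that scheme, not the actual firmware, DMA engine or PDA driver API; all names and sizes are example choices.

```python
# Illustrative software model of the HLT dual ring-buffer scheme (EventBuffer +
# ReportBuffer).  This is a behavioral sketch only, not the C-RORC firmware,
# DMA engine or PDA driver API, and all names/sizes are example choices.
from collections import deque

class DmaChannelModel:
    def __init__(self, event_buf_size=1 << 20):
        self.event_buf = bytearray(event_buf_size)   # stands in for the DMA EventBuffer
        self.write_off = 0                           # "firmware"-side write offset
        self.read_off = 0                            # "software"-side read offset
        self.report_ring = deque()                   # stands in for the ReportBuffer

    def _free_space(self):
        used = (self.write_off - self.read_off) % len(self.event_buf)
        return len(self.event_buf) - used - 1

    def hw_push_event(self, payload: bytes) -> bool:
        """'Firmware' side: copy an event into the ring and post a report entry."""
        if len(payload) > self._free_space():
            return False                             # back-pressure: no space yet
        start = self.write_off
        for b in payload:                            # wrap-around copy
            self.event_buf[self.write_off] = b
            self.write_off = (self.write_off + 1) % len(self.event_buf)
        self.report_ring.append((start, len(payload)))
        return True

    def sw_pop_event(self):
        """'Software' side: consume the next report entry and return the payload."""
        if not self.report_ring:
            return None
        start, length = self.report_ring.popleft()
        data = bytes(self.event_buf[(start + i) % len(self.event_buf)]
                     for i in range(length))
        self.read_off = (start + length) % len(self.event_buf)
        return data

if __name__ == "__main__":
    ch = DmaChannelModel()
    ch.hw_push_event(b"\xca\xfe" * 512)
    print(len(ch.sw_pop_event()), "bytes consumed")
```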
An essential part of the HLT firmware is the FastClusterFinder online pre-processing algorithm [11], which can be integrated into the dataflow to extract features of the raw TPC data while passing through the RORC. The FastClusterFinder can handle the full bandwidth of the DDL link and induces only a marginal additional readout latency of a few microseconds while saving a significant amount of CPU resources compared to the same processing steps in software. This was developed for DDL1 speed and has now been tuned to support the higher optical link rates of the DDL2 protocol.
The ALICE HLT uses the DDR3 memory on the C-RORC only to replay previously recorded detector data into the system. Six DMA channels share one DDR3 SO-DIMM module. This allows the full HLT chain to be tested with real detector data without requiring DAQ or detector resources. The on-board DDR3 memory is not used during physics runs.
3.3 ATLAS Readout System
As mentioned in section 1.2, the C-RORC provides all the functionality required for the RobinNP: 12 ROD-to-ROS links can be connected to a single board, four times as many as to the ROBIN, and the PCIe interface has a maximum throughput of about 15 times that of the PCI interface of a ROBIN. The resource requirements for the implementation of the RobinNP functionality are fully satisfied by the FPGA, which is also capable of handling the requirements with respect to data throughput. Furthermore, up to 16 GB of buffer memory can be installed in the two SO-DIMM slots, while a ROBIN has 192 MByte of buffer memory. Higher speeds than the current 160 or 200 MB/s are also possible for the input links.
The C-RORC made it possible to build a new compact ROS that makes use of 98 2U-high server PCs, typically with two C-RORCs installed and 24 ROD-to-ROS links connected per PC (compared to 12 links connected per 4U-high ROS PC for Run 1). Because of the factor-of-two increase in the number of links connected to a single PC, and because of the higher request fractions, the networking infrastructure also had to be upgraded: instead of two 1 Gb Ethernet links, a ROS PC is now connected to the data collection network with four 10 Gb Ethernet links.
A schematic diagram of the RobinNP firmware and its interactions with the host PC is presented in figure 6. The firmware consists of two identical parts, referred to as ROBGroups, each connecting to six ROD-to-ROS links (labeled as ROL (ReadOut Link) in the diagram) and a common part implementing an eight lane Gen 1 PCIe interface and the DMA engine. The latter is the engine available from PLDA [9]. Each ROBGroup has one shared buffer memory, consisting of a 4 GByte DDR3 SO-DIMM module, which is logically subdivided in six partitions, one for each ROD-to-ROS link. Pages in the buffer memories are managed by multi-threaded software running on the ROS PC, a typical page size is 2 kByte. For each memory partition the PC provides information on free memory pages, via FIFOs implemented in firmware, to each of the 12 input handlers. Incoming fragments are stored in free pages. For every page used, information on the page number, L1Id and length of the fragment stored is entered in the Used Page FIFO of the input handler that handled the fragment. Per ROBGroup the information from each of these FIFOs flows into the "Combined Used Page FIFO", and is subsequently transferred to the memory of the PC by means of DMA by the "FIFO duplicator". The information is used by a dedicated thread for "indexing", i.e. information is stored on the relation between L1Id and the page (or pages if the fragment is larger than the page size) in which a fragment is stored as well as on the length of the fragment. Data requests received via the network cause a look-up of this information and forwarding of requests for reading data from the pages concerned. These data are then read by the FPGA from the DDR3 memory and passed to the DMA engine for transfer to the memory of the PC. For each ROBGroup a second FIFO duplicator transfers information concerning completed DMA transfers from a FIFO to the memory of the PC. This information is used for collecting the requested data, which is output via the network. Clear requests are also sent to the ROS via the network. These requests result in the identifiers of the pages concerned being recycled onto a free page stack and eventually back onto the Free Page FIFOs, thus allowing the data in memory to be overwritten. The communication between RobinNP and the PC is interrupt driven: the indexer thread is woken upon storage of new event data and the thread used for data collection is woken upon the completion of DMA transfers. Interrupt coalescence has been implemented in an innovative way: an interrupt only occurs if the buffer to which data is transferred from the FIFO with which the interrupt is associated is empty upon arrival of new data. During normal operation the PC does not need to read any data via PCIe from FIFOs in the FPGA, as all data is written under DMA control to the memory of the PC. In this way optimum utilisation of the available PCIe bandwidth is achieved.
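A simplified model of this page bookkeeping for a single memory partition is sketched below. The names, the flat array standing in for the free-page FIFO, and the one-page-per-fragment restriction are simplifications for illustration only; in the real system fragments larger than one page span several pages, and the fragment data themselves reside in the DDR3 buffer rather than in host structures.

```c
/* Simplified, hypothetical model of RobinNP-style bookkeeping for one
 * memory partition: the host hands free page numbers to the input handler,
 * the handler reports which page holds which L1Id, and an index maps L1Id
 * to page and fragment length so that data requests can be served. */
#include <stdint.h>
#include <stdio.h>

#define N_PAGES   8
#define PAGE_SIZE 2048           /* the typical 2 kByte page size quoted above */

struct used_page { uint32_t l1id; uint16_t page; uint16_t length; };

/* Free-page "FIFO" and L1Id index, kept trivially simple for the sketch. */
static uint16_t free_pages[N_PAGES];
static int      n_free = 0;
static struct used_page index_by_l1id[N_PAGES];
static int      n_indexed = 0;

static void release_page(uint16_t page) { free_pages[n_free++] = page; }

static int store_fragment(uint32_t l1id, uint16_t length)
{
    if (n_free == 0 || length > PAGE_SIZE)   /* multi-page fragments omitted */
        return -1;
    uint16_t page = free_pages[--n_free];
    /* In hardware the fragment data go to DDR3; here we only index them. */
    index_by_l1id[n_indexed++] = (struct used_page){ l1id, page, length };
    return 0;
}

static const struct used_page *lookup(uint32_t l1id)
{
    for (int i = 0; i < n_indexed; i++)
        if (index_by_l1id[i].l1id == l1id)
            return &index_by_l1id[i];
    return NULL;
}

int main(void)
{
    for (uint16_t p = 0; p < N_PAGES; p++)
        release_page(p);                     /* host provides free pages      */

    store_fragment(1001, 812);               /* two incoming fragments        */
    store_fragment(1002, 1500);

    const struct used_page *up = lookup(1002);   /* serve a data request      */
    if (up)
        printf("L1Id %u -> page %u, %u bytes\n",
               (unsigned)up->l1id, (unsigned)up->page, (unsigned)up->length);
    return 0;
}
```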
At the time of writing the installation of the new ROS has just been completed. Each of the 98 installed ROS PCs has a single CPU motherboard equipped with an Intel E5-1650v2 six-core 3.5 GHz CPU and 16 GB of memory. The CPU connects directly to 40 PCIe Gen3 lanes, 32 lanes are connected to a riser card with four 8 lane connectors. In most of the PCs two connectors are used for two C-RORCs, the other two for two dual-port 10 Gb Ethernet NICs with optical transceivers. The operating system of the PCs is Linux (SLC6). This configuration has been shown to be able to satisfy the 50% readout fraction requirement at 100 kHz first-level trigger accept rate with two C-RORCs with RobinNP firmware installed [12].
Conclusion & outlook
This paper presents the C-RORC, a PCIe-based FPGA read-out board, which will be used in two of the major LHC experiments for three applications in data taking for Run 2. All parties strongly profited from the collaboration. The significant increase in production volume with respect to a deployment restricted to ALICE led to cost savings per board for both experiments. Usage experience, implementation methods and partly even source code could be shared between the developers of the different applications, reducing the overall development time. All boards required for Run 2 have been successfully produced, tested, delivered and installed in the ALICE DAQ and HLT systems and in the ATLAS DAQ system. | 5,660.2 | 2015-02-13T00:00:00.000 | [
"Computer Science"
] |
The image of the father in Downton Abbey: manifestation of identity in virtuous actions
Abstract The British television series Downton Abbey (created by Julian Fellowes) could be considered a media phenomenon without precedent. Since its release in 2010 and throughout its six seasons, it reached a global audience in over 220 countries. At a time when the father figure is dissolving, fading, or missing in the content of television series, the worldwide success of Downton Abbey features the image of Lord Robert Crawley (Hugh Bonneville), Earl of Grantham, the father of an aristocratic family in England. This article analyzes the paternal image from its ontological and historical roots, considering truth, transcendence and goodness, as well as manifestations of paternal virtues. Given that virtue is not easily measured, the analysis will show that a father's identity can be reflected in a series of virtuous actions, as manifestations of core virtues. These virtues are the basis for a quantitative content analysis of the seven episodes of the first season, through the application of a methodological tool for audiovisual analysis.
Introduction
Genuine interest in unraveling the great human dramas has been, and continues to be, a fundamental pillar in both mythological and historical dramas, the same ones that have been progressively brought to the big screen. At the beginning of the 21st century, this interest has successfully migrated to the new world of television series, perhaps because of the rapid advance of telecommunication technology in recent years. By securing a defined captive market, entertainment companies such as Sony Pictures Television, Netflix and Masterpiece have managed to captivate a new group of viewers who eagerly await each new season or episode of their favorite series. The British series Downton Abbey (2010-2015) was no exception to this cultural phenomenon; it enjoyed an international audience with a presence in 220 countries during its six ambitious seasons. Found worthy of the most prestigious international awards of its kind, the Emmy Awards and the Golden Globes, Downton Abbey is considered the most expensive British series (per minute) of its time and the most watched in the history of North American television. But what is the reason for this overwhelming success? What relevance can be found in a series that recounts the life of an English aristocratic family at the beginning of the 20th century?
Lord Robert Crawley (brilliantly played by Hugh Bonneville) is the current heir to the noble title of Earl of Grantham and owner of the majestic castle known as Downton Abbey, where he resides with his wife Cora, and daughters Mary, Edith and Sybil, as well as the staff on duty, captained by Butler Charles Carson (Jim Carter). The plot revolves around the fate of the assets (not only material) that the Crawley family enjoys in the absence of an immediate heir, and in the context of the social and cultural developments that the turn of the century brought with it. This is coupled with historical events of great significance, such as the sinking of the Titanic (1912), the start of the First World War (1914) and the first outlines of suffrage feminism. However, the drama on display goes much further: The backdrop of the plot of Downton Abbey is the paradigm shift or worldview. Downton Abbey is a tremendously effective way of telling what happens when you go from an orderly and hierarchical world, asymmetric in personal relationships, to a subjective world dependent on the will of each person, symmetrical in personal relationships. It tells of the passage from a paternal world to one of universal brotherhood, but without paternity. (Assirio 2015, 151-159) As we will see later, the paternal functions exercised by Lord Robert Crawley are manifested in Downton Abbey through several acts of virtue. These suggest a deep analysis of his role as the father of a British aristocratic family in relation to his wife, daughters, extended family and, above all, service personnel, starting from a plotsimilar to the classic series Upstairs, Downstairsthat so far has been little explored in audiovisual media (Esquire 2017). At a time marked by paternal absence (now maternal absence as well) in the domestic family environment (Hurtado 2014), television series have used this drama in their favor (Gonz alez Gaitano 2012;Fern andez Aguinaco 2013;Fuster 2015). This is reason enough for future research to continue systematic studies of the current paternal image in other equally successful, although antagonistic, audiovisual genres such as the series Breaking Bad, Game of Thrones, Mad Men, Billions, among others. But before continuing, let us briefly sketch the ideological path that has put the paternal image into question in the past seventy years. embedded in family life: (1) the procreation and upbringing of his children as 'rational and free' beings; (2) his own technical preparation that allows him to manage his financial agenda in his chosen profession; (3) exemption from those burdensome domestic functions, typical of the traditional family model, which are delegated exclusively to the mother. In this way, what began to be known as the liberal family system, started to occupy a privileged place among the 'marriage boomers' and the 'baby boomers', both typical of the postwar period, particularly the 50's and 60's. Sociologists of the stature of Talcott Parsons, from Harvard University, were optimistic and supportive of the new Western family life (Parsons 1951, 116-119). Even William Goode wrote in 1963 the book World Revolution and Family Patterns, in which he argued that the 'conjugal family', based on Lockean liberalism and in consonance with Protestant asceticism, would be the cultural norm that would spread both in the industrialized West and in the underdeveloped or decolonizing countries (Goode 1963).
Half a century later, the Lockean model is now under observation. In other words, the liberal family system can be considered the cause of profound cultural disruption. Among other things, we see today low fertility rates and depopulation; low marriage rate; rise in de facto marriages; an increase in what was known in the past as 'illegitimate births'; a high rate of orphans, unemployment and unmarried status; a total triumph of sexual radicalism, evidenced in the new contraceptive 'culture'; a high rate of abortions and infanticides. In sum, we see a new totalitarianism that seeks to impose gender ideology at all costs, equal marriage 'culture', transgenderism (especially in children) and radical feminism (Andrews and Hurtado 2020, 127-139). This cultural change has come at an astonishing speed, particularly in the past 10 years. Why this change? What social or political forces is causing it? What further consequences are expected in the new millennium? The answers to these questions are based on three contradictions implicitly present in liberal ideology itself: First: a deficient understanding of the concept of male human nature. John Locke certainly had a dark vision of man, whose most eager appetites (he thought) were favored in his desire for 'self-preservation' and thus, in his sexual 'self-indulgence' (Locke 1965;Yenor 2011, 20-23). The aspiration to have a large family, then, comes only from the desire to satisfy the sexual impulse, and from the Darwinian struggle to perpetuate one's existence. The institutions that surrounded the old patriarchyin that sensemanaged to keep the males close to the family home in exchange for complete authoritydespotic at timesover wife and children, as well as community and political environment. That was the male's main civilizational motivation. Aware of this, the new 'liberal project' would not be compatible with this lifestyle at a political level, due to its strong monarchical orientation (scandalous in the eyes of modern thinkers). Locke considered it necessary to put an end to this patriarchal rationality, and therefore, to the family style that sustained it. The family home would then represent the limit to the father's authority as a parent. But outside the home, his role as an individual citizen would radically evolve, mostly in the first industrial revolution, commonly located at the end of the 18th century, to what we now know as the image of the breadwinning father.
However, this 'soft patriarch' image showed its vulnerability in no time. For Locke, women have a natural instinct for attachment and protection towards children in the domestic environment. The man, on the other hand, once exclusively incited to exercise his authority in this area, would realize an irrefutable truth: a woman knows how to exercise this authority in the domestic sphere in a natural way. Therefore, men had to accept these living conditions as the basic premise of the new family functioning. However, the same liberal ideology affirmed over time that the price that women paid in this model was very high. For this reason, in order to achieve the desired equality that liberalism promised to all individuals, women would have to overcome their maternal 'instincts', weakening their emotional ties (towards the husband and children) in search of their own self-development. It is here that the contractual 'idea' of marriage breaks down, for as women gave up their innate maternal purpose, the male's paternal purpose slowly weakened. In that sense, John Stuart Mill was one of the first thinkers to describe what is now known as the feminist imperative (Mill 1869 [2008]) in the mid-19th century. Its social and cultural acceptance would depend solely on what he himself called 'fashion'.
Second: the limited functions of the liberal family. The liberal family system promoted by Locke was based on a 'voluntary pact' between men and women that placed reconciliation of interests and properties in common in the center of their dialogue. This was not far from patriarchal rationality, in which the economic life of men and women was completely merged, characterized by developing ad intra a wide range of activities productive in themselves (Carlson and Hurtado 2019, 79-95). Hence its wellknown economic self-sufficiency. In the liberal-Lockean scheme, a similar balance was achieved, although with some nuances. The most relevant novelty, in that sense, was the dispossession of a considerable portion of economic autonomy, property being the first affected. That is, since the primary purpose of the liberal family encompassed only the procreation and socialization of young childrentheir upbringing as free and rational individualsto speak of a robust domestic economy would be absurd. Although the implications of this gradual weakening process were not perceived by Locke, it is clear that his liberal family model would have a central place in the future capitalist industrial order. Already in the 1950s, Parsons himselfchanneling Lockecelebrated the emergence of this new family model due to its 'specialized' nature. At the same time, female suffrage (or first-generation feminism) contributed to the progressive transformation of the traditional family, from an economic-political unit into a consumer one.
Added to that, the gradual disappearance of the ties to the extended family, and with it, the loss of its centrality as a principle of community unity in general, allowed the 'nuclear family', as it came to be called, to concentrate its efforts on two basic and irreducible functions: (1) procreation and socialization of children; (2) the emotional stability of adults. Regardless of the internal logic of these two, Parsons was doubtful that these functions would prevail, perhaps because of their strong psychological orientation, as summarized by Philip Abbott in his book The Family on Trial (Abbott 1981).
Third: its dependence on coercive social engineering. Having said all this, in order to reshape the political order, Locke concluded that the natural family would have to be redesigned. Human beings are not born 'rational and free' but have to become so with the help of their environment. For this reason, the English thinker invented the previously mentioned 'nuclear family', whose main function would be to model the new 'individual', subject to the new liberal order. In its founding stage, this task was relatively similar to the later efforts that tried to mold the 'Soviet man', the 'fascist man', even the 'feminist woman', all of them of an indisputable totalitarian nature. But it was Mill who took the first step in that direction, postulating as a desirable goal for every infant to become 'free and rational', always in his constant search for his own 'individual development', in pursuit of a supposed 'perfect equality'. Once this new model of the 'egalitarian family' was proposed, Mill promoted it as the only desirable one, and since it does not occur 'naturally', it would have to be imposed, coercively if necessary (Mill 1869(Mill [2008).
Eventually, it was the liberal philosopher T.H. Green who took the next step in this discursive line, suggesting that the 'institutions of civil life', that is, the state, should 'make it possible for a man to be freely determined by the idea of a possible self-satisfaction' (Green 1967, 32-33). This is confusing language that elevates the concept of 'self-realization' to the degree of 'central liberal principle'. For his part, the American philosopher John Rawls took this argument to the next level with his search for the famous 'distributive justice', or its 'fair opportunities'. In that sense, the search for selfdiscovery of each individual failed in the context of the traditional family, but not within the framework of the new liberal family, at least in theory. Perhaps for this reason, Rawls decreed his famous phrase: 'Will the family be abolished?' (Rawls 1971, 511) Almost sixty years after it was formulated, the idea of equal opportunities in the family context has led us in this direction, together with the clear weakening of the paternal image within and without the home.
The quest for the image of the father: objectives and hypotheses
The main objective of this paper is to analyze the image of the father seen in Lord Robert Crawley, Earl of Grantham in the series Downton Abbey. Specifically, the active presence of five fundamental virtues will be analyzed, namely, self-control, joy, work (or industriousness), generosity and responsibility, as seen in the character's actions in front of his relatives (see section 4). It is to be expected that Lord Crawley's virtuous behavior will be considerable and worthy of imitation, in the historical moment in which the figure of the father seems to be very present in the acclaimed television series (Fuster 2015). For this reason, we start from the hypothesis that the father analyzed in this work manifests, and therefore promotes, one or more virtues in each scene of the series.
We will develop a theoretical framework that explains the paternal image, understood from the virtuous acts of the father, but, before doing so, it is necessary to reflect on the essence of fatherhood from an anthropological perspective. To do this, one has to go back to the concept of the human person, because the father of a family is a male, but it is also true that a male is first a person. Therefore, the fatherhood of a man cannot be separated from his condition as a human person, that is, from his personal identity and his historical identity.
Personal identity: a 'who' in front of others
Paternal identity initiates in personal identity, placing us in the field of ontological identity, which refers to the 'given', the 'original' or the 'innate' (Mart ınez Priego, Anaya Hamue, and Salgado 2014, 75-85). We are talking about the most proper and specific nucleus of each human being: his who: The person is unique and unrepeatable, because he is someone; not just a what, but a who. The person is the answer to the question who are you? Being a person immediately means to be a who, and being a who means to be with a name. Thus, a human is an animal that uses a proper name, because the name designates the person. (Yepes and Aranguren 2006, 64-65) In effect, the human person 'receives' existence, since he does not make his own self. In that sense, being a person is being 'a who' that cannot be understood without some notes that characterize his identity. Yepes and Aranguren (2006, 62-70) listed five constituent notes of personal identity showing a unitarily personal being, namely, intimacy, manifestation, dialogue, donation and freedom: 1. Intimacy: refers to the interior space of the person, in which we find ourselves through reflection. It is 'the way of being that does not need to assimilate external elements or possess them' (Polo 1999, 157). From there, the human being is capable of self-possession and, therefore, capable of opening up to others and bringing the novelty of their own existence (Ricoeur 2006; Wojtyla 2008). 2. Manifestation: when we speak of the human capacity to 'open up' to others, we are referring to the concrete way in which the person 'manifests' (Polo 1999, 159), which is expressed through the body in terms of language and human acts. These allow human beings to manifest 'who they are, actively reveal their unique and personal identity and make their appearance in the human world' (Arendt 2005, 208). 3. Dialogue: personal identity is dialogical. That is, each personal being is presented as a 'who' before another 'who' with whom it shares what it 'is'. It supposes being recognized by the other as an irreplaceable who, since we are 'in front of others' in a radical way, a fact from which the richness of the relationship starts (Alvira 2000). 4. Donation: but your relationship is not enough to build human life. It is necessary to achieve more: it is necessary to give oneself to others, which presupposes knowing oneself as a gift, since the gift, if it is personal, 'leads [ … ] to being a gift with respect to the one who gives, and also to be one with respect to the one who accepts' (Sell es 2011, 614). For this reason, a person can only give himselfdonate himselfif he knows himself as a gift, if he accepts the gift that was given to him at his origin. 5. Freedom: the 'gift of oneself' is impossible without freedom. Personal freedom is unlimited because, although it is typical of a finite being, it can always be expanded, showing that the human person is, therefore, unrestricted openness (Sell es 2011), with the ability to control what he wants (Ricoeur 2006). The free person updates his identity through his actions in front of others, from his freedom, which makes his personal growth possible.
Historical identity: time and space
The possible openness to others, personal and free, opens up a space to another fundamental dimension for the construction of personal identity: historical identity (Mart ınez Priego, Anaya Hamue, and Salgado 2014). It unfolds in a specific space and time, in the midst of a network of specific personal relationships, with a specific genetic code, a language, and an evolving social conscience. With this, it can be affirmed that personal identity depends on historical identity, since this 'can only be articulated in the temporal dimension of human existence' (Ricoeur 2006, 107). This opens up a specific number of possibilities for participation in the construction of the world of human beings. Indeed, without being authors of our own existence, we can be co-participants in human reality (Hurtado 2014), endowing it with a 'why' that refers to our own action (Ricoeur 2006). But the possibility of human co-participation in the construction of one's own historical and cultural reality presupposes a certain initial in-identity (Innerarity 1993, 371-174), which gives reason to the possibility of the habit that tends to virtue. In other words, 'by being-in-time, man lives in an installation that changes with its own passing and in which man projects and carries out his own life' (Yepes and Aranguren 2006, 72). Now, the notion of time reminds us that the human being had a past and therefore a tradition the memory of which remains (MacIntyre 1981). Without a memory of the past, it is not possible to project the future, since the characteristic of intelligence is to imagine what good would be like, starting from the known. 'Life is an operation that is carried out forward' (Mar ıas 1973, 91). With this, it is not intended to downplay the present: 'men want to stay [ … ] their passing never passes [ … ], rescuing time, reliving what is true, these are constants in human behavior' (Yepes and Aranguren 2006, 73).
For its part, space tells us about the concrete place 'where I appear to others as others appear to me, where men do not merely exist as other living or inanimate things, but make their appearance explicitly' (Arendt 2005, 225). It is here where language and the human act gain their respective relevance (Aguilar Rocha 2007), creating the 'space'not only physicalwhich humans tend to inhabit (Alvira 2020, 35-50).
Paternal image: virtue in front of the child
Intimacy, manifestation, dialogue, donation and freedom are the notes that, insofar as they belong to personal identity, can be considered as characteristic of the longed-for paternal identity, from its historical dimension. In other words, we are talking about their personal deployment in a specific space and time, in a specific context in which these notes are manifested from the person to his peers. Who will these 'peers' be? We are talking about the family home (Hurtado and Galindo 2019), constituted in principle by the father, the mother and the children, because let us remember that the child is not possible without the father, but neither is the father possible without the mother (von Hildebrand 2019). Now, returning to the parent-child relationship, it is important to point out that the persons involved need each other reciprocally (Polaino-Lorente 1995, 295-316). The personal identity of the child is identified with the personal identity of the parent, since every human being, 'never ceases to be a child: he can become a parent, but being a child constitutes him (Polo 1995, 324). 'The reality of the being of the child 'can' get to the depths, responding forcefully to the question: who am I? one can answer: 'I am the parent of my child'. The child, for his part, is justified before others when he affirms: 'I am my father's son'. However, both answers present a certain distance in the existential level, since the responsibility assumed by the father towards the son has a certain hint of one-sidedness. In other words, the father is built in front of the son, which can generate either security or insecurity in both. The son has to abandon himself to the arms of his father, trusting his personal growth, even when the father is not sure of his own identity.
Perhaps in this dynamic lies the transcendent dimension of parenthood, which overcomes its biological roots to give it a profound existential meaning. Indeed, all human beings come into the world one through another (Santamar ıa 2000). We know, or should know, whose children we are. Awareness of one's own paternity generates (in a man) a recognition of his own being 'in the child', giving meaning to his own existence aiming towards personal improvement. It is a new way of overcoming the limits of his personality to remain in the child, giving a greater scope to what one 'is' and what one 'can' become (Arendt 2005). In other words, the search for the good of the fatherthe search for virtuetranslates into the happiness of the child, and therefore that of the father. With this, the exclusiveness of the relationship between father and son can be affirmed, one and only. One is always the father of one's son, not according to time or circumstance, from which it follows that fatherhood does not depend on the will of the father, but mainly on the need of the child (Malo 2015), who accepts his filiation by recognizing that he needs the father to live (Assirio 2013).
***
As we have seen, both fatherhood and filiation imply a permanent relationship. This does not change when the son leaves the family home in search of his own life, or when he reneges on his parents because of mistakes made in the past. Fatherhood, therefore, refers to the original fact of one's own existence, which is rooted in the life of the parents themselves. Consequently, the foundation of fatherhood finds its raison d'être in the identity of the sonof his whowhich is confirmed in its real origin that it is always relative to the life of his parents.
[This] relationship, insofar as it is constitutive, foundational and original, inevitably refers to the origin of one's own being, enlivening itself in its roots, challenging man from them. In the lives of the father and the child, paternity and filiation have a vocation of eternity, consequently, they are stronger than the death of man which they always survive. (Polaino-Lorente 1995, 303) To this must be added the notion of the good that develops in the realm of human action, the concrete good that must be realized in the virtuous act. The goodness of the father, his virtue, has at its end perfection itself, the same that must overflow into the perfection of the son: this is donation. Through it, paternal love returns to the childgratuitously and benevolentlythe gift of himself, his own existence. The father's love becomes 'donative' love that he himself recovers in a broad and full way in the life of the son, a fact that is also his greatest gift: to be co-creator of his son (Polaino-Lorente 1995, 295-316;Hurtado 2011).
The virtue of the father has a 'multiplier' effect on the son, because they both know that neither of them is capable of absolute goodness or perfection. All human growth is possible thanks to this original 'indetermination', justifying what Rodr ıguez Luño calls 'the habit of good choice ' (2006, 214). However, the conquest of virtue is not possible alone: it takes an intimate environment that generates the confidence necessary to achieve any educational purpose. We are talking about the family home, which is also the foundation of the community spirit that enlivens the social mechanism (Athi e and Hurtado 2020). In effect, the family and the community are necessary for the human person to acquire the capacity for discernment and choice that must be incorporated as 'good operating habits' that bring about the achievement of vital fullness (Naval 1997, 761-778). Therefore, it is in the midst of domestic family life where the hard core of one's own personality and the development of virtues that build one's identity are constructed (Hurtado and Galindo 2019).
Content analysis on some virtuous actions in the father figure:
Downton Abbey (1st season) Virtue, understood as a good habit that tends to perfection, is not easily measurable. Therefore, we will refer to the virtuous actions performed by the character in question in front of others. Virtue, as a human act, has a qualitative value that is articulated as the 'best-worst' binomial. For the purposes of our research, it is interesting to see how the paternal identity of Lord Robert Crawley is manifested in his virtuous acts, namely, self-control, joy, work (or industriousness), generosity and responsibility, which represent a type of human disposition that points towards a full life of action. (Corominas and Alc azar 2014). The structure of the analysis is not intended to be absolute in itself, but rather a practical and orderly approach to observing the attitude of the father in certain circumstances and in front of specific people, that may serve as a 'spearhead' for future analyses of other parental images in cinema and in television series.
Methodology
The methodology used in this research is audiovisual-content analysis, based on previous experiences applied to the media in general (Porto Pedrosa 2013, 5-79) and television series in particular (V azquez 2011), in order to show the values transmitted by television products. Several authors agree that television is capable of modeling people's behavior (Samaniego, Pascual, and Navarro 2007, 307-328)our strengths and our weaknessesand influencing it (Del R ıo, Alvarez, and Del R ıo 2004). Downton Abbey has been studied through approaches that range from a comparative analysis to historical series from other countries (S anchez Burdiel 2014) to the broadcasting of the English character and the nostalgia it fosters (Baena and Byker 2015). In any case, we have not seen a published analysis that addresses the manifestation of virtuous actions through content. Talking about Downton Abbey as a television series is not entirely accurate, since today it can be streamed through Amazon Prime Video and its consumption is no longer strictly through television. However, this particularity does not interfere with our study, which does not analyze the perception of this audiovisual product, but its content. In this sense, this research has a clear quantitative application and has certain advantages: While qualitative methods, by using natural language, are better to gain access to other people's world of life in a short time, quantitative methods are better to conduct positive science, that is, they allow clear, rigorous, and reliable data collection and allow empirical hypotheses to be tested in a logically consistent way. (Sierra Bravo 1998, 25) To be consistent with the essence of the selected technique and, making use of experiences such as those mentioned, we have carried out a content analysis whose basic unit has been each of the scenes in which the father, Lord Robert Crawley, Earl of Grantham, appears in the first season of the series. The scene is the most important element of a script (Field 2005, 162) and we can define it as the unit of dramatic action (that is, endowed with an approach, a middle and an outcome), determined by a criterion of spatial-temporal location. Thus, every time space or time changesor both variablesin the film or series content, the scene will be changing (S anchez-Escalonilla 2014, 188).
Based on these criteria, 318 scenes were counted in the first season of the series: 48 in the first episode, 36 in the second, 42 in the third, 41 in the fourth, 46 in the fifth, 42 in the sixth and 63 in the seventh. A first count established by the authors during the viewing for the analysis was adjusted to conform to the number of scenes counted by Amazon Prime Video, the streaming platform where the series is hosted.
To each of these units we have applied a content analysis protocol accompanied by the complete transcription of the scene and composed of the following analysis categories: 1. Unit number, 2. Episode number, 3. Description of the scene, 4. Duration of the scene, 5. Name of each one of the characters that appears with the protagonist, 6. Group and subgroup of each of the characters (for example, Molesley, Matthew's butler; his father; and Charles Grigg, Carson's ex-partner who goes to the house to blackmail him), 7. Attitude of each of the characters, 8. Motivation of each of the characters, 9. Group belonging to the character with which the greatest interaction occurs, 10. Manifestation of virtues in the figure of the father: a. Yes, b. No, 11. Core virtues that are manifested in virtuous actions in the figure of the father.
In this category one or more virtuous actions may be selected. a. Self-mastery (core virtue): 'strength to open oneself to the outside world of things and people' (Corominas and Alc azar 2014, 25). It can manifest itself through these virtuous actions: i. Self-control: serenity seeking to understand information and acquiring opinions and convictions, as well as the ability to control impulses. 1 ii. Self-knowledge: process by which the human person understands change, accepts his progress and limitations and opens up to relationships with others (Delors 2013). iii. Humility: recognition of 'one's own inadequacies, qualities and capacities, and [using them] to do good without attracting attention or requiring the applause of others' (Isaacs 2003, 361 (Isaacs 2003, 165). b. Joy (core virtue), understood as 'the synthesis of man's aspirations [ … ] fruit of life according to virtue' (Corominas and Alc azar 2014, 23) and, therefore, visible in these virtuous actions: i. Optimism: trusting in one's own qualities and those of others; distinguishing the positive from possibilities and obstacles and facing them with 'sportsmanship and joy' (Isaacs 2003, 81).
ii. Positive attitude: always trying to see the good in things, even when there are difficulties, without falsifying or idealizing reality (Corominas and Alc azar 2014, 23), iii. Peace: on a personal level, it is an inner state devoid of negative feelings that supposes tranquility with oneself and with others. 3 c. Work or industriousness: core virtue that shows 'the external projection of the person who uses things and perfects them according to their needs' (Corominas and Alc azar 2014, 24). The demand to work well is manifested in one or more of these virtuous actions: i. Commitment to a job well done: industriousness that is typical of someone who 'diligently performs the activities necessary to reach [ … ] maturity [ … ] in professional work and in the fulfillment of other duties' (Isaacs 2003, 253). ii. Effort: 'intimate dedication, which goes beyond duty, that the subject makes in the achievement of something that interests him [ … ]; the desire to carry out well-done work and to, if necessary, leave their mark' (Benavente 2003, 19). iii. Strength: to resist and endure inconvenience and to give in with courage 'to overcome difficulties and to undertake great efforts' (Isaacs 2003, 63). d. Generosity: core virtue that supposes the culmination of human relationships and 'consists not only of giving things but of giving oneself' (Corominas and Alz azar 2014, 123). Hence, the virtuous actions in which it is manifested are these: i. Loyalty: accepting the implicit links in adhering to others and reinforcing and protecting 'over time, the set of values they represent' (Isaacs 2003, 235). ii. Fidelity: a 'virtue that allows one to keep what one has promised' (Tom as de Aquino II-II, 2010, q. 110, a. 3, ad. 5). It is the congruence of what is said with what is done; it rests on the honesty that should reign among men (Tom as de Aquino II-II 2010, q. 88, a. 3, ad. 1). iii. Appreciation: 'a quality linked to maturity; it is the recognition of the value of what someone has done for another, and allows establishing strong ties between people' (Gom a 2013). iv. Forgiveness: a 'fundamental attitude that makes the person be inclined [ … ] to the "cancellation" of the "balance of guilt" of an offender and to affirm him as a person' (Crespo 2004, 129). It has the character of a gift. v. Respect: the habit of considering the dignity of people, as unique and unrepeatable beings, with intelligence, will, freedom and capacity to love, as well as their rights according to their condition and circumstances (Corominas and Alz azar 2014, 125). vi. Understanding: the desire to help other people according to one's circumstances; to understand them and see, from their point of view, the situation they face (Corominas and Alz azar 2014, 126). e. Responsibility: core virtue; reflection of the personal maturity of those who are capable of living their freedom, and who 'assume the consequences of their intentional acts, the result of the decisions they make or accept' (Isaacs 2003, 131). Thus, it can manifest itself in the following virtuous actions: i. Commitment to the truth: attitude of the person who seeks to do justice to reality (Spaemann 1998). ii. Coherence: knowing what one's own objectives are, why one acts in one way or another, the connection between more profound desires and what one actually does. 4 iii. Flexibility: adapting 'behavior with agility to the circumstances of each person or situation, without abandoning the criteria of personal action' (Isaacs 2003, 219).
Final Comments: Before closing this section, it is convenient to emphasize the unique nature of this study. Although the topic analyzed, human virtue, belongs to the Humanities, it is approached in this paper from the perspective of communication and from a quantitative methodology. The fact that the initial theoretical-humanistic section now gives way to a more quantitative one could give the impression of incoherence; However, we consider this distinction necessary in order to delve into such a profound issue and, ultimately, to advance the investigation of television productions beyond the merely communicative aspects that they offer.
Data analysis
In total, the first season of Downton Abbey contains seven episodes, which in turn, as we mentioned in the methodological section, are subdivided into 318 scenes (48 in the first episode, 36 in the second, 42 in the third, 41 in the fourth, 46 in the fifth, 42 in the sixth and 63 in the seventh and last episode of the season). In those seven episodes, Lord Robert Crawley, Earl of Grantham, appears in 22 scenes in the first episode; in 9 in the second; in 7 in the third; in 8 in the fourth; in 9 in the fifth; in 9 in the sixth; and in 16 in the seventh. In total, the father in the series appears in 80 scenes throughout the first season of Downton Abbey. The moments in which the Earl of Grantham, as a father, manifests each of the virtues, taking into account that in our analysis we have considered that more than one virtue could appear in the same scene, can be seen in Graph 1.
Virtuous actions related to self-control
Of all the core virtues analyzed, the most represented is self-control, which appears in 75 of the 80 total scenes. Together with self-control, of all the virtuous actions analyzed (temperance, self-knowledge, humility, simplicity, personal balance, serenity, truthfulness and sincerity), the one that the father most often shows is serenity. This is especially evident in the scenes where he interacts with Bates, his valet, even when Robert Crawley wants to know more about his employee's past and Bates refuses to reveal that secret information (episode 7: scene 8).
Virtuous actions related to generosity
This is followed by generosity, with a total of 48 manifestations in the 80 scenes in which the father appears in the entire first season. Within the context of generosity, understanding appears as the most represented virtuous action, something that Lord Grantham manifests throughout the season with his family and also with the servants. An example of this last interaction is found in his understanding towards Bates, visible in multiple scenes, and towards other employees when they have performance or ability problems (see Lord Grantham in the face of Mrs. Patmore's blindness problem in scene 7 of the last episode). This is seen even when there is a chance of losing them because they find a better job (see episode 7: scene 12). In total, understanding is manifested in the figure of this protagonist of Downton Abbey in 27 of the 48 scenes in which Lord Robert shows generosity.
Respect is worth mentioning, as a second virtuous action related to generosity, in the Earl's treatment of Matthew, his distant cousin, whom he treats as an equal even though Matthew comes from a lower social stratum than his own. This can be seen in the continuous allusions to Matthew having a profession, something inappropriate and unusual for the nobility at that time. In this sense, the example that the father gives to his daughters is significant. Little by little, they overlook this social difference when they see how their father integrates his distant cousin. In all, this protagonist of Downton Abbey shows respect in 25 of the 48 scenes in which he expresses his generosity.
Virtuous actions related to responsibility
Graph 1. Number of times the father manifests a virtue, graphed on the total number of scenes. Source: our own.
As the third most represented core virtue in the figure of the father, we find responsibility; it appears in 44 of the 80 total scenes of the first season. Responsibility can be manifested in three virtuous actions: through commitment to the truth, which constitutes half of the scenes starring the father in this category; coherence (43.18% of the scenes); and flexibility, which is manifested in 31.81% of the scenes.
The type of virtuous action linked to responsibility that stands out the most in Lord Robert is commitment to the truth, with a foundation in honor. This is demonstrated in his way of facing difficult situations, from his commitment to confessing to Mary that Patrick died in the sinking of the Titanic (episode 1: scene 3) to losing his fortune because of having no male heir.
Virtuous actions related to joy
The fourth most common virtue is joy, found in 27 of the 80 total scenes in which the father appears during the first season. When referring to joy, which is manifested in virtuous actions that denote optimism, positive attitude and peace, we note that in Lord Grantham all three are manifested with similar percentages: optimism and positive attitude, each in 35% of scenes, and peace in 30%. Despite difficult situations in the political, economic and family context, Lord Grantham maintains, above all, a positive disposition. An example of this appears in scene number 16 of episode 1, where Lord Grantham makes Carson see that having Bates working for Downton is not as bad as he believes, and allows Bates to continue with his job, instead of firing him and getting carried away by Carson's attitude towards the valet. We can also observe these virtuous actions when Lord Grantham learns of his wife's unexpected pregnancy, in scene 3 of episode 7. In it, the Earl is happy despite the uncertainty of the moment, related to the risks of a pregnancy at his wife's advanced age, and the economic instability that the family is going through. In general terms, we observe that Lord Grantham remains constant in virtuous actions that show the virtue of joy throughout the first season, except in specific cases, such as the death of the baby, or before the imminent start of the war.
Virtuous actions related to work or industriousness
The virtuous actions that appear the least are those related to the core virtue of work or industriousness, which are expressed in 21 of the 80 total scenes. The three main virtuous actions showing industriousness are: commitment to a job well done, effort and strength. Of them, the one that is repeated the most is strength, which appears in 47% of the scenes in which we see virtuous actions in Lord Grantham, referring to work. Although Lord Grantham is not a typical worker in the contemporary sense of the term, he must remain strong to preserve not only his assets, but also the calm that allows him to develop the best strategy so as not to lose the family fortune.
One aspect that can help clarify the perception of work, as a paid activity, that the protagonist of Downton Abbey has, is the negative conception that in the socio-historical context of the series was held about the simple fact of having a profession. The Earl of Grantham himself shows this in scene 7 of episode 1 where, when speaking with George Murray, they both discuss Matthew Crawley's profession, as a lawyer, and that of his father, as a doctor. Both Murray and Robert Crawley are dismissive of those professions or of any job in general. The same happens at a dinner at Downton, with Matthew and Isobel (episode 2: scene 3), in which they begin to get to know each other and the most shocking detail for the Crawleys is that Matthew works as a lawyer and also divides his week into business days and weekends. The position of the aristocracy regarding work as an activity is clear: it is not typical of their class.
Core virtues and interactions
We mentioned earlier that virtues do not manifest themselves except in interaction with others. Therefore, part of our study focuses on analyzing how the protagonist manifests each virtue in his interactions with different groups. To do this, we have carried out the necessary cross tabulation of variables to obtain, on the one hand, whom the protagonist mainly interacts with in each scene and, on the other, which virtues are manifested by the Earl of Grantham before the characters of each group.
In the first place, if we identify each scene with an interaction of the protagonist, we see that the total number of interactions adds up to 80. Of these, 30 (37.5%) occur with the members of his family (wife and/or daughters) who live at Downton Abbey; 14 (17.5%) with relatives living outside the home (Violet, his mother, Matthew, his distant cousin and Matthew's mother Isobel) and 11 (13.75%) with other friends and family acquaintances. In total, in 55 of the 80 scenes starring Lord Robert (68.75%), he appears interacting with people from his family circle, friends or acquaintances who visit him for any reason. The same character interacts with his servants on 23 occasions (28.75%) and with one of their friends or acquaintances in two (2.5%). While he interacts in 55 scenes with his family, friends and acquaintances, he does so on 25 occasions (31.25%) with the servants and their friends or acquaintances (Graph 2).
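As a worked check of this grouping, using only the scene counts quoted above, the two aggregate shares follow directly:

```latex
% Group 1: family at Downton (30) + relatives outside the home (14) + friends/acquaintances (11)
\frac{30 + 14 + 11}{80} = \frac{55}{80} = 68.75\%
% Group 2: servants (23) + friends/acquaintances of the servants (2)
\frac{23 + 2}{80} = \frac{25}{80} = 31.25\%
```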
Graph 2. Interactions of the protagonist. Source: our own.
When carrying out a review of the core virtues that are manifested in virtuous actions of the protagonist in each interaction, we see self-control in all the scenes in which he appears, except in five. These are the second-to-last scene of the first episode, in which the Earl of Grantham expresses to Bates his anger at the Duke of Crowborough in a single comment; the third scene of the second episode, in which his daughter Mary introduces him to Kemal Pamuk, with whom she is in love, and Lord
Grantham makes some humorous comment; and the second, the fifth and the sixth scenes of the sixth episode. In the second scene of the sixth episode, the protagonist reprimands his daughter Sybil for going to meetings of the liberals; in the fifth, he indirectly tells her that he is aware that some servants are conspiring with her to attend the meetings behind his back; and, in the sixth, Robert Crawley, like his wife and daughter Mary, shows astonishment that his daughter Edith is invited by Sir Anthony Strallan to a concert.
Overall, we can say that self-control is manifested by the protagonist in 93.33% of the scenes in which he interacts with his wife and daughters; in 91.30% of the interactions with their servants; in 90.90% of the scenes in which he interacts with his friends or acquaintances; and in 100% of those in which he interacts with friends or acquaintances of the servants (Graph 3).
Regarding generosity, the protagonist manifests virtuous acts related to this core virtue in 60% of the scenes. Specifically, as shown in the chart, our analysis concludes that Robert shows generosity in 66.66% of the interactions with his wife or daughters; in 92.85% of the interactions with the relatives who live outside the home; and in 18.18% of those with his friends or acquaintances. However, he is generous in 52.17% of the interactions with his servants and in 50% of those scenes in which he interacts with their friends or acquaintances. If we group the people who interact with the father into either family and friends or servants and their friends, we obtain that, while with his family and friends he shows generosity in 64.40% of his interactions, with the servants and their friends he shows this core virtue in 52% of the scenes (Graph 4).
Graph 3. Manifestation of self-control by the father and with whom he interacts. Source: our own.
Third, we find that virtuous actions in relation to responsibility appear in 55% of the scenes. If we look at the different recipients of Lord Grantham's interactions, we find that he manifests this core virtue in two thirds (66.66%) of the scenes in which he interacts with his wife or daughters (the group of relatives who live with him at Downton Abbey), while the percentage increases to 71.42% in the group of relatives who live outside the home. With his friends or acquaintances, Lord Robert shows responsibility in 36.36% of the scenes. The results are more homogeneous in the group of servants: with his servants, the Earl shows responsibility in 60.87% of the interactions and, with their friends, in 50%. By establishing the division of Group 1 (relatives, friends and acquaintances of the protagonist) on the one hand, and Group 2 (servants and their friends or acquaintances) on the other, we find that the protagonist shows responsibility in 52.72% of the interactions with Group 1, and in 60% of the scenes in which he interacts with Group 2 (Graph 5).
Graph 4. Manifestation of generosity by the father and with whom he interacts. Source: our own.
Graph 5. Manifestation of responsibility by the father, divided according to those with whom he interacts. Source: our own.
As for virtuous actions derived from joy, these are manifested in 33.75% of the total scenes (27 out of 80). This protagonist of Downton expresses joy in 30% of the interactions with his closest family; in 50% of interactions with his family members who do not live at Downton Abbey; and in 27.27% of those with his friends and acquaintances. Regarding servants, Robert manifests joy in 30.43% of the interactions with his servants and in 50% of the scenes in which he interacts with their friends or acquaintances. Overall, the protagonist expresses joy in 34.54% of the scenes with Group 1 and in 32% of the interactions with Group 2 (Graph 6).
Graph 6. Manifestation of joy by the father, divided according to those with whom he interacts. Source: our own.
Graph 7. Demonstration of work or industriousness by the father, divided according to those with whom he interacts. Source: our own.
The last of the five virtues analyzed in order of frequency of appearance is work or industriousness, with its virtuous actions manifested in 23.75% of the protagonist's interactions. If we break that number down into the target groups of his interactions, we find that, with his family members living with him at Downton, work appears in 26.66% of interactions; with relatives living outside of Downton, in 42.85%; and with friends or acquaintances, in 18.18%. In interactions with the servants, Robert manifests this core virtue in 13.04% of them. In this case, all interactions with the servants are scenes where only the servants appear; in none of them do any of the friends or acquaintances of those servants appear. If we compare the group of family, friends and acquaintances with that of servants and their friends and acquaintances, our analysis shows that the Earl manifests the core virtue of work in 29.09% of interactions with Group 1 and in 13.04% of those with Group 2 (Graph 7).
Conclusions
When attempting to discover whether the paternal identity of Lord Robert, the father in Downton Abbey, is manifested through virtuous actions representative of five core virtues (self-control, joy, work or industriousness, generosity and responsibility), we can conclude that, indeed, this occurs in all the scenes of the first season. In total, the protagonist appears in 80 scenes, and in all of them he manifests at least one virtuous action, which reinforces our initial hypothesis.
Paternal identity has an ontological dimension that corresponds to the person as such, and a historical dimension in which the person expresses himself in a specific time and space and in relation to others in a specific context. In this series, the virtuous figure of the protagonist reveals who the person of the father is, and how he manifests himself in the family and with all the people with whom he interacts. The virtuous actions of the father demonstrate that he conducts himself according to his role as a father, in order to perfect others: the better the father is, the better he makes himself and others. This attests to the fact that virtue is not attained alone but within the family and its social community.
The core virtue that the father most manifests is self-control, especially through serenity; it appears in 93.75% of the scenes. There is a minimal difference in the frequency of appearance of this virtue depending on with whom the interaction takes place. The Earl manifests self-control in 94.54% of the interactions with his family, friends and acquaintances, and in 92% of the interactions with the servants and their friends or acquaintances.
This minimal difference in the manifestation of virtue occurs again in the case of joy, since the father shows this core virtue in 34.54% of interactions with family, friends and acquaintances and in 32% in those in which he interacts with his servants and their friends or acquaintances. Joy is manifested in 33.75% of the scenes and is fourth in frequency of appearance.
The difference is somewhat greater in the case of generosity. With his relatives, friends and acquaintances, Lord Grantham is generous in 63.63% of the interactions, whereas with his servants and their friends or acquaintances he is generous in 52% of the scenes in which he interacts with them. Generosity, appearing in 60% of the scenes, is the second most manifested core virtue in the figure of the protagonist.
Self-control, joy and generosity are virtues that the protagonist expresses more often with his family than with the servants. However, the Earl manifests another virtue more often in his interactions with his servants and their friends or acquaintances.
Such is the case of responsibility. While the protagonist manifests responsibility in 52.72% of the interactions with his family, friends and acquaintances, he does so in 60% of the scenes in which he interacts with his servants and friends or acquaintances of these. Responsibility, manifested in 55% of the scenes, is the third core virtue in frequency of appearance.
Work or industriousness is the least represented virtue, appearing in 23.75% of the total interactions. It is present in 29.09% of the scenes with family, friends and acquaintances, and in 13.04% of those with servants and their friends or acquaintances. It is thus also the virtue manifested most unequally across the two interaction groups.
Except for this virtue, whose manifestation percentage with the family group is more than double that with the servants' group, all the others are shown in a homogeneous way in the interactions with both groups; responsibility even appears more often in the interactions with the servants.
This allows us to conclude that the analyzed character not only shows his identity through virtuous actions that express the core virtues, but also does so in a coherent way, acting in accordance with all the virtues (and with four of the five in similar percentages) in his interactions with both his family circle and that of the servants.
We can affirm that the habits of good choice that Lord Grantham manifests express the notes of his personal identity and also constitute an example of life in its space-time context, the environment of Downton Abbey in the early twentieth century. In this way, the figure of this protagonist of the series is very attractive because, with his virtuous acts, his influence reaches not only his family but also his employees, towards whom he exercises a natural parental concern. This, undoubtedly, is an unprecedented characteristic in the paternal representation of a contemporary television series.
"Philosophy"
] |
Hydrodynamics of Direct Contact Condensation Process in Desuperheaters
Due to global warming and its environmental implications, the focus of household heating has shifted from fossil fuels towards environmentally friendly and renewable sources. Desuperheaters have been found to be an attractive option for domestic warm-water provision; they use steam-induced direct contact condensation (DCC) as the principal means of heating the water. The present study is an experimental investigation of the hydrodynamics in a desuperheater vessel when pressurized, pulsating steam is injected into the vessel and the steam jet interacts co-currently with the slow-moving water. Flow visualization provided an overall picture showing a circulation region when the pulsating steam was injected into the slow, co-currently moving water, with the peak vorticity corresponding to steam injection durations of 10-60 seconds. An array of 7 Hot Film Anemometers (HFA) was traversed axially and radially to determine the velocity fluctuations at 0-20 cm from the nozzle exit. Vortical structures were obtained that corresponded to the entrainment of the surrounding co-currently moving water by the steam. The circulation regions were thus characterized in relation to the steam injection duration as well as the downstream axial distances of 2 cm and 15 cm from the nozzle exit, showing that the core local circulation at 2 cm lost 75-79% of its strength by 15 cm downstream of the nozzle exit.
INTRODUCTION
The demand for energy at the domestic level has increased over the years, forming a growing proportion of total energy demand. Several factors are responsible for this rise, including population growth and a growing economy, with wealthier lifestyles increasing the use of electronic devices and vehicles. Another facet of the issue is the increasing usage of energy resources such as fossil fuels. Such fuels are finite in quantity, and their increasing usage harms the global outlook by polluting the environment. Attention has therefore turned to renewable energy resources, with increased efforts to identify renewable sources as replacements for fossil fuels. Household warm water contributes a major share of energy consumption: in descending order, it accounts for 32% in South Africa (Nkomo, 2016), 29% in Mexico (Rosas-Flores et al., 2011), 27% in China (Mahmoudi et al., 2018), 25% in Australia (GOVERNMENT, 2010), 22% in Canada (Aguilar et al., 2005), 14% in Europe (Trends, n.d.) and 11% in the USA (Allouhi et al., 2015).
There are many systems that provide customized solutions suited to a household's warm-water requirement. These systems depend upon the climate, the nature of the requirement, the nature of the energy resource, and the design of the system. Thus, the selection of a suitable energy system can reduce the cost of warm-water production and help avoid unnecessary usage of energy resources, whilst being environmentally friendly. There are numerous studies (Chow, 2010; Hepbasli and Kalinci, 2009; Jaisankar et al., 2011; Shukla et al., 2009) on methods used to provide warm water to households, which include heat pumps, solar water heaters with phase change materials, and thermal/photovoltaic solar technology-based systems. All these studies are comprehensive reviews within which the usage of desuperheaters has been elaborated in detail. Desuperheaters play a central role in the provision of warm water, irrespective of the source from which they draw steam. They rely on processes such as direct contact condensation, which is central to heating the water. Although the desuperheater setup has been discussed in depth many times, to our knowledge no study to date has addressed the direct contact condensation (DCC)-induced hydrodynamics within the mixing region of a desuperheater under pulsating steam injection. The current study is an effort in this regard and provides a detailed analysis of the hydrodynamic trends prevailing in desuperheaters.
The present study focused on the effect of short-pulse, high-pressure steam injection into continuously, very slowly flowing water, and thus the overall effect of the pulsed injection on the in-situ hydrodynamics was determined. The sequence of events that occurred within the mixing chamber was characterized, and the flow structures, such as vortices, were described in detail from the moment they came into being until the time they decayed. The details of the experimental setup and the sequence of the performed experiments are given in the following section.
EXPERIMENTAL SETUP
The experimental setup comprised a desuperheater vessel, shown in Fig 1. The steam was injected into the desuperheater through a nozzle attached to a vertical duct. The vertical duct was submerged in the vessel, and the nozzle was located at the axial centre of the desuperheater vessel. The inner diameter of the vertical duct is 3 cm, the inner diameter d1 of the nozzle is 2.5 cm, the throat diameter d2 is 1 cm and the exit diameter d3 is 1.5 cm. The length of the nozzle is 10 cm, and the diameter and length of the desuperheater vessel are 10 cm and 60 cm, respectively. The vessel was filled with subcooled water moving at a very low velocity of 0.01 cm/s, and the steam was injected in pulsating mode at a stable gauge pressure of 4 bars. The steam injection was controlled using a solenoid valve and an electronic control system (ECS) installed on the main steam line (not shown in Fig 1). Hot Film Anemometers (HFA) were used to measure the velocity fluctuations associated with the interfacial steam-water flow. A fixture was made to facilitate forward and backward movement of the seven HFA probes within the fluid medium in the vessel. Before performing the experiments, the steam's mean velocity was measured at the exit of the nozzle using a pitot tube. The dynamic pressure measured at the front face of the pitot tube gives the axial mean steam velocity at that location through Bernoulli's equation, $\bar{U} = \sqrt{2\Delta P/\rho}$, where $\Delta P$ is the difference between the total pressure at the mouth of the pitot tube and the static pressure, and $\rho$ is the steam density. This velocity was used to non-dimensionalize the velocity values obtained from the HFA sensors. The ECS also controlled the clockwise and anti-clockwise movement of the fixture, thereby driving the forward and backward motion of the HFA sensors along the axial direction. The velocity fluctuations were measured along the axial (X-U) and radial (Y-V) directions. All seven HFA sensors were used at the same time, such that the array traversed a distance of 20 cm along the axis, from downstream towards upstream, acquiring data for 1 min at each location before advancing 1 cm and repeating the measurement. In this way the whole steam-water mixture was scanned in a vertical plane from 20 cm downstream up to the exit of the nozzle. These velocity components, recorded as functions of the spatial coordinates, provide useful information on the total circulation along the axial direction and on the local circulation and velocity distributions along the radial direction. The total circulation of the vortical ring was obtained from the velocity fluctuation along the x-axis integrated over the area containing the measured axial and radial velocity fluctuations (Linden, 1973), while the local circulation, in terms of the angular velocity distribution along the radial direction, was calculated using the relations given by Linden (1973). The experiments thus performed and the discussion of the acquired results are presented in the following section.
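For concreteness, the data-reduction steps just described (pitot-tube mean velocity from Bernoulli's equation and total circulation from the measured velocity field) can be sketched as follows. This is a minimal illustrative script, not the authors' code; the grid, the toy velocity field and all numerical values are assumptions.

```python
import numpy as np

# Pitot-tube mean velocity from Bernoulli: U = sqrt(2*dP/rho)
dP = 350.0            # assumed dynamic pressure difference, Pa (illustrative)
rho_steam = 0.6       # assumed steam density, kg/m^3 (illustrative)
U_mean = np.sqrt(2.0 * dP / rho_steam)

# Hypothetical HFA velocity field sampled on a regular (x, y) grid:
# u(x, y) axial component, v(x, y) radial component.
x = np.linspace(0.0, 0.20, 21)        # 0-20 cm axial traverse, m
y = np.linspace(-0.05, 0.05, 11)      # radial positions across the vessel, m
X, Y = np.meshgrid(x, y, indexing="ij")
u = U_mean * np.exp(-((Y / 0.02) ** 2))   # toy jet-like axial profile
v = 0.05 * U_mean * np.sin(X / 0.05) * Y  # toy radial component

# Vorticity omega_z = dv/dx - du/dy, then total circulation = area integral of omega.
dvdx = np.gradient(v, x, axis=0)
dudy = np.gradient(u, y, axis=1)
omega = dvdx - dudy
dx, dy = x[1] - x[0], y[1] - y[0]
total_circulation = np.sum(omega) * dx * dy

print(f"Mean nozzle-exit velocity: {U_mean:.2f} m/s")
print(f"Total circulation (toy field): {total_circulation:.4f} m^2/s")
```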
RESULTS & DISCUSSION
In the current study, the steam was injected in pulsating mode at 5 bars of gauge pressure into the desuperheating vessel. The flow hydrodynamics associated with the flow regimes evident in the vessel were investigated, with special emphasis on the vortical structures and circulation flows generated within the co-currently, slowly flowing water. The details of the corresponding results are given below. The circulation region that formed once the steam was injected into the flowing water was larger than the corresponding length across which it prevailed, and it decreased with the passage of time. It is interesting to note here that the growth rate of this circulation depends mainly on the entrainment of the surrounding water. However, the circulation motion diminished at a distance away from the exit of the nozzle; the growth rate of the large vortical structure therefore underwent major changes as the core region of the circulation also varied. A possible reason for the circulation between the steam and the co-currently flowing water being restricted could be the buoyancy of the steam, which destabilized the interface, together with the momentum-driven entrainment that impacted the flow negatively, an observation convincingly supported by an earlier study (Maxworthy, 1974).
Circulation flow ring and Vortical Structures inside the flow regimes
In this phase of the experiment, the steam was injected for durations varying from 10 to 60 seconds; the results are shown in Fig 2(b). For all injection durations (10-60 s), the dependence of the circulating vortical structure on the injection time was very weak. It was also confirmed that the Reynolds number did not have a dominant effect on the diameter of the ring, a finding in line with an earlier study (Liess and Didden, 1976a). The dependence of the length and diameter of the vortical circulation ring on the steam inlet pressure (5 bars) and injection time was also determined. This was estimated by first assuming that the mean velocity (U) measured at the exit of the steam nozzle had a uniform distribution; the length (L) of the steam-induced vortical ring and its circulation during the injection duration t were then obtained from the relation given by Kulkarny (1977).
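The exact relation from Kulkarny (1977) is not reproduced in the text; as an illustration only, a commonly used slug-flow estimate is sketched below, where both the relations (L ≈ U·t for the slug length and Γ₀ ≈ U²t/2 for its circulation) and the numerical values are assumptions rather than the study's own expressions.

```python
# Minimal sketch of slug-flow estimates for a pulsed jet (assumed forms,
# not necessarily the exact relations used in the study):
#   ring length       L   ~ U * t
#   ring circulation  G0  ~ 0.5 * U**2 * t
import numpy as np

U = 2.0                                  # assumed mean nozzle-exit velocity, m/s
t = np.array([10, 20, 30, 40, 50, 60])   # injection durations, s

L = U * t             # slug length travelled during injection, m
G0 = 0.5 * U**2 * t   # slug-model circulation, m^2/s

for ti, Li, Gi in zip(t, L, G0):
    print(f"t = {ti:2d} s  ->  L = {Li:6.1f} m, Gamma_0 = {Gi:6.1f} m^2/s")
```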
Effect of Injection time on the instabilities inside the flow regime
It should also be noted that, when traversing the velocity sensors, flow instabilities were measured inside the flow; these remained dominant for as long as the steam was injected into the water and dissipated after the steam injection valve was shut at the end of the specified injection period.
The instabilities observed here were analogous to similar instances in earlier studies (Krutzsch, 1939; Liess and Didden, 1976b; Maxworthy, 1972; Moore, 1974; Widnall et al., 1974; Widnall and Sullivan, 1973) with variations in steam injection duration or Reynolds number. However, a few interesting trends in the wave number of the instabilities can be observed: it initially shows an increasing trend, but then a sudden decreasing trend appears, which emphasizes the dominant role of dissipative effects in the current flow regime.
A straightforward reason for such behaviour is that the instabilities first show an increasing trend, consistent with the growth and propagation of the circulating vortical ring, which subsequently breaks up; the resulting profiles gradually flatten under the action of dissipative forces. Although the phenomenon of pulsating fluid injection into water has been described in a number of studies, most of them visualization studies, here we quantitatively discuss the effect of pulsating steam injection on the flow regimes involving the interaction between steam and water. It should also be noted that we accept the non-frozen nature of the data, but on an average basis the fluid regime has been characterized as far as possible. As far as the cumulative results discussed so far are concerned, it was found that a core region emerged but remained attached to the nozzle exit, and with increasing injection time only a slight rise in the length of this core region was observed.
It was nevertheless this main core that gave birth to the forward-rolling large vortical structures. Such efforts did not exactly predict the balance between the positive and negative vorticities, but the basic physical picture can still serve as a basis for modelling such cases, since in the region far from the injection point viscous dissipation can become dominant over inertial forces and break down the large circulating structures.
Flow Hydrodynamics in the region far from the steam nozzle
It has been observed in a number of studies (Afrasyab et al., 2013; Khan, 2014; Khan et al., 2013, 2016a, 2016b) that the instabilities at the interface have lower amplitudes in the region near the nozzle exit and transform into instabilities of larger and larger amplitude as the steam propagates into the water. After some finite growth, these instabilities break down into ring-like vortices, which drive the interaction between the fluids at the interface. A possible reason for such behaviour can be found in earlier studies in which an imbalance between the axial wave number and the radial mode number was claimed (Widnall et al., 1974). According to the observations quoted in those studies, the breaking of the outer core did not take place uniformly but occurred in the azimuthal direction with the formation of a net flow. An earlier study (Leibovich and Randall, 1972) further claimed the propagation of just a single wave along the central core, a wave with finite amplitude and a large axial velocity. Owing to this large axial velocity, the central-core wave in the far region had a profound effect on the regime through the ring instability.
The measurements in the region far from the nozzle exit exhibited a non-frozen character, reflecting the highly fluctuating nature of the flow regime. The velocity fluctuations measured at the central core had smaller amplitudes than those at the periphery of the circulating region. The main reason for this is likely the higher axial mean velocities at the core, relative to which the velocity fluctuations were marginal. The interface also appeared turbulent in the far region, characterized by the formation of vortical structures owing to the entrainment of the surrounding water. The variations in the magnitudes of the vorticities in the far region were relatively large, owing to the stronger interaction between the steam and the surrounding water.
To fully understand the effects imparted by the vortices and the turbulence in the regions far from the nozzle exit, the local circulations at two distances, 2 cm and 15 cm from the nozzle exit, were obtained (see Fig 4). When compared with the core local circulation at 2 cm, the local circulation was found to lose 75-79% of its strength at a distance of 15 cm.
"Engineering",
"Physics"
] |
Thermal management and packaging of wide and ultra-wide bandgap power devices: a review and perspective
Power semiconductor devices are fundamental drivers for advances in power electronics, the technology for electric energy conversion. Power devices based on wide-bandgap (WBG) and ultra-wide bandgap (UWBG) semiconductors allow for a smaller chip size, lower loss and higher frequency compared with their silicon (Si) counterparts, thus enabling a higher system efficiency and smaller form factor. Amongst the challenges for the development and deployment of WBG and UWBG devices is the efficient dissipation of heat, an unavoidable by-product of the higher power density. To mitigate the performance limitations and reliability issues caused by self-heating, thermal management is required at both device and package levels. Packaging in particular is a crucial milestone for the development of any power device technology; WBG and UWBG devices have both reached this milestone recently. This paper provides a timely review of the thermal management of WBG and UWBG power devices with an emphasis on packaged devices. Additionally, emerging UWBG devices hold good promise for high-temperature applications due to their low intrinsic carrier density and increased dopant ionization at elevated temperatures. The fulfillment of this promise in system applications, in conjunction with overcoming the thermal limitations of some UWBG materials, requires new thermal management and packaging technologies. To this end, we provide perspectives on the relevant challenges, potential solutions and research opportunities, highlighting the pressing needs for device–package electrothermal co-design and high-temperature packages that can withstand the high electric fields expected in UWBG devices.
Introduction
Power electronics is the technology for electrical energy conversion using solid-state electronics. It is ubiquitously deployed in electric vehicles, data centers, motor drives, electric grids and renewable energy integration. At the heart of power electronics are power semiconductor devices, which are used as solid-state switches in circuits. The global market for power semiconductors reached $40 billion in 2021 and is growing fast [1]. Functioning as solid-state switches, ideal power devices should have minimal resistance when passing an ON-state current, block high voltage in the OFF-state and produce minimal loss during turn-ON/OFF switching.
Conduction and switching losses appear in real-world power devices due to the non-zero ON-resistance (R ON ) and the need to extract or supply charges in switching, respectively. These energy losses dissipate as heat, elevating the temperature of the device active junction and the package housing the device. The elevated junction temperature (T j ) could adversely impact the device characteristics as well as the device and package reliability. T j of commercial power devices is usually limited to below 125 °C-175 °C for long-term, reliable operation [2]. Thermal management, which dictates the heat removal in a device and its package, is thereby a key limiting factor for power device performance and reliability.
In the last two decades, power electronics has witnessed revolutionary advances enabled by devices made of widebandgap (WBG) semiconductors, such as silicon carbide (SiC) and gallium nitride (GaN) [3][4][5][6][7][8]. Owing to their superior material properties such as high critical electric field (E C ), WBG devices can achieve a much lower specific ONresistance (R ON,SP = R ON A, where A is the device area) for the same breakdown voltage (BV), thereby allowing for smaller areas, capacitances, charges and switching losses, as well as higher operating frequencies, compared with similar voltageand current-rated silicon (Si) devices [9]. The higher frequency and lower losses further enable miniaturization of passive components in power electronics systems, reduce system size, boost power density and enhance efficiency. On the horizon there are power devices made of ultrawide bandgap (UWBG) semiconductors such as gallium oxide (Ga 2 O 3 ), aluminum nitride (AlN), and diamond [10][11][12][13]. UWBG devices promise a theoretical R ON,SP -BV trade-off superior to their WBG counterparts and are excellent candidates for the next generation of power electronics.
Despite their superior electrical performance, thermal management of (U)WBG devices is more challenging than that of their Si counterparts for three main reasons: (a) since these devices are expected to handle very high power densities while minimizing areal footprint, the combination of high power and small area leads to extremely high heat fluxes which are incredibly difficult to manage; (b) the concurrence of higher current density and electric field (E-field) produces very high local heat fluxes, which can lead to non-uniform temperature distributions and local thermal runaway; (c) for some device structures and materials, there are inherent thermal limitations, which further complicate and exacerbate the problem. For example, in high-electron-mobility transistors (HEMTs), the device active region consists of an extremely thin (∼5 nm) quantum well current channel, i.e. the two-dimensional electron gas channel, where heat generation is spatially confined [14], further increasing the demand for thermal management. Another example is Ga 2 O 3 , which suffers from a very low thermal conductivity (k T ) of 11-27 W m −1 K −1 [15,16] (table 1). This is an order of magnitude lower than that of Si.
Due to these challenges, heat removal has become a key roadblock to exploiting the electrical performance of (U)WBG devices in systems. For instance, the insufficient thermal management requires a larger A for heat dissipation, which compromises the device switching frequency and system efficiency [18]. The thermal issues can also result in de-rating of power devices, i.e. reducing the continuous operating current.
Thermal management in power devices is a multidimensional problem. As shown in figure 1, the heat generated at the device junction is usually removed through the package to an area where it can be further dissipated, typically via convection (e.g. air and liquid cooling). The micronand submicron-sized device structures in the junction area, semiconductor materials and their interfaces, package architectures, packaging materials and cooling techniques all play vital roles in determining the thermal resistance (R th ) along the flow of heat, i.e. the junction-to-ambient thermal resistance (R th,j-a ). Moreover, their roles are usually interdependent. For instance, the cooling and package designs can alter the major heat flow towards the top or bottom side of the chip, under which circumstances R th,j-a would be sensitive to different device structures and material properties within the same device. Hence, thermal management of power devices must holistically account for the package and cooling as well as the internal structures and materials.
The demonstration of large-current, packaged devices is an indispensable milestone for any power device technology towards being deployed in power electronics, as no industrial devices can be used in converters without packaging. While WBG packaging has been researched for years [19,20], the good news is that the emerging UWBG technologies also reached this milestone very recently [10,21,22]. Despite this fast progress, papers that provide a global overview of WBG and UWBG device thermal management are scarce.
Recently, thermal studies on lateral UWBG devices have been reviewed with a focus on the fundamentals of devices and materials [23,24]. This article attempts to build on a material-device-package holistic viewpoint that is closely tied to power electronics applications and discusses common challenges for the thermal management of UWBG devices. To this end, we place a particular emphasis on the state-of-the-art of packaged devices and provide perspectives on both the device- and package-level thermal management of UWBG devices. As the literature is vast but space is limited, in this paper we will focus on the power device and module packaging. Convection-centric cooling technologies have been nicely summarized in [19] and will not be the main focus of this article. We also note that thermal management is not the sole purpose for packaging; the packaging for (U)WBG devices must also handle high E-fields [25,26] as well as reduce parasitics and electromagnetic interference (EMI) [27,28]. These electrical and EMI considerations will be briefly mentioned in this article, with a focus on their relevant constraints on the thermal designs of packages.
This article is organized as follows: section 2 illustrates the significance of thermal management for power devices; section 3 describes the basic package architectures and discusses the impact of semiconductor k T ; sections 4 and 5 overview the thermal management and packaging of WBG and UWBG devices, respectively; section 6 provides future perspectives on thermal management and packaging of UWBG power devices; section 7 summarizes the paper.
Why does thermal management matter for power devices?
We attempt to answer this question from the viewpoint of power device operations. For a unipolar transistor, the total power loss (P loss ) is the sum of conduction loss (P con ) and switching loss (P sw ), which can be described by [29]

$P_{\mathrm{loss}} = P_{\mathrm{con}} + P_{\mathrm{sw}} = \frac{D\,R_{\mathrm{on,sp}}\,I_0^2}{A} + k_s f A$ (1)

where D is the duty cycle, I 0 is the conduction current, f is the switching frequency and k s is a circuit-related switching parameter. The minimum P loss can be achieved by optimizing the device area (A opt ) when d(P loss )/d(A) = 0:

$A_{\mathrm{opt}} = I_0\sqrt{\frac{D\,R_{\mathrm{on,sp}}}{k_s f}}$ (2)

$P_{\mathrm{loss,min}} = 2 I_0\sqrt{D\,R_{\mathrm{on,sp}}\,k_s f}$ (3)

Considering a 175 °C limit for T j (the widely used limit for commercial WBG devices), and that losses are independent of T j (an optimistic simplification), the thermal constraint for device operation at ambient temperature (T a ) is

$P_{\mathrm{loss}} \leqslant \frac{T_j - T_a}{R_{\mathrm{th,j\text{-}a}}}$ (4)

As illustrated in section 1, the key system benefit of (U)WBG devices is their high frequency. From (3) and (4), the maximum frequency limited by the thermal constraint is

$f_{\max} = \frac{(T_j - T_a)^2}{4\,D\,k_s\,R_{\mathrm{on,sp}}\,I_0^2\,R_{\mathrm{th,j\text{-}a}}^2}$ (5)

As suggested by equation (5), the fulfillment of frequency upscaling in power electronics hinges not only on the inherently low R on,sp of (U)WBG devices but also on a low R th,j-a . To retain the R on,sp advantage, an R th,j-a at least similar to that of Si devices is preferred. This is challenging due to the much smaller A of (U)WBG devices.
Another angle for understanding the more pressing thermal requirements of (U)WBG devices is to look at the density of power to dissipate, which can be estimated using (2) and (3):

$\frac{P_{\mathrm{loss,min}}}{A_{\mathrm{opt}}} = 2 k_s f$ (6)

Although k s could vary for different (U)WBG devices, (6) suggests that a higher power density is required for frequency upscaling, and this requirement is more or less independent of the underlying material. Figure 2 plots the power density versus frequency for representative Si, SiC and GaN device technologies as well as the projection for UWBG semiconductors.
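To make the scaling behind equations (1)-(6) concrete, here is a minimal numerical sketch; the parameter values (duty cycle, current, specific on-resistance, switching parameter, thermal resistance) are illustrative assumptions, not values from this paper or its references.

```python
import math

# Illustrative parameters (assumed, not from the paper)
D = 0.5              # duty cycle
I0 = 20.0            # conduction current, A
Ron_sp = 1.0e-3      # specific on-resistance, ohm*cm^2
ks = 1.0e-3          # switching energy per cycle per unit area, J/cm^2
f = 100e3            # switching frequency, Hz
Tj, Ta = 175.0, 25.0 # junction-temperature limit / ambient, degC
Rth_ja = 3.0         # junction-to-ambient thermal resistance, K/W

# Optimal area (2) and minimum loss (3) from d(P_loss)/dA = 0
A_opt = I0 * math.sqrt(D * Ron_sp / (ks * f))       # cm^2
P_min = 2.0 * I0 * math.sqrt(D * Ron_sp * ks * f)   # W

# Power density to dissipate at the optimum (6): independent of the material
p_density = 2.0 * ks * f                            # W/cm^2

# Thermally limited maximum frequency (5)
f_max = (Tj - Ta) ** 2 / (4.0 * D * ks * Ron_sp * I0 ** 2 * Rth_ja ** 2)

print(f"A_opt = {A_opt * 100:.1f} mm^2, P_loss,min = {P_min:.1f} W at f = {f / 1e3:.0f} kHz")
print(f"Power density at optimum = {p_density:.0f} W/cm^2 (= 2*ks*f)")
print(f"Thermally limited f_max  = {f_max / 1e6:.2f} MHz")
```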
A common vision is that a T j higher than 175 °C could potentially be tolerated by UWBG devices for long-term operations, which could relax their thermal management requirements. Although high-voltage Ga 2 O 3 and diamond devices have been reported to operate at temperatures up to 327 °C-427 °C [30,31], their long-term reliability at high T j still needs further scrutiny. Here, we outline the impacts of T j on power devices and packages, as these impacts need to be carefully considered when determining the future T j constraint for UWBG devices.
R on,sp and saturation current density (I sat ) are typically the first to be negatively affected by the elevated T j . When T j increases from 25 °C to 150 °C, R on,sp of commercial GaN and SiC transistors was reported to increase by 1.7-2.5 times [32], with a reduction in I sat of up to 50% [33]. This will increase the device conduction loss and de-rate its current capability. Additionally, the device transconductance decreases at high T j due to the reduced carrier mobility, resulting in a slower switching speed and higher switching loss. The increased conduction and switching losses accelerate the T j rise, which may lead to a destructive thermal runaway [34,35].
The reliability and robustness of power devices are also compromised at high T j . T j is a direct accelerator for power device wearout in switching operations [36]. For example, the lifetime of a 600 V rated GaN HEMT stressed at 640 V and 8 A hard switching was reported to decrease from 900 h at 100 °C to 250 h at 125 °C [36]. Under a similar overvoltage switching condition, the degradation of SiC metal-oxide-semiconductor field-effect transistors (MOSFETs) was reported to accelerate from tens of hours at 25 °C to tens of minutes at 100 °C [37]. Robustness is also critical for power devices to withstand surge energy, overvoltage and overcurrent in systems [33,38]. The failure of power devices under these conditions is usually thermally triggered, suggesting a compromised robustness at a higher T j . For example, the critical surge energy of SiC and GaN p-n junctions is generally 30%-40% lower at a T j of 150 °C than at 25 °C [39].
Finally, the reliability of packaging components such as the die-attach, interconnects, substrates and encapsulants can also be negatively impacted by high T j . A survey on the high-temperature reliability of these components is presented in [40]. When T j is elevated to 200 °C, cracking is seen in some widely used packaging components such as Durapot epoxy and Resbond hydro-set ceramic [40]. This suggests a need to explore new materials and structures for high-temperature packages, which will be discussed in section 6.
Basics of power device/package thermal management
As shown in figure 1, the thermal management of power devices can be categorized into device-and package-level designs. Examples of device-level designs are uniformly spreading heat across the device, alleviating hot spots around critical junction areas and using materials with high k T as device substrates [41][42][43]. Typically, in a power transistor, the channel formed beneath the gate is the area that experiences the peak T j . A substrate with a high k T and reduced thickness can help to 'pull' the heat down away from the channel, particularly in the scheme of a bottom-side cooling package. An example of this would be using SiC and diamond as the substrate for GaN HEMTs [44][45][46][47][48]. This method is also applicable to Ga 2 O 3 devices with a low k T [24,[49][50][51][52][53], which will be elaborated in sections 4 and 5.
The heat generation induced by E-field crowding can also be mitigated by device design. Implementing guard rings [54][55][56], field plates [57][58][59][60][61][62] and junction termination extensions [63][64][65][66][67][68][69] can help to spread the E-field and prevent the build-up of weak electrical and thermal points in the device. This can be well illustrated by a comparative thermal study of lateral and vertical GaN devices with identical material k T : superior thermal performance is revealed in vertical GaN devices due to the alleviated E-field crowding and more uniform current distributions [2].
The device package incorporates many components, such as die-attach, interconnects (e.g. wire bonds), metal-ceramic substrates and baseplates or lead frames, encapsulants and terminals. The k T of these materials and R th of these layers and interfaces can affect heat dissipation away from critical areas of the semiconductor. The effectiveness of the overall packaging system is dependent upon the individual components and their thermal properties. For example, the defects, brittle intermetallics and thermomechanical stresses can cause the R th of the die-attach layer to increase, which in turn results in an increase in local temperature and package degradation and failure [70,71]. Cracks which may develop during operation because of thermal cycling and mismatches in the coefficient of thermal expansion (CTE) of different materials (e.g. the encapsulation material and the metal interconnects) also impede the heat flux and result in larger R th [72]. It has been shown throughout the literature that device thermal performance is improved with optimization of device packaging [21,22,24,73,74].
There are generally three types of packaging and cooling schemes. Figure 3(a) shows a typical schematic of a bottomside cooling package [75]. For multi-chip modules, the power devices are attached to a substrate which is typically a ceramic with copper metallization on the top and bottom. The substrate is then typically soldered onto a baseplate for heat spreading and mechanical support. The module is encapsulated using epoxy resin or silicone gel. The final module is mounted to a heatsink for cooling. A thermal interface material is used between the baseplate and the heatsink to reduce the thermal resistance of the interface. While a bottom-side cooling package is suitable for high-k T semiconductors such as SiC and diamond [23,76,77], it may be difficult to achieve efficient heat removal for those with a low k T . Junction-side (or top-side) cooling can be advantageous for semiconductors with low k T , as the heat flows from the device junction directly to the package rather than through the bulk of the device, as shown in figure 3(b) [78,79]. For these low-k T devices, junction-to-case thermal resistance (R th,j-c ) can be reduced compared with a bottom-side cooling package to enable higher power density [21]. However, junction-side cooling packages can also face some challenges. The chip is often directly attached onto the substrate metallization through a flip-chip method. The mismatch of CTE between the metal and the chip could result in mechanical stresses [80,81]. The E-field distribution can also be an issue as the edge termination of the device is in close proximity to the substrate metallization, which could lead to voltage de-rating or degradation [27,82,83]. Further, as most of the heat is removed through the device metal contacts, the effective heat dissipation area will be smaller for a junction-side cooling package than for a bottom-side cooling package since the bottom metal contact covers the entire chip area while the top contact does not due to the edge termination, gate and passivation.
Double-side cooling packages (shown in figure 3(c)) are gaining increased attention for their superior thermal management due to both bottom- and top-side cooling paths [21,84,85]. Such a package may offer the opportunity to address the thermal limitations of some WBG or UWBG materials with low k T . However, the complexity of double-side cooled packages makes them prone to reliability and yield issues as well as high cost.
As presented in table 1, the k T of WBG and UWBG materials varies over a wide range, among which Ga 2 O 3 has the lowest value and diamond the highest [17]. Considering the device-package interplay, a natural question is whether the semiconductor k T matters for the thermal management of packaged devices. The answer could be quite straightforward for junction-side and double-side cooling packages: for semiconductors with a low k T , such that the R th through the bottom heat flow path is very large, there is likely to be negligible difference between junction-side and double-side cooling packages. A powerful example is the Ga 2 O 3 diodes reported in [21], which experimentally demonstrates that junction-side cooling enables most of the heat to be directly extracted from the package with minimal heat flowing into the Ga 2 O 3 chip.
For a bottom-side cooling package, the significance of semiconductor k T depends on the relative R th of the chip and package. An analysis of WBG and UWBG devices with bottom-side cooling was presented in [17]. It was found that for semiconductors with high k T (e.g. SiC and diamond), the package's R th dominates R th,j-a , whereas for 150 < k T < 400 W m −1 K −1 (e.g. AlN and GaN), the contribution of the chip's R th depends on the heat transfer coefficient (HTC) at the heatsink.
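As a rough illustration of this chip-versus-package trade-off, the sketch below builds a simple one-dimensional series thermal-resistance network and varies the chip thermal conductivity and the heatsink HTC; all layer thicknesses, conductivities, areas and HTC values are assumed for illustration and are not taken from the analysis in [17].

```python
# Minimal 1D series thermal-resistance sketch (all values assumed for illustration)
def layer_R(thickness_m, k, area_m2):       # conduction: R = t / (k*A)
    return thickness_m / (k * area_m2)

def conv_R(htc, area_m2):                   # convection: R = 1 / (h*A)
    return 1.0 / (htc * area_m2)

A = 25e-6   # 5 mm x 5 mm chip footprint, m^2

# Hypothetical bottom-side cooled stack below the chip:
R_package = (
    layer_R(50e-6, 200.0, A)       # sintered-Ag die-attach
    + layer_R(0.635e-3, 170.0, A)  # AlN ceramic substrate
    + layer_R(100e-6, 5.0, A)      # thermal interface material
)

for name, k_chip in [("Ga2O3", 27.0), ("GaN", 130.0), ("SiC", 370.0)]:
    R_chip = layer_R(350e-6, k_chip, A)        # 350 um substrate assumed
    for htc in (1e3, 1e4):                     # heatsink HTC, W/(m^2*K)
        R_total = R_chip + R_package + conv_R(htc, A)
        share = 100.0 * R_chip / R_total
        print(f"{name:6s} HTC={htc:7.0f} W/m^2K: "
              f"R_th,j-a = {R_total:5.1f} K/W (chip share {share:4.1f}%)")
```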
Finally, it is worth noting that the accurate measurement of peak T j is challenging for both bare-die and packaged devices, particularly those with submicron channel/gate structures. The T j profile measured by micro-Raman spectroscopy and thermoreflectance usually represents an average temperature within a critical area [86][87][88]. The actual peak T j could be underestimated, and the measured R th could be smaller than the actual value due to the delicate channel structures and E-field crowding. Besides, some techniques, such as infrared thermography, have a poor spatial resolution of ∼5-10 µm [89], and many thermography approaches cannot be applied to packaged devices. For packaged devices, T j is often monitored by measuring a thermosensitive electrical parameter (TSEP) [90]. For example, in [21] the forward voltage at 10 mA was selected as the TSEP of a packaged Ga 2 O 3 diode, and showed an excellent linearity with temperature.
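As an illustration of how a TSEP is used in practice, the sketch below fits a hypothetical calibration of forward voltage at a fixed 10 mA sense current against temperature and inverts it to estimate T j; the calibration points and sensitivity are invented for illustration and are not the calibration reported in [21].

```python
import numpy as np

# Hypothetical calibration: forward voltage at a fixed 10 mA sense current
# measured at known baseplate temperatures (values invented for illustration).
T_cal = np.array([25.0, 50.0, 75.0, 100.0, 125.0, 150.0])       # degC
Vf_cal = np.array([0.820, 0.772, 0.724, 0.676, 0.628, 0.580])   # V, ~ -1.9 mV/K

# Linear fit Vf = a*T + b, then invert to estimate junction temperature.
a, b = np.polyfit(T_cal, Vf_cal, 1)

def junction_temperature(vf_measured: float) -> float:
    """Estimate T_j (degC) from a forward-voltage TSEP reading."""
    return (vf_measured - b) / a

print(f"TSEP sensitivity: {a * 1e3:.2f} mV/K")
print(f"Vf = 0.700 V  ->  T_j ~ {junction_temperature(0.700):.1f} degC")
```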
Review of WBG device thermal management technologies
Over the last two decades, GaN HEMTs and SiC MOSFETs have arguably become the two most commercially successful WBG power transistors. These devices and the constructed packages are now widely used in electric vehicles, data centers and consumer electronics. As shown in figure 4, the vertical MOSFET and lateral HEMT have fundamentally distinct device architectures and physics; both structures have also been used for several types of UWBG devices. Hence, despite the relative maturity of their thermal management and packaging technologies, a brief review could be beneficial for the emerging UWBG devices.
SiC diode and MOSFET
The commercialization of SiC devices dates back to the first SiC Schottky barrier diodes (SBDs) introduced to the market by Infineon in 2001 and the first SiC MOSFETs in discrete packages by Cree and Rohm in 2010-2011 [20]. SiC devices are now commercially available in the voltage class of 650-3300 V [7]. Thanks to the high k T of SiC and the availability of the industrial substrate thinning process, the standard bottom-side cooled packages are prevailing for SiC discrete packages and multi-chip power modules. Today, most of the commercial SiC SBDs and MOSFETs as well as Si IGBTs are packaged in transistor outline (TO) series discrete packages and multi-chip power modules with standardized footprints and wire bond interconnects. These bottom-side cooled packages are able to achieve reasonable R th due to the high k T of SiC, but, due to the higher current densities of SiC MOS-FET dies associated with the smaller chip size compared with Si IGBTs, current de-rating is often needed to avoid overheating of the devices during operation [91]. Accordingly, extensive research efforts have been devoted to further reducing the R th of SiC packages to enable greater heat dissipation under both steady-state and transient conditions.
Junction- and double-side cooling have been pursued for SiC devices by replacing the traditional wire bonds with interconnect methods that enable direct heat transfer from the device junction to the substrate or lead frame. Examples of such interconnect methods include soldering lead frames or substrates directly to the die topside contacts [20,76,92], soldering or sintering Cu, Mo or Cu-Mo posts or bumps between the die and the substrate [93][94][95][96], and drilling and electroplating Cu-filled vias for PCB-embedded packages [97][98][99]. As an example, the use of Cu and Mo posts in a double-side cooled configuration was utilized to achieve an overall R th,j-c of just 0.17 °C W −1 for a 10 kV, 25 A SiC MOSFET [85].
Recently, Seal et al demonstrated a double-side cooled, chip-scale, wire-bondless package [100]. As shown in figures 5(a) and (b), a SiC die is assembled on a metallic connector and then is flip-chip bonded onto a substrate using solder balls, allowing heat dissipation from both sides of the die. The metallic connector translates the bottom interconnection of the device to the plane of the top contacts, making all the device terminals accessible on one side. Figure 5(c) shows its comparison with a TO-247 MOSFET package, revealing a package size that is 14 times smaller. Smaller power loop inductance and electrical R ON have also been demonstrated compared with conventional wire-bonded, bottom-side cooled packages.
SiC multi-chip modules were first demonstrated by Cree/Wolfspeed in 2013 [20] and are now available from multiple vendors in standard footprints. An example of a SiC power module with a double-side cooled package is shown in figure 5(d) [18]. Wolfspeed provides SiC power modules from 1200 V to 1700 V and 20 A (six-pack, three-phase) to 600 A (half-bridge). An example of one of their 1200 V modules is shown in figure 5(e); it employs solderless pins to interface with an external PCB and removes the baseplate in the module [101]. In addition, removing the baseplate reduces the form factor of the pack and machining and material costs, and improves the reliability of the thermal interfaces such that R th can be maintained for a higher number of cycles [102]. At higher voltages, 10 kV SiC MOSFET power modules have been demonstrated both in academia and industry with traditional bottom-side cooling packages [103,104] and, more recently, double-side cooling packages [85]. A comprehensive review of SiC power module packaging is presented in [20,76].
GaN vertical FETs and lateral HEMTs
Before detailing the more mature lateral GaN HEMTs, we briefly introduce the emerging vertical GaN transistors due to the similarity between their thermal management and that of SiC MOSFETs. Recently, several vertical GaN transistors have been demonstrated, such as current aperture vertical electron transistors [105], trench MOSFETs [106] and power fin field-effect transistors (FinFETs) [9]. The temperature-dependent characteristics and dynamic switching performance of vertical GaN transistors have also been reported [9,[107][108][109][110]. The packaging of vertical GaN transistors is similar to that of their SiC counterparts. For example, the TO-247 packaged vertical GaN fin junction-gate field-effect transistor (Fin-JFET) has been recently demonstrated [32,33,111], which shows good thermal performance at high temperatures as well as under avalanche and short-circuit conditions. GaN power HEMTs are commercially available in the voltage classes of 15-900 V [4,8] and have been recently demonstrated up to 10 kV [66] based on an emerging multichannel structure [67][68][69][112]. Thermal management is challenging for GaN HEMTs for two reasons. First, commercial GaN power HEMTs are all fabricated on low-cost Si or sapphire substrates with a multi-layer, high-dislocation-density buffer region between GaN device layers and the substrate. The relatively low k T of GaN, Si and sapphire compared with SiC (see table 1), as well as the thermal boundary resistance (TBR) between multiple layers, lead to a relatively large R th across the wafer structure. Second, the current in lateral GaN HEMTs is spatially confined compared with that in vertical devices, worsening the non-uniformity of heat generation and dissipation.
To address the thermal challenge, different packages have been developed by industry to allow for junction-side cooling, such as Infineon's dual-small-outline (DSO) packages, Texas Instrument's quad-flat-no-leads (QFN) packages, GaN System's GaNpx packages and EPC's 'solder bar' and ball grid arrays (BGAs) packages. These packages are all surface mounted and obviate the long leads in the traditional TO packages to minimize parasitic inductance, which is essential to exploit the fast-switching capabilities of GaN HEMTs. While the DSO and QFN packages still have bonding wires, the GaNpx and solder bar packages eliminate leads and bonding wires. Specifically, the EPC package works for low-voltage GaN devices and consists of solder bumps to form the land grid arrays or BGAs. This flip-chip bonding using solder bumps is widely used in microwave and radio-frequency (RF) applications. Its main drawback is the limited surface area of the interconnect, which restricts the heat flow down to a small fraction of the total die area.
Figures 6(a) and (b) compare the footprint ratio and R th,j-c of these commercial GaN packages as a function of rated current [113]. The packaging efficiency is represented by the footprint ratio, i.e. the ratio of the die footprint to the package footprint. The higher this ratio, the more efficient the package and the PCB footprint utilization.
To further reduce the package parasitic inductances and increase the footprint ratio, efforts have been devoted to embedding GaN bare dies into the PCB. PCB embedding has been demonstrated for a single GaN device, GaN integrated circuits and a full-bridge GaN module with good thermal performance [114,115]. Additionally, Lu et al proposed an alternative packaging approach that combines a PCB interposer for device interconnection and a DBC substrate for heat dissipation, electrical isolation and lower CTE mismatch [113] (figure 7). A 650 V, 120 A GaN HEMT is packaged, demonstrating a R th,j-c of 0.14 K W −1 , which outperforms the similarly rated commercial GaN packages.
Device-level thermal management is also being actively studied, and many demonstrations have been made in GaN RF power devices. Despite the use of low-cost Si and sapphire substrates in commercial GaN power devices, researchers have explored GaN devices fabricated on high-k T substrate, starting from the integration of GaN onto SiC substrates [45]. To further alleviate the thermally limited performance, GaN on higher-k T diamond substrates has also been achieved [82]. Chao et al demonstrated a GaN-on-diamond HEMT with power density over three times greater than that of a GaN-on-SiC HEMT with the same active area [116]. This demonstration employed a wafer bonding approach; meanwhile, chemical vapor deposition (CVD) of diamond on the N-polar side of GaN epilayers has also been investigated. From a thermal perspective, Pomeroy et al reported a 40% decrease in R th,j-c of a GaN-on-diamond HEMT compared with a reference GaN-on-SiC HEMT [117]. More recently, high-quality CVD diamond/GaN interfaces have enabled the demonstration of very high power GaN-on-diamond HEMTs with reasonable temperature profiles [118]. Coating the device with high-k T heat-spreading layers has been demonstrated on GaN HEMTs using nanocrystalline diamond (NCD) [119][120][121]. Electrothermal simulation has shown that the peak T j of NCD-capped GaN HEMTs is reduced by 30% with respect to a reference HEMT [122]. Incorporation of p-type doping in NCD can further reduce E-field crowding [123].
Review of UWBG device thermal management technology
Similar to section 4, we will prioritize the reported thermal management of large-area packaged devices in this section and briefly mention the device-level management reported for small-area devices. The thermal studies of UWBG devices heavily concentrate on Ga 2 O 3 due to its very low k T . In contrast, AlN and diamond devices have good substrate k T , suggesting the applicability of housing them in the mature packages developed for GaN and SiC. However, the increased ionization of their deep-level dopants at elevated temperature makes them suitable for high-temperature applications, which brings new challenges for packaging.
Ga 2 O 3 device
Thermal management is arguably the most serious concern for Ga 2 O 3 power devices. The device-level thermal management of lateral Ga 2 O 3 devices has been reviewed in [23,24]. Here we provide a brief summary of these studies and will elaborate our perspectives in the next section. Following the footsteps of its predecessors (e.g. GaN HEMTs), two aspects are being extensively explored for Ga 2 O 3 device-level thermal management: (a) substrate engineering, particularly the heterogeneous wafer-epitaxy integration of Ga 2 O 3 device layers onto high-k T substrates, and (b) optimization of channel structure to reduce the peak T j . The ultimate goal of approach (a) is to enable a bottom-side cooling package for Ga 2 O 3 devices, while (b) would be beneficial to both junction-side and bottom-side cooling package schemes.
In [124], the thermal resistance of an SBD based on Ga 2 O 3 /SiC heterogeneous material was reported to be one quarter that of a device on a β-Ga 2 O 3 bulk wafer. In the same year, a smaller-size demonstration of a Ga 2 O 3 substrate directly bonded to a SiC substrate was also reported [50]. Very recently, Song et al reported a Ga 2 O 3 -on-SiC composite wafer fabricated by a fusion-bonding method and subsequent Ga 2 O 3 epitaxy on this composite wafer [49]. In addition to Ga 2 O 3 -on-SiC, Ga 2 O 3 heterogeneous wafer-epitaxy integration onto diamond substrates is also being explored [125][126][127], but has not reached the wafer scale yet due to the small size of single-crystalline diamond substrates.
In addition to heterogeneous integration, thinning the native Ga 2 O 3 substrate is a simple and effective method for thermal management, which can be easily implemented by chemical-mechanical planarization [62]. Modeling and analysis for thermal management of Ga 2 O 3 devices on thinned substrates are presented in [128]. A double-side cooling package combined with a heat spreader was predicted to reduce the R th of a single-finger device to as low as 11 mm °C W −1 with a maximum power density as high as 16 W mm −1 achieved for a T j limit of 200 °C. A multi-finger transistor thermal model was also developed to show that the Ga 2 O 3 transistor could work below the T j limit by properly designing the gate pitch.
Similar to a GaN HEMT, encapsulation of Ga 2 O 3 transistors with a high thermal conductivity material is also desirable. While NCD growth on Ga 2 O 3 is currently under active investigation as a heat spreading layer, most recently Lundh et al demonstrated for the first time an AlN-capped lateral Ga 2 O 3 transistor. The sputtered AlN cap was sufficiently effective to enable a DC power density in excess of 5 W mm −1 , exceeding that of any substrate-side thermal management approach reported to-date [129].
As the other focus of device-level thermal management, the impact of channel design on device thermal management is exemplified in the power FinFET, a new junctionless power transistor [9] first demonstrated in GaN [61,130,131] and subsequently in Ga 2 O 3 [132,133]. Due to the anisotropic k T in Ga 2 O 3 [134], Chatterjee et al predicted that a Ga 2 O 3 FinFET with fins orientated to [100] could allow for a 30% reduction in peak T j as compared to devices with fins aligned to the [010] orientation [135]. From similar considerations, in Ga 2 O 3 lateral MOSFETs, Kim et al pointed out that the layout design could also impact the peak T j [136].
Recently, large-area packaged Ga 2 O 3 power devices have been demonstrated by a few groups [21,22,[137][138][139][140][141], which allows for probing the Ga 2 O 3 thermal management beyond the material and device levels. Unlike the heterogeneous integration that aims at making Ga 2 O 3 chips compatible with bottom-side cooling packages, another pathway is to employ a junction-side cooling package and extract the heat from the device junction directly to the package without the need for substrate engineering. As a validation of this path, Xiao et al demonstrated the first large-area Ga 2 O 3 SBDs packaged in bottom-side and double-side cooled configurations using nanosilver sintering as the die-attach (figures 8(a)-(c)) [22]. The packaged SBDs show a forward current over 20 A and a breakdown voltage over 600 V. The R th,j-c of a double-side packaged Ga 2 O 3 SBD was measured to be 1.43 K W −1 and 0.5 K W −1 in the bottom-side and junction-side cooling configurations, respectively (figures 8(d) and (e)) [21]. The latter R th j-c is lower than the similarly rated commercial bottomside cooled TO-packaged SiC SBDs. By considering different cooling approaches (as can be represented by the HTC), R th,j-a was analyzed for the bottom-side, junction-side and double-side cooling package schemes (figure 8(f)). It was concluded through thermal impedance measurements and simulations that junction cooling is essential for Ga 2 O 3 devices, with a HTC over 10 3 W m −2 K being preferable [21].
Surge current is an essential ruggedness metric listed in any power diode's datasheet and the most important indicator of its transient electrothermal ruggedness [39]. It measures the device's capability of temporarily sustaining a current much higher than the rated current before the protection circuit intervenes and is usually evaluated in a 10 ms wide half-sinusoidal current waveform. Xiao et al found that a double-side cooled package enables a critical surge current of 70 A in Ga 2 O 3 SBDs, which is nearly two times higher than for the bottomside cooled packaged device (figures 9(a) and (b)) [22]. The former Ga 2 O 3 SBD shows a ratio between the peak surge current and the rated current higher than that of the similarly rated commercial Si and SiC SBDs. Electrothermal mixedmode simulations revealed that with the double-side package heat is mainly extracted through the junction side in the transient condition; meanwhile, the peak T j is moved from the Schottky contact into the bulk Ga 2 O 3 during the transient heating process (figures 9(c) and (d)). These results illustrate the significance of the package design and cooling configuration on the transient thermal performance and electrothermal ruggedness of Ga 2 O 3 devices [74].
The results above seem to suggest that, for a typical Ga2O3 chip, a double-side cooled package brings little benefit compared with a junction-side cooled package. According to the simulations in [21], this no longer holds when the Ga2O3 substrate is thinned or replaced by a high-kT substrate. Under such configurations, heat removal through the back side of the chip can become as effective as that through the junction side. This prediction was experimentally validated by Gong et al [141]. A multi-step grinding and CMP process was used to thin the substrate down to 70 µm, and the fabricated Ga2O3 SBD was housed in a double-side cooled package (figure 10(a)). A reference Ga2O3 SBD without substrate thinning (550 µm thick) was fabricated and packaged using the same process. The Ga2O3 SBD with the thinned substrate showed a smaller Rth,j-c (figure 10(b)) and a higher surge current capability, illustrating effective through-chip heat removal under steady-state and transient conditions. This superior performance was further validated by 150 W system-level power factor correction circuit measurements, which delivered a high conversion efficiency of 98.9% (figure 10(c)), manifesting the impact of device thermal management on circuit performance.
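A rough one-dimensional estimate, sketched below, illustrates why thinning helps: the through-substrate conduction resistance scales linearly with thickness. The thermal conductivity and die area are assumed values and spreading effects are ignored, so the numbers are indicative only.

```python
# Rough 1D estimate of the Ga2O3 substrate's contribution to R_th for two thicknesses.
# Material and geometry values are assumptions for illustration only.

K_GA2O3 = 15.0        # W m^-1 K^-1, order-of-magnitude thermal conductivity of beta-Ga2O3
DIE_AREA = 9e-6       # m^2 (3 mm x 3 mm die, assumed)

def substrate_resistance(thickness_m: float) -> float:
    """1D conduction resistance R = t / (k * A), ignoring heat spreading."""
    return thickness_m / (K_GA2O3 * DIE_AREA)

for t_um in (550, 70):
    r = substrate_resistance(t_um * 1e-6)
    print(f"{t_um:3d} um substrate: ~{r:5.2f} K/W through-substrate resistance")
```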
Finally, a few other large-area Ga2O3 diodes have been demonstrated with TO packages, including the trench MOS Schottky diode [137] and NiO/Ga2O3 heterojunction p-n diodes [138-140]. Although these works did not focus on thermal management, they report excellent electrical characteristics of the packaged Ga2O3 devices, including high-temperature operation, minimal reverse recovery, high overvoltage ruggedness and nanosecond switching. These results retire many critical risks associated with the electrical performance of packages for UWBG power devices.
Diamond and AlN devices
Among the UWBG semiconductors, diamond and AlN have the theoretical best-in-class power material figure-of-merit [142,143]. Due to the relative immaturity of material synthesis and processing technologies, their device development is still at an early stage, although packaged diamond power devices have been recently demonstrated with bottom-side packages (figures 11(a) and (b)) [10].
UWBG devices are in general attractive for high-temperature applications due to their low intrinsic carrier concentration. An additional feature of diamond is the relatively high activation energy of its dopants, which are incompletely ionized at room temperature. At elevated temperatures, the increased ionization results in a negative temperature coefficient (NTC) of Ron,sp in diamond devices, which helps prevent thermal runaway. Hitoshi et al demonstrated a vertical diamond SBD assembled on a metal-ceramic package (figure 11(c)) [144]. The packaged diamond SBD shows small reverse recovery at high temperatures up to 250 °C (figure 11(d)). Another high-temperature diamond SBD demonstration was reported by Sergey et al [145], with a forward current higher than 10 A up to 200 °C. The authors also compared the device thermal performance using silver paste and Cu-Sn solder as two different types of die-attach.
As illustrated in figures 11(e) and (f), the device mounted with Cu-Sn solder shows a lower peak Tj but a slightly higher forward voltage drop. The Rth,j-c (and the corresponding thermal conductance per unit area) of the two packaged devices were reported to be 3.7 K W−1 (1.4 W cm−2 K−1) for the silver paste and 1.2 K W−1 (4.2 W cm−2 K−1) for the Cu-Sn solder, respectively.
Perez et al [146] investigated the system-level benefits of the NTC effect in diamond devices (figure 12). Diamond SBDs with three different die sizes were compared with a similarly rated SiC SBD, and their heatsinks were optimized accordingly. Allowing the devices to operate at an elevated temperature (up to 500 K for SiC and 1300 K for diamond) permits the use of heatsinks with high Rth values; in this regime, the RON of diamond devices decreases while that of SiC devices increases (figures 12(b) and (c)). Diamond devices therefore exhibit a lower power loss and operate optimally at higher temperatures, easing the heatsink design. As a result, the power loss and heatsink volume of diamond devices can both be three times smaller than those of their SiC counterpart at 450 K (figure 12(d)).
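The sketch below illustrates the opposite trends of conduction loss with temperature discussed above, using made-up on-resistance laws for a diamond-like NTC device and a SiC-like PTC device; it is not based on the data of [146].

```python
# Illustrative conduction-loss comparison for an NTC (diamond-like) and a
# PTC (SiC-like) on-resistance. The temperature laws and coefficients are
# placeholders chosen only to show the opposite trends discussed above.

import numpy as np

T = np.linspace(300.0, 500.0, 5)       # junction temperature, K
I_RMS = 10.0                           # conducted current, A (assumed)

# Assumed on-resistance models (Ohm): SiC rises with T, diamond falls as
# more dopants ionize at elevated temperature.
r_on_sic = 0.08 * (T / 300.0) ** 2.3
r_on_diamond = 0.30 * (300.0 / T) ** 1.5

for t, r_sic, r_dia in zip(T, r_on_sic, r_on_diamond):
    p_sic = I_RMS**2 * r_sic               # conduction loss, W
    p_dia = I_RMS**2 * r_dia
    print(f"T = {t:5.0f} K | SiC loss = {p_sic:5.1f} W | diamond loss = {p_dia:5.1f} W")
```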
To realize the high-temperature application of diamond power devices, challenges are present in high-temperature packages. For example, the difference in CTE and stiffness between diamond and die-attach materials will lead to thermal stress and thus lifetime degradation and reliability issues. Fusté et al [147] simulated the thermomechanical interaction between components in a diamond module. Simulation was conducted for a custom SOT-227 power module ( figure 13) with Si, SiC and diamond. A diamond die shows higher residual effective stress compared with SiC and Si dies during the high-temperature thermal cycle. This issue results from the high elastic modulus and low CTE of diamond, because of which small bending deformation causes high tension on both upper and lower die surfaces. Three different die-attach materials were then investigated for stress distribution and deformation in diamond modules. The results show a saturated stress distribution and a similar accumulated viscoplastic deformation for the three materials.
AlN power devices and high-Al-content AlxGa1-xN channel HEMTs have been studied over the past few years with a focus on their electrical performance, for example the improvement of ohmic contacts [148,149]. Few works have been reported on their device- and package-level thermal management. Owing to the high kT of AlN, bottom-side cooled packages are expected to work well. In addition, AlN devices also present an NTC effect in their Ron,sp, making them suitable for high-temperature applications [150].
AlxGa1-xN is promising for the next generation of lateral power devices but suffers from low kT (see table 1) due to alloying. The thermal behavior of AlxGa1-xN channel HEMTs has rarely been explored. Lundh et al [151,152] performed multidimensional thermal analysis and revealed the interdependence of electronic and thermal transport in AlxGa1-xN channel HEMTs. It should be noted that AlxGa1-xN HEMTs can be made on either a sapphire substrate or a free-standing AlN substrate. The former substrate would make the device thermal management similar to that of Ga2O3 devices, while the latter may make it similar to that of bulk AlN devices. As a summary of this section, thanks to the high kT of AlN and diamond, bottom-side cooled packages are expected to be suitable. However, due to the deep-level doping, AlN and diamond devices may deliver optimal performance at high temperatures, thereby requiring high-temperature packaging. This requirement poses challenges for package design, CTE management, package material selection and package reliability. These issues and their potential solutions will be further discussed in the next section.
Device-package co-design
The major collective push for recent thermal management efforts in the United States began with DARPA's thermal management technology program and continued with the near-junction thermal transport and intra-chip/inter-chip enhanced cooling programs [153-156]. Many results on WBG and UWBG technologies suggest a need for the co-design of power devices and their packages as well as of their electrical and thermal performance.
Bottom-, junction- and double-side thermal management.
As illustrated in sections 4 and 5, substrate thinning and integration with high-kT substrates is an effective method for reducing Rth and dissipating heat from the active region. However, further improvements can be made by employing embedded microfluidics such as microchannel cooling and jet impingement [82,154,157,158]. The high HTC in close proximity to the device can lead to significant improvements in power density. For instance, van Erp et al recently demonstrated a microchannel cooling structure integrated in the Si substrate of GaN-on-Si SBDs [159]. Using this technology, a GaN-on-Si full-wave bridge rectifier was demonstrated and achieved 30 times greater power output than a natural-convection (air-cooled) reference structure [159]. For junction-side thermal management, approaches including the integration of high-kT heat spreading layers, flip-chip integration and microfluidic cooling can be utilized to dissipate heat from the active region. Junction-side thermal management is especially enticing for lateral (U)WBG transistor structures, which typically have their active regions within tens of nanometers of the junction-side device surface, as is the case for GaN HEMTs, AlxGa1-xN HEMTs and lateral Ga2O3 MOSFETs [160-163].
Despite the effectiveness of flip-chip integration for junction- and double-side cooling packages, as exemplified in sections 4 and 5, further optimization is required for successful deployment. For the junction-side package of low-kT devices, the underfill material surrounding the die-attach can be critical. Underfill materials usually exhibit low kT, which matters little for high-kT power devices, where the majority of the heat is removed through the die contact and die-attach. For low-kT devices, however, the underfill material can become a non-negligible heat extraction path. For example, electrothermal simulation suggested that the heat dissipation of Ga2O3 MOSFETs is greatly limited when the device is flip-chip bonded to the carrier substrate with a low-kT epoxy. Increasing the epoxy underfill kT from a typical value of ~1 W m−1 K−1 to 14 W m−1 K−1 (an h-BN-infused epoxy composite) leads to a 66% reduction in the peak Tj rise for a Ga2O3 MOSFET flip-chip integrated with a diamond carrier substrate [24].
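A minimal two-path sketch, assuming representative geometry, shows how raising the underfill kT lowers the combined junction-side thermal resistance when the bump/die-attach path alone is insufficient; all values are placeholders rather than the simulation parameters of [24].

```python
# Two-path sketch of junction-side heat removal in a flip-chip assembly:
# most heat leaves through the metal bumps/die-attach, but for a low-k_T die
# the epoxy underfill path is no longer negligible. All values are assumed.

def parallel(r1: float, r2: float) -> float:
    """Combine two parallel thermal resistances."""
    return r1 * r2 / (r1 + r2)

R_BUMPS = 2.0          # K/W, assumed resistance of the solder-bump/die-attach path

def underfill_resistance(k_underfill: float,
                         thickness_m: float = 50e-6,
                         area_m2: float = 4e-6) -> float:
    """1D resistance of the underfill layer between die and carrier."""
    return thickness_m / (k_underfill * area_m2)

for k in (1.0, 14.0):   # standard epoxy vs h-BN-filled epoxy composite
    r_total = parallel(R_BUMPS, underfill_resistance(k))
    print(f"underfill k_T = {k:4.1f} W/m/K -> combined junction-side R_th ~ {r_total:4.2f} K/W")
```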
Furthermore, the impact of the thermal design on the electrical performance must also be considered. The mismatch of the CTE between different layers and interfaces in the package may lead to considerable mechanical and reliability issues. Similar to bottom-side cooling, introducing high-HTC forced convection cooling can greatly improve the thermal performance. For example, Kwon et al demonstrated junction-side jet impingement cooling via additively manufactured nozzles with air as the coolant, which reduced the peak temperature rise of the tested GaN transistors by ∼65% [164].
More work is also needed to improve the double-side cooling of low-kT devices, such as AlxGa1-xN and Ga2O3. As illustrated in section 5, if substrate heat removal is inefficient, double-side cooling of low-kT devices may bring only minor benefits over junction-side cooling, because the majority of the heat is dissipated through the junction rather than through the substrate. To realize the high current and power densities offered by these UWBG devices, further work is needed to improve heat dissipation through the low-kT device substrate, for example through wafer thinning and heterogeneous integration, so that double-side cooling can be improved.
Heterogeneous integration and TBR.
One highly desirable solution for device-level thermal management is to utilize monocrystalline diamond and AlN as part of the active semiconductor layer. This would position the thermal management solution directly at the source of the heat generation. Lundh et al reported a comparative simulated thermal analysis of lateral transistor structures based on UWBG AlxGa1-xN, Ga2O3 and diamond. It was shown that diamond transistors can have up to ~50 times lower Rth than the other UWBG-based device technologies [165]. Unfortunately, diamond-based device technologies are still plagued by doping limitations, scalability issues and the high cost of producing large-area single-crystal substrates [13,166].
Because of its importance for the thermal management of WBG and UWBG devices, an area of research attracting increasing interest is the understanding, characterization and optimization of interfacial thermal transport [167,168]. For GaN devices on foreign substrates, the TBR has been shown to contribute significantly to the peak Tj rise [169-171]. Typically, this TBR consists of contributions from both interfaces and interfacial layers. Manoi et al suggested that the AlN nucleation layer contributes strongly to the effective TBR in GaN-on-SiC HEMTs: depending on the composition and microstructure of the nucleation layer, an additional 10%-40% temperature rise is possible in GaN HEMTs [172].
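The following sketch shows how an effective TBR adds to the one-dimensional temperature rise of a GaN-on-SiC-like stack; the layer thicknesses, conductivities, heat flux and TBR values are assumed, order-of-magnitude numbers for illustration only.

```python
# Sketch: contribution of an effective thermal boundary resistance (TBR) to the
# near-junction temperature rise of a GaN-on-SiC style stack. Layer values and the
# heat flux are assumptions used only to show how the TBR scales the temperature rise.

Q_FLUX = 5e8            # W m^-2, near-junction heat flux (assumed)

layers = [              # (name, thickness in m, thermal conductivity in W/m/K), assumed
    ("GaN buffer", 1.5e-6, 130.0),
    ("SiC substrate", 100e-6, 370.0),
]

for tbr in (5e-9, 3e-8):             # effective TBR in m^2 K/W, illustrative range
    dt_layers = sum(t / k for _, t, k in layers) * Q_FLUX   # 1D conduction rise
    dt_tbr = tbr * Q_FLUX                                   # additional rise from the interface
    total = dt_layers + dt_tbr
    print(f"TBR = {tbr:.0e} m^2K/W: layers {dt_layers:5.1f} K + interface {dt_tbr:5.1f} K "
          f"= {total:5.1f} K ({100 * dt_tbr / total:4.1f}% from TBR)")
```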
Likewise, for UWBG devices, the TBR at the interfaces must be minimized. Decreasing the TBR must be considered to most effectively deploy the thermal management approaches of bottom-side cooling, junction-side-cooling or double-side cooling packages discussed previously. For example, for Al x Ga 1-x N and Ga 2 O 3 , the thermal conductance across the surrounding UWBG/metal contacts and UWBG/substrate interfaces becomes increasingly important for thermal transport in junction-side and bottom-side thermal management, respectively. Shi et al used time-domain thermoreflectance to measure the TBR of several Ga 2 O 3 /metal interfaces [173]. They found that Ni/Ga 2 O 3 and Cr/Ga 2 O 3 interfaces have the lowest TBRs for Schottky and ohmic contacts, respectively.
For bottom-side thermal management by bonding UWBG semiconductors to a high-k T substrate, the bonding agent and technique introduce an additional R th to the device. Cheng et al have recently reviewed the TBR across heterogeneously integrated surfaces by techniques including transfer bonding, surface-activated bonding, plasma bonding and hydrophilic bonding [174]. Physics-based modeling of interfacial thermal transport is still very much an active area of investigation as typical methods, such as the acoustic mismatch model, diffuse mismatch model and atomic Green's function, all possess inherent limitations. Therefore, these frameworks can fail to fully capture and elucidate the complex interactions occurring at and near the interface, such as the presence of local vibrational modes unique to the interfacial region [175][176][177][178]. Understanding the physical processes that dictate thermal transport within/across WBG/UWBG materials and interfaces will undoubtedly provide some guidance for thermal management and package design and thus requires further research.
Electrothermal co-design.
As mentioned in previous sections, suppression of E-field crowding is beneficial from a thermal perspective, since Joule heat generation associated with the E-field is also reduced and more evenly redistributed in the transistor channel [2]. Therefore, advanced electrothermal co-design should be employed to enhance both electrical and thermal performance [2,179-181]. With this co-design in mind, the aforementioned heterogeneous integration can go beyond mere thermal management to electrothermal management. For example, Zhang et al proposed inserting p-type diamond as a cap layer or back-barrier layer above or beneath the horizontal current channel, so as not only to provide a path for near-junction heat removal but also to act as an E-field management structure that suppresses E-field crowding [123] (figure 14). By inserting a p-diamond back-barrier layer with perfect charge balance beneath the n-type channel, the peak E-field is suppressed. As a result, the peak Tj location moves from the gate edge to the drain edge and its magnitude is lowered.
It is worth mentioning that electrothermal management is not only important for UWBG devices but also for their packages. The package of UWBG devices is expected to withstand a higher E-field than its WBG counterpart. Hence, E-field control itself is a pressing challenge for UWBG packaging. We will elaborate this point in section 6.2.3.
Finally, it is also pertinent to be mindful of both the device application and the thermal management strategies being employed at packaging levels. For applications involving fast-switching transients, such as solid-state circuit breakers, devices may be expected to handle high power loads in the nanosecond to microsecond regime [182]. Lundh et al demonstrated that for a GaN HEMT subject to submicrosecond pulsing, there is no observable temperature rise in the underlying substrate [183]. Similarly, in a millisecond pulse, Xiao et al revealed a small temperature rise in the Ga 2 O 3 substrate of a packaged Ga 2 O 3 SBD [22]. Therefore, for many transient applications, substrate engineering may have less impact and junction-side approaches will be preferable for thermal management.
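A quick order-of-magnitude check, sketched below, supports this picture: the thermal penetration depth L ≈ sqrt(αt) stays within a few micrometers of the junction for sub-microsecond pulses and reaches only tens of micrometers into a Ga2O3 substrate for millisecond pulses. The diffusivities used are rough literature-range values.

```python
# Order-of-magnitude check of how far heat diffuses into the substrate during a
# short pulse: L ~ sqrt(alpha * t). Diffusivities are rough room-temperature values
# used for illustration only.

from math import sqrt

DIFFUSIVITY = {          # m^2/s, approximate thermal diffusivity
    "GaN": 4.3e-5,
    "Ga2O3": 5e-6,
    "SiC": 1.5e-4,
}

for pulse_s in (1e-7, 1e-3):          # 100 ns and 1 ms pulses
    for mat, alpha in DIFFUSIVITY.items():
        depth_um = sqrt(alpha * pulse_s) * 1e6
        print(f"{mat:6s}, {pulse_s:.0e} s pulse: penetration depth ~ {depth_um:7.1f} um")
```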
To address the importance of matching the thermal design with the time scale of its application, several reviews focusing on transient thermal management have recently been published [81,184,185]. In addition to application-specific considerations, such as time-scale matching, the thermal design must also be tuned to match the thermal management scheme at the package level. From this perspective, Zhang and Palacios encouraged device-packaging thermal co-design and offered some practical applications [9]. For GaN FinFETs, if the package is designed to extract heat from the top side of the device, then it is more important to optimize the fin pitch and the inter-fin material for better thermal management [9]. This would have a large impact on the switching speed and loss of FinFETs and thus requires careful electrothermal co-design under switching operation conditions [186].
High-temperature packaging
The adoption of UWBG devices and an overall trend towards higher power density amplify the need for robust and reliable packaging. The ability of these devices to operate at much higher temperatures, due to their low intrinsic carrier concentrations and deep-level dopants, is compelling for many automotive, aerospace, military and downhole applications [187,188]. However, conventional packaging materials and designs are in many cases not adequately suited for operating temperatures exceeding 250 °C. This, in combination with the low kT of some UWBG materials, emphasizes the need for rigorous thermal management design.
As illustrated in section 5, UWBG devices have demonstrated superior high-temperature stability compared with Si and WBG devices, making them inherently suitable for high-temperature applications. However, device packaging has rarely been shown to survive operating temperatures beyond 200 °C [189], with most packages being limited to 250 °C and below [190,191]. Three of the limitations of a high-temperature package are the stability and reliability of the encapsulation, the substrate and the die-attach. Each of these components is critical for the operation and reliability of any packaged device and needs to be carefully selected to account for the elevated temperatures and resulting thermomechanical stresses.
Encapsulant.
The primary limitation for the reliability of high-temperature packages is the encapsulation. The encapsulation serves as a crucial passivation layer and also provides environmental protection to the device [192]. Reliability studies suggest that the dielectric and mechanical strength of commercially available encapsulants degrades significantly before the temperature reaches 250 °C-275 °C [193-195]. The majority of commercial encapsulants also have a low kT, which, if improved upon, could alleviate some of the induced thermal stresses and strains. Thus, it is critical to examine the dielectric, mechanical and thermal stability/conductivity properties when evaluating an encapsulant for high-temperature applications. Silicone elastomers are commonly used in power module packages. They have a relatively low Young's modulus, which helps to alleviate some of the thermomechanical stresses; however, they are subject to the aforementioned degradation when exposed to temperatures above 200 °C [196]. As such, other polymeric, composite and novel encapsulation materials are of great interest. The pertinent properties for material selection are shown in table 2.
Table 2. Critical properties for encapsulation [195,196].
Packaging substrate.
Metal-ceramic substrates are widely used in electronics packaging for structural support, insulation, thermal management and electrical interconnection. With temperatures potentially exceeding 250 °C, thermal cycling in the substrate can cause cracking, warping and/or delamination of the layers, all of which are severe reliability issues [197]. The substrate is also paramount for thermal management, serving as a first-level heat spreading and extraction layer, which puts a critical lens on its kT as well. For these reasons, careful consideration of the mechanical, thermal and electrical properties must be made when selecting a substrate [198]. Table 3 highlights commonly used metal/ceramic substrate technologies [198,199].
Die-attach.
The die-attach must provide a strong connection between the device and its associated substrate while having high electrical and thermal conductivity [200]. In addition, matching the CTE to both the substrate and the device is critical to minimize the thermomechanical stresses seen at the interface. With expected working temperatures surpassing 250 °C, conventional tin- and lead-based solders will either melt or degrade significantly, and as such other attachment methods must be utilized. Table 4 identifies the key properties of commonly used methods and potential high-temperature materials [192].
Nano-silver sintering offers a higher maximum operating temperature, good thermal performance and better electrical conductivity than conventional solder, making it a suitable choice for high-temperature applications [201,202]. Furthermore, large-area silver sintering allows for the reliable construction of multi-layer substrates and bonding of substrates to baseplates, and enables other novel packaging configurations to assist in thermal management and mechanical reliability [203, 204].
New package designs and enhanced cooling.
Another tactic to alleviate some of the generated heat and in turn reduce overall thermomechanical stress is to alter the package layout and architectures. While the basic structures of bottom-, junction-and double-side cooled packaging are illustrated in figure 3, many module-and system-level designs exist and can significantly affect system performance. While this higher-level packaging and integration is not the focus of this article, these exciting research opportunities are worth a mention. For example, in a double-side cooled, multi-chip package, interposers, such as metal bricks, balls or tubes, have been used for device-top interconnection, with Cu being the most widely used interposer material due to its high electrical and thermal conductivity [85]. However, rigid Cu interconnections between the device and the substrates of the power module could bring reliability concerns. Ding et al demonstrated a porous interposer made of sintered silver, which reduces the thermomechanical stresses in the module by 42%-50% with a trade-off of only a 3.6% increase in T j [205].
Lastly, second-level cooling strategies can be implemented to reduce Rth,j-a. Two-phase, jet impingement and immersion cooling, among others, can improve upon the performance of conventional finned heatsinks and liquid cold plates [206]. Some of these cooling technologies are illustrated in figure 8(f). Recently, Gebrael et al demonstrated a novel cooling approach by monolithically integrating a thin insulating material and a conformal Cu coating on power devices [207]. This approach allows the copper to be in close proximity to the devices and has been validated on WBG GaN power devices.
Table 3. Substrate critical properties and material comparison [198,199].
Electric-field control.
While primarily thermal limitations and concerns have been discussed to this point, the package also provides critical support pertaining to E-field control. To fully exploit the higher E-field blocking capability of UWBG devices, the E-field withstand capability of the package must also be higher. To prevent partial discharge, and to mitigate the possible peak temperature induced by a crowded E-field and the resulting risk of device or package electrothermal failure, several techniques can be implemented to either reduce the E-field magnitude or provide ample insulation to increase reliability [26]. The first method is the selection of an encapsulant with a higher dielectric strength. However, the electrical properties of many materials, especially polymers, can change significantly as temperature increases [208]. Nevertheless, the use of passivation coatings, i.e. polyimide conformal coatings, can be a way to supplement this loss of dielectric strength.
The second method is to adjust the ceramic insulating layer of the substrate. Increasing the thickness of the ceramic can reduce the E-field intensity at the triple point but would also increase Rth,j-c [209]. Therefore, it should be used sparingly. In the same vein of substrate adjustment, stacking multiple metal/ceramic bonded substrates has been demonstrated as an effective method of reducing the E-field and improving the partial discharge inception voltage (PDIV) of the package [210]. By stacking two of these substrates, depending on the material and thickness, an improvement of between 53% [28] and 94% [211] in the PDIV was observed. It should be noted that beyond two stacked substrates the benefits are curtailed, as the voltage sharing becomes uneven [211]. In addition, this method may also increase the overall Rth,j-c.
Lastly, the application of a coating to the triple point can be an effective way to increase PDIV. A nonlinear resistive coating between the substrate and the encapsulant has been shown to provide an increase of up to 85% in PDIV [25]. This method is incredibly flexible as it can be utilized in combination with a range of architectures and packaging materials, making it a practical way to improve electrical field management. However, further evaluation is needed to understand the performance under higher operating temperatures.
In summary, the packaging for UWBG devices will play a critical role in their ability to operate at their full potential through enhanced heat dissipation, high-temperature operation, thermomechanical stress and strain reduction, and E-field control.
Conclusion
The last two decades have witnessed revolutionary advances in power electronics enabled by SiC and GaN power devices. Similar advances are also envisioned with the maturation of UWBG devices. These devices promise continued scaling of power electronics towards higher frequency, smaller form factor and higher power density. An unavoidable by-product of this scaling is more heat generation in a smaller chip area, which requires increasingly advanced device- and package-level thermal management to ensure safe device operation and long-term reliability. Thus, thermal management is a critical enabler for exploiting the electrical superiority of WBG and UWBG power devices.
The thermal management of practical power devices has to account for packaging and cooling. Fortunately, WBG and UWBG power device technologies have both achieved the packaging milestone, in the case of the latter very recently. This paper outlines three basic cooling architectures, followed by discussion of critical device structures and material properties for each WBG and UWBG device technology. Thermal management of packaged WBG and UWBG power devices has been comprehensively reviewed.
Thermal management and packaging of UWBG power devices are still in their infancy and face new challenges that are not present in Si and WBG devices and packages, such as the very high E-field and heat flux, the very small die size and the very low kT of some UWBG materials. Additionally, UWBG power devices offer the unique opportunity to operate at very high temperatures, but the lack of packages operational under such conditions has become a critical roadblock. The solutions to these challenges require a new level of device-package, electrothermal co-design. Additionally, breakthroughs in high-temperature, high-voltage packaging technologies are highly desirable for expanding the application space of UWBG power electronics.
Finally, we provide perspectives on the key challenges and potential solutions to UWBG thermal management. These perspectives aim at invoking future research in materials science, physics, devices, packaging and power electronics, as well as manifesting the context for more fundamental electrothermal studies. The exciting research in this area will greatly accelerate the development and deployment of UWBG power devices, and could make a revolutionary change in the landscape of power electronics.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files).
Product Quality Detection through Manufacturing Process Based on Sequential Patterns Considering Deep Semantic Learning and Process Rules
Companies accumulate a large amount of production process data during product manufacturing. Mining sequence data from the production process can enable a company to evaluate the manufacturing process, to find the key factors affecting product quality, and to improve product quality. However, the production process data mainly exist in the form of text. To address this problem, we propose a novel frequent pattern mining algorithm (EABMC) based on the text context semantics and rules of the manufacturing process, which removes redundant sequences and obtains good mining results. In this algorithm, we first use embeddings from language models (ELMo) to improve text similarity matching and to group processes with similar semantics into one class. Then, the manufacturing process unit (MPU) is proposed by extracting the characteristics of manufacturing process data according to the constraints of the manufacturing process and other conditions. These two steps merge and simplify complex manufacturing process sequences. Finally, a closed frequent sequential pattern mining algorithm (CloFAST) is used to explore the important manufacturing process relationships behind a large amount of manufacturing data. Taking data from a production enterprise in Guizhou Province as an example, the validity of the method is verified. Compared with other methods, this method shows greater mining efficiency and better results and can find the key factors that affect product quality, especially for text data.
Introduction
With the advancement of sustainable manufacturing technologies and the development of the world's manufacturing industry, manufacturing companies are paying more and more attention to the connection between products and manufacturing data [1]. Through the information upgrade of manufacturing equipment and software, data in the relevant domains can be acquired, stored, and analyzed during the manufacturing process and then fed back to production to improve production efficiency and yield, to shorten the manufacturing cycle, and to improve product quality. This has become a trend in the manufacturing industry [2,3]. Historical data, such as design information and manufacturing information in an enterprise, contain rich product design and manufacturing knowledge. Mining and analyzing these data have become an important means for enterprises to enhance their competitiveness [4]. In the manufacturing industry, process planning is a kind of experience-based, complex knowledge application activity. The mining of process planning data can help ensure product quality. Therefore, determining how to turn the data into useful knowledge that supports manufacturing process diagnostics and improves product quality has become a focus of research [5].
Data mining technology is one of ten emerging technologies that is predicted to "change the world in the 21st century". Data mining refers to the discovery of hidden information or knowledge from large amounts of data by combining different technologies such as artificial intelligence, statistical analysis, computer science, machine learning, pattern recognition, expert systems, databases, and graph visualization. Different data mining methods, such as association rule mining [6], sequence pattern mining [7], classification [8], clustering [9], text mining [10], and knowledge transfer [11], will also have different effects on data processing and mining results. Therefore, it is especially important to choose the appropriate mining method [12].
CAPP (computer-aided process planning) refers to the use of computer software and hardware, within a supporting environment, to perform the numerical calculations, logical judgments, and reasoning needed to formulate the machining process for parts [13]. In current manufacturing practice, combining CAPP with data mining yields good results: implicit process experience and knowledge can be obtained from historical manufacturing data. The records of daily manufacturing activities are stored in the manufacturing database of the company, with important attributes such as the manufacturing ID, the date of manufacturing, the equipment ID, the manufacturing type, etc. [14]. Due to the numerous constraints of process planning problems and the influence of process planning personnel, the design and optimization of process planning methods are the difficult points of process planning problems.
To obtain more suitable process planning mining results for the manufacturing industry, we should build relevant models according to the characteristics of process planning data. Designing special domain models for process planning problems can improve the efficiency and precision of mining to achieve good process planning knowledge.
Nowadays, many studies are devoted to discovering factors that cause the quality of products to decline during the manufacturing process. Some experts have used association rules [6] for mining process planning knowledge in the past. However, association rules pay more attention to the relationships among transactions, ignoring the chronological order of events. Sequential pattern mining [7] focuses on timing-based, event-related mining, so it is more suitable than association rules for the discovery of process planning knowledge. Because process knowledge often takes the form of text, some process steps are combined and cannot be presented separately; encapsulating them appropriately can still yield good mining results. Therefore, this article draws on natural language processing technology and encapsulates these processes in the form of a manufacturing process unit (MPU). By treating each MPU as an event, the mining time can be reduced and the mining accuracy can be improved.
This article takes the manufacturing of a wheel hub produced by an enterprise in Guizhou Province, China as the research object. It builds an unstructured quality analysis data set by integrating the text data of each production link of wheel manufacturing and proposes a novel closed frequent sequential pattern mining algorithm based on manufacturing process text contextual semantics and manufacturing rules (EABMC) to obtain the key factors affecting product quality. Performing sequential pattern mining on the quality analysis data set can help the company discover quality abnormalities and their influencing factors in the manufacturing process of wheel hub products and find the correct sequential relationships that affect quality. It not only can accurately locate quality problems but also helps companies improve process parameters.
The rest of the sections in this paper are organized as follows: Section 2 reviews previous work related to developments and applications of similarity measurements of text semantics, different algorithms for the manufacturing process, and frequent sequential pattern mining. Section 3 introduces the proposed algorithm, EABMC. Section 4 illustrates a case study to verify the feasibility of the proposed method and the performance of different thresholds on results and compares them with other methods. The results and analysis of the case study are also presented in this section. Finally, Section 5 provides major conclusions and points out the future directions of this paper.
Text Mining and Semantic Similarity Measurement
Text similarity measurement is an important task in natural language processing. Salton and Buckley [15] proposed term frequency-inverse document frequency (TF-IDF), which converts text into a high-dimensional, sparse word matrix and then uses cosine similarity to calculate text similarity. Mikolov et al. [16] proposed Word2vec, which improved the Skip-gram model to obtain higher-quality vectors faster. Blei et al. [17] proposed latent Dirichlet allocation (LDA), a generative probabilistic model used for collections of text corpora. Pennington et al. [18] proposed a new, clear and interpretable language model, GloVe, to form word vectors. The essence of the model is to integrate matrix factorization with word2vec (the state of the art at that time) and to use the nonzero entries of the global word co-occurrence matrix for training instead of using only the local window information of a word. Latent semantic analysis (LSA) [19] is an algorithm that obtains a semantic representation of words or paragraphs through statistical calculation. By mapping the high-dimensional document-word vectors to a latent semantic space, the concepts related to documents and terms are extracted and the relationship between documents and terms is analyzed. The core of LSA is to extract the topics of the documents based on singular value decomposition (SVD), which effectively addresses the problems of synonymy and polysemy that traditional vector models cannot handle. ELMo [20] is a word vector representation model based on a deep learning framework. This model not only represents the grammatical and semantic features of vocabulary but also changes with the context. The model is essentially a combination of the internal hidden-state features of a bidirectional language model trained on a large-scale corpus.
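As a simple baseline for the similarity measures reviewed above, the sketch below computes TF-IDF vectors and pairwise cosine similarities with scikit-learn; the sample process sentences are invented for illustration and are not taken from the case-study data.

```python
# Baseline text-similarity sketch: TF-IDF vectors + cosine similarity.
# The process sentences are invented examples, not data from the case study.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "check that the tooling model is in good condition and complete",
    "check that the tooling model has no offset and the mold surface is tight",
    "vibration aging of the cabin according to the specified standard",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(sentences)       # sparse document-term matrix
sim = cosine_similarity(tfidf)                    # pairwise cosine similarities

for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        print(f"sim(sentence {i}, sentence {j}) = {sim[i, j]:.2f}")
```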
Application of Different Algorithms in the Manufacturing Process
Manufacturing process planning is one of the key aspects of a product's lifecycle [21]. Decision support methodology will help companies to improve their production efficiency [22]. Many scholars have applied advanced algorithms to develop a good manufacturing process to help companies with their decision-making. Su et al. [23] proposed a genetic algorithm based on edge selection to solve the optimal sequence of processing operations with minimum processing cost and satisfying all priority constraints. The strategy based on edge selection can generate feasible solutions during initialization and can ensure that each feasible solution is generated with an acceptable level of probability, thereby improving the convergence efficiency of the Genetic Algorithm (GA). Phung et al. [24] presented an improved clustering algorithm to optimize the operation sequence.
The key concept of this method is to first check the priority constraints to select all possible following operations for the last operation in the sequence and then to compare their driving costs to select the best feasible operation with the minimum driving cost in the sequence.
Wang et al. [25] proposed a general sorting method for machining features, which is used in the process planning of complex parts to solve the one-to-many mapping between machining features and machining operations that leads to an increase in non-cutting tool paths. Milošević et al. [26] proposed a system for a distributed and collaborative environment that can help manufacturing companies and experts discuss, recommend, evaluate, and select the best process plan for manufacturing a series of parts. Cheng and Wang [27] used a data-driven matching method for processing parameters as a process knowledge service. The ability to quickly set process parameters for a new, complex product responds to the needs of customers in a process manufacturing environment.
Applications of Sequential Pattern Mining Algorithm
Sequential pattern mining (SPM) was first applied to shopping data to determine customers' purchase rules. Subsequently, it has been widely used for travel recommendations, movie recommendations, supply chain diagnosis, and many other practical problems [28]. Huang et al. [29] used frequent closed sequence mining technology (ClaSP) to analyze cargo transportation data to help transportation companies discover potential factors that reduce the quality of cargo during transportation. The results showed that this method can be used to determine the causes of low-quality transportation services. Amiri et al. [30] proposed a new prediction model based on sequential pattern mining. This model considers the correlations among different resources and extracts the application's behavioral pattern independently of a fixed pattern length, thereby clearly indicating that pattern-based mining models can provide novel and useful perspectives on some of the problems involved in predicting application workloads. Tsai et al. [31] proposed an effective sequence classification method based on two-stage SPM. In the first stage, during the sequential pattern mining process, a sequential pattern is identified as redundant if it is a subsequence of other sequential patterns. A list of compact sequential patterns (excluding redundant patterns) is generated and used as the representative features of the second stage. Huynh et al. [32] proposed a parallel method, multiple-thread CM-SPADE (MCM-SPADE), for multi-core processor systems. This method uses multithreading over the SPM database, which can improve the performance of SPADE and co-occurrence MAP-SPADE (CM-SPADE).
Many kinds of literature have proposed ways to combine other algorithms with sequential pattern mining to achieve better results. Tarus et al. [33] proposed a knowledge-based hybrid recommendation system based on ontology and sequential pattern mining for recommending e-learning resources to learners. In the proposed recommendation method, ontology is used to model and express domain knowledge about learners and learning resources while the SPM algorithm discovers the learner's sequential learning model. Ding et al. [34] proposed a spatial sequence model of coastal land use based on association rules to mine interesting sequential patterns of land use along the sea and land along the coastal zone. Yuan et al. [28] proposed an SPM algorithm to mine the failure sequence pattern in text data. The algorithm aims to solve the problem of poor structure of text data and the existence of multiple forms of text expression in the same concept. The traditional SPM algorithm cannot be directly applied to text data. Experiments show that the algorithm can effectively mine sequential patterns in text data.
Sequential pattern mining is a data mining method for obtaining frequent sequential patterns from a sequence database. The researchers above studied different algorithms for the manufacturing process to optimize operation sequences and used sequential pattern mining algorithms to mine different types of data. However, they did not combine these methods to seek the rules behind manufacturing features from the perspective of discovering the causes of quality problems. Therefore, a novel closed frequent sequential pattern mining algorithm based on the contextual semantics of manufacturing process text and the manufacturing process unit is proposed to obtain the key factors of product quality.
Wheel Hub Quality Data Analysis
For manufacturing companies, product quality control is critical. In the manufacturing process of the wheel hub, the company has accumulated a large amount of production data and inspection data. In the era of big data, how to use industrial big data mining technology to find the laws of quality transfer from massive time-series production and manufacturing data, so as to achieve effective control and improvement of product quality, is a new problem faced by manufacturing enterprises. Therefore, quality data analysis has become an important requirement of industrial big data. For the collected wheel production process data, traditional probability and statistics methods and data mining algorithms are used to build a complete and targeted analysis model, and the key factors that affect the quality of the wheel hub can be found through correlation analysis, providing reasonable data support for subsequent quality improvement and production process improvement. Figure 1 shows the framework of the product quality analysis process based on manufacturing data. The wheel hub production process flow is incoming materials→forging→X-ray inspection→appearance inspection→deburring→dissolution treatment→aging treatment→lathe surface→lathe shape→fine slot→dovetail slot→deburring→drilling→milling inner cavity→tapping→drilling→cleaning→detection→anodizing→painting→drying→sampling detection→painting→shipping.
The product quality data are defined in Equation (1):

<Product_id, Quality_result>    (1)

where Product_id is the product ID and Quality_result is the product test result. Manufacturing data are the relevant data generated during the production of the product. The manufacturing data for a single process are defined as shown in Equation (2):

<Product_id, Equip_id, Product_time, Shift, Operator, Product_parameters>    (2)

where Product_id is the product ID; Equip_id is the device ID; Product_time is the machining time; Shift is the team ID; Operator is the operator ID; and Product_parameters is the processing parameter set. By combining the above two data sets with Product_id as the association key, the process-based product quality data are obtained as shown in Equation (3):

<(Product_id, Equip_id, Product_time, Shift, Operator, Product_parameters), Quality_result>    (3)

The wheel hub manufacturing data are the relevant data generated during the production procedure of the product, including the intermediate status generated during manufacturing, such as temperature, pressure, and other information; the equipment number; the coding information of the tooling or fixture used; equipment alarm information; qualified and unqualified product information; manufacturing time; quality detection data; and product material traceability information.
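A minimal sketch of assembling the process-based quality records of Equation (3) is given below, joining hypothetical manufacturing records and inspection results on Product_id with pandas; the field values are invented placeholders.

```python
# Minimal sketch: join manufacturing records and inspection results on Product_id
# to form process-based quality records. All field values are invented placeholders.

import pandas as pd

manufacturing = pd.DataFrame([
    {"Product_id": "H001", "Equip_id": "E12", "Product_time": "2020-05-01 08:10",
     "Shift": "S1", "Operator": "OP07",
     "Product_parameters": {"mold_temp": 320, "forming_temp": 480}},
    {"Product_id": "H002", "Equip_id": "E12", "Product_time": "2020-05-01 09:40",
     "Shift": "S1", "Operator": "OP07",
     "Product_parameters": {"mold_temp": 335, "forming_temp": 495}},
])

quality = pd.DataFrame([
    {"Product_id": "H001", "Quality_result": "qualified"},
    {"Product_id": "H002", "Quality_result": "unqualified"},
])

# Equation (3): process-based quality data keyed by Product_id.
process_quality = manufacturing.merge(quality, on="Product_id", how="inner")
print(process_quality[["Product_id", "Equip_id", "Operator", "Quality_result"]])
```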
Product quality factor analysis can use data mining methods to find the influencing factors that cause bad products. In this paper, sequential pattern mining is used as the algorithm for wheel hub quality analysis. The names and contents of the processes in the production of the wheel hub, the characteristic values of the process parameters (mold temperature, forming temperature, billet temperature, and dissolution time), forging operator number (FO_ID), heat treatment operator number (HTO_ID), production equipment number (PE_ID), production shift number (PS_ID), production workshop number (PW_ID), and batch number (B_ID) are used as input to the sequential pattern mining algorithm, and the product quality detection results are divided into qualified and unqualified. The sequential pattern mining algorithm will output a series of frequent sequences that satisfy the support and confidence as the mining results. Because the traditional sequential pattern mining algorithm is not suitable for unstructured data such as text, to effectively solve the mining and analysis of the enterprise hub quality data, this paper improves the sequential pattern mining algorithm.
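To illustrate how such records can be turned into input for a sequential pattern miner, the sketch below encodes each product as an ordered sequence of discretized process events ending with its quality label and counts length-2 subsequences against a minimum support. This is only a naive illustration of the sequence format; it is not the EABMC or CloFAST algorithm, and the event names and thresholds are invented.

```python
# Naive sketch of sequential-pattern-mining input: one ordered event sequence per
# product, terminated by its quality label. Event labels are invented placeholders.

from collections import Counter
from itertools import combinations

sequences = {
    "H001": ["forging:mold_temp_high", "heat_treatment:OP07", "lathe_surface:E12", "qualified"],
    "H002": ["forging:mold_temp_high", "heat_treatment:OP07", "lathe_shape:E15", "unqualified"],
    "H003": ["forging:mold_temp_normal", "heat_treatment:OP03", "lathe_shape:E15", "qualified"],
}

MIN_SUPPORT = 2  # a pattern must occur in at least this many product sequences

# Count length-2 order-preserving subsequences across all product sequences.
pattern_counts = Counter()
for events in sequences.values():
    pattern_counts.update(set(combinations(events, 2)))

for pattern, count in pattern_counts.items():
    if count >= MIN_SUPPORT:
        print(f"frequent pattern {pattern} (support = {count})")
```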
Measurement of Text Semantic Similarity Based on Contextual Word Embedding
Manufacturing process text is unstructured data carrying domain knowledge and contains a large number of manufacturing-related terms. However, because different operators have different understandings and applications of related terms, the same professional term is often expressed with different words. In text mining, different words are often represented by different labels, which not only increases the amount of data but also makes the process semantics less readable. Traditional machine learning methods cannot effectively link semantic terms with their context. Therefore, this article uses contextual word embedding to represent all text vocabulary in the form of word vectors. In this representation, the method learns the semantic correlations between words and uses the contexts in which words appear to capture contextual connections. Table 1 shows the characteristics of different language embedding models. The bag-of-words model uses one-hot encoding: a word has a value of 1 when it appears and 0 otherwise, and the vector dimension equals the number of distinct words, so when the number of texts increases, the vector dimension and the amount of calculation increase accordingly. Any two words are independent of each other, so this model cannot reflect the semantic relationships of the text. The topic model can give the theme of each document in the document set in the form of a probability distribution, so that, after analyzing some documents to extract their topic distributions, text similarity can be computed according to the topic distributions. The word embedding distance model is based on word2vec technology. After converting all the words into vectors, the cosine value between words is calculated to obtain the similarity between texts. Since words and vectors are in a one-to-one relationship, the problem of polysemy cannot be solved. The latent semantic analysis model reduces the dimensionality of the word-document co-occurrence matrix using singular value decomposition (SVD), so that text similarity can be measured by the cosine similarity of two low-dimensional vectors. The contextual word embedding model is no longer just a word-to-vector correspondence but a trained model. When in use, a sentence or a paragraph is input into the model, and the model infers the word vector corresponding to each word according to the context. One obvious benefit is that, for polysemous words, the intended meaning can be understood in combination with the surrounding context. Therefore, this paper uses a contextual word embedding model in the follow-up study.
ELMo is a pretrained contextual word embedding model, shown in Figure 2, which uses a bidirectional long short-term memory (LSTM) language model consisting of a forward and a backward language model. The model addresses two problems: the complex semantic and grammatical characteristics of word usage, and the fact that these usages vary with the linguistic context, so the representations should change as well. The representation of each word is therefore a function of the entire input sentence. The specific method involves training a bidirectional LSTM with a language-model objective on a large corpus and then using the LSTM to generate the word representations. Given a sequence with N tokens (t_1, t_2, ..., t_N), the objective function is the maximum log-likelihood of a bidirectional (forward and backward) language model, as shown in Equation (4):

\sum_{k=1}^{N} \left[ \log p(t_k \mid t_1, \ldots, t_{k-1}; \Theta_x, \overrightarrow{\Theta}_{LSTM}, \Theta_s) + \log p(t_k \mid t_{k+1}, \ldots, t_N; \Theta_x, \overleftarrow{\Theta}_{LSTM}, \Theta_s) \right]    (4)
where \Theta_x represents the token representation parameters, \overrightarrow{\Theta}_{LSTM} and \overleftarrow{\Theta}_{LSTM} represent the LSTM parameters in the two directions, and \Theta_s represents the parameters of the softmax layer. For each token k, an L-layer bidirectional language model computes 2L + 1 representations, as shown in Equation (5):

R_k = \{ x_k, \overrightarrow{h}_{k,j}^{LSTM}, \overleftarrow{h}_{k,j}^{LSTM} \mid j = 1, \ldots, L \}    (5)
where R_k is the set of representations of token k and h_{k,j}^{LSTM} is the hidden state of the j-th layer, which is equal to the concatenation of the forward and backward hidden states (with h_{k,0}^{LSTM} taken to be the token embedding x_k). The ELMo representation of token i is calculated by Equation (6):

ELMo_i^{task} = \gamma^{task} \sum_{j=0}^{L} s_j^{task} h_{i,j}^{LSTM}    (6)
where \gamma^{task} is a scalar factor that adjusts the vector scale according to the characteristics of a specific task and s_j^{task} are the softmax-normalized layer weights. In natural language processing, the data involved are often contextual, and traditional feedforward neural networks are unable to process such data well. A recurrent neural network (RNN) is a typical neural network structure applied to sequence data. This network processes sequence data by introducing directed loops. The structure of the RNN is divided into three layers, namely the input, hidden, and output layers. The hidden layers can be connected back and forth, so that the information of the current state can be passed to the next state as part of its input. In this way, the nodes in the sequence can obtain previous information. However, when the sequence data become long, the RNN cannot handle this problem well. As a special RNN, the long short-term memory (LSTM) network selectively retains context information through a specially designed gate structure, which can effectively solve the problems of gradient explosion and gradient vanishing when an RNN processes long sequence data. Equations (7)-(11) show the operation mechanism of the LSTM:

i_t = \sigma(W_i [h_{t-1}, x_t] + b_i)    (7)
f_t = \sigma(W_f [h_{t-1}, x_t] + b_f)    (8)
o_t = \sigma(W_o [h_{t-1}, x_t] + b_o)    (9)
c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c [h_{t-1}, x_t] + b_c)    (10)
h_t = o_t \odot \tanh(c_t)    (11)
where \sigma is the sigmoid activation function; \tanh is the hyperbolic tangent activation function; x_t is the unit input; i_t, f_t, and o_t are the input, forget, and output gates at time t; W and b are the weight matrices and bias vectors of the gates; c_t is the cell state at time t; and h_t is the output at time t.
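A minimal numpy sketch of one LSTM time step implementing Equations (7)-(11) is given below; the dimensions, random weights and toy input are illustrative only.

```python
# Minimal numpy sketch of one LSTM step implementing Equations (7)-(11).
# Weight shapes and the random initialization are illustrative only.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM time step: returns the new hidden state h_t and cell state c_t."""
    z = np.concatenate([h_prev, x_t])                 # [h_{t-1}, x_t]
    i_t = sigmoid(params["W_i"] @ z + params["b_i"])  # input gate, Eq. (7)
    f_t = sigmoid(params["W_f"] @ z + params["b_f"])  # forget gate, Eq. (8)
    o_t = sigmoid(params["W_o"] @ z + params["b_o"])  # output gate, Eq. (9)
    c_tilde = np.tanh(params["W_c"] @ z + params["b_c"])
    c_t = f_t * c_prev + i_t * c_tilde                # cell state update, Eq. (10)
    h_t = o_t * np.tanh(c_t)                          # output, Eq. (11)
    return h_t, c_t

rng = np.random.default_rng(0)
dim_x, dim_h = 4, 3
params = {name: rng.normal(scale=0.1, size=(dim_h, dim_h + dim_x))
          for name in ("W_i", "W_f", "W_o", "W_c")}
params.update({name: np.zeros(dim_h) for name in ("b_i", "b_f", "b_o", "b_c")})

h, c = np.zeros(dim_h), np.zeros(dim_h)
for x in rng.normal(size=(5, dim_x)):                 # a toy 5-step input sequence
    h, c = lstm_step(x, h, c, params)
print("final hidden state:", h)
```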
Words before and after each word will affect it, so we must fully consider the context of the text. Therefore, this paper uses a bidirectional long short-term memory network (BiLSTM) for feature extraction, in which the forward and backward hidden states together represent each word. In addition, we connect an attention layer to the BiLSTM layer. In this layer, we calculate the attention score from the query vector (Q), key vector (K), and value vector (V) through Equation (12):

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left( \frac{Q K^T}{\sqrt{d_k}} \right) V    (12)

where d_k is the dimension of the key vectors, by which the dot product of Q and K is scaled. Q, K, and V are calculated from the same input, which in this model is the BiLSTM representation. Cosine similarity is used to measure the similarity of text; Equation (13) defines the text similarity:

\mathrm{sim}(A, B) = \frac{\sum_k A_k B_k}{\sqrt{\sum_k A_k^2}\,\sqrt{\sum_k B_k^2}}    (13)
where A and B are the two vectors being compared and A_k and B_k are their components. We then propose the structure of the sentence similarity model based on ELMo and the Attention-BiLSTM (EAB); the structure flow is shown in Figure 3. Table 2 shows a comparison of three algorithms on the CCKS2018 dataset. The results show that the proposed EAB algorithm performs better than the others. Table 3 shows five sample sentences, and Figure 4 shows the clustering results in 2D space.
Table 3. Five sample sentences.
Sentence ID  Sentence Content
A  Check that the tooling model is in good condition and complete and that the preparation of chilled iron, core sand, and alloy meet the process requirements.
B  Check that the tooling model has no offset and that the surface of the mold is tight.
C  Check that the tooling model is in good condition.
D  Fluorescence inspection of the cabin according to HVJ40·23001.
E  Vibration aging of cabin according to Ez2082·34002.
Manufacturing Process Unit
By defining the MPU, the part process planning becomes a problem of sorting and optimizing the MPUs, and the ordering and optimization must satisfy the constraints and the ordering rules.
Process constraints exist in every section of the part's manufacturing activities, including knowledge constraints, resource constraints, technical constraints, and order constraints.
Knowledge constraints: Knowledge constraints mean that all manufacturing methods and manufacturing sequences must be selected to conform to process knowledge, process rules, and process standards. When designing a process plan, the process rules to be followed include roughing, finishing, benchmarking, and clustering rules, as well as some feature-specific criteria; threading, for example, is usually arranged after turning of the outer circle and before rough grinding of the outer circle.
Resource constraints: Resource constraints refer to the manufacturing conditions, manufacturing equipment, manufacturing materials, and other material conditions that are available inside the enterprise. The MPU may be available with a variety of manufacturing resources, which allows for more alternatives when selecting manufacturing resources, but it also introduces complexity into manufacturing decisions.
Technical constraints: Technical constraints refer to the specific shape of the part's geometry and its technical conditions (shape tolerance, surface roughness, accuracy grade, etc.) and are the basis for selecting the manufacturing method. For example, rotary parts are mainly machined by turning, and contour-type parts are mainly machined by milling.
Order constraints: Order constraints refer to the specific requirements of the supply contract with the customer for the product, such as the order time, order method, and so on. The content of the contract has an important impact on the organization of production, process flow, technical specifications of the implementation, and so on.
From the above-described manufacturing feature information model, it is known that process data with a unit as a carrier is packaged in each MPU. Process sequencing is the sequential arrangement of all MPUs to form a parts processing sequence. To this end, there are the following process sequencing criteria.
Process ordering rule 1-general guidelines: this refers to the knowledge constraints that must be met when sorting MPUs.
Process ordering rule 2-customization criterion: in addition to this general criterion, when the topology of parts is too complex or the processing conditions of enterprises are limited, technicians need to make some artificial regulations on the sequence of MPUs according to the current process conditions.
The part feature refers to a combination of a series of information including a certain structural shape, manufacturing accuracy, and assembly requirements of the part. Part features are generally divided into two categories: (1) basic features, which are features that build the part's geometry topology and are not capable of secondary splitting, such as planes, holes, etc., and (2) auxiliary features of the main features, which can be split twice, such as threads, keyways, etc.
For example, Part A has a total of n features and can be expressed as A = (a_1, a_2, ..., a_n), where a_i represents the ith processing feature, 1 ≤ i ≤ n.
The MPU is the basic unit that constitutes a feature of the part. For example, for the hole feature ∅60K6 (+0.002 to +0.021), the MPU for completing its dimensions and tolerance is expressed as Rough Turning → Half-Fine Turning → Rough Grinding → Fine Grinding. For part A, feature ϕ_i is ϕ_i = (ω_1, ω_2, ..., ω_n), where ω_i represents the ith process unit, 1 ≤ i ≤ n.
In part processing, the processing resources corresponding to each process unit are different. Processing resources mainly refer to machine tools, cutters, fixtures, etc. Therefore, the processing unit can be seen as a collection of processing resources. We assume that the machine tool set is X = (x_1, x_2, ..., x_m), the cutter set is Y = (y_1, y_2, ..., y_n), and the fixture set is Z = (z_1, z_2, ..., z_o), where m, n, and o are the numbers of machine tools, cutters, and fixtures in the manufacturing sector. For part A, process unit ω_i can be written as ω_i = (x_α, y_β, z_γ), where x_α, y_β, and z_γ represent the machine tool, cutter, and fixture needed for manufacturing process unit ω_i, respectively.
A part consists of different features, and each of them has several serial or parallel relationships. Many manufacturing features have evolved. Figure 5 shows the manufacturing process model with six MPUs. For MPU 1, the processing unit 101 is in the front and processing unit 102 is in the back; MPU 2, MPU 3, and MPU 4 belong to a parallel relationship. A directed acyclic graph consists of several nodes and arcs, and there is no closed-loop; this is called the sequence of process units.
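As an illustration of how the feature, MPU, and resource notation above can be held in a program, the sketch below encodes each process unit with its resources and the precedence relations between MPUs as a directed acyclic graph; the class names, example resources, and the topological-sort routine are illustrative assumptions and not part of the original system.

from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class ProcessUnit:
    name: str          # e.g. "Rough Turning"
    machine_tool: str  # x_alpha, drawn from the machine tool set X
    cutter: str        # y_beta, drawn from the cutter set Y
    fixture: str       # z_gamma, drawn from the fixture set Z

@dataclass
class MPU:
    mpu_id: int
    units: list        # ordered process units (omega_1, ..., omega_n)

def topological_order(mpus, precedence):
    """Order MPUs so that every predecessor comes before its successors.
    `precedence` maps an MPU id to the ids that must follow it; the graph is
    assumed to be a DAG, as in the text (no closed loop)."""
    indegree = defaultdict(int)
    for src, dsts in precedence.items():
        for d in dsts:
            indegree[d] += 1
    queue = deque(m.mpu_id for m in mpus if indegree[m.mpu_id] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for d in precedence.get(node, []):
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(mpus):
        raise ValueError("precedence relation contains a cycle")
    return order

# Hypothetical example mirroring Figure 5: MPU 1 precedes MPUs 2-4, which are parallel.
mpu1 = MPU(1, [ProcessUnit("Rough Turning", "x1", "y1", "z1"),
               ProcessUnit("Fine Turning", "x1", "y2", "z1")])
mpu2, mpu3, mpu4 = MPU(2, []), MPU(3, []), MPU(4, [])
print(topological_order([mpu1, mpu2, mpu3, mpu4], {1: [2, 3, 4]}))  # [1, 2, 3, 4]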
EABMC Sequential Pattern Mining Algorithm
Process knowledge is an important part of manufacturing. To improve manufacturing precision and to shorten the manufacturing cycle, it is necessary to obtain the process knowledge that is urgently needed by the current processing enterprises from their historical process data. The structure of mechanical parts is composed of a limited number of typical manufacturing features, which are reassembled according to the functions of the parts. Therefore, the current process planning of mechanical parts mainly focuses on the selection of manufacturing methods and manufacturing equipment and the arrangement of the manufacturing sequence. After the long-term accumulation of these works, the historical process data become the experience and rules that are often used in the process planning of mechanical parts in a period, which is embodied in the typical process sequence and process decision rules. There are a large number of process-related data in the CAPP system, such as material types, part features, processing methods, processing equipment, tools, etc. The data related to the part processing process represent the process knowledge. Similar parts tend to have similar manufacturing processes. This paper proposes the use of the frequently closed sequential pattern mining algorithm based on the text contextual semantics of the manufacturing process (EAB) and manufacturing process units (MPU). The algorithm is shown in Figure 6, which includes the construction of the process knowledge requirement model, construction of the process knowledge data model, and recommendation of frequent key MPU patterns. Through the analysis and mining between the wheel hub manufacturing data and the wheel inspection unqualified data, the influencing factors of the production process with a high unqualified rate are found, and then, the main factor sequence collection of wheel quality is sorted out.
Figure 6. EABMC sequential pattern mining model.
The previous association rules are used to mine the process knowledge, and after removing the manufacturing time, manufacturing equipment, and other data, the processing sequence is obtained. The transaction set T is a collection of the processes of a component, T = {t_1, t_2, ..., t_n}, where t_n is the process data for each item. Sequence pattern mining and association rule mining are similar in many respects, but sequence pattern mining pays more attention to the sequential relevance of data. The objects and results of sequential pattern mining are ordered, that is, the entries of each sequence in the data set are ordered in time or space and the output results are also ordered. Therefore, it is more suitable for the mining of a typical process sequence and the reasoning of process decisions than association rules.
The pattern of sequential pattern mining is as follows: X → Y, where X ⊆ I, Y ⊆ I, and X ∩ Y = ∅. X is the former term, and Y is the latter term. The probability that the items contained in itemset X and itemset Y appear simultaneously in the transaction set T is recorded as the support of the sequence pattern, Sup(X ⇒ Y); together with the confidence Conf(X ⇒ Y), it is shown in Equations (14) and (15) and is an important indicator of the sequence pattern.
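In their usual forms, assumed here to match the description above, the two measures are
Sup(X ⇒ Y) = |{ t ∈ T : (X ∪ Y) ⊆ t }| / |T|   (14)
Conf(X ⇒ Y) = Sup(X ∪ Y) / Sup(X)   (15)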
In a transaction containing itemset X, the conditional probability of the occurrence of itemset Y is the confidence of the sequence pattern, which is a measure of the accuracy of the sequence pattern and is used to measure the strength of the sequence pattern. The minimum support degree min_sup and the minimum confidence level min_conf are set. If the support degree of an itemset is greater than or equal to the minimum support degree min_sup, it is called a frequent itemset; if the confidence level of a frequent itemset is greater than or equal to the minimum confidence level min_conf, then the frequent itemset is called a strong rule. Therefore, the problem of mining the sequence pattern in the transaction database can be divided into the following two processes: (1) find all itemsets that satisfy the minimum support, that is, obtain the frequent itemsets, and (2) generate strong association rules from the frequent itemsets.
Sequential pattern mining is the subject of data mining. It involves finding statistically relevant patterns between data examples. In these examples, the values are passed in order. It is generally assumed that these values are discrete, so time series mining is closely related, but it is generally considered to be a different activity. Sequential pattern mining is a special case of structured data mining [35].
In this section, we first introduce some preliminary concepts and then formalize the closed sequential pattern mining problem.
Definition 1. (Itemset)
An itemset is a set containing m different items, referred to as I = {i_1, i_2, ..., i_m}.
Definition 2. (Sequence) A sequence is a complete stream of information, identified by a sequence ID (abbreviated as SID). Sequence Y is written as an ordered list of itemsets.
Definition 3. (Sequence attribute) Each sequence has a unique identifier (Sid), and each itemset of a sequence has a temporal itemset identifier (Eid), i.e., a timestamp, which is unique within the sequence.
Definition 5. (Frequent sequence)
Given the minimum support threshold, if the support of sequence Y in the sequence database is not less than the threshold value, Y is called a frequent sequence (FS).
Definition 6. (Frequently closed sequence) If a sequence is frequent and no super-sequence of it has the same support, the sequence is a frequently closed sequence (FCS).
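A minimal sketch of Definitions 5 and 6 follows, assuming sequences are represented as tuples of items and that containment means ordered (not necessarily contiguous) subsequence containment; the helper names are illustrative and not from the original implementation.

def is_subsequence(sub, seq):
    """True if `sub` occurs in `seq` in order (not necessarily contiguously)."""
    it = iter(seq)
    return all(item in it for item in sub)

def support(pattern, database):
    """Fraction of sequences in the database that contain the pattern."""
    return sum(is_subsequence(pattern, s) for s in database) / len(database)

def frequent_and_closed(patterns, database, min_sup):
    """Split candidate patterns into frequent sequences (FS) and
    frequently closed sequences (FCS) per Definitions 5 and 6."""
    fs = [p for p in patterns if support(p, database) >= min_sup]
    fcs = []
    for p in fs:
        sup_p = support(p, database)
        # closed: no super-sequence has the same support
        has_equal_super = any(
            q != p and is_subsequence(p, q) and support(q, database) == sup_p
            for q in fs
        )
        if not has_equal_super:
            fcs.append(p)
    return fs, fcs

db = [("a", "b", "c"), ("a", "b"), ("a", "c")]
fs, fcs = frequent_and_closed([("a",), ("a", "b"), ("a", "b", "c")], db, min_sup=0.5)
print(fs)   # [('a',), ('a', 'b')] -- ('a', 'b', 'c') has support 1/3
print(fcs)  # [('a',), ('a', 'b')]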
There are several algorithms for mining sequential patterns, such as FAST [36], GSP [37], SPADE [38], and PrefixSpan [39]. These algorithms show good performance in databases that contain short frequent sequences or support thresholds that are not very low. Closed sequence mining [40] aims to reduce the number of sequences that exceed the threshold and to pick long sequences to reduce the amount of calculation and time. The closed sequence pattern mining algorithm has better performance than other sequence pattern mining algorithms and is favored by more and more users.
The closed FAST sequence mining algorithm based on sparse ID lists (CloFAST) is a novel algorithm for mining closed frequent sequences of itemsets, proposed by Fumarola et al. [41]. The EABMC (details are shown in Algorithm 1) combines natural language processing, process rules, and sparse ID list and vertical ID list technologies. Its theoretical properties are studied to quickly count the support of sequential patterns, with a novel one-step technique to both check sequence closure and prune the search space. EABMC performs better than other closed sequential pattern mining algorithms.
In the first database scan, EABMC was used to find the frequent itemsets and to establish their sparse ID lists (line 2). Then, it also found the frequent closed itemsets and built their sparse ID lists (line 4). This was achieved by constructing a closed itemset enumeration tree (CIET) based on a modified version of the FAST algorithm [36], which integrates the marking and pruning techniques proposed in Moment [42]. Lines 5 to 12 initialize the first level of the closed sequence enumeration tree (CSET). Each node in the first level represents a (candidate) closed sequence of size 1, whose only element is a closed frequent itemset. The vertical id-list (VIL) of a first-level node can be directly calculated from the sparse id-list (SIL) of the closed frequent itemset. Starting from the first layer, according to the depth-first search strategy, the nodes in the CSET are expanded by sequence extension. During the mining process, the current closed sequential pattern set is stored in the CSET. Finally, EABMC returns the complete set of closed sequential patterns in the CSET.
The EABMC algorithm proceeds in two steps: (1) It generates a subset of frequent sequences (FS) and a superset of closed frequent sequences (CFS), called closed frequent candidates (CFC), and this subset is stored in the main memory.
(2) It performs a post-pruning stage to eliminate all non-closed sequences from the CFC and finally obtain the exact set of closed frequent sequences (CFS).
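The two steps can be summarized in the sketch below, which reuses the support and is_subsequence helpers from the earlier sketch; it is a simplified illustration of the generate-then-prune idea rather than the actual EABMC implementation, and the function names, including the caller-supplied generate_candidates, are assumptions.

def mine_closed_sequences(database, min_sup, generate_candidates):
    """Two-step outline: (1) generate closed frequent candidates (CFC),
    (2) post-prune candidates absorbed by an equally supported super-sequence."""
    # Step 1: candidate generation (depth-first extension of the CSET in the paper);
    # delegated here to a caller-supplied generator for brevity.
    cfc = [(p, support(p, database))
           for p in generate_candidates(database, min_sup)
           if support(p, database) >= min_sup]

    # Step 2: post-pruning -- keep only patterns with no super-sequence of equal
    # support, i.e. the closed frequent sequences (CFS).
    cfs = []
    for pattern, sup in cfc:
        absorbed = any(
            other != pattern and is_subsequence(pattern, other) and other_sup == sup
            for other, other_sup in cfc
        )
        if not absorbed:
            cfs.append((pattern, sup))
    return cfs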
Data Preprocessing
All the results reported in this paper were obtained on a PC with an AMD Ryzen 7 1800X eight-core processor and 32 GB of RAM, and the analysis language used was Java. The manufacturing process dataset used in the example is stored in a large manufacturing database of a foundry company from Guizhou Province, China. In the study, a foundry product was used as an example. There are 29,687 history records from the years 2017 to 2018. By performing data preprocessing, the relevant demand data were obtained. The part of the manufacturing process handled by EAB and MPU is shown in Table 4. In sequential pattern mining and analysis, some key sequences are often selected, and frequent sequences are judged by the number of occurrences of key sequences. Sparse data are generated when there are many key sequences but each sequence contains only a small number of them. The sparseness of the data can cause deviations or even errors in the mining results. The processing unit proposed in this paper can solve the sparseness problem of the data: by combining processes, the number of process categories is reduced and the number of key sequences in each sequence record is increased. The input data description is shown in Table 5.
Discussion of Minimum Support Count
The minimum support in the data mining algorithm is the threshold that will affect the accuracy of the mining result. A high threshold will lose much of the significant information, and a low threshold will increase the workload. Choosing the right threshold is critical for sequential pattern mining.
The data set obtained by the preprocessing steps was used as a specific data set for determining the support threshold, the CloFAST algorithm was applied, and then the judgment result of the support threshold was obtained. The results are shown in Figure 7.
The experiment set min_sup = 0.01 as the initial value, and as min_sup increased, the number of closed sequential patterns decreased. When min_sup = 0.04, there were 241 closed frequent sequential patterns, which was equal to the count of closed frequent sequential patterns for min_sup = 0.05. Thus, an optimal support threshold of min_sup = 0.04 was finally obtained.
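The threshold choice described above, taking the smallest min_sup at which the number of closed patterns stops changing, can be automated with a few lines; the counts dictionary below is a hypothetical stand-in for re-running the miner at each candidate threshold (only the 241 values at 0.04 and 0.05 come from the text).

def choose_min_sup(counts):
    """Pick the smallest threshold whose pattern count equals that of the next
    larger threshold (the first plateau / extreme point in the curve).
    `counts` maps candidate min_sup values to the number of closed frequent
    sequential patterns mined at that value."""
    thresholds = sorted(counts)
    for lo, hi in zip(thresholds, thresholds[1:]):
        if counts[lo] == counts[hi]:
            return lo
    return thresholds[-1]  # fall back to the largest threshold tried

# Hypothetical counts shaped like the experiment described in the text.
counts = {0.01: 912, 0.02: 560, 0.03: 318, 0.04: 241, 0.05: 241}
print(choose_min_sup(counts))  # 0.04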
The Chi-square test [43] is a method that is commonly used in statistics for data analysis. It is mainly used to compare two or more sample rates (composition ratios) and to carry out a correlation analysis of two categorical variables. This method classifies data into different parts to ensure independence among the categorical data points.
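As a concrete illustration of such a test, the SciPy routine below compares unqualified-product rates between two process groups; the observed table is invented for illustration and is not from the paper's data.

from scipy.stats import chi2_contingency

# Rows: two randomly split subsets; columns: qualified vs. unqualified counts.
# These numbers are purely illustrative.
observed = [[10050, 70],
            [10102, 59]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
# A large p-value suggests the two subsets have comparable unqualified rates,
# i.e. the random split did not change the composition being analysed.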
The dataset was divided into two data sets, D_1 and D_2, randomly. The CloFAST algorithm was applied to obtain the results shown in Figure 8. When the dataset was reduced to half of the original size, D_1 showed its first extreme point at min_sup = 0.04, and D_2 also showed its first extreme point at min_sup = 0.04. Then, the dataset was randomly divided into four data sets: D_3, D_4, D_5, and D_6. The CloFAST algorithm was applied, and it showed that the first extremum point appeared at min_sup = 0.04 for all four data sets, as shown in Figure 8. Therefore, based on the observations from Figures 7 and 8, the minimum support value was set to 0.04 in the following experiments.
Table 6 shows the number of sequences produced under different methods. Through our method, we were able to remove the repetitive, synonymous, and redundant sequences, which provided a good premise for the next step of sequential pattern mining. To verify that the performance of the improved method is superior to that of the traditional algorithms, we compared the proposed EABMC with CloFAST, MPU-CloFAST, Word2Vec-CloFAST, ELMo-CloFAST, and Word2Vec-MPU-CloFAST in terms of the accuracy rate (shown in Figure 9), running time (shown in Figure 10), and memory consumption (shown in Figure 11), according to the dataset configuration and after varying the support threshold. In terms of the three main indexes of data mining, EABMC generally outperformed all of the other systems for almost every support value when the number of frequent sequences was higher.
Experiment Results
The sequence rules derived from the sequence pattern mining algorithm can help enterprises to analyze abnormal product quality data and to determine the potential sequence rules that lead to the degradation of product quality. Taking the data of a factory in Guizhou as an example, the data included 20,281 product processing sequences. Through quality inspection, 20,152 items were qualified, 129 items were unqualified, and the product yield was 99.36%. The EABMC algorithm was used to mine the sequence patterns of the 129 unqualified sequences, and 124 frequent sequences were obtained. Table 7 shows some of the mining results. Through the analysis of these 124 sequences, the enterprise could obtain the potential causes of product disqualification. We analyzed the sequence pattern set, combined it with the wheel hub quality fault diagnosis knowledge and process knowledge, summarized and sorted out the main factors affecting wheel hub quality and the process parameter optimization rules, and provided data support for product quality improvement and process optimization.
To more accurately determine the causes of quality degradation, we further analyzed the following: incoming materials→forging and incoming materials→solution treatment are highly supported among the 2-item sequences, indicating that product quality tends to become unqualified in these two processes. From these frequent sequences, it is known that forging and solution treatment are the key procedures that affect the quality of the wheel hub, so companies should focus on these two procedures. Forging→X-ray inspection→drilling means that, after forging, the X-ray inspection and drilling procedures are important reasons for the decrease in wheel hub quality. When the wheel hub manufacturing sequence includes the deburring→solution treatment→fine slot sequence, the quality of the wheel also decreases. With these frequent sequence patterns, the company can pay attention to them when formulating the wheel manufacturing process, and avoiding these sequence patterns can improve the quality of the hub.
Sequence Relation Visualization
The sequence relations obtained by the EABMC algorithm were visualized, and the sequence relation visualization is shown in Figure 12. The visualization revealed a clear temporal direction between the manufacturing processes. The knowledge graph includes all procedures that lead to the reduction of wheel hub quality and the direct connection of each procedure, which can make the enterprise more intuitively understand the factors that lead to the reduction of product quality.
After correcting the factors, obtained by the sequential pattern mining algorithm, that lead to the reduction of wheel hub quality, the wheel hub was manufactured again. Analysis software was used to compare the effect before (shown in Figure 13a) and after (shown in Figure 13b) the improvement. From the average grain size distribution in Figure 13b, it can be seen that, compared with the original scheme, the uniformity of hub grain refinement in the optimized scheme is significantly improved.
Figure 13. Comparison of the grain refinement effect of the wheel hub: (a) average grain size distribution before improvement; (b) average grain size distribution after improvement.
Conclusions and Future Work
This paper has proposed a frequent closed sequential pattern mining algorithm based on the text contextual semantics of the manufacturing process and the manufacturing process rules (EABMC). This algorithm aims to obtain frequent sequence relations of product manufacturing processes to help with the identification of factors affecting product quality and to improve the product quality. We used EAB to merge semantically similar sequential texts, and we utilized the MPU to deal with sequences containing simultaneous occurrences and to reduce impurities in the manufacturing sequence. To get a good mining result, particularly when the volume of the processed data is large, we chose a closed sequential pattern mining approach to decrease the number of sequences beyond the threshold and to pick out long sequences, and we proposed the use of a processing unit that consists of commonly matched processes to reduce the amount of computing and calculating time required. A longer sequence contains a higher amount of manufacturing information. Process design relies heavily on the designer's process knowledge and related experience; the suggested method tries to avoid these human factors. The method sets threshold support values according to the mapping relationship between the count of closed frequent sequences and the support threshold min_sup. In the proposed manufacturing process method, the threshold support value is dynamically adjusted for manufacturing situations, such as for different categories, according to the data connected to the manufacturing database. The generated closed sequential patterns may change depending on the different manufacturing situations, e.g., the category, precision, or personalization. Compared with other methods, this method has a higher level of efficiency and better performance. This paper will help managers make decisions to improve product quality and find important factors related to production and manufacturing that affect product quality.
This model is suitable for text-based sequence data; if the data are mainly structured (non-text) data, it is less suitable. For a specific field such as medical treatment or product recommendation, it is necessary to fine-tune the model with text data from that field to improve the accuracy of the text similarity in the model, to better compress semantically similar data, and to improve the operational efficiency. In sequence pattern mining, the degree of support determines the number of frequent sequences generated after mining: when the support is low, many frequent patterns are generated; when the support is high, few are generated. Too few frequent patterns are not conducive to discovering the relationships between elements, while too many frequent patterns produce many useless relationships to analyze. When applying this model in different fields, it is necessary to select appropriate support parameters according to the actual situation.
Future work will optimize the use of sequential pattern mining to detect the temporal relationships among sequential patterns in manufacturing records. Through integration with intelligent manufacturing technology, a complete knowledge graph of the process sequence relationship will be constructed and research on product quality analysis will be based on the knowledge graph. | 15,069.6 | 2020-06-28T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Experimental Investigations of Effect of Sulphur on Beach Sand–Fly Ash–Asphalt (S-F-A) Paving Mixes
The main components of flexible pavements are asphalt and aggregates. However, in most places in India there is a shortage of good quality aggregates (especially coarse aggregates), while at the same time beach sand is available in plenty in many regions. Due to the relative abundance of beach sand, studies on the use of beach sand in paving mixes are worth taking up. However, a beach sand-asphalt mix alone is not suitable for pavement construction because of its low stability and high air voids. In the present study, Sand-Fly ash-Asphalt-Sulphur (S-F-A-S) mixes are made in different proportions and tested for their properties. Fatigue strength, stability, water sensitivity, stiffness modulus, and dynamic modulus tests are carried out at standard test conditions and the results are analyzed for drawing conclusions. This study investigates the potential use of abundant ingredients, which may replace the ones which are scarce in nature.
Introduction
Asphalt pavement is a crucial part of India's strategy for building a high performance transportation network for the future. Asphalt construction is fast and relatively simple; it is economical, and the materials to make it are widely available [1]. In flexible pavement, asphalt is an expensive constituent, and aggregates are costly and scarce. Kerala is a state in the southern part of India having a vast coastal area with an abundance of fine beach sand. This can be used as an alternative to stone aggregates in order to cater for the lack of good quality aggregate [2]. Fly ash is a by-product of thermal power stations, which is also abundantly available. Fly ash, when added as filler, seems to improve the performance of the mix in multiple ways to create high performance asphalt pavements [3]. Over the last several years, evidence has begun to accumulate that fly ash as a modifier improves the rheology of the mastic and produces multifunctional and synergistic benefits in the mixture [3,4]. Works in the United States and Europe have proven that this modifier can substantially improve the resistance of Hot Mix Asphalt (HMA) to permanent deformation (creep) damage at high temperatures [5]. It also substantially improves low temperature fracture toughness without reducing the ability of the mastic to dissipate energy through relaxation [6]. Extensive research has been done by various researchers throughout the world regarding the addition of fly ash to bituminous mixes [3,4,7,8]. Buttlar et al. [6] used micromechanics to assess the mechanical properties of mineral fillers such as hydrated lime and fly ash, combined with asphalt to form mastics. They concluded that a rigid layer adsorbed to the filler explains the ability of the filler to result in stiffening ratios that are greater than would be predicted based on volumetric concentrations alone. Based on the equivalent rigid layer analysis, physicochemical reinforcement effects play a dominant role throughout the range of filler-to-asphalt ratios encountered in practice. Fly ash, hydrated lime, and lime slurry added to reclaimed asphalt have been shown to improve the ageing kinetics and general rheological properties of reclaimed and recycled asphalt [9]. Furthermore, the addition of lime slurry and fly ash in the cold milling and cold in-place recycling process has proved to be very beneficial [10].
Investigators have found that the properties of sand-asphalt mixes can be improved significantly using sulphur, and studies of adding fly ash to sand-asphalt-sulphur mixes have also given encouraging results. The possibility of incorporating beach sand and fly ash with sulphur in bituminous blends is the aim of this research work. The S-F-A-S mix can be used as an overlay mix on any concrete mix, which may act as a base course [7].
The objectives of the study are: to study the fatigue characteristics of the S-F-A-S mix; to study and compare the indirect stiffness characteristics and the dynamic modulus of the mixes with varying proportions of asphalt and sulphur; and to find out the extent to which sulphur can replace the asphalt content in the above mix.
The study is limited to the stress-strain characteristics of the S-F-A-S mix. The fatigue, stiffness modulus, and dynamic modulus tests were carried out using the Nottingham Asphalt Tester (NAT) at a temperature of 30 ± 1 °C. The dynamic modulus tests were conducted at a stress level of 0.2 MPa.
Experimental Investigation
Materials
Beach sand used for the study was collected from Shankumugham beach in Trivandrum district of Kerala state in India. This sand was very fine; its gradation is shown in Figure 1. The physical properties of the materials used for the present investigations are shown in Table 1. The asphalt was tested to find out the basic properties such as viscosity, specific gravity, softening point, etc. Sieve analysis was conducted on the beach sand for finding out the gradation. The sand was found to be uniformly graded, having a gradation in the range of 300-600 µm. The filler material used for the preparation of specimens was fly ash. The grain size distribution of the fly ash used for the study is shown in Figure 2.
Details of Proportioning of Mixes
For studying the effect of varying proportions of constituents, 1200 g mixes with various percentages, by mass, of ingredients were considered. In all these mixes, asphalt was decreased from 7 to 3% with a decrement of 1%, by mass, of the total mix, while sulphur was increased from 9 to 13% with an increment of 1%, by mass, of the total mix, and the rest was aggregate (i.e., beach sand and fly ash). Different trial mixes were prepared and tested for determining the proportion of beach sand and fly ash. From these trial mixes, it was observed that an equal proportion of fly ash and beach sand, i.e., 42% each, with the remainder sulphur and asphalt, produces a dense, homogenous mix in the case of S-F-A-S mixes.
For mix preparation, beach sand and fly ash were heated to 150 °C, the required quantities of asphalt and sulphur were heated separately to 140 °C and mixed thoroughly with the heated sand-fly ash mix, keeping the mixing temperature at 160 ± 2 °C. The sample was allowed to cool at room temperature and the Marshall stability test was carried out according to the procedure for asphalt mixes specified in ASTM D1559 [4]. The obtained results are shown in Table 2.
From Table 2, it was observed that the 42-42-4-12 (42% beach sand, 42% fly ash, 4% asphalt, 12% sulphur) mix is the superior mix since it shows the maximum Marshall properties. The air voids obtained were much more than the standard value, which is due to the gradation of the aggregates and additives used in the mix preparation. The fines content of the total mix is higher in the sand-asphalt mix compared to the other mixes.
Preparation of Sample using Superpave Gyratory Compactor
Five sets of mix combinations, each set having three cylindrical specimens of diameter 100 mm, were prepared for fatigue and stiffness modulus tests. The different combinations of constituents for the preparation of the mixes are as shown in Table 2. The specimens were compacted by using a gyratory compactor, and the gyrations were applied at the rate of 30 gyrations per minute with a consolidation pressure of 600 kPa. After 80 gyrations, the mould was taken out from the machine and the sample was extracted. The sample was allowed to cool at room temperature, and the stiffness and fatigue tests were done after 24 hours by using the Nottingham Asphalt Tester (NAT), CRT-NU14.
Indirect Tensile Stiffness Modulus Test
The indirect tensile stiffness modulus test was done using the NAT by applying a horizontal stress of 200 kPa. Linear Variable Differential Transducers (LVDT) measure the deformations of the specimen. A total of five pulses were applied, and the stiffness modulus was directly obtained from the equipment. The setup of the indirect tensile stiffness modulus test is shown in Figure 2.
Indirect Tensile Fatigue Test Results
For determining the fatigue life, the indirect tensile fatigue test was carried out. Five combinations of mixes as per the blend shown in Table 2 were used for the fatigue study. The horizontal stress was varied from 100 to 400 kPa initially, and 200 kPa was then selected for the experiments so as to get a reasonable number of load cycles to failure.
Dynamic Modulus Test
The dynamic modulus tests were conducted in accordance with AASHTO Designation TP 62-03 [11], at different frequencies and numbers of cycles using the NAT. The frequency and number of cycles applied are shown in Table 3. The tests were carried out at room temperature, i.e., 30 ± 1 °C, by applying an axial stress amplitude of 200 kPa.
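For reference, the dynamic modulus obtained from this type of test is normally taken as the ratio of the applied stress amplitude to the recoverable strain amplitude; this standard definition is assumed here, since the paper does not write it out:
|E*| = \sigma_0 / \varepsilon_0
where \sigma_0 is the applied axial stress amplitude and \varepsilon_0 is the measured recoverable axial strain amplitude.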
Water Sensitivity Study
The samples that were found to satisfy the Marshall criteria were further selected for accelerated curing, in order to find out their water sensitivity. This was done by conducting the Marshall tests on samples that had been immersed in water at 60 °C for a period of 24 hours. The test was done according to ASTM D1075 standards [12].
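The quantity reported later as the stability value ratio is taken here, in the usual sense of this conditioning procedure and as an assumption since the paper does not state the expression, as the retained Marshall stability:
retained stability ratio = (Marshall stability after 24 h immersion at 60 °C) / (standard Marshall stability)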
Results and Discussion
As seen in Table 2, when there is no sulphur in the mix, the asphalt-coated sand particles have very little interlocking between them, resulting in low stability. When sulphur is added to the mix, the solidified sulphur in the voids interlocks the asphalt-coated particles together, thereby increasing stability.
In a bituminous mix, a high air void content is considered objectionable, because in such cases the bituminous mix becomes highly permeable and hence more susceptible to weathering action, thereby reducing its durability. The air voids content specified for a base course by the Asphalt Institute is 3%-8%. The equivalent air voids content for this range of sulphur-asphalt mixes comes to about 10%-30%.
The air voids of all the mixes tested fall within this range.
The low flow values in general reflect that the mixes are stiffer than conventional asphalt concrete mixes. The beach sand-fly ash-asphalt-sulphur (S-F-A-S) mix shows good Marshall properties; the presence of calcium in the fly ash increases the bond between the aggregate and asphalt, but the polar bonds are not as strong as those formed during hydrated lime addition.
Indirect Stiffness Modulus Test
The indirect stiffness modulus values were directly obtained from the test and are tabulated in Table 4 and graphically depicted in Figure 3.
The indirect tensile stiffness value was maximum for the blend with 4% asphalt and 12% sulphur. Further addition or reduction of asphalt and sulphur in the mix reduces the stiffness modulus significantly.
Dynamic Modulus Test
The dynamic modulus tests were conducted at frequencies of 25, 10, 5, 1, 0.5, and 0.1 Hz and at a stress level of 0.25 MPa. The application of the first frequency phase is considered the preconditioning phase, and the average dynamic modulus corresponding to 10 Hz is taken as the dynamic modulus. The results obtained from the dynamic modulus test are shown in Figure 4 and tabulated in Table 5.
From the figure, it is clear that the dynamic modulus was highest for 4% asphalt and 12% Sulphur blended mix.Further reduction and addition of asphalt and sulphur reduces the dynamic modulus.
Indirect Tensile Fatigue Test
The load repetitions to failure for varying percentages of asphalt and sulphur content are depicted in Table 6. The graphical representation of the indirect fatigue test results is shown in Figure 5.
It was observed that 4% asphalt and 12% sulphur was the optimum content of sulphur in the blend, for good fatigue life of S-F-A-S mix.
Water Sensitivity Test Results
The water sensitivity test results are depicted in Table 7, and the same are graphically represented in Figure 6. The water sensitivity studies proved that all the selected samples had a stability value ratio of more than 0.78, which is greater than the value of 0.75 specified by MoRTH (Ministry of Road Transport and Highways) [7]. The sample with 4% asphalt and 12% sulphur content had a maximum value of 0.9 and was least sensitive, which again proves the superior quality of this mix over the others. By suitably proportioning the various constituents, fly ash can be put to maximum use, thus controlling pollution and disposal problems to a certain extent [13]. From the above-mentioned test results, it is observed that, by the addition of sulphur, the properties of the S-F-A mix are enhanced, and the variation of sulphur and asphalt in the mix greatly affects the properties of the S-F-A-S mix. From the indirect fatigue, dynamic modulus, and indirect stiffness tests, it was observed that 4% asphalt combined with 12% sulphur was the optimum content of sulphur in the blend for high modulus, stiffness, and good fatigue life of the S-F-A-S mix.
Conclusions
Conclusions drawn from this investigation are as follows: The flow values, in general, reflect that the mixes are stiffer than conventional asphalt concrete mixes. From the water sensitivity test, it was seen that the S-F-A-S mix was least sensitive to water, as its Marshall stability ratio was more than 0.75; the S-F-A-S 42-42-4-12 mix had the minimum water sensitivity, as its Marshall stability ratio was 0.90. The S-F-A-S mix has a good fatigue life and stiffness modulus, and hence the mix can be considered as an alternative in areas with a shortage of quality aggregate but an abundance of beach sand. The variation of sulphur and asphalt in the S-F-A-S mix significantly affects the properties of sand-fly ash-asphalt mixes. For the optimum asphalt-sulphur proportion of 16% in the mix, all the properties of the mix showed a parabolic trend; the properties increased with sulphur content up to 12% sulphur and 4% asphalt and thereafter showed a decreasing trend.
Figure 1. Grain Size Distribution of Shankumugham Beach Sand (x-axis: log of sieve size, mm).
Figure 3. Graphical Representation of Stiffness Modulus Test Results
Figure 4. Graphical Representation of Dynamic Modulus Test Results
Table 1. Physical Properties of the Materials Used
Table 2. Marshall Properties of S-F-A-S Mixes
Table 3. Number of Cycles for the Test Sequence
Table 4. Indirect Stiffness Modulus Results
Table 6. Indirect Fatigue Test Results
Table 7. Water Sensitivity Test Results
Figure 5. Graphical Representation of Indirect Fatigue Test Results
Figure 6. Water Sensitivity Test Results | 3,322.2 | 2013-04-03T00:00:00.000 | [
"Engineering"
] |
From Static to Interactive: Transforming Data Visualization to Improve Transparency
Data presentation for scientific publications in small sample size studies has not changed substantially in decades. It relies on static figures and tables that may not provide sufficient information for critical evaluation, particularly of the results from small sample size studies. Interactive graphics have the potential to transform scientific publications from static reports of experiments into interactive datasets. We designed an interactive line graph that demonstrates how dynamic alternatives to static graphics for small sample size studies allow for additional exploration of empirical datasets. This simple, free, web-based tool (http://statistika.mfub.bg.ac.rs/interactive-graph/) demonstrates the overall concept and may promote widespread use of interactive graphics.
Introduction
Scientific and technological advances have enhanced our ability to study the biology of health and disease. They have also changed the way that we access and share scientific information. Study preregistration websites, data repositories, reporting guidelines and recommendations, and checklists for statistical analysis are all designed to promote transparency and enhance the reproducibility of scientific results. Data presentation for scientific publications has not changed substantially, however, despite this growing emphasis on transparency and reproducibility. Scientists rely on static figures and tables that may not provide sufficient information for critical evaluation, particularly of the results from small sample size studies.
This paper aims to explore the potential of interactive graphics to transform scientific publications from static reports of an experiment into interactive datasets narrated by the authors. Small sample size studies offer excellent opportunities to explore interactive visualizations, as small datasets generally rely on a few key types of figures. These studies commonly use bar and line graphs that show summary statistics for continuous data and scatterplots that examine the relationship between two variables. Offering interactive alternatives to these static graphs may be a simple and effective strategy for promoting widespread use of interactive graphics. We have designed and present an interactive line graph as an alternative to the static graph for small sample size studies that allows for additional exploration of empirical datasets. In addition to demonstrating the overall concept, this simple, web-based tool may encourage utilization of interactive graphics and address growing demands to show individual-level data [1,2,3,4].
Limitations of Traditional Line Graphs
A recent systematic review of original research articles published in top physiology journals demonstrated that 61% of papers contain at least one line graph, making this the second most common type of figure used to present continuous data [1]. Line graphs are designed for longitudinal data; lines are used to show that measurements were repeated on the same participant, specimen, or sample. Measurements are typically performed at predetermined sets of time points or conditions in experimental studies. The lines estimate the pattern of response by assuming a linear change between each consecutive set of time points or experimental conditions. This is fundamentally different from regression and other types of analysis, in which lines are used to illustrate trends that were estimated using one measurement per participant, specimen, or sample. Line graphs focus on how differences between the means for each group change across time points or conditions. However, they do not provide two important pieces of information. First is the amount of overlap between different groups, as less overlap indicates that the difference is more important. Second is information as to whether all individuals in the same group follow a consistent response pattern. This information is difficult or impossible to obtain using the standard line graph. The degree of overlap between groups is typically illustrated by showing error bars that represent the standard deviation. However, error bars for different groups frequently overlap (Fig 1, Panel A), making it difficult to determine where the error bars for each group end. Several strategies are used to address this problem. The most common approach is to use error bars to show the standard error (Fig 1, Panel B), which is smaller than standard deviation. This reduces the likelihood that error bars for different groups will overlap; however, standard errors measure the precision of the mean rather than the variability in the sample. An alternate approach is to use unidirectional error bars, which are oriented away from other groups (Fig 1, Panel C). In this case, it is difficult to estimate the position of the missing error bars to assess the amount of overlap between groups. Another option is to stagger the position of overlapping data points on the x-axis; however, few graphical packages offer this alternative.
The common practice of displaying summary statistics can be misleading, as many different data distributions can lead to the same graph (Fig 2) [1]. The actual data may suggest different conclusions from the summary statistics. This problem is accentuated by the small sample sizes often used in basic science research. In 75% of papers published in top physiology journals, the smallest group shown in a figure had six independent observations or fewer, whereas the largest group shown in a figure had 15 independent observations or fewer [1]. A recent study reported that eight animals per group is a typical sample size for preclinical research [5]. Outliers are common in such small datasets, and it is difficult to determine the distribution of the data. This is problematic, as standard line graphs do not show values for individual participants. The sample size for each group cannot be determined, nor can the viewer assess whether response patterns are similar for all individuals in a particular group.
Current Alternatives to Traditional Line Graphs
While several alternatives to the line graph have been proposed [6,7], the existing options have important limitations. Templates for creating graphics for paired or matched data were provided in our previous paper [1]. The templates create univariate scatterplots showing differences for each individual as well as "spaghetti plots" in which lines are used to connect paired values (as shown in Fig 2, upper graphs of Panels B, C, and D). This approach does not scale well for larger datasets or for small datasets with more than two time points or conditions. Showing one line for each individual often leads to a complicated and uninformative graphic with many crossing lines. It may also be difficult to distinguish among individuals in different groups, especially when groups overlap. The reliance on black and white figures in scientific papers exacerbates these problems. A variety of other strategies have been proposed, including small multiples [6] and lasagna plots [7]. S1 Text briefly outlines several options and provides examples and references. Many of these strategies are most effective for datasets without groups, in which each line represents an observation of interest. Other static alternatives to the traditional line graph make it difficult to determine whether responses are consistent among all individuals within a particular group. Strategies such as the lasagna plot provide individual-level data; however, the lasagna plot was designed for large datasets and is less effective in small studies.
Fig 1 caption: Reimagining the line graph. Panels A-C use traditional line graphs to present a simulated dataset as mean and standard error (Panel B) or mean and standard deviation (Panels A and C). While Panels A and C clearly indicate that there is overlap between groups, it is difficult to assess the magnitude of the overlap. The error bars for Groups 2 and 3 overlap, while those for Group 1 go in the opposite direction. Panels D-F show selected figures that were created using our web-based tool for making interactive line graphs. Readers can view the interactive versions by uploading S1 Data into our web-based tool, then clicking on the name of each figure under the "Graphs" heading. The lines in Panel D represent the group means, whereas the shaded regions represent one standard deviation above and one standard deviation below the mean. Replacing error bars (Panel C) with semitransparent shading (Panel D) makes it easier to identify regions where the groups overlap. The mean responses suggest that measurements for Group 1 do not change across the three conditions (Panel D). In contrast, Group 2 shows a small response to Condition 2, whereas Group 3 shows a larger response. However, examining individual-level data showing changes from Condition 1 to Condition 2 (Panel E) reveals that Group 2 includes responders and nonresponders. Response patterns for the responders are similar to the responses observed among individuals in Group 3, whereas response patterns for the nonresponders are similar to those of individuals in Group 1. Panel F shows that while values for most individuals in Group 3 decreased between Conditions 2 and 3, one individual experienced a slight increase. This observation is a clear outlier. The lines for Panels E and F represent the median change. doi:10.1371/journal.pbio.1002484.g001
An Interactive Alternative to Traditional Line Graphs
Interactive line graphs may provide additional information needed to interpret longitudinal data in small studies. We developed a simple, free, web-based tool (http://statistika.mfub.bg.ac.rs/interactive-graph/) that allows users to quickly create interactive line graphs for small datasets. These graphs have four key features, allowing for rapid examination of different aspects of the data (Box 1): 1. View different summary statistics: the base graph shows the central tendency and variation in each group for each condition or time point. The user can adjust the graph to view the mean, mean and standard deviation, mean and standard error, mean and 95% confidence interval, median, median and interquartile range, or median and range. Measures of variation for each group are shown as a semitransparent shaded region, allowing one to assess the magnitude of the overlap among observations from different groups (a minimal plotting sketch illustrating this shading approach appears after this list).
2. Display lines for some or all individuals in each group: the line for each participant or sample in the dataset can be turned on or off individually, allowing one to view any subset of individuals in the dataset.
Box 1. Data Exploration Using the Interactive Line Graph
Interactive line graphs can be quickly created using a web-based application that does not require any programming expertise or specialized skills; users simply enter or upload data and customize the graph axes and labels. The insight gained from an interactive line graph will depend on the empirical dataset. In addition to enhancing readers' understanding of the data, the interactive line graph may help authors to select static graphs that most effectively illustrate key findings for print publication. A simulated dataset is provided to illustrate these points (S1 Data). The interactive line graph can be viewed by uploading this simulated dataset into the web-based tool (http://statistika.mfub.bg.ac.rs/interactive-graph/upload). Fig 1 shows traditional line graphs for this dataset (Panels A-C), followed by selected static graphs that were created using the web-based tool (Panels D-F). The traditional line graphs showing mean ± standard error (Panel B) and mean ± standard deviation (Panels A and C) provide no information about individual responses and make it difficult to assess the degree of overlap between groups. When the mean ± standard deviation graph is recreated using our web-based tool (Panel D), the overlapping and unidirectional error bars are replaced by semitransparent shaded regions. Differences in shading make it easier to identify regions where the standard deviations for different groups overlap. The average values suggest that there is no response in Group 1, an intermediate response in Group 2, and a large response in Group 3. However, the individual change scores examining the differences between Conditions 1 and 2 tell a different story (Panel E). Group 2 seems to include "responders" and "nonresponders." Nonresponders follow the same pattern of change as individuals in Group 1, whereas the magnitude of change in responders is similar to the responses observed among individuals in Group 3. Averaging these two subgroups gives the misleading impression of an intermediate response in Group 2.
3. View a subset of groups, conditions, or time points: these options allow the viewer to focus on a subset of groups, conditions, or time points.
4. View change scores for any two conditions or time points: the "Difference Plot" tab displays a univariate scatterplot that shows change scores for each individual in the dataset. This allows for comparisons of the magnitude, direction, and consistency of changes across groups.
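To make the shading idea from feature 1 concrete, the sketch below draws group means with semitransparent ± standard-deviation bands and thin per-individual lines using matplotlib. This is not the authors' web-based tool; it is a minimal, stand-alone illustration of the same visual idea, and the simulated values, group names, and styling choices are assumptions.

```python
# Minimal matplotlib sketch of the shading idea described in feature 1 (not the
# authors' web tool). Group means are drawn with semitransparent +/- SD bands,
# plus thin per-individual lines; all data here are simulated for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
conditions = np.array([1, 2, 3])
# Hypothetical dataset: 3 groups x 6 individuals x 3 conditions
data = {
    "Group 1": rng.normal(10, 1, size=(6, 3)),
    "Group 2": rng.normal(10, 1, size=(6, 3)) + np.array([0, 2, 0]),
    "Group 3": rng.normal(10, 1, size=(6, 3)) + np.array([0, 4, 1]),
}

fig, ax = plt.subplots()
for name, values in data.items():
    mean = values.mean(axis=0)
    sd = values.std(axis=0, ddof=1)
    line, = ax.plot(conditions, mean, marker="o", label=name)  # group mean
    # Semitransparent band in place of overlapping error bars
    ax.fill_between(conditions, mean - sd, mean + sd,
                    color=line.get_color(), alpha=0.2)
    # Thin lines for each individual, to inspect response patterns
    for row in values:
        ax.plot(conditions, row, color=line.get_color(),
                linewidth=0.5, alpha=0.4)

ax.set_xticks(conditions)
ax.set_xlabel("Condition")
ax.set_ylabel("Measurement (arbitrary units)")
ax.legend()
plt.show()
```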
The tool allows for both (1) the integration of static graphics into a publication as a .tiff file and (2) downloading of a data file for a customized interactive graphic, which can be presented in the paper supplement. As color coding is used to present different groups, the tool includes a color-blind mode. All interactive line graph features can be viewed in a color-blind-safe color scheme. A black-and-white mode is also included for less complex graphs.
From Static to Interactive Scientific Publishing
A recent editorial highlighted the static nature of data presentation as a major limitation of scientific publications [8]. There are several potential benefits to making interactive graphics common features of publications for small sample size studies. Interactive graphics can provide crucial information that cannot be obtained from a static graphic. They may be valuable tools for promoting transparency, reproducibility, and open science in an era when these factors are increasingly valued [9,10,11]. Customized interactive graphics have already been presented by journals [12] and authors [13,14] to complement research articles. Anecdotal reports suggest that this can be an effective strategy for increasing interest in published research [13]. Interactive data visualizations could fundamentally change the way authors, reviewers, and readers understand and interpret research data. However, the application of interactive graphics in scientific publications will be dependent on both author and journal acceptance. Author-level solutions, such as the interactive line graph described in this paper, would allow authors to create interactive graphics for individual papers and include them in the data supplement. Journal-level solutions would allow journals to include interactive graphics in the web versions of all papers published in the journal.
Conclusions
This paper presents a "proof of concept" example that demonstrates how interactive alternatives to static graphics for small sample size studies allow for additional exploration of empirical datasets and illustrates the types of tools that are needed to promote widespread use of interactive graphics. The principles described above can be applied to other types of figures and tables, including those applicable to big datasets. Most scientists use electronic devices to access scientific publications, yet the interactive potential of these technologies remains untapped. Exploring more dynamic alternatives is crucial as we enter an era of transparent and open science.
Supporting Information
S1 Data. Example of an interactive line graph. This example can be viewed by uploading S1 Data into the web-based tool (http://statistika.mfub.bg.ac.rs/interactive-graph/).
[17]. Changes in placental growth factor were examined longitudinally in women who had normotensive pregnancies (n = 24) and women who developed preeclampsia (n = 15). The points show observations from all women in the dataset (mode = 3 measurements per woman; range 1-4 measurements per woman). Lines show the pattern of change for one individual in each tertile in both the normotensive pregnancy and preeclampsia groups. (TIF)
S1 Text. Static alternatives to the line graph. (DOCX) | 3,469.2 | 2016-06-01T00:00:00.000 | [
"Computer Science"
] |
An Efficient Ensemble Approach for Alzheimer’s Disease Detection Using an Adaptive Synthetic Technique and Deep Learning
Alzheimer's disease is an incurable neurological disorder that leads to a gradual decline in cognitive abilities, but early detection can significantly mitigate symptoms. The automatic diagnosis of Alzheimer's disease is particularly important due to the shortage of expert medical staff, because it reduces the burden on medical staff and enhances the results of diagnosis. A detailed analysis of specific brain disorder tissues is required to accurately diagnose the disease via segmented magnetic resonance imaging (MRI). Several studies have used traditional machine-learning approaches to diagnose the disease from MRI, but manually extracted features are more complex, time-consuming, and require a huge amount of involvement from expert medical staff. The traditional approach does not provide an accurate diagnosis. Deep learning provides automatic feature extraction and optimizes the training process. The Magnetic Resonance Imaging (MRI) Alzheimer's disease dataset consists of four classes: mild demented (896 images), moderate demented (64 images), non-demented (3200 images), and very mild demented (2240 images). The dataset is highly imbalanced. Therefore, we used the adaptive synthetic oversampling technique to address this issue. After applying this technique, the dataset was balanced. The ensemble of VGG16 and EfficientNet was used to detect Alzheimer's disease on both imbalanced and balanced datasets to validate the performance of the models. The proposed method combined the predictions of multiple models to make an ensemble model that learned complex and nuanced patterns from the data. The outputs of both models were concatenated to make an ensemble model, and further layers were then added to make the model more robust. In this study, we proposed an ensemble of EfficientNet-B2 and VGG-16 to diagnose the disease at an early stage with the highest accuracy. Experiments were performed on two publicly available datasets. The experimental results showed that the proposed method achieved 97.35% accuracy and 99.64% AUC for multiclass datasets and 97.09% accuracy and 99.59% AUC for binary-class datasets. The evaluation showed that the proposed method was extremely efficient and provided superior performance on both datasets as compared to previous methods.
Introduction
Alzheimer's disease (AD) is an incurable neurological disorder that leads to a gradual decline in cognitive abilities, but early detection can significantly mitigate symptoms [1].
Patients with AD lose their cognitive abilities, making it difficult to carry on with normal responsibilities and perform daily routine tasks; thus, they become dependent on their family for small tasks and survival. AD causes problems such as memory loss, difficulty remembering, arranging, and recollecting things, and impaired intuition and judgment [2]. Around 2% of people at the age of 65 are affected by AD and 35% at the age of 85 years. It was reported that 26.6 million people were affected in the year 2006, and the count is increasing dramatically [3]. In 2020, more than 55 million people were affected by AD, and the count is estimated to reach 152 million by 2050 [4]. The degradation of brain cells and the dysfunction of synaptic and pathological changes start to develop almost 20 years before AD diagnosis [5]. A proper diagnosis of the disease is also needed to develop the necessary drugs to slow down the progression process, and the patient's whole medical history is thoroughly examined for the effective monitoring of the disease. The overall cost and effort faced by patients and families are also increasing dramatically. Researchers have emphasized the importance of the early detection of AD for starting treatment promptly and obtaining accurate results.
Individuals with AD typically exhibit a reduction in brain tissue volume in the hippocampus and cerebral cortex, accompanied by an expansion of the ventricles in the brain, as observed in multiple studies. In advanced stages of the disease, brain scans such as MRI images show a substantial reduction in the hippocampus and cerebral cortex, along with ventricular expansion [6]. AD primarily affects the regions of the brain and the intricate network of brain tissues involved in cognition, memory, decision making, and planning. The diffusion of brain tissues in the affected areas causes a decrease in the MRI image intensities in both the magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI) techniques [7][8][9].
In recent years, there has been a growing trend of using neuroimaging data and machine learning (ML) methods to characterize AD, providing a potential means for personalized diagnosis and prognosis [10][11][12]. Currently, deep learning (DL) has emerged as a powerful methodology in the diagnostic imaging field, as evidenced by several recent studies [13][14][15][16][17]. Diagnosing AD using DL is still a significant challenge for researchers [18]. Medical images are scarce and of lower quality, and the difficulty in identifying regions of interest (ROI) within the brain and unbalanced classes are issues encountered in detecting AD. Among the various DL architectures, the convolutional neural network has received considerable interest due to its extraordinary effectiveness in classification [19]. In contrast to conventional machine learning, deep learning enables automatic feature extraction like low-level to high-level latent representations. Therefore, deep learning requires minimal image pre-processing steps and little prior understanding of the synthesis process [20].
Imbalanced datasets are among the most significant challenges in medical disease detection. For Alzheimer's disease, the number of samples in each class is not equal, and a balanced dataset is not readily available. With imbalanced datasets, the model's performance is biased and generalization becomes difficult. Individual deep learning models handle basic data efficiently, but overfitting occurs when dealing with complex problems. The generalizability, efficacy, and reliability of this type of model are poor. Individual deep learning models make predictions or detections based on learning with a single set of weights and do not capture nuances from all image features. To accurately diagnose a disease using segmented magnetic resonance imaging, it is necessary to conduct an in-depth examination of the disease-specific tissues. Several studies have used conventional machine-learning approaches to diagnose diseases from MRI, but manually derived features or the physical examination of medical data and patient records are more complex, time-consuming, and require a significant level of medical staff involvement. The conventional method does not provide a precise diagnosis, resulting in errors during diagnosis and inefficiencies.
Deep learning automates the detection process, making it more efficient and faster. An accurate diagnosis is crucial in cases where early detection is essential for proper treatment. Deep learning models have demonstrated an extraordinary ability to learn nuanced patterns from complex and high-dimensional data. They can automatically extract pertinent information from the images and overcome the limitations of traditional methods. The proposed method addresses the data imbalance issues more efficiently with adaptive synthetic oversampling techniques and makes diagnostics faster. The proposed method combines the predictions of multiple models to make an ensemble and stronger model that learns complex and nuanced patterns from the data. The proposed method is more robust, reliable, and diverse in its decision making. Our objective was to examine the ensemble model's capacity to detect AD and perform feature extraction in order to improve the model's overall effectiveness. The following are the main contributions of our study:
1. An efficient ensemble approach was proposed that combines VGG16 and EfficientNet-B2 for Alzheimer's disease classification with high accuracy using multiclass and binary-class datasets, also exploring the effect of transfer learning to improve the performance of the model.
2. The adaptive synthetic oversampling technique was applied to a highly imbalanced dataset to balance the Alzheimer's disease classes. The efficacy of ADASYN in terms of model overfitting was also investigated to increase the generalization performance of deep learning models.
3. The efficacy of the proposed method was analyzed using k-fold cross-validation and by comparing it with other state-of-the-art approaches. We also performed a comparison of ensemble and individual deep learning models.
In this paper, we organized our content into several sections. Section 2 presents a comprehensive review of the relevant literature. Section 3 outlines the pre-processing, methods, and performance measures. The results and discussion are presented in Section 4. Section 5 provides the concluding remarks for this paper.
Literature Review
Due to the prevalence and challenging nature of Alzheimer's disease (AD), its diagnosis poses difficulty for experts and has been extensively studied in the literature. The authors of [21] conducted a study in which they utilized Alzheimer's data to perform a classification process. Their dataset comprised three classes, and they employed DenseNet as the model, with soft-max serving as the classification layer. The study resulted in an accuracy of 88.9%. While the results were favorable, there remained potential for further improving the accuracy of the model. In addition, Yildirim et al. [22] conducted a study on AD classification using a four-class dataset. They employed convolutional neural network (CNN) architectures and compared the results with their proposed hybrid model, built upon a ResNet50 base and utilizing its learned knowledge. According to the authors, the hybrid model achieved an accuracy rate of 90%, which outperformed the success rate of pre-trained CNN models. The detection of AD has been extensively researched, and it poses various challenges. The authors of [23] utilized a sparse auto-encoder and 3D CNN to develop a model that could detect disease cases in affected individuals based on the magnetic resonance imaging (MRI) of the brain. The use of three-dimensional convolutions was a significant breakthrough, as it outperformed two-dimensional convolutions. Although the convolution layers were pre-trained with an auto-encoder, they were not fine-tuned, and it was anticipated that fine-tuning would lead to improved performance [24].
Researchers worldwide have shown great interest in classifying AD. The dominant technique for identifying healthy data from fMRI images is to extract features with a CNN, followed by deep learning (DL) classification. The authors of [25] used a deep CNN to classify Alzheimer disease versus normal patients with Alzheimer's functional MRI data and structural MRI data, achieving 94.79% accuracy with the LeNet5 method and 96.84% accuracy with the Google-Net method. Recently, there has been a notable increase in the use of DL methods in various fields because of their superior performance compared to traditional methods. One study [26] developed a hybrid model that involved using extracted patches from an auto-encoder combined with convolutional layers. Another study [23] improved upon this by incorporating 3D convolution.
In a previous study [27], auto-encoders arranged in a stack with a soft-max layer were used for classification. Another study [28] utilized standard CNN architectures by intelligently selecting training data and utilizing transfer learning but did not achieve remarkable results. A comprehensive comparison was conducted in another study [29], which examined the results and trained data using scratch with fine-tuning. Based on the findings, in most cases, the latter outperformed the former. Fine-tuned CNNs have been used to solve numerous medical imaging problems, including plane localization in ultrasound images [30].
As discussed above, the use of transfer learning (TL) in the medical discipline is significant for detecting AD with sufficient precision. Other research [31] emphasized the use of unsupervised feature learning, which involved two stages. The first stage was to extract features from unprocessed data using two methods: sparse filtering and unsupervised neural network layers. To classify healthy and unhealthy individuals, sparse filtering and regression with soft-max were employed. Additionally, some unsupervised learning techniques, including Boltzmann machines and sparse coding, were used to process the collected data. The ADNI dataset containing cerebrospinal fluid was used in this approach, with a total of 51 AD patients, including 43 with mild signs of AD. MRI scans were collected using 1.5 T scanners. In their study, the authors of [32] proposed a technique that utilized ML algorithms to gather information about a patient's behavior over time. By employing Estimote Bluetooth beacons, the method accurately determined the location of the patient within the house, with a precision of up to 95%.
Gerardin and team investigated the use of hippocampal texture features [33] as an MRI-based diagnostic tool for early-stage AD, achieving a classification accuracy of 83%. They determined that the hippocampal feature outperformed other techniques in distinguishing stable MCI patients from MCI-to-Alzheimer's disease converters. Liu and colleagues [34] used stacked DL auto-encoders with soft-max at the output layer to address the bottleneck issue, achieving a remarkable accuracy of 87.67% for multiclass classification with minimal input data and training. The researchers concluded that combining multiple features would lead to more precise classification results.
The authors of [35] demonstrated the effect of transfer learning on image classification and showed that fine-tuning produced better results. Alzheimer's disease was diagnosed in [36] employing convolutional-neural-network-based architecture and magnetic resonance brain imaging. The VGG-16 model was deployed as a classification feature extractor. The findings showed that the proposed model for Alzheimer's disease was 95.7% correct. The study [37] introduced a transfer learning strategy to localize plans in ultrasound scans that could transfer knowledge on fewer layers. Another study [38] proposed an architecture that utilized a transfer learning approach for the detection of Alzheimer's disease from a multiclass, open-access series of imaging study datasets. The architecture was tested on pre-processed unsegmented and segmented images. The architecture was tested on both binary and multiclass datasets. The results demonstrated that the proposed architecture attained a 92.8% accuracy on multi-class and an 89% accuracy on binary-class datasets.
Iram [39] conducted research on the detection of Alzheimer's disease using biosignals and the most common machine learning models, which facilitated neurodegenerative disease diagnosis at an early stage. The dataset was imbalanced; to fix the imbalance, oversampling and undersampling techniques were employed, and missing values were addressed. Multiple metrics were employed by the author to evaluate the performance. This study emphasized the significance of machine learning and signal processing in the early identification of life-threatening diseases like Alzheimer's. Linear and Bayes classifiers were used. Using the Bayes classifier, the author obtained greater accuracy in diagnosis. Kim [40] developed machine learning algorithms for the identification of Alzheimer's disease biomarkers. The predictive performance of models employing multiple biomarkers was superior to that of models employing an individual gene.
Biosignals were used by Han et al. [41] to identify dementia in elderly people. They employed no artificial intelligence techniques in their analysis. Insufficient participation made it impossible to derive broad generalizations, and a larger number of individuals with moderate dementia from a broader population should be tested. Similar to this, another study [42] employed biosignals to analyze cognitive disorders including Alzheimer's and Parkinson's diseases. The authors developed a novel, economical approach for disease identification. Hazarika et al. [43] presented a lightweight, inexpensive, and fast diagnosis method that used brain magnetic resonance scans. They first used the DenseNet121 model, which was computationally expensive but able to detect the disease with 87% accuracy. The authors then developed and combined two models, AlexNet and LeNet, with fine-tuning. Their method extracted features by utilizing three parallel filters. Their study demonstrated that their model accurately detected the disease with a 93% accuracy rate.
The researchers in [44] used the CNN-based transfer learning architecture VGG-16 to classify Alzheimer's disease and achieved 95.7% accuracy. Murugan et al. [45] proposed deep learning for dementia and Alzheimer's disease classification from magnetic resonance images. Several studies in the literature have faced class imbalance issues in Alzheimer's disease detection, because imbalanced datasets lead to overfitting, inaccurate results, and low accuracy among deep learning models. Another problem is that there are not enough data available for training deep learning models. Therefore, we utilized the adaptive synthetic technique (ADASYN), which creates new data samples synthetically, as deep learning models perform best with balanced datasets.
Proposed Methodology
This section describes the Alzheimer's disease dataset, pre-processing, adaptive synthetic oversampling technique, deep learning and ensemble models, model evaluation metrics, and classification results. Figure 1 briefly represents the workflow of the proposed method. The pre-processed dataset was then utilized for training the pre-trained and proposed method to efficiently and accurately detect Alzheimer's disease cases. When the training process was complete, the performance of the models was investigated based on unseen data. In the following subsections, the proposed methodology is discussed.
Dataset Description and Pre-processing
The two Alzheimer's disease datasets used in this study were collected from Kaggle's data repository. The multiclass dataset contained four classes, namely mild demented (MD), moderate demented (MOD), non-demented (ND), and very mild demented (VMD). A person suffering from dementia experiences disability in terms of behavioral skills, difficulty in learning and remembering things and in the skills of thinking and reasoning, and it even affects the patient's personal life. However, dementia is not necessarily caused by aging, and its main sign is not memory loss. In the very mild demented (VMD) stage, the patient starts to suffer memory loss, forgetting where he/she put their belongings, recent names they heard, etc. It is hard to identify VMD patients through the cognitive capacity test. In the mild demented (MD) phase, the patient is unable to complete their work properly, forgets their home address, and has a hard time remembering things. These patients are not stable and even forget that they have memory issues, because they forget everything. This stage is detected by cognitive testing. The fourth class is moderate demented (MOD), which is the most alarming stage because the patient loses their ability to understand anything and faces problems with calculation; it becomes difficult for them to leave home on their own because they forget the way; and they forget important historical events and activities they performed recently. Table 1 shows the MMSE score and gap between the Alzheimer's disease classes in the dataset. The mild demented class had a 25.12 MMSE score, the moderate demented class 21.77, the non-demented class 23.50, and the very mild demented class 24.51. The average MMSE mean score for all four classes was 23.72, with a 4.49 standard deviation. The largest gap between Alzheimer's disease classes was for the mild demented and moderate demented classes at 3. The smallest gap was 0.59 for the mild demented and very mild demented patients. The images of AD in the dataset were RGB images with different numbers of pixels. The ND class contained 3200 samples, while the MD class contained 896 images, the VMD class contained 2240, and the MOD class contained 64. The only disadvantage of this dataset was that it was imbalanced. To solve this issue, we used ADASYN for class balancing. Another binary MRI Alzheimer's dataset contains 965 AD and 689 MCI images. Medical image pre-processing is very important to achieve quality results and increase the image quality for machine and deep learning [46]. The images had different heights and widths, and to train the deep learning models, we needed fixed-size inputs. Therefore, we resized all the images to a fixed size of 224 × 224 × 3.
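As a sketch of this resizing step, the snippet below loads the images at a fixed 224 × 224 size with tf.keras; the directory path and the on-disk layout (one subfolder per class) are assumptions, since the paper does not describe how the Kaggle files are organized locally.

```python
# Minimal sketch of the resizing step described above, assuming the MRI images
# are stored in class-named subfolders under "alzheimer_dataset/" (hypothetical path).
import tensorflow as tf

IMG_SIZE = (224, 224)  # every image is resized to 224 x 224 x 3 on load

train_ds = tf.keras.utils.image_dataset_from_directory(
    "alzheimer_dataset/",      # hypothetical directory name
    labels="inferred",
    label_mode="categorical",  # one-hot labels for categorical cross-entropy
    image_size=IMG_SIZE,
    batch_size=32,
)
```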
Adaptive Synthetic (ADASYN) Technique
Adaptive synthetic (ADASYN) oversampling technology is used in classification tasks to handle imbalanced classes in datasets. ADASYN creates new synthetic samples from the minority class to address the class imbalance issues. It improves the generalization accuracy of various classifiers. ADASYN is mainly used for object detection, facial expressions, and image analysis to balance the classes. It is a very effective and flexible technique compared to any other oversampling technique. Researchers have utilized the ADASYN oversampling technique to balance an imbalanced dataset for tuberculosis detection from CXR images. They balanced the minority classes with the ADASYN technique to enhance the overall effectiveness of the tuberculosis detection model and achieved a high accuracy compared to other techniques [47]. Table 2 shows the training and testing images after splitting the balanced data. Algorithm 1 shows the steps of the ADASYN technique.
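The sketch below shows how the balancing step could look with the ADASYN implementation from the imbalanced-learn library; flattening the images into feature vectors before resampling, the array names, and the random seed are assumptions and a simplification of Algorithm 1 rather than the authors' exact pipeline.

```python
# Minimal sketch of class balancing with ADASYN from the imbalanced-learn
# library. ADASYN works on feature vectors, so the images are flattened before
# resampling and reshaped back afterwards; this simplifies Algorithm 1 and is
# not the authors' exact pipeline.
import numpy as np
from imblearn.over_sampling import ADASYN

def balance_with_adasyn(X, y, random_state=42):
    """X: images of shape (N, 224, 224, 3); y: integer class labels of shape (N,)."""
    n, h, w, c = X.shape
    X_flat = X.reshape(n, h * w * c)            # flatten each image to one row
    ada = ADASYN(random_state=random_state)
    X_res, y_res = ada.fit_resample(X_flat, y)  # synthesize minority-class samples
    return X_res.reshape(-1, h, w, c), y_res

# Hypothetical usage:
# X_bal, y_bal = balance_with_adasyn(X_train, y_train)
```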
Ensemble Deep Learning with Transfer Learning Approach
Typically, constructing a deep learning architecture is a challenging task. The weights that one uses in deep learning are allocated before the training phase and changed continuously. Deep learning requires a lot of time to change the weights repeatedly, which leads to the overfitting of the model. Transfer learning (TL) has been the most effective method to overcome the aforementioned problems [48]. Transfer learning leverages previously learned knowledge from pre-trained models trained on large datasets. In addition, it adjusts the hyper-parameters and tunes the hidden layers of pre-trained models. The efficiency of deep learning may be improved by TL, which helps to save time and effort [49].
Ensemble learning is the most essential approach for improving the overall performance of several individual deep learning models. Ensemble learning trains many deep learning models on the same datasets and integrates them so effectively that the predictions made by the models are accurate and the detection accuracy increases [50]. Ensemble learning may be applied in a variety of medical diagnosis tasks. Overall, it improves performance, makes models more robust, and reduces the chances of overfitting. By combining the aspects of several models, deep learning can learn simple and complex patterns efficiently. Five ensemble deep models were used in this Alzheimer's disease detection study to efficiently detect cases of Alzheimer's disease from multiclass and binary-class classification datasets. The input layers, output shape, and parameters of the proposed ensemble model are presented in Table 3.
The proposed ensemble deep learning model is shown in Figure 2. Firstly, we imported the VGG-16 and EfficientNet-B2 models from the Keras applications module, along with other important libraries relevant to the model. The input image shape for the ensemble model was 224 × 224 × 3. Then, we loaded both pre-trained deep learning models with include_top set to false (i.e., without the top classification layers). The input shape for the ensemble models was created and kept the same. After that, we concatenated the outputs of both the VGG-16 and EfficientNet-B2 models using the "concatenate" function. A dropout layer was added immediately after the concatenation layer. A flatten layer was used to convert the features into a format acceptable to the fully connected layers. We then fine-tuned the other layers to accelerate the training steps and increase the overall progress. Four batch normalizations and three dense layers were used with activation functions. Batch normalization is a very popular method that normalizes layers as well as providing stability to neural networks. It also makes learning easier and faster. The testing accuracy may be improved with batch normalization, depending on the type of data. Dense layers are regularly used for image classification. Finally, the model was compiled with the categorical cross-entropy loss function and the Adam optimizer.
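A minimal tf.keras sketch of the concatenation-based ensemble described above follows. The exact dense-layer sizes, dropout rate, and number of output classes (4) are assumptions where the paper does not list them, and backbone-specific input preprocessing is omitted for brevity.

```python
# Minimal tf.keras sketch of the VGG-16 + EfficientNet-B2 ensemble described
# above. Dense-layer sizes, the dropout rate, and NUM_CLASSES = 4 are
# illustrative assumptions; backbone-specific input preprocessing is omitted.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, EfficientNetB2

NUM_CLASSES = 4
inputs = layers.Input(shape=(224, 224, 3))

# Pre-trained backbones loaded without their top classification layers
vgg = VGG16(include_top=False, weights="imagenet")
eff = EfficientNetB2(include_top=False, weights="imagenet")

# Concatenate the two feature maps along the channel axis
x = layers.Concatenate()([vgg(inputs), eff(inputs)])
x = layers.Dropout(0.3)(x)      # dropout right after concatenation
x = layers.Flatten()(x)

# Batch-normalised dense head (layer counts are assumptions)
for units in (256, 128):
    x = layers.BatchNormalization()(x)
    x = layers.Dense(units, activation="relu")(x)
x = layers.BatchNormalization()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
)
```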
Fine-Tuned Individual Deep Learning Models
This subsection covers a brief description of certain deep learning (DL) models, namely convolutional neural networks (CNNs), DenseNet121, VGG16, Xception, and EfficientNet-B2. It also analyzes the performance of the trained model using performance metrics like accuracy, AUC, recall, precision, and F1 score.
CNN
CNNs are considered the most significant DL models. Unlike traditional matrix multiplication, CNNs employ convolution in their operation. Their primary application is in object classification using image data. CNNs are a type of deep learning model that are widely used for image and video processing tasks. The structure and function of the visual cortex in the brain inspired these networks. A CNN's operation involves several processing layers, including convolutional layers, pooling layers, and fully connected layers. Overall, CNNs are powerful tools for pre-processing tasks and have been used for various applications, including object detection, facial recognition, and autonomous driving [51,52].
The CNN architecture is shown in Figure 3. It took an input size of 224 × 224 × 3. The CNN architecture had three two-dimensional convolutional layers, each followed by the ReLU activation function, three max pooling layers, and three batch normalization layers. Then, a flattening layer was added, followed by a dropout layer. Two dense layers were included, one followed by ReLU activation and the other by soft-max activation.
DenseNet121
DenseNet121 [53] is a CNN architecture that has been commonly employed for image classification tasks. It was introduced in 2017 as an improvement upon the previous popular architectures such as VGG and ResNet. DenseNet121 employs a dense connectivity pattern, where each layer receives feature maps from all previous layers and passes its feature maps to all successive layers. This dense connectivity allows for better gradient flow and parameter efficiency and reduced vanishing gradient problems. The architecture has 121 layers, including convolutional, pooling, and dense blocks, and has achieved state-of-the-art performance on several benchmark datasets such as ImageNet.
EfficientNet-B2
EfficientNetB2 is a CNN architecture that is part of the EfficientNet family of models. It was designed to provide an optimal balance between model size and performance for image classification tasks. EfficientNetB2 is larger and more complex than the original EfficientNetB0 model, but it maintains the same basic structure, including the use of compound scaling to balance depth, width, and resolution. EfficientNetB2 has 7.8 million parameters. It is often used as a baseline model for transfer learning or fine-tuning specific image classification tasks [54].
VGG16
VGG-16 is a deep CNN architecture that was developed by the Visual Geometry Group (VGG) at the University of Oxford in 2014. It is a widely used model for image recognition tasks and has achieved state-of-the-art results in many computer vision (CV) benchmarks. The architecture of VGG16 contains 16 layers, including 13 convolutional layers and 3 fully connected layers. The convolutional layers have small 3 × 3 filters and are placed on top of each other, increasing the depth of the network. The use of small filters with a small stride size helps preserve spatial information and enables the network to learn more complex features [55].
Xception
Xception is a deep CNN architecture that was proposed in 2016. It was inspired by the inception architecture but differs from it by replacing the standard convolutional layers with depth-wise separable convolutions. This approach minimizes the number of training parameters and computations, resulting in faster and more efficient training. Xception also employs skip connections to allow for better gradient flow and improved accuracy. The architecture has achieved state-of-the-art results on various image classification benchmarks such as ImageNet, and it has been widely used in computer vision applications [56].
Performance Measures
Evaluation metrics are quantitative measures used to assess the performance of a model or system in solving a specific task. The model's classification results could be divided into four classes: true-positive (TP), true-negative (TN), false-positive (FP), and false-negative (FN). TP refers to correctly identified positive instances, while TN refers to accurately identified negative instances. FP represents falsely predicted positive instances, and FN represents falsely predicted negative instances. Various evaluation parameters were utilized in this study, including recall, precision, accuracy, AUC, and F1 score.
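As an illustration, the metrics listed above can be computed directly from the labels and predictions; the sketch below uses scikit-learn and assumes hypothetical arrays y_true (integer class labels) and y_prob (predicted class probabilities). Macro averaging across classes and one-vs-rest AUC are assumptions, as the paper does not state the averaging scheme.

```python
# Minimal sketch of the evaluation metrics listed above, computed with
# scikit-learn from hypothetical arrays: y_true holds integer class labels and
# y_prob holds predicted class probabilities of shape (N, num_classes).
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, y_prob):
    y_pred = np.argmax(y_prob, axis=1)          # hard class predictions
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
        "auc": roc_auc_score(y_true, y_prob, multi_class="ovr"),  # one-vs-rest AUC
    }
```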
Results and Discussion
Experiments were conducted on a sixth-generation Hewlett Packard Core i5 machine with 25 GB of RAM and a Google Colab Pro GPU. This section presents all the experiments conducted on the binary and multiclass Alzheimer's brain disease datasets. We utilized efficient ensemble deep learning architectures that consumed minimal resources. We used a batch size of 32, 15 epochs, a learning rate of 0.0001, a cross-entropy loss function, and the Adam and SGD optimizers.
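For completeness, a minimal sketch of this training configuration (batch size 32, 15 epochs, learning rate 0.0001, Adam, categorical cross-entropy) is shown below; the balanced arrays X_bal and y_bal, the validation split, and the model object are assumptions carried over from the earlier sketches.

```python
# Minimal sketch of the stated training configuration (batch size 32, 15 epochs,
# learning rate 1e-4, Adam, categorical cross-entropy). X_bal, y_bal, and
# `model` are the hypothetical objects from the earlier sketches.
import tensorflow as tf

y_onehot = tf.keras.utils.to_categorical(y_bal)   # integer labels -> one-hot
history = model.fit(
    X_bal, y_onehot,
    validation_split=0.1,   # hypothetical hold-out; the actual split is in Table 2
    epochs=15,
    batch_size=32,
)
```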
Results of Individual Fine-Tuned Deep Learning Models
Experiments were conducted using individual fine-tuned deep learning models including VGG-16, DenseNet-121, EfficientNet-B2, CNN, and Xception. These individual models were trained and tested using the categorical cross-entropy loss function for mild demented, moderate demented, non-demented, and very mild demented cases and an Adam optimizer to optimize the performance. A batch normalization layer was added to EfficientNet-B2, Xception, and VGG-16 to accelerate the training process, reduce the learning time, and lower the generalization errors. Moreover, a dropout layer was utilized to avoid overfitting. Each model was trained for 50 epochs. Table 4 presents the results of the individual pre-trained models. Among the individual models, DenseNet-121 attained the lowest accuracy, precision, recall, F1 score, and area under the curve for Alzheimer's disease multiclass classification. The second most poorly performing deep model was Xception, which achieved a 75.04% accuracy and 93.70% area under the curve. Both the CNN and VGG-16 models achieved almost the same classification accuracy. The fine-tuned high-performance model EfficientNet-B2 achieved a 95.89% accuracy and 95.95% recall score. EfficientNet-B2 performed the best among the individual deep learning models. Figure 4 shows the performance comparison of the individual models using various metrics. DenseNet-121 and Xception performed poorly in terms of recall score and F1 score. EfficientNet-B2 performed exceptionally well, in addition to VGG-16. The area under the curve (AUC) was better than the other metrics.
Results of Ensemble Deep Learning Models with Multiclass Dataset
The ensemble deep learning model results are presented in Table 5. The ensemble EfficientNet-B2 and DenseNet-121 model achieved a 96.96% accuracy, 97% precision, 96.98% recall, 96.93% F1 score, and 99.60% area under the curve (AUC) score. The second VGG-16-DenseNet-121 ensemble model achieved a 95.56% accuracy and 98.75% AUC. The EfficientNet-B2+Xception model achieved a 96.26% accuracy, 96.50% recall, and 99.11% AUC. Xception+DenseNet-121 achieved a 91.05% accuracy. The proposed VGG-16+EfficientNet-B2 model achieved a 97.35% accuracy score and a 99.64% area under the curve (AUC). All the ensemble models performed well and accurately detected the AD cases from the multiclass dataset. The DenseNet-121+Xception ensemble model achieved an 18% higher accuracy than the individual DenseNet-121 and Xception models. The other ensemble model achieved 1.46% better results when we compared it with the individual EfficientNet-B2 model. The performance comparison of the ensemble models is presented in Figure 5. Among the ensemble models, VGG-16+EfficientNet-B2 performed efficiently, with high performance metrics. The Xception model with EfficientNet-B2 provided better results than the individual Xception model. Similarly, DenseNet-121 with VGG-16 performed with high accuracy for detecting Alzheimer's disease. The experiments proved that the ensemble models provided excellent results compared to the individual models in terms of all performance metrics.

The results of the ensemble deep learning models using the imbalanced dataset are shown in Table 6. The ensemble model of EfficientNet-B2 and DenseNet-121 obtained an accuracy of 92.82%, a precision of 94.29%, a recall of 93.76%, an F1 score of 91.52%, and an area-under-the-curve (AUC) score of 99.38%. The second ensemble model of VGG-16-DenseNet-121 had an accuracy of 91.52% and an AUC of 98.98%. The EfficientNet-B2+Xception model had an accuracy of 90.45%, a recall of 87.80%, and an AUC of 98.80%. Xception+DenseNet-121 obtained an accuracy of 89.29%. The proposed VGG-16+EfficientNet-B2 model obtained an accuracy score of 95% and an AUC of 99.41%. All ensemble models achieved outstanding performance and accurately identified AD cases in the multiclass datasets. Using the imbalanced dataset, the DenseNet-121+Xception ensemble model achieved an 8% lower accuracy. The accuracy of another ensemble model was 7% lower when compared to the balanced dataset. Figure 6 displays the performance comparison of the ensemble models using the imbalanced dataset. Among the ensemble models, VGG-16+EfficientNet-B2 performed effectively, with high performance metrics. In comparison to previous models, the DenseNet-121 model with EfficientNet-B2 offered superior results. In the same way, DenseNet-121 with VGG-16 showed good performance in identifying Alzheimer's disease. The results showed that the ensemble models with an unbalanced dataset also produced better results. The experiments, however, showed that the proposed approach achieved 2.35% higher accuracy when utilizing the balanced dataset.

Table 7 presents the results of the proposed model with different learning rates to check the impact of the learning rates on the model performance. During the training phase, it was essential to select the appropriate learning rate in order to ensure that the model weights were properly updated. We achieved a 94.47% accuracy and 98.53% AUC by utilizing a 0.01 learning rate. In another experiment, the learning rate was set to 0.001, and a 97.30% accuracy was achieved.
When the learning rate was set to 0.0001, we attained a model accuracy of 97.35% and a 99.64% AUC.

The confusion matrix results of the ensemble deep learning models are shown in Figure 7, where label 0 indicates moderate demented, label 1 indicates non-demented, label 2 indicates mild demented, and label 3 indicates very mild demented. The VGG-16+EfficientNet-B2 model produced 100% true predictions for non-demented cases. The Xception+DenseNet-121 model produced 98% true predictions for non-demented and mild demented Alzheimer's cases. The Xception+EfficientNet-B2 model also produced 100% true predictions for non-demented cases. The VGG-16+DenseNet-121 model achieved 91% true predictions for the moderate demented class. The results hence showed that the VGG-16+EfficientNet-B2 model predictions were very good.

The training-testing accuracy and loss are displayed in Figure 8a. We observed that the training accuracy was 81.34% at epoch 1, and by epoch 10, we started to see variations in the data. We chose to train the ensemble deep learning models for 50 epochs, and we were able to improve their performance. Figure 8b shows the performance curves of the ensemble EfficientNet-B2+DenseNet-121 model, where the training accuracy was at its highest point at epoch 45. Figure 8d,e shows that the testing loss for the ensembles of Xception+DenseNet-121 and Xception+EfficientNet-B2 was high compared to that in Figure 8a,b.
Results of Ensemble Deep Learning Models with Binary-Class Dataset
The results of the ensemble models were also evaluated on the binary-class Alzheimer's disease dataset to test the effectiveness of the proposed model, as shown in Table 8. The EfficientNet-B2+DenseNet-121 model achieved a 95.45% accuracy, 95.10% precision, 95.45% recall, 95.50% F1 score, and 98.68% area-under-the-curve (AUC) score. The second ensemble VGG-16+DenseNet-121 model achieved a 94.90% accuracy and 98.43% AUC. The EfficientNet-B2+Xception model achieved a 91.80% accuracy, 91.80% recall, and 97.34% AUC. The Xception+DenseNet-121 model achieved a 91.05% accuracy. The proposed VGG-16+EfficientNet-B2 model achieved a 97.07% accuracy score and 99.59% area under the curve (AUC). All the ensemble models performed outstandingly and accurately detected the AD cases for the binary-class dataset. The proposed ensemble model also achieved a remarkable 97.07% accuracy on the binary-class classification dataset.
K-Fold Cross-Validation Results for Ensemble Models
The performance and feasibility of the proposed ensemble model were also evaluated with k-fold cross-validation. The results of the cross-validation are displayed in Table 9. The experiments validated that with k-fold cross-validation, the performance was also outstanding. The VGG-16+DenseNet-121 model achieved an accuracy score of 0.942 with a +/− 0.02 standard deviation. EfficientNet-B2+ DenseNet-121 achieved an accuracy score of 0.961 with a +/− 0.04 standard deviation. VGG-16+ EfficientNet-B2 achieved a 0.963 accuracy and a +/− 0.03 standard deviation. The results suggested that the proposed ensemble model was fit and accurate enough to detect Alzheimer's disease from the multiclass MRI image dataset.
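A minimal sketch of such a k-fold evaluation is shown below; the build_ensemble() factory, the choice of k = 5, stratified splitting, and the per-fold training settings are assumptions, since the paper does not state these details explicitly.

```python
# Minimal sketch of k-fold cross-validation for the ensemble model; the
# build_ensemble() factory, k = 5, and the training settings are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

def cross_validate(X, y, build_ensemble, k=5):
    """X: images (N, 224, 224, 3); y: integer class labels (N,)."""
    y_cat = tf.keras.utils.to_categorical(y)
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = build_ensemble()  # fresh, compiled model for every fold
        model.fit(X[train_idx], y_cat[train_idx],
                  epochs=15, batch_size=32, verbose=0)
        # index 1 is accuracy, given compile(metrics=["accuracy", AUC]) as sketched earlier
        acc = model.evaluate(X[test_idx], y_cat[test_idx], verbose=0)[1]
        scores.append(acc)
    return float(np.mean(scores)), float(np.std(scores))
```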
Comparison of Proposed Ensemble Model with Previous Studies
To show the effectiveness and robustness of the proposed ensemble model, we performed a comparison of the proposed method with the previous studies discussed in the related work. Table 10 depicts the results comparison for the detection of Alzheimer's disease cases. We chose those studies from the literature that considered multiclass datasets for the comparison with the proposed method. Jain et al. [39] proposed convolutional neural networks for AD classification using multiclass images with 95.73% accuracy. Similarly, another researcher [42] used the CNN-based transfer learning architecture VGG-16 to classify Alzheimer's disease and achieved 95.70% accuracy. Yildirim et al. [23] employed hybrid deep CNN models using a multiclass Alzheimer's dataset and attained 90% accuracy. Liu et al. [22] utilized a multi-deep CNN for automatic Alzheimer's disease classification with the lowest accuracy. The results shown in the comparison table were not satisfactory due to the low accuracy and the fact that the models were not properly utilized to achieve outstanding results. However, our proposed ensemble model classified Alzheimer's disease with the highest accuracy and was more efficient than any other individual or previous pre-trained models.
Conclusions
The timely diagnosis and classification of Alzheimer's disease using multiclass datasets is a difficult task. To detect and treat the disease, an accurate automatic system is required. This study proposed a deep ensemble model with transfer learning techniques to detect Alzheimer's disease cases from a multiclass dataset. The Alzheimer disease dataset was highly imbalanced, and we used adaptive synthetic oversampling (ADASYN) to balance the classes. The proposed model achieved an accuracy of 97.35% in detecting disease cases. The DenseNet-121+Xception ensemble model achieved an 18% higher accuracy than the individual DenseNet-121 and Xception models. Another ensemble model achieved 1.46% better results when we compared it with individual EfficientNet-B2. Our proposed ensemble model was less time-consuming, more efficient, worked well even on small datasets, and did not use any hand-crafted features. The deep learning automatically extracted relevant and key features from the samples, and an ensemble of deep learning models captured various aspects of the given samples in depth. In the future, we will collect and evaluate larger amounts of data to quickly and precisely diagnose Alzheimer's cases and combine various types of data to enhance the accuracy of detecting models. | 8,451.8 | 2023-07-26T00:00:00.000 | [
"Computer Science"
] |
The Influence of Photoperiod on the Action of Exogenous Leptin on Gene Expression of Proinflammatory Cytokines and Their Receptors in the Thoracic Perivascular Adipose Tissue (PVAT) in Ewes
Leptin resistance is either a condition induced by human obesity or a natural phenomenon associated with seasonality in ruminants. The presence of a leptin-resistant state in the cardiovascular system is a complex issue. Moreover, the perivascular adipose tissue (PVAT) appears to be crucial as a source of proinflammatory cytokines and as a site of interaction for leptin, contributing to endothelial dysfunction and atherosclerosis progression. The aim of this study was therefore to examine the influence of the photoperiod on the action of exogenous leptin on gene expression of selected proinflammatory cytokines and their receptors in the thoracic PVAT of ewes with or without prior lipopolysaccharide (LPS) stimulation. The experiment was conducted on 48 adult, female ewes divided into four groups (n = 6 per group) in each of the short-day (SD) and long-day (LD) seasons: control, with intravenous (iv.) LPS injection (400 ng/kg of BW), with iv. leptin injection (20 μg/kg BW), and with LPS injection followed 30 min later by leptin injection. Three hours after LPS/control treatment, animals were euthanized to collect the PVAT adherent to the aorta wall. The leptin injection enhanced IL1B gene expression only in the LD season; however, in both seasons leptin injection intensified the LPS-induced increase in IL1B gene expression. IL1R2 gene expression was increased by leptin injection only in the SD season. Neither IL6 nor its receptor and signal transducer gene expression was influenced by leptin administration. Leptin injection increased TNFA gene expression regardless of photoperiodic conditions. Only in the SD season did leptin treatment increase the gene expression of both TNFα receptors. To conclude, leptin may modulate the progress of the inflammatory reaction in PVAT. In ewes, the sensitivity of PVAT to leptin action depends on the photoperiodic conditions, with stronger effects observed in the SD season.
Introduction
Perivascular adipose tissue (PVAT), the adipose tissue surrounding vessels, was primarily believed to play a structural role in protecting vessels during contraction. It is now well documented that PVAT is also a crucial regulator of vascular function [1]. Being located in close contact with fibroblasts, vascular smooth muscle cells, or endothelial cells, PVAT can act in an endocrine or paracrine manner, secreting adipokines (e.g., leptin, adiponectin, and resistin), cytokines, chemokines, etc. [2]. Anatomically, PVAT varies greatly depending on its location: the thoracic aorta is surrounded mostly by brown PVAT (thoracic PVAT), beige PVAT is present around the abdominal aorta (abdominal PVAT), and white PVAT surrounds the small arteries (mesenteric PVAT) [1,3]. A high thoracic PVAT content was found to be significantly associated with a higher prevalence of cardiovascular disease (CVD) [4]. In pathological states, such as obesity or diabetes, PVAT can also be a significant source of excessive amounts of proinflammatory cytokines, contributing to endothelial dysfunction and atherosclerosis progression [5]. Apart from the mediators of inflammation, such as interleukin (IL)-1β, IL6, or tumour necrosis factor (TNF)α, PVAT can also secrete a number of adipokines with similar proinflammatory properties, such as leptin. Under physiological conditions, leptin is supplied to the vessel walls from the lumen, but in pathological states, leptin synthesis and secretion from PVAT may also be intensified. Leptin secreted by PVAT acts directly on the vessel, as there is no anatomical barrier between the adventitia and adipocytes; this enables leptin to reach the smooth muscle cells more quickly [6,7]. The action of leptin in vessels has a dual nature. As demonstrated by Sikka et al. [8], leptin is essential to maintain the proper functioning of the blood vessels, regardless of body weight. Besides stimulating nitric oxide (NO) synthesis, leptin acts as a vasodilator, stimulating the endothelium-derived hyperpolarizing factor (EDHF) and the production of hydrogen peroxide (H2O2) by endothelial cells [9]. Leptin receptors have also been identified in both vascular smooth muscle cells [10] and endothelial cells [11]. There is no evidence for their presence in the adventitia, the least explored layer of the blood vessels; however, as shown in our previous immunohistochemical analysis, the leptin protein is also present in this aortic layer, which may indirectly indicate its activity in this part of the vessel wall [12,13]. On the other hand, leptin also appears to play an important role in the promotion of atherosclerotic lesions because, as stated by Schroeter et al. [14], leptin receptors are present in the atherosclerotic plaque. Moreover, ob/ob mice (with leptin gene knockout) are resistant to atherosclerosis [15]. It is believed that under pathological conditions (e.g., obesity), leptin accelerates the development of atherosclerosis through several mechanisms: increasing the synthesis of proinflammatory cytokines in macrophages and monocytes (IL6, IL12, IL18, and TNFα), enhancing the expression of endothelin 1, intensifying oxidative stress in endothelial cells, promoting migration and proliferation of vascular smooth muscle cells, and stimulating platelet aggregation [16]. In addition, the NO-dependent vasorelaxing actions of leptin are impaired as a result of selective vascular leptin resistance.
Leptin resistance is a state in which cells or organs are insensitive to elevated levels of leptin. The main effect is observed in the brain, where leptin actions are strictly connected with energy metabolism and food intake. In vessels, leptin resistance means an inability to obtain the NO-mimetic vasorelaxing effect of leptin, in contrast to its vasoconstricting effects. As observed by Bełtowski et al. [17,18], acute administration of leptin in lean rats elevated the levels of NO metabolites and cyclic guanosine monophosphate (cGMP) in plasma and the aortic wall, but these acute effects of leptin were impaired in animals with hyperleptinemia caused by either a high-caloric "cafeteria diet" or chronic leptin injections. Therefore, the existence of vascular leptin resistance in the cardiovascular system under pathological conditions has been confirmed.
Obesity-induced leptin resistance has been of interest to scientists for several years. It is a complex pathophysiological phenomenon with a number of open research questions concerning its mechanisms and diagnostics. Connected with hormonal imbalance, reproductive disturbance, insulin resistance, diabetes, and hypertension, leptin resistance is a rather negative state in humans [19]. However, in ewes, a phenomenon of natural leptin resistance is observed in the long-day season (spring/summer), which is connected with seasonal adaptation to changes in energy supply and demand. Szczesna and Zieba [20] concentrated on central leptin resistance, which can explain the increased food intake with a simultaneously high blood leptin level in the long-day season. The existence of leptin resistance, or changes in leptin sensitivity, in the peripheral tissues of ewes under different photoperiodic conditions has not been examined yet; likewise, the possible importance of such a state, for example in the vessels of ewes, has not been discussed before.
The aim of this study was to examine the influence of photoperiodic conditions (long-day (LD) and short-day (SD) seasons) on the action of exogenous leptin on gene expression of proinflammatory interleukins and their receptors in the PVAT of ewes with or without prior induction of acute inflammation.
Materials and Methods
The experiment was conducted on 48 adult, approximately 2-year-old, female blackface ewes during two different photoperiods: in December (SD season; day : night 8 : 16) and in June (LD season; day : night 16 : 8). The animals were maintained indoors under natural lighting conditions (latitude 52°N, 21°E) in individual pens. The stress of social isolation was limited by visual contact with other members of the flock. The animals were acclimated to the experimental conditions for one month. The animals were fed a consistent diet of commercial concentrates, with hay and water available ad libitum, according to the recommendations of the National Research Institute of Animal Production (Krakow, Poland) [21].
In the SD season, the stage of the oestrous cycle of ewes was synchronized by the Chronogest® CR (Merck Animal Health, Boxmeer, The Netherlands) method using an intravaginal sponge impregnated with 20 mg of a synthetic progesterone-like hormone. All ewes had Chronogest® CR sponges placed for 14 days. After sponge removal, the ewes received an intramuscular injection of 500 iu of pregnant mare's serum gonadotropin (PMSG) (Merck Animal Health, Boxmeer, The Netherlands). The experimental procedure began 24 h following PMSG injection, so the ewes were in the follicular phase of the oestrous cycle. During the LD season, the animal synchronization was not required as animals were in seasonal anoestrous.
In both experiments, the animals were randomly divided into 4 groups, n = 6 in each: control (C), with LPS injection to induce immune stress (LPS), with leptin injection (LEP), and with LPS and leptin injection (LPS+LEP). The LPS from Escherichia coli 055:B5 (Sigma-Aldrich, St. Louis, MO, USA) was dissolved in saline and injected into the jugular vein at a dose of 400 ng/kg of body mass [22]. The recombinant sheep leptin (Protein Laboratories Rehovot (PLR) Ltd., Rehovot, Israel), also dissolved in saline, was injected 30 min after LPS treatment at a dose of 20 μg/kg of body mass (based on the dose used for growing beef heifers according to Maciel et al. [23]). The leptin dose used caused a significant increase in the blood leptin level, up to 22 ng/mL 30 min after injection in the LEP group in the LD season (the basal leptin level was 1.19 ng/mL in this group and 0.44 ng/mL in the LEP group in the SD season), which by the end of the experiment had decreased to 10.54 ng/mL (the data presenting the leptin blood profile are in press). The control animals received an equivalent volume of saline (0.9% w/v NaCl; Baxter, Deerfield, IL, USA) at the moments of LPS and leptin injection. The experiment scheme is presented in Table 1.
Three hours after LPS/saline treatment (which was 2.5 h after leptin/saline injection), the animals were euthanized and samples of PVAT adherent to the thoracic aorta wall were collected. All tissues were washed in saline, frozen in liquid nitrogen, and stored at −80°C.
2.2. Relative mRNA Expression. Total RNA from the PVAT was isolated using RIBOZOL reagent (VWR Chemicals, Solon, OH, USA) according to the manufacturer's instructions.
The quantity and quality of total RNA were assessed spectrophotometrically at 260 and 280 nm with the use of a NanoDrop 1000 instrument (Thermo Fisher Scientific Inc., Waltham, MA, USA). RNA integrity was checked by 1% agarose gel electrophoresis. cDNA synthesis was performed using a Maxima™ First Strand cDNA Synthesis Kit for RT-qPCR (Thermo Fisher Scientific Inc., Waltham, MA, USA) according to the manufacturer's instructions; 1200 ng of total RNA was used as starting material for reverse transcription in a reaction volume of 20 μL.
Real-time PCR assay was carried out with the use of a 5x HOT FIREPol EvaGreen qPCR Mix Plus (no ROX) (Solis BioDyne, Tartu, Estonia) and HPLC-grade oligonucleotide primers purchased from GenoMed (Warsaw, Poland). Specific primers for determining the expression of the examined and reference genes are presented in Table 2; primers were designed using Primer3web version 4.0.0 (http://bioinfo.ut.ee/primer3/). Each PCR reaction contained 3 μL qPCR mix, 10 μL RNase-free water, 0.225 μL of each primer (working concentration 0.5 mM), and 1.5 μL cDNA template (previously 3x diluted). The reactions were run on the Rotor-Gene Q thermocycler (Qiagen, Dusseldorf, Germany) using the following protocol: 95°C for 15 min and 35 cycles of 94°C for 5 s for denaturation, 59°C for 20 s for annealing, and 72°C for 5 s for extension. After the cycles, a final melting curve analysis with continuous fluorescence measurements was performed to confirm the specificity of the amplification.
The relative gene expression was calculated using the comparative quantification option of the Rotor-Gene Q Series Software 2.0.3 (Qiagen, Dusseldorf, Germany). To compensate for variation in cDNA concentrations and PCR efficiency between samples, an endogenous control gene was amplified in each sample and used for normalization. Initially, three reference genes (HDAC1, ACTB, and GAPDH) were tested; however, after analysis with NormFinder software [28], HDAC1 was identified as the endogenous control with the best expression stability in this experimental design. The results are presented in arbitrary units as the ratio of target gene expression to reference gene expression, with the mean of the control group set to 1.
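As a simple illustration of this normalisation step, the snippet below computes a 2^-ΔΔCt-style relative expression value; this is an assumption for illustration only, since the comparative quantification option used here additionally corrects for amplification efficiency, and the Ct values and gene pairing (target vs. HDAC1) are hypothetical.

```python
# Illustrative 2^-ddCt relative quantification: target gene normalised to the
# reference gene (HDAC1) and to the mean of the control group, so that the
# control group averages ~1, matching the arbitrary units used in the paper.
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    d_ct_sample = np.asarray(ct_target) - np.asarray(ct_reference)
    d_ct_control = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_reference_ctrl))
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: two treated samples and two control samples.
print(relative_expression([22.1, 21.8], [18.0, 18.2], [24.0, 23.9], [18.1, 18.0]))
```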
Statistical Analysis.
Statistical analysis was performed using STATISTICA v. 13.1 (Dell Inc., Round Rock, TX, USA). Results of a two-way (LPS and leptin injection) analysis of variance (ANOVA) followed by post hoc Tukey's test were considered statistically significant at P ≤ 0.05. The results for each season were analysed separately. The ANOVA was performed after its two assumptions, normality (Shapiro-Wilk's test) and homogeneity of variances (Levene's test), were checked. The post hoc test was performed only if one of the main factors exerted a significant effect according to the ANOVA. All data are presented as means ± standard deviation (SD).
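As a rough illustration, the analysis described above could be reproduced for each gene with standard Python tooling; the sketch below is not the authors' STATISTICA workflow and assumes a long-format table with hypothetical columns expression, lps and leptin.

```python
# Two-way ANOVA (LPS x leptin) with Tukey's post hoc test, run per gene.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def analyse_gene(df: pd.DataFrame) -> None:
    # Fixed factors: LPS injection and leptin injection (0/1 each).
    model = ols("expression ~ C(lps) * C(leptin)", data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    print(anova_table)

    # Post hoc Tukey test across the four groups (C, LPS, LEP, LPS+LEP),
    # run only if one of the main factors is significant, as in the paper.
    if (anova_table["PR(>F)"][:2] <= 0.05).any():
        groups = df["lps"].astype(str) + "+" + df["leptin"].astype(str)
        print(pairwise_tukeyhsd(df["expression"], groups, alpha=0.05))
```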
Leptin Receptor (Figure 1)
In the SD season, Tukey's post hoc test showed that, in comparison to the control group, LEPR gene expression was increased after leptin injection (Tukey's test, C vs. LEP, P ≤ 0.001). Moreover, higher LEPR mRNA levels were observed in the LEP and LPS+LEP groups in comparison to the LPS group (Tukey's test, LPS vs. LEP and LPS+LEP, P ≤ 0.0001 and P ≤ 0.001, respectively). In the LD season, both the LPS and leptin injections decreased LEPR gene expression (ANOVA, P ≤ 0.0001 for the LPS effect and P ≤ 0.05 for the leptin effect). The lowest LEPR mRNA level was found in the LPS group, while the LEP and LPS+LEP groups did not differ from each other but remained lower than the control group.
Interleukin-1β and Its Receptors (Figure 2)
Regardless of the season, the LPS injection increased the gene expression of IL1B and both of its receptors (IL1R1 and IL1R2) (ANOVA, P ≤ 0.0001 for all three genes in both seasons), but in the SD season, the influence of endotoxin on IL1R2 gene expression was more pronounced (8.87-fold vs. 5.31-fold change in the SD and LD seasons, respectively).
The single leptin injection enhanced IL1B gene expression only in the LD season (Tukey's test, C vs. LEP, P ≤ 0.02); however, in both seasons, leptin injection intensified the LPS-induced increase in IL1B gene expression (Tukey's test, LPS vs. LPS+LEP, P ≤ 0.001 and P ≤ 0.002 for the SD and LD seasons, respectively). Leptin injection did not influence IL1R1 mRNA levels. IL1R2 gene expression was increased by leptin injection only in the SD season (Tukey's test, C vs. LEP, P ≤ 0.0001), whereas no additive effect of LPS and leptin on this gene expression was observed.
3.3. Interleukin-6, Its Receptor, and Signal Transducer (Figure 3). An effect of endotoxin injection on IL6, its receptor (IL6R), and its signal transducer (IL6ST) gene expression was observed in both seasons regardless of leptin injection (ANOVA, P ≤ 0.0001 for all three genes in both seasons). IL6 and IL6ST gene expression was enhanced, and IL6R expression was decreased, after LPS injection. In the SD season, the influence of endotoxin on IL6 was more pronounced (293-fold increase) than in the LD season (115-fold increase).
Neither IL6 nor its receptor and signal transducer gene expression was influenced by leptin, regardless of the photoperiodic conditions.
TNFα and Its Receptors (Figure 4)
TNFA gene expression was not influenced by LPS injection alone, regardless of the photoperiodic season. On the other hand, both TNFR1 and TNFR2 gene expression was increased by LPS injection regardless of the season (ANOVA, P ≤ 0.0001 for both genes in the two seasons).
Leptin injection increased TNFA gene expression regardless of the photoperiodic conditions (ANOVA, P ≤ 0.0001 for both seasons); however, in the SD season, this effect was more pronounced (3.31-fold vs. 1.78-fold change in the SD and LD seasons, respectively). Only in the SD season did leptin injection increase the gene expression of both TNFα receptors (Tukey's test, C vs. LEP, P ≤ 0.002 and P ≤ 0.04 for TNFR1 and TNFR2, respectively) and intensify the LPS-induced TNFR2 gene expression (Tukey's test, LPS vs. LPS+LEP, P ≤ 0.0004).
Discussion
The fact that leptin is synthesized and secreted by adipose tissue is generally known; however, theories on leptin action on this tissue are inconsistent [29][30][31]. Based on microarray profiling of human white adipose tissue (WAT), Taleb et al. [32] concluded that leptin can act on this tissue, especially on the expression of genes related to inflammation and immunity, but WAT structure and functions are very different from those of PVAT, especially the thoracic one. There are no literature data on whether intravenous leptin administration affects PVAT activity either in healthy organisms or in animals with induced systemic acute inflammation. Moreover, in the present study, the "long-day sheep," a large animal model of the hyperleptinemic state, was used [33]. Zieba et al. [34] went even further, proposing the "long-day ewe" as a model for obesity research, because obese people, like "long-day ewes," are characterized by enhanced food intake and reduced energy expenditure accompanied by a high leptin level. It is worth mentioning that sheep are also considered an accepted animal model in immunological studies because, in contrast to rodents, sheep show a sensitivity to endotoxins similar to that of primates [35,36]. Moreover, the fact that sheep, in contrast to the mouse and rat, are diurnal animals also influences the immune response, because immune system activity exhibits important oscillations over the course of a day [37]. The limited usefulness of small rodents as animal models in immunological experiments was noticed by the US Food and Drug Administration, which concluded that all new drugs developed to treat the symptoms of systemic inflammation must, before the start of clinical trials, be tested on at least one recognized nonrodent animal model [35]. Considering the abovementioned information, we assumed that sheep would be an interesting model for studies connecting leptin, inflammation, and photoperiod, to show whether leptin can modulate the course of the inflammatory reaction in PVAT.
Seasonal changes in LEPR gene expression in PVAT after LPS and leptin administration were observed. In the SD season, a stimulating effect of leptin on its own receptor was observed regardless of the immune status of the animals. In contrast, in the LD season, both examined factors (LPS and leptin) decreased LEPR expression. As the gene expression of the long form of the leptin receptor (OB-Rb) was very low in the collected PVAT, leptin receptor expression was examined based on a fragment of the sequence encoding all forms of the leptin receptor (exons 6 and 7, whereas exon 20 is specific for each form, long and short). As mentioned previously, the "long-day ewe" is characterized by hyperleptinemia with impaired leptin action; such a state is mostly characteristic of obese people with impaired energy metabolism and food intake. However, leptin resistance has also been confirmed in vessels, where it results in an inability to obtain the NO-mimetic vasorelaxing effect of leptin in contrast to its vasoconstricting effects [17,18]. Several mechanisms underlying vascular leptin resistance have been proposed; however, many of them are based on observations of the neuronal actions of leptin, so they need to be verified in arteries [38]. Downregulation of the leptin receptor, as a kind of leptin self-regulation, was observed in the hypothalamus of rats with diet-induced obesity [39]. Also, Bohlen et al. [40] stated that gene expression of OB-Rb decreased upon prolonged exposure to leptin in human aortic smooth muscle cells.
On the other hand, in hyperleptinemic spontaneously hypertensive rats, the expression of leptin receptors is enhanced [41]. In the present study, receptor expression after leptin injection was decreased in the LD season, which can also be proposed as a mechanism of self-regulation under conditions of hyperleptinemia. However, further investigations are needed to evaluate whether this is a sufficient mechanism to induce leptin resistance in PVAT. The molecular mechanism connected with the negative feedback of the intracellular suppressor of cytokine signalling 3 (SOCS-3) on the leptin Janus kinase (JAK)/signal transducer and activator of transcription 3 (STAT3) pathway should be the next step, especially as Szczesna et al. [42] demonstrated that the photoperiod may influence leptin effects on SOCS-3 expression in the sheep pituitary. The next aim of the presented study was to examine whether leptin influences proinflammatory cytokine gene expression under physiological conditions and whether it modulates the progress of the acute inflammatory reaction. The impact of the photoperiod was also considered in this matter. Based on the obtained results, it can be stated that intravenous injection of leptin increased TNFA gene expression regardless of the examined season and the presence of acute inflammation; however, only in the SD photoperiod did leptin administration also stimulate TNFα receptor gene expression. This may indicate that during the LD photoperiod, the leptin action is partially inhibited. This may also suggest increased sensitivity of PVAT to TNFα action during the SD season. The increase in TNFα and TNFα-receptor gene expression induced by exogenous leptin suggests that leptin can influence the autocrine activity of TNFα on PVAT; however, TNFα may also migrate to the aorta wall because of the direct contact between PVAT and the adventitia and so exert its effect there. Generally, in vessels, TNFα reduces NO bioavailability, induces oxidative stress and reactive oxygen species (ROS) formation, and increases proinflammatory cytokine synthesis, playing a significant role in vascular function impairment [43]. On the other hand, in adipose tissue, TNFα participates in the inhibition of carbohydrate metabolism (it can induce a state of insulin resistance in adipocytes), lipogenesis, adipogenesis, and thermogenesis, and in the stimulation of lipolysis [44]. It also influences the endocrine functions of adipose tissue, suppressing the production of adiponectin [45] or promoting leptin release from adipocytes [46,47]. The mechanism of leptin influence on TNFα expression has already been examined, but never in PVAT. Lee et al. [48] stated that leptin increases TNFα in Raw 264.7 cells. The authors also suggested a possible pathway: phospholipase C (PLCγ)/Src/phospholipase D1 (PLD1)/phosphatidic acid (PA)/p70S6K/c-jun N-terminal protein kinase (JNK). They stated that leptin enhanced the activity of PLD1 through activation of PLCγ and Src, while PLD1 siRNA decreased the leptin-induced expression and production of TNFα. Leptin-induced PLD activation was also inhibited by a PLCγ inhibitor (POA) and an Src kinase inhibitor (PP2), indicating that PLCγ and Src kinase are upstream activators of PLD1. Earlier, Shen et al. [49] showed that leptin can enhance TNFα via the JNK and p38 MAPK pathways in LPS-stimulated Kupffer cells. In the present study, in contrast, leptin administration led to an increase in TNFA gene expression in PVAT regardless of prior LPS stimulation and the examined photoperiod.
However, we did not observe a stimulatory effect of LPS on TNFA gene expression, although the same dose of LPS used in ewes increased TNFA in the hypothalamus [25]. Only in the SD season was TNFA gene expression highest in the group treated with both LPS and leptin. It must be stressed that only in this season did leptin injection also increase TNFα receptor gene expression in PVAT, which may indicate that, especially in this season, the leptin-increased TNFA gene expression is associated with increased autocrine activity of TNFα on this tissue. TNFα mediates its biological effects on adipose tissue via two distinct cell surface receptors: TNFR1 and TNFR2. The circulating levels of both of these receptors are increased in both obese and nonobese adults with proatherogenic lipid profiles [50,51]; however, TNFR1 is said to be the dominant one. For example, the lack of TNFR1, but not of TNFR2, significantly improves insulin sensitivity in ob/ob mice and cultured adipocytes [52,53]. In the present study, an effect of season was observed for both TNFR1 and TNFR2, which may suggest that both receptors play a crucial role in mediating TNFα effects in a season-dependent manner. Moreover, TNFR2 expression was synergistically increased by LPS and leptin, whereas no such effect was observed for TNFR1. The lack of a leptin effect on TNFα receptor gene expression in the LD photoperiod, with a simultaneous effect on TNFA gene expression, may be interpreted in two ways. Firstly, it may be suggested that although TNFA expression increases after leptin injection, TNFα action is not pronounced enough to stimulate its receptor gene expression. It must be pointed out here that, as TNFα actions on adipose tissue are rather negative, such a mechanism might be regarded as a positive one. On the other hand, it cannot be excluded that during the LD season, the basal protein expression of TNFα receptors is so high that a further increase in gene expression is not necessary. Further research, especially at the protein level, is required to clarify the underlying mechanism.
In addition to the effect of leptin on TNFα activity in PVAT, leptin injection also affected other studied proinflammatory cytokines. It was found that leptin administration potentiates the stimulatory effect of LPS on IL1B gene expression in PVAT, but an individual effect of leptin on IL1B gene expression was observed only in the LD season. IL1B is one of the most potent proinflammatory cytokines promoting vascular inflammation. IL1B acts not only as a local vascular but also as a systemic contributor to atherosclerosis progression [54]. Acting mainly through nuclear factor kappa B (NF-κB) signalling, JNK, and the p38 mitogen-activated protein kinase pathways [55], IL1B promotes the expression of other cytokines (e.g., IL6), adhesion molecules, and the migration and mitogenesis of vascular smooth muscle [56]. Considering such negative effects of IL1B, the inhibition of its activity has become a potential therapeutic target in the prevention and treatment of atherosclerosis. The Canakinumab Anti-Inflammatory Thrombosis Outcome Study (CANTOS) was one of the clinical studies that concentrated on blocking IL1B proinflammatory activity for atherosclerosis therapy [54]. In CANTOS, a human monoclonal antibody, canakinumab, was examined. In contrast to the IL1 receptor antagonist (IL1ra, anakinra), canakinumab, acting selectively on IL1B but not on IL1A, did not affect host defences and susceptibility to infection to such a large extent [54]. Although there are no studies presenting the effect of leptin on cytokine (and thus IL1B) synthesis in sheep tissue, there are several works conducted on rats, although not in the context of PVAT. Luheshi et al. [57] stated that leptin increased IL1B in the hypothalamus of rats; furthermore, leptin action on appetite and body temperature was abolished by IL1ra or in mice lacking the IL1 receptor. Next, Sachot et al. [58] showed that leptin is a circulating mediator of LPS-induced anorexia and fever, probably through a hypothalamic IL1B-dependent mechanism (but not an IL6-dependent one), as fever and anorexia were attenuated in the presence of leptin antiserum. Moreover, Hosoi et al. [59] confirmed their thesis that leptin regulates IL1B expression in the brain via STAT3-independent mechanisms by conducting research on db/db rodents, which do not possess the active long form of the leptin receptor. As mentioned above, IL1B effects are strictly connected with the presence of its type I and type II receptors (IL1R1 and IL1R2, respectively). However, in the present study, we did not observe an effect of leptin on IL1R1 expression; the expression of IL1R2 was increased by leptin only in the SD season. It should be stressed that although IL1B can be bound by two receptors, only IL1R1 is able to transduce the signal to the inside of the cell. IL1R2 acts as a decoy receptor and reduces the amount of substrate for the appropriate receptor, so its function can be regarded as anti-inflammatory [60]. Considering the increased IL1B expression in the LD season and the increased IL1R2 expression in the SD season, it can be suggested that the leptin-induced IL1B activity may be more potent in the LD season, which is in contrast to the results obtained for TNFα in the present study. However, regardless of the season, leptin can enhance the stimulatory effect of LPS on IL1B, which stresses the proinflammatory activity of leptin in the PVAT of ewes with induced acute inflammation.
Figure 4: Relative gene expression of tumour necrosis factor α (TNFA) and its receptor types 1 and 2 (TNFR1 and TNFR2, respectively) in ewe's thoracic perivascular adipose tissue (PVAT) during short-day (SD) and long-day (LD) seasons. ABC: bars with different letters differ significantly according to two-way ANOVA with post hoc Tukey's test for each gene separately, P ≤ 0.05, n = 6.
Adipose tissue is said to be one of the major sources of IL6 in the organism, producing 10-35% of the circulating IL6 plasma level in humans [61]. IL6 exerts pleiotropic actions in the organism; among them is the induction of symptoms that accompany infection, such as increased temperature. IL6 has even been proposed as a new biomarker for the diagnosis of sepsis, which might be helpful to provide adequate and timely management of critically ill patients and thus reduce the morbidity and mortality associated with this condition [62]. Also in cardiovascular disease, IL6 is an upstream inflammatory cytokine that plays a central role in propagating the downstream inflammatory response responsible for atherosclerosis [63]. The use of the IL6 inhibitor tocilizumab improved endothelial function and decreased aortic stiffness [64]. Under laboratory conditions, induction of inflammation with the use of LPS also significantly increases the IL6 plasma level as well as the gene expression of this interleukin in tissues [25]. In the present study, the increase in IL6 gene expression after LPS injection was 293-fold, whereas that of IL1B was only 3-fold. The seasonal differences in LPS effects on IL6 gene expression in sheep PVAT are also interesting. In the SD season, the influence of endotoxin on IL6 was more pronounced (293-fold increase) than in the LD season (115-fold increase), which shows that the response to LPS injection is season-dependent, with higher sensitivity in the SD season. The explanation for such a condition could be connected with seasonal leptin resistance (lower sensitivity in the LD season); however, in the present study, no effect of leptin on IL6 or its receptor was observed under either physiological or acute inflammation conditions. This is rather surprising, especially as IL6 is said to be one of the most important proinflammatory cytokines secreted by adipose tissue and it is known (based on studies conducted on rats) that sickness behaviour after LPS injection is mediated by both leptin and IL6 [65]. In contrast, Taleb et al. [32] stated that a single supraphysiological dose of polyethylene glycol-leptin injected into healthy nonobese men can decrease IL6 gene expression in WAT 72 h after treatment. An anti-inflammatory effect of leptin, albeit in plasma, was observed by Xiao et al. [66], who found that leptin infusion before endotoxin treatment attenuated the IL6 and cortisol responses to the endotoxin in ovariectomized rhesus monkeys. They concluded that leptin can be both released in response to inflammation and act to attenuate the response to proinflammatory cytokines. Such conclusions are in contrast to the results obtained in this study, which could be due to differences in the animal model (ovariectomized animals vs. females with active ovaries, synchronized to eliminate the effect of different levels of reproductive hormones), tissue (plasma vs. PVAT), or time of leptin injection (before vs. after endotoxin injection).
Conclusions
To conclude, acute inflammation induction modulates leptin receptor expression in thoracic PVAT in a season-dependent manner. In addition, exogenous leptin influences IL1B and TNFA gene expression, which may modulate the progress of the inflammatory reaction in this adipose tissue, indicating leptin as a significant risk factor in atherosclerosis progression. Moreover, in ewes, the sensitivity of PVAT to leptin action on proinflammatory cytokines and their receptors depends on the photoperiodic conditions, with stronger effects observed in the SD season.
Moreover, the obtained results, showing decreased sensitivity of PVAT to leptin action in "long-day" sheep, may suggest that such sheep can be an interesting model for cardiovascular system studies. This seems particularly important given that rodents do not have PVAT, and that pigs, which are very often used in PVAT research, do not show the seasonal changes in leptin sensitivity that are specific to seasonal animals such as sheep.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflict of interest.
"Biology"
] |
Efficient Low-rank Multimodal Fusion with Modality-Specific Factors
Multimodal research is an emerging field of artificial intelligence, and one of the main research problems in this field is multimodal fusion. The fusion of multimodal data is the process of integrating multiple unimodal representations into one compact multimodal representation. Previous research in this field has exploited the expressiveness of tensors for multimodal representation. However, these methods often suffer from an exponential increase in dimensions and in computational complexity introduced by the transformation of the input into a tensor. In this paper, we propose the Low-rank Multimodal Fusion method, which performs multimodal fusion using low-rank tensors to improve efficiency. We evaluate our model on three different tasks: multimodal sentiment analysis, speaker trait analysis, and emotion recognition. Our model achieves competitive results on all these tasks while drastically reducing computational complexity. Additional experiments also show that our model can perform robustly for a wide range of low-rank settings and is indeed much more efficient in both training and inference compared to other methods that utilize tensor representations.
Introduction
Multimodal research has shown great progress in a variety of tasks as an emerging research field of artificial intelligence. Tasks such as speech recognition (Yuhas et al., 1989), emotion recognition (De Silva et al., 1997), (Chen et al., 1998), (Wöllmer et al., 2013), sentiment analysis (Morency et al., 2011), as well as speaker trait analysis and media description (Park et al., 2014a), have seen a great boost in performance with developments in multimodal research.
However, a core research challenge yet to be solved in this domain is multimodal fusion. The goal of fusion is to combine multiple modalities to leverage the complementarity of heterogeneous data and provide more robust predictions. In this regard, an important challenge has been scaling up fusion to multiple modalities while maintaining reasonable model complexity. Some of the recent attempts at multimodal fusion (Fukui et al., 2016) investigate the use of tensors for multimodal representation and show significant improvements in performance. Unfortunately, they are often constrained by the exponential increase in computation and memory cost introduced by using tensor representations. This heavily restricts the applicability of these models, especially when we have more than two modalities in the dataset.
In this paper, we propose the Low-rank Multimodal Fusion, a method leveraging low-rank weight tensors to make multimodal fusion efficient without compromising on performance. The overall architecture is shown in Figure 1. We evaluated our approach with experiments on three multimodal tasks using public datasets and compare its performance with state-of-the-art models. We also study how different low-rank settings impact the performance of our model and show that our model performs robustly within a wide range of rank settings. Finally, we perform an analysis of the impact of our method on the number of parameters and run-time in comparison to other fusion methods. Through theoretical analysis, we show that our model can scale linearly in the number of modalities, and our experiments also show a corresponding speedup in training when compared with other tensor-based models.
Figure 1: Overview of our Low-rank Multimodal Fusion model structure: LMF first obtains the unimodal representations z_a, z_v, z_l by passing the unimodal inputs x_a, x_v, x_l into three sub-embedding networks f_a, f_v, f_l, respectively. LMF produces the multimodal output representation by performing low-rank multimodal fusion with modality-specific factors. The multimodal representation can then be used for prediction tasks.
The main contributions of our paper are as follows:
• We propose the Low-rank Multimodal Fusion method for multimodal fusion that can scale linearly in the number of modalities.
• We show that our model compares to state-of-the-art models in performance on three multimodal tasks evaluated on public datasets.
• We show that our model is computationally efficient and has fewer parameters in comparison to previous tensor-based methods.
Related Work
Multimodal fusion enables us to leverage complementary information present in multimodal data, thus discovering the dependency of information on multiple modalities. Previous studies have shown that more effective fusion methods translate to better model performance, and a wide range of fusion methods has been proposed. Early fusion is a technique that uses feature concatenation as the method of fusing different views. Several works that adopt this method of fusion (Poria et al., 2016), (Wang et al., 2016) use input-level feature concatenation and feed the concatenated features as input, sometimes even removing the temporal dependency present in the modalities (Morency et al., 2011). The drawback of this class of methods is that although it achieves fusion at an early stage, intra-modal interactions are potentially suppressed, thus losing out on the context and temporal dependencies within each modality.
On the other hand, late fusion builds separate models for each modality and then integrates the outputs together using a method such as majority voting or weighted averaging (Wortwein and Scherer, 2017), (Nojavanasghari et al., 2016). Since separate models are built for each modality, inter-modal interactions are usually not modeled effectively.
Given these shortcomings, more recent work focuses on intermediate approaches that model both intra- and inter-modal dynamics. Fukui et al. (2016) proposes to use Compact Bilinear Pooling over the outer product of visual and linguistic representations to exploit the interactions between vision and language for visual question answering. Similar to the idea of exploiting interactions, the Tensor Fusion Network was proposed, which computes the outer product between unimodal representations from three different modalities to compute a tensor representation. These methods exploit tensor representations to model inter-modality interactions and have shown great success. However, such methods suffer from exponentially increasing computational complexity, as the outer product over multiple modalities results in extremely high-dimensional tensor representations.
For unimodal data, the method of low-rank tensor approximation has been used in a variety of applications to implement more efficient tensor operations. Razenshteyn et al. (2016) proposes a modified weighted version of low-rank approximation, and Koch and Lubich (2010) applies the method to temporally dependent data to obtain low-rank approximations. As for applications, Lei et al. (2014) proposes a low-rank tensor technique for dependency parsing, while Wang and Ahuja (2008) uses the method of low-rank approximation applied directly on multidimensional image data (Datum-as-is representation) to enhance computer vision applications. Hu et al. (2017) proposes a low-rank tensor-based fusion framework to improve face recognition performance using the fusion of facial attribute information. However, none of these previous works aims to apply low-rank tensor techniques to multimodal fusion.
Our Low-rank Multimodal Fusion method provides a much more efficient way to compute tensor-based multimodal representations, with far fewer parameters and much lower computational complexity. The efficiency and performance of our approach are evaluated on different downstream tasks, namely sentiment analysis, speaker-trait recognition, and emotion recognition.
Low-rank Multimodal Fusion
In this section, we start by formulating the problem of multimodal fusion and introducing fusion methods based on tensor representations. Tensors are powerful in their expressiveness but do not scale well to a large number of modalities. Our proposed model decomposes the weights into low-rank factors, which reduces the number of parameters in the model. This decomposition can be performed efficiently by exploiting the parallel decomposition of low-rank weight tensor and input tensor to compute tensor-based fusion. Our method is able to scale linearly with the number of modalities.
Multimodal Fusion using Tensor Representations
In this paper, we formulate multimodal fusion as a multilinear function of a set of unimodal vector representations $\{z_m\}_{m=1}^{M}$ encoding the unimodal information of the $M$ different modalities; the goal of multimodal fusion is to integrate the unimodal representations into one compact multimodal representation for downstream tasks. Tensor representation is one successful approach for multimodal fusion. It first requires a transformation of the input representations into a high-dimensional tensor and then maps it back to a lower-dimensional output vector space. Previous works have shown that this method is more effective than simple concatenation or pooling in terms of capturing multimodal interactions (Fukui et al., 2016). Tensors are usually created by taking the outer product over the input modalities. In addition, in order to be able to model the interactions between any subset of modalities using one tensor, a simple extension was proposed in which 1s are appended to the unimodal representations before taking the outer product. The input tensor $Z$ formed by the unimodal representations is computed as:
$$Z = \bigotimes_{m=1}^{M} z_m \qquad (1)$$
where $\bigotimes_{m=1}^{M}$ denotes the tensor outer product over a set of vectors indexed by $m$, and $z_m$ is the input representation with appended 1s.
The input tensor $Z \in \mathbb{R}^{d_1 \times d_2 \times \cdots \times d_M}$ is then passed through a linear layer $g(\cdot)$ to produce a vector representation:
$$h = g(Z; W, b) = W \cdot Z + b, \quad h, b \in \mathbb{R}^{d_h} \qquad (2)$$
where $W$ is the weight of this layer and $b$ is the bias. With $Z$ being an order-$M$ tensor (where $M$ is the number of input modalities), the weight $W$ will naturally be a tensor of order $(M+1)$ in $\mathbb{R}^{d_1 \times \cdots \times d_M \times d_h}$. The extra $(M+1)$-th dimension corresponds to the size of the output representation $d_h$. In the tensor dot product $W \cdot Z$, the weight tensor $W$ can then be viewed as $d_h$ order-$M$ tensors.
In other words, the weight $W$ can be partitioned into $W_k \in \mathbb{R}^{d_1 \times \cdots \times d_M}$, $k = 1, \ldots, d_h$. Each $W_k$ contributes to one dimension in the output vector $h$, i.e. $h_k = W_k \cdot Z$. This interpretation of tensor fusion is illustrated in Figure 2 for the bi-modal case.
Figure 2: Tensor fusion via tensor outer product.
One of the main drawbacks of tensor fusion is that we have to explicitly create the high-dimensional tensor $Z$. The dimensionality of $Z$ will increase exponentially with the number of modalities as $\prod_{m=1}^{M} d_m$. The number of parameters to learn in the weight tensor $W$ will also increase exponentially. This not only introduces a lot of computation but also exposes the model to risks of overfitting.
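To make this cost argument concrete, the sketch below (our illustration, not code from the paper) builds the bimodal tensor $Z$ explicitly and maps it through a linear layer; the weight count already scales as the product of the padded input dimensions.

```python
# Explicit bimodal tensor fusion: outer product of the (1-appended) unimodal
# representations, flattened and passed through a linear layer. The linear
# layer is created inline only to illustrate the parameter count.
import torch
import torch.nn as nn

def tensor_fusion_bimodal(z1, z2, out_dim):
    ones = torch.ones(z1.size(0), 1)
    z1 = torch.cat([z1, ones], dim=1)                 # (batch, d1 + 1)
    z2 = torch.cat([z2, ones], dim=1)                 # (batch, d2 + 1)
    Z = torch.einsum("bi,bj->bij", z1, z2)            # (batch, d1 + 1, d2 + 1)
    W = nn.Linear(z1.size(1) * z2.size(1), out_dim)   # (d1 + 1)(d2 + 1) * out_dim weights
    return W(Z.flatten(start_dim=1))

h = tensor_fusion_bimodal(torch.randn(8, 32), torch.randn(8, 64), out_dim=16)
print(h.shape)  # torch.Size([8, 16]); already (33 * 65) * 16 fusion weights
```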
Low-rank Multimodal Fusion with Modality-Specific Factors
As a solution to the problems of tensor-based fusion, we propose Low-rank Multimodal Fusion (LMF). LMF parameterizes $g(\cdot)$ from Equation 2 with a set of modality-specific low-rank factors that can be used to recover a low-rank weight tensor, in contrast to the full tensor $W$. Moreover, we show that by decomposing the weight into a set of low-rank factors, we can exploit the fact that the tensor $Z$ actually decomposes into $\{z_m\}_{m=1}^{M}$, which allows us to directly compute the output $h$ without explicitly tensorizing the unimodal representations. LMF reduces the number of parameters, as well as the computational complexity involved in tensorization, from being exponential in $M$ to linear.
Low-rank Weight Decomposition
The idea of LMF is to decompose the weight tensor $W$ into $M$ sets of modality-specific factors. However, since $W$ itself is an order-$(M+1)$ tensor, commonly used methods for decomposition will result in $M+1$ parts. Hence, we still adopt the view introduced in Section 3.1 that $W$ is formed by $d_h$ order-$M$ tensors $W_k \in \mathbb{R}^{d_1 \times \cdots \times d_M}$, $k = 1, \ldots, d_h$. For such an order-$M$ tensor $W_k$, there always exists an exact decomposition into vectors of the form:
$$W_k = \sum_{i=1}^{R} \bigotimes_{m=1}^{M} w_{m,k}^{(i)}, \quad w_{m,k}^{(i)} \in \mathbb{R}^{d_m} \qquad (3)$$
The minimal $R$ that makes the decomposition valid is called the rank of the tensor. The vector sets $\{\{w_{m,k}^{(i)}\}_{m=1}^{M}\}_{i=1}^{R}$ are called the rank-$R$ decomposition factors of the original tensor.
In LMF, we start with a fixed rank $r$ and parameterize the model with $r$ decomposition factors $\{\{w_{m,k}^{(i)}\}_{m=1}^{M}\}_{i=1}^{r}$, $k = 1, \ldots, d_h$, that can be used to reconstruct a low-rank version of these $W_k$.
We can regroup and concatenate these vectors into $M$ modality-specific low-rank factors. Let $w_m^{(i)} = [w_{m,1}^{(i)}, w_{m,2}^{(i)}, \ldots, w_{m,d_h}^{(i)}]$; then for modality $m$, $\{w_m^{(i)}\}_{i=1}^{r}$ is its corresponding set of low-rank factors, and we can recover a low-rank weight tensor by:
$$W = \sum_{i=1}^{r} \bigotimes_{m=1}^{M} w_m^{(i)} \qquad (4)$$
Hence equation 2 can be computed by:
$$h = \left( \sum_{i=1}^{r} \bigotimes_{m=1}^{M} w_m^{(i)} \right) \cdot Z \qquad (5)$$
Note that for all $m$, $w_m^{(i)} \in \mathbb{R}^{d_m \times d_h}$ shares the same size for the second dimension. We define their outer product to be over only the dimensions that are not shared: $w_{m_1}^{(i)} \otimes w_{m_2}^{(i)} \in \mathbb{R}^{d_{m_1} \times d_{m_2} \times d_h}$. A bimodal example of this procedure is illustrated in Figure 3.
Nevertheless, by introducing the low-rank factors, we would now have to compute the reconstruction $W = \sum_{i=1}^{r} \bigotimes_{m=1}^{M} w_m^{(i)}$ for the forward computation, which would introduce even more computation.
Efficient Low-rank Fusion Exploiting Parallel Decomposition
In this section, we will introduce an efficient procedure for computing $h$, exploiting the fact that the tensor $Z$ naturally decomposes into the original input $\{z_m\}_{m=1}^{M}$, which is parallel to the modality-specific low-rank factors. In fact, that is the main reason why we want to decompose the weight tensor into $M$ modality-specific factors.
Using the fact that $Z = \bigotimes_{m=1}^{M} z_m$, we can simplify equation 5:
$$h = \left( \sum_{i=1}^{r} \bigotimes_{m=1}^{M} w_m^{(i)} \right) \cdot Z = \Lambda_{m=1}^{M} \left[ \sum_{i=1}^{r} w_m^{(i)} \cdot z_m \right] \qquad (6)$$
where $\Lambda_{m=1}^{M}$ denotes the element-wise product over a sequence of tensors: $\Lambda_{t=1}^{3} x_t = x_1 \circ x_2 \circ x_3$. An illustration of the trimodal case of equation 6 is shown in Figure 1. We can also derive equation 6 for a bimodal case to clarify what it does:
$$h = \left( \sum_{i=1}^{r} w_a^{(i)} \cdot z_a \right) \circ \left( \sum_{i=1}^{r} w_v^{(i)} \cdot z_v \right) \qquad (7)$$
An important aspect of this simplification is that it exploits the parallel decomposition of both $Z$ and $W$, so that we can compute $h$ without actually creating the tensor $Z$ from the input representations $z_m$. In addition, different modalities are decoupled in the simplified computation of $h$, which allows for easy generalization of our approach to an arbitrary number of modalities: adding a new modality can be done simply by adding another set of modality-specific factors and extending Equation 7. Last but not least, Equation 6 consists of fully differentiable operations, which enables the parameters $\{w_m^{(i)}\}_{i=1}^{r}$, $m = 1, \ldots, M$, to be learned end-to-end via back-propagation.
Using Equation 6, we can compute $h$ directly from the input unimodal representations and their modality-specific decomposition factors, avoiding the heavy lifting of computing the large input tensor $Z$ and the full weight $W$, as well as the $r$ separate linear transformations. Instead, the input tensor and subsequent linear projection are computed implicitly together in Equation 6, and this is far more efficient than the original method described in Section 3.1. Indeed, LMF reduces the computational complexity of tensorization and fusion from $O(d_y \prod_{m=1}^{M} d_m)$ to $O(d_y \times r \times \sum_{m=1}^{M} d_m)$. In practice, we use a slightly different form of Equation 6, where we concatenate the low-rank factors into $M$ order-3 tensors $\mathcal{W}_m = [w_m^{(1)}; w_m^{(2)}; \ldots; w_m^{(r)}] \in \mathbb{R}^{r \times d_m \times d_h}$ and swap the order in which we do the element-wise product and summation:
$$h = \sum_{i=1}^{r} \left[ \Lambda_{m=1}^{M} \, \mathcal{W}_m \cdot z_m \right]_{i,:} \qquad (8)$$
and now the summation is done along the first dimension of the bracketed matrix.
$[\cdot]_{i,:}$ indicates the $i$-th slice of a matrix. In this way, we can parameterize the model with $M$ order-3 tensors, instead of parameterizing with sets of vectors.
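The following is a minimal PyTorch sketch of the fusion described by Equations 6-8, written from the description above rather than taken from the authors' released implementation; the dimensions and initialization are toy values chosen for illustration.

```python
# Low-rank multimodal fusion: one order-3 factor tensor per modality,
# element-wise product across modalities, then summation over the rank dimension.
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    def __init__(self, input_dims, output_dim, rank):
        super().__init__()
        # Factor W_m has shape (rank, d_m + 1, output_dim); the +1 accommodates
        # the constant 1 appended to each unimodal representation.
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d + 1, output_dim) * 0.1) for d in input_dims]
        )

    def forward(self, unimodal_reps):
        fused = None
        for z, w in zip(unimodal_reps, self.factors):
            ones = torch.ones(z.size(0), 1, device=z.device)
            z1 = torch.cat([z, ones], dim=1)                 # append 1s
            proj = torch.einsum("bd,rdh->brh", z1, w)        # W_m . z_m -> (batch, rank, d_h)
            fused = proj if fused is None else fused * proj  # element-wise product over modalities
        return fused.sum(dim=1)                              # sum over the rank dimension

# Example: trimodal fusion of 32-, 32- and 64-dimensional representations.
lmf = LowRankFusion([32, 32, 64], output_dim=16, rank=4)
h = lmf([torch.randn(8, 32), torch.randn(8, 32), torch.randn(8, 64)])
print(h.shape)  # torch.Size([8, 16])
```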
Experimental Methodology
We compare LMF with previous state-of-the-art baselines, and we use the Tensor Fusion Network (TFN) as a baseline for tensor-based approaches, which has the structure most similar to ours except that it explicitly forms the large multi-dimensional tensor for fusion across different modalities. We design our experiments to better understand the characteristics of LMF. Our goal is to answer the following four research questions: (1) Impact of Multimodal Low-rank Fusion: Direct comparison between our proposed LMF model and the previous TFN model.
(2) Comparison with the State-of-the-art: We evaluate the performance of LMF and state-of-the-art baselines on three different tasks and datasets.
(3) Complexity Analysis: We study the model complexity of LMF and compare it with the TFN model.
(4) Rank Settings: We explore performance of LMF with different rank settings.
The results of these experiments are presented in Section 5.
Datasets
We perform our experiments on the following multimodal datasets: CMU-MOSI (Zadeh et al., 2016a), POM (Park et al., 2014b), and IEMOCAP (Busso et al., 2008) for the sentiment analysis, speaker traits recognition, and emotion recognition tasks, respectively; in the last task, the goal is to identify speakers' emotions based on their verbal and nonverbal behaviors.
Features
Each dataset consists of three modalities, namely the language, visual, and acoustic modalities. To reach the same time alignment across modalities, we perform word alignment using P2FA (Yuan and Liberman, 2008), which allows us to align the three modalities at the word granularity. We calculate the visual and acoustic features by taking the average of their feature values over the word time interval.
Language: We use pre-trained 300-dimensional GloVe word embeddings (Pennington et al., 2014) to encode a sequence of transcribed words into a sequence of word vectors.
Visual: The library Facet is used to extract a set of visual features for each frame (sampled at 30 Hz), including 20 facial action units, 68 facial landmarks, head pose, gaze tracking, and HOG features (Zhu et al., 2006).
Acoustic: We use the COVAREP acoustic analysis framework (Degottex et al., 2014) to extract a set of low-level acoustic features, including 12 Mel-frequency cepstral coefficients (MFCCs), pitch, voiced/unvoiced segmentation, glottal source, peak slope, and maxima dispersion quotient features.
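As an illustration of the word-level averaging described above (our sketch; the frame arrays and word intervals are hypothetical), visual or acoustic frame features can be averaged over each word's time interval as follows.

```python
# Average frame-level features over word intervals so that the visual/acoustic
# streams become time-aligned with the word-level language stream.
import numpy as np

def average_over_words(frame_feats, frame_times, word_intervals):
    """frame_feats: (n_frames, d); frame_times: (n_frames,); word_intervals: [(start, end), ...]"""
    word_feats = []
    for start, end in word_intervals:
        mask = (frame_times >= start) & (frame_times < end)
        # Fall back to zeros when no frame falls inside the word interval.
        word_feats.append(frame_feats[mask].mean(axis=0) if mask.any()
                          else np.zeros(frame_feats.shape[1]))
    return np.stack(word_feats)        # (n_words, d), aligned with the transcript
```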
Model Architecture
In order to compare our fusion method with previous work, we adopt a simple and straightforward model architecture for extracting unimodal representations. Since we have three modalities for each dataset, we designed three unimodal sub-embedding networks, denoted $f_a$, $f_v$, $f_l$, to extract unimodal representations $z_a$, $z_v$, $z_l$ from the unimodal input features $x_a$, $x_v$, $x_l$. For the acoustic and visual modalities, the sub-embedding network is a simple 2-layer feed-forward neural network, and for the language modality, we used an LSTM (Hochreiter and Schmidhuber, 1997) to extract representations. The model architecture is illustrated in Figure 1.
Baseline Models
We compare the performance of LMF to the following baselines and state-of-the-art models in multimodal sentiment analysis, speaker trait recognition, and emotion recognition.
Support Vector Machines: The Support Vector Machine (SVM) (Cortes and Vapnik, 1995) is a widely used non-neural classifier. This baseline is trained on the concatenated multimodal features for the classification or regression task (Pérez-Rosas et al., 2013), (Park et al., 2014a), (Zadeh et al., 2016b).
Deep Fusion: The Deep Fusion model (DF) (Nojavanasghari et al., 2016) trains one deep neural model for each modality and then combines the outputs of the modality networks with a joint neural network.
Tensor Fusion Network: The Tensor Fusion Network (TFN) explicitly models view-specific and cross-view dynamics by creating a multi-dimensional tensor that captures unimodal, bimodal, and trimodal interactions across three modalities.
Memory Fusion Network: The Memory Fusion Network (MFN) (Zadeh et al., 2018a) accounts for view-specific and cross-view interactions and continuously models them through time with a special attention mechanism, summarized through time with a Multi-view Gated Memory.
Bidirectional Contextual LSTM: The Bidirectional Contextual LSTM (BC-LSTM) (Fukui et al., 2016) performs context-dependent fusion of multimodal data.
Multi-View LSTM: The Multi-View LSTM (MV-LSTM) (Rajagopalan et al., 2016) aims to capture both modality-specific and cross-modality interactions from multiple modalities by partitioning the memory cell and the gates corresponding to multiple modalities.
Multi-attention Recurrent Network
The Multi-attention Recurrent Network (MARN) (Zadeh et al., 2018b) explicitly models interactions between modalities through time using a neural component called the Multi-attention Block (MAB) and stores them in a hybrid memory called the Long-short Term Hybrid Memory (LSTHM).
Evaluation Metrics
Multiple evaluation tasks are performed during our evaluation: multi-class classification and regression. The multi-class classification task is applied to all three multimodal datasets, and the regression task is applied to the CMU-MOSI and the POM dataset. For binary classification and multiclass classification, we report F1 score and accuracy Acc−k where k denotes the number of classes. Specifically, Acc−2 stands for the binary classification. For regression, we report Mean Absolute Error (MAE) and Pearson correlation (Corr). Higher values denote better performance for all metrics except for MAE.
Results and Discussion
In this section, we present and discuss the results from the experiments designed to study the research questions introduced in section 4.
Impact of Low-rank Multimodal Fusion
In this experiment, we compare our model directly with the TFN model since it has the most similar structure to our model, except that TFN explicitly forms the multimodal tensor fusion. The comparison reported in the last two rows of Table 2 demonstrates that our model significantly outperforms TFN across all datasets and metrics. This competitive performance of LMF compared to TFN emphasizes the advantage of Low-rank Multimodal Fusion.
Comparison with the State-of-the-art
We compare our model with the baselines and state-of-the-art models for sentiment analysis, speaker traits recognition, and emotion recognition. Results are shown in Table 2. LMF is able to achieve competitive and consistent results across all datasets.
On the multimodal sentiment regression task, LMF outperforms the previous state-of-the-art model on MAE and Corr. Note the multiclass accuracy is calculated by mapping the range of continuous sentiment values into a set of intervals that are used as discrete classes.
On the multimodal speaker traits recognition task, we report the average evaluation score over 16 speaker traits and show that our model achieves state-of-the-art performance on all three evaluation metrics on the POM dataset.
On the multimodal emotion recognition task, our model achieves better results compared to the state-of-the-art models across all emotions on the F1 score. F1-emotion in the evaluation metrics indicates the F1 score for a certain emotion class.
Complexity Analysis
Theoretically, the model complexity of our fusion method is O(d_y × r × Σ_{m=1}^{M} d_m), compared to O(d_y × Π_{m=1}^{M} d_m) for TFN from Section 3.1. In practice, we calculate the total number of parameters used in each model, where we choose M = 3, d_1 = 32, d_2 = 32, d_3 = 64, r = 4, d_y = 1. Under this hyper-parameter setting, our model contains about 1.1 × 10^6 parameters, while TFN contains about 12.5 × 10^6 parameters, which is nearly 11 times more. Note that the numbers of parameters above count not only the parameters in the multimodal fusion stage but also the parameters in the subnetworks.
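The following sketch illustrates the scaling difference for the fusion stage only (the subnetwork parameters that dominate the totals reported above are ignored), assuming each modality representation of dimension d_m is appended with a constant 1 before fusion; the exact layer layout of either model may differ.

```python
def lmf_fusion_params(dims, r, d_y):
    """Fusion-stage parameters of LMF: r low-rank factors of shape
    (d_m + 1) x d_y per modality, plus an output bias of size d_y."""
    return sum(r * (d + 1) * d_y for d in dims) + d_y

def tfn_fusion_params(dims, d_y):
    """Fusion-stage parameters of a TFN-style dense map from the flattened
    outer-product tensor of the (d_m + 1)-dimensional inputs to the output."""
    prod = 1
    for d in dims:
        prod *= d + 1
    return prod * d_y + d_y

dims, r, d_y = [32, 32, 64], 4, 1
print(lmf_fusion_params(dims, r, d_y))   # 525 fusion parameters for LMF
print(tfn_fusion_params(dims, d_y))      # 70,786 for the tensor-based dense map
```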
Furthermore, we evaluate the computational complexity of LMF by measuring the training and testing speeds between LMF and TFN. Table 3 illustrates the impact of Low-rank Multimodal Fusion on the training and testing speeds compared with TFN model. Here we set rank to be 4 since it can generally achieve fairly competent performance. Based on these results, performing a low-rank multimodal fusion with modality-specific low-rank factors significantly reduces the amount of time needed for training and testing the model. On an NVIDIA Quadro K4200 GPU, LMF trains with an average frequency of 1134.82 IPS (data point inferences per second) while the TFN model trains at an average of 340.74 IPS.
Rank Settings
To evaluate the impact of different rank settings on our LMF model, we measure the change in performance on the CMU-MOSI dataset while varying the rank.
Figure 4: The impact of different rank settings on model performance. As the rank increases, the results become unstable; a low rank is enough in terms of the mean absolute error.
The results are presented in Figure 4. We observed that as the rank increases, the training results become more and more unstable, and that using a very low rank is enough to achieve fairly competent performance.
Conclusion
In this paper, we introduce a Low-rank Multimodal Fusion method that performs multimodal fusion with modality-specific low-rank factors. LMF scales linearly in the number of modalities and achieves competitive results across different multimodal tasks. Furthermore, LMF reduces the computational complexity of tensor-based fusion from exponential to linear in the number of modalities. In practice, LMF effectively improves training and testing efficiency compared to TFN, which performs multimodal fusion with tensor representations.
Future work on similar topics could explore the applications of using low-rank tensors for attention models over tensor representations, as they can be even more memory and computationally intensive. | 5,510.4 | 2018-05-01T00:00:00.000 | [
"Computer Science"
] |
Setting priorities for greening cities with monetary accounting values for amenity services of urban green
Life Satisfaction Analyses in Germany reveal a significant positive correlation between the amount of green space within 1 km of residence and well-being. The comparison of the effects of green space and income on well-being allows the derivation of a monetary demand function for green spaces close to the place of home. This demand function was used together with land-use and population data to estimate the monetary value of green space close to home for every 2 km × 2 km grid cell in Germany. The results can be used in environmental economic accounting as a proxy for the (visual) amenity services of green spaces close to residences and provide urban planners with additional information on the strength and spatial distribution of demand for green spaces in residential areas. The study shows that, especially in densely populated areas where more than 30 per cent of the German population lives, the (simulated) exchange value of green spaces (price per additional hectare derived from the demand function) multiplied by the number of
Introduction
Due to the Millennium Ecosystem Assessment (2005), the international TEEB study (TEEB 2012) and its national follow-ups (e.g. Naturkapital Deutschland - TEEB DE 2018) and the implementation of target 2, action 5 of the European Biodiversity Strategy to 2020 (European Commission 2011), ecosystem services and their economic valuation have increasingly received the attention of science and politics. With the 'UN System of Environmental Economic Accounting - Ecosystem Accounts', which was adopted at the beginning of 2021 (UN SEEA-EA 2021), a first international standard for the integration of ecosystem services into environmental economic accounting is now available.
In a pilot project on ecosystem accounting for the German Federal Ministry for the Environment, commissioned by the Federal Agency for Nature Conservation, Hirschfeld et al. (2020) prepared the first Germany-wide assessments for selected ecosystem services. The services should be quantified throughout Germany in a spatially specific way by physical indicators as well as monetarily.
This article presents the results of this study with regard to "visual amenity services". According to UN SEEA-EA 2021 (table 6.3), these are "the ecosystem contributions to local living conditions, in particular through the biophysical characteristics and qualities of ecosystems that provide sensory benefits, especially visual". UN SEEA-EA 2021 (para. 6.58) recognises that, within its proposed ecosystem service reference list, there are several additional ecosystem services that are relevant to the amenity of a location. Recreation-related and noise-attenuation services are mentioned as examples. Furthermore, green urban areas can reduce air pollution (air filtration service), have a positive influence on a healthy urban climate by buffering summer heat waves, which becomes of increasing importance due to climate change (local climate regulation service) and serve as places for social contacts and interactions (Kowarik et al. 2017). Ideally, "where possible, each of these services should be measured distinctly" (UN SEEA-EA 2021, ibid.). However, in practice, only combinations of amenity-related services can be measured with the methods applied today. This is also true for the present study (see discussion).
The physical and monetary values for the amenity services of urban ecosystems (or, more generally, ecosystems close to one's home) can help to correct or provide information for national accounts regarding the impacts on "goods" (here: neighbourhood amenity) that are relevant to people's welfare but are not traded, or are only imperfectly traded, on markets (TEEB 2009, chapter 3.3; Natural Capital Germany 2017). Furthermore, they can also serve as additional information for planning purposes.
The economic valuation technique used in our study attempts to determine the price one would pay to extend the amenity services of urban green space in one's neighbourhood. This hypothetical price is based on the idea that such services are traded on the market, that each seller possesses only a small part of the green space in a neighbourhood and that the seller can restrict the "use" of the amenity services to those people who pay the price. In such a case, people's willingness to pay - as a hypothetical price - for amenity services can be compared with the prices paid for other goods, for example, the price of building land. If the willingness to pay of all stakeholders for the amenity services provided by, say, one hectare of urban green space is higher than the price of one hectare of building land, then there is a chance of a social welfare gain if a larger share of urban land is used for the production of amenity services (cf. OECD 2018; for the grounding in economic theory, see Hicks 1939, Kaldor 1939, Scitovsky 1941; for discussion, see below).
The following chapters first explain why the Life Satisfaction Method was used here as the basis for a nationwide estimate of the amenity values of green spaces close to housing and present some relevant details of the Life Satisfaction Study by Krekel et al. (2016) used for this purpose.
Next, the land use and population data for our nationwide assessment are presented. The extrapolation required an adjustment of the Krekel et al. (2016) evaluation function, as their analysis was based on different geographical data. Another adjustment was made to correct for sorted preferences.
The results of our extrapolation are then presented cartographically and broken down by different population densities. The social demand for green spaces close to housing is compared with corresponding values for building land. From this, it can be deduced where the demand for urban green space is highest and where the value of an additional hectare of urban green exceeds the value of an area as building land.
The article concludes with an evaluation and discussion of the results and identifies future research needs.
Methodology of economic valuation
For a German nationwide assessment of the amenity services of ecosystems in the vicinity of the place of residence, it must first be decided which valuation method should be used to determine hypothetical prices for these services. The reliability of direct surveys of willingness to pay (contingent valuation studies) and the results of choice experiments, in which the best combination of the amount of an ecosystem service and its price has to be selected between several alternatives, is considerably questioned in the economic literature (cf. McFadden and Train 2017). Rather, indirect methods are preferred (UN SEEA-EA 2021, chapter 9.3). Such methods are, for example, the "Hedonic Pricing Analysis" in which preferences for amenity services are derived from real estate market data and the "Life Satisfaction Method" in which results of sociological studies on life satisfaction are evaluated together with spatial land-use data to derive preferences for green spaces in the residential environment Wüstemann 2014, Krekel et al. 2016).
The concept of environmental economic accounting according to SEEA EA presupposes that a service is associated with a transaction (UN SEEA-EA 2021, chapter 6.3.4). In the "Hedonic Price Analysis", recommended by the SEEA EA for assessing the amenity value of urban green spaces, this transaction is the payment of a premium on the price of a property due to a more favourable green space provision in the residential environment.
In our study, the "Life Satisfaction" or "Experienced Preference" method was used instead. This method measures the effect of green spaces on a life satisfaction scale and then compares this effect with the increase in income that leads to the same increase in life satisfaction (Krekel et al. 2016). This increase in income then represents a monetary value of the additional green space which is taken as an approximation of the hypothetical price for amenity services sought.
If this method is classified in the methods proposed by UN SEEA-EA (2021) for the monetary valuation of ecosystem services, the "Life Satisfaction Method" can be best defined as a 'Simulated Exchange Value' method. In this approach, a market for an ecosystem service is theoretically constructed and the price is simulated, in which the market generates an equilibrium between supply and demand, based on the measured marginal utility function of the users or beneficiaries.
The values or prices used in our study are based on the assumption that there is a competitive market for the supply of (publicly accessible -see below) green space. This means that the price for the right to use or benefit from each unit of green space is negotiated individually between its suppliers and each buyer (beneficiary) and that those who do not want to pay the price can be excluded from the use or benefits. The actual transaction underlying the valuation is, therefore, not the payment of a possibly slightly higher real estate price as is the case with Hedonic Pricing. Instead, it is the experience of urban green with the senses, by seeing, smelling and hearing. Often, this requires no separate effort; rather, it also arises as a by-product of everyday activities, such as shopping, walking to work, a short walk in the neighbourhood etc. Kolbe et al. (2019) compare the results of two German studies using Hedonic Price Analysis and the Life Satisfaction method, both of which used the proportion of public green space within a 1 km radius of the place of residence as an explanatory variable Wüstemann 2014, Krekel et al. 2016). They find that the Hedonic Price Analysis leads to values that are only the 38th to 124th part of the value that the Life Satisfaction Method yields. Using a market simulation model in which both methods are represented, they show that market imperfections that characterise the real estate market, such as incomplete information, high transaction costs, short-term limited supply and equity preferences, can explain why it is possible that Hedonic Price Analysis only represents a fraction of the actual effect of green space on life satisfaction. The use of this method would, therefore, lead to results that could not be justified from a methodological point of view. At the same time, Kolbe et al. (2019) show that the Life Satisfaction Analysis overestimates the value of publicly accessible green spaces if the inhabitants are distributed amongst the residential locations according to their individual "green preferences" ("sorted preferences"). Based on the same market simulation, they also propose quantitative adjustments to the results of the Life Satisfaction Analysis to correct for sorted preferences (see below and Fig. 1).
In our study, we used the Life Satisfaction Analysis by Krekel et al. (2016), which comes to very similar results as the study of Bertram and Rehdanz (2015), but uses more data and covers a larger area. It is based on approx. 42,000 records from the 'Socio-Economic Panel' (SOEP) of the German Institute for Economic Research from the years 2000 to 2012 (for further information, see DIW 2022). Each dataset includes a subjective assessment of life satisfaction on a Likert scale from 0 to 10, as well as possible explanatory variables such as age, gender, marital status, health, education, income etc. In addition to these explanatory variables, the proportion of publicly accessible green spaces within a 1 km radius of the place of residence is examined in the context of a multi-criteria regression analysis to determine how it affects life satisfaction, besides the other explanatory variables. Private gardens could not be taken into account. There was insufficient geographical data available for this and, moreover, it is not possible to allocate specific garden areas to the persons surveyed in the SOEP solely on the basis of the available geographical reference data. Additionally, qualitative aspects of the green spaces and green elements like roadside trees could not be considered. Despite these shortcomings (for a discussion see below), the share of green space in 1 km radius can be at least taken as a rough indicator of the bio-physical quantity of amenity services.
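As a rough illustration of how such a valuation can be derived, the sketch below fits a plain OLS life-satisfaction regression and converts the green-space coefficient into a willingness to pay via the income coefficient. The column names, the log-income functional form and the absence of individual fixed effects are simplifying assumptions of ours, not the actual specification of Krekel et al. (2016).

```python
import numpy as np
import statsmodels.api as sm

def fit_life_satisfaction(df, controls=("age", "health", "education")):
    """df: one row per survey observation with illustrative columns
    life_satisfaction (0-10 Likert), log_income, green_ha_1km, and controls."""
    X = sm.add_constant(df[["log_income", "green_ha_1km", *controls]])
    model = sm.OLS(df["life_satisfaction"], X).fit()
    # Willingness to pay per extra hectare: the income change that yields the
    # same life-satisfaction gain as one additional hectare of green space.
    beta_green = model.params["green_ha_1km"]
    beta_income = model.params["log_income"]
    wtp_per_ha = np.exp(df["log_income"]).mean() * beta_green / beta_income
    return model, wtp_per_ha
```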
The relation between the area of green space within a radius of 1 km in hectares and the price people would hypothetically pay for an additional hectare (marginal utility function), as estimated by Krekel et al. (2016), is shown in Fig. 1 (grey dashed line). Kolbe et al. (2019) derive lower and upper limits from their market simulation model to correct the marginal utility function of Krekel et al. (2016) for "sorted preferences". In addition to these upper and lower bounds (blue and ochre lines), a "medium" variant is also proposed (Fig. 1, black line). The valuation function we use here is this "medium" variant after adjusting for the spatial dataset used in our study. Krekel et al. (2016) use the spatial data of the European Urban Atlas for Germany for the year 2006 (Copernicus 2022) to calculate the proportion of green spaces within 1 km of the residence of each SOEP participant whose dataset they used. The green spaces taken into account are the 'Green Urban Areas' defined by the Urban Atlas.
Spatial data and adjustment of the valuation function
The Urban Atlas only covers the most urbanised parts of Germany. In contrast, the aim of our study was to assess the amenity services of all green spaces close to home, regardless of whether they are located in densely or sparsely populated areas. Furthermore, the definition of 'Green Urban Areas' by the Urban Atlas excludes all wooded areas that are not completely surrounded by settlement areas. From an amenity service perspective, however, all wooded areas in the vicinity of one's home have to be taken into account when measuring amenity services, regardless of whether they are completely surrounded by settlements or not. Additionally, any kind of agriculturally used grassland is excluded. However, Krekel et al. (2016) found a significant positive influence of meadows and pastures on life satisfaction, in contrast to arable land, which had no influence. Utilised grassland must, therefore, also be included in the indicator "green spaces within a 1 km radius" used to measure the bio-physical strength of amenity services.
Instead of the Urban Atlas and the Green Urban Areas defined there, we therefore base our study on the geodataset of the ATKIS Basis-DLM (BKG 2016) in the version of the IOER-Monitor (IOER-Monitor 2022). This dataset is updated more frequently and regularly than the Urban Atlas. It covers the whole area of Germany and uses, in the case of settlements, cadastral data (BKG 2016), which are often more precise than the remote sensing data (see Copernicus 2016) of the 2006 Urban Atlas. The following types of areas of the ATKIS Basis-DLM/IOER-Monitor are considered as publicly accessible green space: woody vegetation, woodland, meadow and pasture, park/publicly accessible greenspace, other sport, leisure and recreation area, and cemetery.
The data basis for the distribution of the population is the data from the last population census (StBA 2011), in which all 100 m × 100 m grids with more than three persons are recorded with their respective population numbers.
Before the values of the marginal utility function of Krekel et al. (2016) - corrected on the basis of Kolbe et al. (2019), see above - could be extrapolated to the whole population in Germany, the function first had to be adjusted to the changed spatial database and the additionally included green space types.
For this purpose, it was first adapted by linear transformation to the higher average green space supply per person resulting from the use of ATKIS Basis-DLM data and the inclusion of additional green space types, compared to the Urban Atlas and the Green Urban Areas defined there. In the second step, the function was then further calibrated so that a spatial extrapolation with Urban Atlas data and the original marginal utility function of Krekel et al. (2016), corrected according to Kolbe et al. (2019) on the one hand and the extrapolation for the same area with ATKIS Basis-DLM data, extended green space definition and adjusted marginal utility function on the other hand, yielded the same estimate for the total monetary value of the amenity service (measured as: marginal utility × green space in a 1 km radius × number of persons). This calibration was done by reducing the slope of the already linearly fitted function to such an extent that both calculations led to the desired equality of results. Fig. 2 shows the linearly adjusted and calibrated marginal utility function (black line) and, derived from it, the utility function (violet curve) and the simulated expenditure function (red curve). The marginal utility function, as explained above, shows the willingness to pay for an additional unit of green space (simulated price); the utility function expresses the monetary value of the total green space close to home for a household (average: 1.8 persons); and the simulated expenditure function shows the total expenditure the household would make for it.
The simulated expenditure is the accounting-compatible exchange value of the green space for one single household. It is calculated according to the formula "value of total green space within a radius of 1 km = simulated price per hectare × hectares of green space". The welfare value (utility) of the total green space is higher than the exchange value; it is calculated as the area under the marginal utility function. (For the differences between exchange and welfare value, see also UN SEEA-EA 2021, Section D, p. 174 and ibid. chapter 12.) An evaluation according to different population density classes in 2 km × 2 km grids shows that the calculation for the 'Green Urban Areas' of the 2006 Urban Atlas, which shows only minor deviations from the 2012 version regarding the definition of green space, compares rather well with the calculation based on the ATKIS Basis-DLM for 2012 (Fig. 3). Only in the lowest density class do the total amenity values of urban green deviate strongly from each other, which is mainly due to the fact that, in this class, the supply of 'Green Urban Areas' is lowest due to the exclusion of forests and grassland on the edge of settlement areas. As a result of the calibration, the sum across all grids yields the same value for both calculations.
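A minimal sketch of this distinction, assuming an illustrative linear marginal utility function with a saturation point (not the calibrated function of Fig. 2):

```python
import numpy as np

def marginal_utility(g, p0=100.0, g_sat=50.0):
    """Illustrative linear marginal utility (euro/ha/year) that falls from p0
    at zero green space to zero at the saturation quantity g_sat (ha)."""
    return max(p0 * (1.0 - g / g_sat), 0.0)

def exchange_value(g, **kw):
    # simulated price of the marginal hectare times the total quantity
    return marginal_utility(g, **kw) * g

def welfare_value(g, **kw):
    # area under the marginal utility curve up to the available quantity g
    grid = np.linspace(0.0, g, 1000)
    return np.trapz([marginal_utility(x, **kw) for x in grid], grid)

g = 30.0  # hectares of green space within 1 km of the household
print(exchange_value(g), welfare_value(g))  # welfare value exceeds exchange value
```

For quantities at or beyond the saturation point the exchange value drops to zero while the welfare value stays at its maximum, which is the pattern described below for areas on the outskirts or outside urban areas.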
Extrapolation to Germany
To ensure that the underlying life satisfaction analysis, based on SOEP data from 2000 to 2012 and spatial data from the 2006 Urban Atlas, as well as the available population data of the 2011 census and the spatial data used in our own analysis are not too far apart in time, the 2012 version of the ATKIS Basis-DLM was used for the Germany-wide extrapolation rather than the current version of ATKIS.
The extrapolation to all households in Germany using the calibrated marginal utility function was carried out within the framework of a detailed analysis restricted to all cities with more than 50,000 inhabitants and a German-wide analysis in a 2 km × 2 km grid. The results of the detailed analysis regarding the green space supply per household are published in the IÖR-Monitor (2021).
In the detailed analysis, the sum of publicly accessible green spaces within a 1 km radius was determined for each 100 m × 100 m census grid. The exchange value of this green space area was then calculated using the calibrated marginal utility function as "marginal utility per hectare × number of hectares × number of households in the 100 m × 100 m grid". The monetary values per census grid were then added up for each municipality. No values were assigned to the individual green spaces within a settlement. In addition, a larger-scale analysis was carried out in which, for simplicity, the total area of publicly accessible green space within each 2 km × 2 km grid in Germany was assigned to the entire population in this grid, multiplied by a factor of 0.785 (π/4), in order to take into account that the green space supply in the underlying empirical study of Krekel et al. (2016) was measured in a 1 km radius around each residential location and not in the larger area of 4 km² corresponding to a 2 km × 2 km grid.
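A compact sketch of this grid-level aggregation step (the function and argument names are ours, and the calibrated marginal utility function is passed in rather than reproduced):

```python
import math

def grid_exchange_value(green_ha_in_grid, n_households, marginal_utility):
    """Exchange value (euro/year) of green space in one 2 km x 2 km grid cell:
    scale the cell's green area by pi/4 (~0.785) to approximate the supply
    within a 1 km radius, then apply price per hectare x hectares x households."""
    g = green_ha_in_grid * math.pi / 4.0
    return marginal_utility(g) * g * n_households
```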
The 2 km × 2 km analysis cannot assess the respective supply situation in such detail for each place of residence as is the case with the detailed 100 m × 100 m census grid analysis. This could theoretically lead to a distortion of the results in connection with the valuation function used. However, as it turned out, the value calculation on the basis of the Germany-wide mean value of green provision arrives at a figure that is very close to the aggregation of the partial values of the 2 km × 2 km grids, although these grids differ greatly with regard to green provision. It can, therefore, be assumed that the values calculated on the basis of 2 km × 2 km grids are very close to the values that would have been calculated on the more precise basis of the detailed analysis.
Since the monetary results of the detailed analysis, which had already been published as preliminary in Grunewald et al. (2021), subsequently proved to be incorrect, valid monetary results are currently only available for the 2 km × 2 km analysis. Fig. 4b and Fig. 4c show the accounting compatible exchange value (simulated market value) and the welfare value, which is under normal market conditions always higher than the exchange value. In all areas where the green space supply exceeds the saturation point, i.e. the quantity at which households are no longer willing to pay a price for an additional green space unit, the welfare value per household reaches its highest value, while the exchange value (simulated price paid for an additional unit × total quantity sold) is zero. Such areas are normally situated on the outskirts or outside urban areas. Although the individual welfare value has the highest possible level here, the social welfare value for all inhabitants is rather small compared to densely populated urban districts due to the lower number of inhabitants. Fig. 4d presents an indicator for the social scarcity of green spaces. The scarcer the areas are, the more valuable each additional area is from the individual perspective and the higher is the individual simulated price. Multiplication by the number of inhabitants gives the individual scarcity a social significance. Multiplication follows the economic principle of aggregating individual benefits to society by adding them up (usually) without further weighting. According to this principle, the highest social benefit from one additional hectare of green space is achieved where the product of marginal utility for an average household multiplied with the number of households is highest (for discussion see below).
Results, comparison with prices for building land and relevance for municipal planning
For government programmes that aim to increase the greening of cities, for example, to make them more resilient to climate change, as well as for municipal green space planning, the monetary scarcity indicator presented here offers -in addition to other, already existing indicators like the distance to the next green space (Grunewald et al. 2017) -further guidance for deciding where more green spaces could make the greatest contribution to public well-being.
Since the scarcity indicator presented is a value that expresses an economic benefit, it can -unlike other parameters -also be directly compared with the economic costs that are incurred if settlement areas are kept free of further development, for example, for residential or commercial use, in order to establish and maintain them as green spaces. The most important cost factor, besides construction and maintenance costs of parks (see below), is the renunciation of an alternative use as residential or commercial land. One indicator of this is the price of a building site.
In the grid squares with the highest population density, where 30% of Germany's population lives, the value per ha of green space aggregated over the residential population is, on average, 783,838 euros per ha and year (cf. Fig. 5). This corresponds to a one-off payment of 2,613 euros per m² ("present value" calculated with a 3% discount rate for an infinite period of time). If a lower discount rate of, for example, 1.5% is used, which can be justified for long-term environmental considerations (TEEB 2010, chapter 6), this one-off payment would even double. According to Krekel et al. (2016), the reference year for the monetary values is 2016. This means that the value of the green spaces in these grids was far above the average expenditure for land ready for construction in large cities with more than 500,000 inhabitants, which was just under €700 per m² in 2016 (StBA 2021), including construction and maintenance costs for particularly expensive green spaces, for which a present value of €680 per m² results when calculated on the basis of the information from Krekel et al. (2016).
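A short worked check of these present-value figures, treating the annual value as a perpetuity (rounding of the published numbers is ours):

\[
\frac{783{,}838\ \text{€/(ha·yr)}}{10{,}000\ \text{m}^2/\text{ha}} \approx 78.4\ \text{€/(m}^2\text{·yr)},\qquad
\frac{78.4}{0.03} \approx 2{,}613\ \text{€/m}^2,\qquad
\frac{78.4}{0.015} \approx 5{,}226\ \text{€/m}^2 .
\]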
In the grid squares with the lowest population densities, where 40% of the German population lives, the value per ha of green space is, on average, only just under 12 euros per m². In each of the density classes in this group, it is below the average sales value of building plots in municipalities with fewer than 2,000 inhabitants (56 euros per m² in 2016), including the cost of particularly low-cost green spaces (78 euros per m²). However, the green spaces in question are likely to be mainly grassland and woodland rather than parks. The remaining 30% of the German population live in grid squares for which a mean green space value of approx. 486 euros per m² results. In the respective density classes, this is partly above and partly below the sum of the price of building land in cities with between 200,000 and 500,000 inhabitants (294 euros per m²) and the mean value of the above-mentioned cost maxima and minima of park facilities (379 euros per m²).
The figures show that the monetary value of the amenity services of green spaces often far exceeds the sum of building land prices and the construction and maintenance costs of urban parks. Taking into account their monetary impact on citizens' well-being, the preservation and creation of green spaces is, therefore, economically worthwhile in many cases and would lead to a net increase in welfare. The 2 km × 2 km grids, for which the monetary amenity value of urban green spaces were identified throughout Germany (Fig. 4 d), can serve as a guide for local decision-makers as to where there is a particularly high need for additional green space.
More precise proposals for the location of new green spaces would be possible if additional information were available on the current land use dynamics in the different neighbourhoods and more detailed knowledge of local land prices, including their differentiation between different neighbourhoods and between inner city and suburban areas.
Figure 5. Simulated exchange values of publicly accessible green spaces as a function of population density - comparison with property prices and costs for the creation and management of parks (source: own illustration).
Discussion and scope for further research
On the basis of the ecosystem service "amenity values of publicly accessible green spaces in the vicinity of residential areas", it was shown that a monetary valuation of ecosystem services, as currently discussed and developed for application in environmental accounting, can support decision-making processes on the ground with socially relevant information.
Monetary values for ecosystem services have the advantage over other decision support tools that they can be compared with each other and with other monetary values. Here, it is the alternative value of land when used as building land. They thus provide an additional basis for weighing different concerns, taking into account individual preferences for green in the city and for building land, which is not available in a comparable form when using other decision criteria and methods.
In the case of urban green spaces, the monetary valuation presented can be used to describe relatively precisely in which urban areas, depending on the population density and the current green supply, additional green spaces have an effect on the welfare of the inhabitants that is greater than the economic benefit the corresponding areas would provide as residential or commercial spaces. A relatively high discount rate of 3% was used for this comparison. At lower discount rates, the relative value of green space versus building land would shift further in favour of green space.
In addition to showing the practical benefits of our results, it is also important to point out that the presented nationwide assessment of the benefits of green spaces for Germany still has weaknesses and should be further developed.
We have used an economic welfare concept for our analysis. Under this concept, the willingness to pay of the various stakeholders is usually aggregated into a societal value without taking income differences into account. This could lead to poor sections of the population being given less consideration than rich ones in the provision of public goods.
Here, however, we use an average marginal utility function. Therefore, the monetary results shown in the figures are income neutral. This means that green areas are only valued according to population density and total green space provision, regardless of income differences.
However, in low-income neighbourhoods, the need for green space may be relatively higher due to a lack of private gardens or fewer resources for trips to recreation sites. This is not considered here. Additionally, the concept of a minimum provision for everyone is not included in our analysis; the latter would, however, alter the picture only marginally. An example would be a small residential population surrounded by industrial areas. Our demand indicator also does not capture the maintenance and creation of large representative green spaces that have value for the population, for example, as a local identification factor that goes beyond normal use as green space. For more discussion about economics and social values, see, for example, Massenberg (2019).
As mentioned in the Introduction, green spaces close to home provide a bundle of different ecosystem services, some of which also have a potentially positive influence on well-being. Krekel et al. (2016) used health as an explanatory variable for well-being, alongside urban green space and other variables. The effect of health-related services, such as air filtration and local climate regulation, could therefore already be subtracted from the measured wellbeing effect. This would mean that possibly only recreation-related services are still included in the values presented here. The exact interaction of amenity values with other benefits should be analysed in more detail in the future. In the meantime, the present values could be considered as a lower estimate of the combined value of visual amenity and recreation-related services, which may, to some extent, also include air filtration and local climate services.
Private gardens as well as urban trees were not considered in the study, although they also have positive welfare effects. If urban trees or private gardens were fully linearly spatially correlated with green spaces, the presented benefits of green spaces would be overestimated, as the values would include both the values of green spaces and the value of trees and private gardens. If there were no correlation at all between green spaces and urban trees/private gardens, private gardens and urban trees would have an additional benefit/value for well-being. Thanks to improved spatial data, these correlations can also be analysed more precisely in future studies.
As presented, the original Life Satisfaction Analysis from Krekel et al. (2016) referred to a different spatial data base and had to be adapted to the ATKIS Basis-DLM used here.
Although an attempt was made to minimise possible errors through calibration, it would be useful to conduct one or more further life satisfaction analyses directly based on ATKIS data in the future.
Such analyses should also be used to address the other shortcomings of our approach through the following additional investigations, amongst others:
• Further investigation of the significance of 'sorted preferences'. The mean-corrected preference function used lies between upper and lower limits that are (still) relatively far apart and should, therefore, be analysed empirically in more detail.
• Inclusion of other green elements, such as private gardens and city trees, for which nationwide information is now available.
• Differentiation of the appreciation of different types of green spaces and consideration of quality parameters.
• More detailed investigation of the relevance of other ecosystem services (e.g. microclimate, air filtration, recreation) in order to quantitatively assess overlaps.
• Comparison with stated-preference valuations in order to better combine advantages and avoid disadvantages of different economic methods.
• Broader and more detailed coverage of the costs for investment and maintenance of urban green spaces. | 7,533.4 | 2022-12-07T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Physical Layer Authenticated Image Encryption for IoT Network Based on Biometric Chaotic Signature for MPFrFT OFDM System
In this paper, a new physical layer authenticated encryption (PLAE) scheme based on the multi-parameter fractional Fourier transform-orthogonal frequency division multiplexing (MP-FrFT-OFDM) is suggested for secure image transmission over the IoT network. In addition, a new robust multi-cascaded chaotic modular fractional sine map (MCC-MF sine map) is designed and analyzed. Also, a new dynamic chaotic biometric signature (DCBS) generator, based on combining the biometric signature and the random chaotic sequence output of the proposed MCC-MF sine map, is designed. The final output of the proposed DCBS generator is used as a dynamic secret key for the MP-FrFT-OFDM system, in which the encryption process is applied in the frequency domain. The proposed DCBS generator produces a very large key space of 2^2200, and the proposed DCBS secret key generator can achieve both the confidentiality and authentication properties. Statistical analysis, differential analysis and a key sensitivity test are performed to estimate the security strength of the proposed DCBS-MP-FrFT-OFDM cryptosystem over the IoT network. The experimental results show that the proposed DCBS-MP-FrFT-OFDM cryptosystem is robust against common signal processing attacks and provides a high security level for image encryption applications.
Introduction
The Internet of Things (IoT) represents a modern internet phenomenon. Device recognition achieves intelligence through establishing or facilitating context-related decisions via the device transceiving information about itself, and the rise of cloud computing capabilities leads to an unlimited addressing capacity. The IoT's purpose is to allow device connectivity with anybody and anything at any time, anywhere, and via any path/network and service. The IoT can be used in different applications such as transportation, healthcare, power grids, entertainment and smart buildings [1]. Encrypting IoT data before transferring it over wireless networks is one of the simplest and most effective ways to prevent it from being intercepted and altered. Encryption converts data into an unreadable format that can only be decrypted by authorized persons with the right key. IoT services require security to be at the core of everything. Physical layer security (PLS), one of the methods for providing communication security, has attracted a lot of interest from both academia and industry since it can provide uncrackable, demonstrable, and quantifiable secrecy. PLS has a significant advantage over encryption since it does not depend on computational complexity; consequently, the degree of security attained is not reduced even if the eavesdropper has advanced computing capabilities. An encryption-based technique, in contrast, is founded on the notion that an observer has only a constrained computational ability to tackle challenging mathematical puzzles within a limited time. For PLS protocol design in the IoT, the unique characteristics of the IoT, such as low cost, wide-range coverage, enormous connectivity, and varied services, impose significant problems [2]. The development of PLS solutions for IoT applications remains difficult despite the success of PLS research. The IoT is distinguished by four special characteristics in particular: low cost, broad coverage, high connectivity, and a variety of services. How to design PLS strategies that match these four features well remains an open problem [2].
Recently, the authors of [3] have enhanced the dynamics of constellation fluctuations between neighboring frames by utilizing the randomness in the data. The constellation is then dynamically rotated using analog-based rather than digital-based encryption, which lowers quantization loss and increases robustness to channel phase problems. The authors in [4] offer an asymmetric multi-level physical layer security (PLS) scheme in which each transmitted symbol is subjected to two different types of distortion: multi-reception amplitude randomization and channel-based phase distortion. Additionally, the technique streamlines receiver design while providing a significant security advantage for legitimate links. The study in [5] makes several practical, reasonable, and access-controlled suggestions for safeguarding the physical layer of the Internet of Things (IoT), with a specific focus on the difficulties with encrypted data. To achieve this goal, a secure approach at the physical layer that provides cryptographic features for use in conjunction with a flexible RC6 encryption/decryption method is described.
A chaos-based PLS transmission scheme for the IoT is introduced in [6]. The suggested approach successfully addresses the concerns with the extremely high PAPR of the OFDM symbols, in addition to providing confidentiality of physical layer information transfer by encrypting the Discrete Fourier Transform (DFT) matrix. Additionally, it has no need for additional sideband information and, in theory, has minimal computing complexity. A physical layer security scheme for OFDM-based IoT systems with compressed sensing is proposed in [7]; the authors use a combination of compressed sensing (CS) and OFDM to increase security, suggesting the PLSSCS physical layer security strategy for OFDM-based IoT systems. By using channel measurement rather than previously collected data, it can alleviate the drawback of key extraction. In [8], an RSA algorithm and constellation encryption design based on chaotic sequences is introduced. The main goal of this technique is to construct a large number of highly secure encrypted sequences by efficiently combining chaotic sequences and RSA. The precise procedure is to communicate system parameters using the asymmetric RSA algorithm, create a secret sequence using the chaotic sequence's initial-value sensitivity, and then encrypt the original sequence using the secret sequence.
There are many physical layer encryption (PLE) schemes applied to IoT networks. The key idea of PLE is to exploit the randomness of channels to degrade the received signal quality at the eavesdropper. Three new PLE techniques complement IoT features well and have a lot of promise for future use [9]. Noise aggregation and self-encryption [10,11], fountain-coding-based secure transmission [12-18] and self-encryption via constellation rotation [19-28] are different examples of PLE used in the IoT. OFDM, which has high spectral efficiency and easy implementation, is used for self-encryption via the constellation rotation principle and has been incorporated into different protocols including IEEE 802.11 a/g/n and IEEE 802.16 WiMAX. For many of these reasons, research has looked at using PLE, such as constellation scrambling in the frequency domain [29-31], data scrambling in the time domain [32], rotation of the modulation symbols [33], and noise-enhanced constellation rotation [34,35], to increase the security level of OFDM.
The majority of IEEE 802.11 Wi-Fi amendments, including 802.11 a, 802.11 g, 802.11 ac, 802.11 n, 802.11 ax, and 802.11 p (the protocol used in vehicular networks) [36-38], have embraced OFDM. High-speed Wi-Fi has recently emerged as a viable option for IoT devices due to its compatibility with existing networks. As a result, IEEE 802.11 ah [39] has been proposed as a new Wi-Fi standard for IoT systems. The basic physical layer structure of the transceiver adheres to the conventional design to maintain backward compatibility with access points and clients that support the OFDM physical layer structure, despite the fact that this standard offers a number of new and enhanced features to improve power and spectral efficiency [39]. IoT applications can be categorized into two groups: low data rate applications like smart meters and high data rate applications like multimedia IoT. A number of IoT communication protocols, including NB-IoT and 802.11 ah, rely on OFDM as an effective multiple access approach to support the successful operation of high data rate IoT applications [40]. One key feature of IoT systems is their ability to support a variety of legacy and emerging communication protocols, including SigFox, cellular technology, 6LoWPAN (IPv6 Low-power Wireless Personal Area Networks (LoWPAN)), BLE (Bluetooth low energy), ZigBee, RFID (radio frequency identification), NFC (near-field communication), Z-Wave, NB-IoT (Narrow Band IoT), LoRaWAN (long-range wide area network), and Wi-SUN (wireless smart utility network) [41]. There are currently eight major categories of PLS schemes that concentrate on data confidentiality for OFDM systems: channel-based encryption [42], phase encryption [43], permutation [44,45], artificial noise (AN) and artificial fast fading (AFF) [46,47], preamble modulation [48] (Figure 1), power allocation [49], and Peak-to-Average Power Reduction (PAPR) encryption [50]; the frequency domain [51] and the time domain [52] are two domains in which these techniques can be applied.
Chaos-based physical layer encryption is used in OFDM-based IoT systems to achieve phase randomization and constellation rotation of the transmitted image in both the spatial and transform domains. An investigation of the Fractional Fourier Transform (FrFT) domains is introduced in [48]; the FrFT parameters are considered additional encryption keys, achieving reliable cybersecurity for robust image communication. In [49], multiple fractional-order chaotic systems are used in the proposed color image encryption technique, since using multiple fractional orders for image encryption considerably increases the key space and the key sensitivity. A generalization of the FrFT is the multi-parameter fractional Fourier transform (MP-FrFT). Due to the widespread use of the MP-FrFT in cryptosystems [50-55], more and more academics are becoming interested in it. The authors in [56] introduce MP-WFRFT and chaotic-scrambling-assisted directional modulation technology for improving physical layer security: to realize power-efficient and security-enhanced wireless transmissions, the directional modulation (DM) technology with multiple-parameter weighted-type fractional Fourier transform (MP-WFRFT) and chaotic scrambling (CS) was developed in [56].
In 2023, a machine-learning-based approach to physical layer authentication in wireless networks was introduced in [57]. The purpose of that work is to identify and thoroughly compare prior research on physical layer authentication. In addition to demonstrating the most recent PLA techniques, the study examined whether machine learning techniques improve wireless network security performance in physical layer authentication models, pointed out open problems and offered lines of inquiry for further study. Researchers and security model designers interested in employing machine learning (ML) and deep learning (DL) methodologies for PLA in wireless communication systems will find this work useful. In addition, an application of machine learning techniques to medical data processing based on distributed computing and the IoT is suggested in [58]. Also, in [59], CNN learning and offloading are used as a hybrid approach for latency and battery lifetime optimization in IoT devices. The main contributions of this research are as follows:
1. A new robust MCC-MF sine map is designed and analyzed.
2. A new dynamic chaotic biometric (digital fingerprint) signature (DCBS) generator, based on combining the biometric signature with the random chaotic sequence output of the proposed MCC-MF sine map, is also designed.
This paper is organized as follows. The introduction is presented in Section 1 and related preliminary basics in Section 2. Section 3 presents the proposed MCC-MF sine map, and Section 4 presents the proposed DCBS-MP-FrFT-OFDM cryptosystem. Section 5 presents the performance analysis and discusses the simulation results of the proposed DCBS-MP-FrFT-OFDM cryptosystem. The following section presents the comparison results analysis. Finally, the conclusions and future work are drawn.
Multiple Parameters FrFT
The MPFrFT was presented with its applications and its advantages in signal processing, image encryption and communications in [60]. The a-th-order continuous FrFT of x(t) is given by
X_a(u) = ∫ K_a(u, t) x(t) dt,
where K_a(u, t) is the transform kernel and α = aπ/2. The N × N DFT matrix F can be defined entrywise as
F_{m,n} = (1/√N) e^{−j2πmn/N}, m, n = 0, 1, ..., N − 1.
The DFT matrix F has only four different eigenvalues {1, −j, −1, j}. Consider S as a nearly tri-diagonal N × N matrix whose nonzero entries are S_{n,n} = 2 cos(2πn/N) for 0 ≤ n ≤ N − 1, S_{n,n+1} = S_{n+1,n} = 1 for 0 ≤ n ≤ N − 2, and S_{N−1,0} = S_{0,N−1} = 1. Since S commutes with F (S·F = F·S), the two matrices share a common set of eigenvectors, although their eigenvalues differ; the eigenvalue of F associated with the k-th Hermite-Gaussian-like eigenvector is λ_k = e^{−jπk/2} ∈ {1, −j, −1, j}. Based on these four eigenvalues, the a-th-order FrFT matrix of size N × N, denoted by F^a, is defined by [61]
F^a = Σ_k v_k λ_k^a v_k^T = V Λ^a V^T,
where (·)^T denotes the matrix transpose operation, v_k is the normalized k-th-order discrete Hermite-Gaussian-like eigenvector of S (with a modified index assignment when N is even), V collects these eigenvectors as columns, and Λ^a is a diagonal matrix whose entries are λ_k^a with fractional order a. The MPFrFT extends the FrFT by replacing the single order a with a vector of fraction orders a = (a_0, a_1, ..., a_{N−1}) of length 1 × N whose entries are independent; the MPFrFT, denoted by F^a, is then defined as
F^a = V Λ^a V^T,
where Λ^a is now the diagonal matrix with entries λ_k^{a_k}. In addition, this 1D MPFrFT can be extended to a 2D MPFrFT by using two vectors of fraction orders a and b with lengths 1 × N and 1 × M, whose entries are again independent. The 2D MPFrFT, denoted by F^(a,b), is performed by applying one 1D MPFrFT along the rows, followed by another 1D MPFrFT along the columns; the 2D MPFrFT of a 2D input P of size M × N is thus defined in a row-column scheme [61]. The properties of the MPFrFT are given in [62]. The main advantage of the 2D MPFrFT is that the two vectors of fraction orders a and b, with lengths 1 × N and 1 × M, can be used as an additional secret key for secure applications.
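As a hedged illustration of how such a multi-parameter transform matrix can be built numerically, the sketch below follows the eigendecomposition route described above. The ordering of the Hermite-Gaussian-like eigenvectors and the eigenvalue assignment for even N are simplified, so this is a sketch rather than the exact construction of [60,61].

```python
import numpy as np

def mpfrft_matrix(N, a):
    """Sketch of an N x N multi-parameter discrete FrFT matrix F^a, where a is a
    length-N vector of independent fraction orders (one per eigenvector)."""
    n = np.arange(N)
    # DFT-commuting matrix S with the nonzero entries described in the text
    S = np.diag(2.0 * np.cos(2.0 * np.pi * n / N))
    idx = np.arange(N - 1)
    S[idx, idx + 1] = 1.0
    S[idx + 1, idx] = 1.0
    S[0, N - 1] = S[N - 1, 0] = 1.0
    # Eigenvectors of S approximate the discrete Hermite-Gaussians; sort by
    # decreasing eigenvalue so that index k plays the role of the Hermite order.
    w, V = np.linalg.eigh(S)
    V = V[:, np.argsort(-w)]
    lam = np.exp(-1j * np.pi * n / 2.0)       # DFT eigenvalues e^{-j*pi*k/2}
    return V @ np.diag(lam ** np.asarray(a)) @ V.T

# Setting all entries of a to 1 should approximately reproduce the unitary DFT;
# a 2D MPFrFT would apply one such matrix along rows and another along columns.
F_a = mpfrft_matrix(16, np.random.uniform(0, 4, size=16))
```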
Biometric Authenticated Secret Key
A fingerprint can be used as a biometric property from which digital data are extracted using a variety of methods, such as a block-based approach that creates a feature vector [62]. From this feature vector, code words can be created that are sufficiently random and large to be employed as key material. The procedure includes the following steps: feature extraction, straight-line attribute calculation, straight-line attribute obfuscation, and production of a biometric binary string. From the fingerprint image, the minutiae points, core points, and delta points are extracted. If P is a collection of minutiae points, then p(x, y) stands for the coordinates of a minutiae point, and the collection is denoted by P = {p_1(x_1, y_1), p_2(x_2, y_2), ..., p_k(x_k, y_k)}, with minutiae points p_i(x_i, y_i), i = 1, 2, ..., k. The core point is represented as C_p(x_c, y_c), where x_c and y_c are the x- and y-coordinates of the detected core point C_p in the input fingerprint image. Finally, when a delta point is found in a fingerprint image, it is represented as D_p(x_d, y_d), where x_d is the detected delta point's x-coordinate and y_d is its y-coordinate. The image is divided into small blocks and the straight-line properties between the points in the set P are computed: the fingerprint image I is divided into a number of small blocks, each measuring m × m pixels, giving p × q blocks in total.
Using I_ij, the block in the i-th row and j-th column, as a reference block, the straight-line properties are determined by computing all straight lines from its minutiae points to the minutiae points of all other blocks. The length and angle of each straight line are computed, using the Euclidean distance for the length (l_i) and the x-axis for the angle (a_i). Let F_B represent the collection of straight-line lengths and angles for all blocks, F_B = {(l_1, a_1), (l_2, a_2), ..., (l_{z_b}, a_{z_b})}, where the size of F_B is z_b. Next, the core and delta points are extracted from image I: the block I_lm that contains the core point (C_P) is found, all straight lines that connect the core point (C_P) to the minutiae points of the neighbouring blocks are computed, and F_C denotes the resulting set of straight-line lengths and angles. Finally, the extracted minutiae attributes contain three fields per minutia: the x-coordinate ([1, 511]), the y-coordinate ([1, 511]) and the orientation θ ([0, 359]); the three parameters (x, y, θ) are used as a biometric minutia [63-65]. In [64], a high-performance fingerprint scanner and a recognition engine are both included in the FS83 serial Fingerprint Authentication Module (FS83-sFAM), which is used to generate 2072 bytes from three samples of different fingerprints of one user. The resulting bits are represented in hexadecimal format and are used in authenticated secret key generation. The biometric fingerprint image is shown in Figure 1.
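The precise combination rule of the DCBS generator is given later in the paper; purely as an illustration of blending a fingerprint-derived byte string (such as the 2072 bytes from the FS83-sFAM) with a chaotic keystream, one simple option is a length-matched XOR. The function name and the repeat/truncate policy below are ours, not the paper's; a suitable keystream can be drawn from the MCC-MF sine map sketched in the next section.

```python
def dcbs_key(fingerprint_bytes: bytes, chaotic_keystream: bytes) -> bytes:
    """Illustrative DCBS-style key: XOR of biometric bytes with a chaotic keystream."""
    n = len(chaotic_keystream)
    # repeat/truncate the biometric bytes to the keystream length
    fp = (fingerprint_bytes * (n // len(fingerprint_bytes) + 1))[:n]
    return bytes(a ^ b for a, b in zip(fp, chaotic_keystream))
```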
Proposed Multi-Cascaded Chaotic Modular Fractional Sine Map (MCC-MF Sine Map)
The cascade chaotic system (CCS) is a general 1D chaotic framework for creating new nonlinear chaotic systems from any two 1D chaotic maps used as seed maps; it was first introduced in [66]. Zhongyun et al. also suggested a dynamic parameter-control chaotic system (DPCCS) [67] based on the CCS concept. The DPCCS has a simple architecture that uses the control map's output to dynamically modify the seed map's parameters. CCS and DPCCS have straightforward hardware implementations, simple structures, and highly unpredictable behavior. In this section, a new MCC-MF sine map is introduced and analyzed. The development of discrete fractional calculus allows memory effects to be incorporated and captured effectively in nonlinear discrete-time systems, and chaotic systems of fractional order exhibit complex features. Assume that a sequence ρ(n) is given and that the isolated time scale ℵ_τ is represented, in terms of the real-valued constant τ, as {τ, τ + 1, τ + 2, ...}, such that ρ : ℵ_τ → R. The difference operator is denoted by ∆, where ∆ρ(n) = ρ(n + 1) − ρ(n). The fractional sum of order α (α > 0), the Caputo-like delta difference of order α, the delta fractional difference equation of order α, the equivalent discrete fractional integral, and its initial iteration are defined in [68-71], and the non-modular fractional sine chaotic map is given in [72]. The proposed MCC-MF sine map is designed on the cascade chaotic system concept; the fractional chaotic map follows [68-72], and the final mathematical model is x(j) = r_1 sin(π r_2 sin(π r_3 sin(π r_4 sin(π x(j − 1))))) (15), where r_1, r_2, r_3, and r_4 are the control parameters and x(0) is the initial condition of the proposed map. Using more than one parameter of the sine map gives a high Lyapunov exponent (LE) value, a wide chaotic range, and a large key space. The block diagram of the proposed MCC-MF sine map is shown in Figure 2. The proposed MCC-MF sine map consists of four fractional chaotic sine maps connected in cascaded form with different secret parameters, and a modular function is applied to the map output to improve the chaotic property through the continuity of the output. The effect of the fractional order on the chaotic map is shown in Figure 3, where Figure 3a-d describe the output series, the bifurcation diagram (BD), and the Lyapunov exponent. The NIST test suite, which consists of 16 statistical tests, is used to determine the randomness of the proposed MCC-MF sine map. These tests determine whether or not the generated sequence is random, and they rely primarily on the probability value (p-value). The p-value is compared against the significance level, which separates the rejection and non-rejection regions; in NIST the significance level is set at 0.01. If the p-value is less than or equal to 0.01, the sequence is not random and is rejected; if it is greater than 0.01, the sequence is random and accepted. A binary sequence of 10^6 bits from the proposed MCC-MF sine map is examined using SP800-22 [73], and the results are shown in Table 1.
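A minimal Python sketch of the cascaded structure of Eq. (15) is given below. It keeps only the four cascaded sine maps plus the modular wrap; the fractional-order (memory) terms of the full design in [68-72] are omitted, and the parameter values used are purely illustrative, not the keys of the proposed system.

import numpy as np

def mcc_mf_sine_map(x0, r, n_out, discard=1000):
    # Cascade of four sine maps as in Eq. (15) with a modular wrap on the output.
    # Illustrative sketch only: fractional-order memory terms are not modeled here.
    r1, r2, r3, r4 = r
    x = x0
    out = np.empty(n_out)
    for j in range(n_out + discard):
        x = r1 * np.sin(np.pi * r2 * np.sin(np.pi * r3 * np.sin(np.pi * r4 * np.sin(np.pi * x))))
        x = x % 1.0                      # modular function keeps the output in [0, 1)
        if j >= discard:
            out[j - discard] = x
    return out

# Example: a 2072-byte keystream from the map output (illustrative key values)
keystream = (mcc_mf_sine_map(0.3141592653589793, (3.7, 2.9, 1.8, 4.2), 2072) * 256).astype(np.uint8)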
Proposed Secure MP-FrFT-OFDM Cryptosystem
Due to their efficient use of network resources and bandwidth, their ability to accommodate a range of mobility scenarios, and their ability to deliver high data rates, OFDM systems have seen widespread success in many wireless communication applications. It is therefore anticipated that OFDM will remain a crucial enabling technology in present and future systems, including 5G [42]. OFDM was first presented in the mid-1960s [48] to deal with inter-channel interference (ICI) and inter-symbol interference (ISI) and to permit simultaneous data transmission over band-limited channels. Conceptually, OFDM divides a wide frequency-selective channel into a number of narrow, flat-fading sub-bands. Although the OFDM sub-bands are made orthogonal and independent of one another, a guard interval known as the cyclic prefix (CP) is needed to lessen the impact of ISI and ICI; instead of an empty guard space, the CP adds a cyclic extension of the symbol itself. In the suggested encryption scheme, authenticated biometric features are utilized, together with the proposed MCC-MF sine map chaotic secret key generation, to design a DCBS generator for the MPFrFT-OFDM image encryption.
Proposed DCBS Generator
The design of the proposed DCBS generator is based on the secure fractional number sequence generated from the proposed MCC-MF sine map and the biometric fingerprint minutiae generated from the FS83 s-FA Module [62]. We assume that the FS83 s-FA Module generates a sequence "T" ∈ [1, 256] with a length of 2072 bytes. The block diagram of the proposed DCBS generator is shown in Figure 4. As shown in Figure 4, the proposed DCBS generator consists of a 128-bit secret key, an initial-condition generation stage, and the proposed MCC-MF sine map. The secret key is used to generate the initial condition of the proposed MCC-MF sine map, which, together with the fractional secure parameters (v_1 to v_4), produces 2072 bytes and 512 × 512 bytes. The output of the MCC-MF sine map is used as input to the DCBS generator to produce two vectors a and b of size 1 × 256 and a block of 256 × 256 bytes for the encryption and authentication process. The generation proceeds in the following steps:
1. The 128-bit secret key (SK), represented by the 32 hexadecimal digits "C2250EA6637F5AFAAF06549CCD16220A", is used to combine the biometric signature with the fractional number sequence generated from the proposed MCC-MF sine map.
2. The secret key is divided into eight sections to generate the initial condition and the different control parameters of the proposed MCC-MF sine map. All secret parameters and the initial condition have 10^−15 decimal precision.
3. The first eight hexadecimal digits (k_s^1) and the last eight hexadecimal digits (k_s^8) are used to generate the fractional initial condition of the proposed MCC-MF sine map.
4. The first (k_s^1) and the second (k_s^2) eight hexadecimal digits are used to generate the first fractional secret control parameter (r_f1).
5. The next three fractional secret control parameters are generated in the same way from the remaining key sections.
6. The proposed MCC-MF sine map given in Equation (15) is iterated t = 512 × 512 × 8 times using the four generated secret parameters and the fractional secure parameters (v_1 to v_4).
7. The first 1000 outputs are discarded to ensure the chaotic property of the generated sequence, and the last 2072 bytes of the generated chaotic sequence are selected.
8. The chaotic sequence output (2072 bytes) is concatenated with the 2072 bytes of the biometric signature to generate the dynamic chaotic biometric signature (DCBS); a sketch of this derivation follows the list.
9. Finally, 256 × 256 bytes are randomly selected from the iterated chaotic sequence of 512 × 512 bytes for the diffusion process (XORing with the original image), and two different 256-element vectors (a and b) are selected, which are used as the secret multi-parameters for the confusion process in the MPFrFT-OFDM transform.
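A minimal Python sketch of the DCBS derivation is shown below; it reuses the mcc_mf_sine_map sketch given earlier. The exact formulas mapping key sections to x(0) and r_1..r_4 are not reproduced in the text, so the derivations used here are illustrative placeholders only, and the iteration count is reduced for speed.

import numpy as np

HEX_KEY = "C2250EA6637F5AFAAF06549CCD16220A"      # the 128-bit secret key quoted in the text

def key_sections(hex_key, n_sections=8):
    # Split the 32 hexadecimal digits into eight sections and map each to a fraction in [0, 1).
    step = len(hex_key) // n_sections
    return [int(hex_key[i * step:(i + 1) * step], 16) / float(16 ** step) for i in range(n_sections)]

def dcbs(hex_key, biometric_bytes, n_bytes=2072):
    # biometric_bytes: the 2072-byte fingerprint signature from the FS83 s-FA Module (assumed given).
    ks = key_sections(hex_key)
    x0 = (ks[0] + ks[7]) % 1.0                      # illustrative stand-in for step 3
    r = tuple(0.1 + 19.9 * k for k in ks[1:5])      # illustrative parameters in (0.1, 20), steps 4-5
    stream = mcc_mf_sine_map(x0, r, 512 * 512)      # reduced iteration count for the sketch
    chaotic_bytes = (stream[-n_bytes:] * 256).astype(np.uint8).tobytes()
    return chaotic_bytes + bytes(biometric_bytes)   # concatenation forms the DCBS (step 8)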
Secure MP-FrFT-OFDM Based on MCC-MF Sine Map and DCBS Generator
The OFDM concept is used in physical-layer communication and is based on the Fast Fourier Transform (FFT); the Fractional Fourier Transform (FrFT) can also be used in the OFDM system. The FrFT uses only a single fractional order, i.e., one phase-shift parameter, which converts the FFT into the FrFT. In the multi-parameter case, the MPFrFT uses a vector of secret fractional values (0 to 1) with a length equal to the transform length of the OFDM system. In addition, the encryption process is applied in the frequency domain of the OFDM system, which is the standard modulation used in the physical layer (IEEE 802.11 a/g/n). In this section, we suggest using the MPFrFT and the FrFT instead of the standard FFT in OFDM. FFT-OFDM is only a multi-carrier transmission scheme, and using the FrFT instead of the FFT introduces only one fractional order to change the phase of the transform output, which cannot satisfy any encryption properties. In this paper, the MPFrFT is therefore used with OFDM: the MPFrFT has as many parameters (orders) as the transform length, so with a length of 256, the 256 parameters can be used as a secret key (secure multi-phase-changing parameters). The encryption can thus be applied in the frequency domain without any additional equipment.
The framework of the proposed secure MP-FrFT-OFDM based on the MCC-MF sine map and DCBS generator is given in Figure 5. The first step of the proposed MPFrFT-OFDM image encryption is to convert the input image into a binary format of d = 256 × 256 × 8 bits, with each bit in {0, 1}. The second step applies convolutional coding with code rate R = 1/2 to the image data bits as an error-correcting code, and the coded data sequence is mapped onto QPSK-modulated symbols. Based on the proposed DCBS generator, the secret multi-parameters are generated for the MPFrFT-OFDM image encryption process; finally, the cyclic prefixes are added to the output of the MPFrFT-OFDM encrypted data. At the receiver, the inverse processes are applied. The three stages of the proposed cryptosystem are discussed below, and the overall scheme is shown in Figure 5.
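The QPSK mapping and cyclic-prefix stages of this pipeline can be sketched as follows (assuming numpy). The example uses the standard IFFT for the OFDM modulation; in the proposed scheme that step would be replaced by the inverse MPFrFT keyed by the secret vectors a and b, and the convolutional coding stage is omitted here.

import numpy as np

def qpsk_map(bits):
    # Map bit pairs to unit-energy QPSK symbols (Gray mapping assumed for illustration).
    b = np.asarray(bits).reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def add_cyclic_prefix(symbol, cp_len=32):
    # Prepend the last cp_len samples of the OFDM symbol as the cyclic prefix (CP).
    return np.concatenate([symbol[-cp_len:], symbol])

# One 256-subcarrier OFDM symbol (2 bits per QPSK symbol)
bits = np.random.randint(0, 2, 2 * 256)
tx_symbol = add_cyclic_prefix(np.fft.ifft(qpsk_map(bits)), cp_len=32)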
Authenticated Encryption Scheme
The ciphering technique's steps can be summed up as follows:
1. Read the input image.
2. Convert the input image into binary format.
3. The first encryption step is the diffusion process: the binary image data of 256 × 256 × 8 bits are XORed with the selected random iterated chaotic sequence of 256 × 256 × 8 bits.
4. Apply convolutional coding to the diffused 256 × 256 × 8 bits.
5. Apply QPSK mapping.
6. The second encryption step is the confusion process: apply the inverse MPFrFT-based OFDM modulation using the two secret fractional parameter vectors a = (a_1, a_2, ..., a_256) and b = (b_1, b_2, ..., b_256), each of 256 bytes.
7. Add the cyclic prefix (CP) to the output of the secure MP-FrFT.
8. Send the encrypted image across an IoT channel to the recipient side.
Authenticated Decryption
The deciphering technique's steps can be summed up as follows:
1. Receive the authenticated encrypted image data.
2. Remove the cyclic prefix (CP) from the received secure MP-FrFT output.
3. The first decryption step is the de-confusion: apply the inverse MP-FrFT-based OFDM on the encrypted image using the inverted secret fractional parameter vectors −a and −b.
4. Apply QPSK de-mapping.
5. Apply convolutional decoding to recover the diffused 256 × 256 × 8 bits.
6. Convert the authenticated encrypted image into binary format.
7. The second decryption step is the de-diffusion: the authenticated encrypted binary image data of 256 × 256 × 8 bits are XORed with the selected random iterated chaotic sequence of 256 × 256 × 8 bits (see the sketch following these steps).
8. Apply the required analysis.
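Because XOR is its own inverse, the diffusion and de-diffusion steps are the same operation with the same keystream. A minimal round-trip check in Python (with stand-in arrays for the image and the chaotic keystream) is:

import numpy as np

def diffuse(image_u8, keystream_u8):
    # XOR diffusion; calling it again with the same keystream performs the de-diffusion.
    return np.bitwise_xor(image_u8, keystream_u8)

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in for the plain image
ks = np.random.randint(0, 256, (256, 256), dtype=np.uint8)    # stand-in for the 256 x 256-byte chaotic keystream
cipher = diffuse(img, ks)
assert np.array_equal(diffuse(cipher, ks), img)               # de-diffusion restores the image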
The effectiveness and security of the proposed system are examined in terms of noise impact, information entropy, visual inspection, histograms, attacks, differential analysis, and encryption quality metrics. According to all numerical results, the proposed image encryption approach maintains good security quality.
Performance Analysis and Results
Several statistical and security analysis techniques are used, including key space analysis, UACI, NPCR, neighboring-pixel correlation analysis, and histogram analysis. A CT scan of the brain (a medical image) and other standard gray-scale test images, such as Cameraman, Peppers, and Lena, are chosen for the system simulation; each test image is 256 × 256 pixels. The OFDM parameters are as follows: the total number of OFDM sub-carriers is N_sc = 256, the FFT length is 256, and the CP length is 32. The proposed system performance is tested under the AWGN channel at zero mean µ = 0 and at different noise variances σ² = 0.01, 0.05, 0.10, 0.15, 0.20, and also under signal-processing attacks such as Salt and Pepper noise and Speckle noise. The simulation parameters of the suggested authenticated secure image transmission system are shown in Tables 2 and 3.
Visual Quality Metrics
Key Performance Indicators (KPIs) are used to study the encryption robustness of the proposed system. Visual quality is measured in terms of BER and PSNR, i.e., E_b/N_0 vs. BER and E_b/N_0 vs. PSNR curves for the received image, together with statistical measures that quantify the degree of encryption; E_b/N_0 values between 0 and 18 dB are used to calculate the PSNR of the received image. The Bit Error Rate (BER) is a signal-quality metric that evaluates the performance of the entire system, including the transmitter, the receiver, and the medium connecting them; it is defined as the ratio of the number of bits received in error (due to interference, noise, or other impairments) to the total number of bits received. Following [24], BER = (number of bit errors) / (total number of transferred bits) (25). The PSNR, measured in decibels (dB), is regarded as a visual quality metric of the reconstructed (decrypted) image compared to the original transmitted image [25]; the higher the PSNR value, the better the quality of the produced image. It is computed as PSNR = 10 log_10( f_max² / MSE ), with MSE = (1/(M × N)) Σ_{i,j} [I(i, j) − I′(i, j)]², where f_max denotes the highest possible pixel value, I(i, j) the original image pixels, I′(i, j) the received image pixels, and M × N the image size. Various images with a resolution of 256 × 256 pixels are used to test the proposed authenticated secure image transfer technique. The AWGN channel is a well-known model for various random processes seen in nature; it has uniform power across the whole frequency band. Starting with a CT brain medical image, the proposed system behavior is examined under the AWGN channel at zero mean µ = 0 and over a range of noise variances, σ² = 0.01, 0.05, 0.10, 0.15, 0.20, as shown in Table 4.
A Salt and Pepper noise attack occurs when a certain fraction of the pixels in the image is affected by impulse-type noise represented by black or white dots (hence the name), which can significantly deteriorate image quality [3]; it can be used to model defects in the transmission of the image. The proposed authenticated secure image transmission scheme is examined under a Salt and Pepper noise attack with noise density d = 0.02. The BER and PSNR performances of the proposed FFT-, FrFT-, and MPFrFT-coded OFDM schemes are tabulated at different E_b/N_0 values (0 to 10 dB) in Tables 5 and 6, and the results in Table 5 are plotted in Figures 6 and 7, respectively. In Figure 6, at E_b/N_0 = 8 dB, the FFT-OFDM BER is 8.60 × 10^−4, the FrFT-OFDM BER is 8.21 × 10^−4, and the MPFrFT-OFDM BER is 8.59 × 10^−4. In Figure 7, at E_b/N_0 = 8 dB, the FFT-OFDM PSNR is 30.65 dB, the FrFT-OFDM PSNR is 30.8548 dB, and the MP-FrFT-OFDM PSNR is 30.6579 dB. The proposed FrFT-OFDM system gains about 0.1969 dB in PSNR over the proposed MPFrFT-OFDM system, but the MPFrFT-OFDM system achieves a much larger key space than the FrFT-OFDM system.
The BER and PSNR results under Salt and Pepper noise are also plotted to highlight the visual-quality performance of the proposed systems under this attack. In medical ultrasound imaging, Speckle is a granular interference that inherently exists in and degrades the quality of medical images; it results from the coherence of backscattered signals from various distributed targets [4]. Table 8 presents the BER and PSNR performance of the proposed FFT-, FrFT-, and MPFrFT-coded OFDM systems at different E_b/N_0 values (0 dB to 16 dB) under a Speckle noise attack. The Speckle noise is applied as multiplicative noise to the brain medical test image, using uniformly distributed random noise with zero mean µ = 0 and variance δ² = 0.02. The results in Table 8 are plotted in Figures 8 and 9 to clarify the effect of the Speckle noise attack on the proposed authenticated secure medical image transmission schemes. At E_b/N_0 = 8 dB, the BER values of the FFT-, FrFT-, and MP-FrFT-coded OFDM systems are 3.70 × 10^−8, 0, and 3.11 × 10^−4, respectively, and the corresponding PSNR values are 75.1229 dB, Inf dB, and 54.1514 dB. At E_b/N_0 ≥ 8.50 dB, the proposed systems provide the highest BER and PSNR performance (a BER of 0 and infinite PSNR). In addition, Table 9 shows the BER and PSNR performance of the FFT-, FrFT-, and MPFrFT-coded OFDM systems under the Speckle noise attack at δ² = 0.02 for E_b/N_0 = 2, 8, and 8.50 dB.
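The two visual-quality metrics used above can be computed directly from the transmitted and received data; the following short Python helpers implement Eq. (25) and the standard PSNR definition (an infinite PSNR is reported when the received image is identical to the original).

import numpy as np

def ber(tx_bits, rx_bits):
    # Eq. (25): number of bit errors divided by the total number of transferred bits.
    tx_bits = np.asarray(tx_bits); rx_bits = np.asarray(rx_bits)
    return np.count_nonzero(tx_bits != rx_bits) / tx_bits.size

def psnr(original, received, f_max=255.0):
    # PSNR in dB from the mean squared error between the original and received images.
    mse = np.mean((np.asarray(original, float) - np.asarray(received, float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(f_max ** 2 / mse)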
Encryption Quality Metrics
Encryption quality metrics for the proposed scheme are measured using differential attack analysis, correlation analysis, histogram analysis, entropy analysis, and key space analysis.
Differential Attack Analysis
The number of pixels change rate (NPCR) and the unified average changing intensity (UACI) are two frequently used tests for evaluating the sensitivity of the encrypted image. To resist a differential attack, each small change to the plain image should result in a significant disruption of the cipher image. Consider C_1 and C_2, the cipher images of two plain images p_1 and p_2 that differ by only one pixel, and let C_1(i, j) and C_2(i, j) be the gray-scale pixel values of C_1 and C_2, respectively. The NPCR and UACI are defined as [13] NPCR = ( Σ_{i,j} D(i, j) / (M × N) ) × 100% and UACI = ( 1 / (M × N) ) Σ_{i,j} |C_1(i, j) − C_2(i, j)| / 255 × 100%, where D(i, j) is a bipolar array of the same size as the cipher image, defined as D(i, j) = 0 if C_1(i, j) = C_2(i, j) and D(i, j) = 1 otherwise. In the plain images, p_i is the value of an initial pixel; without modifying any other values, it is changed to p_i = (p_i + 100) mod 256 to obtain a second image, and the two images are encrypted in order to calculate the NPCR and UACI values of the two cipher images. Tables 10 and 11 display the NPCR and UACI results and the comparison between the proposed MPFrFT- and FrFT-coded OFDM using the Cameraman, Peppers, and Boat standard gray-scale test images.
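The two definitions above translate directly into a short Python helper (assuming numpy and 8-bit gray-scale cipher images):

import numpy as np

def npcr_uaci(c1, c2):
    # c1, c2: cipher images of two plain images that differ in a single pixel.
    c1 = np.asarray(c1, float); c2 = np.asarray(c2, float)
    npcr = np.mean(c1 != c2) * 100.0                 # percentage of differing pixel positions
    uaci = np.mean(np.abs(c1 - c2) / 255.0) * 100.0  # average normalized intensity change
    return npcr, uaci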
Correlation Analysis
Correlation is a statistical relationship that measures the dependence between two variables. Here, the correlation is measured between pairs of vertically adjacent pixels in the plain image and in the cipher image, respectively [4]. Correlation coefficient values close to 1 between the original and decrypted images reflect highly dependent variables (i.e., good decryption quality), whereas correlation coefficients close to 0 between the original and cipher images indicate highly independent variables (i.e., no common features between the original image and the encrypted one, hence a high-quality encryption algorithm). Small correlation coefficients between the original and encrypted images therefore indicate a successful encryption process. The correlations between the original and encrypted images for the proposed MPFrFT and FrFT systems are tabulated in Table 12; these correlation coefficient values confirm the immunity of the proposed schemes. The correlation coefficient r_xy is defined as [3] r_xy = cov(x, y) / ( √D(x) √D(y) ), with cov(x, y) = (1/N) Σ_{i=1}^{N} (x_i − E(x))(y_i − E(y)), D(x) = (1/N) Σ_{i=1}^{N} (x_i − E(x))², and E(x) = (1/N) Σ_{i=1}^{N} x_i, where x and y are the gray-scale pixel values of the source and enciphered images.
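The adjacent-pixel correlation can be evaluated with a few lines of Python; the helper below computes r_xy for vertically (or horizontally) adjacent pixel pairs of a gray-scale image.

import numpy as np

def adjacent_correlation(img, vertical=True):
    # Correlation coefficient r_xy between vertically (or horizontally) adjacent pixel pairs.
    img = np.asarray(img, float)
    if vertical:
        x, y = img[:-1, :].ravel(), img[1:, :].ravel()
    else:
        x, y = img[:, :-1].ravel(), img[:, 1:].ravel()
    return np.corrcoef(x, y)[0, 1]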
Histogram Analysis
A histogram is a statistical graphical distribution of each discrete intensity level (also known as a "gray level") in a digital image into user-specified ranges. It displays the gray scale, the density of the gray-level distribution, the average luminance of an image, the picture contrast, and so on. The histogram's horizontal axis displays the possible intensity values, while the vertical axis displays the number of pixels at each of these intensities [5]. According to the reported histogram analysis, the proposed MPFrFT- and FrFT-coded OFDM ciphering approaches reflect identical histograms of the relevant source images; as a result, an encrypted image's statistical metrics are the same as those of the matching source image. Table 13 shows the histogram analysis for the proposed MPFrFT-coded OFDM using the Cameraman, Peppers, and Boat standard gray-scale test images.
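For reference, the gray-level histogram used in this analysis can be computed with a one-line numpy helper; comparing the plain-image and cipher-image histograms reproduces the analysis summarized in Table 13.

import numpy as np

def gray_histogram(img_u8):
    # 256-bin histogram of an 8-bit gray-scale image (pixel counts per gray level).
    return np.bincount(np.asarray(img_u8, dtype=np.uint8).ravel(), minlength=256)

# Usage: h_plain, h_cipher = gray_histogram(plain_img), gray_histogram(cipher_img)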
Table 13. Histogram analysis for the proposed MPFrFT-coded OFDM using Cameraman, Peppers and Boat standard gray-scale test images.
Key Space Analysis
The key space gives the total number of distinct keys that can be used in the encryption process. The secret keys of the proposed encryption consist of eight initial values (x_0^1) valid in the range [0, 1] and four control parameters r_1, r_2, ..., r_4 valid in the range 0.1 to 20, with the length of each initial value or control parameter set to 16 decimals. The total complexity (total key space) of these parameters is 10^15 × 10^15 × 10^15 × 10^15 = 10^(4×15) = 10^60. The key space of an image of size 256 × 256 is 256 × 256 × 2^8 = 4 × 10^5. In addition, the multi-parameter vectors a and b have key spaces of 10^(15×256) each, so the total multi-parameter key space of the MP-FrFT-OFDM is 10^7680. Finally, the total key space of the proposed cryptosystem is 10^60 × 4 × 10^5 × 10^7680 = 4 × 10^7745, which is far greater than 2^100. These findings show that the key space of the proposed approach is extremely large, preventing all types of brute-force attack. The findings and analyses of the key space analysis are presented in Figure 10.
Entropy Analysis
The unpredictability of the received image is quantified using entropy, which is a measure of uncertainty in the ciphered image; strong randomness and strong confidentiality are indicated by high entropy of the encoded image [13]. The information entropy is defined as H(m) = − Σ_i p(m_i) log_2 p(m_i) (32), where m is the information source, the symbol m_i is represented by N total bits and has probability p(m_i), and the optimal information entropy value for an 8-bit image is close to 8. The entropy obtained with the proposed algorithm is 7.9999.
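Eq. (32) can be evaluated from the gray-level histogram of the cipher image, as in the following Python helper:

import numpy as np

def entropy_bits(img_u8):
    # Shannon entropy of Eq. (32); the ideal value for an 8-bit gray-scale image is 8.
    counts = np.bincount(np.asarray(img_u8, dtype=np.uint8).ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))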
Key Sensitivity Analysis
A strong encryption system should be highly sensitive to even the smallest alteration of the secret keys [13]. To test the key sensitivity, consider the initial value x_0^1 and the control parameters (r_1, r_2, ..., r_4) used to encrypt the plain images. After encryption, the image is decoded with a new key obtained by adding 10^−16 to any initial condition or control parameter. As shown in Figure 11, the key sensitivity test demonstrates how sensitive the proposed encryption system is to the security key: even the smallest modification of the secret key during the decoding procedure yields an image that cannot be recognized, i.e., the decryption fails.
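The sensitivity of the underlying keystream to a 10^−16 perturbation can be illustrated with the mcc_mf_sine_map sketch given earlier; the parameter values below are illustrative, and a large fraction of differing keystream bytes indicates the expected divergence.

import numpy as np

# Perturb the initial condition by 1e-16 and compare the two keystreams.
k1 = mcc_mf_sine_map(0.3141592653589793, (3.7, 2.9, 1.8, 4.2), 65536)
k2 = mcc_mf_sine_map(0.3141592653589793 + 1e-16, (3.7, 2.9, 1.8, 4.2), 65536)
b1, b2 = (k1 * 256).astype(np.uint8), (k2 * 256).astype(np.uint8)
print("fraction of differing keystream bytes:", np.mean(b1 != b2))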
Comparative Analysis
In this section, the performance of the proposed cryptosystem is compared with other methods described in the literature for the Lena image of size 256 × 256, as shown in Table 14. The comparison between the proposed cryptosystem and the other recent methods is based on different criteria such as key space, entropy, correlation, NPCR, and UACI. As shown in Table 14, the capacity of the proposed DCBS-MP-FrFT-OFDM cryptosystem to withstand various attacks is evaluated in order to determine the strength of the encryption system. The proposed scheme was subjected to a security evaluation covering the histogram, entropy, correlation coefficient, NPCR, UACI, and NIST randomness tests.
Conclusions
A new physical layer authenticated encryption (PLAE) technique based on the multi-parameter fractional Fourier transform-orthogonal frequency division multiplexing (MP-FrFT-OFDM) is proposed in this paper for secure image transmission over public IoT networks. The paper designs and studies a new, robust multi-cascaded chaotic modular fractional sine map (MCC-MF sine map). A novel dynamic chaotic biometric (digital fingerprint) signature (DCBS) generator is also devised by combining the biometric signature with the random chaotic sequence output of the suggested MCC-MF sine map. The output of the DCBS generator is used as a dynamic secret key for the multi-parameter fractional Fourier transform in the OFDM system, so that the encryption is performed in the frequency domain; the DCBS secret key generator thereby satisfies both the secrecy and the authentication features. The security strengths of the proposed DCBS-MP-FrFT-OFDM cryptosystem over an IoT network are tested using statistical analysis, differential analysis, and key sensitivity analysis, and its ability to withstand various attacks is evaluated to gauge how strong the encryption system is. The evaluation covered the histogram, entropy, correlation coefficient, NPCR, UACI, and NIST randomness tests.
This study adds to the literature by further examining the use of the MPFrFT as a two-dimensional transform with multiple FrFT parameters, which increases the secret key space through a multi-phase-shifting strategy in OFDM. The proposed DCBS-MP-FrFT-OFDM cryptosystem does not need any additional equipment, except that the OFDM block is replaced by MP-FrFT-OFDM together with an external two-dimensional multi-parameter DCBS generator; the DCBS generator produces all the secret keys of the proposed cryptosystem. A limitation of the proposed MPFrFT-OFDM scheme is that the two secret vectors are not yet optimized, and optimizing them could improve the BER performance. On the other hand, regarding the security analysis, the MPFrFT-OFDM has a very large key space, as discussed in the key space analysis, compared with other systems.
In the future, we will propose a brand-new deep CNN that can produce a digital signature in order to satisfy the identity property. Additionally, a deep CNN signcryption system can be created to combine encryption and the digital signature. Future studies could also concentrate on watermarking, data hiding in encrypted images, and stream video encryption and decoding, and a new deep convolutional neural network could be used for a joint compression-encryption system. Also, in future work, different optimization schemes can be used to optimize the selection of the two vectors a and b, each consisting of 1 × 256 fractional numbers in the range (0, 1), to improve the BER performance of the proposed MPFrFT-OFDM.
The proposed MCC-MF sine map consists of four fractional chaotic sine maps connected in concatenated form with different secret parameters. The modular function is used to improve the chaotic property based on the continuity of the map output. Its behavior is described by the output series, the bifurcation diagram (BD), and the Lyapunov exponent (LE).
Figure 3. (a) BD of the conventional non-modular FSCM. (b) LE of the conventional non-modular FSCM. (c) BD of the proposed MCC-MF sine map. (d) LE of the proposed MCC-MF sine map.
The generator output has a length of 2072 bytes. The block diagram of the proposed DCBS generator is shown in Figure 4. As shown in Figure 4, the proposed DCBS generator consists of a 128-bit secret key, an initial condition generation stage, and the proposed MCC-MF sine map. The secret key is used for initial condition generation for the proposed MCC-MF sine map with the fractional secure parameters in order to generate 2072 bytes and 512 × 512 bytes. The output of the MCC-MF sine map is used as an input for the DCBS generator to produce two vectors, of sizes 1 × 256 and 256 × 256 bytes, for the encryption and authentication processes.
Figure 4. Proposed DCBS generator block diagram based on the MCC-MF sine map. (1) The secret key (SK) of 128 bits, represented by the 32 hexadecimal digits "C2250EA6637F5AFAAF06549CCD16220A", is used to combine the biometric signature with the fractional number sequence generated from the proposed MCC-MF sine map. (2) The secret key is divided into eight sections to generate the initial conditions and the different control parameters of the proposed MCC-MF sine map; all secret parameters and initial conditions have 10-decimal precision. (3) The first eight hexadecimal digits and the last eight hexadecimal digits are used to generate the fractional initial condition of the proposed MCC-MF sine map.
Figure 5. The proposed secure MP-FrFT-OFDM based on the MCC-MF sine map and DCBS generator.
2. Convert the input image into binary format.
3. The first encryption step starts with the diffusion process, XORing the binary image data of 256 × 256 × 8 bits with the selected random iterated chaotic sequence of 256 × 256 × 8 bits.
4. Apply convolutional coding to the diffused 256 × 256 × 8 bits.
5. Apply QPSK mapping.
6. The second encryption step is the confusion process, applying the inverse MPFrFT-based OFDM modulation based on the two secret fractional parameter vectors a = (a_1, a_2, ..., a_256) and b = (b_1, b_2, ..., b_256), each of 256 bytes.
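As a minimal sketch of the diffusion step (step 3 above), assuming the chaotic sequence from the MCC-MF sine map is already available as floating-point values in [0, 1) with one value per pixel, the snippet below quantises it to a byte key stream and XORs it with the image bytes; the quantisation rule is an illustrative assumption, not the paper's exact procedure.

    import numpy as np

    def xor_diffusion(image_u8, chaotic_seq):
        # image_u8: 256x256 uint8 image; chaotic_seq: floats in [0, 1),
        # assumed to contain exactly one value per image pixel.
        key_stream = np.floor(chaotic_seq * 256).astype(np.uint8)
        key_stream = key_stream.reshape(image_u8.shape)
        return np.bitwise_xor(image_u8, key_stream)

    # Decrypting this step applies the same XOR with the same key stream.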
Table 4. AWGN channel effect at zero mean µ = 0 and over certain ranges of noise variances σ² = 0.01, 0.05, 0.10, 0.15, 0.20, with all other variables kept equal. Various images with a resolution of 256 × 256 pixels are used to test the proposed authenticated secure image transfer technique. The AWGN channel is a well-known model for various random processes seen in nature; it has uniform power across the whole frequency band. Starting with a CT brain medical image, the proposed system behavior is examined under the AWGN channel effect at zero mean (µ = 0) and over the stated range of noise variances (σ² = 0.01, 0.05, 0.10, 0.15, 0.20), as shown in Table 4.
Figure 7. PSNR of FFT, FrFT, and MPFrFT OFDM under a Salt and Pepper noise attack, d = 0.02. Different E_b/N_0 values of 2, 8 and 16 dB are chosen in Table 7 in order to highlight the visual quality metric performance of the proposed systems under the Salt and Pepper noise effect. In medical ultrasound imaging, speckle is a granular interference that inherently exists in and degrades the quality of medical images. It results from the coherence of backscattered signals from various distributed targets [4].
Figure 10. Encryption and decryption results of the gray images Baboon, Lena, and Cameraman. (a) The original images, (b) the encrypted images, (c) the decrypted images.
Figure 11. Key sensitivity analysis: original images are shown in (a); cipher images of the original key are shown in (b); decrypted images for the incorrect decryption key are shown in (c); decrypted images for the correct decryption key are shown in (d).
Table 1. The randomness test results for the proposed MCC-MF sine map based on the NIST SP800-22 tests.
Table 2. The multi-secure parameters used in the simulations.
Table 3. The proposed authenticated secure image transmission system simulation parameters.
Table 5. BER performances of the proposed FFT, FrFT and MPFrFT coded OFDM under Salt and Pepper noise, noise density d = 0.02.
Table 6. PSNR performances of the proposed FFT, FrFT and MPFrFT coded OFDM under Salt and Pepper noise, noise density d = 0.02.
Table 10. NPCR comparison among the proposed MPFrFT and FrFT-coded OFDM using Cameraman, Peppers and Boat standard gray-scale test images.
Table 11. UACI comparison among the proposed MPFrFT and FrFT-coded OFDM using Cameraman, Peppers and Boat standard gray-scale test images.
Table 12. Correlation comparison among the proposed MPFrFT and FrFT-coded OFDM using Cameraman, Peppers and Boat standard gray-scale test images.
Table 13. Histogram analysis for the proposed MPFrFT-coded OFDM using Cameraman, Peppers and Boat standard gray-scale test images.
Table 14. Performance comparison between the proposed cryptosystem results and other methods described in the literature for a Lena image of size 256 × 256. | 20,673.8 | 2023-09-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Vehicle Plate Detection in Car Black Box Video
Internet services that share vehicle black box videos need a way to obfuscate license plates in uploaded videos because of privacy issues. Thus, plate detection is one of the critical functions that such services rely on. Even though various types of detection methods are available, they are not suitable for black box videos because no assumption about size, number of plates, and lighting conditions can be made. We propose a method to detect Korean vehicle plates from black box videos. It works in two stages: the first stage aims to locate a set of candidate plate regions and the second stage identifies only actual plates from candidates by using a support vector machine classifier. The first stage consists of five sequential substeps. First, it produces candidate regions by combining single character areas and then eliminates candidate regions that fail to meet plate conditions through the remaining substeps. For the second stage, we propose a feature vector that captures the characteristics of plates in texture and color. For performance evaluation, we compiled our dataset which contains 2,627 positive and negative images. The evaluation results show that the proposed method improves accuracy and sensitivity by at least 5% and is 30 times faster compared with an existing method.
Introduction
Internet services that share user created contents including videos and images have long become a part of people's everyday information and entertainment elements.Proliferation of such services accompanies side effects.Because of heightened awareness about privacy, the exposure of personal information without consent has drawn people's attention more than ever before.For example, it is certainly undesirable if vehicle plates of some people are exposed without their permission in the Internet services such as Google street view [1] and black box video sharing sites [2][3][4].Many countries ban sharing of personal information captured in black box videos.For instance, Germany and USA prohibit the distribution of images or videos containing faces and plates without written permission.Thus, it is required to delete or at least obfuscate privacy related data before making them available online [5].
However, such work to eliminate personal information is impracticable if it is performed manually without automation considering the quantity of images and videos that are newly available every day.The privacy information that this paper deals with is vehicle plates captured in black box videos.
The services sharing such videos need to remove plates from uploaded videos.To automate such work, methods for detecting plates play a central role; plate should be located correctly before being removed.
Methods for vehicle plate detection have long been used in various fields such as security control, parking management, and automatic toll systems as a vital prestep before recognizing plate numbers.However, existing methods have limitations by constraints and assumptions particularly regarding location of plates within images, plate sizes, and lighting conditions.Detecting license plates in street view and black box videos provides various challenges: road signs and billboards are similar to plates, plates have different sizes depending on distance and are sometimes rotated or slanted, and their colors change according to lighting.
In this paper, we propose a novel method to detect Korean plates in videos captured by black box cameras.It detects six different types of Korean plates as shown in Figure 1.We aim to develop a scheme which works without assumption about plate locations within images, sizes, and illumination.We design it to be able to detect multiple instances of plates.One of the challenges that we should overcome is the case where boundary of plates is indistinguishable from vehicle color.Some of existing methods exploit assumption that rectangular boundaries of plates are distinct from background.However, it is not always true.To work with such cases, we propose a bottom-up way.We first detect characters that might constitute plates and then combine them together to form plate-like regions which are later classified.
The paper is organized as follows.Section 2 surveys a list of research efforts for plate detection.Section 3 describes the proposed method in detail and Section 4 presents the performance evaluation results.Section 5 concludes the paper.
Related Works
Methods for detecting vehicle license plates from images have been studied in various literature because of their wide applicability.The methods can be largely divided into five groups.In the remaining section, we describe characteristics of each group in detail and discuss their strength and weakness, respectively.
Edge-based methods [7][8][9][10][11][12] are one of the simplest approaches.They scan images to find areas on which vertical and horizontal edges overlap and, at the same time, of which shapes are rectangles with the ratio of width to height close to those of plates.The methods depend on Sobel filter to detect edges.When searching such areas, Hough transform is adopted in [9,10] to detect even rotated plates.To determine boundaries of rectangles, connected component analysis (CCA) [11] or template matching [12] is used.In general, the edge-based approaches are simple to implement but require that all edge pixels are connected.Otherwise, disconnected parts of edges are discarded as noise.Other constraints are that plates should have different colors than vehicle and the ranges of plate sizes in images and illumination condition should be known in advance.
Texture-analysis-based methods [13][14][15] are motivated by an observation that edges from characters appearing on plates have texture characteristics.Thus, detection processes compare edges for possible matching with a set of predefined textures derived from actual plates.For texture matching, various methods are used such as vector quantization [13], Gabor filter [14], and wavelet transform [15].
Multistage approaches [16,17] consist of a series of steps along which the number of candidate areas which might be plates decreases and only areas with high probability of being plates are left at the end of the steps.Selection processes along steps use feature vectors derived from areas such as Haar [16], covariance descriptor, and HoG descriptor [17].Use of such feature vectors lifts the constraint that requires that plate colors should be different from vehicle.
Color-based approaches are inspired by an observation that color combination used in plates including characters on them is hardly found in streets.A simple method [18] selects areas of which color combination matches with those of plates by dividing pixels in areas into 13 categories under HLS color model.There are other approaches; neural network is used to classify color distribution [19] and boundaries of plates are detected by colors [20].These color-based methods are robust to rotation and perspective transformation plates.However, they are subject to lighting and not applicable when colors of vehicles and plates are similar.Regarding such limitation, use of average and standard deviation of hue value distribution of areas is proposed [21].
Character-based methods work in a bottom-up way; they infer plate areas by using information about detected characters. A direct method [22] finds areas containing patterns like digit characters and then uses a neural network to determine their likelihood of being plates. A similar method [23] improves accuracy by limiting the range of pattern sizes. For efficiency, another method employs an extra step to exclude nonplate areas by limiting the ratio of width to height [24]. There are other modified methods; a Laplacian filter is used to strengthen character edges [25] and multiple classifiers such as Adaboost and support vector machine (SVM) over Haar features are employed [6]. This type of method is susceptible to false positives: nonplate objects that are rectangles with character patterns.
We propose a method that is a hybrid of the multistage approach and the character-based one.A similar one to our method is Ho et al. 's work [6] which shares two-stage structure: a first stage selects candidate regions which have high probability of being plates and a second stage determines actual plates among candidates by using a machine learning classifier.Another similarity can be found in a two-stage method [26] of Google.It was developed to protect privacy by blurring plates captured in street view images.It has a layer of convolutional neural network (CNN) to detect vehicles and another layer of neural network to locate plates from detected vehicles.
However, similarity ends here.Our method uses a different approach for selecting candidate regions than Ho et al. 's work and is more efficient in terms of the number of candidate regions.Also, we use different feature vectors for a SVM classifier.In terms of complexity, our method is different and simpler than Google's work which employs two CNNs.
Vehicle Plate Detection
We develop a novel method to detect vehicle plates from the videos captured by car black boxes. Our approach is motivated by the fact that plates have letter and numeric characters on them. We search an image to locate characters and merge the character regions to form a plate-like rectangular area. Only the areas that satisfy certain conditions are classified as plates. Such an approach is reasonable because Korean vehicle plates have several digit characters evenly spaced. However, black box videos have many nonplate objects that show similar features to digit characters, for example, road signs and billboards. Moreover, the task is challenging because plates have different sizes; some of them are slanted at an angle and are under different illumination.
The proposed method consists of two sequential steps.The first step selects candidate regions which are most likely to be plates; thus it is called a candidate selection step.The second step, a decision step, determines which candidate regions are actual plates by using a machine learning-based classifier, SVM.
Figure 2 shows intermediate results while images are processed through the two steps.From the results, we explain what substeps consist of each step and the overview.Details will be discussed later.Given an input image, character regions are emphasized by strengthening edges as shown in Figure 2(b).Then only the edges that have high probability to belong to character regions are selected as in Figure 2(c).The next step merges neighboring character regions together to form rectangular areas.Among the areas, only those that satisfy shape conditions that characterize license plates are chosen as in Figure 2(d), which are then fed into SVM to determine actual plates as in Figure 2(e).The white rectangles mark final results of plate detection.
We now describe the first step, the candidate selection step, in detail. It consists of five substeps. As the substeps proceed, the number of candidate regions decreases by filtering out nonplate areas through operations such as morphology, connected component analysis, and region merging. The first substep detects edges from an input image and then sharpens them by convolving with the Laplacian filter, leaving only pixels on which a zero crossing occurs. By this filtering, the edges become more distinct, which prevents the loss of character regions in the following steps.
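A hedged OpenCV sketch of this first substep is shown below; the kernel size and the exact zero-crossing test are assumptions, since the paper does not specify them, and the input file name is a placeholder.

    import cv2
    import numpy as np

    def laplacian_zero_crossing(gray):
        # Sharpen edges with the Laplacian and keep only pixels where
        # the sign of the response changes (zero crossing).
        lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
        pos = lap > 0
        zc = np.zeros_like(gray, dtype=np.uint8)
        # A zero crossing is flagged where the sign differs between
        # horizontally or vertically adjacent pixels.
        zc[:, :-1] |= (pos[:, :-1] != pos[:, 1:]).astype(np.uint8)
        zc[:-1, :] |= (pos[:-1, :] != pos[1:, :]).astype(np.uint8)
        return zc * 255

    gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
    edges = laplacian_zero_crossing(gray)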
The second substep removes noise and connects edges by two morphology operations, as shown in Figure 3. It should be noted that the previous step strengthened not only character edges but also noise. Noise generally consists of edges that are too short to constitute character regions. We use two operations in order: opening and closing. The opening removes short edges by erosion followed by dilation, while the closing merges disconnected but neighboring edges together by dilation followed by erosion.
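The two morphology operations could be applied, for example, as in the following sketch; the 3 × 3 rectangular structuring element is an assumption, since the kernel shape and size are not reported.

    import cv2

    def denoise_and_connect(edge_img):
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
        # Opening (erosion then dilation) removes short, noisy edges.
        opened = cv2.morphologyEx(edge_img, cv2.MORPH_OPEN, kernel)
        # Closing (dilation then erosion) reconnects nearby edges that
        # belong to the same character stroke.
        closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
        return closed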
In the third substep of Figure 4, we select the rectangular areas that have a high probability of corresponding to a single character region. This is achieved by connecting neighboring edge pixels and finding the minimum rectangular area that includes all the pixels. For this, we use connected component analysis (CCA). However, it returns other rectangular areas as well as actual character regions, so the extra regions need to be identified. A rectangular region R is considered a character region if its edge density lies within a threshold range, rho_min <= N_R / A_R <= rho_max, where N_R is the number of edge pixels in R and A_R is its area in pixels. Otherwise, it is classified as a non-character region and removed from the results, as in Figure 4(c). We determine rho_min and rho_max from the statistics of actual character regions gathered from our own dataset; for example, we use the threshold range of [0.2, 0.8], which is derived from all possible characters of license plates. The fourth substep merges neighboring rectangular areas to form a bigger rectangle that contains all of them. The rationale behind this is that plates consist of a set of neighboring character areas. The decision whether to merge regions is based on the following three conditions; if any one of them is not met, the regions are not merged. The first condition states that all the regions R_i in a set S are mergeable if their heights are similar to the average height of the set, |h(R_i) - h_avg(S)| <= th * h_avg(S) for all R_i in S, where h(R_i) is the height of region R_i, h_avg(S) is the average height of all regions in S, and th is a threshold. By this condition, only regions with similar heights are merged.
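A minimal sketch of the third substep, assuming OpenCV's connected-component analysis and the edge-density range [0.2, 0.8] quoted above, might look as follows.

    import cv2

    def candidate_char_boxes(edge_img, rho_min=0.2, rho_max=0.8):
        # Label connected edge pixels and take their bounding boxes.
        n, _, stats, _ = cv2.connectedComponentsWithStats(edge_img)
        boxes = []
        for i in range(1, n):  # label 0 is the background
            x, y, w, h, n_edge = stats[i]
            density = n_edge / float(w * h)  # edge pixels per box area
            if rho_min <= density <= rho_max:
                boxes.append((x, y, w, h))
        return boxes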
The second condition requires that the center points of the regions are not too far apart: the distance d(R_i, R_j) between the centers of neighboring regions R_i and R_j is bounded with respect to their widths, where w(R_i) is the width of R_i and c_x[R_i] is the horizontal center coordinate of R_i. This condition checks whether any one of the regions is apart from the others in a set. After checking the three conditions, Figure 5(b), for example, shows that the rectangular regions on the license plate in the center bottom area of the image were merged into one bigger rectangle. The condition checking of the fourth substep, in theory, should be performed on all possible combinations of regions; however, this makes the computing time prohibitively large. Thus, we rely on two heuristics to limit the number of possible sets to be checked. Firstly, we limit the size of a set to a minimum of two and a maximum of ten regions; we assume no plate is larger than ten merged regions. Secondly, we use a backtracking algorithm in such a way that we extend sets which already meet the three conditions by adding a new region and checking whether they still hold.
The final substep selects only plate-like regions, as shown in Figure 6(b). We define the plate-likeness quantitatively by the following three conditions; regions that fail to satisfy all of them are discarded. Firstly, the ratio of width to height of a candidate region R should be within a range [r_min, r_max], i.e. r_min <= w(R)/h(R) <= r_max, where w(R) and h(R) are the width and height of candidate region R. Secondly, the ratio of the region size to the whole image should be within a range [s_min, s_max], i.e. s_min <= w(R)*h(R)/A_I <= s_max, where A_I is the image size in pixels. This is because plates in black box videos are limited in their maximum and minimum sizes by the distance to the camera and the resolution. Thirdly, the center point of a region should lie in the lower two-thirds of the image when the image is divided horizontally into three equal segments, i.e. c_y[R] >= H(I)/3, where c_y[R] is the vertical coordinate of the center of candidate region R and H(I) is the image height. This is because of the angle at which black box cameras capture vehicles. We determine the range thresholds r_min, r_max, s_min, s_max from the statistics of the six different types of plates in our own dataset.
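A compact sketch of the plate-likeness check is given below; the concrete threshold values are placeholders standing in for the statistics-derived parameters listed in Table 1.

    def is_plate_like(region, img_w, img_h,
                      r_min=2.0, r_max=6.0, s_min=0.001, s_max=0.05):
        # region = (x, y, w, h); thresholds are illustrative placeholders.
        x, y, w, h = region
        aspect = w / float(h)                      # width-to-height ratio
        rel_size = (w * h) / float(img_w * img_h)  # region size vs. image size
        center_y = y + h / 2.0
        in_lower_two_thirds = center_y >= img_h / 3.0
        return (r_min <= aspect <= r_max and
                s_min <= rel_size <= s_max and
                in_lower_two_thirds)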
Candidate regions obtained at the end of the first stage are then fed into the second stage, the decision step, which uses a machine learning-based classifier to select only actual plates.For this stage, we use a nonlinear support vector machine (SVM) which takes as input feature vectors derived from candidate regions.
The feature vector is designed to represent the edge density distribution and the dominant colors of a region. It has n (= m + k) dimensions, where m dimensions are for edge density and k for dominant colors, as shown in Figure 7. Before the feature vector is retrieved, regions of different sizes are normalized so that they have a fixed width in pixels while their aspect ratio is kept. Also, the color model of the regions is changed from RGB to HSV. This is because HSV is more robust than RGB under illumination change; using the hue component makes the algorithm less sensitive to lighting variations.
To calculate the first m values of the feature vector, a region is divided into m rectangular blocks, each of the same size, and, for each block, the ratio of the number of edge pixels to the total number of pixels in the block is calculated. The m resulting ratios comprise the first m dimensions. For example, the region in Figure 7 is divided into m (= r × c) blocks arranged in r rows and c columns. The ratios are stored in the feature vector in row-wise order. The hyperparameters r and c should be determined empirically by considering both the image resolution and the ratio of width to height of plates.
In the experiments of this paper, we used the values r = 5 and c = 10. The remaining k dimensions of the feature vector represent the most dominant colors of the region. We retrieve the dominant colors by using a histogram with 256 bins representing the entire range of hue values. We choose the k bins with the highest frequencies, and their corresponding hue values in [0, 255] are retrieved in frequency order to fill the last k values of the feature vector. In theory, two (k = 2) dominant colors are sufficient because plates are composed of two colors: background and characters. However, adding one more piece of color information (k = 3) is necessary due to illumination irregularity caused by partial shadow, ambient, diffuse, and specular lights.
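Under the stated choices (r = 5, c = 10, k = 3, hue histogram in HSV), a hedged sketch of the feature extraction could be written as follows; the normalized width of 100 pixels and the use of OpenCV are assumptions made only for illustration.

    import cv2
    import numpy as np

    def plate_feature_vector(region_bgr, edge_img, width=100, r=5, c=10, k=3):
        # Normalise the width while keeping the aspect ratio.
        h, w = region_bgr.shape[:2]
        new_h = max(1, int(round(h * width / float(w))))
        bgr = cv2.resize(region_bgr, (width, new_h))
        edges = cv2.resize(edge_img, (width, new_h))

        # First r*c values: edge density per block, stored row by row.
        densities = []
        ys = np.linspace(0, new_h, r + 1, dtype=int)
        xs = np.linspace(0, width, c + 1, dtype=int)
        for i in range(r):
            for j in range(c):
                block = edges[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                densities.append(np.count_nonzero(block) / float(block.size))

        # Last k values: the k most frequent hue values (HSV hue is less
        # sensitive to illumination than RGB).
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        hist = np.bincount(hsv[:, :, 0].ravel(), minlength=256)
        dominant = np.argsort(hist)[::-1][:k]
        return np.array(densities + list(dominant), dtype=np.float32)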
After the feature vectors of the regions are obtained, they are fed into a pretrained SVM to perform binary classification. Given a set of training examples, each labeled as belonging to plates or not, an SVM training algorithm builds a model that can later assign regions to one category or the other, making it a non-probabilistic binary classifier. More details about the SVM are discussed in the next section.
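A minimal sketch of this decision step with a nonlinear SVM is shown below; the paper does not name a specific library, so scikit-learn's RBF-kernel SVC and the training-file names are assumptions.

    import numpy as np
    from sklearn.svm import SVC

    # X_train: feature vectors of labelled plate / non-plate regions,
    # y_train: 1 for plates, 0 for non-plates (hypothetical files).
    X_train = np.load("train_features.npy")
    y_train = np.load("train_labels.npy")

    clf = SVC(kernel="rbf", gamma="scale", C=1.0)
    clf.fit(X_train, y_train)

    def classify_candidates(feature_vectors):
        # Returns 1 for regions classified as actual plates.
        return clf.predict(np.asarray(feature_vectors))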
Performance Evaluation
We present the results of the performance evaluation of the proposed method along with a comparison with the work of [6]. For the training of the SVM used in our method, we used a total of 86 actual plate images captured from black box videos. The images were selected in such a way that they represent the six different types of Korean plates. Also, a total of 137 nonplate images from the same videos were used for the training. Training images have a rectangular shape, with widths ranging from 22 to 168 pixels. It should be noted that the images are resized so that their width matches the normalized width used for feature extraction before the feature vector is retrieved. Examples of training images are shown in Figure 8. The algorithm parameters used in our method are listed in Table 1. As testing data, we used two sets of data: positive and negative. The positive data consists of a total of 1,627 driver-view images, as shown in Figure 9, that contain at least two vehicles with distinguishable license plates. The negative data is a total of 1,000 images that have an unrecognizable license plate or no vehicle at all. Both positive and negative images [27] were captured from six different black box videos with at least 1280-by-720 resolution. The positive images were labeled with the coordinates of the actual plates.
A confusion matrix is used to analyze the classification performance of the proposed method. We build the matrix in such a way that, given a positive image, if the number of detected plates and their coordinates match its label, we consider it a true positive by increasing the count of true positives (TP) by one; otherwise, the count of false negatives (FN) is increased. On the contrary, in the case of a negative image, we increase the count of true negatives (TN) when the number of detected plates is zero; otherwise, the count of false positives (FP) is increased.
For comparison purpose, we implemented the work of Ho [6] and had it run on the same set of the positive and negative dataset.We chose it because it not only claims over 0.9 of recall rate but also shares a similar two-stage structure to ours; it uses Adaboost to select a set of candidate regions, which are then classified by SVM in the second stage.
Table 2 shows the confusion matrices obtained from the experiments with the test data; the result of the proposed method is on the left and that of Ho et al.'s work is on the right, where the matrices report the numbers of actual and detected plates, respectively. We derive from the matrices a list of performance metrics, as shown in Figure 10. The improvements in accuracy, precision, sensitivity, and specificity achieved by the proposed method are 5.22%, 3.12%, 8.15%, and 2.35%, respectively. The largest improvement, in sensitivity, implies that the ability of our method to detect plates when they are present is more advanced than that of Ho et al.'s work. A more intuitive comparison between the methods comes from the receiver operating characteristic (ROC) curve. Figure 11 shows where both methods are positioned within the region of the ROC curve. In theory, the closer the position is to the top left corner, the better the classification performance. Thus, it is evident that the proposed method is superior to Ho et al.'s work. In future work, the complete ROC curve will be explored by using possible combinations of adjustable threshold parameters; then a more optimal configuration of parameters can be sought to enhance performance further. We also compare how fast the algorithms work. To this end, we measure the elapsed time from the moment an input image is given until the detection ends. The proposed method takes 0.58 sec on average, while Ho et al.'s work takes 17.9 sec; our work is approximately 30 times faster. Possible reasons for such a gap include the difference in the number of candidate regions produced at the end of the first stage. The proposed method yields 12 candidate regions on average, while Ho et al.'s work yields over 400. This implies that Ho et al.'s work has roughly 33 times more load than the proposed method. Also, the sliding window that Ho et al.'s work uses to scan over images repeatedly while changing its size is another reason for the gap.
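For reference, the reported metrics follow the standard confusion-matrix definitions, as in this small helper.

    def detection_metrics(tp, fp, tn, fn):
        # Standard definitions of accuracy, precision,
        # sensitivity (recall) and specificity.
        accuracy = (tp + tn) / float(tp + tn + fp + fn)
        precision = tp / float(tp + fp)
        sensitivity = tp / float(tp + fn)
        specificity = tn / float(tn + fp)
        return accuracy, precision, sensitivity, specificity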
We now analyze how the number of intermediate results decreases along the sequential processes of the proposed method. It helps us to catch a glimpse of the narrowing-down nature of our method. Figure 12 shows the average numbers of extracted regions after the substeps when the proposed method works on the test data. The third substep, which detects character regions, produces 282.8 regions on average. The fourth substep, which merges the regions, reduces them to 233. The final fifth substep, which checks the plate-likeness, selects only 12 among them. This implies that the plate-likeness checking is an effective way to filter out nonplate regions. After the second stage involving the SVM classification, the number of detected regions drops down to 1.5, which falls within the range of actual true plate numbers in the test data, [0, 3].
Conclusions
We proposed the two-stage method for detecting vehicle plates from car black box videos.The first stage finds a set of candidate regions which have high probability of being plates and the second stage identifies actual plates among candidates by using a binary machine learning classifier, SVM.Our proposed method works in a bottom-up way in the sense that candidate regions are constructed from a set of single character areas.The performance evaluation results showed that our method improves overall detection accuracy, efficiency, and performance compared with an existing work which has similar multistage structure.
In future works, we would improve the method to become less susceptible to rotation or transformation of plates.For this, current scheme using thresholding for ratios or alignment-based filtering will be reexamined.Other further works are related to real-time performance.A quantitative goal is to detect at least five plates in an image of 1280-by-720 resolution in less than 10 msec on an embedded platform with the hardware specification of Raspberry PI 3. We expect such real-time performance to widen application ranges of the proposed method such as unmanned self-driving vehicles and automatic toll systems.
Figure 1: Six different types of Korean vehicle plates.
Figure 2: The processing sequence of the proposed method.
Figure 3: Morphology operations to remove noise and strengthen character areas.
Figure 4: Detection of character areas by CCA and filtering.
Figure 5: Candidate plate regions from merging character areas.
Figure 6: Candidate plate regions at the end of the first stage.
Figure 7: An example to show derivation of feature vectors.
Figure 8: Example images for SVM training.
Figure 9: Examples of training and test images used for evaluation.
Figure 12: The number of detected regions after each substep of the first stage.
Table 2: Confusion matrices from experiment results. | 5,602.8 | 2017-11-28T00:00:00.000 | [
"Computer Science"
] |
Language Education in Russian Universities: Advantages, Vulnerabilities and Risks of Online Teaching
At the moment, education systems around the world are taking measures to organize education in the context of the coronavirus (COVID-19) pandemic. In most countries many students are transferred to distance learning. Online education provides a wide range of opportunities and prospects for changing and improving educational systems, for which a critical situation creates forced conditions. The article analyzes the problems associated with the use of online learning. The methodological and theoretical foundation of the work is provided by a systematic approach, the concept of social conditioning of language education, the notion of continuous vocational training. When using online courses as a means of teaching a foreign language, we offer both special programs for teaching a language and checking knowledge of vocabulary and grammar, as well as modern authentic materials for teaching reading and listening. In the process of teaching a foreign language, students perform various types of work: they translate foreign texts and articles, make reports, practice creative project activities and prepare presentations. The priority goal of the article is to identify the advantages, vulnerabilities and risks of online learning. It is shown that mass character, the use of modern and interactive technologies belong to the strengths of online learning. However, the absence of direct contact between the teacher and the student does not allow implementing the competence approach in full measure.
Introduction
Online learning (or so-called "distant education") has become an integral and essential part of the educational standards both in Russia and abroad even prior to the COVID pandemic. The factors contributing to the online learning development include shortage of time, increasingly faster life pace, busy work schedules, spread of broadband Internet connectivity and facilities, shift from desktop computers to the mobile devices (on-the-move learning), popularity of "edutainment" (i.e. integration of entertainment and gaming elements into the learning process), cutting of the travel costs, information accessibility etc. While being questioned and debated as a principal education core, online learning is widely embraced as (at least) a supplementary and additional tool that is to be inherent to any higher educational 8160 Language Education in Russian Universities: Advantages, Vulnerabilities and Risks of Online Teaching organization. According to [3]: "The growth of international cooperation, trade, tourism and emigration arouses the interest of both linguists and teaching experts in the theory and practice of the e-learning and mobile teaching technologies".
Lockdown safety, pandemic conditions and healthcare regulations have recently added a completely new dimension to online learning. Now it is temporarily being considered as a principal tool of education due to pandemic restrictions. Although the limitations vary from one Russian region / territory to another, online courses are partially or fully implemented in almost every higher educational organization, including N.P. Ogarev's Mordovia State University (Saransk) and Ulyanovsk State University. The restrictions and anti-Covid healthcare measures include: wearing masks and protective medical gloves; more frequent cleaning, sanitization and ventilation of school buildings; non-contact temperature tests; social distancing (involving seating in staggered order in the classrooms); rearranging the buildings' entrance and exit patterns (i.e. personnel and students use different doors to access the facilities to prevent mass gatherings); further digitizing of the workflow; alternation of online / offline schedules; restrictions for senior personnel (e.g. "online-only mode" for them), etc. Meanwhile, the above-mentioned measures are implemented only in the regions with statistically low numbers of pandemic cases and deaths. In the foreseeable future, the seasonal growth (or the so-called "second wave") might demand a complete lockdown or further restrictions.
The emerging anti-pandemic measures and digitization of the education courses have reshaped and rearranged the learning process, as well as professional thinking and perception. Teaching staff and faculty were forced to quickly acquire new skills (or improve existing ones) to master the online learning software, deal with technical issues, introduce new teaching techniques, and organize classes in a completely novel environment. The new and unexpected realia challenged everyone from education experts to technical specialists to improve the digital environment and online facilities to control both the education process and academic work. This required the update and further development of university websites, personal online profiles, online attendance and grade records, and a digital workflow connecting them to the existing messaging and audio / video conference software (e.g. Skype, Zoom, Viber, WhatsApp) and social media.
The overall situation raised multiple issues. First, it led to the re-evaluation of the online resources and forced the teaching staff to make them a key element of their work, at least temporarily. Second, it also highlighted a growing need for an instructor`s individual approach to the broad diversity of students` needs and demands. Third, it "rekindled" a debate on the topic of traditional versus contemporary teaching methods (i.e. can online courses substitute the offline classes?). Fourth, it limited the capacity for offline activities -both curricular (e.g. conducting experiments in natural sciences` disciplines, teaching practice, medical internship etc.) and extracurricular (sports, drama, academic conferences, international exchange programs).
Thus, while having a non-negotiable advantage in education, the online learning process is also viewed as a source of potential risks and vulnerabilities. Consequently, it demands constant discussion and improvement to meet personal, educational and professional requirements. These issues are of special importance, especially in the sphere of foreign language learning. There exist multiple ways to use digital technologies and Internet resources in foreign language learning, including the implementation of existing applications and materials, as well as joint resources created by teachers and students. N.S. Kirgintseva and S.A. Nechaev [14] argue that "the use of various types of multimedia in teaching a foreign language facilitates the cognitive activity within students, forms a culture of creative operational thinking and the ability to navigate the rapidly changing information flows of modern society". This article concentrates on the use of the online and multimedia means within the framework of information and communication technologies in teaching foreign languages in higher education (based on the example of Russian higher institutions).
Literature Review
The problem of the digitization of education and e-learning is in focus of various researchers and scholars. They make a comprehensive contribution into the study area by providing their national teaching and academic expertise and experience. For instance, R. Trinder [34] studies informal aspects of e-learning using contemporary methods and argues that "online informal learning of English deserves more attention". The researcher presents an empirical study by surveying Austrian university students', as well as their practices and preferences related to new media methodologies. T.M. Roose and G.E. Newell [26] in the work "Exploring Online Discussions through an Academic Literacies Approach" study "how international students bring their cultural knowledge and experiences into relationship with other writers' ideas" using as an example an extract from an online discussion group, including the assignments, student responses, and comments within a university ESL composition course. The work [12] deal with the digital multimodal composing (DMC) as an instructional activity, discussing its implication, advantages and vulnerabilities.
In the work [25] the scholar considers the existing and possible threats and advances of online education, teaching and academic work as viewed from the perspective of the Chinese colleagues. The study [35] analyzes "the goal orientations, implementation intentions, and self-regulated learning behavior in relation to mobile-assisted language learning", showing that the implementation of these techniques is "largely conditioned upon learners' awareness of integrativeness and a sense of mastery in light of their reasons for or goals of learning English". H. Tawil [32] poses a question of the effectiveness of the current developments in the area of language learning and teaching, the research concentrates on the role of educational technology and digital communications in acquisition of new or second foreign languages. The digital innovations and their influence on classroom instruction are studied in the paper [19] based on the survey carried out in Jouf University, Saudi Arabia. C.A. Chapelle [4] investigates into the distribution of computer-assisted language learning (CALL), its integration into the existing language materials and curricula and distinguishing between CALL tools and other language resources. The research titled "Electronic Evaluation: Facts, Challenges and Expectations" [11] from Hassiba Benbouali University of Chlef, Algeria shares the University`s and overall national experience in electronic and digital evaluation.
The series of works [13] researches the adoption of mobile multimedia and Internet technologies based on the opinions of intermediate female EFL students (15-20 years old) at English language institutions located in Kerman, Iran. The study of students' attitude to the e-learning paradigm in Delta University, Egypt deals with a survey conducted among 100 freshman students, asking them the questions: "1. Does using an e-learning software program influence students' attitude towards learning English as a foreign language positively? 2. What are the advantages of e-learning in improving English language skills among freshmen students?". Post-graduate learners in Pakistan and their attitudes to the changing educational realia under the circumstances of anti-Covid measures are surveyed in the study [27]. The conclusion is made that the participation of the 100 surveyed students of KFUEIT, RYK University is found "inspiring" and "positive" within the framework of this field of study.
The introduction and use of the e-learning and distant technologies is studied [9], while the autonomous and informal learning is imaginatively compared to "Riding the Digital Wilds", i.e. the author argues that "the trend has accelerated, with new opportunities for autonomous language learning through mobile devices and the ever increasing availability and use of streaming video and other authentic materials in the target language". H. Chiang [5] from Central Taiwan University of Science and Technology investigates into teacher-led and online text-to-speech dictation for students' vocabulary performance and makes a conclusion that there exists a significant difference between TTS and TLD impacts on the participants` vocabulary mastery.
In the paper "Practice of College English Teaching Reform Based on Online Open Course" [16] introduces and discusses the construction and application of college English online open course in a vocational college in China based on the experience of Yiwu Industrial & Commercial College. Z. Bárkányi and S. Melchor-Couto [2] from The Open University, Milton Keynes and University of Roehampton, London, United Kingdom study learners` reaction to speaking skills mastery within the massive open online language course and their "feeling of intimidation" and "anxiety" obtained by comments from the discussion forums. Currently, research in the field of Internet, digital and online tools in higher education is conducted in the studies of Russian authors [3,6,8,15,20,23,24,30,31,33]. While pedagogical practice leads to deliberations concerning the use of specific methodological tools and their implementation in the so-called "education praxis", scholars conclude that these methodologies are to become an essential component of modern pedagogical discourse. The scholars put forward a thesis of integrating online and e-learning ("distant education") into the regular curricula without violating and diminishing the role of an instructor as a mentor and guide.
Materials and Methods
The methodological and theoretical foundation of the work is provided by a systematic approach, the concept of social conditioning of language education, the notion of continuous professional training (stipulated by the Russian Federal Educational Standards) and the government regulations of reforming higher education.
Internet technologies in language teaching involve the combination of text, sound, animation, video and graphic images in a computer system using educational multimedia software, a projector and a web camera to ensure individual approach to training, the efficiency of mastering the skills and enhancement of the person`s motivation. These teaching aids in foreign language classes provide an opportunity to carry out real communication with native speakers, access to information in centralized information systems, cognitive development and motivation for language learning. For instance, the online learning for Mordovia State University is carried out with the help of the online system (integrated into the University`s website) and supplementary means (messagers and social media).
Methodically, the open online course provider https://www.coursera.org/ is used to aid teaching and learning of the foreign language disciplines.
Results
Nowadays online courses are inherent to the educational process in general. The rapid growth and increase in the number of mass online courses are currently attracting attention from educational institutions and education experts. The development of the language education is characterized by the implementation of information and communication technologies in solving the problem of modernizing the education system. Information and communication tools, as well as multimedia and Internet technologies, contribute to the personal development of students and the individual approach in the educational process of higher education. According to [31], "modern ICTs provide active and creative mastery of the studied subject, which allows students to learn the given material at a novel and qualitatively higher level".
N.G. Mathew [19] et al. argue that "the impact of technological innovations on teaching methodologies has always been a subject of debate due to its attempts to substitute face-to-face teaching-learning contexts with a virtual environment". A skilled lecturer is surely able to efficiently influence students in the classroom. However, attention and motivation management can also be carried out through indirect contact, while training via an online course. To accomplish this, there is an increasing demand in the appropriate tools: the visual design of materials, notifications and support of the courses, the logics in organizing of the curriculum and tasks system, etc. Online training facilitates producing a course that will gain popularity with students. Moreover, if the implementation of online training does not take into account the specifics of the discipline and the direction of specific students' training, even an excellent course may lead to negative results.
Instructors` qualifications and proper methodological organization are essential not only for creating a course, but also for its application in learning and teaching praxis.
One of the principal teaching tasks -creating the environment for successful learning -is also implemented in online courses [10,21], whereas the education medium becomes partially or completely virtual (depending on the chosen training model -mixed or completely remote). The course design commences when one is able to understand the capabilities and limitations of the online format. In addition, an open online course is built around the instructor, as well as their requirements and demands. Research results confirm that the charisma of the teacher, their passion and energy are the determining factors for students of online courses in assessing the quality of video lectures. The course is created for students, and it is their goals and objectives that are put at the forefront of the designing. But the key element of this course is its authors, the bearers of the "living" and "practical" knowledge and expertise [20]. The online format implies that this knowledge can be partially conveyed to a much larger audience than traditional offline methods. It is significant not only to provide the detailed explanation of the material, but also to supplement it with some assessment and evaluation. Otherwise, the audience may find problems with the readiness for theoretical and practical work.
The strengths and advantages of online learning include the following: coverage of a large number of students (mass participation); the use of modern and interactive technologies to present theoretical material and complete tasks; mobility and 24/7 availability, that is, the ability to study anywhere and at any time; variability and diversity; "repeated study" (interesting or complicated materials can be accessed multiple times for clarification); professional development through online courses for people with special needs (inclusive education); opportunities for self-development and self-discipline while studying flexible hours; optimization of the learning process (more autonomous student work, reduced teacher contact time); high-quality organization of classroom and independent student work, which is not always consistently and validly assessed during full-time training; and an increase in the University's rating, its popularity, and overall student motivation.
One could add that in an online course students experience less stress when completing assessment tasks, as they are placed in a more comfortable environment (home or any convenient place) and promptly receive final results and corrections of their mistakes. As concluded in [1], "Working outside class encourages students to study independently using the E-learning interactive activities and thus spend more time engaged and immersed in the English language which improves their language proficiency". On the other hand, it is more difficult for the teacher to control and properly assess how independently this work is performed [18]. Therefore, one has to create tasks that cannot be completed simply by copying data from the lecture materials but only through thorough comprehension of the topic.
The modern generation lives within a virtual and digital environment; consequently, online courses combine students' free time with their studies, giving them an opportunity to learn in an interesting and relevant manner. Discussing this subject, the author of [16] clarifies: "Online learning contains video lectures, quizzes, unit tests, and supplementary resources. In addition, the online course platform also has a discussion forum between teaching staff and students for answering questions. Offline classroom teaching is an extension and expansion of online courses. Therefore, teachers should use a mixed teaching mode that combines online and offline teaching activities. It is totally different from the traditional classroom teaching mode". Taking online courses may provide additional career guidance to future applicants and information support to undergraduates and postgraduates who studied in a different field at the previous stage of education. At the same time, online training provides ample opportunities for joint activities by all participants in the educational process.
In addition to the advantages listed above, the following can be noted: online courses also assist in teaching international students when it is difficult for them to learn in the general student flow and the teacher needs to pay additional attention to their needs. Online courses are also important for optimizing teaching time (as well as relieving the so-called "voice" load), whether for academic activity or for improving the quality of educational and methodological work. The listed advantages of online learning can be supplemented by the fact that it motivates instructors to master new educational technologies. H. Pu [25] emphasizes that "these unique times offer opportunities for ELT instructors who have grown used to face-to-face settings to take a closer look at online teaching with fresh eyes and revitalize their teaching repertoire".
In general, the strengths of online courses include facilitating self-expression; transferring teaching experience and expertise to an unlimited number of students; making education more accessible; and giving students the opportunity to improve their skills and acquire new contacts for future cooperation.
The weaknesses and vulnerabilities of online learning in foreign languages include: the large time investment and high cost of designing online courses (from 500 thousand to 1 million rubles, while the cost of support and follow-up work may reach up to 200 thousand rubles per year); a loss of students' ability to analyze and synthesize, since online course material is provided in a ready-made form and offers fewer opportunities for discovery and for both creative and critical thinking; the ability to take tests multiple times (as a result, the assessment may not adequately reflect the student's knowledge); problems with proctoring (student authentication) and hence the inability to control real progress, because tasks might be fulfilled by someone more competent; excessive formalization and regulation of training, with restrictions and limitations on (and in some cases suppression of) its creative component; and "one-sided" presentation of the educational material by the lecturer, with limited "live" dialogue, which can cause listeners to lose attention. This limits the questions, remarks, supplementary materials, and illustrations that "enliven" the educational material and motivate audiences to pay close attention to the presented data. The lack of direct (personal) interaction between the instructor and the student (trainee) does not allow the competence approach to be fully implemented, focusing more on acquiring knowledge and less on forming skills.
However, the lack of direct contact with the teacher can even be considered a positive point if the teacher is not highly qualified. For example, an eminent professor's lectures attract a larger audience than an ordinary teacher's class. Teaching staff are usually competent in their narrow subject matter, and they need significant support from specialists in modern pedagogical technologies, since a course must fit and meet the structural requirements of the Bologna system.
Despite the above-mentioned shortcomings, positive results may be achieved if the specifics of the field of study are taken into account and an online course is developed and used rationally, with a specific goal in mind. According to Sh. Kong [16], "the online open course has transformed our teaching concepts enormously. First of all, it makes learning process more participatory, exploratory and experiential, which naturally changes the roles of teachers and students. For students, the online learning can be done anywhere and the learning styles are quite flexible. However, they should learn and complete assignments more independently, and knowledge internalization is fulfilled mainly by themselves". The researcher also emphasizes that "on the other hand, the role of teachers is to guide, inspire and supervise students' learning process. This puts forward higher requirements on teachers' teaching ability and information quality".
It should be emphasized that, so far, no massive open online course can compare with a teacher's work at the initial stage, when a conceptual apparatus based on logical thinking is being formed. Not every student is able to independently organize their time and workspace and take a disciplined approach to completing tasks. Schedules, colloquiums, and the opportunity to compare one's knowledge with that of other students play motivational and organizing roles. Online learning is geared more toward keen students; it lacks sufficient tools to verify that conscious knowledge and skills have actually been acquired.
Some subjects, including foreign languages, require the formation of oral speech skills (especially spontaneous speech) through face-to-face communication with the teacher and other students, so in this case an online course can only be a useful addition to classroom work. As pointed out in [34], "Due to the ready availability of new technologies, opportunities for the incidental as well as deliberate practice of English have multiplied and far exceed what can be done in more formal environments". In this area, however, online learning opportunities are limited, especially in the format of massive online courses [7,22]. Both at the initial stage and at the subsequent (intermediate and advanced) stages, "live" communication and constant interaction with the instructor should play the leading role in the correct mastery of language material and the development of communicative competence.
Each stage of learning a foreign language must be saturated with speech exercises, which are the decisive factor for practical mastery of a foreign language (for example, see [28,29]). It is necessary that students make oral presentations, participate in discussions, and talk with each other in a foreign language. It should be noted that students' knowledge, skills, and abilities cannot be assessed only by passing a test (which is just one of the training exercises). The key assignments include: asking questions and answering them in a conversation with a teacher or in a "student-student" pair; determining the main topic of the proposed material; writing abstracts, etc. This is especially true if the student studies a foreign language for further use in oral and written communication in the professional sphere, as well as for education abroad [20]. Moreover, the most vulnerable aspects of speech communication, which are extremely difficult to work on in an online format, are phonetics, writing, and speaking, as they all need discussion, feedback, and constant control.
At the same time, it should be noted that the formation and automatization of skills in using particular grammar units, as well as the development of reading and listening skills, can be implemented in an online format [22]. N.G. Mathew et al. [19] observe that "most of the research studies in technology-mediated language instruction focused on the effectiveness of technological aids in second/foreign language classrooms". They also state the following: "The use of computers as instructional aids facilitates language input because of their ability to integrate multimedia material such as videos, images, and text simultaneously into one single screen". Since such courses and technologies already exist, it is necessary to train the teaching staff to use these materials and involve them in further optimization, as well as in the development of new materials in this area.
In our opinion, online learning still has more weaknesses than strengths. In theory it is possible to deliver information to the audience, but in practice this is problematic. In addition, knowledge assessment cannot be fully implemented: passing a test does not mean that students have learned the material correctly (one can pass a test based on logic rather than knowledge). The threats of online learning include a reduction in the number of teachers who are not fully competent in computer technology but who teach their discipline at a high professional level within the traditional system. Personal communication between the instructor and the student, in addition to the informational component, also contains an element of mentoring; this is currently in great demand and commonly lacking. Online training is appropriate for deeply motivated students, but in practice students often prove unable to search effectively for the necessary information or plan their own time. In addition, virtual reality limits teamwork, which is essential in real life.
Widely known is the statement of the Russian psychologist V.P. Zinchenko that a teacher is a subject, a bearer of not only institutionalized but also "living" knowledge, without whom complete education is impossible (for more information, see [36]). Indeed, the task of the teacher is not simply to broadcast and convey information. The teacher is the creator of the educational environment and paradigm, outside of which education is deficient. Undoubtedly, the advantages of online learning are obvious. However, it is necessary to calculate all the risks and stop the pursuit of rash implementation of innovations that have not been fully tested. Online learning, like any tool, depends on the expert who uses it. It will be practically useful if the online education system is perceived as a supplement to the traditional system, as an effort to compensate for its most vulnerable points.
Summing up, online courses are a valuable source of methodological experience for teachers, which is generally positive. This experience is especially useful if they are planning to create their own online course. However, there are many disciplines in which online courses cannot replace "live" communication with the teacher. In modern conditions, online training is relevant and necessary, but only as a supplement to traditional teaching.
Thus, the analysis of the problem of implementing online learning in the educational process revealed its strengths and weaknesses. It is shown that the potential of online courses is to provide users with broad opportunities to develop skills in innovative types of research, self-education, and intellectual activities. Along with traditional training, they can ensure the formation of an intellectually and professionally developed personality, promote the development of independent work, and provide access to new sources of educational information. The didactic potential of online courses provides opportunities for the formation of new forms of independent cognitive activity and contributes to the effective use of Internet resources in the educational process.
Discussion
The Internet offers innovative opportunities for better mastery of a foreign language through digital communication, which is subdivided into synchronous (chats and messengers) and asynchronous (e-mail, forums) types. Practice shows that the synchronous type is more effective for individual mastery of language skills at a sufficiently high level, whereas asynchronous communication supports thorough pair and group work, as opposed to spontaneous communication. The authors of [23] define both types of digital communication (within the framework of e-learning) as "exchanging scanned printed materials, graphs, business documents, photographs, charts, newspapers and magazines using electronic transmission methods and processing information". Due to its availability and accessibility, electronic communication is increasingly integrated into the educational process, which makes communication authentic, expands the circle of communication on relevant issues, extends knowledge about the culture of the target language, and increases motivation, encouraging students to master linguistic skills through specific teaching methodologies. Digital communication is capable of transferring not only the required data but also basic emotions. For instance, the rules of communication are supplemented by so-called "emoticons" ("smileys"), special symbols for conveying the emotions and intonation of the author of a message. According to [15], these rules are available for study at the following addresses: The Smileys and Acronyms Dictionary (www.seekwellness.com); The Net: User Guidelines and Netiquette (www.fau.edu); A Beginner's Guide to Effective Email (www.webfoot.com).
The final assessment of students' performance is determined by their participation in correspondence, and the role of the teacher is to organize the exchange of information and assess the completed assignments. For instance, the following resources may be used for creating joint projects and ensuring communication with pen pals in a foreign language: Intercultural Classroom Companies (http://www.iecc.org); EPALS Classroom Exchange (www.epals.com); The Rap Pal Exchange (www.iwaynet.net); Thomas Robb's E-Mail Keurals for Language Fluency (http://www.kyoto-su.ac.jp); E-pal classroom code (http://www.epals.com).
The method of e-mail projects is used on condition of proper planning, interest in the topic, motivation of students, and an appropriate level of group proficiency. Communication is carried out in a foreign language with real partners, topical problems are discussed, and the language competence of students is expanded. Practice shows that work on any project consists of several stages: 1) organizational: search for and presentation of partners (discussion of information); 2) selection and formulation of the issue points (definition of goals and objectives, discussion of the plan of activities); 3) analysis of methodological techniques and organization of students' work (structuring, allocation of stages, distribution into groups according to interests, determination of planned results); 4) work on the project (development of tasks for groups, consultations, exchange of information, obtaining results); 5) presentation of the project and summing up. An Internet project must meet such requirements as: the presence of a research problem, the practical significance of the results, the structuring of activities, the distribution of roles among the project participants, and the use of research methods in the project. As a result, the listed forms of student work provide the teacher with advantages: an opportunity to move from standard to creative activities; a flexible system for assessing students' knowledge, including self-control and self-assessment; and opportunities to share experience and improve work efficiency.
One of the most successful digital tools is the e-book (or digital course book). According to [6], "the type of organization and the method of delivery to the student determines the division of multimedia textbooks into the following types: CD-ROM with or without a printed copy; specially designed Internet sites; applications for mobile devices". In modern higher education, electronic textbooks are usually available online through the distance learning system, which is openly accessible to all students. From our point of view, in order to match a given curriculum with teaching aids, instructors and professors should act as authors of various digital resources. Such resources may contain educational materials as well as an audio/video course provided by native speakers, tests, posters, teaching aids, and additional materials. Programs and electronic textbooks consolidate the skills of determining lexical meanings and grammatical structures; they form writing skills and associate visual images with mental pictures while foreign language material is mastered. The advantages of electronic course books are the following: accessibility; increased motivation due to the visual presentation of the material (illustration, sound, video, animation); interactivity as a means of increasing the speed of mastering the educational material; feedback in the form of tests designed for quick control; the ability to adjust and edit the electronic course book as new data become available; and the elimination of student overload. The disadvantages of e-books stem from the limited possibilities for group work and the lack of real communication; as a result, e-books are often assigned an auxiliary or supplementary role in training.
Telecommunication provides more possibilities for solving communication problems using the Internet in online mode. Such classes are used for experimental, distance, and variable learning, as well as an addition to extracurricular work (academic communities, elective courses, foreign language clubs). The use of telecommunication in an educational context for teaching foreign languages is manifested in the exchange of messages: often, in the course of a telecommunications project, a certain topic is studied, with the results discussed through correspondence as well as in real-time communication. Telecommunication is also expressed in e-learning and can serve as a simulator for teachers and students in the study of special topics, establishing contacts with world experts in various fields of knowledge, ranging from the development of educational projects to the structuring of the educational process in an interactive mode. In the modern world, electronic communication takes place during the simultaneous completion of tasks in competitions and projects of different levels. Students are remotely offered the same tasks. Data exchange takes place in the form of a group question-and-answer game and the drawing up of conceptual charts, using modern techniques (cluster, insert, etc.) to develop critical thinking for the purpose of further socialization, as stipulated by the Russian Federal State Educational Standards.
Educational telecommunication projects are based on topical data exchange: collection, processing, comparison, analysis of the information on a given issue. Students are both creators and consumers of information in the exchange process. The ability to use information extracted from almost any major library in the world, archives of international scientific organizations (NASA, UNESCO, etc.) is an indisputable advantage of Internet tools. On-line modeling of the activities of participants in telecommunication projects is focused on the development of interaction principles; it is based on telecommunication support of traditional forms of education in combination with modern tools.
The web-quest technology provides for independent search work on the Internet using a list of websites that correspond to the subject of the project and the level of knowledge of the students. Internet resources that contain interactive materials on a foreign language include: Letter Generator (www.readwritethink.org); a website for developing listening skills (www.english-test.net/toeic/listening/the_bund_shanghai.html), which features a large collection of audio files for listening to foreign language speech; a resource for memorizing idioms and phrasal verbs of the English language (http://usefulenglish.ru/idioms), which contains examples of the usage of words and phrases, idioms, and fixed expressions in various situations in oral and written speech; a site for learning English (www.native-english.ru) with a comprehensive grammar reference, tests, idioms and proverbs, songs, poems, etc.; BBC Russian - Learning English, a portal offering tests and videos with radio reports from BBC correspondents; and BBC World Service, which provides an opportunity to read and listen to news in different languages. Other resources include: English.ru, which offers to determine the level of proficiency in English; "America homepage", which introduces the states, cities, history, and culture of the USA; abc-english-grammar.com, which features the study of grammar, phonetics, and radio programs; lingualeo.ru, designed to improve listening comprehension, reading, correct pronunciation, and vocabulary; and alleng.ru, which teaches phonetics and includes grammar material, English vocabulary, slang, idioms, dictionaries, tests, essays, abstracts, audiobooks, lyrics, scripts, etc.
To increase learning motivation, it is advisable to integrate Internet resources into the educational process, since they are multimedia-based and contain various types of information (text, sound, graphics, animation, video), providing a high degree of visual clarity as well as instant feedback. According to [24], "in the conditions of modern society, the information and communication competence of a teacher, that is, their ability to solve professional pedagogical problems with the involvement of information and communication technologies, is becoming a vital component of professionalism". A.V. Lebedev et al. [17] point out "the importance of thorough students' needs analysis prior to starting a course; the possible scenarios for significant adaptation of course contents due to constant changes in national educational standards and curricula hours; the highly responsible role of an English for specific purposes teacher, a professional, performing multiple assignments".
Conclusions
Based on the results of the study, the following conclusions are made: the use of Internet technologies is characterized by a structured approach that productively combines various information resources in novel ways and develops students' creative abilities and problem-solving skills; the purpose of e-learning is to acquire new knowledge in the process of active communication aimed at solving educational problems, which includes the formation of communication skills, the mastery of linguo-cultural knowledge, the development of skills for use in real situations, and the revelation of students' creative potential; and Internet technologies form the basis of a new methodological paradigm in language teaching. They help support the educational work of students, provide real communication with native speakers, and give universal access to the educational process through the data of centralized information systems. At the same time, online and multimedia facilities do not and cannot substitute for the instructor's work as a mentor or guide; such tools should be implemented soundly, combining digital and traditional teaching aids. As a result, the use of new information technologies makes it possible to increase the efficiency of teaching, improve the skills of everyday and professional communication in a foreign language, and develop students' communicative, cognitive, and creative abilities.
"Education",
"Linguistics"
] |
Present and Future: Crosstalks Between Polycystic Ovary Syndrome and Gut Metabolites Relating to Gut Microbiota
Polycystic ovary syndrome (PCOS) is a common disease, affecting 8%-13% of females of reproductive age and thereby compromising their fertility and long-term health. However, the pathogenesis of PCOS is still unclear. It is not only a reproductive endocrine disease, dominated by hyperandrogenemia, but is also accompanied by varying degrees of metabolic abnormalities and insulin resistance. With a deeper understanding of its pathogenesis, more small metabolic molecules, such as bile acids, amino acids, and short-chain fatty acids, have been reported to be involved in the pathological process of PCOS. Recently, the critical role of gut microbiota in metabolism has attracted increasing attention. Gut microbiota-related metabolic pathways can significantly affect inflammation levels, insulin signaling, glucose metabolism, lipid metabolism, and hormonal secretion. Although abnormalities in gut microbiota and metabolites might not be the initial factors of PCOS, they may play a significant role in its pathological process. The dysbiosis of gut microbiota and disturbance of gut metabolites can affect the progression of PCOS. Meanwhile, PCOS itself can adversely affect gut function, thereby contributing to the aggravation of the disease. Inhibiting this vicious cycle might alleviate the symptoms of PCOS. However, the role of gut microbiota in PCOS has not yet been fully explored. This review aims to summarize the potential effects and modulatory mechanisms of gut metabolites on PCOS and to suggest potential intervention targets, thus providing more possible treatment options for PCOS in the future.
INTRODUCTION
Polycystic ovary syndrome (PCOS), characterized by oligo-ovulation or anovulation, hyperandrogenemia, and polycystic ovarian morphology, is a common disorder of the reproductive endocrine system, affecting 8%-13% of the women of reproductive age as well as impairing their fertility and long-term health (1,2). Women with PCOS have a higher risk of infertility and pregnancy complications, accompanied by subsequent complications, such as obesity, type 2 diabetes, non-alcoholic fatty liver disease (NAFLD) (3), cardiovascular disease (4), endometrial cancer, and osteoporosis (5). All these complications have a far-reaching impact on the physical and mental health of women (6).
Until now, the specific etiology and pathophysiology of PCOS remain unclear. PCOS might be a polygenic heritable condition, which is affected by a variety of acquired variables (7). Hyperandrogenemia is generally regarded as the core feature of PCOS, causing reproductive disorders, insulin resistance (IR), and metabolic abnormalities, such as imbalances in glucose and lipid metabolism (8). In particular, IR and compensatory hyperinsulinemia might cause abnormal sex hormone levels, chronic inflammation, and metabolic disorders, thereby contributing to follicular dysplasia (9). These pathological factors create a vicious cycle, which increases the obstacles to PCOS treatment.
With a deeper understanding of gut biology, the potential role of the gut in PCOS has become the center of attention. Gut microbiota, also known as the "second genome" of the host, can affect the metabolism and immune response of the host by interacting with the external environment (8). Alpha-diversity (α-diversity) is regarded as an indicator of ecosystem health, representing the number of species present in a given community, whereas beta-diversity (β-diversity) denotes the similarity of a community or individual sample to another community or individual sample (10). As compared to the normal group, the dysbiotic gut microbiota of women with PCOS showed lower α- and β-diversities, a decreased relative abundance of Bifidobacterium, and increased relative abundances of Bacteroides, Parabacteroides, and Clostridium (11-13). Furthermore, dehydroepiandrosterone (DHEA)-induced PCOS rats showed dysbiosis of the gut microbiota, and transferring this microbiota to healthy rats could induce PCOS-like metabolic and endocrine dysfunctions, indicating that the gut might be a novel therapeutic target for the treatment of PCOS (14).
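To make the α- and β-diversity measures mentioned above concrete, the following minimal sketch shows how they are commonly quantified from taxon abundance counts (Shannon index for α-diversity, Bray-Curtis dissimilarity for β-diversity). It is a generic illustration with hypothetical abundance vectors, not the analysis pipeline of the cited studies, which typically apply dedicated tools (e.g., QIIME 2) to rarefied sequencing data.

```python
# Minimal sketch of how alpha- and beta-diversity are commonly quantified.
# Generic illustration with hypothetical counts, not the cited studies' pipeline.
import math

def shannon_alpha(counts):
    """Shannon index H = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def bray_curtis_beta(counts_a, counts_b):
    """Bray-Curtis dissimilarity between two samples with aligned taxa."""
    num = sum(abs(a - b) for a, b in zip(counts_a, counts_b))
    den = sum(a + b for a, b in zip(counts_a, counts_b))
    return num / den if den else 0.0

# Hypothetical abundance vectors (same taxon order in both samples).
control = [120, 80, 40, 10, 5]
pcos = [200, 20, 10, 2, 1]

print(f"alpha (control) = {shannon_alpha(control):.3f}")
print(f"alpha (PCOS)    = {shannon_alpha(pcos):.3f}")
print(f"beta (Bray-Curtis, control vs PCOS) = {bray_curtis_beta(control, pcos):.3f}")
```

In this toy comparison, the more uneven "PCOS" vector yields a lower Shannon α-diversity, and the Bray-Curtis value summarizes how dissimilar the two community profiles are, mirroring the qualitative statements in the cited studies.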
Recently, studies on gut metabolites have emphasized the importance of the gut in maintaining general homeostasis. The gut metabolites, such as bile acids (BAs), amino acids, and short-chain fatty acids (SCFAs), are greatly involved in modulating the integrity of the gut barrier, thereby maintaining the internal environment and homeostasis. A disturbance in gut metabolites might increase the gut permeability, leading to the leakage of lipopolysaccharides (LPSs) and endotoxemia, which might disturb the endocrine system, immune system, insulin signaling, glucose metabolism, lipid metabolism (8), and gut microbiota (15). Furthermore, the SCFAs, BAs, and branched-chain amino acids (BCAAs) can directly regulate the secretion and sensitivity of pancreatic insulin in the target organs through endocrine signaling. While circulating through the portal venous system, these metabolites reach the liver to regulate lipid metabolism and oxidation. Moreover, these metabolites also take part in neuronal homeostasis by modulating the integrity of the blood-brain barrier (16). The gut-brain peptides, which can be affected by gut metabolites, might communicate with the brain, thereby influencing appetite and energy maintenance as well as increasing the secretion of luteinizing hormone (LH) (17).
Interestingly, the activity and contents of gut metabolites can be regulated by the gut microbiota (18,19). The correlations between gut metabolites and gut microbiota have been demonstrated in numerous metabolic diseases, such as obesity, type 2 diabetes, NAFLD, and cardiovascular diseases (20,21). Qiao and colleagues also demonstrated that an increase in the relative abundance of Bacteroides in patients with PCOS was related to the disturbance in gut metabolites, which might have a potential pathological role in PCOS (13,22).
These studies indicate that, in PCOS, the gut microbiota and related metabolites might be affected. Both are closely linked to the insulin signaling pathway, steroid hormone levels, glucose metabolism, lipid metabolism, and immunological homeostasis, all of which are greatly involved in the pathogenesis of PCOS (16,17). However, the mechanism of interaction between gut metabolites and PCOS is still unclear. This review aims to summarize the existing studies and demonstrate the interactions between PCOS and gut microbiota-related metabolites, which might help in developing novel treatments for PCOS.
Biosynthesis and Metabolism of BAs
BAs are key metabolites, which include primary BAs and secondary BAs. In humans, cholic acid (CA) and chenodeoxycholic acid (CDCA) are the most common primary BAs. Under normal physiological conditions, these primary BAs are synthesized from cholesterol in the pericentral hepatocytes through "classical" (neutral) and "alternative" (acidic) pathways (23,24). The classical pathway favors the biosynthesis of CA and CDCA (25), whereas the alternative pathway only favors the biosynthesis of CDCA. After the primary BAs are modified and transported by various enzymes and transporters, they are conjugated with taurine or glycine and secreted into the bile, which is then released into the small intestine and aids in lipid digestion (26). In the ileocecum, the gut microbiota and bile salt hydrolase (BSH) convert the conjugated BAs to free BAs. Following modifications, such as the removal, oxidation, or epimerization of the nuclear hydroxyl by the host or gut microbiota, the free primary BAs are converted into secondary BAs (25,27). The secondary BAs play a critical role in regulating glucose metabolism, insulin signaling, lipid metabolism, and inflammation. In the distal ileum, most of the secreted molecules (95%) are reabsorbed through the apical sodium-dependent BA transporter (ASBT) and are ultimately transported into the liver through the portal vein system. This phenomenon is known as enterohepatic circulation. Meanwhile, the remaining secreted molecules are excreted in feces (15,25). In the ileum, BAs facilitate the secretion of fibroblast growth factor 19 (FGF19) in humans or FGF15 in mice by activating the farnesoid X receptor (FXR). FGF19 further represses BA synthesis as negative feedback when circulated to the liver (26).
The above process relies heavily on the gut microbiota. The gut microbiota can affect the production of BAs by regulating liver enzymes, such as 7α-hydroxylase and sterol-27-hydroxylase, especially for CDCA in humans (28). BSH has been widely detected in rodent and human gut microbiota, such as Clostridium spp. (29). The diversity of secondary BAs is greatly affected by the species differences in gut microbiota (23). Mouse models lacking gut microbiota demonstrate that almost all BAs are primary BAs, indicating the importance of gut microbiota in the production of free BAs (15,30). The gut microbiota can affect the ileal mucosa and ASBT to regulate the reabsorption of BAs in rodents (28). Moreover, the gut microbiota partially inhibits BA biosynthesis through an FXR-dependent mechanism (31-33). The BAs shape the structure of the gut microbiota and exert antibacterial effects by selectively promoting the growth of BA-synthesizing bacteria, thereby showing a bidirectional communication between the gut microbiota and BAs (15,33).
Role of BAs in Metabolism, Endocrine, and Inflammation
One of the most important functions of BAs is their participation in lipid emulsification and solubilization. They disperse fat into small droplets, which can be digested by lipase and absorbed by the gut mucosa, thereby assisting the absorption of dietary fat, which is critically important for lipid metabolism (34). Certain essential vitamins, such as vitamins A and D, are non-polar lipids, which can only be absorbed when bound to micelles in the presence of BAs (25,35). When the concentration of cholate is low, cholesterol absorption is inhibited (25).
The BAs modulate metabolic homeostasis by stimulating receptors such as the G protein-coupled receptor TGR5 and FXR. TGR5 is widely distributed in a variety of animal tissues, such as fat, the central nervous system, liver, and gut, and participates in regulating insulin signaling, glucose metabolism, and energy expenditure in brown adipose tissue and muscle (36). The intestinal hormones glucagon-like peptide 1 (GLP-1) and peptide YY (PYY) are stimulated by TGR5 (37). In the murine brain, BAs can activate TGR5, triggering central anorexigenic actions that control appetite (38). FXR is another BA receptor, which is found in white adipose tissue, liver, gut, immune cells, and other tissues (39). The BAs, in combination with FXR, can induce FGF15 and/or FGF19, which might regulate glucose tolerance and normal glycemia by reducing hepatic gluconeogenesis. A reduction in activated FXR might reduce the secretion of FGF15 and/or FGF19. This might result in increased hepatic gluconeogenesis, hepatic lipid deposition, and disrupted glucose homeostasis in adipocytes, as well as decreased insulin production in pancreatic cells (26,32). Moreover, in cardiac and visceral fat cells, tauroursodeoxycholic acid (TUDCA) can reduce endoplasmic reticulum stress, thereby preventing obesity and inflammation (40). Therefore, a reduction in TUDCA might result in diminished suppression of abdominal and visceral fat inflammation, aggravating IR and metabolic disorders (8,40).
However, current studies on the link between BAs and metabolic syndrome cover only a few individual BAs within the BA pool. The levels of fasting circulating total BAs were higher among populations with mild IR and obesity (25). The changes in the levels of fasting circulating BAs in disease states, as well as the role of each BA in metabolic diseases, still need to be investigated.
Biosynthesis and Metabolism of SCFAs
Recently, the crosstalk between SCFAs and the gut has attracted increasing attention. The SCFAs originate from microbiota-accessible carbohydrates (MACs) in the colon (41), being produced by the fermentation of dietary fiber and resistant starch. They mainly consist of acetic, propionic, butyric, valeric, and caproic acids, which are biosynthesized via various pathways, such as the Wood-Ljungdahl pathway, aided by different classes of gut microbes (42). The exact contents and relative proportion of each type of SCFA might differ based on the diet, composition of the microbiota, and gut transit time (43,44). When the BCAAs, including valine, isoleucine, and leucine, escape digestion in the upper gut, they might be fermented into branched-chain fatty acids (43,44). Furthermore, the SCFAs are taken up by colonocytes via passive diffusion or active transport (42). A part of the unmetabolized SCFAs is transported into the liver through the portal system and serves as a substrate for energy metabolism and anabolic processes, thereby playing a prominent role in the inhibition of glycolysis, the stimulation of lipogenesis and gluconeogenesis, and the regulation of mitochondrial energy production (45).
Role of SCFAs in Metabolism, Endocrine, and Inflammation
SCFAs are important for balancing metabolism and energy. They are taken up by colon cells after binding to G protein-coupled receptors (GPCRs), which are also known as free fatty acid receptors (FFARs) and are present on the enteroendocrine cells of the gastrointestinal mucosa (46), thereby stimulating the secretion of intestinal hormones, such as GLP-1, PYY, gamma-aminobutyric acid (GABA), and serotonin (5-HT) (46). The intestinal hormones aid in reducing the production of hepatic glucose, enhancing the absorption of peripheral glucose, and suppressing the appetite (47). Moreover, the SCFAs can also stimulate leptin secretion in adipocytes and insulin secretion in the pancreatic cells (48). The circulating SCFAs can activate the burning of brown adipose tissue, thereby increasing energy consumption and preventing weight gain (43,49). In addition, the SCFAs also improve insulin sensitivity in the muscle and liver tissues (47).
In contrast to hepatic gluconeogenesis, intestinal gluconeogenesis (IGN) is beneficial for controlling the glucose level by reducing food intake and hepatic glucose output (47,49).
In a study based on mouse models, butyrate could directly promote the IGN expression in enterocytes in a Cyclic Adenosine Monophosphate (cAMP)-dependent manner, whereas propionate could increase the IGN expression by binding to FFAR3 in the portal nerve, thereby initiating the portal-hypothalamic crosstalk, improving the insulin sensitivity and glucose tolerance, and lowering the fat mass (47).
Recently, studies have demonstrated that SCFAs can affect the host's immune system. SCFAs could affect the hematopoietic progenitors in the murine bone marrow, implying that they are important for the development of the innate and adaptive immune systems (50). Moreover, they exerted a systemic anti-inflammatory effect in mice by affecting the peripheral DCs and T cells (51). In particular, the SCFAs increased the number of T-regulatory (Treg) cells, induced the differentiation of Treg cells, and regulated the production of interleukins, thereby minimizing oxidative stress and protecting pancreatic cells (52-54). In a murine model of gout, SCFAs could bind to GPCR43 in the central nervous system and act on microglia to regulate host immunity (55). Furthermore, they strengthened the integrity of the blood-brain barrier and regulated the levels of neuronal factors and neurogenesis to relieve neural and central inflammation (52).
Moreover, SCFAs can also increase the expression of intestinal epithelial tight junction protein and decrease the death of intestinal epithelial cells (IEC), thereby promoting gut mucosal immunity and barrier integrity (56,57). Once the intestinal mucosal barrier is disrupted, LPS enters the blood circulation, resulting in a persistent inflammation, which is correlated with IR (51).
The SCFAs, when reaching the brain, can alter the integrity of the blood-brain barrier by increasing the expression of tight junction proteins in the blood-brain barrier and regulating the state of neural and central inflammation (41,52). SCFAs also affect the function of glial cells and neurogenesis in order to maintain neuronal homeostasis (41).
Anabolism and Catabolism of Amino Acids
Amino acids, consisting of essential and non-essential amino acids, are life-supporting molecules, which provide the raw materials for protein synthesis. Dietary amino acids are primarily absorbed in the small intestine via concentrative amino acid transporters. A small number of amino acids are also absorbed by the large intestine, whereas the remaining ones are excreted in the feces (58,59). The amino acids are then released primarily through passive efflux across the basolateral membrane, which is mediated by a group of transporters (59,60). When released into the bloodstream, they are transported into the cells via the corresponding secondary active transporters, which are also called functional transporters (61). Simultaneously, an increase in the cytoplasmic amino acid pool activates amino acid metabolism, forcing the excess amino acids to be catabolized via oxidation, hydroxylation, and other processes (62). The majority of amino acids are metabolized and stored in the liver (58). However, they may also be stored in extrahepatic tissues, such as muscle, brown fat, kidney, and heart tissues (62); this storage in extrahepatic tissues is regulated by insulin-mediated signaling in the hypothalamus (63).
Role of Amino Acids in Metabolism, Endocrine, and Inflammation
In addition to synthesizing proteins, amino acids are also involved in glycolysis and mitochondrial metabolism through the tricarboxylic acid (TCA) cycle and oxidative phosphorylation, and they modulate cellular activities, such as lipid and glucose metabolism (64). By acting on insulin receptor substrates (IRSs), amino acids can affect insulin signaling (65). Furthermore, recent studies have demonstrated that amino acids are potential precursors of brain neurotransmitters, thereby impacting habits (66). Amino acids also participate in ATP generation, nucleotide synthesis, and redox balance, which support the growth, proliferation, and effector function of immune cells (64,67).
Effect of PCOS on Gut Metabolites
To date, significant differences have been reported between the gut microbiota and metabolites in patients with PCOS as compared to the control group. PCOS, as a multi-system endocrine disease, has a negative impact on the function and composition of gut microbiota and metabolites.
Impact of Sex Hormones on Gut Microbiota and Related Metabolites
According to studies, sex hormones have a substantial impact on the composition of the gut microbiota. They affect the composition of gut microbiota in a sex-specific manner after puberty (10,68). As a result, the gut microbiota in females has higher α-diversity but significantly lower abundances of Bacteroides species, including Prevotella and Bacteroides thetaiotaomicron, as compared to that of males. Studies in rodents and other species have shown similar results, although the outcomes vary across studies (68,69). Meanwhile, the dysbiotic gut microbiota of women with PCOS was characterized by lower α-diversity, a decreased relative abundance of Bifidobacterium, an increased relative abundance of Bacteroides, and changes in β-diversity as compared to the control group (11-13). This suggests that the gut microbiota of women with PCOS is altered toward a pattern more similar to that of men. Although different conclusions have been presented, the accumulating data suggest that hyperandrogenism might affect the gut microbiota of patients with PCOS by affecting gut function and regulating the activity of β-glucuronidase and its substrate levels, such as bilirubin, neurotransmitters, and hormones, which are present in the liver (10,11,70). Because the gut metabolites are closely related to the gut microbiota, changes in gut microbiota in response to hormones might also change the gut metabolites. For example, Sherman et al. revealed that prenatal androgens were linked to changes in the abundance of gut microbiota involved in the production of SCFAs in the rat (71). In a nutshell, the disruption of sex hormones in PCOS affects the composition of gut metabolites and microbiota.
Impact of Obesity on Gut Microbiota and Related Metabolites
The dysbiosis of gut microbiota is correlated with the phenotype of PCOS. There are differences in the gut microbiota of non-obese and obese individuals with PCOS (17). The abundance of Clostridium cluster XVII increased in the non-obese patients with PCOS, whereas that of Clostridium sensu stricto and Roseburia decreased (72). Liu et al. reported that the relative abundances of gut microbiota, including Bacteroides, Escherichia/Shigella, and Streptococcus, increased, whereas those of Akkermansia and Ruminococcaceae decreased in patients with PCOS, and these changes were correlated with body mass index (BMI) (17). It has also been reported that obese women with PCOS tend to have lower α-diversity and biodiversity of the gut microbiota as compared to women with a normal BMI (69). Furthermore, according to Li and colleagues, obesity was associated with altered BA metabolism caused by the dysbiosis of gut microbiota (21). Because adipose tissue is a source of sex steroids, it might affect the composition of gut microbiota and gut metabolites by affecting the production of sex hormones (73). In addition, obesity might contribute to the development of a chronic inflammatory state, which might alter gut permeability and microbiota, thereby affecting the function and composition of gut metabolites (21,74).
Impact of IR on Gut Microbiota and Related Metabolites
Among women with IR, the relative abundance of Bacteroidaceae increased, whereas that of Prevotellaceae decreased, as compared to PCOS women without IR (71). The IR-induced gut dysbiosis could result in the accumulation of BCAAs (13,36,75,76). On the other hand, the TCA cycle was significantly inhibited by IR, resulting in decreased BCAA clearance (77). Furthermore, IR and compensatory hyperinsulinism contribute to hyperandrogenism, thereby aggravating gut dysbiosis and disturbing gut metabolites (8,10).
Impact of Habits on Gut Microbiota and Related Metabolites
Numerous patients with PCOS have unhealthy habits, such as a preference for sweets and fatty foods, a lack of dietary fiber, and little exercise, all of which affect gut health (18,78). A high-fat diet (HFD) was linked to an increase in pro-inflammatory microbiota, such as Clostridiales, Bacteroides, and Enterobacteriales, and a decrease in anti-inflammatory microbiota, such as Lactobacillus, in the rat (75). High levels of glucose, fructose, and sucrose could increase the relative abundance of Bifidobacteria while suppressing that of Bacteroides (75). A lack of dietary fiber might result in decreased production of SCFAs. All of these factors can affect the biosynthesis of gut metabolites.
Exercise can enhance gut health by increasing the diversity of gut microbiota and balancing the beneficial and pathogenic bacterial communities (79). Specifically, exercise increases the proportion of butyrate-producing bacteria, such as Roseburia hominis, thereby increasing the concentration of butyrate (80,81). Moreover, exercise can reduce the contact time between feces and the gastrointestinal mucus layer by enhancing gastrointestinal motility, thereby benefiting gut health (79,80). It also boosts the production of key antioxidant enzymes and anti-inflammatory cytokines in the intestinal lymphocytes, thereby reducing intestinal inflammation (76,77). A reduction in exercise might therefore disrupt the gut metabolites (43,44,79).
In a nutshell, the dysbiosis of gut microbiota is caused by unhealthy habits, hyperandrogenism, obesity, hyperinsulinism, and disturbances in the glucose and lipid metabolism in PCOS, leading to increased gut permeability, exaggerated dysbiosis, and altered gut metabolites (82, 83).
Effect of BAs on PCOS
The BA metabolism is a key metabolic pathway affected by the changes in gut microbiota in patients with PCOS. Zhang and colleagues demonstrated that an increase in the circulating conjugated primary BAs was positively correlated with hyperandrogenism in women with PCOS (84). In both the stool and serum, the levels of secondary BAs, such as glycodeoxycholic acid (GDCA) and TUDCA, were lower in the PCOS group as compared to those in the control group and were correlated with the disturbance of gut microbiota (13).
Findings from PCOS rats revealed that administration of ursodeoxycholic acid (UDCA) could improve ovarian morphology and decrease total testosterone and insulin levels. However, the lipid parameters, E1, E2, glucose, and the homeostatic model assessment of insulin resistance (HOMA-IR) were comparable between the groups (85).
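For context, the HOMA-IR index mentioned here is conventionally computed from fasting measurements; the formula below is the standard definition rather than a detail reported in the cited study:

\[
\text{HOMA-IR} \;=\; \frac{\text{fasting insulin}\ (\mu\text{U/mL}) \times \text{fasting glucose}\ (\text{mmol/L})}{22.5}
\]

Comparable HOMA-IR values between UDCA-treated and untreated animals therefore imply similar fasting insulin-glucose profiles despite the hormonal improvements.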
Moreover, BAs can also regulate the performance of gut immune cells. Both the protein and mRNA levels of interleukin-22 (IL-22) in cultured group 3 innate lymphoid cells (ILCs) were greatly stimulated in the presence of TUDCA or GDCA, which was also confirmed in mouse models, showing that TUDCA or GDCA therapy could enhance the mRNA levels of gut IL-22 and alleviate the disease symptoms (13). These beneficial effects of BAs on IR and ovarian function in PCOS mice were reversed by knocking out the IL-22 receptor gene (13). The IL-22 levels in the serum and follicle fluid of patients with PCOS were decreased, whereas IL-22 administration could improve IR, ovarian dysfunction, dysbiosis of gut microbiota, and anti-Müllerian hormone levels in the DHEA-induced PCOS mice (86). Therefore, it could be proposed that the regulatory effects of BAs on PCOS are at least partially mediated by IL-22 (13).
IL-22 has diverse benefits, such as improving insulin sensitivity and regulating lipid metabolism in the liver and adipose tissues. IL-22 can promote the proliferation of IEC and the production of antimicrobial peptides and mucins in IEC (13,51). Therefore, a reduction in the IL-22 level might further disrupt the integrity of the gut barrier and microbiome homeostasis, thereby aggravating the endotoxemia, chronic inflammation, and particularly the IR (13,87,88).
Effect of SCFAs on PCOS
SCFAs are essential elements in maintaining the homeostasis of gut microbiota and regulating the intestinal mucosal barrier as an important energy source for the gut microbiota and IEC (51,89). For instance, butyrate regulates the utilization of intestinal oxygen, thereby regulating the proportion of aerobic and anaerobic gut microbiota. Therefore, a reduction in the SCFA levels might cause gut dysbiosis in PCOS (90) and disrupt the intestinal mucosal barrier to exacerbate the chronic inflammatory state (51).
In addition to affecting gut health and the microbiota, the SCFAs also exert various physiological effects through IL-22. SCFAs could promote IL-22 production in CD4+ T cells and ILCs by acting as histone deacetylase inhibitors and by binding to GPCRs (89). IL-22 could maintain metabolic homeostasis, which is disturbed when SCFAs are reduced (89), thereby acting synergistically with the BA-IL-22 axis. In addition, studies have reported that the secretion of gastrointestinal hormones is disturbed in patients with PCOS, for example, a decrease in the GLP-1 level (91). SCFAs could stimulate the secretion of gastrointestinal hormones, such as GLP-1, PYY, GABA, and 5-HT, thereby reversing their decreased levels in patients with PCOS, maintaining insulin homeostasis, and suppressing the appetite (46).
In addition to their effects on the gut, SCFAs also exert peripheral effects. SCFAs could promote IGN gene expression in enterocytes through the portal-hypothalamic circuit, thereby helping to control food intake and hepatic glucose output in mice (49). SCFAs can also boost insulin secretion from pancreatic cells via GPCRs, improve insulin sensitivity, increase energy expenditure in brown adipose tissue, and upregulate the antilipolytic activity of glucose transporter type 4; a decrease in SCFA levels therefore removes these benefits and may further aggravate PCOS (41,45,47).
Moreover, Lin et al. discovered that the absorption of SCFAs decreased in the PCOS rats. They demonstrated that the fecal SCFA concentrations increased and were positively correlated with the tumor necrosis factor and IL-6 levels (92). Enhancing the SCFA absorption could improve the integrity of the intestinal mucosal barrier and inhibit intestinal and parenteral inflammation (92).
Therefore, SCFAs are critically important for maintaining glucose and insulin homeostasis and ameliorating chronic inflammation throughout the body. The supplementation of SCFAs or enhancing their beneficial effects, such as activating the relevant receptors, might be helpful in the PCOS treatment.
Effect of Amino Acids on PCOS
Recently, the correlations between BCAAs and metabolic disturbances have attracted increasing attention. The human body cannot synthesize BCAAs, which are essential amino acids; therefore, they must be obtained from the digestion of food (65). By phosphorylating IRS-1 and IRS-2 at serine residues or damaging mitochondrial function in the pancreatic β-cells, excessive BCAAs could aggravate IR in rodents with PCOS (65,88,93). Furthermore, BCAAs might induce the expression of pro-inflammatory genes, thereby deteriorating chronic inflammation and promoting the development of IR (65).
As compared to the SCFAs and BAs, the current studies on intestinal amino acids are limited. Moreover, the conclusions are not completely consistent due to inter- and intra-species differences and experimental conditions. The correlations between amino acids and PCOS are still unclear. However, it could be concluded that downregulating the excessive BCAAs or blocking their associated binding sites might further ameliorate IR in PCOS. Whether and how BCAAs can be applied for the prediction and treatment of PCOS is worth exploring.
PROSPECTS AND IMPLICATIONS
Currently, the primary goal of PCOS treatment is to alleviate its symptoms, such as hyperandrogenism, IR, oligo- or anovulation, and infertility. For example, letrozole aids in developing dominant follicles, whereas metformin is typically used for the treatment of metabolic symptoms and IR, possibly restoring ovulation (94). These treatment strategies can only provide temporary relief from the symptoms or achieve a short-term goal. Therefore, fundamental and permanent treatment of the pathological processes in PCOS is worth exploring.
As stated above, PCOS-related hyperandrogenism, IR, obesity, metabolic disturbance, an unhealthy diet, and other factors can disrupt the gut microbiota and metabolites, which, in turn, aggravates the pathological process of PCOS, forming a vicious cycle (95) (Figure 1).
Interestingly, the changes in gut metabolites could predict PCOS (23,96) and might even be associated with the different clinical phenotypes (17). The gut metabolites could be more precise predictors than the gut microbiota due to the susceptibility of gut microbiota to a variety of factors, such as environmental contamination and abrupt changes in diet.
Therapies targeting gut homeostasis to break the vicious circle between hyperandrogenism and metabolic abnormalities could be the tipping point for the treatment of PCOS. For example, adopting a healthier lifestyle; supplementing specific BAs (TUDCA and GDCA), SCFAs, and IL-22; regulating amino acid metabolism; and blocking BCAA targets could all be beneficial (89,95). It has been reported that IL-22 levels in patients with PCOS are significantly lower than those in normal controls (22). Therefore, IL-22 supplementation could be an effective treatment option for PCOS. Studies in PCOS mice confirmed that intraperitoneal injection of IL-22 could improve endocrine and metabolic disorders (13). However, there were certain side effects, such as liposarcoma (97). Moreover, there is limited clinical evidence supporting the efficacy and safety of IL-22 in humans. Thus, safer therapies need to be developed.
Probiotic (or synbiotic) supplementation and fecal microbiota transplantation (FMT) might have a significant impact in this regard. Studies indicated that administration of probiotics (or synbiotics) for 8-12 weeks could lower serum levels of glucose, insulin, triglycerides (TGs), very-low-density lipoprotein, and cholesterol while improving IR, lipid metabolic disturbance, and the inflammatory state; it could also effectively lower the body weight and BMI of patients with PCOS (98-100). Nevertheless, one meta-analysis showed that the effects of probiotic (or synbiotic) supplementation on LDL, weight, BMI, and IR were not significant (101), although probiotics did regulate glucose, insulin, and lipid metabolism, lowering serum glucose, insulin, and TG levels while increasing HDL (101). Another study indicated that supplementation with probiotics (or synbiotics) improved androgen metabolism without any other therapeutic effect (102). The administration of different probiotic species and doses to patients with varied PCOS phenotypes might explain these discrepant outcomes. In rodents, FMT could treat PCOS by restoring the composition of the gut microbiota and improving sex hormone balance and ovarian function (103). However, applications of FMT in the treatment of PCOS are currently limited, and its use in humans remains to be determined.
One study showed that supplementation of PCOS mice with inulin-enriched synbiotic yogurt decreased body weight gain, improved estrous cycles and ovary morphology, and reduced serum LH while increasing serum follicle-stimulating hormone and IL-22. At the genus level, the synbiotic yogurt increased the relative abundances of Lactobacillus, Bifidobacterium, and Akkermansia (104).
Traditional Chinese medicines (TCMs) are a vast resource that remains to be explored. TCM components, including flavonoids, polysaccharides, saponins, and other compounds, might hold great promise for stimulating the growth of particular gut microbial species, increasing the production of beneficial SCFAs and BAs, and suppressing the growth of pathogenic bacteria and BCAA producers (94,105,106). Berberine, a powerful natural product used for the treatment of metabolic syndrome, could lower the abundance of BCAA-producing bacteria and the aberrant blood BCAA levels in HFD-fed rodents, thereby improving glucose metabolism, lipid metabolism, and IR in PCOS rodents (107,108). Studies in rodents demonstrated that baicalin could boost SCFA generation and alter BA metabolism by modifying immunity and the gut microbiota, as well as affecting the liver-gut axis by regulating the BA-FXR/TGR5 signaling pathway (109). Ginseng polysaccharides and ginsenosides could also boost the growth of Lactobacillus spp. and Bacteroides spp. in the rat; these two were the most significantly enriched probiotics, restoring the balance of the gut microbiota and thereby regulating intestinal metabolism (110). Thus, TCMs and natural products might affect gut immunity, the gut barrier, and the gut microbiota to modulate local metabolism. Although studies have revealed that some TCM products have low bioavailability, it is remarkable that gut microbes can transform them into components that are absorbed more easily, thereby improving their efficiency and indicating a positive interaction between TCMs and the gut microbiota (94,111-113).
FIGURE 1 | Crosstalk between PCOS and gut metabolites. PCOS, polycystic ovary syndrome; BAs, bile acids; SCFAs, short-chain fatty acids; BCAAs, branched-chain amino acids; IGN, intestinal gluconeogenesis; LPS, lipopolysaccharide; GLP-1, glucagon-like peptide 1; PYY, peptide YY; 5-HT, 5-hydroxytryptamine; LH, luteinizing hormone; SHBG, sex hormone-binding globulin. PCOS disturbs intestinal microbial homeostasis and metabolites, which may be linked to the insulin signaling pathway, steroid hormone levels, glucose metabolism, lipid metabolism, and immunological homeostasis, all of which are involved in PCOS pathogenesis, thus forming a vicious cycle.
Adopting a healthier lifestyle might also improve PCOS, especially through exercise and diet, which act as modulators of the gut microbiota. As mentioned before, bad habits such as a fondness for sweets, a preference for fatty foods, a lack of dietary fiber, and little exercise might disrupt gut health. Compared with the Western diet (rich in animal protein and fat and low in fiber), the gluten-free diet, vegetarian diets (high in fermentable plant-based foods), and others, the Mediterranean diet is the most recommended for patients with PCOS (114). Numerous human and rodent studies have demonstrated that the Western diet significantly decreases the abundance of total and beneficial bacterial species, including Bifidobacterium and Eubacterium (115). Beneficial bacterial populations, such as Bifidobacterium and Lactobacillus, decreased, whereas potentially harmful bacteria increased, in humans who consumed a gluten-free diet (116). The results of studies on vegetarian diets are contradictory. In general, the Mediterranean diet is a healthy and balanced diet that can improve obesity, the lipid profile, and inflammation (114,117). Specifically, assorted fruits, vegetables, nuts, legumes, and cereals are recommended, whereas the intake of red meat, processed meat, and sweets should be limited (114). Exercise can enhance gut health by increasing the diversity of the gut microbiota and balancing beneficial and pathogenic bacterial communities, and a minimum of 150 minutes of moderate-intensity exercise per week is recommended for patients with PCOS (1).
Studies on gut health and PCOS are still at an early stage, which limits the scope of this review. On the one hand, this review mainly focused on the interactions of PCOS with SCFAs, BAs, and amino acids; there might be additional metabolic loops closely associated with PCOS, such as carnitine metabolism (118), and amino acids other than BCAAs also need to be thoroughly investigated in the future. On the other hand, the synthesis and transit of gastrointestinal metabolites may differ between species. Therefore, the findings from animal studies remain to be validated in humans.
AUTHOR CONTRIBUTIONS
MZ and YS contributed to the conception of this review. RH wrote the manuscript. RH, MZ, and YS revised the manuscript. RH and YH designed and illustrated the figures. YH, FZ, FL, ZL, and YG performed the literature search and interpretation. MZ, RH, YH, FZ, FL, ZL, YG, HD, WM, KS, and YS reviewed the manuscript. All authors approved the submission.
"Biology",
"Medicine"
] |
A discussion on vacuum polarization correction to the cross-section of e+e−→γ*/ψ→μ+μ−
Vacuum polarization is a part of the initial-state radiative correction to the cross-section of e+e− annihilation processes. In the energy region in the vicinity of the narrow resonances J/ψ and ψ(3686), the vacuum polarization contribution from the resonant component has a significant effect on the line shape of the lepton-pair production cross-section. This paper discusses some basic concepts and describes an analytical calculation of the cross-section of e+e−→γ*/ψ→μ+μ− considering the single and double vacuum polarization effects of the virtual photon propagator. Moreover, it presents numerical comparisons with the traditional treatments.
Introduction
In quantum field theory, tree-level Feynman diagrams represent the basic process of an elementary-particle reaction from the initial state to the final state, and the corresponding lowest-order cross-section, of order α^2, is called the Born cross-section. For accurate calculations, the contributions of higher-order Feynman diagrams need to be considered.
For perturbative calculations up to order α^3, the radiative correction terms are the interferences between the tree-level and higher-order (one-loop) Feynman diagrams. In the references mentioned above, all the radiative correction terms were treated as small quantities owing to the extra factor of α compared with the tree-level terms. Such approximations are reasonable for the QED correction and the non-resonant quantum chromodynamics (QCD) hadronic correction. However, in the energy regions in the vicinity of narrow resonances, such as the charmonium states J/ψ and ψ(3686), the contribution of the resonant component of the vacuum polarization (VP) correction is neither a small quantity nor a smooth function of energy. This implies that the energy dependence of the VP correction factor has a significant influence on the line shape of the total cross-section. Therefore, the VP correction in the vicinity of narrow resonances has to be treated appropriately.
The radiative correction of the process e+e−→µ+µ− includes the initial-state and final-state corrections. The final-state radiative (FSR) correction is much smaller than the initial-state radiative (ISR) correction owing to the mass relation m_e ≪ m_µ [6]. The FSR correction can be neglected if very high accuracy is not required. In addition, the contributions of the two-photon-exchange diagrams and of the e± and µ± asymmetry are less important. In this work, only the ISR correction of the process e+e− → µ+µ− is considered to keep the discussion succinct, and the discussion concentrates on the VP correction. The calculations for the other correction terms follow the expressions given in the related references [7,8].
The calculations of the resonant cross-section and the VP correction need the bare value of the electron width of the resonance, but the value cited by the Particle Data Group (PDG) is the experimental electron width, which absorbs the VP effect [9,10]. Therefore, another motivation of this work is to provide a scheme for extracting the bare electron widths of the resonances J/ψ and ψ(3686) by fitting the measured cross-section of e+e− → µ+µ− and then obtaining the value of the Born-level Breit-Wigner cross-section.
The basic properties of a resonance with J^{PC} = 1^{--} are characterized by its three bare parameters: nominal mass M, electron width Γ_e, and total width Γ. The values of the resonant parameters can be predicted by the potential model [11], but the theoretical uncertainties are difficult to estimate. A reliable method for obtaining accurate values of the resonant parameters is to fit the measured leptonic cross-section [12,13] or hadronic cross-section [14] in the vicinity of these resonances. Extracting the bare values from experimental data can provide useful information for discriminating between theories and models.
The bare values of the resonant parameters are the input quantities for the calculation of the ISR factor 1+δ(s) in the measurement of the R value, which is defined as the lowest-order hadronic cross-section normalized to the theoretical µ+µ− production cross-section in e+e− annihilation [15,16]. In practice, the total hadronic cross-section is measured from the experimental data as

σ_ex^tot(s) = N_had / (ε L),   (1)

where N_had is the number of hadronic events, L is the integrated luminosity of the data sample, ε is the detection efficiency for e+e− → hadrons determined by the Monte Carlo method, and s is the square of the center-of-mass energy of the initial e+e− state. However, the quantity of interest in physics is the Born cross-section σ_ex^0(s), which is related to σ_ex^tot(s) by the ISR factor 1+δ(s):

σ_ex^tot(s) = [1+δ(s)] σ_ex^0(s),   (2)

and the R value is measured as

R = σ_ex^0(s) / σ_µµ^0(s).   (3)

The ISR factor 1+δ(s) indicates the fraction contributed by all the higher-order Feynman diagrams relative to the Born cross-section, and it is a theoretical quantity by definition:

1+δ(s) = σ^tot(s) / σ^0(s),   (4)

where σ^0(s) and σ^tot(s) are the theoretical Born cross-section and total cross-section, respectively. The accurate calculation of 1+δ(s) is a key factor for obtaining the R value from the measured σ_ex^tot(s). The calculation of σ^tot(s) needs the values of σ^0(s′) from s′ = 4m_π^2 up to s as inputs. If the correlation between the continuum and resonant states can be neglected, the hadronic Born cross-section can be written as

σ^0(s) = σ_con^0(s) + σ_res^0(s),   (5)

where σ_con^0(s) = σ_µµ^0(s) R̃(s), and R̃(s) is the R value from which the resonant contribution has been subtracted. Generally, the Born-level resonant cross-section is expressed in the Breit-Wigner form

σ_res^0(s) = 12π Γ_e Γ_h / [(s − M^2)^2 + M^2 Γ^2],   (6)

where the resonant parameters (M, Γ_e, Γ) must be bare quantities. The value of the electron width cited by the PDG is, in fact, the experimental value Γ_e^ex, which absorbs the VP effect but uses the same notation, Γ_e, as the bare one. If users directly insert the dressed value Γ_e^ex as the bare Γ_e in Eq. (6), then the value of 1+δ(s) calculated by Eq. (4) is incorrect. In this regard, denoting by σ_res(s) the Breit-Wigner cross-section evaluated with the dressed width Γ_e^ex, one effectively computes

σ^tot(s) / [σ_con^0(s) + σ_res(s)] = 1+δ(s).   (8)
Obviously, the value obtained from the left-hand side of Eq. (8) has the VP deducted twice. Even if a user notices that the Γ_e^ex cited by the PDG is a dressed value, it is not obvious how to extract the bare value Γ_e from Γ_e^ex. If a user instead takes the value of Γ_e predicted by a theoretical model, it becomes difficult to control the uncertainty of Γ_e; some models, for example the potential model introduced in reference [11], do not provide the theoretical uncertainty of Γ_e. Therefore, extracting Γ_e from the data is necessary for the R value measurement.
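To make the bookkeeping of Eqs. (1)-(4) concrete, the following minimal Python sketch converts a measured total hadronic cross-section into an R value. The numerical inputs are hypothetical and serve only to exercise the relations; the only physics assumed beyond the text is the standard point-like QED result σ_µµ^0(s) = 4πα^2/(3s) for the normalizing µ-pair cross-section.

```python
import numpy as np

ALPHA = 1.0 / 137.035999
GEV2_TO_NB = 0.3894e6          # conversion factor: 1 GeV^-2 = 0.3894e6 nb

def sigma_mumu_born(s):
    """Lowest-order QED cross-section e+e- -> mu+mu- (nb), massless muons."""
    return 4.0 * np.pi * ALPHA**2 / (3.0 * s) * GEV2_TO_NB

def r_value(sigma_tot_ex, one_plus_delta, s):
    """R from the measured total hadronic cross-section, following Eqs. (1)-(3)."""
    sigma_born_ex = sigma_tot_ex / one_plus_delta   # Eq. (2) inverted
    return sigma_born_ex / sigma_mumu_born(s)       # Eq. (3)

# Hypothetical numbers purely for illustration:
s = 14.0                        # GeV^2, i.e. sqrt(s) ~ 3.74 GeV
print(sigma_mumu_born(s))       # ~6.2 nb
print(r_value(sigma_tot_ex=25.0, one_plus_delta=1.15, s=s))
```

The sketch makes the logical order explicit: the measured σ_ex^tot is first deconvolved with the ISR factor and only then normalized, so any error in 1+δ(s) propagates directly into R.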
The discussion in the following sections concentrates on the VP correction of σ^tot(s) for the process e+e− → µ+µ−. The outline of this paper is as follows. In section 2, the related Born cross-sections are presented. In section 3, the VP correction to the virtual photon propagator described in textbooks and references is reviewed. In section 4, the experimental lepton width with different conventions is reviewed. In section 5, the properties of the VP-modified Born cross-section are discussed and the line shapes are shown graphically. In section 6, the analytical expressions of the total cross-section of e+e−→µ+µ− with single and double VP corrections are deduced, and the numerical results are presented. Section 7 presents some discussions and comments.
Born cross-section
In the energy region containing the resonance ψ, the final state µ+µ− can be produced in e+e− annihilation via two channels. The mode via the virtual photon γ* is direct electromagnetic production, and the other mode is the electromagnetic decay of the intermediate on-shell resonance ψ. The tree-level Feynman diagram for this process is the coherent summation of the two diagrams in Fig. 1. The virtual photon propagator γ* is unobservable in the experiment; its role is to transfer the electromagnetic interaction between e+e− and µ+µ−. The intermediate resonance ψ is a real particle: a cc̄ bound state with well-defined mass, lifetime, spin, and parity, J^{PC} = 1^{--}. The resonances J/ψ and ψ(3686) are identified with the 1S and 2S levels of the charmonium family predicted by the potential model [11]. The unstable J/ψ and ψ(3686) decay into different final states via five modes [17]; here, only the electromagnetic decay ψ→µ+µ− is discussed.
2.1 Cross-section of e+e−→γ*→µ+µ−
The channel e+e− → γ* → µ+µ− is a pure QED process, which corresponds to the left diagram in Fig. 1. The expression for its Born cross-section can be found in any QED textbook [5]:

σ_γ*^0(s) = 4πα^2 / (3s).   (9)

2.2 Cross-section of e+e−→ψ→µ+µ−
The channel via the intermediate resonance ψ corresponds to the right diagram in Fig. 1, which concerns the production and decay of ψ. This section provides a description of this mode.
In general, the time-dependent wavefunction of an unstable particle is expressed as a plane wave with a damped amplitude,

Ψ(t) = θ(t) Ψ(0) e^{−iωt} e^{−t/(2τ)},   Ψ(0) = |Ψ(0)| e^{iδ},

where θ(t) is a step function of time, Ψ(0) is the wavefunction at the origin t = 0, ω is the circular frequency, τ is the lifetime, and δ is the intrinsic phase angle of Ψ(0).
Here, the relations of mass M = ω and total decay width Γ = 1/τ in natural units (ħ = c = 1) are used. For a free particle, its parameters are bare quantities. Performing the Fourier transformation of Ψ(t) with respect to t, the amplitude of an unstable particle is transformed into a nonrelativistic wavefunction of the energy W. The origin wavefunction Ψ(0) can be determined from the normalization condition and the production cross-section [5]. Considering a distinct production and decay process with initial state e+e− and final state f, the corresponding nonrelativistic amplitude takes the Breit-Wigner form [18], in which Γ_e and Γ_f are the bare electronic and final-state widths. For the final state µ+µ−, Γ_f = Γ_µ, and lepton universality implies Γ_e = Γ_µ in the limit m_l^2/s → 0. The relativistic amplitude can be obtained easily by adopting the physics picture of the Dirac sea: Dirac considered that an antiparticle corresponds to a hole with the same mass M but with the negative-energy state −W in the Dirac sea. The resulting relativistic amplitude, which includes the particle-antiparticle pair, is given in Eq. (14). For the narrow resonances J/ψ and ψ(3686), the value of Γ is much smaller than M and the energy dependence of the total width can be neglected, i.e., Γ is treated as a constant. The Born cross-section for the resonant mode, corresponding to the right diagram in Fig. 1, is then generally written in the Breit-Wigner form of Eq. (15), with the notations defined in Eq. (16).
The combination parameter F ensures that Eq. (15) reproduces the accurate Breit-Wigner cross-section. Starting from the Van Royen-Weisskopf formula, Γ_e can be expressed in terms of the charm-quark charge e_c = 2/3 (in units of the electron charge e), the number of colors N_c = 3, the strong coupling constant α_s evaluated at s = M^2, and R(0), the radial wavefunction R(r) at the origin r = 0 (Eq. (19)) [17,19,20]. Some phenomenological models can provide a rough estimate of the value of R(0), but its accurate value has to be extracted from the measurements of Γ_e and Γ_f.
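For orientation, the following minimal Python sketch evaluates a resonant line shape for the ψ → µ+µ− channel. It assumes the textbook relativistic Breit-Wigner form σ(s) = 12π Γ_e Γ_µ / [(s − M^2)^2 + M^2Γ^2], which may differ from the paper's Eq. (15) by the combination parameter F, and the J/ψ parameter values are illustrative placeholders rather than the bare values the paper proposes to extract.

```python
import numpy as np

# Illustrative (approximate) J/psi parameters; placeholders only.
M        = 3.0969      # GeV, nominal mass
GAMMA    = 92.9e-6     # GeV, total width
GAMMA_E  = 5.55e-6     # GeV, electron width
GAMMA_MU = GAMMA_E     # lepton universality in the limit m_l^2/s -> 0

GEV2_TO_NB = 0.3894e6  # 1 GeV^-2 = 0.3894e6 nb

def sigma_bw(s):
    """Standard relativistic Breit-Wigner cross-section (nb)."""
    return (12.0 * np.pi * GAMMA_E * GAMMA_MU
            / ((s - M**2)**2 + M**2 * GAMMA**2)) * GEV2_TO_NB

for rs in np.linspace(M - 0.002, M + 0.002, 5):   # sqrt(s) scan points in GeV
    print(f"sqrt(s) = {rs:.4f} GeV   sigma_BW = {sigma_bw(rs**2):10.1f} nb")
```

Because Γ is tiny compared with M, the peak value at s = M^2 is enormous compared with the off-peak points, which is why the beam energy spread and ISR discussed later reshape the observable line shape so strongly.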
Total Born cross-section
The total production amplitude of µ+µ− is the coherent summation of the two channels, and the total Born cross-section σ^0(s) follows from its modulus squared, as given in Eq. (21). In practical evaluations, the parameter values in the Breit-Wigner cross-section typically adopt the experimental values published by the PDG, which contain the radiative effect [10,18]. However, the values of interest in physics are the bare ones. The following sections deduce the total cross-section formula for e+e−→γ*/ψ→µ+µ− in which all the parameters are bare quantities. Based on this formula, the bare parameter values can be extracted by fitting the measured cross-section.
Vacuum polarization correction
From the viewpoint of quantum field theory, two charged particles interact by exchanging quanta of the electromagnetic field, which corresponds to the virtual photon propagator between the two charges. The VP effect modifies the photon propagator, which is equivalent to a change in the coupling strength between the two charges. In the one-particle-irreducible (1PI) chain approximation, an infinite series of 1PI diagrams is summed, and the photon propagator is modified by the VP correction in the following manner [5]:

−i g_µν / q^2  →  −i g_µν / { q^2 [1 − Π(q^2)] },   (22)

where g_µν is the metric tensor and Π(q^2) is the VP function. For the e+e− annihilation process, q^2 = s. Eq. (22) can be expressed graphically as the bare propagator γ* being modified into the full propagator. The original definition of Π(s) is an infinite integral over fermion loops (leptons and quarks) in four-momentum space. The integral for the QED lepton loops (e+e−, µ+µ−, τ+τ−) can be calculated perturbatively according to the Feynman rules [5,21]. The divergence of the infinite integral is canceled by the electric charge renormalization e_0 → √Z_3 e_0 = e, where e_0 is the bare electric charge in the original Lagrangian, e is the physical charge, and Z_3 is the renormalization constant. The remaining finite part of Π(s) is Π̂(s) = Π(s) − Π(0), which is used to define the running coupling constant α(s) to leading order:

α(s) = α / [1 − Π̂(s)].   (24)

This formula expresses an important physical characteristic: the finite part Π̂(s) in Eq. (24) is not the entire VP function; the infinite part Π(0) is absorbed into the definition of the physical charge e. After the charge renormalization, the effect of the VP correction can be understood as the bare charge e_0 being redefined as the physical charge e while, simultaneously, the fine-structure constant α is replaced by the effective energy-dependent running coupling α(s). Therefore, the finite factor 1 − Π̂(s) of the VP correction should be combined with α to yield the effective running constant α(s); α and 1 − Π̂(s) should not be separated in physical interpretations or practical calculations.
In the one-photon-exchange and chain approximation, the finite part of the VP function Π̂(s) can be expressed as the summation of all fermion-loop contributions [7,8,10], with ll̄ = e−e+, µ−µ+, τ−τ+ and qq̄ = uū, dd̄, ss̄, cc̄, bb̄, tt̄. The QED terms of the lepton loops can be calculated analytically [5,21]. However, the QCD quark loops cannot be calculated analytically owing to the strong nonperturbative interaction. The solution to this issue is to use the optical theorem and the dispersion relation [22,23]. The optical theorem relates the imaginary part of the QCD component of the photon self-energy to the inclusive hadronic Born cross-section [23], and the dispersion relation relates the QCD contribution of the VP function to an integral over the imaginary part of the VP function for the quark loops. Combining the two, the nonperturbative QCD VP term can be calculated from the hadronic cross-section. If the interference between the inclusive continuum and resonant hadronic states can be neglected, the contribution of the quark loops can be written as the sum Π_qq̄(s) = Π_con(s) + Π_res(s) (Eq. (29)). Π_con(s) can be calculated by the numerical integral of Eq. (30): below 5 GeV, R̃(s) uses experimental values [15,24,25], whereas above 5 GeV it adopts the perturbative QCD (pQCD) prediction. Π_res(s) includes the contributions of all resonances with J^{PC} = 1^{--}. If the interference between different resonances with the same decay final states is neglected for simplicity, the resonant cross-section σ_res^0(s) can be written as the summation of Breit-Wigner cross-sections (Eq. (31)), and the final analytical result for Π_res(s) is given in Eq. (32). In the vicinity of J/ψ and ψ(3686), their overlap can be neglected and only one resonance needs to be considered. However, in the higher-charmonia region, the wide ψ(4040), ψ(4160), and ψ(4415) overlap significantly, and all their contributions and interference effects should be included [14]. Figure 3 exhibits the energy dependence of the running coupling constant α(s) expressed by Eq. (24) around the resonances J/ψ and ψ(3686). The resonant shape of α(s) is due to the virtual VP effect, not to the production of a real resonance.
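As a rough numerical illustration of Eq. (24), the sketch below evaluates α(s) from the leading-order leptonic vacuum polarization in the high-energy limit, plus a resonant term whose parameterization is an assumption made here purely to mimic the qualitative behaviour near a narrow resonance; it is not the paper's Eq. (32), and the hadronic continuum contribution is omitted entirely.

```python
import numpy as np

ALPHA0 = 1.0 / 137.035999                       # alpha at q^2 = 0
M_LEPTONS = [0.000511, 0.105658, 1.77686]       # e, mu, tau masses in GeV

def pi_hat_leptonic(s):
    """Leading-order leptonic part of the subtracted VP function in the
    high-energy limit s >> m_l^2 (standard one-loop result)."""
    return sum(ALPHA0 / (3.0 * np.pi) * (np.log(s / m**2) - 5.0 / 3.0)
               for m in M_LEPTONS if s > 4.0 * m**2)

def pi_hat_resonant(s, M=3.0969, Gamma=92.9e-6, Gamma_ee=5.55e-6):
    """ASSUMED illustrative narrow-resonance contribution (J/psi-like);
    the normalization is a placeholder, not the paper's Eq. (32)."""
    return (3.0 * s * Gamma_ee) / (ALPHA0 * M) / (s - M**2 + 1j * M * Gamma)

def alpha_running(s):
    """Running coupling alpha(s) = alpha / (1 - Pi_hat(s)), Eq. (24)."""
    return ALPHA0 / (1.0 - (pi_hat_leptonic(s) + pi_hat_resonant(s)))

for rs in (3.0, 3.0969, 3.2):                   # sqrt(s) in GeV
    print(rs, abs(alpha_running(rs**2)))
```

Even this crude sketch shows the qualitative point of Fig. 3: away from the resonance α(s) grows slowly and smoothly through the leptonic logarithms, while near s = M^2 the resonant piece makes α(s) vary rapidly with energy.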
It should be noted that in experimental measurements there is no strict partition between the continuum and resonant states, as expressed in Eq. (5). For example, an observed final state π+π− may come from direct production e+e− → π+π− or via the intermediate mode e+e− → ρ0 → π+π−. Therefore, Eqs. (5) and (29) represent only a rough division, adopted for simplicity.
It should be stressed that the dispersion relation and the optical theorem merely provide a practical algorithm for calculating the QCD nonperturbative VP function Π_qq̄(s); they do not provide an extra physical interpretation. However, the procedure for calculating Π_qq̄(s) from the dispersion relation and optical theorem may be misleading. Some users have considered that the cross-sections σ_con^0(s) and σ_res^0(s) appearing in the expressions for Π_qq̄(s) imply that the VP effect also produces real continuum and resonant hadronic states in the virtual photon propagator. In fact, the fermion-loop integral of the VP function is, by definition, a virtual quantum fluctuation, and it does not carry the characteristic quantum numbers (such as mass, spin, and parities) that are necessary for any real particle. A real physical state must be measurable in detectors, but the fermion loops with infinite four-momentum fluctuations in the VP cannot be observed.
In general, the Born cross-sections of the γ* mode and the intermediate ψ mode are proportional to α^2. Considering the VP effect, the running coupling constant α(s) leads to an additional energy dependence of the cross-section. Moreover, in the energy region around J/ψ and ψ(3686), the value of Π_res(s) is very sensitive to s, Γ_e, and Γ, which implies that the bare values of Γ_e and Γ will significantly influence the line shape of e+e− → γ*/ψ → µ+µ−.
Effective leptonic width
In most references, the value of the electron width in the Breit-Wigner cross-section adopts the experimental partial width Γ_e^ex (which is represented as Γ_e in the PDG without an explicit statement), with the VP effect being absorbed into the electron width. There are two different conventions for Γ_e^ex. In reference [9], the experimental electron width is defined by Eq. (33), in which the entire VP function is absorbed in Γ_e^ex; in reference [10], the definition of Eq. (34) is adopted instead. As the discussion in the previous section shows, it is not necessary to introduce the quantity Γ_e^ex in the expression for the cross-section if α is replaced by α(s). The following sections discuss this in more detail. Using α(s) in place of α keeps the bare value Γ_e in the analysis, which is more natural for understanding the VP effect than introducing Γ_e^ex. However, once the bare value Γ_e is measured using the scheme proposed in this paper, one may obtain Γ_e^ex from the definition in Eq. (33) or Eq. (34) and extract the radial wavefunction R(0) according to Eq. (19).
VP-modified Born cross-section
From the viewpoint of Feynman diagrams, the VP correction modifies the photon propagator, which can be understood from another perspective: the VP effect modifies fine structure constant α to running coupling constant α(s). In this section, the single and double VP effects will be discussed and their differences will be compared numerically.
The VP-corrected total Born cross-section σ̃^0(s) is obtained by applying the VP correction to each channel. The next two sections discuss the effect of the VP on σ_γ*^0(s) and σ_ψ^0(s), respectively.
VP-modified cross-section of γ * channel
The Born cross-section σ_γ*^0(s) of the γ* channel, expressed in Eq. (9), is a smooth function of the energy. When the VP correction is applied to it,

σ̃_γ*^0(s) = σ_γ*^0(s) / |1 − Π̂(s)|^2 = (4π/3s) |α(s)|^2.   (36)

Figure 4 shows the line shapes of σ_γ*^0(s) given in Eq. (9) and of σ̃_γ*^0(s) in Eq. (36). The line shape of σ_γ*^0(s) is smooth in s, whereas σ̃_γ*^0(s) exhibits an obvious resonant structure. Clearly, the resonant structure of σ̃_γ*^0(s) is due to the VP effect, i.e., to the sensitive energy dependence of α(s) in the vicinity of ψ, with σ̃_γ*^0(s) < σ_γ*^0(s) for s < M^2 and σ̃_γ*^0(s) > σ_γ*^0(s) for s > M^2. Thus, the resonant shape of the γ*-channel cross-section does not imply that a real resonant state J/ψ or ψ(3686) is produced; rather, the resonant component Π_res(s) affects the VP function. In the vicinity of narrow resonances, both the Born cross-section σ^0(s) expressed in Eq. (21) and the VP function Π̂(s) are sensitive to the energy. Therefore, the energy dependence of the effective cross-section σ̃^0(s) is determined not only by σ^0(s) but also by Π_res(s), or equivalently α(s).
VP-modified cross-section of ψ channel
Generally, the cross-section of a resonance is expressed in the Breit-Wigner form. If the electron width takes the bare value Γ_e, the effective Breit-Wigner cross-section is modified by the VP correction. Reference [9] adopted the convention defined by Eq. (33), which corresponds to the VP-modified Breit-Wigner cross-section of Eq. (37). The numerator and denominator in Eq. (37) are evaluated at different energy scales: the numerator at s and the denominator at the peak, M^2. This is inappropriate for line-shape scan measurements in the vicinity of J/ψ and ψ(3686), because most energy points s_i deviate from the peak value M^2. In fact, a more natural VP correction to the Breit-Wigner cross-section σ_ψ^0(s) is the form of Eq. (38), which corresponds to the convention of Eq. (39) and, according to Eq. (19) and Eq. (24), requires the VP-modified Γ_e to be energy-dependent (Eq. (40)). Here Γ_e takes the theoretical values Γ_e = 4.8 keV for J/ψ and Γ_e = 2.1 keV for ψ(3686) [26]. The difference between the line shapes based on Eqs. (15) and (37) is small: the peak positions of σ_ψ^0(s) and σ̃_ψ^0(s) defined by Eq. (37) coincide, and the relative difference of their cross-sections at the peak is approximately 6% for both J/ψ and ψ(3686). The shift of the peak position between σ_ψ^0(s) and σ̃_ψ^0(s) defined by Eq. (38) is approximately 1.0 MeV and 0.4 MeV, and the relative difference of their cross-sections at the peak is approximately 31% and 3%, for J/ψ and ψ(3686), respectively. J/ψ is narrower than ψ(3686), and thus the shift in the vicinity of J/ψ is much larger than that near ψ(3686). The line shapes of the VP-modified Breit-Wigner cross-section adopting Eq. (37) and Eq. (38) are therefore different. Adopting Eq. (38) is clearly the reasonable choice, and it is consistent with the VP correction to the γ* channel, see Eq. (36).
Single VP correction case
The Feynman diagram with a single VP correction is shown in Fig. 6, where e at each vertex is the electron charge, which represents the coupling strength between the leptons (e± or µ±) and the photon (γ*). The grey bubble represents the VP correction in the 1PI approximation, and the hollow oval represents the resonance ψ. For the ψ channel in the Feynman diagram of Fig. 6, only the virtual photon propagator between the initial e+e− and the intermediary ψ is corrected by the VP; there is no VP correction for the virtual photon between ψ and the final state µ+µ−. This is the same as the traditional treatment, i.e., only a single VP correction is considered for the ψ channel. The coherent amplitude is given by the sum of the two diagrams. Considering the VP effect, with the electromagnetic coupling strength still expressed through α, the Born cross-section is modified to σ̃^0(s) = σ^0(s)/|1 − Π̂(s)|^2, where σ^0(s) is given by Eq. (21). The energy dependence of σ^0(s) and σ̃^0(s) in the vicinity of J/ψ and ψ(3686) is displayed in Fig. 7. It is clear that the VP correction, or equivalently α(s), distorts the line shape of the original resonant structure of σ^0(s). The Feynman diagram with a single VP correction in Fig. 6 can also be redrawn equivalently as Fig. 8, which has the same topological structure as the tree level in Fig. 1. The black dot at the vertex is the effective running electron charge e(s). For the right Feynman diagram of the channel e+e− → ψ → µ+µ− in Fig. 6 or Fig. 8, the coupling strength of the three-line vertex e+e−γ* is e(s), corresponding to α(s), while for µ+µ−γ* it is e, corresponding to α, with

α = e^2/(4π)  and  α(s) = e^2(s)/(4π).   (44)
Double VP correction case
In quantum field theory, the processes e+e−→µ+µ− and µ+µ−→e+e− should be invariant under time reversal T ⇄ −T, and both processes have the same cross-section if the masses m_e and m_µ can be neglected compared with the energy √s. However, the right Feynman diagrams in Fig. 6 and Fig. 8 violate this basic requirement. This issue can be simply resolved by the double VP correction.
The resonant channel e+e− → ψ → µ+µ− has two independent virtual photons: one between e+e− and ψ, and another between ψ and µ+µ−. According to the Feynman rules and the ISR correction principle, each independent virtual photon propagator is modified by its own single VP correction factor, and the two VP factors cannot be combined into one. A Feynman diagram with time-reversal symmetry can be drawn as in Fig. 9. The coherent amplitude for the Feynman diagram of Fig. 9, after contraction of the Lorentz indices of the virtual photons γ* and the intermediary vector meson ψ, leads to the corresponding cross-section given in Eq. (46), which differs from the single-VP-corrected cross-section. This difference will yield different results when extracting the resonant parameters from experimental data. The Feynman diagram in Fig. 9 with the double VP correction can be redrawn equivalently as Fig. 11, which is symmetric under the time reversal of the two leptonic processes. The tree-level Feynman diagrams in Fig. 1 and the double-VP-corrected equivalent diagram in Fig. 11 have the same topology, but the coupling vertices possess different coupling strengths, e and e(s), respectively.
Total cross-section
The Born cross-section corresponding to the treelevel Feynman diagram reflects the basic property of an elementary particle reaction process, which is interesting in physics. However, in experiments, the measured property is the total cross-section. In this section, the general form of the total cross-section for e + e − →µ + µ − is given first. Subsequently, the analytical expression of the total cross-section is deduced for the cases of single and double VP corrections, and they are compared numerically.
General form
In the Feynman-diagram scheme, the total cross-section up to order O(α^3) can be written as an integral over the energy fraction of the radiated photon (Eq. (48)), where x ≡ E_γ/√s is the energy fraction carried by the bremsstrahlung photon, x_m = 1 − 4m_µ^2/s is the maximum energy fraction of the radiative photon, s′ = (1−x)s is the effective square of the center-of-mass energy of the final µ+µ− pair after radiation, δ_vert is the vertex correction factor, and the radiative function is given in Eq. (49) [7,8]. In principle, the integral in Eq. (48) can be calculated numerically. However, in its application to scan experiments of the narrow resonances J/ψ and ψ(3686), the e± beam energy spread must be taken into account. The effective total cross-section that matches the experimental data is

σ_obs(s_0) = ∫ G(s; s_0) σ^tot(s) ds,   (50)

where G(s; s_0) is the Gaussian function representing the energy spread distribution of the initial e± beams and √s_0 is the nominal center-of-mass energy. Eq. (50) is a two-dimensional integral in the variables x and s: it contains Eq. (48), and the outer integral over s, accounting for the energy spread, has to be calculated numerically. However, the inner integral over x in Eq. (48) can be evaluated analytically. The analytical calculation of Eq. (48) saves considerable CPU time and achieves high numerical accuracy.
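The structure of this two-dimensional integral can be sketched numerically as below. The radiator function used here is a simplified leading-log form F(x,s) ≈ β x^(β−1) with β = (2α/π)[ln(s/m_e^2) − 1], which stands in for the paper's Eq. (49); the Gaussian smearing is applied in √s with an illustrative spread; and `sigma_born` is any callable Born-level cross-section, for example the Breit-Wigner sketch given earlier.

```python
import numpy as np
from scipy import integrate

ALPHA = 1.0 / 137.035999
M_E = 0.000511                       # electron mass, GeV

def sigma_isr(s, sigma_born, x_max=0.2):
    """Inner integral of Eq. (48) with an assumed leading-log radiator
    beta*x**(beta-1); the substitution u = x**beta removes the integrable
    singularity at x = 0."""
    beta = 2.0 * ALPHA / np.pi * (np.log(s / M_E**2) - 1.0)
    def integrand(u):
        x = u**(1.0 / beta)
        return sigma_born((1.0 - x) * s)
    val, _ = integrate.quad(integrand, 0.0, x_max**beta)
    return val

def sigma_observed(s0, sigma_born, spread=1.2e-3):
    """Outer integral of Eq. (50): Gaussian beam-energy-spread smearing,
    applied here in sqrt(s) with an illustrative width `spread` (GeV)."""
    rs0 = np.sqrt(s0)
    def integrand(rs):
        g = np.exp(-0.5 * ((rs - rs0) / spread)**2) / (spread * np.sqrt(2.0 * np.pi))
        return g * sigma_isr(rs**2, sigma_born)
    val, _ = integrate.quad(integrand, rs0 - 5.0 * spread, rs0 + 5.0 * spread)
    return val

# Toy usage with a structureless Born cross-section (arbitrary units):
print(sigma_observed(3.0969**2, sigma_born=lambda s: 86.8 / s))
```

The nested numerical quadrature illustrates why the paper emphasizes evaluating the inner x integral analytically: in a scan fit this double integral must be recomputed at every energy point and every iteration of the parameter fit.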
In the following sections, the analytical expression of integral Eq. (48) is deduced for the two cases of single and double VP corrections, and total cross-section σ tot (s) is evaluated using the analytical results.
Analytical calculation for single VP
If the initial e± radiates a photon with energy fraction x, the notations in Eqs. (16) and (32) are changed accordingly, with s replaced by s′ = (1−x)s. The Born cross-section with the VP correction then takes the form of a ratio of quadratic polynomials, so that the integrand in Eq. (48) has a rational polynomial form whose coefficients u_i, v_i, w_n, and d_n are combinations of known constants and the resonant parameters. The integral in Eq. (48) can therefore be performed analytically. The results of the analytical integration of σ^tot(s) are shown in Fig. 12, where the line shape of σ^0(s) is also plotted to exhibit the effect of the ISR correction.
Analytical calculation for double VP
The integrand of Eq. (48) for the double VP correction can likewise be expressed in terms of elementary functions, with coefficients p_n, q_n, and r_n that are combinations of known constants and the resonant parameters. The integral in Eq. (48) can then be performed analytically, and the analytical results are displayed in Fig. 13.
Discussions
This work discusses two issues: (1) treating the VP corrections of the γ* channel and the ψ channel with a natural and consistent scheme; (2) comparing the cross-sections of e+e−→γ*/ψ→µ+µ− evaluated with the single and double VP correction schemes.
The tree-level Feynman diagram in Fig. 1 for e+e−→γ*/ψ→µ+µ− is the coherent summation of the γ* channel and the ψ channel. The VP-modified Born cross-section is given in Eq. (46): the γ* channel is modified by a single VP factor, and the ψ channel is modified by double VP factors. Figure 14 compares the original Born cross-section σ^0(s) with the single- and double-VP-modified Born cross-sections σ̃^0(s) in the vicinity of J/ψ and ψ(3686). The line shapes of σ̃^0(s) for the single and double VP corrections are significantly different.
Reference [10] discusses the VP-modified Born cross-section of the process e+e− → µ+µ− for which the tree-level Feynman diagram contains only the continuum γ* channel and no resonant ψ channel; this is the case discussed in section 5.1 of the present paper. The VP-modified Born cross-section in reference [10] is the same as Eq. (36) of our paper, which is a very concise and natural expression and is easy to understand physically. Reference [10] then performed a skillful, purely mathematical identity transformation of the VP correction, in which the full factor 1/(1−Π) was divided into two terms: the term with 1/(1−Π_0), explained as the continuum amplitude, and the term Π̂_res/(1−Π_0)^2, explained as the resonant amplitude. In this interpretation, only the non-resonant component Π_0 is viewed as the VP correction factor, whereas the resonant component Π̂_res is viewed as a resonant amplitude. Thus, the original single continuum channel is transformed into two channels, which implies that a purely mathematical identity transformation leads to a new physical picture. The resonant amplitude Π̂_res contains the non-resonant component Π_0 of Π̂ in the form of Eq. (58), where the mass M̃ and width Γ̃ are called dressed values. Therefore, the value of Γ_e^ex defined with the convention of Eq. (34) cannot be adopted on its own, because Π_0 is only a partial VP correction and not the full one, Π̂; in this case, Γ_e^ex must be used together with M̃ and Γ̃ for completeness and consistency. Note also that only Γ_e, associated with the initial state e+e−, is present in the numerator of Eq. (58), and there is no Γ_f for the particular final state µ+µ−. If Π̂_res can be interpreted as the resonant amplitude of e+e−→ψ→µ+µ−, why can it not be interpreted in the same way for other final states, such as e+e−, τ+τ−, or hadrons? In fact, the true resonant amplitude is written in the Breit-Wigner form of Eq. (14). The VP effect is a quantum fluctuation of the vacuum, and it does not refer to any final state. The convention of Eq. (34) and the explanation in [10] turn a simple and clear problem into a complex and obscure one, whereas the convention in Eq. (33) is clear and natural.
The bare resonant parameters (M,Γ,Γ e ,δ) are the basic quantities in the Breit-Wigner formula, and they characterize the main properties of a resonance. The values of these parameters can be estimated from phenomenological potential models [26,27]. However, their accurate values have to be measured by fitting the experimental data.
Generally, the cross-section directly measured in experiments is the total cross-section, which includes all the radiative effects. To extract the bare resonant parameters from the measured cross-section correctly, an appropriate treatment of the ISR correction is crucial.
As seen in the previous sections, the value of the total cross-section σ_th^tot(s) depends on the VP correction scheme, and it is also a function of the resonant parameters. The ISR correction factor 1+δ is a theoretical quantity defined in Eq. (4), and it relates the total cross-section to the Born cross-section according to Eq. (2).
The values of the resonant parameters of J/ψ and ψ(3686) can be extracted by fitting the measured cross-section in a line-shape scan experiment based on the least-squares method, minimizing

χ^2 = Σ_i [σ_ex^tot(s_i) − σ_th^tot(s_i)]^2 / Δ_i^2,

where σ_ex^tot is measured using Eq. (1) and Δ_i is the uncertainty of σ_ex^tot(s_i) at the energy point s_i. The optimized values of (M, Γ, Γ_e, δ) correspond to the minimum of χ^2.
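The fit machinery itself can be sketched as follows. The line-shape model used here is a toy continuum-plus-Breit-Wigner stand-in for the paper's full analytical σ^tot(s) (which would include the ISR and VP corrections), and the scan data are generated artificially, purely to exercise the χ^2 minimization.

```python
import numpy as np
from scipy.optimize import minimize

def sigma_model(s, M, Gamma, Gamma_e, cont):
    """Toy line shape: 1/s continuum plus a Breit-Wigner term. A stand-in
    for the full analytical sigma_tot(s); units are arbitrary."""
    bw = 12.0 * np.pi * Gamma_e**2 / ((s - M**2)**2 + M**2 * Gamma**2)
    return cont / s + bw

def chi2(params, s_pts, sig_ex, err):
    """Least-squares objective described in the text."""
    return np.sum(((sig_ex - sigma_model(s_pts, *params)) / err)**2)

# Artificial scan data around a (broadened) J/psi-like resonance.
rng = np.random.default_rng(1)
s_pts = np.linspace(3.08, 3.12, 20)**2
true = (3.0969, 3.0e-3, 5.5e-6, 1.0e-3)
err = 0.02 * sigma_model(s_pts, *true)
sig_ex = sigma_model(s_pts, *true) + rng.normal(0.0, err)

fit = minimize(chi2, x0=(3.097, 2.5e-3, 5.0e-6, 0.9e-3),
               args=(s_pts, sig_ex, err), method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-9, "maxiter": 20000})
print(fit.x)   # fitted (M, Gamma, Gamma_e, continuum normalization)
```

In a real analysis, `sigma_model` would be replaced by the analytical total cross-section of section 6, convolved with the beam energy spread of Eq. (50), so that the fitted Γ_e is the bare value the paper advocates extracting.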
Once the value of Γ_e is extracted, one may obtain Γ_e^ex using either convention, but this is not necessary either in physics or in experiments. Γ_e is connected to the radial wavefunction R(0) of the cc̄ bound state ψ according to Eq. (19), so the value of Γ_e yields the value of R(0) and can be used to test potential models. Γ_e can also be used to calculate the correct ISR factor in the R measurement.
It is expected that if the values of the resonant parameters (M, Γ, Γ_e, δ) are extracted using the scheme proposed in this paper, the results will not be the same as in previous measurements. Therefore, which scheme is reasonable should be determined by experiments and further studies.
"Physics"
] |
Peculiar behaviour of optical polarization gratings in light-sensitive liquid crystalline elastomers
The angular dependence of the diffraction efficiency of volume-type holographic gratings recorded in a single-domain light-sensitive liquid crystalline elastomer was investigated. Usually this dependence is expected to be very similar for intensity gratings and for polarization gratings. However, our measurements resolved a profound difference between the two types of the gratings: a typical Bragg peak of the diffraction efficiency is observed only for intensity gratings, while polarization gratings exhibit a profound dip at the Bragg angle. The appearance of this dip is explained by strongly anisotropic optical absorption of the actinic light during the recording process.
©2016 Optical Society of America
OCIS codes: (050.1950) Diffraction gratings; (050.1930) Dichroism; (160.5335) Photosensitive materials.
References and links
1. T. Todorov, L. Nikolova, and N. Tomova, "Polarization holography. 1: A new high-efficiency organic material with reversible photoinduced birefringence," Appl. Opt. 23(23), 4309–4312 (1984).
2. T. Todorov, L. Nikolova, and N. Tomova, "Polarization holography. 2: Polarization holographic gratings in photoanisotropic materials with and without intrinsic birefringence," Appl. Opt. 23(24), 4588–4591 (1984).
3. T. Todorov, L. Nikolova, K. Stoyanova, and N. Tomova, "Polarization holography. 3: Some applications of polarization holographic recording," Appl. Opt. 24(6), 785–788 (1985).
4. C. Oh and M. J. Escuti, "Achromatic diffraction from polarization gratings with high efficiency," Opt. Lett. 33(20), 2287–2289 (2008).
5. X. Pan, C. Wang, C. Wang, and X. Zhang, "Image storage based on circular-polarization holography in an azobenzene side-chain liquid-crystalline polymer," Appl. Opt. 47(1), 93–98 (2008).
6. S. R. Nersisyan, N. V. Tabiryan, L. Hoke, D. M. Steeves, and B. R. Kimball, "Polarization insensitive imaging through polarization gratings," Opt. Express 17(3), 1817–1830 (2009).
7. S. H. Lin, S. L. Cho, S. F. Chou, J. H. Lin, C. M. Lin, S. Chi, and K. Y. Hsu, "Volume polarization holographic recording in thick photopolymer for optical memory," Opt. Express 22(12), 14944–14957 (2014).
8. A. Shishido, "Rewritable holograms based on azobenzene-containing liquid-crystalline polymers," Polym. J. 42(7), 525–533 (2010).
9. H. Yu, "Recent advances in photoresponsive liquid-crystalline polymers containing azobenzene chromophores," J. Mater. Chem. C Mater. Opt. Electron. Devices 2(17), 3047–3054 (2014).
10. N. Kawatsuki, A. Yamashita, M. Kondo, T. Matsumoto, T. Shioda, A. Emoto, and H. Ono, "Photoinduced reorientation and polarization holography in photo-cross-linkable liquid crystalline polymer films with large birefringence," Polymer (Guildf.) 51(13), 2849–2856 (2010).
11. T. Sasaki, T. Shoho, K. Goto, K. Noda, N. Kawatsuki, and H. Ono, "Photoalignment and resulting holographic vector grating formation in composites of low molecular weight liquid crystals and photoreactive liquid crystalline polymers," Appl. Phys. B 120(2), 217–222 (2015).
12. X. Pan, C. Wang, H. Xu, C. Wang, and X. Zhang, "Polarization holographic gratings in an azobenzene side-chain liquid-crystalline polymer," Appl. Phys. B 86(4), 693–697 (2007).
13. S. P. Gorkhali, S. G. Cloutier, G. P. Crawford, and R. A. Pelcovits, "Stable polarization gratings recorded in azo-dye-doped liquid crystals," Appl. Phys. Lett. 88(25), 251113 (2006).
14. V. Presnyakov, K. Asatryan, T. Galstian, and V. Chigrinov, "Optical polarization grating induced liquid crystal micro-structure using azo-dye command layer," Opt. Express 14(22), 10558–10564 (2006).
15. C. Provenzano, P. Pagliusi, and G. Cipparrone, "Highly efficient liquid crystal based diffraction grating induced by polarization holograms at the aligning surfaces," Appl. Phys. Lett. 89(12), 121105 (2006).
16. B. J. Kim, S. D. Lee, S. Y. Park, and D. H. Choi, "Unusual characteristics of diffraction gratings in a liquid crystal cell," Adv. Mater. 14(13-14), 983–988 (2002).
17. W. Lee and C. C. Lee, "Two-wave mixing in a nematic liquid-crystal film sandwiched between photoconducting polymeric layers," Nanotechnology 17(1), 157–162 (2006).
18. S. Nersisyan, N. Tabiryan, D. M. Steeves, and B. R. Kimball, "Fabrication of liquid crystal polymer axial waveplates for UV-IR wavelengths," Opt. Express 17(14), 11926–11934 (2009).
19. D. Xu, G. Tan, and S. T. Wu, "Large-angle and high-efficiency tunable phase grating using fringe switching liquid crystal," Opt. Express 23, 11274–11285 (2015).
20. W. Duan, P. Chen, B. Y. Wei, S. J. Ge, X. Liang, W. Hu, and Y. Q. Lu, "Fast-response and high-efficiency optical switch based on dual-frequency liquid crystal polarization grating," Opt. Mater. Express 6(2), 597–602 (2016).
21. Y. Zhao, in Smart Light Responsive Materials – Azobenzene-Containing Polymers and Liquid Crystals, Y. Zhao, T. Ikeda, ed. (John Wiley & Sons, 2009).
22. M. Warner and E. M. Terentjev, Liquid Crystal Elastomers (Oxford University Press, 2007).
23. H. Finkelmann, E. Nishikawa, G. G. Pereira, and M. Warner, "A new opto-mechanical effect in solids," Phys. Rev. Lett. 87(1), 015501 (2001).
24. P. M. Hogan, A. R. Tajbakhsh, and E. M. Terentjev, "UV manipulation of order and macroscopic shape in nematic elastomers," Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 65(4), 041720 (2002).
25. Y. Yu, M. Nakano, and T. Ikeda, "Photomechanics: directed bending of a polymer film by light," Nature 425(6954), 145 (2003).
26. M. Warner and L. Mahadevan, "Photoinduced deformations of beams, plates, and films," Phys. Rev. Lett. 92(13), 134302 (2004).
27. N. J. Dawson, M. G. Kuzyk, J. Neal, P. Luchette, and P. Palffy-Muhoray, "Experimental studies of the mechanisms of photomechanical effects in a nematic liquid crystal elastomer," J. Opt. Soc. Am. B 28(8), 1916–1921 (2011).
28. H. Y. Jiang, S. Kelch, and A. Lendlein, "Polymers move in response to light," Adv. Mater. 18(11), 1471–1475 (2006).
29. U. Hrozhyk, S. Serak, N. Tabiryan, T. J. White, and T. J. Bunning, "Bidirectional photoresponse of surface pretreated azobenzene liquid crystal polymer networks," Opt. Express 17(2), 716–722 (2009).
30. K. M. Lee, M. L. Smith, H. Koerner, N. Tabiryan, R. A. Vaia, T. J. Bunning, and T. J. White, "Photodriven flexural-torsional oscillation of glassy azobenzene liquid crystal polymer networks," Adv. Funct. Mater. 21(15),
Introduction
Optical polarization gratings are periodic holographic structures that are recorded by optical interference patterns at constant intensity but spatially varying polarization states of the optical field [1,2]. They exhibit many interesting properties that can be applied in devices for detection and manipulation of the optical polarization state, such as polarization filters and converters, polarizing beam splitters and polarization-sensitive optical data storage units [3-7]. Liquid crystalline (LC) materials are particularly suitable for fabrication of polarization gratings, because they possess a collective molecular response in combination with a strong optical anisotropy [8,9]. Polarization-responsive recording in liquid crystalline materials is usually based on optical-field-induced reorientation of the mesogenic molecules [10-12]. Another common mechanism that leads to polarization-type LC gratings is holographic or some other type of patterning of surface layers that control the LC alignment [13-20]. In both cases, photoresponse is typically achieved by incorporation of photoisomerizable chemical compounds, mostly azobenzene derivatives [21].
Liquid crystalline elastomers (LCEs) are soft materials that combine liquid crystallinity with rubber elasticity. Light-sensitive LCEs exhibit a huge opto-mechanical response, which is a direct consequence of the specific coupling mechanisms present in these materials [22-24]. Exposed to optical irradiation, they can change size and shape, flip between different shapes, oscillate, or even move on the supporting surface [25-32]. Another interesting feature of light-sensitive LCEs is their strong opto-optical response that is observed as a large photoinduced modification of the optical birefringence. This can be exploited for fabrication of tunable volume-type optical diffraction structures that can be regulated by mechanical strain or by temperature modifications [33-39].
Our recent papers have reported investigations on LCE-based diffraction gratings fabricated by intensity-modulated interference patterns [40-43], while this paper reports on polarization gratings. We show that polarization gratings exhibit a similar magnitude of the diffraction efficiency as intensity gratings. However, in contrast to intensity gratings, they display an unusual dip, i.e. a depression structure, instead of the expected Bragg reflection peak. By extending our theoretical model developed for intensity gratings [40], we show that this dip is a consequence of anisotropic absorption of actinic light during the recording process.
Optical transmission gratings were recorded by placing the LCE film in the interference field of two expanded laser beams with a wavelength of either λ_r = 351 nm or 364 nm, which resulted in the formation of transmission gratings with grating spacing Λ of 1.6 or 1.7 μm, respectively. Recording beams of equal intensity (8 mW/cm^2) entered the film symmetrically with respect to the surface normal. The beams were linearly polarized. Their polarization directions with respect to the nematic director n are depicted in Fig. 1. For recording intensity gratings, n was set parallel to the s-polarization direction and the recording beams were superposed in a parallel (s-s or p-p) polarization combination. For recording polarization gratings, n was set at 45° with respect to the s- and p-polarization directions and the recording beams were superposed in the perpendicular (s-p) polarization combination. The recording time for all gratings was 10 min. Optical diffraction properties of the gratings were examined with a low-power (< 1 mW) laser beam at λ_p = 633 nm, a wavelength to which the LCE is not sensitive. The probe beam was either s or p polarized. The sample was mounted onto a rotation stage and rotated around an axis perpendicular to the plane of incidence. The angular dependence of the intensities I_0, I_+1, and I_-1 of the 0th and ±1st order diffraction peaks in the vicinity of the Bragg angle θ_B = arcsin(λ_p/2Λ) was measured by photodiode detectors. Intensities of higher diffraction orders were negligible. The (relative) diffraction efficiency was calculated as η_i = I_i / (I_-1 + I_0 + I_+1), where i = -1, 0, or +1. By this definition, absorption and scattering losses of the probe beam can be disregarded. The optical absorbance A of the LCE film at λ_r = 351 nm was so high (> 4) that its dichroism could not be accessed directly. We therefore characterized the dichroism of the azomesogens (J7) incorporated in the film by introducing them into the conventional nematic liquid crystalline mixture E7 (Shijiazhuang Chengzhi Yonghua Display Material Co.).
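For reference, the Bragg angle and the relative diffraction efficiencies defined above can be evaluated with a few lines of Python; the intensity values in the usage example are made up solely to illustrate the normalization.

```python
import numpy as np

def bragg_angle(wavelength_um, grating_spacing_um):
    """Bragg angle (degrees) for a probe wavelength and grating spacing."""
    return np.degrees(np.arcsin(wavelength_um / (2.0 * grating_spacing_um)))

def diffraction_efficiency(i_minus1, i_0, i_plus1):
    """Relative efficiencies eta_i = I_i / (I_-1 + I_0 + I_+1), so that
    absorption and scattering losses of the probe beam drop out."""
    total = i_minus1 + i_0 + i_plus1
    return i_minus1 / total, i_0 / total, i_plus1 / total

print(bragg_angle(0.633, 1.6))             # ~11.4 deg for the 1.6 um grating
print(diffraction_efficiency(0.8, 8.0, 1.2))   # made-up probe intensities
```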
Commercial glass cells with a thickness of μm (Instec Inc.) and surface coatings inducing planar LC alignment were filled either with pure E7 or with a mixture of E7 and 1 wt% of J7, and their absorbance as a function of the polarization direction of the linearly polarized incident beam was measured at 351 nm. The results are shown in Fig. 2. Both cells exhibit considerable linear dichroism, with maximal absorbance A for the polarization state parallel to the nematic director n (Fig. 2(a)). The absorbance difference between the two samples (ΔA), which is attributed to the absorbance of the azomesogens, shows a pronounced figure-eight shape (Fig. 2(b)). This signifies a strong preferential alignment of the J7 molecules along the direction of n. We presume that a similar alignment occurs also when J7 is incorporated into the LCE matrix. The Bragg peak exhibits FWHMs of ∼15° for the grating recorded in the s-s configuration and of ∼8° for the grating recorded in the p-p configuration. This observation indicates that the depth of recording, i.e. the effective grating thickness, is about two times larger for the p-p grating than for the s-s grating [40]. This is a consequence of the weaker absorption (larger penetration depth) of the p-polarized radiation at λ_r = 351 nm as compared with the s-polarized beam. Nevertheless, the peak diffraction efficiency for the p-p grating is about two times smaller than for the s-s grating, which is a consequence of the fact that the p-p grating was probed by the p-polarized probe beam (at λ_p = 633 nm) while the s-s grating was probed by the s-polarized beam. As discussed in our previous paper, owing to the relation between the LC order parameter and birefringence, light polarized perpendicular to the nematic director in general exhibits a lower diffraction efficiency than light polarized parallel to the director [42].
To verify the universality of the observed behaviour we recorded the grating by using perpendicularly polarized recording beams at another recording wavelength, namely at λ r = 364 nm.In addition to this, we measured angular dependencies of the diffraction efficiency for p as well as for s polarized probed beams.The results are shown in Fig. 4. One can notice that the dip at the Bragg peak is present also for these cases.In addition, we analysed some LCE samples with different chemical compositions from the one described above and the dip was observed for them too.Hence we propose that the observed behaviour is a general property of polarization gratings recorded in the LCEs.In the following we will show that it can be explained by the large linear dichroism of the material at the recording optical wavelength.
Theoretical model
Superposition of two coherent optical waves with equal amplitudes but orthogonal polarization states leads to optical interference fields with constant intensity but a periodic, spatially varying polarization state [45-47]. At small intersection angles (2θ) of the beams, as is the case in our recording setup (Fig. 1), the component of the optical field orthogonal to the intersection plane can be neglected (collinear approximation) [48]. Consequently, the superposition of s and p polarized beams results in an interference pattern that varies between circular and linear polarization, as depicted in the top line of Fig. 5. This interference pattern can be decomposed into the superposition of two interference sub-patterns: a sub-pattern of two beams linearly polarized at −45° with respect to the p polarization (direction parallel to n in Fig. 1(b), corresponding to extraordinary polarization) and a sub-pattern of two beams linearly polarized at +45° with respect to the p polarization (direction perpendicular to n in Fig. 1(b), corresponding to ordinary polarization) [45,49]. The extraordinary-extraordinary (e-e) and ordinary-ordinary (o-o) sub-patterns are shifted by Λ/2 along the grating vector (x axis in Fig. 1) [47].
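A small numerical check of this decomposition is sketched below: two counter-tilted plane waves with orthogonal (s and p) polarizations and unit amplitudes are projected onto the ±45° axes, reproducing two intensity sub-patterns of period Λ that are mutually shifted by Λ/2 while their sum stays constant. Which of the ±45° axes corresponds to the extraordinary (parallel to n) direction depends on the director orientation of Fig. 1(b); the assignment here is arbitrary.

```python
import numpy as np

GRATING_PERIOD = 1.6                               # Lambda, in um
x = np.linspace(0.0, 2.0 * GRATING_PERIOD, 400)    # position along grating vector
phi = np.pi * x / GRATING_PERIOD                   # phase k*x*sin(theta), Lambda = lambda_r/(2 sin(theta))

# Collinear approximation: the s and p beams acquire opposite tilt phases.
e_s = np.exp(+1j * phi)
e_p = np.exp(-1j * phi)

# Project the total field onto the +/-45 deg axes (the two sub-patterns).
i_plus45 = np.abs((e_s + e_p) / np.sqrt(2.0))**2   # 1 + cos(2*phi)
i_minus45 = np.abs((e_s - e_p) / np.sqrt(2.0))**2  # 1 - cos(2*phi), shifted by Lambda/2

print(np.allclose(i_plus45 + i_minus45, 2.0))      # total intensity is constant
print(x[np.argmax(i_plus45)], x[np.argmax(i_minus45)])  # maxima differ by Lambda/2
```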
In the LCE material, due to its large linear dichroism, the e-e sub-pattern decays much faster with sample depth (z axis in Fig. 1) than the o-o sub-pattern. Consequently, the polarization state of the total optical field changes with sample depth, and at large depths practically only the o-o sub-pattern, which is subject to lower absorption, survives. The polarization modulation is therefore transformed into an intensity modulation. This effect is schematically shown in Fig. 5.

Fig. 5. Optical interference field of s and p polarized optical beams as a function of sample depth (see also Fig. 1). Due to linear dichroism, the amplitude of the e-e sub-pattern (polarization at −45° with respect to the x axis) decreases with sample depth (z axis) much faster than the amplitude of the o-o sub-pattern (polarization at +45° with respect to the x axis). The direction of the nematic director n is indicated at the bottom left of the image.
Optical absorption in photo-isomerizable materials takes place in a nonlinear manner associated with the difference between the absorption cross sections of the trans and the cis isomers. Cis isomers usually exhibit lower absorption for UV radiation, and consequently, after the isomerization reaction, the material becomes more transparent to UV light than before it. As a result of this phenomenon, at the beginning of the irradiation process the optical field is present mainly in the surface region of the material; with prolonged irradiation, however, it penetrates deeper and deeper into the volume. This type of "photo-bleaching" is characteristic also for light-sensitive LCEs and was analyzed in one of our previous studies reporting LCE-based optical gratings [40]. However, for the intensity gratings considered in Refs. [40][41][42][43], the anisotropy of optical absorption was not important. If this anisotropy is taken into account, the rate equation for the relative concentration of the trans isomers, ct = Ct/(Ct + Cc), where Ct and Cc denote the molar concentrations of the trans and cis isomers (in mol/m3), given in [40] is modified to Eq. (2), in which γ denotes the conversion efficiency from the electronic excited state to the cis or trans conformation, σ is the absorption cross section, Ψ is a parameter proportional to the optical intensity, and τ is the thermal relaxation time from the cis to the trans state. The subscripts t and c denote the trans and cis states, and the indices e and o denote extraordinarily and ordinarily polarized optical fields, respectively. In Eq. (2) it is assumed that light-induced cis-to-trans back isomerization is independent of the polarization state of the optical field, which is reasonable for the chromophores used in our LCE material. In accordance with Eq. (2), the components of the imaginary part of the uniaxial optical dielectric tensor, ε″, can be expressed through the corresponding trans and cis cross sections, where k0 is the magnitude of the optical wave vector in vacuum.
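A hedged sketch of the form such a rate equation typically takes, assembled only from the parameter definitions above (γtc and γct denote the trans→cis and cis→trans conversion efficiencies, and an isotropic cis cross section σc encodes the polarization-independent back isomerization); the exact expressions in [40] may differ in prefactors and notation:

\[
\frac{\partial c_t}{\partial t}\;=\;-\,\gamma_{tc}\,c_t\left(\sigma_{t,e}\Psi_e+\sigma_{t,o}\Psi_o\right)
\;+\;\gamma_{ct}\,(1-c_t)\,\sigma_c\left(\Psi_e+\Psi_o\right)\;+\;\frac{1-c_t}{\tau},
\]
\[
\varepsilon''_{e,o}\;\propto\;\frac{1}{k_0}\Big[\sigma_{t,(e,o)}\,c_t+\sigma_{c,(e,o)}\,(1-c_t)\Big],
\]

so that the dichroism of the trans population (σt,e ≈ 2σt,o, Fig. 2) translates directly into anisotropic absorption of the recording field.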
The above-described extensions of the theory presented in [40] were used to calculate depth profiles of the optical intensity for extraordinarily and ordinarily polarized optical fields for different recording times of the grating. The value (σt,e/σt,o) = 2, in agreement with the measurements shown in Fig. 2 and Fig. 3(a), was used in the calculation. The sample thickness considered in the calculation was 20 μm, which corresponds to the typical effective depth of the gratings for long recording times [40,41]. The values of all other parameters were set to be the same as in [40]. The optical field was calculated using an FDTD-method-based software package (WOLFSIM) designed for solving the wave equation in periodic structures of arbitrary anisotropic media [50,51]. The result is shown in Fig. 6. Due to the Λ/2 shift between the e-e and o-o interference patterns, the maxima of the extraordinarily polarized field coincide with the minima of the ordinarily polarized field and vice versa. Besides this, due to its lower absorption, the ordinarily polarized field penetrates deeper into the sample than the extraordinarily polarized field.

Fig. 6. Calculated intensity profiles of extraordinarily and ordinarily polarized optical fields as a function of sample depth for increasing recording times. The recording time increases linearly from tr = 10 s (upper segment) to tr = 140 s (bottom segment). The profile corresponding to one period Λ of the interference pattern is shown in each segment. The ordinarily polarized field penetrates deeper into the sample than the extraordinarily polarized field. Its maxima and minima are shifted by Λ/2 with respect to those of the extraordinarily polarized field.
At small sample depths the extraordinary field dominates, whereas at large sample depths the ordinary field prevails, and consequently the birefringence is most profoundly reduced in the regions of high intensity of the ordinary field (red-coloured regions on the right side of Fig. 6). Due to the Λ/2 spatial shift of these regions, the modulation of the birefringence changes its sign as a function of the sample depth. The spatial dependence of the anisotropy of the real part of the optical dielectric tensor can be described as [40] Δε(x, z) = ε′e − ε′o = εa + Δε1(z) cos(Kg x), where ε′e and ε′o denote the extraordinary and ordinary components of the real part of the optical dielectric tensor, εa is the average anisotropy, Δε1(z) is the modulation of the anisotropy, which depends on the sample depth z, and Kg = 2π/Λ is the magnitude of the grating vector. The calculated dependence of Δε1(z) for some selected recording times is shown in Fig. 7. One can notice the change of sign at a depth of about 30% of the sample thickness. The oscillations at the rear side of the sample are a consequence of the boundary conditions and the limited mesh size. The change of sign of Δε1(z) with sample depth causes destructive interference between the diffraction field of the probe beam generated in the surface region of the sample and the diffraction field generated in the inner parts of the sample. This effect is most pronounced when the probe beam enters the sample at the Bragg angle θB.

For a weakly diffracted beam, the diffraction efficiency of the ±1st diffraction orders in the vicinity of θB can be expressed by Eq. (5) [47], where L is the sample thickness and Δθ = θ − θB measures the deviation from the Bragg angle. The calculated angular dependence of the diffraction efficiency η(θ) (limited to −0.2 rad < Δθ < +0.2 rad) corresponding to the longest recording time considered in Fig. 7 is shown as a red solid line in Fig. 3(b). A profound dip with a minimum at θ = θB is evident. Also in general, a good agreement between the calculated and the measured dependence of η(θ) can be noticed. The largest variations in the observed angular behavior appear in the value of η(θ = θB) (Fig. 4). As follows from Eq. (5), this value is determined by the details of the evolution of Δε(x, z) during the recording process. Further experiments employing varying recording times are needed to resolve these details.

Having applications in mind, one might wonder how to avoid the detrimental dip at the Bragg angle. As follows from our study, one should simply use recording wavelengths for which the material exhibits low linear dichroism, or employ parallel polarization states. On the other hand, the dip might also be advantageous, e.g., in multiplexing the usual intensity gratings with polarization gratings: the former can be optimally read out at the Bragg angle whereas the latter are read out in off-Bragg geometry, so that any "cross-talk" effects are reduced.
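The destructive-interference argument can also be illustrated numerically. The sketch below is not the paper's calculation and does not reproduce the prefactors of Eq. (5); it only evaluates the qualitative statement that, in the weak-diffraction limit, the first-order amplitude behaves like a depth integral of Δε1(z) with a dephasing factor that grows with Δθ, so a sign-changing profile (as in Fig. 7) yields a dip at the Bragg angle while a constant-sign profile yields a peak. The grating period used here is a hypothetical placeholder.

```python
import numpy as np

# Weak-diffraction (single-scattering) sketch: the +/-1st-order amplitude is taken
# proportional to the depth integral of Delta_eps_1(z) * exp(i*K_g*dtheta*z).
L = 20e-6                                   # effective grating thickness, m (from the text)
Lambda = 2.0e-6                             # grating period, m (hypothetical placeholder)
K_g = 2 * np.pi / Lambda
z = np.linspace(0.0, L, 2000)
dz = z[1] - z[0]

profile_sign_change = np.where(z < 0.3 * L, 1.0, -0.4)   # sign flip at ~30% depth (cf. Fig. 7)
profile_same_sign = np.exp(-z / (0.5 * L))                # monotonically decaying reference

def efficiency(profile, dtheta):
    """|integral of profile(z) * exp(i*K_g*dtheta*z) dz|^2 for each dtheta (rad)."""
    phase = np.exp(1j * K_g * dtheta[:, None] * z[None, :])
    amplitude = np.sum(profile[None, :] * phase, axis=1) * dz
    return np.abs(amplitude) ** 2

dtheta = np.linspace(-0.2, 0.2, 401)
eta_dip = efficiency(profile_sign_change, dtheta)
eta_peak = efficiency(profile_same_sign, dtheta)

i0 = np.argmin(np.abs(dtheta))              # index of the Bragg angle (dtheta = 0)
print("sign-changing profile:  eta(Bragg)/eta_max =", round(eta_dip[i0] / eta_dip.max(), 3))
print("constant-sign profile:  eta(Bragg)/eta_max =", round(eta_peak[i0] / eta_peak.max(), 3))
```

Running the sketch shows the sign-changing profile nearly cancelling itself at Δθ = 0 while the constant-sign profile peaks there, which is the mechanism behind the measured dip.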
Conclusions
Our results demonstrate that strong linear dichroism at the actinic optical wavelength, which is characteristic of light-sensitive LCEs and also of many other kinds of light-sensitive liquid crystalline materials, can drastically influence volume-type polarization holographic recording in photo-responsive media. Until now, however, this effect has mostly been neglected [46,52]. In combination with the strongly nonlinear recording kinetics associated with photo-bleaching, dichroism can lead to diffractive structures exhibiting various unusual phenomena, such as the profound minimum of the diffraction efficiency at the Bragg angle that was the subject of the present investigation.
Other features that are worth studying in LCE-based polarization gratings are the polarization properties of the diffracted light and the effect of mechanical strain and temperature on these properties [53]. Another open problem that can be conveniently investigated by the analysis of optical polarization gratings is photo-induced alignment in LCE materials [36,54]. Besides this, optical polarization gratings recorded in LCE media can open up various new possibilities for the construction of polarization-sensitive diffractive optical elements that can be regulated by different external stimuli, in particular by mechanical strain.
The diffraction efficiency of a few percent, typically observed in our experiments, is usually too low for practical applications. We want to emphasize that this work focused on a fundamental explanation of the reported phenomenon, and for that reason the same LCE material was used as in our previous studies. This approach allowed us to use material parameters deduced from those studies in our numerical simulations, which enhances their soundness. However, the investigated LCE material is far from optimised from the point of view of diffraction efficiency. As shown in our recent paper, small variations of the chemical structure of the azomesogen moiety and/or suitable clamping of the LCE film can improve the diffraction efficiency by one order of magnitude [42]. In addition, further improvements are readily possible by optimizing the recording optical wavelength (with respect to the absorption spectrum of the material), so that the effective thickness of the grating is increased and diffraction consequently occurs in the two-wave Bragg regime. Another important aspect for applications is to reduce the grating spacing to about Λ ~ 500 nm, which is an open problem for LCE gratings that needs to be investigated in the future.
Fig. 1. Schematic drawings of the recording configuration for the parallel polarization combination (a) and for the perpendicular polarization combination (b), respectively.
Fig. 2. Absorbance (at 351 nm) of planarly aligned samples of pure E7 and of E7 mixed with 1 wt% of J7 azomesogens as a function of the polarization direction of the linearly polarized incident optical beam (a). Difference ΔA between the absorbances of the two samples (b). Vertical arrows in the centre denote the orientation of the nematic director n.
Fig. 3. Diffraction efficiency of the ±1st diffraction orders as a function of the deviation from the Bragg angle for a grating recorded by parallel polarization states (a) and for a grating recorded by perpendicular polarization states (b). The polarization states of the recording and readout (probe) beams are denoted in the images. The red solid line in (b) is the result of a theoretical simulation described in detail in Section 3.
Fig. 4. Diffraction efficiencies of the ±1st diffraction orders as a function of the deviation from the Bragg angle for a grating recorded by perpendicularly polarized recording beams (s + p) and read out by s-polarized (squares) and p-polarized (circles) probe beams, respectively.
Fig. 7. Modulation of the anisotropy of the real part of the optical dielectric tensor as a function of sample depth for different recording times. The relative recording times are denoted in the inset. The resulting calculated angular dependence of the diffraction efficiency of the ±1st diffraction orders for tr = 100 s is shown as a thick solid line in Fig. 3(b).
"Physics"
] |
Processing and Characterization of β Titanium Alloy Composite Using Powder Metallurgy Approach
A β titanium alloy matrix composite was made from a mixture of elemental metal powders with an addition of boron carbide. During high-temperature sintering, in situ synthesis took place, resulting in the formation of TiB and TiC reinforcing phases. The identification of these phases was confirmed by X-ray diffraction and microstructural analyses. The presence of unreacted B4C particles and of the surrounding reaction layers allowed the diffusion kinetics of the alloying elements to be evaluated using SEM and EDS analyses. The directions of diffusion of the alloying elements in the multicomponent titanium alloy and their influence on the in situ synthesis reaction were determined. In addition, the relationship between the microstructural components, the strengthening phases, and the hardness was established. It was shown that in situ reinforcement of a titanium alloy produced from a mixture of elemental powders with a complex chemical composition is possible under the proposed conditions, provided that the sintering temperature is sufficiently high and the holding time is adequate; the kinetics of the synthesis of the strengthening phases were shown to be controlled by the concentrations of the alloying elements.
Introduction
Titanium matrix composites (TMCs) are increasingly being studied because of their high strength, high stiffness, and good strength at elevated temperatures; moreover, titanium alloys used as a matrix offer the advantages of low density and resistance to atmospheric corrosion [1,2]. Taking these properties into account, TMCs are being considered as potential structural materials primarily for the aerospace and space industries [3]. Basically, TMCs can be divided into continuously reinforced TMCs, which can contain silicon carbide (SiC) fibers, and discontinuously reinforced TMCs, which are reinforced with particles [4]. Such particles are mainly B4C, graphite, TiB, TiC, TiN, or SiC. The most common method for producing discontinuously reinforced TMCs is powder metallurgy: titanium master alloy powder or commercially pure titanium powder is mixed with reinforcing particles and then consolidated at high temperatures [5]. The SiC particles added to TMCs most often form an ex situ reinforcement, which means that the particles added to the powder mixture do not react with the matrix during sintering and no new particles are formed. A far more common method used in TMC fabrication is to exploit in situ reactions during the sintering process, which lead to the formation of new strengthening phases [6].
Due to titanium's high reactivity with carbon and boron, the in situ formation of additional strengthening phases through the addition of boron carbide (B4C) is possible. Basically, the in situ synthesis follows the exothermic Reaction (1) [7]:

5Ti + B4C → 4TiB + TiC (1)

The products of this diffusion-controlled synthesis are TiB and TiC interphases. This approach has also been successfully used in the manufacture of composites from other novel materials, such as metallic glasses reinforced with high-entropy alloy particles [8]. The presence of these interphases in the matrix primarily enhances hardness, stiffness, and strength at elevated temperatures. Additional benefits of using in situ reactions in the fabrication of TMCs are the homogeneous distribution of the strengthening-phase particles and the clean interface between the matrix and the strengthening phases, as well as flexibility in the chemical composition and the proportion of the reinforcement addition.
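As a quick orientation (not taken from the paper), the mass balance of Reaction (1) for a 2 wt.% B4C addition, the level used later in this work, can be sketched as follows; the atomic masses are standard values and complete conversion is assumed, whereas the results presented later show that the reaction is in fact incomplete:

```python
# Hedged sketch: stoichiometry of 5 Ti + B4C -> 4 TiB + TiC per 100 g of powder
# mixture containing 2 wt.% B4C, assuming the reaction runs to completion.
M = {"Ti": 47.87, "B": 10.81, "C": 12.01}          # g/mol, standard atomic masses
M_B4C = 4 * M["B"] + M["C"]
M_TiB = M["Ti"] + M["B"]
M_TiC = M["Ti"] + M["C"]

m_B4C = 2.0                                        # g of B4C in 100 g of mixture
n_B4C = m_B4C / M_B4C                              # mol
m_Ti_consumed = 5 * n_B4C * M["Ti"]
m_TiB = 4 * n_B4C * M_TiB
m_TiC = 1 * n_B4C * M_TiC

print(f"Ti consumed: {m_Ti_consumed:.2f} g")       # ~8.7 g per 100 g of mixture
print(f"TiB formed:  {m_TiB:.2f} g, TiC formed: {m_TiC:.2f} g")
print(f"total reinforcement if the reaction were complete: "
      f"{(m_TiB + m_TiC):.1f} wt.% of the mixture")
```

Even a small B4C addition would therefore tie up several grams of titanium per 100 g of mixture and yield roughly 10 wt.% of reaction products if conversion were complete, which puts the observed unreacted B4C particles into perspective.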
Several thermally activated mechanisms are responsible for the transfer of material during sintering, resulting in an increase in density. During the densification process, phenomena such as volumetric diffusion, diffusion at the grain boundary, surface diffusion, and viscous or plastic flow occur [9]. The aforementioned mechanisms can be activated during the sintering process simultaneously or sequentially. In general, the early stage of sintering begins with the formation of a neck in the contact area between two adjacent particles. The vacancies are then filled by lattice diffusion of atoms from the grain boundary into the neck region. Diffusion processes in general are complex and depend on various factors, such as the shape and size of the particles; the distribution of the alloying elements in the mixture; the microstructure; and the process parameters, which are temperature, atmosphere, and time.
The currently published research results are focused on developing parameters for fabricating TMCs and characterizing their microstructure and basic strength properties. These studies mostly focus on the reaction between commercially pure Ti powder and B 4 C particles. The research results currently presented in the publications are aimed at determining favorable sintering conditions that will allow the composite to achieve high density and produce a large amount of additional reinforcing phase. In works [10,11], the authors showed what effect the addition of a reinforcing phase has on composite density in the spark plasma sintering (SPS) process. Regardless of the amount of B 4 C, a minimum density of 99% was obtained. Tensile strength at room temperature decreased as the amount of B 4 C introduced increased. On the other hand, it increased at elevated temperatures. An increase in the amount of the reinforcing phase increased the microhardness, regardless of the temperature conditions of the test.
Research on the use of β titanium alloys as a matrix is conducted less often. In this research area, at most the use of master alloy powders has been attempted. Grützner et al. [12,13] undertook a study to characterize the reaction kinetics of the synthesis of a Ti-5Al-5Mo-5V-3Cr master alloy powder with B4C particles. The mixture was also consolidated by the SPS process. The presence of the TiB, TiB2, and TiC phases, as well as unreacted B4C particles, was confirmed. The microhardness and compressive strength were studied, and the addition of a reinforcing phase increased both of these properties.
Some research groups [14][15][16] have undertaken the use of additive manufacturing methods to produce TMCs. In this case, the reinforcement also results from an in situ synthesis between titanium and B4C particles. Prior to the selective laser melting (SLM) process, the titanium powder and B4C particles are intensively mixed, so that the strengthening-phase particles coat the surfaces of the titanium powder particles. Such a prepared mixture is used as the raw material in the SLM process. It has been shown that by using such a novel method it is also possible to synthesize the in situ TiB and TiC phases. The produced parts have a high relative density and a microstructure typical of additively manufactured materials, in which melt pools are clearly visible. The addition of B4C particles to SLM samples increases the tensile strength but causes a significant decrease in ductility. In addition, it should be noted that as the content of the strengthening phase increases further, the tensile strength gradually decreases.
Despite many recent studies, there is insufficient knowledge of the behavior of a multi-component mixture of elemental powders. Therefore, this paper presents an analysis of the fabrication process of a composite based on a β titanium alloy strengthened in situ through the synthesis reaction of boron carbide with titanium. The initial material was a mixture of elemental powders with a chemical composition corresponding to the Ti-5Al-5Mo-5V-3Cr alloy and B4C particles. The effect of the mixture preparation conditions on the microstructure after the high-temperature sintering process is discussed. X-ray diffraction was then used to identify the phase composition of the obtained material. Using scanning electron microscopy and EDS analysis, the kinetics of elemental diffusion during sintering were described. The hardness of the composite was also compared with that of the unreinforced material, and the effect of the in situ nucleated reinforcing phases on it was determined.
Materials and Methods
Elemental powders of titanium (size < 150 µm, 99.9% purity), aluminum (size < 35 µm, 99.8% purity), molybdenum (size < 35 µm, 99.9% purity), vanadium (size < 150 µm, 99.9% purity), chromium (size < 65 µm, 99.2% purity), and, as a strengthening phase, boron carbide B4C particles (size < 100 µm, 98% purity) were used in this study. The morphology of the powders is shown in Figure 1. To prepare the mixture, the elemental powders were weighed at a ratio corresponding to the chemical composition of the Ti-5Al-5Mo-5V-3Cr alloy with a 2 wt.% (about 3.58 vol.%) addition of B4C. The powders were then mixed in a ceramic mixing chamber in the presence of 8 mm diameter tungsten carbide balls. The mass ratio of the balls to the mixture was 1:1. The mixing process was carried out for 90 min at a mixer speed of 55 rpm. Additionally, a reference mixture was prepared without the addition of B4C. The mixtures were then cold compacted under a pressure of 450 MPa and directly subjected to pressureless sintering at 1250 °C for 4 h in a protective atmosphere of argon. The material was then slowly cooled with the furnace. The density of the sintered samples, measured by the Archimedes method, was 4.03 ± 0.05 g/cm³ for the composite and 4.15 ± 0.04 g/cm³ for the unreinforced Ti-5553 alloy.
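As a rough cross-check (not part of the paper's methodology), the measured Archimedes densities can be compared with theoretical densities estimated by the inverse rule of mixtures. The constituent densities below are nominal handbook values, and the estimate ignores the in situ reaction products, so the resulting porosity figures are only indicative:

```python
# Back-of-the-envelope estimate of relative density / porosity of the sintered samples.
rho = {"Ti": 4.51, "Al": 2.70, "Mo": 10.22, "V": 6.11, "Cr": 7.19, "B4C": 2.52}  # g/cm^3, nominal

def inverse_rule_of_mixtures(weight_fractions):
    return 1.0 / sum(w / rho[el] for el, w in weight_fractions.items())

ti5553 = {"Ti": 0.82, "Al": 0.05, "Mo": 0.05, "V": 0.05, "Cr": 0.03}
rho_alloy = inverse_rule_of_mixtures(ti5553)
composite = {el: 0.98 * w for el, w in ti5553.items()}
composite["B4C"] = 0.02
rho_comp = inverse_rule_of_mixtures(composite)

for name, rho_th, rho_meas in [("Ti-5553", rho_alloy, 4.15), ("Ti-5553 + 2 wt.% B4C", rho_comp, 4.03)]:
    rel = rho_meas / rho_th
    print(f"{name}: theoretical ~{rho_th:.2f} g/cm^3, measured {rho_meas:.2f} g/cm^3, "
          f"relative density ~{100 * rel:.0f}% (porosity ~{100 * (1 - rel):.0f}%)")
```

Both samples come out at roughly 89-90% of the estimated theoretical density, which is consistent with the significant residual porosity reported in the microstructural observations below.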
X-ray diffraction phase analysis was performed using a Panalytical Empyrean DY 1061 X-ray diffractometer with a Cu tube (Kα = 1.54 Å), an angular range of 2θ from 20° to 90°, a step of 0.03°, a scanning time of 7 s, and operating conditions of 40 kV and 40 mA. Samples for microstructural observation were prepared using a standard grinding and polishing procedure and etched with Kroll reagent (2% HF + 6% HNO3 + 92% H2O). Microstructural analysis was performed on a Leica DM4000M light microscope and on Hitachi TM-3000 and FEI Inspect S50 scanning electron microscopes; both scanning microscopes were equipped with an energy-dispersive spectrometry (EDS) system. Hardness measurements were carried out by the Vickers method on a Duramin-40 hardness tester, using an indenter load of 19.62 N (and 0.25 N for the microhardness tests). A hardness distribution map was prepared in Surfer 17 software using the Kriging gridding method.

Mixture Preparation

Figure 2 shows elemental distribution maps for the mixture of elemental powders. Powders with different particle sizes were used for the study. The use of tungsten carbide balls intensified the mixing process: the larger and harder particles of titanium and of the alloying powders became finer, which allowed for a homogeneous distribution of powder particles in the mixture volume. It was noted that after the mixing process the smaller particles filled the empty spaces between the large particles. Such a phenomenon has a positive effect on the bulk density and enables a higher density of the green compact to be achieved. Compared to the morphology of the initial powders, it was noted that the B4C particles were not crushed. The results of the EDS analysis confirmed the mixing effects noted previously during observations of the powder mixture morphology. The effects induced during mixing in the presence of WC balls, namely the crushing of the particles of the individual powders and the embedding of smaller and softer particles, particularly aluminum and molybdenum, onto the surfaces of the larger particles such as titanium, are evident.
Thus, it was confirmed that the proposed method of mixing elemental powders, including the use of tungsten carbide balls, leads to a more uniform mixture in terms of size and results in the acceleration of the diffusion process of alloying elements during sintering and acquisition of better homogeneity regarding the chemical composition of the product. However, it should be mentioned that such a method of mixture preparation is exposed to oxygen contamination, which is an α-phase stabilizer. During the crushing of the elemental powder particles, the oxidation of the exposed inner surfaces of the powder particles could occur, as well as enclosing within the particle volume of the crushed oxide layers originating from their surfaces. The resulting increase in oxygen content in the tested material could effectively inhibit the effect of β-phase stabilization by other alloying elements, such as molybdenum, vanadium, or chromium [17]. In addition, the oxygen content in titanium alloys increases ultimate tensile strength and yield strength but decreases elongation [18].
Phase Identification and Microstructure
The XRD patterns for the reference sample and the composite are shown in Figure 3. For the non-reinforced sample, peaks were identified for two phases: hexagonal Ti-α and body-centered cubic Ti-β. The XRD pattern for the in situ reinforced sample showed the presence of both B 4 C and additional phases that nucleated during sintering: TiB and TiC. As a result of the addition of reinforcing particles to the powder mixture, which has high reactivity to titanium, an exothermic reaction occurred during high-temperature sintering according to Equation (1). Compared to the reference material, additional peaks were observed that originated from either B 4 C, TiB, or TiC. Other peaks from the strengthening phases partially overlapped with peaks from Ti-α or Ti-β, resulting in higher intensity compared to the non-reinforced material. The presence of the B 4 C phase indicates that not all boron carbide particles have reacted with the matrix completely.
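For orientation, the expected positions of the strongest TiC reflections for Cu Kα radiation can be estimated from Bragg's law; the cubic TiC lattice parameter used below (about 4.33 Å) is a handbook value assumed for illustration, not a value refined from the present patterns:

```python
# Illustrative only: where the main TiC reflections are expected for Cu K-alpha,
# so that peaks of the in situ phases can be compared against the Ti-alpha/Ti-beta pattern.
import math

lam = 1.54            # Cu K-alpha wavelength, Angstrom
a_TiC = 4.33          # assumed lattice parameter of cubic TiC, Angstrom
for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0)]:
    d = a_TiC / math.sqrt(sum(i * i for i in hkl))
    two_theta = 2 * math.degrees(math.asin(lam / (2 * d)))
    print(f"TiC {hkl}: d = {d:.3f} A, 2-theta ~ {two_theta:.1f} deg")
```

With these assumptions the (111), (200), and (220) reflections fall near 36°, 42°, and 60° 2θ, i.e. partly close to the Ti-α/Ti-β peaks, which is consistent with the partial peak overlap noted above.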
The optical microstructures of the reference material and of the in situ reinforced composite are shown in Figure 4. The microstructure of the reference material (Figure 4a) was homogeneous and consisted mainly of needle-like α′ phase precipitates on the β phase matrix. The new αGB phase grains first formed at the boundaries of the primary β-phase grains. Then, as a result of slow cooling, new α″ grains nucleated from the primary β-phase grain boundaries towards the interior of the grains, forming α″ colonies. The microstructure observations revealed significant porosity for both the titanium alloy and the composite. The pores were mainly closed and spherical, and their size did not exceed 100 µm. The exception was the unreacted boron carbide particles, which can be distinguished in Figure 4b; they were surrounded by an approximately 20 ± 3 µm reaction layer and a channel void whose length exceeded 100 µm. The locally low consolidation occurring near the B4C particles was mainly due to the high melting point of these particles (2350 °C).
The sintering of ceramic particles took place at much higher temperatures, whose range was 1800-2200 • C [19,20]. In the case of a titanium matrix composite, this was not possible as the melting point of the main alloy component would be exceeded. These particles were connected to the matrix material via diffusion necks. For the composite, the α-phase morphology had similar characteristics to the reference material. The inner needles of the α"-phase grains were slightly shorter. Additionally the additional strengthening phases precipitated during sintering were noticeable. Sintering conditions have a key effect on the microstructure of the resulting product, which has a direct impact on the strength properties. The obtained microstructure of the composite matrix is the result of slow cooling from the β-phase field and phase transformations occurring in the metastable Ti-5553 alloy. In general, the lamellar structure is characterized by high strength and fracture toughness but low ductility [21]. Currently, most of the research in the field of the in situ synthesis of TMCs is carried out with the use of commercially pure titanium (CP-Ti) powder and B4C particles. In the works conducted by Sabahi Namini et al. [11,22,23], the effect of B4C addition on the microstructure and properties of an in situ CP-Ti matrix composite produced by the SPS process was studied. The material was heated and sintered in a β-phase field, but no cooling details were provided. The resulting microstructure of this composite consisted of massive α-phase laths, indicating a relatively slow cooling rate. However, it should be noted that pure titanium has a relatively low strength. The bending strength of such a composite does not exceed 1100 MPa, and it decreases with the increase in the strengthening phase [11]. Therefore, the use of CP-Ti as a raw material is a rather suitable way to study the kinetics of the reaction occurring between titanium and particles of the reinforcing phase. A controlled microstructure morphology can only be obtained with the addition of alloying elements to the matrix material and increased strength properties. In the works [12,13], Ti-5553 master alloy powder and B4C particles were used. Sintering was carried out at a temperature above the α+β→β phase transformation, and cooling was uncontrolled. As a result, the matrix microstructure consisted of equiaxial β-phase grains, indicating relatively rapid cooling. The use of the alloy as a matrix material resulted in improved strength properties, and the bending strength increased up to 1600 MPa.
The SEM microstructures of the as-sintered titanium matrix composite are presented in Figure 5. The SEM observations were carried out to analyze the morphology of in situ precipitated strengthening phases. According to the XRD analysis results, the TiC and TiB phases were formed during the sintering process. The EDS point scan was used for the identification of each phase. Due to the fact that the diffusion process was realized at solidstate, the strengthening phases precipitated in the form of colonies than were uniformly distributed on titanium alloy matrix. Sintering conditions have a key effect on the microstructure of the resulting product, which has a direct impact on the strength properties. The obtained microstructure of the composite matrix is the result of slow cooling from the β-phase field and phase transformations occurring in the metastable Ti-5553 alloy. In general, the lamellar structure is characterized by high strength and fracture toughness but low ductility [21]. Currently, most of the research in the field of the in situ synthesis of TMCs is carried out with the use of commercially pure titanium (CP-Ti) powder and B 4 C particles. In the works conducted by Sabahi Namini et al. [11,22,23], the effect of B 4 C addition on the microstructure and properties of an in situ CP-Ti matrix composite produced by the SPS process was studied. The material was heated and sintered in a β-phase field, but no cooling details were provided. The resulting microstructure of this composite consisted of massive α-phase laths, indicating a relatively slow cooling rate. However, it should be noted that pure titanium has a relatively low strength. The bending strength of such a composite does not exceed 1100 MPa, and it decreases with the increase in the strengthening phase [11]. Therefore, the use of CP-Ti as a raw material is a rather suitable way to study the kinetics of the reaction occurring between titanium and particles of the reinforcing phase. A controlled microstructure morphology can only be obtained with the addition of alloying elements to the matrix material and increased strength properties. In the works [12,13], Ti-5553 master alloy powder and B 4 C particles were used. Sintering was carried out at a temperature above the α + β → β phase transformation, and cooling was uncontrolled. As a result, the matrix microstructure consisted of equiaxial β-phase grains, indicating relatively rapid cooling. The use of the alloy as a matrix material resulted in improved strength properties, and the bending strength increased up to 1600 MPa.
The SEM microstructures of the as-sintered titanium matrix composite are presented in Figure 5. The SEM observations were carried out to analyze the morphology of in situ precipitated strengthening phases. According to the XRD analysis results, the TiC and TiB phases were formed during the sintering process. The EDS point scan was used for the identification of each phase. Due to the fact that the diffusion process was realized at solid-state, the strengthening phases precipitated in the form of colonies than were uniformly distributed on titanium alloy matrix. Obtaining reinforcement in the form of TiB and TiC precipitation networks is possible by using a different in situ composite production approach based on powder metallurgy. Wei et al. [24] used graphite powder and TiB2 powder as strengthening-phase additives. Through intensive ball milling, the powder particles of the strengthening phases coated the Ti6Al4V powder particles. Thanks to this procedure, during hot-pressing sintering, graphite and TiB2 reacted with titanium from the matrix and formed a network of reinforcements across the boundaries of the original Ti6Al4V powder particles. This approach undoubtedly achieves a homogeneous microstructure, but the use of alloy powders is preferred. When elemental powders are used, the layer of graphite and TiB2 formed on the surface of the titanium powder during milling can interfere with the diffusion of other alloying elements in the mixture.
TiB precipitated in the form of transgranular whiskers (Figure 5c) or coarse, elongated blocks that were also enriched with carbon (Figure 5b). According to previous studies [10], the TiB phase precipitates first and, owing to its high density of stacking faults, creates preferred conditions for TiC-phase nucleation. TiC precipitates in the form of equiaxed plates or elongated lamellae on the primary β grain boundaries (Figure 5b). It was also noted that the neighborhood of the nucleated reinforcement phases is carbon-enriched (Figure 5e). In Figure 5d, an unreacted boron carbide particle is presented. It can be clearly seen that the B4C particle is surrounded by the reaction layer and connected to it by a small diffusion neck; the reaction layer, in turn, is connected to the matrix by a visible diffusion neck. Since there is no liquid phase formation during the sintering process, only solid-state diffusion is involved as a densification mechanism. The diffusion process is slowed down by the oxide layers naturally occurring on the surfaces of the powder particles. During heating in a resistance furnace, diffusion is slower because the material heats progressively from the surface towards the interior. Therefore, with pressureless sintering, the holding time must be long enough to achieve adequate homogenization. The oxide layers initially break during heating and later dissolve, after which the diffusion of atoms can proceed.
To study the diffusion kinetics of boron and carbon during the sintering process, EDS line-scan measurements were performed across both diffusion necks marked in Figure 5d as I and II. Additionally, the EDS mapping of the unreacted B 4 C particle has been undertaken ( Figure 6c). The EDS line scan results correspond to the diffusion neck within the B 4 C particle, and the reaction layer (I) is presented in Figure 6a. Naturally, boron and carbon concentration in the unreacted particle is elevated. The closer it is to the diffusion neck, the boron concentration decreases slightly, and the carbon concentration increases, indicating a more intense diffusion of carbon toward the matrix. This observation is also confirmed by the EDS mapping, where a higher concentration of boron is seen in the center of the unreacted particle, and carbon accumulates at the periphery of the particle, closer to the reaction layer. In the case of aluminum and chromium, the opposite direction of diffusion was observed, from the matrix through the reaction layer to the center of the unreacted particle. Aluminum concentrates in the center of the particle, at the same location as boron. On the other hand, chromium concentration is higher only closer to the reaction layer, which coincides with the site of increased carbon concentration. The concentration of the other elements (Ti, Mo, and V) inside the B 4 C particle is very low. The complexity of the elemental powder mixture chemical composition means that in addition to the expected reaction of titanium with boron carbide, reactions between other alloying elements and boron or carbon may also occur. These will depend on the diffusivity of the individual elements relative to the alloying additives and the conditions of the sintering process, such as temperature and time. The problem of reactions occurring between aluminum and boron was the subject of early research in terms of describing the Al-B system [25,26], and in terms of producing aluminum matrix composites [27,28]. In general, it has been shown that the formation of new interphases in the Al-B system will primarily depend on three factors: chemical composition, temperature, and time. The reaction between aluminum and B 4 C will occur as early as around 700 • C, where mainly AlB 2 is formed. As the temperature increases, more complex interphases such as Al 3 BC and Al 3 B 48 C 2 form. It should be noted that a successful reaction between aluminum and boron carbide requires a significant holding time at elevated temperatures (48 h or more). Alamdari et al. [28] studied the reaction between pure boron fibers and aluminum with titanium addition up to 500 ppm. They showed that aluminum diffuses very quickly into boron, leading to its complete dissolution. The addition of titanium effectively inhibits the dissolution of boron fibers by forming a TiB 2 layer on their surface, while no diffusion of titanium into boron was observed. Similar observations result from the studies presented in this work. Aluminum can easily diffuses inside the B 4 C particle, and the reaction layer around it is composed mainly of titanium and boron. The presence of titanium inside the B 4 C particle is practically equal to zero. The situation is similar to chromium, which diffuses from the matrix to the inside of the B 4 C particle. 
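A minimal sketch of how such line-scan profiles relate to diffusivity (not the paper's analysis): for a semi-infinite diffusion couple the concentration follows C(x, t) = C0·erfc(x/(2√(Dt))), so during the 4 h hold a fast diffuser spreads over tens of micrometres while a slow one stays within a reaction-layer-sized region. The diffusivities used below are hypothetical placeholders, not measured values:

```python
# Illustrative erfc diffusion-couple profile for a fast and a slow diffuser.
import math

def erfc_profile(x_um, D_m2_per_s, t_s, c0=1.0):
    x = x_um * 1e-6
    return c0 * math.erfc(x / (2.0 * math.sqrt(D_m2_per_s * t_s)))

t = 4 * 3600.0                       # 4 h holding time at 1250 C (from the paper)
D_fast, D_slow = 1e-13, 1e-15        # m^2/s, hypothetical "carbon-like" vs "boron-like" values

for x in (0, 10, 25, 50, 100):       # distance from the interface, micrometres
    print(f"x = {x:3d} um:  fast diffuser {erfc_profile(x, D_fast, t):.2f}, "
          f"slow diffuser {erfc_profile(x, D_slow, t):.2f}")
```

Under these assumed values the slow diffuser is essentially confined within the first ~20 µm, i.e. a reaction-layer-scale distance, whereas the fast diffuser still has an appreciable concentration 50-100 µm into the matrix, which mirrors the qualitative picture of boron retained near the B4C particle and carbon spreading into the matrix.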
Chromium can react with both boron and carbon but since the reaction of titanium with boron has a significantly lower Gibbs free energy than the reaction of chromium with boron (at a temperature of 1200 • C: about −750 kJ/mol and −200 kJ/mol, respectively), it will therefore be preferred, and chromium will instead react with carbon [16,29]. In Figure 6b, the EDS line scan results corresponding to the diffusion neck between the reaction layer and the matrix material are presented (line II in Figure 5d). The reaction layer is mainly composed of titanium and up to about 12 wt. % of boron, which indicates that it is a TiB layer. Additionally, the increased concertation of vanadium and molybdenum can be noticed. A similar observation was presented previously in [12], where a similar composited was taken under the investigation, but as the initial material, the Ti-5553 master alloy powder was used. The presence of other alloying elements in the reaction layer may inhibit the diffusion of boron further to the Ti matrix. However, due to the high diffusivity of B and C in αTi [30], at the beginning of the sintering process, those elements move and concentrate on αGB and form a strengthening phase, which can be observed in Figure 5b. Further diffusion of boron at subsequent sintering stages is hindered due to the presence of the reaction layer and the relatively low diffusion of boron in TiB2 [31], which is present in the reaction layer. As the reaction neck is approached, the content of alloying elements (mainly Al, Mo, and Cr) increases, and the concentration of boron decreases. The carbon content in the reaction layer and the matrix remains the same. This is because carbon diffuses much faster into the matrix, while boron is retained in the reaction layer around the unreacted B4C particle. This phenomenon is responsible for the formation of much more TiC strengthening phases than TiB. As presented in Figure 6b from a distance of 35 μm, the fluctuations in the concentration of the elements composing the matrix depend on the components of the microstructure. The concentrations of titanium and aluminum are higher in the grains of the α phase (dark gray regions) as the latter element is its strong stabilizer. The remaining elements, namely, Mo, V, and Cr, stabilize the β phase, and their concentrations are higher in the light-gray regions.
Hardness Measurements
The addition of B4C particles to the elemental powder mixture and the nucleation of additional strengthening phases during in situ synthesis resulted in an increase in hardness from 220 ± 16 HV2 for the non-reinforced material to 287 ± 20 HV2 for the composite. The relatively low hardness of both materials was mainly due to the high porosity. To evaluate the effect of the in situ nucleated phases on the material properties, a micro-hardness map was prepared for a selected area (Figure 7). The chosen area was free of pores and contained the various microstructural components typical of the material under study. The highest hardness, 789 HV0.025, was measured for the phase identified as TiB (marked as 1). The matrix of the composite mostly consisted of colonies of α″ phase lamellae, so the hardness over the remaining area is relatively uniform and oscillates between 380 and 440 HV0.025. The exception is the measurement taken closer to the pore and in the area between αGB grains, which has a higher fraction of the β phase (marked as 2). In these areas, the hardness locally decreased to about 200 HV0.025. The microstructure strongly affects the hardness of the material, especially for titanium alloys: depending on the manufacturing method, the heat treatment, or the phase composition, the hardness of a material with the same chemical composition can differ significantly [32,33]. The micro-hardness results obtained correspond well with those available in the literature. β titanium alloys quenched from the β-phase field have a hardness in the range of 280-310 HV, while remodeling of the microstructure as a result of a different type of cooling or heat treatment, and the consequent increase in the α phase fraction, raises the hardness to the 460-500 HV range [34]. Grützner et al. [13], who studied a similar composite but with a higher volume fraction of B4C (12.9 vol.%) made from master alloy powder, obtained a similar matrix hardness. However, they did not report hardness values for the in situ nucleated phases. Instead, they reported the hardness of the reaction layer, which they identified as a mixture of TiB and TiC, as 13.3 GPa (about 1356 HV); this is higher than the hardness of the in situ nucleated TiB phase shown in this work, probably due to the higher content of the strengthening phase.
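For readers who want to reproduce the mapping step, the sketch below grids scattered micro-hardness indents into a map. It uses scipy's RBFInterpolator as a simple stand-in for the Kriging gridding performed in Surfer 17, and the coordinates and HV0.025 values are made-up placeholders rather than the measured data:

```python
# Hedged sketch: turn scattered Vickers micro-hardness indents into a regular hardness map.
import numpy as np
from scipy.interpolate import RBFInterpolator

# (x, y) indent positions in micrometres and hypothetical hardness readings (HV0.025)
points = np.array([[0, 0], [50, 0], [100, 0], [0, 50], [50, 50], [100, 50],
                   [0, 100], [50, 100], [100, 100]], dtype=float)
hv = np.array([400, 420, 789, 380, 430, 410, 200, 390, 440], dtype=float)

interp = RBFInterpolator(points, hv, smoothing=1.0)

# evaluate on a regular grid to obtain the map
gx, gy = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 100, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])
hv_map = interp(grid).reshape(gx.shape)
print(f"interpolated hardness range: {hv_map.min():.0f}-{hv_map.max():.0f} HV0.025")
```

A smoothing interpolator is used here simply to avoid overshooting between sparse indents; geostatistical Kriging, as in Surfer, additionally models the spatial correlation of the data and is the more rigorous choice for sparse hardness maps.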
Remodeling of the microstructure as a result of a different type of cooling or heat treatment, and consequently an increase in the α phase fraction, results in an increase in hardness in the 460-500 HV range [34]. Grützner et al. [13], who studied a similar composite but with a higher volume fraction of B4C (12.9 vol. %) and made from master alloy powder, obtained similar matrix hardness. However, they did not show hardness values for the in situ nucleating phase. Instead, they reported the hardness of the reaction layer, which they identified as a mixture of TiB and TiC, and it was 13.3 GPa (about 1356 HV), which is higher than the hardness of the in situ nucleating TiB phase shown in this work, probably due to the higher content of the strengthening phase.
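For reference, the conversion between the reaction-layer hardness quoted in GPa and the Vickers scale used elsewhere in this section is a simple unit change (1 HV = 1 kgf/mm² = 9.80665 MPa). The minimal snippet below reproduces the quoted ≈1356 HV figure; it only illustrates the unit conversion and is not part of the original measurement procedure.

```python
# Vickers hardness (HV, kgf/mm^2) from an indentation hardness quoted in GPa:
# 1 kgf/mm^2 = 9.80665 MPa, hence HV ~= GPa * 1000 / 9.80665.
def gpa_to_hv(gpa: float) -> float:
    return gpa * 1000.0 / 9.80665

print(round(gpa_to_hv(13.3)))   # -> 1356, the reaction-layer value quoted from [13]
```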
Conclusions
In the presented work, the process of in situ synthesis of a titanium matrix composite produced from elemental powders was characterized. The strengthening of the material resulted from a reaction between titanium and B4C particles that led to the nucleation of TiB and TiC phases. The analysis and discussion of the obtained test results lead to the following conclusions:
• Using properly developed process parameters for powder mixture preparation and for the fabrication of the β titanium alloy matrix composite, a material with high homogeneity in terms of chemical composition and microstructure was obtained.
• XRD analysis and microstructural observations showed the presence of TiB and TiC strengthening phases and unreacted B4C particles. TiB whiskers and TiC plates were identified. The incomplete reaction between Ti and B4C is most likely due to the disruption of the reaction by the additional alloying elements added to the mixture in the form of elemental powders. Most of the nucleating strengthening phases were identified as TiC.
• The presence of unreacted particles and surrounding reaction layers made it possible to study the kinetics of elemental diffusion during sintering. It was shown that in addition to the diffusion of B and C into the matrix, there is a diffusion of Al and Cr in the opposite direction (into the B4C particle). The reaction layer consists mainly of Ti and B, with a small amount of Mo and V, which inhibit further diffusion of B into the matrix. The C content of the matrix is high, indicating that its diffusion is not particularly inhibited by the alloying elements.
• Hardness measurements showed an increase in hardness resulting from the reinforcement. The increase in hardness results primarily from the in situ nucleated phases and from a characteristic microstructure consisting of colonies of α″ phase lamellae.
• The study showed that, through the in situ reaction during sintering, it is possible to reinforce a β-titanium alloy made from elemental powders and that the TiB and TiC synthesis is controlled by the adequate addition of alloying elements. | 10,795.4 | 2022-08-23T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Exposure to the Dioxin-like Pollutant PCB 126 Afflicts Coronary Endothelial Cells via Increasing 4-Hydroxy-2 Nonenal: A Role for Aldehyde Dehydrogenase 2
Exposure to environmental pollutants, including dioxin-like polychlorinated biphenyls (PCBs), plays an important role in vascular inflammation and cardiometabolic diseases (CMDs) by inducing oxidative stress. Earlier, we demonstrated that oxidative stress-mediated, lipid peroxidation-derived 4-hydroxy-2-nonenal (4HNE) contributes to CMDs by decreasing the angiogenesis of coronary endothelial cells (CECs). By detoxifying 4HNE, aldehyde dehydrogenase 2 (ALDH2), a mitochondrial enzyme, enhances CEC angiogenesis. Therefore, we hypothesize that ALDH2 activation attenuates a PCB 126-mediated, 4HNE-induced decrease in CEC angiogenesis. To test our hypothesis, we treated cultured mouse CECs with 4.4 µM PCB 126 and performed spheroid and aortic ring sprouting assays, the ALDH2 activity assay, Western blotting for the 4HNE adduct levels, and real-time qPCR to determine the expression levels of Cyp1b1 and oxidative stress-related genes. PCB 126 increased the gene expression and 4HNE adduct levels, whereas it decreased the ALDH2 activity and angiogenesis significantly in MCECs. However, pretreatment with 2.5 µM disulfiram (DSF), an ALDH2 inhibitor, or 10 µM Alda 1, an ALDH2 activator, before the PCB 126 challenge respectively exacerbated and rescued the PCB 126-mediated decrease in coronary angiogenesis by modulating the 4HNE adduct levels. Finally, we conclude that ALDH2 can be a therapeutic target to alleviate environmental pollutant-induced CMDs.
Introduction
Angiogenesis is the sprouting of new blood vessels from preexisting vessels. It is one of the most important physiological properties of endothelial cells (ECs) of the vascular tissue. Angiogenesis is essential in both physiological and pathological conditions, including embryonic development, organ perfusion, wound healing, tissue regeneration, and tumor growth [1]. Coronary angiogenesis is critical to maintain proper cardiac perfusion for regulating cardiac function, metabolism, and tissue regeneration. Decreased coronary angiogenesis leads to cardiometabolic diseases (CMDs), including cardiomyopathy, heart failure with preserved ejection fraction (HFpEF), and myocardial ischemia-reperfusion injury (IRI) [2]. The regulation of angiogenesis depends on a dynamic balance between pro- and anti-angiogenic factors. In a previous study, mice showed increased levels of plasma proinflammatory cytokines, increased circulating biomarkers of CVD, altered platelet and red blood cell counts, an increased accumulation of hepatic fatty acids, and accelerated atherosclerotic lesion formation in the aortic root after 10 weeks of PCB 126 exposure [13]. In addition to proinflammatory, hypertrophic, and pro-atherosclerotic effects, PCBs have antiangiogenic effects as well. For instance, PCB treatment in HUVECs significantly decreased angiogenesis compared with the controls, as evident from an in vitro tube formation assay on Matrigel, as well as an aortic ring assay using mouse aorta [14].
Previously, we demonstrated that PCB 126 has a role in endothelial cell dysfunction, oxidative stress, and accelerated atherosclerosis [13], but its impacts on coronary angiogenesis have not been studied yet. In our cell culture studies using mouse coronary endothelial cells (MCECs), we demonstrated that a direct exogenous treatment of 4-hydroxy-2-nonenal (4HNE), a secondary metabolite of oxidative stress that is generated upon lipid peroxidation, reduces angiogenesis [1]. This antiangiogenic effect of 4HNE was aggravated by inhibiting aldehyde dehydrogenase 2 (ALDH2), a mitochondrial enzyme that metabolizes 4HNE, by pharmacologically inhibiting ALDH2 activity with disulfiram (DSF) [1]. In animal studies, we also found reduced coronary angiogenesis in mice with low intrinsic ALDH2 activity due to diabetic stress or a single-point mutation (E487K) in ALDH2, termed ALDH2*2 [15]. When we subjected the ALDH2*2 mutant diabetic mouse hearts to ischemia-reperfusion injury (IRI), we found the augmented apoptosis of CD31+ coronary endothelial cells, along with increased 4HNE adduct formation, compared to wild-type diabetic mouse hearts that underwent similar IRI [15].
Roughly ~30% of East Asians, i.e., ~400 million people, carry the ALDH2*2 mutation, leading to an increased incidence of cardiovascular diseases, including myocardial infarction, coronary spasm, diabetic cardiac complications, and heart failure [16]. Several East Asian countries have high environmental pollution, and the people in those countries experience PCB-induced health hazards. A recent study from China reported that ALDH2*2 mutant patients with metabolic disorders such as diabetes mellitus have a higher incidence of HF with preserved ejection fraction (HFpEF), a disease originating from coronary endothelial dysfunction [17].
Thus, understanding the mechanism of PCB 126-induced coronary endothelial cell damage can be beneficial in preventing several critical CMDs, including HFpEF. In this study, we specifically plan to determine if PCB 126-induced coronary endothelial cell damage is mediated via 4HNE and if modulating ALDH2 activity plays a role in this process.
Experimental Animal
Aortas from six-month-old C57BL/6 mice were used for this study. Mice were bred and maintained in the animal care facility at Henry Ford Health System. Mice were humanely euthanized, and aortas were isolated for the aortic ring assay. The animal protocols were approved by the Wayne State University Institutional Animal Care and Use Committee, which conforms to NIH standards.
Cell Culture
The mouse coronary EC (MCEC) line was obtained from Cedarlane (#CLU510) and grown as previously described [1]. We used MCECs after the fourth passage; before treatment, the cells were grown in fresh DMEM supplemented with 0.2% FBS (low serum) and 1% P/S. After 24 h of low-serum treatment, the cells were used to perform the spheroid assay and subjected to the treatment protocols described below.
Treatment Protocols
Protocol 1: We treated MCECs with 4.4 µM of PCB 126 (AccuStandard Inc., New Haven, CT, USA) or the vehicle (dimethyl sulfoxide (DMSO) (#sc 358801)) for 24-72 h, followed by the extraction of mRNA and the protein to perform real-time polymerase chain reaction (RT-PCR), Western blotting (WB), and the ALDH2 activity assay, respectively ( Figure 1A). An initial dose response study using 0, 0.44 µM, and 4.4 µM PCB 126 was used to determine an optimal dose for AhR activation.
Protocol 3: To study the role of PCB 126, along with the modulation of ALDH2 activity in coronary angiogenesis, we treated spheroids and aortic rings with 2.5 µM disulfiram (DSF), an ALDH2 inhibitor, or 10 µM Alda 1, an ALDH2 activator, for 2 h, followed by treatment with 4.4 µM PCB 126 for 48 h and 72 h, respectively. We performed microscopy for sprout growth by the spheroids and aortic rings after 48 and 72 h of PCB 126 treatment, respectively (Figure 1C).
Spheroid Assay
For making spheroids (~400 cells/spheroid), we diluted MCECs in 4 mL of DMEM supplemented with 10% FBS, 1% P/S, and 20% Methocel (#M7027-250G, SIGMA). To form spheroids, we incubated the plate upside-down in a humidified incubator set at 37 °C with a continuous supply of 5% CO2 for 24 h. We then transferred the spheroid suspension to a 15-mL conical tube and centrifuged it at 200× g for 5 min. We aspirated the supernatant and added 2 mL ice-cold Methocel containing 20% FBS. Separately, we prepared a 4 mL collagen stock solution by adding 1 mg/mL collagen (#32160405, Millipore Sigma) to ice-cold DMEM supplemented with 10% FBS, 1% P/S, and 30 ng/mL VEGF (#493-MV, R&D Systems). We mixed the ice-cold Methocel containing the spheroids with the collagen stock solution and subsequently pipetted 250 µL of the collagen stock containing the spheroids into each well of a 24-well cell culture plate. We incubated the plates containing the spheroids in a humidified incubator at 37 °C with a continuous supply of 5% CO2 for 1 h to solidify the collagen bed by polymerization. We added 250 µL of DMEM supplemented with 10% FBS, 1% P/S, and 30 ng/mL VEGF to each well of the 24-well plate containing the collagen bed with the spheroids embedded in it and subsequently incubated it in a humidified incubator at 37 °C with a continuous supply of 5% CO2 for 1 h. Then, we treated the spheroids according to the protocol described in Figure 1C. We captured images of sprout growth from the spheroids embedded in the collagen matrix using a 10× phase-contrast microscope. To determine the angiogenesis, we counted the number of nodes in sprouted spheroids under a high-power field (HPF) using ImageJ software.
Aortic Ring Assay
We performed the aortic ring assay as described previously [18]. Briefly, we isolated the aorta and washed it with sterilized PBS to remove the residual blood. We immediately transferred the cleaned aorta to a 100-mm plate containing fresh Opti-MEM (#51985-034, ThermoFisher Scientific). Using a sharp and sterile scalpel, we dissected the aorta into small ring-like fragments with an approximate length of 1 mm. From a single aorta, we made 28-34 aortic rings. Using a sterile sharp-tipped tweezer, we transferred the aortic rings to a new 60-mm/6-well plate containing DMEM supplemented with 0.2% FBS and 1% P/S. We pipetted 200 µL of the collagen stock into each well of a 24-well plate and subsequently planted three aortic rings in each well on top of the collagen bed using a sharp-tipped sterile tweezer. We planted the aortic rings on the collagen bed so that the luminal axis was perpendicular to the bottom of the well. Then, we incubated the plate in a humidified incubator set at 37 °C with a continuous supply of 5% CO2 for 1 h to solidify the collagen bed by polymerization. We added 100 µL of collagen stock to each well on top of the aortic rings to embed them into the collagen bed. Then, we treated the aortic rings according to the protocol we discussed elsewhere in the manuscript (Figure 1C). We captured images of sprout growth from the aortic rings embedded in the collagen matrix using a 10× phase-contrast microscope. We measured the relative sprouting area (%) from each aortic ring under HPF to determine the angiogenesis using ImageJ software.
Real-Time qPCR
The total RNAs from MCECs purified by Trizol reagent (Invitrogen) were reverse-transcribed to cDNA for quantification with an Applied Biosystems QuantStudio 6 Flex RT-qPCR System using SYBR Green (Applied Biosystems). Samples were analyzed in duplicate, and the expression levels were calculated using the ∆∆Ct method. The PCR primers are described in Table 1. The housekeeping gene, beta-actin (Actb), is listed at the bottom of the table. The PCB 126 treatment did not significantly impact the Actb expression.
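As a sketch of how the ∆∆Ct quantification described above works, the snippet below computes a relative expression value for a target gene normalized to Actb; the Ct values are invented placeholders, not data from this study.

```python
import numpy as np

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the delta-delta-Ct method, 2**(-ddCt)."""
    d_ct_treated = np.mean(ct_target) - np.mean(ct_ref)            # normalize to Actb
    d_ct_control = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Illustrative duplicate Ct values only (not measured data): target gene vs. Actb.
print(fold_change_ddct([24.1, 24.3], [18.0, 18.1], [25.3, 25.4], [18.0, 18.2]))
```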
ALDH2 Activity Assay
The ALDH2 activity was measured according to the protocol described elsewhere [18]. Briefly, MCECs were grown on 100-mm plates and treated according to the protocol in Figure 1B; cellular protein was then extracted, and 100 µg of total cellular protein from each sample was used for this assay. Freshly made 50 mM sodium pyrophosphate (#221368-500G, SIGMA) solution as a buffer, 2.5 mM NAD+ (#N3014-5G, SIGMA) solution as a cofactor, and 10 mM acetaldehyde (#402788-100ML, SIGMA) as a substrate were used. The enzymatic activity of ALDH2 from the cell lysate was determined spectrophotometrically by following the reductive reaction of NAD+ to NADH at a wavelength of 340 nm at 37 °C.
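A minimal sketch of how an A340-based ALDH2 activity value can be derived from the spectrophotometric readout is shown below. The NADH extinction coefficient, path length, and reaction volume are assumptions made for illustration; the original protocol in [18] should be consulted for the exact calculation.

```python
EPSILON_NADH = 6.22    # mM^-1 cm^-1 for NADH at 340 nm (literature value, assumed here)
PATH_CM = 1.0          # assumed cuvette path length, cm
ASSAY_VOL_ML = 1.0     # assumed reaction volume, mL

def aldh2_activity(delta_a340_per_min: float, protein_mg: float) -> float:
    """ALDH2 activity in umol NADH formed per min per mg protein."""
    mM_per_min = delta_a340_per_min / (EPSILON_NADH * PATH_CM)  # Beer-Lambert law
    umol_per_min = mM_per_min * ASSAY_VOL_ML                    # mM equals umol/mL
    return umol_per_min / protein_mg

# Example: a slope of 0.05 A340 units/min with 0.1 mg (100 ug) protein, as in the protocol above
print(aldh2_activity(0.05, 0.1))   # ~0.08 umol/min/mg under the stated assumptions
```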
Western Immunoblotting
4HNE protein adducts and ALDH2 protein levels were evaluated using the WB assay, as we described earlier [18]. In brief, after treatment, the cellular protein was extracted from cultured MCECs using a tissue protein extraction reagent (ThermoFisher Scientific) containing protease/phosphatase inhibitors. Specific protein bands were separated using SDS-PAGE, and the proteins were then transferred to nitrocellulose membranes. The membranes were blocked using 5% bovine serum albumin (BSA) and subsequently incubated with 4HNE mouse mAb (#ABN249, Millipore Sigma) and GAPDH (G-9) mouse mAb (#sc-365062) primary antibodies at a dilution of 1:1000 overnight at 4 °C. Depending on the sources of the primary antibodies, the membrane-bound antibodies were incubated with anti-rabbit/anti-mouse horseradish peroxidase (HRP)-coupled secondary antibodies (1:2000) for 1 h at room temperature. Immunolabeling was detected using ECL detection reagents (ThermoFisher Scientific) according to the manufacturer's protocols. The images of the protein bands were taken with a FluorChem E imaging system. The intensity of the scanned WB images was analyzed with ImageJ software (NIH). The GAPDH protein was used as a loading control to normalize the proteins of interest.
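The densitometric normalization described above (ImageJ band intensities normalized to GAPDH and expressed relative to the control) amounts to a simple ratio of ratios; a sketch with placeholder intensity values follows.

```python
def normalized_fold_change(target_band, gapdh_band, target_ctrl, gapdh_ctrl):
    """Normalize a band intensity to its GAPDH loading control and express it
    relative to the vehicle control lane."""
    return (target_band / gapdh_band) / (target_ctrl / gapdh_ctrl)

# Placeholder densitometry values from ImageJ, not measured data.
print(normalized_fold_change(target_band=5200, gapdh_band=8000,
                             target_ctrl=3100, gapdh_ctrl=8100))  # ~1.7-fold vs. control
```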
Statistical Analysis
We compiled the experimental data and calculated the means and standard error of the means (SEM) using Excel spreadsheets. To determine the statistical significance between two groups, we performed Student's t-test, and for multiple groups, we employed one-way/two-way ANOVA followed by Tukey's post hoc test, using GraphPad Prism 9.2.0.332 (GraphPad Software). We considered p < 0.05 to be statistically significant.
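A rough equivalent of this workflow in open-source tools (in place of Excel and GraphPad Prism) is sketched below with invented example data; it is not the analysis code used in the study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder sprout counts per group (not the study's data).
control = np.array([42, 39, 45, 41])
pcb126  = np.array([28, 25, 30, 27])
pcb_dsf = np.array([15, 18, 14, 16])

# Two groups: Student's t-test
t, p = stats.ttest_ind(control, pcb126)
print(f"t-test control vs PCB 126: p = {p:.4f}")

# Multiple groups: one-way ANOVA followed by Tukey's post hoc test
f, p_anova = stats.f_oneway(control, pcb126, pcb_dsf)
print(f"one-way ANOVA: p = {p_anova:.4f}")
values = np.concatenate([control, pcb126, pcb_dsf])
groups = ["control"] * 4 + ["PCB126"] * 4 + ["PCB126+DSF"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```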
PCB 126 Activates the Aryl Hydrocarbon Receptor and Increases Oxidative Stress in Cultured Mouse Coronary Endothelial Cells
It is well-established that exposure to dioxin-like pollutants such as PCB 126 elicits a xenobiotic detoxifying response through the activation of the AhR [19,20]. To examine if the AhR targets were inducible in MCECs, we exposed the cells to 4.4 µM PCB 126 for 24 to 72 h. The PCB 126 treatment increased the expression of the AhR target gene Cyp1b1 at 24 h (p = 0.086; Figure 2). A well-established mechanism of dioxin-like pollutant toxicity is an increase in oxidative stress. To examine this in the MCECs, we examined the mRNA expression of genes known to be oxidative stress-sensitive. Here, we measured the representative antioxidant genes regulated by the oxidative stress-sensitive NFE2-like BZIP Transcription Factor 2 (NFE2L2 or NRF2) [21] after vehicle or 4.4 µM PCB 126 treatment for 24 h. The expression of Nqo1 (p = 0.0002), Cat (p = 0.0022), Gsr (p = 0.0009), Gsta1 (p = 0.0064), and Txnrd1 (p = 0.0031) was significantly increased by PCB 126 treatment (Figure 2), whereas Gpx2 and Gstm1 showed no significant changes (data not provided). To examine if the observed PCB 126-induced oxidative stress was resolved over time, we next completed a time course to include 48- and 72-h exposure durations. Our results were consistent with the previous data in that oxidative stress-sensitive genes were induced in the PCB-treated group (Figure 3). The expression of Nqo1 showed a significant increase with PCB 126 treatment at all three time points, 24 h, 48 h, and 72 h, with observed 2.2-fold (p = 0.0489), 2.5-fold (p = 0.0026), and 1.5-fold (p = 0.0351) increases compared with their vehicle counterparts, respectively (Figure 3A). Significant differences in the expression of catalase (Figure 3B) were observed 24 h after treatment (1.2-fold, p = 0.0228), but there was no significant difference between the vehicle and PCB 126 at 48 and 72 h. The expression of Gsr is shown in Figure 3D, and the expression of Gsta1 (Figure 3F) was significantly increased by PCB 126 only at 24 h (2.6-fold, p = 0.0058). The expression of Gpx2 (Figure 3G) showed a similar trend to Gsr but did not reach significance at any of the three time points. Overall, most of the genes examined, except for Gsr and Gpx2, showed a downward trend of activation over time. For example, the expression of Gsta1 (Figure 3F) in PCB 126-treated cells at 72 h was significantly decreased compared to at 24 h (0.3-fold, p = 0.0367).
PCB 126 Decreases ALDH2 Activity, and Pharmacological Inhibition of ALDH2 Exacerbates the PCB-Mediated Effects, Whereas the Pharmacological Activation of ALDH2 Rescues the PCB-Mediated Effects in Cultured Mouse Coronary Endothelial Cells
The 4.4 µM PCB 126-treated MCECs showed significantly decreased ALDH2 activity compared with the control (p = 0.03) (Figure 4). A 2.5 µM DSF pretreatment for 2 h, followed by treatment with PCB 126 for 24 h, exacerbated the PCB 126-mediated decrease in ALDH2 activity (p = 0.02 vs. PCB 126 alone; p < 0.0001 vs. the control) (Figure 4). However, a 10 µM Alda 1 pretreatment for 2 h, followed by treatment with PCB 126 for 24 h, rescued the PCB 126-mediated decrease in ALDH2 activity (p = 0.0002 vs. PCB 126 alone and p < 0.0001 vs. PCB 126 + DSF) (Figure 4).
PCB 126 Increases the 4HNE Protein Adduct Levels, Whereas the Pharmacological Activation of ALDH2 Rescues the PCB-Mediated Effects in Cultured Mouse Coronary Endothelial Cells
The 4.4 µM PCB 126 treatment in MCECs significantly increased the 4HNE protein adduct levels compared with the control (p = 0.01) (Figure 5). However, a 10 µM Alda 1 pretreatment for 2 h, followed by treatment with PCB 126 for 24 h, rescued the PCB 126-mediated increase in the 4HNE protein adduct levels (p = 0.006 vs. PCB 126) (Figure 5).
Pharmacological Inhibition of ALDH2 Exacerbates the PCB 126-Mediated Decrease in Angiogenesis, Whereas the Activation of ALDH2 Attenuates the PCB 126-Mediated Effect in Cultured Mouse Coronary Endothelial Cells
The spheroid assay data showed that 4.4 µM PCB 126-treated MCECs significantly decreased sprout numbers compared with the control and vehicle (p < 0.0001 vs. the control and p = 0.009 vs. the vehicle) (Figure 6A-C,F). A 2.5 µM DSF pretreatment for 2 h, followed by the treatment with PCB 126 for 48 h, exacerbated the PCB 126-mediated decrease in the sprout counts (p < 0.0001 vs. both the control and PCB 126 alone) (Figure 6A,C,D,F). However, a 10 µM Alda 1 pretreatment for 2 h, followed by the treatment with PCB 126 for 48 h, rescued the PCB 126-mediated decrease in the sprout counts (p = 0.05 vs. PCB 126 alone and p < 0.0001 vs. PCB 126 + DSF) (Figure 6C-F).
The aortic ring assay data showed that 4.4 µM PCB 126-treated aortic rings significantly decreased the sprouting area compared with the vehicle (p = 0.002) ( Figure 7A,B,D). A 10 µM Alda 1 pretreatment for 2 h, followed by the treatment with PCB 126 for 72 h, rescued the PCB 126-mediated decrease in the sprout counts (p = 0.003 vs. PCB 126) ( Figure 7B-D).
Discussion
This study suggests that PCB 126 increases AhR activation as well as the expression of oxidative stress-sensitive genes and decreases angiogenesis in cultured MCECs by decreasing the activity of ALDH2 while increasing the 4HNE adduct levels. Some of these PCB 126-mediated effects were exacerbated by ALDH2 inhibition using DSF, whereas they were rescued by ALDH2 activation with Alda-1.
Dioxins and dioxin-like chemicals are a group of structurally related chemicals with long half-lives that are largely generated by humans through industrial processes, including incineration, the production of herbicides and pesticides, and the use of fertilizers [22]. Dioxins are highly toxic and have been found to work through a common mechanism of action that is mediated through the activation of AhR [23]. Similar to dioxins, man-made dioxin-like PCBs, such as PCB 126, have been manufactured and utilized as additives to sealants and paints and in oils used in industrial processes [24,25]. In this study, when we treated MCECs with PCB 126, there was a near-significant increase in the transcription of the Cyp1b1 gene. Therefore, we can speculate on the activation of AhR and the subsequent upregulation of Cyp1b1 and other AhR target genes in MCECs.
These dioxin-like PCBs have a coplanar structure, leading to chemical characteristics, modes of action, and toxicities similar to those of dioxins [26]. PCBs began to be mass produced in the early 1930s and have been well-studied as one of the most toxic classes of persistent organic pollutants of worldwide concern [24]. Due to their extreme chemical and thermal stability, PCBs are highly resistant to degradation, leading to bioaccumulation in the environment and in the fatty tissues of animals and humans [27]. As we mentioned earlier, PCBs were eventually banned in the United States in the late 1970s and then worldwide in 2001 after the Stockholm Convention [24,28]. Since this ban, the primary source of exposure to PCBs has been through contaminated food, including high-fat foods such as meat, fish, and dairy products [29,30]. Since PCBs are now ubiquitous contaminants in the global food chain, even humans in the general population have been exposed to PCBs [31]. Numerous epidemiological studies have found that exposure to PCBs is often associated with adverse human health effects, including vascular diseases (US EPA). Hypertension has been associated with increased serum levels of dioxin-like PCBs in highly exposed populations, especially in younger populations [32,33]. The data analysis from the NHANES (1999-2002) revealed that dioxin-like PCBs were positively associated with hypertension, but only among men [34]. Dietary exposure to PCBs was also shown to be associated with increased coronary calcium and more intense atherosclerosis in the general male population [5]. Experimental evidence has further highlighted the relationship between exposure to dioxin-like PCBs and the accelerated development of atherosclerosis [13]. In 2013, a cross-sectional study of an elderly Swedish population revealed that high serum levels of PCBs were associated with dysfunction of the left ventricle, independent of other heart failure risk factors [35]. Furthermore, a study examining Swedish population-based prospective cohorts from 1997 to 2010 found that dietary PCB exposure was associated with an increased risk of HF in both women and men [8]. Thus, PCBs are associated with cardiovascular diseases. In our study, we tested whether PCB 126 can increase oxidative stress in cultured MCECs and found that it enhanced the activation of AhR and of the corresponding antioxidant genes Nqo1, Cat, Gsr, Gpx2, Gsta1, and Txnrd1. For most of the markers of oxidative stress, the expression remained significantly elevated compared to the control even 72 h after exposure.
CMDs are among the leading causes of death worldwide. One of the contributing factors to CMDs is oxidative stress. Since the human heart turns over 15 to 20 times its own weight in adenosine triphosphate (ATP) every day in order to function [36], it is under substantial oxidative stress. The lipid peroxidation of poly-unsaturated fatty acids (PUFAs), such as arachidonic acid, by ROS and superoxide is a critical component of oxidative stress [37]. The most prominent products of lipid peroxidation are 4HNE-like reactive carbonyls [37]. Due to the oxidation process, the peroxyl radical addition and disintegration of cardiolipin on the mitochondrial lipid membrane generate considerable amounts of 4HNE [38]. 4HNE can form covalent adducts in several cellular signaling cascades [39]. Specialized enzyme systems such as glutathione S-transferases (GSTs), glutathione (GSH), aldose reductase (AR), and ALDH are involved in the detoxification of 4HNE-like reactive carbonyls.
Similarly, in the current study, we found that PCB 126 increased 4HNE and reduced ALDH2 activity, along with angiogenesis. We also found that pharmacological inhibition of ALDH2 with DSF potentiated the PCB 126-induced decrease in coronary angiogenesis. However, the activation of ALDH2 by Alda-1 increased the coronary angiogenesis.
Thus, in conclusion, we propose that ALDH2 can be an important target for reducing the endothelial cell toxicity mediated by pollutants such as PCBs. Future studies utilizing human cardiac endothelial cells and well-established rodent models of heart failure will help to establish a clearer direct relationship between dioxin-like pollutant exposure and heart failure risk. Finally, we conclude that ALDH2 activation can be a therapeutic strategy to improve coronary angiogenesis and thereby ameliorate CMDs such as HFpEF.
Author Contributions: B.R. performed the experiments and acquired, analyzed, interpreted, and presented the data, as well as wrote the manuscript. Z.Y. performed the experiments and acquired, analyzed, interpreted, and presented the data, as well as contributed to the manuscript writing. G.P. edited the manuscript. K.R. participated in the study conception and edited the manuscript. M.A. participated in the study conception and edited the manuscript. R.S. participated in the study conception and edited the manuscript. M.C.P. conceived the study design, planned the experiments, supervised part of the research team, interpreted the data, and edited the manuscript. S.S.P. conceived the study design, planned the experiments, supervised part of the research team, interpreted the data, and wrote and edited/revised the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: S.S.P. was supported by a grant from the National Heart, Lung, and Blood Institute, 1R01HL139877-01A1, and an internal grant from the Henry Ford Health System, A10249. M.C.P. was supported by two grants from the National Institute of Environmental Health Sciences, P30ES020957 and R00ES028734. B.R. was supported by a predoctoral fellowship grant from the American Heart Association, 835262.
Institutional Review Board Statement: The animal study protocol was approved by the Institutional Animal Care and Use Committee (IACUC) of Wayne State University (#IACUC-20-11-2913, 4 August 2021). | 6,185.2 | 2022-06-01T00:00:00.000 | [
"Biology"
] |
New Refinements and Improvements of Some Trigonometric Inequalities Based on Padé Approximant
Inequalities involving trigonometric functions are used in many areas of pure and applied mathematics. Trigonometric inequalities have attracted many researchers. Many improvements of Jordan's inequality [1-11], Kober's inequality [12-16], and Becker-Stark's inequality [4, 17, 18] have been obtained. Recently, Bercu presented a Padé approximant method [19] and obtained the following inequalities:
Multiple-Point Padé Approximant Method
The Padé approximant has been studied in many works [19,21-24]. In particular, Bercu et al. presented good results for several trigonometric inequalities using the Padé approximant. In this section, we present a multiple-point Padé approximant method. Given a bounded smooth function f(x), let R(x) be a rational polynomial interpolating f(x) at the multiple points x_1, x_2, ..., x_k such that the conditions in equation (9) hold, where E(x) = (1 + b_1 x + b_2 x^2 + ... + b_q x^q) · f(x) − (a_0 + a_1 x + ... + a_p x^p) and p ≥ 0 and q ≥ 1 are two given integers. There are p + q + 1 unknowns in equation (9), namely a_i and b_j, i = 0, 1, 2, ..., p, j = 1, 2, ..., q. By selecting suitable values of l_1, l_2, ..., l_k, we can obtain the polynomial R(x) by solving equation (9). The general Padé approximant method is a special case of the multiple-point Padé approximant; here, we just need to consider one point. If f can be written as a formal power series f(x) = c_0 + c_1 x + c_2 x^2 + ···, where the coefficients c_j, j = 0, 1, 2, ..., are constants, Taylor's expansion is one of the most common ways to obtain such a power series of a function.
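As an illustration of the one-point special case, the sketch below solves the linear system for the coefficients a_i and b_j directly from Taylor coefficients; the test function exp(x) and the degrees (3, 3) are arbitrary choices made for the example, not values taken from the paper.

```python
import numpy as np
from math import factorial

def pade_from_taylor(c, p, q):
    """Coefficients (a, b) of the (p, q) Pade approximant P(x)/Q(x),
    Q(x) = 1 + b_1 x + ... + b_q x^q, from Taylor coefficients c of f."""
    c = np.asarray(c, dtype=float)
    # q linear equations: the coefficients of x^(p+1) .. x^(p+q) in Q(x)f(x) - P(x) vanish
    A = np.array([[c[n - j] if n - j >= 0 else 0.0 for j in range(1, q + 1)]
                  for n in range(p + 1, p + q + 1)])
    b_tail = np.linalg.solve(A, -c[p + 1:p + q + 1])
    b = np.concatenate(([1.0], b_tail))
    a = np.array([sum(b[j] * c[n - j] for j in range(min(n, q) + 1))
                  for n in range(p + 1)])
    return a, b

# Sanity check with exp(x), whose Taylor coefficients are 1/j!
c = [1.0 / factorial(j) for j in range(7)]
a, b = pade_from_taylor(c, 3, 3)
x = 0.5
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print(approx, np.exp(x))   # the two values agree to roughly seven decimal places
```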
The Padé approximant R_f(x) of degree (p, q) of the function f is determined by equation (10). The Padé approximant is considered the "best" approximation of a function by a rational function of a given degree. The rational approximation is also good for series with alternating terms and poor polynomial convergence.
This is our motivation for using the Padé approximant to approximate trigonometric functions and improve these trigonometric inequalities. Different values of p and q will affect the approximation performance. By selecting suitable values of p and q, we can obtain the "best" approximant. Let (p, q) = (k, k); then we obtain a simple result. The result is a special case of (10).
It is well known that tan(x) admits the expansion given in equation (11), with the coefficients defined in equation (12), for x ∈ (0, π/2) and n ∈ N_0, where the B_i are Bernoulli's numbers.
Using the Padé approximant and equation (11), we obtain a better approximation of the tangent function. Here, we need to pay attention to the values of c_j in formula (10). Let c_{2j} = 0 and c_{2j−1} = T(j), where T(j) is given in (12); we can then obtain the Padé approximant of tan(x). In the same way, we can also obtain the Padé approximants of other trigonometric functions. Table 1 gives the comparison between the Padé approximant and the Taylor series expansion of the tangent function. It is easy to see that the maximum approximation error of the Padé approximant is less than the error of the corresponding Taylor polynomial. The advantage of the Padé approximant is more obvious with the increase of the polynomial degree. The bottom row of Table 1 shows that the maximum approximation error of the Taylor polynomial is 6.0401 × 10^−3, whereas the maximum approximation error of the Padé approximant is 2.2531 × 10^−9. At the same time, we can find that the form of the Padé approximant is simpler because of its lower degree.
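The gap between the Taylor polynomial and the Padé approximant of tan(x) can be checked numerically; the sketch below uses SciPy's pade helper with assumed degrees (5, 4) and an assumed test interval (0, π/4), so it will not reproduce the exact figures of Table 1, only the qualitative gap.

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of tan(x) up to x^9; the degrees (5, 4) and the interval
# (0, pi/4) are assumptions made for this illustration only.
c = [0.0, 1.0, 0.0, 1/3, 0.0, 2/15, 0.0, 17/315, 0.0, 62/2835]
p_num, q_den = pade(c, 4)                 # denominator order 4 -> numerator order 5

xs = np.linspace(0.01, np.pi / 4, 1000)
taylor = np.polyval(c[::-1], xs)          # degree-9 Taylor polynomial
rational = p_num(xs) / q_den(xs)          # (5, 4) Pade approximant
print("max Taylor error:", np.max(np.abs(np.tan(xs) - taylor)))
print("max Pade error:  ", np.max(np.abs(np.tan(xs) - rational)))
# The Pade error is several orders of magnitude smaller on this interval.
```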
New Improvements of Jordan's, Kober's, and Becker-Stark's Inequalities
In this section, we give new improvements of Jordan's, Kober's, and Becker-Stark's inequalities based on the Padé approximant.
Conclusions and Analysis
In this paper, a multiple-point Padé approximant method is presented for approximating and bounding some trigonometric functions. We find that the Padé approximant is a better approximation of trigonometric functions; this conclusion is verified in Table 1. We give new refinements and improvements of Jordan's, Kober's, and Becker-Stark's inequalities based on the Padé approximant. In order to compare our results with the previous methods, we introduce the concept of the maximum error. The maximum error is the most important index to measure the upper and lower bounds of an inequality. MaxError_l denotes the maximum error between a function and its lower bound. MaxError_u denotes the maximum error between a function and its upper bound. Table 2 gives the comparison of the maximum errors between sinc(x) and its bounds for the different methods. It is obvious that the results of this paper are superior to the previous conclusions. The upper and lower bounds of inequality (13) are tighter than those of inequalities (1) and (4). The results for cos(x) are presented in Table 3. MaxError_l and MaxError_u of inequality (17) are the smallest of the three methods in Table 3. Table 4 gives the comparison of the maximum errors between tan(x) and its bounds for this paper and Zhang et al.'s paper [20]. Because inequality (3) holds in (0, 1.5701), not in [0, π/2], we no longer consider it in this comparison.

Table 2: Comparison of the maximum errors between sinc(x) and its bounds for different methods.
Method | MaxError_l | MaxError_u
Bercu [19] (inequality (1)) | 2.5981 × 10^−3 | 6.2382 × 10^−5
Zhang et al. [20] (inequality (4)) | 1.0615 × 10^−6 | 1.7998 × 10^−6
Results of this paper (inequality (13)) | 1.3042 × 10^−10 | 1.3411 × 10^−8

Table 3: Comparison of the maximum errors between cos(x) and its bounds for different methods.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 1,370.6 | 2020-05-19T00:00:00.000 | [
"Mathematics"
] |
Review of Current Standard Model Results from ATLAS
This talk highlights results selected from the Standard Model research programme of the ATLAS Collaboration at the Large Hadron Collider. Results using data from pp collisions at √s = 7, 8 TeV in LHC Run 1 as well as results using data at √s = 13 TeV in LHC Run 2 are covered. The status of cross section measurements from soft QCD processes and jet production as well as photon production is presented. The presentation extends to vector boson production with associated jets. Precision measurements of the production of W and Z bosons, including a first measurement of the mass of the W boson, m_W, are discussed. The programme to measure electroweak processes with diboson and tri-boson final states is outlined. All presented measurements are compatible with Standard Model predictions and allow further constraints.
Introduction
The strong and electroweak sectors are central components of the Standard Model (SM) of particle physics. The strong force is described by Quantum Chromodynamics. The high precision and large data statistics available to the ATLAS Collaboration [1] at the LHC [2] allow us to further our understanding of these processes, from a few charged particles in the non-perturbative regime to the perturbative regime manifested by high jet multiplicities. The electroweak force is intricately connected to the mechanism of electroweak symmetry-breaking (EWSB). Many electroweak processes are also important backgrounds to the study of the Higgs boson and searches for new physics, or are sensitive to contributions from new physics themselves. Consequently, a large number of related results have been produced by ATLAS. With excellent progress on the detector understanding in recent years, the data from LHC Run 1 with √s = 7, 8 TeV are now exploited to their full potential and provide precision measurements of strong and electroweak processes at those energies. Results from LHC Run 2 with √s = 13 TeV data are also available now. Figure 1 summarizes the status of this work; in the following, selected recent results are discussed.
Physics of the Strong Force
Measurements of charged particle production ("Minimum Bias", MB) are very important to describe the pile-up underlying all LHC data. ATLAS has measured MB distributions in all data sets, with track transverse momenta down to 100 MeV [4]. The underlying event (soft activity produced along with the hard scatter, i.e. initial- and final-state radiation, multiparticle interactions and color reconnection processes) has been studied most recently in 13 TeV data [5]. The measurement exploits the clean transverse region with respect to the leading particle. The agreement of soft-QCD data with models is generally within 5%. In the perturbative regime, jet and photon production cross sections are investigated. For an inclusive measurement at 8 TeV [6], jets are reconstructed using the anti-k_t jet clustering algorithm. The dominant systematic uncertainty stems from the jet energy calibration. A significant reduction of the uncertainties compared to previous jet cross section measurements [7] has been achieved. QCD predictions at NLO with the MMHT2014 PDF set, corrected for non-perturbative and electroweak effects, describe the data well. On the other hand, recent measurements at 13 TeV [8] compared to NNLO calculations show tensions depending on the QCD scale.
Jet measurements are also used to extract fundamental constants. Energy-energy correlations of multijet events measured in the transverse plane (TEEC) and their asymmetric version (ATEEC) are sensitive to the strong coupling constant α_s. Fits to ATEEC distributions yield the most precise value from ATLAS, α_s(m_Z) = 0.1196 ± 0.0013 (exp.) +0.0075/−0.0045 (theo.) [9,10]. Prompt photon production is a colorless probe of pQCD. ATLAS measures this process at 13 TeV for E_T^γ up to 1.5 TeV [11]. The main challenge is background from jets misidentified as photons, which is subtracted in a data-driven approach. Good agreement with NLO calculations and Monte Carlo predictions is observed. Other recent measurements include inclusive di-photon cross sections at 8 TeV [12] and γ+jet at 13 TeV [13].
Electroweak Physics
High precision measurements of W and Z production cross sections are now available [14]. The measurements are used to obtain a new PDF set, ATLAS-epWZ16, from a QCD analysis of LHC and HERA data, and provide confirmation that the strange-to-light sea quark density in the proton is close to unity at low x_Bj. A new CKM parameter V_cs measurement from a fit to these data is competitive with previous results. First inclusive W, Z cross sections from 13 TeV data are already sensitive to different PDF sets [15]. For the first time, a measurement of the W mass m_W has been made by ATLAS at the LHC [16]. The result m_W = 80370 ± 7 (stat.) ± 11 (exp. syst.) ± 14 (mod. syst.) MeV = 80370 ± 19 MeV is competitive with current m_W measurements from the Tevatron and compatible with the PDG world average, as well as with the SM prediction from a global electroweak fit. Measurements of vector boson production in association with jets at 8 TeV and 13 TeV are also available [17,18]. The cross section has been measured as a function of a wide range of differential distributions at jet multiplicities of up to 7 jets. Comparisons to various fixed-order calculations at (N)NLO are in reasonable agreement with the data.
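The quoted total uncertainty of 19 MeV is consistent with adding the three quoted components in quadrature (treating them as uncorrelated), as the following one-line check shows.

```python
from math import sqrt

stat, exp_syst, mod_syst = 7.0, 11.0, 14.0          # MeV, components quoted in [16]
total = sqrt(stat**2 + exp_syst**2 + mod_syst**2)   # quadrature sum of uncertainties
print(f"total uncertainty ~ {total:.1f} MeV")       # ~19.1 MeV, i.e. the quoted 19 MeV
```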
A large number of diboson processes is now observable and has been analysed [19-24]. The WW process has received much attention [25-27] since its final state resembles that of top-quark pair production. Great progress has been made in the experimental and theoretical understanding of WW production. Generally, good agreement with the SM is now observed in all diboson final states, and a whole industry extracting constraints on aTGCs and effective field theory parameters exists. For a review consult Ref. [29]. Even more complex final states are now also in reach, such as vector boson fusion (VBF), vector boson scattering (VBS) and tri-boson production. Some of them have already been observed. Cross section measurements of several explored processes are shown in Figure 2. The processes are also sensitive to anomalous triple gauge couplings (aTGC) and quartic gauge couplings (aQGC) and allow them to be constrained. VBS is intimately connected to the EWSB mechanism and remains an essential probe of the SM even after the discovery of the Higgs boson. Spectacular signatures are expected for vector boson fusion and VBS, with two high-p_T forward jets, one or two high-p_T central leptons and rapidity gaps in-between. In nature these turn out to be challenging to observe due to large reducible and irreducible backgrounds. Nevertheless, both electroweak (VBF) production of Z bosons and of W bosons [28] have now been observed in fits to the electroweak component in enriched selections. Exclusive W-pair production γγ → W+W− is important to understand as a background to Higgs production. The process has now been observed by ATLAS at 3σ significance [30]. Analysis of same-sign W±W± + jj final states remains the seminal analysis in VBS studies and has been measured with ever-improving precision and theory understanding [31].

Figure 2: The data/theory ratio for several vector boson fusion, vector boson scattering, and triboson fiducial cross section measurements, corrected for leptonic branching fractions [3]. All theoretical expectations were calculated at NLO. The dark-color error bar represents the statistical uncertainty. The lighter-color error bar represents the full uncertainty, including systematics and luminosity uncertainties. The luminosity used and reference for each measurement are also shown. Uncertainties for the theoretical predictions are quoted from the original ATLAS papers. They were not always evaluated using the same prescriptions for PDFs and scales. Not all measurements are statistically significant yet.
Summary
A large number of SM processes has been explored at the LHC. No significant deviations from SM predictions are observed anywhere reinforcing our trust in the description of nature by the strong and electroweak sectors of the SM. High rate processes allow precision tests of NNLO SM predictions. This precision will increase even further with more luminosity and detector understanding. Limits on aTGCs and aQGCs have been derived and are now testing the sensitivity of our experiments. The planned LHC Run 3 and upgrade programme should bring a big increase in available integrated luminosity, especially for the High Luminosity LHC. If we do not find new physics elsewhere (for example new resonances from SUSY), the electroweak sector is the best place to probe. | 1,965.6 | 2018-03-07T00:00:00.000 | [
"Physics"
] |
Cost–utility analysis of liraglutide compared with sulphonylurea or sitagliptin, all as add-on to metformin monotherapy in Type 2 diabetes mellitus
Aim To investigate the cost-effectiveness of liraglutide as add-on to metformin vs. glimepiride or sitagliptin in patients with Type 2 diabetes uncontrolled with first-line metformin. Methods Data were sourced from a clinical trial comparing liraglutide vs. glimepiride, both in combination with metformin, and a clinical trial comparing liraglutide vs. sitagliptin, both as add-on to metformin. Only the subgroup of patients in whom liraglutide was added to metformin monotherapy was included in the cost–utility analysis. The CORE Diabetes Model was used to simulate outcomes and costs with liraglutide 1.2 and 1.8 mg vs. glimepiride and vs. sitagliptin over patients’ lifetimes. Treatment effects were taken directly from the trials. Costs and outcomes were discounted at 3.5% per annum and costs were accounted from a third-party payer (UK National Health System) perspective. Results Treatment with liraglutide 1.2 and 1.8 mg resulted, respectively, in mean increases in quality-adjusted life expectancy of 0.32 ± 0.15 and 0.28 ± 0.14 quality-adjusted life years vs. glimepiride, and 0.19 ± 0.15 and 0.31 ± 0.15 quality-adjusted life years vs. sitagliptin, and was associated with higher costs of £3003 ± £678 and £4688 ± £639 vs. glimepiride, and £1842 ± £751 and £3224 ± £683 vs. sitagliptin, over a patient’s lifetime. Both liraglutide doses were cost-effective, with incremental cost-effectiveness ratios of £9449 and £16 501 per quality-adjusted life year gained vs. glimepiride, and £9851 and £10 465 per quality-adjusted life year gained vs. sitagliptin, respectively. Conclusions Liraglutide, added to metformin monotherapy, is a cost-effective option for the treatment of Type 2 diabetes in a UK setting.
Introduction
Diabetes is among the most common chronic illnesses worldwide, with Type 2 diabetes mellitus accounting for approximately 90% of all cases [1]. Type 2 diabetes is progressive and is characterized by increased insulin resistance, generally associated with obesity, and deteriorating b-cell function, resulting in chronic hyperglycaemia. As the disease progresses, so do the micro-and macrovascular complications associated with it, which have a negative impact on the quality of life of patients and pose a huge economic burden to the health system [2,3]. For example, in the UK, the cost of Type 2 diabetes accounts for 7-12% of the total National Health Service (NHS) expenditure [4].
The risk of micro-and macrovascular complications is strongly associated with hyperglycaemia, and each reduction of 11 mmol ⁄ mol (1%) in HbA 1c significantly reduces the risk of developing these complications in patients with Type 2 diabetes [5]. In the UK, the National Institute for Health and Clinical Excellence (NICE) recently issued recommendations for the optimum management of Type 2 diabetes, taking into consideration the effectiveness, safety and cost-effectiveness of the available treatments (NICE, 2009) [6]. NICE recommends lifestyle modifications and metformin as first-line therapy, with the subsequent stepwise additions of a sulphonylurea and insulin. A thiazolidinedione or a dipeptidyl peptidase-4 (DPP-4) inhibitor may be considered as second-line options in place of a sulphonylurea if there is a significant risk of hypoglycaemia, or if a sulphonylurea is contraindicated or not tolerated. Sitagliptin (a DPP-4 inhibitor) or a thiazolidinedione can be considered as third-line therapy in place of insulin if insulin is unacceptable. Exenatide may also be considered as a third-line option in combination with metformin and a sulphonylurea in patients with a BMI above 35 kg ⁄ m 2 and problems associated with high weight, or BMI under 35 kg ⁄ m 2 if insulin is unacceptable because of occupational implications, or if weight loss would benefit other co-morbidities [6]. The place of liraglutide (Victoza Ò ; Novo Nordisk A ⁄ S, Bagsvaerd, Denmark) in therapy has also been evaluated recently by NICE [7].
Recommendations advocate the use of liraglutide 1.2 mg daily in triple therapy (with metformin and a sulphonylurea or metformin and a thiazolidinedione) under the same conditions described for exenatide, and in dual therapy (with metformin or a sulphonylurea) if metformin or sulphonylureas and thiazolidinediones or DPP-4 inhibitors cannot be tolerated or are contraindicated [7]. The American Diabetes Association and the European Association for the Study of Diabetes issued similar recommendations in a consensus algorithm based on effectiveness and safety data from clinical trials and on clinical experience, taking into account benefits, risks and costs of the different available treatments [8]. In clinical trials, glucagon-like peptide-1 (GLP-1) receptor agonists, such as liraglutide and exenatide, have been shown to reduce HbA 1c to at least the same, and often to a greater, extent than traditional oral hypoglycaemic agents. Both GLP-1 receptor agonists and DPP-4 inhibitors, such as sitagliptin, have the additional advantages of reducing the risk of hypoglycaemia, as their insulinotropic effect is glucose-dependent, and of inducing weight loss (in the case of GLP-1 receptor agonists) or being weight-neutral (in the case of DPP-4 inhibitors) [9]. Additionally, GLP-1 receptor agonists have been shown to have a positive effect on systolic blood pressure [9]. Despite these advantages, sulphonylureas continue to be the preferred second-line choice after metformin, with incretin-based therapies only recommended as second- or third-line therapies in special circumstances [6,8]. The fact that incretin-based therapies are considered more expensive may contribute to these therapies not being recommended more widely.
Liraglutide is a GLP-1 analogue approved in 2009 for use in Europe, including the UK. Because of its recent approval, studies evaluating the cost-effectiveness of liraglutide are scarce. The aim of our study was to investigate the costeffectiveness of liraglutide as add-on to metformin compared with glimepiride or sitagliptin in patients failing treatment with first-line metformin.
Data sources
The cost-utility evaluation carried out in this study is based on patients who participated in two studies performed as part of the phase III clinical development programme for liraglutide: a study comparing liraglutide vs. glimepiride (LEAD-2 study), both in combination with metformin, and a study comparing liraglutide vs. sitagliptin, both also in combination with metformin [10,11]. In the LEAD-2 study, adults with Type 2 diabetes and HbA 1c between 53 and 97 mmol ⁄ mol (7-11%) (if previously treated with oral hypoglycaemic agent monotherapy for at least 3 months) or HbA 1c between 53 and 86 mmol ⁄ mol (7-10%) (if previously treated with oral hypoglycaemic agent combination therapy for at least 3 months) were included. Additional inclusion criteria were age between 18 and 80 years and BMI £ 40 kg ⁄ m 2 . To facilitate recruitment into the trial, previous treatment with other oral anti-diabetes drugs, as monotherapy or in combination, was allowed [10]. However, only the subgroup of patients in which liraglutide or glimepiride was added to metformin monotherapy (approximately 30% of the total trial population) was included in the cost-utility analysis presented here, as this was considered to be more reflective of actual clinical practice. In the liraglutide vs. sitagliptin study, adults with Type 2 diabetes, previously treated with metformin monotherapy for at least 3 months and with HbA 1c between 58 and 86 mmol ⁄ mol (7.5-10.0%) were included. Additional inclusion criteria were age between 18 and 80 years and BMI £ 45 kg ⁄ m 2 [11]. Demographic characteristics of the patients enrolled in these studies have previously been described [10,11].
The CORE Diabetes Model
The cost-utility evaluation presented here was carried out using the CORE Diabetes Model, details of which have been published previously by Palmer et al. [12]. The CORE diabetes model is a validated [13] non-product-specific policy analysis tool based on a series of 15 sub-models that simulate major complications of diabetes: cardiovascular disease, stroke, neuropathy, foot ulcer ⁄ amputation, eye disease, nephropathy, hypoglycaemia, lactic acidosis and non-specific mortality [12]. For each submodel, a combination of semi-Markov model structure and Monte Carlo simulations were used. This structure allows patients to develop multiple complications within each model cycle and over the simulation period. The model projects outcomes for populations, considering baseline cohort characteristics, past history of complications, concomitant medications, current and future diabetes management, screening strategies and changes in physiological variables over time. In this way, incidence of complications, life expectancy, quality-adjusted life expectancy and total costs within populations can be calculated. The results can be expressed in terms of quality-adjusted life years (QALYs) gained and incremental cost-effectiveness ratios, i.e. the cost per QALY gained. An incremental cost-effectiveness ratio threshold of £20 000-30 000 per QALY gained is generally considered to represent good value for money in the UK [14].
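The CORE Diabetes Model itself is a proprietary and far more detailed tool; purely to illustrate how a semi-Markov cohort structure accumulates discounted QALYs and costs and yields an ICER, a toy three-state sketch with invented parameters is given below. None of the numbers correspond to the CORE model or to this study.

```python
import numpy as np

def cohort_model(trans, costs, utilities, years=40, disc=0.035):
    """Discounted QALYs and costs per patient for a simple 3-state annual-cycle
    Markov cohort (well, complication, dead). Illustrative only."""
    state = np.array([1.0, 0.0, 0.0])           # everyone starts in the 'well' state
    qalys = cost = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + disc) ** t             # 3.5% annual discounting
        qalys += d * state @ utilities
        cost  += d * state @ costs
        state = state @ trans                   # advance one annual cycle
    return qalys, cost

# Invented placeholder inputs -- NOT the CORE model's parameters.
utilities = np.array([0.80, 0.60, 0.0])
costs_a   = np.array([1200.0, 4700.0, 0.0])     # therapy A: pricier drug, fewer complications
costs_b   = np.array([400.0, 3900.0, 0.0])      # therapy B: cheaper drug
trans_a = np.array([[0.93, 0.05, 0.02], [0.0, 0.92, 0.08], [0.0, 0.0, 1.0]])
trans_b = np.array([[0.90, 0.08, 0.02], [0.0, 0.92, 0.08], [0.0, 0.0, 1.0]])

q_a, c_a = cohort_model(trans_a, costs_a, utilities)
q_b, c_b = cohort_model(trans_b, costs_b, utilities)
print("ICER:", (c_a - c_b) / (q_a - q_b), "GBP per QALY gained")
```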
Simulation cohorts and treatments
A simulated cohort of patients was defined (Table 1), with baseline demographics and complications taken from the respective clinical trial used in the analysis. Treatment effects with liraglutide (1.2 and 1.8 mg) vs. glimepiride and liraglutide (1.2 and 1.8 mg) vs. sitagliptin were taken directly from the clinical trials (Table 2). Treatment duration was set to 5 years, after which basal insulin therapy was started in an attempt to replicate clinical practice. Simulations were run over patients' lifetimes to capture all events and complications related to the progression of Type 2 diabetes.
Costs and utilities
Costs were accounted from a third-party payer (National Health Service) perspective. Where possible, unit costs for complications were derived from UK-specific published sources in patients with Type 2 diabetes and inflated to 2008 values, the latest available at the time of analysis, using the composite National Health Service price inflation index from the Personal Social Services Research Unit (PSSRU). A summary of the costs of medicines and complications is given in the Supporting Information (Table S1). The utilities used in the base case presented here are summarized in the Supporting Information (Table S2). The costs of medicines, self-monitored blood glucose testing equipment and needles were taken from the Monthly Index of Medical Specialities (MIMS) August 2009 [15]. Utilities and disutilities (i.e. measures of the impact on quality of life) associated with complications of diabetes were obtained from the literature and, where possible, taken from populations with Type 2 diabetes. Discount rates of 3.5% per annum for both costs and clinical outcomes were applied in the base case.
Sensitivity analyses
To assess the impact of varying the key assumptions and outcomes used in the base-case analysis, several sensitivity analyses were performed: treatment duration was set to 3 and 8 years; an alternative weight progression was used in which, when treatment is switched, BMI reverts to baseline level and then increases as predicted with insulin treatment; discount rates were set to 0 and 6% for both costs and outcomes; and hypoglycaemia disutility was removed and also set to 0.0052, as used in the technology appraisal of insulin glargine carried out by NICE [18]. Additional analyses to investigate the contribution of individual clinical effects (weight, cholesterol and triglycerides, systolic blood pressure and HbA 1c ) to quality-adjusted life expectancy were also performed. The values used in the sensitivity analyses were derived from expert consensus or were previously used by, or recommended by, NICE in its Guide to the Methods of Technology Appraisal [16,17]. The results of these analyses are presented as approximate relative impacts of the base-case benefit. It should be noted that these values represent crude approximations (and therefore will not typically sum to 100%), as sensitivity analyses reflecting changes in multiple clinical variables have a complex impact on outcomes (in relation to the base case).
Statistical methodology
A non-parametric bootstrapping approach was used for this health economic analysis. Using second-order Monte Carlo simulation, Type 2 diabetes progression was simulated in 1000 patients through the model 1000 times to calculate the mean and standard deviation of life expectancy, quality-adjusted life expectancy, and costs [12]. The results from the bootstrapped simulations were used to estimate the probability that each treatment is cost-effective at a given willingness-to-pay threshold (Fig. 1).
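A schematic of the second-order simulation described above, with the CORE model itself replaced by a placeholder outcome generator (the patient and replicate counts are those quoted in the text; the distributions are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_replicates = 1000, 1000   # counts quoted in the text

# Placeholder for per-patient outcomes; a real analysis would draw these
# from the CORE Diabetes Model sub-model simulations.
costs = rng.normal(15_000, 4_000, size=(n_replicates, n_patients))
qalys = rng.normal(8.0, 1.5, size=(n_replicates, n_patients))

mean_costs = costs.mean(axis=1)   # one cohort mean per replicate
mean_qalys = qalys.mean(axis=1)

print(f"lifetime cost: mean {mean_costs.mean():,.0f}, SD {mean_costs.std():,.0f}")
print(f"QALYs        : mean {mean_qalys.mean():.2f}, SD {mean_qalys.std():.2f}")
```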
Base-case analyses
Liraglutide vs. glimepiride
Treatment with liraglutide 1.2 and 1.8 mg resulted, respectively, in a mean increase in quality-adjusted life expectancy of 0.32 ± 0.15 QALYs and 0.28 ± 0.14 QALYs, and was associated with higher costs of £3003 ± £678 and £4688 ± £639 over a patient's lifetime, compared with glimepiride. The estimated incremental cost-effectiveness ratios for liraglutide 1.2 and 1.8 mg vs. glimepiride were, respectively, £9449 and £16 501 per QALY gained (Table 3). At a willingness to pay of £20 000 per QALY gained, liraglutide 1.2 mg is a cost-effective treatment option in over 88% of cases, whereas liraglutide 1.8 mg is a cost-effective treatment option in over 65% of cases. If the willingness-to-pay threshold is increased to £30 000, the probability that the treatment will be cost-effective increases to over 93% for liraglutide 1.2 mg and 83% for liraglutide 1.8 mg (Fig. 1).
Liraglutide vs. sitagliptin
Compared with sitagliptin, mean increases in quality-adjusted life expectancy of 0.19 ± 0.15 QALYs and 0.31 ± 0.15 QALYs, and higher costs of £1842 ± £751 and £3224 ± £683, were associated with liraglutide 1.2 and 1.8 mg, respectively, over a patient's lifetime. The estimated incremental cost-effectiveness ratios for liraglutide 1.2 and 1.8 mg vs. sitagliptin were, respectively, £9851 and £10 465 per QALY gained (Table 3). At a willingness to pay of £20 000, liraglutide 1.2 mg is a cost-effective treatment option in over 77% of cases, while liraglutide 1.8 mg is a cost-effective treatment option in over 85% of cases. The probability that the treatment will be cost-effective increases to 82% for liraglutide 1.2 mg and 92% for liraglutide 1.8 mg when the willingness-to-pay threshold is increased to £30 000.
Liraglutide vs. glimepiride and liraglutide vs. sitagliptin
Decreasing the discount rate resulted in a lower incremental cost-effectiveness ratio with liraglutide 1.2 mg, while increasing the discount rate increased the incremental cost-effectiveness ratio. Reducing treatment duration from 5 to 3 years resulted in a lower incremental cost-effectiveness ratio for liraglutide 1.2 mg (Table 4). In the shorter treatment duration simulation, the full clinical benefit of liraglutide was achieved, but the cost was reduced as liraglutide pharmacy costs were only accounted for 3 years. Increasing treatment duration to 8 years resulted in a higher incremental cost-effectiveness ratio for liraglutide 1.2 mg, as, in this simulation, liraglutide pharmacy costs were accounted for 8 years. The length of liraglutide treatment for individual patients in a real-life setting will vary, but it is reassuring to note that treatment durations of 3, 5 and 8 years are all cost-effective at a willingness to pay of £20 000 per QALY gained (Table 4). Similar trends were observed for liraglutide 1.8 mg (data not shown).
Contribution of clinical effects to QALYs gained
The results of the additional analyses carried out to investigate the contribution of individual clinical effects (weight, cholesterol and triglycerides, systolic blood pressure and HbA1c) to QALYs showed that the gain in QALYs with liraglutide 1.2 mg over glimepiride is distributed fairly equally between systolic blood pressure (32%), weight (30%) and cholesterol and triglycerides (27%), with a smaller contribution from HbA1c (11%). Conversely, the gain in QALYs with liraglutide 1.2 mg over sitagliptin arises mainly from improvements in HbA1c (54%) and weight (44%). Cholesterol and triglycerides and systolic blood pressure changes had a negligible effect on QALYs gained (−3 and −1%, respectively).
Discussion
The cost per QALY vs. glimepiride and vs. sitagliptin, for both doses of liraglutide investigated in this cost-utility modelling study (1.2 and 1.8 mg), ranged between £9000 and £16 000. Treatment with liraglutide costs more than with the comparators, but these increased costs were partially offset by reductions in the costs associated with complications, because the risk of developing complications decreases with liraglutide treatment as a result of its combined beneficial effects on body weight, blood glucose, systolic blood pressure and other cardiovascular risk factors. The values obtained lie below the threshold of £20 000-30 000 per QALY, indicating that liraglutide in combination with metformin monotherapy is a cost-effective option for the treatment of Type 2 diabetes compared with glimepiride or sitagliptin. The sensitivity analyses performed indicated that, in the liraglutide vs. glimepiride comparison, systolic blood pressure, weight and cholesterol were the key drivers of cost-effectiveness, with a relatively small contribution from HbA1c. This was to be expected, as both liraglutide and glimepiride treatment achieved similar HbA1c reductions in the clinical trial on which this health economic evaluation is based, while liraglutide had a greater impact on reducing systolic blood pressure, weight and cholesterol compared with glimepiride [10]. In contrast, HbA1c and weight were the key drivers of cost-effectiveness in the liraglutide vs. sitagliptin comparison, with only small effects from systolic blood pressure and cholesterol, reflecting the greater effect of liraglutide vs. sitagliptin on reducing HbA1c and weight, and the comparable effects of both of these therapies on systolic blood pressure and cholesterol [11]. In the liraglutide vs. sitagliptin comparison, a preliminary subgroup analysis in which patients were stratified by baseline BMI (all patients, BMI > 30 kg/m², or BMI > 35 kg/m²) showed that the cost-effectiveness of liraglutide 1.2 mg vs. sitagliptin improved with increasing BMI, with incremental cost-effectiveness ratios of £9851, £7593 and £6125, respectively (see also Supporting Information, Table S3), probably because weight loss with liraglutide increases with increasing BMI [19]. This initial finding is interesting and may warrant further investigation at a later date. Treatment satisfaction was also assessed in the liraglutide vs. sitagliptin clinical trial using the Diabetes Treatment Satisfaction Questionnaire (DTSQ), and patients reported greater treatment satisfaction with liraglutide [11]. This result was not taken into consideration in the cost-utility analysis presented here. However, had it been, the cost-effectiveness of liraglutide vs. sitagliptin may have been even further enhanced, as treatment satisfaction could translate into greater adherence and improved clinical outcomes [20]. Furthermore, contrary to the perception that oral treatments are usually preferred to injections, there were no differences in the perceived convenience of treatment between sitagliptin and liraglutide [11].
To put the results of this economic evaluation into context, the cost per QALY of implementing liraglutide in combination with metformin therapy estimated in this study is in the same range as that estimated for implementing education programmes aimed at maximizing the benefits of diet and lifestyle interventions as reported in a recent study, which estimated a cost per QALY ranging from €10 000 to €39 000 [21]. However, a study that investigated the cost-effectiveness of the Diabetes Education and Self Management for Ongoing and Newly Diagnosed (DESMOND) programme in UK patients newly diagnosed with Type 2 diabetes reported a lower cost per QALY of £2092 [22]. The estimated cost per QALY of adding pioglitazone to ongoing therapy in patients with Type 2 diabetes with a history of macrovascular disease and at high risk for further cardiovascular events was reported as £5396 vs. placebo after a mean treatment period of 3 years [23]. The cost of adding sitagliptin to metformin monotherapy vs. the cost of adding a sulphonylurea appears to also be in the same range as the cost of adding liraglutide to metformin monotherapy reported here. An analysis evaluating the cost of adding sitagliptin vs. sulphonylurea to metformin monotherapy in patients with Type 2 diabetes from six European countries (Austria, Finland, Portugal, Scotland, Spain and Sweden) not reaching the International Diabetes Federation's HbA1c target of < 48 mmol/mol (< 6.5%) estimated costs per QALY ranging from €5949 to €20 350 across countries [24]. Similarly, the cost per life-year with statins, a common therapy used in patients with Type 2 diabetes concomitantly with anti-hyperglycaemic agents to treat dyslipidaemia and reduce cardiovascular risk, has been estimated to range from £5400 to £13 300 for primary prevention and from £3800 to £9300 for secondary prevention [25]. A limitation of this study is that the model used, like all models used to assess the long-term outcomes of patients with Type 2 diabetes, predicts long-term outcomes based on the results of short-term studies. However, the CORE Diabetes Model used here has been validated against published studies that had not been used to provide input data for setting up the model [13]. For each validation analysis, the progress of a patient cohort from a published epidemiological, clinical or modelling study was simulated, and the outcomes of the simulation were compared with those of the published study. The results indicated that the CORE Diabetes Model is capable of reliably predicting long-term patient outcomes.
In conclusion, this study investigated the cost-utility, in a UK setting, of liraglutide vs. glimepiride or sitagliptin (all added to metformin monotherapy), scenarios intended to simulate likely clinical practice in real life. The results suggest that liraglutide added to metformin monotherapy leads to improvements in quality-adjusted life expectancy and is a cost-effective option for the treatment of Type 2 diabetes in this setting.
Supporting Information
Additional Supporting Information may be found in the online version of this article: Table S1. Summary of the costs of medicines and complications used in the model adjusted to 2008 costs. Table S2. Summary of utilities and disutilities for the base case used in the model. Table S3. Results of the base-case analysis by BMI subgroup in the liraglutide vs. sitagliptin comparison: quality-adjusted life years, costs and incremental cost-effectiveness ratios.
| 4,965.8 | 0001-01-01T00:00:00.000 | [
"Medicine",
"Economics"
] |
Automated Detection of Vessel Abnormalities on Fluorescein Angiogram in Malarial Retinopathy
The detection and assessment of intravascular filling defects is important, because they may represent a process central to cerebral malaria pathogenesis: neurovascular sequestration. We have developed and validated a framework that can automatically detect intravascular filling defects in fluorescein angiogram images. It first employs a state-of-the-art segmentation approach to extract the vessels from images and then divides them into individual segments by geometrical analysis. A feature vector based on the intensity and shape of saliency maps is generated to represent the level of abnormality of each vessel segment. An AdaBoost classifier with a weighted cost coefficient is trained to classify the vessel segments into normal and abnormal categories. To demonstrate its effectiveness, we apply this framework to 6,358 vessel segments in images from 10 patients with malarial retinopathy. The test sensitivity, specificity, accuracy, and area under the curve (AUC) are 74.7%, 73.5%, 74.1% and 74.2%, respectively, when compared to the reference standard of human expert manual annotations. This performance is comparable to the agreement that we find between human observers of intravascular filling defects. Our method will be a powerful new tool for studying malarial retinopathy.
Figure 1. Two example fluorescein angiography images illustrating the appearances of IVFDs. Vessels with IVFD are shown by single arrows; vessels without IVFD, in the same image, are shown by double arrows. (a) Example 1: the intensity of mature parasitized red blood cells in vessels with IVFD is significantly different from that of normal vessels. (b) Example 2: the edges of vessels with IVFDs become unsmooth, and the diameter changes dramatically when compared to normal vessels. The images on the right are zoomed-in views of the regions enclosed by the green boxes in the original images on the left.
Existing automated analysis of retinal images has largely concentrated on quantitative measurements of vessel geometry such as arteriovenous ratio (AVR), tortuosity, and fractal number 14. There are few works on automatic vasculature analysis in FA 17,18, and within this literature, the detection of discrete vessel abnormalities involving specific sections of the vessel wall has received little, if any, attention. Only one study addresses the related objective of detecting arteriolar narrowing in colour fundus photography 19. In that work 19, a density analysis method is first used to detect the vessels, then connectivity analysis is performed to establish vessel trees, and finally arterioles are separated from venules by analysing vessel colour and width so as to assess arteriolar narrowing. This method had a sensitivity of about 75%.
We propose a new framework for automated detection of IVFD. Essentially, we have formulated the problem in terms of image classification, where the objective is to train a classifier to determine whether a vessel segment is normal or not based on a set of features that represent each segment. Throughout this paper, a vessel segment is defined as a connected segment of the detected vasculature between junctions or bifurcations, or a segment containing only one endpoint. The proposed framework addresses three major challenges: 1) accurate, efficient and reliable detection of vessels; 2) derivation of features that are most discriminative and able to separate normal and abnormal vessels; and 3) identification and proper training of a classifier with good performance.
Our framework includes graph cut-based vessel segmentation, vessel geometry analysis, saliency map generation, and ensemble classification by AdaBoost (details of these technical components are described in the methods, below).
Saliency is a predictor of object regions which attract human attention. It indicates the relative importance of visual features and is closely related to characteristics of human perception and processing of visual stimuli [20][21][22] . Saliency originates from visual uniqueness, unpredictability, rarity, or surprise, and is often attributed to variations in image attributes like colour, gradient, edges, and boundaries 23 . Saliency in 2D images is the perceptual quality that makes an object, person, or pixel stand out relative to its neighbours, and that captures our attention 22 . Estimated saliency maps are widely used in many computer vision applications including object of interest image segmentation 24 , object recognition 25 , and so on. A pixel is salient if its appearance is unusual, considering the context of neighbouring pixels -one always looks at a pixel within its surrounding patch rather than simply observing a pixel in isolation. We define saliency in terms of information content: a key-point corresponds to a particular image location within a structure with a low probability of occurrence (i.e. high information content). Many saliency detection approaches for 2D images exist. They have a similar structure, computing several features in parallel and then fusing their values in a representation which is usually called a saliency map. The most general model of saliency detection is described by Itti and Koch 21 . Other existing saliency detection methods for feature determination can be divided into four classes: pixel-based methods 21,[26][27][28][29][30] ; region-based methods 22,23,31 ; frequency-based methods [32][33][34][35] ; parameter learning-based methods [36][37][38] .
In the case of IVFD, there is a contrast between the normally smooth vessel wall and individual discrete lesions that appear to protrude into the vessel lumen ( Fig. 1(a)). These lesions may be defined as salient regions. Similarly, in the vessels affected by IVFD, some sections of the diameters or curvatures of vessel walls may be significantly different from neighbouring vessels or even other segments of the same vessel ( Fig. 1(b)), such vessel edges may also be determined as salient features. These observations prompted us to use vessel intensity and shape saliency maps, and combine them to generate a combined saliency map.
Results
In this section we describe the dataset used, evaluation metrics, experiments performed to evaluate the effects of various parameters, and the experimental results.
Dataset. Our automated framework was evaluated against a dataset containing 6,358 vessel segments (3,033 abnormal segments) from 10 retinal FA images with a size of 3008 × 1960 pixels. These images were taken from children with cerebral malaria (CM) admitted to the Malaria Research Project Ward, Department of Paediatrics, Queen Elizabeth Central Hospital, Blantyre, Malawi. All subjects had signs of MR on admission. Ethical approval for retinal examination and imaging was given by committees in Blantyre and at collaborating institutions. Consent was given by the parents/guardians of subjects before examination and imaging. The tenets of the Declaration of Helsinki were adhered to. 50-degree images were taken after pupil dilation with Tropicamide 1% and Phenylephrine 2.5%, using a Topcon 50-EX optical unit (Topcon, Tokyo, Japan) and Nikon E1-H digital camera. Manual annotation of IVFD is extremely time consuming even when aided by computer programs, taking over an hour per image. Therefore, only 10 representative cases were selected for the evaluation of IVFD detection. We intentionally chose images that display a range of IVFD severity to create this dataset. This selection was made by ophthalmologists and professional graders who have been leading concurrent development of a protocol for manual grading of IVFD and other retinal features in cerebral malaria. Although the number of subjects is relatively small, we feel that these images represent a fair range of this spectrum.
Human expert graders used a systematic approach to label vessels as abnormal or normal in terms of IVFDs aided by an in-house Matlab program version 2013a (Mathworks, Natick, CA). During the process, the original and an overlay image of the original with centrelines of vessels highlighted in yellow were displayed side by side. Observers were asked to select abnormal and normal vessel segments in turn by clicking on the vessel segment of interest. The selected abnormal segments were then highlighted in red while normal ones in green. In order to assess the detection performance of the framework on vessel segments with different diameters, the observers were asked to look at the peri-capillary vessels, small vessels or large vessels separately. Following our in-house FA grading workbook, we define capillaries as the smallest vessels visible on a well-focussed angiogram. A post-capillary venule is formed by the confluence of two or more capillaries, and extends up to the point where it is joined by a second post-capillary venule or other larger venular segment. Small venules are defined as any section of vein between the edge of the post-capillary venule complex up to the point of confluence with another vessel of similar or larger calibre. Large venules extend from the point where two small venules converge to the edge of the optic disc.
Three experienced observers in grading MR images were involved in the grading. A professional grader (DGP) and an ophthalmologist (IJCM) labelled the vessels using the same software and following the same guidelines in a masked pattern. The grading results by DGP were reviewed together by a senior ophthalmologist familiar with IVFDs (SPH) and the consensus between them was used as the final reference standard. When human graders were uncertain whether IVFDs were present or absent, vessels were left unlabelled and are not analyzed in this study.
Evaluation Metrics. Four commonly-used metrics were employed to evaluate the performance of the program in terms of vessel segments: sensitivity, specificity, accuracy, and the area under a receiver operating characteristic curve (AUC). Sensitivity is a measure of effectiveness in identifying abnormal vessel segments, while specificity performs the same function for normal vessel segments. Accuracy indicates the overall classification performance. AUC reflects the trade-off between sensitivity and specificity, in particular in the case of imbalanced data classification. These metrics are defined as sensitivity = tp/(tp + fn), specificity = tn/(tn + fp) and accuracy = (tp + tn)/(tp + tn + fp + fn), where tp, tn, fp and fn indicate the true positives (the number of correctly identified abnormal vessel segments), true negatives (the number of correctly identified normal vessel segments), false positives (the number of normal vessel segments incorrectly identified as abnormal), and false negatives (the number of abnormal vessel segments incorrectly identified as normal), respectively. In particular, AUC is calculated as suggested by Hong et al. 39. An AUC of 1.0 means that the classifier distinguishes class examples perfectly.
Experiment Settings. The 10 images in the dataset were randomly separated into a training set (8 images) and a testing set (2 images). The training set was used to train and validate models, while the testing set was used for evaluating the performance of the final model. An image-wise partition strategy was chosen in order to avoid possible overfitting, which could be introduced by a segment-wise partition strategy. With a segment-wise partition strategy, a classifier trained and tested on vessel segments from the same images may provide surprisingly good results on the training images, but perform poorly on new images. We applied repeated leave-one-out cross validation (LOOCV) to the training set for parameter optimization (or model selection) 40. In brief, of the 8 images in the training set, 7 images were used to train a model while the remaining image was retained as the validation data for testing the trained model. The process was repeated 8 times, with each single portion (image) used exactly once as the validation data. The LOOCV was then repeated five times on different random splits of the dataset, and the mean values of sensitivity, specificity, accuracy and AUC were used for comparisons of different parameter settings. The range tested for the number of trees was 500, 1000, 2000, 5000 and 10,000, while the range for the cost coefficient was 2 to 8 with an interval of 2. The 'optimal' values of the class weights and number of trees found from the repeated LOOCV were used to train on the whole training set to obtain the final model. The performance of the final model was determined by applying it to the testing set. Sub-analysis of the performance of the final model for detection of vessel segments of different types was also performed.
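A minimal sketch of the segment-level metrics defined at the start of this section (the counts are illustrative, not the study's confusion matrix):

```python
def segment_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy from segment-level counts."""
    sensitivity = tp / (tp + fn)            # abnormal segments correctly found
    specificity = tn / (tn + fp)            # normal segments correctly found
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts only.
print(segment_metrics(tp=450, tn=480, fp=170, fn=160))
```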
Experimental Results. Figure 2 shows the results of the proposed automated abnormal vessel detection framework on 3 FA images, where the normal vessels are illustrated in green and the abnormal vessels in red. As can be seen from Fig. 2(b), our method has classified all the vessel segments produced by our vessel segmentation method into normal and abnormal segments. However, there were a number of thin vessels that were ungradable for the human observers due to poor contrast. In this work, only the vessels labelled by human observers were considered for the purpose of comparison. Comparing the results from our automated method (Fig. 2(c)) with those of the human observer (Fig. 2(d)), it can be seen that the results are visually very similar, both for images containing many abnormal vessels (Fig. 2, left and middle columns) and for an image containing few abnormal vessels (Fig. 2, right column). Table 1 shows that the evaluation results in terms of sensitivity, specificity, accuracy, and AUC are 0.747, 0.735, 0.741, and 0.742, respectively. In addition, the overall inter-observer agreement for IJCM and DGP was found to be κ = 0.424 (p < 0.001), implying good agreement. The κ value for the framework and DGP is 0.555 (p < 0.001).
In order to provide clinicians with more information about abnormalities in vessel segments, we also evaluated the performance on large, small and peri-capillary vessels separately. Figure 4 shows the results on one image by the program and the expert annotation side by side, with Fig. 4(a-c) showing the results on large, small and peri-capillary vessels, respectively. The results for these three vessel types in terms of sensitivity, specificity, accuracy, and AUC are also presented in Table 1. Overall, the proposed abnormal vessel detection performs best on small vessels, achieving a sensitivity of 0.765, specificity of 0.782, accuracy of 0.751, and AUC of 0.776.
Discussion and Conclusions.
We have developed a novel abnormal vessel detection framework to identify IVFD, a neurovascular sign that may represent an important part of CM pathogenesis. The framework comprises four major components: vessel segmentation, analysis of vessel geometry, salient feature generation, and vessel classification. Our evaluation of this framework yielded results that are comparable to expert human observers (Table 1: detection performance of the proposed framework for all vessel segments, and for vessel segments of different types, i.e. large, small and peri-capillary vessels). While much work has been done to develop tools to measure retinal vessel geometry, to the best of our knowledge this is the first report of automated analysis of discrete retinal vessel abnormalities. Our method demonstrated satisfactory overall performance: sensitivity of 74.7%, specificity of 73.5%, and accuracy of 74.1%. In terms of vessel type-wise analysis, the framework achieved a sensitivity of 76.4%, specificity of 79.1%, and accuracy of 75.9% on small vessels. These results are consistent with the fact that there are relatively few large vessels, compared to smaller vessels. Unfortunately, peri-capillary vessels were not typically photographed with sufficient quality for analysis to be accurate. These promising results largely rely on our novel adaptation of the concept of salient features to the field of medical image analysis. In psychological terms, saliency is a predictor of visual object regions that attract human attention. Saliency indicates the relative importance of components of our visual world, and is closely involved in perception and processing of visual stimuli. In computational terms, saliency refers to a region or object that stands out from its neighbours or background. In this paper, we represented IVFD as salient regions on the background of the retinal image. IVFD can be thought of as minute vessel regions that have different diameter, curvature, or contrast to neighbouring regions. These features of IVFD are in line with the definition of saliency in the computer vision field: a salient region is one that is significantly different from nearby regions in terms of contrast or shape.
Another highlight of our approach is the use of weighted ensemble classification method to deal with imbalanced data. This is very important as the proportion of abnormal to normal vessel segments in a retinal image is often skewed. A weighted classification strategy appears to be an appropriate way to penalize misclassification errors for each class differently. Furthermore, an ensemble classification technique will usually provide better performance compared to single classifiers. We chose weighted AdaBoost for this specific application because of its simplicity, efficiency and robustness against potential problem of overfitting. Other classification methods, such as weighted-SVM 41 , could also be used.
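A minimal sketch of a cost-weighted AdaBoost of decision stumps on segment feature vectors, using scikit-learn; this is an illustrative stand-in rather than the authors' implementation, the per-class cost is applied through per-sample weights, and the 21-dimensional feature matrix is synthetic:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 21))                      # stand-in for saliency features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=600) > 0.7).astype(int)

cost = {0: 1.0, 1: 4.0}                             # heavier penalty on abnormal class
sample_weight = np.array([cost[label] for label in y])

# The default weak learner is a depth-1 decision tree (a stump).
clf = AdaBoostClassifier(n_estimators=500, random_state=0)
clf.fit(X, y, sample_weight=sample_weight)
print("training AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```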
Automated analysis of retinal images is an important objective in medical research. The main emphasis has been on analysis of colour fundus photographs rather than FA, and on quantifying vessel geometry rather than identifying particular vessel segments affected by focal lesions. As a result the problem of detecting discrete vessel abnormalities is relatively unexplored. Achieving high performance in automated lesion detection is a challenging task. In our experience, there are many different factors that could compromise performance. First of all, there is often a very large variation in brightness, contrast, and artefact across images. This makes it difficult to have universal criteria to define the abnormalities. Secondly, IVFD can be difficult to grade, even for expert human graders. It is possible that an automated technique such as ours might provide more accurate detection than the current human expert reference standard.
Development of this framework is motivated by medical demands for a tool to measure the number of abnormal vessels in retinal FA images, and our method should allow better estimation of associations between MR and clinical outcome in patients with CM. This work is ongoing. The flexibility of this framework suggests it might be suitable for detecting abnormal vessel segments in other retinal or neurovascular diseases that involve discrete vascular lesions.
In conclusion, we have proposed and evaluated an innovative abnormal vessel detection framework to support the study of malarial retinopathy, and our experimental results have demonstrated its effectiveness. It has the potential to be further developed into a useful tool for fast, accurate and objective assessment of a range of retinal diseases.
Methods
In this section the proposed automated IVFD detection framework is described in detail.
Vessel Segmentation. The automated detection of blood vessels is a prerequisite in the development of an automated system for the analysis of vessels. For this work, we adopted a state-of-the-art segmentation technique for its good accuracy and efficiency 42. This technique is built on local phase enhancement and a graph cut method. Local phase-based vessel enhancement is employed to enhance vessel-like structures in an image to form a 'vesselness map'. As suggested by the name, this filter uses local phase information in the image to enhance vessel-like structures. Compared to conventional intensity-based filters, this filter is invariant to intensity inhomogeneity within the image and is also capable of producing more accurate enhancement results for vessels with different widths, even at the bifurcations or ends of vessels. The vessels are segmented by applying a graph-cut based Chan-Vese (CV) model to the vesselness map for its computational efficiency. This model 43, a region-based active contour model, segments the image into two regions (objects and background) by minimizing an energy that favours a smooth boundary and low intra-region intensity variance. In this work we use the optimal parameter values as suggested by the original paper. In particular, for the graph-cut segmentation model, initialisation is achieved automatically by applying a threshold with an empirically chosen value of 0.5 to the vesselness map (afterwards '1' denotes a vessel pixel and '0' the background). The effects of different threshold values have been evaluated, and the final results do not appear to be sensitive to this choice. Figure 5(a) shows two original example FA images, and their segmentation results are illustrated in Fig. 5(b).
Geometric Analysis of Vessels. Following the vessel segmentation step, geometrical analysis of the segmented vessels is performed in order to split the vasculature into individual segments for further processing. The morphological thinning algorithm is first applied to the segmented vessel trees in order to estimate the centre line and diameter of vessel segments: the exterior pixels of the segmented vessels are removed iteratively using the thinning algorithm, yielding a new binary image containing connected lines of 'on' pixels located along the vessel centres. The centrelines are refined using a least-squares cubic spline technique in order to obtain smoother trajectories 44. Branch points (>2 neighbours) are removed so as to divide the centrelines of the vascular tree into individual portions, where each portion corresponds to a vessel segment. Segments with a short centreline length (<10 pixels) are eliminated to improve the speed of the later processing. Guided by the centreline location of each segment, individual vessel segments are isolated from the original segmentation result by removing the branch points and their neighbouring pixels. Figure 5(c) shows the vessel segments produced from Fig. 5(b) after removing the branch pixels and the pixels around them.
The vessel diameter of each segment is estimated using the distance transform of the inverted binary segmented image, as suggested by Bankhead et al. 45. This uses the Euclidean distance of each vessel pixel from the closest non-vessel pixel; doubling the maximum value of the distance transform along the thinned centrelines therefore provides an estimate of the diameter of every vessel segment at its widest point. Bankhead et al. have demonstrated that this method can provide good width estimation results at locations in the middle of vessel segments 45. In our experience, the method may suffer at the two ends of vessel segments due to the complex geometry there. In order to avoid this problem, only diameters at locations 5 pixels away from branch (or end) pixels are considered for the subsequent analysis. A segment is removed if its centreline contains fewer pixels than its estimated mean diameter. After this process, each segment is indexed for subsequent analysis.
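A minimal sketch of the skeleton-plus-distance-transform diameter estimate using SciPy and scikit-image; the vessel mask is assumed to be a binary array, and this illustrates the idea rather than reproducing the authors' exact implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def segment_diameter(vessel_mask: np.ndarray) -> float:
    """Estimate the widest diameter of a binary vessel segment.

    The Euclidean distance transform gives, for each vessel pixel, the
    distance to the closest non-vessel pixel; doubling its maximum along
    the thinned centreline approximates the vessel diameter.
    """
    dist = distance_transform_edt(vessel_mask)   # distance to background
    centreline = skeletonize(vessel_mask)        # morphological thinning
    return 2.0 * dist[centreline].max()

# Toy example: a horizontal bar 5 pixels thick.
mask = np.zeros((20, 40), dtype=bool)
mask[8:13, 5:35] = True
print(segment_diameter(mask))   # ~6.0, a coarse estimate of the 5-pixel thickness
```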
Feature Generation. To classify the vessel segments detected in the previous step as normal or abnormal, a set of features needs to be derived to represent each vessel segment, so as to form an input vector for the classifier to be used. In this work, for each segment, a total of 21 features derived from intensity and shape saliency maps are generated.
Intensity-based Vessel Saliency. Let w(x) ∈ V be the local representation of pixel x as a patch centred on x, where V indicates all the vessel segments. The average vessel diameter in our dataset is around 5 pixels, so the size of the patch is set to 3 × 3 in this work. The patches can be seen as samples of a multivariate probability density function (PDF). A number of methods to estimate an unknown multivariate PDF from a sufficient number of samples have been introduced in the literature. The kernel density estimator (KDE) is chosen in this paper. The KDE is appropriate since it is non-parametric, which allows any PDF to be estimated. The probability of a patch w(y) is therefore defined as the kernel density estimate p(w(y)) = (1/N) Σx K(d(w(y), w(x))/h), where d is a distance function that will be discussed later, K is a kernel, h is a smoothing parameter, and N represents the number of pixels. The KDE spreads the contribution of each sample x over a certain area of the vessel segments with a certain shape 46, which is defined by K. The multivariate distribution will have higher probability where the patches lie in dense areas. In our experience, the most commonly used and most appropriate kernel is a Gaussian function with zero mean and standard deviation σk; using a Gaussian kernel, Eq. (5) is rewritten accordingly, and the estimated probabilities are scaled to an actual PDF by setting a proper constant Γ. σ = 0.2 is chosen in place of h. After determining the probability of the patches, the intensity-based saliency measure is defined so that patches with low probability (high information content) receive high saliency values; in our application, the intensity-based saliency is finally normalized into the range [0,1]. The distance d is estimated by the relative average distance. The relative distance is used in case the distribution of the data is not uniform, and the distance metric mainly focuses on the relationships between neighbouring points. Let a patch set W in a vessel contain n patches w1, w2, ..., wn. The relative average distance of a pair of patches w(x), w(y) ∈ W is defined by normalizing their distance by the average Euclidean distances between w(x) and the other patches w(k) belonging to W, and between w(y) and the other patches, respectively. For two sets of points/pixels with similar neighbouring relationships but different densities (i.e., similar relative density), the absolute distances between corresponding points differ dramatically from each other, but the relative distances are in general similar. This is an advantage of the relative distance in reflecting the relative density of points and the relative scale of objects.
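A schematic of this patch-density saliency; the Gaussian-kernel density, the averaged relative distance, and the final "one minus normalized density" step are all stated as assumptions for illustration rather than the paper's exact equations:

```python
import numpy as np

def intensity_saliency(patches: np.ndarray, sigma: float = 0.2) -> np.ndarray:
    """patches: (N, 9) array of flattened 3x3 patches sampled along a vessel.

    Returns one saliency value in [0, 1] per patch: low kernel density
    (an unusual patch) maps to high saliency.
    """
    diff = patches[:, None, :] - patches[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)               # pairwise distances
    # Relative average distance: normalize each pair by the mean distance of
    # the two patches to all other patches (an assumption for illustration).
    mean_d = dist.mean(axis=1, keepdims=True)
    rel = dist / (0.5 * (mean_d + mean_d.T) + 1e-12)
    # Gaussian kernel density estimate per patch.
    density = np.exp(-rel ** 2 / (2.0 * sigma ** 2)).mean(axis=1)
    return 1.0 - (density - density.min()) / (np.ptp(density) + 1e-12)

patches = np.random.default_rng(1).normal(size=(50, 9))
print(intensity_saliency(patches)[:5])
```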
Shape-based Vessel Saliency. Let u(p1, p'1) and v(p2, p'2) denote two diameters of a given vessel, where (p1, p'1) and (p2, p'2) are edge points on the vessel, and let c1 = (xu, yu) and c2 = (xv, yv) be the centre points of these two diameters. The dissimilarity of diameter u with respect to v in terms of length is given by L(u, v), computed from the diameter lengths and the Euclidean distance between the centre points of u and v. The dissimilarity in orientation at the centreline pixels c1 and c2 is computed from Θ, the orientation of each pixel located on the centreline. The length and orientation dissimilarities are then fused into a weighted value of the form W = L² + Θ². A dissimilarity measure diss(u, v) between a pair of diameters is obtained by combining W with a control parameter h (h = 3 in our implementation) and with dposition(u, v), the Euclidean distance between the centre points cu and cv of the two diameters.
We need to compute a distinctness value for each diameter, given the dissimilarity values calculated above. Diameter u is considered salient when it is highly dissimilar to the other diameters, i.e., when diss(u, v) is high for all v. The saliency value of u is defined from these dissimilarities, where U is the total number of diameters in a given vessel. However, in practice, to evaluate the uniqueness of a diameter, there is no need to incorporate its dissimilarity to all the other diameters. If the most similar diameters (those with low dissimilarity) are significantly different from diameter u, then clearly all diameters are highly different from diameter u. Therefore, for diameter u, we search for the M most similar diameters according to the dissimilarity values and define the corresponding diameter set accordingly. In practice, M is the number of diameters whose dissimilarity values are higher than the average dissimilarity value. As for the intensity-based case, the shape-based saliency values are also normalized into [0,1].
After obtaining the saliency values for each pixel of the vessel and the vessel centreline, the shape-based saliency and intensity-based saliency are simply combined into a final saliency map SM (SM = SI + SS), as shown in Fig. 6(b), where blue indicates the most salient regions and red the least salient regions. Two example images were selected: one containing many abnormal vessels (top image of Fig. 6) and one with few abnormalities (bottom image of Fig. 6). It is clear that the top image contains considerably more salient regions than the bottom one.
According to the number of pixels in the abnormal regions of each vessel in the final saliency map, the abnormality rate R(v) for each vessel is calculated as the number of pixels belonging to abnormal regions of the vessel divided by the total number of pixels belonging to the vessel (Equation 14). Figure 6(c) illustrates the saliency map after the thresholding process is applied: vessel regions whose saliency values are larger than an empirically defined threshold of 0.65 are set to 1 (abnormal), and otherwise to 0 (normal); in Fig. 6(c), the vessels are divided into salient and non-salient regions after applying this thresholding process to the images in the second row, blue indicates the most salient regions and red the least salient regions in the second and third rows, and the inset in each image is a zoomed-in view of the region enclosed by the green box. Feature Vector. Based on the saliency maps derived above, a feature vector of 21 features is derived for each vessel segment. AdaBoost Classification. In this work we have used the AdaBoost classifier 47 with a weighted cost coefficient for the classification task. AdaBoost works by building a stronger and more powerful classifier from many smaller weak classifiers. We used a decision tree as the weak classifier 47. The weak classifiers are generated sequentially, each one aiming to decrease the estimation error of the previous weak classifier 48. Although various classification techniques have been proposed, such as artificial neural networks, support vector machines (SVM) and decision trees, the choice of classifier depends on the complexity of the specific application and the nature of the data. The reasons for our choice of weighted AdaBoost are three-fold. First, AdaBoost is relatively simple, easy to train and less susceptible to over-fitting than other classifiers. As such it usually provides relatively good performance for most classification problems 48. Second, as an ensemble classifier it can be more effective than a single classifier in many cases, though this depends on the statistical properties of the data being analysed. Third, different weights can be easily introduced to tackle challenging classification problems. A weighted AdaBoost classifier has two parameters (class weights and number of trees) that have to be optimized in order to achieve the best classification performance. As described in the Experiments section, these are determined by the repeated LOOCV. | 7,009.8 | 2015-06-08T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Quantum effect-based flexible and transparent pressure sensors with ultrahigh sensitivity and sensing density
Although high-performance flexible pressure sensors have been extensively investigated in recent years owing to their diverse applications in biomedical and information technologies, fabricating ultrasensitive sensors with high pixel density based on current transduction mechanisms remains highly challenging. Herein, we demonstrate a design idea based on the Fowler-Nordheim tunnelling effect, distinct from the current transduction mechanisms, for the fabrication of pressure sensors with ultrahigh sensitivity and sensing density by spin-coating an extremely low loading of urchin-like hollow carbon spheres (less than 1.5 wt.%) dispersed in polydimethylsiloxane. This sensor exhibits an ultrahigh sensitivity of 260.3 kPa−1 at 1 Pa, a proof-of-concept demonstration of a high sensing density of 400 cm−2, high transparency and temperature noninterference. In addition, it can be fabricated by an industrially viable and scalable spin-coating method, providing an efficient avenue for realizing large-scale production and application of ultrahigh-sensitivity flexible pressure sensors on various surfaces and in in vivo environments.
Supplementary Tables
According to the effective medium theory (Equation 1), the current density J (x) increases with the mass fraction of conductive fillers x, which reflects the change in thickness when the composite is compressed due to external pressure. The black line in Supplementary Figure 1 shows the relationship between the readout signal and the pressure-induced deformation under the percolation effect.
where x is the mass fraction of conductive fillers, J(x) is the current density of the composite, JM is the coefficient current density, xc is the percolation threshold, and t = 1.6 for the three-dimensional case.
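Equation 1 itself is not reproduced in this copy; a normalized percolation form consistent with the symbols listed above, written here as an assumption, would be

\[ J(x) = J_M \left( \frac{x - x_c}{1 - x_c} \right)^{t}, \qquad x > x_c , \]

so that J approaches the coefficient current density JM as the filler fraction approaches unity.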
Define ΔJ as the change in current density caused by the external pressure and J0 as the initial current density. Then consider a sample with a cross-section of S0 and a height of d0, so that the volume V0 = d0 × S0. When pressure is applied, d changes by Δd, and the mass fraction can then be described as a function of Δd. Thus, combining Equations 1-4, the relationship between J and Δd is obtained.
ΔJ can then be described as a function of Δd and, with the strain ε defined as Δd/d0, as a function of ε. The composite is usually of a three-dimensional type in the application, so t = 1.6, and xc can be set to 28.95% when the filler is assumed to consist of perfect spheres 48.
Supplementary Note 2. Relations between ΔRcon and ε for the contact resistance effect.
The relationship between the contact resistance Rcon and the contact force F is described by Equation 8 49 and, accordingly, the relationship between the readout signal and the pressure-induced deformation is shown by the red line in Supplementary Figure 1.
where Rcon is the contact resistance, k is a coefficient related to the contact materials, F is the contact force, and m is determined by the contact form (empirical studies have shown that when the contact form is point-type, m = 0.5; when it is line-type, m is between 0.5 and 1, approximately 0.7; when the contact form is face-type, m = 1).
E, S0 and d0 are used to represent the modulus, cross-sectional area and total thickness, respectively. When the sample is pressed with a force F, its thickness changes by Δd, and F can then be described in terms of Δd. Thus, the relationship between ΔRcon and Δd follows and, with the strain ε defined as Δd/d0, so does the relationship between ΔRcon and ε. To compare the slope with the other two mechanisms, Equation 11 can be transformed by mirror symmetry and translational symmetry; defining ε' = 1 − ε gives the relationship between ΔRcon and ε'. The contact is usually of face type in the application, so m = 1.
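Written out under the definitions above (offered as a reconstruction, since the numbered equations are not reproduced in this copy), the contact-resistance relations take the form

\[ R_{con} = k\,F^{-m}, \qquad F = E S_0 \frac{\Delta d}{d_0} = E S_0\,\varepsilon, \qquad R_{con}(\varepsilon) = k\,(E S_0\,\varepsilon)^{-m} . \]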
Supplementary Note 3. Relations between ΔJ and ε of F-N tunnelling effect.
According to the F-N tunnelling equation (Equation 13), where A and B are empirical constants (A > 0, B < 0), J is a function of Ed, which in this study is the electric field between two neighbouring UHCS.
Ed is defined as the electric potential E divided by the thickness d (Equation 14), and Δd is defined accordingly (Equation 15). Combining Equations 13-15 gives the relationship between J and Δd and, with the strain ε defined as Δd/d0, the relationship between ΔJ and ε (A > 0 and B < 0).
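A reconstruction of these relations, based on the standard Fowler-Nordheim form and the definitions just given (stated as an assumption rather than a quotation of the paper's equations):

\[ J = A\,E_d^{2}\exp\!\left(\frac{B}{E_d}\right), \qquad E_d = \frac{E}{d}, \qquad d = d_0 - \Delta d = d_0(1-\varepsilon), \]
\[ J(\varepsilon) = A\left(\frac{E}{d_0(1-\varepsilon)}\right)^{2}\exp\!\left(\frac{B\,d_0(1-\varepsilon)}{E}\right), \qquad A>0,\ B<0 . \]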
Supplementary Note 4. Advantages of preloading for composite pressure sensors.
There are two reasons why a preloading process is needed in our manufacturing process. First, the sensing film and the electrodes are fabricated independently, so a preloading force is beneficial for ensuring good contact between the electrodes and the sensing film. Secondly, the sensing film is basically a polymer composite, and it is widely recognized that the filler concentration in a polymer composite may vary from sample to sample, even within the same batch, during manufacturing. Given that the sensitivity of the composite film is highly dependent on the filler concentration, this variation in filler concentration may lead to inconsistent sensitivity of the produced sensors. Therefore, in the fabrication process, we reduced the concentration slightly below the optimum concentration to avoid this inconsistency in sensor performance. Furthermore, through the preloading process, the effective filler concentration can be tuned to make sure every sensor has the same sensitivity. The preloading process is therefore helpful for the sensor's performance and reliability. The preloading force can be controlled in the packaging process or tuned by the user before measurement. Of course, this may add an additional step in practical applications.
Supplementary Note 5. Basic requirements of the pressure sensing film for injection application in in vivo.
There are three prerequisites for the PDMS-based thin-film pressure sensors to be used in the implantable area.
First, the film should be flexible and thin enough to ensure a large sensing area after self-unfolding. A sensor array film should be large enough to cover the target organ in practical applications. In order to facilitate implantation of the sensor array, it can be folded into a small package and injected into the body. For example, to insert a 10 × 10 mm sensing film into a needle with a diameter of 1.54 mm, the film should be folded at least 5 times. To achieve 5 folds, the film should be thinner than 49.7 μm, as can be calculated from Equation 18 50.
where W is the width of a square piece of film with a thickness of t, and n is the desired number of folds to be carried out along alternate directions. Our sensing film is 20 μm thick and flexible, which allows sufficient folding before inserting into the syringe needle.
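Equation 18 is not reproduced in this copy; the alternate-direction folding limit that reproduces the quoted 49.7 μm for W = 10 mm and n = 5 folds is the standard relation (given here as a reconstruction)

\[ W = \pi\, t\, 2^{\,3(n-1)/2} \;\;\Rightarrow\;\; t \le \frac{W}{\pi\, 2^{\,3(n-1)/2}} = \frac{10\ \text{mm}}{\pi \times 2^{6}} \approx 49.7\ \mu\text{m}. \]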
Secondly, the film should be able to unfold without damage after being folded and injected. Most flexible sensors with ultra-high sensitivity based on micro/nano structures cannot usually be bent through 180° multiple times without damaging these structures. Additionally, the injected sensor should be able to unfold within seconds. When immersed in water, our folded sensor quickly unfolded within 9 s (Fig. 3a and Supplementary Movie 2), demonstrating its potential for injection into the body and in vivo self-unfolding to support large-area detection.
Thirdly, the implantable sensor should overcome the in vivo isostatic pressure. Supplementary Figure 15b shows two adjacent spheres at a critical position where spheres A and B can just contact each other. By drawing a circle with point E as the centre and CE as the radius, a round area is formed. If the projections of all the sphere centres on the horizontal plane fall within this area, the spheres will be in contact with their neighbours no matter how they move horizontally. We define CE as the effective radius (rer); thus a "cylinder 1" (Supplementary Figure 15c) can be built with a radius of rcylinder1 = rer + rUHCS + lspine, in which all the (n0 + 1) spheres will always be in contact with each other.
If spheres with a diameter of 600 nm are used, AB and AC are the radii of a hollow sphere (300 nm), and BE is the spine length (80 nm). The effective radius can then be calculated from these values. The concentration of spheres in cylinder 1 of Supplementary Figure 15c is then taken into consideration. td is defined as the nearest distance between two adjacent spheres when they are not in direct contact; thus, td/2 can be regarded as an extension of the UHCS radius. In this situation, the cylinder diameter dcylinder2 is increased compared with cylinder 1 and, considering that td is in the range of 0-30 nm for 600 nm diameter spheres, the following expression is used to calculate the cylinder diameter with td as a parameter.
The solutions to Equation 37 comprise three real roots; since td is the distance between two spines, we choose the root satisfying 0 < xi < 600 as the value of td.
On the other hand, according to the electric-field characteristics at the tip of a conical conductor 51, the electric potential difference can be expressed in terms of the distance, where ΔU is the electric potential difference, C is a constant determined by the electric and geometric parameters, r is the distance, and λ is a constant determined by the geometric parameters.
As shown in Supplementary Figure 16b, the parameter λ can be calculated following ref. 51. Thus, when the centre of the second sphere falls within the yellow circle, the probability of the two UHCS contacting each other is 50%. All the UHCSs whose centres fall on the yellow circle form the large black section, which defines the border of cylinder 3, as shown in Supplementary Figure 17.
"Physics"
] |
Efficacy assessment of commercially available natural products and antibiotics, commonly used for mitigation of pathogenic Vibrio outbreaks in Ecuadorian Penaeus (Litopenaeus) vannamei hatcheries
Bacterial diseases cause high mortality in Penaeus (Litopenaeus) vannamei postlarvae. Therefore, appropriate application of efficient therapeutic products is of vital importance for disease control. This study evaluated through in vitro analyses the antimicrobial effectiveness of commercial therapeutic products used for P. vannamei bacterial diseases and antibiotics against pathogenic Vibrio strains circulating in Ecuadorian hatcheries. Twenty strains were isolated from 31 larvae samples with high bacterial counts from 10 hatcheries collected during mortality events. The strains' virulence was verified through challenge tests with Artemia franciscana nauplii and P. vannamei postlarvae. Through 16S rRNA sequence analysis, the strains showed great similarity to Vibrio sequences reported as pathogens, with 95% belonging to the Harveyi clade. Through antibiograms and minimal inhibitory concentration (MIC) in vitro tests, we found that furazolidone, ciprofloxacin, chloramphenicol, norfloxacin, nalidixic acid, florfenicol, fosfomycin and enrofloxacin inhibited the growth of all or most of the strains. Less efficient antibiotics were penicillin, oxytetracycline and tetracycline. A multiple antibiotic resistance (MAR) index of 0.23 showed some level of resistance to antibiotics, with two prevalent MAR patterns (Penicillin-Oxytetracycline and Penicillin-Oxytetracycline-Tetracycline). From a total of 16 natural products (five probiotics, nine organic acids and two essential oils), only three (one probiotic, one organic acid and one essential oil) were effective in controlling most of the strains. Shrimp producers can apply relatively simple in vitro analyses, such as those employed in this study, to help take adequate management decisions to reduce the impact of bacterial diseases and increase profit.
Introduction
The high demand for postlarvae to support the cultured shrimp industry, and consequently the intensification at the hatchery level, together with the trade of aquatic animals and their associated products, has increased the occurrence of infectious pathogens in this production stage [1]. One of the main concerns in shrimp hatcheries is bacterial pathogens [2]. Vibrio spp., such as Vibrio harveyi [3][4][5][6][7][8], Vibrio alginolyticus [9,10] and Vibrio campbellii [7,11,12], are recurrent pathogens in shrimp hatcheries in America and Asia. In Ecuador, shrimp hatcheries have suffered from bacterial diseases caused by pathogens of the Vibrio genus, such as Bolitas nigricans syndrome, caused by V. harveyi [3], and Zoea 2 syndrome, caused by V. harveyi and V. alginolyticus [4]. Therefore, the efficiency of therapeutic products is of vital importance for the control of aquaculture diseases.
Antibiotics are extensively used as prophylactics against bacterial pathogens [13]. However, the use of antibiotics carries important disadvantages, including residues in aquaculture products [14][15][16] and the development and propagation of resistance among pathogens, including human pathogens [17]. For these reasons, the use of antibiotics is rigorously regulated, resulting in few antibiotics authorized for use in aquaculture. In this context, alternative strategies of disease control are necessary to replace antibiotics in animal production, which has led to consideration of natural products to control the growth of pathogens in shrimp hatcheries. The administration of probiotics is one of the alternative strategies that may be used in aquaculture [13]; their benefits include the potential for colonization of the gastrointestinal tract, selective antagonism against bacterial pathogens, improvement of the shrimp immune system, enhanced shrimp growth and survival, degradation of detritus and maintenance of water quality [18][19][20]. The use of organic acids, produced by organisms and used as preservatives and for bacterial control in food, agriculture and animal production, is another potential strategy to control bacterial diseases in animal production [21][22][23]. They inhibit the growth of pathogenic V. harveyi, Vibrio cholerae, V. alginolyticus, Vibrio parahaemolyticus and V. campbellii [24][25][26], exhibit immunostimulant properties [24,27], and improve the nutritional and health status of shrimp [28,29]. Similarly, essential oils have been shown to have antimicrobial [30], antioxidant [31] and antifungal [32] properties, which can be an alternative to the use of additives and drugs in shrimp production [33]. Although the use of organic acids and essential oils in Ecuadorian hatcheries is relatively new, there is an increasing demand for their application as control strategies against bacterial diseases in shrimp hatcheries. In general, there is a huge number of products marketed as therapeutic products for shrimp hatcheries worldwide, and producers should therefore take suitable decisions as to which products are effective based on technical information and further tests in their own facilities.
The main objective of this study was to determine, through in vitro analyses, the antimicrobial effectiveness of antibiotics and of some commercial products used in Ecuador as therapeutic agents for shrimp larviculture. To this end, we first performed a survey to identify the pathogenic bacterial strains circulating in Ecuadorian shrimp hatcheries, confirming their virulence through challenge tests and verifying their molecular similarity with previously reported pathogenic Vibrio. We then tested the antimicrobial effectiveness of these antibiotics and commercial products against the pathogenic circulating strains through in vitro tests.
Sample collection and processing
In 2015, 31 samples of Penaeus (Litopenaeus) vannamei larvae [from Nauplii 5 (N5) to 13 days postlarvae (PL13)] were collected from tanks of 10 shrimp hatcheries (Santa Elena, Ecuador) during mortality events. Samples were sent by farmers to the Centro Nacional de Acuicultura e Investigaciones Marinas (CENAIM, Santa Elena, Ecuador) for the quantification of shrimp bacterial load (microbiologic analysis services performed by CENAIM). Bacterial strains isolated from these samples were used for this study. Larvae presented clinical signs of abnormal swimming behavior, empty digestive tract, low activity and retardation of larval development. Samples were transported to the research facilities of CENAIM taking a maximum time of two hours from sampling at the hatcheries to processing at the laboratory. At the laboratory, they were rinsed with 2% sterile NaCl solution and each sample (1 g of larvae) was macerated to homogenize the bacterial load associated with the animals.
Isolation and preservation of bacterial strains
Aliquots (100 μL) of serial 10-fold dilutions (10⁻³ to 10⁻⁵) of the larval macerate in 2% sterile NaCl solution were plated in duplicate on marine agar 2216 (MA, Difco). The same procedure was performed with serial 10-fold dilutions (10⁻¹ to 10⁻³) on thiosulfate citrate bile salt sucrose agar (TCBS, Difco). All plates were incubated at 30˚C. After one to two days of growth, bacterial counts were performed from plates containing 30 to 300 colonies. Bacterial counts were expressed as colony-forming units (CFU) per gram. Presumptive pathogenic strains were selected from colonies of samples with high bacterial counts on MA (>10⁶ CFU g⁻¹) or TCBS (>10⁵ CFU g⁻¹) agars. The selection criteria were: (1) all bacterial strains that differed by morphological criteria and (2) luminescent strains. All selected strains were coded and frozen at -80˚C after addition of trypticase soy broth (TSB, Difco) supplemented with 2.0% (w/v) NaCl and 20% (v/v) glycerol.
Challenge tests
The pathogenicity of the presumptive pathogenic strains was first evaluated in brine shrimp Artemia franciscana nauplii, following the procedures described by [34], with few modifications. Briefly, 1 g of A. franciscana cysts (Batch 02143, INVE Aquaculture, Belgium) was hydrated under continuous aeration in 100 mL of filtered and autoclaved distilled water for 1 h, and then transferred to a mixture of 10 mL of sodium hypochlorite (10%) and 15 mL of sodium hydroxide (40%) until the cysts changed color from brown to orange. The decapsulated cysts were washed with filtered and autoclaved sea water and transferred to an Imhoff funnel with 500 mL of filtered and autoclaved sea water, with continuous aeration and illumination by a white lamp, and kept at 28˚C in a sterile environment for 24 h. Nauplii were harvested under sterile conditions with an autoclaved 100 μm mesh. Eighty-four groups, each composed of 30 Artemia nauplii, were transferred to 50 mL sterile tubes containing 30 mL of filtered and autoclaved sea water. Eighty tubes were used to test the pathogenicity of the 20 bacterial isolates (four replicates per strain) and four tubes were used as the negative control (without bacteria, four replicates). Bacteria were activated on trypticase soy agar (TSA, Difco) and a colony from each strain was transferred to 150 mL of TSB and incubated for seven hours at 30˚C. Bacterial cells, at a density of 10⁶ cells mL⁻¹, were added to the corresponding sterile tubes immediately after the transfer of the Artemia nauplii. Nauplii from all treatments and the control were fed once with an inactivated V. alginolyticus commercial probiotic (10⁷) [35] four hours after infection. For this, the probiotic was cultured in liquid medium for six hours, inactivated by heat (autoclaved at 121˚C for 20 min), centrifuged at 10000 rpm for 10 min and resuspended in autoclaved seawater. A water exchange (50%) was performed 24 hours after infection and the mortality of Artemia was quantified 48 h post-infection. The whole challenge was performed in a Class II biological safety cabinet (CSB-180 A). To verify the asepsis of the negative control, water samples were collected at the end of the challenge, confirming the absence of Vibrio growth on TCBS agar.
The bacterial strains causing the highest mortalities in the Artemia challenge test were selected and their pathogenicity was verified again in a challenge test using healthy Penaeus vannamei postlarvae, following the procedures described by [36]. Bacteria were activated following the procedure described for the Artemia challenge test. Thirty shrimp postlarvae (PL2) per replicate were distributed in sterile petri dishes and exposed to the corresponding bacterial treatment at a concentration of 10⁸ bacteria mL⁻¹ for 6 min. Larvae were then transferred to 500 mL plastic containers containing 300 mL of sterile seawater and 10⁶ bacteria mL⁻¹. Larvae were fed during the challenge with a pure culture of Thalassiosira weissflogii (913 cells mL⁻¹) every 2 h after exposure to the bacteria. Survival was determined by counting the larvae every 4 h until 38 hours after infection. A negative control was also included in the challenge test (shrimp postlarvae PL2 without bacterial treatment). All treatments, including the control, had four replicates. The whole challenge was performed in a Class II biological safety cabinet (CSB-180 A), with constant aeration. To verify the asepsis of the negative control, water samples were collected at the end of the challenge, confirming the absence of Vibrio growth on TCBS agar.
Bacterial characterization by 16S rRNA sequence analysis
Identification of the presumptive pathogenic strains was performed by 16S rRNA sequence analysis. Total genomic DNA was extracted from pure cultures of the bacterial strains after they were grown on TSA for 24 h at 30˚C. Bacteria were lysed by incubation at 55˚C for 1 h in 200 μL of STE buffer (10 mM Tris-HCl, 1 mM EDTA and 100 mM NaCl, pH 8), followed by purification with an equal volume of phenol-chloroform-isoamyl alcohol (25:24:1) and a chloroform-isoamyl alcohol (24:1) extraction. DNA was recovered by adding ethanol (100%) followed by centrifugation at 13000 rpm for 10 min. The pellet was washed with 70% ethanol, dried and resuspended in 50 μL of ultrapure water (pH 7.0). DNA was stored at -20˚C until further use. DNA concentration and purity were estimated with a Varioskan LUX multimode microplate reader (Thermo Fisher Scientific). The complete 16S rRNA gene was amplified using the primers suggested by [37] (27F: 5'-AGAGTTTGATCMTGGCTCAG-3', 1492R: 5'-TACGGYTACCTTGTTACGACTT-3'). PCR was performed in a 30 μL reaction mixture containing 1X NH₄ buffer (Bioline, Sydney, Australia), 2.5 mM MgCl₂ (Invitrogen, Carlsbad, CA), 2 mM of each dNTP, 0.3 μM of each primer, 0.5 units of Taq DNA polymerase and 2 μL of DNA. PCR cycling conditions were: initial denaturation for 5 min at 94˚C; 35 cycles of denaturation at 94˚C, annealing at 52˚C for 1 min and elongation at 72˚C for 1 min; and a final extension of 10 min at 72˚C. Amplicons were separated by 1.5% (w/v) agarose gel electrophoresis, stained with SYBR Safe DNA gel stain (Thermo Fisher Scientific), and visualized under UV light. Images were captured with an E-Gel Imager System (Thermo Fisher Scientific). PCR products were purified and dissolved in 30 μL of ultrapure water for direct sequencing (Macrogen, Korea). The BigDye Terminator Cycle sequencing kit (Perkin Elmer) was used for sequencing, and the sequencing products were analyzed with an ABI 3000 sequencer (Applied Biosystems, Foster City, CA, USA). A phylogenetic analysis was carried out with the complete 16S rRNA sequences (1465 bp) of the bacterial isolates, together with 16S rRNA sequences (n = 362) of different Vibrio (pathogens and non-pathogens) obtained from GenBank. Sequence alignments were generated with ClustalW and the specific region of the 16S rRNA was identified using BioEdit 7.0.0 [38]. Phylogenetic trees were built using Maximum Parsimony (MP), Maximum Likelihood (ML), Neighbor Joining (NJ) and Bayesian Inference (BI). The molecular evolution model was selected, using a wide range of phylogenetic and evolutionary tools, from a dataset composed only of the unique haplotypes and the sequences obtained in the present investigation. jModelTest 2.0 was used to test the evolution models based on the hierarchical likelihood ratio test [39]. Numbers of nucleotide substitutions per site for the gene were calculated with MEGA 6.0 (Molecular Evolutionary Genetics Analysis). The 16S rRNA sequences were deposited in GenBank under accession numbers MH997724 to MH997742.
Antimicrobial effectiveness
The antimicrobial effectiveness of 16 natural products (five probiotics, nine organic acids and two essential oils, Table 1) used in Ecuador as therapeutic agents against shrimp bacterial diseases, and of eleven antibiotics, was screened in terms of the susceptibility of the pathogenic circulating strains to these products through antibiogram and minimal inhibitory concentration tests (Table 1). We designated as pathogenic circulating strains those strains that caused high mortality (>50%) in the Artemia and shrimp postlarvae challenge tests and presented molecular similarity to species previously reported as Vibrio pathogens. The product details are provided in S1 Table.
The minimal inhibitory concentration (MIC) of the antibiotics approved for use in aquaculture (oxytetracycline and florfenicol, 99% purity, Zhejiang Medicines and Health Company, China) against the growth of the bacterial strains was also determined. The bacterial isolates were activated in TSB liquid medium and incubated at 30˚C for 4 h. The bacterial suspensions were diluted with TSB liquid medium to an approximate density of 10⁶ CFU mL⁻¹ by using McFarland's 0.5 Barium Sulfate Standard Solution. Ten grams of each antibiotic were diluted in TSB culture broth with 2% NaCl solution. Sixteen concentrations of oxytetracycline, ranging from 1 to 3500 μg mL⁻¹, were distributed into wells of round-bottom 96-well microplates. Each well was inoculated with 20 μL of bacterial suspension, including the positive control (bacterial growth in TSB culture without any antibiotic). The microplates were incubated at 30˚C for 24 to 48 h. All measurements were performed in triplicate, including those of the controls. Bacterial growth was detected by optical density at 620 nm (ELISA microplate reader, Varioskan Lux). MIC values were taken as the lowest concentration of each antibiotic that completely inhibited bacterial growth. Absence of bacterial growth during the MIC process was confirmed on TSA agar. The same methodology was used for the florfenicol MIC tests, testing 15 concentrations of this antibiotic ranging from 0.1 to 1000 μg mL⁻¹. Finally, the patterns of multiple antibiotic resistance were analyzed to determine common resistance patterns among the bacterial strains.
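The MIC readout described above reduces to a simple threshold rule applied to the grid of OD620 readings. The Python sketch below illustrates that rule; the function name, the growth threshold and the example readings are hypothetical and are not part of the original protocol.

```python
import numpy as np

def mic_from_od(concentrations, od_readings, od_blank, threshold=0.05):
    """Estimate the MIC from optical-density readings of a broth microdilution test.

    concentrations : 1-D array of antibiotic concentrations (ug/mL), any order.
    od_readings    : 2-D array, shape (n_concentrations, n_replicates), OD at 620 nm.
    od_blank       : OD of sterile medium (no bacteria), taken as the no-growth baseline.
    threshold      : OD increase over the blank tolerated before a well counts as growth.

    Returns the lowest concentration with no detectable growth, or None if growth
    occurred at every tested concentration.
    """
    conc = np.asarray(concentrations, dtype=float)
    mean_od = np.asarray(od_readings, dtype=float).mean(axis=1)  # average the triplicates
    order = np.argsort(conc)                                     # scan from low to high
    for c, od in zip(conc[order], mean_od[order]):
        if od <= od_blank + threshold:                           # growth completely inhibited
            return c
    return None

# Illustrative (made-up) readings for one strain and oxytetracycline:
concs = [1, 10, 50, 100, 200, 500]
ods = [[0.82, 0.80, 0.85], [0.75, 0.78, 0.74], [0.40, 0.38, 0.42],
       [0.12, 0.11, 0.13], [0.06, 0.05, 0.06], [0.05, 0.05, 0.05]]
print(mic_from_od(concs, ods, od_blank=0.05))  # -> 200.0
```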
Susceptibility of pathogenic Vibrio strains to probiotics
The susceptibility of the pathogenic circulating strains to five commercial shrimp probiotics (Table 1) was determined by the agar plug diffusion method [40]. Briefly, the probiotics were cultured on Mueller-Hinton agar at 30˚C for 24 h (~10⁷ CFU mL⁻¹). In parallel, the pathogenic bacteria were cultured under the same conditions as the probiotics. After incubation, an agar plug from the probiotic culture was aseptically cut and deposited on the agar surface of the plate inoculated with the pathogenic bacteria. The diameters of the inhibition halos surrounding the agar plugs were measured and expressed in millimeters 24 and 48 h after the agar plugs were transferred. The strains were considered sensitive, intermediate or resistant when the diameters of the inhibition halos were ≥10 mm, between 4 and 9 mm, or ≤3 mm, respectively.
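The halo-diameter criteria translate directly into a classification rule. A minimal Python sketch follows; the strain names and diameters are hypothetical.

```python
def classify_halo(diameter_mm):
    """Classify a strain's response to a probiotic agar plug from its inhibition halo.

    Thresholds follow the criteria stated in the text: sensitive for halos of 10 mm or
    more, intermediate for 4-9 mm, resistant for 3 mm or less.
    """
    if diameter_mm >= 10:
        return "sensitive"
    elif diameter_mm >= 4:
        return "intermediate"
    else:
        return "resistant"

# Hypothetical halo diameters (mm) measured 24 h after transferring the agar plugs:
halos = {"L15.19.1": 12, "L15.3.2": 6, "L15.8.1": 0}
print({strain: classify_halo(d) for strain, d in halos.items()})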
Susceptibilities of pathogenic Vibrio strains to organic acids and essential oils
The susceptibility of the pathogenic circulating strains to nine organic acids and two essential oils (Table 1) was determined in a similar way to the MIC determination for antibiotics. The evaluated concentrations ranged from 100 to 3500 μg mL⁻¹ for the organic acids and from 100 to 3000 μg mL⁻¹ for the essential oils.
Cell toxicity of selected products
The toxicity of the most efficient natural products, as well as of the antibiotics authorized for use in aquaculture, was evaluated through an in vitro cell viability assay on shrimp haemocytes [44]. Briefly, the assay is based on the ability of mitochondria to convert the yellow 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) into purple formazan through the enzyme succinate dehydrogenase. The primary culture of shrimp haemocytes was activated in Hank's salts for 75 min, exposed to each product for 2 h at different concentrations, and then incubated for 2 h with 5 mg mL⁻¹ of MTT in Hank's salts. The supernatant and the formazan crystals were diluted with isopropanol mixed with 0.04 N hydrochloric acid. The colorimetric reaction was read at 620 nm. The results were transformed to percent cell viability, considering the primary culture of haemocytes without chemical exposure as the reference for optimal cellular respiration (maximum cell viability). The antibiotic concentrations evaluated were 100, 200, 400, 500, 1000, 2000, 2500, 3000, 4000, 5000, 6000 and 7000 μg mL⁻¹. The concentrations evaluated for the natural products were below the corresponding minimum inhibitory concentrations.
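The conversion from formazan absorbance to percent viability is a simple ratio against the unexposed control. A minimal Python sketch follows; the readings shown are illustrative, not measured values.

```python
import numpy as np

def percent_viability(od_treated, od_control):
    """Convert MTT/formazan absorbance (620 nm) into percent cell viability.

    od_treated : OD readings for haemocytes exposed to the product.
    od_control : OD readings for unexposed haemocytes (taken as 100 % viability).
    """
    return 100.0 * np.mean(od_treated) / np.mean(od_control)

# Illustrative readings (four replicates each):
control = [0.61, 0.59, 0.62, 0.60]
oa6_400 = [0.58, 0.60, 0.57, 0.59]   # hypothetical OA6 exposure at 400 ug/mL
print(round(percent_viability(oa6_400, control), 1))
```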
Data analysis
The index of multiple antibiotic resistance (MAR) was calculated according to [45], as the number of antibiotics to which an isolate was resistant divided by the total number of antibiotics tested. Differences in cumulative mortality among treatments (presumptive pathogenic strains) were analyzed by one-way analysis of variance (ANOVA) at the end of each challenge test (48 and 38 h after infection for A. franciscana nauplii and P. vannamei postlarvae, respectively). The null hypothesis (no treatment effect) was rejected at a P-value ≤ 0.05. Variance homogeneity across treatments was examined using Bartlett's test, and the assumption of normality was examined with the Shapiro-Wilk test. Tukey's Honest Significant Difference test was used to compare treatment means, with differences between treatments considered significant at a P-value ≤ 0.05. The same statistical analysis was used to evaluate differences in percent cell viability (cell toxicity) for each of the selected products. All statistical tests were carried out with the R statistical software [46].
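The MAR index and the assumption checks preceding the ANOVA can be expressed compactly. The original analysis was run in R [46]; the Python/SciPy sketch below only mirrors that workflow, and the resistance profiles and mortality values are hypothetical.

```python
import numpy as np
from scipy import stats

# Resistance profiles: 1 = resistant, 0 = sensitive/intermediate, for the 11 antibiotics
# tested (hypothetical data for three strains).
profiles = {
    "L15.19.1": [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "L15.3.2":  [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    "L15.8.1":  [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
}
mar = {s: sum(p) / len(p) for s, p in profiles.items()}   # MAR = resistant / tested
print(mar, "mean MAR:", round(np.mean(list(mar.values())), 2))

# Cumulative mortality (%) per treatment at the end of a challenge, four replicates each
# (hypothetical numbers): check the assumptions, then run a one-way ANOVA.
mortality = {
    "strain_A": [96.7, 100.0, 93.3, 100.0],
    "strain_B": [76.7, 80.0, 73.3, 83.3],
    "control":  [3.3, 0.0, 6.7, 3.3],
}
groups = list(mortality.values())
print("Shapiro-Wilk p-values:", [round(stats.shapiro(g).pvalue, 3) for g in groups])
print("Bartlett p-value:", round(stats.bartlett(*groups).pvalue, 3))
print("ANOVA:", stats.f_oneway(*groups))
```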
Isolation and preservation of bacterial strains
A total of 20 different bacterial strains were selected, by morphological and luminescence criteria, from 121 colonies obtained from the 31 samples of P. vannamei larvae with high bacterial counts on MA (>10⁶ CFU g⁻¹) or TCBS (>10⁵ CFU g⁻¹) agars, collected from 10 hatcheries during mortality events.
Challenge tests
To estimate the virulence of the presumptive pathogenic strains, a first screening was performed by challenging A. franciscana nauplii with all bacterial strains. All bacterial strains caused variable but high mortality in the challenge test (>74.2%, Table 2). A group of 11 strains caused the highest mortality in A. franciscana (>95.0%, Table 2). The strain L15.19.1, belonging to this group, caused 100% mortality in all replicates (Table 2). No significant mortality differences were observed among the Artemia nauplii challenged with the remaining ten strains of this group (P > 0.995, Table 2). These strains were used for a second challenge test with P. vannamei shrimp postlarvae (PL2). No significant mortality differences were observed among the postlarvae challenged with these strains (P = 0.148, Table 2).
Bacterial characterization by 16S rRNA sequence analysis
An initial phylogenetic screening was performed with the 16S rRNA gene sequences of the pathogenic circulating strains and 362 Vibrio sequences reported in GenBank as pathogens and non-pathogens. The evolutionary history was inferred using the Maximum Likelihood method based on the General Time Reversible model. A discrete Gamma distribution was used to model evolutionary rate differences among sites (5 categories, +G, parameter = 0.1000). The rate variation model allowed for some sites to be evolutionarily invariable ([+I], 9.76% of sites). The pathogenic circulating strains of this study showed a greater similarity to the sequences of Vibrio reported as pathogens. To identify the isolates, a final analysis was performed using exclusively the 16S rRNA sequences obtained from GenBank that were most related to our sequences (Fig 1). Two groups of strains with high bootstrap support values were identified (Fig 1). The first group contained eight sequences with similarity to sequences identified as V. campbellii and V. harveyi (Fig 1). All strains characterized in this study as luminescent belong to this group (Fig 1). A ninth luminescent strain was not included in the phylogenetic analysis, as molecular identification was not possible with the obtained sequences. The second group included seven sequences of bacterial isolates with a high level of similarity to identified sequences of V. alginolyticus and Vibrio natriegens (Fig 1). In addition, three other groups with low bootstrap values were identified, containing four sequences with similarities to Vibrio inhibens, Vibrio owensii and Vibrio sp. strain PaH1.25 (Fig 1). In total, ninety-five percent of the isolated strains (19/20) belong to the Harveyi clade. Most of the strains were identified as V. harveyi and V. alginolyticus (12/19 strains). The least frequent strains were V. campbellii, V. owensii, V. inhibens and V. natriegens (Fig 1).
Susceptibility of pathogenic Vibrio strains to antibiotics
All pathogenic strains were sensitive to furazolidone, ciprofloxacin, norfloxacin, nalidixic acid, chloramphenicol and florfenicol (Table 3). A total of 60, 95 and 90% of the strains were sensitive to tetracycline, fosfomycin and enrofloxacin, respectively (Table 3). All strains showed resistance or intermediate sensitivity to penicillin and oxytetracycline (Table 3). Few strains exhibited intermediate sensitivity to enrofloxacin (10%), oxytetracycline (15%) and tetracycline (10%) (Table 3). The complete list of inhibition halo diameters is provided in S2 Table. All strains were resistant to at least two antibiotics at the same time, with 50%, 45% and 5% of the strains resistant to 2, 3 and 4 antibiotics, respectively (Tables 3 and 4). The MAR index was on average 0.23 (Tables 3 and 4). The most prevalent pattern of multiple resistance was P-T (10/20 strains = prevalence of 50%, Table 4). The other most prevalent pattern was P-T-TE (7/20 strains = prevalence of 35%, Table 4). The other two MAR patterns were P-T-Te (2/20 strains = prevalence of 10%, Table 4) and P-T-TE-FF (1/20 strains = prevalence of 5%, Table 4). All strains were simultaneously resistant to penicillin and oxytetracycline (Tables 3 and 4). Forty percent (8/20) of the strains were resistant to both antibiotics of the tetracycline group (Tables 3 and 4). Forty-five percent of the strains exhibited MIC values for oxytetracycline of less than 100 μg mL⁻¹ (Table 5). Forty-five percent of the strains presented MIC values between 200 and 500 μg mL⁻¹ (Table 5). Ten percent of the strains presented a high MIC value for oxytetracycline (>3500 μg mL⁻¹, Table 5). All strains were sensitive to florfenicol at low concentrations (≤40 μg mL⁻¹, Table 5).
Susceptibilities of pathogenic Vibrio strains to organic acids and essential oils
The MIC analyses showed that 100% of the strains were resistant to five products (OA1, OA2, OA3, OA5 and OA8) up to 3500 μg mL⁻¹ of each product (Table 5), and that 15, 25 and 10% of the strains were sensitive to OA4, OA6 and OA7, respectively, at concentrations equal to or lower than 1500 μg mL⁻¹ (Table 5). In general, most of the strains were sensitive to product OA9 (Table 5). Product EO1 controlled 100% of the strains at concentrations between 100 and 3000 μg mL⁻¹. Forty percent of the strains (8/20) were sensitive to EO2 at concentrations up to 3000 μg mL⁻¹ (Table 5).
Cell toxicity of selected products
The most efficient products in terms of bacterial sensitivity were the organic acids OA6 and OA9, but OA9 was cytotoxic at all assayed concentrations, showing no significant differences in cell viability between the evaluated concentrations (P = 0.802, Table 6). In contrast, OA6 was not toxic for shrimp haemocytes between 100 and 400 μg mL⁻¹, exhibiting similar values of cell viability at those levels (P ≥ 0.848, Table 6). Significantly lower levels of cell viability were recorded for OA6 concentrations from 1000 to 3000 μg mL⁻¹ compared with concentrations from 100 to 400 μg mL⁻¹ (P < 0.001, Table 6). No significant differences in cell viability were found among OA6 concentrations from 1000 to 3000 μg mL⁻¹ (P ≥ 0.129, Table 6). Haemocyte cell viability was not affected by florfenicol and oxytetracycline up to 2000 and 4000 μg mL⁻¹, respectively (Table 6). Cell viability of all four control replicates was 100%.
Discussion
The efficiency of therapeutic products is of vital importance for the control of aquaculture diseases. In the present study, we investigated through in vitro analyses the antimicrobial effectiveness of antibiotics and commercially available therapeutic products used in Ecuador to control pathogenic bacterial strains of shrimp larvae. To perform these analyses, we isolated the strains circulating in the shrimp hatcheries, verified their virulence through challenge tests, and identified their molecular similarity with previously reported pathogenic Vibrio. By doing this, we confirmed that we were working with the circulating strains that cause real problems at the production level. The results depended on the product, the concentration of the product and the bacterial strain. Antibiotics were the most efficient therapeutic agents against the growth of pathogenic bacteria. Eight of the antibiotics inhibited the growth of all or most of the pathogenic bacterial strains, but most of these products are not authorized for use in aquaculture. We included several antibiotics in our evaluation because we wanted to investigate whether the pathogenic circulating strains exhibited patterns of multiple antibiotic resistance, which could be associated with antimicrobial use [47]. In our study, the MAR index was on average 0.23, showing some level of resistance to antibiotics. High antibiotic resistance has been found in hatcheries worldwide, as well as higher MAR indexes in hatcheries than in shrimp farms [48,49]. For instance, MAR indexes ranging from 0.21 to 0.38 have been reported for bacteria isolated from shrimp hatcheries [50,49], whereas MAR indexes in shrimp farms range from 0.11 to 0.32 [49][50][51][52]. The average MAR index determined in this study is low compared to values reported for other shrimp hatcheries. However, all strains were simultaneously resistant to penicillin and oxytetracycline, both antibiotics used in human medicine. Similar observations of isolates with higher resistance to antibiotics used in human medicine than to those used in aquaculture have been reported by several authors [53,48]. Most of the sampled hatcheries are in a region of multiple anthropogenic activities, without wastewater treatment, which could be a source of antibiotic pollution.
Table 5 (footnote). Antibiotics: oxytetracycline (T) and florfenicol (F); organic acids: OA1, OA2, OA3, OA4, OA5, OA6, OA7, OA8 and OA9; essential oils: EO1 and EO2. Maximum concentration analyzed: 3500 (oxytetracycline and organic acids) and 3000 (essential oils) μg mL⁻¹.
Table 6. Cell viability (%) of P. vannamei shrimp haemocytes after exposure to varied concentrations of the antibiotics authorized for use in aquaculture and the two most effective natural products against the pathogenic circulating bacterial strains (columns: product; concentration of products in μg mL⁻¹).
In our screening, oxytetracycline was, together with penicillin and tetracycline, one of the less efficient antibiotics; it is one of the two antibiotics authorized for use in aquaculture and the antimicrobial most commonly used in Ecuador for shrimp larval stages, although its use is now in decline. The MIC values found in our study are high and similar to those reported for shrimp culture. For instance, an average MIC of 304 μg mL⁻¹ has been reported for Vibrio isolated in Mexico [53], values up to 400 mg mL⁻¹ for Vibrio isolated from water in hatcheries and farms in Brazil [50], and up to 512 μg mL⁻¹ for Vibrio isolated from Penaeus monodon and P. vannamei shrimp in Thailand [54]. Nevertheless, oxytetracycline is toxic for Penaeus stylirostris larvae at concentrations from 135.5 to 238 μg mL⁻¹ [55]. Fifty-five percent of our screened strains (11/20) presented MIC values for oxytetracycline in this toxic range or above, indicating that the application of this antibiotic is not suitable for most of the pathogenic circulating strains. Considering this observation, it cannot be discounted that the prolonged use of oxytetracycline may explain the observed resistance.
Florfenicol is the other antibiotic authorized for use in aquaculture and was highly efficient in controlling bacterial growth at low concentrations, with a MIC of 8 μg mL⁻¹ for 75% of the strains. Our results are in accordance with the MIC values from 0.5 to 4 μg mL⁻¹ found for Vibrio spp. isolated in Ecuador, USA, Japan and Thailand [56,57] and values from 0.25 to 8 μg mL⁻¹ for Vibrio spp. isolated in Mexico [53]. Florfenicol is a broad-spectrum antibiotic with the same mechanism of action as chloramphenicol (inhibition of protein synthesis), and it is commonly used for the treatment of bacterial diseases in shrimp, such as necrotizing hepatopancreatitis [58]. Although florfenicol has been reported to be toxic for P. vannamei larvae (Zoea 1) at concentrations higher than 20 μg mL⁻¹ [59], we found that haemocytes remained fully viable at concentrations up to 2000 μg mL⁻¹. These seemingly contradictory results could possibly be explained by the fact that the MTT analyses were performed using haemocytes of juvenile shrimp, and at this stage shrimp may tolerate a high concentration of this antibiotic. Sixteen of our screened strains (80%) showed MIC values equal to or higher than 5 μg mL⁻¹, indicating a loss of bacterial sensitivity and the development of resistance to this antibiotic; its use is therefore not recommended as a control strategy at the production level.
Given these conditions, it is necessary for producers to consider alternative strategies for the control of pathogenic bacteria. In our study, only one commercial probiotic (P5) exhibited a high antagonistic capacity against the bacterial strains (85% of the strains). P5, whose declared composition is V. alginolyticus, has been employed as a probiotic in Ecuadorian hatcheries since 1992 [60,61], and it has been observed to enhance postlarval survival and immune response once shrimp reach the juvenile stages [35]. The remaining probiotics inhibited the growth of only 15-30% of the strains, showing intermediate effects, and could therefore be considered functional for the growth control of some pathogenic bacterial strains. Neither of the two strains that were not completely inhibited by probiotic P5 was inhibited by the other probiotics. The administration of either multiple or single probiotics remains controversial [62,63]; in this regard, although we did not test many probiotics, P5, a single strain, was the most efficient probiotic, whereas P1, P2, P3 and P4, declared as strain mixtures, were not particularly effective.
Only one organic acid (OA9) inhibited the growth of most of the strains at low concentrations. This product is a mixture of acetic acid, propionic acid and formic acid. These acids, as well as butyric acid, are efficient for the control of aquatic and shrimp-pathogenic Vibrio [26,28,29]. OA6 was the second most efficient organic acid and contains lactic, fumaric, citric, malic and succinic acids. Lactic and citric acids appear to be the best organic acids for controlling pathogenic V. harveyi in Macrobrachium rosenbergii [64], but lactic acid can inhibit the pathogenic microbiota of fishes [65]. OA4 contains propionic acid and formic acid, whereas OA7 contains formic acid. Three of these four organic acids contain formic acid, which is considered to be particularly effective against pathogenic Vibrio [26]. In addition, OA9 contains three of the four organic acids reported as good bacterial inhibitors, including acetic acid, which is a good disinfectant against V. parahaemolyticus [66]. The five organic acids whose MICs were not determined could possibly be effective at concentrations higher than those tested in this study, which makes them inefficient products in practice.
Essential oils are effective for inhibiting bacterial growth; in our study, the essential oil EO1, whose declared composition includes oregano oil extract, efficiently inhibited the growth of all evaluated bacterial strains, with MIC values equal to or lower than 3 mg mL⁻¹. Similar results were observed by Teixeira et al. (2013) for other bacterial genera, where Origanum vulgare essential oil was effective at MIC values lower than 5 mg mL⁻¹ [67]. The efficacy of EO1 for bacterial inhibition might be related to the presence of thymol and carvacrol, two of the compounds of oregano essential oil [68,69], which decrease the bacterial counts of V. vulnificus, V. parahaemolyticus and V. cholerae in the muscle and hepatopancreas of juvenile P. vannamei [70]. Carvacrol also increases the survival of A. franciscana larvae challenged with V. harveyi [71]. However, despite the high potential of this essential oil against bacterial growth, it could be toxic at high concentrations. Carvacrol concentrations higher than 14.9 mg L⁻¹ were toxic for A. franciscana [71], whereas O. vulgare leaf crude extract and essential oil were toxic for A. salina [72] and P. vannamei shrimp [44] at concentrations higher than 2 mg L⁻¹ and 10 mg L⁻¹, respectively. Other authors have reported the ability of essential oils to disrupt bacterial communication, decreasing bacterial virulence and pathogenicity [73,74]. It would therefore be advisable to evaluate these properties of oregano essential oils and, at the same time, their toxic effects.
The tests performed in this work were designed to analyze whether the commercial products inhibit the growth of, or kill, the pathogenic bacteria, but the natural products evaluated here could exhibit other modes of action not studied in this work. Therefore, further studies will be necessary to evaluate their efficacies in terms of other modes of action, such as the capacity to disrupt bacterial communication, improvement of the shrimp immune system, colonization of the gastrointestinal tract, and enhancement of shrimp growth and survival, among others.
Ninety-five percent of the isolated strains (19/20) belong to the Harveyi clade, known to be the pathogenic clade for shrimp [75]. This is consistent with the results of the challenge tests, verifying that the isolated strains were pathogenic. Most of the strains were identified as V. harveyi and V. alginolyticus (12/20 strains), which have been a continual problem for Ecuadorian hatcheries since 1988-1989 [2][3][4]. This study, however, shows that new pathogenic species (V. campbellii, V. owensii, V. inhibens and V. natriegens) have appeared in Ecuadorian hatcheries, diversifying the circulating bacteria and making it crucial to study the effectiveness of treatments for each pathogenic strain. Periodic surveys at the regional level and further challenge tests could be performed to identify the pathogenic circulating bacterial strains and focus on the bacteria of concern. At the same time, shrimp producers can apply relatively simple in vitro and in vivo analyses, such as those employed in this study, and make adequate management decisions based on the results, which in turn could reduce the impact of bacterial diseases and increase profits.
Supporting information S1 | 8,311.8 | 2019-01-30T00:00:00.000 | [ "Biology" ] |
Orbit constraint of a small bead on a rotating large circular hoop in a horizontal plane
Orbit constraint problems are encountered in mechanical equipment and amusement equipment. Mechanics exercises generally consider ideal physical models, whereas practical problems must also account for friction, which makes the problem more complex but also more realistic. The forces on, and the oscillation of, objects constrained to an orbit therefore need to be discussed in depth. In order to simulate the orbital motion of objects more realistically and to help students extend their theoretical mechanics beyond class, we study the orbit constraint of a small bead on a large circular hoop rotating in a horizontal plane about an axis passing through a point on its circumference. The coupled equations governing the bead on the hoop are derived using Newton's second law in a planar polar coordinate system and solved by numerical methods. We find that, under the action of friction, when the initial angular velocity of the bead is greater than the critical angular velocity, the bead rotates around the hoop, and the number of rotations is related to the initial angular velocity and influenced by the friction coefficient. At different initial angular velocities, the number of oscillations of the bead on the hoop is basically the same, and the bead ultimately stops near a fixed point.
Nomenclature: W_fϕ, work done by the angular component of the friction [J]; W_fr, work done by the radial component of the friction [J]; W_Nϕ, work done by the angular component of the constraint force [J]; W_Nr, work done by the radial component of the constraint force [J].
Orbit constraint problems are encountered in mechanical and entertainment equipment, and they are also an important topic in theoretical mechanics. When a bead or ring, regarded as a particle, moves on a curved orbit, it is subject to the constraint force of the orbit. Mechanics textbooks generally consider only ideal models, whereas practical problems must also consider the influence of friction, which makes the problem more complex and more realistic. In order to simulate the orbital motion of objects more realistically and to help students extend their theoretical mechanics beyond class, the forces on and the oscillation of objects on an orbit need to be discussed in depth. Because objects generally undergo curvilinear motion, natural coordinate systems are often used. When solving a constraint problem, the constraint is generally removed and replaced by the constraint force, and the moving object is treated as a free particle. The constraint force is generally unknown, unlike ordinary forces: it is not entirely determined by the constraint itself, but is related to the other forces acting on the particle and to the particle's state of motion. Moreover, the constraint force alone cannot cause any movement of the particle, so it is often referred to as a passive force. The constraint force acts at the contact point between the particle and the curve or surface. In the absence of friction it lies along the normal of the curve or surface, while in the presence of friction it is inclined at a certain angle to the normal [1].
The motion of a bead on a circular hoop is one of the classic orbital constraint problems. The bead exhibits various modes of motion, such as oscillation on one side of the hoop and rotation around the hoop, and it displays a series of characteristics of dynamical systems, such as fixed points, bifurcation, reversibility and symmetry breaking [2][3][4][5][6][7]. Johnson et al. studied a bead on a hoop rotating about a horizontal axis and developed a new approach for creating a one-dimensional, gravitational ponderomotive trap [5]. Dutta et al. explored the diverse modes of motion of a bead moving on a vertically rotating circular hoop, including frictionless and frictional motion [6]. Animasaun et al. pointed out that the starting angular velocity is critical for calculating the overall angular displacement of a rotating object [8]. We previously studied the constraint problem of a small ring on an elliptical orbit and analyzed how velocity and orbit shape influence the constraint force and the motion of the ring [9].
For a smooth orbit, an analytical result for the orbital equation of motion of the particle can be obtained, but when friction is considered the situation is much more complex and the equations must be solved numerically. In this paper, the orbit constraint of a bead on a large circular hoop rotating in a horizontal plane about an axis passing through a point on its circumference is studied. We need to solve a second-order differential equation, which can be transformed into a first-order coupled system of equations by reducing the order. We solve it numerically using the forward difference formula. Because the computation time is short, the time step can be taken very small, which ensures the calculation accuracy. Because one of the equations is complicated, its numerical solution is not straightforward; fortunately, it can be regarded as a quadratic equation in the angular velocity at the next time step, and its roots can be obtained with the quadratic formula. The two roots correspond to the clockwise and counterclockwise rotation of the bead relative to the hoop.
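As a concrete illustration of the order reduction and the forward-difference update, the Python sketch below integrates the frictionless limit of the problem, θ̈ = −Ω0² sin θ, the classical pendulum-like equation for a bead on a hoop spinning at constant rate Ω0 about a point on its rim. The paper's full frictional equations are not reproduced here, so this is only a simplified stand-in consistent with the frictionless results quoted later (critical value ωc = 2Ω0).

```python
import numpy as np

# Frictionless limit: theta'' = -Omega0**2 * sin(theta), reduced to the first-order
# system d(theta)/dt = omega, d(omega)/dt = -Omega0**2 * sin(theta) and advanced with
# the forward (explicit Euler) difference scheme, which has O(dt) accuracy.
Omega0 = 2.0      # hoop angular velocity (rad/s), beta = 0 as in the paper
dt = 1e-4         # time step
n_steps = 30_000  # 3 s of simulated time

def integrate(theta0, omega0):
    theta, omega = theta0, omega0
    history = np.empty(n_steps)
    for i in range(n_steps):
        # both updates use the values from the previous step (forward difference)
        theta, omega = theta + dt * omega, omega - dt * Omega0**2 * np.sin(theta)
        history[i] = theta
    return history

print(integrate(0.0, 3.0).max())  # stays below pi  -> oscillation on the hoop
print(integrate(0.0, 5.0).max())  # exceeds pi      -> the bead rotates around the hoop
```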
By solving the above equations, this paper mainly studies three problems. First, the oscillation of the bead on the hoop without friction is studied. Second, the influence of the friction coefficient on the motion of the bead at a given initial angular velocity is studied. Third, the influence of the initial angular velocity on the motion of the bead under the action of friction is studied. By solving for the eigenvalues of the Jacobian matrix of the system, we analyzed the types of fixed points in the phase diagram, and the conclusions are consistent with the numerical results. We hope these studies can provide some reference for similar amusement facilities and engineering design.
Model equations
We consider the motion of a bead, regarded as a particle, on a rotating large hoop. As shown in Fig. 1, a large circular hoop with radius R lies in the horizontal plane. A bead with mass m is placed at point M on the hoop, and the friction coefficient between the bead and the hoop is µ. The hoop rotates in the horizontal plane around a point O on the hoop at an angular velocity Ω = Ω0 + βt, where Ω0 is the initial angular velocity and β is the angular acceleration. The plane Cartesian coordinate system O-xy and a polar coordinate system are established, with O as the pole and the x axis as the polar axis. Let the center of the hoop be C; connect OC and extend it, and denote its included angles with MO and MC as γ and θ, respectively. We take the counterclockwise rotation direction as positive, so the position of the bead on the hoop can be described by θ. The initial angular velocity of the bead relative to the hoop is ω = ω0. Figure 1 shows the force analysis of the bead when the angular velocity ω > 0, where N1 is the constraint force pointing toward the center of the hoop, with radial and angular components Nr and Nϕ, respectively, and f is the friction, with radial and angular components fr and fϕ, respectively. In addition, the bead is subject to gravity mg and the vertical supporting force N2, where g is the gravitational acceleration, N2 = mg, and the friction is f = µ√(N1² + N2²). The constraint force forces the bead to undergo curvilinear motion on the hoop, and its magnitude depends on gravity and on the angular velocity and angular acceleration of the bead. Friction hinders the relative movement of the bead and the hoop. The coordinates of the bead in polar coordinates satisfy the relations r = 2R cos γ and ϕ = ϕ0 + Ω0 t + βt²/2 + γ, where ϕ0 is the initial angle, and ϕ0 = 0 when OO′ and the x axis coincide.
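The kinematic relations quoted above already fix the bead's Cartesian position once θ(t) is known. The Python sketch below evaluates them; the relation γ = θ/2 used here is our own inference from the isosceles triangle OCM (OC = CM = R) and is not stated explicitly in the text.

```python
import numpy as np

R, Omega0, beta, phi0 = 1.0, 2.0, 0.0, 0.0

def bead_xy(t, theta):
    """Cartesian position of the bead from the polar relations quoted in the text:
    r = 2R*cos(gamma), phi = phi0 + Omega0*t + beta*t**2/2 + gamma.
    gamma = theta/2 is inferred from the isosceles triangle OCM (an assumption)."""
    gamma = theta / 2.0
    r = 2.0 * R * np.cos(gamma)
    phi = phi0 + Omega0 * t + beta * t**2 / 2.0 + gamma
    return r * np.cos(phi), r * np.sin(phi)

# e.g. a bead resting at the equilibrium point O' (theta = 0) simply rides on the hoop:
t = np.linspace(0.0, np.pi, 5)
print(np.round(np.column_stack(bead_xy(t, 0.0)), 3))
```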
The differential equation of motion of the bead varies according to the value range of θ.
Figure 1. A schematic diagram of the force analysis of a bead on a rotating circular hoop with radius R when the angular velocity ω > 0 of the bead relative to the hoop. Ω0 t + βt²/2 is the rotation angle of the OO′ axis of the rotating hoop relative to the x axis, γ is the included angle between MO and OO′, and θ is the rotation angle of the bead relative to the hoop. N1 and f are the constraint force and the friction on the bead, respectively.
The + and − in the symbol ± correspond to the clockwise and counterclockwise rotation of the bead relative to the hoop, respectively. According to Taylor's formula, the difference schemes have first-order accuracy, O(Δt).
The radial and angular components of the constraint force N1 and the friction f are Nr, Nϕ, fr and fϕ, respectively.
The selection of + and − depends on the values of ω, n and θ0. Table 1 shows the selection of the positive and negative signs of Nr, Nϕ, fr and fϕ for different value ranges of ω, n and θ0. In the planar polar coordinate system, we take the directions along the vector r and perpendicular to r (the direction of increasing θ) as positive. The positive and negative signs in Table 1 indicate whether the components of N1 and f are along the positive directions or opposite to them.
During the movement of the bead, the work done by the radial and angular components of the constraint force N1 and the friction f is computed according to the definition of work in a planar polar coordinate system [1], and the total work done by the combined external force during the motion is obtained by summing these contributions.
Table 1. The selection of positive and negative signs of Nr, Nϕ, fr and fϕ in Eqs. (7)-(10) for different ranges of ω, n and θ0.
According to the kinetic energy theorem, the kinetic energy of the bead can also be expressed in terms of this total work, where v0 is the initial velocity of the bead.
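Numerically, each work term is an accumulation of dW = Fr dr + Fϕ r dϕ along the computed trajectory, which is the definition of work in planar polar coordinates. The helper below is a generic illustration of that quadrature, not a reproduction of the paper's Eqs. (7)-(10).

```python
import numpy as np

def work_components(Fr, Fphi, r, phi):
    """Accumulate the work done by the radial and angular force components along a
    sampled trajectory, using dW = Fr*dr + Fphi*r*dphi (trapezoidal accumulation)."""
    dr = np.diff(r)
    dphi = np.diff(phi)
    r_mid = 0.5 * (r[1:] + r[:-1])
    Wr = np.sum(0.5 * (Fr[1:] + Fr[:-1]) * dr)
    Wphi = np.sum(0.5 * (Fphi[1:] + Fphi[:-1]) * r_mid * dphi)
    return Wr, Wphi

# Quick check: a force of constant magnitude 1 opposing the motion along a unit circle
# traversed once should give Wr = 0 and Wphi = -2*pi.
phi = np.linspace(0.0, 2.0 * np.pi, 2001)
r = np.ones_like(phi)
print(work_components(np.zeros_like(phi), -np.ones_like(phi), r, phi))
```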
Analysis of results
This paper uses SI units: the units of length, time, mass and angular velocity are m, s, kg and rad/s, respectively. In the calculations we set g = 9.8, R = 1, m = 1, β = 0, Ω = Ω0 = 2, ϕ0 = 0 and the time step Δt = 10⁻⁵. In Fig. 2, we display a phase diagram of the relationship between θ and ω for µ = 0 and ω0 = 1, 2, 3 and 4. As can be seen from the figure, when the initial angular velocity ω0 of the bead is too small to carry it around the hoop, the bead rotates with the hoop and oscillates periodically near the equilibrium position O′ relative to the hoop. The orbits are closed, and the larger the area enclosed by an orbit, the longer the oscillation period. The angular component Nϕ of the constraint force N1 provides the restoring force. The amplitude of the bead increases with increasing ω0, and the maximum oscillation range is θ ∈ (−π, π). When the initial angular velocity is increased further, the bead rotates around the hoop. The critical angular velocity is ωc = 4. This can be explained by the kinetic energy theorem: when the bead moves from θ = 0 to π, if the initial kinetic energy Ek0 of the bead and the work done by the constraint satisfy Ek0 + W_Nr + W_Nϕ = 0, the critical initial angular velocity ωc is obtained. If ω0 > ωc the bead can rotate around the hoop; otherwise the bead oscillates within the range θ ∈ (−π, π).
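The critical value can also be recovered numerically by sweeping the initial angular velocity and checking whether the bead passes θ = π. The sketch below does this by bisection for the frictionless limit θ̈ = −Ω0² sin θ assumed in the earlier sketch; with Ω0 = 2 it returns a value close to the reported ωc = 4.

```python
import numpy as np

Omega0, dt = 2.0, 1e-4

def max_theta(omega0, t_max=6.0):
    """Largest theta reached when integrating theta'' = -Omega0**2 * sin(theta)
    (frictionless limit) from theta = 0 with the forward-difference scheme."""
    theta, omega, peak = 0.0, omega0, 0.0
    for _ in range(int(t_max / dt)):
        theta, omega = theta + dt * omega, omega - dt * Omega0**2 * np.sin(theta)
        peak = max(peak, theta)
    return peak

# Bisect for the smallest initial angular velocity that carries the bead past theta = pi.
lo, hi = 1.0, 8.0
for _ in range(25):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if max_theta(mid) < np.pi else (lo, mid)
print("critical omega_0 ~", round(hi, 2))  # close to 2*Omega0 = 4, as stated in the text
```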
Next, we increase the initial angular velocity ω0 to study the influence of the friction coefficient on the motion of the bead. Figure 3 shows the relationship between θ and ω at the initial angular velocity ω0 = 10. Table 2 shows the numerical results of the evolution of θ and ω with time t, with a time interval of 0.5. It can be seen from the figure that when µ = 0, the bead rotates on the hoop, and the angular velocity changes periodically with θ. When µ = 0.05, the kinetic energy of the bead decreases because of the work done by friction; the bead rotates two laps around the hoop and then undergoes damped oscillation near the equilibrium position O′. Finally, the bead stops at the equilibrium position and moves in a circle together with the hoop. When µ = 0.1, 0.15 and 0.2, the kinetic energy loss due to friction is greater: the bead rotates only one lap on the hoop, and the number of oscillations at the equilibrium position decreases or the oscillation disappears. When µ = 0.25, the bead becomes stationary relative to the hoop after only half a lap. In this case, the bead does not stop near the initial equilibrium position O′ but near the coordinate origin O. At the coordinate origin the restoring force Nϕ on the bead is also 0, but compared with point O′, point O is less stable, so the probability of the bead stopping at point O′ is greater than at point O. Setting the time derivatives to zero gives the fixed points of the system, i.e., the equilibrium positions; at such a point ω = 0, and Eq. (3a) then reduces to Eq. (18). For a given value of µ, multiple solutions for θ can be obtained by solving Eq. (18). These solutions are located near points O′ and O. When µ is 0.05, 0.1, 0.15, 0.20 and 0.25, it can be seen from Fig. 3 that the stopping positions θ of the bead are 4.0494, 2.0092, 2.1357, 1.9824 and 0.8623, respectively. According to the solutions of Eq. (18), the fixed-point values closest to these θ values are 4.0504, 1.8986, 1.8465, 1.7922 and 0.7892, respectively. It can be seen that for a relatively small µ (e.g., µ = 0.05) the bead stops almost exactly at the fixed point. When µ increases, friction makes the bead stop near, but not at, the fixed point. When µ = 0.15, 0.2 and 0.25, there is no oscillation of the bead, because its angular velocity has already become 0 when it passes near the fixed point, and the larger friction prevents further oscillation. Figure 4 shows the change of the angle γ with time t at µ = 0 and 0.05. It can be seen from the figure that when µ = 0 the bead rotates on the hoop, the variation range of the angle γ is (−π/2, π/2), and there is an abrupt jump of the angle from π/2 to −π/2. When µ = 0.05, the bead oscillates near the equilibrium position O′ after rotating two laps on the hoop.
At this point the angle γ changes continuously, its variation range decreases with time, and it finally tends to 0. Figure 5 shows the relationship between the constraint force N1 on the bead and the angle θ for different friction coefficients µ (µ = 0, 0.05, 0.1, 0.15, 0.2 and 0.25). Table 3 shows the numerical results of the evolution of θ and N1 with time t, with a time interval of 0.5. As can be seen from Fig. 5, the constraint force changes periodically with the angle θ when µ = 0. With increasing µ, the constraint force tends to decrease, and its maximum and minimum values occur near the initial position O′ and the coordinate origin O, respectively. This is consistent with the positions of the maxima and minima of the angular velocity in Fig. 3. Figure 6 shows the variation of the work W_Nr, W_Nϕ, W_fr and W_fϕ done by the radial and angular components of the constraint force N1 and the friction f with the angle θ for different friction coefficients µ. As can be seen from Fig. 6a-d, the work W_Nr and W_Nϕ done by Nr and Nϕ varies with θ with a certain periodicity. The radial friction fr always does negative work, whereas the angular friction fϕ can do positive work. Table 4 shows the positive and negative signs of the work W_Nr, W_Nϕ, W_fr and W_fϕ done by Nr, Nϕ, fr and fϕ for different ranges of θ0, ω and Ω0 + βt + ω. In the table, + and − represent positive and negative work, respectively, which is consistent with the calculation results in Fig. 6.
Figure 7 shows the relationship between the kinetic energy of the bead and θ for different friction coefficients µ. We calculated the left and right sides of Eq. (17) separately. It can be seen from the figure that the two calculation results agree well and satisfy the kinetic energy theorem, which also confirms the accuracy of our calculations. When µ = 0, the kinetic energy of the bead changes periodically with increasing θ. When µ > 0, the kinetic energy decays, and the degree of attenuation increases with increasing µ.
Finally, we study the influence of the initial angular velocity on the kinematics of the bead. Figure 8 shows the relationship between the angular velocity ω and θ of the bead at different initial angular velocities ω0 for a friction coefficient µ = 0.06. Table 5 shows the numerical results of the evolution of θ and ω with time t, with a time interval of 0.5. As can be seen from the figure, when ω0 = 12 and 18, the bead rotates 2 and 3 laps around the hoop, respectively; when ω0 = 6 and 9, the bead rotates one lap on the hoop, then undergoes damped oscillation near the equilibrium position and finally comes to rest relative to the hoop; when ω0 = 3, the bead only oscillates near the equilibrium position and then stops at point O′. The number of laps the bead makes around the hoop is related to the initial angular velocity, but the final number of oscillations is basically the same.
When ω0 = 15, the angular velocity of the bead becomes 0 after the bead rotates 2.5 laps around the hoop. In this case the bead does not stop at point O′ but near the other fixed point, O.
Table 3. The numerical results of the evolution of θ and N1 with time t in Fig. 5 when µ = 0, 0.05, 0.1, 0.15, 0.2 and 0.25, with a time interval of 0.5.
Discussion of results
Next, we further analyze the types of fixed points. Let the coordinates of the fixed point of the system be (θf, ωf). When g ≫ a(ω + Ω0)² + aΩ0² cos θ, which corresponds to the case in which the bead is not subject to friction and has a small angular velocity, the Jacobian matrix of the system is (Eq. 19) J = ( 0, −Ω0² cos θf ; 1, 0 ). The coefficient of the first power of λ is η = 0, and the eigenvalues are λ1,2 = ±Ω0 √(−cos θf). The fixed point is therefore a centre, and the phase orbit forms a closed curve around it [11]. This is consistent with Fig. 2. When g ≪ a(ω + Ω0)² + aΩ0² cos θ, which corresponds to the bead moving at a large angular velocity under the action of friction, the Jacobian matrix acquires a friction term, and the coefficient of the first power of λ is η = ±µΩ0. When η = µΩ0 the motion is enhanced, and when η = −µΩ0 the motion is attenuated; the latter case is consistent with our model. When µ = 0.25, the analysis at the fixed point θf = 0.7892 gives the values 1.0049 > 0, 1.5049 > 0 and −2.5049 < 0, indicating that there is an unstable direction near the fixed point, which is therefore a saddle point. When µ = 0.05, 0.1, 0.15 and 0.2, the corresponding quantity is negative, the fixed point is a stable focus, and the orbit spirals into the fixed point [11]. This is consistent with Fig. 3. A similar analysis shows that, in Fig. 8, the fixed points with θf ≈ 2nπ are stable foci, while those with θf ≈ (2n + 1)π are saddle points.
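The classification above follows from the eigenvalues of the 2×2 Jacobian. The sketch below evaluates the frictionless Jacobian of Eq. (19) at the two equilibria; the ordering of the state vector as (ω, θ) is inferred from the matrix itself and may differ from the paper's convention.

```python
import numpy as np

Omega0 = 2.0

def frictionless_jacobian(theta_f):
    """Jacobian of Eq. (19) for the frictionless system, with the state taken as
    (omega, theta): d(omega)/dt = -Omega0**2 * sin(theta), d(theta)/dt = omega."""
    return np.array([[0.0, -Omega0**2 * np.cos(theta_f)],
                     [1.0, 0.0]])

# theta_f = 0 is the equilibrium O' opposite the pivot; theta_f = pi corresponds to O.
for theta_f in (0.0, np.pi):
    eig = np.linalg.eigvals(frictionless_jacobian(theta_f))
    kind = "centre" if np.allclose(eig.real, 0.0) else "saddle"
    print(f"theta_f = {theta_f:.3f}: eigenvalues {np.round(eig, 3)} -> {kind}")
```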
Conclusions
In this paper, the orbit constraint of a bead on a rotating large circular hoop in a horizontal plane has been studied. The model is characterized by the inclusion of friction, which makes the problem more complicated. The dynamic coupled equations of the bead have been derived using classical mechanics in the polar coordinate system. By numerically solving the coupled equations, the relationships between the angular position of the bead and the angular velocity, constraint force, friction and work have been studied. Firstly, we studied the oscillation of the bead on the hoop without friction; the calculations indicate that the angular component of the constraint force acts as the restoring force that makes the bead oscillate periodically on the hoop, and its amplitude increases with the initial angular velocity. Secondly,
Figure 2. The relationship between ω and θ for µ = 0. With the increase of the initial angular velocity ω0, the oscillation range of the bead gradually increases, and the critical angular velocity of the bead oscillating on the hoop is ωc = 4.
Figure 3. The relationship between θ and ω at the initial angular velocity ω0 = 10. When µ = 0, the bead rotates on the hoop, and the angular velocity changes periodically. With the increase of the friction coefficient, the attenuation of the angular velocity of the bead increases. When µ = 0.05, 0.1, 0.15 and 0.2, the bead stops near the equilibrium position O′. When µ = 0.25, the bead stops near the coordinate origin O.
Figure 4. (a) and (b) show the change of the angle γ with time t at µ = 0 and 0.05, respectively. When µ = 0, the bead rotates on the hoop, and the variation range of the angle γ is (−π/2, π/2). When µ = 0.05, the angular velocity of the bead decreases with time t, and the angle γ changes continuously and finally tends to 0.
Figure 5. The relationship between the constraint force N1 on the bead and the angle θ under different friction coefficients µ. The constraint force changes periodically with the angle θ at µ = 0. With the increase of µ, the constraint force tends to decrease, and its maximum and minimum values occur near the initial position O′ and the coordinate origin O, respectively.
Figure 6. The variation of the work done by the radial and angular components of the constraint force N1 and the friction f with the angle θ under different friction coefficients µ. The work W_Nr and W_Nϕ done by Nr and Nϕ shows a certain periodicity. The radial friction fr always does negative work and the angular friction fϕ can do positive work.
Figure 7. The relationship between the kinetic energy of the bead and θ under different friction coefficients µ. The dotted and solid lines are the results calculated from the left and right sides of Eq. (17), respectively. The two calculations agree well and satisfy the kinetic energy theorem, which also confirms the accuracy of the calculation results.
Figure 8. The relationship between ω and θ at different initial angular velocities for a friction coefficient µ = 0.06. When ω0 = 12 and 18, the bead rotates 2 and 3 laps around the hoop, respectively. When ω0 = 6 and 9, the bead rotates one lap on the hoop and then stops at point O′. When ω0 = 3, the bead oscillates only near the equilibrium position and then stops at point O′. When ω0 = 15, the bead rotates 2.5 laps around the hoop and then stops at the origin O.
Table 2. The numerical results of the evolution of θ and ω with time t in Fig.
Table 4. Positive and negative signs of the work W_Nr, W_Nϕ, W_fr, and W_fϕ done by N_r, N_ϕ, f_r, and f_ϕ for different ranges of θ₀, ω, and Ω₀ + βt + ω. + and − in the table represent positive and negative work, respectively.
Table 5. The numerical results of the evolution of θ and ω with time t in Fig. 8 when ω₀ = 3, 6, 9, 12, 15, and 18, with a time interval of 0.5. | 6,200.6 | 2024-03-08T00:00:00.000 | [
"Engineering",
"Physics"
] |
Evaluation of the Capability of ExoMars-TGO NOMAD Infrared Nadir Channel for Water Ice Clouds Detection on Mars
As part of the payload of the 2016 ExoMars Trace Gas Orbiter (TGO) mission, the Nadir and Occultation for MArs Discovery (NOMAD) instrument suite has been observing the Martian atmosphere since March 2018. NOMAD is mainly dedicated to the study of trace atmospheric species, taking advantage of its high spectral resolution. We demonstrate that when NOMAD is observing in nadir mode, i.e., when the line-of-sight points to the centre of Mars, it can also be exploited to detect ice. In this study we present a method based on the investigation of nadir observations of the NOMAD infrared channel, acquired during Mars Years 34 and 35 (March 2018 to February 2021). We take advantage of the strong water ice absorption band at 2.7 µm by selecting the diffraction orders 167, 168, and 169. We derive the Frost and Clouds Index (FCI), which is a good proxy for ice mapping, and obtain latitudinal-seasonal maps for water ice clouds. FCI is sensitive to the Polar Hood clouds. Nevertheless, detections in the Aphelion Cloud Belt (ACB) are limited. This is consistent with previous observations showing different physical properties between the two main Martian atmospheric structures and making the ACB less detectable in the infrared. We hence derive the infrared nadir channel sensitivity limit for the detection of these clouds.
Introduction
Understanding the exchanges between the atmosphere and the surface remains pivotal in planetary climate research. On Mars, seasonal variations in the main atmospheric gaseous components are strongly affected by their condensation and sublimation within the polar caps [1]. In this framework, the formation of ice clouds plays a fundamental role in sculpting the Martian climate. By observing their spatial and seasonal distributions, we can improve our knowledge of atmospheric transport, as well as of the water vapour and CO2 cycles. Moreover, studying the formation and composition of ice clouds can help to better understand smaller-scale phenomena such as convective regimes and the thermal effects of radiative forcing [2].
In this work, we investigate the information content of the ExoMars-TGO NOMAD infrared channel nadir dataset. NOMAD is a suite of high spectral resolution instruments mainly dedicated to studying Martian atmospheric trace gases and climatological processes [31-35]. Given the peculiar mode of operation of the instrument for nadir observations, it is difficult to discern clouds from suspended dust and surface ice deposits. For that reason, we define here a method aimed at identifying H2O ice clouds in NOMAD infrared nadir data, based on the characterization of the 2.7 µm ice absorption band. A brief description of the NOMAD instrument and of the nadir observations that we used in this study is presented in Section 2. Then, we describe the methodology of the study and derive the Frost and Clouds Index in Section 3. The analysis of NOMAD nadir data for MY34 and MY35 is performed in Section 4, where the results obtained over 1.5 MY of NOMAD acquisitions are discussed and compared to previous studies and model predictions.
NOMAD Instrument
The Nadir and Occultation for MArs Discovery (NOMAD) instrument is a suite of three high resolution spectrometers that was selected as part of the payload of the 2016 ExoMars Trace Gas Orbiter (TGO) mission. Led by the Royal Belgian Institute for Space Aeronomy (BIRA-IASB), NOMAD has been observing the Martian atmosphere since March 2018 (LS = 150° in MY34) through three channels operating in the ultraviolet-visible (UV-VIS) and infrared (IR) spectral ranges. A first spectrometer is devoted to solar occultation observations (SO channel), operating in the 2.3-4.3 µm IR spectral range. A second spectrometer, capable of performing nadir, limb, and solar occultation observations (LNO channel), covers the 2.3-3.8 µm IR spectral range. A third spectrometer (UVIS channel) can work in the three observation modes, covering the 200-650 nm UV-VIS spectral range. A complete description of the instrument can be found in the following papers: Neefs et al. [36], Vandaele et al. [37,38], Thomas et al. [39], and Patel et al. [40].
In the present work, we select the LNO channel for the nadir observations covering the 2.3-3.8 µm IR spectral range with a spectral resolution of 0.3 cm −1 [8]. This channel provides observations of the Martian surface and atmosphere with a typical integration time of around 200 ms. The ground track footprint is approximately 0.5 km × 17.5 km from the TGO orbit, 400 km above Mars. Therefore, NOMAD-LNO is able to map the majority of the surface of the planet every 30 sols [39]. The spectrometer does not observe the whole LNO spectral range simultaneously. Instead, acquisitions are performed nearly simultaneously in 22 cm −1 wide spectral windows (called here orders from now on), representing specific diffraction orders of the diffraction grating. Considering the signal-to-noise ratio (SNR), a maximum number of 6 diffraction orders can be selected for each observation every 15 s, by suitably tuning the frequency of the entrance Acousto-Optical Tunable Filter (AOTF) [8] through an internal radio-frequency generator.
We use level 1A LNO data, which are converted into a reflectance factor, i.e., the LNO radiance divided by the measured solar flux at Mars and by the cosine of the solar zenith angle. The LNO reflectance factor at wavelength λ can be written as

R_λ = π L_λ d²_Mars / (Φ_Sun,λ cos(SZA))    (1)

where L_λ is the LNO measured spectral radiance (W m⁻² sr⁻¹ µm⁻¹), Φ_Sun,λ is the solar flux at 1 astronomical unit (AU), d_Mars is the Sun-Mars distance in AU, and SZA is the solar zenith angle. More details about the LNO calibration we adopt are presented in Thomas et al. [41]. A slightly different calibration approach, in agreement with the former within 3%, is given by Cruz Mermy et al. [42].
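As a minimal sketch of this conversion, the function below evaluates Equation (1) as reconstructed above; the function name is ours, and the π normalisation follows the usual I/F convention, which is an assumption here rather than a statement of the NOMAD pipeline.

```python
import numpy as np

def lno_reflectance_factor(radiance, solar_flux_1au, d_mars_au, sza_deg):
    """Reflectance factor R_lambda from LNO spectral radiance.

    radiance       : LNO spectral radiance L_lambda (W m-2 sr-1 um-1)
    solar_flux_1au : solar spectral flux Phi_Sun,lambda at 1 AU (W m-2 um-1)
    d_mars_au      : Sun-Mars distance in AU
    sza_deg        : solar zenith angle in degrees
    """
    # Scale the solar flux from 1 AU to the Sun-Mars distance.
    flux_at_mars = solar_flux_1au / d_mars_au**2
    # pi * L / (F * cos(SZA)); the pi factor is assumed from the usual
    # I/F convention, since Eq. (1) is reconstructed from the text.
    return np.pi * radiance / (flux_at_mars * np.cos(np.radians(sza_deg)))
```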
It is important to note that the general shape of the NOMAD raw spectra is strongly affected by the AOTF transmission and by the spectral response of the grating, i.e., the Blaze function [39]. While the Blaze function is described by a Gaussian curve, the AOTF transmission presents a strong peak with several side-lobes. A combination of a sinc function with a Gaussian is used to represent the AOTF curve [8]. These secondary peaks allow photons from a larger spectral range to fall on the grating, so an unexpected signal is summed with the expected spectral information. This becomes significant at the edges of each order. After the spectral and radiometric calibrations [8,41,42], the AOTF and Blaze modulations also propagate to the reflectance factor conversion in the form of low-frequency oscillations in the spectral continuum. For this reason, we only work with reflectance factors at the central value of each spectral order, in order to mitigate these oscillations.
Regarding the SNR of the data, Thomas et al. [39] made an analysis taking into account different sources of uncertainty. The main source of noise is the instrument thermal background, which limits the integration times in order to avoid saturation of the detector. The 15 s observation period is divided by the number of orders (maximum of 6); therefore, measuring fewer orders achieves a better SNR. In a typical sequence of observations, two or three orders are measured, giving an average SNR value of about 10. In the best scenarios, which depend strongly on SZA, the measured SNR is expected to be around 15-20 [41].
Data Selection
The first step of this work is to identify the water ice spectral features covered by NOMAD diffraction orders. A brief description of the spectral content and main scientific focus of each available LNO diffraction order is presented in Oliva et al. [43], who discussed the capability of the NOMAD infrared nadir channel to detect surface ice by using a spectral ratio with orders 190 (2322.9-2341.5 nm) and 169 (2611.8-2632.7 nm). These two orders allowed estimation of the relative depth of the 2.7 µm absorption band, which is the strongest CO2 and H2O ice absorption band in the LNO spectral range. However, this approach is less effective for the detection of transient phenomena such as ice clouds, because it requires a more stringent temporal and spatial coincidence between the two orders. As detailed in Section 2, NOMAD is not capable of observing its entire spectral range simultaneously due to its mode of operation and high resolution; it therefore alternates observations among diffraction orders. As a result, the observation period of order 190 does not fully coincide with the others, and the temporal coverage is very limited. For this reason, we investigate an alternative approach focused on orders 169 (2611.8-2632.7 nm), 168 (2627.2-2648.2 nm), and 167 (2642.8-2663.9 nm), all falling on the short-wavelength shoulder of the strong 2.7 µm ice absorption band (see Figure 1).
Figure 1. Simulated CO2 ice, H2O ice, and dust reflectance spectra computed with the MITRA tool [44-47]. The simulations have been performed with the following characteristics: a surface albedo of 0.2 (A = 0.2), an incidence angle of 0° (i = 0°), and an optical depth at 2 µm of 0.5 (τ2 = 0.5). We describe all aerosol layers by adopting lognormal size distributions with an effective variance of 0.1 (veff = 0.1) and characteristic grain sizes (reff). Vertical dashed lines indicate the centres of the orders. See the legend on the panel for the colour definition.
Figure 1 presents simulated CO2 ice, H2O ice, and dust reflectance spectra computed with the Multiple scattering Inverse radiative TRansfer Atmospheric (MITRA) tool [44-47]. The MITRA code is based on the multi-solver LibRadtran radiative transfer package [48] and can be operated both as a forward model and as an inverse retrieval algorithm to study planetary atmospheres [44-47]. In this study, we take advantage of the forward model in order to reproduce spectra in the LNO spectral range. Nevertheless, we do not attempt to derive aerosol microphysical information. Indeed, as we work in nadir mode, the signal is highly convoluted with the surface properties, and therefore abundance and grain size retrievals are characterised by non-negligible uncertainties when using only a few orders. Regarding the ice reflectance spectra, surface CO2 ice and H2O ice are simulated with effective radii of 100 µm (reff = 100 µm) (dark/light blue solid lines), while reff = 7 µm and 10 µm are respectively used for CO2 ice and H2O ice clouds (dark/light blue dashed lines). Conversely, the red solid line represents the dust spectrum with reff = 1 µm [43]. As highlighted by the vertical dashed green line, order 190 is located on the ice cloud continuum. The challenge of this work is to define a technique that exploits only orders 167, 168, and 169 and that allows detection of H2O ice clouds and frost separated from dust (see Sections 3.2 and 4.2).
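As a small illustration of this order selection, the snippet below lists the order boundaries quoted above and flags those whose coverage lies on the short-wavelength shoulder of the 2.7 µm band; the 2600-2700 nm window used as the "shoulder" criterion is our own illustrative choice, not a value given in the text.

```python
# Diffraction order coverage in nm, as quoted in the text.
ORDER_RANGE_NM = {
    167: (2642.8, 2663.9),
    168: (2627.2, 2648.2),
    169: (2611.8, 2632.7),
    190: (2322.9, 2341.5),
}

# Illustrative window for the short-wavelength shoulder of the 2.7 um band.
SHOULDER_NM = (2600.0, 2700.0)

def on_shoulder(order):
    """True if the whole order lies inside the assumed shoulder window."""
    lo, hi = ORDER_RANGE_NM[order]
    return SHOULDER_NM[0] <= lo and hi <= SHOULDER_NM[1]

for order in sorted(ORDER_RANGE_NM):
    centre = sum(ORDER_RANGE_NM[order]) / 2.0
    print(f"order {order}: centre {centre:.1f} nm, shoulder={on_shoulder(order)}")
```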
Frost and Clouds Index through the 2.7 µm Absorption Band
In the LNO spectral range, the surface reflectance is the main source of signal variability. Therefore, any effort to detect ice cloud spectral signatures has to account for the surface contribution. Previous studies using MGS-TES [49], MEX-OMEGA [50], and CRISM [51] nadir observations have demonstrated albedo spatial variations over the whole Martian surface [49,52,53]. These albedo variations, coming from different surface mineralogy absorptions, also represent the main source of variability in LNO reflectance factors. By comparing LNO reflectance to a Martian albedo map, it is possible to remove surface albedo contributions and to spotlight anomalous detections resulting from, for example, an ice cloud's spectral signature. To this extent, OMEGA data provide reflectance spectra in the NIR, allowing the construction of albedo maps in the 0.97-2.7 µm spectral range [53]. Nevertheless, as cautioned by Riu et al. [54], OMEGA albedo maps can be partially biased in low-albedo equatorial terrains, where plagioclase minerals predominate [54-56]. Indeed, in the NIR, this mineral phase presents a lack of spectral absorption features that may be altered by the presence of dust and could constitute a caveat for the construction of OMEGA albedo maps. On the contrary, TES is able to detect plagioclase features in the TIR range [55] and therefore takes into account the effect of low-albedo equatorial terrains on the NOMAD spectra. Moreover, TES data are also filtered to partially minimise the effect of atmospheric dust and clouds. For these reasons, we rely on the TES bolometric albedo map.
For each LNO observation, i, characterised by different longitudes and latitudes, we define for simplicity the LNO_Norm ratio as

LNO_Norm,i = R_i / TES_i    (2)

where R is the reflectance factor value taken at the centre of the selected order (i.e., at λ = 2622.3 nm, 2637.8 nm, and 2653.6 nm for orders 169, 168, and 167, respectively), and TES is the bolometric Martian albedo value averaged over each considered LNO footprint. Such a ratio will be sensitive to anomalies pertaining to both ice and dust, and for this reason, we investigate how it behaves with the two components by performing simulations with the MITRA tool. Figure 2 illustrates LNO_Norm simulations of dust and H2O ice, adopting average effective radii and optical depths and considering order 169 as an example (dashed lines). The ice absorption makes the resulting curve depart significantly from the dust curve. Conversely, the same behaviour is not observed in order 190, also shown in Figure 2 as a reference, where the two curves are quite similar (solid lines). It is important to stress that changing the aerosol microphysics in the simulations has an impact on the LNO_Norm value, in principle making dust mimic ice anomalies, especially on low-albedo terrains, and hence possibly yielding false positive detections.
Given the moderate SNR of the LNO data, the LNO_Norm parameter is affected by non-negligible fluctuations. In order to mitigate this effect, we combine the LNO_Norm of the three orders to define a Frost and Clouds Index (FCI) from LNO_Norm,167, LNO_Norm,168, and LNO_Norm,169, the LNO_Norm values defined above for orders 167, 168, and 169. Similarly to Figure 2, simulations have been performed for FCI in the presence of suspended dust and water ice particles (see Figure 3). Given the way FCI is defined, as expected, its simulated values are larger for water ice (blue line). While the LNO_Norm ratio allows anomalous detections to be spotted in the presence of water ice clouds, FCI emphasises simultaneous detections in the three orders and helps to derive a threshold value for ice detection, as will be shown in Section 4.1. Nevertheless, it is important to mention that, despite the TES map being filtered to minimise the effect of atmospheric dust and clouds, it still contains surface ice [49], increasing the uncertainty in transitional ice/no-ice regions. As in the case of the Ice Index [43], the discussion based on the FCI is semi-qualitative, and the value that we derive later should not be considered an absolute threshold for ice detection, but rather an indication of abundant frost or dense water ice clouds.
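A minimal sketch of the normalisation step is given below. The per-order ratio (reflectance at the order centre divided by the footprint-averaged TES albedo) follows the definition above; the way the three ratios are combined into the FCI is not reproduced in this text, so `combine_orders` is only a stand-in to make the sketch run and must be replaced by the actual definition.

```python
import numpy as np

# Order-centre wavelengths (nm) at which the reflectance factor is taken.
ORDER_CENTRE_NM = {169: 2622.3, 168: 2637.8, 167: 2653.6}

def lno_norm(reflectance_at_centre, tes_albedo_footprint_mean):
    """Per-order ratio: order-centre reflectance over TES bolometric albedo."""
    return reflectance_at_centre / tes_albedo_footprint_mean

def combine_orders(norm_167, norm_168, norm_169):
    """Placeholder combination of the three LNO_Norm values.

    The paper's FCI definition is not given here; a plain mean is used only
    so the example runs end to end.
    """
    return np.mean([norm_167, norm_168, norm_169])

# Hypothetical single observation: reflectance factors sampled at the
# ORDER_CENTRE_NM wavelengths and one footprint-averaged TES albedo value.
refl = {167: 0.12, 168: 0.11, 169: 0.13}
tes_albedo = 0.22
norms = {o: lno_norm(refl[o], tes_albedo) for o in (167, 168, 169)}
fci_proxy = combine_orders(norms[167], norms[168], norms[169])
print(norms, fci_proxy)
```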
Data Analysis and Results
For this study, we analyse 1.5 Martian Years of NOMAD LNO nadir infrared observations using the three orders mentioned in Section 3.1. Table 1 presents the number of orbits acquired for each order. Even though order 168 was used extensively in MY35, we note an overall fair distribution of the total number of observations between MY34 and MY35. In order to focus the analysis on seasonal ice coverage, we construct a latitudinal-seasonal map of FCI (see Figure 4). Initially, all the LNO data are organised by latitude (from 90°N to 90°S) and time (MY34 and 35), expressed in terms of LS, with a 2° × 2° binning. Each bin of latitude and LS contains data averaged over all available longitudes. When computing the FCI map, we keep track of the observations falling into common bins for all three orders. Then, we remove the worst-case scenarios with the lowest SNRs, i.e., below SNR ~10: we select only the LNO observations with a solar zenith angle lower than 60°, as larger illumination angles seriously affect the signal intensity measured by NOMAD. In order to keep a significant colour dynamic, in Figure 4 we saturate the colour bar for FCI values larger than 3. Important saturations can be observed at the highest latitudes in both hemispheres, i.e., for LS = 180-270° in MY34-35 in the Southern hemisphere and for LS = 0-45° in MY35 in the Northern hemisphere. These observations represent the sublimation phase of the polar caps [1,43,57-61]. Indeed, surface ice presents a strong absorption at 2.7 µm (see Figure 1) and directly impacts the FCI values. The polar regions are discussed in depth in Oliva et al. [43], which investigates the LNO information content in order to obtain latitudinal-seasonal maps of CO2 ice in both polar regions. Being outside the scope of this paper, the polar cap observations are not discussed in detail here.
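A sketch of the binning just described follows; the function name, the argument layout, and the simple arithmetic-mean averaging are illustrative choices, assuming the observations are held in 1-D numpy arrays.

```python
import numpy as np

def fci_lat_ls_map(lat_deg, ls_deg, sza_deg, fci, bin_size=2.0, sza_max=60.0):
    """Average FCI on a latitude x solar-longitude grid.

    Observations with SZA above sza_max are discarded, mirroring the
    selection described in the text; each bin averages all longitudes.
    All inputs are 1-D numpy arrays of equal length.
    """
    keep = sza_deg < sza_max
    lat, ls, val = lat_deg[keep], ls_deg[keep], fci[keep]

    lat_edges = np.arange(-90.0, 90.0 + bin_size, bin_size)
    ls_edges = np.arange(0.0, 360.0 + bin_size, bin_size)

    total, _, _ = np.histogram2d(lat, ls, bins=[lat_edges, ls_edges], weights=val)
    count, _, _ = np.histogram2d(lat, ls, bins=[lat_edges, ls_edges])

    # Bin-wise mean; empty bins are left as NaN.
    return np.divide(total, count, out=np.full_like(total, np.nan), where=count > 0)
```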
However, we observe bins with a high FCI value also in non-polar regions. In the Northern hemisphere, most of them can be found above latitude 40°N. Some high FCI values are also present around the equator for LS = 45-180° in MY35. In the South, from latitude 20°S to 40°S, FCI returns some saturated pixels for LS = 150-180° in MY34 and around LS = 90° in MY35. The investigation of these non-polar high FCI pixels is discussed in the following sections. First, we discuss the sensitivity of FCI to detect frost (see Section 4.1). Then, we attempt to derive a detection limit for water ice clouds (see Section 4.2).
On the other hand, as already mentioned in Section 3.2, different surface mineralogies are responsible for the global variations of surface albedo [62-64]. This affects the measured reflectance, especially over regions of low surface albedo [18,52]. It is worth noting that the high FCI values around latitude 60°N are over low surface albedo regions. This dark latitudinal band covers Acidalia and Utopia Planitia [65]. Szantai et al. [18] studied the diurnal cloud life cycle over these large regions using OMEGA data. Similarly to the NOMAD data analysis in this work, they defined a spectral ratio (the Reversed Ice Cloud Index (ICIR), based on Madeleine et al. [16]) at the 3.1 µm water ice absorption band and used it as a proxy of the water ice column. Their results are not always in agreement with the model predictions. They found the highest ICIR uncertainty (>20%) over Acidalia and Utopia Planitia, regions with low surface albedo. For that reason, we also investigate the possibility of having surface effect residues in the results (see Section 4.3).
Frost Detection
We discuss here the possibility to derive a threshold value for the detection of frost.
In order to define a quantitatively and statistically robust threshold value, we compute the histogram of the FCI value distribution on a logarithmic X-scale. As shown in Figure 5, the bulk of the histogram follows a Gaussian distribution peaked at −0.35 (µ = −0.35) with a standard deviation of 0.13 (σ = 0.13). Nevertheless, the distribution is not totally symmetrical around its mean value: we can observe a wing on the right-hand side of the distribution, corresponding to the high FCI pixels in Figure 4. Similarly to what has been done by Oliva et al. [43], we tune the threshold value so that the edge of the polar caps is detectable. This happens for FCI values exceeding the average value of the distribution by 3.5σ. This threshold is indicated by the vertical dashed red line in Figure 5. As a comparison, we also estimate an average FCI value for polar deposits, represented by the vertical dashed blue line (Figure 5). Being sensitive to surface ice deposits, the FCI value over the polar caps exceeds the average value of the distribution by 10σ.
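The threshold construction can be sketched as below. A plain mean and standard deviation of log10(FCI) stand in for the paper's Gaussian fit of the histogram core, and applying the 3.5σ offset in log space is an assumption of this sketch; the text quotes µ = −0.35 and σ = 0.13 for the actual dataset.

```python
import numpy as np

def frost_threshold(fci_values, nsigma=3.5):
    """Frost/cloud detection threshold from the bulk of the FCI distribution.

    Estimates the Gaussian core of the log10(FCI) histogram with a simple
    mean/std, then offsets it by nsigma standard deviations.
    """
    vals = np.asarray(fci_values, dtype=float)
    log_fci = np.log10(vals[vals > 0])   # histogram is built on a log X-scale
    mu, sigma = log_fci.mean(), log_fci.std()
    return 10.0 ** (mu + nsigma * sigma)
```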
We apply this threshold to the FCI map of Figure 4 and present the results in Figure 6A. As expected, detections in the polar regions (see regions 1, 2, and 3 in red) are in good agreement with the expected boundaries of the polar caps, but high values of FCI are also found at mid-latitudes. We now focus on the detections found at latitudes within ±30° (see regions A to G in Figure 6A). A possible explanation for these detections is the presence of surface ice deposits. In order to verify this hypothesis, we need to take into account two parameters: the surface temperature (T) and the Local True Solar Time (LTST). Indeed, even at mid-latitudes, temperatures may drop below the frost point, i.e., T ~148 K for CO2 [66] and T ~193 K for H2O [67] at 610 Pa (the average Martian pressure at 0 elevation). This can happen in the early morning just after sunrise. Piqueux et al. [66] observed surface temperatures consistent with CO2 frost at all latitudes and predicted a survival time of less than 1 h after sunrise. On the other hand, the Martian topography may play an important role, especially within ancient volcanoes, cracks, and craters. This complex terrain geometry allows the existence of shadowed areas on a local surface, which can maintain low temperatures even during the day. Taking into account the LTST (see Figure 6B), we use the Mars Climate Database v5.3 (MCD) [25,68] to estimate the surface temperature for the mid-latitude detections at the end of MY34 (region A) and in MY35 (regions B to G). For all these detections, the MCD predicted surface temperatures are listed in Table 2. It can be seen that MCD predicts surface temperatures that are always higher than the H2O frost point (T > 193 K). We can hence exclude the presence of both CO2 and H2O frost, even for the 8:20 and 8:25 LTST observations, where residual night frost could otherwise survive (see Table 2 for D1 and F1). Moreover, the MCD predictions for regions A, E, F, and G seem consistent with Carrozzo et al. [67], who observed H2O frost only before LS = 150°. In contrast, as mentioned above, frost can survive in shadowed regions along scarps and craters. However, this possibility is not easy to verify due to the large NOMAD nadir channel footprint (see Section 2).
Table 2. Solar longitudes (LS), latitudes, local times (LTST), and MCD predicted surface temperatures (T) of the regions of interest (A to F) identified in Figure 6. Numbers in the regions D, E, and F represent bins from left to right in each region. Columns: Region of Interest | LS (°) | Latitude (°) | LTST | T (K).
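The frost-exclusion argument above amounts to a simple temperature comparison, sketched here with the frost points quoted in the text (about 148 K for CO2 and 193 K for H2O at 610 Pa); the function name and the example temperature are illustrative.

```python
CO2_FROST_K = 148.0   # approximate CO2 frost point at 610 Pa (quoted in the text)
H2O_FROST_K = 193.0   # approximate H2O frost point at 610 Pa (quoted in the text)

def frost_plausible(surface_temp_k):
    """Return which frost species could survive at the given surface temperature."""
    return {
        "co2_frost": surface_temp_k <= CO2_FROST_K,
        "h2o_frost": surface_temp_k <= H2O_FROST_K,
    }

# An MCD prediction above 193 K rules out both frost species,
# as argued for the mid-latitude regions A to G.
print(frost_plausible(210.0))
```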
Given the above discussion, we see that the interpretation of the mid-latitude detections (regions A to G in Figure 6A) as surface frost can be discarded. The detections in regions B, C, and D fall in the aphelion season, i.e., LS ~60-160°. During this season, the Aphelion Cloud Belt occurs every year at low latitudes; therefore, these regions could be suitable for ACB detections. Moreover, in Figure 6A we can also see many detections at high latitudes (above 40°N and below 30°S in MY34-35), which relate to another important atmospheric structure, the Polar Hood. We hence verify the cloud hypothesis in the next section by selecting specific regions where clouds are either expected by general circulation models or have been observed by other instruments.
Water Ice Cloud Detection
As already mentioned in the previous section, Figure 6A presents high FCI values even outside the red regions 1, 2, and 3. In the Northern hemisphere, they are found above latitude 40°N. Moreover, we observe high values in the Southern hemisphere, mainly before LS 180° (MY34) and around LS 90° (MY35), in addition to the scattered detections at mid-latitudes (see Section 4.1). In this section, we verify the possibility of spotting potential clouds using the FCI. To this end, we highlight new regions of interest in which the detections are spotted (see regions H to O in Figure 6). As we can observe for regions H to O in Figure 6B, the LTST is mainly concentrated in a range of about 2 h around noon. Nevertheless, early detections exist in region L (at LS 90° and 164° and latitudes around 50°N), with LTST of 7:13 and 8:45, respectively. MCD simulations always predict surface temperatures higher than 197 K and are, therefore, inconsistent with the presence of frost. For this reason, we suggest that the detections are related to atmospheric ice (see Section 3). However, it is important to keep in mind that this region may lead to surface effect residues in the results (see Sections 4 and 4.3 for more details).
Two main cloud structures can be observed on Mars: the Polar Hoods (PH) and the Aphelion Cloud Belt (ACB) (see Section 1) [4,9,17]. The PH occur above the high latitudes (~40°N and ~40°S) of the winter hemispheres. The Northern Polar Hood (NPH) is usually observed for about three-quarters of the Martian Year, starting at LS 150°, and covers all longitudes. Moreover, the NPH always extends to the pole [15]. Instead, the Southern Polar Hood (SPH) is an annular ring that does not extend to the pole, due to less water vapour being available in the South than in the North [12]. The SPH is only present during two phases: between LS = 10-70° (phase 1) and between LS = 100-200° (phase 2) [14]. During phase 1, the structure extends over a large range of latitudes, namely from 30°S to 75°S. As shown in region N, FCI detections are present from latitude ~25°S to 50°S. Given the period of observations (LS = 0-200°), the FCI results appear to be compatible with clouds in the SPH. Figure 7 presents a direct comparison with the MCD simulations. However, the limited coverage of the Southern hemisphere does not allow us to observe the whole cloud structure. Nevertheless, we are able to observe the evolution of the SPH at equatorward latitudes. Detections between LS = 15-83° appear compatible with phase 1. On the other hand, phase 2 seems to cover LS 107° to 195°, following the recession of the polar cap. This phase is only partially observed in region I (LS = 153-187°) due to the lack of observations in MY34. Moreover, it is worth noting that the specifics of the TGO orbit influence the spatial and temporal coverage of the NOMAD observations. Indeed, as shown in Figure 6B, the LTST changes with latitude and LS, and important diurnal variations of water ice clouds occur in the Martian atmosphere. Such effects can hence affect the results, underestimating the presence of clouds compared to the MCD predictions [19,69]. For example, at LS 20°, observations are acquired in the morning for the Southern hemisphere. At that time, MCD predicts a low water ice column at the probed latitudes (see Figure 7A). On the other hand, at LS ~100°, MCD simulations seem to be more in agreement with the FCI results when selecting an LTST in the early afternoon (see Figure 7B).
During the rest of spring and summer (LS ~50-150° in region K), detections are not consistent with previous studies [9,10,15,17]. It is important to mention that the NOMAD instrument provides simultaneous observations of these clouds in the UV through the UVIS channel (see Section 2), which is sensitive to clouds and confirms their presence at latitude ~74°N (personal discussion with Y. Willame, ice cloud retrieval based on Willame et al. [17]). Moreover, these results are also supported by the MCD predictions. Nevertheless, FCI detections in region L are uncommon, especially between LS ~50° and 150°. At LS ~50° and 150°, they cover latitudes up to 35°N. A possible explanation is that they are related to the northern part of the ACB, which seems to connect with the NPH. Although these detections are not predicted by MCD, they probably represent the so-called 'cloud bridge' [70] detected during previous Martian Years [6,14,17,18,71,72]. Given the disagreement between the model predictions and the FCI detections, we discuss the results for region L in the next section.
Regarding the ACB, it is not visible in our results in Figure 6. The structure is known to appear at low latitudes (10°S to 30°N), with enhancements over volcanoes in the Tharsis region [13]. Different types of clouds compose the cloud belt, ranging from formless thin morning haze to large-scale thick clouds. In terms of microphysical properties, two main groups have been observed for the thick clouds, whose difference in particle size defines the core and the periphery of the ACB. The first group corresponds to regions strongly controlled by local dynamics and topography; these clouds are observed over the Tharsis region with a 5 µm grain size. On the other hand, regional wind circulation forms the second group, composed of large-scale clouds and characterised by particle sizes of 2-3.5 µm [5,16]. In our results (Figure 6), only a couple of detections are present in equatorial regions during this season (see regions B, C, and D); this could be explained on the basis of the differences in morphology and microphysical properties described above.
One of the reasons could be that the large LNO footprint (Section 2) makes it difficult to detect optically thin hazes and clouds. On the other hand, large-scale cumulus clouds, from 5 to 10 km in size, could be detectable in the NOMAD nadir footprint. Nevertheless, they are relatively thin at the beginning of the northern spring and only become thicker late in the ACB season. The thickest of these clouds have been observed during the early northern summer, forming at the beginning of the afternoon [16,20], consistent with regions C and D. Moreover, the clouds' ice abundance could play an important role in their detection. Olsen et al. [10] derived the water ice column (WIC) for MY26-32 using the nadir IR observations of OMEGA. They estimated two ranges of values, about 1.2-1.6 pr. µm over the ACB and 1.5-2.5 pr. µm over the PH. This distinction between the two cloud structures can explain the global results presented in Figure 6: while we register several detections over the PH, only a few isolated clouds are detected over the ACB. This behaviour suggests that the LNO nadir dataset is only sensitive to clouds with ice columns larger than about 1.5 pr. µm.
As mentioned in Section 4.1, detections are also present at mid-latitudes during the perihelion season (LS = 180-360°; see regions A, E, F, and G in Figure 6). These results are difficult to confirm with the MCD predictions, although the simulations agree with the presence of clouds in region E for a 10:30-11:00 LTST. Moreover, the results for regions A, E, and F are not consistent with the OMEGA data analysis [10]. During the perihelion season, the solar flux increases and relatively warms the Martian atmosphere, promoting dust activity and hence limiting water ice cloud column opacities [12]. Nevertheless, previous works have demonstrated the presence of water ice clouds during this period; they mainly occur in the mesosphere (at altitudes above 50 km) [4,8,73]. The results in regions A, F, and G appear to be in agreement with previous studies focused on MRO-CRISM [51] and NOMAD-SO (SO channel, see Section 2) data analysis [4,8], which spotted water ice clouds around LS 200°, 270°, 300°, and 345°; clouds were also observed by SPICAM [17] for regions A and G. Moreover, especially for regions A and F, the detections correspond to the period of the perihelion cloud trails (PCT). This class of mesospheric clouds forms between LS 210° and 310°, from the late morning to the mid-afternoon. Horizontally extended (200 to 1000 km), they are observed over specific regions between latitudes 5°S and 40°S, i.e., in the Arsia Mons, Syria, and Solis regions and along Thaumasia Planitia, the Valles Marineris margins, and the northeast of Hellas Basin [74,75].
Given the above discussion, it is important to recall that the kind of threshold value that we use indicates abundant frost or dense water ice clouds rather than an absolute criterion for ice detection (see Sections 3.2 and 4.1). Therefore, the FCI appears to be sensitive only to ice characterised by a particular microphysics. Nevertheless, some FCI detections (especially region L in Figure 6) are still difficult to justify with previous studies or the MCD predictions. For that reason, we discuss these results in the next section.
FCI Sensitivity
It is worth noting that the detections in region L are over low surface albedo regions covering Acidalia and Utopia Planitia [65]. As already mentioned in Section 4, the dark latitudinal band around 60°N affects the measured reflectance [18,52]. Therefore, in this section we investigate the possibility of having surface effect residues in the results of region L (see Figure 6). We apply the FCI to observations between LS 30° and 150° in MY35. In order to verify an eventual correlation between the FCI and the dark terrains, we perform a direct comparison with the TES albedo map. The results are given in terms of latitude and longitude in Figure 8. As we can see, saturated values of the FCI (>3.5σ, see Section 4.1) are present over Acidalia Planitia and to the north of that region (see the red rectangle on the left panel). They can also be seen at the highest probed latitudes in both hemispheres (see the black circles on the left panel). These high index values should be related to the presence of water ice clouds (see Sections 3.2 and 4.2). Nevertheless, by comparison with the right panel, we see that the high FCI values inside the red rectangle are correlated with dark terrains. Since the TES albedo is bolometric, the correlation we observe can result from an overestimation of albedo values over dark regions at 2.7 µm with respect to LNO. Therefore, the high values in the red rectangle (see left panel) are likely linked to the presence of surface effect residues over Acidalia Planitia in the LNO data. This is in agreement with the OMEGA results in Szantai et al. [18].
In Figure 8, the selected period corresponds to the aphelion season. The circles cover regions with an intermediate surface albedo. These regions are not comparable with the dark regions at northern high latitudes [65]. Therefore, the saturated FCI values in both hemispheres are likely linked to the presence of clouds. As highlighted in Section 4.2, the low WIC of the ACB makes its detection difficult in the LNO spectral range [10], and the full ACB structure is hence not visible. For that reason, we attempt to derive the FCI sensitivity limit for cloud detection. From MCD simulations, we notice that the PH water ice column can take values of the same order as those of surface water ice (~10⁻² kg/m²), which are higher by a factor of 10 than those of the ACB (~10⁻³ kg/m²). However, the PH is not always fully detected (see region I in Figure 6). Moreover, some detections have been recorded around the equator during the aphelion season (see regions B-D in Figure 6). Local thick clouds in the ACB can hence be detected. In order to estimate the sensitivity limit of the FCI, we compare the FCI results in Figure 8 with the MCD predictions (see Figure 9). We can see that the WIC of the ACB takes values lower than 3 × 10⁻³ kg/m². In contrast, those in the centre of the SPH are generally higher than 5 × 10⁻³ kg/m², while the WIC of the NPH can reach 1.3 × 10⁻² kg/m². From the saturated index values at the highest latitudes (white pixels), we can estimate the limit of the FCI sensitivity for cloud detection at ~4 × 10⁻³ kg/m². Some saturated pixels are found at mid-latitudes, which may indicate the presence of thicker clouds compared to the rest of the ACB (see black circle). Nevertheless, as over Acidalia Planitia (see red rectangle), we suspect a surface effect in the detection inside the yellow rectangle. Indeed, this region covers a terrain with a low surface albedo (TES albedo lower than 0.1, see Figure 8).
Conclusions
NOMAD-LNO is a spectrometer mainly designed to investigate the presence of trace gases in the Martian atmosphere. The instrument uses preselected spectral orders to resolve the absorption lines of single species. Due to this mode of operation and to its high resolving power, it cannot acquire a single spectrum over the full spectral range (see Section 2). Therefore, the spatial coverage linked to the full spectral range is also limited, since each spectral order observes a different footprint. Moreover, due to technical constraints imposed by the spacecraft, the SNR is not optimal. Finally, there is an intrinsic limitation linked to the spectral behaviour of ice clouds and dust: depending on their microphysical properties, it can be very challenging to distinguish between them. With all these constraints and limitations in mind, we have described a technique that takes advantage of three NOMAD-LNO diffraction orders (167, 168, and 169), covering the short-wavelength shoulder of the 2.7 µm ice absorption band. The application of such a technique allows us to map surface ice and H2O ice clouds in the Martian atmosphere. We applied the method to regions where ice clouds are either expected by general circulation models or have been observed by other instruments. The method is based on spectral ratios between LNO reflectance factor spectra and the TES bolometric dust-clean albedo. We have defined a Frost and Clouds Index (FCI) as a useful proxy for ice mapping (see Section 3). We applied the method to the LNO dataset in Martian Years 34 and 35 (March 2018 to February 2021), excluding observations with SZA larger than 60° to avoid the lowest SNRs (see Section 4). As discussed in Section 4.2, the acquisition of data during MY34-35 allows us to construct seasonal maps for water ice clouds. The results appear in agreement with previous studies focused on Mars Express SPICAM/UV and OMEGA data analysis [10,17]. FCI is sensitive to the Polar Hood clouds, although the full structure is not detected. Moreover, detections in the Aphelion Cloud Belt (ACB) are limited. This is consistent with previous OMEGA observations [10] showing different physical properties between the two main Martian atmospheric cloud structures and making the ACB less detectable in the infrared. We hence derived the LNO channel sensitivity limit for cloud detection (see Section 4.3).
Finally, the analysis presented in this paper represents one of several studies dedicated to the exploitation of the LNO nadir dataset and opens the way for follow-up papers. As a direct continuation of this work, a further comparison with the NOMAD-UVIS channel concerning cloud detection is already planned. It will help to tune the FCI, hopefully increasing its sensitivity to ice clouds and limiting the number of false detections. In addition, an in-depth radiative transfer analysis of the 2.35 µm feature in CO2 ice cloud spectra is currently being performed.
Data Availability Statement:
The data used for this study was obtained from the Royal Belgian Institute for Space Aeronomy (IASB-BIRA). At the time of writing, the dataset is not publicly available, however the LNO data used for this study will be added to the ESA Planetary Science Archive (https://archives.esac.esa.int/psa) in the near future. | 12,362 | 2022-08-23T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
In Vitro Assembly Studies of FtsZ/Tubulin-like Proteins (TubZ) from Bacillus Plasmids
Proteins with a weak sequence similarity to tubulin and FtsZ are expressed from large plasmids of Bacillus anthracis and Bacillus thuringiensis and are probably involved in plasmid segregation. Previously designated RepX and TubZ, we designate them here as TubZ-Ba and TubZ-Bt. We have expressed and purified the proteins for in vitro studies. TubZ-Ba and TubZ-Bt share only 21% amino acid identity, but they have remarkably similar biochemical properties. They both assemble into two-stranded filaments and larger bundles above a critical concentration, and they hydrolyze GTP at a very high rate, ∼20 GTP min–1 TubZ–1. Assembly is also supported by GTPγS. A tiny amount of GTPγS stabilizes polymers assembled in GTP and inhibits the GTPase by a mechanism involving cooperativity. The nucleotide in the polymers is almost 100% GDP, which is similar to microtubules but very different from the 20–30% GDP in FtsZ polymers. This suggests that the TubZ polymers have a capping mechanism that may be related to the GTP cap that produces dynamic instability of microtubules.
ones that have been studied so far are from the pXO1 plasmid of Bacillus anthracis and the pBtoxis plasmid of Bacillus thuringiensis. These show only 21% amino acid identity to each other (4).
The FtsZ/tubulin-like protein from pXO1 was shown to be essential for maintaining the plasmid in B. anthracis (5). That study went on to show that this was the only plasmid-encoded protein that was necessary for plasmid stability. A minireplicon containing only a short DNA sequence from pXO1 (presumably a centromere-like segment) could be maintained provided the FtsZ/tubulin-like protein was produced, either from the same plasmid or in trans. Tinsley and Khan (5) suggested that this protein might be involved in replicating the plasmid DNA and named the protein RepX. An alternative is that RepX is involved in partitioning the low-copy plasmids to daughter cells, as originally suggested by Berry et al. (4). We now believe this to be the case, especially in light of the more detailed analyses of TubZ from pBtoxis.
The FtsZ/tubulin-like protein from pBtoxis, ORF156, was identified as one of two plasmid-encoded proteins needed for plasmid maintenance in B. thuringiensis (6). Similar to the study of pXO1, a minireplicon expressing these two proteins and containing a short DNA sequence with an iteron repeat (a centromere-like segment) was stably maintained. Tang et al. (6) suggested that the protein was functioning for plasmid partitioning. Larsen et al. (3) were interested in this protein as a distant relative of tubulin and FtsZ, and named it TubZ. They constructed a GFP-tagged version that they expressed in B. thuringiensis and in Escherichia coli. In both species the TubZ assembled into a single filament that spanned most of the length of the cell. The filaments were remarkably uniform in fluorescence, both along the length of the filament and between filaments in different cells. The filaments demonstrated a treadmilling behavior, growing at one end and shrinking at the other, at a rate of about 30 nm per s. Margolin (7) has reviewed the discoveries and mechanisms of these two proteins.
The likely function of TubZ from B. thuringiensis is plasmid partitioning (3,4,6). We believe this also applies to RepX. To avoid the confusion of different names for the FtsZ/tubulin-like proteins from different species, we propose a simple and uniform nomenclature. We will use the name TubZ, which was meant to designate the similarity to tubulin and FtsZ (3), and we will add a species identification. Thus, we will designate the proteins from B. anthracis and B. thuringiensis TubZ-Ba and TubZ-Bt. This nomenclature can be easily extended to the larger group of FtsZ/tubulin-like proteins on other plasmids and Archaea (3). An advantage of this nomenclature is that it implies nothing about function.
Assembly of TubZ-Bt has been characterized in the cytoplasm of B. thuringiensis and E. coli as discussed above (3). Assembly has not been studied in vitro. Assembly of TubZ-Ba has not been studied either in vitro or in vivo. Their extreme sequence divergence from each other suggests that their functions could be as different as those of FtsZ and tubulin. To explore these proteins further, we have expressed and purified both of them and characterized their assembly properties in vitro.
EXPERIMENTAL PROCEDURES
Protein Expression and Purification-We used PCR to clone the TubZ-Ba sequence from plasmid pDSW208-FtsZpXO1-GFP, a gift from Dr. Theresa Koehler, University of Texas Medical Center, which was generated from B. anthracis strain 7702. BamHI and EcoRI restriction sites were added at the ends and used to insert TubZ-Ba into the pGEX2T expression vector, which adds an N-terminal GST tag. The TubZ-Ba-pGEX2T vector was then transferred into E. coli strain BL21.
Protein was expressed by adding 0.5 mM isopropyl-1-thio-β-D-galactopyranoside to the culture when its absorbance at 600 nm was ~1.0. After 3 h, cells were centrifuged and resuspended in 50 mM Tris-HCl, pH 7.9, 300 mM KCl. 1 mM phenylmethylsulfonyl fluoride and 0.2 mg/ml lysozyme were added, and cells were incubated for 30 min at 4°C. Cells were lysed with a French pressure cell and centrifuged at 32,000 rpm for 20 min. The supernatant was then mixed with 5 ml of glutathione-agarose (Sigma, G4510) for 1 h at 4°C. The agarose was loaded onto a column and washed with 50 mM Tris, pH 7.9, 300 mM KCl. GST-TubZ-Ba protein was eluted with 10 mM reduced glutathione in the same buffer. The GST tag was removed by adding 2 units/ml of thrombin for 2 h at 4°C. The protein was further purified by chromatography on a Source Q 10/10 column (GE Health Care, Piscataway, NJ), eluted with a linear gradient of 50 mM to 500 mM KCl in 50 mM Tris-HCl, pH 7.9, 1 mM EDTA, 10% glycerol. Peak fractions were identified by SDS-PAGE and stored at −80°C. For most experimental measurements, the protein was dialyzed into assembly buffer, sometimes referred to as HMK100: 50 mM Hepes, pH 7.7, 100 mM KAc, 5 mM MgAc, 1 mM EGTA.
The TubZ-Bt gene was obtained from MosquitoDunks® (Summit Chemical Co.), a commercial sample of B. thuringiensis subsp. israelensis. Bacteria were propagated, and genomic DNA, including the DNA of the plasmid pBtoxis, was isolated. The TubZ-Bt gene was obtained by PCR and transferred to pET15. The His-tagged protein was expressed and purified by affinity chromatography on a Talon column (Clontech Lab, Inc.). The His tag was removed with 2 units/ml of thrombin, and the protein was further purified on a Source Q column.
GTPase Assay-To measure the GTPase activity, we used the continuous, regenerative coupled GTPase assay of Ingerman et al. (8). Our assay mixture included 0.4 mM phosphoenolpyruvate, 0.3 mM NADH, 20 units/ml each of pyruvate kinase and lactate dehydrogenase (Sigma), and 0.1-0.5 mM GTP. Each GDP released from TubZ-Ba is regenerated to GTP with the loss of one molecule of NADH. The NADH concentration was monitored by its absorbance at 340 nm (extinction coefficient 6,220 M−1 cm−1) using a Shimadzu UV-2401PC spectrophotometer.
Following addition of GTP, the absorbance showed a linear decrease over time. We measured the slope of the straight line after 100 s, typically between 100 and 300 s, and used the extinction coefficient of NADH to determine the GTP hydrolysis rate. We then plotted the rate versus TubZ concentration, and the slope of this line (above the critical concentration) gave the overall rate of GTP hydrolysis in GTP per min per TubZ.
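For readers who want to reproduce this conversion, a minimal sketch of the rate calculation is given below. The slopes and concentrations are illustrative placeholders rather than the measured data, and the 1-cm cuvette path length is an assumption.

```python
import numpy as np

EXT_NADH = 6220.0  # M^-1 cm^-1, NADH extinction coefficient at 340 nm
PATH_CM = 1.0      # cuvette path length in cm (assumed)

def hydrolysis_rate_uM_per_min(a340_slope_per_s):
    """Convert the slope of A340 vs. time into a GTP hydrolysis rate;
    one NADH is oxidized for every GDP regenerated to GTP."""
    rate_M_per_s = -a340_slope_per_s / (EXT_NADH * PATH_CM)
    return rate_M_per_s * 1e6 * 60.0  # M/s -> uM/min

# Illustrative A340 slopes (per second) for several TubZ-Ba concentrations (uM)
tubz_uM = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
slopes = np.array([-0.0005, -0.0017, -0.0042, -0.0067, -0.0117])
rates = hydrolysis_rate_uM_per_min(slopes)

# Rate vs. concentration: the slope is the turnover number and the
# x-intercept is the apparent critical concentration
k, b = np.polyfit(tubz_uM, rates, 1)
print(f"turnover ~ {k:.0f} GTP TubZ^-1 min^-1, critical concentration ~ {-b / k:.2f} uM")
```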
Light Scattering Measurements-90 degree light scattering was measured using a Shimadzu RF-5301 PC spectrofluorometer, with both excitation and emission set at 350 nm. Assembly reactions were initiated by adding 20-500 μM GTP, and/or the indicated concentrations of GTP analogue, into 3-10 μM TubZ. Light scattering measurements were begun about 2-3 s after nucleotide addition.
Electron Microscopy-TubZ filaments were visualized by negative stain EM. About 10 μl of the sample (5-10 μM) in the appropriate buffer was incubated with GTP for 1-2 min and applied to a carbon-coated copper grid. Samples were negatively stained with 2% uranyl acetate and photographed using a Philips 301 electron microscope at ×50,000 magnification.
Measurement of Filament-bound Guanine Nucleotide-We used the method of Romberg and Mitchison (9) to determine the content of nucleotide bound to TubZ polymers. TubZ at 0, 5, 10, 15, and 20 μM was added to 10 μl of a GTP-regenerating system (2 mM phosphoenolpyruvate and 40 units/ml pyruvate kinase). 20 μM GTP and 0.8 μCi of [α-32P]GTP were added, and after 100 s, the samples were denatured with 30 μl of 1 M perchloric acid and 10 mM EDTA. The mixture was neutralized with 20 μl of 1 M Na2CO3, and the samples were centrifuged at 10,000 rpm for 5 min to remove the precipitate.
The nucleotides in the sample were analyzed on poly(ethylenimine)-cellulose thin-layer chromatography plates. Plates were pre-run in water and dried, and a 5-μl sample was spotted. The plates were run in 1 M LiCl. The dried plates were exposed to a Fujix BAS 1000 phosphorimager plate, which was read in a Typhoon 9400 Variable Mode imager and analyzed by ImageQuant software. The amount of GDP as a fraction of total nucleotide was determined. Because all free GDP is immediately converted to GTP by the regenerating system, the measured GDP must come from nucleotide bound to filaments. We repeated the experiment with FtsZ from E. coli (EcFtsZ) as a control.
RESULTS
TubZ-Ba Has a Very High GTPase Activity and Assembles into Double Helical Filaments-Fig. 1 shows the GTP hydrolysis rate for increasing concentrations of TubZ-Ba. The GTPase was linearly proportional to the TubZ-Ba concentration above a critical concentration of 0.2-0.4 μM (depending on the buffer), giving 24 GTP TubZ-Ba−1 min−1 (Fig. 1). The rate did not vary significantly with pH, being 21-25 GTP TubZ-Ba−1 min−1 at room temperature in most buffers (pH 6.5, 7.0, and 7.7 with 100 mM KAc and 5 mM MgAc). We have reported that the GTPase activity of E. coli FtsZ (EcFtsZ) is around 4-7 GTP FtsZ−1 min−1 at room temperature (10). Thus the GTPase of TubZ-Ba is 3-4-fold higher than that of EcFtsZ.
Negative stain EM showed that TubZ-Ba assembled into short, twisted two-stranded filaments and some larger bundles (Fig. 2). The filaments alternate between wider and straight segments that show the two strands and narrow edge views that show a gentle curvature. The flattened structure in the negative stain image suggests a three-dimensional structure in which the two-stranded ribbon forms a helix, with a diameter of ~7 nm and a pitch of 75 nm. Occasionally the ribbon showed a third strand. The assembly required GTP or a GTP analogue.
We then tested TubZ-Ba in crowding solution conditions that should mimic the conditions of the bacterial cytoplasm (11)(12)(13). When assembled in buffer containing 10% polyvinyl alcohol, TubZ-Ba retained its very high GTPase activity, about 18 GTP TubZ-Ba−1 min−1. Assembly, however, was substantially changed. Instead of the short, twisted two-stranded filaments, in polyvinyl alcohol TubZ-Ba assembled very long bundles of ~2-10 protofilaments (Fig. 2e).
GTPγS Can Support TubZ-Ba Filament Assembly, and a Tiny Amount Can Stabilize Assembly with GTP-GTPγS is a GTP analogue that can substitute for GTP in many GTP-binding proteins and is either slowly hydrolyzed or not hydrolyzed at all. When GTPγS was added to TubZ-Ba, it assembled into filaments similar to those assembled in GTP. However, the GTPγS filaments were much longer and tended to cluster into somewhat larger irregular bundles (Fig. 2c). We note that in rare cases the two-stranded filaments unwound and left lengths of single filaments that were apparently stable without the lateral contacts. Polymers assembled with 100 μM GTP plus 1 μM GTPγS appeared similar to those assembled with 20 μM GTPγS (Fig. 2d).
A light scattering assay showed that in 100 μM GTP, polymers rapidly assembled and immediately began to disassemble (Fig. 3a). The 5 μM TubZ-Ba would have consumed 50% of the 100 μM GTP in 25 s. The disassembly began even before this, so it is probably induced by the accumulation of GDP. Light scattering was back to the baseline after 80 s. When assembly was initiated with 1.5 μM GTPγS, the light scattering rose somewhat more slowly and remained at the plateau. With 1.5 μM GTPγS, we expect only 1.5 μM TubZ-Ba to assemble. When 100 μM GTP was added to this assembly at 400 s, the light scattering increased further and then decreased back to the original plateau. We believe that the initial assembly of TubZ-Ba in GTPγS formed filaments that were very stable. When GTP was added later, the remaining TubZ subunits assembled into filaments that were either separate from or attached to the end of the GTPγS filaments. These new filaments rapidly hydrolyzed the GTP and disassembled, leaving the stable GTPγS filaments. The GTPγS subunits, once assembled into their original polymers, apparently did not mix with the GTP subunits.
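The 25-s estimate above follows directly from the measured turnover; a quick back-of-the-envelope check, assuming the ~24 min−1 rate from Fig. 1:

```python
tubz_uM = 5.0            # TubZ-Ba concentration in the light scattering assay
turnover_per_min = 24.0  # GTP hydrolyzed per TubZ-Ba per minute (from Fig. 1)
gtp_uM = 100.0

hydrolysis_uM_per_s = tubz_uM * turnover_per_min / 60.0      # ~2 uM GTP per second
time_to_consume_half = (gtp_uM / 2.0) / hydrolysis_uM_per_s  # ~25 s
print(f"{time_to_consume_half:.0f} s to hydrolyze 50% of the GTP")
```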
We then tested assembly initiated by a mixture of 1.5 μM GTPγS plus 100 μM GTP. The light scattering rose to a value twice that with 1.5 μM GTPγS alone, and when it reached the plateau it decreased only very slowly. EM showed that the filaments assembled in 1.0 μM GTPγS plus 100 μM GTP were very similar to those assembled in 20 μM GTPγS (Fig. 2d).
To investigate this further, we assayed the rate of GTP hydrolysis for 5 μM TubZ-Ba, with 20, 100, and 500 μM GTP, and variable amounts of GTPγS. For this experiment we assayed the steady state GTP hydrolysis rate as the slope of the absorbance curve as soon as it became linear, typically from 100 to 300 s following initiation of assembly. As shown in Fig. 3b, the GTPase was completely inhibited at 1, 2, and 5 μM GTPγS for the three concentrations of GTP.
GMPCPP is a slowly hydrolyzable GTP analogue that supports assembly of tubulin and FtsZ. We found that it induced assembly of TubZ-Ba into bundles that were similar to those assembled in GTPγS. GMPCPP also inhibited the GTPase activity of TubZ-Ba, but only at much higher concentrations. GMPCPP needed to be approximately equimolar to GTP to completely inhibit the GTPase of 5 μM TubZ-Ba (Fig. 3c). These results suggest that GMPCPP binds TubZ-Ba with an affinity similar to that of GTP, while GTPγS binds with much higher affinity.
One possible mechanism for the inhibition of GTPase would be simple competitive binding to block GTP binding. In this case, a plot of 1/GTPase versus GTPγS or GMPCPP concentration should be linear. A linear plot was obtained for the case of FtsZ and GMPCPP (Fig. 3e), but both analogues gave non-linear curves for TubZ-Ba (Fig. 3d and data not shown). The non-linearity was suggestive of cooperativity, so we re-plotted the data in the form of a Hill plot. We assumed the equation log[Y/(1 − Y)] = n·log[M] + constant, where Y is the fractional inhibition of the GTPase activity, M is the concentration of GTPγS or GMPCPP, and n is the Hill coefficient. As shown in Fig. 3f, both GTPγS and GMPCPP gave linear Hill plots for TubZ-Ba. For GTPγS, the Hill coefficient was about 1.7 for the three GTP concentrations, and for GMPCPP it was 1.6 and 1.9 for 30 and 100 μM GTP. These values are all close to 2, so we take this as the measure of the cooperativity for the analogues inhibiting the GTPase. For comparison we tested the inhibition of the GTPase of FtsZ by GMPCPP. The plot of 1/GTPase versus GMPCPP was linear (Fig. 3e), and the Hill coefficient was 1 (Fig. 3f).
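A minimal sketch of this Hill analysis is shown below; the inhibition values are illustrative placeholders, not the data behind Fig. 3f.

```python
import numpy as np

# Fractional inhibition Y of the steady-state GTPase at several GTPgammaS
# concentrations (uM); illustrative values only
gtpgs_uM = np.array([0.3, 0.5, 0.8, 1.2, 2.0])
Y = np.array([0.17, 0.36, 0.59, 0.76, 0.90])

# Hill plot: log10[Y/(1 - Y)] vs. log10[analogue]; the slope is the Hill coefficient
x = np.log10(gtpgs_uM)
y = np.log10(Y / (1.0 - Y))
n_hill, intercept = np.polyfit(x, y, 1)
print(f"Hill coefficient ~ {n_hill:.1f}")  # ~2 for these illustrative points
```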
To explore the differences between GMPCPP and GTPγS, we tested how excess GDP would affect polymers of TubZ-Ba assembled with the GTP analogues. We had already observed that polymers assembled in GTP rapidly disassembled as the GTP was depleted and GDP accumulated (Fig. 3a). Polymers assembled in 20 μM GMPCPP showed a complex disassembly when a 10-fold excess of GDP was added. An initial rapid phase of disassembly was followed by a slow phase (Fig. 4a). However, for TubZ-Ba assembled in 20 μM GTPγS, 200 μM GDP had almost no effect. Significant filament disassembly was only obtained with 1-5 mM GDP (Fig. 4b). The disassembly induced by 5 mM GDP for the GTPγS assembly was similar to that produced by 0.2 mM GDP for GMPCPP assembly. This is consistent with the suggestion that TubZ-Ba has a much higher binding affinity for GTPγS than for GMPCPP, GTP, and GDP.
The Nucleotide in TubZ-Ba Filaments Is Almost Exclusively GDP-Romberg and Mitchison (9) confirmed that microtubules contained almost 100% GDP, but the nucleotide content of FtsZ filaments was only 20% GDP (9). We used their assay to determine the nucleotide content of TubZ-Ba filaments. TubZ-Ba was incubated with 20 μM GTP and 0.8 μCi of [α-32P]GTP in the presence of a GTP-regenerating system (see details under "Experimental Procedures"). The GTP-regenerating system rapidly converts all free GDP to GTP, so the only GDP that exists in the assembly mixture is that which is protected from regeneration by being bound to polymers. Fig. 5 shows the GDP in increasing concentrations of TubZ-Ba protein, with a comparable assay of FtsZ as a control. For FtsZ about 30% of the nucleotide was GDP, similar to the 20% value determined previously by Romberg and Mitchison (9). (The previous measurement was at pH 6.5, 50 mM potassium, and ours was at pH 7.7, 100 mM potassium.) In contrast, in the TubZ-Ba polymers 95% of the nucleotide was GDP. In this respect TubZ-Ba polymers are similar to microtubules.
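A sketch of how such spot intensities translate into a polymer GDP fraction is given below; the critical concentration and band intensities are assumed values chosen only for illustration.

```python
CC_UM = 0.3          # assumed apparent critical concentration of TubZ-Ba (uM)
TOTAL_NUC_UM = 20.0  # total guanine nucleotide in the reaction (uM)

def polymer_gdp_fraction(tubz_uM, gdp_spot, gtp_spot):
    """Estimate the fraction of polymer-bound nucleotide that is GDP.
    Because free GDP is instantly regenerated to GTP, all GDP seen on the
    TLC plate is attributed to filament subunits (one nucleotide each)."""
    frac_gdp_total = gdp_spot / (gdp_spot + gtp_spot)  # GDP fraction of all nucleotide
    gdp_uM = frac_gdp_total * TOTAL_NUC_UM             # GDP concentration in the sample
    polymer_uM = max(tubz_uM - CC_UM, 0.0)             # subunits assembled into filaments
    return gdp_uM / polymer_uM if polymer_uM else float("nan")

# Illustrative phosphorimager band intensities for 10 uM TubZ-Ba
print(f"{100 * polymer_gdp_fraction(10.0, gdp_spot=4600, gtp_spot=5400):.0f}% GDP in the polymer")
```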
Assembly and GTPase of TubZ-Bt-We did a similar analysis of TubZ-Bt, the tubulin-like protein from the plasmid pBtoxis of B. thuringiensis (3,6,14). Negative stain EM showed that assembly in 100 μM GTP produced filaments similar to those of TubZ-Ba, although the two-stranded structure was less obvious (Fig. 6a). Assembly in 1 μM GTPγS gave filaments that were much longer and somewhat more bundled, and showed more clearly the two-stranded structure (Fig. 6b). Assembly in 100 μM GTP plus 1 μM GTPγS gave much larger bundles (Fig. 6c). In crowding conditions (10% polyvinyl alcohol), TubZ-Bt assembled into long bundles similar to TubZ-Ba (not shown).
The biochemical properties of TubZ-Bt were remarkably similar to those of TubZ-Ba. TubZ-Bt had a very high GTPase activity, 16-18 GTP TubZ-Bt−1 min−1, and a critical concentration of 0.8 μM (Fig. 6d). The nucleotide in TubZ-Bt filaments was almost exclusively GDP, similar to TubZ-Ba and tubulin. Light scattering showed a very rapid assembly in 100 μM GTP, followed by disassembly as the GTP was consumed (Fig. 6e). In GTPγS the assembly occurred more slowly, but when the plateau was reached at ~80 s, it was stable. In the mixture of 100 μM GTP plus 1 μM GTPγS assembly occurred in a first phase at an intermediate rate to 40 s, followed by a steady increase in scattering. The second phase is probably due to the increased bundling seen by EM (Fig. 6c). Finally, we found that the GTPase was strongly inhibited by very small amounts of GTPγS. In 100 μM GTP, the GTPase of 3 μM TubZ-Bt was reduced by 80% by 1 μM GTPγS (not shown). The plot of 1/GTPase versus GTPγS was non-linear (not shown). The Hill plot was linear with a Hill coefficient of 1.9 (Fig. 6f).
DISCUSSION
The assembly of TubZ-Ba and TubZ-Bt showed two aspects of cooperativity, which may not be related to each other. The first aspect of cooperativity is the characteristic critical concentration (Cc) at steady state. Below the Cc there is no polymer, and above the Cc all excess protein assembles into polymer. Actin and microtubules are among the best characterized cooperative assemblies, and they show a classic Cc. FtsZ also shows a Cc, although the basis for cooperativity is still obscure for this one-subunit-thick filament (15).
The second aspect of cooperativity is the Hill coefficient of ~2 found for the inhibition of GTP hydrolysis by GTPγS and GMPCPP. This is an unusual application of the Hill equation, but we think it is informative. The normal application of the Hill equation is to describe binding of a ligand to multiple sites on an enzyme or other substrate. In our application we used it to describe the inhibition of GTPase by GTPγS, in particular how the inhibition depends on the concentration of GTPγS. Steady state GTP hydrolysis probably involves several steps. By analogy with tubulin, and probably also FtsZ, exchange of nucleotide probably occurs only on free subunits. Following a hydrolysis event the subunit must disassemble and exchange its GDP for GTP. The subunit can then reassemble and undergo another round of hydrolysis. We suggest that the inhibition of steady state GTP hydrolysis by GTPγS may involve a block in the disassembly stage. The Hill coefficient of 2 then suggests that to block disassembly there need to be two adjacent subunits with GTPγS. This may in turn be related to the two-stranded structure of the filaments, which is especially obvious for TubZ-Ba. Note that for FtsZ, which assembles primarily one-stranded protofilaments, the Hill coefficient for GMPCPP inhibition was 1.
A Possible Capping Mechanism-The TubZ-Ba and TubZ-Bt polymers share several features with microtubules that may indicate a capping mechanism related to dynamic instability. Most important, nearly 100% of the nucleotide in the polymer is GDP for both microtubules and TubZ. Because the GDP polymer is unstable and disassembles, a mechanism is needed to stabilize the polymers and promote growth. This has been explored in detail for microtubules, where a small GTP cap at each end is thought to stabilize the GDP tubulin in the core (16,17). A similar capping mechanism for TubZ is suggested by the fact that the filaments are maintained in solution, despite being nearly 100% TubZ-GDP.
Capping is also suggested by the substoichiometric stabilization of polymers by GTPγS. The GTPase of 5 μM TubZ-Ba in 100 μM GTP was inhibited 50 and 100% by 1 and 2 μM GTPγS (Fig. 3b); this correlated with the almost complete stabilization of polymers at 1.5 μM GTPγS (Fig. 3a). Assuming that GTPγS binds much more tightly than GTP, the polymers from this mix would contain 1.5 μM GTPγS and 3.5 μM GTP (a more accurate analysis would require knowledge of the KD for binding each nucleotide). That the 3.5 μM GTP subunits are mostly stabilized by the minority GTPγS subunits implies a cooperative mechanism for filament stabilization, or capping. The Hill coefficient of ~2 further suggests that the cap might consist of two GTP or GTPγS subunits at the end of the two-stranded filament. This is at present a preliminary speculation, but further study may shed light on mechanisms common to TubZ and tubulin.
GTPγS does not support assembly of FtsZ by itself. However, when protofilament bundles were assembled with GTP in the presence of Ca2+, GTPγS could stabilize these polymers (18). A capping mechanism was suggested to explain this stabilization (18). This mechanism appears to be quite different from what we observe for TubZ, where GTPγS can generate assembly by itself and is incorporated into the polymers.
A capping mechanism generates dynamic instability in microtubules, but TubZ-Bt behaves very differently from microtubules in vivo. Microtubules show dynamic instability at both ends, elongating for several μm and then shortening by several μm. TubZ-Bt showed a very different treadmilling behavior when assembled in bacteria (3). The filament grew continually at one end and shrank at the other. It is possible that the long filament of TubZ-Bt seen in bacteria has a GTP cap at the growing end, and GDP on all other subunits, including the shrinking minus end. Continuous growth at the GTP end and disassembly at the GDP end would generate treadmilling. An alternative possibility is that there is a GTP cap at each end; at one end the growth phase greatly exceeds shrinking, and at the other end the shrinking phase exceeds growth. This kind of differential dynamic instability can produce treadmilling in microtubules (17). In this case, GTPγS would stabilize the polymers by placing permanent caps at both ends.
Divergent Sequences but Identical Biochemistry-The weak sequence similarity of TubZ-Bt to FtsZ suggested a possible role in plasmid partitioning (3,4,6,14). Although TubZ-Ba was originally suggested to function in DNA replication (hence the name RepX), we believe that it functions like TubZ-Bt in plasmid partitioning. However, TubZ-Ba showed only 21% sequence identity to TubZ-Bt, suggesting that the two proteins may have functions as different as those of FtsZ and tubulin (1).
Our in vitro analyses show, however, that TubZ-Ba and TubZ-Bt have remarkably similar biochemical properties. How can one reconcile the extremely divergent sequences with the apparently identical biochemistry? One possibility would be that they had separate origins as divergent genes, and acquired the virtually identical biochemistry by convergent evolution. A second is that they had a common origin and somehow diverged rapidly in the different Bacillus species.
A comparison of the two plasmids favors the second possibility, a common origin followed by divergence of sequence. 29 of 125 predicted pBtoxis proteins showed sequence similarity to predicted proteins from pXO1 (4). Several groups of these genes map to the two plasmids with similar spacing, suggesting a common ancestry. In particular, the TubZ-Ba and TubZ-Bt genes are both located just before the likely replication origin (4). Why then does FtsZ have 40-50% sequence identity across almost all bacterial and archaeal species, yet TubZ-Ba and TubZ-Bt share only 21% sequence identity? We suggest that the function of TubZ in plasmid partition may be much less stringent than the function of FtsZ in cytokinesis, permitting the greater divergence.
Apart from the TubZ proteins of Bacillus plasmids, a number of very divergent FtsZ/tubulin-like sequences have been identified in the genomes of various archaea (2, 3). These do not appear to be involved in cell division, because each species has one or more conventional FtsZs. It may be that these divergent FtsZ's function primarily to assemble long cytoskeletal filaments, as a part of some still unknown mechanism.
Plasmid Partitioning-Plasmid partitioning can be achieved not only with very divergent tubulin homologs, but with at least two other protein systems unrelated to tubulin. The ParA/SopA system partitions a wide range of plasmids, and also bacterial chromosomes (19-21). Many of the ParA proteins from different plasmids and bacterial species show only 25-30% sequence identity to ParA of plasmid P1. Another well-studied partitioning system is based on the actin homolog ParM (22,23). A second actin-homolog partitioning system is the newly discovered AlfA (24). AlfA forms long, dynamic filaments in the cytoplasm, and appears to function like ParM. Yet the amino acid sequence identity between AlfA and ParM is only 15%, less even than the 21% identity between TubZ-Ba and TubZ-Bt.
It is remarkable that a plasmid partitioning machine can apparently be constructed from three completely unrelated proteins. The one thing the proteins have in common is the ability to polymerize into long, thin filaments, which can apparently also bind a plasmid at each end. It may be that plasmid partitioning requires only a protein that can assemble a long filament, and some mechanism for attaching the plasmid to it.
Exploring the roles and potential therapeutic strategies of inflammation and metabolism in the pathogenesis of vitiligo: a mendelian randomization and bioinformatics-based investigation
Introduction: Vitiligo, a common autoimmune acquired pigmentary skin disorder, poses challenges due to its unclear pathogenesis. Evidence suggests inflammation and metabolism’s pivotal roles in its onset and progression. This study aims to elucidate the causal relationships between vitiligo and inflammatory proteins, immune cells, and metabolites, exploring bidirectional associations and potential drug targets. Methods: Mendelian Randomization (MR) analysis encompassed 4,907 plasma proteins, 91 inflammatory proteins, 731 immune cell features, and 1400 metabolites. Bioinformatics analysis included Protein-Protein Interaction (PPI) network construction, Gene Ontology (GO), and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis. Subnetwork discovery and hub protein identification utilized the Molecular Complex Detection (MCODE) plugin. Colocalization analysis and drug target exploration, including molecular docking validation, were performed. Results: MR analysis identified 49 proteins, 39 immune cell features, and 59 metabolites causally related to vitiligo. Bioinformatics analysis revealed significant involvement in PPI, GO enrichment, and KEGG pathways. Subnetwork analysis identified six central proteins, with Interferon Regulatory Factor 3 (IRF3) exhibiting strong colocalization evidence. Molecular docking validated Piceatannol’s binding to IRF3, indicating a stable interaction. Conclusion: This study comprehensively elucidates inflammation, immune response, and metabolism’s intricate involvement in vitiligo pathogenesis. Identified proteins and pathways offer potential therapeutic targets, with IRF3 emerging as a promising candidate. These findings deepen our understanding of vitiligo’s etiology, informing future research and drug development endeavors.
Introduction
Vitiligo is a common autoimmune acquired pigmentary skin disorder, affecting approximately 0.5%-2% of the global population (Eidsmo, 2022).Despite its prevalence, the exact etiology and pathogenesis of vitiligo remain elusive, involving a complex interplay of genetic predisposition and environmental triggers (Wang et al., 2021).While depigmented patches on the skin are the hallmark of vitiligo, the underlying mechanisms driving the disease extend beyond the visible symptoms.Recent advancements in research have illuminated the critical roles of inflammation and metabolic processes in the pathogenesis of vitiligo (Lyu and Sun, 2022).
However, past research has primarily consisted of observational studies, constrained by sample size and confounding factors, resulting in conflicting findings in some instances.For example, Laddha et al. (2012) reported elevated levels of TNFα in vitiligo patients compared to the control group, while several other studies have reached the opposite conclusion, finding no significant difference in TNFα concentration compared to normal controls (Pichler et al., 2009;Singh et al., 2012;Camara-Lemarroy and Salas-Alanis, 2013).Furthermore, although many observational studies have yielded relatively consistent results, such as increased expression of CXCL10 in vitiligo patients compared to the control group (Speeckaert et al., 2023), they often only provide correlational conclusions, making it challenging to establish causal relationships with vitiligo.
Mendelian randomization (MR) is a genetic epidemiological research method that utilizes single nucleotide polymorphisms (SNPs) as instrumental variables (IVs) (Burgess et al., 2019).It infers potential causal relationships based on Mendel's laws of inheritance, offering several advantages over observational studies.Genetic variations are determined at conception, preceding disease development, and are generally not influenced by confounding factors such as postnatal factors and social environment.Therefore, causal relationships derived from MR studies exhibit more credible temporality, reducing confounding bias and minimizing the likelihood of reverse causation (Smith et al., 2007).This study employs a bidirectional two-sample MR research design, incorporating extensive datasets that encompass various biological factors, including inflammatory proteins, immune cell characteristics, and metabolites.Through bioinformatic analysis, we aim to elucidate the roles of the identified core proteins in cellular pathways and functions, providing potential targets for vitiligo treatment.Ultimately, through drug target exploration and molecular docking validation, we seek to propose potential therapeutic strategies based on biomarkers (Davies et al., 2018).
Study design
To investigate the role of inflammation and metabolism in the pathogenesis of vitiligo, and to identify potential pharmacological targets and biomarkers, we employed a bidirectional two-sample MR analysis along with bioinformatics analysis, using primary data sourced from genome-wide association studies (GWAS) (Uitterlinden, 2016).Please refer to Figure 1 for detailed procedures.
Data sources
The data can be broadly categorized into exposure data and outcome data.The outcome data pertaining to vitiligo is sourced from the latest and most comprehensive Finnish database, R10 version (https://www.finngen.fi/en/access_results).Exposure data primarily consists of two major components: inflammation and metabolism.For metabolism analysis, we have incorporated 1,091 blood metabolites and 309 metabolite ratios obtained from the GWAS catalog (https://www.ebi.ac.uk/gwas/studies/GCST90199621-902010209).Inflammation analysis is further subdivided into three components: 4,907 plasma proteins, 91 inflammatory proteins, and 731 immune cells.All GWAS data included in this study for MR analysis are of European ancestry.Refer to Supplementary Table S1 for detailed information regarding the data.
Plasma protein screening
We utilized the circulating protein expression level GWAS study from deCODE Genetics (35,559 Icelanders,4,907 proteins) to identify protein quantitative trait loci (pQTL).However, due to the presence of numerous proteins in plasma unrelated to inflammation and immunity, further filtering is necessary.We utilized the Gene Set Enrichment Analysis (GSEA) website (link: https://www.gsea-msigdb.org/gsea)to download human-relevant gene sets (H, C1-C8) from the Molecular Signatures Database.Subsequently, we filtered these gene sets using the keywords 'inflammation' and 'immunity,' resulting in 5,886 genes related to inflammation and immunity, as detailed in Supplementary Table S11.Next, we conducted an intersection operation between these genes and the 4,907 plasma proteins obtained from the deCODE dataset, yielding 925 proteins.Our focus was primarily on proteins associated with inflammation and immunity, thus completing the screening of plasma proteins.
Merging with GWAS catalog for final protein selection
Merging the 925 proteins selected from Section 2.2.1, which are associated with inflammation and immunity, with the 91 inflammatory proteins from the GWAS catalog (https://www.phpc.cam.ac.uk/ceu/proteins), resulting in a final set of 1,016 proteins included in the MR analysis.
Genetic instrumental variable selection
In this section, we employed a rigorous process for the selection of genetic instrumental variables to ensure the robustness and reliability of our study.The steps involved in this selection are outlined below.
Identification of SNPs significantly associated with the phenotype
We initially identified SNPs that exhibited a significant association with the phenotype, utilizing a stringent threshold (P < 5E-06) (He et al., 2024).All GWAS datasets included in this study provided p-values for the association between SNPs and exposure, similar to the p-value.exposuredisplayed in Supplementary Tables S2-S4.
Integration, concordance, and correction of palindromic SNPs
We integrated and assessed the concordance of the exposure-outcome dataset. Additionally, we corrected palindromic SNPs with ambiguous strands based on allele frequency information, ensuring accurate alignment and interpretation. This step was primarily implemented using the "harmonise_data" function within the "TwoSampleMR" R package. It automatically removes SNPs with palindromic sequences (e.g., where the effect allele is base C and the other allele is base G) during the final MR analysis.
FIGURE 1 Study design. GO, gene ontology; IVs, instrumental variables; KEGG, Kyoto Encyclopedia of Genes and Genomes; MR, Mendelian randomization; PPI, protein-protein interaction.
Assessment of instrumental variable (IV) strength
To evaluate the strength of the instrumental variables, we calculated the F-value. We excluded potentially weak IVs by setting a threshold (F > 10) to mitigate bias between the instrumental variables and exposure factors. F is computed from R^2, the proportion of variation in the exposure explained by the SNP, and N, the number of participants in the GWAS sample; R^2 is in turn estimated from β, the estimated effect size of the SNP, SE, the standard error of the effect estimate, and EAF, the effect allele frequency.
Refer to Supplementary Tables S2-S4 for detailed information regarding the SNP data.
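Because the displayed formula did not survive extraction, the sketch below uses one commonly applied form of the per-SNP R^2 and F-statistic; the exact expression in the original text may differ, and the SNP identifiers and statistics are placeholders.

```python
def f_statistic(beta, se, eaf, n):
    """Approximate per-SNP F-statistic for instrument strength. R^2 is the
    variance in the exposure explained by the SNP, estimated from the effect
    size, its standard error, the effect allele frequency and the sample size."""
    maf_term = 2.0 * eaf * (1.0 - eaf)
    r2 = (beta ** 2 * maf_term) / (beta ** 2 * maf_term + se ** 2 * n * maf_term)
    return r2 * (n - 2) / (1.0 - r2)

# Keep only SNPs with F > 10 (rsIDs and values are illustrative)
snps = [
    {"rsid": "rs_example_1", "beta": 0.12, "se": 0.02, "eaf": 0.30, "n": 35559},
    {"rsid": "rs_example_2", "beta": 0.03, "se": 0.02, "eaf": 0.45, "n": 35559},
]
strong = [s["rsid"] for s in snps
          if f_statistic(s["beta"], s["se"], s["eaf"], s["n"]) > 10]
print(strong)
```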
MR analysis and sensitivity analysis
In this study, analysis was conducted using the "TwoSampleMR" and "MRPRESSO" packages in R 4.1.0 software. The primary method employed was the Inverse Variance Weighted (IVW) method to calculate the odds ratio (OR) and its 95% confidence interval (CI), assessing the potential causal relationship between exposure and outcome. Additionally, supplementary analyses were performed using MR-Egger regression and the Weighted Median Method (WME), with the Wald Ratio method applied for exposures with only one SNP (Bowden et al., 2015; Bowden et al., 2016; Perry et al., 2021). Subsequently, sensitivity analyses were conducted to ensure the validity and robustness of the MR analysis results. For heterogeneity assessment, Cochran's Q was employed to test SNP heterogeneity. If p < 0.05, indicating heterogeneity, a random-effects model was used; otherwise, a fixed-effects model was applied. To assess horizontal pleiotropy, the MR-Egger method and the MRPRESSO (MR pleiotropy residual sum and outlier) method were jointly utilized. Exposure data exhibiting horizontal pleiotropy were removed to ensure the reliability of conclusions. To address the issue of multiple testing, the Benjamini-Hochberg method was employed, which controls the false discovery rate (FDR). The significance threshold was set at p < 0.05. An exposure with both the original p-value and the FDR-corrected p-value less than 0.05 is considered to have a significant causal relationship with vitiligo, while an exposure with an original p-value less than 0.05 but an FDR-corrected p-value greater than 0.05 is considered to have a potential causal relationship with vitiligo.
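A compact sketch of the two core calculations described above (the fixed-effect IVW estimate and the Benjamini-Hochberg FDR adjustment) is given below; it is a simplified re-implementation for illustration, not the TwoSampleMR code used in the study.

```python
import numpy as np
from scipy import stats

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance weighted estimate from per-SNP Wald
    ratios (beta_out / beta_exp), weighted by the outcome precision."""
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    weights = beta_exp ** 2 / se_out ** 2
    beta_ivw = np.sum(beta_exp * beta_out / se_out ** 2) / np.sum(weights)
    se_ivw = np.sqrt(1.0 / np.sum(weights))
    p_value = 2.0 * stats.norm.sf(abs(beta_ivw / se_ivw))
    return np.exp(beta_ivw), p_value  # odds ratio and p-value (binary outcome assumed)

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (false discovery rate)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty_like(adjusted)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out
```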
Bioinformatics analysis
Following the outlined procedures, we identified a total of 49 inflammation-immune-related proteins causally associated with vitiligo.Subsequent bioinformatics analysis was executed on this protein set.
Protein-Protein Interaction (PPI) network construction
Utilizing the STRING database (https://string-db.org/),we retrieved and validated the aforementioned 49 inflammationimmune-related proteins.Leveraging known physical interactions and functional relationships, we constructed a comprehensive Protein-Protein Interaction (PPI) network (Szklarczyk et al., 2023).
GO and KEGG analysis
Conducting Gene Ontology (GO) functional enrichment analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis on the 49 proteins offered additional insights into their roles across biological processes (BP), cellular components (CC), molecular functions (MF), and pathways.
Subnetwork discovery and identification of hub proteins
To unveil functional modules and hub regulatory proteins within the PPI network, we employed the Molecular Complex Detection (MCODE) plugin in Cytoscape software for subnetwork discovery.We set parameters (degree cutoff = 2, node score cutoff = 0.2, k-core = 2, and max.depth = 100) for optimal results (Menon and Elengoe, 2020).
Colocalization analysis
For the six hub proteins identified by MCODE, we performed co-localization analysis using the R package coloc (Wallace, 2021).Bayesian co-localization assesses the probability that a protein and vitiligo share the same SNP, mitigating bias introduced by linkage disequilibrium (LD) in MR analysis (Giambartolomei et al., 2014).In co-localization analysis, five hypotheses were considered.
Particular attention was given to the H4 hypothesis, and when PP.H4 exceeded 0.75, it was considered strong evidence of colocalization.
Exploration of drug targets and molecular docking validation
Through successive filtering, we identified hub proteins with promising drug target potential.Records of past or ongoing clinical drug development projects for these proteins were retrieved from the Therapeutic Target Database (http://db.idrblab.net/ttd/)and ClinicalTrials.gov(https://clinicaltrials.gov/).To assess the binding affinity and interaction patterns between the candidate drug/small molecule and its target, molecular docking validation was conducted using Autodock software (Morris et al., 2008).Two-dimensional protein structures were obtained from the Protein Data Bank (PDB) (https:// www.rcsb.org/),and the chemical structures of drugs were searched on PubChem (https://pubchem.ncbi.nlm.nih.gov/)(Wang et al., 2017).
Results of MR analysis and sensitivity analysis
Through conducting MR analysis on multiple plasma proteins, inflammatory proteins, immune cell features, and metabolites related to inflammation, immunity, and metabolism, we have identified a series of biomarkers causally associated with vitiligo.Detailed results can be found in Figure 2; Supplementary Tables S5-S7.Sensitivity analysis results can be found in Supplementary Tables S8-S10.
Proteins
From the initial screening of 925 inflammation-immunerelated proteins and 91 inflammatory proteins, a total of 49 proteins causally related to vitiligo were identified through MR analysis.No horizontal pleiotropy was observed among these proteins.After FDR correction, the p values of six proteins remain less than 0.05: IRF2, IRF3, ISG15, PGM2, ST3GAL1, GNLY.
Immune cell
Through MR analysis of 731 immune cell features, we identified 45 immune cell features causally associated with vitiligo.
Due to the presence of horizontal pleiotropy in six features, they were excluded from the final results, resulting in 39 immune cell features.After FDR correction, the p values of eight immune cell phenotypes remain less than 0.05: CD8 on Effector Memory CD8 + T cell, CD20 on IgD-CD38 − B cell, CD25 on CD39 + CD4 + T cell, CD39 + CD4 + T cell %T cell, CD39 + CD8 + T cell Absolute Count, CD39 + CD8 + T cell %T cell, CD28 + CD45RA-CD8dim T cell %T cell, and BAFF-R on CD20 − CD38 − B cell.
Metabolites and metabolite ratios
MR analysis of 1,400 metabolites and metabolite ratios revealed 61 causal relationships with vitiligo. Two were excluded due to horizontal pleiotropy, resulting in a final set of 59 metabolites and metabolite ratios. After FDR correction, the p values of the 59 metabolites and metabolite ratios are all greater than 0.05, suggesting potential causal relationships with vitiligo.
FIGURE 2 MR analysis illustrates the causal relationships between inflammatory-immune-related proteins, immune cells, metabolites, and metabolite ratios with vitiligo. (A) The volcano plot displays the causal relationships between 49 inflammatory-immune-related proteins and vitiligo. However, certain protein names, including TNFRSF11B, TNFSF12, SELL, TLR3 (all with odds ratios less than 1), were not displayed due to overlapping positions, represented by gray circles; (B) The volcano plot displays the causal relationships between 39 immune cell features and vitiligo; (C) The forest plot presents the causal relationships between 19 metabolite ratios and vitiligo; (D) The forest plot presents the causal relationships between 40 metabolites and vitiligo.
Reverse MR analysis
Using vitiligo as the exposure and the aforementioned 49 proteins, 39 immune cell features, and 59 metabolites and ratios as outcomes, we analyzed for bidirectional associations.The results revealed four proteins, three metabolites and ratios, and one immune cell phenotype exhibiting bidirectional causal relationships.Detailed results can be found in Figure 3.
PPI analysis results
We subjected the 49 proteins to Protein-Protein Interaction (PPI) analysis using the STRING website, with a minimum required interaction score set to high confidence (0.700).Under this criterion, we identified interactions among 19 proteins, and these relationships are detailed in Figures 4A, B. Notably, TNF and CXCL10 had the highest number of connections with other proteins.
GO and KEGG analysis
We conducted GO and KEGG analyses on the 49 proteins through the STRING website.The results revealed the most significant BP as the immune system process, CC primarily located in the extracellular region, and MF involving signaling receptor binding.Additionally, KEGG pathway analysis highlighted the most significant pathway as cytokine-cytokine receptor interaction.These findings collectively underscore the importance of these proteins in the immune system.For a more detailed analysis, please refer to Figure 5.
Results of subnetwork discovery and identification of hub proteins
Utilizing Cytoscape's MCODE plugin, we identified two subnetworks comprising six hub proteins: CD86, granzyme B (GZMB), selectin L (SELL), toll-like receptor 3 (TLR3), interferon regulatory factor 3 (IRF3), and ISG15. These proteins may play more pivotal regulatory roles. For further details, please refer to Figures 4C, D.
Results of colocalization analysis
Only IRF3 among the six hub proteins passed the colocalization analysis (PP.H4 > 0.75).Detailed results can be found in Figure 6.However, it is noteworthy that a negative colocalization result does not necessarily imply the ineffectiveness of the MR analysis.
Results of exploration of drug targets and molecular docking validation
The molecular structure of Piceatannol (Compound CID: 667639) was obtained from the PubChem compound database (https://pubchem.ncbi.nlm.nih.gov/). The 3D coordinates of the protein IRF3 (PDB code: 3QU6; resolution: 2.3 Å) were downloaded from the Protein Data Bank (PDB) (http://www.rcsb.org/). Molecular docking results indicate that Piceatannol binds to IRF3 through visible hydrogen bonds and strong electrostatic interactions. Piceatannol successfully occupies the hydrophobic pocket of IRF3. The binding energy is −7.293 kcal/mol, suggesting a highly stable binding. Detailed results can be found in Figure 7.
Discussion
In this study, through MR analysis, we validated the causal relationships between 49 inflammation-immune-related proteins and vitiligo. Notably, proteins such as IL-17C, CXCL10, NKR2B4 (CD244), and TNF receptor superfamily member 11b (TNFRSF11B) exhibited bidirectional causality with vitiligo. GO enrichment analysis unveiled the involvement of these proteins in multiple biological processes, including inflammatory responses, immune regulation, and positive regulation of interferon-gamma production. Additionally, they were associated with cellular components such as the extracellular region and vesicles, as well as molecular functions like receptor binding, receptor ligand activity, and cytokine receptor binding. KEGG analysis further underscored the significance of the Cytokine-cytokine receptor interaction pathway. PPI analysis revealed the interplay among these proteins, with TNF and CXCL10 showing the highest connectivity. The MCODE plugin identified six hub proteins, including CD86, GZMB, SELL, TLR3, IRF3, and ISG15. IRF3, supported by co-localization analysis, was associated with vitiligo and holds potential as a therapeutic target. Drug target exploration suggested that the small molecule Piceatannol could serve as an inhibitor for IRF3, and molecular docking validated the stable affinity between them. Further research is required to ascertain whether Piceatannol can effectively treat vitiligo by inhibiting IRF3. After conducting MR analysis on 1,400 metabolites and metabolite ratios, we confirmed 40 metabolites and 19 metabolite ratios causally linked to vitiligo, including 11 related to Bilirubin. An earlier study (2018) revealed a significant decrease in serum Heme Oxygenase-1 (HO-1) and its metabolites, including Bilirubin, CoHb, and iron concentrations, in vitiligo patients compared to the healthy control group. They successfully controlled the progression of vitiligo by using an HO-1 agonist to restore the functionality of regulatory T cells (Tregs). This finding suggests that HO-1 might be a potential therapeutic target for vitiligo. Based on our research results, we speculate that the protective effect of HO-1 on vitiligo is likely closely associated with Bilirubin. Additionally, we identified causal relationships between 731 immune cell features and vitiligo. We confirmed 39 immune cell features causally linked to vitiligo, with CD8 on Effector Memory CD8+ T cells showing the highest significance (p = 9.58E-05). In addition to CD8+ T cells, we should also pay attention to other immune cells that may potentially have a protective effect against vitiligo. For instance, the presence of CD66b on Granulocytic Myeloid-Derived Suppressor Cells (p = 1.61E-02) has drawn our attention. This is consistent with the findings of Douguet et al. (2018), who utilized a transgenic mouse model carrying the ret oncogene (Ret mice) that develops a spontaneous metastatic melanoma and observed a reduction in the number of Myeloid-Derived Suppressor Cells (MDSCs) at the primary tumor site in mice with vitiligo. This suggests that MDSCs may play a protective role in the development of vitiligo to some extent. It is intriguing to note the close associations between MDSCs and various inflammatory proteins and metabolites identified in our study. For instance, Tran et al. (2020) found that Bilirubin enhances the recruitment of MDSCs and suppresses the activities and functions of T cells in blood in a sepsis mouse model. According to Lu et al. (2021), upregulation of CXCL10 in a murine renal cancer model was associated with a reduction in the frequency and immunosuppressive activity of MDSCs. Additionally, Cheng et al.'s research (Cheng et al., 2020) indicated that cGAMP, by stimulating the cGAS-cGAMP-STING-IRF3 pathway, decreased the quantity of MDSCs, suggesting a potential inhibitory role of IRF3 in regulating MDSC numbers. Interestingly, our findings suggest a protective role of both MDSCs and Bilirubin against vitiligo, while IRF3 and CXCL10 may potentially increase the risk of vitiligo occurrence.
Current research suggests that vitiligo results from the combined effects of genetic factors (approximately 80%) and environmental stressors (about 20%) (Bergqvist and Ezzedine, 2021). Under this interplay, melanocytes in vitiligo patients are more susceptible to oxidative stress, leading to cellular damage (Jadeja et al., 2020; Chang and Ko, 2023). This process prompts melanocytes to release exosomes containing specific antigens, activating CD8+ T cells to produce various cytokines such as IFNγ, TNF, and GZMB. Notably, IFNγ induces the secretion of CXCL9 and CXCL10 by keratinocytes, where CXCL10, through interaction with CXCR3B, induces apoptosis of melanocytes (Tulic et al., 2019; Su et al., 2020; Bergqvist and Ezzedine, 2021). Our research findings robustly confirm a bidirectional positive causal relationship between CXCL10 and vitiligo. On one hand, the increase in CXCL10 contributes to the development of vitiligo, and on the other hand, the presence of vitiligo leads to a significant upregulation of CXCL10 expression. This discovery aligns with previous studies, emphasizing CXCL10 as a potential effective target for treating vitiligo. Our study also addresses some controversies in previous research, confirming a causal relationship between TNF and vitiligo, indirectly supporting the rationale for using TNF inhibitors in vitiligo treatment (Kemp, 2015). However, we also note that some tumor necrosis factor inhibitors may induce the onset of vitiligo, a phenomenon observed in patients with various other conditions, such as hidradenitis suppurativa, ankylosing spondylitis, Crohn's disease, and psoriasis (Dunn et al., 2019; Anthony et al., 2020; Phan et al., 2020). This paradoxical result prompts further consideration. Interestingly, through MR analysis, we identified TNFRSF11B, TNF alpha induced protein 3 (TNFAIP3), and TNF superfamily member 12 (TNFSF12) as potentially protective factors against vitiligo, which may explain why some patients experience depigmentation after using TNF inhibitors.
In our study, we made a notable discovery, revealing for the first time the potential involvement of IRF3 in the pathogenic mechanism of vitiligo.Previous research by Sen et al. (2019) indicated that the inhibition of DNA damage repair proteins poly ADP-ribose polymerase (PARP) and checkpoint kinase 1 (CHK1) significantly increases PD-L1 expression in patients with small cell lung cancer (SCLC), thereby activating the STING/TBK1/IRF3 immune pathway.Activation of this pathway leads to the release of chemokines such as CXCL10, inducing the activation of cytotoxic T lymphocytes.We hypothesize that in vitiligo, IRF3 might contribute to the development of the condition by promoting the release of CXCL10.Furthermore, findings from the study by Dang et al. (2004) further support the importance of IRF3 in immune regulation.Their experiments in a mouse model of septic shock revealed that Piceatannol exhibits inhibitory effects by effectively blocking lipopolysaccharide (LPS)-mediated IRF3 activation.This inhibitory effect, achieved by downregulating the expression of various inflammatory factors, successfully suppressed the occurrence of inflammation.These results provide additional support to our discovery, suggesting that IRF3 may serve as a crucial node in the regulation of inflammation.Piceatannol, acting as an inhibitor of IRF3, may play a role in modulating the pathogenic mechanism of vitiligo.Although our molecular docking validation demonstrated the affinity between Piceatannol and IRF3, further in-depth research and validation are necessary to explore the therapeutic potential of Piceatannol in vitiligo.
Our research has certain limitations that need to be acknowledged.Firstly, we focused solely on the causal relationships between peripheral blood protein levels, immune cells, metabolites, and vitiligo, without considering skin tissue.This limitation arises from the unavailability of large, publicly accessible GWAS datasets specifically related to skin tissue.Secondly, our study exclusively covers the European population, potentially restricting the generalizability of conclusions to other ethnic groups.
In summary, our study revealed causal relationships between 49 proteins, 39 immune cell features, and 59 metabolites with vitiligo.We addressed some controversies present in traditional observational studies and conducted in-depth exploration.Notably, we identified IRF3 as a potential novel therapeutic target for vitiligo.These research findings provide crucial insights for a deeper understanding of the pathogenic mechanisms of vitiligo and the development of future therapeutic strategies.
FIGURE 3 The forest plot illustrates the results of the reverse Mendelian Randomization analysis when vitiligo is considered as the exposure.
FIGURE 4 (A) The PPI graph created using the STRING website; (B) Further processing of the 19 proteins using Cytoscape software; (C,D) Identification of two subnetworks comprising six hub proteins using the MCODE plugin.
FIGURE 5 (A) Results of GO enrichment analysis, sorted by FDR values, with only the top 10 BP pathways displayed; (B) Results of KEGG pathway analysis. BP, biological processes; CC, cellular components; MF, molecular functions.
FIGURE 6 Displays the colocalization analysis results between IRF3 and vitiligo. IRF3, interferon regulatory factor 3.
FIGURE 7 Binding mode of screened drugs to their targets by molecular docking. (A) Cartoon representation; overlay of the crystal structures of small molecule compounds and their targets; (B) The PyMOL software displays the three-dimensional structure of the binding pocket along with the linkage between the compound and its target; (C) The 3D structure of IRF3; (D) The 3D structure of Piceatannol.
An accurate approach based on the orthonormal shifted discrete Legendre polynomials for variable-order fractional Sobolev equation
This paper applies the Heydari–Hosseininia nonsingular fractional derivative for defining a variable-order fractional version of the Sobolev equation. The orthonormal shifted discrete Legendre polynomials, as an appropriate family of basis functions, are employed to generate an operational matrix method for this equation. A new fractional operational matrix related to these polynomials is extracted and employed to construct the presented method. Using this approach, an algebraic system of equations is obtained instead of the original variable-order equation. The numerical solution of this system can be found easily. Some numerical examples are provided for verifying the accuracy of the generated approach.
Examples of problems that have recently been modeled by such operators can be found in [23,24]. However, similar to constant-order fractional equations, the major challenge in dealing with VO fractional equations is finding their analytical solutions, which is often impossible. For this reason, in recent years, many numerical approaches have been constructed to solve this category of problems. For instance, see [25][26][27][28][29].
The Sobolev equation is a well-studied partial differential equation which has been frequently utilized in the fluid dynamics to express the fluid motion through rock or soil, and other media [30]. This equation is a special form of the Benjamin-Bona-Mahony-Burgers problem, where the coefficients of nonlinear term and both first-order derivatives are zero [31]. Many applications of the Sobolev equation have been reported in moisture migration in soil [32], thermodynamics [33], and fluid motion [33]. There are many approaches that have been applied to solve various types of the Sobolev equation in recent years. For instances, see [30,31,[34][35][36][37].
Recently, the author of [38] introduced a new nonsingular VO fractional derivative, where the Mittag-Leffler function is its kernel. As far as we know, there is no previous VO fractional version of the Sobolev problem. This motivates us to pursue the following goals: • Defining a VO fractional prescription of the Sobolev equation using the nonsingular fractional derivative expressed in [38]. • Constructing a highly accurate method based upon the orthonormal shifted discrete Legendre polynomials (DLPs) for this equation. So, we concentrate on the problem under the initial and boundary conditions θ (y, 0) =θ (y), where θ (·, ·) is the undetermined solution, μ and ν are positive constants, ζ (·) is a continuous function in its domain, and ϕ(·, ·),θ (·),θ 0 (·), andθ 1 (·) are given functions. Also, is the VO fractional derivative of order ζ (τ ) with respect to τ in the Heydari-Hosseininia (HH) sense of the functions θ (y, τ ) [38]. This equation can have useful applications in many applied problems, such as the transport phenomena of humidity in soil, the heat conduction phenomena in different media, and the porous theories concerned with percolation into rocks with cracks. Note that in the case of ζ (τ ) = 1, this problem reduces to the classical Sobolev problem.
One good idea for solving fractional functional equations is employing polynomials as basis functions to construct numerical methods. This is important for two reasons: First, the computation of the fractional derivative and integral of these functions is easy; and second, if the solution of the problem under study is sufficiently smooth, high-precision solutions can be achieved. Basis orthogonal polynomials are classified into discrete and continuous kinds regarding the method of calculating their expansion coefficients [39]. Unlike continuous polynomials, the expansion coefficients of which are calculated by integrating (in most cases numerically), the expansion coefficients of discrete polynomials are calculated accurately using a finite summation. In recent years, discrete polynomials have been extensively applied for solving diverse problems. For instances, see [39][40][41][42][43][44][45][46][47].
This study applies the orthonormal shifted DLPs for solving the Sobolev equation (1.1) subject to conditions (1.2). To this end, a new fractional matrix related to the VO fractional differentiation of these polynomials is obtained and applied for generating a numerical technique for this problem. The intended approach is constructed using these polynomials expansion and the tau technique. This technique converts the VO fractional problem into an algebraic system of equations that readily can be handled. Note that since it is easier to obtain the operation matrix of VO fractional derivative of the orthonormal shifted DLPs than continuous polynomials, we have considered these discrete polynomials as basis functions for solving this VO fractional problem.
Organization of this article is as follows: The VO fractional derivative in the HH sense is reviewed in Sect. 2. The orthonormal shifted DLPs are reviewed in Sect. 3. Some matrix equalities are obtained in Sect. 4. The computational approach is explicated in Sect. 5. Numerical examples are given in Sect. 6. Conclusion of this study is provided in Sect. 7.
Preliminaries
Here, we review the definition of the VO fractional differentiation used in this study. First of all, we recall the definition of the Mittag-Leffler function given in [4], E_{a,b}(τ) = ∑_{k=0}^{∞} τ^k / Γ(ak + b) with a, b > 0. Please remember that for b = 1 it is written as E_a(τ) = E_{a,1}(τ). The VO fractional derivative of order ζ(τ) ∈ (0, 1) (where ζ(τ) is a continuous function on its domain) in the HH sense of the function θ(τ) is given in [38]; its kernel is the Mittag-Leffler function introduced above. Applying this definition to the power functions τ^r yields the corresponding closed-form expressions, where r ∈ Z^+ ∪ {0}.
Orthonormal shifted discrete Legendre polynomials
In the definition of these polynomials, the S_k^{(m)} are the Stirling numbers of the first kind, and the binomial coefficients (i choose k) appear as weights. These polynomials can be utilized for approximating any continuous function θ(τ) over [0, τ_b] by a finite expansion whose coefficients are obtained by a finite summation. Likewise, a continuous function θ(y, τ) defined over the corresponding rectangular space-time domain can be approximated by the orthonormal shifted DLPs in both variables.
Matrix relationships
Here and in what follows, we give some matrix relationships related to the orthonormal shifted DLPs.
The first operational matrix introduced here is of order (N + 1), with entries relating the orthonormal shifted DLPs to their VO fractional derivatives. Moreover, for any integer n, an analogous relation holds for the nth power of this matrix. In the matrix associated with the VO fractional derivative, the first row is zero. The exact result can be approximated by a computable expression whose entries, regarding (3.5), are evaluated by finite sums. Eventually, via the change of indices î = i - 1 and ĵ = j - 1, and after collecting terms, the desired operational matrix is obtained.
Computational method
In order to use the orthonormal shifted DLPs for problem (1.1) with the initial and boundary conditions (1.2), we express the unknown solution as a double expansion in these polynomials, with an (M + 1) × (N + 1) coefficient matrix whose elements are undetermined. Besides, Theorem 4.2 together with the above relations yields the corresponding expansions of the derivatives appearing in the problem. In addition, we represent ϕ(y, τ) using the orthonormal shifted DLPs with an (M + 1) × (N + 1) given matrix Φ, whose elements are evaluated as in (3.8).
By inserting (5.2)-(5.5) into (1.1), we obtain a residual equation. The functions given in (1.2) can also be approximated via the orthonormal shifted DLPs. Finally, by solving the algebraic system (5.11) and finding the elements of the coefficient matrix, we obtain a numerical solution of the primary VO fractional problem by inserting these elements into (5.1).
Numerical examples
The approach generated using the orthonormal shifted DLPs is applied in this section to solve some numerical examples. The L^2-error of the numerical results is measured in the usual way, where θ and θ̂ denote the analytic and numerical solutions, respectively. The convergence order (CO) of the approach is computed from two consecutive runs, where ε_1 and ε_2 are the first and second L^2-error values, respectively. Furthermore, N̂_i = (M_i + 1) × (N_i + 1) for i = 1, 2 is the number of the orthonormal shifted DLPs utilized in the ith implementation. In addition, we have used Maple 18 (with 15-digit precision) to obtain the results. Meanwhile, the series generating the Mittag-Leffler function is truncated after 25 terms. We have applied the expressed method to this example with three choices of ζ(τ). The extracted results are listed in Table 1. This table shows the high precision of the proposed approach in solving this example. It also confirms that the results have a high degree of convergence. The last column of this table confirms the low computational cost of the presented algorithm. Graphical behaviors of the extracted results for ζ(τ) = 0.50 + 0.25 sin(τ) with (M = 9, N = 8) are illustrated in Fig. 1. This figure shows the high accuracy of the presented method in obtaining a smooth solution for this example.
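The error formulas themselves were lost in extraction; the snippet below shows the standard definitions assumed here for a discrete L^2-error and for the convergence order computed from two runs.

```python
import numpy as np

# Hedged sketch: standard definitions assumed for the L2-error and the
# convergence order (CO), since the original display formulas are missing.
def l2_error(exact, numeric, dy, dtau):
    """Discrete L2 error over the space-time grid."""
    return np.sqrt(np.sum((exact - numeric) ** 2) * dy * dtau)

def convergence_order(eps1, eps2, nhat1, nhat2):
    """CO between two runs using nhat_i = (M_i + 1) * (N_i + 1) basis functions."""
    return np.log(eps1 / eps2) / np.log(nhat2 / nhat1)

# Example: errors 1e-3 and 1e-5 obtained with 30 and 90 basis functions.
print(convergence_order(1e-3, 1e-5, 30, 90))      # ~4.19
```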
The technique established upon the orthonormal shifted DLPs is also implemented for the second example. The obtained results are provided in Table 2, and they confirm the high precision and low computational cost of the approach. It can also be seen that as the number of the orthonormal shifted DLPs increases, the accuracy of the results increases rapidly. The results obtained with (M = N = 8) and ζ(τ) = 0.65 + 0.25τ^3 cos(τ) are shown in Fig. 2. This figure illustrates that the proposed method provides a highly accurate solution for this example across the domain.
Conclusion
In this study, the Heydari-Hosseininia fractional differentiation, a kind of nonsingular variable-order (VO) fractional derivative, was utilized to generate a VO fractional version of the Sobolev equation. The orthonormal shifted discrete Legendre polynomials (DLPs), as a convenient family of basis functions, were employed to generate a numerical algorithm for this equation. A new fractional operational matrix related to the VO fractional differentiation of these polynomials was obtained. The established scheme converts solving the problem under consideration into solving an algebraic system of equations. The validity of this technique was investigated by solving two numerical examples. The obtained results confirmed that the established method is able to generate numerical solutions with high accuracy for such problems, even with a small number of the orthonormal shifted DLPs. As a future research direction, the VO fractional derivative applied in this study can be utilized to generate VO fractional versions of other applicable problems, such as the Schrödinger equation and the advection-diffusion equation. | 2,315.6 | 2021-05-26T00:00:00.000 | [
"Mathematics"
] |
Prognosis Prediction of Uveal Melanoma After Plaque Brachytherapy Based on Ultrasound With Machine Learning
Introduction Uveal melanoma (UM) is the most common intraocular malignancy in adults. Plaque brachytherapy remains the dominant eyeball-conserving therapy for UM. Tumor regression in UM after plaque brachytherapy has been reported as a valuable prognostic factor. The present study aimed to develop an accurate machine-learning model to predict the 4-year risk of metastasis and death in UM based on ocular ultrasound data. Materials and Methods A total of 454 patients with UM were enrolled in this retrospective, single-center study. All patients were followed up for at least 4 years after plaque brachytherapy and underwent ophthalmologic evaluations before the therapy. B-scan ultrasonography was used to measure the basal diameters and thickness of tumors preoperatively and postoperatively. The Random Forest (RF) algorithm was used to construct two prediction models: whether a patient will survive for more than 4 years and whether the tumor will develop metastasis within 4 years after treatment. Results Our predictive model achieved an area under the receiver operating characteristic curve (AUC) of 0.708 for predicting death using only a one-time follow-up record. Including the data from two additional follow-ups increased the AUC of the model to 0.883. We attained AUCs of 0.730 and 0.846 with data from one and three follow-ups, respectively, for predicting metastasis. We found that increasing the amount of postoperative follow-up data significantly improved the accuracy of death and metastasis prediction. Furthermore, we divided tumor treatment response into four patterns. The D(decrease)/S(stable) patterns are associated with a significantly better prognosis than the I(increase)/O(other) patterns. Conclusions The present study developed an RF model to predict the risk of metastasis and death from UM within 4 years based on ultrasound follow-up records following plaque brachytherapy. We intend to further validate our model in prospective datasets, enabling us to implement timely and efficient treatments.
INTRODUCTION
Uveal melanoma (UM) is the most common aggressive ocular tumor in adults. The annual incidence rate per million people is 6 in non-Hispanic whites (1) and 0.3-0.6 in Asians (2)(3)(4).
Although new techniques such as proton beam therapy have been introduced (5), plaque brachytherapy, mainly using iodine-125, remains the dominant option as an eyeball-conserving treatment for UM. In the United States, the proportion of patients treated with plaque brachytherapy has increased each year and has recently exceeded 50% (6,7). The same trend has been seen in our eye center. However, patients with UM have high mortality, with approximately 50% of patients developing metastatic disease and eventually dying within 5 years (8,9). Therefore, it is important to predict the metastasis risk and long-term survival accurately.
Several factors have been proven to correlate with patient outcomes. These include tumor size and location, as well as related features such as retinal detachment, extrascleral extension, and retinal invasion (10,11). The most significant predictors of melanoma-specific mortality are tumor-specific genetic alterations and histopathologic factors, including epithelioid cell type, monosomy 3, 6p gain, and loss of the BAP1 gene (12). Gene expression profiling (GEP) of 15 genes divides UM into class 1 and class 2; tumors with a class 2 GEP have a greater rate of metastasis and mortality than class 1 tumors. However, fine-needle aspiration is not available in most cases for patients with UM treated by plaque brachytherapy. Therefore, we wish to construct a prediction model with more readily accessible clinical data.
Ultrasonography, a cost- and time-effective non-invasive examination, is the most widely used modality for determining the dimensions of a posterior UM, and it is essential throughout follow-up for tumor measurement (13). Tumor regression has commonly been evaluated as a percentage change from the initial tumor thickness measured with B-scan ultrasonography. According to the Collaborative Ocular Melanoma Study, a 15% increase in tumor thickness after brachytherapy should be considered a treatment failure. Many previous studies have shown that such local treatment failure (14)(15)(16)(17) and rapid regression of tumors after plaque brachytherapy (18,19) predict a poor prognosis.
Previous models based on clinical and demographic characteristics have been developed to predict individual patient prognosis after UM treatment (20)(21)(22)(23)(24)(25)(26). To our knowledge, this is the first report that describes a mathematical model for patients with UM after iodine-125 plaque brachytherapy using postoperative follow-up ultrasound data. The present study investigates the prognostic value of dynamic morphometric parameters to predict 4-year survival and metastasis status (Figure 1).
Source of Data
This is a retrospective, single-center study conducted in the Beijing Tongren Eye Center. The study population included adult patients that were clinically diagnosed with UM from July 2007 to December 2016. Generally, iodine-125 plaque brachytherapy was used for tumors with a thickness of <10 mm in our center. The standard dose of irradiation was 100 Gy to the apex of the tumor. However, patients who were refractory to other treatments and strongly requested it were also treated by brachytherapy.
Selection of Participants
Patients who were diagnosed with UM at the Beijing Tongren Eye Center and subsequently received brachytherapy were included in this study. The exclusion criteria were: (1) age < 18 years, (2) received other therapies, (3) alive but with a follow-up time of less than 4 years, (4) the third follow-up was more than 3 years after treatment, (5) the imputed follow-up time was later than the time of the outcome event, (6) metastatic disease at the time of diagnosis. Finally, 454 patients were included to construct the model for predicting death and 424 patients to build the model for predicting metastasis (Figure 2A). Moreover, 177 surviving patients with UM had a follow-up duration ranging from 3 to 4 years. They will be included in the prospective validation of our models in future studies (Figure 2B).
Data Collected
The age, gender, and involved eye were recorded from each patient's record during the initial interview. The presence of subretinal fluid, optic disk involvement, vitreous hemorrhage, ciliary body involvement, tumor thickness, minimum and maximum tumor diameter, tumor shape and position, intraocular pressure and visual acuity, photographs, and ultrasound records were collected from the preoperative medical records. Several strategies, including fundus photography, fluorescein angiography, indocyanine green angiography, standardized echography, and orbital MRI, were conducted to assist diagnosis. Tumors were staged according to the American Joint Committee on Cancer (AJCC) consensus. We excluded duplicate factors and factors that did not differ among groups ( Figure 2C).
Ultrasound images were reviewed by two independent radiologists with at least 5 years of experience in interpreting ocular ultrasound images. The radiologists were blinded to the clinical data. When two radiologists failed to reach a consensus through their independent assessment, the image would be reviewed jointly to ultimately achieve agreement. They measured the tumor's thickness from the inner surface of the sclera to the tumor apex and maximum basal diameter. Thickness and the minimum basal diameter were measured from two meridians, along with the maximum basal diameter and perpendicular to it. Representative digitized scans were stored at the time of each diagnostic and follow-up visit.
Missing Value Completion
There were some missing values in the dataset due to the loss of clinical data and some missing features. The missForest algorithm (R package missForest) was used to fill in the blank values in the dataset (27). MissForest iteratively fills all features with missing values by predicting the missing entries from the existing ones. The order for filling was from the feature with the fewest missing values to the feature with the most missing values. Moreover, numerical and nominal features were predicted with Random Forest (RF) regression and classification, respectively. The follow-up information of patients with fewer than three visits was also imputed, provided the imputed follow-up length was shorter than the time to the outcome event.
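The study used the R package missForest; the Python sketch below shows an analogous iterative random-forest imputation. The data, parameters, and estimator choice are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch: iterative imputation with random-forest estimators,
# analogous to (but not identical to) the R missForest procedure.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[rng.random(X.shape) < 0.1] = np.nan            # introduce ~10% missing values

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10,
    random_state=0,
)
X_filled = imputer.fit_transform(X)
print(np.isnan(X_filled).any())                  # False: all gaps filled
```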
Prediction Model
Machine learning is a powerful tool for mining hidden relationships in datasets, including imaging (28-32), genetic (33), clinical (27, 34), multi-modal sensor (35-37), and other sources of data (38). RF is a type of ensemble learning method that aggregates multiple decision trees and lets them vote on the classification result. The decision tree is a basic machine learning method that applies a tree data structure to recursively split the whole dataset into multiple subsets. Finally, the samples in each leaf node either belong to a single class or can be split further on the remaining features; the class of each sample can thus be inferred from the path from the root node to a leaf node in the tree (39-42). In our research, the RF model was used to construct models of whether a patient will survive for more than 4 years and whether the tumor will metastasize within 4 years after plaque brachytherapy. This was done using demographic attributes, clinical features, and follow-up records.
Additionally, all datasets used were imbalanced. Therefore, the cost-sensitive method (43), the most convenient option, was used to tackle this problem and assist RF in constructing the models. The Synthetic Minority Oversampling Technique (SMOTE), the simplest oversampling algorithm, is typically used to enrich the minority class in each training set of an internal cohort; numerical and nominal features are preprocessed differently when measuring the distance between two samples, and are oversampled separately and then merged. However, we did not adopt this method because we cannot guarantee the ratio used for generating additional minority-class samples, and it would also introduce some noise into the dataset. The under-sampling method (44), which randomly deletes some majority-class samples from the training set, was not suitable for our study either, because the follow-up dataset is precious and we cannot sacrifice the majority class to trade off the minority class. Similar to multi-objective optimization, the cost-sensitive method (43) combines an additional objective (cost) function with the accuracy function when constructing a machine learning model. The number of trees in RF was set to 500 in all experiments. Four-fold stratified cross-validation was used to evaluate the performance of RF fairly, and the subjects in each fold were independent (each patient contributed only one entry of data).
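A minimal sketch of this setup (500 trees, cost-sensitive class weights, 4-fold stratified cross-validation) is given below; the synthetic features, weight ratio, and outcome prevalence are assumptions standing in for the real clinical data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

# Hedged sketch: 500 trees, a cost-sensitive class weighting for the
# imbalanced outcome, and 4-fold stratified cross-validation.
rng = np.random.default_rng(0)
X = rng.normal(size=(454, 20))                   # placeholder for tumor size, IOP, follow-up thickness, ...
y = (rng.random(454) < 0.2).astype(int)          # imbalanced outcome (e.g. death within 4 years)

aucs = []
for train_idx, test_idx in StratifiedKFold(n_splits=4, shuffle=True, random_state=0).split(X, y):
    clf = RandomForestClassifier(
        n_estimators=500,
        class_weight={0: 1, 1: 4},               # heavier cost for missing a poor-prognosis patient
        random_state=0,
    )
    clf.fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
print(np.mean(aucs))                             # ~0.5 on random data; reported only as a usage example
```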
Statistical Analysis
The baseline characteristics of enrolled participants were presented and compared between survivors and non-survivors by applying Student's t-test, the Chi-square test, or the Mann-Whitney U-test, as appropriate. Continuous variables were characterized as mean (standard deviation [SD]) or median (interquartile range [IQR]), while categorical or ranked data were reported as counts and proportions. One-way ANOVA and Kaplan-Meier analysis were used to evaluate tumor regression patterns. All calculations were performed in the Statistical Package for the Social Sciences (SPSS) version 26 and GraphPad Prism version 7. Random Forest was implemented using Python 3.7.3 (Wilmington, DE, United States) and MATLAB R2016a. Accuracy, sensitivity, specificity (32, 45, 46), the Receiver Operating Characteristic (ROC) curve, the Precision-Recall (PR) curve, and the Area Under the Receiver Operating Characteristic Curve (AUROC) were used to evaluate the performance of the models.
Baseline Characteristics
A total of 454 patients with UM treated by plaque brachytherapy were included in the death analysis; 210 (46.3%) were male. UM occurred in 248 right eyes and 206 left eyes. The baseline characteristics of the cohort are summarized in Figure 3.
Evaluation of Model Performance
In our research, we developed a model to predict death 4 years after treatment (Figures 4A,B) with 70.51% sensitivity, 56.96% specificity, and an overall diagnostic accuracy of 58.51% using the first follow-up data. The overall performance of the prediction model improved when three follow-up records were included: the sensitivity rose to 80.45%, the specificity to 83.35%, and the overall diagnostic accuracy to 83.02% (Figure 4C, Supplementary Table 2). Due to the imbalanced datasets, we used a relatively high cost-sensitive parameter to increase sensitivity. A higher sensitivity means that patients with poor prognoses are more likely to be detected in clinical practice, so that radical treatments can be undertaken earlier to improve patient outcomes. The maximum basal diameter was the top-ranked preoperative factor related to death within 4 years after surgery (Figure 5). Position, preoperative minimum basal diameter, corrected visual acuity, and intraocular pressure were also clearly correlated with death. In addition, the span of records from the follow-up was remarkably correlated with predicting death; thus, obtaining data from three follow-ups had the greatest impact on accuracy. Moreover, we constructed a model to predict four-year metastasis status (Figures 4A,B), with 66.67% sensitivity, 69.42% specificity, and 69.10% accuracy. We then incorporated additional follow-up information to achieve a sensitivity of 77.08%, a specificity of 79.79%, and an overall diagnostic accuracy of 79.48% (Figure 4D, Supplementary Table 2). The model for predicting death performed better than the one for metastasis. We found that the maximum basal diameter, intraocular pressure, and minimum basal diameter were the most critical factors (Figure 5). Similarly, additional follow-up information beyond the first collection was significantly related to successfully predicting metastasis. Tumor thickness recorded at the third follow-up was the most important feature.
Regression Pattern
We next investigated the importance of tumor thickness after treatment. We classified the tumor response to brachytherapy into the following four main patterns (47, 48) (Figure 6). In pattern D (decrease), the thickness decreased by at least 15% from the preoperative value at one or more follow-up visits, and the two other visits also showed a decrease in thickness. Pattern S (stable) indicates that there was less than a 15% change in thickness. In pattern I (increase), the thickness increased by at least 15% from the preoperative value at one or more follow-up visits, and the thickness also increased at the two other visits. Pattern O (others) indicates an irregular change in thickness. Preoperative tumor sizes for the different patterns are listed in Table 1. It was found that the tumor regression rate increased with increasing tumor thickness (P < 0.001) (Figure 7).
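The sketch below shows one way to encode the four patterns from a preoperative thickness and three follow-up thicknesses; it is an illustrative reading of the definitions above, not the authors' code.

```python
def regression_pattern(pre_thickness, followup_thicknesses):
    """Classify the response pattern (D/S/I/O) from one preoperative thickness
    and three follow-up thicknesses, following the definitions above."""
    changes = [(t - pre_thickness) / pre_thickness for t in followup_thicknesses]
    if any(c <= -0.15 for c in changes) and all(c < 0 for c in changes):
        return "D"   # decrease
    if any(c >= 0.15 for c in changes) and all(c > 0 for c in changes):
        return "I"   # increase
    if all(abs(c) < 0.15 for c in changes):
        return "S"   # stable
    return "O"       # other / irregular change

print(regression_pattern(8.0, [6.5, 6.0, 5.5]))  # 'D'
print(regression_pattern(8.0, [8.2, 7.9, 8.1]))  # 'S'
```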
As shown in Figures 8A,B, there is a statistically significant association between the tumor regression patterns and both metastasis and death (P < 0.001). Patterns D/S were associated with a significantly better prognosis than the I/O group. We then further categorized the O group into three subtypes: DI (decrease followed by increase), ID (increase followed by decrease), and Z ("zigzag" or alternating measurements). Kaplan-Meier survival analysis revealed that pattern DI was significantly related to a higher death rate (P < 0.001) (Figure 8).
DISCUSSION
Great changes have taken place in traditional medicine since the beginning of the era of big data. Physiological parameters can be recorded by wearable smart products (such as smart glasses, watches, and bracelets) (49, 50), biological parameters can be captured by gene sequencing (51), and anatomical parameters can be displayed by imaging data (52). The limits on the analysis of such data by humans alone have clearly been exceeded, necessitating an increased reliance on machines. Accordingly, at the same time that there is more dependence than ever on humans to provide healthcare, algorithms are desperately needed to help (53).
Uveal melanoma (UM) is the most common intraocular tumor in adults. Although several treatments are available for patients with UM, more than half of patients end up with distant metastases. Unfortunately, there is currently no effective treatment for the metastatic disease, and the median survival time for metastatic UM is only 12 months (54-56). Therefore, risk factors that allow early prediction of metastasis and of patient survival time will contribute to the implementation of a more aggressive treatment strategy and improve patient outcomes (57). Additionally, numerous studies have shown that the great majority of patients want to know whether their prognosis is good or bad, both before surgery and during follow-up. Although bad news is particularly upsetting, patients feel a sense of empowerment over their future planning and a reduction in uncertainty and accompanying anxiety (58-60).
Our previous studies, and those of others, have shown that clinical characteristics such as male gender, advanced age, larger tumor size, epithelioid cell type, subretinal fluid, and ciliary body involvement can increase the risk of metastasis and death (10, 11, 61-64). Additionally, the treatment response of the tumor can also affect the outcome to some extent. Several studies discovered that local treatment failure, defined by COMS as a 15% increase in tumor thickness after brachytherapy, was significantly related to uveal melanoma-related mortality and systemic dissemination (15, 65). Furthermore, Augsburger and Kaiserman (19, 66) found that rapid regression of tumors after plaque brachytherapy indicates an unfavorable prognosis. Also, for other treatment modalities, Christoph et al. (67) reported a non-linear influence of the regression rate of choroidal melanoma as an independent risk factor for metastatic disease after linear accelerator stereotactic fractionated photon radiotherapy. Thus, the change in tumor size after surgery is significantly correlated with prognosis. In our research, we added this aspect to the construction of the model to determine whether postoperative information could improve its predictive performance. Medicine has recently seen the emergence of artificial intelligence (AI) as a novel tool for analyzing large amounts of data (68). AI has achieved high accuracy in recognizing ocular structures. Deep-learning convolutional neural networks (CNNs) have shown superior performance in assessing axial length, subfoveal choroidal thickness, and fundus tessellated density from color fundus photographs. In the diagnosis of multiple ocular disorders, AI outperformed human experts with multimodality imaging, including magnetic resonance imaging (MRI), fundus photographs, and fundus fluorescence angiography (FFA). An updated meta-analysis demonstrated that AI-based algorithms are capable of detecting age-related macular degeneration (AMD) in fundus images with a pooled AUC of 0.983 (72, 73). Naoya Nezu et al. (74) reported a model that provides a tool for assessing the personalized risk of metastasis based on individual and tumor characteristics. The accuracy of the risk prediction was 80% using only chromosomal features, 83% using only clinical features, and 85% using combined clinical and chromosomal information. However, in most eye centers, chromosomal information is not available. Fine-needle aspiration biopsy is an invasive method and may lead to related complications such as vision loss, persistent hemorrhage, and even extraocular extension (76). Therefore, most patients being treated by plaque brachytherapy are reluctant to accept this examination.
We previously applied machine learning technology to establish a model to predict whether a patient would die or develop metastasis within 2 years after initial treatment. This model achieved overall accuracies of 77.0% and 75.0% with all features (77). Information extracted from B-scan ultrasound images was additionally supplied to the machine learning models to provide personalized risk prediction. To the best of our knowledge, ours is the first machine learning-based UM prognosis model using follow-up information collected after surgery. With the increasing availability of follow-up information, the performance of the predictive models improved significantly: the AUC of the models increased from 0.708 to 0.883 after two additional follow-up records were added. Figure 5 shows that follow-up data were remarkably correlated with 4-year survival. This suggests that we can provide a more accurate prognostic evaluation for patients through intensive follow-up, which is readily obtained. In our study, the tumor treatment response was divided into four patterns. The D pattern of decreasing tumor thickness correlated with the best prognosis, contrary to some previous research (18, 19), which found that early rapid regression of tumors after plaque brachytherapy was associated with an unfavorable outcome for patients with UM. However, a greater regression indicated a better prognosis over our relatively longer postoperative follow-up. In addition, similar to their results, a positive correlation between tumor thickness and regression rate was also found in our research.
Beyond the patients enrolled for model construction, the 177 surviving patients with UM whose follow-up ranged from 3 to 4 years will allow us to validate the algorithms within a short time. Additionally, we welcome external datasets, especially with Asian patients, to continue our validation efforts. We hope that a predictive model for Asian patients can be established in the future using factors that are non-invasive and easily available clinically.
Deep learning (DL)-powered ultrasound has begun to be widely used in diagnosing certain diseases and for distinguishing between benign and malignant tumor types (78-80), but it has been used less for determining prognosis. Thus, we have also tried to construct a DL model using B-scan ultrasound images to predict long-term survival in patients with UM. However, its performance was found to be unsatisfactory. We plan to undertake additional prospective studies that will incorporate uniform, standard ultrasound images and color Doppler flow imaging to gather more prognostic information. In addition, multiple imaging modalities have recently been used with deep learning, including CT and MRI. Using these tools, researchers can obtain more specific and informative histologic and prognostic information. Compared to ultrasound, MRI provides excellent contrast resolution and multiple tissue contrasts. Due to the paramagnetic effect, lesions with different melanin contents present distinct signal intensities on MRI. Furthermore, the use of multiple sequences, including dynamic contrast-enhanced (DCE) and diffusion-weighted MR imaging, has made it easier to identify intertumor heterogeneity (81, 82). It has been proven that quantitative multiparametric MRI can be used to predict monosomy 3 and UM metastasis (83, 84). Therefore, we propose to adopt DL to automatically extract high-throughput features from multi-modal, multi-channel preoperative MRI to predict the survival time of patients with UM. This will enable us to better develop personalized treatment plans and realize precision medicine.
There are some limitations of our study that should be noted. First, while death is an outcome that can be precisely determined, metastasis can only be detected at follow-up visits. Therefore, metastasis may be present before the clinical diagnosis, which would affect our model's predictive value for metastasis. Second, due to the retrospective nature of this study, the follow-up interval after surgery was not fixed, which affected the results to some extent. Third, based on the COMS protocol, post-therapy surveillance relies on documenting decreasing thickness with B-scan ultrasound repeated every 6 months for 2 years and yearly thereafter (16), but most of our patients could only be examined three times within 3 years. Our results showed that the algorithm's performance could be enhanced with more follow-up visits. Frequent follow-up of patients is advisable, ideally leading to earlier detection of metastasis and timely enrollment into treatment and care. Thus, patients will be strictly followed up in the future to further explore the role of data from follow-up examinations in predicting prognosis.
CONCLUSIONS
In conclusion, the present study developed an RF model to predict the risk of UM metastasis and death within 4 years based on ultrasound follow-up records following plaque brachytherapy. We intend to further validate our model in prospective datasets, which can prompt us to implement timely and efficient treatments.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Beijing Tongren Hospital of Capital Medical University. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
YaL and WW contributed to the concept, design of the study, revised the manuscript, and handled the supervision. JL, YC, and YY wrote the manuscript. JL developed the study. YC, YY, KZ, YuL, HZ, LD, and JX participated in the final design of the study. JL, YC, YY, KZ, YuL, and HZ carried out the study. JL, YY, YuL, and HZ collected the data. All authors read and approved the final submitted version of the manuscript. | 5,304.6 | 2022-01-21T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Investigating NP-Chunking with Universal Dependencies for English
Chunking is a pre-processing task generally dedicated to improving constituency parsing. In this paper, we want to show that universal dependency (UD) parsing can also leverage the information provided by the task of chunking even though annotated chunks are not provided with universal dependency trees. In particular, we introduce the possibility of deducing noun-phrase (NP) chunks from universal dependencies, focusing on English as a first example. We then demonstrate how the task of NP-chunking can benefit PoS-tagging in a multi-task learning setting – comparing two different strategies – and how it can be used as a feature for dependency parsing in order to learn enriched models.
Introduction
Syntactic chunking consists of identifying groups of (consecutive) words in a sentence that constitute phrases (e.g. noun-phrases, verb-phrases). It can be seen as a shallow parsing task between PoS-tagging and syntactic parsing. Chunking is known to be a relevant preprocessing step for syntactic parsing.
Chunking got a lot of attention when syntactic parsing was predominantly driven by constituency parsing and was highlighted, in particular, through the CoNLL-2000 Shared Task (Tjong Kim Sang and Buchholz, 2000). Nowadays, studies (Søgaard and Goldberg, 2016; Hashimoto et al., 2017) still compare chunking performance, as well as constituency parsing performance, on these same data from the Penn Treebank. While dependency parsing is spreading to different languages and domains (Kong et al., 2014; Nivre et al., 2017), chunking is restricted to old journalistic data. Nevertheless, chunking can benefit dependency parsing as well as constituency parsing, but gold annotated chunks are not available for universal dependencies.
We want to automatically deduce chunks from universal dependencies (UD) (Nivre et al., 2017) and investigate its benefit for other tasks such as Part-of-Speech (PoS) tagging and dependency parsing. We focus on English, which has properties that make it a good candidate for chunking (low percentage of non-projective dependencies). As a first target, we also decide to restrict the task to the most common chunks: noun-phrases (NP).
We choose to see NP-chunking as a sequence labeling task where tags signal the beginning (B-NP), the inside (I-NP) or the outside (O) of chunks. We thus propose to use multi-task learning for training chunking along with PoS-tagging and feature-tagging to show that the tasks can benefit from each other. We experiment with two different multi-task learning strategies (training parameters in parallel or sequentially). We also intend to make parsing benefit from NP-chunking as a preprocessing task. Accordingly, we propose to add NP-chunk tags as features for dependency parsing.
Contributions. We show how to (i) deduce NP-chunks from universal dependencies for English in order to (ii) demonstrate the benefit of performing chunking along with PoS-tagging through multi-task learning and (iii) evaluate the impact of using NP-chunks as features for dependency parsing.
NP-Chunks
While chunks are inherently deduced from constituent trees, we want to deduce chunks from dependency trees in order not to rely on specific constituent annotations which would not be available for other domains or languages. In this case, only partial information is provided by the dependencies to automatically extract the chunks. We thus choose to only deduce noun-phrase (NP) chunks (Ramshaw and Marcus, 1995) from the dependency trees. Figure 1: NP-chunks deduced from a UD tree of the English Web Treebank (EWT).
Automatic Deduction. We deduce minimal NPchunks, which means that embedded prepositional (PP) chunks are not included in our NP-chunks, e.g. in Figure 1 "screenshots of two beheading video's" is split in two distinct NPs instead of one long NP with an embedded PP ("of two beheading video's").
We first identify the core tokens of NPs: the nouns (NOUN), proper nouns (PROPN) and some pronouns¹ (PRON). After identifying these core tokens, we form full NPs by joining these core tokens with their direct and indirect children which are not part of PPs. In practice, they are those for which the incoming dependency is labeled with one of the following relations (modulo some individual conditions); a simplified sketch of this deduction is given below:
• compound, compound:prt, flat, flat:name, goeswith, fixed, nummod;
• det if the child is located before its head;
• conj if the child and its head are adjectives: we want "excellent and strong performers" to be one NP and "these challenges and possible solutions" to be split into two NPs;
• amod if the child is not an adverb: we don't want to attach preceding adverbs such as "not" to an NP;
• appos if the child is directly before or after its head;
• advmod if the child is not a PART or a VERB and its head is an adjective;
• nmod:poss if the child is not a NOUN or a PROPN: we want to group "your world" but not "John's last day" (where "John" and "last day" would be two distinct NPs);
• following and preceding obl:npmod and obl:tmod;
• obl if its head has an amod incoming dependency.
¹ All pronouns but the interrogative and relative pronouns.
In addition, when grouping a core token with one of its det, compound, nummod or nmod:poss children, we automatically attach tokens which are in between. If split chunks remain, we attach the non-attached tokens which are in between two part of a chunk. It allows us to attach the adverbs which modify adjectives such as "very" in "my very best friend" or some specific punctuation such as the slash in "The owner/baker".
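The sketch below illustrates the core of this deduction on a CoNLL-U parse; it covers only a subset of the relations and conditions listed above, and the `conllu` package and the sample input are used purely for illustration.

```python
# Simplified sketch of the NP-chunk deduction: only a subset of the relations
# described above is handled; the full conditions (PP exclusion, ordering
# constraints, gap filling) are omitted for brevity.
import conllu

CORE_POS = {"NOUN", "PROPN", "PRON"}
JOIN_RELATIONS = {"compound", "flat", "fixed", "nummod", "det", "amod", "appos"}

def deduce_np_chunks(sentence):
    """Return NP chunks as sorted lists of token ids for one parsed sentence."""
    children = {}
    for tok in sentence:
        children.setdefault(tok["head"], []).append(tok)
    chunks = []
    for tok in sentence:
        if tok["upos"] in CORE_POS:
            chunk = {tok["id"]}
            for child in children.get(tok["id"], []):
                if child["deprel"].split(":")[0] in JOIN_RELATIONS:
                    chunk.add(child["id"])
            chunks.append(sorted(chunk))
    return chunks

sample = "1\tthe\tthe\tDET\t_\t_\t2\tdet\t_\t_\n2\towner\towner\tNOUN\t_\t_\t0\troot\t_\t_\n"
print(deduce_np_chunks(conllu.parse(sample)[0]))   # [[1, 2]]
```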
Manual Annotation. To assess the correctness of the automatically deduced chunks, we manually annotated noun-phrases on a small portion of the test set of the EWT UD treebank. For 50 sentences (from which 233 NP chunks were manually annotated), the accuracy of the automatic deduction reaches 98.7%. Errors in deduction are mostly due to punctual inconsistencies in the UD annotations.
Sequence Labeling
We implement a deep recurrent neural network with an architecture based on bidirectional Long Short-Term Memory (bi-LSTM) (Graves and Schmidhuber, 2005) that can exploit contextual information for processing sequences. The base network is composed of an embedding layer that feeds two hidden bi-LSTM layers (forward and backward). The outputs of the bi-LSTMs are then concatenated to feed the next layer. Multiple bi-LSTM layers can be stacked. In the end, these outputs are fed to a Softmax output layer. The embedding layer is a concatenation of a word embedding layer and a character embedding layer. It takes as input a sequence of n tokens. The output of the network is a sequence of n tags.
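A minimal PyTorch sketch of this tagger is shown below; the character-level embeddings and the exact dimensions of our experiments are omitted, so the sizes shown are illustrative only.

```python
import torch
import torch.nn as nn

# Minimal sketch of the tagger described above: word embeddings feed a
# bi-LSTM whose concatenated forward/backward states feed a softmax layer.
class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, n_tags, emb_dim=200, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_tags)   # forward + backward states

    def forward(self, token_ids):
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)                        # logits per token, per tag

model = BiLSTMTagger(vocab_size=1000, n_tags=3)        # e.g. B-NP, I-NP, O
tokens = torch.randint(0, 1000, (2, 7))                # batch of 2 sentences, 7 tokens
logits = model(tokens)
print(logits.shape)                                    # torch.Size([2, 7, 3])
```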
We use this architecture for PoS-tagging, feature-tagging (i.e. morpho-syntactic tagging) and NP-chunking. In order to make the tasks benefit from each other, we adapt the network to multitask learning. We propose to compare two strategies for multi-task learning : shared or stacked.
Shared multi-task learning. In this architecture, different tasks are trained at the same level in a similar way as in Søgaard and Goldberg (2016). They share parameters through all the network and feed different outputs.
Stacked multi-task learning. In this architecture, different tasks are trained at different levels as proposed by Hashimoto et al. (2017). A bi-LSTM layer is dedicated to a task. The output of a layer for a given task feeds the next layer dedicated to the next task.
Dependency Parsing
Our dependency parser is a reimplementation of the arc-hybrid non-projective transition-based parser of de Lhoneux et al. (2017b).
In this version of the arc-hybrid system, the SWAP transition is added to the original transition set (Kuhlmann et al., 2011) made up of the standard transitions RIGHT, LEFT and SHIFT. The SWAP transition allows to build non-projective dependency trees. The standard transitions are trained using a dynamic oracle (Goldberg and Nivre, 2013), which alleviates error propagation, and a static oracle for training the SWAP transition.
The parser uses a bi-LSTM network to learn vector representations of the tokens. These vectors are combined through a feature function and used for learning and evaluating the transitions using a multi-layer perceptron with one hidden layer. In de Lhoneux et al. (2017a), PoS tags are removed from the feature function and instead the bi-LSTM is fed with only word and character embeddings. In our version of the parser, we reintroduce the PoS tags as features and also make use of the predicted NP-chunks. The PoS and/or NP-chunk tags are turned into embeddings and concatenated with the word and character embeddings to represent the tokens.
Experiments
As a baseline for PoS-tagging, feature-tagging and NP-chunking, we first train our sequence tagger for each task separately. We then train the tagger in a multi-task setting -with PoS-tagging as a main task-alternating the auxiliary tasks and the strategies (shared or stacked multi-task learning).
As a baseline for dependency parsing, we train the parser using only word and character embeddings as input to the bi-LSTM. We then add the PoS and NP-chunk embeddings, separately and simultaneously, for training enriched models. As an upper bound, we also propose to run the experiments with "gold" NP-chunks, i.e. we feed the parser (for training and testing) with NP-chunks that were automatically deduced from the dependencies.
Data. We evaluate all tasks on the three English treebanks included in version 2.1 of the Universal Dependencies project (Nivre et al., 2017): EWT (254k tokens), LinES (82k tokens) and ParTUT (49k tokens). On average, 3.8, 3.3 and 6.2 NP-chunks per sentence are deduced for each treebank, respectively. Note that the LinES treebank does not contain features (morpho-syntactic tags), so we exclude feature-tagging from the evaluation for this treebank.
Hyper-parameters. We use the development data to tune our hyper-parameters and to determine the number of epochs (via early-stopping) for each experiment.
For sequence tagging, we use the RMSProp optimizer with a learning rate of 0.0005. Hidden layers of dimension 300 are used for ParTUT and of dimension 100 for EWT and LinES. We use a dropout of 0.2 on the hidden layers. For dependency parsing, the hidden layer of the bi-LSTM has a dimension of 125 and uses a dropout of 0.33.
The dimension of the word and character embeddings are respectively 200 and 50. For dependency parsing, embedding dimensions for PoS and NP-chunk tags are set respectively to 6 and 3.
Evaluation. We average the scores over 5 runs for each experiment. We evaluate accuracy for PoS-tagging and feature-tagging and F1 for chunking, where precision is the percentage of predicted chunks that are correct and recall is the percentage of gold chunks that are correctly predicted. For dependency parsing, we calculate the label accuracy (LA), the unlabeled attachment score (UAS) and the labeled attachment score (LAS). As in the CoNLL 2017 Shared Task (Hajič and Zeman, 2017), only universal dependency labels are taken into account (ignoring language-specific subtypes), i.e. we consider a predicted label correct if the main type of the gold label is the same, e.g. flat:name is correct if the gold label is flat. We also exclude punctuation from the evaluation.
Tagging results
See PoS-tagging, feature-tagging and NP-chunking results in Table 1. For all three treebanks, multi-task learning is beneficial for at least one task. Only the LinES treebank does not benefit from it for PoS-tagging (i.e. equivalent performance); however, it greatly improves NP-chunking (+2.9). For the smallest treebank
(ParTUT), multi-task learning is beneficial for all tasks (at best, +0.6 for PoS-tagging, +0.7 for feature-tagging and +1.73 for NP-chunking). For the EWT treebank, equivalent scores are achieved for feature-tagging, but PoS-tagging and NP-chunking are enhanced through multi-task learning (respectively +0.14 and +0.67).
Globally, the shared multi-task learning strategy achieves the best results. The stacked strategy outperforms the baseline for the small treebank but gets lower scores on the big treebank.
It is also worth noting that multi-task learning makes the models more stable. We observe a significant decrease of the standard deviation for most of the experiments.
Dependency Parsing Results
See dependency parsing results in Table 2. Adding PoS and NP-chunk tags as features significantly improves dependency parsing performance for the smallest treebank, ParTUT (+0.96 LAS). Using NP-chunks alone is also beneficial on the LinES data (+0.22 LAS over the baseline), but using only PoS-tags is actually more relevant than including both features. For the biggest treebank, EWT, the baseline outperforms all other enriched models. However, the upper bound shows that NP-chunk tags are relevant features for improving dependency parsing, suggesting that the quality of the predicted NP-chunks, as well as that of the PoS-tags, is not sufficient for improving parsing.
It is worth noting that training converges faster when using features (17.6 epochs on average vs. 25.8 for the baseline), which might also indicate a training issue, since models that stop after few epochs (11/12) achieve lower performance.
Conclusion
We showed that it is possible to extract NP-chunks from universal dependencies that can be useful for improving other tasks such as PoS-tagging and dependency parsing. While the improvement for PoS-tagging is systematic on all English UD treebanks, the results are mixed for dependency parsing suggesting that NP-chunks as features might be useful for training on small datasets. Further experiments will be performed in future work in order to extend the results to other languages and to investigate the possibility of extracting embedded chunks. | 3,028.2 | 2018-11-01T00:00:00.000 | [
"Computer Science"
] |
EURASIP Journal on Applied Signal Processing 2002:9, 936-943 © 2002 Hindawi Publishing Corporation P-CORDIC: A Precomputation Based Rotation CORDIC Algorithm
This paper presents a CORDIC (coordinate rotation digital computer) algorithm and architecture for the rotation mode in which the directions of all micro-rotations are precomputed while maintaining a constant scale factor. Thus, an examination of the sign of the angle after each iteration is no longer required. The algorithm is capable of performing the CORDIC computation for an operand word-length of 54 bits. Additionally, there is a higher degree of freedom in choosing the pipeline cutsets due to the novel feature of the independence of the iterations in the CORDIC rotation.
INTRODUCTION
CORDIC (coordinate rotation digital computer) [1,2] is an iterative algorithm for the calculation of the rotation of a 2-dimensional vector, in linear, circular, or hyperbolic coordinate systems, using only add and shift operations. It has a wide range of applications including discrete transformations such as the Hartley transform [3], discrete cosine transform [4], fast Fourier transform (FFT) [5], and chirp Z transform (CZT) [6], solving eigenvalue and singular value problems [7], digital filters [8], Toeplitz system and linear system solvers [9], and Kalman filters [10]. It can also be used for multiuser detection in code division multiple access (CDMA) wireless systems [11].
The CORDIC algorithm has two operating modes, the rotation mode and the vectoring mode. In the rotation mode, a vector (x, y) is rotated by an angle θ to obtain the new vector (x*, y*) (see Figure 1). In every micro-rotation i, fixed angles of the value arctan(2^-i) are added to or subtracted from the angle remainder θ_i, so that the angle remainder approaches zero. In the vectoring mode, the length R and the angle α towards the x-axis of a vector (x, y) are computed. For this purpose, the vector is rotated towards the x-axis so that the y-component approaches zero. The sum of all angle rotations is equal to the value of α, while the value of the x-component corresponds to the length R of the vector (x, y). The mathematical relations for the CORDIC iterations are x_{i+1} = x_i - m σ_i 2^-i y_i, y_{i+1} = y_i + σ_i 2^-i x_i, and z_{i+1} = z_i - σ_i α_i, where σ_i is the weight of each micro-rotation, α_i is the corresponding micro-angle (arctan(2^-i) in the circular case), and m steers the choice of rectangular (m = 0), circular (m = 1), or hyperbolic (m = -1) coordinate systems. The required micro-rotations are not perfect rotations; they increase the length of the vector. In order to maintain a constant vector length, the obtained results have to be scaled by a scale factor K. Nevertheless, assuming consecutive rotations in positive and/or negative directions, the scale factor is constant and can be precomputed according to K_1 = ∏_{i=0}^{n-1} sqrt(1 + 2^-2i). The computation of the scale factor can be truncated after n/2 iterations because the multiplicands in the last n/2 iterations are 1 due to the finite word-length and do not affect the final value of K_1. There are two different approaches for the computation of the CORDIC algorithm. The first one uses consecutive rotations in positive and/or negative direction, where the weight of each rotation is 1. Hence, σ_i is either -1 or 1, depending on the sign of the angle remainder z(i). In every iteration a significant amount of time is used to examine the most significant bit, in case of a binary architecture, or the most significant three digits, in case of a redundant architecture, to predict the sign of z(i) and hence the rotation direction σ_i. In contrast to the CORDIC implementations with constant scale factor, other implementations use a minimally redundant radix-4 or an even higher radix number representation [12,13,14]. These architectures make use of a wider range of σ_i; in case of a minimally redundant radix-4 architecture, σ_i ∈ {-2, -1, 0, 1, 2}. By using this number system, the number of iterations can be reduced. However, the computation time per iteration increases, since it takes more time to differentiate between five different rotation direction values and to generate five different multiples of arctan(2^-i). The scale factor also becomes variable and has to be computed every time, due to the absence of consecutive rotations, leading to an increase in area.
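For illustration, the following floating-point model implements the conventional rotation-mode CORDIC in the circular coordinate system (m = 1), including the per-iteration sign examination that the proposed algorithm removes; fixed-point and hardware details are deliberately omitted.

```python
import math

# Illustrative software model of the conventional rotation-mode CORDIC
# (circular coordinates, m = 1): every micro-rotation direction is decided
# by examining the sign of the angle remainder z.
def cordic_rotate(theta, n=32):
    K = 1.0
    for i in range(n):
        K *= math.sqrt(1.0 + 2.0 ** (-2 * i))          # constant scale factor
    x, y, z = 1.0 / K, 0.0, theta                       # pre-scale the input vector
    for i in range(n):
        sigma = 1 if z >= 0 else -1                     # sign examination of the remainder
        x, y = x - sigma * y * 2.0 ** (-i), y + sigma * x * 2.0 ** (-i)
        z -= sigma * math.atan(2.0 ** (-i))
    return x, y                                         # ~ (cos(theta), sin(theta))

c, s = cordic_rotate(0.9773844)
print(c - math.cos(0.9773844), s - math.sin(0.9773844))  # errors on the order of 1e-10
```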
To speed up the computation time of the CORDIC algorithm, either the number of iterations or the delay of each iteration have to be minimized. The proposed algorithm introduces a novel approach, in which the rotation direction can be precomputed by adding the rotation angle θ, a constant and a variable adjustment which is stored in a table. Hence, a significant speedup of the delay per iteration is obtained. Since all rotation directions are known before the actual rotation begins, more than one rotation can also be performed in one iteration leading to a reduction in latency. The proposed architecture also eliminates the z-datapath and reduces the area of the implementation. This paper is organized as follows. Section 2 presents the theoretical background for the novel CORDIC algorithm for rotation mode and Section 3 presents the novel architecture. Section 4 performs an evaluation of different CORDIC architectures while Section 5 concludes the paper.
Mathematical derivation using Taylor series
The summation of all micro-rotations with their corresponding weights σ_i is equivalent to the rotation angle θ, i.e. θ = ∑_{i=0}^{∞} σ_i arctan(2^-i), where σ_i ∈ {-1, 1}, corresponding to the addition and subtraction of the micro-angles θ_i. Since consecutive rotations are employed, the scale factor is constant. The value of σ can be interpreted as a number in radix-2 representation. The goal of the proposed method is to compute the sequence of the micro-rotations without performing any iteration. To accomplish this, σ_i is recoded as 2d_i - 1, leading to a binary representation in which a zero corresponds to the addition of a micro-angle [15,16]. This allows the use of simple binary adders. Adding and subtracting 2^-i in (4) results in an expression involving the partial offsets (denoted here ε_i), where c_1 corresponds to c_1 = 2 - ∑_{i=0}^{∞}(2^-i - arctan(2^-i)). Solving (8) for d expresses the binary word d in terms of 0.5θ, the constant c = 0.5 c_1, and the weighted offsets d_i ε_i. Table 1 shows the values of the partial offsets ε_i for the first 10 values of i and indicates that the value of ε_i decreases approximately by a factor of 8 with increasing i. Hence, the summation of the d_i ε_i can be limited to the first n/3 terms. Rather than storing the partial offsets ε_i and computing the sum over all i of the products d_i ε_i, δ = ∑_{i=1}^{n/3} d_i ε_i can be precomputed and stored. Hence, the only difficulty consists of determining which offset corresponds to the input θ. This can be achieved by comparing the input θ with a reference angle θ_ref. The reference angles θ_ref correspond to the summation of the first n/3 micro-rotations. To be certain to obtain the correct offset, θ has to be larger than the reference angle θ_ref. All reference angles are stored in a ROM and are accessed by the most significant n/3 bits of θ. In addition to the reference angles, the values of δ are stored. In case of a negative difference θ_ref - θ, the corresponding δ is selected; otherwise the next smaller value of δ is chosen to be subtracted from θ + c - sign(θ)·ε_0.
Example 1. Assume a word-length of 16 bits and θ = 0.9773844. According to Table 2, θ_ref corresponds to 0.97337076 and δ = 0.03644375. Hence, d is computed from these values as described above.
High precision
By using a mantissa of n = 54 bits (corresponding to floating-point precision), the ROM for storing all offsets would require 2^18 entries. This is rather impractical, since the area required to implement the ROM would by far exceed the area of the CORDIC implementation itself. To reduce the area of the ROM, δ can be split into two parts, where δ_ROM is stored in a ROM while δ_r is computed. By examining the Taylor series expansion of arctan(2^-i), it becomes obvious that the partial offsets for iterations i and i + 1 are closely related: comparing (13) and (16) shows that (13) is about 2^3 times larger than (16). Assuming a word-length of n bits and i > n/5 - 2, the factor is exactly 2^3. Hence, the offset term for i = n/5 - 1, namely -2^{-3(n/5-1)}/3 + 2^{-5(n/5-1)}/5, can be stored in a ROM and the remaining offset δ_r is computed as in (17). The largest magnitude of δ_r is smaller than 2^{-3(n/5-1)}.
Example for high precision
Assume that we have a word-length of 50 bits and θ = 0.977384381116. Using the most significant 9 bits of θ, δ_ROM = 0.03644501895249 can be obtained. Hence, d is computed according to the high-precision procedure described above, with the remaining offset δ_r evaluated from (17).
The rotation mode in hyperbolic coordinate systems
Similar to the circular coordinate system, a simple correlation between the input angle θ and the directions of the micro-rotations can be obtained. Due to the incomplete representation of the hyperbolic rotation angles θ_i, some iterations have to be performed twice. In [2], it was recommended that every 4th, 13th, ..., (3k + 1)th iteration should be repeated to complete the angle representation. Similar to the rotation mode in the circular coordinate system, the rotation angle θ is equivalent to the summation of all micro-rotations with their corresponding weights. This leads to an expression analogous to (4). Performing a Taylor series expansion and applying σ_i = 2d_i - 1 results in an analogous expression for d, where c corresponds to the hyperbolic counterpart of the constant defined above. Since these extra rotations are not known in advance, an efficient high-precision VLSI implementation is not possible. However, for signal processing applications using a word-length of less than 13 bits, the ROM size corresponds to only 14 entries.
THE NOVEL ROTATION-CORDIC ARCHITECTURE
For an implementation with an operand word-length of n bits, the pre-processing part consists of a ROM of 2^{n/5-2} entries in which the reference angles θ_ref and the corresponding offsets δ are stored (see Figure 2). To avoid a second access to the ROM in case of θ_ref > θ, the next smaller offset δ_{k-1} is additionally stored in the kth entry of the ROM. The ROM is accessed by the n/5-2 MSBs of θ. A binary tree adder computes whether θ is smaller or larger than the chosen reference angle θ_ref and selects the corresponding offset (either δ_k or δ_{k-1}). Using a 3:2 compressor and another fast binary tree adder, the two required additions to obtain d_approx = 0.5θ + c_2 + δ_ROM can be performed, where c_2 corresponds to c + sign(θ)·ε_0. Using the bits d_{n/5-1} to d_{n/3}, δ_r can be computed according to (17) and has to be added to d_approx. In the worst-case scenario, there is a possible ripple from bit d_{3(n/5-1)} to bit d_{n/5}, which would call for a time-consuming ripple adder. However, by employing an extra rotation for d_{3(n/5-1)-1}, this limitation can be resolved. This extra rotation corresponds to the overflow bit of the addition of the bits of d_approx at positions 3(n/5-1) to n and δ_r. The additional rotation does not affect the scale factor, since 3(n/5-1) > n/2. For a precision of n ≤ 16 bits, there are less than 32 offsets, which can be stored in a ROM, and the additional overhead to compute δ_r can be removed.
An alternative architecture can be chosen by realizing that the directions of the micro-rotations are required in a most-significant-bit-first manner (see Figure 2). As in the previous architecture, a fast binary adder is employed to determine which offset has to be selected. A redundant sign-digit adder adds 0.5θ, c, and δ_ROM, and an on-the-fly converter starts converting the result into the corresponding binary representation. Normally, the most significant bit cannot be determined until the least significant digit is converted. However, such worst cases do not exist in the CORDIC implementation, due to the redundant representation of the angles arctan(2^-i): as opposed to the binary representation, an iteration may have to be rotated into the opposite direction, which happens if the angle remainder z_i ≈ 0. Table 3 shows the maximum number of consecutive unidirectional rotations depending on the iteration number i. This limitation leads to a reduction in the complexity of the online converters, and their most significant bits can already be used to start the rotations in the x/y datapath.
For instance, the next rotation has to be performed in the negative direction, since θ_4 > 0. Hence, it is not possible to obtain a rotation sequence like σ_{0···4} = 01111; it has to be σ_{0···4} = 01110.
Delay analysis
In this paper, we assume a delay model similar to that proposed in [14]. However, in [14] the unit delay is set to a gate delay, while in our evaluation the unit delay is set to a full-adder delay. Hence, the delays for a 2-input gate (NAND, NOR), XOR, multiplexer, register, and full-adder are 0.25, 0.5, 0.5, 0.5, and 1 t_FA, respectively. The determination of which offset has to be chosen consists of the delays of the decoder, the ROM, a fast binary n-bit tree adder, and a multiplexer. Assuming a delay of log₂(m) gate delays for the decoder, where m corresponds to the number of rows in the ROM (m < log₂(n) + 1), one for the word-line driver and another for the ROM, log₂(n)·t_Mux for the fast binary adder, and 0.5·t_FA for the multiplexer, we can obtain the correct value of δ_ROM after a delay of (0.5 log₂(n) + 1 + 0.25 log₂(log₂(n)))·t_FA.
A 3:2 compressor can be employed to reduce the number of partial products to two, and an additional fast binary tree adder computes the final value of d_approx. Hence, the entire delay to obtain d_approx corresponds to the above (0.5 log₂(n) + 1 + 0.25 log₂(log₂(n)))·t_FA plus the delays of the compressor and of this final tree adder. After obtaining the bits d_(⌈n/5⌉−1) to d_(⌈n/3⌉), δ_r can be computed.
Since the value of δ_r is smaller than 2^(−3(⌈n/5⌉−1)) and the value of d_approx + δ_r is not required before 2·3(⌈n/5⌉)·t_FA, the computation of δ_r is not in the critical path. As an alternative to the 3:2 compressor and the tree adder, a minimally redundant radix-4 signed-digit adder can be employed, which has a delay of two full-adders. Hence, all output digits are available after these two full-adder delays. An additional on-the-fly converter converts the digits into the equivalent binary representation, starting with the MSD. It requires a delay of one multiplexer and four NANDs/NORs to convert one digit, which results in 1.5 t_FA per digit (1 digit = 2 bits). The last digit is converted after a delay of (n/2 + 1)·1.5 t_FA. As already described in Table 3, bit n/3 is stable as soon as the last digit (corresponding to bit n) has been converted. Hence, the n/3 rotation can be performed after a delay of (n/2 + 1)·1.5 t_FA. Therefore, iteration i = 0 can already be performed after a delay of (n/2 + 1)·1.5 t_FA − (n/3)·2 t_FA = (1/12·n + 1) t_FA. Note that the conversion of one redundant digit is performed faster than the addition/subtraction of the x/y datapath. Hence, an initial delay of (1/12·n + 1) t_FA + (log₂(n) + 2.25) t_FA = (1/12·n + log₂(n) + 3.25) t_FA has to be added to the delay of the x/y datapath.
Area analysis
In the previously used architecture, the z-datapath consists of n/2 iterations in which (n + log₂ n + 2) multiplexers and (n + log₂ n + 2) full-adders and registers are employed. Additionally, due to the Booth encoding, in the last n/4 iterations about 2(n + log₂ n + 2) multiplexers and (n + log₂ n + 2) full-adders are required. Assuming A_FA = 1.93·A_mux and A_FA = 1.61·A_reg (values are based on layouts), the hardware complexity of the z-datapath results in A_z = 1.7·n·(n + log₂ n + 2)·A_FA. Assuming a word length of 54 bits and neglecting the area required for the examination of the most significant three digits, about 5700 A_FA are required.
The proposed architecture utilizes a ROM of word length n with 2^(⌈n/5⌉−2) entries, requiring an area of n·2^(⌈n/5⌉−2)·A_FA·(1/50), resulting in 552 A_FA for a word length of 54 bits. The decoders can be implemented in multiple ways. NOR-based decoders with precharge lead to the fastest implementation; however, the decoder area becomes larger. The decoder size per word line corresponds to A_dec = 0.83 A_FA. Since 2^(⌈n/5⌉−2) decoders are required, the area for all decoders corresponds to A_dec,total = 0.83·2^(⌈n/5⌉−2) A_FA = 424 A_FA, assuming a 54-bit word length. The ROM has to store θ_ref, δ_k, and δ_(k−1). This results in a total area for the ROM and the decoder of about 2080 A_FA. The computation of δ_r requires n/3 − n/5 + 2 = 2n/15 + 2 rows of CSAs (carry-save adders) and muxes and a final fast binary tree adder. Note that each row of CSAs and muxes only consists of n − 3n/5 + 6 = 2n/5 + 6 bits (the more significant bits are zero). The required areas correspond to 10·27 A_FA + 10·27 A_mux and 5·27 A_FA, respectively. Hence, the computation of δ_r requires about 540 A_FA. Moreover, the two redundant signed-digit adders require 2n·A_FA, while the converter consists of about (0.5n² + n) A_mux. This corresponds to 108 and 696 A_FA for a word length of 54 bits. This makes a total of 3426 A_FA, which is about 60% of the z-datapath previously employed.
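The area figures above can be retraced with a few lines of arithmetic. The sketch below re-evaluates the quoted expressions for n = 54; the layout ratios and the sub-totals for the δ_r logic and the converter are taken directly from the text rather than re-derived.

```python
import math

A_FA = 1.0                          # full-adder area taken as the unit
A_MUX = A_FA / 1.93                 # layout-based ratios quoted in the text
A_REG = A_FA / 1.61

def z_datapath_area(n):
    # previously used z-datapath: ~1.7 * n * (n + log2(n) + 2) * A_FA
    return 1.7 * n * (n + math.log2(n) + 2) * A_FA

def rom_and_decoder_area(n):
    entries = 2 ** (math.ceil(n / 5) - 2)
    rom = 3 * n * entries * A_FA / 50        # stores theta_ref, delta_k, delta_(k-1)
    dec = 0.83 * entries * A_FA
    return rom + dec

n = 54
print(round(z_datapath_area(n)))             # ~5700 A_FA, as quoted
print(round(rom_and_decoder_area(n)))        # ~2080 A_FA, as quoted
# Adding the delta_r logic (~540), the sign-digit adders (2n = 108) and the
# converter (~696) quoted in the text gives ~3426 A_FA, i.e. ~60 % of A_z.
new_total = rom_and_decoder_area(n) + 540 + 2 * n + 696
print(round(new_total), round(new_total / z_datapath_area(n), 2))
```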
Evaluation of the x/y datapath
In the first n/2 micro-rotations, the critical path of the x/y rotator part consists of a multiplexer and a 4:2 compressor, which together have a critical path of 2 full-adder delays. The last n/2 micro-rotations can be performed using only n/4 iterations, since Booth encoding can be employed. However, the selection of the multiple of the shifted x/y components requires slightly more time, resulting in a delay of about one full-adder delay, while the delay of the 4:2 compressor remains 1.5 full-adder delays. Hence, the critical path of the entire x/y rotator part consists of (n/2)·2 t_FA + (n/4)·2.5 t_FA = 1.625n·t_FA. Note that the direction of the first iteration is already known; hence, the first iteration is not in the critical path. Therefore, the critical path of the entire x/y rotator part corresponds to (1.625n − 2) t_FA.
As an example, for a word length of n = 16 bits, the x/y datapath delay and the entire delay of the CORDIC algorithm correspond to 24 and 32.5 full-adder delays, respectively.
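These latency figures follow directly from the expressions derived above. The small bookkeeping sketch below approximately reproduces the quoted n = 16 numbers (the total comes out at ≈ 32.6 t_FA versus the quoted 32.5, i.e., within rounding).

```python
import math

def xy_datapath_delay(n):
    # First n/2 micro-rotations: 2 t_FA each; last n/2 done as n/4 Booth
    # iterations of 2.5 t_FA each; the first iteration is off the critical path.
    return n / 2 * 2 + n / 4 * 2.5 - 2            # = 1.625 n - 2   (in t_FA)

def initial_delay(n):
    # Delay before the first direction bit is available, from the text:
    # (1/12 * n + log2(n) + 3.25) t_FA
    return n / 12 + math.log2(n) + 3.25

n = 16
print(xy_datapath_delay(n))                        # 24.0 t_FA
print(xy_datapath_delay(n) + initial_delay(n))     # ~32.6 t_FA (quoted as 32.5)
```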
Scale factor compensation
Since the scale factor is constant, the x and y values can already be scaled while the rotation directions are being computed. The scaling requires an adder of word length (n + log₂(n)) bits. Using a binary tree adder, this results in a delay of log₂(n + log₂(n))·t_Mux. For the scale factor, a CSD (canonic signed digit) representation can be used, leading to at most n/3 nonzero digits. Applying a Wallace tree for the partial-product reduction, the total delay of the scaling results in (0.5 log₂(n + log₂(n)) + log_1.5(n/3))·t_FA < (1/12·n + log₂(n) + 3.25)·t_FA = t_initial. Hence, the scaling of the x and y coordinates does not affect the total latency of the novel algorithm.
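CSD recoding of the constant scale factor is what keeps the number of partial products small. The sketch below recodes an n-bit fixed-point version of the circular CORDIC scale factor into canonic signed digits; the n/3 figure quoted above is the typical density, so the count printed for one particular constant may differ slightly.

```python
import math

def csd(x_int):
    """Canonic signed-digit (non-adjacent form) recoding of a non-negative integer, LSB first."""
    digits = []
    while x_int:
        if x_int & 1:
            d = 2 - (x_int & 3)          # +1 for ...01, -1 for ...11
            x_int -= d
        else:
            d = 0
        digits.append(d)
        x_int >>= 1
    return digits

n = 16
K = 1.0
for i in range(n):                        # circular CORDIC scale factor prod cos(arctan(2^-i))
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
k_fixed = round(K * 2 ** n)               # n-bit fixed-point version of K
digits = csd(k_fixed)

# CSD properties: no two adjacent nonzero digits, and the value is preserved.
assert all(not (digits[i] and digits[i + 1]) for i in range(len(digits) - 1))
assert sum(d * 2 ** i for i, d in enumerate(digits)) == k_fixed
# the text quotes ~n/3 nonzero digits as the typical density
print(sum(1 for d in digits if d), "nonzero CSD digits for K =", round(K, 6))
```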
OVERVIEW OF PREVIOUSLY REPORTED CORDIC ALGORITHMS
The delay of every iteration can be decomposed into two different time delays, t_d,σ and t_d,xy, where t_d,σ corresponds to the time delay to predict the new rotation direction, while t_d,xy corresponds to the time delay of the multiplexer/add structure of the x/y datapath. Various implementations have been proposed to obtain a speedup of the CORDIC algorithm. Improvements have especially been made in the reduction of t_d,σ.
In [17], the angle remainder is decomposed at iterations following the recurrence k → 3k + 1. From the given angle θ, the first four rotation directions can be determined immediately. After performing the corresponding addition/subtraction of the terms σ_i·α_i from the input angle θ using CSA arithmetic, a fast binary tree adder computes the nonredundant result z_4. Bits 4 to 13 of z_4 deliver the rotation directions σ_4 to σ_13, which are used to perform the rotations in the x/y datapath and to compute the next angle remainder z_14. Hence, a low-latency CORDIC algorithm is obtained. However, this significant reduction in latency is achieved at the cost of an irregular design. Furthermore, it is difficult to perform a π/2 initial rotation or the rotation of index i = 0 for circular coordinates, as it would force a conversion from redundant to conventional arithmetic for the z coordinate just after the first micro-rotation, which is costly in time and area. Hence, this parallel and nonpipelined architecture only converges in the range [−1, 1]. The overall latency of this architecture corresponds to about 2n + log_3(n) + log₂(n) full-adder delays.
In [18], a direct correlation between the z remainder after n/3 rotations and the remaining rotation directions has been shown. Hence, no further examination of the directions of the micro-rotations has to be performed, leading to a considerable reduction in latency. However, in the first n/3 iterations a conventional method has to be employed.
In [19], the directions of the micro-rotations are recoded using an offset binary coding (OBC) [20]. The obtained correlation is approximately piecewise linear, since small elementary angles can be approximated by α(i) = arctan(2^−i) ≈ s·2^(n−i−2), where s is the slope of the linear segment. This is valid for i ≥ m, where m is an integer that makes the approximation tolerable (normally m = ⌈n/3⌉). Hence, a correlation between the rotation angle and the rotation directions can be obtained; after some arithmetic manipulation, it requires a multiplication by the inverse of the slope s. This multiplication can be simplified to two stages of addition for an operand word length of 9 bits. However, in most digital signal processing applications, the operands have a word length of up to 16 bits. Hence, for those applications, the presented method requires more stages of addition to realize the multiplication, resulting in a more complex implementation and an increase in delay.
In [21], a double rotation method is introduced which compensates for the scale factor while performing the regular x/y rotations. However, due to the double rotation nature of this method, t_d,xy is increased to about twice its original value.
To reduce the latency of the CORDIC operation, [22] proposed an algorithm using online arithmetic. However, this results in a variable scale factor. This drawback is removed in [23]. In every iteration, a significant amount of time is used to examine the most significant three digits to predict σ_i. The employed random logic requires a delay of about 1.5 full-adder delays. Since the x/y datapath consists of a 4:2 compressor, it also requires a delay of 2 full-adders. Hence, the overall iteration delay corresponds to 3.5 full-adder delays. To maintain a constant scale factor, consecutive rotations are required in the first n/2 iterations, where n corresponds to the word length of the operands. For the computation of the last n/2 bits, Booth encoding can be employed, reducing the number of iterations by a factor of 2. However, the selection of the multiple of the shifted x and y operands requires an additional multiplexer delay and increases the overall iteration delay to 4 full-adder delays. Hence, the number of iterations is equivalent to 0.75n, which corresponds to a total latency of about 3n full-adder delays (this does not include the scaling operation and the conversion).
Other implementations like [24] remove the extra rotations by a branching mechanism in case the sign of the remainder cannot be determined (the most significant three digits are zero). Hence, no extra rotations are required, but the required implementation area is doubled. Nevertheless, the most significant three digits (or most significant six bits) still have to be examined for the prediction of the next rotation direction. In [25], the double-step branching CORDIC algorithm is introduced, which performs two rotations in a single step. Nevertheless, this method requires an examination of the most significant six digits to detect the two rotation directions. Since some of the digits can be examined in parallel, the delay increases only to 2 t_FA. The computation time of a double rotation in the x/y datapath is slightly reduced compared to two normal x/y rotations. Hence, the total computation time corresponds to 0.5n·(2 t_FA + 3 t_FA) = 2.5n·t_FA.
In [26], the signs of all micro-rotations are computed serially. However, a speed-up of the sampling rate is achieved by separating the computation of the sign and the magnitude of every z_i or y_i remainder. The sign of every remainder is computed by a pipelined carry-ripple adder (CRA), leading to an initial latency of n full-adders before the first CORDIC rotation can be performed. Nevertheless, after this initial latency, the following signs can be obtained with a delay of only one full-adder each.
Table 4: An overview of the proposed algorithm and other CORDIC implementations.
In comparison to the CORDIC implementations with a constant scale factor, other implementations use a minimally redundant radix-4 or an even higher-radix number representation [12, 13, 14]. By using this number system, the number of iterations can be reduced. However, the prediction of σ_i becomes more complicated, since there are more possible values for σ_i. In addition, the scale factor becomes variable and has to be computed every time, due to the absence of consecutive rotations. An online computation of the scale factor and a parallel scaling of the x and y operands can be achieved. Depending on the use of CSAs or fast carry-propagate adders (CCLA), the number of iterations can be reduced to 2n/3 + 4 and n/2 + 1, respectively. The iteration delay t_d,CSA of the architecture using CSA adders corresponds to the same delay as already described for the last n/2 iterations of the constant-scale-factor architecture using Booth encoding, while the architecture employing the fast CCLA adders requires 1.5·t_d,CSA [14]. Hence, the overall latency of these CORDIC algorithms using a minimally redundant radix-4 digit set corresponds to about 2n full-adder delays. Table 4 provides a delay comparison between the proposed algorithm and other CORDIC implementations. Some of the delays have been taken from [14, 17, 26].
CONCLUSION
This paper presented a CORDIC algorithm for the rotation mode which computes the directions of the required micro-rotations before the actual CORDIC computations start, while maintaining a constant scale factor. This is achieved by using a linear correlation between the rotation angle θ and the corresponding directions of all micro-rotations in the rotation mode. The rotation directions are obtained by adding the rotation angle θ to a constant and to a variable offset which is stored in a ROM. An implementation for high precision is also provided which reduces the size of the required ROM. Hence, neither extra nor double rotations nor a variable scale factor are required. The implementation is suitable for word lengths up to 54 bits while maintaining a reasonable ROM size. | 6,651.4 | 2002-01-01T00:00:00.000 | [
"Computer Science"
] |
Intelligent detection and applied research on diabetic retinopathy based on the residual attention network
This study proposes a high-accuracy (ACC) algorithm to automatically detect diabetic retinopathy (DR) and diabetic macular edema (DME) in retinal fundus images. Three DR datasets were used in this study: EyePACS, Messidor, and IDRid. On the EyePACS dataset, two-class and five-class DR classification experiments were conducted; the Messidor and IDRid datasets were graded for both DR and DME. After preprocessing, enhancement, and normalization, common convolutional neural networks (CNN) were used to obtain classification results. Afterward, an optimized residual attention network (RAN) was introduced, built on the residual attention module and incorporating dilated convolution, so as to improve the experimental results. The focal loss was then added to address the class-imbalance problem. Next, a five-fold cross-validation strategy was introduced to assess and optimize the proposed model, after which the prediction ACC, sensitivity, specificity, area under the receiver operating curve, and Kappa score were assessed. The proposed method RAN achieved 89.2% ACC (95% confidence interval [CI], 0.8782–0.9123) for two-class DR classification (normal and abnormal) on the EyePACS dataset and 89.8% ACC (95% CI, 0.8751–0.9275) for two-class DR classification on the Messidor dataset. The IDRid dataset achieved an ACC of 71.5% (95% CI, 0.6941–0.7423) for two-class DR classification. RAN improves upon the results of commonly used CNN methods on the same datasets. Therefore, the classification and diagnosis of DR may be improved by adopting the proposed method.
| INTRODUCTION
Diabetic retinopathy (DR) is a late manifestation of diabetes mellitus and is one of the most severe complications of diabetic microangiopathy. If it is not detected and treated early, DR can cause irreversible visual impairment or even blindness in severe cases. 1 Fundus imaging is a vital method of inspection for the early detection of DR lesions. The corresponding fundus changes and grading standards used in DR are shown in Figures 1 and 2. As ophthalmologists in less developed areas are lacking, patients suffering from diabetes are unable to receive an early diagnosis and treatment for DR. 2 Therefore, computerized screening technology based on fundus images is of great significance in delaying the progression of DR.
High-quality color retina images can assist doctors in investigating and diagnosing retinopathy. However, the diagnosis of DR requires a clinically experienced ophthalmologist, and DR screening is not performed in most grass-roots areas, which has significantly increased the risk of blindness due to diabetes. 3 Therefore, adopting computer-assisted remote diagnostic technology in fundus imaging can effectively result in reductions in visual impairment among diabetic patients due to insufficient medical resources.
At present, most ophthalmology image analysis work focuses on DR classification, vessel segmentation, and detection of retinal structures. [4][5][6] Pratt et al. 7 developed a network with convolutional neural network (CNN) architecture and data augmentation that can identify intricate features involved in the classification of DR. They then trained the model on the EyePACS dataset, achieving a sensitivity (SE) of 95% and accuracy (ACC) of 75% for 5000 validation images. Rahim et al. 8 presented an automatic detection method for DR and maculopathy in fundus images by employing fuzzy image processing techniques; a combination of fuzzy image processing techniques, the circular Hough transform, and several feature extraction methods were implemented. Eftekhari et al. 6 adopted a two-step process with two online datasets to train CNNs, which solved the imbalance problem and reduced the training time while performing accurate detections. Seth et al. 9 used CNNs and linear support vector machines to train the network on the benchmark EyePACS dataset, which demonstrated that the model had high SE and specificity (SP) in detecting DR. Dutta et al. 10 proposed an automatic knowledge model to identify critical prerequisites for DR detection. After testing the model using a CPU-trained neural network, three types of back-propagation neural networks were used. Accordingly, the model was able to quantify the characteristics of different types of blood vessels, exudates, bleeding, and microaneurysms. Adem et al. 11 13 proposed a hierarchically coarse-to-fine network (CF-DRNet) to classify the five stages of DR severity using CNNs, which showed that CF-DRNet outperformed various state-of-the-art methods on the publicly available IDRiD and EyePACS datasets. Arenas-Cavalli et al. 14 evaluated the automated DR screening tool DART, for which the receiver operating curve (ROC) analysis indicated a SE of 94.6%, SP of 74.3%, and AUC of 0.915. Furthermore, Dai et al. 15 developed a system called DeepDR, which was able to detect early to late stages of DR; the grading of DR as mild, moderate, severe, and proliferative achieved an AUC of 0.943, 0.955, 0.960, and 0.972, respectively. This article factors in the needs of both ophthalmologists and diabetic patients by proposing a DL algorithm, RAN, to improve the performance of DR diagnosis (Tables 1-3).
| MATERIAL AND METHODS
In this study, experiments were conducted on the EyePACS, Messidor, and IDRid datasets, as shown in Figure 3. Based on ResNet, 27 the algorithm RAN proposed in this paper integrated the attention mechanism and added the attention guided module (AGM) and dilated convolution.
| Database
This study utilized three public datasets. The EyePACS 28 training set contained 35,125 fundus images released by the California Medical Foundation from EyePACS users, including 25,809 level-0 images (74%), 2,443 level-1 (7%), 5,292 level-2 (15%), 873 level-3 (2%), and 708 level-4 (2%). Due to the excessive number of normal fundus images in this dataset, 40% of the normal images were selected for training and testing during the two-class experiments, while only 20% of the normal images were selected for training and testing during the five-class experiments. The Messidor dataset 29 consisted of 1200 fundus images from three ophthalmology hospitals, 800 of which were obtained following pupil dilation. Each image was marked with a DR lesion grade of 0-3 and a DME lesion grade of 0-2. Table 4 lists the number distribution. The image sizes in the dataset were 1440 × 960, 2240 × 1488, and 2304 × 1536 pixels, in tif format. The IDRid dataset 30 included lesion segmentation, disease classification, and optic disc and fovea detection tasks. In this experiment, only the disease classification data were used, including 413 images in the training set and 103 images in the test set. All images were 4288 × 2848 pixels and in jpg format. Table 5 shows the number and proportion distribution. In these three datasets, 60%, 15%, and 25% of the images in each dataset were randomly selected as the training set, validation set, and test set, respectively.
As the number of abnormal images in the Messidor and IDRid datasets was small, performing two-class classification was more meaningful in terms of clinical application. Evidently, the most prominent feature of these medical image datasets is the imbalance in the data distribution: the number of normal images is much higher than that of abnormal images, and the amount of data decreases as disease severity increases. To address this problem, data preprocessing and optimization of the loss function are most commonly utilized. Commonly used data augmentation methods include translation, rotation, cropping, scaling, noise addition, affine transformation, and so forth. These methods usually do not change the class of the object and are the earliest and most widely used image augmentation techniques. The color of the image can also be changed along four dimensions: brightness, contrast, saturation, and hue (Figure 4).
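With the PyTorch toolchain used later in the paper, the augmentations listed above are typically expressed as a transform pipeline. The snippet below is only an illustrative configuration; the concrete parameter values (rotation range, jitter strengths, and so on) are assumptions, since the paper does not list them.

```python
import torchvision.transforms as T

# Hypothetical parameter values for the augmentations described in the text.
train_transform = T.Compose([
    T.Resize((512, 512)),                                  # images are rescaled to 512 x 512
    T.RandomHorizontalFlip(),
    T.RandomRotation(degrees=15),                          # rotation
    T.RandomAffine(degrees=0, translate=(0.05, 0.05),
                   scale=(0.9, 1.1)),                      # translation / scaling
    T.ColorJitter(brightness=0.2, contrast=0.2,
                  saturation=0.2, hue=0.05),               # brightness/contrast/saturation/hue
    T.ToTensor(),
])
```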
In order to reduce the differences between images in the dataset, before sending the images to the network for training, each image was normalized channel-wise as Î(i, j, k) = (I(i, j, k) − m_k)/σ_k, where Î is the normalized image, (i, j) are the pixel coordinates, k indexes the three channels of the image (blue, green, and red), m_k represents the average pixel value of the kth channel, and σ_k represents the standard deviation of the kth channel pixel values.
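A minimal NumPy sketch of this per-channel standardization (assuming an H × W × 3 image array; the small epsilon guarding against a zero standard deviation is my addition):

```python
import numpy as np

def normalize_per_channel(img):
    """Standardize each colour channel: (I - mean_k) / std_k, as in the text."""
    img = img.astype(np.float32)
    mean = img.reshape(-1, 3).mean(axis=0)        # m_k per channel
    std = img.reshape(-1, 3).std(axis=0) + 1e-8   # sigma_k per channel (avoid division by zero)
    return (img - mean) / std
```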
The loss function in a neural network is used to measure the gap between the predicted value obtained by the model and the actual value of the data, and it also serves as a standard for measuring the generalization ability of the model. The smaller the loss, the better the performance of the model, and the loss functions used by different models generally differ. The most commonly used loss function is cross-entropy. 31 Since the imbalance problem generally exists in DR datasets, focal loss 32 was introduced in this experiment; it is a modification of cross-entropy that multiplies the original cross-entropy by a modulating factor which weakens the contribution of easily classified samples to the model training.
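A compact PyTorch sketch of such a focal loss for multi-class classification is given below; the focusing parameter γ = 2 is the value commonly used in the original focal loss paper and is an assumption here, since this article does not state its setting.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: cross-entropy scaled by (1 - p_t)**gamma."""
    log_p = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_p, targets, weight=alpha, reduction="none")   # per-sample CE
    p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1).exp()      # probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()
```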
| Residual attention network
The core of RAN (Figure 5) is the attention mechanism, which can enhance the information of the lesion area and suppress other background information, thereby improving the ACC of the model in DR classification. 33 As the module stacking becomes deeper, different levels of attention information can be extracted from top to bottom, and the attention perception of the different modules adapts accordingly. 34 The added attention residual learning structure makes it possible to train very deep residual attention networks, which may also be easily extended to hundreds of layers. By stacking this residual attention structure, the advantages of residual learning and the attention mechanism can be thoroughly combined to achieve better results. Each attention module is divided into two branches 33 : the soft mask branch and the trunk branch (Figure 6). The attention mechanism can be written as H(x) = (1 + M(x)) · T(x), where T represents the trunk (main) branch and M represents the mask branch. The mask branch uses several max-pooling operations to increase the receptive field; after reaching the minimum resolution, a symmetric network structure is used to upsample the features back to the original size. 33 As shown in Figure 7, an attention guided module (AGM) was also added to RAN, composed of an adaptive average pooling layer followed by two 1 × 1 convolution layers with different activation functions. The specific operation is as follows. First, the input feature map passes through an adaptive average pooling layer, producing an output of dimension R^(1×1×M). Next, after a 1 × 1 convolution layer with the rectified linear unit (ReLU) activation function, the output dimension is R^(1×1×M/r), i.e., the number of channels is reduced from M to M/r. Then, after a 1 × 1 convolution layer with a sigmoid activation function, the number of channels is expanded from M/r back to M, and a channel descriptor of dimension R^(1×1×M) is obtained, which is used to recalibrate the original feature map. The hyper-parameter r controls the computational cost of the AGM and was set to 16. Finally, by multiplying the obtained channel descriptor with the input feature map, the recalibration of the feature map is completed, and the importance of each channel is re-weighted by integrating global information. The importance of different channels varies; important information is highlighted while background information is suppressed.
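Following the description above, the AGM behaves essentially like a squeeze-and-excitation block. A minimal PyTorch sketch is shown below; the class and variable names are mine, and only the structure (adaptive average pooling, two 1 × 1 convolutions with ReLU and sigmoid, channel-wise multiplication, r = 16) is taken from the text.

```python
import torch
import torch.nn as nn

class AttentionGuidedModule(nn.Module):
    """Channel-recalibration block as described in the text (reduction ratio r = 16)."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                   # -> 1 x 1 x M
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // r, kernel_size=1),  # M -> M/r
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, kernel_size=1),  # M/r -> M
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(self.pool(x))                             # channel descriptor
        return x * w                                          # recalibrated feature map
```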
| Dilated convolution module
In order to expand the receptive field and capture multi-scale contextual information, this article also adopted a dilated convolution module. 35 As shown in Figure 8, a dilated convolution is equivalent to inserting d − 1 gaps between adjacent convolution kernel parameters. When the dilation rate d = 1, the dilated convolution degenerates into a standard convolution; the larger d is, the larger the receptive field of the convolution kernel. In this article, a 1 × 1 standard convolution, 3 × 3 dilated convolutions with dilation rates d = 2, d = 3, and d = 5, and global average pooling were used to extract features, so that five levels of image information were extracted. The specific process for using global average pooling to extract features was to first use an adaptive average pooling layer to generate a 1 × 1 × 512 feature map; then a 1 × 1 convolution was used to change the number of channels to 256, after which a bilinear interpolation algorithm was adopted to expand its size to 14 × 14. The extracted feature maps of the five levels were then concatenated with the original feature map to obtain a 14 × 14 × 1792 feature map. Finally, a 1 × 1 convolution was applied to change the number of channels to 512. After each convolution operation, a batch normalization layer and a ReLU activation function were applied. Before each dilated convolution, the feature map was padded in order to ensure that its resolution did not change.
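A PyTorch sketch of this dilated convolution module, in the spirit of ASPP, is given below. The branch width of 256 channels is inferred from the quoted 14 × 14 × 1792 concatenated map (512 + 5 × 256 = 1792); everything else follows the description, but the module is a reconstruction, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(in_c, out_c, k, dilation=1):
    pad = dilation * (k - 1) // 2                 # keeps the 14 x 14 resolution
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, k, padding=pad, dilation=dilation, bias=False),
        nn.BatchNorm2d(out_c), nn.ReLU(inplace=True))

class DilatedConvModule(nn.Module):
    """ASPP-style block following the description in the text (branch width assumed 256)."""
    def __init__(self, in_c=512, branch_c=256, out_c=512):
        super().__init__()
        self.b1 = conv_bn_relu(in_c, branch_c, 1)                 # 1x1 standard conv
        self.b2 = conv_bn_relu(in_c, branch_c, 3, dilation=2)     # 3x3, d = 2
        self.b3 = conv_bn_relu(in_c, branch_c, 3, dilation=3)     # 3x3, d = 3
        self.b4 = conv_bn_relu(in_c, branch_c, 3, dilation=5)     # 3x3, d = 5
        self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  conv_bn_relu(in_c, branch_c, 1))  # global pooling branch
        self.project = conv_bn_relu(in_c + 5 * branch_c, out_c, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = F.interpolate(self.pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        feats = torch.cat([x, self.b1(x), self.b2(x), self.b3(x), self.b4(x), pooled], dim=1)
        return self.project(feats)        # 1792 channels -> 512 channels at 14 x 14
```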
| Transfer learning
Transfer learning 36 is a machine learning method in which a model obtained from training on one task is transplanted to the training of other tasks. In this experiment, EfficientNet weights pretrained on ImageNet were loaded, and the three DR datasets were used to train the proposed model to obtain better results. In addition, in order to improve performance and alleviate the issue of the small amount of data in the Messidor and IDRid datasets, the weights of the RAN model trained on the EyePACS dataset were transferred to the Messidor and IDRid datasets.
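In PyTorch, this kind of weight transfer usually comes down to loading a pretrained state dictionary with strict=False so that layers whose shapes changed (for example, the classification head) are simply skipped. The sketch below uses a torchvision ResNet-50 as a stand-in backbone and a hypothetical checkpoint name, since the RAN implementation itself is not public.

```python
import torch
import torchvision

# Stand-in backbone loaded with ImageNet-pretrained weights.
model = torchvision.models.resnet50(
    weights=torchvision.models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 2)     # 2-class DR head

# Hypothetical checkpoint: weights previously fine-tuned on EyePACS, reused for
# Messidor/IDRid. strict=False keeps matching layers and skips the new head.
state = torch.load("ran_eyepacs.pth", map_location="cpu")
model.load_state_dict(state, strict=False)
```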
| Implementation details
The PyTorch framework and the OpenCV image processing library were applied in this experiment, which was implemented on the Ubuntu 16.04 operating system with a GeForce RTX 2080Ti graphics card. The initial learning rate of the Adam optimizer was 0.001, the batch size was 16 in the training phase and 4 in the testing phase, and a total of 60 epochs were trained. In addition, each image was first scaled to 512 × 512 pixels and then sent to the network for training and testing. The test set was evaluated after every epoch of training, and only the models and results with the highest SE and ACC were output.
In this experiment, the relationship between the model prediction result and the true label of the data was evaluated according to the following criteria: True Positive, False Negative, False Positive, and True Negative. ACC, SE, SP, receiver operating curve, and AUC were also applied in order to evaluate the experimental results.
In clinical settings, a missed diagnosis has a greater adverse effect on patients; hence, the SE in DR classification is more significant. In the DR five-category experiment, the Kappa coefficient was also added as an evaluation criterion (Table 6).
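The evaluation criteria above reduce to simple counts over the confusion matrix. A small sketch (using scikit-learn for AUC and Kappa) is given below; the quadratic weighting for Kappa is the customary choice for ordinal five-level DR grading and is an assumption, as the paper does not specify the weighting.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

def binary_metrics(y_true, y_pred, y_score):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn)                  # sensitivity (recall on the diseased class)
    sp = tn / (tn + fp)                  # specificity
    auc = roc_auc_score(y_true, y_score)
    return acc, se, sp, auc

# For the five-class grading task, the Kappa score can be computed as
# cohen_kappa_score(y_true_5class, y_pred_5class, weights="quadratic")
```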
| RESULTS
In this paper, experiments were conducted on the three DR datasets (EyePACS, Messidor, and IDRid) with both commonly used DL methods and the proposed RAN. Here, we utilized cross-entropy and focal loss to conduct classification and diagnosis experiments for DR and DME, which were then compared and analyzed, as shown in Tables 7-12. On the EyePACS dataset, the SP, SE, and AUC of RAN for two-class DR classification reached 0.894 (95% CI, 0.8646-0.9108), 0.930 (95% CI, 0.9047-0.9486), and 0.917 (95% CI, 0.8976-0.9287), respectively, while the ACC reached 0.892 (95% CI, 0.8782-0.9123), which was 4.6%, 3.7%, 9.4%, and 5.6% higher than VGG-16, respectively. Moreover, RAN attained an excellent level of ACC in DR classification. The ACC of RAN in five-class DR classification reached 0.815 (95% CI, 0.8024-0.8456), which was 5.3% higher than that of VGG-16. Meanwhile, the Kappa score reached 0.865, which was higher than the 0.829 obtained in the DR classification competition. As seen in Figure 9, due to the imbalance problem in the DR datasets, focal loss was more suitable than cross-entropy as the loss function in each classification task. Accordingly, the ACC was greatly improved.
| DISCUSSION
This paper proposed a classification algorithm, RAN, for DR detection, and the classification experiments were verified on the EyePACS, Messidor, and IDRid datasets. Since the imbalance between data categories leads to overfitting during model training, data augmentation and focal loss were introduced. The image augmentation method used in this experiment brings the amount of data in each DR class to a relatively balanced state, and the focal loss also achieved satisfactory results in alleviating the data imbalance. In order to address the minor differences between DR categories, the original retinal images were also normalized to highlight the bleeding and exudation in the fundus images. In addition, the attention mechanism, which focuses on fine-grained image features during classification, was added to the network so that it can better distinguish the differences between lesion types. Furthermore, dilated convolution was added to the network to increase the receptive field. The above results demonstrate the strong competitiveness of CNNs in clinical diagnostic applications, and RAN achieved better performance in DR detection. In short, using the proposed RAN can enhance the ACC of DR classification and diagnosis for most fundus images. Through this combination of ResNet, the attention mechanism, and dilated convolution, the ACC of DR classification can be improved.
However, the rise in ACC of the proposed method is not significant enough. In our future studies, we will integrate additional information related to DR, such as age, blood glucose, blood pressure, intraocular pressure, and past history, into the DR classification model to effectively improve the diagnosis results. Moreover, multi-task experiments will be conducted to mutually promote the improvement of the experimental results. How to integrate the results of exudates, bleeding, microaneurysms detection, and blood vessel segmentation into the DR classification model will also be the focus of our subsequent works. Algorithm engineers and clinicians both aspire to build a robust and accurate DL model for DR detection, and this desire cannot be achieved without the joint efforts and cooperation of both parties. | 4,218.2 | 2022-04-18T00:00:00.000 | [
"Computer Science"
] |
A handy-assay procedure to measure simultaneously the polyphenol content and the antioxidant capacity in teas using the Folin Ciocalteu reagent
Cícera Pimenta Marcelino, Faculdade de Medicina do ABC, Santo André, São Paulo, Brasil.
Lucas Belini Oliveira, Faculdade de Medicina do ABC, Santo André, São Paulo, Brasil.
Waila Evelyn Lima Santana (https://orcid.org/0000-0002-4175-9297), Faculdade de Medicina do ABC, Santo André, São Paulo, Brasil.
Horacio Dorigan Moya (http://orcid.org/0000-0003-0888-291X), Faculdade de Medicina do ABC, Santo André, São Paulo, Brasil.
The Folin Ciocalteu reagent (FCR) is commonly used for the quantification of the total polyphenol content (TPC) in plants and their derived materials. Tea is one of the most consumed beverages in the world and widely used in developing countries in folk medicine. Tea contains several compounds (especially polyphenols) which are believed to have antioxidant capacity. In the present study, a handy-assay procedure to measure simultaneously the TPC and the antioxidant capacity in teas using the FCR was developed. A spectrophotometric procedure using the FCR was undertaken to evaluate simultaneously the TPC and the antioxidant capacity, expressed as Trolox equivalent antioxidant capacity (TEAC), of 32 samples of 7 different types of teas. For comparison purposes, the antioxidant activity of the same samples was determined using the CUPRAC (cupric reducing antioxidant capacity) reagent. Finally, the antioxidant capacity in a cup of tea (200 mL) obtained with the FCR and expressed in ascorbic acid equivalents (AAEC) was calculated. The TEAC values obtained with FCR presented a good positive correlation with the CUPRAC method (r² = 0.853), suggesting that both reagents can be used to quantify the antioxidant capacity. The TEAC and CUPRAC values also showed good agreement with the TPC (r² = 0.969 and r² = 0.809, respectively), indicating that the antioxidant capacity should be due to the presence of polyphenols. The results obtained and the calculation strategy used may be an easier way to present the antioxidant capacity values to the final consumer, who is most commonly unfamiliar with this important concept.
INTRODUCTION
The Folin Ciocalteu reagent (FCR), a mixture of phosphomolybdate and phosphotungstate, is widely used in alkaline solution for the in vitro spectrophotometric determination of various reducing compounds. In these analytical procedures (generally called FC methods), a solution containing a certain reducing agent (the analyte) is mixed with an amount of FCR, and a sodium carbonate solution is then added to alkalize the mixture. After some waiting time (usually 30 minutes) at room temperature (23 °C), the solution acquires a quite stable blue color, which can be achieved more rapidly under heating at 40 °C, although the loss of color is then more pronounced over time (ORTHOFER; LAMUELA-RAVENTOS, 1999). The absorbance values recorded between 715 and 760 nm are then related to the analyte (reducing agent) concentration.
Historically, the FCR was first proposed by Otto Folin and Vintila Ciocalteu (1927) for the determination of tyrosine (which contains a single phenol group) and tryptophan (which has no phenolic groups) in protein samples, and was then used by other authors in samples of several foodstuffs (MCFARLANE; FULMER, 1930). In the same decade, studies on the determination of some amino acids (tyrosine, tryptophan, cystine and histidine) in lake water samples and on the quantification of phenol in pasteurized milk were carried out (KAY; GRAHAM, 1935; KUISEL, 1935).
Latterly, Schild and Enders (1936) mentioned that the FCR was not specific for tryptophan determination since it was reduced by a large number of substances which promote the appearance of the same blue coloration.
Those findings led to further studies using the FCR reagent in the quantification of other reducing species.
In fact, some years later Balls and Arana (1941) quantified the vanillin content in vanilla bean extracts. Their results showed that different phenolic substances (other than vanillin and coumarin) contributed to the flavor and aroma of those products, and those results were then reported as "phenol values". Heintze (1964) pointed out that polyphenols present in plants and foods could be quantified with the so-called "Folin-Ciocalteu method" using chlorogenic acid as a polyphenol standard.
One year later, Singleton and Rossi (1965) investigated the spectral properties of various phenolic compounds using the Folin-Denis reagent and the FCR. Their conclusions suggested that the latter would be more appropriate for the quantification of tannins in spirits and wine samples.
As a matter of fact, in the late 1960s the total phenolic composition of red wine samples was measured using the FCR, with much better results when compared with the then extensively used "permanganate index" (RIBEREAU-GAYON; SARTORE, 1970).
Afterwards, Peri and Pompei (1971) using a procedure with several precipitation steps quantified different phenolic groups (condensed tannins, hydrolysable tannin, non-tannin flavans and simple phenolics) in vegetable extracts using the FCR.
Since then, the FCR has been used for the quantification of polyphenols in several samples of vegetable-based matters such as aqueous extracts of medicinal plants, wines, beers, fruit juices and teas (ATOUI et al., 2005;DU TOIT;VOLSTEEDT;APOSTOLIDES, 2001;GORJANOVIĆ et al., 2012;IVANOVA et al., 2005;LEE et al., 2011;NAKAMURA;MOYA, 2012;PEKAL et al., 2012;WU et al., 2015). As the identification of individual phenolic compounds is not possible with the FCR without other analytical steps, it is more usual to quantify the total polyphenol content (TPC) in those samples expressing the obtained value in equivalents of a standard phenolic compound (e.g., tannic, gallic, pyrogallic, chlorogenic or ferulic acids and catechin) (ROBBINS, 2003;ORTHOFER;LAMUELA-RAVENTOS, 1999).
Tea is one of the most consumed beverages in the world and widely used in folk medicine especially in developing countries where traditional medicine is not always easily accessible.
The natural constituents present in the tea leaves are considered to be responsible for bringing various benefits for human health. Moderate and regular ingestion of teas has been linked to reduced levels of cholesterol, blood pressure, reduction of the risk of coronary heart disease and even certain types of cancer (AHMED-BELKACEM et al., 2005;FUJIKI;SUGANUMA, 2012;JUNG et al., 2008;YANG et al., 2011;WANG et al., 2000;WENZEL et al., 2000).
A retrospective study carried out with 13,842 individuals in Taiwan showed that the daily consumption of tea in controlled amounts reduces the risk of developing kidney stones, thus encouraging tea consumption in that country (CHEN et al., 2018). In a study with 20,643 participants in China, Li and colleagues found that drinking tea was beneficial to women's bone health (LI et al., 2019).
A recent review evaluated the relationship of green tea consumption with some types of cancer and with cardiometabolic diseases. For endometrial, lung, oral, ovarian, and non-Hodgkin's lymphoma cancers, positive results were observed, while for other cancer types the results were null or inconclusive. Although this review does not show positive results for all types of cancer, it mentions that tea intake can be considered beneficial to human health (ABE; INOUE, 2021).
In fact, infusions of Camellia sinensis L. Kuntze, for instance, presented antibacterial activity and other health benefits that were associated with the high content of polyphenols and, consequently, the antioxidant capacity (CHAN et al., 2011).
In reality, it is well known that, due to their antioxidant properties, polyphenols can interrupt some chain reactions caused by reactive oxygen species, such as O2•−, HO• and ROO•, and by reactive nitrogen species, such as NO• and ONOO− (ALIPÁZAGA; MOYA; COICHEV, 2021; DE SOUZA; MOYA, 2014), and thus protect the human body against the so-called oxidative stress (HUSSAIN et al., 2016).
Hence, the interest in the quantification of polyphenols and of the antioxidant capacity in teas has increased over the years, since polyphenols are an important exogenous source of antioxidants. However, the antioxidant compounds present in teas (mainly polyphenols) responsible for the antioxidant capacity are chemically different from each other, which effectively hinders their identification and individual determination in routine analyses. In this context, simple methods that can simultaneously quantify the total polyphenolic content and the total antioxidant capacity are definitely wanted and most welcome (TABART et al., 2009).
In the present work, the FCR procedure was used to evaluate simultaneously the TPC and the total antioxidant capacity in 32 samples of 7 different types of tea herbs (Baccharis genistelloides, Camellia sinensis L. Kuntze, Cymbopogon citratus Stapf, Ilex paraguariensis St. Hil., Matricaria recutita L., Mentha piperita L., Peumus boldus) from 11 brands commercially available in the local market (Santo André city, state of São Paulo).
For comparison purposes, the total antioxidant capacity values obtained with the FCR were compared with values obtained with the cupric reducing antioxidant capacity (CUPRAC reagent) method, which is based on reduction of Cu(II) to Cu(I) in a solution containing neocuproine (APAK et al., 2004).
The results presented in this study allow inferring that the FCR procedure and the calculation strategy adopted here can help in the choice of which tea to be consumed or even to be used as a routine trial in quality control.
EQUIPMENTS
Absorbance measurements were performed on the HPUV 8453 (Agilent Technologies, USA) spectrophotometer using a glass cuvette (1.00 cm optical path).
REAGENTS AND SOLUTIONS
Reverse osmosis water (Quimis Q842-210, Brazil) was used to prepare all solutions, except when another solvent is indicated.
PREPARATION OF TEA SAMPLES
In this study only tea bags (different herbs of various brands) purchased from local marketplaces were included. All samples were dry and within the expiration date established by the manufacturer.
Extraction of the water-soluble compounds from the dried material was performed using the same procedure described in the Brazilian Pharmacopoeia for the preparation of aqueous extracts of medicinal plants (Brazilian Pharmacopoeia, 2010). Briefly, 0.300 g of dry material was transferred to a 100 mL beaker containing 50.0 mL of water, which was kept in a water bath (30 min; 65-70 °C). After cooling, the mixture was transferred to a 100.0 mL volumetric flask and made up to volume with water. This mixture was then filtered through quantitative filter paper (Nalgon, 3552, Germany) and, if necessary, diluted before analysis.
QUANTIFICATION OF THE TOTAL POLYPHENOL CONTENT
A calibration curve using gallic acid (GA) was obtained by mixing aliquots (100.0-400.0 μL) of a 0.094 mg mL⁻¹ (5.0×10⁻⁴ mol L⁻¹) GA standard solution with 200 μL of FCR in a 5.0 mL volumetric flask, completed with 10% Na2CO3 solution, providing final GA concentrations from (1.9-7.5)×10⁻³ mg mL⁻¹ ((1.0-4.0)×10⁻⁵ mol L⁻¹). From the calibration curve (A715nm vs CGA, CGA being the concentration of GA in mg mL⁻¹), the linear equation A715nm = a + b × CGA was obtained, where a and b are the values of the linear (intercept) and angular (slope) coefficients, respectively.
The standard multiple addition method was used in all analyses as follows: aliquots of freshly prepared tea samples ranging from 150-1000 μL (depending on the type of tea) were transferred to 5.0 mL volumetric flasks containing 200 μL of the FCR, which were filled up with 10% Na2CO3 solution. In four out of five volumetric flasks, aliquots (100-400 μL) of the 0.094 mg mL⁻¹ (5.0×10⁻⁴ mol L⁻¹) GA solution were added and the volume was completed with the same 10% Na2CO3 solution (HARRIS, 2005; MARINO et al., 2009; SANTO; MOYA, 2013).
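In the standard multiple addition method described above, the analyte concentration in the flask is obtained from the intercept-to-slope ratio of the regression of absorbance against added standard. A minimal sketch with hypothetical absorbance readings is shown below; the added-GA concentrations correspond to the 0 and 100-400 μL aliquots of the 0.094 mg mL⁻¹ standard in the 5.0 mL flask.

```python
import numpy as np

# Hypothetical readings for one flask series: the first point is the un-spiked
# tea aliquot, the others received increasing additions of the GA standard.
added_ga = np.array([0.0, 1.9e-3, 3.8e-3, 5.6e-3, 7.5e-3])   # mg/mL added in the 5.0 mL flask
a715 = np.array([0.210, 0.305, 0.398, 0.489, 0.585])          # hypothetical absorbances

b, a = np.polyfit(added_ga, a715, 1)     # slope b and intercept a of A = a + b * C_added
c_sample_in_flask = a / b                 # analyte concentration in the flask (mg/mL as GA)
print(round(c_sample_in_flask, 4))
```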
In both curves (calibration and multiple standard additions with tea samples), the absorbance measurements were recorded at 715 nm (A715nm) after 30 minutes, using water as the reference solution (blank). All measurements were made in triplicate. The TPC values were expressed in mg GA/g dry material (DM), as shown in Table 1, and can be calculated more easily using Equation 1, where a and b are the linear and angular coefficient values of the linear equation of the standard multiple addition curve, respectively, fd is the dilution factor (see Preparation of tea samples) and V is the volume (mL) of the tea sample. 313500 is a numerical constant that includes the volumes of 100.0 mL (see Preparation of tea samples) and 5.0 mL (see Quantification of the total polyphenol content), the molar mass of gallic acid (188.13 g mol⁻¹), the mass of dry material (0.300 g) and the conversion of g to mg (1000). TPC (mg GA/g DM) = (a × fd × 313500)/(b × V) (Equation 1)
DETERMINATION OF TOTAL ANTIOXIDANT CAPACITY IN TROLOX® EQUIVALENTS
A calibration curve with Trolox® was obtained as performed with GA and can be described by the linear equation A715nm = a + b × CTrolox® (where CTrolox® is in mg mL⁻¹). Interpolating the A715nm value obtained for the tea sample with FCR (without GA addition) into the above equation, a corresponding concentration in Trolox® (mg mL⁻¹) is obtained. Considering the tea aliquot used and the dilution required, the mass of Trolox® in 100 mL of tea is found. The results of the total antioxidant capacity in Trolox® equivalents (TEAC) were expressed in µmol Trolox®/g DM (Table 1) and can also be found using Equation 2: TEAC (µmol Trolox®/g DM) = ((A715nm − a) × fd × 6659)/(b × V) (Equation 2), in which A715nm is the absorbance value at 715 nm of the diluted tea obtained with FCR, a and b are the linear and angular coefficients of the calibration curve with Trolox®, respectively, fd is the dilution factor and V is the volume (mL) of the tea sample. 6659 is a numerical constant that incorporates the volumes of 100.0 mL (see Preparation of tea samples) and 5.0 mL (see Quantification of the total polyphenol content), the molar mass of Trolox® (250.29 g mol⁻¹), the mass of dry material (0.300 g) and the conversion of g to mg (1000).
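A small helper implementing Equation 2 as written above; the constant 6659 combines the 5.0 mL flask, the 100 mL extract, the 0.300 g of dry material and the molar mass of Trolox® (250.29 g mol⁻¹).

```python
def teac_umol_per_g(a715nm, a, b, fd, v_ml):
    """Equation (2): TEAC in umol Trolox per g of dry material.
    a, b: intercept and slope of the Trolox calibration curve (A = a + b*C, C in mg/mL);
    fd: dilution factor; v_ml: tea aliquot volume in mL."""
    return (a715nm - a) * fd * 6659 / (b * v_ml)

# Example with hypothetical values: absorbance 0.45, intercept 0.02, slope 18.5,
# no dilution (fd = 1) and a 0.5 mL tea aliquot.
print(round(teac_umol_per_g(0.45, 0.02, 18.5, 1, 0.5), 1))
```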
CALCULATION OF THE ANTIOXIDANT CAPACITY IN A TEA CUP EXPRESSED IN ASCORBIC ACID EQUIVALENTS
A calibration curve with ascorbic acid (AA) was obtained as performed with Trolox® and can be described by the linear equation A715nm = a + b × CAA (where CAA is in mg mL⁻¹). Interpolating the A715nm value obtained for the tea sample with FCR into the above equation, a corresponding concentration in AA (in mg mL⁻¹) is calculated. Considering the tea aliquot used and the dilution required, the mass of AA in 200 mL (a regular cup of tea) is found. The ascorbic acid equivalent (AAEC) values were expressed in mg AA/200 mL tea (Table 1) and can also be found using Equation 3: AAEC (mg AA/200 mL tea) = ((A715nm − a) × fd × 1000)/(b × V) (Equation 3), in which A715nm is the absorbance value at 715 nm of the tea obtained with FCR, a and b are the linear and angular coefficients of the calibration curve with AA, respectively, fd is the dilution factor, V is the volume (mL) of the tea sample, and 1000 is a numerical constant that incorporates the volume of 5.0 mL (see Quantification of the total polyphenol content) and the regular volume of a cup of tea (200 mL). The CUPRAC procedure (APAK et al., 2004; APAK et al., 2006) was performed as described in previous works with minor modifications, notably the replacement of copper(II) nitrate, chloride, or sulphate by a solution of copper(II) perchlorate previously synthesized and standardized (LEE et al., 2011; NAKAMURA; MOYA, 2012).
RESULTS AND DISCUSSION
Several studies in the literature deal with the determination of the total polyphenol content (TPC) and the antioxidant capacity of Brazilian medicinal plant species (BLAINSKI; LOPES; MELLO, 2013; BRIGHENTE et al., 2007; HABERMANN et al., 2016; MANOEL; SANTANA). Most of them investigated a single part of the plant species (leaves, stem, seeds, etc.). There are not many studies on herbs as they are found by consumers in supermarkets (sachets), so a study of this nature is worthy of investigation. In the present study, only samples commercially available in tea bags, used as infusions, were analyzed.
Commercial teas consumed by the population are normally prepared using one sachet bag (~2.0 g) per cup (150-225 mL) (ATOUI et al., 2005; CHAN et al., 2011; DU TOIT; VOLSTEEDT; APOSTOLIDES, 2001; KARORI et al., 2007; KIM et al., 2011; PEKAL; DROZDZ; PYRZYNSKA, 2012; TEJERO et al., 2014). On the other hand, the infusion preparation procedures used in scientific studies vary greatly (particularly regarding time and temperature), which effectively interferes with the extraction of the compounds. It has been mentioned that five minutes in hot water is a limitation for extracting the antioxidant compounds from tea (CAMPANELLA; BONANNI; TOMASSETTI, 2003).
TOTAL POLYPHENOL CONTENT
The TPC results found in the analyzed tea samples varied from type to type, as would be expected (Table 1). Although establishing a TPC-based classification of each type requires a greater number of samples to avoid misleading conclusions, the tea samples used in the present study can be divided into two main groups: i) Cymbopogon citratus Stapf, Matricaria recutita L., and Baccharis genistelloides (11.5-23.1 mg GA/g DM) and ii) Mentha piperita L., Peumus boldus, Ilex paraguariensis St. Hil., and Camellia sinensis L. Kuntze (56.9-133 mg GA/g DM).
It is noted that the Matricaria recutita L. type has the lowest TPC values (11.5-23.0 mg GA/g DM), while Camellia sinensis L. Kuntze provided the largest TPC values (69.5-133 mg GA/g DM).
Regarding only the Camellia sinensis L. Kuntze group, the TPC ranking is: black (82 ± 17 mg GA/g DM) < green (103 ± 22 mg GA/g DM) < white (128 ± 4 mg GA/g DM). This can be explained by considering the degree of fermentation to which the leaves were submitted, which affected their polyphenolic components (BALENTINE; WISEMAN; BOUWENS, 1997). In fact, in white tea the young leaves are only partially affected by steaming; in green tea the leaves are more affected by steaming, and in black tea the fermentation process is the most intense. In addition, it has been pointed out that the different manufacturing processes of the teas can also affect their characteristics (WAN; LI; ZHANG, 2008).
TOTAL ANTIOXIDANT CAPACITY
Trolox® (a water-soluble vitamin E analogue) was chosen to express the antioxidant capacity of the teas analyzed with FCR, since it is used as the standard antioxidant compound in the ABTS•+ method, a widely used procedure for the quantification of antioxidant capacity (RE et al., 1999; SAHIN, 2013; OSZMIAŃSKI; CZEMERYS, 2007). Besides, Trolox® has a high sensitivity (represented by the value of the angular coefficient, 'b', of the calibration curve) and very good reproducibility with FCR ('b' = (18.5 ± 0.9) L cm⁻¹ mg⁻¹; n = 12; CV = 5.3%; linear range (1.0-3.8)×10⁻² mg mL⁻¹). Table 1 shows the TEAC results (µmol Trolox®/g DM) of the 32 samples of 7 types of teas (Baccharis genistelloides, Camellia sinensis L. Kuntze, Cymbopogon citratus Stapf, Ilex paraguariensis St. Hil., Matricaria recutita L., Mentha piperita L., Peumus boldus) analyzed (11 different trademarks). Although the in vitro TEAC values obtained cannot be closely related to the bioavailability of the compounds responsible for the in vivo antioxidant capacity, a TEAC ranking of the analyzed samples can be established: Matricaria recutita L. < Cymbopogon citratus Stapf. < Baccharis genistelloides < Mentha piperita L. < Peumus boldus < Ilex paraguariensis St. Hil. < black < green < white (Camellia sinensis L. Kuntze). Figure 1 shows the comparison of the TPC values with the TEAC results for all analyzed samples. A high positive correlation (TPC vs. TEAC, adjusted r² = 0.969) was verified between the antioxidant capacity and the polyphenolic content. Furthermore, it was verified that the group with the lower polyphenol content, 11.5-23.1 mg GA/g DM, presents a lower correlation with TEAC (TPC vs. TEAC, adjusted r² = 0.472) than the group with the higher polyphenol content, 56.9-133 mg GA/g DM (TPC vs. TEAC, adjusted r² = 0.912), which can, at first, be attributed to the phenolic compounds. Although polyphenols seem to be the compounds responsible for the antioxidant capacity of these samples, specific assays for other compounds should be performed if a phytochemical screening is required.
Considering only the three teas of the same plant species (Camellia sinensis L. Kuntze), the average TEAC values (µmol Trolox®/g DM) of the analyzed samples follow the order: black (907 ± 156) < green (1193 ± 235) < white (1586 ± 73) (TPC vs. TEAC, adjusted r² = 0.968), showing that the steaming process undergone by the leaves affected the antioxidant capacity of the tea (Figure 1, inset). Considering all analyzed samples, the antioxidant capacity values obtained with the CUPRAC reagent showed good correlation with the TEAC values (CUPRAC vs. TEAC, adjusted r² = 0.853) and with the TPC values as well (CUPRAC vs. TPC, adjusted r² = 0.809). This confirms that the FCR can effectively be used to quantify the antioxidant capacity of tea samples. It is also possible to express the antioxidant capacity of all analyzed teas per 200 mL of infusion, taking this volume as a regular cup of tea (Table 1). For this, ascorbic acid (AA), currently the most widely used vitamin supplement worldwide, can be used as a standard, despite the little (or no) amount of AA in these samples, and the results expressed as AA equivalents. In fact, AA also showed high sensitivity and reproducibility with FCR (b = (55.5 ± 3.5) L cm⁻¹ mg⁻¹, n = 11, CV = 6.3% for a linear range of (1.8-14)×10⁻³ mg mL⁻¹), besides being less costly than Trolox®.
The CUPRAC reagent can also be used for this purpose. A calibration curve for the CUPRAC method obtained with AA can be described by the linear equation A454nm = a + b × CAA (where CAA is the ascorbic acid concentration in mg mL⁻¹). AA showed high reproducibility with the CUPRAC reagent (CV = 4.7 % for twelve curves over a linear range of (1.23-1.76) mg mL⁻¹), but with lower sensitivity (b = (0.0883 ± 0.0042) L cm⁻¹ mg⁻¹).
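For orientation only, the sketch below shows how a calibration line A = a + b·C and a sensitivity close to the one reported for FCR could be used to convert a measured absorbance into mg of ascorbic-acid equivalents per 200 mL cup; the absorbance reading, dilution factor, and simulated calibration points are invented for the example and are not the paper's measurements.

```python
import numpy as np

# Hypothetical FCR calibration data for ascorbic acid (AA); concentrations in mg/mL.
conc = np.array([0.002, 0.004, 0.006, 0.008, 0.010, 0.012, 0.014])
absorbance = 0.02 + 55.5 * conc            # simulated readings near the reported slope

b, a = np.polyfit(conc, absorbance, 1)     # slope (L cm^-1 mg^-1) and intercept

# Convert one sample reading to AA equivalents in a 200 mL cup of infusion.
sample_abs = 0.45                          # assumed absorbance of the diluted infusion
dilution = 50                              # assumed dilution factor before reading
c_sample = (sample_abs - a) / b            # mg/mL in the diluted extract
aa_per_cup = c_sample * dilution * 200     # mg AA equivalents per 200 mL cup
print(f"slope = {b:.1f} L cm^-1 mg^-1, AA equivalents = {aa_per_cup:.0f} mg/cup")
```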
As expected, the AA/cup of tea values presented in Table 1 show the same good correlation with TPC as those found with TEAC. In any case, this may be a convenient way to present antioxidant capacity values to consumers, who are commonly unfamiliar with the concept. In this context, Table 1 can be used as an easy guide for choosing which tea to drink.
CONCLUSIONS
In the present study, a large number of commercially available teas were evaluated in order to simultaneously determine the polyphenol content and antioxidant capacity using the Folin-Ciocalteu reagent. These results were compared and showed excellent correlation with each other, indicating that polyphenols are likely responsible for the antioxidant capacity of these samples. In addition, the antioxidant capacity results obtained with FCR correlated well with the values obtained with the CUPRAC reagent, confirming that the former can effectively be used to quantify the antioxidant capacity of teas.
All the analyzed samples presented some antioxidant capacity, so possible health benefits may be obtained if these teas are regularly consumed. The antioxidant compounds present in these infusions are chemically diverse, which makes it difficult to adopt a single method to evaluate this important parameter. In this context, the FCR procedures and calculation strategies adopted here can serve as an easy guide for choosing which tea to consume, or even as a routine test for quality control. | 5,628 | 2020-01-01T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
Understanding the Determinants and Future Challenges of Cloud Computing Adoption for High Performance Computing
High performance computing (HPC) is widely recognized as a key enabling technology for advancing scientific progress, industrial competitiveness, national and regional security, and the quality of human life. Notwithstanding this contribution, the large upfront investment and technical expertise required have limited the adoption of HPC to large organizations, government bodies, and third-level institutions. Recent advances in cloud computing and telecommunications have the potential to overcome the historical issues associated with HPC through increased flexibility and efficiency, and reduced capital and operational expenditure. This study seeks to advance the literature on technology adoption and assimilation in the under-examined HPC context through a mixed methods approach. Firstly, the determinants of cloud computing adoption for HPC are examined through a survey of 121 HPC decision makers worldwide. Secondly, a modified Delphi method was conducted with 13 experts to identify and prioritize critical issues in the adoption of cloud computing for HPC. Results from the quantitative phase suggest that only organizational and human factors significantly influence cloud computing adoption decisions for HPC. While security was not identified as a significant influencer in adoption decisions, qualitative research findings suggest that data privacy and security issues are an immediate and long-term concern.
Introduction
Cloud computing is one of the major emergent paradigms in information systems research and practice. Attracted by the promise of information technology (IT) efficiencies and increased business agility, enterprises are incorporating cloud computing in their IT strategies [1]. Despite the extensive literature on the benefits of cloud computing, our understanding of cloud computing adoption decisions is marred by inconsistencies regarding the influence of a myriad of organizational, technological, environmental, and human factors [2][3][4][5][6], which vary by situational context [7][8][9]. Thus, in highly complex and under-researched contexts such as high performance computing (HPC), there is a need for research that elucidates the role of these various organizational, technological, environmental, and human factors.
Advances in the design of higher performance processors, functional accelerators, interconnects, and associated software have resulted in increasingly powerful computers, often referred to as supercomputing or HPC [10]. HPC can be defined as the coordinated use of massively parallel [...]
RQ2. What are the critical issues that are currently impacting and are likely to impact HPC in the cloud in the near future (1-5 years) and in the long term (5+ years)?
This study seeks to advance the literature on technology adoption and assimilation in the under-examined HPC context through a mixed methods approach. Firstly, the determinants of cloud computing adoption for HPC are examined with a survey of 121 HPC decision makers worldwide [22]. Secondly, a modified Delphi method was conducted with 13 experts to identify and prioritize critical issues in the adoption of cloud computing for HPC. The remainder of this article is organized as follows. In the next section, we discuss the related literature in the cloud computing context. Two pertinent theories are integrated to develop the research model in Section 3. The methodologies and findings for the quantitative and qualitative phases of the study are presented in Sections 4 and 5, respectively. These are discussed in Section 6. The paper concludes with a summary of the study's contributions, limitations, and opportunities for future research.
Literature Review
This paper seeks to examine cloud computing adoption for HPC by leveraging the technology adoption literature in the field of IS.
Technology Adoption in Information Systems
In the 1960s, research in innovation adoption began with Rogers seeking to understand what influenced farmers' decisions to adopt agricultural innovations, with innovations described as an idea, practice, or object that is perceived as new to an individual or organization [23]. Early work in this area focused either on the characteristics of the individual adopting the innovation or the characteristics of the innovation [23]. Since the 1980s, researchers have focused on understanding the factors driving adoption of different technologies, leading to the development of a well-established literature stream and a number of theories. These theories can be discussed under two categories: (i) adopter-centered and (ii) innovation or organization centered. The prevailing theories are outlined in Table 1.
The majority of models in the first category are based on the theory of reasoned action (TRA) [24], which posits that individuals' behaviors are formed based on their salient beliefs and attitude toward the behavior. While TRA does not prescribe what these salient beliefs are, it has been leveraged as the underlying foundation of several later models, beginning with the technology acceptance model (TAM). TAM lists perceived usefulness and perceived ease of use as the salient beliefs which shape individuals' acceptance and usage of new technologies [25,26]. TAM has been leveraged to examine technology adoption in various contexts and adapted to produce various new models. Venkatesh et al. [21] re-examined the popular technology adoption models to create the Unified Theory of Acceptance and Use of Technology (UTAUT). UTAUT proposes that the likelihood an individual will accept a new technology is based on their perception of the performance expectancy (similar to perceived usefulness), effort expectancy (similar to perceived ease of use), social influence, and facilitating conditions to adopt. The first theory in the second category is Diffusion of Innovation (DOI) theory, which holds that the technology adoption decision is influenced by innovation characteristics: (i) relative advantage, (ii) compatibility, (iii) complexity, (iv) observability, and (v) trialability [23]. Similar to DOI, the TOE framework proposed by Tornatzky and Fleischer [27] suggests that IT adoption is influenced by three contexts: (i) technological, (ii) organizational, and (iii) environmental. These contexts are consistent with DOI theory in that the technological context incorporates the innovation characteristics. TOE theory also includes organizational factors, such as firm size, technology readiness, employee competence, and top management support [6][7][8][9,28,29]. In congruence with DOI and TOE, the HOT-fit model, proposed by Yusof et al. [30], asserts that organizational and technological factors are determinants of successful IS adoption. The HOT-fit model also introduces human factors, claiming that user attitude and competence also have a positive impact on technology adoption [30]. Theories from both categories have been leveraged to study adoption across a wide range of contexts, with several adaptations and combinations used. However, while the predictive power of adopter-centered theories has been established and human factors are relevant, we argue that the complex nature of HPC lends itself to a broader approach that also considers organizational and technological factors.
Technology Adoption Research in Cloud Computing
As our review failed to identify any studies which have previously investigated the factors influencing decisions to adopt cloud computing for HPC applications, it was important to review what theories have been utilized in the broader cloud computing context. A recent review of the literature on cloud computing adoption found that many studies fail to utilize the prevailing technology adoption theories [31]. Furthermore, those studies that do adopt a guiding framework lack a unified approach with studies adapting and combining elements of DOI, TOE, and HOT-fit theories. Among these theories, the influential factors in cloud computing can be categorized into four groups: (i) human factors such as personal innovativeness, perceived technical competence [9,30]; (ii) organizational factors including adequacy of resources, top management support, perceived indirect benefits, and relative advantage [7,9,32]; (iii) technological factors such as perceptions of the innovation's complexity, compatibility, and reliability and security [3,5,33]; and (iv) environmental factors such as competitive pressure, government policy, and partner support [2,7,9,29]. The existing literature leveraging these theories to understand cloud computing adoption is outlined in Table 2. As highlighted above, there are a number of combinations of these theories, and conflicting categorization of variables as human, organizational, or technical factors. However, several observations can be made. Firstly, there is no dominant model for exploring technology adoption, but there are many similarities among the theories operationalized. Furthermore, the trend of combining numerous theories is evident in this context. This approach is appropriate due to the complementarity of many of the popular technology adoption theories. For instance, DOI, TOE, and HOT-fit theories complement each other, to provide a comprehensive understanding of the key determinants of technology adoption [3,7,9]. Secondly, in terms of the human factors and HOT-fit, Lian et al. [9] found support for the influence of innovativeness and IT competence on cloud computing adoption by hospitals in Taiwan. It is thus important to explore the role of human factors in the HPC context. Thirdly, in relation to the organizational factors listed across TOE and HOT-fit models, perceived indirect benefits [8], top management support [1,5,6], and the firm's IT capability all influenced cloud computing adoption [5,8]. Fourthly, a number of innovation characteristics from DOI and the technological factors from HOT-fit and TOE have received support. This includes the role of complexity and compatibility from DOI which was found to influence cloud computing adoption in Taiwan [32] and security [9]. Lastly, prior research offers mixed support for the importance of environmental factors from TOE in the cloud computing context with competitive pressure [5,7,8], regulatory support [7], and partner pressure [6] all found to be insignificant in the cloud context. In this study, we focus on combining two theories, DOI and HOT-fit, for two key reasons. Firstly, the mixed support for the importance of environmental factors and the role of government are not conducive to this study's focus as the HPC market is spread across countries and industries. Second, DOI and HOT-fit include the majority of the TOE elements and are commonly used to explain technology adoption and provide a more holistic understanding of cloud computing adoption [8,9,34].
Research Model and Hypotheses
DOI theory asserts that the salient innovation characteristics will vary across contexts [35]. It is therefore imperative to consider the research context when deciding on the appropriate technology adoption factors [34]. Thus, we draw on DOI and HOT-fit to develop the framework proposed in Figure 1.
Human Factors
Personal innovativeness denotes the openness of an individual to new technology. Studies show that an individual's perceived innovativeness can influence how they respond to new technologies [9,36]. Thus, personal innovativeness may predict whether a person intends to adopt an innovation earlier than others [5,36]. Given the lag in adoption of cloud computing in the HPC context, we argue that innovative decision makers are more likely to adopt. Hence, we posit:
Hypothesis 1 (H1). Personal innovativeness is positively related to the adoption of cloud computing for HPC.
In order to enhance technological readiness, organizations are required to have specialized human resources (e.g., competent HPC staff and/or IT staff) who have the knowledge and skills to implement cloud computing for HPC. Thus, employees' technological competencies may influence cloud computing adoption for HPC workloads. Studies show that IT expertise is essential for organizations that intend to adopt cloud computing [5,9,29]. Whilst this expertise may vary depending on the type of cloud computing service, we posit that technical competence is relevant for HPC, as organizations using HPC often use proprietary, tightly configured, or specialist software requiring specialist expertise. Thus, we propose: Hypothesis 2 (H2). IT/IS competence is positively related to the adoption of cloud computing for HPC.
Hypothesis 3 (H3). HPC competence is positively related to the adoption of cloud computing for HPC.
Technological Factors
Complexity represents perceived difficulty in the usage of an innovation [23]. Organizations tend to adopt new technologies that are easy to use, as complex technologies can result in decreased adoption [29]. Therefore, we posit:
Hypothesis 4 (H4). Complexity is negatively related to the adoption of cloud computing for HPC.
Compatibility refers to "the degree to which an innovation is perceived as consistent with the existing values, past experiences, and needs of potential adopters" [6]. Organizations are less likely to adopt cloud computing if it is not compatible with these values, as adoption may require major adjustments in organizational processes and considerable learning [6,37]. Compatibility has been found to influence cloud computing adoption [5]. It encompasses a wide range of expectations including performance and cloud compatibility with extant practices in an organization and HPC. Thus, we propose:
Hypothesis 5 (H5). Compatibility is positively related to the adoption of cloud computing for HPC.
In the context of HPC in the cloud, industry reports suggest concerns within the HPC community that cloud computing may not be a reliable medium to process HPC workloads and that cloud communication speeds may not be sufficient to handle the data movement necessary for such workloads [12,38,39]. Security concerns are consistently reported as a major barrier to cloud computing adoption [12]. Security is perceived to directly relate to the reliability of a cloud computing system and is positively associated with the cloud computing adoption decision with higher perceptions of security increasing adoption intentions [14]. Thus, we hypothesize: Hypothesis 6 (H6). Perceived reliability and security are positively related to the adoption of cloud computing for HPC.
Organizational Factors
The adoption of cloud computing requires the integration of resources and a supportive environment [40]. Top management involvement in the cloud computing adoption decision and implementation process is essential as it guarantees that sufficient resources will be allocated to support implementation [5] and the value of adopting will be communicated throughout the organization [6]. Thus, we propose:
Hypothesis 7 (H7). Top management support is positively related to the adoption of cloud computing for HPC.
Adequate resources refer to the resources needed for the adoption of cloud computing. Previous studies show that organizations with the necessary resources are more likely to adopt cloud computing [41]. Cloud computing adoption for HPC is a large project that requires top management commitment, ample time, money, competent human resources, and technological competencies [9,42]. Thus: Hypothesis 8 (H8). Adequate resources are positively related to the adoption of cloud computing for HPC.
Extant literature suggests that cloud computing provides organizations with direct and indirect benefits. For instance, cloud computing enables organizations to gain access to hardware, software, or other ICT infrastructure not available in their own data centers [8]. Such advantages are termed direct benefits. In addition to these direct benefits, organizations are also motivated by the indirect benefits, such as improving the organizational image, competitive advantage, or relationships with customers or business partners [43]. We posit: Hypothesis 9 (H9). Indirect benefits are positively related to the adoption of cloud computing for HPC.
Cloud computing adoption allows organizations to reduce upfront capital expenditure and operational costs, thereby enabling them to gain a cost reduction advantage [41,44]. Studies show that cost reduction is positively associated with organizations' perceptions of the ease of use and convenience of adopting cloud computing, thus increasing adoption intention [33]. HPC infrastructure has extremely high operating costs compared to more general IT [17]. Thus, we posit: Hypothesis 10 (H10). Cost reduction is positively related to the adoption of cloud computing for HPC.
Methodology
A questionnaire was designed to test the proposed framework presented above. The first stage of questionnaire design involved sourcing validated items to represent all of the variables in the framework. All items were sourced from the technology adoption literature and adapted to fit the context of HPC. Perceived innovativeness (four items), HPC competence (six items), IT/IS competence (five items), complexity (five items), and adequate resources (four items) were adapted from [9]. The six items used to measure indirect benefits were adapted from [9,43]. Top management support (three items) and compatibility (four items) were adapted from [9,37]. The four items to measure reliability and security were adapted from [45], while the cost reduction items were adapted from [9,45]. Each item was measured on a five-point Likert scale. The dependent variable in this study was a single self-developed item which asked respondents whether their organization had implemented cloud computing for HPC workloads. The second step involved pilot testing. This study was part of a broader international project, CloudLightning [46]. The questionnaire was pilot tested among international academics in the consortium from technical and business disciplines to explore comprehension. Following a number of minor wording changes, the questionnaire was pilot tested among industry members of the project consortium. The third step involved recruiting respondents. Given that HPC is a small knowledge-intensive market, a database was developed to recruit HPC decision makers, which included C-suite employees across various industries from oil and gas to genomics, as well as principal investigators at universities. Decision makers were identified through an online search. The survey was distributed to 619 HPC decision makers worldwide using publicly available email addresses.
A total of 121 participants completed the survey. This response rate of 19.55% was deemed adequate given the relatively small number of organizations in the HPC market. Among the sample, 53.30% are based in the European Union, 36.10% in North America and the remainder worldwide. Respondents' organizational contexts include academic (58.20%), commercial (27.90%), and government (9.80%). Among these respondents, 45.1% indicate that they have adopted cloud computing for HPC in their organization, while 54.9% indicate they have not yet adopted. Respondents reported a range of relevant job titles including CEO, CTO, Professor, Researcher, Scientist, Director, Head of Research and Development to name but a few. This distribution allows us to compare the factors predicting adoption for adopters and non-adopters. As the response rate was slightly lower than the recommended threshold (36+/−13) [47], non-response bias was tested in order to check the representativeness of the responses in this study. Following Wilcox et al. [48], we tested the non-response bias by comparing the organizational variables from the early respondents with the late respondents. The number of HPC users, the weekly HPC usage, and the respondent's familiarity with HPC were used as benchmarking organizational variables. The t-test results showed that there was no significant difference between early and late respondents across any of these variables. Therefore, the sample was deemed representative. Exploratory factor analysis was carried out in SPSS 23 to test the reliability and validity of the measures for all key constructs in the new context of HPC. Factor analysis with principal-component factoring method and VARIMAX rotation was used to test discriminant and convergent validity. The Kaiser-Meyer-Olkin (KMO) values of the three dimensions were above the threshold of 0.70, indicating that these items were suitable for conducting factor analysis. Three factors were generated under the human dimension; personal innovativeness, HPC technical competence, and IT/IS technical competence. Three factors were also generated under the technology dimension; complexity, compatibility, and reliability and security. One item from the security construct was dropped to increase reliability. Four factors were generated under the organization dimension; indirect benefits, adequate resources, top management support, and cost reduction. All Cronbach's alpha values were larger than 0.60, suggesting that the measures were reliable [49]. All items loaded onto their expected construct with factor loadings above 0.60. Thus, convergent validity and discriminant validity for each construct were achieved. The composite scores of all factors were calculated for further data analysis. As shown in Table 3, respondents rank innovativeness, compatibility, complexity, indirect benefits, and cost reduction as the most critical factors influencing cloud computing adoption. To compare the perceptions of adopters and non-adopters, a series of t-tests was conducted. The results presented in Table 4 suggest that adopters and non-adopters have significantly different perceptions of several factors, namely HPC competence, compatibility, indirect benefits, adequacy of resources, top management support, and cost reduction. Unsurprisingly, these mean differences show that adopters perceive cloud computing adoption for HPC more positively than non-adopters. Table 4. T-test results of perceptions between adopters and non-adopters.
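As a minimal, hedged sketch of the reliability and group-comparison checks described above (Cronbach's alpha for each construct and adopter vs. non-adopter t-tests), using made-up item and composite scores rather than the survey data:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
# Hypothetical 4-item construct measured for 121 respondents (5-point Likert scale).
items = rng.integers(1, 6, size=(121, 4)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")

# Hypothetical composite scores for adopters vs. non-adopters (Welch's t-test).
adopters = rng.normal(3.8, 0.6, 55)
non_adopters = rng.normal(3.4, 0.6, 66)
t, p = stats.ttest_ind(adopters, non_adopters, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```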
Hypothesis Testing
To test the framework, logistic regression analysis was conducted in SPSS 23 (Table 5). The variance inflation factors (VIF) and tolerance values were calculated to test for multicollinearity among these factors. VIF scores ranged from 1.07 to 2.14, all below the threshold of 3. The tolerance values ranged from 0.47 to 0.93, all above the cut-off score of 0.10. These results suggest that multicollinearity is not an issue [49].
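Although the original analysis was run in SPSS, the same multicollinearity check and logistic fit can be sketched in Python as below; the DataFrame of ten composite factor scores and the binary adoption outcome are random placeholders, not the survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Placeholder data: 121 respondents, ten composite factors, binary adoption outcome.
rng = np.random.default_rng(1)
factors = [f"factor_{i}" for i in range(1, 11)]
df = pd.DataFrame(rng.normal(3.5, 0.7, size=(121, 10)), columns=factors)
df["adopted"] = rng.integers(0, 2, size=121)

X = sm.add_constant(df[factors])
# VIF and tolerance for each predictor (the constant column is skipped in the report).
for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")

model = sm.Logit(df["adopted"], X).fit(disp=0)   # binary logistic regression
print(model.summary())
```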
Model Evaluation.
The omnibus test of model coefficients explores whether the independent variables in the model can explain variation in the dependent variable. A significant result suggests that the independent variables improve the prediction of the dependent variable. In this case, a chi-squared value of 44.54 with 10 degrees of freedom and a significance value below 0.01 (p = 0.00) reveals that the ten factors of interest significantly improve the prediction of the cloud computing adoption decision.
Goodness-of-fit statistics. The Hosmer and Lemeshow test gives a χ²(8) of 3.12, with a significance value of 0.93. This non-significant result suggests an acceptable match between the predicted and actual adoption decisions. The −2 log likelihood (−2 LL) and Nagelkerke pseudo R-squared were also calculated to show the power of the research model in explaining the variation in the data. The lower the −2 LL value, the better the model fit. In this case, the −2 log likelihood value of 117.23 was acceptable. The Nagelkerke pseudo R-squared represents the amount of variation explained by the model. The model explains 42.3% of the variance in cloud computing adoption.
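For reference, the fit statistics quoted above (−2 log likelihood and Nagelkerke pseudo R²) can be derived directly from the model and null log-likelihoods, as in the hedged sketch below; the numeric values are placeholders chosen only to land near the reported figures, not the study's output.

```python
import numpy as np

def nagelkerke_r2(llf: float, llnull: float, n: int) -> float:
    """Nagelkerke pseudo R^2 from the model and null (intercept-only) log-likelihoods."""
    cox_snell = 1 - np.exp((2.0 / n) * (llnull - llf))
    return cox_snell / (1 - np.exp((2.0 / n) * llnull))

# Placeholder log-likelihoods for a 121-respondent model (not the study's values);
# with statsmodels these would come from model.llf and model.llnull of a fitted Logit.
llf, llnull, n = -58.6, -80.9, 121
print(f"-2LL = {-2 * llf:.2f}, Nagelkerke R^2 = {nagelkerke_r2(llf, llnull, n):.3f}")
```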
Statistical test for individual predictors. Wald chi-square statistics were used to check the predictive ability of individual predictors. The results show that indirect benefits (p < 0.01) and HPC competence (p < 0.05) are statistically significant predictors of cloud computing adoption. Thus, H3 and H9 were supported. Indirect benefits and HPC competence are positively related to an organization's likelihood of adopting cloud computing for HPC. Perceived IT/IS competence approached significance (p = 0.09).
Discriminating power. The logistic regression analysis also reveals the predictive accuracy of the research model (Table 6). The model yields a correct prediction rate of 79% for non-adopters and 75% for adopters. The overall correct prediction rate is 77%. These results illustrate that the predictors have substantially higher discriminating power than a random-choice model.
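The classification rates of this kind correspond to a simple 0.5-probability cutoff applied to the model's predicted probabilities; a hedged sketch with invented probabilities and labels (not the study data) follows.

```python
import numpy as np

# Placeholder predicted probabilities and actual adoption labels for 121 respondents.
rng = np.random.default_rng(3)
actual = rng.integers(0, 2, size=121)
prob = np.clip(actual * 0.6 + rng.normal(0.2, 0.25, size=121), 0, 1)

pred = (prob >= 0.5).astype(int)            # classify at the 0.5 probability cutoff
for label, name in [(0, "non-adopters"), (1, "adopters")]:
    mask = actual == label
    print(f"correct prediction rate for {name}: {(pred[mask] == label).mean():.0%}")
print(f"overall correct prediction rate: {(pred == actual).mean():.0%}")
```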
Methodology
To answer RQ2, a modified Delphi method was conducted to elicit expert opinions on critical issues and problems related to the adoption of HPC in the cloud. It was viewed as an appropriate method due to its flexible methodology and suitability to topics where there is limited knowledge, as in this study [50]. Delphi inquiries enable researchers to achieve a consensus view from a group of experts on important issues using written responses [51]. An online approach to data collection was used as it reduces the time between iterations [52]. In this study, a four-round online Delphi process was conducted, as outlined in Figure 2. The study consisted of 13 expert participants from five countries, including representatives from universities and research centers, SMEs, and multinationals. During each round, experts were asked to provide their comments, carefully examine the comments made by other participants, refine their own comments, and reprioritize issues until a consensus was reached among the group. In this study, each round of the Delphi had a distinct aim. The aim of round one was to identify all potential issues facing the future of HPC in the cloud. Participants were asked to identify the crucial issues facing HPC in the cloud in the near and further future. These views were collated using an online survey and sent back to participants for them to rank the issues in order of importance. In round two, participants were asked to again review all issues and were reminded of their initial ranking. They were also presented with the results from other experts and asked to reflect on their own opinions and decide whether or not to change the ranks they had assigned. This process continued in rounds three and four until consensus was reached. The data derived from the Delphi process were analyzed in alignment with the research model.
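The Delphi rounds described above stop once rankings stabilize; one common way to quantify that kind of consensus (not stated in the paper, so offered here only as an illustrative aside) is Kendall's coefficient of concordance W, sketched with invented rankings from 13 experts over 5 issues.

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """ranks: raters x items matrix of rank positions (1 = most critical)."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

rng = np.random.default_rng(2)
# Hypothetical rankings of 5 issues by 13 experts (each row is a permutation of 1..5).
ranks = np.array([rng.permutation(5) + 1 for _ in range(13)])
print(f"Kendall's W = {kendalls_w(ranks):.2f}  (1 = perfect consensus)")
```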
Findings
Participants identified and achieved consensus on five major critical issues currently impacting the adoption of HPC in the cloud and likely to impact adoption for the next one to five years. The issues are ranked as follows: Participants were worried about the legal constraints and implications of using the cloud, as well as the lack of transparency about data center security in relation to the cloud. In this respect, participants had many of the same concerns as other industry stakeholders and consumers regarding the perceived vulnerabilities of the cloud from a data protection perspective and the implications for them and their organizations, particularly where they were dealing with commercially sensitive or personally sensitive data. Informants highlight that these concerns mostly relate to public cloud use and not private clouds, where the advantages are control and security. However, as also noted by informants, this control and security comes at a cost, in terms of capital and operational expenditure, flexibility, and elasticity. Participants view all of these issues as remaining critical in the longer-term future (5+ years) albeit with a slightly different ranking as outlined below: The long-term ranking suggests that security will remain a critical issue, but external privacy concerns will be much more important than general internal data protection and control concerns. This suggests a shift in focus from internal issues or pressures to external ones. Notwithstanding these main findings from the Delphi study, it should be noted that informants discussed many issues which transcend human, organizational, and technological categories, namely the issue of performance or perceptions around performance. In this way, this second study makes a significant contribution in that it unpacks the compatibility dimension from Study 1 to identify some latent concerns regarding the capabilities of cloud infrastructure to achieve the same levels of performance as conventional HPC. Several informants highlighted performance as a core issue for the next five years and longer term but albeit from different perspectives. Proponents of HPC noted concerns around the technology itself including perceptions of workload compatibility and communications speeds to match the demands of HPC. In addition, they noted organizational concerns including funding and resources. In contrast, cloud proponents suggested that, human factors, including potentially flawed perceptions of cloud computing might hinder individuals' willingness to consider, let alone, adopt cloud computing in this context. Notwithstanding this, the technological impact of performance did not rank as high due to agreement that in many cases cloud computing may be appropriate e.g., loosely coupled workloads such as 3D image rendering, MATLAB programs, or simulations, and in others not so appropriate, e.g., workloads requiring high interprocessor communication speeds. Similarly, while not ranking as a major concern, education and training and the need for higher education to provide a pipeline of both HPC and cloud computing graduates were noted by all.
Discussion
Integrating DOI and HOT-fit, our quantitative findings suggest that organizational and human factors significantly influence organizations' cloud computing adoption decisions for HPC. Specifically, decision makers' perceptions of indirect benefits and existing HPC competences predict their cloud computing adoption decisions, with an overall correct prediction rate of 77%. Notwithstanding this, our qualitative findings suggest that organizational and technological issues related to data privacy and security are a significant concern today and will remain so into the future.
Human factors. The results indicate that, even though innovativeness is high, this does not necessarily lead to cloud computing adoption. This finding is inconsistent with previous studies [5,9]. Descriptive statistics reported in Table 3 show that personal innovativeness is the top-rated factor in cloud computing adoption for HPC. Thus, the reasons behind this inconsistency merit further investigation. The results also suggest that organizations with superior HPC competences perceive themselves to possess more advanced computing abilities, and thus are more willing to adopt cloud computing [8]. The insignificant relationship between IT competences and adoption may be because organizations with sufficient IT expertise have already gone through some of the technological changes required for cloud computing, reducing the impact of technological competences on the cloud computing adoption decision while increasing their effect on the extent of cloud computing implementation [6]. The existing literature also reports mixed results, with some studies reporting positive impacts [8,29,32], while others report insignificant results [6,34]. Human factors thus require further investigation.
Technological factors.
None of the technological factors (compatibility, complexity, and reliability and security) influenced the adoption of cloud computing for HPC. However, there may be alternative explanations for these results. First, organizations may realize that the benefits of adopting cloud computing for HPC are likely to outweigh the perceived complexity. Thus, they may adopt cloud computing for HPC to maximize these perceived benefits. Second, the results may reflect different cloud conceptualizations. Pure private clouds and hybrid clouds may not have the same security concerns as public clouds, although this is widely disputed (see, for example, [39]). Third, if organizations perceive themselves as having superior existing technological competences, they may adopt cloud computing regardless of the compatibility between cloud computing and their existing HPC systems or processes. Fourth, though cloud security remains a major concern [5], the technological advances in securing data privacy and confidentiality on cloud computing platforms may have given organizations confidence in implementing cloud services [7,39]. Furthermore, as highlighted by the Delphi study, data privacy and security are a significant concern for HPC in the cloud both today and moving forward, which needs to be addressed by the cloud computing industry.
Organizational factors. Only perceived indirect benefits significantly influenced the cloud computing adoption decision for HPC. This is congruent with previous studies [3,8,29], suggesting that indirect benefits have a strong influence on adoption [33]. A useful implication arising from this is that cloud computing service providers should highlight the wider set of benefits associated with adopting cloud computing for HPC to their customers in order to increase their likelihood of adoption. Surprisingly, adequate resources and top management support were not found to influence cloud computing adoption for HPC. These results contradict many prior studies [5][6][7,29]. Finally, cost reduction does not influence cloud computing adoption for HPC. This may be HPC-specific, as HPC typically involves specialist computing and scientific expertise where decision-making is devolved to those with the required expertise. Similarly, HPC has a high cost profile compared to general IT expenditure. This may also reflect a sampling characteristic: the sample comprises organizations operating in the traditional HPC market, not SMEs. The use of cloud computing for cost efficiencies is a widely reported benefit for smaller or newer enterprises [32]. However, as larger organizations have sufficient resources, they are less likely to adopt cloud computing for the purpose of cost reduction.
Conclusions
The realization of the benefits of HPC has been somewhat limited, due partly to the large start-up costs required. This paper leverages HOT-fit and DOI (i) to identify the determinants of cloud computing adoption for HPC, and (ii) to elicit expert opinions on the issues facing HPC in the cloud in the near and long-term future. The paper makes two important contributions. First, the study incorporates DOI with HOT-fit theory to provide a holistic view of the determinants of cloud computing adoption for HPC. As indicated earlier, there is a dearth of research on cloud computing adoption in the HPC area. Thus, this study provides important insights and answers calls for clarification on the drivers and inhibitors of cloud computing adoption in differing contexts [31]. Second, this paper contributes both to literature and practice by providing insights for cloud service providers, with findings suggesting that cloud service providers should emphasize the indirect benefits of adopting cloud computing for HPC in their communications to potential customers and specifically address issues and concerns relating to data privacy and security. However, as is the case with all studies, this paper is not without its limitations. These include the use of a single informant, the small sample size, and the potential over-representation of the academic community in the sample. The results would be more robust with data from multiple respondents and from a larger sample. When considering the diversity of the sample, a comparison of attitudes towards different cloud deployment models (public, private, hybrid, federated, and community) and service models (IaaS, PaaS, SaaS) may prove insightful. Furthermore, future research could explore the role of these factors over time to see which factors are more influential in the short and long term. It is our hope that this research can inform future research which seeks to explore the adoption and assimilation of new technologies in complex contexts.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 7,453.6 | 2020-08-11T00:00:00.000 | [
"Computer Science"
] |
Enrichment of small pathogenic deletions at chromosome 9p24.3 and 9q34.3 involving DOCK8, KANK1, EHMT1 genes identified by using high-resolution oligonucleotide-single nucleotide polymorphism array analysis
High-resolution oligo-SNP array analysis allowed the identification of extremely small pathogenic deletions at numerous clinically relevant regions. In our clinical practice, we found that small pathogenic deletions were frequently encountered at the chromosome 9p and 9q terminal regions. A review of 531 cases with reportable copy number changes on chromosome 9 revealed 142 pathogenic copy number variants (CNVs): 104 losses, 31 gains, and 7 complex chromosomal rearrangements. Of the 104 pathogenic losses, 57 were less than 1 Mb in size, enriched at the 9p24.3 and 9q34.3 regions, involving the DOCK8, KANK1, and EHMT1 genes. The remaining 47 cases were due to interstitial or terminal deletions larger than 1 Mb or unbalanced translocations. The small pathogenic deletions of the DOCK8, KANK1, and EHMT1 genes were more prevalent than small pathogenic deletions of the NRXN1, DMD, and SHANK3 genes and were second only to the 593-kb 16p11.2 deletion syndrome (OMIM #611913). This study supports comprehensive, large-scale genotype-phenotype studies at the 9p24.3 and 9q34.3 regions for a better understanding of the pathogenicity caused by haploinsufficiency of the DOCK8, KANK1, and EHMT1 genes. Trial registration: none; this is not a clinical trial, and the cases were retrospectively collected and analyzed.
Background
Chromosomal microarray analysis (CMA) has been widely utilized for the genome-wide screening of microdeletion and microduplication syndromes [1]. The sizes of well-known microdeletion and microduplication syndromes were usually larger than 1 Mb, such as 1.4 Mb for Williams-Beuren syndrome (OMIM #194050) or 2.8 Mb for DiGeorge syndrome (OMIM #188400). Small (<1 Mb) pathogenic deletions at regions which were not well characterized were frequently encountered during our daily clinical practice, for instance, the chromosomal regions at 9p24.3 and 9q34.3.
High-resolution oligo-SNP array is able to reveal a variety of chromosomal disorders including uniparental disomy or extremely small pathogenic deletions which would be missed by low-resolution oligonucleotide CMA. Our and other previous studies showed the cases with uniparental disomy were relatively limited in number on chromosome 9 as compared to chromosome 15, 11 and 7 [2,3]. In contrast, small pathogenic deletions were frequently encountered at chromosome 9p24.3 and 9q34.3 by using high-resolution oligo-SNP array in postnatal studies. Research endeavors have been significantly prioritized to specific genes such as NRXN1 and SHANK3 in the past [4,5]. To the best of our knowledge, only four cases with small deletions of 192 kb, 225 kb, 465 kb and 518 kb in size at 9p24.3 involving the DOCK8 and/or KANK1 gene [6][7][8], and a case of 40 kb deletion in the EHMT1 gene at 9q34.3 [9] have been documented.
The purpose of this study is to evaluate: 1) the incidence of small (<1 Mb) pathogenic deletions in postnatal specimens; 2) whether the small pathogenic deletions at 9p24.3 and 9q34.3 constitute a significant proportion of small deletions; 3) what proportion of deletions on chromosome 9 is caused by small pathogenic deletions at 9p24.3 and 9q34.3; and 4) the efficacy of identifying extremely small homozygous pathogenic deletions using high-resolution oligo-SNP array.
Rarely encountered extremely small homozygous pathogenic deletions were discovered in two cases
Extremely small homozygous pathogenic deletions were identified in two cases: 1) a newborn girl who presented with a metabolic disorder (abnormal reflexes, hypotonia, seizures, and elevated glycine) was found to carry a 25-kb homozygous deletion in the GLDC gene, which gave rise to autosomal recessive glycine encephalopathy (nonketotic hyperglycinemia; OMIM #605899). In addition, a 50-kb heterozygous deletion was also found in the 5′ region of the GLDC gene. Parental studies showed the mother was a carrier of a 25-kb heterozygous deletion and the father was a carrier of a 75-kb heterozygous deletion of the GLDC gene. The 25-kb maternally inherited deletion was located within the 75-kb paternally inherited deletion, and therefore inheritance of the abnormal allele from both parents led to a 25-kb homozygous and a 50-kb heterozygous deletion in the proband (Fig. 4); 2) a 1-year-old boy was found to have a 74-kb homozygous deletion of the CDK5RAP2 gene within a region of homozygosity (Additional file 3: Figure S2).
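A hedged sketch of the interval logic behind the GLDC finding: the proband shows a homozygous loss only where the maternally and paternally inherited deletions overlap, and a heterozygous loss where only one allele is deleted. The coordinates below are illustrative placeholders, not the case's actual breakpoints.

```python
def overlap(a, b):
    """Return the overlapping interval of two (start, end) deletions, or None."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start < end else None

# Illustrative coordinates: a 25-kb maternal deletion nested inside a 75-kb paternal one.
maternal = (6_555_000, 6_580_000)   # 25 kb
paternal = (6_530_000, 6_605_000)   # 75 kb

hom = overlap(maternal, paternal)   # homozygous loss = intersection of both alleles
het_span = (paternal[1] - paternal[0]) + (maternal[1] - maternal[0]) - 2 * (hom[1] - hom[0])
print(f"homozygous deletion: {(hom[1] - hom[0]) // 1000} kb")   # -> 25 kb
print(f"heterozygous deletion: {het_span // 1000} kb")          # -> 50 kb
```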
Discussion
Subtelomeric regions such as 1p36 are known to be gene-rich and prone to deletions, as supported by a study with a large cohort of over 5,000 cases [10]. Cases with subtelomeric rearrangements comprised about 46 % of all the genomic abnormalities identified by CMA [10]. However, compared to 1p36, 22q13, 4p16, and 5p15, very few cases at the ends of chromosomes 9p and 9q have been established [10]. When we reviewed the profile of copy number losses from our database of 38,000 postnatal cases studied by high-resolution oligo-SNP array and sorted it by chromosomal region, we discovered that all the cases with 1p36 deletions were over 1 Mb: 20 cases were 1-3 Mb, eight cases were 3-5 Mb, eight cases were 5-10 Mb, and five cases were 10-20 Mb in size (unpublished data). In contrast, 61 of 104 pathogenic deletions on chromosome 9 were either smaller than 1 Mb (57 cases) or between 1 and 1.5 Mb (4 cases). This finding demonstrated that the size of copy number losses varies conspicuously between chromosomal regions. In clinical practice, vigilant selection of appropriate methods to characterize genomic losses at different regions therefore becomes essentially important. For instance, FISH analysis using subtelomeric or locus-specific probes may be appropriate to identify 1p36 microdeletions, but may miss cases with small deletions on chromosome 9. An extremely small intragenic deletion of the EHMT1 gene was previously reported in only one case: a 40-kb intragenic deletion with uncertain phenotypic consequence [9]. A more recent update on Kleefstra syndrome showed that the 16 newly diagnosed 9q34.3 deletions were all larger than 200 kb [11]. In our cohort, we identified a total of 24 cases with 9q34.3 deletions: 16 with small (<1 Mb) deletions involving the EHMT1 gene (cases 42-57, Additional file 2: Table S1), 5 with terminal deletions of 9q (cases 28-32, Additional file 2: Table S2), and 3 with small (880-1001 kb) 9q34.3 deletions due to unbalanced translocations (cases 45-47, Additional file 2: Table S2). Remarkably, we identified three extremely small (22 kb, 39 kb, and 40 kb) intragenic deletions of the EHMT1 gene, all clustered at the 3′ end of the gene (Fig. 3b-d). The 22-kb deletion (Fig. 3b) was identified in a 32-year-old female with intellectual disability, whereas the 40-kb deletion (Fig. 3c) was found in a 5-year-old girl and the 39-kb deletion (Fig. 3d) in a 1-year-old girl with typical features of the 9q34.3 deletion, including developmental delay, speech and motor delay, and hypotonia [9]. In addition, a 26-year-old male with a 165-kb deletion involving the EHMT1 and CACNA1B genes (case 57, Additional file 2: Table S1) also presented clinical features typical of 9q34.3 deletion syndromes, including mental retardation, developmental delay, speech delay, motor delay, learning disability, autism spectrum disorder, asymmetry of the temporal lobe, localized polymicrogyria, loping gait, and scoliosis [11].
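To make the size-binning and region tagging described above concrete, the hedged sketch below classifies deletion calls by size and flags overlap with the chromosome 9 genes of interest; the gene intervals are approximate hg19 coordinates quoted only for illustration, and the example deletions are invented rather than cases from this cohort.

```python
# Approximate hg19 intervals for the chromosome 9 genes discussed (illustrative only).
GENES_CHR9 = {
    "DOCK8": (214_865, 465_259),            # 9p24.3
    "KANK1": (470_291, 746_105),            # 9p24.3
    "EHMT1": (140_513_444, 140_730_578),    # 9q34.3
}

SIZE_BINS = [(0, 1_000_000, "<1 Mb"), (1_000_000, 3_000_000, "1-3 Mb"),
             (3_000_000, 5_000_000, "3-5 Mb"), (5_000_000, 10_000_000, "5-10 Mb"),
             (10_000_000, 20_000_000, "10-20 Mb")]

def classify(start: int, end: int):
    """Return the size bin of a deletion and any chromosome 9 genes it overlaps."""
    size = end - start
    label = next(name for lo, hi, name in SIZE_BINS if lo <= size < hi)
    genes = [g for g, (gs, ge) in GENES_CHR9.items() if start < ge and end > gs]
    return label, genes

# Invented example deletions on chromosome 9.
print(classify(300_000, 340_000))           # ('<1 Mb', ['DOCK8'])
print(classify(140_600_000, 140_640_000))   # ('<1 Mb', ['EHMT1'])
```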
In contrast to the EHMT1 gene, for which the haploinsufficiency score is much better established by ClinGen (https://www.ncbi.nlm.nih.gov/projects/dbvar/clingen/), the sensitivity to haploinsufficiency of the DOCK8 and KANK1 genes has not been proven (Additional file 2: Table S3). Two unrelated patients with mental retardation and developmental disability (MRD2; OMIM #614113) were found to have a heterozygous disruption of the longest isoform of the DOCK8 gene by either a deletion or a t(X;9) translocation [12]. A recent report of another two patients with almost identical deletions involving both DOCK8 and KANK1 described two distinct phenotypes [6]. The study of a four-generation family with a 225-kb deletion of the KANK1 gene implied that an imprinting mechanism may play a role in the phenotypic variation in this family. The authors suggested that KANK1 is a maternally imprinted gene and is only expressed from the paternal allele [8]. However, another report did not support this finding [7]. In our cohort, deletions of KANK1 were found in 6 cases (cases 25-30, Additional file 2: Table S1). There were not enough clinical data to determine whether KANK1 is a maternally imprinted gene. On the other hand, the DOCK8 gene is unlikely to be maternally imprinted, since two half-brothers (a 2-year-old boy and a 7-year-old boy, cases 13 and 14, Additional file 2: Table S1) both inherited the deleted DOCK8 allele from the same mother. In addition, our patients with small deletions of the DOCK8 gene had a very strong family history (cases 13/14, 17, 20/21, Additional file 2: Table S1) and shared similar clinical features, including developmental delay and intellectual disability (4 out of 5 cases), speech and motor delay (3), learning disability (2), behavior problems or autism (3), macrocephaly (2), and dysmorphic features or congenital anomalies (4). Our cohort provides additional evidence for the pathogenicity of DOCK8 haploinsufficiency.
Fig. 4 Family study of homozygous and heterozygous deletion of the GLDC gene. High-resolution oligo-SNP array analysis of the proband revealed a 25-kb homozygous and a 50-kb heterozygous deletion at the 5′ region of the GLDC gene. These two deletions involved multiple exons and led to autosomal recessive glycine encephalopathy (nonketotic hyperglycinemia; OMIM #605899). The family study showed the mother was a carrier of a 25-kb heterozygous deletion and the father was a carrier of a 75-kb heterozygous deletion of the GLDC gene. The 25-kb maternally inherited deletion was located within the 75-kb paternally inherited deletion, and thus led to a 25-kb homozygous and a 50-kb heterozygous deletion in the proband.
Although extremely rare, two cases with homozygous deletions of the GLDC and CDK5RAP2 genes were discovered in this cohort. In our previous study, we demonstrated that autosomal recessive disorders could be linked to regions of homozygosity (ROH) containing a gene with a point mutation inherited from related parents [2]. In this study, we describe two additional cases with autosomal recessive disorders which can be identified by high-resolution oligo-SNP array. The first was due to the inheritance of an allele with a heterozygous deletion of different size from each carrier parent, which led to a homozygous deletion of the GLDC gene (Fig. 4). The second was a homozygous deletion of the CDK5RAP2 gene, inherited from closely related parents who carried the same heterozygous deletion (Additional file 3: Figure S2A). These two cases demonstrate the efficacy of using high-resolution oligo-SNP array in the identification of extremely small homozygous pathogenic deletions.
Patients
Patients with a broad range of clinical indications, including intellectual disability, developmental delay, multiple congenital anomalies, dysmorphic features, and pervasive developmental disorders, were referred to our laboratory for oligo-SNP array studies. The data for this study were compiled from the de-identified results of 38,000 consecutive patient specimens referred to our laboratory for constitutional oligo-SNP array study from 2011 to 2015. The patients were mostly from the general population of the United States, with <5 % from Mexico and other countries.
Oligonucleotide-single nucleotide polymorphism array analysis platforms and threshold setting
Oligo-SNP array analysis was performed on either the Human SNP Array 6.0 (in 2011) or the CytoScan® HD array (2012-2015) (Affymetrix, Santa Clara, CA), using genomic DNA extracted from whole blood. The Human SNP Array 6.0 has 1.8 million genetic markers, including about 906,600 SNPs and 946,000 probes for the detection of CNVs. The CytoScan® HD has more than 2.67 million probes, including 1.9 million non-polymorphic copy number probes and 750,000 SNP probes. The overall resolutions are approximately 1.7 kb for the Human SNP Array 6.0 and 1.15 kb for the CytoScan® HD. For chromosome 9, the probes on the Human SNP Array 6.0 covered 9p (chr9:37,747-47,217,164) and 9q (chr9:65,596,318-141,091,382); the probes on the CytoScan® HD covered 9p (chr9:192,129-40,784,142 and chr9:43,400,082-44,900,526) and 9q (chr9:66,837,485-141,025,328). Genomic coordinates were based on genome build 37/hg19 (2009). Hybridization, data extraction, and analysis were performed as per the manufacturer's protocols. The Affymetrix® Chromosome Analysis Suite (ChAS) Software version 2.0 was used for data analysis, review, and reporting. For genome-wide screening, thresholds were set at > 200 kb for gains and > 50 kb for losses. For cytogenetically relevant regions, thresholds were set at > 100 kb for gains and > 20 kb for losses. Benign CNVs that are documented in the database of genomic variations (http://dgv.tcag.ca/dgv/app/home?-ref=GRCh37/hg19) and present in the general population were excluded from reporting. | 2,818.2 | 2016-11-15T00:00:00.000 | [
"Biology",
"Medicine"
] |
Vanadium and tantalum doping of tin dioxide: a theoretical study
The increasing demand for efficient optoelectronic devices such as photovoltaics has created great research interest in methods to manipulate the electronic and optical properties of all the layers of the device. Tin dioxide (SnO2), owing to its charge transport capability, high stability, and easy fabrication, is the main electron transport layer in modern photovoltaics, which have achieved record efficiencies. While the wide band gap of SnO2 makes it an effective electron transport layer, its potential for other energy applications such as photocatalysis is limited. To further improve its conductivity and reduce its bandgap, doping or co-doping with various elements has been proposed. In the present density functional theory (DFT) study, we focus on the investigation of vanadium (V) and tantalum (Ta) doped SnO2, both in the bulk and at the surface. Here we focus on interstitial and substitutional doping, aiming to leverage these modifications to enhance the density of states for energy applications. These changes also have the potential to influence the optical properties of the material, such as absorption, and make SnO2 more versatile for photovoltaic and photocatalytic applications. The calculations show the formation of gap states near the band edges, which are beneficial for electron transitions, and in the case of Ta doping the lowest bandgap value is achieved. Interestingly, in the case of the Ta interstitial, deep trap states are formed which, depending on the application, could be advantageous. Regarding the optical properties, we found that V doping significantly increases the refractive index of SnO2, while the absorption is generally improved in all cases. Lastly, we investigate the electronic properties of the (110) surface of SnO2, and we discuss other possible applications arising from surface doping. The present work highlights the importance of V and Ta doping for energy and sensor applications.
SnO2, also known as cassiterite and stannic oxide, represents one of the most used wide-bandgap semiconductors in energy devices [1,2]. It is characterized by n-type conductivity, which can be attributed to its intrinsic defects, such as oxygen vacancies [3] and tin interstitials [4]. As a compound, SnO2 exhibits low resistivity [5] and a high dielectric constant, and has therefore been considered for gate oxides on Si-based electronic devices [6,7] as well as for the electron transport layer in perovskite photovoltaics [8]. Undoped SnO2 has a wide bandgap value of ~3.6 eV [9] and its reported resistivity values range from 10⁻² to 10⁻³ Ω cm [10]. One of the main quests in photovoltaic technologies is to enhance the conductivity of the electron transport layer without reducing the bandgap [11]. In essence, the efficiency of the devices is strongly connected with the charge recombination and losses due to the limited transport properties of the layers used. Furthermore, stoichiometric SnO2 exhibits low intrinsic carrier density and low mobility of its charges due to the oxygen vacancies, which act as donors. Doping is examined as a strategy to further reduce the resistance of SnO2 and to enhance the transitions in the visible wavelengths [12,13]. For photocatalysis, it is important to increase the conductivity of the photocatalyst and decrease the bandgap of the semiconductor. The increase in the carrier concentration makes the intermediate energy gap between the valence band and conduction band more active [14].
To further improve the response of gas sensing devices and improve their sensitivity and selectivity, it is common to add porous materials and catalytically active agents [15,16]. Doping the metal-oxide-based sensor (such as SnO2) promotes the physicochemical reactions between the surface and the gas [17]. Various reports propose that incorporation of an appropriate dopant is an efficient technique to enhance the sensitivity, selectivity, operating temperature, and recovery time of SnO2-based gas sensors, as the dopant modifies the structural, electronic [...]
Methodology
The Cambridge Serial Total Energy Package (CASTEP) [29] was used. To account for the effect of localized electrons and the bandgap underestimation, we employed the hybrid functional PBE0 with norm-conserving pseudopotentials [30]. Convergence tests revealed that a cutoff energy of 800 eV and a 2 × 2 × 3 k-point mesh for the sampling of the Brillouin zone were sufficient for the 48-atom supercell (2 × 2 × 2 unit cells). The supercell size was chosen by taking into account that, although hybrid functionals can provide more reliable results, they are very computationally expensive. For the optimization of the relaxed structures and the prediction of the ground state of each system, we used the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, which has been shown to predict the correct ground state for various different systems [31]. For the interstitial positions we examined all possible configurations in the supercell in conjunction with geometry optimization and retained the lowest-energy configuration. Specifically, we placed the interstitial defects at various sites and used as the final ground state the configuration with the minimum total energy. The surface simulation was based on a slab model with a vacuum of about 12 Å perpendicular to the (110) direction. Here, the top two layers represent the surface, whereas the bottom two layers are fixed and represent the bulk region. For the DOS calculations, a 5 × 5 × 5 k-point mesh was used for the bulk and a 3 × 3 × 1 set for the surface. We set the convergence criteria at 2.0 × 10⁻⁵ eV/atom for the SCF tolerance, 0.05 eV/Å for the force tolerance, and 0.001 Å for the maximum displacement tolerance. We considered all the possible configurations of these defects in the supercell, but the subsequent figures report results for the minimum-energy configurations only.
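As a hedged illustration of the structural setup only (the CASTEP/PBE0 calculation itself is not reproduced here), the rutile SnO2 2 × 2 × 2 supercell and a substitutional dopant could be built with ASE roughly as follows. The lattice parameters and internal coordinate are nominal experimental values assumed for the sketch, not the optimized ones reported in this work, and the output file name is arbitrary.

```python
from ase.spacegroup import crystal

# Rutile SnO2 (space group P4_2/mnm, No. 136); nominal experimental cell, u ~ 0.307.
a, c = 4.737, 3.186
sno2 = crystal(["Sn", "O"],
               basis=[(0.0, 0.0, 0.0), (0.307, 0.307, 0.0)],
               spacegroup=136,
               cellpar=[a, a, c, 90, 90, 90])

supercell = sno2.repeat((2, 2, 2))            # 48-atom 2x2x2 supercell as in the text

# Substitutional doping: replace one Sn site by Ta (Ta_Sn); V_Sn is analogous.
sn_sites = [i for i, at in enumerate(supercell) if at.symbol == "Sn"]
supercell[sn_sites[0]].symbol = "Ta"

print(supercell.get_chemical_formula())       # e.g. O32Sn15Ta
supercell.write("Ta_Sn_SnO2.cif")             # geometry file for the DFT code of choice
```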
Results and discussion
In Table 1 we have gathered the lattice parameters for all the doping cases. In all cases except the vanadium substitutional, the volume of the unit cell increases. Typically, a larger cell is associated with a larger area for chemical reactions to take place and can lead to improved photocatalytic activity of SnO2. Our approach agrees well with experimental reports 23,32. Specifically, Alvarez-Roca et al. 23 used X-ray diffraction and transmission electron microscopy to determine the structural changes of SnO2 at various vanadium doping concentrations. Similar to our study, they found that V atoms can be incorporated into the SnO2 structure and that at low concentrations the cell volume is reduced. According to their study, this reduction enhances the specific surface-area-to-volume ratio, which is highly beneficial for applications of V:SnO2 in catalysis, sensing, and energy devices. In the recent work of Uwihoreye et al. 32, who investigated the structural, electronic, and optical properties of Ta:SnO2 thin films, it was found that at low Ta concentrations the lattice parameters of SnO2 are slightly reduced while the volume remains almost unchanged. At low concentrations the Ta atoms are incorporated at Sn sites, and because the Ta radius (0.064 nm) is smaller than that of Sn (0.065 nm), the lattice parameters decrease slightly. As they report, at higher concentrations this trend is reversed and the volume increases. We believe this happens because interstitials are more likely to form at higher concentrations and, as we show, Ta_i:SnO2 has a larger volume than Ta_Sn:SnO2. While our work predicts trends similar to the experimental studies mentioned above, explaining those results completely and accurately would require examining different doping concentrations, which is beyond the scope of this paper. Ali and Islam 33 investigated the effect of Ta doping in SnO2 using DFT and also predicted that incorporation of Ta increases the supercell volume.
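For the tetragonal rutile cell, the volume discussed above follows directly from the lattice parameters (V = a^2 c), so the trends in Table 1 can be checked with a one-line calculation. The sketch below uses placeholder numbers, not the values reported in Table 1.

```python
# Relative volume change of a tetragonal cell from its lattice parameters.
def tetragonal_volume(a: float, c: float) -> float:
    """Volume of a tetragonal cell in cubic Angstrom (V = a^2 * c)."""
    return a * a * c

undoped = tetragonal_volume(4.737, 3.186)   # assumed reference cell
doped = tetragonal_volume(4.750, 3.190)     # hypothetical doped cell
change = 100.0 * (doped - undoped) / undoped
print(f"relative volume change: {change:+.2f}%")
```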
Continuing with the electronic structure of Ta/V-doped SnO2, Fig. 2 reports the calculated total DOS and the partial DOS (PDOS) for each doping case. As shown in Fig. 2e, the hybrid-functional (PBE0) calculations yield a bandgap of 3.35 eV 13, in excellent agreement with the experimentally determined bandgap 23. Doping with Ta_i or V_i results in a small increase of the bandgap, whereas Ta and V at substitutional sites reduce the bandgap by about 0.5 eV (refer to Table 2). Overall, the decrease of the bandgap upon V and Ta doping is caused by the overlap of V-3d (Ta-5d) states with O-2p states. This overlap can give rise not only to states near the band edges but also to intermediate bands (deep states). The increase of the bandgap for Ta_i can be attributed to electron doping, which shifts the Fermi level into the conduction band; this phenomenon is known as the Burstein-Moss effect 34.
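To give a feel for the size of the Burstein-Moss widening mentioned above, the sketch below evaluates the standard degenerate-electron-gas estimate dE = hbar^2 (3 pi^2 n)^(2/3) / (2 m*). The carrier density and effective mass are assumed, order-of-magnitude values chosen only for illustration; they are not results of this work.

```python
# Order-of-magnitude estimate of the Burstein-Moss shift.
import math

HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def burstein_moss_shift(n_per_cm3: float, m_eff_ratio: float) -> float:
    """Burstein-Moss widening of the optical gap in eV."""
    n = n_per_cm3 * 1.0e6                   # cm^-3 -> m^-3
    m_eff = m_eff_ratio * M_E
    return HBAR**2 * (3.0 * math.pi**2 * n) ** (2.0 / 3.0) / (2.0 * m_eff) / EV

# e.g. n ~ 1e20 cm^-3 and m* ~ 0.27 m_e (assumed, typical values quoted for SnO2)
print(f"{burstein_moss_shift(1e20, 0.27):.2f} eV")
```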
For both V_i:SnO2 (Fig. 2a) and Ta_i:SnO2 (Fig. 2c), in-gap states form close to the conduction band edge and the calculated bandgap increases to 3.39 eV and 3.48 eV, respectively (Table 2). The formation of energy states within the bandgap can be advantageous for photocatalytic applications, but it is detrimental for photovoltaics and light-emitting diodes, where such states act as traps that reduce the photocurrent and the number of photogenerated charge carriers. Conversely, for V_Sn:SnO2 (Fig. 2b) and Ta_Sn:SnO2 (Fig. 2d) the bandgap is reduced to 2.86 eV and 2.84 eV, respectively (refer to Table 2). This bandgap reduction is attractive for photocatalytic applications.
Figure 3 reports the refractive index as a function of photon energy for all the doping cases considered. At zero frequency the refractive index is predicted to be 1.40, in excellent agreement with previous theoretical studies but lower than the experimental value (1.70) 35,36. From Table 2 and Fig. 3 it is clear that doping increases the refractive index at lower photon energies and decreases it at higher energies.
Figure 4 reports the reflectivity (i.e., the fraction of incident photons that are reflected); V_i:SnO2 is predicted to have the highest reflectivity in the near-infrared region (refer to Table 2). Ta-doped SnO2 has low reflectivity in the infrared and visible regions and could therefore be used as an antireflective coating.
Figure 5a,b present the optical conductivity and the absorption coefficient for all the doping cases considered here. The optical conductivity effectively reflects the generation and mobility of excitons (electron-hole pairs), a crucial parameter in the design of optical detectors 37. Excitons are generated when photons have energy higher than the optical bandgap and, because of their overall charge neutrality, they do not contribute to the electrical conductivity 38. The absorption of undoped SnO2 starts at 380 nm, in fair agreement with the experimental value (400 nm) 39. From Fig. 5b it is observed that Ta_i and V_i have the highest absorption in the visible region. A sketch of how these optical constants follow from the complex dielectric function is given below.
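The quantities plotted in Figs. 3-5 follow from the complex dielectric function through standard relations. The snippet below is a generic post-processing sketch of those relations, not the CASTEP implementation used in this work; at zero frequency the refractive index simply reduces to n(0) = sqrt(eps1(0)).

```python
# Optical constants from the complex dielectric function eps = eps1 + i*eps2.
import numpy as np

def optical_constants(eps1, eps2, photon_energy_eV):
    eps1, eps2 = np.asarray(eps1, float), np.asarray(eps2, float)
    mod = np.sqrt(eps1**2 + eps2**2)
    n = np.sqrt((mod + eps1) / 2.0)                       # refractive index
    k = np.sqrt((mod - eps1) / 2.0)                       # extinction coefficient
    reflectivity = ((n - 1.0)**2 + k**2) / ((n + 1.0)**2 + k**2)
    # absorption coefficient alpha = 2*omega*k/c, with E = hbar*omega
    hbar_c_eV_cm = 1.9732698e-5                           # hbar*c in eV*cm
    alpha = 2.0 * np.asarray(photon_energy_eV, float) * k / hbar_c_eV_cm  # cm^-1
    return n, k, reflectivity, alpha
```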
Surface SnO2
The investigation of the most exposed surface, in this case the (110) surface 13, is of crucial importance for the application of doped SnO2 in technologies such as gas sensors 40 and photocatalytic hydrogen production 41. Specifically, in gas sensors the adsorption and desorption of oxygen and gas molecules occur on the surfaces of the material, while for hydrogen production the most energetic surface plays the role of the active site in the photocatalytic reactions. As the available literature shows, studies of the (110) SnO2 surface are significantly fewer than studies of the bulk. In this section, the changes in the electronic density of states of the undoped and the V- and Ta-doped SnO2 surface were investigated. The (110) surface is cleaved from bulk SnO2 and represented by a nine-layer slab model, as presented in Fig. 6. The slab consists of 16 Sn atoms and 32 O atoms, and the vacuum thickness is set to 12 Å. In our calculations the bottom four layers are kept fixed during geometry optimization to represent the bulk side, while the top five layers are free to relax for the energy minimization. Similar slab models have been used in various other studies, for example for the adsorption of hydrogen molecules on a Cu-doped SnO2 surface 42, and the same procedure was followed here to obtain the minimum-energy configuration of the doped surface. Furthermore, to predict the DOS characteristics accurately, the hybrid functional PBE0 was used.
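A minimal sketch of this slab construction with ASE is given below. The number of layers and the 12 Å vacuum follow the text; the bulk cell parameters are nominal values rather than the relaxed ones, and the z-cutoff used to freeze the lower part of the slab is an assumption for illustration only.

```python
# Sketch of the (110) slab model with the bottom layers frozen.
from ase.build import surface
from ase.constraints import FixAtoms
from ase.spacegroup import crystal

bulk = crystal(['Sn', 'O'], basis=[(0, 0, 0), (0.306, 0.306, 0)],
               spacegroup=136, cellpar=[4.737, 4.737, 3.186, 90, 90, 90])

slab = surface(bulk, (1, 1, 0), layers=9, vacuum=12.0)   # nine-layer slab, 12 A vacuum

# Freeze the atoms in the lower part of the slab (the "bulk" side);
# the 45% height cutoff is an illustrative choice, not the paper's criterion.
z = slab.positions[:, 2]
z_cut = z.min() + 0.45 * (z.max() - z.min())
slab.set_constraint(FixAtoms(mask=z < z_cut))
print(len(slab), "atoms in the slab model")
```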
The total density of states of the undoped and the V- and Ta-doped SnO2 (110) surface, together with the partial density of states of the O, Sn, V, and Ta atoms, is depicted in Fig. 7a-c. From Fig. 7a it can be seen that V doping produces a surface bandgap of nearly 2 eV. Compared to the pristine SnO2 (110) facet, the valence band edge rises after V doping, as additional states are created at about 0.5 eV. Our analysis shows that V doping reduces the surface bandgap by approximately 0.5 eV; moreover, additional states are created at the conduction band edge. For Ta doping, the bandgap is reduced to about 1.8 eV, and the only additional energy states are located at about 1 eV. Compared to the undoped surface, Ta doping therefore reduces the bandgap by nearly 0.7 eV.
From the surface simulations it can be concluded that gap states, which can serve as electron traps, form near the conduction band in all the doping cases. As these trap states are of great importance for gas sensing and photocatalysis, further experimental investigation is suggested.
Conclusions
In the present DFT investigation, the structural, electronic, and optical properties of V- and Ta-doped SnO2 were calculated. Our first-principles study assessed the potential of these dopants for energy and sensing applications using advanced hybrid-functional calculations for both the bulk and the surface of SnO2. The DOS calculations revealed a small bandgap increase for Ta_i and V_i doping, whereas for both the Ta and V substitutionals the bandgap decreases. Our bulk calculations agree well with experimental reports and explain the trends observed in them. The reduction of the bandgap in the substitutional cases and the mid-gap states in the interstitial cases can be beneficial for photocatalytic applications, while the increased bandgap, especially in the V_i case, can be beneficial for other applications such as electron transport layers. Furthermore, the surface calculations indicate that these systems can be applicable to gas sensors, as they provide active sites for the sensing reactions and the gap states that form can further enhance these reactions. Experimental work is therefore needed to verify these predictions.
Figure 3. The refractive index for (a) V_i:SnO2, (b) V_Sn:SnO2, (c) Ta_i:SnO2, and (d) Ta_Sn:SnO2. The dotted purple and dotted green lines correspond to the dielectric function of the undoped SnO2.
Figure 4. The reflectivity of the doped structures. The dotted purple line corresponds to undoped SnO2.
Figure 5. (a) The optical conductivity with respect to the photon energy and (b) the absorption coefficient with respect to the wavelength for the investigated dopants.
Figure 6. The slab model used for the V- and Ta-doped SnO2.
Table 1. Lattice parameters of Ta- and V-doped SnO2.
Table 2. The predicted electronic and optical constants.
"Materials Science",
"Physics"
] |
Camelid VHHs Fused to Human Fc Fragments Provide Long Term Protection Against Botulinum Neurotoxin A in Mice
The bacterium Clostridium botulinum is the causative agent of botulism, a severe intoxication caused by botulinum neurotoxin (BoNT) and characterized by damage to the nervous system. In an effort to develop novel C. botulinum immunotherapeutics, camelid single-domain antibodies (sdAbs, VHHs, or nanobodies) could be used thanks to their unique structure and characteristics. In this study, VHHs were produced using phage display technology. A total of 15 different monoclonal VHHs were selected based on their complementarity-determining region 3 (CDR3) sequences. In vivo challenges with different toxin lethal doses (LD50) were conducted with each selected phage clone to check their neutralizing potency. We demonstrated that modification of neutralizing VHHs with a human immunoglobulin G (IgG)1 Fc (fragment crystallizable) fragment (fusionbody, VHH-Fc) significantly increased the circulation time in the blood (up to 14 days). At the same time, VHH-Fc showed protective activity 1000 times higher than the monomeric form when challenged with 5 LD50. Moreover, VHH-Fcs remained protective even 14 days after antibody administration. These results indicate that VHH-Fc could be used as effective long-term antitoxin protection against botulinum toxin type A.
Introduction
Botulinum neurotoxin (BoNT) is the strongest organic poison for humans and animals. It is produced by the anaerobic, Gram-positive, spore-forming, rod-shaped bacterium Clostridium botulinum [1,2]. The estimated human lethal dose is about 10 nanograms per kilogram of bodyweight if the toxin is inhaled and one microgram if it is taken orally [3,4]. The most common forms of natural botulism are food-borne, wound, and infant [2]. Food-borne botulism occurs through contaminated food ingestion, and the case fatality rate is now about 5-10% in developed countries (as opposed to 60-70% before 1950). Wound botulism occurs when an open wound is exposed to C. botulinum spores, and the case fatality is approximately 10-15% of patients even with aggressive treatment.
Infant intestinal botulism occurs when spores are ingested and colonize the digestive tract, as infants lack the protective flora of adults; its case fatality rate is estimated to be less than 1-2%. Inhalation botulism does not occur naturally and may occur in the context of a bioterrorist attack [2,5]. Among the four major human-pathogenic BoNTs (BoNT/A, B, E, and F), BoNT/A poses the most serious challenge for medical treatment due to its extremely high potency and extraordinary persistence in human patients [6]. According to the World Health Organization [7], an antitoxin, usually based on equine antitoxin, should be administered as soon as the diagnosis is made. However, side effects, such as allergic reactions, fever, serum sickness, and anaphylactic shock, often occur. In the past, at-risk persons and populations were vaccinated with a chemically inactivated penta-serotype BoNT/A-E toxoid [8]; however, its use has been discontinued due to declining potency [9]. In other affected individuals, botulism treatment is mainly supportive, with mechanical ventilation being the only effective life-saving treatment [10,11].
A good alternative to antitoxin serum with minimal or no side effects is treatment with monoclonal antibodies (mAbs) to BoNT, which can be produced in vitro. Along with conventional antibodies, single-domain antibodies (sdAbs), also referred to as VHHs (variable domains of heavy-chain-only antibodies) or nanobodies, have been widely used since their discovery in camelids alongside classical immunoglobulins (IgG) [12][13][14]. The absence of light chains and the lack of constant domain 1 (CH1) in the heavy chain are the key characteristics of heavy-chain antibodies (HC-Abs). Therefore, the antigen-binding site of HC-Abs is formed by a single domain only, which is linked directly via a hinge region to the Fc (fragment crystallizable) domain. HC-Abs recognize the antigen with only one specialized variable domain, referred to as the VHH. The structure of the VHH domain resembles the VH IgG domain [15]. The complementarity-determining region 3 (CDR3) of these HC-Abs possesses the extraordinary capacity to form long finger-like extensions, which can extend into cavities on antigens, as the CDR3 is often much longer than that of conventional VH domains [16,17]. Despite their small size (~15 kDa), sdAbs maintain affinities and antigen-binding specificities comparable to those of full-size mAbs [18]. Clear advantages include the ability to recognize hidden antigenic sites that are inaccessible to conventional antibodies because of their structure; stability over a wide range of temperatures and pH; high solubility; and economical and facile expression and production in large quantities in microorganisms [15,17]. Several studies have tested the potency of VHHs as inhibitors of viral infections [18] and of different toxins produced by plants and microorganisms such as Ricinus communis [19], Mycoplasma hominis [20], Clostridium tetani [21], Clostridium difficile [22], Crotalus durissus terrificus [23], and Shiga toxigenic Escherichia coli (STEC) [24]. It has been demonstrated that the antigen-binding regions of VHHs produced by camelids show strong anti-BoNT activity in animal models [25,26]. Due to their unique features and high efficacy, sdAbs are currently in clinical trials for the treatment of a wide range of diseases, including cancer, inflammation, hematology, and respiratory diseases [27]. For instance, Cablivi is a nanobody-based medicine that has recently been approved by the Food and Drug Administration (FDA) for the treatment of acquired thrombotic thrombocytopenic purpura (aTTP) [28].
However, the advantages of VHHs may also become their impediments. Their small size enables good penetration into tissue and rapid distribution, but their short half-life in the blood circulation limits the time for interaction with hard-to-reach epitopes and for crossing endothelial barriers in sufficient amounts [27,29,30]. Nevertheless, due to their relatively simple structure, VHHs can be optimized by genetic engineering to obtain desired properties, including extended half-lives [29].
One approach was to add an albumin-binding peptide to the C-terminus of an anti-botulinum VHH, which increases the serum half-life of the VHH to 1-2 days [31]. Another approach was to generate virus vectors encoding chimeric proteins with one or more VHHs fused in frame to a cDNA encoding the red blood cell membrane proteins glycophorin A or Kell. In vitro studies and stem cell transplantation research have demonstrated that the half-life of these VHHs could be extended to several weeks with retained functionality [32]. The VHH could also be fused to the fragment crystallizable (Fc) region of an IgG. Such modifications, termed fusionbodies, increase the total size of the protein complex as well as the interaction of the VHH-Fc with the neonatal Fc receptor (FcRn), preventing clearance by the renal filtration system [30]. Furthermore, the attached Fc fragment can promote effector functions, such as phagocytosis and cytotoxicity, thereby enhancing the neutralization of pathogen entry and replication [33].
In this study, we focused on screening specific BoNT/A-neutralizing alpaca VHHs from a VHH immune library by phage display, using BoNT/A and BoNT/A treated with dithiothreitol (BoNT/A-DTT) as antigens. Two clones (B11 and G3) with high neutralizing potency at a 50 LD50 challenge were obtained in phage form. These clones were produced as VHHs, which were then modified as dimers or fused with human IgG Fc fragments (VHH-Fc, fusionbody). The VHH-Fc modification greatly increased their neutralizing potency and serum half-life.
Generation of VHHs to BoNT/A
Alpaca immunization was performed using five sequential injections, with an interval of 14 days between the first and second immunizations and 10 days between all subsequent immunizations, to generate an immune library of single-domain antibodies (Figure 1a). At 24 and 49 days post-immunization, blood was collected and the titer of BoNT/A-specific antibodies was measured by enzyme-linked immunosorbent assay (ELISA). The immune serum showed a clear response to BoNT/A (Figure 1b). Before immunization, all sera showed background levels of antibody titers. As a result of the five-fold immunization scheme, the final titer of toxin-specific antibodies in the alpaca serum was more than 1/600,000.
VHH Library Construction and In Vivo Polyclonal Neutralization Verification
To generate a panel of single-domain antibodies to BoNT/A, we constructed an alpaca immune VHH library for display on the bacteriophage surface by cloning the nucleotide sequences coding for VHH repertoires of the immunized alpaca into an expression phagemid vector. After the transformation of recombinant phagemids into competent E. coli cells, a library, with a size of 5 × 10 6 phagemids, was obtained. Phage display and two rounds of selection were performed for acquiring phage libraries. Purified BoNT/A was used as an antigen for the first round. Two independent second rounds were performed separately, with BoNT/A and BoNT/A-DTT used as antigens. DTT reduction was used for fragmenting toxins into two constituent parts-the heavy chain (HC) and light chain (LC) [34]-for more uniform absorption of chains on the immunoplate's surface and to evenly distribute the epitopes on both chains. The polyclonal phage library titer for all rounds was approximately 10 13 CFU (colony-forming unit)/mL (Figure 2a).
After two rounds of selection, the specificity of each library was determined by ELISA ( Figure 2b). To test the neutralizing potency and specificity of the polyclonal phage libraries obtained in the second round of selection, an in vivo toxin neutralization assay was performed ( Figure 2c). BALB/c female mice were divided into groups and received one intraperitoneal injection with 10 LD 50 or 50 LD 50 BoNT/A after 1 h incubation with phage libraries. Previously obtained phages specific to tetanus neurotoxin (TeNT) were used for controls as non-specific phages. All mice in the positive control group survived, while all mice in the negative control group and the TeNT group died. Challenging two experimental groups with 10 LD 50 and 50 LD 50 provided full and partial protection, thus allowing further selection of monoclones.
Selection of Individual VHH Clones and In Vivo Monoclonal Neutralization Assay
A total of 39 clones showing ELISA readouts higher than 0.2 ( Figure 3a) and negligible reactivity with bovine serum albumin (BSA) were selected for sequencing. A total of 15 clones with different CDR3 amino acid sequences were selected for further research.
To test the neutralizing potency of the selected clones, an in vivo monoclonal phage neutralization assay was performed in which mice were administered the corresponding phage clone (Figure 3b). Each group received one intraperitoneal injection of 10 LD50 BoNT/A previously mixed with the corresponding phage clone at 10^11 CFU. Only the four clones (B10, C10, B11, G3) that fully protected the mice against the 10 LD50 challenge were chosen to test their protectiveness against a 50 LD50 challenge. The mouse groups that received clones B10 and C10 premixed with BoNT/A were partially protected, while the two groups that received clones B11 and G3 premixed with BoNT/A showed 100% protection. Thus, clones B11 and G3 were chosen for further research as the most protective. It should be noted that the clones obtained by selection on BoNT/A-DTT showed greater diversity in their CDR3 sequences and demonstrated neutralizing activity, unlike the clones selected on BoNT/A.
The most protective clones, B11 and G3, were titrated in series down to 10 7 CFU and introduced intraperitoneally into mice simultaneously with 10 LD 50 of the toxin. The neutralizing potency of both clones was comparable, with the lower threshold being 10 10 CFU (Figure 3c).
Finally, both clones were tested for cross-reactivity protectiveness with another toxin serotype, BoNT/B. Each group of mice was challenged with 5 LD 50 of the toxin and administered 10 11 CFU/mL phages. Both clones failed to protect the mice, demonstrating that the two protective clones are specific to BoNT/A.
Modification of VHH Clones to Improve their Protective Activity
To increase the protectiveness of the selected clones by increasing the antibody circulation time in the blood and/or their additional functionality, two modified constructions were synthesized. One construction was a dimer form of each clone (B11-dimer and G3-dimer) held by a glycine-serine linker (Gly4Ser) 3 and expressed in the pET30 plasmid. The second construction was a monomer of each clone linked to a human IgG Fc fragment (B11-Fc and G3-Fc) (Figure 4a). Fc-modifications have been used to increase the antibody circulation time in blood. Human Fc-fragments cross-react with murine Fc-receptors and are capable of binding mouse FcRn [35,36]. Human Fc was chosen for these constructions to be used in further research and clinics. The dimers were produced and purified from the bacterial periplasmic fraction and verified by SDS-PAGE to be~30 kDa. VHHs fused with Fc fragments were produced in Chinese hamster ovary (CHO-S) cells and verified by SDS-PAGE to be~40 kDa under reducing conditions (Figure 4b). To determine the polypeptide chain of the toxin that bound specific antibodies, it was denatured by DTT into its HC and LC. SDS-PAGE was performed in denaturing conditions followed by western blot. B11-Fc and G3-Fc clones were tested, and anti-human IgG (Fc-specific)-peroxidase antibodies (1:2500) were used as detection reagents ( Figure 4c). These antibodies were associated with the HC (100 kDa) of the toxin, which corresponds to the receptor domain. An immunoblot with BoNT/A and BoNT/A-DTT (under reducing conditions) were performed as control (Figure 4d). We investigated the affinity of clones B11 and G3 via surface plasmon resonance (SPR). Amine coupling was used to immobilize BoNT/A and BoNT/A-DTT on the sensor chip. The kinetic binding on-and off-rates between the antibodies and the toxin were determined from sensorgram analysis and used to calculate the equilibrium dissociation constant (KD) ( Table 1). Table 1. Kinetic parameters of antibody interactions with the toxin obtained by SPR (surface plasmon resonance). Association (on-rate, K a ), dissociation (off-rate, K d ), maximum analyte binding capacity (R max ), equilibrium association constants (K A ), equilibrium dissociation constants (K D ), and Chi 2 for the chosen VHHs (variable domains of heavy-chain only antibodies) binding to botulinum neurotoxin (BoNT)/A or BoNT/A-DTT (dithiothreitol).
For the in vivo protection analysis, different amounts of the modified VHH forms (monomers, dimers, and VHHs fused with Fc fragments) were premixed with 5 LD50 BoNT/A and injected intraperitoneally into mice (Figure 5a). Monomers gave only partial protection even at the highest amount, 100 µg, with protection gradually decreasing and failing at 10 µg. The dimers showed a better result on average, with full protection by the B11-dimer at 100 µg and partial protection at lower amounts, failing at 1 µg, and with partial protection by the G3-dimer at 100-20 µg, failing at 10 µg. The best results were demonstrated by the VHHs fused with Fc fragments, with G3-Fc failing to fully protect the mice at 0.1 µg and B11-Fc protecting the mice at amounts as low as 0.001 µg. All forms of the VHHs at least partially protected the mice at high amounts, whereas only the VHHs fused with Fc fragments protected the mice at the lowest amounts tested (0.1-0.001 µg).
To assess the circulation time of various antibody modifications in the blood after a single injection, ELISA was used to measure the concentrations of G3, B11, G3-dimer, B11-dimer, G3-Fc, or B11-Fc in mouse blood taken at different time points after injection. The concentrations of G3, B11, as well as B11-dimer and G3-dimer, detected at 1 h post-injection were only about 10% of the initial level observed at 0 h. However, B11-Fc and G3-Fc modification constructs had a relatively long serum half-life. The concentrations of B11-Fc and G3-Fc 96 h post-injection were approximately 10% of the initial level. Moreover, two weeks after injection, the concentrations of B11-Fc and G3-Fc antibodies were approximately 1% of the initial level. These data are in agreement with reports on other VHHs with monomeric and fusionbody formats [37] (Figure 5b).
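A serum half-life can be estimated from an ELISA time course like the one above by fitting a simple one-compartment mono-exponential model C(t) = C0·exp(-k·t). The sketch below uses the sampling times listed in the Methods but hypothetical concentration values, and the mono-exponential assumption is itself a simplification.

```python
# Mono-exponential half-life estimate from an antibody clearance time course.
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, c0, k):
    return c0 * np.exp(-k * t)

t_hours = np.array([0, 1, 4, 24, 48, 96, 168, 240, 336], dtype=float)
conc = np.array([100, 95, 88, 60, 40, 11, 4, 2, 1], dtype=float)  # % of t=0, placeholder data

(c0, k), _ = curve_fit(mono_exponential, t_hours, conc, p0=(100.0, 0.01))
half_life = np.log(2.0) / k
print(f"estimated serum half-life: {half_life:.1f} h")
```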
VHHs fused with Fc fragments demonstrated the best protection, and the mice treated with these preparations were still alive two weeks after the end of the experiment. Because B11-Fc and G3-Fc were still detectable in the serum 14 days after injection, we next tested whether mice that had received a single injection of the VHH-Fc preparations remained protected when challenged with a lethal toxin dose 14 days after the original administration. These mice were challenged with 100 LD50 of BoNT/A to examine their protection over time (Figure 5c). All mice that had previously received 100-50 µg of VHH-Fc were still fully protected. Mice that had previously received 20-10 µg of VHH-Fc were either partially protected (75-25%, clone B11-Fc) or not protected (clone G3-Fc). After two weeks, 1 µg of the preparation provided no protection.
We also tested the prophylactic efficacy and possible treatment of the toxin by administering the two selected Fc-fused clones one hour and three hours before and after toxin challenge with 10 LD 50 (Figure 5d). Both clones fully protected the mice before the toxin challenge and one hour after. Three hours after toxin administration, the clones lacked protectiveness. Overall, we obtained numerous clones after two rounds of biopanning; we selected 15 clones for initial analysis based on their CDR3s, chose two clones (B11 and G3) with the best pre-mixed results in phage form in vivo, produced them in protein form, and modified their structure and characteristics by dimerization via a (Gly4Ser) 3 linker and fusion to a human IgG Fc fragment to enhance their protective activity.
We demonstrated that modification of neutralizing VHHs with an Fc fragment (fusionbody) significantly increased the circulation time of antibodies in the blood. At the same time, Fc fusion significantly increased the protective activity of VHH clones to more than 1000 times compared to monomeric forms. Moreover, clone B11-Fc increased the protective activity at least 100,000 times compared to the monomeric form, with 100% protectiveness even at 0.001 µg. The estimated molecular ratio of the 5 LD 50 to 0.001 µg pre-mix was 1:22.
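The molar ratio quoted above can be reproduced approximately from quantities stated elsewhere in the paper (~30 pg per LD50, a ~150 kDa toxin) together with an assumed molecular weight for the Fc fusion. The sketch below is only an order-of-magnitude check; the assumed VHH-Fc mass is illustrative, so the result is close to, but not exactly, the reported 1:22 figure.

```python
# Back-of-the-envelope toxin : VHH-Fc molar ratio for the 5 LD50 / 0.001 ug pre-mix.
def moles(mass_g: float, mw_g_per_mol: float) -> float:
    return mass_g / mw_g_per_mol

toxin_mass_g = 5 * 30e-12            # 5 LD50, ~30 pg of toxin per LD50 (from the Methods)
vhh_fc_mass_g = 0.001e-6             # 0.001 ug of VHH-Fc

toxin_mol = moles(toxin_mass_g, 150_000)   # 150 kDa BoNT/A
vhhfc_mol = moles(vhh_fc_mass_g, 80_000)   # ~80 kDa assumed for the assembled Fc fusion

print(f"toxin : VHH-Fc ~ 1 : {vhhfc_mol / toxin_mol:.0f}")
```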
Discussion
BoNT is the most potent and lethal known toxin. Therefore, the development of new therapeutic agents for exposure prevention and treatment is essential. Therapeutic approaches have been summarized in previous publications [11]. Along with serum therapy, specific antibodies are a promising tool for neutralizing BoNTs. Approaches for the generation, selection, combination, and modification of mAbs against BoNT/A have been described and tested [38][39][40][41][42].
Camelid VHHs are a popular tool for constructing recombinant antibodies to detect and neutralize a range of targets [18][19][20][21][22][23][24]. In particular, it has been shown that HC-only antibodies or VHHs derived from HC-only antibodies, produced by camelids demonstrate strong anti-BoNT activities in animal models [6,43,44]. The main differences of VHHs from conventional antibodies include specific conserved amino acid substitutions in framework region 2 (FR2) that make contact with the LC of classical antibodies, as well as a longer length, more variable amino acid composition in the CDR3s, and a folded back loop [27,45,46]. Specific VHHs can be isolated from VHH libraries. One type of such libraries-immune libraries-can be produced based on peripheral blood lymphocytes isolated from camelids that have been immunized with an antigen of interest in a prime-boost strategy [47]. In our study, we used a toxoid consisting of BoNT/A toxin and small amounts of hemagglutinin (HA) as an antigen and screened for VHHs against BoNT/A and BoNT/A-DTT using phage display technology.
The neutralizing clones in our work were selected on BoNT/A-DTT. It should be noted that the neutralizing clones were later found in the BoNT/A library as well. However, there were fewer of them, so a greater number of clones had to be analyzed. Overall, approximately 300 clones from BoNT/A-DTT and more than 1000 clones from BoNT/A libraries were examined. The approach of splitting the toxin into its HC and LC enriches the neutralizing clones in the library. Based on their neutralizing activity when pre-mixed with different toxin LD 50 doses, two clones were selected. Interestingly, all clones with neutralizing potency did not have the highest ELISA signals. Therefore, the OD (optical density) of the ELISA signal does not always mean high neutralizing activity and in vivo protective activity of these VHHs. The next step was to produce and purify VHHs in protein form to test their neutralizing potency in vivo. However, their protection was weak. Even when pre-mixing a 100 µg dose of these VHHs with the toxin, they had a 50-75% protection efficiency. Administration of lower doses of the preparation was non-protective, with 10 µg lacking effectiveness. Therefore, two clones-B11 and G3-were modified to increase their half-life and protection efficacy. Increasing the VHH size by oligomerization-the coupling of two or more VHHs via a specific linker-increased their protectiveness; however, even bivalent constructs were rapidly cleared [46].
Another approach was VHH fusion to an Fc region of the IgG molecule. Constructs with fused Fc fragments were previously made to extend the antibody half-life to neutralize the Middle East respiratory syndrome coronavirus (MERS-CoV) [48,49], target the C-X-C chemokine receptor type 4 (CXCR4) to prevent human immunodeficiency virus 1 (HIV-1) strain entry and replication in vitro [50], and increase the Fc effector functions to neutralize rotavirus [51] as well as the influenza virus HA [52]. A recent study of modified VHHs dimers and VHHs fused to Fcs targeting unique epitopes on the immunogen, composed of a portion of the central delivery domain and the entire combined repetitive oligopeptides (CROPs) domain of Clostridium difficile type B, showed modest and much greater toxin inhibitions, respectively [53]. It should be noted that a combined approach for efficient serum clearance of BoNT/A has been developed before. Sepulveda et al. [54] used single-chain variable fragments (scFvs) as protein binding agents pre-complexed with one or two single anti-tag mAbs with an Fc domain and tested this construction in vivo for protection and pharmacokinetics. When three or four types of such constructions were given simultaneously, mice were protected at high LD 50 s, and BoNT/A was rapidly cleared from the sera. Protection against BoNT/A light chain was observed when scFvs fused with Fc fragments (scFv-Fc) obtained from a macaque immune library were tested ex vivo [55]. Furthermore, the substantial contribution of the homologous Fc fragment to the potency of three individual anti-botulinum mAbs in antibody preparations has been demonstrated [56].
In our work, two chosen clones were dimerized via a (Gly4Ser) 3 linker or fusion to human Fc fragments to stabilize the molecules and slow down their clearance. These modifications led to improved protectiveness of the preparations. The dimers demonstrated protectiveness at lower doses compared to VHHs, with 75-50% protection efficiency at 50 and 20 µg. Clone G3-dimer failed to protect the mice at 10 µg, like the VHHs, while clone B11-dimer showed partial protectiveness and failed at 1 µg. Therefore, the effective dose (ED 50 ) for clone B11-dimer was 15 µg, and for the clone G3-dimer, was 20 µg.
The best results were obtained after the fusion of the VHHs to Fc fragments, which is consistent with previous research. Both clones fully protected the mice at doses as low as 0.1 µg. The protection provided by G3-Fc was half at 0.01 µg, and the clone failed to protect the mice at 0.001 µg. B11-Fc demonstrated full protection down to the lowest tested dose of 0.001 µg. The dose at which its protectiveness begins to decrease was not reached. Therefore, both clones with Fc fragments showed at least a 1000-fold improvement in protectiveness compared to conventional VHHs and dimers. The ED50 dose for clone B11-Fc could not be determined, while it was 0.01 µg for clone G3-Fc.
In addition, the pharmacokinetic analysis of the various antibody forms showed that the B11 and G3 clones containing the IgG Fc fragment were still detectable in mouse serum 14 days after a single injection, which is consistent with the full or partial protection of animals challenged with the toxin 14 days after administration of various B11-Fc and G3-Fc doses.
Testing prophylaxis and possible treatment by administering B11-Fc and G3-Fc one hour and three hours before and after a 10 LD50 toxin challenge showed that both clones fully protected the mice when given before the toxin challenge as well as one hour after it, but failed to protect the mice three hours after toxin administration. This suggests that these clones could be used for prophylaxis and for emergency therapy immediately after intoxication. It corresponds with the results obtained previously by Sepulveda et al. [54], who showed that mice receiving scFvs pre-complexed with anti-tag mAbs bearing Fc fragments were protected when treated up to two hours after intoxication, whereas treatment four hours after toxin administration only delayed lethality.
The acquisition of antibodies against BoNT/A and confirmation of their neutralizing potency in vivo in phage forms when pre-mixed with the toxin has provided a new method to screen for neutralizing VHHs before obtaining them in protein form, which can efficiently reduce time and material costs for their production and testing. VHHs, along with their modifications as oligomers and fusions to Fc fragments of the IgG, increase the range of options for BoNT/A targeting and neutralization by improving the blood circulation time and therapeutic potential.
Animal Housing Conditions
Alpaca immunization and blood collection were performed on one clinically healthy 4-year-old male alpaca (Vicugna pacos) on the "Russian Alpacas" Farm, private land located in Pokhodkino, Moscow Region, Russia. This sample collection did not involve endangered or protected species, and no specific permissions were required for these locations/activities.
Six-week-old female Balb/c mice (weighing 18-20 g) were purchased from "Pushchino breeding facility" (Pushchino, Moscow Region, Russia) accredited by Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC International) and maintained at the central animal facility at the Gamaleya Research Center of Epidemiology and Microbiology. Mice were kept at a constant temperature (22 ± 2 • C) and relative humidity (50%) with 12 h of artificial light per day. They were housed in individual cages (8 per cage). Mice were fed with dried food and water ad libitum. Mice were observed every two hours post-injection except during the night for one week. The animals with characteristic symptoms of botulism, including muscle paralysis and respiratory difficulty, were euthanized by cervical dislocation.
Antigen Preparation
BoNT/A from the C. botulinum strain A98 was obtained from the collection of the Gamaleya Research Center, Moscow, Russia. BoNT/A is very toxic; therefore, appropriate safety precautions were taken during these experiments, and the neurotoxin was handled in a Class 2 biosafety cabinet. The antigen was a toxoid consisting of BoNT/A toxin and small amounts of HA; 0.1% formaldehyde was used for toxoid preparation, and the treatment was carried out for seven days at 40 °C [57]. Mice received intraperitoneal injections to confirm the absence of residual toxicity. Before use in alpacas, the antigen was purified through a 0.22 µm filter; the final concentration was 60 µg/mL. Before immunization, the neurotoxin was inactivated with 0.05% formaldehyde at pH = 6 for 3-5 days. BoNT/A preparations were used with Freund's adjuvant without pre-adsorption.
Alpaca Immunization
Alpaca immunization was performed using five sequential injections with an interval of 14 days between the first and second immunizations and 10 days between all subsequent immunizations. For the first injection time, 60 µg (1 mL) of the antigen and complete Freund's adjuvant (Sigma, St. Louis, MO, USA) at the v/v ratio of 1:1 were administered. The four subsequent immunizations were performed with 90 µg (1.5 mL) of the antigen and incomplete Freund's adjuvant (Sigma, St. Louis, MO, USA) at the v/v ratio of 1:1. Small blood samples (5-7 mL) were collected before immunization, as well as after the third and fifth immunizations as a control. Five days after the final injection, 50 mL blood sample was collected and placed into a sterile vacuum collection tube with lithium heparin to prevent blood clotting.
Phage Display Library Construction
mRNA isolation, PCR amplification, and library construction were performed as described elsewhere [58,59]. Briefly, camelid VHHs from peripheral blood B-lymphocytes (about 10^6 cells/mL) of the immunized alpaca were cloned into a pHEN1 expression phagemid vector [60]. The primer set used for PCR amplification of the antibody genes appended an SfiI (NEB) restriction site at the 5'-end and a NotI (NEB) site at the 3'-end (Table 2). Recombinant phagemids were introduced into freshly prepared competent suppressor TG1 E. coli cells (Lucigen, Middleton, WI, USA). Using this method, a library of 5 × 10^6 individual clones was obtained for biopanning and isolation.
Phage Preparation and Biopanning
The bacteria from the VHH library were added to 2 × YT medium (Sigma, St. Louis, MO, USA) (with 100 µg/mL ampicillin and 1% glucose) and incubated at 37 • C in a culture shaker at 210 rpm to an OD600 = 0.6. KM13 helper phages (Patrick Chames "Antibody therapeutics and Immunotargeting team" of the Cancer Research Center of Marseille) were added to the bacteria (multiplicity of infection, MOI = 20) and left without shaking at 37 • C for 30 min. The culture was centrifuged at 4000 rpm for 20 min at 4 • C, and the cell pellets were resuspended in 2 × YT medium (with 100 µg/mL ampicillin and 50 µg/mL kanamycin) and cultured overnight at 30 • C, in a culture shaker at 210 rpm. The next day, the culture was centrifuged, the supernatant was purified and concentrated by 20% polyethylene glycol (PEG) 8000, 2.5 M NaCl precipitation, and the pellet was resuspended in phosphate-buffered saline (PBS) with 80% glycerol.
BoNT/A-DTT was treated with 1 mM DTT for at least 30 min at 25 • C. Microtiter plate wells were coated with 5 µg of BoNT/A for the first round and with BoNT/A and BoNT/A-DTT for the second round in 0.05 M NaHCO 3 buffer (pH = 9.6) at 4 • C overnight. After rinsing three times with PBS with 0.1% Tween 20 (TPBS), the plate was blocked with blocking buffer (TPBS with 5% non-fat dried milk) at 37 • C for 1 h. A total of~10 11 phages were added to each well and incubated at 37 • C for 1 h. Unbound phages were removed by washing 10 times with TPBS. The bound phages were eluted by trypsin with a final concentration of 1 mg/mL. TG1 E. coli cells at OD600 = 0.6 were infected with the eluted phages and incubated without shaking at 37 • C for 30 min. After culturing the mixture in 2 × YT agar plates (with 100 µg/mL ampicillin and 1% glucose) at 37 • C overnight, the cells were scraped. Recombinant phages were obtained by packaging with KM13 helper phages, and their titers were determined. A total of two panning rounds were performed.
ELISA Screening for Specific VHHs
For a polyclonal ELISA, an immunoplate (MaxiSorp, Nunc) was coated with 100 ng of BoNT/A and BoNT/A-DTT with 0.05 M NaHCO 3 buffer (pH = 9.6) at 4 • C overnight. The plate was rinsed three times with TPBS, blocked with blocking buffer at 37 • C for 1 h, and a total of~10 11 phages from the starting library or the first or second rounds were added to each well and incubated at 37 • C for 1 h. The unbound phages were removed by washing 10 times with TPBS. The plate wells were detected by horseradish peroxidase (HRP)-conjugated anti-M13 antibodies (1:5000) (Abcam, Cambridge, UK), followed by the addition of peroxidase substrate 3,3 ,5,5 -tetramethylbenzidine (TMB) (Bio-Rad, Hercules, CA, USA). The reaction was stopped by 1 M H2SO4, and the absorbance at 450 nm was read with an iEMS Reader MF (Thermo Labsystems, Waltham, MA, USA). A monoclonal ELISA was performed following the same protocol, with particular clones bound to BoNT/A and BoNT/A-DTT being selected, and the phage plasmids were isolated for sequencing. Antibody genes in phagemids were sequenced with a primer set used for PCR according to the protocol of Big Dye Terminator 3.1 Cycle Sequencing kit for the Genetic Analyzer 3500 Applied Biosystems (Waltham, MA, USA). The electrophoretic DNA separation was performed in 50 cm capillaries with POP7 polymer.
Protein Expression and Purification
The selected VHHs expressed in pHEN1 plasmid were transformed into E. coli BL21 cells (NEB, Ipswich, MA, USA) for expression and purification. An overnight culture was obtained at 37 • C, in a culture shaker at 210 rpm. The next day, the cells were harvested by centrifugation, lysed by BugBuster Protein Extraction Reagent (Novagen, Madison, WI, USA), and VHHs were purified from the lysate by TALON Superflow containing Co 2+ agarose (GE Healthcare Bio-Sciences AB, Uppsala, Sweden). The eluted fraction was subjected to dialysis with Visking dialysis tubing (MWCO 12000-14000) (Serva, Heidelberg, Germany). VHHs were separated by SDS-PAGE (Bio-Rad, Hercules, CA, USA) under denaturing conditions and had an expected molecular weight of~15 kDa.
Affinity and Binding Kinetic Measurements
Antibody affinity was determined by surface plasmon resonance (SPR) using a Biacore 3000 instrument (GE Healthcare Bio-Sciences AB, Uppsala, Sweden). BoNT/A and BoNT/A-DTT were immobilized on the surface of CM5 sensor chips in the amount of 10 µg each in 10 mM sodium acetate buffer pH = 4.5 using the amine coupling kit recommended by the manufacturer (GE Healthcare Bio-Sciences AB, Uppsala, Sweden). VHHs (2-fold dilutions from 300 µg down to 0 µg) were captured on the sensor chips and submitted at a constant flow rate of 15 µL/min with HBS-EP (0.01 M HEPES pH 7.4, 0.15 M NaCl, 3 mM EDTA, 0.005% v/v Surfactant P20) as a running buffer at 25 • C with an injection time of 3 min and dissociation time of 10 min. After each injection, the chip surface was regenerated with 20 mM Tris-HCl, pH = 2.0 for 30 s at a flow rate of 20 µL/min. Calculations were performed using BIAEvaluation software (GE Healthcare Bio-Sciences AB, Uppsala, Sweden) with reference-subtracted fitting.
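The equilibrium constants reported in Table 1 follow directly from the fitted kinetic rates of a 1:1 binding model, KA = ka/kd and KD = kd/ka. The sketch below shows this relation with placeholder rate constants; the values are hypothetical and are not the ones measured here.

```python
# Equilibrium constants from SPR association/dissociation rates (1:1 model).
def equilibrium_constants(ka_per_M_s: float, kd_per_s: float):
    """Return (KA in 1/M, KD in M) from the on- and off-rates."""
    return ka_per_M_s / kd_per_s, kd_per_s / ka_per_M_s

ka, kd = 1.0e5, 1.0e-4                 # hypothetical on/off rates
KA, KD = equilibrium_constants(ka, kd)
print(f"KA = {KA:.2e} 1/M, KD = {KD:.2e} M ({KD * 1e9:.1f} nM)")
```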
Production of VHHs in Dimer Form and Fused with IgG Fc Fragments
The selected VHHs were held by a glycine-serine linker (Gly4Ser) 3 and expressed in the pET30 plasmid. Protein expression and purification were performed as for VHHs.
Nucleotide sequences of VHH genes fused to the human IgG Fc-fragment were synthesized and cloned into the plasmid pShuttle-CMVFUSE (Stratagene, La Jolla, CA, USA). The CHO-S cell culture (ThermoFisher, Waltham, MA, USA, R80007) was transiently transfected with pFUSE plasmid using the CHO Gro System (Mirus Bio, Madison, WI, USA), according to the manufacturer's protocol. A plasmid carrying the green fluorescent protein gene in a 10% amount was used as control. The efficiency of transfection was assessed using Axio Imager Z1 (Carl Zeiss, Oberkochen, Germany) and determined by the number of fluorescence cells compared to their overall amount, which was 90%. Cells were cultured in shake flasks at 125 rpm, 5% CO 2 , 80% humidity, at 37 • C during transfection and 32 • C 24 h after the transfection for 10 days. Starting from day three, Cellboosts 7a (3%), 7b (0.3%) (HyClone, San Angeo, TX, USA), and 1% Sigma Bioreactor Feed (Sigma, St. Louis, MO, USA) were added each day. After 10 days of cultivation, the culture was clarified by centrifugation at 5000× g and cleared through a 0.8 µm filter. The antibodies were purified using protein A affinity chromatography on an AKTA start chromatography system (GE Healthcare Bio-Sciences AB, Uppsala, Sweden), with a mAbSelect SuRe 1 mL column (GE Healthcare Bio-Sciences AB, Uppsala, Sweden), according to the manufacturer's protocol.
Identification of the Toxin Polypeptide Chain that Binds Antibodies by Western Blot
To determine which BoNT/A polypeptide chain binds VHHs, the toxin was denatured by DTT into its HC and LC. SDS-PAGE was performed using a mini-protean TGX stain-free precast gel (Bio-Rad, Hercules, CA, USA) in denaturing conditions. The separated bands were transferred onto a nitrocellulose membrane using a Trans-Blot Turbo System (Bio-Rad, Hercules, CA, USA). The membrane was blocked with a blocking buffer at 37 • C for 1 h. VHHs were diluted in the blocking buffer, added to the membrane, and incubated at 37 • C for 1 h. After rinsing the membrane three times with TPBS, anti-His-tag-HRP-mouse antibodies (GenScript, Piscataway, NJ, USA) diluted 1:5000 in blocking buffer were added, and the membrane was then incubated at 37 • C for 1 h. The membrane was rinsed three times with TPBS, followed by the addition of Clarity Western ECL Blotting Substrates (Bio-Rad, Hercules, CA, USA) for detection. The membrane was visualized on an Amersham Imager 600 (GE Healthcare, Buckinghamshire, UK).
Toxin Preparation for In Vivo Neutralization Assay
The neurotoxin as a multi-oligomeric complex toxin with HAs and NTNHA (non-toxic non-hemagglutinin) (800 kDa) was used for immunization. Toxins were prepared and purified as follows. The strain was cultivated under anaerobic conditions for five days. Bacterial cells were separated by centrifugation at 5000× g for 30 min at 10 • C. The proteins from the culture filtrate were concentrated by acid precipitation at pH = 3.8 for 45 min. The precipitate was separated by centrifugation at 12,000× g for 30 min at 10 • C and dissolved in 47 mM citrate-phosphate buffer with pH = 5.6. Gel filtration S300 and ion-exchange chromatography on AKTA start chromatography system (GE Healthcare Bio-Sciences AB, Uppsala, Sweden) on DE cellulose (Pharmacia, Uppsala, Sweden) were then carried out. The toxin (90-95%, 150 kDa) was purified by additional DE cellulose chromatography (Pharmacia, Uppsala, Sweden) in borate buffer pH = 8 and eluted by 50 mM NaCl. Non-sorbed material containing specific antigenic and biological activity was used as BoNT complex with HA. The specific antigenic activity was a positive reaction with monospecific antibodies to BoNT/A, HA, and NTNHA (Gamaleya Laboratory of Clostridiosis and commercial preparation of Scientific Centre for Expert Evaluation of Medicinal Products Russian Federation).
In Vivo Toxin Neutralization with Phages or Proteins
Phages with titers 10 11 -10 12 CFU/mL were prepared, as described above, and mixed with standard saline solution. VHH proteins at different amounts ranging from 100 µg to 0.001 µg were prepared, as described above, and mixed with standard saline solution. Balb/c 18-20 g female mice were divided into groups of four, including the positive and negative control groups. Phages or VHH proteins were premixed with the appropriate toxin LD 50 (1LD~30 pg BoNT/A) and incubated for 1 h at 37 • C. All mice received one intraperitoneal 500 µL injection of phages or proteins premixed with various LD 50 s of the toxin and were observed once a day for one week. A specific pathological pattern was observed in sick mice, which is the pathological pattern with dystonia's abdominal muscles (waistline increasing) and death at 30-50 pg that is neutralized by monospecific antibodies to BoNT/A. The positive control group received the rabbit antitoxin IgG (Gamaleya Laboratory of Clostridiosis), and the negative control group received a standard saline solution.
Blood Clearance of VHHs Modifications in Mice
A group of five six-week-old female Balb/c mice was intravenously (i.v.) injected with 100 µg of G3, B11, G3-dimer, B11-dimer, G3-Fc, or B11-Fc into the tail vein. Blood was collected from the facial vein at the 0, 1, 4, 24, 48, 96, 168, 240, and 336 h time points. Sera were separated and stored at −20 °C until further use. Concentrations of the injected antibody molecules in the collected samples were measured by ELISA. For ELISA, BoNT/A was coated on microtiter plates (Nunc) overnight at 4 °C at 100 ng/well in 50 mM bicarbonate buffer. After washing three times with TPBS, plates were blocked with 5% dry milk in PBS for one hour at 37 °C. Then, 200× diluted sera were added to the wells, followed by a one-hour incubation. Mouse anti-His-tag antibody [HRP] (1:1000) (GenScript, Piscataway, NJ, USA) was used to detect monomers and dimers of the G3 and B11 antibodies in the mouse sera. Goat anti-human IgG
"Biology"
] |
Reversible heart rhythm complexity impairment in patients with primary aldosteronism
Excess aldosterone secretion in patients with primary aldosteronism (PA) impairs their cardiovascular system. Heart rhythm complexity analysis, derived from heart rate variability (HRV), is a powerful tool to quantify the complex regulatory dynamics of human physiology. We prospectively analyzed 20 patients with aldosterone-producing adenoma (APA) who underwent adrenalectomy and 25 patients with essential hypertension (EH). The heart rate data were analyzed by conventional HRV and by heart rhythm complexity analysis, including detrended fluctuation analysis (DFA) and multiscale entropy (MSE). We found that APA patients had significantly decreased DFAα2 on DFA analysis and decreased area 1–5, area 6–15, and area 6–20 on MSE analysis (all p < 0.05). Area 1–5, area 6–15, and area 6–20 in the MSE study correlated significantly with log-transformed renin activity and the log-transformed aldosterone-renin ratio (all p ≤ 0.01). The conventional HRV parameters were comparable between PA and EH patients. After adrenalectomy, all the altered DFA and MSE parameters improved significantly (all p < 0.05), whereas the conventional HRV parameters did not change. Our results suggest that heart rhythm complexity is impaired in APA patients and that this impairment is at least partially reversed by adrenalectomy.
Analysis of the variation of heart rate oscillations, commonly known as heart rate variability (HRV), is widely used to assess alterations of autonomic function in human studies because it is simple, noninvasive, and inexpensive 10. Traditional linear analysis of HRV is also used as a tool to evaluate the autonomic system, with high-frequency power spectrum analysis in particular used to evaluate parasympathetic function 10. HRV has been commonly applied to predict outcome in patients with cardiovascular disease 11. In recent years, newer methods based on nonlinear and nonstationary signal modeling have been developed and successfully applied 12. The concept of heart rhythm complexity analysis by nonlinear methods, including detrended fluctuation analysis (DFA) and multiscale entropy (MSE), is based on the assumption that a healthy system exhibits meaningful complex control over different time scales to maintain operation in an ever-changing environment 13,14. Conversely, decreased complexity of heart rate dynamics has been demonstrated in patients with disease states such as heart failure, stroke, sepsis, and critical illness requiring extracorporeal life support [15][16][17][18]. Compared to traditional HRV parameters based on linear methodology, heart rhythm complexity analysis has shown better predictive power for prognosis in patients with cardiovascular disease 15,19.
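To make the MSE idea concrete, the sketch below coarse-grains an RR-interval series at each scale and computes sample entropy on every coarse-grained series; an "area" is then taken as the sum of entropies over a range of scales. The parameter choices (m = 2, r = 0.15 × SD) are common defaults rather than values taken from this paper, and whether "area" denotes a plain sum or a trapezoidal integral of the MSE curve is an assumption.

```python
# Minimal multiscale entropy (MSE) sketch for an RR-interval series.
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (scale 1 returns the series)."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.15):
    """Sample entropy SampEn(m, r) with tolerance r = r_factor * SD of the series."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def matches(length):
        # count template pairs (i < j) whose Chebyshev distance is <= r,
        # using the same number of templates (n - m) for lengths m and m + 1
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float('nan')

def mse_area(rr_intervals, scales):
    """Sum of sample entropy over the given scales (e.g. 1-5 or 6-20)."""
    return sum(sample_entropy(coarse_grain(rr_intervals, s)) for s in scales)

# usage on a synthetic RR series (milliseconds)
rr = 800 + 40 * np.random.randn(3000)
print(mse_area(rr, range(1, 6)), mse_area(rr, range(6, 21)))
```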
Whether aldosterone excess affects heart rhythm complexity is unclear and formed the basis of the current study.
Results
Patients. Twenty patients (9 men) with aldosterone-producing adenoma (APA) undergoing adrenalectomy and 25 patients with EH were enrolled. The clinical data are shown in Table 1. Patients with APA had a significantly higher plasma aldosterone concentration (PAC), higher left ventricular mass index, higher plasma aldosterone-to-renin activity ratio (ARR), lower serum potassium levels, and lower plasma renin activity (PRA) than patients with EH. Regarding medication usage, a significantly higher percentage of APA patients received α-blocker and spironolactone treatment, and a lower percentage received angiotensin receptor blocker (ARB) treatment.
Post-adrenalectomy follow-up. By one year after adrenalectomy, the number of anti-hypertensive medications, log PAC, log ARR, and left ventricular mass index had significantly decreased while PRA and serum K level had increased (Table 4). Eleven out of 20 patients were cured of hypertension.
Discussion
The major findings of this study were: (1) APA patients had worse heart rhythm complexity than EH patients, independently of BP; (2) in the correlation analysis across all participants, the heart rhythm complexity parameters correlated with log PRA and log ARR, but not with blood pressure or altered cardiac structure; and (3) the impaired heart rhythm complexity improved after adrenalectomy.
This is the first study to show the adverse effects of aldosterone on heart rhythm complexity, which quantifies the complex regulatory dynamics of human physiology 20. HRV is commonly used to assess autonomic function and for risk stratification of patients with cardiovascular disease 11,21. Compared to traditional linear HRV parameters, nonlinear metrics (including DFA and MSE) have shown better predictive power for clinical outcomes in heart failure patients and in an experimental sepsis model 15,19,22. Recently, the MSE method, specifically developed to treat heterogeneous complexity, was shown to extend the traditional entropy algorithm to quantify the information richness over multiple time scales in physiological systems 20. This complex structure "breaks down" in diseased patients, such as those with heart failure or critical illness, and may be further degraded in those with poor prognosis 15,18. In our previous study, MSE provided the best prognostic prediction in patients with heart failure 15. MSE also predicted the outcome of severely injured trauma patients requiring intensive care unit admission across the diverse spectrum of traumatic injury 23, the neurological outcome after stroke 16, the clinical consequences of sepsis 17, and the outcome of patients with critical illness receiving extracorporeal life support 18. In the present study, PA patients had worse heart rhythm complexity than EH patients, suggesting that aldosterone excess impairs the complex regulatory dynamics of human physiology. Furthermore, in the correlation analysis across all participants there were significant associations between heart rhythm complexity and log PRA and log ARR, but not BP or cardiac structure, implying a direct association between aldosterone and heart rhythm complexity. Moreover, the impaired heart rhythm complexity improved after adrenalectomy, suggesting that the impairment is at least partially reversible.
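As a companion to the MSE sketch above, detrended fluctuation analysis (the other nonlinear metric used here) can be summarized as: integrate the mean-subtracted RR series, remove a local linear trend in windows of size s, and fit the log-log slope of the resulting fluctuation F(s). The scale ranges used below for the short- and long-term exponents (α1, α2) are common conventions and are assumed rather than taken from this paper.

```python
# Minimal detrended fluctuation analysis (DFA) sketch for an RR-interval series.
import numpy as np

def dfa_fluctuation(rr, window):
    y = np.cumsum(np.asarray(rr, float) - np.mean(rr))     # integrated profile
    n = (len(y) // window) * window
    segments = y[:n].reshape(-1, window)
    t = np.arange(window)
    sq_residuals = []
    for seg in segments:
        coef = np.polyfit(t, seg, 1)                        # local linear trend
        sq_residuals.append(np.mean((seg - np.polyval(coef, t)) ** 2))
    return np.sqrt(np.mean(sq_residuals))

def dfa_exponent(rr, scales):
    f = [dfa_fluctuation(rr, s) for s in scales]
    slope, _ = np.polyfit(np.log(list(scales)), np.log(f), 1)
    return slope

rr = 800 + 40 * np.random.randn(5000)
alpha1 = dfa_exponent(rr, range(4, 12))     # short-term exponent (assumed scale range)
alpha2 = dfa_exponent(rr, range(12, 65))    # long-term exponent (assumed scale range)
print(round(alpha1, 2), round(alpha2, 2))
```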
In contrast to the significant differences in nonlinear parameters between PA and EH patients, the values of traditional HRV parameters (such as time and frequency domain parameters) were comparable between the two groups in the current study. This also applied when comparing the pre- and post-operative data of APA patients. As mentioned earlier, traditional HRV parameters are commonly used to evaluate the autonomic system, particularly high frequency power spectrum analysis to evaluate parasympathetic function 10 . Data from the current study did not provide further evidence to support PA patients having worse autonomic function than EH patients. The relation between aldosterone and the autonomic system is complex. In animal studies, aldosterone excess is associated with autonomic dysfunction. Mineralocorticoid receptors have been found near the hippocampus and are regulated by mineralocorticoids and glucocorticoids 24 . Further, aldosterone infusion into the cerebral ventricles increased SNA and BP, an effect which was antagonized by administration of a mineralocorticoid antagonist 6 . In another animal model, the increase in BP induced by aldosterone was blocked by chemical sympathectomy 25 . In the elderly hypertensive population, the mineralocorticoid antagonist spironolactone decreased norepinephrine levels 26 . This evidence suggests a close relationship between aldosterone and the autonomic system.
Whether PA patients have impaired autonomic function is controversial. Acute infusion of aldosterone in healthy volunteers increased the standard deviation of RR intervals and total power, and was associated with a trend towards increased time domain HRV parameters 27 . However, in that study, basal muscle SNA, BP and heart rate remained unaffected by aldosterone administration. This suggests that, acutely, aldosterone infusion tends to increase cardiac vagal activity and has no effect on sympathetic activity. PA patients have lower sympathetic vasomotor tone than patients with EH 28 . Several studies have evaluated muscle SNA in patients with PA 7,8 . In the study by Kontak et al., PA patients had similar muscle SNA to EH patients 7 . Both PA and EH patients had higher muscle SNA than normotensive subjects. However, muscle SNA decreased significantly after adrenalectomy, accompanied by a decrease in BP 7 . Whether removal of aldosterone excess or the decrease in BP contributed to this improvement is unclear. In another study by Miyajima et al., PA patients had lower muscle SNA compared to EH patients 8 . In a third study by Matsukawa et al. 9 , PA patients had lower muscle SNA compared to normotensive subjects. Interestingly, muscle SNA was significantly elevated after unilateral adrenalectomy in PA patients, which is in contrast to the study by Kontak et al. 7 . Although differences in race and other inter-individual factors may have contributed to this disparity, it may also reflect the fact that regulation of sympathetic activity in PA patients is complex and influenced by factors other than aldosterone itself.
The relationship between aldosterone and parasympathetic activity is also complex, and conflicting data exist. Acute infusion of aldosterone in healthy volunteers increased cardiac vagal activity in one HRV study 27 . Aliskiren, which reduces activity of the renin-angiotensin-aldosterone system via direct inhibition of renin, increased parasympathetic function as measured by two cardiovascular autonomic reflex tests 29 . However, the high frequency domain of HRV, an index of cardiac parasympathetic tone, did not change 29 . Studies assessing parasympathetic function in PA are scarce. To the best of our knowledge, there are only two studies from the same group dealing with this issue. In the first, the baroreflex sensitivity of PA patients was found to be similar to that of age-matched healthy subjects, whereas baroreflex sensitivity was impaired in EH patients 30 .
Table 2. Holter parameters of patients at baseline. Values are mean ± SD. APA = aldosterone producing adenoma; EH = essential hypertension; SDNN = standard deviation of normal RR intervals; pNN20 = percentage of absolute differences in consecutive normal RR intervals exceeding 20 ms; pNN50 = percentage of absolute differences in consecutive normal RR intervals exceeding 50 ms; LF/HF = the ratio between low and high frequency components.
In the second study, PA patients showed similar high frequency power in HRV compared to normotensive subjects, whereas high frequency power in HRV was decreased in patients with EH 28 . These data suggest that, although hypertensive, PA patients have intact parasympathetic activity. Against this notion, however, was the observation that the high frequency value decreased after adrenalectomy in the PA patients 28 .
In this study, we only enrolled APA patients who underwent adrenalectomy, and not patients with bilateral adrenal hyperplasia. Adrenalectomy is the treatment of choice for APA patients, whereas medical treatment with spironolactone is the treatment of choice in patients with bilateral adrenal hyperplasia. Compared to medical treatment with spironolactone, adrenalectomy decreases left ventricular mass more quickly 31 . In another study, only adrenalectomy, but not spironolactone, improved arterial stiffness in PA patients (average follow-up period: one year) 32 . This evidence implies that adrenalectomy is a more effective method to reverse the effects of aldosterone on the cardiovascular system. Therefore,
we enrolled APA patients who underwent adrenalectomy as the study group to enhance the difference between pre- and post-treatment. One advantage of this study is that BP and the number of antihypertensive medications were comparable between the APA and EH groups. It is common for PA patients to have significantly higher BP than EH patients 33 . In our previous studies, APA patients also had significantly higher BP than EH patients 4 . A significant between-group difference in BP (which was not present in this study) would have made the interpretation of results more difficult and complex. Although the correlations between HRV parameters and BP were not significant, we still could not exclude a possible confounding effect from a difference in BP. Fortunately, the BP difference was not significant between the APA and EH groups in this study, which made the interpretation easier. Our study has several limitations. First, the study population was small. Further large prospective studies are needed to confirm the results. Second, we did not measure muscle SNA to evaluate sympathetic activity. However, the main purpose of this study was to evaluate the influence of aldosterone on heart rhythm complexity rather than sympathetic activity per se.
Values are mean ± SD. APA = aldosterone producing adenoma; EH = essential hypertension; SDNN = standard deviation of normal RR intervals; pNN20 = percentage of absolute differences in normal RR intervals greater than 20 ms; pNN50 = percentage of absolute differences in normal RR intervals greater than 50 ms; LF/HF = low frequency to high frequency ratio.
In conclusion, APA patients had impaired heart rhythm complexity compared to EH patients, and this was independent of BP. The impaired heart rhythm complexity improved after adrenalectomy.
Methods
Patients. This prospective study enrolled 20 patients diagnosed with unilateral APA who underwent adrenalectomy during the period from December 2006 to October 2009. The patients were evaluated and registered in the Taiwan Primary Aldosteronism Investigation (TAIPAI) database. The database was constructed for quality assurance in two medical centers (National Taiwan University Hospital (NTUH), Taipei; Taipei Medical University Hospital, Taipei), three metropolitan hospitals (Cardinal Tien Hospital, New Taipei City; Taipei Tzu Chi Hospital, New Taipei City; Yun-Lin Branch of NTUH, Douliou City), and two local hospitals (Hsin-Chu Branch of NTUH, Hsin-Chu City; Zhongxing Branch of Taipei City Hospital, Taipei) [34][35][36] . In addition, 25 patients with EH were enrolled as the control group. Medical history including demography and medication was carefully recorded. Biochemical parameters were measured at the first evaluation of these patients at National Taiwan University Hospital. PRA was measured as the generation of angiotensin-I in vitro using a commercially available radioimmunoassay kit (Cisbio, Bedford, MA); PAC was measured by radioimmunoassay with commercial kits (Aldosterone Maia Kit; Adaltis Italia, Bologna, Italy). All antihypertensive medications were discontinued for at least 21 days before measuring plasma PRA and PAC levels. Diltiazem and/or doxazosin were administered for control of markedly high blood pressure when required. All APA patients underwent 24-h ambulatory ECG Holter recording (MyECG E3-80, Mircostar Company, Taipei) within 3 months before the operation and one year after the operation. EH patients also underwent 24-h ambulatory ECG Holter recording at the time of enrollment. The ECG signals were sampled at 250 Hz and stored on a secure digital memory card for offline analysis on a microcomputer. This study was approved by the Institutional Review Board of National Taiwan University Hospital, and all subjects gave written informed consent, including for the storage of their information in the hospital database and its use for research. The methods in the study were carried out in accordance with the approved guidelines.
Data Pre-Processing. Each digitized 24-hour ECG recording was annotated by an automatic algorithm and then carefully inspected and corrected by technicians to extract the RR intervals; ectopic beats were interpolated from their adjacent RR intervals. A four-hour segment of daytime RR intervals (between 9 AM and 5 PM) was selected for analysis in order to avoid the confounding effects of sleep or diurnal rhythm 37 . Only RR series in which qualified normal sinus beats made up more than 80% of the recording were included for further analysis.
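The pre-processing steps above can be expressed compactly. The sketch below is illustrative only: it assumes the RR intervals (in ms), beat annotations (with "N" marking normal sinus beats) and beat timestamps are already available as arrays, and it uses a 9 AM start for the four-hour daytime window; these names and choices are assumptions, not the authors' implementation.

```python
import numpy as np

def preprocess_rr(rr_ms, labels, beat_time_s, day_start_s=9 * 3600, hours=4):
    """Clean a 24-h RR series and cut a 4-h daytime segment (illustrative sketch)."""
    rr = np.asarray(rr_ms, dtype=float)
    normal = np.asarray(labels) == "N"                     # qualified normal sinus beats
    idx = np.arange(rr.size)
    rr_clean = rr.copy()
    # replace ectopic beats by interpolating between adjacent normal RR intervals
    rr_clean[~normal] = np.interp(idx[~normal], idx[normal], rr[normal])
    # select a fixed daytime window (here 9 AM to 1 PM, i.e. a 4-hour segment)
    t = np.asarray(beat_time_s, dtype=float)
    window = (t >= day_start_s) & (t < day_start_s + hours * 3600)
    # require that normal sinus beats make up more than 80% of the segment
    if normal[window].mean() <= 0.80:
        return None                                         # recording excluded from analysis
    return rr_clean[window]
```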
Time and frequency domain analysis. The standard deviation of normal RR intervals, the percentage of the absolute change in consecutive normal RR interval exceeds 50 ms, and percentage of the absolute change in consecutive normal RR interval exceeds 20 ms were calculated to represent the total variance and vagal modulation of heart rate. The spectrum analyses were carried out according to the recommendations from the European Society of Cardiology and the North American Society of Pacing Electrophysiology 10 . Instead of calculating the spectrum in overall length, we divided the data into 16 segments and Fourier transformation was performed individually to avoid the influence of external nonstationarity. The spectral density of each frequency band-high frequency (0.15-0.4 Hz), low frequency (0.04-0.15 Hz), and very low frequency (0.003-0.04 Hz) were computed by averaging the absolute powers (msec 2 ) in separated segments.
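For illustration, a minimal sketch of these time- and frequency-domain measures follows. The 16-segment splitting and the band limits match the text; resampling the RR tachogram to an evenly spaced series, the 4 Hz resampling rate and the Hann window are assumptions made for the sketch.

```python
import numpy as np

def time_domain(rr_ms):
    diffs = np.abs(np.diff(rr_ms))
    return {
        "SDNN": np.std(rr_ms, ddof=1),
        "pNN20": np.mean(diffs > 20) * 100.0,
        "pNN50": np.mean(diffs > 50) * 100.0,
    }

def frequency_domain(rr_ms, n_segments=16, fs=4.0):
    # resample the RR tachogram to an evenly spaced series before the FFT
    t = np.cumsum(rr_ms) / 1000.0
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(grid, t, rr_ms)
    bands = {"VLF": (0.003, 0.04), "LF": (0.04, 0.15), "HF": (0.15, 0.40)}
    powers = {name: [] for name in bands}
    for seg in np.array_split(rr_even, n_segments):        # 16 segments, one FFT each
        seg = (seg - seg.mean()) * np.hanning(seg.size)    # demean and apply a Hann window
        freqs = np.fft.rfftfreq(seg.size, d=1.0 / fs)
        psd = (np.abs(np.fft.rfft(seg)) ** 2) / (fs * seg.size)   # rough PSD in ms^2/Hz
        df = freqs[1] - freqs[0]
        for name, (lo, hi) in bands.items():
            powers[name].append(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
    # average absolute band powers (ms^2) across the separate segments
    return {name: float(np.mean(vals)) for name, vals in powers.items()}
```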
Nonlinear methods. Nonlinear analysis enables researchers to probe the fundamental characteristics of the signals. We applied two methods (DFA and MSE) for their ability to evaluate the underlying properties of the signals hidden beneath the seemingly chaotic dynamics 20,38 .
DFA analysis. DFA is a modified root-mean-square analysis used to evaluate the fractal correlation (a time-invariant property) underlying heart rate fluctuations, which originate from interacting regulatory mechanisms 38 . First, it eliminates environmental interferences by removing the linearly fitted "local" trend over different time scales in an integrated time series. Then, the root-mean-square fluctuation of this integrated and detrended time series is calculated. This procedure is repeated over different time scales, and the slope of the curve (α exponent) is computed on the log-log plot of fluctuation versus box size, which indicates the fractal correlation property of the time series.
Since heart rate oscillation over short time scales is dominated by respiratory sinus arrhythmia, a crossover phenomenon of the α exponent in heart rate dynamics between short (4-11 beats) and long (11-64 beats) time scales has been proposed to provide a better understanding of the fractal correlation property of a physiological system 38 . The short-term (α1) and long-term (α2) fractal correlations were calculated.
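A compact sketch of the DFA procedure described above is shown below, with box sizes 4-11 beats for the short-term exponent (α1) and 11-64 beats for the long-term exponent (α2); it is a bare-bones illustration rather than the authors' implementation.

```python
import numpy as np

def dfa_alpha(rr_ms, scales):
    y = np.cumsum(rr_ms - np.mean(rr_ms))                  # integrated time series
    fluct = []
    for n in scales:
        n_boxes = y.size // n
        f2 = []
        for b in range(n_boxes):
            seg = y[b * n:(b + 1) * n]
            x = np.arange(n)
            coef = np.polyfit(x, seg, 1)                    # linearly fitted local trend
            f2.append(np.mean((seg - np.polyval(coef, x)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))                  # RMS fluctuation at scale n
    # slope of log F(n) versus log n gives the fractal exponent alpha
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# rr = ...  # cleaned RR series
# alpha1 = dfa_alpha(rr, scales=np.arange(4, 12))           # short-term, 4-11 beats
# alpha2 = dfa_alpha(rr, scales=np.arange(11, 65))          # long-term, 11-64 beats
```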
MSE analysis.
In contrast to simply using a single time scale to estimate the predictability of a time series, MSE assesses the complex structure of physiological signals over different time scales. It comprises two steps: 1) coarse-graining the signals into different time scales; 2) quantifying the degree of predictability in each coarse-grained time series using sample entropy 39 . The calculated entropy, represented as a function of scale, reflects the information richness embedded in different time scales. It has been shown that different features at small and large scales in different groups of subjects may assist clinical categorization 20 . Therefore, four parameters were calculated from the MSE profile: the summations of entropy values over scales 1-5 (area 1-5), scales 6-15 (area 6-15), and scales 6-20 (area 6-20), which quantify the complexity exhibited over short and long time scales, respectively; and the linearly fitted slope over scales 1-5 (slope 5), which characterizes the short-scale behavior (Fig. 2) 20 .
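The sketch below illustrates the two MSE steps (coarse-graining and sample entropy) and the four summary parameters named above (area 1-5, area 6-15, area 6-20 and slope 5). The embedding dimension m = 2 and tolerance r = 0.15 × SD are common defaults assumed here; the exact settings are not stated in this excerpt.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(x.size - m)])
        total = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)  # Chebyshev distance
            total += np.sum(d <= r)
        return total
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def mse_profile(rr, max_scale=20):
    entropies = []
    for scale in range(1, max_scale + 1):
        n = rr.size // scale
        coarse = rr[:n * scale].reshape(n, scale).mean(axis=1)   # coarse-graining
        entropies.append(sample_entropy(coarse))
    e = np.array(entropies)
    scales = np.arange(1, max_scale + 1)
    return {
        "area_1_5": e[0:5].sum(),
        "area_6_15": e[5:15].sum(),
        "area_6_20": e[5:20].sum(),
        "slope_5": np.polyfit(scales[0:5], e[0:5], 1)[0],
    }
```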
In order to avoid an unwanted effect of external nonstationarity, which may compromise entropy-based analysis, we used the empirical mode decomposition method, based on the Hilbert-Huang transform, to detrend the original R-R interval signals 40 . The data were subsequently evaluated by MSE analysis after processing. Instead of removing the trend with a priori mathematical formulas such as polynomial or linear functions, the empirical mode decomposition algorithm can better approximate the hidden trend in complex time series 40-42 .
Diagnosis of APA. APA was diagnosed on the basis of the following four conditions: (1) autonomous excess aldosterone production evidenced by an ARR > 35, a TAIPAI score larger than 60% 43 , and post-saline loading PAC > 10 ng/dl; (2) adenoma evidenced by a computed tomography (CT) scan on pre-operative evaluation; (3) lateralization of aldosterone secretion at adrenal venous sampling or during dexamethasone suppression NP-59 SPECT/CT 44 ; (4) pathologically proven adenoma after adrenalectomy for those undergoing surgery, and subsequent cure or improvement of hypertension control with correction of hypokalemia and normalization of PAC and PRA 34,45 . Dexamethasone suppression NP-59 SPECT/CT was used in patients who were at risk of contrast nephropathy, had inconclusive results from adrenal venous sampling, or had discordant lateralization between imaging studies (such as CT) and adrenal venous sampling.
Echocardiography.
A Hewlett-Packard Sonos 5500 ultrasound system equipped with an S3 transducer was used. Echocardiography, including two-dimensional, M-mode and Doppler ultrasound recordings, was performed. Left ventricular dimension, interventricular septum and posterior wall thicknesses, and left ventricular ejection fraction (M-mode) were measured via a parasternal long axis view. Left ventricular mass index was calculated according to the method of Devereux et al. 46 . Four parameters of the MSE were assessed. The first two were the linearly fitted slope between scales 1-5 (slope 5) and the area under the curve between scales 1-5 (area 1-5), representing complexity over short scales. For longer scales, the entropy profile typically increases gradually as the time scale lengthens and reaches a plateau around scale 15, where information richness can accumulate rapidly if the system responds well. We therefore calculated both the area under the curve between scales 6-15 (area 6-15) and between scales 6-20 (area 6-20) to represent complexity over long scales.
Post-adrenalectomy follow-up. Repeat serum biochemistry, PAC, PRA, 24-h ambulatory ECG Holter recording and echocardiography were performed one year after adrenalectomy. Hypertension was considered cured if the blood pressure decreased to 140/90 mm Hg or less after adrenalectomy without the need for anti-hypertensive medications. These criteria had to be met one year post adrenalectomy 31 . Patients who were cured within one year but later developed hypertension were still classified as cured.
Statistical analysis. Data were expressed as mean ± SD. Comparisons of continuous data between APA and EH patients were made using the t test. Differences between proportions were assessed by the chi-square or Fisher exact test. Comparisons between pre-operative and post-operative parameters were made using the paired t test. Pearson's correlation test was used to analyze the association between heart rhythm complexity parameters and their determinants. Data for PAC, PRA, and ARR were log-transformed before the correlation analysis due to non-normality, as determined by the Kolmogorov-Smirnov test. Before further analysis, the log-transformed data were tested again to confirm normality of the distribution. Significant determinants in Pearson's correlation test (p < 0.05) were then entered into a multivariate linear regression with stepwise subset selection to identify independent factors predicting MSE parameters. A value of p < 0.05 was considered to indicate statistical significance. | 4,940.6 | 2015-08-18T00:00:00.000 | [
"Medicine",
"Biology"
] |
Forensic analysis of auditorily similar voices
ABSTRACT Purpose: to verify the contributions of acoustic spectrographic analysis to the forensic identification of speakers with auditorily similar voices, considering the distinctive behavior of the acoustic parameters: formants of the vowel "é" and of connected speech, mean fundamental frequency in Hz, linear prediction curve of the vowel "é", and linear prediction curve area; and to propose an objective method for using the analyzed parameters. Methods: a quantitative, qualitative and descriptive study, conducted in Pernambuco on 16 pairs of male siblings, aged 18-60 years. The subjects recorded videos from which the audios were extracted, numbered and sent to three examiners in two groups (older brothers and younger brothers) for perceptual-auditory pairing. The correct pairings, indicated by at least two examiners, were submitted to acoustic analysis. The statistical tests included Wilcoxon, Kruskal-Wallis and Bonferroni, with p<0.05. Results: the analyses of formants and of the mean fundamental frequency were not sufficient to distinguish similar voices. For the first time, a statistically significant distinction was observed in the measurements of the areas generated by the linear prediction curve graphs. Conclusion: among the parameters studied, the measurements of the areas of the linear prediction curve objectively proved effective in distinguishing speakers with auditorily similar voices.
Three methods are used by specialists in the field of forensic speaker identification: the auditory-perceptual method, the acoustic method and the automatic method 7 .
The perceptual-auditory method highlights the parameters to be analyzed and presents a strong subjective aspect through a qualitative approach 8 .
The acoustic method uses the spectrogram to analyze the waves produced at the moment of vocal emission, allowing quantitative analysis 9 . Evaluation by acoustic parameters must be standardized, since this analysis provides a number 10 , which facilitates analysis, comparison and storage of measurements. The spectrogram generated in this method is a three-dimensional graph that records the acoustic measurement of the sound wave. It contains information related to the sound parameters, i.e., intensity, duration and frequency (time on the horizontal axis, frequency in hertz on the vertical axis, and intensity in decibels represented by color) 9 . In a simplified manner, the acoustic evaluation quantifies the sound signal, which leads to an objective analysis of the voice. There is also the following distinction: while acoustics measures the sound signal, the auditory-perceptual evaluation offers a description of the vocal signal with only hearing as its basic instrument 11 . A recent study at the University of Pernambuco concluded that the two proposed methods (perceptual-auditory and acoustic) should be used in association, since neither is better than the other and they complement each other 7 .
The other method, the automatic one, is performed by software that tries to reduce subjective analyses as much as possible. The software is fed with information, such as vocabulary, programmed and pronounced in many different manners. In some European countries, the use of automatic systems is accompanied by the insights of a professional with knowledge of phonetics and even linguistics. For example, at the University of Gothenburg, the software used is ALIZE SpkDet, and the results obtained by the software are combined with traditional acoustic and auditory analysis 12 .
INTRODUCTION
In ancient and contemporary history, there are several reports of people being recognized by their voice, the most famous being the Lindbergh case in 1932. Since voice recognition is a fragile test, based on only one sense of a single person, the current proposal is to identify speakers using scientifically based protocols.
Studies are constantly evolving, and several methods have been used for the forensic identification of speakers. In Brazil, voice identification methods were introduced for forensic purposes in the 1990s, involving experts from the states, the Federal Police and the Federal District 1 . The interception of telephone communications for investigation and as evidence in Brazilian criminal proceedings is an increasingly used procedure 2 .
Forensic Science assists and supports the preparation of forensic evidence; it is the set of all scientific knowledge and techniques used to unravel not only crimes, but also other legal issues. The sciences directly involved in the forensic identification of speakers for legal purposes include Forensic Linguistics, Forensic Phonetics and Forensic Speech Therapy, whose professionals are dedicated to the complex task of identifying speakers through their voice and speech.
Forensic Linguistics is a branch of applied linguistics dedicated to the investigative context, pointing to elements that analyze communication in its several aspects 3 . Forensic Phonetics goes beyond the identification of speakers; it permeates many criminalistic mysteries. The main objective of Forensic Speech Therapy is to respond to legal demands related to human communication, acting in several analyses involving forensic comparison of voice, speech and language; graphotechnics; facial biometrics; transcription, textualization and analysis of audio, video and image content; and description of the communicative profile 1 .
Recently, on October 22nd, 2020, the Brazilian Federal Council of Speech Therapy recognized the field of Forensic Speech Therapy through resolution n. 584 4 .
For the forensic identification of speakers, it is necessary to compare the standard sample with the sample under analysis 5 . The standard sample is the audio recording that contains the speech of the suspect, accused or defendant (of known identity), and the questioned sample is the audio recording that contains the speech of the speaker whose identity must be determined 6 .
After the participants were defined according to the previously described inclusion and exclusion criteria, data were collected by video, captured by the participant's cell phone using the device software. The videos had the following recording script, previously explained to the participants: say the name and the date, show an identification document with photograph and date of birth, and talk about the state of Pernambuco for 3 to 5 minutes. Afterwards, the videos were sent to the researcher. To perform the first methodological stage, listening to the voice samples, the videos were converted into audio in WAV format by the investigator, using the multimedia conversion software Format Factory®. Preparation of the material for the stage of listening and pairing of voice samples involved the formation of two groups, GimV (group of older brothers) and GimN (group of younger brothers). Then, the names of participants in the GimV group were replaced by consecutive numbers from 1 to 16. In the group of younger siblings (GimN), the names were randomly replaced by numbers 17 to 32. After this procedure, two groups of voice samples were obtained: GimV with numbers from 1 to 16 and GimN with random numbers between 17 and 32.
To compose the samples of auditorily similar voices to be later investigated by the investigator with acoustic spectrographic analysis in the second stage, the voice samples of the GimV and GimN groups were submitted to perceptual-auditory pairing, conducted by three speech therapists specialized in Voice by the Federal Council of Speech Therapy - CFFa. The speech therapists who performed the perceptual-auditory pairing were asked to listen to the GimV voices, indicate the pair of the respective sibling in the GimN, and record each pair using a pairing table (Chart 1). Acoustic analysis was performed on the correctly paired siblings, i.e., pairs of auditorily similar voices belonging to the same family that were indicated as pairs by at least two of the three speech therapists. Of the 16 pairs submitted to the perceptual-auditory pairing performed by the speech therapists, six were coincident and submitted to acoustic analysis. The result of the perceptual-auditory pairing is shown in Chart 1.
More studies are being conducted in this field, so that the binary comparison of voices may be used for legal purposes.
The general objective of this study was to verify the contributions of acoustic spectrographic analysis to the forensic identification of speakers with auditorily similar voices, and to propose an objective method for using the analyzed parameters. The specific objectives were to verify the usefulness of the following acoustic parameters for distinguishing auditorily similar voices: formants of the vowel "é", mean fundamental frequency in Hz, formants F1, F2, F3 in speech, linear prediction curve (LPC) of the vowel "é", and area of the LPC curve.
METHODS
The study was conducted in the state of Pernambuco and was approved by the Institutional Review Board of the State Hematology and Hemotherapy Foundation, Brazil, under report n. 4.303.659 and CAAE 38306620.3.0000.5195. The independent variables were place of birth, age, sibling status and gender, and the dependent variables were the first four formants of the vowel "é" (represented by "/ɛ/"), mean fundamental frequency, F1, F2, F3 in connected speech, LPC of the vowel /ɛ/, and area of the LPC curve.
The study was conducted on 32 people, comprising 16 pairs of brothers, two from each family. The following inclusion criteria were adopted: being brothers (due to genetics), being male (due to the proximity of vocal frequency), being aged between 18 and 60 years (since the voice does not undergo significant changes in this age group), and being native to and residing in the state of Pernambuco (due to the accent and especially the pronunciation of the vowel "e", marked in the region). Exclusion criteria were: being twins, considering the existence of previous studies on twins; having a viral, bacterial or inflammatory process in the upper airway on the day of collection, which would influence the voice and possibly the distinction of voices between peers; and not having signed the Informed Consent Form.
The investigator (S.C.W.C) recruited participants randomly, sending an invitation specifically designed for this purpose on social networks and to institutions in the state of Pernambuco. In the second stage, the correctly paired samples were analyzed using acoustic spectrographic analysis, aiming to verify whether and which of the analyzed acoustic parameters would have sufficient statistical power to distinguish people from the same family with auditorily similar voices, and whether and which acoustic parameters were coincident in people born in and residing in the State of Pernambuco. The acoustic spectrographic analyses were performed by the investigator (S.C.W.C) using the acoustic analysis software PRAAT ® .
In this study, individual acoustic parameters were verified and later compared between the paired brothers, between the pairs, and between the two groups (GimV and GimN). The acoustic parameters analyzed were the first four formants (F1, F2, F3, F4) of the vowel /ɛ/, extracted after the first minute of speech; the mean fundamental speech frequency in Hz; F1, F2 and F3 in connected speech, extracted in the first four minutes of speech; and the LPC curve obtained with the PRAAT ® software. The area of the LPC curve was also analyzed from the graphs of the individual LPC curves generated by the PRAAT ® software, to propose an original analysis method in the present study. The calculation of the area generated by the comparative LPC graph of each pair studied was performed by an informatics professional, who wrote an algorithm specifically for this purpose. The LPC curve of each audio, separately generated in PRAAT ® , was submitted to analysis of its area to obtain measurements of the areas formed below the curves, which could then be analyzed and submitted to intra-pair comparison in the statistical analysis.
To obtain this area, an algorithm was used to generate graphs and calculate the integral (area under the curve). Initially, the image was converted from RGB to a monochrome version and the intermediate gray levels were removed, leaving only completely white or completely black pixels.
Then, a loop was made, first varying the "y" coordinate, in principle, from the first to the last line of the figure. Since the study dealt with 3,600 x 2,400 resolution figures, this means varying "y" from 0 to 2,399; in each iteration of the "y" loop, another loop was performed, this time varying the "x" coordinate, in principle, from the first to the last column of the figure, i.e., varying "x" from 0 to 3,599. This is described as "in principle" because the pixel colors are evaluated during scanning, and initially all are white pixels. When the first black pixel was found, both loops ended, since it was known to be the upper left part of the graph, remembering that the coordinate point (0,0) is on the first line (uppermost) and first column (leftmost). From the point immediately before this pixel, i.e., the coordinates (xblack − 1, yblack), in which (xblack, yblack) are the coordinates of that first black pixel found, the "y" coordinate was increased, recording the "y" values where variations from white to black, or vice versa, were found. Since the column being scanned was immediately before the "y" axis of the graph, these variations are found in the markings of the "y" axis scale (0, 20, 40, and 60 dB/Hz, depending on the graph being analyzed). Thus, the T "y" Map table was generated, in which the mean "y" coordinate between the transition from white to black and the following transition from black to white was recorded, assuming that the scale value lies exactly at the middle of the marking stroke. This T "y" Map table allows the "y" coordinates expressed in pixels in the figure to be mapped to their respective values in dB/Hz. Next, an analogous table, T "x" Map, was created, this time varying the "x" coordinates from the point (xblack, ymark_min), in which xblack is the "x" coordinate of the first black point found above, and ymark_min is the "y" coordinate of the mark with the lowest dB/Hz value on the "y" axis. Varying in this way, the "x" coordinate of the first transition from black to white was recorded as xini, which characterizes the first column of the graph region, and the last transition from white to black as xend, characterizing the last column of this region. The T "x" Map table thus created allowed the mapping of "x" coordinates, with xini → 0 and xend → 104 (in graph units). Finally, the "y" coordinate of (xini, ymark_min) was varied, increasing the "y" value, i.e., moving downwards on the graph until finding a transition from white to black, which occurs at the coordinate ybottom, where the "x" axis is located. Similarly, the "y" coordinate was varied again, this time decreasing it (i.e., going upwards), until finding the ytop coordinate, where the upper frame of the graph is located. From there, the dx value was calculated, defined as dx = (xend − xini)/104, since 104 is the final value of the "x" axis in all graphs and the initial value is zero. Then, an integral variable was initialized with value zero, and a loop was started varying the "x" coordinate, in principle, from xini to xend; at each iteration of this loop the "y" coordinate was varied, in principle, from ybottom to ytop, that is, going upwards, passing through white pixels, then through black pixels (the graph line), and stopping one pixel before the transition from black to white, where the graph point is, at coordinate (xi, yf(xi)).
Each time a point (xi, yf(xi)) was found, the coordinates expressed in pixels were converted to coordinates expressed in graph units, using the T "x" Map and T "y" Map tables. The value yf(xi) was added to the integral variable, which had been zeroed at the beginning of the outermost loop, and its value at the end of the loops was multiplied by the dx value obtained above, providing the final value of the integral, i.e., the area under the curve.
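A simplified sketch of this image-scanning integration is shown below. It assumes a black-on-white plot whose axis calibration has already been resolved, so the pixel bounds of the plot region (x_ini, x_end, y_bottom) and the dB/Hz value of one pixel row (db_per_pixel) are passed in directly instead of being derived from the T "x" Map and T "y" Map tables; all of these names are illustrative and this is not the original algorithm.

```python
import numpy as np
from PIL import Image

def lpc_area_from_plot(path, x_ini, x_end, y_bottom, db_per_pixel, x_span_units=104.0):
    """Approximate the area under an LPC curve drawn in a black-on-white plot image."""
    img = np.array(Image.open(path).convert("L"))          # monochrome version of the plot
    black = img < 128                                       # completely black pixels only
    dx = x_span_units / (x_end - x_ini)                     # graph units covered by one pixel column
    area = 0.0
    for x in range(x_ini, x_end + 1):
        rows = np.flatnonzero(black[:y_bottom, x])          # black pixels above the x axis
        if rows.size == 0:
            continue                                         # no curve point in this column
        curve_row = rows.min()                               # topmost black pixel taken as curve point
        height = (y_bottom - curve_row) * db_per_pixel       # curve height above the x axis (dB/Hz)
        area += height * dx                                  # rectangle-rule contribution
    return area
```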
For statistical analysis, the results of the analyzed acoustic parameters were extracted and entered into a digital spreadsheet. Descriptive analyses were performed using measures of central tendency, and inferential analyses using non-parametric comparison tests, since the data did not meet the normality criteria. The Wilcoxon test was used for paired analysis between siblings, and the Kruskal-Wallis test was used to compare the groups of older and younger siblings and to compare the pairs of siblings, in addition to the post hoc Bonferroni test for multiple comparisons. The SPSS software, version 21, was used at a significance level of 5% (p<0.05).
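For reference, the comparisons described above map directly onto standard SciPy routines, as in the brief sketch below; the numeric arrays are illustrative placeholders, not data from this study.

```python
from scipy.stats import wilcoxon, kruskal

older   = [512.0, 498.3, 520.1, 487.9, 505.4, 530.2]   # e.g. F1 of /ɛ/, older siblings (placeholder)
younger = [508.7, 515.0, 498.9, 492.3, 511.8, 525.6]   # matched younger siblings (placeholder)

w_stat, p_paired = wilcoxon(older, younger)     # paired comparison within sibling pairs
h_stat, p_groups = kruskal(older, younger)      # comparison between the two groups

# Bonferroni correction for k post hoc pairwise comparisons
k = 15
alpha_adjusted = 0.05 / k
```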
RESULTS
Table 1 shows the comparison of measurements of formants of vowel /ɛ/ between the older and younger brothers of each pair.
Table 2 presents the comparison of the formant measures and of the mean frequency in connected speech between older and younger siblings of each pair.
The acoustic measurements extracted from the vowel /ɛ/ for F1, F2, F3 and F4 did not show statistically significant differences, as shown in the results in Table 1. Table 3 shows the comparison of acoustic measurements between pairs.
The acoustic measurements presented in this table are not statistically significant.
In Table 3, the possibility of differences in measurements between pairs was considered, since these subjects are not related but only share a common birthplace. The frequency parameter compared between the six pairs (Table 3) revealed a statistically significant difference between pairs, i.e., even though this parameter has a population mean, interpair differences were found.
Bonferroni's test for multiple comparisons was then performed to observe where these differences occurred, as shown in Chart 2, considering that such differences may contribute to the forensic identification of speakers in general.
The following images demonstrate the differences between audios, since the two resulting curves are distinct, even though in some cases they superimpose or even intertwine.
With this analysis, no significance was found between the pairs in relation to frequency, i.e., even between all pairs there was not a frequency that could highlight a pair, or even a voice, as previously observed.
Figure 1 presents six images representing the LPC curves within each pair; the siblings' audios are represented in the graphs by curves of different colors.
In the present study, the LPC was considered for the vowel /ɛ/, and the results are presented in Figure 1. The analysis applied to a speech signal makes it possible to obtain the spectral envelope and the frequencies corresponding to the formants.
DISCUSSION
As shown in the results of comparison of each extracted acoustic measurement, referring to the /Ɛ/ vowel formants between older and younger brothers of each pair, the measurements were not able to differentiate the brothers even in the high frequency formant, which is in line with the findings of studies described below.
A recent study 13 revealed consistent patterns regarding the comparison of high-and low-frequency formants in pairs of twins and non-genetically related speakers, with high-frequency formants exhibiting greater speaker discriminatory power compared to low-frequency formants.It should be mentioned that this study was conducted on pairs of twins (genetically related) and on non-genetically related subjects.
Another study 14 demonstrated that male and female speakers produced vowels with F1 and F2 values relatively close to the targets of native speakers of the state of Paraíba (PB), and the mean values for non-native male speakers were almost identical to the means of native speakers. Formant measurements are the main acoustic correlates associated with the description of vowel segments 15 . In the present findings, the values of the formants of the vowel /ɛ/ were not sufficient to differentiate pairs of siblings with auditorily similar voices. The absence of distinctive vowel characteristics indicates that this parameter should be used with caution in the forensic identification of speakers among siblings. That is, once again in this study, formants, which are classified as highly individual 11 , were not able to identify the auditorily similar voices in each pair, demonstrating limitations in the use of formants for the identification of speakers with auditorily similar voices.
Regarding the fundamental frequency, it was observed that the acoustic measurements referring to the means in connected speech between siblings of the same pair did not present statistical significance, corroborating a study 16 that analyzed the mean fundamental frequency of speech of twins and its standard deviation in a reading task. The mentioned study investigated to what extent the similarity observed for the fundamental frequency was genetically influenced when comparing data from monozygotic twins (MZ) with data from heterozygotic twins (HZ). In that study, there were no differences between MZ twins and HZ twins in terms of mean fundamental frequency of speech (FFF) and its variation (standard deviation), although correlations were observed between measurements in the first group.
Measurements of the areas of the LPC curves were generated in an unprecedented manner and submitted to statistical analysis. With the analysis of these measurements, it was possible to detect the distinction in most pairs, except for those in which the vocal similarity was high. Other studies on larger samples are needed to assess the sensitivity of this new method. This resource proved promising for the distinction of voices and should be combined with acoustic evaluations to complement and strengthen the delineation of cases, since this is an innovative measurement that can contribute to greater reliability in future forensic reports by reducing subjectivity and providing reproducibility for the work of forensic experts.
This study reinforces how delicate the forensic identification of speakers is, especially with auditorily similar voices. It also points to the use of acoustic analysis and its tools in line with the required forensic analysis; the more similar the compared voices, the more resources should be used.
This study is complete and simultaneously raises new hypotheses for research in this field, which has been growing as recorded oral communication is increasingly used in the most diverse processes as an element of forensic evidence.
CONCLUSION
This study demonstrated that the formants of the vowel "é" and of connected speech, and the mean fundamental frequency in Hz, were not enough to distinguish auditorily similar voices. It also showed that the unprecedented resource of measuring the area of the LPC curve was able to distinguish most of them, thus representing an objective and reproducible parameter to be used in forensic evidence. Therefore, as observed in the present study, the fundamental frequency, when used between siblings with auditorily similar voices, will probably not be efficient in distinguishing such speakers.
The research also analyzed the LPC curve. When the examination to be performed is the identification of speakers, in which it is important to study the resonance poles of the vocal tract, it is also necessary to study the frequency response curve, which is obtained by the LPC 17 . Whenever possible, the examiner should use linear prediction analysis (LPC), since this strategy is the most adequate for measuring sound formants 11 .
The LPC graphs generated from the acoustic analysis of the vowel /ɛ/ of the pairs of siblings in the present study corroborate the literature, showing different curves between siblings of the same pair (the curves were traced with different colors for each sibling of the same pair for easy viewing). However, to allow their use as forensic evidence, it was decided to generate values that could be statistically analyzed to prove whether or not there were significant differences between the siblings in each pair. Under this scientific view, the graphs were submitted to measurement of the area of the LPC curve generated from the audio of the vowel /ɛ/ of each subject. This resource was used to provide a new method for forensic use based on an objective parameter, herein represented by the measurement of the area of the LPC curve.
After analyzing the graphs resulting from the measurements of the areas of the LPC curves, values were generated with which the measurements of the pairs of siblings could be statistically compared.
Comparing the areas of the LPC curves between pairs of siblings, statistically significant differences were observed in pairs 1-31, 3-21, 9-32 and 14-19. In pairs 6-28 and 10-25, no statistically significant differences were observed. It is relevant to mention that, at study onset, in the perceptual-auditory pairing, pair 6-28 was the only one considered coincident by all three examiners specialized in voice. In general, this resource was able to differentiate the voices of older and younger brothers in the same pair, except when there was marked auditory similarity. This demonstrates the importance of analyzing the area of the LPC curve in differentiating auditorily similar voices. The results of the LPC curves visually demonstrated that the curves belong to different subjects. However, since this is scientific research and aiming to exclude subjectivity in data interpretation, measurements of the LPC areas were generated and submitted to statistical analysis.
Chart 1. Perceptual-auditory pairing performed by speech pathologists specialized in voice by the Federal Council of Speech Pathology. C = Coincident; D = Divergent. Source: Carmo et al. (2021).
Figure 1. Linear prediction curves of the same pair, with a different color for each curve on the same screen.
Figure 2. Twelve images with measurements of the area of the LPC graphs.
Table 1. Comparison of each extracted acoustic measure referring to formants of the vowel /ɛ/ between older and younger siblings of each pair.
Table 2. Comparison of each extracted acoustic measure referring to speech formants and mean frequency of speech among older and younger siblings of the same pair.
Table 3. Comparison of general means of voice acoustic measures between the six pairs of older and younger siblings.
Table 4 compares the areas of the LPC curves and shows that this measure is able to distinguish, as an objective parameter, most of the auditorily similar pairs.
Table 4. Comparison of areas of Linear Prediction Curve measurements of the voice of siblings of each pair.
"Physics"
] |
ECONOMIC EFFICIENCY OF BREEDING TSIGAI SHEEP IN CENTRAL AND SOUTH-EAST EUROPE
Tsigai is an indigenous sheep breed present throughout Central and South-East Europe. Due to its low meat and milk production, the number of Tsigai sheep is in sharp decline. However, there is a strong need to preserve the valuable genetic resources of this breed. Therefore, the goal of this research is to evaluate the economic performance of Tsigai breeding and to define strategies for its future use. In the paper, the profitability of Tsigai breeding is determined, as well as the economic efficiency of investments in Tsigai farms (using Net Present Value and Internal Rate of Return). To perform the analysis under risky circumstances, the authors applied sensitivity analysis and a decision tree approach. The results indicated that breeding of Tsigai sheep requires state subsidies to be profitable and economically efficient. The decision tree approach resulted in the calculation of expected NPV. Investments in Tsigai farms proved to be economically efficient, but associated with a high level of risk. © 2020 EA. All rights reserved.
Introduction
The Tsigai breed originates from the Asiatic Ural, and it is a "triple-purpose breed reared for wool, milk and meat" (Savić et al., 2000). The origin of and relations between Tsigai and some other indigenous Balkan sheep breeds are discussed in detail by Draganescu (2007), as well. The Tsigai breed is present in many countries in Central and South-East Europe, but the most important breeder countries are Serbia, Romania, Hungary and Slovakia.
Taking into account variability of Tsigai sheep, it is very important to study genetic differences among various Tsigai populations, in order to determine and maintain their genetic diversity. Such type of research was performed by Savić et al. (2000) and Ćinkulov et al. (2008) for Tsigai sheep in Serbia, by Kusza et al. (2009) for Slovak population of Tsigai, by Kusza et al. (2010) and Annus et al. (2015) for Tsigai sheep population in Hungary, and by Zăhan et al. (2011) for Tsigai sheep in Romania. The same issue was discussed by Kusza et al. (2011) for local sheep breeds in Southern and Eastern Europe (Romania, Albania, Croatia, Turkey and Serbia). Research carried out by Vlaic et al. (2015) emphasized importance of preservation of genetic Tsigai sheep resources because of possible increased demand concerning international exchange of sheep genetic resources due to climate changes. Petrović et al. (2011) emphasized importance of traditional breeds noticing that genetic improvement increased productivity of domestic animals, but "animals selected for high and efficient production are exposed to greater risk" which primarily assumes "physiological and immunological problems".
Although the Romanian word Tsigai means soft, fine wool, nowadays the production of wool is not the main goal of Tsigai breeders. The reasons are the low price of wool and the decreasing trend in total wool production worldwide. According to Lescheva and Ivolga (2015), as a result of such a negative trend, the proportion of wool in the manufacture of all textile fibers in the world in 2012 was only 1.3%, while the proportion of artificial fibers was 67.1%. According to the results of the European Food Safety Authority (EFSA) panel (2014), the Tsigai breed is "selected for survival and production under local environmental circumstance" and is "often multi-purpose traditional breed", while wool production is "seldom primary breed criteria". Although the Tsigai is an indigenous breed with rather low productivity, there have been cases in which Tsigai sheep were successfully used to improve the traits of other local sheep breeds, for example in Ukraine (Sedilo et al., 2016).
Despite the fact that Tsigai sheep could be used for production of wool, milk and meat, Vrdoljak et al. (2007) stated that in Croatia meat production is the most important one. Similarly, due to low wool prices Tsigai breeders in Romania shifted their interest from wool production to meat or milk production (Ilişiu et al., 2013). In Serbia, general trend in sheep production is also oriented towards meat production (Petrović et al., 2011), while the same trends are noticeable in Hungarian sheep production (Kukovics, Németh, 2011).
Because the Tsigai is a traditional multipurpose sheep breed with a rather low level of productivity, the question arises of how to stop the decreasing trend in the number of Tsigai sheep and preserve valuable genetic resources. Besides, it should be mentioned that there are other significant benefits for the entire society from Tsigai breeding (not only the preservation of genetic resources). Considering the importance of preserving the Tsigai breed, but at the same time bearing in mind the very low level of its productivity, the goal of this paper is to analyze the economic efficiency of production using the Tsigai breed as well as to determine possible directions for the future use of this breed.
Materials and methods
Tsigai sheep are present in a number of countries across Central and South-East Europe, but each country has a somewhat different production environment. To conduct the research, the authors primarily used data describing real production conditions in Serbia. Nevertheless, the production potential of the Tsigai breed was estimated not only on the basis of research conducted in Serbia (Gutić et al.) but also on studies from other breeder countries. Additional data were gathered through interviews and monitoring of 20 farms specialized in Tsigai sheep breeding in Serbia, whose flock size was between 50 and 200 ewes. All the producers are situated in the Province of Vojvodina, where the Tsigai breed is commonly used. The area is located in the northern part of Serbia, bordering Romania, Hungary and Croatia, where breeding of Tsigai sheep is also traditionally present. Data related to the production performance of the Tsigai breed were also acquired through interviews with employees of the Serbian agricultural advisory service. Revenues and costs were calculated on a bio-economic model of a Serbian family farm specialized in Tsigai sheep production with 150 ewes. The farm performs a meat-wool type of production, which is in line with the results presented by Petrović et al. (2009), who stated that in the future the Tsigai breed in Serbia should be used for meat production (due to the body mass of adult animals and the body mass of lambs). The size of state subsidies for quality breeding ewes and sold lambs was determined on the basis of the appropriate Serbian regulations. Relevant information regarding the prices of outputs and inputs was provided by the STIPS database (System of Agricultural Market Information of Serbia), which is operated by the Serbian Ministry of Agriculture, Forestry and Water Management.
To assess the economic efficiency of investments, the authors used the most important capital budgeting indicators, namely Net Present Value (NPV) and Internal Rate of Return (IRR). Sensitivity analysis was performed to determine the crucial factors affecting profit, NPV and IRR. The decision tree method for evaluating investments under risky circumstances was applied to calculate the expected NPV.
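A minimal sketch of these indicators is given below: NPV, IRR (found by bisection), and the expected NPV as a probability-weighted sum over subsidy scenarios, which is how a decision tree collapses to a single figure. The 3.75% discount rate reproduces the WACC reported later in the paper (0.5 × 1% on equity plus 0.5 × 6.5% on the loan), and the 66,480 EUR outlay is the investment level mentioned in the Results; the yearly cash flows, horizon and scenario probabilities are illustrative placeholders only.

```python
def npv(rate, cash_flows):
    """cash_flows[0] is the (negative) initial outlay at year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=-0.99, high=1.0, tol=1e-6):
    """Bisection search for the rate at which NPV is zero (conventional cash-flow profile)."""
    while high - low > tol:
        mid = (low + high) / 2.0
        if npv(mid, cash_flows) > 0:
            low = mid                          # NPV still positive, so the IRR is higher
        else:
            high = mid
    return (low + high) / 2.0

wacc = 0.5 * 0.01 + 0.5 * 0.065                # = 0.0375, the discount rate
base_flows = [-66_480.0] + [5_500.0] * 15      # illustrative outlay and yearly net cash flows
print(npv(wacc, base_flows), irr(base_flows))

# Expected NPV from a decision tree: probability-weighted subsidy scenarios,
# with the no-subsidy branch abandoned after year 1 and sold at accounting value.
scenarios = [
    (0.40, [-66_480.0] + [7_000.0] * 15),      # full subsidies (placeholder figures)
    (0.30, [-66_480.0] + [4_500.0] * 15),      # reduced subsidies
    (0.20, [-66_480.0] + [2_000.0] * 15),      # minimal subsidies
    (0.10, [-66_480.0, 53_500.0]),             # no subsidies: year-1 cash flow plus sale value
]
expected_npv = sum(p * npv(wacc, flows) for p, flows in scenarios)
print(expected_npv)
```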
Results and Discussions
The research is based on the assumption that a farmer invests in a modern building and equipment for the accommodation of 150 ewes and an appropriate number of other categories of sheep. The highest percentage of the initial cash outlay is related to the construction of a completely new building (Table 1). Financing of the investment is assumed to be 50% from equity funds (the interest rate for opportunity costs is 1%) and 50% from a loan (the interest rate is 6.5%). Therefore, the weighted average cost of capital (WACC) used for discounting is 3.75%. Profit in sheep production (based on data from the year 2019 regarding prices of final products, prices of raw materials and the level of subsidies) is calculated starting from two possibilities (Table 2). The first possibility is that the farmer uses all available subsidies for Tsigai production in Serbia, while the other possibility is that the farmer does not use subsidies at all (because he is not registered with the appropriate agency in charge of the payment of state subsidies). These results are in line with findings reported by other authors. Investigating the economic efficiency of extensive sheep and goat farming under Serbian conditions using indigenous breeds (not only Tsigai sheep but also Pramenka sheep and the Balkan goat breed), Ivanović (2018) determined that such production is economically efficient, but less profitable than intensive livestock production. It was also determined that this type of production is not profitable without state subsidies. A similar conclusion was made by Krupová et al. (2014) for multi-purpose extensive local sheep breeds in Slovakia, determining that such production was profitable only with existing governmental subsidies and EU payments. Data reported by Niżnikowski et al. (2006) indicated that sheep production in the majority of countries of Central and Eastern Europe has low or moderate profitability, or is even unprofitable (depending on the types of costs included in the calculations). De Rancourt et al. (2006) reported similar results concerning the economic efficiency of sheep production in the Mediterranean area. The authors found that the dependence of meat production systems and extensive production systems on subsidies is higher than the dependence of milk production systems on subsidies. On the other hand, milk production systems have higher income, but they are more sensitive to changes in the market prices of milk products. Discussing the relations between the Common Agricultural Policy and the conservation of rare sheep and goat breeds, Canali (2006) stated that some breeds are rare because in the short run they provide a lower level of profitability, and that their survival is essentially dependent on the level of EU subsidies. It is evident that the existence and level of subsidies is the key issue for the profitability of Tsigai sheep breeding. In this case, the total amount of state subsidies paid to the farmer is 11,205.00 EUR (the sum of subsidies for quality breeding ewes and subsidies for sold lambs), while the participation of subsidies in total revenue is very high (38.75%). The minimal amount of subsidies needed to break even is 8,633.88 EUR, which means that the present level of subsidies could decrease by only 22.95%. Otherwise the production would not be profitable, which is an important indicator for policy makers.
On the other hand, the state has no influence on the level of lamb prices (in the calculation the authors used a lamb price of 2.5 EUR/kg of live weight), because they are formed on the free market. But it is necessary to bear in mind that revenue from sold lambs dominates total revenue (52.35% of total revenue), and that a price decrease of only 16.80% leads to zero profit, which means that the lowest acceptable lamb price (assuming that the level of the other elements of the calculation is unchanged) is 2.08 EUR/kg. It is also necessary to point out that the state has no influence on the level of production costs, which are dominated by feed costs, so that an increase in feed costs of only 13.26% would lead to zero profit. These results indicate that, although without state subsidies Tsigai production is not sustainable, even greater risks for this production originate from the variability of lamb prices and feed costs (Table 3). The same conclusion could be reached if a change of 10% in each factor is analyzed (Table 4). In such a situation farmers should keep their costs as low as possible, searching at the same time for ways to increase lamb prices. Taking into consideration that Tsigai is an indigenous and endangered breed, there are the following ways to improve its revenues: - Production of premium (organic) products, production of products with geographic origin, and improvement of marketing based on the use of an endangered local breed (Ilişiu et al., 2013).
-Integration of production, processing and marketing in cooperative associations (Drăgănescu, 1998).
- Krupová et al. (2014) suggested that "economic sustainability of multi-purpose sheep farms in marginal areas can be reached mainly by the exhaustion of the reserves in the biological potential of the current breeds". Authors also considered that an "increase of the proportion of milk processed to cheese on farms" could improve profitability, while possible problems regarding possibility to sell additional quantities of cheese should be taken into account.
- Niżnikowski et al. (2006) listed solutions such as the development of local markets for sheep products, improvement of direct sales to reduce related costs, a common approach of several countries to the European market, and the like.
To gain better insight into the economic performance of Tsigai breeding, the further analysis addresses the economic efficiency of investments in Tsigai farms. On the basis of the average net cash flow and the appropriate discount rate (3.75%), it was determined that the investment in establishing a Tsigai farm is economically efficient (the net present value is positive and the internal rate of return is higher than the discount rate) only if subsidies are used (Table 5). Without subsidies, the average yearly net cash flow is negative, as are the net present value and the internal rate of return.

Bearing in mind that the NPV is the most important indicator of the economic efficiency of investments, it was analyzed how certain factors influence the NPV of farms that receive subsidies. The results lead to the conclusion that (similarly to the sensitivity analysis of profit) the most influential factor for the NPV is feed costs (Table 6). It is also important to point out that the NPV is more sensitive to changes in the observed factors than profit is: the minimal lamb price needed for zero profit is 2.08 EUR/kg, while the minimal lamb price that leads to zero NPV is 2.20 EUR/kg. The analysis can be extended to other factors influencing the NPV (such as the discount rate and the amount of the initial investment, i.e. the cash outlay) and the IRR (the discount rate itself does not affect the IRR). The results indicate that the NPV is less influenced by the initial investment and the discount rate than by the other factors (Table 7), with changes in the discount rate (the cost of capital) having the smallest effect on the size of the NPV. Moreover, variation in the observed factors has a greater effect on the NPV than on the IRR.

Taking into consideration that investments in Tsigai breeding are very risky (rather small changes in the observed factors cause a negative NPV), it is necessary to discuss possibilities for lowering the required investment in this production. The most convenient solution is to avoid investing in new housing capacities. Instead, existing premises (buildings) could be used, which would lead to a significant decrease in the total level of investment. This approach is based on the results of research presented by Radivojević (2014), as well as Marković et al. (2014), who determined that existing capacities for livestock housing in Serbia are underused in production, and the same conclusion can be made for feed storages in livestock production.
This is the result of many factors, such as the long-term decreasing trend in livestock numbers in Serbia, the depopulation of villages, and the collapse of the large agricultural enterprises that existed under socialism. If farmers used existing premises (buildings) instead of investing in new ones, the total investment would decrease by 44.75% (from 66,480 EUR to 36,730 EUR). Such an approach is possible because the Tsigai is an indigenous sheep breed adapted to local conditions and does not require up-to-date accommodation facilities. The effects of such a business decision (with and without subsidies) are presented in Table 8.

Sheep production faces many risks related to all of the factors mentioned above, so many scenarios for the future business environment can be envisaged; these can be presented using a decision tree (Figure 1). The analysis starts from the following assumptions:
- After the initial investment has been made, the level of revenues and costs in the first year of the project can be predicted with certainty, but the level of subsidies cannot. Therefore, four scenarios (with appropriate probabilities) for state subsidies are assumed. If there are no subsidies, the investment is abandoned after the first year (the project is sold at its accounting value).
- From year two to the end of the observed period (the total analyzed period is 10 years), three levels of net cash flow (NCF) are predicted: the best, the most likely and the worst scenario. In these scenarios the NCF is influenced not only by the level of subsidies but also by more or less favorable lamb prices and feed costs.
- For each scenario the probability of occurrence is estimated and the NPV is calculated. Taking into account the NPVs of all ten scenarios and their probabilities, the expected NPV of the investment is calculated. The expected NPV is positive (6,851.81 EUR), so the investment can be considered economically efficient, although the expected NPV is lower than the initially calculated NPV (which was determined for the expected business conditions, without taking risk into account). Nevertheless, there is a 42% probability that this investment will have a negative NPV.
At the same time, the standard deviation of the expected NPV is 46,477.46 EUR, which indicates how far above or below the expected value the actual NPV is likely to be. The coefficient of variation of this investment is 6.78, expressing the risk per unit of expected NPV and thus combining the level of risk with the effects of the investment. This should therefore be considered a high-risk investment, but the final decision (whether to invest in Tsigai breeding or not) depends primarily on the farmer's risk preference.
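The summary statistics of the decision tree can be reproduced as follows; the ten scenario NPVs and their probabilities in the sketch are placeholders, since only the resulting figures (expected NPV of 6,851.81 EUR, standard deviation of 46,477.46 EUR, coefficient of variation of 6.78) are quoted in the text.

```python
# Sketch of the decision-tree summary statistics; only the formulas follow the text,
# the scenario values below are illustrative placeholders.
import math

scenario_npvs = [-60_000, -30_000, -10_000, 0, 5_000, 10_000, 20_000, 40_000, 60_000, 90_000]
probabilities = [0.05, 0.10, 0.12, 0.08, 0.10, 0.15, 0.15, 0.10, 0.10, 0.05]
assert abs(sum(probabilities) - 1.0) < 1e-9

expected_npv = sum(p * v for p, v in zip(probabilities, scenario_npvs))
variance = sum(p * (v - expected_npv) ** 2 for p, v in zip(probabilities, scenario_npvs))
std_dev = math.sqrt(variance)
coeff_var = std_dev / expected_npv          # risk per unit of expected NPV

print(f"E[NPV] = {expected_npv:,.2f} EUR, sd = {std_dev:,.2f} EUR, CV = {coeff_var:.2f}")
```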
Conclusions
This analysis, like other research evaluating the economic efficiency of Tsigai sheep production (and of other indigenous multi-purpose traditional sheep breeds), shows that such production is not profitable without subsidies. Similarly, investments in Tsigai breeding are economically efficient only if farmers use subsidies, and such investments carry a high level of risk caused by fluctuations in feed costs and lamb prices. Policy makers in all the states of South-East Europe therefore bear great responsibility when deciding on the level of appropriate subsidies. Farmers' actions should be directed towards cost reduction, followed by efforts to improve the marketing of Tsigai products and thereby increase their prices. The general conclusions of this analysis can be applied in all countries across the region engaged in Tsigai sheep production.
"Economics",
"Agricultural and Food Sciences"
] |
OPTIMAL SLIDING MODE CONTROLLER DESIGN BASED ON WHALE OPTIMIZATION ALGORITHM FOR LOWER LIMB REHABILITATION ROBOT
The Sliding Mode Controller (SMC) is among the most common stabilizers and controllers used with robotic systems, owing to its robust nonlinear design for controlling nonlinear systems. SMCs are insensitive to external disturbances and system parameter variations. Although the SMC is an adaptive, model-based controller, some of its parameters need to be determined precisely. In this paper, an Optimal Sliding Mode Controller (OSMC) based on the Whale Optimization Algorithm (WOA) is suggested to control a two-link lower limb rehabilitation robot. The controller has two parts: the equivalent part and the supervisory controller part. The stability of the controlled rehabilitation robot is analyzed using Lyapunov stability. The WOA is used to determine the optimal parameters of the suggested SMC. Simulation results for two tested trajectories (a linear step signal and a nonlinear sine signal) demonstrate the effectiveness of the suggested OSMC, with fast response, very small overshoot, and minimal steady-state error.
INTRODUCTION
Spinal cord injury, accidents, and stroke are significant sources of disability for athletes, drivers, and elderly persons, creating difficulties in their lives (Furlan et al., 2021; Rodrigues & Rodrigues, 2018). Rehabilitation tools focus on recovering full or partial functionality by enhancing motion capabilities using different techniques. Recently, wearable lower-limb exoskeleton robots have been employed to help disabled people with mobility issues (Rupal et al., 2017).
A rehabilitation robot helps patients recuperate from strokes or other types of extremity injury. The goal of developing rehabilitation robots is to assist individuals with problems of daily living. Since robots are well suited to providing precise and reproducible physiotherapy, they are excellent tools for delivering high-quality treatment at low cost with minimal intervention (Saryanto & Cahyadi, 2016).
Tracking the desired path of the robot joints requires a strong controller that reduces the steady-state error and minimizes the effects of disturbances and variations in the system parameters. Different control strategies with parameter optimization techniques have been developed to guarantee asymptotic stability and to estimate the uncertainties in adaptively controlled lower-limb systems. The Sliding Mode Controller (SMC) is often considered one of the most effective methods for controlling robotic systems, including rehabilitation robots. Babaiasl et al. (2015) proposed a sliding mode controller for upper limb rehabilitation robots to track desired trajectories and reject system uncertainties and disturbances. Zhou, Zhou & Ai (2016) proposed an impedance control strategy for rehabilitation robots based on nonsingular terminal sliding mode control to ensure precise trajectory tracking and improve the stability of the system. Liu et al. (2018) proposed Adaptive Sliding Mode Control (ASMC) for a lower limb exoskeleton rehabilitation robot to achieve improved performance in terms of jitter elimination and trajectory tracking. Among alternative control methods for lower limb rehabilitation robots, Yang & Gao (2020) suggested an Adaptive Neural Sliding Mode Controller; their control strategy dynamically switches between assistance and challenge modes depending on the user's performance, by amplifying or reducing the deviation between the user and the rehabilitation robot. A multisensor fusion system was also proposed for seamless cognitive and physical interaction between the robot and the patient; it uses radial basis function (RBF) networks to provide reliable activity and motor-capability recognition, fall detection, and physical fitness assessment during rehabilitation training. Abbasimoshaei & Mohammadimoghaddam (2020) designed an Adaptive Fuzzy Sliding Mode Controller (AFSMC) for a hand rehabilitation robot to overcome uncertainties and disturbances, reduce chattering effects, and compensate for the varying forces exerted by patients. Almaghout et al. (2020) proposed super-twisting nonsingular terminal sliding mode control for the design and control of a lower limb rehabilitation robot, taking into account negative torques of the patient's limb to achieve the desired training missions; their results are comparable to those of adaptive sliding mode control. A Fuzzy Sliding Mode Controller (FSMC) was also proposed by Maalej et al. (2020) to minimize the torques applied by a rehabilitation robot that helps children suffering from several diseases to walk instead of using wheelchairs. Their simulation results show that the proposed controller is effective; moreover, the fuzzy sliding mode controllers were shown to be robust against parametric variations such as the masses and lengths of the children's legs.
This research focuses on designing an Optimal Sliding Mode Controller (OSMC) based on the Whale Optimization Algorithm (WOA) for trajectory tracking of a two-link lower-limb rehabilitation robot, using the dynamic equations of a two-joint human lower limb during walking. The WOA is used to tune the parameters of the suggested controller. The dynamic model of this robot was derived by Rezage & Tokhi (2016) based on anthropometric data (described by Winter (2009)). The stability of both joints of the closed-loop controlled system, based on the dynamic robot equations, is analyzed using Lyapunov stability.
The rest of this paper is organized as follows: the dynamic mathematical model of the two-link lower-limb rehabilitation robot is given in Section 2, the suggested controller is detailed in Section 3, the WOA is illustrated in Section 4, simulation results are presented in Section 5, and conclusions are provided in Section 6.
LOWER LIMB REHABILITATION ROBOT DYNAMIC MODEL
The structure of the two-degree-of-freedom (2-DOF) rehabilitation robot is shown in Figure 1. The robot consists of two links with two joints of the lower limb, a joint at the hip and a joint at the knee; link 1 assists rehabilitation of the hip and link 2 that of the knee. The dynamic model of this robot was derived by Rezage & Tokhi (2016) based on anthropometric data (described by Winter (2009)) for a person of 74 kg weight and 1.69 m height (Alshatti, 2019; Winter, 2009).
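The exact dynamic equations are given in the paper as Eq. (1); as a hedged illustration only, the sketch below assembles a generic two-link model in the standard form M(q)q̈ + C(q, q̇)q̇ + G(q) = τ, with placeholder masses and lengths rather than the anthropometric values used by the authors.

```python
# Hedged sketch of a generic two-link (hip-knee) dynamic model,
#   M(q) qdd + C(q, qd) qd + G(q) = tau.
# Masses, lengths and centre-of-mass positions are illustrative placeholders.
import numpy as np

m1, m2 = 8.0, 4.0            # link masses (kg), illustrative
l1, l2 = 0.45, 0.43          # link lengths (m), illustrative
lc1, lc2 = l1 / 2, l2 / 2    # centre-of-mass distances
I1, I2 = m1 * l1**2 / 12, m2 * l2**2 / 12
g0 = 9.81

def dynamics(q, qd, tau):
    """Return joint accelerations of the two-link model for joint torques tau."""
    q1, q2 = q
    qd1, qd2 = qd
    M11 = I1 + I2 + m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * np.cos(q2))
    M12 = I2 + m2 * (lc2**2 + l1 * lc2 * np.cos(q2))
    M22 = I2 + m2 * lc2**2
    M = np.array([[M11, M12], [M12, M22]])              # inertia matrix
    h = m2 * l1 * lc2 * np.sin(q2)
    C = np.array([[-h * qd2, -h * (qd1 + qd2)],          # Coriolis/centrifugal matrix
                  [h * qd1, 0.0]])
    G = np.array([(m1 * lc1 + m2 * l1) * g0 * np.sin(q1) + m2 * lc2 * g0 * np.sin(q1 + q2),
                  m2 * lc2 * g0 * np.sin(q1 + q2)])      # gravity vector
    return np.linalg.solve(M, tau - C @ qd - G)

print(dynamics(np.array([0.1, 0.2]), np.zeros(2), np.zeros(2)))
```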
OPTIMAL SLIDING MODE CONTROLLER DESIGN
Sliding mode control has two significant advantages. The first is that the system's dynamic behavior can be tailored by selecting a specific sliding function; the second is that the closed loop becomes insensitive to a broad class of uncertainties acting on the control system. In practice, the SMC can therefore be used to control nonlinear processes subject to external disturbances and large model uncertainties. The SMC design is usually composed of two parts. The first involves designing a sliding surface that satisfies the design requirements for the sliding motion. The second involves selecting a control law that makes the switching surface attractive to the system state (DeCarlo, Zak & Matthews, 1988; Hung, Gao & Hung, 1993).
The Optimal Sliding Mode Controller (OSMC) suggested in this paper for the two-link rehabilitation robot is shown in Figure (2).
Fig. 2. The block diagram of the suggested OSMC
In order to design this controller, Eq. (1) is rewritten in the form

q̈ = f(q, q̇) + g(q)·u(t),

where q = [q1 q2] is the joint position vector, f(q, q̇) collects the model dynamics, g(q) is the input gain, and u(t) is the control input. One of the important steps in designing the SMC is the selection of the sliding surface. In this paper the sliding surface (sliding function s_i(t)) for each link i (i = 1, 2) is assumed to be of PID type:

s_i(t) = Kp_i·e_i(t) + Kd_i·ė_i(t) + Ki_i·∫ e_i(τ) dτ,

where Kp = (Kp_1, Kp_2), Kd = (Kd_1, Kd_2) and Ki = (Ki_1, Ki_2) are the proportional, derivative and integral gains of link i (i = 1, 2), respectively, while e(t) = [e_1(t) e_2(t)] and ė(t) = [ė_1(t) ė_2(t)] are the tracking error and its derivative. The tracking errors tend to zero asymptotically for all t ≥ 0 if the system states remain on the chosen sliding surfaces. The system state trajectories are then driven to the sliding surfaces by the control law u(t). The main task is to select a Lyapunov function of the form V = 0.5·s·s, require V̇ < 0, and choose a control law that satisfies this reaching condition (Nguyen, Ha & Nguyen, 1989), for example

ṡ_i(t) = −ρ_i·sgn(s_i(t)),

where the scalar ρ_i is positive and sgn(·) is the signum function. The designed control law u(t) = [u1(t) u2(t)] is selected as

u(t) = u_eq(t) + u_s(t),

where u_eq(t) = [u_eq1(t) u_eq2(t)] is the equivalent control part and u_s(t) = [u_s1(t) u_s2(t)] is the supervisory (switching) control part. The equivalent part u_eq(t) is obtained from the condition ṡ(t) = 0 for the nominal model; it involves the desired acceleration q̈_d = [q̈_d1 q̈_d2] of link i (i = 1, 2) and the gain vectors C1 = (C11, C12) and C2 = (C21, C22), whose parameters are positive optimal values for link i (i = 1, 2) obtained by the WOA.
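To make the structure of the controller concrete, the following sketch simulates a single joint with a PID-type sliding surface and an equivalent-plus-switching control law of the kind described above. The simplified dynamics f, g and all gains are illustrative placeholders (in the paper the gains are tuned by the WOA), and the equivalent control is derived here for this toy model only.

```python
# Hedged single-joint sketch of the sliding-mode control law described above.
import numpy as np

Kp, Kd, Ki, C = 25.0, 10.0, 15.0, 5.0    # surface gains and switching gain (illustrative)
dt = 0.001

def f(q, qd):            # placeholder drift term of q_dd = f(q, q_dot) + g(q) * u
    return -2.0 * qd - 9.81 * np.sin(q)

def g(q):                # placeholder input gain
    return 1.0

q, qd, e_int = 0.0, 0.0, 0.0
q_des, qd_des, qdd_des = 1.0, 0.0, 0.0   # step reference for the joint

for _ in range(5000):                    # 5 s of simulated time
    e, ed = q_des - q, qd_des - qd
    e_int += e * dt
    s = Kp * e + Kd * ed + Ki * e_int                                   # PID-type sliding surface
    u_eq = (qdd_des + (Kp / Kd) * ed + (Ki / Kd) * e - f(q, qd)) / g(q) # equivalent control (s_dot = 0)
    u_s = C * np.sign(s)                                                # supervisory (switching) term
    u = u_eq + u_s
    qdd = f(q, qd) + g(q) * u
    qd += qdd * dt
    q += qd * dt

print(f"final tracking error: {q_des - q:.4f}")
```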
WHALE OPTIMIZATION ALGORITHM (WOA)
WOA is a modern meta-heuristic algorithm that simulates the bubble-net hunting of humpback whale populations. Whales are considered the world's largest mammals and, owing to the spindle cells in their brains, they are intelligent. The humpback whale has a unique hunting mechanism, bubble-net feeding, in which the whale blows special bubbles along a spiral or '9'-shaped path. Humpback whales (the search agents) are aware of their prey's position and surround it; the current best candidate solution is treated as the target prey, i.e. as close to the desired optimum (Mohammed Umar & Rashid, 2019). After the best candidate solution is assigned, the other agents update their positions towards the best search agent according to Eqs. 15 and 16, which form the basic principle of the whale optimization algorithm:

D = |C·X*(t) − X(t)|,                (15)
X(t + 1) = X*(t) − A·D,              (16)

where t indicates the current iteration, A and C are coefficient vectors, X*(t) denotes the position vector of the best solution obtained so far, X is the position vector of a solution, and |·| indicates the absolute value. The vectors A and C are determined by Eqs. 17 and 18, respectively:

A = 2a·r − a,                        (17)
C = 2r,                              (18)

where the components of a are linearly decreased from 2 to 0 over the course of the iterations and r is a random vector with values in [0, 1]. The bubble-net mechanism is mathematically formulated as follows. 1. Shrinking encircling mechanism: A in Eq. 17 takes random values in the interval [−a, a], and the value of a is reduced from 2 to 0 over the iterations. 2. Spiral updating position mechanism: this mechanism calculates the distance between the whale's position and the prey's position, and the humpback's helix-shaped movement is given by Eq. 19:

X(t + 1) = D′·e^(bl)·cos(2πl) + X*(t),   (19)

where D′ = |X*(t) − X(t)| is the distance between the best solution (prey) and the i-th whale, b is a constant, and l is a random number in the range [−1, 1].
When humpback whales swim around their prey, they combine the two mechanisms described above. A 50% probability is assumed for choosing between them when updating the whales' positions, as given by Eq. 20:

X(t + 1) = X*(t) − A·D                     if p < 0.5
X(t + 1) = D′·e^(bl)·cos(2πl) + X*(t)      if p ≥ 0.5,      (20)

where p is a random number in [0, 1]. During the search (exploration) phase, search agents scan for the best solution at random and adjust their positions with respect to other agents. The vector A with values greater than 1 or less than −1 is used to push a search agent far away from the reference agent. The search phase has the following mathematical model (Eqs. 21 and 22):

D = |C·X_rand(t) − X(t)|,            (21)
X(t + 1) = X_rand(t) − A·D,          (22)

where X_rand is a position vector selected at random from the current population (Mirjalili & Lewis, 2016). Figure 3 illustrates the flowchart of the whale optimization algorithm.
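A compact implementation of the update rules in Eqs. 15-22 is sketched below; the population size, iteration budget and toy objective are illustrative choices, not the settings used in the paper.

```python
# Compact sketch of the whale optimization algorithm (encircling, spiral bubble-net,
# and random-search update rules).
import numpy as np

def woa(fitness, dim, bounds, n_whales=20, n_iter=200, b=1.0):
    lo, hi = bounds
    X = np.random.uniform(lo, hi, (n_whales, dim))       # initial whale positions
    best = min(X, key=fitness).copy()
    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                       # 'a' decreases linearly from 2 to 0
        for i in range(n_whales):
            r = np.random.rand(dim)
            A = 2 * a * r - a                            # Eq. 17
            Cv = 2 * np.random.rand(dim)                 # Eq. 18
            if np.random.rand() < 0.5:
                if np.all(np.abs(A) < 1):                # encircle the best solution
                    D = np.abs(Cv * best - X[i])
                    X[i] = best - A * D                  # Eqs. 15-16
                else:                                    # explore around a random whale
                    X_rand = X[np.random.randint(n_whales)]
                    D = np.abs(Cv * X_rand - X[i])
                    X[i] = X_rand - A * D                # Eqs. 21-22
            else:                                        # spiral bubble-net attack
                l = np.random.uniform(-1, 1)
                D_prime = np.abs(best - X[i])
                X[i] = D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + best   # Eq. 19
            X[i] = np.clip(X[i], lo, hi)
            if fitness(X[i]) < fitness(best):
                best = X[i].copy()
    return best

# toy usage: minimise a sphere function in 4 dimensions
print(woa(lambda x: float(np.sum(x ** 2)), dim=4, bounds=(-5.0, 5.0)))
```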
SIMULATION RESULTS
Using MATLAB (R2019b), various simulation scenarios of the lower limb rehabilitation robot were executed for both linear (step) and nonlinear paths, with 10% uncertainty in the parameters of the f(q, q̇) function, to illustrate the efficiency of the suggested controller. The parameters of the suggested controller were tuned with the whale optimization algorithm. The WOA parameters are given in Table 2, and the WOA fitness function, the Integral of Time multiplied by Absolute Error (ITAE), is given by Eq. 23:

ITAE = ∫ t·|e(t)| dt,        (23)

evaluated over the tracking errors of both links. The optimal controller parameters tuned by WOA are given in Table 3.
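For illustration, the sketch below evaluates an ITAE-type fitness on simulated error trajectories; summing the two links' contributions into a single scalar is our assumption about how the per-link ITAE values are aggregated.

```python
# Sketch of an ITAE fitness evaluated on sampled tracking errors of the two links.
import numpy as np

def itae(t, e1, e2):
    """Integral of Time multiplied by Absolute Error, here summed over both links (assumption)."""
    return np.trapz(t * (np.abs(e1) + np.abs(e2)), t)

t = np.linspace(0.0, 5.0, 1001)
e1, e2 = np.exp(-2 * t), 0.5 * np.exp(-3 * t)   # illustrative decaying tracking errors
print(itae(t, e1, e2))
```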
Linear path with 10% uncertainties
The step responses (a positive unit step for link 1 and a negative unit step for link 2) of the controlled lower limb rehabilitation robot (position and control signal), with 10% uncertainty in the parameters of the f(q, q̇) function, are shown in Fig. 4 and Fig. 5. These results show that the performance of the robot with the suggested controller is highly efficient: the robot follows the desired path very quickly (ts = 1.605 s for link 1 and ts = 1.468 s for link 2) with very small overshoot and zero steady-state error, and with a smooth control signal. The evaluation parameters of the simulation results for the suggested controller are given in Table 4.
Nonlinear path with 10% uncertainties
The simulation results of the lower limb rehabilitation robot with the suggested OSMC, tested with the desired nonlinear sine reference signals for link 1 and link 2 and with 10% uncertainty in the parameters of the f(q, q̇) function, are illustrated in Fig. 6 and Fig. 7. These results show that, despite the nonlinearity of the input signal, the WOA-optimized controller converges and maintains precise control over the plant, achieving very good performance parameters and zero steady-state error.
CONCLUSIONS
The main aim of this work was to design an Optimal Sliding Mode Controller (OSMC) to track the desired trajectory and improve the performance of a two-link lower limb rehabilitation robot. The parameters of the SMC were optimized using the Whale Optimization Algorithm (WOA). The transient parameters of the obtained results show the effectiveness of the suggested controller, which achieves zero steady-state error in both scenarios, the linear and the nonlinear reference, each with 10% uncertainty in the parameters of the f(q, q̇) function. The controlled output settled in the vicinity of the desired value after 1.605 s and 1.468 s in the linear case. These results show the reliability of the proposed approach and suggest investigating its capabilities in more complex scenarios as well as in physical implementation.
"Engineering",
"Medicine"
] |
THE ROLE OF E-LEARNING INFRASTRUCTURE AND COGNITIVE COMPETENCE IN DISTANCE LEARNING EFFECTIVENESS DURING THE COVID-19 PANDEMIC
Covid-19 has led to the closure of educational institutions around the world and turned formal learning into distance learning. This study investigates the effect of e-learning infrastructure and of individuals' knowledge and competence on distance learning during the Covid-19 pandemic outbreak in 2020. E-learning infrastructure and individuals' cognitive competence were also used to determine the readiness of educational institutions for distance learning. The e-learning infrastructure includes the Learning Management System (LMS), electronic devices, communication applications, and internet accessibility. A quantitative approach was used, with a sample of 324 participants from three major universities in Yogyakarta, Indonesia. Data were collected through online surveys, and descriptive statistics and one-layer regression analysis were used to examine the problems raised in this study. The results show that distance learning is positively influenced by the e-learning infrastructure and by the cognitive competence of students, faculty, and administrative staff. The results also indicate the universities' readiness to adopt online learning based on their previous experience of using the learning system. Finally, the study proposes that, in order to improve the e-learning process, sufficient financial support is needed from the government, while the universities are advised to conduct workshops and training and to provide teleconferencing applications.
INTRODUCTION
Attempts have been made by some educational institutions to move to online learning, while many others are still struggling to adopt online learning as a solution to this problem.
Even though some universities around the world have adopted e-learning in recent years, readiness for a full transition was not expected. The sudden shift to distance learning forced educational institutions to adopt online learning, relying on the universities' e-learning infrastructure and on the cognitive competence of students, lecturers and staff; these two factors are decisive for accomplishing this task. Some studies have treated e-learning infrastructure as hardware and software (Gladun, Rogushina, García-Sánchez, Martínez-Béjar, & Fernández-Breis, 2009; Wang, Peng, Huang, Hou, & Wang, 2008; Zhong-Ping & Hui-Cheng, 2004).
Although there is no easy way to define the boundaries of e-learning infrastructure, in this study it includes the e-learning system, electronic devices, communication applications and internet access. The physical infrastructure of the related universities is not considered, because the outbreak of Covid-19 made physical attendance on campus impossible. Other studies focused on the impact of ICT, including infrastructure, on e-learning (Al-Ansi, Suprayogo, & Abidin, 2019; Du, Fu, Zhao, Liu, & Liu, 2013; Lee, Hsieh, & Hsu, 2011) and found that ICT has a positive and significant impact on the e-learning process. This study emphasizes the importance of e-learning infrastructure for distance learning as a compulsory need during the Covid-19 pandemic, which brought traditional learning to a complete halt.
E-learning has become necessary and critical for educational institutions to survive. An effective e-learning system allows students and lecturers to interact quickly and easily. Since the beginning of the 21st century, many e-learning systems have been implemented by educational institutions. This development resulted from technological development and innovation, but it did not replace traditional learning. Furthermore, a study by Divayana (2017), which examined the implementation of e-learning, identified several problems: 1) e-learning planning is not optimal; 2) students and lecturers are not familiar with using e-learning in the learning process; 3) supporting facilities for e-learning are still limited; 4) the existence of e-learning has not been well socialized. It therefore seems important that a culture of knowledge sharing and socialization about e-learning be fostered so that implementation can be realized well (Meylasari & Qamari, 2017).
During the Covid-19 pandemic, e-learning replaced traditional learning entirely, which exposed many problems related to the lack of effective infrastructure and human expertise. To remain competitive in today's tight labor market, organizations and companies, including universities, are employing advances in technology to train staff more quickly, more effectively, and at less expense than in the past (Urdan & Weggen, 2000).
Educational institutions continuously design and implement e-learning systems, and improving these systems depends on their ability to adopt new technologies and innovations. Several studies on the design and implementation of such systems have been conducted. Interactive e-learning systems have been developed to deliver better lectures and content to students studying in remote areas, and hence to improve the quality of education and student interest; one such system relies on a dedicated educational satellite, which distributes the e-learning content to the universities connected to it (Siddiqui & Masud, 2012).
An improved e-learning system allows learning materials to be uploaded online and provides room for one-on-one interaction with the lecturer, creating an avenue for students to ask questions and receive answers online (Buhari & Roko, 2017). Electronic devices and applications are also known as hardware and software. Personal notebooks and cellphones are the most commonly used devices in e-learning, in addition to PCs and tablets. Continuing technological development is reflected in the widespread use of electronic devices in higher education. The use of technological devices in e-learning is particularly significant in higher education, owing to the high competence of students and the complexity of study at this level (Al-Ansi et al., 2019). Knowledge in e-learning, such as video, audio, text, images and data, is conveyed through effective electronic devices; PCs, notebooks, cellphones and tablets are sufficient tools for e-learning.
Another determinant of online learning is the ability of students and lecturers to use e-learning systems and applications. Recent studies in the field of education have examined the application of technology to support and increase motivation in the classroom (Serio, Ibáñez, & Kloos, 2013; Huitt, 2011; Taran, 2005). These studies concluded that students who have an incentive to learn are more likely to engage, persevere, and make an effort to complete assignments than students who are not motivated. Although learning is a complex process that cannot be understood simply by analyzing human responses to the characteristics of technology, previous research has shown that certain techniques can be used to enhance student motivation (Serio et al., 2013). Previous experience also helped students and lecturers adopt e-learning quickly.
The aim of this study is to measure the impact of e-learning infrastructure and individuals' competence on distance learning during the pandemic, and to determine the readiness of universities to adopt online learning during the Covid-19 outbreak. Distance learning was chosen because the pandemic turned traditional learning into distance learning. Three educational institutions in Yogyakarta, Indonesia were selected because they adopted online learning as a solution to the outbreak of this virus. The influence of e-learning infrastructure and individuals' cognitive competence on distance learning is critical, given the importance of these two factors for accomplishing distance learning (Karim & Hashim, 2004; Chow & Croxton, 2017; Teo, Kim, & Jiang, 2020).
Distance learning in this study includes learning through a learning management system (LMS) and applications. Effective distance learning depends on the e-learning infrastructure and on the cognitive competence (Pokrovskaya, Lychkovskaya, & Molodtsova, 2020; Morze, Vorotnykova, & Makhachashvili, 2017) of students, lecturers and administrative staff. The readiness of the educational institutions was captured by the effectiveness of the e-learning systems (LMS), electronic devices, communication applications and internet accessibility, in addition to the ability and knowledge of both students and lecturers in using technologies and learning applications.
Research Approach
This study used descriptive statistical methods and regression analysis, which fall under the quantitative approach. Based on the objectives of the study, which cover e-learning infrastructure, individuals' cognitive competence and distance learning, the questionnaire was structured in three categories. E-learning infrastructure included four indicators: effectiveness of the e-learning system, electronic devices, communication applications and internet accessibility. Questionnaires were distributed online in three universities in Yogyakarta, Indonesia.
Participants
A sample of 324 participants was accepted after review. A stratified sampling technique covered three categories (students, lecturers and staff), and sampling was random within the groups. The characteristics of the sample are shown in Table 1. These three universities were chosen for their quick response in switching from traditional learning to distance learning within a short time (two weeks). Questionnaires were distributed during the first semester of the pandemic (April, even semester). The demographics of the sample are reported by university, gender and academic status.
Procedures and Instruments
Using Google Drive, the questionnaires were created according to the different variables of the study. The questionnaire link was distributed to the selected sample, and guidelines were provided by the researchers.
The instruments for e-learning infrastructure were e-learning system effectiveness, electronic devices, applications and internet accessibility, each measured with five items (questions). A panel of 20 experts, lecturers, students and staff was used to check the validity of the questions. The respondents' answers gave a Cronbach's alpha of .789, which shows that the instrument is reliable, while some questions were revised to ensure the validity of the research instruments (the questionnaires are included in the appendices of this research). Individuals' cognitive competence and distance learning were measured with five and six items, respectively. The questionnaire items were developed on the basis of previous studies in the field, including Gable, Sedera, & Chan (2008) for e-learning infrastructure, Al-Ansi et al. (2019) for the use of ICT in learning, and Ehlers (2011) for cognitive competence and distance learning.
Data Collection and Analysis Techniques
Data were collected through online questionnaires. Questions were closed-ended and answers were rated on a balanced five-point Likert scale (strongly agree, agree, neutral, disagree, strongly disagree). The analysis was quantitative, using descriptive statistics and regression analysis. Descriptive statistics computed in SPSS (version 25) include means and standard deviations describing e-learning infrastructure and cognitive competence in the three higher educational institutions, together with correlation and regression analyses measuring the influence of e-learning infrastructure and cognitive competence on distance learning. The regression analysis used a one-layer test to measure the direct impact of e-learning infrastructure and individuals' cognitive competence on distance learning effectiveness.
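The analysis pipeline described above (descriptive statistics, Pearson correlations and a one-layer regression) can be sketched as follows; the synthetic data and column names are placeholders standing in for the 324 survey responses, and the coefficients reported later in the paper (e.g. R² = .562) come from the actual data, not from this sketch.

```python
# Hedged sketch of the descriptive-statistics / correlation / regression pipeline.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 324
infra = rng.normal(3.4, 0.6, n)                        # e-learning infrastructure score (1-5)
competence = 0.5 * infra + rng.normal(1.7, 0.5, n)     # cognitive competence score
distance = 0.35 * infra + 0.35 * competence + rng.normal(0.9, 0.4, n)

df = pd.DataFrame({"infrastructure": infra,
                   "competence": competence,
                   "distance_learning": distance})

print(df.describe())                                   # means and standard deviations
print(df.corr(method="pearson"))                       # Pearson's r between variables

X = sm.add_constant(df[["infrastructure", "competence"]])
model = sm.OLS(df["distance_learning"], X).fit()       # one-layer (direct-effects) regression
print(model.rsquared)                                  # proportion of explained variation
print(model.params, model.pvalues)                     # coefficients and p-values
```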
FINDINGS AND DISCUSSION
The purpose of presenting descriptive statistics for the e-learning infrastructure factors and individuals' cognitive competence is to clarify the role of each factor in implementing and supporting effective distance learning, and to gauge the universities' readiness to adopt e-learning as a solution during the outbreak of the Covid-19 pandemic.
Findings
The effectiveness of distance learning in this study was determined by the e-learning infrastructure and by the cognitive competence of students, lecturers and staff in using e-learning systems and applications. Descriptive statistics for both variables are shown in Table 2. This part assesses the ability of the universities to adopt e-learning based on the e-learning infrastructure elements and individuals' cognitive competence; the frequencies and percentages of every item used in the survey are reported in Table 2. For e-learning infrastructure, the instruments are effective e-learning systems, electronic devices, online applications and internet accessibility, each comprising five items. The results show that the e-learning system was useful for distance learning and enabled respondents to accomplish their tasks better. Most respondents used laptops, cellphones and PCs to attend online classes; these devices were compatible with common browsers and easy to use for e-learning, and the quality and effectiveness of the electronic devices were adequate for learning online. The high percentage of agreement shows the significance of electronic devices in e-learning.
Communication applications played an important role in face-to-face e-learning. The applications used included Zoom, Microsoft Teams, Cisco WebEx, Google Hangouts and webinar tools. The results show that using video calls (video-conferencing applications) was easy during online classes. In addition to the universities' support for these applications, respondents consider video-call applications an appropriate method for conducting online classes, and students can interact with lecturers and classmates easily when using e-learning applications. Besides video-call applications, students, lecturers and staff used social media applications to communicate and to send and receive educational materials. Internet accessibility is also important for conducting online classes.
In addition, the survey items included important questions about the availability of the internet and its coverage, speed and cost. The results revealed problems with internet availability as well as with coverage, speed and high cost. Although the universities and the government, represented by the country's telecommunication sector, have supported the e-learning process, the problems of coverage, speed and cost persist.
Cognitive competence, the ability to use e-learning systems, devices and applications, is significant for describing the readiness of the related universities for e-learning. The e-learning system was already known to students, lecturers and staff, whereas using video-call (conferencing) applications was new for all of them. This indicates good prior knowledge of the e-learning system among participants (see Table 3). In addition, previous knowledge of the e-learning system and the use of applications on both laptops and cellphones as part of the learning process enabled students and lecturers to switch quickly to fully online learning. Assignments were completed on laptops, the universities were equipped with projectors, and students and lecturers routinely used communication applications throughout the study period.
The distance learning indicators are explained in Table 4. Six items were chosen to determine the satisfaction of students, lecturers and staff with the whole distance learning process: the willingness of students and lecturers to communicate, the ability to manage classes and accomplish assignments online, the necessity of face-to-face communication, the difference between learning in the classroom and at home, motivation to learn online, and the level of difficulty of online learning. The descriptive statistics show good acceptance of distance learning among students, lecturers and staff. The willingness of students and lecturers to communicate through e-learning systems and applications was at a moderate level, owing to some problems with the communication applications. The ability to manage classes and accomplish assignments online is good and appropriate, while face-to-face communication is still regarded as important and more effective for learning. Learning online is considered as effective as learning in classrooms, and students have more motivation to learn online. The sudden shift from ordinary classes to online classes affected how quickly some students and lecturers could adapt.
Finally, most students believe that a complete course can be delivered over the Internet without difficulty.
Role of e-Learning Infrastructure and Cognitive Competence in Distance Learning
Before testing the role of e-learning infrastructure and cognitive competence in distance learning, the researchers tested the correlations between the variables and instruments. Correlation analysis was conducted to describe the relationships between these instruments and variables and to ensure there was no obstacle to conducting the regression analysis.
The correlations between e-learning infrastructure, cognitive competence and distance learning are positive and significant, at .625 and .571 respectively (see Table 5). The instruments of e-learning infrastructure are also correlated at acceptable levels. These results are significant at the p < .05 confidence level (Table 6).
The correlational analysis observes and measures patterns between the four variables included in e-learning infrastructure: e-learning systems, devices, applications, and internet access. It reveals positive relationships between these variables.
In this research, the strength of a correlation between quantitative variables is measured using Pearson's correlation coefficient (Pearson's r). A perfect positive correlation is indicated by a value of 1.0, a perfect negative correlation by a value of -1.0, and the absence of correlation by a value of .0.
The R-squared coefficient for this multivariate regression model (e-learning infrastructure and cognitive competence as independent variables and distance learning as the dependent variable) is .562, meaning that 56% of the variation in distance learning is explained by the independent variables.
Examining Table 7, the results show a positive (β = .349) and significant (p < .001) impact of e-learning infrastructure on distance learning. There is also a positive (β = .355) and significant (p < .001) impact of individuals' cognitive competence on distance learning while using e-learning systems and applications. After the outbreak of Covid-19, distance learning depends entirely on these two variables. These results indicate that the readiness of the universities for online learning during Covid-19 was good, despite some problems related to internet access.
Discussion
This study discusses two main issues related to the learning process during the Covid-19 pandemic in 2020: the ability of the universities to adopt online learning during the pandemic, and the influence of e-learning infrastructure and individuals' cognitive competence as determinants of distance learning. The ability of the universities to adopt distance learning as a solution during the outbreak was described and analyzed through the e-learning infrastructure and the cognitive competence of students, lecturers and staff in using this technology for e-learning. The three top universities in Yogyakarta, Indonesia were selected because of their high local and international rankings; based on the 2020 university ranking index (UniRank, 2020), the universities under study ranked 1st, 6th and 15th in Indonesia, respectively. The three universities are located in the same city and adopted e-learning during the Covid-19 pandemic.
E-learning infrastructure was measured with four indicators, with means of 3.51 for the e-learning system, 3.71 for electronic devices, 3.36 for applications and 3.08 for internet access. The variation between these means is small, with electronic devices highest and internet access lowest. This shows that personal laptops and cellphones were available to all students and lecturers (self-provided), while internet access was limited owing to problems in the country's educational infrastructure. Cognitive competence was at the same level as the other indicators, with a mean of 3.51. The descriptive statistic for distance learning was intermediate, with a mean of 3.10 out of 5. Many studies have reported similar results (Bennett, 2010; Ehlers, 2011; Schulze, 2016; Wang et al., 2008), with small differences: the means in previous studies were higher, because distance learning was then optional, whereas in this study it became compulsory.
Before the Covid-19 pandemic, studying the influence of e-learning infrastructure and individuals' cognitive competence on e-learning was of limited relevance, because traditional learning required students and lecturers to come to campus every day; physical attendance was essential and classes were conducted face to face. The complete shift to distance learning made this study meaningful and important: the infrastructure of the normal learning process changed to an e-learning infrastructure with no campus, buildings, labs or physical attendance.
On the other hand, the ability of individuals to use technology, such as conferencing applications, using the LMS, receiving and sending materials online, and handling many assignments simultaneously, was the key reason to group all these elements under individuals' cognitive competence. The second part of this study examined the influence of e-learning infrastructure and cognitive competence on distance learning. The results showed a positive and significant impact of e-learning infrastructure and cognitive competence on distance learning (β = .349 and β = .355, respectively). These results are also similar to previous studies on the impact of ICT on the learning process (Gable et al., 2008; Al-Ansi et al., 2019), taking into consideration the differences arising from the full transition to distance learning.
During the Covid-19 crisis, the universities introduced video-call applications such as Zoom, Cisco WebEx, Microsoft Teams, Google Hangouts and many other web-conferencing tools. The use and benefits of ICT applications are more pronounced at higher levels of education, owing to factors such as learning policies, differences in ability, absorption capacity, the specifics of the field of study, the extent of need, and complexity (Al-Ansi et al., 2019). In addition, the use of social media is critical for conducting online classes and for communication between learners and the facilitators of distance learning.
Although the Internet penetration rate is only 53.7 percent, lower than in many countries in the Asia-Pacific region, Indonesia has one of the largest numbers of Internet users in the world: as of December 2017, 143.3 million (Statista, 2019a) of the country's total population of 260 million were active Internet users. Owing to prohibited content and various restrictions on media freedom, Indonesia is ranked only partially free in the Freedom House Index 2018, which ranks countries by degree of Internet freedom, with 46 index points (Statista, 2019b).
Internet speed, coverage and cost were the determinants of effective internet accessibility in the educational institutions. The role of the government and universities as facilitators of the e-learning process is significant for effective distance learning. Internet access is an important factor of the e-learning infrastructure in this study, in addition to hardware and software.
Communication between students and educators over the internet is critical to successful distance learning, and both students and educators have to be able to use technological devices and applications. Successful distance learners are typically self-directed, committed, effective and not afraid to stand up for themselves. Important aspects of self-management are motivation and possessing a learning strategy, which have a significant effect on learning results (Wang et al., 2008) and transformative learning (Qamari, Ferdinand, Dwiatmadja, & Yuniawan, 2020).
Instructor feedback sets clear expectations, provides encouragement for the student and identifies areas for improvement. Instructors play a significant role by setting high academic standards and providing quick and clear feedback that includes useful examples (Schulze, 2016). Cognitive competence in using social media, educational applications and electronic devices is the core of distance learning; without good knowledge of these technologies, distance learning remains difficult and out of reach. Previous studies have investigated the characteristics of learners and effective distance education (Schrum & Hong, 2002; Gibson, 2003), concluding, among other things, that both facilitators and learners engage in distance learning when they perceive a more favorable cost/benefit ratio and believe in its distinctive possibilities.
Distance learning, also known as e-learning, online learning or distance education, means that teachers and students are in different environments outside the campus. It involves the use of various technological tools to facilitate communication between students and teachers during instruction. Distance learning has developed alongside technologies and the internet; the revolutionary change in technologies and social media at the beginning of the twenty-first century led to interactive, web-based learning (Kiryakova, 2009; Fachrunnisa, Adhiatma, & Tjahjono, 2020; Suprijo, Tjahjono, Muafi, & Prajogo, 2019).
Many previous studies have confirmed that distance learning must include certain key elements to accomplish its mission (Bennett, 2010), such as user identification and user management, preparation of course content, course management, student start-up programs, homework and project preparation, test preparation, monitoring and analyzing student behavior, determining student success, and creating and managing an interactive communication environment. This study investigates distance learning on the basis of e-learning infrastructure and the cognitive competence of students, lecturers and the administrative staff of the related universities.
CONCLUSION
The readiness of the three universities to adopt distance learning (online learning during Covid-19) was adequate and significant, but it still needs to be improved. Distance learning was put in place within two weeks. The e-learning system had been implemented previously, and electronic devices such as laptops and cellphones were provided by individuals, in addition to the universities' support for staff. These results are distinctive because of the complete transformation of traditional learning into e-learning: they describe the effectiveness of distance learning during the Covid-19 pandemic and measure the readiness of the universities and their ability to cope with the problem by relying on distance learning, its features and innovative approaches.
According to the regression analysis, the results also revealed that e-learning infrastructure and individuals' cognitive competence have a positive and significant impact on distance learning. Indeed, after Covid-19 the educational institutions have relied completely on e-learning infrastructure tools such as the internet, electronic devices, applications and LMSs, in addition to the previous knowledge of students, lecturers and staff of distance learning.
Finally, more effort is needed from the government and educational institutions to improve distance learning. Investing in the e-learning infrastructure and supporting students and lecturers with the necessary materials, workshops and training is important for increasing their ability to adopt online learning. Facilitating the software and hardware of the e-learning infrastructure is the main factor in a successful education process.
"Education",
"Computer Science"
] |
Zero-bias anomaly and role of electronic correlations in a disordered metal film
Localization and electron correlation play significant roles in understanding the electronic states of low-dimensional systems. We carried out tunneling spectroscopy measurements on a crystalline nano-sized island and a disordered two-dimensional metal film. The low-temperature zero-bias anomaly (ZBA) was studied using P(E) theory and a statistical analysis of the spatial distribution of the local density of states in both systems. The effective capacitance and resistance of the tunnel junction extracted from P(E) theory give the energy and temperature dependence of the measured ZBA. The statistical analysis reveals the electron correlation effect and the electron correlation length. By combining P(E) theory and the statistical analysis, we find that the microscopic origin of the ZBA in the disordered two-dimensional film is strongly related to electron localization and correlations.
Introduction
Localization theory, developed by Anderson et al., predicted the increase in resistivity of a disordered Fermi system at low temperatures due to the destructive interference of electron wave functions [1-4]. This unusual resistive behavior has been observed in various metallic and semiconducting systems [5, 6]. Following the initial prediction and observation, extensive experiments on disordered metallic systems led to the discovery of the zero-bias anomaly (ZBA) in conductance spectra [7-10]. A pioneering study by Altshuler and Aronov treated the electron-electron interaction perturbatively and reproduced the logarithmic temperature dependence of the resistivity and the ZBA in the weak localization regime [11, 12]. However, their model fails to describe the very low-energy and low-temperature density of states (DOS) of disordered thin films [13]. It has been demonstrated using P(E) theory that Coulomb interactions in a disordered system can be treated nonperturbatively by using the effective environmental parameters of tunnel junctions. This approach successfully describes not only the DOS character of the dynamical Coulomb blockade effect in ultra-small junctions but also the localization effect in disordered metallic systems and the low-temperature behavior of the resistivity and low-energy DOS [13, 14].
Low-temperature scanning tunneling microscopy (STM) studies on samples with impurities showed significant suppression of the DOS at the Fermi level, owing to the electron-phonon interaction [15, 16], the dynamical Coulomb blockade [17-19], and localization [20, 21]. Notably, the ZBA spectra observed in ultra-small Pb islands and Pb wetting layers were well described by P(E) theory, and the effective capacitance and resistance of the tunnel junction were extracted by fitting the measured data with the theory [17, 18]. However, the fitting parameters of P(E) theory do not distinguish between the various physical origins of the ZBA discussed above, and the many-body effects in disordered films cannot be fully understood from P(E) theory alone.
In this article, we report STM and scanning tunneling spectroscopy (STS) studies on a disordered Pb film and crystalline Pb islands. The dI/dV spectra acquired on both surfaces showed a temperature-dependent ZBA, which was well described by P(E) theory, as reported in other studies [17, 18]. To understand the physical origin of the ZBA, we studied the spatial correlation of the local density of states (LDOS) through the autocorrelation of the zero-energy DOS and the normalized DOS distribution from the STS data. The autocorrelation of the zero-energy DOS of the disordered Pb film follows an exponential decay as a function of distance, while that of the crystalline Pb island decays linearly. The DOS distribution of the disordered Pb film follows a log-normal distribution, while that of the crystalline Pb island follows a Gaussian (normal) distribution. These statistical analyses of the disordered Pb film suggest that the microscopic origin of the ZBA developed on the disordered film is electron localization and electronic correlations.
Results and discussion
The disordered Pb atomic film on top of Si(111) was prepared in situ under ultra-high vacuum (base pressure < 10^-10 Torr), and STM and STS measurements were carried out with a home-built variable-temperature STM. The tip and the sample are thermally connected to the cold stage, which is equipped with an internal heater. The measurement temperature of the tip and the sample was controlled simultaneously by a feedback controller (see supplementary materials section 2) (https://stacks.iop.org/NJP/22/083045/mmedia). Degenerately arsenic-doped Si(111) was used as a substrate and was flash annealed at T = 1250 °C to form a Si(111)-(7 × 7) reconstructed surface. To control the substrate conduction, we prepared the silicon sample via ten cycles of annealing at 1250 °C for prolonged times (typically 60 s). The arsenic dopants diffuse easily inside silicon at this temperature and dissipate at the surface, which results in a reduced number of dopants near the surface [22]. The disordered 2D film was grown on top of this silicon substrate by thermally evaporating four monolayers (MLs) of Pb at room temperature. The extended high-temperature treatment depletes the dopants near the surface, and the formation of a disordered Pb film on top of the silicon eliminates the dangling bonds of Si(111)-(7 × 7) and makes the silicon substrate effectively insulating at low temperatures. As a result, the Pb film formed on the silicon acts as a 2D conduction channel near the tunnel points. Figure 1(a) shows a typical STM topography of the prepared sample. Pb formed three MLs of a disordered wetting layer (colored blue in figure 1(a)), which covered the Si(111)-(7 × 7) surface, and the excess Pb atoms formed the crystalline islands (colored yellow in figure 1(a)). A schematic of the sample configuration is shown in figure 1(b). The Pb island has a lateral area of 1626 nm^2 and a thickness of 2.0 nm, which corresponds to seven MLs of Pb on top of the Pb wetting layer. The quantum well states developed on this island confirm the effective thickness and the crystal structure of the island [23] (see supplementary materials section 1).
Figures 1(c) and (d) display the spatially averaged STS measurements on the Pb island and the wetting layer, respectively. The blue curves are the averaged dI/dV spectra measured at 80 K, which show a metallic DOS on the Pb island and a broad suppression of the DOS on the wetting layer. The red curves in figures 1(c) and (d) are the averaged dI/dV spectra measured at 13 K. The low-temperature spectrum displays an unusual narrow dip at the Fermi level with a width of 17.5 mV on the island and an enhanced gap on the wetting layer. The normalized dI/dV intensity outside the gap feature is not affected by the measurement temperature. The measurement temperature was well above the superconducting critical temperature of the Pb island, which is typically 6 K for similar-sized Pb islands [24], and the size of the gap is one order of magnitude larger than the superconducting gap (see supplementary materials section 2). We also confirmed the temperature-independent quantum well states, whose resonance peaks are located at -550 mV and +610 mV on this island. Thus, the DOS suppression at low temperature arises neither from superconductivity nor from the quantized states.
We carried out position-dependent dI/dV measurements for both surfaces, the results of which are shown in figure 2. The topographic profile across the island edge shows a roughness of 1.7 Å for the Pb island and 4.3 Å for the wetting layer (figure 2(a)). The size and density of the grains found in the Pb island are very similar to those of the clusters in the wetting layer. This similarity implies that the crystalline Pb island forms after the disordered wetting layer, and that the roughness of the underlying wetting layer is smoothened by the Pb island [17]. The ordered structure of the Pb island is further corroborated in figures 2(c) and (d), where we have plotted the individual dI/dV spectra measured at different spots on the Pb island (figure 2(c)) and the wetting layer (figure 2(d)). The dI/dV spectra measured on the wetting layer show a spatially varying LDOS in the high-energy region (|E| > 20 mV), owing to the disorder and a remnant of the Coulomb blockade. On the contrary, the dI/dV spectra obtained on the island show a reasonably homogeneous LDOS over the broad energy window, which is a consequence of the ordered atomic structure of the island. Interestingly, the DOS suppressions around the Fermi level overlap surprisingly well with the other spectra measured on the same surface, and the variation of the normalized zero-bias conductance is suppressed for both surfaces, as shown in figure 2(b). This relatively homogeneous dI/dV spectrum at low energy indicates that STS measurements with the STM set bias (V_set = -100 mV) adequately reflect the absolute LDOS. The temperature-dependent dI/dV spectra shown in figure 1 and the energetically symmetric shape of the gap around the Fermi level shown in figure 2 strongly suggest that the dip in the DOS is related to the symmetric energy loss in the dynamical Coulomb blockade of the ultra-small tunnel junction.
P(E) theory successfully described the ZBA observed in ultra-small tunnel junctions and disordered metallic systems [17]-[19], [21]. Instead of using the conventional elastic tunneling assumption, the P(E) function incorporates the energy-dependent capacitive noise effect as well as the interaction effects of a disordered system into the tunneling process [25]. The modified tunneling probability is

$\overrightarrow{\Gamma}(V) = \frac{1}{e^{2} R_T}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} dE\, dE'\; n_t(E)\, n_s(E' + eV)\, f(E)\left[1 - f(E' + eV)\right] P(E - E'),$

where n_t and n_s are the DOS of the tip and the sample, respectively, f(E) is the Fermi-Dirac distribution, and R_T is the tunneling resistance. P(E) is generally defined as

$P(E) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} dt\, \exp\!\left[J(t) + \frac{i}{\hbar}Et\right],$

where J(t) is the phase-phase correlation function, which has the form

$J(t) = 2\int_{0}^{\infty}\frac{d\omega}{\omega}\,\frac{\operatorname{Re} Z(\omega)}{R_K}\left\{\coth\!\left(\frac{\hbar\omega}{2 k_B T}\right)\left[\cos(\omega t) - 1\right] - i\sin(\omega t)\right\},$

where T is the temperature and R_K = h/e². The total impedance Z(ω) of the STM tunnel junction is defined as

$Z(\omega) = \left(i\omega C_J + \frac{1}{R_{\mathrm{env}}}\right)^{-1}.$

We used the numerical methods described in [19] to calculate J(t) and P(E) and extracted the resistance (R_env) and junction capacitance (C_J) of the environmental impedance by fitting the measured STS data. The effective circuit diagram of the tunnel junction is shown in the inset of figure 3(c). The STS data obtained along the red dashed line in figure 1(a) are displayed in a color plot (figure 3(a)). The edge of the Pb island is located at 13 nm in this plot. As seen in figures 2(c) and (d), the DOS suppression at zero energy shown in figure 3(a) is very uniform over a long lateral distance and does not vary with the local DOS variations related to the disorder in the Pb film. Figure 3(b) shows the averaged dI/dV data obtained for the island region (blue dots) and the wetting-layer region (yellow dots). The solid lines in figure 3(b) are the curves fitted with P(E) theory. We used the energy window from -15 mV to 15 mV of the STS data (gray region in figure 3(b)) to extract the environmental parameters that are not altered by the DOS variation (the fitting method is described in supplementary materials section 4). The dashed lines in figure 3(b) are the dI/dV data simulated with the same fitting parameters at 80 K. The temperature dependence of the STS data shown in figures 1(c) and (d) is well captured by P(E) theory. The spatial distribution of the environmental parameters is plotted in figure 3(c), which shows a homogeneous environmental resistance (R_env = 6.64 ± 0.03 kΩ) over the whole measurement range, whereas the average junction capacitance of the wetting layer (C_J = 2.90 ± 0.21 aF) is smaller than that of the Pb island (C_J = 8.83 ± 0.11 aF). The homogeneous environmental resistance of both the wetting layer and the island suggests that the sample resistance is mainly limited by the electrical behavior of the wetting layer [18]. The extracted junction capacitance and resistance of the Pb island are comparable to those from previous reports, where nano-sized islands were grown on top of an insulating film [18]. However, the capacitance of the wetting layer is significantly smaller than that of a wetting layer from a similar study [17]. This may be related to the doping-dependent conductivity and dielectric constant of the silicon substrate.
In general, the resistance of the sample R compared to the resistance quantum R_K = h/e² = 25.8 kΩ defines the regime of the electron behavior in the sample. The system is in the diffusive regime when R/R_K ≪ 1 and in the strong localization regime when R/R_K ≫ 1. In this study, since the calculated resistance of the sample is comparable to the resistance quantum (R_env/R_K ≈ 1), we can regard the system as being in the intermediate regime, where the LDOS develops a ZBA at the Fermi level. Moreover, we found that the dI/dV spectra obtained on the wetting layer exhibit bumps that are symmetrically located around the Fermi energy, as shown in figure 2(d). These features can be understood as a remnant of the Coulomb blockade, which develops a conductance enhancement at the Coulomb energy of the system, defined by the effective capacitance of the system [26]. The broadening seen in the individual dI/dV spectra is related to the incomplete isolation of the sample, owing to the leakage path through the wetting layer. If we regard the shoulder of the dI/dV spectrum as the onset of the Coulomb blockade gap, the size of the first gap is approximately 50 mV and that of the second gap is approximately 150 mV, which correspond to capacitance values of 3.2 aF and 1.0 aF, respectively. These values are close to the capacitance of the wetting layer derived from P(E) theory and to the tip-sample capacitance (see further details in supplementary materials section 3).
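As a quick consistency check, the quoted capacitances follow from equating the observed gap edge with the Coulomb blockade threshold e/C; the short sketch below is only an illustrative calculation (not analysis code from this study) that reproduces the numbers.

```python
# Back-of-the-envelope check: capacitance from the Coulomb blockade threshold V_gap = e/C.
E_CHARGE = 1.602e-19  # elementary charge in coulombs

def capacitance_from_gap(v_gap_volts):
    """Effective capacitance (in farads) for a blockade threshold at v_gap_volts."""
    return E_CHARGE / v_gap_volts

for v_gap in (0.050, 0.150):  # first and second gap onsets, 50 mV and 150 mV
    c_aF = capacitance_from_gap(v_gap) * 1e18  # convert F -> aF
    print(f"V_gap = {v_gap*1e3:.0f} mV  ->  C = {c_aF:.1f} aF")
# Prints roughly 3.2 aF and 1.1 aF, close to the values quoted above.
```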
To understand the electron localization effect in this system, we calculated the normalized autocorrelation Corr(E, r) from the spatially resolved tunneling conductance G(E, r) along a straight line over the wetting layer,

$\mathrm{Corr}(E, r) = \frac{1}{\mathrm{Corr}_0(E)}\int dr'\, G(E, r')\, G(E, r' + r),$

where $\mathrm{Corr}_0(E) = \int dr'\, G(E, r')^{2}$. The contour plots of the autocorrelation Corr(E, r) obtained for the island and the wetting layer are shown in figures 4(a) and (b). Each curve represents the lateral position of constant autocorrelation strength as a function of sample bias. As can be seen in figure 4(a), the autocorrelation on the island decays linearly in space for all sample biases measured in this study. The autocorrelation decreases with distance on the wetting layer as well, but the decay rate depends strongly on the sample bias. Notably, the autocorrelation near zero energy decays fast at short distances (r < 4 nm) and more slowly at larger distances. Figure 4(c) displays the autocorrelation of the LDOS at 0 mV and 70 mV as a function of distance, which emphasizes these spatial variations. The autocorrelation of the LDOS collected on the Pb island (blue squares and black crosses in figure 4(c)) decays more slowly than that obtained for the wetting layer (green triangles and red diamonds in figure 4(c)). Moreover, the autocorrelation of the zero-energy LDOS obtained for the wetting layer (red diamonds in figure 4(c)) decays nonlinearly, following an exponential decay with distance rather than the linear decay observed on the island.

The localization length ξ = ℓ exp(π k_F ℓ) (where ℓ is the mean free path) and the thermal diffusion length $L_T = \sqrt{\hbar D / k_B T}$ (where D is the diffusion coefficient and T is the temperature) govern the low-temperature electronic behavior of the system in the strong localization regime and in the diffusive regime, respectively. The localization length depends on the electron conductivity, as the formula indicates, and the thermal diffusion length increases with decreasing temperature. Thus, the crossover from the diffusive regime to the strong localization regime occurs at the temperature where R/R_K ≈ 1. Considering the high resistivity of the sample in this study, the estimated thermal diffusion length L_T is about 11 nm at 13 K with a diffusion coefficient D = 2 cm² s⁻¹. This long thermal diffusion length, compared with the decay length ξ extracted from the decay rate of the autocorrelation, suggests that the measurement temperature is below the crossover temperature and that electron localization dominates the electronic properties of the sample.
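The normalized autocorrelation above can be evaluated directly from a line of measured spectra; the following is a minimal sketch, assuming the spectra are stored as a 2D array of dI/dV values on an equidistant spatial grid (an illustrative choice of ours, not the analysis code of the study), and ignoring boundary corrections for the finite line length.

```python
import numpy as np

def normalized_autocorrelation(g_line):
    """Normalized autocorrelation Corr(E, r) of a line of spectra.

    g_line: array of shape (n_positions, n_energies), dI/dV sampled on an
            equidistant spatial grid along the line.
    Returns an array of shape (n_positions, n_energies) with Corr(E, r)
    for lag index r (in units of the grid spacing).
    """
    n_pos, _ = g_line.shape
    corr0 = np.sum(g_line**2, axis=0)            # Corr_0(E) = sum_r' G(E, r')^2
    corr = np.zeros_like(g_line, dtype=float)
    for lag in range(n_pos):
        overlap = np.sum(g_line[: n_pos - lag] * g_line[lag:], axis=0)
        corr[lag] = overlap / corr0
    return corr

# Example with synthetic data: 128 positions, 64 energy points.
rng = np.random.default_rng(0)
g = 1.0 + 0.1 * rng.standard_normal((128, 64))
corr = normalized_autocorrelation(g)
print(corr[0].mean())   # lag 0 -> exactly 1 by construction
```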
The LDOS distribution reflects the electron interaction effect in the disordered system. A noninteracting electron system is predicted to have a Gaussian distribution, since the LDOS is randomly distributed without correlation, while an interacting electron system follows a log-normal distribution [4,21]. Figure 4(d) displays the distribution of the LDOS along a straight line of 10 nm length on the island and on the wetting layer. The x-axis represents the LDOS normalized by the mean value ⟨LDOS⟩, and the y-axis is the probability of finding the normalized LDOS value. As seen in figure 4(d), the LDOS obtained for the island follows a Gaussian distribution with the mode at 1.04 (E = 70 mV, black curve in figure 4(d)) and 1.02 (E = 0 mV, blue curve in figure 4(d)) from the fitting. Although the LDOS distribution measured on the wetting layer at 70 mV has a standard deviation of 0.23, three times broader than the LDOS distribution determined for the island, it still fits a Gaussian distribution with the mode at 0.98. On the contrary, the LDOS distribution measured on the wetting layer at 0 mV is skewed and not symmetric about its mode, which is located at 0.75. This skewed LDOS distribution follows a log-normal distribution when the length of the measured STS line is longer than the decay length ξ. This unusual LDOS distribution strongly suggests that electron correlation plays a crucial role in developing the ZBA on the disordered surface. In contrast, the origin of the ZBA developed on the Pb island is not likely due to electron correlation.
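A simple way to test the Gaussian versus log-normal character of such histograms is to fit both forms and compare their likelihoods; the snippet below is an illustrative sketch on synthetic data (the sample parameters are ours, not values from the study).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-ins for normalized LDOS values on the two surfaces.
ldos_island = rng.normal(loc=1.0, scale=0.08, size=2000)          # Gaussian-like
ldos_wetting = rng.lognormal(mean=-0.05, sigma=0.35, size=2000)   # log-normal-like

for name, sample in (("island", ldos_island), ("wetting layer", ldos_wetting)):
    mu, sigma = stats.norm.fit(sample)
    shape, loc, scale = stats.lognorm.fit(sample, floc=0.0)
    # Log-likelihoods indicate which distribution describes the data better.
    ll_norm = np.sum(stats.norm.logpdf(sample, mu, sigma))
    ll_lognorm = np.sum(stats.lognorm.logpdf(sample, shape, loc, scale))
    better = "Gaussian" if ll_norm > ll_lognorm else "log-normal"
    print(f"{name}: Gaussian logL={ll_norm:.1f}, log-normal logL={ll_lognorm:.1f} -> {better}")
```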
"Physics"
] |
Operating Latency Sensitive Applications on Public Serverless Edge Cloud Platforms
Cloud native programming and serverless architectures provide a novel way of software development and operation. A new generation of applications can be realized with features never seen before while the burden on developers and operators will be reduced significantly. However, latency sensitive applications, such as various distributed IoT services, generally do not fit in well with the new concepts and today’s platforms. In this article, we adapt the cloud native approach and related operating techniques for latency sensitive IoT applications operated on public serverless platforms. We argue that solely adding cloud resources to the edge is not enough and other mechanisms and operation layers are required to achieve the desired level of quality. Our contribution is threefold. First, we propose a novel system on top of a public serverless edge cloud platform, which can dynamically optimize and deploy the microservice-based software layout based on live performance measurements. We add two control loops and the corresponding mechanisms which are responsible for the online reoptimization at different timescales. The first one addresses the steady-state operation, while the second one provides fast latency control by directly reconfiguring the serverless runtime environments. Second, we apply our general concepts to one of today’s most widely used and versatile public cloud platforms, namely, Amazon’s AWS, and its edge extension for IoT applications, called Greengrass. Third, we characterize the main operation phases and evaluate the overall performance of the system. We analyze the performance characteristics of the two control loops and investigate different implementation options.
A new generation of applications with features never seen before is promised, while the burden on developers and application providers is reduced or, more exactly, shifted toward the cloud operators. On-demand vertical and horizontal resource scaling at arbitrary scale, dependability, fault-tolerant operation and controlled resiliency are just a few of the features provided inherently by cloud platforms. However, latency sensitive applications with strict delay constraints, such as several distributed IoT services, generally do not fit in well with the new concepts and today's platforms and pose additional challenges to the underlying systems. When strict delay bounds are defined between different components of a microservice-based software product, or between a software element and the end device, novel mechanisms and concepts are needed. A crucial first step toward the envisioned future services is to move compute resources closer to customers and end devices. Edge, fog, and mobile edge computing [30], [31], [37], [38] address this extension of traditional cloud computing. Nevertheless, solely adding cloud resources to the edge is not enough, as the cloud platform itself can significantly contribute to the end-to-end delay depending on its internal operations, the techniques involved, and the configuration.
In this article, we adapt some relevant aspects of the cloud native approach and related operating techniques for latency sensitive IoT applications operated on public cloud platforms extended with edge resources. Our general design concepts are applied to one of today's most widely used and versatile public cloud platforms, namely, Amazon Web Services (AWS) [1], and its serverless services. We identify the missing components, including novel mechanisms and operation layers, required to achieve the desired level of service quality. More precisely, we focus on serverless architectures and the Function as a Service (FaaS) cloud computing model, where the microservice-based application is built from isolated functions which are deployed and scaled separately by the cloud platform. In our previous work [7], we proposed a novel mechanism to optimize the software "layout," i.e., to minimize the deployment costs, in a central cloud environment, e.g., in a given AWS region, while meeting the average latency constraints defined on the application. A dedicated component is responsible for composing the service by selecting the preferred building blocks, such as runtime flavors (defining the amount of resources to be assigned) and data stores, and the optimal grouping of constituent functions and libraries, which are packaged into respective FaaS platform artifacts. This approach can be extended to edge cloud infrastructures, but further considerations are necessary. More specifically, Amazon provides an edge extension for IoT services, called Greengrass, where the edge infrastructure nodes are owned and maintained by the user (or application provider) but managed by AWS. Obviously, the pricing scheme and the performance characteristics of serverless components in this realm are totally different from the regular billing policy and operation; therefore, our models should be adjusted accordingly. In this article, we aim to extend our basic model to edge cloud platforms and to enable dynamic and automated application (re-)deployment whenever an online platform monitoring module triggers it.
Our contribution is threefold.
1) We propose a novel system on top of public cloud platforms extended with edge resources, which can dynamically optimize and deploy applications following the microservice software architecture based on live performance measurements. We add two different control loops and the corresponding mechanisms responsible for the online reoptimization of the software layout and constituent modules at different timescales. The first one addresses the control of the steady-state, long-term operation of given applications and is suitable for following, e.g., daily profiles, while the second one implements a more responsive control loop which can directly reconfigure the runtime environments of deployed functions if the monitoring system triggers that as a response to, e.g., an SLA violation.
2) We provide a proof-of-concept prototype. In this article, we target AWS and its edge extension for IoT applications, called Greengrass; however, the concept is general and can be applied to other public cloud environments as well. Our current solution supports geographically distributed edge cloud infrastructures under the low-level control of AWS. The system encompasses a layout and placement optimizer (LPO), a serverless deployment engine (SDE) and a live monitoring system with dedicated components and operation workflows.
3) We characterize the main operation phases and conduct several experiments and simulations to evaluate the overall performance of the system. We analyze the performance characteristics of the two control loops and investigate different implementation options. Finally, we reveal further challenges and open issues.
The remainder of this article is organized as follows. In Section II, the background is introduced and a brief summary of related work is provided. In Section III, an illustrative use case which motivated our work is defined. Section IV highlights the main principles driving our system design and presents the high-level architecture of the system. In Section V, the proposed models of the applications and the underlying platforms are presented, and the optimization problem is formulated. Section VI is devoted to the proposed system, including the details of the relevant components. In Section VII, we evaluate the performance of the overall system and discuss our main findings in detail. Finally, Section VIII concludes this article.
II. BACKGROUND AND RELATED WORK
The cloud native paradigm aims to build and run applications exploiting all the benefits of the cloud computing service models. It includes several techniques and concepts, from microservices through DevOps to serverless and FaaS architectures, and each is defined in slightly different ways. According to the Cloud Native Computing Foundation (CNCF) [6], the ultimate goal is an open source, microservice-based software stack, where distinct containers are separately orchestrated and scaled by the cloud platform, enabling optimal resource utilization and agile development. The serverless approach allows the focus to shift from "where to deploy" to "how to create" the applications. It can be realized by following either the Container as a Service (CaaS) computing model or the FaaS paradigm, depending on the granularity level that the developer can consider when creating the software. In this article, we focus on the latter approach because it provides finer granularity in the organization of the application and more opportunities for optimization. Several public cloud providers offer both services, such as Amazon [1], Google [13], Microsoft [24] or IBM [15], and a number of open source platforms are also available for private deployments, such as Kubernetes [18], Knative [17], OpenWhisk [2], or OpenFaaS [27]. This section provides a brief introduction to Amazon's serverless solutions over cloud and edge domains. Tools for automated deployment of serverless components fostering the development and operation of such applications are also highlighted, together with open issues.
A. Serverless on Amazon Web Services
AWS [1], the platform of the market-leading public cloud provider, offers a wide selection of services that can support building applications in the cloud. Among those, two options are adequate for executing serverless code: Elastic Container Service (ECS) with the Fargate launch type, and Lambda, which is a FaaS solution. Both can ease the task of deploying application components in different ways, providing diverse configuration options and pricing models. Lambda offers fewer options for configuration, but at the same time it simplifies automatic deployment and connecting other AWS services or Lambda functions. The service increases the assigned CPU power together with the only adjustable flavor parameter, the available memory size. Instance startup, load balancing between the instances and networking configuration are taken care of by the Lambda framework without any need for developer interaction. There is also a select set of AWS services that have built-in triggers for Lambda, while other, third-party services can invoke Lambda functions via the software development kit (SDK). Lambda defines methods for easy versioning and branching of deployed functions via Lambda versions and aliases. Compared to Lambda, ECS offers more options for setting up resources and networking, while also providing possibilities for quick invocations; however, it lacks the automatic load balancing options. Larger function code and related artifacts are better suited for deployment with ECS, since Lambda poses a 250-MB size limit on uncompressed packages.
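As an illustration of the SDK-based invocation mentioned above, the following sketch invokes one Lambda function from another asynchronously with boto3; the function name and payload are placeholders of our own, not components of the system described in this article.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # Asynchronous ("Event") invocation returns immediately with HTTP 202,
    # so the invoker is blocked only for the duration of the API call itself.
    response = lambda_client.invoke(
        FunctionName="second-stage-detector",      # placeholder name
        InvocationType="Event",                    # use "RequestResponse" for synchronous calls
        Payload=json.dumps({"frame_id": event.get("frame_id")}).encode("utf-8"),
    )
    return {"status_code": response["StatusCode"]}  # 202 for accepted async calls
```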
AWS's CloudFormation service [3] provides possibilities for automating the deployment process of application components realized by either of these services. However, code deployment to edge nodes is only available via AWS IoT Greengrass.
B. Serverless at the Edge With AWS IoT Greengrass
AWS IoT Greengrass is a service that is part of AWS's IoT offerings, and its main task is to make AWS Lambda functions available on edge devices. The service's basic building blocks are Groups, which can be configured in the cloud and whose deployment is managed by AWS. They are collections of different entities serving different roles. 1) The Core is at the center of each group and has a two-pronged representation: it is present in the cloud as a link to the edge node, while it is also a software instance running on the edge node that handles communication with the cloud. Every message flowing between the edge and the cloud is encrypted using RSA keys, for which an X.509 certificate is used. This also has to be set up in the cloud, assigned to the Core, and transferred to the edge node before starting up the Core software. 2) Devices and local resources (e.g., devices connected via USB or machine learning (ML) artifacts) serve as inputs for 3) edge Lambda functions, which are linked to cloud Lambdas via AWS Lambda aliases. The configuration of edge and cloud functions is handled separately, which enables extensions upon AWS Lambda functionality. Although code size limits are inherited from the cloud version, edge functions do not have lower or upper bounds on their memory settings, and increments can be made in 1-kB steps, as opposed to the cloud version's 64-MB steps. A remarkable difference compared to on-demand cloud Lambdas is that edge functions can be long-lived (pinned in AWS terminology), and a single long-lived function instance can be kept running indefinitely. On-demand edge functions are handled by the Core similarly to cloud functions: multiple instances of a single function can run concurrently, and they are stopped after reaching the configured timeout value. Three containerization methods are offered for executing edge functions: a) Greengrass; b) Docker; or c) no containerization. The first option is the most versatile, while the others severely limit the available functionality. Finally, access to other edge and cloud functions is granted via 4) subscriptions.
C. Automated Serverless Deployment and Optimization
Deploying cloud applications across different platform services is a complex task. In order to ease this process, multiple tools exist that are able to set up required resources with different cloud service providers. For example, the Serverless Framework [35] uses a YAML configuration file to declare resources in a provider agnostic way and, with its own CLI, provides an interface for managing these resources. The service is able to cooperate with, e.g., AWS [1], Microsoft Azure [24], Google Cloud Platform [13], and Apache OpenWhisk [2]. Terraform [34] is a similar tool that enables setting up and managing cloud infrastructure spanning over multiple public cloud domains. The higher level, provider agnostic interface makes it easier to move the infrastructure from one provider to the next but it cannot fully hide provider specific parameters. These tools were designed to receive external parameters to be used at deployment from other services, e.g., for specifying resource types or memory size. One such external service is Densify [9] that, leveraging its separate optimization and monitoring components, makes cloud applications self-aware. It monitors AWS virtual machine (EC2) instances with a proprietary monitoring component and collects CPU, memory and network utilization data. Based on these, the optimization component uses ML to model the application's utilization patterns while also estimating the best fit of compute resources for current needs and predefined specifications. Such estimations can give recommendations on instance flavors to be used and on the number of such instances. These recommendations can be forwarded to application maintainers via different channels (e.g., Slack or email), or can be applied automatically. Such automatic redeployments can happen using templating tools that support dynamic parameter assignment or parameter stores, e.g., AWS CloudFormation, Terraform or Ansible. The service enhances change recommendations with a cost monitoring interface as well. As for AWS specific deployment options, the provider offers different services for managing resources. All of them use the same AWS API but they provide different levels of complexity. Low-level options, such as the Web console, the SDKs and the CLI have smaller granularity thus they make handling of applications containing multiple resources overly complex. CloudFormation [3] (that is also used by the AWS Cloud Development Kit and many third party options) can treat a whole deployment as a unit of workload. It can handle the setup, modification and deletion tasks of complex applications (called stacks or stack sets in CloudFormation terminology) using its own templating language. Stackery [33] is a set of development and operations tools accelerating serverless deployments on top of AWS. It supports the management of production serverless applications throughout their life cycle.
Despite the availability of these versatile deployment tools, they are not adequate for deploying applications to hybrid edge cloud scenarios when latency is of concern. While AWS tools offer edge node management, and the AWS Compute Optimizer [4] serves as a recommendation engine to help right-size EC2 instances, such an optimization engine for serverless applications is not available. AWS-independent tools share a similarity in this regard, as they do not venture into the serverless domain; they consider resource utilization but omit the investigation of application performance. Additionally, they usually lack the capability of handling edge resources altogether, or have started to support this feature only recently and thus do not yet cover the full feature set made accessible by the cloud provider.
Besides the tools supporting deployment and orchestration over cloud platforms, there are only a few papers in the literature dealing with cloud native and cost-aware service modeling and composition. Eismann et al. [10], Fotouhi et al. [12], Leitner et al. [20], and Winzinger and Wirtz [36] provided pricing models for microservice-based application deployment over public clouds, but they focus only on supporting offline cost analysis for predefined deployment scenarios. Online cost tracing of a serverless application is a cumbersome task due to the limited billing information provided by the cloud platforms. To tackle this issue, Costradamus [19] realizes a per-request cost tracing system using a fine-grained cost model for deployed cloud services; however, it lacks any optimization features. The works in [11] and [21] studied the optimization problem of cloud native service composition and provided offline solutions based on a game-theoretic formulation and a constrained shortest path problem. Other recent works in [5], [8], and [22] target similar problems of performance optimization of serverless applications leveraging public cloud resources, but they address only the placement of the service components and lack any adaptive and automated service reoptimization.
III. TARGETED USE CASE
In this section, we highlight an envisioned use case motivating our work. The application exploits cloud features and serverless tools in order to provide IoT services at large scale. The use case, presented in Fig. 1, addresses live object detection on Full HD video streams. As we target cloud native deployment and follow the serverless approach, we have stateless functions requiring all data as input. Therefore, making use of dedicated data stores is the reasonable (or the only feasible) way of data exchange. Here, we strive to decrease bandwidth requirements by preprocessing images before submitting them to elaboration and finally marking them with detailed object classification results. The preprocessing stage in steps 1-10 resizes and grayscales captured video frames and performs a preliminary object detection on the modified picture. At the end of the preprocessing stage, the full-size image is cut into pieces based on the bounding boxes provided by the preliminary object detection. As a next step, the Cut function calls the second-stage object detection function for each cropped image, which performs the object classification task. Observe that the number of calls depends on how many objects were found during the preprocessing stage, which we consider an application specific metric. It depends on the software whether the calls are synchronous and invoked serially, or asynchronous and handled in parallel. Consequently, the implementation of the next function, Collect Results, could be different for the two approaches. In any case, it waits until each second-stage detection function finishes and collects their individual results. Finally, this function calls the Tag function, which marks detected objects on the full-size image and annotates it with the object classification results. For our use case, we interpret the end-to-end (E2E) latency as the average elapsed time between the arrival of a frame and the event when a recognized object's classification is written out into the data store.
In our implementation, we used Python and leveraged features of the OpenCV [26] library for the image processing and object detection steps, relying on its deep neural networks module and the MobileNet-SSD network. In the remainder of this article, we focus on steps 1-12, the main parts of the application; the components related to the rest of the steps are not deployed in our tests.
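To make the preprocessing stage concrete, the sketch below shows a typical OpenCV-based preliminary detection pass with MobileNet-SSD followed by the cropping step; the model file names, confidence threshold, and cropping logic are illustrative assumptions, not the exact code of the deployed functions.

```python
import cv2

# Model files are placeholders; MobileNet-SSD Caffe weights are commonly loaded via cv2.dnn.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")

def preliminary_detect(frame, conf_threshold=0.4):
    """Run a preliminary detection on a downscaled copy of the BGR frame and
    return bounding boxes in full-frame pixel coordinates."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()          # shape: (1, 1, N, 7)
    boxes = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence >= conf_threshold:
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            boxes.append((x1, y1, x2, y2))
    return boxes

def cut(frame, boxes):
    """Crop the full-size frame into per-object pieces for second-stage classification."""
    return [frame[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]
```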
IV. SYSTEM DESIGN
This section is devoted to the main goals and principles driving our architecture design and the high level system description is also provided.
A. Design Goals
Our main goal is to foster the development and operation of latency sensitive IoT applications by adapting the cloud native paradigm. More specifically, we aim at improving latency control for serverless applications and allowing optimization of operation costs on public cloud platforms extended with privately owned edge infrastructures. We focus on the FaaS cloud computing model; however, the concepts are general and can be applied to container-based serverless solutions as well (such as Fargate containers or Kubernetes pods). That said, the finer granularity in the construction of the application provided by the FaaS approach yields more optimization options and requires more sophisticated solutions. Formally, the operation cost of the application is to be minimized by finding the cost-optimal software layout required to meet the average latency bounds. To enable this optimization, we need to construct accurate application and platform models capturing the performance characteristics and operation prices. The first reasonable way of controlling latency is the careful placement of software components: the functions can be run in the central cloud or in available edge domains. Current APIs of today's systems typically do not provide sophisticated placement control based on delay information; therefore, we strive to explicitly select the domains to run the functions. We assume that edge resources are scarce, following different cost models than cloud resources, and the preferred deployment option is always the central cloud, while edge resources are used only if the delay constraints require that. We argue that besides placement, the efficient grouping of constituent functions and libraries, which will be packaged into respective FaaS platform artifacts, and the selection of the runtime flavors are crucial tasks which significantly affect both the performance (e.g., end-to-end latency) and the operation costs. A top-level component is able to address all these targets and generate a software layout description including the function grouping with the selected flavors and placement information. Based on this general description, an adapter layer can directly deploy the application to the underlying cloud infrastructure while exploiting the exposed APIs and related cloud services.
As user demands, application characteristics and platform performance can vary in time, dynamic reoptimization is an essential feature which can be provided based on a versatile monitoring system. We target such a system making use of available cloud services and custom extensions. Two different approaches are considered to implement control loops. The first option is to realize a full reoptimization cycle starting with a model update gathered from live measurements, followed by the optimization task and the full redeployment of the application. Obviously, this yields a larger operation timescale. In order to ameliorate the response time, we address an alternative option as well, which realizes a shorter control loop. If different deployment options are onboarded in advance, the reconfiguration of the application can be executed much faster. However, we need to add a dedicated component to control the specific application based on monitored metrics, while a customized version of the FaaS runtime is also required in order to allow on-the-fly reconfiguration.
B. High Level Architecture and Operation
The high level architecture of the proposed system is depicted in Fig. 2. The system is capable of composing, deploying and dynamically reoptimizing IoT applications operated on serverless resources. We note, that the first and basic version of the system, without any support for the edge, was introduced in [7] and [29]. In the former, we focused on the optimization layer, while in the latter, we investigated deployment tasks. In the current work, we leverage their composition and extend upon it with support for edge deployment and a second option for inducing changes in the application layout.
At the top level, the Layout and Placement Optimizer (LPO) receives input data from the developer (or the operator in other scenarios). The data consists of the application model (Application Components with Requirements) and the platform model (Cloud and Edge Node Properties). The graph-based service model encompasses functions and data stores (as nodes), and invocations (function calls) and read/write operations (as edges). Average function execution time, call rates, latency requirements on critical paths, etc. can also be defined for the service. The other input of the system is the platform model, which describes the cloud platform's performance and pricing schemes and the list of available edge nodes with their properties. It can be given a priori based on previous measurements; however, the model parameters can be adjusted on-the-fly based on live monitoring. The LPO works with these service- and platform-agnostic abstract models and constructs an optimal application layout by grouping the functions into deployable units (e.g., FaaS artifacts), defining the corresponding minimal flavors together with the hosting domains (central cloud versus edge) and determining the required data stores and invocation techniques (e.g., one for invoking functions on the edge, a different one for calling functions in the central cloud). The main objective is to minimize the operation costs while meeting the average latency bounds given by the developer or user. The application layout together with the monitoring conditions is passed to the Serverless Deployment Engine (SDE) in step 1, which transforms incoming data into platform-specific API calls and adapts the application layout to the underlying edge or central cloud environments. In today's systems (such as AWS), the central cloud and edge domains are controlled via distinct deployment engines (Cloud/Edge Deployment Engines on Fig. 2) and APIs in separate calls (steps 2 and 4).
As a result, the Managed Application can have parts running on edge nodes or in the central cloud, launched in steps 3 and 5, respectively. We assume that the platform can run the same function artifacts in both runtime environments and that in-memory data stores can be used for state management. In either case, the grouped application components are executed by our custom-built Wrapper, which is an essential extension to the platform's own runtime environment. The purpose of the Wrapper is threefold.
1) It enables grouping of functions into artifacts by handling both the internal interactions among the encompassed functions and the interactions with the outside world: state store access and invocation to other components.
2) The Wrapper logs measured metrics on these operations, including platform related and application specific ones, to the managed monitoring system.
3) The Wrapper grants on-the-fly reconfiguration access
to the runtime environment via a novel API which is used by the runtime optimizer (RO), the controller of the shorter control loop (step 9). This reconfiguration allows changing the function calls (e.g., invoking the central cloud version of a function instead of the edge variant) or the data store access in the artifact based on live monitoring, without the need for redeployment. The monitoring infrastructure, consisting of the managed monitoring system and the RO, is deployed in steps 6 and 7, when the application has already been set up. The monitoring system aims at monitoring performance and application-level metrics, and it can send alarms to the LPO and the RO. In addition, a periodic query-based operation is also provided to support enhanced responsiveness (steps 8a and 8b).
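To illustrate the kind of on-the-fly reconfiguration the Wrapper enables, the following is a minimal sketch of the idea with hypothetical names and a simplified in-memory routing table; it is not the actual Wrapper implementation.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical routing table; in the real system this would be updated through the
# Wrapper's reconfiguration API by the runtime optimizer (RO).
ROUTING = {"detect_object": {"mode": "local"}}   # or {"mode": "remote", "target": "<function name>"}

LOCAL_FUNCTIONS = {}

def register(name):
    """Register a function bundled into the same artifact for local (in-process) calls."""
    def decorator(fn):
        LOCAL_FUNCTIONS[name] = fn
        return fn
    return decorator

def call(name, payload):
    """Dispatch a call either in-process or to a remote Lambda, based on the routing table."""
    route = ROUTING.get(name, {"mode": "local"})
    if route["mode"] == "local":
        return LOCAL_FUNCTIONS[name](payload)
    response = lambda_client.invoke(
        FunctionName=route["target"],
        InvocationType="Event",
        Payload=json.dumps(payload).encode("utf-8"),
    )
    return {"status_code": response["StatusCode"]}

def reconfigure(new_routing):
    """Entry point used by the shorter control loop to switch call targets without redeployment."""
    ROUTING.update(new_routing)

@register("detect_object")
def detect_object(payload):
    return {"boxes": []}   # placeholder body of a grouped function
```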
V. OUR MODELS AND OPTIMIZATION PROBLEM
In this section, we define our service and platform models capturing the main performance and cost characteristics. The introduced notations are summarized in Table I. To establish accurate models, a comprehensive performance analysis of AWS Lambda and Greengrass is the essential first step.
A. Performance of AWS Lambda
In our previous works [7], [28], we provided a comprehensive performance study of delay characteristics of AWS FaaS and CaaS offerings, based on short-and long-term experiments. Here, we give a summary on them focusing on our main findings with regards to AWS Lambda.
Each AWS region operates using multiple CPU types with different capabilities, and the configured resource flavor (memory size) can have an impact on the selected CPU type. For single-threaded Python code, Lambda performance approximately doubles as the assigned memory size is doubled, until reaching peak performance at around 1792 MB (one physical core is allocated). Our measurements indicate that execution time has no correlation with the time of the measurement, but it is highly affected by the assigned CPU type and the selected Lambda resource flavor. We observed that AWS, independently of time, assigns Lambda instances to the different CPU types available in the chosen region in an undisclosed manner. At small flavors we measured significant differences among CPU types, but as higher flavors were selected, the differences diminished. Many different methods exist for invoking Lambda functions, but most of them are inadequate for handling latency sensitive applications as they impose high delays with high variation, even for small transmitted data sizes. The quickest Lambda invocations are the SDK's and the API Gateway's synchronous calls; however, they have adverse effects on the execution time (and thus the price) of the invoker function. Therefore, using asynchronous SDK calls can be a better fit for latency constrained applications. Long-term SDK asynchronous invocation tests showed no dependency on the time of the call, the CPU type or the flavor of the instance. On average, we measured 103 ms when transmitting payloads of 130-kB size and 79 ms for 1 kB. Considering the asynchronous nature of the call, we measured a surprisingly high blocking delay (the time while the invoker function is blocked during an invocation) in the invoker function (52 and 44 ms, respectively). As Lambda is designed to serve stateless functions, whenever states should be stored we have to use an external service. In our previous work, we concluded that Amazon ElastiCache for Redis outperforms every other AWS offering for serving such purposes. It can handle both read and write operations under 1 ms for data smaller than 1 kB. Redis performance is among the best throughput-wise as well, and it handles increasing concurrent access notably well.
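As an example of the external state handling mentioned above, a Lambda function would typically access an ElastiCache Redis endpoint with redis-py, roughly as sketched below; the endpoint and key names are placeholders, not values from our deployment.

```python
import redis

# Placeholder endpoint; in practice this is the ElastiCache Redis primary endpoint
# reachable from the Lambda function's VPC configuration.
r = redis.Redis(host="my-cache.xxxxxx.0001.euw1.cache.amazonaws.com", port=6379, db=0)

def handler(event, context):
    frame_id = event["frame_id"]
    # Write this stage's result and read back the previous stage's output.
    r.set(f"frame:{frame_id}:boxes", event["boxes_json"])
    previous = r.get(f"frame:{frame_id}:resized")          # bytes or None
    return {"has_resized_frame": previous is not None}
```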
B. Performance of AWS Greengrass
Although Greengrass and cloud Lambda functions share many features, they differ in multiple aspects, as discussed in Section II-B, that significantly affect performance. In case of latency sensitive applications, the most important performance features to measure are flavor dependent computation proficiency and invocation latency. In order to investigate these aspects with AWS Greengrass, we repeated the respective benchmarks discussed in [28]. We used two different edge nodes to execute the tests: a local server with four Intel Xeon E5-2650 v3 CPU cores, 6 GiB of memory running Ubuntu 18.04 and an Amazon EC2 t2.micro instance with 1 vCPU and 1 GiB memory running the same OS in the eu-west-1 (Ireland) AWS region. Each measurement was repeated 100 times to obtain average values and standard deviation.
1) Execution Time: As opposed to AWS Lambda behavior, Greengrass does not apply memory-size-dependent access to compute resources. The service limits instance access to resources by using cgroups; however, it always provides access to unlimited processor time for each running function instance, as their cpu.share parameter is set to 1024. Our measurements proved to be perfectly in line with this, as running multiple instances of the same function causes no significant increase in execution time until every core has been occupied by a function instance. However, when we start up twice as many function instances as the number of CPU cores, the execution time doubles. This behavior shows that the management jobs executed by the Greengrass Core do not require significant CPU resources when no messaging is performed among the function instances.
2) Invocation Delay: As Fig. 3 depicts, there can be four different call paths among functions when AWS Greengrass is used depending on the location of the Invoker and Receiver functions.
1) When both functions are on the same edge node and local invocation is used. 2) The function locations are the same, but the call goes through the AWS IoT Cloud topic.
3) The two functions are on different edge nodes. 4) The Receiver function is an AWS Lambda function residing in the central cloud. We benchmarked these scenarios on both of our edge nodes. In accordance with our previous measurements in [28] using the same methodology as here, invoking a function in the central cloud from the edge is the slowest, taking 125-231 ms to complete. In terms of latency, this invocation type is one of the slowest of available AWS Lambda calls and is 20-30-ms slower than asynchronous SDK calls between Lambda functions. Results for the rest of the cases measured on the t2.micro instance are shown in Fig. 4 together with the blocking delay caused by the invocations. (We opted to exclude the depiction of edge to central cloud calls from the figure in order to provide better visibility on invocation delay characteristics between edge functions). We can conclude that Greengrass local calls (calls between functions managed by the same Greengrass Core) are extremely fast compared to other Lambda function invocation options. As the local AWS Greengrass Core can handle the invocations, they last only 2.3-4.3 ms depending on payload size. Because of the Greengrass service's architecture, any other invocation has to interact with the AWS IoT Core, thus calls have to traverse the IoT Cloud topic. These invocations experience 7.8-19.5-ms delay when the Receiver function is found on a Greengrass node. When using our on-premise edge node, the increase in latency corresponded to the latency between our premises and the AWS region we used for the test. Blocking delay, the time while the Invoker function gets blocked during an invocation, is always small, ranging from 1.5-2.8 ms which is a fraction of those measured for the asynchronous cloud calls (50-70 ms).
Comparing the above results with those given by [28], we can conclude that using AWS IoT Greengrass results in relatively low latency only when cloud functions are not involved. If an application requires low latency as well as both edge and cloud functions, it is better to use SDK calls between them instead of relying on AWS IoT.
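For completeness, the fast local call path (case 1 above) is what a function deployed on the Core would use through the Greengrass Core SDK's Lambda-compatible client; the sketch below assumes that client and uses a placeholder target ARN, so it is illustrative rather than the benchmark code.

```python
import json
import greengrasssdk

# The Greengrass Core SDK exposes a Lambda-compatible client; calls to functions
# deployed on the same Core are resolved locally (call path 1 above).
gg_client = greengrasssdk.client("lambda")

def handler(event, context):
    payload = json.dumps({"frame_id": event.get("frame_id")}).encode("utf-8")
    # The target ARN/alias is a placeholder for a function in the same Greengrass Group.
    response = gg_client.invoke(
        FunctionName="arn:aws:lambda:eu-west-1:123456789012:function:detect:edge",
        InvocationType="Event",
        Payload=payload,
    )
    return {"status_code": response.get("StatusCode", 202)}
```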
C. Service Model
The service model describes the user-defined service request, including the software components and their interactions. Let S be the service structure description, which is basically a directed multigraph. Function nodes F represent the simple, stateless and single-threaded basic building blocks, which use invocations I to call other functions, and read R and write W arcs to perform I/O operations on data store nodes D. A dedicated platform node ℘, in the role of the API Gateway or the user, represents the main entry point of the service and designates the ingress service invocations. Recursive loops are modelled in their expanded form, in which each iteration step is given with explicit invocations. This renders the invocation subdigraph S[F ∪ {℘}], unlike control flow graphs, loopless, that is, a directed acyclic graph (DAG). Moreover, functions are considered to have only a single entry point, which has a strict syntax typically predefined by the execution framework. The single-predecessor function characteristic further restricts S[F ∪ {℘}] to be a directed rooted tree with ℘ as the root node.
Functions are characterized by the execution time τ measured on one vCPU core, while arcs have the average invocation rate ω r attribute along with the explicit blocking delay δ introduced in the invoker function. Data stores can be described by their workload capacity in general. In addition to the graph-based description, the service model also keeps track of user-defined node-disjoint path(s) with associated latency limit l π as the basic constraints for the layout optimization.
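As an illustration of this service model, the multigraph can be encoded, e.g., with networkx as sketched below; the node names, rates, and latency limit loosely mirror the object-detection use case and are our own illustrative values, not the evaluated service request.

```python
import networkx as nx

# Directed multigraph: functions and data stores as nodes, invocations / reads / writes as edges.
S = nx.MultiDiGraph()
S.add_node("P", kind="platform")                      # entry point (API Gateway / user)
S.add_node("preprocess", kind="function", tau=0.120)  # tau: execution time on one vCPU core [s]
S.add_node("cut", kind="function", tau=0.030)
S.add_node("classify", kind="function", tau=0.200)
S.add_node("frames", kind="datastore")

S.add_edge("P", "preprocess", kind="invocation", rate=10.0, blocking_delay=0.002)
S.add_edge("preprocess", "cut", kind="invocation", rate=10.0, blocking_delay=0.002)
S.add_edge("cut", "classify", kind="invocation", rate=30.0, blocking_delay=0.002)  # ~3 objects/frame
S.add_edge("preprocess", "frames", kind="write")
S.add_edge("classify", "frames", kind="read")

# A user-defined critical path with its latency limit [s].
critical_paths = [(["P", "preprocess", "cut", "classify"], 0.5)]
```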
D. Platform Model
Our platform model captures the performance characteristics and cost models of function execution, invocation and data store access methods, respectively. For the runtime environment, we only consider single-threaded serverless functions. However, our models can be extended to use containers [7] or to support multithreaded functions by using explicit function execution profiles. Runtime flavors are specified by the vCPU fraction n_c they offer. To extend our previous model with edge computation capabilities, we introduce edge nodes as standalone flavors. Thus, an assigned flavor implicitly carries basic placement information, that is, it designates the specific edge node or the central cloud as required by the deployment engine. While Greengrass Lambdas always have access to one vCPU core on edge nodes, i.e., n_c(φ_E) = 1, the core fraction of cloud Lambdas can be derived from their assigned memory as n_c(φ_λ) = min{ m(φ_λ)/m(φ*_λ), 1 }. The first Lambda flavor granting one full core is φ*_λ = 1792 MB, as stated in Section V-A.
Regarding invocation types, we assume two different options relevant to latency sensitive applications. More specifically, async SDK invocation, depicted in Section V-B, and local invocation are considered. Local invocation is used when one function directly invokes another function in the same group and its blocking overhead is negligible in terms of latency.
E. Cost and Latency Models
Making use of our service and platform models, we can describe the end-to-end latency and the operation costs of the application. While serverless platforms support parallel function execution via autoscaling, internal parallelization (within a Lambda function) could also be realized by applying multithreading and internal asynchronous calls scheduled by the runtime. However, we consider single-threaded functions and runtime environments with a single core. Therefore, in order to calculate overall latency (and costs), we can model all functions as single-threaded components.
These functions can be composed together, where they can call each other directly in a synchronous manner, and executed in a single Lambda function. This way, the grouping of functions can reduce the overall latency by eliminating SDK invocation overheads in return for additional costs. In the same time, function grouping introduces serialized execution of the encompassed functions resulting in increased group execution time. The number of consecutive executions of a function is determined by its caller component's behavior. This can be modelled with a multiplier, i.e., the serialization ratio, which is the ratio of the caller and called component's invocation rates. This quotient is greater than 1 when the caller iteratively performs invocations, around 1 if it realizes one-toone mapping and less than 1 if outgoing calls are filtered by conditional statements. First, let T s define the overall service runtime. Then, let t p denote the execution time of function group p ∈ P F on selected flavor φ p ∈ . In (1) we define t p as the sum of the actual function execution times including flavor-related data and egress invocation overheads A + (p), and multiplied by the serialization ratio. Invocation i f and i p mark the ingress invocations of function f and belonging group p In accordance with AWS billing patterns, we use rounded up group execution time for the Lambda cost calculation. In addition, we define the summed number of received requests as r p ω r (i p )T s for group p ∈ P F . The flavor-dependent group cost function c p is formulated in (2), where C r , and C p are the billing constants specified by the cloud provider for the total number of requests and rounded group execution time c p p, φ p = r p C r + C p t p 100 ms . (2) Although service cost calculation relies on the entire group execution time, the observed latency differs from t p values. The end-to-end latency measured at a function can include different number of consecutive executions of the preceding functions based on their position in their serialization sequence. Thus, the number of distinct execution variations from which the measured latency value is computed is determined by the serialization ratios of the preceding functions. As these execution variations contribute evenly to the average latency value we define a modified formulal p for the group latency calculation in With the same approach, we can formalize the cost function for data stores as well. As there are no outgoing data transfers, the data store cost only depends on the service runtime T s and instance type C i . Therefore, it can be expressed as a single layout-independent cost value C i T s .
F. Optimization Problem
The LPO's output is the service layout which defines the function partitioning P F (equivalently called as clustering in the literature) along with the flavor assignment ϕ. Thus, the optimization task is to find the cost-efficient layout over the cloud/edge environment considering latency requirements.
Our problem, which falls under the topic of graph partitioning, is a complex task in general. For simplicity, we make the following assumptions without losing the original target.
1) We consider only one central cloud Lambda flavor and one edge flavor.
2) Since the data stores S[D] do not form a connected subdigraph and their cost is layout-agnostic, depending solely on T_s, the data store flavor assignment can be realized as a separate upper-bounded aggregation. Thus, we focus on the S[F] partition problem in the following (as ℘ must not be part of any group).
3) We do not assume internal thread-based parallelization, as functions represent simple software building blocks.
This means that P_F has to be a valid graph partitioning of S[F] where the partition groups are directed linear chains, with no limit on either their number or their size. Summarizing the above, we define our objective as finding the chain partitioning (P_F, ϕ) with minimal cost,

$\min_{(P_F,\,\phi)} \sum_{p \in P_F} c_p(p, \phi(p)), \quad (4)$

such that the following constraints are met.
1) The latency limit l_π of a given path π is not violated.
2) Each function group p ∈ P_F must contain exactly one chain.
VI. PROPOSED SYSTEM
We have applied our general design principles presented in Section IV to AWS Lambda and Greengrass and the complete system is shown in Fig. 6. Our prototype was implemented in Python3 making use of the AWS SDK and AWS IoT Greengrass SDK. In this section, the main components, algorithms and workflows are described in detail and we present how the exposed APIs of AWS are exploited by our system.
A. Layout and Placement Optimizer
The main task of the LPO is to solve the optimization problem defined in (4). Graph partitioning and clustering have been well researched for decades. While partitioning is known to be NP-complete for arbitrary directed graphs as well as for weighted trees [14], several polynomial-time algorithms exist for sequential graph partitioning (SGP), which restricts the partition groups to contain only consecutive nodes [16], [25]. The available techniques for solving SGP assume either an upper bound for the group sizes or consider only a fixed number of groups. However, our problem differs from the traditional variants of SGP in several aspects. Since we aim to split up trees explicitly into chains, and the partition groups are bounded by the latency limits of service-wide critical paths, in contrast to locally verifiable group size or count limits, the aforementioned methods cannot be applied directly to our problem. By extending our prior algorithm designed for public clouds [7], we propose a heuristic approach for cost-efficient and latency-constrained partitioning of trees into chains on cloud and edge resources, called Chain-based Tree-Partitioning (CTP).
1) Chain Partitioning: First, we define the relaxed Chain-Partitioning (CP) algorithm utilized by CTP as a subproblem for solving tree partitioning. CP specifies chain partitioning as a variant of noncrossing sequence partitioning and leverages the related divide-and-conquer approach [23]. Suppose the algorithm's input is an n-length chain of functions f with their measured performance characteristics, the number of tracked subcase latency bounds B, and an optional path [π_s, π_e] limited by l_π. The cost-efficient partitioning, along with the assigned flavors and the overall cost and latency values, can then be derived by iteratively evaluating the recurrence relations in (5).
In the recursive formulas, the subcase of the first i nodes of the chain grouped into j groups is divided into two subparts: the previously calculated subcase of the first k-1 nodes grouped into j-1 groups, and the remaining nodes k…i as a single group. Since the assigned flavor φ of the last group and its invoker group's flavor ν inherently predetermine the group execution times and the invocation delay between the two subparts, the selection of a minimal-cost subcase cannot be guaranteed to be globally optimal with respect to the overall latency constraint. Therefore, we use B precalculated latency bounds for each subcase and cache the related cost-optimal partitioning, which enables tracking more expensive subcase variants with better latencies that can be chosen during a subsequent iteration. These bounds are calculated evenly between the overall latency limit l_π and the smallest execution time of single-function groups, in descending order, keeping cheaper variants at lower bound indices b. The cost-optimal partitioning of a subcase is designated by the specific k* value where the summed cost of the two subparts is minimal. In case of multiple minima, the subcase with the lowest index k, that is, the lowest group count, is chosen. During each iteration, all flavor combinations ν, φ are examined, and only those prior subcases with feasible bounds b^k_{ν,φ} are taken into account which meet the given bound b, including the execution time of the last group and its invocation delay. To track the relevant subcases' values, dedicated matrices C and L are introduced for storing the summed cost and latency calculated with c_p and l̃_p from (2) and (3). The latency calculation formulated in l(k, i, ν, φ) is performed only for the constrained path [π_s, π_e], using the flavor-dependent invocation delays collected in matrix D. To be able to reconstruct the partition groups, matrices K, B and F are used for caching the barrier node k*, by which the optimal subcase is divided, the opted latency bound b* of the prior subcase, and the last group's flavor φ* opted for k*, respectively. The dynamic programming technique provides an efficient way to solve the recursive formulas in (5); the resulting procedure is summarized in Algorithm 1 (Chain-Partitioning), which precalculates the latency bounds and the trivial single-group subcases and then iterates over i, j, k and the flavor pairs ν, φ to fill the DP matrices.

2) Tree Partitioning: Following an analogous formalization, CTP recursively calculates the partitioning of a subtree in S by leveraging the CP algorithm and previously calculated subtree groupings to enforce the user-defined node-disjoint critical paths. To accomplish this efficiently, CTP precalculates the leaves reachable from each node by labeling the nodes with a postorder DFS, using the label definition L(v) = {v} if v is a leaf and L(v) = ⋃_{u ∈ N⁺(v)} L(u) otherwise. In order to ensure that CTP inspects every candidate partitioning of a subtree, we define the Subchain-Pruning action, which leverages this node labeling to track the chain from the subtree root r to a target leaf l, while it also fetches the chain-adjacent subtree roots N⁺_c(r → l). It operates roughly as follows: starting from the subtree root, Subchain-Pruning iteratively checks the labels of the descendant nodes. The child node that carries the target label is a member of the actual chain and is marked as the next step, while the remaining successors belong to the chain neighbors.
We can obtain a valid chain partitioning of an arbitrary subtree if we perform Subchain-Pruning, then apply Chain-Partitioning on the resulting root-leaf chain, and take the partitioning of the chain-adjacent subtrees. Consequently, we cover the cost-optimal subcase if we perform Subchain-Pruning on each leaf-ending chain designated by the subtree root's labels. To ensure the latency constraints, a separate chain traversal step is realized, similarly to Subchain-Pruning. Each critical path originating on the chain is checked during the traversal, while the related latency fragments of impacted paths are cached in L. A latency limit that fits entirely on the chain is enforced by CP itself. It follows that CTP accepts constraints assigned to distinct node-leaf subchains only; otherwise, only the critical path originating closest to ℘ can be guaranteed. Finally, we formulate our recursive CTP algorithm as follows (only the core loop is shown):

4:  for all node ∈ REVERSEDBFS(tree, root) do
5:      for all leaf ∈ L(node) do
6:          chain, nghbrs ← SUBCHAINPRUNING(tree, node, leaf)
7:          π, π s , π e ← GETCRITICALPATH(tree, chain, ...)
8:          params ← GETCHAINPARAMETERS(chain)
9:          part, cost ← CHAINPARTITION(params, π s , π e , B, l π )
10:         valid, sub_lats ← CHECKCRITPATHS(chain, part, ..., L)
11:         sum_cost ← ...
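To make the chain-partitioning idea concrete, the sketch below (in Python, the language the SDE and Wrapper are implemented in) splits a chain into consecutive groups under an end-to-end latency limit. It is a simplified illustration rather than our exact Algorithm 1: the names partition_chain, exec_time and cost_rate, the single inter-group delay, and the budget rounding that loosely stands in for the B precalculated latency bounds are illustrative assumptions.

import functools

def partition_chain(exec_time, cost_rate, flavors, delay, l_max):
    # exec_time[i][phi]: measured runtime of function i under flavor phi (assumed input)
    # cost_rate[phi]:    assumed price per second of flavor phi
    n = len(exec_time)

    def group_latency(i, j, phi):
        # functions i..j-1 run serialized inside one group
        return sum(exec_time[f][phi] for f in range(i, j))

    def group_cost(i, j, phi):
        # simplistic pricing: group latency times the flavor's rate
        return group_latency(i, j, phi) * cost_rate[phi]

    @functools.lru_cache(maxsize=None)
    def best(i, budget):
        # cheapest grouping of the suffix i..n-1 within the remaining latency budget
        if i == n:
            return 0.0, ()
        best_cost, best_plan = float("inf"), None
        for j in range(i + 1, n + 1):            # next group covers functions i..j-1
            for phi in flavors:
                lat = group_latency(i, j, phi) + (delay if j < n else 0.0)
                if lat > budget:
                    continue
                # rounding keeps the memoization table small, mimicking the role
                # of the precalculated latency bounds
                tail_cost, tail = best(j, round(budget - lat, 3))
                total = group_cost(i, j, phi) + tail_cost
                if total < best_cost:
                    best_cost, best_plan = total, (((i, j - 1), phi),) + tail
        return best_cost, best_plan

    return best(0, round(l_max, 3))

The returned plan lists each group's function range and its assigned flavor, which is the kind of layout information the LPO forwards to the SDE.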
B. Serverless Deployment Engine
One level below the LPO, the SDE is responsible for translating the application layout and monitoring conditions arriving in step 1 (see Fig. 6) from the LPO into calls that AWS can process for setting up resources. At a high level, the SDE communicates directly with AWS, accessing its CloudFormation (CF) and Greengrass services. The former is configured via its own templating language. CF processes incoming template requests describing what resources to set up, in which order, and what connections these resources have with each other, and creates individual CF stacks or stack sets from them. In our implementation, the SDE synthesizes templates specifying single CF stacks as a simplification. In step 2, the SDE passes a template to CF that defines all the components of the AWS Managed Application in the cloud and also configures Greengrass-related resources to be deployed to edge nodes. When the cloud resources have been set up by CF in step 3, the SDE calls AWS Greengrass directly to deploy resources to the edge nodes in step 4, since CF is incapable of deploying code to edge resources. After completing the whole application setup with the edge deployment in step 5, the SDE configures the elements required for the AWS Managed Monitoring of the deployed application and for the RO component. These are also exchanged with CF in step 6 and are set up in step 7. In steps 2 and 6, application and monitoring code and other artifacts are shared between the SDE and CF in compressed format using AWS's own object storage service, Amazon S3.
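For illustration, step 2 can be pictured as the following boto3 interaction; the stack name, region and capability flag are placeholders, and the template body is whatever the SDE synthesized, so this is a sketch of the exchange rather than our full implementation.

import boto3

def deploy_application_stack(stack_name, template_body, region="eu-west-1"):
    # Hand the synthesized template to CloudFormation (step 2) and wait for the
    # cloud resources to be created (step 3).
    cf = boto3.client("cloudformation", region_name=region)
    cf.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # the template also creates IAM roles
    )
    cf.get_waiter("stack_create_complete").wait(StackName=stack_name)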
In accordance with these steps, the SDE internally goes through four phases for creating the Application and Monitoring CloudFormation templates and artifacts for applications written in Python. (The concept can of course be applied to other programming languages supported by AWS Lambda as well.) In phase D1, external libraries and developer-defined function resources required by the application components are collected. These are compressed depending on component placement and then uploaded to Amazon S3. In phase D2, the actual code of the application components is processed. The code of every component group defined by the LPO is collected, and purpose-built Wrapper code is added to it as well. Two special functions are added to the application. The Entry point function is able to divert incoming requests to the application's own entry point, be it on an edge node or in the cloud. The Edge monitor function performs CPU and memory load measurements on edge nodes. The resulting AWS Lambda functions are compressed and uploaded to S3. In phases D3 and D4, the SDE formulates the application and monitoring CF templates, respectively. During application template creation, the incoming layout and flavor specifications are used. These are complemented with code and artifact locations in S3 as well as additional AWS resources that are needed to set up the application properly. Such resources include, but are not limited to, AWS Lambda layers, versions and aliases, Amazon VPC, subnets, Internet Gateways, NAT Gateways, ElastiCache clusters, AWS IAM security policies and roles, as well as Greengrass groups, cores, resources and subscriptions. The SDE's Python3 implementation contains around 2500 LoC.
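Phases D1 and D2 essentially amount to zipping the code and libraries of each component group and pushing the archives to S3. The snippet below shows these steps with placeholder bucket and key names; the real SDE additionally injects the Wrapper code and resolves external libraries per placement.

import zipfile
import boto3

def package_and_upload(group_name, source_files, bucket, region="eu-west-1"):
    # Zip the sources of one function group (phase D2) ...
    archive = f"/tmp/{group_name}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in source_files:
            zf.write(path, arcname=path.split("/")[-1])
    # ... and upload the artifact to S3 so the CF template can reference it (phase D3).
    key = f"artifacts/{group_name}.zip"
    boto3.client("s3", region_name=region).upload_file(archive, bucket, key)
    return key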
The AWS Managed Application is ready to run as soon as CF finishes with step 3 if the application does not use edge resources, or at step 5 otherwise. As depicted at the bottom of Fig. 6, all interactions of application components with each other or with data stores traverse our Wrapper. This lightweight runtime extension is capable of hiding (edge or cloud) placement differences, function invocation and data store access specifics from the application components. It serves as the unique standardized entry point to functions that have been grouped together by the LPO, and it even relays function-specific environment variables. Configuration of the Wrapper is also performed via environment variable assignment in the template, at phase D3 within the SDE. Here, the specific Lambda, IoT topic and Redis endpoints are supplied to the Wrapper. During normal application operation, our Wrapper implementation, comprising 630 lines of Python code, adds negligible latency to the application's E2E latency, as configuration parameters are cached (in Python dictionaries and objects) and the Wrapper's internal handler components are extremely lightweight. In case of a cold start, when configuration parameters need to be processed, the Wrapper overhead is slightly greater but still remains under 2 ms.
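The placement-hiding behavior of the Wrapper can be sketched as a small dispatcher that reads its target map from environment variables (populated in phase D3) and either calls the next group's Lambda function or an edge-local handler. All names here (TARGET_MAP, invoke_next, local_registry) are illustrative; the production Wrapper additionally covers data-store access and metric reporting.

import json
import os
import boto3

_lambda = boto3.client("lambda")
_targets = json.loads(os.environ.get("TARGET_MAP", "{}"))  # component -> endpoint config

def invoke_next(component, payload, local_registry=None):
    # Dispatch a call to the next component, hiding cloud/edge placement differences.
    target = _targets[component]
    if target["placement"] == "cloud":
        resp = _lambda.invoke(
            FunctionName=target["lambda_name"],
            InvocationType="RequestResponse",
            Payload=json.dumps(payload).encode(),
        )
        return json.loads(resp["Payload"].read())
    # Edge placement: functions grouped together share the runtime,
    # so the call stays an in-process function call.
    return local_registry[target["handler"]](payload)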
C. Automated Monitoring
Since every communication attempt between application resources goes through the Wrapper, it proves to be ideal for handling monitoring-related functionality as well. As these invocations and data store accesses traverse the Wrapper, it measures and then logs call latency and rate, blocking delay, as well as function execution time. An interface for logging custom application-level metrics of application components is provided as well. Measured values are reported to the AWS Managed Monitoring component. This entity has three tasks: aggregating metrics, sending out alerts, and providing a queryable interface. The first two tasks are handled by Amazon CloudWatch (CW). When logging monitoring data to CW, the Wrapper experiences a significant, available-CPU-dependent delay. In case of the smallest Lambda flavor (128 MB), we experienced 125 ms on average with high variance, using the highest available batching (20 metrics). However, thanks to implementation details, this does not contribute to application delay at all, since metrics logging runs virtually in parallel with the application. It does, however, contribute to the price of maintaining the application. Data coming from the Wrapper goes to CW Metrics, and limit violations are handled by CW Alarms. This latter AWS service is configured within the monitoring template in phase D4 of the SDE for conditions coming directly from the Developer or the LPO. Alerts are sent out from the Monitoring component to the LPO and RO in steps 8a and 8b, respectively, using the integration between CW and Amazon Simple Notification Service (SNS). For measurement data that does not trigger alarms, the component offers access via a Metric Inquirer function that is also deployed at steps 6-7.
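The decoupling of metric reporting from the request path can be sketched as follows: measurements are queued by the Wrapper and a daemon thread flushes them to CloudWatch in batches of up to 20 metrics, so the put_metric_data latency discussed above is paid outside the application's critical path. The namespace and metric names are placeholders.

import queue
import threading
import boto3

_cw = boto3.client("cloudwatch")
_pending = queue.Queue()

def report(metric_name, value, unit="Milliseconds"):
    # Called by the Wrapper on every invocation; never blocks the request path.
    _pending.put({"MetricName": metric_name, "Value": value, "Unit": unit})

def _flush_loop():
    while True:
        batch = [_pending.get()]                 # block until a measurement arrives
        while len(batch) < 20 and not _pending.empty():
            batch.append(_pending.get())         # top up to the 20-metric batch limit
        _cw.put_metric_data(Namespace="ServerlessApp", MetricData=batch)

threading.Thread(target=_flush_loop, daemon=True).start()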
D. Dynamic Reoptimization
The features of the Monitoring component discussed above serve as the basis for the closed-loop reoptimization of the application. After deployment, the application automatically starts to log usage metrics, which either trigger an alarm, or one of the optimization components discovers a non-alerting change in the application's behavior and initiates a change (see steps 8a and 8b). Depending on which component reacts, we define two control loop behaviors that differ in their reaction timescale as well as in their possibilities to make changes to the application.
1) Steady-State Control:
The steady-state control loop strives to follow usage trends, daily profiles, or changes in the application users' behavior. As the default means to accomplish this, the LPO periodically queries the Managed Monitoring component via the latter's Metric Inquirer facility (see step 8a) and updates its own Platform and Application Models. The periodicity of the query depends on the LPO configuration, and certain use cases can require more frequent updates than others. For convenience, the Monitoring component is also able to trigger the LPO directly, supplying notifications about changes in reported metrics outside the regular query periods. In the current implementation, the SDE sets up such triggers as application E2E latency alarms in the Monitoring component in phase D4 of the deployment, when a latency constraint is provided in the service specification. Both types of changes can induce service reoptimization in the LPO. In order to decide whether the deployed layout is worth replacing with a new one, a dedicated redeployment metric is applied. The LPO compares the user-given threshold value with the weighted sum of the following values to make the deployment decision: 1) the cost of the relative change in the layout; 2) the relative profit gain, which is the difference between the deployed layout's cost calculated with the updated service parameters and the new layout's cost; 3) the summed latency gain; 4) the relative latency margin on critical paths; and 5) the number of avoided latency constraint violations. When the LPO deems a new layout better than the currently deployed one based on this metric, it initiates a full redeployment incorporating steps 1 through 6.
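The replacement decision can be written as a simple scoring function over these five indicators; the weights and sign conventions below are illustrative placeholders, since the actual values are part of the LPO configuration.

def should_redeploy(change_cost, profit_gain, latency_gain,
                    latency_margin, avoided_violations,
                    threshold, weights=(-1.0, 1.0, 1.0, 0.5, 2.0)):
    # Weighted sum of the five indicators compared against the user-given threshold.
    score = (weights[0] * change_cost            # 1) cost of changing the layout
             + weights[1] * profit_gain          # 2) relative profit gain
             + weights[2] * latency_gain         # 3) summed latency gain
             + weights[3] * latency_margin       # 4) relative margin on critical paths
             + weights[4] * avoided_violations)  # 5) avoided constraint violations
    return score > threshold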
2) Dynamic Runtime Reconfiguration: In this case, the RO is the component making changes to the application. It has limited possibilities, as it can only switch between predeployed layouts by offloading functions from edge nodes or by reverting these changes. The RO interacts with the Monitoring component in step 8b. With push-based alerting, the RO cannot get triggered sooner than 10 s after an alarm condition presents itself, because of CW Alarms limitations. A faster reaction time is achieved by placing periodic queries to CW Metrics, realizing poll-based execution. In order to perform such queries by the RO, we use a combination of an Amazon EventBridge event and an AWS Lambda function. Both of these are deployed at step 7, and EventBridge sets up a trigger event that fires every minute (the shortest time period offered by the service). The event trigger invokes our custom-made RO Trigger function that schedules the RO to run frequent periodic queries to the monitoring component. In either push- or poll-based execution, the RO interacts directly with the Wrapper, as shown by step 9 in Fig. 6. For on-demand functions in the cloud, the SDE configures a Redis instance at deployment, while for edge functions it uses the one available on the edge. The RO writes offloading information to these Redis instances, and the Wrapper checks them before each function execution. As this data is small in size and Redis read operations have low latency, the average delay of this overhead is negligible (less than 1 ms) compared to the execution time of the application component. After reading a change request, the Reconfiguration Handler in the Wrapper changes subsequent invocations from edge-local calls to cloud calls or vice versa.
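On the Wrapper side, the offloading flag written by the RO is consulted before every invocation; a minimal sketch with the redis-py client and an assumed key layout looks like this:

import os
import redis

_store = redis.Redis(host=os.environ["REDIS_HOST"], port=6379, decode_responses=True)

def resolve_placement(component, default_placement):
    # Check the RO's offloading flag before each invocation (adds well under 1 ms).
    flag = _store.get(f"offload:{component}")    # e.g. "cloud", "edge" or None
    return flag if flag in ("cloud", "edge") else default_placement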
VII. EVALUATION
In this section, we evaluate the performance of our system by investigating the use case presented in Section III in varying operating regimes. First, the main operation phases of the overall system are characterized. Second, the performance of the steady-state control loop is analyzed, and finally, we evaluate the performance of the dynamic runtime reconfiguration loop. For describing our software deployment layouts, we introduce the following interval-based notation: groups of single or multiple consecutive application functions, denoted by their ordinal indices i, j, k ∈ N from 1 to n (see also Fig. 1), are written in square brackets, and the subscripts C or E identify the assigned cloud or edge flavors, respectively. For example, in case of {[1-4] E , [5] C , [6] C }, functions #1-#4 (Image Grab to Object Detection Stage 1 in Fig. 1) are placed within a group assigned to the edge, while functions #5 (Cut) and #6 (Object Detection Stage 2) are deployed in two distinct groups in the cloud. The experiments are conducted in Amazon's data centers located in the Ireland (eu-west-1), Frankfurt (eu-central-1) and Oregon (us-west-2) regions. Table II illustrates the performance characteristics of the overall system when deploying selected layouts. The first five options are generated by our system during normal operation. These show how the LPO changes the application's layout as it transitions from being completely cloud-based to completely deployed to the edge node, depending on different circumstances. (We discuss these cases and circumstances in more detail in Section VII-B.) The last three layouts in the table are corner cases created manually for comparison. The operation of the SDE is split into four distinct phases: translation from the LPO to the AWS CloudFormation (CF) format, application code management (application source code and external library collection, and upload), CF deployment, and edge deployment. For each layout, we executed 25 iterations where only application components were updated; state stores were not changed. Our system was executed on a t3a.2xlarge Amazon EC2 instance running in the same region chosen for deploying the application.
A. Overall System Performance
As shown in the table, LPO execution and LPO→CF format translation have the lowest impact on deployment delay, both having sub-second values (under 7 ms) with our simple application. The LPO does not display high variance between different layouts, as its execution time depends only on the number of used flavors, which is a fixed parameter here. For a fixed application, the translation's execution time depends on two factors: 1) the number of groups in the layout and 2) the placement of these groups. As Table II shows, creating more groups naturally increases translation time, as setting up a function in AWS usually requires the specification of multiple resources. Assigning functions to the edge node slows down the translation step for the same reason. Code management and CF deployment take significantly more time. The former requires 20-81 s to complete, as handling external libraries and ML models contributes heavily to phase latency. In case of the simplest P C layout, a single artifact containing all the code, libraries and ML models is created and uploaded to AWS. In the worst case, P 6C , all these are packaged separately and sequentially for the six different functions, resulting in six comparatively big deployment packages. Phase delay is reduced when functions are mainly deployed to the edge, thanks to merging libraries and ML models into one single artifact on the edge. In case of P ECC , however, the SDE still has to create deployment packages for functions #1-#4 and their shared libraries, as well as separate ones for functions #5 and #6, which results in a comparatively high phase delay. CF deployment adds another 1-3.3 min to the complete deployment time, since connected Lambda functions are deployed by CF sequentially instead of in parallel, and their update takes around 20 s each. As the difference between P 6C and P 6E (every function deployed separately in the cloud or on the edge, respectively) shows, edge-related setup further adds to phase delay. The increase is due to the fact that for the edge, AWS needs to configure the complete Greengrass setup: not only the functions but also the merged artifact containing libraries and ML models, as well as the AWS IoT communication topics between the function groups. Edge deployment is comparatively quicker and less dependent on function grouping, as external packages shared among application functions are deployed together in a common edge resource by AWS Greengrass. One or two function groups are deployed in 6.1-7.3 s, while assigning each function to a different group increases phase latency by only an additional 0.9 s. Overall, our measured complete deployment delay is 1.2-4 min depending on the application layout. As the LPO's measurement update period is 15 min in our tests, the delay of a complete reoptimization cycle via the steady-state control loop can reach 19 min in total.
B. Reoptimization via the Steady State Control Loop
In order to design and conduct comprehensive test scenarios covering all cases for our proposed system, we perform preliminary simulations with the LPO module. The optimization algorithm is validated using a test request based on our use-case application described in Section III. Fig. 7(a) depicts the resulting groupings for the applied limits (horizontal axis) and the assigned flavor for each function component (vertical axis), while Fig. 7(b) shows the predicted values of E2E latency, overall application cost, and the partial cost required to be paid to the cloud provider. The results align with our expectations: stricter latency limits force the LPO to utilize compute resources at the edge, while otherwise it prefers the cheaper but, in terms of E2E latency, underperforming public cloud. It can be observed that the jumps in the overall cost at 2.1 and 2.6 s correlate with the increases in the aggregated function execution time assigned to the edge, while the predicted latency values give a close approximation of the upper latency limits but always fall below them.
Regarding the different deployment scenarios, we can also notice that only five distinct and feasible software layouts are distinguished and generated by the LPO out of the 132 possible grouping options. (Since the number of noncrossing partitions of an n-element set/chain is given by the nth Catalan number C n , where n equals the number of functions in our case, our use-case application has C 6 = 132 distinct layouts [32].) These results show that the LPO can also be used to calculate feasible application layouts for a given latency limit in advance, thus significantly reducing the state space of deployment options for additional layout reconfiguration features (see Section VII-C).
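The layout count quoted above can be reproduced directly from the closed form C_n = binom(2n, n)/(n + 1); for the six-function pipeline this indeed yields 132.

from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

print(catalan(6))  # 132 grouping options for the six-function application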
Based on the simulation outcomes, we construct a comprehensive experiment to validate the behavior and performance of our system on AWS. Although our proposed system implicitly manages cloud-related performance fluctuations with the help of the control loops, there is no way to control the internal network characteristics and server workloads in a public cloud environment. For this reason, we select E2E latency and the detected object count (an application-specific metric) as the two input parameters that may vary in time and may affect the deployed application layout considerably. Therefore, our steady-state control loop experiment is divided into two phases to observe the effect of these parameters' changes separately, starting from a common initial state, while covering all the feasible deployment options.
For the experiment, we utilize dedicated requests generated during the previous simulations and apply two distinct input sources: a low (LO) and a high (HO) object count video stream, resulting in 1 and 5 objects per frame on average, respectively. The detected object count directly influences the invocation rate between the last two functions, Cut and Object Detection Stage 2, as highlighted in Section III. The experiment is conducted in the Ireland region, while a dedicated VM with 8 vCPUs in Frankfurt is set up as the edge node. Each function is assigned to the runtime flavor with 1024-MB memory. The LPO is configured to apply a 15-min reoptimization period, which is also the time window used for periodically obtaining the measurement updates and for predicting the different layout costs. The used system parameters are also summarized in Table III. 1) Phase 1: In the first phase of our experiment, we deploy our use-case application using a reasonably permissive latency limit of 3.0 s and apply the LO video stream as test input. Then, we switch to the HO stream during reoptimization period 2, altering the application-specific metric. At the initial deployment, the LPO decides to encompass all functions in a single group (P C = {[1-6] C }), resulting in a measured E2E latency of 2.2 s. Based on live measurements acquired directly from CloudWatch, shown in Fig. 8(a), we can state that both the deployment and the detected object count metric remain unchanged after the first reoptimization period. As the input stream is altered from LO to HO, the detected object count, and thus the invocation rate of the last function, rises.
Consequently, the measured E2E latency of the active deployment layout exceeds the 3.0 s constraint, which is detected by the LPO at the end of the second period. At this point, the LPO initiates the service redeployment process. During the reoptimization, the LPO calculates a new optimal layout that meets the given latency constraint by moving the last component into a separate group (P CC = {[1-5] C , [6] C }). The reason behind this decision is that the E2E latency can be reduced by eliminating the significant intragroup serialization and leveraging the platform-supported parallelization, in exchange for a higher operational cost and an additional intracloud invocation delay. Afterwards, the new layout remains optimal, keeping a steady-state setup with an experienced E2E latency of 2.7 s, and no other redeployment is performed in spite of the fluctuations in the measured values. Fig. 8(b) sheds light on the decision process of the LPO from an internal point of view. It depicts the predicted cost in millionths of a dollar (μ$) and the E2E latency predicted at the beginning of the given periods, along with the measured E2E latency acquired at the end of the periods for each step. It also visualizes the predicted cost of the non-reoptimization option, which is the recalculated cost of the layout in operation but with the updated metrics, used in the layout replacement decision. We can observe at period 2, when the measured value exceeds the limit and deviates from the predicted latency, that the LPO opts for a new layout, despite it being 3.4% more expensive, in order to avoid the constraint violation. Fig. 8(b) also confirms that the predicted E2E latency aligns with the measured values in steady state, with only a 0.8-2.6% difference.
2) Phase 2: In the second phase, we examine the effects of different E2E latency limits on the generated layouts, similarly to the simulation tests before. Continuing our experiment, we carry on with the HO video stream and set a 4.0 s latency limit to ensure the same initial cloud-only deployment as in Phase 1. After reaching the steady state (P C ), we deploy different layout options by iteratively sending new service requests with decreasing latency limits. The applied limits, which are 4.0, 3.0, 2.6, 2.1, and 1.7 s, are chosen from the simulations' results to cover all the generated deployment options. Between the deployments we leave enough time (at least 15 min) for our system to update the application metrics and confirm the steady state before proceeding to the next deployment. Fig. 9(a) presents the measured E2E latency acquired from CloudWatch for the entire duration of Phase 2. We can observe that the experienced latency values follow the decrease of the applied limits step by step, providing stricter E2E latency in each step. As examined in the previous phase, between the first two cloud-only deployments, P C and P CC , we can achieve around 0.9 s latency gain due to the platform-provided parallelization. By applying the next two constraints, we get the mixed deployments P ECC = {[1-4] E , [5] C , [6] C } and P EC = {[1-5] E , [6] C }, where the limits force the first several functions to be grouped together and assigned to the edge. With these layouts we can further reduce the E2E latency, despite introducing a higher edge-cloud invocation latency. Utilizing edge resources moves processing closer to the video source, while keeping the last function in the cloud can still leverage its innate parallelization capabilities. Finally, applying the strictest latency limit results in a two-group, edge-only layout P EE = {[1-5] E , [6] E }. Apart from the cloud-only scenarios, we can observe notable downtime during the layout replacement operation when the edge flavor is involved. The lack of support for seamless transition stems from the limitations of AWS CloudFormation, as described in Section VII-A. Although supporting downtime-free replacement in the steady-state control loop is a matter of future work, our system offers rapid and seamless switching between layouts leveraging the runtime reconfiguration loop. Additionally, we also observed an increased relative standard deviation (2.9-6.0%), calculated offline from exported CloudWatch logs, in the measured E2E latency compared to the cloud-only layouts (0.8-1.1%). This stems from the presence of edge-cloud invocations in the deployments. Fig. 9(b) depicts the predicted and measured latency values along with the predicted costs for the aforementioned layouts. For the sake of comparison, we also deploy and measure three manually assembled layouts, which represent the de facto, cloud-native deployment approaches of executing each code component separately (P 6C , P 6E ) or encompassing them all together (P E ). Applying these corner cases, we can achieve similar E2E latency as with the corresponding cloud-only and edge-only layouts (P C , P EE ) generated by the LPO, but at a 2-2.4 times higher cost (up to 22%). In addition, if we compare the LPO-calculated layouts to these manually crafted ones, while considering the associated latency limits, we can observe a significant 3.2 times cost increase in the worst case (P EE ↔ P CC ).
These differences in the layout costs confirm our argument that additional optimization mechanisms with precise models are required for operating serverless applications over public clouds in a cost-efficient manner. Moreover, it is worth highlighting that during Phase 2 of the experiment the predictions approximate the measured values well, including the mixed deployments, with only a 0.5-3.8% overestimation.
C. Dynamic Runtime Reconfiguration
As presented in Section VI-D2, two versions of the runtime reconfiguration loop are available: a push-based solution, where Amazon CloudWatch (CW) sends out alarms to the RO, and a poll-based mechanism, where the RO actively queries CW for limit violations. Both approaches are affected by the capabilities of CW. The former is limited by a 10 s measurement window, the latter by a 1 s one. CW also needs an undisclosed amount of time to consolidate metric data. As depicted in Fig. 10, we set up a test environment using our system to investigate the detection time of limit violations. Our test application, consisting of a single Trigger Event Source component, is deployed in the cloud. Monitoring happens the same way as described in Section VI-C, via CW Metrics. The application component sends out trigger events that cause limit violations and logs their generation time. The RO also logs the time when it detects these violations. The time difference between an event's generation and its detection is calculated by a separate Lambda function. Our measurements show that the effective feedback delay in case of the push-based option is 20.13 s on average, with 15.25 s minimum and 20.4 s maximum values over our 100 tests. For the poll-based one, however, we can achieve a 3.2 s average delay with 0.58 s minimum and 8.95 s maximum values. To determine the total reconfiguration time of the application, we have to add approximately another 2 ms in both cases when communicating with cloud functions. This delay is due to the data exchange between the RO and the function Wrappers via Redis instances. Edge reconfiguration is slower, as the 2 ms exchange latency is increased by the network delay between the RO's cloud region and the edge location.
We also investigated the performance of component offloading from edge to cloud in our object detection application with both available options. We set up our edge node with four CPU cores in the eu-central-1 (Frankfurt) AWS region, while us-west-2 (Oregon) was chosen for cloud execution. The sample video was streamed from Budapest, outside of AWS, with a sample frame rate of 2/s. In this experiment, we use two layouts from those given by the LPO in Section VII-B: P EE = {[1-5] E , [6] E } as the initial deployment, and P EC = {[1-5] E , [6] C } for offloading Object Detection Stage 2 to the cloud. RO-driven offloading is triggered when the object count application-level metric, supplied by the Cut function, surpasses the number of the edge node's CPU cores. In case the application is triggered more frequently than the minimum execution time of the Object Detection Stage 2 function allows, the metric can signal an edge node overload condition. In such cases, concurrent instances of the function would consume more CPU resources than available. Fig. 11 depicts the effect the different alarm detection options have on the application performance. The displayed metrics are taken from CW and, in case of the object count and E2E delay, use a 1 s measurement window for aggregation. In case of CPU load, however, the Edge Monitor component logs the aggregated utilization metric less frequently. As expected, the poll-based mechanism outperforms the push-based one in every regard. As the object count in the video stream increases, the push-based loop is slow to react: the edge CPU load reaches 100% and the E2E latency tops out at 9.87 s. The poll-based option experiences a far lower rise in E2E latency (with a maximum of 2.75 s) and manages to keep the CPU load on the edge in check, with a maximum of 79%, which is a 16% rise compared to normal behavior. After the end of application reconfiguration and function cold start latency, the E2E latency settles at 2.3 s (up from the original 1 s) and the edge CPU usage at 33%. The 16 s transient time of the poll-based option is significantly shorter than the 43 s of the push-based one (refer to the intervals T 3 and T 1 , respectively, in the figure). The comparatively long transient time is caused by the increased execution time on the edge node as well as the cold start delay for starting up functions in the cloud. After the object count decreases below four, the RO shifts Object Detection Stage 2 back to the edge. As both E2E latency and edge CPU usage return to their original values, we can observe that the transition, in case of the poll-based reconfiguration, is unsurprisingly quicker again (T 4 = 7 s compared to the push-based version's reaction time of T 2 = 16 s).
Based on our tests, it is clear that although push-based application reconfiguration is cheaper to realize, it might not be sufficient for avoiding edge node overload. Depending on application characteristics, the poll-based option can improve performance, but at higher invocation rates it might fail as well. As our implementation already reaches the limits of CW, if an even smaller reaction time is required, a different solution should be used for collecting application metrics.
VIII. CONCLUSION
In this article, we adapted cloud-native programming and serverless operating techniques for latency-sensitive IoT applications. A novel system was proposed on top of public cloud platforms providing serverless solutions for central and edge domains. The general approach was applied to Amazon's AWS, leveraging its FaaS offerings, Lambda and Greengrass. Our main findings are summarized as follows.
1) We argue that application latency and operational costs are significantly affected by the grouping of the constituent functions (how to group and package user functions into FaaS platform artifacts); the selected flavors providing the runtime for the functions; and the placement of the components (central cloud or edge domains). Developers or operators of latency sensitive applications can benefit from defining their expectations on latency and cost, while scaling to current workload is delegated to the cloud providers. 2) We propose to add an optimization component on top of public cloud stacks to optimize deployment costs while keeping soft latency boundaries. This component controls the deployment via available services and exposed APIs. Such a control loop allows supervising serverless deployments in the range of minutes or tens of minutes, which is sufficient to follow daily profiles and usage trends. 3) In order to support control on lower timescales, the platform and the FaaS runtime are required to provide direct configuration interfaces for swapping layouts. We presented an extension to a state-of-the-art FaaS platform implementation. As a result, control within a few seconds can also be realized if different deployment options are onboarded in advance. 4) Instrumentation is needed to implement the detailed monitoring required as input for optimization. Customization of cloud monitoring offers a simple implementation, which enables capturing the performance characteristics of the deployed applications and the underlying platforms with acceptable accuracy. Therefore, adequate models of applications and platform components can be established, hence such a monitoring system fulfills all requirements to enable closed-loop control for latency sensitive serverless applications. | 18,491.2 | 2021-05-15T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
3D Simulation and Optimization of Characteristics of Al0.1Ga0.9N/In0.2Ga0.8N High Electron Mobility Transistor with B0.03Ga0.97N Back-Barrier Layer
The objective of this paper is to simulate the effect of a BGaN back-barrier on the performance of a high electron mobility transistor (HEMT) based on AlGaN/InGaN, using the Silvaco TCAD 3D simulator. We simulate some DC and AC characteristics; we note that with only a 60 nm BGaN back-barrier layer and 3% boron in the BGaN, the HEMT shows improvements of 33.34% in the maximum drain current, 64.7% in the transconductance, 19% in the threshold voltage, 50% in the drain-induced barrier lowering, 34.67% in the subthreshold swing, 20% in the breakdown voltage, 10.18% in the cut-off frequency, and 12% in the maximum oscillation frequency, as well as a record-high ION/IOFF of over 10^12.9.
Introduction
The development of civil and military communication systems, such as radar or mobile telecommunications, requires electronic components capable of generating high power levels in the microwave domain.1 New technologies are being explored to meet these two operating criteria. The high electron mobility field effect transistor (HEMT), combined with wide band gap semiconductors such as gallium nitride, appears to be an excellent candidate for this type of application.2 In fact, a HEMT based on the AlGaN/InGaN heterostructure has both a high density of carriers confined at the heterojunction and high electron mobilities. Cut-off frequencies up to several tens of gigahertz are also obtained.
Exploring new materials and their properties is one of the most important ways to expand the range of applications. In most cases, this translates into new band gaps or lattice parameters that dictate the mechanical, electrical, or optical behavior of a device.3 It allows for band engineering and for obtaining new wavelengths or new electrical properties. Among the nitride-based semiconductors with a wide band gap, a new class of materials based on boron alloys has emerged; the BGaN alloy is a new material that has not been studied very much until now. Recently, a Japanese team has shown the possibility of developing the BGaN ternary by incorporating a small molar fraction of boron into GaN (up to 1% boron).4 Beyond this small percentage, the fundamental difficulty is to avoid GaN-BN phase separation, in which the alloy is no longer formed and boron-rich or gallium-rich zones appear in the layer.
Our work fits into this context; it consists of simulating the electrical performance of an AlGaN/InGaN HEMT that contains a boron gallium nitride (BGaN) back-barrier layer under the (InGaN) channel layer and comparing it to that of the transistor without this back-barrier layer.
Proposed structure and simulation model
Several works exist on AlGaN/InGaN HEMT structures, but very few on the AlGaN/InGaN/BGaN HEMT. The purpose of our work is therefore to make a comparison between these two structures; we use the Silvaco software with the DevEdit 3D and Atlas modules to obtain the different characteristics. The electrostatic potential is obtained from Poisson's equation, where ψ is the electrostatic potential, ε is the local permittivity, and ρ is the local space charge density.
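For reference, the standard form of Poisson's equation used by drift-diffusion simulators such as Atlas is

\[ \nabla \cdot \left( \varepsilon \, \nabla \psi \right) = -\rho . \]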
The continuity equations describe the temporal variations of the charge densities (electrons and holes); they are defined by Eqs. (2) and (3).
where n and p are the electron and hole concentrations, Jn and Jp are the electron and hole current densities, Gn and Gp are the generation rates for electrons and holes, Rn and Rp are the recombination rates for electrons and holes, and q is the magnitude of the electron charge.
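In their usual form, these carrier continuity equations read

\[ \frac{\partial n}{\partial t} = \frac{1}{q}\,\nabla \cdot \vec{J}_n + G_n - R_n , \qquad \frac{\partial p}{\partial t} = -\frac{1}{q}\,\nabla \cdot \vec{J}_p + G_p - R_p . \]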
The basic band parameters for defining heterojunctions in Blaze are the bandgap, electron affinity, permittivity, and mobility.
The energy band gap of B x Ga 1-x N as a function of the boron fraction can be approximated using a modified Vegard's law that includes a bowing parameter (b) 8,9 in addition to the linear interpolation; this is given by Eq. (4). The transfer characteristic is shown in Fig. 3. We obtain a threshold voltage (Vth) of about −4.25 V and −3.5 V, respectively, for the HEMT with and without the BGaN back-barrier layer. The incorporation of boron in GaN increases the resistivity of the BGaN back-barrier layer and improves the mobility of the carriers in the active layer; this layer makes the buffer more resistant, so that the leakage of electrons from the channel to the substrate becomes more difficult, and it serves as an electrostatic barrier.
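The modified Vegard's law referred to above as Eq. (4) commonly takes the quadratic form with bowing parameter b,

\[ E_g\!\left(\mathrm{B}_x\mathrm{Ga}_{1-x}\mathrm{N}\right) = x\,E_g(\mathrm{BN}) + (1-x)\,E_g(\mathrm{GaN}) - b\,x\,(1-x). \]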
We notice that without the back-barrier, the (a)-HEMT has Ion = 10^-2.2 A and Ioff = 10^-7.8 A, resulting in an Ion/Ioff ratio of 10^5.5, so the (b)-HEMT exhibits a better Ion/Ioff ratio than the (a)-HEMT because of the BGaN back-barrier. We get an Ion/Ioff ratio that is almost 10^7.4 times larger for the (b)-HEMT.
The sub-threshold swing (SS) is determined from the log(Ids) characteristic as a function of Vgs. It corresponds to the gate-source voltage variation needed to reduce the drain current by one decade. It is obtained for Vgs values close to pinch-off and is expressed in mV/dec (the variation of Vgs when Ids is divided by ten). For the (b)-HEMT with the back-barrier, the gate-leakage current is invariant with the gate bias; the device offers a gate leakage of only 7×10^-35 A at -0.2 V. This extremely low value indicates the high quality of the device. The frequency performance of the device is studied by small-signal AC analysis through the cut-off frequency (f t ) and the maximum oscillation frequency (f max ). We study the influence of a BGaN back-barrier on the RF characteristics of the high electron mobility transistor (HEMT) in Fig. 8a. When boron is added, the BGaN ternary compound becomes more resistive and better opposes the leakage of charge carriers towards the substrate. The DC and AC properties were compared and investigated; our results allow us to conclude that device performance continuously improves with the B 0.03 Ga 0.97 N back-barrier layer. It is found that the saturation drain current, the peak transconductance, SS, DIBL, the cut-off frequency (ft), the maximum oscillation frequency (fmax), and the ION/IOFF ratio all improve with the B 0.03 Ga 0.97 N back-barrier layer.
It can be said that a layer of BGaN can be made very resistive with only a few percent of boron, which could be very interesting for devices such as HEMTs; the proposed device structure is promising for high-performance and high-speed applications. | 1,526 | 2021-05-20T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Approximation by Zygmund means in variable exponent Lebesgue spaces
In the present work we investigate the approximation of functions by Zygmund means in variable exponent Lebesgue spaces. The estimate obtained here depends on the sequence of best approximations in Lebesgue spaces with variable exponent. These results are also applied to estimate the approximation by Zygmund sums in Smirnov classes with variable exponent defined on simply connected domains of the complex plane.
We say that the variable exponent p(x) satisfies the local log-continuity condition if there is a positive constant c 1 such that (1) holds for all x, y ∈ T with |x − y| < 1/2.
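In its standard form, this local log-continuity (log-Hölder) condition reads

\[ |p(x) - p(y)| \le \frac{c_1}{-\ln |x-y|}, \qquad x, y \in T, \; 0 < |x-y| < \tfrac{1}{2}. \]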
A function p ∈ ℘ is said to belong to the class ℘ log if the condition (1) is satisfied. The spaces L p(.) (T) are called generalized Lebesgue spaces with variable exponent. It is known that for p(x) := p (0 < p ≤ ∞), the space L p(x) (T) coincides with the Lebesgue space L p (T). Lebesgue spaces with variable exponent were first introduced by Orlicz [26]. Note that the generalized Lebesgue spaces with variable exponent are used in the theory of elasticity, in mechanics, especially in fluid dynamics for the modelling of electrorheological fluids, in the theory of differential operators, and in variational calculus [4], [6], [7] and [28]. Detailed information about the properties of the Lebesgue spaces with variable exponent can be found in [8], [24] and [31]. Note that some of the fundamental problems of approximation theory in the generalized Lebesgue spaces with variable exponent, for periodic and non-periodic functions, were studied and solved by Sharapudinov [32]-[35]. Let (2) be the Fourier series of the function f ∈ L 1 (T), where a k (f ) and b k (f ) are the Fourier coefficients of the function f . The n-th partial sums and the Zygmund means of order k (k ∈ N) of the series (2) are defined, respectively, as in [12], [36].
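In the standard notation (see, e.g., [12], [36]), these take the form

\[ S_n(f, x) = \frac{a_0(f)}{2} + \sum_{\nu=1}^{n}\bigl(a_\nu(f)\cos \nu x + b_\nu(f)\sin \nu x\bigr), \]
\[ Z_n^{(k)}(f, x) = \frac{a_0(f)}{2} + \sum_{\nu=1}^{n}\Bigl(1 - \bigl(\tfrac{\nu}{n+1}\bigr)^{k}\Bigr)\bigl(a_\nu(f)\cos \nu x + b_\nu(f)\sin \nu x\bigr). \]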
where I is the identity operator. Note that the k-th modulus of continuity Ω k (f, ·) p(.) is a nondecreasing, nonnegative, continuous function. Let G be a finite domain in the complex plane C, bounded by a rectifiable Jordan curve Γ, and let G − := ext Γ. Let ϕ denote the conformal mapping of G − onto the exterior of the unit disk, and let ψ denote the inverse of ϕ.
For any measurable bounded exponent p(z) ≥ 1, we denote by L p(.) (Γ) the set of functions f for which the corresponding modular is finite. We denote by K either the segment [0, 2π] or a rectifiable Jordan curve in the complex plane C. We suppose that the Lebesgue measurable function p(.) : K → [0, ∞) satisfies the conditions (3) (see [18]). We also define the variable exponent Smirnov class E p(.) (G). For f ∈ L p(.) (Γ) with p ∈ ℘ log we define the function
Let h be a continuous function on [0, 2π]; its modulus of continuity is defined in the usual way. If Γ is a Dini-smooth curve, then there exist constants c 2 , c 3 , c 4 and c 5 [38] such that the corresponding estimates hold a.e. on T * and on Γ, respectively. Note that if Γ is a Dini-smooth curve, then the estimates in (4) apply. Let Γ be a rectifiable Jordan curve and f ∈ L 1 (Γ). Then the functions f + and f − , defined by the Cauchy-type integrals over Γ, are analytic in G and G − respectively, and f − (∞) = 0. Thus the limit exists and is finite for almost all z ∈ Γ.
According to Privalov's theorem [9, p. 431], if one of the functions f + or f − has non-tangential limits a.e. on Γ, then S Γ (f )(z) exists a.e. on Γ and the other one also has non-tangential limits a.e. on Γ. Conversely, if S Γ (f )(z) exists a.e. on Γ, then the functions f + (z) and f − (z) have non-tangential limits a.e. on Γ. In both cases the corresponding formulae hold, and hence f = f + − f − a.e. on Γ.
Let ϕ k (z), k = 0, 1, 2, ..., be the Faber polynomials for G. The Faber polynomials ϕ k (z), associated with G ∪ Γ, are defined through the generating expansion and the related equalities for every z ∈ G. Considering this formula and the expansion (6), we can associate with f the formal series (7). The series (7) is called the Faber series expansion of f, and the coefficients c k (f ) are said to be the Faber coefficients of f. The Zygmund sums of the series (6) are defined accordingly. Let P := {all polynomials (with no restriction on the degree)}, and let P(D) be the set of traces of members of P on D. We define the operator
Then, using (6), we obtain a representation in which ϕ k (z), k ∈ N, are the Faber polynomials of G. Use of (5) and (6) gives us the Faber series representation of f. We shall use c, c 1 , c 2 , ... to denote constants (in general, different in different relations) depending only on quantities that are not essential for the questions of interest. We denote by E n (f ) p(.) the best approximation of f ∈ L p(.) (T) by trigonometric polynomials of degree not exceeding n, where Π n denotes the class of trigonometric polynomials of degree at most n.
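Written out, the best approximation just introduced is

\[ E_n(f)_{p(\cdot)} := \inf_{T_n \in \Pi_n} \| f - T_n \|_{L^{p(\cdot)}(T)}. \]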
In this study we investigate the approximation of functions by Zygmund means in variable exponent Lebesgue spaces. Note that the estimates in this study are obtained in terms of the best approximation E n (f ) p(.) and the modulus of smoothness. These results are then applied to estimates of the approximation by Zygmund sums in Smirnov classes with variable exponent defined on simply connected domains of the complex plane. Similar problems of approximation theory in different spaces have been studied by several authors (see, for example, [3], [10], [12]-[14], [20]-[23], [25], [29], [36] and [39]).
Note that for the proof of the new results obtained in the variable exponent Lebesgue spaces we apply the method developed in [10], [13] and [15].
Our main results are the following.
Theorem 1.1. Let f ∈ L p(.) (T), r ∈ Z + , k ∈ N, and let the series converge. Then f is equivalent (equal almost everywhere) to a 2π-periodic absolutely continuous function ψ ∈ AC(T) and the corresponding inequality holds.
Theorem 1.2. Then the following estimate holds.
Theorem 1.3. Let Γ be a Dini-smooth curve and p(.) ∈ Φ log 0 (Γ). Then for f ∈ E p(.) (G) the following estimate holds.
For the proof of the main results we need the following auxiliary results. Let f ∈ E p(.) (D). Applying Corollary 1 of [23] to the boundary values of f ∈ E p(.) (D), we have:
Proofs of Theorems
Proof of Theorem 1.1. According to [37, Theorem 6], f is equivalent (equal almost everywhere) to a 2π-periodic absolutely continuous function ψ ∈ AC(T) and the inequality (9) holds. On the other hand, the inequality (10) holds [23]. Using (9) and (10) we get the desired estimate, which completes the proof of Theorem 1.1.
Proof of Theorem 1.2. Let T n (f, x) be a trigonometric polynomial of best approximation to f in L p(.) (T). It is known that the identity (11) holds. Considering [33], we obtain
(12) ‖Z n,k (f, ·)‖ L p(.) (T) ≤ c 22 ‖f‖ L p(.) (T) .
Consideration of (11) and (12) gives us (13). If k is an even number, the relation (14) holds. Then, using (13), (14) and [37, Corollary 2], we get the next estimate. Let T n (k) (f, x) be the trigonometric conjugate of T n (k+1) (f, x). If k is an odd number, the relation (16) holds. Also, according to [37], we obtain the required estimate.
Proof of Theorem 1.3. Let f ∈ E p(.) (G). The function f has the Faber series (7). Then, by [18, Lemma 1], f + 0 ∈ E p 0 (.) (D), and for the function f + 0 the Taylor expansion ∑ c k (f ) w k , w ∈ U, holds. According to [5, p. 38, Theorem 3.4] the boundary function f + 0 ∈ L p 0 (.) (T) has the Fourier expansion | 1,946 | 2019-01-01T00:00:00.000 | [
"Mathematics"
] |