O-GlcNAcylation of 8-Oxoguanine DNA Glycosylase (Ogg1) Impairs Oxidative Mitochondrial DNA Lesion Repair in Diabetic Hearts

mtDNA damage in cardiac myocytes resulting from increased oxidative stress is emerging as an important factor in the pathogenesis of diabetic cardiomyopathy. A prevalent lesion that occurs in mtDNA damage is the formation of 8-hydroxy-2′-deoxyguanosine (8-OHdG), which can cause mutations when not repaired properly by 8-oxoguanine DNA glycosylase (Ogg1). Although the mtDNA repair machinery has been described in cardiac myocytes, the regulation of this repair has been incompletely investigated. Here we report that the hearts of type 1 diabetic mice, despite having increased Ogg1 protein levels, had significantly lower Ogg1 activity than the hearts of control, non-type 1 diabetic mice. In diabetic hearts, we further observed increased levels of 8-OHdG and an increased amount of mtDNA damage. Interestingly, Ogg1 was found to be highly O-GlcNAcylated in diabetic mice compared with controls. In vitro experiments demonstrated that O-GlcNAcylation inhibits Ogg1 activity, which could explain the mtDNA lesion accumulation observed in vivo. Reducing Ogg1 O-GlcNAcylation in vivo by introducing a dominant negative O-GlcNAc transferase mutant (F460A) restored Ogg1 enzymatic activity and, consequently, reduced 8-OHdG and mtDNA damage despite the adverse hyperglycemic milieu. Taken together, our results implicate hyperglycemia-induced O-GlcNAcylation of Ogg1 in increased mtDNA damage and, therefore, provide a new plausible biochemical mechanism for diabetic cardiomyopathy.

In diabetes mellitus, inadequate glucose regulation and the resultant hyperglycemia are widely regarded as significant contributors to diabetic complications (1). Hyperglycemia leads to overproduction of reactive oxygen species (ROS). Because diabetes is accompanied by increased free radical production and/or impaired antioxidant defense capabilities, excessive ROS production has been proposed as the link between high glucose and the pathways responsible for hyperglycemic damage in diabetes (1). ROS are highly reactive molecules that can oxidize essential cellular macromolecules (lipids, proteins, and DNA), thus damaging cell membranes, altering cell metabolism, and increasing mutation rates (1–3). mtDNA is exposed to high levels of oxidative stress. The proximity of mtDNA to the inner mitochondrial membrane and the oxidative phosphorylation chain makes it highly susceptible to oxidative stress from the electron transport system (4–6). mtDNA is a 16.5-kb double-stranded, circular DNA molecule. It does not contain introns and encodes 13 polypeptides for complexes I, III, IV, and V of the electron transport system as well as 22 tRNAs and two rRNAs (7, 8). Thus, any damage to mtDNA could potentially induce dysfunctional mitochondrial transcripts that could lead to reduced oxidative phosphorylation and decreased mitochondrial function (9). In addition, oxidative mtDNA damage has been reported to be more extensive and longer lasting than nuclear DNA damage (10–12). Base excision repair appears to be the most important mtDNA repair pathway (6). Base excision repair is initiated when a DNA glycosylase recognizes and removes an improperly modified DNA base. The resulting abasic site is cleaved by endonucleases and/or phosphodiesterases, removing the sugar residue, before DNA polymerase and DNA ligase complete the repair (6).
8-Oxoguanine DNA glycosylase (Ogg1) is the principal DNA glycosylase responsible for repair of the ROS-induced mutagenic DNA lesion 8-hydroxy-2′-deoxyguanosine (8-OHdG) and of ring-opened fapyguanine in humans (13). Ogg1 alterations in diabetic models have only been incompletely explored. Oxidative stress-related DNA damage has been proposed as a novel important factor in the pathogenesis of human type 2 diabetes (14) based on the finding that Ogg1 is up-regulated in type 2 diabetic islet cell mitochondria, which is consistent with a rodent model in which hyperglycemia and consequent increased β cell oxidative metabolism lead to DNA damage and induction of Ogg1 expression (14). Moreover, Ogg1 acetylation by p300 significantly increased enzyme activity in both in vitro and in vivo oxidatively stressed cells (13), and phosphorylation by Cdk4 was found to modulate Ogg1 function in in vitro experiments (15), suggesting a complex regulation of Ogg1 DNA repair activity by different posttranslational modifications. Another common posttranslational modification particularly relevant to diabetes occurs by O-linked N-acetylglucosamine addition (O-GlcNAcylation). O-GlcNAcylation of serine or threonine residues of nuclear, cytoplasmic, and mitochondrial proteins is a dynamic and ubiquitous protein modification (16). This process is emerging as a key regulator of critical biological processes, including nuclear transport, translation and transcription, signal transduction, cytoskeletal reorganization, proteasomal degradation, and apoptosis (17). Because glucose is the main precursor for the amino sugar substrates of the hexosamine biosynthesis pathway, which ends in production of UDP-GlcNAc, glucose availability is well accepted to be the key mediator of this protein posttranslational modification (18). Consequently, the elevated protein O-GlcNAcylation level in hearts from diabetic animals has been implicated in glucose toxicity and linked to multiple aspects of cardiomyocyte dysfunction in diabetes (19–25). GlcNAc residues are monosaccharide units that are linked posttranslationally to the hydroxyl groups of serine and threonine residues of proteins to form O-GlcNAc (26). Protein O-GlcNAcylation levels are regulated by two enzymes that act antagonistically. Uridine diphosphate N-acetylglucosamine polypeptidyl transferase, known as O-GlcNAc transferase (OGT), is the glycosyltransferase that catalyzes transfer of GlcNAc from uridine diphosphate N-acetylglucosamine to acceptor proteins (16, 27). N-Acetyl-β-D-glucosaminidase (O-GlcNAcase, OGA) is a glycoside hydrolase that catalyzes cleavage of O-GlcNAc from proteins (28, 29). Both enzymes are ubiquitous and have been shown to be essential for development in vertebrates (30, 31), which underscores their fundamental roles in vital processes. During the past decade, diabetes-associated O-GlcNAcylation has been described either as a maladaptive phenomenon that can interfere with protein function (32–34) or, conversely, as a cardioprotective response when acutely activated (21, 35, 36). Moreover, elevated mitochondrial O-GlcNAcylation caused by hyperglycemia, as occurs in diabetes, has been reported to significantly contribute to mitochondrial dysfunction that leads to diabetic cardiomyopathy (19, 20, 23-25, 33, 37), supporting a widely held belief that an excessive increase in O-GlcNAc levels is detrimental to the heart. Ogg1 O-GlcNAcylation is completely unexplored in either physiological or pathological conditions.
Here we report that Ogg1 is highly O-GlcNAcylated in our type 1 diabetes (T1D) murine model and that reducing O-GlcNAcylation with a dominant negative mutant OGT restores Ogg1 activity, likely by competition for Ogg1 binding, which leads to improved mtDNA quality in T1D hearts.

Diabetic Murine Hearts Show Increased Oxidative Stress, mtDNA Damage, and Impaired Ogg1 Activity and O-GlcNAcylation Status-To study whether mtDNA is affected by hyperglycemia-dependent oxidative stress, we compared 10-week diabetic (T1D) mice (blood glucose levels >500 mg/dl) with control (CTR) mice (blood glucose levels <150 mg/dl). Compared with CTR, we found significantly higher (10-fold) levels of 8-OHdG staining in T1D hearts (Fig. 1A). We then performed an assay to examine mtDNA damage and found a statistically significant increase with T1D. Fig. 1B shows that amplification of a 10-kb fragment from isolated mtDNA was selectively inhibited in T1D versus CTR, indicating that a higher number of lesions were present in mtDNA from diabetic hearts. In addition, we observed increased levels of cleaved caspase 3 in diabetic hearts (Fig. 1C), which indicates an increase in apoptosis. Ogg1 is a key enzyme in mtDNA repair (38). The Ogg1 DNA glycosylase recognizes and hydrolyzes the modified base 8-OHdG, removing the mutagenic 8-OHdG lesion situated opposite cytosine, whereas MutY glycosylase removes adenine from 8-OHdG/A mismatches with its adenine glycosylase activity (39) when 8-OHdG has already induced a mutation. Because we detected increased 8-OHdG levels, which may result in the mtDNA damage registered in T1D hearts, we focused our investigation on determining whether Ogg1, and, thus, the dependent DNA repair process, was somehow affected. We assessed Ogg1 enzymatic activity using a specifically designed fluorescent molecular beacon carrying an oxidized guanosine residue in the stem portion of the molecule annealed to a cytosine residue and compared it with the protein level. We observed a 50% decrease in Ogg1 enzymatic activity in T1D murine heart lysates compared with CTR (Fig. 1D); however, Ogg1 protein levels (detected as a 45-kDa protein) were found to be increased (Fig. 1E). High levels of protein O-GlcNAcylation have been associated with hyperglycemia (32–37). We found that OGT (Fig. 1F) and OGA (Fig. 1G) protein levels were increased and decreased, respectively, in T1D versus CTR hearts; therefore, we checked whether Ogg1 was differently O-GlcNAcylated. Fig. 1H shows that increased Ogg1 O-GlcNAcylation was detected in T1D versus CTR.

O-GlcNAcylation Inhibits Ogg1 Activity in Vitro-Next we investigated whether Ogg1 activity was affected by O-GlcNAcylation. We expressed and purified a 40-kDa FLAG-tagged Ogg1 and a 120-kDa OGT from HEK 293T cells and a 130-kDa His-tagged OGA from "Rosetta" Escherichia coli (Fig. 2, A-C). In vitro O-GlcNAcylation of Ogg1 was performed by incubating eluted FLAG-Ogg1 and FLAG-OGT at 37°C in OGT assay buffer in the presence of 50 μM UDP-GlcNAc. Ogg1 activity was then assayed every 30 min by taking 50 μl from the reaction mixture. The data reported in Fig. 2D show an approximately 50% decrease in Ogg1 activity 30 min after the start of the O-GlcNAcylation reaction. We observed no further inhibition at the 90- and 210-min time points, suggesting that OGT activity was sufficient to saturate Ogg1 O-GlcNAcylation sites in the first 30 min.
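Before continuing, a note on how the activity read-out described above can be reduced to numbers: the sketch below converts raw beacon fluorescence traces into the ΔRFU-per-minute values of the kind reported for the lysate assay and into percent activity relative to a control or time-0 sample. It is a minimal illustration only; the trace values, time points, and lysate amount are hypothetical, and fitting the whole 30-min read as one straight line is an assumption not stated in the text.

```python
# Minimal sketch: estimate Ogg1 activity from molecular-beacon fluorescence traces.
# All numeric values are hypothetical; the assay reads emission at 528/20 nm
# (excitation 485/20 nm) once per minute for 30 min.
import numpy as np

def activity_dRFU_per_min(times_min, rfu):
    """Slope of a straight-line fit to the fluorescence trace (ΔRFU/min)."""
    slope, _intercept = np.polyfit(times_min, rfu, 1)
    return slope

rng = np.random.default_rng(0)
times = np.arange(0, 31)                                      # minutes 0..30
ctr_trace = 100 + 12.0 * times + rng.normal(0, 3, times.size)  # hypothetical control lysate
t1d_trace = 100 + 6.0 * times + rng.normal(0, 3, times.size)   # hypothetical diabetic lysate

ctr_rate = activity_dRFU_per_min(times, ctr_trace)
t1d_rate = activity_dRFU_per_min(times, t1d_trace)

lysate_ug = 50.0  # amount of lysate in the well (µg), hypothetical
print(f"CTR: {ctr_rate:.1f} dRFU/min ({ctr_rate / lysate_ug:.2f} dRFU/min/µg)")
print(f"T1D: {t1d_rate:.1f} dRFU/min ({t1d_rate / lysate_ug:.2f} dRFU/min/µg)")
print(f"T1D activity, % of CTR: {100 * t1d_rate / ctr_rate:.0f}%")
```

The same slope-and-ratio calculation applies to the recombinant-protein time course that follows, where each withdrawn aliquot yields one trace and one rate.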
We then introduced recombinant His-OGA, in excess with respect to OGT (0.2 mg/ml versus 0.1 mg/ml), and, after 60 min, we found complete restoration of basal Ogg1 activity. Aliquots of the reaction mixture taken at 0, 210, and 270 min were used to detect the O-GlcNAcylation status of Ogg1. Samples were Ogg1-immunoprecipitated, and Western blots (WB) with anti-O-GlcNAc (RL2), anti-FLAG, or anti-Ogg1 were performed. Fig. 2E shows an increased, approximately 40-kDa Ogg1-specific protein O-GlcNAcylation signal (measured with the RL2 antibody) at 210 min compared with 0 min and a decreased signal at 270 min compared with 210 min, suggesting that the observed inhibition of Ogg1 activity was due to the increased O-GlcNAcylation mediated by OGT and that de-O-GlcNAcylating Ogg1 with OGA restored the basal activity. Interestingly, both anti-FLAG and anti-Ogg1 WB show lower Ogg1 levels in the immunoprecipitated sample at 210 min, suggesting that O-GlcNAcylation altered Ogg1 interaction with the Ogg1-specific IgG. As a control, WB analysis of non-Ogg1-precipitated aliquots of the reaction mixture showed OGT and Ogg1 at equal levels throughout the experiment and OGA presence only after 210 min of the in vitro O-GlcNAc reaction (Fig. 2F).

FIGURE 1. Diabetic murine model characterization. T1D mice were studied 10 weeks after STZ treatment. A, immunofluorescence (IF) analysis of 8-OHdG, a DNA oxidative stress marker, in heart sections, quantified with the ImageJ Pixel Counter as the percentage of red pixels; negative control slides were incubated in solution lacking the primary anti-8-OHdG antibody; images are representative of at least 5 fields/animal (n = 3 CTR and T1D). B, mtDNA damage estimation: 1/(10 kb/117 bp) represents the number of lesions per kilobase within the amplified long (10-kb) mtDNA fragment, normalized to amplification of a short (117-bp) fragment; 15 ng of isolated mtDNA served as template, with a 0.5× template (7.5 ng) reaction included to ensure linearity (n = 3). C, cleaved caspase 3 Western blotting from heart lysates, with CPA as loading control (n = 3). D, Ogg1 enzymatic activity from heart lysates: 25, 50, and 100 μg of lysate were incubated at 37°C with 0.5 μM molecular beacon in Ogg1 assay buffer for 30 min, and ΔRFU per minute and ΔRFU per minute per microgram of lysate are reported (CTR versus T1D). E-G, Western blots for Ogg1 (100 μg of total heart lysate), OGT (100 μg), and OGA (200 μg), with actin as loading control. H, O-GlcNAcylation status of Ogg1: 500 μg of total heart lysate was immunoprecipitated with anti-Ogg1, separated by SDS-PAGE, and blotted for O-GlcNAc (RL2) and Ogg1, with the O-GlcNAcylated Ogg1 band normalized to Ogg1 detected in the IP; anti-rabbit IgG served as control. Data in D-H are from n = 4 CTR and T1D animals. All data (A-H) are mean ± S.E.; ***, p < 0.001; **, p < 0.01; *, p < 0.05.

FIGURE 2. In vitro O-GlcNAcylation and de-O-GlcNAcylation of Ogg1. A, recombinant FLAG-Ogg1 and FLAG-OGT expressed and purified from HEK 293T cells (silver staining), with fractions from pcDNA3-FLAG-transfected cells as control; arrowheads mark the recombinant proteins, which were the most abundant species eluted, arrows mark nonspecific products also found in eluates from empty-vector-transfected cells, and blocked arrows mark a 25-kDa protein present only in eluates from cells transfected with either pcDNA3-FLAG-Ogg1 or pcDNA3-FLAG-OGT, presumably an Ogg1/OGT interactor that does not interfere with the subsequent in vitro applications. CE, crude extract; FT, flow-through; W, wash; E, elution. B, recombinant His-OGA expressed and purified from E. coli "Rosetta" (Coomassie staining of fractions collected after the final G50 gel filtration step used to remove excess imidazole); fractions 6-14 were pooled, concentrated, and used for further applications. C, Western blotting of the recombinant FLAG-Ogg1, FLAG-OGT, and His-OGA. D, recombinant FLAG-Ogg1 activity assayed at 0, 30, 90, and 210 min of incubation with FLAG-OGT and 50 μM UDP-GlcNAc and at 30 and 60 min (240 and 270 min total) after addition of recombinant His-OGA to restore basal activity; data are mean ± S.E., with all conditions compared against activity at time 0 (***, p < 0.001; **, p < 0.01). E, Western blotting of Ogg1 immunoprecipitates (IP Ogg1) withdrawn at 0, 210, and 270 min; recombinant FLAG-Ogg1 (~40 kDa) was revealed with anti-FLAG and anti-Ogg1, and O-GlcNAc-Ogg1 (~40 kDa) with the RL2 antibody. F, Western blots for FLAG-Ogg1 (~40 kDa), FLAG-OGT (~120 kDa), and His-OGA (~130 kDa) at the same time points, detected with anti-FLAG or anti-OGA, as loading controls.

High-glucose Conditions Replicate the Diabetic Phenotype in Neonatal Cardiac Myocytes (NCM)-To test methods to reduce Ogg1 O-GlcNAcylation, we used an ex vivo model in which NCM primary cultures were exposed to high glucose (HG, 25 mM) for 72 h to mimic diabetes-associated hyperglycemia. As in T1D, 8-OHdG levels were significantly increased in HG versus cells cultured with normal glucose (NG, 5.5 mM glucose + 19.5 mM mannitol) (Fig. 3A). 8-OHdG also co-localized with the mitochondrial import receptor (TOM20), indicating that the observed DNA oxidation was primarily extranuclear. We then visualized Ogg1 by IF and again observed predominantly extranuclear localization based on closely overlapping signals from Ogg1 and from the mitochondrial marker complex 3 (Fig. 3B). Ogg1 localization and in situ activity were examined by visualizing an 8-OHdG-containing molecular beacon following overnight incubation at 37°C. The images in Fig. 3C demonstrated extranuclear Ogg1 localization and lower activity in HG versus NG. We next measured Ogg1 activity from whole cell lysates (Fig. 4A). Ogg1 activity was confirmed to be approximately 50% lower in HG compared with NG lysates even though the Ogg1 protein (detected as a 45-kDa protein) level was slightly increased (Fig. 4B). We then checked whether Ogg1 O-GlcNAcylation was altered in NCM under hyperglycemic conditions, as observed in T1D hearts in vivo, based on a trend toward increased OGT levels (Fig. 4C) and decreased OGA levels (Fig. 4D).
Fig. 4E shows that, after 72 h of exposure to HG, Ogg1 was significantly more O-GlcNAcylated than in cells cultured in NG. Moreover, co-immunoprecipitation (IP) of Ogg1 and OGT from lysates revealed that Ogg1 interacts with OGT and, interestingly, that the interaction with OGT was increased in NCM cultured in HG (Fig. 4F). This suggested a possible mechanism through which Ogg1 O-GlcNAcylation increases under hyperglycemic conditions, which we investigated in subsequent studies.

FIGURE 3. 8-OHdG, Ogg1, and in situ Ogg1 activity staining in NCM cultured under hyperglycemic conditions. A and B, co-immunostaining of 8-OHdG with the mitochondrial marker TOM20 (A) and of Ogg1 with the mitochondrial marker complex 3 (B); negative controls were incubated in solution lacking either anti-8-OHdG or anti-Ogg1. C, in situ Ogg1 activity staining: NCM seeded on gelatin-coated coverglasses were, after 72 h of treatment, transfected with 1 nM 8-OHdG-containing molecular beacon, incubated overnight at 37°C, fixed in 4% PFA, and mounted with DAPI-containing mounting medium; cells transfected with the beacon and fixed immediately served as the negative control. The 8-OHdG and molecular beacon signals were quantified with the ImageJ Pixel Counter as the percentage of red or green pixels; images are representative of at least 5 fields/condition. Data are mean ± S.E. from at least five independent experiments; **, p < 0.01; *, p < 0.05.

The Dominant Negative OGT Mutant F460A Decreases Ogg1 O-GlcNAcylation in NCM Primary Cultures-We next decided to introduce a dominant negative OGT mutant (OGT F460A) into our ex vivo hyperglycemia model with the goal of reducing Ogg1 O-GlcNAcylation. OGT catalyzes the addition of a single O-GlcNAc residue to serine or threonine residues of target proteins (16). The catalytic region of OGT consists of two highly conserved domains, CD1 and CD2 (Fig. 5A). It has been shown that Phe-460 in CD1 is an important site for OGT function, as mutation to Ala (F460A) completely abrogates the function of OGT (40). NCM were transduced with an adenovirus encoding the OGT F460A transgene (Adv-OGT F460A) or with an empty vector control (Adv-ctr), and, after 72 h of treatment, total cellular proteins were isolated. HG treatment increased total protein O-GlcNAcylation by 167% compared with NG (Fig. 5B). NCM transduced with Adv-OGT F460A showed increased OGT expression (+40%) and decreased total protein O-GlcNAcylation (−59%) compared with NCM transduced with Adv-ctr and cultured in HG, demonstrating the efficacy of such a treatment in reducing overall O-GlcNAcylation (Fig. 5B). Ogg1 O-GlcNAcylation was then checked, and NCM exposed to HG and transduced with Adv-OGT F460A showed decreased Ogg1-specific O-GlcNAcylation, which returned to normal levels. OGT F460A was shown to likely compete with endogenous OGT for interaction with Ogg1, as detected by co-IP experiments (Fig. 5C). Moreover, reducing the O-GlcNAcylation of Ogg1 resulted in increased enzymatic activity both in NCM lysates (Fig. 6A) and in situ (Fig. 6B), which led to lower 8-OHdG levels (Fig. 6C).

FIGURE 4 legend (fragment). Ogg1-OGT co-IP assay (F): Ogg1 was immunoprecipitated, and both O-GlcNAcylation and interaction with OGT were detected in the IP with anti-O-GlcNAc (RL2) and anti-OGT, respectively; control samples were immunoprecipitated with anti-rabbit IgG. OGT and Ogg1 IP signals were normalized to their respective input levels, the RL2 signal was normalized to the Ogg1 IP signal, and actin was used as a loading control. All data are mean ± S.E. from at least five independent experiments; **, p < 0.01; *, p < 0.05.

Figure legend (fragment). IP signals were normalized to input levels, except for the RL2 IP signal, which was normalized to Ogg1 detected in the immunoprecipitate; actin was used as a loading control. All data are mean ± S.E., represent at least five independent experiments, and were analyzed by comparing all conditions versus NG Adv-ctr; *, p < 0.05; **, p < 0.01.

In Vivo OGT F460A Treatment Restores Ogg1 Activity and Improves mtDNA Quality-To evaluate OGT F460A treatment in vivo, we used adeno-associated virus serotype 9 (AAV9) to deliver OGT F460A. In T1D murine hearts, total protein O-GlcNAcylation was significantly increased (+115%) compared with CTR. In contrast, diabetic mice expressing OGT F460A displayed decreased (−62%) total protein O-GlcNAcylation, returning O-GlcNAcylation levels to normal (Fig. 7A). OGT F460A treatment was able to specifically reduce in vivo Ogg1 O-GlcNAcylation, presumably by competing with endogenous OGT for Ogg1, as demonstrated by co-IP experiments in which the amount of OGT co-immunoprecipitated with Ogg1 in T1D was not significantly increased despite the increase in the total amount of OGT present, both wild-type and mutant (Fig. 7B). This is in line with what was observed in primary cultures. Reducing Ogg1 O-GlcNAcylation returned enzymatic activity to normal levels (Fig. 7C), leading to reduced levels of 8-OHdG in murine hearts (Fig. 8A). We also found fewer mtDNA lesions associated with the lower 8-OHdG (Fig. 8B) and decreased levels of activated cleaved caspase 3 (Fig. 8C).

Discussion

We investigated whether mtDNA repair processes were influenced by hyperglycemia-dependent oxidative stress and protein O-GlcNAcylation in T1D mice, as, to our knowledge, no such data are currently available. 8-OHdG is one of the most abundant and well characterized DNA lesions generated by oxidative stress (41). 8-OHdG is a miscoding lesion that can cause G:C to T:A or T:A to G:C transversion mutations (42). With age, these transversions accumulate in DNA, particularly in the mitochondrial genome, and the progressive mutagenic changes in DNA sequences have been causally linked to several cancers and neurodegenerative diseases (43). Because mitochondrial endogenous oxidative damage has been reported by Anson et al. (44) to be overestimated approximately three-fold when analyzed in mitochondria isolated from aged mice, we initially focused our attention on characterizing our T1D murine model by comparing the immunofluorescence levels of 8-OHdG in T1D versus CTR heart sections, thereby avoiding the artifact that was described. We then observed a correlation between 8-OHdG levels and mtDNA quality. We found that diabetic murine hearts had increased levels of 8-OHdG and an increased number of lesions compared with non-diabetic hearts. 8-OHdG, when chronically accumulated, is known to cause mutations (45).
This, together with other possible diabetes-induced lesions on mtDNA, may be the reason why we observed delayed amplification in our mtDNA damage assay: the polymerase may have had problems amplifying the long fragment over the first crucial cycles of the reaction, as reported for other types of polymerases in vitro (46). As a result of this maladaptive scenario, the subsequent observation of increased apoptosis in diabetic hearts was not surprising.

O-GlcNAc Inhibits Ogg1 mtDNA Repair Activity

In T1D, we found increased Ogg1 (as expected, given the pressing need for DNA repair under diabetic conditions). Surprisingly, Ogg1 activity was 50% lower compared with non-diabetic animals. Thus, we explored the possibility that this inverse relationship was due to some maladaptive posttranslational modification resulting in diminished activity in the context of diabetes-associated hyperglycemia. Our group and others have clearly reported that normal protein O-GlcNAcylation is disrupted in diabetes and that mitochondria are among the organelles most affected by this phenomenon (32, 37). Consistent with a previous report (20), our diabetic mice showed increased OGT and decreased OGA expression, likely contributing to increased protein O-GlcNAcylation. Therefore, we checked the O-GlcNAcylation status of Ogg1 and found it to be increased in T1D versus CTR hearts. The effect of altered Ogg1 O-GlcNAcylation status on Ogg1 activity was investigated in in vitro O-GlcNAcylation and de-O-GlcNAcylation experiments using recombinant Ogg1, OGT, and OGA. Incubation with OGT and 50 μM UDP-GlcNAc rapidly inhibited Ogg1 enzymatic activity, whereas subsequent incubation with OGA restored the original basal activity by removing O-GlcNAc. Moreover, the in vitro immunoprecipitations performed to detect the Ogg1 O-GlcNAcylation level over the course of the experiment revealed less Ogg1 in the immunoprecipitated pellets corresponding to samples in which Ogg1 was more highly O-GlcNAcylated. Protein levels were equal in all samples collected during the experiment, and thus the data suggest that O-GlcNAc may change the Ogg1 3D structure, interfering with protein-protein and, possibly, protein-substrate interaction. However, this phenomenon was markedly evident only in our in vitro experiment. Even though we observed less Ogg1 signal in both Ogg1-immunoprecipitated T1D and HG samples relative to their input levels, we cannot exclude that what we observed in vitro was enhanced because of the high Ogg1 abundance obtained from the recombinant preparations. Over the last decade, it has been reported that Ogg1 activity is modulated by acetylation in oxidatively stressed cells (13) and by phosphorylation via interaction with Cdk4 (a serine-threonine kinase) and c-Abl (a tyrosine kinase) (15). Acetylation and Cdk4-mediated phosphorylation were both reported to enhance the rate of 8-OHdG repair by Ogg1, suggesting a complex regulation of the activity of this DNA repair protein. To find evidence that O-GlcNAcylation also negatively modulates Ogg1 activity in vivo, we decided to perform rescue experiments in our T1D mice by reducing the excessive Ogg1 O-GlcNAcylation in an attempt to restore Ogg1 activity and improve mtDNA repair. First, we mimicked diabetes-associated hyperglycemia by culturing NCM for 72 h in HG. Under these acute conditions, we observed, as in T1D mice, increased mtDNA oxidative stress.
Ogg1 activity was found to be decreased in HG-cultured NCM compared with cells cultured in NG. Under all conditions, Ogg1 and its activity were found to be principally localized outside nuclei and co-localized with either TOM20 or complex 3, confirming the emerging hypothesis that mitochondria are the major site of Ogg1 repair activity under conditions of oxidative stress (38). Moreover, as observed in T1D mice, Ogg1 protein levels did not correlate with activity: we observed a slight increase in Ogg1 protein, whereas O-GlcNAcylation levels were significantly higher compared with NCM cultured in NG. We further investigated the Ogg1-OGT interaction by co-immunoprecipitation assays. Cells cultured in HG displayed an increased interaction between Ogg1 and OGT, suggesting that this phenomenon likely contributes to excessive Ogg1 O-GlcNAcylation and, therefore, represents a strategy that could be adopted for lowering Ogg1 O-GlcNAcylation. For this, we used a dominant negative OGT (F460A) that was previously shown to be catalytically inactive (40). NCM transduced with Adv-OGT F460A showed an overall decrease in total protein O-GlcNAcylation and also reduced levels of O-GlcNAcylated Ogg1, likely because of OGT F460A competition with endogenous OGT for interaction with Ogg1. As a result, Ogg1 activity returned to normal, and this led to lower 8-OHdG levels. We therefore administered OGT F460A to T1D mice using an adeno-associated viral vector approach. Mice expressing OGT F460A displayed reduced total and Ogg1-specific O-GlcNAcylation, presumably because of competitive interaction for Ogg1 between the mutant and endogenous OGT. This was accompanied by a complete restoration of Ogg1 activity, lower 8-OHdG levels, and improved mtDNA quality. This work demonstrates for the first time that diabetes-associated and hyperglycemia-dependent protein O-GlcNAcylation impairs the activity of Ogg1, one of the most important mtDNA repair proteins, and that the Ogg1-OGT interaction likely contributes, together with imbalanced OGT and OGA levels, to excessive Ogg1 O-GlcNAcylation. Reducing Ogg1 O-GlcNAcylation restored enzymatic activity and improved mtDNA repair, thus offering interesting perspectives on a new plausible biochemical mechanism for diabetic cardiomyopathy.

Animals and Treatment-All investigations conformed to the Guide for the Care and Use of Laboratory Animals published by the National Institutes of Health (Publication 85-23, revised 1985). This study was conducted in accordance with the guidelines established by the Institutional Animal Care and Use Committee at the University of California, San Diego. In NIH Swiss male mice (25 g, 3 months old), T1D was induced by giving a daily intraperitoneal injection of streptozotocin (STZ) (40 mg/kg) for 5 consecutive days (47). Diabetic status was confirmed by blood glucose levels. Experiments were carried out 10 weeks after STZ injection. In vivo adeno-associated virus transgene delivery in diabetic mice was performed by direct jugular vein injection. AAV9 expressing OGT F460A (AAV-OGT F460A) (6 × 10^11 viral particles in 100 μl) was injected 6 weeks after injection with STZ. Experiments were carried out 10 weeks after STZ injection and 4 weeks after AAV-OGT F460A delivery.

NCM Primary Culture-Primary cultures of murine neonatal cardiac myocytes were prepared as described previously (32).
Cells (10 × 10^6 cells per 10-cm dish; 2 × 10^6 cells per well of a 6-well plate; 8 × 10^4 cells per well of a 24-well plate) were plated onto gelatin-coated culture dishes. The plating medium consisted of 4.25:1 Dulbecco's modified Eagle's medium:M199, 10% horse serum, 5% fetal bovine serum, 1% penicillin/streptomycin, and 5.5 mmol/liter D-glucose. Cells were allowed to adhere to the plates for at least 24 h before treatment. Cells were cultured in maintenance medium (4.5:1 Dulbecco's modified Eagle's medium:M199, 2% fetal bovine serum, 1% penicillin/streptomycin/Fungizone) supplemented with either NG (5.5 mM glucose + 19.5 mM mannitol) or HG (25 mM glucose). Cells were also treated with Adv-OGT F460A or Adv-Ctr (19). Cells were infected at a multiplicity of infection of 50/cell for both viruses on the day exposure to either NG or HG started. The culture medium was changed daily until cells were harvested after 72 h.

Adenovirus and Adeno-associated Virus-A dominant negative (F460A) single amino acid point mutation of OGT (TTT to GCA) (40) was generated using the QuikChange site-directed mutagenesis kit (Stratagene) with pTrc-HisA-OGT (kindly provided by Dr. G. W. Hart) as template and two primers: 5′-CACAACCCTGATAAGGCTGAGGTATTCTGCTGCC and 3′-GGCATAGCAGAATACCTCAGCCTTATCAGGGTTGTG. OGT F460A was then cloned into pENTR1A using BamHI and EcoRV restriction enzyme sites and cloned into pAAV-Shuttle using KpnI and EcoRV restriction enzyme sites. The following primers were used to add 5′ BamHI and/or KpnI and 3′ EcoRV sites to the OGT F460A cDNA: 5′ BamHI, CATGGGATCCATGGCGTCTTCCGTGGGCAACGTG; 5′ KpnI, ACGGGGTACCATGGCGTCTTCCGTGGGCAACGTG; and 3′ EcoRV, CCCGGATATCTCAGGCTGACTCAGTGACTTCAAC. The adenovirus expressing OGT F460A was generated using the ViraPower Adenoviral Gateway expression kit (Invitrogen). Adeno-associated virus serotype 9 expressing OGT F460A was generated by the University of California, San Diego Vector Development Core. The sequence of each vector was confirmed and shown to produce an open reading frame with the appropriate amino acid change.

Western Blotting and Immunoprecipitations-Total cell or tissue lysates were homogenized in Nonidet P-40 buffer (20 mM Tris, 150 mM NaCl, 0.025 mM O-(2-acetamido-2-deoxy-D-glucopyranosylidenamino)-N-phenylcarbamate (PUGNAc), and 1% Nonidet P-40 (pH 7.4)). 50-200 μg of protein samples were loaded on NuPAGE 4-12% Bis-Tris gels (Invitrogen). Separated proteins were transferred to nitrocellulose membranes that were subsequently blocked in 5% milk/TBS, 0.05% Tween. Anti-Ogg1 (Genetex, GTX20204), anti-O-GlcNAc (RL2) (Thermo Fisher Scientific, MA1-072), anti-OGT (Abcam, Ab177941), anti-OGA (Abcam, Ab124807), anti-FLAG M2 (Sigma, F1804), anti-cleaved caspase 3 (Cell Signaling Technology, 9661S), anti-actin (Santa Cruz Biotechnology, SC-1616), and anti-cyclophilin A (CPA) (Abcam, Ab41684) were used as primary antibodies. HRP-conjugated anti-mouse IgG (Amersham Biosciences), anti-rabbit IgG (Cell Signaling Technology), and anti-goat IgG (Santa Cruz Biotechnology) were used as secondary antibodies. RL2 and anti-Ogg1 antibodies were validated by Western blotting. An Ogg1-specific siRNA (Ambion, Life Technology, AM16708) was used to knock down Ogg1 in NCM and to demonstrate the specificity of the anti-Ogg1 antibody. High-glucose treatment of NCM for 72 h was used to increase O-GlcNAc levels, and incubation for 2 h at 37°C with recombinant OGA was used to decrease the O-GlcNAc-specific signal detected by RL2.
Incubating blots of protein from NCM cultured for 72 h in high glucose with RL2 in the presence of 1 M GlcNAc (MP Biomedicals, catalog no. 100068) inhibited the RL2-protein interaction (supplemental Fig. 1). To analyze Ogg1-specific O-GlcNAcylation and the interaction between Ogg1 and OGT, 500 μg of total protein from either primary cultures or murine hearts was immunoprecipitated with anti-Ogg1 antibody using the Pierce Crosslink Immunoprecipitation Kit (Thermo Fisher Scientific) following the instructions of the manufacturer. Immunoprecipitates were analyzed by Western blotting using RL2, Ogg1, and OGT antibodies. Images were acquired with the ChemiDoc MP System (Bio-Rad). Band density was quantified with ImageJ software. Control immunoprecipitations were performed using normal rabbit IgG (Santa Cruz Biotechnology, SC-2027) as a nonspecific IP antibody. Ogg1 and OGT detected in the immunoprecipitated samples were normalized to their corresponding input levels, and inputs were normalized to either actin or CPA. The Ogg1-specific RL2 signal detected in Ogg1-immunoprecipitated samples was normalized to the Ogg1 signal detected in the IP.

8-OHdG IF Microscopy-Immediately after excision, murine hearts were rinsed in 1× PBS and bisected transversely, and the apical half was then fixed in 4% paraformaldehyde (PFA) overnight at 4°C with constant mixing. The following day, fixed tissues were incubated at 4°C for 2 h in 15% sucrose/1× PBS, followed by subsequent incubation for 2 h in 25% sucrose/1× PBS and for 2 h in a 1:1 mixture of 25% sucrose/1× PBS and optimal cutting temperature compound (Sakura Finetek USA Inc.). Tissues were then embedded in optimal cutting temperature compound and frozen in a dry ice/2-methylbutane bath. Samples were stored at −80°C until slides were prepared by sectioning tissue blocks with a cryostat. Slides (10-μm sections) were rinsed once in 1× PBS and then incubated for 30 min at room temperature on a shaking platform in blocking/permeabilization buffer (20 mM glycine, 1% IgG-free BSA, 3% normal goat serum, 0.1% Triton X-100, 0.05% Tween 20, and 1× PBS) before incubation overnight at 4°C with either anti-8-OHdG (Genetex, GTX41980) or anti-Ogg1 (Genetex, GTX20204) diluted 1:20 in 1:10 blocking/permeabilization buffer:PBS (v/v). Anti-8-OHdG was validated using the Bioxytech 8-OHdG-EIA kit (OXIS, catalog no. 21026). Anti-Ogg1 was further validated by transfecting NCM with Ogg1-specific siRNA (Ambion, Life Technology, AM16708) for 24 h (supplemental Fig. 2). Anti-TOM20 (Santa Cruz Biotechnology, SC-11415) and anti-UQCRC2 (complex 3) (Abcam, Ab14745), diluted 1:100 and 1:50, respectively, in 1:10 blocking/permeabilization buffer:PBS (v/v), were used in co-localization studies. Slides incubated with only 1:10 blocking/permeabilization buffer:PBS (v/v) served as a control for nonspecific detection of antigens by the secondary antibody. The following day, slides were rinsed three times with 1× PBS/0.05% Tween 20 and then incubated at room temperature for 1 h with either goat anti-mouse IgG secondary antibody Alexa Fluor 568 conjugate (Thermo Fisher Scientific, A11004) or goat anti-rabbit IgG (H+L) secondary antibody Alexa Fluor 488 conjugate (Thermo Fisher Scientific, A11034) diluted 1:200 in 1:10 blocking/permeabilization buffer:PBS (v/v). Slides were rinsed five times with 1× PBS/0.05% Tween 20.
One drop of mounting medium with DAPI (ProLong Diamond Antifade Mountant with DAPI, Thermo Fisher Scientific) was then added to the tissue slides before applying coverglasses and sealing the edges with a non-fluorescent nail polish. Images were captured with a DeltaVision deconvolution microscope system (Applied Precision) using a 100× lens at the University of California, San Diego, School of Medicine Light Microscopy Facility. At least 5 fields/biological sample were imaged, and approximately 20 serial optical sections, spaced by 0.2 μm, were acquired. The datasets were deconvolved using SoftWorx software (Applied Precision) on a Silicon Graphics Octane workstation. 8-OHdG levels were quantified by counting the percentage of positive (red) pixels relative to total pixels using the "Color Pixel Counter" plugin in ImageJ. Data were normalized by subtracting the nonspecific signal detected in samples incubated with no primary antibody. NCM were immunostained following the same protocol with a few modifications: after treatment, cells were washed once with 1× PBS, fixed in 4% PFA for 15 min at room temperature, and subsequently treated the same way as the tissue slides.

Ogg1 Activity Assay from Total Cell and Tissue Lysates-For assessing Ogg1 activity, we used an ad hoc-designed molecular beacon (5′-6-FAM-GCACT[8-OXOdG]AAGCGCCGCACGCCATGTCGACGCGCTTCAGTGC-DABCYL-3′, Sigma) carrying an oxidized guanosine residue in the stem portion of the molecule. The terminal segments of the sequence anneal to form the stem, whereas the intervening sequence forms the loop of the beacon. The 5′ fluorophore 6-FAM (6-carboxyfluorescein) remains in close proximity to the 3′ quencher, 4-(dimethylaminophenylazo)benzoic acid (DABCYL), until the beacon is cut by Ogg1; the resulting fluorescent signal forms the basis for detection of Ogg1 activity. 25, 50, and 100 μg of either total murine heart or NCM lysate were incubated with 0.5 μM molecular beacon in 50 mM HEPES (pH 7.6), 100 mM KCl, 2 mM EDTA, and 2 mM DTT at 37°C, and the Ogg1-mediated cleavage reaction was followed for 30 min on a BioTek Synergy 2 multimode reader, taking readings of fluorescence emission at 528/20 nm every minute while exciting the samples at 485/20 nm.

In Situ Molecular Beacon Staining-After isolation, NCM were seeded on gelatin-coated microscope coverglasses (Thermo Fisher Scientific, 12-454-80, 12CIR-1), and the same protocol described above was followed. After 72 h of either NG or HG treatment, NCM were transfected with 1 nM molecular beacon using Lipofectamine 2000 (Invitrogen) following the instructions of the manufacturer and incubated at 37°C overnight. The following day, cells were rinsed once with 1× PBS and fixed in 4% PFA for 15 min. Cells were then washed five times with 1× PBS, and one drop of mounting medium with DAPI (ProLong Diamond Antifade Mountant with DAPI, Thermo Fisher Scientific) was added. Images and data were acquired as described under "8-OHdG IF Microscopy." As a negative control, one set of cells was transfected just prior to fixation with 4% PFA.

Recombinant Protein Expression and Isolation-Recombinant FLAG-tagged Ogg1 and OGT were expressed and purified from HEK 293T cells. HEK 293T cells were maintained in DMEM supplemented with 10% FBS and 1% penicillin/streptomycin at 37°C in 5% CO2. Cells were seeded to be 80% confluent and, approximately 12 h later, transfected with pcDNA3-FLAG constructs using Lipofectamine 2000 (Invitrogen) following the instructions of the manufacturer.
24 h later, cells were harvested and sonicated in OGT assay buffer (50 mM Tris-HCl (pH 7.4), 1 mM DTT, 10 mM KCl, and 12.5 mM MgCl2). Cleared lysates were incubated for 2 h at 4°C with constant mixing with anti-FLAG M2 affinity gel (Sigma) pre-equilibrated in OGT assay buffer. Beads were then washed twice with 1× PBS and once with OGT assay buffer. Bound protein was eluted by incubating the beads four times with OGT assay buffer supplemented with 0.2 mg/ml FLAG peptide (Sigma). The four eluates for each protein were pooled and used for further experiments. Aliquots from each purification step were separated by SDS-PAGE and analyzed for recombinant protein purity using the SilverQuest silver staining kit (Invitrogen) following the instructions of the manufacturer. Recombinant His-tagged OGA was expressed and purified from E. coli BL21 Rosetta (DE3). Bacteria were transformed with pQTEV-OGA purified previously from transformed E. coli DH5α. Rosetta cells carrying pQTEV-OGA were cultured at 37°C until the A600 reached approximately 0.4. Recombinant expression was induced by incubating the bacterial culture with 0.5 mM isopropyl 1-thio-β-D-galactopyranoside (Genesee Scientific, Inc.) at room temperature overnight. The next day, bacterial cells were harvested and lysed by sonication in OGT assay buffer supplemented with 200 mM NaCl and 5 mM imidazole. Cleared lysate was incubated for 2 h with constant mixing with nickel-nitrilotriacetic acid-agarose resin (Qiagen) pre-equilibrated in OGT assay buffer. Beads were then washed with OGT assay buffer supplemented with 200 mM NaCl and 5 mM imidazole until the A280 was approximately 0.1. An imidazole gradient (5-500 mM) was then applied, and fractions were collected. Fractions that exhibited OGA expression by SDS-PAGE and Coomassie staining were pooled and loaded onto G50 resin to remove the excess imidazole. Eluted fractions were then concentrated using a SpeedVac Concentrator SVC200H (Savant) and pooled, and the protein concentration was determined for further experiments.

In Vitro Ogg1 O-GlcNAcylation and De-O-GlcNAcylation-Recombinant FLAG-tagged Ogg1, OGT, and His-tagged OGA were expressed and purified as described above. Approximately 0.1 mg/ml eluted FLAG-Ogg1 and FLAG-OGT were incubated in a 1-ml total reaction volume in OGT assay buffer (50 mM Tris-HCl (pH 7.4), 1 mM DTT, 10 mM KCl, and 12.5 mM MgCl2) with 50 μM UDP-GlcNAc (Sigma) at 37°C. 50 μl of reaction mixture was collected at 0, 30, 90, and 210 min and used to assess Ogg1 activity. After a 3.5-h incubation, 200 μl of a 1 mg/ml solution of recombinant OGA was introduced into the reaction, and after 30 and 60 min (240 and 270 min total), 63 μl was collected for assessing Ogg1 activity. Before running this experiment, the functionality of recombinant His-tagged OGA was confirmed by assessing its activity in 50 mM sodium cacodylate (pH 6.4), 3% BSA, and 1 μM 4-methylumbelliferyl GlcNAc. Activity was followed on a BioTek Synergy 2 multimode reader, taking readings of fluorescence emission at 460/40 nm every 2 min while exciting the samples at 360/40 nm (supplemental Fig. 3). FLAG-Ogg1, FLAG-OGT, His-OGA, and O-GlcNAc-Ogg1 protein levels were also analyzed by Western blotting. Aliquots of reaction mixture taken at different time points were either immunoprecipitated using anti-Ogg1 or directly analyzed by Western blotting following the procedures described above.
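As a cross-check of the molecular beacon described under "Ogg1 Activity Assay from Total Cell and Tissue Lysates," the short sketch below verifies that the two ends of the probe are reverse complements (the stem) and identifies the base that sits opposite the 8-oxo-dG once the hairpin closes. The 12-nucleotide stem length is inferred from the sequence itself and from the statement that the oxidized guanosine is annealed to a cytosine; it is an assumption, not a detail given explicitly in the text.

```python
# Sketch: check the stem annealing of the Ogg1 molecular beacon.
# The [8-OXOdG] position is written as a lowercase "g" so it can be located.
BEACON = "GCACTgAAGCGCCGCACGCCATGTCGACGCGCTTCAGTGC"  # 5'->3', 6-FAM ... DABCYL
STEM_LEN = 12  # assumed length of the annealed terminal arms

COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence (uppercase A/C/G/T only)."""
    return "".join(COMP[b] for b in reversed(seq.upper()))

five_arm = BEACON[:STEM_LEN]
three_arm = BEACON[-STEM_LEN:]
assert revcomp(three_arm) == five_arm.upper(), "arms do not form a duplex"

oxo_pos = BEACON.index("g")                  # position of 8-oxo-dG in the 5' arm
partner = three_arm[STEM_LEN - 1 - oxo_pos]  # base paired with it in the 3' arm
print(f"stem arms: {five_arm} / {three_arm}; loop: {BEACON[STEM_LEN:-STEM_LEN]}")
print(f"8-oxo-dG at 5' arm position {oxo_pos + 1} pairs with {partner}")  # expect C
```

Running this confirms that the oxidized guanosine sits in a 12-bp stem opposite a cytosine, consistent with the design rationale stated in the assay description.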
Mitochondrion Preparation Procedures-Immediately after excision, murine hearts were rinsed in MIM buffer (250 mM sucrose, 10 mM HEPES, and 1 mM EDTA (pH 7.2)) and then transferred to 3 ml of MIM buffer supplemented with 1 mg/ml BSA (pH 7.4). Samples were then homogenized in a tissue homogenizer four times for 5 s and then in a Potter homogenizer with three up-and-down strokes. Homogenized samples were then centrifuged at 600 × g for 10 min to pellet nuclei, cytoskeleton, and unbroken cells. The supernatant fraction was further centrifuged at 8,000 × g for 15 min. The mitochondrial pellet was gently resuspended in MIM buffer using a clean small paintbrush and centrifuged at 8,000 × g for 15 min. Finally, the mitochondrial pellet was gently resuspended in approximately 200 μl of MIM buffer, and the protein concentration was determined for further applications. All steps were performed at 4°C, and all tools were prechilled and kept at 4°C before and during the isolation procedures.

mtDNA Damage Estimation-Mitochondria from mouse hearts were isolated as described under "Mitochondrion Preparation Procedures." Isolated mitochondria were resuspended in an appropriate volume of mitochondrial lysis buffer (BioVision Mitochondrial DNA Isolation Kit, K280) and kept for 10 min on ice. Then an appropriate volume of Enzyme B Mix was added, and samples were kept at 50°C until the solution became clear. Samples were further cleaned by adding an equal volume of phenol/chloroform to the lysed samples. After mixing and centrifuging, the aqueous phase was combined with an equal volume of 100% ethanol. Samples were stored at −20°C for 20 min and then centrifuged at top speed for 5 min at room temperature. The mtDNA pellet was washed once with 70% ethanol and centrifuged again at top speed for 5 min at room temperature. The mtDNA pellet was resuspended in Tris-EDTA buffer and quantified by NanoDrop (Thermo Fisher Scientific). mtDNA damage was detected by semiquantitative PCR according to Kovalenko and Santos (48). Briefly, two fragments (117 bp and 10 kb) were amplified from isolated mtDNA using the following primers: 117 bp, 5′-CCCAGCTACTACCATCATTCAAGT-3′ (forward) and 5′-GATGGTTTGGGAGATTGGTTGATGT-3′ (reverse); 10 kb, 5′-GAGAGATTTTATGGGTGTAATGCGG-3′ (forward) and 5′-GCCAGCCTGACCCATAGCCATAATAT-3′ (reverse). Because lesions are randomly distributed as a result of oxidative and other kinds of stresses, amplification of the 10-kb fragment is selectively inhibited by damage. After 20 amplification cycles (thermal protocol for the 117-bp fragment: 2 min at 94°C, 10 s at 98°C, 30 s at 57°C, and 1 min at 68°C; thermal protocol for the 10-kb fragment: 2 min at 94°C, 10 s at 98°C, 30 s at °C, and 10 min at 68°C; KOD Xtreme Hot Start DNA polymerase from Novagen was used), the PCR mixture was separated on a 1% agarose gel, stained with EtBr, and photographed (ChemiDoc MP System, Bio-Rad). PCR products were quantified by densitometric analysis using ImageJ. The linearity of the reaction was confirmed by including a control reaction containing 50% template DNA. The ratio between the 10-kb and 117-bp bands gives an estimate of the mtDNA damage, and the inverse of this ratio represents the lesions/10 kb.

Statistical Analysis-All data were analyzed using GraphPad Prism 5 and are presented as mean ± S.E. One-way analysis of variance with appropriate post hoc tests (for multiple groups) or an unpaired Student's t test (for comparisons between two groups) was used. p < 0.05 was considered statistically significant.
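The mtDNA damage estimate rests on the relative amplification of the long (10-kb) and short (117-bp) fragments. The sketch below computes both the inverse long/short ratio described above and, for comparison, the Poisson-based lesion frequency (the negative natural log of the damage-normalized amplification ratio) that is commonly used with the semiquantitative PCR method of Kovalenko and Santos (ref. 48). The band densities are hypothetical placeholders for values read out of ImageJ, and the Poisson formula is offered as a common companion calculation rather than as the authors' own.

```python
# Sketch: estimate mtDNA lesions from long-/short-amplicon band densitometry.
# Densitometry values below are hypothetical (arbitrary units from ImageJ).
import math

def normalized_long(long_band, short_band):
    """Long-fragment amplification normalized to the short (117-bp) fragment."""
    return long_band / short_band

def lesions_poisson(norm_sample, norm_control, amplicon_kb=10.0):
    """Lesions per amplicon (and per kb), assuming randomly placed,
    polymerase-blocking lesions (Kovalenko & Santos-style calculation)."""
    per_amplicon = -math.log(norm_sample / norm_control)
    return per_amplicon, per_amplicon / amplicon_kb

ctr = normalized_long(long_band=1800.0, short_band=900.0)   # control heart mtDNA
t1d = normalized_long(long_band=700.0, short_band=880.0)    # diabetic heart mtDNA

per_amp, per_kb = lesions_poisson(t1d, ctr)
print(f"inverse long/short ratio (as described above): CTR {1 / ctr:.2f}, T1D {1 / t1d:.2f}")
print(f"Poisson estimate, T1D relative to CTR: {per_amp:.2f} lesions/10 kb "
      f"({per_kb:.3f} lesions/kb)")
```

Either read-out preserves the key comparison made in the paper: damaged mtDNA yields less long-fragment product per unit of short-fragment product than control mtDNA.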
Region-Specific Effects of Immunotherapy With Antibodies Targeting α-synuclein in a Transgenic Model of Synucleinopathy

Synucleinopathies represent a group of neurodegenerative disorders which are characterized by intracellular accumulation of aggregated α-synuclein. α-synuclein misfolding and oligomer formation is considered a major pathogenic trigger in these disorders. Therefore, targeting α-synuclein species represents an important candidate therapeutic approach. Our aim was to analyze the biological effects of passive immunization targeting α-synuclein and to identify the possible underlying mechanisms in a transgenic mouse model of oligodendroglial α-synucleinopathy. We used PLP-α-synuclein mice overexpressing human α-synuclein in oligodendrocytes. The animals received either antibodies that recognize α-synuclein or vehicle. Passive immunization mitigated α-synuclein pathology, resulting in reduction of total α-synuclein in the hippocampus and reduction of intracellular accumulation of aggregated α-synuclein that was particularly significant in the spinal cord. Lowering of extracellular oligomeric α-synuclein was associated with a reduction in the density of activated Iba1-positive microglia profiles. However, a shift toward phagocytic microglia was seen after passive immunization of PLP-α-synuclein mice. Lowering of intracellular α-synuclein was mediated by autophagy degradation triggered after passive immunization in PLP-α-synuclein mice. In summary, the study provides evidence for the biological efficacy of immunotherapy in a transgenic mouse model of oligodendroglial synucleinopathy. The different availability of the therapeutic antibodies and the variable load of α-synuclein pathology in selected brain regions resulted in differential effects of the immunotherapy that allowed us to propose a model of the underlying mechanisms of antibody-aided α-synuclein clearance.

INTRODUCTION

α-synuclein is an attractive target for disease modification in synucleinopathies including Parkinson's disease (PD), dementia with Lewy bodies (DLB), and multiple system atrophy (MSA) (Brundin et al., 2017). Experimental data from transgenic models based on the overexpression of α-synuclein support the causative role of α-synuclein pathology as a trigger of neurodegeneration (Stefanova and Wenning, 2015; Stefanova, 2017; Ko and Bezard, 2017; Koprich et al., 2017). Recent propagation studies proposed prion-like properties of α-synuclein which may contribute to the progression of neurodegeneration (Steiner et al., 2018). For all these reasons, α-synuclein pathology emerges as a valid therapeutic target in synucleinopathies. Immunotherapy is being pursued as one of several strategies to reduce α-synuclein pathology in PD and MSA, where early clinical trials are in progress (Brundin et al., 2017; Koga and Dickson, 2017). Previous work has demonstrated that active immunization against α-synuclein in PD and MSA transgenic mice mitigates motor deficits, reduces α-synuclein pathology, modulates neuroimmune responses and leads to neuroprotection (Mandler et al., 2014, 2015; Villadiego et al., 2018). There is now increasing evidence from PD models that passive immunization with antibodies against pathogenic α-synuclein species may promote the clearance of α-synuclein and reduce neurodegeneration (Games et al., 2014; Lindstrom et al., 2014; El-Agnaf et al., 2017; Spencer et al., 2017).
It is suggested that antibodies against α-synuclein may act by promoting clearance of α-synuclein via the autophagy-lysosomal pathway or via microglia-dependent degradation (Bae et al., 2012). Importantly, α-synuclein antibodies may interact with both intracellular and extracellular α-synuclein and therefore interfere with intracellular aggregate formation, cell-to-cell spread, and induction of pro-inflammatory responses (Lopes da Fonseca et al., 2015; Lee and Lee, 2016). In the current study we aimed to analyze the effects of passive immunization with an antibody targeting α-synuclein in the PLP-α-syn transgenic mouse, which was engineered to express human full-length α-synuclein under the proteolipid protein (PLP) promoter in oligodendrocytes (Kahle et al., 2002). This transgenic mouse is considered a model of MSA that features intra-oligodendroglial α-synuclein inclusion formation, α-synuclein-triggered microglial activation, and neurodegeneration (Stefanova et al., 2005, 2007; Stefanova and Wenning, 2015).

MATERIALS AND METHODS

Animals and Treatment
A recombinant α-synuclein antibody, rec47, which preferentially binds oligomeric species (Lindstrom et al., 2014), was produced using a CHOK1SV GSKO Glutamine Synthetase (GS) system as previously described (Lindstrom et al., 2014). The antibody was purified using a custom-made HiScale 26/10 Protein G-Sepharose column (GE Healthcare) and SEC-purified over a HiLoad 26/60 Superdex 200 prep grade column (GE Healthcare, 17-1071-01). The final buffer was PBS (Dulbecco's PBS, Gibco), and the concentration was determined by measuring the A280 on a NanoDrop instrument with IgG settings. The preference of rec47 for α-synuclein oligomers was confirmed by inhibition ELISA (Lindstrom et al., 2014). Homozygous male and female transgenic PLP-α-synuclein mice (MGI:3604008) overexpressing human α-synuclein under the PLP promoter (Kahle et al., 2002) and age- and sex-matched non-transgenic C57Bl/6 background mice were used in this study. All animals were bred and housed in a temperature-controlled room under a 12/12 h dark/light cycle, with free access to food and water and under specific pathogen-free conditions in the animal facility of the Medical University of Innsbruck. All experiments were performed in accordance with Austrian law under permission BMWFW-66.011/0125-WF/V/3b/2015. At the age of 4 months, 10 mice per group started receiving bi-weekly intraperitoneal injections of the anti-α-synuclein antibody rec47 (20 mg/kg body weight) or a corresponding volume of vehicle (PBS) for a period of 12 weeks. The treatment dosing was selected based on experience from previous studies (Lindstrom et al., 2014). The age at initiation of treatment was chosen to cover a time window in which PLP-α-synuclein mice show progressive GCI pathology, microglial activation, and neuronal loss in the SNc without yet presenting a strong motor phenotype (Refolo et al., 2018).

Tissue Sampling
Mice were transcardially perfused with 20 ml PBS for 5 min under deep thiopental anesthesia 1 week after the last injection. The brains and spinal cords were quickly removed. The left hemisphere was dissected into sub-regions (forebrain, hippocampus, midbrain, cerebellum, lower brainstem). Each sub-region as well as the most rostral cervical spinal cord was immediately frozen on dry ice, stored at −80°C, and further used for biochemical analysis.
The right hemisphere and the caudal cervical spinal cord were immersion-fixed in 4% paraformaldehyde at 4°C overnight, cryoprotected with 30% sucrose, slowly frozen, stored at −80°C, and further used for histological analysis.

Homogenization and Tissue Extraction
The tissue extraction was sequential, starting with homogenization in TBS (20 mmol/l Tris and 137 mmol/l NaCl, pH 7.6) followed by extraction in TBS or TBS/1% Triton X-100 (TBS/T). The TBS/T pellets were thereafter dissolved in either 1% SDS or 70% formic acid (FA). In more detail, the snap-frozen tissue was homogenized in TBS supplemented with protease and phosphatase inhibitors (Roche, Mannheim, Germany) using a Precellys homogenizer at 1:3-1:10 volume ratios, depending on the weight of the tissue. The homogenate was divided and mixed with an equal volume of TBS or TBS/T, followed by centrifugation at 16,000 × g for 1 h at 4°C. The supernatants, corresponding to soluble and membrane-associated α-synuclein (TX-soluble fraction), were collected and stored at −80°C until analysis. The TBS/T pellets were thereafter extracted in 1% SDS or 70% FA. The SDS-extracted samples were spun down at 16,000 × g for 1 h at ambient temperature, and the supernatant was collected (TX-insoluble/SDS-soluble fraction) and stored at −80°C until analysis of phosphorylated α-synuclein. The TBS/T pellets extracted in 70% FA were spun at 100,000 × g for 1 h at 4°C, and the supernatant was collected and stored at −80°C until analysis of insoluble α-synuclein (TX-insoluble/FA-soluble fraction). For the following analysis, the levels of the analytes were compensated to a final concentration of 1:10 or expressed as g analyte/g tissue. All further analyses were performed in a blinded fashion.

α-synuclein Measurements
The anti-α-synuclein antibody synuclein-1 (BD), for detection of total α-synuclein, was coated on a 96-well standard MSD plate (0.5 µg/ml) in PBS. Free binding sites were blocked by incubation with 1% Blocker A solution (MSD). Samples and standard (recombinant α-synuclein, BioArctic AB, or α-synuclein-HNE complexes) were allowed to interact with the coated antibody. α-synuclein species bound to the capture antibody were detected by adding the oligoclonal rabbit anti-α-synuclein antibody FL140 (0.2 µg/ml, Santa Cruz Biotechnology) followed by MSD SULFO-TAG anti-rabbit IgG (MSD), or biotin-conjugated mAb38F (0.5 µg/ml) followed by streptavidin-labeled MSD SULFO-TAG (0.5 µg/ml), with addition of 2× MSD Read Buffer T according to the manufacturer's instructions (MSD). A SECTOR Imager 600 (MSD) was used to detect the emitted light, which correlates with the amount of α-synuclein in the samples. The plates were washed with PBS-T (0.05%) between each incubation step.

Antibody Availability in the CNS
Rec47 was measured in TBS/T homogenates prepared as described. 96-well standard MSD plates were coated with 0.5 µg/ml recombinant α-synuclein (BioArctic AB) in PBS. Before addition of sample and standard (rec47), free binding sites were blocked by incubation with 1% Blocker A (MSD). Bound antibodies were detected by a goat anti-mouse IgG antibody (Southern Biotech) followed by streptavidin-labeled MSD SULFO-TAG and 2× MSD Read Buffer T according to the manufacturer's instructions (MSD). A SECTOR Imager 600 (MSD) was used to detect the emitted light, which correlates with the amount of antibody in the sample. The plates were washed with PBS-T (0.05%) between each incubation step.
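Both electrochemiluminescence assays above rely on interpolating sample signals against a standard curve (recombinant α-synuclein or rec47). The fitting model is not stated in the text; the sketch below assumes a four-parameter logistic (4PL) curve, a common choice for MSD-type immunoassays, and all signal, concentration, and dilution values are hypothetical.

```python
# Sketch: interpolate analyte concentrations from an MSD-type standard curve.
# Assumes a four-parameter logistic (4PL) model; all numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """4PL: signal rises from 'bottom' toward 'top' with midpoint 'ec50'."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** (-hill))

# Hypothetical standard curve: analyte concentration (ng/ml) vs. ECL signal.
std_conc = np.array([0.05, 0.15, 0.5, 1.5, 5.0, 15.0, 50.0])
std_signal = np.array([210, 480, 1500, 4200, 11000, 21000, 28000], dtype=float)

params, _ = curve_fit(four_pl, std_conc, std_signal,
                      p0=[100.0, 30000.0, 3.0, 1.0],
                      bounds=(0, np.inf), maxfev=10000)

def interpolate(signal, bottom, top, ec50, hill):
    """Invert the 4PL fit to recover concentration from a sample signal."""
    ratio = (top - bottom) / (signal - bottom) - 1.0
    return ec50 * ratio ** (-1.0 / hill)

sample_signal = 6200.0   # hypothetical read-out from a TBS/T homogenate well
dilution = 10.0          # hypothetical 1:10 homogenate dilution
conc = interpolate(sample_signal, *params)
print(f"interpolated: {conc:.2f} ng/ml; corrected for dilution: {conc * dilution:.1f} ng/ml")
```

A dilution correction like the one shown would correspond to the compensation to a final 1:10 concentration mentioned under "Homogenization and Tissue Extraction."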
Western Blotting for LC3b Brain sub-regions and spinal cords were separately homogenized in TBS buffer (w/v, 1:10) with a complete protease inhibitor cocktail (Roche, Mannheim, Germany) using a tissue grinder and further protein quantification, separation, and immunoblotting were performed according to standard protocols. Shortly, equal amounts of protein were loaded on 15% SDS-PAGE gels and separated by gel electrophoresis. Proteins were then transferred to PVDF membranes (Merck Millipore, Billerica, MA, United States). After incubation in 2% Amersham ECL Blocking Agent (GE Healthcare Life Sciences, Boston, MA, United States) in PBS supplemented with 0.2% Tween20 for 1 h, the membranes were incubated with primary antibodies [LC3 (1:1000, Cell Signaling, Danvers, MA, United States); β-IIItubulin (1:500, Abcam, Cambridge, United Kingdom)] overnight at 4 • C on an orbital shaker. Membranes were incubated for 1 h at room temperature with the horseradish-peroxidase (HRP)conjugated secondary antibody (1:20000, GE Healthcare Life Sciences). Bands were visualized by enhanced chemiluminescent reagent (Biozym, Hessisch Oldendorf, Germany). Immunoblots were scanned (Fusion FX, Vilber Lourmat, Marne-la-Vallée, France) and densitometry was measured with the FusionCapt Advance software (Vilber Lourmat, Marne-la-Vallée, France). LC3b-II band intensities were normalized to the loading control β-III-tubulin, and normalized values were further statistically analyzed. Image Analysis All analyses were performed by an examiner blinded to the treatment and genotype of the animals. The StereoInvestigator Software (MicroBrightField) provided to a Nikon Eclipse 80i microscope with motorized stage and high resolution digital camera was applied. A low power objective lens (2x, SPlan) was used to delineate the borders of the areas of interest. Actual object counting was done using a 100x objective (NA 0.75) using the automated meander scan procedure. The data were presented as object density per mm 2 and thereafter used for the analysis. Dopaminergic neuronal numbers in SNc were determined using the optical fractionator as previously described (Refolo et al., 2018). To study microglial activation, Iba1-and CD68immunostaining optical density was assessed in microphotographs for the regions of interest shot at constant camera and light settings at the same microscope. The data used for the analysis were presented as mean relative optical density with correction for background signal as previously described (Stefanova et al., 2012). Next we defined the number of Iba1positive microglial cells with activated morphological profile [previously described as B, C, and D type (Sanchez-Guajardo et al., 2010;Refolo et al., 2018)] per area and represented the data as cells per mm 2 . In detail, type B cells were characterized by their hyperramified processes and larger cell body, type C cells presented with enlarged cell body and shortening and thickening of the processes, and type D microglia showed amoeboid form. Confocal Microscopy Three-dimensional stacks were acquired with an SP8 confocal microscope (Leica Microsystems, Wetzlar, Germany) using a HC PL APO CS2 63x, 1.3 NA glycerol immersion objective. Imaging was performed using WLL with excitation lines for Alexa 488 at 498 nm and for Alexa 594 at 590 nm. Fluorescence emission was detected in sequence 1 from 503 to 576 nm (Alexa 488) and in sequence 2 from 594 to 742 nm (Alexa 594). Images were acquired using the Leica LAS X 3.1.1 acquisition software (Leica Microsystems). 
Image deconvolution was performed using Huygens Professional software (Scientific Volume Imaging, Hilversum, Netherlands). The Colocalization Analyzer was used to define the degree of overlap of signals in 3D.

Statistical Analysis
All data are presented as mean ± SEM. To test the statistical significance of the treatment, the data sets were analyzed with Student's t-test or the Mann-Whitney test, depending on the distribution of the values, if not indicated otherwise. Two-way ANOVA was used when two variables (genotype/treatment or sub-region/treatment) were considered. Statistical significance was set at p < 0.05, two-tailed. Correlations were assessed by linear regression analysis. The statistical analysis was performed with GraphPad Prism software.

RESULTS

Passive Immunization of PLP-α-synuclein Mice Resulted in Region-Specific Amelioration of α-synuclein Pathology
The passive immunization of PLP-α-synuclein mice over a period of 3 months resulted in no significant differences in the survival of the animals. We assayed in parallel all CNS samples from PBS- and rec47-treated mice for levels of antibodies by ELISA. In vehicle-treated animals or non-transgenic controls receiving rec47, no signal was detected, suggesting a specific IgG response to the rec47 therapy only in PLP-α-synuclein mice. In PLP-α-synuclein mice receiving passive immunization, rec47 exposure in the CNS showed region-specific differences in IgG levels. The level of antibodies was significantly higher in the spinal cord as compared to the lower brainstem (p < 0.01) and cerebellum (p < 0.001). The cerebellum also showed significantly lower levels of antibodies as compared to the forebrain (p < 0.05). The availability of antibodies in the CNS of PLP-α-synuclein mice was distributed as follows: spinal cord > forebrain > hippocampus > midbrain > lower brainstem > cerebellum (Figure 1A). Significantly lower α-synuclein levels were detected in the TX-soluble and TX-insoluble fractions of the hippocampus in the rec47-treated group (Figures 1B,C). Oligomeric α-synuclein levels were variably present in the different sub-regions of the CNS of PLP-α-synuclein mice receiving vehicle (spinal cord < hippocampus < lower brainstem < midbrain < cerebellum < forebrain, Table 1). Overall lower levels of soluble α-synuclein oligomers were detected in PLP-α-synuclein mice receiving rec47 as compared to vehicle; however, sub-region post hoc analysis failed to reach statistical significance (Table 1). To address selectively the effects of immunotherapy on the intracellular accumulation of α-synuclein in the brains of PLP-α-synuclein mice, we applied immunohistochemistry. To detect aggregated α-synuclein in glial cytoplasmic inclusion (GCI)-like structures, which are typically found in the PLP-α-synuclein brain, we used the 5G4 antibody as previously described (Kovacs et al., 2012; Brudek et al., 2016).

TABLE 1 (fragment) | rec47 (n = 9): 12.6 ± 0.8, 38.7 ± 4.0, 40.4 ± 3.2, 31.6 ± 2.7, 29.6 ± 3.6, 26.7 ± 1.8. No significant effect of the treatment was identified in the studied CNS sub-regions (p > 0.05 for all), despite a trend toward an overall reduction of oligomeric α-synuclein levels after rec47 therapy of PLP-α-synuclein mice.
FIGURE (fragment) | (B) Immunohistochemistry for phosphorylated α-synuclein (pS129) in PLP-α-synuclein mice; the inset shows the staining pattern in control non-transgenic mice.
The density of GCI-like profiles was significantly affected by the treatment as assessed by two-way ANOVA [effect of treatment: F(1,102) = 6.798, p < 0.01; effect of region: F(7,102) = 76.48, p < 0.001; interaction: F(7,102) = 0.9264, p > 0.05]. The treatment-induced change in the density of 5G4-positive GCIs proved significant after post hoc Bonferroni correction in the spinal cord (Figure 2A). The phosphorylation of intracellular α-synuclein was measured by the density of cells with pS129-positive intracellular accumulation. We found effects of both treatment (F(1,102) = 17.06, p < 0.001) and region (F(7,102) = 103.1, p < 0.001), as well as an interaction between the two variables (F(7,102) = 7.74, p < 0.001), by two-way ANOVA. Interestingly, the post hoc Bonferroni test demonstrated a significant increase of intracellular α-synuclein phosphorylation in the SNc, pontine nuclei (PN), and inferior olives (IO), but not in the other tested regions (Figure 2B and Supplementary Figure 1).

Mechanisms of Intracellular α-synuclein Clearance After Immunotherapy in PLP-α-synuclein Mice
Previous experimental evidence suggests that macroautophagy is a major mechanism of protein degradation after immunotherapy targeting α-synuclein. Furthermore, phosphorylation of α-synuclein can be related to preparation of the protein for degradation and clearance (Tenreiro et al., 2014). We sought to determine whether changes in the autophagy pathway can be detected in PLP-α-synuclein mice receiving anti-α-synuclein antibodies. Western blot analysis showed no significant change in LC3b-II levels in any of the brain sub-regions studied (Figure 2D). Next, we used confocal microscopy to identify the density of LC3-positive particles in pS129-immunopositive cells in selected brain regions (Figure 2E). The data show a significant increase in LC3 signal in pS129-positive cells in the SNc in immunized versus control mice (Figure 2F), while this is not the case in the hippocampus, a region where no change in the level of α-synuclein phosphorylation was detected (Figure 2G).

Changes of the Microglia Activation Profile After Passive Immunization of PLP-α-synuclein Mice
Previous studies propose that microglia can be activated by oligomeric α-synuclein released into the extracellular space (Fellner and Stefanova, 2013; Sanchez-Guajardo et al., 2015). Confirming previous observations (Stefanova et al., 2007), we found a significantly higher Iba1 immunosignal in PLP-α-synuclein mice as compared to wild-type controls receiving vehicle (Figure 3A). The difference was lost after anti-α-synuclein antibody treatment. A significant correlation between the level of α-synuclein oligomers and microglial activation was detected in PLP-α-synuclein mice (Figure 3B); however, no such correlation was found between microglial activation and phosphorylation of intracellular α-synuclein (R² = 0.07, p = 0.34). Next, the number of Iba1-immunoreactive microglia with activated morphology of type B, C, or D (Sanchez-Guajardo et al., 2010; Refolo et al., 2018) in PLP-α-synuclein mice as compared to wild-type controls was determined. Treatment with rec47 in PLP-α-synuclein mice resulted in a significant reduction in the number of activated (type B, C, and D) microglial cells (Figures 3C,D). Interestingly, after immunotherapy in PLP-α-synuclein mice there was a clear increase in clusters of type C and D cells, while type B was predominant in the vehicle-treated animals (Figure 2D). Since C and D type morphology of microglia may represent increased phagocytic activity, we tested the expression of CD68, a microglial lysosomal marker. The percentage of CD68 immunosignal significantly increased in PLP-α-synuclein mice receiving rec47 and was linked to the clusters of microglia observed in the Iba1 immunostaining (Figures 3E,F).
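The two-way ANOVAs reported in this section (treatment by region, followed by Bonferroni-corrected post hoc comparisons) were computed in GraphPad Prism. For readers who prefer a scripted equivalent, a minimal sketch is shown below; the file name, column names and the use of per-region t-tests with a Bonferroni-adjusted alpha are assumptions made for illustration, not the authors' actual analysis code.

import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Long-format table with one row per animal and region; column names are assumed
df = pd.read_csv("gci_density_long_format.csv")   # hypothetical input file

# Two-way ANOVA: main effects of treatment and region plus their interaction
model = ols("density ~ C(treatment) * C(region)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Bonferroni-corrected post hoc comparisons: vehicle vs. rec47 within each region
regions = df["region"].unique()
alpha_corrected = 0.05 / len(regions)
for region in regions:
    sub = df[df["region"] == region]
    vehicle = sub.loc[sub["treatment"] == "vehicle", "density"]
    treated = sub.loc[sub["treatment"] == "rec47", "density"]
    t_stat, p_value = stats.ttest_ind(vehicle, treated)
    print(f"{region}: t = {t_stat:.2f}, p = {p_value:.4f}, "
          f"significant after Bonferroni: {p_value < alpha_corrected}")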
DISCUSSION

Our study provides evidence for biological effects of passive immunization with antibodies targeting α-synuclein in PLP-α-synuclein mice. Immunotherapy with the rec47 antibody resulted in an overall trend toward reduced α-synuclein levels with a decrease in GCI density in the CNS. In the current experiment, post hoc sub-regional analysis confirmed significant changes in α-synuclein pathology after immunotherapy in the hippocampus and the spinal cord. Finally, a shift of α-synuclein-induced microglial activation in PLP-α-synuclein mice toward a phagocytic phenotype was observed (Stefanova et al., 2007). Oligomeric α-synuclein species are considered to precede the formation of amyloid fibrils of the intracellular aggregates in PD, DLB and MSA and to account for α-synuclein toxicity (Bourdenx et al., 2017; Wong and Krainc, 2017). For this reason, targeting oligomeric α-synuclein forms is considered a major candidate therapeutic approach in these disorders. In the PLP-α-synuclein mouse model, oligomeric high-molecular-weight α-synuclein species have been previously demonstrated (Kahle et al., 2002; Bassil et al., 2016, 2017; Venezia et al., 2017). Hence, the model provides a valid tool to test preclinically the biological effects of passive vaccines that preferentially target α-synuclein oligomers, such as rec47 used in the current study (Lindstrom et al., 2014). The effects that we identified in the PLP-α-synuclein mouse after 3 months of rec47 immunization were region-specific. We hypothesize that the sub-regional efficacy may depend on the levels of penetrating antibodies and the levels of pathologic forms of α-synuclein in each sub-region. We found region-specific CNS availability of the antibodies used for immunotherapy, confirming previous reports (Lindstrom et al., 2014). A possible explanation for this phenomenon may be the heterogeneity of the blood-brain and blood-spinal cord barriers (Wilhelm et al., 2016). The higher availability of antibodies in the spinal cord may be due to the higher permeability of the blood-spinal cord barrier, as shown previously (Wilhelm et al., 2016). The blood-spinal cord barrier has been demonstrated to have fewer pericytes and reduced
tight junction protein expression, which increases permeability compared to the blood-brain barrier (Bartanusz et al., 2011; Winkler et al., 2012). On the other hand, the levels of α-synuclein were lowest in the spinal cord, followed by the hippocampus. If a hypothetical efficacy coefficient of the immunotherapy per region is calculated as the ratio between the mean availability of antibodies and the mean level of α-synuclein oligomers per region, the expected efficacy score is: spinal cord (11) > hippocampus (2.7) > forebrain (2.5) > lower brainstem (2.0) > midbrain (1.9) > cerebellum (1.3) (Figure 4). In concert with this hypothesis, the spinal cord was the CNS region where GCI density was significantly reduced in PLP-α-synuclein mice receiving immunotherapy, and the hippocampus showed a significant lowering of insoluble α-synuclein.

FIGURE 4 | A hypothesis for a regional coefficient of efficacy (CE) after immunotherapy. The regional CE was calculated as the ratio between the antibody availability in the CNS and the level of oligomeric α-synuclein. The expected CE was distributed as follows: spinal cord > hippocampus > forebrain > lower brainstem > midbrain > cerebellum. We observed increased phosphorylation of intracellular α-synuclein in the brain stem (SNc, pontine nuclei, inferior olives), i.e., a region with lower CE (entity 1), but not in the spinal cord, forebrain or hippocampus, regions with higher CE (entity 2), after immunotherapy. We propose that these two entities may represent different stages of the dynamics of α-synuclein clearance after rec47 immunotherapy. While in entity 2 it is possible to already measure GCI reduction and reduction of insoluble α-synuclein, entity 1 may be useful to document preceding events before the actual removal of the toxic α-synuclein species.

The mechanism of antibody-mediated clearance of intracellular α-synuclein aggregates is not completely understood. Earlier studies suggest that autophagy-lysosomal pathways may be involved. We observed increased phosphorylation of intracellular α-synuclein in the brain stem (SNc, pontine nuclei, inferior olives), i.e., a region with lower antibody availability (entity 1), but not in the spinal cord, forebrain or hippocampus, regions with higher antibody concentration (entity 2), after immunotherapy. We propose that these two entities may represent different stages of the dynamics of α-synuclein clearance after rec47 immunotherapy. While in entity 2 it is possible to already measure GCI reduction, entity 1 may be useful to document preceding events before the actual removal of the toxic α-synuclein species. No significant overall changes in autophagy were measurable by immunoblotting, similar to a previous report in a PD model (El-Agnaf et al., 2017). However, we detected an increase of intracellular α-synuclein phosphorylation linked to activation of autophagy (LC3 signal) in single cells of entity 1 (SNc), but not of entity 2 (hippocampus). Therefore, the detected selective increase of intracellular phosphorylation of α-synuclein is suggested to be an event related to protein clearance (Tenreiro et al., 2014) at the cellular level, and may represent an earlier step in the pathways triggered by the antibody leading toward clearance of α-synuclein after immunotherapy (Figure 5).

FIGURE 5 | Working hypothesis on the mechanism of action of passive immunization with rec47 in PLP-α-synuclein mice. (A) At baseline in PLP-α-synuclein mice, oligodendroglia present with intracellular accumulation of α-synuclein oligomers that seed the aggregation of insoluble α-synuclein fibrils. Soluble oligomers released into the extracellular space trigger microglial activation that partly contributes to the clearance of α-synuclein. (B) Rec47 antibodies bind to α-synuclein oligomers and trigger increased phosphorylation of the protein, which is further degraded by autophagy. (C) Immunotherapy with rec47 of PLP-α-synuclein mice results in clearance of α-synuclein oligomers, leading to lowering of the intracellular seeding of α-synuclein aggregates (glial cytoplasmic inclusions, GCIs) and a reduction of α-synuclein-induced microglial activation.
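Returning to the regional coefficient of efficacy introduced above (Figure 4), the calculation can be made explicit with a short sketch. The antibody and oligomer values below are made-up placeholders chosen only so that the printed ranking matches the order reported in the text; they are not the measured data from this study.

# Regional coefficient of efficacy (CE) = antibody availability / oligomeric
# alpha-synuclein level. All input values are illustrative placeholders.
antibody_level = {
    "spinal cord": 140, "hippocampus": 105, "forebrain": 98,
    "lower brainstem": 60, "midbrain": 58, "cerebellum": 35,
}
oligomer_level = {
    "spinal cord": 13, "hippocampus": 39, "forebrain": 40,
    "lower brainstem": 30, "midbrain": 31, "cerebellum": 27,
}

ce = {region: antibody_level[region] / oligomer_level[region] for region in antibody_level}

# Rank regions from highest to lowest expected efficacy
for region, value in sorted(ce.items(), key=lambda item: item[1], reverse=True):
    print(f"{region:16s} CE = {value:.1f}")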
Several studies indicate that microglia get activated in the presence of oligomeric α-synuclein and may participate in its clearance (Zhang et al., 2005; Austin et al., 2006; Stefanova et al., 2011). In the PLP-α-synuclein mouse model, the presence of α-synuclein in oligodendrocytes leads to early activation of microglia (Stefanova et al., 2007). Therefore, microglial activation may reflect the levels of pathogenic α-synuclein species in the extracellular space in the PLP-α-synuclein mouse brain. Furthermore, microglia may show different activation profiles which may correspond to different functional phenotypes (Sanchez-Guajardo et al., 2010; Refolo et al., 2018), e.g., related to toxic pro-inflammatory responses or linked to beneficial phagocytic activity associated with the clearance of α-synuclein. Passive immunization with rec47 led to a reduction of the density of Iba1-positive microglia with activated morphology, pointing toward a reduction of the levels of extracellular α-synuclein. A similar reduction of microglial activation after passive immunization targeting oligomeric forms of α-synuclein was shown in PD models (El-Agnaf et al., 2017; Rockenstein et al., 2018), further corroborating the effects of immunotherapy in α-synucleinopathies and supporting the mechanisms suggested here (Figure 5). Interestingly, in PLP-α-synuclein mice treated with rec47 we found, in addition, a shift in the activated morphology of microglia from hyperramified type B toward profiles with shorter processes and larger amoeboid cell bodies (C and D type), i.e., a morphological phenotype associated with increased phagocytic activity. This notion was confirmed by the detection of an increased percentage of CD68-positive microglia in PLP-α-synuclein mice treated with rec47. Altogether, the current results support a role of microglia, specifically activated by the immunotherapy toward phagocytic phenotypes, in contributing to the clearance of pathogenic α-synuclein species from the extracellular space. In summary, the data presented here further support the potential of immunotherapy targeting α-synuclein for the treatment of synucleinopathies. The effects observed in the current study of immunization of PLP-α-synuclein mice within a period of 3 months are mild and region-specific. However, the study provides insights into the biological mechanisms and the efficacy of the approach, supporting previous observations (Bae et al., 2012; Lindstrom et al., 2014; Spencer et al., 2017).

AUTHOR CONTRIBUTIONS
MK, MH-V, MJ, and JS contributed to the execution of the experiments and wrote the first draft of the manuscript and its review. FE, WP, and GKW contributed to the discussion and interpretation of the data and review of the manuscript. EN and NS contributed to the conception of the study, the statistical analysis and interpretation of the data, and the writing and review of the manuscript.

ACKNOWLEDGMENTS
This study was supported by grants P25161 and F4414 of the Austrian Science Fund (FWF) and a research grant of BioArctic AB.
2018-07-04T13:03:16.399Z
2018-07-04T00:00:00.000
{ "year": 2018, "sha1": "c22722266699cb317e6edc05959589a17574bee6", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2018.00452/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c22722266699cb317e6edc05959589a17574bee6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
237333857
pes2o/s2orc
v3-fos-license
EXPORT PRIORITIES FOR THE DEVELOPMENT OF THE NON-OIL SECTOR OF THE REPUBLIC OF AZERBAIJAN

The article examines the significance of developing the non-oil sector in Azerbaijan. The potential of this segment of industry is analyzed in comparison with other sectors of the economy. The study reveals the main reasons for regression in technological, infrastructural and staffing terms. Export potential is an integral part of the country's economic potential. It determines the ability of the national economy and its individual branches, given full use of the means of production, to produce and export goods and services in the quantity and quality that correspond to the demand of consumers on the external market. It should be noted that export presupposes demand: if we take into account that the number of products that can be produced in the country is quite small compared to the number of products on the world market, then we can assume that there will always be a potential demand for the country's oil products. Consequently, if the price of a product on the domestic market is less than its price on the world market, then there is export potential. The article studies the specifics of regulating the country's export potential, as well as an analysis of the export structure.

Introduction
The transformation of Azerbaijan into a major exporter of oil, the increase in its production from year to year, and the dynamic growth of foreign exchange earnings due to exports in this sector have brought into focus the need to increase the export potential of other spheres of the economy. Taking as a basis the negative impact of the outstripping growth of oil revenues, as well as the future depletion of oil reserves, the government declared the development of the non-oil sector a priority of economic policy. To solve this problem and implement state support in this area, numerous programs have been adopted and partially implemented in recent years. One of them is the State Program of Socio-Economic Development of Regions of the Republic of Azerbaijan for 2008-2013. The main goals of the program are to ensure the development of non-oil industries, including processing, as well as to increase the production of agricultural products based on the effective use of the wealth and natural conditions of the regions. Among the macroeconomic measures provided for in the document, the following are of particular importance for the development and increase of the export potential of the non-oil sector [4]:
- improving the legislative framework, including the creation of special economic zones in the country, the expansion of investment activities, the development of draft laws regulating antimonopoly and other economic relations, and stimulating the development of regions;
- increasing financial support for projects of socio-economic development of the regions;
- reduction of tax rates to an optimal level;
- continuation of the policy of applying tax holidays to agricultural producers;
- further preservation of customs rates at an optimal level;
- creation of large customs terminals and free customs zones in the northern, southern and western directions of the Republic of Azerbaijan, as well as around Baku;
- improving the activities of the National Fund for Assistance to Entrepreneurship;
- creation and development of a network of agrotechnical services;
- continuation of measures of state financial assistance to enterprises of the agro-industrial complex, farms, etc.
Among the non-oil sector of Azerbaijan, agriculture differs both in export potential and in the level of use of this potential and has a special weight. From this point of view, the significance of this program can hardly be overestimated. The provision of state support in the sale of products grown by farmers both in domestic and foreign markets, the creation of an Export Assistance Fund, stock exchanges, wholesalers, auctions has become one of the main tasks identified in the document in connection with this industry. Methodology Research methods are general scientific methods of cognition -analysis, synthesis, induction and deduction, comparison; research tools for observation and data aggregation; and statistical analysis. Classical and modern theories related to international trade, reflected in the works of famous scientists, economists from near and far abroad, as well as Azerbaijani scientists are the information base of the study. In addition, in the final work, Decrees of the President, State Programs adopted in the country, data of the State Committee on Statistics, the Customs Committee, etc. were used. The program of socio-economic development of regions At the beginning of gaining independence, the Republic of Azerbaijan chose a strategy for the development of the oil sector of the economy. At that time, oil was the only resource that the country could offer on world markets and the only area where foreign investment could be attracted. Thus, Azerbaijan turned into an exporter of oil, the sale of which made up the bulk of the country's export. However, the unilateral development of the economy has a negative impact on the economy, depletes a non-renewable resource, and also increases the risks of losses from changes (decrease) in prices for this resource in world markets. All of the above makes the development of the non-oil sector of the economy urgent. To support the development of the non-oil sector of the economy, various programs have been adopted in the country, which include the Program of Socio-Economic Development of Regions, which was implemented in 3 stages 2004-2008; 2009-2013 and 2014-2018. Within the framework of this Program, infrastructure projects were implemented, the production of agricultural products, the processing industry, the efficient use of resources, the creation of new jobs, an increase in the standard of living, etc. Over the years of implementation of this program, the legal framework for business development in the regions of the country has been improved, opportunities for investment activities have been expanded, and measures have been taken against the expansion of monopoly activities. During the implementation of the programs, the financial support of the socio-economic development of the regions improved, and taxation was improved. Thus, tax rates were reduced, tax holidays were applied for agricultural producers. The National Fund for Assistance to Entrepreneurship was actively involved in the processes of regional development, as well as state financial assistance was provided to farms and enterprises of the agro-industrial complex. At the same time, a policy of maintaining customs rates at an optimal level was pursued, which stimulated the development of regions and an increase in their export potential. An important role in the development of the export potential of the agricultural sector was played by the creation of the Export Assistance Fund, which provided support in the promotion of agricultural products in foreign markets. 
The success of the implementation of the three regional development programs indicates the need for their continuation. In this regard, on behalf of the President of the Republic of Azerbaijan, the 4th state program of socio-economic development of regions, for 2019-2023, is being prepared. Let us consider the dynamics of exports of food products, tobacco and alcoholic beverages in Azerbaijan for 2007-2017 (Table 1). The analysis shows that over 2007-2017 different trends can be traced for different product groups: for some there is an increase in exports, for others, on the contrary, a decrease. The goods whose exports are growing include fresh vegetables and natural grape wines, the sales volumes of which increased 4.4 times and 2.2 times, respectively, during the study period. Export growth is also observed for such product groups as fresh fruits (by 36.2%), canned fruits and vegetables (by 76.4%) and tobacco (by 17.9%). For the remaining commodity groups, exports decreased over 2007-2017. Thus, the export of potatoes decreased by 8.5%; tea by 61.0%; vegetable oil by 57.9%; fat by 86.7%; granulated sugar by 79.0%; fruit and vegetable juices by 66.3%; spirits by 54.9%; and cigarettes by 86.3%. For comparison, consider the import structure. As can be seen from Table 2, the development of a natural gas field and access to foreign markets resulted in a very large increase in exports for this product group. A significant increase also took place in the export of acetyl alcohol (17.6 times), crude oil (4.3 times), oil coconut (3.1 times), products made of ferrous metals (3.1 times), electricity (2.6 times), ethylene-propylene (2.5 times) and semi-finished products from ferrous metals (2.4 times). In addition to the above product groups, a small increase in export volumes took place in the export of bags and packages made of yarn (by 19.8%) and cotton yarn (by 4.0%). For the remaining commodity groups, a decrease in export volumes is observed. A significant reduction took place in the export of paper, cardboard and products made from them, motor gasoline, liquid fuel, and corners and channel bars made of ferrous metals, whose exports compared to 2007 are practically zero. The export of kerosene for engines and of heavy distillates has fallen by half or more. A significant decrease in exports was also observed for unprocessed sheep skins, cotton fiber and synthetic fabrics. Analysis of imports shows for which commodity groups it is important to carry out an import substitution policy. It should be noted that agricultural products represent a significant part of the exports of the non-oil sector. In recent years, this industry can be called one of the fastest growing. World Bank experts note that Azerbaijan is among the reformer countries and ranks 25th among them. The agricultural sector occupies an important place among the reformed sectors. Over the years of reforms, covering 2003-2017, there has been real growth in the production of agricultural products, in both crop and livestock production. It should be noted that, according to the estimates of World Bank experts, the growth in agricultural production is ahead of the world average growth in this production. For the further development of this sector, an increase in loans to the agricultural sector is expected. The planting of orchards also plays a special role in the development of agriculture.
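The growth multiples and percentage changes quoted above (for example, fresh vegetable exports increasing 4.4 times and potato exports falling by 8.5% between 2007 and 2017) follow from a simple comparison of the period's start and end values. The sketch below illustrates the calculation with made-up 2007/2017 export values; they are not the actual figures from Table 1.

# Growth multiples and percentage changes over 2007-2017; placeholder values only.
exports = {                      # product: (value in 2007, value in 2017)
    "fresh vegetables":    (10.0, 44.0),
    "natural grape wines": (5.0, 11.0),
    "fresh fruits":        (20.0, 27.2),
    "potatoes":            (8.0, 7.32),
}

for product, (v2007, v2017) in exports.items():
    multiple = v2017 / v2007                       # "increased X times"
    pct_change = (v2017 - v2007) / v2007 * 100.0   # percentage change
    print(f"{product:20s} x{multiple:.1f}  ({pct_change:+.1f}%)")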
Considering that in this sector the export of fresh fruits, fruit juices, and grape wines is increasing at an accelerated rate and the export potential is higher, then we can say that this is of the greatest importance. Currently, the development of the agricultural sector faces new challenges, the key of which is the development of the agro-industrial complex, the introduction of new technologies for the transition to the intensive development of agriculture. The transition to intensive development will contribute to the growth of export opportunities. Within the framework of this program, it is planned to create agricultural parks, large farms. This direction is possible with increasing investments, which is provided for by the measures taken by the government of the country to support the creation of agricultural parks and the development of agriculture. The presence of large farms and agricultural parks in the country is one of the important conditions for increasing the export potential of Azerbaijan. In connection with the need to enlarge farms, a system of their preferential crediting is envisaged. President of the Republic of Azerbaijan I. Aliyev approved the decree "On improving leasing activities in the agricultural sector and state support for agriculture." According to this decree, entrepreneurs operating in the agricultural sector will have access to preferential loans. The country has created the Agency for Agricultural Credits and Development, which will issue loans for the development of agriculture. This agency can open a credit line for banks, the maximum size of which is 5 million manat. It should be noted that the size of loans has increased. If earlier the size of micro loans was up to 1,000 manats, now it has increased to 5,000 manats. The size of small loans also increased, which amounted to 1,000 to 20,000, and now increased to 5,001 to 30,000 manats. Medium size loans increased from 20,000-50,000 manats to 30,000-100,000 manats. Large loans ranging from 50,000 to 200,000 manats increased to 100,001 to 200,000 manats. And finally, loans for the purchase of agricultural machinery up to 1 million manats. The loan repayment period is set from 2 to 5 years. The agrarian sector is an important component of the country's economy and the non-oil sector. A significant part of the export of the non-oil sector is accounted for by the agricultural sector. Having chosen this area as one of the priority policies, the government of the country has implemented a number of measures aimed at supporting and developing the agro-industrial complex in the country. Results Among the main tasks set by the government of Azerbaijan in the development of the agricultural sector is the formation of a modern agro-industrial complex in the country. To form an agro-industrial complex that meets modern requirements, it is necessary to attract new technologies, switch to intensive growth in production volumes, and achieve effective specialization in economic regions. The creation of an agro-industrial complex and large farms is the main formation of a highly productive industry for the production of agricultural products, the compliance of these products with world standards. This condition will contribute to the growth of the country's export potential and its diversification. There are 6 operating in the country and it is planned to create 3 more agricultural parks. Work on the creation of agricultural parks and large farms is being carried out in 30 regions of Azerbaijan. 
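The revised loan size brackets described above can be summarized as a simple classification rule, sketched below. The function name, the currency handling and the exact treatment of bracket boundaries are assumptions made for this illustration; they are not taken from the decree itself.

# Classification of an agricultural loan into the revised size brackets (in manats).
def classify_loan(amount_manat: float) -> str:
    if amount_manat <= 5_000:
        return "micro loan (up to 5,000 manats)"
    if amount_manat <= 30_000:
        return "small loan (5,001-30,000 manats)"
    if amount_manat <= 100_000:
        return "medium loan (30,001-100,000 manats)"
    if amount_manat <= 200_000:
        return "large loan (100,001-200,000 manats)"
    return "outside the standard brackets (e.g., machinery loans of up to 1 million manats)"

for amount in (3_000, 25_000, 75_000, 150_000, 600_000):
    print(amount, "->", classify_loan(amount))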
It should be noted that the state supports this area, allocating millions of manats for the creation of infrastructure and for soft loans. In the country, commercial banks support agriculture by issuing loans, including preferential loans for the development of agriculture. Increasing industrial potential is a priority for the development of Azerbaijan's economy. The presence of a diverse raw material base in the country creates conditions for the development of a diversified industrial complex. It is assumed that the main partners for Azerbaijan's exports of ferrous metals can be such large countries as Great Britain, Russia, Kazakhstan, Turkmenistan and the UAE, and, in the field of non-ferrous metals, Iran, China, Turkey and other countries. Experts believe that ferrous and non-ferrous metallurgy can become one of the export priorities of the non-oil sector. To improve the export potential of the metallurgical industry, the following measures can be taken:
- reorganization of the existing enterprises of the metallurgical industry;
- attraction of both foreign and domestic investment into the metallurgical industry;
- pursuit of a state policy that stimulates the development of the metallurgical industry, for example, the development of government programs, as well as fiscal and monetary policy measures aimed at the development of these industries;
- production of final goods from ferrous and non-ferrous metal products, for example, in oil engineering, household appliances and other industries associated with the metallurgical industry.
An important area for increasing export potential is the development of the chemical industry. It should be noted that the country had a high potential in the chemical industry, having a sufficient raw material base for diversifying the products of this sector. Currently, the country produces over 100 types of chemical products, most of which are exported. The main exported products of the chemical industry include caustic soda, liquid chlorine, high-pressure polyethylene, alcohol, dichloroethane, propylene oxide, polyester resin, chlorinated paraffin, heavy and light pyrolysis resin, propylene, butylene, the butane-butylene fraction, etc. The main importers of chemical products manufactured in Azerbaijan are near-abroad countries (Turkey, Russia, Iran, Georgia, Turkmenistan, Ukraine, Kazakhstan) as well as more distant countries (Poland, Egypt, England, Germany and others).

Conclusion and discussion
Another important area for the development of export potential is the field of information and communication technologies (ICT). This industry is a promising, progressive branch of the service sector. The country adopted the State Program for the Development of ICT. A result of the development of this industry was the launch of three satellites: Azerspace 1, Azerspace 2 and Azerspace 3. Thus, Azerbaijan began to provide high-tech services for export. Currently, there is a growth trend in this type of service in the world, which will contribute to the growth of foreign exchange earnings. The development of ICT in modern conditions is an important condition for increasing the country's export potential, attracting investment and developing innovation. Another important area for the development of export potential is tourism. Here the aim is to attract foreign tourists to the country. The development of tourism does not require the production and transportation of any goods.
To ensure an influx of tourists, adequate hotel capacity is needed; most of the existing hotels in the country are 5-star, while there are not enough hotels and hostels designed for the average tourist. The development of tourism, in addition to the hotel business, can create links with related industries such as transport, public catering and insurance, as well as support the development of various small crafts (workshops) for the production of souvenirs, etc. Thus, the development of tourism, as well as the organization of international conferences and various kinds of sports competitions in the country (for example, Formula 1 and others), will contribute to the influx of tourists. As priorities for the non-oil sector, the government has chosen industry, agriculture, transport, information technology, the management of foreign exchange reserves, etc. Studies show that tourism in Azerbaijan is still at an early stage of development, and its indicators are increasing every year. As noted earlier, the development of foreign tourism and the export of a number of services can in the future increase the inflow of foreign exchange and bring additional profit to the country. The increase in global demand for chemical products, the country's advantages in several parameters of the production of these products, and a number of other factors suggest that the development of the chemical complex, the strengthening of state support in this area, and an increase in the supply of raw materials can contribute to an increase in the export potential of the non-oil sector. In our opinion, in order to increase the export potential of the country's chemical industry and the export of chemical products, it is necessary to take measures in the following areas:
- provision of state support to chemical industry complexes, with the application of tax, customs and other benefits;
- opening of a line of long-term concessional loans at the expense of state funds;
- modernization of the infrastructure serving chemical complexes;
- consideration of the possibilities of attracting foreign investment into large chemical industry complexes that are at the disposal of the state;
- production of modern science-intensive products, etc.
Thus, the research conducted allows us to conclude that the export of agricultural products, processed products of the agricultural sector, chemical products and metallurgical products are the export priorities of the non-oil sector. However, the development of other sectors of the economy with high export potential can increase the number of such priority areas, and there are many opportunities for this.
2021-08-27T16:41:15.571Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "bfbf49d2f59cbd5b8896d1c6e1071b5242a413a3", "oa_license": "CCBY", "oa_url": "http://bit.fsv.cvut.cz/issues/01-21/full_01-21_07.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ff66471ef77e7725fb0a4415fb56864b1b6e2b5f", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
249588669
pes2o/s2orc
v3-fos-license
Isolation, Identification and Hyperparasitism of a Novel Cladosporium cladosporioides Isolate Hyperparasitic to Puccinia striiformis f. sp. tritici, the Wheat Stripe Rust Pathogen Simple Summary Obligate biotrophic pathogen Puccinia striiformis f. sp. tritici (Pst) is a major threat to wheat production. Parasites of Pst can be used to develop biological agents for environmentally friendly control of this fungal disease. Here, we report a hyperparasitic fungus isolated from taupe-colored uredinia of Pst and identified as Cladosporium cladosporioides through molecular and morphological characterizations. The hyperparasitic isolate was able to reduce the production and viability of Pst. Therefore, Cladosporium cladosporioides may have potential in biological control of stripe rust on wheat. Abstract Wheat rust outbreaks have caused significantly economic losses all over the world. Puccinia striiformis f. sp. tritici (Pst) is an obligate biotrophic fungus causing stripe rust on wheat. Application of fungicides may cause environmental problems. The effects of hyperparasites on plant pathogens are the basis for biological control of plant pathogenic fungi and parasites of Pst have great value in biological agents development. Here, we report the isolation and characterization of isolate of Cladosporium cladosporioides from Pst based on morphological characterization and analysis of molecular markers. The hyperparasitic isolate was isolated from taupe-colored uredinia of Pst. Upon artificial inoculation, the hyperparasitic isolate was able to reduce the production and germination rate of Pst urediospores, and Pst uredinia changed color from yellow to taupe. Scanning electron microscopy demonstrated that the strain could efficiently colonize Pst urediospores. Therefore, the isolate has the potential to be developed into a biological control agent for managing wheat stripe rust. Introduction Wheat stripe rust (also called yellow rust), caused by Puccinia striiformis f. sp. tritici (Pst), poses a great threat to wheat production worldwide [1]. In 2000, 9 of the 64 major wheat producing countries reported severe losses in wheat yields caused by stripe rust [2]. In 2000-2012, about 88% of the world's wheat-producing areas were affected by stripe rust [3]. In China, the disease can reduce the yield of wheat by 10-20%, and even more than 60% in extremely severe epidemic years [4,5]. Pst is an obligate biotrophic basidiomycete fungus. The fungus produces yellow to orange uredinia on susceptible host plants. During an Pst urediospores inoculation experiment in a growth chamber, we found that yellow to orange uredinia turned taupe. Based on this phenomenon, we further isolated a novel Pst hyperparasite. Based on morphological characterization and analysis of molecular marker, we identified the hyperparasitic isolate as C. cladosporioides (Fresen.) G.A. de Vries. Furthermore, we demonstrated that this isolate was able to reduce the production and viability of Pst urediospores. Thus, the isolate may have potential in biological control of wheat stripe rust. Isolation of the Hyperparasite from Pst-Infected Leaves Wheat cultivar "Fielder" inoculated with Pst urediospores were kept in a growth chamber at about 16 • C and 80-90% relative humidity. When Pst was sporulating 14 days after inoculation, Pst uredinia started to change color from yellow to taupe. Leaves bearing taupe pustules were cut off the plants, surface-sterilized with 75% alcohol for 1 min, and transferred to a Petri dish containing PDA medium. 
The dish was incubated in darkness at 25 • C for 5 days [21]. A mycelial tip was transferred to a new dish and incubated under the same condition for obtaining a pure culture. Morphological Identification The obtained pure culture isolate, C. cladosporioides R23Bo, was grown on PDA as described above, and a diameter of 5 mm mycelial disk was placed at the center of a new PDA plate and cultured at the same conditions. Colonies, hyphae, conidiophores, and conidia were observed and measured under a light microscope. To study the isolate's ultrastructure, the samples were prepared using the previously described method [21]. First, the samples were fixed in a glutaraldehyde fixative solution overnight at 4°C, rinsed with PBS buffer for 10 min for 4 times, and dehydrated for 15-20 min with five concentration gradients (30%, 50%, 70%, 80%, 90%) ethanol. The dehydrated samples were soaked in isoamyl acetate for 10-20 min, and then processed with carbon dioxide drier. Finally, the samples were treated by spray-gold [24]. The samples were observed under a SEM. Molecular Characterization Mycelia of isolate R23Bo were collected from colonies cultured at 25 • C in darkness for 5 days. DNA was extracted from the mycelia using the cetyl trimethylammonium bromide (CTAB) method [25]. The generic primers of ITS (eukaryotic ribosomal DNA) (ITS1: TCCGTAGGTGAACCTGCG; ITS4: TCCTCCGCTTATTGATATGC) were used in PCR amplification. PCR procedure was conducted as follows: 94 • C for 4 min; 94 • C for 30 s, 55 • C for 30 s, 72 • C for 30 s, 35 cycles; 72 • C for 10 min. The PCR products were separated in 1.5% agarose gel and collected and purified using the agarose gel DNA extraction purification Kit (Takara, Dalian, China). The amplified fragments were sequenced by the AuGCT company (Beijing, China). Phylogenetic Analysis The sequences of five species in genus Cladosporium were retrieved from GenBank (Table 1), and aligned using software MEGA7.0.26 (https://www.megasoftware.net/ (accessed on 5 May 2022)) [26]. Phylogenetic analysis was conducted using the neighborjoining (NJ) method, and bootstrap analysis was conducted to determine the robustness of branches using 1000 replications. Pathogenicity and Hyperparasite Tests The inoculations of Pst were performed following the previously described methods [21]. Briefly, wheat plants (cv. Fielder) grown in a greenhouse for 20 days were first inoculated with urediospores of Pst race CYR31 collected from Su11 wheat. The collected urediospores of Pst race CYR31 were diluted with water to 20 mg·mL −1 and inoculated by brush. The Pst-inoculated plants were incubated in a dew chamber at 12 • C in dark for 24 h, and then grown in a growth chamber at 16 • C with 16 h light photoperiod. Three, five, seven and nine days after Pst inoculation, the plants in different pots were inoculated with the conidian suspension (1.0 × 10 6 spores/mL) of C. cladosporioides isolate R23Bo, kept in a dew chamber at 16 • C in dark for 24 h, and then returned to the growth chamber for growth under the same conditions. Plants inoculated only with Pst urediospores were used as a control. Fourteen days after Pst inoculation, symptoms and signs were recorded and yellow colored uredinia were counted using pictures analyzed with Image J number counting software (National Institutes of Health, 1.48u, Bethesda, MD, USA). Samples for SEM observation were collected at 3, 5, 7, and 9 days after hyperparasite inoculation (dai). Microscopic observations were conducted using a SEM. 
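The inoculum concentrations given above (urediospores suspended at 20 mg/mL and a conidial suspension of 1.0 × 10^6 spores/mL) imply routine suspension arithmetic. The sketch below shows one standard way to do this from a haemocytometer count; the chamber constant and all numeric values are generic assumptions, since the paper does not describe how the suspensions were standardized.

# Haemocytometer arithmetic for preparing a conidial suspension at a target
# concentration such as 1.0e6 spores/mL. All numbers are placeholders.
def spores_per_ml(mean_count_per_large_square: float, dilution: float = 1.0) -> float:
    # One large Neubauer square holds 0.1 uL, so count x 1e4 gives spores per mL
    return mean_count_per_large_square * 1e4 * dilution

def dilution_to_target(stock_conc: float, target_conc: float, final_volume_ml: float):
    # Volumes of stock suspension and diluent needed to reach the target concentration
    stock_volume = target_conc * final_volume_ml / stock_conc
    return stock_volume, final_volume_ml - stock_volume

stock = spores_per_ml(mean_count_per_large_square=450)   # 4.5e6 spores/mL
v_stock, v_water = dilution_to_target(stock, target_conc=1.0e6, final_volume_ml=50.0)
print(f"stock: {stock:.2e} spores/mL -> mix {v_stock:.1f} mL stock with {v_water:.1f} mL water")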
Genomic DNA was extracted from the infected leaf tissue of the samples to determine the fungal biomass (Pst DNA/wheat DNA ratio) at 3, 5, 7 and 9 dai. Quantitative PCR (qPCR) was performed in a CFX96 Connect Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA) to determine the Pst DNA content in the infected wheat leaves using TB Green Premix DimerEraser (Perfect Real Time) (TaKaRa, Dalian, China). The fungal Pst-EF1 (Pst elongation factor 1) and wheat TaEF1-α (wheat elongation factor 1 alpha) fusion plasmids were diluted into a concentration series (10³, 10⁴, 10⁵, 10⁶, 10⁷, 10⁸ and 10⁹ fmol·cotL⁻¹) for generation of the standard curves (Figure S1). Wheat-EF1 primers (F: TGGTGTCATCAAGCCTGGTATGGT; R: ACTCATGGTGCATCTCAACGGACT) and Pst-EF1 primers (F: TTCGCCGTCCGTGATATGAGACAA; R: ATGCGTATCATGGTGGTGGAGTGA) were used for the qPCR analysis. The experiment included three independent biological replicates. The Pst urediospore germination assay was performed following previously described methods [21] with some modifications. Freshly collected hyperparasitized urediospores were incubated in sterile water at 9 °C for 6 h and then placed on slides to count the number of germinated urediospores using an Olympus BX51T-32P01 optical microscope (Tokyo, Japan). A germ tube reaching one-half of the spore diameter was defined as germination. The germination rate was calculated as the number of germinated urediospores per 100 urediospores. One hundred urediospores were selected randomly, and all experiments were performed three times.
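The germination rate defined above (germinated urediospores out of 100 scored, across three replicates) and the relative fungal biomass (Pst DNA relative to wheat DNA, each read off its own qPCR standard curve) reduce to simple arithmetic, sketched below. The spore counts, curve slopes, intercepts and Ct values are illustrative assumptions, not measured values from this study.

import statistics

# Germination rate: germinated urediospores out of 100 scored per replicate.
germinated_counts = [62, 58, 65]                      # three biological replicates (placeholders)
rates = [count / 100 for count in germinated_counts]
print(f"germination rate: {statistics.mean(rates):.2%} "
      f"+/- {statistics.stdev(rates):.2%} (mean +/- SD, n = {len(rates)})")

# Relative fungal biomass: ratio of Pst to wheat DNA, each quantified from its own
# qPCR standard curve (Ct = slope * log10(quantity) + intercept).
def quantity_from_ct(ct: float, slope: float, intercept: float) -> float:
    return 10 ** ((ct - intercept) / slope)

pst_quantity = quantity_from_ct(ct=24.1, slope=-3.32, intercept=38.0)
wheat_quantity = quantity_from_ct(ct=18.7, slope=-3.40, intercept=36.5)
print(f"Pst DNA / wheat DNA ratio: {pst_quantity / wheat_quantity:.3f}")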
According to the above morphology characteristics, the hyperparasite was identified as Cladosporium cladosporioides (Fresen.) G.A. de Vries. Molecular Characterization of the C. cladosporioides Isolate A neighbor-joining (NJ) tree was constructed for with Cladosporium species based on the internal transcribed spacer (ITS) sequences using software MEGA7 (Figure 4). The Cladosporium species used for the phylogenetic analysis are provided in Table 1. Our isolate R23Bo was most closely related to isolate QTYC16 of C. cladosporioides previously isolated from Pantala flavescens larvae, but not closely related to 14PI001, an isolate of C. cladosporioides previously isolated from Pst (Figure 4) Isolate. Molecular Characterization of the C. cladosporioides Isolate A neighbor-joining (NJ) tree was constructed for with Cladosp the internal transcribed spacer (ITS) sequences using software M Cladosporium species used for the phylogenetic analysis are provid late R23Bo was most closely related to isolate QTYC16 of C. cladosp lated from Pantala flavescens larvae, but not closely related to 14 cladosporioides previously isolated from Pst (Figure 4) Isolate. Confirmation of the C. cladosporioides Isolate Parasitizing Pst The hyperparasitic ability of isolate R23Bo of C. cladosporioide inoculation of wheat plants with Pst and the isolate. Wheat leave the conidian suspension of isolate R23Bo did not show any symp infection ( Figure 5A). When inoculated with only the Pst urediospo colored uredinia with urediospores formed on the inoculated lea lation (dpi) ( Figure 5B). When wheat leaves were inoculated wit lowed by inoculation with the conidia suspension of the C. cladospo days after Pst inoculation, Pst yellow-colored uredinia were cha 5A,C-F). The longer C. cladosporioides grew together with Pst, the or the more taupe pustules. When the number of yellow uredinia were counted using Im software, 12 days after Pst inoculation, the number of yellow ured the lowest in the treatment of C. cladosporioides 9 days after Pst in Confirmation of the C. cladosporioides Isolate Parasitizing Pst The hyperparasitic ability of isolate R23Bo of C. cladosporioides was confirmed by co-inoculation of wheat plants with Pst and the isolate. Wheat leaves inoculated only with the conidian suspension of isolate R23Bo did not show any symptoms or signs of fungal infection ( Figure 5A). When inoculated with only the Pst urediospores suspension, yellow colored uredinia with urediospores formed on the inoculated leaves 12 days post inoculation (dpi) ( Figure 5B). When wheat leaves were inoculated with Pst urediospores followed by inoculation with the conidia suspension of the C. cladosporioides isolates different days after Pst inoculation, Pst yellow-colored uredinia were changed to taupe ( Figure 5A,C-F). The longer C. cladosporioides grew together with Pst, the fewer yellow uredinia or the more taupe pustules. At 36 hai, C. cladosporioides conidian produced germ tubes which contacted with P diospores and then grew into the urediospores ( Figure 7B). The parasitic fungus inside and produced hyphae and conidiophores from the urediospores ( Figure 7C it completely destroyed the urediospores at 120 h after the parasite treatment ( 7C,D). When the number of yellow uredinia were counted using ImageJ number counting software, 12 days after Pst inoculation, the number of yellow uredinia per cm 2 leaves was the lowest in the treatment of C. cladosporioides 9 days after Pst inoculation ( Figure 6A). 
R23Bo-strain-treated pustules showed impact the production of urediospores, and the fertility of spores is seriously affected, as is obviously exhibited in that the ratio of the spores germination reduces by 65% at 3 dpi and 80% at 5 dpi ( Figure 6B). The biomass of the Pst, measured by the Pst DNA/wheat DNA ratio, decreased as the treatment with C. cladosporioides lengthened ( Figure 6C). The results showed that isolate of C. cladosporioides is able to parasitize Pst, leading to the reduction in Pst urediospore production. Biology 2022, 11, x 9 of 13 SEM observation further illustrated that isolate R23Bo could efficiently parasitize Pst. At 36 hai, C. cladosporioides conidian produced germ tubes which contacted with Pst urediospores and then grew into the urediospores ( Figure 7B). The parasitic fungus grew inside and produced hyphae and conidiophores from the urediospores (Figure 7C), and it completely destroyed the urediospores at 120 h after the parasite treatment ( Figure 7C,D). SEM observation further illustrated that isolate R23Bo could efficiently parasitize Pst. At 36 hai, C. cladosporioides conidian produced germ tubes which contacted with Pst urediospores and then grew into the urediospores ( Figure 7B). The parasitic fungus grew inside and produced hyphae and conidiophores from the urediospores ( Figure 7C), and it completely destroyed the urediospores at 120 h after the parasite treatment ( Figure 7C,D). Discussion The identification of new hyperparasites is useful to understanding the biodiversity of mycoparasites, and it provides the potential to develop new strategies for biological control of plant diseases [13,32]. In the present study, we isolated and identified a fungal isolate from Pst uredinia. The hyperparasitic isolate is able to reduce Pst infection. Furthermore, the isolate can reduce Pst urediospore production and viability. Thus, the isolate has a potential value in biological prevention of wheat stripe rust. , Conidia of C. cladosporioides on the surface of a Pst urediospore (×2000) at 12 hai (hours after inoculation); (B), at 36 hai, a C. cladosporioides conidium generated a germ tube (×4000); (C), at 72 hai, t C. cladosporioides produce conidiophores from the Pst urediospore (×3000); (D), at 120 hai, C. cladosporioides has completely colonized the Pst urediospore and the urediospore is destroyed (×1500). Discussion The identification of new hyperparasites is useful to understanding the biodiversity of mycoparasites, and it provides the potential to develop new strategies for biological control of plant diseases [13,32]. In the present study, we isolated and identified a fungal isolate from Pst uredinia. The hyperparasitic isolate is able to reduce Pst infection. Furthermore, the isolate can reduce Pst urediospore production and viability. Thus, the isolate has a potential value in biological prevention of wheat stripe rust. In the morphological identification, the spore size is an important classification criterion of Cladosporium spp. As the spore dimensions of the most species in the genus overlap, it is difficult to identify species of Cladosporium using only morphological characters, especially the size of conidian [28]. Using only ITS sequences is also not reliable to identify Cladosporium spp. [33]. In the present study, we use ITS sequence analysis and morphological features to identify the hyperparasitic isolate as C. cladosporioides. Several fungal species have been reported to parasitize Pst, including C. cladosporioides [22]. 
However, the ITS sequence analysis showed that the C. cladosporioides isolate obtained in this study is clearly different from the isolate reported by Zhan et al. [22]. It is interesting that our isolate is most closely related to a C. cladosporioides isolate obtained from Pantala flavescens larvae [29]. This relationship may suggest that the isolate we obtained from Pst uredinia may have other hosts to parasitize and/or natural substrates to grow on.

Biocontrol strategies have the potential to prevent and treat plant diseases under environmentally friendly conditions. Some studies have been conducted to explore hyperparasites to control rusts. For example, Cladosporium spp. were found to parasitize Melampsora spp. [34]. Several fungal species, including C. cladosporioides, were identified as hyperparasites of Pst [18,21,22]. The isolate R23Bo of C. cladosporioides identified in the present study is able to reduce or stop the growth of Pst urediospores by growing into uredinia. The isolate is fast-growing and easy to culture. Together with its parasitic ability, these traits make R23Bo a candidate that could be developed into a biocontrol agent for managing wheat stripe rust.

Stripe rust is initiated by urediospore infection of host plants and continually develops by producing more urediospores and, consequently, more infections. Therefore, reducing urediospore production is crucial for combating the rust disease. In the present study, we observed that after the inoculation of Pst urediospores and C. cladosporioides conidia on wheat leaves, urediospores were first produced on the wheat leaves, and then C. cladosporioides began to grow on the urediospores. The exact invasion or parasitism stage cannot be determined at present; it is only clear that C. cladosporioides parasitizes Pst at its sporulation stage. In order to develop the isolate as a biocontrol agent, further studies should be conducted on its effects on other plants, humans, animals, and the environment, as well as on methods for producing and applying the biocontrol agent.

Conclusions

Identification of parasites infecting cereal pathogenic fungi is essential for developing biological control strategies for managing plant diseases. In this study, we report the discovery of a fungal strain isolated from Pst. Through molecular and morphological characterization, we identified the hyperparasitic fungus as the species Cladosporium cladosporioides. We demonstrated that the fungus was able to parasitize the obligate biotrophic rust fungus. Our experiments showed that Cladosporium cladosporioides was able to impair Pst sporulation and reduce urediospore germination. Collectively, Cladosporium cladosporioides may be harnessed for controlling stripe rust, and these results shed new light on biological control agents for managing plant pathogens. The present study identified Cladosporium cladosporioides as a new hyperparasite of Pst. Although the fungus has potential value as a biological control agent against stripe rust, additional research is needed to determine whether the hyperparasite is environmentally safe and to explore its potential to control other rust pathogens.
Improvement of land management mechanisms for specially protected areas and objects by the example of the Khabarovsk Territory

The Russian Federation possesses significant natural areas whose legal status is often ambiguous and whose natural recreational and tourist potential is not fully utilized. Specially protected natural areas are often located on land plots that fall into different categories of land, which entails various management issues that require optimal solutions. The article presents material concerning the process of transforming forest lands into lands of other categories in the Russian Federation, with the aim of developing and improving the system of state forest management and ensuring the fulfillment of the country's international obligations in the field of environmental protection. This process is considered by the example of the protected area "Shantar Islands" in the Khabarovsk Territory. The practical significance lies in the fact that the developed proposals for the spatial development of the protected area provide for the creation of a tourist center.

Introduction

Recently, interest in the problem of rational land use and land management in specially protected natural areas (hereinafter SPNA) has grown significantly, the range of studies has expanded, and a number of works have been published. In domestic and foreign scientific research, more and more attention has been paid to the problems of land use efficiency and land management. Nevertheless, the issues of the effectiveness of land management in protected areas and their registration in the Unified State Register of Real Estate (hereinafter USRN) are not sufficiently disclosed and require further development. The solution to this problem is especially important for the development of land management science in general, as it includes a study of the role of the USRN and land monitoring in information support for rational land use. The object of research in this work is the specially protected natural areas of the Khabarovsk Territory. The subject of the research is effective mechanisms of land management in protected areas. In the process of research, the Laws of the Russian Federation, Decrees of the President of the Russian Federation and Resolutions of the Government of Russia, regulatory documents of the Khabarovsk Territory, cadastral, reporting, economic and statistical information, and special scientific literature were used. It should be noted that the works of scientists and researchers from different parts of the world are devoted to the issues of optimal organization of specially protected areas [1][2][3][4].

Materials and methods

The process of converting forest lands into lands of other categories is a traditional and effective form of environmental protection and land management in the Russian Federation. Development and improvement of the system of state management of forest lands in this area ensure the fulfillment by the Russian Federation of its international obligations in the field of environmental protection. According to the decree of the Government of the Khabarovsk Territory "On the approval of the territorial planning scheme of the Khabarovsk Territory" dated 10.07.2012 No.
232-pr [5], it is possible to organize an ecological network of protected areas in the territory of the region, without damaging the region's main forest and agricultural activities, through the transition to new forms of protected areas and an increase in the area of protected areas with a special protected status to up to 10 percent of the territory of the region.

The theoretical and methodological basis of the study is the fundamental laws of the development of nature and society and the works of domestic and foreign scientists in the field of cadastre, land management, and the legal and economic regulation of land relations and land management in regions and municipalities, as well as their information support. The article uses economic-statistical, monographic, abstract-logical and experimental research methods.

Research on changing the category of a land plot from the category of forest lands to the category of lands of specially protected areas and objects

This process is considered by the example of the protected area "Shantar Islands" in the Khabarovsk Territory. The territory of the SPNA National Park "Shantar Islands" was formed on the lands of the forest fund of the Chumikanskoye forestry of the Shantar basin in accordance with the Decree of the Government of the Russian Federation dated December 30, 2013 No. 1304 "On the establishment of the Shantar Islands national park"; the forestry scheme is shown in Figure 1. The island complex exists in harsh conditions and is easily vulnerable even to a small anthropogenic impact. The park was created to preserve a rare combination of various species of plants and animals, unique in terms of abundance. In order to preserve the unique nature and improve the management system of this SPNA, it is required to change the category of the land plot from the category of forest lands to the category of lands of specially protected areas and objects. In accordance with the Federal Law of the Russian Federation "On the transfer of land or land plots from one category to another" of 21.12.2004 N 172-FL [6], to initiate the transfer procedure the interested person submits the required documents to the authorized executive body of the constituent entity of the Russian Federation (the Ministry of Natural Resources of the Khabarovsk Territory) [7]. A schematic representation of the transfer procedure in the absence of document returns is shown in Table 1.

- coordination of the activities planned on the transferred land plot with the executive authorities or copyright holders of objects on such a plot, in cases stipulated by federal laws;
- documentation confirming the state or municipal importance of the object, if the transfer of a land plot belonging to the lands of the forest fund under protective forests is carried out for the placement of such an object;
- a document confirming the organization of protected areas, in the event that the transfer of a land plot related to the lands of the forest fund occupied by protective forests is carried out for the organization of such protected areas;
- the scheme of the object located on the land plot, drawn up taking into account the territorial planning documents approved in accordance with the requirements of the legislation on urban planning activities, and agreed with the architecture and urban planning authorities.

Land transfer issues.
In the case of the transfer of the protected area of the Shantar Islands National Park, this territory as a real estate object was formed by creating a structural unit, the forestry of the Shantar Islands National Park, with cadastral number 27:15:0001201:17. The information was entered in the USRN on 12/13/2017; earlier, this territory was part of the Shantar basin of the Chumikansky forestry. Figure 3 shows the boundaries of the islands of the multi-contour land plot [9]. It should be noted that during the formation of this land plot, its boundaries were determined by the cartometric method in the MSK-

Improving the efficiency of land management in protected areas

The specifics of determining the efficiency of land management in protected areas lie in the fact that indicators of economic efficiency cannot always characterize the result of managing these lands, since the purpose of creating and using lands in this case is to preserve natural diversity, flora and fauna [10]. When recreational and other resources are used, economic, ecological, organizational-technological and social effects can be obtained, each of which can be characterized by a system of indicators (Table 2).

Table 2. System of indicators of the use of land resources of protected areas, by type of effect:
- Economic effect: investment value of the land plot; costs for the formation of the site; recoupment of costs for the preservation of the natural area; labor intensity and cost of work; net income; differential income; lost profit; loss of production [9].
- Ecological effect: ecological diversity; number and area of land contours per 1 hectare; number and species composition of woody vegetation; length of ecotones per 1 km²; number and average size of ecologically sustainable areas by type of land (units, ha); coefficient of forest cover of the territory; indicators of the territorial distribution of linear elements; amount of soil washout (t/ha); amount of precipitation runoff; amount of losses of humus and nutrients (t/ha); soil compaction (g/cm³); capital expenditures for environmental protection measures; annual costs of maintaining environmental structures [10].
- Social effect: population growth; reduction in the incidence of disease in the population; increase in the life expectancy of the population; employment growth (number of jobs); regional infrastructure development.
- Organizational and technological effect: reduction in the cost of land surveying, state cadastral registration and registration of rights to real estate; transfers to the budget from the collection of fines for violations of the environmental management regime.

The overall effect of land management in protected areas (economic, ecological, social) is defined as the sum of the effects of direct use, indirect use, the existence of the protected area, and information support of land use [11], and is represented by the formula:

Em SPNA = E dir + E indir + E ex + E inf,

where Em SPNA is the overall effect of land management in protected areas; E dir is the effect of the direct use of the protected area; E indir is the effect of the indirect use of the protected area; E ex is the effect of the existence of the protected area; and E inf is the effect of information support for land management in protected areas.

Conclusion

The main issues in improving the management mechanisms of protected areas include the territorial organization of land use within the boundaries of protected areas, the establishment of their legal status, and monitoring.
Thus, only after the implementation of improvement measures is it possible to assess their economic efficiency. For example, based on the analysis of the tourism potential of the region, as well as of positive and negative factors, the Territorial Planning Scheme for the Khabarovsk Territory provides, among other measures, for:

- the creation of a system of tourist and recreational zones within large urban agglomerations of the region (Khabarovsk), focused on tourism of a mainly recreational type;
- in the Tuguro-Chumikansky municipal district, the creation of a seasonal center for receiving tourists and the development of tourist infrastructure on the basis of the Shantar Islands National Park.

We believe that after the transfer of the lands of the Shantar Islands National Park, this territory will be able to function fully as a tourist center and bring funds for the development of the regional and federal budgets.
SARS-CoV-2 Testing of Emergency Department Patients Using cobas® Liat® and eazyplex® Rapid Molecular Assays

Rapid testing for Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) of patients presenting to emergency departments (EDs) facilitates the decision for isolation on admission to hospital wards. Differences in the sensitivity of molecular assays have implications for diagnostic workflows. This study evaluated the performance of the cobas® Liat® RT-PCR, which is routinely used as the initial test for ED patients in our hospitals, compared with the eazyplex® RT-LAMP. A total of 378 oropharyngeal and nasal swabs with positive Liat® results were analysed. Residual sample aliquots were tested using NeuMoDx™, cobas® RT-PCR, and the eazyplex® assay. Patients were divided into asymptomatic (n = 157) and symptomatic (n = 221) groups according to the WHO case definition. Overall, 14% of positive Liat® results were not confirmed by RT-PCR. These unconfirmed results were mainly attributable to the asymptomatic group (26.8%), compared to 3.8% in the symptomatic group. Therefore, positive Liat® results were used to provisionally isolate patients in the ED until RT-PCR results were available. The eazyplex® assay identified 62% and 90.6% of RT-PCR-confirmed cases in asymptomatic and symptomatic patients, respectively. False-negative eazyplex® results were associated with RT-PCR Ct values > 30, and were more frequent in the asymptomatic group than in the symptomatic group (38.1% vs. 5.1%, respectively). Both the Liat® and eazyplex® assays are suitable for testing symptomatic patients. Their use in screening asymptomatic patients depends on the need to exclude any infection or identify those at high risk of transmission.

Introduction

Testing patients for Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) infection, regardless of the presence of typical symptoms, is often performed in the emergency department (ED) prior to their hospital admission [1]. Pending test results, patients are kept in the ED before being transferred to a ward. During epidemic waves this places an enormous strain on ED capacities, and if SARS-CoV-2 diagnosis takes several hours, it can lead to delays in appropriate treatment for critically ill patients [2]. Real-time RT-PCR, conducted using extracted RNA, is considered the gold standard in diagnostics because it combines high sensitivity and specificity. Most RT-PCR assays require an average of 2 to 4 h for test results to be available. This can impede timely patient management in the ED when broad screening of all patients is performed during periods of high incidence in the population [1]. If the molecular diagnostic laboratory is not close to the hospital, transport time and logistics further increase the time to diagnostic reporting. Rapid point-of-care (POC) RT-PCR and isothermal amplification systems with minimal hands-on time are more suitable for timely diagnosis, but the associated cost of consumables is higher and a single instrument does not allow for high sample throughput [3][4][5]. Compared to RT-PCR, isothermal amplification assays are less sensitive [6][7][8][9]. This can be a major drawback, despite their robustness and ease of use. However, analytical sensitivity does not necessarily correspond to clinical relevance [10,11]. Weakly positive RT-PCR results with high Ct values in samples from asymptomatic individuals can lead to unnecessary delays in therapy and excessive repeat testing.
Isothermal amplification assays may therefore be a viable alternative for use in clinical settings, where timely identification of acute infections and infectious patients is required [12][13][14]. In our hospitals, the cobas® Liat® system (Roche, Penzberg, Germany), a small instrument for automated RT-PCR testing with a short turnaround time of less than 30 min, has been selected as the primary SARS-CoV-2 screening tool for ED patients [15]. However, due to a significant number of false-positive results during assay validation, it was determined that any positive Liat® result has to be confirmed via RT-PCR for the final diagnostic report, in accordance with the 2021 FDA communication [16,17]. During the 2021/2022 winter epidemic, the diagnostic workflow for samples from the ED consisted of an initial Liat® test with immediate reporting of positive results as preliminary suspect cases, to be confirmed as soon as possible using RT-PCR systems. The eazyplex® SARS-CoV-2 RT-LAMP assay (Amplex Diagnostics, Gars-Bahnhof, Germany), a rapid RNA extraction-free isothermal amplification system that provides results within 30 min, was routinely used as a back-up diagnostic when other assays were in short supply. The aim of this study was to evaluate the utility of the eazyplex® RT-LAMP in comparison to the cobas® Liat® for rapid testing of samples from ED patients with and without symptoms characteristic of acute SARS-CoV-2 infection. Comparative testing was performed on the same swab specimens, and only Copan UTM swabs were used.

Clinical Specimens and Diagnostic Workflow

The samples were oropharyngeal and nasal swabs collected in universal transport medium (UTM-RT MINI swabs, 1 mL, 359C, Copan, distributed by Mast Diagnostica, Reinfeld, Germany) at the ED of the Jena University Hospital and a regional hospital (SHK Weimar) between November 2021 and March 2022. Samples from the Jena University Hospital were tested using the SARS-CoV-2 cobas® Liat® screening assay 24/7 in the microbiology and clinical chemistry laboratory, which is connected to the ED by a pneumatic tube system for rapid sample transfer. Samples from the external hospital in Weimar were tested at the small on-site laboratory using Liat®, and positive specimens were transported to the microbiology laboratory twice a day for confirmatory RT-PCR. Prior to testing, samples were mixed 1:1 with phosphate-buffered saline (PBS, Gibco, Thermo Fisher Scientific, Wesel, Germany). Positive Liat® samples were analysed using RT-PCR during the normal working day, between 7 a.m. and 8 p.m. We used two different RT-PCR systems, the cobas® (Roche) and the NeuMoDx™ (Qiagen, Hilden, Germany), depending on the availability of test kits and technical problems with the instruments. For this study, residual aliquots of all Liat®-positive samples were tested using the eazyplex® RT-LAMP assay within 24 h. Samples were stored at 8 °C until all assays were performed.

SARS-CoV-2 Assays

The cobas® Liat® system is a small device for single-use cartridges with fully automated processing and amplification. For detection of SARS-CoV-2, it utilized a dual-target assay; a positive result was reported without releasing Ct values if either or both of the ORF1 and N target genes were detected. The test run time was 20 min. A total of 200 µL of UTM/PBS was loaded into the cartridge. For the NeuMoDx™ RT-PCR, 700 µL of the sample was loaded onto the NeuMoDx™ 96 Molecular system. The NeuMoDx™ SARS-CoV-2 assay targeted the N and Nsp2 genes.
For the cobas® RT-PCR, 600 µL of the sample was loaded onto the cobas® 6800 instrument. The assay used primers for the E and ORF1ab genes. All RT-PCR assays were performed according to the manufacturers' protocols. For the eazyplex® RT-LAMP assay, 25 µL of UTM/PBS was heated at 99 °C for 2 min before being pipetted into ready-to-use tubes containing 500 µL of resuspension and lysis fluid (RALF). Subsequent testing was performed using a Genie HT instrument (Amplex Diagnostics) according to the manufacturer's protocol. The assay's total runtime was 25 min, but a positive result was reported in real time if the fluorescence level of either or both of the N and ORF8 target genes rose above the threshold.

Data Analysis

Medical records were reviewed to group patients as asymptomatic or symptomatic according to the WHO clinical criteria for COVID-19 cases [18]. Patients were classified as symptomatic if they presented with acute onset of fever and cough or with three or more of the following symptoms: fever, cough, weakness/fatigue, headache, myalgia, sore throat, coryza, dyspnoea, and nausea/diarrhea/anorexia. The performance of the cobas® Liat® and eazyplex® assays was assessed by calculating the positive percent agreement (PPA), with NeuMoDx™ or cobas® RT-PCR defined as the reference. The turnaround times of the different assays were calculated using the times recorded in the laboratory information system when tests were requested and results were released.

Results

In total, 325 of 378 positive Liat® tests were confirmed via RT-PCR (86%). There were only three Liat®-positive samples that were negative using RT-PCR but positive when subsequently tested using the eazyplex® RT-LAMP assay (0.8%). Positive Liat® results that could not be confirmed using either the RT-PCR or the RT-LAMP method were defined as false-positive Liat® results. The false-positive rate of the Liat® assay was high, at 26.8%, in the asymptomatic patient group, while only 3.8% of positive Liat® results in symptomatic patients could not be verified (Table 1). The medical records of 25% of the patients with false-positive Liat® test results contained the information that they had been diagnosed with SARS-CoV-2 infection ≥ 2 weeks previously, indicating that the Liat® assay could produce false-positive results relative to the reference method and the absence of symptoms in the patient, but might detect residual amounts of viral RNA from a previous infection. In this study, the overall sensitivity (PPA) of the eazyplex® was only 62% in asymptomatic patients, but 90.9% in symptomatic patients (Table 2). As RT-PCR Ct values ≥ 30 indicate a low risk of viral shedding, we divided the samples into three groups of high (Ct ≤ 25), intermediate (25 < Ct < 30), and low (Ct ≥ 30) viral load to further evaluate the performance of RT-LAMP compared to RT-PCR (Table 3) [10,19]. Not surprisingly, at high viral loads the sensitivity of the eazyplex® reached 95.2% and 100% in the asymptomatic and symptomatic patient groups, respectively (Table 3). Samples with intermediate viral loads were detected with sensitivities > 80%. In samples with low viral loads, sensitivity decreased to approximately 10%. It is noteworthy that only 5.2% of symptomatic patients, but 38.1% of asymptomatic patients, had a low viral load (Table 3). For quality assessment, we analysed reference standards of the Delta and Omicron variants (INSTAND e.V.) containing approximately 10^5 virus copies/mL, which represented a lower limit of infectivity [20].
As shown in Table 4, the corresponding Ct values for the NeuMoDx™ and cobas ® RT-PCR assays were similar, ranging from 27 to 29. Both standards were also detected via RT-LAMP, with a positive result for at least the N gene. Liat ® results were reported to the ED at a median time of 1.1 h (IQR 0.88-1.42, n = 315) after test requests. Confirmatory RT-PCR testing of positive specimens resulted in a median delay of 2.5 h (IQR 1.88-6.33) for the final diagnostic report. For Liat ® -positive specimens sent from the external hospital to the laboratory for confirmation, the median time from test request to final report was 9 h (IQR 6.25-19.12, n = 63). In the small number of cases in which the eazyplex ® was used as a screening assay in the routine workflow, results were available in a median time of 0.9 h (IQR 0.75-1.25, n = 11). Discussion Rapid molecular testing for SARS-CoV-2 in the ED is critical for timely and appropriate decisions regarding further management and isolation of patients [1,5]. RT-PCR tests are the gold standard for reliable identification of SARS-CoV-2 in symptomatic patients due to their high sensitivity. On the other hand, there are strong arguments that low-positive RT-PCR results in patients without characteristic symptoms are not relevant, either for patient management or for identification of infectivity [10,19]. In this context, it should be noted that widespread testing of asymptomatic individuals is expensive, time-consuming, labor intensive, and generates significant amounts of waste [8]. The cobas ® Liat ® , a sensitive RT-PCR designed for use as a POC test, produced a high rate of positive results in asymptomatic patients that could not be confirmed via reference RT-PCR, consistent with data from previous reports [17,21]. It should be noted that this assay was originally intended for use in symptomatic patient testing [17]. Most of the false-positive results were apparently due to the detection of residual nucleic acid from previous infections. The superior sensitivity of RT-PCR, combined with high-speed amplification of short sequences, may increase the risk of detecting residual fragments due to slow degradation of viral RNA, as shown for SARS-CoV-2, influenza virus, and others [10,22]. The need to confirm a positive Liat ® result not only adds diagnostic cost, but also delays adequate treatment and care of patients, as each case must be managed as a presumptive COVID-19 patient until the final standard RT-PCR result is available. The eazyplex ® RT-LAMP assay was less sensitive than the Liat ® , and reached an overall sensitivity ≥ 90% only for symptomatic (but not for asymptomatic) ED patients. However, the usefulness of a rapid diagnostic test in practice can be assessed differently depending on the corresponding RT-PCR Ct values. There is little doubt that a low positive RT-PCR result does not indicate that a patient is infectious, but viral load cut-offs that accurately discriminate whether or not an individual is producing enough virus for transmission are difficult to define, and Ct values can vary between different assays [19,23]. Interestingly, many studies that have examined the relationship between Ct values and the presence of culturable amounts of virus as a marker of sample infectivity have found similar results [10]. Most studies have reported that Ct values < 30 (or even less) are required for successful growth of the virus from samples in cell culture. 
It has been calculated that one PFU of SARS-CoV-2 corresponds to viral copy numbers between 10^4 and 10^5 [24]. These findings are in good agreement with the detection limits determined for the eazyplex® RT-LAMP assay, and are also consistent with the Abbott ID NOW™ isothermal amplification assay, for which high sensitivity has been reported for samples with Ct values < 30 [5,6]. As shown in several studies, the sensitivity of most rapid antigen tests is significantly lower, in the range of 10^6 to 10^7 virus copies/mL [25]. Therefore, isothermal amplification assays developed for use with crude samples may provide an alternative tool for initial screening of patients when rapid results are needed. In principle, both the Liat® and eazyplex® assays can be used to identify SARS-CoV-2 in patients with acute respiratory symptoms. In contrast to Liat®, a positive eazyplex® result does not require confirmation due to the high specificity of the assay, as previously demonstrated [6]. On the other hand, eazyplex®-negative samples need to be subsequently tested using RT-PCR. Depending on the actual prevalence, such a workflow may result in a significant additional workload and cost. However, it can be assumed that patients with acute respiratory illness in the ED who require hospitalization will generally be further tested using multiplex RT-PCR for different respiratory pathogens if the SARS-CoV-2 test is negative. When screening asymptomatic patients, a negative Liat® result almost rules out infection, but there is a significant rate of false-positive results, leading to an additional workload for verification to avoid unnecessary isolation of the patient. Use of the eazyplex® assay cannot rule out infection, but it can identify those patients most likely to be infectious. The reduced sensitivity of a diagnostic assay can be problematic if the patient is at a pre-symptomatic stage [23]. Therefore, when RT-LAMP is used to screen asymptomatic patients, retesting for SARS-CoV-2 by RT-PCR must always be included in the differential diagnosis if the patient develops symptoms after hospital admission. This is also essential in cases where a patient who tested positive with Liat® but whose positive result was not confirmed by RT-PCR becomes symptomatic. In conclusion, both rapid molecular assays are useful tools for the diagnosis of acute SARS-CoV-2 infection in high-priority patients. Figure 1 summarizes a diagnostic workflow that could be proposed when both assays are combined for rapid diagnosis of SARS-CoV-2 infection, regardless of the patient's symptoms. An initial screening using Liat® rules out infection if the test result is negative. Positive Liat® results are tested via eazyplex®. A positive eazyplex® result confirms that the patient is infected and infectious, and does not require further testing via RT-PCR. Their ease of use and short turnaround time allow both assays to be performed directly in the emergency department or in a satellite laboratory in the field; only specimens with a positive Liat® result and a negative eazyplex® result need to be retested via RT-PCR.
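This two-step triage can be written down compactly. The sketch below is only an illustration of the decision flow just described, under simplifying assumptions (test results reduced to "positive"/"negative" strings, and report wording of our own choosing); it is not a validated clinical decision tool and is not part of the published workflow.

```python
# Minimal sketch of the Liat/eazyplex triage logic summarized above (cf. Figure 1).
# Result labels and report strings are illustrative assumptions only.
from typing import Optional

def triage(liat_result: str, eazyplex_result: Optional[str] = None) -> str:
    """Return the next diagnostic step for an ED sample."""
    if liat_result == "negative":
        return "no SARS-CoV-2 infection suspected; no further testing"
    # Any positive Liat result is re-tested with the eazyplex RT-LAMP.
    if eazyplex_result == "positive":
        return "infection confirmed, patient considered infectious; no RT-PCR needed"
    # Liat-positive but eazyplex-negative samples go to confirmatory RT-PCR.
    return "discordant result; confirm by RT-PCR before the final report"

if __name__ == "__main__":
    for liat, eazy in [("negative", None), ("positive", "positive"), ("positive", "negative")]:
        print(liat, eazy, "->", triage(liat, eazy))
```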
Informed Consent Statement: This study was conducted in the context of routine diagnostics. Patient consent was waived because collected clinical data were anonymized, and only residual samples were used for confirmatory testing.

Data Availability Statement: The dataset analyzed in this study is available from the corresponding author upon reasonable request.
Integrating Local Wisdom "Nyucikeun Diri" for Character Education of Generation Z in Purwakarta

The study explores the impact of integrating Nyucikeun Diri, a Sundanese local wisdom, into character education for Generation Z in Purwakarta. Using social construction theory, the research illustrates how externalization, objectivation, and internalization processes contribute to developing religious character among youths. Findings suggest significant behavioral improvements, underlining the efficacy of culturally grounded character education.

Introduction

The term character education combines the words education and character. Education refers to the process of forming character, while character is defined as the result obtained through the education process [1]. Law of the Republic of Indonesia Number 20 of 2003 concerning the National Education System (UU Sisdiknas) formulates the functions and objectives of national education, which must be used in developing educational efforts in Indonesia. Article 3 of the National Education System Law states, "National education functions to develop and shape the character and civilization of a dignified nation in order to educate the life of the nation, aiming to describe the potential of students to become human beings who believe in and are devoted to God Almighty, have noble character, are healthy, knowledgeable, capable, creative, independent, and a democratic and responsible citizen." Susilo (2013) [2] said that student moral decadence has reached an alarming stage: brawling and violations of ethics, morals and law, ranging from minor to serious, are often shown by students. Every day we see a great deal of news about the behavior of Generation Z, most of whom are students, that deviates from moral values and religious teachings.

Generation Z is the generation born between 1996 and 2012 [3]. Generation Z, as the nation's next generation, must be prepared to have good character and noble morals. Generation Z lives in an era of advanced information and communication technology. Almost all teenagers use technological devices in their lives, such as smartphones, computers, the internet and others. This has both positive and negative impacts. The positive impacts include that they can obtain useful information and knowledge; on the other hand, there are also negative impacts, such as access to pornographic sites, playing online games until they lose track of time, and playing on gadgets until they forget to study and worship. Apart from the influence of technological advances, Generation Z's behavior can also be influenced by the social environment, family upbringing, the school environment and the playground.

Therefore, we need a way for education to show its role in strengthening national identity. One approach that has been implemented in recent years is character education based on local wisdom. This is an effort to prepare Generation Z in the era of globalization by building character and love for the cultural values of local wisdom. In Purwakarta, the government is implementing character education based on local wisdom through the 7 Poe Atikan Istimewa (Seven Days of Special Education) program. The purpose of this study is to analyze how Nyucikeun Diri, a local wisdom concept, can be utilized in character education to shape the behavior of Generation Z in Purwakarta. This research is significant as it provides empirical evidence on the efficacy of local wisdom in shaping the character of Generation Z, offering valuable insights for educators and policymakers.
Methods

The methodology used is a case study method with a qualitative approach. With this qualitative approach, data collection is carried out directly in the field through observations, in-depth interviews with informants, and documentation studies related to the research problem. After the data are collected, data reduction is carried out, followed by data analysis and the drawing of conclusions as the results of the research.

Character Building

National character education is an effort made by the state (government), society, family and educational units to make Indonesian people a nation with noble character. Good character is correct behavior in life that is in accordance with the philosophy and noble values of the Indonesian nation. Noble character is good behavior in relationships with God Almighty, fellow humans, nature and the living environment, the nation and state, and with oneself.

Suyanto (2010:1) [8] said that character education is "character education plus", which involves aspects of knowledge (cognitive), feeling, and action. Character education includes how a person knows good things, has the desire to do good things, and carries out good things based on thoughts and feelings about whether something is good to do or not, and then does it. These three aspects can provide direction and good moral life experiences, and provide maturity in behavior.

This character education is an effort made by the government to shape and strengthen the nation's character so that it always behaves in accordance with the values and norms that exist in Indonesia. This is in accordance with the aim of education in Indonesia, namely that all students, most of whom are Generation Z, have the character of being devoted to God Almighty, working together, thinking creatively and having an insight into diversity based on Pancasila. In its implementation, character education must be carried out comprehensively by all parties, including schools, families, the environment and society in general. The character formation of Generation Z will be successful if there is collaboration among all parties.

The shift in values that is occurring and the rapid onslaught of other cultures and technology must be balanced by Generation Z's readiness to accept it all. Strong character will make Generation Z able to absorb values, culture, technology and change well. With this character education, it is hoped that Generation Z will have the ability to think (intellectual), exercise the heart (honesty and responsibility), exercise the body (kinesthetic) and exercise initiative (creativity and caring). This is also what we want to apply to students, some of whom are Generation Z in Purwakarta, with character education based on religious values and local wisdom.

Local Wisdom

Rahyono (2009) [9] said that local wisdom is human intelligence possessed by certain ethnic groups, obtained through community experience. These local values can be local knowledge, local skills, local intelligence, local resources, local social processes, local norms and ethics, and local customs. Local wisdom values can be used as the basis for character education in schools; cultural values that are considered good in the form of local wisdom are used as educational material or sources.

Generation Z is the technology generation. They were born between 1996 and 2012, when information and communication technology in the world had developed greatly. This situation is certainly very beneficial for Generation Z, because from an early age they have become accustomed to the digital world. Some call Generation Z the i-generation or internet generation because they have been very familiar with the internet and gadgets since they were born. Moreover, the Covid pandemic means that most Generation Z students cannot avoid using the internet at home, at school or in games.
On the one hand, the sophistication of information, technology and communication in the lives of Generation Z can influence their character, and this can have both positive and negative impacts. One example of a positive impact is that they can be more creative and find it easier to obtain information. On the other hand, the negative impact of this technological progress is a change in the character of Generation Z that is not in accordance with cultural and religious values. For example, they become individualistic; they are more indifferent towards the situation around them; they carry out bullying on social media; and some are even exposed to pornographic content. To anticipate the negative impact of technological progress on Generation Z, it is necessary to build strong character and instill good religious values, so that they can take advantage of advances in technology and information to achieve a better life.

Social Construction of Generation Z Behavior

Currently, the majority of students in educational institutions (schools) are Generation Z. This certainly presents its own challenges for the government and schools in thinking about which concepts are suitable to be applied in the education process, bearing in mind that Generation Z lives in a different situation from previous generations; if schools continue to apply the same learning methods, they will not suit the needs of Generation Z, who live in this era of technological and communication advances. One thing that cannot be avoided is the use of technology in the learning process. Technology can have positive and negative impacts, so to anticipate the negative impacts, character education is needed. This aims to prevent Generation Z from being lulled into traits that deviate from cultural and religious values.

In Purwakarta, the government implements character education that instills local cultural and religious values. The character education program 7 Poe Atikan Istimewa (Seven Days of Special Education) of Purwakarta consists of education carried out seven days a week with six educational themes, where students learn about values linked to the theme of each day, while still paying attention to integration between subjects. This thematic education is in accordance with the conditions of Generation Z, who prefer learning that is dynamic, varied and not boring; a different theme every day makes the learning process more fun. Even though the learning theme at these schools is different every day, the learning is in fact comprehensive, meaning that every day the learning emphasizes one theme, but other themes are still included. For example, on Monday the theme of the Indonesian archipelago continues to be instilled with nationalist values, but religious learning is also provided, such as the habit of sunnah fasting on Monday and reading the Asmaul Husna before starting lessons.
The implementation of character education for the 7 Poe Atikan Istimewa (Seven Days of Special Education) can be described as follows:

1) Monday, ajeg Nusantara (Patriotism). Ajeg in Indonesian means upright, and the archipelago (Nusantara) is the stretch of territory from Sabang to Merauke. Learning on Monday aims to foster a sense of nationalism in students. Students are taught to get to know the culture, customs and natural wealth of the archipelago, all of which are inserted into the learning curriculum in Indonesian.

2) Tuesday, mapag buana (Adapting to the Progress of Time). Mapag means to pick up and buana means the world, so literally it means preparing students to meet the arrival of an increasingly modern world civilization. In this theme, students are accustomed to being literate in technology. Learning is carried out by introducing foreign cultures around the world, using information technology devices and sources from the internet. On Tuesdays, students are also accustomed to using international languages.

3) Wednesday, maneuh di sunda (Internalizing Sundanese Culture). Maneuh means to stay, and Sundanese is the culture that inhabits the land of Padjadjaran. On this theme, students are accustomed to recognizing and preserving Sundanese cultural values in everyday life, starting from how to dress, the language used and the games played. The goal is for students to maintain their identity as Sundanese people and live with Sundanese values in the midst of today's progress.

4) Thursday, nyanding wawangi (Spreading Love and Affection). Learning on Thursday is intended to develop students' creativity, especially outside of academics. They are free not to wear school uniforms and may bring musical instruments or anything related to their positive hobbies. They are given a space to express themselves according to their talents and interests.

5) Friday, nyucikeun diri (Purifying Oneself). This religious habit every Friday is intended to instill religious values in students who are Generation Z so that they have religious intelligence that can bring them closer to Allah, so they are not only equipped with worldly knowledge but also with religious knowledge.

With this character lesson about religion, it is hoped that Purwakarta students who are part of Generation Z will have religious character and behavior patterns. They are expected to become a generation of future national leaders who have intellectual intelligence, emotional intelligence and spiritual intelligence. Overall, the character that Generation Z is expected to internalize is that of a person who has faith and is devoted to Allah, loves the country, has a global/world outlook, continues to preserve local culture, remains creative and has an artistic spirit, and always loves family.
Conclusion

Based on the results of the discussion, it can be concluded that the character formation of Generation Z can be carried out through character education at school, because the majority of Generation Z are students. The 7 Poe Atikan Istimewa Program (Seven Days of Special Education) is character education based on local wisdom and religion, implemented at the basic education level in Purwakarta. The construction/formation of Generation Z's behavior begins with the socialization of character education carried out by the education department in schools; the schools then pass the information on to students and to parents through the school committee, and the implementation of character education is carried out comprehensively both at school and at home. After this socialization, students are habituated to the behavior and character that they must carry out in accordance with the 7 Poe Atikan (Seven Days of Special Education) theme of each day. The character emphasis examined in this research is religious character, which is cultivated through the "nyucikeun diri" program. This character cultivation is carried out by conducting socialization programs and habituating behavior, ultimately producing a Generation Z with religious behavior and character.

The integration of Nyucikeun Diri into character education effectively shapes Generation Z's behavior in Purwakarta. This case study underscores the potential of local wisdom in enhancing educational outcomes, suggesting avenues for incorporating cultural values into broader educational frameworks.
Grass Meal Acts as a Probiotic in Chicken

Probiotics can act as an alternative to antibiotics in animal feeding, but their use is minimal due to their expensive production. Dry grass is rich in bacteria beneficial for animal feeding and can be used as a probiotic. However, data about the quantitative dependence of the grass microbiome on environmental factors and seasons remain insufficient for preparing "grass-meal-based probiotics". Four grass samples were collected in two geographically remote regions of Russia; their microbiome was characterized by metagenomic sequencing of 16S rDNA libraries and microbiological seeding, and biological testing of the grass meal was carried out on 6 groups of birds containing 20 Ross 308 cross broilers each for a period of 42 days. The positive control group (PC) obtained 16–25 mg/mL toltrazuril (a coccidiostatic agent) and 0.5 mL/L of the liquid antibiotic enrostin (100 mg/mL ciprofloxacin and 10^6 MU/mL colistin sulfate in the commercial preparation) within the drinking water, while the negative control group (NC) obtained no medicines. Four experimental groups were fed the diet supplemented with 1% grass meal over the period of 7–42 days of life; no commercial medicines were used here. A spontaneous infection with Eimeria was registered in the NC control group, which caused the loss of 7 chickens. No losses were registered in the PC group or in two of the experimental groups. In the two other experimental groups, losses from coccidiosis amounted to 10% and 15%, respectively. All specimens of the grass meal demonstrated a significant effect on the average body weight gain compared to NC. Taken together, these observations support the hypothesis that the grass meal may substitute for toltrazuril in protecting the chickens from parasitic invasion and increase average daily weight gain (ADG) as effectively as the antibiotic enrostin.

Introduction

Currently, poultry farming is the principal source of meat worldwide, providing the most available source of valuable protein [1]. Intensive development of four-line cross systems in chickens (e.g., the Cobb 500 and Ross 308 fast-growing breeds) and improvements in cage, ventilation, climatic, feed-distributing, and waste management facilities over the last 6 or 7 decades have resulted in highly efficient conversion of feedstock into muscle mass [2]. The feed conversion ratio (FCR) in these crosses attains 1.5-1.7 by 42 days of life [3,4].
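As a minimal worked illustration of these two growth metrics, the sketch below computes FCR and average daily gain (ADG) from feed-intake and body-weight figures; the numbers are hypothetical placeholders chosen only to fall in the cited range and are not data from the trial described in this study.

```python
# Minimal sketch with hypothetical numbers, not data from the present trial.

def feed_conversion_ratio(feed_intake_kg: float, weight_gain_kg: float) -> float:
    """FCR = total feed consumed / total live-weight gain over the rearing period."""
    return feed_intake_kg / weight_gain_kg

def average_daily_gain_g(start_weight_g: float, end_weight_g: float, days: int) -> float:
    """ADG in grams of live-weight gain per day."""
    return (end_weight_g - start_weight_g) / days

if __name__ == "__main__":
    # A hypothetical broiler growing from ~42 g to ~2800 g in 42 days on ~4.4 kg of feed:
    print(round(feed_conversion_ratio(4.4, 2.758), 2))   # ~1.60, within the cited 1.5-1.7 range
    print(round(average_daily_gain_g(42, 2800, 42), 1))  # ~65.7 g/day
```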
Gross industrial farming of broilers decreases manufacturing costs but makes the flock vulnerable to infection with Campylobacter jejuni [5], Clostridium perfringens [6], Clostridium difficile [7], several species of Salmonella [8], enteropathogenic Escherichia coli [9], Eimeria [10], avian leukosis virus [11], and Enterococcus avium, resulting in large economic losses to the poultry industry worldwide [12]. Antibiotics and antiparasitic medicines are commonly used at low doses for infectious disease prevention in broilers, thereby improving their growth and preventing losses. Nevertheless, the misuse and overuse of the drugs as growth promoters unavoidably lead to emerging antibiotic resistance in the broiler microbiota, including pathogens [13]. In 2015, the global annual consumption of antimicrobials per kg of animal product was estimated at 45 mg/kg, 148 mg/kg, and 172 mg/kg for cattle, chicken, and pigs, respectively [14]. Starting from this baseline, the global consumption of antimicrobials was expected to increase by 67%, from 63,151 ± 1560 tons to 105,596 ± 3605 tons, between 2010 and 2030 [14]. The impact of antibiotic use for growth promotion in livestock and poultry production on the rise of antimicrobial resistance in bacteria led to the ban of this practice in the European Union in 2006 and a restriction of antimicrobial use in animal agriculture in Canada and the US [15]. A recently emerged paradigm of the bioeconomy suggests using biological means of control for infection agents affecting poultry, including probiotics (live microbial preparations with antagonistic activity toward pathogens), prebiotics, phytobiotics, bacteriophages, and their lysins [16,17].

Since 1973, probiotics have been suggested as an efficient and safe alternative to feeding antibiotics [2,18]. Traditionally, representatives of the genus Lactobacillus and taxonomically close groups (Streptococci, Enterococci) were used in this role [19]. Because Lactobacilli are normal components of the chicken crop, small intestine, and cloaca, but not of the caeca, long-term survival of the administered bacteria was presumed. Therefore, strains with high resistance to acidic pH, bile, and pepsin, and with high adhesion ability to intestinal mucin, were suggested to be most efficient, and special methods of selection for these traits were proposed [20]. Further, probiotics from other taxonomic lineages were successfully used. First, enterobacteria, including E. coli [21], and Bacilli [22] were used. There are reports about the high efficiency of B. subtilis strains isolated from chicken feces [23], despite the fact that the survival of Bacillus in the chicken gastrointestinal tract (GIT) seems doubtful. Later, strains of Clostridium [24] and of the ascomycete yeast Saccharomyces boulardii [25] were introduced into practice. The most popular commercial probiotics available on the global market are Aviguard, Primalac, and Interbac, made up of several species of Lactobacillus and Bacillus [26].
Significantly, probiotics, like antibiotics (e.g., enramycin and tylosin), confer resistance against Eimeria on the chicken, although they do not exhibit antagonism towards apicomplexan sporozoites in vitro [27]. Moreover, the concept of the necessity of long-term persistence of the probiotics in the GIT was revised. The high efficiency of the anti-pathogenic and growth-promoting effects of Bacillus strains on the chicken was acknowledged, although a purely aerobic metabolism does not allow Bacilli to grow under the anaerobic conditions of the chicken's GIT. Moreover, culture medium fermented with Bacillus licheniformis and Bacillus subtilis exerted a favorable impact on the GIT microbiota and average daily weight gain (ADG) of the broilers [2,28]. This confirms that the short-term influence of probiotic-derived metabolites is sufficient for the favorable action of the overall probiotic. Therefore, the mechanism and final result of the anti-pathogenic action of antibiotics and probiotics may be more similar than previously suggested.

Importantly, the impact of antibiotic preparations on the microbiome of the caeca was described in [29]: the effect of the coccidiostat monensin and the growth promoters virginiamycin and tylosin on the caecal microbiome and metagenome of broiler chickens was studied by 16S rRNA and total-DNA shotgun metagenomic pyrosequencing. In this study, Roseburia, Lactobacillus, and Enterococcus showed reductions, and Coprococcus and Anaerofilum were enriched, in response to monensin alone or monensin in combination with virginiamycin or tylosin. Another important result was the enrichment in E. coli in the monensin/virginiamycin and monensin/tylosin treatments, but not in the monensin-alone treatment.

The impact of Bacillus licheniformis metabolites and the peptide antibiotic enramycin on the caecal microbiota was compared by Chen and Yu [2]. They reported that the diversity (richness and evenness) of bacterial species in the caeca of chickens treated with B. licheniformis metabolites was higher than in the control group. The share of clearly beneficial bacteria associated with probiotic properties, such as Lactobacillus crispatus and Akkermansia muciniphila, was also increased due to exposure of the chickens to B. licheniformis metabolites. Exposure of the broilers to enramycin led to an elevation of Clostridium bacterium, Enterococcus cecorum, Anaeromassilibacillus sp., Ruminococcus sp. SW178, Lachnoclostridium sp., and Blautia sp. in the caecal microbiota. Notably, the butyrate-producing genera Ruminococcus (order Eubacteriales, family Oscillospiraceae) and Blautia (order Eubacteriales, family Lachnospiraceae), along with Coprococcus, Roseburia, and Faecalibacterium (other representatives of the class Clostridia, order Eubacteriales), are now suggested to be favorable components of the normal human colon microbiota, exhibiting anti-inflammatory properties [30]. A deficiency of these genera in the microbiota is associated with the progression of Parkinson's disease.

An effect of a peptide antibiotic, bacitracin, and a Bacillus subtilis-derived probiotic on the caecal microbiota of chickens infected with species of Eimeria (the causative agent of coccidiosis) was described by Jia et al.
[29].The relative abundance of species Butyricicoccus pullicaecorum, Sporobacter termitidis, and Subdoligranulum variabile was increased in the chicken group challenged with Eimeria.It is known that Butyricicoccus pullicaecorum and Subdoligranulum variabile (both belong to the family Oscillospiraceae) produce butyrate and other short-chain fatty acids that suppress the development of Eimeria but are unfavorable for the microbiota [30].Sporobacter abundance was shown previously to be reduced when the chickens were treated with a mixture of probiotic Bifidobacterium strains [31].Similar effects of bacitracin and the probiotic were reported in [29]. The phytobiotics (primary or secondary components of plants that contain bioactive compounds) are proposed by Clavijo V. to be divided into four groups: (1) Herbs (products from flowering, non-woody, and non-persistent plants); (2) botanicals (whole plants or processed parts); (3) essential oils (hydro-distilled extracts of volatile plant compounds); and (4) oleoresins (extracts based on non-aqueous solvents) [1].The beneficial impact of phytobiotics on the poultry gastrointestinal tract microbiota is reported by Hasted et al. and Abdel Baset et al. [15,32].Abdo et al. described a cumulative effect of B. subtilis-derived probiotics and Yucca shidigera extract on water quality, histopathology, antioxidants, and innate immunity in response to acute ammonia exposure in a fish, Oreochromis niloticus (Nile tilapia).Phytobiotics are suggested to be cost-competitive but just a supplementary agent for the control of the normal and pathogenic microbiota in the chicken [33]. Significantly, the beneficial effects of the phytobiotics on the GIT microbiota and mitigating the severity of coccidiosis are attributed mostly to the immune-stimulating action of polysaccharides, flavonoids, and essential oils or to prebiotic-like action on trophic chains within the microbiota [1].Meanwhile, the effect of the dry plant biomass may be partially explained by the presence of live bacteria in the phylloplane, which may confer a probiotic action on the GIT microbiota [34].Members of the genus Bacillus and other endospore-forming aerobic bacteria (e.g., family Paenibacillaceae, order Bacillales) may be considered one of the most probable candidates to contribute to this effect for two reasons: (1) Due to the high ability of the endospores to survive drying, heating, and other unfavorable conditions upon preparation and storage; and (2) due to the commonly acknowledged beneficial impact of the probiotics derived from Bacillus on the normal and pathogenic microbiota of the chicken GIT.Substituting the probiotics with phytobiotics (e.g., grass meal) is able to overcome the principal shortcoming of modern commercial probiotics-the high manufacturing cost. 
Taking this idea into consideration, the present study had the objective of estimating the growth-stimulating effects of the grass meal on the chicken, using specimens collected in two distinct geographic locations, in comparison to a negative control group (receiving no medicines) and a positive control group fed a diet supplemented with the feed antibiotic enrostin (100 mg/mL ciprofloxacin and 10⁶ MU/mL colistin sulfate). The bacterial load of the grass meal specimens was qualitatively and quantitatively assessed by metagenomic sequencing of libraries obtained with the Ferier_F515 and Ferier_R806 primers specific to the V4 region of 16S ribosomal DNA [35]. Particular attention was paid to endospore-forming bacteria (Bacilli sensu lato), which are relatively widespread in the phylloplane [36] and have been suggested as efficient veterinary probiotics [2,28].

In parallel, the protective effect of the grass meal specimens against spontaneous invasion of the chickens with Eimeria tenella (coccidiosis) was studied. Molecular analysis by PCR with primers EtF and EtR specific to ITS-1 of the ribosomal cluster [37] demonstrated the presence of this parasite in the ileum digesta of six of the seven chickens that died in the negative control group in the course of the trials and in one dead chicken from experimental group KS2. One more chicken from the NC group, one chicken from the KS2 experimental group, and three chickens from the TS1 experimental group also died in the course of the trials, and none of the chickens that survived until the end of the trials exhibited E. tenella DNA in the ileum digesta. This observation allows hypothesizing that the grass meal may confer a specific anti-coccidiosis effect on the chicken or exhibit an overall restorative effect, increasing their resistance to parasitic invasions.

Collection and Drying of Grass Specimens

Four specimens of the grass biomass (mostly members of the family Poaceae: Dactylis glomerata, Phleum pratense, and Bromus inermis) were collected in two locations in Kursk region (GPS 51.8104° N, 36.3095° E; 51.8129° N, 36.3070° E) and two locations in Tambov region (GPS 52.861625° N, 41.277611° E; 52.869416° N, 41.258822° E) during the period from May 20 to June 10, 2022. A description of the locations is given in Table 1; for example, one Tambov collection site (Lysogorsky Village Council, Tambov district, Tambov region) is a mixed-grass meadow on meadow chernozem soil, located on the slope of a ravine in front of a ravine-type forest in which oak predominates alongside maple, alder, aspen, elm, and linden.

Each specimen weighed 7-10 kg. The mown grass was spread in a thin layer on a wooden surface and dried in the open air without exposure to direct sunlight for 10-12 days. The hay was turned over daily to accelerate the drying process and avoid rot.

After drying, each hay specimen was cut with scissors into pieces below 1 cm and ground in a hand coffee grinder. Each portion was ground for 1.5-2 min while avoiding heating. The meal was kept in a zippered plastic bag until it was used for the animal trials.
In Vitro Testing of Grass Specimens

The total composition of the hay microbiota was determined by 16S sequencing. Briefly, 100-120 mg samples were taken from each hay specimen and put into a 1.5 mL Eppendorf tube. One hundred microliters of sterile deionized water was added, and the total DNA was isolated using the GMO-B Sorbent Kit with CTAB as a lysing agent (Syntol, Moscow, Russia), following the manufacturer's instructions. The isolated DNA samples were sent to the State Research Institute of Agricultural Microbiology (Pushkin, Russia) for analysis. 16S DNA libraries were prepared using the primers Ferier_F515 (5′-GTGCCAGCMGCCGCGGTAA-3′) and Ferier_R806 (5′-GGACTACVSGGGTATCTAAT-3′) [37], as described below.

Total bacterial contamination and the count of Proteobacteria in the hay specimens were determined by microbiological methods. Fifty-microliter samples of the dry grass meal were placed into a 1.5 mL Eppendorf tube with 1000 µL of sterile deionized water, mixed intensively by vortexing, and incubated for 20 min at room temperature. Then 100-fold and 10,000-fold dilutions of the extract were prepared by the sequential transfer of 10 µL aliquots of the initial grass extract into 1000 µL volumes of sterile deionized water in new tubes. 10 µL aliquots of each grass extract and of its 100-fold dilution were spread onto LB agar plates (Bacto peptone 10 g/L, Bacto yeast extract 5 g/L, NaCl 5 g/L, Bacto agar 15 g/L) for assessing total bacterial contamination and onto LB agar plates supplemented with 35 µg/mL erythromycin for assessing the Proteobacteria count. The plates were incubated at 30 °C for 48 h, and the number of colonies was counted manually.

The count of Bacilli sensu lato (number of live thermostable endospores) was determined as follows: 50 µL samples of the dry grass meal were placed into a 1.5 mL Eppendorf tube, 1000 µL of sterile deionized water was added, and the tube was incubated at 90 °C for 10 min without preliminary mixing. The samples were then thoroughly mixed on a hand vortex and diluted 100-fold by transferring 10 µL aliquots of the heated grass extract into 1000 µL volumes of sterile deionized water in new tubes. 10 µL aliquots of each heated grass extract and of its 100-fold dilution were spread onto LB agar plates (Bacto peptone 10 g/L, Bacto yeast extract 5 g/L, NaCl 5 g/L, Bacto agar 15 g/L). Each specimen was analyzed in duplicate. The plates were incubated at 30 °C for 48 h, and the number of colonies was counted manually. A colony count in the range of 20-200 per plate was considered adequate for an accurate calculation of the initial contamination of the grass sample with endospore-forming bacteria.
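The dilution-plating scheme above implies a simple back-calculation from colony counts to the bacterial load of the meal. The sketch below illustrates that arithmetic; the function, its default parameter values (1 mL of extract, 10 µL plated, 50 mg of meal per extract), and the example count are illustrative assumptions, not data or code from the study.

# Illustrative back-calculation of bacterial load from dilution plating
# (hypothetical helper; parameter defaults mirror the protocol described above).

def cfu_per_gram(colonies: int, dilution_factor: int,
                 plated_volume_ml: float = 0.010,   # 10 µL spread per plate
                 extract_volume_ml: float = 1.0,    # meal extracted in ~1 mL of water
                 sample_mass_g: float = 0.050) -> float:
    # Counts outside 20-200 per plate were not considered reliable in the protocol.
    if not 20 <= colonies <= 200:
        raise ValueError("colony count outside the reliable 20-200 range")
    cfu_per_ml_undiluted = colonies / plated_volume_ml * dilution_factor
    return cfu_per_ml_undiluted * extract_volume_ml / sample_mass_g

# Example: 80 colonies on a plate of the 100-fold dilution -> 1.6e+07 CFU/g
print(f"{cfu_per_gram(80, 100):.1e} CFU/g")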
Individual bacterial colonies were re-streaked onto LB agar plates using the triple-streak exhaustion method and then used to inoculate liquid medium (1 mL LB broth in 20 mL flasks with cotton plugs); the cultures were incubated for 40 h at room temperature. The resulting cultures were used for genomic DNA purification with GMO-B sorbent kits (Syntol, Russia), using CTAB as a lysing agent. One microliter of purified DNA at a concentration of 0.2-0.4 µg/µL was used as the template for PCR with primers 8F (AGAGTTTGATCCTGGCTCAG) and 1492R (TACCTTGTTACGACTT) described earlier [34]. DreamTaq thermostable DNA polymerase (Thermo Fisher Scientific, USA) was used at 1 U per 30 µL reaction mix. The following thermal cycling parameters were applied: 94 °C for 2 min, followed by 30 cycles of 94 °C for 30 s, 60 °C for 45 s, and 72 °C for 30 s. The 1473 bp-long PCR product was purified with the ColGen Silica Sorbent Kit (Syntol, Russia) following the manufacturer's instructions and sent for custom sequencing to Eurogen LLC (Moscow, Russia) with primers 8F, 1492R, and 926R (CCGYCAATTYMTTTRAGTTT). The three sequences covering the 16S rDNA gene were merged, and the resulting 1473-1474 bp-long sequences were compared with NCBI GenBank using the Nucleotide BLAST utility. The name of the closest sequence and its accession number were recorded as the ID of the isolate.

Chickens and In Vivo Trials

The experimental protocol was approved at a meeting of the Local Ethics Committee of the VIGG (Protocol No. 1, dated 15 February 2018).

For the first seven days, 130 one-day-old Ross 308 cross broilers were placed in the vivarium of the Skryabin Academy of Veterinary Medicine and Biotechnology (Moscow, Russia) and kept at 32 ± 1 °C on a 12 h photoperiod in cage batteries with a mesh floor, an area of 80 × 90 cm, and 20 heads per cage. The chickens had ad libitum access to water. One feeder and one drinker per 10 heads were used. The sex of the birds was not determined. In this period, the chickens were fed a complete starter diet, "PK-5-1", purchased from Stavropolsky Kombikorm (Stavropol, Russia), without being divided into groups. During this period, they were kept in the cages.

On day 7, each chicken was weighed and distributed into one of six groups (two control and four experimental), with 20 heads in each group, using the method of pairs of analogs as described previously [24]. Ten birds not included in the experimental groups were kept in a separate cage as a reserve on the PC group diet containing toltrazuril and enrostin. They were not taken into account when the growth performance parameters were assessed and were used as a negative control for the E. tenella PCR diagnosis. Namely, the birds from this group were sacrificed once losses from natural causes were registered in the experimental or control groups, and their ileum digesta was used for DNA purification and PCR with primers EtF and EtR.

All birds were kept in cages with a concrete floor with an area of 1.5 m × 2 m covered with sawdust litter, which was changed twice a week. The initial average live body weight of the chickens in the experimental and control groups at the beginning of the experiment is shown in Table 2.
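Returning briefly to the isolate identification step described at the start of this subsection, the sketch below shows one way a merged 16S rDNA sequence could be submitted to NCBI BLAST and the top hit extracted using Biopython. This is an assumption for illustration only (the authors used the web-based Nucleotide BLAST utility), the placeholder sequence stands in for the ~1473-1474 bp merged Sanger reads, and the call requires network access to NCBI.

# Illustrative isolate identification via NCBI BLAST (not the authors' pipeline).
from Bio.Blast import NCBIWWW, NCBIXML

merged_16s = "AGAGTTTGATCCTGGCTCAG..."  # placeholder for the merged 16S rDNA sequence

result_handle = NCBIWWW.qblast("blastn", "nt", merged_16s)  # web BLAST against GenBank nt
record = NCBIXML.read(result_handle)

best = record.alignments[0]            # top-scoring GenBank hit
best_hsp = best.hsps[0]
identity = 100.0 * best_hsp.identities / best_hsp.align_length
print(f"Closest sequence: {best.title}")
print(f"Accession: {best.accession}, identity: {identity:.1f}%")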
Thereafter, the experiment was carried out until the 42nd day of life (35 days). During this period, the chickens were kept on the floor. Each group had ad libitum access to the food. The complete antibiotic-free Ekorm-ROST grower diet, purchased from Stavropolsky Kombikorm (Russia), was provided in excess twice a day, at about 8 AM and 6 PM, and each diet portion was weighed. Each experimental diet was prepared for the whole period of the experiment by adding 1% of the respective grass meal sample to the whole volume of the diet (300 g of grass meal per 30 kg of Ekorm-ROST diet) and mixing in a 100 L hopper with a propeller stirrer from EuroPlast (Moscow, Russia). Enrostin and toltrazuril were not mixed with the diet, since they were administered to the PC group birds with the drinking water.

Before providing a fresh diet, the residue left from the previous portion was weighed and subtracted from the initial weight of that portion to determine the fodder consumption and calculate the feed conversion ratio (FCR).

The chickens were weighed weekly on days 14, 21, 28, 35, and 42 of their lives (days 7, 14, 21, 28, and 35 since the start of the experiment). Each group was weighed as a whole, and the average bird mass in the group was calculated. The live weight was used as the output parameter. It was expressed as the arithmetic mean per bird in each group in g and as a percentage normalized to the positive control group. FCR was calculated as described previously [4].

Each chicken that died was processed within two to three hours of death. The dead chickens were subjected to autopsy for the purpose of collecting ileal digesta specimens, which were immediately frozen. The healthy control chickens from the reserve group were humanely killed through carbon dioxide inhalation at the same age at which spontaneous deaths due to infection were registered in the negative control group. Briefly, 100 milligrams of ileal digesta was sampled in duplicate from each chick and used for DNA purification with the K-Sorb Micro-Column Sorbent Kit (Syntol, Moscow, Russia), following the manufacturer's instructions. The DNA samples isolated from the ileum digesta were analyzed by PCR with primers EtF (AATTTAGTCCATCGCAACCCT) and EtR (CGAGCGCTCTGCATACGACA), specific to ITS-1 of the ribosomal cluster [38], and, once PCR products were obtained, sequenced by the Sanger method using the BigDye Terminator v3.1 Cycle Sequencing Kit (Thermo Fisher Scientific, USA) and a Nanophore 05 genetic analyzer (Syntol, Russia). The derived sequences were compared to NCBI GenBank. Their affiliation with the genomic DNA of Eimeria tenella was verified by similarity with the Eimeria tenella genome assembly, chromosome 13 (NCBI GenBank accession number HG994973).

At the end of the experiment, all broilers were humanely killed through carbon dioxide inhalation as described formerly [39].
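The feed and weight bookkeeping described above reduces to a few ratios. The sketch below shows one way the per-group growth indicators could be computed; the function and all numbers are illustrative placeholders, not the study's data or the authors' actual calculation from [4].

# Illustrative per-group growth indicators (placeholder values, hypothetical helper).

def growth_indicators(start_weight_g: float, end_weight_g: float,
                      feed_consumed_g: float, birds: int, days: int):
    gain_per_bird = end_weight_g - start_weight_g      # average per-bird weights, g
    adg = gain_per_bird / days                          # average daily gain, g/bird/day
    fi = feed_consumed_g / birds / days                 # feed intake, g/bird/day
    fcr = (feed_consumed_g / birds) / gain_per_bird     # g of feed per g of gain
    return adg, fi, fcr

adg, fi, fcr = growth_indicators(start_weight_g=170, end_weight_g=2800,
                                 feed_consumed_g=90000, birds=20, days=35)
print(f"ADG = {adg:.1f} g/day, FI = {fi:.1f} g/day, FCR = {fcr:.2f}")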
16S DNA Library Construction, Sequencing and Bioinformatics Analysis

A paired-end sequencing library was prepared from the PCR product obtained using the extracted DNA as a template and the Ferier_F515/Ferier_R806 primers. The Illumina Nextera XT Library Preparation Kit (Illumina, San Diego, CA, USA) was used for constructing the library. The library quality was assessed using a Qubit 2.0 fluorometer (Thermo Scientific, Waltham, MA, USA) and a Bioanalyzer 2100 system (Agilent, Santa Clara, CA, USA). The library was then sequenced on an Illumina NovaSeq 6000 platform (Illumina) to generate 150 bp paired-end reads. Quality control and filtering of the raw sequencing reads were performed using Trimmomatic (version 0.38); a mean quality lower than Q20 in a 100 bp sliding window was used as the trimming criterion. Reads that mapped to eukaryotic genomes with Bowtie2 (version 2.3.4.1) were filtered out. The clean reads were assembled using MEGAHIT (version 1.1.3) in paired-end mode. Bioinformatics analysis was performed using MicrobiomeAnalyst [39]. Fisher's alpha index (species richness) and the Shannon index (species evenness) were used to evaluate the alpha diversity of the bacterial compositions. The overall differences in the bacterial community were analyzed through a heat map and principal coordinate analysis (PCoA) in QIIME 2 (version 2017.4). Correlation analysis was performed using Spearman's correlation coefficient and visualized using the R package "corrplot" (version 0.84).

Statistical Analysis

For each group, the following indicators were calculated: average body weight (ABW), average feed intake per broiler per day (FI), average daily weight gain per broiler per day (ADG), and feed conversion ratio (FCR). Food intake for each group as a whole was recorded twice a day, immediately before each feeding. Dead birds were excluded from the count on the day of death.

The means and standard deviations (SD) of the growth parameters (ABW, FI, ADG, and FCR) were calculated weekly and over the entire 7-42-day period as described previously, and daily means were calculated by dividing the indicator for the period by the number of days in it [24]. The Mann-Whitney U-test was used to determine the differences between each pair of groups. Differences were considered significant at p < 0.05. Statistics 8.0 for Windows was used for statistical analysis.

Analysis of Grass Metagenome Diversity

Four mixed grass samples were mowed in two geographically remote locations in the Chernozem region of the European territory of Russia (KS1 and KS2 from Kursk region, TS1 and TS2 from Tambov region). The mown grass was dried in the open air without exposure to direct sunlight and milled as described in Materials and Methods. A metagenomic assay of the microbiome in the hay samples was carried out after two months of storage of the hay meal samples at room temperature. In parallel, the same samples were subjected to biological trials on the broilers.
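Before turning to the metagenomic results, the sketch below makes the two statistics defined in the Methods above concrete: a Shannon index computed for one sample's genus-level read counts and a Mann-Whitney U test applied to two hypothetical groups at the p < 0.05 threshold. It is an illustration under assumed numbers, not output of the MicrobiomeAnalyst/QIIME 2 workflow or of Statistics 8.0.

# Illustrative Shannon index and Mann-Whitney U test (placeholder data).
import numpy as np
from scipy.stats import mannwhitneyu

def shannon_index(counts):
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

sample_counts = [860, 20, 15, 5, 1]                 # hypothetical reads per genus in one sample
print(f"Shannon H' = {shannon_index(sample_counts):.2f}")

group_a_weights = [2790, 2830, 2860, 2815, 2880]     # hypothetical per-bird weights, group A
group_b_weights = [2410, 2455, 2500, 2470, 2520]     # hypothetical per-bird weights, group B
u_stat, p_value = mannwhitneyu(group_a_weights, group_b_weights, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}, significant: {p_value < 0.05}")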
Metagenomic 16S rDNA-based analysis of the hay samples before filtering indicated that 53-86% of the sequences belonged to Proteobacteria, whereas most of the others were attributed to the plant mitochondrial genome. This demonstrated the insufficient specificity of the chosen primers for the bacterial genome, but it was not possible to replace them with more specific variants such as 8F + 1492R or 8F + 926R because of the necessity of keeping the PCR product short in order to maximize read quality and coverage. Statistically treated data from the metagenomic analysis of the hay specimen microbiome after subtracting the sequences belonging to plant mitochondria are shown in Figure 1.

First of all, the analysis demonstrated an absolute dominance of the genus Paucibacter (Betaproteobacteria, Burkholderiales, Burkholderiales genera incertae sedis). This genus comprises 86.6% of the KS1, 77.4% of the KS2, 99.4% of the TS1, and 85.4% of the TS2 microbiome. Some representatives of this genus are reported as human pathogens causing bacteremia, similarly to Acinetobacter and Pseudomonas [40], as algicides [41], and as agents providing degradation of cyanobacteria-derived toxins in fresh water [42]. Other genera ubiquitous within the studied cohort belonged to Proteobacteria: Pseudomonas (class Gammaproteobacteria), Sphingomonas, and Aureimonas (Alphaproteobacteria). The share of these genera in the microbiome was in the range of 0.03-2.15%. The total share of class Gammaproteobacteria comprised 93.1% in KS1, 88.6% in KS2, 99.4% in TS1, and 99.1% in TS2, and that of class Alphaproteobacteria, 3.4% in KS1, 4.7% in KS2, 0.2% in TS1, and 0.4% in TS2.

The phyla Myxococcota (class Polyangia) and Patescibacteria (class Saccharimonadia) were represented only in grass sample KS1, where they accounted for 0.06% and 0.01% of the microbiome, respectively. The phylum Bdellovibrionota (class Bdellovibrionia) was found only in sample KS2, where it accounted for 0.02% of the microbiome. No rare classes of bacteria were found in the TS1 or TS2 samples. Characterizing genus diversity in the grass samples, one should note that bacterial diversity in KS1 and KS2 was higher than in the TS1 and TS2 samples; in turn, bacterial diversity in the TS1 sample was lower than in TS2. This observation was confirmed by the following numerical values:

• Subdominant taxa are absent in the TS1 and TS2 samples.

A microbiological assay was used for verification of the metagenomic data. Plating non-heated extracts of the grass meal confirmed the dominance of Proteobacteria, which accounted for more than 10⁷ c.f.u.
per g in all four samples. This observation proved the substantial survival of aerobic Proteobacteria in the dry grass biomass under storage, although they do not produce endospores. The load of spore-forming microorganisms was 6.0 × 10² for TS1, 1.6 × 10⁶ for TS2, 1.0 × 10⁴ for KS1, and 3.0 × 10⁴ for KS2, which did not correspond to the data from the metagenomic analysis (Figure 1). Apparently, the high discrepancy between the data obtained by metagenomic and microbiological methods may be explained by a high share of vegetative (perhaps not alive) cells and a low share of thermostable endospores of Bacilli sensu lato in the TS2 sample, whereas the KS1 and KS2 samples contained Bacilli mostly in the form of thermostable endospores. The species specificity of the isolated bacterial clones was determined by molecular methods (16S rDNA sequencing). The results of this analysis are shown in Table 3.

The data from Table 3 confirm the conclusion about the dominance of Gammaproteobacteria in the microbiome of all grass samples; however, the exact species specificity of the isolates differs from that determined by the metagenomic assay in the following respects. No representatives of the genera Paucibacter and Sphingomonas were isolated from any grass samples; apparently, they are unable to grow rapidly on LB medium under the chosen conditions. In contrast, the isolated representatives of Gammaproteobacteria belonged to the genera Pantoea, Acinetobacter, and Pseudomonas, which accounted for just a small share of the consortium detected by the metagenomic assay. Pseudomonas was the only genus found in the grass samples by both metagenomic analysis and microbiological seeding.

The species affiliation of Bacilli sensu lato was overall similar (B. subtilis and the close species B. velezensis, B. amyloliquefaciens, B. altitudinis, B. tequilensis, B. siamensis, B. licheniformis, B. inaquosorum, B. stercoris, B. safensis, and B. mojavensis) in all samples, although their quantitative representation fluctuated over a range of four orders of magnitude. The only isolate containing thermostable endospores that did not belong to the group of mesophilic bacilli was isolated from the KS1 grass sample. It has 16S rDNA that is 84% similar to Paenibacillus dendritiformis (family Paenibacillaceae, not Bacillaceae).

For isolating Bacilli sensu lato, 50 mg of ground grass samples was placed in 1.7 mL Eppendorf tubes, flooded with 1 mL of deionized water, and heated for 10 min in a solid-state tube thermostat. The tubes were not mixed before heating to avoid casting the bacteria-containing material onto the unheated lid. After the heating, the samples were thoroughly mixed by vortexing, and 50 µL aliquots of the suspension were picked up with a trimmed 200 µL automated pipette tip, placed onto the nutrient agar, and spread with a glass spatula. For isolating Proteobacteria, the ground grass suspensions prepared in a similar way without heating were picked up with a trimmed 200 µL automated pipette tip, placed onto the nutrient agar containing 35 µg/mL erythromycin, and spread with a glass spatula. The respective experimental procedures are described in detail in Section 2.1.

Interestingly, a Bacillus stercoris colony was found in the TS2 sample under erythromycin selection without heating. Mesophilic bacilli were found in the TS1 sample, although the metagenomic assay demonstrated the complete absence of this group in its microbiome. This observation casts doubt on the accuracy of the metagenomic data, apparently due to incomplete DNA extraction from the grass sample. No B.
cereus was found among Bacilli sensu lato isolates from the TS2 sample, although following metagenomics data, this species significantly outnumbered B. subtilis, B. stercoris, B. safensis, and B. mojavensis in this source.This fact admits the assumption that DNA is poorly isolated from thermostable spores, whereas a major share of Bacilli sensu lato remains on the grass in the vegetative form, which does not survive heating at 90 • C. Isolates Kp1-1 and Kp1-2 from the KS1 sample under non-selective conditions belonged to mesophilic bacilli, not Proteobacteria, although following metagenomic data, the share of this group in the KS2 metagenome is ~1%.This is the clearest evidence of a substantial bias that appeared at the stage of DNA purification and 16S rDNA metagenomic analysis. In our opinion, the data from the microbiological assay of thermostable endospores were the most accurate.They were first used for the characterization of the tested grass samples. Biological Trials of the Additives on the Basis of the Grass Meal Chickens of a rapidly growing cross, Ross 308, were used in the experiment.As shown in Table 2, the broad-spectrum antibiotic enrostin (complex preparation containing 100 mg ciprofloxacin (fluoroquinolone) and 10 mU colistin (peptide ionophore)) was used for preventing bacterial infection in all flocks in periods 1-7 of the life prior to forming experimental groups.No bird losses were registered at this time.On day 7 of life, the flock was distributed into the groups using a method of pairs of analogs for equilibrating the average mass of the birds in each group, as shown in Table 2. Then the positive control group (PC) was treated with Stop-Coccid (days 14-16-25 µg/mL toltrazuril within the drinking water; days 28-33-enrostin within the drinking water).The negative control group (NC) in periods 8-42 obtained no additives, and the experimental groups permanently obtained 1% grass meal within the food. In the period of days 14-21, a total of 7 chickens were lost under these conditions in the NC group (35% mortality).Two chickens died at the same time in the KS2 experimental group (10% mortality), and three chickens died in the TS1 experimental group (15% mortality).No losses were registered in the KS1 and TS2 groups or in the positive control group (PC).This observation gives evidence that the grass meal may partially or completely substitute the antibiotic enrostin and the coccidiostatic preparation toltrazuril as a means of protecting the chickens from death caused by infection or invasion. 
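The group sizes and losses reported above translate directly into the quoted mortality rates. The sketch below reproduces that arithmetic and, as an illustrative extra that the authors did not report, shows how such a difference in losses between two groups could be compared with Fisher's exact test; group labels and counts come from the text, everything else is an assumption.

# Mortality rates from the reported losses (20 birds per group), plus an
# illustrative Fisher's exact test (not part of the published analysis).
from scipy.stats import fisher_exact

losses = {"NC": 7, "KS2": 2, "TS1": 3, "KS1": 0, "TS2": 0, "PC": 0}
group_size = 20

for group, dead in losses.items():
    print(f"{group}: {dead}/{group_size} = {dead / group_size:.0%} mortality")

# Example comparison: NC (7/20 dead) vs. a group with no losses (0/20)
odds_ratio, p = fisher_exact([[7, 13], [0, 20]], alternative="two-sided")
print(f"Fisher's exact test, NC vs. a 0/20 group: p = {p:.3f}")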
Before death, all sick chickens exhibited the following symptoms: ruffled feathers, diarrhea, and impaired gait. Their feces had a peculiar acidic smell. One chicken had liquid discharge from its beak. Visual examination of the inner organs of the dead chickens after autopsy revealed edema and inflammation of the small intestine, and particularly of the caeca, in all birds. DNA was isolated from the ileum digesta and analyzed by PCR with primers EtF (AATTTAGTCCATCGCAACCCT) and EtR (CGAGCGCTCTGCATACGACA) specific to ITS-1 of the ribosomal cluster [37]. The PCR was positive in the samples from six of the seven chickens that died in the NC group and in one dead chicken from the experimental group KS2 (Table 4). The PCR products were sequenced by the Sanger method. The derived sequences were compared to NCBI GenBank and exhibited 100% similarity with the Eimeria tenella genome assembly, chromosome 13 (NCBI GenBank accession number HG994973). Taken together, these data unambiguously prove that death from coccidiosis (spontaneous invasion with E. tenella) in the NC group reached 35%, and that these losses were completely prevented by either the combination of toltrazuril and enrostin (positive control group) or by grass meal (experimental groups KS1, TS1, and TS2), and partially prevented in the KS2 group. No PCR products with primers EtF and EtR were found in the ileum digesta DNA of the reserve group (totaling 10 heads), which were sacrificed simultaneously with the birds that died from the invasion. Besides protection from invasion and infection, the impact of the grass meal additives on the average weight gain on days 14, 21, 28, 35, and 42 was determined, and the FCR coefficient was calculated. A clear lag of the NC group in comparison to the PC group was found on days 21, 28, 35, and 42 (p < 0.01). At the end of the experiment, the average body weight (ABW) in the NC group (2473 g) was 13% lower than in the PC group (2841 g). Differences in ABW between the PC group and the experimental groups were not significant (less than ±1% at any time during the experiment). These data prove that the tested grass meal additives are able to substitute for the chemical preparations toltrazuril and enrostin as growth promoters, and not only as anti-parasitic agents.

Dynamics of the ABW, feed intake, ADG, and FCR values in the experimental groups are shown in Table 5.

Discussion

The biological trials carried out demonstrated a clear beneficial effect of the KS1 and TS2 grass meal additives on the ABW of the chickens. In contrast, they did not exhibit an impact on the daily feed intake or FCR parameters. Notably, these additives completely prevented chicken deaths. In this respect, they were not inferior in effectiveness to the combination of toltrazuril and the antibiotic enrostin. By contrast, the feed additives based on grass flour from KS2 and TS1 did not provide complete protection of the chickens from E. tenella invasion, although mortality in these groups was lower than in the negative control group (2 and 3 dead chickens, respectively, versus 7 in the NC group). Taken together, these observations allow hypothesizing that the herbal flour itself has a protective effect that suppresses E. tenella invasion in chickens, but that the bacteria it carries significantly affect its physiological properties when the grass sample is used as a feed additive.
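As a quick arithmetic check of the body-weight comparison quoted in the Results above, the snippet below expresses ABW as a percentage of the positive control group using only the two day-42 group means given in the text.

# Worked check of the day-42 ABW comparison (group means from the text above).
abw_pc = 2841.0   # g, positive control group
abw_nc = 2473.0   # g, negative control group
nc_as_percent_of_pc = abw_nc / abw_pc * 100
print(f"NC = {nc_as_percent_of_pc:.1f}% of PC, i.e. about {100 - nc_as_percent_of_pc:.0f}% lower")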
Studies of the effect of the herbal flour-based additives on the ABW and ADG indicators confirmed their ability to replace an antibiotic and an antiparasitic drug when feeding chickens. Starting from the 28th day of the chicks' life and up to the end of the experiment (42 days of life), the ADG index in the experimental groups KS1, TS1, KS2, and TS2 significantly differed from the NC group but not from the PC group.

At the same time, the FCR indicator in the PC group on the 21st day of the chicks' lives differed from that in the NC group for the worse. This shows that the use of the combination of toltrazuril and enrostin during this period had a negative effect on feed conversion. At the same time, the herbal meal did not have a depressing effect on the digestibility of the feed while successfully preventing bird death from E. tenella invasion (35% dead birds in the NC group vs. 0-15% in the PC and experimental groups). This observation shows the advantage of herbal flour enriched with certain types of bacteria as a means of protecting chickens from coccidiosis. At the same time, on days 35 and 42 of chick life, the NC group already showed a statistically significant lag in ADG behind the PC group and the experimental groups KS1, KS2, and TS1. In contrast, the KS2 group on days 35 and 42 of the experiment showed the worst ADG index compared to the PC group.

Explaining the results obtained, we draw attention to the fact that in the TS2 herbal flour sample, with its high total content of bacilli potentially capable of acting as a probiotic (as derived from the results of bacteriological seeding after heating), an absolute dominance of the species B. cereus (according to the metagenomic analysis), described as an opportunistic pathogen of chickens and other animals, was observed [43]. This species may exert a depressing effect on the assimilation of feed by chickens, along with an antagonistic effect on E. tenella and bacterial pathogens in the gastrointestinal tract of chickens. On the other hand, no isolates of B. cereus were found among the bacterial clones subjected to individual molecular typing (Table 3).
Noticeable differences in the protective effectiveness of KS1 and KS2 samples are difficult to explain since the composition of the microbiome of these samples is highly similar.The share of bacilli is 0.79% for KS1 and 1.29% for KS2 when assessed by metagenomic analysis.According to the microbiological seeding data, the endospore content in these samples was 1.0 × 10 4 in KS1 and 3.0 × 10 4 in KS2.Both methods give similar figures, and this does not allow considering the difference in the content of bacilli or their endospores as a key parameter affecting the protective effectiveness against coccidiosis in chickens.It is possible that the low protective effectiveness of the KS1 sample is due to the content of potential pathogens of chickens in it, representatives of the Saccharimonadia class (0.01% of the microbiome) or the genus Staphylococcus (0.16% of the microbiome); both of these groups were completely absent in the KS2 sample.There is evidence in the literature about the possibility of the presence of representatives of these groups in the human intestine and the association of their increased proportion in the microbiome with an unfavorable prognosis for diseases [44,45].The TS2 sample exhibiting a high protective activity, along with the KS1 sample, contained few representatives of Bacilli sensu lato (0.3%, as shown by metagenomic analysis).We hypothesize that its beneficial impact on chicken safety during the spontaneous E. tenella invasion may be explained by the high share of the unclassified Enterobacteriaceae group in the TS2 sample. Analysis of previously published data gives a number of indirect clues confirming the efficiency of grass biomass as a source of beneficial bacteria (analogs of probiotics).Probiotics, like antibiotics (e.g., enramycin and tylosin), confer resistance against Eimeria on the chicken, although they do not exhibit antagonism towards Apikomplexa sporozoits in vitro [27].Moreover, the concept of the necessity of long-term persistence of the probiotics in GIT was revised.Obviously, the high efficiency of Bacillus strain's anti-pathogenic and growth-promoting effects on the chicken was acknowledged, although a pure aerobic metabolism does not allow Bacilli vegetate under the chicken's GIT anaerobic conditions.Moreover, extracellular culture medium from Bacillus licheniformis and Bacillus subtilis conferred a favorable impact on the GIT microbiota and daily weight gain of the broilers [2,28].This confirms that the short-term influence of the probiotic-derived metabolite is sufficient for the favorable action of the overall probiotic.Therefore, the kinetics of the anti-pathogenic action of the antibiotics and probiotics may be more similar than previously suggested. The effects of antibiotics and probiotics on the GIT microbiota in chickens were extensively studied by using metagenome sequencing (amplified libraries of 16S rDNA gene fragments were sequenced on the Illumina platform) [29,45].It has been indicated that in the caeca of broilers, Clostridia are the predominant organisms [30], while the genus Lactobacillus is dominant in the ileum [46]. 
Importantly, an impact of the antibiotics monensin, virginiamycin, and tylosin on the microbiome of caeca was described in [31]: The effect of the coccidiostat monensin and the growth promoters virginiamycin and tylosin on the caeca microbiome and metagenome of broiler chickens, 16S rRNA, and total DNA shotgun metagenomic pyrosequencing.In this study, Roseburia, Lactobacillus, and Enterococcus showed reductions, and Coprococcus and Anaeroflum were enriched in response to monensin alone or monensin in combination with virginiamycin or tylosin.Another important result was the enrichment in E. coli in the monensin/virginiamycin and monensin/tylosin treatments, but not in the monensin-alone treatment. The impact of Bacillus licheniformis metabolites and the peptide antibiotic enramycin on the caecal microbiota was compared by Chen and Yu [2].They reported that the diversity (richness and evenness) of bacterial species in the caeca of the chicken treated with B. lichenofromis metabolites was higher than in the control group.The share of obviously beneficial bacteria associated with probiotic properties, such as Lactobacillus crispatus and Akkermansia muciniphila, was also increased due to exposure of the chicken to B. lichenofromis metabolites.Exposure of the broilers to enaramycin led to an elevation of Clostridium bacterium, Enterococcus cecorum, Anaeromassilibacillus sp., Ruminococcus sp.SW178, Lachnoclostridium sp., and Blautia sp. in the caecal microbiota.Noteworthy, now butyrate-producing genera Ruminococcus (order Eubacteriales, family Oscillospiraceae) and Blautia (order Eubacteriales, family Lachnospiraceae), along with Coprococcus, Roseburia, and Faecalibacterium (other representatives of the class Closrtidia, order Eubacteriales), are suggested to be favorable components of normal human column microbiota exhibiting antiinflammatory properties [29].A deficiency of these genera in the microbiota is associated with the progression of Parkinson's disease. An effect of a peptide antibiotic, bacitracin, and a Bacillus subtilis-derived probiotic on the caecal microbiota of chickens infected with species of Eimeria (causative agent of coccidiosis) was described by Jia et al. [29].The relative abundance of species Butyricicoccus pullicaecorum, Sporobacter termitidis, and Subdoligranulum variabile increased in the chicken group challenged with Eimeria.It is known that Butyricicoccus pullicaecorum and Subdoligranulum variabile (both belong to the family Oscillospiraceae) produce butyrate and other short-chain fatty acids that suppress the development of Eimeria but are unfavorable for the microbiota [47].Sporobacter abundance was shown previously to be reduced when the chickens were treated with a mixture of probiotic Bifidobacterium strains [48].Similar effects of bacitracin and the probiotic were reported in [29]. Chicken gut microbiota (feces) responses to B. subtilis probiotics in the presence and absence of E. tenella infection are reported by Memon F.U. [49].The feces of the healthy control group contained about 95% Firmicutes, 4% Proteobacteria, and 1% other phyla.Infection with Eimeria decreased the share of Firmicutes to 70%, whereas Proteobacteria shared 21% and Bacteroidetes 8% of the fecal microbiome.Treatment of the healthy chicken flock with the probiotic somewhat increased the share of Proteobacteria, Bacteroidetes, and other phyla in comparison to the non-treated group.Administration of the probiotic to the chicken challenged with E. 
tenella did not affect the ratio of different bacterial phyla in the fecal microbiota, although it substantially mitigated the morbidity of the disease.The relative abundances of Lactobacillus within the Firmicutes clade accounted for 36.56%,56.42%, 49.73%, and 54.76 in the respective groups of chickens.Escherichia-Shigella accounted for 4.42%, 25.82%, 6.41%, and 28.20% within the Proteobacteria clade.In contrast, decreased abundances of Kurthia, Ruminococcus torques, and Clostridium were found in Eimeria-infected groups compared to the healthy control group.Probiotic-treated and challenged chickens, on the other hand, restored (increased) the abundances of Clostridium sensu stricto, Corynebacterium, Enterococcus, Romboutsia, and Subdoligranulum and decreased the abundances of Faecalibacterium, Lachnoclostridium, Eisenbergiella, Sellimonas, Flavonifractor, Monoglobus, Lachnospiraceae, Blautia, Ruminococcus torques, Christensenellaceae, Eubacterium hallii, and Paludicola compared to the Eimeria-infected non-treated group. Khogali reported changes in the microbiota of feces in old laying hens induced by the administration of Clostridium butyricum and B. subtilis-derived probiotics [24].Noteworthy, the exposure of the hen to the probiotics reduced the share of pimpled eggs, a substantial share of which compromises the economic efficiency of the elderly hens.In contrast to the caecal microbiota, the healthy hen feces contain above 85% Firmicutes (>98% Lactobacillales), 6% Proteobacteria, and 2% Actinobacteria.In old hens prone to laying pimpled eggs, above 70% of the feces microbiota is occupied by Proteobacteria, and the share of Bacteroidetes attains 4-5%, whereas the share of Firmicures is decreased to 15% (share of Lactobacillales is ~50%).Application of the bifunctional probiotic increases the share of Firmicutes to ~70%, reduces the share of Proteobacteria and Bacteroidetes to the normal level, and increases the share of Actinobacteria to 7%.It increases the share of Verrucomicrobia to 2.5%, while the contents of this group in the feces of non-treated hens are negligible.However, the share of Lactobacillales within Firmicutes after exposure to the probiotic was far from normal (15-20%).Taken together, one should conclude that probiotics are now considered a powerful tool comparable to antibiotics in terms of impact on the normal and pathogenic components of the chicken GIT microbiota and safety, but are less affordable for practical use due to a high manufacturing cost [1]. Conclusions Concluding the analysis of the obtained results, it should be noted that they convincingly demonstrate the beneficial impact of the dry plant biomass (a mix of D. glomerata, P. pretense, and B. inermis) as the growth promoter when added to the food in a ratio of 1% of the diet weight.This effect was not worse than the effect of enrostin, which is traditionally used at Russian industrial poultry plants in this role.Enrostin added to the chicken food together with tolatrzuril elevated ADG up to 14.9% in comparison to the same diet without medicines.The tested dry grass biomass samples collected in different locations increased ADG to 14.6-15.2% in comparison to the negative control.Dry grass biomass is obviously more economical and safe for chickens and chicken meat consumers in comparison to any antibiotic, including enrostin. Moreover, due to an extensive outbreak of coccidiosis that occurred in 2022, we faced a spontaneous invasion of E. 
tenella in the experimental and negative control groups and registered the efficiency of two of the four tested dry grass biomass samples against the parasite invasion. We hypothesize that this effect was caused by the different microbial composition of the grass biomass. The most protective samples, KS1 and TS2, contained 0.79% Bacilli sensu lato, whereas the KS2 sample contained 12.3% unidentified Enterobacteriaceae. The KS2 sample, which contained the highest share of Bacilli sensu lato and was considered the most probable analog of probiotics, exhibited poor protection against mortality. We suppose that the differences between the KS1 and KS2 samples can be explained by differences in the prevalence of Bacillus species, namely, a high share of an opportunistic animal pathogen, B. cereus, in the KS2 sample, whereas B. subtilis group species (Bacillus velezensis, Bacillus amyloliquefaciens, Bacillus subtilis, Bacillus altitudinis, Bacillus tequilensis), as well as Paenibacillus dendritiformis, dominated in the KS1 sample. We suggest that these bacteria, along with herbal bio-constituents, contribute to the suppression of opportunistic pathogens in the chicken ileum and other GIT sections. We hypothesize that these bacteria can suppress Eimeria oocyst germination, mitigating the risk of parasite invasion and the death of the bird from it, although this hypothesis still requires experimental verification.

Informed Consent Statement: Not applicable.

Figure 1. Results of the metagenomic analysis of the hay samples KS1, KS2, TS1, and TS2 carried out by 16S ribosomal DNA metagenomic sequencing. The share of sequences attributed to each genus is shown. The affiliation of each genus to a certain class and phylum of bacteria is indicated by colored edging. The area of the sectors is proportional to the share of the taxon in the microbiome on a logarithmic scale.

Table 1. Description of the locations where the grass specimens used for the biological trials were collected.

Table 2. Experimental design of the biological trials: initial composition of the experimental groups.

Table 3. Results of molecular identification of the species specificity of bacterial clones isolated from the dry grass samples.

Table 4. Chicken losses in the course of the experiment and E. tenella diagnosis in their ileal digesta samples.

Table 5. Live body weight values (average weight per head, g)/FCR values of the chickens in the experimental groups. a - statistically significant difference in the parameter value in comparison to the positive control group at the same time point of the experiment (p < 0.05 according to the Mann-Whitney test). b - statistically significant difference in the parameter value in comparison to the negative control group at the same time point of the experiment (p < 0.05 according to the Mann-Whitney test).
EnanDIM - a novel family of L-nucleotide-protected TLR9 agonists for cancer immunotherapy Background Toll-like receptor 9 agonists are potent activators of the immune system. Their clinical potential in immunotherapy against metastatic cancers is being evaluated across a number of clinical trials. TLR9 agonists are DNA-based molecules that contain several non-methylated CG-motifs for TLR9 recognition. Chemical modifications of DNA backbones are usually employed to prevent degradation by nucleases. These, however, can promote undesirable off-target effects and therapeutic restrictions. Methods Within the EnanDIM® family members of TLR9 agonists described here, D-deoxyribose nucleotides at the nuclease-accessible 3′-ends are replaced by nuclease-resistant L-deoxyribose nucleotides. EnanDIM® molecules with varying sequences were screened for their activation of human peripheral blood mononuclear cells based on secretion of IFN-alpha and IP-10 as well as activation of immune cells. Selected molecules were evaluated in mice in a maximum feasible dose study and for analysis of immune activation. The ability to modulate the tumor-microenvironment and anti-tumor responses after EnanDIM® administration was analyzed in syngeneic murine tumor models. Results The presence of L-deoxyribose containing nucleotides at their 3′-ends is sufficient to prevent EnanDIM® molecules from nucleolytic degradation. EnanDIM® molecules show broad immune activation targeting specific components of both the innate and adaptive immune systems. Activation was strictly dependent on the presence of CG-motifs, known to be recognized by TLR9. The absence of off-target effects may enable a wide therapeutic window. This advantageous anti-tumoral immune profile also promotes increased T cell infiltration into CT26 colon carcinoma tumors, which translates into reduced tumor growth. EnanDIM® molecules also drove regression of multiple other murine syngeneic tumors including MC38 colon carcinoma, B16 melanoma, A20 lymphoma, and EMT-6 breast cancer. In A20 and EMT-6, EnanDIM® immunotherapy cured a majority of mice and established persistent anti-tumor immune memory as evidenced by the complete immunity of these mice to subsequent tumor re-challenge. Conclusions In summary, EnanDIM® comprise a novel family of TLR9 agonists that facilitate an efficacious activation of both innate and adaptive immunity. Their proven potential in onco-immunotherapy, as shown by cytotoxic activity, beneficial modulation of the tumor microenvironment, inhibition of tumor growth, and induction of long-lasting, tumor-specific memory, supports EnanDIM® molecules for further preclinical and clinical development. Electronic supplementary material The online version of this article (10.1186/s40425-018-0470-3) contains supplementary material, which is available to authorized users. Background Toll-like receptors (TLR) belong to the group of pattern recognition receptors (PRR) that identify pathogen-associated molecular patterns (PAMP), which are ubiquitously presented by pathogens but are essentially absent in vertebrates. TLR enable immune cells to fight pathogens by first activating the innate immune response, followed by the induction of antigen-specific effector-as well as memory T cells of adaptive immunity. Therefore, TLR agonists are attractive candidates for the development of therapeutic immune modulators to treat a broad range of diseases like cancer, asthma, allergies, or infections [1][2][3]. 
Among the more than ten currently known TLR in humans, TLR9 is predominantly expressed by plasmacytoid dendritic cells (pDC) and B cells and plays a major role in detecting invading pathogens with subsequent activation of the immune system [4,5]. TLR9 recognizes non-methylated CG-motifs as PAMP, which are predominantly present in pathogenic DNA, but underrepresented in human nuclear DNA. Since TLR9 is known to broadly activate both the innate and adaptive immunity, TLR9-triggered immune activation can re-activate immune surveillance to effectively recognize tumor-specific antigens on cancer cells of tumor patients. For immunotherapeutic approaches PAMP can be mimicked by synthetic oligodeoxynucleotides (ODN) containing non-methylated CG-motifs [6,7]. There is already a remarkable history of synthetic ODN developed to target TLR9 dating back to the first synthetic DNA oligonucleotide under clinical investigation which was the chemically-modified linear CPG-7909 (PF-35126 76, ProMune) [8]. Other TLR9 agonists with a similar chemical composition soon followed suit especially for application to cancer treatment [2,7,9]. As linear, single-stranded CpG-ODN with natural phosphodiester (PO) backbones are prone to degradation by nucleases, protective modifications are necessary to ensure their persistence in vivo. Therefore, it is common to chemically modify these CpG-ODN with phosphorothioates (PTO) as previously used for antisense therapeutics [10]. However, these PTO-modifications lead to off-target side effects like prolongation of blood clotting time via inhibition of the intrinsic tenase complex [11,12], non-specific binding to various proteins (i.e., transcription factors), thereby affecting cell signaling [13], platelet activation [14] and causing acute toxicities via complement activation in rhesus monkeys [15,16]. In mice, these chemical modifications dramatically altered morphology and functionality of lymphoid organs, and can induce hemophagocytic lymphohistiocytosis and macrophage activation syndrome [17][18][19]. Furthermore, the resulting narrow therapeutic window of the early PTO-modified TLR9 agonists led to a discontinuation of advanced clinical studies [20,21]. A second molecular family of TLR9 agonists, dSLIM®, was recently introduced which consists of dumbbell-shaped, covalently-closed DNA molecules devoid of any PTO or other artificial modifications [19,22]. Here, we describe the development of EnanDIM® molecules which constitute a novel molecular family of L-nucleotide-protected TLR9 agonists. The members of the EnanDIM® family described here are stabilized and protected against nucleolytic degradation through enantiomeric nucleotides by positioning of L-deoxyribose-containing nucleotides at the DNA 3′-end. Although not prevalent in current vertebrates L-deoxyribose-containing nucleotides are capable of forming L-DNA, the enantiomer of natural D-DNA [23]. As DNA processing enzymes, like nucleases, and DNA components co-evolved, present mammalian exonucleases are blind for L-deoxyribose and the resulting L-DNA backbone thereby leaving L-nucleotide-protected ODN intact [24,25]. Here, we investigated the EnanDIM® molecular family with respect to immunological potentialboth in vitro and in vivoand assayed for possible toxicological effects of maximum feasible doses in mice. 
To establish its potential in immuno-oncology, we evaluated the capacity of EnanDIM® molecules to modulate the tumor microenvironment (TME) and characterized the resulting anti-tumor effects, including long-term immune memory in various mouse tumor models. The necessary and sufficient properties as immune surveillance reactivators (ISR) for cancer immunotherapy, i.e. efficacious and broad activation of innate and adaptive immunity, absence of clinically relevant off-target effects, and stability against nucleolytic degradation, are all successfully realized in EnanDIM®, the novel molecular family of TLR9 agonists. L-nucleotide-protected TLR9 agonists ODN with terminal L-nucleotides were synthesized by BioSpring, Axolabs or TIB Molbiol. After chromatographic purification, ODN were either ultra-diafiltrated or reconstituted in the indicated solvent. In vitro stimulation of cells Buffy coats from anonymized healthy donors were obtained from the "DRK-Blutspendedienst -Ost". Peripheral blood mononuclear cells (PBMC) were isolated by density gradient centrifugation using Ficoll (Biochrom). pDC were prepared using the Human Diamond All flow cytometric parameters of cells were acquired on a FACSCalibur (BD Biosciences). Frequencies were related to the indicated parent populations; geometric means of fluorescent cells were indicated as mean fluorescence intensity (MFI). Data were analyzed with the FlowJo software. Cytokine and chemokine determination Secreted cytokines were accumulated in cell growth medium for 2d. ELISA for IFN-alpha (eBioscience), IFN-gamma (OptEIA Human IFN gamma ELISA Set, BD Biosciences), IP-10 (interferon-inducible protein 10, CXCL10), IL-8, and MCP-1 (monocyte chemoattractant protein-1, CCL2) (all from R&D Systems) were performed in duplicates according to the manufacturer's instructions. Optical density was measured at 450 nm; the data were analyzed with the MicroWin software (Berthold Technologies). Alternatively, cytokine levels in the cell growth medium were determined in duplicates by a bead-based multiplex immunoassay (FlowCytomix from eBioscience) according to the manufacturer's instructions. Data were acquired on a FACSCalibur and evaluated with the FlowCytomixPro software (eBioscience). In vitro TLR9 model A murine reporter cell line (ELAM41) was obtained from K. Stacey [26]. Briefly, ELAM41 was generated by stably transfecting cells from the established mouse macrophage line RAW264.7 with a fluorescent protein-expressing DNA construct under the control of the human nuclear factor kappa-light-chain-enhancer of activated B-cells (NF-kappaB) responsive elastin promoter [26]. Thereby, the endogenous mouse TLR9 of ELAM41 cells was functionally coupled to the expression of the enhanced green fluorescent protein (eGFP). ELAM41 cells were incubated in the presence of the indicated concentrations of EnanDIM-C or the CG-free variant of EnanDIM-C, EnanDIM-C(-CG). If not shown otherwise, after 7 h the amount of fluorescent protein was determined via flow cytometry. Maximum feasible dose (MFD) and immunological study CD-1 mice were subcutaneously (s.c.) injected with vehicle (0.9% NaCl), or a total dose of 10 mg EnanDIM-C or 50 mg EnanDIM-A after being assigned to experimental groups with each 10 mice by the body weight stratification method. The total dose was divided into four different injections of 0.25 mL per animal, each two hours apart, administered to four different sites on the back on day 1 of the study. 
Safety assessment relied on observed mortality, clinical signs and body weight recorded throughout the study period (15d). Immune response was assessed by the determination of CD169positive cells within the CD11b + CD11c − monocyte/ macrophage population (via flow cytometry, the following antibodies were used: anti-CD169, clone SER-4 [eBioscience]; anti-CD11b, clone M1/70 [BD Biosciences]; anti-CD11c, clone HL3 [BD Biosciences]) and analysis of IP-10 (via ELISA, R&D Systems) at two time points: 24 h after first injection and at sacrifice (day 15). All animals were sacrificed and subjected to a gross necropsy consisting of a macroscopic evaluation of the tissues/organs contained in the abdominal and thoracic cavities. To determine the immunological profile after a single administration EnanDIM-C/−A each 9 female Balb/c mice were distributed into 5 experimental groups by body weight. Animals were injected s.c. with vehicle (PBS), 200 μg or 1000 μg of either EnanDIM-C or EnanDIM-A, respectively. Three animals from each group were sacrificed at different time points: 6 h, 12 h and 24 h after injection. An additional group of three non-injected mice served as naïve control (time point 0 h). IP-10 levels were determined as above. Mouse tumor models Female C57BL/6 and Balb/c mice (age 6-8 weeks) were housed and treated in accordance with the regulations of the Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC). Tumors were engrafted by s.c. injection of 100 μl tumor cell suspension in the flank. Tumor length and width were determined using calipers and volume was calculated as (width 2 × length)/2. Mice were randomized prior to i.tu. treatment when tumors were well established (40-140 mm 3 , reached at day 3 to 13 after tumor inoculation), mice being treated with s.c. application of EnanDIM were randomized according to body weight and treatment was started the day after tumor inoculation. ELISpot assay. Spleen cells of 8 mice surviving double EMT-6 tumor inoculation as well as CT26 tumor inoculation and spleen cells from three naïve mice were prepared. For ELISpot assay (Mabtech, No. 3321-4HPW-2) 8 × 10 5 spleen cells were co-cultured with 8 × 10 4 mitomycin C-treated (100 μg/ml) tumor cells (EMT-6, CT26, Renca) or with AH1 peptide (Anaspec, 1 μg/ml, H2-Ldrestricted epitope derived from glycoprotein 70 expressed in CT26 cells) for 24 h in triplicates. Detection of IFN-gamma secreting cells were done according to the instructions of the manufacturer. For positive controls spleen cells were incubated with 500 ng/ml PMA plus 1 μg/ml Ionomycin; for negative controls, spleen cells were cultured without any additives. Number of spots was analyzed in an ELISpot reader (AID iSpot). For analysis number of spots in the "splenocytes only" approach was subtracted from the respective approaches with tumor cells /tumor peptides. Statistical analyses Data were analyzed with GraphPad Prism 7 (GraphPad Software Inc.). P values < 0.05 were considered significant. The statistical analyses are specified in the figure legends. Design of L-nucleotide-protected TLR9 agonists without chemical modification In contrast to CpG-ODN which achieve metabolic stability mainly by chemical modifications to its backbone, the new family of DNA-based immunomodulators, EnanDIM®, is protected from degradation by a different approach. 
The linear ODN for TLR9 activation described here are protected against 3′-exonucleolytic degradation by the presence of L-deoxyribose-containing nucleotides at their 3′-ends (Fig. 1a, b). Exonucleases and other DNA-processing enzymes recognize D-nucleotides and are blind to L-nucleotides, thereby rendering the 3′-end "incognito" to degradation processes including, for example, the exonuclease activity of T7 polymerase (Fig. 1c). The DNA sequence of the members of this L-nucleotide-protected ODN family was optimized in a screening system using incubation with PBMC. The key optimization parameters for these TLR9 agonists were high secretion of IFN-alpha and IP-10, the central cytokine and chemokine for activation of immune responses by TLR9 agonists. Two candidates were identified for further evaluation, EnanDIM-C and EnanDIM-A (Fig. 1d).

EnanDIM® molecules activate components of the innate and adaptive immune system

Together with cell-cell interaction, secretion of chemo- and cytokines is an important tool of the immune system to mount an anti-tumor response. Treatment of human PBMC with EnanDIM-C resulted in a strong secretion of IFN-alpha, IP-10, MCP-1 and IFN-gamma (Fig. 2a). EnanDIM-C stimulates TLR9-positive pDC and B cells; however, other immune-relevant TLR9-negative cells within human PBMC, like myeloid dendritic cells (mDC), monocytes, natural killer (NK) cells, NKT cells and T cells, are likely activated via pDC-released IFN-alpha or via cell-cell contact with activated TLR9-positive cells (Fig. 2b, c). The broad activation of this spectrum of cell types indicates a strong induction of the innate and the adaptive immune systems. EnanDIM-A exhibited a comparable activation pattern targeting similar components of the immune system (Fig. 2d-f). Nevertheless, each EnanDIM® molecule exhibits a unique pattern of immunomodulatory activity, with EnanDIM-C showing the highest secretion of IFN-alpha and EnanDIM-A the strongest up-regulation of MHC class II on TLR9-bearing pDC (Fig. 2g, h).

Fig. 2 Immunological activation profile of EnanDIM-C (a-c), EnanDIM-A (d-f) and differences between both molecules (g, h). Human PBMC were treated without (black open squares) or with EnanDIM® molecules (blue filled squares) at a final concentration of 3 μM for 48 h. Cytokines/chemokines were measured in cell culture supernatants (a n = 14-48, d n = 12-38, h n = 21) and activation of immune cells was analyzed by flow cytometry (b n = 13-29, c, e n = 12-34, f, g). Means are shown; differences between EnanDIM®-treated PBMC and controls were calculated using the paired t-test (*p < 0.05, **p < 0.01, ***p < 0.001) (a, b, d, e, h). Results from representative experiments are shown (c MFI of CD169 within monocytes, f frequency of CD86 within B cells, g HLA-DR expression of pDC).

TLR9-specificity of EnanDIM® in vitro

To further investigate the mode-of-action of EnanDIM® molecules and their TLR9-specificity, the effect of EnanDIM-C and EnanDIM-A on isolated TLR9-positive pDC was compared. While EnanDIM-C induced a stronger IFN-alpha production by pDC, EnanDIM-A caused a more pronounced increase of CD80 and CD86 surface marker expression on pDC, confirming the preferential immune response pattern of each EnanDIM® molecule (Fig. 3a). The observed immune activation was strictly dependent on the presence of CG-motifs, known to be recognized by TLR9. Cytokine secretion and cellular activation were abrogated when a CG-depleted variant, EnanDIM-C(-CG), was used (Fig. 3b). This was confirmed in the reporter cell line ELAM41 [26], where EnanDIM-C, but not EnanDIM-C(-CG), stimulated the TLR9-triggered NF-kappaB pathway, indicated by expression of eGFP resulting in increased fluorescence (Fig. 3c). Furthermore, it was shown that an intact type I IFN pathway is crucial for the immunomodulatory effect of EnanDIM-C, since co-incubation with B18R protein (a vaccinia virus-encoded receptor with binding capability to type I interferons) clearly reduced the activation of TLR9-negative cells and the secretion of IP-10 (Fig. 3d). The reduction of B cell activation was less pronounced, since their TLR9 positivity allows a direct stimulation.

Cytotoxic activity of EnanDIM® in vitro

To provide evidence that stimulation of NK cells within human PBMC by EnanDIM® molecules converts them into effective tumor-destroying cells, functional experiments to analyze NK cell-mediated cytotoxicity were performed. PBMC were stimulated with EnanDIM-C and subsequently co-cultured with Jurkat cells, a human T leukemic cell line, as target cells. Indeed, an increased death of target cells was observed, indicating the induction of NK cell-mediated cytotoxicity (Fig. 3e). Taken together, the data obtained from in vitro studies confirm the proposed mode-of-action of EnanDIM® molecules primarily targeting TLR9-positive cells and thus triggering subsequent broad innate and adaptive immune responses (Fig. 3f).

Immunologic activity of EnanDIM® and lack of acute toxicity in vivo

EnanDIM® molecules were used in a maximum feasible dose (MFD) mouse study to evaluate their acute toxicity at very high single doses. EnanDIM-A was injected subcutaneously (s.c.) at 50 mg (approx. 2000 mg/kg) and EnanDIM-C at 10 mg (approx. 400 mg/kg). Neither of the EnanDIM® molecules led to mortality, clinical signs or body weight changes, and macroscopic organ evaluation at day 15 also revealed no signs of toxicity (data not shown). In this model, treatment of mice with EnanDIM® molecules resulted in a clear peripheral immune activation represented by increased levels of IP-10 after 24 h (Fig. 4a). At the same time, up-regulation of CD169 on monocytes/macrophages was observed, however only for EnanDIM-C (Fig. 4b), while the strongly activated monocytes/macrophages in EnanDIM-A-treated mice had likely already migrated into lymphoid tissues at the time of analysis [27]. As expected, the immune activation had subsided 15 d after the injection (data not shown). In line with this, an early dose-dependent increase of serum IP-10 levels with a peak after approximately 6 h was visible both for EnanDIM-C and EnanDIM-A (Fig. 4c).

Modulation of the TME by EnanDIM® in vivo

The presence of CD8+ T cells in the TME is a crucial prerequisite for the success of immuno-oncological approaches. Given their mode-of-action, EnanDIM® molecules should provide signals for recruitment of immune cells to the TME. EnanDIM-C was injected into established tumors in the CT26 colon carcinoma model (Fig. 4d). Tumor growth was significantly reduced in EnanDIM-C-treated mice (Fig. 4e, f), and immunohistochemical analysis of tumors showed a significant increase of CD8+ T cells within the tumor (Fig. 4g, h). A trend towards a correlation of high CD8+ T cell numbers with small tumor volumes was also observed (Fig. 4i).
Anti-tumor effects of EnanDIM® in syngeneic murine tumor models

EnanDIM-C was next evaluated for its anti-tumor effects across a broad range of syngeneic murine tumor models, including Pan02 (pancreas carcinoma), MC38 (colon carcinoma), and B16F10 (melanoma). After s.c. inoculation of the respective tumor cells into Balb/c or C57BL/6 mice, EnanDIM-C (or vehicle) was injected multiple times into established tumors. The anti-tumor effect of EnanDIM-C varied with respect to tumor model, ranging from low (Pan02) to clear (MC38, B16F10) reduction of tumor growth and, consequently, prolongation of survival (Fig. 5a-c). In order to evaluate a route of administration intended for broader potential clinical application, EnanDIM-C was injected systemically (s.c.) in the CT26 model, which showed an anti-tumor effect comparable to local (i.tu.) injection in this model (Fig. 5d, e).

Long-lasting immune memory through EnanDIM® in EMT-6 and A20 murine tumor models

Treatment of mice with EnanDIM-C in the syngeneic A20 lymphoma model showed substantial tumor growth inhibition (TGI) of 78% and a highly significant increase of survival (Fig. 6a). In fact, in six out of ten mice the tumors completely disappeared. Five surviving mice were subsequently subjected to a re-challenge with A20 cells without any further treatment. All five mice completely rejected the second inoculation of A20 cells, in contrast to age-matched naïve mice, indicating that clearance of the initial lymphoma following EnanDIM-C treatment resulted in the formation of protective anti-tumor memory. However, re-challenge of surviving mice with CT26 tumor cells led to tumor growth in all mice, indicating no cross-reaction of the induced immunity (Fig. 6b). EnanDIM-C induced a profound anti-tumor effect in the syngeneic EMT-6 breast cancer model with substantially inhibited tumor growth (TGI: 85%) and significantly augmented survival (Fig. 6c). Notably, eight out of ten mice showed complete regression of the tumor, and re-challenge of the surviving mice with EMT-6 cells led to tumor-free survival of all of them, in contrast to age-matched naïve mice. Again, EnanDIM-C induced a sustained anti-tumor immune memory against EMT-6 cells. Unexpectedly, re-challenge of surviving mice with CT26 tumor cells led to tumor rejection in all surviving mice, indicating a cross-reactive immunity between antigens expressed by EMT-6 and CT26 cells (Fig. 6d). This was confirmed by an ELISpot assay, showing a significantly increased number of IFN-gamma-secreting cells after re-stimulation with EMT-6 or CT26 tumor cells as well as Renca cells compared to naïve mice (Fig. 6e).

Discussion

EnanDIM® molecules constitute a novel family of L-nucleotide-protected TLR9 agonists. They induce a broad stimulation of cells involved in innate and also adaptive immune responses, with pDC and B cells as primary and mDC, monocytes, NK cells, NKT cells and T cells as secondary target cells. Together with the elicited chemokines/cytokines, these TLR9 agonists play crucial roles in the body's anti-tumor immune response: IFN-alpha stimulates several key regulatory immune cells and thereby initiates innate and also adaptive immune responses [28], the latter especially by activating CD8-alpha+ dendritic cells able to cross-present antigens to cytotoxic T cells [29,30]. The chemokine IP-10 attracts activated T and NK cells and also has angiostatic potential [31,32]. IFN-gamma may be secreted by NK cells in response and is one of several mediators of a TH1 immune response [33].
Notably, the secretion of the pro-inflammatory and angiogenic cytokine IL-8 was only moderately induced by EnanDIM® and considerably less when compared with other TLR9 agonists (Additional file 1: Figure S1). Strong IL-8 secretion induced by class B CpG-ODN containing a complete PTO backbone was independent of TLR9-binding CG-motifs [34]. In general, off-target immunological effects on certain immune cell populations, usually caused by PTO-modifications in CpG-ODN, were not detectable with EnanDIM-C. This was supported by the absence of toxicities in an MFD study using EnanDIM-C and EnanDIM-A at very high dose levels in vivo, which contrasts with previous publications describing side effects for PTO-modified CpG-ODN [16][17][18]. Taken together, EnanDIM-C/-A show a beneficial immune profile, and initial data from the MFD study may predict an absence of toxicities and severe adverse events in subsequent clinical development. EnanDIM-C treatment led to a recruitment of CD8+ T cells to the TME in vivo, which can be explained by the primary activation of pDC to induce secretion of IFN-alpha synergizing with the secondary induction of IFN-gamma for the secretion of IP-10 from monocytes [35]. Effector CD8+ T cells, TH1 cells and NK cells express CXC-chemokine receptor 3 (CXCR3), which is the receptor for the TH1-type chemokines CXC-chemokine ligand 9 (CXCL9) and IP-10 (CXCL10). These cells can migrate into tumors in response to these chemokines [32,36,37], thereby increasing the number of T cells in the tumor. The presence of a T cell-inflamed TME in so-called "hot tumors" is linked with improved responses to cancer immunotherapies including checkpoint inhibitors [38,39]. We have shown that this modulation of the TME, reflected in CD8+ T cell infiltration, is associated with a reduction of tumor growth in the CT26 colon carcinoma model. The anti-tumor effect was also observed for other tumor models and consequently resulted in an improved survival of mice. More importantly, treatment with EnanDIM-C resulted in a complete tumor regression in the majority of mice in the A20 lymphoma and EMT-6 breast cancer models, and all surviving mice rejected tumor cells in a re-challenge study, suggesting a sustained immune memory against the tumor. The complete regression of established tumors in the EMT-6 model is especially remarkable, since therapeutic blockade of PD-L1 alone had little or no effect in this model [40]. EMT-6 is known for its immune-excluded TME, and only a combination of blockade of PD-L1 and TGF-beta resulted in a pronounced anti-tumor effect together with an infiltration of T cells [40]. Furthermore, complete tumor rejection of a secondary CT26 tumor may indicate cross-reactivity against shared antigens between different tumor types, confirmed by ELISpot responses not only against EMT-6 but also against CT26 and even Renca cells. Anti-tumor efficacy of EnanDIM-C varied between different syngeneic models, indicating different tumor properties in responding to immune modulation. It is well known that syngeneic models differ in their immunogenicity and in their ability to respond to immuno-oncological approaches including checkpoint inhibitors: analyses of syngeneic murine tumor models revealed strong differences in mutational load, type of mutations, gene expression in immune-related pathways as well as composition and magnitude of the tumor immune infiltrates [41].
Immunosuppressive cell types in the TME dominated in tumor models that did not respond to immune-checkpoint blockade, whereas cytotoxic effector immune cells were enriched in responsive models. The described immunological features of EnanDIM® molecules, their potent anti-tumor responses and the lack of off-target effects render them an ideal combination partner for other immunotherapeutic approaches. In particular, since the mode-of-action of EnanDIM® molecules via TLR9 starts upstream of the targets of checkpoint inhibitors such as anti-PD-1/anti-PD-L1, a combinatory approach may be ideally suited for a synergistic immune activation and thus enhanced anti-tumor effects. This was recently shown for PTO-modified TLR9 agonists in mouse tumor models [42,43] and also in a clinical trial in advanced melanoma [44]. A major question regarding the use of EnanDIM® molecules with different sequences is how these TLR9 agonists are able to induce different immunological responses in the same donor PBMC when only one receptor, TLR9, is involved. A possible explanation for this phenomenon is that molecules with distinct sequences may exhibit differences in molecule uptake, intracellular distribution and receptor binding [45] (Fig. 7). The distribution into two distinct types of endosomes results in the induction of two specific signaling pathways: a) activation of NF-kappaB, inducing the production of pro-inflammatory cytokines and acquisition of antigen-presenting function, and b) activation of interferon regulatory factor 7 (IRF7), leading to type I IFN (e.g. IFN-alpha) production [46,47], which is crucial to link the stimulated innate response to the adaptive arm of the immune system [28]. In this way, differential mixtures of IFN-alpha and other cytokines as well as the induction of specific surface molecules in pDC determine the subsequent characteristic activation of secondary target cells such as NK cells, monocytes and mDC. Our own data regarding IFN-alpha secretion of pDC, target specificity for TLR9 and the dependency of a broad immune activation by EnanDIM® molecules on an intact type I IFN pathway support the relevance of the published signaling pathway of TLR9 agonists [46,47] for EnanDIM®. As described, EnanDIM-C is a potent inducer of the IFN-alpha pathway resulting in a strong activation of TLR9-negative cells, like NK cells and monocytes, but less pronounced B cell activation. However, with EnanDIM-B we identified an EnanDIM® molecule mainly triggering the B cell pathway (Additional file 2: Figure S2). In this way it will be possible to broaden the spectrum of possible applications for the EnanDIM® family.

Conclusion

EnanDIM® molecules activate the innate and adaptive immune system. Their mode-of-action is strictly dependent on the presence of CG-motifs specifically targeting TLR9. Off-target effects are avoided due to the lack of chemical modifications, and a wide therapeutic window may thus be enabled. The immunological and resulting anti-tumor potential of EnanDIM®, including beneficial TME modulation, inhibition of tumor growth and induction of long-lasting tumor-specific memory, favors the EnanDIM® family of TLR9 agonists for further preclinical and clinical development as cancer immunotherapy for systemic and local administration.

Additional files

Additional file 1: Figure S1.

Fig. 6 Persistent immunological anti-tumor memory induced by EnanDIM-C in two syngeneic tumor models. 10 mice per group were inoculated s.c. with either A20 lymphoma (a, b) or EMT-6 breast cancer (c, d) tumor cells. Established tumors (40 mm³, day 3 to day 7) were injected with EnanDIM-C or vehicle (i.tu.). Mean tumor growth + SEM (left) and Kaplan-Meier survival plots (right) are shown (log-rank analyses: A20, p < 0.0001; EMT-6, p < 0.0001). Mean tumor growth curves are continued until 50% of mice treated with vehicle have been sacrificed. The light blue bar on each x axis indicates the treatment period. b, surviving mice from the A20 tumor model as well as age-matched naïve mice were re-challenged with A20 cells (1st re-challenge) and surviving mice were subsequently re-challenged with CT26 cells (2nd re-challenge) as specified. d, surviving mice from the EMT-6 tumor model as well as age-matched naïve mice were re-challenged with EMT-6 cells (1st re-challenge) and surviving mice were subsequently re-challenged with CT26 cells (2nd re-challenge) as specified. Individual tumor growth of all mice is shown and respective tumor cell injections are marked. e, spleen cells from naïve mice and from mice surviving two EMT-6 and one CT26 tumor inoculation were collected, co-cultured with either EMT-6 cells, CT26 cells, AH1 peptide or Renca cells and subjected to ELISpot assay to quantify the IFN-gamma-secreting cells.

Fig. 7 Mode-of-action of EnanDIM® molecules: differential activation of secondary cells. Variation of sequences, and thus secondary conformation, has an influence on ① unspecific DNA uptake by TLR9-positive cells, ② differential uptake into the respective (early or late) endosomes, resulting in either IRF7 or NF-kappaB activation followed by induction of a specific cytokine pattern and surface marker expression and thus differential activation of (TLR9-negative) cell subpopulations, and ③ specific binding to TLR9 and thus strength of the response and subsequent thresholds for cell activation (which may differ between cell types).
Increases in Water Balance-Derived Catchment Evapotranspiration in Germany During 1970s–2000s Turning Into Decreases Over the Last Two Decades, Despite Uncertainties

Understanding variations in catchment evapotranspiration (E_C) is critical as it directly affects water availability for humans and ecosystems. Previous studies found increases in E_C in Central Europe over recent decades, but fixed study periods may not fully reveal inter-decadal hydroclimatological variability. We performed a multi-temporal trend analysis of water balance-derived E_C for 461 German catchments and the period 1964–2019. We accounted for previously often neglected changes in storage and uncertainties in precipitation. E_C generally increased throughout Germany during the 1970s–2000s (>2 mm year⁻²), while it showed milder changes and decreases afterward. These variations were robust to uncertainties in precipitation (median relative uncertainty of 26%) and broadly coherent with sparse plot-scale data. Variations in E_C were related to variations in precipitation and radiation, with a potentially increasing influence of precipitation after the 2000s. These findings provide a reference for synthesizing current knowledge on variations in E_C and their uncertainties.

Introduction

Evapotranspiration (E) couples the water, energy, and carbon cycles. Since E is a major water flux, a proper understanding of current and potential future variations in E is critical to predict water availability for humans and ecosystems. Our knowledge of past variations in E across regions is still limited, mainly because of the difficulty in measuring E throughout the landscape and the interplay among multiple drivers, like changes in climate and land cover (Teuling et al., 2019).

Previous studies largely found increases in E in Central Europe over past decades (Duethmann & Blöschl, 2018; Hobeichi et al., 2021; Pan et al., 2020; Pluntke et al., 2023; Teuling et al., 2009; Ukkola & Prentice, 2013; Yang et al., 2023). Pluntke et al. (2023) reported increases in catchment E (E_C, from the observed water balance) of 2.1 mm year⁻² over 1969–2019 at the experimental Wernersbach catchment in Eastern Germany, and Duethmann and Blöschl (2018) reported average increases in E_C of 2.9 mm year⁻² across 156 Austrian catchments over 1977–2014. Most studies analyzed trends over fixed study periods, which may not fully reveal inter-decadal variability (Hannaford et al., 2013, 2021; Vicente-Serrano et al., 2021). For example, Duethmann and Blöschl (2018) noticed that increases in E_C over Austrian catchments mostly occurred between 1980 and 1995, and Pluntke et al. (2023) did not detect significant increases in E from a flux tower in the Wernersbach catchment between 1997 and 2019. Whether increases in E extend to larger scales and more recent decades still remains open.

Flux towers and lysimeters provide direct E measurements at the plot scale, but these are available for a limited number of sites and typically do not cover inter-decadal periods. Diagnostic products offer regional E estimates by relying on satellite data or by upscaling plot-scale data (Pan et al., 2020), but their representativeness at the catchment scale remains debated (Lehmann et al., 2022; Tan et al., 2022). One of the firmest observational approaches to assess long-term variations in E remains estimating catchment evapotranspiration (E_C) from the observed water balance:

E_C = P − Q − ΔS    (1)

where P is precipitation, Q streamflow, and ΔS changes in the terrestrial water storage, S.
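As a minimal illustration of Equation 1, the sketch below computes annual E_C from catchment-average precipitation, streamflow, and a storage-change term; the arrays are invented and the hydrological-year bookkeeping is simplified compared to the study's actual processing.

```python
import numpy as np

def annual_catchment_evapotranspiration(p_mm, q_mm, ds_mm=None):
    """Annual E_C from the catchment water balance: E_C = P - Q - dS (all in mm/year).

    If no storage-change series is given, dS is assumed negligible (a common,
    but not always safe, simplification discussed in the text).
    """
    p = np.asarray(p_mm, dtype=float)
    q = np.asarray(q_mm, dtype=float)
    ds = np.zeros_like(p) if ds_mm is None else np.asarray(ds_mm, dtype=float)
    return p - q - ds

# Invented annual values (mm/year) for a hypothetical catchment
p = np.array([820.0, 760.0, 905.0, 680.0])
q = np.array([350.0, 310.0, 420.0, 260.0])
ds = np.array([15.0, -25.0, 40.0, -30.0])
print(annual_catchment_evapotranspiration(p, q, ds))   # [455. 475. 445. 450.]
```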
Yet, uncertainties in observed water balance components may involve considerable uncertainties in E_C (Kampf et al., 2020). While Q can be measured at the catchment scale and annual estimates are typically associated with relatively low uncertainties, P and the assumptions on ΔS can be considered the main sources of uncertainty for E_C estimates. Uncertainties in P variations result from measurement errors, the upscaling of point measurements, and potential inhomogeneities over time. Whereas previous studies investigated the effect of different products on P trends (Duan et al., 2019; Gomis-Cebolla et al., 2023; Jiao et al., 2021), few works did so for E_C trends (Ukkola & Prentice, 2013). Furthermore, most studies on E_C trends assumed ΔS negligible over multi-year periods (Duethmann & Blöschl, 2018; Teuling et al., 2009; Ukkola & Prentice, 2013; Vadeboncoeur et al., 2018), even though ΔS can be relevant over 10-year periods for some catchments (Bruno et al., 2022; Han et al., 2020). The Gravity Recovery And Climate Experiment (GRACE) satellite mission provides estimates of S anomalies, but only at 1° resolution and since 2002. Groundwater is generally a main component of S; nevertheless, data on groundwater levels (GWL) are usually limited and they require local knowledge of the specific yield of wells to derive S variations. For catchments where Q is mainly controlled by S, the portion of S connected to Q variations (dynamic S, S_dyn; Staudinger et al., 2017) can be estimated through streamflow recession analysis (Kirchner, 2009). Yet, estimates of S_dyn have hardly been used to derive E_C (Aulenbach & Peters, 2018) and to assess uncertainties associated with the neglect of ΔS in E_C trends.

Here we characterize variations of water balance-derived E_C for Germany over the last six decades, which we further compare to variations from plot-scale E data. To increase the robustness of E_C estimates, we use a homogeneous P data set and estimates of changes in S_dyn (ΔS_dyn). We further assess uncertainties in the estimates of E_C variations by using different P products and alternative S data for the more recent decades. Finally, we evaluate variations in the main climatic drivers of E_C, that is, P, as a proxy for available moisture, and radiation (R), as the main contributor to past changes in atmospheric evaporative demand (Duethmann & Blöschl, 2018).
Study Area and Streamflow Data

We selected Germany as the study area and focused on catchments completely within it to exploit national-scale climatic data which do not cover transboundary areas (Section 2.1.2). We collected daily Q data and catchment boundaries from the environment agencies of the German Federal States, the Global Runoff Data Center, and the Global Streamflow Indices and Metadata Archive (Do et al., 2018a, 2018b). We selected catchments with an area of 50–1000 km² to focus on near-natural conditions (Stahl et al., 2010), excluding catchments with known direct human impacts (e.g., water withdrawals and transfers) from information by data providers. We only included catchments with less than 5% of missing data per year and at least 28 years of data over the study period (1964–2019, with hydrological years starting in November throughout the manuscript). To ensure high data quality we retained only catchments with annual Q less than annual P, and without change points in annual Q (Pettitt test, Pettitt, 1979, p ≤ 0.05), which frequently indicate problems in Q data (Slater et al., 2021). This led to 461 partly nested study catchments with median area (interquartile range) of 173 (105/320) km² and elevation of 348 (145/502) m a.s.l. (Figure S1 and Table S1 in Supporting Information S1). We grouped these catchments into three regions (Figure S1 in Supporting Information S1), following the German river classification system (Pottgiesser & Sommerhäuser, 2004) and long-term variations in P and S_dyn (Figure S2 in Supporting Information S1).

Climatic Data

To focus on long-term consistency in P data, we used a German-wide gridded data set which relies on a constant station network over time (P1). It applies the SPHEREMAP method (Shepard, 1968; Willmott et al., 1985) to interpolate daily data from 1300 rain gauges of the German Weather Service (DWD) to a resolution of 0.11°, as used by Hoffmann et al. (2018). To quantify the uncertainties in E_C variations from uncertainties in P, we used additional P products. The HYRAS data set, provided by DWD at 1 km and daily resolution (DWD, 2023b) for Germany, is based on all available stations (up to ≥6,000, Rauthe et al., 2013). This ensures high resolution and station density, which however varies over time. E-OBS is a European-wide data set at daily and 0.1° resolution from station data of the European Climate Assessment and Data set (v26.0e, Cornes et al., 2018), comprising an ensemble of 20 interpolations to reflect uncertainties from spatial upscaling. Finally, ERA5-Land provides global P fields at resolutions of 9 km and 1 hr (Muñoz-Sabater et al., 2021). As a reanalysis product, it overcomes potential data inhomogeneities over time.

To avoid systematic P underestimation from gauge undercatch, we corrected the observational products following Richter (1995), who proposed corrective coefficients for the study area depending on precipitation type and gauge exposure. To discriminate between rain and snow, we used air temperature (T) data from the E-OBS data set (v26.0e, Cornes et al., 2018). We assumed corrective coefficients for moderately sheltered locations for all cells, given the low impact of using alternative assumptions on E_C trends (Duethmann & Blöschl, 2018). Finally, for each catchment we computed catchment-average P time series to estimate E_C (Section 2.2.2) and analyze variations in P as a driver of variations in E_C (Section 2.2.4).
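The gauge-undercatch correction described above multiplies measured precipitation by a factor that depends on precipitation type and gauge exposure; the sketch below only illustrates the general form, with a simple temperature threshold for rain/snow separation. The threshold and the correction factors are placeholder values chosen for illustration, not the Richter (1995) coefficients used in the study.

```python
import numpy as np

# Placeholder correction factors for a "moderately sheltered" gauge
# (illustrative only; the study applies the coefficients of Richter, 1995)
CORRECTION_FACTOR = {"rain": 1.05, "snow": 1.20}
SNOW_TEMPERATURE_THRESHOLD_C = 1.0   # assumed rain/snow separation temperature

def correct_undercatch(p_mm_day: np.ndarray, t_air_c: np.ndarray) -> np.ndarray:
    """Apply a multiplicative undercatch correction to daily precipitation."""
    is_snow = t_air_c < SNOW_TEMPERATURE_THRESHOLD_C
    factors = np.where(is_snow, CORRECTION_FACTOR["snow"], CORRECTION_FACTOR["rain"])
    return p_mm_day * factors

# Invented daily values
p = np.array([5.0, 0.0, 12.0, 3.0])   # mm/day
t = np.array([8.0, -2.0, 0.5, 4.0])   # deg C
print(correct_undercatch(p, t))        # [ 5.25  0.   14.4   3.15]
```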
To investigate variations in R as an additional climatic driver, we used data from 32 stations provided by DWD (DWD, 2023a) with <25% missing daily data per year and ≥20 years of data over the study period. We computed annual anomalies for each station and averaged them over the study regions to derive regional average annual R anomalies.

Benchmark Storage Data

We obtained regional anomalies in S from GRACE (S_GRACE henceforth) over 2003–2019. To minimize uncertainties from specific GRACE products, we retrieved the CSR mascon product (RL06v02, Save et al., 2016) and the GFZ Level-3 product (RL06v05, Boergens et al., 2020), and we averaged them over the study regions. Furthermore, we used GWL data from the environment agencies of the German Federal States compiled by CORRECTIV.Lokal (2022). We selected 1052 wells located within our study catchments and with no more than 1 month of missing data in each year over 2003–2019.

Additional Evapotranspiration Data for Comparison

We used E data from grass-covered and forested monitoring sites (flux towers and lysimeters, Figure S1 and Table S2 in Supporting Information S1). Since long-term data are sparse, we included sites in neighboring countries. Specifically, we used lysimeter data from Britz (Müller, 2009), St Arnold (Harsch et al., 2009), Rheindhalen, and Rietholzbach (Hirschi et al., 2017). We further selected 11 flux towers with ≥10 years of quality-checked and continuous data (<25% of missing daily data) over the study period from the Fluxnet2015 data set (Pastorello et al., 2020), the Warm Winter 2020 data set (Warm Winter 2020 Team & ICOS Ecosystem Thematic Centre, 2022), the European Fluxes Database Cluster, Hörtnagl, Buchmann, et al. (2023), Hörtnagl, Shekhar, et al. (2023), and Pluntke et al. (2023). We pre-processed the data to avoid suspicious data and changes in land cover or site management, based on information from data providers, and we associated each monitoring site with a study region (Table S2 in Supporting Information S1).

Water Balance-Derived Catchment Evapotranspiration

We estimated annual E_C from the observed water balance (Equation 1). As a first-order approximation of ΔS, we derived ΔS_dyn from the analysis of streamflow recession data as proposed by Kirchner (2009) for catchments where Q generation is mainly controlled by S_dyn (details in Text S1 in Supporting Information S1). Briefly, we followed Brutsaert (2008) and Stoelzle et al. (2013) for the selection of recession periods and the derivation of catchment-specific recession parameters, which we then used to estimate S_dyn from Q data (Kirchner, 2009). We calculated annual series of ΔS_dyn from S_dyn at the beginning and end of each hydrological year. We removed unrealistic values, which can occur in individual catchments and years due to uncertainties in streamflow recession analyses (Stoelzle et al., 2013) and violation of the assumption of Q being mainly controlled by S_dyn (Text S1 in Supporting Information S1).

For estimating inter-decadal E_C variations, we derived E_C from the precipitation data set P1 and ΔS_dyn, as a "best estimate". We used additional P products (Section 2.1.2) and different assumptions on ΔS (negligible, approximated by ΔS_dyn or by changes in S_GRACE, ΔS_GRACE) for alternative E_C estimates to evaluate uncertainties in inter-decadal E_C variations.
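A strongly simplified sketch of the recession-based storage estimate is given below: it fits a power-law recession −dQ/dt = a·Q^b to flagged recession days and integrates the resulting storage–discharge relation to obtain S_dyn (Kirchner, 2009). The recession-period selection, parameter fitting, and outlier handling here are deliberately minimal and are not the exact procedures of Brutsaert (2008) or Stoelzle et al. (2013) used in the study.

```python
import numpy as np

def fit_power_law_recession(q: np.ndarray, recession: np.ndarray):
    """Fit -dQ/dt = a * Q**b from daily streamflow on flagged recession days."""
    dq_dt = np.diff(q)                                  # per-day change in Q
    q_mid = 0.5 * (q[:-1] + q[1:])
    mask = recession[:-1] & recession[1:] & (dq_dt < 0) & (q_mid > 0)
    # linear fit in log-log space: log(-dQ/dt) = log(a) + b * log(Q)
    b, log_a = np.polyfit(np.log(q_mid[mask]), np.log(-dq_dt[mask]), 1)
    return float(np.exp(log_a)), float(b)

def dynamic_storage(q: np.ndarray, a: float, b: float) -> np.ndarray:
    """S_dyn from integrating dS = dQ / g(Q), with g(Q) = a * Q**(b - 1) and b != 2."""
    return q ** (2.0 - b) / (a * (2.0 - b))

# Invented example: a synthetic exponential recession Q(t) = Q0 * exp(-k t), so b should be ~1
t_days = np.arange(120)
q = 3.0 * np.exp(-0.02 * t_days)                        # streamflow in mm/day
recession = np.ones_like(q, dtype=bool)                 # whole series treated as one recession
a, b = fit_power_law_recession(q, recession)
s_dyn = dynamic_storage(q, a, b)
print(round(b, 2))                                      # close to 1 for an exponential recession
print(round(s_dyn[0] - s_dyn[-1], 1))                   # dynamic-storage drawdown over the recession (mm)
```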
Trend Analysis

For trend detection we adopted the Mann-Kendall test (significance level of 0.05; Mann, 1945; Kendall, 1975) with trend-free prewhitening to remove lag-one autocorrelation (Yue et al., 2002) and the Sen's slope estimator (Sen, 1968) to quantify trend magnitudes (in the following, trends). We performed a multi-temporal trend analysis to explore E_C variations over multiple subperiods within 1964–2019. We considered all subperiods ≥20 years to ensure sufficient length for trend detection. We estimated trends for E_C averaged across the study catchments of each region and for individual catchments. For each subperiod, we included all catchments with a maximum of two years missing (removed before trend detection). This resulted in a variable number of catchments over the subperiods, but the number of catchments within each subperiod was kept constant to avoid artifacts in trend detection.

As a benchmark for trends in E_C, we quantified trends in plot-scale E data over the whole record period of each site. Given the low number of catchments with long-term monitoring of E and Q, we did not perform a comparison at the scale of specific catchments, but we visually compared E and E_C temporal dynamics by region, and we verified the coherence of their trends.

Uncertainties in Variations of Catchment Evapotranspiration

We quantified uncertainties in E_C trends from uncertainties in P as the standard deviation in E_C trends from different P products (weighted by 0.05 for trends from the 20 E-OBS members, and 1 for others). We assessed potential uncertainties in E_C trends neglecting ΔS as the range of E_C trends when ΔS is neglected or approximated by ΔS_dyn (using the precipitation data set P1). We presented these uncertainties as a percentage of the absolute E_C trend from the "best estimate".

For evaluating our S_dyn estimates, we compared S_dyn, S_GRACE, and GWL over 2003–2019, in terms of regional average monthly deseasonalized anomalies (baseline period 2004–2009) and Pearson's correlation coefficient (r). To allow comparison in case of different storage capacities/water yields, we first deseasonalized the time series of S_dyn and GWL, following Güntner et al. (2023), and then aggregated catchments/wells within each region. To evaluate the influence of uncertainties from ΔS_dyn on E_C variations, we further derived E_C estimates using regional ΔS_GRACE and the P1 data set (E_C,GRACE) over 2003–2019. To this end, we used 271 catchments with realistic E_C,GRACE data (Equation S8, Text S1 in Supporting Information S1).

Contribution of Main Climatic Drivers

We calculated partial correlations between E_C and two main climatic drivers (P and R). To focus on long-term variability, we smoothed the time series through a Gaussian kernel with a 2-year standard deviation, which we also used for visualization purposes throughout the manuscript. We calculated regional averages over the study catchments for E_C and P, and over the stations for R. Furthermore, we fitted a multi-linear regression (MLR) model with E_C as the dependent, and P and R as independent variables, over the whole study period and over a period with strong E_C increases (1970–2000, Section 3.1).
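To make the trend-detection step concrete, the sketch below implements trend-free prewhitening, the Mann-Kendall test statistic with a normal approximation, and the Sen's slope estimator for a single annual series; it ignores tie corrections and other refinements, so it is a simplified stand-in for the procedure of Yue et al. (2002), not the exact code used in the study. The input series is invented.

```python
import numpy as np
from scipy.stats import norm

def sens_slope(y: np.ndarray) -> float:
    """Median of all pairwise slopes (Sen, 1968)."""
    n = len(y)
    slopes = [(y[j] - y[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

def mann_kendall_p(y: np.ndarray) -> float:
    """Two-sided p-value of the Mann-Kendall test (normal approximation, no tie correction)."""
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return 2.0 * (1.0 - norm.cdf(abs(z)))

def tfpw_trend(y: np.ndarray):
    """Trend-free prewhitening (simplified after Yue et al., 2002), MK p-value and Sen's slope."""
    t = np.arange(len(y))
    beta = sens_slope(y)
    detrended = y - beta * t
    r1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]     # lag-1 autocorrelation
    residual = detrended[1:] - r1 * detrended[:-1]             # remove the AR(1) component
    blended = residual + beta * t[1:]                          # add the trend back
    return beta, mann_kendall_p(blended)

# Invented annual E_C series (mm) with an imposed trend of ~3 mm/year plus noise
rng = np.random.default_rng(42)
years = np.arange(1970, 2001)
e_c = 450.0 + 3.0 * (years - years[0]) + rng.normal(0.0, 20.0, years.size)
slope, p_value = tfpw_trend(e_c)
print(f"Sen's slope = {slope:.2f} mm/year, p = {p_value:.3f}")
```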
Inter-Decadal Variations in Catchment Evapotranspiration

Increases in regional averages of E_C dominated across Germany in the first part of the study period, while decreases occurred over recent decades (Figure 1). Significant strong increases (>2 mm year⁻²) were observed between the 1970s and 2000s, with an increase of 3 mm year⁻² or 5.1% decade⁻¹ on average over the three regions for 1970–2000, for instance. The timing of the transition to negative trends differed across the regions. Significant strong negative trends were identified for subperiods already starting in the mid-1980s in the Pre-Alpine region (Figure 1a) and in the early 1990s in the Western one (Figure 1b). Similar results were obtained when excluding nested catchments to avoid potentially redundant information (Figure S3 in Supporting Information S1).

For individual catchments, trends in E_C were unavoidably heterogeneous in terms of magnitude and significance, but the sign of significant trends was broadly consistent among catchments, especially for subperiods with strong significant trends at the regional scale (Figure S4 in Supporting Information S1). As an example, for 1970–2000 significant positive (negative) trends were observed for ≥33% (≤3%) of catchments within each region, and the standard deviation of trends among catchments was lower than the average trend for each region (Table S3 in Supporting Information S1).

Only 4 (out of 15) monitoring sites showed significant trends in E over their record period. We detected increases at the lysimeter St Arnold over 1967–2015 and decreases at the flux towers CH-Lae, FR-Hes, and DE-Obe over the last two decades (Figure S5 and Table S2 in Supporting Information S1).

Figure 1. Inter-decadal variations in catchment evapotranspiration (E_C) at the regional scale. Multiple trend analysis for regional E_C for the Pre-Alpine (a), Western (b), and Eastern (c) regions, and (d) regional average annual anomalies in E_C. In (d) we smoothed the time series for visualization purposes, according to the details provided in Section 2.2.4 (for unfiltered data refer to Figure 3a). Hatched cells in (a-c) indicate significant trends at the 5% level.

Uncertainties in Variations of Catchment Evapotranspiration

For subperiods with significant strong trends (absolute magnitude ≥2 mm year⁻²), uncertainties in regional E_C trends from P were lower than the estimated trends, with median (maximum) uncertainties of 26% (63%) of trend magnitude (Figures 2a-2d). Relative uncertainties slightly increased to a median of 36% when focusing on all subperiods with moderate trends (absolute magnitude ≥1 mm year⁻²).

Potential uncertainties from assuming negligible ΔS were lower than uncertainties from P, with a median of 13% for all subperiods with moderate trends, and were relevant for the sign of estimated trends (i.e., ≥100%) only for subperiods with specific start/end years or short duration (i.e., along the diagonals of the matrices, Figures 2e-2h).

Anomalies in S_dyn generally agreed with those from both GWL (r > 0.6 for all regions) and S_GRACE (r > 0.43, Figure S6 in Supporting Information S1). S_GRACE showed strong negative anomalies over 2018–2019, and replacing ΔS_dyn with ΔS_GRACE led to higher E_C values, but still plateauing or decreasing (Figure S7 in Supporting Information S1).
Discussion

The multi-temporal trend analysis showed widespread increases of regionally averaged E_C across Germany during the 1970s–2000s, while mild changes and decreases occurred over the last two decades (Figure 1). The increases between the 1970s and 2000s are consistent with Teuling et al. (2009), Duethmann and Blöschl (2018), and Pluntke et al. (2023). The time-varying approach complements previous works by providing a broader picture of E_C variations over the last six decades and facilitating comparison among studies. At the regional scale and over the whole period, we found lower increases than those observed in Austria over 1977–2014 (Duethmann & Blöschl, 2018) and at the Wernersbach catchment in Eastern Germany over 1969–2019 (Pluntke et al., 2023). However, we found strong increases for individual catchments and over specific subperiods (e.g., 1970–2000, Figure S4 in Supporting Information S1), pointing to regional differences in the timing and magnitude of E_C increases in Central Europe. E_C trends for individual catchments were generally coherent in sign (Table S3 in Supporting Information S1), suggesting that regional-scale trends are representative. However, the Eastern region comprises a comparatively lower number of catchments than the others, due to widespread human impacts on Q in its central part (Figure S1 in Supporting Information S1). Trends in E from plot-scale data were broadly consistent with those in E_C, with past increases and a turnaround around the 2000s, despite a limited number of sites with significant changes. Differences between E and E_C can be related to scale differences and the fact that most E data did not coincide with the study catchments. Methodological challenges can further hamper the comparison, such as the difficulty in accounting for evaporation from interception in E data from flux towers (Pluntke et al., 2023).

For "best estimates" of E_C trends, we used a homogeneous observational P data set (Section 2.1.2) and we estimated ΔS_dyn from streamflow recession analysis (Section 2.2.1). We showed that uncertainties from different P products did not affect the sign of significant strong trends (median relative uncertainty of 26%, Figures 2a-2d). Yet, uncertainties from P were potentially relevant in some regions for subperiods with milder trends, which means that alternative P products may result in E_C trends with even a different sign. Uncertainties from P were generally higher than potential uncertainties that may have arisen by assuming negligible ΔS, though the uncertainty estimates were based on a different number of members (Section 2.2.3). Potential uncertainties from assuming negligible ΔS were relevant for short and specific subperiods, such as those starting or ending in the wet years 1981 and 1998 associated with large-scale floods in the study area (Uhlemann et al., 2010). We derived first-order estimates of ΔS through streamflow recession analysis (Kirchner, 2009; Stoelzle et al., 2013). This approach relies on the assumption of Q being mainly controlled by S (Kirchner, 2009), as done previously for many catchments in the study area (e.g., Berghuijs et al., 2016; Stoelzle et al., 2013), and it quantifies the portions of S connected to Q, neglecting those only connected to E and intercatchment groundwater flows (IGF, Dralle et al., 2018). A disagreement between S_dyn and S_GRACE over the more recent years (Figure S6 in Supporting Information S1) may be due to variations in S not connected to Q and to uncertainties in S_GRACE.
Thomas et al. (2016) quantified trends in groundwater storage for mesoscale catchments in the USA from S_dyn estimates, GWL, and GRACE data. They found stronger agreement of trends from S_dyn with those from GWL than with those derived from GRACE, which may further indicate uncertainties in GRACE data at small spatial scales. Uncertainties in estimates of S_dyn are expected to be particularly relevant for individual catchments (e.g., where IGFs are significant) and specific subperiods (e.g., during heavy rainfall events associated with surface processes or in case of shifts in the recession parameters over time, Trotter et al., 2024). We aimed at reducing these uncertainties, and we further checked that possible uncertainties in S_dyn, as compared to S_GRACE, did not affect the sign of the detected E_C variations over the last two decades (Figure S7 in Supporting Information S1). Alternative methodologies for the derivation of S_dyn from Q data during recessions (Stoelzle et al., 2013) or statistical approaches could be used in future work to explicitly quantify the uncertainty in E_C trends due to uncertainties in ΔS_dyn.

Correlations of P and R with E_C (Figure 3) suggest that both drivers contributed to past variations in E_C over the study area, similarly to previous findings for Central Europe (Duethmann & Blöschl, 2018; Teuling et al., 2009, 2019). The decreases in R until around 1980 and the increases afterward that we detected are in line with previous studies (Sanchez-Lorenzo et al., 2015), and they reflect climatic variations and changes in air pollution ("global dimming/brightening," Wild, 2012). Regional R variations may be affected by uncertainties related to the relatively low density of stations. P variations reflect a drying tendency over the study area during the last two decades, with summer droughts in 2003 (Pluntke et al., 2023; Teuling et al., 2013), 2015 (Ionita et al., 2017), and 2018–2019 (Boergens et al., 2020). MLR underestimates high E_C values around 2000 when fitted to the entire study period and overestimates E_C during recent years when fitted over 1970–2000 (Figure S8 in Supporting Information S1), which suggests an increasing influence of P over the recent decades. The increasing importance of P over R for E_C variations is also intuitively supported by decreasing E_C despite still high values of R over the last two decades, and is in line with findings from global climate modeling showing widespread transitions from energy- to water-limited ecosystems (Denissen et al., 2022). We focused on climatic drivers of E_C variations, building on previous studies which showed changes in climate, and in R in particular, as the main contributors to variations in E in large parts of Europe (see e.g., Duethmann & Blöschl, 2018; Teuling et al., 2019). While we focused on main climatic factors, future research should rigorously attribute the identified E_C trends to their drivers, considering variations in P seasonality, in additional climatic variables, including relative humidity, wind speed, and T, and in land use and cover. Global changes in climate, atmospheric CO₂ concentration, and land use and cover recently promoted widespread vegetation greening, which was a major driver of E increases in many regions over 2001–2020 according to diagnostic products (Yang et al., 2023).
Understanding long-term E variations is essential to properly support forest and water management for society and ecosystems. Decreasing E under drying conditions points to increasing stress on vegetation during recent droughts over the study area (Pluntke et al., 2023; Senf et al., 2020). Long-term E variations may help contextualize the role of E in surface water availability during droughts (Pluntke et al., 2023; Teuling et al., 2013) and the hydrological non-stationarities triggered by them (Gardiya Weligamage et al., 2023; Massari et al., 2022).

Conclusions

We investigated (a) inter-decadal variations in data-based E_C from a homogeneous observational precipitation (P) product and accounting for ΔS_dyn of the catchments, (b) the robustness of these variations to the main sources of uncertainty, and (c) variations in the main climatic drivers of E_C. E_C largely increased across Germany between the 1970s and 2000s, while it showed mild changes and tendencies to decrease over the last two decades (Figure 1). These variations were broadly coherent with sparse plot-scale data and robust to uncertainties. Uncertainties from P were in the order of 26% of trend magnitude and larger than those from neglecting ΔS_dyn (Figure 2). To further reduce uncertainties from P, it is recommended to use homogeneous, observational P products for future E_C trend analyses. If ΔS_dyn is not accounted for, short study periods or those with strong storage anomalies at the start/end should be avoided. Increases in E_C over the 1970s–2000s reflected variations in P and R during the global brightening phase, while recent decreases in E_C over the last drying decades, with still high R values, suggest an increasing influence of moisture variability on variations in E_C (Figure 3). Our findings provide a framework to synthesize studies on variations in E_C in Central Europe over recent decades, including their uncertainties and potential drivers, which is relevant for freshwater and forest management in a transient climate.

Drivers

E_C variations mirrored variations in P and R, despite a varying importance (ρ_P,E|R = 0.84 and ρ_R,E|P = 0.6 over 1964–2019, Figures 3a-3c). Periods with high P were the mid-1960s, the early 1980s, and the years around 2000. R generally decreased until around 1980 and increased afterward. While high P values in the mid-1960s and around 2000 were also reflected by high E_C values, high P in the early 1980s did not correspond to high E_C, likely due to low R. MLR fitted to the entire study period generally reflected the variability in E_C (Figure S8 in Supporting Information S1), despite underestimating high E_C values around 2000. MLR fitted to the subperiod with increasing E_C overestimated E_C before 1970 and after 2000 (Figure S8 in Supporting Information S1).

Figure 2. Uncertainties in regional trends in catchment evapotranspiration (E_C). Uncertainty from different P products (a-d) and potential uncertainty by assuming ΔS negligible (e-h), as a percentage of absolute trends from the "best estimate" of E_C (Section 2.2.3). Uncertainties are considered only for subperiods with moderate and strong trends.

Figure 3. Variations in catchment evapotranspiration (E_C) and their main climatic drivers (P and R). Regional average anomalies in annual E_C (a), P (b), and R (c). Shaded colors refer to unfiltered time series, whereas full colors refer to smoothed time series (Section 2.2.4). Note the different y-scales for visualization purposes.
Forgotten Ureteral Stents in a Tertiary Hospital in Accra and a Review of Endourological Management of Upper Urinary Tract Pathologies in the West Africa Sub-Region

Background: A forgotten ureteral stent is defined as a prolonged indwelling ureteral stent whose function is no longer desired. Ureteral stents are used in the management of upper urinary tract pathologies. Prolonged indwelling ureteral stents may be complicated by urosepsis or renal failure, encrustation, stone formation, and spontaneous fracture, with fragments that may either be retained or voided in the urine (stenturia). Hitherto, these complications were managed by open procedures alone in our center. We report our recent experience in endourology with the management of three cases of forgotten ureteral stents with durations of ten years and two years (two cases) and review endourological practice in West Africa.

Conclusion: Although encrusted stents can be managed successfully by minimally invasive approaches in the majority of cases, the best treatment is prevention. Urology units should preferably have an electronic stent register such that when the time for removal is due, the patient's name and details are flagged red. If an electronic register is not available, then a hard paper/book register should be kept to prevent situations of forgotten stents. Also, efforts must be made to improve endourological services in the West Africa sub-region to allow patients to have the benefit of endourology in the management of upper urinary tract pathologies, including that of stones originating from an encrusted or fractured forgotten ureteral stent.

Introduction

Since its description by Zimskind in 1967, ureteric stents have undergone modifications and have become a ubiquitous tool for the urologist. Although we have come a long way from the initial straight Zimskind silicone catheter, with advances in anchoring devices, composition and coatings, we still strive to find the ideal stent [1].
Ureteral stent placement is an important adjunct to many urologic procedures. It may be used for the prevention or relief of upper urinary tract obstruction and following reconstructive surgery. Examples of such procedures include the management of renal or ureteral stones, as in extracorporeal shock wave lithotripsy (ESWL), endoscopic (ureteroscopy, renoscopy) lithotripsy and open stone removal [2]. Ureteral stents are also used for relieving hydronephrosis due to ureteral trauma or strictures, malignant neoplasms or retroperitoneal fibrosis [2]. Indwelling ureteral stents therefore need to be changed at regular intervals to prevent complications. Various authors have reported an indwelling time of 2–4 months as safe. The causes of forgotten ureteral stents can be classified into surgeon-related, patient-related, stent material-related and other factors [3]. Prolonged urinary stasis, urinary tract infections (UTIs), especially by urease-splitting organisms, and dehydration promote biofilm (slippery slime) formation on the surface of the stent with subsequent crystalloid deposition [4]-[6]. Pregnancy and incarceration enhance ureteral stent encrustation as well [7]. Biochemical and optical analyses of stent encrustations by Robert et al. revealed that encrustations consisted mainly of calcium oxalate, calcium phosphate and ammonium magnesium phosphate [8]. Hard water consumption may perhaps promote stent encrustation due to its high content of calcium and magnesium carbonates, and hence may necessitate early ureteral stent removal or change in such situations.

Encrustation, stone formation or fragmentation of indwelling ureteral stents can pose a formidable management challenge, especially in resource-poor settings. Open surgeries are commonly performed for upper urinary tract pathologies in most West African countries because of a lack of endourology equipment and expertise [9], and this applies to the management of encrusted ureteral stents as well.

We present three cases of encrusted forgotten/neglected ureteral stents with durations of ten years and two years (two cases), seen over a 10-year period and managed by endourologic procedures in two cases and a combination of endourology and an open procedure in one case in our institution. We then proceeded to review the current practice of endourology in the West Africa sub-region.
Case 1: A 78-year-old female with a year's history of intermittent right flank pains presented with a recurrence of the right flank pains, fever and chills. She had a past history of similar but more severe right flank pains ten years earlier and was hospitalized in a suburban General hospital in the United States of America (USA). She apparently had a right ureteric stent inserted for an obstructing right ureteric calculus and returned home (Ghana) one month after the procedure, but denied knowledge of the indwelling ureteral stent. On further evaluation of the current symptoms, a kidney, ureter and bladder (KUB) radiograph and an abdomen and pelvis CT scan revealed full-length encrustation of a ureteral stent with heavy stone burden at the renal pelvis and bladder coils [most of the stone was radiolucent on KUB] (Figure 1(a)), and non-obstructing multiple left renal calculi. Her urine analysis showed cloudy urine with blood (3+) and pus cells of 14/hpf. Urine culture isolated Escherichia coli, which was sensitive to meropenem and nitrofurantoin. Her blood urea and creatinine levels were 6.6 mmol/L (reference range 2.0–7.0 mmol/L) and 124 µmol/L (reference range 62–106 µmol/L) respectively. Subsequent to treating the associated urosepsis, the stent was retrieved whole (Figure 1(c)) after two sessions of ureteroscopy with alternating endoscopic ultrasonic and ballistic fragmentation of the stones. The first surgery lasted 2 hours 30 minutes and the second 1 hour 24 minutes, at a two-week interval. Post-procedure recovery was uneventful and she is being followed up for the non-obstructing left renal calculi.

Case 2: A 31-year-old mechanic who had been attacked by armed robbers sustained multiple penetrating abdominal wounds from a close-range gunshot to the abdomen. At surgery, in addition to multiple bowel perforations requiring intestinal resection and anastomosis, a mid-ureteric perforation was seen and closed over a ureteral stent. He was discharged a week after the surgery. He was lost to follow-up until he reported after two years with

Discussion

The incidence of forgotten ureteric stents and stent encrustation is unknown and there are hardly any reports from West Africa. Our report shows that 3 cases of forgotten stents were seen over a 10-year period, making it an unusual occurrence despite the widespread use of ureteral stents for upper urinary tract diseases and surgeries in the sub-region. The exact mechanism of encrustation is not clear. It however appears to be dependent on several factors. All patients denied knowledge of the presence of the ureteral stents in this series.
Stent material may contribute to encrustation.Silicone containing stents tend to be more resistant to encrustation, followed by polyurethane, silitek, percuflex and hydrogel coated polyurethane [1].Studies have shown no encrustation on silicone containing stents at 10 to 12 months dwell time compared to 76% encrustation rate of polymer stents at 12 months [10] [11].The ureteral stents we use in our center is made of polyurethane.The use of biodegradable stents which are made with high molecular weight polymers such as polyglycolide, polylactide and uriprene obviates the need for cystoscopic removal or change and thus, mitigating procedure related complications, cost, patient discomfort and stent neglect.Their complete dissolution and controllability of degradation rate however remains to be perfected [12].The application of antifouling coatings to ureteral stents reduces bacterial adhesion and encrustation.Heparin coated stents have been shown to be effective at reducing stent encrustation [13].Hydrogel coating is able to absorb water, forming a thin liquid layer in the surface of the stent and, thus, preventing bacterial adhesion while providing improved lubrication [14].Metal stents, albeit expensive, have a longer dwell time and fewer stent change associated morbidities and are particularly suitable for malignant ureteral obstruction (MUO).The metallic stents currently available are namely, self-expandable, balloon expandable, covered and thermo-expandable shape memory stents [15].Silver and diamond-like carbon coatings are effective strategy to reduce biofilm adherence due to their wide-spectrum antimicrobial ability and excellent biocompatibility respectively [16] [17].Stents coated with polymers such as pentosan polysulfate, phosphorylcholine copolymer and polyvinylpyrollidone provide excellent lubricant properties enhanced biocompatibility and reduced encrustation.Also newer drug eluting stents (DES) incorporate antibiotic into their biodegradable coating, thus modulating their pharmacokinetics to induce a stable and long-term release of the drug [15] [18] [19].Novel modifications in stent design and material continue to be made to reduce stent encrustations. Stent breakage is sometimes associated with encrustation in forgotten stents as was seen in the second patient.Stents may also fracture spontaneously (as in case 2) after being in situ for a long time due to hardening and loss of tensile strength [1] [20].Most studies showed a predominance of encrustation at the upper coil of the stent.This may be because more effective peristalsis at the lower part of the stent sweeps any deposits off the stent, thus minimizing encrustation at the lower end [3].Our patient in case 1 however, had heavy stone burden at bladder coil (Figure 1(b)). The site of encrustation, the size of the stone burden and the function of the affected kidney dictates the method of treatment.Management of encrusted ureteral stents as occurs in forgotten stents often requires multiple endourologic approaches and/or open surgeries. For encrustations located at the upper coil and or stent body, ESWL and flexible ureteroscopy retrieval of the stent has been reported to be non-invasive and effective first line therapy.The shock waves can be directed at the proximal or ureteral part of the encrusted stent under fluoroscopic guidance.ESWL is however indicated mainly for localized, low volume encrustations [21]- [23]. 
Ureteroscopy using ultrasonic lithotripsy may also be attempted, either as first-line therapy or after failure of ESWL.Flexible ureteroscopy with holmium laser lithotripsy is an alternative minimally invasive treatment option.More invasive procedures, such as PCNL or open pyelolithotomy are often necessary for treating a severely encrusted stent [21] [24]- [26]. Endourological approaches have been noted to be safe and mostly successful [3].In our series, the patients were discharged 2 days after the endoscopic procedures and quickly resume to their usual daily activities but the second case which had an unsuccessful open pyelolithotomy was discharged 7 days after surgery.All 3 (100%) cases in our series, also in Okeke et al. [27] and Papoola et al. [28] eventually underwent successful endoscopic retrieval of the stent material with no complications.The use of laser, saline irrigation fluid and ureteral stent added an extra cost of approximately 1000 USD in the endourology group.However, the cost for shorter hospitalization, no wound care, cosmesis and quick return to work in the endourology group may perhaps makeup for the difference in the cost of surgery in these two groups. Kane et al. in Senegal reported their experience in a comparative study of 89 patients with upper urinary tract calculi who underwent endourology intervention or open surgery.Less complication and early discharge from hospital was observed in the endourology group [29]. Despite these observed advantages with these minimally invasive techniques, there is widespread lack of well-established endourology in the West Africa sub-region.A report by Ramyil et al. on their management of upper urinary tract obstructions indicated their use of open surgeries due to the absence of modern facilities thus subjecting all their patients to open procedure [30]. Shaibu et al. in 2013 reviewed log books of final year Urological residents presented for the West African College of Surgeons (WACS) and National Postgraduate Medical College (NPMC) final part II exams from January 2007 to December 2011 at Jos University Teaching Hospital (JUTH), Nigeria.They concluded that there was a decline in endoscopic surgeries despite overall increase in absolute number of operative cases performed by final year residents in the period after the commencement of the Urology residency programme [26]. A tabulation of our findings from a search of English publication on endourological procedures in the sub region (Table 1) confirmed the paucity of endourological procedures for the treatment of upper urinary tract pathologies in the West African sub-region.This calls for efforts at training and increasing access to endourology equipment and services to improve the management of upper urinary pathologies including calculi as occurs in the cases of forgotten ureteral stents [31]- [34]. Although encrusted stents can be managed successfully in the majority of cases, the best treatment is prevention.Urology units should have preferably an electronic stent register such that when the time for removal is due, the patient name and details are flagged red.If electronic register is not available, then a hard paper/book register should be made to prevent situations of forgotten stents.Information leaflets should be made by departments on post-operative complications, need for removal and date of removal of indwelling stents.Copies could be given to parents or spouses. 
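The register-based prevention suggested above can be illustrated with a very small piece of software. The following Python sketch is a hypothetical illustration only: the record fields, the 90-day default dwell time (within the 2 - 4 month window cited earlier) and the example entry are our assumptions, not details taken from the reported cases.

```python
# Minimal sketch of an electronic ureteral stent register (hypothetical fields).
# Assumption: a default planned dwell time of ~3 months, consistent with the
# 2 - 4 month safe indwelling window cited in the text; adjust per local policy.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class StentRecord:
    patient_id: str
    patient_name: str
    contact: str           # patient or next-of-kin phone/address
    inserted_on: date
    planned_removal: date  # fixed at the time of insertion

def new_record(patient_id, patient_name, contact, inserted_on, dwell_days=90):
    """Create a register entry with an explicit planned removal date."""
    return StentRecord(patient_id, patient_name, contact,
                       inserted_on, inserted_on + timedelta(days=dwell_days))

def flag_overdue(register, today=None):
    """Return entries whose planned removal date has passed ('flagged red')."""
    today = today or date.today()
    return [r for r in register if r.planned_removal < today]

# Example usage (entirely illustrative data)
register = [new_record("GH-001", "A. Mensah", "+233-XX-XXXX", date(2024, 1, 10))]
for rec in flag_overdue(register, today=date(2024, 6, 1)):
    print(f"OVERDUE: {rec.patient_name} (stent was due for removal on {rec.planned_removal})")
```

A paper register would serve the same purpose; the essential design choice is that the removal date is recorded at insertion, so the flag does not depend on the patient returning for review.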
Conclusion

Forgotten ureteral stents are an uncommon finding in our center. The availability of endourological equipment facilitated the retrieval of the encrusted stents even when open surgery was not successful. Efforts must be made to improve endourological services in the West Africa sub-region so that patients can have the benefit of endourology in the management of upper urinary tract pathologies, including stones originating from an encrusted or fractured forgotten stent.

Informed Consent

Informed consent was obtained from all 3 patients in our series for the use of the information needed for this publication.

Figure 1. Images of the encrusted ureteral stent in situ for 10 years. (a) KUB showing the encrusted stent. (b) Endoscopic appearance of the lower coil encrustation. (c) Retrieved stent with fragments.

Figure 2. Images of the fractured ureteral stent. (a) KUB showing the fractured stent and pellets, with a new ureteral stent placed after unsuccessful open pyelolithotomy for stent fragment retrieval. (b) Retrieved fragmented stent.

Table 1. Summary of some publications on endourological interventions in West Africa.
DYRK1A-mediated PLK2 phosphorylation regulates the proliferation and invasion of glioblastoma cells Polo-like kinases (PLKs) are a family of serine-threonine kinases that exert regulatory effects on diverse cellular processes. Dysregulation of PLKs has been implicated in multiple cancers, including glioblastoma (GBM). Notably, PLK2 expression in GBM tumor tissue is lower than that in normal brains. Notably, high PLK2 expression is significantly correlated with poor prognosis. Thus, it can be inferred that PLK2 expression alone may not be sufficient for accurate prognosis evaluation, and there are unknown mechanisms underlying PLK2 regulation. In the present study, it was demonstrated that dual specificity tyrosine-phosphorylation-regulated kinase 1A (DYRK1A) interacts with and phosphorylates PLK2 at Ser358. DYRK1A-mediated phosphorylation of PLK2 increases its protein stability. Moreover, PLK2 kinase activity was markedly induced by DYRK1A, which was exemplified by the upregulation of alpha-synuclein S129 phosphorylation. Furthermore, it was found that phosphorylation of PLK2 by DYRK1A contributes to the proliferation, migration and invasion of GBM cells. DYRK1A further enhances the inhibition of the malignancy of GBM cells already induced by PLK2. The findings of the present study indicate that PLK2 may play a crucial role in GBM pathogenesis partially in a DYRK1A-dependent manner, suggesting that PLK2 Ser358 may serve as a therapeutic target for GBM. Introduction Gliomas are tumors arising within the brain and the spinal cord, among which glioblastoma (GBM) is widely acknowledged as the most malignant tumor (1). Despite significant advances in understanding of GBM tumorigenesis and treatment over the past few decades, patients with GBM continue to face dismal outcomes, characterized by high recurrence rates and rapid disease progression (2). Therefore, it is imperative to further explore the underlying mechanisms of GBM pathogenesis to improve patient prognosis. Therapeutics targeting kinases have shown promise as an efficacious treatment for a variety of cancers, given the high correlation between those kinases and the initiation and progression of certain cancers (3). To date, several kinase inhibitors have been approved by the U.S. Food and Drug Administration for clinical use in cancer treatment (4,5). Pololike kinases (PLKs) comprise a protein family that selectively binds to and phosphorylates substrates on specific motifs recognized by the POLO box domains. The PLK family consists of five members, among which PLK2 has been implicated as a potential tumor suppressor. PLK2 downregulation has been observed in several types of cancers, including breast cancer, GBM, HPV + head and neck squamous cell carcinoma, kidney chromophobe, lung adenocarcinoma, lung squamous cell carcinoma, prostate adenocarcinoma and uterine corpus endometrial carcinoma. However, PLK2 upregulation has also been detected in cholangiocarcinoma, colon adenocarcinoma, esophageal carcinoma, kidney renal clear cell, kidney renal papillary cell carcinoma, pheochromocytoma and paraganglioma, stomach adenocarcinoma and thyroid cancer on TIMER2.0 ( Fig. 1A; http://timer.cistrome.org/). These data indicate multifaceted roles of PLK2 in different cancers. Furthermore, recent findings have identified PLK2 as a novel biomarker for the prognosis of human GBM (6). The PLK2/Notch axis may be closely linked to the development of acquired resistance to temozolomide in GBM (7). 
Consequently, further investigation is necessary to comprehensively understand the role of PLK2 in GBM tumorigenesis. Dual specificity tyrosine-phosphorylation-regulated kinases (DYRKs) constitute a group of evolutionarily conserved kinases that induce phosphorylation on tyrosine, serine and threonine residues. A total of five members have been identified, including DYRK1A, DYRK1B, DYRK2, DYRK3 and DYRK4. DYRKs are known to phosphorylate a broad range of proteins involved in diverse cellular processes (8). Abnormal expression or activity of DYRK1A has been implicated in the development of numerous cancers, including B-cell acute lymphoblastic leukemia, hepatocellular carcinoma and glioma (9)(10)(11). DYRK1A inhibitors have been approved for the treatment of certain types of cancer, such as metastatic breast cancer (12). The role of DYRK1A is complex, and whether DYRK1A employs tumor suppressive or oncogenic activities is most likely dependent on its specific substrates. For instance, DYRK1A exerts a tumor-promoting effect by phosphorylating various transcription factors, including Gli1 and STAT3 (13,14). Conversely, DYRK1A may maintain its antitumor effect by activating ASK1 (15). The precise role of DYRK1A in GBM pathogenesis has yet to be fully elucidated. In the present study, it was demonstrated that DYRK1A phosphorylates PLK2 at Ser358. DYRK1Amediated phosphorylation of PLK2 enhances both protein stability and kinase activity. Introduction of PLK2 leads to a significant decrease in glioma cell malignancy, which is further weakened in the presence of DYRK1A. These results suggested a potential contribution of DYRK1A-mediated PLK2 phosphorylation to glioma pathogenesis. Materials and methods Dataset acquisition. The transcriptome sequencing data and corresponding clinical data of primary GBM were procured from The Cancer Genome Atlas (TCGA) database (https://portal.gdc.cancer.gov/). The present study utilized the TCGA GBM cohort comprising 169 tumor samples and 5 normal brain samples to analyze differentially expressed genes (DEGs) using count data. Additionally, PLK2 expression across cancers was evaluated using the TIMER2.0 website. The GSE68848, GSE16011 and GSE4290 datasets downloaded from the Gene Expression Omnibus database (GEO; https://www.ncbi.nlm. nih.gov/geo/) were employed to further validate the expression of PLK2 (16)(17)(18). Overall survival (OS) information was also acquired from the datasets for prognostic evaluation. Clustal Omega (https://www.ebi.ac.uk/Tools/msa/clustalo/) was used for multiple sequence alignment. Identification of differentially expressed genes and function analysis. Differentially expressed genes were identified as previously described (19). DEGs between GBM tissues and normal brain tissues were analyzed using the 'limma', 'edgeR' and 'DESeq2' R packages with the cutoff criteria of |log2FC|≥1 and P<0.05. The raw count data of the TCGA GBM cohort were employed as the input for limma, edgeR and DESeq2. Volcano plots were generated to display DEG distribution from the three algorithms mentioned above with the 'tinyarray' R package. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were performed utilizing the 'clusterProfiler' R package to predict the biological functions and related pathways. Kaplan-Meier analysis to assess the overall survival of patients was performed with 'survminer' and 'survival' R packages. Log-rank test was used to compare the survival curves between the groups. 
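As a purely illustrative aid (not the pipeline actually used, which relied on the limma, edgeR and DESeq2 R packages), the following Python/pandas sketch shows how the |log2FC|≥1 and P<0.05 cutoffs can be applied to the three result tables and how the common significant genes can be obtained; the file and column names are assumptions.

```python
# Illustrative sketch only: apply |log2FC| >= 1 and P < 0.05 to three exported
# DEG tables and intersect the surviving gene sets. Column names ("gene",
# "log2FC", "pvalue") and file names are assumed for illustration.
import pandas as pd

def significant_genes(deg_table: pd.DataFrame,
                      lfc_cut: float = 1.0,
                      p_cut: float = 0.05) -> set:
    """Return the genes passing the fold-change and P-value cutoffs."""
    mask = (deg_table["log2FC"].abs() >= lfc_cut) & (deg_table["pvalue"] < p_cut)
    return set(deg_table.loc[mask, "gene"])

# Hypothetical exports of the limma / edgeR / DESeq2 results as CSV files.
tables = {name: pd.read_csv(f"{name}_results.csv")
          for name in ("limma", "edger", "deseq2")}

common = set.intersection(*(significant_genes(t) for t in tables.values()))
print(f"{len(common)} genes called significant by all three methods")
```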
Cell culture, vectors and transfection. The human cell lines 293 (cat. no. CRL-1573), 293T (cat. no. CRL-3216) and U87MG (cat. no. HTB-14) were obtained from the American Type Culture Collection (ATCC). Of note, U87MG cells were cells established likely from GBM of unknown origin. U251MG cells were purchased from Cell Bank Type Culture Collection, Chinese Academy of Sciences (Xi'an China) as previously described (20). 293, 293T, U87MG and U251MG cells were cultured in high glucose DMEM supplemented with 10% FBS, 100 units/ml penicillin and 0.1 mg/ml streptomycin. All cells were maintained in a 37˚C humidified incubator containing 5% CO 2 . All experiments were performed using cells within 20 passages after receipt. For transfection, cells were seeded into cell culture dishes or plates. When cell confluency reached ~80% within 24 h, plasmids were transfected into cells by Lipofectamine 3000 transfection reagent at room temperature (for six-well plate, 2.5 µg plasmid in total was used in transfection for each well). All transfections were carried out with Lipofectamine 3000 (cat. no. L3000001; Thermo Fisher Scientific, Inc.) according to the manufacturer's instructions. Cycloheximide (CHX) chase assay. Cycloheximide was purchased from MedChemExpress (cat. no. HY-12320). A CHX chase assay was performed as previously described (21). Briefly, 293 cells were seeded in six-well plates one day before transfection. 293 cells were transfected according to the manufacturer's instructions. A total of 36 h after transfection, cells were treated with 150 µg/ml CHX and separately harvested at 0, 1, 2, 4, 6, and 8 h for western blotting (WB). Cell proliferation and viability assay. Cell Counting Kit-8 (WST-8/CCK-8) (cat. no. C0043; Beyotime Institute of Biotechnology) was used as a convenient and robust way of performing a cell viability assay. Briefly, cells were suspended adequately and seeded into 96-well plates at a density of 5x10 3 cells/well with 3 replicates a day before the cell viability assay was performed as follows: The supernatant was removed, and 100 µl fresh medium containing 10% CCK-8 reagent was added as a working solution at 0, 24, 48, and 72 h time points. The OD450 and OD650 values of each well were measured by a microplate reader after the cells were incubated with working solution for 1.5 h in a 37˚C humidified incubator containing 5% CO 2 . Transwell invasion assay. Transwell plates (8-µm diameter pores; Corning, Inc.) were used to determine the invasion potential of U87MG and U251MG cells. Briefly, the upper faces of the membranes were precoated with Matrigel (cat no. 354234; BD Biosciences) at 37˚C for 1 h. A total of 5x10 4 cells were resuspended in serum-free medium and transferred into the upper chambers in triplicate. Complete cell culture media were added to the lower chambers. After 60 h of incubation at 37˚C humidified incubator containing 5% CO 2 , the media were removed, and the cells were fixed with 4% paraformaldehyde for 20 min at room temperature. A 0.1% (w/v) crystal violet solution was used for cell staining. The upper side of the filter was gently wiped with cotton swabs, and the chamber was air-dried. Representative images were captured by inverted microscopy. The total number of cells on ten individual fields for each membrane was counted; average numbers and standard deviation of invading cells were calculated. Wound healing assay. U251MG and U87MG cells were seeded on a six-well plate and cultured until the cell confluence reached ~90%. 
Straight line wounds were created by scratching a cell monolayer with sterile 100-µl pipette tips. The medium was gently replaced for the removal of the nonadherent cells generated during scratching. Cells were then maintained in serum-free media. The cells migrated slowly to fill the wound area. Images of the wells were captured at 0 and 48 h, separately. Wound areas were used to assess the migration rate of the cells. The results were quantified and analyzed using ImageJ 1.53t software (National Institutes of Health). Colony formation assay. The colony formation assay is an in vitro cell survival assay that is used to evaluate the ability of a single cell to grow into a colony. The colony is defined to consist of at least 50 cells (24). U251MG cells were counted and seeded at 8x10 2 cells per six-cm plate. Media were changed every four days. After two weeks, cell colonies were grown, and the media were removed. Cells were washed with PBS three times, fixed with 4% paraformaldehyde for 15 min at room temperature, and stained with 0.1% crystal violet for 30 min. The colonies were counted manually under a microscope and images were captured. Immunofluorescence. U251MG cells were fixed with 4% paraformaldehyde at room temperature for 20 min and immunostained with mouse anti-DYRK1A (cat. no. WH0001859M1; MilliporeSigma) at a dilution of 1:100 and rabbit anti-PLK2 sequentially at a dilution of 1:100 (cat. no. 14812; Cell Signaling Technology, Inc.). CoraLite488-conjugated goat anti-rabbit IgG (H+L) (SA00013-2) and CoraLite594conjugated goat anti-mouse IgG (H+L) (cat. no. SA00013-3; Proteintech Group, Inc.) were used to index and display the immunofluorescent signals both at a dilution of 1:200. DAPI (1 µg/ml; Roche Applied Science) was applied in mounting medium to indicate the nucleus. The images were captured by a fluorescence confocal microscope (LSM880; Leica Microsystems GmbH). The association analysis was achieved by ImageJ 1.53t software. Coimmunoprecipitation assay. Coimmunoprecipitation assays were performed as previously described (22). Briefly, cells were harvested and lysed in WB and IP cell lysis buffer containing 20 mM Tris (pH 7.5), 150 mM NaCl, and 1% Triton X-100 in the presence of a protease inhibitor mixture (Roche Applied Science). The cell lysate was centrifuged at 22,000 x g at 4˚C for 15 min. The supernatant was carefully retained. Supernatant containing 100 µg protein was saved and used as input. Primary antibodies and protein A/G-agarose beads (Santa Cruz Biotechnology, Inc.) were added to the supernatant and maintained on a tube rotator at 4˚C for 4 h. Mouse IgG (Beyotime Institute of Biotechnology) was applied as a negative control. Samples were analyzed by 10% glycine SDS-PAGE. Lentivirus production and transduction. The lentivirus vectors were prepared based on the 2nd generation system. Lentiviruses were produced by transfection of 293T cells with three plasmids together, pLent-EF1a-FH-CMV-Puro (pLV100008-OE; WZ Biosciences) carrying the gene of interest, pMD2. G (cat. no. 12259; Addgene, Inc.), and psPAX2 (cat. no. 12260; Addgene, Inc.) packaging constructs. 293T cells were plated in 100-mm dishes to reach 70-90% confluency by the time of transfection. The transfection was performed at room temperature with the vectors of pLent-EF1a-FH-CMV-Puro (10 µg), pMD2. G (5 µg) and psPAX2 (10 µg) for each 100-mm cell culture dish. Media were refreshed after 12 h of transfection (10 ml for each 100-mm dish). 
A total of 48 h after transfection, the lentivirus-containing supernatant was collected and filtered with 0.45 µm filters to isolate the lentiviral particles for the following infection. These lentiviruses were introduced into U87MG and U251MG cell lines on Day 2 of culture at a volume ratio of 1:5. The cell culture media were replaced with fresh media within 24 h of infection and incubated for 5 days before further experiments. Stable clones transduced with PLK2, DYRK1A, shPLK2 and scramble control were selected for 10 days by puromycin at the concentration of 2 µg/ml. Lentiviral particles packaging human PLK2 and DYRK1A are based on the sequences of Q9NYY3 and Q13627-2, respectively, in the UniProt database (https://www.uniprot.org/). Lentiviral particles packaging the shRNA targeted PLK2 (5'-TAG TCA AGT GAC GGT GCT G-3') and the scramble control (5'-TTC TCC GAA CGT GTC ACG T-3'). Dephosphorylation assay. Cell lysate samples containing 100 µg protein were incubated with thermosensitive alkaline phosphatase (AP; cat. no. EF0651; Thermo Fisher Scientific, Inc.) at 37˚C for 30 min. Then, the sample was placed into a 75˚C metal bath for 5 min to deactivate AP. Samples were analyzed by WB. Statistical analysis. Data are presented as the mean ± standard deviation (SD) from three independent experiments. For immunoblotting, one representative picture is shown. Quantifications from three independent experiments are defined with blot density by ImageJ 1.53t software. Differences between two groups are determined by unpaired Student's t-test. Two-way ANOVA followed by Tukey's post hoc test was applied for multiple comparisons of the protein level change at different time point for CHX assay. The data are evaluated for statistical significance with analysis of variance or non-parametric analysis by Prism 7 (Dotmatics). P<0.05 was considered to indicate a statistically significant difference. Results PLK2 is downregulated in GBM and significantly associated with prognosis. PLK2 exhibits widespread dysregulation across multiple cancer types. The expression of PLK2 in tumors and normal tissues was explored on TIMER2.0 (Fig. 1A). To improve understanding of the expression of PLK2 in GBM, three differential expression analyses were conducted between tumor tissues and normal brains in the TCGA-GBM cohort. DEGs in the TCGA-GBM cohort were visualized using Volcano plots (Fig. 1B). As indicated, PLK2 was markedly downregulated in GBM tissues compared with normal brains (Fig. 1B and C). The downregulation of PLK2 in tumor tissues was further confirmed in the GSE68848 and GSE4290 datasets ( Fig. 1D and E, respectively). In addition, PLK2 expression was considered to be associated with the OS of GBM patients (6). Function analysis was performed with common upregulated and downregulated genes (Fig. S1). GO analysis suggested that features relevant to tumor malignancy are promoted, such as positive regulation of cell adhesion, focal adhesion and extracellular matrix structural constituent (Fig. S1A). KEGG analysis revealed upregulation of classic pathways associated with tumor growth, including ECM-receptor interaction, cell adhesion molecules, and transcriptional misregulation in cancer (Fig. S1B). Kaplan-Meier analysis followed by the log-rank test was performed to assess OS. The OS of patients evidently exhibited that high expression of PLK2 was strongly associated with poor prognosis in both the TCGA-GBM cohort and GSE16011 dataset ( Fig. 1F and G). 
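The survival comparison summarized above (high vs. low PLK2 expression) amounts to a median split followed by a log-rank test. The sketch below illustrates this in Python with the lifelines package as a stand-in for the 'survminer'/'survival' R packages named in the Methods; the input file, column names and the median-split rule are illustrative assumptions.

```python
# Schematic of the Kaplan-Meier / log-rank comparison for PLK2 expression.
# Assumptions: a table with per-patient PLK2 expression, overall survival time
# and an event indicator (1 = death); groups are defined by a median split.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tcga_gbm_plk2_os.csv")              # hypothetical export
high = df[df["PLK2_expr"] >= df["PLK2_expr"].median()]
low = df[df["PLK2_expr"] < df["PLK2_expr"].median()]

kmf = KaplanMeierFitter()
for name, grp in (("PLK2 high", high), ("PLK2 low", low)):
    kmf.fit(grp["os_months"], event_observed=grp["event"], label=name)
    print(name, "median OS:", kmf.median_survival_time_)

res = logrank_test(high["os_months"], low["os_months"],
                   event_observed_A=high["event"],
                   event_observed_B=low["event"])
print("log-rank P =", res.p_value)
```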
It should be noted that PLK2 expression is lower in tumor tissues, but low PLK2 expression predicts favorable prognosis. Due to the inconsistency of PLK2 expression and prognosis value, pathways and functions predicted by function analysis may not be markedly suggestive. This seemingly paradoxical finding suggests that other unidentified mechanisms may regulate PLK2 in GMB pathogenesis, warranting further investigation. DYRK1A modulates PLK2 protein levels in a kinase activitydependent manner. PLK2 is mainly involved in cellular biofunctions by direct phosphorylation of specific substrates. However, the critical role of phosphorylation in its kinase activity remains poorly understood (25). Endogenous PLK2 protein in 293 cells is extremely low and could be barely detected by WB. Hence, 293 cells were only used for detection of exogenous PLK2 after transfection. In the present study, DYRK1A and PLK2 expression vectors were transfected into 293 cells and it was found that overexpression of DYRK1A led to a significant increase in exogenous PLK2 protein levels by ~5.6-fold compared with the control, while PLK2 mRNA levels remained relatively unchanged ( Fig. 2A and B). To confirm that endogenous PLK2 protein is also regulated by DYRK1A, a DYRK1A-expressing vector was transfected into two GBM cell lines. The results revealed that PLK2 protein levels were increased to 180±1.0 and 147±1.3% upon DYRK1A overexpression compared with the controls in U87MG cells and U251MG cells, respectively ( Fig. 2C and E). Meanwhile, PLK2 mRNA was relatively unchanged upon DYRK1A overexpression in both cell lines (Fig. 2D and F). These results suggested that DYRK1A likely exerts post-translational regulation on PLK2. As a protein kinase, DYRK1A is commonly involved in cellular processes by phosphorylation on specific substrates. It was then explored whether PLK2 is phosphorylated by DYRK1A. As expected, the phosphorylation and total levels of PLK2 were both upregulated in the presence of DYRK1A (Fig. 2G). DYRK1A-mediated PLK2 phosphorylation was mostly eliminated by treatment with AP (Fig. 2H). A previous study found that the substituted mutant with Arg in place of Lys179 (K179R) in DYRK1A disrupts the direct interaction with ATP. The K179R mutant of DYRK1A is kinase-inactive, and its autophosphorylation ability is impaired (26). To further validate that the kinase activity of DYRK1A is essential for PLK2 phosphorylation, the DYRK1A kinase inactive K179R mutant vector was transfected into U87MG cells. In contrast to wild-type DYRK1A, the DYRK1A K179R mutant failed to induce PLK2 protein accumulation (Fig. 2I). These results demonstrated that DYRK1A increases PLK2 protein levels in a kinase activity-dependent manner by directly phosphorylating PLK2. DYRK1A interacts with PLK2 in glioma cells. To validate whether DYRK1A directly interacts with PLK2, co-immunoprecipitation was employed. The results demonstrated that PLK2 can pull down DYRK1A in 293 cells (Fig. 3A). Similarly, DYRK1A was able to pull down PLK2 as well (Fig. 3B). To further explore the interaction of endogenous DYRK1A and PLK2, co-immunoprecipitation was performed in U87MG and U251MG cells. A significant interaction between DYRK1A and PLK2 was observed in both cell lines (Fig. 3C and D). Immunofluorescence was then performed to confirm the intracellular localization of DYRK1A and PLK2. PLK2 scattered in both the nucleus and cytoplasm, while DYRK1A mainly stayed in the nucleus (Fig. 3E). 
Colocalization analysis showed that they were predominantly colocalized in the U251MG cell nucleus.

[Figure 3 legend, fragment: an anti-PLK2 antibody was used for protein pull-down and DYRK1A was detected by WB. (E) Immunofluorescence of U251MG cells was performed to determine their binding; confocal microscopy was used to acquire images, and colocalization analysis of endogenous DYRK1A and PLK2 was achieved by ImageJ software. DYRK1A, dual specificity tyrosine-phosphorylation-regulated kinase 1A; PLK2, polo-like kinase 2; WB, western blotting.]

Identification of phosphorylation sites in PLK2 by DYRK1A. It was previously found that a large proportion of DYRK1A-recognized substrates contain a consensus RPX(S/T)P motif (27). To identify the potential phosphorylation sites on PLK2, sequence alignment with RPX(S/T)P was performed using the Clustal Omega alignment tool (https://www.ebi.ac.uk/Tools/msa/clustalo/). As shown in Fig. 4A, two highly conserved sites, Ser248 and Ser358, were revealed in the PLK2 coding region. Substituted mutants of PLK2 S248A and S358A were constructed and employed to further validate whether Ser248 and Ser358 are phosphorylated by DYRK1A. The results revealed that, consistent with wild-type PLK2, the PLK2 S248A mutant protein was significantly increased by DYRK1A. However, the PLK2 S358A mutant had only a mild increase upon DYRK1A overexpression (Fig. 4B and C). These results indicated that Ser358 in PLK2 may be the phosphorylation site induced by DYRK1A.

Phosphorylation at Ser358 increases PLK2 protein stability. The impact of phosphorylation on protein stability is an important regulatory mechanism of post-translational modifications. To explore whether phosphorylation of PLK2 induced by DYRK1A affects its protein stability, a CHX assay was conducted. A previous study revealed that PLK2 is degraded rapidly, with a half-life of ~15 min (28). Nevertheless, in the present study it was demonstrated that PLK2 protein is still detectable even after treatment with CHX for 8 h. Within 8 h, degradation of PLK2 was significantly slower in the presence of DYRK1A than in the control (Fig. 5A). Similarly, the phosphorylation-mimicking mutant PLK2 S358D also exhibited decelerated degradation compared with wild-type PLK2. By contrast, PLK2 S358A manifested remarkably accelerated degradation. These data demonstrated that DYRK1A-mediated PLK2 phosphorylation plays a crucial role in regulating PLK2 protein stability. Harmine is a potent and selective natural DYRK inhibitor that is commonly applied to deactivate DYRK1A kinase activity (29,30). Harmine was employed to further validate whether DYRK1A kinase activity is vital for PLK2 protein stability. The results revealed that treatment with harmine resulted in slower degradation of endogenous PLK2 compared with DMSO treatment (Fig. 5B).
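CHX chase data of the kind described above are commonly quantified by fitting the normalized band densities to a first-order decay. The following sketch shows such a fit in Python; the densitometry values are invented placeholders and the normalization to the 0-h band is an assumption about how the blots would be quantified, so the numbers do not represent the present results.

```python
# Sketch: estimate a protein half-life from CHX chase densitometry by fitting
# a single-exponential decay  y(t) = exp(-k t),  with  t_1/2 = ln(2) / k.
# The values below are placeholders, not measured data from this study.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])             # hours after CHX addition
signal = np.array([1.00, 0.82, 0.66, 0.45, 0.31, 0.22])  # band density / 0-h value

def decay(t, k):
    return np.exp(-k * t)

(k_fit,), _ = curve_fit(decay, t, signal, p0=[0.1])
half_life = np.log(2) / k_fit
print(f"fitted decay constant k = {k_fit:.3f} h^-1, half-life = {half_life:.2f} h")
```

Comparing half-lives fitted in this way for wild-type PLK2, PLK2 S358A and PLK2 S358D (with or without DYRK1A or harmine) is one way to express the stability differences shown in Fig. 5 quantitatively.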
Notably, both DYRK1A and PLK2 increased α-synuclein accumulation compared with the control (Fig. 6A). Consistent with the previous studies (31,32), PLK2 robustly induces α-synuclein Ser129 phosphorylation. However, DYRK1A introduction alone was not able to induce α-synuclein Ser129 phosphorylation (Fig. 6A). Notably, α-synuclein Ser129 phosphorylation was significantly increased in the presence of both PLK2 and DYRK1A compared with PLK2 alone (Fig. 6B). This result suggested that the increase in α-synuclein Ser129 phosphorylation may be attributed to the enhancement of PLK2 activity induced by DYRK1A. Further analysis using phosphorylationnull mutant PLK2 S358A and phosphorylation-mimicking mutant PLK2 S358D was applied for comparison. Compared with wild-type PLK2, PLK2 S358A had significantly reduced α-synuclein Ser129 phosphorylation. Conversely, PLK2 S358D significantly elevated α-synuclein Ser129 phosphorylation (Fig. 6C). Taken together, these data demonstrated that Ser358 of PLK2 is critical for PLK2 kinase activity and can be modulated by DYRK1A. DYRK1A-mediated phosphorylation attenuates proliferation and migration/invasion in GBM cells. Previous studies revealed that PLK2 and DYRK1A are highly correlated with GBM malignancy (7,33). Whether their interaction contributes to GBM properties has yet to be uncovered. In the present study, in vitro cell viability assays as well as colony formation assays were first performed on U87MG and U251MG stable cells. The results indicated that PLK2 introduction significantly impaired cell viability, while PLK2 silencing had the opposite effect. The introduction of PLK2 together with DYRK1A further suppressed cell viability compared with introducing PLK2 alone (Fig. 7A). Moreover, PLK2 overexpression significantly decreased cell self-renewal, which was even weakened in the presence of DYRK1A in U251MG cells (Fig. 7B). It is noteworthy that U87MG cells are incapable of growing into colonies with such few cells for colony formation assay. U87MG cells can grow colonies only if cell confluency reaches 50% or more. Migration and invasion are essential processes in GBM progression; thus, it was then investigated whether the interaction of PLK2 and DYRK1A affects these processes in GBM cells. The wound healing assay results demonstrated that the migratory ability of U87MG and U251MG cells was significantly decreased upon PLK2 overexpression and further attenuated in the presence of PLK2 together with DYRK1A (Fig. 7C). Additionally, the invasion potential of both cell lines was determined by the Transwell invasion assay. DYRK1A significantly enhanced the suppression of invasion induced by PLK2 in U87MG and U251MG cells (Fig. 7D). Collectively, these data indicated the potential roles of DYRK1A-mediated PLK2 phosphorylation in regulating glioma cell malignancy (Fig. 8). Discussion The involvement of PLKs in various cancer types has been extensively studied, including their potential as therapeutic targets in GBM, where PLK2 has been identified as a novel prognostic biomarker (6). Hypermethylation of PLK2 has been implicated in GBM prognosis (34). PLK2 commonly serves as a tumor suppressor, and the expression of PLK2 is frequently lower in multiple types of cancer, including GBM. Strikingly, it was identified that a high level of PLK2 was still positively correlated with poor prognosis. These results indicated that uncharacterized regulatory mechanisms may be involved. The tumorigenic role of PLK2 is intricate and multifaceted. 
PLK2 dysregulation has been observed in various cancer types and is considered to play pivotal roles in cancer pathogenesis. For instance, partial or complete loss of PLK2 expression commonly occurs in colorectal carcinomas and impacts mTOR signaling (35). Silencing PLK2 leads to increased cell proliferation and decreased apoptosis in gastric cancer cells (36). PLK2 mRNA and protein expression are simultaneously low in hepatocellular carcinoma and are positively correlated with patient OS (37). In addition, PLK2 is hypermethylated in a high percentage of patients with multiple myeloma and B-cell lymphoma (38). Of note, PLK2 expression is exceedingly suppressed in GBM samples, particularly in temozolomide-resistant GBM. Reduced PLK2 expression enhances temozolomide resistance in GBM by activating Notch signaling. Meanwhile, upregulation of PLK2 decreased GBM cell malignancy (7), which is in line with the present results. In the present study, it was found that PLK2 could interact with DYRK1A and be phosphorylated by it in vitro. A previous study showed that four phosphorylation sites, including Ser497, Ser588, Tyr590 and Ser299, affect PLK2 protein stability (25). It was observed that DYRK1A-mediated phosphorylation increased PLK2 protein levels by decelerating its degradation, further addressing the critical role of PLK2 phosphorylation in PLK2 protein stability. Introduction of DYRK1A in the presence of PLK2 further attenuates proliferation, migration and invasion of GBM cells in vitro, underscoring the substantial contribution of DYRK1A-mediated PLK2 phosphorylation in GBM cell malignancy. The functional importance of PLK2 kinase activity has been extensively studied. PLK2 has been found to phosphorylate PLK1 at Ser-137, which is sufficient to mediate the survival signal in colon cancer cells, highlighting the important role of PLK2 kinase activity in cell growth (39). In addition, PLK2 phosphorylates CPAP at S589 and S595, impacting procentriole formation during the centrosome cycle (40). In the central nervous system, PLK2 phosphorylates alpha-synuclein at Ser129, rendering alpha-synuclein one of the major substrates of PLK2 (41). In the present study, it was revealed that the S358A mutant of PLK2 has no effect on alpha-synuclein Ser129 phosphorylation, whereas the phospho-mimicking mutant PLK2S358D enhances alphasynuclein Ser129 phosphorylation. These results demonstrated that the phosphorylation of Ser358 induced by DYRK1A tightly regulates PLK2 kinase activity. Further investigation is required to fully comprehend the potential therapeutic applications of PLK2. PLK2-mediated TAp73 phosphorylation prevents TAp73 activity, which confers an invasive phenotype through activation of POSTN (58,59). Nevertheless, how the DYRK1A/PLK2/TAp73 axis functions in GBM remains unknown. In addition, the potential damage to healthy brain tissue caused by radiation therapy against GBM, such as inflammation and necrosis, is a major concern. Radiation-induced necrosis has been reported to affect over 30% of patients with GBM (60-62). A previous study has shown that PLK2-mediated phosphorylation and translocation of Nrf2 activates anti-inflammatory effects via p53/Plk2/p21 cip1 signaling in acute kidney injury (63). Nrf2 is a critical regulatory factor that helps GBM tumors maintain low immunogenicity and antiapoptotic proliferative phenotypic characteristics (64). Therefore, it is worthwhile to investigate whether the DYRK1A/PLK2/Nrf axis is involved in GBM immunogenicity regulation. 
Acknowledgements The authors would like to thank the biological imaging facility of Shandong University for their support in immunofluorescence image acquisition and analysis. The authors would also like to thank Dr Xiulian Sun (Shandong University, China) for generously providing them with the pCMV6-entry-DYRK1A vector. Funding The present study was supported by the Natural Science Foundation of Shandong (grant no. ZR2022MH313). Availability of data and materials All data generated or analyzed during this study are included in this published article. PLK2 expression in multiple cancers is available on TIMER2.0 database(http://timer.cistrome.org/). Authors' contributions ST and PW conceived and designed the experiments. ST and JZ performed the experiments. ST and PW performed the bioinformatics and data analysis. PW reviewed and revised the manuscript. All authors read and approved the final manuscript. ST and PW confirm the authenticity of all the raw data. Ethics approval and consent to participate Not applicable. Patient consent for publication Not applicable.
The Coupled Effect of Mid-tropospheric Moisture and Aerosol Abundance on Deep Convective Cloud Dynamics and Microphysics The humidity of the mid troposphere has a significant effect on the development of deep convection. Dry layers (dry intrusions) can inhibit deep convection through the effect of a thermal inversion resulting from radiation and due to the reduction in buoyancy resulting from entrainment. Recent observations have shown that the sensitivity of cloud top height to changes in mid-tropospheric humidity can be larger than straightforward " parcel dilution " would lead us to expect. Here, we investigate how aerosol effects on cloud development and microphysics are coupled to the effects of mid-tropospheric dry air. The two effects are coupled because the buoyancy loss through entrainment depends on droplet evaporation, so is controlled both by the environmental humidity and by droplet sizes, which are, in turn, controlled in part by the aerosol size distribution. Previous studies have not taken these microphysical effects into account. Cloud development and microphysics are examined using a 2-D non-hydrostatic cloud model with a detailed treatment of aerosol, drop, and ice-phase hydrometeor size spectra. A moderately deep mixed-phase convective cloud that developed over the High Plains of the United States is simulated. We find that a dry layer in the mid troposphere leads to a reduction in cloud updraft strength, droplet number, liquid water content and ice mass above the layer. The effect of the dry layer on these cloud properties is greatly enhanced under elevated aerosol conditions. In an environment with doubled aerosol number (but still realistic for continental conditions) the dry layer has about a three-times larger effect on cloud drop 223 number and 50% greater effect on ice mass compared to an environment with lower aerosol. In the case with high aerosol loading, the dry layer stops convective development for over 10 min, and the maximum cloud top height reached is lower. However, the effect of the dry layer on cloud vertical development is significantly reduced when aerosol concentrations are lower. The coupled effect of mid-tropospheric dry air and aerosol on convective development is an additional way in which long term changes in aerosol may impact planetary cloud processes and climate. Introduction The effect of mid-tropospheric moisture on convective cloud development has been studied extensively in recent years [1][2][3][4][5][6][7][8][9].These studies have found that convection in a dry atmosphere tends to be more readily diminished by entrainment of very dry air.The phenomenon occurs globally: in the northern hemisphere [1], southern hemisphere [4,10], and the tropics [2,11].The effect has also been studied at oceanic [11], continental [1,12], and coastal locations [13,8].The processes have been examined through case studies [4], statistics [2], and numerical simulations [7,13,14].The studies also help to understand the large-scale moisture-convection feedback [15].For example, Ridout [6] found that there is a large increase in stored buoyant energy in association with the suppression of deep convection by dry layers.The presence of dry mid-tropospheric air may cause the buildup of buoyant energy for subsequent episodes of deep convection, such as those associated with the onset of the Madden-Julian oscillation [16][17][18]. 
The relationship between moist convection and free-tropospheric humidity involves various mechanisms at a range of spatial and temporal scales [15].Locally, dry air inhibits deep convection through two processes [12]: firstly, the thermal inversion due to the dry layer can prevent the development of convective clouds; and secondly, the entrainment of dry air as the parcel rises decreases the buoyancy [8,11,14,19].Both processes control the cloud top height.The impact of tropospheric humidity above the boundary layer depends on parcel buoyancy [14,20,21], as well as other factors, such as wind shear, which may affect the mixing rate near the cloud boundary [6].Shepherd et al. [13] discussed the relative importance of moisture in the boundary layer and in the mid troposphere.For long-lived, organized convective systems with low cloud bases and large wind shear, low-level moisture may likely affect convective activity.For short-lived convection with weak wind shear and higher cloud bases, moisture in the mid troposphere can significantly impact rainfall.The process of the suppression of deep convection by dry layers touches on some of the fundamental issues of representing moist convection [7,22]. Sherwood et al. [8] noted that the observed sensitivity of cloud top height to changes in free-tropospheric humidity was larger than expected from straightforward "parcel dilution".They found that a 20% increase in mean water vapor mixing ratio between 750 and 500 hPa was associated with about 1 km deeper maximum cloud penetration.They also suggested that dynamical feedbacks involving the evaporation of lofted cloud or raindrops during the early stages of convective growth might affect subsequent development. An important process that has not been considered in the previous studies is the aerosol properties of the entrained air.The buoyancy loss through entrainment is very much dependent on the amount of droplet evaporation, thus, on the environmental humidity and droplet sizes.Because droplet spectra depend on aerosol spectra, aerosol abundance may affect cloud development.In addition, the entrained aerosol may result in spectral broadening from CCN growing into small droplets [23].Therefore, the effect of aerosol on the droplet spectrum is a potentially important process affecting convective clouds and their response to dry layers. Recent studies have found that the interaction between aerosol and deep convective cloud is complex, depending on many factors, such as aerosol loading, geographical location and atmospheric thermal conditions [24][25][26][27].In our previous work [27] we used a cloud model with a detailed treatment of microphysics and aerosols to study the sensitivity of a continental deep convective cloud to the abundance of aerosol.The meteorological conditions show that there is a dry layer in the mid troposphere.The model produced different cloud features when the initial aerosol concentration increased.In the high aerosol case, cloud development was inhibited for about10 min, but other cases with low aerosol concentrations did not exhibit this feature.The inhibition in cloud development was clearly related to the dry layer, but also depended on the amount of aerosol.However, we did not investigate the link between them.Nevertheless, our previous results show that dry layers and aerosol are likely to have a coupled effect on deep convective cloud development, which has not been previously studied. Wang et al. 
[28] have shown that the smaller droplets in polluted stratocumulus clouds evaporate more rapidly and that this enhanced evaporation during entrainment can counteract the increases in liquid water path associated with reduced collision-coalescence.A recent modeling study of Trade Wind maritime cumulus [29] found that the lifetime, depth and width of the clouds can decrease in environments with high aerosol abundance, which is contrary to the expectation that the cloud lifetime should be enhanced.They attribute this response to the enhanced evaporation during entrainment in clouds with more drops.A similar study of warm continental cumulus [30] also found evaporation to play a role in determining cloud response to changes in aerosol.A similar process occurs during entrainment of dry air in the deeper mixed-phase clouds that we study here.Our work contrasts with theirs because we study unstable, moderately deep mixed-phase clouds rather than warm trade cumulus within a capped inversion.We also examine the coupled effect of humidity and aerosol in controlling the cloud response. Our specific objective is to study the response of a moderately deep mixed-phase cloud to aerosol abundance both with and without a dry layer in the mid troposphere.We investigate the issue with a dynamic cloud model with bin-resolved microphysics and aqueous-phase chemistry.The research contributes to and extends our current understanding of factors influencing deep convective clouds in the following ways.Firstly, it will study the synergistic roles of aerosol abundance and mid-tropospheric humidity, while previous work has studied these as separate issues.Secondly, the simulations with the cloud model provide information on the way in which drop spectra respond.In this paper, sections 2 and 3 describe the cloud model and the experimental design.Section 4 discusses results from numerical simulations and conclusions follow in Section 5. The Model In this section we briefly describe the model used for simulations.The numerical model is the Model of Aerosols and Chemistry in Convective Clouds (MAC3), which is based on the axisymmetric nonhydrostatic cloud model of Reisin et al. [31] and includes newly added modules of trace gas chemistry and aerosol [32,33]. The governing equations include the following atmospheric variables: the vertical and radial velocity, pressure perturbation, virtual potential temperature perturbation, specific humidity perturbation, specific number concentration and mass of aerosol in a spectral bin, specific number concentration and mass for each type of cloud hydrometeors in a size bin, and concentration of activated ice nuclei.The microphysical processes are solved with an accurate multi-moment method [31,[34][35][36]. Four hydrometeor species are included: drops, ice crystals, graupel and snowflakes (aggregates).Each particle species is divided into 34 bins, with mass doubling for adjacent bins.The aerosol spectrum is represented by 43 bins.The warm microphysical processes include nucleation of drops based on the calculated supersaturation and the prognosed aerosol spectrum, condensation and evaporation, collision-coalescence, and binary break-up.The cold processes are ice nucleation (deposition, condensation-freezing, contact nucleation, and immersion freezing), ice multiplication, deposition and sublimation of ice, ice-ice and ice-drop interactions, melting of ice particles, sedimentation of drops and ice particles. 
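To make the bin structure described above concrete, the short sketch below constructs a mass-doubling grid of the kind used for the drop spectrum (mass doubles between adjacent bins, so radius grows by a factor of 2^(1/3) per bin); the smallest-bin drop radius chosen here is an arbitrary illustrative value rather than the model's actual setting.

```python
# Sketch of a mass-doubling bin grid: m_(i+1) = 2 * m_i, hence r_(i+1) = 2**(1/3) * r_i.
# The smallest drop radius (first bin) is an illustrative assumption, not MAC3's value.
import numpy as np

RHO_W = 1000.0          # kg m^-3, density of liquid water
N_BINS = 34             # number of drop bins used in the model

r_min = 1.56e-6                                    # assumed smallest drop radius (m)
m0 = 4.0 / 3.0 * np.pi * RHO_W * r_min ** 3        # mass of the smallest drop (kg)
mass = m0 * 2.0 ** np.arange(N_BINS)               # mass per drop in each bin
radius = (3.0 * mass / (4.0 * np.pi * RHO_W)) ** (1.0 / 3.0)

print(f"bin  1: r = {radius[0] * 1e6:8.2f} um")
print(f"bin {N_BINS}: r = {radius[-1] * 1e6:8.2f} um")   # radius ratio per bin is 2**(1/3)
```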
The aerosol module includes prognostic equations for the number concentration and the specific mass of aerosol particles that become activated, are scavenged by hydrometeors, remain in the air, or are regenerated following complete evaporation/sublimation of hydrometeors, as well as for gas-cloud interactions and the aqueous-phase oxidation of dissolved SO2 by ozone and hydrogen peroxide.

The effects of entrainment and mixing in our model are included in a turbulent diffusion operator [37]. The operator is defined as

D(X) = (1/rho0) [ (1/r) d/dr ( r rho0 K dX/dr ) + d/dz ( rho0 K dX/dz ) ]   (1)

where r and z are the radial and vertical coordinates, respectively, rho0 is the density of the unperturbed atmosphere, X is an arbitrary transported function, and K is the turbulent coefficient based on the approach of Monin and Yaglom [38] (Equation (2)), in which the turbulence coefficient in the unperturbed atmosphere is 10.0 and C_td,q = 1.061.

The Experimental Design

The 19 July 1981 Cooperative Convective Precipitation Experiment (CCOPE) case has been extensively studied [27,33,39-43]. This case is characterized by moderate instability and weak wind shear [44]. The vertical profile of relative humidity is marked by large changes on a scale of about 2 km (Figure 1a). A dry layer between 4.9 and 6.4 km is of particular interest in this study.

MAC3 is run here in an axisymmetric configuration, which is appropriate for clouds that developed during the low wind shear conditions of 19 July 1981 over Montana [27]. The model domain is 12 km in the vertical direction and 6 km in the radial direction, with vertical and horizontal grid sizes of 300 m and 150 m, respectively, and 60 and 120 m, respectively, for the high-resolution runs. Open boundary conditions are used [37]. A time step of 2.5 s is used for condensation/evaporation of drops or deposition/sublimation of ice particles, and 0.01 s for gas absorption. All other processes use a time step of 5 s. The cloud is initialized by a 450 m wide warm bubble of 2 K in a layer centered at an altitude of 900 m.

The initial aerosol size distribution is based on observations made in Montana, USA by Dye et al. [44] and is typical of continental conditions (Figure 1b). The distribution is made up of three modes (the Aitken, accumulation and coarse modes), with geometric mean particle radii of 0.006, 0.03, and 1.0 µm, respectively [27,33,43,45]. These modes are mapped onto the model's 43 aerosol particle size bins, with the mass per particle and the number concentration in each bin treated as prognostic tracers. Based on Hobbs et al. [46], Yin et al.
[32] assumed that 15% of the aerosol particles were water soluble, and that the soluble aerosols were composed of ammonium sulphate, regardless of size.Only sulphate aerosols are considered in the simulations.We also assume that the aerosol number (and mass) decreases with altitude according to , where is the aerosol concentration at the surface, z is altitude, r n is aerosol radius, and z s (= 2 km) is the scale height [27].Because aerosol is a prognostic tracer in the model, this initial aerosol profile changes in and around the cloud as the cloud develops, and aerosol particles transported and processed by the cloud can be re-entrained [33].Cui and Carslaw [45] studied the sensitivity to changes in the vertical profile of aerosol.They found that the cloud causes an increase in upper troposphere aerosol mass when the initial upper troposphere aerosol abundance is low and a decrease when the initial abundance is high.Realistic increases in cloud condensation nucleus concentrations reduce the precipitation efficiency and thereby the scavenging efficiency of aerosol and allow more aerosol material to be transported to the upper troposphere.The enhancement of the upper troposphere aerosol mass after a cloud event therefore increases in clouds with higher CCN concentration, a positive feedback driven by the response of scavenging rates to aerosol abundance. The purpose of the model runs is to examine how the cloud changes as the aerosol concentration and mid-tropospheric humidity vary.The base case simulation used here is the same as that used by Yin et al. [33] who compared the simulated cloud with observations.The model provided a reasonably good reproduction of the cloud base height, size of the main updraught core, updraught speed at cloud base, start time of the updraught decay, location and time of the maximum liquid-water content, concentration of droplets, first appearance of graupel, and location of the first radar echo.Therefore, the simulated cloud in this case is in good agreement with observations.In sensitivity simulations, Cui et al. [27] found that the dynamics and microphysics changed in response to changes in the aerosol size distribution.When the number of particles in the aerosol accumulation mode was doubled (which controls the evolution of cloud drop number depending on the attained supersaturation), the cloud top temporally stopped ascending. In order to investigate the coupled effect of the mid-tropospheric humidity and the aerosol abundance, we designed four simulations.The first run is the base case, which uses the observed humidity profile, including the dry layer between 4.8 and 6.6 km (see Figure 1a), and the standard number of particles in the aerosol accumulation mode.We refer to this as case OrgDry.This simulation is identical to the base case described by Yin et al. [33] and Cui et al. [27] and produced ~930 cm −3 drops at cloud base.In the second run, the humidity in the mid-tropospheric dry layer is increased from ~15% to ~55% (OrgWet), but cloud base drop concentrations are identical to OrgDry when the two clouds just formed.In the third simulation, the observed dry humidity profile is used but the amplitude of the aerosol accumulation mode is doubled (DblDry), resulting in a cloud base drop concentration of 1200 cm −3 .In the fourth simulation, the amplitude of the accumulation mode is doubled and the humidity in the mid-tropospheric dry layer is increased to ~55% (DblWet).The four cases are listed in Table 1.[46], and Respondek et al. 
Results

Figure 2 shows how the cloud top height and the masses of drops, ice crystals and graupel vary between the four cases. In response to the changes in mid-tropospheric humidity and aerosol abundance, the simulated clouds gradually diverge after 20 min. A distinctive feature in cloud top heights is the change from stagnation in case DblDry (at ~30-40 min) to more steady development in case DblWet. Another feature is the increase in the top height of graupel particles in the high aerosol case when the dry layer is removed. The top of the graupel particles is ~8 km in case DblDry, while it is ~10 km in DblWet. Generally, cloud top heights tend to increase with decreasing aerosol when the dry layer is absent. Figures 2a and 2c also indicate the large differences in surface precipitation between the four runs. In Cui et al. [27], we studied the aerosol impact on precipitation and found that an increase in aerosol loading produces more numerous but smaller drops, which suppresses the warm rain process and reduces the precipitation. Cui et al. [26] examined the microphysical responses to aerosol abundance. We found that the precipitation of the CCOPE cloud comes mainly from melting graupel particles. When the aerosol loading increases from moderate to high, the suppressed warm rain process results in smaller graupel particles, and the precipitation is therefore suppressed. The results of the four cases in this paper are in agreement with our previous studies. In short, the dry layer can suppress the development of the convective cloud, and the suppression is more significant with increasing aerosol loading. The causes of the coupled effect will be studied by examining the differences in cloud microphysics and thermodynamics in Figures 3-7.

Cloud tops reach the bottom of the dry layer at ~20 min in all cases and pass through the dry layer during 20-25 min. At 20 min, there are very small differences in the maximum specific mass of hydrometeors and the cloud top height. After passing the dry layer at ~25 min, the differences become progressively larger in the upper part of the clouds. Figure 3 shows the differences at 25 min of simulation in the specific mass and number concentrations of droplets, in the temperature, and in the vertical velocity between the wet cases (without the dry layer) and dry cases (with the dry layer). A comparison between the wet and dry cases with the same initial aerosol concentrations indicates that the removal of the dry layer leads to more vigorous clouds, in accord with previous findings [9]. Both the specific mass (Figure 3a,b) and number concentrations (Figure 3c,d) of droplets increase near the cloud top and edge when the dry layer is removed, and the increase is much larger in the high aerosol cases (DblWet-DblDry). The removal of the dry layer suppresses evaporation caused by mixing and produces more droplets near the cloud top and edge. Droplet concentrations increase in these regions by up to 200-300 cm−3. Less evaporation, together with less dilution, results in more latent heating in the wet cases (Figure 3e,f). This, in turn, promotes updrafts that are stronger by up to 2-3 m s−1 (Figure 3g,h). The differences are larger in the high aerosol cases (right column of Figure 3) than in the low aerosol cases (left column) because the droplets are smaller and evaporate more quickly. Therefore, the microphysical structure, cloud dynamics and thermodynamics are more sensitive to changes in initial aerosol abundance in a dry mid-tropospheric environment.
Two factors in this study may affect buoyancy. One is the change in the initial moisture profiles, which causes a change of 6.88 J kg−1 in convective available potential energy. The other is discussed below. The process of buoyancy depletion acts through cloud microphysical processes, which eventually affect cloud thermodynamics and dynamics. The size distribution of drops reveals how the aerosol abundance and mid-tropospheric humidity affect cloud microphysics at selected locations (Figures 4 and 5). For cases with the same initial aerosol concentrations (OrgDry and OrgWet, or DblDry and DblWet), the drop distributions vary with mid-tropospheric humidity. In the cloud lateral boundary and the cloud top layer, the figures indicate an increase in both the number and mass distributions in the wet cases. In the updraft core, however, the differences in the distributions between the dry and wet cases are small, reflecting the fact that the mid-tropospheric dry layer reduces the drops through mixing and entrainment across the cloud boundaries and only gradually into the core. For cases with the same initial moisture profile, the distributions of drops vary with the initial aerosol concentrations. In the cloud lateral boundary and the cloud top layer, the figures indicate an increase in both the number and mass distributions in the low aerosol cases. In the updraft core, however, there are more large drops (radii ≥ 20 µm) in the low aerosol cases. Figures 3-5 indicate that mid-tropospheric humidity is an important factor affecting cloud microphysics, but that the magnitude of the effect depends also on the aerosol abundance.

Figure 6 shows the differences at 35 min in the specific mass and number concentrations of droplets, in the temperature, and in the vertical velocity between the wet and dry cases. A comparison between this figure and Figure 3 (for 25 min) indicates that the impact of removing the dry layer has become stronger than at 25 min, although the maximum impact is now in air that has ascended 1-1.5 km above the dry layer at 6 km. The effect of the dry layer on drop specific mass is ~2 g kg−1 in the high aerosol case but only ~1 g kg−1 in the low aerosol case. Likewise, drop number concentrations are reduced by more than 200 cm−3 by the dry layer in the high aerosol case but only by ~50 cm−3 in the low aerosol case. The enhancement of cloud activity is therefore much larger between cases DblDry and DblWet. Case DblWet still has a cloud top height lower than case OrgWet, but it overcomes the stagnation seen in case DblDry. The top of the graupel particles grows accordingly (Figure 2).
Figure 7 shows the differences in drops and ice crystals at the cloud centre between the wet and dry cases. Both the mass and number of drops increase near the cloud top when the dry layer is removed, because the moister atmosphere delays drop dissipation (Figure 7a-d). The effect of the dry layer on drop mass and number is most pronounced and lasts longer in the high aerosol cases (right-hand panels of Figure 7). The effect of the dry layer on ice properties is slightly different from that on the droplets. The removal of the dry layer results in greater ice crystal mass, mostly before 40 min (Figure 7e,f), and again, the effect of the dry layer on ice mass is more pronounced and lasts longer when aerosol concentrations are higher. However, the increase in ice crystal number concentration upon removing the dry layer is larger in the low aerosol case (Figure 7g) than in the high aerosol case (Figure 7h). This is because immersion freezing is the dominant mode of freezing in these simulations, and in the low aerosol case there are more large drops near the cloud top, which favours immersion freezing [27].

Figure 7. The difference in drop specific mass (a and b; unit: g kg−1), drop number concentration (c and d; unit: cm−3), ice crystal specific mass (e and f; unit: g kg−1), and ice crystal number concentration (g and h; unit: cm−3) at the cloud centre. The left column is the difference between OrgWet and OrgDry, and the right column between DblWet and DblDry.

Sherwood et al. [8] found that increases in mid-tropospheric moisture lead to higher cloud tops, but the sensitivity is too large to be explained by simple dilution of parcel buoyancy through entrainment and mixing. They speculated that dynamical feedbacks involving the re-evaporation of lofted cloud and/or raindrops during the earlier stages of convective growth produce effects at later times that enhance the overall sensitivity. The results of our simulations are consistent with Sherwood et al.'s speculation in terms of the liquid-phase microphysics. The role of mid-tropospheric moisture in determining cloud top height is not to add more buoyancy; rather, it is to lose less buoyancy by reducing evaporation from entrainment and mixing. Therefore, simple air parcel theory cannot explain the sensitivity in the Sherwood study. Our study further indicates that the ice-phase microphysics responds differently to changes in mid-tropospheric moisture. Since the cloud top heights in the Sherwood et al. study reached above 12 km, the existence of supercooled droplets is almost excluded: cloud drops will have already frozen either by heterogeneous freezing at lower levels or by homogeneous freezing at higher levels. Therefore, the ice-phase microphysics provides the final push to the cloud top. This has been confirmed by previous studies [27,47].

The treatment of entrainment and mixing processes in the model is related to the spatial resolution. To see the effect of resolution on the results, we have repeated the four runs at 60-m resolution in the horizontal and 120-m resolution in the vertical: OrgDry60, OrgWet60, DblDry60, and DblWet60. The temporal variation of cloud top height for these runs is shown in Figure 8. The cloud top height increases when the dry layer is removed for both high and low aerosol loadings. The results of the sensitivity simulations show that dry layers can reduce cloud activity when aerosol loading is high.
Discussion

In this paper, we investigate the coupled effect of mid-tropospheric moisture and aerosol abundance on the dynamics and microphysics of a deep convective cloud with a numerical model. The model used is a dynamic cloud model with bin-resolved microphysics and aqueous-phase chemistry. The cloud, which formed on 19 July 1981 over Montana during CCOPE, developed in an environment with moderate instability, weak wind shear, and a strong dry layer in the mid-troposphere. We investigate the response of the cloud to aerosol abundance both with and without the mid-tropospheric dry layer. The impact of dryness in the mid-troposphere varies greatly depending on aerosol abundance. When aerosol abundance is high, the impact is large enough to alter cloud dynamics and microphysics: in the high aerosol case, the cloud top temporarily stops ascending when the dry layer is present, and this stagnation does not occur when the dry layer is removed. We find that the dry layer in the mid-troposphere leads to a reduction in cloud updraft strength, droplet number, liquid water content and ice mass in the cloud above the layer, and that these changes are amplified in high aerosol environments.

Our simulations agree with previous studies showing that mid-tropospheric dryness suppresses convective activity. These studies suggest that dry layers inhibit the growth of deep convective clouds by reducing buoyancy through entrainment. Our study, using bin-resolved microphysics, suggests that enhanced evaporation in clouds with high droplet concentrations can significantly affect cloud macroscopic and microphysical properties, adding to the dilution effect.

Mixing in our model is calculated at each time step as a result of turbulent diffusion, which is assumed to mix cloudy and dry air down to the molecular level. There are two conceptual models for the mixing of cloudy and clear air [48], and each is likely to result in a different response of the cloud microphysics during entrainment. The homogeneous mixing model applies to situations in which the turbulent mixing time scale is much shorter than the droplet evaporation timescale. During a mixing event, all cloud droplets therefore experience the same environmental conditions, all droplets experience some evaporation, and the number of cloud droplets does not change. In the inhomogeneous mixing model, in which the timescale of turbulent mixing is longer than that of droplet evaporation, cloud drops at the interface between cloudy and clear air can evaporate completely while other drops are unaffected; in this case, the number of drops decreases. A number of studies have simulated the mixing process from the scale of individual entrained blobs of dry air (~metres) down to the Kolmogorov scale at which molecular diffusion wipes out remaining gradients in cloud properties [23,49,50], and there is some limited observational support [49,51]. These studies have concluded that mixing in real clouds is likely to lie somewhere between the extreme homogeneous and inhomogeneous limits, and that homogeneous mixing is the best approximation for clouds with high turbulence, such as cumulus clouds. Lehmann et al.
[52] discussed the relevant scales in a quantitative way. They argued that the ratio of the mixing and thermodynamic reaction time scales, defined as the Damköhler number, is not sufficient to describe the mixing process. They introduced a transition length scale to separate the inertial subrange into a range of length scales for which mixing between ambient dry and cloudy air is inhomogeneous, and a range for which the mixing is homogeneous. The mixing process depends on many factors, such as the mixing scales, turbulence, ambient relative humidity, and the sizes and concentrations of drops [53]. In the case of extreme inhomogeneous mixing, droplets at the cloudy-clear interfaces evaporate completely, so the rate of loss of cloud water and droplet number is essentially independent of the droplet size spectrum. In homogeneous mixing, all droplets are exposed to the same humidity and all droplets evaporate a little, so such mixing will lead to different evaporation behaviour in clouds with different initial drop spectra. The timescale of evaporation depends on the droplet size (τ = r²/(DS), where r is the droplet radius, D is the diffusion coefficient of water vapour in air, and S is the subsaturation, e.g., 0.1 for RH = 90%). Therefore, in a cloud with a larger number of smaller droplets, evaporation occurs more rapidly during mixing. In the cloud we simulate here, the mid-cloud median radius varies between 12 and 14 µm between the base case and the doubled CCN case, so the timescale of drop evaporation is initially 36% longer in the cloud with larger drops. This difference in evaporation timescale increases further as the droplets shrink. Thus, during mixing, the cloud with initially smaller droplets suffers a greater loss of droplets as it passes through the dry layer (a brief numerical illustration of this timescale argument follows below).

The effect of aerosol on cloud dissipation that we have described operates under relatively dry mid-tropospheric conditions with fairly high aerosol loadings. Under these conditions, the entrainment of dry air combined with the rapid evaporation of small droplets leads to a more rapid reduction of cloud buoyancy than would be expected from either a moist mid-troposphere with fairly high aerosol loadings or a dry mid-troposphere with fairly low aerosol loadings alone. Moist maritime air with low aerosol number concentrations is at the opposite extreme. Dry layers have less effect there because oceanic areas typically have low aerosol and droplet concentrations and high humidity; the competition for the available moisture is much less intense, and the reduction of cloud buoyancy is much slower. A recent study by Koren et al. [23] revealed that convective cloud top height increases over the Atlantic due to aerosol. Huang et al. [54] studied deep convective cells over the Black Forest in Germany and found more vigorous growth and a higher top in the cleaner cloud. This is caused partly by the cooling effect of evaporation, which occurred near the cloud boundary via entrainment and mixing of the cloud air with the ambient dry air, and in the upper part of the cloud through the Bergeron mechanism. The reduction in cloud updraft strength due to enhanced evaporation in polluted environments that we describe here is likely to be more important over dry continental regions with high aerosol loadings.
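The sketch below simply evaluates the evaporation timescale τ = r²/(DS) quoted above and reproduces the ~36% difference between 12 and 14 µm droplets. The water-vapour diffusivity is a typical textbook value and is an assumption, not a number from the paper; the absolute timescales from this simplified expression are only indicative, and the ratio is the point of interest.

D = 2.5e-5        # m^2 s^-1, diffusivity of water vapour in air (assumed)
S = 0.1           # subsaturation, e.g. RH = 90% (from the text)

def evap_timescale(radius_m, D=D, S=S):
    """Evaporation timescale tau = r^2 / (D * S)."""
    return radius_m ** 2 / (D * S)

tau_small = evap_timescale(12e-6)   # cloud with smaller droplets
tau_large = evap_timescale(14e-6)   # cloud with larger droplets
print(f"tau(12 um) = {tau_small:.2e} s, tau(14 um) = {tau_large:.2e} s")
print(f"ratio = {tau_large / tau_small:.2f}  (~36% longer for the larger drops)")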
Limitations of our current research include the resolution and the treatment of entrainment and mixing. The resolution in the current simulations is restricted by the use of bin-resolved microphysics and aerosol processes. Turbulent mixing takes considerable time before droplets are exposed to the environmental air [49], and such a delay should be taken into account when low spatial resolution is used. The work by Jeffery and Reisner [55] shows promise for future development.

The existence of dry layers is a global phenomenon [56]. This phenomenon has been extensively studied during the TOGA COARE IOP [6,57,58]. Recent studies have revealed dry layers in other regions, such as over the West African monsoon area [7], the Arctic [59], the United Kingdom [60] and the tropical Atlantic [61,62]. Further work is required to determine whether the effect of aerosol on cloud dissipation could have wider implications for large-scale cloud processes and climate. A large-scale suppression of convective clouds over substantial regions due to enhanced aerosol loadings would amount to a negative climate forcing (due to the increased outgoing longwave radiation from lower cloud tops). However, the effect of aerosol on convective clouds is complex and multi-faceted [63], and net changes in cloud top height need to be considered alongside changes in anvil properties, cloud extent, etc. Previous studies [64,65] have also suggested that increased aerosol leads to cloud invigoration due to a suppression of low-level rainout and aerosol washout, as well as elevation of the onset of precipitation. The greater suppression of cloud development due to more rapid evaporation in dry layers would compete with that effect. Further research is also needed to determine how the coupled effect controls convective cloud fields rather than just a single cloud.

Figure 1. (a) The 1440 MDT 19 July 1981 Miles City, Montana, USA, sounding of temperature (red for all cases) and relative humidity (thick blue for OrgDry and DblDry; thin green for OrgWet and DblWet); (b) initial number density distribution functions (black for OrgDry and OrgWet; brown for DblDry and DblWet) of aerosol particles (based on Hobbs et al. [46] and Respondek et al. [43]) used in the simulations.

Figure 2. The temporal variation of specific mass for droplets (a), ice crystals (b) and graupel (c), and the simulated cloud top heights (d). For clarity, only the isopleth of 0.1 g kg−1 is plotted in (a-c).

Figure 3. The differences at 25 min in drop specific mass (a and b; unit: g kg−1), drop number concentration (c and d; unit: cm−3), temperature (e and f; unit: °C), and vertical velocity (g and h; unit: m s−1). Left column is the difference between OrgWet and OrgDry, while right column is the difference between DblWet and DblDry.

Figure 4. Drop number distribution functions at selected locations at 25 min. The radial distance (X) and altitude (Z) of the individual distributions are indicated on each panel. The units of the horizontal and vertical coordinates are µm and cm−3 µm−1, respectively. Color scheme: black for OrgDry, red for OrgWet, blue for DblDry and green for DblWet.

Figure 5. Drop mass distribution functions at selected locations at 25 min. The radial distance (X) and altitude (Z) of the individual distributions are indicated on each panel. The units of the horizontal and vertical coordinates are µm and g kg−1 µm−1, respectively. Color scheme: black for OrgDry, red for OrgWet, blue for DblDry and green for DblWet.
Figure 6. The differences at 35 min in drop specific mass (a and b; unit: g kg−1), drop number concentration (c and d; unit: cm−3), temperature (e and f; unit: °C), and vertical velocity (g and h; unit: m s−1). Left column is the difference between OrgWet and OrgDry, while right column is the difference between DblWet and DblDry.

Table 1. Description of basic simulations conducted.

OrgDry: standard amplitude of the accumulation mode; observed humidity profile (dry layer retained).
OrgWet: standard amplitude of the accumulation mode; relative humidity between 5.1 and 6.3 km is increased to ~55%.
DblDry: amplitude of the accumulation mode is doubled; observed humidity profile (dry layer retained).
DblWet: amplitude of the accumulation mode is doubled; relative humidity between 5.1 and 6.3 km is increased to ~55%.
Evolutionary and genomic analysis of four SARS-CoV-2 isolates circulating in March 2020 in Sri Lanka; Additional evidence on multiple introduction and further transmission

Molecular epidemiology and mapping of the virus help in understanding the evolution of epidemics and in applying quick control measures. This study provides genomic evidence of multiple severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) introductions into Sri Lanka and of virus evolution during circulation. Whole-genome sequences of four SARS-CoV-2 strains obtained from coronavirus disease 2019 (COVID-19) positive patients reported in Sri Lanka during March 2020 were compared with sequences from Europe, Asia, Africa, Australia and North America. The phylogenetic analysis revealed that the sequence of the sample of the first local patient, collected on 10 March from a person who had contact with tourists from Italy, clustered with SARS-CoV-2 strains collected from Italy, Germany, France and Mexico. Subsequently, the sequence of the isolate obtained on 19 March also clustered in the same group with samples collected in March and April from Belgium, France, India and South Africa. The other two strains of SARS-CoV-2 were segregated from the main cluster: the sample collected on 16 March clustered with a sample from England, and the sample collected on 30 March showed the highest genetic divergence from the Wuhan, China isolate. Here we report the first molecular epidemiological study conducted on circulating SARS-CoV-2 in Sri Lanka. The findings demonstrate the robustness of molecular epidemiological tools and their application in tracing possible exposure and disease transmission during the pandemic.

Introduction

The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged in late 2019, causing coronavirus disease 2019 (COVID-19). The SARS-CoV-2 virus spread rapidly throughout the world with an unsettling effect on human livelihoods and economies. Globally, since 31 December 2019 and as of week 5 of 2021, more than 106 472 660 confirmed cases of COVID-19, including 2 323 103 deaths, had been reported worldwide, as indicated by the European Centre for Disease Prevention and Control [1]. The World Health Organization (WHO) declared a global health emergency at the end of January 2020 [2]. In Sri Lanka, the first local case of COVID-19 was recorded on 11 March 2020 in a 58-year-old male. As of 23 February 2021, there were 80 517 confirmed cases of COVID-19, including 450 deaths, reported to the WHO [3]. A robust surveillance system, an understanding of the molecular epidemiology of the virus, and mapping can help to understand the evolution of the epidemic and to apply quick control measures [4,5]. Genomic epidemiological mapping of the COVID-19 virus has demonstrated that the virus is undergoing mutations [6]. Therefore, in addition to confirmation of the presence of the virus, the WHO recommends regular sequencing of a percentage of specimens from clinical cases to monitor viral genome mutations that might affect medical countermeasures, including diagnostic tests [5]. The whole genomes of four SARS-CoV-2 virus strains obtained from COVID-19 positive local patients were sequenced and deposited in the Global Initiative on Sharing All Influenza Data (GISAID) EpiCoV™ database.
This study was conducted to investigate the evolution and genetic relatedness of SARS-CoV-2 strains in Sri Lanka with other reported SARS-CoV-2 strains.

Sri Lankan SARS-CoV-2 sequences

The whole-genome sequences of four SARS-CoV-2 virus strains obtained from COVID-19 positive patients, including the first local positive case reported on 11 March 2020, were deposited in the GISAID EpiCoV™ database and used for this study (Table 1).

Selection of SARS-CoV-2 isolates

For further understanding of the molecular epidemiology of the COVID-19 outbreak in Sri Lanka, 46 isolates were selected from GenBank, National Center for Biotechnology Information (NCBI), using the Basic Local Alignment Search Tool nucleotide (BLASTn) tool based on the highest identity and lowest expected value (E-value) with the Sri Lankan isolates. The sequence datasets of the 46 selected SARS-CoV-2 complete genomes from different countries in Asia, Africa, Australia, Europe and North America, together with the four Sri Lankan isolates retrieved from GISAID by 28 April 2020, were used for this analysis. The strain isolated from Wuhan in December 2019 with the NCBI accession number NC_045512.2 was used as the reference genome.

Whole-genome sequence alignment and phylogenetic analysis

Sequence alignment was performed using Multiple Sequence Comparison by Log-Expectation (MUSCLE) software [7]. Following alignment, single nucleotide polymorphism (SNP) and amino acid variation analyses were conducted using Molecular Evolutionary Genetics Analysis version ten (MEGA X) [8], taking the first SARS-CoV-2 reference sequence (GenBank accession number NC_045512) deposited in December 2019 in GenBank from Wuhan, China. The evolutionary history was inferred using the neighbour-joining method with the maximum composite likelihood method and the Hasegawa-Kishino-Yano (HKY) model as the best-fitting model [9] after 1000 bootstrap replications using MEGA X [8].

Phylogenetic tree analysis

The maximum likelihood phylogenetic tree in Figure 1 shows that two of the SARS-CoV-2 isolates from Sri Lanka (GISAID accession IDs: EPI_ISL_428671 and EPI_ISL_428672), collected on 10 March 2020 and 19 March 2020, respectively, are clustered in the group with the isolates from Italy, Germany, France and Mexico that were collected before 10 March 2020. The Sri Lankan isolate EPI_ISL_428673, collected on 31 March 2020, was clustered with isolates obtained on 9 February 2020 from England, while the Sri Lankan isolate EPI_ISL_428670, collected on 16 March 2020, showed the highest evolutionary distance to the SARS-CoV-2 sequence that originated in Wuhan, China (GenBank accession number NC_045512).

SNPs analysis

Fifteen SARS-CoV-2 genome sequences that are mainly clustered with the four Sri Lankan strains were compared with the Wuhan reference to observe the viral genome mutations and amino acid variations. The SNPs present along the whole genome are indicated in Table 2 (positions referred to with respect to the reference sequence; GenBank accession number: NC_045512), including those of the genome sequence of EPI_ISL_428671 from the first local patient. Table 3 indicates the respective changes in the amino acid positions of the derived proteins (positions referred to with respect to the reference sequence; GenBank accession number: NC_045512).
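The authors carried out the SNP comparison in MEGA X; the sketch below is only an illustration of how nucleotide differences relative to the Wuhan reference could be listed from a whole-genome alignment in Python with Biopython. The file name is a placeholder, and the assumption is that the first record in the alignment is the reference.

from Bio import AlignIO

alignment = AlignIO.read("aligned_genomes.fasta", "fasta")  # placeholder file name
reference = alignment[0]  # assumed: first record is the Wuhan reference NC_045512

for record in alignment[1:]:
    snps = []
    for pos, (ref_base, alt_base) in enumerate(zip(str(reference.seq), str(record.seq))):
        # Skip alignment gaps and ambiguous bases; positions are alignment columns,
        # which match reference coordinates only where the reference is ungapped.
        if ref_base != alt_base and "-" not in (ref_base, alt_base) and alt_base != "N":
            snps.append((pos + 1, ref_base, alt_base))
    print(record.id, "differences vs reference:", len(snps))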
SNPs occurring in the open reading frame (ORF) 1ab, S, ORF 3a, M and N genes of the four Sri Lankan whole-genome strains resulted in amino acid changes at the corresponding positions of the translated proteins, while the rest of the SNPs in the genes did not result in any changes in the amino acid sequence (Table 3). Except for the first Sri Lankan isolate collected on 10 March (EPI_ISL_428671), the other three Sri Lankan isolates presented a total of six mutations in the ORF 1ab protein with respect to the reference (Table 3). Mutations are observed in the S protein at the same position, AA614 (bp 23403), in both Sri Lankan strains EPI_ISL_428671 and EPI_ISL_428672. A single mutation was observed in the ORF 3a protein of strain EPI_ISL_428673 at position AA251 (bp 26144). In the EPI_ISL_428670 strain, the amino acid sequence of the N protein shows one mutation at position AA398 (bp 29465), while strain EPI_ISL_428671 had mutations at positions AA203 (bp 28882) and AA204 (bp 28883) compared to the reference strain.

Discussion

In this study, the virus strain of the first local patient (EPI_ISL_428671), collected on 10 March 2020 from a tour guide who had direct contact with Italian tourists [4], clustered together with isolates from Italy, Germany and Mexico. This evolutionary evidence revealed that the first SARS-CoV-2 sequence showed the highest genomic similarity to Italian and other European isolates, confirming the exposure history of the first patient, who had been exposed to tourists from Italy. Even though the history and origin of infection of the remaining isolates were not reported, the clustering of the other Sri Lankan isolates with isolates from the database provided a clue about the possible source of infection. Furthermore, the genomic relatedness of the SARS-CoV-2 genome sequences of the Sri Lankan isolates further supported the exposure history of the patients reported by the Epidemiology Unit, Ministry of Health, Sri Lanka [7]. More importantly, this study has indicated the importance of tracking the history of infection to trace the contacts of infected persons, particularly for asymptomatic patients in Sri Lanka. The mutations found in the virus identified in Sri Lanka relative to the reference Wuhan strain, and the recognized amino acid changes, should be further monitored to understand whether those changes affect the virulence of the virus or the clinical manifestations of the disease. Although this study had limitations, mainly due to the lack of epidemiological information on the available genomes in the database and the limited number of genome sequences available in Sri Lanka at the time of analysis, the information obtained might assist in understanding the evolutionary dynamics and local transmission of circulating SARS-CoV-2 in Sri Lanka.

Conclusion

In conclusion, the results of this study indicated that the SARS-CoV-2 sequences from Sri Lanka have the highest genomic similarity to isolates from Italy, Germany and England. This study was conducted as a preliminary study in Sri Lanka, and further studies are necessary to increase our knowledge of SARS-CoV-2 isolates. Since mutational variants can alter the presentation of COVID-19 infection, the robust molecular epidemiological tools indicated in this study can be used to trace possible exposure, support epidemiological analysis and aid the development of effective treatments, including vaccines.

Supplementary material.
The supplementary material for this article can be found at https://doi.org/10.1017/S0950268821000583.
Effect of Beam Oscillation on Microstructure and Mechanical Properties of Electron Beam Welded EN25 Steel

EN25 steels find application in shafts, gears and similar components. In this work, welding of EN25 steel was performed using electron beam welding with oscillation beam diameters varying from 2 mm to 0.5 mm. The present study reports the effect of beam oscillation on the evolution of nonmetallic inclusions, microstructures, and mechanical properties of EN25 steel. Heat input calculations showed that the application of beam oscillation resulted in significantly lower heat inputs compared to the non-oscillating counterparts. The highest fraction of retained austenite (9.35%) was observed in a weld prepared with beam oscillation at a 2-mm oscillation diameter; it decreased to 3.27% at an oscillation diameter of 0.5 mm and further reduced to 0.36% for the non-oscillating beam cases. Residual stresses were compressive in the fusion zone, irrespective of beam oscillation. Beam oscillation resulted in equiaxed grains in the central region of the fusion zone, attributed to heat mixing and the evolution of a random texture. The application of beam oscillation resulted in a significant decrease in the size of the nonmetallic inclusions, to 0.1-0.5 µm compared to 5-20 µm in the base metal. All tensile samples failed in the base metal, indicating good strength of the weld. Fusion zone hardness (250-670 HV) and wear properties (coefficient of friction reduced from about 0.7 to 0.45) improved irrespective of whether beam oscillation was used.

Introduction

Medium carbon low alloy (MCLA) Ni-Cr-Mo steels have been a center of attraction for many researchers in the last few decades. Several studies have been conducted on MCLA steels due to their excellent balance of strength, toughness, wear resistance and weldability. European-grade (EN) Ni-Cr-Mo steels find application in machine part members, gears, and shafts due to their high hardenability, good tensile strength and toughness, which can be tailored by proper heat treatment [1]. EN25 is used to make gears, motor shafts, axle shafts, connecting rods, torsion bars, adapters, spindles, die holders, piston rods, high-temperature bolts, and components for oil refining and steam installations [2]. Welding is the technological reconditioning process that ensures short repair time. Electron beam welding (EBW) is a fusion welding process in which a high-energy-density beam (10⁷ W/cm²) is used to join metal parts, resulting in the formation of a contamination-free, narrow heat-affected zone (HAZ) with low distortion compared to conventional welding techniques [3-9]. EBW has been used for joining high-strength steel in aerospace applications, and several studies have been conducted to understand the effect of EBW parameters on the microstructure and properties of HSLA steel components [10]. Arata et al. [11] investigated the behavior of the EB weld zone in low-alloy steels (Ni-Cr-Mo steels). In the weld zone, four types of cracks were found, namely horizontal cracks, vertical crack I, vertical crack II and cold shut. Cracks in the weld zone were mainly due to solidification, except for vertical crack II, which was found to occur along the prior austenite grain boundary due to high hardenability in that region. Ueyama et al.
[12] worked on Ni-Cr-Mo steel to understand the cracking behavior and found that with increasing laser power, both the penetration and the crack length in solidification cracking increased, suggesting a strong correlation between crack length and penetration. Qiang Wang et al. [13,14] proposed a new effective stress intensity factor range parameter as the crack growth driving force for Ni-Cr-Mo-V high-strength steel, which yielded a fairly good correlation for the mixed-mode fatigue crack growth data. Acicular ferrite with a basket-weave microstructure in the weld metal exhibited favorable crack growth resistance relative to the base material, and the damage-tolerance design of this HSLA steel in a marine environment was also discussed. Shi-Dong Liu et al. [15] showed that short fatigue crack growth was influenced by both the local microstructure and a global strength gradient; strain localization at lath boundaries and the formation of sub-grains were responsible for the growth of short fatigue cracks. Sadeq et al. [16] studied the discontinuous welding repair of worn carbon steel shafts through arc welding. The repaired area contained a large region of soft ferrite and discontinuous regions of pearlite, which increased hardness and wear resistance while preserving the tensile strength as it was before the repair. Sisodia et al. [17] studied the microstructural and mechanical properties of S960M high-strength steel welded by EBW. The base material (BM) contained upper bainite and auto-tempered martensite, while the fusion zone (FZ) consisted primarily of martensite and the heat-affected zone (HAZ) comprised martensite and bainite with a small amount of ferrite. The highest average hardness was observed in the fine-grained heat-affected zone (FGHAZ); the HAZ was harder than the FZ. The tensile strength of the EB-welded joint (997 MPa) reached the level of the BM (1058 MPa). Lateral expansion values were 1.99 mm for the HAZ, 0.59 mm for the FZ and 1.34 mm for the BM. Charpy fractures of the BM exhibited ductile failure, while the HAZ and FZ showed brittle-ductile characteristics. Bai et al. [18] studied the weldability of SA508Gr4 steel for nuclear pressure vessels. SA508Gr4 steel exhibited a high hardening tendency and high cold-cracking susceptibility during welding. The results showed that when the HAZ cooling time was less than 15 s, lath-shaped martensite developed, which resulted in extensive hardening and cold cracking in the HAZ. Cooling times above 1200 s resulted in bainite formation, which suppressed cracking. Hence it was suggested that preheating to 196 °C or higher improved the weld quality. Kar et al. [19] and Dinda et al. [20] studied the effects of beam oscillation on the microstructures and mechanical properties of dissimilar EB-welded joints between SS and Cu, and between steel and an Fe-Al alloy, respectively. They demonstrated that the ductility of the joints increased significantly compared to their non-oscillating counterparts, which was attributed to heat mixing and the development of more uniform, equiaxed microstructures and random textures [19-22]. Nayak et al. [21] investigated the role of beam oscillation on electron beam welded Zircaloy-4 butt joints and reported that the application of beam oscillation resulted in a more uniform and fine basket-weave Widmanstätten microstructure in the fusion zone of the joints.
Wang and Wu [23] reported that a linear oscillating beam is better than a circular one because the former promotes refinement of the fusion zone microstructure, while the latter produces a coarser microstructure. Wang et al. [24] reported that joints produced with a higher welding speed possessed comparatively fine-grained structures with higher tensile strength. Doong et al. [25] reported that beam oscillation significantly reduced the porosity content and improved the fatigue life of 4130 steel joints. Xia et al. [26] reported that beam oscillation induced the early formation of equiaxed grains along the fusion zone depth direction and improved the weld morphology and the uniformity of the microstructures. Several studies have also been conducted on the evolution of nonmetallic inclusions (NMIs) in relation to the weldability of steels, which has been found helpful for property enhancement. Jincheng Sun et al. [27] studied the effects of heat input on inclusion evolution behavior in the heat-affected zone of EH36 shipping steel. This was performed systematically through ex-situ scanning electron microscopy (SEM) examination and in-situ confocal laser scanning microscopy (CLSM) observation. Complex Al-Mg-Ti-O-Mn-S inclusions were observed in such steel. The count of inclusions (number density) remained constant, but the MnS number density decreased with increasing heat input. Low heat input produced nucleation of acicular ferrite on inclusions, which increased the toughness of the HAZ. Chen et al. [28] and X. Wan et al. [29] reported that in HSLA steel, TiN is a stable inclusion at high temperatures, which retards austenite grain growth and refines the grain size. The crystallographic grain size became small in the simulated HAZ due to the effective pinning effect and acicular ferrite formation. It is evident from previous research that the effect of weld parameters on the microstructure and mechanical properties of steels has been studied extensively. However, in this study, we used the EBW technique with different beam oscillation diameters to understand the changes in inclusions and microstructure formation in an EN steel grade, which has not been extensively studied in the past.

The Investigated Base Material

Two Ni-Cr-Mo steel plates (EN25 steel) were used as the base material for electron beam welding of similar joints. The EN25 steel was provided by Heavy Engineering Corporation Limited, Ranchi, India. The chemical composition of the base material used in joining, analyzed using X-ray spectroscopy, is presented in Table 1. To perform welding, the base material was cut into 32 mm × 26 mm × 3 mm (length × breadth × height) pieces using a diamond wheel cutter with a water-cooling system. The samples were polished using 220, 600 and 1200 grit papers, then ultrasonically cleaned and finally cleaned with acetone.

Experimental Procedure

The worksheet for the current study is presented in Figure 1a. Electron beam (EB) welding was performed in a butt-welding configuration with and without beam oscillation (as shown in Figure 1b). Electron beam welds (EBW) of EN25 steel, bead on plate, were performed on three samples using the EBW machine at the Bhabha Atomic Research Centre (BARC), Mumbai, India. Table 2 presents a list of the operating parameters for EBW. For electron beam welding, the process variables used were a gun chamber vacuum of 10⁻⁶ mbar, a welding chamber vacuum of 10⁻⁵ mbar, base material dimensions of 32 × 26 × 3 mm³ for both plates, and a gun-to-specimen distance of 465 mm.
The oscillation beam shape was circular. Microstructural studies were carried out using an optical microscope and scanning electron microscopy (SEM) on the welded samples. Samples for microscopy were mechanically polished and etched using 2% Nital (2 mL HNO3 and 98 mL C2H5OH) and a hot picric acid etchant (2% picric acid in 100 mL distilled water with 2-3 drops of HCl and soap solution). Grain size measurements were carried out using the grain intercept method. SEM analysis was carried out using a Zeiss EVO 60 SEM fitted with an Oxford Instruments system (Oxford, UK).

Electron backscattered diffraction (EBSD) studies were performed at a step size of 0.2 µm using the TSL OIM analysis software (fitted in a Zeiss Auriga compact dual beam scanning electron microscope) from Oxford Instruments, UK, operated at 20 kV, on the samples before and after EB welding. Sample surfaces were gently polished with an aqueous colloidal silica solution before EBSD scanning.

Microhardness measurements were performed using a Wilson hardness testing machine. Hardness measurements were taken along the transverse direction of the weld bead, heat-affected zone (HAZ) and fusion zone (FZ) using a diamond pyramid indenter under a load of 100 g with a dwell time of 15 s. Ten indentations per region were performed and averaged to obtain the overall Vickers hardness (HV) in each case, with a standard deviation of ±5.

Tensile tests were performed using an Instron tensile testing machine with a 10 kN maximum load capacity fitted with a digital extensometer. Figure 2 represents the schematic of the ASTM E-8 subsize standard tensile specimen geometry. A strain rate of 0.2 mm/min was used in the tensile tests for all five weld samples as well as for the base metal. For each weld condition, tests were repeated three times to report the average values with a standard deviation of ±5.

X-ray diffraction (XRD) experiments were performed in a Bruker D8 Discover diffractometer for the quantification of retained austenite (RA) and the calculation of residual stress for welds with different welding parameters. Adequate polishing and ultrasonic cleaning were performed on the samples to avoid any contamination of the surface from grinding. Co-Kα radiation (wavelength: 1.789 Å) was selected as the incident X-ray. The strongest reflections, (111) and (200) from austenite and (110), (200) and (211) from ferrite, were used for the estimation of retained austenite by the Rietveld refinement method using TOPAS software. Lattice strain, crystallite size, dislocation density and residual stresses were also calculated from the XRD data.

The kinetics of wear were studied using a fretting wear testing unit. The wear test was performed using a ball-on-disk wear tester (Ducom-TR-283M, Ducom, Bohemia, NY, USA) at a constant load of 30 N for a constant testing duration of 30 min, at a constant frequency of 10 Hz and a constant stroke length of 1 mm. The wear data were analyzed using Winducom 2006 software, and the variation of wear depth with time was studied. The microstructure of the worn-out debris was analyzed with the SEM to understand the mechanism of wear. Before carrying out the test, all the samples were diamond polished and cleaned properly. For each weld condition, tests were repeated three times to report the average values with a standard deviation of ±5.
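A minimal sketch, under stated assumptions, of how the reciprocating wear parameters above translate into a total sliding distance and a coefficient of friction is given below; the tangential force value is a made-up placeholder, not measured data from this study.

normal_load_n = 30.0        # N, from the text
stroke_mm = 1.0             # mm, from the text
frequency_hz = 10.0         # Hz, from the text
duration_s = 30.0 * 60.0    # 30 min, from the text

# Total sliding distance: two strokes per oscillation cycle.
sliding_distance_m = 2.0 * (stroke_mm / 1000.0) * frequency_hz * duration_s
print(f"total sliding distance = {sliding_distance_m:.1f} m")

# Coefficient of friction from a (placeholder) tangential force reading.
tangential_force_n = 15.0   # N, illustrative value only
cof = tangential_force_n / normal_load_n
print(f"coefficient of friction = {cof:.2f}")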
Figure 4 shows the cross-sectional appearance of the weld bead under the four different welding conditions. Figure 4a represents the weld with an oscillation diameter of 2 mm; it shows a lack of fusion at the top, attributed to the low heat input associated with the higher oscillation diameter. The weld obtained with a 0.5-mm oscillation diameter showed a better appearance of the fusion zone top. Similarly, with an increase in speed under the without-beam-oscillation conditions, incomplete penetration at the top occurs at the higher speed (Figure 4d), corresponding to a low heat input.

Figure 4. The cross-sectional appearance of the weld bead under different weld conditions: (a) S1000V60I35 OD 2 mm, (b) S1000V60I35 OD 0.5 mm, (c) S1000V60I35 WO and (d) S1200V60I35 WO.

Figure 5 represents the optical microscopy images of the fusion zone with and without beam oscillation in electron beam welding. It is observed that the microstructures are predominantly columnar, except in two cases where equiaxed grains are seen significantly in the weld zone, namely, beam oscillation with a 2-mm oscillation diameter and the weld without beam oscillation at the maximum weld speed. Equiaxed grains may be promoted either at a high cooling or solidification rate, where a low G/R promotes an equiaxed dendritic structure, or under beam oscillation, when the heat mixing promotes crystal growth in multiple directions [19-21,30].

Heat Input Calculation for Different Welding Conditions

Beam oscillation changes the heat input rate to the material and consequently affects the evolution of the microstructure. Therefore, this section presents the heat input calculation and correlates it with the evolution of microstructure, inclusions and properties. Heat input per unit length during welding is calculated using Equation (1):

Q = η V I / v,   (1)

where Q = heat input per unit length during the welding operation in kJ/mm, η = efficiency of the power supply (usually 0.9 for EBW), V = voltage used during the welding operation (V), I = current used during the welding operation (mA) and v = speed of the beam (mm/min). During non-oscillating conditions, the speed of the beam is equivalent to the scan speed of welding. During oscillating electron beam welding, the velocity of the electron beam changes and can reach values of several thousand mm/s, depending on the welding velocity and the oscillation parameters (oscillation diameter and frequency). In the case of the oscillating beam, the velocity of the beam is calculated as in Equation (2), as reported in the literature [31], by superimposing the welding scan velocity v_w on the x- and y-components of the oscillation motion, where a and f represent the oscillating amplitude and frequency, respectively, φ represents the initial phase angle, and the subscripts x and y denote the x- and y-components. For circular beam oscillation patterns, the amplitudes and frequencies in the two directions are equal and the phase shift is π/2.
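The sketch below illustrates, under the assumption of a sinusoidal circular oscillation path superimposed on the scan motion, how a cycle-averaged beam speed and the heat input per unit length of Equation (1) could be computed. The oscillation frequency is an assumed placeholder, since its value is not given in the excerpt; the voltage, current and scan speed are the nominal parameters quoted in this work.

import numpy as np

def beam_speed_oscillating(v_w_mm_min, osc_diameter_mm, freq_hz, n=10000):
    """Cycle-averaged beam speed (mm/s) for a circular oscillation superimposed
    on the welding scan speed, assuming a sinusoidal oscillation path."""
    v_w = v_w_mm_min / 60.0                    # scan speed, mm/s
    a = osc_diameter_mm / 2.0                  # oscillation amplitude, mm
    t = np.linspace(0.0, 1.0 / freq_hz, n)
    vx = v_w + 2 * np.pi * a * freq_hz * np.cos(2 * np.pi * freq_hz * t)
    vy = 2 * np.pi * a * freq_hz * np.cos(2 * np.pi * freq_hz * t + np.pi / 2)
    return float(np.mean(np.hypot(vx, vy)))

def heat_input_kj_per_mm(eta, voltage_kv, current_ma, beam_speed_mm_s):
    """Heat input per unit length Q = eta * V * I / v (Equation (1))."""
    power_kw = eta * voltage_kv * current_ma / 1000.0   # kV * mA = W; /1000 -> kW
    return power_kw / beam_speed_mm_s                    # kW / (mm/s) = kJ/mm

# Nominal parameters: 60 kV, 35 mA, 1000 mm/min scan speed; 500 Hz is an assumed frequency.
v_no_osc = 1000.0 / 60.0
v_osc = beam_speed_oscillating(1000.0, osc_diameter_mm=2.0, freq_hz=500.0)
print("Q without oscillation:", round(heat_input_kj_per_mm(0.9, 60.0, 35.0, v_no_osc), 4), "kJ/mm")
print("Q with 2-mm oscillation:", round(heat_input_kj_per_mm(0.9, 60.0, 35.0, v_osc), 4), "kJ/mm")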
Using Equation (2), the velocity of the beam was calculated for the different beam oscillation diameters. After calculating the heat input using Equation (1), the heat input for a beam oscillation diameter of 2 mm was found to be the lowest of all the welding parameters considered, as presented in Table 3. For the weld beads without beam oscillation, the heat input was lowest at the highest speed compared to the other speeds. Similar results were reported by S. Dinda for EBW of dissimilar steel to Fe-Al alloy joints [20]. Table 3 also shows the weld bead dimensions. It is found that the weld bead thickness increases with an increase in heat input. Consequently, the weld bead with beam oscillation at a 2-mm oscillation diameter shows the minimum weld bead thickness, while the thickness is highest for the weld without beam oscillation at the lowest speed.

Figure 5. Optical images for different welding conditions at 20× magnification, with 5× magnification at the top right corner: (a) S1000V60I35 OD 2 mm, (b) S1000V60I35 OD 0.5 mm, (c) S1000V60I35 WO and (d) S1200V60I35 WO.

Figure 6 represents the IPF maps with and without beam oscillation. A finer grain size was confirmed in the welded region with a beam oscillation diameter of 2 mm, as seen in Figure 6a. Grains became coarser for the beam oscillation diameter of 0.5 mm and for the condition without beam oscillation, maintaining the speed at 1000 mm/min, the voltage at 60 kV and the current at 35 mA for all welding conditions. The base metal grain size was found to be the largest, as seen in Figure 6d. The solidification structure became finer with increasing cooling rate, and the cooling rate increased from the sample without oscillation to the samples prepared with beam oscillation at the highest oscillation diameter (Table 3). Therefore, the finest grains were obtained with beam oscillation at a 2-mm oscillation diameter, and the coarsest grains were produced in the welds without beam oscillation. The base metal is a high-temperature solution-forged sample that shows coarse grains because the prior austenite coarsens at high temperature. The grain size distribution is presented in Figure 7 for all sample conditions. It shows that grain refinement takes place under beam oscillation, and the grain size is lowest at the highest beam oscillation diameter, which may be attributed to the lowest heat input.
However, grain coarsening was observed for the weld counterparts without beam oscillation, especially at low weld scan speed.

Figure 6. IPF maps with and without beam oscillation for (a) S1000V60I35 OD 2 mm, (b) S1000V60I35 OD 0.5 mm, (c) S1000V60I35 WO and (d) base metal.

Figure 7. Grain size distribution for different sample conditions: (a) base metal, (b) S1000V60I35 OD 0.5 mm, (c) S1000V60I35 OD 2 mm, (d) S800V60I35 WO, (e) S1000V60I35 WO and (f) S1200V60I35 WO.

Residual Stress, Lattice Strain and Dislocation Density from XRD Analysis

The X-ray diffraction patterns of the EN25 steel fusion zone with and without beam oscillation are shown in Figure 8. The XRD reveals the presence of both BCC (ferrite/martensite) and FCC (retained austenite) phases in the weld. The amount of austenite phase is much smaller in welds obtained without beam oscillation, and it decreases with a decrease in oscillation diameter from 2 mm to 0.5 mm; Rietveld analysis confirmed that the amount of retained austenite in the weld was highest (9.35%) for a beam oscillation diameter of 2 mm, was reduced (3.27%) for an oscillation diameter of 0.5 mm, and was further reduced to 0.36% for welding without beam oscillation, maintaining the speed at 1000 mm/min and the voltage at 60 kV for all welding conditions. Beam oscillation during electron beam welding brings in a churning action and heat mixing in the weld pool, which is likely to reduce the thermal shear stress and the shear-induced phase transformation (austenite to martensite). Therefore, more retained austenite is observed in the weld seam obtained with beam oscillation, especially for large oscillation diameters. In the present study, the residual stresses of the fusion zones were calculated by analyzing XRD data [32]. The Williamson-Hall (W-H) method was also used to estimate lattice strain and dislocation density [33,34]. Table 4 presents the residual stresses, lattice strain, and dislocation density for the various welding conditions; this table also shows the hardness values in the weld and the size of the nonmetallic inclusions, discussed subsequently. The residual stresses were compressive irrespective of whether the samples were welded with or without beam oscillation. A similar trend in the fusion zone (i.e., compressive residual stresses) was also reported by Singh et al. for a niobium weld [35] and by Ramana et al. for maraging steel [36].
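A minimal sketch of the Williamson-Hall analysis mentioned above (β cosθ = Kλ/D + 4ε sinθ) is given below. The peak breadths are illustrative placeholders rather than measured values from this study, and the dislocation density is estimated from a commonly used relation that is stated here as an assumption.

import numpy as np

# Williamson-Hall: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
wavelength = 0.1789      # nm, Co-Kalpha
K = 0.9                  # Scherrer constant (assumed)
two_theta_deg = np.array([52.4, 77.2, 99.7])         # approx. ferrite (110), (200), (211) with Co-Kalpha
beta_rad = np.radians(np.array([0.35, 0.45, 0.55]))  # peak breadths, placeholders

theta = np.radians(two_theta_deg) / 2.0
x = 4.0 * np.sin(theta)
y = beta_rad * np.cos(theta)

slope, intercept = np.polyfit(x, y, 1)   # linear W-H fit
strain = slope                           # lattice (micro)strain
crystallite_nm = K * wavelength / intercept

# Dislocation density via rho = 2*sqrt(3)*eps / (D*b), a commonly used estimate
# (Burgers vector of alpha-Fe assumed to be 0.248 nm).
b_nm = 0.248
rho_per_m2 = 2.0 * np.sqrt(3.0) * strain / (crystallite_nm * 1e-9 * b_nm * 1e-9)

print(f"lattice strain ~ {strain:.4f}, crystallite size ~ {crystallite_nm:.1f} nm, "
      f"dislocation density ~ {rho_per_m2:.2e} m^-2")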
The residual stress in the welds made without beam oscillation was found to increase with increasing weld speed, which may be attributed to the higher cooling rate and thermal stress generated at the lower heat input associated with a higher welding velocity. The residual stress was also found to be higher with beam oscillation than in the corresponding weld made without beam oscillation, again attributed to the higher effective beam speed of an oscillating beam. Lattice strain was likewise higher for the oscillating beams and increased with oscillation diameter from 0.5 mm to 2 mm. The increase in lattice strain is corroborated by the increase in dislocation density in the fusion zone, as observed in Table 4. For the welds made without beam oscillation, the lower lattice strain at decreasing speed (1200 mm/min to 1000 mm/min to 800 mm/min) may be attributed to a lower cooling rate, which results in lower thermal stress and allows diffusion of solute from the supersaturated matrix and its redistribution [37].
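A minimal sketch of the Williamson-Hall estimate mentioned above, assuming Cu K-alpha radiation, integral-breadth peak widths and a simple strain-size relation for the dislocation density; the peak list is illustrative and does not reproduce the measured data in Table 4.

import math
import numpy as np

K, LAMBDA = 0.9, 0.15406      # shape factor and Cu K-alpha wavelength in nm (assumed setup)
B_BURGERS = 0.248             # nm, Burgers vector of BCC iron

def williamson_hall(two_theta_deg, fwhm_deg):
    # fit beta*cos(theta) = K*lambda/D + 4*eps*sin(theta) over the measured peaks
    theta = np.radians(np.asarray(two_theta_deg)) / 2.0
    beta = np.radians(np.asarray(fwhm_deg))                # peak breadth in radians
    x, y = 4.0 * np.sin(theta), beta * np.cos(theta)
    eps, intercept = np.polyfit(x, y, 1)                   # slope = lattice strain
    size_nm = K * LAMBDA / intercept                       # crystallite size D
    # one common strain/size estimate of dislocation density (an assumption here)
    rho = 2.0 * math.sqrt(3.0) * eps / (size_nm * 1e-9 * B_BURGERS * 1e-9)   # m^-2
    return eps, size_nm, rho

# illustrative ferrite/martensite peak positions and widths, not measured values
strain, size, density = williamson_hall([44.7, 65.0, 82.3], [0.45, 0.55, 0.70])
print(f"lattice strain ~{strain:.4f}, crystallite size ~{size:.0f} nm, dislocations ~{density:.2e} m^-2")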
Hardness Measurements

Figure 9a presents the hardness data of the EBW joints for the five welding conditions, and Figure 9b shows the eleven positions at which hardness was measured in a welded sample. Vickers hardness values were highest in the fusion zone (FZ) and lowest in the base metal. The fusion zone possessed the highest hardness for all welding conditions, whether welded with or without beam oscillation, while the HAZ possessed a comparatively lower hardness than the fusion zone for all welding conditions. Nevertheless, substantially higher hardness values were recorded in both the FZ and the HAZ compared with the base metal; similar results were reported in the work of Isaac et al. [38]. The hardness of the weld with beam oscillation at the highest oscillation diameter was the highest, but comparable to the hardness of the weld made without beam oscillation at the highest welding speed, due to the high cooling rate and fine grain structure in both cases (see Table 4). With beam oscillation, hardness increased with increasing oscillation diameter, while for welds made without beam oscillation, hardness increased with increasing weld speed. Hardness values in the fusion zone are in agreement with the values reported by F. Souza Neto et al. for the TIG and laser welding of AISI 4140, a medium carbon, low Ni steel [39].

Tensile Test

All weld tensile specimens, produced with or without beam oscillation, failed in the base metal, indicating a stronger weld, as presented in Figure 10. However, there was no significant difference in tensile strength between the base metal and the welded specimens. The percentage elongation was maximum in the base metal (18%), followed by the weld with beam oscillation at a 2-mm oscillation diameter (16.8%). A similar trend in elongation was observed in F. Souza Neto et al.'s research on welding of 4140 steel, a medium carbon, low Ni alloy steel [39].

Figure 10. Tensile samples after the test for different sample conditions: (a) S1000V60I35 OD 2 mm, (b) S1000V60I35 OD 0.5 mm, (c) S800V60I35 WO, (d) S1000V60I35 WO and (e) S1200V60I35.

Figure 11 represents SEM images of the reciprocating wear tests.
The base metal shows severe wear debris, while the weld prepared with beam oscillation at an oscillation diameter of 2 mm shows the minimum wear debris.

Figure 11. Wear test of EN25 steel for different sample conditions: (a) base metal, (b) S1000V60I35 OD 2 mm, (c) S1000V60I35 OD 0.5 mm, and (d) S1000V60I35 WO, all for a load of 30 N, a track length of 2 mm and a frequency of 10 Hz.

Figure 12a represents the variation of the coefficient of friction (COF) as a function of interaction time. The COF is defined as the ratio of the frictional force to the normal force, where the frictional force may depend on asperities of different hardnesses and scales on the surface [40]. The greater the COF, the more the surface is worn and the less wear-resistant it is. The COF was highest for the base metal, in the range of 0.65 to 0.7. For the welds made with beam oscillation, the COF decreased with decreasing oscillation diameter. However, the COF was found to be lowest for the weld made without beam oscillation at the lowest scan speed, which is contradictory to its minimum hardness value (as seen in Table 4). This may be attributed to the formation of defects associated with high heat input, such as thermal stress-generated cracks, keyhole porosity and undercuts; during wear, such defective structures may show a low apparent material loss. The wear depth vs. time plot presented in Figure 12b shows the same trend. Wear depth was highest for the base metal, followed by the welds made with beam oscillation and then those made without beam oscillation. Therefore, the wear rate, or wear volume, is high for the base metal (Figure 12c,d) but is reduced after welding; thus, the material becomes more wear resistant after welding. Similar trends were reported by Sumit et al. [41,42].
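As a rough illustration of how the quantities plotted in Figure 12 can be reduced from raw tribometer output (the groove geometry and all numerical inputs below are assumptions for illustration, not the measured values):

NORMAL_LOAD_N = 30.0          # test load
STROKE_MM, FREQ_HZ = 2.0, 10.0  # reciprocating track length and frequency

def friction_coefficient(friction_force_n):
    # COF = frictional force / normal force
    return friction_force_n / NORMAL_LOAD_N

def wear_rate_mm3_per_nm(depth_mm, track_width_mm, duration_s):
    # crude rectangular-groove approximation of the wear volume
    volume = depth_mm * track_width_mm * STROKE_MM               # mm^3
    sliding_distance_mm = 2.0 * STROKE_MM * FREQ_HZ * duration_s
    return volume / (NORMAL_LOAD_N * sliding_distance_mm)        # mm^3 / (N*mm)

print(friction_coefficient(20.4))                 # ~0.68, within the base-metal range noted above
print(wear_rate_mm3_per_nm(0.05, 0.8, 1800.0))    # hypothetical depth, width and test duration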
Figure 13 shows the variation of nonmetallic inclusion size for the different welding conditions. Figure 13a,b represent inclusions in the welds prepared with beam oscillation at oscillation diameters of 2 mm and 0.5 mm, respectively, while Figure 13c-e represent the welds made without beam oscillation at speeds of 800 mm/min, 1000 mm/min and 1200 mm/min, respectively, keeping voltage and current constant. The average size of the NMIs was found to be lowest for the 2-mm oscillation diameter weld condition, which is attributed to the lowest heat input and the consequently restricted growth of inclusions. The montage image in Figure 13a, with arrows marking six different locations, shows a consistent trend. A montage image is likewise provided for the 0.5-mm oscillation diameter, where a variation in NMI size can be observed, with inclusion growth in some locations due to the comparatively lower cooling rate. In all the other cases, the inclusion sizes are larger due to the comparatively higher heat input and lower cooling rates. The size distribution of the NMIs is given in Table 4 for the different welding conditions.

Figure 13. Nonmetallic inclusion size variation under different weld conditions: (a) S1000V60I35 OD 2 mm, (b) S1000V60I35 OD 0.5 mm, (c) S800V60I35 WO, (d) S1000V60I35 WO and (e) S1200V60I35 WO.

Conclusions

Welding with beam oscillation produces a churning action in the weld seam, causing heat mixing and promoting equiaxed grains and a random texture with improved ductility, hardness and wear properties. It destroys the unidirectional growth of columnar grains and the resulting nonuniform strength distribution. Beam oscillation also enhances the effective beam speed and increases the cooling rate significantly, which retards the growth of inclusions, so that finer and less harmful inclusions evolve in the weld structure. The present study demonstrated these effects of beam oscillation on electron beam welded EN25 steel. Some salient conclusions that emerged from this study are:

(i) The calculated heat input rate is extremely low for beam oscillation at a 2-mm oscillation diameter (6 × 10⁻³ kJ/cm) and highest (1.57 kJ/mm) for the weld made without beam oscillation at a scan velocity of 800 mm/min.

(ii) A large region of equiaxed grains was observed at the center of the weld prepared with beam oscillation at a 2-mm oscillation diameter, attributed to the churning action and heat mixing in the weld seam.
(iii) The fraction of retained austenite was highest (9.35%) in the weld prepared with an oscillating beam at the largest oscillation diameter of 2 mm, which is attributed to heat mixing, a smaller temperature gradient, and reduced thermal stress and stress-induced transformation of austenite to martensite. The fraction decreased to 3.27% when the beam oscillation diameter was reduced to 0.5 mm, and for electron beam welding without beam oscillation it was further lowered (0.36%).

(iv) Residual stresses in the weld were found to be compressive in the fusion zone, irrespective of the welding conditions.

(v) Nonmetallic inclusion size decreased significantly for welds prepared with beam oscillation, especially at larger oscillation diameters, which is attributed to the fastest cooling rate retarding the growth of inclusions.

(vi) The hardness and wear properties were found to improve after welding, especially for the welds made with oscillating beams.

Limitation of the Present Work

In the present work, the welding speed and beam oscillation diameter have been studied, while the welding current and oscillation frequency have been kept constant. Additionally, to achieve full-depth penetration in all the samples, the welding current that was optimum for the 2-mm beam oscillation diameter was used for the other cases as well. This makes the applied current higher than required to achieve full penetration in those other cases.

Future Scope

Tailor-welded blanks of MCLA steel with stainless steel (SS) have found widespread application in construction processes in production industries [43]. Therefore, dissimilar joining of EN25 steel to SS by EBW could be explored. The effects of oscillation frequency and beam oscillation pattern could also be explored.
2023-03-31T15:02:21.760Z
2023-03-29T00:00:00.000
{ "year": 2023, "sha1": "649f215cfd070598fef4d90fefa5885d8685c639", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/16/7/2717/pdf?version=1680071644", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3602715570f0c4a0207ad002819452a5324faa8c", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
8816485
pes2o/s2orc
v3-fos-license
Catechol-O-methyltransferase Val158Met genotype in healthy and personality disorder individuals: Preliminary results from an examination of cognitive tests hypothetically differentially sensitive to dopamine functions

A functional polymorphism of the gene coding for catechol-O-methyltransferase (COMT), an enzyme responsible for the degradation of the catecholamines dopamine (DA), epinephrine, and norepinephrine, is associated with cognitive deficits. However, previous studies have not examined the effects of COMT on context processing, as measured by the AX-CPT, a task hypothesized to be maximally relevant to DA function. 32 individuals who were healthy, had schizotypal personality disorder, or had a non-cluster A personality disorder (OPD) were genotyped at the COMT Val158Met locus. Met/Met (n = 6), Val/Met (n = 10), and Val/Val (n = 16) individuals were administered a neuropsychological battery, including the AX-CPT and the N-back working memory test. For the AX-CPT, Met/Met individuals demonstrated more AY errors (reflecting good maintenance of context) than the other genotypes, who showed equivalent error rates. Val/Val individuals demonstrated disproportionately greater deterioration with increased task difficulty from 0-back to 1-back working memory demands as compared to Met/Met, while Val/Met did not differ from either genotype. No differences were found on processing speed or verbal working memory. Both context processing and working memory appear related to COMT genotype, and the AX-CPT and N-back may be most sensitive to the effects of COMT variation.

Catecholamine neurotransmitter activity in the prefrontal cortex (PFC) exerts an influence over a range of cognitive functions. Specifically, catechol-O-methyltransferase (COMT) is an important enzyme involved in the regulation of catecholamines, including dopamine (DA). A single nucleotide polymorphism of the gene coding for COMT is associated with performance on working memory and executive function tasks in schizophrenia (Egan et al 2001; Joober et al 2002), with the Val allele associated with increased enzymatic activity (leading to more catabolism of DA) and resultant poorer cognitive performance. In a recent study, Minzenberg et al (2006) demonstrated similar effects of the COMT genotype on cognitive performance in patients with schizotypal personality disorder (SPD). It is of interest that patients with SPD have been shown previously to manifest substantial abnormalities on tasks highly dependent on the functioning of the prefrontal cortex (PFC), including the ability to maintain contextual information in short-term memory (Barch et al 2004). Context processing, defined as information actively maintained in such a form that it can be used to mediate later task-appropriate behavior, is particularly relevant in two ways. First, it has been suggested that a specific deficit in the ability to represent and maintain context information may help to explain deficits in working memory as well as other cognitive domains in schizophrenia and the spectrum disorders (Cohen and Servan-Schreiber 1992; Cohen et al 1999). In a neuroimaging study (Barch et al 2001), working memory deficits in patients with schizophrenia were shown to reflect impairment in context processing associated with a selective disturbance in dorsolateral PFC functions. Second, context processing has been directly linked to DA function, in that D-amphetamine (D-AMP), a DA agonist, results in improved context processing (Servan-Schreiber et al 1998).
Recent evidence from a double-blind, placebo-controlled trial of guanfacine treatment suggests not only that patients with SPD perform more poorly on a context processing task than healthy controls and patients with non-schizotypal personality disorders, but also that pharmacological compounds increasing catecholamine activity in the PFC exert a normalizing influence on context processing in SPD patients (McClure et al in press). Given its dopaminergic relevance, impaired context processing may also be related to the effects of the COMT genotype. However, there are no published data available on the relationship between COMT and context processing to date. Tasks that measure executive functioning, such as the Wisconsin Card Sorting Test (WCST) (Egan et al 2001; Joober et al 2002; Malhotra et al 2002; Minzenberg et al 2006), and working memory, such as the N-back test, and, by inference, the functioning of the dorsolateral PFC, have been shown to be related to the COMT genotype in healthy and schizophrenic individuals in some studies. Interestingly, Mattay et al (2003) reported better performance in carriers of the Val allele and a decline in the Met/Met group in a working memory task after a pharmacological challenge with amphetamine. The decline in performance in the Met/Met group after amphetamine intake suggests that the association between performance and DA levels has an inverted "U" shape. That is, activation of the DA system by working memory load and amphetamine pushes these subjects beyond their optimal activation level. However, other investigators did not find support that these tasks, presumed to be dopaminergic-dependent, were associated with COMT genotype (Tsai et al 2003; Ho et al 2005; Minzenberg et al 2006). Specifically, performance on measures such as the WCST, the digit span backward subtest of the WAIS-III, the N-back test, and Trail Making (Ho et al 2005) was not associated with COMT genotype in healthy individuals or patients with schizophrenia. Additionally, COMT genotype did not exert an effect on tasks measuring verbal or visual delayed memory in healthy and schizotypal personality disorder individuals (Minzenberg et al 2006). The AX Continuous Performance Test (AX-CPT) (Barch et al 2004) is a task specifically designed to assess context processing. During the AX-CPT, participants are presented with cue-probe pairs and are told to respond to an "X" (probe), but only when it follows an "A" (cue). The task also includes three types of non-target trials that allow one to selectively assess context processing deficits: AY trials ("A" cue followed by any letter other than "X"); BX trials (non-"A" cue followed by an "X" probe); and BY trials (non-"A" cue followed by a non-"X" probe). AX trials occur with high frequency (70%), creating two important response biases. First, this high AX frequency creates a bias to make a target response to any stimulus following an "A" cue (as a target "X" occurrence is highly likely following an "A" cue). In healthy individuals, maintenance of context is demonstrated by the tendency to make a false alarm response after occurrence of the "A" cue when it is not followed by an "X" (leading to increased AY errors). Conversely, low levels of AY errors suggest reduced tendencies toward development of context representations. The second bias created by the high AX frequency is the tendency to make a target response to the "X" probe, as this is the correct response the majority of the time.
On BX trials, maintenance of the context provided by the cue (non-A) reduces BX false alarms. Thus, on the AX-CPT, deficits in context processing are not indicated by an overall increase in false alarms, but rather by a specific pattern of errors (decreased AY and increased BX). In light of the current mixed findings regarding the association between DA-dependent tasks and the effects of COMT genotype, it is unclear whether COMT exerts a general effect on neurocognitive functioning or a specific deficit in one cognitive domain. Given the direct link to DA function and the PFC, the current study sought to examine whether COMT genotype variation has a specific impact on context processing. The modified AX-CPT is the prototypical context processing task and taps one of the cognitive domains that is hypothetically most DA-relevant, and we hypothesized that it may be the task most sensitive to COMT effects, as compared to other DA-related tasks, including the N-back, measuring working memory, Trail Making, measuring processing speed and attention, and the Paced Auditory Serial Addition Test (PASAT) (Gronwall 1977), measuring maintenance and manipulation processes in verbal working memory. Previous studies have demonstrated that a functional genetic polymorphism of COMT influences prefrontal cognition in healthy individuals (Bruder et al 2005; Egan et al 2001; Malhotra et al 2002), healthy siblings of patients with schizophrenia spectrum disorders (Rosa et al 2004), and patients with schizophrenia (Egan et al 2001). Further, one published study reported that poorer performance on prefrontal-dependent tasks is associated with the Val/Val genotype regardless of diagnosis in a group of healthy individuals and patients with SPD and OPD (Minzenberg et al 2006). Thus, in our ongoing study, we examined the shared effects of COMT variation on cognition in individuals with and without schizophrenia spectrum disorders, with a focus on tests hypothesized to be most sensitive to dopaminergic functions. As such, genotype analyses were collapsed across healthy individuals, patients with SPD, and patients with other, non-cluster A personality disorders (OPDs). We predicted that subjects with the Val allele would show impairment in context processing, as evidenced by a greater number of BX errors and a smaller number of AY errors on the AX-CPT, while subjects in the Met/Met group would demonstrate the inverse response pattern, with a greater number of AY errors and a smaller number of BX errors, reflecting intact context processing. We also predicted that, compared to the Met/Met group, subjects with the Val allele would show impaired working memory as measured by the N-back. In addition to evaluating N-back accuracy scores at each condition of the test, we propose that examining the degree of change that takes place from one condition to the next, more difficult condition would allow us to better understand working memory deficits as they relate to COMT. We predicted that subjects with the Val allele would perform disproportionately worse as the task increases working memory demands, in comparison to the Met/Met group. Further, we predicted that group differences in AX-CPT performance would yield a greater effect size than on the N-back, suggesting that the AX-CPT is a more sensitive test for detecting the effects exerted on DA by COMT.
Methods

Participants

As part of a larger study examining context processing in schizotypal personality disorder, 11 individuals with Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM-IV; American Psychiatric Association, 1994) SPD, two individuals with other, non-cluster A, DSM-IV personality disorders (OPD) and 19 healthy controls (HCs) were genotyped and evaluated on context processing and working memory. Participants with SPD and OPD were ascertained through recruitment from the outpatient clinics at the Mount Sinai Medical Center and the Bronx Veterans Affairs Medical Center, by advertisements in local newspapers, or by referral from psychiatrists and psychologists in the local community. The HCs were recruited from the local community through newspaper advertisements. Participants were excluded for (a) meeting criteria for current (within six months of testing) substance abuse or dependence, (b) a positive urine toxicology screen, (c) a lifetime diagnosis of a psychotic disorder or bipolar I disorder, and (d) significant head trauma. Participants were assessed for Axis I psychopathology using the Structured Clinical Interview for DSM-IV (SCID; First et al 1995) by a master's level or doctoral level interviewer who did not know the participants' cognitive task performance. In addition, participants were assessed for Axis II pathology using the Structured Interview for DSM-IV Personality Disorders (SIDP; Pfohl et al 1995). Consensus diagnoses were reached in a meeting of all raters with an expert diagnostician. OPD individuals were excluded from the current analyses if they met criteria for any Cluster A personality disorder (ie, paranoid personality disorder or schizoid personality disorder) or if they met more than two criteria for SPD. Healthy controls were excluded if they had either a personal or family history of a major Axis I disorder (eg, schizophrenia) or a personal history of an Axis II disorder. All participants signed informed consent forms in accordance with the approvals of the Institutional Review Boards at each research site, where ethical approval for the study procedures was obtained.

Tasks and apparatus

AX-CPT task

Participants performed the modified version of the AX-CPT, in which sequences of letters were visually presented one at a time in a continuous fashion on a computer display. Subjects were instructed to identify target and non-target trials with a button press using separate fingers of the same hand. Target trials were defined as a cue-probe sequence in which the letter "A" appeared as the cue and the letter "X" appeared as the probe. The remaining letters of the alphabet served as invalid cues (ie, cues that were not A's) and non-target probes (ie, probes that were not X's), with the exception of the letters K and Y, which were excluded due to their similarity in appearance to the letter X. Letter sequences were presented in pseudorandom order, such that target (AX) trials occurred with 70% frequency and non-target trials occurred with 30% frequency. Non-targets were divided evenly (10% each) among the following trial types: BX trials, in which an invalid cue (ie, non-A) preceded the target probe; AY trials, in which a valid cue was followed by a non-target probe (ie, non-X); and BY trials, in which an invalid cue was followed by a non-target probe. The delay between cue and probe was manipulated so that half of the trials had a short delay and half had a long delay.
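A toy generator for one block with the trial proportions just described (70% AX targets and 10% each AY, BX and BY non-targets); this is only a sketch, not the PsyScope script actually used, and the letter handling is an assumption consistent with the description above.

import random
import string

# letters usable as invalid ("B") cues and non-target ("Y") probes:
# A and X are reserved, K and Y are excluded for their visual similarity to X
LETTERS = [c for c in string.ascii_uppercase if c not in "AKXY"]

def make_block(n_trials=50, delay="short", seed=0):
    # one AX-CPT block: 70% AX targets, 10% each AY, BX, BY non-targets
    rng = random.Random(seed)
    kinds = ["AX"] * int(0.7 * n_trials) + ["AY", "BX", "BY"] * int(0.1 * n_trials)
    rng.shuffle(kinds)
    trials = []
    for kind in kinds:
        cue = "A" if kind[0] == "A" else rng.choice(LETTERS)
        probe = "X" if kind[1] == "X" else rng.choice(LETTERS)
        trials.append({"cue": cue, "probe": probe, "kind": kind, "delay": delay})
    return trials

# two short-delay and two long-delay blocks, as in the design described here
blocks = [make_block(delay=d, seed=i) for i, d in enumerate(["short"] * 2 + ["long"] * 2)]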
On short delay trials, the cue-probe interval was 1 sec and the inter-trial interval was 4900 msec. On long delay trials, the cue-probe interval was 5 sec and the inter-trial interval was 1 sec. Thus, the total trial duration was equivalent across conditions, providing a means of controlling for general factors that might affect performance (eg, pace of the task, response frequency, total time on task). The task was presented in 4 blocks of 50 trials, all of which were either short (2 blocks) or long (2 blocks) delay trials, with the order of short and long delay blocks counterbalanced across subjects. Stimuli were presented centrally, for a duration of 300 msec, in 24-point uppercase Helvetica font. Subjects were instructed to respond to both cue and probe stimuli, pressing one button for targets and another button for non-targets (cues were always considered non-targets). Responses were recorded on a specially constructed button box connected to the computer, which recorded response choice and reaction time with 1 msec accuracy. For right-handed individuals, responses were made with the middle (non-target, middle button) and index (target, right button) fingers of the right hand; for left-handed individuals, responses were made with the middle (non-target, middle button) and index (target, right button) fingers of the left hand. Following previous work (Barch et al 2004), subjects were allowed a total of 1300 msec from stimulus onset in which to respond. Responses slower than this limit were not recorded and elicited feedback (a "bloop" sound) as a prompt to increase speed. The task was run on Apple Macintosh computers, using PsyScope software for stimulus presentation and data collection (Cohen et al 1993).

N-back working memory task

The N-back is a commonly used measure of working memory (Braver et al 1997; Casey et al 1995; Cohen et al 1996) that has frequently been shown to elicit performance deficits among individuals with schizophrenia and their unaffected relatives (Callicott et al 2000; Egan et al 2001; Menon et al 2001; Perlstein et al 2001; Barch et al 2002; Callicott et al 2003). The N-back test manipulates complexity through working memory load, rather than retention duration or interference conditions. Materials for the N-back task were similar to those used by Braver et al (1997). In the current study, participants observed letters presented on a computer screen one at a time. There were three conditions: (1) 0-back, (2) 1-back, and (3) 2-back. In the 0-back condition, participants responded to a single pre-specified target letter (eg, X). In the 1-back condition, the target was any letter identical to the one immediately preceding it (ie, one trial back). In the 2-back condition, the target was any letter identical to the one presented 2 trials back. Thus, working memory load increased incrementally from the 0-back to the 2-back condition. Stimuli were presented as single letters appearing centrally in 24-point Helvetica font, white against a black background, subtending a visual angle of approximately 3 degrees. All consonants of the alphabet were used as stimuli with the exception of L (because it is easily confused with the number "1") and W (because it is the only two-syllable letter of the alphabet). Vowels were excluded. Further, the case of the presented letter stimuli changed randomly throughout the trials.
Stimuli were presented in a pseudorandom sequence of consonants, randomly varying in case in order to prevent participants from relying on strategies of perceptual familiarity for responding. Stimuli were presented centrally on a controlled computer display for 500 msec. The inter-stimulus interval (ISI) was 2500 msec. Targets were presented on 33% of the trials. Conditions were presented in blocks of 25 trials, with three blocks at each load level (0-, 1-, 2-back) presented in a counterbalanced order. The ISI and target density for the N-back test were selected in order to be consistent with prior studies using this test (Casey et al 1995; Cohen et al 1996; Braver et al 1997).

WAIS-III vocabulary and block design

The Vocabulary and Block Design subtests of the WAIS-III (Wechsler, 1997) are the best correlates of overall IQ (Tulsky et al 1997; Wechsler 1997). These subtests were administered to obtain an estimate of general level of intellectual ability.

Other neuropsychological measures

Trail Making Part A and B (Reitan and Wolfson, 1993) is a timed test that measures processing and psychomotor speed. On Part A, subjects are presented with the numbers from one to 25 randomly placed on a sheet of paper and are instructed to connect the numbers in their correct numerical order as quickly as possible. Part B is analogous to Part A with the addition of letters, and subjects are instructed to alternate between numbers and letters in numerical and alphabetical order. The PASAT (Gronwall 1977) measures maintenance and manipulation processes in verbal working memory. On this test, subjects listen to a tape-recorded voice presenting a series of numbers and are asked to add each adjacent pair of numbers and respond by verbalizing the sum. There are 50 trials at a rate of one digit per two seconds, with total correct detections as the dependent variable. Participants were tested in a single testing session. Task order was counterbalanced across participants. Prior to performance of the first block of each computerized task, standardized instructions describing the task appeared on the computer, and the experimenter answered any remaining questions regarding them. Participants were asked to respond as quickly as possible to each stimulus while maintaining accuracy. One full block of trials was then performed as practice prior to administration of the experimental trials for that condition. This ensured that subjects understood the instructions and were performing the task appropriately.

Data analysis

Participants were collapsed across diagnostic groups for genotyping, and analyses were conducted as group comparisons between the different COMT genotypes, for two reasons. First, the COMT polymorphism also influences prefrontal cognition in healthy individuals as well as schizophrenia patients, and performance on prefrontal-dependent tasks is associated with the Val/Val genotype regardless of diagnosis. Second, the focus of the paper is to investigate the association between COMT and a task (ie, the AX-CPT) hypothesized to be maximally relevant to DA function. For the AX-CPT, analyses focused on error rates for the two error types most related to effective context processing, BX and AY, for both the short and long delay intervals. Context processing data were analyzed using analyses of variance (ANOVAs) with genotype as a between-subject factor and trial type and delay as within-subject factors. For the N-back, accuracy scores were calculated for each participant across the three conditions (0-back, 1-back, 2-back).
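As a rough illustration of the per-subject scores entering these analyses (the trial-level data layout and function names are assumptions, not the authors' analysis code):

from collections import defaultdict

def axcpt_error_rates(trials):
    # error rate per (trial kind, delay), e.g. AY-short, BX-long;
    # each trial is a dict like {'kind': 'AY', 'delay': 'short', 'correct': False}
    counts, errors = defaultdict(int), defaultdict(int)
    for t in trials:
        key = (t["kind"], t["delay"])
        counts[key] += 1
        errors[key] += not t["correct"]
    return {key: errors[key] / counts[key] for key in counts}

def nback_change_scores(accuracy):
    # accuracy: {'0-back': 0.98, '1-back': 0.93, '2-back': 0.80};
    # a positive change score means a drop in accuracy at the higher load
    return {
        "0->1": accuracy["0-back"] - accuracy["1-back"],
        "1->2": accuracy["1-back"] - accuracy["2-back"],
        "0->2": accuracy["0-back"] - accuracy["2-back"],
    }

example = [{"kind": "AY", "delay": "short", "correct": False},
           {"kind": "AY", "delay": "short", "correct": True}]
print(axcpt_error_rates(example))                                   # {('AY', 'short'): 0.5}
print(nback_change_scores({"0-back": 0.98, "1-back": 0.93, "2-back": 0.80}))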
ANOVAs were conducted to examine the accuracy data from the N-back with genotype as a between-subject factor and condition as a within-subject factor. Additionally, change scores were calculated to capture changes in performance as the N-back test increased in difficulty from 0- to 1-back and from 1- to 2-back. While accuracy scores provide information on performance level at each task condition, change scores yield additional information regarding the proportion of worsening as load increased in the task. Greater changes indicate greater vulnerability to increases in processing demands. A series of univariate ANOVAs was conducted to examine changes in performance with genotype as the between-subject factor and change scores as the dependent variable. For all post-hoc analyses, planned contrasts were conducted for multiple pairwise comparisons, and a Bonferroni correction was used to adjust the alpha level.

Effects of COMT genotype on context processing

For the purposes of this study, we focus on main effects of genotype or interactions with genotype in all of the analyses presented below, as our hypotheses focused on genotypic differences. The error data from the AX-CPT were analyzed using a 3-factor ANOVA, with genotype (Met/Met, Val/Met, Val/Val) as a between-subject factor, and both delay (short, long) and error type (AX, AY, BX, BY) as within-subject factors. Descriptive statistics for the AX-CPT are presented in Table 2. The ANOVA did not reveal a significant main effect of genotype, F(2, 29) = 0.54, p = 0.59. However, there was a non-significant trend for an error type by genotype interaction, F(6, 56) = 2.06, p = 0.07, with the Val/Val group making more BX than AY errors, while the Val/Met and Met/Met groups made more AY than BX errors. Given our small sample sizes, we followed up the interaction although it was non-significant. Follow-up univariate ANOVAs showed a significant genotype difference in AY errors during the short delay condition, F(2, 29) = 3.77, p < 0.05, R² = 0.21. Compared to Met/Met, Val/Met (p < 0.05) and Val/Val (p < 0.05) made significantly fewer AY errors, but Val/Met and Val/Val did not differ. Similarly, there was also a significant genotype difference in AY errors during the long delay, F(2, 29) = 3.69, p < 0.05, R² = 0.20, with Met/Met making more AY errors compared to Val/Val (p < 0.05), while Val/Met did not differ from the other two groups (p > 0.05). There were no genotype differences in BX errors during the short, F(2, 29) = 0.12, p = 0.89, or long, F(2, 29) = 0.51, p = 0.61, delay. These results suggest that although the groups did not differ on BX false alarms, Met/Met subjects demonstrated better maintenance of context relative to Val/Met and Val/Val subjects, as reflected by the AY errors.

Effects of COMT genotype on working memory

The accuracy data from the N-back were analyzed using a 2-factor ANOVA with genotype (Met/Met, Val/Met, Val/Val) as a between-subject factor and condition (0-back, 1-back, 2-back) as a within-subject factor. A significant main effect of condition was found, F(2, 28) = 8.21, p < 0.01, but there was no significant main effect of genotype, F(2, 29) = 0.61, p = 0.55, or interaction, F(4, 58) = 1.74, p = 0.15. As displayed in Table 3, participants performed worst at 2-back compared to the other two conditions (p's < 0.01), though there were no differences in performance between 0-back and 1-back (p > 0.05).
Genotype comparison analyses were conducted to examine changes in performance as the task increased in difficulty (ie, the difference in accuracy score from one condition to another). This approach provides more meaningful information on differential deterioration as the task becomes more demanding. Change scores were calculated to capture changes in performance from 0-back to 1-back, 1-back to 2-back, and 0-back to 2-back. The ANOVA revealed a significant genotype difference in the change in performance from 0-back to 1-back, F(2, 29) = 3.66, p < 0.05, R² = 0.20. As shown in Figure 1, Val/Val subjects had a greater change score compared to Met/Met subjects (p < 0.05), suggesting a greater deterioration going from 0- to 1-back, while Val/Met subjects did not differ from the other groups. There were no significant genotype differences in the change in performance from 1- to 2-back, F(2, 29) = 0.12, p = 0.89, or from 0- to 2-back, F(2, 29) = 2.04, p = 0.15, although the Val/Val group clearly showed the largest deterioration from 0- to 2-back.

Figure 1. Proportions of change in performance on the N-back test as captured by difference scores from 0- to 1-back, 1- to 2-back and 0- to 2-back. Note: A larger change score indicates a greater decrease in accuracy as the condition increased in difficulty, because performance is expected to be worse in the more demanding condition (ie, 1-back) relative to the less demanding condition (ie, 0-back). *Val/Val subjects demonstrated a greater change score compared to Met/Met subjects, p < 0.05.

Effects of COMT genotype on other PFC indices

A series of ANOVAs was conducted to examine genotype group differences in processing speed, as indexed by Trail Making A and B, and verbal working memory, as indexed by the PASAT. One subject with the Val/Met genotype had missing data on these measures and was not included in these analyses. The genotypes did not differ on Trail Making Part A, F(2, 28) = 3.32, p = 0.51, Trail Making Part B, F(2, 28) = 2.24, p = 0.13, or the PASAT, F(2, 28) = 0.29, p = 0.75.

Discussion

The goal of the current study was to examine the effects of COMT genotype on context processing, as well as to replicate previous findings of working memory deficits in carriers of the Val allele. We predicted that the Val/Met and Val/Val genotypes would demonstrate impairment in context processing, and that this impairment would not be found in the Met/Met group. In the modified AX-CPT, context processing is best understood by the pattern of errors, rather than the overall number of errors, that an individual makes. Specifically, a greater number of AY errors suggests the development of a response bias reflecting intact context processing, and individuals with intact context processing tend to make a smaller number of BX errors. Impaired context processing, on the other hand, is indicated by the reverse pattern: greater numbers of BX errors and smaller numbers of AY errors. While we did not find a main effect of genotype, there was a non-significant trend for a genotype by error type interaction. This is likely due to low statistical power as a result of the small sample size, because additional post-hoc tests showed that, during both the short and long delays of the AX-CPT, participants in the Met/Met group demonstrated a greater number of AY errors as compared to the other two groups.
Further, COMT genotype accounted for 20%-21% of the variance in AX-CPT performance, which is substantial compared to previous reports of shared variance with other cognitive tests (Egan et al 2001; Malhotra et al 2002). There were, however, no differences in the number of BX errors in either the short or long delay condition, which is interesting in the context of the recent results of McClure et al (2006). In that study we found that the beneficial effects of guanfacine, an adrenergic agonist, were greater on AY than on BX errors. In a study examining executive functioning and COMT, Egan et al (2001) reported that COMT genotype accounted for 4% of the variance in the frequency of perseverative errors on the WCST, while the current study found that COMT genotype accounted for about 20% of the variance on the AX-CPT. This finding is quite robust and is consistent with our hypothesis that the AX-CPT is a dopaminergic task that is sensitive to the effects of COMT. In support of the idea that the small sample that we collected is still representative, the Met/Met group in our sample demonstrated an average AY error rate of 18.3%, which is similar, albeit slightly larger, to the rates reported by McDonald et al (2003) and Barch et al (2001). Thus, the larger variance-accounted-for statistics may truly reflect differences in task sensitivity to dopaminergic effects. Although we did not find genotype differences in performance on the N-back working memory test when accuracy scores were analyzed at each condition of the test, Val/Val subjects demonstrated a larger change score from 0-back to 1-back, as compared to the Met/Met group. Since performance is expected to be worse at the more difficult condition (ie, 1-back) relative to the less demanding condition (ie, 0-back), this larger change score demonstrated by the Val/Val group indicates a greater decrease in accuracy from a less demanding condition to a more demanding condition. As such, the Val/Val subjects performed disproportionately worse as they progressed from 0-back to 1-back compared to the Met/Met group. Therefore, the finding of a larger change in performance from 0-back to 1-back in the Val/Val group provides evidence of working memory deficits in those individuals, which is in line with previous findings. We propose that this method of examining working memory as measured by the N-back may be of particular utility in future studies. In contrast, there was no significant genotype difference for the change score from 0-back to 2-back, suggesting that the difficulty level of the 2-back condition may be too high for all participants and thus resulted in reduced sensitivity. We predicted that the AX-CPT would be a more sensitive task than the N-back working memory task for detecting effects of COMT. COMT genotype explained comparable amounts of variance in both tasks (20%), but only with a different analytic plan than previously employed. Consistent with our hypothesis, processing speed and verbal working memory were not associated with the effects of COMT genotype. A reformulation of the COMT genotype effect on cognition was recently provided by Bilder et al (2004), who suggested that the functional effects of the COMT polymorphism may be better understood from the perspective of the tonic-phasic DA theory.
Bilder et al (2004) hypothesized that the Met allele is associated with increased tonic and decreased phasic DA transmission, leading to increased stability but reduced flexibility of the neural network activation states that are central to aspects of working memory. It was suggested that these effects may be beneficial or detrimental depending on the phenotype and the environmental demands (ie, the cognitive task). Thus, given our findings, the AX-CPT and N-back may be DA-dependent tasks that require the stability of the networks underlying working memory, rather than the flexibility of neural programming. One major limitation of the current study relates to the small sample size. However, it should be noted that large effect sizes were found despite the small number of subjects. Future studies with a larger sample are needed to replicate the current findings. Further, examination of diagnosis × genotype interactions would be of interest as well. One interesting empirical question that the current findings point to is whether or not COMT genotype plays a significant role in functional disability among patients with schizophrenia and schizotypal personality disorder. There is a substantial amount of evidence that schizophrenia is marked by profound functional impairments, including in occupational (Carpenter and Strauss 1991; McGurk and Meltzer 2000) and social functioning (Mueser et al 1991; Green 1996; Green et al 2000). Further, across multiple studies, cognitive impairment has been found to be consistently related to work outcomes (Bryson et al 1998; McGurk and Meltzer 2000; Suslow et al 2000; Tsang et al 2000; Mueser et al 2001), and cognitive functioning uniquely predicted functional outcomes in patients (Green 1996) and in individuals at risk for schizophrenia (Niendam et al 2006) over and above clinical symptoms. Thus, it would be interesting to investigate whether COMT genotype mediates the relationship between cognitive deficits and functional disability in patients with schizophrenia and schizotypal personality disorder. In summary, preliminary results of the current study provide some evidence that context processing and working memory are associated with COMT genotype variation, and that the AX-CPT and N-back are two theoretically dopaminergic-dependent tasks that are most sensitive in capturing the effects of COMT. Processing speed and verbal working memory were not found to be related to COMT genotype in the current sample. Taken together, this study provides evidence that variation in COMT genotype does not lead to general impairment of cognitive functioning, but that it uniquely affects two specific prefrontal domains, context processing and working memory.
2018-04-03T04:59:55.493Z
2007-12-01T00:00:00.000
{ "year": 2007, "sha1": "deca79b292721fe36353795a4dd1c2eeb30598dd", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=1885", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e8b198a5e868b53f421eb429bbb1e0f67360f737", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
7222020
pes2o/s2orc
v3-fos-license
MicroRNA-mediated repression of nonsense mRNAs Numerous studies have established important roles for microRNAs (miRNAs) in regulating gene expression. Here, we report that miRNAs also serve as a surveillance system to repress the expression of nonsense mRNAs that may produce harmful truncated proteins. Upon recognition of the premature termination codon by the translating ribosome, the downstream portion of the coding region of an mRNA is redefined as part of the 3′ untranslated region; as a result, the miRNA-responsive elements embedded in this region can be detected by miRNAs, triggering accelerated mRNA deadenylation and translational inhibition. We demonstrate that naturally occurring cancer-causing APC (adenomatous polyposis coli) nonsense mutants which escape nonsense-mediated mRNA decay (NMD) are repressed by miRNA-mediated surveillance. In addition, we show that miRNA-mediated surveillance and exon–exon junction complex-mediated NMD are not mutually exclusive and act additively to enhance the repressive activity. Therefore, we have uncovered a new role for miRNAs in repressing nonsense mutant mRNAs. DOI: http://dx.doi.org/10.7554/eLife.03032.001 Introduction Eukaryotic cells are constantly at risk for various types of mutations. Although many of the mutations are benign, a high number of mutations have detrimental consequences. Among these mutations, the nonsense mutation is a severe type that converts a coding codon into a stop codon, leading to the premature termination of translation and the expression of proteins truncated at the carboxyl terminus. These truncated protein products often have deleterious dominant-negative or gain-of-function effects that interfere with normal biological processes in cells. Indeed, many inherited genetic disorders, such as β-thalassemia (Chang and Kan, 1979) and Duchenne muscular dystrophy (Koenig et al., 1987;Monaco et al., 1988), are caused by germline nonsense mutations. Moreover, nonsense mutations in critical tumor suppressor genes are associated with prevalent cancer types such as breast cancer (Miki et al., 1994) and colorectal cancer (Powell et al., 1992;Rowan et al., 2000). A recent large-scale genome-wide study revealed that even healthy individuals carry dozens of nonsense mutations (MacArthur et al., 2012). In addition, transcriptional errors, mis-splicing, or even alternative splicing (Danckwardt et al., 2002;Lewis et al., 2003;Wollerton et al., 2004) also frequently lead to nonsense mutations. Accordingly, cells have evolved a surveillance system known as nonsense-mediated mRNA decay (NMD) to eliminate these aberrant transcripts. Great efforts have been made to uncover the mechanism by which cells detect and selectively degrade nonsense mRNAs. One well known mechanism for recognizing premature termination codon (PTC)-containing transcripts is exon-exon junction complex (EJC)-dependent NMD (EJC-NMD). During splicing, a multi-protein EJC complex is deposited at the 5′ side of each exon-exon junction, which is subsequently displaced by the translating ribosome during the pioneer round of translation. An EJC will remain bound to the mRNA and trigger NMD efficiently if the ribosome stalls at a PTC located at least 50 nucleotides (nt) upstream of the final exon-exon junction (Maquat, 2004). However, nonsense transcripts that originate from naturally intronless genes are immune to EJC-NMD, as are transcripts with PTCs located within the last exon or less than 50 nt upstream of the last exon-exon junction. 
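As a toy illustration of the positional rule just described, the sketch below applies the classical 50-nt criterion to a PTC coordinate; the function name and coordinates are illustrative, and real NMD determinants are more complex than this single check.

def predicted_ejc_nmd(ptc_position, exon_junctions):
    # classical "50-nt rule": a PTC is predicted to trigger EJC-dependent NMD
    # only if it lies more than 50 nt upstream of the last exon-exon junction;
    # positions are nucleotide coordinates within the spliced mRNA
    if not exon_junctions:          # intronless transcript: no EJC deposited
        return False
    last_junction = max(exon_junctions)
    return ptc_position < last_junction - 50

# a PTC in the last exon, or within 50 nt of the final junction, escapes EJC-NMD
print(predicted_ejc_nmd(ptc_position=620, exon_junctions=[150, 480]))   # False
print(predicted_ejc_nmd(ptc_position=300, exon_junctions=[150, 480]))   # True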
Other EJC-independent NMD mechanisms, such as long 3′ untranslated regions (UTRs) (Buhler et al., 2006;Eberle et al., 2008;Singh et al., 2008;Hogg and Goff, 2010;Zund et al., 2013) and upstream open reading frames (uORFs) (Matsui et al., 2007) are also involved in the degradation of PTC-containing mRNAs. The diverse degradation pathways for nonsense mRNAs indicate the complexity of mRNA quality control mechanisms in living cells. Another mechanism for post-transcriptional gene regulation is mediated by microRNAs (miRNAs), a class of small non-coding RNAs that are present in essentially every organ and tissue of the body. In animals, miRNAs are processed from hairpin precursors and assemble with Argonaute (Ago) family proteins into RNA-induced silencing complexes (RISCs) to regulate gene expression (Bartel, 2004). By imperfectly base-pairing with miRNA-responsive elements (miREs) in target mRNAs (primarily through nucleotides 2-7, the seed region of the miRNA), miRNAs exert their repressive effects by promoting RNA degradation through accelerated deadenylation and/or by inhibiting translation (Wu and Belasco, 2008b). The richness and variety of cellular miRNAs, as well as the versatility of miRNA:miRE seed matches render miRNAs one of the most flexible molecules to govern transcriptome integrity. In this study, we demonstrate that miRNAs also selectively target and repress the expression of nonsense mRNAs by both expedited poly(A) tail removal and translational repression. We present evidence that naturally occurring cancer-causing nonsense mRNAs are repressed by miRNA-mediated surveillance. Furthermore, we show that miRNA-mediated surveillance and EJC-NMD function additively. We propose that miRNAs may serve as a novel component of the cellular mRNA quality control system that eliminates nonsense mRNAs. eLife digest To produce a protein from a gene, the sequence of the gene must be transcribed to produce a molecule of messenger RNA (mRNA). The sequence of the mRNA is then read in groups of three letters at a time (called codons), and each codon instructs for a particular amino acid to be added into the protein. Some codons, however, do not code for an amino acid and instead these 'stop codons' mark the end of a protein. If a DNA letter is added, lost, or changed, this mutation can sometimes produce a stop codon too early in the mRNA sequence. This is called a nonsense mutation, and produces truncated proteins that either work incorrectly or do not work at all, which can harm the organism. For example, people with a nonsense mutation in the human tumor suppressor gene called APC-which normally stops uncontrolled cell growth and division-are more likely to develop colon cancer than people without this mutation. Cells in the body employ several different surveillance mechanisms to detect nonsense mutations. The best-known mechanism involves a large protein group called the exon-exon junction complex (EJC), which binds to sites within the mRNA. The cellular translation machinery removes all the EJCs bound to a normal mRNA during the production of proteins. If the translation machinery reaches a stop codon too early, so that EJCs located downstream of it are not removed, the mRNA molecule is destroyed. However, this mechanism does not work for all genes-including APC. Very short sections of RNA called microRNAs regulate protein production by causing mRNAs to degrade and by inhibiting their translation, and Zhao et al. 
have now found that microRNAs also act as a defense against nonsense mutations in the APC gene. A premature stop codon exposes sites further along the mRNA molecule that microRNA molecules bind to, which triggers the breaking down of the mRNA and inhibits its translation. The microRNA surveillance system works independently of the system involving the EJC. However, both mechanisms can work in parallel alongside each other, which provides extra protection against nonsense mutations. Zhao et al. also found that microRNAs can protect against nonsense mutations in several other types of gene found in human cells. Therefore, microRNA surveillance is likely to be a common method employed by cells to restrict the production of potentially harmful truncated proteins. DOI: 10.7554/eLife.03032.002 A PTC potentiates miRNA-mediated repression of nonsense mRNAs The vast majority of known functional and conserved miREs reside within the 3′ UTR of mRNAs (Bartel, 2004). By contrast, reports of miRNAs efficiently targeting ORFs are sparse (Duursma et al., 2008;Forman et al., 2008;Tay et al., 2008;Huang et al., 2010;Schnall-Levin et al., 2011). Interestingly, recent studies describing the transcriptome-wide identification of miREs revealed prevalent RISC binding in the coding region (Chi et al., 2009;Hafner et al., 2010;Helwak et al., 2013), raising a fascinating question about the biological significance of these ORF miREs. Previously, we have shown that miRNAs cause expedited removal of the poly(A) tails from their mRNA targets through the recognition of miREs in the 3′ UTR. By accelerating this initial and rate-limiting step of mRNA decay, miRNAs efficiently reduce the cellular concentration of their target mRNAs (Wu et al., 2006;Figure 1-figure supplement 2A). Theoretically, premature translation termination at a nonsense mutation should cause the ORF region downstream of the PTC to acquire a 3′ UTR identity. If any functional miRE is located within this redefined 3′ UTR region, this nonsense mRNA may become miRNA-sensitive and be subject to rapid miRNA-mediated deadenylation and then decay. In this manner, miRNAs could serve as a surveillance system against nonsense mRNAs. To test this hypothesis, we used a transiently inducible β-globin (BG) reporter system and a well-established transcriptional pulse-chase assay to analyze the effect of a PTC on mRNA deadenylation in human cells (Shyu et al., 1989). Briefly, transcription was induced by removing tetracycline (tet) from the culture medium for a short period so as to obtain a nearly homogeneous population of BG mRNAs that subsequently underwent synchronous deadenylation and degradation. RNA samples collected at different time points after induction were subjected to site-specific cleavage by RNase H to produce 3′ BG mRNA fragments which facilitate accurate measurement of the poly(A) tail length via gel electrophoresis and Northern blotting. The natural sequence of BG mRNA does not harbor any miREs; therefore, we first modified the reporter by inserting a let-7a miRE sequence ( Figure 1F) inframe into the ORF of the last exon to create LastEx-L7. We found that this insertion did not significantly accelerate poly(A) shortening in HeLa-tTA cells, where let-7a is naturally abundant ( Figure 1A, compare LastEx-L7 and TBG). This result indicates that ORF miREs are not functionally efficient. 
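The design logic above rests on two simple ingredients: a miRE is a short mRNA stretch complementary to the miRNA seed (nucleotides 2-7), and the position of the stop codon decides whether that stretch lies in the ORF or in a redefined 3′ UTR. A minimal sketch of that logic is given below; it is our illustration, not code from the study, and the helper names, the toy mRNA sequence, and the "perfect seed match" criterion are simplifying assumptions.

```python
# Minimal sketch: find candidate miRNA-responsive elements (miREs) that become
# "3' UTR-like" once a premature termination codon (PTC) redefines the ORF.
# Assumes a canonical miRE = perfect Watson-Crick match to the miRNA seed
# (nucleotides 2-7), read 5'->3' on the mRNA as the reverse complement of the seed.

COMPLEMENT = str.maketrans("ACGU", "UGCA")

def seed_site(mirna: str) -> str:
    """Return the mRNA sequence (5'->3') that pairs with miRNA nucleotides 2-7."""
    seed = mirna[1:7]                        # positions 2-7 of the miRNA
    return seed.translate(COMPLEMENT)[::-1]  # reverse complement

def find_seed_matches(mrna: str, mirna: str, ptc_index: int):
    """Yield positions of seed matches located downstream of the PTC."""
    site = seed_site(mirna)
    start = mrna.find(site)
    while start != -1:
        if start > ptc_index:                # only the redefined "3' UTR" region
            yield start
        start = mrna.find(site, start + 1)

if __name__ == "__main__":
    let7a = "UGAGGUAGUAGGUUGUAUAGUU"          # mature let-7a (miRBase)
    # Toy mRNA: a PTC at index 30 exposes a downstream let-7a seed match.
    mrna = "AUG" + "GCU" * 9 + "UAA" + "AUCGAU" + seed_site(let7a) + "GAUAAA"
    print("seed site on mRNA:", seed_site(let7a))
    print("matches downstream of PTC:", list(find_seed_matches(mrna, let7a, 30)))
```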
Interestingly, a point mutation (AAA to TAA) in the last exon that introduced a PTC 16 nt upstream of the let-7a miRE of LastEx-L7 (LastEx-PTC-L7) significantly accelerated the shortening of the 3′ fragment ( Figure 1A, upper portion, compare LastEx-L7 and LastEx-PTC-L7, time points 3 and 4.5), but not the 5′ fragment, of BG mRNA ( Figure 1A, bottom portion), a finding indicative of expedited poly(A) removal. Furthermore, treatment of the RNA samples with oligo(dT) and RNase H caused the BG 3′ fragments that previously appeared as diffuse bands after electrophoresis to migrate uniformly to a position corresponding to fully deadenylated mRNA ( Figure 1B), which constitutes additional evidence that the decrease in the length of BG 3′ fragments was due to trimming of the poly(A) tails. The expedited deadenylation disappeared when the let-7a miRE was mutated (LastEx-PTC-L7M), indicating that the rapid poly(A) removal was specifically induced by endogenous let-7a in the cells ( Figure 1A). As a consequence of accelerated deadenylation, the BG mRNA bearing both the let-7a miRE and the upstream PTC decayed much faster than counterparts that lacked the PTC or the let-7a miRE or bore the mutant let-7a miRE, with the half-life decreasing from >6 hr to <3 hr ( Figure 1-figure supplement 1A,B). LastEx-L7 decayed slightly faster than the control BG mRNAs (TBG and LastEx-PTC), which suggests that the miRE located in the ORF may still have residual activity. Similarly, the deadenylation rate of another BG reporter with a PTC located 11 nt upstream of the last exon-exon junction was also accelerated in a miRNA-dependent manner ( Figure 1C, compare PTC102-L7 and PTC102-L7M, time points 1.5, 3, and 4.5). Expedited poly(A) shortening was not observed for BG mRNA bearing the PTC in the last exon or 11 nt upstream of the last exon-exon junction but no downstream miRE ( Figure 1A, LastEx-PTC and Figure 1C, PTC102), because the PTC alone is unable to trigger EJC-NMD when located in the last exon or less than 50 nt upstream of the last exon-exon junction. Similar results were obtained with BG reporters containing one miR-21 miRE (Figure 1-figure supplement 2B,C and data not shown), providing evidence that miRNA-mediated deadenylation of nonsense mRNAs is not miRNA-type-specific. Besides point mutations, alternative splicing also serves as an important source of PTCs. Recent studies suggest that intron retention, a splicing event that often creates PTCs, functions as a general mechanism that controls the expression of many genes in the cells (Galante et al., 2004; Yap et al., 2012; Wong et al., 2013). We speculated that the PTC-containing transcripts generated by intron retention could also be targeted by miRNAs. To test this possibility, we generated a BG reporter (TBG-IR-L7) by mutating the splicing sites in the last intron of LastEx-L7, which abolished the splicing of the last intron and created a PTC 505 nt upstream of the let-7a miRE.

As expected, we found that TBG-IR-L7 underwent rapid deadenylation in a miRNA-dependent manner; and that absence of the downstream miRE (TBG-IR) or mutations in the miRE seed region (TBG-IR-L7M) completely abolished this accelerated deadenylation ( Figure 1D). Interestingly, TBG-IR is deadenylated slightly faster than wild-type TBG, probably due to the presence of potential miREs in the longer region between the PTC and the native stop codon (570 nt) that resulted from intron retention; these miREs may be recognized by the highly expressed endogenous miRNAs in HeLa-tTA cells, such as miR-10a/b and miR-17/20a (data not shown). Collectively, these observations demonstrate that a PTC, introduced either by point mutation or alternative splicing, is able to potentiate miRNA-mediated rapid deadenylation of the mRNA by unmasking ORF miREs downstream of it. An essential feature that distinguishes the 3′ UTR from the ORF is the absence of translating ribosomes. The PTC may serve as a roadblock to stop ribosomes and mark the new boundary between the ORF and 3′ UTR. We speculated that blocking translation would cause the miRE within the ORF to behave as if it were in the 3′ UTR and efficiently trigger accelerated miRNA-mediated mRNA deadenylation, even in the absence of an upstream PTC. To test this hypothesis, we placed a large stem-loop structure at the 5′ UTR of LastEx-L7 (hp-LastEx-L7) to block translation initiation (Chen et al., 1995; Wu et al., 2006; Figure 1-figure supplement 3) and examined its deadenylation rate. As expected, blocking translation in this manner caused LastEx-L7 mRNA, which normally is deadenylated and decays slowly, to undergo rapid miRNA-mediated deadenylation and decay (half-life of >6 hr vs <3 hr) ( Figure 1E, Figure 1-figure supplement 1A,B), suggesting that translating ribosomes may indeed have interfered with miRNA-RISC binding to the miREs located in the ORF and thus masked their repressive function. Similar results were obtained with another set of BG reporters containing a miR-21 miRE (Figure 1-figure supplement 2D). Together, these data suggest that miRNAs selectively accelerate the deadenylation of nonsense mRNAs by stably binding to miREs in the ORF downstream of the PTC. The immunity of PTC-free mRNA to miRNA-mediated repression may be due to masking of ORF miREs by the translating ribosomes, which is consistent with a previous study using constitutively transcribed luciferase reporters (Gu et al., 2009).

Figure 1. A PTC potentiates rapid miRNA-mediated deadenylation of nonsense mRNAs. (A) The influence of a PTC on BG mRNAs with or without a downstream miRE. Cytoplasmic RNA was collected at the indicated times after transcriptional arrest by adding tetracycline (tet). The RNA samples were then treated with RNase H and an oligodeoxynucleotide complementary to codons 74-81 of BG mRNA to generate 5′ and 3′ fragments, which were then separated by electrophoresis on a denaturing polyacrylamide gel and detected by Northern blotting. Left panel: BG mRNA deadenylation with or without a PTC in the last exon. TBG contains an intact BG ORF. A PTC was introduced into TBG at codon 121 within the last exon to generate LastEx-PTC. Both constructs harbor no miRE in their ORFs. Right panel: the influence of let-7a on the deadenylation rate of BG mRNAs harboring one let-7a miRE in its ORF with or without an upstream PTC mutation. LastEx-L7 contains one let-7a miRE in the last exon of the BG ORF. LastEx-PTC-L7 has a PTC at the same position as LastEx-PTC, which is 16 nt upstream of the let-7a miRE. Two nucleotides within the LastEx-PTC-L7 let-7a miRE seed were changed to create a synonymous codon in LastEx-PTC-L7M. The positions of the ORF start site and original stop codon are indicated by arrows. The borders of the original ORF/3′ UTR and redefined ORF/3′ UTR upon PTC mutation are indicated by solid or dashed lines below the constructs. Markers A(0) and A(160) correspond in size to BG mRNA 3′ fragments bearing no poly(A) or a 160-nt poly(A) tail, respectively. (B) Confirmation of poly(A) tail shortening by treatment with oligo(dT) and RNase H. The same LastEx-PTC-L7 RNA as in A was further treated with oligo(dT) and RNase H and analyzed by Northern blotting. (C) Induction by let-7a of accelerated deadenylation of nonsense BG mRNAs that do not conform to the '50 nt boundary rule' of EJC-NMD. PTC102-L7 contains a PTC mutation 11 nt upstream of the last exon-exon junction and a let-7a miRE in the last exon of the ORF. PTC102-L7M is identical to PTC102-L7 except for a mutated let-7a miRE. PTC102 contains the PTC only. (D) The influence of let-7a on the deadenylation rate of BG mRNAs harboring one let-7a miRE in its ORF with a retained last intron. TBG-IR-L7 contains one let-7a miRE in the last exon of the BG ORF and a retained last intron that creates a PTC 505 nt upstream of the let-7a miRE. Two nucleotides within the TBG-IR-L7 let-7a miRE seed were mutated as in A. TBG-IR only has the retained intron but no miRE. (E) The influence of let-7a on the deadenylation rate of BG mRNAs harboring one let-7a miRE in its ORF in the absence of translation. hp-LastEx-L7 contains a 40-nt inverted repeat in its 5′ UTR to block translation initiation. hp-LastEx-L7M is identical to hp-LastEx-L7 except for a mutated let-7a miRE. (F) Duplexes expected for the let-7a miRE or its mutant counterpart base-paired with let-7a. DOI: 10.7554/eLife.03032.003

NMD exerts its repressive power primarily by promoting RNA degradation; however, miRNA-mediated repression usually involves both mRNA decay and translational repression. To determine whether translational repression is involved when miRNAs exert their repressive effects against nonsense mRNAs, we constructed a modified luciferase reporter that has a fragment containing two in-frame miR-125b miREs ( Figure 2A) followed by an additional in-frame stop codon fused to the 3′ end of the luciferase ORF (TAA-2E). In this construct, the original stop codon of the luciferase ORF served as a PTC. This PTC (TAA) was mutated to TCA to create a PTC-free counterpart (TCA-2E) ( Figure 2B), which produced a luciferase protein with an additional 46 amino acids at the carboxyl terminus. Measurement of the steady-state mRNA level by qRT-PCR showed specific and significant repression of TAA-2E in the presence of miR-125b ( Figure 2C), observations consistent with the results of the BG deadenylation assay shown in Figure 1A and Figure 1-figure supplement 2B. Moreover, the measurement of luciferase activity revealed an even greater reduction of the PTC-containing reporter at the protein level ( Figure 2D), indicating that translational repression plays a prominent role in the repression of nonsense messages by miRNAs ( Figure 2E). Since no intron is present downstream of the PTC, this reporter (TAA-2E) is immune to EJC-NMD and the repression is most likely contributed by miRNAs. EJC-NMD generally complies with the '50 nt boundary rule'. To determine whether any boundary rule for miRNA-mediated surveillance exists, we constructed a series of plasmids in which a miR-125b miRE was inserted in-frame at various locations before or after the PTC of a modified luciferase reporter mRNA ( Figure 2F).
Measurement of luciferase activity revealed that an miRE has to be located at least 10 nt downstream of the PTC to trigger miRNA-mediated repression effectively ( Figure 2G). This observation defines a distinct boundary rule for miRNAs to successfully repress PTC-containing messages and also suggests that the size of the footprint of RISC is much smaller than that of the EJC, which makes miRNA-mediated surveillance more versatile in recognizing and repressing nonsense mRNAs. Altogether, these results demonstrate that a PTC can potentiate miRNA-mediated deadenylation and translational inhibition of nonsense mRNAs, via redefinition of ORF/3′ UTR identities and unmasking of downstream miREs.

PTC-containing APC mRNAs are natural substrates repressed by miRNA-mediated surveillance

Next, we asked whether any naturally occurring nonsense mRNAs are subjected to regulation by miRNA-mediated surveillance. We found APC (adenomatous polyposis coli), a tumor suppressor gene that is frequently mutated in colorectal cancer, to be of particular interest. Most of the mutations in the APC gene that have been identified in clinical studies are point mutations or frameshift indels that create a PTC and result in the expression of a truncated version of the APC protein. Interestingly, the majority of known APC mutations are clustered in a hotspot region (designated as the MCR in Figure 3A) within the last exon of the ORF (Miyoshi et al., 1992), rendering these mutants immune to EJC-NMD. Meanwhile, bioinformatic analysis based on the seed match rule predicts numerous potential miREs between the MCR and the native stop codon in APC mRNA. All of these features make APC a nearly ideal paradigm for the study of miRNA-mediated surveillance. To investigate whether PTC-containing APC mRNA is specifically targeted by certain miRNAs, we performed miRE screening using a reporter that has the region between a PTC at codon 1450 and the native stop codon (the PTC-STOP region) of APC mRNA fused to the 3′ end of the luciferase ORF ( Figure 3-figure supplement 1A). The top 45 miRNA candidates were chosen for the screening based on their general abundance in human tissues and the predicted thermal stability of the duplex they may form with an APC miRE. For each miRNA selected, we co-transfected HEK293 cells with the luciferase reporter plasmid and a synthetic miRNA mimic or a control small RNA, and examined protein production by measuring luciferase activity 36 hr after transfection. A decrease in luciferase activity when co-transfected with a miRNA mimic would indicate that the selected miRNA may have the potential to repress PTC-containing APC (PTC-APC) expression through miRE(s) in the PTC-STOP region. Using this method, we identified several miRNAs with a strong repressive effect (Supplementary file 1), although others that are naturally highly expressed in HEK293 cells even without transfection may have been missed. To verify that the repression is due to the direct interaction between the selected miRNA and an miRE in the PTC-APC mRNA, we mapped the corresponding miREs of the miRNAs via the 2-7 seed match alignment and examined each predicted miRE individually with a luciferase reporter. Multiple miREs exhibited specific responses to the cognate miRNAs ( Figure 3B), which verifies their functionality.
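As a concrete illustration of how such a screen is read out, the sketch below computes a "relative FL expression level" from raw dual-luciferase readings, normalizing firefly to Renilla and then to the control transfection. The function name and all numbers are hypothetical; this is only meant to make the normalization explicit, not to reproduce the authors' analysis.

```python
# Minimal sketch of the dual-luciferase normalization used for miRE screening:
# firefly (FL) activity is normalized to Renilla (RL) to control for transfection
# efficiency, then expressed relative to the matched control transfection.
# Values below are made up for illustration only.

def relative_fl_expression(fl_mimic, rl_mimic, fl_control, rl_control):
    """(FL/RL with miRNA mimic) divided by (FL/RL with control small RNA)."""
    return (fl_mimic / rl_mimic) / (fl_control / rl_control)

# A reporter repressed by the co-transfected mimic gives a ratio well below 1.
ratio = relative_fl_expression(fl_mimic=1.2e5, rl_mimic=2.0e5,
                               fl_control=3.0e5, rl_control=2.1e5)
print(f"relative FL expression: {ratio:.2f}")  # ~0.42 -> candidate functional miRE
```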
To further confirm the repressive effect of these miRNAs on PTC-APC mRNA in its natural sequence context, we constructed minigene vectors that express both an HA-tagged PTC-APC bearing either a wild-type miRE or a mutant miRE with mismatches in the seed region and an HA-tagged EGFP that served as an internal control for more precise protein quantification. The minigene plasmids were co-transfected with cognate miRNA mimics into HEK293 cells. Western blotting revealed a marked increase in protein expression for the minigene constructs with mutant miREs ( Figure 3C). The unmasking of ORF miREs by PTCs may significantly augment repression by miREs in the 3′ UTR. To test this hypothesis, we constructed a pair of PTC-containing APC minigene plasmids that each contained the full-length APC 3′ UTR, one with wild-type miR-29a miREs (APC-PTC1450-3'UTR) and the other with mutant miREs (APC-PTC1450-mut-3'UTR). Western blotting showed that, in the context of the natural APC 3′ UTR, the PTC was still able to potentiate repression by miR-29a miREs originally located in the ORF (Figure 3-figure supplement 2). In addition, to quantify the relationship between unmasked ORF miREs and pre-existing miREs in the 3′ UTR, we designed chimeric reporters in which the PTC-STOP region and the entire 3′ UTR of APC mRNA were fused to the 3′ end of a firefly luciferase ORF. The miR-29a miREs in the PTC-STOP region and a miR-135b miRE in the 3′ UTR of APC mRNA that had previously been reported to be functional (Nagel et al., 2008) were mutated, either individually or simultaneously. In the presence of both miRNAs, the wild-type chimeric reporter was repressed most efficiently, while mutating either the ORF miREs or the 3′ UTR miRE alleviated the repression ( Figure 3-figure supplement 3). These observations indicate that ORF miREs unmasked by an upstream PTC are fully functional in the presence of repression by 3′ UTR miREs. We next sought to determine whether the expression of an endogenous PTC-APC mutant is downregulated by miRNAs. Our previous luciferase reporter-and minigene-based assays have identified that two miR-29a miREs ( Figure 3I) are present in the PTC-STOP region of APC nonsense mRNA ( Figure 3A-C). Interestingly, the seed regions of the miR-29a miREs embedded in the APC ORF are highly conserved across several species. Therefore, miREs of miR-29a were selected for subsequent investigations. SW480 is a colorectal cell line that naturally expresses a truncated APC protein (caused by a PTC mutation at codon 1338) that can be readily detected by Western blotting with a specific antibody ( Figure 3D), and high levels of endogenous miR-29a ( Figure 3F,G, Figure 3-figure supplement 4B,D), which render it a suitable cell line to investigate the repression mediated by miR-29a on the endogenous APC nonsense mutant. We transduced SW480 cells with lentiviruses encoding a TuD miRNA decoy (Figure 3-figure supplement 4A), which has been proven a very effective and specific antagonizer of miRNAs (Haraguchi et al., 2009), to inhibit endogenous miR-29a. TuDexpressing cells were cultured for 3 days before APC was examined by Western blotting. An approximate twofold increase in truncated APC expression was observed for SW480 cells in which miR-29a was knocked down ( Figure 3F, upper panel). We also measured the mRNA level of PTC-APC in SW480 cells and found that the knockdown of miR-29a caused a ∼1.6-fold increase of PTC-APC mRNA ( Figure 3E), which is consistent with the important role of translational repression by miRNAs. 
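The arithmetic that separates the mRNA-level and translation-level contributions (the roughly twofold protein change versus the ~1.6-fold mRNA change above, and the calculation described for Figure 2E) can be made explicit with a short worked example. The function and the exact numbers below are ours, chosen only to echo the fold changes quoted in the text.

```python
# Minimal sketch of the repression-ratio arithmetic used to separate mRNA-level and
# translation-level effects (cf. the Figure 2E legend): dividing the repression ratio
# for protein by the repression ratio for mRNA gives the repression ratio for
# translation efficiency (protein yield per mRNA molecule).

def repression_ratio(level_without_mirna, level_with_mirna):
    return level_without_mirna / level_with_mirna

# Illustrative numbers echoing the text: ~2-fold change in protein,
# ~1.6-fold change in mRNA after manipulating the miRNA.
protein_repression = repression_ratio(2.0, 1.0)   # 2.0
mrna_repression = repression_ratio(1.6, 1.0)      # 1.6
translation_repression = protein_repression / mrna_repression

print(f"protein repression ratio:     {protein_repression:.2f}")
print(f"mRNA repression ratio:        {mrna_repression:.2f}")
print(f"translation-efficiency ratio: {translation_repression:.2f}")  # >1 => translational component
```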
The successful knockdown of endogenous miR-29a expression was confirmed by Northern blotting ( Figure 3F, middle panel) and a reporter assay (Figure 3-figure supplement 4B,C); by contrast, the abundance of an untargeted endogenous miRNA, miR-26a, remained unchanged ( Figure 3F, bottom panel). To test if miR-29a-mediated repression is specific for PTC-APC but not wild-type APC (WT-APC), we generated doxycycline (dox)-inducible miR-29a-overexpressing SW480 and HEK293 stable cell lines and examined the amount of endogenous APC protein after inducing miRNA expression for 3 days. HEK293 cells naturally express the full-length APC protein ( Figure 3D) and low levels of miR-29a ( Figure 3H, Figure 3-figure supplement 4B,D). The amount of WT-APC protein remained unchanged after the induction of miR-29a expression to much higher levels in the cells ( Figure 3H) because the miR-29a miREs are located within the PTC-STOP region but not the 3′ UTR of APC mRNA ( Figure 3A). In contrast, SW480 cells that overexpress miR-29a produced a lower amount of truncated APC compared to cells that overexpressed a non-functional small RNA that did not affect the already high expression of endogenous miR-29a ( Figure 3G).

Figure 2 legend (continued): (D) Protein quantification by analyzing luciferase activity for TAA-2E and TCA-2E in the presence and absence of miR-125b. (E) Contribution of translational repression by miRNA-mediated surveillance. The repression ratios for TAA-2E and TCA-2E were calculated from normalized levels of firefly luciferase protein (black bars) and mRNA (gray bars) in the absence versus the presence of miR-125b. By dividing the repression ratio for protein production and that for mRNA concentration, the repression ratio for translation efficiency (protein yield per mRNA molecule, white bars) was determined. A repression ratio for translation efficiency that is >1 indicates that part of the total repression observed at the protein level is attributable to inhibition of translation. (F) Schematic illustration of the reporter designs used in G. The same 22-nt miR-125b miRE as in A was introduced in-frame into different positions before or after a PTC of a modified firefly luciferase reporter gene to obtain a series of miRE-containing plasmids. A graph that illustrates the different methods for calculating the distance between an upstream or a downstream miRE and the PTC is shown below the construct. (G) Boundary rule for miRNA-mediated surveillance. Each firefly luciferase construct that contains one miR-125b miRE at a different position was co-transfected with a Renilla luciferase reporter into HEK293 cells in the presence or absence of miR-125b. The relative FL expression level was calculated from the normalized levels of firefly luciferase in the absence versus the presence of miR-125b. DOI: 10.7554/eLife.03032.007

Figure 3 legend (continued): (B) Validation of miRE function by luciferase assays. A vector expressing both a firefly luciferase (FL) transcript harboring one potential miRE in its 3′ UTR and a control Renilla luciferase (RL) transcript was co-transfected with cognate miRNA mimics into HEK293 cells. The relative FL expression level represents the firefly/Renilla luciferase ratio for pRF-miRE relative to the no miRE control pRF-con. (C) Validation of miRE function in an APC minigene. A vector expressing both an HA-tagged truncated APC (APC-PTC1450) and a control HA-tagged EGFP was co-transfected with cognate miRNA mimics into HEK293 cells.
The mutant counterpart of the APC minigene (APC-PTC1450-mut) contains two altered nucleotides that abolish miRE:miRNA complementarity without changing the identity of the encoded amino acid. (D) Western blot analysis of endogenous truncated APC in SW480 cells and full-length APC in HEK293 cells by using an anti-APC antibody. (E) Change in PTC-APC mRNA levels in SW480 cells upon miR-29a knockdown. Cytoplasmic RNA was extracted from the SW480 cell lines used in F, and the levels of APC and GAPDH mRNA were determined by qRT-PCR. The relative APC mRNA level was calculated by normalizing to GAPDH mRNA. (F) Upregulation of endogenous truncated APC upon miR-29a knockdown. SW480 cells were transduced with lentiviruses encoding a miR-29a decoy (TuD-29a) or a control decoy (TuD-NC). Endogenous truncated APC was probed with an anti-APC antibody. GAPDH served as a loading control. Changes in the levels of endogenous miR-29a and an untargeted control (miR-26a) were determined by Northern blotting. 5S rRNA served as a loading control. (G) Downregulation of endogenous truncated APC upon miR-29a overexpression. SW480 cells were transduced with lentiviruses encoding miR-29a or a control small RNA (siEGFP). Western and Northern assays were performed as in F. (H) Invariant concentration of endogenous wild-type APC upon miR-29a overexpression. HEK293 cells were transduced with lentiviruses encoding miR-29a or a control small RNA (siEGFP). Western and Northern assays were performed as in F except that Tubulin served as the loading control in the Western assay. (I) Duplexes expected for the miR-29a miREs base-paired with miR-29a. (J) Ribonucleoprotein immunoprecipitation (RIP) analysis of PTC-APC mRNA associated with Ago2 in SW480 cells upon miR-29a knockdown. SW480 cells used in F were transduced with a low amount of lentiviruses (MOI <0.3) expressing FLAG-tagged Ago2. Anti-FLAG RIP followed by qRT-PCR was performed to compare the binding of endogenous PTC-APC mRNAs to Ago2. The amount of Ago2-associated PTC-APC mRNA was normalized to MYC, an endogenous target of let-7c. The relative Ago2-RIP efficiency was calculated from the normalized amount of PTC-APC mRNA in the presence of a miR-29a decoy (TuD-29a) versus a control decoy (TuD-NC). (K) RIP analysis of PTC-APC mRNA associated with Ago2 in SW480 cells upon miR-29a overexpression. SW480 cells used in G were transduced with a low amount of lentiviruses (MOI <0.3) expressing FLAG-tagged Ago2. RIP assays were performed as in J. The relative Ago2-RIP efficiency was calculated from the normalized amount of Ago2-associated PTC-APC mRNA in the presence of miR-29a versus a control small RNA (siEGFP). (L) The 3′ UTR of APC mRNA contains no miR-29a miRE. The sequence of a full length APC 3′ UTR was cloned to the 3′ UTR of a firefly luciferase reporter. This reporter plasmid (pFL-APC-3′UTR) or a control plasmid (pFL) was co-transfected with a miR-29a mimic or a control small RNA (siNC) into HEK293 cells. The relative FL expression level was calculated from the normalized levels of firefly luciferase activity for pFL-APC-3′UTR versus pFL in the presence of the miR-29a mimic or siNC. (M) RIP analysis of ectopically expressed PTC-APC mRNA associated with Ago2 in HEK293 cells. A PTC-containing APC minigene plasmid with wild-type (APC-PTC1450) or mutant miR-29a miREs (APC-PTC1450-mut) was co-transfected with the miR-29a mimic or a control small RNA (siNC) into HEK293 cells that stably expressed FLAG-tagged Ago2. 
Anti-FLAG RIP followed by qRT-PCR was performed to compare the binding of the mRNAs to Ago2. The amount of Ago2-associated PTC-APC mRNA or its miRE mutant counterpart was normalized to HOXD10, an endogenous target of miR-10a. The relative Ago2-RIP efficiency was calculated from the normalized levels of Ago2-associated PTC-APC mRNA bearing wild-type (APC-PTC1450) or mutant (APC-PTC1450-mut) miR-29a miREs in the presence of the miR-29a mimic versus siNC. DOI: 10.7554/eLife.03032.008

These results support that miR-29a specifically represses the PTC-APC and does not impair the expression of WT-APC. To further prove that the repressive effects of miR-29a on PTC-APC mRNA are direct, we performed ribonucleoprotein immunoprecipitation (RIP) assays to examine the association of PTC-APC mRNA with Ago2, the component of the RISC complex that directly binds the miRNA and its target mRNA. The level of endogenous PTC-APC mRNA associated with Ago2 showed a mild but reproducible decrease when miR-29a was knocked down ( Figure 3J) and a significant increase upon miR-29a overexpression in SW480 cells ( Figure 3K). The 3′ UTR of APC mRNA contains no miR-29a miREs, which was determined by comparing the expression of a luciferase reporter bearing a full-length APC 3′ UTR in the presence of miR-29a versus a negative control small RNA (siNC) ( Figure 3L). Therefore, the changes in the levels of Ago2-associated PTC-APC mRNA in SW480 cells are most likely due to a direct effect of miR-29a targeting its miREs within the PTC-STOP region. Moreover, the binding efficiency of APC-PTC1450 mRNA bearing wild-type or mutant miREs to Ago2 was compared by co-transfecting HEK293 cells with the minigene construct and a miR-29a mimic or a control small RNA. In the presence of miR-29a, the APC-PTC1450 mRNA that harbors wild-type miR-29a miREs has a much stronger association with Ago2 than does its counterpart that contains the mutant miREs ( Figure 3M). In addition, a mutant version of the miR-29a mimic that restored base-pairing of the 2-7 seed with the mutant miR-29a miRE also restored repression of APC mRNA (Figure 3-figure supplement 5), supporting the conclusion that repression of PTC-APC expression is achieved through the direct interaction of miR-29a with the miREs we mapped. Altogether, these observations represent novel evidence that the expression of a naturally occurring nonsense mRNA is selectively repressed by endogenous miRNAs in the cells.

The repressive effects of miRNA-mediated surveillance and EJC-NMD are additive

Although APC provides an excellent case for studying miRNA-mediated surveillance, most of the reported nonsense mRNAs that harbor PTCs located upstream of the last exon should be EJC-NMD sensitive. We therefore sought to determine whether these EJC-NMD-competent transcripts are simultaneously subjected to miRNA-mediated surveillance. BRCA1, a tumor suppressor gene that is frequently mutated in breast cancer, was chosen for further investigation. Unlike APC, mutations that lead to the expression of a truncated version of BRCA1 are scattered along the ORF (Castilla et al., 1994); therefore, most BRCA1 nonsense mutants contain introns downstream of the PTC and are EJC-NMD-sensitive (Perrin-Vidoz et al., 2002).
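The Ago2-RIP quantification scheme spelled out in the Figure 3J, K and M legends above is again a two-step normalization: recovery of the target mRNA in the immunoprecipitate is normalized to a reference transcript recovered in the same IP, and the normalized value is then compared between conditions. The sketch below is our illustration of that bookkeeping; the function names and all numbers are invented.

```python
# Minimal sketch of the Ago2-RIP comparison: the immunoprecipitated target mRNA is
# first normalized to a reference transcript recovered in the same IP (e.g., MYC or
# HOXD10 in the legends above), and the normalized value is then compared between
# the test and control conditions. All numbers are illustrative only.

def rip_recovery(target_ip, reference_ip):
    """Target mRNA recovered in the IP, normalized to a co-precipitated reference."""
    return target_ip / reference_ip

def relative_rip_efficiency(target_cond, ref_cond, target_ctrl, ref_ctrl):
    """Normalized recovery in the test condition relative to the control condition."""
    return rip_recovery(target_cond, ref_cond) / rip_recovery(target_ctrl, ref_ctrl)

# e.g., more target mRNA bound to Ago2 when the cognate miRNA is overexpressed:
print(relative_rip_efficiency(target_cond=8.0, ref_cond=2.0,
                              target_ctrl=3.0, ref_ctrl=2.0))  # ~2.7-fold enrichment
```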
We amplified the cDNA of the PTC-STOP region of one clinically identified BRCA1 mutant (with a PTC at codon 526) (Perrin-Vidoz et al., 2002) and fused it to the 3′ end of a firefly luciferase coding region to create a chimera such that the original stop codon of the luciferase gene serves as a PTC in the fused reporter ( Figure 4A). This reporter (pFL-BRCA1) has no downstream introns and harbors wild-type miREs; therefore, it is expected to be sensitive to miRNA only. pFL-BRCA1in is identical to pFL-BRCA1 except that two downstream introns were incorporated to render the transcript sensitive to EJC-NMD in addition to miRNA. The expression level of pFL-BRCA1in was significantly lower compared to pFL-BRCA1, indicating that EJC-NMD was contributing to the repressive activity ( Figure 4C, left panel). When GW182, a key component of RISC, was knocked down >70% by small interfering RNAs (siRNAs) in HeLa-tTA cells ( Figure 4C, right panel), the expression level of pFL-BRCA1 was significantly increased, which indicates that miRNAs were indeed involved in the repression. More importantly, knockdown of GW182 also caused a comparable extent of upregulation of pFL-BRCA1in, suggesting that miRNA-mediated repression of PTC-containing mRNAs is independent of the presence of downstream introns ( Figure 4C, left panel). Thus, miRNA-mediated surveillance and EJC-NMD are not mutually exclusive, and the repressive effects caused by both mechanisms are additive. Next, we identified several potential miREs between the PTC and the natural stop codon of BRCA1 nonsense mutant mRNA by using the similar screening method for APC in HEK293 cells (data not shown). miREs targeted by miR-137 and miR-544 were mutated to create pFL-BRCA1-mut, which is not responsive to either EJC-NMD or miR-137 and miR-544, and pFL-BRCA1in-mut, which is insensitive to miR-137 and miR-544 but responsive to EJC-NMD ( Figure 4A,B). As expected, the expression level of EJC-NMD-sensitive reporter pFL-BRCA1in-mut was significantly lower compared to pFL-BRCA1-mut ( Figure 4D, compare black and light gray bars). When co-transfected with miR-137 or miR-544 in HEK293 cells (which do not naturally express these two miRNAs), the expression of the miRNA-specific reporter pFL-BRCA1 was significantly suppressed compared to pFL-BRCA1-mut, confirming the role of miRNAmediated surveillance in repressing its target ( Figure 4D, middle and right, compare black and dark gray bars). Importantly, a markedly stronger repression was observed for the reporter expected to be sensitive to both EJC-NMD and miRNA (pFL-BRCA1in) when co-transfected with miR-137 or miR-544 but not when co-transfected with a control small RNA ( Figure 4D, compare black and white bars). This observation further validates that miRNA-mediated surveillance and EJC-NMD can act additively. To determine if miRNA-mediated surveillance is a general mechanism that helps to eliminate nonsense transcripts, we sought to identify more potentially functional ORF miREs in other naturally occurring nonsense mRNAs. We performed exome sequencing of HCT-116 cells and identified 188 heterozygous nonsense mutations. Analysis of transcriptome data (Djebali et al., 2012) indicates that 47 of them were actively expressed in HCT-116 cells (Supplementary file 2). We chose 16 candidates that contain PTC-STOP regions longer than 400 nt for further experimental validation of miREs by using a similar strategy for APC and BRCA1. 
We found that 11 of the 16 candidates contained at least one functional miRE in the PTC-STOP regions ( Figure 4E-I, Figure 4-figure supplement 1). Several of these candidates, such as ITGA6, USP1, and PPM1J, harbored multiple functional miREs that were responsive to different miRNAs ( Figure 4F-H). Other candidates, such as KTN1 and RAD54L2, contained at least one strong miRE ( Figure 4I). As only a limited number of miRNAs have been screened by our luciferase assays, additional miREs may be found by increasing the number of miRNAs screened. Considering the well-known facts that multiple miRNAs can repress the same mRNA simultaneously and the effects of miREs are additive (Doench and Sharp, 2004), the presence of multiple active miREs within these PTC-STOP regions could result in a pronounced repressive effect. Together, our results strongly support the view that miRNA-mediated surveillance may serve as a general mechanism that downregulates various nonsense mRNAs in human cells ( Figure 5).

Discussion

In this study, we have described evidence that a PTC is sufficient to induce miRNA-mediated downregulation of nonsense mRNAs by unmasking miREs located between the PTC and the natural stop codon of the mRNA. We have also experimentally verified that nonsense mutants of APC, BRCA1, and a few other genes are subjected to miRNA-mediated surveillance in human cells. Our findings indicate that besides their established roles in regulating gene expression, miRNAs may serve as a novel surveillance system to reduce aberrant mRNAs bearing PTCs and their potentially harmful truncated protein products, thus functioning as an important supplement to other mRNA surveillance systems in mammalian cells. Unlike classic NMD, miRNA-mediated surveillance is EJC-independent and recognizes its targets only if an miRE is embedded in the PTC-STOP region ( Figure 1A) at a position as close as 10 nt downstream of the PTC (Figure 2F,G). The repressive influence of miRNA is enhanced by combining accelerated mRNA degradation and translational inhibition ( Figure 2C-E). Furthermore, miRNA-mediated surveillance can act in concert with EJC-NMD ( Figure 4C,D), which strengthens the repressive activity and expands the substrate spectrum of the cellular mRNA quality control system.

Figure 4 legend (continued): ... to each reporter. (C) Left panel: miRNA-mediated surveillance and EJC-NMD are not mutually exclusive. Knockdown of RISC core component GW182 alleviated the repression of BRCA1 reporters, regardless of the presence of the downstream introns. The relative FL expression level represents the firefly/Renilla luciferase ratio in the presence of a control siRNA (siNC) versus an siRNA targeting GW182 (siGW182). Right panel: expression level of GW182 protein in HeLa-tTA cells treated with siNC or siGW182. Tubulin served as a loading control. (D) Additive effects of miRNA-mediated surveillance and EJC-NMD. Each reporter in A was co-transfected with miR-137, miR-544, or a control small RNA (siNC), together with a Renilla luciferase reporter into HEK293 cells. The relative FL expression level represents the firefly/Renilla luciferase ratio of each EJC-NMD- and/or miRNA-responsive reporter relative to pFL-BRCA1-mut. (E) Reporter constructs for miRE function validation of candidates identified from HCT-116 exome and RNA sequencing data. The PTC-STOP region of each candidate was fused to a firefly luciferase (FL) ORF in the same manner as for APC and BRCA1. A control Renilla luciferase (RL) reporter was expressed from a second promoter in the same construct (pRF-candidate). Each potential miRE identified from the screening was mutated to obtain a series of miRE mutant reporters (pRF-candidate-miREmut).
(F-I) Experimental verification of functional miREs in the PTC-STOP region of selected candidates. The wild-type or miRE mutant version of each candidate reporter was co-transfected into HEK293 cells with a cognate miRNA mimic. The activity of each miRE (fold increase) was calculated from the normalized levels of firefly luciferase activity for pRF-candidate versus pRF-candidate-miREmut in the presence of the cognate miRNA mimic. DOI: 10.7554/eLife.03032.014

An estimated one-third of genetic diseases are associated with truncated proteins produced from nonsense mRNAs (Kuzmiak and Maquat, 2006). Indeed, truncated BRCA1 has been shown to antagonize wild-type BRCA1 function in a dominant-negative manner (Fan et al., 2001; Sylvain et al., 2002), and several lines of evidence support the view that truncated APC contributes to colorectal cancer by causing spindle misalignment that eventually leads to chromosomal instability (Fodde et al., 2001; Tighe et al., 2004; Quyn et al., 2010). As predicted by a 2-7 seed match algorithm, the nonsense mutants of APC and BRCA1 contain many potential conventional miREs in their PTC-STOP regions, and several of them were experimentally validated in the cell lines, suggesting that these disease-causing mutants may be targets of miRNA-mediated surveillance. Recent studies have revealed that many unconventional miREs exist in animals (Shin et al., 2010; Helwak et al., 2013), raising the possibility that many more functional miREs may be present than would be predicted by a search algorithm based on the 2-7 seed match rule. In addition, targets for only a limited number of miRNAs were sought, which could also lead us to underestimate the frequency with which miREs are present in the PTC-STOP regions. The abundance of some endogenous miRNAs is already very high in HEK293 cells, which could lead them to falsely be scored as negative in our mimic-based screening. Such miRNAs could be identified by performing the screening in multiple cell lines that have different miRNA profiles. Moreover, the contribution of miRNAs to the overall level of repression may be even greater if we count the inhibitory effect of miRNAs on translation, which would not be evident in the RNA sequencing data (MacArthur et al., 2012). Interestingly, we found that a luciferase reporter harboring the miR-29a miREs from the PTC-STOP region of the APC nonsense mutant is efficiently repressed only in colon-derived SW480 cells but not in kidney-derived HEK293 cells (Figure 3-figure supplement 4B,D), which correlates well with the relatively high abundance of miR-29a in SW480 cells and its undetectable level of expression in HEK293 cells ( Figure 3F-H). This finding implies that a nonsense mRNA may have a very different fate in distinct tissues depending on which miRNAs are highly expressed. It is well established that the expression of many miRNAs is tissue- or cell type-specific and is dynamically regulated during cell differentiation or under different physiological conditions (Houbaviy et al., 2003; Liu et al., 2004; Lu et al., 2005; Marsit et al., 2006; Landgraf et al., 2007; Liang et al., 2007).
Some nonsense mRNAs may escape miRNA-mediated surveillance due to the lack of expression of certain miRNAs in some tissues or under some stress conditions, thereby increasing the risk of tissue-specific diseases.

Figure 5. Model of miRNA-mediated surveillance system. The coding region of an mRNA may contain multiple potential miREs. Usually, miRNAs cannot stably bind to their cognate miREs that are embedded within the ORF of a normal transcript under active translation. However, upon nonsense mutation, the translating ribosome stalls at the PTC so that miREs downstream of the PTC are unmasked, triggering miRNA-mediated deadenylation and translational repression. DOI: 10.7554/eLife.03032.016

In addition to many identified nonsense messages resulting from genomic mutations, evidence suggests that ∼35% of alternatively spliced gene products contain PTCs (Lewis et al., 2003; Wollerton et al., 2004). Transcripts with retained introns, for example, are efficiently degraded by miRNAs, as shown by our BG reporter assays ( Figure 1D). Considering that introns tend to be very long and rich in repeats, we believe many more transcripts with retained introns may be targeted by miRNAs. Consistent with this idea, more than 10% of Ago-CLIP reads map to introns (Chi et al., 2009; Hafner et al., 2010). Our computational analyses of transcriptome and published Ago-CLIP sequencing data from two human cell lines revealed that hundreds of PTC-containing transcripts generated by intron retention are indeed bound by Ago in the PTC-STOP regions (Supplementary file 3), which makes them attractive candidates of miRNA-mediated surveillance. Taken together, we have uncovered a new role for miRNAs in mRNA quality control. This miRNA-mediated surveillance system acts in parallel with other systems, such as EJC-NMD, to provide extra protection for the cells against nonsense mutations. We have also demonstrated that APC and BRCA1 nonsense mutant mRNAs are downregulated by multiple miRNAs that specifically target the PTC-STOP regions, thereby providing a feasible means to eliminate such deleterious nonsense transcripts without impairing their wild-type counterparts.

Plasmid constructions

The rabbit β-globin coding region was amplified by PCR from the pBBB plasmid (Shyu et al., 1989) and placed downstream of a tet-off promoter to generate the plasmid TBG. LastEx-PTC was constructed by mutating one nucleotide at codon 121 to introduce a PTC in TBG. The LastEx-L7 and LastEx-PTC-L7 plasmids were constructed by inserting one let-7a miRE originating from human LIN-28 (GCACAGCCTATTGAACTACCTCA) in-frame into TBG and LastEx-PTC. hp-LastEx-L7 is identical to LastEx-L7 except for the presence of a stable hairpin (GGGGCGCGTGGTGGCGGCTGCAGCCGCCACCACGCGCCCC) in the β-globin 5′ UTR 29 nt upstream of the initiation codon. A Renilla luciferase ORF was fused to the 5′ end of the BG ORF to create a RL-BG chimeric reporter, and then the stable hairpin was placed in the 5′ UTR at the same position as in hp-LastEx-L7 to generate hp-RL-BG. Two nucleotides within the let-7a miRE seed region were mutated without changing the amino acids to generate LastEx-PTC-L7M and hp-LastEx-L7M. Two nucleotides of the 5′ splicing site and three nucleotides of the 3′ splicing site within the final intron of TBG or LastEx-L7 were mutated to create TBG-IR or TBG-IR-L7. Two nucleotides within the let-7a miRE seed region were mutated without changing the amino acids to generate TBG-IR-L7M.
For the second set of TBG plasmids containing a miR-21 miRE, all the other designs are identical to the TBG plasmids with a let-7a miRE except for two differences: (1) the PTC site was introduced at codon 116, and (2) an artificial miR-21 miRE complementary to miR-21 with three centered mismatches (TCAACATCAGAGAGATAAGCTA) was inserted in-frame into TBG. The plasmid 3′UTR-L7 has been described (Wu et al., 2006). The let-7a miRE between the NheI and XbaI sites was replaced by a miR-21 miRE to generate 3′UTR-21. PTC102, PTC102-L7, and PTC102-L7M are identical to LastEx-PTC, LastEx-PTC-L7, and LastEx-PTC-L7M, respectively, except that the PTC site was introduced at codon 102. The let-7a miRE or its mutant in PTC102-L7 or PTC102-L7M was replaced with a miR-21 miRE or its mutant to generate PTC102-21 and PTC102-21M. Two tandem miR-125b miREs (GGTATCACAAGTTACAATC TCAGGGATAGCCAAGGTATCACAAGTTACAATCTCAGGGATAGCCAATTCTTAAT) were inserted between the EcoRI and XbaI sites 59 nt downstream of the firefly luciferase ORF with modified 3′ UTR sequences containing an additional stop codon to generate TAA-2E. The original stop codon of the luciferase gene was disrupted by a point mutation to obtain TCA-2E. One miR-125b miRE was inserted in-frame into a modified firefly luciferase plasmid at different positions before or after the PTC to generate a series of miRE-containing luciferase plasmids. The APC minigene plasmid was constructed by placing an HA-tagged APC ORF with or without its natural 3′ UTR downstream of the CMV promoter. An HA-tagged EGFP ORF was driven by an SV40 promoter from the same plasmid backbone. PTCs and the corresponding miRE mutations were generated by site-directed mutagenesis. miRE screening plasmids for APC, BRCA1, and all of the selected candidates from HCT-116 cells were constructed by fusing corresponding PTC-STOP regions to the 3′ end of a firefly luciferase ORF. miRE validation plasmids were constructed either by inserting ∼30 nt sequences that surround predicted miRE seeds into the 3′ UTR of a firefly luciferase reporter gene or by fusing the whole PTC-STOP region of each candidate with the mutated miRE seed to the 3′ end of the firefly luciferase ORF. The last 738 nt of APC coding region (PTC-STOP region) plus the entire 3′ UTR were fused to the 3′ end of a firefly luciferase ORF to generate pFL-APC-WT. The miR-29a miREs and miR-135b miRE were disrupted individually or together by site-directed mutagenesis to generate pFL-APC-29M, pFL-APC-135M, and pFL-APC-29M+125M. The sequence of the APC natural 3′ UTR was cloned downstream of a firefly luciferase gene to generate pFL-APC-3′UTR. Mature miR-29a or a negative control small RNA sequence siEGFP (AACTTCAGGGTCAGCTTGCCG) was cloned into the inducible TRIPZ lentiviral shRNA vector to generate miRNA-overexpression lentiviruses. A miRNA decoy sequence (TuD-NC or TuD-29a) was cloned into the GIPZ constitutive lentiviral vector to generate miRNA-knockdown lentiviruses. pFL-BRCA1 is identical to the BRCA1 miRE screening plasmid. pFL-BRCA1in harbors two introns downstream of the luciferase stop codon. Functional miR-137 and miR-544 miREs within the BRCA1 PTC-STOP region were subjected to a synonymous mutation to generate pFL-BRCA1-mut and pFL-BRCA1in-mut. Cell culture and stable cell lines HEK293, 293T, HCT-116, and SW480 cells were purchased from ATCC (Manassas, Virgina, USA). HeLa-tTA cells were purchased from Clontech (Mountain View, California, USA). 
All cells were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum (Invitrogen, Carlsbad, California, USA). To produce the lentiviruses, 293T cells were transfected with a virus vector encoding the expression cassette as well as the VSVG and ΔR8.91 plasmids. Viruses were harvested at 48 and 72 hr post-transfection. HEK293, HCT-116, and SW480 stable cell lines were generated by transduction with lentiviruses in the presence of 8 μg/ml of polybrene overnight, followed by a 1-week puromycin and/or hygromycin selection. RNA interfering to deplete RISC core component Endogenous GW182 of HeLa-tTA cells was depleted using protocols described elsewhere (Piao et al., 2010). siRNAs were synthesized by GenePharma (Shanghai, China) and the mature sequences are as follows: siNC: UUCUCCGAACGUGUCACGUUU; siGW182: UUGAGCACGGAGAUUAGGCUG. Deadenylation and decay assays HeLa-tTA cells were plated on 35-mm plates 1 day before transfection in DMEM containing 20 ng/ml tet. In total, 350 ng of the BG reporter plasmid was used for transfection. The transcription of BG mRNA was induced by removing tet 12 hr after transfection. After 3 hr of induction, tet was added to a final concentration of 1 μg/ml to block the transcription of BG. Cytoplasmic RNA was then isolated at various time intervals. Equal amounts of RNA were treated with RNase H in the presence of an oligodeoxynucleotide (GGTTGTCCAGGTGACTCAGACCCTC) complementary to codons 74-81 within the BG coding region. For BG reporters with a retained last intron, RNA samples were treated with RNase H in the presence of an oligodeoxynucleotide (CAGTGTATATCATTGTAACCATAAA) complementary to a region within the last intron which is 81 nt upstream of the final exon. The digested RNA samples were then analyzed by electrophoresis (5.5% PAGE with 8 M urea) and Northern blotting as previously described (Wu et al., 2006;Wu and Belasco, 2008a). For measuring BG mRNA half-life, a constitutively transcribed EGFP mRNA was co-transfected with BG plasmids. Luciferase assays In the miR-125b repression assays, HEK293 cells cultured in a 12-well plate were transfected with a firefly luciferase reporter (TAA-2E, TCA-2E, or a series of firefly luciferase reporters with one miR-125b miRE embedded before or after the PTC, 10 ng), pRL (10 ng), and a plasmid encoding or not encoding miR-125b (pMIR125b or pMIR125bΔ, respectively; 480 ng). In the miRE screening assays of APC, BRCA1, and computationally identified candidates from HCT-116 cells, HEK293 cells cultured in a 24-well plate were transfected with a firefly luciferase reporter containing the corresponding PTC-STOP region of the selected candidate (20 ng), pRL (10 ng), and 10 pmol of a synthetic miRNA mimic. In the miRE validation assays, HEK293 cells cultured in a 24-well plate were transfected with a vector encoding both firefly and Renilla luciferase (10 ng) and 10 pmol of the cognate miRNA mimics. In the PTC-STOP and 3′ UTR miRE additive effect assay, HEK293 cells cultured in a 24-well plate were transfected with a firefly luciferase reporter having a partial ORF and the entire 3′ UTR of APC fused to the 3′ end of the luciferase ORF (pFL-APC-WT or its miRE mutant counterparts, 10 ng), pRL (10 ng), and 10 pmol of siNC or miR-29a and miR-135b mimic mixture. 
To examine potential miR-29a miREs in the natural APC 3′ UTR, HEK293 cells cultured in a 24-well plate were transfected with a normal firefly luciferase reporter (pFL, 10 ng) or a reporter bearing a full length APC 3′ UTR sequence in the luciferase 3′ UTR (pFL-APC-3′UTR, 10 ng), pRL (10 ng), and 10 pmol of siNC or miR-29a mimics. In the miRNA-mediated surveillance and EJC-NMD additive effect assay, HEK293 cells cultured in a 24-well plate were transfected with a firefly luciferase reporter harboring the BRCA1 PTC-STOP region with or without introns (pFL-BRCA1 or its miRE mutant version pFL-BRCA1-mut, 20 ng; pFL-BRCA1in or its miRE mutant version pFL-BRCA1in-mut, 33.3 ng), pRL (10 ng), and 10 pmol of siNC or miRNA mimics. In the GW182 knockdown assay, siRNA-treated HeLa-tTA cells cultured in a 24-well plate were transfected with pFL-BRCA1 (20 ng) or pFL-BRCA1in (33.3 ng), together with 10 ng pRL. In the tissue-specific repression of miRNA-mediated surveillance and miR-29a knockdown assays, HEK293 cells, SW480 cells, and TuD-NC-or TuD-29a-overexpressing SW480 cells cultured in a 24-well plate were transfected with a vector encoding both a firefly and a Renilla luciferase reporter (20 ng) with the APC miR-29a miREs or the mutant form in the 3′ UTR of the firefly luciferase gene. In all luciferase assays, values represent means±SD from at least three independent experiments. Real-time quantitative PCR In total, 2 μg cytoplasmic RNA isolated from HEK293 cells was treated with 1 U DNase I (Fermentas, Burlington, Ontario, Canada) and was then reverse transcribed using M-MLV (TAKARA, Otsu, Shiga, Japan), according to the manufacturer's instructions. Real-time PCR was performed on a StepOnePlus real-time PCR system (Applied Biosystems, Foster City, California, USA) with Power SYBR Green PCR Master Mix (Applied Biosystems, Foster City, California, USA). The PCR mixtures were heated to 95°C for 10 min and then subjected to 40 amplification cycles (15 s at 95°C, 1 min at 60°C). Ribonucleoprotein immunoprecipitation For RIP assays of ectopically expressed PTC-APC mRNAs, HEK293 cells that stably express FLAGtagged Ago2 were plated in 60-mm plates 1 day before transfection. A total of 2 μg of APC-PTC1450 or APC-PTC1450-mut plasmid with 150 pmol of siNC or miR-29a mimic was transfected into the cells. Then 36 hr after transfection, cells were trypsinized, collected, and washed with 10 ml PBS twice. The pelleted cells were lysed in 200 μl PLB buffer (100 mM KCl, 5 mM MgCl 2 , 10 mM HEPES, 0.5% NP-40, 1 mM DTT, 100 U/ml RNase inhibitor). The cleared lysate was incubated with anti-FLAG affinity gel (Sigma-Aldrich, St. Louis, Missouri, USA) for 1 hr. Beads were washed with 1 ml NT-2 buffer (50 mM Tris, 150 mM NaCl, 1 mM MgCl 2 , 0.05% NP-40) five times. Washed beads were re-suspended in 250 μl NT-2 buffer and RNA was isolated using Trizol LS Reagent (Ambion, Austin, Texas, USA) and analyzed by qRT-PCR. For RIP assays of endogenous PTC-APC mRNAs, SW480 cells that stably express FLAG-tagged Ago2 were cultured in one 100-mm plate and were used for RIP assay after they reached 80% confluency. Bioinformatics analysis strategy SNV calling Genomic DNA was extracted from HCT-116 cells using the Genomic DNA Extraction kit (TIANGEN, Beijing, China). Exon-captured libraries were constructed by Genergy Biotechnology (Shanghai, China). Sequencing was conducted using the Illumina Hiseq2000 to produce pair-end reads. 
The exome sequencing data (24 × coverage) for HCT-116 cells have been deposited in the Short Read Archive (SRA accession number: SRX528176). Raw exome sequencing data were mapped to the human genome (hg19) using BWA with five mismatches allowed per uniquely aligned read, and duplicate reads were marked using Picard. The alignments were then processed to generate SNV calls as a standard workflow by GATK. A human SNV dataset was downloaded from the GATK resource bundle (ftp.broadinstitute.org). SNVs that lead to premature termination were designated as PTCs and were used in further analysis. Genes with a PTC/WT allele reads ratio between 0.7 and 1.3 were considered to be heterozygous.

Allele-specific expression analysis of heterozygous PTC variants

Published transcriptome sequencing data of the HCT-116 cell line (GEO accession number: GSM958749) were used to quantify the relative expression levels for the identified PTC-containing genes. Raw RNA sequencing data were mapped to heterozygous PTC loci using Tophat. Duplicate-removed alignments were processed to generate PTC calls using Samtools. Functional annotation of PTCs was performed with reference to GRCh37.70 using snpEff. For transcripts with multiple isoforms, the one with the longest PTC-STOP region was annotated for downstream analysis. PTC-containing transcripts were subdivided into EJC-NMD-sensitive and/or miRNA-mediated surveillance-sensitive groups: if a PTC site was found more than 50 nt upstream of the final exon-exon junction in a transcript, it was regarded as an EJC-NMD- and miRNA-mediated surveillance-sensitive candidate; otherwise the transcript was considered a miRNA-mediated surveillance-sensitive candidate.

miRE prediction

miRNA target site prediction was performed within the PTC-STOP region of the selected candidates. The small RNA sequencing data have been deposited in the Short Read Archive (SRA accession numbers: SRX528179 for HCT-116 cells; SRX528184 for HeLa cells; SRX528182 for HEK293 cells). The top 150 expressed miRNAs were included in the miRE prediction. miRBase v20 was used as the reference database. miRE prediction was based on the 2-7 seed match rule: if the 5′ 2-7 seed region of the miRNA formed Watson-Crick base pairs with sequences within regions at least 10 nt downstream of the PTC of the candidate mRNA without mismatches, this mRNA was considered to harbor one miRE for the selected miRNA.

Intron retention prediction

Raw RNA sequencing data of HeLa (SRA accession number: SRX528183) and HEK293 cells (SRA accession number: SRX528181) were processed through the standard Tophat and Cufflinks pipeline to generate transcriptome profiles. The isoform with the highest transcript abundance in each gene family was chosen as the constitutive isoform. Intron-retention isoforms with an intron retention level >0.05 were then found by using MATS accordingly. For all the PTC-causing intron-retention isoforms, Ago-CLIP hits (http://starbase.sysu.edu.cn/download.php) were searched for in their PTC-STOP regions. miREs were predicted based on the 2-7 seed match rule as described above.
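Two of the rules stated in this pipeline, the 0.7-1.3 allele-read ratio used to call a PTC heterozygous and the 50-nt boundary rule used to split candidates into EJC-NMD-sensitive and miRNA-surveillance-only groups, are simple enough to sketch in code. The snippet below is a schematic re-implementation written for this text; the data structure, field names and example numbers are our assumptions, not the authors' analysis scripts.

```python
# Schematic re-implementation (ours) of two filters described above:
#  * heterozygosity call from the PTC/WT allele read ratio (0.7-1.3 window), and
#  * sensitivity classification: a PTC more than 50 nt upstream of the final
#    exon-exon junction is scored as EJC-NMD-sensitive (and still a candidate for
#    miRNA-mediated surveillance); otherwise it is a miRNA-surveillance candidate only.

from dataclasses import dataclass

@dataclass
class PtcTranscript:
    name: str
    ptc_reads: int          # reads supporting the PTC allele
    wt_reads: int           # reads supporting the wild-type allele
    ptc_position: int       # transcript coordinate of the PTC (nt)
    last_junction: int      # transcript coordinate of the final exon-exon junction (nt)

def is_heterozygous(t: PtcTranscript, low=0.7, high=1.3) -> bool:
    return low <= t.ptc_reads / t.wt_reads <= high

def classify(t: PtcTranscript) -> str:
    if t.last_junction - t.ptc_position > 50:
        return "EJC-NMD + miRNA-surveillance candidate"
    return "miRNA-surveillance candidate only"

examples = [
    PtcTranscript("geneA", ptc_reads=48, wt_reads=52, ptc_position=1200, last_junction=1600),
    PtcTranscript("geneB", ptc_reads=45, wt_reads=50, ptc_position=2100, last_junction=2130),
]
for t in examples:
    if is_heterozygous(t):
        print(t.name, "->", classify(t))
```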
2017-04-20T12:34:44.479Z
2014-08-08T00:00:00.000
{ "year": 2014, "sha1": "fcc1afd8af12fe13dca7244fb359ab5826a4b985", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7554/elife.03032", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fcc1afd8af12fe13dca7244fb359ab5826a4b985", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
117746456
pes2o/s2orc
v3-fos-license
Global bifurcations close to symmetry Heteroclinic cycles involving two saddle-foci, where the saddle-foci share both invariant manifolds, occur persistently in some symmetric differential equations on the 3-dimensional sphere. We analyse the dynamics around this type of cycle in the case when trajectories near the two equilibria turn in the same direction around a 1-dimensional connection - the saddle-foci have the same chirality. When part of the symmetry is broken, the 2-dimensional invariant manifolds intersect transversely creating a heteroclinic network of Bykov cycles. We show that the proximity of symmetry creates heteroclinic tangencies that coexist with hyperbolic dynamics. There are n-pulse heteroclinic tangencies - trajectories that follow the original cycle n times around before they arrive at the other node. Each n-pulse heteroclinic tangency is accumulated by a sequence of (n+1)-pulse ones. This coexists with the suspension of horseshoes defined on an infinite set of disjoint strips, where the first return map is hyperbolic. We also show how, as the system approaches full symmetry, the suspended horseshoes are destroyed, creating regions with infinitely many attracting periodic solutions. Introduction A Bykov cycle is a heteroclinic cycle between two hyperbolic saddle-foci of different Morse index, where one of the connections is transverse and the other is structurally unstable -see Figure 1. There are two types of Bykov cycle, depending on the way the flow turns around the two saddle-foci, that determine the chirality of the cycle. Here we study the non-wandering dynamics in the neighbourhood of a Bykov cycle where the two nodes have the same chirality. This is also studied in [21], and the case of different chirality is discussed in [23]. Our starting point is a fully symmetric systemẋ = f 0 (x) with two saddle-foci that share all the invariant manifolds, of dimensions one and two, both contained in flow-invariant subspaces that come from the symmetry. This forms an attracting heteroclinic network Σ 0 with a nonempty basin of attraction V 0 . We study the global transition of the dynamics from this fully symmetric systemẋ = f 0 (x) to a perturbed systemẋ = f λ (x), for a smooth one-parameter family that breaks some of the symmetry of the system. For small perturbations the set V 0 is still positively invariant. When λ = 0, the one-dimensional connection persists, due to the remaining symmetry, and the two dimensional invariant manifolds intersect transversely, because of the symmetry breaking. This gives rise to a network Σ λ , that consists of a union of Bykov cycles, contained in V 0 . For partial symmetry-breaking perturbations of f 0 , we are interested in the dynamics in Ω(V 0 ), the maximal invariant set contained in V 0 . It contains, but does not coincide with, the suspension of horseshoes accumulating on Σ λ , as shown in [2,21,19]. Here, we show that close to the fully w v Figure 1. A Bykov cycle with nodes of the same chirality. There are two possibilities for the geometry of the flow around a Bykov cycle depending on the direction trajectories turn around the connection [v → w]. We assume here that the nodes have the same chirality: trajectories turn in the same direction around the connection. When the endpoints of a nearby trajectory are joined, the closed curve is always linked to the cycle. symmetric case it also contains infinitely many heteroclinic tangencies. 
Under an additional assumption we show that Ω(V 0 ) also contains attracting limit cycles with long periods. Symmetry plays two roles here. First, it creates flow-invariant subspaces where non-transverse heteroclinic connections are persistent, and hence Bykov cycles are robust in this context. Second, being close to a more symmetric problem organises the information and imposes some restrictions on the invariant manifolds of the saddle-foci, and this is the origin of the heteroclinic tangencies and related bifurcations. Cycles where the two nodes have the same chirality are addressed by Knobloch et al [19]. They restrict the analysis to trajectories that remain for all time inside a small tubular neighbourhood of the cycle. With this constraint they find that there are at most two heteroclinic tangencies, and that they only occur if the ratio of the real parts of the complex eigenvalues is irrational. We use the proximity of the fully symmetric case to capture more global dynamics, and we find infinitely many heteroclinic tangencies corresponding to trajectories that make an excursion away from the original cycle. In contrast, around Bykov cycles where the nodes have different chirality, heteroclinic tangencies occur generically in trajectories that remain close to the cycle for all time, as shown in [23]. Chirality is an essential information in this problem. Homoclinic and heteroclinic bifurcations constitute the core of our understanding of complicated recurrent behaviour in dynamical systems. The history goes back to Poincaré on the late 19 th century, with major subsequent contributions by the schools of Andronov, Shilnikov, Smale and Palis. These results rely on a combination of analytical and geometrical tools used to understand the qualitative behaviour of the dynamics. Heteroclinic cycles and networks are flow-invariant sets that can occur robustly in dynamical systems with symmetry, and are frequently associated with intermittent behaviour. The rigorous analysis of the dynamics associated to the structure of the nonwandering sets close to heteroclinic networks is still a challenge. We refer to [16] for an overview of heteroclinic bifurcations and for details on the dynamics near different kinds of heteroclinic cycles and networks. In the present article, we study the non-wandering dynamics in the neighbourhood of a Bykov cycle. Bykov cycles appear in many applications like the Kuramoto-Sivashinsky systems [9,24], magnetoconvection [34] and travelling waves in reaction-diffusion dynamics [5]. In generic dynamical systems heteroclinic cycles are invariant sets of codimension one, but they can be structurally stable in systems which are equivariant under the action of a symmetry group, due to the existence of flow-invariant subspaces. Explicit examples of equivariant vector fields for which such cycles may be found are reported in [3,4,18,23,25,30,33]. The transverse intersection of the two-dimensional invariant manifolds of the two nodes implies that the set of trajectories that remain for all time in a small neighbourhood of the Bykov cycle contains a locally-maximal hyperbolic set admitting a complete description in terms of symbolic dynamics, reminiscent of the results of L.P. Shilnikov [37]. An obstacle to the global symbolic description of these trajectories is the existence of tangencies that lead to the birth of stable periodic sinks, as described for the homoclinic case in [1,27,28,39]. 
In the present article, we prove that when λ → 0, the horseshoes in Ω(V 0 ) lose hyperbolicity at heteroclinic tangencies with Newhouse phenomena. The complete description is an unsolvable problem: arbitrarily small perturbations of any differential equation with a quadratic heteroclinic tangency may lead to the creation of new tangencies of higher order, and to the birth of degenerate periodic solutions [14, §4]. See also the article [17], about the existence of generic cubic homoclinic tangencies in the context of Hénon maps. All dynamical models with quasi-stochastic attractors were found, either analytically or by computer simulations, to have tangencies of invariant manifolds [1,12,13]. As a rule, the sinks in a quasi-stochastic attractor have very long periods and narrow basins of attraction, and they are hard to observe in applied problems because of the presence of noise, see [14]. The present article contributes to a better understanding of the global transition between uniform hyperbolicity (Smale horseshoes with infinitely many strips) coexisting with infinitely many sinks, and the emergence of regular dynamics. We discuss the global bifurcations that occur as a parameter λ is used to break part of the symmetry. We complete our results by reducing our problem to a symmetric version of the structure of Palis and Takens' result [29, §3] on homoclinic bifurcations. Being close to symmetry adds complexity to the dynamics. Framework of the article. This article is organised as follows. In Section 2, after some basic definitions, we describe precisely the object of study and we review some of our recent results related to it. In Section 3 we state the main results of the present article. The coordinates and other notation used in the rest of the article are presented in Section 4, where we also obtain a geometrical description of the way the flow transforms a curve of initial conditions lying across the stable manifold of an equilibrium. In Section 5, we prove that there is a sequence of parameter values λ i accumulating on 0 such that the associated flow has heteroclinic tangencies. In Section 6, we discuss the geometric constructions that describe the global dynamics near a Bykov cycle as the parameter varies. We also describe the limit set that contains nontrivial hyperbolic subsets and we explain how the horseshoes disappear as the system regains full symmetry. We show that under an additional condition this creates infinitely many attracting periodic solutions. The object of study and preliminary results In the present section, after some preliminary definitions, we state the hypotheses for the system under study together with an overview of results obtained in [21], emphasizing those that will be used to explain the loss of hyperbolicity of the suspended horseshoes and the emergence of heteroclinic tangencies near the cycle. 2.1. Definitions. Let f be a C 2 vector field on R n with flow given by the unique solution Given two equilibria p 1 and p 2 , an m-dimensional heteroclinic connection from p 1 to p 2 , denoted [p 1 → p 2 ], is an m-dimensional connected flow-invariant manifold contained in W u (p 1 )∩ W s (p 2 ). There may be more than one trajectory connecting p 1 and p 2 . Let S = {p j : j ∈ {1, . . . , k}} be a finite ordered set of equilibria. There is a heteroclinic cycle associated to S if ∀j ∈ {1, . . . , k}, W u (p j ) ∩ W s (p j+1 ) = ∅ (mod k), where W s (p) and W u (p) refer to the stable and unstable manifolds of the hyperbolic saddle p, respectively. 
A heteroclinic network is a finite connected union of heteroclinic cycles. The dimension of the unstable manifold of an equilibrium p will be called the Morse index of p. These objects are known to exist in several settings and are structurally stable within certain classes of Γ-equivariant systems, where Γ ⊂ O(n) is a compact Lie group. Here we consider differential equations ẋ = f(x) with the equivariance condition f(γ·x) = γ·f(x) for all γ ∈ Γ and all x. Given an isotropy subgroup Γ' < Γ, we write Fix(Γ') for the vector subspace of points that are fixed by the elements of Γ'. For Γ-equivariant differential equations each subspace Fix(Γ') is flow-invariant. In a three-dimensional manifold, a Bykov cycle is a heteroclinic cycle associated to two hyperbolic saddle-foci with different Morse indices, in which the one-dimensional manifolds coincide and the two-dimensional invariant manifolds have a transverse intersection. The dynamics near this kind of cycle has recently been studied in the reversible context by [9,10,19,24] and in the equivariant setting by [21]. See also [11,32]. Suppose there is a cross-section S to the flow of ẋ = f(x), such that S contains a compact invariant set Λ where the first return map is well defined and conjugate to a full shift on a countable alphabet. A suspended horseshoe is the flow-invariant set {ϕ(t, q) : t ∈ R, q ∈ Λ}.

2.2. The organising centre. The starting point of the analysis is a differential equation ẋ = f_0(x) on the unit sphere S^3 = {X = (x_1, x_2, x_3, x_4) ∈ R^4 : ||X|| = 1}, where f_0 : S^3 → TS^3 is a C^2 vector field with the following properties:

(P1) The vector field f_0 is equivariant under the action of Z_2 ⊕ Z_2 on S^3 induced by two commuting linear involutions γ_1 and γ_2 of R^4, whose fixed-point subspaces are described in (P3) and (P4) below.

(P2) The set Fix(Z_2 ⊕ Z_2) = {x ∈ S^3 : γ_1 x = γ_2 x = x} consists of two equilibria v = (0, 0, 0, 1) and w = (0, 0, 0, −1) that are hyperbolic saddle-foci, where:
• the eigenvalues of df_0(v) are −C_v ± α_v i and E_v with α_v ≠ 0, C_v > E_v > 0;
• the eigenvalues of df_0(w) are E_w ± α_w i and −C_w with α_w ≠ 0, C_w > E_w > 0.

(P3) The flow-invariant circle Fix(γ_1) = {x ∈ S^3 : γ_1 x = x} consists of the two equilibria v and w, a source and a sink within it, respectively, and two one-dimensional heteroclinic trajectories from v to w that we denote by [v → w].

(P4) The flow-invariant two-dimensional sphere Fix(γ_2) = {x ∈ S^3 : γ_2 x = x} consists of the two equilibria v and w, and a two-dimensional heteroclinic connection from w to v.

Together with the connections in (P3) this forms a heteroclinic network that we denote by Σ_0. Given two small open neighbourhoods V and W of v and w respectively, consider a piece of trajectory ϕ that starts at ∂V, goes into V and then goes once from V to W, ending at ∂W. Joining the starting point of ϕ to its end point by a line segment, one obtains a closed curve, the loop of ϕ. For almost all starting positions in ∂V, the loop of ϕ does not meet the network Σ_0. If there are arbitrarily small neighbourhoods V and W for which the loop of every trajectory is linked to Σ_0, we say that the nodes have the same chirality, as illustrated in Figure 1. This means that near v and w, all trajectories turn in the same direction around the one-dimensional connections [v → w]. This is our last hypothesis on f_0:

(P5) The saddle-foci v and w have the same chirality.

Condition (P5) means that the curve ϕ and the cycle Σ_0 cannot be separated by an isotopy. This property is persistent under small smooth perturbations of the vector field that preserve the one-dimensional connection. An explicit example of a family of differential equations where this assumption is valid has been constructed in [33]. The rigorous analysis of a case where property (P5) does not hold has been done in [23].
2.3. The heteroclinic network of the organising centre. The heteroclinic connections in the network Σ_0 are contained in fixed-point subspaces satisfying the hypothesis (H1) of Krupa and Melbourne [20]. Since the inequality C_v C_w > E_v E_w holds, the stability criterion [20] may be applied to Σ_0 and we have:

Lemma 1. Under conditions (P1)-(P4) the heteroclinic network Σ_0 is asymptotically stable.

As a consequence of Lemma 1 there exists an open neighbourhood V_0 of the network Σ_0 such that every trajectory starting in V_0 remains in it for all positive time and is forward asymptotic to the network. The neighbourhood may be taken to have its boundary transverse to the vector field f_0. The flow associated to any C^1-perturbation of f_0 that breaks the one-dimensional connection should have some attracting feature. When the symmetry Z_2(γ_1) is broken, the two one-dimensional heteroclinic connections are destroyed and the cycle Σ_0 disappears. Each cycle is replaced by a hyperbolic sink that lies close to the original cycle [21]. For sufficiently small C^1-perturbations, the existence of solutions that go several times around the cycles is ruled out. The invariant sphere Fix(γ_2) divides V_0 in two flow-invariant connected components, preventing arbitrary visits to both cycles in Σ_0. Trajectories whose initial condition lies outside the invariant subspaces will approach one of the cycles in positive time. Successive visits to both cycles require breaking this symmetry [21].

2.4. Breaking the Z_2(γ_2)-symmetry. From now on, we consider f_0 embedded in a generic one-parameter family of vector fields, breaking the γ_2-equivariance as follows:

(P6) The vector fields f_λ : S^3 → TS^3 form a C^1-family of γ_1-equivariant vector fields.

Since the equilibria v and w lie on Fix(γ_1) and are hyperbolic, they persist for small λ > 0 and still satisfy Properties (P2) and (P3). Their invariant two-dimensional manifolds generically meet transversely. The generic bifurcations from such a coincidence of invariant manifolds are discussed in [7]; under these conditions we assume:

(P7) For λ ≠ 0, the local two-dimensional manifolds W^u(w) and W^s(v) intersect transversely at two trajectories that will be denoted [w → v].

Together with the connections in (P3) this forms a Bykov heteroclinic network that we denote by Σ_λ. The network Σ_λ consists of four copies of the simplest heteroclinic cycle between two saddle-foci of different Morse indices, where one heteroclinic connection is structurally stable and the other is not: a Bykov cycle. The next result shows that Property (P7) is natural, since the heteroclinic connections of (P7), as well as those of assertion (4) of Theorem 3 below, occur at least in symmetric pairs.

Lemma 2. Let f_λ be a C^1-family of vector fields satisfying (P1)-(P3) and (P6). If the local two-dimensional manifolds W^u(w) and W^s(v) intersect at a point then their intersection contains at least two trajectories.

Indeed, if a trajectory is contained in W^u(w) ∩ W^s(v) then, since f_λ is γ_1-equivariant, its image under γ_1 is also a solution with the same property. If the two trajectories coincided, they would be contained in Fix(γ_1), which by (P3) contains only connections from v to w; hence the intersection contains at least two distinct trajectories.

For small λ ≠ 0, the neighbourhood V_0 is still positively invariant and contains the network Σ_λ. Since the closure of V_0 is compact and positively invariant it contains the ω-limit sets of all its trajectories. The union of these limit sets is a maximal invariant set in V_0. For f_0 this is the cycle Σ_0, by Lemma 1, whereas for symmetry-breaking perturbations of f_0 it contains Σ_λ but does not coincide with it. Our aim is to describe this set. When λ moves away from 0, the simple dynamics jumps to chaotic behaviour.
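For orientation, one concrete choice of the two involutions γ_1 and γ_2 that is consistent with hypotheses (P1)-(P4) is written out below; it is an illustrative assumption (the labels of the negated coordinates could be permuted), not necessarily the exact maps used in the original construction.

```latex
% One illustrative realisation of the Z_2 + Z_2 action (the coordinate labels are a choice):
\gamma_1(x_1,x_2,x_3,x_4) = (-x_1,-x_2,x_3,x_4), \qquad
\gamma_2(x_1,x_2,x_3,x_4) = (x_1,x_2,-x_3,x_4).
% Induced fixed-point subspaces on the sphere S^3:
\mathrm{Fix}(\gamma_1)\cap S^3 = \{x_1=x_2=0\} \quad\text{(a circle through } v \text{ and } w),
\qquad
\mathrm{Fix}(\gamma_2)\cap S^3 = \{x_3=0\} \quad\text{(a two-sphere containing } v \text{ and } w),
\qquad
\mathrm{Fix}(\gamma_1)\cap\mathrm{Fix}(\gamma_2)\cap S^3 = \{x_1=x_2=x_3=0\}\cap S^3 = \{v,w\}.
```

With this choice the circle carries the two one-dimensional connections of (P3), the sphere carries the two-dimensional connection of (P4), and their intersection reduces to the two equilibria, as required by (P2).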
A systematic study of the dynamics in a neighbourhood of the Bykov cycles in Σ λ , under the condition (P5), was carried out in [2,21,22]; we proceed to review these local results and then we discuss global aspects of the dynamics. In order to do this we introduce some concepts. Given a Bykov cycle Γ involving v and w, let V, W ⊂ V 0 be disjoint neighbourhoods of these points as above. Consider two local cross-sections of Σ λ at two points p and q in the connections [v → w] and [w → v], respectively, with p, q ∈ V ∪ W . Saturating the cross-sections by the flow, one obtains two flow-invariant tubes joining V and W that contain the connections in their interior. We call the union of these tubes with V and W a tubular neighbourhood T of the Bykov cycle. More details will be provided in Section 4. Given two disjoint neighbourhoods V and W ⊂ V 0 of v and w, respectively, a one-dimensional connection [w → v] that, after leaving W , enters and leaves both V and W precisely n ∈ N times is called an n-pulse heteroclinic connection with respect to V and W . When there is no ambiguity, we omit the expression "with respect to V and W ". If n ≥ 1 we call it a multi-pulse heteroclinic connection. If W u (w) and W s (v) meet tangentially, we say that the connection [w → v] is an n-pulse heteroclinic tangency, otherwise we call it a transverse npulse heteroclinic connection. The original heteroclinic connections [w → v] in Σ λ are 0-pulse heteroclinic connections. With these conventions we have: (2) the only heteroclinic connections from v to w are those in the Bykov cycles and there are no homoclinic connections; (3) any tubular neighbourhood T of a Bykov cycle Γ in Σ * contains points not lying on Γ whose trajectories remain in T for all time; (4) any tubular neighbourhood of a Bykov cycle Γ in Σ * contains infinitely many n-pulse heteroclinic connections [w → v] for each n ∈ N, that accumulate on the cycle; (5) for any tubular neighbourhood T , given a cross-section there exist sets of points such that the dynamics of the first return to S q is uniformly hyperbolic and conjugate to a full shift over a finite number of symbols. These sets accumulate on Σ * and the number of symbols coding the return map tends to infinity as we approach the network. Notice that assertion (4) of Theorem 3 implies the existence of a bigger network: beyond the original transverse connections [w → v], there exist infinitely many subsidiary heteroclinic connections turning around the original Bykov cycle. Hereafter, we will restrict our study to one heteroclinic cycle. The results of L.P. Shilnikov on a homoclinic cycle to a saddle-focus [36,37,38] are well known. Under a specific eigenvalue condition the cycle gives rise to an invariant set where the first return map is conjugate to a full shift over a finite alphabet. In contrast to these findings, in Theorem 3, the suspended horseshoes arise due to the presence of two saddle-foci together with transversality of invariant manifolds, and does not depend on any additional condition on the eigenvalues at the nodes. A hyperbolic invariant set of a C 2 -diffeomorphism has zero Lebesgue measure [6]. Nevertheless, since the authors of [21] worked in the C 1 category, this set of horseshoes might have positive Lebesgue measure. Rodrigues [31] proved that this is not the case: ). Let T be a tubular neighbourhood of one of the Bykov cycles Γ of Theorem 3. 
Then in any cross-section S q ⊂ T the set of initial conditions in S q that do not leave T for all time, has zero Lebesgue measure. The shift dynamics does not trap most solutions in the neighbourhood of the cycle. In particular, none of the cycles is Lyapunov stable. Statement of results Heteroclinic cycles connecting saddle-foci with a transverse intersection of two-dimensional invariant manifolds imply the existence of hyperbolic suspended horseshoes. In our setting, when λ varies close to zero, we expect the creation and the annihilation of these horseshoes. When the symmetry Z 2 ( γ 2 ) is broken, heteroclinic tangencies are reported in the next result. Although a tangency may be removed by a small smooth perturbation, the presence of tangencies is persistent. Theorem 5. In the set of families f λ of vector fields satisfying (P1)-(P6) there is a subset C, open in the C 2 topology, for which there is a sequence λ i > 0 of real numbers, with lim i→∞ λ i = 0 such that for λ > λ i , there are two 1-pulse heteroclinic connections for the flow ofẋ = f λ (x), that collapse into a 1-pulse heteroclinic tangency at λ = λ i and then disappear for λ < λ i . Moreover, the 1-pulse heteroclinic tangency approaches the original [w → v] connection when λ i tends to zero. The explicit description of the open set C is given in Section 5, after establishing some notation for the proof. Theorem 6. For a family f λ in the open set C of Theorem 5, and for each parameter value λ i corresponding to a 1-pulse heteroclinic tangency, there is a sequence of parameter values λ ij accumulating at λ i for which there is a 2-pulse heteroclinic tangency. This property is recursive in the sense that each n-pulse heteroclinic tangency is accumulated by (n + 1)-pulse heteroclinic tangencies for nearby parameter values. Due to the negative divergence at both saddle-foci, heteroclinic tangencies give rise to attracting periodic solutions of large periods and small basins of attraction, appearing in large numbers, possibly infinite. For λ ≈ 0, return maps to appropriate domains close to the tangency are conjugate to Hénon-like maps [8,26]. As λ → 0, in V 0 , infinitely many wild attractors coexist with suspended horseshoes that are being destroyed. For a family f λ in the open set C of Theorem 5, there is a sequence of closed intervals ∆ n = [c n , d n ], with 0 < d n+1 , c n < d n and lim n→∞ d n = 0, such that as λ decreases in ∆ n , a suspended horseshoe is destroyed. A similar result has been formulated by Newhouse [27] and Yorke and Alligood [39] for the case of two dimensional diffeomorphisms in the context of homoclinic bifurcations with no references to the equivariance. A more precise formulation of the result is given in Section 6. Applying the results of [39,29] to this family, we obtain: With an additional hypothesis we get: For a family f λ in the open set C of Theorem 5, if the first return to a transverse section is area-contracting, then for parameters λ in an open subset of ∆ n with sufficiently large n, infinitely many attracting periodic solutions coexist. In Section 6 we also describe a setting where the additional hypothesis holds. When λ decreases, the Cantor set of points of the horseshoes that remain near the cycle is losing topological entropy, as the set loses hyperbolicity, a phenomenon similar to that described in [13]. Local geometry and transition maps We analyse the dynamics near the network by deriving local maps that approximate the dynamics near and between the two nodes in the network. 
In this section we establish the notation that will be used in the rest of the article and the expressions for the local maps. We start with appropriate coordinates near the two saddle-foci. Local coordinates. In order to describe the dynamics around the Bykov cycles of Σ λ , we introduce local coordinates near the equilibria v and w. By Samovol's Theorem [35], the vector field f λ is C 1 -conjugate to its linear part around each saddle-focus. Without loss of generality we assume that α v = α w = 1. In cylindrical coordinates (ρ, θ, z) the linearisation at v is given by: and around w takes the form:ρ = E w ρθ = 1ż = −C w z. In these coordinates, we consider cylindrical neighbourhoods V and W in S 3 of v and w, respectively, of radius ρ = ε > 0 and height z = 2ε -see Figure 2. After a linear rescaling of the variables, we may also assume that ε = 1. Their boundaries consist of three components: the cylinder wall parametrised by x ∈ R (mod 2π) and |y| ≤ 1 with the usual cover (x, y) → (1, x, y) = (ρ, θ, z) and two discs, the top and bottom of the cylinder. We take polar coverings of these disks where 0 ≤ r ≤ 1 and ϕ ∈ R (mod 2π). The local stable manifold of v, W s (v), corresponds to the circle parametrised by y = 0. In V we use the following terminology suggested in Figure 2: • In(v), the cylinder wall of V , consisting of points that go inside V in positive time; • Out(v), the top and bottom of V , consisting of points that go outside V in positive time. We denote by In + (v) the upper part of the cylinder, parametrised by (x, y), y ∈ [0, 1] and by In − (v) its lower part. The cross sections obtained for the linearisation around w are dual to these. The set W s (w) is the z-axis intersecting the top and bottom of the cylinder W at the origin of its coordinates. The set W u (w) is parametrised by z = 0, and we use: • In(w), the top and bottom of W , consisting of points that go inside W in positive time; • Out(w), the cylinder wall of W , consisting of points that go inside W in negative time, with Out + (w) denoting its upper part, parametrised by (x, y), y ∈ [0, 1] and Out − (w) its lower part. We will denote by W u loc (w) the portion of W u (w) that goes from w up to In(v) not intersecting the interior of V and by W s loc (v) the portion of W s (v) outside W that goes directly from Out(w) into v. The flow is transverse to these cross sections and the boundaries of V and of W may be written as the closure of In(v) ∪ Out(v) and In(w) ∪ Out(w), respectively. 4.2. Transition maps near the saddle-foci. The trajectory of a point (x, y) with y > 0 in Similarly, a point (r, φ) in In(w)\W s (w), leaves W at Out(w) at Transition map along the connection are mapped into In(w) in a flow-box along the each one of the connections [v → w]. Without loss of generality, we will assume that the transition Ψ v→w : Out(v) → In(w) does not depend on λ and is modelled by the identity, which is compatible with hypothesis (P5). Using a more general form for Ψ v→w would complicate the calculations without any change in the final results. The coordinates on V and W are chosen to have [v → w] connecting points with z > 0 in V to points with z > 0 in W . We will denote by η the map η = Φ w • Ψ v→w • Φ v . 
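For reference, here is a sketch of the local and composite transition maps in the coordinates just introduced. It is obtained by integrating the linearisations near the nodes (near v: ρ̇ = −C_v ρ, θ̇ = 1, ż = E_v z; near w: ρ̇ = E_w ρ, θ̇ = 1, ż = −C_w z) with Ψ_{v→w} taken to be the identity; it is consistent with the constants K and δ used in the sequel, although the sign convention for the angular drift may differ from the original expressions (4.1)-(4.3).

```latex
% Local map past v, from In(v) (coordinates (x,y), y>0) to Out(v) (coordinates (r,\varphi)):
\Phi_v(x,y) = \Bigl(y^{\delta_v},\; x - \tfrac{1}{E_v}\ln y\Bigr),
\qquad \delta_v = \frac{C_v}{E_v} > 1 .
% Local map past w, from In(w) (coordinates (r,\varphi), r>0) to Out(w) (coordinates (x,y)):
\Phi_w(r,\varphi) = \Bigl(\varphi - \tfrac{1}{E_w}\ln r,\; r^{\delta_w}\Bigr),
\qquad \delta_w = \frac{C_w}{E_w} > 1 .
% Composite map \eta = \Phi_w \circ \Psi_{v\to w} \circ \Phi_v with \Psi_{v\to w} = \mathrm{Id}:
\eta(x,y) = \Bigl(x - K\ln y \ (\mathrm{mod}\ 2\pi),\; y^{\delta}\Bigr),
\qquad \delta = \delta_v\,\delta_w = \frac{C_v C_w}{E_v E_w} > 1,
\qquad K = \frac{1}{E_v} + \frac{C_v}{E_v E_w} > 0 .
```

In particular, a curve of maximum height M in In(v) is carried by η to a helix of maximum height M^δ in Out(w), which is the behaviour used in Lemma 10 and Proposition 11 below.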
From (4.1) and (4.2), its expression in local coordinates, for y > 0, is (1) if the curve lies in In(v) then it is mapped by η = Φ w • Ψ v→w • Φ v into a helix on Out(w) accumulating on the circle Out(w) ∩ W u loc (w), its maximum height is M δ and it has a fold point at η(x * , h(x * )) for some x * ∈ (a, x M ); (2) if the curve lies in Out(w) then it is mapped by η −1 into a helix on In(v) accumulating on the circle In(v) ∩ W s loc (v), its maximum height is M 1/δ and it has a fold point at Proof. The graph of h defines a curve on In(v) without self-intersections. Since η is the transition map of a differential equation, hence a diffeomorphism, this curve is mapped by η into a curve H(x) = η (x, h(x)) = (H θ (x), H y (x)) in Out(w) without self-intersections. Using the expression (4.3) for η, we get From this expression it is immediate that ∈ (a, b), so the curve lies below the level y = M δ in Out(w). θ (x) changes sign, this is a fold point. If x * is the minimum value of x for which this happens, then locally the helix lies to the right of this point. This proves assertion (1). The proof of assertion (2) is similar, using the expression In this case we get lim x→a + H θ (x) = lim x→b − H θ (x) = −∞ and if x * is the largest value of x for which the helix has a fold point, then locally the helix lies to the left of η −1 (x * , h(x * )). Geometry of the invariant manifolds. There is also a well defined transition map Ψ λ w→v : Out(w) −→ In(v) that depends on the Z 2 γ 2 -symmetry breaking parameter λ, where Ψ 0 w→v is the identity map. We will denote by R λ the map Ψ λ w→v •η, where well defined. When there is no risk of ambiguity, we omit the subscript λ. In this section we investigate the effect of Ψ λ w→v on the two-dimensional invariant manifolds of v and w for λ = 0, under the assumption (P7). For this, let f λ be an unfolding of f 0 satisfying (P1)-(P6) and lying in the C 2 -open set C 1 of unfoldings that satisfy (P7). For λ = 0, we introduce the notation, see Figure 4: • (P 1 w , 0) and (P 2 w , 0) with 0 < P 1 w < P 2 w < 2π are the coordinates of the two points where the connections [w → v] of Property (P7) meet Out(w); • (P 1 v , 0) and (P 2 v , 0) with 0 < P 1 v < P 2 v < 2π are the coordinates of the two points where [w → v] meets In(v); • (P j w , 0) and (P j v , 0) are on the same trajectory for each j = 1, 2. By (P7), the manifolds W u loc (w) and W s loc (v) intersect transversely for λ = 0. For λ close to zero, we are assuming that W s loc (v) intersects the wall Out(w) of the cylinder W on a closed curve as in Figure 4. It corresponds to the expected unfolding from the coincidence of the manifolds W s (v) and W u (w) at f 0 . Similarly, W u loc (w) intersects the wall In(v) of the cylinder V on a closed curve. For small λ > 0, these curves can be seen as graphs of smooth 2π-periodic functions, for which we make the following conventions: The two points (P 1 w , 0) and (P 2 w , 0) divide the closed curve y = h v (x, λ) in two components, corresponding to different signs of the second coordinate. With the conventions above, we get h v (x, λ) > 0 for x ∈ P 1 w , P 2 w . Then the region W − in Out(w) delimited by W s loc (v) and W u loc (w) between P 1 w and P 2 w gets mapped by Ψ λ w→v into In − (v), while all other points in Out + (w) are mapped into In + (v). We denote by W + the latter set, of points in Out(w) with 0 < y < 1 and y > h w (x, λ) for x ∈ P 1 w , P 2 w . The maximum value of h v (x, λ) is attained at some point With this notation, we have: Proposition 11. 
Let f λ be family of vector fields satisfying (P1)-(P7). For λ = 0 sufficiently small, the portion of W u loc (w) ∩ In(v) that lies in In + (v) is mapped by η into a helix in Out(w) accumulating on W u loc (w). If M w (λ) is the maximum height of W u loc (w) ∩ In + (v), then the maximum height of the helix is M w (λ) δ . For each λ > 0 there is a fold point in the helix that, as λ tends to zero, turns around the cylinder Out(w) infinitely many times. Proof. That η maps W u loc (w)∩In + (v) into a helix, and the statement about its maximum height follow directly by applying assertion (1) of Lemma 10 to h w . For the fold point in the helix, let x * (λ) be its first coordinate. From the expression (4.3) of η it follows that Since f λ unfolds f 0 , then lim λ→0 M v (λ) = 0, hence lim λ→0 −K ln h w (x λ , λ) = ∞ and therefore, the fold point turns around the cylinder Out(w) infinitely many times. Heteroclinic Tangencies Using the notation and results of Section 4 we can now discuss the tangencies of the invariant manifolds and prove Theorems 5 and 6. As remarked in Section 4.5, since f λ unfolds f 0 , then the maximum heights, We make the additional assumption that (M w (λ)) δ tends to zero faster than M v (λ). This condition defines the open set C ⊂ C 1 of unfoldings f λ that we need for the statement of Theorem 5. More precisely, let C be the set of unfoldings of f 0 that satisfy (P7) and for which there is a value λ * > 0 such that for 0 < λ < λ * we have (M w (λ)) δ < M v (λ). Then C is open in the C 2 topology. Proof of Theorem 5. Suppose f λ ∈ C. By Proposition 11, the curve is a helix in Out + (w) and has at least one fold point at x = x * (λ). The second coordinate of the helix satisfies 0 < α λ 2 (x) < M w (λ) δ for all x ∈ P 2 v , P 1 v (mod 2π) and all positive λ < λ * . Since f λ ∈ C, then α λ 2 (x) < M v (λ) for all x and all positive λ < λ * . Moreover, since the fold point α(x * (λ)) turns around Out(w) infinitely many times as λ goes to zero, given any λ 0 < λ * there exists a positive value λ R < λ 0 such that α(x * (λ R )) lies in W + , the region in Out(w) between W s loc (v) and W u loc (w) that gets mapped into the upper part of In(v). Since the second coordinate of the fold point is less than the maximum of h v , there is a positive value λ L < λ R such that α(x * (λ L )) lies in W − , the region in Out(w) that gets mapped into the lower part of In(v), whose boundary contains the graph of h v . Therefore, the curve α(x * (λ)) is tangent to the graph of h v (x, λ) at some point α(x * (λ 1 )) with λ 1 ∈ (λ L , λ R ). We have thus shown that given λ 0 > 0, there is some positive λ 1 < λ 0 , for which the image of the curve W u loc (w) ∩ In(v) by η is tangent to W s loc (v) ∩ Out(w), creating a 1-pulse heteroclinic tangency. Two transverse 1-pulse heteroclinic connections exist for λ > λ 1 close to λ 1 . These connections come together at the tangency and disappear. As λ goes to zero, the fold point α(x * (λ)) turns around the cylinder Out(w) infinitely many times, thus going in and out of W − . Each times it crosses the boundary, a new tangency occurs. Repeating the argument above yields the sequence λ i of parameter values for which there is a 1-pulse heteroclinic tangency and this completes the proof of the main statement of Theorem 5. On the other hand, as λ goes to zero, the maximum height of the helix, (M w (λ)) δ also tends to zero. 
This implies that the second coordinate of the points α(x * (λ i )) ∈ Out(w) where there is a 1-pulse heteroclinic tangency tends to zero as i goes to infinity. This shows that the tangency approaches the two-dimensional [w → v] connection that exists for λ = 0. The construction in the proof of Theorem 5 may be extended to obtain multipulse tangencies, as follows: Proof of Theorem 6. Look at W u loc (w)∩In(v), the graph of h w (x, λ). where the map h w is monotonically decreasing. Therefore, we may define infinitely many intervals where α λ (x) = η (x, h w (x, λ)) lies in W + . More precisely, we have two sequences, (a j ) and (b j ) in [x, P 1 v ] such that: • the curves α ([a j , b j ]) accumulate uniformly on W u loc (w) ∩ Out(w) as j → ∞. Hence, each one of the curves α ([a j , b j ]) is mapped by Ψ w→v into the graph of a function h in In(v) satisfying the conditions of Lemma 10, and hence each one of these curves is mapped by η into a helix ξ j (x), x ∈ (a j , b j ). ξ (x) j Figure 6. Curves in the proof of Theorem 6: for λ = λ i the curve α(x) = η(W u loc (w) ∩ In(v)) (solid black curve) is tangent to W s loc (v) (gray curve) in Out(w). The curves ξ j (x) (dotted) accumulate on α(x). Very small changes in λ make them tangent to W s loc (v) creating 2-pulse heteroclinic tangencies. Let λ i be a parameter value for whichẋ = f λ i (x) has a 1-pulse heteroclinic tangency as stated in Theorem 5. As j → +∞, the helices ξ j (x) accumulate on the helix of Theorem 5 as drawn in Figure 6, hence the fold point of ξ j (x) is arbitrarily close to the fold point of η (x, h w (x, λ)). The arguments in the proof of Theorem 5 show that a small change in the parameter λ makes the new helix tangent to W s loc (v) as in Figure 5. For each j this creates a 2-pulse heteroclinic tangency at λ = λ ij . Since the the helices ξ j (x) accumulate on α λ i (x), it follows that lim j∈N λ ij = λ i . Finally, the argument may be applied recursively to show that each n-pulse heteroclinic tangency is accumulated by (n+1)-pulse heteroclinic tangencies for nearby parameter values. If λ ⋆ ∈ R is such that the flow ofẋ = f λ ⋆ (x) has a heteroclinic tangency, when λ varies near λ ⋆ , we find the creation and the destruction of horseshoes that is accompanied by Newhouse phenomena [13,39]. In terms of numerics, we know very little about the geometry of these attractors, we also do not know the size and the shape of their basins of attraction. Bifurcating dynamics We discuss here the geometric constructions that determine the global dynamics near a Bykov cycle, in order to prove Theorem 7. For this we need some preliminary definitions and more information on the geometry of the transition maps. First, we adapt the definition of horizontal strip in [15] to serve our purposes: for τ > 0 sufficiently small, in the local coordinates of the walls of the cylinders V and W , consider the rectangles: with the conventions −π < P 2 v − τ < P 1 v + τ ≤ π and −π < P 2 w − τ < P 1 w + τ ≤ π. A horizontal strip in S v is a subset of In + (v) of the form The graphs of the u j are called the horizontal boundaries of H and the segments P 2 v − τ, y , u 1 (P 2 v − τ ) ≤ y ≤ u 2 (P 2 v − τ ) and P 1 v + τ, y , with u 1 (P 1 v + τ ) ≤ y ≤ u 2 (P 1 v + τ ) are its vertical boundaries. Horizontal and vertical boundaries intersect at four vertices. The maximum height and the minimum height of H are, respectively Analogously we may define a horizontal strip in S w ⊂ Out + (w). 6.1. Horseshoe strips in S v . 
We are interested in the dynamics of points whose trajectories start in S v and return to In + (v) arriving at S v . Lemma 12. The set η −1 (S w ) ∩ S v has infinitely many connected components all of which are horizontal strips in S v accumulating on W s loc (v) ∩ In(v), except maybe for a finite number. The horizontal boundaries of these strips are graphs of monotonically increasing functions of x. Proof. The boundary of S w consists of the following: (1) a piece of W u loc (w) ∩ Out(w) parametrised by y = 0, x ∈ [P 1 w − τ, P 2 w + τ ], where η −1 is not defined; (2) the horizontal segment (x, 1) with x ∈ [P 2 w − τ, P 1 w + τ ]; (3) two vertical segments P 1 w − τ, y and P 2 w + τ, y with y ∈ (0, 1). Together, the components (2) and (3) form a continuous curve that, by the arguments of Lemma 10 (2), is mapped by η into a helix on In + (v), accumulating on W s loc (v) ∩ In(v). As the helix approaches W s loc (v), it crosses the vertical boundaries of S v infinitely many times. The interior of S w is mapped into the space between consecutive crossings, intersecting S v in horizontal strips, as shown in Figure 7. From the expression (4.5) of η −1 it also follows that the vertical boundaries of S w are mapped into graphs of monotonically increasing functions of x. Denote by H n the strip that attains its maximum height h n at the vertex P 1 v + τ, h n with (6.6) h n = e (P 1 v −P 2 w +2τ −2nπ)/K , then lim n→∞ h n = 0, hence the strips H n accumulate on W s loc (v) ∩ In(v). The minimum height of H n is given by and is attained at the vertex P 2 v − τ, m n . Moreover n < m implies that H n lies above H m . Lemma 13. Let H n be one of the horizontal strips in η −1 (S w ) ∩ S v . Then η(H n ) is a horizontal strip in S w . The strips η(H n ) accumulate on W u loc (w) ∩ In(v) as n → ∞ and the maximum height of η(H n ) is h δ n , where h n is the maximum height of H n . Proof. The boundary of S v consists of a piece of W s loc (v) plus a curve formed by three segments, two of which are vertical and a horizontal one. From the arguments of Lemma 10 (1), it follows that the part of the boundary of S v not contained in W s loc (v) is mapped by η into a helix. Consider now the effect of η on the boundary of H n . Each horizontal boundary gets mapped into a piece of one of the vertical boundaries of S w . The vertical boundaries of H n are contained in those of S v and hence are mapped into two pieces of a helix, that will form the horizontal boundaries of the strip η(H n ), that may be written as graphs of decreasing functions of x. A shown after Lemma 12, the maximum height of H n tends to zero, hence the strips η(H n ) have the same property. The maximum height of η(H n ) is h δ n , attained at the point P 2 w − τ, h δ n . Lemma 14. For each λ > 0 sufficiently small, there exists n 0 (λ) such that for all n ≥ n 0 the image R λ (H n ) of the horizontal strips in η −1 (S w ) ∩ S v intersects S v in a horseshoe strip. The strips R λ (H n ) accumulate on W u loc (w) ∩ In(v) and, when n → ∞, their maximum height tends to M w (λ), the maximum height of W u loc (w) ∩ In(v). Proof. The curve W s loc (v) ∩ Out(w) is the graph of the function h v (x, λ) that is positive for x outside the interval [P 2 w , P 1 w ] (mod 2π). In particular, h v (P 2 w − τ, λ) > 0 and h v (P 2 w + τ, λ) > 0 for small τ > 0. 
Therefore there is a piece of the vertical boundary of S w that lies below W s loc (v) ∩ Out(w), consisting of the two segments (P 2 w − τ, y) with 0 < y < h v (P 2 w − τ, λ) and (P 1 w + τ, y) with 0 < y < h v (P 1 w + τ, λ). For small λ > 0, these segments are mapped by Ψ λ w→v inside In − (v). Let n 0 be such that the maximum height of η(H n 0 ) is less than the minimum of h v (P 2 w −τ, λ) > 0 and h v (P 2 w + τ, λ) > 0. Then for any n ≥ n 0 the vertical sides of η(H n ) are mapped by Ψ λ w→v inside In − (v). The horizontal boundaries of η(H n ) go across S w , so writing them them as graphs of u 1 (x) < u 2 (x), there is an interval where the second coordinate of Ψ λ w→v (x, u j (x)) is more than M w (λ) > 0, the maximum height of W u loc (w) ∩ In + (v). Since the second coordinate of Ψ λ w→v (x, u j (x)) changes sign twice, then it equals zero at two points, hence R λ (H n ) ∩ S v is a horseshoe strip. We have shown that the maximum height of η(H n ) tends to zero as n → ∞, hence the maximum height of R λ (H n ) = Ψ λ w→v (η(H n )) tends to M w (λ). 6.2. Regular Intersections of Strips. We now discuss the global dynamics near the Bykov cycle. The structure of the non-wandering set near the network depends on the geometric properties of the intersection of H n and R λ (H n ). Let A be a horseshoe strip and B be a horizontal strip in S v . We say that A and B intersect regularly if A ∩ B = ∅ and each one of the horizontal boundaries of A goes across each one of the horizontal boundaries of B. Intersections that are neither empty nor regular, will be called irregular. If the horseshoe strip A and the horizontal strip B intersect regularly, then A ∩ B has at least two connected components, see Figure 8. In this and the next subsection, we will find that the horizontal strips H n across S v may intersect R λ (H m ) in the three ways: empty, regular and irregular, but there is an ordering for the type of intersection, as shown in Figure 9. Lemma 15. For any given fixed λ > 0 sufficiently small, there exists N (λ) ∈ N such that for all m, n > N (λ), the horseshoe strips R λ (H m ) in S v intersect each one of the horizontal strips H n regularly. Proof. From Lemma 14 we obtain n 0 (λ) such that all R λ (H m ) with m ≥ n 0 (λ) are horseshoe strips and their lower horizontal boundary has maximum height bigger than the maximum height M w (λ) of W u loc (w) ∩ In(v). On the other hand, since the strips H n accumulate uniformly on W s loc (v) ∩ In(v), with their maximum height h n tending to zero, then there exists n 1 (λ) such that h n < M w (λ) for all n ≥ n 1 (λ). Note that the h n do not depend on λ. Therefore, for m, n > N (λ) = max{n 0 (λ), n 1 (λ)} both horizontal boundaries of R λ (H m ) go across the two horizontal boundaries of H n . The constructions of this section also hold for the backwards return map R −1 with analogues to Lemmas 12,13,14 and 15. Generically, for n > N (λ) each horizontal strip H n intersects R λ (H n ) in two connected components. Thus the dynamics of points whose trajectories always return to In(v) in H n may be coded by a full shift on two symbols, that describe which component is visited by the trajectory on each return to H n . Similarly, trajectories that return to S v inside H n ∪ · · · ∪ H n+k may be coded by a full shift on 2k symbols. As k → ∞, the strips H n+k approach W s loc (v) ∩ In(v) and the number of symbols tends to infinity. We have recovered the horseshoe dynamics described in assertion (5) of Theorem 3. 
The regular intersection of Lemma 15 implies the existence of an R λ -invariant subset in η −1 (S w ) ∩ S v , the Cantor set of initial conditions: where the return map to η −1 (S w )∩S v is well defined in forward and backward time, for arbitrarily large times. We have shown here that the map R λ restricted to this set is semi-conjugate to a full shift over a countable alphabet. Results of [2,21] show that the first return map is hyperbolic in each horizontal strip, implying the full conjugacy to a shift. The set Λ depends strongly on the parameter λ, in the next subsection we discuss its bifurcations when λ decreases to zero. 6.3. Irregular intersections of strips. The horizontal strips H n that comprise η −1 (S w ) ∩ S v do not depend on the bifurcation parameter λ, as shown in Lemmas 12 and 13. This is in contrast with the strong dependence on λ shown by the first return of these points to S v at the horseshoe strips R λ (H n ). In particular, the values of n 0 (λ) ( Lemma 14) and N (λ) (Lemma 15) vary with the choice of λ. For a small fixed λ > 0 and for m, n ≥ N (λ) we have shown that H n and R λ (H m ) intersect regularly. The next result describes the bifurcations of these sets when λ decreases. These global bifurcations have been described by Palis and Takens in [29] in a different context, where the horseshoe strips are translated down as a parameter varies. In our case, when λ goes to zero the horseshoe strips are flattened into the common invariant manifold of v and w. of λ in [c, d] for which the map R λ has a homoclinic tangency associated to a periodic point see [29,39]. The rigorous formulation of Theorem 7 consists of Proposition 16 , with ∆ 1 = [c, d] as in the remarks above. Since Proposition 16 holds for any λ 3 > 0, it may be applied again with λ 3 replaced by λ 1 , and the argument may be repeated recursively to obtain a sequence of disjoint intervals ∆ n . Proof of Corollary 8. The λ dependence of the position of the horseshoe strips is not a translation in our case, but after Proposition 16 the constructions of Yorke and Alligood [39] and of Palis and Takens [29] can be carried over. Hence, when the parameter λ varies between two consecutive regular intersections of strips, the bifurcations of the first return map can be described as the one-dimensional parabola-type map in [29]. Following [39,29], the bifurcations for λ ∈ [λ 1 , λ 3 ] are: • for λ > d: the restriction of the map R λ 3 to the non-wandering set on H a is conjugate to the Bernoulli shift of two symbols and it no longer bifurcates as λ increases. • at λ ∈ (c, d): a fixed point with multiplier equal to ±1 appears at a tangency of the horizontal boundaries. It undergoes a period-doubling bifurcation. A cascade of perioddoubling bifurcations leads to chaotic dynamics which alternates with stability windows and the bifurcations stop at λ = d; • at λ < c: trajectories with initial conditions in H a approach the network and might be attracted to another basic set. Proof of Corollary 9. If the first return map R λ is area-contracting, then the fixed point that appears for λ ∈ (c, d) is attracting for the parameter in an open interval. This attracting fixed point bifurcates to a sink of period 2 at a bifurcation parameter λ > c close to c. This stable orbit undergoes a second flip bifurcation, yielding an orbit of period 4. This process continues to an accumulation point in parameter space at which attracting orbits of period 2 k exist, for all k ∈ N. This completes the proof of Corollary 9 . 
Finally, we describe a setting in which the fixed point that appears for λ ∈ (c, d) can be shown to be attracting. Recall that the set W^u_loc(w) ∩ In(v) is the graph of y = h_w(x, λ). Suppose the transition map Ψ^λ_{w→v} : Out(w) → In(v) is given by Ψ^λ_{w→v}(x, y) = (x, y + h_w(x, λ)). Since det DΨ^λ_{w→v} ≡ 1, we then get det DR_λ(x, y) = det DΨ^λ_{w→v}(η(x, y)) · det Dη(x, y) = det Dη(x, y) = δ y^{δ−1}. For sufficiently small y (in H_n with sufficiently large n) this is less than 1, and hence R_λ is contracting. However, if the first coordinate of Ψ^λ_{w→v}(x, y) depends on y, then det DΨ^λ_{w→v}(η(x, y)) will contain terms that depend on x + K ln y and a more careful analysis will be required.
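To make the area-contraction condition explicit, the determinant computation can be written in matrix form; the sketch below uses the reconstructed expression for η from Section 4 above together with the shear form assumed for Ψ^λ_{w→v}, so it should be read as an illustration of the estimate rather than a verbatim quotation.

```latex
D\Psi^{\lambda}_{w\to v}(x,y) =
\begin{pmatrix} 1 & 0 \\ \partial_x h_w(x,\lambda) & 1 \end{pmatrix},
\qquad \det D\Psi^{\lambda}_{w\to v} \equiv 1,
\qquad
D\eta(x,y) =
\begin{pmatrix} 1 & -K/y \\ 0 & \delta\, y^{\delta-1} \end{pmatrix},
\qquad \det D\eta(x,y) = \delta\, y^{\delta-1},
```
```latex
\det DR_{\lambda}(x,y) = 1 \cdot \delta\, y^{\delta-1} \;<\; 1
\quad\text{whenever } 0 < y < \delta^{-1/(\delta-1)},
```

so the first return map is area-contracting on the strips H_n with n large enough that their maximum height lies below this threshold.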
2016-03-04T18:05:38.000Z
2015-04-07T00:00:00.000
{ "year": 2015, "sha1": "f092467aad21bee3125f1db09c5661ff2ef9460a", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.jmaa.2016.06.032", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "f092467aad21bee3125f1db09c5661ff2ef9460a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
250105899
pes2o/s2orc
v3-fos-license
Immunoregulatory Therapy Improves Reproductive Outcomes in Elevated Th1/Th2 Women with Embryo Transfer Failure Objective Immunological disturbance is one of the crucial factors of implantation failure. Limited data exists evaluating immunoregulatory therapy in patients with implantation failures. Methods This is a retrospective cohort study on patients who had failed embryo transfer cycle and had elevated Th1/Th2 cytokine ratios between 1/2019 and 3/2020. Patients were assigned into two groups based on whether they received immunoregulatory treatment during a frozen transfer cycle. The primary outcome was live birth rate. Secondary outcomes included clinical pregnancy, implantation rate, and neonatal outcomes. Results Of 71 patients enrolled, 41 patients received immunoregulatory therapy and 30 patients did not. Compared to untreated patients, rate of live birth was significantly elevated in the treated group (41.5% vs. 16.7%, P = 0.026). Rate of biochemical pregnancy, implantation, clinical pregnancy, and ongoing pregnancy between two groups were 56.1% vs. 40% (P = 0.18), 36.5% vs. 23.9% (P = 0.15), 51.2% vs. 30% (P = 0.074), and 41.5% vs. 16.7% (P = 0.03), respectively. Although there was no statistical significance, women receiving treatment also had a tendency of lower frequency of pregnancy loss (19.0% vs. 44.4%, P = 0.20). No adverse events were found between newborns of the two groups. Immunoregulatory therapy, age, infertility type, ovulation induction protocol, number of oocytes retrieved, artificial cycle embryo transfer, and cleavage transfer were associated with live birth in univariate analysis (all P < 0.05). Only immunoregulatory therapy was associated with live birth after adjustment of confounders (OR = 5.02, 95% CI: 1.02-24.8, P = 0.048). Conclusions Immunoregulatory therapy improves reproductive outcomes in elevated Th1/Th2 cytokine ratio women with embryo transfer failure. Introduction Incidence of implantation failures varies from 8 to 33% in the general population [1]. When conducting IVF/ET, pregnancy is established when an embryo, which is a semiallograft, is successfully implanted to the maternal decidua with an establishment of maternal immune tolerance [2]. The disturbance of Th1 and Th2 cytokines may result in implantation failure. Elevated levels of Th1 cells are associated with embryo rejections, whereas elevated Th2 cell levels are associated with successful pregnancy [3]. Cytokines produced by Th1 cells, such as TNF-α, promote inflammatory and thrombotic responses. Cytokines produced by Th2 cells such as IL-4 inhibit Th1 cell-induced tissue factors. Previous studies have also shown significantly higher Th1/Th2 ratios in peripheral blood samples in patients with implantation failures [4]. Immunoregulatory therapy may be effective in treating immune disturbance. Prednisone (PDN), hydroxychloroquine (HCQ), or cyclosporine (CsA) has been proven to inhibit Th1 cytokine secretion, increase the number of regulatory T cells, and induce maternofetal tolerance [5]. Therefore, use of immunoregulators prior to embryo transfer may improve the IVF outcome. However, some studies showed adverse results [6]. These studies did not target on well-selected patients with immune disturbance which may present as an elevated Th1/Th2 cell ratio. Besides, these studies did not use combination of immunoregulatory medicines. 
Therefore, we conducted a retrospective cohort study to investigate the reproductive outcomes of FET (frozenthawed embryo transfer) cycle after use of immunoregulatory therapy versus no treatment in women with previous implantation failure and an elevated peripheral blood Th1/ Th2 cell ratio. A preprint has previously been published [7]. Study Population. This is a retrospective cohort study in which patients were enrolled in Reproductive Medicine Department, Peking University People's Hospital between January 2019 and March 2020. We enrolled patients based on the following eligibility criteria: (1) patients age < 40 years, (2) having at least one failed embryo transfer cycle previously, (3) having an elevated peripheral blood Th1/ Th2 ratio, and (4) planning to undergo frozen-thawed embryo transfer. Patients were excluded if they had any structural lesions of the uterus or hydrosalpinges. Demographic and baseline clinical data were abstracted from the clinical records. All serum laboratory values were obtained in our laboratory system. Patients who received immunoregulatory therapy were enrolled into the treated group, and those who did not receive immunoregulatory therapy were enrolled into the nontreated group. The study was approved by the institutional ethics review committee (2018PHB141-01). A signed informed consent form was obtained from all patients prior to treatment. Analyses of the Peripheral Blood Th1/Th2 Cells. According to Kwak-Kim et al. [4], mean TNF-α/IL-4 level is 12:81 ± 2:52 in patients with previous implantation failures; we took one standard deviation plus mean value, which is 15.33, as a lower limit of the elevated Th1/Th2 cytokine ratio. Therefore, the elevated Th1/Th2 ratio was defined as TNF-α/IL-4 equal to 15.33 or above. To evaluate the value of Th1/Th2 ratios, peripheral blood was drawn between cycle days (CD) 3 and 9 of this ART cycle. TNF-α and IL-4 concentrations in serum were measured using ELISA kits from BioLegend (San Diego, CA, USA), according to the manufacturer's instruction. Serum was prepared by centrifugation of coagulated blood tubes at 2000g for 10 min at room temperature and stored in −70°C. Samples were tested for IL-4 and TNF-α using a sandwich enzyme-linked immunosorbent assay according to the manufacturer's instructions (R&D Systems, USA). Their concentrations were calculated using standard curves. Immunosuppressive Treatment. Patients with only one time of implantation failure received prednisone only. Patients who had two or more implantation failures took combination of prednisone, hydroxychloroquine, and cyclosporine. Patients who had concerns about safety of cyclosporine received combination of prednisone and hydroxychloroquine. Cyclosporine is a strong immunosuppressant which can cause serum creatinine and urea nitrogen elevation, as well as discomforts after the first contact. An animal test has shown no teratogenic risk but needs further clinical verifications. We explained its benefits and risks to all patients before treatment. 5 mg prednisone daily was begun on the first menstrual day of the FET cycle and continued until the 8 weeks of gestation. 200 mg hydroxychloroquine daily was started from day one of the FET cycle and continued until 8 weeks of gestation. 100 mg daily cyclosporine was begun on the day of embryo transfer and continued until the 8 weeks of gestation. 2.4. FET Procedures. 
2.4. FET Procedures. All patients received standardized ovarian stimulation regimens, oocyte retrieval, and fertilization, followed by a planned frozen transfer of up to two day-3 or day-5 embryos. Patients received one of the following regimens based on their individual situation: gonadotropin-releasing hormone (GnRH) antagonist, GnRH agonist long protocol, or mild stimulation protocol. When at least two follicles reached 18 mm, 5,000 to 10,000 IU of hCG (Covidrel, Merck Serono) was administered and oocyte retrieval occurred 36 hours later. Luteal phase support was started from the day of ovulation with oral dydrogesterone at a dose of 20 mg twice a day and was continued until the day of serum hCG testing. Up to two cleavage-stage frozen embryos or day-5 blastocysts were thawed and transferred, respectively. The pregnancy test was performed 2 weeks after embryo transfer. In women with a positive hCG test, luteal phase support was continued until 10 weeks of gestation.

Outcome Measurements. The primary outcome was live birth rate, defined as the delivery of any viable neonate at 28 weeks of gestation or later. Secondary outcomes included biochemical pregnancy, clinical pregnancy, implantation rate, and neonatal outcomes.

2.6. Statistical Analysis. Baseline characteristics and laboratory results were summarized for the two groups using descriptive statistics, including percentages, means ± standard deviation (SD), and 95% CIs. For quantitative variables, the t-test was used to compare group differences. For categorical variables, the chi-square test or Mann-Whitney U test was used for group comparisons. The significance level was set at P < 0.05; all data were analyzed using SPSS 23.0 (IBM Corp., Armonk, NY, USA).

Study Population. Ninety-two patients were screened, and 21 were excluded for structural lesions of the uterus or hydrosalpinges, fresh embryo transfer, no embryo for transfer or cryopreservation, autoimmune disease, or chronic medical conditions. Seventy-one patients met the eligibility criteria and were enrolled in the study (Figure 1). Forty-one patients who received immune therapy to regulate immune disturbance were enrolled into the treatment group, and 30 patients who did not receive immunotherapy were enrolled into the control group. Of the 41 patients in the treatment group, 30 who had two or more implantation failures received combination therapy (prednisone and hydroxychloroquine, with or without cyclosporine), and 11 who had one implantation failure received prednisone.

Live Birth and Neonatal Outcomes. The mean number of embryos transferred was higher in the treated group than in the control group (1.8 ± 0.5 vs. 1.5 ± 0.5, P = 0.02). Other variables of the frozen embryo transfer procedures did not differ significantly between the groups.

Risk Factors Associated with Live Birth in Univariate Analysis. As determined by univariate analysis (Table 3), immunoregulatory therapy, age, infertility type, ovulation induction protocol, number of oocytes retrieved, artificial cycle embryo transfer, and cleavage transfer were associated with live birth (all P < 0.05).
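The group comparisons and adjusted odds ratio reported above follow the Statistical Analysis subsection. The sketch below shows how an equivalent analysis could be run in Python with scipy and statsmodels; the actual analysis was performed in SPSS, and the data-frame column names here are illustrative assumptions, not the study variables' actual labels.

```python
# Sketch of the group comparisons and adjusted odds-ratio analysis described in the
# Statistical Analysis subsection. Column names ("treated", "live_birth", "n_embryos")
# are illustrative; the study itself used SPSS.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm


def compare_groups(df: pd.DataFrame):
    # Continuous variable: two-sample t-test (e.g. number of embryos transferred).
    treated = df.loc[df["treated"] == 1, "n_embryos"]
    control = df.loc[df["treated"] == 0, "n_embryos"]
    t_stat, p_t = stats.ttest_ind(treated, control)

    # Categorical outcome: chi-square test on a 2x2 table (treatment vs. live birth).
    table = pd.crosstab(df["treated"], df["live_birth"])
    chi2, p_chi2, _, _ = stats.chi2_contingency(table)
    return p_t, p_chi2


def adjusted_odds_ratio(df: pd.DataFrame, covariates):
    # Multivariable logistic regression: live birth ~ treatment + confounders.
    X = sm.add_constant(df[["treated"] + list(covariates)])
    fit = sm.Logit(df["live_birth"], X).fit(disp=0)
    or_treated = np.exp(fit.params["treated"])
    ci_low, ci_high = np.exp(fit.conf_int().loc["treated"])
    return or_treated, (ci_low, ci_high), fit.pvalues["treated"]

# Usage (assuming a data frame `df` with one row per patient):
#   p_t, p_chi2 = compare_groups(df)
#   or_adj, ci, p = adjusted_odds_ratio(df, ["age", "infertility_type", "n_oocytes"])
```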
Discussion
In this study, we reported data on immunoregulatory therapy for improving IVF-ET outcomes in patients with an elevated peripheral Th1/Th2 cytokine ratio. To our knowledge, this is the first study to evaluate the efficacy and safety of a combination of immunoregulators in IVF-ET in this specific population. Our results indicated that use of prednisone, hydroxychloroquine, or cyclosporine during the frozen embryo transfer cycle for patients with an elevated peripheral Th1/Th2 cytokine ratio improved the live birth rate compared with untreated patients. It is worth noting that the live birth rate of all patients who underwent frozen embryo transfer in our center is 32.5%; immunoregulatory therapy raised the live birth rate in this group above that average. Therefore, immunoregulatory therapy can benefit clinical practice in treating implantation failure.

Studies have found that implantation failure is associated with elevated Th1/Th2 ratios [8]. Th1 and Th2 cytokines influence pregnancy through several mechanisms. Th1 cytokines such as TNF-α may activate macrophages, which can attack the trophoblast and trigger procoagulant processes at the maternal utero-placental blood vessels by activating vascular endothelial cell procoagulant [9]. In contrast, Th2 cytokines such as IL-4 inhibit Th1-induced tissue factor production by monocytes. In our study, the mean Th1/Th2 (TNF-α/IL-4) ratio was 29.42 ± 12.90 in the treated group, which was much higher than the Th1/Th2 ratio of both infertile patients (2.4 ± 0.4, n = 80) [4] and patients with IVF failures (12.81 ± 2.52, n = 9) in other studies [10], suggesting a more severe immune disturbance in our population. The live birth rate improved as the number of immunoregulators increased, and the combination of prednisone, hydroxychloroquine, and cyclosporine had the most favorable outcome, although this trend needs to be verified in a larger population. Prednisone, hydroxychloroquine, and cyclosporine regulate immune status through different mechanisms; the crosstalk among them may have an enhanced effect on the cytokine network and result in better outcomes. This also suggests that elevated Th1/Th2 ratios should be used to identify patients who are suitable for immunoregulatory therapy.

Limited data exist evaluating the live birth rate of IVF-ET patients with immune disturbance after immunoregulatory therapy. In previous studies, prednisone was usually combined with hydroxychloroquine and other immunoregulatory regimens to improve the natural conception rate of women with immune-mediated diseases such as obstetric antiphospholipid syndrome and systemic lupus erythematosus [11]. Those medications were also proven effective in immune disturbances. A study showed that hydroxychloroquine administration in women with recurrent implantation failure (RIF) and a high TNF-α/IL-10 ratio significantly decreased the serum level of TNF-α and significantly increased the serum level of IL-10 (P < 0.0001) [5]. Cyclosporine A (CsA) was also shown to increase Th2-associated responses (P = 0.0001) and reduce the Th1/Th2 ratio (26.71 ± 7.32 vs. 18.56 ± 4.92, P < 0.0001) in women with recurrent pregnancy loss [12]. However, reproductive outcomes were not analyzed in either study. Dan and Hong [13] analyzed reproductive outcomes after prednisolone administration during assisted reproductive technology (ART). The results were not significant (pregnancy rate: RR 1.02, 95% CI 0.84-1.24; clinical pregnancy rate: RR 1.01, 95% CI 0.82-1.24; and implantation rate: RR 1.04, 95% CI 0.85-1.28). A possible reason is that these studies did not perform subgroup analyses of patients with immune abnormalities. Therefore, immunoregulatory therapy may be effective in patients with immune disturbance, and the Th1/Th2 cytokine ratio should be used to identify targeted patients. This study provides a possible way of treating implantation failure in clinical practice. Selected patients could receive immunoregulatory therapy to improve reproductive outcomes and reduce the number of embryo transfer cycles needed. The safety of immunoregulatory therapy also needs to be emphasized.
Prednisolone has long been considered safe in pregnancy at low doses (<10 mg/day) [14]. Hydroxychloroquine also has a well-established, favorable safety profile [15]. CsA is a highly lipophilic peptide that can passively traverse the placenta and enter the fetal circulation. Although the drug has been detected in the placenta, cord blood, and amniotic fluid [16], it has not been convincingly shown whether it interferes with fetal development and growth. A meta-analysis suggested that CsA does not appear to be a major human teratogen [17]. Other studies have also shown that the use of CsA during pregnancy does not increase the risk of congenital defects in infants [18].

This study has limited statistical power due to its small sample size. A prospective randomized study is needed in the future to examine the efficacy and safety of immunotherapy. In addition, treated patients had higher levels of Th17 cells, which induce inflammation, suggesting a more severe immune abnormality compared with the control group [19]. However, Treg cells express anti-inflammatory cytokines, and recent studies have indicated that the balance between Treg and Th17 cells is important for maintaining a normal pregnancy. Therefore, the Th17/Treg ratio should also be analyzed. After two implantation failures, we used combination immunotherapy; the efficacy of each single medication should be analyzed in future studies.

In conclusion, immunoregulatory therapy improves reproductive outcomes in women with an elevated Th1/Th2 cytokine ratio and embryo transfer failure.
Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis Amyotrophic lateral sclerosis (ALS) is a progressive neuromuscular disease, with large variation in survival between patients. Currently, it remains rather difficult to predict survival based on clinical parameters alone. Here, we set out to use clinical characteristics in combination with MRI data to predict survival of ALS patients using deep learning, a machine learning technique highly effective in a broad range of big-data analyses. A group of 135 ALS patients was included from whom high-resolution diffusion-weighted and T1-weighted images were acquired at the first visit to the outpatient clinic. Next, each of the patients was monitored carefully and survival time to death was recorded. Patients were labeled as short, medium or long survivors, based on their recorded time to death as measured from the time of disease onset. In the deep learning procedure, the total group of 135 patients was split into a training set for deep learning (n = 83 patients), a validation set (n = 20) and an independent evaluation set (n = 32) to evaluate the performance of the obtained deep learning networks. Deep learning based on clinical characteristics predicted survival category correctly in 68.8% of the cases. Deep learning based on MRI predicted 62.5% correctly using structural connectivity and 62.5% using brain morphology data. Notably, when we combined the three sources of information, deep learning prediction accuracy increased to 84.4%. Taken together, our findings show the added value of MRI with respect to predicting survival in ALS, demonstrating the advantage of deep learning in disease prognostication. Introduction Amyotrophic lateral sclerosis (ALS) is a progressive neuromuscular disease, heterogeneous in terms of symptom development, disease onset and disease progression (Chiò et al., 2011;Ravits and La Spada, 2009). ALS patients display, on average, a survival time of 3-4 years after onset of symptoms (del Aguila et al., 2003;Hardiman et al., 2011). To date, clinical characteristics such as site of onset, respiratory status, ALS Functional Rating Scale (ALSFRS) scores (Cedarbaum et al., 1999) and C9orf72 phenotype status (i.e. a disease-causing repeat expansion mutation in ALS (DeJesus-Hernandez et al., 2011;Renton et al., 2011)) are shown to have some predictive power for prediction of survival (Chiò et al., 2009;Elamin et al., 2015;Scotton et al., 2012;Wolf et al., 2014Wolf et al., , 2015. Prognosis based on these markers, however, often remains too uncertain to be implemented in clinical practice (Elamin et al., 2015) as motor neuron loss might already occur before clinical weakness can be measured (Simon et al., 2014). This stresses the importance of the development of new (objective) markers and neuroimaging techniques might provide such markers and improve prognostication (Turner et al., 2011). Reliable prediction of survival at the first clinical MRI appointment would provide highly valuable information for patients and care providers. Neuroimaging data has been used to separate ALS patients from healthy controls. In these approaches, machine learning techniques (e.g. support vector machines, (Cristianini and Shawe-Taylor, 2000)) have been used to group patients and controls on the basis of changes in motor and extra-motor resting-state functional connectivity, reaching an overall accuracy of 87% (Fekete et al., 2013). 
Also other resting-state networks have been used in machine learning approaches with an accuracy of 72% (Welsh et al., 2013). In ALS, cortical thinning and subcortical changes have been related to disease progression, putting forward these changes as a biomarker of ALS (Agosta et al., 2012;Mezzapesa et al., 2013;Turner and Verstraete, 2015;Verstraete et al., 2012;Walhout et al., 2015;Westeneng et al., 2015). For this reason, cortical thickness has been used as an alternative imaging metric to separate patients from controls, reaching accuracies ranging from 60 to 75% (Ahmed et al., 2015;Foland-Ross et al., 2015;Greenstein et al., 2012;Lerch et al., 2008). Notwithstanding the importance of exploring classification approaches to distinguish between patients and controls, predicting disease course, i.e. going beyond the establishment of patientcontrol identification, is presumably a more difficult problem, and arguably clinically more relevant. In the present study we therefore set out to explore the use of clinical characteristics in combination with MRI-based metrics of connectivity and brain morphology. White matter brain connectivity was derived from diffusion-weighted connectome imaging and brain morphology (i.e. cortical thickness and subcortical volume), was extracted from T1-weighted images. We used deep learning, a powerful technique shown to be of great value in many classification problems (Dean et al., 2012;Hinton et al., 2012;Krizhevsky et al., 2012;Mohamed et al., 2012;Wu et al., 2015), with Google's visual search algorithm and Google's AlphaGo program as well-known examples of applications of deep learning (Dean et al., 2012;Krizhevsky et al., 2012;Silver et al., 2016). Deep learning methods have shown high value in the fields of image classification (Krizhevsky et al., 2012;Wu et al., 2015), speech recognition Mohamed et al., 2012) as well as in elucidating complex relationships in MRI data . Focusing on testing the predictive power of deep learning in differentiating between three survival duration subgroups (i.e. short, medium and long survivors), we assess the prediction accuracy of four deep learning networks which are based on 1. clinical data, 2. structural connectivity MRI data, 3. morphology MRI data and 4. a combined approach, in which clinical and imaging data are combined using layered deep learning. We show that the prediction accuracy of future survival time of ALS patients can be as high as 84% on the basis of combined clinical and neuroimaging data. Patients A total dataset of 135 patients with sporadic ALS was included in this study (Table 1). Patients were diagnosed according to the El Escorial criteria (Brooks et al., 2009) and recruited from the outpatient clinic for motor neuron diseases of the University Medical Center Utrecht. Parts of this dataset have been described in earlier publications in the context of examining patient-control group effects (Schmidt et al., 2014;Verstraete et al., 2011) (demographics given in Table 1). The included set involved data of patients that were either deceased (n = 122) or still alive with a disease duration of over 50 months (n = 13) at the time of analysis, providing the opportunity to test survival predictions using information from the first clinical MRI appointment. The included clinical characteristics (Table 1) consisted of eight metrics. These comprised 1. site of disease onset, 2. age at disease onset and 3. time to diagnosis (i.e. the time from disease onset until the diagnosis of ALS was given). 
In addition, 4. the ALSFRS slope was included to provide an indication of disease progression based on the revised ALS Functional Rating Scale (ALSFRS-R) (Cedarbaum et al., 1999) and time T (in months) between symptom onset and first examination: slope = (48 − ALSFRS-R) / T (Kimura et al., 2006;Kollewe et al., 2008;Qureshi et al., 2006). Other clinical variables taken into account were 5. the forced vital capacity (FVC), 6. C9orf72 phenotype status, 7. frontotemporal dementia (FTD) status as derived from the (revised) Neary criteria (Neary et al., 1998;Rascovsky et al., 2011) and 8. El Escorial criteria diagnostic category. Among the group of 135 patients, there was no history of brain injury, psychiatric illness, epilepsy, or neurodegenerative diseases other than ALS. The Ethical Committee for human research of the University Medical Center Utrecht approved the study protocols and informed written consent according to the Declaration of Helsinki was obtained from each patient.

Short, medium, long survivors
Each of the 135 patients was categorized according to the true survival time (i.e. time between disease onset and death): short survivors with survival up to 25 months after disease onset, medium survivors with survival between 25 and 50 months after disease onset, and long survivors living over 50 months after disease onset (Elamin et al., 2015). The group of long survivors consisted of patients who either died after a disease duration of at least 50 months or were still alive and had a disease duration of at least 50 months at time of analysis.

[Table 1. Demographic and clinical characteristics of all study participants. The total dataset is divided into a training set, a validation set and an evaluation set. Columns: Total (n = 135), Training (n = 83), Validation (n = 20), Evaluation (n = 32), p-value.]

Image acquisition and preprocessing
T1 and diffusion-weighted scans were acquired from all patients using a 3 Tesla Philips Achieva Medical Scanner with a SENSE receiver head-coil, described in detail by Verstraete et al. (2011) and Schmidt et al. (2014). A high-resolution T1-weighted image was acquired for anatomical reference by a 3D fast field echo using parallel imaging (TR/TE = 10/4.6 ms, flip-angle 8°, slice orientation: sagittal, voxel size = 0.80 × 0.75 × 0.75 mm, field of view = 176 × 240 × 240 mm covering the whole brain). For each subject, two sets of 30 weighted diffusion scans and 5 unweighted B0 scans were acquired with opposite k-space readouts (Andersson et al., 2003) using the following settings: parallel imaging SENSE p-reduction 3, high angular gradient set of 30 different weighted directions, TR/TE = 7035/68 ms, 2 × 2 × 2 mm voxel size, 75 slices, b = 1000 s/mm². Anatomical T1-weighted images were parcellated using FreeSurfer (V5.1.0) according to the Desikan-Killiany atlas (Desikan et al., 2006), dividing the segmented gray matter into 83 distinct brain regions (68 cortical regions (34 for each hemisphere), 14 subcortical areas, and the brainstem). Of the 68 cortical regions, cortical thickness was measured by computing the distances between the gray/white matter boundary and pial surface at each point on the cortical mantle (Fischl and Dale, 2000). Volumes of the 14 subcortical areas and the brainstem were computed with FreeSurfer's automated procedure for volumetric measurements (Fischl et al., 2002). Preprocessing of diffusion-weighted images included corrections for susceptibility and eddy-current distortions (Andersson and Skare, 2002;Andersson et al., 2003).
Next, a tensor was fitted to the diffusion signals in each voxel and diffusion tensor imaging metrics, such as fractional anisotropy (FA) (Alexander et al., 2007), were derived. White matter tracts were reconstructed using Fiber Assignment by Continuous Tracking (FACT) (Mori et al., 1999); tracking was initiated by 8 seeds per white matter voxel and stopped using conditions as detailed in previous work (Schmidt et al., 2014). For each subject, an individual brain network was reconstructed by selecting the interconnecting tracts from the total cloud of reconstructed streamlines for each pair of regions included in the used cortical atlas (van den Heuvel and Sporns, 2011; Verstraete et al., 2011). We focused on white matter connectivity strength measured in terms of FA (from here on referred to as connection weight). In support of using this metric as a marker for disease effects, previous studies have extensively shown FA changes in ALS patients (Ciccarelli et al., 2006;Menke et al., 2012;Schmidt et al., 2014;Senda et al., 2011;Turner et al., 2011;Verstraete et al., 2010;Verstraete et al., 2014). Moreover, the extent to which FA alterations are observed has been suggested to reflect distinct sequential ALS disease stages (Kassubek et al., 2014;Müller et al., 2016), and has been noted to mirror the pattern of phosphorylated 43 kDa TAR DNA-binding protein (pTDP-43; Brettschneider et al., 2013) aggregation. FA values of tracts interconnecting brain regions were stored in a weighted connectivity matrix.

Training, validation and test set
The total set of 135 datasets (i.e. 135 patients) was randomly divided into a training set (n = 83), a validation set (n = 20), and an independent evaluation set (n = 32), by first randomly selecting an evaluation set from the total dataset, and next by dividing the remaining subset of the data into a training set and a validation set, with a proportional split between 70/30 and 80/20 (Crowther and Cox, 2005;Shahin et al., 2004). The training, validation and evaluation sets had similar survival class distributions (Fig. 1) and were not significantly different for each of the eight clinical characteristics (Table 1).

Deep learning neural network
A deep learning approach on the basis of an artificial neural network was applied. In brief (see below for details on the applied deep learning procedures), the procedure included deep learning on the training set, with the validation set used to stop the training process in time to prevent overfitting of the classifier to the training set (Duda et al., 2001). The evaluation set was then used to assess the final performance of the trained neural network on an independent sample. For clarity, we note that the term 'neural network' here is taken from the field of supervised learning (Bishop, 1995;Duda et al., 2001;Hinton et al., 2012;Larochelle et al., 2009) and does not refer to the concept of brain network as used in the field of connectomics (Bullmore and Sporns, 2009;van den Heuvel et al., 2012;van den Heuvel et al., 2008). Deep learning networks refer to the subclass of neural networks comprising multiple hidden layers that allow for a detailed input-to-output mapping. Due to the inclusion of multiple hidden layers, deep learning networks are particularly useful for modeling high-level abstractions from data (Larochelle et al., 2009).
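The survival labeling and the data split described in the preceding sections are simple enough to sketch directly. The code below implements the ALSFRS slope formula, the 25/50-month class boundaries and the 83/20/32 split; it is a hedged illustration with assumed column names, not the authors' pipeline.

```python
# Sketch of the survival labeling and dataset split described above. The slope formula,
# class boundaries (25 and 50 months) and split sizes (83/20/32) come from the text;
# the data-frame columns are illustrative.
import numpy as np
import pandas as pd


def alsfrs_slope(alsfrs_r: float, months_since_onset: float) -> float:
    """Disease-progression slope: (48 - ALSFRS-R) / T, with T in months since symptom onset."""
    return (48.0 - alsfrs_r) / months_since_onset


def survival_class(months_from_onset: float) -> str:
    """Short (<= 25 months), medium (25-50 months) or long (> 50 months) survival from onset."""
    if months_from_onset <= 25:
        return "short"
    if months_from_onset <= 50:
        return "medium"
    return "long"


def split_dataset(df: pd.DataFrame, n_eval: int = 32, n_val: int = 20, seed: int = 0):
    """First hold out an evaluation set, then split the remainder into training/validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(df))
    eval_set = df.iloc[idx[:n_eval]]
    val_set = df.iloc[idx[n_eval:n_eval + n_val]]
    train_set = df.iloc[idx[n_eval + n_val:]]
    return train_set, val_set, eval_set
```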
In total, four deep learning networks were constructed (Fig. 2). These were based on 1. clinical data, 2. structural connectivity MRI data, 3. morphology MRI data and 4. a combination of the previous three information sources. The input vectors or features of these networks included the normalized clinical characteristics (8 in total), normalized cortical thickness and subcortical volumes derived from T1-weighted MRI and/or the connection weights as stored in the connectivity matrices. Normalization was performed using a min-max feature scaling in order to accelerate training (Priddy and Keller, 2005). Average values of features were imputed for missing values (i.e. either unknown clinical characteristics or connections that could not be detected in patients), with missing values being accounted for in an additional binary input vector. The output vectors, i.e. the classes that the deep learning network had to classify, represented the predefined classes of short, medium and long survivors. In what follows, we will describe the construction of the four deep learning networks, their training and their evaluation.

Clinical deep learning
First, a deep learning network was constructed for the clinical characteristics (Fig. 2). The input vector of this network represented the eight clinical characteristics, including site of onset, age at onset, time to diagnosis, ALSFRS slope, FVC, C9orf72 phenotype status, FTD status and El Escorial criteria diagnostic category. Each layer in the deep learning network consisted of nodes and was connected to other layers using weighted edges (Duda et al., 2001). The number of input nodes was based on the number of clinical characteristics and the number of output nodes was based on the number of classes (short, medium, long). The number of hidden layers was set to two, in order to balance between the benefit of discovering more complex relations, the risk of overfitting, and training time and complexity (Karsoliya, 2012). The number of hidden nodes was set by means of a fine neuron grid search during the training phase (described below), with the number of hidden nodes in both layers varying from 1 to 500.

Structural connectivity deep learning
Second, a deep learning network was constructed for the structural connectivity MRI metric (Fig. 2). This second network employed the connection weight of each reconstructed connection as an input node for the deep learning (2285 features in total: 83 brain regions × 82 / 2 = 3403 possible connections, of which 2285 are existing connections). The number of output nodes was set to three survival classes. Two hidden layers were used and the sizes of these layers were found using a fine neuron grid search in the same search domain as set for clinical deep learning.

Morphology deep learning
Next, a deep learning network was constructed based on the cortical thickness and subcortical volume measurements (Fig. 2). The input vector of the morphology network included 68 cortical thickness values and volume values of 14 subcortical regions and the brainstem, resulting in a total of 83 input nodes. The number of output nodes was set to three survival classes and two hidden layers were used. The size of these layers was set by means of a fine neuron grid search in the same search domain as described for structural connectivity deep learning.
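A minimal sketch of one of these per-modality networks and the neuron grid search is given below. The original work used logistic activations, two hidden layers, an L2 penalty of λ = 0.1 and a scaled-conjugate-gradient optimiser; scikit-learn's MLPClassifier is used here only as a stand-in (it does not offer scaled conjugate gradient), and the candidate layer sizes are a coarse illustration of the 1-500 search range.

```python
# Illustrative sketch of a per-modality network and the neuron grid search described above.
# scikit-learn's MLPClassifier is a stand-in for the original implementation; its 'alpha'
# parameter plays the role of the paper's L2 penalty (lambda = 0.1).
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score


def grid_search_hidden_sizes(X_train, y_train, X_val, y_val,
                             candidate_sizes=(8, 32, 128, 500), seed=0):
    """Search over the two hidden-layer sizes (the paper searched 1-500 nodes per layer)."""
    best_acc, best_model, best_sizes = -1.0, None, None
    for n1 in candidate_sizes:
        for n2 in candidate_sizes:
            model = MLPClassifier(hidden_layer_sizes=(n1, n2),
                                  activation="logistic",  # logistic hidden units, as in the paper
                                  alpha=0.1,              # L2 regularisation strength
                                  max_iter=2000,
                                  random_state=seed)
            model.fit(X_train, y_train)
            acc = accuracy_score(y_val, model.predict(X_val))
            if acc > best_acc:
                best_acc, best_model, best_sizes = acc, model, (n1, n2)
    return best_model, best_sizes, best_acc
```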
Clinical-MRI combined deep learning
In addition to the clinical and MRI deep learning networks, a fourth network was constructed that combined the three information sources. The input layer of this network included each of the three output nodes of the clinical and MRI deep learning networks, and thus consisted of nine input nodes (Fig. 2). The number of hidden nodes was found using a neuron grid search in the same range as for the other networks during the training phase. Output nodes (three nodes) represented survival class.

Network training
All four deep learning networks were trained using the following procedures. Each hidden node was assigned a non-linear (logistic) activation function. Feeding each training data point through the network produced the network's output vector. This output vector was compared to the target values, with any difference between the predicted outcome and the real outcome (i.e. short, medium, long survivors) defined as error using the cross-entropy error function (Murphy, 2012;Rubinstein and Kroese, 2004). After presenting all examples of the training set to the network, the network weights were updated by backpropagation learning (Rumelhart et al., 1986) using the scaled conjugate gradient algorithm (Møller, 1993) to correct the weights in a direction that reduced the error of the network. During this training phase, overfitting of the network was prevented by the use of the L2-regularization technique (i.e. penalizing by adding the sum of the squared values of the weights to the error function) with parameter λ = 0.1 (Ng, 2004) and by performance comparison against the validation set. The validation data was presented to the network after each training iteration to obtain a non-training performance error. Training was stopped when the validation error ceased to decrease (Sarle, 1995). This training procedure was repeated for all networks constructed in the neuron grid search; performance of these networks was evaluated using the measures described below, and the optimal deep learning network size was selected.

Network evaluation
After the training stage, performance of the obtained neural network was assessed in the evaluation phase by means of the evaluation dataset. The input features (i.e. clinical characteristics (network 1), connectivity matrices (network 2), morphology values (network 3) or all three information sources (network 4)) of the subjects in the evaluation set (n = 32) were presented to the trained networks. The softmax activation function (Bishop, 1995) was used for the output nodes, resulting in a vector of values varying between 0 and 1 that add up to 1; the output node with the highest probability was selected as the predicted class label using a winner-take-all approach (Duan et al., 2003;Hinton, 2002;Hinton et al., 2012;Lefebvre et al., 2013). To determine whether the predicted class was correct, the network output label was compared to the true class label (i.e. true survival class of a patient). Correct classifications were marked by equal labels for the true and predicted class; incorrect classifications were marked by a mismatch between prediction and truth. Next, a mosaic plot of the trained network (Fig. 3) was computed to visualize the distribution of datasets over the different classes after prediction (Hartigan and Kleiner, 1981), including the positive predictive values (PPV) of the network, defined as the percentage of patients with a predicted label that coincided with the true class label (Altman and Bland, 1994;Fletcher and Fletcher, 2005).
For each class, a PPV score was computed as PPV_i = N_i^correct / N_i^label (i ∈ {short, medium, long}), with N_i^correct being the number of patients that correctly received class label i and N_i^label the total number of patients predicted to have class label i. PPVs were computed to give an impression of the discriminative power and thus predictive value of the network. In addition, the overall performance of a network was assessed as the overall accuracy of the predictions, denoting the percentage of patients for whom the network predicted the correct class label, calculated as N_correct / N. Here, N_correct denoted the number of patients for whom the prediction by the deep learning network was equal to the true survival class (i.e. highlighted diagonal elements in the mosaic plot), and N denoted the total number of patients included in the set.

[Fig. 3 caption: The distribution of the prediction results shown in mosaic plots. The columns represent the known survival class of patients, where the width of columns is relative to the number of subjects in that column. The colors orange, gray and blue represent the predicted survival classes short, medium and long, respectively. The highlighted diagonal cells (bottom left to top right) denote the number of patients that were correctly classified and the off-diagonal cells the number of patients that were mispredicted (opaque cells), i.e. the wrong class was predicted by the network. The positive predictive value (Fletcher and Fletcher, 2005) for each predicted class can be derived by dividing the correctly predicted subjects by the total number of predictions of that class (i.e. highlighted cell/sum of same colored cells). The overall accuracy is computed by summing the highlighted diagonal cells and dividing this number by the total population. For the evaluation set (n = 32), the clinical deep learning network, the structural connectivity deep learning network, morphology deep learning network and the clinical-MRI combined deep learning network obtained overall accuracies of 68.8%, 62.5%, 62.5% and 78.1%, respectively. (Box) A perfect classification and a random classification mosaic plot are displayed for comparison of the results obtained on the evaluation set. In the perfect classification, all subjects are correctly classified and therefore an accuracy of 100% is achieved. In a random classification, three class labels are randomly distributed over the subjects, only predicting a third of the total subject population correctly.]

Clinical deep learning
The clinical network was trained on eight clinical characteristics (site of onset, age at onset, time to diagnosis, ALSFRS slope, FVC, C9orf72 phenotype status, FTD status and El Escorial criteria diagnostic category). Neuron grid search resulted in a deep learning network with 8 input nodes, 158 nodes in the first hidden layer, 448 nodes in the second hidden layer, and 3 output nodes. The predicted short survivors had a PPV of 72.7% on the evaluation set; medium survivors had a PPV of 64.3% and long survivors 71.4%. The highest PPV was obtained for the predicted short survivors, indicating that a predicted short survival on the basis of clinical metrics was more often correct than a predicted medium or long survival. The optimal network based on clinical characteristics gave an evaluation accuracy of 68.8% (Fig. 3), a training accuracy of 78.3% (Fig. S1) and a validation accuracy of 70.0% (Fig. S2).
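The per-class PPV and overall accuracy defined above reduce to a few lines of code. The sketch below is a direct implementation of those two formulas; the example labels are illustrative, not the study data.

```python
# Sketch of the per-class positive predictive value (PPV_i = N_i^correct / N_i^label) and the
# overall accuracy (N_correct / N) defined above, for the three survival classes.
import numpy as np

CLASSES = ("short", "medium", "long")


def ppv_per_class(y_true, y_pred):
    """PPV for class i: correctly predicted i divided by all patients predicted as i."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ppv = {}
    for c in CLASSES:
        predicted_c = y_pred == c
        n_label = predicted_c.sum()
        n_correct = np.logical_and(predicted_c, y_true == c).sum()
        ppv[c] = n_correct / n_label if n_label > 0 else float("nan")
    return ppv


def overall_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return (y_true == y_pred).mean()


if __name__ == "__main__":
    # Illustrative labels only.
    truth = ["short", "short", "medium", "long", "long", "medium"]
    pred = ["short", "medium", "medium", "long", "short", "medium"]
    print(ppv_per_class(truth, pred), overall_accuracy(truth, pred))
```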
Structural connectivity deep learning
The structural connectivity MRI network, based on connection weights, consisted of 2285 input nodes (the total number of reconstructed tracts), was fitted with 134 and 313 nodes in the first and second hidden layers, respectively, and comprised 3 output nodes. The PPV scores for the predicted short, medium and long survivors of the evaluation set were 62.5%, 57.1% and 100.0%, respectively. The highest PPV in this network was obtained for the long survivor class, indicating that this structural connectivity-based deep learning network was highly reliable when it gave predictions for the class of long survivors. The structural connectivity deep learning network reached an evaluation prediction accuracy of 62.5% (Fig. 3), a training accuracy of 79.5% (Fig. S1) and a validation accuracy of 60.0% (Fig. S2). With average simulated PPV chance levels of 38.6% (short), 38.6% (medium) and 23.0% (long) when assigning random class labels to patients, these findings show that objective connectivity values alone can provide valuable information on disease survival.

Morphology deep learning
The morphology MRI deep learning network consisted of 83 input nodes (68 cortical thickness values and 15 subcortical volumes), was fitted with 181 and 178 nodes in the first and second hidden layers, respectively, and included the three survival classes as output nodes. The PPV scores on the evaluation set were 64.3% (short), 61.5% (medium) and 60.0% (long survival), respectively. The highest PPV in this network was obtained for the short survivor class, indicating that the morphology network, similar to the clinical network, was more reliable when it predicted a short survivor. The morphology deep learning network reached an evaluation prediction accuracy of 62.5% (Fig. 3), a training accuracy of 80.7% (Fig. S1) and a validation accuracy of 60.0% (Fig. S2). These findings support the relation between morphology and disease progression, and thus disease survival.

Clinical-MRI combined deep learning
Next, the prediction probabilities from the clinical deep learning network and the prediction probabilities from the two MRI networks were presented as input for a combined deep learning network. Grid search during network training resulted in a network configuration with 9 input nodes, 171 nodes in the first hidden layer, 108 nodes in the second hidden layer, and 3 output nodes. PPV scores of the combined network on the evaluation set were 90.9%, 83.3% and 77.8% for the predicted short, medium and long survivor classes, respectively. The combined network reached an evaluation accuracy of 84.4% (Fig. 3), a training accuracy of 88.0% (Fig. S1) and a validation accuracy of 80.0% (Fig. S2). Statistical testing indicated that survival prediction was significantly improved (p < 0.001) due to the addition of structural connectivity data and morphology MRI findings to the clinical characteristics (Fig. 4, see Supplementary materials for details).

Discussion
We evaluated the use of deep learning to predict survival time of ALS patients on the basis of clinical characteristics and advanced MRI metrics. Our findings show that MRI data alone (i.e. structural connectivity and brain morphology data, consisting of cortical thickness and subcortical volumes) can provide valuable predictions of survival time.
Furthermore, combining clinical characteristics and MRI data into a layered deep learning approach can further improve predictions about whether a patient will have a short, medium or long survival time. Previous studies have used Cox proportional hazards modeling (Cox, 1972) employing clinical characteristics such as site of onset, executive dysfunction and diagnostic delay (Elamin et al., 2015;Scotton et al., 2012) to develop prognostic models for survival. These models already showed the predictive power of clinical data, with PPVs and overall accuracies lower than or similar to results in our study, indicating a potentially better predictive power for survival classes using deep learning. It should however be noted that PPV scores depend on the prevalence of a subtype in the total population, which might also influence the differences in scores. Deep learning on diffusion-weighted imaging data led to a prediction accuracy of 62.5%. Deep learning on T1-weighted image data resulted in a prediction accuracy of 62.5%. A combination of these imaging metrics yielded an improved prediction accuracy of 78.1% (see Supplementary materials for more details), indicating the predictive power of combining imaging metrics in deep learning. Previous studies have used MRI data for the prediction of diagnosis; that is, they used structural connectivity MRI data to differentiate between ALS patients and healthy controls, resulting in prediction accuracies between 70 and 80% (Fekete et al., 2013;Welsh et al., 2013). Other studies used cortical thickness measurements to discriminate between patients and controls in various diseases, such as Alzheimer's disease (Lerch et al., 2008), childhood onset schizophrenia (Greenstein et al., 2012), and major depression (Foland-Ross et al., 2015), with prediction accuracies ranging from 60 to 75%. In our study we examined a presumably more difficult task of predicting survival time within the group of patients, with a priori chance levels (i.e. true positive rate) here equal to 33.3% for the three survival classes, rather than 50% for patient/control status. The potential of MRI in patient classification and prognostication was previously also shown for the prediction of disease status in Alzheimer's disease, where machine learning differentiated between two subtypes of dementia based on T1-weighted images with accuracies of 89% (Klöppel et al., 2008).

[Fig. 4 caption: Fitted normal curves on accuracy distributions of the four networks. Normal curves are fitted on the accuracy distributions of the clinical deep learning network (blue), the structural connectivity deep learning network (orange), morphology deep learning network (purple) and the clinical-MRI combined deep learning network (gray), based on 16 randomly selected subjects (repeated 10,000 times) from the first evaluation set (n = 32). The mean accuracies (dashed lines) of these distributions were 68.7%, 62.5%, 62.4% and 84.4% for the clinical deep learning network, structural connectivity deep learning network, morphology deep learning network and clinical-MRI combined deep learning network, respectively. A paired t-test showed significant differences between accuracies of each pair of networks (all p < 0.001).]

In this study we included all reconstructed connections instead of focusing on connections between specific brain regions, for example from motor regions to other brain areas (Schmidt et al., 2014).
By considering all connections, a deep learning method is allowed to identify combinations of affected connections that are most valuable for survival prediction. As such, the deep learning network may detect relevant patterns in connections that are only slightly affected, thereby adding valuable information for prediction. The ability of deep learning to distill complex relationships from large datasets makes it a promising tool for disease prognostication. The predictions of the network based on clinical parameters and the two MRI networks were combined in a clinical-MRI network. The clinical network seemed to be less sensitive to correctly predicting short survivors compared to the MRI networks (see Supplementary materials for more details). The combined network learned relationships between the survival class predictions of the three other networks. Patients incorrectly predicted by either the clinical or one of the MRI networks were often predicted correctly by the combination network using prediction information from the other two networks. By utilizing the predicting probabilities of the survival classes instead of the survival class label, the uncertainty of predictions was taken into account. The prediction probabilities of the clinical, structural connectivity MRI and morphology MRI networks contributed equally to the combined prediction. Deep learning shows promising results for the prediction of survival categories for individual ALS patients, but several points have to be taken into account. Large training and evaluation sets are preferred to ensure convergence of prediction accuracies and to prevent overfitting (Ng, 2004). In addition, external validation is crucial for the development of a reliable prognostic tool and should be incorporated in future examinations. Second, while deep learning can effectively make predictions on datasets with complex relationships, dependency among input variables and between input and output variables cannot be easily deduced. In future research, it would therefore be worthwhile to investigate possibilities to reveal these dependencies and gain more insight into the mechanisms underlying disease progression. Third, prediction may also be improved using additional deep learning networks based on fMRI scans, as used in previous studies investigating disease diagnosis (Fekete et al., 2013;Welsh et al., 2013). Finally, additional clinical characteristics or diffusion tensor imaging metrics may also improve prediction. For example, radial diffusivity differences have been shown between ALS patients and healthy controls (Agosta et al., 2010;Metwalli et al., 2010) and therefore might also be of value in survival prognostication. Deep learning is a powerful approach with successful applications in many real world issues. Here, we show that deep learning can also be of benefit to medical problems. Our findings show that deep learning can contribute to early prognostication of survival in ALS by combining clinical characteristics and brain imaging data. Our study provides promising results and may contribute to developing an automated prognostication tool for the estimation of survival in individual patients. Disclosure MPvdH was supported by the Netherlands Organization for Scientific Research VIDI Grant and a fellowship of the Brain Center Rudolf Magnus. LHvdB received funding from the Netherlands Organization for Scientific Research VICI Grant and from the ALS Foundation Netherlands. 
LHvdB received travel grants and consultancy fees from Baxter and serves on scientific advisory boards for Prinses Beatrix Spierfonds, Thierry Latran Foundation, Cytokinetics and Biogen Idec.
Realisation of high-fidelity nonadiabatic CZ gates with superconducting qubits

Entangling gates with error rates reaching the threshold for quantum error correction have been reported for CZ gates using adiabatic longitudinal control based on the interaction between the |11〉 and |20〉 states. Here, we design and implement nonadiabatic CZ gates, which outperform adiabatic gates in terms of speed and fidelity, with gate times reaching 1.25/(2√2 g01,10), and fidelities reaching 99.54 ± 0.08%. Nonadiabatic gates are found to have proportionally less incoherent error than adiabatic gates thanks to their fast gate times, which leave more room for further improvements in the design of the control pulses to eliminate coherent errors. We also show that state leakage can be reduced to below 0.2% with optimisation. Furthermore, the gate optimisation process is highly feasible: experimental optimisation can be expected to take less than four hours. Finally, the gate design process can be extended to CCZ gates, and our preliminary results suggest that this process would be feasible as well, if we can measure the CCZ fidelity separately from the initialisation and readout errors in experimental optimisation.

INTRODUCTION
Motivated by the quest for quantum error correction and expanding the set of realisable circuits, 1,2 there has been a great effort to improve the design of entangling gates, [1][2][3][4][5][6][7][8][9][10][11] and by now there is a rich array of design choices in a variety of quantum computing modalities, including superconducting quantum circuits, 12 trapped ions, 13 quantum dots 14 and NV centres in diamond. 15,16 Notable designs for entangling gates in superconducting circuits include fast adiabatic gates, 17 frequency modulation, 11,18 cross resonance 19,20 and resonator-induced phase, 21,22 which effect the gates using longitudinal (first two) or transverse (last two) control of the qubits. Currently, the best result for entangling-gate fidelity using longitudinal control is 99.44%, 3 and using transverse control is 99.1%. 20 The former result reaches the surface code threshold error rate. 3,23 Precision control of qubit states is hindered by coherent and incoherent errors. Coherent errors, composed of phase errors and leakage errors, can be eliminated in principle by improving the gate design and by calibration of nonideal factors such as crosstalk, pulse distortion, energy drift and readout drift. Incoherent errors, on the other hand, cannot normally be eliminated within the domain of designing and calibrating pulses, and the best, most direct way to mitigate this source of error is to make the gate as fast as possible. The information loss due to decoherence during the gate time thus sets an upper limit on the achievable fidelity of the operation. The highest gate fidelities reported to date use fast adiabatic gates. 3 The principle of the fast adiabatic design is to change the frequency of the qubits slowly enough that the Landau-Zener transition probability is minimised. The speed of the gate is maximised, all while maintaining the population stability provided by the adiabatic theorem. Barends et al.
provide an analysis of the sources of coherent and incoherent errors in these gates, and conclude that 55% of the error comes from decoherence. The other errors are reported to be from coherent sources: phase error (29%) and state leakage (21%). 3 Our work aims to address the dominant source of error, and improve on this leading gate design. We design and implement nonadiabatic gates and show that this design can achieve performance parity with the reported fidelities of adiabatic gates. Normalised by the coupling strength, Ts = 1/(2g11,20), our gates are significantly faster than their adiabatic counterparts, reaching 1.25Ts (40 ns), compared to adiabatic gate times, which range from 1.66 to 1.87Ts. 3 The faster gate times mean the upper bound set by decoherence during the operation is higher, which leaves more room for improvement of overall gate fidelity by focusing on coherent errors. Our analysis shows that approximately 48% of the error is incoherent. Our simulations also find that the fastest possible gate is 1.06Ts, but in practice, technical limitations will cause the gate time to increase.

Simulation of nonadiabatic CZ gate
The CZ gate design we used in this experiment works by adjusting the frequency of the qubits so that the two-qubit state |11〉 interacts with the noncomputational state |20〉 24 (Fig. 1a). In our case, we tuned only the frequency of qubit a, without changing qubit b (Fig. 1b). The |11〉-|20〉 interaction will cause a phase difference of π relative to the three other two-qubit states, thus implementing a CZ gate. In the nonadiabatic case, the detuning from the energy eigenstate is made rapidly, making |E20 − E11|, the energy difference between the two interacting states, go to the order of g11,20. Once qubit a is rapidly detuned to the interaction point, the |11〉-|20〉 subspace of the two-qubit state precesses around the Bloch sphere for one revolution (Fig. 1c), after which the adjusted qubit is rapidly tuned back to its idle point. Analysis shows that when the pulse shown in Fig. 1b is applied, a large part of the population leaves the eigenstate before returning nearly in full (Fig. 1d). This is a distinguishing characteristic of nonadiabatic gates. Ideally, the population of the |11〉 state instantly goes to the interaction point, gains the phase of π and returns instantly. But in practice, the bandwidth of the control pulses limits the speed of the detuning. To find a waveform that minimises population leakage to noncomputational states in the realistic situation where bandwidth is finite, we first search for a waveform without applying bandwidth restrictions, then take that result and restrict the waveform bandwidth. The waveform search is done with the conventional differential evolution (DE) optimisation method, [25][26][27] using the simulated fidelity as the fitness function. During the search, the waveform is parametrised by the total time of the operation and by relative weights of different frequencies that make up the pulse. [28][29][30] Fidelity is defined as the normalised trace distance in the computational subspace between the desired unitary operation and the one resulting from the simulated applied pulse. Here, UCZ is the ideal operation, and UP is the evolution in the computational subspace caused by the detuning pulse. Searching over the seven-parameter search space resulted in a set of parameters for W with a maximum fidelity of 99.97%.
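The explicit fidelity equation referenced above (the one relating UCZ and UP) does not survive in this text. As an assumption only, one commonly used average-gate-fidelity expression for a two-qubit gate with possible leakage is shown below; it is a stand-in consistent with the surrounding description, not necessarily the authors' exact definition.

```latex
% Assumed form of the fidelity referenced above; the authors' exact definition did not
% survive extraction, so this standard average-gate-fidelity expression is only a stand-in.
% U_P is truncated to the 4-dimensional computational subspace, so leakage reduces Tr(U_P^† U_P).
\begin{equation*}
  F \;=\; \frac{\operatorname{Tr}\!\left(U_P^{\dagger} U_P\right)
          \;+\; \bigl|\operatorname{Tr}\!\left(U_{\mathrm{CZ}}^{\dagger} U_P\right)\bigr|^{2}}
               {d\,(d+1)},
  \qquad d = 4 .
\end{equation*}
```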
To adapt the ideal pulse to the experiment, we applied a 300 MHz Gaussian bandpass filter and added a 3 ns rise time and fall time to the detuning. Doing so distorts the pulse, so the Nelder-Mead (NM) algorithm 31 was then used to maximise the fidelity of the modified waveform, reaching a final fidelity of 99.95%. Theoretical analysis of the modified pulse shows that the finite-bandwidth waveform can reduce state leakage to 0.05% and phase error to 0.005 rad. The gate operation time is 1.06Ts for infinite bandwidth and 1.25Ts for 300 MHz bandwidth.

[Fig. 1 caption: Nonadiabatic CZ gate design. a Principle of the nonadiabatic CZ gate. |11〉 is moved rapidly to the interaction point, interacted with |20〉 for a period of time and then moved back to the idle point. b Detuning pulse. The ideal, infinite-bandwidth pulse (green) moves instantaneously to the interaction point. The realistic, 300 MHz-bandwidth pulse (red) has added edges on either side. The ideal (realistic) pulse takes 34 ns (40 ns) in theory. c Swap space. The green (red) circle shows the evolution resulting from the ideal (realistic) control pulse. The small green and red diamonds in the x-z plane are the eigenstates during each of these evolutions. After the operation, |11〉 has gained a phase of π relative to the other states. d State population in the eigenstate during the pulse. A large part of the population can be seen exiting and then re-entering the eigenstate.]

Experiment of nonadiabatic CZ gate
Our experiment uses two qubits from a 12-qubit superconducting quantum processor (Fig. 2a). The qubits 32,33 are arranged in a linear array and are capacitively coupled to their neighbours, and the coupling strength is measured to be 11.0 MHz, corresponding to Ts = 32.1 ns. Each qubit is also capacitively coupled to a readout resonator. Parameters for the qubits used in the experiment are listed in Table 1 and illustrated in Fig. 2b.

[Table 1. q3, q4 information for the CZ gate.]

Once the theoretical waveform for a CZ gate has been found in simulation, its actual fidelity is measured experimentally. First we measure and correct the dynamic single-qubit phase of the two qubits using quantum process tomography 34,35 (QPT). Once this is corrected, randomised benchmarking (RB) is used to optimise the fidelity of the CZ gate with NM. To maximise error contrast during optimisation, we select the number of Clifford gates to be 15 and the interleaved gate to be CZ (Fig. 3a), then generate interleaved sequences to measure gate fidelity. We observe saturation of the NM optimisation algorithm in less than 120 evaluations (resulting pulse shown in Fig. 3b), which implies this scheme is efficient (Fig. 3c, d). The resulting fidelity of the CZ is 99.54 ± 0.08%, which is measured by interleaved RB and fitted by the formula shown in Fig. 3e. The result is robust and stable for more than one week (see Discussion). To reduce the effect of noise and maintain highly accurate pulses during the optimisation, we calibrate the readout and energy levels as the RB is taking place. Readout is calibrated by measuring the distribution of |0〉 and |1〉, while Ramsey measurements are used to determine the frequencies of each qubit. This data is collected and used to calibrate the measured state population in post-processing and also to calibrate qubit frequencies by adjusting the DC current for the next NM evaluation. Including this calibration, the optimisation takes around 2 h to calculate the RB pulse sequences and to allow for communication with the control hardware, plus two more hours of on-chip operation.
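The interleaved-RB fidelity quoted above is obtained by fitting decay curves of sequence fidelity versus sequence length. The sketch below uses the standard exponential decay model F(m) = A·p^m + B and the usual interleaved-RB error estimate r = (1 − p_interleaved/p_reference)(d − 1)/d with d = 4; this is a generic stand-in with illustrative numbers, and the exact fit the authors used is the one shown in their Fig. 3e.

```python
# Generic interleaved-RB analysis sketch, assuming the standard decay model
# F(m) = A * p**m + B and error estimate r = (1 - p_int/p_ref) * (d - 1) / d, d = 4.
import numpy as np
from scipy.optimize import curve_fit


def decay(m, a, p, b):
    return a * p**m + b


def fit_depolarising_parameter(seq_lengths, seq_fidelities):
    popt, _ = curve_fit(decay, seq_lengths, seq_fidelities,
                        p0=(0.5, 0.98, 0.5), maxfev=10000)
    return popt[1]  # p


def interleaved_gate_error(p_reference, p_interleaved, d=4):
    return (1.0 - p_interleaved / p_reference) * (d - 1) / d


if __name__ == "__main__":
    # Illustrative synthetic decay data, not the measured sequences.
    m = np.array([1, 5, 10, 15, 20, 30])
    f_ref = 0.5 * 0.985**m + 0.5
    f_int = 0.5 * 0.979**m + 0.5
    p_ref = fit_depolarising_parameter(m, f_ref)
    p_int = fit_depolarising_parameter(m, f_int)
    print(f"estimated CZ error: {interleaved_gate_error(p_ref, p_int):.4%}")
```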
Expanding to CCZ gate
Simulation. This scheme proves useful not only for CZ gates but also achieves good results for optimising CCZ gates. 27,36 By applying a control pulse on qubit 3, the |111〉, |021〉 and |030〉 populations are brought to the interaction point, where they can mutually exchange population nonadiabatically (Fig. 4a-d). To expand the waveform search space, we add third- and fourth-order Fourier components to Eq. (1), increasing the number of waveform parameters to 11. Including two parameters to select the frequencies of qubits 2 and 4, the total number of variables is 13. By applying such a pulse, |111〉 gains an added phase of (2n + 1)π, and |110〉 gains an added phase of 2nπ (n being an integer). The other six basis states have almost no interaction, so they gain almost no extra phase. The DE algorithm is again used to optimise the CCZ waveform, obtaining a fidelity of 99.3% in simulation.

Experiment. We used QPT to measure the fidelity of the CCZ gate for on-chip optimisation, and the parameters for the qubits are shown in Table 2. Using the simulation result as the initial pulse, after about 50 NM evaluations the CCZ fidelity was measured to be 93.3% (Fig. 4e), with a gate time identical to simulation (78.5 ns). The discrepancy in fidelity between simulation and experiment is mainly caused by initialisation and readout errors, which limit the ability of the NM algorithm to converge accurately toward the optimal point.

DISCUSSION
Our result demonstrates that the difficulties of nonadiabatic gates can be overcome to the point that gate fidelities exceed those of adiabatic gates reported in the literature, which themselves have already passed the threshold for quantum error correction using the surface code. We notice that there has been much focus on adiabatic CZ gates, and theory has been developed to reduce state population leakage throughout the entire pulse. 17 But our work shows that as long as the population returns to the computational subspace by the end of the pulse, having population exit the eigenstate is acceptable. In practice, we find that state leakage is easily suppressed if the Z-pulse distortion calibration is performed well. Calibration and stabilisation of the qubits therefore deserve more attention. Our experiment has shown that finding high-fidelity pulses is realistic and viable. We tried NM optimisation for different qubit frequencies and found that we were always able to reach fidelities of 99% in about 150 evaluations even in a noisy electromagnetic environment, which is comparable to adiabatic implementations. 37 Continuing the investigation, we varied the energy level structure (frequency difference of the two qubits, anharmonicity, and coupling strength) and searched for the theoretical detuning pulses. We found nonadiabatic CZ gates with fidelities higher than 99.9% and gate times lower than 1.1Ts after 50-200 evaluations of the NM algorithm. Due to the NM algorithm's sensitivity to initial points, fidelities higher than 99.95% sometimes cannot be obtained using NM alone. However, using DE and NM together, we have always found fidelities higher than 99.95% after 200 DE iterations, which implies that good practical nonadiabatic pulses exist and can be found with sufficiently sophisticated search techniques.
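The two-stage DE-then-NM search mentioned here is straightforward to sketch with scipy.optimize. In the sketch below, the cost function is a toy placeholder standing in for the simulated gate infidelity, and the parameter count and bounds are illustrative; it only demonstrates the global-then-local search pattern, not the authors' actual pulse simulation.

```python
# Sketch of the two-stage search described above: a global differential-evolution (DE) pass
# followed by local Nelder-Mead (NM) refinement. `simulated_infidelity` is a placeholder for
# the pulse simulation returning 1 - F; bounds and the 7-parameter count are illustrative.
import numpy as np
from scipy.optimize import differential_evolution, minimize


def simulated_infidelity(params: np.ndarray) -> float:
    """Placeholder cost; in practice this would simulate the detuning pulse and return 1 - F."""
    target = np.linspace(0.1, 0.7, params.size)  # stand-in optimum
    return float(np.sum((params - target) ** 2))


def optimise_pulse(n_params: int = 7, seed: int = 1):
    bounds = [(0.0, 1.0)] * n_params
    # Stage 1: global search with DE.
    de_result = differential_evolution(simulated_infidelity, bounds,
                                       maxiter=200, seed=seed, polish=False)
    # Stage 2: local refinement with Nelder-Mead.
    nm_result = minimize(simulated_infidelity, de_result.x, method="Nelder-Mead",
                         options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-9})
    return nm_result.x, 1.0 - nm_result.fun


if __name__ == "__main__":
    params, fidelity = optimise_pulse()
    print(f"best (toy) fidelity: {fidelity:.6f}")
```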
The difference between experimental and theoretical fidelity is caused by the imperfect calibration of pulse distortion and the finite accuracy of practical control pulses. We also investigated the robustness of the CZ gate under control parameter fluctuations. We changed the simulated waveform parameters by up to 1 MHz and randomly adjusted each point on the resulting pulse according to a Gaussian distribution with a standard deviation of 1 MHz. The fidelity of the gate was found to never fall below 99.9%. Experimentally, the fidelity of interleaved sequences (m = 15, G = CZ) dropped from 68% to 65% after varying the waveform parameters in the same way. Also, the sequence fidelity (of the unaltered pulse) did not fall below 67% after one week. We conclude that high-frequency noise and long-term drift of the control equipment do not significantly affect the CZ gates. We expect nonadiabatic gates to be even more effective in the future. Although it is very likely that superconducting qubits with coherence times exceeding hundreds of microseconds will be regularly fabricated soon, decoherence error always contributes significantly to the total gate error. Our simulations show that the relationship between the nonadiabatic CZ gate error and the decoherence times is given by r decoherence = 0.38 T gate /T 1,low + 0.62 T gate /T 1,high + 0.45 T gate /T φ,low + 0.93 T gate /T φ,high , where high (low) denotes the qubit that involves (does not involve) the |2〉 state. According to the decoherence times measured in experiment, the incoherent error and the control error are 0.22% and 0.24%, respectively. The upper bound of the |20〉 leakage error is 0.22% and could be lower if we could measure the |20〉 population directly. Control error is likely to be mitigated experimentally in the near term by taking the following steps: First, reduce and characterise the distortion of the waveform as it travels from the arbitrary waveform generators (AWG) to the qubits, and perform the corresponding calibration more accurately. Second, improve the voltage resolution and sampling resolution of the AWG. Lastly, design a more efficient search algorithm that is resistant to noise, and find a different definition for the waveform parameters. 38 Finding high-quality control pulses and improving complex multi-qubit gates under experimental conditions remains challenging, as evidenced by continued research in this field. 21, 36 We believe that nonadiabatic detuning can be a powerful method 21,26,27,36 for two- and three-qubit gates. The errors in the simulated gates are of the same order of magnitude as the incoherent errors; additionally, the gate time is acceptably short (78.5 ns, 1.73/(2g 01,10 )). For these two reasons, our simulated pulses meet our experimental needs. However, as the initialisation and readout errors in QPT limit the experimental optimisation, it is essential to find a way to separate these errors and efficiently obtain the valid CCZ fidelity to further improve the CCZ gate. Nonadiabatic CZ and CCZ gates can be extended to medium-scale quantum computation. The time for which the system works stably is two orders of magnitude longer than the time required for optimisation, and the optimisation time can be significantly shortened by applying active reset technology. 39 By setting the qubits' frequencies appropriately and designing qubits with tunable coupling strength, we can realise the CZ (CCZ) gate on any two (three) adjacent qubits in quantum processors that contain hundreds of qubits or more.
The CZ (CCZ) pulses only involve adjacent qubits, and nonadjacent CZ (CCZ) gates can be optimised in parallel if the influence of cross-talk can be neglected. Hopefully, nonadiabatic CZ (CCZ) gates in quantum processors with hundreds of qubits can be completely optimised within an hour. Hamiltonian and control pulse for CZ in simulation The total system Hamiltonian is the sum of the static Hamiltonians, and each qubit is controlled independently by a tuning pulse. Qubits are modelled as three-level systems, and the qubit frequencies are 5.3 and 4.7 GHz for qubit a and qubit b, respectively. Both qubits are modelled with the same anharmonicity.

Fig. 4 (caption excerpt): ... and |030〉 states interact strongly (interaction point). b CCZ pulse applied to qubit 3, which obtains a fidelity of 99.3%. c Evolution of each state under the CCZ pulse. Dotted (solid) lines show the evolution when the initial state is |110〉 (|111〉). Both black lines (dotted and solid) hovering near unity for the entire pulse duration means that the pulse can be analysed by separating the evolution into two (nearly) independent subspaces. Note that |110〉 interacts strongly with |020〉, and that |111〉 interacts strongly with |021〉 and |030〉. |111〉 also interacts slightly with |120〉 in the |111〉-|021〉-|030〉-|120〉 subspace.

The AWG we used has a sampling rate of 2 GHz. So, when simulating the evolution, we sample the continuous waveform W every 0.5 ns and interpolate linearly between the sampled points to approximate the limitations of the hardware. On-chip optimisation and calibration The time overhead for one function evaluation on-chip is 50 s. We use 25 s to apply an RB sequence and 25 s to apply probing pulses used for real-time calibration. The whole optimisation takes 120 evaluations, which amounts to 6000 s of on-chip optimisation. Considering data packet loss and reoperation, the total operation time may increase to about 2 h. RB measurement. Each fidelity is obtained by averaging over 100 random RB sequences, and the fidelity of each random RB sequence is obtained from 1000 single-shot measurements. Each single-shot measurement takes 250 μs, including the RB sequence, readout, and qubit relaxation to zero. The total time to characterise one waveform is therefore up to 25 s. Real-time calibration. During the 25 s of measuring fidelity, we alternately insert a Ramsey sequence, a readout sampling sequence, and a decoherence-time monitoring sequence every 1 s and collect the data for calibration, which takes another 25 s. Z-pulse predistortion. We input a square pulse, which becomes distorted as it travels down the refrigerator, and measure the frequency of the qubit by Ramsey interferometry. Since a distorted pulse will change the frequency of the qubit, Ramsey measurements allow us to gain enough insight into the transfer function to predistort the pulses. We calibrated the pulse such that, 20 ns after the pulse, the distortion is less than 10 −4 times the pulse amplitude. Estimation of decoherence error We assume that every qubit has two decoherence channels, amplitude damping and phase damping, which can be quantified by T 1 and T φ . We have analysed the effects of T 1 and T φ for idle gates. We define the QPT fidelity as F = Tr(χ idle χ decoherence ) and find F = 1 − T gate /(2T 1 ) − T gate /(2T φ ) when T gate ≪ T 1 , T φ . In experiment, we measured T 1 and the idle gate fidelity (by RB), and then calculated T φ . We substituted the T 1 and T φ of both qubits into the formula for the nonadiabatic CZ gate decoherence error and found the decoherence error to be 0.22% for the CZ gate.
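The bookkeeping described above can be reduced to a few lines of arithmetic: invert the idle-gate relation F = 1 − T gate /(2T 1 ) − T gate /(2T φ ) to extract T φ , then feed T 1 and T φ of both qubits into the weighted decoherence formula quoted in the Discussion. The sketch below uses made-up coherence times and idle-gate numbers, not the measured device values.

```python
# Sketch of the decoherence-error estimate, using the idle-gate fidelity relation
# F_idle ~= 1 - Tg/(2*T1) - Tg/(2*Tphi) and the weighted CZ decoherence formula
# r = 0.38*Tg/T1_low + 0.62*Tg/T1_high + 0.45*Tg/Tphi_low + 0.93*Tg/Tphi_high.
# All numbers below are illustrative placeholders, not the device parameters.

def tphi_from_idle_fidelity(t1_us, f_idle, t_gate_us):
    """Invert F_idle = 1 - Tg/(2*T1) - Tg/(2*Tphi) for Tphi (all times in microseconds)."""
    residual = 1.0 - f_idle - t_gate_us / (2.0 * t1_us)
    return t_gate_us / (2.0 * residual)

def cz_decoherence_error(t_gate_us, t1_low, t1_high, tphi_low, tphi_high):
    return (0.38 * t_gate_us / t1_low + 0.62 * t_gate_us / t1_high +
            0.45 * t_gate_us / tphi_low + 0.93 * t_gate_us / tphi_high)

t_gate = 0.040                                  # 40 ns CZ gate, in microseconds
t1 = {"low": 20.0, "high": 25.0}                # assumed T1 of the two qubits (us)
tphi = {q: tphi_from_idle_fidelity(t1[q], f_idle=0.999, t_gate_us=0.020) for q in t1}

print("Tphi estimates (us):", {q: round(v, 1) for q, v in tphi.items()})
print("CZ decoherence error:",
      f"{cz_decoherence_error(t_gate, t1['low'], t1['high'], tphi['low'], tphi['high']):.4f}")
```

With realistic device numbers in place of the placeholders, this is the calculation that yields the 0.22% incoherent error quoted for the CZ gate.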
DATA AVAILABILITY Data available on request from the authors. CODE AVAILABILITY Code available on request from the authors.
2023-02-10T14:04:47.258Z
2019-10-08T00:00:00.000
{ "year": 2019, "sha1": "6bfa80af037bdc0e8b1f7158ab618fc738dbcbcb", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41534-019-0202-7.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "6bfa80af037bdc0e8b1f7158ab618fc738dbcbcb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
225420209
pes2o/s2orc
v3-fos-license
Expanded SEIRCQ Model Applied to COVID-19 Epidemic Control Strategy Design and Medical Infrastructure Planning The rapid spread of COVID-19 has demanded a quick response from governments in terms of planning contingency efforts that include the imposition of social isolation measures and an unprecedented increase in the availability of medical services. Both courses of action have been shown to be critical to the success of epidemic control. Under this scenario, the timely adoption of effective strategies allows the outbreak to be decelerated at early stages. The objective of this study is to present an epidemic model specially tailored for the study of the COVID-19 epidemic; the model is aimed at allowing the integrated study of epidemic control strategies and the dimensioning of the required medical infrastructure. Along with the theoretical model, a case study with three prognostic scenarios is presented for the first wave of the epidemic in the city of Manaus, the capital city of Amazonas state, Brazil. Although the temporary collapse of the medical infrastructure is hardly avoidable in the state of affairs at this time (April 2020), the results show that there are feasible control strategies that could substantially reduce the overload within a reasonable time. Furthermore, this study delivers an intuitive, straightforward, free, and open-source online platform that allows the direct application of the model. The platform can hopefully provide better response time and clarity to the planning of contingency measures. Introduction The rapid spread of the COVID-19 outbreak has become a very challenging situation due to its substantial demand for medical infrastructure. The lack of knowledge about the virus at early stages severely limits the effectiveness of available clinical treatments and thus contributes to longer hospitalization periods that might nevertheless result in the loss of a considerable share of patients, as shown in recent clinical publications [1][2][3][4]. As largely explained by political and medical authorities from several countries and the WHO (World Health Organization), the main problem of having a steep epidemic curve is the overload it might cause on health care infrastructure [5]. In this sense, not only medical materials and equipment but also medical personnel suddenly become insufficient, and personnel become especially prone to infection when subjected to an overload of working hours and a lack of adequate personal protective equipment [5]. As the world faces the consequences of a largely unknown infectious agent that spreads exponentially, the timely adoption of adequate control and mitigation strategies is of the utmost importance. In this context, the mathematical modelling of epidemic dynamics can provide a valuable contribution. Beyond helping to unveil prognostics of future behavior under different scenarios that are inaccessible otherwise, it allows the application of numerical parameters from previous studies and clinical expertise (see, for instance, [1][2][3][4]). Furthermore, while the focus of epidemic modelling is control strategies, the estimated epidemic timeline can help forge ways to optimize the maintenance of economic activities to their fullest yet below levels that are detrimental to epidemic control. Finding this tenuous and fragile equilibrium between epidemic control and economic activity is at the core of several debates [6].
In this sense, mathematical models can provide a rather solid basis for discussion, while their underlying assumptions can be openly scrutinized and updated to evaluate scope and accuracy of the resulting conclusions. Another factor that has gained importance as authorities acknowledged the lack of infrastructure to fight the epidemic is time. If countries can gain time by adopting adequate strategies, they can prepare better to properly handle the epidemic. As such, the study and evaluation of epidemic models can not only provide future scenarios that allow the dimensioning of the infrastructure but also test strategies to flatten the infection curve as much as needed to fit the achievable capacity of the health care system [7][8][9]. Following the seminal works on epidemic dynamics by Kermack and McKendrick [10][11][12], many epidemic models were proposed and studied in the literature, including but not limited to SIR [13,14], SIS [14], SIRS [14,15], SEIR [16,17], SEIS [18,19], and MSEIR [13]. Among those, the SEIR model following classical assumptions is a rather interesting starting point to the study of the COVID-19 since it is a rather simple model for which a substantial body of literature is available. e SEIR model consists of four population subclasses: S (susceptible), E (exposed, which are infected but not infective), I (infected and infective), and R (removed). e compartment R encompasses both recovered and deceased. e recovered individuals in this compartment represent the fact that once COVID-19 patients recover, they are considered to have built lasting resistance to the infection, at least to the timeframe of the first wave of the epidemic. e SEIR model was used as basis in recent studies about COVID-19 [7,9]. To allow the implementation of epidemic control strategies and dimensioning of medical infrastructure under different scenarios, the model presented in this study was expanded with new compartments. e objective of this study is to present an epidemic model tailored for COVID-19, and the model is aimed at allowing the integrated study of epidemic control strategies and dimensioning of the medical infrastructure required in different scenarios. e model is an expanded version of the well-known SEIR model, in which two new compartments are added: confinement of susceptible (C) and quarantine of infected (Q). ese compartments are aimed at allowing the implementation of two of the epidemic control measures that also include social distancing and vaccination, the latter as a future prospect in the case this scenario extends for a long period. Furthermore, three equations that account severe and critical hospitalization cases and the number of deaths are included to give an account of the burden on health care infrastructure. To ease the application of the model by a general audience, we provide an implementation of the expanded SEIRCQ model as an intuitive, straightforward, free, and open-source online platform (https:// dashseir.herokuapp.com/) that allows the direct application of the model with clinical and physical parameters that are accessible in the literature. To illustrate the application of the model, we also provide a complete case study for the city of Manaus, the capital city of Amazonas State, Brazil. Manaus shall face the collapse of the medical care capacity due the first wave of the coronavirus outbreak [20]. 
We use the model to evaluate the current state-of-affairs and show that they shall lead the infection curve to overwhelm the current medical care capacity for several months in case they remain unchanged. en, we test possible measures and strategies to mitigate the hazardous effects of the epidemic. As such, the justification for this manuscript is twofold: to introduce the rationale behind the working of the platform and to serve as a reference for its application. Epidemic Models. e study of epidemic models dates to 1927 with a set of three publications by Kermack and McKendrick, which were more recently republished in digital versions [10][11][12]. Since then, a number of models were proposed and studied, such as SIR [13,14], SIS [14], SIRS [14,15], SEIR [16,17], SEIS [18,19], and MSEIR [13], among others, which are known as compartmental models. e general idea is that the population is subdivided into compartments, each governed by a dynamical equation. e equations can be either deterministic or stochastic, and each compartment can be further stratified by gender, age, or any other feature that is regarded as relevant for a given study. Furthermore, the model can be either spatially homogeneous or heterogeneous, depending on the relevance of location to the dynamics of the epidemic. Several models have been proposed to describe the dynamics of the COVID-19 epidemic. Naturally, each modelling approach seeks to describe a set of variables of interest according to specific objectives. A fair share of the recent models appearing in the literature for the study of COVID-19 are extensions of the well-known SEIR (susceptible-exposedinfected-removed) model by Li and Muldowney [17], Roda et al. [9], López and Rodó [21], and Kuniya [7]. A study by Lin et al. [22] took as basis the SEIR model and expanded it by including extracompartments that were aimed at modelling the public perception of risk and the cumulative number of cases (both reported and not reported). In fact, the results presented in Wu and McGoogan [23] suggest that a considerable share of nonreported cases exist. us, modelling the cumulative number of cases, as done by Lin et al. [8], is likely to produce more realistic scenarios and prognostics. Furthermore, a step-like function is applied to model the zoonotic transmission at early stages, from which it was possible to estimate the approximate number of zoonotic infections that presumably occurred in Wuhan and trace the epidemic back to its origin. To deal with the lack of a long enough time series for COVID-19, Lin et al. [8] applied parameters from the influenza epidemic from 1918 in London, UK [24], to fit the model to the official data from Wuhan, China. e main difficulty associated with the application of this model is the definition of control strategy parameters (such as governmental action strength) since they cannot be straightforwardly mapped into collectible real-world parameters. Roda et al. [9] showed that the confirmed-case data are the main reason to the wide range of variations in model predictions for the COVID-19 epidemic using mathematical models, which pose a difficulty when estimating real transmission rates, since transmission rate coefficients in model calibration might have a very wide range. Roda et al. [9] also point that the confirmed-case data represent only a fraction of the total infected population. us, fitting the model to confirmed-case data is likely to produce misleading prognostics of future scenarios in terms of the peak values. 
COVID-19 Epidemic Parameters. e knowledge gained during the crisis in China provides several useful parameters for modelling. For instance, the work by Wu and McGoogan [23] considered over 72, 000 cases from the first wave of the epidemic in China and established that 81% of the cases were mild (all the way from asymptomatic to mild symptoms), 14% were severe (high respiratory frequency, low blood oxygen saturation, and lung infiltrates), and 5% were critical (respiratory failure, septic shock, and/or multiple organ dysfunction or failure). Furthermore, the authors also found the overall case-fatality rate of 2.3%, while the death rate was 49% among critical patients, and no deaths were reported in mild and severe cases. Wu and McGoogan [23] also presented numbers that hint that insufficient testing may have caused case-fatality rates to be overestimated approximately by 7-fold factor. Interestingly, a 7-fold factor was also found to be the factor of subnotification of infected individuals in the State of Rio Grande do Sul, Brazil, as concluded the first large field study based on massive testing of individuals [25]. e basic reproduction number in Japan considering the SEIR model was estimated by Kuniya [7] as R 0 � 2.6 (2.4 to 2.8 in a 95% confidence interval), which means that an infected individual will infect, on average, 2.6 new individuals. Meanwhile, considering the case of Wuhan, Tang et al. [26] found that the basic reproduction number can be as large as 6.47 (5.71 to 7.23, 95% CI) and Li et al. [22] found 2.2 (1.4 to 3.9, 95% CI) on the basis of analysis of 425 laboratory-confirmed cases. Regarding the basic reproduction number after social distancing measures, Wang et al. [3] considered a scenario with R m � 0.9, which later was found to provide an accurate description of the epidemic dynamics after the social distancing measures were implemented in China. As other important parameters to estimate the load upon medical care services, Chen et al. [27] calculated the average duration of hospitalization of 31 discharged patients and found 12.39 days ( ± 4.77, 95% CI). Meanwhile, Li et al. [22] calculated the average incubation period to be 5.2 days (4.1 to 7.0, 95% CI), while Bi et al. [1] found 4.8 days (4.2 to 5.4, 95% CI) on the basis of 391 cases and 1, 286 of their close contacts. To take advantage of the results from clinical studies, the next section presents the expanded SEIRCQ model in a formulation that allows explicit application of clinical and physical parameters. is is aimed at allowing straightforward application of medical literature about COVID-19. Materials and Methods is section gives an account of the proposed model and the epidemic control strategies to be applied. It also presents the region considered in the case study and the methods applied to acquire the model parameters. 3.1. e Expanded SEIRCQ Model. 
Consider a population with N individuals and the following stratification compartments: susceptible (S), exposed (E), infected (I), removed (R), quarantined infected (Q), and confined susceptible (C), which are represented in Figure 1 and related through a system of rate equations with the following parameters: R t is the average number of new infections caused by an infected individual, T INF is the average duration of the infection in an individual, R CNF is the rate of confinement of susceptible individuals, T CNF is the average time of confinement of a susceptible individual, R VCN is the rate of vaccination of susceptible individuals, T IMN is the average time for immunization of an individual after vaccination, T INC is the average incubation time of an individual exposed to the infection, R QRT is the rate of infected individuals entering quarantine, T LIC is the average time lag until an infected individual is put in quarantine, and T QRT is the average time of quarantine. As a result, the total population is given by N = S + C + E + I + Q + R. The parameter R t can be a function of time to describe the strategy of lockdown, during which the infection rate tends to be lower. For instance, to describe the transition from free circulation to isolation at time T ISO , one can define R t as a piecewise function with R t = R 0 during (−∞, T ISO ] and R t = R m from T ISO on. In this case, it is expected that R 0 > R m . It is supposed that a portion of the infected population will need hospitalization and a smaller portion will require hospitalization in an Intensive Care Unit (ICU). This portion of the infected population causes greater concern due to the prospect of overload in the health care infrastructure. To model the dynamics of individuals requiring hospitalization and individuals requiring intensive care, the model is expanded and two new state variables are defined, H and U, respectively. The rates of change in the number of individuals requiring hospitalization and those requiring intensive care are modeled in terms of R HSP , the rate of hospitalization in simple hospital beds; T HSP , the average duration of hospitalization in simple hospital beds; R ICU , the rate of critical cases requiring hospitalization in ICUs; and T ICU , the average duration of hospitalization in ICUs. In any case, we consider valid the inequality 1 ≫ R HSP ≫ R ICU , which represents the fact that the number of hospitalizations in common beds is a fraction of the current number of infected individuals, and that the number of hospitalizations in ICU beds is a fraction of the hospitalizations. Besides, the number of mild cases (those that do not require hospitalization) is represented by M. To assess the possible number of deaths in different scenarios, the number of deceased individuals is denoted by P. The number of fatal cases is modeled as a fraction of those who require intensive treatment. Note that, in practice, not all patients who require intensive treatment can access it, since the number of ICU beds is limited. In the model, we express this limitation by N ICU , which represents the number of ICU beds available to COVID-19 patients in this system. Thus, the rate of change in the number of deaths considers two strata of patients who need an ICU: those who have access and those who do not, each with a different mortality rate. It is expected that a patient who needs intensive care and does not obtain access will have a lesser chance of survival compared to a patient who needs intensive care and obtains access.
us, two mortality rates are defined, R DHT1 and R DHT2 , for patients with access and without access to ICU, respectively. e rate of change in the number of deaths is modeled as where U represents the number of patients in need of intensive care and obtains access to an ICU bed. e variable U receives the value of the state variable U up to the limit of ICU vacancies, N ICU , according to function In addition, T FOR defines the average survival time for a patient who needs intensive treatment but cannot access an ICU bed. In this way, a fraction R DHT2 of critical patients not able to find an ICU are removed from U and transferred to compartment P after a period of T FOR . Meanwhile, σ is a function that takes on binary values and controls the second term of equation (6). When U is greater than the number of ICU beds available, the function returns a unit value and the second term of the equation comes into play with its respective mortality rate R DHT2 and mean survival time T FOR . e mortality according to the increased rate R DHT2 is modeled as proportional to the number of people in need of intensive treatment but does not get access to a bed due to overcrowding. us, the σ function is modeled as Epidemic Control Strategies Using the SEIRCQ Model. e SEIRCQ model allows the application of four epidemic control strategies. e aim of this section is to present the rationale under each of them and define measures of effectiveness for each one separately and then to all of them combined. Quarantine of Infected Individuals. e strategy of quarantine of infected individuals seeks to remove from circulation those who are infected to neutralize their contribution to the spread of the infection. By decreasing the number of circulating infected, I, one has the multiplication R t SI assume lower values, such that the rate of change in the number of individuals exposed to the infection, given as decreases accordingly. To have the epidemic under control, it is necessary that the number of circulating infected individuals decreases, i.e., dI dt Reorganizing the terms in equation (10), one obtains Figure 1: Flowchart of the SEIRCQ model. from which it is possible to observe that T INC and T INF are parameters whose values are characteristic of the infection and thus not controllable. In turn, the quarantine rate R QRT and the time lag between infection and quarantine T LIC are controllable parameters that depend on the agility and effectiveness of identifying and isolating infected individuals. As one seeks to make the right side of the inequation as large as possible, one shall seek for larger values of R QRT and smaller values of T LIC . e strategy of quarantine of infected individuals usually works by flattening the epidemic curve. To that end, it is necessary that the number of quarantined infected individuals, Q, reach a proportion of the total number of infected individuals, I, in the population N. e coefficient of quarantine of the infected is defined as for I + Q > 0, and it measures how much a given set of quarantine parameters R QRT , T QRT , and T LIC perform in a given scenario. e coefficient ranges from 0 to 1 that correspond to 0 to 100% quarantine of infected individuals. Confinement of Susceptible Individuals. e strategy of confinement of susceptible individuals seeks to remove from circulation a fraction of those who are susceptible to neutralize their exposition to the infection. 
By decreasing the number of circulating susceptible, S, one has the multiplication R t SI assume lower values, such that the rate of change in the number of individuals exposed to the infection decreases accordingly. is strategy can be thought of as relaying a share of the workforce that is susceptible, for example, to a 70 − 30% proportion of circulating and confined, respectively. e rate of change in the confined susceptible is given by in which susceptible individuals enter according to a rate R CNF and confined individuals leave after being confined for T CNF days. As a confined individual leaves the compartment, he goes back to the compartment of susceptible individuals since during T CNF , the confined susceptible individuals are isolated from infected individuals. e confinement strategy usually works by flattening the epidemic curve. To that end, it is necessary that the number of confined individuals, C, reach a proportion of the total number of susceptible individuals, S, in the population N. e coefficient of confinement of susceptible is defined as for S + C > 0. e coefficient ranges from 0 to 1 that corresponds to 0 to 100% confinement of susceptible individuals. Social Distancing. e strategy of social distancing seeks to prevent gatherings, circulation, and exposition to deaccelerate the spread of the epidemic. It includes physical distancing in supermarkets, restaurants, and drugstores, for example, and the use of masks in public places. It is expected that the social distancing will reduce the value of the basic reproduction number, R t by means of restriction of social activities that may induce social interactions, such as circulation of public transport or cultural events, for example. Most commonly, only essential services are allowed to function and nevertheless under considerable changes in regulation. In terms of the equations, the strategy of social distancing works by decreasing the basic reproduction number, R t , such that multiplication R t SI will assume lower values and the rate of change in the number of individuals exposed to the infection decreases accordingly, as shown in equation (9). Lockdowns are considered energic measures due to the amount of changes they bring to the daily routines. Nevertheless, they were shown to be effective to deaccelerate infection rates although they may also cause undesirable economical side effects. us, the accurate dimensioning of the duration and severity of social distancing measures and the planning of its time window are fundamental to mitigate its negative effects on the economy while being effective in controlling the surge of infections. e level of what we refer to as social distancing will be calculated by means of the coefficient of social distancing: where R 0 > 0 is the basic reproduction number, i.e., the basic reproduction number before social distancing. Vaccination. e strategy of vaccination seeks to transfer individuals directly from the compartment of susceptible to the compartment of removed. It can be observed in the equations of the model that vaccination is the only form of getting immunized without being sick (infected). e equation for the rate of change in the number of susceptible individuals is dS dt where the last term of the equation represents the outflow of susceptible from the compartment according to a rate of vaccination R VCN and to a time lag T IMN , which represents the number of days necessary after vaccination until the body produces antibodies enough to fight the disease. 
When leaving the compartment of susceptible, S, the vaccinated individuals enter the compartment of removed according to the rate R VCN /T IMN . Measuring the Combined Effectiveness of Strategies. In the last section, the presentation of control strategies showed that they operate upon the reduction of the product R t SI, one at a time. Note that this term appears as a flow of individuals from the compartment of susceptible to the compartment of exposed and then to the compartment of infected, that is, S ⟶ E ⟶ I. When two or more strategies are applied simultaneously, their combined effect can be evaluated by means of what we call the effective isolation coefficient, EIC, defined as EIC = 1 − R t SI/[R 0 (S + C)(I + Q)], where R 0 > 0 is the basic reproduction number and (S + C) > 0, (I + Q) > 0; that is, the coefficient is defined for the period during the epidemic. The EIC allows the evaluation of the relation between the severity of the isolation measures and their results upon the infected population. When R 0 = R t and Q = C = 0, EIC = 0 and there is no isolation at all, whereas R t = 0 or S = 0 or I = 0 with R 0 (S + C)(I + Q) > 0 yields EIC = 1, which means that there is no transmission. This coefficient will be applied to the scenarios studied in Section 4. Thus, we reserve the term social isolation to refer to the combined effects of social distancing, confinement of susceptible, and quarantine of infected individuals. The Parameters. The clinical parameters used in the model were adopted from previous clinical studies on the assumption that they are roughly the same everywhere. This assumption might be proven mildly inaccurate in the near future, but considering the lack of consolidated data and results at the current moment (April 2020), this seems to be the most viable way to get to a functional and working model for prognostics and strategy planning. Considering several studies in the literature, the parameters of the model are set to the values shown in Table 1. The values of R t , R HSP , R ICU , and R DHT1 were calculated from the information on confirmed and active COVID-19 cases provided by the Health Secretary of the State of Amazonas [20]. Currently, the amount of data is insufficient to calculate R DHT2 , and thus it was arbitrarily set 30% higher than R DHT1 . The values for R t ( * ) and R t ( * * ) resulted from model calibration to the COVID-19 confirmed positive data from Manaus, made available by the Health Secretary of the State of Amazonas [20]. Furthermore, the first study of considerable size and scope in Brazil applying widespread testing of individuals chosen at random concluded that subnotification of COVID-19 is considerable, having found that, for every confirmed case, there are at least seven infected individuals (EPICOVID19 [27]). We apply this 7-fold factor to adjust the hospitalization rate values R HSP and R ICU presented in Table 1. The death rate R DHT1 is maintained since it is applied upon the rate of critical individuals, R ICU . Since it was not possible to access consolidated data regarding the number of ICU beds available for COVID-19, it was set to N ICU = 148, which represents about 30% of the total number of ICU beds in the State of Amazonas. This is only a reference number and might not reflect the real availability. The number of simple hospital beds for COVID-19, N HSP , was set to 1,000 as a reference value in the absence of updated information. 3.5. The Model Dashboard.
e model dashboard was implemented using Python 3.7 programming language and the Dash Plotly library [28]. e equations are integrated numerically using the fourth-order Runge-Kutta method, with an integration step △t � 1. e model is presented in the form of an interactive dashboard, and it is available online at https://dashseir.herokuapp.com/. e working of the dashboard is further detailed in the flowchart in Figure 2. 3.6. Model Calibration. Many different groups have delivered relevant research and data concerning the coronavirus outbreak as the epidemic is ongoing and evolving from country to country. It is important to note that official data are hampered by a number of sources of uncertainty: undertesting, lag of test results, and lack of accuracy in infection dating, among others. Furthermore, all research is preliminary in the sense that they deal with rather problematic data sources which are likely to be revised and consolidated in the future. e characteristics of the epidemics and the current stage of promptness of health organs and laboratories worldwide make the infection curve lead the official confirmed cases by a period that might well be of about 10 days or more. us, for now, the calibration of epidemic models based on confirmed infection data can lead to highly inaccurate prognostics. To overcome this limitation, the model was formulated as much as possible on the basis of parameters that can be directly mapped to those of clinical studies. On the contrary, parameters depending on social behavior were calibrated on the basis of available data, such as the reproduction number R t . e reproduction numbers for the periods before and during lockdown were calculated by fitting the model to the available official data from the Health Secretary of the State of Amazonas [20]. Calibration processes were performed and were aimed at finding the value of R t that minimizes the root mean square error (RMSE) between the data and the model outputs. Case Study. Manaus is the capital of Amazonas state and is located on the banks of Negro river in Northwest Brazil. As the largest city of the state, it occupies an area of 11, 401 km 2 , from which 427 km 2 is an urban area, as represented in Figure 3. Manaus currently holds a population of about 2, 15 million people and has recently been pointed as one of the first Brazilian cities that shall face the collapse of health care facilities due to the coronavirus epidemic. Although the actions of the State Government in tackling a possible epidemic have been important to increase the number of available beds for the hospitalization of serious cases and the number of health professionals, the cases of COVID-19 rapidly increased since late March to reach over a thousand cases by April 12th. Temporary hiring of human resources, purchase of supplies, equipment for laboratory analysis, and personal protective equipment (PPE) are some of the actions that were announced. To expand the supply of health professionals available to deal with the pandemic, the State University of Amazonas (UEA) announced the anticipation of the graduation of 79 doctors, 28 nurses, and 21 pharmacists by six months [20]. e measure aims to optimize the availability of health services and increase the service capacity of public units, especially those receiving suspected and confirmed coronavirus patients. 
On March 23rd, the Amazonas state government declared a state of public calamity, suspended activities of public organs, and mandated new hygiene and monitoring procedures in industries. However, the rapid growth in the number of cases from then until the first days of April prompted the state to take more forceful measures. On April 4th, the state government announced further restrictive measures to contain the epidemic: schools were closed and classes were broadcast on open TV channels; only essential services remained working; and the transportation of passengers in buses, taxis, and vans was suspended on federal and state roads. Since then, the number of cases and deaths kept growing rapidly, and the capital Manaus became a major concern due to the concentration of confirmed cases and deaths of coronavirus patients. According to official data, of the current 863 active cases, 117 are hospitalized in mild condition and 77 are critical and occupying an ICU bed. Furthermore, there are currently another 366 hospitalizations of cases under investigation, 313 in regular beds and 53 in ICU beds [20]. The evolution of the epidemic in Manaus (AM) is presented in Figure 4. From the official confirmed active cases, one can calculate the percentages of hospitalization in regular and ICU beds to be 13.6% and 8.9%, respectively. Although these are preliminary data, they are the major source of information in the current situation, and thus these parameters will be applied in the model to evaluate the deficit in medical infrastructure in each scenario. Model Application The expanded SEIRCQ model was run on the data from Manaus in order to allow the calibration of the parameter R t , which depends on social behavior. The number of ICU beds used in this study corresponds to those informed in official publications. The curves in Figure 5 correspond to the initial phase of the epidemic, before the first governmental restriction decree took effect. Recall that there is a delay between exposition and symptoms (if any) and another delay between symptoms and testing/confirmation by official organs. The time lag between the first social distancing measures and their visible effects in the epidemic curve was about 14 days. The epidemic growth already shows a slowly decreasing growth rate. Model calibration for the available data of the first social distancing period resulted in a reproduction number R t = 1.51. The calibration curves are shown in Figure 6. As plotted in a single graphic window, the model and the data curves present a sigmoidal shape, which may indicate that the epidemic is reaching a plateau. Is that so? The next section shows that if current social distancing measures are maintained at the same levels, the number of confirmed cases would grow steadily for weeks before starting to decelerate. The calibration showed that the infection curve started to decelerate in the beginning of April, about 14 days after the isolation measures became effective. As expected, there is a time lag between the implementation of isolation measures and their visible effects in the infection curve. This should also be taken into account in planning epidemic control measures. As the model results describe the infection curves, all the dates mentioned will refer to the official confirmed data, which means that they are presumably delayed relative to the real infection curve.
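The calibration step referred to above (Section 3.6) chooses R t by minimizing the RMSE between the model output and the official confirmed-case series. The following is a simplified, self-contained sketch of that kind of fit; it uses a plain SEIR core rather than the full SEIRCQ dashboard code, a forward Euler step instead of the dashboard's Runge-Kutta scheme, and a synthetic "observed" series, so every number below is a placeholder.

```python
# Simplified sketch of RMSE-based calibration of R_t against a case series.
import numpy as np
from scipy.optimize import minimize_scalar

T_INC, T_INF = 5.2, 10.0      # incubation and infectious periods (days), Table 1-style values
N = 2_150_000                 # approximate population of Manaus

def run_seir(r_t, days, e0=50, i0=10):
    """Forward-Euler SEIR run with a 1-day step; returns cumulative infections."""
    s, e, i, cum = N - e0 - i0, e0, i0, i0
    series = []
    for _ in range(days):
        new_exposed = r_t * s * i / (T_INF * N)
        new_infectious = e / T_INC
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - i / T_INF
        cum += new_infectious            # proxy for the reported cumulative curve
        series.append(cum)
    return np.array(series)

# Synthetic "observed" data generated with R_t = 1.6 plus 5% noise, for illustration.
observed = run_seir(1.6, 30) * (1 + 0.05 * np.random.default_rng(0).standard_normal(30))

def rmse(r_t):
    return float(np.sqrt(np.mean((run_seir(r_t, len(observed)) - observed) ** 2)))

fit = minimize_scalar(rmse, bounds=(0.5, 6.0), method="bounded")
print(f"calibrated R_t = {fit.x:.2f}, RMSE = {fit.fun:.1f}")
```

In the study itself the same idea is applied twice, once to the pre-decree data (giving R t = 3.55) and once to the period after the first distancing measures (giving R t = 1.51).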
Next, we present three case studies that consider different scenarios for 180 days following the first efforts of social isolation which occurred in March 2020. ree scenarios are considered as prognostics of the epidemic: Scenario 1: Epidemic Prognostics under Mild Social Distancing Measures (as Currently Adopted). e following simulation presents a prognostic of the likely scenario under EIC � 0.57, which roughly corresponds to the calibrated parameters that reflect the current situation by April 2020. Although the current measures helped reduce the rate of infection as compared to the initial stages of the epidemic, the model shows that the sigmoid-shaped curve would not lead to a plateau in the following days, as one might believe by checking Figure 7. Rather, the number of infected people shows a tendency to accelerate in the following weeks. e hospitalization curves for this scenario are shown in Figure 8. Note that, under this scenario, the number of ICU beds would be exceeded in late April and the number of patients continue to rise for about two months. Under this scenario, the epidemic would reach a peak only in the period from mid-June to early July and drag the health care system into a period of overload, in which case the available medical infrastructure would be greatly exceeded. Indeed, Manaus and more broadly the Amazonas state are known to have a low average number of ICU beds of about 1.24 to each 10.000 inhabitants. e problem would be worsened by the insufficient reduction of the basic reproduction number under distancing, which is an effect of social behavior. For the same scenario, Figures 9 and 10 show the peak of the epidemics expected to happen in mid-June. If hospitalization rates continue at the same levels, the hospitalizations would amount to a couple thousands at the peak, as shown in Figure 10. Note that, under this scenario, the medical infrastructure would be greatly and rapidly exceeded beyond any feasible perspective of building new ICU beds. Figure 11 shows the effective isolation coefficient for scenario 1. Of course, as the perspective of such a scenario becomes a reality, more strict measures are likely to be taken and social behavior naturally would change. How about alternative scenarios of more severe social distancing measures? How would they impact the overload in the medical infrastructure? is is a question on which the next section attempts to shed some light. Scenario 2: More Strict Social Distancing Measures. By making the distancing measures more severe, the epidemic curve would reach a maximum and then start to decline by the end of May. At this time (April 2020), Manaus is facing the beginning of the exhaustion of the medical infrastructure. In a scenario of more intense social distancing measures in which the reproduction number would decrease to R t � 1.10, the curve would reach a peak and then slowly start to decline. As such measures last for about 90 days, they would support a consistent reduction in the number of infected people. Figure 12 shows the prognostics of the numbers of deaths under this scenario, and Figure 13 focuses on hospitalizations. It is noticeable that even though the epidemic is controlled under 0.70 EIC, the number of ICU beds would be insufficient. Under this scenario, the number of critical patients would reach nearly double of available ICU beds. 
It is important to note that Manaus concentrates the ICU beds in the State of Amazonas, and thus, the total count of ICU beds required statewide would be considerably higher. Moreover, under this scenario, about 3 months ahead of April, the number of required ICU beds would be nearly the total capacity, as shown in Figure 14. It should be noted that the model does not take into account that equipment may become unavailable as they break, as well as medical staff might themselves be unavailable as they might be infected during the epidemic. It is noticeable that even under more strict distancing measures, the existing infrastructure would be insufficient. Figure 15 shows the EIC for scenario 2. Under this scenario, a consistent decline in the number of infected is observed and the number of infected individuals would return to the current level after a steep ascent in the last week of April. However, strict social distancing measures might be hard to implement in practice since there are economic issues that have to be taken into account. ere are other alternatives to help handle this situation. e next section gives a hybrid approach in which three epidemic control strategies are simultaneously applied to reduce infection rates while also maintaining the economy running as much as possible. Scenario 3: Severe Social Distancing and Confinement, Followed by Quarantine of Infected Individuals. is scenario considers an EIC of about 0.80 during four to six weeks, such that the spread of the virus is further reduced relatively to the previous scenarios. In this scenario, the reproduction number is decreased to R t � 1.10 by more severe social distancing measures. is period would ideally start as soon as possible (the simulations considered late April) and last for about 45 days. After that, the more strict social distancing measures could be relaxed to levels comparable to those imposed by the government decree from March 23 but added with two other strategies: confinement (of susceptible) and quarantine (of infected). e confinement of the susceptible is at the rate of 2% per day, that is, R CNF � 0.02 and T CNF � 15, which would result in a peak of about 20% of the total susceptible population being confined. About four weeks later, as the more severe social distancing period reaches about 2/3 of its duration, the quarantine of infected individuals would take place with R QRT � 0.10, T QRT � 15 days, and T LIC � 2 days. It shall be remarked that the quarantine of infected individuals heavily relies on testing, clinical diagnostics, and adequate social behavior. e period of more severe social distancing measures would give authorities time to put into practice testing campaigns starting in late May or early June. Under such conditions, the epidemic would be maintained under control after a peak in mid-May, as shown in Figures 16 and 17. As a result of controlling the epidemic, the curve of hospitalized individuals is found to behave satisfactorily since it remains below the presumable number of available beds. In this case, the peak would overwhelm the medical infrastructure during a short period after which the number of hospitalizations in ICU beds would consistently decrease, as shown in Figure 18. e EIC for scenario 3 is shown in Figure 19. is strategy would require that widespread testing be performed upon mid-June 2020 upon the mild relaxation of social distancing measures. e Importance of Mathematical Models in is Epidemic. 
As the number of cases of COVID-19 in the world surpasses a couple millions, the effects of exponential growth seem to continue misunderstood. is is quite problematic since most of the success or failure of strategic control measures relies upon how they are implemented in practice by society as a whole. In this sense, models can have a widespread didactic effect upon the individual comprehension of the dynamics of the epidemic and how fast things can deteriorate. As the world goes through one of the most challenging periods in recent history, the application of all available knowledge that can gain us time is needed to deal with the limitations of health care systems and search for effective treatments that can help reduce the death toll. In this context, gaining time is mainly a matter of strategically dealing with this first wave of the epidemic. To this end, scientific, medical, and political personnel are working to flatten the curve as much as possible and make it fit into the health care system capacity. Naturally, not the same strategy will work everywhere. Meanwhile, there is enormous concern about the impacts of the epidemic and of the control strategies in the economy, as well as about a second wave of the epidemic. As such, humanity is currently living in an ongoing multiobjective optimization problem whose outcome greatly depends on early action. In this sense, the application of mathematical models can be worth an alternative to gain time to find solutions that can be effective to simultaneously preserve public health and mitigate the deep effects that the epidemic shall have upon Figure 19: Cumulative effect of the isolation measures. Effective isolation coefficient for the third scenario with three control strategies in use, which would stabilize the epidemic faster and allow greater retake of economical activity (interrupting mild distancing and implementing severe distancing along with confinement of susceptible and then effective quarantine of infected). production, employment, and the economy as a whole. While the obvious best solution from the point of view epidemic control is a complete lockdown for about 4 to 6 weeks, this is not economically affordable nor physically feasible for many cities, states, or countries without severe collateral damage, such as the consequences to economically vulnerable people. As such, alternative strategies must be considered in such cases, and they should be tested and perfected in mathematical models before application. From the Mathematical Model to the Real Epidemic. Regarding the results of mathematical models and their application in practice, some matters of utmost importance are (i) how to translate and relate the theoretical control actions and parameters into practical control parameters of the real-world epidemic and (ii) how to evaluate the evolution of the control measures in the real-world epidemic? While the results of control strategies from the models seem promising, they are of no practical interest if we fail to map them to real life. To illustrate how this can be done, let us consider one strategy at a time. Social Distancing. Social distancing was shown to have effect in the reproduction number, R t , since the parameter represents the interaction between individuals that will result in exposition to the infection. is study found the basic reproduction number as R 0 � 3.55 for Manaus and then R t � 1.51 after the government decree of March 23rd, imposing a rather mild set of social distancing measures. 
Social distancing in Brazil has been measured by means of smartphone apps, and for the period under study, the State of Amazonas achieved indices of reduction of mobility of the order of 50%, which reflected in the reduction of the reproduction number in the period. us, although monitoring methodologies may find enhancements in the following weeks, they seem to be sufficiently adequate to allow the verification of the control parameter R t in practice. While other methodologies might be available, such image processing from aerial images by vants or street surveillance cameras, the individual localization of a fraction of the population by means of smartphone apps seems to give a sharper account of the mobility dynamics of a given geographic area. To this end, the mapping of EIC to isolation indices from smartphone monitoring can help provide a link between the model parameter and the real-world epidemic. Confinement of Susceptible Individuals. e simulations for scenario 3 applied the confinement rate R CNF � 0.02 (or 2% of individuals per day) and confinement period T CNF � 15 days. While the confinement period is rather intuitive and easy to understand, the confinement rate might be a bit trickier, especially when it comes to translating it into clear instructions on how to proceed in practice. For a big company, it could represent the entering of 2% of its (supposedly) susceptible employees in isolation in a daily basis and leaving isolation after T CNF days. is would create a flow of susceptible to confined and back to susceptible that for these parameter values would reach a peak of 20% of the employees confined simultaneously. Although this can be clear enough as an instruction to employees in a big company, it would be rather confusing and hardly feasible to society as a whole. us, a different approach would be to confine individuals in the proportion of the peak of the curve. In the example of Manaus, where about 400, 000 would be confined in the peak, this would represent roughly 20% of the population. us, as a rule of thumb, after checking the strategy in the model, the government could suggest the confinement of 20% of (supposedly) susceptible individuals to be implemented everywhere except basic services. e results of such measure could also be assessed using the monitoring of the location of smartphone devices. It is important to note that this strategy works best upon testing to assure that the individuals being confined are effectively susceptible ones. Quarantine of Infected Individuals. e simulations in Section 4.2.3 applied quarantine rate R QRT � 0.10 (or 10% of the infected individuals that were identified during the period T LIC ), quarantine period T QRT � 15 days, and average delay until quarantine of an infected individual as T LIC � 2 days. While the confinement strategy deals with individuals that cannot spread the infection by themselves, the quarantine deals with more hazardous elements of the population that should be monitored more closely. As tests are lacking in Brazil and the ones applied take long until any conclusive result is achieved, this strategy has been applied on the basis of clinical exam of symptomatic individuals. us, under the current conditions and considering recent studies that found that, for each COVID-19 confirmation, there are about at least 7 asymptomatic or mild positives that were missed, it would be very far from reality to set the quarantine rate to much larger values than 10%. 
Quarantined infected must be monitored by health agencies during T CNF to guarantee that they comply with the quarantine period under the right conditions to make it effective In practice and with time, this strategy would be equivalent to effectively quarantine about 70% of the infected individuals at the peak since the strategy would greatly reduce the inflow of newly exposed individuals. is requires intensive testing even in asymptomatic individuals, and for this reason, it was left for application from early June such that authorities have time to prepare massive testing campaigns. As shown in the simulations for the case of Manaus, having this strategy taken into effect would allow that the number of infections progressively decreases along with the number of quarantined individuals after about a month. Under these parameters, the number of quarantined and the number of infected in circulation become roughly the same with time and decrease together to reach values that would clear the overload of the medical infrastructure by the mid-June, 2020. e Model Results and Infrastructure Planning. e results indicate that the existing medical infrastructure shall be overwhelmed soon under all scenarios considered in the study. How overwhelmed will fundamentally depend on the course of actions and measures from the last weeks of April on. e first scenario considers the continuity of the current distancing measures reaching an EIC of about 0.57 and, in this case, the number of infected individuals skyrockets. In this case, the number of required simple beds and ICU beds would go well beyond the possibility of the current medical infrastructure and possibly any new infrastructure that might be added in the following weeks. e second scenario considers intensifying the current distancing measures such that the EIC reaches about 0.70 and the maintenance of this scenario for at least ten to twelve weeks. In this case, further maintenance of distancing measures would cause a consistent decrease in the number of infected and hospitalized and bring the number of infections under control a few months later. In this scenario, the number of ICU beds would be overwhelmed by about 100% of capacity as well in the peak of this first wave of infection. Although this is well over the current capacity, newly installed ICU beds can be achieved timely if isolation measures are taken soon and the medical infrastructure is expanded accordingly. e third scenario considers intensifying the current distancing measures such that EIC reaches 0.80 with the confinement of susceptible and quarantine of infected individuals for four to six weeks, along with confinement of susceptible that would reach 20% of confined at peak. After 30 days, the policy of isolating and monitoring infected individuals would be established, and after 45 days, the measures could be relaxed to the levels observed in early April. Under this scenario, ICU beds would be overwhelmed during a much shorter period. is is a deficit more easily reachable by building new infrastructure than the previous ones. e Effect of Isolation. From the EIC values obtained in each scenario, the effect of isolation upon the epidemic curves became clear. Compared to one another, the strategy presented in scenario 3 shows far more effectiveness than the other two, yet it allows further relaxation of isolation in the period following the period of severe isolation. 
This can give some insight into the role of severe social distancing measures in allowing a more rapid resumption of economic activities. The EIC for the three scenarios is presented in Figure 20. Concluding Remarks The scenarios presented in this study show that the outcomes are highly dependent upon the strength, timing, and duration of application of epidemic control strategies. As an unprecedented epidemic in recent history and the first worldwide epidemic of large scope to hit the globalized and interconnected world, the coronavirus outbreak is a novel experience and, as such, has demanded the mobilization of efforts from the most varied fields. As some countries and cities are especially prone to the occurrence of major human disasters in the midst of this pandemic, it is urgent that control strategies are meticulously studied and adjusted as necessary in order to fulfill their function of controlling the epidemic, minimizing deaths, and minimizing the economic loss that leads to poverty and misery. The case of Manaus is very emblematic since the epidemic is at its beginning (as this manuscript is written, in mid-April 2020), and there are ways of avoiding more severe consequences if actions are taken in a timely manner and with the severity that the current situation calls for. Many other possible measures and strategies can be tested using the same set of parameters, or new ones that eventually become available in the coming days in the medical literature. Nevertheless, the current state of affairs indicates that the medical infrastructure shall be overwhelmed for at least four weeks regardless of any actions taken to contain the epidemic at the moment. This further illustrates the importance of timely and adequately strong isolation measures. This is especially relevant because the epidemic curve responds to isolation measures with a delay of about two weeks, which means that any actions taken now will be reflected in the curves in about two weeks. This study aimed at presenting a model with simple application for the study of COVID-19. We presented the SEIRCQ model and its dashboard, developed and implemented in the Python language. The dashboard allows the reproduction of the studies presented in this research and several others, and it is available online free of charge at https://dashseir.herokuapp.com/. We encourage the application of the dashboard by health system authorities for applied studies, such that they can draw their own possible scenarios when dealing with strategies to combat the epidemic. Furthermore, a case study is presented of the first wave of the epidemic in Manaus (AM), Brazil, with prognostics for a time window of about 150 days under three hypothetical scenarios. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
2020-08-13T10:10:54.980Z
2020-08-08T00:00:00.000
{ "year": 2020, "sha1": "abfa12912d592964c81bb0d7dd84c14f58a018b8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2020/8198563", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "055a44216d676025abcf21ec9a8a9112eda7131f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Business" ] }
227330429
pes2o/s2orc
v3-fos-license
Survey data on income, food security, and dietary behavior among women and children from households of differing socio-economic status in urban and peri-urban areas of Nairobi, Kenya This article describes data collected to analyze consumer behaviors in vulnerable populations by examining key access constraints to nutritious foods among households of differing socio-economic status in urban and peri-urban areas of Nairobi, Kenya. The key variables studied include wealth status, food security, and dietary behavior indicators at the individual and household level. Household food insecurity access scale (HFIAS), livelihood coping strategies (LCS), food expenditure share (FES), food consumption score (FCS), household dietary diversity score (HDDS), minimum dietary diversity for women (MDD-W), and child dietary diversity score (CDDS) indicators were used to measure food security. Household assets were used to develop an asset-based wealth index that grouped the study sample population into five wealth quintiles, while income levels were used to estimate FES. The hypothesis that guided the cross-sectional survey conducted to generate these data is that vulnerability to food insecurity and poverty are important drivers of food choice that influence household and individual dietary behavior. Data from this study were thus used to assess the direction and strength of association between household food insecurity, wealth status, and women's, children's, and household dietary behavior in both the urban and peri-urban populations sampled. Value of the Data • These data fill a significant knowledge gap and provide an overview of food security and a nutrition profile of low- and medium-income households in Nairobi's metropolitan urban and peri-urban areas. • These data can be used by local communities in which the survey was administered to address food security and nutrition challenges through community-led initiatives targeting the entire population and vulnerable groups such as children under five and pregnant women.
• Findings from these data can guide policymakers, advocacy teams, and implementers of food security interventions in the development of complementary and corrective policies and nutrition programming for vulnerable low-and medium-income households in Kenya's urban and peri-urban areas. • Program implementers can use these data to guide the development of appropriate interventions or to justify targeting for interventions for vulnerable households. The private sector, particularly the food processing sector, can use this information to develop affordable, innovative food products for low-income consumers. Data Description The data was collected in the context of a Cultivate Africa's Future (CultiAF) project on pre-cooked beans, led by the Kenya Agricultural and Livestock Research Organization (KALRO) and within/under the CGIAR Center Research Program (CRP) on Agriculture for Nutrition and Health (A4NH) in November 2015. A cross-sectional household survey was administered to collect nutrition, income, and demographic indicators used to assess access constraints to nutritious foods among urban and peri-urban consumers. Urban consumers were sampled from three subcounties of Nairobi County: Kibera, Dandora, and Mukuru Kwa Njenga. Peri-urban consumers were sampled from Athi River sub-county in Machakos County and Juja sub-County in Kiambu County. Data was collected using Open Data Kit (ODK). Sampling was undertaken among households with at least one child aged 6-59 months. In each household sampled, the target child, whose nutrition and anthropometric information was collected was indexed to track information collected. The primary caregiver of the index child, who was in most cases the child's mother, was purposively selected as the survey questionnaire respondent. The questionnaire used to collect the data has been edited and added to the Dataverse repository containing the data. A data dictionary which contains and describes all the 1147 data variables in the data is available Dataverse repository as Microsoft excel file adjacent to the dataset and questionnaire. Fig. 1 presents the geographical study sites where data was collected, while Fig. 2 shows the respondents' distribution among the study sites. Fig. 3 presents the Food Consumption Score (FCS) that shows the diet diversity and frequency of food consumption characteristics, by showing the proportion of households with their corresponding FCS per location. Fig. 4 presents the household Food Expenditure Share (FES) used to measure household food security characteristics by acting as an income proxy. Table 1 presents the minimum women's diet diversity (MDD-W) characteristics by showing the proportion of women from each age group that consumed food from each of the ten food groups. Survey site and population The study sites were purposely selected based on the Living Standards Measurement Study (LSMS) to include the following income classifications: low income (Kibera), upper-low income (Dandora and Athi River), and medium income (Mukuru Kwa Njenga, Juja, and Athi River). Population data from the Kenya National Bureau of Statistics (KNBS) census was used to determine the number and location of households sampled. A random sampling technique was employed to select sample households from a population of 327,745 households. A total of 354 households were sampled: Kibera ( N = 98), Dandora ( N = 67), Athi River( N = 60), Mukuru Kwa Njenga ( N = 89), and Juja ( N = 40). 
The sample size was calculated to achieve 80% power at α = 0.05 to ensure the study would detect the effect of either food insecurity or wealth inequality on poor dietary diversity at both the household and individual levels. Oral consent was obtained from the respondents before conducting the interviews. The survey targeted household representatives with adequate information on household food consumption, food intake by the index child (6-59 months), and index woman (biological mother or caregiver of the index child) as the study respondent. Questionnaire modules The questionnaire modules were designed to collect data to ascertain whether these three study hypotheses were true, for both urban and peri-urban populations: 1. Households that are asset-or income-poor are food insecure. 2. Households that are income-or asset-poor have poor dietary diversity for the household, women, and children. 3. Households that are food insecure have poor dietary diversity for the household, women, and children. The principal study variables collected included: household identification, household roster and demographics, dwelling characteristics, market access, household food consumption, house- hold non-food expenditure, infant dietary diversity, women's dietary diversity, household hunger scale, hunger coping strategies, and household income sources. Study questions that informed responses to the direction and association of the variables were: 1. Is it that wealth status affects food security, which then affects dietary behavior? 2. Or does food security, independent of wealth status, affect dietary behavior? 3. Is the pattern of strength and direction of association similar for both urban and peri-urban populations? Data collection and survey implementation The questionnaire used for data collection was designed and then coded in Open Data Kit (ODK) to facilitate mobile data collection. The ODK form was hosted on a SurveyCTO cloud server. Enumerators used android tablets to collect data and transmitted it to the server on a daily basis. A data manager monitored the data received in the server and ran data quality checks for inconsistencies, patterns, and outliers, providing feedback to the field teams to improve performance. Data were collected by trained enumerators associated with KALRO. All enumerators had a university bachelor's or higher-level degree. All spoke English and Kiswahili languages fluently and were experienced in data collection in urban and peri-urban areas. Before data collection commenced, the enumerators received a four-day mandatory training on the questionnaire and Computer Assisted Personal Interviews (CAPI) enumeration skills using tablets. The survey intentionally targeted one primary caregiver of the index child in each household, usually the mother, so as to improve the accuracy and detail of child dietary diversity scores (CDDS) and MDD-W parameters. As a result, 94% of the study respondents were women. Data was collected using a face-to-face interview technique. The questionnaire was structured to generate both qualitative and quantitative data using a combination of open-ended and closed-ended questions. Household characteristics This module was used to collect data on household location, roster, demographics, and dwelling characteristics. Data on assets and dwelling characteristics were used to generate the household asset-based wealth index. 
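A common way to build such an asset-based wealth index is a principal component analysis of standardized asset-ownership indicators, taking each household's score on the first component as the index and cutting it into quintiles. The sketch below shows this approach with pandas and scikit-learn; the asset columns are hypothetical placeholders, not the questionnaire's actual asset list.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def wealth_quintiles(assets: pd.DataFrame) -> pd.Series:
    """Asset-based wealth index: first principal component of standardized
    asset indicators, split into five quintiles (1 = poorest, 5 = wealthiest)."""
    z = StandardScaler().fit_transform(assets.astype(float))
    score = PCA(n_components=1).fit_transform(z).ravel()
    ranks = pd.Series(score, index=assets.index).rank(method="first")  # break ties
    return pd.qcut(ranks, 5, labels=[1, 2, 3, 4, 5]).rename("wealth_quintile")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({                    # hypothetical 0/1 ownership indicators
        "owns_tv": rng.integers(0, 2, 354),
        "owns_fridge": rng.integers(0, 2, 354),
        "improved_roof": rng.integers(0, 2, 354),
        "piped_water": rng.integers(0, 2, 354),
    })
    print(wealth_quintiles(demo).value_counts().sort_index())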
Household income sources The household income module asked about the income sources from various livelihoods and the actual earnings in local currency for a recall period of twelve months. Household non-food expenditure The household non-food expenditure module split household non-food expenditures into a period of thirty days for ten common expenditures and for a period of six months for other expenses. Thirty-day recall items for recurring expenses that include rent, water, electricity, fuel, satellite, transport, garbage collection, household items, and addictive items such as alcohol and tobacco were recorded. Six-month recall expense items included medical fees, education fees, the debt amount, savings amount, house construction and house repairs, clothing, social events or celebration, and agricultural inputs. Estimates of all these household non-food expenditures and activities were recorded in local currency. Household food consumption The food consumption module sought to gather the household's current status of quality and quantity of food consumption seven days prior to the interview [1] . The food consumption module contained a list of sixteen food groups: (1) Cereals and grains, (2) Roots and tubers, (3) Legumes/nuts, (4) Orange vegetables (vegetables rich in Vitamin A), (5) Green leafy vegetables, (6) Other vegetables, (7) Orange fruits (fruits rich in Vitamin A), (8) Other fruits, (9) meat, (10) Liver, kidney, heart and other organ meats, (11) Fish/shellfish, (12) eggs, (13) Milk and other dairy products, (14) Oil / fat / butter, (15) Sugar/sweets, and (16) Condiments/spices. For each food group, the frequency of intake in the last seven days, the quantity of food consumed by the household, source of food consumed (purchased, non-purchased, or both), the estimated total cost of the food (cash, credit, or value of both for purchased and non-purchased food), and primary source of non-purchased food was indicated. An aggregate of food consumption frequency and diversity was used to calculate the individual FCS as per the World Food Programme (WFP) (2009) [2] guidelines, and Leroy et al. [1] as shown in Fig. 3 . The HFIAS data was collected using Coates' et al. [3] nine questions, commonly referred to as items, using a 30-day recall period. The HFIAS indicator was constructed to measure the occurrence and frequency of the food insecurity dimension. While the individual items measured the food insecurity occurrence, categorical items were used to measure the frequency at which each item occurred. The HFIAS indicator was then calculated, and the score ranging from a minimum of zero (food-secure households), and a maximum score of 27 (food-insecure households) was allocated to each household, as shown in the data. Using Smith and Subandoro's [4] measurement method, a combination of household food expenditure and non-food expenditure data was used to calculate FES. FES was used as a food security indicator and an income proxy for the households, as shown in Fig. 4 . FES was used as an income proxy because findings from WFP (2009) [2] indicate that poor households had a higher share of total expenditures going towards food compared to wealthy households. This is especially true for households that depend mainly on purchased food instead of own production, which is the case in Nairobi metropolitan area, where this study was carried out. FES data was collected using a 30-day recall period. 
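To illustrate how the indicators just described are typically computed, the sketch below implements a generic WFP-style food consumption score (7-day food-group frequencies, capped at 7 and multiplied by the usual group weights), the HFIAS score as the sum of nine 0-3 frequency items (range 0-27), and FES as the share of food in total expenditure; the group names and weights shown are the generic WFP ones and the numbers are made-up examples, not the exact grouping or data used in this survey.

FCS_WEIGHTS = {                      # generic WFP food-group weights
    "staples": 2.0, "pulses": 3.0, "vegetables": 1.0, "fruit": 1.0,
    "meat_fish": 4.0, "milk": 4.0, "sugar": 0.5, "oil": 0.5, "condiments": 0.0,
}

def food_consumption_score(days_eaten: dict) -> float:
    """FCS = sum over groups of (days eaten in the last 7 days, capped at 7) x weight."""
    return sum(min(days_eaten.get(group, 0), 7) * w for group, w in FCS_WEIGHTS.items())

def hfias_score(frequencies: list) -> int:
    """HFIAS = sum of nine occurrence-frequency items, each coded 0-3 (score 0-27)."""
    assert len(frequencies) == 9 and all(0 <= f <= 3 for f in frequencies)
    return sum(frequencies)

def food_expenditure_share(food_exp: float, nonfood_exp: float) -> float:
    """FES = food expenditure as a share of total (food + non-food) expenditure."""
    return food_exp / (food_exp + nonfood_exp)

if __name__ == "__main__":
    week = {"staples": 7, "pulses": 3, "vegetables": 5, "oil": 7, "sugar": 6}
    print(food_consumption_score(week))                  # 34.5
    print(hfias_score([1, 2, 0, 0, 3, 1, 0, 0, 2]))      # 9
    print(round(food_expenditure_share(4500, 5500), 2))  # 0.45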
The household food consumption module was designed to improve understanding of households' intake of key nutrient-rich foods. Aggregated data for food items from the 16 food groups, was used to construct HDDS to measure dietary behavior and food access, as proposed by Leroy et al. [1] . Finally, questionnaire responses from a series of nineteen questions were used to calculate the LCS indicator that measures the livelihood stress and assets depletion over a 30-day period before the survey as per WFP guidelines [2] . Respondents were classified into three broad categories depending on the food insecurity faced at household level. The LCS categories used in allocation include: stress, crisis , and emergency coping. Child dietary diversity score(CDDS) This module asked questions about food and drinks offered to the index child within a household to measure a child's diet diversity using CDDS as per the WFP and WHO guidelines [ 5 , 6 ]. The infant dietary diversity module also asked if the child had received vitamin drops, oral rehydration solution, or had drunk anything from a bottle with a teat. The module used a frequency of food and water intake over the 24 h prior to the study and whether the intake reported was usual or unusual [ 5 , 6 ]. The module contained a list of nine liquid foods given to the child: (1) Breast milk, (2) Women were categorized into three age groups, 15 -25 years, 26 -35 years, and 36 -49 years to check for the relationship between age and diet diversity. Household hunger scale The household hunger scale module used a set of eight questions to determine the occurrence of increasingly severe experiences of food shortage [8] . Four key module domains were assessed: worry about food access in the short term, uncertainty about food access in the long term, inadequate food quality, and insufficient food quantity. The recall period was 30 days. The module asked if the household had experienced any of the four domains, how often it occurred in the last 30 days, why the experience occurred, and who was affected -adults, children under 24 months, or both. Livelihood coping strategies The module used a set of 19 questions to determine how vulnerable households responded to food insecurity and what measures they took to mitigate the problem. The coping strategies are divided into three groups -stress, emergency, and crisis.
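A minimal sketch of the remaining diversity and coping indicators, under standard definitions: HDDS/CDDS count the number of distinct food groups consumed over the recall period, MDD-W is a binary indicator of consuming at least 5 of the 10 defined food groups in the previous 24 hours (the usual FAO cutoff), and LCS classifies a household by the most severe category of coping strategy it reported; the group and strategy names below are placeholders rather than the questionnaire's exact categories.

def diversity_score(groups_consumed: set) -> int:
    """HDDS/CDDS-style score: number of distinct food groups consumed."""
    return len(groups_consumed)

def meets_mdd_w(groups_consumed: set, threshold: int = 5) -> bool:
    """MDD-W: at least `threshold` of the 10 defined groups in the last 24 hours."""
    return len(groups_consumed) >= threshold

def lcs_category(used: set, stress: set, crisis: set, emergency: set) -> str:
    """Livelihood coping strategies: most severe category of strategy reported."""
    if used & emergency:
        return "emergency"
    if used & crisis:
        return "crisis"
    if used & stress:
        return "stress"
    return "none"

if __name__ == "__main__":
    woman_24h = {"grains", "pulses", "dark_green_leafy_veg", "dairy", "eggs", "other_fruit"}
    print(diversity_score(woman_24h), meets_mdd_w(woman_24h))          # 6 True
    print(lcs_category({"sold_assets"}, stress={"borrowed_food"},
                       crisis={"sold_assets"}, emergency={"begged"}))  # crisis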
2020-11-26T09:03:25.583Z
2020-11-19T00:00:00.000
{ "year": 2020, "sha1": "648bf06fab02e7f685e42d36bc2041296e894015", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.dib.2020.106542", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "279bc87145f50df2b7e39fe62e3a171eb56b48bb", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
35013141
pes2o/s2orc
v3-fos-license
Effect of eicosapentaenoic acids-rich fish oil supplementation on motor nerve function after eccentric contractions Background This study investigated the effect of supplementation with fish oil rich in eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) on the M-wave latency of biceps brachii and muscle damage after a single session of maximal elbow flexor eccentric contractions (ECC). Methods Twenty-one men were completed the randomized, double-blind, placebo-controlled, and parallel-design study. The subjects were randomly assigned to the fish oil group (n = 10) or control group (n = 11). The fish oil group consumed eight 300-mg EPA-rich fish oil softgel capsules (containing, in total, 600 mg EPA and 260 mg DHA) per day for 8 weeks before the exercise, and continued this for a further 5 days. The control group consumed an equivalent number of placebo capsules. The subjects performed six sets of ten eccentric contractions of the elbow flexors using a dumbbell set at 40% of their one repetition maximum. M-wave latency was assessed as the time taken from electrical stimulation applied to Erb’s point to the onset of M-wave of the biceps brachii. This was measured before and immediately after exercise, and then after 1, 2, 3, and 5 days. Changes in maximal voluntary isometric contraction (MVC) torque, range of motion (ROM), upper arm circumference, and delayed onset muscle soreness (DOMS) were assessed at the same time points. Results Compared with the control group, M-wave latency was significantly shorter in the fish oil group immediately after exercise (p = 0.040), MVC torque was significantly higher at 1 day after exercise (p = 0.049), ROM was significantly greater at post and 2 days after exercise (post; p = 0.006, day 2; p = 0.014), and there was significantly less delayed onset muscle soreness at 1 and 2 days after exercise (day 1; p = 0.049, day 2; p = 0.023). Conclusion Eight weeks of EPA and DHA supplementation may play a protective role against motor nerve function and may attenuate muscle damage after eccentric contractions. Trial registration This trial was registered on July 14th 2015 (https://upload.umin.ac.jp/cgi-open-bin/ctr/index.cgi). Background It is widely recognized that unaccustomed muscle movement, including eccentric contraction (ECC), causes muscular damage [1,2]. It has been shown that ECCs result in a loss of muscle strength, delayed onset muscle soreness (DOMS), a limited range of motion (ROM), muscle swelling, increases in serum creatine kinase and myoglobin levels, prolonged transverse relaxation time (T2) on magnetic resonance imaging (MRI), and echo intensity on ultrasound imaging [1,3,4]. Interestingly, because these effects differ in the time taken to reach their peak [1,2], the relationship with each muscle damage marker is not clear. For example, it has been shown that strength loss and limited ROM peak immediately after performing ECCs, whereas DOMS reaches its peak at 1-3 days after the ECCs and T2 and echo intensity, which indicate the distribution of free water and/or interstitial edema, are at their maximum after 3-6 days [1,2,4,5]. Nerve conduction velocity, M-wave latency, and amplitude are often measured to assess motor nerve function [6,7]. M-wave latency is measured as the time between electrical stimulation and the onset of an M-wave, but this can be influenced by a number of factors, including various processes such as nerve conduction, neuromuscular transmission, and muscle fiber conduction, and sarcolemmal excitability [8]. 
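As a schematic illustration of this definition, the sketch below estimates M-wave latency from a stimulus-locked EMG sweep as the time from the stimulus to the first sample at which the rectified signal exceeds a baseline-derived threshold; the threshold rule and the synthetic trace are our own simplifications for illustration, not the procedure prescribed by the nerve-conduction manual used in the Methods.

import numpy as np

def m_wave_latency_ms(emg: np.ndarray, fs: float, stim_idx: int,
                      baseline_ms: float = 20.0, k: float = 5.0) -> float:
    """Latency (ms) from the stimulus to M-wave onset, taken here as the first
    post-stimulus sample exceeding mean + k*SD of the rectified pre-stimulus baseline."""
    n_base = int(fs * baseline_ms / 1000)
    baseline = np.abs(emg[max(0, stim_idx - n_base):stim_idx])
    threshold = baseline.mean() + k * baseline.std()
    post = np.abs(emg[stim_idx:])
    onset = int(np.argmax(post > threshold))     # first index above threshold
    return 1000.0 * onset / fs

if __name__ == "__main__":
    fs = 10_000.0
    t = np.arange(0, 0.06, 1 / fs)               # 60-ms sweep
    rng = np.random.default_rng(2)
    emg = rng.normal(0.0, 0.01, t.size)          # baseline noise
    stim_idx = int(0.010 * fs)                   # stimulus at 10 ms
    onset_idx = int(0.015 * fs)                  # simulated M-wave 5 ms later
    emg[onset_idx:onset_idx + 200] += np.sin(np.linspace(0, np.pi, 200))
    print(f"latency ~ {m_wave_latency_ms(emg, fs, stim_idx):.2f} ms")  # about 5 ms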
Previous studies have used M-wave latency to examine nerve disorders such as neuropathy and neural muscular atrophy [6,9]; these indicated that an increase in M-wave latency reflected the impairment of motor nerves. It has been shown that ECCs caused histological damage in rats, not only in myofibrils, the extracellular matrix, and the triads of the cytoplasmic membrane system [10,11], but also in nerve fibers and in the thinning of myelin sheaths [12]. Kouzaki et al. [13] reported that M-wave latency delayed by 12% at 24 h and 24% at 48 h after 60 ECCs of the elbow flexors performed by women, suggesting musculocutaneous nerve impairment. However, it is not known whether nutritional strategies have a role for preventing temporal muscular dysfunction after ECCs. When fish oil is consumed in the diet, the concentration of the long-chain omega-3 polyunsaturated fatty acid increases proportionately in the cellular membranes of muscles [14,15] as in other organs [16]. In mice, omega-3, especially eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), has several important roles in reducing the risk of cardiovascular diseases [17,18], reducing inflammatory markers [19,20], and improving vision and cognitive functions including Alzheimer's disease [21]. Recently, we have shown that EPA and DHA supplementation ameliorated reductions in muscle strength, the development of DOMS, and limited ROM following ECCs [22]. We have hypothesized that, as one of the mechanisms underlying this phenomenon, EPA and DHA play an important role related to nerve structure and/or function. However, as yet no study has investigated the effect of EPA and DHA supplementation on ECC-induced nerve damage. The aim of the present study was to investigate the effect of 8 weeks of EPA and DHA supplementation on M-wave latency of the biceps brachii after elbow flexion ECCs. We hypothesized that EPA and DHA supplementation for 8 weeks would inhibit the strength loss, limited ROM, DOMS, and muscle swelling after the ECCs and would increase M-wave latency. Subjects A total of 21 healthy men (age, 21.0 ± 0.8 years; height, 170.9 ± 5.7 cm; weight, 64.3 ± 6.1 kg; body mass index, 25.0 ± 1.7) were recruited for this study. The sample size was determined by a power analysis (G*power, version 3.0.10, Heinrich-Heine University, Dusseldorf, Germany) by setting the effect size as 1, α level of 0.05 and power (1-β) of 0.80 for the comparison between groups, which showed that at least ten participants were necessary. None had participated in any regular resistance training, restriction of exercise, or other clinical trial, had food allergies to fish, or were taking any supplement or medication. The subjects were requested to avoid interventions such as massage, stretching, and strenuous exercise, and the excessive consumption of food and alcohol, during the experimental period. Prior to participation, they were provided with detailed explanations of the study protocol, and all signed an informed consent form. The study was conducted in accordance with the principles of the Declaration of Helsinki; it was approved by the Ethics Committee for Human Experiments at Juntendo University (ID: 27-66) and has been registered at the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR, identifier: UMIN000018285). Study design The study was conducted following the double-blind, placebo-controlled, parallel-group trial design. 
The subjects were randomly assigned to two groups using a table of random numbers in such a manner as to minimize the inter-group differences in age, body fat, body mass index. According to the previous studies [22,23], the control group consumed daily placebo capsules for 8 weeks prior to an exercise experiment and for 5 days after the exercise whereas the EPA group consumed EPA supplement capsules as described in the following section. The capsules were taken for a total of 62 days (including the exercise day). The sequence allocation concealment and blinding to subjects and researchers were maintained throughout this period. Compliance to intake was assessed by daily record and a pill count at the end of the study. To help the reliability of the pill count, subjects were given an excess number of pills and asked to return any remaining pills at the end of the study. On the day of exercise testing, M-wave latency, maximal voluntary isometric contraction (MVC) torque, elbow joint ROM, upper arm circumference, and muscle soreness assessed by a visual analog scale, were assessed in the non-dominant arm before exercise. Immediately after these baseline measurements, the subjects performed ECCs with the same arm. All measurements were repeated immediately after the exercise, and then at 1, 2, 3, and 5 days later. Before the subjects started taking the supplement or placebo capsules, we surveyed their nutritional status using the food frequency questionnaire based on food groups (FFQg version 3.5, Kenpakusha, Tokyo, Japan). This was repeated after the 8 weeks supplementation. The primary outcome measures were MVC torque, ROM, M-wave latency, and muscle soreness. Supplements From the previous studies [20,22,24] and considering the safety factor [25], the EPA group consumed eight 300-mg EPA-rich fish oil softgel capsules (Nippon Suisan Kaisha Ltd., Tokyo, Japan) per day, a total of 2400 mg per day containing 600 mg EPA and 260 mg DHA. The CON group consumed eight 300 mg corn oil softgel capsules (Nippon Suisan Kaisha Ltd., Tokyo, Japan) per day (not containing EPA and DHA in a total of 2400 mg). The subjects consumed the supplements 30 min after meals with water. Eccentric contractions For the eccentric exercise, the subject sat on a preacher curl bench with his shoulder joint angle at 45°flexion. The dumbbell used was set to weigh 40% of his one repetition maximum arm curl weight. The exercise consisted of six sets of ten maximal voluntary ECCs of the elbow flexors with a rest of 120 s between each set as described in our previous study [22]. The subject was handed the dumbbell in the elbow flexed position (90°) and instructed to lower it to a fully extended position (0°) at an approximately constant speed (30°/s), in time (3 s) with a metronome. The investigator then removed the dumbbell and the subject returned his arm without the dumbbell to the start position for the next eccentric contraction. Maximal voluntary isometric contraction torque For the measurement of MVC torque, the subject performed two 5-s MVCs at a 90°elbow joint angle with a 15-s rest between the contractions. The peak torque of the two contractions was used as the MVC torque. The torque signal was amplified using a strain amplifier (DPM-611B; Kyowa, Tokyo, Japan). The analog torque signal was converted to digital signals with a 16-bit analog-to-digital converter (Power-Lab 16SP; ADInstruments, Bella Vista, Australia). The sampling frequency was set at 2 kHz. The measurement was based on a previous study [26]. 
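As a simple illustration of how MVC torque can be extracted from the digitized records, the sketch below smooths two 5-s torque traces sampled at 2 kHz with a short moving average and reports the higher of the two peaks; the smoothing window and the synthetic signals are illustrative choices of ours, not the authors' actual processing.

import numpy as np

FS = 2000  # sampling frequency (Hz), as in the protocol above

def peak_torque(signal: np.ndarray, smooth_ms: float = 50.0) -> float:
    """Peak of a torque record after simple moving-average smoothing."""
    win = max(1, int(FS * smooth_ms / 1000))
    kernel = np.ones(win) / win
    return float(np.convolve(signal, kernel, mode="same").max())

def mvc_torque(trial_1: np.ndarray, trial_2: np.ndarray) -> float:
    """MVC torque = higher peak of the two 5-s maximal contractions."""
    return max(peak_torque(trial_1), peak_torque(trial_2))

if __name__ == "__main__":
    t = np.arange(0, 5, 1 / FS)
    rng = np.random.default_rng(1)
    # synthetic ramp-and-hold contractions (arbitrary units) with measurement noise
    trial_a = 60 * np.clip(t, 0, 1) + rng.normal(0, 1.5, t.size)
    trial_b = 55 * np.clip(t, 0, 1) + rng.normal(0, 1.5, t.size)
    print(f"MVC torque ~ {mvc_torque(trial_a, trial_b):.1f} (a.u.)")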
Range of motion of the elbow joint To examine elbow joint ROM, two elbow joint angles (extended and flexed) were measured using a goniometer (Takase Medical, Tokyo, Japan). The extended joint angle was recorded while the subject attempted to fully extend the joint with the elbow held by his side and the hand in supination [22] The flexed joint angle was determined while the subject attempted to fully flex the joint from an equally fully extended position with the hand supinated. The ROM was calculated by subtracting the flexed joint angle from the extended joint angle. Upper arm circumference The upper arm circumference was measured at 3, 5, 7, 9, and 11 cm above the elbow joint using a Gulick tape measure while the subject stood with his arm relaxed by his side. The mean value of five measurements was used for the analysis. Measurement marks were maintained throughout the experimental period using a semipermanent ink marker, and a well-trained investigator made the measurements [22]. The mean value of measurements was used for analysis. Ultrasonography B-mode ultrasound pictures of the upper arm were taken of the biceps brachii using ultrasound (Aixplorer, SuperSonic, France), with the probe placed 9 cm from the elbow joint at the position marked for the circumference measurement. The gains and contrast were kept consistent over the experimental period. The transverse images were transferred to a computer as bitmap (.bmp) files and analyzed by a computer. The cross-sectional area of the elbow flexors was determined by measuring the distance between the subcutaneous fat layer and the edge of the humerus [27] on the transverse images of the biceps brachii. The average echo intensity for the region of interest (20 × 20 mm) was calculated by the computer image analysis software that provided a gray scale histogram (0, black; 100, white) for the region as described in previous study [27]. Muscle soreness Muscle soreness in the elbow flexors was assessed using a 10-cm visual analog scale in which 0 indicated "no pain" and 10 was the "the worst pain imaginable" [4]. The subject relaxed with his arm in a natural position; the investigator then palpated the upper arm using a thumb and the subject indicated his pain level on the visual scale. All tests were conducted by the same investigator, who had been trained to apply the same pressure over time and between subjects. M-wave latency The musculocutaneous nerve was stimulated (for a pulse duration of 10 ms) by a bipolar surface electrode with an inter-electrode distance of 23 mm and a diameter of 0.8 mm (NM-420S, Nihon Kohden, Tokyo, Japan). The stimulation electrode was placed over Erb's point located on the supraclavicular fossa and sternocleidomastoid muscle and connected to an electrical stimulator (DS7AH, Digitimer Ltd., Welwyn Garden City, UK). The M-wave latency and amplitude were assessed according to the instructions in the "Manual of Nerve Conduction Studies" [28] and as described in our previous study [13]. The subject sat on a chair with both arms at his side and placed his forearm on his lap, resulting in the elbow joint being at approximately 10°flexion. This position was kept consistent between measures, and a plastic goniometer was used to reproduce the elbow joint angle. 
A monopolar surface electrode was placed at the mid-belly of the biceps brachii long head, a reference electrode was placed proximal to the antecubital fossa in the region of the junction of the muscle fibers and the biceps tendon, and a ground electrode was placed on the acromion [13]. The electrode locations were marked with a semi-permanent marker to ensure the same placement over time. The stimulation current was gradually increased to obtain the maximal Mwave (final stimulation current: 14-18 mA), and it was confirmed that a further increase in the current did not increase the M-wave amplitude. Data acquisition and analysis were performed using Lab Chart 7 (ADInstruments, Bella Vista, Australia). The relative increase in the M-wave latency from the pre-ECC value was then calculated. This measurement was based on a previous study [13]. Statistical analyses All analyses were performed using SPSS Statistics software version 22.0 (IBM Corp., Armonk, NY). Values are expressed as means ± standard deviation (SD). MVC torque, ROM, echo intensity, circumference, CSA, and nerve conduction latency values at post, day1, day2, day3, and day5 were calculated by relative changes from baseline (100%). MVC, ROM, upper arm circumference, echo intensity, cross-sectional area, muscle soreness, and M-wave latency over time were compared between the CON and EPA groups by two-way repeated-measure analysis of variance (ANOVA). When a significant main effect or interaction was found, Bonferroni's correction was performed for the post-hoc testing. A p-value of <0.05 was considered statistically significant. Physical characteristics and nutritional status No subject dropped out during the study period. No significant differences were observed between the EPA group and the CON group for age, weight, and body mass index (EPA; n = 10; age, 20.7 ± 0.7 years; height, 171.5 ± 6.6 cm; weight, 63.0 ± 6.3 kg; body mass index, 25.0 ± 2.0, CON; n = 11; age, 21.3 ± 0.9 years; height, 170.7 ± 5.0 cm; weight, 65.5 ± 5.9 kg; body mass index, 25.0 ± 1.5). The food frequency questionnaire results showed no significant difference in nutritional status between the EPA group (energy, 2578.1 ± 359.4 kcal; protein, 88.7 ± 18.7 g; fat, 91.9 ± 18.5 g; carbohydrate, 330.4 ± 60.0 g; omega-3 fatty acid, 2.4 ± 0.7 g) and the CON group (energy, 2273.4 ± 607.7 kcal; protein, 80.7 ± 20.7 g; fat, 92.9 ± 25.0 g; carbohydrate, 268.6 ± 90.8 g; omega-3 fatty acid, 2.3 ± 0.6 g) before the intake of supplements (Table 1). These parameters did not change during the experimental period. Blood analyses showed that EPA was significantly higher in the EPA group than in the CON group after 8 weeks intake of supplements, whereas DHA was no significantly difference between two groups (data not shown). Maximal voluntary isometric contraction torque Compared with the pre-exercise value, MVC torque in the CON group had significantly decreased immediately after the exercise and 1 day later (post; p = 0.001, day 1; p = 0.003; Fig. 1a). MVC torque in the EPA group decreased immediately after the exercise compared with the preexercise level (p = 0.010). MVC was significantly higher in the EPA group than in the CON group at 1 day after the exercise (CON 76.8% ± 16.0%, EPA 90.5% ± 14.0%, p = 0.049). The results for the absolute MVC were similar to these. Range of elbow motion As shown in Fig. 
1b, a significant decrease in ROM was observed in the CON group immediately after the exercise (a reduction of 23.2%, p = 0.000), and ROM continued to be lower than baseline at 1 and 2 days (day 1; p = 0.003, day 2; p = 0.012). ROM in the EPA group decreased immediately after the exercise compared with the pre-exercise level (a reduction of 12.0%, p = 0.013). ROM was significantly greater in the EPA group than in the CON group immediately after exercise and 2 days later (post: CON 76.8% ± 10.4%, EPA 88.1% ± 7.2%, p = 0.006; day 2: CON 81.7% ± 17.4%, EPA 96.2% ± 6.3%, p = 0.014). Muscle soreness Compared with the pre-exercise value, a significant level of muscle soreness was indicated by the CON group using the visual analog scale at 1 and 2 days after exercise (day 1; p = 0.006, day 2; p = 0.021; Fig 1c). In contrast, no increase in muscle soreness was indicated by the EPA group at any time points. Significantly greater muscle soreness was observed in the CON group compared with the EPA group at 1 day (CON, 5.7 ± 1.6 cm vs. EPA, 4.5 ± 1.0 cm; p = 0.049) and 2 days (CON, 5.4 ± 1.8 cm vs. EPA, 3.7 ± 1.2 cm; p = 0.023) after the exercise. Echo intensity In the CON group, echo intensity increased at 1 day after exercise (133.8% ± 25.9%, p = 0.005; Fig. 1d), but there was no significant increase in echo intensity in the EPA group at any time point. The echo intensity was significantly higher for the CON group than the EPA group at 1 day after the exercise (CON, 133.8 ± 25.9% vs. EPA, 111.2 ± 18.2%; p = 0.045). Upper arm circumference and cross-sectional area of the flexors No significant difference in the upper arm circumference from the pre-exercise values or between the two groups was observed at any time point (Fig. 2a). Similarly, there was no significant difference in the cross-sectional area of the elbow flexors at any time point (Fig. 2b). M-wave latency In the CON group, but not the EPA group, M-wave latency increased immediately after the exercise (p = 0.010; Fig. 3). In addition, M-wave latency was significantly longer in the CON group than in the EPA group immediately after the exercise (CON, 133.0 ± 32.1% vs. EPA, 113.3 ± 26.5%; p = 0.040). Discussion This study investigated the effect of EPA and DHA supplementation on the M-wave latency of the biceps brachii and on muscle damage after single session of maximal elbow flexion ECC exercises. The results demonstrated that EPA and DHA supplementation inhibited the loss of muscle strength, limitation of ROM, development of DOMS, and increases in echo intensity and M-wave latency. These results support our original hypothesis. The reduction of isometric torque was significantly inhibited in the EPA group. Our recent study also showed that 600 mg EPA and 260 mg DHA supplementation for 8 weeks inhibited strength loss following elbow flexion ECC exercises [22]. To the best of our knowledge, that study was the first to demonstrate that EPA and DHA supplementation had a positive effect on MVC torque following ECC exercises, and the present study supports those results. Strength loss following ECCs has been considered to be the result of disruption of the myofibrils, muscle membranes, and neuromuscular junctions, and abnormal calcium (Ca 2+ ) levels [3]. Temporary strength loss following ECCs has also been assumed to be due to excitation-contraction coupling failure [29]. 
Importantly, omega-3 polyunsaturated fatty acids are a major component of the cell membrane, with the total concentration in the muscle cell membrane significantly increasing after the ingestion of fish oil [30]. Thus, it is possible that the muscle membrane structure may be protected by EPA supplementation and that EPA and DHA thereby reduced the muscular dysfunction that resulted from the ECCs at day 1. Fig. 1 Changes (mean ± SD) of MVC torque (a), ROM (b), muscle soreness (c), and echo intensity (d) measured before (pre) and immediately after (post) the eccentric contractions exercise and then 1, 2, 3, and 5 days later in the control and EPA groups. The values of MVC torque, ROM, and echo intensity at post, 1, 2, 3, and 5 days after eccentric contractions were calculated as relative changes from baseline (100%). * p < 0.05 for the difference between groups; † p < 0.05 for the difference from the pre-exercise value in the control group; # p < 0.05 for the difference from the pre-exercise value in the EPA group. Nerve injuries are classified as neurapraxia, axonotmesis, or neurotmesis [31][32][33]. In neurapraxia, nerve conduction is impaired by segmental demyelination of the nerve trunk without axonal degeneration. The causes of neurapraxia include mechanically induced lesions, ischemia, inflammation, toxic compounds, and metabolic disturbances [31,34], and M-wave latency increases without an apparent change in M-wave amplitude [33]. In the present study, the M-wave amplitude did not significantly change after the ECCs (data not shown), and prolonged M-wave latency was no longer observed at 2 days post-exercise. Thus, the effect of ECCs on the nerve may have been similar to neurapraxia. The present study also demonstrated that EPA and DHA supplementation inhibited the increase in M-wave latency following the eccentric exercise. We believe that this is the first report to show that EPA and DHA have important roles in counteracting the muscle dysfunction resulting from ECCs. Braddom et al. [35] observed structural damage to musculoskeletal nerves and ischemic neuropathy with high-intensity weight-lifting exercise. In addition, it has been shown in rats that repeated sessions of ECCs caused histological damage in nerve fibers and thinning of myelin sheaths [12]. It is therefore possible that the attenuated increase in M-wave latency was associated with a protective role of EPA and DHA with regard to nerve fiber damage and myelin sheath thinning. As mentioned earlier, one cause of neurapraxia is mechanically induced inflammation [31,34]; we assume that EPA and DHA reduce the inflammation resulting from nerve injury. Our study also demonstrated that EPA and DHA supplementation had a preventive effect on ROM and DOMS following eccentric exercise. These results are consistent with those of previous studies [22,24]. Tartibian et al. [24] showed that daily supplementation with 324 mg EPA and 216 mg DHA attenuated the limitation of ROM after 40 min of bench stepping. Similarly, our recent study demonstrated that 600 mg EPA and 260 mg DHA supplementation for 8 weeks inhibited the limitation of ROM and the development of DOMS [22]. The limited ROM following ECC has been attributed to an inflammatory response within myofibrils leading to an increase in passive stiffness [36]. Although DOMS can be attributed to a combination of several factors, previous studies have suggested that its primary cause is a local inflammatory response [11,37].
It is well established that EPA and DHA have anti-inflammatory effects that reduce levels of interleukin-6 [22]; we therefore suggest that their inhibition of limited ROM and of the development of DOMS could be attributed to their anti-inflammatory effects. In this study, the echo intensity was significantly lower in the EPA group than in the CON group 1 day after the ECC exercise. Increased echo intensity is associated with edema of the muscle due to trauma, ischemia, or infarction [1]. Indeed, previous studies have confirmed that echo intensity in the biceps brachii and brachialis increases after ECC exercise of the elbow flexors [1,26,38]. Fig. 3 Changes (mean ± SD) of nerve conduction latency measured before (pre) and immediately after (post) the eccentric contractions exercise and then 1, 2, 3, and 5 days later in the control and EPA groups. The values of nerve conduction latency at post, 1, 2, 3, and 5 days after eccentric contractions were calculated as relative changes from baseline (100%). * p < 0.05 for the difference between groups, † p < 0.05 for the difference from the pre-exercise value in the control group. Fig. 2 Changes (mean ± SD) of upper arm circumference (a) and cross-sectional area (CSA) of the elbow flexors (b) measured before (pre) and immediately after (post) the eccentric contractions exercise and then 1, 2, 3, and 5 days later in the control and EPA groups. The values of upper arm circumference and CSA at post, 1, 2, 3, and 5 days after eccentric contractions were calculated as relative changes from baseline (100%); n.s., not significant. In addition, because echo intensity appears to be related to creatine kinase level after ECC [1], it has the potential to be a useful marker of muscle damage. Although the mechanism is unclear, the results of the present study suggest that EPA and DHA supplementation may inhibit edema. However, no significant difference was observed in the upper arm circumference. This observation is similar to that of our previous study [22], and we assume that it is related to a limitation of the study method. Although we used a Gulick tape measure, we could not exclude the effects of other muscles, fat, skin, etc. For this reason, in the present study we also calculated the cross-sectional area using ultrasonography. However, we observed no significant difference between groups, although the cross-sectional area showed a non-significant tendency to be smaller in the EPA group than in the CON group. The reason for this could be that the intensity of ECC and the reduction of muscle strength were lower than in previous studies [2], and that the accuracy of ultrasonography is lower than that of MRI. A further study is needed that uses MRI to determine the response after severe ECC exercise. The present study had the following three limitations. First, we did not evaluate the inflammatory response, such as C-reactive protein (CRP), tumor necrosis factor-alpha (TNF-α), and interleukin (IL)-6 [22,39]. Although the results demonstrated that EPA and DHA supplementation had a positive effect on DOMS and ROM following eccentric exercise, the actual inflammatory response was unknown. Second, we investigated only a single dose of EPA and DHA administration. It has been shown that the intake of EPA and DHA should be limited to 3000 mg in total per day for safety in humans, according to the Natural Medicines Comprehensive Database [25].
Regarding the effect of EPA and DHA supplementation on muscle damage, the minimal effective dose reported was 540 mg/day (EPA and DHA combined), according to Tartibian et al. [24]. Hence, we decided to use the present amount of supplementation (600 mg EPA and 260 mg DHA). However, further investigation of different doses of EPA and DHA is needed to elucidate the appropriate amount. Third, we used a single bout of eccentric exercise in untrained subjects. Therefore, our observations may not generalize to athletes or to multiple training sessions. Further study is required to investigate this point. Conclusions In summary, we showed that 600 mg EPA and 260 mg DHA supplementation for 8 weeks inhibited the increase in M-wave latency following a session of intense ECC exercise, as well as the strength loss, limitation of ROM, and development of DOMS. We speculate that the mechanism underlying these observations may be related to the preservation of nerve structure and function. These findings of the beneficial effects of EPA and DHA supplementation are of importance for applied sport scientists, nutritionists, and strength and conditioning professionals and could help them to design better nutritional interventions aimed at preventing temporary muscle strength loss, limited flexibility, and DOMS after exercise.
2017-07-18T23:55:17.434Z
2017-07-12T00:00:00.000
{ "year": 2017, "sha1": "601ee7f40d0d6ef01a9ff1f40f5f6855699afd51", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12970-017-0176-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "601ee7f40d0d6ef01a9ff1f40f5f6855699afd51", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270661148
pes2o/s2orc
v3-fos-license
A new Mexican species of the Cryptopygus complex (Collembola, Isotomidae) associated with the hermit crab Coenobita clypeatus (Crustacea, Coenobitidae) Abstract A new species of Cryptopygus Willem, 1901 associated with hermit crabs living on seashores of Quintana Roo State, Mexico, is described and illustrated. It is blind, with 9–11 postlabial setae, unguis with a pair of lateral teeth, empodial appendix lanceolate and almost as long as unguis, tenaculum with 4 + 4 teeth and 3–4 setae on corpus, manubrium with 11–14 pairs of manubrial setae on anterior surface and 17–18 pairs on posterior surface, and mucro bidentate. An updated key for the identification of 29 American species of the Cryptopygus complex is included. Introduction This contribution is a part of the project aimed at the microarthropods associated with hermit crabs, in which the male of Coenaletes caribaeus Bellinger, 1985 (Coenaletidae) has been redescribed (Palacios-Vargas et al. 2000) and other Collembola and some Acari recorded from this crab (Maldonado-Vargas and Palacios-Vargas 1999). The genus Cryptopygus Willem, 1901 sensu lato represents a complex of genera (Potapov 2001; Potapov et al. 2013). The species of this complex live in a wide variety of environments such as soil, litter, caves, and sandy beaches. Here, I use the name Cryptopygus in its broad sense. There are altogether 76 valid species involved in the genus (Bellinger et al. 1996–2023), which has a global distribution. The purpose of this paper is to describe a new species of Cryptopygus found in hermit crabs collected in marine waters of Quintana Roo State, Mexico. As shown by Palacios-Vargas (1997), there are several Cryptopygus species known from Mexico, namely C. benhami Christiansen & Bellinger, 1980, found in caves from Guerrero and Mexico States, and C. exilis from Veracruz State. Later, Palacios-Vargas and Thibaud (2001) described an additional species, Cryptopygus axacayacatl Palacios-Vargas & Thibaud, 2001, which lives on sandy beaches of Guerrero State. Moreover, Pauropygus caussaneli (Thibaud, 1996), which belongs to the same complex, was cited from Guerrero state by Potapov et al. (2013). Materials and methods The material of the new species comes from Xcacel beach, Quintana Roo State. It was found while examining the hermit crab Coenobita clypeatus (Fabricius, 1787) living in Cittarium pica (Linnaeus, 1758) shells. Hermit crabs were put in a bucket with fresh water and springtails floating on the surface were collected. The specimens were fixed in ethanol 96% and later cleared with KOH 10% and mounted on slides using Hoyer solution. To harden the solution, the slides were dried in a slide warmer at 45-50 °C for 1 week. Finally, each specimen was labeled with its collecting data. Specimens were examined with a Carl Zeiss Primo Star phase-contrast microscope. The drawings were made with the aid of a drawing tube. Type locality. Mexico – Quintana Roo • Municipality of Solidaridad; Xcacel; ex. Coenobita clypeatus; 20°20'13"N, 87°20'45"W; 6 June 2022; J.G. Palacios-Vargas, M. Ojeda & A. Arango leg. Etymology. This species name is a noun in apposition after the genus of the hermit crab in which it was found.
2024-06-22T15:20:22.531Z
2024-06-20T00:00:00.000
{ "year": 2024, "sha1": "54d6782a78b3149479c8c5946ae92715c859fd19", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "4d11b147c96f09a0f03531e5585f2270d4de8cb2", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [] }
55155367
pes2o/s2orc
v3-fos-license
An Overdetermined Problem in Potential Theory We investigate a problem posed by L. Hauswirth, F. Hélein, and F. Pacard, namely, to characterize all the domains in the plane that admit a "roof function", i.e., a positive harmonic function which solves simultaneously a Dirichlet problem with null boundary data, and a Neumann problem with constant boundary data. Under some a priori assumptions, we show that the only three examples are the exterior of a disk, a halfplane, and a nontrivial example. We show that in four dimensions the nontrivial simply connected example does not have any axially symmetric analog containing its own axis of symmetry. Introduction In [19], the authors have posed the following problem: find a smooth bounded domain Ω in a Riemannian manifold $M_g$ with metric $g$, such that the first eigenvalue $\lambda_1$ of the Laplace-Beltrami operator on Ω has a corresponding real, positive eigenfunction $u_1$ satisfying $u_1 = 0$ and $\partial u_1 / \partial n = 1$ on the boundary of Ω. Any such domain is called extremal because it provides a local minimum for the first eigenvalue $\lambda_1$ of the Laplace-Beltrami operator, under the constraint of fixed total volume of Ω (see [19] and references therein). In special cases one can find a sequence of extremal domains $\{\Omega_t\}$ with increasing volumes, such that the limit domain $\Omega = \Omega_{t \to \infty}$ is unbounded, and its first eigenvalue vanishes as $t \to \infty$. This limit extremal domain is then called exceptional, and the corresponding limit function $u = \lim_{t \to \infty} u_{1,t}$ is a positive, harmonic function on Ω which solves simultaneously the overdetermined boundary value problem with null Dirichlet data and constant Neumann data. The problem of finding exceptional domains in $\mathbb{R}^n$ and their corresponding functions $u$ (called "roof" functions by the authors of [19]) is a nontrivial problem of potential theory. There is no obvious variational principle to use, on the one hand because Ω is unbounded (so the Dirichlet energy of $u$ [2, Ch. 1] will diverge), and, on the other hand, because the constant Neumann data constraint is not conformally invariant. In the absence of a suitable variational formulation, we may interpret the scaling $t \to \infty$ described above as a dynamical process, in which the pair $(\Omega_t, u_t)$ evolves so that the limit $t \to \infty$ solves the overdetermined problem. In other words, we can turn this observation into a constructive method for finding (building) exceptional domains. In order to do this, it is helpful to note that, upon compactification of the boundary ∂Ω (with metric $d\sigma^2$), the pair (Ω, u) with flat metric becomes conformal to the half-cylinder $N := \mathbb{R}^+ \times \partial\Omega$, with metric $ds^2 = e^{-2u}(du^2 + d\sigma^2)$. Under this reformulation, scaling of $(\Omega_t, u_t)_{t \to \infty}$ becomes equivalent to scaling of the metric structure given above, defined over the fixed space $N$. This is reminiscent of the Ricci flow, in which the metric structure $g$ evolves with respect to a deformation parameter $t \in \mathbb{R}$ according to the equation $\partial g_{ij}/\partial t = -2 R_{ij}$, with the right side of the equation given by the covariant Ricci tensor. It is known [37] that for the case of a two-dimensional manifold, with metric given by $ds^2 = e^{-2u}(dx^2 + dy^2)$, the Ricci flow equations reduce to a single nonlinear equation (since in two dimensions the Riemann tensor has only one independent component). This is a heat equation with the generator given by the Laplace-Beltrami operator corresponding to the metric $ds^2$.
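To spell out the two-dimensional reduction invoked here, the following short computation (a sketch under one common sign convention; it is not meant to reproduce the numbering or normalization of [37]) shows how the Ricci flow collapses to a single equation for the conformal factor. Writing the metric as $g_{ij} = e^{2\varphi}\delta_{ij}$, the Gauss curvature is $K = -e^{-2\varphi}\Delta\varphi$ and, in dimension two, $R_{ij} = K g_{ij}$, so that
\[
\frac{\partial g_{ij}}{\partial t} = -2 R_{ij}
\quad\Longrightarrow\quad
\frac{\partial \varphi}{\partial t} = -K = e^{-2\varphi}\,\Delta\varphi = \Delta_{g}\,\varphi .
\]
In the notation above, $ds^2 = e^{-2u}(dx^2 + dy^2)$ corresponds to $\varphi = -u$, and the same heat-type equation $\partial u/\partial t = \Delta_g u$ holds for $u$, with $\Delta_g$ the Laplace-Beltrami operator of the evolving metric.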
Therefore, if there is a stationary solution ∂u ∂t → 0 as t → ∞, it will correspond to the scaling of the first eigenvalue λ 1 (t) → 0 and, by conformally mapping back N using the solution u(t → ∞), we will obtain the solution (Ω, u). In other words, we can summarize this constructive method for finding exceptional domains in R 2 as follows: starting from a 2-dimensional Riemannian manifold with finite volume and metric encoded through the positive real function u, and boundary set defined via u = 0, consider the time evolution given by the Ricci flow, without volume renormalization. Then [37] the manifold will remain Riemannian at all times, and in the t → ∞ limit the function u will become a solution of the nonlinear Laplace-Beltrami equation. Furthermore, if u remains finite everywhere in the domain, then it is harmonic and satisfies both Dirichlet and Neumann conditions at all finite boundary components, so it is a solution for the overdetermined potential problem. Considered together with the (boundary) point at infinity, the manifold is equivalent [30] to a pseudosphere (flat everywhere except at the infinity point, with overall positive curvature). (We wish to emphasize that there is no reason to assume that such constructive methods would be exhaustive.) Thus, so motivated, it is natural to try to characterize exceptional domains in flat Euclidean spaces. The authors in [19] suggested that in two dimensions there are only three examples: a complement of a disk, a halfplane, and a nontrivial example obtained as the image of the strip | ζ| ≤ π/2 under the mapping ζ → ζ + sinh(ζ). We address both of these problems. The paper is organized as follows. In Section 2, we review the theory of Hardy spaces in order to address a subtlety that arises in connection with the regularity of the boundary of an exceptional domain. This leads us to assume in our theorems that the domain Ω is Smirnov. In Section 3, we characterize exteriors of disks as being the only exceptional domain whose boundary is compact. In Section 4, we establish a connection between the "roof function" of an exceptional domain and the so-called Schwarz function of its boundary, and we also show that the boundary of a simply connected exceptional domain Ω can pass either (i) once or (ii) twice through infinity. In Section 5, we show that Case (i) implies that Ω is a halfplane. In Section 6, we show that Case (ii) implies that Ω is the nontrivial example found in [19,Section 2]. In each of these theorems we assume that Ω is Smirnov, but we allow the roof function to be a weak solution merely satisfying the boundary conditions almost everywhere. In Section 7, we extend the result of Section 3 to higher dimensions. In Section 8, we show that the nontrivial example from Section 6 does not allow an extension to axially symmetric domains in four dimensions, contrary to what was suggested in [19, Remark 2.1] (and we conjecture that this example has no analogues in any number of dimensions greater than two). Sections 3 through 6 together partially confirm what was suggested in [19,Section 7] under some assumptions on the topology of Ω and assuming that Ω is Smirnov. In Section 9, we give concluding remarks including a conjecture that, up to similarity, there are only three finite genus exceptional domains. The additional assumption of finite genus is due to a remarkable example of an infinite-genus exceptional domains that appeared in the fluid dynamics literature [4]. See Section 9 for discussion. Remark 1. 
After this paper was submitted, Martin Traizet announced a more complete classification of exceptional domains [38], based on an exciting new connection he developed between this problem and the theory of minimal surfaces. In particular, he characterized the three examples among domains having finitely many boundary components. From this point of view, he also noticed the above-mentioned family of infinite genus examples [4] and characterized them among periodic domains for which the quotient by the period has finitely many boundary components [38, Theorem 13]. For this latter result, he invokes a powerful theorem of W. H. Meeks and M. Wolf. Our methods mostly rely on classical function theory (H^p spaces) and potential theory and are in most parts different from Traizet's. Interestingly, as Traizet notes in his preprint [38, Remark 5], if one could prove his Theorem 13 by invoking pure function theory only, this would give (via Traizet's results) a new and independent proof of the Meeks-Wolf result from minimal surfaces. An attractive challenge!

A natural first step would be to establish some a priori regularity of the boundary of an exceptional domain (as is customary for solutions of free boundary problems). Unfortunately, the problem at hand is complicated by a remarkable family of examples with rectifiable but non-smooth boundaries, a.k.a. non-Smirnov domains, cf. [12, Ch. 10]. This results in adding a Smirnov condition to the assumptions on the domains if we desire to consider "weak solutions", i.e., harmonic "roof functions" satisfying the Dirichlet and Neumann boundary conditions almost everywhere with respect to the Lebesgue measure. In order to address this subtlety, we first give some background from H^p theory, cf. [12] for details.

An analytic function f : D → C is said to belong to the Hardy class H^p, 0 < p < ∞, if the integrals \int_0^{2\pi} |f(re^{i\theta})|^p \, d\theta remain bounded as r → 1. Recall that a Blaschke product is a function of the form B(z) = z^m \prod_n \frac{|a_n|}{a_n}\,\frac{a_n - z}{1 - \bar a_n z}, where m is a nonnegative integer and \sum_n (1 - |a_n|) < ∞. The latter condition ensures convergence of the product (see Theorem 2.4 in [12]). A function analytic in D is called an inner function if its modulus is bounded by 1 and its modulus has radial limit 1 almost everywhere on the boundary. If S(z) is an inner function without zeros, then S(z) is called a singular inner function. An outer function for the class H^p is a function of the form F(z) = e^{i\gamma} \exp\left( \frac{1}{2\pi}\int_0^{2\pi} \frac{e^{it}+z}{e^{it}-z}\,\log\psi(t)\,dt \right), where γ is a real number, ψ(t) ≥ 0, log ψ(t) ∈ L^1, and ψ(t) ∈ L^p. The following theorem [12, Ch. 2, Ch. 5] (also cf. [16]) provides the parametrization of functions in Hardy classes by their zero sets, associated singular measures, and moduli of their boundary values.

Theorem 2.1. Every function f ∈ H^p, f not identically zero, has a canonical factorization f(z) = B(z)S(z)F(z), where B is a Blaschke product, S is a singular inner function, and F is an outer function for H^p; the factorization is unique up to unimodular constant factors.

Suppose Ω is a Jordan domain with rectifiable boundary and f : D → Ω is a conformal map. Then f'(z) ∈ H^1 by Theorem 3.12 in [12]. By Theorem 2.1, f' has a canonical factorization f'(z) = B(z)S(z)F(z), and since f is a conformal map f' does not vanish, so f'(z) = S(z)F(z). Then Ω is called a Smirnov domain if S(z) ≡ 1, so that f'(z) = F(z) is purely an outer function. This definition is independent of the choice of conformal map. There are examples of non-Smirnov domains with, as above, f'(z) = S(z)F(z), but now F(z) ≡ 1 and the singular inner function S(z) is not constant. Such examples were first constructed by M. Keldysh and M. Lavrentiev [24] using complicated geometric arguments. Their existence was somewhat demystified by an analytic proof provided by P. Duren, H. S. Shapiro, and A. L. Shields [13].
Like the disk, such a domain has harmonic measure at zero (assuming f (0) = 0) proportional to arc-length. Thus, its boundary is sometimes called a "pseudocircle". Similarly, there are "exterior pseudocircles", arising as the boundary of an unbounded non-Smirnov domain [22] for which the harmonic measure at infinity is proportional to arclength, and thus Green's function with singularity at infinity provides a roof function that is a weak solution satisfying the boundary conditions almost everywhere. Thus, this provides a pathological example of an exceptional domain in a weak sense. In order to construct such an unbounded non-Smirnov domain, let us follow the method in the above mentioned [13], which is presented in Duren's book [12,Section 10.4]. We recall that the construction is carried out by "working backwards", first writing down a singular inner function S(z) as a candidate for the derivative f (z) of the conformal map f (z). The difficulty is then to show that f (z) is not only analytic, but is also univalent so that it actually gives a conformal map from D to some domain Ω. Univalence is established using a criterion of Nehari which states that the following growth condition on the Schwarzian derivative (Sf )(z) is sufficient for univalence: Let us follow this procedure, indicating the step that needs to be modified. Start with a measure µ ≤ 0, singular with respect to Lebesgue measure on the circle, yet sufficiently smooth, so that it belongs to the Zygmund class Λ * (cf. [12,Section 10.4]). Let g(z) be the exponential of a constant (to be chosen later) times F , Here is where we depart slightly from [12] in order to get an unbounded domain as the image of f (z). Instead of taking g(z) as a candidate for f (z), we take Note that the residue of f (z) is zero (from having made the first moment of µ zero) so that its antiderivative f (z) is analytic in D except for a simple pole at z = 0. Also, |f (z)| = 1 a.e. on ∂D. A calculation shows that the Schwarzian derivative Sf of f is: As explicitly stated in [12,Section 10.4], F (z), F (z) 2 , and F (z) are each Moreover, by the vanishing of the first moment of µ, F (0) = 0, so that F (z)/z is Thus, for a small enough choice of a, (Sf )(z) satisfies the Nehari criterion for univalence (2.2). Hence, f (z) is a conformal map mapping {|z| < 1} onto the complement of a Jordan domain with rectifiable boundary. To see why the boundary is rectifiabile, note that, as stated in [12, Section 10.4]), g(z) ∈ H 1 , and so f (z) = g(z)/z 2 is in H 1 in an annulus 0 < r < |z| < 1. This seemingly excessive construction of an exterior pseudocircle cannot be avoided by simply taking an inversion of an interior pseudocircle; the result will be non-Smirnov, but it will not be an exterior pseudocircle. Nor can one simply take the complement. As P. Jones and S. Smirnov proved in [22], the complement of a non-Smirnov domain is often Smirnov! (This unexpected resolution of a long standing problem put to rest all hopes to characterize the Smirnov property in terms of a boundary curve.) The above examples of non-Smirnov exceptional domains lead to assuming Ω is Smirnov in our main theorems (but we allow u to be a weak solution). An alternative approach is to require u to be a "classical solution" that satisfies the boundary condition everywhere (and not just almost everywhere), then non-Smirnov domains are ruled out. Moreover, real-analyticity of the boundary then follows automatically. To be precise, we have the following Lemma. 
If Ω ⊂ R 2 is exceptional and the roof function u is a "classical solution" in C 1 (Ω), then ∂Ω is locally real-analytic. Choose a point z 0 ∈ ∂Ω, and let ζ 0 = f (z 0 ). Let g(ζ) = f −1 (ζ) denote the local inverse of f (z). Choose a neighborhood U of ζ and let F : Since |g (ζ)| = 1 on ∂Ω, we can also choose U small enough that g does not vanish in F . This implies that h(ζ) = Log(g (ζ)) is analytic in the interior of F and continuous in F . We have {h(ζ)} vanishes on the imaginary axis, since |g (ζ)| = 1 there. Thus h(ζ) extends to a neighborhood of ζ 0 by the Schwarz reflection principle. This allows us to extend g (z) and therefore g(z) and f (z) extend analytically across z 0 , since u := f = 0 on ∂Ω and |∇u| = 1 on ∂Ω near z 0 . The lemma is proved. and Ω is exceptional then ∂Ω is locally realanalytic. Proof. C 2 -smoothness of ∂Ω implies that u is in C 1 (Ω). It remains now to refer to Lemma 2.2. Using Kellogg's theorem on regularity of conformal maps up to the boundary, cf. [31,Ch. 3], one easily extends the above corollary to C 1,α , α > 0, boundaries and even merely to C 1 boundaries. We shall not pursue these details here. It would be interesting to find sharp necessary and sufficient conditions for the a priori regularity of the boundary that would guarantee the conclusion of Corollary 2.3. As we have mentioned in the beginning of this section, it is necessary to assume that the domain is Smirnov, but it is not at all obvious that this is indeed sufficient, cf. a related discussion in [10] and [11] regarding nonconstant functions in E p classes with real boundary values. The Case When Infinity is an Isolated Boundary Point Suppose Ω is an exceptional domain whose complement C \ Ω is bounded and connected, and assume Ω is Smirnov. Then Ω is the exterior of a disk. Proof. Let u be a roof function for Ω. Positivity of u implies, by Bôcher's Theorem [3,Ch. 3], u(z) = u 0 (z) + C log |z| for some constant C, where u 0 (z) is harmonic in Ω ∪ {∞}, and u 0 (z) approaches a constant at infinity (the "Robin constant" of ∂Ω). Thus, in view of the Dirichlet data of u, u(z) is a multiple of the Green's function of Ω with a pole at infinity, and taking v(z) to be the harmonic conjugate of u(z)/C, we have a conformal map g(z) = e u(z)/C+iv(z) from Ω to the exterior of the unit disk (note that g(z) is single-valued in Ω). Using both the Dirichlet and Neumann data, we have |g (z)| = 1/C a.e. on ∂Ω, and therefore a.e. on ∂D. Since g −1 has a simple pole at infinity, Since Ω is Smirnov, the latter function is outer and also has constant modulus on the unit circle a.e., which together imply that it is constant. (Recall from Section 2 that by formula (2.1) an outer function is determined from its boundary values.) Hence g −1 is a linear function and ∂Ω is a circle. We defer proving a higher-dimensional version of this result until Section 7, but we mention here that under more smoothness assumptions the higher-dimensional case can be proved using a theorem of W. Reichel [32]. Under additional smoothness assumptions, the hypothesis of Theorem 3.1 guarantees that Ω is a special type of arclength quadrature domain. The following is then an immediate corollary of a result of B. Gustafsson [18, Remark 6.1]. Theorem 3.2 (B. Gustafsson, 1987). Suppose Ω is a finite genus exceptional domain, with piecewise-C 1 boundary, and infinity is not a point on the boundary of Ω. Then Ω is the exterior of a disk. This removes the condition that the complement of Ω is connected. Proof. 
We will show that Ω is an arclength null quadrature domain for analytic functions vanishing at infinity. At first, consider as a class of test functions to integrate over ∂Ω the rational functions r(z), analytic in Ω and vanishing at infinity. Let f(z) = u(z) + iv(z) be the analytic completion of the roof function u. Note that f'(z) is single-valued (since it is the conjugate of the gradient of u), and by Bôcher's theorem cited above f'(z) = O(|z|^{-1}). Since the gradient of u is the inward normal of ∂Ω, \overline{f'(z)} = 1/f'(z) is as well. The unit tangent vector dz/ds is a 90-degree rotation of the normal vector 1/f'(z). Thus, i f'(z) dz = ds. We then have a quadrature formula for integration of r(z) with respect to arclength:

(3.1) \int_{\partial\Omega} r(z)\, ds = i \int_{\partial\Omega} r(z) f'(z)\, dz = 0,

where the vanishing of this integral is obtained by deforming the contour to infinity, where f'(z)r(z) = O(|z|^{-2}). Indeed, r(z) = O(|z|^{-1}) by assumption on the test class, and f'(z) = O(|z|^{-1}) as mentioned above. If the boundary of Ω is piecewise-C^1 then the rational functions are dense in E^p classes (see [12, Thm. 10.7], and for the multiply connected case, [39], [40], [41]). In particular, the rational functions r(z), vanishing at infinity, are dense in the space of functions E(Ω) considered in [18]. Thus, (3.1) shows that Ω is an arclength null quadrature domain for this space of functions, and the result now follows from Remark 6.1 in [18].

The Schwarz Function of an Exceptional Domain. The Schwarz function of a real-analytic curve Γ is the (unique and guaranteed to exist near Γ) complex-analytic function S(z) that coincides with \bar{z} on Γ. For the basics on the Schwarz function we refer to [9] and [36]. We recall two basic facts needed in the proof of the next proposition: (i) |S'(z)| = 1 on Γ, and (ii) the complex unit tangent vector T(z) = dz/ds along Γ satisfies T(z)^2 = 1/S'(z). Statement (i) follows from the chain rule and the fact that the complex conjugate of the Schwarz function, \bar{S}(z), is an involution (see [9, Ch. 7]). Statement (ii) follows from the formula for the complex unit tangent vector expressing the derivative of z with respect to the arc-length along Γ (see again [9, Ch. 7, Formula (7.5)]).

Proposition 4.1. Let Ω be an exceptional domain whose roof function u is a classical solution, and let Γ = ∂Ω. Then the derivative S'(z) of the Schwarz function of Γ extends analytically throughout Ω, and u_z = c√(−S'(z)) there for some real constant c.

Proof. Lemma 2.2 implies that Γ is locally real-analytic. So Γ has a Schwarz function S(z). The complex conjugate of the analytic function u_z is normal to Γ (since u has zero Dirichlet data). In light of the constant Neumann data, we then have |u_z| = |∇u|/2 = 1/2 on Γ. This, along with the statements (i) and (ii) above, shows that on Γ the vectors u_z(z) and √(−S'(z)) are parallel and each have constant length. Therefore, for some real constant c, the equation u_z(z) = c√(−S'(z)) holds on Γ. But since u_z and √(−S'(z)) are both analytic, the equation is true everywhere that either side is defined. In particular, this guarantees analytic continuation of S'(z) throughout Ω.

Let us use the Schwarz function to give a heuristic argument that the boundary of an exceptional domain can pass through infinity at most twice. In fact, the angle between consecutive arcs at infinity must be π (and obviously there cannot be more than two such angles at infinity). Suppose the boundary of a domain has a corner where two arcs meet at an angle different from 0, π, or 2π. Then the derivatives of the Schwarz functions of the two arcs have a branch cut along a third arc that propagates into the domain from the corner. To see why this is the case, note that the Schwarz function of an arc can be approximated near a point by the Schwarz function of the tangent line. Thus, to first order, the jump along the branch cut is linear, so to zeroth order, the jump of S' is determined by the slopes of the tangent lines.
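To make the zeroth-order statement at the end of this heuristic explicit, here is the elementary computation behind it (a standard fact about Schwarz functions, spelled out here for convenience; it is not part of the original argument). A straight line through a point z_0 with unit direction e^{iθ} consists of the points z = z_0 + t e^{iθ}, t ∈ R, so on that line
\[
\bar z \;=\; \bar z_0 + t e^{-i\theta} \;=\; \bar z_0 + e^{-2i\theta}(z - z_0),
\]
and hence its Schwarz function is S(z) = \bar z_0 + e^{-2i\theta}(z - z_0), with S'(z) = e^{-2i\theta}. Approximating each of the two arcs at the corner by its tangent line, the zeroth-order jump of S' across the corner is e^{-2i\theta_1} - e^{-2i\theta_2}, which vanishes precisely when the two tangent lines coincide, i.e., when θ_1 ≡ θ_2 (mod π); the borderline cases 0 and 2π, where the tangent lines coincide but the orientation reverses, are treated next.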
If the angle is 0 or 2π then the tangent line is the same for each arc, but the orientation changes, so there is still a jump due to the sign change. In the case of an angle of π both the tangent line and the orientation are unchanged. Thus, for any angle other than π, S (z) has a jump across a branch cut between the two boundary boundary components. For an exceptional domain, u is a global solution throughout Ω, and so Proposition 4.1 indicates that the Schwarz function cannot have such branch cuts. Thus, the angle between consecutive boundary arcs at infinity can only be π, and there can be at most two such angles. In the above informal argument, we have assumed that each arc is real analytic at infinity, so that the Schwarz function has an expansion. Alexandre Eremenko related to us the following proof [15] using ideas from [5] that extend techniques due to Ch. Pommerenke. No regularity assumptions on ∂Ω are required. Also, an important part of the theorem readily extends to higher dimensions. [38] obtained the estimate |∇u| ≤ 1 in Ω for domains with finitely many boundary components using the Phragmen-Lindelof principle. For Smirnov domains Ω it suffices to show that u z belongs to the class N + (cf. [10]) in order to conclude that the analytic function u z is bounded by 1 in Ω. However, even this assumption is not needed here, and it is possible to establish the estimate on ∇u in full generality. Alexandre Eremenko has kindly permitted us to include his argument here. Proof. First we note that, as observed in [5,Lemma 1], if u is a positive harmonic function in a disk (or a ball in higher dimensions), D(a, R), of radius R centered at a, and u(z 1 ) = 0 for some boundary point z 1 , then This immediately follows from Harnack's inequality for D(a, R) as for z ∈ D(a, R) and letting z → z 1 establishes (4.1). Applying (4.1) when a ∈ Ω and R is the distance from a to ∂Ω, gives u(a) ≤ 2R ≤ 2(|a| + const). So u(z) = O(|z|), as z → ∞. The fact that u is a combination of at most two Martin functions now follows from a standard argument using Carleman's inequality, see for example [27]. For the higher-dimensional case, one must use [17] instead of [27]. Next we show, in the two dimensional case, the additional claim that ∇u(z) = O(1). Let R > 0 and consider an auxilliary function where R > 0 is a parameter. A direct computation shows that (4.2) ∆ log w R ≥ w 2 R , and w R (z) = 1/R for z ∈ ∂D. We claim that from which the result follows by letting R → ∞ which gives |∇u| ≤ 2 in Ω. Suppose, contrary to (4.3), that w R (z 0 ) > 2/R, for some So the subharmonic function log u − log v is positive in K 0 and vanishes on the boundary, a contradiction. Remark 5. This a priori estimate implies the following corollary showing that the boundaries of exceptional domains are extremely regular. Namely, they are locally real analytic and even parameterized from the unit circle by the antiderivative of a rational function. In particular, it validates the preceding argument using the Schwarz function, and establishes that the boundary passes at most twice through infinity each time with an angle of π. The only additional assumptions needed here are that the domain is Smirnov (cf. Section 2) and simply connected. Let Ω be a simply connected Smirnov domain, and let h(ζ) be the conformal map from D to Ω. If Ω is exceptional then h (ζ) is a rational function, and either: Case (i). h has one pole on ∂D, or Case (ii). h has two poles on ∂D. Proof. 
Let u be a roof function for Ω, and f (z) = u + iv its analytic completion. Since u > 0, f (z) takes Ω into the right halfplane, and f (h(ζ)) takes the unit disk D into the right halfplane. Adding an imaginary constant if necessary, we may assume that f (h(0)) > 0 is real. Then, by the Herglotz Theorem (see [21,Ch. 3], [12, Ch. 1]), we can represent f (h(ζ)) as with µ positive. Now since f (h(ζ)) is the pull back to D of the roof function u, which by Theorem 4.2 is a convex combination of at most two Martin functions, µ consists of at most two atoms. Thus, differentiating (4.4): where R(ζ) is a rational function with either one or two double poles on ∂D (at the atoms of µ). Since f (h(ζ)) is a bounded analytic function in D with |f (h(ζ))| = 1 a.e. on ∂Ω, f (h(ζ)) is an inner function. Moreover, h (ζ) is an outer function, since Ω is Smirnov. For a rational function such as R(ζ) the canonical factorization given by Theorem 2.1 reduces to: with B a Blaschke product and F a (rational) outer function. (The singular factor S(ζ) is trivial, since R(ζ) has no essential singularities.) By the uniqueness of the canonical factorization, h (ζ) and f (h(ζ)) equal F (ζ) and B(ζ) respectively (up to multiplication by a unimodular constant). Hence, h (ζ) = F (ζ) is rational, and f (h(ζ)) = B(z) is a Blaschke product. The Case When Infinity is a Single Point on the Boundary In this Section, we characterize the halfplane as the only simply connected exceptional domain having infinity as a single point on the boundary. This extends [19, Prop. 6.1] by removing the additional hypothesis ∂ x u > 0 in [19] on the roof function in Ω. Theorem 5.1. Suppose Ω satisfies the hypothesis of Corollary 4.3. Then Case (i) implies that Ω is a halfplane. Remark 6. (i) We have not been able to drop the assumption that Ω is simply connected, but as mentioned in the introduction and Section 9, M. Traizet has recently established the result under the assumption of finitely many boundary components. [38]. We note that, in the simply connected case, this assumption is stronger than ours (since we allowed for infinitely many boundary components in Section 4). (ii) We have not been able to prove a higher dimensional version of Theorem 5.1 (cf. Section 7). Proof. Using the same notation f and h from the proof of Corollary 4.3, for some finite positive measure µ on ∂D. By assumption, we are in the case when h has one pole, and according to the proof of Corollary 4.3 µ is an atomic measure with a single point mass. Without loss of generality, we can place it at the point e iθ = 1. Thus, Differentiating (5.1), and as asserted in the proof of Corollary 4.3, f (h(ζ)) is the Blaschke factor of the right hand side, which has no zeros, so f (h(ζ)) is a unimodular constant. Therefore, f is a linear function and Ω is a halfplane. The Case When Infinity is a Double Point of the Boundary In this section we characterize the nontrivial example found in [19]. Suppose Ω is a simply connected domain and Ω is exceptional. By Corollary 4.3, recall that the derivative h (ζ) of the conformal map from the disk onto Ω is a rational function with either one or two double poles on ∂D. Theorem 6.1. Suppose Ω satisfies the hypothesis of Corollary 4.3. Then Case (ii) implies that Ω is, up to similarity, the image of the strip | w| ≤ π/2 under the conformal map g(w) = w + sinh(w), while the analytic completion of the function u(g(w)) is the function f (g(w)) = cosh(w). Remark 7. 
The exceptional domain Ω that is the image of the strip under the conformal map ζ → ζ + sinh ζ is precisely the exceptional domain found by the authors in [19]. Theorem 6.1 together with Theorems 3.1 and 5.1 and Theorem 4.2 show that this Ω is, under an assumption on the topology essentially the only nontrivial example of an exceptional domain in R 2 . It turns out that some assumption on topology is necessary as there is yet a whole one-parameter family of non-similar exceptional domains that have infinite genus (See Section 9). However, under the assumption of finitely many boundary components, the example described in Theorem 6.1 is the only nontrivial example as the previously mentioned recent work of M. Traizet shows [38]. Proof. Using the same notation as in the proofs of Corollary 4.3 and Theorem 5.1, we have that h (ζ) is a rational function, and according to (4.5) f (h(ζ)) is as well. This justifies applying the argument principle to study f (h(ζ)) and f (h(ζ)). Namely, we will prove the following. Claim: The function f solves a differential equation: after simple normalizations described below. Before proving this Claim we solve the differential equation to see that it gives the desired result. Separating variables, Make the substitution f = cosh(w), z = w + sinh(w) (fixing the constant of integration C = 0). Now using the conditions f (z(w)) = 0 for z ∈ ∂Ω, and f (z(w)) > 0 for ζ ∈ Ω, and the identity cosh(x + iy) = cosh(x) cos(y), we find that the pre-image of the domain in the w-plane is the strip | w| ≤ π/2. Therefore, Ω can be described as the image of the strip under the map z(w) = w + sinh(w). Proof of Claim. We will use the argument principle to show that both sides of the Equation (6.1) provide a conformal map from Ω to D. Starting from the formula which relates the tangent vector T (z) on ∂Ω and the derivative of the analytic completion f of u(z), we obtain from the continuity of T (z) through the double point at infinity (see Figure 1), that We conclude that f is a single-sheeted covering of the unit disk by the domain Ω, and that it has only one zero, at some point z 0 ∈ Ω. We may assume that f (z 0 ) = 1. If not, say f (z 0 ) = a + ib, a > 0, then one may subtract the constant ib from f (this just amounts to choosing a different harmonic conjugate for the same roof function), so we have f (z 0 ) = a. Then one may simply replace 1 with a in the claim, and integrating the differential equation is done similarly resulting in a dilation of the original solution. Consider now the function defined on Ω, taking values in the unit disk D Then g is also a univalent map from Ω into D. Indeed, by the argument principle, is a branched, two-sheeted covering of the disk, since it maps each of the two boundary components shown in Figure 1 onto T, Moreover, the single branch point z 0 is mapped to the origin, so that taking the square root gives a single-valued analytic function. Also, f (z 0 ) = g(z 0 ) = 0. This uniquely determines the conformal map up to a unimodular constant, which we may assume is 1 (after a rotation), and we then arrive at the differential equation (6.1). An Extension of Theorem 3.1 to Higher Dimensions In this section, we notice that some results in Section 3 extend to higher dimensions. Theorem 7.1. Suppose Ω is an exceptional domain in R n whose exterior is bounded and connected. If ∂Ω is C 2,α -smooth, α > 0, then ∂Ω is a sphere. Proof. Let u be a roof function for Ω, and let v(s) = 1 |s| n−2 denote the Newtonian kernel. 
Fix y ∈ Ω and take a small ball B ε centered at y. Take also a large ball B R of radius R that containes both B ε and the complement of Ω. Since u(x) and v(x − y) are harmonic in Ω \ B ε , Green's second identity gives Letting R → ∞, we can drop the integration over ∂B R , since again by Bôcher's Theorem [3,Ch. 3], near infinity u(x) ≈ |x| 2−n . Since, u(x) = 0 on ∂Ω and ∂ n u(x) = 1 on ∂Ω, Let U be the bounded domain such that R n \ U = Ω. The outward normal for ∂U is opposite to that of ∂Ω, and since v(x − y) = 1 ε n−2 on ∂B ε , For the first term on the right-hand-side, we have as ε → 0. So, u(y) is the single layer potential with charge density one on the surface ∂U . That U is a ball now follows from a theorem of W. Reichel [32, Theorem 1]. Remark 8. Reichel's result holds for more general elliptic operators than the Laplacian. In the setting of the Laplacian, J. L. Lewis and A. Vogel [28] characterized the sphere in terms of its interior Greeen's function under weaker regularity assumptions, namely, the boundary is assumed Lipschitz. In that case, the Neumann condition can be assumed to hold almost everywhere on the boundary. Thus, the hypothesis of Theorem 7.1 could be weakened by checking that the same proof [28] works for the exterior case we are interested in. Yet, we have chosen an easier and more transpoarent path to apply Reichel's result directly, even though it requires a stronger regularity on the boundary. Nonexistence of a Higher-dimensional Analog of the cosh(z) Example The authors in [19] expressed a suspicion (see Remark 2.1 in [19]) that there exist n-dimensional, rotationally-symmetric examples similar to the two-dimensional example {(x, y) ∈ R 2 : |y| < π 2 + cosh(x)} that appeared in Section 6. We show that there does not exist an exceptional domain in R 4 whose boundary is generated by rotation about the x-axis of the (two-dimensional) graph of an even function. Theorem 8.1. There does not exist a rotationally-symmetric exceptional domain Ω in R 4 that contains its own axis of symmetry and whose boundary is obtained by rotating the (two-dimensional) graph of an even real-analytic function about the x-axis. Remark 9. (i) Our proof will rely heavily on two tricks, one exploiting the assumption that n = 4, and the other using the assumption that the generating curve is symmetric. However, we strongly suspect a more general non-existence of such examples in R n for any n > 2. Therefore, we conjecture the following. Conjecture 8.2. For n > 2, there does not exist an axially symmetric, exceptional domain in R n that contains its own axis of symmetry. (ii) The assumption that the domain contains its axis of symmetry rules out the exteriors of balls and circular (or spherical) cylinders, respectively (which are clearly exceptional domains as was noted in [19]). Also, A. Petrosyan and K. Ramachandran pointed out to us that the nonconvex component of the exterior of a certain cone is also an exceptional domain. In R 4 , using the x-axis as the axis of rotation, the cone is the rotation of {(x, y) : y 2 − x 2 = 0}, and the roof function in the meridian coordinates x, y where y is the distance to the x-axis in R 4 , is u(x, y) = y 2 −x 2 y for y > 0. Proof of Theorem 8.1. Suppose that Ω is such a domain in R 4 . Namely, the boundary ∂Ω is obtained from rotation of γ := {(x, y) ∈ R 2 : y = g(x)}, with g(−x) = g(x). 
i.e., the boundary of Ω is given by Considering the boundary data, the rotational symmetry of the domain will be passed to the roof function, so that, abusing notation, we can write For clarity, we emphasize that the x-axis corresponds to the axis of symmetry and the y-coordinate gives the distance from the axis of symmetry. For axially symmetric potentials v in R n the cylindrical reduction of Laplace's equation is: where x = x 1 and y = x 2 2 + ... + x 2 n . Moreover, in the case we are considering, when n = 4, u satisfies the equation ∆u + 2uy y = 0, if and only if yu(x, y) is a harmonic function of two variables x and y. Indeed, ∆(yu) = y∆u + 2∇u · ∇y + u∆y = y∆u + 2u y . (The trick that reduces axially symmetric potentials in R 4 to harmonic functions in the meridian plane is well known, cf. [26] and [23].) Since yu(x, y) is then harmonic in the unbounded two-dimensional domain D bounded by γ and its reflection (which we denote byγ) with respect to the x-axis, this implies ∂ ∂z (yu(x, y)) is analytic in the domain D, where as usual z = x + iy. The Cauchy data (originally posed in R 4 ) imply that u z = 1 2 (u x − iu y ) coincides with −S (z) on γ andγ. This implies that the analytic function 2i −S (z) is analytic, this actually gives a formula for W (z) valid not only on γ andγ: We note that (8.2) can be used to analytically continue S(z) to all of D, but this is not needed in our proof. S ± (f (ζ)) = f (ζ ∓ i), and Substituting these into (8.2), we obtain two expressions for the pullback of W (z) to the strip Σ: Even though W (f (ζ)) is analytic throughout Σ, we caution that these two expressions (one expression for "+" and one for "−") may only be valid near the bottom and top sides (respectively) of the strip Σ. Claim: The function W (f (ζ)) is odd. Before proving the Claim, let us see how it is used to finish the proof of the Theorem. The fact that W (f (ζ)) is odd implies W (0) = W (f (0)) = 0. By (8.1) we then have that −i 2 u + yu z vanishes at z = 0, which implies that u(0, 0) = 0. This contradicts the positivity of u. We show that this formula vanishes where it is valid, which then implies that V (ζ) vanishes identically throughout Σ. For this, we use the fact that f is odd and consequently f is even. This establishes the Claim. Concluding Remarks and Main Conjecture 1. It is tempting to conjecture that the three examples in the plane studied above are the only exceptional domains in the plane as suggested in [19]. However, there is a remarkable family of infinitely-connected exceptional domains. These were discovered as solutions to a fluid dynamics problem by Baker, Saffman, and Sheffield in 1976 [4]. See also [8] for a more detailed account. The original problem there was to find hollow vortex equilibria with an infinite periodic array of vortices, i.e., "spinning bubbles" amid a stationary flow of ideal fluid. The domain occupied by fluid turns out to be an exceptional domain with an infinite periodic array of holes, and the roof function is a stream function of the fluid flow, see Figure 3. The constant Dirichlet condition corresponds to the requirement that the boundary of each hollow vortex is a stream line, and the constant Neumann condition corresponds to the requirement that the fluid pressure should be balanced at the interface by the pressure inside each bubble which is assumed constant. 
The latter correspondence is more subtle; in order to have constant pressure along a stream line, the fluid velocity (which equals the normal derivative of stream function) should be constant according to Bernoulli's law. This infinite genus example leads us to add to the conjecture the assumption that the domain has finite genus. Conjecture 9.1. The only finite genus exceptional domains in R 2 are the exterior of the unit disk, the halfplane, and the domain described in Theorem 6.1. Remark 10. As mentioned in the introduction, Martin Traizet [38] recently announced a classification of exceptional domains. His results confirm our conjecture for domains having finitely many boundary components and also show that the above infinite genus example is the only periodic exceptional domain for which the quotient by the period has finitely many boundary components. His methods use a remarkable nontrivial correspondence to minimal surfaces, perturbing an exceptional domain by harmonically mapping it to another domain in such a way that the graph of the new height function (which pulls back to the roof function in the There is actually a whole one parameter family of different bubble shapes. As noticed in [38], each of the three previously known examples can be recovered as different scaling limits of this family. In that sense, this family includes all known examples. original domain) satisfies the minimal surface equation. A miraculous (and crucial to his approach) by-product is that, whereas the graph of the roof function meets its boundary at a 45-degree angle, the minimal graph meets its boundary vertically so that gluing it to its own reflection over the xy-plane results in a smooth minimal surface (without boundary!) embedded in R 3 . 2. Regarding the higher-dimensional case, we conjecture the following extension of Theorem 5.1 to higher dimensions. Suppose Ω is an exceptional domain in R n that is homeomorphic to a halfspace. Then Ω is a halfspace. 3. The connection to the Schwarz function in Section 4 reveals that exceptional domains are arclength null-quadrature domains. That is, for any function f , say analytic in Ω, continous in Ω, integrable over the boundary, and decaying sufficiently at infinity, we have ∂Ω f ds = 0. Indeed, ∂Ω f ds = ∂Ω f (z) 1 T (z) T (z)ds = ∂Ω f (z) S (z)dz, where T (z) is the complex unit tangent vector (see Section 4), and now this integral vanishes as long as the integrand decays sufficiently at infinity. Null-quadrature were previously studied in the case of area measure. They were characterized in the plane by M. Sakai [33]. Our current study can be seen as a step toward characterizing null-quadrature domains for arclength. 4. Other interesting connections involve differentials on Riemann surfaces. The study of Gustafsson [18] used half-order differentials on the Schottky double of an arclength quadrature domain. From a different point of view, the boundary of an exceptional domain is a trajectory of the positive quadratic differential −(df ) 2 , where f (z) is the analytic completion of the roof function. 5. The differential equation (6.1) can be solved by a more general substitution using Jacobi elliptic functions [1, p. 567, §16]: f (ζ, k) ≡ cos(θ)cn(ζ, k) + sin(θ)sn(ζ, k) and z(ζ) = φ(ζ) + cos(θ)sn(ζ, k) − sin(θ)cn(ζ, k), where φ(ζ) = ζ dn(ξ, k)dξ and θ is an arbitrary phase, θ ∈ [0, 2π]. 
For a given value of the elliptic modulus k ∈ [0, 1], we define the corresponding domain F through its fundamental periods T 1 (k) = 4F (π/2, √ 1 − k 2 ) and T 2 (k) = 4F (π/2, k), where F (π/2, k) = K(k) is the complete elliptic integral of the first kind [1, p. 590, §17.3]: It diverges for k = 1 and equals π/2 for k = 0. Let γ be the pre-image of ∂Ω under z(ζ): it consists of two pieces γ ± , γ − = −γ + , dividing the fundamental domain F into three sub-domains. Denote the component which contains the origin by D 0 , then since f (0) = 1, we conclude that f (z) > 0 for z ∈ D 0 \ γ ± , and we have proven the following result. As noted before, the conditions f (ζ)| γ± = 0 give the pre-image γ ± := z −1 (∂Ω) = { ζ = ± π 2 }, and the pre-image of the domain, D 0 , becomes the strip | ζ| ≤ π 2 . 6. Note that the domain D 0 (k) is the pre-image of the unit disk under the map ζ(w) : F → D, with the support of µ at points ζ ± = ± 1−ik 1+ik , where µ is the measure discussed in the proof of Corollary 4.3. The case k → 0 corresponds to the strip domain and to ζ ± = ±1. The reparametrization invariance noted above for the solution f (z) of (6.1) under rescaling of the elliptic modulus k is indicative of a deeper invariance of the solution: all the specific solutions in C discussed here are associated with fixed points in the moduli space of Riemann surfaces. Let again f (h(z)) be the analytic completion of a solution, and denote by G the group of transformations which leaves supp(µ) invariant up to a global rotation. It follows that f is an automorphism of the quotient of the group of linear fractional transformations by G, which can be in general a Kleinian group [30]. The limit set (accumulation points of the orbits of the group) can be finite (in which case it can consist of only 0, 1, or 2 points), or infinite. It is known ( [2], Thm. 10.3.4.) that the set of homeomorphic solutions for a quasilinear elliptic equation of Laplace-Beltrami type forms a group only in the case of finite limit set [2]. The Kleinian groups are called degenerate in this case, and they correspond to either finite groups (with empty limit set), or the cyclic groups (generated by one element, with limit set consisting of 1 or 2 points). These correspond to the solutions described in the present paper (isolated point at infinity, respectively simple and double boundary point at infinity).
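Returning to the degeneration noted in item 6 above (that the case k → 0 reproduces the strip example of Theorem 6.1), here is a quick sketch of the verification; it is ours and is not part of the original text. Set k = 0 and, for simplicity, θ = 0 in the Jacobi elliptic substitution of item 5. Since sn(ζ, 0) = sin ζ, cn(ζ, 0) = cos ζ, dn(ζ, 0) = 1, and hence φ(ζ) = ζ, the substitution becomes
\[
f = \cos\zeta, \qquad z = \zeta + \sin\zeta .
\]
Writing ζ = −iw gives f = cos(−iw) = cosh(w) and z = −i(w + sinh(w)), so up to the rotation z → −iz this is exactly the map w → w + sinh(w) with analytic completion cosh(w) from Theorem 6.1. Consistently, the period T_2(0) = 4F(π/2, 0) = 2π stays finite while T_1(0) = 4F(π/2, 1) diverges, reflecting the degeneration to the strip.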
2013-03-28T01:29:37.000Z
2012-05-23T00:00:00.000
{ "year": 2012, "sha1": "6527dec676458590dd5e561283c2156fd5b30a6e", "oa_license": null, "oa_url": "http://msp.org/pjm/2013/265-1/pjm-v265-n1-p04-s.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "4a4915fa6be835c40c073a61efef8fe7efa6fd1c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
213416605
pes2o/s2orc
v3-fos-license
Purchase Intention in the Online Open Market: Do Concerns for E-Commerce Really Matter? This study aims to investigate motivational factors and motivation hindering factors of online shopping via online open market platforms. For a comprehensive exploration, the response data were collected from a total of 417 Korean consumers before conducting a hierarchical regression analysis. The results showed that the effects of motivation factors on purchasing intention were all supported. As for moderating effects of concerns for e-commerce, privacy concerns by time saving, perceived ease of use, and security concerns by cost saving were found to be statistically significant. Privacy concerns by cost saving and business integrity concerns by time saving were also found to be statistically significant, but had a positive effect as opposed to an initial prediction. The finding denotes that, in order to reduce concerns for e-commerce, consumers may prefer using the online shops they can trust based on their previous shopping experience. Various concerns identified and analyzed in this study are clues to better understanding what potentially motivates or obstructs consumers to shop online, thereby helping businesses thrive in the online open market. Building on the existing technology acceptance model, social cognitive theory, and social exchange theory, the study proceeded with the survey to identify both motivation factors and motivation hindering factors of online open market shopping; a hierarchical regression analysis was then used to test the effects of these factors and their moderation by concerns for e-commerce, and, contrary to what was assumed, privacy concern by cost saving showed a positive effect.

Introduction. The online market has provided unique opportunities for e-commerce businesses. But it is also true that competition among online markets is getting stiffer as an increasing number of businesses open online stores or shopping apps with the growth of e-commerce users. In this highly competitive situation, attracting and retaining more customers has become a major concern for online business firms [1,2]. Total online sales in Korea reached KRW 9.6 billion (1 USD = KRW 1170 as of 6 January 2020), up 16.4% year over year as of February 2019 [3]. As for the growth rate by types of e-commerce, the business-to-consumer (B2C) market grew about 94 times, while the consumer-to-consumer (C2C) market expanded more than 320 times during the period from 2001 to 2013, a more notable growth in C2C transactions in Korea [4]. Among numerous online markets, open markets have attracted more consumers, leading the growth of the e-commerce market in Korea [5]. An open market, modeled after online auctions, is an online store mediating business between sellers and consumers. In other words, the open market owner or the platform operator is a middleman that enables transactions between sellers and buyers, which benefits sellers because they can focus on developing and providing quality products. An open market has a commission-based structure, not counting on merchandising capabilities, because the revenue comes from commissions paid by third-party vendors. The online auction or reverse auction model, pioneered by the first generation of e-marketplaces, including eBay, Auction, and Priceline, enormously expanded its reach into the e-commerce industry. Amazon, first founded as an Internet-based book seller, now sells more items through its third-party marketplace than through its own retail operations.
In Korea, major players include Auction, Gmarket (Seoul, Korea), Interpark (Seoul, Korea), and SK Planet's (Seoul, Korea) 11th Street. E-commerce users face a number of concerns in using e-commerce platforms, including security and reliability issues [6,7]. Users want to make sure whether the website they are using is a secure online destination [8]. They also try to develop their views about the legitimacy of the online business, which eventually affecting users' intention to make transactions on the Internet [9]. What is problematic is a lack of trustworthy standards that help e-commerce users to assess legitimacy and credibility of vendors, which consequently makes users keep worrying about e-commerce security [10]. Consumers tend to make a final purchase decision after considering various factors including brand reputation or awareness, product quality, security, user-friendliness, price competitiveness, product diversity, and others. Among various issues related to e-commerce security, the effect of consumer concerns and perceived risk associated with the likelihood of purchase has been considered one of the most vital subjects of research [11]. E-commerce security refers to keeping e-commerce sites secure by ensuring the safety of personal information, security systems, and payment methods. Since consumers' purchase decisions are greatly affected by the safety of electronic transactions [12], more attention needs to be paid in the interactions between e-commerce security and other various factors affecting purchase through open market websites. Despite the recent exponential growth of online platforms, there has been less empirical evidence for what motivates to purchase over an online open market [5,13,14]. There are tons of research studying consumers' behavior related to online shopping, but most of them focused on a single aspect of purchase intention when purchasing online shopping malls. Moreover, little research has been done so far to investigate both motivational and hindering factors impacting purchase in the online open market. From a practical perspective, this research may provide online open market operators with essential information and guidelines by accurately identifying various types of perceived risk and the factors that keep potential buyers from visiting an open market website. Based on such guidelines, the operators may be able to take appropriate measures aiming at reducing consumers' risk levels [15], which will consequently improve actual sales and consumers' responses, including purchase intention [16]. In particular, this study seeks to investigate the perceived barriers influencing the intention of both users and non-users to purchase via an online open market. Understanding these potential obstacles helps open market service providers develop more efficient strategies in terms of marketing and website operation, which will drive more traffic to the websites. Therefore, the ultimate goal of this study is to identify antecedents, both motivating and hindering factors affecting purchase intention in the e-marketplaces, which is a different approach from previous studies. With this goal in mind, purchase intention in the e-marketplace is observed as an outcome variable and online open market are the main subject of this research. Purchase Intention through an Online Open Market Purchase intention is a consumer's tendency to perform a particular action and serves as a critical barometer to predict consumer behavior [17]. 
Consumers generally develop their expectations based on product information before they decide to purchase a product. Accordingly, the intention consumers have prior to purchase is influenced by their attitudes at the pre-purchase step [18]. In other words, purchase intention refers to a consumer's expression of willingness to take a particular action in relation to buying a certain product or service, and it is influenced by the consumer's trust and attitudes toward a product or service. Also, the purchase decision is more greatly influenced by consumers' purchase intention than by their attitudes toward a product or service itself [19]. In this study, a consumer's intention to purchase from an online open market is considered a dependent variable to examine what motivates or discourages online purchase intention. This section first overviews favorable motivating factors, and Section 2.2 addresses motivation hindering factors by describing concerns for e-commerce based on the literature pertaining to perceived risk. Purchase intention pursued by this study connotes "transaction intention" because it is developed not by a certain product or service but by a transaction platform, namely an online open market. Building upon the definition of purchase intention mentioned above, "intention to purchase from an online open market" is described as one's willingness to perform a specific behavior to buy a product or service from an online open market, and the intention is influenced by the consumer's trust in online transactions placed in a particular open market and attitudes toward that open market website. Consumer trust is considered important because it serves as a motivation factor of purchase intention. Among numerous theories regarding motivation factors that lead to developing behavioral intention, this study finds its theoretical foundations in the following three models that have been widely adopted in the field of MIS (Management Information Systems): the technology acceptance model, social cognitive theory, and social exchange theory. By doing so, this study seeks to identify motivation factors that determine purchase intention in the online open market. Technology Acceptance Model. The theory of reasoned action (TRA) and the theory of planned behavior (TPB) have been widely used by researchers as a theoretical base to explain and predict user acceptance and use of IT, and based on these theories, several competing models have been suggested [20][21][22]. The technology acceptance model (TAM), first suggested by Davis [23], provides a foundation for examining the impact of external factors on internal beliefs, attitudes, and intention associated with employing and accepting technologies. As an expansion of TRA, TAM has been widely used to investigate information technology usage behaviors. TRA posited that a person's behavior is guided by one's behavioral intention [24]. Likewise, it can be assumed that the intention to use information systems may lead to actual usage behavior. TAM presents two important constructs, perceived usefulness and perceived ease of use, and both of them influence behavioral intention. Perceived usefulness is defined as "the extent to which people believe that the ability to perform a given task is improved by using a particular technology or system." This can be interpreted to mean that the intention to use technologies further increases if people expect that their job performance will be enhanced by the use of technologies.
Perceived ease of use is "the extent to which people believe that using a particular system is effortless." When a technology is perceived to be easier to use, technology usage intention is positively influenced, and more benefits are expected from the use of this technology. This eventually influences perceived usefulness [23]. A better understanding of these causal relationships helps in designing effective intervention measures that make more people use new information systems [25]. TAM excludes an attitude variable from the model based on its logic that perceived usefulness has a direct impact on information technology acceptance intention without going through acceptance attitudes. TAM has been the cornerstone of numerous prior studies that aim to predict user intention and behavior associated with the use of information systems [25,26]. Drawn from beliefs-related variables in TAM, this study focuses on perceived usefulness and perceived ease of use, and ultimately aims to understand motivation factors of purchase intention in the online open market in light of the technology acceptance mechanism. Social Cognitive Theory. Social cognitive theory (SCT) has been generally used to understand individuals' motivations, thoughts, and behaviors in various situations [27]. This theory argues that environmental, cognitive, and personal factors influence each other [27], and that individuals not only react to the environment but also behave in a way to make a positive change in the environment. Among the constructs mentioned in SCT, this study sheds more light on the role of a cognitive component: outcome expectations. Outcome expectations are defined as one's assurance that engaging in a behavior will produce certain outcomes [28], which is considered one of the primary factors used to predict future behaviors. Since individuals tend to engage in a behavior whose outcome is expected to be positive, outcome expectations may discourage performing a certain behavior with negative expected consequences. Outcome expectations are consistent with perceived usefulness proposed by Davis [29] in the TAM model [30] and serve as a critical factor for explaining one's engagement in a certain behavior in a plethora of research on information systems [30,31]. From the perspective of SCT, consumers do not simply respond to a given setting of purchase, but develop the intention to buy from an online open market (purchase intention) based on the expected benefits they may have from the purchase (perceived usefulness) and self-efficacy (perceived ease of use). Social Exchange Theory. Social exchange theory (SET) was proposed by Homans [32] and claimed that a person's various social behaviors are processes of exchange. Rooted in economics, SET asserted that one's behavior is results-driven, based on calculating and evaluating the tangible or intangible costs and benefits resulting from a given behavior [33]. If the rewards one can receive from an interaction exceed the costs of the interaction, the interaction is more likely to continue [34]. Conversely, the interaction is unlikely to occur when the costs of interaction are greater than the rewards of interaction. Therefore, SET can provide theoretical evidence to explain cognitive processes associated with purchase decisions when consumers try to buy something from open market websites. These consumers may be able to understand the offset between benefits and costs resulting from the use of an online open market.
For instance, making an online purchase in an open market enables consumers to save time and money while they may face risks for e-commerce transactions. To conclude, TAM, SCT, and SET are appropriate for providing a foundation for understanding consumers' cognitive processes when deciding to purchase from an online open market. This study viewed the costs and benefits explained in SET as outcome expectations consumers have by making a purchase via online open market and defined the key terms describing factors that influence purchase intention in the online open market. Two terms-economic advantages (i.e., cost saving and time saving) and convenience (i.e., perceived ease of use)-were defined to explain motivation factors, and one term-perceived risk-was created to examine motivation hindering factors. Perceived Risk and Concerns for E-Commerce Perceived risk is the uncertainty people experience when buying a product or service and is defined as the expectations of losses associated with a certain purchase [35]. Liebermann and Paroush [36] verified that willingness to accept newly launched products is mainly determined by how to alleviate perceived risk that might be caused by new products. The impact of perceived risk is applied to potential customers who are not currently using a product but positively consider becoming users based on a strong interest in the product. Existing users who are willing to continue to use a product are also influenced by perceived risk [37]. The same goes for online purchase behaviors. There has been previous research suggesting that perceived risk is one of the major considerations in the process of consumers' decision-making on the Internet. For example, Donthu and Garcia [38] revealed that non-online shoppers are more risk-averse than online shoppers, which indicating that non-online shoppers perceive a higher degree of subjective risk associated with online shopping. Consumers tend to feel greater risk in their online shopping because the e-commerce environment has unique characteristics, such as the lack of information available and lack of face-to-face interaction [39,40]. Furthermore, online payments and sharing personal information greatly increase the perceived risk associated with online shopping [41]. An extensive body of research has already found that higher levels of perceived risk may negatively influence consumers' intention to buy items online and their behaviors on the website (e.g., [39,[41][42][43][44][45][46][47][48][49][50][51]). Based on these prior studies, consumer hesitation in making online transactions is partly driven by relatively high levels of concerns about online shopping [52]. According to a recent preliminary literature review, perceived risk is divided into various components [37,53]. Among them, online privacy and security are assumed as the major concerns arising from online shopping. Both online users and non-online users develop perceived risk involved with other components. For example, concerns related to security, privacy, and business integrity are the main barriers making consumers hesitate to purchase on the Internet [9,54,55]. In this study, consumers' concerns over e-commerce are examined based on three dimensions-security, privacy, and business integrity. The higher the concerns for e-commerce, the weaker the online purchase intention generated. 
This is because higher levels of perceived risk may drive up the expectations of losses associated with purchase decision, which consequently leads to hesitation and aversion to online purchase in open markets. This study seeks to examine three dimensions of concerns for e-commerce from platform-related and seller-related viewpoints. There are four agents who make online open markets work: sellers, buyers, third-party vendors, and the platform itself. Concerns over privacy and security belong to the domain of the platform because they are greatly influenced by the physical environment of the open market website, business policies, and guidelines. The role of sellers in the open market is less relevant with running and maintaining a platform. They are just one of the marketplace participants trading through the website, which is different from general e-commerce websites where sellers own and operate the website. Unlike privacy and security issues, business integrity concerns are deeply related to sellers, not coming from a platform. Therefore, the role of business integrity concerns may be distinct from that of privacy and security concerns. Motivation Factors and Purchase Intention in the Online Open Market According to motivation researchers, motivation to perform a behavior is broadly divided into two categories-extrinsic and intrinsic motivation [56]. Extrinsic motivation plays an instrumental role in encouraging people to behave in order to achieve external rewards or values outcomes. Better job performance, benefits, pay rise or promotions are the sources of extrinsic motivation (e.g., [57][58][59]). While one is extrinsically motivated due to reinforcements of valued outcomes, intrinsic motivation is defined as doing activities without apparent reinforcements or rewards, but caused by internal interests or enjoyment [60,61]. Built upon outcome expectations suggested by SCT, this study focuses more on extrinsic motivation and considers perceived usefulness as an example of extrinsic motivation [56]. In the context of this study, perceived usefulness refers to perceived beliefs that purchasing from an open market saves costs and time. Based on TAM [30], perceived usefulness has a positive effect on purchase intention in the online open market. The concept of perceived usefulness overlaps with perceived values. The concept of perceived values, a comprehensive term, is divided into functional values and emotional values [62,63]. Functional values are defined by individuals' rational and economic assessment, while emotional values include sentimental or social dimensions. An open market is a platform where numerous sellers and buyers transact through an online open market, and identical products are traded through other open market websites or offline stores by the same or different sellers. Therefore, the better way of approaching perceived values is focusing on a platform itself, not a single product traded on the website. In addition, this study converted dimensions of perceived values into economic and time values with a view to identifying extrinsic motivation factors. Consumers perceive economic values when they purchase a product or service at a lower price than alternatives are purchased [64]. The alternatives in this study are defined as purchasing from offline stores or other online open markets. 
According to the previous research about economic and time values that consumers appreciate, purchase intention is positively perceived when consumers perceive more benefits than sacrifices [65], and it can be therefore concluded that economic and time values have a positive effect on consumers' choice of purchase channel [66]. Likewise, it was hypothesized that cost saving and time saving have a direct impact on purchase intention in the online open market. Davis [29] asserted that an individual's attitudes determine one's use of a system because they are attributed to the impact the system has on one's performance. Therefore, even if consumers do not prefer an open market as their purchase channel, they are more likely to choose an open market if they perceive that transacting through that online open market reduces cost or time. it can be said that purchase intention in the online open market is determined by the effect of cost and time saving. Hypothesis 1 (H1). Cost saving influences the intention to purchase through online open markets. Hypothesis 2 (H2). Time saving influences the intention to purchase through online open markets. Perceived ease of use is closely related to self-efficacy, competence, and self-determination [56]. In particular, consumers with higher levels of self-efficacy are able to make an informed decision more efficiently because open market websites offer more information from more sellers. If consumers have already transacted through an open market, consumers' self-efficacy is the result of their perception of accumulated satisfaction arising from experience across various open market websites. That is why those with online shopping experience are likely to have stronger willingness to buy again via an online open market [67]. In other words, perceived ease of use, one of the determinants of self-efficacy, may positively influence purchase intention in the online open market. These discussions and reasoning lead to the following hypothesis: Hypothesis 3 (H3). Perceived ease of use influences the intention to purchase through online open markets. Motivation Hindering Factors: Concerns for E-Commerce Consumers' privacy in the online transaction is not a novel issue because online privacy is considered one of the most essential concerns in the e-commerce industry [55]. For e-commerce users, how their personal information is used and treated by online businesses has long been a source of concern [68]. Online privacy concerns include consumers' fears about the loss and mishandling of individually identifiable information [69]. With the rapid growth of e-commerce sales, privacy concerns have become a prominent challenge in terms of public policy making [70,71]. This is because consumers' personal and financial data are shared with online retailers on almost every interaction, and consumers expect that their information should be confidentially treated [69]. The key principles of online privacy include keeping information collection and dissemination practices transparent, letting consumers choose how their personal information may be used, taking measures to ensure integrity of private information, and precluding the unauthorized disclosure of private information [72]. Privacy infringement usually involves collecting, disclosing, or using personal data obtained by e-commerce transactions in an unauthorized way [73]. Therefore, higher privacy concerns may negatively influence purchase processes in the online open market, serving as a motivation hindering factor. 
Hypothesis 4a (H4a). Privacy concern negatively moderates the effect of cost saving on the intention to purchase through online open markets. Hypothesis 4b (H4b). Privacy concern negatively moderates the effect of time saving on the intention to purchase through online open markets. Hypothesis 4c (H4c). Privacy concern negatively moderates the effect of perceived ease of use on the intention to purchase through online open markets. When an appropriate level of security is not guaranteed, consumers hesitate to trade through e-commerce sites [74]. That is why security is another vital concern over online transactions. The potential risks to those using credit cards on the Internet are widely publicized. However, a major threat to Internet businesses is payment fraud [75]. Fraudulent activities, including illegal transactions and non-creditworthy orders, are increasing and now account for up to one-sixth of all purchases on the Internet. E-commerce fraud is evolving to include diverse types of activities such as break-ins, technology disturbance, stalking, impersonation, and identity theft [76]. Computer hacking is another serious issue everyone should be concerned about; hacking techniques can be either benign or malicious. Consumers' understanding of the protection measures implemented by online open markets determines their perceived security of online transactions [9]. When ordinary consumers find that security features and protection programs are in place on a website, they are able to understand the website's intention to meet requirements regarding secure online transactions [9]. This encourages buyers to make a purchase decision because the above-mentioned protections highlight vendors' efforts to win consumers' trust and mitigate consumers' perceived risk levels [9]. Conversely, if prospective consumers who have visited an open market are, due to their past experience, concerned about the lack of protections and security features, the motivation to purchase from the online open market may be hindered. Based on these inferences, the following hypotheses are proposed: Hypothesis 5a (H5a). Security concern negatively moderates the effect of cost saving on the intention to purchase through online open markets. Hypothesis 5b (H5b). Security concern negatively moderates the effect of time saving on the intention to purchase through online open markets. Hypothesis 5c (H5c). Security concern negatively moderates the effect of perceived ease of use on the intention to purchase through online open markets. As online fraud increases, online consumers are always cautious about getting cheated or sharing sensitive information with fraudsters who commit identity theft. In this situation, how collected personal data are used and treated by online merchants is critical, and the importance of business integrity is greater now than ever before [70]. In other words, whether consumers interact with e-commerce merchants mainly depends on the integrity of online businesses [55]. Therefore, if consumers have concerns over the legitimacy of online businesses, the motivation to trade through the online open market might be inhibited. Hypothesis 6a (H6a). Business integrity concern negatively moderates the effect of cost saving on the intention to purchase through online open markets. Hypothesis 6b (H6b). Business integrity concern negatively moderates the effect of time saving on the intention to purchase through online open markets.
Measurements
This study established reflective measurement items based on prior studies, and Appendix A contains the sources of the measurement items. Items to measure purchase intention in the online open market, the dependent variable of this study, were drawn from measures of behavioral intention toward technology use in previous studies [15,54] and modified to fit the purpose of this study. The three motivation factors affecting online purchase intention in an open market were measured with nine questions. Since these factors contain dimensions of perceived values, the nine measurement items consisted of (1) three items related to cost saving drawn from the prior studies of Mittal [77] and Dickinger and Kleijnen [78], (2) three items related to time saving drawn from the work of Gipp et al. [79], and (3) three items related to perceived ease of use suggested in Davis's TAM [23]. Concerns for e-commerce, the moderation variables, were based on the items measuring three sub-dimensions in Kim et al.'s [55] research. Five questions drawn from Chen's research [80] were used to measure privacy concern, and four questions were developed based on the work of Gefen [54] and Swaminathan et al. [81] to measure security concern. Business integrity concern was measured with four items suggested by Kim et al. [9]. A five-point Likert scale was used to assess the level of agreement (1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = strongly agree), and age and gender acted as control variables. Measurement items went through two rounds of translation, first from English to Korean and then back to English, to remove inconsistencies and differences between the original and translated items [82].
Data Collection
For an empirical verification of the research hypotheses, this study surveyed Korean consumers who have shopped online. A Korean survey agency first accessed an online panel of over 2000 randomly selected subjects, and the survey lasted for a month in March 2019. A total of 417 responses (about 21%) were returned, and these went through a series of statistical analyses using SPSS 22.0 that included frequency analysis, reliability and validity tests, and hierarchical regression analysis. The demographics of the respondents are shown in Table 1. About 72% of the respondents are in their 20s or 30s. Given that the vast majority of online open market users are younger, the age distribution of respondents may reflect a general trend in e-commerce. The respondents said that they frequently visit the following sites: SK Planet's 11th street (22.8%), Coupang (Seoul, Korea, 17.3%), Gmarket (16.1%), Naver (Seongnam-si, Korea, 12.7%), and Auction (Seoul, Korea, 11.7%); among them, Auction and Gmarket are run by eBay Korea, a subsidiary of the US company eBay.
Reliability and Validity
For the assessment of convergent validity, factor loadings were computed. In an exploratory factor analysis, a factor loading higher than 0.7 is considered reliable and acceptable [83,84]. Following the results of the factor analysis, SC3 was excluded because its factor loading was below 0.5. As shown in Tables 2 and 3, 24 items were found to be reliable and valid, as all factor loadings and α values exceeded 0.7.
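As a rough illustration of this kind of reliability screening (not the authors' code), the following Python sketch computes Cronbach's alpha for a construct and flags low-loading items, using simple item-total correlations as a stand-in for factor loadings; the column names (e.g., SC1 to SC4) are hypothetical placeholders for the survey items.

```python
# Illustrative reliability screening for one construct of Likert-scale items.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def approximate_loadings(items: pd.DataFrame) -> pd.Series:
    """Item-total correlations as a simple stand-in for factor loadings."""
    total = items.mean(axis=1)
    return items.apply(lambda col: col.corr(total))

def screen_items(items: pd.DataFrame, cutoff: float = 0.5):
    """Keep items whose approximate loading meets the cutoff (e.g., drop SC3)."""
    loadings = approximate_loadings(items)
    kept = loadings[loadings >= cutoff].index.tolist()
    return kept, loadings

# Hypothetical usage:
# security = df[["SC1", "SC2", "SC3", "SC4"]]
# kept, loadings = screen_items(security)
# print(loadings, cronbach_alpha(security[kept]))
```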
Also, the composite reliability (CR) values and average variance extracted (AVE) values were greater than 0.7 and 0.5, respectively, indicating that internal consistency was established. Table 4 further shows that the square root of each AVE is greater than all the correlation coefficients with the other variables, demonstrating sufficient discriminant validity [83]. Note: CS = cost saving, TS = time saving, PEOU = perceived ease of use, PC = privacy concern, SC = security concern, BIC = business integrity concern, TI = transaction intention; the highest factor loading for each item is shown in bold. In addition, among the VIF values of the seven variables, the highest was for SC (2.216), which is lower than the threshold value of 5. This shows that no serious collinearity exists among the constructs. A confirmatory factor analysis conducted on our variables revealed that the data fit the overall model reasonably well (χ2 = 559.18, df = 230, RMSEA = 0.06, CFI = 0.94, IFI = 0.94).
Common Method Bias
This study is exposed to the problem of common method bias (CMB) because both the independent and dependent data were gathered from the same pool of respondents using a single instrument. This study first used Harman's single-factor analysis, a technique that checks whether a single factor accounts for the majority of the variance, to assess common method bias [85]. By performing a principal component analysis without rotation, five different factors were identified in this study. This result indicates that no single factor takes up the majority of the total variance, so the data did not have a serious CMB problem. Second, this study employed a marker variable, with five items about outdoor activities, to test for an effect unrelated to this study [86]. Since the average correlation between the marker variable and the principal constructs was r = 0.07, and the correlation matrix does not indicate any highly correlated factors (the highest correlation is r = 0.68, which is below the threshold of 0.90), there was minimal evidence of CMB [87]. Based on these results, it is concluded that common method bias is not a serious concern in this study.
Hierarchical Regression Analysis
The aim of the hierarchical regression analysis was to examine the moderation effects of concerns for e-commerce. Specifically, a six-step hierarchical regression was conducted to look into how the three moderation variables (privacy concern, security concern, and business integrity concern) influence the relationship between purchase intention in the online open market and the three antecedents (cost saving, time saving, and perceived ease of use). By doing so, the increment in R2 due to each interaction can be precisely identified [88]. Table 5 shows the results of the analysis. The following six steps were performed in sequence: demographic variables (age and gender) were entered at Model 1 (Step 1), the independent variables (cost saving, time saving, and perceived ease of use) were added at Model 2, the three dimensions of concerns for e-commerce were added at Model 3, and the interaction terms, multiplying the independent variables by the concerns for e-commerce, were added at Models 4 to 6. Therefore, the full model is produced at Model 6, where the effects of cost saving, time saving, and perceived ease of use on online purchase intention and the moderating effects of concerns for e-commerce are jointly examined. Interpretation of the significance of all hypotheses is based on the Model 6 analysis.
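The six-step procedure described above could be reproduced along the following lines. This is a hedged sketch using statsmodels rather than the SPSS workflow actually used by the authors, and the column names (age, gender, cs, ts, peou, pc, sc, bic, ti) are hypothetical.

```python
# Illustrative six-step hierarchical moderated regression (not the authors' code).
import pandas as pd
import statsmodels.formula.api as smf

def run_hierarchical_regression(df: pd.DataFrame):
    # Mean-center predictors before forming interaction terms so that the
    # main-effect coefficients are interpretable at average moderator levels.
    for col in ["cs", "ts", "peou", "pc", "sc", "bic"]:
        df[col] = df[col] - df[col].mean()

    steps = [
        "ti ~ age + gender",                                          # Model 1
        "ti ~ age + gender + cs + ts + peou",                         # Model 2
        "ti ~ age + gender + cs + ts + peou + pc + sc + bic",         # Model 3
        "ti ~ age + gender + cs + ts + peou + pc + sc + bic "
        "+ cs:pc + ts:pc + peou:pc",                                  # Model 4
        "ti ~ age + gender + cs + ts + peou + pc + sc + bic "
        "+ cs:pc + ts:pc + peou:pc + cs:sc + ts:sc + peou:sc",        # Model 5
        "ti ~ age + gender + cs + ts + peou + pc + sc + bic "
        "+ cs:pc + ts:pc + peou:pc + cs:sc + ts:sc + peou:sc "
        "+ cs:bic + ts:bic + peou:bic",                               # Model 6 (full)
    ]
    prev_r2 = 0.0
    fit = None
    for i, formula in enumerate(steps, start=1):
        fit = smf.ols(formula, data=df).fit()
        print(f"Model {i}: R2 = {fit.rsquared:.3f}, delta R2 = {fit.rsquared - prev_r2:.3f}")
        prev_r2 = fit.rsquared
    return fit  # the full Model 6 fit, used to interpret all hypotheses
```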
The alpha level for hypothesis testing was set at 0.05. According to the Model 6 results, all the effects associated with the three independent variables were supported. When the moderating effects of concerns for e-commerce were controlled for in Model 6, the direct effect of cost saving on transaction intention was found to be significant (b = 0.625, p < 0.05), supporting H1. Saving more time positively influenced the intention to purchase via an online open market (b = 0.473, p < 0.05), indicating that consumers expect to save more time by shopping in the online open market than in offline stores; conversely, wasting time or spending more time on shopping adversely impacted purchase intention. These results supported H2. Perceived ease of use was also found to significantly influence transaction intention (b = 0.465, p < 0.05), supporting H3.
The Moderating Effects of Concerns for E-Commerce
This study tested the moderation effects of concerns for e-commerce to examine whether they influence consumers' purchase intention. The test results show that the moderation effects of privacy concern on cost saving, time saving, and perceived ease of use were statistically significant. The moderation effects of security concern on cost saving and of business integrity concern on time saving were also statistically significant. However, unlike our original predictions, the impact of time saving and perceived ease of use on purchase intention was not hampered by security concern; thus, H5b and H5c are not supported. The interaction term between business integrity concern and cost saving was found to be insignificant (b = 0.080); thus, H6a is not supported. The moderation effects of privacy concern on cost saving and of business integrity concern on time saving were both statistically significant, but the effects were found to be positive, which is opposed to the original predictions; hence H4a and H6b were rejected, as the effects appeared positive contrary to expectations. As H4b, H4c, and H5a were supported, three out of eight hypotheses regarding moderation effects were accepted. As (b), (c), and (d) in Figure 2 show, privacy concern and security concern have hindering effects on consumers' purchase intention. These three graphs concern the relationship between the three motivation factors and purchase intention and can be interpreted as follows: the rate of rise gets smaller (the value of the slope is smaller) when the level of concerns is greater. Conversely, (a) and (e), regarding how concerns for e-commerce affect the role of cost saving (or time saving), show that the rate of rise is steeper (the slope is greater) when the level of concerns is greater. This is consistent with the results of the hierarchical regression analysis: against our initial predictions, the coefficient of the interaction term was positive.
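The slope-based reading of the interaction plots can be made concrete with a simple-slopes calculation: the slope of a motivation factor at a given level of a moderator equals its main-effect coefficient plus the interaction coefficient times that moderator level. A minimal sketch follows, where the interaction coefficient of 0.10 is a purely hypothetical value chosen for demonstration (the main effect 0.625 is the reported cost-saving coefficient).

```python
# Illustrative simple-slopes calculation for a standardized moderator.
def simple_slope(b_main: float, b_interaction: float, moderator_level: float) -> float:
    """Slope of the motivation factor at the given (standardized) moderator level."""
    return b_main + b_interaction * moderator_level

for level, label in [(-1.0, "low (-1 SD)"), (0.0, "mean"), (1.0, "high (+1 SD)")]:
    # A negative interaction flattens the slope at high concern (panels b-d);
    # a positive interaction steepens it (panels a and e).
    print(label, simple_slope(b_main=0.625, b_interaction=0.10, moderator_level=level))
```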
Summary of the Findings and Discussion
This study aimed to discover the antecedents of consumers' purchase intention in the online open market, an online platform connecting multiple sellers and buyers, and examine their role in motivating or hindering the intention. To accomplish its goal, a survey was conducted among Korean consumers with experience of buying online via open markets, and the data collected from the survey were analyzed. Here is a summary of the research results. First, the main effects of all three motivation factors on purchase intention in the online open market were supported. The two motivation factors drawn from outcome expectations mentioned in SCT and perceived usefulness suggested in TAM, cost saving and time saving, were found to positively influence purchase intention in the online open market (H1 and H2 are supported). These results are consistent with other empirical research based on TAM [29] asserting that perceived usefulness has a positive impact on technology acceptance intention (purchase intention in this study). Like purchase intention associated with a product, perceived usefulness (the benefits consumers expect to earn from shopping at the open market, not from buying a specific product) is a main motivator in encouraging purchase intention in the open market (namely transaction intention). Therefore, consumers' perceived financial and time values positively influence their choice of purchase channel. In addition, perceived ease of use positively influences online purchase intention in the open market, meaning that H3 is supported; this is consistent with previous studies focusing on the effect of perceived ease of use, one of the most critical antecedents of technology acceptance intention according to TAM. Consumers develop self-efficacy as a result of accumulated satisfaction with experiences in the open market, and they develop the willingness to buy once they have self-efficacy. As perceived ease of use determines self-efficacy, it has a positive impact on purchase intention in the open market.
Second, privacy concern was found to moderate the relationship between three motivation factors and purchase intention via open markets. As previously expected, the time saving-purchase intention relationship and the perceived ease of use-purchase intention relationship were negatively moderated by privacy concern, thereby supporting H4b and H4c. The same conclusion was reached by previous findings showing that higher levels of perceived risk has a harmful impact on consumers' purchase intention and behaviors in online open market. In other words, greater levels of privacy concern serve as a hindering factor that reduces consumers' motivation to shop online via open markets. Contrary to what was originally expected, privacy concern was found to positively moderate the effect of cost saving on purchase intention in the open market, rejecting H4a. This finding can be interpreted cautiously given the characteristics of survey respondents. Participants for the survey of this research were those with experience of online shopping through online open market. Thus, their motivation to shop online could be influenced by many factors such as previous purchasing experience and trust toward the open market they used. In addition, an open market, acting as an intermediary, collects basic information from both sellers and buyers. Taking these into account, the survey respondents would expect relatively less losses from repeating purchases from a website that they previously traded through than making a first-time purchase through other websites that they have never used. Previous experience lets the respondents know which websites are trustworthy and which ones already have their personal information. Conversely, the respondents expect more losses from signing up for their first purchase and giving a new website their personal information. However, the comprehensive meaning of cost saving needs to be defined through a more careful interpretation of these findings. Third, security concern, as originally predicted, was found to negatively moderate the relationship between cost saving and purchase intention in the open market, supporting H5a. This finding is in line with the previous research [74] showing that lower levels of perceived security make consumers hesitate to use e-commerce channels. In other words, consumers' strong concern over insufficient e-commerce security and lack of protection procedures is a hindering factor that consequently reduced the benefit of cost saving (a motivation factor). However, security concern did not have a significant moderation effect on two other antecedents-time saving and perceived ease of use. Thus, H5b and H5c were not supported. This implies that consumers' security concern is not closely intertwined with time saving or perceived ease of use. Fourth, business integrity concern was found to have no significant moderation effect on the relationship between cost saving and purchase intention in the open market (H6a was not supported). Also, business integrity concern was found to have a positive moderation effect on the time saving-purchase intention relationship, which is in contrast to the assumption of negative effect (H6b was not supported). The reason behind this, as mentioned above, is in line with the positive moderation effect of privacy concern on the relationship between cost saving and the purchase intention. 
If some consumers had a satisfactory experience of online shopping at open market sites, they would feel that sellers' integrity on those sites is guaranteed at an appropriate level. As a result, they would think that it is better to return to the sites where they have shopped before than to go to different e-commerce sites or offline stores, because verifying sellers' integrity is both time-consuming and risky, and the expected losses are therefore greater.
Theoretical and Practical Implications
The results of this study yield several implications. Theoretically, this study considered both motivation factors and motivation hindering factors in explaining consumers' purchase intention in the online open market. In addition, this study conducted a two-pronged examination of concerns for e-commerce. In other words, privacy concern and security concern were examined in the context of the online shopping platform, while business integrity concern was analyzed from a seller-focused perspective. This is because sellers are not the operator or owner of a marketplace under an online platform, unlike in online shopping malls. The four main players in an open market are sellers, buyers, third parties, and the online open market itself; sellers are just one group of participants in the marketplace built and run by the open market. Since concerns over privacy and security are determined by the physical environment of an open market, its business policies, and its operation systems, they belong to the platform-related perspective. However, business integrity is not relevant to the platform, as it is determined by the acts of sellers. For this reason, business integrity influences perceived usefulness rather than perceived ease of use, which is gained by using a particular platform. Alongside the above-mentioned academic contributions, this study provides practical implications for online open market providers. The perceived risk components examined by this study imply that the first thing open market providers must do is to understand the potential obstacles faced by both users and non-users; based on that understanding, they are able to focus on designing reliable systems, appropriate policies, and optimal solutions. Such well-prepared rules and guidelines will help potential customers handle perceived risk effectively and ultimately mitigate the risk to an acceptable level. These efforts will also turn non-shoppers into active shoppers who are willing to increase their transaction volumes. E-commerce users want to feel secure in terms of their privacy protection. There are possible measures that open market providers can consider, including allowing visitors to be informed of how their personal information is used and offering consumers choices over their privacy preferences [89]. If open market providers are more conscious of privacy concerns, they will post privacy policies clearly and let users choose whether to share personal information or restrict its use. Also, associations and corporations specializing in cyber security offer a variety of solutions designed to mitigate consumers' concerns over data breaches, disclosure of private data, or lack of integrity coupled with poor customer service in handling unfulfilled orders [72]. These solutions assess internal control procedures to help consumers have trust in e-commerce instead of being affected by perceived risks. To conclude, consumers' perceived effectiveness of the chosen solutions plays a crucial role in reducing concerns for e-commerce, and it eventually has a significant impact on consumers' online purchase intention.
Limitations and Directions for Future Research Although this study provides useful implications, there are potential limitations that can affect the findings of this study. First, this study mainly considered extrinsic motivations factors-perceived usefulness and perceived ease of use-and did not focus on intrinsic motivation factors. As e-commerce is diversifying itself, resulting in fiercer competition, just giving financial advantages is no guarantee of success any longer. Instead, intrinsic factors have become important in recent years, as it has been proven that non-financial benefits such as giving customers a memorable experience encourages consumers to repeat purchase and increase loyalty [56]. Although this study focused on extrinsic motivation drawn from outcome expectations suggested in SCT, future studies need to explore intrinsic motivation factors and figure out how those factors interact with motivation hindering factors. Second, there is a national diversity in concerns for e-commerce due to multiple factors, such as e-commerce development status, open market providers' efforts to address security issues, and cultural differences. Especially, perceived risks were found to be significantly influenced by cultural differences [49]. As a result, consumers' behaviors of browsing, shopping or transacting in the open market websites differ from nation to nation. As this study is confined to investigating Korean users' purchase behavior, its applicability may be limited. To gain comprehensive implications, future work needs to understand the e-commerce market in a particular country and also consider national or cultural differences. Third, it needs to identify the cause why two hypotheses (H4a and H6b) were rejected. As discussed in Section 6.1, privacy concern was found to have a positive moderation effect on the relationship between cost saving and purchase intention and business integrity concern was found to have a positive moderation effect on the relationship between time saving and purchase intention. These results are opposed to what was originally hypothesized, and the reason behind this is the survey respondents were those who have shopped in online open markets, which indicates that results may be affected by consumers' purchase experience and levels of trust. From this standpoint, future studies will be able to explore a direct effect of consumers' shopping experience by measuring stability, reliability or satisfaction levels. However, the findings of future studies will be affected by what platforms the respondents have used for purchase. Therefore, it is necessary to analyze differences by the types or characteristics of platforms or conduct research by controlling for the impact of platforms. Another possible approach is to study people who have no experiences of buying items from open markets. Since online shoppers are less risk-averse compared to non-online shoppers [38], those who never shopped in open markets may have different risk perception. As e-commerce has transformed over time and new platforms are emerging, analyzing risks perceived by non-online shoppers may provide useful implications. Finally, this study may be exposed to common method bias because the same subjects responded to the same questionnaire at a specific period of time. Of course, Harman's single factor analysis and marker variable technique were conducted to prove that the bias does not significantly affect the findings of this study. 
Nevertheless, a survey using the same questionnaire cannot entirely remove the bias. Therefore, future studies could conduct a survey at different times or select different groups of respondents who are required to answer different questions. By doing so, future research would be able to overcome these limitations and provide further practical implications.
Conclusions
This study sought to explore the drivers and barriers influencing consumers' intention to purchase through the online open market based on insights drawn from existing theories such as the technology acceptance model, social cognitive theory, and social exchange theory. The research proceeded with a survey administered to 417 Korean consumers to identify both motivation factors and motivation hindering factors of online open market shopping. Then, a hierarchical regression analysis was conducted. The results provide evidence supporting the effects of the motivation factors on purchase intention and also clearly show moderating effects of concerns for e-commerce; the moderation of time saving and perceived ease of use by privacy concern and of cost saving by security concern was found to be statistically significant. Unlike what was previously assumed, the moderation of cost saving by privacy concern and of time saving by business integrity concern was significant but positive. These findings suggest two implications: (1) consumers may visit and buy from the online shops they can trust based on their past shopping experience in order to mitigate their concerns for e-commerce, and (2) a deeper understanding and integrated analysis of concerns for e-commerce are required. What is more, the findings and implications in this study allow online open market companies to deepen their understanding of both positive factors and hindrances, thereby developing strategies to attract more visitors.
Table A1. Constructs and questionnaire items.
Cost Saving (CS): the extent to which people believe that cost saving is improved by using an online open market [77,78].
Business Integrity Concern (BIC): consumers' worry about getting cheated by sellers or providing sensitive information to crooks that perpetrate identity theft [9,55].
BIC1. In general, I am concerned that sellers are untruthful in their dealings with me.
BIC2. In general, I would characterize sellers as not honest.
BIC3. In general, I am concerned that sellers would not keep the promises made on their website.
BIC4. In general, I am concerned that sellers are not sincere and genuine.
* excluded item.
Conditional Analysis for Key-Value Data with Local Differential Privacy Local differential privacy (LDP) has been deemed as the de facto measure for privacy-preserving distributed data collection and analysis. Recently, researchers have extended LDP to the basic data type in NoSQL systems: the key-value data, and show its feasibilities in mean estimation and frequency estimation. In this paper, we develop a set of new perturbation mechanisms for key-value data collection and analysis under the strong model of local differential privacy. Since many modern machine learning tasks rely on the availability of conditional probability or the marginal statistics, we then propose the conditional frequency estimation method for key analysis and the conditional mean estimation for value analysis in key-value data. The released statistics with conditions can further be used in learning tasks. Extensive experiments of frequency and mean estimation on both synthetic and real-world datasets validate the effectiveness and accuracy of the proposed key-value perturbation mechanisms against the state-of-art competitors. I. INTRODUCTION In the age of big data, personal-related data from user's side is routinely collected and analyzed by service providers to improve the quality of services. However, for the user side, directly sending original data can somehow lead to information leakage, which may draw potential privacy issues in many data-driven applications. To handle the privacy concerns, many mechanisms are proposed for privacy-preserving data analysis, among which stands out the differential privacy [1], [2], [3]. Usually, there are two kinds of differential privacy: the centralized setting and the local setting. In the centralized setting, the result of a query is computed, and then a noisy version of the output is returned (usually with Laplace noise [1]). In the local setting, the collecting and analyzing flow can be included into three steps: 1) Each record is first encoded into a specific data format (for example, by bloom filters). 2) Then the encoded data are perturbed. 3) At last, data from user side are aggregated and analyzed. Mechanisms with local differential privacy guarantee an individual's privacy against potential adversaries (including the aggregator in LDP). The local differential privacy has been widely used in crowdsourcing and IoT scenarios for privacy-preserving data analytics [4], [5]. To analyze user's data with high-level privacy guarantees, respected data service providers have applied local differential in their services. Google has proposed RAPPOR [6] for crowdsourcing statistics in Chrome. Microsoft proposed a memoization mechanism for continual data collection. Apple has used differential privacy for frequency estimation [7], such as identifying popular emojis. Recently, a significant amount of attention has been focused on improving accuracy in mean and histogram estimation with local differential privacy guarantees, such as categorical values [6], set values [8], [9] and numerical values [10], [11]. For the first time, Ye et al. [12] formalize the frequency and mean estimation problems for key-value data under local differential privacy. Our work will improve previous studies in data collecting and analyzing of key-value data. For further clarification, we start from a dietary rating example. Motivating example. As shown in Table I, assume that analysts consider ratings of the food by collecting users' scores on each specific food. 
Each record consists of a set of key-value data and represents the orders of an individual, where each key-value pair reflects one's appetite for the given food (note that the rating scores are normalized into [−1, 1]). Based on the properties of key-value data, the analysis tasks include two parts: frequency estimation of keys and mean estimation of values for a given key. Frequency estimation allows us to know the proportion of people with a given key. For example, the rate of people who ordered Hamburger is given by f_{k=Hamburger}. Mean estimation allows us to know people's average appetite for the food they eat. For example, the average rating of k = Fries is (−0.3 + (−0.1))/2 = −0.2, while that of k = Pepsi is (0.8 + 0.9 + 0.8)/3 ≈ 0.83, which might reveal that one who orders Pepsi loves it, whereas one who orders Fries probably does so just because he needs something to eat. The PrivKV and PrivKV-based methods were proposed to handle frequency and mean estimation in key-value data [12]. However, our experiments suggest that the PrivKV-related methods only work well when the average of the means is high. Thus, in the first part of this paper, we propose a series of encoding and decoding mechanisms that are ǫ-differentially private for frequency and mean estimation in key-value data. All the proposed methods have low communication cost and, compared with PrivKVM, do not need to interact with the aggregator frequently. Our investigation indicates that correlations between attributes are essential in many analysis and learning tasks. The On-Line Analytical Processing (OLAP) data cube is the exhaustive collection of possible marginals of a data set, and correlations are important in decision making, such as in the typical decision tree [13]. For key-value data, retrieving conditional probabilities between keys can provide useful information for more in-depth analysis. Thus, the rest of this paper focuses on analyzing correlations of frequencies and means between different keys. The problem is motivated by questions of the following kind. Motivating problem: Will those who ordered hamburgers also order Pepsi? How can we model people's appetite for Pepsi if they have ordered fries? These questions are important to merchants. For example, upon learning that people who love hamburgers also want fries, a merchant can introduce a new combo. Such problems are challenging when privacy concerns are considered. We call this kind of problem conditional analysis. Currently, to the best of our knowledge, no proposed methods can handle conditional analysis for key-value data. Even though the PrivKV-based mechanisms can estimate frequency and mean for a single key, they do not support any conditional analysis, as each user only randomly sends one key-value pair to the aggregator. To address these challenges, in the rest of this paper we introduce conditional analysis mechanisms for frequency and mean estimation. We define the L-way conditional notion and propose analysis mechanisms with ǫ-local differential privacy guarantees. To summarize, the main contributions are listed as follows: • We propose a new estimator for frequency and mean under the framework of PrivKV. • We propose several mechanisms for estimating the number of key-value data under the framework of LDP. Compared with existing algorithms, the proposed mechanisms are more effective and stable. • For the first time, we introduce conditional analysis for key-value data.
We formulate the problem of L-way conditional analysis in the local setting. The rest of this paper is organized as follows. In Section II, we briefly review previous work. In Section III, we propose a new estimator under the encoding results of PrivKV. In Section IV, several perturbation mechanisms are presented and analyzed. In Section V, we define the conditional analysis problem and propose methods for it. Finally, the experimental results are shown in Section VI, and the whole paper is concluded in Section VII.
A. Local Differential Privacy
The notion of differential privacy was initially proposed for statistical databases, where a trusted data curator is assumed. The data curator gathers, processes, and publishes data in a way that satisfies the requirements of differential privacy. In many application scenarios, such a trusted party does not exist, which motivates local differential privacy.
Definition 1 (Local Differential Privacy [14]). A randomized mechanism M satisfies ǫ-local differential privacy (ǫ-LDP) iff for any two tuples x, x′ ∈ X and any output o ∈ O: Pr[M(x) = o] ≤ e^ǫ · Pr[M(x′) = o], where the randomness is over mechanism M.
There are many functional properties of differential privacy, one of which is composition [2], usually used to track the privacy loss in sequential executions.
Lemma 1 (Composition Theorem). Let M_i : N^|X| → R_i be an ǫ_i-differentially private algorithm, and let M(x) = (M_1(x), M_2(x), ..., M_k(x)). Then M satisfies (Σ_i ǫ_i)-differential privacy.
The canonical solution towards local differential privacy is randomized response [15], first introduced in the literature as a survey design technique. The randomized response mechanism provides plausible deniability, so the aggregator cannot infer the original data with a high confidence level. The randomized response works as follows: a user holding a bit value v ∈ {0, 1} keeps the value with probability p > 0.5 and flips it with probability 1 − p (i.e., the sent value is the same as the original value with probability p). The randomized response then achieves ǫ-LDP with e^ǫ = p/(1 − p). Perturbation mechanisms based on randomized response offer acceptable accuracy on large datasets, and randomized response plays a core role in many recent LDP mechanisms. One typical implementation is Google's deployment of RAPPOR [6], where randomized response is applied to each bit of the output array produced by Bloom filters. For continuous data, randomized response can be applied to the discretized value for mean estimation [16], [10], [17], [18]. The basic randomized response only works for binary responses (|K| = 2). For categorical data with |K| > 2, a generalized randomized response (also called Direct Encoding [19]) is used: the mechanism outputs o = v with probability p and outputs each other value o ≠ v with probability (1 − p)/(|K| − 1). In other words, the true value is reported with probability p, while each other value is reported with probability (1 − p)/(|K| − 1). To achieve ǫ-LDP, we set p = e^ǫ/(e^ǫ + |K| − 1).
B. Key-Value Data Collection
The problem of privacy-preserving key-value data collection with frequency and mean estimation in the local setting was first proposed in [12]. Before defining the problem, we first describe the notation used in this paper: u_i, the i-th user in U; K, the set of keys, with d = |K|; S_i, the set of key-value pairs owned by u_i; ⟨k_{i,j}, v_{i,j}⟩, the j-th key-value pair in S_i; f_k, the frequency of key k; m_k, the mean of values with key k; and C = (α, β), conditions on keys. The key-value data collecting and analyzing framework under LDP can be briefly stated as follows: let the universe contain a set of users U = {u_1, u_2, ..., u_n} and a set of keys K = {1, 2, ..., d}. The value domain V for the keys is [−1, 1].
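A minimal sketch of the generalized randomized response (Direct Encoding) building block reviewed above is shown below; it is illustrative code, not the implementation evaluated later in the paper, and the unbiased frequency correction follows directly from the reporting probabilities p and q.

```python
# Illustrative generalized randomized response (Direct Encoding) and its
# standard unbiased frequency estimator.
import math
import random

def grr_perturb(value, domain, epsilon):
    """Report the true value with probability p = e^eps / (e^eps + |K| - 1),
    otherwise report a uniformly chosen different value from the domain."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def grr_estimate_frequency(reports, target, domain, epsilon):
    """Unbiased frequency estimate of `target` from perturbed reports:
    E[observed] = f*p + (1-f)*q  =>  f = (observed - q) / (p - q)."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)
    observed = sum(1 for r in reports if r == target) / n
    return (observed - q) / (p - q)
```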
We consider that each user u_i owns a list of (say ℓ_i, which is at most d) key-value pairs S_i = {⟨k_{i,1}, v_{i,1}⟩, ..., ⟨k_{i,ℓ_i}, v_{i,ℓ_i}⟩}. An untrusted data collector needs to estimate statistical information about the key-value data, in particular the frequency and the mean. • Frequency estimation: The goal of frequency estimation is to estimate the frequency of key k, i.e., the portion of users who possess the key: f_k = |{u_i ∈ U : u_i possesses key k}| / n. • Mean estimation: The goal of mean estimation is to estimate the mean of the values of key k, i.e., the average of the values v associated with key k over the users who possess that key.
C. PrivKV
Ye et al. [12] adopt a local perturbation protocol and propose the basic PrivKV algorithm for statistical estimation. They also extend PrivKV to PrivKVM and PrivKVM+ to improve estimation accuracy. If the j-th key k_{i,j} of user i exists, we have k_{i,j} = 1; otherwise, k_{i,j} = 0. When the key does not exist for u_i, it is represented by ⟨k, v⟩ = ⟨0, 0⟩. Thus, given k, the key-value pair is either ⟨1, v⟩ or ⟨0, 0⟩.
Algorithm 1 Local Perturbation Protocol (LPP)
Require: User u_i's set of key-value pairs S_i; privacy budgets ǫ_1 and ǫ_2.
Ensure: LPP(S_i, ǫ_1, ǫ_2) is the perturbed key-value pair ⟨k_j, v*⟩ of the j-th key.
1: Sample j uniformly at random from [d];
2: if k_j exists in the key set of S_i then
3: v* = VPP(v_j, ǫ_2);
4: Perturb ⟨k_j, v*⟩ as: ⟨1, v*⟩ with probability e^ǫ_1/(e^ǫ_1 + 1), ⟨0, 0⟩ with probability 1/(e^ǫ_1 + 1);
5: else
6: Randomly draw a value m ∈ [−1, 1];
7: v* = VPP(m, ǫ_2);
8: Perturb ⟨k_j, v*⟩ as: ⟨1, v*⟩ with probability 1/(e^ǫ_1 + 1), ⟨0, 0⟩ with probability e^ǫ_1/(e^ǫ_1 + 1);
9: end if
10: return j and ⟨k_j, v*⟩;
To achieve LDP in key-value protection, the PrivKV-based mechanisms have four types of perturbations: • 1 → 1: The key exists before and after perturbation. • 1 → 0: The key exists before but not after perturbation; as the key does not exist afterwards, the value is meaningless and is set to zero, e.g., ⟨1, v⟩ → ⟨0, 0⟩. • 0 → 1: A new key-value pair is generated after perturbation and a value is assigned, i.e., ⟨0, 0⟩ → ⟨1, v′⟩. • 0 → 0: The key-value pair does not exist before or after the perturbation. In this case, the key-value pair is kept unchanged, i.e., ⟨0, 0⟩ → ⟨0, 0⟩. The PrivKV-based mechanisms guarantee ǫ-LDP by providing indistinguishability for both key and value in key-value data. The randomized response can be directly used for key perturbation, as the key space is binary. For value perturbation, the mechanism called Harmony [10] is used for mean estimation (also seen in [20]). Values in the continuous interval [−1, 1] are first discretized to {−1, 1} through Eq. (1), and then randomized response is applied to the discretized value. These two steps are called the VPP (Value Perturbation Primitive). By applying randomized response to the key and the value perturbation mechanism to the value, the local perturbation protocol for key-value data is obtained (Algorithm 1). When an untrusted aggregator receives the perturbed key-value data, he can then estimate the frequency of keys and the mean of values with the PrivKV aggregation algorithm (Algorithm 2), in which the mean of key k is calculated as m*_k = (n*_1 − n*_{−1})/N and the estimates f* and m* are returned. Based on the PrivKV algorithm, PrivKVM with iterations and PrivKVM+ with virtual iterations are also proposed. For simplicity, these algorithms are not detailed here; their results are covered in our experimental analysis.
III. ESTIMATOR FOR PrivKV
The main intuition for mean estimation under local differential privacy is to estimate the frequency of k = 1. This problem has been well studied under the current frequency estimation framework. After discretization, the number of ⟨k, v⟩ pairs with k = 1 consists of two parts: those with value −1 and those with value 1.
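As an illustration of the VPP and LPP steps just described, the following sketch assumes a Harmony-style discretization for Eq. (1) (round v to +1 with probability (1 + v)/2) and the perturbation probabilities reconstructed above; it is a plausible reading of Algorithm 1, not the authors' code.

```python
# Illustrative sketch of the Value Perturbation Primitive (VPP) and the
# Local Perturbation Protocol (LPP) on the user side.
import math
import random

def vpp(v: float, eps2: float) -> int:
    """Discretize v in [-1, 1] to {-1, +1} (assumed Eq. (1)), then apply
    binary randomized response with budget eps2."""
    disc = 1 if random.random() < (1 + v) / 2 else -1
    p2 = math.exp(eps2) / (math.exp(eps2) + 1)
    return disc if random.random() < p2 else -disc

def lpp(kv_pairs: dict, d: int, eps1: float, eps2: float):
    """kv_pairs maps a key index in [0, d) to its value in [-1, 1].
    Returns the sampled index j and the perturbed pair <k, v*>."""
    j = random.randrange(d)
    p1 = math.exp(eps1) / (math.exp(eps1) + 1)
    if j in kv_pairs:                       # cases 1 -> 1 and 1 -> 0
        v_star = vpp(kv_pairs[j], eps2)
        out = (1, v_star) if random.random() < p1 else (0, 0)
    else:                                   # cases 0 -> 1 and 0 -> 0
        v_star = vpp(random.uniform(-1.0, 1.0), eps2)
        out = (1, v_star) if random.random() < (1 - p1) else (0, 0)
    return j, out
```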
For the mean estimation, the top priority task is to estimate the numbers of key-value pairs with value 1 and with value −1, since m_k = (N_1 − N_{−1})/(N_1 + N_{−1}). The perturbed key-value data are in the same space as the data after discretization (i.e., all in {⟨0, 0⟩, ⟨1, 1⟩, ⟨1, −1⟩}). The aggregator only uses the counting information of {⟨1, 1⟩, ⟨1, −1⟩} according to the PrivKV algorithm (note that {⟨1, 1⟩, ⟨1, −1⟩} here refers to the perturbed values). This causes error in the mean estimation (Lines 7-8 in Algorithm 2), as part of the ⟨0, 0⟩ pairs also turn into {⟨1, 1⟩, ⟨1, −1⟩} and some key-value pairs turn into ⟨0, 0⟩. This inspires us to develop a mean estimation method that eliminates the impact of key perturbation. Instead of directly estimating m_k with the received key-value pairs, we design an unbiased estimator for N_0, N_1, and N_{−1}. For the aggregator, let M_1 = Count(⟨1, 1⟩) and M_{−1} = Count(⟨1, −1⟩) be the counts of the received key-value pairs ⟨1, 1⟩ and ⟨1, −1⟩, respectively, and let M_0 = Count(⟨0, 0⟩) be the count of the received key-value pairs without a key. The total number of records received by the data collector is then M = M_0 + M_1 + M_{−1}. According to the encoding process, we can estimate N_1 and N_{−1} by N*_1 and N*_{−1}:
Theorem 2. The estimators N*_1 and N*_{−1} for N_1 and N_{−1} are unbiased, respectively.
Proof. We calculate by transforming the above equations. Through the encoding process of Algorithm 1, we have: From which we get: Then it holds that: which concludes that N*_1 and N*_{−1} are unbiased estimators for N_1 and N_{−1}. We can also estimate N_0 by N*_0 = M − N*_1 − N*_{−1}, which is also unbiased.
IV. LDP FOR KEY-VALUE DATA
In this section, we combine state-of-the-art locally differentially private mechanisms for data collection and propose several ǫ-LDP perturbation mechanisms for key-value data collection and analysis that can be used in different scenarios.
A. F2M: Frequency to Mean
Unlike the PrivKV-based mechanisms, we notice that there is no need to maintain the authenticity of the sent key-value pairs. For example, when the original key k_i does not exist in the key-value pairs, the data should be in the form ⟨k_i, v_i⟩ = ⟨0, 0⟩; setting the value v_i to any other value would be meaningless. Thus, in the PrivKV algorithm, the perturbed key-value results can only be of the form ⟨0, 0⟩, ⟨1, −1⟩, or ⟨1, 1⟩, where ⟨k_i, v_i⟩ = ⟨0, 0⟩ represents that the key does not exist and v_i = 0 is unused. In contrast, we think more states in the perturbed space can provide more information when estimating.
Algorithm 3 F2M: Frequency to Mean
Require: User u_i's set of key-value pairs S_i; privacy budgets ǫ_1 and ǫ_2; default value v when the key does not exist.
Ensure: the perturbed key-value pair of the j-th key of user i.
1: Sample j uniformly at random from [d];
2: Perturb key with:
From this point, the state ⟨0, 0⟩ is substituted by ⟨0, 1⟩ and ⟨0, −1⟩. In the LPP, when an existing key of a key-value pair is perturbed to not exist, the value is directly set to 0, which increases the error in mean estimation. The main difference between Algorithm 3 and Algorithm 1 is that all the outputs of the LPP are in the same space as the original space, whereas the outputs of F2M in Algorithm 3 are not. In this mechanism, we treat key and value as unrelated data and perturb them separately. Perturbing the key and value of a key-value pair independently allows the aggregator to estimate the frequency and mean of all key-value pairs. However, the goal of mean estimation in key-value data is to estimate the values associated with keys. Hence, the influence of the frequency on the mean estimation should be considered.
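To make the state-count estimation idea of Section III concrete, one generic way to obtain unbiased estimates of (N_1, N_{−1}, N_0) from the observed counts (M_1, M_{−1}, M_0) is to solve the linear system implied by the perturbation probabilities. The transition matrix below follows the LPP sketch given earlier; it is an illustrative reconstruction, not the closed-form estimator of Theorem 2.

```python
# Illustrative unbiased recovery of post-discretization state counts by
# inverting the (assumed) LPP transition probabilities: E[M] = T @ N.
import math
import numpy as np

def estimate_state_counts(m1: int, m_neg1: int, m0: int, eps1: float, eps2: float):
    p1 = math.exp(eps1) / (math.exp(eps1) + 1)   # key randomized response
    p2 = math.exp(eps2) / (math.exp(eps2) + 1)   # value randomized response
    # Rows: observed <1,1>, <1,-1>, <0,0>; columns: true N_1, N_-1, N_0,
    # where N_1 and N_-1 count key-value pairs after discretization.
    T = np.array([
        [p1 * p2,       p1 * (1 - p2), (1 - p1) / 2],
        [p1 * (1 - p2), p1 * p2,       (1 - p1) / 2],
        [1 - p1,        1 - p1,        p1          ],
    ])
    m = np.array([m1, m_neg1, m0], dtype=float)
    n_hat = np.linalg.solve(T, m)                # linear, hence unbiased
    return dict(zip(["N_1", "N_-1", "N_0"], n_hat))

# Example usage: estimate_state_counts(m1=5200, m_neg1=4100, m0=700, eps1=1.0, eps2=1.0)
# The mean for the key then follows as (N_1 - N_-1) / (N_1 + N_-1).
```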
When encoding, we set the value without key to a default value v. We will further explain how to use this information for mean estimation. Let: Then we can estimate the mean of all the values (with and without key) by: With m * all and f * , we can then estimate m * k by: ( In the F2M mechanism of Algorithm 3, we perturb key and value separately. From the composition theorem (Lemma 1), F2M achieves ǫ-local differential privacy. Also, we can be sure with probability at least (1 − δ) 2 that (in Appendix A): . 6: return j and k ′ , v ′ ; B. Unary Encoding for Key-Value data The F2M perturbation mechanism aims at eliminating the restriction that the value can only be 0 when the key of a keyvalue pair does not exist. Thinking that the original key-value data can only be in three statures when discretized, we pool the principle of the generalized randomized response to design a mapping function between the original and perturbed space. For k ′ , v ′ ∈ { 0, 0 , 1, 1 , 1, −1 }, the mapping between perturbed key-value data and the original data is designed by: where v * = Discretiaztion(v) represents the discretization process shown in Eq. (1), and , v * and 0 otherwise. We name this perturbation mechanism KVUE (Key-Value Unary Encoding). The probability mapping function is somehow difficult to understand. If we treat each key-value pair as a whole entity instead of treating key and value separately, we can directly use the generalized randomized response. Intuitively inspired by this, the mapping is equal to: Theorem 3. The unary encoding mechanism for key-value pair achieves ǫ-LDP. Proof. According to Eq. (3), for any key-value pairs k, v ∈ { 0, 0 , 1, v } and the possible output k ′ , v ′ , we have: Thus, for any input Then we can say that N * i is an unbiased estimator. The proof is given in Theorem 4. Proof. According to the algorithm, for i, j ∈ {0, 1, −1}, we have: We then achieve: which concludes the correctness of theorem. The unary encoding first maps a key-value pair into a single item and then uses the generalized randomized response to achieve ǫ-LDP. Thus the variance is the same as the that of Direct Encoding (DE [19]): Theorem 5. Given δ ∈ (0, 1), with probability at least 1 − δ, we have: Proof. Based on the Chernoff-Hoeffding bound [21], for every t > 0, it holds that: Then, we have: Let r = 2t 3p−1 , which corresponds to t = r(3p − 1)/2, we achieve: , corresponding to r = 1 3p−1 2N · ln 2 δ , then we can say that with probability at least 1 − δ, we have: Thus, this completes the proofs. C. One-Hot Encoding for Key-Value Data The one-hot encoding mechanism was commonly used in histogram estimation [22], [6]. For bucket i, the one hot encoding returns a vector in which all bits are 0 except the i-th index. The encoding mechanism inspires us to uses such a bit array for state representation of key-value data. As analyzed in preceding sections, there are three statuses when one keyvalue pair is discretized. We design the following mechanism for index projection: Thus the one-hot encoding for a key-value pair can be represented by A[I(k, v)] = 1. Then we use randomized response in each bit of A by: The randomized response guarantees ǫ/2-LDP for every single bit. Since A and A ′ differ only in two bits, so we achieve ǫ-LDP for A according to the composition theorem. We call this KVOH (Key-Value One-Hot mechanism). 
Same as the proposed methods, we focus on estimating the number of each state after discretization instead of directly estimating frequency and mean, which is to retrieve the number of states in A. Let M i = A i be the sum of received arrays and N denotes the number of arrays. We first adjust the sum of arrays before perturbation by: It is easy to prove that the estimator of N i is unbiased [19], and the variance is: Similar to the proof of Theorem 5, we can obtain By setting r = t · e ǫ/2 −1 e ǫ/2 +1 , we then have: By using δ = 2e − 2r 2 N ·( e ǫ/2 −1 e ǫ/2 +1 ) 2 , we can say that with probability at least 1 − δ, we have: D. Estimation Analysis In the designation of KVOH and KVUE encoding mechanisms, we propose unbiased estimator for the states of 0, 0 , 1, −1 , 1, 1 (denoted as N 0 , N A and N B ) and use the number of states for further estimation instead of directly estimating the frequency and mean: In this section, we give the upper bound for the estimator of f * and m * . To analyze the estimation error, we first define θ A = N * A − N A and θ B = N * B − N B as the estimation error of the state number for different states. For the frequency estimation, we then can analyze the estimating error by: For mean estimation, we have: In the prior sections, we've proven that with probability at least 1 − δ, we have Pr[|N * i − N i | ≤ r]. Considering the noises on N A and N B are independent, the error of frequency estimation can be sure with: For the mean estimation, we then have: Hence, we can guarantee that by at least (1 − δ) 2 probability, and that by at least (1 − δ) 2 probability, V. CONDITIONAL FREQUENCY AND MEAN ESTIMATION In this section, we present a complete analysis for privacypreserving key-value data, which allows conditional analysis. Before giving the solutions, we first formulate the L-way conditional frequency and mean. To better understand the Lway conditional problems, we start from an example. For simplicity, we take d = 3 and k ∈ {Hamburger, Fries, Pepsi} as a subset of Table I. After discretization, each user's keyvalue data is listed as follows: Definition 2 (L-way Conditional Frequency and Mean). Given target key k and L conditional keys, the conditional frequency and conditional mean of key k is defined as f k|ck1=c1,...,ckL−1=cL−1 and m k|ck1=c1,...,ckL−1=cL−1 , where ck i ∈ k [d] represents a key and c i ∈ {0, 1} represents the key ck i exists or not. Given conditions C : ck 1 , ck 2 , ..., ck L−1 = c 1 , c 2 , ..., c L−1 , we say that a user meets conditions if the existence of key ck i is c i . For example, k Fries , k Pepsi = 0, 1 represents a consumer ordered Pepsi but not Fries (which is user1 in this example). With those L − 1 conditions, we now formulate the L-way conditional frequency and means: Where U C means users with conditions C. For example, to represent consumer's average scores of Hamburgers among those who orders Pepsi, we can use the 2-way conditional mean m kHamburger=1|kPepsi=1 . The L-way conditional notions are easy to understand but not manageable. Considering that keys in conditions might be out-of-order, we introduce the (α, β)condition to formalize the L-way conditions to a length-d bit vector: Definition 3 ( α, β-condition). Given conditions C = {ck 1 , ck 2 , ck L = c 1 , c 2 , ..., c L }, α is used to represent what key is in condition, which is α i|ki=ckj = 1. 
And β is used to Algorithm 6 IOH: Indexing One Hot encoding Require: A user u's set of key-value pairs S = { k 1 , v 1 , ..., k d , v d } (here k j , v j is set to 0, 0 if user u does not have it); Privacy budget ǫ. Ensure: IOH(S, ǫ) is the perturbed key-value pair. 1: Discretize each key-value data to k ′ i , v ′ i ; 2: Indexing each encoded key-value pair by k ′ i · v ′ i + 1 and get the overall index by: For example, the conditions C = {k Hamburger = 1, k Fries = 0} can be represented by C = (α = 110, β = 100). α = 110 indicates that the first key and the second key is assigned in conditions, the conditional value of the key is in β, which is β = 100). In the rest of this paper, we use C = (α, β) for conditions representation. To handle the conditional estimation with privacy concerns, we first need to encode all of the key-value data. The proposed methods for frequency and mean estimation only works on one single key-value pair. To achieve ǫ-LDP on the whole key-value pairs, each key-value pair should be encoded with ǫ ′ = ǫ/d, which might cause errors in the estimation results. To overcome this, we introduce the Indexing One Hot encoding (IOH) mechanism. Following KVOH, a key-value pair k i , v i is first encoded to a single state by: Then we can get the index by all the I( k i , v i ): We initialize a zero array A and set A[I] = 1. The bit array is the one hot encoding of key-value pairs, like KVOH, we can achieve ǫ-LDP by using ǫ ′ = ǫ/2 on each bit. To sum up, the process is in Algorithm 6. Unlike KV OH, the IOH encoding mechanism encodes all of key-value of a user. For user1 in our example, the key-value pairs are first indexed with the indexing function: 1, 1 , 0, 0 , 1, −1 → 210 (3) . The subscript (3) here means the base of 210 is 3. Thus user1 encodes his data to a bit array with its index 12 set to 1. When all the data are transferred to the aggregator, all the received bit vectors are adjusted and summed up to a A s : Here, A j denotes the j-th user's indexing one hot encoding vector. We will further extract the conditional frequency and conditional means from the summed array A s [i]. We will further use the adjusted A s for conditional frequency estimation and conditional mean estimation. A. Conditional Frequency Estimation To retrieve information from the summed array A s , we first define the frequency counting operator. For example, if we want to know the number of users with k a , k c = 1, 0 in Table III. We first get the (α, β) = (101, 100). We want to know f ka|kc=1 , thus we need to know the number of users with k a , k c = 1, 1 and the number of users with k c = 1. The frequency counting operator can be used for the conditional frequency estimation. For example, the conditional frequency f ka|kc=1 can be represented by: When encoding, all of the user's data are mapped into a length 3 d bit vector. For frequency estimation, we need to extract given key-value data under condition C. We now introduce the notion of condition to frequency index for computing F . For example, to compute F 101 , the index set is: With the index of I(γ), we can compute F γ by: Following this example, the frequency under condition C = (α, β) can be given by: B. Conditional Mean Estimation Like the frequency counting operator, we define two counting operations to handle the conditional mean estimation tasks. 
We use S k | α β to represent the sum of value with key k under condition α, β, Then the conditional mean can be given by: The notion α ∨ α[k] = 1 and α ∨ α[k] = 1 indicates that when considering m k|C=(α,β) , the key k should be included. The main problem now is to calculate S k|C . Like F α β , we use S k,γ to represent (S k ) 11...11 γ . For the k-th key-value pair, the value can be 1 and −1. The mean estimation then turns to be the counting problem: to count the number of key-value pairs with the k-th value be 1 and be -1. The symbol S k,γ calculates the sum of values with key k. After discretization, the sum of S k,γ [A s ] can be divided into two parts: those with the k−th key-value pair being 1, 1 (denoted as S + k,γ [A s ]) and those being 1, −1 (denoted as S − k,γ [A s ]). Like the frequency index operator, we now define the mean index operator to extract S + k,γ and S − k,γ from A s . Definition 6 (condition to mean index). For conditional vector γ ∈ {0, 1} d , the corresponding index of S + k,γ [A s ] (and S − k,γ [A s ]) can be represented by: where the I + (γ) and I − (γ) are defined as: The only difference between conditional frequency and conditional mean is that for key k, the frequency estimation needs the overall number of value 1 and value −1. Whereas for mean estimation, we need to estimate those with value 1 and those with value −1 separately. VI. ANALYSIS AND EVALUATION In this section, we empirically evaluate the performance of proposed mechanisms. The PrivKV-based mechanisms [12] have shown great advantages in frequency estimation over proposed mechanisms like RAPPOR [6], k-RR [23] and SHist [24]. That is the same in mean estimation, over Harmony [10] and MeanEst [16]. Thus we only compare our proposed mechanisms with PrivKV-based mechanisms, namely PrivKV and PrivKVM. Datasets used. We evaluate the proposed methods over a real-world dataset and synthetic datasets. We first use the MovieLens dataset [25]. This dataset samples were collected by the GroupLens Research Project. It contains over 20M ratings from 138,000 users on over 27,000 movies. Each user has rated at least 20 movies. For each anonymous person, ratings are treated as key-value data. We first exact the top-100 most rated movies as our key space K and extract s smaller dataset. We also generated two synthetic datasets: the Uniform dataset and the Gaussian dataset. The frequency and mean for different keys follow the uniform distribution and Gaussian distribution. Each generated dataset has 100 keys and 100,000 records. Default parameters and settings. In the frequency and mean estimation experiment, we acquire the distributions of estimation error by repeatedly encoding and decoding 50 times in each experimental instance. Each user randomly picks up one key-value pair and encodes with different mechanisms. Then the aggregator decodes with the corresponding mechanism. When encoding, the privacy budget varies from 0.1 to 5. For the PrivKVM, we set the iterations to be 10. The result is measured with AE (Absolute Error) and MSE (Mean Square Error). For the F2M estimator, we set the default v = 1. Also, the influence of the default value is discussed in Section VI-C. A. Overall Results We first list the theoretical communicating cost between a user end to the aggregator end. The cost is based on the number of state of the encoded key-value data. 
For example, the encoded space of PrivKV is k ′ , v ′ ∈ { 0, 0 , 1, 1 , 1, −1 }, thus the communicating cost for key-value encoded by PrivKV can be compressed to log 2 3. The PrivKVM works in an iterative way. Thus the communication cost is c times that of PrivKV, where c is the number of iterations. When encoding, a user needs to pick up one key from the key space K. Thus an index should also be sent to the aggregator. The cost for index is log 2 |K|. The costs of different mechanisms are listed in Table IV. Methods PrivKV Figure 1 plots the estimation errors of different mechanisms with different privacy budgets. Among all these six mechanisms, the PrivKVM is the only one that outputs an unbiased mean estimation. However, our simulations indicate the effectiveness of both frequency and mean estimation. As the PrivKVM achieves unbiased estimation by iterating with the aggregator, and in each round, the privacy budget is very small (ǫ ′ = ǫ/c). Thus estimation error in each round accumulates. When the privacy budget is not very small (ǫ > 0.4), the KVOH, KVUE, KVOH, PrivKV and PrivKV-A can achieve estimation error under 0.05. Over the tested mechanisms, KV U E achieves lower estimation error considering different privacy-preserving levels on both generated dataset and realworld dataset. All of these mechanisms have higher mean estimation errors compared with frequency estimation. We think it is because of the natural insufficiency of local differential privacy: the estimation accuracy is influenced by the volume of data. When estimating the frequency, we need to estimate the number of key-value data with key from N users, which is N · f k from N . Compared with that, the mean estimation task requires estimating the number of key-value pair with value 1 and value −1 from the estimated key-value data with key. Thus the accuracy of key estimation affects the performance of mean estimation. Like frequency estimation, generally, KVUE achieves lowest estimating error. B. Scalability In this section, we evaluate the performance of estimating mechanisms on different circumstances. For the frequency estimation, We divide the frequency into four situations: extreme low frequency with f k = 0.05, low frequency with f k = 0.2, middle frequency with f k = 0.6 and high frequency f k = 0.8. For the mean estimation, we divide the mean into three situations: low average with m k around −0.8, middle average with m k around 0 and high average with m k around 0.8. We generate several these kinds of dataset with Gaussian distribution and uniform distribution. Each generated dataset contains 100,000 key-value pairs. Figure 2 and Figure 3 show the box-plot of frequency estimation and mean estimation results with Gaussian distribution. It turns out that estimation errors of different mechanisms are not under the influence of frequencies. However, PrivKV, PrivKVM and PrivKV-A are susceptible to the location of means. These three mechanisms achieve higher estimation accuracy with the rise of mean. In both cases, PrivKVM returns an inaccurate result with large variance. As we analyzed formally, the error of estimation might accumulate when iterating. Compared with other mechanisms, the KVUE mechanism achieves the lowest error in both frequency estimation and mean estimation. Also, F2M and KVOH mechanisms attain acceptable results compared with existing methods. Also, with the increase of frequency, the variance of error decreases. This is because, with more usable data, the estimation becomes settled. 
When estimating with uniform distributed data, the result is shown in Figure 4. As in the case of Gaussian distribution, the result of frequency is not profoundly affected by situations of frequency. Like aforementioned, we can draw that a higher frequency leads to a lower mean estimation error. C. Influence of default value in F2M In the F2M mechanism, we set the default value of encoding to v = 1. We think that by setting the default value to 1, the discretized value is always the same as 1. That avoids additional errors for further estimation. Thus, a natural question occurs. Will the value of v influence the performance of mean estimation? Here, we do not need to discuss the impact of the default value to frequency estimation as setting default only affects the process of mean estimation. Figure 5 compares F2M mechanisms with respect to different default values of v. We observe that the performance of F2M mechanism does not fluctuate when v changes, which reflects that the noise introduced by discretization is negligible compared to that by the randomized response. D. Conditional analysis For the efficiency consideration, we only test the 2-way conditional analysis over d ∈ {2, 4, 8} (with 20 observations under each configuration). We use datasets with f k = 0.8 in the low average case with 10 5 and 10 6 users. Figure 6 and 7 compare privately and non-privately computed conditional result of frequency and mean. We first figure out that the error of conditional mean estimation is lower to that of conditional mean estimation. We think this is because the error of frequency is involved in the mean estimation, as we analyzed in 1-Way frequency and mean estimation. We also VII. CONCLUSION AND FUTURE WORK In this paper, we propose a series of locally differentially private mechanisms for frequency and mean estimation of key-value data. Based on the previous work of PrivKV, we first propose a decoding mechanism for the data aggregator. Moreover, we combine several state-of-art LDP methods to improve the performance of frequency and mean estimation in the local settings. Theoretical analysis and empirical experiments validate the effectiveness and robustness of our proposed mechanisms. Beyond that, we introduce the notion of conditional analysis in key-value data analysis that allows the aggregator to learn the correlation between keys and corresponding values. The first part of work in the to-do list is to achieve an unbiased estimator for the mean. In this paper, we achieve low estimation error by unbiased estimation of the number of different key-value states after discretization. This leads to biased mean estimation. We will further show that we can achieve an unbiased estimator with the use of iteration. Besides that, to support conditional analysis in key-value data, we encode all of a user's data with one hot encoding mechanism. This takes cost in both communication and computation. Graham et al. [26] use the Hadamard transform as evaluating a Hadamard entry is practically faster [27]. As our next move, we intend to improve the efficiency by the Hadamard transformation and improve accuracy by using an optimal encoding that achieves lower variance in the conditional analysis of key-value data. A. Error bound for F2M For the randomized response, assume there are x records with value 1 of N records. For the aggregator, after receiving N records with X records being 1, the estimated x * can be adjusted by: And we have E[x * ] = x. 
According to the Chernoff-Hoeffding bound for independent {0, 1} random variables, for all t > 0 we have Pr[|X* − X| ≥ t] ≤ 2e^(−2t²/N). Setting r = t · (e^ǫ + 1)/(e^ǫ − 1) and δ = 2 · e^(−(2r²/N) · ((e^ǫ − 1)/(e^ǫ + 1))²), we can say that with probability at least 1 − δ, |x* − x| ≤ r.
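As a sanity check of this error bound, the short Python sketch below simulates binary randomized response with keep probability p = e^ǫ/(e^ǫ + 1), applies the usual unbiased adjustment x* = (X − N(1 − p))/(2p − 1), and empirically verifies that |x* − x| stays within r for the corresponding δ. The adjustment formula is the standard one for binary randomized response and is stated here as an assumption, since the exact expression is not reproduced in the extracted text; the names below are ours.

```python
import math
import random

def rr_bit(b, p):
    """Report the true bit with probability p, otherwise report its flip."""
    return b if random.random() < p else 1 - b

def adjust(X, N, eps):
    """Unbiased estimate of the true count x from the observed count X (standard RR debiasing, assumed)."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    return (X - N * (1 - p)) / (2 * p - 1)

def error_bound_r(N, eps, delta):
    """r such that |x* - x| <= r holds with probability at least 1 - delta (Chernoff-Hoeffding based)."""
    t = math.sqrt(N * math.log(2 / delta) / 2)          # from delta = 2 * exp(-2 t^2 / N)
    return t * (math.exp(eps) + 1) / (math.exp(eps) - 1)

if __name__ == "__main__":
    random.seed(1)
    N, eps, delta = 20_000, 1.0, 0.05
    true_count = 6_000
    true_bits = [1] * true_count + [0] * (N - true_count)
    p = math.exp(eps) / (math.exp(eps) + 1)
    r = error_bound_r(N, eps, delta)
    trials, violations = 100, 0
    for _ in range(trials):
        X = sum(rr_bit(b, p) for b in true_bits)
        if abs(adjust(X, N, eps) - true_count) > r:
            violations += 1
    print(f"r = {r:.1f}, empirical violation rate = {violations / trials:.3f} (should be <= {delta})")
```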
2019-07-11T06:34:02.000Z
2019-07-11T00:00:00.000
{ "year": 2019, "sha1": "f2b4a2020b90df6dfa5d2f51e587f671b0436c5c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f2b4a2020b90df6dfa5d2f51e587f671b0436c5c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
3047827
pes2o/s2orc
v3-fos-license
Incorporation of Ceramides into Saccharomyces cerevisiae Glycosylphosphatidylinositol-Anchored Proteins Can Be Monitored In Vitro ABSTRACT After glycosylphosphatidylinositols (GPIs) are added to GPI proteins of Saccharomyces cerevisiae, a fatty acid of the diacylglycerol moiety is exchanged for a C26:0 fatty acid through the subsequent actions of Per1 and Gup1. In most GPI anchors this modified diacylglycerol-based anchor is subsequently transformed into a ceramide-containing anchor, a reaction which requires Cwh43. Here we show that the last step of this GPI anchor lipid remodeling can be monitored in microsomes. The assay uses microsomes from cells that have been grown in the presence of myriocin, a compound that blocks the biosynthesis of dihydrosphingosine (DHS) and thus inhibits the biosynthesis of ceramide-based anchors. Such microsomes, when incubated with [3H]DHS, generate radiolabeled, ceramide-containing anchor lipids of the same structure as made by intact cells. Microsomes from cwh43Δ or mcd4Δ mutants, which are unable to make ceramide-based anchors in vivo, do not incorporate [3H]DHS into anchors in vitro. Moreover, gup1Δ microsomes incorporate [3H]DHS into the same abnormal anchor lipids as gup1Δ cells synthesize in vivo. Thus, the in vitro assay of ceramide incorporation into GPI anchors faithfully reproduces the events that occur in mutant cells. Incorporation of [3H]DHS into GPI proteins is observed with microsomes alone, but the reaction is stimulated by cytosol or bovine serum albumin, ATP plus coenzyme A (CoA), or C26:0-CoA, particularly if microsomes are depleted of acyl-CoA. Thus, [3H]DHS cannot be incorporated into proteins in the absence of acyl-CoA. The lipid moieties of the glycosylphosphatidylinositol (GPI) lipid at the stage when it is transferred by the transamidase to GPI proteins are different from those found on mature GPI anchors of Saccharomyces cerevisiae. The free GPI lipids contain a phosphatidylinositol (PI) moiety, which comigrates in thin-layer chromatography (TLC) with the free PI of yeast membranes and therefore probably contains the typical C 16:0 and C 18:1 fatty acids found in yeast PI (7,12,26,29,31). In contrast, the majority of mature protein-linked GPI anchors contain a ceramide moiety, and a minor fraction contains a diacylglycerol modified to have C 26:0 fatty acids in sn2 (9,31). Ceramides of GPI anchors contain phytosphingosine (PHS) and C 26 fatty acids, as do the bulk of the free sphingolipids in yeast membranes, which are inositolphosphorylceramides (IPCs) and derivatives thereof (9,32). Thus, all mature GPI proteins of yeast contain large lipid moieties with C 26 fatty acids, in the form of either a ceramide or a special diacylglycerol, and these lipids are introduced by remodeling enzymes (remodelases) that replace the primary lipid moiety of the anchor. Recently, two gene products required for introducing C 26:0 fatty acids into the primary GPI anchor have been identified (Fig. 1). PER1 encodes a phospholipase A 2 that removes the C 18:1 fatty acid of the primary anchor (13). GUP1 encodes an acyltransferase required for the addition of a C 26:0 fatty acid to the liberated sn2 position, thus generating a pG1-type anchor (4,13) (Fig. 1). 
pG1-type anchors may be the preferred substrate for the enzymes introducing ceramides, since the normal ceramide-containing anchor lipids are strongly reduced in per1⌬ and gup1⌬ mutants, but significant amounts of abnormal, more polar, base-resistant, inositol-containing anchors are observed in gup1⌬ cells (4, 13; unpublished results). Yeast cells lacking CWH43 are unable to synthesize ceramide-containing GPI anchors, while the replacement of C 18 by C 26:0 fatty acids on the primary diacylglycerol anchor by Per1 and Gup1 is still intact (14,34). CWH43 comprises an open reading frame encoding a 953-amino-acid protein with 19 predicted transmembrane domains. Single amino acid substitutions in the hydrophilic, lumenally exposed C-terminal part (amino acids 666 to 953) completely abolish the introduction of ceramides into GPI anchors, whereas mutations in the Nterminal part tend to destabilize the protein (14,21,34). The cwh43⌬ mutants grow well in rich media and do not secrete GPI proteins, and some cwh43⌬ strains are also able to grow in the presence of calcofluor white, quite unlike the per1⌬ and gup1⌬ mutants (14). The yeast remodelase activity introducing ceramide (ceramide remodelase) can be monitored by metabolic labeling experiments using tritiated inositol ([ 3 H]inositol) or tritiated dihydrosphingosine ([ 3 H]DHS) (28,31). When fed to intact cells, these tracers are rapidly taken up and incorporated into lipids and GPI proteins but not into other proteins. [ 3 H]DHS labels only those GPI proteins which carry a ceramide in their anchors. All [ 3 H]inositol-or [ 3 H]DHS-derived label can be removed from the metabolically labeled proteins in the form of PIs or IPCs using nitrous acid, a reagent that releases the inositolphosphoryl-lipid moieties from GPI anchors by cleaving the link between glucosamine and inositol (10,28,31). It presently is not clear what the substrates for the Cwh43mediated remodelase reaction are. It appears that certain GPI proteins such as Gas1 do not receive ceramide anchors (9), whereas many others do. It also is not clear if this is because Cwh43 itself discriminates between different protein substrates or because only certain proteins get access to Cwh43. Furthermore, it is unclear if Cwh43 replaces the phosphatidic acid or the diacylglycerol moiety of the GPI proteins and if it introduces either a ceramide or only a long-chain base, the latter of which would have to be acylated through a second biosynthetic reaction. Here we report on a microsomal assay that recapitulates the findings previously made with living cells and allows these questions to be addressed under defined conditions in vitro. Microsomal ceramide-remodeling assay. Microsomes from X2180 were used unless indicated otherwise. The standard conditions for the in vitro remodeling assay were as follows. Microsomes equivalent to 100 g of microsomal proteins were incubated for 1 h at 30°C in 25 mM Tris-HCl (pH 7.5) containing 1 mM ATP, 1 mM GTP, 1 mM CoA, 30 mM creatine phosphate, 1 mg/ml of creatine kinase, cytosol (200 g of proteins), 10 nmol of C 26:0 , 20 g/ml myriocin, 200 g/ml cycloheximide, and 10 Ci (0.17 nmol) of [ 3 H]DHS in a final volume of 100 l. Myriocin and cycloheximide were added to inhibit endogenous DHS production and block incorporation of [ 3 H]DHS into newly made proteins during incubation. Where indicated, C 26:0 -CoA (10 nanomoles total) or MgCl 2 (2 mM) was also included. 
"Final conditions" for the assay were determined through the various attempts to optimize incorporation and to make the assay more defined. When done under "final conditions," assay mixtures contained 600 g bovine serum albumin (BSA) instead of cytosol, and in addition to the ingredients of the standard assay mixture also contained MgCl 2 (2 mM), glutathione (GSH) (5 mM), and NADPH (1 mM). To set up the assay, [ 3 H]DHS and C 26:0 or C 26:0 -CoA were dried in separate tubes, and [ 3 H]DHS was resuspended in 25 l of lysis buffer by vortexing and then transferred to the tube containing the dried C 26:0 or C 26:0 -CoA. After vortexing, other ingredients were added. Reactions were started by adding microsomes and stopped by the addition of 600 l of CHCl 3 -CH 3 OH (1:1). Protein pellets were extensively delipidated by repeated extraction with organic solvents as described for GPI proteins of metabolically labeled intact cells (16). Labeled proteins either were analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE)/fluorography or were further delipidated and purified by concanavalin A-Sepharose affinity chromatography as described previously (16). Bound proteins either were released from concanavalin A-Sepharose by boiling in SDS sample buffer and analyzed by SDS-PAGE/fluorography or were released from concanavalin A-Sepharose using pronase and the radioactivity of the thus-generated anchor peptides was detected by scintillation counting. In many assays, samples were divided into two parts to be analyzed separately by SDS-PAGE/fluorography and pronase treatment/scintillation counting, and the two methods always gave excellent agreement. For analytical purposes the anchor peptides were freed from hydrophobic peptides using octyl-Sepharose column chromatography, and labeled anchor peptides were eluted with 25% and then 50% propanol (16). Anchor peptides are normally found only in the 50% eluate, except for anchors from gup1⌬ cells, which also contain abnormally polar anchor lipids so that the bulk of gup1⌬ anchor peptides elute at 25% propanol (see Fig. 5C) (4). Lipid moieties were released from peptides by nitrous acid treatment for analysis by TLC (16). Lipids were resolved by TLC on Silica 60 plates (20 by 20 cm) using solvent 1 (chloroform-methanol-NH 4 OH at 40:10:1) or solvents 2 and 3 (chloroform-methanol-0.25% KCl at 55:45:10 and 55:45:5, respectively). Radioactivity was detected by fluorography or radioimaging using the Bio-Rad Molecular Imager FX. Radioactivity was quantified by one-or two-dimensional radioscanning in a Berthold radioscanner. Microsomes incorporate [ 3 H]DHS into GPI proteins. Cellfree microsomes were prepared from X2180 wild-type (wt) cells that had been treated for a few hours with myriocin, a specific inhibitor of serine palmitoyltransferase, the key enzyme for DHS biosynthesis (18,23), and were labeled with (14) FIG. 1. GPI anchor lipid remodeling in the ER of Saccharomyces cerevisiae. Yeast genes implicated in the various steps are indicated in italic. Anchors are designated according to the lipid moiety they release upon nitrous acid treatment. BstI generates pG2-type anchors, which are gradually transformed into pG1-and IPC/B-type anchors over about 20 to 30 min (31). Upon arrival in the Golgi apparatus, a small fraction of GPI anchors with ␣-hydroxylated C 26:0 fatty acid is generated (IPC/C-type anchors, not shown). 5). However, when microsomes were boiled (lanes 3 and 7), labeling of the proteins was completely abolished. 
This strongly suggested that the remodeling, or at least the incorporation of DHS into proteins, is an enzymatic process and not a spontaneous reaction. Myriocin pretreatment of cells was critical (Fig. 2B, lane 1 versus 2), and this was probably for two reasons: first, because the lack of sphingolipids, and specifically of ceramides, delays the transport of GPI proteins from the endoplasmic reticulum (ER) to the Golgi apparatus and causes an accumulation of immature GPI proteins in the ER (18,28,33,35), and second, because myriocin pretreatment allows cells to be starved of DHS, PHS, and ceramides and thereby prevents the introduction of ceramides into GPI anchors by the ER-based GPI anchor ceramide remodelase Cwh43. The appearance of distinct bands on SDS-PAGE argued that most of the labeled glycoproteins were still localized in the ER and had not reached the Golgi apparatus, where the glycan elongation transforms most GPI proteins as well as many other secretory proteins into diffuse and poorly migrating proteins of high molecular mass (6). Figure Characterization of GPI anchors generated in vivo. The inositolphosphoryl-lipid moieties of GPI-anchored proteins from wt cells labeled with [ 3 H]inositol are of three kinds, as described before (31); pG1, a remodeled form of PI containing C 26:0 in sn2 of glycerol, IPC/B, and IPC/C ( Fig. 1 and 3A, lane 4). IPC/B-and IPC/C-type GPI anchor lipids are formed in the ER and the Golgi apparatus, respectively (28). The major IPC/B and IPC/C moieties of GPI anchors are believed to contain PHS-C 26:0 , and PHS-C 26:0 -OH ceramides, respectively, based on the chemical analysis of yeast GPI anchors, which were shown to contain PHS, C 26:0 , and smaller amounts of C 26:0 -OH (9, 28). As a prelude to the analysis of the in vitro-remodeled GPI lipids, we wanted to confirm by genetic means that the in vivo-generated, metabolically labeled anchor lipid previously named IPC/B indeed contains PHS and C 26:0 , not DHS and C 26:0 -OH. The genetic confirmation was sought using a sur2⌬ mutant, which is deficient in the transformation of DHS into PHS, and a scs7⌬ mutant, which is unable to hydroxylate the ␣ carbon of the fatty acid in ceramides (17,22). wt and mutant cells were labeled in vivo with [ 3 H]inositol, and their GPI lipids were isolated for analysis by TLC. The GPI lipids of the wt strain showed the usual remodeled PI (pG1), IPC/B, and three minor bands, one of which may represent IPC/C (Fig. 3A, lane 4). The anchor lipids of scs7⌬ also contained IPC/B as the main anchor lipid (Fig. 3A, lane 2 versus 4), confirming that the fatty acid of IPC/B is not hydroxylated. In contrast, IPC/B was no longer present in sur2⌬ cells (Fig. 3A, lane 3 versus 4), confirming the presence of PHS in IPC/B. A more hydrophobic band named IPC/A was seen in this strain, which must represent DHS-C 26:0 . The result also indicates that DHS-containing ceramides can be utilized by the ER remodelase. GPI anchors are remodeled to IPC/A and IPC/B in vitro. Anchor lipids from [ 3 H]DHS-labeled proteins generated in the in vitro assay were compared with anchor lipids generated in intact cells, as shown in Fig. 3B. Anchor lipids labeled in vitro in wt microsomes appeared as two bands, one comigrating with IPC/B and the other being more hydrophobic and running at the position of IPC/A (Fig. 3B, lane 3). Both lipids were resistant to mild base treatment (Fig. 3B, lanes 3 and 4, and 4B, lanes 3 and 6), as is expected for ceramide-containing anchor lipids. 
The two species were, however, destroyed by strong acid hydrolysis and yielded [ 3 H]DHS and traces of [ 3 H]PHS (Fig. 4A, lane 3, and B, lane 4). PHS was not formed upon strong acid hydrolysis of sur2⌬-derived anchor lipids (Fig. 4B, lane 7 versus 4). When anchor peptides obtained from in vitro-labeled GPI proteins were treated with PI-specific phospholipase C, two different anchor lipids were removed, which migrated in the region of ceramide standards on TLC (Fig. 3B, lane 7). This argues that the difference between IPC/A-and IPC/B-type anchor lipids resides in the ceramide moiety. To confirm that the two in vitro-generated anchor lipids comigrating with IPC/A and IPC/B contain DHS-C 26:0 and PHS-C 26:0 , respectively, we repeated the in vitro experiment with sur2⌬ cells. As seen in Fig. 3C, deletion of SUR2 eliminated the band comigrating with IPC/B, whereas IPC/A was still made. The predominance of IPC/A in anchor lipids generated by wt microsomes suggested that Sur2 hydroxylase may not be optimally working in the in vitro system. Sur2 belongs to a family of lipid desaturases and hydroxylases often requiring cytochrome b 5 as an electron carrier. When EDTA was removed from the buffers used during cell lysis, we found that the proportion of IPC/B made in vitro was significantly increased (Fig. 3C, lanes 6 and 7 versus 2 and 3). This is compatible with the view that EDTA partially inactivates Sur2 activity by removing an iron atom from an essential component of the hydroxylase. Cwh43, Mcd4, and Gup1 are required for ceramide remodeling in vitro. cwh43⌬ cells lack the capacity to make ceramidebased GPI anchors, so that all GPI anchors of cwh43⌬ cells are of the pG1 type ( Fig. 1) (14, 34). cwh43⌬-derived microsomes were entirely unable to incorporate [ 3 H]DHS into proteins (Fig. 5A, lane 3), although they made normal amounts of ceramides (Fig. 5B, lane 1Ј versus 3Ј). Incorporation of (Fig. 5A, lanes 3, 4). The same plasmid completely restored the incorporation of [ 3 H]DHS into proteins in intact cwh43⌬ cells (not shown). Incomplete restoration in vitro may be caused by the fact that the overexpression of Cwh43 from the GAL1 promoter may render the accumulation of not-yet-remodeled GPI proteins during preculture in myriocin less efficient. The mdc4⌬ strain lacks an ethanolamine-phosphate side chain on the ␣1,4-linked mannose of its GPI anchors, and this has been found to be correlated with a complete absence of ceramide remodeling (36). Similarly, the microsomes of this cell line do not incorporate any [ 3 H]DHS into proteins (Fig. 5A, lanes 5 and 6). The in vitro remodelase test also faithfully reproduced the ceramide remodelase defect of gup1⌬ cells (4). Microsomes of gup1⌬ cells still incorporated [ 3 H]DHS into proteins, albeit with a lower efficiency than wt cells (Fig. 5A, lanes 9 and 10 versus 7 and 8), but most GPI proteins that were labeled in the wt were also labeled in gup1⌬ cells. It was previously reported that metabolic labeling of gup1⌬ cells with [ 3 H]inositol yields GPI anchor peptides that elute from the preparative octyl-Sepharose column at 25% propanol and contain abnormally polar anchor lipids (4). This previous study showed that of the three polar anchor lipids of gup1⌬ cells (Fig. 5C, lane 2), only the one of intermediate mobility is mild base sensitive, suggesting that it represents a lyso-PI (4). 
This lipid was not labeled with [ 3 H]DHS in vitro, but the two lipids that previously were characterized as mild base resistant were labeled, and these were also mild base resistant when labeled in vitro (Fig. 5C, lanes 3 and 4). Thus, these two anchor lipids seem to contain a long-chain base and inositol but possibly lack a fatty acid. The incorporation of [ 3 H]DHS into proteins in microsomes from per1⌬ cells was also severely (Ͼ5-fold) reduced, whereas the synthesis of ceramides was not affected (not shown). Altogether, it appears that the microsomal ceramide-remodeling assay faithfully reproduces the events that have been observed in intact cells. Definition of optimal conditions for the microsomal ceramide remodelase activity. Numerous experiments were carried out in an attempt to optimize incorporation of [ 3 H]DHS into proteins. Many ingredients were found to consistently either enhance or inhibit the incorporation of [ 3 H]DHS into proteins, but for some of them the effect was somewhat variable from one experiment to the next. For instance, omission of cytosol had very drastic effects in some experiments but less drastic effects in others. Only those parameters which gave consistent results in many experiments are described in the following. Freezing microsomes prior to the assays resulted in an 80% loss of activity. The standard in vitro reaction was done in the presence of myriocin to prevent the biosynthesis of cold DHS from serine and NADPH present in the added cytosol, but omission of myriocin from the in vitro assay did not diminish the incorporation of [ 3 H]DHS (not shown). When cold DHS was added to the reaction mixture, the incorporation of [ 3 H]DHS was strongly reduced, as shown in Fig. 6A and B. Calculations show that the chemical amounts of DHS incorporated into GPI proteins increased when the amount of DHS was raised from 0.17 nmol to 4.32 nmol (ϭ 1.3 g/ml) (Fig. 6B). As can be seen in Fig. 6C, using 100 g of microsomal protein per assay we can observe a constant, close-to-linear increase of incorporated radioactivity during the first 30 to 60 min of incubation. Using less protein does not significantly reduce the rate of incorporation of [ 3 H]DHS (Fig. 6C). A likely explanation of this fact is that with fewer microsomes and hence less cold DHS and PHS, the specific activity of [ 3 H]DHS in the assay becomes higher. The close-to-linear time course, however, and the total absence of incorporation with boiled microsomes ( Fig. 2A and 6C) argue that under our standard conditions (60 min of incubation and 100 g of protein) we will observe less incorporation if one of the required enzyme activities or substrates becomes limiting. On the technical side, adding [ 3 H]DHS to assays not directly, but incorporated into phosphatidylcholine-containing liposomes reduced its incorporation into proteins by a factor of about 2 (not shown). The addition of cytosol was strongly stimulatory (see Fig. 8A, bar 2 versus 1, and 8B, bar 7 versus 1), but [ 3 H]DHS was incorporated into the same proteins in the presence or absence of cytosol ( Fig. 2A). As boiled cytosol had the same enhancing effect ( Fig. 2A), we initially assumed that the active factor might be an ion or small molecule. Fractionation of cytosol by gel filtration on Biogel-P2 (separating in the range of 100 to 1,800 Da) and testing individual fractions in the microsomal remodeling assay did not, however, reveal any stimulatory activity in the low-molecular-weight range (not shown). 
We also were unable to extract a stimulatory lipid from cytosol using an organic solvent (Fig. 7, bars 7 and 8). Interestingly, cytosol was efficiently replaced by other proteins, such as BSA, defatted or boiled BSA, or rabbit serum (Fig. 7, bars 3 to 6). In our view, the data suggest that proteins might stabilize the microsomes, e.g., by preventing their aggregation during the incubation, or that proteins protect the remodelases and/or GPI protein substrates from proteolytic degradation. Further experiments were done to evaluate the importance of CoA, ATP, GTP, and C 26:0 -CoA. The simultaneous omission of CoA and ATP reduced the incorporation of [ 3 H]DHS into proteins by about 35% (Fig. 8A, bar 5, and B, bar 8), whereas omission of C 26:0 was in most cases of no consequence (Fig. 8A, bar 4 versus 1, and B, bar 3 versus 1). This suggests that microsomal membranes contain sufficient C 26:0 -CoA or precursors thereof to attach a certain amount of [ 3 H]DHS to GPI proteins but that exogenously added CoA and ATP can enhance the reaction. Addition of C 26:0 -CoA to the standard reaction mixture usually had no effect (not shown). However, C 26:0 -CoA usually enhanced the incorporation of [ 3 H]DHS when CoA and ATP were lacking (Fig. 8A, bar 6 versus 5, and B, bar 9 versus 8), but not to levels higher than observed under the standard conditions. Curiously, C 26:0 -CoA consistently had little effect if added to reaction mixtures that contained ATP (Fig. 8A, bar 8 versus 7, and B, bar 10 versus 8). Aureobasidin A, a specific inhibitor of IPC synthase (24), could be expected to increase the amount of ceramide available for the remodeling reaction by blocking the further metabolism of ceramide, but it did not stimulate the incorporation of [ 3 H]DHS into GPI proteins (Fig. 8A, bar 9 versus 1, and data not shown). While many tests showed stimulation of the remodeling reaction by ATP, this stimulation was not dependent on the presence of Mg 2ϩ , suggesting that ATP, not Mg 2ϩ -ATP, is required. As shown in Fig. 8B (without Mg 2ϩ ), the simultaneous omission of C 26:0 , CoA, ATP, and GTP again reduced the incorporation of [ 3 H]DHS into proteins quite significantly (Fig. 8B, bars 1, 2, and 8). Furthermore, the omission of either ATP and GTP or CoA caused a similar reduction (Fig. 8B, bars 1 to 5). Gel electrophoresis experiments also showed that where the assay conditions were tested in duplicate or triplicate assays. Reaction 1 contained a total of 36,000 cpm of anchor peptides. (B) As for panel A, but assays were carried out in the absence of Mg 2ϩ , reaction 5 contained apyrase, and microsomes for reaction 6 were from cells not precultured with myriocin. The bars indicate the means from three to five independent assays for reactions 1 to 7 and from duplicate assays for reactions 8 to 10. Reaction 1 contained a mean of 54,000 cpm of anchor peptides. (C) Standard assays using microsomes from X2180 cells were performed with some ingredients omitted as indicated at the bottom. Labeled proteins were analyzed by SDS-PAGE and fluorography. 312 BOSSON ET AL. EUKARYOT. CELL the profile of labeled proteins was not significantly different when microsomes were deprived of the possibility to make acyl-CoA (Fig. 8C). 
Reactions became more dependent on exogenously added CoA and ATP, and became even dependent on C 26:0 , when microsomes were derived from cells that had been precultured in the presence not only of myriocin but also of cerulenin, a drug which blocks fatty acid biosynthesis by inhibiting the ␤-ketoacyl-acyl carrier protein synthase (1,20). As shown in Fig. 9, omission of C 26:0 reduced the incorporation of [ 3 H]DHS into proteins significantly (Fig. 9, bar 4 versus 3), omission of CoA or ATP led to a severe reduction (Fig. 9, bars 5 and 6 versus 3), and C 26 -CoA could restore some activity even though ATP was present (Fig. 9, bar 9 versus 5). Thus, after preculture of cells with cerulenin, the microsomes may be low in acyl-CoA so that the remodeling reaction becomes more dependent on acyl-CoA synthesis. The very-long-chain fatty acid-specific acyl-CoA synthase Fat1 has recently been localized in the ER (25). We further investigated whether fatty acids other than C 26:0 would enhance the standard reaction (using microsomes from cells incubated with myriocin but not cerulenin). Compared to the reaction without added fatty acid, the addition of C 26:0 or C 16:0 was of no consequence; only C 24:0 slightly stimulated the incorporation of [ 3 H]DHS into proteins (not shown). Addition of physiological electron donors such as GSH or NADPH did not stimulate the reaction (not shown). The TLC mobility of anchor lipids also did not change when GSH, NADH, or NADPH was added to standard reaction mixture (not shown). After these studies, we now utilize a slightly modified standard assay including Mg 2ϩ , C 24:0 , NADPH, GSH, and BSA instead of cytosol (final conditions; see Materials and Methods). DISCUSSION Ceramides are found in the GPI anchors of certain plants (e.g., pears), Trypanosoma cruzi, Paramecium, Aspergillus fumigatus, and Dictyostelium, sometimes as the sole anchor lipid (3,27). Recent studies show that, similar to the case in yeast, the first steps of GPI biosynthesis in A. fumigatus and T. cruzi do not use ceramide as the lipid support, suggesting that ceramide is added by remodeling at a later step not only in yeast but also in other species (2,11). In a recent report we described a microsomal assay for the Gup1-mediated addition of fatty acids in the sn2 position of GPI anchors, which revealed that the GUP1 homologue of Trypanosoma brucei can remodel free GPI lipids as well as GPI anchors of proteins (19). Here we describe a further assay allowing measurement of the replacement of diacylglycerolbased GPI anchors by ceramide-based anchors. These assays set the stage for further biochemical investigation and for reconstitution experiments of the various remodeling reactions. The SDS-PAGE/fluorography profile of GPI proteins labeled in vitro is very similar to that of GPI proteins labeled in vivo when the exit from the ER is blocked. This argues that the microsomal assay measures the GPI-remodeling event occurring in the ER. While in vivo remodeling in the ER generates IPC/B-containing anchors, the in vivo remodeling in the Golgi apparatus generates IPC/C-containing anchors (28). The fact that the in vitro-labeled GPI anchors contain IPC/B but not IPC/C suggests that the in vitro test reproduces ER remodeling or else that Scs7, the enzyme hydroxylating the fatty acid moiety of sphingolipids and required for the generation of IPC/C, is not operative in our assay. 
The current knowledge suggests that Scs7 is localized outside the ER in vesicles, whereas Sur2, generating PHS from DHS and required for the generation of IPC/B-containing anchors, is localized to the ER. (25). The increase of DHS incorporation upon addition of cold DHS (Fig. 6B) might be thought to be due to a detergent effect of DHS, but the 4.3 nmol giving the highest incorporation corresponds to a concentration of 0.0013% (wt/vol) in the assay; 4.3 nmol/assay correspond to 1.3 g of DHS mixing in with a total of approximately 100 g of membrane lipids. While 4.32 nmol of long-chain bases/100 g membrane lipid is a 15-fold-higher concentration than the physiological 42 pmol/ A 600 unit of cells (8), ceramide synthase-deficient lag1⌬ lac1⌬ strains have Ͼ20-fold-increased long-chain base levels and are living (15). Thus, adding 4.32 nanomoles to the assay mixture is not expected to significantly alter the membrane structure. Moreover, adding 1.3 g of lyso-phosphatidic acid, a natural detergent, had no influence on the incorporation (not shown). A more likely interpretation of the results in Fig. 6 and 8 is that the DHS concentration in the standard assay somewhat limits the rate of DHS incorporation, whereas the C 26:0 concentration does not. The concentration of nonremodeled GPI proteins that can serve as substrates is most likely also rate limiting, but it is impossible to test this by varying their concentration in our microsomal assay. C 26:0 -CoA or C 26:0 , required for synthesis of C 26 -CoA by the ER-based acyl-CoA synthase Fat1 (25), had major effects only if the cells had previously been depleted of acyl-CoA by cerulenin treatment (Fig. 9), but the addition of C 26:0 -CoA to the standard assay was usually of no consequence. In contrast, addition of C 26:0 -FIG. 9. Cerulenin enhances the need for ATP and C 26:0 in the microsomal assay. X2180 wt cells were precultured for 180 min with cerulenin (10 g/ml) and for the last 90 min with myriocin (40 g/ml) in addition (bars 3 to 9) or only with myriocin for 90 min (bars 1 and 2). Microsomal remodeling assays were run in duplicate under standard conditions (bars 1 and 3) or with ingredients omitted or added as indicated at the bottom. The amount of radioactivity in anchor peptides was determined by scintillation counting and plotted as a percentage of incorporation under standard conditions (bar 1). Reaction 6 contained 0.5 U of apyrase in addition, and 2 or 10 nanomoles of C 26:0 -CoA was added in conjunction with 0.1 mg of purified yeast acyl-CoA binding protein. 2C) (15). This argues that in this in vitro system, the acyl-CoA-dependent ceramide synthesis pathway does not feed directly into the ceramide pool that is utilized by the GPI ceramide remodelase activity. The same idea is supported by findings obtained with intact cells: (i) the type of ceramides predominating in GPI anchors (PHS-C 26:0 ) is different from that predominating in IPCs (PHS-C 26:0 -OH) (28) (Fig. 3) and (ii) GPI anchor lipids made by a lag1⌬ lac1⌬ ydc1⌬ ypc1⌬ strain kept alive by the murine LAG1 homologue Lass5 contain the typical IPC/B moiety although the free sphingolipids of this strain almost exclusively contain C 16:0 and C 18:0 fatty acids in their ceramide moiety (5). These data suggest that Cwh43 does not just transfer ceramides made by the acyl-CoA-and Lag1-or Lac1-dependent ceramide synthase but may generate ceramides on GPI anchors through a different mechanism. 
However, the significant dependence on CoA and ATP of the incorporation of [ 3 H]DHS into proteins in microsomes from acyl-CoA-depleted cells (Fig. 9, bars 5 and 6) suggests that [ 3 H]DHS cannot be incorporated as such, as proposed in the introduction, but that it has to be acylated before being added to proteins. Further studies are necessary to fully understand the mode of operation of Cwh43.
2018-04-03T03:24:21.010Z
2008-12-12T00:00:00.000
{ "year": 2008, "sha1": "27046bf9e12c86746e7ccc7497b099198f23ae59", "oa_license": null, "oa_url": "https://doi.org/10.1128/ec.00257-08", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "06e996353bd8b155b5aff8328c5eea18a62609a0", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
253324257
pes2o/s2orc
v3-fos-license
Reclust: an efficient clustering algorithm for mixed data based on reclustering and cluster validation ABSTRACT INTRODUCTION Clustering analysis is one of the most important approaches in data mining, and it seeks to determine the nature of groupings or clusters of data objects in attributes space. Clustering methods are employed in a variety of applications [1], including social network analysis [2], knowledge discovery, image processing, text and sentiment analysis [3]. Clustering analysis seeks to group data objects with similar properties together, and those with distinct characteristics into separate clusters. Hierarchical and partitional clustering methods are the two types of clustering algorithms [4]. Data are dispersed into a dendrogram of layered segments using a split or agglomerative technique in hierarchical clustering algorithms. Data are partitioned into a certain number of clusters by minimizing an objective cost function in partitional clustering algorithms. For specific kinds of information, clustering algorithms have been developed. Continuous values are used to represent numerical data, whereas categorical data, which is a subset of discrete data, can only have a finite number of values. Many real-world applications use categorical data, such as name, gender, and educational level. Both numerical and category values were present in the mixed datasets. Real-world data is frequently of various sorts. Medical data, for example, includes categorical and numerical values such as age, height, weight, and salary, as well as categorical and numerical values such as nationality, gender, employment, [5], marital status, and chest pain type [6]. When a dataset comprises both numerical and categorical variables, the issue of determining the similarity of two data becomes more complicated [7]. Splitting the numeric and categorical elements of a mixed dataset and finding the Euclidean distance between two data points for numeric characteristics and the Hamming distance for categorical features is a simple technique for solving the similarity problem [8]. For clustering mixed data, several techniques have been developed. To cluster heterogeneous data, Huang [9] presented the well-known k-prototypes technique, which merged the k-means and k-modes approaches. The k-prototypes algorithm was improved in [10] by incorporating attribute influence and enhancing the cluster center representation. The unsupervised feature learning (UFL) approach was developed by Lam et al. [11] by combining the fuzzy adaptive resonance theory (ART) with the UFL. The approach Kaymeans for mixed large data sets (KAMILA) introduced by Foss et al. [12] can directly deal with multiple types of attributes and requires fewer parameters. Chen and He [13] used the principle of density clustering to present a self-adaptive peak density clustering technique. Most mixed data clustering algorithms have two main goals: to develop new approaches to construct novel measures of similarity between mixed characteristics and to cluster data using previous or new strategies to obtain a local optimum result. This paper proposes an efficient clustering algorithm for mixed numerical and categorical data based on re-clustering and cluster validation called reclust. The proposed method contains three important processes: initial clustering, validation, and re-clustering. 
The initial clustering process uses four traditional clustering algorithms such as expectation-maximization (EM), hierarchical cluster (HC), k-means (KM), and selforganizing map (SOM). The validation process evaluates the clustering result. The re-clustering process reclusters the incorrectly clustered data. The validation and the re-clustering process is an iterative process [14]. It improves the quality of cluster results. The remaining part of this research paper is as follows: section 2 describes the research background including different clustering methods for numerical and categorical data and also explains clustering algorithms used in research. Then, the proposed methodology is explained in section 3. The performance of the proposed work is analyzed in section 4-the conclusion and the future work of this research work are provided in section 5. RESEARCH BACKGROUND 2.1. Mixed data clustering Clustering mixed data is a difficult process that is rarely accomplished using well-known clustering algorithms developed for a certain type of data. It is common knowledge that converting one type to another is insufficient since it may result in data loss [15]. For clustering mixed datasets, Que et al. [16] suggest a similarity measurement using entropy-based weighting. An automatic categorization technique is used to convert numerical data into category data. The relevance of various attributes is then denoted using an entropybased weighting technique. Li et al. [17] offer a mixed data clustering technique with a noise-filtered distribution centroid and an iterative weight modification strategy. It defines a noise-filtered distribution centroid for categorical attributes. By integrating the mean and noise-filtered distribution centroid, this method displays the cluster centre with mixed properties. The frequency of occurrences for each potential value of the categorical attributes in a cluster is more accurately recorded by the noise-filtered distribution centroid. Jia and Cheung [18] show how to cluster data using soft subspace clustering with both numerical and categorical features. The model is based on the definition of object-cluster similarity and is attribute-weighted. Using a uniform weighting approach for numerical and categorical qualities, the attribute-to-cluster contribution is measured by accounting for both inter-cluster difference and intra-cluster similarity. For data with heterogeneous features, D'Urso and Massari [19] suggest a fuzzy clustering model. Different sorts of variables, or qualities, can be considered using the clustering model. This result is obtained by using a weighting system to combine the dissimilarity measurements for each attribute, yielding a distance measure for several attributes. During the optimization phase, the weights are computed objectively. The weights in the clustering findings represent the importance of each attribute type. Rodriguez et al. [20] suggest a multipartition clustering process that combines Bayesian network factorization and the variational Bayes framework to efficiently handle mixed data. K-means clustering algorithm Let X=x 1,x 2,...,x n be a data collection in a d-dimensional Euclidean space Rd, and A=a 1,a 2,...,a c be the c cluster centres, with d ik=x i-a k as its euclidean norm. Let U= ik _(nc), where _ik is a binary variable (i.e., _ik 0,1) that indicates whether the data point xi belongs to the kth cluster, k=1,2,...,c. 
By minimizing the k-means objective function, the k-means clustering method is iterated via the updating equations for cluster centres and memberships [12]: Hierarchical clustering In Algorithm 1 describe the hierarchical clustering pseudocode. Methods that use hierarchical clustering build a hierarchy of clusters that are arranged from top to bottom (or bottom to up). The hierarchical algorithms require both of the following to build clusters: -Similarity matrix-this is created by determining how similar each pair of mixed data values are. The shape of the clusters is influenced by the similarity measure used to generate the similarity matrix. -Linkage criterion-this establishes the distance between sets of observations as a function of pairwise distances. Expectation maximization The EM algorithm in Algorithm 2 finds maximum likelihood parameter estimates in probabilistic models. The iterative technique of expectation maximisation (EM) alternates between two steps: expectation (E) and maximum (M). To cluster data, EM employs the finite Gaussian mixtures model, which iteratively estimates a set of parameters until the desired convergence value is obtained. Each of the K probability distributions in the mixture corresponds to a single cluster. A membership probability is assigned to each instance by each cluster [21]. Self organization map The SOM algorithm in Algorithm 3 is a classic unsupervised learning neural network model that clusters input data with similarities. It employs an unsupervised learning methodology and used a competitive learning algorithm to train its network. In order to minimise complex issues for straightforward interpretation, SOM is utilised for clustering and mapping (or dimensionality reduction) procedures to map multidimensional data onto lower-dimensional spaces. The input layer and the output layer are the two layers that make up SOM. The SOM merges the clustering and projection operations (reduce the dimensionality of information). PROPOSED METHOD This section explains the proposed clustering algorithm for mixed numerical and categorical data [22] based on re-clustering and cluster validation called reclust. The proposed method contains three important processes: initial clustering, validation, and re-clustering. The initial clustering process uses four traditional clustering algorithms such as EM, HC, KM, and SOM. The validation process evaluates the clustering result. The re-clustering process re-clusters the incorrectly clustered data. The validation and the re-clustering process is an iterative process. It improves the quality of cluster results. Let D be the mixed dataset consisting of n instances, indicates as {d1, d2, …, dn}. The dataset D has ac categorical attributes and au numerical attributes. Then d i ( In this algorithm, step 1 applies four traditional clustering algorithms. Step 2 evaluates the cluster results. The evaluateCluster uses classes to cluster evaluation method. It builds clustering after ignoring the class attribute. It then allocates classes to the clusters during the test phase, depending on the majority value of the class feature within each cluster. The classification error is then calculated based on this assignment. Step 2e finds the minimum error value of four traditional clustering algorithms. Step 2f extracts the incorrectly clustered data from the evaluation results. Step 3 is an iterative re-clustering, which clusters the incorrect data and evaluates the clustering result. 
The stop criterion for the re-clustering step is either a minimum error value or a minimum number of instances in incorrect clustered data. EXPERIMENTAL RESULT This section evaluates the performance of the proposed work through experiments. Three publicly available data sets and students' data with seven questionnaires are used to analyze the cluster results. Table 1 shows the summary of the dataset used for experiments. The following metrics are used to evaluate the clustering results: rand index (RI), precision (Pre), and recall (Rec). These evaluation metrics are computed using the classes to cluster assignment (CCA) table shown in Table 2. Let D={D1, D2, D3,…, Dn} be the dataset contains n number of instances, C={C1, C2, …,Ck} denotes set of k clusters generated from D using clustering algorithm and P = {P1, P2, …, Pc}denotes set of c true classes of D. In table 2, aij represents the number of common instances between Pi and Cj i.e aij = |Pi ∩ Cj|. SPi and SCj denote the number of instances in Pi and Cj. The evaluation metrics are computed as shown in: In this experiment, the number of clusters to be found was equal to the number of classes in the data set i.e., c = k. Larger values of RI, Pre, and Rec indicate better clustering results. Table 3 shows the Classes for Cluster Assignment for the emotional intelligence dataset. Most of the classes are correctly clustered. Tables 4-9 shows CCA for EPQ, GSE, EHQ, PNA, RSE [23], SDS datasets. Table 10 shows the comparison of evaluation metrics for different datasets. The metrics RI, Precision, and Recall is compared with ABC-K-Prototypes [24], CCS-K-Prototypes [1], and Multi-view K-Prototype [25]. Table 11 and Figure 1 shows the Rand Index comparison. Table 12 and Figure 2 depict the precision comparison. Table 13 and Figure 3 depicts the recall comparison. CONCLUSION Clustering is a typical data mining technique, and clustering mixed datasets into meaningful groups is possible since mixed items are ubiquitous in real-world datasets. This research presents an effective clustering approach for grouping mixed numerical and categorical datasets. Furthermore, iterative re-clustering and cluster validation enhance the clustering results. In terms of clustering purity, NMI, rand index, precision, and recall, the suggested reclust algorithm was tested on several datasets. The results of the experiments confirm the reclust algorithm's superior performance.
2022-11-05T16:01:12.491Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "8bcf6275f7be0d55d75896432c565f4a3073bbb7", "oa_license": "CCBYNC", "oa_url": "https://ijeecs.iaescore.com/index.php/IJEECS/article/download/28307/16987", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a58fe922c6efd79b471c8134fa0a9e38cc7810e7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
260179837
pes2o/s2orc
v3-fos-license
Synthesis and characterization of a new hydrazone-coumarin chemosensor for Fe 3+ in a water-acetonitrile mixture A new coumarin-based fluorescent chemosensor, 3 , incorporating a hydrazone (-NH-N=C), was designed and synthesized. The chemosensor displayed higher affinity towards Fe 3+ in water-acetonitrile solution in the presence of other competing cations in emission and absorption essays. The binding stoichiometry between the chemosensor 3 and Fe 3+ was shown to occur in a 1:1 ratio, and the binding modes of the chemosensor towards Fe 3+ were identified through the emission response, 1 H NMR, FT-IR and molecular modelling studies. The hydroxyl and amine functionalities on 3 were recognized as possible binding sites. The chemosensor presented a limit of detection, a limit of quantification and the association constant of 0.45 µM, 1.50 µM and 3.01 × 10 1 M - 1 , respectively. Introduction Iron is the second most abundant heavy metal on Earth 1 .It is an essential heavy metal in the human body, playing a crucial role in complex formation with proteins and enzymes that facilitate important physiological processes, including oxygen binding and participation in the electron transport chain [2][3] .However, excessive iron can be toxic to humans and other living organisms 4 .When free iron accumulates in the liver, heart, and nervous system, it enters cells and causes damage to mitochondria 5 .This interruption of vital cellular mechanisms leads to the formation of radicals, ultimately resulting in cell death [6][7] .The presence of ferric ions in the environment is mainly attributed to natural resources and human activities such as mining 8 , as well as industrial 9 or municipal wastewater 10 , which subsequently contaminate soil, vegetation, and water 11 .In developing communities, ferric ions may contaminate drinking water through pipe corrosion.The World Health Organization (WHO) regulations recommend a ferric ion dose of 0.3 mg/L in drinking water, although this may slightly vary depending on the geographical location and coagulating agents used in water treatment plants 12 .Monitoring the concentration of Fe 3+ in drinking water is crucial for the preservation of public health.Water purification facilities commonly employ spectroscopy techniques, such as atomic absorption spectroscopy (AAS) 13 , inductively coupled plasma-mass spectrometry (ICP-MS) 14 , cold-vapor atomic fluorescence spectroscopy (CV-AFS) 15 , and neutron activation analysis (NAA) 16 , to quantitatively monitor iron levels in water.However, these techniques are expensive, destructive, and highly labor-intensive [17][18] .Additionally, they require sophisticated instrumentation unsuitable for field use, skilled personnel, complicated sample collection, pretreatment, and long measurement periods 19 .Optical sensors, including colorimetric and fluorometric chemosensors, utilize electromagnetic radiation to detect analytes across a broad range of wavelengths 20,21 .These sensors employ principles such as absorbance, reflectance, fluorescence, and phosphorescence to measure the properties of light and determine the presence of analytes.Comprising a synthetic binding site, a chromophore or fluorophore, and mechanisms for modifying optical properties upon analyte binding, these sensors offer stability and can be tailored for diverse analytes, unlike biological receptors.Optical sensors provide several advantages over conventional analytical methods, including enhanced sensitivity, selectivity, and the capability to operate 
across different wavelengths 22 .Several small molecule fluorescent sensors have been developed to detect Fe 3+ 23-24 .However, many of these sensors still face challenges such as poor selectivity, weak sensitivity, and low water solubility.Typically, fluorescent molecular sensors consist of a receptor unit chemically bonded to a light-emitting chromogenic or electrochemical fluorophore or recognition unit.The sensitivity of the recognition unit towards a specific analyte characterizes molecular sensors 25 .Recent studies indicate that fluorescent chemosensors incorporating a coumarin motif as a recognition unit, along with ligands attached to their core, can address issues such as poor metal ion selectivity and sensitivity 26 .This is attributed to the coumarin molecule's sensitivity, high quantum yield, ease of synthesis, and structural tunability of the conjugated coumarin motif [27][28] .Additionally, hydroxyl functionalities have also been reported to enhance the hydrophilic properties of the coumarin backbone 293031 .Quang et al. developed a chemodosimeter based on a rhodamine-6G Schiff base for the selective detection of Fe 3+ ions.Upon the addition of Fe(III) ions to an aqueous solution, the chemosensor demonstrated a significant enhancement in fluorescence.This study not only showcased the high selectivity of the chemosensor towards Fe 3+ ions but also highlighted its potential application in monitoring Fe(III) levels in living cells 32 .In a separate investigation, Zhang et al. presented a cation sensor with the capability of specifically detecting Fe 3+ ions even in the presence of other metal ions 33 .The sensor design was characterized by its simplicity and its ability to achieve remarkable sensitivity and selectivity towards Fe3+ ions.Additionally, Luo and colleagues designed a colorimetric chemosensor based on bis-(rhodamine) for the recognition of trivalent ferric ions (Fe3+).The chemosensor exhibited exceptional selectivity and sensitivity towards Fe 3+ ions 34 .Top of Form Furthermore, several other Fe3+ chemosensors have been reported and discussed in the literature [35][36][37][38] . Synthesis of (E)-8-(2-(dihydrofuran-2(3H) ylidene)hydrazinyl)-7-hydroxy-4-methyl-2H-chromen-2-one (3) The study commenced with the synthesis of fluorescent coumarin-containing hydrazone compound 3.The synthesis of compound 3 involved a series of multi-step reactions starting from resorcinol.Initially, 37 g of resorcinol was dissolved in 45 ml of ethyl acetoacetate, and the resulting solution was slowly added to a cold 150 ml solution of H2SO4 while carefully maintaining the temperature below 10°C.The mixture was stirred for 0.5 hours, followed by its pouring into ice-cold water.Subsequently, the solid product, 7-hydroxy-4-methyl-2Hchromen-2-one, was isolated by filtration and drying.In parallel, coumarin derivative 1 was prepared using established procedures described in the literature [46][47] .The synthesis of coumarin derivative 1 involved a twostep process starting from resorcinol.By employing these multiple-step syntheses, various coumarin derivatives were generated as intermediates, ultimately leading to the synthesis of the target fluorescent coumarincontaining hydrazone compound 3. 
AUTHOR(S) X-Ray structure of 3 Further structure confirmation of chemosensor 3 was done using single-crystal X-ray diffraction analysis.Compound 3 was recrystallized in THF solvent to afford suitable crystals for x-ray studies.The single-crystal structure for chemosensor 3 (Figure 1B) is presented as expected, with the coumarin ring and all substituents in the same plane.However, there is a difference in the orientation of the tetrahydrofuran ring in the X-ray structure to that of the computational structure of 3 (Figure 1C).This is because the crystal structure was obtained in its fixed solid state, whereas the computational structure was in its gas state.Otherwise, all the substituents and functionalities of 3 are presented on the crystal structure.Hydrogen bonding was observed to stabilize 3 through the hydroxyl group at position 7 of the coumarin and the imine nitrogen, connected to the furan ring. UV-Vis absorption assays The chemosensing ability of 3 was investigated using UV-Vis spectral analysis in a water/acetonitrile (50/50) solvent system.3 showed a spectral pattern in its unbound state with two absorption bands at 275 nm and 320 nm.We further explored the selectivity of 3 towards various metal ions that are mostly found in the water by mixing a solution of 3 with 0.5 molar equivalence of metal ions solutions of Ag + , Na + , Al 3+ , Ca 2+ , Ba 2+ , Fe 2+ , Fe 3+ , Cr 3+ , Hg 2+ , Cu 2+ , Co 2+ , Cd 2+ , Zn 2+ , Li + , Pb 2+ and Ni 2+ at room temperature.Interestingly, Cu 2+ , Fe 3+ and Hg 2+ showed distinct spectral changes where a new absorption band formed at 300 nm upon adding any of these metal ions. The formation of the new peak indicates that 3 forms complexes with these metal ions.None of the other tested cations presented a notable change in the absorption spectra (Figure 2).A systematic approach was carried out to determine the effect of competing cations on 3-Cu 2+ , 3-Hg 2+ and 3-Fe 3+ complexes (Figure 3).This was accomplished by adding 4.5 molar equivalence of metal ions to the solution of 3 before adding the same amount of Fe 3+ to the recognition sensing system.The formation of the 3-Fe 3+ complex was not affected by the addition of competing metal ions compared to the 3-Cu 2+ and the 3-Hg 2+ complexes.This means that 3 is selective to Fe 3+ in the presence of other metal ions, forming a more stable © AUTHOR(S) complex with Fe 3+ than any other competing cations.The chemosensor properties of 3 were further explored by examining its absorption spectra in the presence of different concentrations of Fe 3+ .The gradual addition of Fe 3+ resulted in a hyperchromic shift of the new absorption band at 300 nm.This further indicates that 3 forms a complex with Fe 3+ .The complex's saturation point was attained after adding three molar equivalences of Fe 3+ .Exactly two isosbestic points, at 280 nm and 325 nm, were observed in the absorption spectra of 3 with the addition of Fe 3+ (Figure 4).This proves that there exist various stable complexes of 3-Fe 3+ . 
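As a rough illustration of the isosbestic-point analysis above, the sketch below locates wavelengths at which a stack of titration spectra nearly coincide. The data in the example are synthetic and the tolerance is an arbitrary choice, so this is only a sketch of the idea, not the authors' processing.

```python
# Sketch: find isosbestic points as wavelengths where absorbance stays (nearly)
# constant while the Fe3+ concentration changes during the titration.
import numpy as np

def isosbestic_points(wavelengths, spectra, tol=1e-3):
    spectra = np.asarray(spectra)            # shape: (n_titration_steps, n_wavelengths)
    spread = spectra.max(axis=0) - spectra.min(axis=0)
    return wavelengths[np.where(spread < tol)[0]]

# Synthetic example: two bands interconverting at constant total concentration
# produce an exact isosbestic point where the band shapes cross.
wl = np.linspace(250, 400, 301)
band_free = np.exp(-((wl - 320) / 20) ** 2)        # free chemosensor band (placeholder)
band_complex = np.exp(-((wl - 300) / 20) ** 2)     # complex band (placeholder)
spectra = [x * band_complex + (1 - x) * band_free for x in np.linspace(0, 1, 6)]
print(isosbestic_points(wl, spectra, tol=1e-2))
```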
Emission spectroscopy
The metal cation chemosensing ability of 3 was investigated using emission spectral analysis in the water-acetonitrile solvent system. The selectivity of 3 towards metal ions was initially investigated by mixing it with metal ion solutions of Ag+, Na+, Ca2+, Ba2+, Fe2+, Fe3+, Cr3+, Hg2+, Cu2+, Co2+, Cd2+, Zn2+, Li+, Pb2+ and Ni2+ at room temperature. There were no notable changes in the spectra of 3 upon addition of the metals except for Fe3+ (Figure 5). The interaction of Fe3+ with 3 quenched the emission of 3, which indicates binding. Competition experiments were conducted to assess the stability of the 3-Fe3+ complex in the presence of other metal ions using fluorescence analysis.

Figure 5. Emission spectra of chemosensor 3 (6.12 × 10^-4 M) in the presence of an aliquot (8.33 × 10^-4 M) of different metal ions. The experiments were conducted in water at an excitation wavelength of 320 nm.

To achieve this, 1.0 molar equivalence of each metal ion was added to the solution of 3 before adding one equivalence of Fe3+ and monitoring the resultant fluorescence emission at 450 nm. In the presence of Ag+, the quenching caused by Fe3+ could not be observed, whereas it was observed in the presence of the other metal ions (Figure 6). This shows that 3-Ag+ is a non-reversible complex, and thus the 3-Fe3+ complex cannot form in its presence. Nevertheless, 3 could still be used as a fluorometric chemosensor for Fe3+ and for monitoring the heavy metal effects of Fe3+ ions 48. The linearity in Fe3+ ion concentration was found in the 0-25 µmol/L range with a correlation coefficient of R^2 = 0.99. A Job's plot of 3 with Fe3+ was constructed to establish the stoichiometry of the 3-Fe3+ complex using the continuous variation method 49. The graph of emission intensity versus the molar fraction of Fe3+ at 450 nm shows that the trend lines fitted to the linear sections intersect exactly at a 0.5 molar fraction (Figure 8a), which indicates a 1:1 stoichiometry of the 3-Fe3+ complexation. A plot of Io/I against the molar concentration of Fe3+ is presented in Figure 8b, where I and Io are the emission intensities in the presence of different concentrations of Fe3+ and in the absence of Fe3+, respectively. A linear Stern-Volmer calibration plot (Eq. 1) with a correlation coefficient of 0.99 confirmed a static quenching mode due to the formation of the 3-Fe3+ complex. The association constant of 3.71 × 10^1 M^-1 and the detection limit of 2.2 µM were evaluated graphically from a similar equation 50,51.

To understand the complexation mode of 3-Fe3+, 1H NMR titration experiments using increasing amounts of Fe3+ were carried out. Stepwise addition of Fe3+ aliquots (3 µM) to a deuterated dimethyl sulfoxide (DMSO-d6) solution of 3 caused a decrease in the intensity of all proton signals of chemosensor 3 (Figure 9). This illustrates that Fe3+ interacts with 3, the paramagnetism of the ion causing the decrease in signal intensity.
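A minimal sketch of the Stern-Volmer and calibration analysis described above follows, assuming placeholder titration data: the slope of Io/I versus [Fe3+] is taken as the association constant, and the detection and quantification limits follow the common 3σ/slope and 10σ/slope criteria.

```python
# Sketch: Stern-Volmer fit and calibration-based LOD/LOQ from a fluorescence
# titration.  The concentration and intensity arrays are placeholders.
import numpy as np

fe_conc = np.array([0, 5, 10, 15, 20, 25]) * 1e-6                   # mol/L
intensity = np.array([1000.0, 984.0, 969.0, 951.0, 936.0, 918.0])   # a.u. at 450 nm

# Linear calibration (intensity vs concentration) for LOD / LOQ.
slope, intercept = np.polyfit(fe_conc, intensity, 1)
sigma = (intensity - (slope * fe_conc + intercept)).std(ddof=2)  # residual std. dev.
lod = 3 * sigma / abs(slope)     # limit of detection
loq = 10 * sigma / abs(slope)    # limit of quantification

# Stern-Volmer plot: Io/I = 1 + Ksv * [Fe3+]; the slope is read as the
# association constant of the 1:1 complex.
ksv = np.polyfit(fe_conc, intensity[0] / intensity, 1)[0]

print(f"Ksv = {ksv:.2e} M^-1, LOD = {lod*1e6:.2f} uM, LOQ = {loq*1e6:.2f} uM")
```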
Molecular modelling studies were used to confirm the binding sites.The unbounded 3 shows higher electron density around the hydroxyl at position-7 and the hydrazone at position-8, and low electron density at carbonyl at position-2 and the methyl group at position-4 on the coumarin ring (Figure 10A).The electron-dense region can then take up a Lewis acid Fe 3+ which can take up a lone pair of electrons, as illustrated in Figure 10B.The interaction between Fe 3+ and the electron-dense region of 3 interrupts the charge transfers around the coumarin ring.The absorption and emission spectra analysis, Job's plot study and 1 H NMR titration experiment data of 3 all support the complexation with Fe 3+ .It can be concluded that the complexation of Fe 3+ with 3 involves the lone pairs of electrons on the hydroxyl and primary amine functionalities via a stable six-membered ring. Table 1.Comparative study on the proposed method and the existing fluorogenic Conclusions In summary, a new coumarin-based fluorescent chemosensor 3 incorporating a hydrazone was successfully synthesized through multiple-step syntheses.The addition of Fe 3+ into acetonitrile/water solution of 3 resulted in the formation of a new absorption band at 300 nm and the quenching of the fluorescence.The competition studies of the 3-Fe 3+ complex showed that 3 could act as a highly sensitive fluorescent chemosensor for quantitative recognition of Fe 3+ ions.The Job's plot analysis showed a possible 1: 1 binding ratio for the 3-Fe 3+ complex.The selectivity is greatly based on the involvement of the lone pairs of electrons on the hydroxyl and primary amine functionalities via stable six-membered rings, as illustrated by molecular modelling. Experimental Section General.All the reagents and solvents used to prepare the chemosensor were purchased from Sigma Aldrich, Merck and utilized without any purification.All derivatives of 8-amino-7-hydroxy-4-methyl-2H-chromen-2-one, 1, were obtained according to the literature method 46 .The stock solution used for absorbance and fluorescence studies was obtained from pure solid samples of (E)- Measurements The crude products were purified using Column Chromatography technique with silica gel of particle size 0.050 -0.063 mm (70% ethyl acetate and 30% hexane).FT-IR spectra analysis to confirm compound functionalities was done using Opus software on a Perkin-Elmer FT-IR 180 spectrometer, and the 1 H NMR and 13 C NMR analyses were done on a Bruker Advance DPX 400 Spectrometer (400 MHz) in CDCl3 or DMSO-d6.NMR analyses were done at room temperature, and tetramethyl silane (TMS) was used as the internal reference.The Perkin Elmer Lambda 35 UV-Vis spectrometer and Perkin Elmer LS 45 spectrometer were used for recording UV-Vis and emission spectrum, respectively. Figure 3 . Figure 3. Absorption responses of 3 (1.53 × 10 -5 M) upon addition of 4.5 molar equivalence of various metal ions (green bar) and addition of Fe 3+ (mol eq) with other metal ions (mol eq) (orange bars).The experiments were performed in water, and the concentration of metal ion stock solutions was 0.01 M. Figure 6 . Figure 6.Fluorescence responses of 3 (6.12× 10 -4 M) upon the addition of various metal ions (8.33 × 10 -4 M aliquot) (blue bar) and upon addition of Fe 3+ (1.0 mol eq) with other metal ions (1.0 mol eq) (brown bars).The experiments were performed in water at an excitation wavelength of 320 nm, and the metal ion stock solutions had a concentration of 0.01 mol/L.
2023-07-27T15:05:11.063Z
2023-08-25T00:00:00.000
{ "year": 2023, "sha1": "bca01c9145027c95e5ac4ffe97ddd5ae0f547828", "oa_license": "CCBY", "oa_url": "https://www.arkat-usa.org/get-file/79766/", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "34d80810e071fd984cfe506065bf3aa1ba8c98b4", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
18061429
pes2o/s2orc
v3-fos-license
MRI Tractography of Corticospinal Tract and Arcuate Fasciculus in High-Grade Gliomas Performed by Constrained Spherical Deconvolution: Qualitative and Quantitative Analysis BACKGROUND AND PURPOSE: MR imaging tractography is increasingly used to perform noninvasive presurgical planning for brain gliomas. Recently, constrained spherical deconvolution tractography was shown to overcome several limitations of commonly used DTI tractography. The purpose of our study was to evaluate WM tract alterations of both the corticospinal tract and arcuate fasciculus in patients with high-grade gliomas, through qualitative and quantitative analysis of probabilistic constrained spherical deconvolution tractography, to perform reliable presurgical planning. MATERIALS AND METHODS: Twenty patients with frontoparietal high-grade gliomas were recruited and evaluated by using a 3T MR imaging scanner with both morphologic and diffusion sequences (60 diffusion directions). We performed probabilistic constrained spherical deconvolution tractography and tract quantification following diffusion tensor parameters: fractional anisotropy; mean diffusivity; linear, planar, and spherical coefficients. RESULTS: In all patients, we obtained tractographic reconstructions of the medial and lateral portions of the corticospinal tract and arcuate fasciculus, both on the glioma-affected and nonaffected sides of the brain. The affected lateral corticospinal tract and the arcuate fasciculus showed decreased fractional anisotropy (z = 2.51, n = 20, P = .006; z = 2.52, n = 20, P = .006) and linear coefficient (z = 2.51, n = 20, P = .006; z = 2.52, n = 20, P = .006) along with increased spherical coefficient (z = −2.51, n = 20, P = .006; z = −2.52, n = 20, P = .006). Mean diffusivity values were increased only in the lateral corticospinal tract (z = −2.53, n = 20, P = .006). CONCLUSIONS: In this study, we demonstrated that probabilistic constrained spherical deconvolution can provide essential qualitative and quantitative information in presurgical planning, which was not otherwise achievable with DTI. These findings can have important implications for the surgical approach and postoperative outcome in patients with glioma. G liomas are the most common type of WM-involved invasive cerebral primary neoplasm in adults. These brain tumors represent approximately 80% of primary malignant brain tumors and almost 3% of all types of cancer, and patient prognosis is poor. In recent years, the use of noninvasive study techniques, such as cortical mapping and fMRI, has improved presurgical planning for brain neoplasms. However, these methods alone are considered inadequate to achieve the primary neurosurgical aim, obtaining the most radical tumor resection with the minimum of postoperative deficits, because they do not provide good anatomic representation of the spatial location of WM tracts affected by the tumor. 1 Tractography is the most common neuroimaging technique used to reveal WM structure by analysis of DWI signals dependent on anisotropic water diffusion. 2 From DWI gradient directions, it is possible to generate an anisotropic map showing WM bundles and their orientations; this information is adapted by tractographic algorithms to yield a 3D representation of WM tracts. DTI-based tractography is widely used for presurgical planning and is a powerful tool in the evaluation of major WM fiber bundles; it has also a positive impact on neurosurgical resection, disease prognosis, and preservation of brain function. 
3 Although widely used and histologically validated, 4 DTI approaches have several limitations, such as partial volume effects or lack of tensor estimation in voxels characterized by low fractional anisotropy (FA) values. 5 Recent tractographic algorithms, such as probabilistic constrained spherical deconvolution (CSD), have overcome these limitations. 6 The corticospinal tract (CST) and arcuate fasciculus (AF) are 2 of the WM pathways most commonly investigated by tractography because of their important roles in voluntary movement control and language, respectively. 7 Probabilistic CSD improves tractographic reconstruction of the lateral portion of the CST, corresponding to the somatotopic representation of hand, face, tongue, and voluntary swallow muscles, which is not detectable by DTI-based approaches. 8 In addition, this technique allows better evaluation of all AF components, including projections to the Geschwind area and other cortical regions, compared with other tractographic methods. 9 The main aim of this study was to evaluate WM tract alterations of both the CST and AF in patients with frontoparietal high-grade gliomas (HGGs), through a qualitative and quantitative analysis by using probabilistic CSD tractography, to obtain reliable presurgical planning. Participants In our study, we recruited 20 patients (9 women and 11 men; mean age, 47.4 Ϯ 14.2 years; age range, 20 -67 years), all affected by HGG, which involved mainly the lateral part of the frontoparietal lobes. After MR imaging evaluation, all patients underwent surgery, and all diagnoses were confirmed histologically. The study was approved by our institution review board, and written informed consent was obtained from all subjects. Data Acquisition and Preprocessing The study was performed with an Achieva 3T MR imaging scanner (Philips Healthcare, Best, the Netherlands) by using a 32channel coil. We performed the following sequences: • A T1-weighted fast-field echo 3D high-resolution sequence with TR, 8.1 ms; TE, 3.7 ms; flip angle, 8°; reconstruction matrix, 240 ϫ 240; voxel size, 1 mm 3 without an intersection gap; • A FLAIR volume isotropic turbo spin-echo acquisition sensitivity encoding 3D sequence with TR, 8000 ms; TE, 360 ms; TI, 2400 ms; reconstruction matrix, 240 ϫ 240; voxel size, 1 mm 3 without an intersection gap; • Diffusion-weighted MR imaging acquired with a dual-phase encoded pulsed gradient spin-echo sequence with TR, 15,120 ms; TE, 54 ms; scan matrix, 160 ϫ 160; section thickness, 2 mm without an intersection gap; 60 gradient directions; b-value, 1000 s/mm 2 . We corrected the diffusion-weighted dataset for eddy current distortions and motion artifacts and adjusted the diffusion gradients with proper rotation of the b-matrix. 10 Data Processing and Fiber Tracking To avoid possible coregistration errors between morphologic and diffusion images caused by HGGs, 8 we did not normalize patient data to a common template space as usual. Seed ROIs were also selected directly in the native space of each subject. Fiber tracking was obtained with a probabilistic CSD algorithm (MRtrix software package; http://software.incf.org/software/mrtrix/mrtrixpackage) 9 with manually selected ROIs. Fiber tracking was stopped when 10,000 tracts reached the selected ROI or when 100,000 total tracts were generated. The 3D segmentation of each HGG and the 3D visualization of tracts were performed by using 3D Slicer software (www.slicer.org) 11 with T1-weighted images set as an overlay. 
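As a sketch of the b-matrix adjustment mentioned in the preprocessing step, the snippet below applies the rotational part of each diffusion volume's registration transform to its gradient direction. The source of the per-volume affine transforms is left as a placeholder, and the SVD-based polar decomposition is a simplification rather than the authors' exact procedure.

```python
# Sketch: after motion/eddy-current correction, rotate each gradient direction
# by the rotational part of that volume's registration transform so the
# b-matrix stays consistent with the realigned data.
import numpy as np

def rotate_bvecs(bvecs, affines):
    """bvecs: (N, 3) unit gradient directions; affines: (N, 4, 4) transforms."""
    bvecs = np.asarray(bvecs, dtype=float)
    rotated = np.empty_like(bvecs)
    for i, (g, A) in enumerate(zip(bvecs, affines)):
        R = np.asarray(A)[:3, :3]
        u, _, vt = np.linalg.svd(R)      # strip scaling/shear, keep pure rotation
        rotated[i] = (u @ vt) @ g
        norm = np.linalg.norm(rotated[i])
        if norm > 0:
            rotated[i] /= norm
    return rotated
```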
Qualitative Analysis Qualitative analysis was performed by a radiologist with 20 years of experience who evaluated reconstructed tracts on superimposed 3D T1-weighted images. We verified the anatomic course of each fiber bundle section by section, in axial, coronal, and sagittal planes, comparing the neoplasm-affected side with the healthy one. During this step, the reader could detect any macroscopic WM tract dislocation or disruption in the affected side. Moreover, neoplasm 3D segmentation was used to depict the spatial relationship between tracts and gliomas. Quantitative and Statistical Analysis Quantitative analysis of reconstructed tracts, both in healthy and affected hemispheres, was performed by using the output data of MRtrix. Diffusion tensors were sampled along fiber tracts, and the mean values of FA, mean diffusivity (MD), linear coefficient (Cl), planar coefficient (Cp), and spherical coefficient (Cs) were considered for each bundle. All parameters were calculated with in-house scripts built with the Matlab software package (MathWorks, Natick, Massachusetts), based on the eigenvalues obtained from MRtrix output data. As previously described, [12][13][14] we used a combined approach of diffusion tensor parameters and probabilistic CSD tractography to overcome the well-known limitations of DTI in voxels containing crossing fibers and to increase the sensitivity of diffusion tensor metric changes in these regions. Statistical analysis was performed by the nonparametric Wilcoxon signed rank test for paired measures of bundles in the healthy-versus-affected sides. Furthermore, Šidák correction was performed to account for multiple comparisons. The significance threshold was set to an ␣ value of .05, resulting in an effective threshold of .0102 after correction. RESULTS All recruited patients were symptomatic with different degrees of motor and/or speech impairment, from mild to severe. All patients had both CST and AF involved by HGGs, which were located in the lateral frontoparietal lobes. There was a prevalence of left-sided involvement (11 left side, 9 right side). After surgery, the histologic analysis of resected neoplasms confirmed each lesion as III or IV World Health Organization grade: grade III (n ϭ 15) and grade IV (n ϭ 5). We were able to obtain tractographic reconstructions of the medial portion of the CST and the lateral portion of the CST and AF, both in the healthy and neoplasm-affected sides in all 20 patients. Qualitative Analysis Qualitative analysis revealed the anatomic course of each reconstructed fiber bundle, morphologic alterations of WM tracts on affected sides, and the spatial relationship between those tracts and the HGG. Probabilistic CSD tractography of the CST yielded streamlines of both the medial and lateral portions of the CST on each side of the brain. Qualitative analysis for the healthy side showed a reliable representation of all components of the Pensfield homunculus ( Fig 1A). In the neoplasm-affected side, all evaluated tracts showed a strict spatial relationship with HGGs. In 6/20 patients, the lateral portion of the CST was disrupted by the neoplasm, with consequent poor representation of streamlines (Fig 1B-D). In the other 14 patients, the lateral portion of the CST passed through the neoplasm and was, therefore, less compromised than expected on the basis of morphologic MR imaging evaluation of the neoplasms ( Fig 1E). In all patients, the medial portion of the CST was not affected by the neoplasm (Fig 1F). 
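A compact sketch of the tract-wise metrics and paired statistics described in the quantitative analysis is given below. The Westin shape coefficients are written in one common eigenvalue-sum-normalised form, which may differ from the convention of the in-house Matlab scripts, and the Šidák-corrected threshold assumes the five diffusion parameters compared here.

```python
# Sketch: FA, MD and the Westin shape coefficients from tensor eigenvalues
# sampled along a tract, plus the paired Wilcoxon test with Sidak correction.
import numpy as np
from scipy.stats import wilcoxon

def tensor_metrics(evals):
    """evals: (N, 3) eigenvalues sampled along a tract (any order)."""
    l1, l2, l3 = np.sort(evals, axis=1)[:, ::-1].T   # descending lambda1..3
    md = (l1 + l2 + l3) / 3.0
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    s = l1 + l2 + l3
    cl = (l1 - l2) / s          # linear coefficient
    cp = 2 * (l2 - l3) / s      # planar coefficient
    cs = 3 * l3 / s             # spherical coefficient
    return {name: vals.mean() for name, vals in
            dict(FA=fa, MD=md, Cl=cl, Cp=cp, Cs=cs).items()}

# Paired healthy-vs-affected comparison (one value per patient) with the
# Sidak-corrected threshold for five comparisons (~0.0102).
alpha_sidak = 1 - (1 - 0.05) ** (1 / 5)

def compare_sides(healthy, affected):
    stat, p = wilcoxon(healthy, affected)
    return stat, p, p < alpha_sidak
```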
Probabilistic CSD tractographic reconstructions also allowed reliable recognition of the anatomic course for all AF segments, such as Broca, Wernicke, and Geschwind projections (Fig 2) on both sides. Qualitative analysis showed that the AF was involved by the neoplasm on the affected side of all subjects (Fig 2D-F), with different degrees of morphologic impairment. Quantitative Analysis Quantitative analysis showed that FA was significantly decreased in the lateral portion of the CST (z ϭ 2.51, n ϭ 20, P ϭ .006) and in the AF (z ϭ 2.52, n ϭ 20, P ϭ .006) of the side affected by the neoplasm; furthermore, no significant difference was found for FA (P Ͼ .0102) in the medial portion of the CST. Although MD was increased in all affected tracts, only the lateral portion of the CST reached significance (z ϭ Ϫ2.53, n ϭ 20, P ϭ .006). Cl was significantly decreased in the affected bundles for the lateral portion of the CST (z ϭ 2.51, n ϭ 20, P ϭ .006) and the AF (z ϭ 2.52, n ϭ 20, P ϭ .006). Cs was significantly increased for the affected lateral CST (z ϭ Ϫ2.51, n ϭ 20, P ϭ .006) and for the AF (z ϭ Ϫ2.52, n ϭ 20, P ϭ .006). No differences were observed when measuring Cp. Quantitative results are summarized in DISCUSSION In this study we evaluated 20 patients with HGGs centered in the lateral frontoparietal lobes. Each patient underwent glioma surgical resection after presurgical evaluation of the main WM fiber bundles involved by MR imaging tractography. Because the main goal of the surgical approach was to remove neoplasms while preserving brain function, accurate presurgical planning was essential to granting the best possible quality of life after surgery. Presurgical tractographic analysis, performed by a probabilistic CSD-based tractographic approach, revealed involvement of the CST and AF in each patient. These 2 WM fiber bundles have clinically relevant roles in voluntary motion control and speech and language, respectively. Common DTI approaches cannot depict the entire motor tract, allowing only reconstruction of the Coronal T1-weighted MR image (A) shows the corticospinal tract (arrow) on the healthy side with overlay of the Pensfield motor homunculus (arrowheads indicate neoplasm margins) and the neoplasm volume segmentation (B) (blue volume). Coronal (C) and sagittal (D) rotated images show tractographic reconstruction of CSTs on both healthy (arrow) and neoplasm-affected sides (empty arrow) of the brain. In this patient, the lateral portion of the CST on the affected side is poorly represented due to neoplasm disruption. E, Coronal T1-weighted MR image shows CSTs, both on healthy (arrow) and neoplasm-affected sides (empty arrow) of another patient (neoplasm segmentation is depicted by red volume). In this case, on the affected side, the lateral portion of the CST passes through the neoplasm and is poorly involved. F, Coronal T1-weighted MR image shows the medial portion of CSTs on the healthy side (arrow) and the absence of their involvement in the neoplasm-affected one (empty arrow). N indicates neoplasm. medial portion of the CST, corresponding to a somatotopic depiction of the lower limbs, trunk, and upper limbs without the hand. 
Conversely, these techniques could not represent the lateral portion of the CST, which corresponds to a somatotopic depiction of the hand, face, tongue, and voluntary swallow muscles, 15 due to the inherent limitations of DTI approaches to resolve complex fiber configurations, 6 which were estimated to represent approximately 90% of WM voxels of the entire brain. 16 By overcoming these limitations, probabilistic CSD-based tractography allowed a reconstruction of the entire CST (medial and lateral parts), 8 with a marked match between tracts and all somatotopic parts of the Penfield motor homunculus. It also demonstrated that brain neoplasms can involve different WM modifications, resulting in DTI parameter changes and alterations of the average WM fiber bundle morphology represented with tractography. In particular, Witwer et al 17 described deviated, infiltrated, edema-tous, and disrupted WM patterns, depending on tumor type and location. Different DTI-based studies reported that high-grade gliomas mainly cause complete tract disruption, 18 whereas low-grade gliomas infiltrate tracts along myelinated fibers. 19 All these patterns of tumors cause DTI parameter changes, in particular FA decrease, in involved regions. 20 Furthermore, the presence of voxels with FA values lower than the DTI threshold (commonly set to 0.2) causes an interruption of reconstructed WM tracts, 21 and the use of an FA threshold lower than 0.2 results in poor accuracy of major eigenvector-direction estimation. 1 These limitations can produce 3 negative effects on tractographic presurgical planning, particularly in CST evaluation. First, the reconstructed CST tract could show a false interruption, for example, in the lateral portion of the CST. 20 Second, DTI findings could suggest a false safe resection margin around the lesion. 8 Last, the lack of reconstruction of the lateral portion of the CST fails to provide any qualitative or quantitative information about this part of the tract and its relationship to neoplasms. These combined effects make presurgical planning inaccurate and may contribute to unexpected postoperative functional deficits. Use of the CSD technique allowed reconstruction of fibers in voxels with low anisotropy and characterization of voxels in tumoral and peritumoral areas 1 and voxels with complex axonal spatial configurations. In addition, it was demonstrated that edema-affected and infiltrated tracts could reduce their anisotropy, while preserving sufficient directional information for tractographic depiction. 1 In our study, we could reconstruct the medial and lateral portions of the CST of 20 patients in both the healthy and neoplasmaffected sides of the brain by using probabilistic CSD tractography. This approach allowed reliable reconstruction of these pathways with an accurate representation of the entire Penfield motor homunculus on the healthy side, avoiding well-known reconstruction problems from crossing fibers. Moreover, on the involved side, we detected differing degrees of involvement for the lateral portion of the CST, from deviation to disruption, with no alterations of the medial portion. AF depiction in presurgical planning is clinically relevant to avoid aphasic syndromes induced by surgical lesions or stroke. 21,22 This WM bundle connects to Broca, Wernicke, and Geschwind areas and other brain regions involved in language and speech. 
23 Because brain stimulation techniques are less effective for WM tracts than for GM, evaluating the morphologic localization of the AF is essential for appropriate presurgical planning. [24][25][26][27] DTI-based tractographic studies revealed incomplete AF reconstructions, in particular of the anterior portion. 24 The evaluation of this AF segment is considered an important predictor of postoperative outcome because a lesion in the anterior portion of AF could cause negative effects on speech fluency. 28 DTI approaches have tractographic reconstruction limitations in the centrum semiovale, due to the presence of crossing fibers. 24 In addition, DTI tractography of AFs in patients with brain neoplasms resulted in incomplete correspondence with intracortical stimulation, suggesting that this technique is not optimal for mapping language areas. 29 Finally, the same negative effects of DTI techniques discussed above for CST are also relevant to AFs involved by gliomas during presurgical planning. Thus, the use of probabilistic CSD, which includes voxels not considered by DTI tractography, could provide advantages in the evaluation of DTI metrics in aphasic syndromes as well. In this cohort of patients, we found that probabilistic CSDbased tractography allowed reliable representation of AFs both in glioma-involved and noninvolved sides of the brain, even in arduous regions for conventional DTI tractography. Qualitative analysis showed that all affected AFs were dislocated, infiltrated, or disrupted by the neoplasm. From probabilistic CSD tractography, we extrapolated a quantitative analysis based on the main DTI parameters (FA, MD, Cp, Cl, and Cs). FA and MD are well-known values measuring axonal integrity and anisotropic water diffusivity, respectively. Cl measures the intravoxel uniformity of tract direction and fiber tract organization, Cs estimates the intravoxel diffusivity, 30 and Cp estimates voxels in which there are crossing or kissing fibers. 12 No statistically significant changes were found for the medial CST both in healthy and affected sides of the brain, suggesting that in our group of patients, these bundles are not involved with HGGs. In all patients, we found statistically significant differences in both the lateral CST and AF between the involved and noninvolved sides. In particular, we detected decreased FA and Cl and increased Cs in the involved side. MD was increased in all affected tracts, reaching significance only in the lateral portion of CST, probably due to the low number of subjects. The significant FA decrease could reflect a remarkable change in WM microstructure, induced by tract deviation, infiltration, or disruption or influenced by edema. 30 MD increase could be associated with loss of WM integrity with a consequent increase in free tissue water. 31 Cl and Cs are 2 known shape-oriented anisotropy measures, indexes of anisotropic and isotropic diffusion changes, respectively. 30 In our study, neoplasms changed the intravoxel uniformity and diffusivity of affected tracts, causing Cl decrease and Cs increase. These parameters, combined with the use of FA and MD, could provide a powerful quantitative estimation of WM tracts involved by HGGs. Finally, we evaluated Cp for all tracts, which is another shape-oriented parameter reflecting the intravoxel presence of crossing fibers. 30 In our study, the Cp value was not significantly different between the healthy and affected side. 
This result could be due to a lack of sensitivity in cases of Ͼ2 crossing fibers inside the same voxel. 12 The major limitation of this study is the relatively small patient cohort, which might influence the statistical power of our findings. In addition, it was not possible to perform follow-up MR imaging tractography after surgical neoplasm resections. This lack of MR imaging follow-up prohibited us from providing an evaluation of the postoperative outcome after CSD-based presurgery planning. CONCLUSIONS The results presented here demonstrated that probabilistic CSD tractography provides useful qualitative and quantitative analysis in presurgical planning for HGGs. Our qualitative analysis showed that probabilistic CSD allowed reliable reconstruction of tracts not detected with other DTI techniques, such as those involved by neoplasms or with complex fiber configurations. We also demonstrated that quantitative analysis based on CSD tractography can characterize the involvement of the tracts by the neoplasms, overcoming the well-known quantitative underestimation related to DTI reconstruction. Furthermore, because postoperative quantitative measurements are also important for the prediction of brain-function recovery, 32 further studies performed with probabilistic CSD could provide noteworthy results on surgical-outcome evaluation.
2016-10-11T02:19:10.865Z
2015-10-01T00:00:00.000
{ "year": 2015, "sha1": "cfadf0220b7d974bb613a133cae496ffaacd6691", "oa_license": "CCBY", "oa_url": "http://www.ajnr.org/content/ajnr/36/10/1853.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "a6e7c3d419deea4a3f875abe87fed5d3bd45851c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
117130786
pes2o/s2orc
v3-fos-license
Nonlinear magnetotransport in dual spin valves Recent experimental measurements of magnetoresistance in dual spin valves [A. Aziz et al., Phys. Rev. Lett. 103, 237203 (2009)] reveal some nonlinear features of transport, which have not been observed in other systems. We propose a phenomenological model describing current-dependent resistance (and giant magnetoresistance) in double spin valves. The model is based on a modified Valet-Fert approach, and takes into account the dependence of bulk/interface resistance and bulk/interface spin asymmetry parameters for the central magnetic layer on spin accumulation, and consequently on charge current. Such a nonlinear model accounts for recent experimental observations. I. INTRODUCTION Spin accumulation (spin splitting of the electrochemical potential) is a nonequilibrium phenomenon which is associated with a spatially nonuniform spin asymmetry of two spin channels for electronic transport [1][2][3] . In the simplest case, it appears at the interface between ferromagnetic and non-magnetic metals, when current has a nonzero component perpendicular to the interface 4 . Spin accumulation also appears in more complex systems, like single or double spin valves exhibiting current-perpendicular-to-plane giant magnetoresistance (CPP-GMR) 5,6 effect, as well as in single or double tunnel junctions. Current-induced spin accumulation is particularly pronounced in spin-polarized transport through nanoparticles 7,8 or quantum dots and molecules 9 . In the case of spin valves based on layered magnetic structures, spin accumulation and GMR are usually accounted for in terms of the Valet-Fert description 4,10 , in which the spin accumulation is linear in current, while resistance and magnetoresistance are independent of current magnitude and current orientation. The description involves a number of phenomenological parameters which usually are taken from CPP-GMR experimental data. Originally, it was formulated for collinear (parallel and antiparallel) magnetic configurations, but later was extended to describe also current induced spin torque 11 and CPP-GMR for arbitrary noncollinear geometry 12 . The Valet-Fert description was successfully applied not only to single spin valves, but also to double (dual) spin valves 13 , F L /N L /F C /N R /F R , where F C is a magnetically free layer separated from two magnetically fixed outer layers (F L and F R ) by nonmagnetic spacers (N L and N R ). An important feature of such structures is an enhanced spin accumulation in the central layer (F C ) for antiparallel magnetizations of the outer magnetic layers (see Fig.1). Spin accumulation may be then several times larger than in the corresponding single spin valves. Accordingly, such a magnetic configuration of dual spin valves (DSVs) diminishes the critical current needed to switch magnetic moment of the central layer, and also enhances the current-induced spin dynamics 13,14 . Another interesting consequence of the enhanced spin accumulation in the central layer of a dual spin valve is the possibility of nonlinear transport effects. Recent experimental results 15 indicate that the enhanced spin accumulation may cause unusual dependence of magnetoresistance on dc current. 
It has been shown that when magnetizations of the outer layers are antiparallel, resistance of a DSV for one current orientation is lower when the F C layer is magnetized along the F R one and higher when it is aligned along magnetization of the F L layer, while for the opposite current orientation the situation is reversed. Moreover, the difference in resistance of both collinear configurations markedly depends on the applied current. These observations strongly differ from the predictions of the Valet-Fert model 4 , which gives resistance (and magnetoresistance) independent of the current density. Such a nonlinear behavior may originate from several reasons. The Valet-Fert description is based on the assumption of constant (independent of spin accumulation and current) basic parameters of the model, like bulk/interface resistance, bulk/interface spin asymmetry, spin diffusion lengths, etc. This is justified when spin accumulation is small and/or change in the density of states on the energy scale comparable to spin accumulation is negligible in the vicinity of the Fermi level. Density of states can be then considered constant, i.e. independent of energy. Since the density of states determines electron scattering rates, one may safely assume that the transport parameters mentioned above are also constant. However, when the density of states at the Fermi level varies remarkably with energy and spin accumulation is sufficiently large, this assumption may not be valid, and the parameters mentioned above may depend on spin accumulation 15 . This, in turn, may lead to nonlinear effects, like the experimental ones described above 15 . The spin accumulation, however, is rather small -of the order of 0.1 meV for current density of 10 8 A/cm 2 . Thus, to account for the experimental observations one would need rather large gradient of the density of states with respect to energy at the Fermi level. More specifically, to account the experimental observations, the change in density of states should be of the order of 10% on the energy scale of 1 meV. Although this is physically possible, one cannot exclude other contributions to the effect. Spin accumulation can directly change effective scattering potential for electrons at the Fermi level. Moreover, spin accumulation can also indirectly influence transport parameters, for instance via current-induced shift of the energy bands due to charging of the layers or due to electron correlations, which are neglected in the description of the spin accumulation. Since the experimental results show that the nonlinear effects appear only in the antiparallel configuration, where spin accumulation in the central layer is large, we assume that the indirect contributions are proportional to spin accumulation (at least in the first order). Since, it is not clear which contribution is dominant, we present a phenomenological approach, which effectively includes all contributions to the observed nonlinear transport. We assume that bulk and interfacial resistances as well as spin asymmetries vary with spin accumulation and show that such variation leads to effects comparable to experimental observations 15 . Structure of this paper is as follows. In section II we describe the model. Numerical results are presented in section III for bulk and interfacial contributions. Section IV deals shortly with magnetization dynamics in DSV. Finally, we conclude in section V. II. 
MODEL Electron scattering rate and its spin asymmetry become modified when the spin-dependent Fermi levels are shifted (eg. due to spin accumulation). All the effects leading to this modification can be included in the description of charge and spin transport presented in Refs. 11 and 12, which generalize the Valet-Fert model to noncollinear magnetic configurations. We analyze the situation when the effect originates from the bulk resistivity and bulk spin asymmetry factor β of the F C layer, which are assumed to depend on spin accumulation, as well as from similar dependence of the corresponding interface parameters. Let us begin with the bulk parameters. The spin-dependent bulk resistivity of a magnetic layer is conveniently written in the form 4 where ρ * is determined by the overall bulk resistivity ρ F as ρ * = ρ F /(1 − β 2 ). When the spin accumulation is sufficiently large, one should take into account the corresponding variation of ρ * . In the lowest approximation (linear in the spin accumulation) one can write where ρ * 0 is the corresponding equilibrium (zero-current limit) value, and g(x) is spin accumulation in the central layer, which varies with the distance from layer's interfaces. To disregard this dependence, we average the spin accumulation over the layer thickness g = (1/d) FC g(x)dx. In Eq.(2) q is a phenomenological parameter, which depends on the relevant band structure. This parameter effectively includes all effects leading to the modification of transport parameters. This equation can be rewritten as whereg is a dimensionless variable related to spin accumulation,g = (e 2 j 0 ρ * 0 l sf ) −1 g, with j 0 denoting the particle current density and l sf being the spin-flip length. We also introduced the dimensionless current density i = I/I 0 , with I = ej 0 denoting the charge current density and I 0 being a current density scale typical for metallic spin valves (I 0 = 10 8 A/cm 2 ). The parameterq in Eq. (3),q = (eI 0 l sf )q, is a dimensionless phenomenological parameter which is independent of current. The bulk spin asymmetry parameter β becomes modified by spin accumulation as well, and this modification can be written in a form similar to that in the case of ρ * , i.e. where β 0 is the corresponding equilibrium value and ξ effectively includes all the contributions. When introducing the dimensionless spin accumulation defined above, one can rewrite Eq. (4) as whereξ = (eI 0 ρ * 0 l sf )ξ. Similar equations can be written for the interfacial resistance R * and interfacial asymmetry parameter γ, which define spin-dependent interface resistance as Analogously, we can write the dependence of R * and γ on spin accumulation in the form where g(x i ) is spin accumulation at a given interface. The constants R * 0 and γ 0 are equilibrium interfacial resistance and asymmetry parameter, respectively. Relations (7) lead to the following dependence of the interfacial parameters on the current density: The parameters q, ξ, q ′ , and ξ ′ introduced above describe deviation from usual behavior of the resistance (magnetoresistance) described by the Valet-Fert model. These parameters will be considered as independent phenomenological ones. III. NUMERICAL RESULTS To calculate resistance and spin accumulation for arbitrary noncollinear magnetic configuration, we apply the formalism described in Refs. 11 and 12. This formalism, however, is modified by assuming ρ * , β, R * and γ to depend on current density (spin accumulation). 
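To make the parameter dependence of the model concrete, the following minimal sketch evaluates the out-of-equilibrium bulk and interfacial parameters from the dimensionless spin accumulation and reduced current, along the lines of Eqs. (3), (5) and (8). The sign convention for the spin-channel resistivities follows the usual Valet-Fert form and should be checked against the definitions above; all names are illustrative.

```python
# Sketch: current-dependent transport parameters in dimensionless form.
# rho* and beta (bulk) and R*, gamma (interface) are shifted linearly by the
# spin accumulation, scaled by the reduced current i = I / I0.

def bulk_out_of_equilibrium(rho_star0, beta0, q, xi, i, g_avg):
    """g_avg: dimensionless spin accumulation averaged over the central layer."""
    rho_star = rho_star0 * (1.0 + i * q * g_avg)     # cf. Eq. (3)
    beta = beta0 + i * xi * g_avg                    # cf. Eq. (5)
    rho_up = 2.0 * rho_star * (1.0 - beta)           # spin-majority channel
    rho_down = 2.0 * rho_star * (1.0 + beta)         # spin-minority channel
    return rho_star, beta, rho_up, rho_down

def interface_out_of_equilibrium(R_star0, gamma0, qp, xip, i, g_interface):
    """Same linear shift for the interfacial resistance and asymmetry, cf. Eq. (8)."""
    R_star = R_star0 * (1.0 + i * qp * g_interface)
    gamma = gamma0 + i * xip * g_interface
    return R_star, gamma
```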
Therefore, for a particular magnetic configuration and for certain values of i,q,ξ,q ′ , andξ ′ , the spin accumulation has to be calculated together with ρ * , R * , β, and γ in a selfconsistent way. In the first step, we assume equilibrium values; ρ * = ρ * 0 and β = β 0 (R * = R * 0 and γ = γ 0 ), and calculate the corresponding spin accumulation g 0 (x) in the central magnetic layer. Then, we calculate the zero approximation of the out-of-equilibrium parameters according to Eqs. (3), (5), and/or (8). With these new values for ρ * and β (R * and γ) we calculate the out-ofequilibrium spin accumulation in the central layer and new out-of-equilibrium values of ρ * and β (R * and γ). The iteration process is continued until a stable point is reached. Finally, for the obtained values of ρ * , β, R * , γ, and spin accumulation, we calculate the resistance R of the DSV at a given magnetic configuration (see Ref. 12). In all our calculations, magnetizations of the outermost layers are assumed to be fixed and antiparallel (like in experiment 15 ). Current is defined as positive for electrons flowing from F R towards F L . The equilibrium parameters have been taken from the relevant literature (see Appendix). In this section we apply the above described model to two examples of DSV structures. The first one is a symmetric DSV with F L = F R = Co(20nm), F C = Py(8nm), and with the magnetic layers separated by 10nm thick Cu spacers. The second considered structure is an asymmetric exchange-biased DSV similar to that used in experiment 15 , namely Cu-Co(6)/Cu(4)/Py(2)/Cu(2)/Co(6)/IrMn(10)-Cu, where the numbers in brackets correspond to layer thicknesses in nanometers. A. Bulk effects In this subsection we consider pure bulk effects assumingq ′ = 0 andξ ′ = 0. We start from a symmetric DSV, and the corresponding numerical results are shown in Fig. 2. First, we analyze the case withq = 0.1 andξ = 0. Figure 2(a) shows how ρ * varies when magnetization of the central layer is rotated in the layer plane. This rotation is described by the angle θ between magnetizations of the F L and F C layers. The higher the current density, the more pronounced is the deviation of ρ * from its equilibrium value ρ * 0 . The current-induced change in ρ * 0 reaches maxima when magnetic moment of the central layer is collinear with those of the outer layers. These maxima are different for the two opposite orientations of the magnetic moment of F C layer, as the corresponding spin accumulations are different. For θ = π/2, however, one finds ρ * = ρ * 0 . This is because spin accumulation vanishes then due to opposite contributions of both interfaces (for symmetric DSVs). Variation of ρ * in Fig. 2(a) is shown only for positive current, i > 0. When current is negative, the change in ρ * due to spin accumulation changes sign (not shown), as also can be concluded from Fig. 2(c). The current-induced angular dependence of ρ * makes the resistance of the DSV dependent on the current density. As shown in Fig. 2(c), the angular dependence of the resistance, becomes asymmetric, i.e. its magnitudes in the opposite collinear states (θ = 0 and π) are different. Such an asymmetric angular dependence qualitatively differs from that obtained from the Valet-Fert description, where the resistance is symmetric. When magnetization of the central layer switches (e.g. due to an applied magnetic field) from one collinear state to the opposite one, one finds a drop (positive or negative) in the resistance, defined as ∆R = R(θ = π) − R(θ = 0). 
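The self-consistent procedure just described can be sketched as a simple fixed-point iteration. The function solve_spin_accumulation below is only a placeholder for the noncollinear Valet-Fert solver of Refs. 11 and 12, and the convergence tolerances are illustrative.

```python
# Sketch of the self-consistent loop: start from equilibrium parameters,
# compute the spin accumulation, update rho* and beta from it, and repeat
# until the parameters stop changing.
def self_consistent_parameters(rho_star0, beta0, q, xi, i, theta,
                               solve_spin_accumulation, tol=1e-8, max_iter=100):
    rho_star, beta = rho_star0, beta0
    g_avg = 0.0
    for _ in range(max_iter):
        # Placeholder for the Valet-Fert / noncollinear solver of Refs. 11, 12.
        g_avg = solve_spin_accumulation(rho_star, beta, i, theta)
        new_rho_star = rho_star0 * (1.0 + i * q * g_avg)
        new_beta = beta0 + i * xi * g_avg
        if (abs(new_rho_star - rho_star) < tol * rho_star0
                and abs(new_beta - beta) < tol):
            return new_rho_star, new_beta, g_avg
        rho_star, beta = new_rho_star, new_beta
    return rho_star, beta, g_avg
```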
Moreover, when the current direction is reversed, the corresponding drop in resistance also changes sign, as shown in Fig 2(c). Let us consider now the situation where β changes with the spin accumulation (current), while ρ * is constant,ξ = 0.1 andq = 0. General behavior of β and of the corresponding resistance with the angle θ is similar to that discussed above (see Fig 2(b,d)), although the sign of the resistance drop for the current of a given orientation is now opposite to that obtained in the case discussed above, compare Figs 2(c) and (d). Generally, the sign of the drop in resistance may be controlled by the parametersξ andq. In real structures, however, both parameters,ξ and q, may be different from zero, and the observed behavior results from interplay of the bulk and interface effects discussed above. To show this, we consider now an asymmetric exchange-biased DSV structure, Cu-Co(6)/Cu(4)/Py(2)/Cu(2)/Co(6)/IrMn(10)-Cu, similar to that studied experimentally. the symmetric DSV structure, the difference in the devi-ations of both parameters from their equilibrium values for θ = π and θ = 0 is now much more pronounced. As before, the nonequilibrium values of the parameters cross the corresponding equilibrium ones for nearly perpendicular configuration, θ ≈ π/2. The resistance shown in Fig. 3(c) reveals well defined drop between both collinear configurations, and the drop changes sign when the current is reversed. However, the resistance drops are now different in their absolute magnitude due to the asymmetry of DSV. Figure 3(d) shows the resistance drops as a function of the current density for three different sets of parameters. For the parameters used in Fig. (3a-c), i.e. for q =ξ = 0.1 [line (1)], the absolute value of the drop increases rather linearly with increasing magnitude of current, although the growth of ∆R is faster for positive than for negative current. In the second case,q = 0.1 andξ = 10 −3 [line (2)], the dependence remains nearly the same, with only a small deviation from the first case. Finally, we reduced the parameterq,q = 10 −3 , whilẽ ξ = 0.1 [line (3)]. Now, the dependence strongly differs from the first two cases. ∆R only slightly varies with current and remains rather small. Such a behavior results from interplay of the bulk and interface contributions. This interplay is presented also in Fig. 3(e), where the resistance drop is shown as a function of iq and iξ. Additionally, the latter figure shows that for any value ofq there is a certain value ofξ for which ∆R = 0 (presented by the line). B. Interfacial effects Now we consider the nonlinear effects due to currentdependent interfacial parameters, as given by Eqs. 8. For both symmetric and asymmetric spin valves we assume that the parametersq ′ andξ ′ are equal for both interfaces of the central layer. Consider first a symmetric DSV. The corresponding results are summarized in Fig. 4. Variation of R * , when the central magnetization rotates in the layer plane, is shown in Fig. 4(a) forq ′ = 0.1 andξ ′ = 0. The curves below the equilibrium value R * 0 correspond to R * on the left interface, while these above R * 0 describe R * on the right interface. When the central magnetization is close to the collinear orientation (θ = 0,π), R * on the left and right interfaces are significantly different, and this difference becomes partly reduced when θ tends to θ = π/2 (for the systems considered). 
Generally, the higher current density, the more pronounced is the shift of R * on both interfaces from their equilibrium values. The corresponding angular dependence of the DSV resistance is shown in Fig. 4(c) for the current densities I/I 0 = ±3. This angular dependence results in small resistance drops of opposite signs for opposite currents. As shall be shown below, the small value of ∆R is due to a relatively large thickness of the central layer. Similar conclusions can also be drown in the case whenq ′ = 0 and only γ depends on spin accumulation, as shown in For the asymmetric exchange-biased DSVs, we assume that both R * and γ depend on spin accumulation. As shown in Fig. 5(a) forq ′ =ξ ′ = 0.1, there is a relatively large drop in resistance for the assumed parameters. This resistance drop ∆R increases rather linearly with the current density, as shown in Fig. 5(b). A small deviation from the linear behavior can be observed only for larger values of negative current. Calculations for different thicknesses of the central layer, d = 2, 8, and 16 nm, show that the slope of the curves representing the resistance drop as a function of the current density decreases as the thickness increases, see Fig. 5(b). In other words, the dependence of resistance on current becomes less pronounced when the central layer is thicker. We note, that such behavior was not observed in the case of the bulk contribution. This feature arises because for thicker magnetic layers the bulk resistivity dominates the pillar resistance and suppresses the current-induced effects due to interfaces. Additionally, the slope of the curves presenting the resistance drop as a function of current density depends on the parametersq ′ andξ ′ , and can change sign for appropriate values of these parameters. This is shown in Figs. 5(c) and (d), where one of the parameters, eitherξ ′ (c) orq ′ (d) has been reduced to 10 −3 . Sinceq ′ andξ ′ are of the same sign, their effects are opposite and the corresponding contributions may partly compensate each other. This is also shown in Fig. 5(e), where the resistance drop ∆R is shown as a function of iq ′ and iξ ′ . From this figure also follows that total compensation of the contributions to the resistance drop occurs for the points corresponding to the line in Fig. 5(e). IV. MAGNETIZATION DYNAMICS In the analysis presented above magnetization of the central layer was in the layer plane. However, when the magnetization switches between the two collinear orientations (due to applied magnetic field), it precesses and comes into out-of-plane orientations as well. Such a precessional motion modifies spin accumulation and also DSV's resistance. In this section we describe variation of the resistance, when magnetization of the central layer is switched by an external magnetic field back and forth. To do this we make use of the single-domain approximation. We also assume that the magnetic field is applied along the easy axis of the central layer, similarly as in experiment (see Fig. 1). Time evolution of the spin moment of central layer is described by the Landau-Lifshitz-Gilbert equation Hereŝ is a unit vector along the spin moment of the central layer, γ g is gyromagnetic ratio, µ 0 vacuum permeability, α is a dimensionless damping parameter, and H eff stands for effective magnetic field, which includes external magnetic field (H ext ) applied alongê z -axis (see Fig. 
1), anisotropy field (H ani ), and demagnetization field (H dem ) calculated for a layer of thickness d = 2 nm and elliptical shape with the major and minor axes 130 nm and 60 nm, respectively. H th is a stochastic gaussian field with dispersion D = (αk B T )/(γ g µ 2 0 M s V ), which describes thermal fluctuations at temperature T , where k B is the Boltzmann constant, and V is volume of the central magnetic layer. Magnetic moments of the outer layers are assumed to be fixed due to much larger coercive fields of these layers. Moreover, the torque due to spin-transfer has not been included. Figure 6 shows quasistatic minor hysteresis loops of the resistance in external magnetic field, calculated for asymmetric exchange-biased DSV at T = 70 K. These figures are in agreement with the results obtained in the preceding section, and also in good agreement with experimental observations 15 . They also show that the drop in resistance changes sign when the direction of current is reversed. Moreover, one can observe small salient points in the hysteresis loops, which appear during the reversal process -especially at low current densities. These points indicate on the minima in resistance at noncollinear configurations and have been observed experimentally as well. The minor hysteresis loops appear also in the case when the nonlinear effect is due to bulk parameters (not shown). Some differences however appear, for instance in their dependence on the layer thickness. This suggests, that the experimentally observed effects are more likely due to interface contribution, which is quite reasonable as the spin accumulation is maximal just at the interfaces. V. CONCLUSIONS AND DISCUSSION In summary, we have extended the description of spin accumulation and magnetotransport in order to account for nonlinear magnetotransport in metallic spin valves. We assumed the dependence of bulk resistivities, interface resistances, and bulk/interface asymmetry parameters on spin accumulation in the central layer. The assumed phenomenological parameters effectively include different contributions leading to modification of the spin-dependent density of states at the Fermi levels. The obtained numerical results reflect the trends observed experimentally. More specifically, the dependence on spin accumulation of any of the considered parameters leads to an asymmetric modification of spin valve resistance in comparison to its equilibrium (zero-current) value. This modification results in a drop in resistance when the magnetic moment of the central layer switches between two collinear configurations. Moreover, this drop depends on the current density, as has been also shown in experiment 15 . Within our phenomenological description we can reproduce mainly linear dependence of the currentinduced resistance drops, with a small deviation from the linearity for higher current densities. However, the description fails to account strongly nonlinear variation of ∆R, which was observed in some DSV structures at high current densities 15 . To account for this behavior one should take into account higher order terms in the expansion of the relevant parameters. Additionally, when only interfacial contribution is taken into account, the dependence of ∆R on current becomes less pronounced with increasing thickness of the central layer. Such a behaviour has been observed experimentally 15 , too, which indicates that the interface contribution to the nonlinear effects is more important than the bulk one. 
The resistance drop measured experimentally at the current density I = 10^7 A cm^-2 is about 0.04 fΩ m^2. To reach an effect of similar magnitude within the interfacial model, as shown in Fig. 5, one needs ξ̃′ ∼ 1, i.e. ξ′ ≃ 1.13 (meV)^-1 (assuming the effect is due to the variation of the interfacial asymmetry parameter only). If the direct contribution from spin accumulation dominated, the corresponding change in the density of states would be of the order of 10% on an energy scale of 1 meV. This slope may be much smaller in the presence of other contributions.
FACTORS INFLUENCING PROFESSIONAL SKILLS AMONG ACCOUNTING STUDENTS AT KWAZULU-NATAL UNIVERSITIES: Employers are increasingly concerned that most recent graduates lack professional skills. In addition, it remains to be determined whether students pursuing accounting degrees at professionally accredited institutions are more financially savvy than those at non-accredited institutions. A total of 1582 undergraduate accounting students at the University of KwaZulu-Natal (UKZN), Mangosuthu University of Technology (MUT), and Durban University of Technology (DUT) were surveyed using self-administered questionnaires. The data were analyzed using the Statistical Package for the Social Sciences version 25 (SPSS 25). The results indicated that the majority of respondents were female. 72.6 percent of respondents are influenced by South African Institute of Chartered Accountants (SAICA) accreditation, whereas 95.2 percent of respondents with outstanding professional skills are influenced by non-SAICA accreditation. The study's findings disprove previous claims that accreditation has no bearing on students' abilities. Finally, the investigation contributes to South Africa-relevant knowledge. INTRODUCTION According to numerous practicing accounting professionals, most accounting graduates do not satisfy the standards of potential employers in a globalized business environment (de Bruyn, 2023). According to Handoyo and Anas (2019), employers seek graduates with multifaceted skills and attributes pertinent to shifting global norms. As a result of the globalization of the business environment, accounting bodies have also developed guidelines for the breadth of skills necessary for the profession to remain pertinent. Since the 2008 global financial crisis (GFC), improving the professional skills and ethical conduct of emerging professionals such as accountants has become a global concern that has captivated the attention of various stakeholders (Feghali et al., 2022). Other factors, such as regulatory requirements, technological advancements, globalization, and the rising number of corporate failures, have also contributed to the scrutiny of accounting education and curricula in recent decades (Ebaid, 2022; Handoyo & Anas, 2019; Samkin & Stainbank, 2016). Studies in several nations have indicated that college students lack professional skills (Mvunabandi et al., 2023). These findings align with those regarding financial literacy in South Africa (Dhlembeu et al., 2022). Kobina, Yensu, and Obeng (2020) found that all age groups exhibited low levels of financial capability and professional competence. However, little is known about the professional skills of university students, particularly those enrolled in accounting-related programs. Professional skills here refer to the outcome of a degree curriculum with professional accreditation that delivers graduates with a technical knowledge foundation and the ability to apply that knowledge successfully when they enter the workforce and advance their future growth. Based on research conducted by Bui & Porter (2010) and Jackling & De Lange (2009), it has been suggested that accounting programs may not meet the expectations and requirements of employers. Watty (2014) recommended that the next step in resolving these problems was to include crucial skills in the curriculum. 
University accounting programs should equip graduates with excellent technical knowledge and job creation abilities to suit employers' needs; these graduates should be able to contribute immediately to future business (Albrecht & Sack, 2000;Ellington, 2017;O'Connell et al., 2015). The 2015 International Education Standard (IES) 3 Initial Professional Development -Professional Abilities from the International Accounting Education Standards Board (IAESB) outlines the professional abilities employers look for in accounting graduates. The IAESB expects to see chances for students to develop a number of the competencies outlined in the IES 3 when evaluating a degree for professional certification. The Initial Professional Development (IPD) learning outcomes for professional skills are laid out in the IES for aspiring accountants to follow. These are divided into four categories of competency: intellectual, interpersonal communication, personal, and organizational skills, which a professional accountant combines with technical proficiency and a commitment to professional values, ethics, and attitudes to demonstrate professional competence. SAICA has led the way in accounting education provided in South African tertiary institutions and continues to have a significant impact (de Villiers & Venter, 2010). Accreditation is granted after thoroughly evaluating the accounting courses these institutions offer. Higher education institutions that offer chartered accounting programs are required by SAICA to have the necessary resources and to adhere to certain SAICA regulations (SAICA, 2014). Its competency framework, which all certified institutions are required to use, includes qualities for professional skills, among other requirements. According to Clanchy and Ballard (1995), higher education institutions can only make sure that students have the chance to pick up technical skills while they are still undergraduates. According to Fogarty (2010), these schools only have a small amount of room for the additional skills that the profession and future employers demand. Sikka, Haslam, Kyriacou, and Agrizzi (2007) examined accounting training material and discovered that there needs to be more study of ethics, principles, theories, or social responsibility issues in addition to technical instruction. While many teachers have tried to help graduates develop their talents, the results could have been more consistent. No research has been done on revising the current curriculum to enhance the skills students learn in universities. According to SAICA guidelines, UKZN is a recognized tertiary institution (UKZN, 2018). While being tertiary institutions, Mangosuthu University of Technology (MUT) and Durban University of Technology (DUT) are not SAICA-accredited. According to De Villiers and Venter (2010), institutions that offer accounting curricula but still need to be certified by SAICA may face difficulties luring students interested in the South African chartered accountant profession. In order to prevent teachers from implementing a system similar to that employed by technical universities in South Africa in our curriculum, universities offering accounting programs use an externally created structure of competencies as part of their curricula (Livingstone & Lubbe, 2017). Following a thorough evaluation by the accounting organization's Academic Review Committee (ARC) in 2018, UKZN was given a level 1 rating. 
The highest rating by SAICA, Level 1, indicates that the institution has complied with all criteria for the accreditation of its BCom (Accounting) undergraduate and graduate programs (UKZN, 2018). UKZN introduced new strategies that raised the caliber and caliber of its programs thanks to the efficient monitoring provided by SAICA. Additionally, the number of students enrolling in this course climbed from 248 in 2017 to 373 in 2018, while the Certificate in the Theory of Accounting (CTA) throughput rates increased from 38% in 2016 to 49% in 2017 (Bokana, 2019). Additionally, the Initial Test of Competence was taken in January and June 2018, and UKZN students did incredibly well on both occasions, demonstrating the relationship's success. According to Hussein's (2017) research in Egypt, university students' professional skills are crucial. The author suggested that Egyptian colleges review their approach to teaching accounting in order to forge solid ties with professional firms. In both developed and developing nations, research has been done on professional skills (Abayadeera & Watty, 2016;Awayiga et al., 2010;Bui & Porter, 2010;Jackling & De Lange, 2009;Kavanagh & Drennan, 2008). Instead of preparing students for long-term career objectives, most accounting curricula concentrate on preparing graduates for entry-level positions. Universities must provide accounting programs that will properly prepare graduates with the knowledge needed to place them in senior posts once they begin working in collaboration with professional groups. Therefore, accounting educators must balance the needs of higher education and the professional body to produce graduates who are completely competent and ready for the job market (Barac, 2014). The International Federation of Accountants (IFAC) (2014) underlined that accounting educators should provide students with the capabilities employers demand, including technical knowledge, professional skills, and characteristics like values, ethics, and a professional attitude. The ultimate objective is to generate graduates who are prepared for the job market and can live up to employers' and the continually changing workplace standards. The key stakeholders affecting accounting education at South African universities are the Department of Higher Education and Training (DHET), SAICA, and potential employers. The DHET also established crucial cross-field objectives, or professional skills, to be incorporated in all registered qualifications in response to skills deficits that had been found (Killen, 2010;South African Qualification Authority-SAQA, 2000). while a result, professional skills are included in the accounting curriculum with various talents, which students learn while studying to increase their competence levels. As accounting students need professional skills that will help them in the job, the writers of this paper investigate the professional skills of accounting students at three universities. According to De Villiers (2010), colleges must develop creative solutions to satisfy stakeholder needs to be relevant and competitive. Hesketh (2011) agrees and notes that "assessing additional skills in professional exams will involve new approaches to academic assessment," impacting how academic providers instruct and evaluate their students. However, Strauss-Keevy (2014) contends that training institutions are better suited than academic programs to build interdisciplinary/professional abilities because academics need to prepare more. 
Numerous researchers from various nations have investigated accounting practitioners' expectations regarding the professional skills that accounting graduates should have at the beginning of their entry-level position (Bui & Porter, 2010;Crawford et al., 2011;Hancock et al., 2009;Jackling & De Lange, 2009). These studies indicate a substantial disparity between what graduates know and what they can do, which is consistent with the requirements of professional organizations for an accredited institution. It has been reported that university accounting programs do not adequately develop many of the professional skills accounting practitioners expect graduates to possess (Bui & Porter, 2010;Hancock et al., 2009;Kavanagh & Drennan, 2008;Tempone et al., 2012;Van Romburgh & Van der Merwe, 2015). For instance, a New Zealand study by Bui and Porter (2010) revealed that graduate students needed help utilizing their professional and technical skills. (Bui & Porter, 2010) The authors identified the expectation-performance gap between the professional skills accounting practitioners expect graduates to possess upon entry and the professional skills they observe recently qualified graduates demonstrating. Mvunabandi, Marimuthu, and Maama (2022) discovered that training officers in South Africa value the generic/professional skills prerequisites for entry-level trainee accountants. When SAICA's training program shifted from a knowledge-based to a skills-based approach in 2010, Steenkamp (2012) analyzed accounting students' perceptions. Although the renewed emphasis on pervasive skills was positive for students, many felt they needed to be made aware of the changes too late and were concerned about their impact on their assessment (Hall, 2018). Numerous academics, such as Bui and Porter (2010), concur that accounting students must possess selfreflective, problem-solving, effective oral, listening, and written communication skills. METHODS This research used a questionnaire to collect quantitative data on the professional skills of accounting students in universities in KwaZulu-Natal. The structured questionnaire measured accounting students' professional skill levels and antecedents. Five questions relating to professional abilities were included in 1,582 questionnaires. The study's queries were adapted from prior research (Mandell, 2004;Skagerlund et al., 2018). This study's demographic included all accounting students in KwaZulu-Natal universities enrolled in full-time three-year undergraduate programs. It included first-, second-, and third-year students enrolled for the 2017-2018 academic year who were pursuing Bachelor of Commerce in Accounting, Bachelor of Commerce General, and National Diploma in Accounting degrees at the designated universities. Although there are four universities in KwaZulu-Natal, this investigation focused on the three with the greatest number of students. University of KwaZulu-Natal (UKZN), Durban University of Technology (DUT), and Mangosuthu University of Technology (MUT) were the selected universities. The University of Zululand (UNIZULU) was excluded because of difficulties obtaining student access. The study employed both straightforward random and convenience sampling, and a total of 1582 questionnaires were deemed valid for the study. The current study's findings were categorized using ranges and analyzed per previous studies (Volpe et al., 1996;Mandell, 1998;Huston, 2010) to measure the study's outcomes effectively. Data presentation and analysis. 
The Significance of professional skills for the accounting profession and the expected level of exposure are known. However, it is still being determined whether the professional skills requirements of accounting students can be quantified in terms of their knowledge. Due to the unknown levels of actual professional accounting student skills, potential employers hire graduates without the necessary skills. Although a previous study by Steenkamp and Smit (2015) indicated that at the beginning of the training contract/end of their three/four-year degree, students/graduates did not meet the expectations of the accounting profession in terms of professional abilities, this study found that this was not the case. This knowledge lacuna has yet to be investigated in South Africa or elsewhere. In this investigation, we assess the professional capabilities of students. The descriptive analysis conducted on each of the five items-lifelong learning, communication skills and professional judgment, information technology skills, critical thinking skills, and problem-solving skills-indicates that the majority of respondents possess strong professional skills, with 72.1%, 90.5%, 92.4%, 93.9%, and 93.2%, respectively. The results are summarised in detail in Table 1. Respondents' professional skills. According to the overall analysis of all five professional skill measures, most respondents (n=1506, or 95.2%) have good professional abilities, as opposed to the minority (n=76, or 4.8%), with weak professional skills. Professional skills versus institutions. The institutions and professional backgrounds of the respondents were described and estimated. The findings showed that the majority of the 864 respondents from UKZN (n = 829; 95.9%) have high professional skills. Similarly, the results revealed that most of the 404 and 314 respondents from DUT and MUT, with (n=372; 92.1%) and (n=305; 97.1%), respectively, have good professional skills. For a comparison of the institutions and respondents' professional skills, see the table below. Professional skills versus institutions. Based on the analysis in the table above, it can be concluded that more than 95% of respondents from UKZN, more than 92% of respondents from DUT, and more than 97% of respondents from MUT have strong professional capabilities. It leads to the conclusion that MUT responders have the highest professional skills. Socioeconomic factors vs professional skills. The Pearson Chi-square and probability tests revealed a significant correlation between student educational levels, racial identity, year of study, age group, and professional skills (lifelong learning) based on the data in Table 3 above. The Pearson Chi-square and probability tests revealed a significant correlation between students' educational levels, years of study, and professional skills (communication skills). Similarly, the study discovered a significant correlation between professional abilities (ICT skills) and student educational levels and year of study, as indicated by the Pearson Chi-square test and the likelihood test. According to the Pearson Chi-square and probability tests, the study also discovered a significant correlation between students' educational levels and their professional skills (critical thinking). Regression model of Socioeconomic factors vs professional skills. The association between the respondents' socioeconomic traits and professional skills was established using a bivariate model. 
The objective was to determine how effectively the socioeconomic traits of the respondents could predict their professional skill set. The link between the respondents' socioeconomic factors and professional skills, as shown by a scatterplot of the study, appeared to be negative and linear and did not show any bivariate outliers. With r (1578) =.228 and p =.000, the relationship between the predictor variables (respondents' socioeconomic characteristics) and professional skills were statistically significant. Additionally, an ANOVA test conducted as part of the regression study revealed that the regression model performs better when four predictors-the socioeconomic characteristics of the respondents-are included than when the mean is used alone, with F = 21.605; p =.000. According to the p-value, predictions made using the regression model with the four predictors were substantially more accurate than those made without them. Hence, a statistically significant association exists between the predictive variables (respondents' socioeconomic characteristics) and the outcome variable (professional skills). Thus, professional skills among accounting students were predicted using the socioeconomic factors of the respondents. In light of the respondents' socioeconomic characteristics, the regression equation for predicting the financial professional capabilities of accounting students was = 4.982 -(0.249 + 0.249 + 0.189 + 0.222) x. The r2 for this equation was.052, meaning that the socioeconomic factors of the respondents could predict 5.2% of the variation in professional skills. It suggests the statistical Significance of the coefficients for study level, year of study, respondents' institution, and financial inclusion. The respondents' study level, academic year, educational setting, and financial status impact their professional abilities. With a significant value of 0.000, 0.000, 0.000, and 0.022, respectively, the respondents' level of study, year of study, institution, and financial inclusion impact their financial professional skills. According to the regression model's findings, schooling plays a statistically significant role in defining the professional skills of accounting students. This result conflicts with previous research that did not demonstrate a link between education and professional skills (Ansong & Gyensare, 2012;Botha, 2013;Chmelková, 2016;Motsepe, 2016), although it is in line with some studies (Albeerdy & Gharleghi, 2015;Fatoki, 2014;Shahrabani, 2013). The degree of study is statistically important in predicting the professional skills of accounting students, according to the regression model's findings. Ansong and Gyensare (2012) discovered that university students in Ghana financial capacity may be impacted by their mother's educational level. According to Tang and Peter (2015), personal financial knowledge, financial experience, and parental education improve young Americans' professional skills. According to the regression model's findings, residency is not statistically significant in predicting the professional abilities of accounting students. It was clear from the fact that Asian students in America behaved more responsibly with their money than white students. However, several studies (Botha, 2013;H. Chen & Volpe, 1998;Volpe et al., 1996) failed to detect a significant correlation between race and financial capacity. 
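For readability, one plausible reading of the regression equation reported above, under the assumption that each listed coefficient corresponds to one of the four significant predictors named earlier (level of study x1, year of study x2, institution x3, and financial inclusion x4), is:

\[
\widehat{y} = 4.982 - 0.249\,x_1 - 0.249\,x_2 - 0.189\,x_3 - 0.222\,x_4, \qquad R^2 = 0.052 .
\]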
According to the regression model's findings, parent education is not statistically significant in predicting the professional skills of accounting students. According to several investigations (Albeerdy & Gharleghi, 2015), this conclusion is supported. However, some studies (Angulo-Ruiz & Pergelova, 2015;Ansong & Gyensare, 2012;Németh et al., 2015;Tang & Peter, 2015) have identified a favorable association between parents' educational attainment and professional competence. According to the regression model's findings, gender has no statistically significant impact on how professionally skilled accounting students are. This result aligns with some earlier research (Thapa, 2015). However, some studies by Agnew andHarrison (2015), De Clercq &Venter, 2009;Oseifuah & Gyekye, 2014) have discovered a favorable association between gender and financial capability. While several studies have concluded that male students are more professionally skilled than female students (Bucher-Koenen et al., 2017;Z. Chen & Garand, 2018;Montford & Goldsmith, 2016;Oseifuah & Gyekye, 2014), a few have asserted that female students are better equipped to make financial decisions than male students (Fatoki, 2014;Shaari et al., 2013). In two South African universities, Fatoki (2014) found that female students with non-business degrees had superior professional abilities to their male counterparts. Age is statistically important in determining the professional skills of accounting students, according to the regression model's findings. Most studies, including Xiao, Chen, and Sun (2015), Volpe et al. (1996), de Bassa Scheresberg (2013), and Zdemir, temizel, sönmez, and Er (2015, have discovered a positive association between college students' age and their professional skills. According to Volpe et al. (1996), the ability of American college students to make sound financial decisions in the domain of investment literacy increases with age. Another study by Chen and Volpe (1998) on the overall financial literacy of American university students confirmed this conclusion by finding evidence that older students tend to make wiser financial decisions than younger ones. Age and financial decisions have a favorable link, according to a comparable study done among South African students pursuing Chartered Accountancy (De Clercq & Venter, 2009). Race is statistically important in determining the professional skills of accounting students, according to the regression model's findings. The results of previous research that have found that race affects financial capability by Agnew andHarrison (2015), De Clercq &Venter, 2009;Serido et al., 2016;Shahrabani, 2013) align with this study's findings. According to a 2009 study by De Clercq and Venter, there is a link between race and financial literacy among South African students pursuing Chartered Accountant degrees. According to Shahrabani's (2013) research, Jewish pupils were more financially literate than their Arab counterparts. The latter only received a 39% overall mean score compared to the former's 50%. The study also concludes that nationality affects one's capacity for making financial decisions. According to the regression model's findings, parents' income is not statistically significant in predicting the professional skills of accounting students. This result is in line with several investigations (Mandel & Klein, 2007;Jorgensen & Savla, 2010). 
However, other studies (Botha, 2013;Herawati et al., 2018;Zhu, 2018) have identified a favorable association between parents' income and professional skills. According to Botha's (2013) research, parental income was a significant factor in determining the financial capacity of South African students. According to Soria, Weiner, and Lu's (2014) research, college students from low-income families are more likely to make unwise financial decisions. Summary of key findings. According to the current study's findings, accounting students generally possess a high level of professional expertise, as evidenced by the total percentage mean score, which was 95.2%. According to the study's findings, there is a statistically significant correlation between students' professional skills and SAICA accreditation, and certain sociodemographic factors impact accounting students' financial aptitude, financial socialization, and professional skills. Additionally, statistically significant links have been found between SAICA accreditation and financial capacity and between accounting students' financial socialization and professional skills. However, the relationship between accounting students' financial socialization and financial aptitude was not statistically significant. RESULT AND DISCUSSION Descriptive statistics showed a strong correlation between high professional skills and SAICAaccredited tertiary institutions compared to non-accredited tertiary institutions. The current study's results refute studies claiming that accreditation plays no significant role in students' ability. However, a thorough assessment of the literature reveals that research on factors influencing professional skills among accounting students in higher education is inclusive, and the findings of the current study both confirm and refute those of the earlier study. The positive impact of professional skills. According to a study by Wells, Gerbic, Kranenburg, and Bygrave (2009), professional accounting training enhances one's professional capacity in other practical spheres of life and commercial settings. The impacts that were emphasized included enhancements in financial analytical skills. According to several studies, financial capability is positively connected with increased technical professional knowledge (Brown et al., 2014;Drever et al., 2015;Xiao & O'Neill, 2016;Xiao & Porto, 2017). In a survey of Chief Financial Officers and their direct reports in large companies, Spraakman, O'Grady, Askarany, and Akroyd (2015) discovered that there is an intermediate level of proficiency in the use of ICT, such as Microsoft tools; this supports the findings of earlier studies such as (Strauss-Keevy, 2014) and (Viviers, 2016) that highlighted a lack of/poor ICT skill. In agreement with Spraakman et al. (2015), Ramachandran and Ragland (2016) found that using Microsoft tools like Microsoft Excel is still difficult. According to Karin Barac and Du Plessis (2014), implementing the Computer Assisted Audit Technique (CAATS) has increased the ICT ability to audit students. It is important to highlight that several studies have found that female students significantly outperform their male counterparts regarding ICT proficiency (Ainley et al., 2016). This result aligns with Sargent and Borthick's (2013) study of students who lacked critical thinking abilities, which found that their performance in future courses increased their grade point average (GPA). 
However, according to Azizi-Fini, Hajibagheri, and Adib-Hajbaghery's (2015) research, first-year and final-year students need better critical thinking abilities. This conclusion was drawn after examining the test results of 150 students from Kashan's University of Medical Science. There is no statistically significant correlation between critical thinking abilities and demographic traits, according to Azizi-Fini et al. (2015). Roksa et al. (2017) investigated racial disparities in African American and white students' critical thinking abilities using data from the Wabash National Study of Liberal Arts Education and discovered racial differences. The research's findings concur with those of earlier studies (Barac & Du Plessis, 2014;Brooks, Pomerantz, & Pomerantz, 2016;Sithole, 2015;Thompson & Washington, 2015). According to Smith and Szymanski (2013), kids in high school are frequently pushed to memorize, which leads to the development of weak thinking skills. Adler and Milne (1997) came to the same conclusion but added that group work, which necessitates teamwork, also improves oral communication. Maelah et al. (2012) agreed. Teamwork enhances verbal and written communication skills, according to Van der Merwe (2013). According to Brooks et al.'s (2016) research, students demonstrated a very favorable orientation toward using technology devices. According to the study, female and first-generation students demonstrated higher involvement, enrichment, and effectiveness levels. The results of Tempone et al. (2012) are pertinent even though they did not poll accounting students since they highlight the Significance of communication skills. According to a poll of Australian companies and accounting professional organizations, communication, cooperation, and self-managed abilities are still important. Tan and Laswad (2016) concurred with earlier research that cooperation improves communication abilities, which is extremely advantageous to future employers. According to Milliron (2012), technical abilities frequently taught in universities and colleges are not as crucial as communication and analytical skills. The use of Computer Assisted Audit Techniques (CAATS) in technical training, according to Barac and Du Plessis (2014), has increased the ICT abilities of students studying auditing. University students, according to Sithole (2015), are prepared technologically. 100 University of Swaziland students who signed up for internships were included in the study. According to its findings, accounting students are capable of utilizing technology and are prepared to do so in the workplace. According to Jones (2011), students' problem-solving abilities significantly differed when responding to structured and non-structured questions. After comparing the 2012 and 2013 CPA test results, Thompson and Washington (2015) concluded that better problem-solving abilities were to blame for improving outcomes. Problem-based learning (PBL) was employed by Birgili (2015) to assess students' problem-solving abilities, particularly when a PBL paradigm was applied. Negative or no relationship/Contradiction and extent of professional skills. Sithole's (2015) findings disagreed with those of earlier research. According to the results of his survey of 100 University of Swaziland students who had applied for internships, accounting students are capable of handling and demonstrating technological expertise. 
The conclusions of several investigations (Kgapola, 2015;Kunz, 2016;Odendaal, 2015;Ramachandran & Ragland, 2016;Spraakman et al., 2015) are different from those of the current study. ICT skills continue to be a problem, according to a study by Kgapola (2015) that involved 146 accounting professionals in South Africa. The mean score for ICT skills was 2.80, while the mean score for other abilities was 4.74. These findings were supported by Kunz's (2016) study of first-year accounting students, which revealed that IT competence levels are still quite low. This study looked at the trainees' perceptions of their expertise and the expectations of potential employers. There was a gap that was 25.4% wide. In major companies, Chief Financial Officers and their direct reports were surveyed by Spraakman et al. (2015). They discovered an intermediate level of proficiency in using ICT, such as Microsoft tools, which supports the findings of earlier research that emphasized a lack of/poor ICT abilities. In agreement with Spraakman et al. (2015), Ramachandran and Ragland (2016) found that using Microsoft tools like Microsoft Excel is still difficult. Odendaal (2015) found that, particularly if the students were familiar with the topic, 70% of the students surveyed could answer accounting difficulties about a conceptual framework. It would imply that unfamiliar problems are difficult for accounting students to tackle. CONCLUSION The results of this study add new empirical knowledge to what is already known about the professional skills of young people and university students in South Africa. The study examined the professional abilities of students enrolled in programs at institutions with and without SAICA accreditation (DUT and MUT) and SAICA accreditation (UKZN). Its findings refute research suggesting that accreditation does not influence student aptitude much. The results of this research study, which involved university students, are particularly pertinent for developing future curricula since they offer empirical support for areas in which accounting students' professional abilities can be strengthened finally, because the majority of previous research on financial literacy and capability was done in developed nations like the US, the UK, and the Netherlands (see, for example, Atkinson, McKay, Collard, & Kempson, 2007;Lusardi, 2008;Alessie et al., 2011), this study adds new empirical knowledge to the body of knowledge already available on financial professional skills of university students. Future research could be conducted using a mixed method approach as all limitations associated with the quantitative research design, including its weakness in handling the social complexity of a phenomenon and its rigidity because the same questions were asked in the same format and manner, apply to this study. Since the study was only conducted at three universities, it is challenging to extrapolate the results to the entire nation. Therefore, it is advised that a study be done on other universities to compare results. Future studies should compare the sociodemographic characteristics of South African practicing accountants and other professionals to their financial capabilities, financial socialization, and professional skills. Declarations. The authors confirm that this work is original and has not been published elsewhere, nor is it currently under consideration for publication elsewhere. The authors have no competing interests to declare relevant to this article's content. 
Only the authors are responsible for the content and writing of this article.
Data-driven software design with Constraint Oriented Multi-variate Bandit Optimization (COMBO) Software design in e-commerce can be improved with user data through controlled experiments (i.e. A/B tests) to better meet user needs. Machine learning-based algorithmic optimization techniques extends the approach to large number of variables to personalize software to different user needs. So far the optimization techniques has only been applied to optimize software of low complexity, such as colors and wordings of text. In this paper, we introduce the COMBO toolkit with capability to model optimization variables and their relationship constraints specified through an embedded domain-specific language. The toolkit generates personalized software configurations for users as they arrive in the system, and the configurations improve over time in in relation to some given metric. COMBO has several implementations of machine learning algorithms and constraint solvers to optimize the model with user data by software developers without deep optimization knowledge. The toolkit was validated in a proof-of-concept by implementing two features that are relevant to Apptus, an e-commerce company that develops algorithms for web shops. The algorithmic performance was evaluated in simulations with realistic historic user data. The validation shows that the toolkit approach can model and improve relatively complex features with many types of variables and constraints, without causing noticeable delays for users. We show that modeling software hierarchies in a formal model facilitates algorithmic optimization of more complex software. In this way, using COMBO, developers can make data-driven and personalized software products. Introduction Design of user-facing software involve many decisions that can be optimized with user data. The decision variables-called the search space-can include both product aspects that are directly or indirectly visible to the user. For example, what wordings to use in headings or how items should be ranked in recommender systems (Amatriain 2013) or search engines (Tang et al. 2010). Traditionally, randomized controlled experiments (i.e., A/B tests or split tests) are used to iteratively validate the design choices based on user data (Kohavi et al. 2008). Recently, data-driven optimization algorithms have been proposed to perform automated experimentation on software in larger scale on bigger search spaces simultaneously, at e.g., Amazon (Hill et al. 2017) and Sentient (Miikkulainen et al. 2017). Personalization in particular is touted (Hill et al. 2017) as an opportunity to apply optimization algorithms to improve the user experience for different circumstances in, e.g. device types or countries. The benefits of using optimization algorithms need to be balanced against the cost of implementing it. If the implementation cannot be broadly applied to many parts of the software product this investment might not pay dividends. Previously, data-driven optimization algorithms have only been applied to simple software (Hill et al. 2017;Miikkulainen et al. 2018) with a flat structure in the decision variables, such as colors, layouts, texts, and so on. Software with more complex behaviours cannot be directly optimized with these techniques. We hypothesize that to handle more types of software the algorithms must understand the hierarchies that software is build with. 
For example, a software feature can have dependencies between variables such that one variable can only enabled if another one is, and so on. We suggest modeling the search space and the relationships between variables-called constraints (Biere et al. 2009;Rossi et al. 2006)-in a formal language that can describe the software hierarchy. Developers can use constraints to exclude certain combinations from the optimization search space that would otherwise generate undesirable or infeasible variants. For example, the color of a button should not be the same as the background. Feature models (Chen et al. 2009;Kang et al. 2002) from software product lines have been suggested by Cámara and Kobsa (2009) as a suitable modeling representation to handle the variability of experimentation. With feature models, software variable dependencies are described in a tree hierarchy. Feature models also usually support a limited set of more complex constraints. To this end we introduce the open-source toolkit called Constraint Oriented Multi-variate Bandit Optimization (COMBO) targeted at software engineers without deep optimization expertise. The toolkit consists of a domain-specific language for specifying a hierarchical model of the optimization search space with many types of constraints and multiple banditbased machine learning algorithms (see Section 3) that can be applied to optimize a software product. To the best of our knowledge, this is the first attempt at combining bandit-based optimization algorithms with constraints. We have validated the toolkit's capabilities in a proof-of-concept by implementing two feature cases relevant to the e-commerce validation company Apptus. The algorithmic performance has been evaluated in simulations with realistic data for the feature cases. Finally, we discuss the implications of using toolkits such as COMBO in the context of a data-driven software development process, which we define as the practice of continuous optimization. The current barriers to applying continuous optimization need to be lowered in order to encourage and enable developers to shift towards a higher degree of experimentation. For example, since modern software products are in a state of constant change, the optimization search space will have underperforming variables removed and new variables added. The algorithms need to gracefully handle such continuous updates of the search space model without restarting the optimization. We also call attention to several remaining barriers such as: handling concept drift (Kanoun and van der Schaar 2015) and ramifications on software testing (Masuda et al. 2018). We also provide considerations for what metrics could be optimized for and what the toolkit could be applied to. The rest of this paper is structured as follows. Section 2 contains background and related work on continuous experimentation. Section 3 introduces theory on bandit optimization. In Section 4 the research context and methods are described along with threats to validity of the validation and limitations of the solution. In Section 5 the COMBO toolkit is presented. In Section 6 the toolkit is validated and the algorithms are evaluated. Finally, Sections 7 and 8 discuss continuous optimization, metrics, future directions, and conclude the paper. Background and Related Work on Continuous Experimentation Many web-facing companies use continuous experimentation for gauging user perception of software changes (Auer and Felderer 2018;Ros and Runeson 2018). 
By getting continuous feedback from users software can evolve to meet market needs. Randomized controlled experiments (i.e. A/B tests, split tests, etc.) in particular are emphasized by high-profile companies such as Microsoft (Kevic et al. 2017), Google (Tang et al. 2010), and Facebook (Feitelson et al. 2013) as an evidence-based way of designing software. The section below provides background on continuous experimentation through the lens of randomized controlled experiments for software optimization. This gives context to the main topic of our work on applying data-driven software optimization algorithms. Experiments can be executed either on unreleased prototypes or after deployment to real users. Bosch-Sijtsema and Bosch (2015) and Yaman et al. (2016) explain how qualitative experiments on prototypes are used early in development to validate overarching assumptions about user experience and the business model. While post-deployment experiments, such as randomized controlled experiments, are used to measure small differences between software variants for optimizing a business related or user experience (UX) metric. Prototype experiments are advocated for both in the lean startup framework by Ries (2011) and in user experience research (Chamberlain et al. 2006;Williams 2009). Lean startup is about finding a viable business model and product through experimentation, for example, a pricing strategy suitable for the market. Lean startup has been brought to software product development in the RIGHT model by Fagerholm et al. (2017). Experiments based on design sketches are used within user experience to validate user interaction design, e.g. through user observations. In both lean startup and user experience research there is a need to get feedback on prototypes early in the process. Though, as the product or feature design matures and settles, the shift can move towards optimization with randomized controlled experiments to fine tune the user experience. This can exist simultaneously with prototype experiments as different features in a product can have different levels of design maturity. Randomized Controlled Experiments In a randomized controlled experiment, variables are systematically changed to isolate the effect that each setting of the variable has on a certain metric. The variable settings are mapped to software configurations and each unique software configuration is assigned a user experiment group. When the experiment is deployed to a product environment each user of the system is randomly assigned to a user experiment group. Usually there are thousands of users per group to obtain statistical significance. Randomized controlled experiments have been studied in data science as online controlled experiments. The tutorial by Kohavi et al. (2008) from Microsoft provides a good introduction to the statistics and technicalities of it. The research includes studies on e.g. increasing the efficiency (Fabijan et al. 2018a) and trustworthiness of results (Kohavi et al. 2012). The structure of the controlled experiment is referred to as the experiment design. In an A/B test there are two user experiment groups and in an A/B/n test there can be any number, but still with one variable changed. Thus, they are univariate. In a multi-variate test (MVT) there are also multiple variables that each can have multiple values. There are different strategies for creating experiment groups in an MVT. 
For example, in the full factorial design all interaction terms are considered so if there are n binary variables there would be 2 n groups. In a fractional factorial design only some interactions are considered. Infrastructure is a prerequisite for controlled experiments on software . The bare minimum is an experimentation platform that handle the randomized assignments of users and statistics calculations of the experiment groups. Microsoft have described their experimentation platform (ExP) for conducting experiments in large scale (Gupta et al. 2018). It has some additional optional features such as segmentation of users with stratified sampling for cohort analysis, integration with deployment and rollback of software, sophisticated alerting of suspected errors, and so on. Personalization MVTs are the current standard experiment design for having personalized experiment results (Hill et al. 2017;Kohavi et al. 2008). By personalization (Felfernig et al. 2010;Schwabe et al. 2002) we mean that there are contextual variables that describe aspects of a user, such as device type or age, and that the point of personalization is to find different solutions for the different combinations of contextual variables. Having many personalization variables will result in needing many more experiment groups. In the classical experiment designs of A/B/n tests and MVTs, users are allocated into experiment groups uniformly at random. That is, each experiment group will have equally many users. When the number of groups is large this can be inefficient. In any optimization algorithm, the allocation of users to experiment groups changes based on how well it performs in relation to some metric. Thus, they can concentrate users to the most promising configurations. Experimentation Implementation Strategies There are two distinct implementation strategies for randomized controlled experiments of software: using feature flags or multiple deployment environments. Firstly, feature flags (Rahman et al. 2016) are essentially an if/else construct that toggle a feature at run time. This can be extended to do A/B testing. Secondly, having multiple software releases deployed to different environments, ideally done through containerized blue-green deployment (Révész and Pataki 2017). The advantage of this approach is that the software variants can be organized in different code branches. The number of experiment groups can be huge, especially with personalization and optimization. For the algorithmic optimization advocated in this work, having deployment environments for each combination of variable settings is infeasible. Thus, the feature flag strategy is presumed. However, there are scheduling tools that optimize the efficiency of experimentation in different deployment environments by Schermann and Leitner (2018) and Kharitonov et al. (2015). Model-Based Experimentation and Variability Management Experimentation introduces additional variability in software by design. Cámara and Kobsa (2009) suggested modeling the variables through a feature model from software product lines research (Kang et al. 2002). Feature models allow for the specification of hierarchies and other relations between feature flags and have been used to capture variability in many software products (Chen et al. 2009). With feature models one can perform formal verification on the internal consistency of the model and perform standard transformations. This approach does not seem to have gained traction for continuous experimentation. 
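To make the contrast between an unconstrained factorial search space and a constrained, feature-model-like one concrete, the short sketch below enumerates the valid configurations of a toy model. It is written in plain Python purely for illustration; the variable names (button_color, background, discount_badge) and the constraints are hypothetical, and the sketch does not use the actual COMBO domain-specific language.

```python
from itertools import product

# Toy search space: each decision variable and its possible values.
# In a full factorial design, 3 * 2 * 2 = 12 combinations would be tested.
search_space = {
    "button_color": ["blue", "green", "white"],
    "background": ["white", "dark"],
    "discount_badge": [True, False],
}

# Constraints exclude combinations that would be undesirable or infeasible,
# e.g. a button that is invisible against the background.
def satisfies_constraints(config):
    if config["button_color"] == config["background"]:
        return False
    # Hierarchy-like dependency: the badge is only enabled on the dark theme.
    if config["discount_badge"] and config["background"] != "dark":
        return False
    return True

# Enumerate only the valid part of the search space that an optimizer explores.
names = list(search_space)
valid_configs = [
    dict(zip(names, values))
    for values in product(*search_space.values())
    if satisfies_constraints(dict(zip(names, values)))
]
print(f"{len(valid_configs)} valid configurations out of "
      f"{len(list(product(*search_space.values())))} combinations")
```

An optimizer then only searches over the valid configurations, which is how constraints shrink the effective search space before any users are exposed to a variant.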
Our approach of adding constraints and hierarchies (see Section 5) is not new per se, but the use of this in combination with optimization is novel to the best of our knowledge. In practice, less formal methods are used for configuring many overlapping experiments at Google (Tang et al. 2010) and Facebook (Bakshy et al. 2014). Facebook has open sourced parts of their infrastructure for this in the form of a tool (Bakshy et al. 2014) (PlanOut) that can be used to specify experiment designs. The tool also contains a namespace system for configuring overlapping experiments. In both companies' approaches, they have mutual exclusivity constraints where each experiment claims resources, for example, a specific button on a page. A scheduler or other mechanism ensures that experiments can run in parallel without interfering with each others resources. Automated Experimentation with Optimization Algorithms There is an abundance of tools that optimize parameters in software engineering and related fields, e.g., tweaking machine learning hyper-parameters (Borg 2016; Snoek et al. 2012), finding optimal configurations for performance and efficiency (Hutter et al. 2014;Hoos 2012;Nair et al. 2018), tweaking search-based software engineering parameters (Arcuri and Fraser 2011), and the topic of this work with software parameters for business metrics. In the various optimization applications, the assumptions are different on how the optimization problem is structured and what the technical priorities are. For example, when optimizing machine learning hyper-parameters for deep learning it is important to minimize the number of costly experiments. Bayesian optimization with Gaussian processes (Snoek et al. 2012) is often used there, but the approach does not scale beyond a few thousand data points (Riquelme et al. 2018), because the computational cost depends on the number of data points for Gaussian processes. One notable related research field is autonomic computating (Kephart and Chess 2003) that includes self-optimization of performance and efficiency related parameters, such as the loading factor of a hash table or what implementation to use for an abstract data structure. Such performance factors have relatively high signal-to-noise ratio in comparison to metrics involving users and the factors also exhibit strong interactions in terms of memory and CPU trade-offs. Many of these optimization tools (e.g. in Hoos 2012; Nair et al. 2018) assume that the optimization is done before deployment during compilation in a controlled environment. Some recent work has moved the optimization to run time and studied the required software architecture and infrastructure for cyber-physical systems (Jiménez et al. 2019;Gerostathopoulos et al. 2018) and implications (Mattos et al. 2017;Gerostathopoulos et al. 2018) of this change. We have also found several optimization approaches similar to our work, viz., they target large search spaces, with metrics based on many users, and are applied at run time in a dynamic production environment. The most similar work to ours is by Hill et al. (2017) at Amazon where the problem formulation of multi-variate multi-armed bandits (see next section) is first formulated and addressed with a solution. There is also related work on search-based methods (Iitsuka and Matsuo 2015;Miikkulainen et al. 2017;Tamburrelli and Margara 2014) and hybrids between search-based and bandit optimization (Miikkulainen et al. 2018;Ros et al. 2017). 
Search-based tools were first suggested by Tamburrelli and Margara (2014) for automated A/B testing using genetic algorithms, and then independently by Iitsuka and Matsuo (2015) for website optimization using local search. Ros et al. (2017) suggested improving the genetic algorithms with bandit optimization and a steady state population replacement strategy. At Sentient, some of these ideas (Tamburrelli and Margara 2014; Ros et al. 2017) have been implemented in a commercial tool with both genetic algorithms (Miikkulainen et al. 2017) and genetic algorithms with bandit optimization (Miikkulainen et al. 2018). A genetic algorithm maintains a population of configurations and evaluates them in batches. The configurations are ranked by a selection procedure, and those that perform well are recombined with each other with some probability of additional mutation. A genetic algorithm was implemented in our toolkit. However, genetic algorithms were not investigated further in our work because we are not aware of an elegant way of doing personalization with genetic algorithms. Maintaining a separate population for each combination of context variables is not feasible, because each separate population would have too few users. What we implemented in the toolkit was that when a user arrives with a given context, the algorithm tries to match it with a configuration in its population; if there is no match, a new configuration is generated with the selection procedure, coerced to match the context, and added to the population. While that scales better to more context variables, by concentrating effort on popular contexts, it eventually runs into the same problem of too few users per configuration.

Theory on Bandit Optimization of Software

Bandit optimization is a class of flexible optimization methods, see Fig. 1 for a summary.

[Fig. 1 Bandit optimization setting summary. A user at time t of the system provides context c_t and receives a personalized configuration x_t. The user provides the reward y_t by using the software system. An optimizer policy selects configurations that maximize rewards based on a machine learning model θ_t. The model can predict rewards based on configurations and contexts and is continuously updated.]

Univariate bandit optimization is formalized in the multi-armed bandit problem (MAB) (Burtini et al. 2015). The name comes from the colloquial term one-armed bandit, which means slot machine. In MAB, a gambler is faced with a number of slot machines with unknown reward distributions. The gambler should maximize the rewards by iteratively pulling the arm of one machine. Applying MAB to A/B/n testing is sufficiently common to have its own name: bandit testing (such as in Google Optimize). The choice of arm is sometimes called an action but is referred to as a configuration in this work. An optimizer policy solves the MAB problem. Some of the policies are very simple, such as the popular ε-greedy. It selects a random configuration with probability ε, and otherwise the configuration with the highest mean reward. A policy that performs well must explore the optimization search space sufficiently to find the best configuration. Policies must also exploit the best configurations that they have found to get high rewards. This is known as the exploration-exploitation trade-off dilemma (Sutton et al. 1998) in reinforcement learning. In ε-greedy this trade-off is expressed explicitly in the ε parameter.
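As an illustration of the ε-greedy policy, the following is a minimal sketch for the standard MAB setting with binary rewards; the class name and structure are our own and not taken from any particular library.

```kotlin
import kotlin.random.Random

// Minimal sketch of an epsilon-greedy policy for a standard multi-armed
// bandit with binary rewards. Each arm keeps a running count of successes
// and trials; untried arms get an optimistic mean so every arm is tried.
class EpsilonGreedy(numArms: Int, private val epsilon: Double) {
    private val successes = IntArray(numArms)
    private val trials = IntArray(numArms)

    fun chooseArm(): Int =
        if (Random.nextDouble() < epsilon) {
            Random.nextInt(successes.size)  // explore: pick a random arm
        } else {
            // exploit: pick the arm with the highest observed mean reward
            successes.indices.maxByOrNull { i ->
                if (trials[i] == 0) 1.0 else successes[i].toDouble() / trials[i]
            }!!
        }

    fun update(arm: Int, reward: Boolean) {
        trials[arm]++
        if (reward) successes[arm]++
    }
}
```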
A high ε results in random exploration, and a low ε results in pure exploitation of the best configuration, which runs the risk of getting stuck in a local optimum. In the multi-variate case, the MAB setting is extended by a machine learning algorithm that attributes each reward to the variables in the multi-variate arm. This is known as multi-variate MAB (Hill et al. 2017) or, more generally, combinatorial MAB (Chen et al. 2013). Contextual MAB is another, more well-known, extension used for, e.g., recommender systems (Li et al. 2010), which does personalization in the univariate case. In this work, we refer to all of these settings as bandit optimization, but the focus is on multi-variate MAB.

Herein lies the crucial difference between the multi-variate MAB setting and other forms of black-box optimization (where the objective function is unknown). Namely, the algorithm learns a representation in the form of an online machine learning model and optimizes it with respect to its inputs. Some black-box optimization algorithms (Arcuri and Fraser 2011; Hoos 2012; Nair et al. 2018) find and keep track of the best performing data points and iteratively replace them with better performing ones. That is not the case in our work because of the assumption that the signal-to-noise ratio is so low that estimating the performance of a specific data point is inefficient. Other black-box optimization algorithms (Hutter et al. 2014; Riquelme et al. 2018; Snoek et al. 2012) have offline machine learning models that need to keep track of all data points and retrain the machine learning model at intervals, which does not scale well to very large data sets.

To summarize so far, a more precise definition of bandit optimization follows. A user arrives in the system at time step t. A software configuration x_t ∈ X, with x_{t,i} ∈ ℝ for i = 1, ..., n, is chosen for the user with context variables c_t ∈ C, c_{t,j} ∈ ℝ for j = 1, ..., m, where n is the number of configuration variables, m is the number of context variables, X is the search space of decision variables, and C is the context space. When the user is no longer using the software system, a reward y_t ∈ ℝ in some metric is obtained from the user. A policy is tasked with selecting x_t such that the rewards are maximized over an infinite time horizon.

Practical Implications on Software Systems

When applying bandit optimization in practice there are multiple assumptions:

- The number of users is at least in the thousands. Depending on how many variables are included in the search space and the signal-to-noise ratio, the required number of users will be higher or lower.
- The value incurred by a user must be quantified as a single value at a discrete time. This is a simplification, because users use software continuously.
- The optimization is done online at run time with continuous updates. A user's configuration must be generated quickly, so that users do not suffer noticeable delays. For software with an installation procedure (desktop or mobile), the optimization can be done during installation.
- The rewards are assumed to be independently and identically distributed for all users. The algorithms are not guaranteed to perform well when the reward distributions change over time, though they still might perform well.

Additionally, in comparison to A/B testing, bandit optimization requires storing the configuration per user.
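In code, the interaction described in this section can be captured by a small interface; the names below are ours and only meant to make the setting concrete. A configuration is chosen per user from the context and, unlike in A/B testing, stored for that user, and the reward is fed back once the user's session ends.

```kotlin
// Hypothetical interface capturing the bandit optimization loop: a policy
// chooses a configuration x_t given a context c_t and is later updated with
// the observed reward y_t.
typealias Configuration = Map<String, Any>
typealias Context = Map<String, Any>

interface BanditPolicy {
    fun choose(context: Context): Configuration
    fun update(configuration: Configuration, context: Context, reward: Double)
}

class SessionHandler(
    private val policy: BanditPolicy,
    private val store: MutableMap<String, Configuration> = mutableMapOf()
) {
    fun onSessionStart(userId: String, context: Context): Configuration =
        // Unlike A/B testing, the chosen configuration must be stored per user.
        store.getOrPut(userId) { policy.choose(context) }

    fun onSessionEnd(userId: String, context: Context, reward: Double) {
        store[userId]?.let { policy.update(it, context, reward) }
    }
}
```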
Standard procedures for controlled experimentation assign experiment groups with a pseudo-random allocation, with a random seed based on something that is static and unique for the user, e.g., a user's id. As such, a user will have the same experiment group even if their session expires. With bandit optimization, they might be assigned to another group unless the group assignment is stored persistently. Persistent storage of user group assignments has implications on data privacy (Hadar et al. 2018). Mattos et al. (2019) conducted an empirical investigation into bandit testing for software. They found that while the technique performs better at finding good configurations, it can lead to statistical issues such as inflated false positive error rates. They advise applying bandit testing only when the consequences of false positives are low, as is the case with optimization.

Thompson Sampling

The optimizer policy of choice in our work is Thompson sampling, since it often performs better than ε-greedy in general (Chapelle and Li 2011; Russo et al. 2017). The idea is to match the probability of selecting an arm with the probability of it being optimal. It works as follows for standard MAB. A Bayesian posterior distribution is placed on the expected reward of each arm. In each step t, a sample θ̂_{k,t} is drawn from each posterior distribution P(θ_k | y_k), where k is an arm index, θ_k is the prior reward parameter of the respective arm, and y_k its previous rewards. The arm a_t at step t with the highest sample is selected, that is, a_t = arg max_k θ̂_{k,t}. The posterior distribution is updated continuously. For example, if the rewards are whether users click on something or not, then a suitable posterior distribution family for binary rewards is the Beta distribution, parameterized by the number of clicks and non-clicks. Figure 2 shows a simulation of this, comparing Thompson sampling, ε-greedy with ε = 0.2, and random (as in standard A/B/n testing). Thompson sampling and ε-greedy initially behave like the random policy, but improve quickly. Thompson sampling finds the optimal arm, while ε-greedy is stuck with a random arm with probability 0.2.

[Fig. 2 Example simulation of different bandit policies with binary rewards. There are five available arms, four with mean reward 0.1 and one with reward 0.15. The simulation is repeated 100 times and the plot shows the average result. The figure illustrates that Thompson sampling eventually converges to the optimal reward while ε-greedy converges to 0.1ε + 0.15(1 − ε) = 0.14.]

For multi-variate MAB the posterior distribution is a probabilistic machine learning method, denoted q, parameterized by the machine learning model θ. A single joint sample θ̂_t is drawn from the multi-variate distribution P(θ | y) each step. The configuration x_t is then chosen as the one that maximizes the predicted reward under the sampled model, i.e., x_t = arg max_{x ∈ X} q(x, c_t; θ̂_t). Depending on the machine learning method q, this optimization problem is set up differently, and it will be significantly harder to solve than in the univariate case. In MAB, the computational complexity at each step is linear in the number of arms; the arg max calculation can be a for-loop over each sample. For multi-variate MAB, however, the arg max problem at each step is NP-hard. Section 5.2 contains the specifics of how we set up the optimization for the different machine learning methods in the toolkit.

Research Context and Methods

Our research was an on-site collaboration with the e-commerce company Apptus spanning 20 weeks, and was part of a long-term research project.
The research was conducted with a design science methodology, following the guidelines by Wieringa (2014) and Runeson et al. (2020). As such, the toolkit was designed as a solution that addresses an industrially relevant problem, and the validation of the solution is done with sufficient rigor (see Fig. 3 for an overview). This section ends by discussing the limitations of the solution design and the threats to validity of the validation.

[Fig. 3 Research methods overview divided into three stages. Based on previous studies (gray boxes), the theory was identified and the toolkit was designed to support the validation company's product. The toolkit was evaluated with two feature cases that were first implemented in the toolkit and then subjected to simulations.]

Validation Company and e-Commerce

The validation company Apptus is a small Swedish company which develops a platform for e-commerce. Their product platform provides web shops with various data-driven algorithms. It includes a recommender system, product search engine, ads display algorithms, etc. Apptus deploys its software to other companies' web shops, so it has a business-to-business (B2B) relationship with its customers. Apptus has no direct relationship with the end-users of the software (consumers), but has access to consumer data through its customers' web shops. Operating multiple web shops incurs a greater need for personalization (see Section 2.2) to optimize the software for different circumstances.

Experimentation is well established in e-commerce; we believe this is the case for four reasons. First, the consumers often have a clear goal in mind to purchase a product. Second, this goal is aligned with the goals of the web shop companies. Third, there is an industry standard for quantifying this joint goal through the sales funnel of clicks, add-to-carts, and purchases. Finally, consumers tolerate some degree of change in the interface, especially in which ads and products are displayed.

A prior case study by Ros and Bjarnason (2018) outlines how Apptus uses continuous experimentation to improve its product in several scenarios: validating that a change has the intended outcome, manual optimization, and algorithmic optimization. Thus, Apptus is experienced with using optimization in various forms to optimize the web shops. They also use bandit optimization in their product recommendation system (Brodén et al. 2017) and as part of a customer facing experimentation platform (targeted at marketers and software engineers) that optimizes which algorithm should be active in which parts of the web shop.

Research Stages

The design and validation of the COMBO toolkit was done in three stages (see Fig. 3): (1) identifying a problem with industrial relevance through a literature study (Ros and Runeson 2018) and a case study with interviews (Ros and Bjarnason 2018), (2) designing a solution to solve the problem, and (3) validating that the solution works.

Problem Identification

Prior to this study, two studies were performed for problem identification. First, a systematic mapping study (Ros and Runeson 2018) on continuous experimentation identified suitable algorithms for the problem domain and the assumptions in the optimization formulation as stated in Section 3. Second, an interview study with five participants from the validation company was conducted (Ros and Bjarnason 2018).
The interviews investigated the differences between experimentation and optimization approaches, such as the benefits and challenges of the respective approaches. One prominent challenge was that optimization algorithms are specific to a certain circumstance (e.g. product recommendations) and are hard to apply outside these circumstances. This challenge was also present in related work, in its narrow application to visual design and layout.

Solution Design

The design of the toolkit was done in iterations with subsequent validation, to ensure that the toolkit had the necessary functionality to support the validation. The decisions taken in the design include which optimization algorithms to implement, which constraints to support, and how the search space should be specified. The design choices were anchored in two workshops with employees at the validation company. The approach was to be inclusive in terms of optimization algorithms by using available optimization libraries. The constraints supported were simply added as needed. The search space and constraints specification was a choice between meta-programming with annotations, which some related work used (Hoos 2012; Tamburrelli and Margara 2014), and declarative code with a domain-specific language, where the latter was chosen because it is more flexible for users of the toolkit.

Validation

The toolkit was validated on-site at the validation company through two feature cases. Each feature case includes a proof-of-concept demonstration and evaluation simulations. Technical details on the simulation setup are given in Section 6 alongside the presentation of the feature cases. The first feature case, an auto complete widget, was identified in a brainstorming workshop with three employees from the validation company. It was chosen because it has both a graphical interface and is a data-driven component, while being a well isolated part of the site. The second feature case, a top-k categories listing, was selected in a discussion with an employee to push the boundaries of what is technically feasible with the toolkit and because it was an existing feature that had real historic user data.

The proof-of-concept validation was conducted to demonstrate the toolkit's soundness. It included a step to ensure that all necessary variability of the feature case was captured and then demonstrating that it could be implemented in the toolkit. For the auto complete feature we found a listing of the top 50 fashion sites and filtered it down to 37 fashion web shops that had an auto complete widget. We analyzed how the widgets varied and then implemented variables and constraints to sufficiently capture the variability. In addition, there was a brainstorming workshop with a user experience designer to validate the choices. For top-k we chose a client web shop that had a sufficient number of users and a complex category tree and re-implemented the functionality of the original top-k categories.

Simulations of the feature cases were performed to show that the optimization algorithms can navigate the search space under ideal circumstances, within a reasonable data and execution time budget. The use of simulation avoids the complexities of a deployment, and the evaluation can be repeated and reproduced as many times as needed. Also, it enables benchmarking the different algorithms. In the simulation, the optimization algorithm iteratively chooses a configuration from the search space and a reward from the user is simulated to update the algorithm.
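To make the simulation procedure concrete, the sketch below shows the basic loop in simplified form, reusing the hypothetical BanditPolicy interface sketched in Section 3; simulateReward stands in for the surrogate user model described next, and all names are illustrative.

```kotlin
// Simplified, illustrative sketch of the simulation loop. The surrogate user
// model (described below) plays the role of real users: it predicts an
// expected reward for a given configuration and context.
fun runSimulation(
    policy: BanditPolicy,
    simulateReward: (Configuration, Context) -> Double,
    horizon: Int = 10_000
): Double {
    var totalReward = 0.0
    repeat(horizon) {
        val context = mapOf("country" to listOf("SE", "NO", "DK").random())
        val configuration = policy.choose(context)           // the algorithm picks x_t
        val reward = simulateReward(configuration, context)  // surrogate stands in for the user
        policy.update(configuration, context, reward)        // online update of the model
        totalReward += reward
    }
    return totalReward / horizon  // mean reward over the run
}
```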
However, the historic user data could not be used directly to simulate users, because all combinations of the variables in the model are not present in the data. Instead we used the historic data to train a supervised machine learning model that can predict the expected reward of each configuration. This prediction can be used to simulate user rewards from any configuration. The machine learning model is an instance of a surrogate model (Forrester et al. 2008), which is a general engineering technique used as a replacement for something that cannot be directly measured, which are users in this case. Data Collection Both qualitative research notes and quantitative data sets for evaluation were collected. Decisions taken during solution design and implementation of the feature cases were recorded as notes, as recommended by Singer et al. (2008). The notes were consulted during the write up. The notes were of two types. First, in the form of 20 weekly diaries when on-site at the validation company. They contained notes on decisions taken and considered during solution design, and implementation of the feature cases. Second, notes were taken after the three workshops outlined above, (1) when identifying the first feature case, (2) when presenting the toolkit design choices, and (3) when brainstorming the variability of the first feature case. The data sets used to train the surrogate user model were collected at the validation company. The second feature case (the top-k categories) had historic data that was used. For the first feature case (the auto complete widget) we collected data through a questionnaire at the validation company. There we collected screenshots from the 37 fashion web sites with an auto complete widget. Then 16 employees were asked to rate 20 randomly chosen screenshots on a 10 point Likert scale to rate the user experience of the auto auto complete widget. Many of them mentioned that it was hard and somewhat arbitrary. However, the data set was only used to get a somewhat realistic benchmark, in comparison to a completely synthetic benchmark with randomly generated data. Limitations There are technical and statistical limitations to what is possible with the toolkit. The number of variables that can be solved for within a given millisecond budget is limited by the number of constraints, how sparse the configuration is (i.e. how many variables are zero valued), and the complexity of the underlying machine learning model. The ability to make inference from the learned model will also depend on the machine learning model complexity, for example, inference from neural networks is notoriously difficult. This limitation is a fundamental trade-off between algorithmic performance and inference ability. There are also limitations on the impact that can be obtained from applying the optimization. In e-commerce, much of the interface can be measured directly in terms of its impact on revenue, thus the ability to have an effect is promising. We believe the toolkit can be of use also outside of e-commerce, although there must be a way to quantify the user experience of a given feature. Finally, we are not claiming that all experiments in software engineering should be replaced by optimization. For instance, when adding optimization capabilities to a software feature, it would be prudent to validate that the optimization actually works with an ordinary controlled experiment. 
Threats to Validity Here threats to validity that threaten the validation are identified along with the steps taken to mitigate them. The external validity of the feature cases are threatened by that the evaluation is performed with only one company. This is mitigated by that the validation company has multiple clients that operate web shops. Thus, the feature case implementations are relevant to multiple company contexts, still within the e-commerce domain though. The internal validity of the validation is dependent on the quality of the datasets and the resulting surrogate user models. If the surrogate user models are too easily optimized by an algorithm the simulations will be unrealistic. For example, if there are regions in the optimization search space with low data support the surrogate user model might respond with a high reward there. To mitigate this threat we evaluated the performance with a baseline random algorithm and an oracle algorithm with access to the surrogate user model. In this way, an upper and lower bound of reasonable performance is clearly visible. Also, both the external and internal validity of the evaluation would be increased if the toolkit was evaluated in a production environment in a controlled experiment. A deployment to a production environment would uncover potential problems that cannot be seen through simulations. Thus, an actual deployment to real users is a priority in future work. However, the approach with surrogate user models for simulations is sufficient to demonstrate the toolkit's technical capabilities. The use of the surrogate user models also increases replicability, since at least the simulations are fully repeatable (see Appendix). Tooling Support for Bandit Optimization The open source toolkit Constrained Online Multi-variate Bandit Optimization (COMBO) 1 is a collection of bandit optimization algorithms, constraint solvers, and utilities. This section will first present concepts in the toolkit and then specifics of the included algorithms. COMBO is written in Kotlin and can be used from any JVM language or Node.js. Kotlin is primarily a JDK language but can transcompile to JavaScript and native with LLVM. To use the toolkit one must first specify a search space of variables and constraints in an embedded domain-specific language (DSL) and map them to software configuration parameters. Variables can be of different types: boolean, nominal, finite-domain integers, etc. Though COMBO is optimized for categorical data rather than numeric data. For example, the internal data representation is by default backed by sparse bit fields. Constraints can be of types: first-order logic, linear inequalities, etc. For example, if one boolean variable b requires another variable a the constraint b ⇒ a can be added. As mentioned in Section 3, some use cases of the toolkit require context variables for personalization. There is no explicit difference between decision variables and context variables in COMBO-any variable can be set to a fixed value when generating a configuration and have the rest of the variables chosen by the algorithm. Variables and constraints are always specified within a model. There can be any number of nested models in a tree hierarchy. The hierarchy is inspired from the formal variability management language of feature models (Cámara and Kobsa 2009). The hierarchy fulfills two additional purposes. 
First, it provides a mechanism for the case when a variable has sub-settings but the variable itself is disabled; then the sub-settings should not be updated. Such a value is counted as missing by the bandit optimization algorithm. A variable can also be declared as optional, in which case it has an internal indicator variable that specifies whether it is missing. Second, the hierarchy supports lexical scope to enable composability. That is, models can be built separately in isolation and then joined together in a superordinate model without namespace collisions, because they have different variable scopes. Each model has a proposition that governs whether the variables below it are toggled. The constraint b ⇒ a can also be expressed implicitly through the tree hierarchy if a is the root variable and b is a child variable (a sketch of what such a model can look like is given further below). The example shows a basic model of a search space with a variable b and the root variable a. It has the implicit constraints b ⇒ a and a, with two solutions: (a, b) and (a, ¬b). Note that the root variable is always a boolean, which is also added as a unit constraint.

Figure 4 shows a more advanced example: there are five variables, and three of them are defined in the root model. The variable Product cards is the root of a sub model and is the parent of the variables Two-column and Horizontal, so there are implicit constraints Two-column ⇒ Product cards and Horizontal ⇒ Product cards. Conceptually, in the application that uses COMBO, the variables Two-column and Horizontal modify the behavior of the Product cards, so the first two variables can only be true when Product cards is enabled. Further details of the DSL illustrated by the example in Fig. 4 are summarized below.

Constraint Solvers

The model specifies a constraint satisfaction problem which is solved with, e.g., finite-domain combinatorial constraint programming solvers (Rossi et al. 2006) or boolean satisfiability (SAT) solvers (Biere et al. 2009), depending on what type of constraints and variables are used. The primary use of the constraint solvers is to perform the arg max calculation of multi-variate Thompson sampling, as part of the Optimizer Policy box in Fig. 1. We see three more uses for constraint solvers for experimentation that are enabled by the toolkit. First, a constraint solver can be used to formally verify the model, which is the main point of Cámara and Kobsa (2009), for example by verifying that each variable can be both enabled and disabled. Second, randomly ordered permutations of configurations can be used to sample the search space for integration testing purposes. Third, in a large scale MVT with a fractional factorial design (see Section 2.1) and constraints between variables, being able to generate random solutions is required. The problem of generating random solutions uniformly is known as generating witnesses (Chakraborty et al. 2013).

The toolkit includes the SAT solver Sat4j (Le Berre and Parrain 2010) and the constraint programming solver Jacop (Kuchcinski 2003). It also features two search-based optimization algorithms: genetic algorithms and local search with tabu search, annealing random walk, and unit constraint propagation (Jussien and Lhomme 2002). The extensions to local search are crucial because there are mutual exclusivity constraints created through nominal variables that would otherwise be hard to optimize over. When used in combination with the bandit optimization algorithms listed below, the solvers should optimize some function rather than just decide satisfiability.
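Returning to the model declarations discussed above, the following is a hedged sketch of roughly what such models can look like; the exact DSL syntax of COMBO may differ, and the builder functions used here are illustrative rather than the toolkit's actual API.

```kotlin
// Hedged, illustrative sketch of a COMBO-style model declaration (the actual
// DSL syntax may differ). The root variable a has a single child variable b,
// which yields the implicit constraint b => a and the two solutions
// (a, b) and (a, ¬b).
val basicModel = model("a") {
    bool("b")
}

// A nested sub model, in the spirit of Fig. 4: Two-column and Horizontal are
// children of Product cards, giving the implicit constraints
// Two-column => Product cards and Horizontal => Product cards.
val autoCompleteModel = model("Auto complete") {
    // ... other root-level variables omitted
    model("Product cards") {
        bool("Two-column")
        bool("Horizontal")
    }
}
```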
The specifics of how the solvers optimize a function depend on the bandit algorithm. In general, both the black-box search-based methods and the constraint programming solvers can be applied to any function, while the SAT solvers can be applied to optimize linear functions with integer weights if they support MAX-SAT.

Bandit Optimization Algorithms

The sections below give some intuition for how the multi-variate bandit optimization algorithms included in the toolkit work. They include the decision tree bandit (Claeys et al. 2017; Elmachtoub et al. 2017), random forest bandit (Féraud et al. 2016), generalized linear model bandit (Chapelle and Li 2011; Hill et al. 2017; Li et al. 2010), and neural network bandit (Riquelme et al. 2018). All of them have custom implementations in COMBO, except the neural network bandit, which is implemented using the Deeplearning4j framework.

Decision Tree and Random Forest Bandit

Decision trees recursively partition the search space into homogeneous regions. They have been formulated as a contextual MAB algorithm with Thompson sampling (Elmachtoub et al. 2017) and used for A/B testing in practice (Claeys et al. 2017). The algorithm has a tree for each arm that partitions the contextual variables. The trees are updated iteratively using the VFDT (Domingos and Hulten 2000) procedure, where each tree initializes with a root node and adds splits greedily. When selecting an arm, the statistics of the leaf node corresponding to the user context are used by an optimizer policy. When adapting this to multi-variate bandit optimization we use only one tree that splits on both decision variables and context variables. Due to the partitioning of the search space, the posterior distributions used for Thompson sampling can be defined separately.

The policy of selecting a software configuration from a given tree with Thompson sampling can proceed in three steps. First, sample a value from the posterior distribution of all leaf nodes and select the leaf node with the maximum sampled value. Second, calculate the configuration from the leaf node by following the leaf node to the root node and setting all parameters according to the splits taken along the path. Third, any x-value unset in the selected leaf node can be chosen at random. For example, consider the middle tree in Fig. 5 with a root split on x_3, and then a further split on x_1 for x_3 = 1, with x ∈ {0, 1}^3. If the leaf node where x_3 = 1 and either x_1 = 0 or x_1 = 1 is selected, then x_2 can be randomly selected. For the leaf node where x_3 = 0, both x_1 and x_2 are randomly selected while satisfying the constraints.

[Fig. 5 Example of a random forest bandit with three trees. The square leaf nodes show the bootstrapped statistics for binary success/failure. Statistics for potential splits are also kept at each leaf node but are not shown.]

Random forests are an ensemble learning version of multiple decision trees that often performs better than an individual tree. Each tree sees a random subset of variables and data points. Random forest also has a contextual MAB formulation (Féraud et al. 2016), although the derivation of that work to multi-variate MAB is unclear. Instead, we perform a Monte Carlo tree search (Browne et al. 2012) over time by augmenting each decision tree in the ensemble with statistics at each split node that aggregate the data of all children below it. When selecting a configuration from the random forest the following procedure is applied iteratively.
Aggregate all split nodes at the top level of each decision tree by their decision variable. Sample a value from each decision's pooled statistics and select the best split. Then update the top level node of each tree by following along the split decision. Continue until all trees are exhausted. Consider the example random forest in Fig. 5. The posterior distributions for the top level decisions are: x_1 ∈ {0, 1} with Beta(8, 12) versus Beta(4, 14), and x_3 ∈ {0, 1} with Beta(3.5, 14) versus Beta(8, 12.5). Note that the statistics for the middle tree's x_1 node are not counted in the x_1 decision, since that node is for x_1 conditioned on x_3 = 1, i.e., P(θ_1 | y_1, x_3 = 1). Then, four samples are drawn from the distributions and the decision corresponding to the highest sample is selected. Suppose x_3 = 1 is selected; then the left tree is unchanged, the middle tree will have a new top node x_1, and the right tree is exhausted. The next decision is only on x_1, with distributions Beta(3.5, 10) versus Beta(7, 9.5); after that decision all trees are exhausted.

Generalized Linear Model and Neural Network Bandit

Bandit optimization with linear models for contextual MAB has seen lots of use for recommender systems (Chapelle and Li 2011; Li et al. 2010) and has been adapted to multi-variate bandit optimization at Amazon (Hill et al. 2017) with Thompson sampling. Any type of regression that fits in the generalized linear model (GLM) framework can be used, such as logistic regression for binary rewards or linear regression for normally distributed rewards. Each variable in the search space and context space has an estimated weight in the weight vector θ. The predicted value is a linear combination of the weights θ and the concatenation of the variables x_t and the context c_t. For measuring the uncertainty of a prediction, we also need a continuously updated error variance-covariance matrix Σ of the weights. For ordinary linear regression models, Σ is σ²(XᵀX)⁻¹, where σ is the standard error and X is a matrix where each row is a user's configuration. As more data points are added, the values in the covariance matrix will shrink. For generalized linear models, the Laplace approximation (Chapelle and Li 2011; Russo et al. 2017) yields a similar calculation. The linear model is updated continuously with second-order stochastic gradient descent using the covariance matrix in the updates, where ∇g is the gradient of the prediction error of a specific input vector (representing a software configuration) at time step t.

With this setup, Thompson sampling can be used as a policy as follows. Generate a sampled weight vector θ̂_t from the multi-variate normal distribution N(θ_t, Σ_t), where θ_t are the model weights and Σ_t is the error variance-covariance matrix of the weights. Subsequently, the action x_t chosen at step t is the one which maximizes the expected reward from the sampled weights using the linear prediction, i.e., x_t = arg max_{x ∈ X} θ̂_t(x, c_t), subject to the constraints. This objective function can be solved efficiently with many approaches, including the search-based methods that Hill et al. (2017) used and constraint programming.

The neural network bandit follows the neural linear approach (Riquelme et al. 2018), where a GLM bandit as above is placed on top of the representation learned by a neural network. Having the neural network be continuously updated can be done in many different ways. In COMBO, the linear model in neural linear is updated continuously on every step, but the network is updated in batches. The batch update is kept for some configurable time and used to train the neural network over multiple epochs, after which it is discarded.
The objective function is significantly harder for the neural networks than for GLM. Constraint-programming is possible but it will be very slow due to poor constraint propagation through the network. The only appealing option in the toolkit is the search-based methods. Validation The COMBO toolkit was validated by implementing two feature cases and then evaluated by simulating their usage as realistically as possible using data from the validation company. One of the cases modifies an existing feature to make it data-driven and the other improves and generalizes an existing algorithm. As detailed in the Section 4.2.3, the procedure was done in four steps as follows. First, the variability of the feature case was analyzed. Second, the variability was implemented in the toolkit as a proof-of-concept validation. Third, a data set was collected with configurations and reward measurement pairs and a surrogate user model was trained using the data set. Finally, the surrogate user model was used to repeatedly simulate users in a controlled environment. The simulation results are summarized jointly in Section 6.3 and replication instructions are available in Appendix. Feature Case 1: Auto Complete Widget for Product Search The validation company has recently expanded to provide graphical interfaces to their algorithms, targeted at fashion web shops. One of the algorithms with a new graphical interface that they provide is the auto complete search that pops down when a user starts typing a search query. The case was chosen in a workshop session for two reasons. First, there is no industry consensus on how an auto complete widget is supposed to look. There is not even an agreement on what type of items the widget should show: suggested search terms, brands, product categories, and/or products. It also might be the case that different web shops require different configurations depending on available product data and how big and diverse the product catalogue is. Second, it is an isolated component with low risk that does not affect the rest of the site appearance. Model Variability There were three inputs to the decisions taken for modeling the auto complete feature. First, a brainstorm session with a user experience designer was conducted. During the session, several existing auto complete widgets were investigated. The optimization target was decided to be usability, such as whether or not the suggestions saw use or not. It was decided to include only user experience variables and exclude visual design parameters from the model. This was because the validation company preferred if the visual design was consistent with the web shops' general style. Examples of user experience variables are: the types of data to use, whether to have two columns, whether prices and/or sales indicator should be shown with images, etc. Example design parameters are: colors of highlight and borders, font selection and size, etc. Second, we used guidelines produced from a company specialized in user experience (UX) research for e-commerce: Baymard Institute. They have conducted user tracking sessions and synthesized advice for auto complete and more. The guidelines are not free and the specifics cannot be disclosed, other than that as a requirement from the validation company the guidelines should be adhered to in the designs, either as a decision variable in the optimization or as a constraint. 
It included things like the maximum number of items to display, how to keep the different data types separate, and some common specific pitfalls. Third, screenshots from 37 fashion web sites were analyzed to serve as test cases, such that the model captures all the salient variability on the sites. Only variables with five or more occurrences were included in the model.

Model Implementation

The resulting model of the auto complete in the COMBO DSL can be seen in a much simplified form in Fig. 4. Some visual renditions of the model are given in Fig. 6. The full model has 32 variables (or 53 binary variables), with 28 explicit constraints and 43 implicit constraints from the hierarchy or variable specification.

[Fig. 6 Visualization of different auto complete widgets. To the left are two randomly generated designs and to the right is one with a high score. The scores of the surrogate user model are in increasing order from left to right, with scores −0.02, 0.01, and 0.31, where higher scores indicate better perceived user experience. All were generated with constrained equal height for illustration purposes.]

Some relevant parameters not visible in Fig. 6 are: underlined highlight, strict search matching, images with fashion models, etc. Among others, there are constraints to calculate the total discretized width and height of the widget (which can be used to generate widgets of specific dimensions), constraints for exclusive variables (e.g., in Fig. 6 the inlined categories that say "in Men" to the left and the counts to the right occupy the same space), and a constraint that there must be at least either search term suggestions, product cards, or category suggestions. After the data was collected, it took the first author 10 h to construct the model. Having test cases available made the process of eliminating invalid variants much easier, such that valid combinations were not accidentally removed. Also, the ability to do formal validation was useful, by querying the system for whether a specific combination was valid or not.

Surrogate User Model Implementation

Since the functionality of the auto complete widget was new at the time, there was no data to base the surrogate user model on for the simulations. Instead, a questionnaire on the 37 collected sites was constructed; for each web shop, a search on the ambiguous search term 'shi' was conducted. Then participants were asked to score the usability of 20 randomly chosen web shops on a 1-10 scale. In total, 16 participants completed the questionnaire, which resulted in 320 data points. A dataset was then constructed by describing each web site in terms of the model. The score was z-normalized to mean 0 and variance 1; as such, the leftmost widget in Fig. 6 is below average while the middle one is slightly above. The dataset was used to train the surrogate user model. Linear regression was chosen since there were not enough data points for anything more sophisticated. Generating a simulated reward for bandit feedback was then done by making a prediction with the surrogate user model and adding noise estimated from the standard error of the fitted model. Variables were added to handle the following confounding factors: the person scoring the web site, whether suggested terms and images were relevant to the search, whether there were duplicated suggestions, and what genders the search results were targeted to (male, female, mixed, and unisex). In addition, some pairwise interaction terms were added.
These extra variables were not part of the search space, but they improved the predictive power of the surrogate user model. The final surrogate user model size was 141 binary variables and 141 constraints (coincidentally equal).

Feature Case 2: Top-k Categories

Many web shops have organized their product catalogue as a category tree (cf. Fig. 6 under the Categories headings). The validation company provides many algorithms related to the category tree; one example is displaying a subset of the top-k most relevant categories, where k is the number of categories to display in a given listing. A naïve algorithm would be to simply display the most clicked categories, but this might not sufficiently cover all users' interests. The company already has more advanced versions in place (Hammar et al. 2013). The purpose of this feature case is to show that a generic implementation of the feature is possible using the toolkit. Another goal was to evaluate the algorithms with a more challenging optimization problem than the previous feature case. The search space is much larger and, since the data is from real usage, the signal-to-noise ratio is lower.

Model Variability

Since this feature is a re-implementation of an existing feature, we could select a real web shop and use its category tree. The chosen web shop had sufficient data volumes and a category tree of medium size. The category tree has 938 nodes, with a maximum depth of 4. The web shop that the data came from operates in three different countries. The country that a user comes from was added as a nominal variable to the model. In the simulations the country variable was used for personalization and was generated randomly.

Model Implementation

The model corresponds directly to the category tree (see a simplification in Fig. 7); the actual model is programmatically built differently for each specific web shop. A constraint is added at the bottom of each sub model to enforce that one of the sub model's variables is active. In addition to the constraints from the hierarchy, there is also a numeric variable k that controls how many categories there can be. A cardinality constraint ensures that the number of categories is less than or equal to k. The k-variable can be set for each user or chosen by the decision algorithm. The total model size is 1093 variables with 1101 binary values and 1245 constraints.

Surrogate User Model Implementation

The data set for the surrogate user model was collected with 2 737 568 configuration and reward pairs. Each data point was constructed by observing what categories were clicked on for each consumer, and a reward of 1 was received if the consumer converted their session to a purchase and 0 otherwise. Only data points with at least one click on a categories listing were eligible. In this case, the parameter k is derived from historic user data in the training set for the surrogate user model. This means that the data set was collected for a slightly different scenario than the one it is used for. One artifact of this is that the learned surrogate user model is maximized by having k = 100, since that correlates with users that spend more time on the site and are likely to convert to customers. However, this effect would not be present in the actual use case. To counteract this, the k parameter is fixed to a specific value (k = 5) during simulation. Since there was a lot of data and the point was to make a challenging problem, a neural network was chosen as the surrogate user model.
A standard feed-forward neural network was trained with PyTorch for 30 min on desktop hardware. There was first a dropout layer; then two hidden layers with 15 nodes each, ReLU activation, and batch normalization; and finally a softmax output.

Simulation Evaluation Results

All bandit optimization algorithms from Section 5.2 and two baselines were evaluated on both surrogate user models; see an overview of the results in Table 1. Again, these results do not provide strong evidence in favor of one algorithm or another, but they show the feasibility of the approach and highlight important choices in algorithm design.

[Table 1 Mean rewards is the maximization target. Choose and update are how long the algorithm takes to choose a configuration and to update it, respectively.]

Local search was used as the optimizer by all the algorithms with the same configuration, except neural linear, which was tuned to improve speed at the expense of performance. The simulations were done on the JVM with desktop hardware on an 8-core computer. All simulations in the table and figures were repeated for multiple repetition runs: 1 000 runs for auto complete and 200 runs for top-k categories. In each repetition, each algorithm starts out from a blank slate with no learned behaviour. The algorithm iteratively chooses a configuration and updates the underlying machine learning model at each time step, for a time horizon of T = 10 000 steps. As such, all numbers in Table 1 are means of means. Each algorithm has several hyper-parameters, which are algorithmic parameters that are tweaked before the simulation begins. They were specified with a meta-model in COMBO and optimized with a random forest bandit. The following summary contains the most important hyper-parameters of the respective algorithms:

- Random: Baseline for comparison. The configurations are generated uniformly at random, with the only criterion that they should satisfy the constraints.
- Oracle: Baseline for comparison. This uses the local search optimizer to maximize the surrogate user model directly, which the other bandits do not have direct access to. The global maximum for auto complete is 0.3953 and for top-k categories it is 0.1031. The numbers for Oracle in Table 1 show how far from optimal the local search optimizer is on average. Since the surrogate user model has additional interaction terms, the optimization problem is harder for Oracle than for some algorithms.
- DT: Decision tree bandit. The hyper-parameters were related to how significant a split should be (the δ and τ parameters of VFDT (Domingos and Hulten 2000)).
- RF: Random forest bandit with 200 decision tree bandits (as in DT above). The number of trees in the forest does have an impact on performance but has diminishing payoff after 100. The hyper-parameters were both the parameters from the decision tree bandit and the parameters for the bootstrapping procedure of a standard random forest, i.e., how many variables and data points each tree should have.
- GLM_diag: Generalized linear model bandit with a simplified diagonalized covariance vector. The hyper-parameters were the prior on the weights, a regularization factor, and an exploration factor.
- GLM_full: The same as GLM_diag above but with a full covariance matrix. The hyper-parameters were also the same.
- NL: Neural linear bandit with logit output, ReLU activation on the hidden layers, and weight-decay regularization. The neural network optimizer was RMSProp running on a CPU.
Since neural linear combines the representational power of a neural network with a GLM_full bandit, it inherits the hyper-parameters of the GLM bandit. It also has hyper-parameters in the number of hidden layers and their widths, the mini-batch size of updates to the network, the learning rate of the optimizer, the number of epochs to repeat data points, and the initialization noise of the weights.

Table 1 summarizes the results of all algorithms for the feature cases, with both mean calculation time and mean rewards. The dimensions are as follows. Mean rewards is the maximization target and is the average expected reward per repetition. The numbers in parentheses are standard deviations between repetition runs for mean rewards. The mean rewards with bold emphasis are statistically significant with a p-value below 0.0001. Algorithms with high standard deviation tend to get stuck in local optima for some runs of the simulations. Choose measures how long the bandit algorithm takes to generate a software configuration in milliseconds, and update time how long it takes to update the bandit optimization algorithm. The choose time is critical to keep low in order not to degrade user experience; the update times are not as critical as long as they are not too excessive, since they cannot be fully parallelized.

Figures 8 and 9 further illustrate the differences in performance of the algorithms. Figure 8 shows the performance over each time step, averaged over the simulation repetitions. The figure illustrates that the specific choice of time horizon T = 10 000 does have an impact on performance; the results could differ if the simulation were to continue for 10 or 100 times as long. Figure 9 shows the overall variation in mean rewards between repetitions with boxplots of the quartiles and outliers outside the interquartile range.

The results show that the random forest (RF) bandit performs well in terms of rewards for both auto complete and top-k, while the neural linear (NL) bandit has poor performance. Clearly, the NL bandit needs more data to converge. As evident from Fig. 9, the performance difference between the algorithms is more pronounced in the top-k categories feature. Here it is clear that the random forest bandit is needed for its improved representational power in comparison to the simpler models. However, as can be seen in Fig. 8, the generalized linear model bandit with full covariance matrix (GLM_full) converges quicker in the auto complete feature.

It should be noted that some algorithms are favored over others in the simulation due to the choices in the surrogate user model. That is, the machine learning model in the top-k categories' surrogate user model is the same as the one used by the neural linear (NL) bandit. Also, we did not add any interaction terms to the linear bandits, which could have been done. We reasoned that there will almost always be unmodeled interaction terms in real world applications. Since the surrogate user model for auto complete had pairwise interaction terms and the linear models had none, those effects could not be captured by the linear bandits. In the top-k categories feature the performance would probably also improve with interaction terms. Initially, before adding hyper-parameter search, the GLM and NL bandits performed much worse than their final performance. Thus, they derived a higher benefit from tuned hyper-parameters. As such, the applicability of GLM and NL depends on having a simulation environment in which to search for hyper-parameters.
In addition, NL is very hard to tune correctly in comparison to the other methods, as is evident from the wide performance spread and the large number of hyper-parameters. The outcome of a network architecture choice is also hard to predict. Adding more network nodes or layers increases the representational power of the algorithm, but increases convergence time, which also affects performance.

Regarding the time estimates in Table 1, all bandit algorithms are usable within reasonable time, with a choose time below 50 ms. The clear winner in both choose and update times is the decision tree (DT) bandit algorithm. The effect of scaling to more variables on choose and update times can also be seen in the table. The choose time for the RF bandit (see Section 5.2.1) scales poorly, though it can be controlled by limiting the number of trees and the number of nodes and variables per tree. The simulations for the neural linear (NL) bandit are slowest, since they are dominated by the non-parallelizable updates.

All the bandit optimization algorithms presented here have trade-offs between performance and choose and update time efficiency, for example, the maximum number of nodes in the decision tree (DT), the number of trees in the random forest (RF), the number of hidden layers in neural linear (NL), or the number of interaction terms in the generalized linear models (GLM). We have not fully explored this trade-off, other than stating the specific choices we made. We suggest that when deciding on an actual algorithm, developers should start with the threshold on choose time that is acceptable and then find the best algorithm that stays below the threshold. In summary, the random forest (RF) bandit is the clear winner in the simulations and should serve well as a good default choice. It achieves the highest performance for both feature cases and had few outliers with runs of poor performance (cf. Fig. 9).

Discussion

We introduced the Constrained Online Multi-variate Bandit Optimization (COMBO) toolkit, used for designing software that improves over time. Using the toolkit, each user receives their own configuration and the toolkit optimizes the overall user experience. It contains machine learning algorithms and constraint solvers that can be applied to optimize a search space specified in a domain-specific language. We used the toolkit to model the variability of two features relevant to an e-commerce company called Apptus. Thereby we demonstrated that the toolkit can be applied in industry-relevant settings. In this section, the implications of putting this into practice are discussed.

From Continuous Experimentation to Continuous Optimization

We define continuous optimization as a practice that extends continuous experimentation further by having an algorithm optimize software for better product user experience and business value. The practice entails having a decision algorithm jointly co-optimize the variables in a production environment with user data. As with continuous experimentation, the variables in the optimization can be anything, e.g., details in the visual design and layout, user experience flow, or algorithmic parameters on a web server that impact user experience. We see two main reasons why continuous optimization will improve products. First, very large search spaces can be explored by an algorithm. This means that the optimization algorithm might find solutions that developers might otherwise overlook. Miikkulainen et al.
(2017) mention this as a common occurrence in their commercial tool for visual design. Second, according to Hill et al. (2017), it enables personalization of software to users by having variables that describe users in the optimization (e.g., device type and location). As many parameters as needed can be added to the search space in order to finely segment users, so that the algorithm can find different solutions for different users.

According to Fitzgerald and Stol (2017), discontinuous development is more important than continuous, meaning that product innovation is more important than refinement. We believe that the introduction of a toolkit like COMBO to a development process is not a contradiction of this. Following the reasoning by Miikkulainen et al. (2017), continuous optimization can de-emphasize narrow incrementalism by offloading parts of the decision-making process so that developers can focus on the more important parts. Thus, continuous optimization offers a complementary approach to software design, by loosely specifying implementation details and letting the algorithm decide instead.

Based on the procedure used to build the feature cases in Section 6, we propose a process for how continuous optimization should be conducted in industry with continuous software development. The validation of the process is preliminary. The four steps of the process are: (1) investigate and prioritize the variability of the feature by user experience (UX) research methods, data mining, or prototype experiments, (2) formulate a model of the variables in the optimization search space and add constraints to prune invalid configurations, (3) tweak the algorithmic performance in offline simulations, and finally (4) validate the solution in a controlled experiment with real users. The process can then restart from step 1. This process is similar to one that is reportedly used for developing recommender systems at Netflix (Amatriain 2013) and Apptus (Ros and Bjarnason 2018). Simulations are also used at both companies to evaluate changes to their recommender systems in fast feedback cycles. If the effect of a change is evaluated to be positive in the simulation, then the change is deployed and subjected to a controlled experiment (i.e., an A/B test) in a production environment with real users.

Considerations for What Metric and Changes to Optimize for

Experimentation and optimization are done with respect to a given metric. However, quantifying business value in metrics is a well-documented challenge for many software companies (Fabijan et al. 2018b; Lindgren and Münch 2016; Olsson et al. 2017; Yaman et al. 2017). Having many metrics in an experiment is one coping strategy, in the hope that together they point in the right direction. Hundreds of metrics are reportedly used for a single experiment at Microsoft (Kevic et al. 2017; Machmouchi and Buscher 2016). For optimization, multi-objective optimization (Nardi et al. 2019; Sun et al. 2018) can be applied offline at compile time, where someone can manually make a trade-off between metrics from a Pareto front. However, for the online bandit optimization algorithms, a single metric is required to serve as the reward (though that metric can be a scalar index).

As mentioned in Section 4.1, there are established metrics in e-commerce that measure business value. Optimizing for revenue or profit is possible, but only a few users convert to paying customers.
Updates to the algorithm will also be delayed, because the algorithm has to wait until a session expires before it can determine that the user did not convert. In addition, e-commerce companies will have different business models that can result in needing other metrics. For example, they might want to push a certain product because it is overstocked, or they might want as many consumers as possible to sign up for their loyalty club so that they have a recurring source of revenue. All of these volatile factors make optimizing for business value challenging. In e-commerce, it is unlikely that a change in the user interface will instill a purchasing need in consumers. The way that an optimization algorithm can affect business value is rather by removing hurdles in the user experience that would otherwise make a consumer turn to a competitor. For example, if a web shop is judged to be untrustworthy due to the impression its design gives, customers will not want to buy from it. Consequently, it might be better to optimize with user experience metrics, since all users can contribute data to a user experience metric even if they do not convert. Therefore we argue that, if possible, user experience metrics should be the default choice for optimization before business value metrics. User experience metrics come with their own set of challenges. For instance, optimizing the number of clicks on one area of the user interface will probably lead to a local improvement, but possibly at the expense of other areas of the site. This effect, where changes shift clicks around between areas, is known as cannibalization (Dmitriev and Wu 2016; Kohavi et al. 2014). If the area that gets cannibalized has higher business value than the cannibalizing area, the optimization can even be detrimental. This has been researched at Apptus (Brodén et al. 2017) for their recommender system in particular. If the recommender system is optimized for clicks, it can distract consumers with interesting products, rather than products that they might buy. Ultimately, different parts of the user interface can be optimized for different things. Each interface component is designed to fulfill a goal, and that is the goal to be quantified. Returning to the feature cases from Section 6, in the auto-complete feature the goal was to aid users in the discovery of products through the search. Thus, whether users take the aid or not (measured as click-through rate) is a reasonably risk-free optimization metric. In the top-k categories, using clicks seems riskier, since some categories can distract users or might lead them to believe that the store does not sell certain products that it does sell. For that reason, business value metrics seem fitting. Whether this reasoning is correct or not is something that can be validated in an A/B test once the optimization system is put into production. Future Directions Future work is required to strengthen the evaluation of COMBO to make a stronger claim about the applicability and suitability of the toolkit. A more thorough continuous optimization process would be useful as well. Below we present remaining technical barriers for widespread adoption of tools like COMBO that require further study, in no particular order. Ramifications on Software Quality and Testing The continuous optimization practice that we advocate is not without risks. Both the need to maintain machine learning models and the increased variability of software, caused by optimization, are challenging to handle on their own. Sculley et al.
(2014) from Google describe how maintaining machine learning models is a source of technical debt; this has also been studied extensively by software quality researchers (Masuda et al. 2018). Regarding software testing, the model-based approach (Chen et al. 2009; Kang et al. 2002) to experimentation (Cámara and Kobsa 2009) and optimization can be used both for formal verification and for software testing (see Section 5.1). Much has been published on model-based testing (Utting et al. 2012), also for user interfaces specifically (Silva et al. 2008). Since a model is built for the optimization in any case, focusing more on integrating model-based testing techniques into the toolkit would be fruitful. Concept Drift for Bandit Optimization In machine learning, concept drift occurs when the environment that the machine learning model has been trained in changes over time, i.e., when users change their behaviour. There are general approaches to detect and be more robust against concept drift, e.g., Minku and Yao (2011), and specific solutions for software engineering related applications (Kanoun and van der Schaar 2015; Lane and Brodley 1998). Those approaches cannot be directly applied to this work, since detection of concept drift is not enough. For univariate multi-armed bandits, the solution is usually to apply a moving window to the arms' descriptive statistics (Brodén et al. 2017; Burtini et al. 2015). This adds more complexity to the solution, since another hyper-parameter needs to be estimated, i.e., how long the window should be. Also, unlearning specific data points for multi-variate multi-armed bandits is harder than unlearning for descriptive statistics. Thus, more work is required to study the impact of, and approaches for, concept drift in this domain. Continuous Model Updates Adding variables to or removing variables from the optimization search space must be done without restarting the optimization. For linear models, decision trees, and random forests, model updates are straightforward both to implement and to make inferences about. For neural networks, the effect of a change is unpredictable; the algorithm might not even converge to a new solution, so it could be better to restart the model. This is further discussed in the technical debt for machine learning paper (Sculley et al. 2014). Furthermore, users will have their existing configurations invalidated when the underlying model is updated. Their configuration in the new model's search space should preferably not be drastically different from their old one, to avoid user confusion. We plan to add support to the toolkit for generating configurations that minimize the distance between a configuration in the old search space and one in the new search space, while at the same time maximizing the expected reward of the new configuration. Bandit Optimization Algorithms Several bandit optimization algorithms are included in the COMBO toolkit. Still, we have only begun to explore all the design options for algorithms. For example, ensembles of several bandit optimization algorithms could be applied to improve performance, by initially using algorithms that learn quickly and then gradually switching to algorithms with better representational power. Algorithm Tuning and Cold Start Based on the simulation evaluation in Section 6.3, we advise against just using default settings for a bandit optimization algorithm.
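A minimal sketch of what such simulation-based tuning could look like is given below; the simulate function is a toy stand-in for replaying logged sessions against a candidate configuration, and the hyper-parameter names simply mirror the random forest settings discussed earlier rather than the actual COMBO API.

```python
# Toy offline grid search over bandit hyper-parameters. simulate() is a
# stand-in: in practice it would replay or simulate user sessions against
# the candidate configuration and return an estimated mean reward.
import random
from itertools import product

def simulate(n_trees: int, max_nodes: int, n_sessions: int = 5000) -> float:
    random.seed(hash((n_trees, max_nodes)) % 2**32)  # deterministic toy outcome
    return sum(random.random() for _ in range(n_sessions)) / n_sessions

grid = product((10, 50, 100), (16, 64, 256))         # (n_trees, max_nodes) candidates
best = max(grid, key=lambda cfg: simulate(*cfg))
print("best (n_trees, max_nodes):", best)
```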
Performance can improve drastically when hyper-parameters are tuned in simulations, though we refrain from giving numbers on the improvement, since the default parameters are somewhat arbitrarily set in the first place. However, this will be hard when implementing a new feature with no data. This was the case for the auto-complete feature in Section 6.1, where data was collected manually, which might be too expensive for regular software development. Regardless, the constraint-oriented approach in the COMBO toolkit can be used here to exclude incompatible design choices observed during feature development (possibly in conjunction with informative Bayesian priors). The situation is analogous to the cold-start problem in recommender systems (Schein et al. 2002), which occurs when new products are added that do not have any associated behavioural data. Thus, connections could be drawn to this research field. Conclusion In e-commerce, continuous experimentation is widely used to optimize web shops; high-profile companies have recently adopted online machine-learning-based optimization methods, both to increase the scale of the optimizations and to personalize software to different users' needs. This technology is readily available, but can be hard to apply to anything other than superficial details in the user interface. In this work, the open-source toolkit COMBO is introduced to support algorithmic optimization at an e-commerce company used for validation. The toolkit can be used for building data-driven software features that learn from user behaviour in terms of user experience or business value. We have shown that modeling software hierarchies in a formal model enables algorithmic optimization of complex software; thus, the toolkit extends optimization to more use cases. There are still many opportunities for future work in this domain, in particular to enable adoption in fields other than e-commerce. However, we maintain that the toolkit can be used today to improve the user experience and business value of software.
2020-08-19T14:51:20.670Z
2020-08-18T00:00:00.000
{ "year": 2020, "sha1": "84237168939e2e1e8a21d6f3653865f5ab95a51f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10664-020-09856-1.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "56fc89fd6edc145bd158befa0f1f959376e4a47a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
118577168
pes2o/s2orc
v3-fos-license
The Synergy between Weak Lensing and Galaxy Redshift Surveys We study the complementarity of weak lensing (WL) and spectroscopic galaxy clustering (GC) surveys, by forecasting dark energy and modified gravity constraints for three upcoming survey combinations: SuMIRe (Subaru Measurement of Images and Redshifts, the combination of the Hyper Suprime-Cam lensing survey and the Prime Focus Spectrograph redshift survey), EUCLID and WFIRST. From the WL surveys, we take into account both the shear and clustering of the source galaxies and from the GC surveys, we use the three-dimensional clustering of spectroscopic galaxies, including redshift space distortions. A CMB prior is included in all cases. Focusing on the large-scale, two-point function information, we find strong synergy between the two probes. The dark energy figure of merit from WL+GC is up to a factor ~2.5 larger than from either probe alone. Considering modified gravity, if the growth factor f(z) is treated as a free function, it is very poorly constrained by WL or GC alone, but can be measured at the few percent level by the combination of the two. On the other hand, for cosmological constraints derived from (angular) power spectra and considering statistical errors only, it hardly matters whether the surveys overlap on the sky or not. For instance, the dark energy figure of merit for overlapping surveys is at most ~20 % better than in the disjoint case. This modest gain can be traced to the fundamental fact that only a small fraction of the total number of modes sampled by the GC survey (or by the WL survey) contributes to the cross-correlations between WL and GC. I. INTRODUCTION Weak gravitational lensing surveys and spectroscopic galaxy redshift surveys are among the most promising nearfuture probes of dark energy, modified gravity and other cosmological physics. Weak gravitational lensing, the subtle distortion of galaxy images by large-scale structure along the line of sight, directly measures metric perturbations. A measurement of cosmic shear, the large-scale correlations of the shear of galaxy images due to weak gravitational lensing, therefore constrains cosmology through its sensitivity to the power spectrum of these metric perturbations and through the dependence of the signal on the geometry of the universe. Moreover, as a bonus, lensing surveys contain cosmological information in the clustering of the lensing source galaxies. Encouraging results have already been obtained from existing cosmic shear data, see e.g. [1][2][3][4][5][6][7], even though so far only a modest fraction of the sky (a few hundred square degrees) has been used for these studies. Spectroscopic galaxy surveys measure the three-dimensional matter distribution up to a galaxy bias factor, which can be modeled on large scales, and are also sensitive to the expansion history of the universe, as this determines the conversion from observed angular positions and redshifts to three-dimensional coordinate positions. Moreover, since a galaxy's redshift is determined not just by its cosmic distance, but also by its peculiar velocity, the observed galaxy power spectrum or correlation function receives a modification depending on the statistics of the large-scale velocity field. These redshift space distortions make it possible to directly measure the growth rate of large-scale structure. As a cosmology probe, spectroscopic galaxy clustering surveys (see e.g. 
[8-13]), are already at a more mature level than weak lensing, with strong and robust cosmological constraints so far coming especially from measurements of the baryon acoustic oscillations scale [14][15][16][17][18]. The imminent availability of high-quality galaxy clustering and weak lensing data begs the question how much more can be learned about dark energy and modified gravity when the two probes are combined. In this work, we will consider only the information in the two-point statistics of the observed fields (shear, source galaxy density and spectroscopic galaxy density) and restrict the analysis to large, quasi-linear scales. However, we note that invaluable additional information may be encoded in the data beyond the large-scale two-point function, and that several promising methods for extracting this information have been proposed, e.g. [28][29][30]. Considering for simplicity two surveys covering an equal amount of sky (but not necessarily the same part), one can study two distinct scenarios. On the one hand, if the survey areas are completely disjoint, one expects strong complementarity for two reasons. First of all, somewhat trivially, the combination of surveys covers twice as much sky, and therefore, heuristically, twice as many modes as any survey individually. Secondly, and more importantly, the two probes have distinct sensitivities to cosmological parameters so that combining them helps break degeneracies in cosmological parameter space and allows for constraints that are much stronger than expected solely based on the larger sky coverage (i.e. uncertainties smaller than σ/ √ 2 can be achieved, where σ is the smallest of the two individual probe uncertainties). On the other hand, one can consider the scenario where the two surveys overlap fully. A potential downside of this relative to the previous case is that this does not enlarge the total sky coverage so that the probed volume is smaller than in the case of disjoint surveys, although we will show that this is a small effect. On the positive side, since both surveys now probe the same three-dimensional density modes, there is information available in the cross-correlations between the two surveys. In the case of the the cross-correlations between the shear modes and the spectroscopic galaxy density modes, this signal is equivalent to galaxy-galaxy lensing, when the spectroscopic galaxies are at lower redshift than the lensing source galaxies. By measuring the same density modes using different probes, one is effectively applying a multi-tracer method [31], and it should in principle be possible to extract certain cosmological information without the limitation of sample variance. While not the focus of this article, there are large additional advantages of overlapping surveys: for instance, the imaging survey can provide target selection for the spectroscopic survey, and the fact that the two surveys are subject to different systematics allows for more robust measurements when they are combined, (e.g., see [29,30] for such a method). These same-sky benefits are not explicitly included in our analysis of statistical uncertainties only. While it is clear that in both of the above cases, the combination of weak lensing and galaxy clustering will improve cosmology constraints, two questions merit further study. (1) How strong exactly is this complementarity for upcoming surveys (how large a boost in constraining power can be expected)? and (2) how important is the overlap on the sky between the two surveys (i.e. 
what is the difference in expected constraints between the two scenarios discussed above)? These questions have been studied to some extent in the literature [32][33][34][35][36], but especially the answer to question (2) varies between groups. In this article, we study joint constraints from lensing and spectroscopic galaxy surveys for three combinations of upcoming surveys: Subaru Measurement of Images and Redshifts (SuMIRe), which is the combination of the HSC lensing survey and the PFS spectroscopic galaxy survey (both with the 8.2m Subaru telescope), EUCLID [23], and WFIRST [24]. The latter two are satellite surveys that will carry out both imaging and spectroscopic redshift programs. Using the Fisher matrix formalism, we will focus our attention on forecasted constraints on the dark energy equation of state, quantified by a Figure of Merit, and on the growth factor of large-scale structure as a function of redshift, f (z). We will compare joint constraints to constraints from the weak lensing and spectroscopic survey individually, always including a CMB prior from Planck, and marginalizing over cosmological (and galaxy bias) parameters, including the sum of neutrino masses. While all three surveys will have full overlap between the imaging and spectroscopic components, we will also study the hypothetical case of them being disjoint, in order to quantify the importance (or lack thereof) of the cross-correlations (in other words, the same-sky benefits) between the surveys. The article is organized as follows. In section II, we will explain our forecast method, discussing the parameter space, and paying specific attention to our approach for combining the information from angular power spectra and three-dimensional galaxy power spectra. In section III, we briefly discuss the three survey combinations SuMIRe, EUCLID and WFIRST. We present forecasted constraints for the different survey scenarios in section IV, and will explain these results in more detail in section V. We conclude with discussion and a summary in section VI. II. METHOD We use the Fisher matrix formalism (see, e.g., [37]) to forecast cosmological constraints. Our study takes into account data from two types of surveys. On the one hand, we consider a sample of lensing source galaxies from a weak lensing survey (WL). We assume these galaxies have photometric redshifts (described by an unbiased Gaussian distribution with scatter σ z = σ z,0 (1 + z)), which are used to divide the sample into N tom tomographic redshift bins, defined by (photometric) redshift ranges {z min tom,i , z max tom,i } Ntom i=1 . In each bin, we then use two fields: the lensing shear as a function of position on the sky, γ (although in practice we capture its information in terms of the convergence κ), and the relative overdensity of the number of source galaxies, p (p standing for photometric). The clustering of the source galaxies is biased relative to the underlying dark matter field so we model the galaxy bias as piecewise constant in redshift and introduce a free galaxy bias parameter for each bin, b (p) i , giving the value of the bias in the (true) redshift range z min tom,i − z max tom,i . For simplicity, we assume this bias is constant in redshift Since we consider only (quasi-)linear scales, we take the bias to be scale-independent. On the other hand, we consider data from a spectroscopic galaxy redshift survey (GC). 
We divide this sample into N s redshift bins with bounds {z min s,i , z max s,i } Ns i=1 ,and consider the galaxy overdensity field, s, in each bin. Since we assume the redshift in the spectroscopic sample to be measured with perfect accuracy, we have full access to three-dimensional galaxy clustering information in each redshift slice. As detailed below, to properly describe not only the information in the 3D power spectrum of the spectroscopic survey, but also the cross-correlations with the 2D p and γ fields, we split the Fisher matrix into two parts. One part will be the usual Fisher matrix calculated for the 3D power spectrum in redshift space (including redshift space distortions, Alcock-Paczynski effect, etc.), while the other part will come from the angular auto-and cross-spectra with γ and p, using the projected spectroscopic galaxy density as a function of position on the sky in each redshift bin. To avoid double counting s-modes, we will remove the transverse modes already included in the 2D Fisher matrix from the 3D Fisher matrix. Just like for the photometric galaxy density, we introduce an independent, free galaxy bias parameter, b i , for each spectroscopic redshift bin (with label i). To summarize, we consider three types of data: the shear γ and photometric galaxy overdensity p from an imaging/lensing survey, and the galaxy overdensity s from a spectroscopic survey. We will sometimes refer to the imaging survey and its data as WL (while it also contains galaxy clustering information because of the p field, the main goal of these surveys is the weak lensing shear signal) and to the spectroscopic survey as GC. As described in the following, we use cross-and auto-correlations between these fields (and between the various redshift slices) to construct a Fisher matrix and to forecast expected cosmological constraints. In addition, all our results will have a Planck CMB prior included, where we neglect information from CMB lensing because we want all the late-universe clustering information to come from the lensing and galaxy clustering surveys of interest. We also neglect the correlation between CMB temperature anisotropies and large-scale structure due to the Integrated Sachs-Wolfe effect. This is justified because the signal-to-noise in this signal is low. A. 2D Fisher Matrix The 2D Fisher matrix encapsulates the information contained in the angular auto-and cross-power spectra of the p, γ and s fields (or subsets thereof) in the different redshift bins. We refer to the literature for the relevant equations describing the 2D Fisher matrix, e.g. [33]. To calculate the angular power spectra, C l , we employ the Limber approximation [38,39], which expresses a spectrum as the line-of-sight integral over the product of two kernels and the 3D matter power spectrum P (k) (for the p and s fields, the kernel contains a galaxy bias factor). We use the linear P (k), as obtained from CAMB [40], for all spectra except the shear auto-spectra (γγ, including cross-spectra between different tomographic bins of course), for which we use the non-linear matter spectrum obtained by applying the HaloFit [41,42] prescription to the linear power spectrum. 
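For reference, the Limber approximation referred to above takes the standard schematic form below; the precise kernel definitions and prefactor conventions are not spelled out in the text, so this should be read as a sketch rather than the exact expression used in the forecasts:

C_ℓ^{XY} ≃ ∫ dχ [ W_X(χ) W_Y(χ) / χ² ] P( k = (ℓ + 1/2)/χ , z(χ) ),

where χ is the comoving distance and, as stated above, the kernel W for the p and s fields carries a galaxy bias factor (times the normalized redshift distribution of the bin), while for γ it is the lensing efficiency.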
Using the non-linear signal for cosmic shear significantly increases the cosmological information (see, e.g., [43]) and is justified by the fact that shear directly measures the underlying matter field and will therefore be easier to model in the mildly non-linear regime than galaxy clustering, which would involve the additional complication of non-linear galaxy bias. We include γ modes up to ℓ max = 2000. This is a common choice for cosmic shear forecasts, although we note that modeling non-linear clustering and baryonic effects to the required accuracy over this multipole range will be far from trivial. For galaxy clustering, we choose a more conservative cutoff, ℓ max = k max D(z i ), where k max = 0.2h/Mpc and D(z i ) is the comoving angular diameter distance to the central redshift of the i-th bin. For both galaxy clustering and shear, we also apply a cutoff ℓ min = 20, because the largest angular scales may be contaminated by systematics. We will refer to the resulting Fisher matrix as F 2D {} , where the curly brackets contain the set of observables included, chosen from p, γ and s. All 2-point functions of the included fields will be used, both for the calculation of the signal and for that of the covariance matrix. This means that when a set of fields, e.g. γ, p and s, is included in a single Fisher matrix, F 2D {γ,p,s} , the fields are assumed to be measured on the same part of the sky and their covariances are included. Fisher matrices for two non-overlapping surveys can be obtained by summing together two separate Fisher matrices (see section II C). B. 3D Fisher Matrix For the Fisher matrix of the 3D spectroscopic galaxy power spectrum in redshift space, P ss (k, µ) (µ being the cosine of the angle between the wave vector and the line-of-sight direction), we follow the approach by [44]. The details of how we model P (k, µ) are as in [26], except that we do not include the shot noise-like parameter P sn . In particular, this means we apply an exponential damping in the Fisher matrix describing the effects of non-linear clustering and redshift space distortions, and that we assume the use of density field reconstruction [45] to ameliorate this damping. We include modes up to k max = 0.2h/Mpc, but find that our results are not particularly sensitive to small variations in k max , because the non-linear damping described above acts as a de facto cutoff. Finally, as mentioned above, we exclude N ⊥ transverse (µ ≈ 0) modes from our Fisher matrix, where, for each bin in k and z, N ⊥ is chosen to equal the number of s modes used in the 2D Fisher matrix. Specifically, this means that for a bin centered on k c and z i , we exclude a wedge ∆µ = 2π/(∆D(z i )k c ) around µ = 0, where ∆D(z i ) is the width in comoving distance of the redshift slice centered at z i . The same approach was followed in [33,34]. We will refer to the resulting Fisher matrix as F 3D * ss (the star indicates that transverse modes are left out). C. Combining Surveys The main survey combinations we will consider are the following (the CMB prior is implicit): • WL only: F = F 2D {γ,p} + F CMB As explained above, the WL survey includes the information in the clustering of the source galaxies, the p field, in addition to lensing shear. To clarify the above notation, F 2D {γ,p} includes all cross- and auto-spectra of the types γγ, γp and pp. We will also in some cases consider the case where the information in the clustering of source galaxies is neglected and only γ is used from the WL survey, i.e. replacing F 2D {γ,p} by F 2D {γ} .
• GC only: F = F 3D * ss + F 2D {s} + F CMB Alternatively, this matrix could be calculated by replacing F 3D * ss + F 2D {s} by F 3D ss , the Fisher matrix of the 3D galaxy power spectrum without transverse modes removed. We have calculated this matrix as a consistency check, and find reasonable agreement between the two prescriptions, lending support to our method of separating transverse and non-transverse modes. Note that, while we refer to this case as GC for galaxy clustering, it refers to the information in the spectroscopic survey only and thus does not include the additional galaxy clustering information that would be available from the lensing source galaxies in an imaging survey. D. Parameters We consider a base set of N cosmo = 9 cosmological parameters, {ω b , ω c , Ω Λ , τ, σ 8 , n s , Σm ν , w 0 , w a } (the effect of the time-varying dark energy equation of state is implemented in CAMB using the parametrized post-Friedmann formalism [46]). Note that this set includes the sum of neutrino masses, which is an unknown that needs to be marginalized over. Its fiducial value is Σm ν = 0.15 eV. On top of these cosmological parameters, depending on which observables are taken into account, we include the N tom photometric galaxy bias parameters and the N s spectroscopic galaxy bias parameters. Our dark energy constraints are calculated within this parameter space of a maximum of N cosmo + N tom + N s parameters. We will summarize such constraints in terms of the dark energy figure of merit (FOM, [47]), FOM ≡ [det Cov(w 0 , w a )] −1/2 (1). Note that, unlike in [47], we do not marginalize over spatial curvature, Ω k , but do include Σm ν . As an aside, we find that generally the forecasted FOM looks significantly stronger when Σm ν is fixed: typically a factor 2 − 3 larger than when it is properly marginalized over. We will also study constraints on the linear growth rate of matter density perturbations, f (z) ≡ d ln D m /d ln a, where D m is the linear growth factor of matter perturbations (δ m (k, a) ∝ D m (a)). We study these constraints in the modified gravity (MG) scenario where f (z) is allowed to deviate from its value in general relativity (GR) [48]. Given an amplitude σ 8 of the linear power spectrum at z = 0, the amplitude of perturbations at z > 0 (σ 8 (z)) is computed based on our choice of f (z). In other words, we force the amplitude of perturbations to be consistent with our growth factor parametrization. Moreover, for a given σ 8 , the modified growth is taken into account to calculate the correct, corresponding primordial amplitude of perturbations, A s . This way, our treatment of CMB data is also consistent with f (z). To parametrize f (z) in the modified growth case, we introduce N f = N s + 2 parameters, {f i } Ns+1 i=0 , describing a piecewise constant deviation from the GR value. We thus have one free parameter for each spectroscopic redshift bin, i.e. f i describes the growth factor in the redshift range z = z min s,i − z max s,i for i = 1, ..., N s , and, in addition, f 0 is the growth factor in the range z = 0 − z min s,1 and f Ns+1 the growth factor for z > z max s,Ns (we assume that the growth history returns to that of GR well before z ≈ 1100, so that the primary CMB anisotropies are not affected).
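Because f(z) is the logarithmic growth rate, fixing σ 8 at z = 0 together with the piecewise-constant f i determines the amplitude of perturbations at every redshift. A sketch of the implied relation, which is our reading of the procedure described above rather than an equation quoted from the text:

σ 8 (z) = σ 8 D m (z)/D m (0) = σ 8 exp[ − ∫_0^z f(z′) dz′/(1 + z′) ],

with the integral evaluated piecewise over the redshift ranges in which f is held constant.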
The inclusion of free growth at redshifts beyond the range probed by galaxy clustering has strong implications for the ability of WL or GC individually to constrain the low-redshift growth history, because the amplitude of matter fluctuations at the largest redshift probed by the spectroscopic survey is now no longer determined by the CMB measurement (f Ns+1 affects the translation of the amplitude of perturbations at CMB last scattering to that at low redshift). See, e.g., [49] for a discussion of the effects of marginalizing over growth at high redshift. While we do not include this information in the present study, we do note that CMB lensing may help constrain f Ns+1 , at least to some degree. When presenting our MG constraints on the growth factor, we marginalize over Σm ν , w 0 , w a (in modified gravity, the effective dark energy equation of state should be considered a parametrization of the expansion history) and the other parameters, so that a maximum of N cosmo + N tom + N s + N f parameters are included in the Fisher matrix. We note that, if instead we were to fix Σm ν when constraining growth, the constraints would be stronger, but only by ≲ 30% (on the other hand, marginalizing over growth does strongly degrade the neutrino mass constraint). III. SURVEYS We make predictions for the following three combinations of WL and GC surveys. A. SuMIRe The Subaru Measurement of Images and Redshifts (SuMIRe) combines the weak lensing/imaging data from the Hyper Suprime-Cam (HSC) survey and the spectroscopic data from the Prime Focus Spectrograph (PFS) cosmology survey. The survey specifications (and fiducial galaxy bias) we use can be found in our previous publication, [43]. We choose the redshift binning accordingly. B. EUCLID The EUCLID satellite mission will provide both weak lensing and spectroscopic galaxy clustering data. Note that the EUCLID spectroscopic survey will be done using slit-less low-resolution spectroscopy, which does not need a pre-imaging survey to find the targets. We again follow the specifications outlined in [43], with a corresponding binning choice. C. WFIRST Finally, we consider the WFIRST satellite mission, which will also provide both weak lensing and galaxy clustering information. The WFIRST spectroscopic survey will use slit-less spectroscopy. Our assumed survey specifications mostly follow [24]. Specifically, we assume both the lensing survey and the redshift survey will cover 2000 deg 2 . For the lensing survey, we assume an angular number density n A = 70 arcmin −2 , and the same redshift distribution as we assumed for HSC (⟨z⟩ = 1). We assume the source galaxies have galaxy bias b (p) (z) = 1 and photometric redshift scatter σ z (z) = 0.04(1 + z) (compared to 0.05(1 + z) for the previous two surveys). For the spectroscopic sample, we assume a fiducial galaxy bias b (s) (z) = √(1 + z). We use the spectroscopic galaxy number density specified in Table 2-2 of [24], and again choose a corresponding binning. IV. RESULTS A. SuMIRe Dark Energy We consider first the dark energy figure of merit for the different survey combinations possible with HSC and PFS (SuMIRe). Figure 1 shows constraints for GC (PFS) only, WL (HSC) only, and GC + WL, with and without overlap (of course the actual surveys will overlap). The left panel shows the case where the clustering of lensing source galaxies is not included on the WL side. In this case, while both surveys individually (an implicit CMB prior is always included) deliver strong dark energy constraints, GC has significantly more constraining power.
The two bars on the right (of the left panel) show that substantial improvements can be achieved by combining the two surveys, almost doubling the FOM compared to the case of GC alone. However, very little of this complementarity comes from the cross-correlations between the two sets of observables, and, accordingly, the difference in FOM between the overlapping and non-overlapping scenarios is not noteworthy. We will address the reasons why this is the case in Section V. The right panel shows the case where all information from the imaging survey is used, i.e. both the shear and the clustering of source galaxies. With this included, WL alone is competitive with GC alone. There is again strong complementarity when the two probes are added, but the overlap still does not matter much. The above of course does not take into account other benefits of the overlap between the two surveys. For example, in the case of SuMIRe, HSC imaging will provide an ideal multi-color catalog of galaxies to find targets for the follow-up spectroscopic PFS survey, which is extremely important. Moreover, if the photometric redshifts of the imaging survey cannot be properly calibrated using a deep spectroscopic training/validation sample, the cross-correlations with the GC survey can be used to improve the photo-z calibration and thus make WL a stronger probe of dark energy; see, e.g., [43,50-54]. Growth Factor We next turn to constraints on the growth history, f (z), in a modified gravity scenario, considering the bounds on the growth parameter in each spectroscopic redshift bin. As discussed previously, we marginalize over the growth factor at redshifts both below and above the redshift range where galaxies are observed by PFS. This has important implications. In particular, marginalization over the growth parameter, f Ns+1 , at z > z max s,Ns = 2.4 implies that, even though the CMB measures both cosmological parameters and the amplitude of perturbations in the early universe, the CMB does not determine the amplitude of perturbations at the redshifts where we observe large-scale structure (GC or WL). Schematically, GC measures the combinations b (s) i σ 8,i and f i σ 8,i (where i labels the redshift bin). With σ 8,i now unknown despite the CMB prior, both galaxy bias and the growth factor remain unconstrained. Since WL, through its dependence on the amplitude of matter perturbations, σ 8,i , is only sensitive to a degenerate combination of growth factor parameters (i.e. an integral over redshift of f (z)), lensing alone (+CMB) yields rather poor growth factor constraints whether f Ns+1 is left free or not, although constraints are better with this parameter fixed. When the two probes are combined, the degeneracy discussed above is broken, and very strong (∼ 3%) constraints can be obtained in all bins. Figure 2 shows these constraints both for the case of overlapping and for the case of disjoint surveys. The horizontal bars indicate the bin widths. While these constraints are strong, and will provide a stringent test of general relativity, it again does not matter whether the surveys overlap or not. B. EUCLID Figure 3 shows the forecasted dark energy constraints for EUCLID (cf. Figure 1). Focusing on the right panel, where the WL part of the survey includes information from the clustering of lensing source galaxies (see the left panel for the case where this information is neglected), we see that the WL and GC surveys on their own give comparable constraints, as was the case for SuMIRe.
We do wish to note, however, that the comparison between the two is strongly dependent on the treatment of non-linear scales for each probe. As a reminder, for GC we apply a non-linear cutoff, ensuring that little information is included from the non-linear regime. WL shear, on the other hand, includes modes up to ℓ max = 2000, and uses the information in the (HaloFit) non-linear matter power spectrum. It thus probes rather far into the non-linear regime, unlike GC. In both cases, we have tried to follow the more or less standard choices adopted in the literature, to ease comparison. Dark Energy Considering the synergy between the weak lensing and galaxy clustering components, the picture is qualitatively the same as for SuMIRe: combining WL with GC strongly improves dark energy constraints compared to the individual surveys, but little is gained from overlap in sky coverage. Only the combination of the two probes allows for strong growth factor constraints (forecasts for the individual probes are not shown). The constraints are effectively independent of whether or not the surveys overlap on the sky. Growth Factor The growth factor constraints are shown in Figure 4 (cf. Figure 2). While constraints from any probe individually are again poor (σ(f (z)) ≫ 1), combining them, the growth factor can be measured to 1% − 2% in each spectroscopic galaxy bin. Whether or not the surveys overlap again has little relevance. C. WFIRST Finally, we show both the dark energy and growth factor results for WFIRST in Figure 5. In all cases, the lensing survey is assumed to use both the shear information and the clustering of source galaxies. Comparing the dark energy results (left panel) to SuMIRe (which has similar sky coverage), we find that the WFIRST spectroscopic survey looks comparable to PFS (SuMIRe). Comparing the results from the imaging survey components of WFIRST and SuMIRe, on the other hand, we find that the WFIRST lensing survey is significantly stronger than HSC. This is mainly explained by the large number density of the WFIRST source galaxies, n A = 70 arcmin −2 vs. n A = 20 arcmin −2 for HSC. In addition, we have assumed slightly better photometric redshifts for WFIRST (σ z (z) = 0.04(1 + z) vs. σ z (z) = 0.05(1 + z) for SuMIRe and EUCLID). The joint dark energy constraints from WFIRST are much stronger than for SuMIRe, mainly because of the much more powerful imaging survey. Considering next the growth factor constraints (right panel, Figure 5), WL and GC individually are again unable to place meaningful constraints (and are therefore not shown in the figure), but for the combination of the two, we find relative uncertainties in the range 3% − 25% (from low to high redshift). The joint constraints are thus comparable to those from SuMIRe at the low-redshift end, but significantly weaker towards the highest redshifts. To explain this, we first note that the f (z) bounds are mainly driven by the spectroscopic survey, while the main role of the lensing survey is to help break degeneracies between σ 8 (z), b (s) (z) and f (z). While both spectroscopic surveys cover a deep and equally broad redshift range (z = 0.6 − 2.4 for PFS and z = 1.075 − 2.85 for WFIRST), the number density of galaxies in WFIRST drops to n ≲ 1 × 10 −4 (h −1 Mpc) −3 at z > 2, while the PFS galaxy density is n > 3 × 10 −4 (h −1 Mpc) −3 at all redshifts. This mostly explains the degradation of constraints at the high-redshift end of WFIRST.
However, at all redshifts, the difference in redshift bin width between the two surveys also has the effect of making WFIRST appear weaker. For SuMIRe, we used bins of width ∆z = 0.2, while here we use ∆z = 0.1, thus halving the volume available per bin and weakening the constraints within bins (but giving a larger number of independent f (z) measurements). If we had used equal bin widths, the WFIRST f (z) constraints would thus be stronger than those from SuMIRe at low redshift. At the high-redshift end, however, the number density is the dominant effect. In conclusion, considering both the dark energy and growth factor constraints, we again find strong complementarity between weak lensing (plus source clustering) and spectroscopic galaxy clustering. However, as for the other surveys, the role of the overlap between surveys is very limited. D. Uncertainty in Photometric Redshift Distribution We have seen in the previous subsections that, based on joint cosmological constraints only, overlap between surveys does not bring large advantages compared to surveys covering disjoint regions of the sky. One scenario in which cross-correlations between the two types of surveys can be very advantageous is when the photometric redshift distribution of the weak lensing survey has not been calibrated perfectly a priori [50-54], for example because the training sample of spectroscopic galaxies is insufficiently large or complete. It has been shown in [43] that in this case the cross-correlations between the number densities of lensing source galaxies and spectroscopic galaxies (i.e. the ps spectra) can help calibrate the photo-z distribution and thus strongly improve the cosmology constraints from weak lensing alone (in that case, the cosmological information in the spectroscopic galaxy clustering was not used). Therefore, if we repeat the analysis of the previous sections, but this time allowing for uncertainty in the photo-z distribution, we might expect larger benefits from having the two surveys overlap. [Figure 6 caption: Cross-correlations between spectroscopic and photometric galaxies help calibrate the photo-z distribution and thus slightly improve the benefits of overlapping surveys relative to disjoint surveys. Left: SuMIRe. Right: EUCLID.] To model the photometric redshift distribution, we follow the approach of [43] and treat the distribution p(z ph |z) as a Gaussian. We thus ignore the possibility of outliers in the distribution. For the bias and scatter, we assume, as before, a fiducial b z (z) = 0 and σ z (z) = 0.05(1 + z). To model uncertainty in these quantities, we describe both functions by a spline with 11 nodes evenly spaced in redshift in the range z = 0 − 3. We assume the distribution is calibrated (e.g. using a deep, matching spectroscopic sample) at the level σ(b z,i ) = σ(σ z,i ) = 0.05, where b z,i and σ z,i are the values at the spline nodes. We then allow the data to self-calibrate the photo-z parameters. As stated above, the sp spectra in particular are very useful for this. Unlike in [43], we still use the full cosmology information present in the spectroscopic galaxy sample. The analysis is thus the same as in the previous sections, except with the 22 photo-z parameters added. We show results for the dark energy figure of merit in Figure 6. As expected, allowing for photo-z distribution uncertainty leaves the spectroscopic galaxy clustering constraints unchanged, significantly weakens the FOM from weak lensing only, and weakens the joint constraints to a lesser extent.
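As a numerical illustration of how such calibration priors enter a Fisher forecast, the sketch below adds a Gaussian prior of width 0.05 on each photo-z nuisance parameter before inverting; the data Fisher matrix and the parameter ordering are random placeholders, not the forecast matrices of this work.

```python
# Sketch: add Gaussian priors on photo-z nuisance parameters to a Fisher
# matrix, invert, and read off marginalized errors and a toy FOM.
# F_data below is a random positive-definite stand-in, NOT a real forecast.
import numpy as np

n_cosmo, n_photoz = 9, 22
n_par = n_cosmo + n_photoz
rng = np.random.default_rng(1)
A = rng.normal(size=(n_par, n_par))
F_data = A @ A.T + n_par * np.eye(n_par)

F_prior = np.zeros((n_par, n_par))
F_prior[n_cosmo:, n_cosmo:] = np.eye(n_photoz) / 0.05**2   # sigma = 0.05 per node

cov = np.linalg.inv(F_data + F_prior)         # inverting marginalizes over all other parameters
sigma_marg = np.sqrt(np.diag(cov))[:n_cosmo]  # marginalized cosmological errors

w_idx = [7, 8]                                # toy positions of (w0, wa) in this ordering
fom = 1.0 / np.sqrt(np.linalg.det(cov[np.ix_(w_idx, w_idx)]))
```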
The FOM is lowered less when surveys overlap than when they do not, because overlapping surveys allow self-calibration using sp cross-correlations. However, the difference is still not spectacular: overlapping surveys give a ∼ 13% larger FOM for SuMIRe and a ∼ 16% larger FOM for EUCLID. Finally, we note that we did not apply a prior to the bias of the lensing source galaxies here, and that this could improve the photo-z calibration somewhat and therefore the gains obtained from overlapping surveys. V. WHY DOES OVERLAP NOT MATTER? We have shown so far for all surveys we considered that there is strong complementarity between weak lensing (including clustering of source galaxies) and spectroscopic galaxy clustering, in the sense that combining the two leads to large improvements in cosmological constraints. However, the role of overlap between surveys has proven to be limited. This is somewhat surprising given some of the literature on this topic [33][34][35]. To understand the lack of importance of overlap better, let us consider in more detail the difference between the overlapping and disjoint survey scenarios. In overlapping surveys, on the one hand, one can exploit the additional information present in the cross-spectra (i.e. γs and ps). On the other hand, one measures a smaller number of independent modes when the surveys overlap (disjoint surveys offer twice the sky coverage). This loss of information is also quantified by the cross-spectra: for instance, the covariance between the spectra C ss ℓ and C γγ ℓ is given by 2(C γs ℓ ) 2 / [f sky (2ℓ + 1)]. One could thus imagine that our results are explained by the two above-mentioned effects cancelling out. However, we find that this is not the case, and that instead both effects are small individually. The real reason for overlap not making a difference is thus that the cross-spectra between probes contribute very little compared to the auto-spectra. [Figure 7 caption. Left: The correlation of the lensing modes with the spectroscopic galaxies is far from 100%, because the overlap between the spectroscopic galaxy redshift distribution and the lensing kernels is limited (the spectroscopic sample lacks coverage at z < 0.6). Note that the z ph = 0 − 0.6 lensing source bin does have some galaxies at z > 0.6 due to the photo-z scatter, explaining why the black curve is not exactly equal to zero. Right: When galaxies at z = 0 − 0.6 are added to the spectroscopic survey, the correlation becomes close to maximal, except at high multipoles. At a given multipole ℓ, the correlation coefficient for tomographic bin i is defined as ρ ≡ 1 − (F γiγi C γiγi ) −1 , where C is the covariance matrix of the modes γ i and {s j } Ns j=1 and F is its inverse (we suppress the ℓ-dependence of these expressions). In other words, ρ is the generalization of r = C γs ℓ / (C γγ ℓ C ss ℓ ) 1/2 to the case of multiple s fields.] The main reason for this is that the effective number of 3D density modes probed by the cross-correlations is very small compared to the number of modes probed by either the lensing auto-spectra or the galaxy clustering auto-spectra (see Fig. 5 of [36]). Heuristically, (spectroscopic) galaxy clustering probes a three-dimensional sphere in k-space with radius given by k max = 0.2h/Mpc, while shear probes mainly transverse modes in k-space, but with a much larger non-linear cutoff, so that the numbers of modes in the two probes are within the same order of magnitude. The cross-spectra, however, probe only the overlapping region between the two volumes in k-space, i.e.
only transverse modes with a low cutoff k max = 0.2h/Mpc. This volume thus contains a much smaller number of modes. We make this argument more quantitative below. Consider SuMIRe for example. Ignoring for now the information lost due to shot noise, with our default choices of k max = 0.2h/Mpc and (three tomographic bins with) ℓ max = 2000, the total numbers of spectroscopic galaxy and shear modes explicitly included are N mode s = k 3 max V /(6π 2 ) = 1.4 × 10 6 (where V is the survey volume) and N mode γ = f sky N tom ℓ 2 max = 0.43 × 10 6 , respectively (this is a rough estimate, and the true number of available shear modes is determined by the redshift width of the shear kernels rather than by the number of tomographic bins). The number of overlapping modes is less straightforward to quantify. We cross-correlate f sky Σ Ns j=1 ℓ 2 (k max , z j ) = 8.6 × 10 4 transverse s-modes with the γ- and p-modes (with ℓ(k, z j ) ≈ kD(z j )). The number of shear (and also source density) modes is no larger than f sky Σ Ntom i=1 ℓ 2 (k max , z i ). This gives 3.2 × 10 4 shear modes. Since this is smaller than the number of transverse s-modes, the number of independent modes probed by the cross-correlations cannot be larger than this, N mode ⊥ < 3.2 × 10 4 . This is merely ∼ 2% of N mode s and ∼ 7% of N mode γ . On top of this, it needs to be taken into account that the correlation between the shear modes and the spectroscopic modes is not optimal. Figure 7 (left panel) shows the effective correlation coefficient ρ of each shear mode, as a function of multipole, with the full set of transverse spectroscopic galaxy density modes. We do not include shot noise or any non-linear cutoff in this plot. For a given shear mode, if the set of s-modes probed all the 3D density modes contributing to that shear mode, the correlation would be optimal, ρ = 1. Instead, we see that ρ is significantly smaller than unity. The main reason for this is the lack of overlap in redshift between the shear kernel and the spectroscopic galaxy distribution. The mean source redshift is z ∼ 1, so that the typical shear kernel peaks halfway between z = 0 and z = 1. However, SuMIRe only includes galaxies at z > 0.6, thus missing a large fraction of the lensing kernel. The right panel of Figure 7 shows how the correlation coefficient increases when spectroscopic galaxies are added at z = 0 − 0.6. This is in fact a very realistic scenario to consider, as spectroscopic galaxy clustering information from SDSS and BOSS can be included at z < 0.6. Now the correlation is much stronger, although still not identically equal to one. However, while imposing full redshift overlap (by adding galaxies at z = 0 − 0.6) strengthens the cross-correlation signal, we have checked that even after the z = 0 − 0.6 galaxies have been added, the effect of survey overlap on dark energy and modified gravity constraints is negligible, thus confirming that the dominant reason for the lack of complementarity is the small number of modes probed by the cross-correlations. The counting of modes presented above does not include the effect of shot noise, which strongly reduces the effective number of modes accessible at small angular scales. As an alternative proxy for the effective number of modes, we therefore consider the squared signal-to-noise ratio of the measured amplitude, N eff ≡ (S/N) 2 , computed from F σ8,σ8 , the diagonal Fisher matrix element corresponding to the amplitude σ 8 . Thus, S/N is simply the signal-to-noise of detecting the amplitude of the signal, including all modes up to the cutoff. In the case of a single angular power spectrum (i.e.
a single redshift bin), this becomes (S/N) 2 = Σ ℓ [C ℓ /σ(C ℓ )] 2 (the expression for a 3D power spectrum is very similar). If C ℓ is an auto-power spectrum (as opposed to a cross-spectrum), in the absence of shot noise we have σ 2 (C ℓ ) = 2(C ℓ ) 2 / [f sky (2ℓ + 1)], so that (S/N) 2 simply counts the total number of modes. The presence of shot noise increases the variance to σ 2 (C ℓ ) = 2(C ℓ + N ℓ ) 2 / [f sky (2ℓ + 1)] and thus reduces N eff . In the case of cross-correlations, N eff also takes into account the reduced information due to the two fields not being optimally correlated: in the absence of shot noise, the effective number of modes probed by γs depends explicitly on the cross-correlation coefficient, and a correlation coefficient r < 1 causes a reduction in the effective number of modes. Using the above definition of N eff , we find N eff = 8.3 × 10 5 for the spectroscopic survey alone and N eff = 4.4 × 10 5 for the imaging survey (combining the γ and p modes), while the cross-correlations (i.e. γs and ps) give N eff = 4.4 × 10 4 , which is a factor of ten smaller than the number of modes probed by lensing and a factor of twenty below the number probed by galaxy clustering. In conclusion, given the limited effective number of modes available in the cross-spectra, it is not surprising that the overlap between surveys does not significantly affect the forecasted cosmology constraints. We find qualitatively similar mode-counting results for the other surveys. VI. DISCUSSION AND SUMMARY A. Summary of Results We have studied dark energy and modified gravity constraints that can be obtained from the combination of weak lensing and spectroscopic galaxy redshift data that will be available from the SuMIRe, EUCLID and WFIRST surveys. For the weak lensing components of these surveys, we considered galaxy shear in tomographic bins, and the overdensity of lensing source galaxies. We assumed the lensing source galaxies have photometric redshifts, which define the tomographic bins. For the spectroscopic galaxy clustering components of the surveys, we considered the three-dimensional, redshift-space overdensity field of the galaxies. Using the Fisher matrix formalism, we have quantified the information encoded in all available two-point functions of these observables, on large (linear and quasi-linear) scales. We always included a prior from Planck CMB data and marginalized over cosmological (and galaxy bias) parameters, including the neutrino mass. In all cases, we have found strong complementarity between the two probes. The dark energy figure of merit is up to a factor of ∼ 2.5 larger when the probes are combined than for the strongest of the individual probes, and all three surveys promise strong constraints on the dark energy equation of state. Even more dramatically, treating the growth parameter, f (z), as a free function (i.e. allowing for deviations from general relativity), we find that the combination of a weak lensing survey with a galaxy clustering survey can constrain f (z) at the few percent level, while each survey individually has negligible constraining power. While all three combinations of surveys studied here have full sky overlap of the weak lensing and spectroscopic components, we have also studied how the forecasted constraints depend on survey overlap. We have done this by comparing forecasts for the fully overlapping case to the case where the surveys are described by the exact same specifications, except that the imaging and spectroscopic components now cover disjoint regions of the sky.
We have found for all three survey combinations that the difference in constraining power between these two scenarios is small (typically < 10% differences in uncertainties and figures of merit). Following [36] (see their Appendix B for a clear, qualitative explanation), we attribute this to the small number of three-dimensional density modes probed by the cross-correlations between the survey observables, as compared to the number of modes probed by the auto-correlations within each survey. We have shown that the number of modes probed by the cross-correlations depends on the method used to do the calculation, but is no more than 10% of the smallest number of modes probed by each survey individually. Moreover, we found that for the surveys under consideration in this work, the limited redshift overlap between the spectroscopic redshift distribution and the lensing kernels weakens the cross-correlations between these observables. However, while enhancing the redshift coverage of the spectroscopic survey by adding galaxies at low redshift improves the level of correlation between the surveys, we have checked that the same-sky benefit is small even in that case. Finally, we have included the possibility of uncertainty in the parameters describing the photo-z distributions, following the methodology described in [43]. In this case, it is known that cross-correlations of the lensing source galaxies with a spectroscopic sample can help calibrate the photo-z distribution. Thus, one might expect larger same-sky benefits in this case. We found, however, that this is not a large effect. The boost in the dark energy figure of merit due to covering the same sky area is at most ∼ 20%, when photo-z scatter and bias are treated as free parameters. B. Comparison with Literature It is worth commenting on the current status in the literature of the question of same-sky benefits, i.e. the question of how much better (if at all) dark energy and/or modified gravity constraints from overlapping surveys are compared to those from disjoint surveys. There are a number of groups that have recently addressed this issue, or are in the process of doing so, but results vary strongly. To crudely summarize, [33] find only modest same-sky benefits for realistic survey galaxy number densities, and [36] also finds results consistent with ours. On the other hand, [34] found enormous increases in figure of merit (up to a factor of 100, although not the same figure of merit as considered here), and, more recently, [35] also found large same-sky improvement factors (a factor ∼ 4 for dark energy in their abstract). In addition, several groups [55,56] are working on new results, and thus add to the variety of answers. While a comparison between the various groups is not easy because of different choices of survey properties, it is unlikely that survey specifications can explain why some groups find large (≳ 4) improvement factors, while others find only modest ∼ 1 − 1.5 improvement factors, if any. An interesting work to compare with is the one by [35], first of all because it is public (available on the arXiv) and, secondly, because it presents such different results compared to ours. The major difference between [35] on the one hand, and the present article (and also [33,34,36]) on the other hand, is the forecast method. While we consider the three-dimensional power spectrum of spectroscopic galaxies, [35] treat the spectroscopic galaxy density purely in terms of angular power and cross-spectra.
While a large number of spectroscopic redshift bins is used, N s = 40, this still implies a bin width ∆z = 0.0425, corresponding to a comoving distance ∼ 130 − 50h −1 Mpc (at z = 0 − 1.7), over which line-of-sight clustering information is lost. On the upside, this approach allows the use of a single, consistent method to describe all data and their covariances, and no Limber approximation is used to compute the angular spectra. We have tried to reproduce the same-sky benefits of [35] by emulating their survey assumptions as much as possible given the information provided in the article. Moreover, to mimic their treatment of the galaxy clustering information, we have performed calculations either using angular spectra in 40 bins (however, still using the Limber approximation) or by using a three-dimensional power spectrum, but with a line-of-sight degradation factor smoothing out information on scales smaller than that of the bin width. While neither of these methods matches exactly the approach of [35] (because we are not set up to do a forecast based on exact angular power spectra, i.e. without assuming the Limber approximation), one would expect to be able to at least approximately reproduce their results. Unfortunately, even with these changes, we cannot reach improvement factors for the dark energy figure of merit better than ∼ 1.4, while [35] find factors of 3 − 5 for their case of scale-independent galaxy bias (although we are not sure whether this result includes the CMB prior or not). We have also checked that the lack of same-sky benefits in our forecasts is not driven by our perhaps optimistic assumptions about modeling of galaxy clustering in the non-linear regime, k max = 0.2h/Mpc (although, we remind the reader that we always exponentially suppress information at scales < ∼ 10h −1 Mpc to take into account bulk flows [44]). To this end, we have repeated our analysis for SuMIRe with k max = 0.1h/Mpc. While the resulting dark energy and growth constraints are significantly weaker than for k max = 0.2h/Mpc, the results remain the same qualitatively: combining the two surveys strongly improves constraints, but it hardly matters whether the surveys overlap or not. While we are very confident in our results, it is important to resolve/explain the differences between groups and for the community to converge on a single answer to and understanding of the question of same-sky benefits, as this is an important consideration when designing future surveys. There has been some effort to resolve the differences as part of a MS-DESI working group, but clearly more work needs to be done to reach a resolution. C. Other Sources of Synergy Although throughout this paper we have focused on the large-scale galaxy clustering information up to k max = 0.2 h/Mpc to avoid uncertainties inherent in non-linear processes such as galaxy formation, several groups [29,30,[57][58][59] have proposed promising methods for using the small-scale clustering signal, where the signal-to-noise ratio is much higher, in order to infer the connection of galaxies to dark matter halos. In particular, [30,57] showed that the cross-correlations of spectroscopic galaxies with shapes of background galaxies, or with the positions of photometric galaxies at similar redshifts to the spectroscopic galaxies, are useful to constrain the galaxy-halo connection and then improve the cosmological interpretation of the large-scale clustering signal. 
The small-scale clustering information might thus offer a further promising synergy of imaging and spectroscopic galaxies, beyond what we discussed in this paper, if our understanding of galaxy formation at small scales is improved, or a robust, empirical method to calibrate these uncertainties is developed. These methods can only be applied if the imaging and spectroscopic surveys overlap on the sky. Other types of synergy between overlapping surveys not studied in this work are the fact that the imaging survey can be used to create a target catalog for the spectroscopic survey, and the fact that the combination of the two data sets will be robust against systematics that affect only weak lensing, or only galaxy clustering. VII. ACKNOWLEDGMENTS Part of the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This work is supported by NASA ATP grant 11-ATP-090. MT was supported by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, by the FIRST program "Subaru Measurements of Images and Redshifts (SuMIRe)", CSTP, Japan, and by Grant-in-Aid for Scientific Research from the JSPS Promotion of Science (23340061). We also acknowledge the input of Sudeep Das, who wrote an early version of the Fisher matrix code which some of the code we used for this work builds on.
Antiferromagnetic Tunnel Junctions for Spintronics Antiferromagnetic (AFM) spintronics has emerged as a subfield of spintronics, where an AFM N\'eel vector is used as a state variable. Efficient electric control and detection of the N\'eel vector are critical for spintronic applications. This review article features fundamental properties of AFM tunnel junctions (AFMTJs) as spintronic devices where such electric control and detection can be realized. We emphasize critical requirements for observing a large tunneling magnetoresistance (TMR) effect in AFMTJs with collinear and noncollinear AFM electrodes, such as a momentum-dependent spin polarization and N\'eel spin currents. We further discuss spin torques in AFMTJs that are capable of N\'eel vector switching. Overall, AFMTJs have potential to become a new standard for spintronics providing larger magnetoresistive effects, few orders of magnitude faster switching speed, and much higher packing density than conventional magnetic tunnel junctions (MTJs). Introduction Spintronics is a vigorously developing field of electronics, where electron's spin controls device functionality [1].Conventional schemes rely on magnetic tunnel junctions (MTJs)-key devices of modern spintronic technologies, such as magnetic randomaccess memories (MRAMs) (Fig. 1a).In an MRAM, MTJs carry information bits, that can be written-in and read-out by electric means.An important advantage of an MRAM is its nonvolatility; however, it is deficient with its low switching speed that is determined by the time required to rotate the magnetization of a ferromagnet, which is typically a few nanoseconds (Fig. 1a).This is nearly three orders of magnitude slower than charging a capacitor in CMOS technologies. Antiferromagnetic (AFM) spintronics has recently emerged as a subfield of spintronics, where an AFM order parameter known as the Néel vector is used as a state variable [2][3][4].Due to being robust against magnetic perturbations, producing no stray fields, and exhibiting ultrafast dynamics, antiferromagnets can serve as promising functional materials for spintronic applications [ 5 ].Potentially, antiferromagnets can replace ferromagnets due to their orders of magnitude enhanced switching speed and storage density.To realize this potential, efficient electric control and detection of the AFM Néel vector are required.These functionalities can be realized using AFM tunnel junctions (AFMTJs) as core spintronic devices.The recent theoretical predictions [ 6 -8 ] and experimental demonstrations [9,10] show that AFMTJs can exhibit a strong electric response to the state of the Néel vector, and that the Néel vector itself can be electrically controlled [11].Potentially, AFM random-access memories (AFM-RAMs) are envisioned (Fig. 1b) that can provide a stronger magnetoresistive response, much faster operation speed, and higher memory density than MRAMs.This review article features fundamental properties of collinear and noncollinear AFMTJs, such as a momentum-dependent spin polarization and Néel spin currents, that control their magnetoresistive properties and discusses spin torques in AFMTJs that are capable of Néel vector switching. Magnetic tunnel junctions and tunneling magnetoresistance A ferromagnet hosts exchange-coupled parallel-aligned magnetic moments carrying a finite magnetization (Fig. 
2a), which can be used as a state variable to encode the information.Magnetization is easily controlled by magnetic fields and spin torques, which allows convenient write-in of information.The non-vanishing magnetization in a ferromagnet is inherited from its exchange-split electronic band structure (Figs.2b and 2c) that is also responsible for a variety of useful spin-dependent transport properties [1].Among them are those (e.g., magnetoresistance) that allow the electrical detection of the magnetization state for information read-out.To date, most spintronic devices have been based on ferromagnets.AFM-RAMs are expected to exhibit a stronger magnetoresistive response, a much faster operation speed, and higher density compared to the conventional MRAM, due to the advantages of AFMTJs. Magnetic tunnel junctions (MTJs) An MTJ is the most common spintronic device utilizing the advantages of ferromagnets.Figure 2d shows schematics of an MTJ, where left and right electrodes are ferromagnetic (FM) metals separated by an insulating non-magnetic tunnel barrier.Magnetization of the left electrode is pinned, while it is free in the right electrode.Parallel (P) and antiparallel (AP) alignments of magnetization in the two electrodes represent "0" and "1" bits of information. Tunneling magnetoresistance (TMR) Electron tunneling in an MTJ can be effectively controlled by the relative orientation of magnetization.Switching between the P and AP states changes the resistance of an MTJ.This effect is known as tunneling magnetoresistance (TMR) and can be used for read-out of "0s" and "1s" in an MTJ [12,13].The magnitude of the effect is normally quantified by the TMR ratio, = , where ( ) is resistance of the P (AP) state.TMR can be understood assuming that electron's spin is conserved in the tunneling process, so that tunneling of up-and down-spin electrons occurs in parallel in two spin conduction channels [14]. As a result, TMR can be qualitatively described in terms of Julliere's formula = , where 1 and 2 are spin polarizations of the two electrodes [12].The widely used definition of the total spin polarization is = where ↑,↓ ( ) is the spin-dependent density of states (DOS) at the Fermi energy ( ) (Figs. 2c and 2e).Based on this formula, a larger spin polarization of the electrodes favors a larger TMR.While relevant for polycrystalline MTJs, this simple description does not capture anisotropy of the Fermi surface that is essential for tunneling in crystalline MTJs, and it does not reflect effects of the tunnel barrier and interfaces [15]. In crystalline MTJs with no diffuse scattering, tunneling conductance can be described in the ballistic transport regime where the transverse wave vector ∥ is conserved, so that [16,17] where is the spin index, and ∥ is the ∥ -and -dependent transmission. 
∥ is determined by the probability of tunneling of the Bloch states across the barrier at ∥ .Due to being separated into ∥ -dependent channels conducting in parallel, the spindependent conductance in MTJs can be better characterized in terms of the ∥ -dependent number of conduction channels ∥ ( ∥ ) in the electrodes, rather than the total DOS-a characteristic integrated over ∥ .The number of conduction channels is defined by the number of propagating Bloch states in the transport direction at the Fermi energy [18]: where σ () is energy of the n-th band, is the band velocity along the transport z direction, and is the Fermi distribution function.A ∥ -dependent spin polarization ∥ can then be defined by and the net spin polarization by These quantities capture electron's spin and velocity at the Fermi surface, and thus are more relevant for the description of transport properties than the DOS.They can be regarded as transport spin polarizations of a magnetic metal in the ballistic transport regime.In crystalline MTJs, the ∥ -dependent transport spin polarization ∥ is more appropriate to quantify TMR than the net spin polarization .Based on ∥ , TMR in crystalline MTJs can be explained by stronger transmission for a P magnetization state than for an AP state due to the matching of ∥,1 and ∥,2 in the two electrodes (labeled by 1 and 2) for the P state and mismatching for the AP state (Fig. 2e).The tunneling barrier also plays an important role in TMR.First, transmission is expected to be stronger at those ∥ where the decay rate in the barrier is lower.Second, barrier effectively transmits only those Bloch states that have symmetry matching to the low-decay-rate evanescent states in the barrier [19,20].This process may significantly enhance TMR if the barrier selects conduction channels with a high degree of spin polarization (Fig. 2f).For example, the matching of the majorityspin ∆1 band in the Fe (001) electrode to the ∆1 evanescent state in the MgO (001) barrier is responsible for a large positive spin polarization and giant values of TMR predicted [20,21] and observed [22,23] in crystalline Fe/MgO/Fe (001) MTJs. TMR due to momentum-dependent spin polarization An antiferromagnet is a magnetically ordered material with equivalent magnetic moments exactly compensated, and thus zero net spontaneous magnetization [ 42 , 43 ].In collinear antiferromagnets with two antiparallel aligned magnetic sublattices, there exists a symmetry � enforcing = − , where is the magnetization on magnetic sublattice = , (Figs.3a and 3b).The Néel vector = − can serve as the magnetic order parameter.The most common � symmetry is � � that combines space inversion � and time reversal � (Fig. 3a).This symmetry enforces Kramers' spin degeneracy in the band structure, as � � ↑ () = ↓ () (Fig. 3a).This spin degeneracy also appears in compensated antiferromagnets with � = � ̂ symmetry ( � is spin rotation and ̂ is half a unit cell translation) in the absence of SOC.As a result, the spin polarization ∥ of all conduction channels vanishes (Fig. 3a). Figure 3b shows a typical spin-split Fermi surface of an altermagnet.Here, although � � and � ̂ symmetries are broken, there are two glide symmetries, � and � , that connect two sublattices.The Fermi surface exhibits an anisotropic spin distribution () that is � -odd, i.e. 
() = (−) .This contrasts with a � -even spin distribution in SOC-split nonmagnets, where () = −(−).The anisotropic � -odd () makes transport properties direction dependent.For example, in the [001] transport direction, the ∥ -dependent conduction channels in the (001) 2D Brillouin zone host antisymmetric spin polarizations, i.e. ∥ � , (c-f) are reprinted from Ref. [6] under permission of the Creative Commons CC BY license.due to the [001] direction being invariant under � and � .As a result, a current flowing along the [001] direction is globally spin neutral [6].On the contrary, an uncompensated ∥ appears in the conduction channels in the (110) 2D Brillouin zone, due to the [110] direction not being invariant under � and � .This allows a net spin-polarized current along the [110] direction with a finite transport spin polarization (Fig. 3b) [50,51]. Due to TMR being controlled by the ∥ -dependent spin polarization ∥ , even for the transport direction supporting only spin-neutral currents ( = 0), altermagnets can produce TMR in AFMTJs.This possibility has been investigated for RuO2 [6], a high-temperature AFM metal discovered recently [52,53].RuO2 has a rutile structure with two magnetic sublattices RuA and RuB (Fig. 3d).Its magnetic space group P42'/mnm' ensures the compensated magnetization and spin-split electronic structure.The RuO2 Fermi surface has similar characteristics to those shown in Fig. 3b.In the (001) stacking, the conduction channels reveal an antisymmetric distribution of ∥ with respect to the � and � planes in the 2D Brillouin zone (Fig. 3e).Since switching the AFM Néel vector reverses ∥ , matching ∥ in a RuO2 (001)based AFMTJ can be controlled by the relative orientation of the Néel vector in the two electrodes.This ensures a finite TMR.First-principles quantum-transport calculations for all-rutile RuO2/TiO2/RuO2 (001) AFMTJs confirm this prediction [6].As seen from Fig. 3f, the distribution of ∥ in the P state echoes the distribution of ∥ in bulk RuO2 (001), while ∥ in the AP state is blocked at the wavevectors with � ∥ � = 1 in bulk RuO2 (001).The resulting TMR is ~500% which is comparable to the TMR predicted for conventional Fe/MgO/Fe MTJs [20,21]. A giant TMR also appears in a RuO2/TiO2/RuO2 (110) AFMTJ, where ∥ is uncompensated in the RuO2 (110) 2D Brillouin zone supporting a spin-polarized current with a finite [51].While in this case, TMR is expected directly from the presence of the net spin polarization of RuO2 (110) (like in a FM MTJ), this conventional contribution to TMR appears to be small compared to the contribution associated with the matching of ∥ -dependent spin polarization ∥ in the two electrodes. Altermagnets such as RuO2 (110) can also serve as a counter electrode in MTJs with a single FM electrode.Since both FM and AFM electrodes have finite and uncompensated ∥ , the TMR is expected to occur due to the ∥ matching mechanism.This approach can be used to verify the application potential of altermagnets, since FM electrodes can be easily switched by an applied magnetic field.From the practical perspective, this can also simplify the design of a conventional MTJ, due to no need for an additional pinning layer.The giant TMR of RuO2/TiO2/CrO2 (110) all-rutile MTJ has been predicted recently [54,55], using half-metallic CrO2 [56,57] as a FM electrode and RuO2 as an AFM counter electrode. 
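As a rough numerical illustration of the channel-matching picture discussed in this section, the toy model below contrasts the Julliere estimate with a k-parallel-resolved two-spin-channel sum. All channel polarizations are invented for illustration; this is not a first-principles transmission calculation of the kind reported in Refs. [6, 20, 21].

```python
import numpy as np

def tmr_julliere(p1, p2):
    """Julliere estimate TMR = 2 P1 P2 / (1 - P1 P2)."""
    return 2 * p1 * p2 / (1 - p1 * p2)

print(f"Julliere TMR for P1 = P2 = 0.6: {tmr_julliere(0.6, 0.6):.2f}")

# k_par-resolved picture: each transverse-momentum channel carries its own
# spin polarization; reversing the Neel vector of one electrode flips the sign
# of its channel polarizations.  Channel transmission is taken (crudely) as the
# product of matching spin weights in the two electrodes.
rng = np.random.default_rng(0)
p_k = rng.uniform(-1.0, 1.0, 10_000)     # invented channel polarizations, net ~ 0

up1, dn1 = (1 + p_k) / 2, (1 - p_k) / 2  # electrode 1 spin weights per channel
G_P = np.sum(up1 * up1 + dn1 * dn1)      # parallel Neel vectors: same map
G_AP = np.sum(up1 * dn1 + dn1 * up1)     # reversed Neel vector: flipped map
print(f"k-resolved toy TMR = (G_P - G_AP)/G_AP = {(G_P - G_AP) / G_AP:.2f}")
```

Even though the net polarization of the invented channels averages to zero (as for a globally spin-neutral transport direction), reversing the Néel vector of one electrode suppresses G_AP relative to G_P, which is the essence of the momentum-dependent matching mechanism described above.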
TMR due to Néel spin currents In addition to the momentumdependent spin polarization, the sublattice-dependent spin polarization in real space can also result in TMR in collinear AFMTJs [11].In a collinear AFM metal with two magnetic sublattices and , a longitudinal charge current can be decomposed into intra-sublattice current ( = , ) and inter-sublattice current ( ≠ ) (Fig. 4a), such that The associated spin current is The spin current flowing through sublattice , dubbed the Néel spin current, I given by It hosts a sublattice-dependent spin polarization The intra-and inter-sublattice currents are determined by the inter-and intra-sublattice electron hopping along the transport direction.Since the intra-sublattice currents and are spin-polarized, the Néel spin currents with large can emerge if the intra-sublattice hopping is dominant.For A-type antiferromagnets composed of antiparallelaligned FM layers and C-type antiferromagnets composed of antiparallel-aligned chains, the intra-sublattice electron hopping is usually stronger than the inter-sublattice hopping.In these cases, the two AFM sublattices can be considered as connected in parallel with staggered Néel spin currents on the sublattices (Fig. 4a).AFMTJs based on such AFM electrodes can then be qualitatively considered as two MTJs connected in parallel, which naturally supports TMR (Fig. 4b). The Néel spin currents do not rely on spin-split electronic structure, and hence can emerge even in � � symmetric antiferromagnets with Kramers' spin degeneracy.For example, in the recently discovered two-dimensional (2D) van der Waals magnet Fe4GeTe2 (Fig. 4c), where the AFM order is induced by doping [58], Néel spin currents with a large spin polarization | | = 68% for each layer are predicted, despite the spindegenerate band structure (Fig. 4d), resulting in a sizable TMR in a Fe4GeTe2-based AFMTJ (Fig. 4e) [11].Such a lateral junction can be realized experimentally using the recently developed edge-epitaxy technique [59][60][61]. The TMR in the RuO2/TiO2/RuO2 (001) AFMTJ discussed above can be also understood in terms of the Néel spin currents [11].This is due to rutile MO2 (M is a transition metal element) being composed of edge-sharing MO6 octahedra chains along the [001] direction, where the adjacent chains share common corners of the octahedra.Therefore, RuO2 can be regarded as a C-type antiferromagnet that supports Néel spin currents with a non-zero spin polarization.This fact has been verified for a RuO2/TiO2/[TiO2/CrO2]n/CrO2 (001) MTJ, where [TiO2/CrO2]n represents a multilayer of alternating TiO2 (001) and CrO2 (001) monolayers with n repeats [54].The latter can be fabricated using modern thin-film growth techniques [62,63].Although ∥ matching of bulk RuO2 (001) and CrO2 (001) electrodes is unable to generate TMR by symmetry, the presence of the [TiO2/CrO2]n multilayer results in a different effective barrier thickness for the Néel spin currents flowing on RuA and RuB sublattices.As a result, this AFMTJ can be decomposed into two parallelconnected MTJs with different barrier thickness, where conduction is dominated by the MTJ with smaller barrier thickness.This generates sizable TMR and proves the existence of the Néel spin currents [54]. 
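The "two MTJs connected in parallel" picture can be sketched in a few lines of code. The decay rate, effective barrier thicknesses, and polarizations below are invented for illustration and are not the first-principles values of Ref. [54]; the point is only that a finite TMR survives when the two sublattice channels see different effective barrier thicknesses and cancels when they do not.

```python
import numpy as np

kappa = 0.5            # assumed evanescent decay rate in the barrier (1/Angstrom)
d_A, d_B = 8.0, 12.0   # assumed effective barrier thickness for sublattices A and B
P_neel = 0.7           # assumed sublattice polarization of the Neel spin current
P_counter = 1.0        # half-metallic (CrO2-like) counter electrode

def g_channel(d, p_afm, p_fm):
    # Transmission ~ exp(-2*kappa*d), weighted by a Julliere-like matching factor.
    return np.exp(-2.0 * kappa * d) * (1.0 + p_afm * p_fm)

def conductance(neel_sign):
    # Reversing the Neel vector (neel_sign = -1) swaps the sublattice polarizations.
    return (g_channel(d_A, +neel_sign * P_neel, P_counter)
            + g_channel(d_B, -neel_sign * P_neel, P_counter))

G_1, G_2 = conductance(+1), conductance(-1)
print(f"TMR with unequal effective barriers: {(G_1 - G_2) / min(G_1, G_2):.2f}")

# With equal effective thicknesses the two parallel sublattice channels
# compensate each other and the TMR vanishes, reproducing the symmetry
# argument quoted in the text for bulk k-matching alone.
d_B = d_A
G_1, G_2 = conductance(+1), conductance(-1)
print(f"TMR with equal effective barriers:   {(G_1 - G_2) / min(G_1, G_2):.2f}")
```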
AFMTJs with noncollinear AFM electrodes In a magnetically frustrated crystal structure, collinear magnetic moment alignment may not guarantee the lowest energy.For example, in a Kagome lattice with AFM nearest-neighbor exchange interactions, the co-planar moments form a noncollinear AFM alignment with a 120° angle between each other (Fig. 5a).The Néel vector for such antiferromagnets is not uniquely defined, and the noncollinear AFM order is often represented by a magnetic multipole, a magnetic toroidal multipole [64], or even by direction of a small net magnetization generated by magnetic moment canting due to SOC [65,66].The latter allows these noncollinear antiferromagnets to be considered as weak ferromagnets exhibiting spin-dependent transport properties, such as the anomalous Hall effect [67][68][69].This fact has stimulated broad interest in the properties of noncollinear antiferromagnets. Spin polarization in noncollinear antiferromagnets Noncollinear antiferromagnets exhibit lower symmetry compared to their collinear counterparts.As a result, they generally support nonrelativistic spin-split band structures and spin-textured Fermi surfaces (Fig. 5b) [70,71,72 ], even in the presence of � ̂ symmetry [47].Different from � -even spin textures in nonmagnetic materials induced by SOC, the � -odd spin textures support longitudinal spin-polarized currents [70,73], indicating the possibility of using them as electrodes in AFMTJs (Fig. 5c). However, since spin is not a good quantum number in noncollinear antiferromagnets, it is not clear if the spin-matching mechanism is still valid for noncollinear AFMTJs.The definition of ∥ given by Eq. ( 3) is inappropriate in this case.One can redefine the ∥ -dependent spin polarization as a vector [74] where ∥, is the spin expectation value for band at ∥ and EF, and ∥, is the net spin ∥ = ∑ ∥, at ∥ .This definition is equivalent to Eq. ( 3) for the collinear case.For noncollinear magnets, although the magnitudes and orientations of ∥ vary with ∥ , the magnitude of spin polarization ∥ = � ∥ � is large when spins ∥, at ∥ are nearly parallel in all conduction channels n, and is exactly 100% when only one conduction channel is present.Therefore, for noncollinear AFMTJs with two identical electrodes, the spin matching mechanism is expected to work if ∥ is large [74]. TMR in noncollinear AFMTJs The possibility of TMR in noncollinear AFMTJs has been proposed based on the predicted transport spin polarizations of Mn3X (X = Sn and Ir) [70] and ANMn3 (A = Ga, Ni, Sn, or Pt) [73], and then calculated from first principles for AFMTJs with noncollinear AFM Mn3Sn electrodes [7].Mn3Sn has a hexagonal D019 structure of space group P63/mmc, where Mn atoms form a Kagome-type frustrated lattice, with the magnetic moments aligned with 120° angles between each other [66] (Fig. 
5d).Such a magnetic alignment belongs to a magnetic space group ′′ that has � � and � ̂ symmetries broken.This results in a non-spindegenerate electronic structure with three Fermi surface sheets, each having finite momentum-dependent spin expectation values.Figure 5e shows the spin texture contributed by one of these sheets, indicating a finite ∥ and thus a possibility for Mn3Sn to serve as electrodes in a noncollinear AFMTJ.There are three other equivalent magnetic states in Mn3Sn, which can be obtained by a 60˚ rotation around the [001] axis.Switching between these magnetic alignments in one Mn3Sn electrode while keeping the other fixed in a Mn3Sn/vacuum/Mn3Sn AFMTJ changes the matching conditions for ∥ and generates sizable TMR as large as ~300% (Fig. 5f) [7].A large TMR has also been predicted for noncollinear AFMTJs based on Mn3Pt [9] and GaNMn3 [74] electrodes. Unlike collinear antiferromagnets whose magnetic state is difficult to control by external stimuli (this is why collinear AFMTJs have not been experimentally realized yet), noncollinear antiferromagnets can be controlled by a magnetic field [66,75] or a spin torque [76][77][78][79].This is due to nonvanishing net magnetization induced by SOC [66], strain [80], or interfaces [81].Therefore, a noncollinear AFTMJ is easier to realize in experiment. Recently, two independent experiments have been successfully performed to observe TMR in noncollinear AFMTJs [9,10].The first one [9] utilized cubic AFM Mn3Pt as electrodes and MgO as barrier in this AFMTJ (Fig. 5g).The bottom Mn3Pt (001) layer was pinned by the exchange bias from an adjacent collinear AFM MnPt layer, while the top Mn3Pt layer was free to be switched by an external magnetic field.The maximum TMR in this AFMTJ was found to be about 100% at room temperature (Fig. 5g) and about 138% at 10 K. Several devices have been tested and more than 50% of them had roomtemperature TMR of more than 70%. Another work [10] studied noncollinear Mn3Sn/MgO/Mn3Sn AFMTJ (Fig. 5h), where the bottom epitaxial Mn3Sn layer had a (011 � 1) orientation, and the top Mn3Sn layer was polycrystalline.The P and AP states in this AFMTJ could be switched by the magnetic field producing TMR of about 2% at room temperature (Fig. 5h).The effect appeared to be not as large as that predicted [7] likely due to polycrystallinity of the top magnetic layer. Spin torques in AFMTJs Although magnetic fields are widely used to toggle between P and AP states in MTJs, their generation requires substantial currents making them energy inefficient.For low-power and high-density applications, spin torques are more favorable and thus have been extensively explored [82][83][84][85][86][87][88].The spin-torque control of magnetization is even more critical for AFMTJs, since most AFM electrodes (except those with SOC-induced small net magnetic moment), are insensitive to a magnetic field. Spin torques for magnetic switching The dynamics of a magnet can be well described by the Landau-Lifshitz-Gilbert-Slonczewski equation [82,89]: where is the magnetization of sublattice , is the damping constant, is the gyromagnetic ratio.The first two terms in Eq. 
( 10) describe the precession and damping torques induced by the intrinsic effective field , .The two last terms are external field-like and damping-like spin torques ∝ − × and ∝ − × ( × ), where is the current induced non-equilibrium spin polarization on sublattice .For ferromagnets, , is mostly determined by the anisotropy field , , and spin torques driven by can directly compete with the intrinsic torques to switch magnetization.The switching of ferromagnets is therefore controlled by , and occurs in the GHz frequency range. In MTJs, spin polarization is carried by the longitudinal current flowing across the junction.The tunneling current emitted by one electrode carries spin angular momentum that can be transferred to the magnetic moment of the other electrode, generating a spin-transfer torque (STT) [84].In addition, spin torques can be produced by an in-plane current via spin-Hall [87] and Rashba-Edelstein effects [85].This requires an additional spin-source layer that is adjacent to the free layer and has a large SOC.This type of spin torque is known as the spin-orbit torque (SOT).for switching the Néel vector.However, the spin torque can be used to tilt and enforcing a Néel vector precession driven by , [90].Due to the large , , such precession occurs in the THz frequency range.This property can be used for the switching of the Néel vector, provided that an appropriate torque can be generated. Spin torques in collinear AFM electrodes We, first, consider a damping-like torque that is usually employed to switch ferromagnets and can be generated by a uniform spin polarization of a tunneling current, a spin-Hall effect, or a Rashba-Edelstein effect [82][83][84][85][86][87][88].Normally, in collinear antiferromagnets, these effects lead to = = for the two sublattices, which results in a uniform damping-like torque = and staggered field-like torque = − . The former tilts and causing the staggered , to drive an ultrafast oscillation of the Néel vector in the plane perpendicular to (Fig. 6a, left).This oscillation is persistent that is promising for THz applications [90].However, the damping-like spin torque induced by a uniform polarization is indeterministic for the reversal of the Néel vector. We note, however, that such spin torque can be used to rotate the Néel vector between the easy axes in antiferromagnets with multi-axial anisotropy.For example, in an antiferromagnet with bi-axial anisotropy and easy axes along the x and y directions, the Néel vector can be switched from the initial x direction to the final y (or −y) direction.This is because a currentinduced uniform along the x axis exerts a damping-like spin torque that drives a persistent oscillation confined within the y-z plane.When the current is released, the Néel vector relaxes to the easy y axis.This approach has been employed in heterostructures of antiferromagnet/heavy metal bilayers, where was generated by the spin Hall effect in the heavy metal layers [95,[97][98][99]. In contrast, if a staggered spin polarization is induced in an antiferromagnet, i.e. = − , the staggered damping-like torque = − and the uniform field-like torque = can be generated [93,94].The latter tilts and , resulting in the staggered , that drives the rotation of toward the direction of (Fig. 6a, right).When is rotated to be parallel to , vanishes and the dynamics of the Néel vector stops.Therefore, if the staggered is collinear to the easy axis of an antiferromagnet, the ultrafast and deterministic switching of the Néel vector can be achieved. 
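The symmetry argument above is easy to verify numerically. Using the torque forms quoted in the text (field-like proportional to -m x p and damping-like proportional to -m x (m x p), a convention assumed here), the short check below confirms that a uniform spin polarization produces a uniform damping-like torque and a staggered field-like torque on two antiparallel sublattices, while a staggered polarization does the opposite.

```python
import numpy as np

# Quick numerical check of the torque symmetry for two antiparallel sublattice
# moments m_A = -m_B (unit vectors chosen arbitrarily for illustration).
def field_like(m, p):
    return -np.cross(m, p)

def damping_like(m, p):
    return -np.cross(m, np.cross(m, p))

m_A = np.array([0.0, 0.0, 1.0])
m_B = -m_A
p = np.array([1.0, 0.0, 0.0])   # current-induced spin polarization

# Uniform polarization (p_A = p_B = p): damping-like torques are equal
# (uniform), field-like torques are opposite (staggered).
print(damping_like(m_A, p), damping_like(m_B, p))
print(field_like(m_A, p), field_like(m_B, p))

# Staggered polarization (p_A = -p_B): the roles interchange, giving the
# uniform field-like torque that enables deterministic Neel-vector switching.
print(damping_like(m_A, p), damping_like(m_B, -p))
print(field_like(m_A, p), field_like(m_B, -p))
```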
The required staggered and uniform can be realized in AFMTJs due to the Néel spin currents.Tunneling Néel spin currents can transfer the staggered from the reference layer to the free layer, exerting the required torques for switching.First-principles quantum-transport calculations have been performed for RuO2/TiO2/RuO2 (001) AFMTJs (Fig. 6b) and Fe4GeTe2/vacuum/Fe4GeTe2 lateral AFMTJs [11].These calculations show that the total field-like torque in the free layer FIG.6 (a) Schematics of the dynamics of collinear antiferromagnets induced by a spin current with uniform spin polarizations (left) and staggered spin polarizations (right) on two sublattices.(b) Atomic structure of a RuO2/TiO2/RuO2 (001) AFMTJ (left) and calculated STT driven by Néel spin currents in this AFMTJ (right).Reprinted from Ref. [11] with permission.(c) Schematic noncollinear AFMTJ (top) and calculated STT in this AFMTJ for different magnetic states of the electrodes (bottom).Reprinted from Ref. [96] with permission.(d) SOT switching of an epitaxial Mn3Sn film with assistance of a magnetic field.Reprinted from Ref. [77] with permission.(e) Field-free spintorque switching of a Mn3Sn polycrystalline film.Reprinted from Ref. [79] under permission of the Creative Commons CC BY license. is large, while the total damping-like torque is small, consistent with the expectation of = and = − .The total field-like torque induced by the Néel spin currents is comparable to that in a Fe/MgO/Fe MTJ with similar barrier thickness [104], robust to the interface structure, and thus can be used to generate the ultrafast deterministic switching of the Néel vector in AFMTJs [11]. Spin torques in noncollinear AFM electrodes.In a noncollinear antiferromagnet with three magnetic sublattices ( , , and ) aligned within the Kagome plane, the spintorque dynamics exhibits rich behavior.For example, when an external spin current is injected into a noncollinear antiferromagnet and its spin polarization is perpendicular to the Kagome plane, i.e. ⊥ , the damping-like torque is uniform.This torque tilts along the out-of-plane direction, causing the finite , to drive an ultrafast oscillation of within the Kagome plane (like in the case of Fig. 6a, left) [105,106].In the presence of multi-domain states, such may drive fast motion of domain walls [107]. If is in the plane, the resulting torques on the sublattices are different, due to different directions of relative to .For example, in a noncollinear AFMTJ based on Mn3Pt-type electrode (Fig. 6c) [96], the magnetic group symmetry allows an x-directional longitudinal spin current carrying spin polarization along the y direction, i.e. ∥ [70].Theoretical modeling shows that this spin current exerts damping-like self-torques on and (Fig. 6c, bottom) [96].Such self-torques, though interesting, are not able to switch a noncollinear antiferromagnet, because they are internally generated within the antiferromagnet and rotated together with its magnetic order.In contrast, STTs generated by a tunneling spin current injected from another noncollinear AFM electrode can perform the deterministic switching (Fig. 6c, bottom).The self-torques in this case can be useful to reduce the switching current density [96]. 
Besides the global spin currents, noncollinear local spin currents due to nonrelativistic [96] and relativistic [108] origins can also emerge in noncollinear antiferromagnets.In addition, noncollinear antiferromagnets support the nonrelativistic Rashba-Edelstein effect [109].These factors can also contribute to spin torques.Furthermore, due to the existence of the nonrelativistic net magnetization [66], many noncollinear antiferromagnets can be considered as weak ferromagnets, and their spin-torque dynamics can be understood as the interplay of weak magnetization and current-induced spin polarization.For example, in an epitaxial noncollinear AFM Mn3Sn with perpendicular net magnetization, SOT switching by a spin Hall current generated from an adjacent heavy metal layer requires an assisting in-plane magnetic field (Fig. 6d) [76][77][78].This is like the conventional SOT switching of a perpendicular ferromagnet.Remarkably, a field-free switching of a polycrystalline Mn3Sn film (Fig. 6e) has been reported recently [79], which may combine the different types of spin torques mentioned above. Finally, we note that STTs in noncollinear AFMTJs are different from those in collinear MTJs [96,110 ].The latter cannot occur for the perfect P or AP states, and hence thermal activation is required to induce magnetic fluctuations and activate switching.In a noncollinear AFMTJ, however, STTs can occur in any configuration due to noncollinear sublattice moments (Fig. 6c, bottom) [96].This eliminates the requirement of thermal activation that suffers from energy dissipation. Summary and Outlook As is evident from this brief review, AFMTJs exhibit interesting functional properties useful for applications.The most notable among them is the giant TMR effect.In conjunction with the possibility of the AFM Néel vector switching by spin torques, it provides potential for novel and more advanced AFM-RAMs.The underlying physics of these properties is determined by the strong exchange interactions in AFM metals, crystal symmetry of the antiferromagnets, and the associated nonrelativistic momentum-and sublattice-dependent spin polarizations.Due to the strong magnetoresistive responses, AFMTJs are expected to be superior to those AFM spintronic devices that are controlled by the relativistically-induced spin-dependent properties associated with a weak SOC [94,95,97, 111 -113 ].However, while AFMTJs have promising perspectives, their investigations are still in a rudimentary stage of development, and thus substantial efforts are required to further elucidate their basic properties and evaluate their value for spintronics. 
Although the mechanism of TMR in collinear AFMTJs has been understood, experimental demonstrations are still lacking.This is largely due to difficulties of controlling the Néel vector in collinear antiferromagnets.In this regard, an MTJ with a single FM electrode and an AFM counter electrode can be employed as a preliminary test for using collinear antiferromagnets in AFMTJs [54,55].In addition, as-grown AFM films usually host a complex domain structure with oppositely aligned AFM domains.Using such films as a reference layer in an AFMTJ would obviously diminish TMR.This problem could be addressed by depositing an AFM film on a FM layer, so that the AFM domains are aligned and switched by an exchange bias [114,115].Once the domains in the AFM reference layer are well aligned, the Néel vector of the free AFM layer could be deterministically switched by spin torques induced by the Néel spin currents [11]. Ultimately, it is desirable to use electric means to align the collinear AFM domains rather than a magnetic field-controlled exchange bias.Although a spin current with a uniform spin polarization cannot deterministically switch the Néel vector in collinear antiferromagnets on its own, this may be possible with assistance of other factors, such as an external magnetic field, a Dzyaloshinskii-Moriya interaction [ 116 , 117 ], or interfacial uncompensated magnetization.These possibilities are worth further investigations.In addition, in an X-type antiferromagnet, aligning AFM domains is possible by passing an external spin current that exerts a spin torque on a single magnetic sublattice [ 118 ].A similar approach is feasible for AFMTJs with an engineered structure, such as RuO2/TiO2/[TiO2/CrO2]n/CrO2 (001) [54]. Spin-torque switching of noncollinear AFM metals have been demonstrated in several experiments [76][77][78][79], but requires further investigations.There are various mechanisms to induce spin polarization globally and/or locally in noncollinear antiferromagnets, and various factors affecting its magnitude and direction.Due to relatively low symmetry of noncollinear antiferromagnets, the spin-polarization magnitude is expected to strongly depend on the electric field direction causing dissimilar spin-torque dynamics of noncollinear antiferromagnets with different crystallographic orientations.Net magnetization due to relativistic spin canting [65,66], piezomagnetism [80], and breaking periodicity at the interface [81] may also influence the spin-torque dynamics. We expect that spin-torque switching of antiferromagnets could be very energy-efficient.To switch the Néel vector, the applied spin torque does not need to directly compete with the strong exchange field, but rather with much smaller magnetic anisotropy and intrinsic damping.Only a slight tilting of the magnetic moments by the spin torque is required to initiate the exchange-driven spin dynamics.Therefore, the associated critical current is expected to be comparable or than that needed for the spin-torque switching of ferromagnets.In addition, the spin-torque switching of antiferromagnets is expected to be much faster than that of ferromagnets.As a result, the comparable or smaller switching current and the much shorter switching time are expected to consume much less energy. 
In addition to the spin torque, there are other means to switch the Neel vector in antiferromagnets.For example, it is possible to control the Néel vector by piezoelectric strain induced by electric field [ 119 , 120 ].In magnetoelectric antiferromagnets, such BiFeO3 [121,122,123] and Cr2O3 [124], may be potentially employed as an exchange-coupled under-(over-)layer in AFMTJs to control the Néel vector of the AFM electrode by an electric field applied to the magnetoelectric.The Néel vector may also be switched by voltage controlled magnetic anisotropy (VCMA) [125]. Another issue that needs to be addressed is the material choice for AFMTJs.Among the relatively small number of known altermagnets, only three of them [52,126,127], to the best of our knowledge, are metallic and antiferromagnetic at room temperature.This limits the choice of altermagnetic electrodes for realistic applications.Further material search and design are required [128,129].In this regard, noncollinear antiferromagnets may be a better choice, because many of them have the Néel temperature above room temperature and two of them, namely Mn3Pt and Mn3Sn, have already demonstrated their functionality in AFMTJs [9,10].An important issue which needs to be addressed is magnetic anisotropy which is typically much weaker in noncollinear antiferromagnets than in their collinear counterparts.To realize robust nonvolatile states in noncollinear AFMTJs, optimizing their magnetic anisotropy is necessary.Yet, weak magnetic anisotropy makes noncollinear AFMTJs suitable for magnetic sensor applications [130]. AFMTJs with AFM electrodes that have multiple anisotropy axes may be useful to realize spintronic devices with multiple non-volatile resistance states.For example, in the case on Mn3Sn, different non-volatile resistance states can be obtained associated with the ground-state Néel vector configurations of the AFM electrodes (Fig. 5f).As we have discussed in Sec. 4, switching between these states can be accomplished by a damping-like spin torque. Tunneling barrier also plays a very important role in the performance of AFMTJs.The widely used MgO in conventional MTJs [22,23] may be not an optimal choice for AFMTJs [9,10].This is due to the evanescent states in MgO mostly supporting transmission of electrons with the transverse wave vectors ∥ around the center of the 2D Brillouin zone [20,21], where the spin polarization ∥ of the AFM electrodes may be relatively small [6,74].It would be desirable to search for insulating materials with low decay rates at ∥ away from the zone center to match the spin-polarized conduction channels of AFM electrodes when designing AFMTJs [74].In addition, if the barrier exhibits a sizable SOC, interesting transport phenomena may occur in AFMTJs due to the interplay between the nonrelativistic and relativistic effects, such as unconventional Hall effects [131][132][133], tunneling anisotropic magnetoresistance [134], and non-reciprocal transport [135]. Finally, probably the most critical requirement for observing the predicted giant TMR effects in AFMTJs is good crystallinity of AFMTJs.Conservation of the transverse momentum ∥ in the process of tunneling imposes stringent conditions on the quality of thin films and heterostructures comprising AFMTJs.Since the momentum-and sublattice-dependent spin polarization in antiferromagnets is highly anisotropic, capabilities for epitaxial growth of AFMTJs along selected directions are required.These challenges need to be addressed by qualified material scientists. 
Overall, there are clear indications that AFMTJs can outperform conventional MTJs in terms of the TMR magnitude, switching speed, and packing density. The predicted TMR values are gigantic, and the first experimental data give grounds for optimism. The switching speed of AFMTJs is expected to be a few orders of magnitude faster. The high packing density is guaranteed by the absence of stray magnetic fields. Thus, AFMTJs have the potential to become a new standard for spintronic devices, making this research field rich with opportunities for innovation and new developments.
FIG. 1 (a) Schematic of MRAM consisting of an array of conventional MTJs. (b) Schematic of AFM-RAM consisting of an array of AFMTJs. AFM-RAMs are expected to exhibit a stronger magnetoresistive response, a much faster operation speed, and higher density compared to the conventional MRAM, due to the advantages of AFMTJs.
FIG. 2 (a-c) Schematic of magnetic structure (a), Fermi surface (b), and electronic density of states (DOS) (c) of a ferromagnet. Red and blue colors in (b) denote up- and down-spin Fermi surfaces. Projection of the Fermi surface into a 2D Brillouin zone represents a distribution of conduction channels. (d) Schematic of a conventional MTJ based on two FM electrodes and a nonmagnetic tunnel barrier. (e) Schematic of the TMR mechanism based on spin-polarized DOS. (f) Schematic of the TMR mechanism based on the momentum-dependent spin-polarization matching of conduction channels in two electrodes. (g) Schematic of momentum-dependent spin-filtering in the tunnel barrier.
FIG. 3 (a) Schematics of the magnetic structure and the Fermi surface of a collinear antiferromagnet with magnetization compensated by the combined space-inversion and time-reversal symmetry. The symmetry enforces spin degeneracy of the Fermi surface and conduction channels (indicated by grey color). (b) Schematics of the magnetic structure and the Fermi surface of a collinear antiferromagnet with magnetization compensated by two glide symmetries. The symmetries allow spin splitting at wavevectors away from the glide-invariant planes (indicated by red and blue colors). This leads to the momentum-dependent spin polarization that is compensated in the (001) plane and uncompensated in the (110) plane. (c) Schematic of an AFMTJ with two AFM electrodes and a nonmagnetic tunnel barrier. (d) The atomic structure of a collinear AFM RuO2, which hosts two sublattices RuA and RuB with antiparallel magnetic moments. (e) Momentum-dependent conduction channels and associated spin polarizations in the 2D Brillouin zone of RuO2 (001). (f) Calculated momentum-dependent transmission of a RuO2/TiO2/RuO2 (001) AFMTJ. Figures (c-f) are reprinted from Ref. [6] under permission of the Creative Commons CC BY license.
FIG. 4 (a) Schematics of staggered Néel spin currents in a collinear antiferromagnet with strong intra-sublattice coupling. (b) AFMTJ with Néel spin currents that can be qualitatively considered as two MTJs connected in parallel. (c) PT-symmetric 2D A-type AFM metal Fe4GeTe2 that supports Néel spin currents. (d) Lateral AFMTJ with sizable TMR despite the spin-degenerate electronic structure of Fe4GeTe2 electrodes. The figures are reprinted from Ref. [11] with permission.
FIG. 5 (a) Schematic of the typical noncollinear AFM alignments. (b) Non-relativistic anisotropic spin texture at the Fermi surface of a noncollinear antiferromagnet supporting spin-polarized currents. (c) Schematic of a noncollinear AFMTJ with P (left) and AP (right) Néel vectors. Figures (b) and (c) are reprinted from Ref. [70] with permission. (d) Atomic and magnetic structure of Mn3Sn. (e) Spin distribution on a selected Fermi surface sheet of Mn3Sn, projected onto the (0001) plane. (f) Calculated tunneling conductance G per lateral unit cell area (left axis) and resistance-area (RA) product (right axis) for a Mn3Sn/vacuum/Mn3Sn AFMTJ when the magnetic domain shown in (d) is rotated by an angle around the [0001] axis. Figures (e,f) are reprinted from Ref. [7] with permission. (g,h) Experimental results on TMR in Mn3Pt/MgO/Mn3Pt (g) and Mn3Sn/MgO/Mn3Sn (h) tunnel junctions. Left panels schematically show the geometry of the AFMTJs; right panels show resistance vs applied magnetic field. Figure (g) is reprinted from Ref. [9] with permission. Figure (h) is reprinted from Ref. [10] under permission of the Creative Commons CC BY license.
Here, superscripts F and D denote the associated precession (field) and damping torques, respectively. Since the exchange field acting on one sublattice is proportional to minus the magnetization of the other sublattice, these exchange-driven torques vanish if the two sublattices are exactly antiparallel. However, if the sublattice magnetizations are tilted by a spin torque, the exchange field generates staggered torques. Because the exchange field is typically a factor of 10^3 greater than the anisotropy field, it is practically impossible to create a spin torque large enough to directly compete with the exchange field for switching the Néel vector.
Tamoxifen use and potential effects on liver parenchyma: A long-term prospective transient elastographic evaluation
Abstract Tamoxifen is a commonly prescribed drug in both early and metastatic breast cancer. Prospective studies in Asian populations demonstrated that tamoxifen-related liver steatosis occurred in more than 30% of the patients within 2 years after start of treatment. No well-designed prospective studies on potential tamoxifen-related liver steatosis have been conducted in Caucasian patients so far. Therefore, our prospective study aimed to assess the incidence of tamoxifen-related liver steatosis for a period of 2 years in a population of Caucasian breast cancer patients treated with tamoxifen. Patients with an indication for adjuvant treatment with tamoxifen were included in this study. Data were collected at 3 months (T1) and at 2 years (T2) after start of tamoxifen treatment (follow-up period of 21 months). For the quantification of liver steatosis, patients underwent liver stiffness measurement by transient elastography with simultaneous controlled attenuation parameter (CAP) determination using the FibroScan. A total of 95 Caucasian breast cancer patients were included in this evaluation. Liver steatosis was observed in 46 of 95 (48%) and 48 of 95 (51%) of the patients at T1 and T2, respectively. No clinically relevant increase in liver steatosis was observed during the treatment period of 2 years with tamoxifen (median CAP = 243 ± 49 dB/m (T1) and 253 ± 55 dB/m (T2), respectively; p = 0.038). Conclusion: In this prospective longitudinal study in Caucasian breast cancer patients, no clinically relevant alterations in liver steatosis in terms of CAP values and liver/lipid parameters were observed after 2 years of tamoxifen treatment. This study therefore demonstrates an absence of tamoxifen-related adverse events such as steatosis and (early) development of fibrosis or cirrhosis during a treatment period of at least 2 years.
Tamoxifen is a commonly prescribed drug in both early-stage and metastatic breast cancer. [1] Although the toxicity profile is relatively mild, tamoxifen use is associated with development of fatty liver disease. Prospective studies in Asian populations demonstrated that tamoxifen-related liver steatosis occurred in more than 30% of the patients within 2 years after start of treatment. [2,3] The concept of primary liver steatosis (related to metabolic risk factors) and secondary (e.g., drug use) can intermingle in clinical practice. Earlier, we described a Caucasian patient who developed a severe stage of liver steatosis, 6 months after starting with daily tamoxifen treatment. [4] Despite these data, no well-designed prospective studies on potential tamoxifen-related liver steatosis have been conducted in Caucasian patients so far. Considering that most patients with early-stage breast cancer have a good prognosis, preventing severe long-term side effects such as fatty liver disease is highly relevant. Moreover, recent data suggest a clinical benefit of extending tamoxifen therapy to 10 years especially in premenopausal, young patients. [5,6] Our prospective, observational study aimed to assess the incidence of tamoxifen-related liver steatosis for a period of 2 years in a population of Caucasian breast cancer patients treated with tamoxifen. Caucasian patients with an indication for adjuvant treatment with tamoxifen were included in this study. Patients who had longer than 3 months of tamoxifen treatment or started with a dose higher than 20 mg once daily, and patients with a non-Caucasian ethnicity, were not eligible for inclusion. The study was approved as a secondary endpoint by the Local Ethics Committee (Erasmus MC) and was registered in the Dutch Trial Registry (www.trialregister.nl; NL6918). [7] Written informed consent was obtained from all patients participating in this study. All patients were evaluated for a period of 2 years after start of tamoxifen therapy. Data were collected at 3 months (T1) and at 2 years (T2) after start of tamoxifen treatment, during two outpatient visits, including blood sampling for liver function (e.g., alanine aminotransferase, aspartate aminotransferase [AST], gamma-glutamyltransferase, alkaline phosphatase [ALP], total bilirubin [TB]) and lipid spectrum. For the quantification of liver steatosis, patients underwent liver stiffness measurement (LSM) by transient elastography with simultaneous controlled attenuation parameter (CAP) determination using the FibroScan Touch 502 software version C 3.2 (Echosens). Experienced operators performed all FibroScan examinations as per the manufacturer's recommendations. Primary endpoint in this observational study was the alteration in liver steatosis 2 years after start of tamoxifen treatment compared with baseline measurements (T1). Statistical differences between groups or paired data points were calculated by appropriate parametric or nonparametric tests. All tests were two-sided, and p < 0.05 was considered statistically significant.
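As an illustration of the paired analysis described above, the sketch below runs a two-sided Wilcoxon signed-rank test on simulated CAP values and applies the CAP > 248 dB/m cut-off. The numbers are simulated for illustration only and are not the study data; the Wilcoxon test is one example of the "appropriate parametric or nonparametric tests" mentioned above.

```python
import numpy as np
from scipy import stats

# Simulated CAP values loosely inspired by the reported means/SDs (NOT study data).
rng = np.random.default_rng(42)
n = 95
cap_t1 = rng.normal(243, 49, n)             # CAP at T1 (dB/m)
cap_t2 = cap_t1 + rng.normal(10, 30, n)     # CAP at T2 (dB/m), assumed small drift

# Paired, two-sided, non-parametric comparison of T1 vs T2.
stat, p_value = stats.wilcoxon(cap_t1, cap_t2)
print(f"Wilcoxon signed-rank: p = {p_value:.3f}")

# Steatosis classification with the CAP > 248 dB/m cut-off used in the study.
print(f"Steatosis prevalence: T1 = {np.mean(cap_t1 > 248):.0%}, "
      f"T2 = {np.mean(cap_t2 > 248):.0%}")
```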
Five percent of our patients were excluded from analysis due to loss to follow-up (not for a medical reason); therefore, a total of 95 Caucasian breast cancer patients (age = 55.9 ± 12.0 years and body mass index [BMI] = 25.5 ± 3.8 kg·m−2) were included in this evaluation, and all 190 FibroScan assessments were performed and eligible for analyses. The FibroScan was performed 3 months after initiation of tamoxifen due to practical considerations. Generally, development of liver steatosis progresses slowly; however, a rapid development (within a few months after tamoxifen initiation) may not be excluded in rare cases. Liver steatosis (defined by a CAP > 248 dB/m according to a validation report by Echosens) was observed in 48% and 51% of the patients at T1 and T2, respectively. No clinically relevant increase in liver steatosis was observed during the treatment period of 2 years with tamoxifen (median CAP = 243 ± 49 dB/m [T1] and 253 ± 55 dB/m [T2], respectively; p = 0.038). Also, no alterations were observed in fibrosis scores between 3 months and 2 years of treatment with tamoxifen (4.6 ± 1.4 kPa [T1] and 4.4 ± 1.4 kPa [T2], respectively; p > 0.05). Results of the FibroScan assessments are presented in Table 1. Liver fibrosis, defined by LSM > 7.0 kPa, was diagnosed in 9 patients (9%) at T1 and in 6 patients (6%) at T2, respectively. In case of a suspicion of severe liver fibrosis (>9.5 kPa), patients were referred to a hepatologist for a second opinion. In all cases, no diagnosis of hepatitis was made by the hepatologist. Lifestyle advice (limited alcohol intake, exercise, diet, etc.) was given, and follow-up for liver fibrosis was advised. These consultations did not lead to dose alterations, interruptions, or discontinuations. Furthermore, the liver parameters were stable over time in these patients. A statistically significant difference was found between biochemistry parameters at 3 months compared with 2 years of tamoxifen treatment, including an increase in mean AST, triglycerides, apolipoprotein B and glucose, and a decrease in mean TB, ALP, and low-density lipoprotein. No differences were observed between T1 and T2 for weight and BMI. In our population, 13 of 95 (14%) patients used drugs for diabetes mellitus, hypertension, or hypercholesterolemia. No association between those drugs and liver steatosis at T1 or T2 was found (p > 0.05). In addition, liver fibrosis stiffness score was stable over time in patients with steatosis (mean 4.9 ± 1.5 kPa at T1 vs. 4.6 ± 1.6 kPa at T2; p > 0.05). In general, patients with a CAP > 248 dB/m were characterized by a higher BMI (26.9 ± 3.7), age (58.9 ± 11.6), or triglyceride levels (1.8 ± 0.8) compared with the population below 248 dB/m. These findings clearly indicate "lifestyle factors" as a major risk factor for the development of liver steatosis. The main parameters of the population of tamoxifen users are presented in Table 1. Previously, a prospective observational study in 175 Chinese patients demonstrated a cumulative incidence of liver steatosis of 38% after 2 years of tamoxifen use. [3] The present prospective, observational study investigates the potential effect of tamoxifen on liver steatosis in a Caucasian population. Both studies show no clinically relevant alterations of liver enzymes after extensive tamoxifen use during 2 years. [3] In contrast to an Asian population, no increase in liver steatosis was observed in our Caucasian population.
The mechanism behind the development of fatty liver disease in (Asian) tamoxifen users has not been fully elucidated, although there are indications of a disturbance of lipid homeostasis due to antagonism of the estrogen receptor. [8] In line with historical data, 48% of our patients were diagnosed with liver steatosis at T1. [9] In a study population in the United States, liver steatosis prevalence was low in Asian patients (18%) and high among Mexican Americans (48%). [10] Therefore, apart from traditional ("lifestyle") risk factors and the adoption of Western culture, ethnic factors appear to play a significant role in the development of liver steatosis. The absence of data on lifestyle-related risk factors (e.g., waist-hip circumference, alcohol consumption) is a minor limitation of our study. In addition, a follow-up of 2 years is too short to identify serious complications of steatosis, such as nonalcoholic steatohepatitis or liver fibrosis. The data of this study may not be generalizable to other populations that are more ethnically diverse (non-Caucasian) or have a higher mean BMI. In conclusion, in this prospective longitudinal study in Caucasian breast cancer patients, no clinically relevant alterations in liver steatosis in terms of CAP values and liver/lipid parameters were observed after 2 years of tamoxifen treatment. This study therefore demonstrates an absence of tamoxifen-related adverse events such as steatosis and (early) development of fibrosis or cirrhosis during a treatment period of at least 2 years. *p-value < 0.05; **p-value < 0.01; ***p-value < 0.001.
2022-06-12T06:18:16.635Z
2022-06-10T00:00:00.000
{ "year": 2022, "sha1": "46f9cee86dc6ff1aa121da2eb50ddf1c18e4dc44", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "0bf4901e173dd53709ad75390d4daddf112f1cca", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
39002687
pes2o/s2orc
v3-fos-license
Contribution of the Carboxyl Terminus of the VPAC1 Receptor to Agonist-induced Receptor Phosphorylation, Internalization, and Recycling*

When exposed to vasoactive intestinal peptide (VIP), the human wild type VPAC1 receptor expressed in Chinese hamster ovary (CHO) cells is rapidly phosphorylated, desensitized, and internalized in the endosomal compartment and is not re-expressed at the cell membrane within 2 h after agonist removal. The aims of the present work were first to correlate the receptor phosphorylation level with internalization and recycling, measured by flow cytometry and in some cases by confocal microscopy using a monoclonal antibody that did not interfere with ligand binding, and second to identify the phosphorylated Ser/Thr residues. Combining receptor mutations and truncations allowed identification of Ser250 (in the second intracellular loop), Thr429, Ser435, Ser448 or Ser449, and Ser455 (all in the distal part of the C terminus) as candidates for VIP-stimulated phosphorylation. The effects of single mutations were not additive, suggesting alternative phosphorylation sites in mutated receptors. Replacement of all of the Ser/Thr residues in the carboxyl-terminal tail and truncation of the domain containing these residues completely inhibited VIP-stimulated phosphorylation and receptor internalization. There was, however, no direct correlation between receptor phosphorylation and internalization; in some truncated and mutated receptors, a 70% reduction in phosphorylation had little effect on internalization. In contrast to results obtained on the wild type and all of the mutated or truncated receptors that still underwent phosphorylation, internalization of the severely truncated receptor was reversed within 2 h of incubation in the absence of the agonist. Receptor recovery was blocked by monensin, an endosome inhibitor.

The neuropeptide vasoactive intestinal polypeptide (VIP) exerts its multiple regulatory functions through interaction with two high affinity receptors named VPAC1 and VPAC2. These are members of a family of G protein-coupled receptors (GPCRs), designated as Class II or B. This class also includes, among others, receptors for peptides of at least 20 amino acid residues like secretin, glucagon, glucagon-like peptides, growth hormone-releasing peptide, parathormone, and pituitary adenylate cyclase-activating peptide (1). VPAC1 and VPAC2 receptors are preferentially coupled to the Gαs protein (1) responsible for increasing cyclic AMP concentrations but may also, with a lower efficiency, couple to Gαi and Gαq proteins (2) responsible for a [Ca2+]i and inositol 1,4,5-trisphosphate increase. As with most, if not all, of the GPCRs, both VIP receptors are desensitized, sequestered, and down-regulated after exposure to agonist (3)(4)(5). This was observed in cells expressing native receptors as well as in transfected Chinese hamster ovary (CHO) cells and HEK 293 cells. It was recently demonstrated that VPAC1 receptor phosphorylation and desensitization were enhanced by co-transfection with the G protein receptor kinases GRK2, -3, -5, and -6 (5). Although the overexpression of arrestin or of a dominant negative mutant did not modify receptor internalization, the inhibitory effect of a dominant negative mutant of dynamin suggested the following sequence of events for receptor regulation: agonist stimulation, G protein kinase-mediated phosphorylation, β-arrestin translocation, and dynamin-dependent receptor internalization (5).
In the present work, we detailed the contribution of the carboxyl-terminal intracellular tail to receptor internalization by studying truncated and mutated human VPAC1 receptors expressed in CHO cells. We developed a monoclonal antibody against the amino-terminal extracellular part of that receptor that permitted the quantification, by flow cytometry, of the receptors expressed at the cell membrane. Immunoprecipitation of the receptor after metabolic labeling with [32P]orthophosphate, followed by SDS-PAGE and autoradiography, allowed for phosphorylation quantification. We therefore evaluated the link between receptor phosphorylation and receptor internalization. We also evaluated the recycling of the receptors to the membrane. We found that VIP induced in the wild type (WT) receptor a rapid phosphorylation and internalization, which was not reversible within 2 h. Mutation to Ala or truncation of all of the Ser/Thr residues in the C-terminal tail and mutation of one Ser in the second intracellular loop abolished receptor phosphorylation and internalization. A larger truncation of a domain located between the seventh transmembrane helix and the Ser/Thr-containing region led to a receptor that was no longer phosphorylated but remained internalized. However, receptors were re-expressed at the membrane within 120 min. Single and combined mutations of the Ser/Thr residues indicated the possibility of alternative phosphorylation sites in mutant receptors and also indicated that phosphorylation of all of the identified sites was not necessary for receptor internalization.

Construction of Truncated and Mutated Receptors
The cell line expressing the VPAC1 receptor has been detailed in a previous publication (6). Generation of the truncated receptors was achieved by introduction of a stop codon using the QuikChange site-directed mutagenesis kit (Stratagene, La Jolla, CA) essentially according to the manufacturer's instructions as described (2). The expected mutation was confirmed by DNA sequencing on an ABI automated sequencing apparatus, using the BigDye Terminator Sequencing Prism Kit from ABI (PerkinElmer Life Sciences). The complete nucleotide sequence of each construction was verified by DNA sequencing. 20 μg of the receptor-coding region were transfected by electroporation into the CHO cell line expressing aequorin and Gα16 (kindly provided by Vincent Dupriez, Euroscreen SA, Belgium) as described (2). Selection was carried out in culture medium (50% Ham/F-12, 50% Dulbecco's modified Eagle's medium, 10% fetal calf serum, 1% penicillin (10 milliunits/ml), 1% streptomycin (10 μg/ml), 1% L-glutamine (200 mM)), supplemented with 600 μg of Geneticin (G418)/ml of culture medium. After 10-15 days of selection, isolated colonies were transferred to 24-well plates and grown until confluence, trypsinized, and further expanded in 6-well plates, from which cells were scraped and membranes were prepared for identification of receptor-expressing clones by an adenylate cyclase activity assay in the presence of 1 μM VIP. The selected clones were expanded in the same medium as that used for the selection but in the absence of Geneticin.

Membrane Preparations
Membranes were prepared from scraped cells lysed in 1 mM NaHCO3 by immediate freezing in liquid nitrogen. After thawing, the lysate was first centrifuged at 4°C for 5 min at 400 × g, and the supernatant was further centrifuged at 20,000 × g for 15 min. The pellet was resuspended in 1 mM NaHCO3 and used immediately.
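The truncated receptors described above were generated by introducing a premature stop codon into the receptor-coding region. A minimal sketch of that design step is given below; the sequence is a toy example and the helper function is hypothetical, intended only to show where the stop codon goes for a construct ending at a chosen residue (real QuikChange mutagenesis additionally requires primers spanning the flanking sequence).

def truncate_orf(orf, last_residue):
    # Replace the codon immediately after `last_residue` with a stop codon (TAA),
    # mimicking truncation of the receptor by introduction of a premature stop.
    assert len(orf) % 3 == 0
    codons = [orf[i:i + 3] for i in range(0, len(orf), 3)]
    codons[last_residue] = "TAA"              # codon index `last_residue` encodes residue last_residue + 1
    return "".join(codons[:last_residue + 1])  # keep the coding sequence up to the new stop

# Toy example: a made-up 6-codon ORF truncated after residue 3.
print(truncate_orf("ATGGCTAGCAAAGGTTGA", last_residue=3))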
Binding Studies
Binding studies, using 125I-labeled VIP, were performed for 30 min at 23°C in a total volume of 120 μl containing 20 mM Tris-maleate, 2 mM MgCl2, 0.1 mg/ml bacitracin, 1% bovine serum albumin (pH 7.4), and 3-30 μg of protein/assay. The assays were performed in such conditions that specific binding was strictly proportional to the amount of protein. Bound and free radioactivities were separated by filtration through glass fiber GF/C filters presoaked for 24 h in 0.01% polyethyleneimine. The filters were rinsed three times with 20 mM sodium phosphate buffer (pH 7.4) containing 0.5% bovine serum albumin. Binding site density was evaluated as follows: density = bound/free × IC50/mg of protein.

Adenylate Cyclase Activity
Adenylate cyclase activity was determined by the Salomon (7) procedure as previously described (8). Membrane proteins (3-15 μg) were incubated in a total volume of 60 μl containing 0.5 mM [α-32P]ATP, 10 μM GTP, 5 mM MgCl2, 0.5 mM EGTA, 1 mM cAMP, 1 mM theophylline, 10 mM phospho(enol)pyruvate, 30 μg/ml pyruvate kinase, and 30 mM Tris-HCl at a final pH of 7.8. The reaction was initiated by membrane addition and was terminated after a 15-min incubation at 37°C by adding 0.5 ml of a 0.5% SDS solution containing 0.5 mM ATP, 0.5 mM cAMP, and 20,000 cpm of [3H]cAMP. cAMP was separated from ATP by two successive chromatographies on Dowex 50Wx8 and neutral alumina.

Preparation of a Monoclonal Antibody for VPAC1
Genetic Immunization and Generation of Monoclonal Antibodies-Genetic immunization and generation of monoclonal antibodies were performed according to Costagliola et al. (9). The protocol was approved by the local Ethical Committee for Animal Experimentation. Six-week-old Balb/c female mice were anesthetized by injections of 6-10 mg/kg Ketamin HCl® combined with 0.1 ml/kg Rompum®. The anterior tibialis muscle of each leg was injected at day 0 with 100 μl of 10 mM cardiotoxin (Latoxan, Rosans, France). Five days later, 50 μg of the plasmid construct was injected in the same region in a final volume of 100 μl of 0.09% NaCl. Injections were repeated 3 and 6 weeks thereafter. Blood samples were obtained by retro-ocular puncture 7 weeks after the initial immunization, and serum was tested for the presence of antibodies against the VPAC1 receptor. The mouse selected (by FACS and Western blotting) for monoclonal antibody (mAb) production was boosted by an IV injection of 100 μl of a saline solution containing 10^6 CHO cells expressing the human VPAC1 receptor. Three days later, splenocytes were fused with SP2/0, a nonsecreting myeloma cell line, at a 3:1 ratio in the presence of polyethylene glycol. Fused cells were then distributed into ten 96-well plates and selected with 100 μM hypoxanthine, 400 μM aminopterin, and 16 μM thymidine. Irradiated macrophages from mouse peritoneum were added to the wells to supply cytokines and growth factors. After 10 days, culture supernatants were screened by FACS (see below), and the cells producing antibodies were cloned by dilution. The monoclonal antibody selected (mAb-VPAC1) was purified using the ImmunoPure IgG purification kit (Pierce) and was of the IgG2a subtype, based on the mouse mAb isotyping kit (Isotrip; Roche Applied Science).
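The binding-site density formula given under Binding Studies above (density = bound/free × IC50 per mg of protein) can be applied directly; the short sketch below simply evaluates it with made-up counts, and the units of the result follow those chosen for IC50.

def binding_site_density(bound_cpm, free_cpm, ic50, protein_mg):
    # Single-point estimate as defined in the text: (bound/free) * IC50 / mg of protein.
    return (bound_cpm / free_cpm) * ic50 / protein_mg

# Illustrative (made-up) numbers: 2,000 cpm bound, 40,000 cpm free, IC50 = 1 nM, 10 µg of protein.
print(binding_site_density(bound_cpm=2000, free_cpm=40000, ic50=1.0, protein_mg=0.010))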
Properties of the Selected Antibody-CHO cells expressing the recombinant human VPAC1 receptor were detached from the plates using a 5 mM EDTA, 5 mM EGTA phosphate-buffered saline (PBS) solution, harvested by centrifugation (500 × g, 4°C, 4 min), washed once with PBS solution, and resuspended to 3 × 10^5 cells/tube in 100 μl of PBS, 0.1% bovine serum albumin, containing 0.1 μg of purified mAb-VPAC1. After a 30-min incubation at 4°C, the cells were washed in the same buffer and centrifuged under the same conditions. They were then incubated for 30 min, on ice in the dark, with the secondary antibody, a fluorescein isothiocyanate-conjugated γ-chain-specific goat anti-mouse IgG (Sigma). The cells were again washed and resuspended in 250 μl of PBS, 0.1% bovine serum albumin. The level of fluorescence was analyzed using a FACScalibur (BD Biosciences), and the data were processed using Cell Quest software. Basal fluorescence was determined from a sample of nontransfected CHO cells. The use of propidium iodide (10 μg/ml) allowed exclusion of debris and dead cells from the analysis. The same procedure was used to evaluate the selectivity of the antibody; the level of fluorescence observed with CHO cells expressing the human VPAC2 and the rat VPAC1 and VPAC2 receptors was not different from that of cells that did not express the human VPAC1 receptor. Chimeric receptors made of different parts of the human VPAC1 and VPAC2 receptors and expressed in CHO cells (8) were only detected when the amino-terminal domain of the VPAC1 receptor was conserved (Fig. 1). Furthermore, preincubation of membranes prepared from cells expressing the human VPAC1 receptor with the monoclonal antibody at increasing concentrations did not modify the binding of 125I-labeled VIP or the VIP-stimulated adenylate cyclase activity.

Receptor Internalization and Trafficking
Receptor internalization was defined as the percentage of cell surface receptors that were no longer accessible to the monoclonal antibody after agonist exposure. Cells expressing the VPAC1 receptor were incubated with agonist at 37°C and, after washing three times with ice-cold phosphate-buffered saline, were processed for FACS analysis as described above. Details on specific protocols for evaluation of receptor recovery are given in the figure legends. Confocal microscopy was also used to confirm receptor sequestration. Cells were cultured on 22-mm glass slides for 72 h. After a 30-min VIP treatment, cells were washed in PBS and fixed with −20°C absolute methanol for 10 min. Nonspecific protein binding was prevented by a 15-min incubation with 5% normal sheep serum. The cells were then incubated overnight at 4°C with the monoclonal anti-VPAC1 antibody (1:250). The primary antibody was diluted in PBS, 1% normal sheep serum, 1‰ azide. The optimal working dilution had been previously determined empirically by serial dilutions for the antibody used. After three washes and a second 15-min incubation at room temperature with normal sheep serum, the cells were incubated for 30 min at room temperature with fluorescein isothiocyanate-conjugated γ-chain-specific goat anti-mouse IgG (Sigma) (1:100), diluted in the same solution as the primary antibody. Omission of the primary or secondary antibody resulted in the absence of labeling. Cells were finally incubated for 5 min at room temperature with Hoechst 33258 (Molecular Probes, Inc., Eugene, OR).
After three rinses in PBS, coverslips were mounted with "Slow Fade Light" anti-fade mounting medium (Molecular Probes) in 50% glycerol (or the Calbiochem mounting medium) before viewing under an LSM510 NLO confocal microscope fitted on an Axiovert M200 inverted microscope equipped with a C-Apochromat ×63/1.2 numeric aperture water immersion objective (Zeiss). A ×2 electronic zoom was used across regions of interest. The 488-nm excitation wavelength of the Argon/2 laser, a main dichroic HFT 488, and a band pass emission filter (BP500-550 nm) were used for selective detection of the green (fluorescein-5-isothiocyanate) fluorochrome. The nuclear stain Hoechst was excited in multiphotonic mode at 760 nm with a Mai Tai tunable broad band laser (Spectra-Physics, Darmstadt, Germany) and detected using a main dichroic HFT KP650 and a band pass emission filter (BP435-485 nm). Optical sections, 2.5 μm thick, were collected for each fluorochrome sequentially. The images generated (512 × 512 pixels, pixel size 0.14 μm) were merged and displayed with the Zeiss LSM510 software and exported in jpg image format. All figures show a single optical section across the regions of interest. Scale bars represent 10 μm.

Immunoprecipitation and Determination of Receptor Phosphorylation and Receptor Density
Cells were first cultured in phosphate-free Dulbecco's modified Eagle's medium for 16 h and then incubated for 2 h at 37°C in the presence of 0.1 mCi/ml acid-free [32P]orthophosphate. At the end of this labeling period, agonist was added for 5 min. Phosphorylation inhibitors were added 30 min prior to agonist addition. Cells were then washed three times with ice-cold buffer consisting of 10 mM HEPES, 4.2 mM NaHCO3, 11.7 mM glucose, 1.2 mM MgSO4, 4.7 mM KCl, 118 mM NaCl, and 1.3 mM CaCl2, pH 7.4, and then lysed in 1.2 ml of a buffer consisting of 20 mM Tris, 100 mM (NH4)2SO4, and 10% glycerol, pH 7.5. The cell lysate was centrifuged at 600 × g at 4°C for 10 min, and the supernatant was centrifuged at 19,000 × g for 30 min. The resulting pellet was resuspended in the same buffer containing 1% dodecylmaltoside (Roche Applied Science) and solubilized for 45 min at 4°C. The remaining insoluble material was eliminated by a further centrifugation. The supernatant (200 μl) was added to 50 μl of a 10% protein A-Sepharose suspension coated during 2 h with 2 μg of purified mAb-VPAC1. After a 150-min incubation under rotating agitation at 4°C, the Sepharose beads were separated by centrifugation and washed successively with the concentrated lysis buffer and then with a 2-fold diluted buffer and finally with water. The final bead pellet was resuspended in a buffer consisting of 125 mM Tris, 10% β-mercaptoethanol, 4% SDS, 20% glycerol, 0.02% bromphenol blue, pH 6.8. After heating at 60°C for 10 min, the samples were resolved by SDS-PAGE using a 10% gel. The gel was fixed and dried, and the phosphorylated bands were detected and quantified by phosphorimaging (Vilber Lourmat, Kaiser). The contribution of GRK to agonist-dependent phosphorylation was assessed in in vitro assays. CHO cells expressing the VPAC1 or the CCR5 receptor (kindly provided by Dr. C. Blanpain, IRIBHN, Brussels) were grown as previously described and resuspended in a buffer consisting of 20 mM Tris-HCl, 10 mM MgCl2, 10 mM NaCl, and 1 mM EGTA added to a protease inhibitor mixture (Complete; Roche Applied Science). Fifty μl of membranes (2 μg/μl of protein) were incubated with 50 μM [γ-32P]ATP for 5 min at 37°C in the presence of the tested agents.
The final volume was 100 μl. Reactions were started by the addition of ATP and stopped by a 30-s centrifugation at 13,000 × g, removal of the supernatant, and solubilization of the membranes with 250 μl of a buffer consisting of 20 mM Tris, 100 mM (NH4)2SO4, 10% glycerol, a protease inhibitor mixture (Complete), and 1% dodecylmaltoside, pH 7.5. The solubilized human VPAC1 receptor was then immunoprecipitated as previously described, and protein was resolved on 10% SDS-PAGE. The CCR5-solubilized receptor was immunoprecipitated in 80 μl of a 50% protein G-Sepharose suspension with 3 μg of purified monoclonal anti-human CCR5 antibody (2D7; PharMingen). After five successive washes, the sample was resuspended in the buffer, heated at 50°C for 20 min, and processed as for the VPAC1 receptor. Receptor density was evaluated in all cases by binding studies using 125I-labeled VIP as ligand as previously described (10) and confirmed in some cases by Western blotting. Because Western blotting could not be performed with the monoclonal antibody used for the FACS and immunoprecipitation studies, we used a polyclonal antibody generated by Prof. Schulz (Otto-von-Guericke-University, Magdeburg, Germany) directed against the 438-457 sequence of the carboxyl terminus of the receptor (11). This antibody does not recognize any of the truncated receptors under study (data not shown), but Western blots performed on WT and selected mutated receptors validated the binding data (see "Results"). Membranes were prepared as described above. The proteins were resolved by 10% SDS-PAGE, transferred to a nitrocellulose membrane, and incubated with 1 μg/ml primary antibody overnight at 4°C. An anti-rabbit peroxidase-conjugated antibody was used as the secondary antibody. Proteins were visualized using SuperSignal® West Pico reagent chemiluminescent substrate (Pierce and Perbio Science).

Agonist-induced VPAC1 Receptor Phosphorylation and Internalization-VIP induced a rapid, dose-dependent stimulation of 32P incorporation into a protein with an apparent molecular size of 75 kDa that immunoprecipitated with the mAb-VPAC1 (Fig. 2). No signal was observed in nontransfected cells. Receptor phosphorylation was agonist-dependent. Forskolin, phorbol esters, and the selective VPAC1 antagonist were inactive per se; forskolin and phorbol esters did not modify VIP-induced phosphorylation. VIP-stimulated phosphorylation was not affected by 100 nM H-89, 6 μM staurosporine, 300 nM K252a, and 100 μM genistein but was inhibited by 30 μM H-89, 100 μM A3, and 200-500 μM CKI-7 (Fig. 3). The possible contribution of the G protein receptor kinases to receptor phosphorylation was evaluated on membranes incubated in the presence of radioactive ATP and 1 μM VIP with 0.1 mM Zn2+ and 1 μg/ml heparin as inhibitors (12)(13)(14). A positive control consisted of CHO cell membranes expressing the chemokine receptor CCR5 stimulated by RANTES (15) (Fig. 3). VPAC1 receptor internalization was estimated by flow cytometry from the decrease in fluorescence associated with the binding of the mAb-VPAC1. A typical experiment is shown in Fig. 4. Exposure to VIP induced a rapid and sustained decrease in the receptor number expressed at the cell surface that was completely blocked by preincubation in the presence of 0.5 M sucrose, suggesting that the receptor was internalized in endosomes. The selective VPAC1 receptor antagonist was ineffective. Results observed at 5 and 30 min are presented in Fig. 4. Seventy-five percent of the receptors disappeared within 30 min.
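Internalization values such as the 75% quoted above follow from the definition given earlier: the percentage of surface receptors no longer accessible to mAb-VPAC1 after agonist exposure, with basal fluorescence from nontransfected cells serving as background. The exact background correction is not spelled out in the text, so the sketch below is one plausible way to compute it from mean fluorescence intensities.

def percent_internalized(mfi_untreated, mfi_treated, mfi_background):
    # Percentage of surface receptors no longer accessible to the antibody after agonist
    # exposure, estimated from background-corrected mean fluorescence intensities.
    surface_before = mfi_untreated - mfi_background
    surface_after = mfi_treated - mfi_background
    return 100.0 * (1.0 - surface_after / surface_before)

# Illustrative values; the background comes from nontransfected CHO cells.
print(percent_internalized(mfi_untreated=400.0, mfi_treated=130.0, mfi_background=20.0))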
Receptor internalization was further visualized by confocal microscopy; as shown in Fig. 5 (left panels), the fluorescence signal was detected in unstimulated cells exclusively at the CHO cell membranes, whereas the fluorescence was scattered in the cytosol after a 30-min stimulation with 1 μM VIP. As a negative control (right panel), we used the poly(A) mutant that was considered noninternalized by the FACS technique (see below) and indeed remained at the cell membrane on confocal inspection (Fig. 5). The reversibility was tested after a 30-min exposure to agonist; after three washes, cells were incubated for 20-120 min, and the receptors accessible to the antibody were again evaluated. There was no reappearance of the receptors (Fig. 6). As a positive control, we used the VPAC2 receptor expressed in the same CHO cell line, which was similarly internalized but was re-expressed at the membrane within 120 min (Fig. 6); the results for the VPAC2 receptor are detailed in Ref. 16.

Properties of Carboxyl-terminally Truncated Receptors-The truncated receptors studied are schematized in Fig. 7 and listed in Table I. They were all stably expressed in CHO cells. For each construction, at least four clones were generated and studied. To compare the different constructions, we detailed clones that expressed (when possible) a similar receptor density. The receptor binding properties and the capability of VIP to stimulate adenylate cyclase activity were detailed elsewhere for some of the truncated receptors (17) and are summarized in Table I. The IC50 values of binding, the EC50 values, and the maximal stimulatory effect of VIP on adenylate cyclase were comparable for all of the truncated receptors. We already reported that two receptors had an elevated basal adenylate cyclase activity that was decreased by the selective VPAC1 receptor antagonist (17), suggesting a constitutive activity, but one that did not modify VIP stimulation. As compared with the 1-457 wild type receptor, the 1-444, 1-441, and 1-436 truncated receptors had a comparable 30% reduction of phosphorylation measured by densitometry on gels loaded with the same amount of receptors. The 1-433 and 1-429 truncated receptors had a 70% reduction in receptor phosphorylation (Fig. 8). The 1-421 truncated receptor retained only 10% of the VIP-stimulated phosphorylation. The shortest receptor tested, 1-398, exhibited a still detectable VIP-stimulated phosphorylation, but too low to be reliably quantified. Receptor internalization was rapid (65-80% of the receptors were no longer accessible to the antibody after 5 min of incubation with 1 μM VIP, and this value remained stable for the next 25 min). Internalization was slowed down for the 1-429 VPAC1 receptor and for the shorter fragments 1-421, 1-417, and 1-402. For this last construction, only 10 and 20% of the receptors were inaccessible after 5 and 30 min of incubation, respectively. Surprisingly, the shortest fragments 1-401, 1-400, 1-399, and 1-398 were internalized as rapidly and as efficiently as the wild type receptor (Table I). As mentioned above, VIP-stimulated wild type receptor internalization was not reversible after repeated washings of the treated cells and further incubation for 120 min in the absence of agonist. The same behavior was observed for the 1-444 to 1-429 truncated receptors. For the 1-421 to 1-402 truncated receptors, the results were difficult to analyze due to the low level of internalization, but no reappearance of the receptors was suspected.
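The phosphorylation figures quoted for the truncated receptors above (for example, a 30% or 70% reduction) are phosphorimager signals compared against the wild type on gels loaded with the same amount of receptor. The sketch below shows such a comparison; the explicit normalization to receptor density is our assumption for cases in which loading is not perfectly matched, in line with the later statement that phosphorylation was taken as proportional to receptor density.

def relative_phosphorylation(signal_mutant, receptor_mutant, signal_wt, receptor_wt):
    # Express VIP-stimulated 32P incorporation of a mutant or truncated receptor as a
    # percentage of wild type, after normalizing the phosphorimager signal to receptor density.
    return 100.0 * (signal_mutant / receptor_mutant) / (signal_wt / receptor_wt)

# Illustrative numbers (arbitrary phosphorimager units and arbitrary receptor-density units).
print(relative_phosphorylation(signal_mutant=210, receptor_mutant=0.9,
                               signal_wt=800, receptor_wt=1.0))   # ~29% of wild type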
Surprisingly, reappearance of the internalized 1-401, 1-400, 1-399, and 1-398 truncated receptors was obvious (Fig. 6). Receptor reappearance was in all cases inhibited by 25 μM monensin but not by 10 μg/ml cycloheximide (data not shown).

Properties of Carboxyl-terminally Mutated Receptors-Serine and threonine residues were mutated to alanine, separately or in combination, to pinpoint the phosphorylatable residues and to define the functional consequences for ligand recognition, adenylate cyclase activation, coupling to G proteins, internalization, and reappearance of the receptors at the cell membrane. To validate the receptor quantification by binding assay, we performed a Western blot of the wild type and some point-mutated VPAC1 receptors using the polyclonal anti-VPAC1 antibody (Fig. 9). The same amount of receptor, evaluated by binding assay, was loaded in each lane. The results are summarized in Tables II and III, and the location of the mutations can be found in Fig. 7. The S455A mutant was not different from the wild type receptor except for a 40% reduction in phosphorylation. The triple mutant S447A/S448A/S449A (which eliminates a protein kinase A consensus sequence) also had properties indistinguishable from those of the wild type receptor except for a 30% reduction in receptor phosphorylation. Internalization was comparable with that of the wild type receptor, and re-expression of the receptor at the cell surface was not observed within 120 min. The S447A, S441A, T438A, T432A, S431A, S425A, and S422A mutants, on all of the parameters tested, were not different from the wild type receptor. The only difference between the S435A and T429A mutants and the wild type receptor was a 66% reduction in VIP-stimulated receptor phosphorylation. Combining the three mutations that each decreased receptor phosphorylation by at least 40% led to the S455A/S435A/T429A triple mutant; surprisingly, the individual effects on receptor phosphorylation were not additive, the phosphorylation level reaching, as for the single mutants, 40% of that of the wild type receptor. However, at variance with the single mutants, internalization of the receptor was significantly slowed down (Tables II and III).

Mutations of Ser Residues in IC2-As mentioned above, the deletion of all of the Ser and Thr residues of the carboxyl terminus markedly reduced but did not abolish the VIP-stimulated receptor phosphorylation. We therefore hypothesized the possibility of a phosphorylatable Ser/Thr residue in the intracellular loops connecting the transmembrane domains. Because we had previously shown (18) the importance of the distal part of IC3 for receptor coupling to the G proteins, and because a poorly coupled receptor would not be helpful here, we first mutated the Ser247 and Ser250 residues in IC2. The single S247A mutant receptor was indistinguishable from the wild type receptor, but the S250A mutant showed a 50% reduction in VIP-induced phosphorylation (Fig. 10) without changes in ligand recognition, basal and VIP-stimulated adenylate cyclase values, or receptor internalization and trafficking (Table II). Combining truncation of the receptor C terminus containing all of the Ser/Thr residues with the S250A mutation (S250A 1-421) completely abolished VIP-stimulated receptor phosphorylation and markedly slowed down receptor internalization (Fig. 10).
Surprisingly, combination of the Ser/Thr mutations in the C terminus that reduced receptor phosphorylation with the S250A mutation did not further reduce phosphorylation or further slow down receptor internalization; the S250A/S435A/S455A/T429A receptor was not different from the S435A/S455A/T429A receptor. The activities of the S250A/S435A/S455A, S250A/S455A/T429A, and S250A/S435A/T429A mutants, including receptor phosphorylation, internalization, and trafficking, were not different from those of any corresponding single mutant (Table III). Mutation to Ala of all of the Ser and Thr residues of the C-terminal tail and of Ser250 led to a receptor with binding properties and adenylate cyclase activity not different from those of the wild type receptor but that was neither phosphorylated nor internalized (by the fluorescence-activated cell sorting technique and confocal microscopy; see carboxyl poly(A) in Table III and Fig. 5).

DISCUSSION
The aims of the present work were to identify the amino acid residues of the human VPAC1 receptor that are phosphorylated during agonist stimulation and to correlate the phosphorylation level with receptor internalization and eventual re-expression at the membrane. To identify the phosphorylated residues, we first searched for consensus sequences (19,20) and identified a protein kinase A (Ser447-Ser448-Ser449 in the C terminus), a protein kinase C (Phe249-Ser250-Glu251-Arg252 in IC2), and casein kinase (Ser247-Phe248-Phe249-Ser250 in IC2; Ser331-Asp332-Ser333-Ser334 and Ser334-Pro335-Tyr336-Ser337 in IC3; Ser422-Gly423-Gly424-Ser425 in the C terminus) consensus sites (underlined amino acids correspond to the phosphorylated residues). Because the receptor was not phosphorylated by forskolin or by phorbol esters, and because the VIP-stimulated phosphorylation was not inhibited by low concentrations of H-89 or by staurosporine or K252a, we hypothesized that VIP-induced phosphorylation was not mediated by protein kinase A and protein kinase C. The partial inhibitory effect of CKI-7 (21), a selective inhibitor of casein kinase 1-α, did not exclude involvement of that enzyme. However, phosphorylation by casein kinases implies the presence of an acidic function in the consensus (22), preferentially a phosphorylated Ser/Thr, to anchor the enzyme and trigger a phosphorylation cascade. There was no evidence that these potential initiators (Ser247, Ser331, and Ser334 in the intracellular loops and Ser422 in the first part of the C terminus) were indeed phosphorylated. In the distal part of the C-terminal tail, however, a phosphorylation cascade starting with Thr429 could involve Thr432, Ser435, Thr438, and Ser441. However, by single mutation, only Thr429 and Ser435 were identified as candidates for phosphorylation. By exclusion, GRKs remained the main candidate kinase(s). This was tested on membranes, since there is no known selective cell-permeable inhibitor; Zn2+ and heparin are reported to antagonize GRK activity (12)(13)(14). In our experimental conditions, Zn2+ was only partially effective. However, in the positive control used (15), similar results were obtained. Finally, only Ser and/or Thr residues were phosphorylated; the tyrosine kinase inhibitor genistein was inactive, and the nonselective Ser/Thr kinase inhibitor A3 (23) completely blocked VIP-stimulated phosphorylation.
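The consensus-sequence search mentioned at the start of the Discussion can be mimicked with a simple motif scan. The patterns below are generic, textbook-style simplifications (not necessarily the definitions used in refs. 19 and 20), and the scanned segment is invented; the sketch only illustrates the idea.

import re

# Highly simplified, illustrative motif patterns (not the exact criteria of refs. 19-20).
MOTIFS = {
    "PKA": r"[RK][RK].[ST]",   # two basic residues, any residue, then the S/T acceptor
    "PKC": r"[ST].[RK]",       # S/T acceptor followed by a basic residue at +2
    "CK1": r"[ST]..[ST]",      # S/T followed by another S/T three residues downstream (priming, simplified)
}

def scan_consensus(sequence, offset=1):
    # Return (kinase, position of the first residue of the hit) for each motif match.
    hits = []
    for kinase, pattern in MOTIFS.items():
        for m in re.finditer(pattern, sequence):
            hits.append((kinase, m.start() + offset))
    return hits

# Invented protein segment, numbered from residue 1.
print(scan_consensus("RRSSSGGSDSS", offset=1))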
Recent data (24) suggest that Ser447 in the protein kinase A consensus in the carboxyl terminus could be phosphorylated, but it was not demonstrated that this occurred through protein kinase A activation; replacement of Ser447 with Ala increased basal unstimulated phosphorylation and blunted the VIP-induced phosphorylation, a finding that was not observed in the present work. The strategy used to identify the phosphorylated residues consisted of the progressive truncation of the carboxyl terminus and individual mutations to Ala of the suspected residues, followed by combined mutations. For the interpretation of the results on the truncated receptors, we made the assumption that a decreased phosphorylation was due to the suppression of one phosphorylatable residue and that an unchanged phosphorylation level meant that the residues that were suppressed were not phosphorylated. In other words, we did not consider that the truncated receptors may be phosphorylated on residues other than those used in the wild type receptor. We also made the assumption that the phosphorylation level was directly linked to the number of phosphorylated residues, and we did not consider possible kinetic changes in the kinase and phosphatase activities. We also considered that the evaluation by binding studies of the number of receptors was appropriate and that phosphorylation was in any case proportional to the receptor density. Considering these points, the similar 30% reduction in receptor phosphorylation of the three truncated forms 1-444, 1-441, and 1-436 suggested that at least one residue of Ser447, Ser448, Ser449, and Ser455 was phosphorylated. Since 447-448-449 was a protein kinase A consensus sequence and since protein kinase A had been excluded, we first considered the residue Ser455 as a good candidate. Its mutation to Ala reduced receptor phosphorylation by 40%. However, the simultaneous mutation to Ala of the three adjacent Ser residues reduced the phosphorylation by 30%. The single mutation of the Ser447 residue did not significantly modify VIP-induced phosphorylation under our conditions, as already discussed. The marked decrease in receptor phosphorylation when comparing the 1-436 and the 1-433 mutants focuses on the Ser435 residue; indeed, its replacement by Ala decreased the phosphorylation level by 70%. Since phosphorylation of the 1-429 truncated receptor was comparable with that of the 1-433, it was unlikely that Ser431 and Thr432 were phosphorylated. Mutation of these residues to Ala confirmed this hypothesis. The VIP-stimulated phosphorylation of the 1-421 receptor was extremely low but detectable. Removal of either Thr429, Ser425, or Ser422 could be responsible for that decrease. Individual mutations of these Thr and Ser residues to Ala indicated that Thr429 was the only residue to be phosphorylated. From the results on the truncated and the single-mutation receptors as well as the effects of forskolin, 12-O-tetradecanoylphorbol-13-acetate, and inhibitors, we considered that the following residues were likely candidates for VIP-stimulated VPAC1 receptor phosphorylation: Ser455, Ser448 or Ser449, Ser435, and Thr429 in the C terminus and also Ser250 in the IC2 loop. A recent study on the 5-HT2A receptor also implicated a serine located in the IC2 and a second in the C terminus in the agonist-mediated receptor desensitization (25).
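The truncation logic used above, namely that a drop in 32P signal between two successive truncations points at the Ser/Thr residues removed between them, can be made explicit. The positions listed below are the C-terminal Ser/Thr residues named in the text; the helper function itself is ours.

# Ser/Thr positions in the VPAC1 C-terminal tail mentioned in the text.
SER_THR = [422, 425, 429, 431, 432, 435, 438, 441, 447, 448, 449, 455]

def residues_lost(longer_end, shorter_end):
    # Ser/Thr residues removed when going from a longer truncation (1-longer_end) to a
    # shorter one (1-shorter_end); a drop in signal between the two constructs points to
    # candidate phosphorylation sites within this window.
    return [p for p in SER_THR if shorter_end < p <= longer_end]

# The further drop between the 1-436 and 1-433 constructs points at Ser435.
print(residues_lost(436, 433))   # -> [435]
# The initial ~30% drop between 1-457 (wild type) and 1-444 points at Ser447/448/449/455.
print(residues_lost(457, 444))   # -> [447, 448, 449, 455]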
However, the results obtained when combining mutations of these identified residues indicated that the effects on phosphorylation were not additive. This contrasts with results published, for instance, for the CCR5 receptor, where four phosphorylatable serine residues were identified and each contributed equally to the total phosphorylation level (15). In our model, combining double (data not shown), triple, and quadruple mutations of the target residues identified by point mutation maintained a phosphorylation level of about 30% of that observed in the wild type receptor, a value reached with some single mutations. However, mutation of all of the phosphorylatable residues of the carboxyl terminus abolished receptor phosphorylation. This suggests that phosphorylation can operate on other residues when the preferred ones are missing. This alternative phosphorylation has been described for rhodopsin; rhodopsin kinase can efficiently phosphorylate other serine and threonine residues in the absence of the three sites preferentially phosphorylated (26,27). A second point to be considered is the fact that single mutation of Ser435 and Thr429 to Ala induced a more pronounced decrease in receptor phosphorylation than mutation of Ser250, Ser455, and the sequence Ser447-Ser448-Ser449. This suggested a hierarchy in the phosphorylation of the VPAC1 receptor that could be explained by the fact that some residues are better substrates, constitute a kinase binding site, or trigger a phosphorylation cascade. This last point was already discussed. Hierarchical phosphorylation has already been reported for the δ-opioid receptor (28), the N-formyl peptide receptor (29), and the CCK receptor (30,31). Whatever the explanation, we concluded that there is variability as to which residues are phosphorylated in mutated and probably also in truncated receptors. Due to this variability, it is difficult to correlate phosphorylation data and receptor internalization. A quantitative aspect can be discussed; if we consider the mutant and the truncated receptors longer than residues 1-402, the receptor phosphorylation level must be reduced to 30% or less of the wild type level before receptor internalization decreases. Thus, phosphorylation occurs in excess of what is required for internalization. Such a low phosphorylation requirement has already been described for other receptors; internalization of the CCR5 receptor only requires the presence of two phosphorylated serines in the C terminus (32), even if, in vivo, four distinct C-terminal residues are phosphorylated (15). A stoichiometry of 2 mol of phosphate/mol of receptor is sufficient for internalization of the β2-adrenergic (33) and m2 muscarinic (34) receptors, whereas additional phosphorylation of up to 10-11 mol of phosphate/mol of receptor does not amplify the phenomenon. For these two receptors, the position of the phosphorylated sites was not critical. Receptor internalization was directly correlated to arrestin binding, and complete phosphorylation of the receptor was not necessary for arrestin-receptor complex stability. Several mechanisms are possible for receptor internalization: first, an arrestin-, clathrin-, and dynamin-dependent process; second, an arrestin- and clathrin-independent but dynamin-dependent process through caveolae; third, an arrestin- and clathrin-independent but dynamin-dependent process that does not require caveolae; and fourth, an arrestin-, clathrin-, and dynamin-independent process.
Concerning the VPAC1 receptor, the established facts are as follows: (a) a dynamin-dependent mechanism; (b) a VIP-dependent arrestin recruitment to the membrane without any effect of a dominant negative mutant (5); (c) internalization in endocytic vesicles, which could be blocked by sucrose (present work); (d) a relative dependence on receptor phosphorylation (present work). Considering other class 2 GPCRs, the following appears to be true. (a) The secretin receptor is phosphorylated after agonist exposure, but phosphorylation is not required for internalization (35). Arrestin is recruited to the membrane, but there is no effect of a dominant negative construct. Dominant negative dynamin was also without effect (36). (b) The parathyroid hormone receptor is internalized by an arrestin-dependent mechanism but requires the presence of two highly conserved residues located in the core of the receptor: Asn289 and Lys382. These residues could regulate a conformational modification necessary for translocation toward the endocytic endosomes (37). The use of a pathway that differs from the classical clathrin-coated pit pathway is not limited to class 2 GPCRs; internalization of the class 1 GPCR 5-HT2A receptor also involves atypical mechanisms (38). In the present work, we showed that VPAC1 receptor internalization occurs by two different mechanisms: a phosphorylation-dependent nonreversible pathway and a phosphorylation-independent pathway that allows rapid recycling of the receptor to the plasma membrane. This is the case of the truncated 1-421 to 1-402 receptors. A possible explanation for this is that the multiple positive charges in the 402-421 domain (Arg403, Arg404, His406, Lys417, and His420) may prevent interactions of negatively charged residues located in the intracellular domains (Glu394 and Glu398 in the C-terminal tail but also Asp327 and Asp332 in IC3 and Glu251 in IC2) with an unidentified intracellular partner. It must be noted that Glu394 was identified as necessary for coupling of the VPAC1 receptor to Gαs (39). In conclusion, the present data do not allow an unambiguous identification of the Ser/Thr residues of the VPAC1 receptor that are phosphorylated in response to VIP. This is probably due to the possibility of alternative phosphorylation when key residues are mutated or eliminated by truncation. They clearly demonstrate that when all of the potential phosphorylation sites located in the C terminus and one Ser residue in the IC2 loop are mutated to Ala, the VIP-stimulated phosphorylation is abolished, and the receptor is no longer internalized, although it is still fully active. Truncation of the distal part of the C terminus containing all of the Ser/Thr residues also abolishes receptor phosphorylation and internalization. However, a receptor more proximally truncated, although still fully active and not phosphorylated, is internalized rapidly, supporting the notion of recruitment of arrestin-insensitive/GRK-insensitive pathways. This internalization differs from that of the wild type receptor by its reversibility within 2 h, suggesting new interactions with the receptor trafficking machinery. (Department of Pharmacology and Toxicology, Otto-von-Guericke-University, Magdeburg, Germany). We are indebted to Perrine Hague and Huy Nguyen-Tran, from the "Laboratoire de Neurophysiologie," for skillful technical assistance in confocal microscopy.
2018-04-03T04:26:35.457Z
2005-07-29T00:00:00.000
{ "year": 2005, "sha1": "26b755c7f0a4661b44b5ecdc2da66eca31481933", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/280/30/28034.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "4effa51ffbe58f0cf88e8dfd37c4979038bb8101", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
10150044
pes2o/s2orc
v3-fos-license
The “Journal of Functional Morphology and Kinesiology” Journal Club Series: Highlights on Recent Papers in Movement Analysis

We are pleased to introduce the fourth Journal Club. This edition is focused on several relevant studies published in recent years in the field of movement analysis, chosen by our Editorial Board members. We hope to stimulate your curiosity in this field and to share with you our passion for sport, seen also from the scientific point of view.

Introduction
Human movement analysis is the observation and definition of the movements of humans. Movement analysis is often carried out in a laboratory. Simple analysis can involve simple observations. Advanced analysis often involves some form of technology, for example high-speed or optical/optoelectronic cameras to generate the kinematics needed for analysis. Often, force plates and/or electromyography can be combined to provide complete information [1]. The study of movement analysis takes an integrated perspective influenced by the fields of motor learning, anatomy, functional morphology, experimental psychology, neuropsychology, kinesiology, biomechanics, and human factors engineering. This level of analysis is directed toward upper-level undergraduate and graduate students in physical education and others pursuing an analytical understanding of human movement. Human movement includes how the human organism learns to move, the underlying factors leading to the structure of movement, how our movements adapt to simple and complex environmental situations, and how we might proceed in our quest for understanding communication through a multidimensional technique in the analysis of human movement [2]. A structural analysis of human movement is explored as one means of determining the characteristics of movement under differing environmental, morphologic, and biomechanical conditions. The principles of organization of planned, purposeful human movement are developed through an understanding of patterns of human movement, the factors influencing skilled motor adaptation to our environment, and the neuromuscular control processes involved [3]. Movement analysis allows us to verify the ability to perform various movements and to improve their efficiency by improving the body schema and the coordinative and proprioceptive components of the subject. Analyzing a movement is equivalent to locating the position and speed at each instant and characterizing the linear or angular displacement of each part of the body in motion [4]. There are different types of movement analysis, i.e., kinematic, dynamic, quantitative, and qualitative movement analysis [5]: • Kinematic analysis studies the movement that results from the forces and motion factors applied; its most important outputs are the description of the motion and the definition of the displacements, velocities, and accelerations of the body parts. • Dynamic motion analysis allows evaluation of the forces generated by the movement, and of the movement itself. • Qualitative analysis describes and analyses movements non-numerically, by seeing movements as "patterns", while quantitative analysis describes and analyses movement numerically.
• Quantitative analysis can sometimes appear more objective because of its "data"; however, the accuracy and reliability of such data can be very suspect, particularly when obtained in competition. Qualitative analysis is often more strongly rooted in a structured and multidisciplinary approach, whereas quantitative analysis can appear to lack a theoretical grounding and to be data-driven.

Highlight by Lingyan Wang
As an important part of the musculoskeletal system, tendons are fibrous connective tissues that connect muscle to bone. Tendon injuries frequently occur in daily life. However, clinical therapeutic options are limited to conservative and surgical treatments with a long recovery period. Wu et al. [6] developed a novel method to augment tendon healing and reduce tendon adhesion through gene therapy. As we know, transforming growth factor beta 1 (TGF-β1) plays a critical role in adhesion formation during tendon healing. In this study, adeno-associated virus (AAV) was used to transfer TGF-β1-miRNA to injured and surgically repaired tendons in chickens. Several weeks after AAV-mediated treatment, improved tendon gliding and decreased adhesion formation were found. These results support intraoperative treatment with TGF-β1-miRNA for the recovery of tendon function after surgery. AAV is a promising method for gene therapy and is widely used in therapeutic applications. Moreover, this study demonstrates that AAV-mediated TGF-β1-miRNA may be a new strategy for treating tendon injury. Tendon abnormalities have also been described in some inherited metabolic disorders, such as alkaptonuria, which reduces the structural integrity of collagen and increases the risk of rupture. Recently, Depreux et al. used a novel in utero gene transfer method [7,8] for delivery of active antisense oligonucleotides (ASOs) into the amniotic cavity of the mouse embryo [9]. The amniotic cavity surrounding the fetus could serve as an ideal drug reservoir. ASOs are an established tool for the therapeutic modulation of gene expression. Fetal therapeutic strategies to manage disease processes represent a powerful new approach for clinical care and open the door for fetal drug therapy to treat congenital diseases of the tendon.
Highlight by Luís Silva
Biological organization, with its complex interactions, feedback capacity, and regulation, is characterized by a chaotic structure [10], which is influenced by unhealthy pathological states or skill deficits [11]. Whereas traditional deterministic linear measures are unable to quantify uncertainties in the data, nonlinear measures are capable of detecting subtle differences within a time series, such as that of postural sway. Force-plate posturography is a common method used to assess postural control [12], measuring the anteroposterior (AP) and mediolateral (ML) displacement of the center of pressure (CoP). A recent study [8] reported that the spatial magnitude of sway is affected by the gaze target distance in older adults, also showing that the width of the multifractal spectrum (a measure of temporal dynamics) is greater for older adults when compared with younger adults. Accordingly, Munafo, Wade, Stergiou, and Stoffregen [13] evaluated the kinematics of the CoP, regarding the amount of sway and the multifractality, in elderly people standing on a ship at sea (which departed from Nassau, Bahamas) when looking at the nautical horizon. Two premises were tested: (1) the spatial amplitude of postural sway could decrease when looking at the nautical horizon compared with a closer target; (2) the nautical horizon could affect the multifractality of standing body sway in older adults. For this purpose, 18 adults aged 56 to 78 years were recruited, and their postural activity was evaluated during stance on a force plate at a sampling frequency of 50 Hz. The authors found that both positional variability and the width of the multifractal spectrum were higher in the body's ML axis than in the AP axis. The spectrum width was also higher when looking at the horizon than when looking at a nearby target, contrary to the first premise. Additionally, the target distance variation affected spectrum width for postural activity in the AP axis, but not in the ML axis. These findings showed that the nautical horizon has an influence on the multifractality of sway, increasing spectrum width compared with regarding a closer target. These results are different from those observed on land, where a decreased spectrum width is observed for more distant visual targets compared to closer ones [14]. The ship motion is seen as a form of motor constraint that can eliminate age-related variations in the spatial magnitude of postural sway. Future research is needed to understand changes in complexity with aging and its adaptive sway capabilities when subjected to different motor constraints.

Highlight by Michelino Di Rosa
Parkinson's disease (PD) is a progressive movement disorder diagnosed in 1% of the US population over age 65 [15,16]. This disease presents as a progressive loss of neurons in the substantia nigra of the midbrain, thus altering nigrostriatal neural conduction [17]. In PD, the cause of fatigue and decreased endurance is unknown but is thought to be associated with disease processes involving injuries to the basal ganglia and Restless Leg Syndrome (RLS). Bradykinesia also increases fatigue by prolonging the time required to complete activities and tasks. The individual must work harder to carry out simple movements or tasks. Muscles do not move well or are poorly conditioned, i.e., atrophied [18]. Loss of muscle strength increases fatigue and decreases endurance [19]. Zhao M. et al.,
in the manuscript entitled "Effects of coordination and manipulation therapy for patients with Parkinson disease" [20], analyzed the effects of a new exercise training regimen, i.e., coordination and manipulation therapy (CMT), on motor, balance, and cardiac functions in patients with Parkinson disease (PD). They divided 36 PD patients into the CMT (n = 22) and control (n = 14) groups. The patients in the CMT group performed dry-land swimming (imitation of the breaststroke) and paraspinal muscle stretching for 30 min/workday for 1 year. The control subjects did not exercise regularly. The same medication regimen was maintained in both groups during the study. Clinical characteristics, Unified Parkinson's Disease Rating Scale (UPDRS) scores, Berg balance scale (BBS) scores, mechanical balance measurements, the timed up and go (TUG) test, and left ventricular ejection fraction (LVEF) were compared at 0 (baseline), 6, and 12 months. Biochemical test results were compared at 0 and 12 months. The primary outcome was motor ability. The secondary outcome was cardiac function. In the CMT group, UPDRS scores significantly improved, TUG test time and step number significantly decreased, BBS scores significantly increased, and most mechanical balance measurements significantly improved after 1 year of regular exercise therapy (all p < 0.05). In the control group, UPDRS scores significantly deteriorated, TUG test time and step number significantly increased, BBS scores significantly decreased, and most mechanical balance measurements significantly worsened after 1 year (all p < 0.05). LVEF improved in the CMT group only (p = 0.01). This preliminary study suggests that CMT effectively improved mobility disorder, balance, and cardiac function in PD patients over a 1-year period.

Return to Safe Driving after Total Knee Arthroplasty: More Kinesiological Research Is Needed
Highlight by Jan Cabri, Carlos Marques and João Barreiros
Total knee arthroplasty (TKA) is a common orthopaedic surgical procedure with high success and long-term survival rates. The features of enhanced rehabilitation programs (sometimes also called fast-track protocols) are in-depth patient education; modified anaesthesia protocols; early mobilization (same-day mobilization); rapid resumption of activities of daily living; and multimodal pain therapy. A positive effect of this approach is a reduction in length of hospital stay (LOS), without an increase in readmission rates due to complications (LOS reductions from 10 to 4 days have been reported). Despite the existence of well-structured information about the ongoing rehabilitation process, patients want to know when they can safely resume car driving. The question is of socio-economic importance, since some patients are discharged from the hospital to their homes and depend on driving a car to reach the doctor or the rehabilitation center. However, the available evidence on which physicians can rely when advising patients on when they can safely resume car driving after TKA is scarce.
An important human factor in accident prevention research is the brake response time (BRT). The BRT is a measure of cognitive and psychomotor performance and has been used in traffic accident prevention research to assess driving capability in different populations. BRT can be divided into two main components: reaction time (RT) and movement time (MT). RT is defined as the time frame needed for signal perception, signal identification, and response selection. Accordingly, RT ends with the initiation of the motor component of the response, that is, with the early evidence of muscle activity or with a pressure reduction on the gas pedal. In contrast, MT is defined as the motor response to the signal. TKA causes MT delay, which affects BRT negatively. An increase in task complexity also significantly increases BRT [21][22][23]. TKA seems to affect peripheral aspects related to the execution of the movement. Soft tissue lesions may be the cause of such performance impairments after TKA. However, a review of the literature revealed that the normalization of BRT after TKA varied among the 10 existing studies and ranged from 28 to 56 days. Multiple factors may have led to the wide variation of results. Methodological weaknesses in some studies and the long time period between the first and the last publication, with all the developments made in the field (i.e., surgical techniques, anaesthesia protocols, rehabilitation protocols), are two possible explanations. In view of the discrepant results, it is not possible to make a generalized evidence-based recommendation. Further high-quality research on this issue is necessary.
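Since BRT is defined above as the sum of RT (from signal onset to the start of the motor response, e.g., pressure reduction on the gas pedal) and MT (the motor response itself, ending at brake contact), the decomposition can be written directly; the timestamps in the sketch are invented and the function is only illustrative.

def brake_response_time(signal_onset_s, pedal_release_s, brake_contact_s):
    # RT: signal onset to initiation of the motor response (accelerator release).
    # MT: accelerator release to brake-pedal contact. BRT is their sum.
    rt = pedal_release_s - signal_onset_s
    mt = brake_contact_s - pedal_release_s
    return {"RT": rt, "MT": mt, "BRT": rt + mt}

# Illustrative timestamps (seconds) from a hypothetical driving-simulator trial.
print(brake_response_time(signal_onset_s=0.00, pedal_release_s=0.45, brake_contact_s=0.75))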
2017-05-06T05:29:38.243Z
2017-02-08T00:00:00.000
{ "year": 2017, "sha1": "8687e60e31d4c1acd82fbb2b57c9379ac46ab526", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2411-5142/2/1/7/pdf?version=1487245483", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2db9edcec9a0d2f2202480a9d5c07147cab901c0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
17553694
pes2o/s2orc
v3-fos-license
Harmful effect of epinephrine on postreperfusion syndrome in an elderly liver transplantation recipient with sigmoid ventricular septum

Supplemental Digital Content is available in the text.

Introduction

A sigmoid ventricular septum (SVS) is a morphological change of the heart characterized by angulation between the ascending aorta and the basal ventricular septum. [1,2] This morphological change has been considered a normal aging process involving elongation or tortuosity of the ascending aorta. [1,3] Because an SVS is frequently observed on echocardiography and has no clinical implications in most subjects, little attention may have been paid to this feature in daily clinical practice. [4] However, in certain clinical circumstances, an SVS has been reported to cause left ventricular outflow tract (LVOT) obstruction, which may result in severe hemodynamic derangement. [3,5,6] Because of the increasing survival rate and larger number of elderly liver transplantation (LT) recipients, anesthesiologists may face an increasing incidence of SVSs in LT surgery. [7] During the reperfusion period of LT surgery, characteristic hemodynamic changes, known as postreperfusion syndrome (PRS), often occur. [8] Nevertheless, the potential risk from an SVS has been largely overlooked to date because severe hemodynamic perturbation has not been reported in LT patients with cardiac morphological changes. Herein, we describe a 70-year-old patient whose pretransplant echocardiography was reported as normal, although an SVS was seen during the examination. Severe hemodynamic instability occurred after administration of a small dose of epinephrine, the drug of choice for the treatment of PRS, because of an LVOT obstruction with systolic anterior motion (SAM) of the mitral valve leaflets after reperfusion of the graft.

Disclosure: Part of this article was presented at the 2014 American Society of Anesthesiologists annual meeting as a medically challenging case.

Author contributions: Y-JM and G-SH were responsible for the conception of the case report and writing the manuscript. JHP and SL were involved in recording echocardiographic images and videos. J-EO was involved in anesthetic management of the patient.

Clinical presentation

We obtained written informed consent from the patient for publication of this case report and accompanying images. A 70-year-old male patient diagnosed with hepatitis C virus-related liver cirrhosis was scheduled for living donor LT. His past medical history was unremarkable. The preoperative electrocardiogram showed normal sinus rhythm without ST- or T-wave abnormalities. His chest radiograph showed a normal heart size with mild pleural effusion because of a large amount of cirrhotic ascites. Routine preoperative transthoracic echocardiography (TTE) showed a hyperdynamic left ventricle with normal systolic function and an ejection fraction of 65%. The end-diastolic thickness of both the interventricular septum and the posterior wall was 9 mm, which was in the normal range. The mitral valve had normal morphology with trivial regurgitation. Continuous-wave Doppler of the LVOT showed a peak systolic flow velocity of 1.25 m/s, which corresponded to a pressure gradient of 6 mm Hg. The cardiologist's overall conclusion was a normal echocardiogram, with no mention of the presence of an SVS (Fig. 1). Upon arrival in the operating room, the patient's blood pressure was 103/60 mm Hg, with a heart rate of 60 bpm.
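As an aside, the 6 mm Hg gradient quoted for the preoperative Doppler examination follows from the 1.25 m/s peak velocity via the simplified Bernoulli equation that is standard in echocardiography; the report applies this conversion implicitly rather than stating it:

\[ \Delta P \approx 4v^{2} = 4 \times (1.25\ \mathrm{m/s})^{2} \approx 6\ \mathrm{mm\,Hg} \]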
After applying standard monitoring, general anesthesia was induced with 200 mg thiopental sodium, 5 mg midazolam, and 100 mcg fentanyl. Neuromuscular blockade was achieved with 10 mg vecuronium, and anesthesia was maintained with a continuous infusion of fentanyl and 1 vol% sevoflurane in a 50% air/oxygen mixture. After endotracheal intubation, the radial and femoral arteries were cannulated for continuous blood pressure monitoring. Blood pressure waveforms were digitized and recorded during the entire period of LT surgery. Two central venous cannulations, one for a pulmonary artery catheter and one for a large-bore catheter, were performed in the right internal jugular vein. A transesophageal echocardiography (TEE) probe was placed to guide intraoperative management.

After 7 hours of uneventful preanhepatic and anhepatic phases, graft reperfusion was performed with unclamping of the hepatic and portal veins. Shortly thereafter, PRS occurred, with blood pressure decreasing to as low as 49/27 mm Hg. Mixed venous oxygen saturation (SvO2) did not decrease, and TEE showed normal contractility of both ventricles without an LVOT obstruction in the mid-esophageal 5-chamber view (Fig. 2A, Video 1A, http://links.lww.com/MD/B204). Because the decreased blood pressure persisted, 10 mcg epinephrine, the treatment of choice for PRS, was injected twice intravenously. However, the patient's blood pressure did not increase, and the SvO2 decreased. We were about to give an incremental dose of epinephrine, as is usually done, because we believed that the initial amount was not sufficient to overcome PRS. Surprisingly, however, TEE indicated a hypovolemic and hyperdynamic left ventricle along with SAM of the mitral valve leaflets that led to an obstruction of the LVOT (Fig. 2B, Video 1B, http://links.lww.com/MD/B204). A moderate degree of mitral valve regurgitation was also found in the color Doppler mode (Fig. 2C, Video 1C, http://links.lww.com/MD/B204). Therefore, 100 mcg phenylephrine was injected along with an intravenous fluid bolus to increase the intraventricular volume.

After the patient's vital signs were stabilized, we reviewed the preoperative TTE and intraoperative TEE. These scans showed that the thickened end-diastolic basal interventricular septum protruded into the LVOT, which was not mentioned in the preoperative TTE report (Fig. 1). The patient had a prominent "knuckle" of the end-diastolic basal interventricular septum (18 mm) (Fig. 2B), and the LVOT was angulated. The angle measured between the basal portion of the interventricular septum and the ascending aorta was 93°, which was far less than the normal range (145 ± 7°). With the features mentioned above, the diagnosis of an SVS could be established. During the neohepatic phase, no further adverse events occurred. The patient was transferred to the intensive care unit and extubated on postoperative day 1. Three days later, a follow-up TTE showed no change in the SVS and no obstruction. The patient recovered and was discharged on postoperative day 26.

Discussion

Our present case clearly shows that anesthetic management of a patient with an SVS during LT surgery can be complicated by the potential risk of an LVOT obstruction, especially during the reperfusion period. Although dynamic LVOT obstruction has been reported in patients with an SVS, [3,5,6,9,10] our present report is the first to document this phenomenon during LT.
Because of their unique hemodynamic and cardiac morphological characteristics, elderly patients with end-stage liver disease are highly susceptible to developing LVOT obstruction. After reperfusion of the graft, LT recipients often encounter severe hemodynamic instability, the so-called PRS. This instability can present as a decrease in systemic vascular resistance (SVR) and relative hypovolemia, or it can sometimes be accompanied by decreased contractility of both ventricles. In these situations, the treatment of choice is epinephrine, a nonselective adrenergic agonist that can be effective in both circumstances. However, as in our present case, further hemodynamic instability can develop with epinephrine treatment in patients with an SVS. Strong inotropic and chronotropic effects of epinephrine, in conjunction with low SVR and relative hypovolemia because of PRS, can exacerbate a dynamic LVOT obstruction. [8,11] Thus, an early differential diagnosis of prolonged PRS is important, and patient management should be tailored according to this diagnosis.

In our present case, TEE played a crucial role in both the early recognition and the management of a dynamic LVOT obstruction caused by SAM of the mitral valve leaflets. Because TEE was monitored continuously during the reperfusion period in our patient, the dynamic LVOT obstruction could be recognized instantly. Appropriate management could also be initiated with phenylephrine and fluid loading rather than treatment with additional epinephrine, which may have resulted in fatal consequences. Unlike numeric data obtained from the blood pressure waveform, TEE permits an instant and direct assessment of both the structural and the dynamic functions of the heart. [12,13] Moreover, TEE has been reported to be relatively safe, with a low incidence of hemorrhagic complications despite the presence of esophageal varices. [14,15] Therefore, TEE should always be considered for patients with prolonged PRS, especially for patients at high risk of developing an LVOT obstruction. [16,17]

Although the underlying mechanism of a dynamic LVOT obstruction in an SVS has not been well established, it is believed to be similar to that of hypertrophic cardiomyopathy. [9] The angulation of the LVOT alters flow vectors in the left ventricular cavity, and protrusion of the basal septum causes flow acceleration around the narrowed LVOT. The resulting "drag effect" and "Venturi effect," respectively, are thought to be involved in SAM of the mitral valve leaflets in patients with an SVS who develop a dynamic LVOT obstruction. [6,9] A recent review by Hymel and Townsley [16] summarized several characteristic echocardiographic features for predicting SAM of the mitral valve leaflets. These include a basal interventricular septal thickness >15 mm, a C-sept distance (distance from the mitral coaptation point to the septum) <25 mm, a mitral-aortic angle (the angle formed by the intersection of the mitral annulus and aortic annulus) <120°, and an abnormal mitral leaflet length. Another study, by Tano et al, [3] showed that a short end-systolic leaflet tethering distance (the distance between the tip of the posterior papillary muscle and the contralateral anterior part of the mitral annulus) in the resting state was a major determinant of developing an LVOT obstruction with SAM of the mitral valve leaflets in patients with an SVS during a dobutamine provocation test (29.9 ± 4.2 vs 35.2 ± 4.6 mm). Our present case satisfied most of these provocative conditions.
Our present patient had an 18-mm end-diastolic basal interventricular septal thickness, a 20-mm C-sept distance, and a 32-mm leaflet tethering distance. The angle measured between the basal septum and the ascending aorta was 93°, and the mitral-aortic angle was 102° (Fig. 1).

There has been a previous case report of an SVS with an LVOT obstruction during surgery. [10] In that patient, a low SVR caused by spinal anesthesia provoked a dynamic LVOT obstruction. In other previous case reports, dynamic LVOT obstructions were provoked in certain hemodynamic settings, such as a dobutamine stress test, an exercise test, or administration of a phosphodiesterase 3 inhibitor. [3,6] These settings are all associated with increased contractility and a decreased SVR. During LT surgery, recipients undergo a similar but usually more aggravated hemodynamic derangement during graft reperfusion. For this reason, anesthesiologists should keep in mind that a dynamic LVOT obstruction may occur during LT surgery even in patients with a less severe form of SVS. Moreover, whenever refractory hemodynamic instability occurs in circumstances of increasing contractility and decreasing SVR, an undiagnosed SVS should always be suspected as a hidden cause.

In summary, we report the first case of a dynamic LVOT obstruction arising in the graft reperfusion period of LT surgery in a patient with an SVS. Dynamic LVOT obstruction should always be considered as a possible cause of hemodynamic instability during the reperfusion period, especially in elderly patients with an SVS. In addition, TEE is a very useful tool for both the diagnosis of hemodynamic derangements and the guidance of appropriate management during LT surgery. The routine use of TEE is therefore highly recommended.
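For illustration only, the statement that the case "satisfied most of these provocative conditions" can be checked directly against the thresholds cited from Hymel and Townsley using the measurements quoted above. The following minimal sketch is ours, not part of the case report:

```python
# Illustrative check (not from the case report): the patient's measurements quoted
# in the text versus the SAM-risk thresholds attributed to Hymel and Townsley above.
criteria = {
    "basal septal thickness > 15 mm": 18 > 15,    # patient: 18 mm
    "C-sept distance < 25 mm":        20 < 25,    # patient: 20 mm
    "mitral-aortic angle < 120 deg":  102 < 120,  # patient: 102 degrees
}
for name, met in criteria.items():
    print(f"{name}: {'met' if met else 'not met'}")
print(f"{sum(criteria.values())} of {len(criteria)} criteria met")
```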
2018-04-03T00:51:46.331Z
2016-08-01T00:00:00.000
{ "year": 2016, "sha1": "38169072fa17c12015d7f9d03972081eec96d031", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/md.0000000000004394", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5dadb7159589280a282541031d0288e939cb985f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
216654295
pes2o/s2orc
v3-fos-license
DETERMINANTS OF AGRICULTURE-RELATED LOAN DEFAULT: EVIDENCE FROM CHINA

This paper investigates agriculture-related loan default in 2002-2009 through a large data set from a leading Chinese state-owned bank. Using logit regression, we find that the default rate on agriculture-related loans is significantly higher than that on non-agriculture-related loans. We find that base interest rates, loan maturity, the type of collateral, firm size, ownership structure, and managerial quality rating have a significant impact on agriculture-related loan default, but this also depends on how agriculture-related loans are defined. The results provide insight into the real impact of monetary policy on agriculture-related lending.

banks when executing san nong policies. This is a concern for policymakers and critical to the sustainable development of this market (Xie and Jin, 2015). Fourth, the literature on loan default involves the determinants of default, the effect of collateral on default, and the heterogeneity of borrowers (Savitha and Kumar, 2016; Zhou et al., 2016; Yin et al., 2019). There is a dearth of literature dedicated to the sector-specific credit quality of loans, particularly loans in emerging markets. Finally, in an era of high output volatility of agricultural products, this study of one of the largest agricultural producers is also of general interest to the world community in terms of managing food supply and food security.

This paper contributes to the literature by filling the void in studies dedicated to bank lending in the agricultural sector. The findings reveal the key determinants of the credit quality of agriculture-related loans that inform the business operations of banking institutions. We are specifically interested in two sets of variables that are particularly important in agricultural loans: agriculturally related factors (e.g., temperature and rainfall) and loan-specific interest rates. Indeed, we find that these two sets of variables significantly determine the default rate in this unique sector. We also provide insight for emerging economies on the financial risk in policy-oriented lending to the agriculture sector.

This paper is organized as follows: Section II reviews the literature related to loan default. Section III presents the methodology and data. Section IV interprets the results on the default of agriculture-related loans relative to other types of loans and conducts robustness tests. Section V analyzes the determinants of agriculture-related loan default, and Section VI concludes the paper.

II. LITERATURE REVIEW

A. Determinants of Bank Loan Default

Identification of the determinants of bank loan default can be traced to Campbell and Dietrich (1983), who use US bank loan data to find that the ratio of concurrent payments to revenue, the loan-to-value ratio, the unemployment rate, loan maturity, and the initial loan-to-value ratio have a significant influence on the loan default rate. Berger and De Young (1997) use Granger causality tests and find that, when a bank's capital decreases, the amount of bad loans increases, and there is a bilateral intertemporal relation between the quality of loans and cost efficiency. In addition, cost efficiency is one of the most important factors for predicting bad loans and failing banks. Elsas and Krahnen (1998) argue that the credit risk with corporations is greater than with non-corporations, i.e., partnerships and sole proprietors involve less moral hazard.
De Young (2001, 2006) suggest that geographic distance increases the costs of information collection and monitoring, such that the loan default rate increases with the distance between the borrower and the lender. Jiménez and Saurina (2004) analyze more than 3 million individual loans in Spain between 1988 and 2000 and find that the default rate on collateralized loans is higher than that on unsecured loans. They also find that savings banks tend to extend loans with higher credit risk and that closer banking relationships between banks and firms increase bank risk taking. Landier, Nair, and Wulf (2005) find that an information-based lending model can reduce the inefficiency caused by geographic distance, and that lenders who use credit scoring models face a lower default rate. Conversely, Rossi (1998) and Flannery and Samolyk (2006) find that an automated lending model is associated with economies of scale, and that lending at the breakeven point of credit rating models would lead to a higher default rate. De Young, Glennon, and Nigro (2008) find that longer distances and lower credit ratings lead to a higher probability of default. Jiménez and Saurina (2009) find that default rates are highly correlated with economic cycles. In addition, the credit losses on manufacturing, construction, consumer, and collateralized loans are generally higher. However, Sha and Wang (2019) find that, as long as the debtor's financial information is controlled for, there is no industry effect in determining US defaults, challenging the intuition that macroeconomic conditions have predictive power for default. Using a micro-level Chinese bank loan database, Yin et al. (2019) reach the same conclusion, namely that borrower heterogeneity dilutes sensitivity to economic change in co-determining loan default.

In sum, previous research has identified various determinants influencing bank loan default. The role of macroeconomic conditions, previously recognized as a determinant of loan default, is now being challenged with new evidence from various countries. Most of the literature relies on bank-level consolidated data to conduct analyses, and few studies use contract- or individual-level data. Given that the decision to default is ultimately made by the borrower, and not the bank, data obtained directly from the borrower could provide a clearer picture in this research field.

B. Role of Collateral

Many studies focus on the impact of collateral on the default rate. Stiglitz and Weiss (1981) and Chan, Greenbaum, and Thakor (1987) argue that banks' requirement of collateral when providing loans reduces the adverse selection problem, which, in turn, leads to a lower default rate. Aghion and Bolton (1992) and La Porta et al. (1998) suggest that, according to credibility threat theory, collateral is an effective tool to guarantee borrowers' good behavior. Smith and Warner (1979) suggest moral hazard as a determinant of collateral use in lending, i.e., collateralized debt prevents the borrower from swapping high-quality assets for low-quality assets and ties up funds that could otherwise be used to finance projects. Chan and Kanatas (1985) consider the situation in which the borrower cannot change the returns of the lender, i.e., a perfectly competitive, risk-neutral credit market with no moral hazard. When the creditworthiness of the lender and that of the borrower are identical, there is no need for collateral.
Bolton and Scharfstein (1996) and Manove, Padilla, and Pagano (2001) argue that, when firms use external assets as collateral, banks can obtain repayment upon default. This affects borrowers' motivation for technical default and reduces adverse selection. Collateral can substitute for an assessment of the quality of the project to be financed. Jiménez and Saurina (2004) also find a negative relationship between the quality of collateral and the credit risk of borrowers. Jiménez et al. (2014) find that lower short-term interest rates encourage small-cap banks to provide more loans to risky firms without collateral, likely leading to greater levels of default. Chen and Lin (2016) show that government bail-out programs reduce the default risk for banks but indirectly increase the default risk for borrowing firms.

Other studies find a close relationship between collateral requirements and high credit risk in lending. Empirical research shows that collateralized loans face higher risk; to some extent, they are called high-default-probability loans (see also Orgler, 1970; Hester, 1979). In other words, these loans carry a higher risk premium (see also Berger and Udell, 1990; Booth, 1992; Booth and Chua, 2006; Angbazo, Mein, and Sanders, 1998; however, these studies are limited to the US loan market). Igawa and Kanatas (1990) suggest that, in a credit market with ex ante information asymmetry, collateral will lead not only to the approval of borrowers' loan applications but also to moral hazard when borrowers use the loans. Freund et al. (1998) argue that collateral-based lending models lead to credit crises. Padilla (1999, 2001) argue that the higher the amount of collateral required, the worse the quality of the loans (ex post credit risk), and the higher the default rate. First, when banks receive a guarantee for loans, they have less motivation to filter out potentially problematic borrowers and loans. Second, optimistic entrepreneurs usually underestimate their probability of bankruptcy and are therefore willing to provide any collateral required to obtain funding.

Based on output, consumption, and foreign debt, Arellano's (2008) default risk model for emerging economies predicts that interest rates and default incentives are higher in recessions. Li et al. (2013) show that, in the wake of the 2008 financial crisis, agricultural loan delinquency rates were consistently below banks' overall loan delinquency rates. Nwachukwu (2013) identifies the characteristics of the beneficiaries of a government-sponsored agricultural loan program in Nigeria to investigate its high default rates. Weber and Musshoff's (2012) results suggest that, in the Tanzanian loan market, agricultural firms have lower delinquency rates than non-agricultural firms do. Castro and Garcia (2014) and Ouyang and Zhang (2019) find that commodity price volatility and climate factors have a modest impact on agricultural loans, while macroeconomic conditions for the agricultural sector and intermediate input prices have greater influence. Dinterman et al. (2018) study how economic factors have impacted farm businesses in the United States. They find that macroeconomic factors such as interest and unemployment rates have strong predictive power for farm bankruptcies. Escalante et al. (2017) show that non-white male and female farm borrowers are usually charged higher interest rates than others, which could be attributed to lenders' credit risk management strategies. Bailey et al.
(2011) find that firms with poor performance are more likely to receive bank loans, and their subsequent long-run performance is typically poor. The authors also note negative stock market reactions: the share prices of Chinese borrowers typically decline significantly around bank loan announcements. Chang et al. (2014) used a proprietary database from a large Chinese state-owned bank to examine the usefulness of banking relationships in predicting loan default. They find that the contribution of banking relationships to predicting default is greater than that of other, hard information.

In sum, the literature has documented the unique structure of agriculture-related bank loans under formal and informal financing channel theory. The use of collateral and government involvement confounds the estimation of the genuine credit quality of loans in the agriculture sector. Some country-specific research provides insights into the issue, but these studies are limited to survey data and small samples. In light of these limitations, Yin et al. (2019) examined the role collateral plays in bank loans using a large data set from a leading commercial bank. Using the same data set, we are further interested in the determinants of credit quality in the agriculture sector.

Morgan et al. (2012) and Yin et al. (2019) point out that agricultural businesses are risky. Apart from market and business operation risks, the agriculture sector also suffers from additional, weather-related risks. Banks exercise less monitoring and control, because agricultural production is located in rural areas where transportation and information collection are less convenient. There is also a lack of instruments that agriculture-related businesses can use to hedge these risks, because the derivatives market is underdeveloped in China (Ouyang and Zhang, 2019). Therefore, we form two research hypotheses. The first hypothesis is as follows.

H1: Agriculture-related loans have a higher probability of default than other types of loans.

Pr(Default_it = 1) = α + β AR_it + γ X_it + μ_it,   (1)

where, for borrower i at time t, Default is a binary variable that takes the value of zero when the loan is normal and one if the loan goes bad (see Section III on the data and variables for the definition of a bad loan, or default), and AR is a dummy variable that takes the value of one when the loan is related to agriculture and zero otherwise, with AR defined according to three criteria (see also Tables A1-A3 in the Appendix). The first criterion is set by the People's Bank of China, the second by the UN, and the third by the National Bureau of Statistics of China. Because of the broad coverage of the People's Bank of China criterion, it is widely used in the Chinese banking industry, and we adopt it to classify agriculture-related loans. The other two criteria are used to check the sensitivity of the results to the choice of agriculture-related loan classification. Due to the higher credit risk involved in agriculture-related lending, the coefficient β is expected to bear a positive sign.

In line with Yin et al. (2019), we use the following loan information variables to explain the probability of default: maturity, amount, repayment method, type of guarantee, and interest rate, where maturity is either short, medium, or long term.
We introduce two dummy variables (MID and LONG) for the maturity of loans: the first takes the value of one if the loan is medium term and zero otherwise, and the second takes the value of one if the loan is long term and zero otherwise. We use the logarithm of the loan amount as the size of the loan (Amount). The interest rates on loans are captured by two variables: the logarithm of the base interest rate (BaseIR) and the logarithm of the range of loan-specific interest rate fluctuations (FloatIR). The repayment methods are a bullet payment at maturity (BulletP), periodic interest payments plus principal at maturity (PeriodicP), customized periodic repayments (CustomizedP), and standard periodic payments. The types of guarantee include unsecured (Unsecured), guaranteed (Guaranteed), collateralized (Collateralized), pledged (Pledged), and discounted notes (Disnotes), which are assigned binary variables of the same names.

To examine the effect of firm-specific characteristics on the quality of loans, following Liu et al. (2019) and Yin et al. (2019), we control for four categories of managerial quality rating, i.e., excellent, average, restricted, and knockout, and for firm size, denoted as mega-sized (Mega), large (Large), medium-sized (Medium), or small. We also control for the ownership structure, i.e., state-owned enterprises (SOE); collectively owned enterprises (CO); stock cooperative enterprises (SC); associated enterprises (AE); limited liability companies (LTD); corporations (CORP); private enterprises (PRI); foreign enterprises (FOR), including Hong Kong, Macau, and Taiwanese enterprises; and other enterprises, including, e.g., sole proprietorships and partnerships. These indicators are binary variables of the same names in the analyses. We control for the size of the borrowers by classifying them as mega-sized, large, medium-sized, and small. We use the 12-month-averaged temperature (TEMP) and precipitation (RAIN) levels in the same area as control variables for the weather (see also Sha and Wang (2019) for a discussion of using 12-month-averaged values in predicting default). To control for the impact of time-varying macroeconomic factors, we introduce year dummies; μ_it is a residual term. We use a logit model to estimate Equation (1). Considering that credit quality is a discrete variable, we also use an ordered logit model to estimate Equation (1).

Monetary policy is an important systemic risk factor in determining the credit risk of loans in China, not only because a tighter monetary policy drains liquidity for businesses, but also because it signals the arrival of a less favorable business climate (Ayyagari et al., 2010). We thus propose the following hypothesis.

H2: Compared to changes in loan-specific interest rates, tightening in monetary policy, i.e., an increase in base interest rates, is more likely to lead to default in agriculture-related loans.

To further investigate the determinants of agriculture-related loan default and to test H2, we estimate a second specification, Equation (2), on a subsample of only agriculture-related loans. If H2 is valid, BaseIR will have a significant, positive sign, whereas FloatIR should be nonsignificant. The control variables include contract- and firm-specific control variables, as well as weather variables and the year dummies, exactly as in Equation (1).

III. DATA AND RESEARCH METHODS

This paper uses corporate loan data for 2002-2009 from a leading state-owned bank in China.
To avoid sample selection bias, we exclude loans that were made during this period but will mature after 2009. In our data set, credit risk is measured by five categories of loan quality (in descending order): normal, concerned, subprime, suspicious, and loss. According to industry practice, the last three categories are usually classified as bad loans. Therefore, we introduce a binary variable that takes the value of one if the loan is subprime, suspicious, or a loss, and zero otherwise. To check the sensitivity of the results to the choice of default definition, we use the method of Ping and Yang (2009), reclassifying bad loans to also include the concerned category.

The descriptive statistics (see Table 1) show that mid- and long-term loans account for only a small proportion of the sample. The loan amount has a wide range. In terms of repayment types, while customized periodic payments are rarely used, periodic and bullet payments are much more common. Loans backed by discounted notes or collateral are much more common than guaranteed or pledged loans. The borrowers are often small and medium-sized companies. Using the same data set, Yin et al. (2019) find that the default rate of non-agriculture-related loans is 6.38%, whereas that of agriculture-related loans is much higher, at 11.6%. To further analyze the credit risk on agriculture- and non-agriculture-related loans, we compare the status of these two types of loans. While the proportion of non-agriculture-related loans with normal status is higher than that of agriculture-related loans (81.6% vs. 75.84%), the proportions of non-agriculture-related loans with lower credit quality status are smaller than those of agriculture-related loans.

Table 1. Descriptive Statistics of Explanatory Variables. This table reports selected descriptive statistics for all variables used in the paper. AR is a dummy that takes the value of 1 when the loan is agriculture-related and 0 otherwise; MID and LONG are dummy variables for loans with mid-term and long-term maturities, respectively; BaseIR is the official interest rate set by the People's Bank of China, depending on the maturity of the loan; FloatIR is the range of loan-specific interest rate changes; Amount is the log of the amount of the loan; BulletP, PeriodicP, and CustomizedP are dummies for loans with these three methods of repayment; Guaranteed, Collateralized, Pledged, and Disnotes are dummies for the types of collateral; Excellent, Average, and Restricted are dummies for borrowers with such managerial ratings; Mega, Large, and Medium are dummies for the size of the borrowers; SOE, CO, SC, AE, LTD, CORP, PRI, and FOR are dummies for the ownership structure of the borrowers; TEMP and RAIN are the annual mean temperature and mean precipitation, respectively.

A. Basic Results

The results presented in Table 2 are based on the People's Bank of China's definition of agriculture-related loans. Table 2 shows a very significant positive relationship between being agriculture-related and default, i.e., agriculture-related loans are more likely to result in default than other types of loans. This result is consistent with H1, i.e., agriculture-related loans in China are riskier and less controllable than other types of loans.

Table 2.
Agriculture-Related Loans and Credit Risk. We report the marginal effects of the logit regression following Equation (1).

Regarding loan contract-specific characteristics, medium-term (MID) and long-term (LONG) loans are more likely to default than short-term loans; however, the effect for long-term loans is weak. This finding is consistent with theory and previous empirical studies (e.g., Campbell and Dietrich, 1983). The base interest rate has a positive relationship with default in the logit estimation, i.e., loans granted during a high-interest-rate period are more likely to end up in default. The interest rate float (FloatIR) has a positive relationship with loan default, i.e., interest rate adjustments specific to large loans are associated with a reduced ability to repay. A positive relationship between the loan amount (Amount) and default is found, i.e., the larger the loan (higher Amount), the more likely the default. However, this relationship has only modest economic significance. In our sample, repayment methods do not have a significant impact on loan default, although one would expect bullet payments to be riskier than periodic repayments, since, in the former, the entire cash flow occurs at maturity. The type of guarantee has a significant impact on default, i.e., loans backed by discounted notes have a much lower chance of default. Discounted notes (Disnotes) have stable value and are liquid; therefore, the cost of default for borrowers is high. This result is consistent with the work of Aghion and Bolton (1992) and La Porta et al. (1998). Managerial quality has a significant impact on loan default, i.e., in comparison with firms rated excellent, average, and restricted, those with knockout ratings (Knockout = 1) are more likely to default. This finding is intuitive, i.e., low-quality management can make inferior decisions that can lead to the failure of the firm. The effect of firm size on loan default is remarkable: while mega-sized firms contribute to higher levels of default, large to medium-sized firms are less likely to default. Regarding firm ownership structure, state-owned enterprises and collectively owned enterprises are more likely to default. The logit estimations show that stock cooperative, limited liability, and private firms are less likely to default. Weather also plays a role in loan default: when TEMP and RAIN increase, the chances of default increase accordingly. However, the effect of temperature seems to be very modest. This result is consistent with the work of Castro and Garcia (2014), in that warmth and rainfall contribute to agricultural production as long as they do not surpass certain thresholds. The year dummies have an effect on default; however, their main role in this paper is to absorb macroeconomic factors that are not included in our control variables.

Table 3 shows the ordered logit estimations of Equation (1). The results are consistent with those in Table 2, where agriculture-related loans have a higher probability of default than other types of loans.

Table 3. Agriculture-Related Loans and Credit Risk - Ordered Logit. We report the results of the ordered logit regression following Equation (1).

B. Robustness Checks

B.1. Alternative Definitions of Agriculture-Related Loans

The results in Table 4 are based on the definitions of agriculture-related loans by the UN and by the Chinese Bureau of Statistics.
When the Chinese domestic classification is used, there is a positive relationship between a loan being agriculture-related and default. This positive relationship is not statistically significant, however, when the UN classification is applied. This result could suggest that agriculture-related sectors have country-specific characteristics and that an international standard is not applicable. Compared with the People's Bank of China's classification, the alternative definitions lead to nonsignificant relationships between interest rates and loan default. The repayment method, type of guarantee, firm size, ownership structure, management ratings, and time have very similar effects on loan default, regardless of how agriculture-related loans are defined. Generally, agriculture-related loans are more likely to result in default than non-agriculture-related loans are.

B.2. Alternative Definitions of Default

We follow Ping and Yang (2009) and redefine default to include loans with a concerned status, to check the sensitivity of the results to the definition of loan default. Table 5 shows that agriculture-related loans have a consistently higher default rate than non-agriculture-related loans do. Both BaseIR and FloatIR have a much stronger positive effect on this broader definition of default than on the previous one. We therefore conclude that our main findings do not vary with alternative default definitions.

Table 4. Agriculture-Related Loans and Credit Risk - Alternative Definition of Agriculture-Related Loan. We estimate Equation (1), Pr(Default_it = 1) = α + β AR_it + γ X_it + μ_it, following the two alternative standards for defining agriculture-related loans, the Chinese domestic standard and the UN standard (see the Appendix).

Table 5. Agriculture-Related Loans and Credit Risk - Alternative Definition of Default. We estimate Equation (1), Pr(Default_it = 1) = α + β AR_it + γ X_it + μ_it, using the Ping and Yang (2009) definition of default.

B.3. Determinants of Default on Agriculture-Related Loans

To further investigate the determinants of default on agriculture-related loans, i.e., those related to loan contract information and firm-specific characteristics, we run Equation (2) using the agriculture-related loan subsample. Table 6 shows that medium-term agriculture-related loans are more likely to result in default. The same effect, although weak, is found for long-term loans. Generally, our analysis suggests that default increases with loan maturity. The higher BaseIR is, the more likely the agriculture-related loan will end up in default, while FloatIR does not have a significant impact on agriculture-related loan default. This result is attributed to the vulnerability of the agriculture sector in China; i.e., the resilience of agriculture-related loans to risk is low, and macroeconomic shocks can precipitate their default. This finding confirms H2, i.e., monetary policy is an important systemic risk factor in determining the credit risk of loans in China, even when the loan-specific rate is given and controlled for. Regarding loan contract-specific information, the type of guarantee has no effect on agriculture-related loan default, as for other types of loans. With respect to firm-specific information, firm size affects the default rate: mega- and medium-sized firms are found to have lower chances of default. Managerial quality ratings have a significant effect on default, i.e., borrowers rated as excellent have the lowest default rate, followed by average borrowers and restricted borrowers, with knockout borrowers most likely to default.
This finding is consistent with other types of loans. The weather effect is still significant and consistent with that for non-agriculture-related loans.

V. CONCLUDING REMARKS

This paper investigates agriculture-related loan default in China. Consistent with our hypotheses, agriculture-related loans are more likely to result in default than non-agriculture-related loans, after controlling for other factors. The only exception is when the UN classification is used. This could suggest that agriculture-related sectors have country-specific characteristics and that an international standard definition is not applicable to a single country. An alternative definition of default does not change the conclusion that agriculture-related loans are more likely to default than non-agriculture-related loans are. However, such a redefinition of default generally affects the influence of contract-specific characteristics on the default of all types of loans.

In the analysis of the determinants of agriculture-related loan default, we find that default increases with maturity. However, unlike other types of loans, long-term agriculture-related loans do not show a significantly higher credit risk than their short-term counterparts. This result could be because the agriculture-related subsample does not contain many long-term loans, since financial institutions engaged in agriculture-related lending do not usually wish to have prolonged exposure to a single entity. We also find that the higher the base interest rate, the more likely agriculture-related loans are to end up in default, while loan-specific interest rate fluctuations do not have a significant impact on agriculture-related loan default. These two findings are consistent with our hypotheses and could be attributed to the low resilience of the agriculture-related sector to macroeconomic shocks. Guaranteed and collateralized agriculture-related loans are also more likely to result in default. This finding suggests that the moral hazard arising from the introduction of guarantees and collateral requirements could contribute to the credit risk of agriculture-related loans. Firm-specific characteristics, such as firm size, the borrower's managerial quality, and ownership structure, also have a significant influence on the default of agriculture-related loans. Remarkably, the agriculture-related loans in our sample show a downward trend in default between 2003 and 2008.

Our findings confirm the concerns of financial institutions that agriculture-related loans are generally riskier than non-agriculture-related loans. Policymakers should pay more attention to the impact of macroeconomic policies, such as monetary policy, on systemic risk in the agriculture-related loan market. An agriculture-related derivatives market, such as one for weather derivatives, could be developed to help agriculture-related businesses better manage their uncontrollable risks. For financial institutions, borrower-specific risk characteristics should play an important role in lending decisions, while the design of loan contracts is also essential. This systematic study of the determinants of agriculture-related loan default contributes to the literature on the credit risk of loans in a sector that is critical to the fundamental wellbeing of the world population.
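To make the empirical approach concrete, the following is a minimal sketch of how a logit model in the spirit of Equation (1), with marginal effects of the kind reported in Table 2, could be estimated. It is not the authors' code: it uses statsmodels on synthetic data, and every column name below is a hypothetical placeholder rather than a field from the bank's confidential records.

```python
# Minimal sketch (not the authors' code): a logit default model in the spirit of
# Equation (1), estimated with statsmodels. The data frame and all column names
# below are synthetic placeholders, not the bank's confidential loan records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
maturity = rng.choice(["short", "mid", "long"], size=n, p=[0.8, 0.15, 0.05])
df = pd.DataFrame({
    "AR": rng.integers(0, 2, n),                    # 1 = agriculture-related loan
    "MID": (maturity == "mid").astype(int),         # medium-term maturity dummy
    "LONG": (maturity == "long").astype(int),       # long-term maturity dummy
    "BaseIR": np.log(rng.uniform(4.0, 8.0, n)),     # log of base interest rate
    "FloatIR": np.log1p(rng.uniform(0.0, 2.0, n)),  # log of loan-specific rate float
    "Amount": np.log(rng.uniform(1e5, 1e8, n)),     # log of loan amount
    "year": rng.integers(2002, 2010, n),            # year dummies absorb macro factors
})
# Synthetic outcome: defaults made more likely for AR loans and high base rates,
# so the sketch has something to estimate.
index = -4.0 + 0.6 * df["AR"] + 1.2 * df["BaseIR"]
df["Default"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-index)))

model = smf.logit(
    "Default ~ AR + MID + LONG + BaseIR + FloatIR + Amount + C(year)", data=df
)
result = model.fit(disp=False)
print(result.get_margeff(at="overall").summary())   # marginal effects, as in Table 2
```

On real contract-level data, the same formula could be extended with the repayment, guarantee, ownership, rating, and weather dummies described in Section III, and an ordered logit over the five credit quality categories would follow the same pattern with a different model class.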
APPENDIX

Service to agriculture, forestry, livestock farming and fishing
1611 Agricultural infrastructure
1621 Agricultural product processing
1631 Agricultural product export
1632 Circulation of other agricultural material
1641 Agricultural science and technology
1651 Rural area infrastructure
1661 Manufacturing of agricultural tools and equipment
1711 Other agriculture-related
1811 Particular non-agriculture-related
2111 Agriculture - individual
2121 Forestry - individual
2131 Livestock farming - individual
2141 Fishing - individual
2151 Service to agriculture, forestry, livestock farming and fishing - individual
2161 Other individual agricultural activities
2211 Rural student loans
2221 Other rural consumer loans
9999 Other

Vegetables and horticultural products planting
A013 Fruits, nuts, beverage and fragrance products planting
A014 Herbal medicine crops planting
A021 Trees planting and cultivation
A022 Timber and bamboo logging
A023 Forestry products collection
A031 Livestock breeding
A032 Pig breeding
A033 Poultry breeding
A034 Hunting
A039 Other livestock farming
A041 Sea fishing
A042 Inland fishing
A051 Services to agricultural sector
A052 Services to forestry sector
A053 Services to livestock farming industry
A054 Services to fishing industry

*International Standard Industrial Classification of All Economic Activities Rev 4
2020-04-16T09:10:14.465Z
2020-01-31T00:00:00.000
{ "year": 2020, "sha1": "3665b15fa50af1127d3b82bff74d32117474ff8f", "oa_license": "CCBYNC", "oa_url": "https://www.bmeb-bi.org/index.php/BEMP/article/download/1160/900", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e25bb544354c47ffe4e4208549bc2f53c44e67f5", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Economics" ], "extfieldsofstudy": [ "Business" ] }
212918855
pes2o/s2orc
v3-fos-license
Science Fair was One of the Highlights of My Middle School Life: Using Science Fair to Develop NGSS Practices

Abstract

This article illustrates how a seventh-grade life science unit connects to the Science and Engineering Practices and Nature of Science in the Next Generation Science Standards and uses science fair projects as a context for students to solve problems and understand how authentic science is done. We outline how student interests drive the development and presentation of science fair projects and discuss each component of a science fair project to reflect the practices and nature of science and how we support students along the way. The article includes images of students and of their work for science fair projects.

Introduction

This article illustrates how independent science fair projects in a seventh-grade life science unit connect to both the Science and Engineering Practices (SEPs; Table 1) and the Nature of Science (NOS; Table 1) in the Next Generation Science Standards (NGSS; NGSS Lead States, 2013). Independent science fair projects are investigations in which students, individually or in small groups, seek to investigate phenomena of interest to them with assistance from their teacher or a mentor (Bunderson & Anderson, 1996). The SEPs are ways that authentically unite the processes used by scientists to understand the natural world. The NOS is an understanding of "science as a way of knowing" (Lederman, 1992, p. 331). As presented in appendix H of the NGSS, the NOS refers to the context in which science is done, including the core values and the assumptions that are part of the development of scientific knowledge (Lederman & Zeidler, 1987). Science fairs are venues where students' science and engineering projects are evaluated and celebrated (Bencze & Bowen, 2009). In their projects, students include the SEPs to answer testable questions (STEM Education Coalition, 2012) and thereby also illustrate many components of the NOS. Science education research has documented positive learning outcomes for students participating in science fairs (Schmidt & Kelter, 2017; Koomen et al., 2018), supporting their value in science education (Figure 1).

At the beginning of the school year, an insect investigation laid the foundation for the SEPs and the NOS in the first author's seventh-grade life science classroom in a small rural school district in the Upper Midwest. A monarch butterfly unit introduced students to the SEPs (asking questions, developing models, and using mathematics, information and computer technology, and computational thinking; NGSS Lead States, 2013). The unit was based on lessons adapted from the "Monarchs in the Classroom" curriculum (Oberhauser & Goehring, 2009), including lessons on rearing, observing, collecting data accurately, and developing a graph (Appendix 1; Appendices 1-7 are available as Supplemental Material with the online version of this article). To interpret graphs, students used question frames from the Biological Sciences Curriculum Study (BSCS) "Identify and Interpret" strategy: "What do you see?" and "What does it mean?" For example, students described what they saw in Figure 2 (possible answers might be a line that goes to the upper right, the number of days, etc.). Next, students explained what it means when the line goes up (the mass of the larva is increasing, or there is a positive slope). Then the class discussed how this positive slope represents a model of larval growth over time.
The insect unit ended with a mealworm lab that supported students in developing a testable question, a hypothesis, and a plan for carrying out an investigative study (Appendix 2).

Application of SEPs & NOS in Science Fair Projects

Our students used a teacher-created template (Appendix 3) to design their projects. They eventually used Google Slides for each section of their project: title, introduction, question/hypothesis, methods, results, discussion, and conclusion. The slides were printed out and placed on a trifold board for presentation at the science fair. In the sections that follow, we illustrate how science fair projects integrate the SEPs and the NOS, through three randomly selected science fair projects by middle school students (projects 1 and 3 were each completed individually; project 2 was completed by a pair of students) seeking to answer the following questions.

(1) Project 1: What effect does tannic acid in the St. Louis River have on duckweed (Lemna minor) growth while under stress from motor oil pollution?
(2) Project 2: What effect does gender have on who is willing to eat an edible insect?
(3) Project 3: What effect do artificial sweeteners have on growth of probiotic bacteria?

Abstracts for all three projects are found in Appendices 4, 5, and 6. Our instruction included informal connections to the SEPs and the NOS rather than an explicit emphasis. In the next sections, we use the scientific form of the project to discuss the process students engaged in to complete their projects (Koomen et al., 2018).

Identifying a Topic

As an introduction to the research projects, students (1) listened to former students (8th-12th graders or alumni) share their completed projects and experiences, (2) chose a topic of personal interest, and (3) received a template for planning their investigation that builds from the foundation they experienced with the insect investigations. Former science fair students talked to the middle schoolers about their projects, including what they did, what they learned, and why doing the project was worthwhile for them. They also shared how they took an idea based on their interests and developed it into an investigation (Figure 3). In small groups, students identified five potential science fair topics they were interested in exploring. Next, they spent a week learning about those five topics, using handheld electronic devices to document links to our learning management system. After students identified a topic of interest, a teacher screened the topic idea for efficacy as a science fair project, and the students received a template for the project (Appendix 3), building on the foundational experience of the insect investigations.
This helped her define the problem and eventually became part of her introduction to the science fair project. The teacher showed students how to use EasyBib to write citations. We modeled how to develop a scientific research question using the following format: What effect does _____ (independent variable; what I changed) have on _____ (dependent variable; what depends on the independent variable)? In Project 1, all of the student's research helped identify independent variables (tannic acid solutions, water type, and motor oil solution) and dependent variables (percentage change in frond production) that could be studied. Identifying the variables led her to write a testable question ("What effect does tannic acid have on Lemna minor while under stress of motor oil?") and hypothesis: "If the duckweed is grown in the St. Louis River with extra tannic acid, duckweed growth will be positively affected when compared to duckweed grown in the St. Louis River control without tannic water." NOS: Science addresses questions about the natural and material world. Science fair projects require a testable question that leads to a scientific investigation or (in engineering topics) the definition of a problem, typically generated by something students have observed or wondered about. In our experience, students tend to choose projects that require easily accessible materials. Examples include distilled water and motor oil (Project 1); edible insects ordered online and Google Forms survey tools (Project 2); and Petri dishes, pipettes, and an incubator from the school science lab (Project 3). As in the NOS matrix performance expectations, these student questions were defined by the constraints of available materials and the middle school science background to answer questions about the natural and material world and were limited to explanations that rely on observation and empirical evidence (NGSS Lead States, 2013, appendix H). Methodology SEPs: Planning and carrying out an investigation. Once students had identified an area of interest and developed a research question and hypothesis, they planned and carried out an investigation "to produce data to serve as the basis for evidence that met the goals of an investigation" (NGSS Lead States, 2013, appendix F). Students developed the rationale detailing why their study was important as they determined independent, dependent, and control variables. They developed step-by-step instructions; a description of materials; a plan to collect data, with consideration of measurement units (grams/liters, etc.); and a plan to organize data, choosing the style of graph appropriate for their study ( Figure 5). NOS: Science investigations use a variety of methods. While completing their science fair projects, students used a variety of methods (NGSS Lead States, 2013, appendix F) to answer their testable questions, choosing discipline-specific methods and tools (NGSS Lead States, 2013, appendix H). These specific practices "are guided by a set of values to ensure accuracy of measurements, observations, and objectivity of findings." The student in Project 1 used the convention of making water dilutions with oil to measure how different concentrations of motor oil affect duckweed. The students who surveyed the effect of gender on willingness to eat edible insects (Project 2) used the social science practice of collecting participant consent forms. 
The students working with live organisms to test the effects of artificial sweeteners on bacteria (Project 3) adhered to safety guidelines and used statistical tests like analysis of variance (ANOVA) to evaluate their results (a sketch of such a test follows this section).

SEPs: Developing and using models. Drawing from our previous work modeling the growth and development of monarchs through graphs, students developed a model that represented their research findings. For example, in Project 3, students developed a model (Figure 6) to describe the effect of artificial sweeteners on bacteria by hypothesizing that the higher the level of artificial sweetener, the less the probiotic bacteria would grow in the agar plate. Students communicated this model by creating a graph to show the number of bacteria that grew in water, different dilutions of artificial sweetener, different dilutions of sugar, and probiotics, a graph that effectively describes a phenomenon (NGSS Lead States, 2013, appendix F). The graph displayed the evidence of the effect of a variable, the presence of artificial sweeteners, on the system of probiotic bacterial growth in a Petri dish.

NOS: Scientific models, laws, mechanisms, and theories explain natural phenomena. As noted above, the students in Project 3 developed a model (Figure 6) that described the phenomenon of the effect of artificial sweeteners on bacteria. They hypothesized that there would be less probiotic bacteria with the increase of an artificial sweetener. Their hypothesis was an example of how an "idea may contribute new knowledge for the evaluation of a scientific theory" (NGSS Lead States, 2013, appendix H).

Findings

SEPs: Using mathematics, information and computer technology, and computational thinking. After students had completed their data collection, they used mathematics, computational thinking, and computer technology to organize the data into tables or graphs, building on their prior work with the monarch line graphs. Students chose the appropriate graph to display their data. For some projects, students used inferential statistics, including ANOVA and the relevance of a P-value. The mathematical representations were used "to support scientific conclusions and design solutions" (NGSS, 2013, appendix F). For example, Project 2 students developed bar graphs to illustrate the percentages of each gender willing to try eating insects (Figure 7), with error bars representing the variability of the data.

SEPs: Analyzing and interpreting data. Students created a data collection table and entered the data as they were collecting them. When they were finished collecting data, students converted the table into a graph that included a title and labeled axes. Building from their prior knowledge and research, they used the BSCS tools to interpret the data by observing patterns, thus supporting the NGSS standard that calls for analyzing and interpreting data to provide evidence for phenomena (NGSS Lead States, 2013, appendix F). In Project 2 (Figure 7), students placed gender on the x-axis and the percentage of people willing to try eating an insect on the y-axis. The template prompts students to think about what phenomena to measure and why, along with describing what they observed. As noted above, students used the BSCS sentence frames to analyze and interpret their data. For example, in Project 3, students displayed their data in figures (Figures 6 and 8).
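As a brief aside before turning to how the students interpreted those figures, the kind of one-way ANOVA mentioned above can be illustrated with a short sketch. The colony counts below are invented numbers chosen only to show how the F statistic and P-value would be computed; they are not data from Project 3.

```python
# Minimal sketch (illustrative only): a one-way ANOVA like the one mentioned for
# Project 3, comparing bacterial colony counts across three treatments.
# The counts are made-up numbers, not the students' data.
from scipy.stats import f_oneway

water     = [52, 48, 55, 50]   # plates treated with plain water
sugar     = [60, 58, 63, 61]   # plates treated with a sugar dilution
sweetener = [30, 28, 35, 31]   # plates treated with an artificial-sweetener dilution

f_stat, p_value = f_oneway(water, sugar, sweetener)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests the group means differ
```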
In the results slide (Figure 9), they described the purpose of their study and explained the relationship between dependent and independent variables in their graph, again addressing the standard to "analyze and interpret data to provide evidence for phenomena" (NGSS Lead States, 2013, appendix F). NOS: Scientific knowledge is based on empirical evidence. In Project 2, students obtained data by receiving formal participant consent before participating in the survey. Students presented their data in a bar graph to depict their findings in Figure 6 above. Bar graphs were used to follow the "common rules" of obtaining and displaying tabulated or counted data in bar graphs (NGSS Lead States, 2013, appendix H). Conclusions SEPs: Constructing explanations. In our teaching, students used a modified version of the explanation framework developed by McNeill & Krajcik (2012), which assisted students in constructing explanations from their data analysis using claims, evidence, and reasoning arguments. The evidence referred to data that supported their claim or conclusion to a problem. Reasoning referred to scientific principles to describe how the evidence supports the claim. As students constructed explanations, they determined the hypothesis that was supported by the data/evidence in their investigation. Next, students substantiated the claim with quantitative reasoning, as illustrated in the Project 3 discussion section (words in square brackets inserted by authors): The original hypothesis was if the normal flora (probiotics) of the small intestine are treated with saccharin (an artificial sweetener), then the probiotics' ability to metabolize glucose will be affected. Their scientific explanation was based on evidence obtained from their experiments that the metabolism of probiotic bacteria would be affected by the presence of saccharin. The explanation aligned with the NOS assumption that "theories and laws that describe the natural world operate today as they did in the past and will continue to do so in the future" (NGSS Lead States, 2013, appendices F and H). SEPs: Engaging in argument from evidence. To engage in argument from evidence, students contextualized their results within the scope of their investigations by referring back to the background research literature they collected at the beginning of the study. In their project conclusion, they presented "a written argument supported by empirical evidence and scientific reasoning to support or refute an explanation or a model for a phenomenon or a solution to a problem" (NGSS Lead States, 2013, appendix F), as illustrated in the conclusion for Project 1 (Figure 10, with author-inserted brackets identifying claim, evidence, and reasoning; McNeill & Krajcik, 2012). Each project's evidence-based "written argument . . . supports an explanation or a model [e.g., oil on the surface of water will impact the growth of duckweed] for a phenomenon" (NGSS Lead States, 2013, appendix F). For example, the underlined text in Figure 10 incorporated other key elements of the NOS (e.g., "Scientific knowledge is based on empirical evidence") as students built their outcomes (oil negatively affected the duckweed) on the basis of logical and conceptual connections between evidence and explanations (e.g., the layer of oil cuts off oxygen exchange and diminishes sunlight, reducing the plant's ability to photosynthesize; NGSS Lead States, 2013, appendix H). SEPs: Obtaining, evaluating, and communicating information. 
Finally, the students wrote an abstract ( Figure 11) summarizing their project by using a template (Appendix 7). In the abstract they described how they had obtained, evaluated, and communicated information about their scientific study. When they went back to their background research, they "synthesize[d] information from multiple appropriate sources" (NGSS Lead States, 2013, appendix F). NOS: Science is a way of knowing. Throughout the process of developing their science fair projects, our students built an understanding that "science is both a body of knowledge and the processes used to add to that body of knowledge." Their initial research, presented in their introduction, allowed them to understand science knowledge about their topic and how their experiment related to that body of knowledge. The discussion and conclusion sections of their projects brought their research full circle, with students reflecting on who would benefit from the results and what they would do in future iterations of the study. Those ruminations allowed students to think critically about the elements of their investigation. Conclusion Independent science fair projects provide an opportunity for students to do science "like scientists" and to explain the results of their projects just as scientists explain their own work. In other words, these projects connect students to the practices of science and the nature of science, and students develop an awareness of how science helps to solve problems and build knowledge. Recent research has documented the ways in which science fair projects promote interest in STEM educational endeavors and careers, thus constituting valuable experiences for students (Koomen, Hedenstrom & Moran, in review
Performance of wool type angora rabbits under temperate conditions of Kashmir (J&K), India An attempt has been made to determine the production and quality performance of wool type Angora rabbits and to identify the most suitable breed under temperate conditions of Kashmir. A total of 202 records of French Angora and German Angora rabbit breeds maintained for 3 years (2009-2011) were evaluated to estimate the performance of quality and production traits in relation to genetic and non-genetic factors. For French Angora rabbits, the overall body weight gain (adult weight), annual wool yield (AWY), staple length (SL), medullation percentage (MP) and fiber diameter (FD) were found to be 2.506 ± 0.0432 kg, 303.575 ± 0.316 g, 5.161 ± 0.0183 cm, 2.228 ± 0.0217 % and 12.289 ± 0.0178 μm, respectively. In the case of German Angora rabbits, values of 2.506 ± 0.033 kg, 605.96 ± 0.474 g, 6.219 ± 0.0279 cm, 2.513 ± 0.0348 % and 12.347 ± 0.0265 μm were observed for the respective traits. Breed was found to have a significant effect (P<0.01) on birth weight, weaning weight, annual weight, annual wool yield, staple length and medullation percentage, and a non-significant effect on fiber diameter. Sex had a non-significant effect on all the traits under study. Based on the present study, it can be concluded that the German Angora breed of rabbit is the most suitable for Angora wool production and quality under the temperate climatic conditions of the Kashmir region. INTRODUCTION Angora rabbit wool is soft and silky and is eight times warmer than sheep wool (Pokharna et al., 2004). Rabbit fur is widely used throughout the world. Angora wool is the third largest animal fibre produced, after sheep wool and mohair, with an annual world production of about 8500 tons. Presently, China dominates the international Angora wool market and contributes about 90% of the total world production of Angora wool (Schlink and Liu, 2013). India is a marginal producer of Angora wool, with an estimated annual production of about 30-40 tons. Angora wool production is the most important economic trait among Angora rabbits and appears to be affected by a number of genetic as well as non-genetic factors (Thebault et al., 1992; Katoch et al., 1999; Allain et al., 2004). Heritability estimates for different wool traits in Angora rabbits are reported to be low to moderate (Allain et al., 2004). Further, wool traits could be improved by direct and indirect selection methods in Angora rabbits (Allain et al., 2004; Rafat et al., 2007). Initial wool clips have been found to be important in early selection due to their high genetic correlation with later clips (Rafat et al., 2009). A significant genetic correlation has been reported between wool yield and the corresponding body weight in Angora rabbits (Garcia and Magofke, 2010; Singh et al., 2006). Likewise, correlated responses in body weight after selection for fleece yield in Angora rabbits have been observed experimentally (Qinyu, 2012). The present investigation was carried out to evaluate the performance of wool type Angora rabbits under temperate conditions of Kashmir (J&K), India.
MATERIALS AND METHODS The data were recorded over a period of three years (2009 to 2011) from the different breeds of rabbits maintained at the Government Angora Rabbit Farm, Wusan-Pattan, District Baramulla, J&K, India. The traits studied were body weight gain, annual wool yield, staple length, medullation percentage and fiber diameter. Temperature (maximum and minimum) and relative humidity were also recorded on a monthly basis during the entire period of study (Table 1). Means, standard errors and coefficients of variation (CV) were computed. The effects of genetic and non-genetic factors such as breed and sex on the growth parameters were analyzed by least-squares analysis using the technique developed by Harvey (1990). The following model was adopted in the present investigation, with the assumption that the different components fitted into the model were linear, independent and additive: Y_ijk = R_i + S_j + e_ijk, where Y_ijk = k-th record of an individual of the i-th ram and j-th sex, R_i = random effect of the i-th ram, S_j = fixed effect of the j-th sex, and e_ijk = error associated with each observation, assumed to be normally and independently distributed with mean zero and variance σ²_e. RESULTS AND DISCUSSION Growth performance: The least-squares means for birth weight (BT), weaning weight (WT) and annual weight gain (AwT), along with their standard errors, are presented in Table 2. The average birth weight, weaning weight and adult weight were found to be 0.387 ± 0.00698 kg (127), 0.964 ± 0.00798 kg (127) and 2.506 ± 0.0333 kg (127), respectively, for French Angora, whereas the values of the respective traits were found to be 0.39 ± 0.00893 kg (75), 0.961 ± 0.0102 kg (75) and 2.519 ± 0.0432 kg (75) for German Angora. Sivakumar et al. (2013) observed a 0.5 kg birth weight and a lower estimate of 0.6-0.7 kg weaning weight in the Soviet Chinchilla breed of rabbit. Lower estimates of 0.6-0.7 kg weaning weight and 1.8-1.9 kg adult weight were observed by Ghosh et al. (2008) in New Zealand White and Soviet Chinchilla rabbits. On the contrary, lower estimates of birth weight ranging from 0.3-0.4 kg and higher estimates of weaning weight from 2.1-2.2 kg were observed by Olonofeso et al. (2012) in three breeds of rabbit. Lower estimates of adult weight ranging from 2.2-2.5 kg were observed by Khalil et al. (2013) in Baladi Red and New Zealand White breeds of rabbit. Similar results of weaning weights from 0.7-1.3 kg were observed by Adelodun (2015) in four breeds of rabbit. Breed was found to have a significant effect (P<0.01) on birth weight, weaning weight and annual weight, but the effect of sex on these traits was found to be non-significant. Similar findings of a significant effect of breed on live litter body weight of rabbits in Minna, Niger State, Nigeria were observed by Egena et al. (2012). A significant effect of genotype and a non-significant effect of sex on individual kit weight in rabbit breeds and their crosses were reported by Chineke (2005). On the contrary, a non-significant effect of breed on individual weaning weight in local rabbits of a subtropical climate was reported by Ghosh et al. (2008).
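As an aside on the least-squares model given under Materials and Methods, a model of this form (random ram effect, fixed sex effect) can also be fitted with general-purpose mixed-model software. The sketch below is only an illustration with invented column names and values; it is not the Harvey (1990) program actually used in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical records: one row per rabbit, with its sire (ram), sex and adult body weight (kg).
data = pd.DataFrame({
    "ram":    ["R1", "R1", "R1", "R2", "R2", "R2", "R3", "R3", "R3", "R4", "R4", "R4"],
    "sex":    ["M",  "F",  "M",  "F",  "M",  "F",  "M",  "F",  "M",  "F",  "M",  "F"],
    "weight": [2.51, 2.48, 2.55, 2.50, 2.47, 2.52, 2.49, 2.53, 2.46, 2.54, 2.50, 2.52],
})

# Y_ijk = R_i + S_j + e_ijk: random effect of the ram, fixed effect of sex, residual error.
model = smf.mixedlm("weight ~ sex", data, groups=data["ram"])
result = model.fit()
print(result.summary())  # fixed-effect estimate for sex and the ram variance component
```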
Wool production performance: The least-squares means for annual wool yield (AWY), staple length (SL), medullation percentage (MP) and fiber diameter (FD), along with their standard errors, are presented in Table 3. No literature was found estimating fiber diameter, staple length and medullation percentage in German and French Angora rabbits. Breed was found to have a significant effect on annual wool yield, staple length and medullation percentage, but a non-significant effect on fiber diameter. The effect of sex was found to be non-significant on annual wool yield, staple length, medullation percentage and fiber diameter. Similar results of a significant effect of breed on wool production of different lines and strains of Angora rabbit were reported by Neupane et al. (2010). On the contrary, a significant effect of sex on wool yield in Angora rabbits was reported by Sood et al. (2007). Conclusion The most common rabbit breeds for Angora wool production in India, as well as in temperate Kashmir, are French Angora and German Angora, and their production performance under different climatic conditions has to be ascertained in order to screen out the breed best suited to the region for Angora wool production and quality. The German Angora breed of rabbit was found to be best suited to the temperate climatic conditions of the Kashmir valley of J&K. The overall body weight gain in adult rabbits was 2.506 ± 0.033 kg and the annual wool yield was 605.96 ± 0.474 g, with a staple length of 6.219 ± 0.0279 cm, a medullation percentage of 2.513 ± 0.0348 % and a fiber diameter of 12.347 ± 0.0265 μm, which is better than the French Angora breed of rabbit. Based on the present study it can be concluded that the German Angora breed will be suitable for profitable wool production, and the findings will also help in further technology development and its transfer to the end users (farmers) in the region for successful rearing and maximizing income.
Table 1. Average temperature and humidity for the period 2009-2011.
Table 2. Least-squares means ± SEM for growth parameters of wool type rabbit breeds (sex-wise comparison).
Table 3. Least-squares means ± SEM for production traits of wool type rabbit breeds (sex-wise comparison).
Emergence of chaos in a viscous solution of rods It is shown that the addition of small amounts of microscopic rods in a viscous fluid at low Reynolds number causes a significant increase of the flow resistance. Numerical simulations of the dynamics of the solution reveal that this phenomenon is associated with a transition from laminar to chaotic flow. Polymer stresses give rise to flow instabilities which, in turn, perturb the alignment of the rods. This coupled dynamics results in the activation of a wide range of scales, which enhances the mixing efficiency of viscous flows. In a laminar flow the dispersion of substances occurs by molecular diffusion, which operates on extremely long time scales. Various strategies have therefore been developed, particularly in microfluidic applications, to accelerate mixing and dispersion at low fluid inertia [1][2][3]. The available strategies are commonly divided into two classes, passive or active, according to whether the desired effect is obtained through the specific geometry of the flow or through an oscillatory forcing within the fluid [2]. An alternative method for improving the mixing properties of low-Reynolds-number flows was proposed by Groisman and Steinberg [4] and consists in adding elastic polymers to the fluid. If the inertia of the fluid is low but the elasticity of the polymers is large enough, elastic stresses give rise to instabilities that ultimately generate a chaotic regime known as "elastic turbulence" [5]. In this regime the velocity field, although remaining smooth in space, becomes chaotic and develops a power-law energy spectrum, which enhances the mixing properties of the flow. While the use of elastic turbulence in microfluidics is now well established [6][7][8][9][10], new potential applications have recently emerged, namely in oil extraction from porous rocks [11]. In this Letter we propose a novel mechanism for generating chaotic flows at low Reynolds numbers that does not rely on elasticity. It is based on the addition of rigid rodlike polymers. At high Reynolds numbers, elastic- and rigid-polymer solutions exhibit remarkably similar macroscopic behavior (e.g., Refs. [12][13][14][15]). In both cases the turbulent drag is considerably reduced compared to that of the solvent alone. In particular, when either type of polymer is added in sufficiently high concentrations to a turbulent channel flow of a Newtonian fluid, the velocity profile continues to depend logarithmically on the distance from the walls of the channel, but the mean velocity increases to a value known as the maximum-drag-reduction asymptote. Here we study whether or not the similarity between elastic- and rigid-polymer solutions carries over to the low-Reynolds-number regime, i.e., whether or not the addition of rigid polymers gives rise to a regime similar to elastic turbulence. We consider a dilute solution of inertialess rodlike polymers. The polymer phase is described by the symmetric unit-trace tensor field R(x, t) = ⟨n_i n_j⟩, where n is the orientation of an individual polymer and the average is taken over the polymers contained in a volume element at position x at time t. The coupled evolution of R(x, t) and the incompressible velocity field u(x, t) is governed by the model equations of Refs. [16, 17], denoted Eqs. (1) (summation over repeated indices is implied), in which ∂_k = ∂/∂x_k, p(x, t) is the pressure, ν is the kinematic viscosity of the fluid, and f(x, t) is the body force that sustains the flow. The polymer stress tensor takes the form σ_ij = 6νη_p R_ij (∂_l u_k) R_kl [16].
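To make the closure concrete, the short sketch below evaluates σ_ij = 6νη_p R_ij (∂_l u_k) R_kl for a prescribed orientation tensor and velocity gradient. It is a minimal numerical illustration of the formula quoted above, with arbitrary example values, and is not code from the study.

```python
import numpy as np

def polymer_stress(R, grad_u, nu=1.0, eta_p=1.0):
    """Rod-polymer stress sigma_ij = 6*nu*eta_p * R_ij * (d_l u_k) R_kl (quadratic closure).

    R      : 2x2 symmetric, unit-trace orientation tensor <n_i n_j>
    grad_u : 2x2 velocity-gradient tensor with grad_u[k, l] = du_k/dx_l
    """
    scalar = np.einsum("kl,kl->", grad_u, R)   # (d_l u_k) R_kl summed over k and l
    return 6.0 * nu * eta_p * scalar * R

shear = np.array([[0.0, 1.0], [0.0, 0.0]])     # plane shear with du_x/dy = 1
aligned = np.array([[1.0, 0.0], [0.0, 0.0]])   # rods aligned with the flow direction
tilted = np.array([[0.5, 0.5], [0.5, 0.5]])    # rods at 45 degrees to the flow

print(polymer_stress(aligned, shear))  # zero: with this closure, aligned rods give no feedback
print(polymer_stress(tilted, shear))   # nonzero stress from misaligned rods
```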
The intensity of the polymer feedback on the flow is determined by the polymer concentration, which is proportional to η_p. The expression for the polymer stress tensor quoted above is based on a quadratic approximation proposed by Doi and Edwards [16]. More sophisticated closures have been employed in the literature (see, e.g., Ref. [18] and references therein); here we focus on the simplest model of a rodlike-polymer solution that may display instabilities at low Reynolds number. In addition, we disregard Brownian rotations, assuming that the orientation of the polymers is mainly determined by the velocity gradients. For large values of the Reynolds number, the system described by Eqs. (1) has been shown to reproduce the main features of drag reduction in turbulent solutions of rodlike polymers [14,17,19,20]. Here we study the same system at small values of the Reynolds number. Equations (1) are solved over a two-dimensional 2π-periodic box and f is taken to be the Kolmogorov force f(x) = (0, F sin(x/L)). For η_p = 0 the flow has the laminar solution u = (0, U_0 sin(x/L)) with U_0 = F L²/ν, which becomes unstable when the Reynolds number Re = U_0 L/ν exceeds the critical value Re_c = √2 and eventually turbulent when Re is increased further (e.g., Ref. [21]). Even in the turbulent regime, the mean flow has the sinusoidal form ⟨u⟩ = (0, U sin(x/L)), where ⟨·⟩ denotes an average over the variable y and over time. The Kolmogorov force has been previously used in the context of non-Newtonian fluid mechanics to study turbulent drag reduction [22], the formation of low-Re instabilities in viscoelastic [23,24] and rheopectic fluids [25], and elastic turbulence [26,27]. Numerical simulations of Eqs. (1) are performed by using a dealiased pseudospectral method with 1024² grid points. The time integration uses a fourth-order Runge-Kutta scheme with implicit integration of the linear dissipative terms. The parameters of the simulations are set to keep Re = 1 fixed below Re_c in the absence of polymer feedback (η_p = 0). The viscosity is set to ν = 1, the length scale of the forcing is either L = 1/4 or L = 1/8, and its amplitude is F = ν²/L³. The feedback coefficient is varied from η_p = 1 to η_p = 5. The stiffness of the equations increases with η_p, limiting the accessible range of parameters. Initially the flow is a weak perturbation of the η_p = 0 stable solution, while the components of R are randomly distributed. When the feedback of the polymers is absent (η_p = 0) the initial perturbation decays and the polymers align with the direction of the mean shear flow. Conversely, at large η_p the flow is strongly modified by the presence of the rods. The streamlines wiggle over time and thin filaments appear in the vorticity field ω = |∇ × u| (see Fig. 1, left panel). These filaments correspond to appreciable localized perturbations of the tensor R away from the laminar fixed point (Fig. 1, right panel) and are due to the rods being unaligned with the shear direction. Notably, we find that the mean flow, obtained by means of long time averages, maintains the sinusoidal form ⟨u⟩ = (0, U sin(x/L)) also in the presence of strong polymer feedback (Fig. 2). The time series of the kinetic energy in Fig. 3 show that, in the case of a low concentration (η_p = 1), the system repetitively attempts but fails to escape the laminar regime in a quasiperiodic manner. The amount of kinetic energy is initially close to that in the laminar regime.
After some time, the solution dissipates a small fraction of its kinetic energy but quickly relaxes back towards the laminar regime, until it restarts this cyclic pattern. In contrast, for higher concentrations the kinetic energy is significantly reduced and, after an initial transient, fluctuates around a constant value. We have observed that different initial conditions for R may give rise to longer transients that involve a quasiperiodic sequence of activations and relaxations comparable to that observed for low values of η_p. Nevertheless, the statistically steady state achieved at later times is independent of the particular choice of initial conditions. The reduction of the kinetic energy of the flow at fixed intensity of the external force reveals that the presence of the rods causes an increase in the flow resistance. This effect can be quantified by the ratio of the actual mean power P = F U/2 provided by the external force to the power P_lam that would be required to sustain a laminar mean flow with the same amplitude U in the absence of polymers. In the latter case, the force required would be F_0 = νU/L², and the corresponding mean power would be P_lam = F_0 U/2 = νU²/2L². Figure 4 shows the ratio as a function of η_p and indicates that more power is required to sustain the same mean flow in solutions with higher concentrations. The analysis of the momentum budget confirms that the increased resistance is due to an increase in the amount of stress due to the polymers. In the steady state the momentum budget, Eq. (3), is obtained by averaging Eq. (1a) over y and time; here Π_r = ⟨u_x u_y⟩, Π_ν = ν ∂_x⟨u_y⟩, and Π_p = ⟨σ_xy⟩ are the Reynolds, viscous, and polymer stresses, respectively. Remarkably, we find that these profiles remain sinusoidal as in the η_p = 0 case, namely Π_r = −S cos(x/L), Π_ν = ν U L⁻¹ cos(x/L), and Π_p = Σ cos(x/L). Equation (3) then yields a relation between the amplitudes of the different contributions to the stress. These contributions are reported in Table I and shown in the inset of Fig. 4. The results confirm that the polymer contribution to the total stress increases with η_p, whereas that of the viscous stress decreases. The contribution of the Reynolds stress is extremely small (less than 10⁻²), which demonstrates that inertial effects remain negligible as η_p is increased. Figures 3 and 4 also suggest the presence of a threshold concentration for the appearance of fluctuations. Further insight into the dynamics of the solution is gained by examining the energy balance in wave-number space. For sufficiently large values of η_p, the kinetic-energy spectrum behaves as a power law E(k) ∼ k^(−α), where the exponent α depends both on the concentration and on the scale of the force and varies between 4 and 5 (Fig. 5). A wide range of scales is therefore activated, and this results in an enhancement of the mixing properties of the flow. Furthermore, the energy transfer due to the fluid inertia is negligible, and the dynamics is characterized by a scale-by-scale balance between the polymer energy transfer and viscous dissipation (inset of Fig. 5). The regime described here has properties comparable to those of elastic turbulence in viscoelastic fluids, namely the flow resistance is increased with the addition of rods and the kinetic-energy spectrum displays a power law steeper than k⁻³.
In addition the Reynolds stress and the energy transfer due to the fluid inertia are negligible; hence the emergence of chaos is entirely attributable to polymer stresses. Our study establishes an analogy between the behavior of viscoelastic fluids and that of solutions of rodlike polymers, similar to what is observed at high Reynolds number. These results therefore demonstrate that elasticity is not essential to generate a chaotic behavior at low Reynolds numbers and indicate an alternative mechanism to enhance mixing in microfluidic flows. This mechanism presumably has the advantage of being less affected by the degradation observed in elastic turbulence [28], since there are experimental evidences that the degradation due to large strains is weaker for rodlike polymers than for elastic polymers [29]. Experimental studies aimed at investigating the phenomenon proposed in this Letter would be very interesting. Open questions concern the dependence of the mixing properties of rigid-polymer solutions on the type of force and on the boundary conditions. Additional insight into the dynamics of these polymeric fluids would also come from a stability analysis of system (1), in the spirit of the approach taken for the study of low-Reynolds-number instabilities in viscoelastic [23,24] and rheopectic [25] fluids. Finally, the orientation and rotation statistics of microscopic rods in turbulent flows has recently attracted a lot of attention [30][31][32][33][34]; it would be interesting to investigate the dynamics of individual rods in the flow regime studied here. The authors would like to acknowledge the support of the EU COST Action MP 1305 'Flowing Matter.' The work of E.L.C.M.P. was supported by EACEA through the Erasmus Mundus Mobility with Asia program.
SCHEDULING COPPER REFINING AND CASTING OPERATIONS BY MEANS OF HEURISTICS FOR THE FLEXIBLE FLOW SHOP PROBLEM Management of the operations in a copper smelter is fundamental for optimizing the use of the plant’s installed capacity. In the refining and casting stage, the operations are particularly complex due to the metallurgical characteristics of the process. This paper tackles the problem of automatic scheduling of operations in the refining and casting stage of a copper concentrate smelter. The problem is transformed into a flexible flow shop problem and to solve it, an iterative method is proposed that operates in two stages: in the first stage, a sequence of jobs is constructed that configures the lots, and in the second, the constructed solution is improved by means of simulated annealing. Fifteen test problems are used to show that the proposed algorithm improves the makespan by an average of 9.42% and the mean flow time by 12.19% with respect to an existing constructive heuristic. INTRODUCTION In a copper concentrate smelter, a flow of wet sulfide ore is received, and through a metallurgical process, copper is produced.The process involves various typical operations, such as the storage and preparation of the load, melting, conversion, refining and casting, removing slag, and absorption of gases.In the first operation, the wet ore is dried in a rotary furnace, and in smelting, the dry ore is fed into reactors where it is subjected to high temperatures to purify it.As a result, blister copper is obtained that contains 62 to 75% copper, which is the raw material for the conversion operation in which a new purification process gives rise to a product that contains about 99% copper (Pradenas, Zúñiga & Parada, 2006).In the refining and casting stage, the product of the conversion is again purified in a rotary furnace and is then poured into casting wheels, generating 99.7% pure copper, which represents the final product. The smelted copper concentrate from the conversion stage is transported and fed to the refining furnaces by means of large ladles moved by a bridge crane.Figure 1 is an operations diagram of a refining and casting plant in which the furnaces are next to each casting wheel, providing independent loads.The refining operation is conducted in a set of reactors of different capacity.To load a given refining reactor, it must be in operation, it must not be processing another load, and a certain amount of time must have passed since it was unloaded.The processing time in this stage depends on the content of the material in the furnace and on its chemical characteristics, and therefore, it is known beforehand. Copper anodes are cast in fixed molds inserted in rotating casting wheels.The molds in a wheel are filled from a pair of refining furnaces next to the casting wheel, but at the time of filling, only one furnace can be operated.The casting wheel turns periodically and sets up other molds to be filled.As the wheel turns, the filled molds are cooled with water until they reach the sector where they are unloaded automatically. 
Ladles with copper Refining furnace Casting Wheel To get the maximum performance from the equipment involved in the refining and casting process, it is necessary for the operations to be conducted in a coordinated fashion during a planned period, that is, they must be synchronized.These operations include operating the bridge crane, loading the furnaces, running refining cycles, loading the molds, and defining the furnace-mold relation.The complexity of this coordination causes a general tendency to deal with the activity manually, generating, as a result, a series of problems that are translated into inefficiency in the operations.Such inefficiencies include the non-availability of a bridge crane when required, material from the conversion stage that does not find a furnace for loading, and loads in furnaces that do not have an available casting wheel.It is therefore necessary to generate a schedule for loading the refining furnaces in a given planning period (typically one day) that specifies the loading of each ladle with molten material from the refining stage.Also, the order and time at which each refining furnace must feed its corresponding casting wheel must be determined.This gives rise to the problem of the automatic scheduling of operations in the refining and casting stage of a copper concentrate smelter that we call the Refining and casting Scheduling Problem (RCSP).Pradenas et al. (2005) developed a constructive heuristic to generate practical solutions for a copper concentrate smelter.In their work, they mention that the problem can be considered to belong to the family of scheduling problems, but the topic was not explored further.The problem's characteristics place it within the scheduling family of problems, which in general are still a great intellectual challenge because of their NP-hardness in most cases (Pinedo, 2008) we propose a mathematical model and an algorithm to solve it.We develop a computational experiment considering real operating situations. Section 2 presents a mathematical model of the refining process with its particular constraints.In Section 3, the problem is characterized, a hierarchical approach is established to solve the problem, and an initial solution is improved by simulated annealing in the search for better feasible solutions.Section 4 presents the results and parameterization, and finally, Section 5 gives the main conclusions. MATHEMATICAL REPRESENTATION OF A COPPER REFINING AND CASTING PLANT According to the characteristics of this production system, some elements are seen that allow the problem to be associated with a flexible flow shop type configuration (Pinedo, 2008): • A job that enters the system has the possibility of continuing at any of the available work centers, i.e., for each stage of the process a job can be processed on any of the machines available at the job centers, characterizing a flexible type configuration (Bagchi, Gupta & Sriskandarajah, 2006). • When a job's process is finished in the refining furnace, it takes a unidirectional flow toward the adjacent casting wheel. • The machines available at each stage of the process are identical, as they have the same processing speed. The mathematical model for the problem is constructed with binary integer variables and continuous variables following the strategy that has typically been used to represent scheduling problems (Blazewicz, Dror & Weglarz, 1991).The model is presented in equations (1-10). 
Parameters The model's parameters are obtained from the unloading of the reactors of the previous stage as well as from the operational characteristics of the refining and casting plant. There are n jobs to be processed at q job centers. Parameter p_i corresponds to the refining time of job i, which is a group of ladles with blister copper (in a range of 4 to 6 ladles) that are loaded into a refining furnace. The refining time depends mainly on the concentrate's composition. The arrival time r_i is obtained from estimates of when the loads will come from the previous conversion stage and considers the time required to transfer the ladles that make up job i to the refining furnace. The casting time c_i depends mainly on the amount of loaded material. Parameter S represents the time required to prepare the wheel before starting a new casting, and A represents the cleaning time of a refining furnace. In summary: S = preparation time of the casting wheel, A = preparation time of the furnace after a casting, ENC = number of allowed linkages. Decision variables The decision variables used in the model belong to three groups: some binary, some continuous, and some integer. These variables are related to each other and were selected for convenient formulation of the operating constraints. The binary variable x_ij^k takes a value of 1 when job i is the j-th job processed on machine k. In this way each job assigned to a machine has a route and a position j that determines the sequence in which it will be processed. Variable t_j^k denotes the time at which the processing of the j-th job of the sequence corresponding to machine k begins. Variable m_k corresponds to the number of jobs assigned to machine k. In summary: x_ij^k = 1 if job i is the j-th job processed on machine k, and 0 in any other case; t_j^k = starting time of the j-th job on machine k; m_k = number of jobs assigned to machine k. Auxiliary variables A binary auxiliary variable takes the value 1 when there is a linkage of the jobs processed on k and k + 1, and 0 in any other case. Linkage occurs when a wheel finishes processing a job at the same instant at which one of the associated furnaces finishes the refining process; then, it is possible to load the wheel again without any prior preparation. The number of linkages per wheel in the real problem is limited because the workers are exposed to elevated temperatures, causing them to become physically exhausted. For that reason, only one linkage per wheel is generally allowed in a daily schedule.
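The displayed model (1)-(10) is not reproduced in this extract, but the variable structure just described can be illustrated with a general-purpose MILP modeling library. The fragment below is only a structural sketch under assumed toy dimensions: it declares x_ij^k, t_j^k and a makespan variable together with one representative assignment constraint, and it omits the timing, linkage and sequencing constraints of the actual formulation.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

# Toy dimensions (hypothetical): 4 jobs, 2 furnace-wheel machines, 4 sequence positions.
jobs, machines, positions = range(4), range(2), range(4)

prob = LpProblem("rcsp_structural_sketch", LpMinimize)
x = LpVariable.dicts("x", (machines, jobs, positions), cat=LpBinary)  # x[k][i][j]
t = LpVariable.dicts("t", (machines, positions), lowBound=0)          # t[k][j], start times
cmax = LpVariable("makespan", lowBound=0)

prob += cmax  # objective: minimize the makespan

# Representative assignment constraint: every job occupies exactly one position on one machine.
for i in jobs:
    prob += lpSum(x[k][i][j] for k in machines for j in positions) == 1

# The remaining constraints (arrival times, non-interruption, wheel setups, linkages, and the
# coupling between t, x and the makespan) are described verbally in the text and omitted here.
print(len(prob.constraints), "assignment constraints declared")
```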
Objective function and constraints The function to be minimized, (1), corresponds to the makespan of the process, i.e., the time required to complete all the jobs in the system. Constraint (2) establishes that every machine has a set of jobs m_k and that their sum is equal to the total number of jobs (n) to be processed in the system. The group of constraints (3) states that a job does not start its process if it has not arrived at the refining stage (r_i), so that the time at which job i in the j-th position on machine k starts being processed is greater than or equal to its arrival time. Constraint (4) forces all the jobs to be processed on some machine k and in a given position j. Constraint (5) forces at least one of the jobs to be processed in the j-th position on machine k. Constraint (6) restricts the sequence of jobs on one machine, preventing them from interrupting each other. Constraint (7) allows a linkage to be produced in a wheel when a particular condition is fulfilled. If that condition is not fulfilled, constraint (8) allows a new process to be started on the machine after the processing of the previous job has been finished and the corresponding casting wheel has been cleaned. Constraint (9) limits the number of linkages that can be produced in the scheduling of a wheel. Finally, constraint (10) gives the general definition of the model's variables. The general precedence constraints are enforced by the entry constraints of the jobs to the furnaces and by the double use of the wheel by contiguous furnaces. Under normal operating conditions, a typical refining and casting plant has 4 conversion reactors, 6 refining furnaces, and three casting wheels that maintain 20 hours of effective operation daily. With these facilities, a plant processes approximately 72 blister copper ladles consolidated into 12 jobs, so the possible routing combinations of the jobs are on the order of 6¹² and the operating constraints number 948. Representation of the problem through a graph Typically, a flow shop problem is represented with a directed graph (Balas, 1969; Nowicki & Smutnicki, 1998). However, in this case, two particularities of the problem must be taken into account: the concept of a batch or virtual batch and the job processing order. Starting from the unloading of the conversion reactors, the copper ladles are grouped (4 to 6 ladles) into jobs that enter the refining and casting stages, associated with their corresponding unloading at different arrival times. Thus, a job constitutes a virtual processing lot, i.e., the jobs are not necessarily available and stored physically one after another in a space or buffer of the plant; rather, the purpose of their grouping is to allow the definition of virtual job sub-batches before each processing center (routing and sequencing), allowing the scheduling problem to be decomposed at the level of each center, as represented schematically in Figure 2. SOLUTION PROCEDURES Given a lot of N jobs to be processed in the refining stage, a batch or sublot is defined as a subset of jobs assigned to a center; sublots are exchangeable among the different centers because the machines are identical, so a batch is not associated with a specific center. Each center l has a set of jobs N_l of size n_l. The processing order of the jobs of each batch N_l can be expressed as a permutation π_l = {π_l(1), π_l(2), ..., π_l(n_l)}, where π_l(p) denotes the element of sublot l that will be processed in position p.
Figure 3 presents a schematic diagram of the methodology used for the refining and casting plant. The makespan of the process corresponds to the maximum time for completing a job that is in position p of sublot l. Let us consider that each job has an associated furnace-wheel processing time in the job center, so p_li denotes the processing time at center l of job i. Therefore, a recursive function can be established for the completion time C_{l,π_l(p)} of a job in a sublot: C_{l,π_l(p)} = max(C_{l,π_l(p−1)}, a_{π_l(p)}) + p_{l,π_l(p)} (11), where p is in the discrete set {1, 2, 3, ..., n_l} and C_{l,π_l(0)} = 0. Here a_{π_l(p)} is the elapsed time between the output of the conversion stage and the arrival at the refining and casting stage. The processing time p_{l,π_l(p)} corresponds to the refining time (r) plus the casting time (m) of the job of sublot l that is in position p, i.e., p_{l,π_l(p)} = r_{π_l(p)} + m_{π_l(p)} (12). The completion time of a job for center l in position p thus starts when the previous job is finished (position p − 1 of sublot l) provided the job has arrived (position p), which is why the maximum of those two times appears in equation (11). Another approach used to solve the FFSP decomposes the problem into two levels: a routing sub-problem and a job shop sub-problem (Brandimarte, 1993). For the routing sub-problem, we developed a job assignment heuristic that forms sublots for each processing center starting from a given job sequence. Job sequencing is initially determined with a heuristic based on the LPT dispatch rule (Pinedo, 2008), and at a later stage simulated annealing (Kirkpatrick, Gelatt & Vecchi, 1983; Talbi, 2009) is used. A diagram of the procedure's stages is presented in Figure 6. The construction function generates the processing sublots and schedules the sublot jobs inside a center. The sublots are constructed by assigning each job to a processing center according to the procedure presented in Figure 4. On the other hand, to construct the job schedule of a sublot, each job is assigned to some machine of the processing center according to the procedure described in Figure 5. The representation of the subproblem for the simulated annealing method requires the generation of a solution neighboring the current solution. For that purpose, a position p of the job sequence is chosen randomly, and the neighboring solution is obtained by randomly exchanging position p with position p − 1 or with position p + 1. The new sequence of jobs generates new sublots for each processing center, and the new starting time of each job must be recalculated. The scheme of Figure 6 describes the general procedure. From the information regarding the available processing centers and the jobs to be processed, an initial (job entry) sequence for the refining plant is determined, the sublots of jobs for each center are constructed, and the schedule for each element of the sublot is determined. In this way, the evaluation function calculates the makespan of the process. In the second stage, the interaction between the SA and the generation of sequences takes place: as neighboring job entry sequences are generated, the corresponding sublots are rebuilt and the makespan is re-evaluated.
Figure 5 - Construction of sublot scheduling.
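To make the two-level procedure concrete, the sketch below gives one possible reading of it: jobs are assigned in sequence to the center that becomes free first (the construction step), completion times follow the max(previous completion, arrival) + processing recursion, and a neighboring sequence is obtained by swapping a randomly chosen position with an adjacent one. Function and variable names are illustrative; this is not the authors' implementation, and it ignores plant-specific details such as wheel setups and linkages.

```python
import random

def evaluate_sequence(sequence, arrival, processing, n_centers):
    """Assign jobs in the given order to the center that frees up first and
    return the makespan, using C = max(previous completion, arrival) + processing."""
    free_at = [0.0] * n_centers          # time at which each center becomes available
    makespan = 0.0
    for job in sequence:
        center = min(range(n_centers), key=lambda c: free_at[c])
        completion = max(free_at[center], arrival[job]) + processing[job]
        free_at[center] = completion
        makespan = max(makespan, completion)
    return makespan

def neighbor(sequence):
    """Swap a randomly chosen position with one of its adjacent positions."""
    seq = list(sequence)
    p = random.randrange(len(seq))
    q = p + 1 if (p == 0 or (p + 1 < len(seq) and random.random() < 0.5)) else p - 1
    seq[p], seq[q] = seq[q], seq[p]
    return seq

# Illustrative data (hours): arrival and refining-plus-casting times of 5 jobs, 3 centers.
arrival = {"J1": 0, "J2": 1, "J3": 2, "J4": 4, "J5": 5}
processing = {"J1": 6, "J2": 5, "J3": 7, "J4": 6, "J5": 5}
current = ["J1", "J2", "J3", "J4", "J5"]
print(evaluate_sequence(current, arrival, processing, n_centers=3))
print(evaluate_sequence(neighbor(current), arrival, processing, n_centers=3))
```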
Test problems To generate the test problems, a computational tool was executed that programs the activities in the conversion plant, which is the previous stage in copper production (Pradenas et al., 2006).The output of this tool produces the input data for the copper refining and casting problem.This information contains data on the instant at which each ladle with molten material is produced in the conversion stage.One RCSP instance is defined by the number of jobs that will be processed, the availability of the processing centers, and the arrival and processing times of each. To generate the test problems, the normal operation of a refining and casting plant is considered, and alterations are made as to the availability of the processing centers and the number of jobs to be processed, increasing the complexity of the real problem instances.Table 1 presents the characteristics of the generated instances and the available centers.For each of the five instances of the three types of problems, refining, casting, arrival, and loading times were generated randomly with a uniform triangular probability distribution, as presented in Table 2. Time is measured in minutes and load size in metric tons.For the studied cases, it is assumed that the processing centers for each type of instance are always available, so that the three casting wheels are always operating and are fed by one of their corresponding refining reactors.Table 3 gives the details. Hardware used All of the programs were executed on a computer with an Intel T2400 -1.83 GHz processor with 2 Gb of RAM.Both algorithms were implemented in C++ language on a Windows XP operating system. Parameter definition The simulated annealing method has various control parameters whose values must be established so that it will deliver good quality solutions in a reasonable execution time.To achieve an adequate parameterization, use was made of the 15 instances and of the WinCalibra software, whose procedure is based on a Taguchi Factorial Experiment Design and a local search (Adenso-Diaz & Laguna, 2006).The values obtained from the parameterization are presented in Table 4. Schedule generation Table 5 shows the results obtained for the 15 instances using the proposed method as well as the constructive heuristic (Pradenas et al., 2005).The completion time (C max ), the mean flow time, and the CPU time are given for each instance. The proposed method improves C max by an average of 9.42% and mean flow time by 12.19% with respect to the constructive heuristic.Even though the CPU time is very low, it is longer in simulated annealing, consistent with the difference between a constructive heuristic and the iterative characteristics of the simulated annealing metaheuristic.Figure 7 shows the average percentage improvement in C max and mean flow time for each of the three types of problems studied obtained by using the proposed method instead of the constructive heuristic. 
CONCLUSIONS In this paper, we have proposed an approach based on the flexible flow shop problem to tackle an operations management problem that arises in a copper smelter in its material refining and casting stage. The problem has additional characteristics beyond those treated in the scheduling literature due to conditions typical of the metallurgical process. To represent the problem, use was made of directed graphs and virtual batch processing of jobs. In this way, an algorithm that operates at two levels is defined. First, from a job sequence constructor, job sublots are configured for each available refining center in such a way that a route and a sequence for entering the refining center are defined that optimize the use of the casting wheel. At the second level, this constructed solution is improved by means of the simulated annealing method. After solving 15 test problems, the proposed algorithm improved the makespan by 9.42% and the mean flow time by 12.19% with respect to a constructive heuristic.
Figure 1 - Configuration of a refining and casting plant.
Figure 3 - Scheme of a copper refining and casting plant with batch jobs.
Figure 4 - Construction stage of sublot scheduling.
Procedure 1: Construction of sublots
  Start: given a sequence of jobs to be processed;
  While there is some unassigned job in the sequence
    Search for an available processing center
    If there is an unoccupied center
      Assign the job to that center
    Otherwise
      Assign the job to the tail of the center that is set free first
  End.
Procedure 2: Job scheduling
  Start
  For each of the sublots associated with the centers;
    While there is a job of the sublot that has not been scheduled
      Choose the first job of the sublot
      Search for an available machine in the center
      If the center has a machine
        If the machine is unoccupied
          Schedule the starting time of the job (T_i)
          Schedule the completion time (T_f)
        Otherwise
          Next T_i of the sublot = T_{f-1} + T_setup of the wheel
          Schedule its completion time (T_f)
      Otherwise
        Choose a machine available at the center
        If there is an unoccupied machine
          Schedule the starting time of the job (T_i)
          Schedule its completion time (T_f)
        Otherwise
          If there is linkage
            Next T_i of the sublot = T_{f-1} − T_refining of the next job
            Schedule its completion time (T_f)
          If there is no linkage
            Next T_i of the sublot = T_{f-1} + T_setup of the wheel
            Schedule its completion time (T_f)
Table 1 - Characteristics of the jobs.
Table 2 - Distribution of the data that characterize each job.
Table 3 - Description of the processing centers.
Table 5 - Summary of results.
THE EXPERIENCES OF MOTHERS WHO LOST A BABY DURING PREGNANCY The purpose of this study was to explore and describe the experiences of mothers who lost a baby during pregnancy and care given by doctors and midwives during this period. To realise this goal the researcher followed a qualitative, exploratory, descriptive and contextual approach. Data were collected by using in-depth unstructured interviews. The interviews were taped and transcribed verbatim. Data were analysed through open coding. Data were collected until saturation had occurred. All mothers who were interviewed described their experiences of the loss of a baby during pregnancy. Some shared the same experiences and others did not. In the findings of this research, it became clear that mothers with the loss of a baby during pregnancy had experienced hardships and difficult times during this period. They expressed the wish that people acknowledge their loss, be considerate, sensitive, and give them a listening ear and emotional support. On the other hand, mothers identified the inability of health workers to give them the appropriate support. INTRODUCTION AND BACKGROUND The loss of a baby during pregnancy can take numerous forms, given individual circumstances.These could be a miscarriage, an ectopic pregnancy, or a stillbirth, which means loss of an embryo or a fetus.The loss of a baby during pregnancy not only involves the catastrophe of a baby's death, but also shatters parents' dreams, plans and hopes."Grief or mourning", or bereavement, means to be robbed of something valued, because that something, or someone, has been unfairly taken away, it is understandable that the experience of being bereaved is one of strong, overwhelming and sometimes even violent reactions (Baker & Nackerud, 1999:1).The loss of a baby during pregnancy holds great significance for the parents, the loss of an adult is the loss of the past, and the loss of the baby is the loss of the future.A baby can represent hope for the future, hope of a better life, and hope of greater opportunities.A baby can represent the potential for fulfilling the dreams, a way of starting over, another chance to alter the course of a lifetime.A baby can embody dreams and fantasies.The child can become a part of the parent's identity, and that part of the identity is lost.It is no surprise that the loss of a baby during pregnancy often results in increased anxiety in subsequent pregnancy and feelings of guilt.And this period is oftentimes a period of crisis. One must be careful, however, to avoid making assumptions about the meaning of the loss of a baby during pregnancy to a particular woman.It is difficult, if not impossible, for anybody else to understand the significance of a pregnancy or a baby to another person.This is because the loss of a baby during pregnancy carries with it a vast range of profoundly deep feelings, which include unspoken hopes and expectations based on personal as well as cultural values.Attempts have been made to compare the severity of grief following loss at different stages, perhaps to demonstrate that certain women deserve more sympathy or care.A study investigating this point, however, showed no significant differences in the grief response between mothers losing a baby by miscarriage, stillbirth or neonatal death (Frazer & Cooper, 2003:696). Those who have lost newborns or who have experienced stillbirths have found that people do not recognise the loss as being as tragic as the loss of an older child. 
The death of an infant is often considered "an unfortunate occurrence", and one that can easily be rectified by the birth of another child.Often, no one but immediate family members see the baby.Because of this, to most people, the baby did not exist as a "real person" and they cannot begin to be aware of the love, the hopes and the future that were lost with that child (Mander, 1988:1). According to Condon (1986:987), there are repeated findings in the literature that approximately one third of parents bereaved by stillbirth perceive the obstetric team as failing to provide adequate support or information in both the short and longer terms.These parents may be at greater risk of subsequent psychiatric illness. PROBLEM STATEMENT Although the general topic of death and dying is receiving increasing attention by the medical community, the problem is that little is known about the impact that the loss of a baby during pregnancy has on the lives of those experiencing it.What happens to these people after the woman is discharged from the hospital?How do parents as individuals react to their loss?What happens to a couple's relationship with each other as a result of their baby's death?(Kennel & Klaus, 1982:267). In Kennel and Klaus (1982:264), it is stated that there is a wide range of reactions of people in response to the loss of a baby during pregnancy.Its potential effects can alter the lives of those who experience it.The grief each person feels is a unique part of them.Therefore, no one else can completely understand another's pain.Thus, it is important to describe and explore the experiences of every mother who experiences the loss of a baby during pregnancy. In practice the researcher experienced that there are many complaints from bereaved mothers about the lack of support given to them during their grieving process. The mothers verbalise that all the doctors and midwives seem to care about is that the baby has been delivered, after which they apparently do not care about the emotional trauma the mother is undergoing.The strongest reactions probably occur when there are comments suggesting that the family should forget about the loss and get on with their lives or get on with another pregnancy.It may be even suggested that the baby was small and therefore their loss should not be as great as it would have been if the baby had lived longer (Kennel & Klaus, 1982:267). From the above-mentioned problem statement the following questions arose: • How did the mothers experience the loss of a baby during pregnancy?• How did the mothers experience care given by midwives and doctors during the loss of a baby during pregnancy? PURPOSE OF THE RESEARCH The overall objective of this research was to describe the experiences of mothers with the loss of a baby during pregnancy and the professional care they received during this period in the maternity unit of a public hospital in Gauteng Province, South Africa. DEFINITIONS OF TERMS Concepts used in this study are defined as follows: The loss of a baby during pregnancy: This is when a mother looses an embryo/fetus during pregnancy due to a miscarriage, an ectopic pregnancy, or a stillbirth (Woods & Esposito, 1987:120). Bereavement: Bereavement is defined as the entire process precipitated by loss through death.In this research the bereaved are mothers who have lost a pregnancy (Woods & Esposito, 1987:5). 
Support: In this study, support is that function that prevents or reduces stress in a mother with the loss of a baby during pregnancy.Supporting her makes her feel accepted and respected.She is reassured that she is cared for and is allowed to communicate freely and share her experiences and feelings.The support should be given initially by health workers, and later in co-operation with family and other community members, for example, the church. Experience: It involves gaining knowledge by being personally involved in an event, a situation or a circumstance (Burns & Grove, 2003:15). RESEARCH DESIGN AND METHOD In this research, a qualitative research design, which is exploratory, descriptive and contextual in nature (Mouton, 1996:169;Mouton & Marais, 1990:45), was used. Understanding the meaning of a phenomenon in a particular situation is, useful for understanding similar phenomena in similar situations (Burns & Grove, 2003:37-38;Mouton, 1996:133;Mouton & Marais, 1990:52).The strategy of this research is contextual in nature (Mouton, 1996:133;Mouton & Marais, 1990:52).The research thus aims to provide a description and an exploration of a particular phenomenon or experience or group, within the context of the phenomenon's specific setting and world significance.This research focused on how mothers experienced the loss of a baby during pregnancy and the care given by midwives and doctors in a public hospital in Gauteng. Their viewpoints will be based on knowledge obtained through the field study. DATA COLLECTION In-depth unstructured interviews were conducted with mothers who lost a baby during pregnancy.These interviews were taped and transcribed verbatim in the language preferred by the mother.The interviews were held within the first six weeks after the diagnosis had been made.The time depended on the emotional condition of the mother because immediately after delivery, some mothers were too emotional to discuss or describe their loss.Therefore interviews were scheduled from fortyeight hours to six weeks after delivery for those who were not ready for the interview when they were still in hospital. One question was asked, namely: "Describe how you experienced the loss of your baby during pregnancy and the care you received from both midwives and doctors during this period".During the interview communication skills were used to obtain the necessary information. The researcher contacted each participant to confirm an appointment at a central place and at an appropriate time for the participants.Comments were made about sensitive ethical issues such as maintaining confidentiality of data, preserving the anonymity of the informants and using research for its intend purposes (Creswell, 1994:148).The ethical standards as set by the Democratic Nursing Organisation of South Africa (DENOSA) were adhered to before and during the interview (DENOSA, 1998:1-7). The researcher created a context that was conducive for mutual trust between the researcher and the participant (Marshall & Rossman, 1995:67).Privacy was ensured during the interview.The participants were ensured that their participation was entirely voluntarily and they could withdraw from the research at any stage if they felt so.The interviews would be stopped if the participant suffered severe stress during the interview. The possibility of referring the participants for councelling was discussed with them after the interview. 
A pilot study was conducted with one mother to refine the question (Burns & Grove, 2003:38).The question was asked to one mother and the interview was conducted as planned.The reason was to see whether the question was clear to the mother and whether the interview developed as planned.The mother understood the question and the interview went well, so no changes were made to the question and interview procedure.This mother and interview was therefore added to the main sample. POPULATION AND SAMPLING Population The population consisted of mothers who lost a baby at any stage during pregnancy and who were admitted to a maternity unit of a public hospital in Gauteng.It is mostly Black patients who are treated in this hospital. The research is bound to the uniqueness of this specific academic public hospital in Gauteng and is not representative of the whole population. Sampling A purposive sampling method was used to select the mothers, using set criteria, that is, all mothers who lost their babies during pregnancy and who were admitted to the maternity unit were interviewed.Patton (in Denzin & Lincoln, 1994:229) suggests that the logic and power behind purposive selection should be information-rich.In this research the adequacy of the research was attained when sufficient data had been collected so that saturation occurs and variation is both accounted for and understood (Denzin & Lincoln, 1994:230).Saturation means that themes and categories in the data become repetitive and redundant, such that no new information can be gleaned by further data collection (Polit & Hungler, 1999:43).Ten mothers were sampled when saturation was achieved. DATA ANALYSIS "The intent of the analysis was to organise the data into a meaningful, individualized interpretation or framework that describes the phenomenon studied" (Burns & Grove, 2003:29).Tape recordings of the interviews were transcribed verbatim in the language in which the interviews were held.Transcriptions were analysed by the researcher according to Tesch (unmodified) (in Creswell, 1994) and by an independent coder.The person was requested to also analyse the data according to Tesch's method, independently from the researcher. The two analyses were then compared to ensure trustworthiness.She was selected as she has obtained her DCur (Nursing), with a thesis in which she also used qualitative interview methods.Data analysis is a process of bringing order to the data, organising what is collected into concepts, categories and basic descriptive statements (Patton, 1987:144). Enhancing trustworthiness Trustworthiness refers to the extent to which a research study is worth paying attention to, worth taking note of, and the extent to which others are convinced that the findings are to be trusted (Babbie & Mouton, 2001:276). The criteria by Lincoln and Guba (1985:290-300) served as guidelines for the researcher. 
To enhance credibility the researcher:
• had prolonged contact with the study field. She is a midwife, who has knowledge and clinical experience in this area. The literature that was consulted enabled her further to satisfy the criterion of being knowledgeable about the phenomenon under investigation;
• bracketed existing knowledge and preconceived ideas and especially personal views about the existing problems in the clinical area; and
• conducted the phenomenological semi-structured interviews until data saturation occurred, namely until the collected data were repeated and confirmation of previously collected data took place (Streubert & Carpenter, 1995:22-23).
The categories identified by the researcher were compared with those identified by the other coder. No major discrepancies were identified between the two coders' analyses of the data. An in-depth literature review further confirmed these categories. This enhanced confirmability. Transferability was ensured by the researcher providing in-depth discussions of the data obtained. RESULTS The results were based on the analysis of the in-depth, unstructured interviews with mothers about how they experienced the loss of a baby during pregnancy and the care they received from doctors and midwives. After a discussion with the independent coder all identified themes and sub-categories were finalised. A literature control was integrated as a further measure of trustworthiness of the findings (Lincoln & Guba, 1985:294-331). All mothers who were interviewed described their experiences of the loss of a baby during pregnancy; some shared the same experiences, but some did not. The themes that were identified were:
Themes - The mother's experience of the loss
• Confused (losing one's mind)
• Emptiness (there is no fulfillment)
• Sadness (at losing the baby)
• Pain (emotional pain)
• Anger (towards oneself, baby, nurses and doctors)
• Guilt (that maybe she caused the death)
• Fear (of falling pregnant again)
• Denial (that she lost the baby)
• Failure (to fulfill expectations)
• Frustration (because of losing the baby)
• Loneliness (because of isolation)
• Lost hope (of falling pregnant again)
The experience of mothers regarding the care given by midwives and doctors during the incident
• Lack of communication leading to lack of information
• Lack of therapeutic listening
• Lack of emotional support
• Insensitivity
• Health workers do not care about patients
Confusion Mothers experiencing the loss of a baby during pregnancy have confused emotions and feel as if they are losing their minds. The following excerpts from the interviews support this discussion: "I really feel terrible. I feel confused and I feel as if I am going to lose my head". "Finally, I heard my baby crying weakly. Nobody said anything, so I asked, 'Is it a boy or a girl?' My doctor responded that there were problems and she could not tell. I was confused, I must have heard wrong. I asked her again. She repeated the same answer with no other explanation. No one showed me my baby". Chalmers (2000:16) states that what most people dread after losing a baby is that their experiences may seem to be illogical or irrational. However, when bereaved couples talk with other parents who have lost a baby, it soon becomes clear that "they are not going crazy", but that the up-and-down and "confusion" feelings they are experiencing are common reactions to pain and are to be expected. Borg and Lasker (1981:17) suggest that parents sometimes think they are "going crazy". Their emotional reaction is so strong that they become disoriented, depressed, bitter and withdrawn for many months and maybe even years. Friends may expect them to bounce back quickly, have another child and try to forget the past, but they cannot forget. Borg and Lasker (1981:36) further suggest that, whatever the source of the failure of others to express sympathy, it leaves couples feeling alone and confused about their emotions. Many people never speak about a miscarriage at all and have a very difficult time resolving their grief. They may find it hard to tell people who did not even know they were expecting that the pregnancy had ended. Often, however, they find comfort from talking to others who have themselves been through a miscarriage and who can understand the sadness, the fears, the anger and the disappointment.
Emptiness According to mothers who lost a baby during pregnancy, there is no fulfillment of giving birth to a baby because there is no baby.They came back from hospital with empty hands whilst other mothers were holding their babies."You know, I just felt confused.I could not cry, I really felt empty"."When a child is born dead, there is nothing.The world remembers nothing and the gap in the womb is replaced by an emptiness in your arms. You are not recording a birth or a death". Sadness Mothers who have lost a baby before giving birth, feel sad because of this tragedy, and because of the emptiness and disappointment."But I am glad that I held him because I know that I was pregnant and it was not just me dreaming.I am glad in that respect, but it's very sad, because I know what he looked like and I know he was gorgeous, and I think it's unfair…". According to Borg and Lasker (1981:20), there is also an overwhelming sadness as is usual after any tragedy.Some parents feel sad for the baby.They say that it seems particularly unfair for a baby to die.It is expected that an elderly person, who has experienced life, will die, but an infant is thought to deserve a chance at life.They are sad for themselves as well, sad because of the emptiness and disappointment.Their wish to become parents -to have someone to nurture, to love, to teach, to care for, to play with -someone who would care for them in their old age and inherit the benefits of their work -has not been granted. Pain Mothers say the pain they are feeling is not physical pain but emotional pain because in the end there is no reward.The following excerpts confirm this: "At times it is not easy to communicate the pain because there is simply no appropriate language with which to describe the tragedy"."It was painful and when I say painful I don't mean like a headache! But what can I say … emotional pain". When a baby is stillborn the hopes built up during the months of pregnancy are suddenly gone.It is not only the mother who is affected.Fathers also develop an attachment to the baby before it is born.Some stillbirths occur with foreknowledge of the situation and it is known that the baby is already dead.This is very painful for parents, especially if they have to wait for the natural onset of labour to produce a dead baby (Wright, 1992:102). Anger The anger that these mothers experience is because they have delivered a dead baby.To them there is no reward.Sometimes they are angry with themselves, the baby, the nurses and the doctors."The anger is because you are going to deliver a foetus that is dead.I'm going to go through this pain and in the end there is no reward.There is no baby.You come home empty. Nothing … Life goes on". "I needed people to acknowl- edge what had happened, not to trivialise it.I was grateful for the people who listened, and stayed, knowing that they could not take away the hurt.I was angered by those who tried to make it better, with false comfort and too quickly offered explanations …". In the study conducted by Cleirel (1991:256), it was shown that "… anger after loss must be seen as an individually defined way of coping with the loss, rather than a "typical" part of the loss reaction.Feelings of anger are often regarded as protest against the loss.It has been found that it may be directed towards different people, objects or circumstances that may also be held responsible for the death".Woods and Esposito (1987:138-139) explain that often, behind the question "why?" 
is anger. This anger has no focus. Parents are angry at everything and everyone at the same time. They realise that no one and nothing can be blamed. How frustrating this is. Sometimes, parents are able to express anger at God. They ask, "Why, when there are so many people out there who mistreat children, does God allow them to have babies, and yet he stops us from having this child we wanted so much?" Anger is the result of a gradually developing awareness of the reality of the situation. As the significance of their loss of a baby during pregnancy begins to dawn on them, parents (and significant others) experience the different emotions of anxiety and anger. With the full effect of their loss come more focused feelings of bitterness, resentment, blame, rage, and envy of those with normal pregnancy outcomes. Blame and anger may be a destructive force in relationships with family members, and prevent these relationships from being a source of comfort and support. Venting of angry feelings on care providers protects these family relationships for more positive interactions (Mereinstein & Gardener, 1993:536). Guilt Most mothers experience feelings of guilt, because they think that perhaps they caused the death by doing something wrong, and sometimes because they failed the child by not carrying her/him to term. The following excerpts confirm this: "I felt myself it was my fault. I felt it was something wrong with me that makes my babies be born early. I feel it's my body rejecting the baby". "I just don't want to encourage anything to happen. I think really I am blaming myself for going into labour, for getting out of bed. If only I had stayed in bed that extra day, would it have made any difference?" Most parents (especially mothers) feel guilty that their baby has died. They search the months of pregnancy trying to pin the cause for their child's death on something they may have done. This is a normal reaction, and healthy as long as they talk about it together, rather than hiding these feelings from each other. It is only after they have gone through the long, slow process of checking possible causes that they will gradually come to accept that they did not cause their baby's death (Chalmers, 2000:12). Fear Mothers experience fear of falling pregnant again. They fear that the loss will happen again. "I still feel sad and I have fear of falling pregnant again". Mereinstein and Gardener (1993:532) mention that, for the family who experiences an intrapartum demise, the joyous expectations of labour and birth suddenly change to fear, anxiety and dread that the "worst" might possibly happen to them again. Woods and Esposito (1987:252-253) stress that, if a previous pregnancy has ended disastrously, a couple's anxiety and fears, both founded and unfounded, might be greater than normal. Issues regarding perinatal diagnostic testing need to be discussed. Pertinent information and the rationales in support of or against a patient's having amniocentesis or sonograms performed need to be explored. Many parents, during a subsequent pregnancy, opt for more testing than is probably necessary. The tests eliminate some of their anxiety. Indeed, maternal anxiety is a valid factor to be considered in determining what testing will be done, even if the previous pregnancy would not have had a better outcome if a particular test had been carried out. Denial After losing a pregnancy mothers attempt to deny the reality of the loss, because they do not want to face the pain of this loss. This is confirmed by the following excerpt: "After the miscarriage was over, I had this crazy feeling that I would go home and continue with the pregnancy. I knew the reality, but somehow I did not believe it". Borg and Lasker (1981:19) explain that, although it is essential that the bereaved parents express their emotions over time and talk about their loss, some degree of denial is a normal part of grieving. It is a form of protection, a way of not having to face up to the pain. "This did not happen to me" is a common feeling.
Failure A mother who loses a baby feels that she has failed to fulfil the expectations of being a woman, a mother.She feels she has failed herself, her husband and most of all her child."I guess it's important to carry on one way or the other.Think of new changes -focus in on something else.There are a couple of courses that I wanted to do, so I can think about a career change.In some way, I feel like a failure and wonder if my life plans are possible.So I think the main thing will be useful and meaningful"."I feel … you know, it is something I cannot describe, but I feel like a failure". It is common for parents especially the mother, to feel that they are a failure as people, and to be afraid of facing people as failures (Borg & Lasker, 1981:148). Frustration Mothers with the loss of a baby during pregnancy become frustrated because they could not achieve their goal of delivering a live baby. The following excerpts confirm this: "Although I did not want others to suffer, I was angry, frustrated and depressed, and so I distanced myself"."I felt bad, and frustrated because when you carry the baby for so long, you expect a sort of reward, that is, getting a live baby … This is really frustrating". When a tragedy occurs, a woman's confidence is often shattered.The effort of becoming pregnant and planning for a baby is too great, and she is determined never to go through such a trauma again.For the woman who very much wanted a child and can no longer look forward to having one, there is a deep frustration in addition to the grief for the baby who was so loved (Borg & Lasker, 1981:94). Loneliness Mothers who have lost a baby in pregnancy often feel lonely and empty, and this feeling of loneliness inexplicable."I though I was going crazy and ah, I hoped for it, so the pain would stop.Because it just hurt so much -the loneliness … just unbelievable loneliness.Ah, it's just like -the pain and the loneliness and having to deal with it, ah!You just kind of wish you would slip over that edge into some unknown space and not have to worry about it any more".Borg and Lasker (1981:7-8) report that bereaved parents experience feelings of being alone and isolated from others.Most of their family and friends do not understand what sort of emotional support is needed."At least you never knew this child", they will say, hoping to ease the pain or "it could have been worse" or "you'll have another one".In their own way, they might be trying to offer hope for the future, but to the bereaved parents, it often seems that these people do not comprehend the enormity of what has happened. Lost hope Mothers lose hope because all their hopes and dreams were lost through this tragedy.There is profound disappointment and all plans collapse."I felt empty because I knew I was going to give birth to a dead baby, and I lost hope". One participant in Pilkington's study (1993:134) said she was feeling empty with shattered dreams and lost hope.The participant reached out to God for strength and found an acceptance and a desire to pull through one step at a time, while weighing her blessings and finding ways to fill hollow feelings. 
Lack of communication that leads to lack of information Mothers raised concerns about midwives and doctors not communicating with them.Therefore, it would appear that these mothers are not given enough information, sometimes none at all.This in itself will lead to them making uninformed decisions."Even in the antenatal ward, nurses don't communicate with you"."Doctors don't have time for the patients.Just a few.But they don't have time for the patients!Most of them don't have time for the patients.So, I thought maybe that once they talked to the nurse about the patient's problem or when they write in the bedletter, that means they leave it to the nurses". Lack of therapeutic listening Mothers stated that they wished that midwives and doctors could give them a listening ear when they spoke about their loss.They mentioned how frustrated they were, because most of the time there was nobody to talk to and the stress built up. The following excerpt confirms this: "Because most of the time at home there is no time for this, when you arrive at the hospital, you find women with their kids, happy, and you are sitting there with a painful heart, in tears.You miss a person who will sit next to you, and let you open up to her". According to Mander (1999:1) if possible a sympathetic listening ear is needed, but in the absence of a human ear other means such as pen and paper, may allow the necessary outpouring to help her to make connections between this experience and the other strands of her life. Lack of emotional support What mothers would like to see is midwives and doctors giving them emotional support by comforting, talking and being there during this time. The following excerpt confirms this: "What I ask from the nurses is that they should try hard to comfort patients who come in.They should try hard to look at the other person's problem, try to comfort her the way they have been taught, and according to their capability, because most of the time, at home there is no time for this".Mereinstein and Gardener (1993:545-546) explain that professional presence and support are essential to families in crisis because of the increased dependency needs that accompany grief and loss.Yet certain aspects of the environment such as privacy, quiet and comfort may be difficult to obtain in a noisy and busy perinatal setting.The recommendation to never leave the family alone must be balanced with their need for privacy and personal time alone with their baby (that is, stillborn, ill or dying).Simply saying, "I will stay with you unless you ask me to leave so that you can have some private time with your child" or "Would you like me to leave for a while so that you can be alone with your baby?" offers both support and privacy. 
Insensitivity Most of the time midwives and doctors are insensitive in the way they treat mothers and their stillborn babies.These mothers are screened in the labour ward and they deliver without any person attending to them.Dead babies are not treated the same as those who are alive, and they are often wrapped in plastic or placed in a receiver."I wanted to be treated like others, but because he was dead, I could just deliver him on a plastic".Woods and Esposito (1987:65-66) suggest that, at delivery, care providers should balance the patients' perception of events with technical aspects of the procedure.All too often, stillborn foetuses are delivered into a pan or in the labour bed.Although these approaches may satisfy clinical standards for a stillbirth delivery, they can impart a very negative and uncaring attitude to the patient. Health workers do not care about patients Mothers experienced a lack of care for them as patients.It appeared to many of the mothers that, once their doctors established that the baby was dead, they relinquished care to the nurses. The following excerpts confirm the discussion: "Doctors don't have time for the patients.Just a few.But they don't have time for the patients!Most of them don't have time for the patients.So, I thought maybe that once they talked to the nurse about the patient's problem or when they write in the bed letter that means they leave it to the nurses"."The doctor avoided me, and when I pressured him, he said: 'These things happen and you should try to put that behind you'.He really offered no support, he was so cold". LIMITATIONS One limitation was that the researcher did not always gain the co-operation of the staff in the clinical area.As they failed to call the researcher when there was a mother to be interviewed, some clients would leave without being seen by the researcher. RECOMMENDATIONS The results from the interviews held with mothers with the loss of a baby during pregnancy elicited many personal emotional experiences.In addition, there was some dissatisfaction with the care received from midwives and doctors.Recommendations to address this are the following: Nursing research It will be advisable to conduct research with the health workers, where they are given the opportunity to describe their experiences when caring for these women and thus share their own views.Further research can be conducted in other institutions, to see if the same results are found. Nursing practice It is important to advocate for hospital policy that includes providing support for staff members, as they are emotionally affected by working with bereaved families. Better support for staff would help them to provide better support for parents.Nursing guidelines should be developed for quality care of these patients. Nursing education There is a need to educate nurses on the grieving process and specific interventions for bereavement care. CONCLUSION In the findings of this research, it became clear that mothers with the loss of a baby during pregnancy experienced hardships and difficult times during this time. They wished that people acknowledged their loss, were considerate, sensitive and gave them a listening ear and emotional support.On the other hand mothers reflected the inability of health workers in giving them the appropriate support.The health care providers should keep in mind that every parent in this situation is on their own journey.Their job is to walk with the parents on their journey. 
BABBIE, E & MOUTON, J 2001: The practice of social research. Cape Town: Oxford University Press.
BAKER, L & NACKERUD, L 1999: The relationship of attachment theory and perinatal loss. Death Studies, 23(3):1.
BORG, S & LASKER, J 1981: When pregnancy fails. Boston: Beacon Press.
BURNS, N & GROVE, SK 2003: The practice of nursing research: Conduct, critique and utilization. Toronto: WB Saunders.
CHALMERS, C 2000: A cross-cultural survey of women's experiences. Journal of Nurse-Midwifery, 39(4):265-272.
CLEIREL, MPH 1991: Adaptation after bereavement. A comparative study of the aftermath of death from suicide, traffic accident and illness of the next of kin. Proefschrift. Amsterdam: Rijks Universiteit.
CONDON, JT 1986: Management of established pathological grief reaction after stillbirth. American Journal of Psychiatry, 143:987-992.
CRESWELL, J 1994: Research design. Qualitative and quantitative approaches. London: Sage.
DEMOCRATIC NURSING ORGANISATION OF SOUTH AFRICA 1998: Ethical standards for nurse researchers. Pretoria: DENOSA.
DENZIN, NK & LINCOLN, YS 1994: Handbook of qualitative research. Thousand Oaks: Sage.
FRAZER, DM & COOPER, MA 2003: Myles textbook for midwives;
2018-12-16T00:30:32.393Z
2007-11-12T00:00:00.000
{ "year": 2007, "sha1": "c3d74a9bf3c16c94ec2f7b7043f32a792c064d6b", "oa_license": "CCBY", "oa_url": "https://hsag.co.za/index.php/hsag/article/download/245/235", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "c3d74a9bf3c16c94ec2f7b7043f32a792c064d6b", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
6317956
pes2o/s2orc
v3-fos-license
Reduced Gut Acidity Induces an Obese-Like Phenotype in Drosophila melanogaster and in Mice In order to identify genes involved in stress and metabolic regulation, we carried out a Drosophila P-element-mediated mutagenesis screen for starvation resistance. We isolated a mutant, m2, that showed a 23% increase in survival time under starvation conditions. The P-element insertion was mapped to the region upstream of the vha16-1 gene, which encodes the c subunit of the vacuolar-type H+-ATPase. We found that vha16-1 is highly expressed in the fly midgut, and that m2 mutant flies are hypomorphic for vha16-1 and also exhibit reduced midgut acidity. This deficit is likely to induce altered metabolism and contribute to accelerated aging, since vha16-1 mutant flies are short-lived and display increases in body weight and lipid accumulation. Similar phenotypes were also induced by pharmacological treatment, through feeding normal flies and mice with a carbonic anhydrase inhibitor (acetazolamide) or proton pump inhibitor (PPI, lansoprazole) to suppress gut acid production. Our study may thus provide a useful model for investigating chronic acid suppression in patients. Introduction Aging and metabolic syndrome are among the major issues in contemporary medicine. Although the mechanisms underlying these health problems remain incompletely understood, it has been demonstrated that aging and metabolism are intimately related, and that gut homeostasis plays an important role in regulation of these processes [1][2][3]. Recent studies have suggested that chronic (at least 10 months), but not short term (4 months), PPI treatment is associated with undesirable weight gain [4,5]. Similarly, a suboptimal weight loss after gastric bypass bariatric surgery was found in two separate cohorts of PPI users [6]. Acid homeostasis in the gut may thus play a critical role in metabolic regulation. Gastric acid is produced by the parietal cells of the stomach, and not only facilitates digestion and absorption of nutrients, but also helps to prevent bacterial overgrowth and enteric infection. In the Drosophila digestive tract, the midgut region is considered to be the equivalent of the mammalian stomach/small intestine. Specialized acid-producing cells, the copper cells of the midgut, were originally identified by their orange fluorescence in copper-fed larvae and their ability to accumulate radioactively-labeled copper [7]. Copper cells show several striking similarities to mammalian gastric parietal cells in morphological studies, and they also co-localize with the acidic region of the Drosophila midgut [7,8]. Moreover, acid secretion is not detectable in mutant flies with perturbed copper cell differentiation [9]. Copper cells of the fly midgut thus appear to be structurally and functionally related to mammalian parietal cells. Unlike parietal cells of the mammalian stomach, which rely on the H+/K+-ATPase (of the P-type H+-ATPase family, pha) for acid secretion, Drosophila copper cells use the V-type H+-ATPase (vha) for acid secretion [10]. No pha has thus far been shown to be expressed in Drosophila. Vacuolar-type H+-ATPases are multisubunit proton pumps comprising two functional parts, an ATP-catalytic V1 complex (subunits A to H) and a proton-translocating V0 complex (subunits a, c, d and e). vha is an ATP-hydrolyzing enzyme that can transfer energy to proton gradients across diverse biological membranes, thus permitting regulation of the acidity of an organelle or of the extracellular side of the membrane.
Although it has been suggested that gut acidification in insects may work through vha [11], it remains to be shown whether vha is actively involved in acid production in the Drosophila midgut. Here, we report that vha16-1 is a critical component of acid production in the fly midgut. Genetic and pharmacological approaches for acid suppression in the fly and in mice both induced increased body weight and lipid accumulation, and in flies, were also accompanied by accelerated aging. Our results may have critical implications for the use of chronic acid suppression in a clinical setting. Materials and Methods Flies and life span assays w 1118 , EP2372, P[Δ2-3], P112087, Cyo/sp;TM3,Ser/TM6, Cop-Gal4 (NP3270), daughterless-Gal4 and UAS-GFP fly stocks were obtained from the Bloomington Drosophila Stock Center. UAS-vha16-1 was generated from a full-length vha16-1 cDNA and subcloned into the pUAST vector as previously described [12]. All flies were raised on standard sucrose/yeast/cornmeal food and were backcrossed into the w 1118 background for at least 5 generations, as described previously [13]. For the life span assays, flies that had eclosed within 48 hours (approximately 100 males and 100 females) were transferred to a 1-liter population cage and maintained in a humidified, temperature-controlled incubator with 12-hour on/off light cycle at 25°C [14]. Fresh food was provided every other day, and the number and sex of dead flies were scored. Fly food contained 5% dextrose, 5% yeast, 2% agar, and 0.23% Tegosept (Apex). 500uM lansoprazole (Takeda Pharmaceuticals Taiwan), acetazolamide (Sigma) or vehicle control was added to the foods described in the experiment. Genetic screen for starvation resistance The P-element-containing line P112087 was crossed to a constitutively expressed transposase source P[Δ2-3] in order to excise and transpose the P-element to other chromosomal locations in progeny. The transposase was then removed by crossing F1 progeny to a balancer line Cyo/ sp;TM3,Ser/TM6. Individual flies having the P-element integrated on the 2 nd or 3 rd chromosome were established as stock lines. For starvation challenge, ten-day-old flies that had been maintained on regular food were transferred to vials containing 2% agar and the number of dead flies was counted every 3 to 4 hours. For mRNA quantification, total RNA was prepared from at least 30 flies using the NucleoSpin RNA Kit (Macherey-Nagel), and the RNA was converted to cDNA as described previously [15]. Quantitative polymerase chain reaction (qPCR) was carried out using a StepOnePlus Real-Time PCR System (Applied Biosystems), SYBR Green Master Mix (Fermentas), and gene-specific primers 5'-GGAACTCACGAAGCAAGTGTTGA-3' and 5'-AAAACCGCACCAT TGGATACAT-3' for vha16-1, and 5'-AATGGGTGTCGCTGAAGAAGTC-3' and 5'-GACG AAATCAAGGCTAAGGTCG-3' for glyceraldehyde 3-phosphate dehydrogenase (GAPDH). A two-step PCR reaction was carried out with denaturation at 95°C for 15 seconds, annealing and extension combined at 60°C for 1 minute, in a total of 40 cycles. The mRNA expression level of each target gene compared to GAPDH was quantified by subtraction: Ct (specific gene) -Ct (GAPDH) = ΔCt. A difference of one PCR cycle equates to a two-fold change in mRNA expression level. The uniqueness of amplicons was confirmed using dissociation curves. 
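The ΔCt arithmetic described above translates directly into a fold-change estimate, since each PCR cycle corresponds to a doubling of product. The snippet below is a minimal sketch of that calculation, not code from the study; the Ct values and the use of the standard 2^-ΔΔCt convention are illustrative assumptions.

```python
# Minimal sketch (not from the paper): relative expression from Ct values, using the
# DeltaCt logic described above, where one PCR cycle equates to a two-fold change.
# The Ct values below are illustrative placeholders, not data from the study.

def delta_ct(ct_target: float, ct_reference: float) -> float:
    """DeltaCt = Ct(target gene) - Ct(reference gene, e.g. GAPDH)."""
    return ct_target - ct_reference

def fold_change(dct_sample: float, dct_control: float) -> float:
    """Relative expression of sample vs. control via the 2^-(DeltaDeltaCt) rule."""
    ddct = dct_sample - dct_control
    return 2 ** (-ddct)

# Hypothetical example: vha16-1 normalized to GAPDH in mutant and control flies.
dct_mutant = delta_ct(ct_target=24.7, ct_reference=18.0)
dct_control = delta_ct(ct_target=24.0, ct_reference=18.0)

print(f"fold change (mutant / control): {fold_change(dct_mutant, dct_control):.2f}")
# A DeltaDeltaCt of +0.7 gives roughly 0.6-fold expression, i.e. about 40% down-regulation,
# the order of magnitude of the reduction reported for the m2 mutant in the Results.
```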
Gut acidity, total triglycerides, body weight, feeding and fecundity measurements in flies To measure fly gut acidity, 10-day-old flies were fed with yeast paste containing 1% bromophenol blue for 24 hours, and midgut acidity was observed under a dissecting microscope. To measure total triglycerides, ten 10-day-old flies were homogenized in PBS containing 0.05% Tween 20 and centrifuged at 16,000 g. Total triglycerides were measured using a triglyceride test kit (Fortress Diagnostics). For body weight measurements, 10-day-old flies were anesthetized by CO 2 and weighed immediately, using a microbalance (Sartorius). Female fecundity was determined by daily counting of eggs produced by 3 mating pairs. Flies were passed daily to new vials containing appropriate food, and the number of eggs laid was counted and recorded for the first 14 days of adult life. For the feeding assays, 10-day-old flies were transferred to fresh vials with regular food containing 0.5% FD&C no. 1 blue food dye. After 6 hours, 10 flies were homogenized in a single tube containing PBS, and the amount of ingested dye was determined by spectrophotometer for dye absorbance at 620nm. Mouse experiments All experimental protocols followed the local animal ethics regulations and approved by National Taiwan University College of Medicine and College of Public Health Institutional Animal Care and Use Committee (IACUC). C57BL/6 male mice were obtained from National Taiwan University College of Medicine Laboratory Animal Center and maintained in an animal room with controlled temperature at 22-24°C and humidity at 50-55% under 12 hr light/ dark cycle. All mice were fed ad libitum with standard pelleted mice chow (LabDiet 5058, Pico-Lab). For chronic acid suppression, mice received a daily subcutaneous injection of lansoprazole for 3 to 4 months. Food intake, water consumption and change in body weight were monitored regularly. Blood from tails of mice was collected at different time points and serum total triglycerides and cholesterol were measured using commercial kits (Fortress Diagnostics). For stomach acidity measurements, the mouse stomach was removed and washed with 1ml PBS. Gastric contents were collected and pH of the collected gastric juice was measured using a pH meter (Sartorius). Statistics All data are expressed as mean ± SEM. Survival curves were analyzed by the Kaplan-Meier procedure and log-rank test. Data for all other assays were analyzed using one-way ANOVA or Student's t test. Isolation of vha16-1 mutant flies To identify novel genes involved in metabolic regulation, we set up a genetic screen for starvation resistance using P-element-mediated mutagenesis in D. melanogaster. A total of 696 mutant lines were generated, and a mutant, dubbed m2, was found to have prolonged survival under food-deprived conditions, and was homozygous viable and fertile (Fig 1a and Table 1). Inverse PCR followed by sequencing showed that the landing site of the P-element for this mutant is in the first intron of the vha16-1 gene (Fig 1b), which encodes the c subunit of vha. The insertion site of P[GawB] was further confirmed by PCR, which detected an approximately 350 base pair PCR product spanning the vha16-1 gene and a P[GawB] fragment (Fig 1b and 1c), and an approximately 120 base pair PCR product within the P[GawB]. This P-element insertion produces a hypomorphic mutation of the vha16-1 gene, since we detected approximately 40% down-regulation of vha16-1 mRNA in m2 homozygous mutant flies (Fig 1d). 
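A brief illustration of the survival comparison named in the Statistics subsection above (Kaplan-Meier curves compared by log-rank test) is sketched below. This is not the authors' analysis code: it assumes the third-party Python package lifelines, and the lifespan values are synthetic placeholders rather than data from the study.

```python
# Minimal sketch (assumed tooling: the `lifelines` package): Kaplan-Meier estimates and a
# log-rank test for two fly cohorts, mirroring the survival analysis described above.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

control_days = [38, 41, 45, 47, 50, 52, 55, 58, 60, 63]  # synthetic day-of-death values
mutant_days = [30, 33, 35, 37, 40, 42, 44, 46, 49, 51]   # synthetic day-of-death values

kmf = KaplanMeierFitter()
kmf.fit(control_days, label="control")  # every death observed, no censoring
print("control median life span:", kmf.median_survival_time_)

kmf.fit(mutant_days, label="mutant")
print("mutant median life span:", kmf.median_survival_time_)

result = logrank_test(control_days, mutant_days)
print(f"log-rank p-value: {result.p_value:.4f}")
```

In practice, each fly's recorded day of death (plus a censoring flag for any flies lost before death) would replace the synthetic lists, with one pair of lists per genotype or treatment group.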
For comparison, an additional vha16-1 mutant, EP2372, was obtained from the Bloomington Drosophila stock center. EP2372 is homozygous lethal, but heterozygous mutant flies showed reduced vha16-1 mRNA expression and increased starvation resistance, similar to our results for m2 homozygous mutant flies (Fig 1a, 1b and 1c). vha16-1 mutation reduces gut acidity in flies Since m2 mutant flies carrying P[GawB] constitutively express the yeast transcription activator protein Gal4, we crossed them to mutant flies carrying a UAS-GFP construct. The progeny reporter flies displayed a high level of GFP expression in a circumscribed segment of the central midgut in adult flies (Fig 2a and 2b). This segment corresponds to the copper cell region, where vha is highly expressed in the apical plasma membrane and is responsible for acidification of the midgut lumen [16,17]. To evaluate whether midgut acidity is affected in vha16-1 mutants, flies were fed bromophenol blue (BPB, a pH indicator) for 24 hours. BPB changes color from yellow at pH = 3.0 to blue at pH = 4.6. We found that 56.06% of m2 homozygous mutant flies and 60.61% of EP2372 heterozygous mutant flies showed diminished midgut acidity, compared to 19.23% of control w1118 flies (Fig 2c-2g). Although vha16-1 overexpression specifically in copper cells did not affect midgut acidity of flies, it did effect recovery of midgut acidity of EP2372 heterozygous mutant flies to the normal level (Fig 2h). Moreover, a majority of w1118 control flies and vha16-1 mutant flies fed with acetazolamide or lansoprazole also exhibited diminished midgut acidity (Fig 2g). Lansoprazole has been shown to inhibit gastric and vesicular acidity through inhibition of both vha and pha [18,19]. Gut acid suppression increases lipid accumulation and body weight in flies Starvation resistance in animals is generally associated with altered metabolism. For instance, animals having increased lipid accumulation or body weight are often resistant to food deprivation because of their greater nutrient storage [13,20]. We examined triglycerides and body weight in flies as a measure of nutrient storage and ability to resist starvation. We found that m2 homozygous mutant flies have an elevated level of triglycerides and increased body weight compared to control w1118 flies (Fig 3a and 3b). Intriguingly, chronic acid suppression by lansoprazole also induced these phenotypes, similar to what is seen in m2 homozygous mutant flies (Fig 3a and 3b). Increased nutrient storage upon gut acid suppression was not associated with alterations in feeding behavior, since the feeding rate for m2 homozygous mutant flies was comparable to that of control w1118 flies (Fig 3c). Although female fecundity is also linked to energy availability in animals, this is not a factor in m2 homozygous mutant flies, as normal egg production was observed in the first 2 weeks of female adult life (Fig 3d). Gut acid suppression reduces life span in flies Because metabolism is considered to be a central component of life span regulation, it is crucial to determine how life span is affected in flies that have decreased midgut acidity. We found that median and maximal life span were significantly decreased in both m2 and EP2372 mutants, as well as in lansoprazole-treated flies, compared to control w1118 flies (Fig 4a and 4b and Table 1). Gut acid suppression induced an obese-like phenotype in mice We further verified whether acid suppression in mice could induce a similar phenotype to that observed in flies.
We treated mice with lansoprazole for 3 to 4 months, and found that stomach acidity was significantly reduced in a dosage-dependent manner (Fig 5a). Over the course of lansoprazole treatment, we found that serum triglycerides and cholesterol gradually increased for mice on higher doses of lansoprazole (5 and 25 mg/kg) but not on the lower dose of lansoprazole (1 mg/kg). Levels of serum triglycerides and cholesterol were significantly higher 60 days after receiving lansoprazole (Fig 5e and 5f). Moreover, body weight was also increased in mice treated with higher doses of lansoprazole (Fig 5d). Neither food intake nor water consumption was altered in mice treated with lansoprazole (Fig 5b and 5c). Discussion Our study demonstrates that flies and mice consistently exhibit an obese-like phenotype, accompanied in flies by accelerated aging, when gut acid production is suppressed by either genetic or pharmacologic means. On the other hand, the resulting elevated store of triglycerides probably confers starvation resistance upon these flies [21,22]. The relationships among fat storage, starvation resistance, and life span are complex and sometimes counterintuitive. Obesity and increased stores of triglycerides can be associated with either increased or decreased starvation resistance, likely depending on whether triglycerides stored in lipid droplets can be mobilized. For example, flies with a loss-of-function mutation in the Brummer lipase gene are obese and starvation sensitive [23], while overexpression of Lsd-2 (encoding a perilipin protein) leads to an obese and starvation-resistant phenotype, reminiscent of our flies with deficient midgut acidity [24].
[Figure 2 legend (fragment): vha16-1 mRNA was significantly increased in Da-Gal4;UAS-vha16-1 flies compared to control (UAS-vha16-1) flies. Experiments were done in triplicate and each replicate contained more than 30 flies per group. *, P < 0.05, compared to the control group. doi:10.1371/journal.pone.0139722.g002]
Furthermore, increased weight or lipid content is not necessarily linked to decreased life span. The methuselah mutant fly is heavier and starvation resistant, but also long-lived [25]. A recent study by Lee [26] also showed a positive (and almost linear) correlation between lipid content and life span in female flies fed diets having various protein:carbohydrate ratios. Parallel change in life span and stress resistance (starvation resistance, in particular) has often been noted in Drosophila studies [28]. For instance, flies with loss of the Apolipoprotein D (ApoD) homolog exhibit reduced life span and lowered starvation resistance [29], while overexpression of the ApoD homolog leads to life span extension and starvation resistance (without alteration of lipid content) [30]. Other examples include the methuselah mutant and female flies with loss of chico, both being long-lived and starvation resistant [25,31]. Furthermore, starvation resistance has been successfully used as a screening strategy for identification of longevity genes [32]. However, our study makes the case that the two can be decoupled, and that caution should be exercised in using one to infer the other. An aspect of digestion that is conserved between flies and mammals is the role of an acidified gut compartment. Even yeast, a unicellular organism in which the vacuoles serve as digestive organelles, could be seen as conserving this strategy.
Interestingly, vha is also responsible for vacuolar acidification, and suppression of vacuolar acidity in yeast affects nutrient signaling, leading to shortened replicative life span [33]. Together with our findings, these observations suggest the intriguing possibility that the relationship between digestive organ/organelle acidity and life span may be evolutionarily conserved. In addition to supporting the robustness of findings observed in genetic models, another benefit of using pharmacological models in our study derives from the fact that suppression of midgut acidity is limited to adult fly life. This helps to exclude the possibility that suppression of larval midgut acidity could contribute to some of the phenotypes observed in the genetic models, although a previous study has suggested that larval midgut acidity may be dispensable [34]. Importantly, pharmacological models have direct clinical relevance since acid suppression is a commonly used treatment for both peptic ulcer disease and gastroesophageal reflux disease in human patients. Indeed, long-term treatment with PPI was shown to be associated with altered intestinal microflora [35,36] and with undesirable weight gain [4], the latter being consistent with our finding in flies. These previous reports, along with our findings, raise serious questions, and call for careful studies of the potential risks that may be associated with acid suppression therapy in humans. In conclusion, our study adds to the organismal-level understanding of the inter-relationships between gut homeostasis, metabolism, and aging, in which gut acidity plays a role. Flies and mice with deficient gut acidification recapitulate features of metabolic syndrome, and thus could be candidates for disease modeling. Further study is needed to elucidate precisely how gut acidity acts to modulate various aspects of gut homeostasis and metabolism, and it would be of great interest to test whether preservation of gut acidity, if feasible, can extend life span.
2018-04-03T03:29:18.380Z
2015-10-05T00:00:00.000
{ "year": 2015, "sha1": "83c626fd7fdfd4357c44afc7b9fc4f872e9ec86a", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0139722&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "06979d24e1bbb5e37208ca35beac640d995a6d08", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
12570847
pes2o/s2orc
v3-fos-license
Access to Care for Methadone Maintenance Patients in the United States This policy commentary addresses a significant access to care issue that faces methadone maintenance patients seeking residential treatment in the United States. Methadone maintenance therapy (MMT) has demonstrated strong efficacy in the outpatient treatment of opiate dependence. However, many opiate-dependent patients are also in need of more intensive interventions, such as residential care. Many publicly funded residential treatment programs explicitly decline to admit MMT patients, contending that methadone raises both clinical and administrative problems in treatment. Although advocates within the field believe that this issue is a violation of the Americans with Disabilities Act (ADA) in the United States, no lawsuits have been brought forth, and there is no legal precedent or public policy to inform the debate. The present paper provides an overview of this problem and discusses factors that may contribute to the problem, including an abstinence-oriented philosophy and treatment program operational concerns. The paper also draws parallels between methadone and other medical conditions and analyzes the problem in the context of disabilities encompassed in the ADA. Finally, recommendations on strategies for increasing access to residential care for MMT patients are provided. criminality (Teeson et al. 2006). Methadone maintenance therapy (MMT) has demonstrated high efficacy at reducing illicit opiate use (Gossop et al. 2003) and improving overall psychosocial functioning, including outcomes related to employment, criminal activity, and contraction of HIV/AIDS. In the United States, there are approximately 1,396 MMT programs serving over 254,000 patients (SAMHSA 2006). MMT patients receive limited services as part of their opioid replacement treatment, including medical screening, dosing, and outpatient counseling. However, for many patients, a higher level of care is necessary to deal with their opiate addiction or other comorbid disorders. Consequently, many MMT patients seek out a higher level of care at residential treatment programs. For opiate-dependent patients receiving MMT, the adage "more is better" seems to be supported by research. For example, delivery of more treatment services within the context of methadone treatment has been found to lead to better outcomes, with those receiving a higher number of services showing greater improvements, even after controlling for patient factors (Gossop et al. 2003). This study found that "treatment dose," as measured by number of days in treatment, number of treatment sessions attended, and number of services received, was predictive of positive outcome. Evidence also seems to suggest that patients with previous methadone treatment may differentially benefit from residential treatment. For example, Cheung and Ch'ien (1999) found that patients who participated in methadone treatment prior to residential treatment were five times more likely to be abstinent from heroin at a three-year follow-up point. The authors interpreted the findings to suggest that participation in methadone programs may facilitate successful outcome in residential treatment and that a compatible relationship between the harm reduction and abstinence-oriented approaches to treatment may be possible. Despite the effectiveness and widespread use of MMT, and the demonstrated benefit of combining MMT with other more intensive forms of treatment, integration remains a controversial topic.
In fact, many substance abuse treatment providers assert that MMT is incompatible with recovery and with the abstinence-based treatment models of most residential treatment programs. Methadone maintenance program staff have reported years of frustration in trying to obtain residential treatment services for their MMT patients, as many state-licensed and publicly funded residential treatment programs have policies in place that explicitly deny access to care for MMT patients (Zweben et al. 1999). This paper is not intended to be an exhaustive critique of access to care issues for MMT patients, but proposes to focus on several key factors that may contribute to access to care problems in the United States, offer several arguments in favor of increasing access to care, and provide some practical recommendations for increasing access to care for this population. Contributing Factors Multiple factors likely contribute to the current status of residential treatment for MMT patients in the United States. Although systems-level factors such as the limited availability of residential treatment and insurance/payment issues probably play an important role, this section will focus on two key factors: the abstinence-oriented philosophy that is commonly espoused within residential treatment programs, and the operational concerns that face residential treatment programs that extend treatment services to MMT patients. Impact of Abstinence-Oriented Philosophy Until the last several decades, the evolution of substance abuse treatment occurred largely in isolation from scientific study or traditional medical care (Miller et al. 2006). Mainstream society and the medical field viewed addictions as moral problems of the will, rather than valid medical disorders, and consequently devoted few resources to scientifically investigating effective treatments. As a result, substance abuse treatment emerged largely in the form of treatment by compassionate peers, who were themselves in recovery. This peer-led perspective has generally been strongly influenced by viewpoints that complete abstinence is the only route to recovery and that many medications, including methadone, merely enable individuals to maintain addictive lifestyles. This viewpoint is also widely held in society at large, in which abstinence models tend to evoke more support than methadone. Zweben and Sorensen (1988) pointed out that American society regards methadone as a "necessary evil" and believes that "patients should use the least possible amount," an adage that is unsupported by research. With the rise of behavioral research in the last several decades, we now have a variety of empirically evaluated treatment approaches that do not follow an abstinence-oriented philosophy. Yet abstinence models of recovery still have a dominant influence, especially in residential treatment facilities. The use of abstinence-oriented models, such as Alcoholics Anonymous, has been found to benefit patients who electively choose to participate in such programs, but not those who are coerced to participate (Kownacki and Shadish 1999). Despite this, abstinence-oriented treatment philosophies have become integrated into a variety of other treatment modalities, including many residential treatment programs.
Providers who hold an abstinence-based view of recovery have been found to be less receptive to the dissemination of evidence-based practices and to rely more heavily on testimonial evidence and personal experience when making clinical decisions (Miller et al. 2006). These tendencies may have contributed to the bias against MMT that often exists within residential treatment programs. Decades of misinformation within these abstinence-oriented programs and a lack of receptivity to evidence demonstrating effectiveness have created a stigma against the use of methadone in recovery. This stigma, in turn, has resulted in the denial of treatment services to MMT patients within many residential treatment programs. Treatment Program Operational Concerns Residential treatment programs often have treatment philosophies contending that abstinence from all substances is necessary for "true" recovery, and these beliefs about the correct path to recovery are commonly cited as the reason for denying treatment to MMT patients. However, Zweben et al. (1999) also describe some practical operational concerns that may serve to prevent equal access to care. As they describe, "residential programs vary widely in their sophistication, and some have little or no experience in dealing with medication and related matters" (Zweben et al. 1999, p. 250). Consequently, these programs may feel unable to adequately address issues such as safe storage, monitoring patient dosage, and collaborating with physicians and methadone programs. While much of the stigma against the use of MMT does not seem grounded in evidence, some important arguments against the integration of MMT and residential treatment have been put forth. Residential treatment programs are faced with a complex context for their clinical decision making (Zemore and Kaskutas 2008). Unlike methadone clinics, in which the behavior of one client has little effect on others, patients within residential treatment programs are highly dependent on one another. Here the behavior of one individual can have a huge effect on the overall environment and, consequently, what may be beneficial to one client may be harmful to the community as a whole. Some residential treatment providers argue that the frequent trips to the methadone clinic that are often required for appropriate dosing can significantly interfere with the residential treatment process. For one, providing transportation to and from methadone clinics can be time-consuming, expensive, or impractical for residential staff. In addition, an important component of residential treatment programs is that they are designed to help patients sever ties to negative influences in their outside environment. Frequent trips to the methadone clinic have been argued to provide patients with many potentially dangerous opportunities to maintain these bonds. MMT clients who have to leave the site to get dosed may also miss important group activities, interfering with assimilation into the community, and sometimes making non-MMT clients feel that they are receiving special privileges. The "nodding out" behavior, or abruptly falling asleep during sedentary activity, that is sometimes associated with methadone treatment has also been cited as potentially disruptive to groups. One last potential complication of integrating the two forms of treatment is the ability of residential programs to securely store and distribute take-home doses of methadone.
Arguments in Support of Increasing Access to Care While research has demonstrated that residential treatment improves the treatment prognosis of MMT patients, there are several other compelling reasons to increase access to care for this population. Drawing parallels between MMT patients and patients with other medical conditions may help to alleviate potential bias against this group by adding historical perspective to the debate. In addition, denial of treatment may be a violation of the Americans with Disabilities Act, which is aimed at providing equal opportunities for individuals with physical and mental disabilities, including substance use disorders. Parallels to Other Conditions Although the participation of MMT patients in residential treatment raises some potential concerns, the obstacles are not unlike what one might expect for an individual with another medical condition, such as epilepsy (Grunfeld and Komlodi 2006). Here too, the individual would require frequent trips to a health care provider and might be prescribed anticonvulsants with side effects of drowsiness. In addition, little controversy can be found within the literature regarding the appropriateness of residential treatment for individuals with other medical disorders, suggesting that stigma and differences in treatment philosophy are a driving force behind the denial of treatment services to MMT patients. Greenberg et al. (2007) point out that there was similar ideological contention years ago, when psychiatric medications were first being utilized in addiction treatment centers, but now psychiatric medications are seen as the standard of care for patients with co-occurring disorders. In addition, at least two published articles have demonstrated that the potential barriers to integration of MMT and residential treatment can be successfully overcome with appropriate staff training and collaboration between sites (Zweben et al. 1999; Sorensen et al. 2009). Americans with Disabilities Act Implications Limitations in access to care for MMT patients hinder these patients from obtaining needed substance abuse and mental health treatment services. While differences in norms regarding effective treatments and practical considerations may serve as barriers, collaboration between residential treatment and methadone programs would ultimately increase access to evidence-based care for MMT patients. Issues of disability have special salience for MMT patients (Benoit et al. 2004), who are commonly denied services as a result of their use of prescribed medications for the treatment of their substance dependence. In fact, many interpret the denial of residential treatment services as a violation of the Americans with Disabilities Act (ADA; Zweben et al. 1999). However, the ADA can be interpreted to inform the denial of treatment services to MMT patients in several ways. The ADA (1990) is an American civil rights law that prevents discrimination based on disability. The law was enacted in 1990 and protects individuals with disabilities from discrimination in areas such as employment and receipt of public services. The ADA defines disability as "a physical or mental impairment that substantially limits one or more of the major life activities of such individual." Under this definition, an individual with substance dependence is considered a qualified individual with a disability. One caveat to this definition is that a qualified individual with a disability shall not include individuals who are engaging in the illegal use of drugs.
However, the definition of illegal use of drugs does not include the use of drugs taken under the supervision of a licensed health care professional, as is the case with prescribed methadone. Section 302 of the ADA prohibits discrimination by health care providers in the form of denial of participation in services. Here the imposition or application of eligibility criteria that screen out individuals with a disability is considered discrimination. This section can be interpreted as clearly prohibiting the denial of residential treatment services to opiate-dependent MMT patients. However, the ADA also specifies that denial of services on the basis of disability is not considered discrimination in cases where making such accommodations would fundamentally alter the nature of the service or would result in an undue burden. This section can be interpreted to exempt denial of treatment to MMT patients from the label of discrimination, by arguing that allowing MMT patients to participate in abstinence-based treatment would fundamentally alter the nature of the treatment program and compromise the recovery of other treatment participants. While discrimination against individuals taking prescribed medications through the denial of treatment may be a violation of the ADA, the authors are aware of no litigation that has yet been brought forth to establish legal precedent with regard to methadone within residential treatment. In addition, as yet there are no legislative policies or licensing regulations in place that address equal access to care for MMT patients. Discussion and Recommendations The purpose of the present paper was to increase awareness of this access-to-care issue, explain both sides of the argument, and provide a rationale for improving access to care for MMT patients. It should be acknowledged, however, that this paper is far from a comprehensive evaluation of the access-to-care issues that face MMT patients. The paper focuses on several important factors that may contribute to the present access-to-care inequities and offers several arguments in favor of increasing access to care. The paper is also limited to examples from the United States, and the research and discussion would be further enhanced by examining additional perspectives. In the paper, we aimed to demonstrate that access to residential treatment would clearly benefit MMT patients. While no legal precedent has been established regarding the implications of MMT patients' access to care, a solution in which MMT patients can receive residential treatment without fundamentally altering the treatment experience of other patients would be optimal. Greenberg et al. (2007) suggest several practical strategies that may improve the feasibility of methadone within residential treatment, including educating staff about methadone to eliminate misconceptions and reduce staff-generated stigma. In addition, the authors recommend preparing staff to deal with difficult situations regarding methadone, including jealousy from other patients and nodding-off behaviors. The authors also recommend educating other patients on methadone-related topics and facilitating twelve-step-oriented MMT groups within the program. A remaining issue is whether the treatment models used in medically oriented methadone programs and abstinence-oriented therapeutic communities are so incompatible that they cannot be combined effectively.
Cherry (2008) points out that philosophical differences can so deeply divide mental health and addiction services that it is impractical to integrate them. While further research is needed on this topic, Sorensen et al. (2009) recently completed a trial that found similar outcomes of residential treatment for matched MMT and non-MMT patients. In the trial, MMT patients were found to have residential treatment and substance abuse outcomes that were no different than their non-MMT counterparts, painting an optimistic picture of the possibility of equal access to residential care for MMT patients.
Soliton solutions of the nonlinear Schrödinger equation in left-handed metamaterials by three different techniques This paper derives exact traveling-wave and soliton solutions of the nonlinear Schrödinger equation (NLSE) with higher-order nonlinear terms for left-handed metamaterials (LHMs). The authors apply three different methods, namely the csch function method, the exp(−ϕ(ξ))-expansion method and the simplest equation method. The solutions obtained are dark solitons, bright solitons and other solutions that are well known in optical metamaterials and LHMs. Introduction It is well known that nonlinear partial differential equations (PDEs) such as the nonlinear Schrödinger equation with higher-order nonlinear terms describe complex physical phenomena arising in many fields, from physics to biology [1-17]. Recently, effective methods for obtaining soliton solutions in LHMs and optics have attracted the attention of many researchers, since soliton theory is an important and fascinating area of research in nonlinear left-handed metamaterials and optics. Houwe Alphonse et al [4] studied optical solitons in left-handed metamaterials. M Mirzazadeh et al [5] reported solitons of the generalized resonant dispersive nonlinear Schrödinger equation with power-law nonlinearity. I. V. Shadrivov et al [6] studied spatial solitons in left-handed metamaterials. Ekici Mehmet et al [7] investigated optical solitons in birefringent fibers with Kerr nonlinearity. Biswas et al [8] obtained bright and dark solitons for metamaterials (MMs). Anjan Biswas et al [9] demonstrated the existence of singular solitons in optical metamaterials by the ansatz method and the simplest equation approach. Alphonse Houwe et al obtained solitons of the perturbed nonlinear Schrödinger equation in nonlinear left-handed transmission lines [18]. In this perspective, many methods for obtaining exact solutions of the NLSE have been investigated, such as the tanh-sech method [10,11], the exponential rational function method [12], the sine-cosine method [13,14], the modified simple equation method [15], and so on. In [16], N Taghizadeh et al used the first integral method to find exact soliton solutions of the nonlinear Schrödinger equation, and Ma and Chen [19] used a direct search method to obtain exact solutions of the same equation. This cubic nonlinear Schrödinger equation [16,19], which is similar to that obtained in a left-handed transmission line loaded with a varactor, takes the form of equation (1), where u = u(x, t) is a complex-valued function of the two real variables x and t, a is the group-velocity dispersion and c is the nonlinearity coefficient. The index m > 0 is the full nonlinearity parameter. The case a = p = 1, c = q = μ and m = 1 corresponds to the nonlinear Schrödinger equation discussed in [19]. Solitary waves of the nonlinear Schrödinger equation in left-handed metamaterials can pave the way for related studies, e.g., of modulational instability. In the present paper, in order to obtain exact solutions and soliton solutions of the model of equation (1), the authors use three integration schemes.
They are csch method, the exp(−f(ξ))-Expansion method and the simplest equation method that will uncover solitons solutions to the model. The beginning hypothesis is the traveling-wave transformation. The elaboration are all recorded in the upcoming section. Traveling wave assumption The solution of equation (1) is supposed to be where U(ξ) is the amplitude component of the wave and x =x vt, while v is its speed. Here θ (x, t)=−kx+ω t+θ 0 represents the phase component of the soliton. The parameters ω, k and θ 0 are respectively the inverse pulse width, the frequency and the phase constant. After changing the variables, and substituting equation (2) into equation (1), and separating the real and imaginary parts it is obtained: from equation (3) leads to the speed wave of the soliton : Now, multiplying equation (4) by ¢ U and integrating once with zero constant gives w ¢ - can be written as follows w ¢ - Application In this section, three different integrations tools will be applied to befall exact solution and soliton solutions csch function method The solutions of many nonlinear equation can be expressed in form [20] x and admits the following derivative where A, τ and μ are parameters to be determined, μ is the wave number Substituting equations (10) and (9) into the reduced equation equation To Balance the terms of the csch functions to find τ. 2τ+2=(m+2)τ, and t = Solving the system of equations equation (12) and equation (13) result is: then, if m>2, and therefore The key step is to suppose that the solution of equation (8) can be expressed by a rational polynomial as the following : the parameter N, it obtained by balancing the highest-order linear term with the nonlinear term, where i=(0, 1, K.., N) and f (ξ) satisfies the following ordinary differential we balance The constraint condition is m 2 −4>0. The simplest equation method The demarche is to suppose that V(ξ) satisfies the Bernoulli and Riccati equations method [22,23]. The step is to introduce the solution V(ξ) of equation (8) in the following finite series form where a i are real constants with ¹ a 0 N , and N is a positive integer to be determined. f(ξ) satisfies the following ordinary differential equation Where ρ, A and B are independent on ξ, and will be determined later To obtain different exact solution and other solutions dependent of the parameters ρ, A and B two cases will be present If we surmise m=2, equation (8) becomes: By balancing the linear term of highest order derivatives with the highest order nonlinear term in equation (32), leads to (N−1) 2 =0, and N=1 Then equation (30) becomes (2), is obtained (2), the result is (2), the following solutions of equation (1) is obtained For A>0, where ξ 0 is the integration constant. Some graphical representations In this part of the paper, the application of the results obtained above are illustrated. Figures 1-5 are the graphical representation of equation (41). By varying the parameters k, a 1 , a, c, v, ω,one arrives at graphic representations well known in LHMs and optical fiber from the different graphical representations above, the solitons solutions (dark, bright) and other solutions obtained by the simple equation method.The results obtained are comparable to those well known in [18,24]. 
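One step of the derivation above that can be checked mechanically is the csch-exponent balance quoted in the csch-method subsection, 2τ + 2 = (m + 2)τ. The short SymPy sketch below simply solves this relation for τ and tabulates the resulting exponent for a few sample values of m; it is an illustration of that single balancing step, not a re-derivation of the solutions themselves.

```python
# Balance of csch-function exponents quoted above: 2*tau + 2 = (m + 2)*tau
import sympy as sp

tau, m = sp.symbols('tau m', positive=True)
tau_sol = sp.solve(sp.Eq(2*tau + 2, (m + 2)*tau), tau)[0]
print(tau_sol)                                # 2/m

for m_val in (1, 2, 3, 4):
    # exponent tau used in the ansatz U = A * csch(mu * xi)**tau
    print(m_val, tau_sol.subs(m, m_val))
```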
Summary In this study, the authors successfully apply three different methods, namely the csch function method, the exp(−ϕ(ξ))-expansion method and the simplest equation method, to construct soliton solutions and other solutions to the nonlinear Schrödinger equation (1). The results obtained are dark, bright and singular 1-soliton solutions. Note that the first two integration schemes failed to find known solitons. In the future, this model can be studied from different perspectives. Subsequently, the model will be extended to include perturbation terms and spatiotemporal dispersion, from which abundant 1-soliton solutions and other solutions will certainly be obtained. These results will be made available later.
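As a quick cross-check of the type of solution reported above, the SymPy sketch below verifies that a bright-soliton ansatz solves the standard cubic NLSE, i u_t + a u_xx + c |u|^2 u = 0, i.e. the m = 1 reduction that the introduction identifies with the equation of [19]. The parameter relations v = 2ak, ω = a(k² − B²) and A = B√(2a/c) used here are the standard ones for that cubic case and are assumptions for illustration, not expressions taken from the solutions derived above.

```python
# SymPy check: a bright-soliton ansatz for the standard cubic NLSE
#   i*u_t + a*u_xx + c*|u|^2*u = 0      (assumed m = 1 reduction, cf. [19])
# with the assumed parameter relations v = 2*a*k, w = a*(k**2 - B**2), A = B*sqrt(2*a/c).
import sympy as sp

x, t = sp.symbols('x t', real=True)
a, c, B, k = sp.symbols('a c B k', positive=True)

v = 2*a*k                       # speed fixed by the imaginary-part balance
w = a*(k**2 - B**2)             # frequency fixed by the sech-term balance
A = B*sp.sqrt(2*a/c)            # amplitude fixed by the sech^3-term balance

envelope = A*sp.sech(B*(x - v*t))
u = envelope*sp.exp(sp.I*(k*x - w*t))

# |u|^2 equals envelope**2 for this ansatz, since the envelope is real and positive.
residual = sp.I*sp.diff(u, t) + a*sp.diff(u, x, 2) + c*envelope**2*u
print(sp.simplify(residual.rewrite(sp.exp)))   # expected output: 0
```

Replacing the nonlinear term by the higher-order one of equation (1) and re-running the same residual check is a cheap way to sanity-check any of the closed-form solutions produced by the three schemes.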
Coherent Dynamics in Quantum Emitters under Dichromatic Excitation We characterize the coherent dynamics of a two-level quantum emitter driven by a pair of symmetrically-detuned phase-locked pulses. The promise of dichromatic excitation is to spectrally isolate the excitation laser from the quantum emission, enabling background-free photon extraction from the emitter. Paradoxically, we find that excitation is not possible without spectral overlap between the exciting pulse and the quantum emitter transition for ideal two-level systems due to cancellation of the accumulated pulse area. However, any additional interactions that interfere with cancellation of the accumulated pulse area may lead to a finite stationary population inversion. Our spectroscopic results of a solid-state two-level system show that while coupling to lattice vibrations helps to improve the inversion efficiency up to 50\% under symmetric driving, coherent population control and a larger amount of inversion are possible using asymmetric dichromatic excitation, which we achieve by adjusting the ratio of the intensities between the red and blue-detuned pulses. Our measured results, supported by simulations using a real-time path-integral method, offer a new perspective towards realising efficient, background-free photon generation and extraction. Solid-state quantum emitters, in particular semiconductor quantum dots (QD), offer a promising platform for generating quantum states that can facilitate dephasingfree information transfer between nodes within an optical quantum network [1][2][3][4]. On-demand indistinguishable photon streams for this purpose can be made using coherent excitation of QDs [5]. While resonance fluorescence of QD suppresses detrimental environmental charge noise [6] and timing jitter [7] in the photon emission, the excitation laser must be filtered from the single photon stream. Typically, this is achieved with polarization filtering of the resonant laser. However, unless employing a special microcavity design [8][9][10], polarization filtering inherently reduces collection efficiency by at least 50%. This motivates the consideration of alternative off-resonant excitation techniques to allow spectral filtering [11]. In particular, off-resonant phonon-assisted excitation [12,13], resonant two-photon excitation [14] and resonant Raman excitation [15] schemes benefit from being able to spectrally isolate the zero-phonon line from the laser spectrum, to enable efficient single photon generation. The argument is that this would make it possible to efficiently excite quantum emitters using pulses that are spectrally separated from the fundamental transition. However, a different picture unfolds when the system 2 dynamics is considered in more detail: the Hamiltonian of an ideal two-level system (2LS) driven by a dichromatic pulse f (t) = R (t)e −i∆t + B (t)e i∆t with real envelopes of the red-and blue-detuned pulses R (t) and B (t) (in the rotating frame with respect to the 2LS) is given by where we have expressed the two-level state as a pseudospin Bloch vector s precessing about the time-dependent precession axis Ω(t). In the limit t → ∞ the integral in Eq. (4) becomes the Fourier transform F[f ](ω = 0) of f (t), evaluated at the two-level transition frequency. This proves analytically that -irrespective of the driving strength, no excited state population exists after the pulse unless there is overlap between the dichromatic excitation spectrum and the fundamental transition of the quantum emitter. 
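A minimal numerical companion to this argument is sketched below: it integrates the Schrödinger equation for an ideal 2LS driven by a symmetric dichromatic pulse with Gaussian envelopes, f(t) = Ω(t)(e^(−iΔt) + e^(+iΔt)), and prints the transient and final excited-state populations. All numerical values (detuning, envelope width, peak Rabi frequency) are illustrative assumptions rather than parameters from the experiment; with the pulse spectrum well separated from the transition, the sizeable transient population almost completely cancels by the end of the pulse, the small residual coming only from the Gaussian tails.

```python
# Ideal two-level system under a symmetric dichromatic pulse (rotating frame, hbar = 1).
# Illustrative parameters only; shows the transient excitation largely cancelling.
import numpy as np
from scipy.integrate import solve_ivp

delta  = 2.0 * np.pi * 1.0     # detuning of each pulse from the transition (rad/ps)
sigma  = 3.0                   # Gaussian envelope width (ps)
omega0 = 2.0 * np.pi * 0.2     # peak Rabi frequency of each spectral component (rad/ps)

def drive(t):
    env = omega0 * np.exp(-t**2 / (2.0 * sigma**2))
    return env * (np.exp(-1j * delta * t) + np.exp(+1j * delta * t))  # = 2*env*cos(delta*t)

def rhs(t, psi):
    cg, ce = psi[0] + 1j * psi[1], psi[2] + 1j * psi[3]
    f = drive(t)
    dcg = -0.5j * np.conj(f) * ce        # i d|psi>/dt = H(t)|psi>, H = (f sigma+ + f* sigma-)/2
    dce = -0.5j * f * cg
    return [dcg.real, dcg.imag, dce.real, dce.imag]

sol = solve_ivp(rhs, (-20.0, 20.0), [1.0, 0.0, 0.0, 0.0],
                max_step=0.01, rtol=1e-9, atol=1e-11)
pop_e = sol.y[2] ** 2 + sol.y[3] ** 2
print(f"max transient excited population: {pop_e.max():.4f}")
print(f"final excited population:         {pop_e[-1]:.6f}")
```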
To illustrate this, Figure 1(a) presents the dynamics of an ideal 2LS under dichromatic excitation with Gaussian pulses. This shows transient excited state population which, however, vanishes again towards the end of the pulse, i.e. the overall accumulated pulse area does indeed cancel almost completely. The small but finite residual occupation can be explained by the nonzero overlap between the tails of the Gaussians and the fundamental transition. Consequently, coherent Rabi-like oscillations with unity population inversion can still be obtained, albeit at much larger intensities, as depicted in Figure 1(b). However, this obviously defeats the purpose of employing the dichromatic excitation scheme. The transient excited state occupation is key to understanding how significant population inversion can still be obtained even if the exciting pulse has no spectral overlap with the transition of the emitter: Any additional interaction or dissipation can interfere with the complete cancellation of the pulse area, and thus lead to a finite population inversion after the pulse. For example, in a laser-driven QD, the interaction with phonons induces incoherent thermalization dynamics in the instantaneous laser-dressed state basis, unlocking excited state population up to ∼ 50%. In this Letter, we propose and experimentally demonstrate an alternative, externally controllable approach to dichromatic pulsed excitation (DPE). To obtain large stationary occupations of the excited state we employ asymmetric dichromatic excitation with red and bluedetuned pulses with different intensities. For B (t) = R (t), the Bloch sphere precession axis Ω(t) then has a finite time-dependent y-component, adding more degrees of freedom to the coherent dynamics. For suitable parameter choices, such asymmetric DPE can result in Bloch sphere trajectories that coherently evolve towards the excited state at long times. We experimentally verify our insights by characterizing the dynamics and quality of the scattered photons under DPE of a solid-state 2LS. As discussed in detail in the following, our results confirm a maximal population transfer fidelity of approximately 50% under symmetric DPE, owing to incoherent phonon-induced dynamics. Further, we show that coherent dynamics with a population inversion of 80% are achievable through an asymmetric weighting of the red and blue components of the dichromatic pulse. We conclude our study by analyzing the quality of the resulting photons in terms of the degree of multi-photon suppression and Hong-Ou-Mandel (HOM) visibility. As a solid-state 2LS for the dichromatic excitation experiments, we use the negatively-charged exciton transition, X 1− , of a charge-tunable, planar cavity InGaAs QD sample [18,23]. Figure 2(a) shows the experimental setup to generate the dichromatic pulses for excitation. A mode-locked laser with 80.3 MHz repetition rate and 160 fs pulse width is sent to a folded 4f setup, which consists of lenses, beam expanders (BE), a grating, a set of two motorized razor blades (RB) and a beam block. The RB control the overall spectral width of the diffracted beam while a beam block placed between them removes the undesired frequency component resonant with the zero-phonon line, simultaneously ensuring phase-locking. After back reflection on a mirror, the remaining light recombines on the same grating and gets coupled into an optical fibre before exciting the QD. 
Figure 2(b) depicts an example of the spectra of the excitation laser and the absorption profile of the QD, detuned from the zero-phonon-line (ZPL) at ω 0 = 1.280 eV (968.8 nm), measured using a spectrometer with ∼ 30 µeV resolution. The spectrum of the QD shows an atomic-like zero-phonon line (ZPL), along with a broad, asymmetric phonon sideband (PSB) arising primarily from interaction with longitudinal acoustic (LA) phonons [23,24]. The excitation laser spectrum shows the spectral width and the separation of the red and blue sideband of 0.5 meV and 1.2 meV, respectively. We define the pulse contrast C of the dichromatic pulse as a function of the integrated intensity of the red (I R ) and blue (I B ) sideband, as C = (I B −I R )/(I B +I R ). Finally, after filtering on the ZPL, the scattered photons are detected on a superconducting nanowire single photon detector with ∼ 90% detection efficiency at ∼ 950 nm. We first compare the experiment results for symmetric dichromatic driving (C = 0), blue-detuned excitation (C = 1), and red-detuned excitation (C = −1) with that obtained via pulsed resonant fluorescence (RF). These results are depicted in Figure 2(c). While we observe the expected Rabi oscillation under RF, we record much higher emission intensities at C = 1 than at C = −1, consistent with findings from previous studies, and corresponding to phonon-assisted excitation [25,26]. Contrary to the expected minimal state occupation for an ideal 2LS under symmetric dichromatic excitation at C = 0 (c.f. Figure 1), we observe a population inversion fidelity of ≈ 50% at a pulse area of ∼ 20 π. We attribute this to the unavoidable electron-phonon interaction: as discussed, phonon-induced thermalization allows occupations of 50%, compared to only vanishingly small levels for a dissipation-less 2LS. We now proceed to characterize the dynamics of the system under asymmetric DPE. To achieve this, an additional beam block mounted on a motorized translation stage is added in front of the RB to allow independent control of the width of the red or blue sideband. The excitation pulse is split via a 99/1 fibre beam splitter, with the low power channel sent to the spectrometer to estimate the pulse contrast and the higher power channel used to excite the QD. Figure 2(d) shows the experimental measured emission count rate as a function of pulse contrast and excitation power. We compare the experimental data with simulations using a numerically exact real-time path-integral formalism [27] with parameters typical of GaAs QDs [28] and employing a pair of rectangular driving pulses. We refer to Section I in the Supplementary Materials [29] (SM) for full simulation parameters. The simulation, taking into account of the excitonphonon coupling in Figure 3(a) shows close qualitative agreement with the experimental data. For comparison, the dynamics obtained in the absence of exciton-phonon coupling is depicted in Figure 3 To better illustrate the coherent dynamics of the asymmetric pulses, in Figure 3(d) we present simulated 2LS population dynamics on the Bloch sphere for a pulse area up to the first coherent oscillation in Figure 3(c) at C = −0.65. In the absence of dissipation (top), the state of the 2LS remains pure and is constrained to the surface of the Bloch sphere. The nontrivial spiralling trajectory is a consequence of the time-dependent x and y components of the effective electric field associated with the asymmetric pulse. 
In this particular instance, the trajectory evolves towards the excited state located at the north pole. When the interaction with phonons is accounted for (bottom), the system features mixed-state dynamics that are no longer restricted to the surface of the Bloch sphere. Qualitatively, the spiralling trajectory still looks similar to that of the phonon-free case. However, now the excited state is no longer reached. Rather, the projection onto the z-axis gives a final excited state occupation of ≈ 60%. Note that this value is lower than the measured 80% inversion fidelity, likely due to a slight mismatch in the pulse shape between simulation and experiment. In any case, our combined results indicate that for C ≈ −0.65 phonons certainly quantitatively affect the dynamics but dominant coherent oscillations nonetheless survive. In contrast, for positive pulse contrast, the higher maxima of the coherent oscillations are strongly suppressed by the interaction with phonons. This qualitative difference in behaviour between positive and negative pulse contrast is attributable to the differing spectral overlap of the dichromatic pulse pair with the QD's phonon side band (cf. Figure 2). Richer and even more complex dynamics emerges when moving beyond the case of a 2LS. In Sections VII and VIII of the SM [29] we present a range of spectroscopic results from more complex multi-level solid-state systems; however, a full exploration of those systems under DPE is beyond the scope of the present study. Having identified the pulse contrast and excitation power that optimize the emission count rate, we proceed to characterize the single photon performance from our QD under DPE. By sending the photons into a Hanbury-Brown and Twiss interferometer, we observe multi-photon suppression of g(2)(0) = 0.016 (1), indicating near-perfect single photon emission, as shown in Figure 4(a). (Figure 4, panels (c) and (d): close-up of the zero-delay peak of g(2)∥, revealing a dip due to temporal filtering from our detectors, with dashed (green) and solid (orange) lines showing the convolved and de-convolved fits to the data; and the two-photon interference visibility VHOM as a function of the integration time window around τ = 0, with solid (dashed) lines obtained by integrating the convolved (de-convolved) fit.) We then measure the indistinguishability of the scattered photons via HOM interference between two consecutively emitted photons at a time delay of 12.5 ns. The figure of merit here is the two-photon interference visibility VHOM, determined by sending the photons into an unbalanced Mach-Zehnder interferometer with an interferometric delay of 12.5 ns to temporally match the arrival time of subsequently emitted photons on the beam splitter. Figure 4(b) and (c) show the normalized HOM histogram as a function of time delay τ between detection events for photons prepared in cross (g(2)⊥) and parallel (g(2)∥) polarizations within a 60 ns window and a 6 ns wide zoom into the central peak, respectively. This close-up on the co-polarized g(2) peak near the zero-delay illustrates the characteristic dip. We fit the experimental data with the function of Refs. [30-33], convolved with a Gaussian instrument response function (bandwidth of 0.168 ns), where the independently measured lifetime is T1 = 687 (3) ps. This yields a de-convolved visibility of Vdeconv.HOM = 0.95 (1) and a 1/e width of τC = 0.33 (2) ns.
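The sketch below illustrates the convolved-fit procedure described above using a simple phenomenological form for the co-polarized zero-delay peak, g∥(τ) ∝ e^(−|τ|/T1)(1 − V e^(−|τ|/τC)). This functional form, the reading of the 0.168 ns instrument-response "bandwidth" as a Gaussian FWHM, and the synthetic data are assumptions made here for illustration; it is not the exact fit function of Refs. [30-33], only a demonstration of how a dip model convolved with a Gaussian IRF can be fitted and then read off in de-convolved form.

```python
# Fit sketch for the zero-delay HOM peak: a phenomenological dip model convolved
# with a Gaussian instrument response function (IRF).  T1 and the IRF width follow
# the values quoted in the text; the dip model itself is an assumed stand-in.
import numpy as np
from scipy.optimize import curve_fit

T1 = 0.687                    # ns, independently measured lifetime
irf_sigma = 0.168 / 2.355     # ns, Gaussian sigma assuming 0.168 ns is the FWHM

tau = np.linspace(-3.0, 3.0, 2001)      # detection time delay grid (ns)
irf = np.exp(-tau**2 / (2 * irf_sigma**2))
irf /= irf.sum()

def dip_model(tau, amp, vis, tau_c):
    """Assumed co-polarized peak: two-sided exponential with a dephasing dip."""
    return amp * np.exp(-np.abs(tau) / T1) * (1.0 - vis * np.exp(-np.abs(tau) / tau_c))

def convolved_model(tau, amp, vis, tau_c):
    return np.convolve(dip_model(tau, amp, vis, tau_c), irf, mode="same")

# Synthetic "data" standing in for the measured histogram (true vis = 0.95, tau_c = 0.33 ns)
rng = np.random.default_rng(0)
data = convolved_model(tau, 1.0, 0.95, 0.33) + rng.normal(0, 0.01, tau.size)

popt, _ = curve_fit(convolved_model, tau, data, p0=(1.0, 0.8, 0.2))
print("deconvolved visibility ~ %.2f, tau_C ~ %.2f ns" % (popt[1], popt[2]))
```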
The signature dip around the zero delay, usually present under non-resonant pumping and resonant two-photon excitation schemes, indicates deviation from the transform limit and thus imperfect photon wave packet coherence. The width of the dip corresponds to the characteristic time of T * 2 = 2τ C = 0.66 (4) ns for the inhomogeneous broadening of the emitter due to pure dephasing or timing jitter in emission [30,34]. We speculate that this may be dominated by phonon-induced dephasing, as we observe a narrower dip under phononassisted excitation while noting its absence under strict monochromatic resonant excitation. See Section III and IV in SM [29] for the corresponding experimental evidence and discussion. Figure 4(d) shows V HOM as a function of integration window around τ = 0 for temporal filtering of events between detection. Temporal post selection [31] increases the raw visibility, V HOM from 0.29 (2) to 0.81 (12) when narrowing the integration time window from 10 ns to 0.1 ns, respectively. Integrating the fit function to g (2) (⊥) (solid lines) gives a maximum convolved (de-convolved) visibility of V HOM = 0.81 (0.95). The presence of residue coincidences around the zero delay in the histogram for scattered photons under DPE indicates the effect of finite time jitter and dephasing in the photon coherence, rendering the scheme partially coherent. In summary, we have shown that, counter-intuitively, symmetric dichromatic excitation is unsuitable for achieving coherent population control of quantum emitters. Specifically, it suffers from excitation inefficacy due to cancellation of the accumulated pulse area, and the inversion efficiency scales with the spectral overlap of the driving pulses with the emitter resonance. This nullifies the purported advantage of separating the spectrum of the driving field from the emitter zero-phonon line for background-free photon extraction. Recognizing this problem, we demonstrate that a simple adjustment in the relative weighting of the red and blue-detuned pulses is sufficient to improve the population inversion efficiency whilst maintaining minimal spectral overlap. Unity population inversion is then possible for an ideal 2LS, and we have measured 80% inversion efficiency with our QD sample. The presence of intensity oscillations under asymmetric driving demonstrate the coherent nature of the observed dynamics, yet those dynamics deviate from canonical Rabi oscillations and intrinsically feature non-trivial and complex Bloch-sphere trajectories. Our work has further experimentally demonstrated near perfect multi-photon suppression and high levels of photon indistinguishability (via temporal filtering) for such an asymmetric dichromatic excitation approach. This provides a new route to coherently excite quantum emitters, opening the prospect of background-free single photon extraction with suitably optimized cavity-coupled pho-tonic solid-state devices [35][36][37]. where |g and |e are the ground and excited states of the quantum dot (QD), respectively, b † q is the creation operator of a phonon in mode q, ω q is the energy of mode q, and γ q describes the strength of the coupling between phonon mode q and the exiced QD state. Using a real-time path integral method [1], the dynamics induced by the total Hamiltonian H tot is solved numerically exactly, i.e., without any approximation other than a finite time discretization. 
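The coupling constants γ_q just introduced enter the path-integral calculation only through the phonon spectral density J(ω) = Σ_q |γ_q|² δ(ω − ω_q). A commonly used closed form for deformation-potential coupling to LA phonons with Gaussian electron and hole wavefunctions is J(ω) = ω³/(4π²ρħc_s⁵) · [D_e exp(−ω²a_e²/4c_s²) − D_h exp(−ω²a_h²/4c_s²)]², which the sketch below evaluates with the GaAs parameter values listed in the following paragraph; both this closed form and the unit handling are assumptions made for illustration rather than expressions reproduced from Ref. [2].

```python
# Deformation-potential spectral density J(omega) for a GaAs QD, evaluated with
# the parameters quoted in the text.  The Gaussian-form-factor expression used
# here is the commonly assumed one and is an illustration, not the paper's own.
import numpy as np

hbar = 1.054571817e-34      # J s
eV   = 1.602176634e-19      # J

rho  = 5370.0               # kg / m^3
c_s  = 5110.0               # m / s
a_e  = 3.0e-9               # m
a_h  = a_e / 1.15
D_e  = 7.0 * eV
D_h  = -3.5 * eV

def J(omega):
    """Spectral density in s^-1 for angular frequency omega (rad/s)."""
    pref = omega**3 / (4.0 * np.pi**2 * rho * hbar * c_s**5)
    form = D_e * np.exp(-(omega * a_e)**2 / (4.0 * c_s**2)) \
         - D_h * np.exp(-(omega * a_h)**2 / (4.0 * c_s**2))
    return pref * form**2

# Evaluate at a few phonon energies (meV); J peaks at a few meV for these dot sizes.
for E_meV in (0.5, 1.0, 2.0, 4.0):
    omega = E_meV * 1e-3 * eV / hbar
    print(f"J(hbar*omega = {E_meV} meV) = {J(omega) / 1e12:.3f} ps^-1")
```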
The influence of the phonons, which are assumed to be initially in thermal equilibrium at temperature T = 4 K, is uniquely determined by the phonon spectral density J(ω). For deformation potential coupling to longitudinal acoustic phonons We use standard parameters [2] for a GaAs-based QD with electron radius a e = 3.0 nm, hole radius a h = a e /1.15, speed of sound c s = 5110 m/s, density ρ = 5370 kg/m 3 and electron and hole deformation potential constants D e = 7.0 eV and D h = −3.5 eV, respectively. To approximate the pulses used in the experiment, we assume a rectangular shape in the frequency domain. The red and blue-detuned pulses each has a full-widthat-half-maximum (FWHM) of Γ = 0.4 meV and is detuned by ∆ = 0.6 meV from the transition energy of the two-level system. As in the experiment, different overall intensities for red and blue-detuned pulses are implemented via different spectral widths of the rectangles W R and W B while the heights of the rectangles are chosen to be the same. In the time domain these pulses take the form where t 0 is the time corresponding to the center of the pulse and A is the pulse area for a single resonant pulse in absence of QD-phonon interactions. Figure S1(a) shows the emission spectrum of the QD under resonant continuous wave (CW) excitation, showing a narrow zero-phonon line (ZPL, shaded) and a broad phonon-sideband (PSB), originated from relaxation from the phonon-dressed states. The fit (orange solid line) is obtained from the polaron model using previously cited parameters [3], which gives a ZPL fraction of ≈ 92%. The schematic of the energy level of the X 1− transition in zero magnetic field (inset of Figure S1A) and |↓ ⇔ |↑↓, ⇓ . Here, the single (double) arrows refer to electron (heavy-hole) spin state. Each transition has a well-defined optical selection rule such that it can be optically coupled with right (σ + ) or left (σ − ) circular polarized light. Keeping the frequency of the excitation laser fixed at ω 0 = 1.280 eV, we scan through the resonance of the QD via d.c. Stark tuning to measure the linewidth of the scattered QD photons. A Lorentzian fit to the detuning spectra of the QD in Figure S1(b) under weak excitation gives a full-width-at-half-maximum (FWHM) of Γ = 2.43 (6) µeV. Figure S1(c) shows the time-resolved lifetime measurement of the emission under pulsed monochromatic resonant excitation at πpulse. A single-sided exponential decay fit to the data (convolved with the instrument response function with FWHM of 160 ps) reveals an excited state lifetime of T 1 = 0.687 (3) ns. This corresponds to a transformlimited linewidth of ∼ 1 µeV. The deviation of measured linewidth Γ from the transform-limited linewidth indicates the existence of pure-dephasing from the solidstate environment, possibly originating from charge and spin noise of the QD device [4][5][6]. III. PULSED MONOCHROMATIC RESONANCE FLUORESCENCE (RF) To benchmark the performance of the dichromatic pulse excitation (DPE) scheme, we perform pulsed monochromatic resonance excitation (resonance fluorescence, RF) on the same transition (and the same QD). We optically excite the QD using a ≈ 14 ps-width pulse (spectral bandwidth of ≈ 80 µeV), and filter out the QD signal via polarization and spectral filtering to suppress the excitation laser spectrum. Figure S2(a) shows the normalized intensity of the emission as a function of the square root of the excitation power. 
We fit the data using the time-dependent excited state population function, derived from the pure dephasing model [7,8], showing coherent Rabi oscillation as a function of pulse area. Fixing the excitation power to a π-pulse, we perform intensity correlation and Hong-Ou Mandel (HOM)-type two-photon interference measurements on the scattered photons. Due to the imperfect excitation laser rejection (signal-to-background of ∼ 20), we obtain a multi-photon suppression of g (2) (0) = 0.080 (2) for the scattered photons, as shown in Figure S2(b). In Figure S2(c, d), we observe a post-selected HOM visibility V HOM of 0.84 (15) and 0.49 (3) at 100 ps and 10 ns integration windows, respectively. The HOM visibility is computed as the ratio of two-photon interference of consecutive photons, prepared in parallel, g (2) and in perpendicular polarization, g ⊥ , which follows as ⊥ . With monochromatic resonant excitation, despite the higher g (2) (0) due to the imperfect suppression of the excitation laser, we observe a higher two-photon interference visibility (V HOM = 0.58 (3) (1), see Figure S3 in Section IV). This implies, while the DPE scheme benefits from the fact that polarization filtering is not needed for background-free single photon collection, the RF excitation technique is still preferred as a means to generate single photons with higher indistinguishability. Figure S3 demonstrates the performance of the QD (for the same transition) under phonon-assisted excitation. The excitation laser pulse has a pulse width of 7 ps and is detuned ≈ 0.8 meV from the ZPL. The excitation pulse area is ≈ 20 π, corresponding to saturation count rate. The scattered photons are then spectrally filtered with the same 120 µeV-bandwidth grating filter to suppress the scattered laser. Despite large multi-photon suppression, giving g (2) (0) = 0.025 (1), due to the emission timing jitter that arises from the absorption of phonons assisting the population of the excited state, we observe a HOM visibility is slightly lower than that for the DPE and RF schemes, giving a post-selected HOM visibility of V HOM = 0.64 (14) and V HOM = 0.19 (1) at 100 ps and 10 ns integration windows, respectively. In addition, we observe a narrower (and shallower) dip (giving a 1/e width of 158 ps) around the zero-delay in g (2) , as indicated in Figure S3 (1), suggesting that dephasing due to the phonon-bath operates at a time scale way shorter than the bandwidth of our detection instrument response function (FWHM= 160 ps), as predicted in Ref. [9]. V. TWO-PHOTON INTERFERENCE VISIBILITY: COMPARISON BETWEEN RF AND DPE SCHEMES In this section, we report on the results of on the HOM visibility as a function of temporal delay between the arrival time of the two input photons on the beam splitter, δ. We render both input paths of the beam splitter indistinguishable in polarization (g (2) ), and measure the detection time delay τ between "click" events on the photon detectors for each δ. We perform this measurement on the same transition and QD under both RF and DPE schemes. Figure S4 shows the comparison of the HOM visibilities between the RF (a-c) and The fit (solid line) to the experimental data (circles) is derived from the pure dephasing model [7,8]. (b) Intensity-correlation histogram of the scattered photons at π-pulse shows a multi-photon suppression of g (2) (0) = 0.080 (2). (c) Two-photon interference histogram of the consecutively emitted QD photons at π-pulse, prepared in parallel (g (2) ) and perpendicular (g DPE (d-f) schemes. 
Figure S4(a) and (d) shows the normalized coincidence around the zero delay, g (2) (0), integrated over a window of 10 ns as a function of δ. When the two input photons perfectly overlap with each other on the beam splitter (δ = 0), we observe a minimum in g (2) (0). The data is fitted with a simple exponential function (g (2) (τ = 0, δ) = 0.5 × (1 − V exp(−|δ|/T 2 ))) to extract the coherence time T 2 and the visibility of the HOM dip, V, of the scattered photons. We obtain a T 2 = 0.457 (27) ns (T 2 = 0.548 (13) ns) and V = 0.49 (2) (V = 0.35 (1)) for scattered photons under RF (DPE) excitation. The lower HOM visibility V for the DPE scheme, despite much higher signal-to-background ratio, is due to presence of the dip in the detection time histogram. This is evident in Figure S4(b, c, e, f). Figure S4(b) and (e) show the 2D plot of the normalized coincidence as a function of both δ and τ . We observe a similar pattern reported in Ref. [10,11], in which the presence non-vanishing dip around the zero detection time delay τ = 0 is due to either the timing jitter or pure dephasing mechanism. Figure S4(c) and (f) show the coincidence histogram of g (2) (τ, δ) for δ =-0.9, -0.5, 0 and 0.5 ns. The appearance of the dip even at perfect overlap (δ = 0) in the DPE case with negligible background, is a signature of pure dephasing/timing jitter in the emission, which originates from phonon-induced dephasing [9,12]. In Section IV, we observe a similar signature (narrow dip around the zero time delay τ ) in g (2) (τ, δ = 0), which further confirms our claim that the HOM visibility suffers from the same phonon-induced dephasing mechanism in the DPE scheme. For an emission that is dephasing and jitter free, we expect the disappearance of the dip around τ = 0 [13]. We attribute the disappearance of the dip for the RF case as a signature of jitter-or dephasing-free performance, and the nonvanishing coincidence g (2) (τ = 0, δ = 0) is solely due to the imperfect filtering of the background laser scattering in the collection. With proper filtering to improve the signal-to-background ratio (ideally > 100), we should be able to minimize these coincidences, giving close to unity indistinguishability [14]. VI. DICHROMATIC PULSES WITH DIFFERENT PULSE PARAMETERS This section explores the population inversion efficiency of a solid-state two-level system for different pulse parameters under DPE. Here, we address the negativelycharged exciton (X 1− ) transition of a different QD. The two pulse parameters: pulse width, ∆ω, and pulse detuning, ∆, are used to characterize the pulse shapes. They are defined as the spectral width of the red/blue-detuned pulse and the detuning between red and blue-detuned pulses, respectively. To reduce experimental complexity, we vary the pulse width and detuning symmetri- cally, keeping the red and blue-detuned components of the dichromatic pulses the same throughout. Figure S5 shows the emission spectra and the detected count rates from pulsed RF and DPE at various pulse parameters. Here, we vary the thickness of the beam block and the separation of the razor blades in the pulse strecher (see Figure. 2(a) in the main text) to remove the particular spectral components in the original 160 fs (corresponds to spectral bandwidth of ∼ 11 meV) laser pulse. 
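The exponential model for the HOM dip quoted above, g(2)(τ = 0, δ) = 0.5 × (1 − V exp(−|δ|/T2)), can be fitted with a few lines of SciPy. In the sketch below the "data" are synthetic values generated with the RF-like numbers quoted in the text (V ≈ 0.49, T2 ≈ 0.457 ns) and merely stand in for the measured coincidences.

```python
# Fit of the HOM dip versus interferometer delay, using the model quoted above:
#   g2(0, delta) = 0.5 * (1 - V * exp(-|delta| / T2))
import numpy as np
from scipy.optimize import curve_fit

def hom_dip(delta, V, T2):
    return 0.5 * (1.0 - V * np.exp(-np.abs(delta) / T2))

# Synthetic stand-in data with the RF-like values quoted in the text.
delta = np.linspace(-1.5, 1.5, 31)          # ns
rng = np.random.default_rng(1)
g2 = hom_dip(delta, 0.49, 0.457) + rng.normal(0, 0.01, delta.size)

(V_fit, T2_fit), _ = curve_fit(hom_dip, delta, g2, p0=(0.5, 0.5))
print(f"V = {V_fit:.2f}, T2 = {T2_fit:.3f} ns")
```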
The excitation laser spectra used for dichromatic excitation is illustrated in Figure S5(a), along with the resonantly driven emission spectrum from the X 1− transition under pulsed RF at π-pulse, in order to highlight the spectral overlap between the laser pulses and the broad phonon-sideband. Figure S5(b) shows the emission count rate as a function of square root of the average excitation power for various DPE pulse parameters. The pulse area is normalized to the optical power needed for a π-pulse under pulsed monochromatic RF (pulse width of ∼ 35 ps and bandwidth of ∆ω = 0.05 meV). Under pulsed RF, we observe Rabi oscillation in emission intensity as excitation power increases beyond the π-pulse. The fit (solid blue line) to the experimental data (RF, blue circles) is derived from the same model used for the fit in Figure S2(a). Unlike the monochromatic RF case, we observe a sigmoid-like saturation curve in the emission intensity. We observe reduction of saturation intensities, below 0.5 times the intensity at π-pulse under RF, and an increase in the excitation power needed to reach saturation as the pulse detuning ∆ increases beyond 1.5 meV. The observed reduction in the saturation intensity with ∆ is consistent with the reduction in population inversion efficiency under monochromatic phonon-assisted excitation at large detuning [15], confirming the impact of phonon-mediated preparation and excitation-induced dephasing [12,16]. The anomaly at ∆ = 4.4 meV can be attributed to either dominant phonon-assisted driving than the dichromatic driving or experimental imperfection in the excitation. For instance, any imbalanced in components of the red and blue-detuned pulses, slight detuning of the dichromatic pulses from the ZPL of the QD emission and chirping in femtosecond pulses introduced by the dispersion of the fibre would deviate from the theoretical behaviour. Nevertheless, the experimental evidence shows that the dynamics of the emission is sensitive to the excitation dichromatic pulse details (frequency detuning, pulse contrasts, pulse widths and pulse detuning). Hence, extra care has to be taken when selecting pulse parameters for the dichromatic pulses, ideally avoiding femtosecond pulses (with pulse detuning ∆ 2 meV) if the excitation pulses are made to propagate in optical fibres to minimize any possible pulse chirping effects. VII. DICHROMATIC EXCITATION ON THE NEUTRAL EXCITON In this section, we perform DPE on the neutral exciton X transition. Here, we consider balanced dichromatic pulses, with (dichromatic) pulse contrast C = (I B − I R )/(I B + I R ) ≈ 0, where I R and I B are the integrated intensity of the red and blue-detuned (from the resonance of the X transition) component of the dichromatic pulses, respectively. Figure S6(a) shows energy level schematic of the four-level biexciton-exciton (XX-X) cascade system. Upon excitation into the biexciton state |XX , a cascaded radiative decay from biexciton state |XX to the vacuum ground state |0 is initiated via either of the intermediate neutral exciton states |X H(V) . This generates a pair of polarization-entangled, orthogonally polarized photon pairs consisting of emission from both the biexciton-exciton (XX) and the exciton-vacuum (X) states transitions, distinguished via polarization (in the horizontal (H) or vertical (V) linear polarized basis) and difference in emission energy equal to the biexciton binding energy, E B . 
Figure S6(b) illustrates the laser spectrum for both the DPE (at pulse detuning of ∆ = 1.2 and 2.5 meV) and the resonant two-photon excitation (TPE). The two-photon resonance lies at the half the energy difference between the X and XX transition, as indicated by the dashed lines, which gives a biexciton binding energy of E B = 1.95 (1) meV. The exciton fine structure splitting, independently measured via time resolved lifetime measurement, gives δ = 19.6 (1) µeV. We resolve one of the exciton fine structures |X H(V) by adjusting the linear polarizer in the collection to the polarization axis of the desired transition, while keeping maximal suppression in the excitation laser by calibrat-ing the orientation of the linear polarizer in the excitation accordingly. We spectrally filter either transitions before detecting the photons on a SNSPD. Figure S6(c) shows the emissions from the two transitions, observed simultaneously under TPE, as a function of the excitation power. As demonstrated in previous literature [17] , we observe Rabi oscillation in both X and XX emissions, which enables coherent manipulation of the state occupation of the excitonic states. Surprisingly, when employing dichromatic driving on the same transitions with red-detuned pulses overlapping with the two-photon resonance (∆ = 2.5 meV), we observe similar Rabi oscillation, shown as purple circles in Figure S6(d). Here, we speculate that unlike the solid-state two-level system (negatively-charged exciton, X 1− ), the contribution from two-photon resonance driving (red-detuned) to the state population inversion dominates over the phonon-assisted driving (blue-detuned). We validate this by showing that the Rabi oscillation observed at C = 0, is similar to that observed under TPE when we drive the X transition solely with the red-detuned pulses (C = −1). Additionally, we observe lower emission intensities when excitation laser only consists of the blue-detuned component (C = 1) of the dichromatic pulses. These evidences confirm our hypothesis, indicating a deviation from the expected outcome from the solid-state two-level system (c.f. Figure 2 and 3 in the main text) when dealing with multi-level system. As we decrease the dichromatic pulse detuning to ∆ = 1.2 meV such that there is minimal overlap between red-detuned pulses with the two-photon resonance, we observe the disappearance of the Rabi oscillation when both red and blue-detuned component of the dichromatic pulses are present. In addition, we observe a higher emission intensity when it is driven with the red-detuned pulses, compared to the lower detected count rate under phonon-assisted driving using blue-detuned pulses. These results are illustrated in Figure S6(e). It is interesting to note that for the blue-detuned driving (C = 1), while still having lower the emission intensity as the reddetuned driving (C = −1), it shows sign of saturating at excitation power beyond 25 µW. This indicates that even when there is no overlap between the excitation pulses and the two-photon resonance, for a biexciton-exciton cascade system, the contribution from two-photon resonant driving (C = −1) dominants over the phononassisted driving (C = 1) in affecting the state dynamics. This adds further complexity in exploiting the DPE technique to coherently drive of the neutral exciton, X transition. 
Further modeling would be beneficial to understand the physics behind this phenomena and to potentially utilize it as a tool for coherent single photon generation for multi-level atom-like system. The architecture for the two samples, labeled as sample A and B, are illustrated in Figure S7(a) and (d), respectively. While both of them have the same heterostructure, which consists of 1L-WSe 2 encapsulated by few layers of hexagonal boron nitride (h-BN), their sample structures differs in the planar cavity design. For sample A, the heterostructure is placed on top of a 140 nm stopband flat distributed Bragg reflector (DBR) centred at ≈ 710 nm (1.7463 eV) with a 6 nm thick bottom h-BN flake acting as a spacer, forming a λ/4 planar cavity at λ = 780 nm (1.5895 eV). In contrast, for sample B, the heterostructure is placed on top of a gold mirror with a bottom hBN flake of 59.3 nm, creating a λ/4 planar cavity at λ = 780 nm (1.58954 eV). The photoluminescence emission from SPEs in both samples (grey, shaded), excited using the same non-resonant continuous wave source at 532 nm (2.33 eV), are shown in Figure S7(b) and (e). Their emission profile are detuned from the zero-phonon line (ZPL, green, shaded) at 1.6025 and 1.5946 eV, respectively. Upon a close inspection of the emission spectra in Figure S7(b), we observe emission peaks, which correspond to the ZPL from multiple emitters. The two peaks (ZPL and the peak beside it) in Figure S7(e) belongs to the exciton fine structures of the same transition. The dichromatic laser spectra are displayed alongside the SPEs emission spectra, with the red and blue-detuned (from the ZPL) laser components given by the red and blue shaded region, respectively. The broad phonon-sideband (PSB, orange, shaded), detuned ≈ −0. and PSB, we obtain a ZPL fraction of ≈ 65 % for both samples, typical for SPE in these materials at cryogenic temperature. Subsequently, we filter out the ZPL using a grating-based spectral filter (FWHM= 0.296 (1) meV) to suppress the laser sideband before it is detected on a spectrometer. The emission intensity of the ZPL, as a function of excitation power for both samples, are shown in Figure S7(c) and (f), respectively. While the pulse parameters for the two dichromatic excitation differ (e.g. the dichromatic pulse detuning, ∆ for the excitation on sample A and B are ∆ = 2.0 and 6.7 meV, respectively), we observe some form of oscillations. Figure S7(g) and (h) demonstrate suppressed multi-photon emission prob-ability, g (2) (0) ∼ 0, from the spectrally filtered ZPL signal in Sample A, measured using a fibre-based Hanbury-Brown and Twiss interferometer, under continuous wave and pulsed non-resonant 750 nm excitation, respectively. These results confirm the nature of single photon emission from these emitters. While an accurate interpretation of the experimental data is currently unavailable due to the lack of clear quantum optical picture for these emitters, these results demonstrate coherent population driving of SPEs in 1L-WSe 2 under DPE, as an alternative to monochromatic resonant excitation [23,24].
Formal Dependability Modeling and Optimization of Scrubbed-Partitioned TMR for SRAM-based FPGAs SRAM-based FPGAs are popular in the aerospace industry for their field programmability and low cost. However, they suffer from cosmic radiation-induced Single Event Upsets (SEUs). Triple Modular Redundancy (TMR) is a well-known technique to mitigate SEUs in FPGAs that is often used with another SEU mitigation technique known as configuration scrubbing. Traditional TMR provides protection against a single fault at a time, while partitioned TMR provides improved reliability and availability. In this paper, we present a methodology to analyze TMR partitioning at early design stage using probabilistic model checking. The proposed formal model can capture both single and multiple-cell upset scenarios, regardless of any assumption of equal partition sizes. Starting with a high-level description of a design, a Markov model is constructed from the Data Flow Graph (DFG) using a specified number of partitions, a component characterization library and a user defined scrub rate. Such a model and exhaustive analysis captures all the considered failures and repairs possible in the system within the radiation environment. Various reliability and availability properties are then verified automatically using the PRISM model checker exploring the relationship between the scrub frequency and the number of TMR partitions required to meet the design requirements. Also, the reported results show that based on a known voter failure rate, it is possible to find an optimal number of partitions at early design stages using our proposed method. Introduction Field programmability, low manufacturing cost, and other advantages make SRAM-based FPGAs an attractive option compared to ASICs for space applications. Unfortunately, the main disadvantage of these devices is their sensitivity to cosmic radiation effects commonly known as Single Event Upsets (SEUs) [1]. SEUs occur when one or more bits in configuration memory change state due to a radiation event. If only one bit in the configuration memory is affected, then it is called a Single-Bit Upset (SBU). A Single-Cell Upset (SCU) is defined as an event that induces a single-bit upset at a time. If more than one bit is affected at a time in multiple storage locations, the event is known as a Multiple-Cell Upset (MCU). Since the FPGA configuration bits are stored in volatile SRAMs, SEUs are a major concern for the successful operation of safety-critical systems. To deal with SEUs, designers mostly rely on redundancy-based solutions, such as Triple Modular Redundancy (TMR) [2] and configuration memory (Configuration Bits) scrubbing [3]. TMR is a well-known technique for fault mitigation in which three redundant copies of the same logic perform the same task in parallel. A majority voter chooses the correct output from these three copies. Scrubbing uses a background task that corrects the SEUs using error-correcting code memory or a redundant copy of data, either periodically or on detection of an SEU. Reliability analysis of TMR and their related improvements have been studied for a long time and widely reported in the literature. By contrast, partitioning of TMR for reliability improvement got less attention from the research community. In [4], the authors present Markov models for partitioned TMR and showed how TMR partitioning can help improving the reliability. In their models, the TMR partitions were assumed to be equal in size. 
This assumption reduces the complexity of the model. However, this is a clear limitation, because in a real design, it is not always possible or even desired to have partitions of equal size. An optimal partitioning for TMR logic was reported in [5] and the relationship between DCEs (Domain Crossing Events) and the number of partitions was explored via fault injections. Interestingly, a unified model that can quantify the effects of both SCU and MCU at an early design stage is not reported to date. One of the goals of our work aims at overcoming these limitations. Early analysis of availability and reliability impacts of SEUs on a design can help designers developing more reliable and efficient systems while reducing the overall cost associated with their design. As spacecrafts are typically constrained by strict power budgets, too frequent scrubbing will drain energy. By contrast, less frequent scrubbing will allow accumulation of SEUs, which will eventually break TMR tolerance. Thus, analyzing trade-offs between the number of TMR partitions and scrub frequency is necessary. We propose a methodology based on Probabilistic Model Checking, to analyze this relationship at early design stages while analyzing reliability and availability of the design. Probabilistic model checking is a well known formal verification technique, mainly based on the construction and analysis of a probabilistic model, typically a Markov chain. The main advantage is that the analysis is exhaustive, and therefore results in numerically exact answers to the temporal logic queries, which contrasts discrete-event simulations [6]. Another advantage of this technique is its ability to express detailed temporal constraints on system executions in contrast to analytical methods. It is worth mentioning that formal verification is widely used in industry and government research agencies including Air Force Research Laboratory (AFRL), NASA JPL, NASA Ames, NASA LaRC, National Institute of Aerospace (NIA) and so on, for verification of safety-critical hardware and software systems [7,8,9]. In our previous work [10,11], we proposed a methodology that evaluates a Markov reward model (constructed from its high-level description) against performance-area-dependability tradeoffs using the PRISM model checker. In brief, we showed that there are cases where rescheduling of a Data Flow Graph (DFG) in conjunction with scrubbing can serve as a better faulttolerant mechanism when compared with another fault-tolerant mechanism that combines the use of spare components with scrubbing. In addition, we also explored the relationship between fault detection coverage and scrubbing intervals. In this paper, we use a similar technique to extract the DFG from the high-level description of a system as explained in [11] in detail. The main contribution of this paper is modeling of partitioned TMRs to explore the relationships between the scrub frequency and the number of TMR partitions in early design stages. Our modeling and analysis exploit the formal verification tool PRISM. This makes our work highly desirable since the use of formal verification techniques (especially model checking) in system design phases to verify the Design Assurance Level (DAL) compliance (as defined in the DO-254 standard [12] for airborne electronic hardware) is highly recommended by NASA and Federal Aviation Administration (FAA) [13]. Of course, performing the analysis only at early stage is not sufficient. 
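To give a concrete, tool-agnostic flavour of the kind of model involved, the sketch below builds a small continuous-time Markov chain for a single TMR partition, with states for zero faulty domains, one faulty domain, and failure, a per-domain SEU rate λ, and an exponential scrub/repair rate μ, and evaluates the probability of surviving to time t with a matrix exponential. The numeric rates are placeholders rather than values from a component characterization library, and the actual models in this paper are written in the PRISM language and analyzed with temporal-logic queries rather than evaluated this way.

```python
# Minimal CTMC sketch for one TMR partition: states {0 faulty domains, 1 faulty, failed}.
# lam = per-domain SEU rate, mu = scrub/repair rate (blind scrubbing approximated as
# an exponential repair).  Placeholder rates; reliability R(t) = P(not yet failed).
import numpy as np
from scipy.linalg import expm

lam = 1e-4      # SEU arrivals per domain per hour (placeholder)
mu  = 1.0       # scrub/repair completions per hour (placeholder)

# Generator matrix Q (rows sum to zero); state 2 ("failed") is absorbing.
Q = np.array([
    [-3 * lam,        3 * lam,     0.0],   # any of the 3 domains gets hit
    [      mu, -(mu + 2 * lam), 2 * lam],  # repaired by scrubbing, or a 2nd domain hit
    [     0.0,            0.0,     0.0],
])

def reliability(t_hours):
    p = np.array([1.0, 0.0, 0.0]) @ expm(Q * t_hours)
    return 1.0 - p[2]

for t in (100.0, 1000.0, 10000.0):
    print(f"R({t:>7.0f} h) = {reliability(t):.6f}")
```

In the modular formulation described in this paper, one such chain per partition (with partition-specific rates derived from the DFG and the characterization library) is composed in parallel, and reliability and availability queries over the composed model are checked by PRISM instead of the explicit matrix exponential used here.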
Later in the design phases, hardware designers and radiation experts will need to perform additional tests such as fault injection and beam testing in nuclear reactors. However, an early analysis in the design phase, such as our proposed approach, can reduce the overall design time, effort and cost. Hence, we argue that our proposed technique bridges the gap between FPGA designers, radiation experts and applied computer science using formal verification techniques. In our proposed approach, starting from a high-level description of a design (that employs partitioned TMR), the system is first modeled using the Continuous Time Markov Chain (CTMC) formalism. For the CTMC modeling, we utilize the DFG of the system, the number of intended partitions, the scrub rate, and the failure rates obtained from a component characterization library. The model is then encoded using the PRISM modeling language. Properties related to the system's reliability and availability, expressed using probabilistic temporal logic (in our case Continuous Stochastic Logic (CSL) [14]), are then verified automatically using the PRISM model checker [15], and the relationship between the scrub interval and the number of TMR partitions is assessed. In summary, our contributions in this paper are:

1. We propose formal models (using the CTMC formalism) of partitioned-TMR systems irrespective of the partition sizes (equal or non-equal). The proposed model captures both SCUs and MCUs; we limit our model to SCUs and Double-Cell Upsets (DCUs).

2. We propose a methodology for early trade-off assessment of TMR partitioning and periodic blind scrubbing. The proposed modeling technique is modular, which means that each partition of the TMRed system is first modeled as a separate CTMC and then composed in parallel into a larger CTMC that models all the TMR partitions in the system. Since the approach is modular, it is easily extendable to any number of partitions.

3. Our CTMC formalization of the TMR partitioning is then encoded using the PRISM modeling language, and we utilize the probabilistic model checking technique with the PRISM tool to perform quantitative assessments of an SCU-prone FIR filter and of an FIR filter prone to both SCUs and DCUs. Our analysis shows that increasing the number of TMR partitions also increases the design reliability and availability for designs that are prone to both SCUs and DCUs (when the failure of TMR voters is ignored). Moreover, the number of partitions also has a direct relationship with the configuration memory scrubbing frequency. Indeed, such a relationship can be quantitatively analyzed at early design stages based on the design requirements and constraints using the proposed methodology.

4. We also show that, based on a given voter failure rate (in contrast to the previous case where the failures of TMR voters were ignored), it is possible to identify the optimal number of partitions that offers the highest reliability. To our knowledge, this is the first unified model that captures SCUs, DCUs and voter failures for optimal TMR partitioning at early design stages.

(Figure 1: Sample unmitigated circuit, with modules m1-m4 between Input and Output.)

The remainder of the paper is organized as follows. Section 2 reviews the related work in this area. Section 3 describes the background on SEU mitigation techniques and probabilistic model checking. The proposed methodology and modeling details are discussed in Section 4, and in Section 5 we present quantitative results obtained using our proposed methodology.
Section 6 concludes the paper with proposed future research directions.

TMR partitioning and Related works

A traditional TMRed design can deal with a single fault at a time. Thus, faults in multiple redundant modules will break the TMR. For illustration, Figure 1 shows a sample circuit (each box represents a module) and Figure 2 shows the TMR implementation of the sample circuit in the traditional way. While designing a system with traditional TMR, the components are triplicated and a majority voter is placed at the output of the circuit. The voter can provide correct outputs even if one of the branches (or domains) of the TMR is faulty. Figure 3 shows the same system implemented with partitioned TMR (as suggested in [4]). In terms of dependability, each partition can be considered as a separate entity. This circuit will only fail if two or more domains in the same partition are affected by one or more faults. For example, upsets in module m2 in domain two (second row) and m3 in domain one (first row) will break a traditional TMR system, whereas they will be successfully masked in a system with partitioned TMR. In an FPGA configuration bitstream, a bit that is important for the functionality of a design is categorized as a critical bit. Using the component characterization library [16], the number of essential bits (also known as potentially critical bits) needed to implement each component (adder, multiplier, etc.) in a target FPGA can be estimated. More accurate SEU susceptibility analysis can be performed using fault injection techniques [17,18]; however, for a first-order worst-case estimation, it is valid to assume that all essential bits are critical bits. In [10,19], the authors demonstrated how rescheduling of data-flow graphs can help to optimize the performability, and also showed the use of the Erlang distribution for accurate modeling of the scrubbing technique. In these works, to facilitate the early analysis, the authors used the concept of a characterization library [16]. In our proposed approach, we use the characterization library to calculate the failure rate of the TMR domains. It is worth mentioning that the proposed methodology is generic enough to be used with a different characterization library with more precise and accurate data (from radiation experts), without any major changes. There are three main techniques to analyze SEU sensitivity in ASIC- or FPGA-based designs: 1) hardware testing with techniques such as particle beams and laser testing; 2) fault injection emulation or simulation; and 3) analytical techniques. These three types of techniques are complementary, and they are typically applied at different design flow steps. The first two techniques are expensive in terms of cost and testing time. Moreover, these two techniques often require a completed implementation [20,21]. On the other hand, early analysis using analytical methods tends to be relatively less accurate in some aspects. Nonetheless, analytical methods can provide much better controllability and observability, while enabling a quick estimation of SEU susceptibility, without the risk of damaging the devices [22]. In addition, they can also capture features of the true test conditions that would be very hard to accurately reproduce when bombarding the circuit with particles or when injecting faults. Both academia and industry have heavily studied reliability and availability prediction of TMR.
The effectiveness of different TMR schemes implemented with different levels of granularity has been evaluated experimentally (using beam tests) and reported in [23]. In [24], the authors proposed an analytical model for systems with TMR, TMR with EDAC, and TMR with scrubbing. They discuss the Markov modeling of these techniques throughout the paper; however, frequent voting or partitioning was not addressed. In [25], the authors present a design flow to scrub each domain in a TMR independently to maximize the availability. In their approach, each partition is scrubbed on demand when required. Since TMR is very expensive in terms of area and power, in [26] the authors show that TMR can be implemented only on selected portions of a design to reduce cost. Even though some level of reliability is sacrificed in this approach (compared to the approach where the whole system is triplicated), under an area constraint this tool may maximize the reliability. The work in [4] proposes a reliability model for partitioned TMR systems, but only for designs with equal-sized partitions. Also, the possible failure of voters was not considered in any of the works mentioned above. A probabilistic model checking based approach for evaluating redundancy-based software systems was proposed in [27]. The effect of domain crossing events and how to insert voters cleverly in a hardware design was demonstrated in [5]. In that work, the authors analyzed different partitioning schemes for the same design and, using the fault injection technique, find the optimal number of TMR partitions suitable for that design. Our work contrasts with all of the related works mentioned above. The proposed models can handle the effect of both SCUs and DCUs on designs, irrespective of the partition sizes. In addition, the proposed model also considers voter failures, based on which an optimal partitioning can be obtained.

Probabilistic Model Checking

Model checking [28] is a well-established formal verification technique used to verify the correctness of finite-state systems. Given a formal model of the system to be verified in terms of labelled state transitions, and the properties to be verified in terms of temporal logic, the model checking algorithm exhaustively and automatically explores all the possible states of the system to verify whether the property is satisfied or not. Probabilistic model checking deals with systems that exhibit stochastic behaviour and is based on the construction and analysis of a probabilistic model of the system. We make use of CTMCs, having both transition and state labels, to perform stochastic modelling.

Definition 1. The tuple C = (S, s0, TL, L, R) defines a CTMC which is composed of a set of states S, an initial state s0 ∈ S, a finite set of transition labels TL, a labelling function L : S → 2^AP which assigns to each state s ∈ S a subset of the set of atomic propositions AP which are valid in s, and the transition rate matrix R : S × S → R≥0. The rate R(s, s′) defines the delay before which a transition between states s and s′ takes place. If R(s, s′) ≠ 0, then the probability that a transition between the states s and s′ is triggered within t time units is defined as 1 − e^(−R(s, s′)·t).

Remark 1. We allow self-loops in our CTMC model, and according to Definition 1, self-loops at state s are possible and are modeled by having R(s, s) > 0.
The inclusion of self-loops alters neither the transient nor the steady-state behavior of the CTMC, but allows the usual interpretation of Linear-Time Temporal Logic (LTL) operators (we refer the interested reader to [29] for more details about the syntax and semantics of LTL) such as the next step (X), which we will exploit in Section 5 to check the correctness of the model. In the probabilistic model checking approach using CTMCs, properties are usually expressed in some form of extended temporal logic such as Continuous Stochastic Logic (CSL), a stochastic variant of the well-known Computational Tree Logic (CTL) [28]. A CSL formula Φ defined over a CTMC M is of the form:

Φ ::= true | a | ¬Φ | Φ ∧ Φ | S~p(Φ) | P~p(φ)
φ ::= X Φ | Φ U Φ | Φ U≤t Φ

where a ∈ AP, p ∈ [0, 1] and ~ ∈ {<, ≤, ≥, >}. Each Φ is known as a state formula and each φ is known as a path formula. The detailed syntax and semantics of CSL can be found in [14]. In CSL, S~p(Φ) asserts that the steady-state probability of being in a Φ state meets the boundary condition ~p. On the other hand, P~p(φ) asserts that the probability measure of the paths satisfying φ meets the bound given by ~p. The meaning of the temporal operators U and X is standard (the same as in LTL). The temporal operator U≤t is the real-time variant of U. Temporal operators like always (□), eventually (♦) and their real-time variants (□≤t and ♦≤t) can also be derived from the CSL semantics. Below, we show some illustrative examples with their natural language translations:

1. P≥0.99[♦ complete] - "The probability of the system eventually completing its execution successfully is at least 0.99".
2. S≤10^−9[Failure] - "In the long run, the probability that a failure condition can occur is less than or equal to 10^−9".

In the PRISM property specification language, the P, S, G, F, X and U operators are used to refer to the P, S, □, ♦, X and U operators. In addition, PRISM also supports the expressions P=?[φ] and S=?[Φ] in order to compute the actual probability of the formulas φ and Φ being satisfied. PRISM also allows the use of customized properties using the filter operator: filter(op, prop, states), where op represents the filter operator (such as forall, print, min, max, etc.), prop represents the PRISM property, and states (optional) represents the set of states over which to apply the filter.

Proposed Methodology

Figure 4 presents our proposed methodology. It starts from the high-level functional description of the system being designed, formulated in C/C++. The DFG is then extracted using the GAUT [30] tool. The extraction of the DFG from the C/C++ code and the use of a component characterization library are inspired by our previous work [11]. Once the DFG is extracted, depending on the resource, performance or area constraints of the hardware implementation, it can be scheduled using an appropriate scheduling algorithm. Since scheduling is out of the scope of this work, we assume a fully parallel implementation of the DFG for high performance. However, it is worth mentioning that the methodology will work irrespective of the scheduling approach. Depending on the number and the size of partitions defined by the user, each domain in each partition can be represented as one node or a collection of nodes (nodes in the graph represent a basic operation such as add, multiply, etc.). Each node can be implemented as a component in the FPGA. For clarity, we define the TMR module, domain and partition in the context of our paper. Module: A module (or component) in a TMR refers to the basic operations as they appear in the C/C++ code of the design and the obtained DFG.
The failure rate calculation of a module is based on the component characterization library from [16]. As mentioned earlier, any other component characterization library (such as one obtained by fault injection or beam testing) can also be used with our methodology without any major changes. Domain: A TMR domain (or branch) consists of one or more modules. An unpartitioned TMR will have three replicated domains, whereas a partitioned TMR will have three redundant domains in each partition. Based on this fact, each domain in the same partition has an equal failure rate. In contrast, domains from different partitions may have equal or different failure rates depending on the size of the partitions. Since the input of each domain is connected to the output of a voter from the previous partition (it is well known that a single communication line between systems becomes a single point of failure, which can be avoided by triplicated wires carrying a 2-out-of-3 code), the failure rate of a domain can be expressed as:

λ_domain = ( Σ_{i=1}^{k} λ_i ) + λ_voter    (1)

where k is the total number of modules in a domain, λ_i is the failure rate of module i, and λ_voter is the failure rate of a voter. Note that, since there is no voter included in the first partition, λ_voter = 0 for the domains of the first partition. The last three voters voting on the final output can also be modeled (with an extra partition) using Equation 1 by setting Σ_{i=1}^{k} λ_i = 0. Partition: A TMR partition refers to the logical partitioning of the TMRed design. Partitions may be of equal or unequal size. Voters are inserted after each partition to vote on the output from the domains. Since in a partition a domain is replicated three times, the upset rate of a (fully operational) partition is equal to 3 * λ_domain. For illustration, Figure 5 shows the partitioning of a DFG representing an 8-tap FIR filter. All the domains in partition-1 have four multipliers and three adders. On the other hand, each of the domains that are part of partition-2 contains four multipliers, four adders and three voters. The failure rate of a module is calculated using the following equation:

λ_module = λ_bit × (number of critical bits)    (2)

For our experiments, λ_bit = 7.31 × 10^−12 SEUs/bit/sec, where λ_bit represents the SEU rate for High Earth Orbit, and the number of critical bits in a module is obtained from the characterization library. It is important to mention that an SEU can cause either an SCU or a DCU. Hence, λ_domain needs to be adjusted accordingly. The simplest way is to multiply λ_domain by the SCU and DCU coefficients, α_SCU and α_DCU respectively. Based on the calculated failure rate of each domain, the number of partitions and the user-defined scrub rate, a CTMC model of the system is then built and encoded in the PRISM modeling language. Different reliability and availability properties are then verified using the PRISM model checker to assess whether the design meets its requirements based on the provided quantitative results. If the requirements are not met, the number of partitions or the scrub frequency is modified, and the analysis is performed again.

Formal Modeling of partitioned TMR

We start by formalizing individual TMR partitions, and then show how to construct the whole partitioned system. A system with N partitions can be defined by a set P = {P_1, P_2, . . . , P_N}. Here, each P_i ∈ P represents a TMR partition. Each of these partitions is prone to SCUs.
However, as mentioned earlier, an SEU can flip two or more bits simultaneously in the FPGA configuration bitstream, inducing MCUs in TMR partitions. This situation is more common in a harsh radiation environment such as outer space. As mentioned earlier, in this paper we limit our modeling to Double-Cell Upsets (DCUs). Also, for the proposed models, we consider the following assumptions:

Assumption 1: For the SCU model, each domain in the TMR may fail independently. The time to failure due to configuration bit flips (inducing either an SCU or a DCU) is exponentially distributed. The exponential distribution is commonly used to model the reliability of systems where the failure rate is constant. The scrub interval is assumed to follow an exponential distribution as well, with rate µ = 1/τ, where τ represents the scrub interval. It is also possible to approximate a deterministic scrub interval using the Erlang distribution, as shown in [19,31].

Assumption 2: The design employs the blind scrubbing technique [32,33]. Blind scrubbing is a very popular and reliable scrubbing strategy that requires no additional detection algorithm before fixing the configuration memory upsets.

Since, according to the fault-free design structure, there is no electrical node in the logical netlist stemming from one domain that re-converges in another domain, the domains are physically independent, except if a set of flipped configuration bits (one or more than one) introduces a short between two nets belonging to two distinct domains. This is a physical design issue with the FPGA and how it is placed and routed. We can either neglect this possibility, as its occurrence probability is expected to be low, or do careful physical design such that the probability of the event becomes negligible. The MCU modeled in this paper considers the scenario where a single SEU may cause two domains to fail at the same time, either within one partition or across two different partitions.

Modeling of Single-Cell Upsets (SCUs)

Each TMR partition P_i ∈ P which is prone to SCUs can be described as a CTMC. Figure 6 shows an SCU-prone TMR partition model with scrubbing. We removed the state labels from the figure, and the transition labels are specified inside square brackets ([ ]) for clarity. Each node in the model denotes the current state of that specific partition: state 3 (operational) represents the state in which all domains are operating correctly (all the modules are fault-free); state 2 (degraded) represents a state where one out of three domains is operating incorrectly (in one of the domains at least one of the modules is faulty), but the output is still not erroneous; and state 1 (failed) represents a failure state in which two or more domains are operating incorrectly (at least one module is faulty in two or more domains). Since it is a TMR system, at least 2 out of 3 domains need to be working at any time. In this model, µ represents the scrub rate. Note that, since we consider periodic blind scrubbing in this paper, the whole system gets scrubbed periodically. This is reflected in the model by the perform_scrub transition with the scrub rate µ (irrespective of the current state). From state 3, the system can move to state 2 if it encounters an SEU via the scu_1 transition, and the rate of this transition is 3 * λ_domain. (A minimal PRISM encoding of this single-partition model is sketched below.)
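To make this construction concrete, the following is a minimal, illustrative PRISM encoding of the single-partition SCU model of Figure 6. It is only a sketch under our own naming assumptions: the constant names lambda_domain and mu, the state variable s and the labels are ours and are not taken from the paper's actual PRISM code; lambda_domain would be obtained from Equation 1 and the characterization library.

```
ctmc

// Illustrative parameters, supplied at analysis time (e.g. with -const)
const double lambda_domain; // failure rate of one domain (Equation 1)
const double mu;            // scrub rate, mu = 1/tau

// One SCU-prone TMR partition: 3 = operational, 2 = degraded, 1 = failed
module Partition
  s : [1..3] init 3;
  [scu_1]         s=3  -> 3*lambda_domain : (s'=2); // one of the three domains fails
  [scu_2]         s=2  -> 2*lambda_domain : (s'=1); // a second domain fails, breaking the TMR
  [perform_scrub] true -> mu              : (s'=3); // periodic blind scrub, enabled in every state
endmodule

label "oper" = s=3;  // all domains fault-free
label "up"   = s>=2; // partition still produces correct outputs
```

Because perform_scrub is enabled in every state, in state 3 it yields the self-loop with rate mu discussed in Remark 1.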
From state 2, the system has two options: (1) the system can get scrubbed and go back to state 3; or (2) another module in a fault-free domain of the TMR can fail, which will lead the system to state 1 via the transition labelled scu_2, with transition rate 2 * λ_domain. Once the system enters state 1, which is a failed state, it will remain in that state until the system eventually gets scrubbed and comes back to state 3 with the scrub rate µ. Since each TMR partition is modeled as a separate CTMC equivalent to the model shown in Figure 6, we can model systems with either equal- or unequal-sized partitions, because the failure rate of each partition is reflected in its own CTMC model. Once all the partitions are modeled as separate CTMCs, we construct the model of the overall partitioned scrubbed-TMR system from the parallel composition [34] of those CTMCs.

Definition 3. Given two CTMCs C1 = (S1, s01, TL1, L1, R1) and C2 = (S2, s02, TL2, L2, R2), their parallel composition C1 || C2 is a CTMC over the state space S1 × S2 in which the rate of a transition from (s1, s2) to (s1′, s2′) is λ1 · λ2 if both components move on a shared transition label α1 = α2 = α (full synchronization), and λ1 (with s2′ = s2) or λ2 (with s1′ = s1) otherwise (interleaving synchronization), where s1, s1′ ∈ S1, s2, s2′ ∈ S2, α1 ∈ TL1, α2 ∈ TL2, α ∈ TL1 ∩ TL2, R1(s1, s1′) = λ1 and R2(s2, s2′) = λ2.

Figure 7 shows the resulting composed CTMC for a TMR-scrubbed system with two partitions, where λ_Pi and λ_Pj represent the failure rate of a domain in the first and second partition, respectively. We classify a composed state as up if the current states of both partitions are labelled operational or degraded, and as down otherwise. Hence, in Figure 7, up = {4, 5, 7, 8} and down = {0, 1, 2, 3, 6}.

Modeling of Multiple-Cell Upsets (MCUs)

An SEU that causes a DCU invokes failures in multiple TMR domains simultaneously, and each of those domains may belong to the same partition or to different partitions. Hence, two cases must be captured: first, a DCU that affects two domains within the same partition and, second, a DCU that affects domains in two separate partitions. To achieve the first goal, we enhance the model presented in Figure 6 with additional DCU transitions (Definition 4). We refer to this model as the "combined model", since it captures both SCUs and DCUs (in the same partition). Let us first consider the case where a DCU causes the failure of two domains, but in the same partition. The parameter β represents the DCU rate of a domain pair, which corresponds to a situation causing a domain failure while also causing the failure of another domain in the same partition (that would happen if an ionizing particle first hits a domain and then induces an ionized track spreading to a second domain, making it fail as well). For instance, let us consider the DCU case where domain 1 fails first in a partition and then causes a failure of domain 2 of the same partition with a rate γ (as particle upsets take place on a very short time scale, the two domains are assumed to fail in the same clock cycle). The reverse could also happen: domain 2 can fail while invoking a failure of domain 1, with a rate γ′. So, for the pair of domains 1 and 2, the rate at which either of them fails in the same cycle as the other one is γ + γ′. As defined, the two considered events are disjoint and their rates can be summed accurately. In our model, we combine and express them together, which means β = γ + γ′. Similarly, we have two more domain pairs to consider in a TMR partition (irrespective of their order, since we combine the rates), which are domains 1 and 3, and domains 2 and 3. In that context, Figure 8 shows the resulting combined model for a single partition. The second case models a DCU affecting two separate partitions (in addition to the DCUs affecting the same partition).
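Before elaborating that second case, the same-partition DCU behaviour can be grafted onto the single-partition sketch given earlier. The fragment below is only our illustrative reading of the combined model of Figure 8, not the paper's own encoding: we assume that a DCU takes a fully operational partition directly to the failed state, with one β-rated possibility per domain pair (three pairs in total); any DCU transition out of the degraded state, and the cross-partition synchronization discussed next, are deliberately omitted.

```
ctmc

const double lambda_domain; // domain failure rate (Equation 1)
const double mu;            // scrub rate
const double beta;          // same-partition DCU rate of one domain pair, beta = gamma + gamma'

module PartitionCombined
  s : [1..3] init 3;
  [scu_1]         s=3  -> 3*lambda_domain : (s'=2); // SCU: one domain fails
  [scu_2]         s=2  -> 2*lambda_domain : (s'=1); // SCU: a second domain fails
  [dcu_same]      s=3  -> 3*beta          : (s'=1); // DCU hits one of the 3 domain pairs: two domains fail at once
  [perform_scrub] true -> mu              : (s'=3); // blind scrub
endmodule
```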
This case is significantly more complex, and to model this we need to introduce new transitions (in addition to those defined in Definition 4) in separate partitions . It requires the use of pairwise synchronization (via parallel composition as defined in Definition 3) of associated transitions in different partitions to represent a simultaneous failure due to a DCU in different partitions. Figure 9 shows how we extend the "combined model" (that was shown in Figure 8) for incorporating DCUs in distributed domains (shown by dotted arrows) for the case where a design has two partitions (corresponds to the layout shown in Figure 5). In this extended model, we represent DCUs in the same partition using the previously defined parameter β and DCUs in separate partitions using another parameter β1. To define β1, we consider that domain 1 in P i can fail while invoking another failure of domain 1, domain 2 or domain 3 in partition P j with a rate γ1 + γ1 + γ1. In our model, this is represented as β1 = γ1 + γ1 + γ1. So the rate at which the three domains in partition P i will fail due to a DCU while causing a failure to a domain in partition P j is 3β 1 . Similarly, the rate that either of the three domains in partition P j will fail due to a DCU while invoking a failure to a domain in partition P i is also 3β 1 . In Figure 9, the transition [P i dcu 1 ] and [P j dcu 1 ] represent a DCU in partition P i and P j respectively (DCUs in the same partition). In contrast, the [dcu ij ] and [dcu ji ] transitions (triggers based on the current state) in partition P i is synchronized using the same transition label ([dcu ij ] and [dcu ji ]) in partition P j with the rate 1 (refer to Remark 2), which depicts DCUs in two separate partitions. Let us consider a case where both partitions have three operational domains, hence both Markov chains are in state 3. For a DCU in separate partitions, either encountered by a domain in partition P i or P j , both Markov chains will move to state 2 simultaneously (this is synchronized using the same label [dcu ij ] in both partitions). If the partition P j is in state 2, and partition P i is in state 3, then the left Markov chain will move to state 2 and the right Markov chain will move to state 1 simultaneously, and this has been synchronized using the same label [dcu ji ]. Remark 2. Please note that, for full synchronization, as in Definitions 3, the rate of a synchronous transition is defined as the product of the rates for each transition. We need to synchronize the scrub transition "perform scrub" between the TMR partition models. For example, the intended scrub rate (µ) is specified in full for the scrub transitions in one of the partitions (in the first partition as shown in Figure 9), and the rate of other scrub transition(s) in rest of the partition models (in second partition as shown in Figure 9) are specified as 1. Similarly we synchronize transitions for modeling DCUs in separate domains. Figure 10 shows the combined model after parallel composition of the partitions (refer to Definition 3) that encapsulates the effect of both: SCUs and DCUs (same and separate domains). In this model, β P i and β P j represent the DCU rate (DCUs in the same partition) of a domain in the first and second partition respectively. The parameter β1 P i and β1 P j represent DCUs in respective separate partitions. For example, in state 8 both partitions are working fine. 
However, if one of the domains in either partition encounters an SCU, the system can move to either state 5 or state 7, depending on the location of the domain. Also, if the system is in state 8 and a DCU occurs in any domain of either partition (a DCU in separate partitions), it will simultaneously trigger another domain failure in the other partition. This leads to a path from state 8 to state 4 with the rate 3 * β1_Pj + 3 * β1_Pi. The remaining two transitions, from state 8 to state 2 and to state 6, represent DCUs in the same partition with the associated rates. For our analysis, we developed Markov models for four design options, starting from no partitioning up to eight partitions. The complexity of these models in terms of the total number of states and total number of transitions is shown in Table 1. As observed from Table 1, with an increasing number of partitions, the modeling of TMR gets very complicated, since the number of states and transitions grows quickly. We were able to keep such modeling manageable since we define each partition separately as a module in the PRISM language and utilize the PRISM model checking tool for the parallel composition of the modules (representing TMR partitions) to generate the complete model for analysis. Please note that modules in the PRISM language and modules in a TMR should not be confused. We refer to [35] for details about the PRISM modeling language. Similarly, N partitions can be modeled using our methodology by adding new modules to the PRISM code.

Quantitative Analysis of an FIR Filter

Filters are commonly used in digital communication systems for different purposes, such as equalization, signal separation, noise reduction and so on. Communication is a fundamental issue for all space-borne applications, ranging from satellites to unmanned missions. That is why digital filters have an important role to play in such systems [36]. To illustrate the applicability of our approach, we analyze an 8-bit 64-tap FIR filter (the target platform is a Xilinx Virtex-5 SRAM-based FPGA) using both the SCU model and the combined model for different numbers of partitions. FIR filters [37] are widely used in space applications for their excellent stability and the simplicity of their implementation for a given response. An N-tap discrete finite impulse response (FIR) filter can be expressed as follows:

y[n] = Σ_{k=0}^{N−1} h[k] · x[n − k]

where x[·], y[·] and h[·] are the input samples, output samples and the filter coefficients, respectively. All experiments are conducted for a mission time of 1 month. Since SEUs can cause either SCUs or DCUs, for the combined model it was assumed that 99% (α_SCU = 0.99) of the SEUs cause SCUs and 1% (α_DCU = 0.01) of them cause DCUs (with the added assumption that β = β1). Since the model is parametric, any other values for scaling the SCU and DCU rates can be used. Also, we use λ_voter = 0 for the first few experiments, and then introduce a non-zero failure rate for the voters (using Equation 1) to evaluate its impact on our models. We analyze four design options (as shown in Table 1) using our methodology. We use the PRISM model checker version 4.1 to analyze the reliability and availability properties for each of them. Before analyzing the model quantitatively, we verify the following LTL-style property (recall Remark 1) to check the correctness of the model: Correctness Property: filter(forall, P>0 [X oper]) - "From any reachable state, it is possible to reach the oper state in the next step with a probability greater than 0". (An illustrative PRISM encoding of a two-partition model skeleton, together with the properties used in this section, is sketched below.)
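As a concrete illustration of how the analysis is set up, the sketch below shows a two-partition, SCU-only model in the PRISM language (in the spirit of Figure 7), followed by the three kinds of properties used in this section: the correctness check, the reliability query and the availability query. The variable, label and reward names are our own, the failure rates are left as unspecified constants, and the paper's actual PRISM code may differ.

```
ctmc

const double lambda_p1; // domain failure rate of partition 1 (Equation 1)
const double lambda_p2; // domain failure rate of partition 2
const double mu;        // scrub rate

module Partition1
  s1 : [1..3] init 3;
  [p1_scu_1]      s1=3 -> 3*lambda_p1 : (s1'=2);
  [p1_scu_2]      s1=2 -> 2*lambda_p1 : (s1'=1);
  [perform_scrub] true -> mu          : (s1'=3); // carries the full scrub rate
endmodule

module Partition2
  s2 : [1..3] init 3;
  [p2_scu_1]      s2=3 -> 3*lambda_p2 : (s2'=2);
  [p2_scu_2]      s2=2 -> 2*lambda_p2 : (s2'=1);
  [perform_scrub] true -> 1           : (s2'=3); // rate 1: synchronised rates are multiplied (Remark 2)
endmodule

label "oper" = s1=3 & s2=3;   // every domain fault-free
label "up"   = s1>=2 & s2>=2; // the system still delivers correct outputs

rewards "up_time"
  s1>=2 & s2>=2 : 1; // accumulate 1 per time unit while the system is available
endrewards
```

The corresponding properties, in a PRISM property file with T denoting the mission time, could then read:

```
const double T;

// Correctness: from every reachable state, the fault-free state is reachable in one step
filter(forall, P>0 [ X "oper" ])

// Reliability: probability of staying up throughout the mission
P=? [ G<=T "up" ]

// Availability: expected fraction of the mission spent in an up state
R{"up_time"}=? [ C<=T ] / T
```

The reliability query above corresponds to the paper's P=?[G[0,T] operational], modulo the label name.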
Note that, as mentioned earlier in the preliminary section, blind scrubbing periodically reconfigures the FPGA and does not require any fault detection. This sets the requirement (specified as the property above) that the system should be repaired irrespective of its failures, i.e., it will be scrubbed even in the oper state, which justifies the self-loop in our model. While verifying, PRISM returned true, which means that the correctness property holds in our model. Figure 11 shows the relationship between the reliability and the number of partitions in the design for different scrub intervals, using the SCU model. The reliability of a system (or component) is defined as the probability that the system performs correctly for a given period of time, from zero (t0) to t1, given that the system or the component was functioning correctly at t0. In PRISM, this property can be formalized in CSL as P=?[G[0,T] operational], with T = 1 month, and we evaluate this property for different scrub intervals starting from 15 minutes up to 4 hours. For all the design options with different scrub intervals, the reliability decreases when the scrub interval increases. However, designs with more partitions show significant improvements in reliability, even with the same scrub interval. For example, if the scrubbing interval is 15 minutes, the design with no partitioning has a reliability of only 0.65. In contrast, the designs with two, four and eight partitions have reliabilities of 0.81, 0.90 and 0.94, respectively. TMR roughly triples the area and power consumption as a result of the replication. More frequent scrubbing in such cases will consume more power, which might not be appropriate for most space applications. In such circumstances, increasing the number of partitions can offer a good solution instead of a more frequent scrub strategy. For example, if the designer is targeting a reliability higher than 0.80, and if the design has no partitioning (or few partitions), then the designer may consider adopting a more frequent scrubbing strategy (less than an hour, on the order of seconds or milliseconds) to meet the requirements. Instead of adding such a power burden to the system, the designer may adopt TMR with 2, 4 or 8 partitions, which will require scrubbing only once every 15 minutes (thus reducing power consumption) and will also meet the requirement. Note that the design option with eight partitions provides a reliability of 0.8 even for a delayed scrub of 1 hour. Using this approach, a designer can determine the number of partitions required to meet the design requirements for a given scrub rate, or vice versa. We now turn to the availability analysis. Availability is defined as the ratio of the time the system or component operates correctly (system uptime) to its entire mission time. Using the SCU model, Figure 12 shows the availability of the design for different scrub intervals and different numbers of partitions. In PRISM, this property can be formalized in CSL as R{"up time"}=?[C<=T]/T, with T = 1 month. The design with no partitioning offers an availability of four nines (0.9999) for a scrub interval of 15 minutes, which drops to about one nine (0.97) when the scrub interval increases to 4 hours. Compared to this, all the other options with TMR partitioning offer improved availability. For instance, for a scrub interval of three hours, the design with no partitioning offers only 98% availability, whereas the rest of the design options, with partitioning, offer availabilities of more than 99%. Most communication satellites target more than 99% availability.
In such cases, if the power budget does not allow a shorter (more frequent) scrub interval, then increasing the number of partitions may offer a solution. For the second part of our analysis, we evaluated the same reliability and availability properties using the combined model for all the design options with two, four and eight partitions. The results are shown in Figure 13 and Figure 14. We observe that, even for the combined model, the reliability and availability of the design increase with an increasing number of partitions. Interestingly, increasing the number of partitions has a small effect on the value of the availability when the design employs frequent scrubbing; however, it should be noted that approaching 1 even by a small amount can be extremely difficult, and the improvement of availability is not well reflected on a linear scale. In contrast, increasing the number of partitions improves the availability dramatically for the cases where the scrub interval is comparatively longer. From this, we can conclude that our observation for the SCU-only model also holds for the combined model.

Observation

A major observation from these analyses is that when the scrub interval is smaller (frequent scrubbing), the number of partitions plays a major role in increasing the reliability of a system. However, even for a delayed scrub, the improvement is noticeable. In other words, the graphs show a trend that, to meet the designer's reliability goal, if the number of partitions (which means smaller domains) is increased, a less frequent scrub will be required. In contrast, fewer partitions (larger domain sizes) will require more frequent scrubs to meet a target reliability requirement. For the availability, the number of partitions plays a significant role for longer scrub intervals. For frequent scrub intervals, the number of partitions increases the availability only marginally, but for longer scrub intervals the improvement of availability with the number of partitions is quite significant. Such an early analysis of the high-level design description allows a designer to perform the assessment before the actual implementation of the system, taking into account design constraints such as power. Using such a methodology, a designer can find better trade-offs between the number of partitions and the required scrub interval that will meet the design requirements, and also reduce the design effort and cost.

Impact of the voter failure

So far, we have evaluated the proposed models assuming that TMR voters are not prone to failure. To illustrate the capability of our model, in this section we introduce a non-zero failure rate (λ_voter = 5 × 10^−3 per hour) for the voters and evaluate both the SCU and the combined model for two, four and eight partitions. The obtained results are shown in Figure 15 and Figure 16. Interestingly, this time we observe a trade-off, and the obtained results clearly contrast with the results obtained earlier with a zero voter failure rate. For the SCU model, TMR with four partitions shows the best reliability compared to the options with two and eight partitions, unless the scrub interval is very long (for example, if the scrub interval is more than 3 hours, all three TMR options offer a similar reliability). However, for the combined model, the reliability of the TMR with two partitions beats the other TMR versions.
This finding is very interesting, since it shows a reversal of the trend compared to what we observed when the TMR voters were assumed to be fault-free (in which case more partitions were always better). Also, when the scrub interval is more than an hour for the combined model, the reliabilities of TMR with two and four partitions become very close to each other. This means that if a designer needs to adopt a very long scrub interval, then TMR with either two or four partitions can be a good choice for the design. In contrast, if it is possible to adopt faster scrubbing (an interval of less than an hour), then TMR with two partitions is the proper choice. The design of fault-tolerant voters has been an important and active research area for many years [38,39]. Given the specific choice of voter to implement with the TMR partitions and its associated failure rate, it is straightforward to use our proposed approach to evaluate the relationship between the scrub interval and the number of TMR partitions by changing only the parameters of the model.

Scalability

As shown in Table 1, as the number of partitions increases, the number of states also increases. The relationship between the number of partitions N and the number of states S_total can be expressed as S_total = 3^N, where N ≥ 1. The PRISM model checker includes multiple model checking engines, many of which are based on symbolic implementations (using binary decision diagrams and their extensions). These engines enable the probabilistic verification of models of up to 10^10 states (on average, PRISM handles models with up to 10^7-10^8 states). Based on this, with the proposed approach we can model up to 16 partitions (3^16 ≈ 4.3 × 10^7 states); this bound is due to the limitation of the PRISM tool. It is worth mentioning that, in order to analyze larger numbers of partitions, it is possible to reduce the state space of the probabilistic models prior to verification. A variety of techniques has recently been developed, including symmetry reduction, bisimulation minimisation and abstraction. Among these, bisimulation (also known as lumping) [40,41] is of particular interest for our future work, since it preserves widely used probabilistic temporal logics. PRISM also features a variety of advanced techniques such as abstraction refinement and symmetry reduction. PRISM also supports approximate/statistical model checking through a discrete event simulation engine. So, considering the capability of the PRISM model checker, it is also possible to analyze systems with a larger number of partitions using our methodology.

Conclusion and Future Works

We presented the formal modeling and analysis of single-cell and multiple-cell upsets using a methodology based on the probabilistic model checking technique. This methodology aims to analyze the relationship between the number of TMR partitions, the scrub interval and the mission time. Increasing the number of TMR partitions allows the frequency of scrubbing to be reduced, which results in less energy consumption. However, based on the voter failure rate, it is possible to find the optimal number of partitions. Using the proposed methodology, designers can assess the number of partitions, or the scrub frequency, required to meet the design requirements at early design stages. To demonstrate our approach, we have shown the results of our analysis for a 64-tap FIR filter case study.
The results showed how an increased number of partitions enables less frequent scrubs and vice versa, and we were also able to find the optimal number of partitions for both the SCU and the combined model. Such an early analysis will add more confidence to the design and may reduce the overall design cost, time, and effort. However, the PRISM models are currently written by hand; this lack of automation is a limitation of the present approach, and in the future we will work to overcome it. It is also worth mentioning that, with the decrease of transistor sizes, upsets of three or more bits are also not uncommon these days; this will be addressed in our future work. Another interesting direction for future work will be to include partial reconfiguration (read-back scrubbing) in our model and to explore the effect of unreliable voters in the design partitions.
Women entrepreneurial intentions in subsistence marketplaces: The role of entrepreneurial orientation and demographic profiles in Zimbabwe

Abstract

Subsistence women in developing economies are largely marginalised, yet their circumstances could be improved through entrepreneurship. The study sought to establish the relationship between entrepreneurial orientation and entrepreneurial intention, and the moderating and direct effects of demographic profiles, as a basis for establishing a predictive model of prospective rural women entrepreneurs. Data were collected from prospective women entrepreneurs in the rural markets of Manicaland Province, Zimbabwe. A sample of 192 women was used. Data analysis was done using structural equation modeling to address the research hypotheses. Convenience sampling was applied to test the hypotheses, relying on consenting women. The adequacy of the sample was tested using the Kaiser-Meyer-Olkin measure and Bartlett's test of sphericity. Initially, exploratory factor analysis was done using Principal Component Analysis, and the rotated component matrix was also extracted. Data analysis was performed using the SmartPLS program. The results show a significant relationship of innovativeness and risk-taking ability with entrepreneurial intention. However, the data did not confirm the hypothesised relationships of proactiveness and demographic profiles with entrepreneurial intention. It is recommended that entrepreneurship financiers, non-governmental organisations and governments consider rural women's innovativeness and risk-taking ability in screening potential entrepreneurs for funding and training.

PUBLIC INTEREST STATEMENT

About 56% of people in Africa live in rural areas (Worldometer, 2020), and more than half of these rural dwellers are women. More donor funds and government projects are directed towards women, especially in rural markets. The purpose of this study is to prospect for a predictive model of potential women entrepreneurs for funding and training by establishing the influence of entrepreneurial orientation, entrepreneurial intention and demographic profiles, in terms of level of education, family business background and age, in rural markets. Innovativeness and risk-taking ability were noted as important in identifying rural women's entrepreneurial intention. There is a need to impart these elements to prospective rural women entrepreneurs to enhance entrepreneurial intention, whereas demographics and proactiveness were not significant.

Introduction

The poor outnumber the rich in terms of population, because "the bottom of the pyramid" held 3.5 billion people in 2017 (GlobalWealthReport-CreditSuisse, 2019; Prahalad & Hart, 2002). The majority of these poor people are in rural areas (subsistence marketplaces), especially in Africa, where 56% live in rural areas (Worldometer, 2020). Subsistence marketplaces have illiterate consumers who are resource-poor and, as a result, are largely ignored in most studies in entrepreneurship and business. Women in subsistence marketplaces live at the intersection of poverty, illiteracy and the marketplace (Viswanathan, 2017), and they have not attracted researchers' interest at the micro level to understand their entrepreneurial intentions.
Moreso, research in such markets is geographically inconvenient (rural areas are usually far away from Universities, respondents are dispersed and do not have a research culture, as compared to data collection in urban shopping malls, streets and online surveys) and pose the extra challenge of translations of instruments to vernacular language (Brislin, 1970). Entrepreneurship is the backbone and engine of economic development of any country. To achieve the sustainable development goals (SDGs) embraced by all United Nations member states in 2015, there is need for robust economic policies especially those related to prospective entrepreneurs' empowerment. Entrepreneurial activities are increasing internationally and expectations from women in subsistence markets have moved to a higher level with the changing world order (Achakpa & Radović-Marković, 2018). An investigation into the link between entrepreneurial orientation of the women figure and entrepreneurial intentions is very important. The women figure is very important in every society. Her degree of entrepreneurial orientation, complemented by her age, parent's employment and family business background as well as her level of education plays a pivotal role in determining her entrepreneurial intention (Kumar et al., 2018;Neneh & Van Zyl, 2017). It is expected that a woman should contribute to the economy and that is why more donor funds and government projects are directed towards women. It is a big loss for the country not to benefit from the women given that they are more than 50% of the population in most African countries (Worldbank, 2020). This is despite the fact that a woman is often seen as a family member who does housework, takes care of children, and spends most of her time at home. The vast benefits from entrepreneurial activities have led many nations to spearhead for a larger scale establishment of start-ups and new ventures (GEM, 2019). There is dearth in subsistence women entrepreneurship literature on the three critical concepts of entrepreneurial orientation. Proactiveness, innovativeness and risk taking are indispensable core components of the entrepreneurial orientation construct as first discussed by Miller (1983) and extended by Lumpkin and Dess (1996). These three dimensions shape the entrepreneurial orientation as a single construct and therefore regard it as a reflective indicator. Thus, entrepreneurial orientation has been extensively publicised as an essential component of entrepreneurial intention when forming a business, goals for the business and its growth target (Neneh & Van Zyl, 2017;Panda, 2018;Quince & Whittake, 2003). In line with the institution's theory, the huge differences in socio-cultural, infrastructural and economic environment makes it imperative to conduct empirical studies in subsistence markets despite the saturation of the concepts in terms of research in occidental and oriental markets (Burgess & Steenkamp, 2006;Meyer & Peng, 2016). Extant entrepreneurship literature noted relations between entrepreneurial orientation and business performance measures. Findings have not been consistent, justifying the need for more studies incorporating the construct. Most previous literature on entrepreneurship examined the link between entrepreneurial orientation and organisational performance. 
The conclusion from the studies was that businesses with a viable entrepreneurial orientation supersedes to a larger extent those that are have nothing to do with entrepreneurial orientation (Covin & Slevin, 1986;Wiklund & Shepherd, 2003). Nevertheless, mixed results have been obtained from these researches. Alternatively, other studies show no significant or lower correlation connecting entrepreneurial orientation and achievement (Covin et al., 1994;Lumpkin & Dess, 2001). Moreso, other previous studies focused more on propensity of students' intention to be entrepreneurs in future and it was found out that entrepreneurship education was perfectly associated with entrepreneurial intention as well as self-employment (Gatewood et al., 2004;Henderson & Robertson, 2000;Kumar et al., 2018;Parnell et al., 1995;Stohmeyer, 2007;Turker et al., 2005;Wang & Wong, 2004) Some of the studies mainly adhered to the consequence of individual traits during the process of making choices (Brockhaus, 1980;Johnson, 1990;Krueger et al., 2000;Veciana et al., 2005). From these researches, it was discovered that there is a link between entrepreneurial intention and some individual traits like self-confidence and the desire to excel (Duygu & Senem, 2009). Therefore, lack of vast studies on the women entrepreneurship orientation and intention is evident, especially in subsistence markets. Entrepreneurial orientation's roots can be traced back to J. Child (1972) who advanced the emergence of entrepreneurship orientation from a critical standpoint claiming that emerging opportunities could conceivably be successfully tackled by "enthusiastic performance." Other scholars (Mintzberg, 1973;Sorayah & Dygku, 2017) point out that entrepreneurial orientation started in policy formulation studies. Policy formulation is a companywide process that integrates designing, evaluation, resolution making, and many facets of a firm's culture and mission (Sorayah & Dygku, 2017). Miller (1983) describes an entrepreneur, as a person who is innovative, a risky taker and has a first mover advantage which results in a competitive edge. When it comes to an entrepreneurial oriented individual, (Kuhn et al., 2010;Paul, 2013) views such an individual as one who concentrates on technological transformation (innovativeness), embarks on perilous endeavours (risk-taking), and follows favourable circumstances boldly (proactiveness). On the other side of the coin, entrepreneurial intention is viewed as a logical characterisation of the measures to be effectuated by people to either set up original, self-sustaining ventures or to formulate latest service within the existing entities (Krueger et al., 2000;Wiklund & Shepherd, 2003). Objectives are required to make entrepreneurial motives real since they begin with inspirations. Krueger et al. (2000) point out that people form a business not as an instinctive act, but they do it with a motive behind. Since the motivation of entrepreneurship is important to revitalise development in a world which is growth sensitive (Duygu & Senem, 2009), it is important to study on mechanisms that can be put in place to heighten subsistence women's entrepreneurial activity (Achakpa & Radović-Marković, 2018;Rosca et al., 2020;Setini et al., 2020;Siba, 2019). In the same vein, policy makers should give attention on reasons that entices some women to have a zeal for entrepreneurial career whilst others feel not interested. 
Research context and background to the study The study was done in Zimbabwe, a typical subsistence economy, with a shrinking economy and a three-digit galloping inflation (540% in 2020, according to the Reserve Bank of Zimbabwe). Almost 75% of the people in Zimbabwe survive on less than US$1 per day. Results from such a market would provide research evidence in entrepreneurship literature on poor economies which are largely ignored by scholars (Mari, 2008;Viswanathan & Rosa, 2007). There has been a rise in informal entrepreneurship in Zimbabwe largely due to failed economic policies (Ndiweni & Verhoeven, 2013) and the marginalisation of majority of the population. The rural areas have been adversely affected by the dwindling economy. Rural entrepreneurs have had challenges in securing lines of credit (Munyanyi, 2015). Ironically, the Government of Zimbabwe is working tirelessly to achieve its strategic vision of becoming an upper middle-class economy by 2030 as adopted from the global goals/sustainable development goals (SDGs) embraced by all United Nations member states in 2015. In a bid to ensure the realisation of this vision, the government of Zimbabwe is pushing this agenda through the Ministry of women affairs, community, small and medium enterprises development, Ministry of youth, sports arts and recreation as well the Ministry of finance and economic development. A number of initiatives to boost the economy have been put in place by the government of Zimbabwe like the youth and women empowerment facilities. There is also the Zimbabwe Women Empowerment Bank (ZWMB) that was set up to offer financial support to Zimbabwean marginalised women entrepreneurs. Non-governmental organisations like the World Vision, Plan International, GOAL and Caritas among others are coming in to assist Zimbabwe and other nations to help in the achievement of the world SDGs. Unfortunately, regardless of such efforts to improve the Zimbabwean vision, it is surprising that women entrepreneurs are still at a small scale with very few women taking their businesses to greater magnificent levels. The entrepreneurial orientation of some of these women is questionable. It is the purpose of this study to fill this gap and to increase the existing body of literature on women entrepreneurship. The focus of the study was on prospective women entrepreneurs in the rural areas of Manicaland province in Zimbabwe since the rural areas of Zimbabwe constitute the greatest composition of women in the country and they are the marginalised group targeted by most donors as well as the government empowerment funds. The study also aims at unveiling the fundamental factors that motivates rural women to embark on profitable and long-lasting business ventures so as to avoid wastage of donor and government funds on individuals who are not innovative, proactive or risk takers or who do not possess the right qualities for such funds. In other words, the study findings will enable efficient resource allocation for economic growth maximisation and to provide a green-light to policy makers the world over so that their policies assist in the attainment of the global goals, which is a poverty free world. Viswanathan and his colleagues emphatically indicated the essence of understanding subsistence marketplaces in their own right, so as to improve their quality of life (Venugopal et al., 2015;Weidner et al., 2010). 
Women entrepreneurship studies in Zimbabwe Entrepreneurship studies in Zimbabwe have been centered on entrepreneurship challenges and factors that motivate entrepreneurship uptake. Mazonde and Carmichael (2016), focusing on urban women, found that Zimbabwean women entrepreneurs are very good in the management of the association between their diverse social obligations and personalities. This makes these women to have a balance between family obligations and entrepreneurial roles for the improvement of their wellbeing. Women entrepreneurs are faced with difficulties in accessing financial capital, struggle between family and work obligations, acquiring of raw materials as well as inadequate knowledge and administration skills (Mauchi et al., 2014). Dumbu (2017) as well as Chikombingo et al. (2017) studied motivational aspects for women entrepreneurs. The findings were that entrepreneurship by women is a result of push factors which are on the negative side like loss of a preceding formal job, the need for the freedom associated with entrepreneurship and sufficient capital. They recommended the government to fund and support women entrepreneurs because it is a substitute for employment. Nhuta and Mukumba (2017) had a study where they detected socio-economic features of women entrepreneurs in Zimbabwe and established the association between women empowerment in entrepreneurship and social-economic development. Chigudu (2018) had an assessment of the degree of female involvement in small and medium enterprises administration in urban Zimbabwe. The findings were that women personally had no confidence of getting involved in substantial high-risk management roles because they believe men can do better than them. Dumbu (2018) studied difficulties faced by cross border women entrepreneurs in Zimbabwe. The results of the study were that these women who are into cross border are faced with difficulties in accessing correct information on customs duty and processes, are deprived of financial knowledge and combining the family burden and entrepreneurial tasks is a big task. To the best of our knowledge, we did not find empirical studies on subsistence women entrepreneurship in Zimbabwe or any homogeneous subsistence marketplace in Sub-Saharan Africa, which sought to establish the relationship between entrepreneurial orientation, demographic factors and entrepreneurial intention. The existence of such studies in developed markets and other markets which are completely divergent in terms of socio-cultural environment does not render this study redundant (Burgess & Steenkamp, 2006). Statement of the problem Governments as well as non-governmental organisations, both local and international have taken a number of initiatives to improve the livelihood of women in emerging markets. However, funds provided for subsistence women entrepreneurs have been free for all. There is no standard screening model to avoid wasting funds after giving them to women who do not possess any entrepreneurship potential. This study sought to establish the influence of entrepreneurship orientation and demographic factors on entrepreneurship intentions of subsistence women. Objectives of the study (1) To determine the relationship between entrepreneurship orientation (innovativeness, proactiveness and the risk-taking) and entrepreneurship intention of subsistence women as prospective entrepreneurs screening variables. 
(2) To identify the impact (moderating and direct effects) of demographic profiles (education, family business background and age) on the entrepreneurial intention of subsistence women, as screening variables for entrepreneurship prospects.

This study is designed to suggest a possible detection or screening device for prospective women entrepreneurs in developing markets. The research may also be vital in ensuring that technical as well as financial support is channelled to recipients with genuine entrepreneurial intentions. It focuses on how innovativeness, proactiveness and risk-taking motivate a woman in a subsistence market to form an entrepreneurial intention, and on how age, educational level and family business background moderate the relationship between entrepreneurial orientation and intention. The direct effects of the demographic factors were also tested. The research could further be used as a preliminary instructional device to identify entrepreneurial aspects that are deficient in prospective trainee women entrepreneurs. Entrepreneurship is an indispensable ingredient for economic growth. Its value is exhibited in numerous ways, such as identifying, evaluating and making the most of emerging business prospects and driving the economy forward through transformation (Baharudin et al., 2020; Cuervo et al., 2007; Neneh, 2018; Rosca et al., 2020; Siba, 2019). It also generates employment and consequently enhances the all-inclusive wellbeing of the general public (Reynolds, 1987; Zahra, 1999). The paper unfolds as follows: the next section covers the literature review, divided into theoretical foundations and hypothesis development, followed by the research methods. This leads to the presentation of results and subsequent discussion, as well as the conclusions, limitations and future research directions.

Theoretical foundations

The thrust of this section is to discuss the theoretical foundations that underpin this research. The conceptual model of the study (see Figure 1) draws on the legacy of Fishbein and Ajzen (1975) and Ajzen (1985), whilst the antecedents of entrepreneurial intentions were anchored on Shapero's entrepreneurship event model (Shapero, 1975). The liberal feminist theory brings in the rural women marginalisation conceptualisation adopted in this study.

The theory of planned behaviour (TPB) and entrepreneurial event model

The conceptual model of this study was based on the theory of planned behaviour (Ajzen, 1985), which was a sequel to and logical development of the theory of reasoned action (Fishbein & Ajzen, 1975). The TPB is a leading model in behavioural intention studies, and it is assumed that intentions are a surrogate for actual behaviour (Taylor & Todd, 1995). Gird and Bagraim (2008) regard it as a suitable model for understanding entrepreneurial behaviour. The attitude-behavioural intention link is the guiding framework of the conceptual model of this study. The entrepreneurship event model was propounded by Shapero (1975) and highlights that certain factors are crucial in triggering an individual to start a business venture. According to this theory, entrepreneurial intentions rest on three fundamental foundations: desirability, feasibility and the propensity to act (Shapero & Sokol, 1982). Prospective women who possess these characteristics, or the appropriate attitude, are therefore highly likely to establish a business. Ngugi et al. (2012) agree with Shapero and Sokol (1982), pointing out that perceived desirability and perceived self-efficacy are vital foundations of desirability and feasibility.
Liberal feminist theory

The main women entrepreneurship theory related to this study is the liberal feminist theory. The theory originated in the 18th century, arguing that men and women are fundamentally the same, since a human being is defined by the capacity to reason; their differences arise from discrimination and imposed barriers such as unequal access to education or social segregation. These barriers, however, are not permanent and can be removed. As it underpins this research, the liberal feminist theory focuses on the problems faced by individual women and on the practical remedies that mitigate the habits and preconceptions that lead to gender inequality. These misconceptions are also prevalent among subsistence women, whose entrepreneurial potential has been underrated. Broadly speaking, the liberal feminist theory focuses on the manner in which women are viewed and financed in entrepreneurial operations (Rottenberg, 2014). From a tender age, individuals internalise socially and professionally constructed gender principles. Young women and girls are recurrently exposed to the view that superior businesses are for men; they are therefore likely to suffer from an inferiority complex about initiating and running the kinds of larger businesses commonly run by men. In fact, they come to believe that running such businesses means entering the 'men's world', which is a taboo, especially in subsistence markets. Owing to this mentality, prospective women entrepreneurs display a reduced disposition towards the type of entrepreneurship that men usually compete for (Inzlicht & Schmeichel, 2012). In addition, women who are repeatedly exposed to prototypes of women entrepreneurs as beneficiaries of small loans and owner/managers of small businesses are likely not to excel as much as men do; they make these small business owner/managers their role models and, in turn, develop the entrepreneurial intention to start and run only small business ventures. Consequently, most scholars concur that women show a greater aptitude for social and non-monetary oriented organisations (Inzlicht & Schmeichel, 2012; Langowitz & Minniti, 2007). This gender duality of placing men and women in separate baskets is pernicious to women who intend to grow highly profitable, growth-oriented businesses: the women become hesitant to start businesses associated with men, and the dichotomy instead gives men a huge advantage in business and society. In other words, the liberal feminist philosophy outlines clearly how gender pigeonholing and prescribed gender roles prevent women from benefiting easily and abundantly from resources that are at the disposal of men, and it brings to light the remedies at a personal level that mitigate these obstructions. Moving forward, this emancipation of women must include rural women. This study sought to bridge this gap by building a predictive model of subsistence women's entrepreneurial orientation and intentions.

Entrepreneurial orientation

Entrepreneurial orientation is the degree to which an organisation or individual consistently acts entrepreneurially rather than conservatively (Covin & Wales, 2012).
Entrepreneurial orientation is understood through a set of characteristics: the desire to take risks, innovativeness, proactiveness, independence or autonomy, and determined initiative or competitive aggressiveness, all of which emanate from the entrepreneurship and business strategy literature (Bolton & Lane, 2012). The current study, however, excluded autonomy and competitive aggressiveness. These two constructs were dropped by Bolton and Lane (2012, p. 227) in developing the Individual Entrepreneurial Orientation scale adapted in this study, owing to low Cronbach alphas, and the researchers dropped them for the same reasons. Rauch et al. (2009) note that the three facets of entrepreneurial orientation that have been applied and cited most consistently in previous studies are risk-taking, innovativeness and proactiveness. These three are regarded as indispensable constructs and are used together here to capture entrepreneurial intention. Awang et al. (2016) describe entrepreneurial orientation as the strategies and operations that provide a basis for entrepreneurial choices and actions; it follows that strategy formulation as a process reflects a person's entrepreneurial intention. The qualities of an entrepreneurially oriented prospective woman who should qualify for technical and financial assistance can be compared to those of an entrepreneurial company that deals in innovative and risky product markets (Miller, 1983). Such qualities ensure competitive advantage. Entrepreneurial orientation is significant in that it helps a company's top executives to clarify the firm's mandate, maintain the business's focus and map a way forward so as to accomplish greater benefits than its rivals (Rauch et al., 2009). A prospective woman with such an orientation will be equally equipped to drive an enterprise forward and to improve the welfare of society.

Innovativeness as an entrepreneurial orientation dimension

Innovativeness is viewed as a person's attempt to produce new products, having uncovered concealed opportunities, and to provide novel solutions (Lumpkin & Dess, 1996, 2001; Sorayah & Dygku, 2017). This encompasses experimentation and inventiveness resulting in unique products with features that extend consumers' utility horizon. Miller and Friesen (1982) argue that innovative firms or individuals do so regularly while taking risks in product-market development. By contrast, Hansen (1997), supported by Awang et al. (2016), argues that it is practically unsustainable to keep doing the same activities, and hence any new invention or idea should be viewed as innovation. To sustain a competitive position and to survive, women entrepreneurs have to be innovative. Innovativeness is vital for organisational success and fosters entrepreneurial orientation (Hult et al., 2004). Additionally, Schumpeter (1934) as well as Galindo and Mendez-Picazo (2013) assert that innovativeness is at the very core of entrepreneurship: it is a method applied by entrepreneurs to manoeuvre through competition and exploit economic opportunities by inventing new products. Empirical studies indicate the ability of women entrepreneurs to outwit their competitors through novel solutions (Ayub et al., 2013). A study of 60 women entrepreneurs in Pakistan by Ayub et al. (2013) is in tandem with the argument by Cheng et al. (2009), together with Ndubisi and Iftikhar (2012), that innovativeness is vital for the success of entrepreneurial undertakings.
In separate research in KwaZulu-Natal, South Africa, government programmes and policies for women entrepreneurs were assessed (Okeke-Uzodike et al., 2018). That study concluded that an innovative state of mind, inherent personal motivation and support from both the public and private sectors are needed for one to become a prosperous entrepreneur, and it recommended that the South African government intervene through empowerment programmes in support of women in order to attain its Vision 2030, which relates to the National Development Plan (NDP). Moreover, Jyoti et al. (2011) assessed 274 women-owned firms and concluded that a business has to be highly innovative to gain competitive advantage. From this discussion, we therefore propose:

H1: The innovativeness of a woman positively influences her entrepreneurial intention.

Proactiveness as an entrepreneurial orientation dimension

Proactiveness is the willingness of an entrepreneur to act: the individual seeks opportunities and moves well ahead of competitors so as to gain first-mover advantages, whether in inventing new products or services, while rivals remain passive (Lumpkin & Dess, 2001; Miller, 1983). In other words, proactiveness is the ability to scout for concealed opportunities, both in current product provisions and in unchartered waters in terms of new products. This gives the entrepreneur an edge over competitors as they drop products in the decline stage (Venkatraman, 1989). Its aim is to establish the disposition to lead rather than to follow (Covin & Slevin, 1986). This line of thought moves in the same vein as Foss and Klein's (2010) point that entrepreneurial action entails spotting and seizing profit-generating chances that arise in a disjointed world. Proactiveness is therefore crucial to an entrepreneurial orientation because of its potential to display an optimistic outlook, which accompanies the innovative activity usually associated with the entrepreneurial process (Lumpkin & Dess, 1996). Social networking has been found to enhance proactiveness (Nziku & Struthers, 2018); such networking is a strong agent of behavioural influence and attitude change towards business start-up and sustenance. Wu and Wang (2011) opine that proactiveness, as a grand design of entrepreneurial activity, indicates a forecast of the time ahead and an assessment of market demand. According to Miles et al. (1978) as well as Wong (2012), a firm that immediately follows into a new market can be just as pioneering as the ground-breaker and is most likely to win through being proactive. Prior research has revealed that a proactive character is strongly linked to entrepreneurial intention among students, as opposed to parental paradigm and gender (Baker & Sinkula, 2009; Crant, 1996). To take advantage of an opportunity, being the ground-breaker is predominant; this usually results in high profits and the establishment of brand recognition. Avlonitis and Salavou's (2007) research indicates that proactiveness is a major driver of the success of new inventions.
This implies that highly involved and passive entrepreneurs differ most in one area of innovativeness, also referred to as product or service variation. Proactiveness presents a challenge in that its characteristic of introducing new products and services is closely linked to innovativeness. Morris and Paul (1987) conducted research using a twelve-item instrument to identify innovativeness, risk-taking and proactiveness; their work addressed this challenge and concluded that innovation is not a substitute for proactiveness. Therefore, the prediction is:

H2: The proactiveness of a woman positively influences her entrepreneurial intention.

To avoid conflating innovativeness and proactiveness, their differences are succinctly explained. Innovativeness is the tendency to introduce new products or to come up with novel solutions to a problem through trials, experiments and marketing intelligence, whereas proactiveness refers to the willingness to take action so as to gain first-mover advantages in anticipation of a change in customer needs.

Risk-taking as an entrepreneurial orientation dimension

According to Quince and Whittaker (2003), risk-taking is the degree to which people differ in their willingness to confront, or their aversion to, risk. Risk-taking remains an accepted and popular dimension of entrepreneurial orientation (Al Mamun et al., 2017; Miller, 1983) and denotes the propensity to take part in bold decisions and activities (Kumar et al., 2018; Lumpkin & Dess, 2001). While scholars such as Neneh (2014), backed by Fatoki (2014), found that women entrepreneurs were risk-takers, other researchers have noted that some of these women have risk-averse traits (Boohene et al., 2008; Wiklund & Shepherd, 2003). Risk-taking encompasses partaking in audacious activities to advance into the unknown, accessing large loans, and investing substantial resources in a volatile environment (Rauch et al., 2009; Al Mamun et al., 2017). Entrepreneurs must give first preference to assessing their risks (Ayub et al., 2013; Mozumdar et al., 2020); this is essential because of the uncertainties in the economic environment. A study was carried out in Tanzania on women in the food processing industry (Langevang et al., 2018). One of its findings was that women are now breaking the chains of the African patriarchal mentality that reduces them to mere consumers and homemakers waiting to be taken care of by their male counterparts; these women are now entrepreneurs able to pull themselves up and even to compete successfully with men in business. Women business associations (WBAs) proved to be one such platform that is very helpful in supporting women entrepreneurs in Tanzania. However, policies that support women in running their own businesses must be prioritised by most developing economies (Baharudin et al., 2020; Neneh, 2018). Risk-taking has also been explained as the aptitude and eagerness of a business or person to pursue well-thought-out and planned opportunities in the marketplace despite the uncertainties attached to them (Lumpkin & Dess, 2001; Neneh & Van Zyl, 2017). The importance attached to risk-taking in entrepreneurial orientation can also be seen in the large role it plays in entrepreneurial behaviour (Fatoki, 2014; Quaye & Acheampong, 2013). Risk-taking supports the going concern of SMEs (D. Miller & Friesen, 1978; Neneh, 2014). Moreover, risk-taking is viewed by Jalali et al. (2014) as having a strong and positive relationship with firm performance and growth.
Conversely, Wiklund and Shepherd (2003), as affirmed by Hughes and Morgan (2007), found that risk-averse behaviours lead to poor performance owing to an unwillingness to grab market opportunities robustly. Therefore, hypothesis 3 is:

H3: The risk-taking propensity of a woman positively influences her entrepreneurial intention.

Effect of demographics on entrepreneurial orientation and intention

A variety of factors motivates an individual to embark on entrepreneurship. Studies on the influence of demographic factors on entrepreneurial intention are sparse, and their findings are inconsistent (Sasu & Sasu, 2015; Wang & Wong, 2004). Nevertheless, there is general consensus that demographics have an impact on entrepreneurial intention.

Family business background. A family business background has a strong influence in enticing people to start their own businesses: a person raised in a family business set-up is motivated to start their own business in future (Alsos et al., 2011; Botha, 2020; Chaudhary, 2017; Crant, 1996). Parents who run businesses are keen to educate and help their children to start their own, and Cooper et al. (1994), corroborated by Sandberg and Hofer (1987), concur that children whose parents are entrepreneurs use those parents as role models when operating their own businesses, copying their strategies and tactics. Such children derive their means of survival from establishing and running their own businesses, just as their parents did (Fairlie & Robb, 2007; Mcelwee & Al-Riyami, 2003). This practice is common among Indian nationals, who are highly business-minded and run their own businesses. Brown (1990) reported on a training programme in the United Kingdom aimed at helping university students to form their own businesses: 38% of the students were keen to start their own businesses, and it was also found that their fathers were business owners, so their family business background may have motivated this desire. Crant (1996), together with Schiller and Crewson (1997), reached the same conclusion. From the above discussion, the proposed hypotheses are:

H4a: The relationship between entrepreneurial orientation and intention is moderated by family business background.

H4b: Family business background has a direct effect on entrepreneurial intention.

Level of education. Sasu and Sasu (2015), as supported by Botha (2020), opine that the time an individual spends in general education positively influences their level of entrepreneurship. Education is important in instilling entrepreneurial skills (Al Mamun et al., 2017; Teoh & Chong, 2008). A study by Neneh (2014) concluded that a university graduate possesses a higher inclination towards entrepreneurship than a non-graduate. By contrast, some studies argue that the link between university education and entrepreneurship is generally weak (Chaudhary, 2017; Parnell et al., 1995; Turker et al., 2005). Moreover, Davidsson and Honig (2003) found that one's education can lead to the discovery of new opportunities but does not necessarily mean that the individual will open a new business to take advantage of them. Therefore, hypothesis 5 is:
H5a: Level of education moderates the relationship between entrepreneurial orientation and intention.

H5b: Level of education has a direct effect on entrepreneurial intention.

Age. Earlier literature paid little attention to age as a predictor of entrepreneurial intention (Kazmi, 1999; Lewis & Massey, 2003; Neneh & Van Zyl, 2017; Quaye & Acheampong, 2013). Of late, however, age as a variable shaping entrepreneurial intention has received much more recognition; scholars who have focused on age include Zissimopoulos and Karoly (2007), Choo and Wong (2006), Delmar and Davidsson (2000), Hughes and Morgan (2007), and Wiklund and Shepherd (2003). Age does trigger entrepreneurial behaviour in people (Coulthard, 2007; Stohmeyer, 2007), and according to Gatewood et al. (2004), younger people possess greater zeal to act entrepreneurially and form a business than older people. From this discussion, the proposed hypotheses are:

H6a: Age moderates the relationship between entrepreneurial orientation and intention.

H6b: Age has a direct effect on entrepreneurial intention.

Entrepreneurial intention

Cognitive psychology is the root of the notion of intention and helps to explain human behaviour (Fatoki, 2014). Intention is a psychological state that motivates an individual to achieve a desired goal or plan of action. According to Bird (1988), intentionality is a disposition directing a person's thoughts, experiences and actions towards a particular objective. Hence, Indira (2014) states that entrepreneurial behaviours are planned behaviours, and intention is a predictor of entrepreneurial behaviour. Entrepreneurial intention is possessed by a person who is likely to start an entrepreneurial venture or to become self-employed (Bird, 1988; Thompson, 2009; Tkachev & Kolvereid, 1999); it can generally be defined as an individual's aim of starting a business venture in the foreseeable future. In some instances, however, some women, especially in Africa, face conflicting roles, with their effort required both in the society in which they live and at their entrepreneurial workstations (Hundera et al., 2019). This finding emanated from an Ethiopian case study of 307 female entrepreneurs, which found that women can mitigate conflicting roles by applying a number of strategies, among them compromise, social support, dedication to the entrepreneurial function or satisfying all roles (Hundera et al., 2019). Indira (2014) opines that, as an intentional activity, entrepreneurship is twofold: it is based on both the capacity and the intention of a person to seek, discover and seize an opportunity so as to maximise the benefits that arise from it. Krueger et al. (2000) produced a model in which the tendency to take risks and an internal locus of control positively influence attitudes towards entrepreneurship and eventually shape entrepreneurial intent. In addition, Duijn (2009) found in his empirical research that the most crucial constructs influencing entrepreneurial attitude were proactiveness and risk-taking proclivity. Choo and Wong (2006) note that entrepreneurial intention involves hunting for vital information that is essential in achieving venture-creation objectives. A variety of studies, including Indira (2014), Reynolds (1987) and Krueger et al. (2000), provide empirical evidence that entrepreneurial intention is the first indicator of entrepreneurial behaviour.
The overall conceptual model of this study is shown in Figure 1.

Research method

This study sought to determine the direct effects of entrepreneurial orientation on entrepreneurial intention, the direct effects of demographic factors on entrepreneurial intention, and the moderating effect of demographic attributes on the relationship between entrepreneurial orientation and entrepreneurial intention. This section details the procedure followed to achieve these objectives, starting from the validation of the constructs in the model.

Research design

A quantitative approach was employed. The quantitative approach involves the investigation of a natural or social context using raw data analysed statistically. Data were collected through a survey of prospective women entrepreneurs. The design was essentially explanatory, since it sought to establish relationships between constructs.

Population and sample

The population comprises prospective women entrepreneurs in the subsistence markets of Manicaland province, Zimbabwe. The study focused on Manicaland because of its strategic location and demographic advantages. Manicaland borders Mozambique and is also a gateway to South Africa, a strong trading partner of Zimbabwe. It is one of Zimbabwe's ten provinces and has a high level of commercial activity, owing to its ability to support all forms of agricultural activity, from those requiring high rainfall to those needing low rainfall. Manicaland is also endowed with vast natural resources, making it a centre of mining activity. In terms of population distribution, Manicaland is the only province in Zimbabwe with more people in rural areas (84.6%) than in urban areas (15.4% of the province's total) (Inter Censal Demographic Survey, 2017). Given that Zimbabwe also has more women than men, the researchers therefore found it ideal to study the rural women of Manicaland province. The sample size was determined on the basis of a number of factors (Bryman, 2016), of which cost and the poor research culture of respondents were the major ones in this study. The sample size was 192, which is suitable for variance-based structural equation modeling (SEM) rather than covariance-based SEM, the former being more robust for sample sizes below 200 (J. Hair et al., 2010). Pallant (2013) argues for testing sampling adequacy prior to any application of EFA, together with testing whether the correlation matrix of the data differs from an identity matrix. To this end, the Kaiser-Meyer-Olkin (KMO) test of sampling adequacy and Bartlett's test of sphericity were computed. J. Hair et al. (2010) and Pallant (2013) recommend that the KMO statistic exceed 0.5, while Bartlett's test must be significant at p < 0.05. These assumptions were tested and the results are shown in Table 1. The KMO statistic was 0.801 > 0.50 and, for Bartlett's test, χ2(253) = 3129.303, p = 0.000 < 0.05. Since the KMO statistic exceeded 0.5 and the Bartlett p-value was below 0.05, the use of factor analysis on these data was confirmed as valid; a minimal code sketch of these adequacy checks is given below.
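As a minimal sketch of these adequacy checks, and assuming the 23 Likert-scale items have been loaded into a pandas DataFrame (the file name and variable names below are hypothetical), the KMO and Bartlett statistics reported in Table 1 could be reproduced with the factor_analyzer package:

```python
# Sampling-adequacy checks before exploratory factor analysis.
# Assumes `items` holds only the 23 five-point Likert items (one row per respondent).
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("subsistence_women_items.csv")   # hypothetical file of item responses

# Bartlett's test of sphericity: H0 is that the correlation matrix is an identity matrix.
chi_square, p_value = calculate_bartlett_sphericity(items)

# Kaiser-Meyer-Olkin measure: the overall statistic should exceed 0.5.
kmo_per_item, kmo_overall = calculate_kmo(items)

print(f"Bartlett chi-square = {chi_square:.3f}, p = {p_value:.4f}")   # study reports 3129.303, p = 0.000
print(f"Overall KMO = {kmo_overall:.3f}")                             # study reports 0.801
```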
For the exploratory factor analysis, principal component analysis (PCA) was chosen as the extraction method. This decision rested on the understanding that PCA is more robust than other extraction methods such as image factoring, alpha factoring, principal axis factoring, and unweighted or generalised least squares (Harrington, 2009; Yong & Pearce, 2013). To further improve the robustness of the extraction, rotation was applied; because the constructs were expected to be uncorrelated or only negligibly correlated, the orthogonal varimax rotation method was chosen instead of the oblique direct oblimin methods (Field, 2016; Hair et al., 2014).

Variable and measurement

The study had the following independent variables: the individual-orientation sub-constructs of innovativeness, proactiveness and risk-taking, measured using the Bolton and Lane (2012) individual entrepreneurial orientation scale. Individual entrepreneurial intention was measured using items adapted from the Linan and Chen (2009) and Thompson (2009) individual entrepreneurial intention scales. Reverse-scored items were avoided because they are poorly understood in subsistence markets (Steenkamp & Burgess, 2002). Demographic profiles were theorised to have moderating as well as direct effects; the demographics were education, family business background and age, and the dependent variable was entrepreneurial intention (see Figure 1). A 23-item questionnaire, excluding demographic questions, was developed to collect data from subsistence women. The English version was translated into Shona, the vernacular language of the rural women respondents in the eastern province of Zimbabwe (Manicaland), and back-translated in line with translation best practice (Brislin, 1970). The questionnaire was pilot-tested before data collection, and only women who were not yet entrepreneurs were considered. The questionnaire comprised items on the women's level of entrepreneurial capability and their levels of innovativeness, proactiveness and risk-taking; questions on age, education and family business background were also included to enable testing of the moderating and direct effects of these demographic profiles on entrepreneurial intention. Convenience sampling was used to test the hypotheses. Similar to the sampling strategy of Marshall et al. (2006), data were collected from consenting women, in line with the Helsinki declaration on ethical data collection and to enhance respondents' honesty. The study used structural equation modeling to address the research hypotheses, which comprised both direct relationships between latent variables and demographic attributes with direct and moderated relationships. Scholars concur that structural equation modeling handles latent variables more robustly than other multivariate regression techniques (Hair et al., 2018): standard regression tests fail to accommodate the latent structure in both the independent and dependent variables (Field, 2016), and simply aggregating the items is inaccurate because it ignores inter-item discrepancies (Hair et al., 2011). The researchers therefore embraced structural equation modeling (SEM). Since the sample size was below 200, SmartPLS, a variance-based SEM tool, was used instead of covariance-based tools such as IBM SPSS Amos v26.

Exploratory factor analysis

The research comprised two broad constructs, entrepreneurial orientation (EO) and entrepreneurial intention (EI), each measured with several items; a short code sketch of the extraction and rotation step described above is given below.
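As an illustration, and again assuming the 23 items sit in a hypothetical pandas DataFrame, the principal-component extraction with varimax rotation could be sketched with the factor_analyzer package as follows (the number of factors shown here is only a placeholder; in the study it was decided from the eigenvalues, as described in the next subsection):

```python
# Principal-component extraction with orthogonal varimax rotation.
# `items` holds only the 23 Likert items; file and column names are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("subsistence_women_items.csv")

fa = FactorAnalyzer(n_factors=6, method="principal", rotation="varimax")
fa.fit(items)

eigenvalues, _ = fa.get_eigenvalues()                      # basis for the scree plot
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
communalities = pd.Series(fa.get_communalities(), index=items.columns)

print(eigenvalues.round(3))          # components with eigenvalues > 1 are retained
print(loadings.round(3))             # rotated loadings; weak items can be flagged for dropping
print(communalities.round(3))        # common variance explained per item
```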
According to Kline (2005), it is vital to explore and uncover the underlying structure among the set of items for each construct. Lee (2007), as affirmed by Schmitt (2011), recommends exploratory factor analysis for this purpose, noting that it is a multivariate dimension-reduction technique that seeks to establish the principal dimensions emerging from the data.

Data collection method

The data were collected using a questionnaire between June and August 2019. Twenty-three of the items were gauged on a five-point Likert scale ranging from 1 = strongly disagree to 5 = strongly agree. The collected data were then tabulated, and validity and reliability tests were performed.

Construct validation

With a view to validating the research constructs prior to structural equation modeling, Carden et al. (2019) recommend validity testing of the extracted dimensions. For construct validity, the researchers used confirmatory factor analysis (CFA), as prescribed by Tabachnick and Fidell (2007) and affirmed by Hair et al. (2018). Two key tests were performed: convergent and discriminant validity. With respect to entrepreneurial orientation, the lowest observed statistic was 0.464 and, being below the prescribed minimum of 0.60, the corresponding item was dropped. For innovativeness, proactiveness and risk-taking, none of the path coefficients fell below the prescribed minimum, and thus none of their items were dropped.

Discriminant validity

For discriminant validity, both J. Hair et al. (2010) and Byrne (2004) recommend a maximum Heterotrait-Monotrait ratio (HTMT) of 0.85 between any pair of constructs. The discriminant validity results are presented in Table 2. The HTMT between innovativeness and entrepreneurial intention was 0.386, with proactiveness 0.103, and with risk-taking 0.131. For risk-taking, the HTMT with entrepreneurial intention was 0.240 and with proactiveness 0.102. Lastly, the HTMT between proactiveness and entrepreneurial intention was 0.078. The maximum HTMT observed was therefore 0.386 and, being less than 0.85, discriminant validity was not violated; a brief sketch of how the HTMT can be computed is given below.
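For readers unfamiliar with the HTMT, the following rough sketch shows how the ratio is formed from the item correlations: the average correlation between the items of two different constructs is divided by the geometric mean of the average within-construct item correlations. SmartPLS reports this automatically; the DataFrame and item names below are hypothetical.

```python
# Heterotrait-Monotrait ratio (HTMT) for a pair of constructs.
import numpy as np
import pandas as pd

def htmt(df: pd.DataFrame, items_a: list, items_b: list) -> float:
    corr = df.corr().abs()
    # Average correlation between items of construct A and items of construct B.
    hetero = corr.loc[items_a, items_b].values.mean()
    # Average within-construct correlation, off-diagonal entries only.
    def mono(items):
        block = corr.loc[items, items].values
        return block[np.triu_indices_from(block, k=1)].mean()
    return hetero / np.sqrt(mono(items_a) * mono(items_b))

df = pd.read_csv("subsistence_women_items.csv")        # hypothetical item responses
innovativeness = ["inn1", "inn2", "inn3"]              # hypothetical item names
intention = ["ei1", "ei2", "ei3", "ei4"]
print(round(htmt(df, innovativeness, intention), 3))   # should stay below the 0.85 cut-off
```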
Analysis method

Data analysis was performed using the SmartPLS program in three stages: evaluation of the measurement models, evaluation of the structural models, and testing of the research hypotheses.

Communalities

Upon running the PCA, according to Field (2016), the first step in cleaning up the outcome is to consider the communalities. Communalities indicate the degree of correlation between an item and the aggregated items and show the common variance explained; under optimal conditions, the common variance explained ought to be greater than 0.4 (Costello & Osborne, 2005; Field, 2016). The key findings are shown in Appendix 1. Three communalities fell below the minimum acceptable threshold of 0.40: "I would like to start my own venture" (0.304), "Among various options, I would rather be an entrepreneur" (0.381) and "I am determined to create a firm in the future" (0.370). All three belonged to the entrepreneurial intention construct and were dropped from the analysis.

Total variance explained

The Guttman-Kaiser criterion (Child, 2006) was used to establish the optimal number of components; according to this criterion, only components with eigenvalues greater than 1.0 ought to be selected. From the scree plot in Figure 2, six components had eigenvalues above this threshold. The highest eigenvalue was 5.335 (variance explained = 17.841%). The second component explained 13.610% of the variance (eigenvalue = 4.092), the third 12.561% (eigenvalue = 2.278), the fourth 10.274% (eigenvalue = 2.152), the fifth 9.160% (eigenvalue = 1.274), and the last component explained 6.736% with the lowest eigenvalue of 1.011. The cumulative variance explained by the six components was 70.183% and, being greater than the prescribed minimum of 50.0% (J. Hair et al., 2010), the findings confirm that the six extracted components were all valid. The full total variance explained is presented in Appendix 2.

Rotated component matrix

The rotated component matrix was extracted; according to Dugard et al. (2010), the optimal threshold for including an item in a component is 0.5, although scholars such as Field (2016) argue that 0.4 is tolerable for exploratory studies, while 0.7 is ideal for confirmatory studies. The resultant rotated component matrix is presented in Appendix 3. Using the inclusion criterion defined earlier (factor loadings greater than 0.50), six components were extracted: the first and fourth comprised four items each, the second, third and fifth comprised three items each, and the sixth comprised two items. Four items were ultimately dropped for having factor loadings below 0.50: "I am ready to do anything to be an entrepreneur", "I am determined to create a firm in the future", "I would like to start my own venture" and "Among various options, I would rather be an entrepreneur". Moreover, according to Field (2016) together with Carden et al. (2019), the general threshold for assessing reliability is 0.7; however, alpha values as low as 0.6 are still considered reliable, while those above 0.7 are the most desirable (Sweet & Grace-Martin, 2012). The lowest observed alpha was 0.350 for the sixth component, and the second lowest was 0.497 for the fifth component, both below 0.70. The remaining components were all above 0.70, the highest being 0.992. Accordingly, components 1, 2, 3 and 4 were retained, while components 5 and 6, which failed to meet the mark, were discarded (J. Hair et al., 2010); a short sketch of the reliability computation follows.
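As a minimal sketch of the reliability check just described (Cronbach's alpha for one retained component, computed from the item variances and the variance of the summed score), with hypothetical item names:

```python
# Cronbach's alpha for a set of items belonging to one component.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                                   # number of items
    item_variances = items.var(axis=0, ddof=1).sum()     # sum of individual item variances
    total_variance = items.sum(axis=1).var(ddof=1)       # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances / total_variance)

df = pd.read_csv("subsistence_women_items.csv")             # hypothetical item responses
proactiveness_items = df[["pro1", "pro2", "pro3", "pro4"]]  # hypothetical item names
print(round(cronbach_alpha(proactiveness_items), 3))        # retained if >= 0.70 (0.60 tolerable)
```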
Naming the components

The last stage of the EFA was the attribution stage, which entailed assigning names to the extracted components.

Component 1: Entrepreneurial Orientation - Proactiveness. The first component comprised:
• I favour experimentation and original approaches to problem solving rather than using methods others generally use for solving their problems.
• I prefer to step up and get things going on projects rather than sit and wait for someone else to do it.
• I tend to plan ahead on projects.
• I usually act in anticipation of future problems, needs or changes.
Experimentation, stepping up, planning and acting in anticipation all resonated with the proactiveness aspect of entrepreneurial orientation.

Component 2: Entrepreneurial Orientation - Innovativeness. The following items constituted the second component:
• I often like to try new and unusual activities that are not typical but not necessarily risky.
• I prefer to try my own unique way when learning new things rather than doing it like everyone else does.
• In general, I prefer a strong emphasis in projects on unique, one-of-a-kind approaches rather than revisiting tried and true approaches used before.
Trying new and unusual activities, trying unique ways, and preferring unique, one-of-a-kind approaches all reflected the attributes of being innovative.

Component 3: Entrepreneurial Orientation - Risk-taking. For component 3, the respective items were:
• I am willing to invest a lot of time and/or money on something that might yield a high return.
• I like to take bold action by venturing into the unknown.
• I tend to act boldly in situations where risk is involved.
Willingness to invest a lot of time, taking bold action and acting boldly all reflected the risk-taking attitude.

Component 4: Entrepreneurial Intention. The fourth component comprised the entrepreneurial intention items:
• I have very seriously thought of starting a firm.
• I will make every effort to start and run my own firm.
• Being an entrepreneur would entail great satisfactions for me.
• I have a strong intention to start a firm someday.

Hypothesis tests

Having confirmed convergent and discriminant validity, the researchers next evaluated the relationships between the research constructs through structural equation modeling. Researchers classify structural equation modeling techniques into covariance-based structural equation modeling (CB-SEM) and variance-based structural equation modeling (VB-SEM), otherwise known as partial least squares structural equation modeling (PLS-SEM) (Hair et al., 2018; Schmitt, 2011; Tabachnick & Fidell, 2007). The main factor considered was the sample size: since the sample size was 192, VB-SEM was preferred over CB-SEM, being more robust for sample sizes below 200 (J. Hair et al., 2010). The researchers therefore used SmartPLS rather than CB-SEM techniques. Standardised bootstrapped SEM was run with 500 subsamples, and the model for the first three hypotheses is presented in Figure 3; the respective path coefficients and p-values are presented in Table 3. The highest t-statistic was 4.813 (p = 0.000 < 0.05), observed for the relationship between innovativeness and entrepreneurial intention, indicating that innovativeness played the largest role in entrepreneurial intention of all the entrepreneurial orientation dimensions. With the p-value below 0.05, the null hypothesis was rejected and the researchers confirmed a significant relationship between innovativeness and entrepreneurial intention. The second most significant entrepreneurial orientation dimension was risk-taking, with a statistic of 2.331 (p = 0.010 < 0.05). Again, with the p-value below 0.05, the null hypothesis was rejected and the researchers confirmed that there was enough statistical evidence at the 95% confidence level that risk-taking had a significant influence on entrepreneurial intention.
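To make the significance testing above concrete, the following simplified sketch illustrates the bootstrapping idea: respondents are resampled with replacement, the standardised path coefficient is re-estimated on each subsample, and the spread of those estimates yields a t-statistic. SmartPLS re-estimates the full PLS path model (measurement and structural parts) on each of the 500 subsamples; here only a single innovativeness-to-intention path on hypothetical composite scores is shown.

```python
# Bootstrapped significance of one standardised path coefficient (simplified).
import numpy as np
import pandas as pd

df = pd.read_csv("construct_scores.csv")   # hypothetical composite scores per respondent

def std_beta(d: pd.DataFrame) -> float:
    x = (d["innovativeness"] - d["innovativeness"].mean()) / d["innovativeness"].std()
    y = (d["intention"] - d["intention"].mean()) / d["intention"].std()
    return float(np.polyfit(x, y, 1)[0])    # slope of the standardised simple regression

boot = np.array([std_beta(df.sample(n=len(df), replace=True)) for _ in range(500)])  # 500 subsamples
original = std_beta(df)
t_stat = original / boot.std(ddof=1)        # estimate divided by its bootstrap standard error
print(f"path = {original:.3f}, bootstrap t = {t_stat:.2f}")   # |t| > 1.96 implies p < 0.05
```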
With respect to proactiveness, however, the t-statistic was 0.0374 (p = 0.374 > 0.05). Since the p-value was greater than 0.05, there was not enough statistical evidence to support the hypothesised relationship between proactiveness and entrepreneurial intention, and the null hypothesis was not rejected. The direct and moderating roles of the demographic attributes (family entrepreneurial history, age and education) were then modelled, corresponding to hypotheses H4a to H6b. To test these hypotheses, standardised bootstrapping was again run with 500 subsamples in SmartPLS, and the results are presented in Table 4 and Figure 4. Only the direct relationship between entrepreneurial orientation and entrepreneurial intention was statistically significant, with a high t-statistic of 4.578 > 1.96 and a p-value of 0.000 < 0.05. The null hypothesis was therefore rejected for this relationship, and the researchers confirmed a statistically significant positive influence of entrepreneurial orientation on entrepreneurial intention. None of the demographic variables had a statistically significant direct effect or moderation effect. For the direct effect of family entrepreneurial background on the women's entrepreneurial intention, the path coefficient was 0.421 (p = 0.336 > 0.05); for the direct effect of age it was 1.585 (p = 0.057 > 0.05); and for the highest level of education it was 0.468 (p = 0.322 > 0.05). None of the demographic factors therefore had a statistically significant direct effect on entrepreneurial intention, and the researchers failed to reject the null hypothesis for all the direct relationships tested (Table 5). With respect to the moderation of the relationship between entrepreneurial orientation and entrepreneurial intention, the moderation path coefficient for family entrepreneurial background was 0.021 (p = 0.491 > 0.05), for age 0.118 (p = 0.457 > 0.05), and for the highest level of education 0.792 (p = 0.206 > 0.05). Again, none of the demographic factors had a statistically significant moderation effect, and the researchers failed to reject the null hypothesis for all the moderation relationships tested.
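For readers who want to see what such a moderation test involves, the sketch below builds a mean-centred interaction term between an orientation score and a demographic moderator and tests it with ordinary least squares. This is only an illustration of the logic: SmartPLS forms the interaction at the latent-variable level (for example via a product-indicator or two-stage approach), and the column names here are hypothetical.

```python
# Moderation (interaction) test on composite scores, for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("construct_scores.csv")                    # hypothetical composite scores
df["eo_c"] = df["orientation"] - df["orientation"].mean()   # mean-centred predictor
df["edu_c"] = df["education"] - df["education"].mean()      # mean-centred moderator
df["eo_x_edu"] = df["eo_c"] * df["edu_c"]                   # interaction (moderation) term

model = smf.ols("intention ~ eo_c + edu_c + eo_x_edu", data=df).fit()
print(model.summary().tables[1])   # a significant eo_x_edu coefficient would indicate moderation
```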
Model fit

To validate the structural equation model results, Schmitt (2011) and Hair et al. (2018) recommend goodness-of-fit tests. Two main indices are considered in PLS-SEM: the Standardized Root Mean Square Residual (SRMR) and the Normed Fit Index (NFI) (Hair et al., 2018). The SRMR was tested against a threshold of 0.08 (Hair et al., 2018): it was 0.067 < 0.08 for the first model and 0.054 < 0.08 for the second model, both below the maximum threshold, so the SRMR criterion was satisfied. With respect to the NFI, the computed statistic was 0.914 for the first model and 0.943 for the second model, both greater than 0.85 (Dijkstra & Henseler, 2015). Hence, none of the model fit tests was violated and the researchers confirm that the research model used was valid.

Discussion

The first hypothesis (H1) predicted that the innovativeness of a woman positively influences her entrepreneurial intention. This hypothesis was accepted: the highest t-statistic was 4.813, with p = 0.000 < 0.05. The implication is that there is a strong, significant relationship between innovativeness and entrepreneurial intention, even for prospective subsistence entrepreneurs in rural areas. Schumpeter (1934) as well as Galindo and Mendez-Picazo (2013) reached similar conclusions, holding that innovativeness is at the very core of entrepreneurship. The findings of this study are also in tandem with the argument by Cheng et al. (2009), together with Ndubisi and Iftikhar (2012), that innovativeness is a key success factor of every entrepreneurial activity, and with Jyoti et al.'s (2011) conclusion that a business has to be highly innovative to gain competitive advantage, implying that innovativeness and entrepreneurial orientation are positively related. Innovativeness should therefore be taught to trainee rural women entrepreneurs to enhance their entrepreneurial intentions; its facets include creativity and the tendency to experiment with new ideas rather than waiting for things to be tried and tested by others. Rural women also need to be equipped with technology skills so that they can use technology in the research and development of context-specific new products, providing local solutions to subsistence marketplace problems and earning income. The third hypothesis (H3), which predicted that the risk-taking behaviour of a woman positively influences her entrepreneurial intention, emerged as the second most significant entrepreneurial orientation dimension, with a statistic of 2.331 (p = 0.010 < 0.05). The null hypothesis was rejected, and the researchers confirmed that there was enough statistical evidence at the 95% confidence level that risk-taking has a significant influence on entrepreneurial intention. Similar results were obtained by Teoh and Chong (2008) as well as Fatoki (2014), who confirmed that women entrepreneurs who are risk-takers go for well-thought-out and organised business opportunities in the market, despite the unpredictability of the outcomes. Another study, by Rauch et al. (2009), found that people with a strong entrepreneurial orientation are associated with high-risk behaviour. Even in the context of subsistence marketplaces, the boldness to venture into the unknown is a prerequisite for entrepreneurial intention. Rural women about to start their own businesses need to be trained in taking calculated risks, especially since risk-taking includes the willingness to borrow funds for the venture, and women who lack the risk-taking attribute need to be coached towards risk-taking behaviour. Subsistence women showed a different pattern for proactiveness (H2): the t-statistic was 0.0374 (p = 0.374 > 0.05). Since the p-value was greater than 0.05, there was not enough statistical evidence to support the hypothesised relationship between proactiveness and entrepreneurial intention, and the null hypothesis was not rejected.
This is a unique result compared with previous research on proactiveness (Avlonitis & Salavou, 2007). In line with the findings of Morris and Paul (1987), proactiveness was nevertheless confirmed as a distinct sub-construct of entrepreneurial orientation after the exploratory factor analysis. The moderating and direct effects of the demographic variables on entrepreneurial orientation and entrepreneurial intention were tested (H4a, H4b, H5a, H5b, H6a, H6b), and none of them had a statistically significant direct or moderating effect. For the direct effect of family entrepreneurial background on the women's entrepreneurial intention, the path coefficient was 0.421 (p = 0.336 > 0.05), indicating no statistically significant direct effect. Nguyen (2018) reached the same conclusion, noting that the available statistical evidence is not adequate to support the idea that an individual whose parents own businesses displays a higher entrepreneurial intention than one whose parents do not. Alsos et al. (2011), together with Chaudhary (2017), reported different findings, stating that a family business plays a substantial role in strengthening the growth of entrepreneurship among members of that family; Chaudhary (2017) further states that a family background of self-employment relates positively to entrepreneurial intent. For the highest level of education, the path coefficient was 0.468 (p = 0.322 > 0.05), so it did not have a statistically significant direct effect on entrepreneurial intention. These findings are consistent with Davidsson and Honig (2003), who hold that the connection between educational level in general and entrepreneurship is not very strong and remains contested: education can help an individual to identify new opportunities, but it does not necessarily determine whether that individual will form a new business to exploit them. On the other hand, education has been found to play a substantial role in instilling entrepreneurial skills (Al Mamun et al., 2017; Neneh, 2014; Sasu & Sasu, 2015). Age was not significant as a moderator of the relationship between entrepreneurial orientation and intention, and its direct effect on entrepreneurial intention was 1.585 (p = 0.057 > 0.05), meaning there is no statistically significant direct effect of age on entrepreneurial intention. Nguyen (2018) likewise indicated that there is no relationship between age and entrepreneurial intention. However, these results are not consistent with Coulthard (2007), Hughes and Morgan (2007) and Wiklund and Shepherd (2003), who concluded that age is indeed a trigger of entrepreneurial behaviour and that younger people are bolder in taking steps to act entrepreneurially and establish a business than older people. The insignificance of all demographic profiles, in terms of both moderating and direct effects, in the subsistence marketplaces of Zimbabwe may actually mirror the country's economic outlook. Ndiweni and Verhoeven (2013) noted the rise of informal entrepreneurs across the country, prompted by economic hardships regardless of a person's family business background, level of education or age.
The employment-to-population rate in Zimbabwe in 2019 was 36% (Zimstat, 2019, Labour Force and Child Labour Survey), leaving the rest of the population without formal employment, and the informal sector in 2019 was much bigger than the formal sector (Zimstat, 2019). The rural sector would be worse off still. These characteristics help explain the rural women's responses and the absence of moderating and direct effects of demographic profiles on the relationship between entrepreneurial orientation and entrepreneurial intention.

Conclusion

The main purpose of the study was to identify the relationship between entrepreneurial orientation and entrepreneurial intention, and the moderating and direct effects of demographic profiles, as a basis for establishing a predictive model for prospective rural women entrepreneurs. The results suggested that innovativeness and risk-taking are the most appropriate pointers for identifying prospective women's entrepreneurial intention. By contrast, no evidence was found that proactiveness affects a rural prospective woman's entrepreneurial intention. It can be concluded that, despite the challenges most women in developing countries face, being innovative will increase their chances of business success. The same conclusion was reached in a study of Bangladeshi women by Mozumdar et al. (2020), which also indicated that risk-taking has the same effect on entrepreneurial intention as innovativeness. This means that, before embarking on an entrepreneurial activity, prospective women entrepreneurs should be ready to display innovativeness and risk-taking; in other words, innovativeness and risk-taking should be regarded as characteristics or traits that any woman who intends to be a successful entrepreneur in a subsistence marketplace should possess. This conclusion also supports Neneh and Van Zyl (2017), who determined that innovativeness is crucial for an entrepreneur. Additionally, the higher the levels of innovativeness and risk-taking in these prospective women entrepreneurs, the higher their chances of success; since it is commonly said that high risk brings high returns, women with a very high risk-taking propensity will most probably stand a chance of yielding higher returns in their businesses. Entrepreneurship attributes may also be affected or determined by the economic circumstances of respondents in an economy, which means that risk-taking and innovativeness alone may not suffice to produce a successful entrepreneur: technical, financial and social support is needed to enable these prospective entrepreneurs to contribute positively to the economic development of their nations, and the policy-making agents of the developing countries where these women are based should shoulder this responsibility. As for the demographic attributes (family entrepreneurial history, age and level of education), they do not necessarily help, being insignificant in terms of both their moderation of the relationship between female entrepreneurial orientation and entrepreneurial intention and their direct effects. This conclusion was also reached by Nguyen (2018), who identified an insignificant relationship for both the moderation and the direct effects.
The deduction is that prospective women entrepreneurs' age, family business background and educational background are irrelevant to their entrepreneurial intention.

Theoretical contribution

This study brings rural women entrepreneurship into the general women entrepreneurship literature, which has mainly focused on urban women entrepreneurs. While there have been many studies on entrepreneurial orientation and intention, we extend them into subsistence markets, which pose a unique socio-cultural setting. Innovativeness and risk-taking orientations were found to relate to entrepreneurial intention, but the insignificance of demographic profiles in terms of age, education and family business background is surprising and is a point for consideration in the subsistence women entrepreneurship literature. The study validated entrepreneurial orientation and intention scale items in a subsistence market in Sub-Saharan Africa, using a rigorous quantitative approach based on partial least squares structural equation modeling. The original entrepreneurial orientation scale items developed by Bolton and Lane (2012) were confirmed after exploratory factor analysis to be valid in rural markets; however, only four items remained from the entrepreneurial intention scale items drawn from Linan and Chen (2009) as well as Thompson (2009).

Practical contribution

When screening rural prospective women entrepreneurs, it is critical to consider potential women's entrepreneurial innovativeness and risk-taking attitudes. Donors should administer measurement scale items for these two constructs but need not worry about proactiveness items or demographic profiles, since these showed no predictive power for identifying successful women entrepreneurs in subsistence markets. There is no need to place much weight on age, level of education or family business background when screening subsistence women for entrepreneurs who may effectively utilise the seed capital usually provided by donors. Financial institutions likewise need not place much weight on these demographic profiles when evaluating rural women entrepreneurs' potential to utilise advances and loans, so as to minimise bad debts. Governments may also follow suit and consider innovativeness and risk-taking attitudes when identifying rural women to benefit from revolving funds provided to rural start-ups.

Limitations and future research

This study was conducted only on prospective women entrepreneurs, so the results cannot be generalised to men, or to girls and boys, who desire to be entrepreneurs. Further studies on the relationship between women's entrepreneurial orientation and entrepreneurial intentions, using different methodologies such as ethnography, are required, especially regarding the demographic variables, because the results of this research are inconsistent with prior research stating that demographic variables have an impact on entrepreneurial intention (Chaudhary, 2017; Coulthard, 2007; Hughes & Morgan, 2007; Wiklund & Shepherd, 2003). Moreover, further research should interrogate entrepreneurial orientation and intention together with cultural variables such as values (see Schwartz's (1992) theory of human values) to postulate a complete model for rural women entrepreneurs. Personality variables may also be integrated to produce a comprehensive model for identifying entrepreneurial intention in rural women.
Sample scale item: "In general, I prefer a strong emphasis in projects on unique, one-of-a-kind approaches rather than revisiting tried and true approaches used before" (loading = .350).
2020-10-01T23:05:07.515Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "cc23f01779b91f7b43fe9417746f0f2152708dfe", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23311975.2020.1818365?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "cc23f01779b91f7b43fe9417746f0f2152708dfe", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
5790202
pes2o/s2orc
v3-fos-license
Methicillin-Resistant Staphylococcus aureus Colonization of the Groin and Risk for Clinical Infection among HIV-infected Adults Data on the interaction between methicillin-resistant Staphylococcus aureus (MRSA) colonization and clinical infection are limited. During 2007–2008, we enrolled HIV-infected adults in Atlanta, Georgia, USA, in a prospective cohort study. Nares and groin swab specimens were cultured for S. aureus at enrollment and after 6 and 12 months. MRSA colonization was detected in 13%–15% of HIV-infected participants (n = 600, 98% male) at baseline, 6 months, and 12 months. MRSA colonization was detected in the nares only (41%), groin only (21%), and at both sites (38%). Over a median of 2.1 years of follow-up, 29 MRSA clinical infections occurred in 25 participants. In multivariate analysis, MRSA clinical infection was significantly associated with MRSA colonization of the groin (adjusted risk ratio 4.8) and a history of MRSA infection (adjusted risk ratio 3.1). MRSA prevention strategies that can effectively prevent or eliminate groin colonization are likely necessary to reduce clinical infections in this population.

Methicillin-resistant Staphylococcus aureus (MRSA) infections are a substantial cause of illness and a major public health problem (1). Although MRSA was traditionally considered a health care-associated pathogen, it has emerged worldwide as a notable cause of community-associated skin and soft tissue infections (2). In the United States, MRSA pulsed-field gel electrophoresis (PFGE) type USA300 strains have caused most community-associated MRSA infections (3). High rates of community-associated (4)(5)(6) and health care-associated MRSA infections have also been described among HIV-infected persons (7), although the underlying basis for this association is unknown. Proposed mechanisms include immune dysfunction (5,7,8), behavioral risk factors (9), and increased exposure to the health care system (10). The prevalence of MRSA colonization among HIV-infected persons is also high (10%-17%) (11,12), compared with that in the general US population (0.8%-1.5%) (13,14). Colonization with S. aureus is a risk factor for subsequent clinical infection (15,16), and the site of colonization may also be a key risk factor (17). For example, although the anterior nares is considered the primary reservoir of S. aureus (18), MRSA PFGE type USA300 might preferentially colonize the buttocks, genitals, and perineum (17), leading to more infections in these anatomical areas. Improving our understanding of the interaction between MRSA colonization and clinical infection among persons with HIV is necessary so that effective prevention strategies can be developed for this population.

Study Design

Study participants were recruited from the Atlanta Veterans Affairs Medical Center (Atlanta, GA, USA) HIV clinic, which provides medical care to ≈1,400 HIV-infected veterans and is the largest Veterans Affairs Medical Center HIV clinic in the United States. This study was approved by institutional review boards for Emory University and the Centers for Disease Control and Prevention. At enrollment, data on patients' demographic characteristics, medical history, and antimicrobial drug use within the past 12 months, and microbiologic data on previous S. aureus infections, were obtained from electronic medical records.
Participants also completed a questionnaire at enrollment and at 12 months that focused on their living situation, self-reported history of skin infections, personal hygiene, sexual behavior, and drug use over the past 12 months.

Microbiologic Procedures

At each study visit, specimens for S. aureus culture were collected from the anterior nares and the groin by using sterile rayon swabs and placed in liquid Stuart's transport media (Becton Dickinson, Sparks, MD, USA). Study staff collected specimens from the anterior nares, and participants were instructed (using a diagram of the human body) to collect specimens from the groin by swabbing in the skin folds between the thigh and genital area. Swabs were plated on mannitol salt agar (Becton Dickinson) and CHROMagar MRSA (Becton Dickinson) and then placed in 5 mL of trypticase soy broth with 6.5% sodium chloride (Becton Dickinson) as described (19,20). At each study visit, participants were classified as MRSA colonized if MRSA was detected from either the nares or groin culture. Participants were classified as colonized with methicillin-susceptible S. aureus (MSSA) if MSSA was detected and MRSA was not detected. Participants colonized with both MSSA and MRSA (regardless of site) were classified as MRSA colonized. All MRSA isolates were genotyped by PFGE with SmaI (New England Biolabs, Beverly, MA, USA) as described (13,19,20). PFGE patterns were analyzed with BioNumerics Software v 5.10 (Applied Maths, Austin, TX, USA) and were assigned to USA pulsed-field types by using Dice coefficients and 80% relatedness. USA500, Iberian, and Archaic PFGE types were grouped together as USA500/Iberian because they are closely related and difficult to separate by PFGE (21). PCR was used to screen for staphylococcal cassette chromosome mec type and to detect Panton-Valentine leukocidin genes for all isolates (22). USA300 was defined as an isolate with a USA300 PFGE pattern that was positive for Panton-Valentine leukocidin genes and contained staphylococcal cassette chromosome mec type IVa.

Prospective Monitoring for Incident MRSA Clinical Infections

Electronic medical and microbiology records were prospectively monitored for incident MRSA clinical infections for 24 months. Participants were classified as having a MRSA clinical infection if a clinical infection was documented in the medical record and MRSA was isolated from the culture. Participants with a MRSA clinical infection completed a supplemental questionnaire that focused on the signs and symptoms of their infection and its clinical course. We defined a skin and soft tissue infection in the groin as an infection that involved the buttocks, perineum, genitals, anus, or proximal thigh.

Statistical Methods

The primary analysis compared participants in whom a MRSA clinical infection developed with those in whom a MRSA clinical infection did not develop. All analyses were performed by using SAS version 9.2 (SAS Institute Inc., Cary, NC, USA). The Wilcoxon rank-sum test (continuous variables) and the χ2 and Fisher exact tests (categorical variables) were used to test for differences in clinical, demographic, and behavioral variables among participants with and without MRSA clinical infection. Statistical significance was indicated by a p value <0.05. By using a multivariate log-linked binomial regression model (Proc Genmod; SAS Institute Inc.) (23), adjusted risk ratios (aRRs) and 95% CIs were calculated to identify variables associated independently with the development of MRSA clinical infection.
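The log-linked binomial model described above was fitted in SAS (Proc Genmod). As a minimal, hedged sketch of the same idea outside SAS, the Python code below fits a log-binomial GLM with statsmodels and exponentiates the coefficients into adjusted risk ratios; the file and variable names (clinical_infection, groin_colonized, prior_mrsa_infection, cd4_count) are hypothetical placeholders, not the study's dataset.

    # Sketch only: log-binomial GLM for adjusted risk ratios (aRRs),
    # analogous to SAS Proc Genmod with a binomial distribution and log link.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    data = pd.read_csv("mrsa_cohort.csv")  # hypothetical: one row per participant

    model = smf.glm(
        "clinical_infection ~ groin_colonized + prior_mrsa_infection + cd4_count",
        data=data,
        family=sm.families.Binomial(link=sm.families.links.Log()),
    ).fit()

    # Exponentiated coefficients are risk ratios; CI bounds exponentiate the same way.
    rr = np.exp(model.params)
    ci = np.exp(model.conf_int())
    print(pd.concat([rr.rename("aRR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))

Log-binomial models can fail to converge; a Poisson family with robust standard errors is a common fallback when that happens.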
All statistically significant (p<0.05) variables in univariate analysis (unadjusted RR) were included in a multivariate model (24), and variables with p>0.2 in the adjusted model were dropped sequentially to create a parsimonious model that was examined for goodness of fit after each step. We also evaluated variables in the final parsimonious model by using Kaplan-Meier survival methods with corresponding log-rank tests and Cox proportional hazards models (Proc PHREG; SAS Institute Inc.) of time to MRSA clinical infection.

Over a median of 2.1 years of follow-up, 29 MRSA clinical infections occurred in 25 participants (2.5 infections/100 person-years). Skin and soft tissue infections (n = 24) were the most common, followed by pneumonia (n = 3) and bacteremia (n = 2). Three (13%) of the skin and soft tissue infections required hospitalization, and 13 (54%) of 24 skin and soft tissue infections occurred in the groin. MRSA colonization was detected at baseline in the groin only, or in both the groin and nares, in 12 (48%) of the 25 participants in whom a MRSA clinical infection developed, compared with 37 (6%) of 575 participants in whom an infection did not develop (p<0.0001). MRSA colonization was also detected at a study visit (baseline, 6 months, or 12 months) preceding clinical infection in 17 (68%) of 25 participants. Among clinical isolates available for PFGE typing from an initial MRSA clinical infection (n = 22), USA300 (n = 14, 64%) was the most common and was identified in 9 (69%) of 13 skin and soft tissue infections that occurred in the groin (Table 2). USA500/Iberian (n = 8, 36%) was also common and was identified in all of the pneumonia and bacteremia infections. In patients with preceding colonization, the PFGE type of the clinical isolate and the preceding colonizing isolate always matched (n = 17/17). In univariate analysis, factors associated with an increased risk of developing MRSA clinical infection included MRSA colonization detected in the groin at baseline, a lower CD4 cell count, a previous history of an abscess, a medical history of MRSA clinical infection, renal insufficiency, a history of syphilis, the use of certain antistaphylococcal agents in the past 12 months, contact with a prison or jail, and certain hygienic factors. (‡MRSA and MSSA co-colonization was detected in 11 participants at baseline, 10 participants at 6 months, and 9 participants at 12 months; for analysis, these participants were classified as MRSA colonized. In addition, MRSA colonization with 2 distinct MRSA PFGE patterns was detected in 2 participants at baseline, 1 participant at 6 months, and 2 participants at 12 months.) In multivariate analysis, MRSA colonization detected in the groin at baseline (aRR 4.8) and a medical history of MRSA clinical infection (aRR 3.1) remained significantly associated with the development of MRSA clinical infection (online Table 3, appendix). MRSA colonization detected in the groin included participants with MRSA colonization detected only in the groin (n = 14; aRR 6.6) and participants with MRSA colonization detected in the nares and groin (n = 35; aRR 4.2). This analysis was repeated by using a multiple-predictor Cox proportional hazards model to account for time to MRSA clinical infection; MRSA colonization detected in the groin at baseline (adjusted hazard ratio [aHR] 5.9; 95% CI 2.5-13.9) and a medical history of MRSA clinical infection (aHR 4.0; 95% CI 1.6-9.6) were significant predictors of time to MRSA clinical infection.
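The Kaplan-Meier and Cox proportional hazards analyses of time to MRSA clinical infection were run in SAS (Proc PHREG). An illustrative equivalent using the Python lifelines package is sketched below; the column names (time_years, infected, groin_colonized, prior_mrsa_infection) are assumptions for illustration only, not the study's variables.

    # Illustrative time-to-event analysis mirroring the Kaplan-Meier / Cox approach above.
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    df = pd.read_csv("mrsa_followup.csv")  # hypothetical follow-up dataset

    # Kaplan-Meier estimates stratified by baseline groin colonization
    kmf = KaplanMeierFitter()
    for status, grp in df.groupby("groin_colonized"):
        kmf.fit(grp["time_years"], event_observed=grp["infected"],
                label=f"groin_colonized={status}")
        print(kmf.event_table.head())  # or kmf.plot_survival_function() for the curve

    # Multiple-predictor Cox proportional hazards model of time to MRSA clinical infection
    cph = CoxPHFitter()
    cph.fit(df[["time_years", "infected", "groin_colonized", "prior_mrsa_infection"]],
            duration_col="time_years", event_col="infected")
    cph.print_summary()  # hazard ratios with 95% CIs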
Among the 79 participants with MRSA colonization at baseline, USA300 colonization was associated with a nonsignificant but increased risk of developing MRSA clinical infection, compared with other PFGE types (RR 2.1; 95% CI 0.7-5.9). In a separate analysis, MSSA colonization was not associated with developing a MRSA clinical infection (RR 0.8; 95% CI 0.3-2.7). In a subanalysis of MRSA colonization in 383 HIVinfected adults from whom samples were cultured at all 3 visits, MRSA colonization was detected in 48 (13%) participants at baseline, in 52 (14%) at 6 months, and in 50 (13%) at 12 months. Approximately equal numbers of participants became colonized with MRSA or were no longer colonized at each sequential study visit to maintain this stable colonization prevalence (Figure 2). On a percentage basis at each sequential study visit, 21%-31% of MRSA-colonized participants were no longer colonized (without treatment) and 4%-6% of previously uncolonized participants became colonized with MRSA. Over 12 months, MRSA colonization was persistent (detected at all 3 visits) in 26 (7%) participants and intermittent (detected in 1 or 2 visits) in 54 (14%) participants ( Figure 2). The PFGE type remained stable in 23 (88%) of 26 participants with persistent colonization and in 16 (89%) of 18 participants with intermittent colonization at 2 visits. Swab specimens from participants with persistent colonization (n = 78 MRSA isolates from 26 participants) were more likely to yield heavy growth of MRSA (growth detected on direct agar plating without broth enrichment) than were isolates from participants with intermittent colonization (n = 72 MRSA isolates from 54 participants) [91% vs. 75%; p = 0.009]. PFGE type (USA300 vs. other PFGE types) was not significantly associated with persistent vs. intermittent colonization (p = 0.27). Discussion MRSA clinical infections (mainly skin and soft tissue infections) were common among HIV-infected adults in this study. The prevalence of MRSA colonization was also high at each study visit (13%-15%), and MRSA colonization in the groin was a risk factor for developing a MRSA clinical infection. MRSA PFGE types USA300 and USA500/ Iberian were common causes of colonization and clinical infection. USA300 more commonly caused colonization of the groin and clinical skin and soft-tissue infection in the groin. MRSA prevention strategies with HIV-infected adults that can effectively address colonization at this anatomical site are likely necessary to reduce MRSA clinical infections in this population. HIV-infected persons have been found to have 6× the risk for community-associated MRSA skin and soft-tissue infections than HIV-negative patients (25) and an increased odds of having community-acquired S. aureus bacteremia (26). In this study, MRSA colonization in the groin and a medical history of MRSA clinical infection were risk factors for clinical infection. Because, in our study, most skin and soft tissue infections occurred in the groin, colonization of the groin may have directly precipitated clinical infection in this anatomical area. In a previous analysis, we demonstrated that MRSA colonization was also associated with a medical history of MRSA clinical infection, contact with jails and prisons, and correlates of risky sexual behavior (i.e., rarely or never using condoms) (20). 
In addition, in this analysis, adjusting for MRSA colonization in the groin diminished the association between MRSA clinical infection and risk factors for exposure to MRSA (e.g., contact with jails and prisons) and risk factors related to hygiene (e.g., shaving the groin, genital, or buttock area). These findings suggest that MRSA colonization in the groin may also be a marker of more frequent exposure to MRSA in the environment or poor hygiene or an indicator of immunologic dysfunction (i.e., impaired neutrophil function [27]) that in turn increases a person's susceptibility to clinical infection. Prior studies have demonstrated that USA300 causes most community-associated MRSA infections in the United States, whereas USA500/Iberian clones are associated with health care-associated MRSA infections (1). This epidemiology, however, is changing (28), and participants in this study had risk factors for both community-associated and outpatient health care-associated MRSA exposures. In this study, USA300 caused most skin and soft tissue infections and was more likely colonize the groin only. Other PFGE types, however, also caused both clinical infections and groin colonization, and PFGE type (USA300 vs. other PFGE types) was not independently associated with risk for MRSA clinical infection. These findings suggest that the presence of MRSA colonization in the groin is more useful clinical knowledge than identifying the PFGE type causing colonization (which is rarely determined in clinical practice anyway). The association of MRSA colonization and the development of clinical infection in this study suggest that MRSA decolonization with topical or systemic treatment may be an effective method to prevent clinical infections in this population. A randomized clinical trial of adult hospitalized surgical patients found that using intranasal mupirocin and chlorhexidine gluconate soap total-body wash substantially reduced the rate of health care-associated S. aureus infection by 58% in patients who were nasal carriers of S. aureus (29). Although several randomized controlled trials have demonstrated that MRSA colonization can be eliminated from the groin (30), and short-term clinical benefits of S. aureus decolonization have been demonstrated in the hospitalized setting, data have not been available to support decolonization as a method of preventing MRSA clinical infections in a community or outpatient setting (31). In this study, we observed that although MRSA colonization was frequent, it also fluctuated considerably over time. MRSA colonization spontaneously resolved in approximately half of participants over 12 months, but new MRSA colonization was detected in as many previously uncolonized participants. A MRSA decolonization program would therefore treat a substantial number of persons whose MRSA colonization would have resolved spontaneously and would require ongoing screening to identify new colonization. The substantial fluctuations in MRSA colonization status in this study suggest that strategies that emphasize hygiene and avoidance of potential MRSA exposures might be more effective at preventing MRSA clinical infections in this setting than decolonization, but this hypothesis should be tested in a clinical study that includes decolonization of MRSA from the groin as an intervention. Our study had several limitations. First, our study population was 98% male and our findings are not generalizable to HIV-infected women. 
Second, the MRSA epidemic in the United States continues to evolve (7), and new risk factors for MRSA infection in HIVinfected adults may emerge that were not significant in this study. In addition, in our study, some risk factors for community-associated MRSA clinical infections, such as methamphetamine use (9) and close contact with someone with a skin infection, were not associated with MRSA clinical infection. These differences might be explained by low frequencies of certain risk factors (i.e., methamphetamine use) in our study population or a social desirability bias may have limited the full disclosure of drug use and sexual and hygienic behavior. Third, we evaluated 45 variables in univariate analysis and 16 variables in the initial multivariate model before creating a final parsimonious model with 6 variables. Although evaluating an extensive list of potential risk factors for MRSA clinical infection had some advantages, the extensive list also increased variance in the initial multivariate model. Fourth, the optimal sampling (i.e., which sites to swab and how to collect the specimen) and microbiologic techniques to evaluate MRSA colonization in the groin have not been established. Although we used microbiologic techniques that have been demonstrated to improve MRSA detection (19), we may have underestimated the true prevalence of groin colonization. Finally, participants may have had MRSA clinical infections that were not cultured, and these infections would not have been captured by our electronic monitoring of microbiologic records. Therefore, we might have underestimated the true incidence of MRSA clinical infections in this population. In this study of HIV-infected adults, MRSA clinical infections were common and associated with MRSA colonization in the groin and a medical history of MRSA clinical infection. MRSA PFGE types USA300 and USA500/Iberian contributed to clinical infections, and participants had risk factors for both communityassociated and health care-associated MRSA exposures. Given this high incidence of MRSA clinical infections, both community-associated and hospital-associated MRSA prevention strategies should be emphasized in HIV-infected adults in settings with high rates of MRSA clinical infections. Current community-associated MRSA prevention strategies include keeping cuts and scrapes clean and covered; practicing good hand hygiene; avoiding shared personal items, such as towels and razors; and decolonization in certain situations (31). Given the frequency of MRSA colonization in the groin and its association with clinical infection, MRSA prevention strategies (both hygienic practices and decolonization treatments) with HIV-infected adults should be used to prevent or eliminate colonization at this anatomic site to reduce MRSA clinical infections in this population.
2014-10-01T00:00:00.000Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "a29468b7d8b56ff29fb139e9eb64ef6d2641f351", "oa_license": "CCBY", "oa_url": "http://www.cdc.gov/mrsa/pdf/MRSA-Strategies-ExpMtgSummary-2006.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9f9177b38afc3b4545a849d277943b7d3d857621", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15551232
pes2o/s2orc
v3-fos-license
In vitro evaluation of root canal preparation with two rotary instrument systems – Pro Taper and Hero Shaper Background and aims The purpose of this study was to compare several parameters of root canal preparation using two different rotary Nickel-Titanium instruments: Pro-Taper (Dentsply Maillefer, Ballagigues, Switzerland) and Hero-Shaper (Micro Mega, Besancon, France). Methods Twelve extracted maxillary premolars were randomly divided into two groups and embedded into a muffle system. All root canals were prepared to size 25 using Pro-Taper or Hero-Shaper rotary instruments. The following parameters were evaluated: root canal form, centering capacity of the instrument, the presence of residual dentinal debris and smear layer on the root canal walls, working time and the occurrence of intraoperative accidents. Statistical analysis was performed using the chi2 test (p=0.05). Results The majority of the root canals prepared with Hero Shaper (88.89%) and ProTaper (77.78%) showed a round or oval cross-section postoperatively. Superposition of pre- and postoperative photographs of the cross-sections showed that for the coronal third of the root canals the Hero Shaper performed in a superior manner, while for the apical third better results were obtained with the Pro Taper system. Cleanliness of the root canal walls was investigated under the SEM, in the middle third of the canal, using a five-score system for debris and smear layer. For debris Hero Shaper and Pro Taper rotary systems achieved 66.67% and 50% scores of 1 or 2, respectively. The results for the smear layer were similar: cleaner root canal walls were found after preparation with Hero Shaper (66.67% scores 1, 2), followed by Pro Taper (50%). Mean working time was shorter for Hero Shaper (124s) than for Pro Taper (184s); the difference was not significant. Conclusions Within the limits of this study, both systems had almost the same cleaning ability and excellent centering capacity. for stainless steel instruments [1]. Using nickel-titanium rotary instrument systems is currently an important step in modern endodontics, making possible the treatment of more complex cases with fewer procedural errors [2]. A large number of systems are now available on the market. These show similar features but differ in taper, cutting angle, tip design, number of blades and cross section, all directly influencing the flexibility, cutting efficacy and torsion resistance of the instrument [3]. They allow a quicker preparation with less transportation of dentinal debris beyond the apex, a considerable reduction of risks for the apical periodontium, an increased capacity to negotiate curved root canals, and reliable and reproducible results even when used by less experienced practitioners [1,4]. They are also leading to less reduction in working length during root canal flaring compared to stainless steel instruments [5]. Among their properties, such as flexibility, cutting efficacy, torsional resistance [3], the centering ability and maintenance of initial root canal anatomy are the most important [2]. A large number of Ni-Ti instruments are now available on the market. They all show different designs with specific taper, cutting blades direction, tips and specific motions. Hero Shaper (Micro Mega, Becacon, France) is designed, as the manufacturer claims, with a variable helical angle and an adapted pitch that increases with the taper of the instrument. This design has the purpose to avoid the screwing effect of the instrument. 
The Pro Taper system (Dentsply, Switzerland) is designed with a progressive taper, a progressive changing helical angle and balanced pitch thus reducing the threading and improving the debris removal [6]. The aim of this study was to assess the mechanical preparation of root canals in vitro with two rotary instrument systems: Pro Taper (Dentsply Maillefer, Ballagigues, Switzerland) and HeroShaper (Miro Mega, Becanson, France). We analyzed the following parameters: root canal form, centering capacity of the instrument, the presence of residual dentinal debris and smear layer on the root canal walls, working time and the occurrence of intraoperative accidents (loss of working length, fracture of instruments, occurrence of perforations). Method The quality of root canal preparation was assessed by the modified Bramante method [7]. This technique allows simultaneous assessment of the amount of debris and smear layer remaining on the root canal walls in longitudinal section, as well as the shape of the root canal in cross-section. Selection and preparation of teeth The study was carried out on a total of 12 teeth, maxillary first premolars with fully formed root and closed apex, without apical resorption. Extracted teeth were kept in 3% paraformaldehyde solution. The selected teeth showing the same mean curvature, were randomly divided into 2 groups of 6 teeth. Preoperatively all teeth were embedded in acrylic resin by using a conformer. The conformer consisted of a metallic cylinder with a base of 3.6 x1.8 cm and 2 pairs of holes with a diameter of 5 and 2 mm respectively. Two pairs of cylindrical pins were inserted into these holes. The pair of pins with a diameter of 5 mm had the role of orientation and the other pair of pins had the role of stabilization. In the center of the base was a metallic support where teeth were fixed with wax in a firm position. The entire inner surface was lubricated, and then the teeth were embedded in acrylic resin 1 mm below the cement-enamel junction. The tip of the orientation pin was placed at the cement-enamel junction. After the setting of the resin the pins were removed by tapping. All teeth were shortened before preparation at the length of 19 mm. Root canal preparation First, the cleaning and shaping of the buccal root canals was performed using the rotary instrument systems Hero Shaper and Pro Taper respectively. Working length was set at 18.5 mm. Preparation of root canals was made by crown-down technique at a constant working speed of 300 rev/min. For the Hero Shaper system the preflaring of the root canal entry was performed with the Endoflare instrument that was active in the coronal 2 mm of the root canal. The coronal 2/3 were prepared with the 6% taper and 0.20 mm diameter instrument. The apical third was prepared with a sequence of two instruments of 4% taper and diameters 0.20 and 0.25 mm respectively. The Pro Taper system consists of six instruments, three shaping files, used for the preparation of the coronal 2/3 of the root canal and three finishing-files for the preparation of the apical third of the root canal. Enlargement of the root canal orifice was performed with the SX instrument (0.19 mm diameter and 19% taper ) that worked on the coronal 1-2 mm from the working length. Subsequently, the root canals were prepared with the S1 instrument (0.17 mm diameter and 11% taper) in 2/3 of the length of the work and later on the entire working length. 
The following instrument was the S2 (0.20 mm diameter and 11.5% taper) which was used over the whole length in order to prepare the middle third of the root canal. Finishing of the preparation was carried out in the apical third by the use of instruments F1 (diameter 0.20 mm and 7%) and F2 (0.25 mm diameter and 8%) on the entire working length. The irrigating solution used for both groups of teeth was 2.5% sodium hypochlorite in combination with EDTA 15% solution. Irrigation was performed with 2 ml sodium hypochlorite after each instrument used. At the end, the root canals were irrigated with 3 ml of EDTA solution, which was allowed to act for 3 minutes, followed by a lavage with 4 ml saline solution and dried with paper points. Preparation of cross-sections for SEM examination In order to assess the shape of the cross sections and the degree of overlapping of pre-and postoperative cross sections, preoperatively the roots were marked for subsequent repositioning and were sectioned horizontally at 3, 6 and 9 mm from the apex with a microtome (Isomet, Plus, Buheler, Lake Bluff, IL ) whose disc had a thickness of 0.3 mm. The resulted slices were photographed under standard conditions and examined in the scanning electron microscope (SEM). The segments were then placed back into their original position with the help of the marks and pair of pins. After that, the palatal root canals were prepared as described previously. At the end of the preparation, palatal cross sections were photographed again and the images were superimposed over the original. In this manner, the area of the root canal which remained un-instrumented in the coronal, middle and apical thirds respectively, was evaluated. The form of the preparation was determined after superimposing pre-and postoperative root canal outlines. According to Loushine et al. [8] cross sections were classified into round, oval and irregular. The round and oval are considered clinically acceptable, and the irregular clinically unacceptable. Sample preparation for the SEM evaluation of dentinal debris and smear layer In the next stage, the buccal roots were sectioned longitudinally and prepared for the SEM evaluation. Initially the central beam of the SEM had been directed to the center of the object by the SEM operator at 10X magnification. The magnification was then increased to 200X and subsequently to 1000X and the canal wall region appearing on the screen was photographed. Debris and smear layer were evaluated separately and scored from 1 to 5 using the scoring system introduced by Hülsmann et al. [9,10]. The presence of debris was evaluated from the images at 200X magnification using a scale of 5 scores, as follows: 1. clean root canal wall and only a few small debris particles; 2. a few small agglomerations of debris; 3. many agglomerations of debris covering less than 50% of the root canal wall; 4. more than 50% of the root canal walls were covered with debris; 5. complete or nearly complete root canal wall coverage with debris. The smear layer was evaluated from the images at 1000X magnification on a scale of the following five scores: [ The scoring was performed by an independent, trained examinee that could not identify the samples or the instruments used for their preparation. Statistical analysis Statistical analysis of results was performed using IBM SPSS 20.0 software (SPSS, Inc., Chicago, IL, USA). All data regarding the followed parameter were analyzed using the chi 2 test and the limit of statistical significance was set at p<0.05. 
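The χ2 comparisons reported in the Results below can be reproduced, as a sketch, from the counts of clinically acceptable versus irregular cross-sections (16 vs. 2 for Hero Shaper and 14 vs. 4 for Pro Taper, out of 18 sections per system). The snippet uses SciPy rather than the SPSS software actually employed, and adds a Fisher exact test, which is often preferred when expected cell counts are small.

    # Sketch of the chi-square comparison of acceptable vs. irregular cross-sections.
    from scipy.stats import chi2_contingency, fisher_exact

    table = [[16, 2],   # Hero Shaper: acceptable, irregular
             [14, 4]]   # Pro Taper:  acceptable, irregular

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.3f}, p = {p:.3f}")

    # Small expected counts: Fisher's exact test as a cross-check.
    odds_ratio, p_exact = fisher_exact(table)
    print(f"Fisher exact p = {p_exact:.3f}")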
Evaluation of the root canal form on cross section

The quality of the preparation on the cross section was assessed according to the classification of Loushine [8], which considers round and oval sections as clinically acceptable and irregular sections as clinically unacceptable. In the coronal third the Hero Shaper system recorded a greater number of round and oval sections than the Pro Taper system, the difference being statistically significant (p=0.010) (Table I). In the middle and apical thirds no statistically significant differences between the two rotary systems were found (Table I). Evaluation of the samples prepared with the Hero Shaper system showed highly statistically significant differences in terms of shape (p=0.003), the number of regular sections being 16 and the number of irregular sections being 2; 14 sections prepared with the Pro Taper system presented a regular shape, the difference between acceptable and unacceptable sections being only close to the limit of statistical significance (p=0.01).

Evaluation of the centering capacity

The quality of the root canal preparation was also evaluated by assessing the degree of contact between the instruments and the surface of the root canal. The contact surface between the instruments and root canal walls was calculated by overlaying the images of pre- and postoperative cross sections and expressed as a percentage. An overlap of 100% means complete contact between the instrument and the circumference of the canal and a very good centering ability of the instrument. By evaluating the pre- and postoperative overlapping of the cross sections of the palatal root canal, 10 sections for Hero Shaper and 8 sections for Pro Taper showed a contact area of more than 50% (Table II). In the coronal third the Pro Taper instruments left a larger uninstrumented area than the Hero Shaper system (Table II). In the apical third of the root canal the preparation was more uniform with the Pro Taper system, two sections for the Hero Shaper system (Figure 1) and 4 for the Pro Taper (Figure 2) showing values exceeding 50% overlap. The Hero Shaper system recorded better results in the coronal third, while the Pro Taper system did so in the apical third of the root canal (Table II).

Evaluation of dentinal debris and smear layer

Regarding the dentinal debris, the data showed a majority of scores 2 and 3 evenly distributed between the two rotary systems. No statistically significant differences were found (Table III, Figures 3, 4). In the evaluation of the smear layer, only two samples showed scores of 1, both belonging to the Hero Shaper (16.67%) group (Table III). From a statistical point of view the two rotary systems showed no significant differences.

Evaluation of intraoperative accidents

During root canal preparation the fracture of one instrument was recorded (Table IV). It was the No. 25 instrument of the Hero Shaper system. The loss of working length in one of the prepared samples was also observed (Table IV), due to a ledge that occurred through exaggerated pressure applied with the instrument on the outer wall of the root canal. No perforations were created during the instrumentation of the teeth.

Evaluation of preparation time

Regarding the time designated to the preparation of the root canals, no statistically significant differences between the two rotary systems were observed (p=0.0019).
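The centering-capacity figures above are essentially overlap percentages between the pre- and postoperative canal outlines. The short sketch below shows one way such a value could be computed once the photographs have been segmented into aligned binary masks; note that the study measured contact along the canal circumference, whereas this simplified example uses an area-based proxy, and the masks here are synthetic.

    # Sketch: share of the preoperative canal included in the instrumented (postoperative) canal.
    import numpy as np

    def contact_percentage(pre_mask: np.ndarray, post_mask: np.ndarray) -> float:
        """Percentage of the preoperative canal area covered by the postoperative canal."""
        pre_area = pre_mask.sum()
        if pre_area == 0:
            raise ValueError("empty preoperative mask")
        instrumented = np.logical_and(pre_mask, post_mask).sum()
        return 100.0 * instrumented / pre_area

    # Toy masks (True = canal lumen); 100% would mean the preparation fully enclosed the canal.
    pre = np.zeros((100, 100), dtype=bool);  pre[40:60, 40:60] = True
    post = np.zeros((100, 100), dtype=bool); post[38:62, 38:58] = True
    print(f"overlap: {contact_percentage(pre, post):.1f}%")  # 90.0% for these toy masks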
The Hero Shaper system due to a 4 instruments working sequence, recorded a shorter working time when compared to the Pro Taper system with a 6 instruments working sequence (Table V). Evaluation of the root canal form on cross section Assessment of the root canal form was performed by SEM analysis of postoperative cross sections. Regular cross sections evidenced a good centering ability of the instruments. Both systems showed a large number of round and oval sections. The Hero Shaper obtained a higher number of regular cross sections than the Pro Taper system at all three levels. Hero Shaper recorded only two irregular sections in the apical third, while Pro Taper had two irregular sections at the coronal and two in the apical third. These results are supported by other studies that showed a better compliance with the original shape of the root canal cross section for the Hero Shaper system [11,12]. Evaluation of the centering capacity of the two rotary systems The ability of the two rotary systems to instrument root canals can be appreciated by overlapping pre-and postoperative cross sections. It is considered that a root canal is fully instrumented when the postoperative circumference of the root canal includes completely the original root canal perimeter [7]. After overlapping the pre-and postoperative cross sections of the palatal root, we found that both rotary systems left un-instrumented root canal areas. Although most sections had regular shapes, only a few had a degree of overlap of 75-100%. Only the Pro Taper system recorded that degree of overlapping in the apical third. This fact can be explained by the variable taper of these instruments along their active portion, having a more pronounced taper in the apical third than the Hero Shaper system instruments. A total of 8 of 18 cross sections for the Pro Taper system and 10 of 18 cross sections for the Hero Shaper system had a contact perimeter greater than 50%. There were no statistically significant differences between the two systems at all three levels, coronal, middle and apical. Hülsmann et al. [13] also found a good centering ability of the Hero 642 system, while other studies reported a good centering capacity of the Hero Shaper system being more evident in the middle third [4,11]. Evaluation of the capacity of the two rotary systems to remove dentinal debris and smear layer Only the middle third of the buccal root canal was evaluated regarding the removal of debris and smear layer, because this area is the most easily reproduced and analyzed under SEM. Both systems failed to completely remove the debris and smear layer. Regarding the removal of dentinal debris both systems achieved a great number of scores 2, which indicates a good cleaning capacity. Regarding the smear layer, the Hero Shaper system obtained two scores of 1, showing a large number of open dentinal tubules. Otherwise the results showed a large number of scores 2 almost equally distributed to the two systems. The differences were not statistically significant. This is consistent with the results of other studies. Yang et al. [14] assessing the amount of debris and smear layer removed by the same two rotary systems found no statistically significant differences, except in the apical third, where Pro Taper system gave better results. These data confirm our study, which refers only to the middle third where both systems performed similarly. 
The parameter of surface preparation is not completely clarified clinically as yet; however, taking into account that viable microbes penetrate deep into the dentinal tubules and can persist during root canal preparation, the use of an irrigant is essential. In this study EDTA was used only as irrigating solution at the end of the preparation and not as a gel during preparation. This fact might have decreased the ability of the two systems to remove debris and smear layer. Another factor that influences the capacity of the instruments to remove the dentinal debris and the smear layer is the depth reached by the irrigating solution during root canal irrigation. A larger diameter and a greater taper of the preparation are improving the irrigation. Since the Pro Taper system implies a larger number of instruments, it also increases the amount of irrigation. The deeper diffusion of the irrigating solution is favored also by the greater taper of the Pro Taper system, which is the only system that has a variable taper throughout the length of the active part combined with a negative cutting angle of the helix. These factors explain the better performance of the Pro Taper system in the apical portion and the lack of differences in other regions of the root canal. Evaluation of preparation time Some studies calculate the working time as the actual intrumentation time, summing the necessary time for the instruments to work in the canal [15]. In this study we found a statistically significant difference between the time needed by each rotary system for the preparation of the root canals. Hero Shaper showed a shorter working time, probably due to the lower number of instruments belonging to this system. Evaluation of intraoperative accidents Procedural errors depend on many factors such as instrument design, the manufacturing process, root canal morphology, pressure applied on the instrument, preparation technique, operator`s experience and the number of uses in the root canal [16]. In the process of comparing the Hero Shaper and Pro Taper a single incident was recorded, namely the intraoperative fracture of a single instrument at the end of sample preparation, which belonged to the Hero Shaper system. It should be noted that the number of teeth included in this study was relatively small, thus not being representative for the assessment of procedural errors. Conclusions The study shows a good centering ability of both systems, but a lower efficiency in terms of removing the dentinal debris and smear layer. Working time was lower for the Hero Shaper system, but an intraoperative incident occurred with this system. Nickel-titanium instruments ensure a nearly ideal tapered preparation, allowing the treatment of the most difficult canals; the effectiveness of these instruments reduces the time required for endodontic treatment, providing more comfort for both practitioner and patient.
2018-04-03T04:44:32.784Z
2015-06-19T00:00:00.000
{ "year": 2015, "sha1": "538a0a1b6c88e1c1d4099b798969373fb1c1f485", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc4632902?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "538a0a1b6c88e1c1d4099b798969373fb1c1f485", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
3944806
pes2o/s2orc
v3-fos-license
Mast cells as the strength of the inflammatory process The inflammatory process is a complex host defence mechanism aimed at the elimination of deleterious factors disturbing homeostasis. Inflammation consists of several interdependent stages controlled by a wide range of mediators. Those include acute phase proteins, heat shock proteins, complement components, biogenic amines, cytokines, lipid-derived mediators, reactive oxygen species, nitric oxide, proteolytic enzymes, and kinins. Due to the strategic location in the body, mast cells play a protective role in the inflammatory process, through its initiation, amplification, and resolution. Mast cells degranulate and/or newly produce, and release, various mediators classified into three groups: preformed mediators, de novo synthesised lipid mediators, and newly synthesised cytokines. Those mediators have an impact on different processes occurring during inflammation; inter alia, they influence blood vessels, leading to dilation, enhanced adhesion molecule expression, and increased permeability. Furthermore, mast cell mediators play a pivotal role in inflammatory cell chemotaxis, degradation of extracellular matrix proteins, impact on stationary cells, and resolution of inflammation. The release of mast cell mediators and their actions constitute a highly complex and still not fully understood mechanism, which warrants further studies of the action of mast cells in inflammation. This review will focus on the current knowledge concerning the broad role of mast cells in the inflammatory process.

Inflammation

Inflammation is a complex biological process, occurring in higher organisms as a result of any tissue malfunction, caused by diverse conditions including pathogen infection or stress factors, e.g. ionising radiation, cold, heat, toxins, and exogenous or endogenous agents. This first-line defence strategy can be characterised by physical symptoms such as swelling, pain, redness, and heat triggered by vasodilation, extravasation of fluids, or growing mediator concentrations. The purpose of inflammation is not merely the elimination of injurious agents, but also the removal and healing of damaged cells in order to restore homeostasis [1,2]. In a properly functioning organism "physiological inflammation", known as the acute phase response (APR), occurs quickly in the presence of detrimental conditions and subsides after their removal [3]. At this stage, a rapid infiltration of leucocytes, namely neutrophils, monocytes, and different subsets of lymphocytes, can be observed. Any disruption in the resolution process extends the duration of the inflammation, leading to undesirable consequences for healthy tissues and, as a result, to loss of organ function. This "non-physiological" state is defined as a chronic stage [1,2,3,4]. Moreover, most inflammatory diseases originate from the continuous inflammatory response, not directly from deleterious factors. These medical conditions include atherosclerosis, type 2 diabetes, rheumatoid arthritis, asthma, obesity, cardiovascular disease, Alzheimer's disease, and to a certain extent even cancer [1,4].
Stimuli triggering the inflammatory response are first recognised by host cells, which are present in the site of harmful factor penetration [1,2].These agents are very diverse and comprise a variety of foreign factors such as pathogens, physicochemical agents, and certain endogenous stimuli, arising in response to cell damage and disruption of homeostasis occurring under pathological conditions [4,5].Recognition can be achieved mainly due to the presence of pattern-recognition receptors (PRRs), specific transmembrane receptors detecting microbial structures known as pathogen-associated molecular patterns (PAMPs), and cellular injuries known as damage-associated molecular patterns (DAMPs).Toll-like receptors (TLRs), NOD-like receptors (NLRs), C-type lectin receptors (CLRs), and RIG-1-like receptors (RLRs) are distinguished among PRRs, and they play a crucial role in tissue damage recognition with subsequent activation of various signalling pathways [6].Leukocyte recruitment is a significant process in the whole defence mechanism and involves many steps of cell adhesion and activation, occurring in a precise manner and period of time, in order to eliminate thoroughly hazardous factors and simultaneously causing less damage to healthy cells [7].Although infiltration of inflammatory cells, such as monocytes/ macrophages, neutrophils, lymphocytes, or dendritic cells, is of prime importance, the role of fibroblasts, epithelial cells, endothelial cells, or hepatocytes is no less significant [1,2].In order to restore homeostasis efficiently, every biological structure in an organism must cooperate and influence each other, thus released mediators from one cell trigger specific defensive reactions from different cells. Prolonged duration of inflammation may lead to various pathological conditions; therefore, the mechanism of inflammation resolution is of critical importance for the organism.This process depends on the release of anti-inflammatory and pro-resolving mediators.Although both are responsible for stopping the inflammation, the former one inhibits or blocks a specific action, e.g.inhibition of pro-inflammatory cytokine secretion, while the latter one triggers stimulation and activation of certain mechanisms, like induction of leukocyte apoptosis [4]. Inflammation is a sophisticated mechanism based on many interconnected steps and requires concerted action of various inflammatory cells.Mast cells are considered among the most important cells taking part in the development of acute, as well as chronic, inflammation.Their specific localisation in an organism, in areas exposed to a pathogen or endogenous factor penetration on the one hand, and a wide spectrum of pro-and anti-inflammatory mediators se-creted, on the other hand, are both highly relevant features in the process of inflammation. 
Mediators of inflammation As discussed earlier, infiltration of inflammatory cells and activity of cells already existing at the site of inflammation are crucial for this process, and cell migration, as well as activation, is regulated by a wide range of mediators, which are necessary for the initiation, amplification, and resolution of inflammation.Cells capable of mediator secretion may be present in tissues (mast cells, macrophages, dendritic cells), create tissue (endothelial cells, epithelial cells, smooth muscle cells, fibroblasts, hepatocytes), or circulate in the bloodstream with a subsequent tissue infiltration after occurrence of detrimental agent (neutrophils, eosinophils, basophils, T cells, monocytes).Keeping in mind that the inflammatory process is controlled by a vast number of mediators making it very difficult to present a comprehensive list of them, there are some classes of mediators of a well-recognised role in controlling the inflammation.Some sources also categorise mediators into two main classes: plasma protein-derived mediators that are released from distant organs, e.g.acute phase proteins (APPs), heat shock proteins (HSPs), complement proteins, and kinins; and cell-derived mediators, e.g.biogenic amines, cytokines, lipid-derived mediators, and neuropeptides. Acute phase proteins are classical examples of inflammatory mediators, which are synthesised by hepatocytes during APR.They are divided into two groups: negative APPs, e.g.albumin, transferrin, and retinol binding protein (RBP); and positive APPs, e.g.C-reactive protein (CRP), serum amyloid A (SAA), α2-macroglobulin (A2M), haptoglobin (Hp), ceruloplasmin, fibrinogen [8], and mannose-binding lectin (MBL) [9].Heat shock proteins are the second classical representatives of inflammatory mediators, which are synthesised due to the action of stress factors, leading to denaturation of proteins.They are also known as chaperone proteins in view of their activities preventing denaturation and supporting proper folding and aggregation of proteins [10].Common examples of HSPs include HSP60, HSP70, and HSP90 [11].The complement system serves as a crucial mechanism initiating the inflammatory process upon pathogen infection.It consists of a set of proteins, which activate each other in a strictly defined order through classical, lectin, or alternative pathway, depending on the surface pattern recognition.Components C3a and C5a are mostly mentioned as those playing role in inflammation [12].Inflammatory mediators comprise many other compounds taking part in the process of inflammation.Some of them may be classified into certain groups according to their structure, origin, or function.Biogenic amine mediators constitute another class of mediators, with histamine as the model representative [13]. 
Cytokines are among the most commonly mentioned and the largest group of inflammatory mediators.Different cytokines may act agonistically or antagonistically on the same mechanism or cell; hence, they are divided into two major groups: pro-inflammatory and anti-inflammatory cytokines.This division is based on the general functions of certain cytokines because barely any of them may be considered as the only pro-or anti-inflammatory [14].Interleukin (IL)-1β, IL-6, IL-17 and tumour necrosis factor (TNF) are basic representatives of pro-inflammatory cytokines, which can act singly or in co-operation to stimulate the release of other pro-inflammatory cytokines [1,2,3,14].However, some of them may also trigger the secretion of anti-inflammatory and pro-resolving mediators, e.g.IL-1β and TNF induce the production of IL-10 [4, 14], while IL-1β, interferon (IFN)-γ, and IL-4 contribute to lipoxin release [14,15].Transforming growth factor (TGF)-β and IL-10 are common examples of cytokines suppressing the inflammatory response [1,2,5,14].Interleukin 4 and IL-13 are also considered to act mainly as anti-inflammatory cytokines [2,5,14]. Certain growth factors are likewise classified as cytokines taking part in the inflammatory process, with central representatives of this group as follows: platelet-derived growth factor (PDGF), basic fibroblast growth factor (bFGF), epidermal growth factor (EGF), anti-inflammatory TGF-β, granulocyte-macrophage colony-stimulating factor (GM-CSF), and stem cell factor (SCF).The last frequently mentioned group of cytokines are low molecular weight proteins called chemotactic cytokines or chemokines.Common, and simultaneously very important, examples of chemokines taking part in the inflammatory process include: CXCL8, CCL2, CCL3, CCL4, and CCL5 [14]. While considering various types of mediators, an additional concept, namely inflammasome, should be emphasised.Inflammasome is an intracellular inflammatory protein complex taking part in the maturation of cytokines, specifically IL-1β and IL-18.This complex consists of the following proteins: inflammasome sensor molecule, the adaptor protein apoptosis-associated speck-like protein containing caspase activation and recruitment domain (CARD) (ASC), and caspase 1. Inflammasome possesses the ability to detect PAMPs and DAMPs through TLR activation, leading to the production of inactive cytokine precursors, pro-IL-1β and pro-IL-18.CARD strongly binds caspase-1 and activates it through its self-cleavage.This, in consequence, contributes to the cleavage of cytokine precursors and the release of fully functional mediators IL-1β and IL-18, which significantly influence the inflammatory process [16]. Inflammatory mediators comprise many other classes and unclassified compounds, and although not all of them could be listed here, there are two groups that should be mentioned, namely reactive oxygen species (ROS) with nitric oxide (NO) and kinins.ROS and NO also function as inflammatory mediators.They are released from neutrophils during degranulation, in order to create destructive conditions for foreign microorganisms, but may likewise affect the survival of host cells [2].Kinins are inflammatory mediators, the functions of which are considered crucial for the first-line defence strategy, with bradykinin as a common example [23]. 
Mast cells Mast cells are crucial elements of different organism physiological and defence strategies, such as maintenance of homeostasis, protection against pathogens, and inflammatory process.They are formed initially in the bone marrow as CD13+/CD34+/CD117+ multipotent haematopoietic progenitors, which subsequently are released to blood vessels in order to settle at the target tissue, where they may dwell for several months.Mast cells are localised in the organism in strategic places in the connective tissue, mainly the surfaces of the mucosa, enabling an efficient carriage of their functions.Therefore, they are commonly localised close to the blood and lymphatic vessels, below the epithelium, lining the respiratory, digestive, or genitourinary systems and in the skin [24,25,26]. Mast cells are characterised by the expression of different receptors, activation of which leads to secretion of various inflammatory mediators.These signalling molecules may be expressed in different cellular compartments including cytoplasm, cell membrane, nuclear membrane, and endosome membrane [24,25,26].Not all receptors are directly associated with inflammation; however, some of them should be mentioned due to their essential properties in determining mast cell functions.These specific molecules include a high-affinity receptor for IgE, e.g.FcεRI, and receptors for IgG, such as FcγRI, FcγRIIA, and FcγRIII.Moreover, mast cells possess hormone receptors, e.g.Mas-related gene X2 (MrgX2) for somatostatin, oestrogen receptor α (ERα), ERβ, progesterone receptor (PR), and multiple G protein-coupled receptors (GPCRs) including receptors for cannabinoids, e.g.cannabinoid receptor type 2 (CB2), and neurotransmitters such as adenosine, e.g.adenosine A2a receptor (ADORA2A), ADORA2B, ADORA3, for substance P, e.g.neurokinin 1 receptor (NK1R), or acetylcholine, e.g.nicotinic acetylcholine receptors (nAChRs) [24,25,26,27]. Another group of mast cell receptors crucial in activation of mast cells during inflammatory processes is PRRs, known to be critical for defense against bacterial or viral infection.This include TLRs, e.g.TLR1, TLR2, TLR3, TLR4, TLR5, TLR6, TLR7, TLR8, TLR9, TLR10; NLRs, e.g.NOD1, NOD2; CLRs, e.g.dectin-1, macrophage-inducible Ca 2+ -de-pendent lectin (mincle); and RLRs, e.g.RIG-1, melanoma differentiation-associated protein 5 (MDA5) [30,31].Activation of PRRs, particularly TLR2 and TLR4, usually leads to stimulation of inflammatory response through the increase of expression of inflammatory mediators [31].However, the signalling pathway initiation through particular receptors, e.g.TLR3, may also downregulate this response due to inhibited degranulation and adhesion of mast cells [32].Although PRRs seem to be specialised in recognition of structures specific for bacteria and viruses, there are many endogenous ligands that are also recognised by these receptors.These host-derived ligands include components normally present in physiological conditions, e.g.fibrinogen, fibronectin, and hyaluronan, as well as molecules synthesised during disturbed homeostasis or occurring after cell/tissue damage, like high-mobility group box 1 (HMGB1), cardiac myosin, and HSPs, e.g.HSP60, HSP70, and HSP72 [33]. 
Mast cells also express integrins, which primarily serve as receptors for adhesion molecules, such as endothelial-leukocyte adhesion molecule-1 (ELAM-1), intercellular adhesion molecule-1 (ICAM-1), and vascular cell adhesion molecule-1 (VCAM-1), expressed by endothelial cells.Recognition of adhesion ligand is critical for the migratory abilities and localisation of mast cells during the inflammatory process [35].Integrin receptors mediate the adhesion of mast cells to certain ECM glycoproteins such as fibronectin following activation via FcεRI [36].Furthermore, these receptors also provide signal-enhancing release of mast cell mediators upon FcεRI stimulation [37] and play a role in the direct interaction of mast cells with other inflammatory cells [38]. Activated mast cells owe their fundamental role in inflammation to the production and release of a wide range of inflammatory as well as non-inflammatory mediators.These may be classified into three groups: preformed mediators, which are stored in the granules and secreted during degranulation almost instantaneously after mast cell activation; mediators de novo synthesised during membrane phospholipid metabo-lism, which are produced and released in tens of minutes after activation; and newly synthesised cytokines and other mediators, which are formed and secreted several hours after mast cell activation (Table I) [24,25,26,27,39,40,41,42]. Mast cells in inflammation When mast cells are activated, they degranulate releasing a wide range of already stored mediators and/or secrete newly synthesised lipid derivatives and cytokines because the release of mediators is highly dependent on the stimulus and does not always occur with the secretion of all three classes of mediators [24,25,27].Various, mainly pro-inflammatory, functions of released substances mean that mast cells affect different stages of inflammation, including its initiation and maintenance, but also its resolution, suggesting a pivotal role of mast cells in the inflammatory process.The involvement of mast cell mediators in inflammation is presented in Fig. 1.The essential role of mast cells in the process of inflammation could be observed both in APR and chronic stage, where the latter could lead to many pathological conditions.While this review focuses on "physiological inflammation", it is worth noting that pro-inflammatory functions of mast cells may initiate and/or amplify multiple pathological inflammatory conditions including rheumatoid arthritis [43], periodontal disease [44], Hodgkin lymphoma [45], and cancer, where they also play an important role in the process of tumour angiogenesis [46]. 
It is well known that inflammatory cells incessantly circulate in the bloodstream; however, only some of them will infiltrate the inflammatory site through precisely defined mechanisms, and mast cells contribute significantly to this movement. It should be kept in mind that the mechanism of cell extravasation into the tissue consists of many stages, including margination, rolling, activation, tight adhesion, diapedesis, and chemotaxis [7]. (Abbreviations used in Table I: CRH – corticotropin-releasing hormone; LL-37 – leucine, leucine-37; VIP – vasoactive intestinal peptide; VEGF – vascular endothelial growth factor; PAF – platelet-activating factor; NT-3 – neurotrophin-3; MIF – macrophage migration inhibitory factor; XCL1 – X-C motif chemokine ligand 1.) These processes are tightly regulated by various mediators. Firstly, all recruited cells must be able to attach to the vascular endothelium. This is made possible by the activity of histamine, bradykinin, corticotropin-releasing hormone (CRH), urocortin, PGD2, PGE2, platelet-activating factor (PAF), vascular endothelial growth factor (VEGF), and NO, which lead to pronounced dilation of blood vessels and, consequently, a slowdown of blood flow. Although NO may seem unimportant in the early stage of inflammation because it is released only a few hours after mast cell activation, it can fulfil its functions in a prolonged inflammatory process. Mediators arising from membrane phospholipid metabolism may provoke vasodilation within tens of minutes of mast cell activation, while histamine, bradykinin, CRH, and urocortin act even more rapidly, in the first moments after activation. VEGF also affects vasodilation almost instantaneously when secreted as a preformed mediator, or after several hours when it is newly synthesised [27,35,47]. Another step facilitating the attachment between endothelial and inflammatory cells is increased expression of adhesion molecules, which, depending on the molecule type, selectively bind the cells that will take part in the inflammation [7,25]. This process is also controlled by a set of mast cell mediators that influence adhesion molecule expression, inter alia, by increasing it. Histamine may trigger this reaction, as may tryptase, substance P, IL-1β, IL-6, TNF, and IFN-γ [25,48,49]. Interferon γ, as well as IL-1β, affects adhesion molecule expression several hours after mast cell activation, while tryptase and substance P act immediately, as does histamine. Because IL-6 and TNF may be stored in mast cell granules or synthesised de novo, they can act in two ways: instantly and/or after a few hours. Zhang et al. [49] studied the effect of mast cell-derived IL-6, TNF, and IFN-γ on the expression of VCAM-1, ICAM-1, P-selectin, and E-selectin and observed increased expression of adhesion molecules, with IFN-γ having the weakest and TNF the strongest activity. Moreover, IL-6 and TNF influence the adhesion of neutrophils to endothelial cells.
Finally, mast cells may influence blood vessels through modification of vascular permeability.This is a key phenomenon of the inflammatory process because it facilitates the exudation in the tissue, thus enabling the infiltration of leukocytes to the site of inflammation, as well as extravasation of various proteins including complement components, e.g.C3a, C5a, fibrinogen, and other humoral factors.Multiple mast cell mediators cause an increase in vascular permeability.Those include a similar set of substances playing a role in vasodilation: histamine, bradykinin, LTC 4 , PGD 2 , PGE 2 , TNF, TGF-β, VEGF, CXCL8, and NO [25,39,48,50].LTC 4 , similarly to other arachidonic acid derivatives, increase vascular permeability after tens of minutes due to de novo synthesis.Because TGF-β and CXCL8 may be newly synthesised, they may also act after a long period of time; however, when released through degranulation, the increase in permeability is very rapid.Similarly, heparin also affects vascular permeability almost instantaneously when secreted as a preformed mediator [25]. Selected inflammatory cells, attached to certain adhesion molecules on endothelial cells, are subsequently attracted by particular chemokines and other mediators, which force cells to migrate in the designated direction indicating the inflammatory site.Some substances may be specific to certain cell populations, mainly leukocyte; however, they may also have an impact on the infiltration of different cells [25].Mast cells are able to induce chemotactic moves of lymphocytes, which in turn release other mediators that cooperate in the initiation and maintenance of inflammation, e.g.Th1 cells secreting IL-2, IL-3, TNF, and IFN-γ; Th2 cells that are able to release IL-4, IL-5, IL-6, IL-10, IL-13, and TGF-β; or in the resolution, e.g. regulatory T cells secreting IL-10 and TGF-β.Lymphocytes may be attracted through multiple mediators that may be released a few hours after mast cell activation [48,51,52,53].Monocytes, with phagocytic abilities and their own set of cytokines, e.g.IL-1, IL-6, IL-12, and TNF [54], are another cell population attracted by mast cell-derived monocyte chemoattractants [25,39,55,56]. Granulocytes are also relevant in the process of inflammation due to their ability to carry on phagocytosis and release of a wide range of mediators, and all granulocytes may also be attracted by mast cell cytokines and other substances.Firstly, mast cells may affect chemotactic movements of neutrophils [25,27,39,41] that secrete mediators such as cathepsin G, MMP-9, defensins, and various enzymes [57].Mast cell-derived preformed and de novo synthesised mediators induce the infiltration of eosinophils [27,28,51,52,56] that not only possess the ability to phagocytose but are also a source of many mediators, e.g.LTC 4 , PGE 2 , PAF, IL-2, IL-4, IL-5, IL-6, IL-10, IL-13, and TGF-β [58].Finally, mast cells act as chemoattractants for basophils [56] that are known to release histamine, heparin, LTC 4 , LTD 4 , LTE 4 , IL-4, IL-13, and GM-CSF as well as various enzymes [28].It is worth emphasising that mast cells are able to attract different inflammatory cells through various mediators; however, they may also increase the infiltration of the inflammatory site by other mast cells through the release of preformed and newly synthesised mediators [27,35,59,60]. 
Despite the ability of mast cells to release chemoattractants for various cell migration, their effective infiltration would not be possible without the loosening of extracellular molecules secreted by different cells in order to maintain their function.Therefore, there are a set of mediators that enable the remodelling of ECM, and thus the destruction of physical barriers preventing inflammatory cells from entering the inflammatory site, and mast cells are able to secrete some of those enzymes [25].These mediators comprise a group of protease enzymes involving tryptase, chymase, cathepsin G, which can activate metalloproteases, and MMP-9 [20,25,41].Since all of those mediators are stored in mast cell granules, they may be quickly released after mast cell activation, enabling a smooth migration of recruited cells from the beginning of the inflammation process.Mechanisms concerning ECM degradation by mast cell-derived enzymes include collagen type VI cleavage (tryptase), proteoglycan degradation (MMP-9), collagenase activation (tryptase, chymase), collagen type IV, type V, and fibronectin cleavage (cathepsin G, chymase and MMP-9) or vitronectin degradation (chymase) [20,25,41,48,56].However, tryptase and chymase may also take part in the synthesis of ECM structures; specifically, they stimulate collagen production through the activation of lung fibroblasts (tryptase) or by the cleavage of procollagen (chymase) [48]. During the course of inflammation, mast cell mediators affect infiltrated cells by activating them to release their sets of mediators, involving them in the amplification of the inflammatory process, but they also influence stationery cell populations including fibroblasts or those creating tissues: epithelial and smooth muscle cells [25].Fibroblasts are mainly responsible for the remodelling of ECM, inter alia, through the production of MMPs and TIMPs.Tryptase and chymase released from mast cell granules stimulate fibroblasts in the process of ECM synthesis, but so do histamine, lipid mediators, TGF-β, and VEGF.Furthermore, stimulation of fibroblast may occur due to the activity of preformed or newly synthesised mediators such as SCF, bFGF, and NGF, while GM-CSF and PDGF are synthesised de novo after mast cell activation [52].The prime function of epithelium is to protect blood vessels and organs against physical injuries by lining their surfaces.Moreover, epithelial cells are a source of various mediators, e.g.defensins, LL-37, IL-1β, CCL2, and CXCL8 [61].Mast cells may activate epithelial cells, leading to a release of cytokines, through secretion of bFGF, NGF, PDGF, and VEGF [26], which also induce epithelial cell proliferation.Other mast cell mediators such as chymase, LTC 4 , PGD 2 , TNF, IL-6, and IL-13 that is newly synthesised after several hours [25,27,56] increase the secretion of mucus from mucous glands, which are present in the epithelium.Numerous activities of tryptase comprise also the ability to sensitise muscle cells to histamine [62], leading to their constriction.The activity of TGF-β, bFGF, and NGF [56] leads also to increased smooth muscle cell proliferation, while histamine, heparin, LTC 4 , and PGD 2 [25,63] induce the constriction of smooth muscle cells. 
Although mast cells play a key role in initiation and maintenance of the inflammatory process, they may also take an active part in its resolution, thus preventing the adverse effects of prolonged inflammation.Thus, mast cells secrete known anti-inflammatory mediators, such as IL-10, which may lower the migration of T cells, granulocytes and macrophages [53], and TGF-β [51].Furthermore, mast cell proteases, apart from their role in the degradation of ECM, are able to degrade inflammatory mediators.The list of known proinflammatory cytokines inactivated by mast cell proteases includes IL-5, IL-6, IL-13, IL-33, TNF, and endothelin [64,65,66].Moreover, these proteases are also able to degrade certain danger signals released from damaged tissues, e.g.HSP70 [65]. Conclusions The review of available data clearly indicates that mast cells are critical players in the inflammatory processes; however, it should be remembered that the degree of involvement of these cells largely depends on the site of inflammation and inducing stimulus.Various factors may activate mast cells through different receptors, thus initiating alternative signalling pathways.Due to this, secretory response of mast cells may be varied, e.g.activation via FcεRI triggers a rapid degranulation of preformed mediators, as well as de novo synthesis of arachidonic acid and lipid derivatives, cytokines, and other mediators; while in the case of activation via PRRs, mainly TLR4, no degranulation is observed despite the increased synthesis of cytokines.On the other hand, activation of mast cells by neuropeptides, e.g.substance P and VIP, initiates exclusively degranulation.Because of such differential mast cell response to an inflammatory stimulus, the effect of mast cell activation on the course and intensity of inflammatory process may be greatly different.Therefore, the result of mast cell involvement in allergic inflammation triggered via FcεRI-specific antigen, and in neurogenic inflammation, initiated via neuropeptide receptors, vary from their role in an inflammatory process initiated as a defence mechanism against pathogens. Furthermore, the role of mast cells in the process of inflammation is directly dependent on their subpopulations.Human mast cells are categorised according to proteolytic enzyme content in the granules.Thus, MC T (which corresponds to rodent mucosal mast cells [MMCs]) represents a mast cell population that stores tryptases, while MC TC (rodent connective tissue mast cells [CTMCs]) is characterised by the presence of tryptases, chymases, and carboxypeptidases.The former subpopulation can be found predominantly in a close neighbourhood to T cells or in the mucosa of lungs and intestine, while the latter is present in the skin, lymph nodes, as well as in the lung and intestine submucosa.Besides different localisation, those mast cell subpopulations differ in the content of various mediators, expression of certain receptors, and sensitivity to stimulation; therefore, those features affect mast cell response to particular factors [24].
Estimating Daily Reference Evapotranspiration in a Semi-Arid Region Using Remote Sensing Data Estimating daily evapotranspiration is challenging when ground observation data are not available or scarce. Remote sensing can be used to estimate the meteorological data necessary for calculating reference evapotranspiration ETo. Here, we assessed the accuracy of daily ETo estimates derived from remote sensing (ETo-RS) compared with those derived from four ground-based stations (ETo-G) in Kurdistan (Iraq) over the period 2010–2014. Near surface air temperature, relative humidity and cloud cover fraction were derived from the Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit (AIRS/AMSU), and wind speed at 10 m height from MERRA (Modern-Era Retrospective Analysis for Research and Application). Four methods were used to estimate ETo: Hargreaves–Samani (HS), Jensen–Haise (JH), McGuinness–Bordne (MB) and the FAO Penman Monteith equation (PM). ETo-G (PM) was adopted as the main benchmark. HS underestimated ETo by 2%–3% (R2 = 0.86 to 0.90; RMSE = 0.95 to 1.2 mm day−1 at different stations). JH and MB overestimated ETo by 8% to 40% (R2 = 0.85 to 0.92; RMSE from 1.18 to 2.18 mm day−1). The annual average values of ETo estimated using RS data and ground-based data were similar to one another reflecting low bias in daily estimates. They ranged between 1153 and 1893 mm year−1 for ETo-G and between 1176 and 1859 mm year−1 for ETo-RS for the different stations. Our results suggest that ETo-RS (HS) can yield accurate and unbiased ETo estimates for semi-arid regions which can be usefully employed in water resources management. Introduction Evapotranspiration (ET) is one of the main components of the hydrological cycle.Its quantification is essential for water resource management [1].However, it is arguably the most difficult process to measure, especially in arid and semi-arid areas where losses of water tend to be spatially and temporally highly variable [2,3]. 
Evapotranspiration (ET) consists of two main component processes: evaporation and transpiration [4,5].Evaporation (E) is the loss of water from open water surfaces such as oceans, lakes, reservoirs, and rivers, and from soil pores directly to the atmosphere.In the evaporation process, energy is required to convert liquid water to the vapour state.Most of this energy comes from absorbed radiation which depends (inter alia) on latitude, season, cloud cover, air temperature and surface albedo (the fraction of solar shortwave radiation reflected from the earth back into space, which is affected by surface conditions and soil moisture [4,6]).Transpiration (T) occurs when water absorbed by plant roots is transferred to the leaves via the vascular system and returned to the atmosphere through their stomata [7].It is noteworthy to highlight that evaporation and transpiration occur simultaneously and it is complex to differentiate them.There are three different expressions for ET: potential evapotranspiration (ET p ), reference evapotranspiration (ET o ) and actual evapotranspiration (ET a ).ET p is the water loss which would occur from a vegetated surface when sufficient moisture is available in the soil such that stomata are fully open and resistance to water vapour transport from bare soil to the atmosphere is minimal [8].ET o is defined as the evapotranspiration rate from a hypothetical reference surface with unlimited soil moisture availability [9].The reference surface is assumed to be a grass sward with a height of 0.12 m, a fixed surface resistance (representing the ease with which water vapour is transferred between the surface layer and the atmosphere) of 70 s m −1 and an albedo of 0.23 [9].ET a is the loss of water from a vegetated surface under ambient soil moisture conditions (i.e., soil moisture may be limiting to the evapotranspiration rate).ET o can vary significantly on a daily time scale (which is the most commonly applied input data time step for hydrological modelling).In contrast to precipitation (which is notoriously variable), several studies have reported that variation of ET o is likely to be relatively uniform spatially at the basin scale, except where there are topographic complexities or strong gradients in relief [10][11][12]. ET has a crucial role in the long term terrestrial water balance.Its estimation is essential for water resources management.However, this can be a problem when observed data are sparse or unavailable, as is often the case in low and middle income countries [13,14].Fortunately, remote sensing (RS) has the potential to provide estimates of the meteorological variables required to calculate ET at different scales.Over the last decade, significant improvements in dynamic atmospheric retrieval techniques from RS have been made for several relevant variables with different spatial and temporal resolutions.Examples include the Atmospheric Infrared Sounder (AIRS)/Advanced Microwave Sounding (AMSU) and the MODerate resolution Imaging Spectroradiometer (MODIS) which are mounted on NASA's Earth Observing System (EOS) Aqua satellite [15]. 
AIRS is a passive sensing system which uses infrared hyperspectral sensing to measure temperature and humidity [16].The density profile of constituent atmospheric gases responsible for infrared absorption is used to define a weighting function for each of the 2378 AIRS channels, with wavelengths between 3.7 and 15.4 µm [16].By measuring the infrared radiance (IR) in each of the AIRS channels, atmospheric temperature can be calculated using the Planck equation [17].When cloud cover prevents accurate IR temperature retrieval from the lower atmosphere, measurements can be made by its partner, AMSU.This is a passive multi-channel microwave radiometer measuring atmospheric temperature with a 15-channel microwave sounder with a frequency range of 15-90 GHz.AMSU can provide atmospheric temperature measurements from the land surface up to an altitude of 40 km, as well as cloud filtering for the AIRS infrared channel at altitude to increase the accuracy of measurements [16].This allows NASA to provide an integrated dataset (AIRS/AMSU, hereafter AIRS).AIRS contributes to studies of the atmospheric temperature profile, sea-surface temperature, relative humidity, land surface temperature and emissivity and fractional cloud cover [16]. Zhang et al. [18] used remotely sensed leaf area indices from MODIS with the Penman-Monteith equation, gridded meteorology and a two -parameter biophysical model for surface conductance (G s ) to estimate eight-day average evaporation (E RS ) at a 1 km spatial resolution.A steady-state water balance (precipitation-runoff) approach was used to calibrate E RS which was then applied to estimate mean annual runoff, for 120 gauged sub-catchments in the Murray-Darling Basin of Australia.The results suggest that the evaporation model can be applied to estimate steady-state evaporation and E RS could be used with a hydrological model to generate runoff with an RMSE as low as 79 mm year −1 . Mu et al. [19] developed an algorithm to estimate ET using the Penman-Monteith method driven by MODIS-derived vegetation data and daily surface meteorological inputs.They also applied the model with different meteorological inputs from ground-based stations and vapour pressure deficit and air temperature from the Advanced Microwave Scanning Radiometer (AMSR-E) and Global Modelling and Assimilation Office (GMAO) meteorological reanalysis-based humidity, solar radiation and near-surface air temperature data.Their results were validated using data from six flux towers across the northern USA.Simulated ET_ RS derived from MODIS, AMSR-E and GMAO agreed well with tower-observed fluxes (r > 0.7 and RMSE of latent heat flux <30 Wm −2 (i.e., ET o < 1.05 mm day −1 ). Rahimi et al. [20] compared the Surface Energy Balance Algorithm for Land (SEBAL) with the Penman-Monteith equation to investigate the accuracy of actual evapotranspiration (ET a ) estimation using MODIS data.The results show that there was no significant difference between the SEBAL and PM methods for estimating hourly and daily ET a (RMSE ranged from 0.091 mm day −1 to 1.49 mm day −1 ).Peng et al. 
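The operational retrieval chain that turns AIRS radiances into temperature profiles is far more involved than a single equation, but the core radiance-to-temperature step can be illustrated. The sketch below inverts the Planck function to obtain a brightness temperature from a monochromatic radiance; the channel wavelength and radiance value are purely illustrative and are not taken from the AIRS channel specification.

```python
import numpy as np

# Physical constants (SI units)
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m s^-1
K = 1.381e-23   # Boltzmann constant, J K^-1

def brightness_temperature(radiance, wavelength):
    """Invert the Planck function to obtain brightness temperature (K).

    radiance   : spectral radiance in W m^-2 sr^-1 m^-1
    wavelength : channel wavelength in metres
    """
    c1 = 2.0 * H * C**2 / wavelength**5   # numerator of Planck's law
    c2 = H * C / (wavelength * K)         # exponent scale (K)
    return c2 / np.log(1.0 + c1 / radiance)

# Illustrative mid-infrared channel near 10 um with a hypothetical radiance
wl = 10.0e-6                              # 10 micrometres
rad = 9.0e6                               # W m^-2 sr^-1 m^-1 (hypothetical)
print(f"Brightness temperature: {brightness_temperature(rad, wl):.1f} K")
```

In practice the temperature profile is retrieved by combining many channels with different weighting functions, so this single-channel inversion should be read only as a conceptual illustration.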
[21] compared six existing RS-derived ET products at different spatial and temporal resolutions over the Tibetan Plateau.They used one product (LandFlux-EVAL) as a benchmark due to the lack of availability of in situ measurements.Their results showed that although existing ET products capture the seasonal variability well, validation against in situ measurements are still needed in order to confirm the accuracy of calculated ET, at least in this region and probably in general.Despite the fact that other studies have used RS data to estimate ET, few previous attempts have been made, to our knowledge, to use AIRS data to estimate ET in a data-scarce semi-arid area, such as northern Iraq.Existing ET -RS and reanalysis data products with global spatial coverage include the MODIS 1km PM data [19,22] and reanalysis data such as MERRA-2 [23].However, these data have temporal resolutions of eight days and one month, respectively-which are too course for many hydrological applications.Whilst attempts have been made elsewhere to obtain accurate evapotranspiration estimates from RS (ET o-RS ) at higher temporal resolutions (e.g., daily), for example in South Africa [24] and the USA [25], this has not been performed for many areas of the world where resources are limited and where ground observations are often very scarce.The main objective of this paper is to evaluate the accuracy of daily ET o estimates derived using remote sensing data against ET o calculated using ground observations based on the PM method as a benchmark.Our aim was to focus on the value of RS data while minimising the use of reanalysis data products (i.e., products derived from the reprocessing of historical observed RS data using a consistent analysis system, often involving models and incorporating or "assimilating" ground based observations, where available). Study AREA The study was conducted in the Kurdistan Region of northeastern Iraq (36 • 49 14 N, 44 • 51 39 E to 36 • 12 03 N, 44 • 28 48 E; Figure 1).The altitude in the study area ranges from 399 m to 3061 m above mean sea level.The land use is mainly extensive grazing of sparsely vegetated areas.There are also some irrigated and rain-fed arable areas, woodland, open water and urban areas [26]. The climate of the study area can be described as Mediterranean with hot and dry weather in summer (June to September) and cool and relatively moist conditions in winter (October to May) [27].The transitions from winter to summer and vice versa are marked and often rapid [8].The major moisture sources are the Mediterranean, Black and Caspian Seas [8].Precipitation is varied and mostly falls as rain in winter and autumn (Figure 2) with mean annual precipitation ranging from 500 mm to ca. 1000 mm (Table 1).Winter snowfall is common at elevations above 1000 m above mean sea level [28].Higher temperatures are usually recorded at lower altitudes (Dukan and Sulaimani) compared with the high mountains (Penjween and Chwarta), see Table 1. 
In addition, the study area experiences extreme seasonal variations in relative humidity (RH) due to the large variation in climate and altitude.The annual average RH in the study area is about 48%.It is high in winter and exceeds 70% but is only 22% on average in August.RH tends to be higher in the high mountains (Penjween and Chwarta) compared with at lower altitudes (Dukan and Sulaimani).The mean wind speed over the study area during 2010-2014 was 1.8 m s −1 .Southerly winds from the lowlands bring increased temperatures and northerly winds tend to bring cooler air [8]. Data Acquisition Meteorological data were obtained for the four stations from Sulaimani Meteorological Office.These data all have daily temporal resolution from 2010 to 2014 and include maximum, minimum and average air temperature ( • C), relative humidity (%), sunshine hours, wind speed (m s −1 ) and rainfall (mm day −1 ). Remote Sensing Data Daily time series of near-surface air temperature ( • C), RH (%) and cloud cover fraction were obtained from Aqua AIRS/AMSU Level 3 Daily Standard Physical Retrieval (AIRS + AMSU) 1 degree × 1 degree V006 (short name AIRX3STD) for 2010-2014 at 1 • spatial resolution.Data gaps were filled using cubic spline interpolation [29].Although this can be problematic if temporal gaps in the data are wide, in our study, AIRS data were available for 99% of the period of interest (2010-2014) and the maximum data gap was just four days.Cubic splines are considered to be a reasonable interpolation method at this resolution and have often been reported to be better than simple linear interpolation for oscillating data, provided the temporal gaps are not too wide [30]. Cloud cover fraction data from AIRS were used to estimate sunshine duration using: where DS is sunshine duration (hours), C f is the cloud cover fraction (established from the AIRS/Aqua L3 Daily Standard Physical Retrieval (AIRS + AMSU) 1 degree × 1 degree V006 cloud-cover fraction data (AIRX3STD)) and H is the maximum possible sunshine hours, calculated as [9]: where ω s is the sunset hour angle which is calculated by: in which ϕ is the latitude and δ is the solar declination i.e.,: in which J is the Julian day of the year (1 to 365, or 366 in a leap year). Reanalysis Data Combination methods such as the Penman-Monteith equation usually require wind speed measurements at 2 m height above ground [9].Hourly estimates of wind speed at 10 m height were obtained from MERRA (Modern-Era Retrospective analysis for Research and Applications) [23] at 0.5 • × 0.6 • spatial resolution.These data were aggregated to compute daily values and then adjusted to the standard 2 m height using [9]; where U 2 is wind speed at 2 m (m s −1 ) and U z is wind speed at z m above ground (m s −1 ).MERRA) is a NASA project which supplies consistent hydro-meteorological analyses of historical remote sensing data [31].It assimilates atmospheric observations into a numerical model called the Goddard Earth Observation System Data Assimilation System Version 5 (GEOS-5).Data products (including monthly surface pressure, relative humidity and air temperature and hourly wind speed) are offered at a broad range of spatiotemporal scales, from 1979 to the present [31].The output of interest for this study is wind speed.It should be noted that the spatial resolution of MERRA and AIRS is different.Therefore, bilinear interpolation was applied to resample the MERRA data to a 1 • spatial grid using the four orthogonal MERRA cells surrounding a given pixel. 
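As a worked illustration of Equations (1)–(5), the sketch below derives the maximum possible sunshine hours from latitude and day of year, converts an AIRS cloud-cover fraction to sunshine duration, and adjusts a 10 m MERRA wind speed to the standard 2 m height. Because the body of Equation (1) is not reproduced above, the mapping DS = (1 − Cf)·H is an assumption consistent with the variable definitions given; the astronomical terms and the logarithmic wind-profile adjustment follow the standard FAO-56 forms. The coordinates and input values are illustrative only.

```python
import numpy as np

def max_sunshine_hours(lat_deg, doy):
    """Maximum possible sunshine (daylight) hours H for a latitude and Julian
    day, using the standard FAO-56 astronomical relationships (Eqs. 2-4)."""
    phi = np.radians(lat_deg)
    delta = 0.409 * np.sin(2.0 * np.pi * doy / 365.0 - 1.39)  # solar declination (rad)
    omega_s = np.arccos(-np.tan(phi) * np.tan(delta))         # sunset hour angle (rad)
    return 24.0 / np.pi * omega_s

def sunshine_duration(cloud_fraction, lat_deg, doy):
    """Estimate sunshine duration DS (hours) from a cloud-cover fraction.
    Assumes DS = (1 - Cf) * H, i.e. sunshine scales with the clear-sky fraction."""
    return (1.0 - cloud_fraction) * max_sunshine_hours(lat_deg, doy)

def wind_at_2m(u_z, z=10.0):
    """Adjust wind speed at height z (m) to 2 m using the FAO-56 logarithmic
    wind-profile relationship (Eq. 5)."""
    return u_z * 4.87 / np.log(67.8 * z - 5.42)

# Illustrative values for a site near 36 N on 1 July (DOY 182)
print(sunshine_duration(cloud_fraction=0.2, lat_deg=36.3, doy=182))  # ~11.5 h
print(wind_at_2m(3.0, z=10.0))                                       # ~2.2 m/s
```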
Reference Evapotranspiration (ET o ) Estimation Methods ET is commonly estimated indirectly from meteorological data [9,32,33] using a variety of different methods [34][35][36].These methods can be grouped into three categories: (i) those based on energy balance and mass transfer concepts, often referred to as the combination equation or Penman-Monteith (PM) method [9]; (ii) those based on empirical relationships between ET o and temperature-(e.g., Thornthwaite [37] and Hargreaves and Samani (HS) [38]); and (iii) and radiation-based approach which utilise measured or estimated solar radiation flux density at the surface (e.g., Jensen and Haise [39]; McGuinness and Bordne [40]; and Priestley and Taylor [41]).The PM method is widely considered to be the most reliable indirect method [9,42,43].However, its main shortcoming is that it requires a complete weather data set (net radiation flux density, temperature, relative humidity and wind speed) which is not always available for many areas [13,32].The other methods have fewer meteorological data requirements [32] and are, hence, widely applied-particularly those based solely on temperature.The performance of temperature-and radiation-based methods, relative to the PM method, is spatially and temporally variable [44,45].The HS method is generally agreed to be the best temperature-based approach [46,47] but has been reported to perform poorly in some semi-arid contexts [45] where radiation-based methods may be more suitable [43].Several alternative approaches to the PM method were, therefore considered here. Four methods were considered: (1) the Penman-Monteith (PM) equation [9] which was used as a benchmark for comparison with the other methods; (2) the Hargreaves and Samani equation (HS) [38]; (3) the radiation-based method of Jensen and Haise (JH) [39]; and (4) the radiation-based method of McGuinness and Bordne (MB) [40].The JH and MB methods have been successfully applied in humid and arid environments [32,48] but the main drawback of these equations is underestimation in humid areas [35] and overestimation in semi-arid areas [32]. All methods require temperature data, the PM also requires RH, wind speed and sunshine hours data.JH and MB also require sunshine data.The equations are as follows. HS : where ET o is the reference evapotranspiration rate (mm day −1 ), U 2 is mean daily wind speed at 2 m height (m s −1 ) (Equation ( 5)), ∆ is the slope of the vapour pressure versus temperature curve (kPa • C −1 ) (Equation ( 10)), R n is the net radiation flux density at the vegetation surface (MJ m −2 day −1 ) (Equation ( 11)), G is the soil heat flux density (MJ m −2 day −1 )-assumed to be zero because it is very small at the daily time scale [9], T a is mean daily air temperature at 2 m height ( • C), T min is minimum air temperature ( • C), T max is maximum air temperature ( • C), R s is the solar radiation flux density at the surface (MJ m −2 day −1 ) (Equation ( 13)), R a is the extraterrestrial radiation (i.e., the theoretical radiation flux density at the top of the atmosphere) [MJ m −2 day −1 ] (Equation ( 14)), e s is the saturation vapour pressure (kPa) (Equation ( 18)), e a is the actual vapour pressure (kPa) (Equation ( 19)), e s − e a is the saturation vapour pressure deficit (kPa), λ is the latent heat of vaporization (i.e., 2.45 (MJ kg −1 )) and γ is the psychrometric constant (kPa • C −1 ) (Equation ( 22)). 
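The bodies of Equations (6)–(9) are not reproduced above, so the sketch below uses the published textbook forms of the two methods whose coefficients are unambiguous: the FAO-56 Penman-Monteith equation and the Hargreaves-Samani equation (the JH and MB formulations are omitted because their coefficients cannot be recovered from the text alone). Variable names follow the nomenclature listed above; the input values are illustrative, not station data.

```python
import numpy as np

def et0_hargreaves_samani(t_mean, t_min, t_max, ra):
    """Hargreaves-Samani (1985) reference ET (mm/day).
    ra is extraterrestrial radiation in MJ m-2 day-1; the factor 0.408
    converts it to equivalent evaporation (mm/day)."""
    return 0.0023 * 0.408 * ra * (t_mean + 17.8) * np.sqrt(t_max - t_min)

def et0_penman_monteith(t_mean, u2, rn, es, ea, gamma, delta, g=0.0):
    """FAO-56 Penman-Monteith reference ET (mm/day) for the grass reference
    surface, following Allen et al. (1998):
      rn, g  : net radiation and soil heat flux (MJ m-2 day-1)
      es, ea : saturation and actual vapour pressure (kPa)
      delta  : slope of the saturation vapour pressure curve (kPa/degC)
      gamma  : psychrometric constant (kPa/degC)
      u2     : wind speed at 2 m (m/s)"""
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# Illustrative mid-summer values for a semi-arid site (not station data)
ra = 41.0  # MJ m-2 day-1
print(et0_hargreaves_samani(t_mean=32.0, t_min=22.0, t_max=42.0, ra=ra))
print(et0_penman_monteith(t_mean=32.0, u2=2.0, rn=15.0,
                          es=4.8, ea=1.4, gamma=0.066, delta=0.24))
```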
in which DS is the actual duration of sunshine (hours), H is the maximum possible duration of sunshine (hours) and a s + b s are regression constants set to 0.25 and 0.5, respectively, as recommend by Allen et al. [9]. in which d r is the inverse of the relative distance between the Earth and the Sun ( Equation ( 15)), ω s is defined by Equation (3), ϕ is the latitude, δ is given in Equation ( 4) and G sc is the solar constant = 0.0820 MJ m −1 min −1 .expresses the correction for atmospheric humidity, and the cloudiness is expressed by 1.35 R s R so − 0.35 [9]; R so is the clear-sky solar radiation flux density (MJ m −2 day −1 ) which can be used when calibrated values for a s + b s are not available [9] i.e., R so = 0.75 + 2 * 10 −5 * z R a (17) in which z is the station elevation above sea level (m).The vapour pressure terms are defined as follows: where RH min and RH max are minimum and maximum relative humidity (%) and e 0 min and e 0 max are the saturation vapor pressure at the minimum and maximum air temperatures, respectively (Equations ( 20) and ( 21)): The psychrometric constant is defined as: in which C p is the specific heat capacity at constant pressure; 1.013 × 10 −3 (MJ kg −1 K −1 ), ε is the ratio molecular weight of water vapour:dry air (i.e., 0.622); and P is the atmospheric pressure (kPa).Three statistical metrics were used to evaluate model performance in validation: the Pearson Product Moment Correlation Coefficient (r; Equation ( 23)), the root-mean-square error (RMSE; Equation ( 24)) and the bias (Equation ( 25)). where X G i and X RS i are the ground and RS values, respectively; X G is the average ground value; X RS is the average of RS value; and N is the number of values recorded in the sample. Comparison between Meteorological Variables Estimated from Remote Sensing with Station Data Satellite-derived and ground-measured values of mean daily air temperature (T a ), RH, sunshine hours (DS) and U 2 are compared in Figure 3 for the four stations in the study area.A statistical summary of this comparison is shown in Table 2.The R 2 values between the ground-measured and AIRS-derived values of T a were high (R 2 > 0.88) and highly significant for all stations.The RMSE for T a ranged from 3.2 to 5.1 • C with a tendency of RS to underestimate the ground observations of T a .For RH, the relationship between satellite-derived and ground-based measurements was also significant for all four stations (R 2 > 0.3; p < 0.05).For RH the RMSE ranged from 12.5% to 24% with negative bias for all stations.However, there was a weak but significant relationship for DS (0.15 < R 2 < 0.2; p < 0.05) and the relationship between measured U 2 and MERRA-derived wind speed is even weaker for all stations (Table 2).Remotely sensed DS and U 2 both had positive bias in all cases, except for wind speed at Dukan (Table 2). 
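Equations (23)–(25) are referenced but not reproduced, so the following sketch implements the conventional definitions of the Pearson correlation coefficient and RMSE, and expresses bias as the mean difference between RS and ground values relative to the ground mean (in percent); the exact form of the paper's bias equation is an assumption. The example series are illustrative.

```python
import numpy as np

def validation_metrics(x_ground, x_rs):
    """Pearson r, RMSE and percentage bias between ground-based (x_ground)
    and remote-sensing-derived (x_rs) daily series."""
    x_g = np.asarray(x_ground, dtype=float)
    x_r = np.asarray(x_rs, dtype=float)
    r = np.corrcoef(x_g, x_r)[0, 1]                        # Pearson correlation
    rmse = np.sqrt(np.mean((x_r - x_g) ** 2))              # root-mean-square error
    bias_pct = 100.0 * np.mean(x_r - x_g) / np.mean(x_g)   # assumed % bias definition
    return r, rmse, bias_pct

# Illustrative daily ETo series (mm/day), not actual station data
ground = np.array([2.1, 3.4, 5.0, 6.2, 7.8, 4.3])
remote = np.array([2.4, 3.1, 4.6, 6.7, 7.2, 4.0])
print(validation_metrics(ground, remote))
```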
Since ET is widely known to be driven by turbulent eddies, and is thus sensitive to wind speed, we conducted an extra analysis to evaluate the model sensitivity to the MERRA-wind speed data.We compared ET o estimates derived using the PM equation for all four stations using U 2 derived from MERRA with PM estimates assuming a constant U 2 value (the mean measured daily value for each station during 2010-2014).The ET o predictions produced with the constant wind velocity were actually better overall (closer match with PM estimates obtained using ground-measured data in terms of regression equation slope, R 2 and RMSE: see Supplementary Materials, Figures S1 and S2, Tables S1 and S3), although (as expected) high ET values (>ca 8 mm day −1 ) which often arise on windy days are not well predicted.This implies that that the PM equation can still be used with RS data provided a reasonable estimate can be made for the mean wind speed for the locations of interest. Comparison between Daily ET o -RS and ET o -G The calculated daily ET o-G and ET o-RS estimates are shown in Figure 4.In all cases, the black line shows ET o-G .For all stations, there is seasonal agreement between ET o-G and ET o-RS for all evapotranspiration methods.Estimated ET o-G is plotted against ET o-RS in Figure 5, along with the best-fit linear regression and the 1:1 line.Most of the points are scattered around the 1:1 line for the JH and MB methods which always have high R 2 and regression gradients close to unity.However, there is considerable variability in the slope of the ground-derived versus RS-derived regression lines (0.7 to 0.89) and in R 2 (0.64 to 0.9) when using the HS and PM methods-particularly for the Dukan and Sulaimani stations.These stations have relatively low elevations compared with the other two stations, with higher average temperatures (Table 1).Average annual ET 0 values estimated using the ground and RS data for all methods from 2010 to 2014 are presented in Figure 6.The MB method yielded highest average annual values for both ET o-G and ET o-RS (1670 mm year −1 and 1677 mm year −1 , respectively), while the HS method yielded the lowest annual value of ET o-RS (1198 mm year −1 ) and the PM method yielded lowest annual values of ET o-G (1337 mm year −1 ).The average annual values of ET o-RS were relatively similar to those of ET o-G , which reflects low bias and hence small cumulative errors.Goodness-of-fit statistics are presented in Table 3.The MB method consistently performed better than other methods (in terms of the similarity of the ET o-G and ET o-RS data) for all stations and for all goodness-of-fit criteria, except for the bias at Sulaimani.The greatest differences were observed when the PM and HS methods are compared.The HS method consistently underestimated ground-based ET estimates when RS data were used as inputs (i.e., bias was always negative).Pearson correlation coefficients (r) between ET o -G and ET o-RS were generally high and always highly significant (p < 0.05) for all stations. 
Cross-Comparison of the ET o Methods In Figure 7, different ET o-RS values calculated using the HS, JH, and MB methods are plotted against benchmark data (i.e., ET o-G PM) for all stations. This comparison is based on the assumption that the PM method is most reliable [49], and that the ground-based measurements at each station best represent the atmospheric drivers for evapotranspiration (i.e., the ground-based data will best predict ET o using the PM method). There was considerable variation in model performance against the benchmark data for different stations. The JH and MB methods had regression slopes in the range between 0.95 and 1.4, with most slopes >1, indicating a slight tendency of these methods to overestimate the benchmark values. However, the slopes for the HS method ranged between 0.63 and 0.82, suggesting a tendency for the HS equation to under-predict ET when driven by RS data, particularly at the Dukan station. Although the MB method yielded the best coefficient of determination for each station (0.74 < R 2 < 0.86), this was not always the best method in terms of proximity to the 1:1 line. At the two stations with higher elevation (Penjween and Chwarta) the HS method was the best predictor. Table 4 summarises the results statistically. This confirms that the HS method tends to underestimate benchmark ET (−9 < bias% < −0.6) and that the other methods tend to overestimate it (bias ranged between 8.6 and 40%). At all stations the HS method had the lowest RMSE (1-1.3 mm day −1 ). Despite the fact that the JH and MB methods had correlation coefficients which were often better than for the HS method, they had much higher RMSE values (1.8-2.1 mm day −1 ).
Discussion In this paper, reference evapotranspiration (ET o ) was estimated based on four methods using ground-observed and RS-derived meteorological data (i.e., AIRS and reanalysis wind speed data from MERRA) at four stations in northeastern Iraq.For mean daily air temperature, AIRS and ground-based measurements were very similar for all sampled stations.The positive bias for T a increased with increasing station altitude.Similarly, for RH the relationship between AIRS and ground-based measurements was strong, albeit with a negative bias, for all stations.Despite the better spatial resolution of the MERRA data compared to AIRS data, we decided not to use the MERRA products because we wanted, explicitly, to focus on the value of the RS data and avoid reanalysis products as much as possible.Reanalysis data (which often integrate data from different sources) can be sensitive to observing system changes and there is often some uncertainty due to variations in both the models used and in the analysis techniques employed [31].Unfortunately, we were not able to avoid using reanalysis products completely and MERRA wind speed data (U 2 ) was required because to date no RS wind speed data are available.The relationships for DS and U 2 were weak for all stations.The effect of differences between RS and ground-based meteorological variables on ET o rate will depend on the model sensitivity to the variable in question (i.e., if the model is sensitive to an input variable then predictions of ET will differ significantly if the RS estimate for that variable differs from the ground-based measurement; conversely, if the model is insensitive to the variable in question then ET will be relatively unaffected by errors in the RS estimates).Differences could be due to the different spatial reference frames employed, with meteorological stations recording point measurements and RS platforms observing spatially aggregated variables over large grid cells or pixels.As well as altering ET using empirical methods, differences in T a estimates will also affect other temperature-dependent values such as vapour pressure deficit and ∆. There was generally reasonable agreement between ET o-RS and ET o-G for all the ET o methods evaluated, based on high R 2 values and regression line slopes close to unity compared with the predictions driven by ground-based measurements.However, there was some variation in model performance for individual stations.Regressions between the bias in input variables (RS versus ground) and the bias in ET o estimates (calculated using RS versus the benchmark) for all methods are shown in Table S2.Strong and significant relationships were observed between the bias in sunshine duration and the bias in ET o in the case of the JH and MB methods (R 2 > 0.95, p < 0.05) for all stations.This is not unexpected, given the dependence of these methods on solar radiation (and indirectly DS) suggesting high sensitivity.Other relationships were insignificant -even for the bias in ET from the HS method versus the bias in T a , possibly because the HS method also depends on the theoretical radiation flux density at the top of the atmosphere.The bias in ET o-RS for the PM equation was most sensitive to DS and wind speed, reflecting the high importance of both radiative and aerodynamic terms in this method (by definition). 
The PM model tended to predict lower ET o than when using ground-based data for the Dukan station, but higher ET o for the Sulaimani, Penjween and Charta stations.This is mainly due to the sensitivity of the PM method to meteorological input data (i.e., radiation, air temperature, humidity and wind speed [9]).Thus, the effects of disparities between ground-level measurements and RS estimates can be significant on ET o calculations especially in windy, warm and or dry conditions [9].For instance, T a derived from RS overestimated ground-based measurements for the Penjween and Charta stations in the mountains (1284 and 1128 m ASL, respectively) but underestimated T a at Dukan, which is located at lower altitude (690 m ASL).These results agree with the results reported by Ferguson and Wood [50] which showed that the positive bias of near-surface air temperature from AIRS increased with increasing elevation.Similar to T a , DS and U 2 also contributed significantly to the deviation of RS and ground-driven ET using the PM method due to high bias and RMSE for the RS-estimates of these variables compared to ground-based measurements. In the cross-comparison of the ET o methods (i.e., when the RS-driven models were compared with the benchmark data set), ET o-RS (HS) slightly underestimated ET o-G (PM: Table 4).This could be due to: (i) The absence of humidity terms in the HS method [32,51] in contrast to the PM method in which ET o is positively correlated with vapour pressure deficit.This is especially important in semi-arid environments were humidity deficits can be high (i.e., when low relative humidity results in a steep gradient in vapour pressure between the surface and the bulk atmosphere).(ii) The fact that temperature-based methods (HS) tend to underestimate ET o at high wind speeds of >3 m s −1 [49].In the original PM method, wind speed is included via the aerodynamic resistance term (which is combined with the surface resistance, specific heat capacity and air density in the FAO version shown in Equation ( 6) via the constants 900 and 0.34).(iii) The fact that atmospheric transmissivity (the ratio of the global solar radiation at ground level to that received at the top of the atmosphere, [52,53]) in semi-arid area tends to differ from other areas due to lower atmospheric moisture content [52].A number of other studies [54][55][56][57][58][59] have reported that the HS method can overestimate ET o in humid environments and under estimate it in semi-arid regions [47].Although a slight negative bias was also observed here, the HS model yielded lower RMSE values overall compared with the other methods suggesting that it is a reasonable method for estimating ET o in semi-arid regions similar to our study area (even when driven by RS data).This result is in agreement with Lopez et al. [7], Tabari [47] and Tabari and Talaee [59] who concluded that the HS method can be successfully used in semi-arid areas. The positive bias obtained from comparisons between ET o-RS calculated using the JH and MB methods and ET o-G PM is in accordance with both Jensen et al. [36] and Tabari et al. 
[32] who found that these models tend to overestimate ET o compared with the PM method, by as much as 30% and 60%, respectively.In our study the JH and MB methods overestimated the benchmark average annual ET o at all stations (Figure 6) by between 9% and 40%.Instead, the average annual ET o predicted by the HS method was similar to that estimated by the PM method for all stations (e.g., bias ranged between −0.6% and −9%). This study did not take into account the effects of vegetation factors on the ET rate and, instead, focussed on climatic factors.ET o expresses the evaporation power of the atmosphere at a specific location and time of the year and does not consider land cover characteristics and soil factors [9].If required, crop-specific ETp can be calculated from ET o using crop-specific resistance terms in the PM equation or, more generally, using crop coefficients [9] which account for differences in vegetation canopy characteristics such as leaf area index, canopy height and stomatal resistance.ETa can be calculated from ETp (or ET o ) if soil moisture content can be estimated, often via a linear reduction in ETa:ETp between a threshold moisture content and the permanent wilting point [13]. Conclusions Obtaining accurate estimates of ET o is essential for well-informed water management.However, in many parts of the world, the meteorological data required to estimate it are not available or are very scarce.Satellite remote sensing offers an alternative data source to ground stations, provided it can be shown to provide robust and reliable estimates of water fluxes.In this study, we assessed the validity of using daily RS-derived meteorological variables for estimating daily ET o compared with ET o from the same models driven by ground-based meteorological variables, for four stations in northeastern Iraq.The results were also compared with a benchmark model (PM) driven by ground-based meteorological observations.The good agreement (i.e., low RMSE and bias and high r) between AIRS and ground-based data, particularly near-surface air temperature, and the generally good performance of the ET models compared to the benchmark data set, suggest that AIRS data can be used as alternatives to conventional meteorological data to estimate daily ET o with reasonable accuracy.Considering the low density of ground-based stations and the paucity of climatological records in regions such as Iraq, this is encouraging for future hydrological studies and for better-informed water management.The application of the PM method is limited in many semi-arid regions of the world by lack of required weather observations.In such circumstances, simpler models are often used to estimate ET o .In this case, the RS-driven HS method produced better ET o estimates (compared to the PM equation as a benchmark) than the other models.It is recommended that the HS model is used where complete weather observation data are lacking.This method can be successfully employed using RS data to yield accurate and useful daily ET o estimates.This, in turn, is valuable for better policy making and planning in order to ensure efficient use of water resources, to improve irrigation management and for hydrological modelling.Some reanalysis data products already exist which attempt to estimate ET o using a combination of RS and ground-based data and numerical models (e.g., MERRA-2).Future work could usefully compare ET o estimates generated here with those predicted by MERRA-2. 
Supplementary Materials: The following are available online at www.mdpi.com/2072-4292/9/8/779/s1. Figure S1: daily ET o estimates derived from ground-based measurements (ET o-G ) and remote sensing data (ET o-RS ) using the PM method driven by MERRA wind speed (green line) and constant wind speed (blue line), 2010-2014, for the Sulaimani, Penjween, Chwarta and Dukan stations (black line: ET o-G ). Figure S2: scatterplots of daily ET o-G (PM) versus ET o-RS (PM) driven by MERRA wind speed and constant wind speed at the four stations (1:1 line in black; best-fit regression with 95% confidence interval in grey; equations and R 2 shown). Table S1: statistical summary of comparisons between ET o-G and ET o-RS (PM) with MERRA wind speed and constant wind speed at the four stations, 2010-2014 (* significant at p < 0.05). Table S2: bias (%) between daily ground-measured and remotely sensed T a , RH%, DS and U 2 , and bias (%) of ET o-RS for the four methods against the benchmark ET o-G (PM) at the four stations, 2010-2014 (* significant at p < 0.05). Table S3: summary of annual ET o-G and ET o-RS (PM) with MERRA wind speed and constant wind speed at the four stations, 2010-2014. 
[Figure and table captions] 
Figure 1. (a) Elevation in the study area derived from the Shuttle Radar Topography Mission (SRTM) digital elevation model (https://earthexplorer.usgs.gov/). (b) Regional location of the study area. 
Figure 3. Scatterplots of daily T a , RH %, DS and U 2 measured at ground-based stations (x-axes) against remote-sensing-derived values (y-axes) for the four stations; the solid black line indicates the 1:1 relationship and the grey line the best-fit regression with 95% confidence interval. 
Figure 4. Daily ET o estimates derived from ground-based measurements (ET o-G ) and remote sensing data (ET o-RS ) using the four methods, 2010-2014, for the Sulaimani, Penjween, Chwarta and Dukan stations (black line: ET o-G ). 
Figure 5. Scatterplots of estimated daily reference evapotranspiration from ground-based measurements (ET o-G ) versus remote sensing data (ET o-RS ) for the four methods at the four stations (1:1 line in black; best-fit regression with 95% confidence interval in grey; equations and R 2 shown). 
Figure 6. Average annual ET o estimates derived from ground-based measurements (ET o-G ) and remote sensing data (ET o-RS ) using the four methods, 2010-2014, for the four stations. 
Figure 7. Scatterplots of daily ET o-RS for the HS, JH and MB methods against the benchmark ET o-G (PM) for the four stations (1:1 line in black; best-fit regression with 95% confidence interval in grey; equations and R 2 shown). 
Table 1. Elevation, mean daily temperature, relative humidity and average annual rainfall for the four stations, 2010-2014 (Sulaimani Meteorological Office, 2015). 
Mean monthly rainfall (spatially averaged over Thiessen polygons), temperature and relative humidity (RH) in the study area, 2010-2014 (Sulaimani Meteorological Office, 2015). 
Table 2. Statistical summary of the relationship between daily ground-measured and remotely sensed values of T a , RH %, DS and U 2 for the four stations, 2010-2014. 
Table 3. Statistical summary of comparisons between ET o-G and ET o-RS for the four methods at the four stations (Sulaimani, Penjween, Chwarta, and Dukan), 2010-2014. 
Table 4. Bias, RMSE and Pearson Product Moment Correlation coefficient (r) for ET o-RS values against the benchmark data set ET o-G (PM) for the different stations, 2010-2014.
2017-08-19T06:35:18.145Z
2017-07-29T00:00:00.000
{ "year": 2017, "sha1": "bb196785230f907026ab451381f28f401b1d691b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/9/8/779/pdf?version=1501489881", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bb196785230f907026ab451381f28f401b1d691b", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geology", "Computer Science" ] }
263523415
pes2o/s2orc
v3-fos-license
Comparative Study of Lamina Cribrosa Thickness Between Primary Angle-Closure and Primary Open-Angle Glaucoma Purpose To compare lamina cribrosa thickness (LCT) of primary angle-closure glaucoma (PACG) and primary open-angle glaucoma (POAG) using the enhanced depth-imaging mode of the Heidelberg Spectralis spectral-domain optical coherence tomography (EDI-OCT). Patients and Methods A comparative cross-sectional study was conducted. We enrolled 34 patients with PACG, 38 with POAG, and 62 controls, testing only one eye of each participant. Lamina cribrosa thickness was determined at the center of the optic nerve head using EDI-OCT. Nine points of LCT were measured, and LCT averages were analyzed. Results Mean age, number of glaucoma medications, current intraocular pressure (IOP), cup-to-disc ratio, and visual field indices were not significantly different between PACG and POAG eyes. The maximum IOP (SD) was higher in PACG than in POAG, at 32.5 (10.46) vs 25.05 (6.42) mmHg (p = 0.001), and LCTs were significantly different among the PACG, POAG and control groups. Mean (SD) LCTs were 226.99 (31.08), 257.17 (19.46), and 290.75 (28.02) μm, respectively (p < 0.001). Lamina cribrosa thickness was correlated with mean deviation of the visual field (p = 0.001; correlation coefficient, rs = 0.347), while it was inversely correlated with maximum IOP (p < 0.001; correlation coefficient, rs = −0.592). Linear regression analysis revealed that LCT was inversely related to age (p = 0.008), female sex (p = 0.018), and maximum IOP (p = 0.002). LCT was marginally related to visual field MD (p = 0.053). Conclusion Glaucomatous eyes had thinner LCT than controls, and maximum IOP was inversely correlated with LCT. PACG eyes had higher maximum IOP and thinner LCT than POAG ones. In the living eye, EDI-OCT highlights the pressure-dependent mechanism of lamina cribrosa deformation in glaucoma, in which higher IOP-loaded stress leads to greater lamina cribrosa strain. Introduction Lamina cribrosa assessment has recently become a field of interest, as it can be visualized in vivo with spectral-domain optical coherence tomography (SD-OCT). 1,2 Spaide et al 3 developed an Enhanced Depth Imaging (EDI) technique using SD-OCT (EDI-OCT) to visualize deeper retinal structures, such as the choroid and the lamina cribrosa. EDI-OCT increases the visibility of the anterior and posterior lamina cribrosa surfaces in non-human primates compared to the conventional SD-OCT technique, 4 and can therefore be used to measure lamina cribrosa thickness (LCT). Primary angle-closure glaucoma (PACG) is one of the leading causes of blindness worldwide, 5 and its prevalence is highest in Asia. 6 A relatively small eye with a crowded anterior chamber leads to irido-trabecular apposition and IOP elevation. Optic nerve head and retinal nerve fiber layer (RNFL) changes in PACG have been studied using imaging technologies such as the Heidelberg retina tomograph (HRT), 7,8 scanning laser polarimetry 9 and OCT. 10 Studies of the lamina cribrosa in PACG are sparse. We conducted a comparative study of LCT in primary open-angle glaucoma (POAG) and primary angle-closure glaucoma (PACG). This research explored thickness differences in the lamina cribrosa and factors related to LCT in these two types of glaucoma.
Methods A comparative cross-sectional study design was employed to investigate LCT in PACG and POAG patients and in healthy controls. The study was performed with the informed consent of the participants and followed all of the guidelines for experimental investigation using human subjects required by the Ethics Committee (EC) of Rajavithi Hospital. All investigations were carried out in accordance with the Declaration of Helsinki. The study protocol was approved by the EC in December 2012. All participants were recruited from the Department of Ophthalmology, Rajavithi Hospital, between January and October 2013. Participants were ≥50 years of age with a best-corrected visual acuity of ≥20/63. POAG was defined as the presence of a glaucomatous optic disc (diffuse or focal thinning of the neuro-retinal rim), an abnormal visual field consistent with glaucoma, an IOP >21 mmHg, and an open angle on gonioscopy. PACG was defined as an eye that had at least 180 degrees of iridotrabecular contact or peripheral anterior synechiae (PAS) with elevated IOP, together with a glaucomatous optic disc and/or visual field defect. Acute PACG (APACG) was defined as a PACG eye with eye pain, halo, nausea/vomiting, corneal edema, and IOP > 21 mmHg. Patients with a previous history of APACG were eligible only when the optical media were clear enough for OCT scanning. Maximum IOP was defined as the highest IOP recorded in our hospital database or in the referral document. The control group consisted of those having an IOP of between 10 and 21 mmHg with no history of increased IOP, a normal anterior chamber angle, an absence of glaucomatous disc appearance, and no visible retinal nerve fiber layer (RNFL) defect. Eyes after cataract surgery or peripheral iridotomy were eligible. Enhanced Depth-Imaging Spectral-Domain Optical Coherence Tomography of the Optic Nerve Head and Lamina Cribrosa Thickness Measurement OCT B-scans of the ONH were obtained using the EDI mode of the Heidelberg Spectralis OCT, with a 20-degree retinal window. The center of the ONH was identified using a horizontal cross-sectional B-scan. The EDI mode enhanced the visualization of the lamina cribrosa beneath the optic disc cup. We defined LCT as the distance between the anterior and posterior borders of the highly reflective region. The image was enlarged to 1:1 pixels with the Spectralis OCT program to line up the lamina cribrosa borders. Each anterior and posterior border, which was not in a straight line, was marked with 9 points. LCT was measured by one of us (A.K.), using the horizontal cross-sectional B-scan, at 9 equally spaced points at the central plane of the ONH (Figure 1). Intra-observer and inter-observer reproducibility of LCT measurements were tested in 20 cases, revealing kappa values of 0.77 and 0.74, respectively. An analysis of the average LCT was performed, and Bruch's membrane openings (BMO) were marked and measured on the horizontal scan of the lamina cribrosa.
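To make the measurement workflow above concrete, the short Python sketch below shows how nine manually marked readings per eye could be reduced to the per-eye average LCT analysed in the Results; the numeric values and variable names are hypothetical illustrations, not the study's data, and the study itself used manual marking within the Spectralis software rather than any script.

# Minimal sketch (hypothetical data): averaging the nine manually marked
# lamina cribrosa thickness (LCT) points per eye described above.
from statistics import mean, stdev

# Nine equally spaced LCT readings (micrometres) along the central
# horizontal B-scan, one list per eye; all values are placeholders.
lct_points_by_eye = {
    "control_eye_01": [295, 288, 301, 290, 287, 293, 289, 296, 292],
    "poag_eye_01":    [260, 255, 258, 251, 262, 257, 254, 259, 256],
    "pacg_eye_01":    [230, 224, 228, 221, 233, 226, 229, 225, 227],
}

# Per-eye average LCT, the quantity compared among groups in the Results.
for eye, points in lct_points_by_eye.items():
    print(f"{eye}: mean LCT = {mean(points):.1f} um "
          f"(SD of the nine points = {stdev(points):.1f} um)")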
Statistical Analysis Statistical analysis was performed with SPSS software (version 16.0.0; SPSS Inc., Chicago, IL). Normality of distribution was assessed using the Shapiro-Wilk test. Homogeneity of variance for normally distributed data was tested with Levene's test, and independent Student t-tests and one-way ANOVA with Bonferroni correction were used to compare normally distributed data between and among the groups. The Kruskal-Wallis and Mann-Whitney U tests were applied to compare non-normally distributed data between and among the groups, and the level of statistical significance was set at p < 0.05. The relationships between LCT and characteristic factors, including IOP and visual field indices, were examined using Spearman correlations and linear regression analysis. Subject Baseline Characteristics Spectralis OCT scans with the EDI technique were obtained for 64 controls and 77 patients with glaucoma. We excluded seven patients from this study because of poor EDI-OCT scan quality, in which the lamina borders could not be visualized clearly. A total of 62 controls and 72 glaucoma patients (38 POAG and 34 PACG) were analyzed. Baseline characteristics are summarized in Table 1. Mean age was not different among the three groups (p = 0.859), nor between the POAG and PACG groups (p = 0.755). The percentages of females in the control, POAG, and PACG groups were 72.58%, 50% and 67.65%, respectively. The current IOP, maximum IOP, vertical cup-to-disc ratio (C/D), BMO, axial length, CCT, mean deviation (MD), and pattern standard deviation (PSD) of the visual field were significantly different between glaucoma and controls (p ≤ 0.016), while the number of medications, current IOP, C/D, MD, and PSD were not different between the POAG and PACG groups (p = 0.525, p = 0.141, p = 0.510, p = 0.854, p = 0.378, respectively, Mann-Whitney U test). The maximum IOP differed significantly between POAG and PACG eyes, at 25.05 ± 6.42 mmHg vs 32.50 ± 10.46 mmHg (p = 0.001). POAG eyes had a significantly longer axial length than PACG eyes (23.73 ± 1.02 vs 22.89 ± 0.81 mm, p < 0.001), and also a thinner CCT (524.32 ± 35.31 vs 540.33 ± 28.72 μm, p = 0.015). Bruch's membrane opening in controls was smaller than in glaucoma but was not different between the glaucoma groups. Comparison of Lamina Cribrosa Thickness Among Groups Mean LCT in controls, POAG, and PACG was 290 ± 28.02 μm, 257.17 ± 19.46 μm and 226.99 ± 31.08 μm, respectively. There was a significant difference among the three groups (p < 0.001), as shown in Table 2. Mean LCT of POAG was significantly thicker than that of PACG (p < 0.001). Relationship Between Lamina Cribrosa Thickness and Intraocular Pressure The relationship between the LCT and the maximum IOP was evaluated using Spearman correlation analysis. The LCT and the maximum IOP were significantly different between the POAG and PACG eyes (Tables 1 and 2), and mean LCT revealed a negative correlation with the maximum IOP (p < 0.001; correlation coefficient, r s = −0.592; Figure 2). Relationship Between Lamina Cribrosa Thickness and Visual Field Mean Deviation POAG and PACG eyes had a similar visual field MD (−13.39 ± 9.24 vs −13.49 ± 8.54 dB), while Spearman analysis revealed a positive correlation between LCT and MD (p = 0.001; correlation coefficient, r s = 0.347; Figure 3). Linear regression analysis revealed that LCT was inversely related to age (p = 0.008), female sex (p = 0.018), and maximum IOP (p = 0.002). LCT was marginally related to visual field MD (p = 0.053).
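As a companion to the analysis plan above, the following Python sketch reproduces its main steps (normality and variance checks, group comparisons, Spearman correlation, and linear regression) using open-source libraries on simulated data whose means and SDs mimic the reported values; it is an illustration of the workflow under those assumptions, not the authors' SPSS analysis.

# Minimal sketch of the statistical workflow described above, on simulated data.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated per-eye average LCT (um), roughly matching the reported group means/SDs.
lct_control = rng.normal(290, 28, 62)
lct_poag = rng.normal(257, 19, 38)
lct_pacg = rng.normal(227, 31, 34)

# 1) Normality (Shapiro-Wilk) and homogeneity of variance (Levene).
print(stats.shapiro(lct_control))
print(stats.levene(lct_control, lct_poag, lct_pacg))

# 2) Group comparisons: one-way ANOVA (with Bonferroni-corrected pairwise tests
#    to follow) if assumptions hold, otherwise Kruskal-Wallis / Mann-Whitney U.
print(stats.f_oneway(lct_control, lct_poag, lct_pacg))
print(stats.kruskal(lct_control, lct_poag, lct_pacg))
print(stats.mannwhitneyu(lct_poag, lct_pacg))

# 3) Spearman correlation between LCT and a simulated maximum IOP.
lct_all = np.concatenate([lct_control, lct_poag, lct_pacg])
max_iop = 60.0 - 0.12 * lct_all + rng.normal(0, 3, lct_all.size)
rho, p = stats.spearmanr(lct_all, max_iop)
print(f"Spearman r_s = {rho:.3f}, p = {p:.4g}")

# 4) Linear regression of LCT on age, sex and maximum IOP (all simulated).
age = rng.normal(65, 8, lct_all.size)
female = rng.integers(0, 2, lct_all.size)
X = sm.add_constant(np.column_stack([age, female, max_iop]))
print(sm.OLS(lct_all, X).fit().summary())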
Discussion In the present study, EDI-OCT showed that the lamina cribrosa in glaucomatous eyes was thinner than in controls. The lamina cribrosa is a posterior portion of the sclera at the optic nerve head (ONH), composed of connective tissues and elastic fibers, 11 which provides the main support for the optic nerve axons as they exit the eye. The lamina cribrosa helps to maintain the pressure gradient among IOP, cerebrospinal pressure, and the surrounding tissue. 12 Structural changes in the lamina cribrosa have been thought to cause early damage when glaucomatous optic neuropathy is diagnosed. 11,13 Increased IOP can cause posterior displacement of the lamina, 14 leading to optic nerve axon damage. Clinically, an enlarged ONH cup is a typical sign of lamina change. In addition, axoplasmic blockage by lamina distortion can cause retinal ganglion cell damage and ONH changes. Lamina cribrosa structures have been studied in animal 15 or enucleated eyes. 12,14,16 Histomorphometric investigations reveal that lamina thickness is significantly lower in eyes with advanced glaucomatous optic nerve damage than in non-glaucomatous eyes. EDI-OCT can be applied for evaluation of the LCT in living eyes and for monitoring LCT during follow-up. In the present study, both groups had advanced-stage glaucoma, with an MD of −13 dB. LCT had an inverse correlation with maximum IOP, which was higher in PACG than in POAG (32 mmHg vs 25 mmHg). In an experiment with ex vivo porcine eyes using SD-OCT, Fatehee et al 22 reported that LCT decreased after acute IOP elevation, and the higher the IOP, the thinner the lamina. In our study, some cases of PACG had a history of a previous acute attack. The peak IOP may have destroyed the connective tissue around the ONH, and the lamina cribrosa subsequently thinned. The dose-response relationship between IOP and glaucomatous damage is commonly presented in clinical studies. Gazzard et al 23 reported a relationship between IOP and visual field loss in PACG and POAG. They found that pre-treatment IOP was higher in PACG patients than in POAG patients, and that the visual field defect was also worse in PACG. De Moraes et al 24 found that peak IOP, rather than IOP fluctuation or mean IOP, was the strongest risk factor for visual field damage progression in treated glaucoma patients. A specific threshold of IOP elevation may have to be reached to lead to lamina cribrosa deformation and/or backward movement of the lamina. In addition, linear regression analysis showed that LCT was inversely related to female sex and age. In histologic studies, females appear to have a thinner lamina cribrosa than males. 25 In glaucoma, IOP can possibly further damage the lamina cribrosa and make it even thinner. Xiao et al reported that lamina thickness in healthy Chinese subjects aged >60 years is greater than in younger age groups. 26 However, in our study, older age was associated with a thinner lamina cribrosa. The reason for this result remains unclear. We speculated that the thickness of the lamina cribrosa might not indicate its rigidity. When the IOP is elevated, a thicker lamina beam might be more susceptible to compression, leading to a thinner lamina cribrosa. Future research on the relationship between lamina thickness and rigidity is required to elucidate this issue. Hao et al 27 reported different LCT outcomes from those of our study: chronic PACG (CPACG) patients had thicker LCT than those with POAG.
These patients with CPACG had no previous history of an acute attack. Both glaucoma groups had the same IOP level, at about 19 mmHg, and their visual field mean deviations were not significantly different (−10.5 vs −13 dB). The discrepancy in the results of these two studies may relate to IOP levels; we included patients with a history of an acute attack, and their maximum IOP was higher than in the research by Hao et al. Their study debated whether optic disc size in CPACG might be smaller than in POAG, but they did not measure the BMO. Our study showed no difference in BMO between PACG and POAG eyes. Korean POAG patients were found to have a thinner lamina than their Thai POAG counterparts, while the mean severity of visual field damage in Thai patients was more advanced (−13.39 dB in Thai vs −6.58 dB in Korean patients). 28 These differences might relate to differences in ethnicity, but different techniques were used to measure the LCT. The Korean study measured 3 points of the lamina, 16 while we examined 9. They reported that LCT in normal-tension glaucoma (NTG) was thinner than in POAG. 28 Their findings conflict with our observation that LCT was inversely correlated with IOP; however, NTG may have another pathophysiology, rather than IOP. 29 We speculated that a pre-existing thin lamina cribrosa might be susceptible to even the low IOP seen in NTG, and a prospective study of LCT and glaucoma incidence should be conducted to elucidate this issue. Among Asian controls, LCTs were different. The LCTs of Koreans, Thais, and Chinese were 348, 290, and 202 μm, respectively. These differences could be related to the measurement techniques; as yet, there is no standard protocol for LCT measurement. We found that the LCT, determined by EDI-OCT, was not a uniform structure; in particular, the posterior border had an irregular pattern (Figure 1). We measured the lamina at 9 points at equal distances from border to border at the central plane of the ONH, but we could not measure the entire lamina cribrosa, due partly to the shadows of blood vessels. However, enhancing the contrast of SD-OCT images improved the visibility of the posterior lamina boundary. This method is similar to that used in the study by Jonas et al, 30 which measured the thickness in the center of the optic disc, at the optic disc borders, and in the intermediary positions between the center and borders of the ONH. These different techniques demonstrate that LCT measurement is still inconsistent; therefore, automated software, as used in choroidal thickness measurement, 31 to identify the anterior and posterior borders of the lamina cribrosa and to measure lamina thickness would be useful in decreasing intra- and inter-observer errors when different techniques are used. Bruch's membrane opening, indicating the optic disc margin of the ONH, was larger in the glaucoma groups than in controls. This finding might be the cause and/or effect of IOP-related stress and strain on the lamina cribrosa and the scleral canal wall. A pre-existing large ONH, eg, the optic disc of people of African descent, is predisposed to glaucomatous damage. 32 A large ONH, for instance in myopia, may be associated with low scleral rigidity and might be susceptible to IOP. 33 On the other hand, IOP-related stress could distend the lamina cribrosa and peripapillary sclera, having an effect on scleral canal wall expansion. 17 The BMO would be displaced laterally. In this study, however, the BMO in POAG was not significantly different from that of PACG, and there was no correlation between IOP, LCT and BMO.
Limitations of the present study included the recruitment of patients from a tertiary referral eye care centre in a cross-sectional manner, so that most of the glaucomatous eyes were at an advanced stage, and we could not speculate about the trend of pre-glaucomatous lamina cribrosa thickness in POAG and PACG eyes. The maximum IOP data of the individuals were collected from hospital records or referral letters and may not have been correct. Our small sample size might be a source of bias, and we were unable to enroll POAG and PACG patients matched for IOP. In conclusion, this study supports the use of SD-OCT with EDI mode for detecting lamina cribrosa change. The LCT was significantly thinner in the glaucoma (POAG and PACG) groups compared to control eyes, and we found that maximum IOP was inversely correlated with LCT, and that PACG eyes had a higher maximum IOP and thinner LCT than POAG eyes. The pressure-dependent mechanism deformed the lamina cribrosa, in which higher IOP-loaded stress led to greater lamina cribrosa strain. Figure 1 Lamina cribrosa thicknesses of different groups imaged by the EDI mode of Heidelberg Spectralis SD-OCT using the horizontal cross-sectional B-scan, which ran through the center of the ONH. Notes: Anterior and posterior borders of the lamina cribrosa were determined (red line). Nine points (green dots) of each laminar thickness were measured to calculate the average. (A) The lamina cribrosa of a 78-year-old man with POAG in his right eye. (B) The lamina cribrosa of an 82-year-old woman with PACG in her right eye. (C) The lamina cribrosa of a 67-year-old woman with no glaucoma in her right eye. Figure 2 Relationship of lamina cribrosa thickness and maximum IOP using Spearman correlation analysis. Note: The correlation was statistically significant (p < 0.001; correlation coefficient, r s = −0.592). Figure 3 Relationship of lamina cribrosa thickness and mean deviation of the visual field using Spearman correlation analysis. Note: p = 0.001; correlation coefficient, r s = 0.347.
2021-02-25T05:31:56.793Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "48f30470825ed6a2422aa4ae4417a67f6628c906", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=66773", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "48f30470825ed6a2422aa4ae4417a67f6628c906", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13696570
pes2o/s2orc
v3-fos-license
Expectant fathers’ participation in antenatal care services in Papua New Guinea: a qualitative inquiry Background The importance of engaging men in maternal and child health programs is well recognised internationally. In Papua New Guinea (PNG), men's involvement in maternal and child health services remains limited and barriers and enablers to involving fathers in antenatal care have not been well studied. The purpose of this paper is to explore attitudes to expectant fathers participating in antenatal care, and to identify barriers and enablers to men's participation in antenatal care with their pregnant partner in PNG. Methods Twenty-eight focus group discussions were conducted with purposively selected pregnant women, expectant fathers, older men and older women across four provinces of PNG. Fourteen key informant interviews were also conducted with health workers. Qualitative data generated were analysed thematically. Results While some men accompany their pregnant partners to the antenatal clinic and wait outside, very few men participate in antenatal consultations. Factors supporting fathers' participation in antenatal consultations included feelings of shared responsibility for the unborn child, concern for the mother's or baby's health, the child being a first child, friendly health workers, and male health workers. Sociocultural norms and taboos were the most significant barrier to fathers' participation in antenatal care, contributing to men feeling ashamed or embarrassed to attend clinic with their partner. Other barriers to men's participation included fear of HIV or sexually transmitted infection testing, lack of separate waiting spaces for men, rude treatment by health workers, and being in a polygamous relationship. Building community awareness of the benefits of fathers participating in maternal and child health services, inviting fathers to attend antenatal care if their pregnant partner would like them to, and ensuring clinic spaces and staff are welcoming to men were strategies suggested for increasing fathers' participation in antenatal care. Conclusion This study identified significant sociocultural and health service barriers to expectant fathers' participation in antenatal care in PNG. Our findings highlight the need to address these barriers – through health staff training and support, changes to health facility layout and community awareness raising – so that couples in PNG can access the benefits of men's participation in antenatal care. Electronic supplementary material The online version of this article (10.1186/s12884-018-1759-4) contains supplementary material, which is available to authorized users. Background Globally, maternal and newborn mortality remains unacceptably high. Over three hundred thousand maternal deaths and close to 3 million newborn deaths occur annually, with the vast majority of deaths occurring in developing countries [1,2]. Papua New Guinea (PNG), located in the Asia Pacific region, continues to experience high maternal and newborn mortality and morbidity. Current estimates of maternal mortality vary between 215 and 733 deaths per 100,000 live births [3][4][5]. The leading causes of maternal mortality (post-partum haemorrhage, eclampsia and sepsis) are similar to elsewhere in the world and largely preventable [6,7]. Newborn mortality is also high at 25 per 1000 [8] and international estimates suggest that up to two thirds of these deaths could be prevented with effective, basic care [9].
The Government of PNG has identified early and ongoing antenatal care (ANC) and skilled care at birth as key strategies for addressing these poor health outcomes [5,10]. Currently only 67% of pregnant women receive any ANC [11] and only 55% of pregnant women receive the recommended four or more ANC visits [8]. Further, less than half (44%) of women receive skilled care during childbirth [11]. Gender inequality is a significant issue in PNG; women and girls have significantly less access to health and education than men and boys, violence against women and girls is common, and inequitable decisionmaking in the home contributes to poor health outcomes [12,13]. Women's lack of decision-making power and lack of male partner support have been highlighted as important barriers to women's use of health care services during pregnancy and for childbirth [13][14][15][16]. Global research suggests that involving expectant fathers in antenatal education, either through participation in ANC consultations with their pregnant partner or in antenatal education interventions, can be an effective strategy for improving health behaviours during pregnancy [17], increasing women's utilisation of skilled childbirth care and postpartum care [18], and can be effective in increasing ANC attendance in some contexts [17,[19][20][21][22][23]. Engaging men can also contribute to improved couple communication and shared decisionmaking regarding MCH [24,25], and is therefore particularly critical in settings such as PNG where gender inequality limits women's access to services [15,16]. Engaging men in ANC and prevention of parent-to-child transmission of HIV (PPTCT) initiatives has also demonstrated positive impacts on the proportion of pregnant women and couples testing for HIV, the use of condoms or abstinence to prevent HIV transmission within the couple, adherence to drug prophylaxis regimes and recommended infant feeding practices by HIV-positive mothers, and subsequently increase HIVfree infant survival [26][27][28][29][30]. Despite wide variation across different contexts, barriers to men's involvement in MCH services commonly identified globally include beliefs that it is unnecessary or inappropriate for men to participate in pregnancy or postpartum care, or men feeling embarrassed or ashamed to participate in MCH services [31][32][33][34][35][36][37][38][39][40], and men not being invited to attend services [31]. Other common barriers include fear of being tested for sexually transmitted infections (STIs) and HIV at a clinic [40,41], fear of being perceived as a jealous husband following his wife around [34] or fear of a man being perceived as 'dominated' by his female partner [33,37]. Inappropriate opening hours or long waiting time [37,39,42], work commitments or low job security preventing men taking time off work [31,[41][42][43], negative health worker attitudes towards men's involvement in MCH services and lack of staff capacity or space to engage men [31,33,35,37,39,41,44] have also been identified as barriers to engaging men in clinical settings. Poor understanding among men of the health problems faced by mothers and babies and inadequate knowledge regarding how to take an active role in MCH can also impede men's participation in ANC and MCH [31,33,41,44,45]. 
Men's involvement in MCH has also been negatively correlated with distance to the health facility [46], having multiple children [47], and women's autonomy [48,49], and positively correlated with wife's education level [47], and male partner income and education [39]. Despite significant barriers in many contexts that act against men participating in antenatal care, research in diverse settings has shown that many men and women welcome greater men's involvement in MCH services [31,35,41,[50][51][52], that many men want and need more information regarding women's and children's health [31,35,41,53], and that a range of strategies can effectively increase men's participation in ANC and men's support for MCH [23,35,[54][55][56][57][58]. The World Health Organization's Recommendations on Health Promotion Interventions for Maternal and Newborn Health recommend interventions to engage fathers during pregnancy, childbirth and the postnatal period [19]. In PNG, the National Sexual & Reproductive Health Policy [59] explicitly advocates for the active involvement of expectant fathers in ANC and in the labour ward during childbirth. However, in practice most expectant fathers do not attend ANC consultations with their pregnant partner or participate in any formal antenatal education, and in many PNG communities pregnancy, birth and infant care are considered 'women's business' resulting in limited men's involvement in maternal and child health (MCH) more generally [41,44]. Lack of engagement with expectant fathers during the antenatal period is a missed opportunity to deliver critical information and services that improve MCH, including STI and HIV testing, treatment and preventive education. The purpose of this paper is to explore barriers, enablers and potential strategies for involving expectant fathers in ANC with their pregnant partner in PNG. Study design and purpose This paper forms part of a larger study that the authors conducted between June and August 2012 to examine health seeking behaviour for antenatal care, men's involvement in ANC, and prevention, testing and treatment of STIs and HIV. This study employed a qualitative study design including focus group discussions (FGDs) and key informant interviews (KIIs) using standard question guides. Given the relative dearth of published research into expectant fathers' involvement in ANC in PNG, this study design was appropriate to explore and build our understanding of this topic. This study was funded by UNICEF PNG and conducted to inform design of the "Haus Man Sambai Long Ol Mama" project, a UNICEF PNG pilot program to increase men's involvement in ANC and PPTCT. Study setting Data were collected across four provinces of PNG. All four provinces had been identified as implementation locations for the Haus Man Sambai Long Ol Mama project, selected for the project due to high rates of parentto-child transmission of HIV. In each province, one to three clinics providing PPTCT services were purposively selected to represent the range of rural and urban clinical settings operating in that province. Ultimately, a total of seven sites were selected: Port Moresby General Hospital and St. Therese Clinic in National Capital District; Migende St. Joseph Rural Hospital in Chimbu Province; Kumin Headquarters PPTCT Centre and Mendi General Hospital in Southern Highlands Province; and Mt. Hagen General Hospital, Tininga and Rabiamul Clinic in Western Highlands Province. Antenatal care services are provided free of charge in these clinic. 
Participants and sampling The main study participants were adult men and women from communities surrounding the study clinics, including: women who were pregnant or had given birth in the last 12 months (referred to collectively here as 'pregnant mothers'); men whose female partner was currently pregnant or had given birth in the last 12 months (referred to collectively here as 'expectant fathers'); older women aged 50 years or over; and older men aged 50 years or over. In PNG, older people often play an important role in community decision-making and information sharing and older women in particular often support younger women throughout pregnancy and childbirth [44]. Older people were therefore an important informant group to understand attitudes and experiences of men's role in the antenatal period. FGD participants were recruited using convenience sampling via public announcements, flyers, posters, and individual verbal invitations at health centres, community meeting places, churches and other community institutions. A total of 300 community members participated in FGDs, including 78 pregnant mothers, 64 expectant fathers, 77 older women and 81 older men. The average pregnant mother participating in this study was 27 years old (range 17 to 45 years), with two or three children (range zero to 8) and 5.6 years of schooling (range zero to 16). The average expectant father was 31 years old (range 19 to 46 years), had two or three children (range zero to 10 children), and had 6.6 years of schooling (range zero to 12 years). Most pregnant mothers and expectant fathers lived with their partner (94% of mothers and 89% of fathers) and were unemployed (88% of mothers and 64% of fathers). A substantial proportion of all participants were in a polygamous marriage, including 13% of pregnant mothers and 5% of expectant fathers. Older men and women were all believed to be aged over 50 years, although many could not recall their exact age. Older women had an average of four children (range zero to eight) and older men an average of between five and six children (range zero to 15). The average older woman had 3.9 years of schooling (range zero to 14 years) while the average older man had 3.6 years of schooling (range zero to 12 years). Health workers (nurses and midwives) involved in ANC or PPTCT service provision in local hospitals, health centres or clinics were also eligible to participate in KIIs. Two health workers were purposively recruited at each site by trained data collectors in conversation with health service management. Data collection A total of 28 FGDs (four per site) were conducted. At each site, separate FGDs were conducted with pregnant mothers, expectant fathers, older men and older women. Trained FGD facilitators used open-ended question guides developed and pilot-tested for each specific participant group (refer to Additional files 1, 2, 3, 4, and 5). FGDs were led by a facilitator of the same gender as the participants, supported by a note taker, and explored attitudes to expectant fathers' participation in ANC, and barriers, enablers and potential strategies to promote men's participation in ANC. FGDs were held in private rooms in community buildings or health facilities, involved between four and 11 participants, lasted approximately 1.5 h and were conducted in Tok Pisin in combination with other local languages. Basic demographic data were collected from FGD participants regarding number of children, marital status and age. 
We also undertook 14 KIIs (two per site) with health workers to explore attitudes and behaviours relevant to expectant fathers' participation in ANC, health system factors influencing expectant fathers' participation and opportunities to promote fathers' participation in ANC. KIIs were conducted face-to-face in a private space in the health facility by one facilitator and one note taker, lasted approximately one hour and were conducted in Tok Pisin or English, depending on the participant's preference. This study used different data collection personnel in each province. Provincial teams were supervised by a team leader who managed and participated in data collection across all sites. This approach was adopted to develop research capacity in each location, to minimise security risks to study personnel, and to ensure data collectors were fluent in local languages. Provincial data collectors were community HIV educators sourced from local non-government organisations. Most had limited prior experience of qualitative research. All data collectors participated in a five day training workshop on qualitative research and participated in field-testing study tools. Two digital recorders were used during data collection and data collectors took detailed written notes. However, substantial background noise and softly spoken participants compromised the usefulness of some digital recordings. Detailed notes taken by note-takers in each FGD and KII were compared to voice recordings and amended where possible and if required, before being checked by the FGD or KII facilitator. These written records were then translated into English. Data analysis Translated written records were reviewed based on broad themes of interest, namely: support that fathers currently provide to pregnant partners; attitudes to fathers participating in ANC; barriers to expectant fathers attending ANC; enablers to fathers participating in ANC; and potential strategies for increasing fathers' participation in ANC. Subsequent analysis of written records involved inductive data-driven coding of the text to identify and synthesise recurrent issues in the data. The software package NVivo 10 was used for data management. Provincial research teams provided feedback on initial interpretation of the data, the recurrent issues identified during analysis, and provided assistance with interpretation of findings at a two-day workshop in Port Moresby. Ethics, consent and permissions This research was approved by the Research Advisory Council of the National AIDS Council Secretariat in Papua New Guinea and the Alfred Health Human Research Ethics Committee in Australia. Written or verbal consent to participate was obtained from all participants after data collectors had explained study objectives and procedures and checked that participants understood this information. To protect participant confidentiality, quotes are attributed to the participant group and the province, but without reference to individual sites. Because of the small number of health worker participants, health worker quotes are not attributed to a province. Results Expectant fathers' support for their pregnant partner FGD and KII participants in all provinces reported that some men accompany their partner to the ANC clinic and wait nearby, but that few men participate in ANC consultations with their partner. 
Participants report that many men support their pregnant partner in other ways, including providing nutritious food and helping with heavier chores such as carrying water, chopping firewood, gardening or other housework. Expectant fathers also commonly provide financial assistance to their pregnant partner, predominantly for bus fares, hospital fees or food, or play a role in organising transport for women to access the health facility. Some men also assist their pregnant partner by caring for older babies or children. Some men do help their wives, especially those men who stay at home with their wives. They help in terms of food and firewood and assist in caring for the baby when the baby is crying or, they even feed them [the baby]. (Pregnant Mothers' FGD, Chimbu Province). Some expectant fathers also provide emotional support and encouragement to their pregnant partner; by discussing health information or providing advice, by discussing her concerns about the pregnancy, or by encouraging her to attend ANC. Some men also reportedly encourage their pregnant partner to rest, while others encourage her to do gentle exercise to build strength for labour. However other men offer limited support to their pregnant partners: When she is pregnant, that's it, the job is done. (Expectant Fathers' FGD, Chimbu Province). Attitudes to expectant fathers participating in ANC consultations Most pregnant mothers, older women and health workers participating in this study were supportive of expectant father's participating in ANC and highlighted important potential benefits to greater men's participation, including improved knowledge among expectant fathers of health needs and danger signs during pregnancy, and greater support for pregnant mothers to access health care services. We want our husbands to ask the health care workers about our health condition during consultations. Mothers are really ready for men to come with them for the antenatal clinic. (Health Worker KII). Importantly, however, pregnant mothers participating in FGDs also reported that some women would not want their male partners to accompany them during an ANC consultation, predominantly because women might feel "shy". Participants reported that women might feel shy if they are seen in public with their partner, or if their partner sees them during the external physical examination or hears them talking to the health worker about topics related to pregnancy or birth. Some [pregnant women] do take their husbands to the clinics while others feel shy for their husbands to accompany them. (Pregnant Mothers' FGD, Western Highlands Province). We don't want the man to see our tummy so we don't want our husbands to come to the clinics. (Pregnant Mothers' FGD, Western Highlands Province). Other pregnant mothers reported that expectant fathers should participate in specific parts of ANC consultations, but should not be present during abdominal or pelvic examinations: When it is time for pelvic examination the men should not come because later when there is an argument they will say all sort of things. But we want them to be there for all blood tests and check-up so the doctor can advise them too. (Pregnant Mothers' FGD, National Capital District). Expectant fathers participating in FGDs were asked whether many men would participate in an ANC consultation with their pregnant partner if they were invited to do so by a health worker. 
In most FGDs, participants reported that many fathers would attend ANC with their partner, while others would not, even if invited. If we are invited we will go. (Expectant Fathers' FGD, Chimbu Province). Fathers have different views, some will listen [to the health workers], but some will say that it is the woman's work. (Expectant Fathers' FGD, Western Highlands Province). This perceived reluctance of some expectant fathers to participate in ANC was generally attributed to the challenges and barriers to men participating in ANC described in the following section. Challenges and barriers to male involvement in ANC Sociocultural norms and taboos Sociocultural norms and taboos were the most commonly reported barrier to expectant fathers participating in ANC. Participants reported that many men believe that MCH, including ANC attendance, is a woman's responsibility: …the main reason is custom that prevent them [men] from attending ANC. Men think that they are superior or the boss of the family so they are not concerned about the health of the mother and the children. They think it's the job of mothers to look after the children. Some participants also spoke of ANC clinics as 'women's places' or for women only: Most fathers think antenatal clinical is for mothers only. (Expectant Fathers' FGD, Southern Highlands Province). In the village there is a hausman [men's house] and hausmeri [women's house]. Men don't go to the women's house and women don't go to the men's house. There is a big respect between men and women. Some think that ANC is a house for women only so men are not allowed to enter. (Pregnant Mothers' FGD, Southern Highlands Province). Norms around appropriate ways for men to behave towards their female partner were often reported in the form of negative community perceptions and gossip about men 'following' their partner to the clinic: Some men care too much about what others will think of them. [They say] 'he has never seen a woman before, that's why he's following his wife'. (Expectant Fathers' FGD, Chimbu Province). [If men go to the ANC clinic] people in the community will stare at them and say these men follow their wives around too much. (Expectant Mothers' FGD, National Capital District). These norms relating to fathers' participating in ANC contributed to men feeling shy, embarrassed or ashamed to attend ANC with their pregnant partner: There's no waiting house for me so I'm ashamed to go inside and I wait outside. (Expectant Fathers' FGD, Chimbu Province). We encourage them to come but they don't, a few that come, don't go inside because they feel shy because too many women look at them. Less than 10% accompany their wives to the ANC. (Health Worker KII). While feeling ashamed was a commonly reported barrier to expectant fathers' participating in ANC, most expectant fathers participating in this study did not explain the reasons why men might feel ashamed to accompany their partner to ANC. However, health workers, pregnant mothers and older women tended to associate men's feelings of shame or embarrassment with being in the company of large numbers of women (traditionally taboo behaviour in many communities), with observing the health worker examining their pregnant partner's abdomen, or with having too many children or closely spaced pregnancies. Shame as a barrier to men participating in ANC was particularly reported by expectant fathers in FGDs in the Highlands Region. 
In these areas, prevailing gender norms and dominant forms of masculinity may contribute to men feeling too shy or ashamed to publically show support for their pregnant partner: I'm a Western Highlander, we have an attitude problem, we are ashamed so we only give money to the woman [to go to the clinic]. (Expectant Fathers' FGD, Western Highlands Province). Young men do care about their partners, it is just that they are shy to show their support in front of their peers. (Older Men's FGD, Western Highlands Province). These community attitudes and the dominant forms of masculinity, particularly in the Highlands, undermine men's ability to perform caring acts publicly and act as a barrier to men accompanying their partner to ANC. Fear of an HIV test Some participants in this study spoke of men being reluctant to attend an ANC clinic because they feared having an HIV or STI test. This fear was particularly associated with men who suspected they may be HIV positive or men who had had multiple sexual partners. They are scared of being tested for HIV/STIs if they suspect themselves. (Expectant Fathers' FGD, Chimbu Province). Most men are scared to go to the clinics if they had had multiple sexual partners before their marriage. They are scared of being tested. (Older Men's FGD, Chimbu Province). Participants also highlighted men's concern that the community will presume that a couple attending ANC together are HIV positive. Health worker practices may be unintentionally reinforcing this assumption; although the majority of health workers supported greater involvement of fathers in ANC, in practice fathers were only actively encouraged to attend if the pregnant mother was found to have an STI or be HIV positive, or to discuss family planning. We encourage women to bring partners especially for STI treatment at the same time we provide basic HIV/ AIDS information on how it is transmitted and the importance of treatment and prevention. (Health Worker KII). Once the mother come here looking sick like anaemia or the mother have children one after another (like more than five) or when the woman is detected with Sexually Transmitted Infection then we involve the husband on how to go about solving the problem through family planning and counselling. (Health Worker KII). Poor health worker attitudes The attitudes of some health workers may further dissuade men from attending clinic services with their partners. Some participants spoke of health workers speaking and behaving in ways that made men feel unwelcome in ANC clinics, and even not allowing men to participate in ANC. Nurses are harsh with the man because most men don't attend the clinic with their wives. The first impression given to them keeps the man from attending the clinic with their wives. (Older Men's FGD, National Capital District). This viewpoint was supported by a health worker, who identified health worker attitudes as a barrier to fathers' participation in ANC: Sometimes health workers have no understanding. They impose their values on fathers…If a man is willing to do this [come to the clinic] we as health workers must comply. Traditionally we are used to seeing women in the clinic, this perception needs to change. (Health Worker KII). Concurrent relationships The nature of a couple's relationship was also reported to influence men's involvement in ANC. 
In Southern Highlands Province and Western Highlands Province, men in polygamous relationships were reportedly less likely to accompany their partner to ANC for fear of being perceived as favouring one wife over others, which may cause conflict in the household: Men who have many wives don't go because the community might think he only favours one and not the others. (Older Men's FGD, Southern Highlands Province). In Chimbu and National Capital District, participants spoke more often of men not attending ANC because they worried about being seen by a current or ex-girlfriend: Some men don't want to go to the clinic if they had an affair with a female health worker. Some men don't want their ex-girlfriends to see them with their wife in case they gossip about them, that's why they hide. (Expectant Fathers' FGD, Chimbu Province). Some men were also reportedly unlikely to attend ANC with a pregnant partner because they did not wish to make their relationship public, as this would hinder their chances with other partners, or because they were already involved with other women and were not very concerned for or supportive of their pregnant partner. When they see another woman, they don't want to follow their wife to the clinic. The sight of a new woman makes them forget about their wife. (Older Women's FGD, Chimbu Province). Time, distance and cost of attending ANC Limited time, coupled with long waiting times for ANC, was highlighted as a barrier to fathers' participation in ANC. Long distances to the nearest clinic or lack of money for transport also had a negative impact on men's participation in clinic services. Working husbands don't have time to come with their wives to the clinic. Those who are unemployed cannot afford the bus to accompany their wives. (Health Worker KII). Enablers that support male involvement in ANC Positive health worker and community attitudes Enablers to male involvement were closely related to the barriers discussed above. Men were reportedly more willing to attend ANC if the community is supportive of fathers attending clinic and if health workers are friendly and welcoming. When the community are happy and compliments a man for bringing his wife to the clinic, it kind of motivates him to take his wife to the clinic. (Expectant Fathers' FGD, Chimbu Province). Sometime it depends on the health workers. If the health workers are friendly and kind to both the husband and the wife, they both turn up on their clinic days. (Older Men's FGD, Chimbu Province). Concerns for the health of their partner or baby Male FGD participants spoke of expectant fathers being motivated to attend ANC by a sense of shared responsibility for the baby's health, by a belief that attending clinic together will be good for the baby's health, or by love for their pregnant partner: Participants noted that many men would be more likely to offer support when his partner is pregnant with their first child than with subsequent pregnancies: There are grave concerns for a first born, second born they are not concerned because it is natural. Men also spoke of being more likely to attend ANC if their pregnant partner is unwell or diagnosed with HIV or an STI. Similarly, health workers reported that they invite expectant fathers to participate in ANC if the mother is experiencing specific health conditions, such as anaemia or an STI or HIV, or when a couple already have many children in order to discuss family planning. 
Concerns regarding safety or a woman's ability to communicate with clinic staff Expectant fathers are reportedly more likely to participate in ANC if their pregnant partner is illiterate or unable to speak Tok Pisin or English to communicate with health workers. Concerns about a woman's safety on the trip to the clinic may motivate men to accompany their partners to the ANC clinic, although in such cases expectant fathers would not necessarily participate in ANC consultations but rather wait outside the clinic. Opportunities for increasing men's participation in ANC Inviting expectant fathers to attend ANC Focus group discussion participants were asked to suggest strategies for encouraging fathers to participate in ANC. Individual invitations from health workers to expectant fathers, encouraging them to attend ANC with their pregnant partner was a strategy suggested by both male and female FGD participants. Written invitations would reportedly make expectant fathers feel welcome and 'special': If we are given an invitation individually, we will feel special and important and cooperate. (Expectant Fathers' FGD, Chimbu Province). One health worker suggested routinely inviting expectant fathers to participate in ANC as a strategy for overcoming men's fear of community gossip if they participate in ANC: People think that it is not good for men to accompany their wife to ANC because of the fear of gossip. Maybe if we make it normal, a routine exercise, I think it will encourage men folk to come with their wives to ANC. (Health Worker KII). Making facilities welcoming to men Ensuring all health workers treat expectant fathers in a friendly and respectful manner, and making male health workers available to provide workshops or seminars, were commonly suggested strategies for encouraging men's participation in ANC. Maybe the nurses should be welcoming and open up to the men. If our attitude is good we will bring people in especially in the Antenatal Clinic. (Health Worker KII). Male health workers must encourage the men and give them some special lessons. There must be room provided for men only at the health facilities to give workshop/seminar that will help them to look after their wives and children. (Pregnant Women's FGD, Southern Highlands Province). Many participants suggested adapting facilities to welcome men, including by providing appropriate waiting spaces for men, providing pamphlets and information aimed specifically at men and couples, and providing services such as tea and coffee facilities and entertainment for men (e.g. for example games or videos). Raising community awareness Older men, older women, and health workers participating in this research reported that increasing community awareness regarding men's role in supporting the health of their partner and baby will be an important step in increasing male involvement in MCH more generally, and ANC specifically. To get men involved in ANC, there should be more awareness about the importance of ANC. It would be good to sit and talk with the men. Also trained health care workers should give special education to the men. (Health Worker's KII). Engaging older people and community leaders to promote and champion greater participation of expectant fathers in ANC was highlighted as a strategy likely to be effective. Integrating antenatal care in outreach health services to provide information sessions to communities was also suggested. 
Develop or find some ways to raise awareness on encouraging men to accompany their wives, emphasizing both couples coming to the clinic. Health staff at the clinic should have some time available to do mobile clinic at the village to give education on involving men because most of the men are not aware [that they can attend ANC]. (Older Women's FGD, Western Highlands Province). Compulsory attendance and incentives Some participants in FGDs and KIIs suggested making ANC attendance compulsory for expectant fathers and penalizing pregnant women by refusing service or levying a fee if they did not bring their partner to ANC. If the wife turns up at the clinic on her own, the service fee should be increased but if both turn up it should be reduced. Service fee should be controlled in a way that both partners will visit the clinic. (Older men's FGD Chimbu Province). In order to bring the men to the clinic we should engage the leaders to tell their people that both men and women should go to antenatal clinic because of the prevalence of the HIV and we will make it compulsory that the pregnant mother is seen and treated as long as she comes with the husband. If not she will not be treated, it will be compulsory. (Health Worker's KII). As noted above, however, some female participants reported a preference for attending ANC alone. Participants suggesting compulsory ANC attendance for male partners did not reflect on the potential negative consequences of such an approach. Discussion This qualitative research paper has identified key barriers and enablers to expectant fathers' participation in ANC consultations in PNG, explored attitudes to fathers participating in ANC, and identified opportunities to encourage fathers to accompany their partner to ANC. In general, women and health workers consulted in this study were supportive of expectant fathers participating in ANC consultations, and these participants highlighted a range of benefits that greater male involvement could have for women and children. Importantly, however, this study also found that some women prefer to attend ANC alone or prefer male partners to be present during ANC counselling but not during physical examinations. This finding is consistent with previous research from PNG and other settings, which has shown that attending ANC alone can offer a valued opportunity to travel unaccompanied, network with other women, and/or privately seek health advice or services [41,60]. In seeking to involve expectant fathers in ANC, policy makers and health workers must ensure that women are able to decide whether and when their partner joins them in ANC consultations. A range of strategies exist for engaging fathers without compromising women's privacy or autonomy, including: inviting each expectant father to attend ANC via his pregnant partner, explicitly allowing her to decide whether to pass on this invitation; allocating time at the start of a consultation to talk to a woman alone, before asking if she would like her male partner to join the consultation; allowing women to bring their male partners to ANC counselling, but ensuring that physical examinations and potentially sensitive topics are discussed in private; or seeking to routinely involve expectant fathers in only the first or second visit. 
While some participants in this study suggested making men's attendance at ANC compulsory or providing disincentives for women attending ANC without a partner, such approaches are not recommended as they can stigmatise single or unaccompanied women and dissuade these women from accessing ANC [30,61]. Regardless of the strategy employed to engage fathers, pre-testing messages and strategies with both men and women will be important in maximising the benefits of engaging men in MCH while minimising potential risks such as loss of women's autonomy in health decision-making [19]. The finding from this study, and other research in PNG [41,62], that some expectant fathers wait outside the clinic while their pregnant partner attends an ANC consultation highlights an opportunity to engage men in ANC consultations or other health education initiatives. Health workers should consider routinely asking women if their partner is waiting nearby and if they would like him to join the consultation. Community members, including expectant fathers, involved in this study believed that expectant fathers would respond positively to a written or verbal invitation to participate in ANC. This finding is in line with international research that has shown that even in contexts where ANC is considered 'women's business' , inviting expectant fathers to attend ANC if their pregnant partner would like them to can make men feel more welcome at ANC [63], and can increase expectant father's participation in ANC [24,25,35,58,64,65], particularly when invitations are tailored to local health concerns [64]. Routinely inviting fathers to participate in one or more ANC consultations (with their pregnant partner's consent), rather than the current practice of inviting fathers to ANC only when their partner tests positive to HIV or an STI, or is experiencing a serious health issue, may also reduce stigma and gossip associated with men's attendance at ANC, thereby reducing barriers to couples attendance at ANC in the future. Our finding that health worker attitudes and capacity are a barrier to expectant fathers' participation in ANC is similar to research findings in other contexts [31,35,41,42,44] and highlights the need to ensure that nursing and midwifery education and in-service training include a focus on respectful, family-centered care and couple counseling skills. Training health workers to engage fathers and provide quality couple counseling has been effective in increasing men's participation in ANC services [24,25] and increasing health worker job satisfaction [24] in other settings and should be trialed in PNG. The perception reported by study participants that ANC clinics are women's places is a finding echoed in the global literature [31-35, 40, 41, 66-68]. In our study clinics, as in most public clinics in PNG, women are unable to make appointments for ANC and instead go to the clinic on designated ANC days, often waiting for long periods before being seen. Further, clinics often lack privacy for women receiving counselling or examinations. These factors contribute to men feeling embarrassed and intimidated when accompanying their pregnant partner, and may make other women seeking ANC uncomfortable. Existing and future ANC clinics can be made more fatherand couple-friendly by ensuring consultation spaces afford adequate privacy, providing specific waiting areas for men and couples, having separate entrances for men, or displaying posters, magazines or educational DVDs that specifically target and depict men. 
The existing layout and resources available to clinics in PNG mean that some of these changes are likely to be immediately feasible in at least some clinics but not others, underscoring the need for a range of strategies that are selected or adapted based on the local community and resources available. Notably, many suggested interventions to make clinics more 'father friendly', such as ensuring the physical environment affords adequate privacy, providing alternative opening hours and ensuring staff are supported to provide quality couple counselling and respectful care, are also likely to make existing services more acceptable to pregnant women and adolescents. Other suggested changes, such as providing games for men waiting at ANC, are likely to be less attractive to health staff and policymakers, due to the expense and the potential to distract men from information and services provided at the clinic. However, health workers may be able to harness this expressed preference for entertainment in order to increase men's health-related knowledge; program experience indicates that facilitated, game-based health education for men can be feasible and highly acceptable in PNG [69] and could be usefully integrated into group antenatal education for expectant fathers [70]. Community awareness raising was frequently highlighted by participants as critical to increasing expectant fathers' participation in ANC and has been shown internationally to be effective in increasing men's engagement in MCH [54,71]. Communication strategies, such as mass and social media and interpersonal communication strategies, should seek to provide information and stimulate discussion about the benefits of expectant fathers participating in ANC, while specifically addressing negative community attitudes about men 'following' their partner to ANC. Communication interventions through community groups and institutions, targeting both younger and older people, may also be an appropriate strategy to reach men with information about MCH while barriers to men's full participation in ANC are addressed. Engaging men in settings where they commonly congregate has long been a recommendation of the male engagement literature [72,73] and research indicates that men prefer to be engaged in places where they meet socially [74]. Participants in this study reported that prevalent ideas regarding masculinity, a man's role versus women's responsibilities, and men's spaces versus women's places, tend to limit male involvement in ANC. However, participants also reported divergent community attitudes towards expectant fathers participating in ANC, noting that while some community members criticise and gossip about expectant fathers attending ANC with their pregnant partner, other community members are pleased and supportive. Identifying and supporting strong role models to champion the role of fathers in MCH, including the importance of expectant fathers participating in ANC, may increase the acceptability of men attending ANC and has been shown to be an effective strategy in engaging men in other settings [75,76]. Our findings suggest that some expectant fathers, and indeed pregnant women, will not choose to participate in ANC consultations together, even if given the option to do so. Promising findings from the international literature suggest that men-only group education or one-on-one peer education can break down barriers to men's participation in MCH and encourage fathers to take a more active, positive role in MCH [23,24,76-78].
As many clinics in PNG only provide antenatal care on specific days and at specific times, and given our finding that some expectant fathers wait outside the clinic while their partner receives ANC, men-only group antenatal education or one-on-one peer education may be a viable option to reach these men. Group education in particular is relatively low cost and has proven effective in improving care-seeking and health-related behaviours [23,24,57,78]. Program experience also suggests that men's group antenatal education is feasible and acceptable in PNG [70]. The finding that men with multiple sexual partners, either in the form of extramarital partners or multiple wives, may be less likely to participate in ANC services and may avoid clinics for fear of an STI or HIV test is particularly concerning. Some 13% of pregnant mothers involved in this study reported being in a polygamous relationship, suggesting that a substantial proportion of women and children are likely to be impacted if we fail to reach polygamous men with information and services. Further research is needed on the most suitable approach to engaging this population group in antenatal education, but interventions such as peerto-peer counselling or men's group health education may be appropriate. This study has some important limitations. Due to the short timeframe available to collect data between commencement of this study and the 2012 General Elections, 1 participants were largely recruited through clinics and public announcements in churches and community meetings, which may have biased participation towards those accessing clinic services, attending church or with a pre-existing interest in health. In addition, voice recordings of focus groups and interviews were of poor quality due to excessive background noise and softly spoken participants (as described earlier). Ultimately written records compiled from detailed data collector notes, supplemented with transcribed voice recordings where possible, were used for analysis. Due to budget and time constraints, analysis and findings were checked with provincial data collection teams but not with participants. Finally, data for this study were collected in National Capital District and the Highlands Region of PNG. Social and cultural diversity across PNG means that the findings of this study may not be applicable to other geographical locations, such as coastal or island regions. Conclusion Expectant fathers in PNG face considerable barriers to participating in ANC with their pregnant partners, including sociocultural norms and taboos, inappropriate clinic infrastructure and poor health workers attitudes. Although many men accompany their pregnant partner to the ANC clinic, few participate in ANC consultations. Findings suggest, however, that most pregnant women and health workers support fathers participating in ANC and that at least some expectant fathers would attend ANC consultations if invited to do so. This study has also identified strategies for increasing expectant fathers' participation in ANC, with implications for program planners seeking to encourage men to take an active, positive role in supporting maternal and child health. Interventions such as explicitly inviting expectant fathers to participate in ANC services, if their pregnant partner would like them to, and ensuring that health workers have the skills to engage men and provide quality couple ANC counselling will be important in increasing men's participation in ANC. 
Interventions to make clinic spaces more welcoming to expectant fathers, such as providing posters and pamphlets depicting and targeting fathers, or providing men's or couples' waiting spaces, may also be feasible in many clinics. Community awareness raising interventions will also be needed to build community support for expectant fathers' participation in ANC. Other promising strategies, such as men-only group antenatal education, should be considered to reach men unable or unwilling to attend ANC with their partner.
Endnotes
Ethics approval and consent to participate
This research was approved by the Research Advisory Council of the National AIDS Council Secretariat (approval number RES11.025) in Papua New Guinea and the Alfred Health Human Research Ethics Committee in Australia (project number 7/12). Written or verbal consent to participate was obtained from all participants after data collectors had explained study objectives and procedures and checked that participants understood this information. Participants who were unable or unwilling to provide written consent, for example due to low literacy skills, were invited to provide verbal consent. This verbal consent was witnessed by a data collector. Procedures for obtaining informed consent were approved by the abovementioned ethics committees. To protect confidentiality, quotes are attributed to the participant group and the province, but without reference to individual sites. Because of the small number of health workers, we do not attribute health worker quotes to a province.
Competing interests
The authors declare that they have no competing interests.
Gelatin nanoparticle-mediated intranasal delivery of substance P protects against 6-hydroxydopamine-induced apoptosis: an in vitro and in vivo study Background The aim of this study was to investigate the protective role of intranasally administered substance P-loaded gelatin nanoparticles (SP-GNPs) against 6-hydroxydopamine (6-OHDA)-induced apoptosis in vitro and in vivo, and to provide a new strategy for treating brain pathology, such as Parkinson’s disease. Methods SP-GNPs were prepared by a water-in-water emulsion method, and their stability, encapsulating efficiency, and loading capacity were evaluated. PC-12 cells were used to examine the enhancement of growth and inhibition of apoptosis by SP-GNPs in vitro using MTT assays. In the in vivo study, hemiparkinsonian rats were created by intracerebroventricular injection of 6-OHDA. The rats then received intranasal SP-GNPs daily for 2 weeks. Functional improvement was assessed by quantifying rotational behavior, and the degree of apoptosis was assessed by immunohistochemical staining for caspase-3 in the substantia nigra region. Results PC-12 cells with 6-OHDA-induced disease treated with SP-GNPs showed higher cell viability than their untreated counterparts, and cell viability increased as the concentration of substance P (SP) increased, indicating that SP could enhance cell growth and inhibit the cell apoptosis induced by 6-OHDA. Rats with 6-OHDA-induced hemiparkinsonism treated with SP-GNPs made fewer rotations and showed less staining for caspase-3 than their counterparts not treated with SP, indicating that SP protects rats with 6-OHDA-induced hemiparkinsonism from apoptosis and therefore demonstrates their functional improvement. Conclusion Intranasal delivery of SP-GNPs protects against 6-OHDA-induced apoptosis both in vitro and in vivo. Introduction A hydroxyl derivative of catecholamine, 6-hydroxydopamine (6-OHDA) is a neurotoxicant that activates apoptotic and proapoptotic factors, eg, caspase proteins, as well as transduction of Bax factor, leading to apoptosis and degeneration of dopamine neurons. [1][2][3] Studies have shown that 6-OHDA can induce apoptosis in PC-12 (adrenal pheochromocytoma) cells by activating apoptotic factors. 2 Also, rats intracerebroventricularly injected with 6-OHDA show apoptosis, degeneration, and death of dopaminergic neurons in the substantia nigra. The apoptosis occurs mainly due to a caspase family member-mediated protease cascade, and caspase-3 has a vital role in this process. If large numbers of dopaminergic neurons undergo apoptosis, the result is irreversible degenerative brain disease, ie, Parkinson's disease (PD), for which there is still no effective therapy. 4 Substance P (SP), a member of the tachykinin peptide family, is involved in the regulation of many biological processes in the central and peripheral nervous (Figure 1). 5 SP-containing neurons are widely distributed throughout the central and peripheral nervous systems, especially in the substantia nigra region. 6 Most SP receptors are located within dopaminergic and cholinergic neurons in the basal ganglia, suggesting that SP may have a physiologically regulating effect on these neurons. 7 Therefore, SP and its receptor may have a therapeutic use in PD, which is characterized by impaired dopaminergic transmission. 
It has been reported that SP and dopamine are regulated via a positive feedback mechanism whereby binding of SP to its tachykinin neurokinin-1 receptor on dopamine neurons causes striatal release of dopamine, and by binding to its D1 receptor on striatal projection neurons, dopamine potentiates the release of SP within the substantia nigra. 8 Previous research has shown that expression of SP is significantly decreased in the basal nuclei in both hemiparkinsonian rats and PD patients, indicating probable involvement of SP in regulating the pathogenesis of PD. 7 In vitro experiments have demonstrated that SP can reduce anti-Fas-induced apoptosis in human tenocytes via regulation of neurokinin-1-specific and Akt-specific pathways. 9 An in vivo study suggested that SP can reduce apoptotic cell death by modulating the immune response in the early stages after spinal cord injury. 10 Intracerebroventricular administration of SP to rats with 6-OHDA-induced disease can increase the dopamine content in the brain and help to restore the dopamine deficit, with the positive effects seen being more prominent in the nigrostriatal system than in the mesocorticolimbic dopaminergic system. 11 Further, hemiparkinsonian rats pretreated with SP fragments 12 or an SP receptor antagonist 13 show increased levels of dopamine and its metabolites in the corpus striatum, as well as clear functional recovery. However, the current research focuses on the pharmacological effects of SP given by invasive intracerebroventricular injection, which can result in a high local concentration of SP in the brain, and it has been confirmed that a high level of SP in the brain can induce serious neuroinflammation and further aggravate illness, 8,14,15 so intracerebroventricular injection is not a safe or practical strategy for PD patients who need continuous treatment. Intranasal administration has been reported to be an efficient and noninvasive way to deliver biologics directly into the brain. 16 Gelatin nanoparticles (GNPs) are a type of gelatin-cored nanostructured lipid carrier prepared by a water-in-water emulsion method and have good stability and strong penetrating ability, encapsulating efficiency, and loading capacity, as well as bioactivity. 17 It has been reported that GNPs are a suitable carrier for targeted delivery, making it possible to deliver therapeutics to a focal zone effectively without compromising drug stability or concentration. 18,19 Theoretically, a novel strategy combining GNP-loaded therapeutics and the nasal olfactory pathway might maximize the potential efficacy of SP in the treatment of PD. In the present study, we investigated whether intranasally administered SP-GNPs could maximize the ability of SP to protect against 6-OHDA-induced apoptosis in vitro and in vivo. SP-GNPs were prepared by a water-in-water emulsion method and were found to have good stability, encapsulating efficiency, and loading capacity. The protective effect of SP-GNPs on PC-12 cells with 6-OHDA-induced disease was assessed by MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay, and the inhibition of apoptosis and neuroprotective effect of SP-GNPs in rats with 6-OHDA-induced hemiparkinsonism were evaluated in vivo.
Materials and animals
All the materials and equipment used in this study were commercially available. SP was purchased from GL Biochem Ltd (Shanghai, People's Republic of China). The protocols and procedures were approved by the local animal experimentation ethics committee.
Male Sprague-Dawley rats weighing around 300-320 g were provided by the Laboratory Animal Services Centre at our university. Two to three animals were housed per stainless steel cage on a 12-hour light/12-hour dark cycle in an air-conditioned room at 22°C, and checked daily by the animal care staff. Standard commercial rat chow (Prolab RMH 2500, PMI Nutrition International LLC, Brentwood, MO, USA) and water were available ad libitum.
Preparation and characterization of SP-GNPs and blank GNPs
SP-GNPs and blank GNPs were prepared using water-in-water emulsion and freeze-drying techniques. 20,21 Briefly, a high concentration of SP was dissolved in 1 mL of 20% w/v Poloxamer 188-grafted heparin copolymer solution. This solution was added to 2 mL of 2.0% w/v gelatin solution to produce a homogeneous mixture. Under sonication (110 W, 15°C) using a probe sonicator, d,l-glyceraldehyde was injected into the mixed solution until its final concentration reached 0.1% w/v to initiate the cross-linking reaction. The mixture was bathed at 5°C under magnetic stirring at 2,500 rpm for 5 hours to form a suspension of SP and GNPs. The suspension was lyophilized to obtain a powder containing SP and polymeric GNPs. Next, the lyophilized powder was dispersed in a solution containing soy phosphatidylcholine, trehalose, and cholesterol. After sonication (90 W, 20 seconds) at 25°C, the suspension was then lyophilized to obtain a powder containing SP-GNPs, which were reconstituted in double-distilled water to form a 2 mg/mL SP-GNP suspension for administration. Blank GNPs (using gelatin solution instead of SP gelatin solution during preparation) were also prepared for the subsequent experiment. The morphologies of the SP-GNPs and blank GNPs were determined using a scanning electron microscope (X-650, Hitachi, Tokyo, Japan). The particle size and zeta potential were determined by dynamic light scattering using a Nicomp™ 380 ZLS zeta potential/particle sizer (PSS Nicomp, Santa Barbara, CA, USA). To determine the encapsulating efficiency of the SP-GNPs and blank GNPs, approximately 1.5 mL of the SP-GNP dispersion were placed in a microtube and centrifuged at 10,000 g for 40 minutes. The supernatant was then collected and diluted for determination of SP content using an enzyme-linked immunosorbent assay kit; this experiment was performed in triplicate. Drug encapsulation efficiency (%) = (total amount of drug − amount of drug in supernatant)/total amount of drug added initially ×100%.
Experiment in vitro
Cell culture
Male rat PC-12 cells were used for the in vitro study. The PC-12 cells were cultured at 37°C in high-glucose Dulbecco's Modified Eagle's Medium with 10% fetal bovine serum and 1% penicillin-streptomycin in a humidified incubator containing 5% CO2. Cells in the logarithmic growth phase were harvested with trypsin for further experiments.
MTT assay
The ability of SP-GNPs to support the growth of PC-12 cells with 6-OHDA-induced disease was confirmed by MTT assay (run in triplicate). PC-12 cells were cultured in a 96-well plate for 24 hours at a density of 5,000 cells per well. With blank PC-12 cells as the control, 100 μM of 6-OHDA was added to the cells for 24 hours to induce cell apoptosis, after which blank GNPs or different concentrations of SP-GNPs were added and incubated for another 24 hours.
Next, 10 μL of MTT 5 mg/mL were added to each well and incubated for 4 hours; 100 μL of formazan solution was then added to each well, followed by incubation for a further 4 hours to dissolve the crystals that developed in each well. The plate was then put into a microplate reader to measure the optical density at 526 nm and quantify the extent of cell viability. The higher the cell viability in each well, the lower the degree of apoptosis.
Experiment in vivo
Rat model of hemiparkinsonism
The rats were anesthetized with pentobarbital sodium 60 mg/kg and then injected with 12 μL of 6-OHDA solution into the right striatum (or vehicle for sham animals) using stereotaxic apparatus (Figure 2). 22 23 Gentamicin was then given to prevent infection. Four weeks after injection of the 6-OHDA solution, rodent behavior was evaluated by counting the number of apomorphine-induced rotations to determine if the rat model of hemiparkinsonism had been successfully created. The rats were injected with apomorphine 0.5 mg/kg subcutaneously, and both contralateral and ipsilateral full-body rotations were recorded in the following 30 minutes. At least seven full-body contralateral rotations per minute were considered to indicate a successful hemiparkinsonian (PD) model, and these rats were used in the following experiment.
Effect of SP-GNPs in hemiparkinsonian rats
The day after the behavior evaluation, the sham rats and PD rats were randomized into five groups (n=8 per group) and started on daily treatment for 2 weeks. Group 1 comprised sham rats receiving intranasal phosphate-buffered saline and group 2 comprised PD rats receiving intranasal blank GNPs. Groups 3, 4, and 5 comprised PD rats receiving intranasal SP-GNPs at different concentrations (Table 1). Two hours after the end of 2 weeks of daily treatment, the experimental rats were injected subcutaneously with apomorphine 0.5 mg/kg to evaluate the extent of their neurorecovery. All contralateral and ipsilateral full-body rotations were recorded during the 30 minutes following injection of apomorphine. The fewer the number of rotations in the hemiparkinsonian rats, the better the neurorecovery was deemed to be. All rats were euthanized at this point, and their brains were collected for coronal sectioning across the substantia nigra region. The brain tissues were then embedded in paraffin and sectioned for immunohistochemical staining.
Immunohistochemical staining
As one of the terminal executioner proteases in apoptosis, caspase-3 plays a critical role in the apoptotic cascade. 24 Immunohistochemical staining with anti-caspase-3 antibody was used to evaluate levels of apoptosis in the substantia nigra region in hemiparkinsonian rats treated or not treated with SP. Image-Pro Plus version 6.0 was used to quantify the number of cells, the areas stained, and the degree of staining. The better the protective effect against 6-OHDA-induced apoptosis, the lower the rates of caspase-3 staining and apoptosis in brain sections from the PD rats.
Statistical analysis
Statistically significant differences across multiple groups were determined using one-way analysis of variance with the Newman-Keuls post hoc test. Statistically significant differences between individual groups were determined using the Mann-Whitney U-test. All testing was done using Statistical Package for the Social Sciences version 19 software (SPSS Inc, Chicago, IL, USA). A difference was considered to be statistically significant at P<0.05.
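As a rough numerical illustration of the encapsulation-efficiency formula given in the preparation section above (loading capacity is assumed here to be the encapsulated drug divided by the total nanoparticle mass; the masses are made up, not the study's measurements):

```python
def encapsulation_efficiency(total_drug_mg, drug_in_supernatant_mg):
    """EE (%) = (total drug - free drug in supernatant) / total drug x 100, as in the text."""
    return (total_drug_mg - drug_in_supernatant_mg) / total_drug_mg * 100.0

def loading_capacity(encapsulated_drug_mg, nanoparticle_mass_mg):
    """LC (%) = encapsulated drug / total nanoparticle mass x 100 (assumed definition)."""
    return encapsulated_drug_mg / nanoparticle_mass_mg * 100.0

# Hypothetical triplicate measurements (mg)
total_drug = 10.0
free_drug = [0.68, 0.70, 0.63]
ee_values = [encapsulation_efficiency(total_drug, f) for f in free_drug]
print(f"mean EE = {sum(ee_values) / len(ee_values):.1f}%")
print(f"LC = {loading_capacity(total_drug - 0.67, 180.0):.1f}%")
```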
Physicochemical properties and bioactivity of SP-GNPs and blank GNPs
Scanning electron micrographs showed that the SP-GNPs and blank GNPs were uniform in shape and size (Figure 3). Characterization of the SP-GNPs and blank GNPs is shown in Table 2. Dynamic light scattering showed the average particle size of the blank GNPs to be 136±1.32 nm. The polydispersity index (PDI) indicates the distribution of particle size. Low PDI values were observed for the SP-GNPs and the blank GNPs (Table 2), indicating that both were monodispersed stable systems. After loading with SP, the mean diameters of the nanoparticles and liposomes increased, but were still below 200 nm (Table 2). The zeta potential is an important indicator of the physical stability of nanoparticles. Nanoparticles with a high absolute zeta potential value are electrically stable, while those with a low absolute zeta potential value tend to be less electrically stable. As shown in Table 2, both the SP-GNPs and the blank GNPs had a strong negative surface charge, indicating that coating with phospholipids makes these nanoparticles more stable. The encapsulation efficiency and loading capacity of the SP-GNPs were 93.3±1.4% and 5.2±0.02%, respectively (Table 2). Table 3 shows the ability of different concentrations of SP-GNPs to support the growth of PC-12 cells with 6-OHDA-induced disease. When compared with untreated PC-12 cells with 6-OHDA-induced disease, those treated with blank GNPs showed slightly higher but not significantly different cell viability, whereas their counterparts treated with SP-GNPs did demonstrate significantly higher cell viability (P<0.05), indicating that SP-GNPs can decrease the extent of apoptosis caused by 6-OHDA and enhance cell growth. In the meantime, cell viability increased as the concentration of SP increased, suggesting that within a certain range of concentrations, the degree of inhibition of apoptosis achieved by SP is concentration-dependent.
Behavioral evaluation of PD rats after 2 weeks of treatment
The number of apomorphine-induced rotations following 2 weeks of daily treatment with SP-GNPs in each experimental group was consistent with the dopamine levels in the diseased brain. As seen in Table 4, the PD rats that received SP-GNPs made fewer apomorphine-induced rotations than the PD rats that received blank GNPs.
Immunohistochemical staining of caspase-3 in the substantia nigra
As seen in Figure 4, immunohistochemical staining for caspase-3 in the diseased substantia nigra was limited in the sham group but extensive in the PD group, indicating that caspase-3 is rarely expressed in normal circumstances but is expressed in large amounts in the presence of PD. Less staining was seen in the SP-GNP groups than in the blank GNP group, suggesting that SP-GNPs can inhibit the expression of caspase-3 and reduce neuronal apoptosis, thus helping the diseased neurons to recover. Further, PD rats receiving 75 μg or 100 μg of intranasal SP per day showed significantly less caspase-3 staining than PD rats that did not receive SP (P<0.05), while PD rats receiving 50 μg of intranasal SP per day showed a slightly lower level of caspase-3 staining than PD rats (P>0.05), indicating that the higher the concentration of SP, the better the effect in protecting against 6-OHDA-induced neuronal apoptosis.
Discussion
6-OHDA is a neurotoxin that activates the apoptotic cascade in the central nervous system, leading to apoptosis and degeneration of dopaminergic neurons, which culminates in cell apoptosis in vitro and PD in vivo.
SP, a member of the tachykinin peptide family, has been shown to play an important role in protecting against neurotoxin-induced apoptosis. Studies show that drugs or particles smaller than 300 nm can bypass the blood-brain barrier, can be absorbed through the mucous membrane in the nasal olfactory region, and can be delivered into the brain directly through the cribriform plate, beyond which they exert their therapeutic effects in specific regions inside the brain. [25][26][27] In earlier studies, nanoparticles were used as intranasal carriers for therapeutics to enable effective treatment of brain disorders, such as cerebral ischemia 28 and PD. 29 It is reported that nanoparticles administered intranasally can penetrate the brain through several pathways: the olfactory pathway, in which particles are taken up by the olfactory epithelium and the olfactory bulb; the trigeminal pathway, in which particles are delivered along the trigeminal nerve system; the vascular pathway, in which particles are absorbed into the capillaries underlying the nasal mucosa; and other pathways, such as cerebrospinal fluid and the lymphatic system. [30][31][32] However, because of the mucociliary clearance mechanism in the nose, particles cannot be lodged in the nasal cavity for a long period, which limits the application of intranasally administered drug-loaded particles. 27,33 In recent years, gelatin and nonionic surfactants (such as Poloxamer 188) have been used to prepare nanoparticles due to their biocompatibility, biodegradability, low immunogenicity, and amenability for surface modification. [34][35][36] Nanoparticles modified with gelatin have a negative charge that can reduce mucociliary clearance, extend the residence time at the site of delivery, and enhance the therapeutic effect when administered intranasally. [17][18][19] In a previous study, we found that gelatin nanostructured lipid carrier-mediated intranasal delivery of basic fibroblast growth factor could enhance functional recovery in hemiparkinsonian rats. 37 PC-12 cells, a monoamine cell line derived from a pheochromocytoma in the adrenal medulla of a male rat, can express tyrosine hydroxylase and synthesize dopamine intracellularly, so are widely used in the study of PD models in vitro. 38 In our in vitro experiment, we used 6-OHDA to trigger apoptosis and then added SP-GNPs at different concentrations to investigate the effect of SP-GNPs on the growth of PC-12 cells. It is evident from the results of these investigations that SP can decrease apoptosis and enhance cell growth to a considerable degree. Further, within a certain range of concentrations, the degree of inhibition of cell apoptosis increases as the concentration of SP increases, with cells growing better and in larger numbers at higher SP concentrations. In our in vivo experiment, SP-GNPs were administered intranasally to rats with 6-OHDA-induced hemiparkinsonism, and these rats showed more functional improvement and less apoptosis than their counterparts that were not treated with intranasal SP-GNPs. Intranasal administration of SP-GNPs inhibited 6-OHDA-induced apoptosis and improved symptoms of hemiparkinsonism. With increasing concentrations of SP, rats with hemiparkinsonism showed more functional improvement, with further decreases in levels of apoptosis, indicating that the strength of the neuroprotective effect had a positive relationship with the SP concentration in the brain.
As a noninvasive strategy, GNP-mediated intranasal delivery of SP protects against 6-OHDA-induced apoptosis, and might constitute a practical therapy for PD patients in the future.
Structural, Band Gap Energy, and Magnetic Characters of Fe2.9Cr0.1O4 Nanoparticles for Preparing Ferrofluids
Ferrite nanoparticles have become interesting materials originating from their performances in many applications. In this paper, the preparation of metal-doped ferrites in the system of Fe2.9Cr0.1O4 nanopowders is reported. The Cr0.1Fe2.9O4 nanopowders were utilized to produce ferrofluids. Indonesian iron sand was used as a raw material to prepare the samples via a co-precipitation route. The Fe2.9Cr0.1O4 particles crystallized in a cubic spinel structure with a particle size and lattice parameter of about 9.4 nm and 8.36 Å, respectively. The band gap energy of the Cr0.1Fe2.9O4 nanopowders was 2.26 eV. Furthermore, the saturation magnetization of the powder was higher than that of the fluid as the effect of particle size and aggregation.
Introduction
In the nanoscience and nanotechnology fields, one of the materials that is intensively investigated is magnetic nanoparticles, especially in association with their biomedical applications. A recent article based on the Elsevier database reported that the number of articles on magnetic particles associated with biomedical applications has increased significantly, with exponential growth over the last decade [1]. Furthermore, in the form of Fe3O4 ferrofluids, magnetic nanoparticles show excellent performance regarding their physical, chemical, and biological characters. Ferrofluids can be defined as a stable colloidal suspension containing magnetic particles with sizes in the superparamagnetic range dispersed in organic or inorganic liquid carriers [2]. Commonly, organic or inorganic materials can be used as coating or layering agents to prevent aggregation of the Fe3O4 particles in ferrofluids. In terms of structure, Fe3O4 belongs to the spinel group with a general chemical formula of AB2O4, where A is the divalent metal (Fe 2+ ) and B is the trivalent metal (Fe 3+ ). The inverse spinel structure is formed when all divalent metallic ions and half of the trivalent metallic ions occupy the B sites, while the remaining half of the trivalent metallic ions occupy the A sites [3]. Therefore, the inverse spinel of the Fe3O4 particles can be expressed by the formula Fe3+(A)[Fe2+Fe3+](B)O4. Such a formula guides us to rearrange the structure easily by incorporating other metallic ions such as Mn, Co, Zn, Cu, Cr, and others to enhance the performances of the particles. In this experiment, we introduce chromium (Cr) as one of the transition metals in group VI-B, which has a special character due to its stable oxidation states. Cr-doped Fe3O4 in the system of Fe2.9Cr0.1O4 nanoparticles has three cation species, Fe 2+ , Fe 3+ , and Cr 3+ , which would change the ionic arrangement toward a better specific behavior. In general, the three metallic ions will be placed randomly in the B and A sites. Quadro and co-workers identified that Cr- and Fe-based catalysts have been utilized for a long period of time in commercial processes due to their high stability [4]. To be more effective and efficient in mass production, in addition to the importance of introducing Cr ions into Fe3O4 particles, it is also important to exploit iron sand from local sources in Indonesia as a raw material. Moreover, in this paper, the detailed nanostructure, functional groups, band gap energy, and magnetic behaviors of the prepared Fe2.9Cr0.1O4 in nanopowders and in ferrofluids are also reported.
Experimental Method
The main precursor (magnetite powder) was firstly purified from Indonesian sand through a mechanical process using a magnetic bar. The magnetite powder was mixed with HCl at room temperature using a magnetic stirrer to produce an iron chloride solution. The solution was then mixed with CrCl3 solution, followed by dropwise addition of NH4OH to obtain a precipitate. To obtain the powder sample, the precipitate was rinsed with H2O, followed by a calcination process at 100 °C. Meanwhile, to obtain the ferrofluid sample, the precipitate was directly coated with TMAH and dispersed in H2O. The obtained samples were characterized by means of XRF, XRD, VSM, FTIR spectroscopy, and UV-Vis spectroscopy. The data from all characterizations were finally analysed using qualitative and quantitative approaches.
Results and Discussion
Before investigating the structure and phase purity of the Fe2.9Cr0.1O4 particles, the elemental composition was investigated using an XRF machine at room temperature. Based on the data analysis, the sample had a Cr/(Cr + Fe) ratio of 4.1%. Therefore, to study the detailed crystal structure and phase purity of the Fe2.9Cr0.1O4 particles, we characterized the sample using an XRD machine at room temperature, and the result is shown in Figure 1. Based on the qualitative data analysis, the XRD pattern is similar to the patterns of the Fe3O4 and Mn-doped Fe3O4 nanoparticles reported in the literature [5]. It means that the Fe2.9Cr0.1O4 particles prepared from iron sand crystallize in a single phase. Further quantitative data analysis of the XRD pattern showed that the Fe2.9Cr0.1O4 particles have a spinel structure with a particle size of about 9.4 nm. The XRD pattern presents the crystal structure in the space group Fd-3mZ in the absence of other phases. Using Rietveld analysis with ICSD No. 29129 as the reference, the lattice parameters were found to be a = b = c ≈ 8.396 Å. Nguyen et al. reported that Cr-doped Fe3O4 particles show increasing lattice parameters with increasing Cr composition [6]. Additionally, their work showed that the shape and morphology of the magnetic nanoparticles remain spherical, with agglomeration. In order to investigate the structural behavior, the functional groups of the sample were investigated using an FTIR spectrometer, and the result is presented in Figure 2. The FTIR spectrum in the wavenumber range of 4000-400 cm-1 exhibits several transmission peaks at ~3400, ~1620, ~1400, ~580, and ~410 cm-1. The transmission peaks at wavenumbers of 3400 and 1620 cm-1 result from the stretching and bending modes of O-H groups [7,8]. Furthermore, the peaks at wavenumbers of ~1400, ~580, and ~410 cm-1 result from M-O (metal-oxygen) groups. It means that the divalent and trivalent ions of Fe and Cr successfully occupied the tetrahedral and octahedral positions. After investigating the structure of the Fe2.9Cr0.1O4 nanoparticles, the magnetic behaviors of the samples were investigated. The magnetic characters were measured using M-H experiments at ambient temperature, as presented in Figures 3-4. In the M-H experiment, the external magnetic field was varied from -1 T to 1 T. From Figures 3-4, it can be seen that all hysteresis curves have S-shaped patterns, indicating that all samples behave as superparamagnetic materials.
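Such S-shaped, hysteresis-free curves are conventionally fitted with the Langevin function introduced in the next paragraph. As an illustrative sketch only (hypothetical data and SciPy assumed; this is not the authors' analysis), a fit could be set up as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 1.380649e-23   # Boltzmann constant (J/K)
T = 300.0           # measurement temperature (K), assumed ambient

def langevin(B, Ms, mu20):
    """M(B) = Ms*(coth(x) - 1/x), x = mu*B/(kB*T); mu is expressed in units of 1e-20 J/T."""
    x = (mu20 * 1e-20) * B / (kB * T)
    return Ms * (1.0 / np.tanh(x) - 1.0 / x)

# Hypothetical M-B data (field in tesla, magnetization in emu/g);
# in a real analysis these arrays would come from the VSM measurement.
B = np.linspace(0.01, 1.0, 60)
M_meas = langevin(B, 35.0, 8.0) + np.random.normal(0.0, 0.2, B.size)

# Fit the saturation magnetization Ms and the mean particle moment mu
(Ms_fit, mu20_fit), _ = curve_fit(langevin, B, M_meas, p0=[30.0, 5.0])
print(f"Ms ~ {Ms_fit:.1f} emu/g, particle moment ~ {mu20_fit:.1f}e-20 J/T")
```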
Theoretically, the superparamagnetic state of the samples can be calculated or fitted using a Langevin function [3], as written in Equation 1:
M = MS [coth(µH/kBT) − kBT/(µH)]   (Equation 1)
where M and MS are the magnetization and saturation magnetization, µ and H are the magnetic moment and magnetic field, T is the temperature, and kB is the Boltzmann constant. As shown in Figures 3-4, the Langevin function fits the experimental data well both for the samples in fluids and in powders. The MS of the Fe2.9Cr0.1O4 nanopowders is higher than that of the ferrofluids. The phenomenon is attributed to the effect of the particle size and its clustering. In general, the secondary particle size and clustering of the magnetic nanoparticles in powders are larger than those of the ferrofluids. Physically, in the fluids, the secondary particle size and clustering are affected by coating the magnetic particles with TMAH and dispersing them in H2O [9]. In this situation, the building block of the magnetic nanopowders, in terms of their primary particles, is similar to the building block of the ferrofluids. As superparamagnetic materials, the particle size of the Fe2.9Cr0.1O4 both in the powders and in the fluids should be smaller than a Weiss domain of the corresponding particles in bulk form [10]. In this work, another fundamental character of the prepared Fe2.9Cr0.1O4 nanoparticles was investigated using a UV-Vis spectrometer, and the result is presented in Figure 5. This experiment was performed to study the optical band gap energy (Eg) of the sample. The Eg of the prepared Fe2.9Cr0.1O4 nanoparticles was determined by plotting the absorption coefficient (αhν)2 against the photon energy (hν) using Tauc's equation, as shown in Equation 2 [11]:
(αhν)2 = B(hν − Eg)   (Equation 2)
where B is a constant. Based on the fitting analysis, the Eg of the prepared Fe2.9Cr0.1O4 nanoparticles was about 2.26 eV, indicating that the value is in the range of semiconductor character [12]. Based on the above discussion regarding band gap energy, the Fe2.9Cr0.1O4 ferrofluids from Indonesian sand open up potential applications such as magnetic sensors based on magneto-optics. In our previous work, the Fe3O4 ferrofluids from Indonesian sand prepared by a co-precipitation method had excellent performance in terms of the optical polarization angle of the fluids under an external magnetic field [13]. Moreover, the ferrofluids showed a good, linear response between the intensity obtained by the photodetector and the external magnetic field varied from zero to 140 mT. Therefore, it is necessary to develop further research by employing the Fe2.9Cr0.1O4 ferrofluids produced in this work for magnetic sensors.
Conclusion
The Fe2.9Cr0.1O4 nanoparticles formed a spinel structure at the nanometric scale with a particle size of about 9.4 nm. The Fe-O, Cr-O, and H-O bonds dominated the functional groups of the Fe2.9Cr0.1O4 nanoparticles. The samples had superparamagnetic characteristics, associated with the S shape of the M-H curves. The Fe2.9Cr0.1O4 nanopowders have an MS value higher than that of the ferrofluids, attributed to the effect of the particle size and clustering of the magnetic particles. Moreover, the Eg of the Fe2.9Cr0.1O4 nanoparticles was about 2.26 eV, indicating a semiconducting material.
Disease and demography: a systems-dynamic cohort-component population model to assess the implications of disease-specific mortality targets Introduction The 2015 Sustainable Development Goals include the objective of reducing premature mortality from major non-communicable diseases (NCDs) by one-third by 2030. Accomplishing this objective has demographic implications with relevance for countries’ health systems and costs. However, evidence on the system-wide implications of NCD targets is limited. Methods We developed a cohort-component model to estimate demographic change based on user-defined disease-specific mortality trajectories. The model accounts for ageing over 101 annual age cohorts, disaggregated by sex and projects changes in the size and structure of the population. We applied this model to the context of Bangladesh, using the model to simulate demographic outlooks for Bangladesh for 2015–2030 using three mortality scenarios. The ‘status quo’ scenario entails that the disease-specific mortality profile observed in 2015 applies throughout 2015–2030. The ‘trend’ scenario adopts age-specific, sex-specific and disease-specific mortality rate trajectories projected by WHO for the region. The ‘target’ scenario entails a one-third reduction in the mortality rates of cardiovascular disease, cancer, diabetes and chronic respiratory diseases between age 30 and 70 by 2030. Results The status quo, trend and target scenarios projected 178.9, 179.7 and 180.2 million population in 2030, respectively. The cumulative number of deaths during 2015–2030 was estimated at 17.4, 16.2 and 15.6 million for each scenario, respectively. During 2015–2030, the target scenario would avert a cumulative 1.73 million and 584 000 all-cause deaths compared with the status quo and trend scenarios, respectively. Male life expectancy was estimated to increase from 71.10 to 73.47 years in the trend scenario and to 74.38 years in the target scenario; female life expectancy was estimated to increase from 73.68 to 75.34 years and 76.39 years in the trend and target scenarios, respectively. Conclusion The model describes the demographic implications of NCD prevention and control targets, estimating the potential increase in life expectancy associated with achieving key NCD reduction targets. The results can be used to inform future health system needs and to support planning for increased healthcare coverage in countries. INTRODUCTION Changes in population size and demographic composition have broad economic and social implications. Informed decisions regarding population-level policies and interventions hinge on robust population projections that delineate the dynamic interplay of demographic processes such as fertility, mortality and migration. Generating counts for population cohorts of interest determines investment in sectors like health, education, infrastructure and others. 1 2 We present a cohort-component population projection model to assess demographic changes associated with changes in the distribution of causes of death. Current population projections reflect a variety of assumptions about fertility, mortality and migration. 
[3][4][5][6] For instance, the UN produces eight variants of population projections, five of which are based on different trajectories of fertility, while mortality assumptions are determined by probabilistic trends of life expectancy at birth, and international migration is assumed either constant or zero. 3

Strengths and limitations of this study
► The model provides an understanding of how changes in disease-specific mortality may contribute to the demographic outlook of countries by simulating demographic evolution paths corresponding to prespecified mortality rate outlooks.
► The model tracks population outcomes at a highly disaggregated level and can produce consistent and comparable cross-country estimates for a set of demographic indicators.

The existing population projection models typically emphasise the role of fertility but do not provide an understanding of how changes in disease-specific mortality rates may contribute to the demographic outlook of countries. Preventable deaths and disability caused by communicable diseases, maternal, perinatal and nutritional conditions (CMPN), non-communicable diseases (NCDs) and injuries constitute core concerns across nations. Among these, cardiovascular diseases (CVDs) are in the lead, accounting for 15.2 million deaths of all 56.9 million deaths worldwide in 2016. 7 Given the rising significance of NCDs in global health, the 2030 Sustainable Development Goals (SDGs) aim to reduce premature mortality from the four major NCDs (CVDs, diabetes, cancer and respiratory diseases) by one-third by 2030, relative to 2015 levels. 8 With the adoption of the WHO Global NCD Action Plan by the World Health Assembly in 2013, the WHO Member States agreed on a time-bound voluntary target of attaining a 25% relative reduction in overall mortality from the four leading NCDs by 2025. 9 In a similar vein, the WHO 2013 Global Program of Work (GPW 2013) set the target of 20% relative reduction in the premature mortality (age 30-70 years) from these NCDs between 2019 and 2023. 10 These objectives occur in the context of many budgetary and planning constraints that affect low-income and middle-income countries (LMICs). Further, variations in the incidence and prevalence of diseases across sex and age cohorts require policymakers to formulate targeted interventions and policies. Understanding the evolution of different age-cohorts resulting from shifts in disease-specific mortality over time can inform the resource needs for national scale-up of interventions to attain SDG health targets. The dynamic population projection model in this study simulates a range of demographic evolution paths corresponding to pre-specified disease-specific mortality outlooks. The results provide demographic information needed to plan for services to meet future demands of different segments of the population. Although the model in this study is applied to Bangladesh, it is replicable across different countries and can serve as a tool for planners to simulate user-defined scenarios corresponding to assumed fertility, mortality, and international migration trajectories. Over the last several decades, Bangladesh has made substantial progress in disease prevention and control of childhood communicable diseases, but NCDs have emerged as the primary cause of death and disability in the country. 11 12 In response, the Government of Bangladesh has formulated an NCD action plan to reduce NCDs and associated risk factors through a multisectoral coordinated approach.
13 Bangladesh NCD prevention and control targets are consistent with the 2030 SDGs and with the WHO South-East Asia regional NCD 2025 objectives of reducing by 25% premature mortality from CVDs, diabetes, respiratory diseases and cancer. 13 14 Attainment of these targets entails population-level prevention and treatment initiatives. A first step in planning for such initiatives is information on the demographic outcomes associated with accomplishing the health objectives of these initiatives. 15 To this end, the present study models the demographic outlook for Bangladesh from 2015 to 2030 under the assumption of attaining the 2030 SDG target of reducing premature mortality (age 30-70 years) from four major NCDs by one-third. More specifically, we produce the demographic outlook for Bangladesh corresponding to a one-third reduction (ie, ~30%) in the unconditional probability of dying between the exact ages of 30 and 70 years from any of CVDs, cancer, diabetes, or chronic respiratory diseases.

METHODS AND DATA
Patient and public involvement
Patients or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

The systems-dynamic cohort-component population model
We develop a cohort-component population projection model that tracks each sex-specific and age-specific cohort of people throughout its lifetime, subject to assumed age-specific and sex-specific mortality, fertility and migration rates. 6 16 The model represents a 'systems' structure defined by the stocks and flows and the connections between them. [17][18][19][20] In this model, the population in each year is the stock variable, while births, deaths and international migration represent the flows. The model starts with defining the initial-year population, disaggregated by sex and age-cohorts, followed by defining the fertility, mortality and migration attributes of each cohort throughout the projected horizon. In other words, the model in this study resembles an ageing chain where, after birth, each birth cohort progresses from childhood (first stock) to old age (last stock) unless the individual dies and leaves the system. Figure 1 presents an overview of the model structure using a stock-and-flow diagram.

Figure 1: Overview of the cohort-component population model: stock, flows and simulation options. Note: The model is developed using the Vensim DSS (V.8) simulation platform. cmpn, communicable, maternal, perinatal and nutritional conditions; cvd, cardiovascular diseases; dbt, diabetes mellitus; dr, death rate; npl, neoplasms; oth, other non-communicable diseases and injuries; rsp, respiratory disease.

The population's dynamic path begins with the initial population stock observed in 2015 for Bangladesh, disaggregated by sex and age. In each subsequent year, changes in the annual population stock occur through adding births, subtracting deaths and through net international migration (emigration minus immigration), as expressed in Equation 1. In each year, people in each age cohort leave the system due to deaths (Ds,A,t) and net international migration (NMs,A,t). The causes of deaths are aggregated into six major types of disease categories: CMPN; neoplasms; diabetes; CVDs; respiratory diseases; and other NCDs and injuries (othNCDs; GHE categories II.D., II.E., II.F., II.J., II.K., II.L., II.M., II.N., II.O., II.P., III.A., II.B.). Sex-, age-group- and disease-specific death rates determine the number of deaths each year.
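To make the ageing-chain bookkeeping concrete, the sketch below performs one annual update step for a single sex. The array layout, the handling of the open-ended 100+ cohort and all numbers are illustrative assumptions, not the authors' Vensim implementation:

```python
import numpy as np

AGES = 101  # annual cohorts 0..100+, per sex

def advance_one_year(pop, death_rate, net_mig_rate, births):
    """One annual step of the cohort-component ageing chain for a single sex.

    pop, death_rate, net_mig_rate: arrays of length 101 (ages 0..100+)
    births: number of newborns of this sex entering age 0 next year
    """
    deaths = pop * death_rate              # D_{s,A,t}
    net_migration = pop * net_mig_rate     # NM_{s,A,t} (positive = net outflow)
    survivors = pop - deaths - net_migration

    nxt = np.zeros_like(pop)
    nxt[0] = births                        # the new birth cohort enters at age 0
    nxt[1:101] = survivors[0:100]          # each cohort ages by one year; 100+ exits the system
    return nxt, deaths

# Hypothetical single-sex example: flat age structure, uniform rates
pop0 = np.full(AGES, 1_000.0)
pop1, deaths = advance_one_year(pop0, np.full(AGES, 0.01), np.full(AGES, 0.002), births=1_050.0)
print(round(pop1.sum()), round(deaths.sum()))
```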
The online supplemental table S1 maps these broad categories with the disaggregated WHO GHE causes of death codes. 21 Net international migration by sex, age and over time is defined as NMs,A,t = γs,A,t × Ps,A,t, where γs,A,t is the sex- and age-specific net international migration rate over the years. Depending on country contexts, sex- and age-specific net migration rates determine the number of people removed from (or added to) the population due to migration to (or from) other countries. The model allows the option of simulating different scenarios by setting sex-specific and age-specific fertility rates; sex-specific, age-specific and disease-specific death rates; and net-migration rates, for each year over the analytical time-horizon (2015-2030). For instance, scenarios of different mortality trends could reflect status-quo (ie, constant death rates over time), trajectories based on historical trends, trajectories based on the predicted impact of disease prevention interventions or reductions in risk factor exposures informed by the literature, or user-defined mortality outlooks based on national plans. We introduce a set of forcing functions with a default or status quo value of 1 but allow scaling-up (scaling-down) functions (over time) corresponding to trend, targets and/or any other implementation sequences. Death rates scale up/down over time as δs,A,d,t = Iδs,A,d,t × δs,A,d,2015. The model generates population counts for 202 annual age-sex cohorts consisting of age 0-100+ years for men and women, respectively. Data on fertility, mortality and net-migration rates were only available by age-group and were assigned to corresponding annual cohorts within each age-group. We used two age-groupings depending on data availability, separately for men and women: (1) six broad age-groups: age 0-4, age 4-14, age 15-29, age 30-49, age 50-69, age 70 and above; (2) 5-year age-groups: age 0-4, age 5-9, age 10-14, age 15-19, age 20-24, age 25-29, age 30-34, age 35-39, age 40-44, age 45-49, age 50-54, age 55-59, age 60-64, age 65 and above.

Demographic indicators
The model produces several key demographic indicators, including population counts and age structure; total, child and old-age dependency ratios; the number of births; crude birth rate; total fertility; net reproduction rate; the rate of natural population increase; the number of deaths by diseases; infant and child mortality rates; crude death rate; life expectancy at birth and at each age; the probabilities of dying between age 30 and 70; and total years of lives lost by diseases. The online supplemental table S2 provides brief definitions of the indicators. 22

BANGLADESH CASE STUDY: DEMOGRAPHIC IMPLICATIONS OF SDG NCD MORTALITY TARGETS
Baseline data
To initiate the population dynamics, we needed base year population, age-specific fertility rates, age-specific death rates and age-specific net migration rates. We used the 2015 annual cohort (age 0-100) population data from the UN population projection (medium variant); age-specific fertility rates reported by the Bangladesh Bureau of Statistics; age-specific total death rates obtained from the UN life tables for Bangladesh for the year 2015; and age-specific net migration rates from the Bangladesh Bureau of Statistics. 3 23-25 The UN estimate of the ratio of sex at birth for Bangladesh is 1.05 for their entire analytical horizon; we used the same for this model. 25 The UN life table for Bangladesh assumes a 100% mortality rate for ages 85 and above.
25 26 Our model assumed that all people in the last age cohort (100+ years) leave the system (ie, die) with a 100% mortality rate, with interpolated death rates for ages 85-99. The total net migration rate reported in the UN population projection is −2.3/1000 population; our model assumed the same statistic when applying age-specific net migration rates to the baseline (2015) population. 22 25 We used WHO GHE disease burden (mortality) data by cause, age and sex 7 to decompose the total death rates by six broad categories of diseases, so that δs,A,d,t=2015 = δs,A,t=2015 × (GHE Death s,A,d,t=2015 / Σd GHE Death s,A,d,t=2015) (equation 10), where equation 10 is used to decompose the baseline (year=2015) sex-specific and age-specific death rates into six disease-specific rates and GHE Death s,A,d,t=2015 represents the number of deaths by disease obtained from WHO GHE mortality data for the year 2015. δs,A,t (ie, sex-specific and age-specific death rates at year t) is the sum of death rates from six broad categories of diseases (d). The online supplemental table S3 reports the 2015 baseline data used in the model, including the death rates by six broad disease categories. Scenarios We compared three demographic outlooks for Bangladesh: status quo, trend and target. The three scenarios differ in terms of their assumed mortality trajectories, keeping fertility and net migration trajectories the same across scenarios. The UN population projection uses five fertility variants: low, medium, high, constant-fertility and instant-replacement-fertility. For instance, for Bangladesh, during 2015-2020 the total fertility rates are assumed to be 2.2, 2.05 and 1.68 for the high, medium and low variants, respectively. For the 2025-2030 period the total fertility rates are assumed to be 2.26, 1.82 and 1.42 for the high, medium and low variants, respectively. 3 23 We use the 2015 age-specific fertility rates reported by the Bangladesh Bureau of Statistics (BBS), setting the total fertility rate at 2.10. Then, using the UN probabilistic projections for age-specific fertility rates for the 2025-2030 period, we scaled down the respective 2015 age-specific fertility rates to arrive at a total fertility rate of 1.82 by 2030. For the interim years, the model uses interpolated linear trends. We used the 2015 sex-specific and age-specific net migration rates obtained from BBS, which remain constant during the 2015-2030 period. 23 24 The study uses three variants of mortality trajectories. The 'status quo' scenario entails that the 2015 disease-specific mortality rates remain constant for the analysis horizon, so that Iδs,A,d,t = 1 for the 2015-2030 period. The 'trend' scenario adopts sex-, age-group- and disease-specific mortality rate trajectories based on the latest WHO GHE regional mortality projections for 2016-2030 for Southern Asia, consisting of Bangladesh and other neighbouring countries. 27 28 We estimated the death rates by sex, age-groups and six broad disease categories for 2016 and 2030 from the number of deaths and total population obtained from the WHO GHE study, and then produced a matrix of scale factors such that Iδs,A,d,t=2030 = δs,A,d,t=2030 / δs,A,d,t=2015, where Iδs,A,d,t=2030 are sex-specific, age-specific and disease-specific scale factors for the death rates in 2030 relative to 2015 levels. The interim years use interpolated scale factors and corresponding mortality rate values.
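As a worked illustration of the two calculations above, splitting a total death rate into cause-specific rates in proportion to GHE death counts, and turning two GHE reference years into trend scale factors, a minimal Python sketch might look as follows. The counts and the baseline rate are invented, and holding the population denominator fixed is a simplification made so that the ratio of deaths can stand in for the ratio of death rates.

```python
# Hypothetical GHE death counts for one sex/age-group cell (illustrative numbers only)
ghe_2015 = {"cmpn": 120, "npl": 90, "dbt": 30, "cvd": 260, "rsp": 70, "oth": 130}
ghe_2030 = {"cmpn": 80, "npl": 105, "dbt": 31, "cvd": 230, "rsp": 60, "oth": 125}
total_rate_2015 = 0.0075   # total death rate for this cell from the UN life table (illustrative)

# Equation 10-style decomposition: share the total rate out by cause-specific death counts
total_deaths_2015 = sum(ghe_2015.values())
rate_2015 = {d: total_rate_2015 * n / total_deaths_2015 for d, n in ghe_2015.items()}

# Trend-scenario scale factors: the 2030 level relative to the 2015 level for each disease
scale_2030 = {d: ghe_2030[d] / ghe_2015[d] for d in ghe_2015}

for d in rate_2015:
    rate_2030 = rate_2015[d] * scale_2030[d]
    print(f"{d}: 2015 rate {rate_2015[d]:.5f}  scale {scale_2030[d]:.2f}  2030 rate {rate_2030:.5f}")
```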
For instance, the WHO GHE estimate projects that by 2030, the death rates of infectious, maternal, perinatal and nutritional conditions would reduce by 21% for women aged 70 and above and 48% for men aged 15-29 years. Accordingly, we set the death rate trajectories for the corresponding cohorts to reflect 21% and 48% reductions in 2030 from 2015, respectively. Similarly, depending on sex and age-groups, the reductions of death rates range from 3.4% to 22.9% for CVDs; 9.3% to 22.9% for respiratory diseases; and 3.9% to 14.1% for other NCDs and injuries. The trend projections for neoplasms range from 9.3% to 22.9% increases in the death rates by 2030. The changes in diabetes death rates range from a reduction of 20.7% to an increase of 3.4%. The online supplemental table S4 reports all sex, age-group and disease-specific scale factors (Iδs,A,d,t) for the trend scenario. The third scenario is the 'target' scenario, which entails relative reductions in the mortality rates that result in approximately one-third reduction (ie, ~30%) in the unconditional probability of dying between the ages of 30 and 70 years from any one of CVDs, cancer, diabetes or chronic respiratory diseases between 2015 and 2030. For the other two disease categories (ie, CMPN; and other NCDs and injuries) we use the same mortality rate trajectories as in the 'trend' scenario. The mortality rate trajectory for the four major NCDs follows the trend scenario until 2020 for all age groups, and then declines by 33% during 2015-2030 for ages 30-70 years. Population outlooks The status quo, trend and target scenarios project 178.9, 179.7 and 180.2 million population in 2030, respectively. Figure 2 shows the projections for total population along with the three main flow variables in the model, that is, total births, total deaths and total net international migration. Given that all fertility and migration assumptions are the same across all three scenarios, differences in the projected population numbers between scenarios reflect differences in the death rate trajectories. The assumption of a steady decline in total fertility from 2.10 in 2015 to 1.82 by 2030 is common to all three scenarios. Figure 3 shows the inverted age-sex pyramid illustrating the distribution of various age groups in Bangladesh in 2015 (left panel) and 2030 (right panel). The population is distributed along the horizontal axis, with men shown on the left and women on the right. The male and female populations are broken down into 5-year age groups represented as horizontal bars along the vertical axis, with the youngest age groups (age 0-4) at the top and the oldest at the bottom (age 65 and above). The shape of the population pyramid gradually evolves during 2015-2030 based on fertility, mortality and international migration trends. The apparent cone-shaped population pyramid in 2015 appears more symmetric in 2030, consistent with population ageing over the analytical horizon. The evolving population structure is also reflected in figure 4. The rapid reductions in infant and child mortality accompanied by decreasing fertility led to a continuous reduction in the child dependency ratio (ie, ratio of population age 0-14 and age 15-64) (0.45 in 2015 vs 0.35 in 2030 in the trend scenario). On the other hand, as the annual cohorts progress through the analytical period, the old-age dependency ratio (ie, ratio of population age 65 and above and age 15-64), after remaining relatively flat during 2015-2020, starts to rise beyond 2020 (0.078 in 2015; 0.077 in 2020; and ~0.10 in 2030 for the three scenarios).
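The dependency ratios quoted here and in the next paragraph are simple functions of the age structure. A minimal sketch of the calculation, with invented population counts chosen to roughly mimic the 2015 values in the text, is shown below.

```python
def dependency_ratios(pop_0_14, pop_15_64, pop_65_plus):
    """Return the child, old-age and total dependency ratios for a given age structure."""
    child = pop_0_14 / pop_15_64
    old_age = pop_65_plus / pop_15_64
    return child, old_age, child + old_age

# Illustrative 2015-like counts in millions (not the model's actual outputs)
child, old_age, total = dependency_ratios(50.0, 111.0, 8.7)
print(f"child {child:.2f}  old-age {old_age:.3f}  total {total:.2f}")
# -> child 0.45  old-age 0.078  total 0.53
```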
The total dependency ratio (ie, ratio of population age 0-14 and age 65 and above, and age 15-64) registers a relatively quick decline from 0.52 in 2015 to 0.45 in 2025 and remains at 0.45 until 2030. The annual number of births is determined by the age-specific fertility rates and the number of women of reproductive age 15-49 years. The trajectory of the number of women of reproductive age is affected by the number of deaths and international migration for the corresponding cohorts. In figure 5, for the trend and target scenarios, it is evident that the number of women aged 15-19 begins to decline after 2021, and the number of women aged 20-24 declines after 2024. The number of women in all other older age groups increases during 2015-2030, with older cohorts showing larger growth. Figure 6 presents the projected mortality trajectories by disease categories. The number of deaths increases for all disease categories, with the exception of CMPN, which increases only in the status quo scenario. In the status quo scenario, population decreases moderately for younger cohorts (ie, age <25) and increases more for the older cohorts age 25 and above during the 2015-2030 period, leading to a net increase in the total population. Consequently, the assumed constant death rates for CMPN in the status quo scenario result in a net increase in total deaths from CMPN. On the other hand, the continuous decline in death rates and a near-flat population trend with a slight decrease in numbers of children and adolescents lead to a reduction in deaths from CMPN in the trend and target scenarios. In all scenarios, NCD deaths rise with the rising population; however, the number of deaths is much smaller in the target scenario. The share of CMPN in total deaths declines from 26% in 2015 to 23%, 17.6% and 19.1% in 2030 under the status quo, trend and target scenarios, respectively. On the other hand, the contribution of the four major NCDs (CVD, respiratory diseases, diabetes and neoplasms) to total deaths increases from 54.9% in 2015 to 58.9%, 63.4% and 60.2% in 2030 under the status quo, trend and target scenarios, respectively. Table 1 shows the number of deaths under the three mortality scenarios and the number of deaths averted under the target scenario compared with the status quo and trend. Of the four major NCDs, CVD is the major killer, followed by neoplasms, respiratory diseases and diabetes. In 2025, the model projects 375 000, 357 000 and 334 000 deaths from CVD under status quo, trend and target, respectively, which entails 23 000 and 41 000 CVD deaths averted under the target scenario compared with the trend and status quo scenarios. Over 2015-2030, the target scenario would avert a cumulative 485 000 CVD deaths (285 000 men and 199 000 women) and 282 000 CVD deaths (162 000 men and 120 000 women) compared with the status quo and trend scenarios, respectively. Under the target scenario, the cumulative (2015-2030) number of deaths averted from the four major NCDs is projected to be about 897 000 (500 000 men and 396 000 women) and 596 000 (291 000 men and 305 000 women), compared with the status quo and trend scenarios, respectively. The online supplemental table S5 shows the projections of years of lives lost (YLL) in the three scenarios, and YLL averted in the target scenario compared with status quo and trend.
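Years of lives lost, the quantity tabulated in supplemental table S5, are obtained by weighting each death by the remaining life expectancy at the age at which it occurs. A minimal sketch of that aggregation is shown below; the age bands, reference life expectancies and death counts are all invented for illustration.

```python
# Assumed remaining life expectancy at the midpoint of each age band (illustrative)
reference_life_expectancy = {"30-49": 42.0, "50-69": 24.0, "70+": 10.0}

# Hypothetical CVD deaths averted in one year under the target scenario, by age band
cvd_deaths_averted = {"30-49": 4_000, "50-69": 12_000, "70+": 7_000}

yll_averted = sum(cvd_deaths_averted[band] * reference_life_expectancy[band]
                  for band in cvd_deaths_averted)
print(f"YLL averted from CVD in this year: {yll_averted:,.0f}")   # 526,000 life-years
```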
Compared with the status quo mortality trajectories, the attainment of NCD targets would avert a cumulative (2015-2030) 14.9 million YLL (ie, 7.74, 2.2, 0.49 and 4.49 million YLL averted from CVD, respiratory diseases, diabetes and neoplasms, respectively). Compared with the trend mortality trajectories, the attainment of NCD targets would avert a cumulative (2015-2030) 12.16 million YLL (ie, 5.30, 0.92, 0.64 and 5.30 million YLL averted from CVD, respiratory diseases, diabetes and neoplasms, respectively). Table 2 summarises the projected mortality indicators under the three scenarios. Since the drivers of infant and child mortality are primarily CMPN diseases, the magnitudes of reduction are similar in the trend and target scenarios. Large reductions in the probabilities of premature deaths (ie, between age 30-70) are projected in both scenarios, and the reduction is much larger in the target scenario. The probability of death for men between age 30-70 from any of the four major NCDs decreases from 219 per 1000 people in 2015 to 198 and 153 per 1000 people in 2030 in the trend and target scenarios, respectively. The probability of premature death for women from the four major NCDs decreases from 199 per 1000 people in 2015 to 186 and 138 per 1000 people in 2030 in the trend and target scenarios, respectively. For the target scenario, these entail an overall 30% reduction in the probability of premature deaths from the four major NCDs. DISCUSSION The cohort-component model in this study projects the demographic outlook of a population using a systems-dynamic process determined by inter-relationships between population determinants, including those affected by policy actions. 2 17 29 The strengths of this model are several. First, it is replicable as it uses established principles about the dynamics of the population process. Second, it can produce consistent and comparable cross-country estimates that are easy to update using country data across multiple countries. Third, it can provide focused estimates for target groups of interest because it tracks population outcomes at a highly disaggregated level. In the same vein, the model can be flexibly adapted to the intended disaggregation schemes (eg, more aggregate) of population cohorts and disease categories. Finally, the model outcomes can be potentially linked to other dynamic inputs related to health systems, education, the environment, housing and city planning, infrastructure, energy and utilities, and the like. 29 The main contribution of the model used in this study is in estimating the expected demographic shifts associated with different disease-specific mortality trajectories. The resulting estimates inform the effects of proposed NCD control targets, linking the number of deaths averted by achieving these targets to demographic shifts in the population. 29 The model in this study has several limitations. The cohort-component method does not explicitly incorporate socio-economic determinants of population change. The evolution of fertility, mortality and migration over time is not endogenously determined; the respective trajectories are set exogenously using informed assumptions. To that effect, the model outcomes are projections based on a set of assumptions about trajectories of mortality, fertility and migration. The objective is not to make a perfect prediction of the future, but to assess comparative differences in population trajectories resulting from different health policy scenarios, keeping other input assumptions constant.
Therefore, the model outcomes should not be interpreted as a perfect forecast but are based on conditional calculations showing what the future population would be if a particular set of reasonable assumptions were to hold true. Using similar assumptions but a different approach, a global study by Cao et al quantified the potential gains in average expected life-years lived between 30 years and 70 years of age worldwide should the SDG target of a one-third reduction in premature mortality from the four major NCDs be achieved, as well as the maximum gains if all premature mortalities from these diseases were eliminated. 30 While the model in our paper captures differences in mortality scenarios, it does not capture the extent of disabilities averted from attaining the targets. Also, the scenarios do not consider the mortality implications of the recent COVID-19 pandemic in Bangladesh. The model generates the evolution of annual cohorts and population structure during 2015-2030 using demographic indicators for Bangladesh that are consistent with those offered by international agencies. [3][4][5] For instance, while the model replicates the baseline (2015) demographic indicators as reported in UN population projections, the population shares in 2030 for the 0-14, 15-64 and 65 and above years old age-groups in the UN medium variant projections versus the model trend projections compare as follows: 22.9 versus 23.8; 69.7 versus 68.8; and 7.4 versus 7.4, respectively. This model captures dynamic population outflows based on deaths from disaggregated disease categories, allowing comparison between disease-specific mortality scenarios. We estimated that by attaining NCD targets in compliance with SDG 2030 goals, people in Bangladesh will live longer by about 3 years on average (3.28 and 2.71 years for men and women, respectively). Over the 15-year analysis period, a cumulative 1.73 million all-cause deaths (996 000 men and 736 000 women) and 584 000 all-cause deaths (284 000 men and 300 000 women) would be averted in the NCD target scenario compared with the status quo and trend scenarios, respectively. In the target scenario, the cumulative number of deaths averted from the four major NCDs is projected to be 896 000 (500 000 men and 396 000 women) and 597 000 (291 000 men and 305 000 women), compared with the status quo and trend scenarios, respectively. These estimates inform the potential benefits as well as trade-offs in health and demographic outcomes associated with accomplishing current NCD targets. Contributors MJH conceptualised the study, implemented the methodology, developed the modelling framework in the Vensim software, acquired data and led the formal analysis and write-up. BKD contributed to the study plan and analysis, model development, interpretation of results and critical review of the paper. DK contributed to the study plan and analysis, interpretation of results and critical review of the paper. All authors critically reviewed the manuscript and approved the final version. Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors. Disclaimer The findings and conclusions of this report are those of the authors only and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
2021-05-15T06:16:55.633Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "97750caa0aedaedeeb434ffbfd040d70ae2013c2", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/11/5/e043313.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "7237d1973b72655801284e70fdc57c8081d85c31", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
53766954
pes2o/s2orc
v3-fos-license
Application of MRI Post-processing in Presurgical Evaluation of Non-lesional Cingulate Epilepsy Background and Purpose: Surgical management of patients with cingulate epilepsy (CE) is highly challenging, especially when the MRI is non-lesional. We aimed to use a voxel-based MRI post-processing technique, implemented in a morphometric analysis program (MAP), to facilitate detection of subtle epileptogenic lesions in CE, thereby improving surgical evaluation of patients with CE with non-lesional MRI by visual inspection. Methods: Included in this retrospective study were 9 patients with CE (6 with negative 3T MRI and 3 with subtly lesional 3T MRI) who underwent surgery and became seizure-free or had marked seizure improvement with at least 1-year follow-up. MRI post-processing was applied to pre-surgical T1-weighted volumetric sequence using MAP. The MAP finding was then coregistered and compared with other non-invasive imaging tests (FDG-PET, SPECT and MEG), intracranial EEG ictal onset, surgery location and histopathology. Results: Single MAP+ abnormalities were found in 6 patients, including 3 patients with negative MRI, and 3 patients with subtly lesional MRI. Out of these 6 MAP+ patients, 4 patients became seizure-free after complete resection of the MAP+ abnormalities; 2 patients didn't become seizure-free following laser ablation that only partially overlapped with the MAP+ abnormalities. All MAP+ foci were concordant with intracranial EEG ictal onset (when performed). The localization value of FDG-PET, SPECT and MEG was limited in this cohort. FCD was identified in all patients' surgical pathology except for two cases of laser ablation with no tissue available. Conclusion: MAP provided helpful information for identifying subtle epileptogenic abnormalities in patients with non-lesional cingulate epilepsy. MRI postprocessing should be considered to add to the presurgical evaluation test battery of non-lesional cingulate epilepsy. INTRODUCTION Surgical management of patients with cingulate epilepsy (CE) is highly challenging, especially in the setting of negative MRI. Due to its mesial and deep location from the cerebral surface as well as the absence of unique ictal manifestations, scalp video-electroencephalography (EEG) may be misleading or nonlocalizable (1)(2)(3)(4). The fast propagation of seizure activities originating from cingulate cortex (CC) within the limbic network (5), complicated functional connectivity between homotopic cingulate and sensorimotor cortex (3,6), and diffuse bilaterally secondary synchrony of epileptiform discharges from cingulate lesions (2, 7) all contribute to the difficulty in localizing CE. A confirmed MRI lesion can contribute directly to the identification of the epileptogenic zone (EZ) (8). When patients have no apparent lesions on the MRI, presurgical evaluation and surgical management can be particularly difficult, as seizure origin could be strongly influenced by the availability of collective expertise and experience in semiology, neurophysiological exploration, and functional imaging interpretation (4,8). Previous studies with voxel-based MRI post-processing using a morphometric analysis program (MAP) (9) combined with visual MRI analysis indicated high sensitivity in the identification of subtle epileptic lesions (10)(11)(12)(13)(14); MAP+ findings was reported to provide valuable targets for invasive evaluation and resection (15). 
However, there was no study on the post-processing neuroimaging characteristics of CE with a normal pre-surgical MRI. In the current study, we aimed to investigate the usefulness of voxel-based MRI post-processing to detect subtle abnormalities in CE with a negative pre-surgical MRI. In relation to the MAP findings, we examined the non-invasive electro-clinical characteristics and functional imaging findings in these patients. When possible, concordance with intracranial EEG findings was investigated. Patients This retrospective study was approved by the institutional review boards of two hospitals: Cleveland Clinic Foundation (CCF) and the Second Affiliated Hospital of Zhejiang University (SAHZU). We reviewed a consecutive series of patients who had surgery at CCF from January 2008 to December 2016 and SAHZU from January 2013 to April 2017. The inclusion criteria were as follows: (1) intracranial-EEG (ICEEG) confirmed focal cingulate ictal onset during recorded habitual seizures, or resection of the cingulate cortex with/without adjacent cortex rendered the patient seizure-free or with marked seizure improvement over 1-year follow-up; (2) preoperative MRI and postoperative MRI/CT data were available; (3) preoperative MRI was considered negative or suspicious of a subtle lesion during the multidisciplinary patient management conferences (PMC). Patients were excluded if they (1) had poor MRI quality; (2) had a definite lesion in the cingulate cortex on MRI; and (3) seizures recurred without a marked improvement after surgery. (Abbreviations: CE, cingulate epilepsy; CC, cingulate cortex; MAP, morphometric analysis program; FCD, focal cortical dysplasia.) The vertical anterior/posterior commissure lines (VAC/VPC) were used as a landmark to divide the cingulate cortex into three parts: the anterior cingulate, located rostral to the VAC; the middle cingulate, located between VAC and VPC; and the posterior cingulate, located caudal to the VPC line (2). Presurgical Evaluation The surgical strategy was discussed based on pre-surgical evaluation including history, semiology, video scalp-EEG, MRI, FDG-PET, subtraction ictal SPECT co-registered with MRI (SISCOM), magnetoencephalography (MEG) and ICEEG. Semiology based on history and video-EEG was evaluated with classifications developed by Lüders et al. (16). Results of presurgical evaluation tests were obtained from chart reviews of the patients' clinical files. Data Acquisition and Analyses MRI post-processing was performed with MAP07 within MATLAB 2015a (MathWorks, Natick, Massachusetts) and analyzed on a voxel basis (9) with comparison to a normal database consisting of 90 normal controls (17). Patients from CCF were scanned by 3.0-T MRI scanners (Trio or Skyra, SIEMENS, Erlangen, Germany) with T1-weighted Magnetization Prepared Rapid Acquisition with Gradient Echo images; patients from SAHZU were scanned with a 3.0-T MRI scanner (MR750, GE Healthcare) with a 3-dimensional (3D) T1-weighted Spoiled Gradient Recalled Echo sequence. Detailed parameters can be found elsewhere (18). The final outputs of MAP consisted of three feature maps: the junction, extension, and thickness maps. The junction map is sensitive to blurring of the gray-white matter junction; the extension map is sensitive to abnormal gyration and extension of gray matter into white matter; the thickness map is sensitive to abnormal cortical thickness (9).
A blinded reviewer (Shan Wang) used a z-score threshold of 3 to identify candidate MAP+ regions in the junction file and then examined the suspect on extension (Z>6) and thickness (Z>4) files. The abnormality was reaffirmed by a neuroradiologist (SEJ), checking pre-operative MRI including T1-weighted, T2-weighted and FLAIR sequences to confirm MAP+ positive regions. In all MAP+ patients, we used SPM12 to co-register preoperative T1weighted images, MAP files and postoperative MRI images in order to confirm whether the location of the MAP+ regions was included in the resection. Pathology and Outcome Surgical pathology, when available, was re-reviewed by dedicated neuropathologists from each hospital. The diagnosis and classification of FCD were performed according to the ILAE guidelines (19). Postoperative seizure outcomes were determined according to Engel's Classification (8). Engel Class 1 (seizure-free) and 2 (>90% reduction) were regarded as marked improvement of seizure frequency (2,8). Patient Population Out of the 1,518 patients with a localized resection from the CCF surgical database, 21 patients had resection of the cingulate cortex; 17 of the 21 patients had strictly non-lesional MRI or had subtle cingulate abnormalities. Ten patients were further excluded because the invasive EEG onset was not merely limited to the cingulate region, or the resection included but extended beyond the cingulate cortex, or there was no marked seizure improvement after surgery. Out of the 240 patients with a localized resection from SAHZU, 7 patients had resection of the cingulate cortex; 3 patients had strictly non-lesional MRI. One patient was further excluded because the invasive EEG onset was not merely limited to the cingulate cortex. Therefore, a total of 9 patients were identified from the two Epilepsy Centers (7 from CCF), including 5 from anterior CE, 3 from middle CE and 1 from posterior CE. Six were females; the median age at surgery was 22 (range, 14.5-38) years; the median epilepsy duration was 60 (range, 13-173) months. Six patients with negative MRI underwent ICEEG monitoring, which confirmed cingulate focal ictal onset during their habitual seizures. Subtle CC abnormalities in three patients were identified during re-review at PMC and no ICEEG was recommended per PMC consensus for these 3 patients. Detailed clinical information, results of pre-surgical evaluation, pathology, and postsurgical seizure outcomes were summarized in Table 1. Non-invasive Pre-surgical Evaluation On scalp EEG, ictal onset lateralized to the ipsilateral hemisphere (fronto-centro-parietal = 1, central = 1, frontal = 2, temporal = 1, hemisphere = 1) in 6 of the 9 patients. FDG-PET was performed in all 9 patients; in only 2 patients, hypometabolism overlapped with (and also extended beyond) the CC (P1 and P9). Ictal SPECT was successfully obtained in 4 of the 9 patients (injection time: 12-16 s); the hyperperfusion areas contained the CC only in one patient (P1). MEG was performed in 5 of the 9 patients; positive findings were found in 4 patients, and only 2 of the 4 patients had MEG findings overlapping with the CC (P1 and P2, both loose clusters). MAP Findings The MAP findings are illustrated for all 9 patients in Figure 1. Single MAP+ abnormalities were found in 6 patients (P1-P6), including 3 of the 6 patients with negative MRI, and 3 patients with subtly lesional MRI. 
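Conceptually, the MAP review described above, a z > 3 screen on the junction map checked against z > 6 and z > 4 on the extension and thickness maps, amounts to thresholding voxel-wise z-score images and keeping the clusters that survive. MAP itself runs in MATLAB/SPM; the snippet below is only a schematic Python analogue of that thresholding logic, and the file name, the use of nibabel and the minimum cluster size are assumptions made for illustration.

```python
import numpy as np
import nibabel as nib            # assumed here for reading a NIfTI z-score map
from scipy import ndimage

z_map = nib.load("junction_zscore.nii.gz").get_fdata()   # hypothetical MAP junction output

candidates = z_map > 3.0                                  # junction-file threshold used in the study
labels, n_clusters = ndimage.label(candidates)            # group supra-threshold voxels into clusters

MIN_VOXELS = 20                                           # assumed cluster-size cut-off to suppress noise
kept = [c for c in range(1, n_clusters + 1) if np.sum(labels == c) >= MIN_VOXELS]
print(f"{len(kept)} candidate MAP+ cluster(s) at z > 3")

# Surviving candidates would then be checked on the extension (z > 6) and thickness (z > 4)
# maps and confirmed against the conventional T1/T2/FLAIR images, as described above.
```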
In P1-P3 who had negative MRI, MAP gray-white junction file pinpointed a subtle abnormality in the anterior or middle CC, which was found in retrospect to represent subtle blurring of gray-white matter junction in the original T1/FLAIR images, concordant with ICEEG (Figure 1). P4-P6 with subtly lesional MRI were all found to have abnormalities on MAP in the anterior CC; they did not have ICEEG as the subtle findings were identified during re-review at PMC. P7-P9 had negative MAP while their ICEEG showed focal ictal onset in the cingulate cortex. MAP extension or thickness files did not have additional yield; only in P4, a supra-threshold abnormality was seen on the extension file accompanying the junction file. Outcome, Surgery, and Pathology Out of the 6 MAP+ patients, 4 patients (P3-P6) had the resection completely overlapping with the MAP+ region and became seizure-free; two patients (P1 and P2) didn't become seizure-free: P1 experienced seizure recurrence at 15 months following laser ablation that partially overlapped with the MAP+ abnormality, and became seizure-free for 1 year after the second resection to clean up the resection margin, which included the entire MAP+ region; in P2, who had marked improvement in seizure frequency and intensity (Class II), post-operative MRI indicated incomplete removal of the area corresponding to ICEEG and MAP. The 3 MAP-negative patients did become seizure-free (one Class Ia, two Class Ib) following resective surgery guided by ICEEG. Surgical pathology revealed FCD in 7 patients, including FCD type Ib (n = 2), type IIa (n = 2), and type IIb (n = 3). No specimen was sent to pathology examination in the two patients who had laser ablation. DISCUSSION Non-lesional cingulate epilepsy is a rare form of epilepsy (2). Our current study presents the largest series of patients with surgically confirmed non-lesional cingulate epilepsy, with utility of MRI postprocessing to help identify subtle structural abnormalities in this challenging cohort. We showed that voxel-based MRI postprocessing identified subtle epileptic abnormalities in the majority of patients, while the localization value of scalp EEG, PET, ictal SPECT, and MEG was relatively limited. This finding emphasizes the practical value of adding MRI postprocessing into the presurgical evaluation workflow of MRInegative cingulate epilepsy. Surgical management of patients with CE is challenging, as CE exhibits significant heterogeneity in its manifestations due to different seizure propagation patterns (3). Animal and human studies have demonstrated that the anterior CC is bi-directionally connected to the prefrontal and premotor areas, and the posterior CC bi-directionally connected to the mesial temporal regions (1,3,20,21). Moreover, epileptic discharges from the CC often present secondary bilateral synchronous epileptiform discharges, which increases the difficulty to precisely localize (22). Not surprisingly, scalp EEG was less helpful to localize EZ located in the CC because of its low spatial resolution and inability to detect deep focus (3). Complex epileptic networks and fast propagation of discharges from the CC could account for the relatively low yields of PET and SISCOM as reported in previous studies (1, 2, 12, 23). Wong et al. (24). demonstrated that rapid spread of epileptic activities could result in widespread hypometabolism, sometimes remote to the EZ. 
Diffuse regions of hyperperfusion might reflect the epileptic network which includes the epileptic focus as well as the propagation pathways away from the onset, further complicating the task of localization (25). Although MEG has theoretical advantages including high spatial and temporal resolution in identifying epileptic activities from deep structures compared to scalp EEG (26), its localization seemed to be limited for CC as shown in our study, perhaps due to the CC producing radially oriented sources difficult to be detected by MEG source localization. In 50% (3 of 6) of the patients with CE and negative MRI in our series, abnormalities were identified using MAP; in all 3 patients with CE and subtly lesional MRI, abnormalities were identified using MAP; the overall detection rate was analogous to published series whose detection rate ranged between 43 and 50% in MRI-negative epilepsies (12,13). FCD is the most common identifiable pathology among MRI-negative epilepsies, frequently presenting blurring of the gray-white matter junction (11). Therefore, it is expected that junction map was the most helpful feature map in the current study and previous studies (11)(12)(13). The majority (4 of 5) patients with FCD type II were successfully detected by MAP in our study, while neither case with FCD type I (P7-P8) was MAP+. Therefore, the type of the underlying pathology likely contributes to the negative MAP results. It's a considerable challenge to identify and demarcate FCD type I by current MRI techniques even in patients with confirmed histopathology (27), as FCD type I is typically not as well-characterized on the MRI with less prominent features. Our previous study looked at a group of 150 MRI-negative epilepsies which mostly consisted of FCD type I; not all patients with positive pathology of FCD type I were MAP+; additionally, 5 patients with FCD type I had seizure recurrence even though resection fully overlapped with their MAP+ regions, which suggests insufficient delineation of the full extent of the FCD type I using the current technique (13). Another point worth noting is that the T1-based MAP processing, as utilized in this study, would not be able to capture subtle FCDs with a strong T2 change but no T1 change. This could be another factor contributing to negative MAP results. In the face of a completely non-lesional MRI (visual-negative and MAP-negative), ICEEG is often mandatory to explore the epileptogenic zone. Being seizure-free is the gold standard to identify epileptogenic characteristics of MAP+ changes (11,12). In the current study, MAP+ findings were included in the surgical resection in 4 patients with seizure freedom, suggesting that these findings were true positive findings. The two patients who didn't become seizure-free both had laser ablation which partially overlapped with their MAP+ abnormalities; the less optimal seizure outcomes might be due to the incomplete removal of the epileptic structural abnormality. The type of surgery could also be contributive; although minimally invasive, laser ablation was reported to be less effective than conventional resective surgery in a prior study on 19 pediatric patients (28). LIMITATIONS Patients studied here were a highly selected cohort and could not represent all patients with cingulate epilepsy. Using a combined dataset from two epilepsy centers, there might have been differences in the interpretation of presurgical evaluation tests and surgical decision. 
These limitations should be considered when interpreting results from our study. CONCLUSION Surgical management of patients with cingulate epilepsy is highly challenging, particularly when the MRI is negative. The localizing yield of non-invasive tests such as scalp EEG, PET, ictal SPECT and MEG in non-lesional cingulate epilepsy is relatively limited and ICEEG is often mandatory. MRI postprocessing could be incorporated into routine surgical evaluation to enhance detection of subtle epileptogenic abnormalities in this particularly challenging population. ETHICS STATEMENT This study was carried out in accordance with the recommendations of the institutional review board ethical guidelines of two hospitals (Cleveland Clinic Foundation and the Second Affiliated Hospital of Zhejiang University) with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the institutional review board ethics committee. AUTHOR CONTRIBUTIONS ShaW contributed to the conception, design the study, analysis of the data, interpretation of the results, and drafting the manuscript. BJ revising the manuscript. TA analysis of the data. MK analysis of the data. SJ revising the manuscript. BK revising the manuscript. JG-M interpretation of the results. RP analysis the data. IN and AA interpretation of the results. ShuW, MD, and ZIW interpretation of the results, drafting the manuscript, and final approval of the version to be published.
2018-11-28T22:46:16.543Z
2018-11-27T00:00:00.000
{ "year": 2018, "sha1": "45f714d2b29ed93a945666f2c0f804c63a87cfe3", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2018.01013/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "45f714d2b29ed93a945666f2c0f804c63a87cfe3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9101097
pes2o/s2orc
v3-fos-license
Biologic and oral disease-modifying antirheumatic drug monotherapy in rheumatoid arthritis Clinical evidence demonstrates coadministration of tumour necrosis factor inhibitor (TNFi) agents and methotrexate (MTX) is more efficacious than administration of TNFi agents alone in patients with rheumatoid arthritis, leading to the perception that coadministration of MTX with all biologic agents or oral disease-modifying antirheumatic drugs is necessary for maximum efficacy. Real-life registry data reveal approximately one-third of patients taking biologic agents use them as monotherapy. Additionally, an analysis of healthcare claims data showed that when MTX was prescribed in conjunction with a biologic agent, as many as 58% of patients did not collect the MTX prescription. Given this discrepancy between perception and real life, we conducted a review of the peer-reviewed literature and rheumatology medical congress abstracts to determine whether data support biologic monotherapy as a treatment option for patients with rheumatoid arthritis. Our analysis suggests only for tocilizumab is there evidence that the efficacy of biologic monotherapy is comparable with combination therapy with MTX. MTX IN COMBINATION THERAPY The enhanced efficacy of TNFi agents used in combination with MTX compared with TNFi monotherapy is supported by data from randomized controlled trials (RCTs). Patients treated with infliximab plus MTX had longer duration of response than those who received infliximab alone; 20% Paulus criteria were maintained for median 16.5 versus 2.6 weeks ( p=0.006). 13 In the PREMIER study, adalimumab in combination with MTX was superior to adalimumab monotherapy; 62% of patients achieved ACR50 response on combination therapy compared with 41% on monotherapy with significantly less radiographic progression (p<0.001, both comparisons). 14 In the Trial of Etanercept and Methotrexate with Radiographic Patient Outcomes (TEMPO) study, higher response rates were achieved with etanercept plus MTX than etanercept monotherapy; ACR20, 86% vs 75%; ACR50, 71% vs 54%; ACR70, 49% vs 27% and 28-joint Disease Activity Score (DAS28) remission, 42% vs 22% ( p<0.01, all comparisons); there was also less radiographic progression (p<0.05). 17 In the A Multicenter, Randomized, Double-Blind, Placebo-controlled Trial of Golimumab, a Fully Human Anti-TNFa Monoclonal Antibody, Administered Subcutaneously, in Subjects With Active Rheumatoid Arthritis Despite Methotrexate Therapy (GO-FORWARD) study, ACR20 response was achieved by 56% of patients receiving golimumab in combination with MTX, which was significantly higher than MTX monotherapy (33%, p<0.001), and 44% receiving golimumab alone, which was not significantly higher than MTX alone (p=0.059). 16 ACR20 responses were reported in 58% of patients treated with certolizumab in combination with MTX in the Rheumatoid Arthritis PreventIon of structural Damage 2 (RAPID 2) study 19 and 46% of patients in the EFficAcy and Safety of cerTolizumab pegol -4 Weekly dosAge in RheumatoiD arthritis (FAST4WARD) study who received certolizumab monotherapy. 21 Similar results have been reported for rituximab; ACR50 response rates were 41% with rituximab in combination with MTX, which was higher than MTX alone (13%, p=0.005), whereas, the response with rituximab monotherapy (33%) was not significantly higher than MTX (p=0.059). 22 Direct and indirect effects potentially account for the enhanced efficacy of TNFi agents coadministered with MTX. 
MTX may independently reduce inflammation and radiographic progression. [4][5][6][7][8] MTX also may increase the bioavailability of TNFi agents (infliximab 30 31 and adalimumab, 32 though no dose adjustments are required). Infliximab can induce formation of anti-infliximab antibodies that may lower circulating infliximab levels 33 and reduce clinical effect. However, MTX coadministration can promote immune tolerance and increase circulating infliximab levels, prolonging therapeutic effect. 13 A meta-analysis of 17 prospective cohort studies showed that development of antidrug antibodies to adalimumab and infliximab reduced the therapeutic response rates by up to 68%; this was attenuated with concomitant MTX or other immunosuppressive agents (azathioprine, mercaptopurine). 34 Compared with MTX, patients receiving no DMARD, leflunomide or sulphasalazine were more likely to discontinue their first TNFi. 24 Clinical response to infliximab is related to its trough levels. Pharmacokinetic analysis of RA patients treated with infliximab plus MTX showed good and moderate responders maintained trough serum concentrations ≥1 mg/mL through 14 weeks of treatment, whereas poor responders had undetectable trough concentrations. 35 Increasing MTX or corticosteroid dose improved therapeutic response in poor responders after an initial response, although trough serum concentrations of infliximab remained below the detectable limit. Autoantibodies, including antinuclear antibodies (28-100%) and antibodies to double-stranded DNA (0-78%), are detected in patients receiving TNFi agents, particularly infliximab. 36 Increased autoantibody formation correlates with lack of response to infliximab, 37 suggesting immunologic abnormalities influence efficacy. Addition of an immunosuppressant, such as MTX, may reduce the risk of autoantibody development. 36 However, concomitant MTX did not suppress autoantibody development in two small studies, 38 39 and the effect of autoantibody formation on efficacy of TNFis is yet to be confirmed. Patients taking TNFi agents who discontinue concomitant MTX experience reduced efficacy and shorter responses. A long-term study in Japan comparing the efficacy of continuation versus discontinuation of MTX when initiating etanercept in MTX-IR patients showed continuation resulted in better clinical and radiographic outcomes at weeks 52 and 104 than discontinuation. 40 41 Data from a Dutch registry showed discontinuation of DMARDs was not associated with increased disease activity after 6 months. 42 Alternatives to initiating DMARD monotherapy include step-up, parallel, or step-down regimens. 43 The most effective regimen is unknown and may be different for different patients. CHARACTERISATION OF RA PATIENTS NOT TAKING MTX Data from biologic registries 23-28 44 and US claims databases 29 45 indicate approximately 30% of patients taking biologics use them as monotherapy. However, this does not capture patients who fill prescriptions but do not take some or all of the medication. Patients not taking MTX are those who never initiate MTX-MTX is contraindicated or declined-and those who initiate MTX but subsequently discontinue (figure 1). Among patients who never initiate treatment with MTX are those with contraindications to MTX such as patients who are pregnant or breastfeeding, are heavy alcohol users, have alcohol-induced or other chronic liver diseases or have immunodeficiency or pre-existing blood dyscrasias, known hypersensitivity to MTX or lung disease. 
46 The ACR recommends MTX not be used in the presence of clinically important RA-associated pneumonitis or interstitial lung disease of unknown cause, or in patients with active bacterial, active tuberculosis or life-threatening fungal or active herpes zoster infection. 2 Additionally, some patients may decline MTX because of the advice to abstain from alcohol consumption; the combination is associated with increased risk for hepatotoxicity. 46 Patients or physicians may discontinue treatment for a number of reasons. Gastrointestinal, hepatic, dermatologic and neurologic adverse events (AE), as well as cytopenia and MTX-induced pneumonitis, have been reported with MTX 46-52 and sometimes cause discontinuation. Even in tightly controlled clinical studies, 5-15% of patients taking MTX discontinued treatment because of AEs. 3 5 7 8 14 17 22 Despite the wellestablished benefits of MTX for the treatment of RA, including favourable drug survival rates 53 and cost-effectiveness, 54 data from observational studies representing real-life clinical practice indicate MTX discontinuation rates attributed to AEs range from 10% to 77% after 3-12.7 years' treatment. 51 55-60 Risk factors for MTX-associated AEs include renal dysfunction, liver disease, active infectious disease and excessive alcohol consumption. 48 52 61 Renal insufficiency is a major risk factor, because lower creatinine clearance rate is associated with reduced MTX clearance, increasing the risk for MTX-related AEs. 52 Patients who initiate and subsequently discontinue MTX include those who do not inform their rheumatologists. In an online survey of 1500 patients, 45% admitted to being less than forthright with their rheumatologists. 62 Some patients might be reluctant to admit discontinuation because of minor AEs or unwillingness to abstain from alcohol, but it appears this subgroup exists. Analysis of 6744 patient records from Canadian private and public drug plans showed that, among patients on their first biologic for >6 months, 45% did not purchase a DMARD and 58% did not purchase MTX; 41% of patients taking a biologic for >24 months did not purchase a DMARD (54% for MTX). Independent patient and physician surveys indicated half the patients did not take MTX but continued their prescribed biologic regularly. By contrast, physician surveys indicated a DMARD was prescribed with a biologic for 80-90% of patients. 63 Another analysis of 1652 patient records from Canadian private and public drug plans (2009-2010) demonstrated a biologic monotherapy prescribing rate of 12%; however, 29% of patients (43% of those prescribed MTX) did not obtain their DMARD within 6 months after starting biologic therapy. 64 Collectively, these results demonstrate a substantial gap between prescriptions written and prescriptions dispensed, and between rheumatologists' perceptions and reality of the medications patients are taking. THERAPEUTIC STRATEGIES IN PATIENTS DISCONTINUING OR NOT INITIATING MTX Patients without a contraindication for MTX who decline its use, and those considering discontinuation, may benefit from counselling and education. Patients can be encouraged to use MTX if the potential for progressive joint damage and loss of efficacy with discontinuation or non-compliance is explained. Several approaches may improve MTX tolerability. Regular monitoring for signs of hepatic, renal or haematological AEs is advised. 50 Dose adjustment or interruption with reinstatement at a lower dose may be considered if hepatotoxicity is evident. 
50 Switching from oral to intramuscular or subcutaneous (SC) MTX may benefit patients with poor adherence or gastrointestinal AEs. [65][66][67][68][69][70] A retrospective study of 191 patients in the UK who switched from oral to SC MTX (2003-2011) showed among 53 patients who switched because of intolerance, 40 (75.5%) subsequently tolerated parenteral therapy. 70 Another RCT comparing oral and SC MTX found no difference in tolerability, though SC administration demonstrated better clinical efficacy at the same dosage. 71 An alternative strategy for improving MTX tolerability is twice-weekly dosing, which increases the bioavailability of MTX above once-weekly dosing 72 ; a preliminary study, however, did not demonstrate an efficacy advantage over once-weekly dosing. 69 73 Potential adjunctive therapies to mitigate AEs include folate supplementation, which reduces MTX-associated hepatic AEs, 50 74 and antiemetics, which suppress MTX-induced nausea and vomiting. 75 Switching to another conventional DMARD may be an option in MTX-intolerant patients receiving combination therapy. Registry data and case series indicate rituximab plus leflunomide is a viable alternative to rituximab plus MTX, with potentially better tolerability. 76 77 By contrast, a high incidence of AEs has been reported with infliximab plus leflunomide. 78 Tocilizumab and abatacept, in combination with some non-MTX DMARDs, demonstrated good tolerability. 79 80 Several TNFi agents are effective as monotherapy, and biologic monotherapy is currently prescribed in patients who are, for one reason or another, not going to use MTX. However, the efficacy of these agents is generally enhanced by concurrent MTX administration. [13][14][15][16][17] BIOLOGIC AND ORAL DMARD MONOTHERAPY A summary of biologic and oral DMARDs approved for RA is shown in table 1. The TNFi agents etanercept, adalimumab and certolizumab pegol are approved as monotherapy for patients with RA in the USA and Europe, [81][82][83][84][85][86] whereas, infliximab and golimumab are approved only with MTX. 87-90 Among non-TNFi agents, only tocilizumab is licenced for use as monotherapy in the USA and Europe. 91 92 Tofacitinib anakinra and abatacept are approved as monotherapy only in the USA. [93][94][95][96] Rituximab is approved only with MTX in the USA and Europe. 97 98 Two recent analyses of the CORRONA registry showed the likelihood of starting biologic monotherapy was consistently increased if it was approved for use as monotherapy. 44 99 Other factors that increased the likelihood of a biologic monotherapy prescription included the patient's previous biologic experience and the rheumatologist's prescribing patterns. For use as monotherapy, a biologic or oral DMARD should be superior to placebo; be at least comparable to MTX/ DMARDs and the agent plus MTX/DMARDs in reducing clinical signs, symptoms and radiographic progression; and have an acceptable safety and tolerability profile. Further, duration of efficacy, which is a major concern among rheumatologists familiar with the TNFi combination paradigm, should not be compromised. Trials of biologic and oral DMARD monotherapy that meet these criteria are summarised in table 2. Monotherapy with different adalimumab regimens was better than placebo in DMARD-IR patients. 100 Adalimumab monotherapy was associated with similar clinical but more favourable radiological outcomes than MTX alone. 
Patients with low disease activity at the end of the randomised phase of the study maintained low disease activity and had minimal radiographic progression after 6 years of adalimumab monotherapy. 101 Etanercept monotherapy results have been inconsistent. Compared with sulphasalazine monotherapy, etanercept alone, or with sulphasalazine, resulted in significant improvements in disease activity. 102 In the ERA trial, etanercept monotherapy had clinical and radiological advantages over MTX sustained for 24 months in MTX-naive patients. 103 104 In the TEMPO study, which included patients with disease durations averaging 6 years, some indices of disease activity and radiographic progression showed greater improvement with etanercept than with MTX. However, the combination was more effective than either agent alone. 3 17 Etanercept plus MTX was also more effective than etanercept monotherapy in the Japanese Efficacy and Safety of Etanercept on Active Rheumatoid Arthritis (RA) Despite Methotrexate (MTX) Therapy in Japan (JESMR) study. 40 107 Golimumab is not approved for monotherapy. However, studies suggest the efficacy of intravenous golimumab monotherapy is comparable to that of MTX; golimumab plus MTX, however, was more effective than MTX alone. 16 108 109 Certolizumab pegol monotherapy demonstrated superiority to placebo in the FAST4WARD study 21 and was similar to concomitant DMARD treatment in the REALISTIC study, regardless of previous TNFi use. 110 Monotherapy with non-TNFi biologics, except for tocilizumab, has not been investigated extensively. In a study involving 214 patients, abatacept monotherapy resulted in a dosedependent increase in ACR20 response compared with placebo after approximately 3 months of treatment. 111 In the ARRIVE study, TNFi-IR patients taking abatacept monotherapy experienced similar efficacy to patients taking abatacept plus DMARDs. 112 Rituximab monotherapy yielded an ACR50 response rate higher than, but not statistically significantly different from, MTX. 22 Anakinra monotherapy demonstrated increased efficacy compared with placebo, but response rates were modest. 113 Tocilizumab has the largest database on monotherapy and has demonstrated greater efficacy than MTX or other DMARDs, including salazosulphapyridine, bucillamine, mizoribine and D-penicillamine, in lowering disease activity and reducing radiographic progression. Results from Actemra versus Methotrexate double-Blind Investigative Trial In mONotherapy (AMBITION) and Study of Active controlled TOcilizumab monotherapy for Rheumatoid arthritis patients with Inadequate response to methotrexate (SATORI) demonstrated higher ACR20, ACR50 and ACR70 response rates with tocilizumab than MTX. 114 115 Furthermore, patients from AMBITION maintained DAS28 and clinical disease activity index low-disease activity and remission thresholds during long-term tocilizumab monotherapy. 116 Tocilizumab monotherapy was more efficacious than nonbiologic DMARDs at slowing joint damage in the Study of Active controlled Monotherapy Used for Rheumatoid Arthritis, an IL-6 inhibitor (SAMURAI) study, even in patients at high risk for structural damage. 
117 118 Contrary to findings with TNFi agents, add-on (tocilizumab plus MTX) therapy was not superior to tocilizumab monotherapy in MTX-IR patients in the ACT-RAY study; ACR responses, swollen and tender joint counts, DAS28 change from baseline, DAS28 ≤3.2 and Genant-modified Total Sharp Score were not significantly different between tocilizumab plus MTX and tocilizumab monotherapy ( p>0.05), though proportions of patients achieving DAS28 <2.6 and patients without radiographic progression were significantly higher with tocilizumab plus MTX (p<0.05). 119 120 These differences in efficacy are unlikely due to immunogenicity because the proportions of patients with neutralising antidrug antibodies were similar between monotherapy (4.4%) and combination therapy (3.7%). 121 In the ACT-SURE 122 and ACT-STAR 79 studies, which were real-world-type safety studies in patients with active RA despite receiving biologics or DMARDs, comparable improvements in clinical signs and symptoms were observed in patients receiving tocilizumab monotherapy or tocilizumab plus DMARDs, although precise reasons for not receiving DMARDs are unknown. Long-term data from the Safety and Efficacy of Tocilizumab, an anti-IL-6 receptor monoclonal antibody, in Monotherapy, in Patients With Rheumatoid Arthritis (STREAM) study showed tocilizumab monotherapy is not associated with clinically relevant decline in efficacy over time; ACR response rates and improvements in DAS28 were sustained over 5 years of tocilizumab monotherapy. 123 In the ADalimumab ACTemrA (ADACTA) trial, which directly compared tocilizumab and adalimumab monotherapy in patients who were MTX-intolerant or unable to continue MTX therapy, tocilizumab was superior to adalimumab in reducing signs and symptoms of RA. The AE profile of tocilizumab was consistent with previous findings and comparable with that of adalimumab. 124 Several additional reports support the efficacy of tocilizumab monotherapy. A systematic review of 10 clinical trials demonstrated tocilizumab monotherapy yielded significantly higher ACR20, ACR50 and ACR70 response rates than MTX. 125 Additionally, a meta-analysis of six Japanese clinical studies and their five uncontrolled long-term extensions confirmed high rates of ACR20 (91.3%), ACR50 (73.0%) and ACR70 (51.3%) responses and DAS remission (59.7%) were maintained with tocilizumab monotherapy for 5 years. 126 Finally, a network meta-analysis involving indirect comparison of clinical trials Rheumatologists may recognise biologic monotherapy is sometimes necessary when treating patients with RA and comorbidities or patients who consume alcohol. Although consensus is lacking on biologics as monotherapy, accumulating data can inform rheumatologist decision making to treat such patients optimally. Although we use the generic term 'biologics', these medications have different mechanisms of action that might affect the need for combination with MTX for improved efficacy. It is, therefore, not surprising that biologics appear to differ substantially with respect to the degree of benefit when administered as monotherapy. Tocilizumab monotherapy has greater efficacy than MTX or other conventional DMARDs in lowering disease activity and reducing radiographic progression and has stable safety and tolerability profiles. 114 115 117 119 127 Tocilizumab monotherapy also demonstrated superiority over adalimumab monotherapy in reducing signs and symptoms of RA in patients who were MTX-intolerant, or in whom MTX was considered ineffective or inappropriate. 
124 It has not been proven that monotherapy with any biologic is equivalent to the same biologic coadministered with MTX. The available data nevertheless shed light on how to handle this treatment issue: treating without MTX appears to be safe and effective when necessary. However, in the subpopulation of patients not taking, or unable or unwilling to take, MTX, but in whom treatment is required, TNFi agents might not be the first choice of monotherapy given the evidence that they are less effective as monotherapy than as combination therapy with MTX.

Correction notice: This article has been corrected since it was published Online First. The names of the studies AMBITION, SATORI, SAMURAI and STREAM have been amended, Table 1 has been updated and the following sentence amended to read: When used as monotherapy, tocilizumab was likely to show better efficacy than TNFi monotherapy and comparable efficacy to tocilizumab plus MTX. 18
Bisphenol-A Abrogates Proliferation and Differentiation of C2C12 Mouse Myoblasts via Downregulation of Phospho-P65 NF-κB Signaling Pathway

Previous studies showed that bisphenol-A (BPA), a monomer of polycarbonate plastic, leaches out and contaminates foods and beverages. This study aimed to investigate the effects of BPA on the myogenesis of adult muscle stem cells. C2C12 myoblasts were treated with BPA in both proliferation and differentiation conditions. Cytotoxicity, cell proliferation and differentiation, antioxidant activity, apoptosis, myogenic regulatory factor (MRF) gene expression, and the mechanism of BPA action on myogenesis were examined. C2C12 myoblasts exposed to 25-50 µM BPA showed abnormal morphology, expressing numerous and long cytoplasmic extensions. Cell proliferation was inhibited and cells accumulated in the subG1 and S phases of the cell cycle, subsequently leading to apoptosis confirmed by nuclear condensation and the expression of the apoptosis markers cleaved caspase-9 and caspase-3. In addition, the activity of the antioxidant enzymes catalase, superoxide dismutase, and glutathione peroxidase was significantly decreased. Meanwhile, BPA suppressed myoblast differentiation by decreasing the number and size of multinucleated myotubes via the modulation of MRF gene expression. Moreover, BPA significantly inhibited the phosphorylation of P65 NF-κB in both proliferation and differentiation conditions. Altogether, the results revealed the adverse effects of BPA on myogenesis, leading to abnormal growth and development via the inhibition of phospho-P65 NF-κB.

Introduction

Skeletal muscle is the most abundant tissue of living organisms and plays a major role in body movement and metabolism. The skeletal muscle tissues are derived from mesoderm during embryonic development. Mesodermal cells become myoblasts, which are muscle precursor cells, under the regulation of myogenic regulatory factors (MRFs) including MyoD, myogenin, myogenic factor 5 (myf-5), and myogenic regulatory factor 4 (MRF4), which play significant roles in myogenesis through two crucial steps. First, myoblasts proliferate to increase cell numbers for muscle mass under the regulation of MyoD and myf-5. The subsequent step is myoblast differentiation, in which myoblasts differentiate into myocytes by expressing myosin heavy chain (MHC), the major structural protein in muscle. These myocytes eventually fuse together and mature into multinucleated myofibers under the control of myogenin and MRF4 [1].

At the later phase of myogenic development, some mesodermal cells give rise to satellite cells, which are important for muscle growth and regeneration after birth. The satellite cells are localized between muscle fibers in adult muscle. Upon muscle injury, the satellite cells are activated and express MRFs, leading to muscle regeneration that involves the proliferation and differentiation of satellite cells and their survival and integration into functional myofibers [2,3].
Bisphenol-A (BPA, 2,2′-bis(4-hydroxyphenyl)propane) is a monomer employed in the manufacturing of polycarbonate and epoxy resin. In turn, these materials are used worldwide for the production of many consumer products including plastics, internal coatings of packaging, and medical devices. Like other chemical monomers, numerous studies have reported the leaching of BPA from these materials [14]. Extreme temperature and pH conditions increase the rate of leaching [15]. BPA has been found to contaminate various products such as soft drinks [16], seafood [17], and even breast milk [18]. Moreover, the bioaccumulation of BPA in liver, muscle, and brain tissues has been reported [19]. Exposure to BPA has gradually raised scientists' concern because it has been found in the urine of more than 90% of individuals in the United States, Germany, and Canada [20]. Another reason for concern is that it can easily pass the placental barrier and thus affect the developing embryo [21], and it can cause human infertility in both male and female adults [22].

Toxicological and epidemiological studies show solid evidence for the effect of BPA on myogenesis, but its underlying mechanisms are poorly understood. Herein, we explored the effects of BPA on two crucial steps during myogenesis, i.e., myoblast proliferation and differentiation. We observed that BPA significantly inhibited myoblast proliferation, leading to cellular apoptosis, and markedly abrogated myoblast differentiation by inhibiting the expression of myogenic regulatory factor genes via downregulating the phosphorylation of JNK, p53, and P65 NF-κB proteins.

C2C12 Myoblast Cell Culture and Treatment. The C2C12 myoblast cell line was purchased from the American Type Culture Collection (ATCC; Manassas, VA, USA). Cells were grown in growth medium (GM) composed of Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 10% fetal bovine serum (FBS) and 1% antibiotics at 37 °C in a humidified 5% CO2 incubator. To test the effect of BPA on cytotoxicity and cell proliferation, the myoblasts were cultured in serum-free DMEM or GM containing BPA at 0-50 µM for 24, 48, and 72 h. To test the effect on myogenic differentiation, approximately 80% confluent cells were cultured in differentiation medium (DM; DMEM supplemented with 2% horse serum) containing BPA at 0-50 µM for 3 days.

Cell Cycle Analysis. After 72 h of treatment, C2C12 cells were trypsinized and then fixed with 70% ice-cold ethanol overnight at −20 °C. After several washes with phosphate-buffered saline (PBS), the treated cells were incubated with ribonuclease A at 37 °C for 30 min. The mixture was chilled on ice, and propidium iodide was directly added. The mixture was kept on ice in the dark for 15 min. The cell cycle stages of treated cells were determined and analyzed using a BD FACSCanto™ flow cytometer (BD Biosciences) and BD FACSDiva version 6.1.1 software, respectively.

Protein Concentration Measurement. After treatment, myoblast cells were collected by trypsinization and washed with PBS. Protein was extracted using radio-immunoprecipitation assay (RIPA) buffer with a protease inhibitor cocktail. The lysate was centrifuged at 14,000 rpm at 4 °C for 20 min, and protein concentration was determined using a bicinchoninic acid (BCA) protein assay kit.
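As a purely illustrative aside, BCA readings are converted into protein concentrations by interpolation against a standard curve; the details of that step are not given in the text above, so the following Python sketch uses invented BSA standards and absorbance values only to show the arithmetic, not the study's actual data.

# Minimal sketch: estimate protein concentration from BCA absorbance via a
# linear standard curve. All numbers are illustrative placeholders.
import numpy as np

standard_conc = np.array([0, 125, 250, 500, 1000, 2000], dtype=float)  # ug/mL BSA
standard_abs  = np.array([0.05, 0.17, 0.30, 0.55, 1.02, 1.90])         # A562

# Fit absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(standard_conc, standard_abs, deg=1)

def bca_concentration(absorbance, dilution_factor=1.0):
    """Back-calculate protein concentration (ug/mL) from a BCA absorbance."""
    return (absorbance - intercept) / slope * dilution_factor

sample_abs = np.array([0.42, 0.78])        # lysates from two hypothetical treatments
print(bca_concentration(sample_abs))       # estimated ug/mL for each lysate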
2.6. Antioxidant Enzyme Assays. After treatment, cells were harvested by trypsinization and lysed in PBS by a repeated freeze-thaw method. The enzyme activities of superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx) were measured using commercial kits following the manufacturer's instructions.

2.7. Western Blot. Equal amounts of protein were subjected to sodium dodecyl-sulfate polyacrylamide gel electrophoresis (SDS-PAGE) to separate proteins by size. The proteins were transferred onto a polyvinylidene difluoride (PVDF) membrane using a semidry transfer machine. The membrane was probed with the desired antibodies for 1 h at room temperature (RT). After several washes, the membrane was incubated with the appropriate horseradish peroxidase-conjugated secondary antibody for another 1 h at RT. Protein expression was visualized with ECL Western blotting detection reagent under a gel documentation system (BioSpectrum AC Chemi HR 410). Protein band intensity was measured with ImageJ software.

2.8. Immunofluorescence Staining. After treatment, cells were washed with PBS and fixed with ice-cold methanol for 10 min. After several washes, the fixed cells were allowed to rehydrate in PBS for at least 30 min. Cells were permeabilized and blocked with 5% normal goat serum and 0.3% Triton X-100 in PBS at RT for 1 h. Then, the cells were incubated with mouse monoclonal anti-MHC for 1 h. After several washes with PBS, cells were incubated with the appropriate secondary antibody conjugated with fluorescein isothiocyanate (FITC) and Hoechst 33342 at RT for 45 min. The fluorescence signal was observed under a fluorescence microscope (Olympus IX73).

Statistical Analysis. All experiments were performed independently, and results are given as mean ± SEM. The data were compared using one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test with GraphPad Prism version 5.00. Statistical differences are displayed as follows: ns, not significant; *P < 0.05; **P < 0.01; ***P < 0.001.
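For readers who wish to reproduce the statistical comparison described above outside GraphPad Prism, the same logic (one-way ANOVA followed by Tukey's multiple comparison test) can be sketched in Python as below; the enzyme-activity values and group labels are invented placeholders, not the study's measurements.

# Illustrative re-implementation of the statistics described above:
# one-way ANOVA followed by Tukey's multiple comparison test.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control":  [9.8, 10.1, 9.9],   # e.g. CAT activity, arbitrary units
    "BPA_10uM": [9.5, 9.3, 9.6],
    "BPA_25uM": [7.1, 7.4, 6.9],
    "BPA_50uM": [4.2, 4.5, 4.0],
}

f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))   # pairwise group comparisons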
BPA Inhibited C2C12 Myoblast Proliferation.

In this study, the cytotoxic effect of BPA on C2C12 myoblasts was assessed by observing cell morphology and by MTT assay. After exposure to BPA at the indicated concentrations for 72 h, the morphology of the cells in the control and 10 µM BPA-treated groups appeared flat, star-shaped, or fusiform, which represents the normal morphology of C2C12. However, increases in BPA concentration led to abnormal morphology: 25 µM BPA treatment caused numerous and long cytoplasmic extensions, whereas myoblasts in the 50 µM BPA treatment became round with fewer cytoplasmic extensions and started to detach from the culture dish surface (Figure 1(a)). Following 24, 48, and 72 h exposures to BPA at the indicated concentrations, discernible toxicity was observed at 50 µM from 48 h of exposure compared to the untreated control (Figure 1(b)). The equal absorbance values of the 50 µM BPA group at 48 h and 72 h suggest an inhibition of cell proliferation. To confirm this, cell counting was performed. Indeed, cell numbers in the 50 µM BPA-treated groups at 24, 48, and 72 h showed no significant differences. In addition, 50 µM BPA decreased cell numbers by 24 h compared to control. Also, by 48 h of exposure to 25 µM and 50 µM BPA, the number of myoblasts was significantly decreased (Figure 1(c)).

An accumulation of cells at the subG1 phase led to a significant decrease in the percentage of the cell population in the G0/G1 phase (Figures 2(a) and 2(b)). Furthermore, the percentage of cells in the S phase after 50 µM BPA treatment was significantly higher than in the other groups, reflecting an accumulation of cells at this phase. In addition, the activity of antioxidant enzymes including superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx) after BPA treatment was measured. The results showed significant decreases in all antioxidant enzyme activities tested in a dose-dependent manner (Figure 2(c)). Treatments with BPA at 25 µM and 50 µM significantly decreased CAT, SOD, and GPx activities compared to the non-treatment group.
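The per-phase percentages summarised above (Figure 2(b)) come from gating the propidium iodide histogram and expressing each gate as a fraction of total events; the study did this in BD FACSDiva, so the short Python sketch below, with invented event counts, is only a generic illustration of that normalisation.

# Sketch: convert gated flow-cytometry event counts into per-phase percentages.
# Counts are invented placeholders; the study used BD FACSDiva for this step.
phase_counts = {"subG1": 1200, "G0/G1": 5200, "S": 2600, "G2/M": 1000}

total = sum(phase_counts.values())
for phase, count in phase_counts.items():
    print(f"{phase}: {100.0 * count / total:.1f}% of cells")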
BPA Induced Cellular Apoptosis.

Cellular apoptosis is usually due to an imbalance between oxidant molecules and antioxidant enzymes, which protect the cells from radical-mediated damage. To investigate whether the reduction of antioxidant enzyme activity subsequently causes apoptosis, nuclear fragmentation and apoptotic markers were assayed. As expected, Hoechst staining exhibited round or oval nuclei with homogeneous chromatin in the untreated and 10-25 µM BPA-treated groups, whereas cells exposed to 50 µM BPA showed a reduction in nuclear size and chromatin condensation, although nuclear fragmentation could not be found (Figure 3(a)).

Figure 2: BPA induced cell cycle arrest and reduced antioxidant enzyme activity. C2C12 myoblast cells were cultured in a growth medium in the absence or presence of BPA for 72 h. The treated cells were collected, stained with propidium iodide, and subjected to flow cytometry for cell cycle analysis. Cell cycle distribution graph (a) and percentage of the cell population in the subG1, G0/G1, S, and G2/M phases (b). The treated cells were subjected to protein extraction and antioxidant enzyme activity determination (c). *P < 0.05; **P < 0.01; ***P < 0.001 compared to the control group.

Since myogenesis is orchestrated by myogenic regulatory factor genes, we hypothesized that BPA may inhibit myoblast differentiation by suppressing the expression of these genes. The results showed that, after 72 h in the differentiation condition with BPA at 25-50 µM, the MyoD and myogenin expression levels were downregulated, being only about one-third of the control group. Conversely, the expression level of myf-5 was significantly increased in a dose-dependent manner, being nearly 4-fold at 10-25 µM and 7-fold at 50 µM BPA. Interestingly, the MRF4 expression level did not change after treatment with BPA at any concentration (Figure 4(e)).
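The fold changes of MRF transcripts relative to the control (Figure 4(e)) are conventionally derived from qPCR Ct values with the 2^-ΔΔCt method; the paper's exact qPCR pipeline is not shown in the text above, so the snippet below is only a generic sketch with invented Ct values and an assumed reference gene.

# Generic 2^-(ddCt) sketch for expressing qPCR results as fold change versus
# the untreated control. Ct values and the GAPDH reference are placeholders.
def fold_change(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method."""
    d_ct_treated = ct_gene - ct_ref              # normalise to the reference gene
    d_ct_control = ct_gene_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a hypothetical myogenin measurement in 50 uM BPA vs control
print(fold_change(ct_gene=26.3, ct_ref=18.1,
                  ct_gene_ctrl=24.5, ct_ref_ctrl=18.0))   # about 0.3-fold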
BPA Suppressed Myogenesis via Downregulation of Phospho-P65 NF-κB. Following treatment with 25-50 µM BPA in the proliferation condition for 72 h, the activation of p-JNK, p-p53, and p-P65 NF-κB proteins significantly decreased compared to the untreated control (Figures 5(a) and 5(c)). In contrast, in the differentiation condition for 72 h, a significant decrease was found only in the activation of p-P65 NF-κB protein, while p-JNK and p-p53 were not suppressed compared to the untreated control (Figures 5(b) and 5(d)).

Discussion

In the present study, we reported the adverse effect of BPA at doses of 1-50 µM on C2C12 mouse myoblast proliferation and differentiation in both a dose- and a time-dependent manner. The dosage that is cytotoxic (50 µM) and that inhibits proliferation and differentiation (25-50 µM) was much lower than those reported to induce toxicity in human bone mesenchymal stem cells (250 µM) [25] and to inhibit cell proliferation in human fetal lung fibroblasts (100 µM) [26]. Previous studies have also shown that BPA inhibited the proliferation of many cell types including neural stem cells [27] and colonic epithelial cells [28]. In contrast, enhancement of cell growth by BPA has been reported in several cell lines: normal human mammary epithelial cells [29], human normal breast cells (HBL-100) [30], and breast cancer cells (MCF-7 and SkBr3) [31]. This discrepancy may be attributed to variation in exposure time and the cell type being used. It has been proposed that the action of BPA on cell proliferation is mediated through estrogen receptors (ERs) [30,32], in concordance with the existence of ERs in myoblast cells [33,34].

Figure 4: BPA suppressed C2C12 myoblast differentiation. Confluent C2C12 myoblast cells were cultured in differentiation conditions in the absence or presence of BPA for 72 h. Photomicrograph of immunofluorescence staining for MHC protein (green) and nuclei (blue) in differentiated myotubes (a). Total proteins were extracted and measured by BCA protein assay kit (b). Proteins were subjected to Western blot analysis with anti-MHC and anti-myogenin antibodies and expressed as relative band intensity to tubulin (c) and (d). Total RNA was extracted and subjected to real-time PCR with specific primers for the myogenic regulatory factor genes. Gene expression levels were expressed as fold change compared to the control group (e). *P < 0.05; **P < 0.01; ***P < 0.001 compared to the control group. Scale bar = 200 µm.

The ERs mediate the antiproliferative effect of BPA by interaction with epidermal growth factor receptor (EGFR), causing a reduction in cyclin D1 expression and upregulation of the cell proliferation inhibitors p21 and p27 [32].
On the other hand, antioxidant enzymes (SOD, CAT, and GPx) act as a defense network against the excess production and accumulation of intracellular reactive oxygen species (ROS) through a neutralization process [35]. An imbalance between ROS level and antioxidant enzyme activity will lead to oxidative stress which, in turn, stimulates the expression of apoptotic molecules via an intrinsic pathway [36]. Treatment with BPA has been reported to induce intracellular ROS through both enzymatic and nonenzymatic formation of phenoxyl radicals [37], causing oxidative stress [38]. The lowering of antioxidant enzyme (SOD, CAT, and GPx) activities after BPA treatment in our experiment is likely due to the excess formation of free radicals under oxidative stress. Exposure to BPA decreased antioxidant enzyme activities not only in myoblasts but also in plasma [39], liver [40], kidney, and testes [41], leading to different endpoints. Of interest, our findings revealed changes in the morphology of the treated cells from flat and star-shaped to round with fewer cytoplasmic extensions. This is in concordance with a toxicological study which showed that exposure to toxic chemicals stimulated loss of characteristic cellular morphology, changes in cell shape, development of elongated cytoplasmic extensions/gripping spicules, and cell detachment [42]. The number of cytoplasmic extensions and the extent of cell detachment increased in proportion to toxic chemical concentrations [43,44]. These phenomena could be explained by alteration of or damage to cell membrane structure, leading to changes in the capacity of cells to adhere to the basement membrane.

In particular, BPA exposure was shown to stimulate cell cycle arrest at the subG1 and S phases. These cells underwent cellular apoptosis by activating the expression of cleaved caspase-9 and cleaved caspase-3 proteins. Treatment with BPA has been reported to affect the cell cycle by mediating cell cycle arrest at the G1 phase in prostate cancer cells through activation of the EGFR/ERK/p53 signaling pathway [32] and in human lung fibroblasts by inducing double-strand break (DSB)-ataxia telangiectasia mutated (ATM)-p53 signaling [26]. The signaling pathway responsible for the suppression of the cell cycle in myoblasts by BPA is not known and warrants future studies. Beyond the cell cycle arrest, it is evident that BPA exposure triggers the cleaving of procaspase-3 and procaspase-9 into cleaved caspase-3 and cleaved caspase-9, respectively. Cleaved caspase-3 is the primary activator of DNA fragmentation, acting by inactivating the DNA fragmentation factor 45 (DFF45)/inhibitor of caspase-activated DNase (ICAD) protein, in turn leading to cellular apoptosis [45]. The expression of cleaved caspase-9 confirmed that BPA induced apoptosis, in part, through the mitochondrial pathway [46]. Increases of ROS by BPA [37,38] can cause mitochondrial membrane potential changes leading to the leakage of cytochrome c into the cytoplasm. Cytochrome c can bind with Apaf-1, finally leading to caspase-9 activation [46].
Regarding myogenic differentiation, exposure to BPA abolished myoblast differentiation into multinucleated myotubes. Previous studies have also reported the effect of BPA on the differentiation of many cell types including neural stem cells [27], bone-marrow-derived mesenchymal stem cells [41], and germ cells [47]. This may be caused by the activation of ER after BPA treatment, which then recruits both ER genomic and nongenomic signaling pathways, leading to aberrant cell biology [41]. It has also been reported that BPA reduced the synthesis of several proteins in human granulosa cells [48], which is in accordance with the present study, in which total protein concentration was decreased by 50%. A previous study suggested that this effect may be caused by a decrease in the expression of translation elongation factor proteins following BPA treatment [49]. Treatment with low concentrations of BPA not only causes changes in protein synthesis but also induces changes in protein expression profiles [49], which, in turn, affect various cell types at different endpoints.

Exposure of myoblasts to several substances has been reported to attenuate MRF gene expression, subsequently leading to impaired myogenic differentiation. For example, lipopolysaccharide (LPS) significantly decreases MyoD and myogenin expression [50], and arsenic suppresses myogenin expression in both in vitro and in vivo models [51]. Similarly, our experiment showed that treatment with BPA significantly suppressed both MyoD and myogenin expression, leading to impaired myogenic differentiation. The function of MRF4 is still unclear, but several publications have revealed its role in terminal differentiation. Its expression is upregulated at the late stage of differentiation, when myogenin reaches its maximum level and gradually decreases [52]. Myf-5 expression is activated in muscle precursor cells, which have a distinct role in muscle cell lineage determination and proliferation [53]. This factor is not upregulated during myoblast differentiation [52]. The high level of myf-5 expression in our experiment may be due to the downregulation of MyoD, which, in turn, maintains the proliferative status of myoblasts expressing myf-5. Another explanation is that the low level of myoblast differentiation after BPA treatment leads to a low myotube to myoblast ratio. These two cell populations show differential expression of myf-5: myf-5 expression is downregulated in multinucleated myotubes, whereas the residual myoblasts, called quiescent "reserve cells", continue to express myf-5 [52].
Exposure to BPA provoked toxicity during myoblast proliferation and differentiation by downregulating the expression of p-P65 NF-κB protein. As BPA has been reported to mimic estrogen [30,32], and given the existence of ER in myoblast cells [33,34], it is possible that BPA provoked toxicity through ER. The activation of ER has been reported to suppress NF-κB activity [54]. Inhibition of the NF-κB signaling pathway has been reported to induce apoptosis and suppress proliferation of human fibroblast-like synovial cells [55], since NF-κB mediates an antiapoptotic function [56]. Accordingly, JNK and p53 proteins may contribute to cell cycle arrest and apoptosis under the influence of NF-κB. Data from gain- and loss-of-function approaches revealed crosstalk between NF-κB and p53, and p53 was necessary for NF-κB-mediated gene expression [57]. Besides, JNK has been shown to play a crucial role in cell proliferation, and the balance of JNK signaling will determine whether cells are committed to proliferation or programmed cell death [58]. However, under the extended differentiation condition, activation of NF-κB has been reported to be a positive regulator of myoblast differentiation by stimulating MHC and myogenin expression [59,60]. BPA exposure may impair NF-κB activity via ER-Akt signaling, since BPA treatment has been reported to suppress Akt signaling in myoblasts, leading to suppression of myoblast differentiation [13]. In addition, Akt has been shown to mediate the regulation of NF-κB activity by inducing phosphorylation and subsequent degradation of inhibitor of κB (IκB) [61]. Moreover, inhibition of NF-κB activity has also been reported to interfere with the expression of myogenic regulatory factors [60]. However, the relationship between NF-κB, JNK, and p53 proteins in myoblasts exposed to BPA in both proliferation and differentiation conditions needs further elucidation.

Conclusions

This study addresses the effects of BPA on myoblast proliferation and differentiation, providing a better understanding of the adverse effects of the environmental contaminant BPA. In this study, we demonstrated that exposure to BPA significantly inhibited myoblast proliferation by stimulating cell cycle arrest, which, in turn, led to cellular apoptosis. In addition, exposure to BPA during myogenic differentiation suppressed myoblast differentiation and muscle protein synthesis. The inhibitory effect on myoblast differentiation was associated with modification of myogenic regulatory factor gene expression, which could occur via inhibition of the phosphorylation of P65 NF-κB. It is concluded that BPA impairs the myogenesis of muscle progenitor/stem cells. However, this research was conducted in an in vitro cell culture system, which has limitations for long-term experiments, although BPA concentrations higher than the normal range found in living organisms could be used to reveal the effects. Thus, the cell culture system cannot fully replicate the complexities of living organisms. Further studies in an in vivo system are, therefore, required to confirm and explore the toxic effects of BPA on myoblast proliferation and differentiation.
Figure 1: BPA inhibited C2C12 myoblast cell proliferation. C2C12 myoblast cells were cultured in a growth medium in the absence or presence of BPA at the indicated concentrations for 24, 48, and 72 h. Photomicrographs of cell morphology before and after BPA treatments (a). Cell viability was assessed by MTT assay (b) and cell counting (c). Arrowheads indicate cytoplasmic extensions; ns, not significant; *P < 0.05; **P < 0.01; ***P < 0.001 compared to the control group. Scale bar = 200 µm.

3.4. BPA Suppressed C2C12 Myoblast Differentiation. Effects of BPA exposure during C2C12 myoblast differentiation were assessed by measuring total protein and MHC and myogenin protein expression as markers of myoblast differentiation. Following a 72 h exposure to BPA in the differentiation condition, immunofluorescence staining for MHC was performed. Numerous large and elongated myotubes were present in the 10 µM BPA treatment group, similar to the untreated control group. However, BPA treatment at 25 µM caused smaller and shorter myotubes compared to the untreated control. Exposure to 50 µM BPA nearly abolished myoblast differentiation; myotubes were rare, and myocytes were scattered and stained positive for MHC but contained only 1-2 nuclei (Figure 4(a)). Consistent with the immunofluorescence staining, after treatment with 25 and 50 µM BPA, total protein synthesis was significantly decreased by 25% and 43%, respectively (Figure 4(b)). In addition, Western blot analysis further confirmed significant decreases in both MHC and myogenin protein expression in the 25 and 50 µM treated groups compared to control, which explains the lower level of myoblast differentiation (Figures 4(c) and 4(d)).

Figure 3: BPA induced cellular apoptosis. C2C12 myoblast cells were cultured in the absence or presence of BPA at the indicated concentrations for 72 h. Photomicrographs of treated cells stained with Hoechst (blue) to visualize nuclear morphology (a). Total proteins were subjected to Western blot analysis with anti-caspase-3, anti-caspase-8, and anti-caspase-9 antibodies (b) and expressed as relative band intensity to tubulin (c). **P < 0.01; ***P < 0.001 compared to the control group. Scale bar = 20 µm.

Figure 5: Molecular mechanism of BPA in C2C12 myoblast proliferation and differentiation. C2C12 myoblasts were cultured in either growth medium (GM) or differentiation medium (DM) in the absence or presence of BPA at the indicated concentrations for 72 h. Total proteins were extracted and subjected to Western blot analysis with anti-phospho-JNK, anti-phospho-p53, and anti-phospho-P65 NF-κB antibodies. The activation of each phosphorylated protein was expressed as fold change compared to the control group. *P < 0.05; **P < 0.01; ***P < 0.001 compared to the control group. (a and c) Proliferation and (b and d) differentiation conditions.
Analysis of the application and development of new building materials in modern high-rise buildings

With the rapid development of modernization in China, construction projects are gradually moving towards high-rise buildings in order to improve social and economic benefits, reduce the investment in municipal construction and shorten construction time. Because of the resulting high requirements on construction, new building materials are used to fully meet the corresponding needs of modern high-rise buildings and other special functions. This article discusses new building materials and their application and development trends in modern high-rise buildings, which can provide some reference and help for the better development of both.

Introduction

The increasing population of cities makes modern high-rise buildings necessary to solve the dilemmas of high intensity, super concentration, large density, high capacity and so on. High-rise buildings can economize urban land, shorten the development cycle of public facilities and municipal pipe networks, reduce municipal investment and speed up urban construction. Thus, in order to promote the development of high-rise buildings more effectively, it is necessary to develop new construction materials with high strength, light weight and durability, taking into account the environment, management, safety and construction technology of high-rise buildings [1].

Application of new building materials in high-rise buildings

New building materials play a more important role in the construction of modern high-rise buildings than traditional building materials; they include new wall materials, thermal insulation materials, decoration materials, etc.

Application of new wall materials

With the promotion of the green concept and the shortage of land resources, it has become an inevitable trend to replace traditional solid bricks with new wall materials in high-rise buildings. The new wall materials include many varieties and categories that differ from solid bricks, lime stones and other traditional wall materials. At the functional level, there are wall materials, decorative materials, door and window materials, thermal insulation materials, waterproof materials, sound insulation materials, bonding and sealing materials, and all kinds of hardware, plastic parts, auxiliary materials and so on. At the material level, there are natural materials, chemical materials, metal materials, non-metallic materials, etc. Their characteristics include light weight, heat insulation, sound insulation, thermal insulation, and being formaldehyde-free, benzene-free and non-polluting. Some new composite energy-saving wall materials integrate fireproof, waterproof, moisture-proof, sound insulation, heat insulation and heat preservation functions, which makes them simple and quick to assemble and allows greater use of space [2]. Many new wall materials are widely used, such as plaster or cement lightweight partition boards, color steel plates, and aerated concrete blocks.

Application of thermal insulation materials

The heat preservation of high-rise buildings is also key to a construction project. Thermal insulation materials are composites with significant impedance to heat flow. The common characteristics of thermal insulation materials are that they are light, loose, porous or fibrous.
Heat conduction can be impeded because of the internal air resistance of these materials. Moreover, inorganic insulation materials are not flammable, can be used over a wide temperature range, and have good resistance to chemical corrosion. Today, thermal insulation materials worldwide are developing towards the integration of high efficiency, energy saving, thin layers, heat insulation and waterproof protection. In developing new thermal insulation materials, several aspects are emphasized, including the targeted use of thermal insulation materials and design and construction according to standard specifications, so that insulation efficiency is improved as far as possible while costs are also reduced [3]. Some thin-layer thermal insulation coatings have been researched at home and abroad, such as Thelma cover and other products of Ceramic-Cover & J.H. International by SPM Thermos-Shield and Thermal Protective Systems in the United States, and ZS-211 reflective thermal insulation coatings, folding ZS-1 high-temperature insulation coating material and soft heat insulation felt in China.

Application of new decorating materials

The new decoration materials are green, environmentally friendly and energy-saving materials with superior heat preservation and fireproof performance. The surface of the material is smooth and its density is high. It truly realizes production-line manufacture of new building wallboard and greatly reduces the labor intensity of production workers, with the advantages of a wide source of raw materials, a simple production process and low energy consumption in production. For example, a new type of glass that combines special materials and glass organically can not only effectively control convective heat transfer but can also dim automatically with changes in contact temperature, intelligently controlling indoor temperature to create a more comfortable environment for people through the use of special technology [4].

Graph 1. Classification.

The future development trend of new building materials in high-rise buildings

New building materials have many advantages, such as being multi-functional, reliable, safe and attractive, which helps them adapt to the development of modernization. Therefore, the development of new building materials and their effective application to modern high-rise buildings can improve people's lives and promote the development of science, technology and the economy. New building materials can improve the safety of modern high-rise buildings and support the sustainable development of environmental resources. At the same time, the application of new materials can effectively improve the seismic performance of modern high-rise buildings, and the use of fire-resistant or flame-retardant new materials can also reduce the casualties and economic losses caused by fires [5]. Using new materials that are light, sturdy and durable in actual construction will not only effectively reduce the weight of the building and the consumption of materials, but will also promote the development of mechanized construction, improve construction efficiency, reduce construction cost, and promote the ecological construction of high-rise buildings.
4. The necessity of the application of new building materials in high-rise buildings

As the conflict over urban land has become increasingly prominent, high-rise buildings have gradually appeared, and their size and number have gradually increased. However, the vertical transportation capacity of a high-rise building is low and very limited, and the external open space is very small. If an earthquake or fire occurs, it will create strong evacuation pressure and pressure on firefighting and rescue. Therefore, in order to improve the fire and seismic performance of modern high-rise buildings, the use of a series of new materials is very necessary in the actual construction process: the application of new materials can effectively improve the seismic system of modern high-rise buildings, and the use of refractory or flame-retardant new materials can also reduce the casualties and economic losses caused by fire [6].

At the same time, in traditional construction processes, clay solid bricks are usually used. Because the production of clay solid bricks involves high energy consumption and heavy pollution, it affects the sustainable development of the economy. In addition, because high-rise buildings are large and consume more building materials, the use of clay solid bricks increases the weight of the building and increases environmental pollution. Using new materials that are strong, durable and of good quality in actual construction can not only effectively reduce the weight of the building and material consumption, but can also promote the development of mechanized construction, improve construction efficiency, reduce construction cost, and promote the ecological construction of high-rise buildings.

Concluding remarks

With the development and progress of the world economy, the demand for energy is increasing, but much energy is also wasted in the process of energy utilization. In recent years, energy has become scarcer, so the energy problem has attracted more and more attention from all countries in the world [7]. Compared with traditional building materials, new materials have many advantages, such as light weight, high strength, heat preservation and energy saving [8]. In modern construction, the construction of high-rise buildings is the driving force for the development and use of new building materials; meanwhile, the development and research of new building materials also provides a basic guarantee for the development of modern high-rise buildings.
Communication and Leadership Skills: A Comparative Study of the Malay Language Specialization Trainee Teachers in Malaysia

This research was conducted to compare the generic skills of Malay language specialization students in Malaysia from two higher education institutions: a public university (PU) and a teacher training institute (TTI). The two skills investigated among the 127 participants were communication skills and leadership skills. Of the total, 77 participants were from the PU and the remaining 50 from the TTI. The MyGSI instrument was employed to measure the skills, and data were analyzed and reported as mean values. The findings suggested an acceptable level of mastery of both skills among the trainee teachers, even though a higher proficiency was evident among the PU students. The study's implications are: 1) further in-depth research on the syllabus of these institutions as to how the results were not at variance, 2) an effort towards standardization of an improved syllabus to encourage the production of excellent teachers across schools in Malaysia, and 3) a portfolio assessment among the students in which the display of generic skills through program participation and management by the trainee teachers is graded officially by the training institutions.

Introduction

Generic skills are part of the essentials for a person to function efficiently and holistically in everyday life, and these skills have been addressed extensively at all levels of formal education in Malaysia, especially higher education. Generic skills are not limited to the skills related to the subject matter of a student's major (Lizzio & Wilson, 2004), but rather include communication skills, teamwork skills, and problem-solving skills (Biggs, 1999), among others. In order to produce a generation of students with a strong set of generic skills, teachers should be able to integrate elements that encourage the development of these skills during the teaching and learning process. Thus, as a guide for school teachers, the Curriculum Development Center (1995) listed the skills that must be possessed by students for the purpose of career preparation, namely skills in: 1) communication, 2) technology and numerical application, 3) planning and administration, 4) teamwork, 5) problem-solving and decision-making, and 6) cross-cultural understanding. Naturally, it is imperative that these skills be mastered by trainee teachers during their training period so that they can serve as effective future agents of socialization in the community. Apart from the prerequisite subject content mastery, the trainee teachers should also be acquainted with as many generic skills as possible to meet the requirements of the national education syllabus.
Quality teachers are not beyond reach, but will require significant investments. Some are naturally born with the appropriate pedagogical skills, while others are willing to strive hard to develop these skills. Their roles are of crucial importance to ensure that the national agenda of progressing towards a developed country is achieved by the year 2020. This is evident in the Malaysian Education Blueprint 2006-2010, which expounded the role of education in developing knowledgeable and highly skilled individuals with good values to successfully achieve Vision 2020. Thus, this research attempted to investigate the level of generic skills among a group of future teachers in a public university (PU) and a teacher training institute (TTI), focusing on two aspects, namely communication and leadership.

Problem Statement

The higher education institutions are key players in fostering the growth of society, especially in empowering Malaysia as a knowledge-based developed country. Their roles transcend the traditional responsibility of merely expanding and promoting the culture of knowledge towards fulfilling national aspirations and expectations. This shift aims at producing outstanding students who are creative and innovative and who are rational thinkers with high self-discipline and moral values (Dickerson & Green, 2004; Hogarth & Wilson, 2005), in response to the national agenda of education (Hashamiza, 2004; Mohd Izham, 2011). Nevertheless, recent public concern questioning the declining credibility of students from the current education system, particularly in terms of leadership and self-worthiness, deserves serious attention. The likely factor that has led to this turmoil is the inclination towards sole mastery of the subject matter of their study major while overlooking self-improvement related to value-added skills. The Ministry of Higher Education (MoHE) and the Ministry of Human Resources reported an alarming unemployment rate; 11 reasons were identified, seven of which related to shortcomings in generic skills. Thus, a Module of Human Skills Development was introduced in 2006. With this module, the MoHE enforced its implementation across almost all curricular and co-curricular courses at all higher education institutions in the country.

The many public perceptions regarding the poor student standard have prompted important questions as to whether the higher education institutions are indeed providing all the necessary knowledge and training in generic skills for these future teachers. These issues gave impetus to this research as it set out to compare and measure two sets of generic skills among trainee teachers of the Malay language specialization. In addition, similar studies involving this cohort of participants are yet to be found, as most have been mainly concerned with teachers from the technical and vocational stream.
Based on the premise that the duty to instil generic skills falls mainly on teachers with a background in communication, the research chose trainee teachers from the language field as participants. Their teaching content, after all, is related to communication to a great extent, as opposed to other technical subjects such as science, information technology and engineering that prioritize subject matter. Thus, the research compared one Malaysian public university (PU) with one teacher training institute (TTI), involving third-year students of the Malay language specialization.

Research Questions

This comparative research was conducted to examine and compare the levels of generic skills of the trainee teachers in a PU and a TTI to answer this question: 1) What is the level of the Malay language option trainee teachers' generic skills in terms of communication skills and leadership skills?

Research Design

This was a quantitative study that applied the Malaysian Generic Skills Inventory (MyGSI) instrument. The quantitative research design was chosen because it explains a given phenomenon and its measurements numerically. The items were analyzed using the Rasch model to determine the trustworthiness and authenticity of the MyGSI construction.

Research Participants

This research involved two cohorts of trainee teachers from the program of Bachelor of Education in the Teaching of Malay Language in one PU and one TTI in Malaysia. A total of 127 third-year students were randomly chosen as the research participants. The number exceeds 100 because a sample size of fewer than 100 is not reliable; there will be large fluctuations in calculations, especially when the research is replicated (Chua Yan Piaw, 2006).

Research Instrument

The instrument used by the researchers was the Malaysian Generic Skills Inventory (MyGSI) questionnaire, developed by a group of researchers at Universiti Kebangsaan Malaysia (Siti Rahayah, 2003). Using a Likert scale to measure the communication and leadership skills of the trainee teachers, each item requested the participants to state their level of agreement, ranging from "strongly disagree (1)" to "strongly agree (5)". The questionnaire contained 81 items and consisted of two parts: Section A and Section B. The first section contained a list of questions to collect the demographics of the participants, covering their major of study, year of study, work experience and Cumulative Grade Point Average. The second section comprised three constructs of generic skills, viz. communication, leadership and group work, and each construct comprised several subconstructs. However, only the findings from Section B covering two constructs of generic skills (communication and leadership skills) are reported in this paper.

Data Analysis

A detailed review of the survey was performed with computerized analysis using the Statistical Package for the Social Sciences (SPSS) version 11.5 to extract the data. The results were arranged into tables showing the means, and the discussion of the cumulative readings is presented separately for each construct.
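As a rough illustration of the descriptive step reported in the following sections (mean Likert score per subconstruct, overall and split by institution), the computation can be sketched in Python as below; the column names and the four example rows are invented placeholders, since the study itself ran this analysis in SPSS 11.5 on the full MyGSI dataset.

# Sketch: mean Likert score per subconstruct, overall and by institution.
# Columns and rows are illustrative only; the real analysis used SPSS 11.5.
import pandas as pd

data = pd.DataFrame({
    "institution": ["PU", "PU", "TTI", "TTI"],
    "presentation": [4, 5, 4, 3],     # 5-point Likert responses
    "listening":    [4, 4, 4, 3],
    "nonverbal":    [3, 4, 3, 4],
})

item_cols = ["presentation", "listening", "nonverbal"]
print(data[item_cols].mean())                           # overall mean per subconstruct
print(data.groupby("institution")[item_cols].mean())    # PU versus TTI means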
Construct 1: Communication Skills

As indicated in Table 5.1, the participants from both groups displayed a high level of communication skills, as evident from the high mean values for most items in the communication subconstructs. The mean average for each communication subconstruct exceeded 3.67 except for the use of non-verbal skills (mean = 3.499). Scholars have claimed that a mean average greater than 3.67 indicates that the items in a given construct have high reliability (Alias, 1993; Chua Yan Piaw, 2006).

The average mean of the communication skills based on the 8 items was in the range between 3.499 and 3.907. The highest mean was for the ability to deliver a presentation, with a mean of 3.907. The lowest mean was for the subconstruct on the ability to use non-verbal skills, with a mean of 3.499. The remaining subconstructs were high: the ability to summarize (3.896), to practice listening skills (3.887), to negotiate (3.848), to interact (3.799), to present an idea verbally (3.787), and finally, the ability to present an idea in written form (3.701). The average mean for the communication skills shown by the trainee teachers in the study was 3.791.

The highest mean of the participants from the PU was for the ability to make a presentation (4.14), followed by the ability to practice listening skills (4.12), to negotiate (4.01), to interact (4.00), to summarize (3.97), to present an idea verbally (3.96), and to present an idea in written form (3.87).

The participants from the TTI showed the highest ability to interact, with a mean of 3.81, followed by the ability to make a presentation and the ability to negotiate (both 3.72) and the ability to practice listening skills (3.70). The remaining subconstructs were not as impressive; their ability to present an idea verbally recorded a mean of 3.64, the ability to summarize 3.66, and the ability to present an idea in written form 3.57.

The data suggested that the participants from the PU surpassed their counterparts from the TTI in every subconstruct except one, the ability to use nonverbal skills, with an equal mean of 3.50. This turned out to be the lowest mean for both groups.

Construct 2: Leadership Skills

The analysis indicated that the participants had a high mean average, i.e. above 3.67, for all the subconstructs under the leadership aspect. The ability to consider differing opinions had the highest mean (4.03), followed by the ability to lead group members to an agreement and the skill to identify and develop potential (both 3.94), and the ability to plan and manage and the ability to make proper decisions (both 3.92). They were identified as less skilful in the ability to accept responsibility and the ability to complete multi-dimensional tasks (both 3.91) and finally the ability to give instructions (3.83). The overall mean was 3.925, which fell into the high category.
In comparative terms, the participants from the PU were found to surpass their counterparts in all 8 subconstructs. The trainee teachers from the PU displayed the highest mean in the ability to consider differing opinions (4.23). The second highest mean from the PU trainees was in the ability to identify and develop potential (4.19), and the third highest was in the ability to make proper decisions (4.09), followed by the ability to plan and manage as well as the ability to complete multi-dimensional tasks (both 4.08) and the ability to accept responsibility and the ability to lead group members to an agreement (both 4.06). The lowest mean from the trainees of the PU was 3.98, in the ability to give instructions.

Interestingly, the highest mean of the trainees from the TTI was also for the ability to consider differing opinions (3.87). The second highest mean was for the ability to lead group members to an agreement (3.84). Meanwhile, the third highest mean was for the ability to accept responsibility and the ability to plan and manage (both 3.79). The fourth and fifth highest means, with a slight difference, were for the ability to make proper decisions (3.78) and the ability to complete multi-dimensional tasks (3.77). These were followed by the ability to identify and develop potential, with a mean of 3.74, and finally the ability to give instructions, with a mean of 3.70. Total mean average: 3.925.

Discussion of Results

There was a significant display of competence in most of the subconstructs across the communication and leadership skills, as suggested by mean values ranging from 3.50 to 4.19. The finding is promising, as the participants are future teachers who are expected to nurture and instil these skills among school students in Malaysia. Mastering these skills is thus a prerequisite for their enculturation among their future students. However, there are several issues reflected in the findings that need to be addressed in order to improve teaching practice in Malaysia.

The results suggested that the trainees from the PU surpassed their counterparts in every subconstruct but one, indicating that they had been receiving good training at the university; in other words, the syllabus and its execution at the PU have been effective. A second interpretation could be that the training provided was similar between the two institutions but that the participants at the PU had by some means managed to benefit from the training better than their counterparts. In response to this possibility, further in-depth research on the syllabus of these institutions as to how the results were not at variance needs to be conducted. An effort towards standardization of the syllabus should also be emphasized to encourage the production of excellent trainee teachers regardless of their higher education institution.
In terms of the communication aspect, the overall mean average of the communication subconstructs was found to be lower than that of leadership. However, the mean was still considered high, with the trainee teachers from the PU displaying greater mastery in all subconstructs except one. This is a good indicator because communication is pivotal in a teacher's career, as he or she needs to communicate with a great number of stakeholders, especially the students. More importantly, the communication needs to be effective. Weak communication will jeopardize the teaching and learning process by complicating the process of deciphering messages or, in a worse scenario, the messages might be wrongly interpreted. Such a situation is not impossible if students keep their confusion to themselves when dealing with teachers with ineffective communication skills.

To overcome this challenge, academicians in the teacher training institutions of Malaysia are advised to encourage trainee teachers to be more vocal during discussions in lectures, tutorials and student presentations. This might appear to be a significant challenge because Asians, due to cultural inhibition, often refuse to express a conflicting opinion so as to save face or to appear friendly in front of others. However, it is a timely call that Malaysian students need to be taught to be outspoken in a subtle and non-threatening manner. An attempt towards that objective can be achieved by blending outspokenness with humility; voicing an opinion honestly but with a respectful tone and choice of words is the best policy to exhibit courtesy.

Considering the current trend among the youth of the country, this effort will not be impossible. Because of the borderless world, they have been exposed to the different cultures of the rest of the world, particularly Western countries, through the mass media in the form of print, electronic or New Age media. This has led them to be more culturally enlightened and expressive in voicing their opinions. Exposing society to different cultures of the world with an appropriate amount of censorship is one of the efforts by the government in educating the people on racial harmony and integrity. The heterogeneity of race at the institutions, with students from different ethnicities, demographic backgrounds, and socioeconomic statuses, serves to train the participants to tolerate various types of people in their lives and to encourage the socialization process. Thus, it was not surprising when they displayed a high mean for the ability to interact, a crucial skill considering the fact that Malaysia is composed of many different ethnicities.

However, among the eight subconstructs, which involved both receptive and expressive skills, the participants were good except for the non-verbal communication subconstruct, which was low among the trainees from both groups. This is alarming, as a Malaysian-based study conducted by Jamaluddin (2009) suggested that verbal remarks guaranteed only 10 percent of the effectiveness of the classroom communication process, whereas the remaining 90 percent lay in the application of non-verbal skills such as facial expressions, eye contact, body gestures, and voice tone.
The second generic skill investigated in this study, the leadership skill, showed successful mastery across all eight subconstructs among the trainee teachers. Leadership skills are essential for teachers, who are constantly engaged in classroom management, apart from leading and managing school activities. These skills are closely related to communication skills, as the integration of good communication skills reflects leadership ability as well as the ability to support, consult, settle conflicts, solve problems and produce a healthy relationship (Nurul Afizah, 2005).

The trainees were recognized as multi-tasking leaders who were able to take responsibility, execute and manage plans while taking into account the different views of group members, and recognize and develop their potential to lead the most appropriate decision-making process. There is a probability that this competence is partly due to the system introduced at the students' residential colleges, which requires them to be actively involved as organizers of student programs. Conducting a portfolio assessment among the students is thus advisable to involve all students, whereby the display of generic skills through program participation and management by the trainee teachers is graded officially.

Another potential contributory factor that has led to this group of multi-taskers is the fact that the students have been trained with many different assignments in their courses that require them to present, write term papers and group assignments, and conduct mock teaching sessions as well as creative projects that require them to interview, act, or dance. These various types of assignments have undoubtedly played a major role in enhancing the students' versatility. Working in a team for an assignment demanded that the students build a good rapport with each other. This was evident from the high level of respect indicated by their ability to attend to differing views.

The indication of good leadership skills suggests that the participants were trainee teachers with a good discerning emotional quotient (EQ), a claim made based on the research by Goleman (1996), which reported higher leadership performance in social and personal efficiency, emotion management, and self-independence among participants with a high EQ score. A high EQ can create an individual who is successful in interpersonal and intrapersonal relationships, and thus it can be safely assumed that the participants in the research would serve the teaching community well as leaders for their schools and their future students.

Conclusion

The role of effective institutions of higher learning needs to transcend the mere effort of producing graduates with intellectual capabilities, because graduates need to be nurtured with generic skills to assist them in their careers and, especially, to function well in life. Developing generic skills successfully lies in skilful nurturing by experienced teachers in an effective classroom. A good teacher-oriented lesson may produce students who are good with theories, but it may also produce passive and dependent students who are afraid to take charge.
Scholars in past studies have identified contributing ideas, presentation, acting, and self-research as means of developing generic skills. This research was therefore conducted to investigate the current level of generic skills among trainee teachers in the aspects of communication and leadership. The findings indicated a positive picture of the participants' skills as future teachers in Malaysian secondary schools, a good indicator that they will be able to equip their future students with these much-needed skills. As Malaysian studies involving Malay-language specialization trainee teachers are scarce, further investigation is highly recommended to examine other, equally important sets of generic skills among trainee teachers.
Table 1. Mean average for the communication subconstructs.
Table 2. Mean average for the leadership skill subconstructs.
For Researchers
SCRO Review Categories Flow Chart: This flow chart shows the type of review necessary for different types of human stem cell research.
Stem Cell Matrix: A list of hESC approved for use at Stanford University.
Somatic Cell Informed Consent for use in Human Stem Cell Research: A template informed consent that meets requirements for developing human therapies from stem cell sources.
SCRO eProtocol Application: A Word version of the questions in SCRO's eProtocol application, to use for training or for gathering your information prior to submission.

Training
"Let the IRB Staff Come to You!": Researchers can arrange for 1-on-1 or group sessions. For research participants, patients, and others interested in medical practice research, a new resource is available on the Spectrum website: Research on Medical Practices (ROMP) and ROMP Videos.

Do I need an IRB Submission?
Does My Project Need IRB Review? The Determination of Human Subject Research Application should be submitted in eProtocol as a Human Subjects Research (HSR) Determination. To submit a 'Determination of Human Subject Research' form in eProtocol, select 'Create a Protocol' on the 'My Dashboard' webpage. After completing the requested information, select 'Human Subject Research (HSR)' as your type of review. Complete the application and attach the Human Subject Research (HSR) Determination Form for review (there is also a link to this form in the attachments section of the protocol application). After the IRB has made its determination, the IRB will "Keep" or "Withdraw" the HSR application. Withdrawn applications DO meet the definition of human subjects research, and require that an IRB protocol be submitted and approved prior to any research activities being conducted, including recruiting or consenting prospective participants. To view the IRB's determination in eProtocol: go to 'My Dashboard' and select 'Non-Active Protocols', open the protocol number in question, and click the bottom-left red tab, "Print View", for a PDF of the HSR application; the HSR determination will be on page 2.
IRB Review Type: What is it and why do I need to know?

Medical Research
Medical Application Process; Filling out the Medical Protocol Application; Sample Medical eProtocol applications. Clinical trial documents: the following checklists are from the FDA E6 Consolidated Guidance for Good Clinical Practice.

NIH Funded Studies
NIH policy on the Use of a Single Institutional Review Board for Multi-Site Research (effective January 25, 2018): all sites participating in multi-site studies involving non-exempt human subjects research funded by the NIH must use a single IRB (sIRB). Applicants must include a plan for the use of a sIRB in their applications/proposals submitted to the NIH on or after January 25, 2018. Costs associated with the sIRB review may be included as direct costs in the application budget. Work with your Research Process Manager prior to submitting your proposal to NIH; NIH FAQs on sIRB costs are available. The Revised Common Rule likewise requires that all institutions located in the United States that are engaged in cooperative research conducted or supported by a Common Rule department or agency rely upon approval by a single IRB for the portion of the research that is conducted in the United States. The Single IRB plan should be identified and included by the applicant within the grant application or the contract proposal. The proposed budget in the grant application/contract proposal should reflect all necessary Single IRB costs without an approved 'other exception'. Applicants should not assume that an exception will be granted when considering what Single IRB costs to include in the budget.
Will the single IRB that is identified in the NIH application/proposal be evaluated during peer review? (NIH FAQs) No. The proposed single IRB will not be evaluated as part of the peer review process and will not affect the overall assigned score of an application/proposal or the overall rating of the acceptability of the Protection of Human Subjects section. Peer reviewers may note if the plan to comply with the NIH single IRB policy is not included in the application/proposal, but this will not impact the score.

Relying on a Single IRB (sIRB)
Stanford's IRB may agree to rely on a single IRB (sIRB) for multisite studies to provide initial and ongoing regulatory reviews. The reliance terms are outlined in an IRB Authorization Agreement (IAA). Stanford has signed on to SMART IRB, which supports IRB reliance across the nation. The sIRB is responsible for reviews required by federal regulations at 45 CFR 46 and 21 CFR 50 and 56 (initial review, continuing review, modifications, reportable events). When Stanford's IRB relies on a sIRB, it retains responsibility to ensure investigator compliance with the protocol, oversee the sIRB's determinations, ensure compliance with applicable federal and state regulations, and ensure compliance with Stanford policy. Stanford's IRB also bears responsibility for the local conduct of sIRB studies, including managing noncompliance and unanticipated problems, ensuring training, study monitoring, local ancillary requirements, managing reliance agreements, and handling study-specific issues. Reliance on a sIRB is considered on a case-by-case basis for high-risk studies when not mandated by NIH Single IRB policy or required by the Revised Common Rule's Cooperative Research Provision (45 CFR 46.114). Some examples might include first-in-human drug or device studies, certain biological agents or recombinant DNA vector studies, or studies that involve stem cells or hESC. Stanford's IRB will not rely on a sIRB when Stanford is the sole site. The Protocol Director (PD) is required to submit a sIRB eProtocol application to request reliance on a sIRB. When (1) the sIRB eProtocol application and (2) the reliance IAA are complete, a Reliance Letter will be issued through eProtocol. Please see the sIRB SOP for more detailed information, and see the additional Relying PI responsibilities. NCI CIRB: the NCI Central IRB Initiative.
Evaluation of association of DRD2 TaqIA and -141C InsDel polymorphisms with food intake and anthropometric data in children at the first stages of development Abstract The reward sensation after food intake may be different between individuals and variants in genes related to the dopaminergic system may indicate a different response in people exposed to the same environmental factors. This study investigated the association of TaqIA (rs1800497) and -141C InsDel (rs1799732) variants in DRD2/ANKK1 gene with food intake and adiposity parameters in a cohort of children. The sample consisted of 270 children followed until 7 to 8 years old. DNA was extracted from blood and polymorphisms were detected by PCR-RFLP analysis. Food intake and nutritional status were compared among individuals with different SNP genotypes. Children carrying the A1 allele (TaqIA) had higher energy of lipid dense foods (LDF) when compared with A2/A2 homozygous children at 7 to 8 years old (GLM p=0.004; Mann Whitney p=0.005). No association was detected with -141C Ins/Del polymorphism. To our knowledge, this is the first association study of the DRD2 TaqIA and -141C Ins/Del polymorphism with food intake and anthropometric parameters in children. DRD2 TaqIA polymorphism has been associated with a reduction in D2 dopamine receptor availability. Therefore, the differences observed in LDF intake in our sample may occur as an effort to compensate the hypodopaminergic functioning. Introduction The prevalence of childhood overweight and obesity had a dramatic increase between 1990 and 2010, rising from 4.2% to 6.7%, and it is estimated that in 2020 the rate will be 9.1%, or approximately 60 million children (de Onis et al., 2010). The obesity prevalence in developed countries is twice higher than in developing countries. However, most of the affected children (35 million) live in developing countries (de Onis et al., 2010). Moreover, the relative increase rate of obesity in recent decades was higher in developing countries (+65%) than in developed countries (+48%) (de Onis et al., 2010;Oggioni et al., 2014). Obese children are more likely to become obese adults, and have higher risk of developing coronary heart diseases and other related diseases, which diminish life expectancy (Must, 1996;Rossner, 1998;Berenson, 2012). Insulin resistance, metabolic syndrome, and type 2 diabetes are also consequences of childhood obesity (Gupta et al., 2012). Some factors contribute to overweight and obesity, such as low physical activity, high intake of high fat and sugar foods, change from the rural lifestyle to the urban, sociocultural factors, age, gender, and genetic factors (Popkin, 2006;Gupta et al., 2012;Oggioni et al., 2014). The sensation of reward after food intake, especially of palatable foods, may be different among individuals and might cause different amounts of food ingestion (Berridge et al., 2010). The dopaminergic system regulates food intake through a reward system, and although its function in eating disorders is poorly understood, it is known that the use of dopamine D 2 receptors agonists decreases food intake in rats (Terry et al., 1995). A study that analyzed images via Positron Emission Tomography (PET) scans shows that obese individuals have low concentration of striatal D2 dopamine receptors as a mechanism of downregulation due to high levels of dopamine, indicating that the reduction of these receptors could be associated with an addictive behavior also observed in drug users (Wang et al., 2001). 
DRD2/ANKK1 gene polymorphisms alter the density of dopamine receptors, and thus may explain the different food intake levels in individuals exposed to the same environmental factors (Stelzel et al., 2010). Several studies have associated the TaqIA (rs1800497) polymorphism with obesity, body mass index (BMI), and food intake (Barnard et al., 2009;Winkler et al., 2012;Cameron et al., 2013;Carpenter et al., 2013). However, to our knowledge, there is no study linking the -141C Ins/Del (rs1799732) polymorphism to obesity, although it was associated with other pathologies such as alcoholism and schizophrenia (Jonsson et al., 1999a;Johann et al., 2005;Lafuente et al., 2008a,b). Therefore, the objective of the present study was to analyze the association of TaqIA and -141C Ins/Del polymorphisms with adiposity parameters and food intake of children. Subjects The sample consisted of 270 children followed until 7 to 8 years old on average. The nutritional and anthropometrics data were collected at 12 to 16 months, 3 to 4 and 7 to 8 years. The children included in the present study participated in a randomized controlled trial of dietary counseling on breast feeding and diet during the first year of life. The trial consisted of 500 children, randomized in a control or intervention group, of which mothers received a dietary advice about breastfeeding and complementary feeding during home visits in children's first year of life. This dietary advice was based on the "Ten steps to Healthy Feeding", a Brazilian national health policy for primary care, supported by World Health Organization (2006). More information of the first phase of the study can be found elsewhere (Vitolo et al., 2010), but in Table 1 we described the main characteristics of the sample. A substantial reduction of the sample occurred throughout the study and the main reason for the losses was the inability to locate the participants' homes, usually due to the family moving to another city. Other reasons for losses were refusal to continue and children or maternal death. This intervention was not the primary objective of the present research and the participation in the intervention or control group was used as a confounding factor in statistical analyses. Ethnicity was defined by the interviewer by skin color (i.e., whites and non-whites). More details of the traits studied are described elsewhere (Galvão, 2012;Louzada et al., 2012;Fontana et al., 2015;Miranda et al., 2015). This study was conducted according to the guidelines of the Declaration of Helsinki. The study protocol was approved by the Ethics Committee of the Universidade Federal de Ciências da Saúde de Porto Alegre (n. 286/06), and all participants provided written informed consent before commencing the study. Nutritional status assessment At 12 to 26 months, children were weighted using a portable digital scale (Techline, São Paulo, Brazil) and length was measured by an infant stadiometer (Serwital Inc, Porto Alegre, Brazil). At 3 to 4 and 7 to 8 years, children were weighted using a digital scale (Techline), and height was measured using a stadiometer (SECA, Hamburg, Germany). BMI was calculated [weight (kg)/height 2 (m 2 )], and the values were transformed into Z-scores. Dietary data assessment One 24-hour dietary recall was collected for each child at 12 to 16 months, and two 24-hour dietary recalls, on two nonconsecutive days, were collected for each child at the ages of 3 to 4 and 7 to 8 years. 
The 24-hour dietary recall was carried out by a trained undergraduate nutrition student, and the child's food intake was recorded on the day before the last home visit. A food portion measurement device and common household measures (e.g. teaspoons, tablespoons, cups) were used to quantify portion sizes. Dietary information was entered into the Nutrition Support Program software from the Escola Paulista de Medicina, Federal University of São Paulo, based on the United States Department of Agriculture chemical composition tables. The energy intake was calculated using only one dietary recall or the average of two dietary recalls. The items listed in the response were classified as sugar-dense foods (SDF) if the percentage of simple carbohydrates was higher than 50% (e.g., soda, Jell-O, candies, and artificially flavored juice) and as lipid-dense foods (LDF) if there was more than 30% fat (e.g., fried pastries, cookies with fillings, cold cuts and sausages, fried foods, and chocolate). Statistical analyses Allele frequencies were estimated by gene counting. A chi-square test for goodness-of-fit was used to determine whether the observed genotype frequency distributions agreed with those expected under Hardy-Weinberg equilibrium. Linkage disequilibrium was estimated using the Haploview software Version 4.2 (Broad Institute, Barrett et al., 2005). Pearson's chi-squared or Fisher's Exact Test was used to compare genotype or allele frequencies between white and non-white children. Since the first publication of an association study with the TaqIA polymorphism (Blum et al., 1990), and because of the rare occurrence of the A1 allele, genotypes are normally grouped as A1 allele carriers (A1/A1 and A1/A2, n=102) versus A2/A2 homozygotes (n=116). Similarly, due to low frequencies of the Del allele, genotypes of the -141C InsDel polymorphism were grouped as Del allele carriers (Del/Del and Ins/Del, n=61) versus Ins/Ins homozygotes (n=157). All data are presented as mean and standard deviation. Statistical analysis of the SDF and LDF variables was performed on natural logarithm transformed data to normalize their distribution. This allowed including these variables in multivariate analysis; non-transformed values are shown in Table 2. Means of food intake (average daily energy intake, SDF, LDF and average daily energy intake per kilogram) and adiposity (BMI Z-score) parameters were compared among genotype groups by a multivariate general linear model (GLM). The multivariate GLM was performed including all dependent continuous variables in one model, using the categorical variables (1) the control or intervention variable of the randomized trial, (2) sex, and (3) ethnicity as covariates, and genotypes of the -141C InsDel (rs1799732) and TaqIA (rs1800497) polymorphisms as fixed factors (see Table 2). This first step of the analysis verified whether the group of dependent continuous variables was significantly affected by the group of independent categorical variables. Only LDF intake at 7 to 8 years old was associated with the TaqIA polymorphism, and the covariates did not influence this dependent variable. Therefore, to test the association of the TaqIA polymorphism alone with LDF intake at 7 to 8 years, we performed a Mann-Whitney test. A p-value of < 0.05 was considered significant. All tests and transformations were performed using the Statistical Package for Social Sciences, Version 20.0 (SPSS®, Chicago, IL, USA).
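The analyses above were run in SPSS and Haploview; as a minimal sketch only, the Hardy-Weinberg goodness-of-fit test and the Mann-Whitney comparison of LDF intake between A1 carriers and A2/A2 homozygotes could also be reproduced with scipy in Python. The genotype counts and intake values below are entirely made up for illustration and are not the study's data.

```python
# Minimal sketch (not the authors' SPSS/Haploview code) of two analysis steps
# described above: a Hardy-Weinberg goodness-of-fit test from genotype counts,
# and a Mann-Whitney comparison of log-transformed LDF intake between A1
# carriers and A2/A2 homozygotes. All numbers are illustrative only.
import numpy as np
from scipy.stats import chisquare, mannwhitneyu

def hardy_weinberg_test(n_11, n_12, n_22):
    """Chi-square goodness-of-fit of observed genotype counts against
    Hardy-Weinberg expectations (1 df: 3 classes - 1 - 1 estimated allele frequency)."""
    n = n_11 + n_12 + n_22
    p = (2 * n_11 + n_12) / (2 * n)          # estimated frequency of allele 1 (e.g., A1)
    q = 1 - p
    expected = np.array([p**2, 2 * p * q, q**2]) * n
    observed = np.array([n_11, n_12, n_22])
    return chisquare(observed, f_exp=expected, ddof=1)

# Illustrative genotype counts (not the study's exact numbers)
print(hardy_weinberg_test(20, 82, 116))

# Mann-Whitney test on log-transformed LDF energy intake,
# A1 carriers (A1/A1 + A1/A2) vs. A2/A2 homozygotes -- toy data
rng = np.random.default_rng(0)
ldf_a1_carriers = np.log(rng.lognormal(mean=6.0, sigma=0.5, size=102))
ldf_a2_homozygotes = np.log(rng.lognormal(mean=5.8, sigma=0.5, size=116))
stat, p_value = mannwhitneyu(ldf_a1_carriers, ldf_a2_homozygotes, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```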
Results This longitudinal survey sample was composed of 270 children, 149 (55.2%) boys and 121 (44.8%) girls, followed up from 12 to 16 months until 7 to 8 years old (Table 1). Minor allele frequencies (MAF) of the DRD2/ANKK1 gene variants observed in the sample were 0.14 for the Del allele of the -141C InsDel (rs1799732) polymorphism and 0.28 for the A1 allele of the TaqIA (rs1800497) polymorphism, which were intermediate between those described in the 1000 Genomes Project database for European (MAF 0.08 (Del) and 0.19 (A1)) and African (MAF 0.57 (Del) and 0.38 (A1)) populations. All genotype frequencies in this sample were in agreement with those expected under Hardy-Weinberg equilibrium. The two gene variants were not in linkage disequilibrium (D'=0.3, r=0). In Table 2, anthropometric and food intake variables are shown according to the analyzed polymorphisms. As some children could not be found at the third home visit at 7 to 8 years, and some samples could not be analyzed in the laboratory, the total number of children included in the multivariate analysis differs from the initial sample size. Children carrying the A1 allele (TaqIA rs1800497) had higher energy intake from LDF when compared with A2/A2 homozygous children at 7 to 8 years old (GLM p = 0.004; Mann-Whitney p = 0.005). Discussion The dopaminergic pathway has been associated with midbrain reward circuit activation (Roth et al., 2013), and individual differences in D2 receptor expression are hypothesized to contribute to differences in motivated behaviors, such as the motivation to eat (Gluskin and Mickey, 2016). Therefore, polymorphisms of the ANKK1/DRD2 gene are frequently associated with altered perception of food reward and weight gain (Ariza et al., 2012; Muller et al., 2012; Roth et al., 2013). TaqIA is the most commonly tested polymorphism and is characterized by a single nucleotide change [C(A2)/T(A1)] located downstream of the termination codon of the DRD2 gene, within the ankyrin repeat and kinase domain containing 1 (ANKK1) gene (Dubertret et al., 2004; Neville et al., 2004; Li et al., 2015; Ponce et al., 2016). This SNP produces a Glu713-to-Lys (E713K) substitution in the ANKK1 amino acid sequence, at the eleventh ankyrin repeat, which may alter the affinity of the ANKK1 protein for its substrate (Neville et al., 2004). It is not clear by which molecular mechanisms the ANKK1 protein could be associated with the dopaminergic system and how ANKK1 polymorphic alleles would impact addiction vulnerability. However, the ANKK1 and DRD2 genes belong to the same gene cluster, the NTAD cluster, an ancient cluster whose genes are apparently co-regulated and may have emerged when the central nervous system became more complex (Mota et al., 2012). Since genes of related function are sometimes found in the same cluster, it is possible that ANKK1 is somehow involved in dopaminergic reward processes via a signal transduction pathway (or other cellular response) (Neville et al., 2004). A few in vitro studies of ANKK1 gene mRNAs and proteins were able to show a potential connection between this gene and the dopaminergic system (Hoenicka et al., 2007; Garrido et al., 2011). In our sample, the A1 allele (TaqIA rs1800497) was found to be associated with higher intake of LDF when compared with A2/A2 homozygous children at 7 to 8 years. This allele has been associated with a reduction in D2 receptor availability (Pohjalainen et al., 1998; Ritchie and Noble, 2003; Eisenstein et al., 2016). Stice et al.
(2008) found that the dorsal striatum is less responsive to food reward in obese relative to lean individuals, probably because obese individuals have reduced D 2 receptor density that compromises dopamine signaling. This hypodopaminergic functioning or reward deficiency syndrome may induce obese patients to overeat in an effort to compensate for this reward deficit; several studies are consistent with this theory (van Strien et al., 2010;Duran-Gonzalez et al., 2011;Winkler et al., 2012;Cameron et al., 2013). van Strien et al. (2010) associated the A1 allele with an increase in emotional eating in Dutch adolescents. The A1 allele was also most frequent in young obese Mexican-American subjects than in non-obese, as well as subjects with central-obesity versus subjects with no central-obesity (Duran-Gonzalez et al., 2011). Winkler et al. (2012) observed in an intervention study that carriers of the A1 allele had a higher BMI at all time-points (baseline, after weight loss, and after weight maintenance), and showed less overall weight loss. Similarly, Cameron et al. (2013) observed that post-menopausal women carriers of the A1 allele lost significantly less body weight and fat mass than women with the A2/A2 genotype after undergoing an intervention-induced weight loss and increased carbohydrate intake. Some studies were not able to find any association of the DRD2 TaqIA polymorphism with adiposity parameters (Hardman et al., 2014). In the present study, no association was detected between DRD2 -141C Ins/Del polymorphism with food intake and anthropometric parameters, despite previous findings relating Del carriers of the DRD2 -141C Ins/Del polymorphism with higher D 2 receptor density (Jonsson et al., 1999b). The DRD2 -141C Ins/Del polymorphism corresponds to a deletion of one cytosine from a run of two cytosines at position -141 of the DRD2 gene (Arinami et al., 1997). This polymorphism has been associated with risk of schizophrenia in different populations (Arinami et al., 1997;Ohara et al., 1998;Jonsson et al., 1999a;Himei et al., 2002;Wu et al., 2005;Lafuente et al., 2008a,b;Cordeiro et al., 2009;Saiz et al., 2010;Xiao et al., 2013;Wang et al., 2016;Zhao et al., 2016), as well as with weight gain and other responses due to schizophrenia drug treatment (Lencz et al., 2006;Zhang et al., 2010). Associations have been described with propensity to alcohol dependence in different populations (Ishiguro et al., 1998;Konishi et al., 2004a,b;Johann et al., 2005;Du and Wan, 2009;Prasad et al., 2010;Lee et al., 2013), suicide attempts (Suda et al., 2009), psychiatric disorders (Kishida et al., 2004;Ujike et al., 2009;Lencer et al., 2014), different responses to medication and higher quit rates in smokers (Lerman et al., 2006). To the best of our knowledge, there is no other study that associated the DRD2 -141C Ins/Del polymorphism with anthropometric parameters or food intake. The lack of associations in the two other phases of development (12 to 16 months and 3 to 4 years) may have occurred because children at these ages have restricted access to food, and depend on adults for meals, despite their own preferences. Notwithstanding, at 7 to 8 years, children have many opportunities to eat without parental supervision (Briefel et al., 2009), and the differences observed in LDF intake in our sample may have occurrred as an effort to compensate hypodopaminergic functioning. Palatability is the induced sensitive response of foods that are usually rich in lipids and/or sugar (Cansell and Luquet, 2016). 
The sense of taste during food ingestion is the most important aspect in the decision to consume or avoid foods (Besnard, 2016). Contrary to sugar, oral fat perception was long considered to depend only on textural and olfactory cues, but the recent identification of lipid receptors in taste buds of both rodents and humans strongly suggests that lipids might also be perceived by the gustatory pathway (Besnard, 2016). Stimulation of taste buds triggers a signaling cascade leading to subsequent neurotransmitter release in different brain areas responsible for taste perception (e.g., anterior insula, frontal operculum, orbitofrontal cortex, and the mesolimbic system) (Besnard, 2016). The exchange between these areas results in information about the hedonic experience related to the food's taste (Berridge, 1996). Therefore, not only sugar but also lipids generate a hedonic experience, producing a positive reinforcement that stimulates dopamine secretion in the brain (Salamone, 1994; Volkow et al., 2002), which is a stimulus associated with "wanting" (Berridge et al., 2010). "Wanting" is an incentive salience or motivation for reward triggered by reward-related cues, such as LDF (Berridge et al., 2010). The attribution of incentive salience makes a cue and its reward more attractive, or more "wanted", without necessarily being more "liked" (Berridge et al., 2010). Consistent with our findings, other studies by our group detected associations of palatable food intake with another polymorphism related to the dopaminergic system in children of the same cohort at 12 to 16 months and 3 to 4 years old (Galvão et al., 2012; Fontana et al., 2015). However, further research is needed to confirm the association of the DRD2 TaqIA polymorphism with LDF intake and its potential mechanisms. In summary, our results showed that the TaqIA polymorphism may influence children's eating behavior, as the A1 allele is associated with lower D2 receptor density, which may lead children to compensate for hypodopaminergic functioning with palatable foods. To our knowledge, this is the first association study of the DRD2 TaqIA and -141C Ins/Del polymorphisms with food intake and anthropometric parameters in children at the first stages of development. Notwithstanding, it is necessary to replicate these findings in other populations and to identify the mechanisms by which the dopaminergic system may influence food intake. Nevertheless, the investigation of other polymorphisms in this and other genes of the dopaminergic system, and their relation to food intake and anthropometric parameters, may be interesting.
Knowing , Proposing and Acting : Epistemological Aspects of Medical Practice in the New Millennium In this work, it is analysed how the medical practice is imbued with Cartesian rational thought as well as empiricist thought and it is stated that medicine is an art and is science. It is proposed that the object of knowledge of the medical practice is not the concept of disease but health. It is from the concept of health and normality that medical taxonomy labels individuals as sick. This taxonomy is frequently re-evaluated and reorganized by scientific societies. This sometimes occurs according to new knowledge, but this categorization may also be questioned due to direct intervention or indirect pressure related to interests, especially economic, that are sometimes not clearly visible. Accordingly, an ongoing discussion is needed to keep the medical practice neutral against struggles of interest derived from the health industry. These topics must be considered and debated in medical schools including undergraduate and postgraduate programs. DOI : 10.14302/issn.2640-690X.jfm-18-2180 Corresponding author: Rafael Vargas, Universidad Antonio Nariño, Cra. 3 Este # 47 A-15. Bogotá, Colombia. E-mail: rvargas3200@hotmail.com; antonio.vargas@uan.edu.co Running head: Epistemology and medical practice Introduction Within any scientific discipline, there are three fundamental questions that are articulated in epistemology.What area of reality does the discipline in question know?How do you know this reality?What is known scientifically for that discipline?These three questions apply to all areas of human knowledge, including medicine, and allow us to define scopes, actions, and limits 1,2 .Medical practice is imbued with Cartesian rational thought as well as empiricist thought and for many years has been stated that medicine is an art and is science 3 . "What Area of Reality does Medicine Know?" -Objective of Medical Practice For the first question, "what area of reality does medicine know?", there is no clear answer.Although for society, including health professionals, the objective of the medical practice is tending to patients and their illness, the departure point and the axis on which the medical practice fluctuates in reality is the concept of health and normalcy.The area of reality that the clinician knows is the occasional state of a person's health insofar as he or she is susceptible to becoming ill or suffers a disease and is therefore qualified for a medical intervention, either preventive or curative, that is performed through a medical prescription.Establishing the health status of an individual at a given time is therefore the primary objective of medical practice [4][5][6] . The concept of health has evolved and changed periodically throughout history with accumulated knowledge and especially with the emergence of scientific societies that establish norms, revise them, eliminate them and / or modify them according to the progress of medical-scientific knowledge about health and disease 7 .These decisions can greatly impact the epidemiological profile of society.We can think, for example, about the criteria for dyslipidaemias diagnosis. Changing the values of lipids that are considered normal can cause a large part of the world population to be included or excluded 8 . 
Currently, there are various definitions of health, although there is no consensus on the matter.The most accepted definition is one that presents health as the complete physical, mental and social well-being of the individual and not simply the absence of the disease 9 . Each of these elements that establish the concept of health, including physical, mental and social elements, are properly defined, classified, standardized and regulated, although some authors have shown that a change towards this holistic concept is not easy to apply in the medical practice 10,11 . Any element that does not comply with such patterns is beyond feasible, acceptable patterns, and it is considered "abnormal."This discrepancy has obviously created a series of controversies and discussions in the biological field in which genetics, in addition to environmental, cultural, and other factors, can give rise to a series of morphophysiological variations and diversities that can be considered normal without being common 11 .Given the propensity in the medical field to organize and classify, many individuals are categorized and labelled as specific syndromes or diseases.This tendency to diagnose and / or overdiagnose has permeated other areas of society, which generates a tendency to medicalize life, each of its stages and even natural physiological processes 12 .An example of this is the increasingly important presence of medicine in fields as diverse as sports, where high-performance athletes and amateurs are inundated with concepts related to that generated questions at all levels of society.In the field of medicine, the nonconformity generated by rigid theories and practices of classical psychiatry favoured the development of a movement that questioned those practices and generated an anti-psychiatry movement in response 15,16 .All of this implies that the concept of health and / or illness is intimately linked to sociohistorical moments and the forms of perception of reality that predominate in that moment.In this case, all that is valid from the medical point of view today may be totally distorted in the light of new knowledge tomorrow. However, we can affirm that in answer to the question, "Which area of reality is key to know for medical practice?" the answer is the health of the individual. "How is this Reality known?" -Methods in Medical Practice For the next question, "How is this reality known?", we can see how the approach to the individual health status is achieved based on technical-operative knowledge, which is applied throughout medical practice.This approach comprises various phases that constitute the patient's medical history or medical record and that include anamnesis, physical examination, hypothesis proposals, and confirmation of the hypothesis, diagnosis and treatment. It is important to note that in medical practice, the clinician tries to know the reality of another person following a process of thought strongly impregnated with Cartesian rationalism [17][18][19] .We can clearly see within this activity three key elements that are differentiated within the process of rationalist knowledge: a thinking subject (doctor), a thought object (patient) and an act of thought (clinical judgement). 
Regarding the method of knowledge in medical practice, the requirements and characteristics that a valid method must have to reach a truth are met.In this method, the entire process of research is repeated.This movement leads to maximum simplification, which is achieved through the exploration of the patient to obtain signs and symptoms.This approach proceeds to establishing rigorous associations between these data, which is achieved using various methods.First, there is a deductive method when the diagnostic possibility is clear and unique.Second, there is a probabilistic method when the data point to a specific pathology without absolute certainty.Third, there is a falsification method when differential diagnoses are used to compare the main diagnoses with alternative diagnoses (Fig. 1).With these methods, the clinician tries to generate an approximate model of perceived reality that is capable of being adjusted to existing theoretical models.In recent years, medical practice has focused on medical practices that are based on clinical evidence.In this model, pre-existing theoretical models are used, and recent or current information must be gathered to support the diagnostic or therapeutic decisions that are made 20,21 . Along with this technical analysis and an approach to determine the reality of another person, clinicians can order a series of technical resources and aids.These resources include laboratory tests, imaging exams, and physiological tests.These para-clinical tests allow clinicians in many cases to clarify the diagnosis, but they cannot determine it alone 22 .This affirmation allows us to deduce that a medical practice supported exclusively in one of these stages or in any of these processes is weak and methodologically incorrect.In conclusion, the answer to this second question, "How is reality known?", is through an adequate rational and methodical medical practice that can be complemented with paraclinical tests and supported by scientific evidence. "What is known Scientifically for Medicine?" -Scientific Basis of Medical Practice To the third question, "What is known scientifically for medicine?",there has always been a debate about whether medicine is an art or science 23 .It is stated that medicine is an art and is a subjective act that depends on intuitive, emotional aspects and on the empathy between doctors and patients.However, it is also a science because it is an objective act in which the clinician applies knowledge, techniques and interventions that have been validated and previously published in prestigious journals, which are recognized by the discipline and the bulk of the scientific community (Fig. 2).These are adverse reactions, side effects, and idiosyncratic reactions that can cause disorders that in many cases are lethal for the patient.There is also concern that despite careful and judicious clinical exercise as well as sufficient paraclinical support, inaccurate or erroneous diagnoses could be made that could lead, most likely, to therapeutic behaviours that are not harmful, or are at least innocuous, which may imply a progressive and irreversible progress of the pathology in question 26,27 . 
However, due to the technical and economic interests that push for increasing technology use in medical practice, diagnoses are often supported only by the medical arsenal surrounding the patient. Again, it is evident that technology use in medicine has permeated aspects of common life. Some examples include sports practices that have been changed by various types of instrumentation that allow individuals to programme their level of physical activity, monitor their physiological variables permanently and receive suggestions on what meals to consume according to their metabolic expenditure. In general, great advances in technology and the miniaturization of many devices have allowed the daily use of devices that measure and evaluate every moment of our lives, both in wakefulness and during sleep, in sick as well as in healthy individuals.

Medical Practice and Chain of Trust Agreements
Although here there is no reflection that follows the characteristic patterns of a proper philosophical reflection, since the physician moves within the probable, the steps that are carried out to clarify signs and symptoms are permeated with the scientific method, including observation, order and disposition to reach the truth. This is a truth that is probably volatile and temporal (the diagnosis of a disease made two centuries ago is probably not the same when evaluated by today's doctor) and depends on the non-rupture of a labile chain of trust that ties together the participants of medical practice. A doctor trusts the word of others (patient, family). The patient trusts the wisdom and the doctor's skills. Both rely on the skills and technical capacity of a third party that processes and analyses samples to send a report that makes the diagnosis possible (technicians and associate professionals). The doctor also trusts what has been achieved by science and technology through thousands of studies and experiments (medical knowledge), which allows him to preserve or recover an ideal state of health in the patient 28. The patient relies instinctively on the same things. Any failure in this chain of trust can ruin the medical practice. This disruption can cause the loss of credibility for medicine and force the patient to seek refuge in other medical systems that offer what allopathic medicine apparently cannot: security, trust, and hope, among other things 29,30.

Figure 1. Medical practice process. Rational and empirical points of view participate in all clinician-patient relationships. However, rational thought implies the simplification and association of ideas and is dominant during the first step of clinical examination. Empirical thought is linked to evidence; this type of thought is necessary to form hypotheses, achieve a diagnosis and develop intervention steps.

Figure 2. Medical knowledge and medical practice. Medical knowledge is inserted from all sides of clinical practice and determines the success or failure of maintaining or recovering a patient's normal health condition.

Conclusion
The basis of medical knowledge is the concept of health. It is from the concept of health and normalcy that medical taxonomy emerges, which labels the individual as sick. This taxonomy is frequently re-evaluated and reorganized by scientific societies. Sometimes the re-evaluation is based on new knowledge, whereas at other times this reorganization of the classification of diseases is questioned when it is evident that it is carried out through interventions or pressure exerted by particular groups, which could be patient associations, health institutions or economic corporations defending particular interests. Therefore, an ongoing discussion is needed to guarantee that medical practice remains neutral in the face of the strong interests of health-related businesses. All these aspects of medical knowledge and its relationship with medical practice must be considered and debated in medical schools during undergraduate and postgraduate medical training.
Rosiglitazone treatment and cardiovascular disease in the Veterans Affairs Diabetes Trial Aims To evaluate the relationship between patterns of rosiglitazone use and cardiovascular (CV) outcomes in the Veterans Affairs Diabetes Trial (VADT). Methods Time-dependent survival analyses, case–control and 1 : 1 propensity matching approaches were used to examine the relationship between patterns of rosiglitazone use and CV outcomes in the VADT, a randomized controlled study that assessed the effect of intensive glycaemic control on CV outcomes in 1791 patients with type 2 diabetes (T2D) whose mean age was 60.4 ± 9 years. Participants were recruited between 1 December 2000 and 31 May 2003, and were followed for 5–7.5 years (median 5.6) with a final visit by 31 May 2008. Rosiglitazone (4 mg and 8 mg daily) was initiated per protocol in both the intensive-therapy and standard-therapy groups. Main outcomes included a composite CV outcome, CV death and myocardial infarction (MI). Results Both daily doses of rosiglitazone were associated with lower risk for the primary composite CV outcome [4 mg: hazard ratio (HR) 0.63, 95% confidence interval (CI) 0.49–0.81 and 8 mg: HR 0.60, 95% CI 0.49–0.75] after adjusting for demographic and clinical covariates. A reduction in CV death was also observed (HR 0.25, p < 0.001, for both 4 and 8 mg/day rosiglitazone); however, the effect on MI was less evident for 8 mg/day and not significant for 4 mg/day. Conclusions In older patients with T2D the use of rosiglitazone was associated with decreased risk of the primary CV composite outcome and CV death. Rosiglitazone use did not lead to a higher risk of MI. Introduction Cardiovascular (CV) morbidity and mortality in patients with type 2 diabetes (T2D) are major problems in clinical practice. Thiazolidinediones improve glycaemic control by reducing insulin resistance and have beneficial effects on various CV risk factors. Pioglitazone has been shown to reduce progression of atherosclerosis and possibly reduce CV events [1,2]; however, previously published meta-analyses have raised uncertainty about the CV safety of thiazolidinediones, and rosiglitazone in particular [3][4][5][6]. An analysis in the Rosiglitazone Evaluated for Cardiac Outcomes and Regulation of Glycaemia in Diabetes (RECORD) trial, a randomized, multicentre, open-label, non-inferiority study in patients with T2D who had inadequate glycaemic control while receiving metformin or sulphonylureas, showed an increased risk of congestive heart failure associated with rosiglitazone, but there were no statistically significant differences between the rosiglitazone group and the control group for myocardial infarction (MI) and death from CV causes or any cause [7,8]. Recently, the US Food and Drug Administration (FDA) released a drug safety communication requiring removal of some prescribing and dispensing restrictions for rosiglitazone-containing diabetes medicines [9]. This decision was based on a re-evaluation of RECORD endpoints (CV death, MI and stroke), performed by the Duke Clinical Research Institute and presented to an FDA advisory committee, which did not show an increased risk of MI associated with rosiglitazone. A joint consensus statement from the American Diabetes Association (ADA) and American Heart Association has provided recommendations about the use of thiazolidinediones and the risk of fluid retention and congestive heart failure in patients with T2D, particularly when combined with insulin [10]. 
Increased sympathetic nervous system activity, altered interstitial ion transport, alterations in endothelial permeability, and peroxisome proliferator-activated receptor--mediated expression of vascular permeability growth factor represent other possible mechanisms for oedema with these agents [11]. The Veterans Affairs Diabetes Trial (VADT) assessed the effect of intensive glycaemic control on a composite CV endpoint that included: MI, CV death, stroke, congestive heart failure, invasive revascularization, inoperable coronary artery disease and amputation for ischaemia [12]. In the present analysis we evaluated patterns of rosiglitazone use and its association with CV outcomes in older patients with T2D. Through post hoc analyses, we specifically tested the hypothesis that treatment with rosiglitazone in patients with T2D would not be associated with increased risk of MI, CV deaths and other CV outcomes. Study Design Details of the VADT study protocol, including patient selection criteria, and the primary study results have been reported previously [12,13]. In brief, the VADT was a prospective, randomized study of intensive versus standard glucose treatment [expected separation of at least 1.5% in glycated haemoglobin (HbA1c)] effects on CV events in patients with long-standing T2D. Blood pressure, lipids, diet and lifestyle were treated identically in both arms in accordance with ADA management recommendations. Study recruitment was initiated on 1 December 2000 and ended on 31 May 2003. Participant follow-up was completed on 31 May 2008, resulting in a treatment and follow-up duration of 5-7.5 years (median 5.6). The study protocol was approved by the institutional review board at each of the 20 participating sites. An independent data and safety committee monitored CV events related to group assignment and rosiglitazone use throughout the study duration. Patients Men and women aged ≥41 years with T2D and inadequate response to maximum doses of an oral agent or insulin therapy were included. Those with an HbA1c level <7.5%, as well as those with a CV event during the previous 6 months, advanced congestive heart failure (class III-IV), severe angina, a life expectancy of <7 years, a body mass index (BMI) >40 kg/m 2 , a serum creatinine level of >1.6 mg/dl (141 μmol/l), and an alanine aminotransferase level of more than three times the upper limit of normal range were excluded. All patients provided written informed consent. Treatment Protocol In both intensive and standard glycaemic control groups, patients were started on two oral agents, one of which was rosiglitazone. The other agents were metformin (for those with BMI ≥27 kg/m 2 ) or glimepiride (for those with BMI <27 kg/m 2 ). Initiation of these agents was protocoldriven. A tool box of recommended treatment options was available for addition of other available drugs (except for incretin-based medications), or use of any combination of medications, needed to achieve study HbA1c goals, at the discretion of the investigator. Changes in medication, including insulin initiation and/or discontinuation of oral agents, were determined according to protocol guidelines and local assessment. This included protocol safety guidelines about rosiglitazone use and discontinuation that were consistent with the drug's FDA-approved prescribing information (package insert). 
Statistical Analysis The primary study results showed no significant difference between the intensive and standard glycaemic control groups for the primary composite outcome, any component of the primary outcome, or in the rate of death from any cause [13], but the effect of rosiglitazone on the CV outcomes remained unanswered; therefore, participants in both treatment groups were aggregated for the analyses in the present report. Although the original study was based on the intention-totreat principle, the present investigation is a post hoc analysis. The effect of rosiglitazone on CV outcomes was analysed using a time-dependent covariate survival analysis for the whole population, and scrutinized by two additional approaches to attempt to overcome the limitations of the post hoc analysis. Baseline variables are expressed as means and standard deviations or percentages or numbers. All analyses were conducted with sas software (version 9.3; SAS Institute Inc., Cary, NC, USA) with a significance level <0.05 using a two-sided test. Time-dependent Covariate Survival Analysis. Survival analysis compared the time from randomization to the occurrence of the first VADT composite outcome in patients treated with rosiglitazone (4 or 8 mg daily) compared with those not treated with rosiglitazone. Separate analyses were performed for individual events from the composite CV outcome, CV death, MI and coronary revascularization. For each outcome, three Cox proportional hazard models were used to calculate relative risk estimates and 95% confidence intervals (CIs): (i) unadjusted (except for the time of publication of Nissen and Wolski paper in 2007 [3]), (ii) additionally adjusted for baseline covariates; and (iii) adjusted for baseline and time-varying covariates, including baseline age, race/ethnicity, smoking status, education, diabetes duration, previous CV event, HbA1c, baseline and on-study BMI, blood pressure, total cholesterol, HDL cholesterol, LDL cholesterol, triglycerides, severe hypoglycaemic episodes, and baseline and on-study use of insulin, other oral antihyperglycaemic agents, statins and aspirin. As the VADT did not randomize by the use of rosiglitazone, two other methods were used: case-control matching and 1 : 1 propensity matching. Case-control Matching Method. Patients with the study outcome (cases) were matched to patients without the outcome (controls) using the case-control matching method [14]. Matching criteria included: age, diabetes duration, previous CV event, insulin use at baseline, duration of follow-up, BMI, systolic blood pressure, diastolic blood pressure, HbA1c, total cholesterol and HDL cholesterol. Rosiglitazone use was compared between cases and controls on: baseline use, on-study use at any time, number of visits prescribed, and average dose per visit, using Student's t-test for continuous variables or a chi-squared test for categorical variables. This method enabled us to see the difference according to rosiglitazone use when the patients had the same level of baseline covariates between the cases and the controls. Propensity Exact Matching Method. Patients were stratified into two groups; those who never used rosiglitazone and those who used it regularly (always) from the beginning to the end of the study. This analytical method [15] approximates to an intention-to-treat approach for the comparison of events among rosiglitazone-treated patients versus matched patients not treated with a thiazolidinedione. 
Using a stepwise logistic model, including baseline age, diabetes duration, HbA1c, BMI, blood pressure, total cholesterol, LDL cholesterol, HDL cholesterol, triglycerides, creatinine, gender, blood pressure medication use, race, smoking status, previous CV event, insulin use and intensive glycaemic treatment, propensity scores were produced to match 1 : 1 between the two groups. The CV risk factors chosen were compared before and after matching (Tables S1 and S2), which suggested good balance between the two groups. After the matching, Cox proportional hazard regression models were used before and after adjusting for these CV risk factors to assess the association of rosiglitazone use (always vs never) with CV outcomes ( Figure S2). This method was used in an attempt to draw a causal inference from rosiglitazone use with regard to the outcome after matching baseline variables between the two stratified groups. Results Overall, rosiglitazone was used more frequently and at a higher dose in the intensive treatment group than in the standard treatment group (p < 0.05); however, the use of rosiglitazone in both groups initially decreased gradually over time according to protocol safety prescription guidelines. After the publication of the meta-analysis by Nissen and Wolski (2007), a more marked drop-off was observed ( Figure S1) and study participants required increasing use of other agents to maintain appropriate separation in HbA1c values (median HbA1c after 6 months: 6.9% in the intensive treatment group vs 8.4% in the standard treatment group). In particular, the median insulin dose progressively increased in both arms but was 20% higher in the intensive treatment arm throughout the study. Time-dependent covariate analyses showed that both daily doses of rosiglitazone (4 and 8 mg) were associated with a lower risk of the primary composite CV outcome after adjustment for baseline and time-dependent risk factors (Table 1 and Figure 1A). These relationships remained significant in a separate analysis that accounted for the effect of rosiglitazone discontinuation after June 2007, related to the CV safety concerns raised in the meta-analysis by Nissen and Wolski [3]. A reduction of CV mortality risk associated with rosiglitazone use was also observed ( Figure 1B); however, the association with a reduced incidence of MI and coronary revascularization was less evident after accounting for the contribution of the same baseline and time-dependent covariates (Figure 2A, B). For the primary composite CV outcome in the case-control analysis, both groups had similar age, degree of obesity, blood pressure and HbA1c and LDL cholesterol levels at baseline (Table 2), although diabetes duration was slightly longer and HDL cholesterol levels were lower in the cases. A similar percentage of cases and controls were using rosiglitazone in the first year (82.6 vs. 84.8%) but fewer participants who had the primary CV outcome (cases) used rosiglitazone at any time during the study compared with participants who did not [controls; 90.5 vs. 96.0%, respectively; p < 0.01 ( Figure 3A)]. Similarly, the number of study visits with rosiglitazone prescribed (9.8 vs. 10.9; p < 0.05) and the average rosiglitazone dose (4.9 vs. 5.4 mg/day; p < 0.01) was lower in cases than in controls ( Figure 3B). 
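The propensity-matched comparison reported in the next paragraph follows the workflow described under Statistical Analysis above: a logistic model for the probability of regular rosiglitazone use, 1 : 1 matching on the resulting scores, and Cox models fitted to the matched sample. The original analyses were run in SAS; the Python sketch below is illustrative only, with assumed column names, an assumed greedy caliper-matching rule, and scikit-learn/lifelines standing in for the SAS procedures.

```python
# Illustrative sketch of a 1:1 propensity-matched survival analysis of the kind
# described above; column names, the caliper, and the greedy matching rule are
# assumptions, not the actual VADT/SAS implementation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def propensity_match_and_cox(df, covariates, caliper=0.05):
    # 1. Propensity score: probability of "always" rosiglitazone use given baseline covariates
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["rosi_always"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    # 2. Greedy 1:1 nearest-neighbour matching without replacement, within a caliper
    treated = df[df["rosi_always"] == 1].sort_values("pscore")
    controls = df[df["rosi_always"] == 0].copy()
    matched_idx = []
    for i, row in treated.iterrows():
        dist = (controls["pscore"] - row["pscore"]).abs()
        j = dist.idxmin()
        if dist[j] <= caliper:
            matched_idx += [i, j]
            controls = controls.drop(index=j)   # each control used at most once
    matched = df.loc[matched_idx]

    # 3. Cox proportional hazards model on the matched sample, adjusted for the same covariates
    cph = CoxPHFitter()
    cph.fit(matched[["followup_years", "cv_event", "rosi_always"] + covariates],
            duration_col="followup_years", event_col="cv_event")
    return cph

# Hypothetical usage (hypothetical data frame and covariate names):
# covars = ["age", "diabetes_duration", "hba1c", "bmi", "prior_cv_event", "insulin_use"]
# cph = propensity_match_and_cox(vadt_df, covars)
# cph.print_summary()   # the hazard ratio for rosi_always is exp(coef)
```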
In the survival analysis of the 1:1 propensity-matched data, rosiglitazone use was associated with a lower incidence of the primary composite outcome [hazard ratio (HR) 0.361, 95% CI 0.204-0.64; p < 0.001], even after adjusting for age, previous CV event, BMI and baseline insulin use (HR 0.312, 95% CI 0.174-0.558, p < 0.001). Compared with the subjects who had never taken rosiglitazone, those who had taken rosiglitazone were 65% less likely to experience the primary CV outcome (Figure S2). Further analyses stratifying participants by obesity, smoking status and baseline HbA1c showed a similar association between rosiglitazone use and the primary CV outcome (Table S3). Discussion The initial VADT report comparing standard with intensive glycaemic control found no significant difference in the risk of the primary composite endpoint between treatment groups [13].
Figure 2. Effect of rosiglitazone dosage on time to (A) myocardial infarction and (B) coronary revascularization. *Baseline and **time-dependent covariates include: age, race, smoking status, diabetes duration, previous cardiovascular event, glycated haemoglobin, baseline and on-study BMI, blood pressure, total cholesterol, HDL cholesterol, LDL cholesterol, triglycerides, severe hypoglycaemic episodes, and baseline and on-study use of insulin, other oral agents, statins and aspirin.
Given the controversy about CV risk associated with rosiglitazone use and the recent FDA drug safety communication [7] about rosiglitazone after the re-evaluation of RECORD endpoints (CV death, MI and stroke), it seemed valuable to perform a post hoc analysis of rosiglitazone use and CV outcomes in the VADT. Three different but supportive approaches were used: a case-control analysis, a time-dependent covariate analysis and a survival analysis using the propensity exact matching method. Rosiglitazone was used more frequently and at a higher dose in the intensive treatment group than in the standard treatment group throughout the study duration. We found that rosiglitazone was not associated with increased CV risk but, conversely, may have been associated with a reduction in the occurrence of the primary composite outcome and CV mortality. Moreover, both daily doses of rosiglitazone (4 and 8 mg) were associated with a lower risk of these outcomes, even after adjusting for a broad array of covariates. These data are consistent with results from the PROactive Study [1] and a report suggesting that rosiglitazone use was associated with a 5%, non-significant, reduction in mortality [16]. These results also support preliminary analyses in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial that did not indicate an increased risk of CV events related to rosiglitazone use [17]. Furthermore, a recent post hoc analysis from the Bypass Angioplasty Revascularization Investigation 2 Diabetes (BARI 2D) trial, using on-treatment and propensity-matched analyses, showed a lower incidence of the composite death, MI and stroke outcomes (HR 0.72, 95% CI 0.55-0.93) associated with rosiglitazone treatment in patients with T2D and coronary artery disease [15]. These findings are not consistent with meta-analyses that suggested that an increased MI risk was associated with rosiglitazone [3][4][5][6]; however, there is ongoing debate as to the applicability of these findings to patients with T2D in general, with some criticism of the meta-analysis by Nissen and Wolski [3] on methodological grounds [18].
A potential harmful effect of thiazolidinediones was also surprising because thiazolidinediones reduce insulin resistance and therefore provide a therapy that directly corrects a key defect underlying T2D. In part because of this mechanism of action, thiazolidinediones not only lower blood glucose but also have beneficial effects on several processes associated with atherosclerosis, including inflammation, high blood pressure and microalbuminuria. Moreover, data from the PROactive trial suggested that addition of pioglitazone to existing therapy in high-risk patients with diabetes and atherosclerotic disease improved CV outcomes in that trial [1]. In addition, pioglitazone therapy has been reported to slow progression of atherosclerosis [2]. Further evidence that rosiglitazone does not cause increased CV events comes from the RECORD trial, which was designed specifically to study the CV effects of rosiglitazone treatment. Results from a well-designed, adequately powered clinical trial are usually more reliable than results from a meta-analysis [19]. Similarly to the VADT, the RECORD trial did not show an increased risk for MI or CV mortality associated with rosiglitazone use [8]. Furthermore, a recent comprehensive and independent re-evaluation of RECORD endpoints requested by the FDA showed no increased risk of CV death (HR 0.90, 95% CI 0.68-1.21), MI (HR 1.13, 95% CI 0.80-1.59) or stroke (HR 0.79, 95% CI 0.54-1.14) in those treated with rosiglitazone compared with a standard-of-care treatment combination (metformin and sulphonylurea) [9]. Since in the VADT changes in rosiglitazone use were determined based on protocol safety guidelines, consistent with FDA prescribing information, the results of the present analysis of the VADT provide additional perspectives on CV safety for this drug, which is consistent with the FDA's risk and mitigation strategy. A consensus statement of the ADA and the European Association for the Study of Diabetes (EASD) for hyperglycaemia management includes information on thiazolidinediones and advises caution in using these drugs on the basis of their increased risks of fluid retention and congestive heart failure as well as increased incidence of fractures in women and perhaps in men [20]. The ADA/EASD statement acknowledged that the meta-analyses reporting potential CV risk associated with rosiglitazone were not conclusive but still advised against using this thiazolidinedione on the basis of the possibility of increased risk of MI with rosiglitazone. By contrast, the evidence-based Canadian Diabetes Association clinical practice guidelines for the prevention and treatment of diabetes did not find that there was cause to exclude rosiglitazone based on the evidence from the ACCORD, RECORD and VADT studies, which did not show an increased risk of MI or CV mortality [21]. Despite the ongoing controversy, however, most organizations believe that the potential benefits and risks of these agents should be carefully considered before they are initiated. Both pioglitazone and rosiglitazone are contraindicated in patients with class III and IV heart failure. Patients with class II or worse heart failure were excluded from the PROactive study, and patients with any known congestive cardiac failure (class I-IV) were excluded from the ADOPT study [22]. 
Because patients with heart failure are more likely to be adversely affected by thiazolidinedione-associated fluid retention, any significant degree of heart failure, including class II or higher, could be regarded as a contraindication to thiazolidinedione use. Importantly, high-risk characteristics in patients with T2D, such as longer diabetes duration, insulin treatment and multiple comorbidities, may help identify those more susceptible to congestive heart failure and adverse CV outcomes, thus avoiding initiation or continuation of therapy in these individuals. This approach may have been followed in the VADT because a decline in the use of rosiglitazone was observed over time, as site investigators followed safety guidelines about its use. This pattern of discontinuation may have left a group of older adults with T2D who might have benefited from rosiglitazone treatment and therefore continued receiving this insulin sensitizer; however, as VADT rosiglitazone use and discontinuation guidelines were generally consistent with the FDA-approved prescription information, our results may apply to many patients with T2D who may have been suitable candidates for treatment with this thiazolidinedione. These results suggest that for patients on rosiglitazone who are achieving glycaemic goals and tolerating the therapy without apparent complications, rosiglitazone may be continued [23]. The present study results should be interpreted with caution because the analysis of the effect of rosiglitazone was not planned a priori. The approaches used in this VADT post hoc data analysis (i.e. epidemiological analyses of data collected within a randomized clinical trial) provide a lower level of evidence than that obtained from a carefully performed prospective randomized, controlled trial. A case-control approach, matching baseline risk factors between cases and controls, and, to a lesser extent, a time-dependent covariate survival analysis may be limited by the potential role of confounding by indication [14], as investigators may have prescribed less rosiglitazone (or stopped it sooner) for patients with higher CV risk or comorbidities. Furthermore, there is also a possibility of bias associated with heart failure, oedema or physicians' views of interactions with other medications. The low HRs associated with rosiglitazone use could thus be partly explained by the less healthy individuals being taken off this medication; however, because similar effects of rosiglitazone were seen in individuals receiving 4 or 8 mg, and the rosiglitazone dose regimen was part of the randomization medication treatment algorithm, there should be less confounding by indication. Moreover, even with propensity matching, there remained evidence for benefit, not harm, with rosiglitazone use. In summary, treatment with rosiglitazone in older adults with T2D was not associated with increased risk of the primary composite CV outcome, CV death or MI in the VADT. These results are consistent with more recent evidence that rosiglitazone does not increase CVD risk and support the recent FDA panel recommendation easing restrictions on rosiglitazone. Concerns about specific adverse events (bone loss, oedema and congestive heart failure) with use of thiazolidinedione agents remain [24], however, and decisions to use these agents require careful balancing of risks and benefits.

Figure S2. Effect of rosiglitazone use on cardiovascular (CV) outcomes based on the 1:1 propensity score matching method.
It compares patients who took rosiglitazone regularly [always between the start and end of the Veterans Affairs Diabetes Trial (VADT)] and those who never took it during the VADT. Outcomes include: myocardial infarction (MI), coronary revascularization (Cor Revas), cardiovascular-related death (CV deaths), and the primary composite CV outcome. Each outcome is shown unadjusted and after adjusting for baseline CV risk factors. *Adjusted for age, prior CV event, BMI, and baseline insulin use.
Table S1. Baseline cardiovascular (CV) risk factors and CV events among patients who were always or were never on rosiglitazone therapy, before the propensity matching method was implemented.
Table S2. Baseline cardiovascular (CV) risk factors and CV events among patients who were always or were never on rosiglitazone therapy, after the propensity matching method was implemented.
Table S3. Cox proportional models for the primary composite cardiovascular outcome in participants in the Veterans Affairs Diabetes Trial according to rosiglitazone doses and stratified by obesity, smoking status and baseline glycated haemoglobin values.
File S1. Members of the VADT Research Group.
2018-04-03T06:22:38.890Z
2015-06-17T00:00:00.000
{ "year": 2015, "sha1": "73072aede7e8bcf97b0fd1dee31fc075715c93b2", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1111/dom.12487", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "73072aede7e8bcf97b0fd1dee31fc075715c93b2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219462020
pes2o/s2orc
v3-fos-license
Tempest in a teapot? Toward new collaborations between mainstream policy process studies and interpretive policy studies

"Tempest in a teapot" is an idiom that refers to a problem that has been blown out of proportion, which is how we see the supposedly divisive relationship between two research traditions: mainstream policy process studies and interpretive policy studies. In this commentary, we explore both research traditions, comparing and contrasting their views of public policy and policy processes, uses of theories, and approaches to research. Our aim is not to unite them or reject points of debate. Instead, we offer strategies for more productive collaborations, including side-by-side research, integrative research, engagement in constructive discussions of research techniques, and applied research.

Introduction

A "tempest in a teapot" is a problem that has been blown out of proportion. This idiom encapsulates ongoing miscommunications and animosities between mainstream and interpretivist policy scholars. The difference between these traditions has generated conflict in policy studies for decades, with some of the divisions passed down through generations. Of course, conflict can fuel learning and, if properly handled, result in more robust and productive relations. Yet, the opposite seems to have happened. Our purpose here is not to relive and reiterate these old debates; after all, let bygones be bygones. Instead, we compare and contrast both research traditions. While we see differences in their orientations and methodologies, we also see goal similarities given their distinct foci and emphases. Moreover, collaboration offers the potential to conduct research together in order to advance knowledge and contribute to society. This commentary attempts to evenhandedly describe these two traditions for both new and experienced policy scholars. 1 As two scholars in each of these two traditions, we wrote this commentary jointly in order to develop a common conceptual terminology and an understanding of research approaches. Two people cannot summarize every aspect of both traditions, but we trust that our interpretations are not too narrow and not too far off. While we maintain our lofty aspirations in offering a better understanding of both traditions and ideas for collaborations, our more humble ambition is to provide a shared language and a better understanding of more fruitful communications.

Mainstream public policy studies and interpretive policy studies

Let us start with how both traditions might study the game-of-chess as if it were a policy-related phenomenon. The mainstream policy scholar would approach the game from two perspectives. From the first perspective, and among those who practice "mainstream policy analysis," the pros and cons might be analyzed in order to determine the next move for one of the chess players. They then might forecast the risks and benefits of different moves and communicate these tradeoffs back to the chess player to support decision-making. Alternatively, these mainstream policy analysts might evaluate a prior move. Was it the right move? What were the benefits and costs of that move? From the second perspective, and among those who practice "mainstream policy process studies," understanding how all the chess pieces move and interact over time might be important. The game becomes complex because of the rules and the varied strategies that chess players use, prompting the need for theories to capture the dynamics.
The resulting insights would then be communicated back to the chess players as general understandings about the nature of the game and ways of playing. The interpretive policy scholar might start from a point of curiosity about the rules of the game and the ways these rules are contingent as constituting part of a particular culture, social group, or geopolitical context. They might then understand how the contingency of these rules oppresses and limits options to make moves and develop strategies. Then they might uncover the reasons some potential players are discouraged from playing. They might explore the pieces as objects and their movements as acts, both of which would be embedded with the meaning of values, beliefs, and feelings. Knowing that meaning is conveyed through situated interactions, they might analyze how the players interrelate through language. Additionally, recognizing the subjective orientations of the players and themselves, interpretive policy scholars might immerse themselves in the game and interact with the players to develop an understanding of how this one game-of-chess is being played. Through this one game, they might try to understand how the rules were and are established via situated interactions of players. Part of the interpretive policy scholar's agenda involves questioning the establishment of these rules and the connections between players and non-players.

1 There have been attempts to compare interpretive policy studies and mainstream public policy studies (deLeon 1998; Fischer 1998; Lejano and Leong 2012; Weber 2004) and integrate them (Lin 1998; Jones and Radaelli 2015; Boswell and Corbett 2015). However, none of these past efforts have compared and contrasted the terminology and research approaches in both traditions, especially with a focus on mainstream policy process studies and interpretive policy studies. Additionally, one conclusion from past debates has been that, while the two traditions should communicate more, combining them is impossible (Dodge 2015, p. 366), a position that this commentary refutes. Further, one argument claims that interpretive approaches might be more pronounced in Europe than in North America, at least in the study of public policy, but we do not address the validity of such an argument in this commentary.

This simple game-of-chess analogy succinctly portrays mainstream policy analysis, mainstream policy process studies, and interpretive policy studies. It shows that these traditions can offer complementary ways of understanding the same phenomenon. We provide brief descriptions of these traditions below.

Mainstream public policy studies

We use the term "mainstream public policy studies" 2 to encompass both mainstream policy analysis and mainstream policy process studies, excluding interpretive policy studies. We consider "mainstream" an apt description because it has been dominant in shaping many of the norms that ostensibly run counter to interpretive policy studies. We use the term "mainstream policy analysis" to refer to the area of study that offers practical or client-oriented advice in evaluating past decisions or assessing future decisions (Bardach and Patashnik 2019; Weimer and Vining 2017). Mainstream policy analysis is often associated with decision-making tools (benefit-cost analysis, distributional analysis, etc.). "Mainstream policy process studies" represent the area of study that describes and explains the varied interactions that embed and surround public policy (Weible 2018; Cairney 2011).
While mainstream policy process studies can pivot around a single public policy, encompass multiple policies across space or time, deal with the politics and impacts surrounding public policies, or focus on a substantive policy issue (and many public policies therein), they involve a range of factors that include, but should not be limited to, actors and organizations, political behaviors, events, contexts/settings, and outcomes. Whereas mainstream policy analysis is associated with tools that help inform policy decisions, mainstream policy process studies are associated with theories that help describe and explain public policy (Lubell 2013; Weible and Sabatier 2018).

Interpretive policy studies

We use the term "interpretive policy studies" 3 to encompass the various approaches to investigating public policy through its discursive nature. This means that meaning can be uncovered and can differ in acts, actors, and objects around public policy and in events that happen to public policy. The language used to describe public policies and to discuss or negotiate them shapes who becomes legitimate or powerful and, more generally, how the policy process develops over time. This distinctive way to understand and highlight public policy relates to the capacity of language to deliver contextual information about a situation and to change the situation. Placing that interest in language above all other inquiry, interpretive policy studies perceives itself in opposition to "positivist policy analysis," which is considered a form of knowledge oppression because it limits analysis to certain questions and organizational or institutional spaces and prevents the analyst from uncovering the conditions that have established these limits. Interpretive policy studies build on the possibility of multiple meanings and then analyze how these meanings coproduce policy processes, that is, which meanings are attributed by whom and where, thereby seeking to explain what practices and what power structures these specific meanings reveal (Bacchi 2005; Durnová et al. 2016). We next explore the differences and similarities of these two traditions. We deliberately narrow comparisons of interpretive policy studies to mainstream policy process studies and not to mainstream policy analysis. We do so because mainstream policy analysis is notably different from policy process studies and including both would complicate our commentary. Similarly, we do not discuss the differences between the sub-traditions found within interpretive policy studies but focus on the tradition's overarching principles and practices. Table 1 compares and contrasts the scopes and views of public policy and policy processes from interpretive and mainstream traditions. Mainstream policy process studies typically view public policies as products and sources of politics or constituting the institutional landscape that shapes and is shaped by political behaviors. Public policies can be viewed as translations of understandings, interests, values, or beliefs (Sabatier 1988). Mainstream policy process studies make a distinction between public policies as either "rules-in-form" or "rules-in-use" that, respectively, represent policies written and adopted by a decision-making venue (e.g., in a statute or regulation) or the regularized behavior of government officials, street-level bureaucrats, or other actors engaged in the practices of government (Ostrom 2005).
Mainstream policy process studies then focus on the contexts, events, actors, and outcomes that surround and embed public policies.

Scopes and views of public policy and policy processes

For interpretive policy scholars, since public policies are manifestations of meanings that actors create and that can be conveyed through the artifacts of language (Yanow 2003; Bacchi 2005; Torfing 2005; Hay 2011), discourse becomes an important concept and a way of understanding and representing the policy process (Bacchi 2009; Fischer and Gottweis 2012; Dodge and Metze 2017). While actors see and transform the world through discourse, these actors are shaped through the same discourse and can be transformed by it. Interpretive policy studies view the formation of the policy process through semantic categories used in everyday interactions, observed through the use of specific words, arrangement of these words in sentences, narratives, metaphors, arguments and rhetorical figures that frame actors attempting to influence public policies and the intended receivers. The aim of interpretive policy studies is to focus on the conceptual understandings of why public policies emerge in these specific forms.

Table 1. Scopes and views of public policy and policy processes (interpretive policy studies vs mainstream policy process studies).
Views of public policy: interpretive policy studies view public policies as both rules-in-form and rules-in-use and as manifestations of meanings, which are associated with values, beliefs, emotions, feelings, and power structures; mainstream policy process studies view public policy as both rules-in-form and rules-in-use and as sources and products of politics and translations of understandings, interests, values, and beliefs.
Views of policy processes: interpretive policy studies view the policy process as involving the study of meanings in, and of, the interactions surrounding public policies, in particular actors, objects, and language, wherein there is a heavy emphasis on discourse and underlying power structures; mainstream policy process studies view the policy process as involving the study of all the interactions surrounding public policies, including actors, events, contexts, and outcomes.

These traditions share some commonalities. Both traditions focus on public policies to understand governments (although interpretive policy studies will often seek this understanding outside of the usual institutional structures of governments for reasons explained below). Both traditions view public policies as something written or in use that shape outcomes. Both focus on the politics surrounding public policies that involve interest groups, powerful entities, and others. Both aspire to understand the processes through which these policies emerge and expire. However, interpretive policy studies emphasize the power of language more as something that shapes policies while also being shaped by them. Interpretive policy studies want to understand how knowledge is both constructed and performed, who gets a say in the policy process, who is considered a legitimate actor, and who becomes marginalized, silenced, or omitted. Mainstream policy process studies also study power and language but emphasize it less and research it differently. For example, mainstream policy process studies might analyze shifts in the news media discourse as an expression of power and in relation to changes in public policies. Conversely, interpretive policy studies focus more on how language constructs the relation between expressions of power and changes in public policy.
As mainstream policy process studies focus less on these underlying structures of language and power, interpretive policy studies have perceived mainstream policy process studies as contributing to concealing power relations and oppressing certain forms of knowledge and, thus, becoming part of the discursive landscape that needs to be analyzed. Table 2 compares and contrasts the uses of theories in the two traditions. Mainstream policy process studies generically use theories as a way to organize inquiry, establish the scope of such inquiry (e.g., types of questions asked), specify assumptions, define concepts, and relate those concepts (e.g., in the form of principles, hypotheses, or propositions). Theories act as platforms for organizing research programs that enable collaboration among groups of scholars. This supports the production of simultaneous theory-guided research applications in different locales, in different points in time, on different topics, through different methods, and by different researchers that can inform and contribute to knowledge, which can then be used to revise and update theories. Theories, thus, become continuously revisited and updated reservoirs of knowledge about policy processes. Theories also help mitigate subjectivity and bias of the researcher (see the next section).

Uses of theories

The relational forms (such as relating concepts in hypotheses or propositions) posited in theories vary in their utilization within mainstream policy process studies. 4 For some, these relational forms state associations to confirm or refute. For others, relational forms serve direct inquiry and organize and communicate the presentation of findings. Sometimes relational forms specify causal drivers and emphasize processes (mechanisms) or variances (effects). When a causal argument is made, a theory usually offers the rationales underlying the relationship, often tied to a model of the individual (e.g., what is assumed about an individual's motivations and cognitive abilities). Other relational forms are more prescriptive in specifying the conditions associated with the likelihood of a phenomenon to exist or are descriptive by positing patterns. Sometimes these relational forms direct researchers to specify the context on which they depend. In this way, relational forms are stated with broadly defined concepts that are adaptable to different contexts given the rationale or logic established in the theory.

Table 2. Uses of theories (interpretive policy studies vs mainstream policy process studies).
Uses of theories: interpretive policy studies provide insight into value orientations and value constitutions and use reflexivity in the systematic back and forth between field and theories; mainstream policy process studies provide a platform (shared vocabularies and assumptions) for establishing research programs, serve as reservoirs of knowledge, simplify and guide research, and organize inquiry around relational forms (e.g., hypotheses).
Relational forms: interpretive policy studies include hunches or propositions, which highlight the interdependence of what is studied and who studies it, include also grounded theory, which emphasizes porosity between field and understandings, and use reflexivity as a way to assess personal or topical biases that necessarily emerge in any analysis of public policy phenomena; mainstream policy process studies include hypotheses, propositions, and principles for varied purposes, such as to refute or confirm expectations and to posit explicit causal or non-causal associations in helping to organize inquiry, and adopt a major goal of generalizability of theoretical arguments given contextual conditions.
Major theoretical approaches: interpretive policy studies include (but are not limited to) argumentative policy analysis, interpretive policy analysis, deliberative policy analysis, poststructuralist policy analysis, critical policy studies, narrative policy analysis, and rhetorical policy analysis; mainstream policy process studies include (but are not limited to) punctuated equilibrium theory, multiple streams framework, institutional analysis and development framework, the advocacy coalition framework, diffusion of innovation, policy feedback and historical institutionalism, social construction and policy design, narrative policy framework, and ecology of games framework.

In interpretive policy studies, there is a deliberate absence of hypotheses. Concepts and their interrelations, as we usually find them in mainstream policy process studies, also exist in interpretive policy studies and can be understood both as associations to confirm or to refute and as guideposts to organize the analysis. What differs in interpretive policy studies is that they are created from the inquiry and analysis in the field rather than previously derived from a theory. The relation between the use of theories and the way to proceed in the inquiry in interpretive policy studies can be summarized under the "logics of inquiry," a term that encompasses norms and strategies for guiding interpretive scholarship (Schwartz-Shea and Yanow 2013). Two key terms that aptly summarize the logics of inquiry are "intersubjectivity" and "interdependence." "Intersubjectivity" means that knowledge emerges from the interpretation of interactions between acting subjects, objects, or texts and, as such, it can be accessed only contextually (Durnová 2015). It is also not something that exists independent of the researcher or as something to be found. 5 These interactions are studied through all kinds of practices. As a consequence, interpretive policy studies scholars often refer to such contextualization as situated interactions. An important aspect of situated interactions is the view of interpretive policy scholars that they, as researchers, are a part of such interactions and that their observational position (e.g., social, cultural, and national origin) is part of the analysis of their research. Thus, interpretive policy scholars practice a degree of self-awareness in collecting and analyzing data in what is called "reflexivity." Interdependence relates to the way theories in the interpretive tradition lay out assumptions about the policy process (Hajer and Wagenaar 2003). Theories informing interpretive policy studies presuppose contingent formations of social phenomena and lay out possibilities of studying and following that contingency. They transcend what they see as the objectivistic, reductionist, and rationalistic bias of modern social science theories that shape understandings of the surrounding world (Torfing 2005) and highlight the (socially) constructed character of norms, values, symbols, identities, and knowledge paradigms. While theories in interpretive policy studies have explanatory value, their aim is not to establish covering laws or to reveal the intrinsic causal properties of social objects. Theories in interpretive policy studies aim, instead, to understand why particular policies were constructed, stabilized, or transformed (Torfing 2005) and how this happened.
Approaches to research

The major differences between mainstream policy process studies and interpretive policy studies can be found in how they conduct research (see Table 3). They differ in their ontological and epistemological orientations, in assessing quality, in handling human bias, and in treating generalizability. Yet, there are also similarities. For example, both care about human bias but handle it differently, and both care about quality science but gauge it differently.

Table 3. Approaches to research (interpretive policy studies vs mainstream policy process studies).
Ontological and epistemological orientations: interpretive policy studies assume a constructive ontology and interpretive epistemology, followed by consistent methodologies and methods; mainstream policy process studies focus heavily on methodology and methods and rarely emphasize ontological or epistemological orientations.
Dealing with human bias of the researcher: interpretive policy studies accept reflexivity in assessment of data collection and data analysis and embrace human bias during research and in discussing the results; mainstream policy process studies mitigate human bias through specifying all assumptions and steps in the research process, including clear conceptual definitions and measurement usually embedded and guided by theory.
View of generalizability: interpretive policy studies emphasize particularities or singularities of a case but might address conceptual inference (relationships between categories); mainstream policy process studies emphasize generalizability with the goal of teasing apart localized insights from those that generalize over time or space.
Gauging quality: interpretive policy studies judge research by its credibility, dependability, and confirmability and overall transparency and clarity; mainstream policy process studies judge research by its reliability and validity and overall transparency and clarity.

Philosophical foundations

Interpretive scholars moor their philosophy of science to a constructive ontology and interpretive epistemology (Dodge 2015; Schwartz-Shea and Yanow 2013). Constructive ontology means that public policy phenomena are constructed through meanings assigned to them by various actors, and interpretation is then seen as the suitable (epistemological) means to reveal the rules and operations of that construction. Under this philosophical orientation, the construction will always interrelate dynamically with structural conditions and agencies challenging them. Interpretive policy studies are oriented as consciously antipositivist and conceive "positivism" as a form of procedural oppression that obscures hierarchies between included and excluded actors and the corresponding creation of meaning and established understandings. Mainstream policy process scholars are then assumed to be part of the group representing "positivism" and "the other." In turning mainstream policy process scholars into the other, depictions of their philosophical orientations have become exaggerated and erroneous caricatures. These depictions of mainstream policy process scholars' philosophy of science have included the following: researchers are without presuppositions or biases and perceive the world as independent of them; causality is akin to "hitting a cue ball on a pool table"; discovering covering laws and causal explanations that span all contexts is the sole purpose of research; context is irrelevant; the world is stable; public policies signify objective instruments of rationality rather than translations of beliefs and values or products of politics; and conceptual measures are objective representations of the truth. While these caricatures are unsubstantiated or exaggerated, a question then arises: what are the philosophical foundations of mainstream policy process studies?
To such a question, the strawman caricature described in interpretive policy studies needs updating and corrections, but it is beyond the scope of this paper to detail what mainstream policy process scholars believe and practice as their philosophy of science. We speculate (without much of a basis beyond our own observations) that most mainstream policy process scholars recognize their presuppositions and biases, the lack of objectivity in their measures, the inherent challenges in any attempt to specify causality and, hence, the emphasis on associations and patterns and, at best, probabilistic relationships, the importance of contexts, the value of quantitative and qualitative approaches, the dynamism of policy processes, and that public policies are translations of beliefs and values and, hence, reflect and influence power and politics.

Dealing with the researcher's human bias

The two traditions differ in dealing with human biases, which we describe through another analogy. Imagine how scholars from both traditions would grocery shop. Mainstream policy process scholars, concerned about their own cognitive limitations and biases, might use a shopping list. Analogous to the use of theory, a shopping list might help guide what to pay attention to and what to ignore and, thus, guard against researchers' presuppositions to shop only in one aisle or on a whim of hunger. Behind this shopping list might be one or more chosen recipes. The extent to which researchers would shop beyond their list is contextually dependent: sometimes they will shop beyond the list and other times not. Mainstream policy process scholars would update and modify their grocery lists or have multiple lists depending on their values, objectives, and the store visited. Interpretive policy scholars are also concerned about the subjective nature of their research but embrace it. Interpretive policy scholars might shop without a grocery list. They might view a grocery list as both oppressive to their shopping choices and as a false way to mitigate their biases. Their purpose in shopping would be to cook a meal that portrays their interaction with, and the identity of, the grocery store. When revisiting a grocery store, they might draw insights from prior shopping experiences not as a list but as conceptual suggestions about how next to grocery shop. In dealing with human bias, mainstream policy process scholars want their publications to hold as much detail as possible about the conduct of the research because this is a reflection of clarity and transparency and a means of mitigating human bias. For example, they might want to see the grocery list in the analogy above (i.e., the interview questions or survey questions used in research). This then becomes a common criticism from mainstream policy process studies of interpretive policy scholars, that is, that interpretive scholars are not clear enough in their data collection and analysis. On the other hand, interpretive scholars want to see in their publications not a shopping list but explicit acknowledgment of the dynamics between the researcher and the phenomenon studied in a reflexivity assessment. Interpretive scholars do not treat subjective nature and human bias as limitations but rather as a legitimate part of research and a reason to conduct research. Assessing reflexivity means expressing awareness of the situated relationship between the studied phenomenon and the researcher.
This assessment might include the structural conditions that affect the inquiry, especially the know-ability of the phenomenon (Shehata 2006). A substantial part of this reflection includes expressions of the contradictions and limitations affecting the choice of the researcher and the methods used. Expressing reflexivity in the research, thus, becomes a statement of clarity and transparency. To return to our grocery store analogy, interpretive policy scholars want their publications to show how shopping and buying food has contributed knowledge about the grocery store, how hunger and nutritional needs of the researcher interacted with the shopping experience, and how, knowing all that, we can interpret the food in the store and how that food might relate to particular practices of shopping and cooking. This assessment usually appears not in the methods section but is expressed throughout the publication. According to interpretivists, the errors of using theories without taking into account their ontological background (i.e., analogous to the shopping list above) show a lack of reflection on the discourse that springs from the tradition of positivist objectivism. Reality, as depicted in a positivist's way of thinking, is simply "out there" and can be known through purely objective rational procedures. That is why they perceive mainstream policy process scholars as using the same theories repeatedly without considering contexts. Such perceptions conflict with interpretivists, who embed knowledge as a part of all sorts of social relations and interactions with the context and with power dynamics that affect the production of knowledge. Yanow (2003) considers both contextual information from interviews and the field that is often used to inform more of the "science" phase of mainstream policy process studies as problematic. For interpretivists, engaging with the field is already part of the scientific process. This importance of philosophical rigor within the use of theories helps explain why interpretivists do not visualize theories as tools in a toolbox. They want to define their tools after they have seen the policy problem they are analyzing. Toolbox imagery, from an interpretive perspective, can be limiting because it might divert researchers from the start toward categories that misrepresent the experience from the field. This further explains the related perception among interpretivists that the use of theories as practiced by mainstream policy process scholars limits their understanding of the phenomena studied. Their use of theories also may obscure alternative narrations of the policy problem and meanings held by marginal social groups not considered in the theories. However, mainstream policy process scholars neither assume that reality simply exists "out there" independent of them nor argue that their research is objective. Indeed, the explicit statement of research methods in publications signals mainstream policy process scholars' uneasiness with the lack of objectivity in their procedures and the potential biases their presuppositions might bring to their research. Moreover, theories (i.e., the grocery lists) in mainstream policy process studies are not static or applied blindly. Theories can incorporate decades of research and experience and, thus, are updated and adapted to new contextual situations. Indeed, applying a theory thoughtlessly without contextual considerations is bad science.
For example, a theory might offer a broadly defined concept that enables the researcher to adapt and apply it appropriately to a given setting. To make this happen, some mainstream policy process scholars might apply various forms of applied scholarship to design, pretest, or ground-truth their data collection instruments (Van de Ven 2007), which is a part of the scientific process. Mainstream policy process scholars also use the framework-theory distinction to conduct their research comparatively and to incorporate context (Ostrom 2005). Through this distinction, a framework provides a portable and very general platform in the types of questions asked, concepts and shared language, and general relations among them. A theory then might incorporate a subset of the framework's concepts, and maybe additional concepts relevant to a case, to help understand and explain a particular situation. In this regard, frameworks provide portability across contexts and theories provide the adaptability to a particular context. Mainstream policy process scholars might not immerse themselves in the field as much as interpretive scholars, but they certainly incorporate it into their research. For mainstream policy process scholars, the use of theories (generally stated) helps bolster or refute parts or all of their knowledge through seeking errors and making corrections, finding surprises or confirmations, and learning from past experiences. By employing more than one theory, mainstream policy process scholars recognize and mitigate the oppressive way of thinking imposed by any one theory or approach. Thus, they use theories as lenses to guard against seeing the world from a biased or singular perspective. Of course, they maintain their common sense and instincts, but they also approach their phenomenon from distinct vantage points as suggested from different theoretical perspectives. Hence, for mainstream policy process scholars, theories (akin to tools in a toolbox) provide a means for critical thinking, a freedom to approach research using multiple perspectives, and a platform to build knowledge and learn from mistakes. Generalizability The two traditions differ in how they approach generalizability. A common perception among mainstream policy process scholars is that interpretive policy studies are plagued by relativism. Indeed, interpretive policy scholars view the conduct of comparative research and the practice of finding generalizable lessons as antithetical to their goals. Constructive ontology of interpretive policy studies underscores the situated character of actions and contingency, which downplays possibilities of generalizations. Interpretive policy scholars endeavor, instead, not to generalize but to show how actions, actors, and objects are situated with meaning, by whom they are situated, and how this might affect the way the meanings are understood. Interpretive policy scholars approach the issue of generalizability by exploring how insights are constructed, reflect power structure, and omit certain knowledge. Interpretive policy scholars might even ask why society highlights generalizations as the goal of scientific expertise. From a different perspective, defining generalizations as repeated patterns of actions or a configuration of actors, interpretive policy studies might offer analyses of repeated contingencies or relations between actors and contexts in different policy areas and in different times or different countries. 
Interpretive policy scholars might also focus on conceptual inference by drawing conclusions from their data on the relationships between categories (in the meanings of objects, actors, or words) as instances of broader recognizable patterns or features (Williams 2000; Schwartz-Shea and Yanow 2013). For mainstream policy process scholars, generalization (i.e., external validity) is a central component of their research enterprise. This is not about finding "covering laws" that fit all contexts. The challenge for mainstream policy process scholars is separating the particularities that fit a localized context versus those that generalize across contexts and, when such generalizations occur, to what extent and under what conditions. While both traditions design their inquiry with the help, and on the basis, of theoretical frameworks, they do it differently. Interpretive ontology assumes a strong connection to the field, that people are meaning-making creatures, and that research subjects and researchers comprise the studied world. Interpretivists might, thus, use "hunches" and questions prior to their fieldwork, but they do not posit hypotheses from them. Hunches are grounded in the research literature and often stem from prior knowledge of related study settings. Most importantly, hunches serve as starting points of inquiry that are designed to be revised after the initial field experiences. Adapting the research goals and approach as the research progresses is not just allowed but expected.

Gauging quality

Interpretive policy studies and mainstream policy process studies gauge the quality of their research differently. Borrowing from Dodge et al. (2005, p. 295), interpretive policy scholars assess their research based on its credibility (i.e., is the research plausible and supported through the data) and dependability and confirmability (i.e., is the research "fair, unbiased, or coherent by people who are external to the process"). In contrast, mainstream policy process scholars primarily assess the quality of their research based on measures of validity (i.e., accuracy in measurement or removing alternative explanations in research design) and reliability (i.e., related to the consistency in measurement).

Opportunities for collaboration

The fundamental tenet in the interpretive tradition is to avoid combining "positivist" methodology and methods with "interpretive" ontological and epistemological orientations. However, societal challenges have never been greater as we increasingly face culture wars and backlashes, threats to democracy, and social change due to a pandemic of historic proportions (Fishkin and Mansbridge 2017; Norris and Inglehart 2019; Offe 2017; Weible et al. 2020). In the face of such global calamity, a shared focus on governments, politics, policies, and related outcomes should be emphasized more than the differences in how both sciences conduct themselves. Indeed, the boundaries that offered the intellectual separation between these traditions, especially in the interpretivist tradition, now need to be jettisoned. Given this situation, we categorize opportunities for collaboration in four ways.

1. Side-by-side research

Scholars from both traditions could analyze the same topic using their respective methodologies and methods, and insights could then be combined into a written product. For example, a publication on the role of conflict in public policy could offer two distinct analyses, one using interpretive methods and one using mainstream methods.
The results from both would then be combined to help inform the conclusions. This approach need not require researchers from both traditions to compromise how they conduct their research; it only requires collaboration in writing up the results in a publishable form.

2. Integrative research

The two traditions can integrate their research on the same project in sequence or in parallel. For example, the Comparative Agendas Project has generated large datasets of agenda items and various types of public policies across countries over time. 6 Interpretive policy scholars could explore this population of cases as a starting point to begin their research and then provide in-depth analysis of one of the cases. This would provide the interpretive scholar with a way to articulate how their case might relate to other cases. Likewise, mainstream policy process scholars could conduct their research based on the findings from interpretive policy studies. In this scenario, an interpretive scholar might discover a number of commonly used discourses in a particular case and setting. Mainstream policy process scholars could then use these discourses as points of departure in conducting large-n quantitative analysis of their propensity across space and time. Obviously, we recognize that these integrative approaches might run counter to some of the goals and norms of both traditions. We are not asking either tradition to compromise their research integrity but rather to accept the integrity of the other tradition. In both examples, science is being conducted and one tradition is neither inferior nor superior to the other.

3. Constructive comparisons of research techniques

The two traditions share similar foci on policy processes but differ in how they conduct research, ask questions, use theory, and gauge quality. These differences have been viewed as points of separation but can also be viewed as opportunities for cross-fertilization and learning. For example, assessment of quality in the interpretive tradition (credibility, dependability, and confirmability) and reflexivity could be used to improve aspects of mainstream policy process studies. Additionally, mainstream policy process studies might draw inspiration from how interpretive policy studies anchor their research to ontological and epistemological foundations. At the same time, interpretive policy studies can draw insights from mainstream policy process studies in communicating the transparency in their methodology and methods.

4. Applied research

Given that both traditions seek to inform people outside of academia and to contribute to the betterment of society, both traditions could begin by recognizing that nontrivial problems exist and that academics have an opportunity and an obligation to help inform societal responses. Both traditions should, thus, set aside inconsequential concerns about their differences. Leaders and non-leaders, the powerful and the powerless, and the decision-makers and the impacted care little about epistemological or ontological orientations, the value of hypotheses versus hunches, the importance or use of theory, and the criteria for gauging research quality. What they want is useful information that can help them make sense of their past, present, and future. For all scholars of public policy, there is a need to put differences aside in jointly conducting applied research in contributing to society (see Weible et al. 2020). Several research areas are ripe for implementing these opportunities for collaboration.
This includes analyzing discourse and stories using interpretive approaches with the Narrative Policy Framework (Jones and Radaelli 2015; Dodge 2015) or with the Social Construction Framework (Barbehön 2020). Similar efforts could explore Discourse Coalitions (Hajer 2005) and the Advocacy Coalition Framework (Jenkins-Smith et al. 2017) or the use of language in shaping rules and behavior (Hay 2011; Ostrom 2005). More research could also be conducted on implementation (Maynard-Moody and Musheno 2000). Additionally, both traditions study political conflict in ways that could be integrated (Weible and Heikkila 2017; Dodge and Metze 2017). Finally, one omission in the study of human behavior in both traditions has been the role of emotions; here, interpretive scholars have begun to develop insights (Durnová and Hejzlarová 2018; Durnová 2018), and this effort could be supplemented with mainstream methodological techniques.

Conclusion

This commentary seeks to reorient discussions about both mainstream and interpretive policy traditions toward more constructive dialogues and collaborations. Although differences exist and persist, these two traditions are not as polarized as often presented, and they can offer tangible benefits to each other. Both traditions aspire to understand policy-related phenomena, but they differ in their approaches. Mainstream policy process studies focus more on questions of generalizability and often use theories to build and advance knowledge. Interpretive policy studies focus more on underlying or emerging power structures that shape discourse that then reveals those power structures. Even though mainstream policy process studies contextualize their research, interpretive policy scholars immerse themselves more in the field and adapt their research accordingly. While mainstream policy process scholars might mitigate effects of presuppositions through theories and transparency in their methods, interpretive policy scholars might practice reflexivity while embracing their relationship with their research subjects. Both traditions care about the quality of their research but gauge it differently. There are untapped opportunities for these traditions to collaborate in conducting side-by-side research, integrating research, constructively comparing research techniques, and applying their scholarship jointly to better society. Collaborations between these traditions could be fostered if mainstream policy scholars would accept a broader definition of social science and if interpretive policy scholars would avoid forcing divisions based on ontological and epistemological orientations. Possibly the best way to help these traditions work together is for scholars to focus on their mutual understandings of methodology and methods in approaching public policy in order to conduct both sciences more transparently and to strengthen their societal pertinence.
2020-05-28T09:14:43.423Z
2020-05-22T00:00:00.000
{ "year": 2020, "sha1": "334d4a14ff36a31d893813dcc3d54698c831ea42", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11077-020-09387-y.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "8c766a0403760e7c83fd3a665ee597fd46aa41a6", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Medicine", "Political Science" ] }
231954855
pes2o/s2orc
v3-fos-license
Genetic disorders of the surfactant system: focus on adult disease Genes involved in the production of pulmonary surfactant are crucial for the development and maintenance of healthy lungs. Germline mutations in surfactant-related genes cause a spectrum of severe monogenic pulmonary diseases in patients of all ages. The majority of affected patients present at a very young age, however, a considerable portion of patients have adult-onset disease. Mutations in surfactant-related genes are present in up to 8% of adult patients with familial interstitial lung disease (ILD) and associate with the development of pulmonary fibrosis and lung cancer. High disease penetrance and variable expressivity underscore the potential value of genetic analysis for diagnostic purposes. However, scarce genotype–phenotype correlations and insufficient knowledge of mutation-specific pathogenic processes hamper the development of mutation-specific treatment options. This article describes the genetic origin of surfactant-related lung disease and presents spectra for gene, age, sex and pulmonary phenotype of adult carriers of germline mutations in surfactant-related genes. Surfactant proteins B and C The hydrophobic proteins SP-B and SP-C increase the rate of surfactant spreading over the alveolar surface. Furthermore, they enhance surface activity, contribute to innate immune defence of the lung and may be involved in surfactant catabolism through enhancement of the uptake of surfactant phospholipids [5,[20][21][22][23]. Synthesis occurs in alveolar Type II (AT2) epithelial cells and involves production of large precursor proteins, known as pro-SP-B and pro-SP-C, followed by intracellular proteolytical processing into the final mature forms. Mature SP-B and SP-C are secreted in the alveolar space via fusion of the lamellar body membrane with the alveolar cell membrane. Presence of SP-B in lamellar body membranes has been shown to be essential for correct processing of SP-C, whereas SP-C is not essential for processing of SP-B [12,20,[24][25][26][27][28][29]. SFTPB gene mutations The first mutation in SFTPB was identified in a full-term neonate with SP-B deficiency and alveolar proteinosis, who was homozygous for the so-called "121ins2" mutation [12] (table 1). The 121ins2 mutation in exon 4, officially referred to as c.361delCinsGAA, causes a frame-shift and a premature stop codon in exon 6, resulting in an unstable transcript as well as an absence of mRNA and SP-B protein [12]. Inheritance of disease due to mutations in SFTPB is autosomal recessive, with patients carrying bi-allelic homozygous or compound heterozygous mutations [2,30,31]. Approximately 50 different mutations have now been reported throughout the gene, of which the 121ins2 mutation is the most common and may account for over half of the SFTPB cases in white cohorts with respiratory distress syndrome (RDS) [2,[32][33][34][35][36][37]. Whereas homozygosity is deleterious, parents carrying the mutation in a heterozygous state are not affected, suggesting that a 50% decrease in SP-B has no major health effects. Disease due to SFTPB mutation typically presents in neonates as RDS with a fatal outcome within the first months after birth. However, some patients are reported to have longer survival times and later onset of disease. These patients carry mutations that allow partial production of functional SP-B protein [2,[34][35][36]38]. 
Although numbers are low, it is therefore suggested that a genotype–phenotype correlation may exist, with mutations causing complete deficiency in SP-B associating with fatal neonatal RDS and mutations causing partial deficiency in SP-B associating with delayed onset and prolonged survival. However, survival is rare and, although some affected children grow older, no adults with pathogenic bi-allelic SFTPB mutations have been reported thus far.

SFTPC gene mutations

The first family with a pathogenic mutation in SFTPC included not only a 6-week-old, full-term infant with tachypnoea and cyanosis, but also her mother [39]. Candidate gene sequencing of SFTPC yielded a heterozygous mutation (c.460+1G>A) that causes a deletion of exon 4 [39]. The mother, who died after delivery from respiratory insufficiency, had been diagnosed with desquamative interstitial pneumonitis (DIP) at the age of 1 year and was treated with corticosteroids until the age of 15. In contrast with SFTPB, inheritance of SFTPC-mediated disease is dominant and affects subjects of all ages, ranging from newborns to the elderly [40, 43, 44]. As an exception, several bi-allelic mutation carriers were reported with early disease onset [15, 45].

The pathogenic effect of an SFTPC mutation is primarily the result of aberrantly processed mutant protein, which has toxic consequences for the AT2 epithelial cell. These intracellular effects are associated with the position of the mutation in the gene and the type of mutation [8, 42, 46]. The most common SFTPC mutation in both children and adults is the so-called "I73T" linker domain mutation. SFTPC I73T, officially named c.218T>C, alters trafficking of the pro-peptide to early endosomes [47] and causes dysregulated proteostasis in AT2 epithelial cells [48]. Furthermore, alteration of surfactant lipid composition and activation of immune cells are reported for this mutation [49]. Other mutations, situated in the C-terminal BRICHOS domain, cause an increase in endoplasmic reticulum (ER) stress and activation of the unfolded protein response (UPR) [50–53] in AT2 epithelial cell experiments, leading to a toxic intracellular accumulation of the pro-protein and apoptosis [8, 54, 55].

SFTPC mutations have been frequently reported in neonates and juveniles suffering from RDS or chronic interstitial lung disease (ILD), and many were de novo findings [15, 39–41, 43, 44, 56–66]. Mortality is high, with disease worsening in approximately 50% of paediatric cases [61]; however, children have also been reported to experience disease stability or recovery with (or even without) long-term immunomodulation therapy [39, 61, 62, 67]. It does appear, however, that after years of relative stability, symptoms often resurface or aggravate unexpectedly and disease progresses towards treatment-refractory pulmonary fibrosis. Studies using knock-in Sftpc I73T mice show that induced expression in heterozygous mice results in early inflammation, which does not proceed to fibrosis, whereas, in homozygous mice, progression towards fibrosis does occur [68]. This shows that fibrogenesis is preceded by an early inflammatory phase and that dosage of the mutant allele influences disease evolution. Adults with SFTPC mutations present at all ages with a form of idiopathic interstitial pneumonitis (IIP) characterised by progressive pulmonary fibrosis, while no pulmonary disease is usually detected in childhood [39–41, 43, 44, 56, 57, 63–66].
Disease penetrance is extremely high and only a few asymptomatic family members carrying a pathogenic SFTPC mutation have been reported. Pulmonary screening of these relatives often unveils subclinical disease with below-normal lung function or the presence of ILD changes on high-resolution computed tomography (HRCT) [15, 69].

Lamellar body protein gene ABCA3

The lamellar body ATP-binding cassette 3 protein encoded by the ABCA3 gene is essential for intracellular processing and transport of surfactant proteins B and C [20]. Given the important role of lamellar bodies in surfactant metabolism, the gene was investigated in 21 ethnically diverse infants with severe neonatal surfactant deficiency of unknown aetiology. Fourteen infants had bi-allelic, mostly homozygous ABCA3 mutations and aberrantly formed small dense bodies in their AT2 epithelial cells [70]. Mutations in ABCA3 are now known as a common cause of both fatal RDS in the neonatal period and chronic ILD in older infants and children [31, 70, 71]. Inheritance of disease due to mutations in ABCA3 is autosomal recessive, with patients carrying homozygous or compound heterozygous mutations.

The most frequent mutation is c.875A>T, known as "E292V", which was first discovered in paediatric patients with ILD [72]. Studies have shown a significant reduction in lamellar body size and secreted phospholipids in homozygous E292V knock-in mice, which developed alveolitis and age-dependent lung remodelling [73]. Homozygosity for E292V is rare, with only one infant with fatal neonatal respiratory distress and one adult with idiopathic pulmonary fibrosis (IPF) reported [64, 74]. The E292V mutation is most commonly found in compound heterozygous patients. Five-year survival in subjects with two disease-causing ABCA3 mutations is <20% [61].

To date, over 200 different mutations have been discovered and genotype–phenotype correlations are beginning to emerge, although the role of many mutations is still unknown and difficult to predict. Most frequent are null mutations (frame-shift and nonsense mutations) predicting complete absence of functional ABCA3. A study of 185 children with bi-allelic ABCA3 mutations has shown that patients who are homozygous for null mutations have respiratory failure at birth, resulting in either a fatal outcome or lung transplantation, whereas absence of null mutations on one or both alleles frequently results in later onset and better survival [74]. Similar results have been obtained in a study of 40 European patients [75]. The non-null mutations result in partial functional impairment or induce cellular toxicity [1] and correlate with better survival. Although extremely rare, a recent review and case series [76] describes seven adult patients with interstitial pneumonitis who carry bi-allelic mutations in ABCA3, of whom only one patient carried a null mutation [71, 75–78]. Carrying a single ABCA3 missense mutation may also increase the risk for disease and has been found to be associated with an increased risk of neonatal RDS in late preterm infants [79, 80]. Moreover, it has been suggested that a single heterozygous ABCA3 mutation might influence disease development in carriers of an SFTPC mutation [69, 81]. However, very few such cases are documented and, due to the variable disease course in SFTPC mutation carriers, the link is difficult to study [40, 69, 81].
Surfactant proteins A and D

The hydrophilic surfactant proteins SP-A and SP-D are structurally related and play an important role in adaptive and innate immunity [6, 7]. The proteins consist of four domains: a short N-terminal domain involved in oligomerisation, a collagen-like domain, a coiled neck region important for oligomerisation and spacing of the lectin domain, and the C-terminal carbohydrate recognition domain (CRD). The CRD induces opsonisation by binding carbohydrates at the surface of pathogens [6, 82]. Expression of SP-A and SP-D is not limited to the lung, and both RNA and immunohistochemical expression have been reported in the epithelia of multiple organs, in congruence with a role in host defence [6, 82–84]. In the lung, SP-A and SP-D expression appears highest in AT2 epithelial cells, but is also observed in sub-mucosal and club cells [85, 86]. In contrast with SP-B and SP-C, secretion of SP-A and SP-D in AT2 epithelial cells bypasses the lamellar bodies [87]. Within AT2 epithelial cells, SP-A localises mainly in the small vesicles and multivesicular bodies [88], whereas SP-D is highly localised in the ER, as well as being present in the Golgi complex and in multivesicular bodies [86]. Furthermore, SP-A is enriched in the outer membranes of unwinding lamellar bodies in the alveoli [88] and plays a role in tubular myelin formation. In mice lacking SP-A, tubular myelin was missing and increased susceptibility to pulmonary infections was observed [89].

Two slightly different forms of SP-A (SP-A1 and SP-A2) exist and are encoded by SFTPA1 and SFTPA2. These genes are highly similar, sharing 94% sequence homology [6], with the main difference consisting of four amino acids in the collagen-like domain [82, 90]. However, both quantitative and qualitative differences exist: SP-A2 is more efficiently translated, is more abundant and more effectively enhances bacterial phagocytosis than SP-A1 [91, 92]. To date, only mutations in SFTPA1 and SFTPA2 have been associated with monogenic lung disease. Common variants in SFTPD are associated with numerous respiratory diseases, as recently reviewed by SORENSEN et al. [82], but mutations that specifically cause monogenic parenchymal lung disease are not known. The absence of SFTPD mutations suggests that such mutations are either deleterious in utero, do not cause a pulmonary phenotype, or may be resolved without pathogenic consequences. For instance, the low production of SP-D in AT2 epithelial cells may preclude the deleterious consequences of mutant alleles.

SFTPA2 gene mutations

Involvement of the surfactant collectin genes in monogenic lung disease was discovered through genome linkage analysis in a large family with adult-onset pulmonary fibrosis. The analysis pointed towards a large region on the long arm of chromosome 10 which contained about 120 genes, including SFTPA1, SFTPA2 and SFTPD [93]. Subsequent sequencing revealed a heterozygous mutation (GGG/GTG) in codon 231 of SFTPA2 (c.692G>T; p.(G231V)) which segregated with disease in the family. Additional sequencing in 58 unrelated probands with familial pulmonary fibrosis revealed a second heterozygous mutation in SFTPA2. Both mutations involve amino acid substitutions in exon 6 (which encodes the CRD) and were predicted to destabilise the protein [93]. In transfected A549 cells, ER retention of the mutant protein and ER stress were observed; furthermore, the mutated protein was not present in the patient's lavage fluid [94].
Later studies not only showed that harmful mutations are limited to exon 6, but also that, next to pulmonary fibrosis, lung cancer was always present in mutation-carrying families [64, 95]. Six mutations are now known and inheritance of disease due to mutations in SFTPA2 is autosomal dominant. Disease has only been reported in adults [64, 93, 95], although the age range is wide. The youngest patient presented at the age of 20 years with a significantly reduced forced vital capacity (FVC) of 57% predicted and diffusing capacity of the lung for carbon monoxide (DLCO) of 49% predicted [95], suggestive of preclinical onset of disease pathogenesis at an even younger age. It is our experience that survival is limited, mimicking the outcome in IPF, and that lung transplantation is a successful therapy in adults.

SFTPA1 gene mutations

The involvement of mutations in the SFTPA2 gene in families with both pulmonary fibrosis and lung cancer led to the compilation of a small cohort characterised by the co-existence of both diseases. Candidate sequencing of SFTPA2 in the 12 probands yielded no results; however, in SFTPA1, a disease-segregating heterozygous mutation (c.631T>C) causing an amino acid substitution at position p.(W211R) was detected. Similar to previous SFTPA2 findings, the mutation was located in exon 6 and inheritance was autosomal dominant [96]. Experiments with transfected HEK293T cells showed an absence of mutated protein in the cell medium. Although the study involved families with adult-onset pulmonary fibrosis, a 9-month-old baby who died from severe respiratory disease was a member of the mutation-carrying family. The child's biopsy demonstrated excessive intra-cytoplasmic SP-A staining in hyperplastic AT2 epithelial cells, highly suggestive of a deleterious effect due to mutated protein in the postnatal period [96]. Thereafter, three other families were described, each with their own unique exon 6 SFTPA1 mutation [97–99], including the homozygous adult sons of consanguineous heterozygous Japanese parents [99]. The parents were asymptomatic but, upon examination, subclinical disease with a DLCO <63% predicted was detected [100].

Multisystem diseases: NKX2-1

There are several other genes directly or indirectly related to surfactant homeostasis in which mutations may cause pulmonary fibrosis. For these genes, pulmonary disease is often only reported in paediatric patients or in the context of a multi-system disease in children and adults [101, 102]. One of these genes is the transcription factor NKX2-1, which directly regulates expression of all the above surfactant proteins and ABCA3, as well as its own expression via a positive feedback loop [103]. Heterozygous mutations in NKX2-1 cause a spectrum of pulmonary phenotypes, presenting as RDS in neonates or ILD in older children or adults affected with pulmonary fibrosis or recurrent pulmonary infections. NKX2-1-associated disease is expressed as brain-lung-thyroid syndrome. While neurological features are most common, the respiratory problems may appear isolated in up to 25% of patients and can improve or become life threatening [104, 105]. Inheritance of disease is autosomal dominant and caused by haploinsufficiency, mostly conferred by nonsense and frame-shift mutations or by complete gene deletions [106]. Both penetrance and expressivity are extremely variable, corresponding with an overall lack of genotype–phenotype correlations [104–107].
Many patients survive into adulthood; however, diagnosis in adulthood without prior severe respiratory problems has only been described in four subjects. These subjects had an age at diagnosis of between 25 and 40 years. Two subjects had pulmonary fibrosis, one had ILD and one was asymptomatic with early signs of pulmonary fibrosis [104, 105].

Among the many syndromes associated with pulmonary fibrosis and surfactant homeostasis, Hermansky-Pudlak syndrome (HPS) is worth mentioning. HPS is an autosomal recessive, multisystem disease characterised by oculocutaneous albinism. The disease is caused by dysfunctional vesicles in multiple organs, such as the skin and the eyes, as well as the blood and AT2 epithelial cells. Mutations in the HPS1, AP3B1 (HPS2) and HPS4 genes, encoding major components of lamellar bodies, are involved in the development of chronic progressive pulmonary fibrosis [108]. Onset of pulmonary fibrosis usually occurs at the age of 30-40 years and median survival is approximately 10 years [109]. HPS has one of the best-organised patient societies, facilitating not only patient empowerment but also scientific research and the conduct of clinical trials (www.hpsnetwork.org).

Pathogenesis

Two different kinds of deleterious surfactant-related mutation exist. The first, the loss-of-function mutation, results in an absence of or a decrease in functional protein. This type of mutation is generally found in recessive disease and indeed is an important mode of action of mutations in ABCA3 and SFTPB; however, loss of function is also the mechanism behind the dominant NKX2-1 disease. Loss-of-function mutations cause reduced protein content, resulting in dysfunctional lamellar bodies or disrupted surfactant homeostasis. The second mechanism, the gain-of-function mutation, involves an increase in the amount of a protein or alters its functionality. This type is most common in dominant disease and is the main mode of action of mutations in SFTPC, SFTPA1 and SFTPA2. Several studies have shown that mutations in these genes result in increased protein misfolding, aberrant protein trafficking, intracellular retention or accumulation, increased ER stress, activation of the UPR, increased pro-apoptotic signalling and apoptosis [47, 55, 94].

The commonality involved is not the mutation-specific aberrant process but the involvement of the AT2 epithelial cell, the alveolar progenitor cell that is key to alveolar maintenance and repair [110]. Several studies have provided evidence that injurious processes in AT2 epithelial cells, which may be mediated by apoptosis, are sufficient for the development of pulmonary disease [111]. However, not all studies could detect involvement of apoptosis, even when increased ER stress could be detected [94]. Recently, a different form of regulated cell death, necroptosis (also known as programmed necrosis), has been detected in surfactant-related disease. Knock-in mice carrying the Sftpa1 c.622T>C mutation spontaneously developed pulmonary fibrosis at 20 weeks of age, with further deterioration on aging or on infection with influenza virus. Most importantly, enhanced necroptosis rather than apoptosis of AT2 epithelial cells was detected in the knock-in mice and in the affected patient biopsies [99]. Overall, it can be concluded that the pathogenesis of surfactant-related mutations is only partly overlapping, which complicates the development of targeted therapeutic interventions.
Mutation spectrum

Mutations in surfactant-related genes are an accepted cause of severe pulmonary disease, but the overall relative and gene-specific contribution to disease is not well known. Figure 1a provides the mutation spectrum in 221 adult cases (>18 years old) of probands with familial pulmonary fibrosis in the national Dutch ILD biobank at St Antonius Hospital, using whole exome sequencing (WES) and mutation detection in all genes associated with familial pulmonary fibrosis. Mutations in surfactant-related genes are present in nearly 8% of probands, whereas nearly 36% had a telomere-related gene mutation and in 56% no causal mutation was found. These data are highly congruent with the previously published mutation spectrum for French patients suspected of monogenic pulmonary fibrosis [112]. The mutation frequency varies widely in paediatric cohorts of RDS and ILD, but experts suggest that 10-20% of cases have monogenic surfactant-related disease [2, 3, 113]; however, the specific genes involved are dependent on age, ethnicity and disease phenotype. Comparing the results of multiple studies shows that involvement of the recessive genes SFTPB and ABCA3 differs between populations [16, 35–37, 114–119]. Furthermore, next to clinical differences between the studied cohorts, the background frequency of deleterious recessive alleles also differs between populations. Haplotype and population analysis have shown that in some healthy white populations the most frequent SFTPB 121ins2 and ABCA3 E292V mutations have allele frequencies of 0.03-0.1% and 0.3-0.4%, respectively, but are absent or extremely rare in healthy African and Asian cohorts [16, 17, 33]. Furthermore, the origin of the parents has been shown to influence the mutation spectrum (e.g. one study found that in patients of Middle-Eastern descent not E292V but Y1515X is the most frequent mutation [74]). Homozygosity for deleterious alleles is frequent; however, this is not due to high background frequencies, but is instead caused by consanguinity [74, 75].

For lethal dominant gene mutations there is no population background frequency. The most frequent SFTPC mutation (I73T) is absent in all healthy cohorts but present in disease worldwide [15, 17, 40, 58, 64, 65, 120]. Haplotype analysis in three Dutch families with pulmonary fibrosis and the I73T mutation has shown that the mutations are of independent origin [40]. The SFTPC I73T mutation clearly represents a mutational hotspot, although extremely high disease penetrance and lethal expressivity prevent it from spreading in the general population. When comparing the frequency of SFTPC I73T and ABCA3 E292V between our 120 probands with familial pulmonary fibrosis and the GnomAD database, we found that the three SFTPC I73T mutations were statistically over-represented in familial pulmonary fibrosis, whereas the three ABCA3 E292V mutations in familial pulmonary fibrosis may be expected on the basis of an overall low background frequency in white subjects.

Age

The involvement of surfactant-related genes in disease is highly associated with age. Mutations in the recessive SFTPB and ABCA3 genes predominantly involve neonates and infants, while in children with ILD mutations are most commonly present in the ABCA3 and SFTPC genes [121]. The age spectrum of adult surfactant cases has not been studied previously.
As such, we compiled an age spectrum based on all reported adult cases (n=84) together with unpublished cases from our own cohort (n=7) [39–41, 43, 44, 56, 57, 63–66, 71, 93, 95–100]. Comparison between the age spectrum of this aggregated surfactant cohort and our Dutch pulmonary fibrosis cohort with telomere-related gene mutations shows that patients with surfactant mutations present with early-onset disease significantly more often (figure 1b). Median age in adult patients with surfactant-related mutations is 45 years versus 62 years in patients with telomere-related mutations (p<0.0001 by Mann-Whitney U-test). To further analyse differences between surfactant genes, we subdivided the aggregated cohort. The median age in the SFTPC group is 37 years, which is significantly lower than the median age of 48 years in the SFTPA1/SFTPA2 group (p=0.0036 by Mann-Whitney U-test); an illustrative sketch of such a comparison is shown after this section. The age difference is in congruence with the rarity of SFTPA mutations in childhood (only one documented case versus 51 adult cases) and suggests that SFTPA may be considered a disease of adults, while SFTPC-related disease may present at all ages. While genotype–phenotype correlations arise for the recessive SFTPB and ABCA3 genes, with complete protein deficiency causing fatal neonatal disease and partial functionality enabling prolonged survival, no such age-associated correlations exist for the dominant genes. Evaluation of paediatric SFTPC cases suggests that patients with mutations in the BRICHOS domain present at an earlier age than non-BRICHOS cases [61, 62]. However, there is no significant difference (p=0.1) between the age of adult patients with BRICHOS mutations (n=21; mean age 38 years) versus non-BRICHOS mutations (n=17; mean age 45 years) in this review. The gene-specific age spectrum, for which only data from patients diagnosed at age 18 years or older are used, is shown in figure 2a. Even so, it remains possible that the patients included have had either earlier preclinical onset of disease or an episode of clinical disease caused by the as yet unrecognised surfactant disorder during childhood. In figure 2b we present a model for the relative contribution of each gene to disease. The break at age 18 years represents the lack of knowledge on how frequencies below and above 18 years relate to each other. For SFTPC, it is not yet known whether the risk of developing disease is greater in childhood or adulthood (many studies report paediatric SFTPC cases, but reports have also been made on many families dominated by adult cases).

Sex in adult-onset disease

Differences in lung growth and production of surfactant are known to exist between males and females. Premature male infants are more susceptible to RDS and adult males are more susceptible to pulmonary fibrosis [122]. Knock-in mice with induced Sftpc I73T expression show significantly worse survival in male mice versus female mice [68]; furthermore, males predominate in telomere-related disease [123, 124]. Figure 1c shows that males (52%) and females (48%) are equally affected by surfactant mutations. In addition, both sexes appear equally distributed across all ages (figure 2a). This suggests that in monogenic surfactant-related disease sex, as well as environmental factors related to sex, does not have a major influence on the development of disease.
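The nonparametric age comparisons referred to above can be reproduced with standard statistical software. The following is a minimal, illustrative Python sketch; the age values are hypothetical placeholders chosen for demonstration, not the cohort data analysed in this review.

```python
# Illustrative only: hypothetical ages at diagnosis (years), not the cohort data.
import numpy as np
from scipy.stats import mannwhitneyu

surfactant_ages = np.array([23, 31, 36, 37, 40, 44, 46, 47, 52, 58])
telomere_ages = np.array([48, 55, 59, 61, 62, 64, 66, 68, 70, 73])

# Two-sided Mann-Whitney U-test comparing the two age distributions
u_stat, p_value = mannwhitneyu(surfactant_ages, telomere_ages,
                               alternative="two-sided")
print(f"median surfactant-gene carriers: {np.median(surfactant_ages):.0f} years")
print(f"median telomere-gene carriers:   {np.median(telomere_ages):.0f} years")
print(f"Mann-Whitney U = {u_stat:.1f}, two-sided p = {p_value:.4f}")
```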
Radiological and histological findings

Mutations in surfactant genes cause a variety of pulmonary phenotypes that are most strongly associated with age, ranging from neonatal RDS (ABCA3, SFTPB and NKX2-1) to children with ILD (ABCA3, SFTPC and NKX2-1) and adults with ILD and development of pulmonary fibrosis or lung cancer (SFTPC, SFTPA1, SFTPA2 and NKX2-1). Adult patients with surfactant-related mutations develop pulmonary fibrosis; however, the radiological pattern is usually inconsistent with usual interstitial pneumonia (UIP) and is often described as nonclassifiable interstitial pneumonitis. Features described for patients with an SFTPC mutation include bilateral reticular abnormalities, septal thickening, traction bronchiectasis or bronchiolectasis, ground-glass opacities (GGOs) and infiltrates [40, 43, 57, 63, 65, 66]. HRCT images commonly show parenchymal lucencies in the form of one or more scattered cystic lesions, most often in both lungs and varying in size between 0.5 cm and large bullae, or emphysematous changes [40, 43, 57, 63, 65, 66]. For children with SFTPC mutations in whom HRCT shows GGOs, increasing signs of fibrosis and cyst formation are observed with increasing age [61]. Follow-up of such cystic changes in our adult patients shows that the clearly walled cysts are not static, but may collapse in time (figures 3a and 3b) or, conversely, develop into large bullae-like structures (figures 3c and 3d). These structural changes in adults are likely caused by mechanical forces that arise when the lungs shrink from progressive fibrosis.

In contrast with the radiological findings, the histological findings in adult patients with SFTPC mutations most commonly meet the criteria for a UIP classification, and fibroblast foci have been detected even in mildly affected patients [40, 43]. The UIP pattern in adults is often superimposed by other inflammatory interstitial pneumonitis patterns, such as DIP or cellular nonspecific interstitial pneumonia (NSIP) [39, 40, 43, 63, 66]. Young children with SFTPC mutations present with histological patterns like cellular NSIP, DIP or pulmonary alveolar proteinosis (PAP) [39, 61, 66, 125] and may only develop fibroblast foci when entering adolescence. Furthermore, the proteinaceous material may be concentrated in abundantly present foamy alveolar macrophages [120]. However, even more diverse histological findings have been observed in adults, such as lesions suggestive of chronic hypersensitivity pneumonitis (HP) with airway inflammation and granuloma (along with a UIP pattern) [63] and lesions suggestive of sarcoidosis with well-formed noncaseating granuloma (own data). Furthermore, a patient with an SFTPC mutation has been found among a large cohort of adults with rheumatoid arthritis-associated ILD (RA-ILD) [56]. This further supports the idea that the different forms of pulmonary fibrosis may have highly overlapping disease pathogenesis, with molecular make-up not so much determining disease phenotype as determining disease outcome. For example, in telomere-related pulmonary fibrosis it has been shown that, regardless of the disease phenotype, the disease has a fatal outcome [126]. This may also apply to surfactant-related mutations in adults, but more research is needed. HRCT of SFTPA1 and SFTPA2 mutation carriers is difficult to classify, but most commonly shows septal thickening and GGOs and may be described as NSIP [95, 97, 99].
The few histological descriptions of SFTPA mutation carriers most often describe a pattern of UIP, but NSIP, DIP and organising pneumonia (OP) have also been mentioned [93, 95, 97, 99]. While there may be some gene-related radiological characteristics, such as parenchymal lucencies (i.e. solitary cysts) in SFTPC mutation carriers, histological examinations have yet to reveal specific gene-related features. However, reviewing all evidence, we suggest that the development of fibroblast foci is the common denominator in biopsies of all adult progressive surfactant-related mutation carriers and determines outcome. Development of other features, such as infiltrates or granuloma, may be the result of intrinsic or extrinsic factors that differ between patients.

Lung cancer

In general, patients with pulmonary fibrosis have an increased risk for the development of lung cancer [127]. In families carrying an SFTPC mutation, incidental cases of lung cancer have been reported [40]; however, in families carrying either SFTPA1 or SFTPA2 mutations, the number of cases with lung cancer exceeds the number of expected cases [90, 95, 96]. Summarising data from SFTPA mutation carriers shows that 37% were diagnosed with lung cancer, of whom two-thirds had a combination of both lung cancer (most commonly adenocarcinomas) and pulmonary fibrosis (figure 4a). Importantly, while the age of SFTPA patients ranges between 19 and 71 years, lung cancer was only present in patients aged >40 years. Furthermore, even though lung cancer is a disease of aging, numbers are not highest in elderly patients (figure 4b). More data are needed to see whether these patterns persist and to investigate their cause.

Although the mechanistic link with lung cancer is not understood, several possibilities are worth mentioning. Firstly, recent investigations into the pathogenesis of SFTPA1 mutations have shown that necroptosis is increased but not apoptosis [99]. For necroptosis, both tumour-promoting and tumour-suppressing effects are reported [128]. In the study of a family with an SFTPA1 mutation and evidence of necroptosis, one uncle with lung cancer was reported; however, his mutation status was unknown and tumourigenesis was not studied [99, 100]. Secondly, the expression of SP-A protein is not limited to AT2 epithelial cells but also occurs in club cells. A recent study in mice has shown that adenocarcinomas may originate from club cells after exposure to smoke [129]. Aberrant processes in club cells may therefore promote tumourigenesis. Thirdly, there is the role of the SP-A protein itself in preventing tumourigenesis. SP-A may suppress tumour development via recruitment and activation of natural killer cells and control of tumour-associated macrophage polarisation [130], or via epidermal growth factor receptor (EGFR) binding and down-regulation of epidermal growth factor (EGF) signalling [131]. Quantitative or qualitative changes due to SFTPA mutations may thus inhibit the protein's tumour-suppressing abilities. Further studies are needed to elucidate the direct relationship between SFTPA mutations and lung cancer. NKX2-1 plays a double-edged role in cancer, acting as a lineage-survival oncogene in lung adenocarcinomas and as an inhibitor of invasion, metastasis and progression, thereby conferring better prognosis. Somatic NKX2-1 loss-of-function mutations have been identified in adenocarcinomas and cause loss of tumour-suppressing abilities [132, 133].
A similar process is likely responsible for the increased risk of pulmonary carcinoma in young adults with germline NKX2-1 mutations [134].

Genetic testing

The contribution of surfactant-related genetic defects in patients suspected of monogenic disease varies between 5% and 25% [16, 35–37, 114–119, 135]. However, the contribution is low in sporadic patients with pulmonary fibrosis, especially when familial illness has been properly questioned [56, 95, 136]. In ILD guidelines, recommendations regarding genetic testing are absent, even though testing is highly informative in patients with early-onset and familial disease [112, 137]. As surfactant mutation carriers are difficult to diagnose from radiological imaging alone, genetic testing could aid diagnosis while removing the need for a biopsy. A recent study in children with ILD (age >2 years) has shown that genetic tests contribute to 15% of the diagnoses, slightly better than lung biopsies, which contribute to 13.5% [116]. Furthermore, genetic testing aids clinical management with regard to disease prognostication, drug choice and timing, and type of lung transplantation. In addition, the patient's choice for genetic testing is guided by counsellors, which will aid patients in making informed decisions about family planning and other life-changing events. Patients are considered eligible for genetic testing if they meet one of several criteria, including features of short-telomere syndrome or an age below 55 years. Together with the clinical genetics department, we built a two-step gene panel for exome sequencing analysis of eligible adult patients with pulmonary fibrosis. The first panel includes all genes published to be involved in adult pulmonary fibrosis and all genes published to be involved in short-telomere syndrome. If negative, the second, exploratory panel includes all genes published as involved in paediatric ILD and all genes involved in telomere length maintenance. Results from the exploratory panel are more difficult to interpret but may direct additional clinical testing of subjects (e.g. towards telomere length measurement or a bronchoalveolar lavage (BAL) procedure).

Due to the rarity of disease and the predominance of unique family-specific mutations, the clinical significance of many genetic findings remains uncertain. Many mutations are categorised as variants of uncertain significance (VUS), meaning that the significance of the mutation to the function or health of the patient is not known. Gathering genotype–phenotype information in a worldwide database is of the utmost importance for a better understanding of the impact of mutations on health. The Leiden Open Variation Database (LOVD; https://lovd.nl) is a large community-owned public variant database collecting case-associated genomic variants and phenotypes [138]. LOVD covers all important aspects of a case (i.e. data on the individual, the phenotype, longitudinal clinical changes, (combinations of) identified variant(s) and their classification). We have started gathering and curating published data and call on other researchers to supplement it for surfactant-related genes (e.g. www.databases.lovd.nl/shared/individuals/SFTPA2).

Therapy

To date, no proven effective drug therapies for surfactant-related genetic disease exist. Immunomodulation therapies yield variable results, have considerable side-effects in children [2, 139] and require careful consideration in adults because of the harm observed in patients with IPF [140, 141]. Since disease in adults resembles IPF, the drugs of first choice are the anti-fibrotics pirfenidone and nintedanib.
However, efficacy is unknown and a small trial investigating pirfenidone in HPS was stopped due to futility. Furthermore, side-effects leading to dose reduction of anti-fibrotics and treatment discontinuation are common in IPF [142] and challenge the optimal timing of the start of therapy in familial patients with early disease. We recently performed a review of drug effects in patients with a surfactant-related mutation, as well as in cell and mouse models, and concluded that the outcome of drug treatments was highly variable and most likely mutation specific [143]. Studies evaluating the outcome of drugs are hampered by the large number of different mutations with different pathogenic effects. Drug development in cystic fibrosis has profited tremendously from the CFTR mutation classification scheme. A first attempt to classify ABCA3 mutations has divided them into two types: Type I, which cause abnormal intracellular protein localisation, protein misfolding, ER stress and induction of apoptosis; and Type II, which associate with the catalytic domains of the transporter and result in normally localised proteins with a functional deficit in ATP hydrolysis and impaired lipid transfer [1, 144]. However, insufficient understanding of ABCA3 dysfunction and insufficient homology with the cystic fibrosis transmembrane conductance regulator (CFTR) hamper the development of a clinically useful scheme. Analogous to successful CFTR therapy, the potentiators ivacaftor and genistein may rescue protein functionality. Recently, these potentiators were shown to rescue the ABCA3 phospholipid transport function of three different mutations stably expressed in A549 cells [145]. Gene-based therapies, such as gene replacement or editing, hold promise for the future. Indeed, studies performed in mice and in organoid-like cell systems provide proof of principle for successful gene correction and restored functionality in SP-B deficiency [146–148].

For now, lung transplantation is the most successful option in infants (5-year survival: 55%) and in older children (5-year survival: >75%) [149], as well as in adults with end-stage disease (figures 3e and 3f). In the case of lung transplantation, it is of the utmost importance to recognise patients with SFTPA mutations, particularly in countries with a shortage of donor lungs where unilateral transplantation is common. The high probability of developing lung cancer in the native lung justifies bilateral transplantation in cases with an SFTPA mutation (we have previously reported metastasised lung cancer in a patient with an SFTPA2 mutation who had undergone unilateral lung transplantation [95]). The successful outcome of bilateral lung transplantation in an end-stage patient with an SFTPA2 mutation is presented in figure 3f.

Conclusion

Mutations in surfactant-related genes cause severe parenchymal lung disease in a significant group of patients of all ages. The genes associate primarily with age but not with sex, and genetic testing may aid diagnosis and disease management. Monogenic disease provides the opportunity for detection of early disease and holds the promise of early treatment and prolonged survival. Clinical counselling, offering the option of genetic and pulmonary screening, is therefore recommended for patients and first-degree family members. Most importantly, there is a need for a better understanding of the impact of family-specific mutations on health.
The rarity of the disease strongly warrants worldwide gathering of genotype and phenotype data to improve clinical management.
Interferon-Gamma Release Assay for the Diagnosis of Latent TB Infection – Analysis of Discordant Results, when Compared to the Tuberculin Skin Test

Background: With the interferon-γ release assays (IGRA), a new method for the diagnosis of latent tuberculosis infections (LTBI) is available. Due to the lack of a gold standard for the diagnosis of LTBI, the IGRA is compared to the Mantoux tuberculin skin test (TST), which yields discordant results in varying numbers. Therefore we assessed to what extent discordant results can be explained by potential risk factors such as age, BCG vaccination and migration.

Methods and Findings: In this pooled analysis, two German studies evaluating the QuantiFERON-Gold In-Tube test (QFT) by comparison with the TST (RT23 of SSI) were combined, and logistic regressions for potential risk factors for TST+/QFT− as well as TST−/QFT+ discordance were calculated. The analysis comprises 1,033 participants. Discordant results were observed in 15.4%, most of them being TST+/QFT− combinations. BCG vaccination or migration explained 85.1% of all TST+/QFT− discordance. Age explained 49.1% of all TST−/QFT+ discordance. Agreement between the two tests was 95.6% in German-born persons younger than 40 years and not BCG-vaccinated.

Conclusions: After adjustment for potential risk factors for positive or negative TST results, agreement of QFT and TST is excellent, with little potential that the TST is more likely to detect old infections than the QFT. In surveillance programs for LTBI in high-income, low-TB-incidence countries like Germany, the QFT is especially suited for persons with BCG vaccination or migrants due to better specificity, and in older persons due to its superior sensitivity.

Introduction

The burden of tuberculosis (TB) in healthcare workers (HCW) remains high in low- and middle-income countries [1] as well as in high-income countries [1–5]. Therefore an efficient strategy for surveying exposed HCW or anyone else exposed to active TB patients is needed. With declining incidence of active TB, the purpose of these screening programs has widened from early detection of active TB to detection of latent TB infection (LTBI) and preventive chemotherapy [6]. For these screenings, a specific test that allows for the diagnosis of recent TB infections [7] likely to progress into active TB is warranted. For the diagnosis of latent tuberculosis infections (LTBI), the recently developed interferon-γ release assays (IGRA) are good alternatives to the unspecific tuberculin skin test (TST) [8–11], which has been in use for nearly 100 years [12]. Due to the lack of a gold standard for the diagnosis of LTBI, the IGRA is compared to the TST in evaluation studies. In a meta-analysis of studies in healthy populations with varying risk for LTBI, discordant results between IGRA and TST were found in 21% (ELISpot, T-SPOT.TB) or 29% (ELISA, QuantiFERON-Gold) of the participants [13]. Among discordant results, TST-positive and IGRA-negative (TST+/IGRA−) combinations prevailed. This is easy to explain because the IGRA uses a few Mycobacterium tuberculosis-specific antigens (ESAT-6 and CFP-10 of the region of difference, RD1) while the tuberculin of the TST is a mix of about 200 non-specific antigens that are shared with nontuberculous mycobacteria (NTM) as well as with the strains developed from Mycobacterium bovis used for bacille Calmette-Guérin (BCG) vaccination [9, 11].
Nevertheless, the higher rate of TST-positive results compared to those in the IGRA might also indicate that the TST is more likely to detect resolved or old LTBI, while the IGRA mainly detects current or recent infections [7, 14]. In a German contact-tracing study, only those with a positive IGRA progressed towards active TB, while none of those with a positive TST but negative IGRA (TST+/IGRA−) developed TB in the two years following close contact with an active TB case [15]. As progression to active TB is higher in those with recent infections [16], this study supports the hypothesis that the IGRA might rather detect new infections. In the literature, little attention is given to TST-negative but IGRA-positive results. Assuming the IGRA to be highly specific, it is likely that this combination indicates LTBI [17]. A waning of the TST with age is discussed in the literature [18]. Whether the IGRA wanes to the same extent with age as the TST is an open question. So far, the extent to which BCG vaccination and NTM explain TST+/IGRA− discordance has not been analyzed, and the reasons for positive IGRA results that are not verified by the TST (TST−/IGRA+) are unknown [13, 19]. The proportion of TST+/IGRA− results that cannot be explained by known risk factors might be explained by a higher sensitivity of the TST for old infections. In consequence, the IGRA would be indicative of current or recent infections. If this rationale is true, a relevant proportion of TST+/IGRA− results that cannot be explained by BCG vaccination or exposure to NTM should be observed in the population of a country like Germany, which experienced the transition from high to low TB incidence in the last decades. In a country with a decreasing incidence of active TB, the prevalence of LTBI in older age groups should be higher than in the younger, less exposed age groups. Again, if the rationale is true, this effect should be more pronounced in the TST than in the IGRA. We analyzed risk factors for discordant results when the IGRA is compared to the TST in order to verify the hypothesis that the IGRA is sensitive to recent or current infections while the TST is sensitive to both old and recent infections.

Methods

For this analysis we combined two study populations consisting of 1,040 healthy persons. Due to indeterminate results in the IGRA, 7 persons had to be excluded from the analysis. Of the remaining 1,033 persons (table 1), 601 were part of the general population examined in the scope of contact tracing [20] and 432 were healthcare workers routinely screened for TB [21]. Both studies were carried out within the scope of German legislation concerning TB surveillance. They both used the same study protocol and were carried out by the same principal investigators (R.D., A.N.). Therefore they were suitable for a combined analysis. Information on BCG vaccination, country of birth, age, gender, and previous tests was collected in standardized interviews. BCG vaccination was verified by scars or vaccination records. In Germany, up until 1982 all newborns were BCG-vaccinated. Thereafter, vaccination was recommended only for newborns with high TB risk. No general recommendation on revaccination was issued [22]. Since 1998, BCG vaccination has no longer been recommended in Germany [23]. In both study populations the TST was performed using 2 TU of PPD RT23 (Statens Serum Institut, Copenhagen, Denmark). The test was administered to the volar side of the forearm of the participants and read 72 to 96 hours after the application.
The transverse diameter of the induration was measured. The observers were blinded to the IGRA results. Before TST application, the standardized interview was performed and blood for the IGRA was drawn. For the IGRA, a variation of the QuantiFERON-TB Gold assay (Cellestis Limited, Carnegie, Australia), the QuantiFERON-TB Gold In-Tube test (QFT), was used. This whole-blood assay uses overlapping peptides corresponding to ESAT-6, CFP-10, and a portion of tuberculosis antigen TB7.7 (Rv2654). Stimulation with the antigenic mixture occurs within the tube used to collect the blood. Tubes were incubated at 37 °C overnight before centrifugation, and IFN-γ release was measured by ELISA following the protocol of the manufacturer. All the assays performed met the manufacturer's quality-control standards. The test was considered positive when IFN-γ was ≥0.35 IU/mL after correction for the negative control. Observers were blinded to the results of the TST.

Due to the lack of a gold standard, sensitivity and specificity were not calculated. The Pearson chi-square test was used to compare frequencies of test results among different groups of participants. For ordered risks, the proportions of positive test results were compared using the chi-square test for trend. P<0.05 was considered statistically significant. Agreement and kappa values were calculated for the two tests with varying cut-offs for a positive TST (>5 mm, ≥10 mm, ≥15 mm). Odds ratios (OR) for discordant test results depending on different putative predictive variables were calculated using logistic regression. Model building was performed backwards using the chance criteria for variable selection [24]. The expected number of discordant results was calculated as the product of the proportion of discordant results in the unexposed strata and the number of observations in the exposed strata. The difference between observed and expected discordance was considered as the proportion of the discordant results explained by the analyzed risk factor (an illustrative sketch of this calculation is given below). The expected discordance was used to calculate a corrected agreement between TST and QFT. Data analysis was performed using SPSS, Version 14 (SPSS Inc., Chicago, Illinois). The study protocols of both studies combined for this paper were approved by the ethics committee of the Hamburg Medical Council. All persons gave their written informed consent prior to their inclusion in the studies.

Results

Table 1 describes the study population. 30.1% had an induration diameter in the TST of >5 mm and 18.5% had a diameter of 10 mm or more. The QFT was positive in 9.7%. For 5 participants born in Germany and not BCG-vaccinated, the induration diameter in the TST was ≥15 mm. All of these had a positive QFT (table 2). Kappa was influenced by BCG vaccination and birthplace. Kappa was lowest when a cut-off point of >5 mm for the TST was used and the participants were foreign-born and BCG-vaccinated (0.04). Kappa was highest in German-born, not BCG-vaccinated participants when ≥10 mm was used as the cut-off point for the TST. Agreement between QFT and TST was best with a TST diameter of at least 15 mm as the cut-off point (89.8%). But kappa was best with at least 10 mm as the cut-off point for the TST in the whole sample (0.37) and in the different subgroups (table 2). Therefore further analysis was carried out with 10 mm as the cut-off point for the TST.
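To make the agreement and discordance-attribution calculations described above concrete, the following is a minimal Python sketch. All counts are hypothetical placeholders chosen for illustration, not the study data, and the variable names are ours.

```python
# Hypothetical 2x2 counts for paired test results (not the study data):
# a = TST+/QFT+, b = TST+/QFT-, c = TST-/QFT+, d = TST-/QFT-
a, b, c, d = 65, 110, 30, 828
n = a + b + c + d

# Percentage agreement and Cohen's kappa (chance-corrected agreement)
observed_agreement = (a + d) / n
p_tst_pos, p_qft_pos = (a + b) / n, (a + c) / n
chance_agreement = p_tst_pos * p_qft_pos + (1 - p_tst_pos) * (1 - p_qft_pos)
kappa = (observed_agreement - chance_agreement) / (1 - chance_agreement)
print(f"agreement = {observed_agreement:.1%}, kappa = {kappa:.2f}")

# Expected discordance attributable to a risk factor (e.g. BCG vaccination):
# expected = discordance rate among the unexposed x number of exposed persons;
# the excess of observed over expected is the share attributed to the factor.
n_exposed = 440              # hypothetical number of BCG-vaccinated participants
observed_discordant = 90     # hypothetical TST+/QFT- results among the exposed
rate_unexposed = 0.03        # hypothetical TST+/QFT- rate among the unvaccinated
expected_discordant = rate_unexposed * n_exposed
explained = (observed_discordant - expected_discordant) / observed_discordant
print(f"expected discordant = {expected_discordant:.1f}, "
      f"proportion explained by exposure = {explained:.1%}")
```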
Foreign birth was a risk factor for LTBI in both the TST and the QFT.

Discussion

With our data we were able to analyze migration, BCG vaccination and age as risk factors for 159 discordant results in contacts of TB cases tested with the TST and QFT simultaneously. The proportion of discordant results we observed in our pooled analysis was somewhat lower (15% instead of 24%) than that described in a recent meta-analysis [13]. This might be explained by a higher proportion of risk factors for discordance in the studies that gave rise to this meta-analysis, i.e. the proportion of participants with BCG vaccination was 59% in the meta-analysis and 43% in our pooled population. As in this meta-analysis, the combination of TST+/QFT− results dominated the discordant results. Most of these discordant results can be explained by BCG vaccination or birth in a foreign country, which might be an indicator for NTM infection [25]. Being foreign-born and BCG-vaccinated explained 95.7% of the TST+/QFT− results that occurred in this subgroup. This might be explained by repeated BCG vaccination in juveniles or by an older age at which vaccination is performed. Both increase the probability of a positive TST [27]. In Germany, BCG vaccination was performed in newborns only, while in other countries (e.g. Poland, Czech Republic, Slovakia, Turkey) BCG vaccination is repeated [23, 26]. In a meta-analysis [27] it was estimated that, depending on the time elapsed between vaccination and testing (≤10 years or >10 years), 21% to 41% of those with a BCG vaccination after the first birthday had a positive TST (diameter ≥10 mm) explained by BCG. These estimates were based on comparisons of the TST in unvaccinated and vaccinated populations. Based on our comparison of the TST with the QFT and using the same approach, 85.5% of all TST+/QFT− results in vaccinated participants are most likely attributable to the BCG vaccination.

Table 6. Results of the QFT and the TST with a diameter of 10 mm or more in German-born persons younger than 40 years who were not BCG-vaccinated.

As a limitation of our study, we had no information on the age at vaccination, revaccination or exposure to NTM. Therefore we were not able to analyze the influence of these factors on the TST. But we believe it to be likely that the observed interaction between being foreign-born and BCG vaccination might be explained by these factors. Birthplace was a risk factor for a positive TST and QFT as well as for TST+/QFT− discordance in our data. Similar results were observed in US Navy recruits [14]. For the TST, no positive correlation with age is seen in our data. Therefore it is unlikely that these TST+/QFT− results are explained by old infections due to the higher exposure in the countries where the immigrants were born (Turkey, Eastern Europe, Africa). In the Navy study it was shown that NTM infections (M. avium) were 5 times more likely in recruits born outside the US [14]. The effect of BCG vaccination could not be analyzed in the Navy study because none of the US-born recruits were vaccinated and therefore vaccination and being born in a country with high TB incidence were strongly correlated. Our data allow for combining birthplace and BCG vaccination. This allows us to analyze the effects of migration and BCG vaccination independently and to analyze the combined effect of migration and BCG vaccination. BCG vaccination might be associated with TB incidence in the sense that countries with high incidence continue vaccination or revaccination [23].
Therefore, TST+/QFT− results might be explained by resolved or old TB infections that are detected by the TST and not by the QFT [28]. This hypothesis is not supported by our data. BCG vaccination is a strong predictor for TST+/QFT− results not only in foreign-born but also in German-born participants, and the assumed association between age and resolved or old LTBI is found with the QFT but not with the TST. Age is a strong predictor for a positive QFT that is not confirmed by the TST. In young people, TST−/QFT+ results are rare. In 856 Navy recruits (mean age 20 years), no TST−/QFT+ combination was found [14]. We observed 34 of these combinations, which mainly occurred in older participants. Because age is also a predictor for LTBI, it is likely that these discordant results are due to a greater waning of the T-cell-mediated immune response to the TST than to the QFT. Our observation is indirectly confirmed by a Japanese study in which the association between age and LTBI was shown for the IGRA but not for the TST [29]. So far, the immunologic interpretation of this observation is not clear. Either the QFT is positive because it is more sensitive to an old TB infection, or the skin loses its capability to react and therefore both former and recent infections do not result in a positive TST with the same likelihood as in younger persons. Comparison of TST sensitivity in patients with active tuberculosis showed that TST sensitivity declines with increasing age of the patient [30]. Thus it is likely that with increasing age the TST not only fails to react to former infections but is also less sensitive to recent infections. This might be due either to difficulties in applying the tuberculin correctly into the aging skin or to decreasing mobility of the T lymphocytes that have to migrate to the forearm where the test is applied. A waning of the specific interferon-gamma response after years of tuberculosis infection was described in a Japanese population based on estimates of the expected prevalence of LTBI [18]. Our data suggest that waning is greater with the TST than with the QFT. The hypothesis that the TST might be more sensitive to old infections while the IGRA mainly indicates recent infections is not supported by our data. So far, two prospective studies investigating the progression to active TB have been published [15, 31]. In the German study [15], progression to active TB was observed only in those with a positive QFT, while in the Gambian study active TB was observed during the follow-up in 2 contacts who were negative in the ELISPOT but positive in the TST at baseline [31]. Therefore, so far, the risk of progression to active TB cannot be ruled out in TST+/QFT− contacts. However, based on our data, it is likely that the proportion of those with a TST+/QFT− result who are at potential risk of progressing towards active TB is rather small. The proportion of discordant results that cannot be explained by BCG vaccination, being foreign-born or by age is rather small (4.4%), indicating little potential for false-negative or false-positive QFT results. These findings support the hypothesis that the QFT is the test of choice in populations with a high BCG vaccination rate or with an increased chance of exposure to non-tuberculous mycobacteria (NTM), or in people older than 40 years. In young persons not vaccinated and unlikely to be exposed to NTM, the TST and the QFT are of comparable quality and the agreement between the two tests is above 95%.
In older populations the TST is less sensitive than the QFT, and in the German population with a migration background and/or a BCG vaccination the TST is far less specific than the QFT. In Germany, HCW with regular contact with TB patients undergo surveillance, and in the general population contacts of TB patients are traced [32]. Using the IGRA instead of the TST would save HCW or other TB contacts from unnecessary follow-up [33]. According to our data, about 40% of migrants with BCG vaccination would benefit from replacing the TST with the IGRA. In conclusion, according to our data, it is not likely that the TST is more sensitive to old LTBI than the IGRA. Therefore we suggest the use of an IGRA as the first test after exposure to a patient with active TB and in periodic screening for LTBI among exposed HCW, especially for those of foreign birth.
Environmental Surveillance for Risk Assessment in the Context of a Phase 2 Clinical Trial of Type 2 Novel Oral Polio Vaccine in Panama

Environmental surveillance was recommended for risk mitigation in a novel oral polio vaccine-2 (nOPV2) clinical trial (M5-ABMG) to monitor excretion, potential circulation, and loss of attenuation of the two nOPV2 candidates. The nOPV2 candidates were developed to address the risk of poliovirus (PV) type 2 circulating vaccine-derived poliovirus (cVDPV) as part of the global eradication strategy. Between November 2018 and January 2020, an environmental surveillance study was conducted in parallel with the M5-ABMG clinical trial at five locations in Panama. The collection sites were located upstream from local treatment plant inlets, to capture the excreta from trial participants and their community. Laboratory analyses of 49 environmental samples were conducted using the two-phase separation method. Novel OPV2 strains were not detected in sewage samples collected during the study period. However, six samples were positive for Sabin-like type 3 PV, two samples were positive for Sabin-like type 1 PV, and non-polio enteroviruses (NPEVs) were detected in 27 samples. One of the nOPV2 candidates has been granted Emergency Use Listing by the World Health Organization, and initial use started in March 2021. This environmental surveillance study provided valuable risk mitigation information to support the Emergency Use Listing application.

Introduction
Vaccination campaigns with oral polio vaccines (OPVs) have been part of the global polio eradication effort of the World Health Organization (WHO) since 1959. Specifically, in 1974, the WHO formulated an Expanded Programme on Immunization to guide programs in developing countries and improve vaccination coverage. As a result of those programs, by 1990 approximately 80% of 1-year-old children had received three doses of OPV and the global morbidity and mortality associated with poliomyelitis had decreased considerably [1]. Panama is free of polio. The last case of wild poliovirus type 2 in Panama was reported in 1971, and a wild poliovirus of undetermined type was reported in 1972 [2]. However, momentum toward global poliovirus (PV) eradication has been impacted by increasing numbers of paralytic poliomyelitis cases due to circulating vaccine-derived polioviruses (cVDPV), with 1088 cVDPV cases reported in 2020 compared to 378 and 105 cases in 2019 and 2018, respectively [3]. This increase followed the global withdrawal of the Sabin 2 strain from the trivalent oral polio vaccine (OPV) in 2016, as the type 2 vaccine component is more frequently associated with VDPV emergence and circulation. The international spread of polio continues to be designated a Public Health Emergency of International Concern by the WHO, and cVDPV2 outbreaks in particular are a major concern [4]. The ongoing COVID-19 pandemic further complicates global PV eradication efforts due to indirect effects on vaccine supply, financial support, and immunization activities [5]. An additional strategy, environmental surveillance, has been applied by the Global Polio Eradication Initiative for decades to complement acute flaccid paralysis surveillance, with enhanced sensitivity in detecting PVs in the absence of acute flaccid paralysis cases [6].
Environmental surveillance can also assist in identifying residual wild-type PV transmission, including excretion from individuals not showing clinical signs of paralysis, and in the detection of VDPVs [7]. To mitigate the emergence of new VDPV2s, a Consortium for Novel OPV developed candidate OPV strains that are genetically more stable than the Sabin strains and, thus, less likely to revert to a neurovirulent phenotype [8]. The nOPV2 vaccine candidates were developed by a consortium of scientists from different institutions, such as the UK National Institute for Biological Standards and Control (NIBSC), the US Centers for Disease Control and Prevention (CDC), the University of California, San Francisco (UCSF), and Bio Farma. Clinical trials were held in Belgium and Panama, funded by the Bill & Melinda Gates Foundation [9]. The phase 1 study took place from May to August 2017 at a container park named "Poliopolis" at the University of Antwerp; it was the first-in-human, blinded, single-center phase I trial. Its objective was to evaluate the safety and immunogenicity of two nOPV2 candidates in healthy adult volunteers. The two nOPV2 candidates were live-attenuated serotype-2 polioviruses derived from a modified Sabin type-2 infectious cDNA clone propagated in Vero cells. They carried numerous modifications aimed at improving the stability of the vaccine candidate by inhibiting recombination and reducing replicative fitness, and also at reducing transmission. A total of 30 subjects voluntarily spent almost a month living in a 20-bed container village specially conditioned to control the environmental and health risks posed by the study of the novel vaccines; subjects were divided into two groups of fifteen participants, all of whom had received IPV as children. Subjects stayed in Poliopolis for up to 28 days, or until no poliovirus was detected in any of their stool samples. The laboratory evaluated the induction of protective antibodies and analyzed the shedding of the virus in stool. The study was a step forward in the development of new OPVs, the first in more than fifty years [10].

Given that the aforementioned trial included adults only, a phase 2 study (ID number: M5-ABMG) to evaluate the safety and immunogenicity of two novel OPV2 (nOPV2) candidates compared to a monovalent Sabin OPV was carried out in infants and children in two centers in Panama. Between 19 September 2018 and 30 September 2019, investigators carried out a single-center, multi-site, partly masked, randomized clinical trial in two groups of healthy children (1- to 4-year-old children and 18- to 22-week-old infants) to evaluate the safety of the nOPV2 vaccine candidates in infants and young children after administering one or two doses of each dosage level (high or low), and to compare the results with a control sample comprising children of similar age who had received one or two doses of monovalent Sabin OPV (mOPV2) in a prior study (informally called M2), which had taken place between 23 October 2015 and 29 April 2016 and was designed to serve as a historical control for the M5-ABMG study. Another objective of the M5-ABMG study was to evaluate the immunogenicity of a single dose of each of the two nOPV2 vaccine candidates in infants (18-22 weeks of age) previously vaccinated with 3 doses of bOPV and 1 dose of IPV, and to compare these results with the aforementioned cohort that served as a control sample. All subjects received one OPV2 vaccination, and subsets received two doses 28 days apart.
At days 0, 7, 28, and 56, type 2 poliovirus neutralizing antibodies were measured, and stool viral shedding was assessed up to 28 days post-vaccination [11]. The added value of the findings for the underlying reference populations is that participants who had previously been vaccinated against polio and were given a new dose of nOPV2 may receive a boost to their immunity to poliovirus type 2. Regarding public health, these studies have contributed to knowledge of the immunogenicity of the recently developed OPV2s, supporting further study, since the candidates have proven to be viable, effective, and safe and have the potential to be used in the event of a type 2 circulating vaccine-derived poliovirus outbreak. Environmental surveillance for risk assessment, as an additional safety measure in clinical studies based on population dynamics and in the context of the trial settings, was recommended by the WHO Containment Advisory Group (CAG). The CAG concluded that environmental monitoring for PV should be implemented to check the duration and amount of shedding (using non-polio enteroviruses [NPEVs] as a proxy indicator of environmental surveillance sensitivity) around the nOPV2 phase 2 clinical trial sites in countries performing these trials [12]. Environmental surveillance applied as risk mitigation for the M5-ABMG clinical trial, referred to as "ES-M5", was established to assess shedding of nOPV2 into community sewage and to understand the potential risks to the community following the vaccination of children and infants with the nOPV2 candidates during the M5-ABMG phase 2 clinical trial in Panama. This document focuses only on the environmental surveillance effort.

Study Design: Environmental Surveillance Sites and Participants
To monitor excretion and potential circulation of Sabin-like strains in the sewage during the M5-ABMG clinical trial, the ES-M5 risk assessment using environmental surveillance was conducted from November 2018 to January 2020 at five sites in Panama (the city or township of each site is given in parentheses): Las Mendozas (La Chorrera), Villa Real (La Chorrera), David (Chiriquí), Las Lomas (Chiriquí), and Nuevo Tocumen (Panama City). To protect study participants from possible identification, maps are not shown. The study participants were infants enrolled at 5 to 8 weeks of age, who received three doses of bOPV and one dose of IPV and were then given the study candidate vaccine at 18 to 22 weeks of age, and 1- to 4-year-old children who had already completed the country's routine polio vaccination scheme and then received two doses of the study candidate vaccine. The sewage system in much of Panama consists of small, local treatment plants rather than large central facilities. The goal was to capture wastewater effluent from as many trial participants as possible in one location before local effluent treatment. Environmental surveillance collection sites were chosen after establishing the areas where most of the M5-ABMG trial subjects receiving the nOPV2 vaccines resided. The environmental sites were chosen in relation to the residential areas of the participants receiving nOPV2 vaccines, with consideration given to the location of the households where the children were living and the location of the collection point before the local treatment plant. These are densely populated, low-income areas.
Field visits were performed to identify sewer networks that met the requirements for sampling, a modified approach with respect to the WHO Global Polio Laboratory Network (GPLN) guidelines for environmental surveillance [13]. Details of the collection sites and the number of samples from a total of 997 participants are shown in Table 1. (Table 1 notes: 1, population surrounding the study participants but after the sewage is released from the local treatment plant; 2, number of clinical trial participants within the community where the sewage was collected upstream from local treatment plants.)

Collection Method and Frequency
Local personnel were trained regarding safe and proper sample collection, handling, and shipping conditions. Two 1-L sewage samples for each site were collected using the grab sampling method at the inlet of the local treatment plants where the highest number of study participants were co-located, as per the guidelines for environmental surveillance for the detection of polio in the presence of a sewer network [13]. Samples were stored on ice packs inside a cooler for transport. Monthly grab sampling for one year, the minimum recommended [13], was determined to be a feasible approach to adequately support the clinical trial compared with other methods (e.g., composite sampling). All samples were sealed and transported at 2-8 °C from the field to the laboratory supporting the clinical trial (Cevaxin), located in Panama City, where samples were frozen at ≤−20 °C until shipment on dry ice to the polio laboratory at the U.S. Centers for Disease Control and Prevention (CDC-Atlanta) within two to three weeks of collection. Baseline samples were collected at the Panama and La Chorrera sites before the start of the M5-ABMG clinical trial in November 2018 (n = 3). Monthly collections during the M5-ABMG trial vaccination took place from February to August 2019 (n = 25). Collections continued for five months after the M5-ABMG trial ended, from September 2019 to January 2020, to detect viral shedding (n = 21). All samples were collected before 9:00 a.m. to control for this parameter; morning collection is also an accepted approach for capturing consolidated human waste with the grab sample method. Samples were not collected in December 2018, January 2019, and April 2019 due to holidays and staff vacations; however, samples were collected as required within the 30- to 45-day period following vaccination. For each date and site shown in Table 2, individual samples were processed, for a total of n = 49. Two 1-L samples were taken at each site on the dates shown in Table 2, mixed, and assayed; two samples were collected to ensure sufficient material for processing, and a back-up was frozen. (Table 2 legend: X = immunization for the M5-ABMG trial; ∆ = sample collection at two sites; a second symbol marks sample collection at one site; - = no collection.)

Laboratory Analyses
Laboratory analyses of the environmental samples were conducted at the CDC using the standard WHO environmental surveillance methods [14]. Prior to processing, the two 1-L sewage samples were thawed at room temperature (approximately 25 °C) and mixed for at least 15 min in a sterile beaker on a stir plate using a sterile stir bar. Once thoroughly mixed, 500 mL of the sample was processed using the two-phase separation method [14], and antibiotics were added to the concentrate at final concentrations of 100 IU/mL penicillin, 100 µg/mL streptomycin, and 50 µg/mL gentamicin.
The remaining sample was frozen at −20 °C as a back-up for any repeated processing. A portion of the resulting concentrate (500 µL) was inoculated into cell culture on the same day for enterovirus isolation. PVs and NPEVs were isolated according to the recommended WHO poliovirus isolation protocol using L20B cells (recombinant murine cells that express the human poliovirus receptor) and human rhabdomyosarcoma (RD) cells, followed by detection and intratypic differentiation (ITD) of polioviruses by real-time RT-PCR [15]. The ITD assay and algorithm identify nOPV2 as a PV2; any such sample would then be sequenced for confirmation.

Results Reporting
Results of the ES-M5 (code: 827) study were reported to the sponsor, the regulatory authorities of Panama, the Ministry of Health of Panama (which approved the study on 19 December 2018), and the local ethics committee, along with an explanation of the results.

PV and NPEV Isolation
In total, 49 wastewater samples were collected. Sabin-like type 1 poliovirus strains, defined by the WHO as "any poliovirus isolate from human or environmental sample with any nucleotide difference from Sabin less than the number that meets the definition of a VDPV" [16], were detected in two samples, and Sabin-like type 3 strains were detected in six samples (Figure 1). No Sabin-like type 2 strains were detected in the samples from any of the sites.
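As a worked illustration of the antibiotic supplementation step described in the Laboratory Analyses above, the short Python sketch below applies the standard dilution relation (C1 x V1 = C2 x V2) to estimate how much of each antibiotic stock would be added to a concentrate; the stock concentrations and concentrate volume are assumptions chosen for illustration, and only the target final concentrations come from the text.

# Illustrative dilution arithmetic for supplementing a sewage concentrate with
# antibiotics. Stock concentrations and concentrate volume are assumed values;
# only the target final concentrations are taken from the protocol text above.

stocks = {
    # name: (assumed stock concentration, target final concentration, unit)
    "penicillin":   (10_000, 100, "IU/mL"),
    "streptomycin": (10_000, 100, "ug/mL"),
    "gentamicin":   (10_000,  50, "ug/mL"),
}

concentrate_volume_ml = 10.0  # assumed volume of the two-phase concentrate

for name, (stock_conc, final_conc, unit) in stocks.items():
    # C_stock * V_stock ~= C_final * V_concentrate (added volume assumed small)
    v_stock_ml = final_conc * concentrate_volume_ml / stock_conc
    print(f"{name}: add {v_stock_ml * 1000:.0f} uL of {stock_conc} {unit} stock "
          f"to {concentrate_volume_ml:.0f} mL of concentrate for ~{final_conc} {unit}")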
Discussion
This is the first example of PV environmental surveillance in Central America and, specifically, the first in the context of a clinical trial. No Sabin-like type 2 strains were detected in sewage samples during the trial period, suggesting a relatively low risk of transmission of nOPV2-related viruses from study subjects to the community. The ES-M5 study was a risk assessment mitigation measure in the evaluation of the nOPV2 vaccine candidates administered as part of the M5-ABMG clinical trial, in response to the unique context created by the confluence of wild type 2 PV eradication certification, OPV2 withdrawal, and WHO PV containment guidelines [8]. Sabin-like PVs were detected in sewage samples, which was expected because bivalent OPV is used in routine immunization in Panama at 18 months and at four years of age, alongside the hexavalent vaccine containing IPV administered at two, four, and six months of age. Moreover, from 21 to 26 October 2019 there was a campaign to improve polio vaccination coverage, and in Panama Oeste (West) 2075 doses, including oral polio vaccine, IPV, and hexavalent vaccine, were administered to children aged 0 to 5 years and 10 years [17]. Sabin-related PVs are expected in environmental surveillance samples from populations vaccinated with OPV [18]. The presence of NPEVs in sewage waters was also expected, due to the presence of these viruses in human feces. NPEV detection has been used as an indicator of the appropriateness of an environmental surveillance sampling site, with an acceptably sensitive site yielding an enterovirus in ≥50% of samples over a 6-month period [19]. Thus, the sensitivity of ES-M5 was demonstrated by the detection of NPEVs and Sabin-like PVs. On the other hand, the lack of detection of nOPV2, in the context of a sensitive environmental surveillance system, suggests a low risk of community transmission up to 3.5 years after cessation of OPV2 use. NPEV detection indicates that the method worked correctly and that site selection was appropriate. A deeper analysis relating the absence of poliomyelitis to our environmental surveillance findings is beyond the scope of this paper and would require a larger sample size. The purpose of this paper was to demonstrate environmental surveillance acting as a "safety net" for the clinical trial. One substantial limitation of this study is that many of the study participants were observed to be in diapers; therefore, their feces would not reach public sewage. A survey of the number of trial participants using diapers was not conducted for privacy reasons; staff were not allowed to enter the homes of the children wearing diapers. Another limitation of the study was that the sample was small, and no biostatistician was involved in the process. An additional challenge was that the established sewer network and local treatment plants prevented an untreated sewage sample from being collected as a representative sample of the larger population in which the study participants lived. However, the localized sewer system allowed us to specifically target sewage from a large number of clinical trial participants. Published data from two recent studies in young children and infants (a phase 4 study with monovalent Sabin OPV2, and a phase 2 study with low and high doses of two nOPV2 candidates) indicate that the nOPV2 candidates produce lower stool shedding rates 28 days after vaccination than those observed with monovalent OPV2 [11].
The ES-M5 findings are consistent with these observations, since no nOPV2 strains were detected in any of the months covered by the ES-M5 study. The PVs identified in sewage corresponded only to the Sabin-like PV types 1 and 3 used at that time in Panama's childhood vaccination scheme. Given the considerable limitations of our study, it is not possible to rule out that some nOPV2 strains might have circulated in the population after nOPV2 administration. More extensive use of nOPV2 in future vaccination campaigns would provide the additional information needed to assess the value of this novel vaccine and its potential role in the global polio eradication strategy in the near future.

Conclusions
The risk mitigation information provided by the ES-M5 added a community safety component to the existing nOPV2 safety data. Additionally, sensitivity was demonstrated by the detection of NPEVs and Sabin-like polioviruses. As we get closer to the goal of global eradication and certification, the continuation of high-quality environmental surveillance at existing sampling sites and the expansion of environmental surveillance in the final reservoirs may increase the sensitivity of overall PV detection [20] and help ensure the eradication of polio worldwide. Adding to the eradication efforts, the nOPV2 vaccine has been granted a WHO Emergency Use Listing, and its use in mass campaigns to control cVDPV2 outbreaks began in March 2021 [21]. Meanwhile, further nOPV2 clinical trials are underway to measure safety and immunogenicity in different populations.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of Hospital del Niño "Dr. José Renán Esquivel" (code: 827; date of approval 17 January 2019). The samples were not shipped and processed until the appropriate approvals were obtained.

Informed Consent Statement: Not applicable.
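Referring back to the site-sensitivity criterion cited in the Discussion (an acceptably sensitive site yields an enterovirus in at least 50% of samples over a 6-month period), the Python sketch below shows how that check could be applied to per-site sample results; the sample records are hypothetical and do not reproduce the ES-M5 data.

# Illustrative check of the NPEV-based site-sensitivity criterion:
# a site is considered acceptably sensitive if >= 50% of its samples
# yield an enterovirus over a 6-month period. Records are hypothetical.

from collections import defaultdict

# (site, month, NPEV detected?)
samples = [
    ("Las Mendozas", "2019-02", True),  ("Las Mendozas", "2019-03", False),
    ("Las Mendozas", "2019-05", True),  ("Las Mendozas", "2019-06", True),
    ("Las Mendozas", "2019-07", False), ("Las Mendozas", "2019-08", True),
    ("David",        "2019-02", False), ("David",        "2019-03", False),
    ("David",        "2019-05", True),  ("David",        "2019-06", False),
]

by_site = defaultdict(list)
for site, month, npev_positive in samples:
    by_site[site].append(npev_positive)

for site, results in by_site.items():
    rate = sum(results) / len(results)
    verdict = "acceptable" if rate >= 0.5 else "below threshold"
    print(f"{site}: {sum(results)}/{len(results)} NPEV-positive ({rate:.0%}) -> {verdict}")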
2021-07-26T05:33:51.636Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "525fb0380414d4e32a7a6971754386436e8675a6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4915/13/7/1355/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "525fb0380414d4e32a7a6971754386436e8675a6", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247185952
pes2o/s2orc
v3-fos-license
Advocating Urban Transition: A Qualitative Review of Institutional and Grassroots Initiatives in Shaping Climate-Aware Cities

Climate change and its challenges have long been incorporated into the policy-making process. Advocacy actions push to strengthen socio-ecological resilience through engagement with stakeholders, feedback collection, and the testing of solutions. Several initiatives have emerged to boost cities' actions toward climate change mitigation and adaptation. Among them are institutional coordinated actions, such as transnational municipal networks (TMNs), and non-institutional grassroots movements for climate action. The study focuses on four TMNs and two grassroots movements, which have an impact on the European and/or worldwide contexts. They are investigated qualitatively, reflecting on the roles they play and the contributions they make to addressing climate change, both alone and together. The research questions focus on the instruments/elements/factors that they put in place to support the transition, their key messages, and how these are conveyed to their key targets. The initiatives have been investigated in both the grey and scientific literature. The main results show that grassroots movements for climate action and TMNs have the potential to better support cities in their climate transition. However, local governments are urged to take advantage of both initiatives' ability to develop networks of support, innovation, and a sense of belonging. In conclusion, the research argues that the two types of initiative should be effectively connected and integrated, playing complementary roles with respect to planning actions.

Introduction
Climate change is widely recognized as one of the major issues of the current century. According to the latest IPCC report [1], the entire human community is far from being on track to mitigate and adapt to climate change. Likewise, the anthropogenic influence in warming the climate at an unprecedented rate is recognized. Achieving the objectives of climate change mitigation and adaptation requires an evolution of the model of city government, moving from a "vertical" approach to an integrated governance approach based on continuous collaboration between the different territorial actors (the economic world, academia, civil society, and public administration). The construction of collaborations and alliances with a larger constellation of actors with a stake in societal issues is part of the debate on socio-ecological resilience. The resilience literature states that the adaptation and flexibility of a system (in this case, cities) increase as more actors participate in extending its capacity to react to challenges [2]. This involves some degree of empowerment of active agents and some flexibility of the governing system to accommodate change. With regard to climate change and its challenges, the resilience approach has long been incorporated into the policy-making process, with dedicated processes of stakeholder engagement, feedback collection, piloting, and testing of solutions, technologies, and services. It is widely recognized that ecological transition and transformation can only be achieved through policy change and innovation, rather than the mere implementation of technology: lobbying, sensitization, and support of political action all move in this direction.
Moreover, action that triggers change is much more effective when it is embedded in a coherent arrangement of policy intentions spanning a large number of different countries. This position is what has motivated the "green" movement throughout the decades, but it is also what has prompted more structured organizations to come up with various networking initiatives in recent years. Through the European Climate Pact [3] promoted by the European Commission, the enlargement of the agents involved in this change assumes a fundamental role in the transition towards more sustainable and just territories. Active collaboration is therefore affirmed as a new way of implementing democracy at both the global and local levels. In recent years, several initiatives have been created to boost countries' and cities' actions toward climate change mitigation and adaptation. A distinction can be made between institutional, network-based coordinated actions and non-institutional, insurgent ones. The first group includes, for instance, transnational policies, city networks, also called transnational municipal networks (TMNs), and climate initiatives such as ICLEI, C40, the Covenant of Mayors, and others. The second comprises bottom-up experiences such as Fridays for Future, Extinction Rebellion, and others. To limit the field of investigation, only TMNs were considered to represent institutional experiences. Both institutional and non-institutional initiatives have been extensively investigated in the literature. However, to the best of our knowledge, there is still a lack of contributions that bring together both types of experience with the aim of comparing and understanding how they are intertwined and whether they both contribute to combating climate change operationally. This contribution is intended to provide this type of analysis, qualitatively, by reflecting on the role and contribution to climate action that these experiences make both individually and together. The emergence of new institutionalized and non-institutionalized initiatives, such as the Green City Accord, calls for a profound reflection on the actual role these experiences play in addressing the issue. This contribution is structured in three sections. The first gives a brief but up-to-date literature review on both types of initiatives and clarifies the main research questions underlying this study. The second describes the materials and methods used for the analysis, while the third presents the main findings. Starting from a few hypotheses and research questions originating from gaps identified in the literature review, the main characteristics of the analyzed initiatives are summarised in the Results section and then discussed in the Discussion section, with specific reference to the research questions. At the end, conclusions are provided, detailing limitations and potential future research.

Literature Review and Research Questions
According to several authors [4-7], TMNs are considered crucial in creating the framework for political action against climate change. TMNs give cities the opportunity to group themselves directly into transnational networks active on a specific theme or objective. The recognized importance of these networks is directly linked to the key role cities play in taking action to mitigate and adapt to climate change [4,8-12].
Cities seem to be leading the transition more than other levels of administration, establishing themselves as effective drivers of sustainable development not only at the European level but also worldwide [13-18]. There is, therefore, a perception that TMNs are crucial not only for networking between cities but also for providing them with additional support, such as access to financing, action, and knowledge tools. As for the positive aspects of networking itself, it is recognized that networks can boost horizontal learning, which seems to improve urban adaptation measures [8]. However, some authors note that there is little proven evidence regarding the full effects of TMNs and city networks [4,7,19,20]. This topic has been partially covered by recent research that aimed to analyze the correlation between network membership and climate planning processes [4], as well as to demonstrate the link between TMN membership and mitigation planning [21]. According to Heikkinen et al. [22], TMNs are effective in their advocacy, prompting cities to undertake adaptation and mitigation measures through different suggested activities. As described by Frantzeskaki and colleagues with reference to ICLEI [22], TMNs embody three main roles: (i) knowledge roles (translator, educator, and integrator); (ii) relational roles (connector and mediator); and (iii) game-changing roles. Additionally, some of them prompt cities to take specific steps towards developing planning processes for mitigation and adaptation, containing targets, actions, and implementation strategies. This is the case with the Covenant of Mayors (or Global Covenant of Mayors), which requires its members to provide a Sustainable Energy (and Climate) Action Plan within two years of joining the network. Furthermore, Heikkinen and colleagues have also demonstrated that cities that are part of more than one network are more advanced, at least in adaptation measures. However, the same study also points out that there are disparities between cities in high-capacity, wealthier regions and those in lagging ones. In general, as other studies have also stated, cities in less wealthy countries tend to participate less in TMNs, but even when they do participate, they appear less capable of carrying out planning measures. Some authors argue that they probably lack the resources needed to join networks and follow their rules [5,6,23]. As shown again by Heikkinen and colleagues [4], all networks contain cities that are not contributing plans and actions to the transition. This means that there is considerable room for improvement, especially in the way networks support cities and monitor their compliance with membership criteria. As stated by Aylett [24], there is great variation in the levels of participation of different municipalities in these types of initiatives. Other limitations have been highlighted by Romero and colleagues [25], with specific reference to the Covenant of Mayors initiative. In their research, the authors highlight the disparities in participation in the initiative at the European level. According to their data, the initiative is not representative of European cities, as mainly Italian and Spanish cities participate in it (around 80% of participation is accounted for by cities from these countries). Given the voluntary nature of these initiatives, participation seems to be higher among cities lacking national support in terms of planning and policies [25-29].
The advocacy action of organized city networks, their lobbying efforts, and their political influence serve to direct policy makers, to produce operational guidelines, and to enable fragile cities to access shared knowledge on the issue. The action of another type of network, the one commonly referred to as climate activists, is different. Although the consequences of climate change are well known, some specific groups are still largely absent from the "front lines of climate change policy, advocacy, and research" [30]. This is the case for young people and children, whose well-being is heavily affected by the current climate crisis. Greta Thunberg and millions of other young activists identified this gap and set out to bring attention to it through protests around the world. These protests have occurred in an orderly fashion and have helped youth groups to self-organise in an ongoing effort to highlight the climate emergency. Climate activist actions peaked in 2018-2019, driven by some powerful demonstrations (see below the descriptions of Extinction Rebellion and of the Friday for Future movement initiated by Thunberg). That notwithstanding, it is necessary to point out that climate activism is a very well-established practice in Western democracies. What appears to be an empirical evolution of more recent movements is the capacity of such activists to embrace climate change action by placing it within a social justice and inclusion framework [31,32]. Climate justice seems to be the overarching goal of grassroots climate action movement protesters, whose action is directed at striking emotional chords and mobilising those parts of society that are still sceptical of (or indifferent to) climate challenges [33], in order to push politicians to act. Grassroots climate action movements base their advocacy initiatives on forms of civil protest (strikes in the case of Friday for Future, disobedience in the case of Extinction Rebellion, to name but two), renewing the legacy of the anti-war marches and movements of the 1960s. Contemporary movements also add another layer to the climate justice debate, that of temporality, concerning both the past (where the current disruptions originated) and the future (when life on the planet will be indelibly compromised). Activists' messages indicate that they represent a generation that will have to deal with environmental problems created by previous generations. The urgency of action demands immediate concrete intervention, bringing the focus of their goals to the present, to stop the disruption that already appears to be underway. Climate activism movements have a clear grassroots base, with an equally clear message and target: the challenge in achieving climate transformation is much more about politics than technology [34]. The argument is that without political commitment, a low-carbon revolution will remain a plan without implementation. With this in mind, pressure from outside the political structure is needed. Activists believe that their action is more effective in influencing the direction of environmental policies than traditional decision-making at the international level. According to the most recent literature on the topic, it seems that both TMNs and grassroots movements for climate action [35] are conceived as pivotal initiatives to boost the development of climate actions, in the form of action plans for the TMNs and outreach and lobbying for the climate movements.
However, these initiatives are far from being fully inclusive and representative of European or global action. Their direct impact on climate change is difficult to measure, although their indirect and awareness-raising effectiveness is clear. Moreover, the value of these initiatives when taken together is unclear, and it is not yet evident whether their tools can be combined for a greater contribution to the goal they share. Furthermore, the literature shows a lack of understanding of what precise boundaries and support elements they provide to cities in their transition journey. Finally, to the best of our knowledge, the two have never been related, even though they seem to share several principles. This study aims to contribute, in part, to this discussion by providing a reflection on both experiences, highlighting some commonalities and key features. Two hypotheses are at the core of the contribution:

Hypothesis (H1). Grassroots movements for climate action and TMNs have the potential to better support cities in their climate transition.

Hypothesis (H2). Local governments should take advantage of both initiatives as they have the unique ability to develop networks of support, innovation, and a sense of belonging.

In short, the two levels of initiative should be effectively connected and integrated, playing complementary roles with respect to the actions put in place through planning. In order to verify these hypotheses, three key questions are addressed. The first deals with understanding which elements (instruments, factors, risks) the two types of initiative deploy in their own experiences. This point is important for identifying commonalities and differences in approach.

Research Question (RQ1). What are the instruments/elements/factors that the different institutional and grassroots movements for climate action put in place to support the transition?

The second question deals more specifically with the key messages that the two initiatives deliver and how these are conveyed to their key targets. This question is important for understanding commonalities and differences in the key messages they convey.

Research Question (RQ2). What is the core difference in the message that the two experiences provide to the world for making the transition effective?

The third question deals with the specific targets to which the key messages are conveyed. This question makes it possible to understand whether the two messages can be conveyed to the key targets together.

Research Question (RQ3). How are the two experiences able to reach their targets, and how effective are they in doing so?

The next section outlines the methods used to perform this analysis.

Materials, Methods, and Manuscript Structure
In line with the hypotheses and the RQs highlighted in the previous section, a qualitative analysis was performed. It was based on documents available on the initiatives' websites and on both the grey and scientific literature retrieved from the most important scientific search engines, namely Google Scholar, Scopus, Web of Science, DOAJ, and JSTOR. On these search engines, several keywords were investigated, among them: "transnational municipal networks", "grassroots climate initiatives", "climate rebellion movements", "climate municipal networks", "city networks" AND "energy" OR "climate", "impacts TMNs", "impacts climate initiatives". Finally, publications on the specific initiatives chosen for this study were examined in depth.
The results were filtered according to both title and abstract to create four groups of publications for further investigation: (1) TMN-related publications analyzing case studies; (2) TMN-related publications addressing impacts and evaluation; (3) case studies of grassroots climate initiatives; and (4) research on the evaluation and impacts of grassroots climate initiatives. Studies not pertaining to one of these macro-groups were discarded. Grey literature is understood here as publications outside the academic and scientific domains, such as reports and documents provided by government departments and agencies, civil society or non-governmental organizations, private companies, and consultants. The analysis of the documents included an initial phase of reading and scanning abstracts and conclusions to quickly understand the main points and features of the initiatives, as well as to identify the literature gap. Then, a deeper reading was carried out to refine our research questions and to cluster information according to them. The last phase of the analysis aimed to complete the analytical tables, reported synthetically in the Results section, and to derive the discussion points and conclusions reported in the related parts of this contribution. For each initiative, a description is provided, together with the information that supports answering the RQs. The next section reports these results. It has been divided into two sub-sections, the first addressing TMNs and the second addressing grassroots movements for climate action. For each of these sub-sections, a summary table, a description, and key information are provided. The answers to the RQs and the hypotheses are provided in the Discussion section. The selection of the different initiatives was made through several filters: (1) Importance and impact of the initiatives. The most important and relevant initiatives at the European and worldwide levels were selected. Initiatives limited to very specific geographical areas or without evident impacts were excluded. In addition, the choice fell on the networks and movements that in the last four years have acted most insistently on the issues of climate change, those most taken up and discussed in the media and public debate. (2) Timing. Initiatives passing the first filter but too recent to show results were excluded. As an example, the Green City Accord was analyzed but included only in the Discussion section, as it was interesting but too recent to already have results. Although it is not known whether the 100 Resilient Cities initiative will continue in the future, it was taken as one of the cases because of its impact and worldwide importance. (3) Availability of documents and publications. Among the initiatives passing the first two filters, those with a significant lack of information were excluded, for example those with no documents or information available online or in scientific publications. From these filters, four TMNs were identified for the purposes of this research, namely: C40, ICLEI, the Global and European Covenant of Mayors, and the 100 Resilient Cities. Two grassroots initiatives were considered: Friday for Future and Extinction Rebellion.

Top-Down Climate Initiatives
Several TMNs exist at the worldwide level. This manuscript focuses on the most prominent ones, owing to the availability of data both in the literature and online (see Table 1). The selection comprised four TMNs: C40, ICLEI, the Global and European Covenant of Mayors, and the 100 Resilient Cities.
Appendix A includes a table (Table A1) addressing the geographical coverage of these initiatives. All these initiatives can be considered TMNs, as their objectives are to: (i) create networks and link together cities across Europe or the world; (ii) support cities in working toward climate change-related goals; (iii) provide some type of direction, instruments, and/or counseling activities; and (iv) address cities directly. The most famous initiative is the Covenant of Mayors for Climate and Energy (CoM). Launched in 2008 by the European Commission, it has been one of the reference networks for European cities, with "the objective of engaging and supporting mayors to commit in reaching the EU climate and energy targets" [36]. The reference to the main target, mayors, is evident not only because it is part of the name of the initiative, but also because of the attention given to the act of "signing" on to participation. A high level of political commitment marks membership in this initiative. The CoM started in Europe but was soon extended worldwide, becoming the Global Covenant of Mayors for Climate and Energy in 2016, when the Compact of Mayors joined it [37]. This is a distinctive trait of the CoM: its ability to absorb other stand-alone initiatives, such as the Compact of Mayors and, in 2015, Mayors Adapt [36]. Its key message is summarised by the first part of the manifesto: "We, Mayors from all over Europe, hereby step up our climate ambitions and commit to delivering action at the pace that science dictates, in a joint effort to keep global temperature rise below 1.5 °C - the highest ambition of the Paris Agreement". The vision set out in the manifesto is to become lighthouses of action toward decarbonization. However, CoM objectives have changed with global circumstances: from a 2020 horizon, they shifted to 2030 and, eventually, to 2050. Although the CoM mainly tackles mitigation actions (CO2 reduction and renewable energies), it has expanded towards adaptation measures, such as resilience and the general adverse consequences of climate change, thanks to the inclusion of the Mayors Adapt initiative. Any kind of city can join the CoM. To support cities in maintaining their commitment, the initiative mainly asks for two things: the drafting of a Sustainable Energy Action Plan (SEAP), which assesses the current situation and sets the starting point, and the monitoring of results on a four-year basis through emission inventories, namely the Baseline Emission Inventory (BEI) and Monitoring Emission Inventories (MEIs) [36,38]. However, as noted by Rivas et al. [38], no final monitoring reports are required, which creates several shortcomings in the effectiveness of the initiative. The support provided to cities by the CoM is mainly given by Coordinators: local authorities (e.g., regions and others) that decide to be part of the initiative by giving support to a selection of signatory cities. In general, they can help cities financially or, mainly, in completing the assessment and the monitoring reports. Finally, the initiative also includes Supporters: local associations, networks, and agencies that promote and mobilize several types of additional contributions (financial, organizational, and knowledge-based) for their member cities. According to the official data on the website, there are more than 10,500 signatories, 230 Coordinators, and 233 Supporters across 53 countries.
Of these, however, only 6225 SEAPs and only 2536 monitoring reports have been submitted. As stated by Rivas et al. [38], this is a sign that several barriers are still present. The C40 Cities Climate Leadership Group initiative, born in 2005, is a network of mayors "of nearly 100 world-leading cities collaborating to deliver the urgent action needed right now to confront the climate crisis" [38]. The main difference from the CoM is that the C40 applies an entry selection: only cities that are front-runners in climate change action can enter the network. The C40 Leadership Standards for 2021-2024, required to enter the initiative, are: (i) having an updated climate action plan in line with the Paris Agreement (with the need to demonstrate that the plan is periodically revised); (ii) being on track in 2024 (intermediate check) to reach 2030 targets and effectively implement the action plan; (iii) boldly addressing the climate crisis; (iv) innovating and addressing emissions beyond the direct actions of the city government; and (v) commitment of the mayor and the city to meeting the Paris Agreement [39]. To support member cities, the C40 takes several actions, such as defining updated agendas, establishing task forces for special themes or events (for example, a task force for COVID-19 recovery), advising cities on the drafting of their Action Plans, providing advocacy and networking for access to finance, and building relationships with key financial institutions. Finally, it supports cities in building new initiatives and in networking and learning from others through workshops and summits. It also has a Knowledge Hub providing practical knowledge and solutions to cities on adaptation, air quality, buildings, diplomacy, energy, planning methods, transport, food, waste, finance, and other topics. According to the 2020 annual report [40], 88 cities are part of the full initiative, having committed to climate action in line with the Paris Agreement, and more than 75 cities and regions have committed to one or more of the initiative's declarations. The initiative is worldwide, with a stronger global dimension than the CoM. Another high-level TMN is ICLEI - Local Governments for Sustainability. In contrast with the previous ones, ICLEI is mainly built to create a network of cities around the broader topic of sustainability. Its first aim is to "build connections across levels of government, sectors and stakeholder groups, sparking city-to-city, city-to-region, local-to-global and local-to-national connections. By linking subnational, national and global actors, policies, commitments and initiatives, ICLEI strengthens action at all levels, in support of sustainable urban development" [41]. As stated in their reference reports [18,42], cities are involved in five interconnected thematic pathways: (1) low emission development, (2) nature-based development, (3) circular development, (4) resilient development, and (5) equitable and people-centered development. These pathways are expressed as manifesto lines of action, with key points, stakeholders, and partnerships. Like the others, this network targets cities, with the possibility of opening up to other levels, such as regions. It seems, in fact, that rather than a strong political commitment based on mayors' signatures, ICLEI asks for a collective engagement of the government together with the community and across levels. In addition, ICLEI provides considerable support in policy creation, not only in network building.
Membership is fee-based, and the network is global, including more than 2500 cities, towns, and regions. According to Frantzeskaki et al. [22], ICLEI embodies three core roles as an intermediary: a knowledge role, a game-changing role, and a relational role. From the knowledge perspective, the authors show how ICLEI is a key intermediary in translating scientific knowledge into policy practices and solutions, easing the link between these elements and groups of actors. It also provides policy translation across levels, especially between the local and global levels. Within the knowledge role, the authors [22] cite ICLEI's contribution as an educator (of city officers) and as an integrator of multiple forms of knowledge. The game-changing role refers mainly to enabling the co-creation of solutions, policies, and strategies, but also to allowing frontier experimentation and learning-by-doing practices. Finally, the third role is the relational one, understood as creating networks among cities and staff as well as mediating across policy levels. The last TMN analyzed is the 100 Resilient Cities (100RC). In contrast with the other TMNs, the 100 Resilient Cities is the only one funded and led, since its creation in 2013, by a private foundation, The Rockefeller Foundation. Even though its continuation is not assured, this experience is important in the history of TMNs, as it was the first to directly target resilience, to both shocks and stresses, and to provide support to cities in defining innovative strategies. As with the C40 and the CoM initiatives, the 100RC also required an application, and cities needed to go through an initial evaluation process to enter the initiative. As reported by Galderisi et al. [43], "the 100RC Initiative was designed to financially and technically support cities all over the world in enhancing their resilience in the face of multiple and complex challenges". As also reported by the authors, the 100RC partnered with ARUP to provide several instruments to signatory cities, such as a City Resilience Framework and a City Resilience Index [44]. This initiative provided cities with a high degree of support, especially in the initial phases. Indeed, once a city was selected, a Chief Resilience Officer was appointed to support it in building its Resilience Strategy. As with the CoM, this was generally based on a baseline assessment of resilience followed by the definition of the Resilience Strategy itself. As with the C40 Climate Action Plan, the 100RC Resilience Strategy was also intended as a living document, to be updated over time [45,46].

Grassroots Movements for Climate Action
It is increasingly common to observe citizens, especially young citizens, organizing to bring institutional attention to a specific urban climate goal that is not yet included in policy [48,49]. In a phase of reflection and uncertainty about the future, between living with and overcoming the pandemic, and of growing awareness of the challenges of climate change [50], these emerging forces of urban activism should not be ignored.
Despite the great analytical effort to read these movements from the bottom up [6,51-53], what still seems to be neglected is the identification of their operational capacity: understanding why the practices materialize, whether in an integrated way, in conflict, or by filling gaps and thus acting in subsidiarity towards the transition; what these "critical agents of change towards resilient future" [54] are saying, who leads them, and what relationship they establish with the city; and, finally, the institutional role to be entrusted to these actors, in terms of their power to convey their message to their targets. For the purpose of this analysis, this study takes into consideration a specific field of bottom-up initiatives against climate change: grassroots movements for climate action (see Table 2). This choice is motivated primarily by the exponential growth of the climate movement that arose worldwide in 2018 and 2019, becoming one of the most widespread environmental social movements in history [51]. On 20 August 2018, the then-teenage Swedish student Greta Thunberg refuses to go to school and begins protesting in front of the Swedish Parliament building. With her handwritten "school climate strike" sign, she accuses the government of failing to meet the Paris Agreement goals and calls for radical action to prevent global warming [33]. The initiative repeats every Friday, eventually gaining numerous followers through social media coverage, and soon goes viral, spawning the Friday for Future (F4F) movement. On 15 March 2019, the first global climate strike gathers more than 1.5 million young people, in over 2000 locations, expressing their resistance to the failure to act on the climate emergency [55,56]. In addition to strikes, the movement has evolved to create different formats to initiate dialogue with schools, universities, politicians, city councils, media, and businesses. The movement now has a participant base of approximately 7.6 million young people around the world, as indicated in the map updated in real time on their website, which also includes other green movements [57]. Their demands are varied and can be summarised as the urgent call to accelerate the efforts of those in power to reduce greenhouse gas emissions and respect the Paris Agreement. Their message proclaims a sense of urgency: society is called, in the present, to immediate radical action to halt the onset of the heavy impacts of climate change. The targets of F4F are mainly politicians and political and global leaders throughout the world. Their message is mostly conveyed during international meetings (e.g., World Summits, COPs), where powerful speeches are delivered by the main leaders of the movements, often using radical expressions and wording, to clearly state the gravity of the climate situation, but also to call both leaders and peers to action. The catastrophic tone and rhetoric of climate movements are also recognized as a way of recruiting participants through self-identification with a common cause. F4F's experience triggered a snowball effect of other small and large activist networks, which over time organized to demonstrate on climate challenges and to push local governments to act.
Earth Uprising is one of them: a network of about 50 active members around the world, protesting through strikes and motivational speeches while also offering educational programs on environmental issues aimed at young people. On this last point, they collaborate with educational institutions and rely on sponsors to support microgrant funds for educational projects and protesters' expenses. Shortly thereafter, other parts of the population began to show sympathy and solidarity with these youth movements: Parents for Future [58], Teachers for Future [59], Artists for Future [60], Farmers for Future [61], and Scientists for Future [62] all pledged to build a broad social alliance for climate policy and bold politics. Besides a large digital network of exchange and knowledge provision, these movements reaffirm the need to strengthen the physical-relational dimension of governance experiences, which in the current contingency risks remaining in the background compared to the digital dimension. A clear example of this claim is the Extinction Rebellion initiative.

Extinction Rebellion (XR) is an international movement that uses nonviolent civil disobedience to push institutional actors and the media to act and communicate urgently on the current climate and ecological crisis [34]. The founders of Extinction Rebellion are 15 activists who came up with the idea in April 2018 during the protests of the RisingUp! movement. The movement, born in October 2018 in London, defines itself as "apolitical", even though the government is the target of its demands, and "decentralized", having hundreds of volunteers and 650 autonomous local groups in 45 countries [63]. The group is composed of various age groups, professionals, and representatives of different communities, including Christian groups. Extinction Rebellion presents three main demands: that climate change be declared a national emergency, that the loss of biodiversity be stopped by reducing greenhouse gas emissions to zero by 2025, and that a citizens' assembly be created to monitor progress. At the heart of the project is the desire to make environmental protests more aggressive and concrete, to attract more attention and better communicate the urgency of the problem. City and strategic infrastructure shutdowns, hunger strikes, and mass arrests are some of the tactics that XR has used to draw public attention to the urgency of climate issues [63]. For example, the November 2018 week of protests (https://www.theguardian.com/environment/2018/nov/17/thousands-gather-to-block-london-bridges-in-climate-rebellion, accessed on 10 December 2021) was a great communication success despite, or thanks to, the 85 people arrested, and was instrumental in the UK Parliament declaring an environmental and climate emergency. In response to these solicitations, several governments have declared a state of climate emergency (first Great Britain, then Ireland, with the number rising to 12 states worldwide, including 4 in Europe). The declaration of climate emergency locally recognizes the need for action against climate change. In addition, it introduces the tool of climate impact assessment to evaluate the integration of measures to combat climate change into local policies. This declaration became the worldwide tangible output of the protests, leading to different outcomes and degrees of implementation across countries.
Discussion

Starting from the hypotheses and research questions expressed above, this part of the manuscript discusses the potential of grassroots movements for climate action and TMNs, taken together, in supporting cities in their climate transition. Possible limitations and risks are also highlighted at the end of this section.

Research Question (RQ1). What are the instruments/elements/factors that the different institutional and grassroots movements for climate action put in place to support the transition?

TMNs rely mainly on two kinds of instruments: (i) on the one hand, "hard tools" such as key performance indicators, monitoring frameworks, and action plans, which aim to provide support in the drafting of plans and strategies; (ii) on the other hand, "soft tools" for advocacy and support, such as working teams, academic partners, supporting officers, and networks. The latter exist mainly at two levels: (i) the level of the network itself, with the role of setting general targets, agendas, and manifestos, but also of forming working groups and task forces, with the aim of supporting all the cities in the network in a global transition; and (ii) the level of cities, with the function of supporting individual administrations in the construction and drafting of plans and monitoring systems. However, the monitoring of individual cities and policy actions is, in most cases, self-organized by the cities themselves. This leaves ample room for possible blockages and interruptions of administrative action, and for the threat of abandonment of planned actions.

Grassroots movements, on the other hand, focus their action precisely on verifying whether governments (at various levels) actually follow through on the declarations and strategic objectives set by larger agendas. They base their action mainly on communication and advocacy tools. Even in this case, the phase of setting up the group (not of cities, but of activists) is of great importance: starting from strong leadership and personality, commitment, and shared needs, an activist base is built. Group formation and recruiting take place through the construction of a shared identity based not so much on measurable goals (as in the case of the key indicators of TMNs) but on shared ideals and values. The methods used to voice their demands are largely informal: litigation, demonstrations, strikes, and occupations of public land make explicit the importance of making the problem manifest and communicable, via radical forms of communication, using body language and performance. The physical and corporeal dimension of the message has a digital counterpart, which is just as radical and widespread on all possible channels. Opposition, conflict, and dissent are an integral part of the methodology. Especially in XR, resistance to "power" is also expressed in the form of mass arrests and blockades of large infrastructures. An interesting approach is to interpret these experiences from the perspective of bottom-up small wins, as recently defined by Bours et al. [64]. While TMNs have the task of accompanying cities and their staff in setting goals and transition strategies, bottom-up movements act when these goals are hidden under agendas with no operational basis, or in the absence of real political commitment to achieve the set goals.
In this respect, the two have the potential to work together in setting more effective goals and actions, also taking into consideration new instruments such as media, influencers, and, more generally, the potential of informal communication. From an impact perspective, it is possible to say that both TMNs and grassroots initiatives have influenced and produced changes in their field, especially in cities' documents and agendas. TMNs, in particular, have supported cities in the production of scenarios and planning documents, including actions and key steps, while grassroots initiatives have contributed to putting climate on the agenda, as happened, for example, in Bologna with the declaration of a climate emergency prompted by the Extinction Rebellion movement [65].

Research Question (RQ2). What is the core difference in the message that the two experiences provide to the world for making the transition effective?

Contemporary climate movements differ from those of the past mainly because they put climate issues and their resolution on a social justice level. They are less environmental movements per se, and more anti-inequality movements that see climate justice as the concrete ground on which to fight. In 2019, Fridays for Future became the center of media attention, but the added value was that it was able to adapt itself to even radically different territories, reaching out to vulnerable areas and to people who traditionally are not given a voice. The presence of a widespread and extended network allowed people living in remote territories to join and act as spokespersons of the message towards the local political class. The same approach can be attributed to TMNs, whose vision is wide-ranging, but then generally translated into transitional pathways that can be thematic. The message at both levels is political: TMNs intercept the political dimension from within, including mayors themselves among the signatories, as members of the network. It follows that the commitment of the government is a founding aspect and the basis of any programmed action. It must be underlined, however, that the prestige of political adhesion to some networks could conceal objectives that go beyond the achievement of climate agendas, especially in the case of privately led or for-profit networks. For bottom-up initiatives, too, the political dimension has to do with commitment. In this case, however, commitment is measured by the actions put on the ground locally, on which, according to the activists, assessments must be made as to whether the goals can be achieved. Otherwise, in the absence of political action, the risk is always that of seeing only speeches and agendas, without real progress or, to quote Thunberg, "useless blah, blah, blah". One of the axes on which TMNs and grassroots movements for climate action converge is education and training, as necessary and pervasive responses for the construction of co-created knowledge, not only for dissemination purposes. Specific actions are undertaken on this front, both by the movements (collaborations with schools of various levels) and by the networks (training and the construction of shared scientific bases between cities).

Research Question (RQ3). How are the two experiences able to reach their targets? How effective are they in this?

For TMNs, the targets are not only reached but embedded in the network itself. The fame of these initiatives is also growing in light of how funding is obtained, especially at the European level.
Being part of these networks makes it possible to get to know other realities, and mutual knowledge can lead to collaboration in the realization of certain objectives and to participation in European calls for proposals. TMNs expect member cities to proceed with thematic planning once they have joined the network. On this point, it must be said that all environmental planning falls under voluntary planning. This is one of the reasons why small or remote cities rarely enter into planning processes of this kind. It is often convenient for them to be part of these networks, but they can contribute little in terms of concrete administrative action. With regard to grassroots movements, politicians and leaders, high-profile figures, and representatives of the most diverse groups (businesses and oil companies, but also religious communities) are strongly targeted by the demonstration actions. Their responsibility is to have the grassroots message diffused in society, as well as materialized in tangible, immediate actions. At the moment, there is strong attention from local governments to the activists' demands (a clear example is the media coverage of climate movement actions during the last COP26), but this does not seem to be followed by incisive operational action. The experience of Extinction Rebellion is the one that has been most able to extract from public actors a commitment in the form of a declaration of climate emergency and the stable use of climate assemblies. In both cases, it is difficult to measure effectiveness in reaching decarbonization goals, precisely because of the indirect nature of the advocacy action proposed by the initiatives. However, in both cases, there is a direct response in terms of policy actions provoked (declarations of climate emergency, adaptation plans), which can be directly traced back to the action of these initiatives. What seems to be missing, however, is a system-wide transformation in the socio-technical systems related to the provision, support, and development of actions to combat climate change. The success of some experiences has over time expanded participation and engagement, especially at the local level, as well as adherence to initiatives proposed by both climate activists and TMNs. However, it remains necessary to reflect on the amount of additional work that is required from local governments, which are often already suffocated by bureaucratic resistance and a lack of trained staff. In light of the above considerations, it can be stated that TMNs assume more of a role of constant accompaniment and awareness-raising for adaptation and mitigation measures, while grassroots movements are geared toward offering specific arguments on an issue or problem to be addressed through political action. However, this action is effective only if connected and integrated with a complementary role with respect to the actions put in place through planning.

Conclusions and Further Research

In conclusion, this research presents a qualitative review of two key types of climate advocacy initiatives: grassroots climate movements and TMNs. In particular, it highlights their main constitutive elements, targets, and factors with the aim of understanding how these two experiences can support cities' transition. We can now recall the two hypotheses that drove the analysis and try to answer them.

Hypothesis (H1). Grassroots movements for climate action and TMNs have the potential to better support cities in their climate transition.
The proposed hypothesis has been addressed in the Discussion section. Both types of initiatives have aspects in common, especially targets and methods, with key differences stemming from their different natures: TMNs are institutionalized, while grassroots initiatives are spontaneous. Even with those differences, both agree on the necessity of taking effective action against climate change, and both perceive such action as imperative. This links the discussion to the second hypothesis.

Hypothesis (H2). Local governments should take advantage of both initiatives, as they have the unique ability to develop networks of support, innovation, and sense of belonging.

This hypothesis is confirmed by the analysis performed. So far, the two types of initiatives have been kept separate. Recent decisions by cities across Europe are moving towards supporting grassroots climate initiatives, for example in the form of declaring a climate emergency, showing room for collaboration and for working together to make the transition effective. The theme of temporality seems relevant here: both claim that it is necessary to act now, as the "future" has become a political category of the present. The rhetoric of the message, of which movements and networks are the spokespersons, is based on a critique that draws on a topos of urgency, since the space for maneuvering decreases abruptly and rapidly. Especially in the charismatic words of climate movement leaders, the rhetoric implies that catastrophe is already upon us [56]. This posture, however, implies a certain risk of acting in a regime of urgency, an attitude that may neglect a substantial and necessary part of the transition, i.e., leaving the emergency regime behind and fostering long-term strategic planning that is not only vertical on the climate emergency but includes it transversally in every aspect of future development. When talking about sustainable development, in fact, it is useful to recognize that the urban crisis should no longer be understood as a contingent and removable event, but as a perennial condition, a permanent state of emergency. These global challenges burden a complex reality that deeply intertwines the health, socioeconomic, communication, and environmental dimensions. This requires a mix of technical and advocacy instruments to allow for both the development of forward-looking visions and the incorporation of contingent actions, including those arising from bottom-up stimuli.

Finally, the study highlights the constant emergence of new movements on both sides. As an example, a recent initiative is the Green City Accord (GCA), the most recent TMN created in Europe, by the European Commission. The GCA brings together European mayors aiming to make cities in Europe greener and healthier. Similar to the CoM, the GCA directly targets cities' mayors and requires them to sign an accord, entailing a strong political commitment. It is open to all cities in Europe with more than 20,000 inhabitants, including agglomerations. The initiative is intended to contribute directly to the achievement of the Green Deal and the SDGs, through commitments in five key areas: air quality, nature/biodiversity, water, waste/circular economy, and noise.
A specific monitoring framework is detailed in the application process, requiring cities to provide a baseline assessment and periodic reporting based on a predefined set of indicators [47]. Because the initiative was only created in 2020, it is not yet possible to report in-depth feedback on barriers; however, it can be argued that its structure is very similar to that of the CoM. Thus, it will be important to take into consideration all the already known barriers of the CoM in order to reduce shortcomings and improve the quality of this initiative. If the important role of TMNs working together with grassroots climate initiatives is recognized, this newly created initiative could take the form of a mixed one. Instead of building another "new and similar" network, which will require new assessment work from municipalities, it should be possible to work in collaboration with already existing actions and with highly conscious young people around the world. Future research will analyze more closely potential methodologies of collaboration between these two types of initiatives, as well as the forms this relationship can take, including its impact on local agendas, procedures, and documents.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

This appendix includes a table providing the geographical coverage of the TMNs analyzed. Please note that the 100RC program is not covered, as its website is no longer available (26 January 2022). All data have been derived from the initiatives' websites. Data on the number of municipalities have been taken from the official websites of the respective countries.
Donald Trump and Institutional Change Strategies

This article integrates three fields of study: the "regime politics" paradigm in law and courts, the "institutional change" approach in public policy, and the "unilateral presidency" literature. In doing so, we show how law, politics, and public policy are inextricably linked, and that researchers can borrow assumptions, methods, and theories from a variety of fields. We use Donald Trump's early presidency to show how political actors (especially presidents) can use four different change strategies. In the case of Trump, we highlight: shifting of decision-making authority via insurrectionary displacement; the elimination of the individual mandate via subversive layering; a change in drone use policy via opportunistic conversion; and a gradual desensitization and change in school choice education policy via symbiotic drift. We conclude by offering lessons for all three literatures we incorporate, as well as a way forward for studying a presidential administration that many find difficult to analyze.

Introduction

Using Donald Trump's early presidency as a case study, this article merges different literatures to demonstrate and advocate an innovative way of studying American politics. We leverage the "institutional change" approach (Mahoney and Thelen 2010; Peters 2011; Lowndes and Roberts 2013; Van der Heijden and Kuhlmann 2016) to explore entrepreneurial possibilities available to US presidents in an "intercurrent" system (Orren and Skowronek 2004). At the heart of our analysis is the underlying conviction that law, politics, and public policy are inextricably linked. In particular, the law helps shape the limits and opportunities available to those who seek to change and/or protect public policy.

The article starts with a key assumption from the "regime politics" paradigm. We then bring in the public policy literature's institutional change approach, describing its dimensions, typology, and strategies (displacement, layering, conversion, and drift). Next, we transport the assumption of regime politics and the methods of institutional change to the study of the unilateral presidency. The following data section describes how Trump has encountered contexts, and developed responses, predicted by the typological framework. We give examples of Trump using displacement, layering, conversion, and drift. The conclusion discusses the study's implications for examining law, politics, and public policy from different institutional angles. It closes with a note on studying the Trump presidency.

Three Literatures

This section lays out the assumptions of one literature, the approaches of another, and the application of those two to a third body of work. The "regime politics" paradigm maintains that institutions cannot be studied in isolation. Through the lens of the Supreme Court (but transferable to other institutions), it shows how "intercurrent" processes affect political outcomes. Meanwhile, the public policy literature has devised a powerful approach for studying institutional change. Using a two-dimensional framework, public policy scholars explain a great deal of behavior within intercurrent governance. We close the section by integrating assumptions and approaches into a third subfield: the unilateral presidency.
A Regime Politics Assumption

Within the study of law and courts, "regime politics" seeks to counter the epistemological outlook of behavioral studies (Whittington 2000). Instead of trying to explain judicial behavior (e.g., Justices' votes) with parsimonious variables (e.g., attitudes) (Segal and Spaeth 2002), regime politics seeks to show the developmental trends of a Supreme Court that operates in a broader network of institutions and preferences. Regime politics scholars believe that reducing judicial politics to items such as individual Justices' votes misses the "intercurrent" (Orren and Skowronek 2004) nature of American politics. In other words, understanding Supreme Court decisions requires contextualizing those decisions within a larger web of overlapping institutions, working with and against each other to achieve change or sustain the status quo (Clayton 2002; Gillman 2006a; Barnes 2007; Keck 2007; Stazak 2015). In adopting intercurrent assumptions, these studies revolutionized the way we saw the Supreme Court. They exposed a subtle, clever, and sometimes manipulative relationship between law and politics, a relationship in which elected officials can turn to courts as a means for changing or preserving public policies (Whittington 2005).1

Although regime politics scholarship brought about essential findings, it saw judicial review largely as a tool, to be used at whim, for the dominant coalition. Shapiro suggested a broader outlook that we might think of as a return to the theoretical roots of the paradigm: understanding courts as both respondents and instigators within a larger political system. In other words, whereas the field largely saw a one-way principal-agent relationship in which the Court served the interests of the coalition, Shapiro considered a two-way relationship in which politics not only affected the Court, but the Court also affected politics (Shapiro 1964, 1993; Kritzer 2003; Gillman 2004). This shift in thinking has opened up new theoretical possibilities to regime politics. For example, in rare cases, the Court is counter-majoritarian. Affiliated elected members of the dominant coalition can then either cleverly manage (McMahon 2011; Keck 2014; Keck and McMahon 2016) or bungle (Bridge and Nichols 2016; Bridge 2015, 2016) the political fallout. Moreover, the shift toward a two-way relationship opened the paradigm to a powerful method: the "institutional change" approach.

A Public Policy Approach

Mahoney and Thelen (2010) explain that the relative positions and two-way relationship between a pair of institutions generate a particular context that structures actors' choices and behavior. Specifically, they conceptualize a two-by-two typology of institutional change strategies based on two contextual dimensions.
First, change agents' discretion in interpreting rules can be low or high. All institutions have rules that range from loosely structuring to rigidly determining actors' behavior within those institutions. The wording, scope, and reasoning behind any set of rules vary from institution to institution and help sculpt the range of choices available. Put simply, the law matters, and it opens up or closes off possible courses of political action. Imagine if the tax code suggested that the Internal Revenue Service prosecute cases of "significant" tax fraud versus mandating that the agency prosecute "all" cases of tax fraud. The latter presents a clear directive to auditors: even if a taxpayer is off by a penny, they must be prosecuted. The former is suggestive and grants those in the IRS the ability to interpret what constitutes "significant" fraud. Is it a set dollar amount (e.g., more than $100)? Is it a percentage of one's tax bill (e.g., the payment is off by more than 5%)? Does "significant" mean the IRS should target high-profile (e.g., celebrities) and/or high-earning (e.g., CEOs) tax reports? In both circumstances, the rules help determine how much latitude the institution's agents hold. When agents hold high discretion, they have more of an ability to affect change through their own choices. With low discretion, they are more hamstrung by the rules in place.

1 For example, Justices can issue favorable rulings (Clayton and Pickerill 2004; Pickerill and Clayton 2004), get around immovable roadblocks in the Legislature (McMahon 2004), handle coalition-splitting issues that Congress has passed off (Graber 1993, 2005, 2006; Clayton 2002; Lovell 2003; Lemieux and Lovell 2010), rein in regional outliers that deviate from national norms (Klarman 1996; Graber 1998; Powe 2000), or provide rear-guard protection against new-guard insurgents (Gillman 2002, 2006b).

Second, change agents will almost always meet some level of resistance from status quo defenders. These defenders can be weak or strong in their ability to protect existing policies. We point out two additional items regarding status quo defenders. First, the quantity and jurisdiction of defenders vary by policy area. "Policy regimes" (McGuinn 2006) carry different constellations of institutional decision-makers, interest groups, and bureaucratic players. It would be foolish to say that the groups interested in one issue are exactly the same as those interested in another. Thus, while the time period under investigation is relatively narrow (2017-2018), with no variation in congressional partisan majorities, the variation in policy regimes allows us to distinguish between cases of different status quo defenders. In addition, devices for status quo defense may vary, but all of them present defenders as either weak or strong. For example, some detractors, such as the House Minority Leader, may hold key political offices but be in a weak position to stop change. Others might be extra-governmental actors but hold a firm grip on an important gate in the policy-making process. We do not discuss firearms in detail, but the issue serves as a useful example, as the National Rifle Association exerts a powerful influence over many conservative members of Congress. In many cases, the NRA holds a de facto veto on gun regulation bills.

Altogether, then, the two dimensions create four change contexts that emerge from the different combinations of institutional settings. Table 1 shows the dimensions and contexts. We discuss each.
Table 1. Institutional change contexts and strategies.

                                   Change Agents' Discretion
                                   Low                            High
Status Quo Defenders    Weak       Insurrectionary Displacement   Opportunistic Conversion
                        Strong     Subversive Layering            Symbiotic Drift

Insurrectionary Displacement

This strategy features change agents with low discretion and status quo defenders with weak veto abilities. Of all the institutional change strategies, displacement is the most likely to produce quick, visible change because change agents have incentive to do away with an institution. With rules that do not allow discretion, change agents cannot use them to their advantage. Moreover, those trying to thwart change come to the table from a position of weakness. In these instances, change agents might see no need to keep the institution in existence. It might be easier to displace it altogether. Thus, change agents are "insurrectionists": rather than co-opt uncooperative institutions, change agents will either eliminate them or their ability to affect policy.

Displacement is the closest we come to the dramatic change that many political candidates, especially those for president, promise in their campaigns. Still, we should not automatically equate this strategy with "order shattering/order creating" (Skowronek 1997, 2011; Nichols and Myers 2010; Nichols 2011), a term which implies historic reconstruction of the way both politicians and the people conceive of the state (e.g., emancipation, the New Deal). In less spectacular ways, displacement may be used to lessen an institution's capacity to make decisions. For example, Congress' constitutional ability to regulate the federal judiciary could shatter and build courts. But less dramatically, Congress could restrict certain cases from ever reaching those courts (Rosenberg 1992; Clark 2011; Engel 2011; Nichols et al. 2014). Moreover, the courts themselves have weak vetoes in this area: they cannot stop a bill that reduces jurisdiction. Thus, Congress need not shatter or build to displace judicial authority.

Subversive Layering

As with displacement, change agents in this context have low discretion. It differs in that status quo defenders are strong. Here, the change strategy is to layer new rules or institutions on top of, or alongside, existing ones. The new layers do not replace the old; instead, they add to procedures in a way that transforms the operational capacities, decision-making, or mission of an institution. It is a subversive tactic, akin to "termites in the basement," who eat away at the structural foundation (Mahoney and Thelen 2010, p. 31; Nichols 2014, p. 287).

A prime example is Ronald Reagan's 1981 strategy to exact spending cuts from Congress (Nichols 2015). After witnessing 50 years of Republicans losing policy debates and elections on balanced budget planks, Reagan adopted a strategy of layering tax cuts on top of mandatory outlays. He saw policy and political benefits. Administration advisors believed that bloated deficits via tax cuts would force Congress to cut spending. This "starve-the-beast" tactic did not work, for Congress accepted the growing deficit, but it still gave the GOP a political boost. Previously, the party struggled to compete with the "Democratic Santa Claus," who showered voters with spending gifts (Wanniski 1976). With tax cuts, though, the "Republican Santa Claus" delivered a tangible good to the electorate. Tax cuts soon became self-reinforcing after voters became accustomed to refunds, and economic conservatism locked into path dependent (Pierson 2004) avenues for at least 30 years (Crockett 2011; Skowronek 2011; Schier 2011).
Opportunistic Conversion

With opportunistic conversion, we have the inverse of subversive layering. Whereas status quo defenders had strong veto capabilities and the change agents had low discretion in the context of subversive layering, in conversion the roles are reversed: status quo defenders are weak while change agents have high discretion to interpret institutional rules. Here, more than in any other strategy, the law empowers change agents, who refrain from displacement because they realize the prospect of converting an unfavorable institution into a favorable one. In this context, change agents are "opportunists," reconfiguring rules, norms, and procedures to promote goals radically different from those in the institution's original charter (Mahoney and Thelen 2015, p. 24).

A superb example of this tactic comes from Daniel Béland's developmental history of Social Security, showing how Democrats have converted the program multiple times (Béland 2007). FDR originally conceived that self-supporting, account-balancing funds would come in and go out, meaning Social Security would not require constant general revenue financing from federal tax dollars. That changed when liberal congressional Democrats who scraped by in the 1938 midterm elections altered the program to improve their reelection chances. They changed Social Security to include spousal and survivor benefits, or, from a program that favored singles to one that favored working- and middle-class couples. Later, in the 1970s, Democrats again felt electoral pressure, and using "ad hoc increases," converted Social Security into a "retirement wage" similar to those in Western Europe (Béland 2007, p. 23). In both instances, Social Security did not go away. Instead, politicians converted it into a different form that served their interests.

Symbiotic Drift

What can change agents do when they maintain high discretion but status quo defenders hold strong veto possibilities? They cannot employ conversion, for defenders can guard against that kind of overhaul. Instead, change agents must work within the existing system to exploit ambiguous rules. Mahoney and Thelen (2010, p. 24) describe the relationship as "parasitic," whereby change agents avoid an aggressive attack (e.g., displacement or conversion) in favor of an apparent public willingness to cooperate combined with behind-the-scenes exploitation of rules and norms to undermine the institution. Drift is often subtle, unseen to many until gradual shifts eventually revolutionize the operation and/or output of an institution.

Calling drift the "most pervasive dynamic" in modern politics (Hacker 2004, p. 247), Jacob Hacker shows how political entrepreneurs have sought to nudge the welfare state in one direction or another (Hacker 2005). Accounting for changing social norms (e.g., more two-income households) and increased risk and uncertainty, Hacker explains how gradual shifts in policies over decades changed insurance and retirement structures. In the end, constant drift between and among proposals created the current public-private hybrid system.
Drift is exceptionally hard to measure (Rocco and Thurston 2014), and Barnes (2008) offers two helpful keys for thinking about the dynamic. First, episodes of drift need not be viewed in isolation. With intercurrent rules and institutions, changes in, or changes in the approach to, one part of a policy can greatly affect another part of that policy. Political entrepreneurs know this, and therefore researchers should not limit analysis to a quarantined set of institutions, rules, or norms. Again, each policy regime is different, and the constellation of factors in each warrants individual attention. Second, Barnes says that drift itself need not be the mechanism that ultimately creates change. Instead, it can lead to other strategies (e.g., conversion) that present more measurable change.

Intercurrence, Institutional Change, and the Unilateral Presidency

Other fields, such as comparative politics (Falleti 2010; Van de Walle 2011; Staniland 2012) and American public policy (Hacker and Pierson 2010; Burke and Barnes 2009; Barnes and Burke 2015, 2018), have adopted the institutional change approach. Often, these tell the stories of how one policy preference subtly and/or gradually replaced another seemingly dominant one. This scholarship is important, for it not only tells the individual histories of key public policies, it also opens avenues for new applications of institutional change research (Barnes 2008).

One subfield of American politics, the presidency, has curiously not used the technique to its fullest. To do so, we suggest pushing the boundaries of the "unilateral presidency" literature. A response to Richard Neustadt's classic claim that the "power of the president is the power to persuade" (Neustadt), this body of work prioritizes the legal structure of the presidency before the personal attributes of any given president. It asserts that presidents retain significant powers inherent to the institution. Instead of pursuing ways in which single presidents persuade other political elites, the unilateral presidency literature examines how all presidents can use their constitutional and extra-constitutional powers directly (Moe and Howell 1999a, 1999b; Crenson and Ginsburg 2007).

Constitutionally, presidents hold powers that grant them a significant voice in the policy-making process. Various scholars have looked at different constitutional powers that give the president significant leverage. For instance, the veto not only allows presidents to block laws that they find objectionable, but they can also employ the threat of a veto to help shape the wording and nature of a bill into something closer to their personal preferences (Conley 2003a, 2004; Conley and Yon 2007). Chief legislator, chief diplomat, commander-in-chief: these are all roles that stem directly from constitutional provisions (respectively: making legislative recommendations, negotiating treaties and receiving ambassadors, and commanding the military). These constitutional powers alone allow even the most "isolated" of presidents to achieve significant policy victories.
Take John Tyler. Though unelected, facing divided government, and expelled by his own party, Tyler overcame the most hostile of Congresses to implement his preferences on major issues of the day: setting the Canadian boundary, a trade agreement with China, preventing the institution of a national bank, and the annexation of Texas (Cash 2018). If a president as isolated as Tyler could use these powers under constrained conditions, then we might think that other presidents should be even more likely to accomplish their goals.

In the modern era, presidents have gone beyond constitutional directives in using extra-constitutional powers. Whether "imperial" (Schlesinger 2004) or not, presidential powers have undeniably grown in the last 80, and especially the last 50, years. William Howell's Power without Persuasion (Howell 2003) is a foundational text. It shows the ability and frequency with which presidents have employed a broad range of powers not delineated in the Constitution. Others, too, have added explanations of the extra-constitutional tools in the president's belt, including signing statements, proclamations, executive orders, and executive agreements (Mayer 2001; Rottinghaus and Maier 2007; Krutz and Peake 2009; Conley 2011, 2013).

We agree with the unilateral paradigm's assumption that the key to presidents' success is not simply "persuading" others, but rather functioning as a powerful instigator and veto player who employs broad constitutional and extra-constitutional authority. Despite what Neustadt advocates, presidents have tremendous unilateral powers that allow them to accomplish much. While studying their unilateral actions has merit, we suggest studying their unilateral actions in context, especially in situations where they want to create change (while holding varying levels of discretion) and face those who stand in the way (with varying levels of ability to maintain the status quo). Put differently, the unilateral presidency literature should not only look at presidential powers; it should look at how, when, and where those powers are used. We believe that the institutional change approach provides an appropriate framework for helping that investigation. More generally, and in line with the theme of this special issue, we believe that the law shapes how decision-makers approach politics and policy-making. The law, therefore, shapes the rules of the game, saying who can and cannot play, and what players can and cannot do in the political, policy-making, and administrative processes.

In what follows, we show how President Donald Trump has used unilateral powers to try to enact change. We do not argue that he consciously pursues strategies of displacement, layering, conversion, and drift. Nonetheless, this has been the dynamic at work. At the very least, Trump has behaved in ways that comport with change strategy predictions. This is neither in lieu of, nor in addition to, his unilateral powers. Instead, we put forward that his actions are an essential part of his unilateral abilities.

Donald Trump's Use of Change Strategies

This section analyzes each of the four change strategies theorized by Mahoney and Thelen (2010). We show how Donald Trump has taken actions in various policy domains that reflect displacement, conversion, drift, and layering.
Insurrectionary Displacement

Trump's employment of insurrectionary displacement is notable in that he has not focused on destroying institutions. Instead, he has shifted policy-making and executive authority to individuals and agencies more directly under presidential control and/or more in line with Trump's preferences. The outcome is better opportunities for the president to force shifts in the status quo.

In domestic policy, this strategy is most apparent in how Trump has approached the FBI. Despite members of his administration facing investigation (Bump 2018), Trump attempted to establish more direct oversight over the Bureau, allegedly even pressing former Director James Comey for his loyalty and making requests about investigations that Comey found "inappropriate" (Kenealy 2017; Stracqualursi and Liddy 2017). When these efforts failed, Trump sought to shape the FBI hierarchy more to his liking, firing Comey in May 2017. Ten months later, Attorney General Jeff Sessions fired Deputy Director Andrew McCabe, much to Trump's tweeted delight (Trump 2018). When Robert Mueller was appointed as Special Counsel to lead the Russia investigation, Trump nearly fired him as well, abstaining only after White House Counsel Donald McGahn threatened to resign in protest (Shear and Apuzzo 2017; Schmidt and Haberman 2018). Nevertheless, Trump has attacked the investigation as partisan (Baker 2018). Trump has also indicated that he would like to fire Deputy Attorney General Rod Rosenstein, who oversees the Mueller investigation (Murray et al. 2018). Finally, we contend Trump's very public frustration with Sessions, regarding Sessions' recusal from the Russia investigation, his delay in firing McCabe, and the lack of an investigation into Obama administration ties with Russia (Nelson 2018), stems from the president's inability to personally direct federal law enforcement to fit his preferences.

Apart from attempting to control FBI leadership, Trump has attacked the Bureau by controlling the flow of information. He declassified the Nunes memo, which purported illegal actions on the part of FBI officials in investigating the Trump campaign (McCarthy and Yuhas 2018). Conversely, he redacted much of the Democratic-authored Schiff memo, a response to Nunes that likely presented the FBI more favorably (Herb 2018). Additionally, Trump has sought to turn the tables on the FBI and Justice Department by expressing support for a second special counsel to investigate both agencies (Breuninger 2018).

Of course, despite veiled threats to shut down Mueller, the Russia probe still moves forward. The point is not that Trump has stopped the investigation. The point is that Trump has redirected federal law enforcement, an issue that encompasses the Russia probe but extends much further. In short, Trump has delegitimized and displaced FBI and Department of Justice leadership to bring federal law enforcement more directly under the president's purview. In the case of Russia, such a move seeks to give presidential oversight over actors who themselves are supposed to oversee the responsible use of presidential power. It is, in a word, insurrectionary.
In foreign policy, Trump's use of displacement has focused on shifting the locus of foreign policy decision-making from the State Department to the Pentagon. This change is observable on two levels: the personal relationships between Trump and the Secretaries of State and Defense, and the resources devoted to each department. Regarding the former, Trump has a positive relationship with, and seems quite deferential to, Secretary of Defense James Mattis (Klimas and Morgan 2017; Mitchell 2018). Oppositely, Trump's relationship with former Secretary of State Rex Tillerson always appeared strained. The president contradicted, publicly quarreled with, and eventually fired Tillerson, replacing him with the more hawkish Mike Pompeo, former Director of the CIA (Nelson 2018; Shane 2018). Regarding resources, Trump has displayed a clear preference for the Defense Department. In his budgets, Trump proposed to cut the State Department's budget by up to 30 percent, in addition to deep personnel cuts (Morello 2017; Finnegan 2018). By contrast, Trump's 2018 and proposed 2019 budgets dramatically increase military spending, and his freeze on federal hiring specifically exempted the Defense and Homeland Security Departments (Cloud 2018; Jaffe and Paletta 2018; Nelson 2018). Finally, we note that Trump has surrounded himself with former military personnel. In addition to Mattis, Trump's Chief of Staff (John Kelly) is a former general, as was his former National Security Advisor H.R. McMaster. While McMaster was fired by Trump, his replacement, John Bolton, is known as a strong hawk (Lucey et al. 2018), further signaling that Trump is surrounding himself with figures supportive of strong military action and giving his foreign policy a distinct military character. It thus appears Trump is displacing State Department influence in favor of military influences. Needless to say, this leads to significant policy changes.

Subversive Layering

For an example of subversive layering, we turn to Trump and the GOP's repeated attempts to repeal and replace Obamacare. The strong veto possibilities available to members of Congress doomed efforts to outright repeal Obamacare throughout 2017 (Nelson 2018). Ironically, Republicans presented the most impactful roadblocks. Below, we describe failed efforts to overhaul healthcare. In the end, Republican leaders, including Trump, learned that because status quo defenders held strong vetoes, a displacement or conversion strategy would not work. Instead, change agents chose to layer new rules on top of other laws.

In the first effort to repeal Obamacare, Speaker Paul Ryan pulled the bill before it even came to a vote. With objections from the conservative Freedom Caucus, specifically concerning a proposed four-year extension of Medicaid payments to the states, the Republican leader knew he did not have the votes. Ryan and Vice President Mike Pence then re-worked the bill so that it appeased both moderate and conservative Republicans in the House. Soon thereafter, though, Senate Majority Leader Mitch McConnell ran into problems similar to those Ryan had faced in the lower chamber. With only 52 Republican Senators and solid Democratic opposition, McConnell had a thin margin to work with. In his first effort, conservative Republicans insisted the repeal did not go far enough. Yet, in mollifying these senators, he alienated the party's moderate wing.
Trying to find some common ground, McConnell then presented three different measures at the end of July 2017. The first was a full-scale repeal-and-replace that contained both moderate and conservative elements. It failed as nine Republicans, conservatives and moderates, voted against it. The second was a "straight repeal" without replacement provisions. It also failed when seven moderate Republicans defected. The third was a "skinny" repeal that only eliminated Obamacare's individual mandate. It too failed when moderate Republicans Lisa Murkowski (AK), Susan Collins (ME), and John McCain (AZ) voted against the measure, with McCain dramatically and literally giving the measure a thumbs-down during the floor vote.

After outright displacement efforts flopped, the GOP turned to a conversion strategy. Lindsey Graham (R-SC) and Bill Cassidy (R-LA) attempted to repeal-and-replace via the filibuster-proof budget reconciliation process. While reconciliation typically involves perfunctory proof-reading of the budget, Trump endorsed converting the process into a policy-making mechanism. Using altered parliamentary procedures gave repeal-and-replace yet another floor vote. But when both conservative and moderate Republicans opposed it, McConnell pulled the measure (Nelson 2018).

The inability to repeal Obamacare, a key Trump campaign promise, speaks to the strong veto capacities of defenders of the status quo. Facing unified opposition from the Democrats, as well as intra-party squabbles, Trump could not deliver on a primary element of his legislative agenda. We believe the GOP failed primarily due to underestimating status quo defenders' strength. The problem was especially acute in the minoritarian and factional Senate, where McConnell swung and missed three times. We believe this early failure to repeal, let alone repeal-and-replace, came from choosing the wrong strategy. With strong vetoes, conditions were not ripe for displacement. Indeed, even a switch to a conversion strategy with reconciliation failed because, again, the GOP misread the context.

Despite failing to repeal and replace Obamacare wholesale, Trump pivoted the legislative agenda toward what we might consider the least common denominator for congressional Republicans: tax cuts. With reliable conservative support for changing the tax code, Trump faced much less Republican factionalism than that which had plagued Ryan and McConnell. On fiscal policy, the GOP faced weaker veto players, and used the opportunity to attach skinny repeal to tax cuts. They did so in Part VIII of H.R. 1 (the tax revision bill), which amends Section ??a(c) of the US tax code. This section dealt with the Obamacare penalty for taxpayers who failed to obtain minimum healthcare. Under the Affordable Care Act, offenders were required to pay the greater of 2.5% of the excess of household income (Part 2, B, iii) or $695 (Part 3, A). The 2017 tax bill amended these penalties to 0% of income and $0, respectively. In other words, the tax for not securing minimum healthcare became $0. In pursuing this route, the Republican Party turned the constitutionality of Obamacare against the Democrats. When the Supreme Court sustained Obamacare in National Federation of Independent Business v.
Sebelius, Chief Justice John Roberts' opinion declared Congress could legislate the individual mandate under the Taxing and Spending Clause. Republicans used this logic to include skinny repeal in a tax cut bill. And far from fearing that the Court will defend the individual mandate against this change, some conservative legal analysts welcome going back to the Supreme Court. They argue that in eliminating penalties, H.R. 1 also undermined the Court's logic that undergirded Obamacare. For while Roberts joined the Court's four liberals on the taxing power, he had joined with the Court's other four members in rejecting the argument that Congress could institute the individual mandate under the Commerce Clause. Thus, conservative analysts advocate going back to the Court, arguing that without penalties there is no real tax and therefore no Taxing and Spending Clause issue involved. Eliminating that concern gives conservative analysts hope that five Justices would find the entire Affordable Care Act unconstitutional under the Commerce Clause due to its unseverability from the individual mandate (Larkin 2018). Not coincidentally, 20 states have filed a case in a US District Court claiming this exact argument (Leonard 2018).

In the end, healthcare reform did not come about via new legislation. Trump, McConnell, Ryan: none of them ever guided an anti-Obamacare bill through to the end. Instead, skinny repeal passed only because it was attached to tax cuts, maybe the single most orthodox Republican issue. Thus, layering was both the appropriate institutional change strategy and the tactic by which to accomplish that strategy. To overcome strong vetoes, Republicans placed new rules on top of existing institutions; technically, they did not eliminate the penalty as much as they issued a new rule saying the penalty was nil. Moreover, to pass this rule, they layered the reduction of the new penalty on top of tax cuts, a tactic Trump strongly embraced (Nelson 2018). In doing so, change agents (including Trump) circumvented strong veto points (i.e., unified Democratic opposition and internal Republican divisions) that status quo defenders had previously used to prevent repeal. And if conservative legal analysts are correct, they might also have created an opportunity to relitigate Obamacare and have it ruled unconstitutional.

Opportunistic Conversion

Of the four strategies, Trump seems most comfortable using opportunistic conversion via his broad discretionary powers in the face of weak veto opportunities. This is perhaps unsurprising given Trump's embrace of strong executive power and unitary executive theory (Crouch et al. 2017). One need not dig far to find an example: pulling out of the Paris Climate Accord and the Iran nuclear deal, imposing steel and aluminum tariffs, and ending Obamacare subsidies, among others. We focus, though, on an issue that has avoided public scrutiny: Trump's relaxation of the procedures concerning unmanned drones. We choose this issue specifically because it is relatively unstudied and provides more information on a topic that would otherwise fly under the radar. We spend time discussing Obama-era regulations to set up a discussion of Trump's alterations to the same institution. As will be shown, a subtle conversion of the previous administration's rules has led to a dramatic difference in foreign policy practices.
Under the Obama administration, directorship of drones depended on the country in which the strike occurred. The CIA directed strikes in Pakistan; the Joint Special Operations Command in Somalia; both agencies in Yemen; and the Air Force in "clear war zones" (e.g., Iraq, Afghanistan). House and Senate intelligence and armed services committees maintained oversight, but the decision-making process excluded congressional input, and was handled solely within the executive branch. As part of that decision-making process, the Obama administration developed its own four criteria for possible targets: (1) the person was a senior operational leader of al-Qaeda or an associated force; (2) the person posed "an imminent threat of violent attack against the United States"; (3) capture was infeasible; and (4) the strike could be conducted according to "law of war principles" (e.g., minimizing civilian casualties). If a target met all four criteria, his/her name was placed on a list given to the president. Obama would then personally approve each strike after considering "whether the potential collateral damage was 'legally and morally justified.'" After gaining presidential approval, the strike was delegated to the appropriate agency and reported to congressional leaders (Cash and Bridge 2018, pp. 129-31).

While Congress was informed of the process, the third branch of government recused itself from addressing questions related to drone strikes. In the federal district case Al-Aulaqi v. Obama (2010), the petitioner, Nasser al-Aulaqi (also spelled al-Awlaki), argued that his son, Anwar al-Aulaqi, an Islamic cleric and American citizen, had not been indicted or convicted of a crime and should not have been targeted by a drone strike. Judge John Bates, writing for the Federal District Court of the District of Columbia, stated that al-Aulaqi lacked standing. Curiously, Bates went on to say that the broader legal and constitutional issues al-Aulaqi raised were non-justiciable political questions. By dismissing the case in such a fashion, the district court effectively set a precedent that drone strikes were political decisions beyond the jurisdiction of the courts, thereby depriving federal courts of a veto in drone policy (as of early 2018).

Since coming into office, Trump has loosened Obama's procedures through various administrative changes. By re-categorizing parts of Yemen and Somalia as areas of "active hostilities," Trump exempted these areas from Obama's four criteria. Similarly, Trump has cleared military commanders to order drone strikes without prior White House approval. Such changes have led to a sharp spike in unmanned drone strikes under the Trump administration. Moreover, without a strict adherence to the fourth criterion, there have been reports that collateral damage is on the rise (Feldstein 2017). We do not make a normative claim here, but point out that these changes do align with Trump's campaign pledges.7 To fulfill those pledges, Trump converted old structures, left in place by Obama, to suit his own ends (Dilanian and Kube 2017; Purkiss et al. 2017; Savage and Schmitt 2017).

7 For example, Trump said he would not announce beforehand that he would bomb areas so as to limit collateral damage.
Symbiotic Drift Examples of drift in the Trump administration come from his appointments.Many appointees seek not to destroy the institutions they lead, but rather to redirect them in ways that opponents with strong vetoes cannot fully counter.We focus on one of Trump's highest-profile nominations, Secretary of Education Betsy DeVos.A longtime advocate of school choice, DeVos was a known quantity when Trump selected her for Secretary of Education. 8 A year in, overhauling public financing of education to allow for school choice remains distant.We argue, though, that DeVos has chipped away at the public school defenders who stand in the way of a new national voucher program.In other words, the nation stands closer to instituting school choice-or elements of school choice-than it did on Inauguration Day 2017. For starters, the Trump administration has changed the conversation such that school choice is a real option on the table.Under Democratic presidents, it received scant attention. 9While recent Republican presidents might have voiced support for vouchers, none invested serious political capital into the effort.For instance, George W. Bush's No Child Left Behind tackled accountability rather than choice.DeVos operates differently, expressing support for school choice at every turn (DeVos 2017a).In doing so, she has attacked public schools, a long-standing strong status quo defender. It is a cultural change, and probably a necessary prerequisite if Republicans stand any chance of presenting, let alone debating and passing, a national school choice bill.As a Washington Post editorial states, "Perhaps most important, being Education Secretary has given DeVos a national platform to mainstream her school choice ideas, which once were considered radical" (Strauss 2017b).We agree, 7 For example, Trump said he would not announce beforehand that he would bomb areas so as to limit collateral damage.8 She had spent decades promoting her views with policy successes along the way, including a major 1993 charter school victory in Michigan.Unequivocal in her support of charters and private schools, DeVos has said that school choice should be equivalent to choosing between Uber, Lyft, or another mode of transportation (Strauss 2017a).9 Although, Obama did implement some policies that benefited charter schools (Strauss 2016). Laws 2018, 7, 27 and highlight the specific conversation point that Republicans tackle: that choice does not advantage one group over another.This topic was at issue during a House appropriations subcommittee hearing, where Rep. Katherine Clark (D-MA) suggested that Trump's policy on transgendered students would prohibit choice. 10Conjuring up the days of segregation, Clark asked DeVos how she would respond if a state used federal funds to provide vouchers for a school that forbid LGBT values.Office of Civil Rights and Title IX protections stood "applicable across the board," DeVos stated, "but when it comes to parents making choices on behalf of students-" [Clark subsequently interrupted DeVos' response] (Jackson 2017). 
The Secretary did two things that warrant recognition.First, she accepted that while states and local communities could dictate their education systems (a longstanding Republican view), it would still require some federal input.We believe DeVos was mindful of the political landmine and tried to deflect possible accusations of school choice as inherently discriminatory-a charge that harks back to the Jim Crow era, when "choice" meant white parents could opt out of black schools without the intervention of federal courts.Second, even in recognizing a possible issue, in her very next words, DeVos signaled that it would not-or, that she would not let it-stand in the way of school choice options.Politicians such as Clark may never agree, but we believe that DeVos' repeated and strategic advocacy of school choice is an effort to culturally sensitize Washington and the American public to the policy.Again, we see this as a prerequisite for the kind of change she, Trump, and many other conservatives seek. DeVos has also taken aim at another potential legal roadblock: that vouchers to private religious schools might violate the "separation of church and state."Detractors argue that providing public funds for sectarian schools violates the Establishment Clause.They claim, too, that such funding would violate 30 or so various "Blaine Amendments" in state constitutions.Modeled after Rep. James Blaine's (R-ME) 1875 proposal to amend the US Constitution, most state-level Blaine Amendments were reactions to sudden increases in Catholic immigrants.Most contain some variation on Blaine's original's line that "no money . . .shall ever be under the control of any religious sect" (DeForrest 2003).These state-level provisions are a strong barrier that status quo defenders use against those who wish to implement school choice. Instead of confronting status quo defenders on their terms (i.e., the Establishment Clause), DeVos has shifted debate toward the long-standing conservative claim that Blaine Amendments violate the Free Exercise Clause because they limit religious institutions' access to educational opportunities.The issue came to the fore in the wake of the Supreme Court's 2017 ruling in Trinity Lutheran v. Comer.Missouri instituted a program whereby preschools could apply for state funds to be used to purchase recycled tires to resurface playgrounds.Religious preschools were prohibited from applying.The Court struck down the prohibition by 7-2, but a footnote limited the scope of the ruling: "This case involves express discrimination based on religious identity with respect to playground resurfacing.We do not address religious uses of funding or other forms of discrimination". On the same day the Court issued Trinity Lutheran, it also vacated and remanded a Colorado Supreme Court decision that had invalidated a voucher policy.Douglas County had passed the Choice Scholarship Program, which granted 500 scholarships that parents could use on private school education.The state supreme court struck down the Program under the state's Blaine Amendment.The US Supreme Court, however, called for Colorado courts to reconsider the decision in light of Trinity Lutheran.The president of the group representing the pro-voucher litigants commented that the remand gave hope to those "who simply want the right to choose the schools that are best for their kids" (Kramer 2017). 
Whether Trinity Lutheran was a victory for school choice or public schools is a matter of interpretation.The President of the National Education Association released a statement that applauded the "Court's refusal . . . to issue a broad ruling" (Garcia 2017).A senior staff attorney for the ACLU's Program on Freedom of Religion and Belief tepidly agreed, but also worried that the Court might cross the "constitutional line" of validating vouchers (Weaver 2017). 11Taking Trinity Lutheran and the Colorado remand together, one liberal commentator saw the situation as more dire.She highlighted that Trinity Lutheran had clear implications beyond scrap tires and church playgrounds."While the Court did not outright kill the Blaine Amendments," she wrote, "it certainly gave heart to school choice advocates who see the Court veering in their direction" (Strauss 2017c). Indeed, this is exactly how DeVos interpreted the ruling.In a statement issued by the Department of Education, the Secretary claimed victory: "This decision . . .sends a clear message that religious discrimination in any form cannot be tolerated in a society that values the First Amendment.We should all celebrate the fact that programs designed to help students will no longer be discriminated against by the government based solely on religious affiliation" (DeVos 2017b).The school choice overtones are obvious.Less obvious is the subtle shift from discussing vouchers in terms of the Establishment Clause and their possible unconstitutionality to discussing them in terms of their absolute necessity to uphold free exercise. DeVos has gone beyond symbolism in taking steps toward implementing conservative educational policies.Related to Trinity Lutheran, a leaked copy of the Education Department's 2018 agenda shows that DeVos intends to soon change regulations on faith-based entities to obtain grants, as well as to rewrite regulations that restrict faith-based colleges from receiving federal money (Stratford 2018).Furthermore, DeVos lobbied for a 2018 federal budget that sought: cuts in her own department; $400 million dollars to expand vouchers for private and religious schools; and another $1 billion to encourage public school systems to adopt choice-friendly policies.In a move that speaks volumes, the budget sought to merge the private-and-charter school office with the K-12 public education office.Lastly, via executive order Trump directed the Secretary of Education to revise or rescind federal regulations that she felt did not comply with broad, open-to-interpretation federal laws that prohibit the Department from interfering with state and local control.Amidst strong veto stakeholders (e.g., teachers' unions such as the NEA, the ACLU, Democrats), this directive gives DeVos discretionary power to make conditions more amenable for those who seek to institute school choice. 
Finally, DeVos and Trump have crafted policies on the margins that indeed institute elements of school choice.For instance, the new tax code includes credits for working and middle-class families who put money into Education Savings Accounts for K-12 education.Carol Burris, Executive Director of the Network for Public Education, warns that DeVos is playing "the long game" in an effort to enact major change at a gradual pace (Strauss 2017d).According to Burris, Education Savings Accounts are now the method by which the GOP seeks to create a national voucher program.The implementation strategy has been to offer them first to "student groups that evoke public sympathy-students with disabilities, low-income kids, the children of parents in the armed forces."Then, "each legislative season, the selected groups expand and the caps are raised."The goal, then, is to slowly enlarge choices' application until it is the dominant policy.If true, then both the appointment of DeVos and her subsequent preferred strategy appear to be symbiotic drift.Desensitizing the public to choice, reframing the issue from Establishment to Free Exercise, promoting legal challenges to Blaine Amendments, allowing a few groups access to vouchers-the collection of subtle actions add up a significant, if slow-moving change in national understandings and, DeVos and Trump certainly hope, policies. Concluding Discussion Applying institutional change strategies to unilateral presidential actions in an intercurrent state points to lessons for all three literatures.We discuss those and finish with a suggestion for approaching the study of Donald Trump. 4.1.Implications for the Regime Politics, Institutional Change, and the Unilateral Presidency Regime politics took the study of the Supreme Court away from behavioralism.In doing so, though, it largely views courts as agents of the dominant national coalition.Sometimes this is true, for the Court does carry out the will of its affiliated elected officials.Nevertheless, in "bringing the courts back in," regime politics left out elements of judicial agency (Shapiro 1993;Barnes 2007).Put simply, judges are not merely agents of elected politicians.They have preferences, authority, and input within a system of separation of powers.The relationship between courts and other institutions is two-way-they affect each other, and using the institutional change approach allows us to see courts as institutions that may thwart political actors. Though courts do not appear in every issue, they have run the gamut of strong-to-weak veto players.Lower federal courts have challenged, and the Supreme Court has upheld, the Trump travel ban.Oppositely, as discussed above in the drones section, one federal judge saw the president's deployment of drones as beyond the jurisdiction of his court.In between the extremes of vetoing a policy and excusing oneself from the debate, we have seen other ways in which law and courts structure the policy-making process.In repealing the individual mandate, Republicans leaned on Chief Justice Roberts' majority opinion to link health care to a tax cut bill.Some conservatives even believe that removing the tax penalties opens the door to further judicial review of the entire Obamacare package.Meanwhile, the constitutional framing of school choice between Establishment and Free Exercise concerns helps structure who obtains the upper hand in the pursuit of, and defense against, vouchers. 
Combining regime politics assumptions with institutional change approaches not only "brings the courts back in" (Shapiro 1993;Barnes 2007), but it also "puts the law back in public law."A central tenet of regime politics is the notion that the law itself is a driving force of judicial behavior and that it matters beyond simply constraining (or failing to constrain) judicial decision-making (Gillman 2001).As regime scholars would point out, identifying determinants of Supreme Court votes is not the final word in the study of law and courts.In the Trump cases above, even in instances where courts are entirely missing from the policy narrative, the law itself plays a vital role in structuring the options and strategies of political actors.For instance: in granting different levels of discretion, the law affects political strategies; in detailing the policy-making and administrative processes, the law helps establish who is allowed to be part of a policy regime (McGuinn 2006).This study focuses attention on how law shapes the constitutive rules of the game, which create the playing field of politics-whether or not courts are involved. Beyond focusing on law and courts, we advocate expanding the institutional change approach into the study of American institutions.Many institutions are involved in the policy-making process and administration of law.Congress formally passes bills, and sometimes takes the lead role in pushing through change (such as Speaker Nancy Pelosi's original leadership in the nationalized healthcare fight).Individual congressional committees, interest groups, bureaucratic agencies, state governments-these are all players in an intercurrent system.They all play the role of change agents (with high or low discretion) and/or status quo defenders (with strong or weak vetoes).They can all be interpreted through the lens of displacement, layering, conversion, and drift. We especially advocate exploring presidentially-led change via the four strategies.The study of unilateralism provides insight into presidential powers, but it stops short of showing how those powers operate in context.Presidential leadership is not just about one's ability to act, but one's ability to act shrewdly and effectively.Vetoes, signing statements, treaty-making, executive agreements, and the like are important tools available to any president.But the simple use or non-use of these powers does not tell the full story of how presidents create, start, or fail to enact change.Put differently, there is more to unilateral presidential leadership than the blunt use of constitutional and extra-constitutional powers.It also requires knowing how, when, and where to use those powers. 
We believe the use of unilateral powers not only gives presidents strong control over many areas (e.g., command of the military, ability to shape legislation), but that those powers also give them entry into institutional change pathways.For example, Trump's use of executive authority to enact a hiring freeze is not simply a brazen attempt to "drain the swamp."It also has far-reaching effects into domestic policymaking and international diplomacy, as adding Defense personnel while freezing State Department hiring helps displace authority from the latter and into the former.That move itself probably says something about the way in which Trump intends to approach international relations.Moreover, as Barnes (2008) contends, one institutional change strategy can be a catalyst and partner to another.As shown, some conservative analysts believe that layering the repeal of the individual mandate is the first step toward complete displacement of Obamacare writ large. Implications for the Study of Donald Trump We close with a simple but perhaps easy to overlook observation: for better or worse, when political actors pursue change strategies, it injects a great deal of agency into a political system.This is especially true, we believe, in the intercurrent United States, where overlapping powers gives a multitude of institutions the ability to affect politics and policy.Take the presidency.Executive leadership is not just a function of congressional majorities or divided government (Conley 2003b;Mayhew 2005), one's place in recurring governing cycles (Skowronek 1997;Nichols 2012), or the waxing and waning norms of unilateral presidential powers (Howell 2013;Cash 2018).Presidential leadership does not merely require the power to persuade or even to use unilateral powers.It also requires ingenuity and creativity. In an intercurrent system, presidentially-led change rarely comes about in order shattering and order creating (Skowronek 1997) ways.Instead, it requires actors to take stock of their context: to identify their own discretion levels, as well as the strength of their opposition.It then requires them to come up with a strategy based on that context.As the descriptors of the strategies indicate-subversive, insurrectionary, opportunistic, and symbiotic-change pathways need not always be ethical (Nichols 2014(Nichols , 2015)).Though this article focuses on how, when, and where, we close with another question.It might be trite, but we believe our analysis makes the following unargumentative fact even more pressing: who occupies an office matters.And while some have treated Trump as an interregnum-as a character who goes beyond our traditional assumptions about American politics-we offer the opposite conclusion.As a person, Trump might seem different; but as a president, he faces the same challenges as his predecessors: he wants to enact change but faces a host of challenges that vary issue to issue. In this way, we believe that unifying regime politics assumptions, unilateral presidency insights, and the institutional change approach might provide an entry point for studying the 45th president.It would be easy to say that Trump is unpredictable or philosophically indiscernible.His economic preferences (e.g., deregulation, tax cuts) are tractable; but what are Trump's moral, ideological, and constitutional commitments?Is he a good-faith traditionalist?Is he an originalist, a structuralist, or something else? 
If anything, Trump's early administration shows him to be a strong proponent of executive power.Now, most presidents-regardless of era and political party-favor executive power once they reach the Oval Office.John Adams oversaw the Alien and Sedition Acts.Thomas Jefferson purchased Louisiana despite his own constitutional reservations.Modern presidents, too, have expanded Article II powers. 12On one hand, maybe Trump falls in line with his presidential predecessors.On the other hand, maybe he is more extreme-wholly unconcerned with separation of powers and wholly concerned with functional use of authority to implement individual policies.The truth is probably somewhere in between.To be sure, Trump has not hesitated to use executive power to institute a travel ban, pull the US out of international accords, and levy tariffs on heavy metals.Less publicized, he has also empowered himself (and those answerable to him) to more easily use drones and to curtail regulations on private schools.Even in arenas where Trump has had relatively restricted discretion, he has used control of his branch to affect changes in governance.The hiring freeze at the State Department and the increase in the Defense budget are not themselves policy ends, but means toward effecting a change toward a more militaristic foreign policy. Moreover, Trump's rise in pop culture came via celebrity status on The Apprentice.He understands media attention and thrives on using rhetoric to control national narratives.The most well-known example is Trump's personal nicknames for his political enemies (e.g., Crooked Hillary, Lying James Comey, Cheatin' Obama).More substantively, Trump has used rhetorical devices to change the terms of policy debates.We show above how the administration has sought to desensitize others to school choice and to limit the effect of LGBT and Establishment Clause concerns. Half the battle in creating change-especially in an intercurrent system-is locating oneself in the typology.Can you leverage discretionary rules to convert the ways of the old guard to your preferences?Can you overpower and displace weak defenders of the status quo?Although Trump might have come to office with no political experience, these are questions that he would have faced in his previous corporate career.In addition, the other half of the institutional change battle is entrepreneurialism.What can you do with limited discretion and strong opponents?How can you redirect pathways with strong veto points?In fighting Obamacare, Republicans tacked the repeal of the individual mandate on top of a bill that most co-partisans could not deny.In turning toward school choice, Trump has avoided a head-on all-or-nothing debate.Instead, the administration has issued a rhetorical assault, as well as taken incremental steps that slowly break from the ways of the past.We do not take a stance on the normative impact of either policy, but we would argue that layering of Obamacare and drift in school choice are entrepreneurial solutions to overcoming entrenched status quo interests. Perhaps Donald Trump is less predictable than other presidents.But there are some items that we have come to count on: strong unilateral actions, assessment and attack of political enemies, and fairly innovative methods in which to pursue policy preferences.The institutional change approach captures and highlights these items, making it a useful tool for exploring the intricacies of an administration that many scholars see as analytically impenetrable.
2018-12-06T18:10:35.775Z
2018-07-06T00:00:00.000
{ "year": 2018, "sha1": "278552c4c2b22747f5c972188254f1860bcbed19", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-471X/7/3/27/pdf?version=1530873364", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "278552c4c2b22747f5c972188254f1860bcbed19", "s2fieldsofstudy": [ "Law", "Political Science" ], "extfieldsofstudy": [ "Sociology" ] }
233175981
pes2o/s2orc
v3-fos-license
Nanoencapsulation for Agri-Food Applications and Associated Health and Environmental Concerns Food safety and security are vital to guarantee a sustainable and reliable energy source for human. There is a recent trend of nano-encapsulating bioactive compounds from both plant and animal sources and their utilization for various food applications (1). Nanoencapsulation has gained special attention because of its unique feature for efficient encapsulation, enhanced stability, and better controlled release of encapsulated materials (2, 3). Nanoencapsulation is also applied for food packaging system with the use of biodegradable polymers reinforced with nanofillers as a sustainable and environmentally friendly option (4). However, the incorporation of compounds into the food packaging system at nanoscale (particle size between 1 and 100 nm) has raised concerns on their migration and release into food matrices and the health effects lying with the consumption of such foodstuffs. Hence, it becomes paramount to study the migration behavior into food matrices and associated toxicity after entering the human body as well as the biodegradability/toxicity and its role in the environment. Several concerns on the use of nanoparticles (NPs), their release kinetics, absorption behavior in the body, degradation kinetics and their long-term effects are uncertain and unexplored, therefore, in-depth research is required on these aspects to understand the broader figure of the story. Hence, we strongly recommend exploring these aspects to reveal and disseminate the underlying safety concerns associated with the use of nano-encapsulated particles and to avoid any unfortunate and unprecedented outcomes in the future. INTRODUCTION Food safety and security are vital to guarantee a sustainable and reliable energy source for human. There is a recent trend of nano-encapsulating bioactive compounds from both plant and animal sources and their utilization for various food applications (1). Nanoencapsulation has gained special attention because of its unique feature for efficient encapsulation, enhanced stability, and better controlled release of encapsulated materials (2,3). Nanoencapsulation is also applied for food packaging system with the use of biodegradable polymers reinforced with nanofillers as a sustainable and environmentally friendly option (4). However, the incorporation of compounds into the food packaging system at nanoscale (particle size between 1 and 100 nm) has raised concerns on their migration and release into food matrices and the health effects lying with the consumption of such foodstuffs. Hence, it becomes paramount to study the migration behavior into food matrices and associated toxicity after entering the human body as well as the biodegradability/toxicity and its role in the environment. Several concerns on the use of nanoparticles (NPs), their release kinetics, absorption behavior in the body, degradation kinetics and their long-term effects are uncertain and unexplored, therefore, in-depth research is required on these aspects to understand the broader figure of the story. Hence, we strongly recommend exploring these aspects to reveal and disseminate the underlying safety concerns associated with the use of nano-encapsulated particles and to avoid any unfortunate and unprecedented outcomes in the future. 
ABSORPTION BEHAVIOR OF NANOPARTICLES IN THE BODY The particles within nanometer range may behave differently within the human body with different biological fate i.e., the levels of absorption, distribution, metabolism, excretion and potential. The biological fate of NPs is dependent on their physicochemical properties (e.g., composition, dimensions, interfacial properties structure and physical state) as well as the changes they undergo while passing the gastrointestinal tract (GIT) (5). For example, the biological fate of lipid NP varies depending on whether it is directly absorbed or normally digested by the human body (6,7). The smaller indigestible NPs accumulate in organs at a faster rate compared to a larger size. Besides this, metallic (Ag and Au) and inorganic (TiO 2 and SiO 2 ) non-digestible nanoparticles are reported to cross the layer of epithelial cells through various routes such as paracellular, transcellular, or persorption (8). Similarly, mineralo-organic NPs formed from calcium, carbonate and phosphate can lead to ectopic calcification and kidney stones. Further the mineral particles may be involved in the immune tolerance against the gut microbiota and food antigens (9). The NP may be either digested, accumulated or transferred into the systemic circulation via the blood or lymph systems after being absorbed into an epithelium cell (10). NPs may translocate through the human body followed by metabolization, excretion, or accumulation within certain tissues after exiting Graphical Abstract | Applications of nanoencapsulation for agri-food industry. epithelial cells (6). However, an in-depth investigation is desired to reveal the fate of direct adsorption of both indigestible and digestible lipid NPs in humans. Fu et al. (11) observed the toxic effects of Ag NPs after being absorbed through the intestine to the liver of the mice when administered orally. The Ag NPs were also reported in spleen and liver when administered intravenously. However, with a lower absorption rate and the NPs were finally excreted through urine and feces. Choi et al. (12) demonstrated that non-cationic surface charged NPs (<34 nm) could efficiently translocate from the lungs to mediastinal lymph nodes. NPs (<6 nm) were rather translocated rapidly from lungs via lymph nodes to the bloodstream and finally cleared by the kidneys. Further, Gerloff et al. (13) reported the cytotoxic with DNA damaging effects of TiO 2 , SiO 2 , ZnO and MgO NPs along with carbon black on human intestinal Caco-2 cells. Alterations in the microbiome in GIT leads to various gut disorders like inflammatory bowel diseases and metabolic syndrome (14). It is speculated that TiO 2 and Ag NPs may alter the gut microbiota (15). This is attributable to the antimicrobial property of NPs and the production of reactive oxygen species (ROS) (15,16). However, further indepth research is required to reveal the absorption behavior and biological fate of various NPs. RELEASE KINETICS OF NANOPARTICLES The release of bioactive compounds is referred to their translocation from one site to another over a time period. Several factors influence the release of NPs, namely (i) thermodynamic factors; (ii) kinetic factors; (iii) chemical structure, particle size and weight of nano-encapsulates; (iv) physicochemical properties like volatility and hydrophobicity; (v) concentration; and (vi) oral processing behavior (17). Release of the bioactive components at the target site known as "Targetability" is another important aspect of release kinetics. 
This can be achieved through the utilization of liposomes, nanoliposomes or any other nanocarrier systems. The target mechanism can be either active or passive. For example, incorporating antibodies in the lipid carriers is a form of active mechanism while targeting through the particle size of the vesicles is a passive form (18,19). Several in vivo studies using various NPs demonstrate hazard identification, however, caution should be taken while extrapolating their mechanistic results for hazard characterization and subsequent risk on human health (20). Different NPs have been reported to trigger the release of ROS and subsequently leading to oxidative stress and inflammation via their interaction with the reticuloendothelial system (20). NPs do not bind to the cell membrane but have direct access to the intracellular proteins, organelles and DNA, thereby leading to potential toxicity (21). Having said that, the plausible interactions of NPs with cell components are not fully understood and need further in-depth research. Few studies suggest that NPs may pass the blood-brain barrier but still not clear whether it is a generic effect or shown only by specific subgroup (22). Further, the transfer of NPs across the placenta or via breast milk has the potential for embryotoxicity (23). The data on distribution behavior of NPs in the reproductive cells are insufficient to draw any conclusion. Therefore, investigation should be focused on repro-toxicity of NPs and their passage through the placenta (6). In addition, after the release of NPs into the environment, surface water forms on the NP's surface creating the entry points and dispersion into soil and soil biota. At this stage, NPs undergo various transformations such as physical, chemical and biological transformations (24). NPs can be bio/geo-transformed in the soil leading to their toxicity, generation of oxidative stress, and absorption by plants that ultimately pose alarming concerns for human health via entering into the food chain (25). NPs get absorbed through roots and then translocate and accumulate to aerial parts via biotransformation and bioaccumulation (26). These scenarios highlight the need for in-depth kinetic studies of NPs and their potential health and environmental concerns. TOXICITY OF NANOPARTICLES Among the various compounds, food-grade TiO 2 is widely used for food applications and hence the safety aspects of its NPs should be evaluated (27). Studies suggest TiO 2 NPs to be more toxic compared to larger particles of TiO 2 (28). Oral ingestion of TiO 2 (>100 nm) has a lower toxicity than TiO 2 (<100 nm) (29). A study revealed an elevated level of elemental Ti in human blood for 6 h after intake of food-grade TiO 2 (30) suggesting easy absorption of NPs in the GIT (31). Besides this, TiO 2 NPs may lead to reproductive issues upon acute oral exposure. For example, Philbrook et al. (32) observed fatality in pregnant mice treated with 100 and 1,000 mgkg −1 TiO 2 NPs. In addition, the effects on cardiac and inflammatory responses (33); blood and bone marrow system (34) were noticed in mice exposed to TiO 2 NPs. Another aspect of toxicity lies with how NPs interact with various food components. NPs can interact with food components (e.g., phospholipids, sugars, nucleic acids) and influence the absorption and release kinetics (35). Proteins along with other biomolecules bind and trap NPs which affects their digestion process. 
Similarly, carbohydrates, fatty acids and proteins play a significant role in the uptake of Ag NPs into Caco-2 cells, where food components resulted in an enhanced uptake of NPs by 60% (36). In addition, the water activity of food impacts the release kinetics of NPs (37). The pH and composition of the food also affect the stability, dissolution, and toxicity of NPs (38). Solubility is another crucial factor for the toxicity of NPs. TiO2 and SiO2 NPs are insoluble, while Ag and ZnO NPs are partially/completely soluble in GIT fluids (37). Therefore, the uptake and toxicity of soluble NPs (e.g., ZnO NPs) are enhanced (39). Further, cytotoxic effects of Ag NPs via altered membrane permeability and integrity have been observed in mammalian cells (40). NPs entering mammalian cells via endocytosis are translocated to lysosomes, while NPs passing the plasma membrane via diffusion enter the cytoplasm and are less toxic than the former (41). NPs attach to membrane proteins, damage mitochondria and DNA, produce ROS, and alter enzyme activity as well as the integrity and functions of cells (40). The toxicity is a result of the interaction of Ag NPs with proteins and the creation of a protein corona with altered functions (42). Even changes and mutations in DNA may occur (43). However, toxicity varies with the type of NP and, therefore, proper hazard analysis of all types of NPs for food applications is essential (44). The wide application of nano-fertilizers and nano-pesticides to cultivated soils for agri-food production raises concerns about their long-term effects, which need to be evaluated (45). For example, it is estimated that 95% of the Cu used will eventually end up accumulating at a concentration of 500 µg kg−1 (46) and ZnO up to 16 µg kg−1 (47) in the soil and aquatic sediments. Studies have shown NPs causing damage to the lungs of rats after the consumption of TiO2 NPs (20 nm) and Fe NPs (48). Further, TiO2 NPs potentially damage the brain in dogs and fish (49). Both Ag and TiO2 NPs have demonstrated cytotoxic and genotoxic effects due to ROS generation, leading to cell proliferation and DNA damage in mice and human cells (50). Therefore, NPs could be dangerous for both human beings and the environment. Hence, the application of nanoencapsulation and associated NPs for agri-food purposes should be handled with great care and responsibility. The unauthorized and haphazard use of NPs can contaminate both soil and plant systems and ultimately intoxicate the agricultural ecosystem (51). DISCUSSION Nanoparticles are widely used in the agriculture and food sector for enhancing the productivity and quality of foods (52). Despite the various positive applications of nano-encapsulated bioactive compounds widely reported by several studies for agri-food applications (Table 1), the associated potential risks for human health should not be underestimated. The mechanisms of release kinetics of nanoparticles from various formulations and production processes need to be characterized and fully elucidated. In addition, the knowledge gap concerning the biological fate, distribution and accumulation of NPs in humans raises concerns about their use and potential toxic effects (6). The application of NPs for food packaging can cause toxic effects upon migration of the NPs from the packaging system to the foodstuffs (64).
Since nanotoxicology and nanoecotoxicology are novel scientific fields, therefore risk assessments are paramount (65). In addition, the proper rules and regulations need to be set up to check the haphazard and extensive use of NPs without investigating the possible long-term effects on human and animal health. The lack of information on risk assessment and proper regulation highlights the need for further in-depth research (66). Therefore, with these unanswered questions and valid reasons for health and environmental concerns associated with the use of NPs for agri-food application, we strongly advocate for in-depth and unbiased additional research in the field to ensure safe and sustainable agri-food products. This will further guarantee the safety and security of food and nutrition for human and animal health besides a sustainable environment. CONCLUSION As concluding remarks, there is an urgency to increase and spread the knowledge and perception on NPs, their beneficial applications as well as associated risk for agri-food applications and how to tackle them to guarantee the safe, healthy and sustainable agri-food and environment for future generations. AUTHOR CONTRIBUTIONS DKM wrote the original draft. AKM help in editing during article. PK conceptualized the manuscript and did the final editing of all sections.
2021-04-08T13:16:58.711Z
2021-04-08T00:00:00.000
{ "year": 2021, "sha1": "83fd1314c4a8591d3ad625c625409b8b85286cd3", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2021.663229/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "83fd1314c4a8591d3ad625c625409b8b85286cd3", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
49543743
pes2o/s2orc
v3-fos-license
Selective Surfaces of Black Chromium for Use in Solar Absorbers One of the applications of selective surfaces is to improve performance of solar absorbers. The purpose of this work is to produces selective coatings with high absorption of solar radiation in the range of UV/Vis/NIR. It was prepared a selective surface composed of black chromium (Cr/Cr2O3) deposited on substrates of AISI 304 stainless steel using the technique of electrolytic deposition for application in solar thermal absorbers. The great parameters for deposition consisted of a continuous electric current of 2A for 90s, at a constant temperature of 40°C. After deposition, the samples under went to a heat treatment at 600°C for 2h for oxidation. The coatings thicknesses were determined. From the SEM analysis coupled with EDS, it was found that the microstructures reported sample of cermets. The XRD results show diffraction peaks related to metallic chromium (Cr) and chromium oxide (Cr2O3). Spectral absorptance values more 90.0% were found. Introduction The solar energy is currently conceived as the most promising energetic resource to use in the next years, mainly the conversion of light energy into electrical energy 1 .One of the parameters to be optimized for this high use is increasing the energy conversion efficiency, whereas that in heliothermal power plants; which has as function, convert light energy into heat energy and then into electrical energy.The selective surfaces combine in high absorptance of radiation ultraviolet, visible and near infrared (UV/Vis/ NIR regions), obtaining values more than 85.0%, with low emittance (medium/distant infrared regions), with values less than 15.0%.For the temperature range in which the surface emits radiation 2 , obtaining then a factor known for selectivity, which is given by the ratio of absorptance and emittance, being non-dimension and that the higher is your module, better will be the optical characteristics of selective surface. The goal of this work is establish the parameters that best provide the optical properties of absorptance more than 90.0% on the selective coatings.Black chromium is one of the most commonly studied and used solar selective coatings in solar collector systems for the efficient conversion of solar energy into thermal energy 3 .It has the required high absorptance in the visible spectral and low emissivity in the infrared spectral to make it a desirable solar selective coating 4 .A variety of deposition techniques such as chemical vacuum deposition, CVD, spray, sputtering and electroplating are available for coating preparation.Electroplating technique has advantages such as deposition on large areas, low cost and simplicity 5 , being indicated for application in solar thermoelectric plants.In this paper, was studied the electrolytic deposition of black chromium varying the deposition parameters. Experimental Techniques The black chromium films were deposited by electrolytic deposition on substrates of stainless steel AISI 304 3 cut in the dimensions of 30mm x 20mm x 3mm. 
The electrodeposition of black chromium Mechanical polishing was done with grinding paper of 240-600 mesh and polishing pastes of 9 µm and 3 µm in order to obtain smooth surfaces. After polishing, the steel substrates were immersed in an alkaline solution of 10 vol% NaOH for 60 seconds to be degreased and then washed in distilled water for 30 seconds. In the sequence, the substrates were immersed in an acid solution of 10 vol% HCl for 30 seconds for surface activation, followed by rinsing in distilled water 6 . The chemical bath for black chromium electrodeposition is shown in Table 1. Several combinations of electric current and deposition time 6,7 were tested to optimize the selective coating, according to Table 2. After coating, the samples were heated at 600 °C for 2 hours for oxidation and formation of chromium oxide. The experiments were carried out in triplicate. Characterization of the coatings The thickness of the black chromium coatings was obtained using a thickness meter for ferrous and non-ferrous materials (Digimess model TT-210). The surface morphology of the coating and the elemental analysis were determined using a scanning electron microscope (SEM), Shimadzu SSX-500 model, with energy dispersive X-ray spectroscopy (EDS). The crystalline structure and the identification of phases of the deposited black chromium were studied by X-ray diffraction using a Shimadzu XRD 7000 diffractometer. Spectroscopy analysis of UV/Vis/NIR Spectral absorptance was measured using a UV-VIS-NIR spectrophotometer, Shimadzu UV-3600 model, operating with a diffuse reflectance accessory using an integrating sphere. Spectral reflectance measurements in the UV/Vis/NIR regions were used to investigate the optical selectivity of the black chromium coating electrodeposited on the substrate. The reflectance spectrum of the coating was used to derive the absorptance spectrum. Absorptance is the fraction of incident energy, at a given wavelength and direction, absorbed by the material. Absorptance and reflectance are related by Equation 1, where "r" is the reflectance and the transmittance ("t") is zero for opaque materials 8 : α = 1 − r − t = 1 − r (1). The average absorptance (ᾱ) was calculated from the graph of the absorptance as a function of the wavelength (λ) of the radiation, i.e., as the wavelength average over the measured range (Equation 2): ᾱ = ∫ α(λ) dλ / (λ_max − λ_min) (2). Results and Discussion Samples 1 and 5 presented the best results regarding solar absorptance in the UV/Vis/NIR regions. Their solar absorptances were above 90.0%. Thicknesses of samples coated The coating thicknesses of samples 1 and 5 are shown in Table 3. These are the best samples. Bayati et al. 7 report that increasing the coating thickness up to a critical value of 25.0 µm improves the absorptance. These values are therefore as expected.
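To make Equations (1) and (2) concrete, the short Python sketch below converts a measured diffuse-reflectance spectrum into spectral absorptance and then averages it over the measured UV/Vis/NIR range. This is a minimal illustration rather than the authors' analysis procedure: the wavelength grid, the example reflectance values and the trapezoidal integration are assumptions introduced here; only the relations α = 1 − r (opaque coating, t = 0) and the wavelength-averaged absorptance of Equation (2) come from the text above.

import numpy as np

def absorptance_from_reflectance(reflectance):
    # Equation (1): for an opaque coating (t = 0), alpha(lambda) = 1 - r(lambda).
    return 1.0 - np.asarray(reflectance, dtype=float)

def average_absorptance(wavelength_nm, alpha):
    # Equation (2): wavelength-averaged absorptance over the measured range,
    # evaluated with an explicit trapezoidal rule (an assumed numerical choice).
    lam = np.asarray(wavelength_nm, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    dlam = np.diff(lam)
    integral = np.sum(0.5 * (alpha[:-1] + alpha[1:]) * dlam)
    return integral / (lam[-1] - lam[0])

# Illustrative (made-up) reflectance spectrum over the UV/Vis/NIR range of the UV-3600.
lam = np.linspace(300.0, 2500.0, 221)                   # wavelength in nm
r = 0.05 + 0.10 * (lam - lam[0]) / (lam[-1] - lam[0])   # hypothetical coated-sample reflectance
alpha = absorptance_from_reflectance(r)
print("average absorptance = %.1f %%" % (100.0 * average_absorptance(lam, alpha)))

With measured reflectance data for the substrate and the coated samples in place of the hypothetical values, the same two functions yield the kind of average absorptances reported later in the Results for the uncoated substrate and the coated samples.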
The X-ray diffraction The X-ray diffraction measurements reveals that the structure of the black chromium film was mainly consisted of crystalline metallic chromium and chromium oxide.However, all the peaks that occurred coinciding with those given in the Joint Committee on Powder Diffraction Standards 9 ''JCPDS'' card 74-0326 of the Cr 2 O 3 structure.The Fe-γ phase peak of the substrate is confirmed by ''JCPDS'' card 33-0397.The diffractograms taken on black chromium film (sample 1 and 5) electrodeposited and stainless steel AISI 304 substrate (reference) annealed at 600°C for 2 hours are reported in Figure 1.It can be seen, for the uncoated substrate (reference) that all peaks of the XRD pattern were found to be indexed to Fe-ɣ phase with peaks at 2θ = 43.3°,50.5° and 74.3°.These peaks are related to the stainless steel AISI 304 10 .As Figure 1 shows, the diffraction pattern of black chromium film exhibited peaks that are assigned to crystalline metallic chromium, 2θ = 44.4°and 81.5°, and peaks of chromium oxide, 2 θ = 35.5ºand 64.0°.Peaks characteristic of the substrate also can be seen at the coated sample 4,11 . Surface morphology of the coatings The Figure 2 show that substrate before the coating with black chromium.It can be seen from the photomicrograph of the substrate that it presents a uniform and homogeneous surface.Surface microcracks resulting from the substrate preparation process can be seen with the sanding and polishing steps.Lira-Cantu et al 12 also observed similar surfaces in their analyzes.As the substrate is stainless steel it contains besides iron and carbon, metals such as nickel and chromium. A typical surface morphology of prepared black chromium eletrodeposited (parameters 4V, 2A for 90s) on substrate after heat treatment a at 600°C for 2h, is shown in Figure 3.This is the sample 1 that is presented by the scanning electron microscopy (SEM) of magnification 1000X for the black chromium films.The surface topography reveals particles highly irregular in size and shape.The chemical composition is presented by Table 4 and these two points (A and B) show a very different composition.Therefore, is an inhomogeneous film. In point "A" the chemical composition is rich in chromium (Cr) and poor in chromium oxide (Cr 2 O 3 ), because the weight percentage of oxygen is low.Otherwise, in point "B" the composition is poor in chromium and rich in chromium oxide, because the weight percentage of oxygen is high.This is a heterogenous films of cermet (Cr/Cr 2 O 3 ).This fact may have been caused by the adsorption of oxygen during the heat treatment of the sample.SEM and EDS results indicate that microstructure is mainly consist of metallic chromium and chromium oxide, which leads to black chromium coating. A typical surface morphology of prepared black chromium eletrodeposited (parameters 4V, 2A for 120s) on substrate after heat treatment a at 600°C for 2h, is shown in Figure 4.This is the sample 5 that is presented by the scanning electron microscopy (SEM) of magnification 1000X for the black chromium films. Sample 5 presented a morphology very similar to that of sample 1 in which the structure presented a rough and uniform shape, but with distinct regions that had their compositions analyzed by EDS. The point "A" represents a region with a much more significant amount of chromium than that of oxygen, which also leads to the conclusion that the presence of chromium oxide at this point is quite significant. 
The point "B" also indicates the presence rich in chromium oxide, because of the higher oxygen content in relation to chromium.Table 5 shows the chemical composition obtained by EDS for sample 5 at points "A" and "B". The UV-VIS spectroscopy The Figure 5 shows the determined absorptance spectral for substrate (reference) and for black chromium coatings for the sample 1 and 5.In this work, it was verified that the changing on the deposition parameters, such as, electric current, deposition time, bath temperature and heat treatmente after deposition affect the optical absorptance values. The samples selected to be shown, it was the best combination of the variables deposition.It was observed that spectral absorptance of the black chromium coatings heated at 600•C for 2 hours was increased compared to the substrate.This is an indication that heat treatement step provides the formation of chromium oxide phase at the coating, since this phase is directly responsible for the quality of the deposited film 13,14 .It can be seen that the absorptance in UV/Vis/NIR regions is high for the selected black chromium thin film, sample 1, with lowest thickness value 18.21 µm. The absorptance spectral for reference sample, uncoated substrate, has shown less values of spectral absorptance.The black chromium film has increased significantly the absorptance in the spectral region.The calculated average absorptance, in the range of UV/VIS/NIR for the uncoated substrate was 39.7% and for black chromium film was 95.3% (Sample 1).The Table 6 presents the results of absorptance of the sample 1 and 5. The other samples from this experiment had average absorptance values less than 90.0%, outside of the purpose of this work.Lee 13 has prepared multilayer type Cr/Cr 2 O 3 selective surfaces by electrolytic deposition with solar absorptance value of 80.0%.Also, Wijewardane and Goswami 14 reported the absorptance value for black chromium in stainless steel substrate of 85.0%. Conclusions Electrodeposited black chromium coatings were formed on stainless steel AISI 304 substrate using an electrodeposition technique.Among samples in which there was black chromium deposit by electrolytic deposition technique, the sample 1 and 5 was that achieved the prerequisites for selective surface with absorptance more than 90.0%. The XRD measurements indicate that the structure of the black chromium film was mainly consisted of crystalline metallic chromium and chromium oxide.The black chromium films have good optical properties for solar energy absorption. This work shows that the technique of electrolytic deposition of black chromium followed by a heat treatment for 2 hours at a temperature of 600°C used to produce coatings for application to selective absorbers for solar concentrator tubes.It been noticed from the results that the great tension is 4.0V, electric current is 2.0A and the deposition time is 90s. The heat treatment influenced the optical properties in a positive manner, making the rate of absorptance obtained from samples.This is due to the increased phase of chromium oxide at 2θ=35.5ºand 2θ=64.0º. Figure 1 . Figure 1.Diffractograms: of the substrate (reference) and of the samples 1 and 5. Figure 2 . Figure 2. The substrate of stain steel after coating by black chromium. Figure 3 . Figure 3. Microstructure of the sample 1 coated with black chromium. Figure 4 . Figure 4. Microstructure of the sample 5 coated with black chromium. Figure 5 . Figure 5. Spectroscopy of the substrate and the samples 1 and 5 coated. Table 1 . 
Chemical bath for electrolytic deposition. Table 2. Parameters for black chromium electrolytic deposition. Table 3. Thicknesses of samples 1 and 5, coated and heated. Table 4. Chemical composition of sample 1 at different points. Table 5. Chemical composition of sample 5 at different points. Table 6. Average absorptance of the substrate and samples 1 and 5.
2018-07-01T13:01:04.639Z
2017-11-09T00:00:00.000
{ "year": 2017, "sha1": "1ba8e84ed8cb4b7767cfd9834d3c2d415742769b", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/mr/v21n1/1516-1439-mr-1980-5373-MR-2017-0556.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1ba8e84ed8cb4b7767cfd9834d3c2d415742769b", "s2fieldsofstudy": [ "Materials Science", "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
54214417
pes2o/s2orc
v3-fos-license
Production of Sigma{\pm}pi{\mp}pK+ in p+p reactions at 3.5 GeV beam energy We study the production of Sigma^+-pi^+-pK^+ particle quartets in p+p reactions at 3.5 GeV kinetic beam energy. The data were taken with the HADES experiment at GSI. This report evaluates the contribution of resonances like Lambda(1405), Sigma(1385)^0, Lambda(1520), Delta(1232), N^* and K^*0 to the Sigma^+- pi^-+ p K+ final state. The resulting simulation model is compared to the experimental data in several angular distributions and proves suitable for evaluating the acceptance corrections properly. Introduction For already half a century the Λ(1405) has been a well-known resonance with strangeness S = −1, isospin I = 0 and spin 1/2. Even though its four-star character suggests a good understanding of this baryon, its inner structure is still a topic of investigation. Indeed, it is difficult to describe the Λ(1405) as a three-quark baryon, as it is lighter than its nucleon partner, the N*(1535). Also, the large mass difference to the Λ(1520) cannot be understood in terms of spin-orbit coupling [1]. With the mass of the Λ(1405) lying slightly below the K̄N threshold, another picture of this particle was established. From the analysis of the K̄N scattering length, Dalitz and Tuan predicted the Λ(1405) in 1959 [2,3], already two years before its experimental discovery. Nowadays the Λ(1405) is described in a coupled channel approach based on chiral dynamics [4]. Here this baryon is generated dynamically as an interference of two states, a K−p bound state and a Σπ resonance. However, this two-pole structure cannot be observed directly in the Σπ invariant mass spectrum, as the Σπ pole is located far in the imaginary part of the complex energy plane. With these predictions, the structure of the Λ(1405) is interesting in terms of a deeper understanding of the kaon-nucleon interaction. Experimental data are available for π−+p [5], K−+p [6] and γ+p [7] reactions. First results on p+p data were reported in [8], but only the neutral decay channel (Λ(1405) → Σ0π0) was investigated. We also study p+p reactions and concentrate on the charged decay channels (Λ(1405) → Σ±π∓). In order to extract the spectral function of the Λ(1405) precisely, all the reactions that contribute to the Σ±π∓pK+ particle quartet have to be considered. This includes the production of resonances like K*0, N* and ∆++(1232). A simplified model, assuming only an incoherent sum of these contributions, is finally used to describe the experimental data and extract the acceptance corrections. This model reproduces the experimental data for many kinematical variables. The analyzed data were taken with the High Acceptance Di-Electron Spectrometer (HADES) [9] at GSI in Darmstadt, Germany. In this beam time a proton beam of 3.5 GeV kinetic energy was incident on a liquid hydrogen target and a total of about 1.2 billion events was collected. Evaluation of resonance contributions The presented analysis concentrates on the production of the Λ(1405) together with a proton and a K+, followed by the decay of the Λ(1405) into Σ±π∓: p + p → Λ(1405) + p + K+ → Σ±π∓ + p + K+ (1). The general analysis steps to extract the Λ(1405) signal are presented in detail in [10,11]. These steps consist of identifying the four charged final-state particles (p, K+, π+, π−) and the reconstruction of the neutron via the missing mass to the four particles. The neutron component can be enhanced by an appropriate cut on the corresponding missing mass.
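The missing-mass technique used here can be written down compactly with four-vectors. The short Python sketch below is only a schematic illustration of that idea, not HADES analysis code: the measured momenta of the p, K+, π+ and π− are hypothetical placeholder values, the initial state is built from the stated 3.5 GeV kinetic-energy proton beam on a proton target at rest, and the particle masses are rounded PDG values.

import numpy as np

M_P = 0.9383  # proton mass [GeV/c^2], rounded PDG value

def four_vector(px, py, pz, mass):
    # Return (E, px, py, pz) in GeV for a particle of given mass and 3-momentum.
    e = np.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return np.array([e, px, py, pz])

def inv_mass(p):
    # Invariant mass of a four-vector with metric (+, -, -, -).
    m2 = p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2
    return np.sqrt(max(m2, 0.0))

# Initial state: 3.5 GeV kinetic-energy proton beam on a proton at rest.
t_kin = 3.5
p_beam_mag = np.sqrt((t_kin + M_P)**2 - M_P**2)
p_beam = np.array([t_kin + M_P, 0.0, 0.0, p_beam_mag])
p_target = np.array([M_P, 0.0, 0.0, 0.0])
p_init = p_beam + p_target

# Placeholder "measured" tracks (illustrative numbers only, GeV/c).
p_proton = four_vector(0.10, 0.20, 1.20, M_P)
p_kaon   = four_vector(-0.10, 0.10, 0.60, 0.4937)
p_pip    = four_vector(0.05, -0.10, 0.30, 0.1396)
p_pim    = four_vector(-0.05, 0.00, 0.20, 0.1396)

# MM(p, K+): sensitive to the Sigma-pi system, i.e. resonances such as the Lambda(1405).
mm_pk = inv_mass(p_init - p_proton - p_kaon)
# MM(p, K+, pi+, pi-): should peak at the neutron mass for the Sigma+- channels.
mm_all = inv_mass(p_init - p_proton - p_kaon - p_pip - p_pim)
print("MM(p,K+) = %.3f GeV/c^2, MM(p,K+,pi+,pi-) = %.3f GeV/c^2" % (mm_pk, mm_all))

The same invariant-mass function, applied to the initial state minus p and K+ only, gives the MM(p, K+) spectra discussed below, in which the hyperon resonances appear.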
After this selection, the Σ + and Σ − hyperons are reconstructed via the missing mass of p, K + , π − or p, K + , π + , respectively. By extracting the hyperon signals in the two spectra, the data sample is further purified, and at the same time it is physical signal, in both pictures the misidentification background is already subtracted. The treatment of this misidentification background is discussed extensively in [12]. The data (black dots) are compared to a sum of simulations. The strengths of the different contributions were determined by a simultaneous fit to four different observables, namely the two missing masses in fig. 1 and the two p, K + , π ∓ missing mass spectra where the missing mass of Σ + and Σ − are visible. Details about the fitting procedure to the four spectra can be found in [10]. To give a full description of the experimental data, several contributions have to be taken into account in the simulation. A list of considered channels with particles in the same final state as in reaction (1) (p, K + , π + , π − ) is given in table 1. The channels 9 and 10 can be rejected from the data sample, as demonstrated in [10]. The other channels, however, contain all the same final and intermediate state particles and can therefore contribute to the data in fig. 1. They are classified into two categories: • "Σπ resonant" are all channels, where the Σ and the π stem from the same mother particle. They should be visible as resonances in both MM(p, K + ) spectra of fig. 1. pair as an intermediate state, but the two particles are not stemming from a common mother particle. These channels give a broad, phase space like distribution in the spectra of fig. 1. The "Σ + π − non resonant" channels can only contribute to fig. 1 a), whereas the "Σ − π + non resonant" channels can only give significant contribution to fig. 1 b). MeV/c 2 , which then results in the solid gray histograms. To describe the spectra in fig. 1 completely, also phase space like distributions (red histograms), coming from the "Σπ non-resonant" channels, are needed. A priori it is not clear to which extent the different channels in table 1 contribute, as the spectra in fig. 1 are not sensitive to this information. Therefore, other observables have to be studied. shows the same data set as in fig. 1 a), but before subtracting the misidentification background (blue histograms). Fig. 2 a) scaling of the different channels is known from the simultaneous fit mentioned above. For the "Σ + π − non-resonant" contribution only channel 4 is included. This assumption gives already a rather good description of the data. As indications neither of ∆/N * nor of K * 0 resonances are visible, only this channel 4 is used in the further analysis. Indeed, also the "Σ + π − nonresonant" part shown in fig. 1 contains only this channel. However, possible contributions due to the channels 5 and 6 can not be excluded completely by this analysis. For example the production of a K * 0 (892) via channel 6 is only slightly above threshold and thus the cross section might be just too small to see a clear contribution to fig. 1 b). To identify the different contributions to the "Σ − π + non-resonant" part, the invariant mass of the proton and the π + (M(p, π + )) is studied in fig. 3. Only shows the result where the "non resonant" simulations contain only channel 7. The scaling factors for the different channels are again known from the fitting procedure. The data show an enhanced structure, which can not be described by the simulations. 
As a comparison, fig. 3 b) shows exactly the same data, but now including channel 7 as well as channel 8 into the simulations. The relative contribution of these two channels is a free parameter, which is obtained by a χ 2 fit to the experimental data points in fig. 3. The fit results in a negligible contribution of channel 7. With the inclusion of the ∆ ++ (1232) the experimental data can be described rather well. Due to this result, it is concluded that the "Σ − π + non-resonant" contribution is dominated by ∆ ++ (1232) production. Therefore, only the channel 8 is used in the simulation, which is already taken into account in fig. 1 b). With the presented analysis a simulation model with several contributions is obtained, which gives reliable descriptions of the observables investigated so far. However, the goal of this analysis is to understand and to describe the experimental data within the full HADES acceptance. This asks for acceptance corrections. For this purpose it might be not sufficient to study only invariant mass distributions, as they are not very sensitive to e.g. angular distributions of the produced particles. Therefore, angular distributions in the Center-Mass System (CMS), Gottfried-Jackson system and helicity system are studied in the next part. Detailed information about the properties of these frames and the corresponding angular distributions can be found in [12,15]. Angular distributions Starting point is again reaction (1). Here, three particles are produced in the entrance channel (Λ(1405), p and K + ). The momentum of the possible Λ(1405) is reconstructed via the missing four-vector to the proton and the K + (MV (p, K)). It is clear from the results above that this hypothetical particle does not always refer to a Λ(1405), but can also stem from all other channels of table 1. Fig. 4 and 5 show all angles between the three momenta in the three different frames for the subsample with an intermediate Σ + or Σ − hyperon, respectively. The nomenclature is the following: • θ A CM S : angle between particle A and the beam(target) direction in the Center-Mass System. • θ A−B A−C : angle between particle A and particle B in the reference frame where particle A and particle C are going back to back and have equal momenta (helicity angle). • θ A−bt A−C : angle between particle A and the beam/target-type protons (bt) in the reference frame where particle A and particle C are going back to back and have equal momenta (Gottfried-Jackson angle). By permutation of particles A,B and C nine different observables are obtained, where some of them are not kinematically independent. The different panels in fig. 4 and 5 show the comparison between experimental data and simulations. A reasonable agreement could not be obtained by using only phase space simulations for the different channels of tab. 1. The production of the Σ(1385) + in the CMS was found to be rather anisotropic [12]. The production of the Σ(1385) 0 is assumed to show the same behavior. Therefore, the simulation of channel 2 was folded with an angular distribution in cos θ . Additionally, the data sets in fig. 4,a) The overall good agreement between experimental data and simulations is a necessary precondition to use the obtained simulation model for acceptance and efficiency corrections and to finally extract cross sections for the different channels. Acknowledgements The author gratefully acknowledge support from the TUM Graduate School. The following funding are acknowledged. LIP Coimbra, Coimbra (Portugal):
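The angular-distribution study described above relies on boosting the measured four-vectors into a chosen two-particle rest frame and measuring the angle between two momenta there (CMS, Gottfried-Jackson and helicity frames). A minimal sketch of that computation follows; the boost and angle helpers implement the frame definitions quoted in the text (particles A and C back to back in their common rest frame), while the example four-vectors are invented for illustration.

```python
import numpy as np

def boost(p, beta):
    """Lorentz boost of four-vector p = (E, px, py, pz) into a frame moving with velocity beta."""
    beta = np.asarray(beta, dtype=float)
    b2 = beta @ beta
    if b2 == 0:
        return p.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = beta @ p[1:]
    E = gamma * (p[0] - bp)
    p3 = p[1:] + ((gamma - 1.0) * bp / b2 - gamma * p[0]) * beta
    return np.array([E, *p3])

def cos_angle(u, v):
    """Cosine of the opening angle between the spatial parts of two four-vectors."""
    a, b = u[1:], v[1:]
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def gj_and_helicity(pA, pB, pC, p_beam):
    """Angles of particle A in the A-C rest frame w.r.t. the beam (Gottfried-Jackson)
    and w.r.t. particle B (helicity)."""
    pAC = pA + pC
    beta = pAC[1:] / pAC[0]                    # velocity of the A-C system in the lab
    pA_r, pB_r, pbeam_r = (boost(x, beta) for x in (pA, pB, p_beam))
    return cos_angle(pA_r, pbeam_r), cos_angle(pA_r, pB_r)

# Hypothetical four-vectors (E, px, py, pz) in GeV for a Sigma-pi-K+ like topology.
pA = np.array([1.35, 0.20, 0.00, 0.60])    # e.g. the Sigma candidate
pB = np.array([0.70, -0.10, 0.05, 0.40])   # e.g. the K+
pC = np.array([1.10, -0.05, -0.02, 0.50])  # e.g. the proton
p_beam = np.array([4.44, 0.0, 0.0, 4.34])  # 3.5 GeV kinetic-energy proton beam (approx.)
cos_gj, cos_hel = gj_and_helicity(pA, pB, pC, p_beam)
print(f"cos(theta_GJ) = {cos_gj:.3f}, cos(theta_helicity) = {cos_hel:.3f}")
```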
2012-02-13T14:06:12.000Z
2012-02-13T00:00:00.000
{ "year": 2012, "sha1": "82b84d23c6d776d34c30d2d32f731e0208c06927", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1202.2734", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "82b84d23c6d776d34c30d2d32f731e0208c06927", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
255968545
pes2o/s2orc
v3-fos-license
Association between sleep duration and quality with rapid kidney function decline and development of chronic kidney diseases in adults with normal kidney function: The China health and retirement longitudinal study Research have shown that sleep is associated with renal function. However, the potential effects of sleep duration or quality on kidney function in middle-aged and older Chinese adults with normal kidney function has rarely been studied. Our study aimed to investigate the association of sleep and kidney function in middle-aged and older Chinese adults. Four thousand and eighty six participants with an eGFR ≥60 ml/min/1.73 m2 at baseline were enrolled between 2011 and 2015 from the China Health and Retirement Longitudinal Study. Survey questionnaire data were collected from conducted interviews in the 2011. The eGFR was estimated from serum creatinine and/or cystatin C using the Chronic Kidney Disease Epidemiology Collaboration equations (CKD-EPI). The primary outcome was defined as rapid kidney function decline. Secondary outcome was defined as rapid kidney function decline with clinical eGFR of <60 ml/min/1.73 m2 at the exit visit. The associations between sleep duration, sleep quality and renal function decline or chronic kidney disease (CKD) were assessed based with logistic regression model. Our results showed that 244 (6.0%) participants developed rapid decline in kidney function, while 102 (2.5%) developed CKD. In addition, participants who had 3–7 days of poor sleep quality per week had higher risks of CKD development (OR 1.86, 95% CI 1.24–2.80). However, compared with those who had 6–8 h of night-time sleep, no significantly higher risks of rapid decline in kidney function was found among those who had <6 h or >8 h of night time sleep after adjustments for demographic, clinical, or psychosocial covariates. Furthermore, daytime nap did not present significant risk in both rapid eGFR decline or CKD development. In conclusion, sleep quality was significantly associated with the development of CKD in middle-aged and older Chinese adults with normal kidney function. Introduction Chronic kidney disease (CKD) is a detrimental public health issue with an increasing prevalence and complications worldwide (1). In 2012, the overall prevalence of CKD was 11% in Chinese adults (2,3). As CKD is closely linked to the increased risk of various disease, such as diabetes mellitus (DM), hypertension, metabolic disorders, and cardiovascular disease (2), early identification and intervention of modifiable lifestyle-related risk factors for CKD are recognized as an effective option for preventing the development of this disease (4). Sleep is an indispensable element for optimal heath and quality of life. In recent years, accelerated aging in China raises serious concerns for middle-aged and older persons, where the circadian mechanisms increasingly become less efficient. Consequently, older people tend to sleep less and have poor sleep quality, which may lead to multiple chronic diseases, such as depression, headache, memory loss, CKD, obesity, DM, and hypertension (5)(6)(7)(8)(9). Epidemiological studies demonstrated that prevalence of sleep disturbances in CKD was ∼80% (10), and sleep duration and quality were modifiable risk factors that could effectively prevent CKD. Mechanistically, this relationship is associated with sympathetic overreaction, circadian rhythm and metabolic disorders (11,12). 
Several studies showed that inadequate duration and poor quality of sleep were increasingly associated with decline of kidney function and development of proteinuria (13)(14)(15). Furthermore, a recent study revealed that short or long sleep duration were related to the increased risk of CKD when compared with intermediate sleep duration (8). In contrary, previous meta-analysis indicated that short sleep duration was closely related to proteinuria rather than CKD development (16). Inconsistent findings such as these indicate that further studies focused on the association between sleep disturbances and CKD development needs to be evaluated. To address the above inconsistencies, this study explored whether sleep duration, quality were deleterious factors for rapid decline of renal function and the development of CKD in middle-aged and older Chinese adults within The China Health and Retirement Longitudinal Study (CHARLS) database, a nationally representative, longitudinal cohort with the measurements of serum creatinine and cystatin C. Study participants and design The China Health and Retirement Longitudinal Study (CHARLS) (17) was a project implemented using a multistage, stratified and proportionate-to-size sampling method. CHARLS included 17,708 participants from 150 counties and 450 villages within 28 provinces in mainland China. The baseline survey was carried out from June 2011 to March 2012. The detailed design and methods on the demographic, lifestyle factors, clinical or biochemical measurements and blood samples in the study were reported previously (17,18). CHARLS data, which were collected from representative participants of 45 years old and above from among the Chinese population, aimed to establish a higher quality database. The CHARLS prospective longitudinal cohort included data collected from two time-points (2011,2015). The exclusion criteria for this study were as follow: participants whose ages were under 45 years old; participants with missing information of baseline sleep duration, baseline sleep quality, baseline kidney functions, exit kidney outcomes and related information such as demographic, lifestyle factors, clinical or biochemical measurements. Based on these criteria, data from 7,761 participants were excluded, and a total of 4,086 participants with eGFR ≥60 ml/min 1.73 m 2 at baseline were included (Supplementary Figure 1). Ethical approval of CHARLS was authorized by the Biomedical Ethics Review Committee of Peking University (IRB00001052-11015) (17). All participants have signed and provided written informed consent before participating in the survey. Information on the materials for this study are available on the CHARLS project website. Assessment of sleep duration and quality Sleep duration and quality were collected from the baseline survey carried out in 2011 (17). The standardized question used was, "How many hours of sleep did you get per night (average hours per night-time sleep) during the past month?" The night-time sleep duration were stratified into three categories: short (<6 h/night), intermediate (6-8 h/night), and long (>8 h/night) (8,14,19). We selected <6 h as the definition for short sleep in the analysis to include those who have short sleeping duration despite self-reported sleep duration. Sleep quality was assessed by "How many days of restless sleep in a week?." We classified sleep quality into two categories: rarely or a little (0-2 days/week); and occasionally, most or all of the time (3-7 days/week). 
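The exposure definitions above are simple threshold rules on the survey answers: night-time sleep of <6, 6-8, or >8 h, and restless sleep on 0-2 versus 3-7 days per week. A minimal sketch of coding those categories is shown below; the function and variable names are illustrative and do not correspond to actual CHARLS field names.

```python
def sleep_duration_group(hours_per_night: float) -> str:
    """Night-time sleep duration categories used in the analysis."""
    if hours_per_night < 6:
        return "short (<6 h/night)"
    if hours_per_night <= 8:
        return "intermediate (6-8 h/night)"   # reference group
    return "long (>8 h/night)"

def sleep_quality_group(restless_days_per_week: int) -> str:
    """Sleep quality categories based on days of restless sleep per week."""
    return "0-2 days/week" if restless_days_per_week <= 2 else "3-7 days/week"

if __name__ == "__main__":
    for hours, days in [(5.0, 1), (7.5, 4), (9.0, 0)]:
        print(hours, days, "->", sleep_duration_group(hours), "|", sleep_quality_group(days))
```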
Assessment of kidney function The estimated glomerular filtration rate (eGFR) was calculated from serum creatinine and/or cystatin C using the Chronic Kidney Disease Epidemiology Collaboration equations (CKD-EPI) (22). The eGFR cr−cys was calculated using the CKD-EPI creatinine-cystatin C equation: where Scr is serum creatinine in mg/dl, Scys is serum cystatin C in mg/l, k is 0.7 for females and 0.9 for males, and α is −0.248 for females and −0.207 for males. Study outcomes The primary outcome was rapid kidney function decline, which was defined as an annualized decline in eGFR cr−cys or eGFR cr or eGFR cys of ≥5 ml/min/ 1.73 m 2 (23). Annualized eGFR decline was estimated by the formula of (eGFR at baseline -eGFR at exit)/followup time (4 years). The secondary outcome was the progression to CKD, which was defined by an annualized decline in eGFR of ≥5 ml/min/1.73 m 2 and final eGFR <60 ml/min/1.73 m 2 at exit. Assessment of covariates Participants voluntarily provided their demographic information, health related data and laboratory results at baseline from the questionnaires in CHARLS survey. Marital status was categorized into two groups: unmarried and married, unmarried included never married, separated, divorced and widowed. Educational level was categorized into four groups: illiterate, literate, primary school and middle school or above. Blood pressure, height and weight were measured with calibrated equipment. Body mass index (BMI) was calculated as weight/height 2 . Health-related factors, such as smoking, drinking, diabetes and heart disease were self-reported. Diabetes was defined as random glucose ≥200 mg/dl, fasting glucose ≥126 mg/dl, hemoglobin A1c ≥7% and physician-diagnosed diabetes or the use of hypoglycemic drugs. Statistical analyses Baseline characteristics of the population are shown as means ± standard deviations (SD) for continuous variables and as numbers and proportions for categorical variables in the categories of sleep duration or quality. One-way ANOVA analysis of variance, student's t-test or chi-squared tests were used to compare the characteristics of participants based on the categories of sleep duration or quality. Univariate and multivariable logistic regression models were used to investigate the association between sleep duration, quality and kidney outcomes with adjustments for baseline eGFR in model 1 and model 2. In addition, adjusted covariates in model 2 included age, sex, BMI, smoking status, living residence, blood pressure, self-reported heart disease, glucose, total cholesterol, triglycerides, high-density lipoprotein (HDL) cholesterol and uric acid. These were presented as adjusted odds-ratios (ORs) with 95% confidence interval (95% CI). Furthermore, potential modifications of the relationship among sleep duration, quality and rapid kidney function decline were investigated for the following variables: age, sex, BMI, smoking status, living residence, marital status, educational level, diabetes, total cholesterol, uric acid and high-sensitivity C-reactive protein (CRP) via stratified analyses and interaction testing. These variables were either suspected or traditional risk factors for kidney function decline. IBM SPSS version 23.0 (IBM Corporation, Armonk, NY, USA) was used for statistical analyses in our study. P < 0.05 was considered as statistically significant in all analyses. 
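The outcome definitions reduce to arithmetic on the baseline and exit eGFR over the 4-year follow-up: an annualized decline of ≥5 ml/min/1.73 m² defines rapid decline, and the same decline combined with an exit eGFR <60 ml/min/1.73 m² defines progression to CKD. The sketch below implements that classification together with an eGFR cr−cys calculation; only κ and α are quoted in the text, so the remaining coefficients are taken from the published 2012 CKD-EPI creatinine-cystatin C equation and should be verified against reference (22) (the race coefficient is omitted here).

```python
def egfr_cr_cys(scr_mg_dl: float, scys_mg_l: float, age: float, female: bool) -> float:
    """CKD-EPI creatinine-cystatin C eGFR (ml/min/1.73 m^2).

    kappa and alpha are the sex-specific values quoted in the text; the remaining
    exponents and constants follow the published 2012 CKD-EPI cr-cys equation and
    should be checked against reference (22).
    """
    kappa, alpha = (0.7, -0.248) if female else (0.9, -0.207)
    egfr = (135
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -0.601
            * min(scys_mg_l / 0.8, 1.0) ** -0.375
            * max(scys_mg_l / 0.8, 1.0) ** -0.711
            * 0.995 ** age)
    return egfr * 0.969 if female else egfr

def classify_outcomes(egfr_baseline: float, egfr_exit: float, years: float = 4.0):
    """Primary outcome: annualized decline >= 5; secondary: decline >= 5 and exit eGFR < 60."""
    annualized_decline = (egfr_baseline - egfr_exit) / years
    rapid_decline = annualized_decline >= 5.0
    incident_ckd = rapid_decline and egfr_exit < 60.0
    return annualized_decline, rapid_decline, incident_ckd

if __name__ == "__main__":
    base = egfr_cr_cys(scr_mg_dl=0.8, scys_mg_l=0.9, age=58, female=True)
    exit_ = egfr_cr_cys(scr_mg_dl=1.2, scys_mg_l=1.4, age=62, female=True)
    print(round(base, 1), round(exit_, 1), classify_outcomes(base, exit_))
```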
Participants baseline characteristics Supplementary Figure 1 illustrated the baseline characteristics of participants, where a total of 4,086 analyzed participants with eGFR cr−cys ≥60 ml/min/ 1.73 m 2 at baseline, their sleep duration and quality from CHARLS were included. The mean age of the included participants was 58.9 ± 8.7 years, 1,755 (43.0%) were male, which is shown in Supplemental Table 1, the mean eGFR cr−cys was 86.2 ± 14.4 ml/min/1.73 m 2 at baseline. Compared with participants of 6-8 h of night-time sleep, majority of those in the category of <6 h or >8 h of night-time sleep were unmarried, farmers, and less educated. Additionally, those in the category of <6 h of night time sleep showed a trend toward higher HDL cholesterol levels and lower baseline eGFR (Table 1). According to the daytime nap duration (Table 2), participants in the extended nappers group were mostly males, with a trend toward lower HDL cholesterol levels. Table 3 showed the baseline characteristics of participants in sleep quality (0-2 or 3-7 days of poor sleep quality). Participants with poor sleep quality were older and less educated, mostly females, non-smokers and non-drinkers. There were also a higher percentage of heart disease observed in these participants, presenting with lower BMI and uric acid in serum. Supplementary Table 1 showed the baseline characteristics of excluded participants. Compared with those who were included, participants excluded were more commonly urban men, smokers, non-drinkers, more educated and had higher triglycerides and uric acid. Association between sleep duration, quality, and study outcomes Based on data followed up for a median of 4 years, 244 (6.0%) participants developed rapid declines in kidney function, and 102 (2.5%) progressed to CKD. In the demographic, clinical, or psychosocial covariates adjusted model (Model 2), participants in the category of <6 h or >8 h of night-time sleep were similar in their risks for both rapid eGFR decline and CKD development compared to those with 6-8 h of nighttime sleep (Supplementary Table 2). The effects of day time nap on kidney function is shown in Supplementary Table 3. Non-nappers, short-time nappers and extended-time nappers were similar in their risks for both rapid eGFR decline and CKD development compared with moderate nappers in this analysis. When sleep qualities were assessed, the adjusted ORs for participants with 3-7 days of poor sleep quality who developed CKD was 1.86 (95% CI, 1.24 to 2.80) compared with those with 0-2 days of poor sleep quality ( Table 4). The associations of sleep duration, quality and the kidney function were further investigated in Supplementary Tables 4-9. Similar trends were observed in the association between sleep duration, quality, kidney primary and secondary outcomes defined by eGFR cr (Supplementary Tables 4-6) or eGFR cys (Supplementary Tables 7-9), though some of the comparisons were not statistically significant. Stratified analyses by potential e ect modifiers Stratified logistic regression analysis for associations between sleep quality and rapid eGFR cr−cys decline through the adjustment of several variables are shown in Table 5 and Supplementary Tables 10-13. None of the variables such as age, BMI, drinking, marital status, diabetes, education level, total cholesterol, uric acid, high sensitivity CRP or uric acid significantly modified the associations between quality and rapid eGFR cr−cys decline (P > 0.05 for all). 
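The association estimates above come from logistic regression: model 1 adjusts for baseline eGFR only, and model 2 adds the demographic and clinical covariates listed in the Methods, with results reported as ORs and 95% CIs. A minimal sketch with statsmodels on a synthetic table is shown below; the column names and the covariate subset are illustrative, not the actual CHARLS variables or the full model 2 adjustment set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis table; column names are illustrative, not CHARLS field names.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "rapid_decline": rng.integers(0, 2, n),
    "poor_sleep":    rng.integers(0, 2, n),          # 1 = 3-7 days/week of restless sleep
    "egfr_baseline": rng.normal(86, 14, n),
    "age":           rng.normal(59, 9, n),
    "male":          rng.integers(0, 2, n),
    "bmi":           rng.normal(23, 3, n),
    "sbp":           rng.normal(130, 18, n),
})

# Model 1: adjusted for baseline eGFR only.
m1 = smf.logit("rapid_decline ~ poor_sleep + egfr_baseline", data=df).fit(disp=0)
# Model 2: additionally adjusted for demographic/clinical covariates (subset shown).
m2 = smf.logit("rapid_decline ~ poor_sleep + egfr_baseline + age + male + bmi + sbp",
               data=df).fit(disp=0)

for name, m in [("model 1", m1), ("model 2", m2)]:
    or_ = np.exp(m.params["poor_sleep"])
    lo, hi = np.exp(m.conf_int().loc["poor_sleep"])
    print(f"{name}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```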
Discussion To the best of our knowledge, the present prospective longitudinal study was the first to demonstrate that poor sleep quality was associated with increased risk of CKD development in Chinese middle-aged and older people with normal kidney function, which provides clues to the risk factors affecting kidney function. In recent years, the phenomenon of accelerated aging in China raises serious concerns for middle-aged and older people. As the circadian mechanisms become less efficient in the elderly, they tend to sleep less and have poor sleep quality, which may contribute to a series of health problems, such as cardiovascular diseases, depression, headache and memory loss (9). Some studies showed that poor sleep quality was associated with higher risk of coronary heart disease (24-26). In addition, a higher proportion of depressive symptoms was associated with higher risk of rapid eGFR decline or CKD development in Chinese middle-aged or older adults with normal kidney function (27). Consistent with our study, another study from CHARLS suggested that long night-time sleep duration and poor sleep quality were associated with increased risk of CKD in middle-aged and older Chinese (18). This study highlighted the significant association of poor sleep duration and quality with the risk of CKD development in Chinese middle-aged or older people. CKD was previously reported to be associated with night-time sleep duration and quality in middle-aged or older adults (18). However, previous studies demonstrated inconsistent results regarding the relationship between sleep duration and kidney function decline or CKD progression. An observational cohort study of 502,505 UK Biobank participants, using clinical and genetic analyses, indicated that either <6 or ≥9 h of sleep duration was associated with a higher risk of CKD. Moreover, only short sleep duration was associated with the risk of an end-stage kidney disease (ESKD) outcome when their study population was limited to males, while short sleep duration was associated with higher odds for CKD in the genetic analysis (8). McMullan et al. (19) found that <6 h sleep duration was significantly associated with higher risk of a faster eGFR decline in over 4,000 females. Moreover, a previous study of 3,600 Japanese workers indicated that short sleep duration (5 h/night) increased the risk of CKD in shift workers but not in non-shift workers, while long sleep duration was not a risk factor for CKD in either group (28). Nevertheless, Nakajima et al. (29) showed that shorter sleep duration reduced the risk of CKD in Japanese males. A possible explanation for the inconsistency across studies is the use of different reference groups in the comparisons and different classifications of sleep duration; for example, 6-7 h (30), 6-8 h (14), and 7-8 h (29) were set as reference groups. In addition, <6 h, ≤5 h and <4 h of night-time sleep were regarded as short sleep duration in different studies (14, 29, 30). Overall, these studies show that the relationship between sleep duration and the risk of kidney function decline remains unclear. In this study, neither night-time sleep duration nor daytime naps had any significant effect on rapid kidney function decline or the development of CKD in middle-aged and older adults with normal renal function. More studies are needed to explore the reasons for these inconsistencies.
In the older population, creatinine-based eGFR is inaccurate because diet, physical activity, and muscle mass can affect creatinine levels (31). Cystatin C is a cysteine protease inhibitor produced by nucleated cells. Serum cystatin C may vary due to insulin resistance or inflammation (32, 33). The kidney outcomes were assessed by eGFR cr or eGFR cys alone in previous studies (18,34). Therefore, taking both cystatin C and creatinine measurements into consideration to determine the eGFR could improve the accuracy (35). The associations between sleep duration, quality and kidney outcomes defined by eGFR cr−cys were not fully explored in previous studies. Some interesting findings were observed in our clinical analysis. The use of eGFR cr−cys to evaluate kidney outcomes is more accurate than eGFR cr or eGFR cys alone (22,36). The serum creatinine and cystatin C measurements across a longitudinal study (27) provided an opportunity to explore the associations between sleep duration, quality and rapid decline of kidney function in a middle-aged or older population, which could be adjusted for known covariables and stratified by various clinical characteristics. Multi-center sleep and CKD studies could be conducted to further investigate the findings of our study and to confirm the reciprocal relationship between them in the future. Moreover, although the analysis showed that no variables significantly modified the association between sleep quality and rapid eGFR decline (P for interaction > 0.05 for all), the associations between sleep quality and kidney function appeared stronger among participants with BMI <24 kg/m2 and among non-smoking, non-diabetic, married, female participants. However, given the multiple testing and similar directionality of most associations, these results may not have significant clinical impact. Meanwhile, the ability to detect moderate interactions was limited by the current sample size, and a larger sample is needed to verify the lack of influence of these variables in the future. The mechanisms underlying the relationship between sleep and renal function need to be investigated further. We speculate that several potential mechanisms may underlie the effects of sleep duration and quality on renal function. Firstly, growing evidence showed that sleep duration was associated with the upregulation of inflammatory markers such as IL-6, TNF-α, CRP, AP-1 and STAT protein families, which may activate the immune response, aggravate kidney fibrosis and accelerate the decline of kidney function (37)(38)(39)(40)(41)(42)(43)(44). However, there were no significant differences in high-sensitivity CRP levels between the groups with different sleep durations and restlessness. Secondly, poor quality of night-time sleep may disrupt circadian rhythms, which causes changes in serum hormone levels, insulin resistance, inverted cortisol rhythms and increased blood pressure (45, 46). These findings show that sleep duration and quality are modifiable determinants of these established CKD risk factors (47)(48)(49). However, adjusting for blood pressure, glucose or self-reported heart disease did not affect the estimates for the risk of rapid renal function decline or progression to CKD in relation to sleep duration and quality.
This suggests that short sleep duration or poor sleep quality is associated with rapid decline of renal function or progression to CKD via mechanisms independent of these known risk factors, or that these endpoints did not capture the vascular and metabolic consequences related to alterations in sleep duration and quality. Overall, these findings need to be verified, and their mechanisms investigated, in further studies. This study has several limitations. Firstly, sleep duration and quality were self-reported, which may cause recall bias. Self-reported sleep duration and quality differ from objectively measured sleep. In a study of 669 individuals, those with objective sleep measured as 5 h per night may overestimate their sleep duration by 1.3 h, while those with objective sleep measured as 7 h per night self-reported their sleep duration accurately (50). Secondly, eGFR was only assessed at baseline and at the exit visit. If eGFR changed due to other factors, the decline of eGFR from 2011 to 2015 would not accurately reflect the underlying change of eGFR during that period. More frequent measures of eGFR would improve accuracy in evaluating the progression of CKD over time. Thirdly, urine albumin was not measured at baseline in our cohort, so neither adjustment for albuminuria nor analysis of the influence of urinary albumin excretion on the relationship between sleep duration, quality and kidney function could be performed. Fourthly, all of our study participants were aged 45 years and above and were from China. Thus, it is unclear whether these findings can be applied to younger individuals or other ethnic groups. Fifthly, the number of participants whose sleep duration was >8 h was limited, so we were unable to assess whether there were associations between sleep duration and the decline of creatinine- or cystatin C-based eGFR. Finally, in this observational study, some of the covariates used in the analyses were self-reported values. Hence, we are unable to rule out the possibility that our findings were confounded by unidentified factors. Conclusions In summary, our analysis demonstrated that poor sleep quality was significantly associated with progression to CKD among Chinese middle-aged or older adults with normal kidney function. These findings pave the way for evidence on potential interventions to improve primary prevention of CKD. Data availability statement Publicly available datasets were analyzed in this study. The data can be found here: http://charls.pku.edu.cn/, The China Health and Retirement Longitudinal Study (CHARLS) database. Ethics statement The studies involving human participants were reviewed and approved by the Biomedical Ethics Review Committee of Peking University. The patients/participants provided their written informed consent to participate in this study. Author contributions LZ and SX designed the study. LZ, SX, QD, FZ, YW, JJ, CG, JG, ML, and HZ analyzed the data. SX, YY, HJ, HX, SC, and HZ made the figures. ZH, SX, FZ, and JJ drafted and revised the paper. All authors approved the final version of the manuscript. Funding This work was supported by the National Natural Science Foundation of China (82072523 to ZH) and the National Natural Science Foundation of China (82200753 to SX).
2023-01-19T21:32:42.547Z
2023-01-18T00:00:00.000
{ "year": 2022, "sha1": "94094b3e0f44b9bc4834273fec5dfa02ee770840", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "94094b3e0f44b9bc4834273fec5dfa02ee770840", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
202224993
pes2o/s2orc
v3-fos-license
Design and development of fixture and modification of existing AFM setup to magnetic abrasive flow machining (MAFM) process setup Finishing multifaceted geometries and miniaturized parts, especially internal inaccessible cavities or recesses, by the use of abrasive media with other constituents is known as abrasive flow machining (AFM). With the help of a hydraulic pressure system, the media is extruded to and fro over the surface in this process. Recently, numerous amendments have been made for improving the performance of the AFM process. This paper presents a new modified AFM process known as the magnetic abrasive flow machining (MAFM) process that is used to finish internal cylindrical surfaces. In the MAFM process the electromagnet is fabricated so as to be located around the cylindrical work-piece. To provide the maximum magnetic field near the whole internal surface of the work-piece, there are two poles that are surrounded by copper coils. In MAFM, an aluminium fixture is used to enhance the magnetic effect around the workpiece surface, which helps in increasing the MRR and the change in surface roughness (ΔRa). Introduction The concept of AFM was developed in the USA by the "Extrude Hone Corporation" in the 1960s; the process possesses excellent capabilities for polishing unsymmetrical surfaces and the interior structure of parts. Abrasive flow machining, also called abrasive flow deburring or extrude polishing, is a surface finishing process that is characterized by passing an abrasive-laden fluid over the interior surface of the workpiece. Abrasive medium, machine and tooling are the essential elements of the abrasive flow machining process. Although the AFM process has many advantages, such as excellent process control, finishing of intricately shaped components, radius generation, faster changeover of the workpiece, faster changeover of the media, etc., it also has some drawbacks: the finishing rate is low, it is not capable of correcting form geometry, and a closed environment and a start-up hole are required for this process. Numerous studies have been made to improve the finishing rate, surface integrity and compressive residual stresses formed on the workpiece surface, which introduced many setups for improved AFM processes such as magneto abrasive flow machining (MAFM), magnetorheological abrasive flow finishing (MRAFF), etc. Ahmad et al. [1] used a tool named magnetic abrasive particles for finishing throughout the magnetic abrasive finishing (MAF) process. A sintering method was used to produce the magnetic abrasive particles. Once the sintering process was completed, the authors found the magnetic abrasives to be of better quality, and those abrasive particles stuck to the base metal matrix. They used alumina powder as the abrasive particle and iron powder was taken as the magnetic particle. They performed experiments on stainless steel 202 by using Taguchi's orthogonal array (L9). Mittal et al. [2] created samples with aluminium (base metal) and investigated the SiC MMCs by using the AFM process. The authors chose various input variables like extrusion pressure, mesh number, workpiece material and concentration of abrasives, etc., to find out the output variables like ΔRa and MRR. Mittal et al. [3] used aluminium (base material) with SiC in different percentages (i.e. 20%, 40% and 60%) for investigation of Al/SiC MMCs.
They analysed the parameters effect by using Taguchi design method (L 27 orthogonal array). Sadiq et al. [4] established magneto-rheological abrasive honing process and analysed the magnetic field's effect on workpiece that was fixed in a holder and subjected to MRAH. Wani et al. [5] developed a finite element model for finding out magnetic potential distribution present within the magnetic abrasive brush that was designed through finishing action. Then authors used that to calculate surface finish and material removal. From experimental outcomes Singh et al. [6] recommended that there was a strong effect of magnetic field on the material removal in AFM by using magnetically assisted AFM. Singh et al. [7] observed that when brass was used as workpiece material, the magnetic field had a major effect on MR and change in surface. Nagdeve et al. [8] observed it very difficult and challenging task to attain the surface finish that was uniform nanoscale into the contact zone, particularly on those sculptured surfaces that had dissimilar curvatures at different locations. Kathiresan et al. [9] used magneto rheological abrasive flow finishing process for improving the AISI stainless steel 316L surface quality to the nano-level surface finish. Authors built up a response surface model for determining the effect of various input parameters on the output parameters like final surface roughness (SR) and MRR. Experiments were conducted by Guo et al. [10] for magnetic field-assisted finishing using a dual magnetic roller tool combined with a 6-axis robot arm. Judal et al. [11] fabricated vibration assisted cylindrically magnetic abrasive finishing process and reported the experimentally investigated effect of many process variables on VAC-MAF setup through finishing of aluminium workpieces. Mulik et al. [12] measured normal force and finishing torque using kistler's dynamometer at various processing conditions during ultrasonic assisted magnetic abrasive finishing methodology. Authors found that torque and finishing forces were mainly affected by the finishing gap and the voltage that was supplied to the electromagnet. The magnetorheological abrasive honing (MRAH) setup was designed and developed by Sadiq et al. [13]. Authors used a direct current (DC) electromagnet having the pole faces that were cylindrical in nature for measuring the magnetic flux density. They conducted the experiments with aluminum alloy (AL6063) and austenitic stainless steel (SS316L) workpiece for understanding the magnetic field's effect. Jain [14] offered a generalized tool of MR for numerous flowing abrasive based micro-/nano-machining (MNM) processes. The MNM processes like AFF, MAF, elastic emission machining, magnetorheological finishing, MAFF, and magnetic float polishing had been discussed. Das et al. [15] discovered a novel accurate finishing method known as magnetorheological abrasive flow finishing (MRAFF), that was developed by blending AFM with magnetorheological finishing, and especially designed for nano-finishing of various parts and difficult geometry for a broad variety of industrial applications. Jayswal et al. [16] designed a finite element model for finding out the magnetic force distribution over the workpiece surface. They declared that by the use of magnetic abrasive finishing (MAF) process, very little quantity of material was removed by indentation and revolution of magnetic abrasive particles into the circular tracks. Singh et al. 
[17] examined the magnetic abrasive finishing setup and used Taguchi experimental design method L 9 (3 4 ) orthogonal array for attaining the significant parameters that had an influence on the surface quality produced in the MAF. The negative imitation of the knee joint implant like a fixture was designed by Nagdeve et al. [18] and they used rotational-magnetorheological abrasive flow finishing method for finishing that. In this paper the concept of magnetization has been added to enhance the efficiency of AFM process. Also the aluminium fixture has replaced the nylon fixture due to its low magnetic permeability for magnetic line of forces which can hinder the process of finishing of internal cavity in the workpiece. MAFM experimental set-up In the present research work, an existing AFM setup has been modified by replacing nylon fixture with an aluminium fixture and by applying electromagnetic effect around the workpiece. Figure 1(a) shows the structural representation of MAFM and figure 1(b) shows the pictorial view of MAFM setup. The designed setup has a maximum extrusion pressure of 10MPa. All through the forward stroke, the media is extruded by two hydraulic actuators and passed from one media cylinder to the other, through the workpiece. After completing this stroke, the reverse process is repeated and both of these forward and backward strokes establish one complete cycle. The stroke length is kept at a constant value of 250 mm and value of media volume is taken as 300cc. During the abrasion process, when the extrusion of abrasive laden media through the workpiece occurs, it causes the finishing of the inner cylindrical surface of the workpiece. When the magnetic field is applied, the abrasion only takes place around the workpiece surface where it is applied, while the remaining areas are unaffected. The location of the electromagnet is around the cylindrical workpiece. For providing maximum effect of magnetic field nearby the whole inner surface of the work-piece, there are two poles that are bounded by copper coils. When smoothening of the workpiece is done in the presence of magnetic field and with the help of magnetic and abrasive particles, this process is known as magnetic abrasive finishing (MAF Design and fabrication of novel MAFM from existing AFM set-up An experimental setup has been designed that is powered hydraulically and fabricated for MAFM process in the laboratory. The basic useful requirements of various parts and vital mechanisms of the process are kept in mind while developing the MAFM setup. Also, we know that Hydraulic unit. A hydraulic unit having the preferred pressure of up to 10 MPa has been designed for MAFM Process. There is a hydraulic gear pump whose function is to pump the hydraulic oil from tank and pass it to the whole circuit. The hydraulic oil number 68 has been used. Here are the design aspects of hydraulic pump: Pressure requirement used for experimentation = 10 MPa ≈ 1450 psi The formula to drive Power is given as: Hence, (5) we get, pressure: p = 1544 psi The hydraulic oil is drawn from the tank through the filter by the electric motor driven gear hydraulic pump and passes it to both the manually actuated direction control valves DCV1 and DCV2 through pressure relief valves PRV1 & PRV2. To control the pressure to a desired value, pressure relief valves are used in the system. The pressure in the upper hydraulic cylinder is kept high by using PRV1 and low in upper cylinder using PRV2 during downward stroke pressure. 
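The hydraulic-unit design above quotes the working pressure in both MPa and psi (10 MPa ≈ 1450 psi) and arrives at a pump design pressure of about 1544 psi, although the power formula itself is lost in the extracted text. The sketch below reproduces the unit conversion and uses the standard hydraulic-power relation P = p·Q as a stand-in, since the paper's own formula (5) and the flow rate are not shown; the flow value used here is purely illustrative.

```python
PSI_PER_MPA = 145.038  # 1 MPa = 145.038 psi

def mpa_to_psi(p_mpa: float) -> float:
    """Convert pressure from MPa to psi."""
    return p_mpa * PSI_PER_MPA

def hydraulic_power_kw(p_mpa: float, flow_lpm: float) -> float:
    """Hydraulic power P = p * Q, with p in Pa and Q in m^3/s, returned in kW.

    The P = p*Q relation is the generic hydraulic-power formula and is assumed here;
    the specific formula (5) and the flow rate used in the paper are not given.
    """
    p_pa = p_mpa * 1e6
    q_m3s = flow_lpm / 1000.0 / 60.0
    return p_pa * q_m3s / 1000.0

if __name__ == "__main__":
    print(f"10 MPa = {mpa_to_psi(10):.0f} psi")          # ~1450 psi, as quoted
    print(f"1544 psi = {1544 / PSI_PER_MPA:.2f} MPa")    # pump design pressure
    print(f"Power at 10 MPa, 10 L/min = {hydraulic_power_kw(10, 10):.2f} kW")  # illustrative flow
```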
By actuating DCV1 and by keeping knob of DCV2 in central position, downward stroke is completed. After completing the downward stroke, using PRV2 the pressure in lower cylinder is maintained at a high level and in upper cylinder, it is maintained at low level using PRV1. Positions of DCV1 and DCV2 have been reversed. After completing the upward stroke, one cycle is completed. For counting the number of cycles, a digital counter is used in MAFM setup. Design and development of novel workpiece fixture. Work piece fixture guides the media that is to be machined to pass through the work surface, so it plays a significant role. The material used for the fixtures in earlier work as per literature survey is nylon. The following drawback has been observed for using nylon as fixture material in MAFM. Nylon has low magnetic permeability for magnetic line of forces; therefore it can hinder the process of finishing of internal cavity in the workpiece. Therefore, a novel aluminium fixture has been developed and fabricated in which the workpiece comes exactly between magnetic field generated by electromagnet coils of MAFM setup. Aluminium is taken as the material for the work piece fixture. There is a hole cut in the fixture that holds the work piece which is same as the outer shape and size of work piece. During machining to avoid the vibration, the fixture diameter is decreased gradually. The fixture is designed so that it can accommodate the electromagnet poles and can generate maximum magnetic pull near the inner surface of the workpiece. 4. Design and development of novel magnetization system. A novel electromagnetic system has been developed for experimental MAFM setup. As per rigorous literature survey it has been found that the researchers have used permanent magnets for their work. Such kinds of magnets have the capability to produce fixed value of magnetic flux density, which cannot be varied if needed for finishing of different materials, therefore limiting their use for certain materials only. For newly developed experimental setup coil type magnets are developed in which magnetic flux density can be varied from 0-2 Tesla. Such type of modification has made the developed MAFM setup more versatile. Below is detailed description of novel magnetization system. The electromagnet. The electromagnet is fabricated and designed in such a way so that it can locate around the cylindrical work-piece. There are two poles surrounded by the copper coils which are arranged to deliver the magnetic field to be maximum near the whole inner surface of the workpiece. The specifications of electromagnet are shown in table 1. figure 6 and 7 respectively. For avoiding larger magnetic field gradient at the corners, the diameter of the core is taken more than workpiece length. The Pictorial View of c-shaped core (part B) is shown in figure 8. Functioning of magnetization system. It has been observed from the literature that with an increase in the applied magnetic flux density the material removal rate (MRR) is also increased. Each coil is prepared by copper wire winding of 2500 turns per coil. The working gap between the electromagnet poles and the workpiece is taken as 1-2 mm and the maximum flux density of 0-2 tesla is used between them. Magnetic flux density can be changed by applying different values of input current to the poles and digital gauss meter (model DGM-202) is used for measuring it with the probe of DGM-202. 
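For the magnetization system, the text specifies two coils of 2,500 turns each, a 1-2 mm working gap and a flux density adjustable between 0 and 2 T by changing the coil current. The sketch below uses the textbook air-gap estimate B ≈ μ0·N·I/g only to illustrate how the gap field scales with current; the real C-shaped core geometry, core permeability, leakage and saturation are not modelled, so these numbers are not the calibration reported in table 2.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability [T*m/A]

def gap_flux_density(turns: int, current_a: float, gap_m: float) -> float:
    """Idealized air-gap field of an electromagnet: B = mu0 * N * I / g.

    Assumes an infinitely permeable core and no leakage, so it only shows the
    qualitative scaling of B with current; it is not the paper's calibration.
    """
    return MU_0 * turns * current_a / gap_m

if __name__ == "__main__":
    N = 2 * 2500          # two coils of 2,500 turns assumed to act in series
    gap = 1.5e-3          # 1-2 mm working gap, midpoint
    for current in (0.1, 0.25, 0.5):
        print(f"I = {current:4.2f} A  ->  B ~ {gap_flux_density(N, current, gap):.2f} T")
```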
Figures 11 and 12 show the coil bobbin drawing and the pictorial view of the coil bobbin (i) before coiling and (ii) after coiling. Figure 13 shows the magnetization effect. Results and discussion The experimental MAFM setup was tested to analyse whether it performs as per requirements, i.e. the range of magnetic flux density and the effectiveness of the newly developed aluminium fixture. To check the magnetic flux density range, a digital gauss meter (Model DGM-202) was used. The necessary variation was checked by varying the voltage from 0-240 volts, and a magnetic flux density of 0-2 Tesla was obtained. The observations of magnetic flux density in tesla with varying voltage and current are shown in table 2. Conclusion After making modifications to the existing AFM setup, the novel MAFM setup has been developed successfully. In this setup the permanent magnets have been replaced by coil-type magnets in which the magnetic flux density can be varied from 0-2 Tesla. This type of modification has made the developed MAFM setup more versatile. For creating a restrictive passage or directing the media to desired locations in the workpiece, a fixture is usually required. An aluminium fixture has been used and proved to be very effective in comparison to the nylon fixture, as this fixture does not lie between the workpiece and the electromagnet coils, so the magnetic permeability of the fixture material no longer hinders the process.
2019-09-11T02:02:49.856Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "e2365a8913717e6297ad1cf5628cce36d0af5e4e", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1240/1/012009", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1e8aa07f92fe78c4efbf06a5293f0b3da3156232", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
245715726
pes2o/s2orc
v3-fos-license
Complementary Presence of HBV Humoral and T-cell Response Provides Protective Immunity after Neonatal Immunization Background and Aims Hepatitis B vaccination is the most cost effective way to prevent hepatitis B virus (HBV) infection. Hepatitis B vaccine (HepB) efficacy is usually assessed by anti-hepatitis B surface antigen (HBsAg) level, but there are few reports of humoral and cellular immune responses to HepB in children after neonatal vaccination. Methods A group of 100 children with a history of primary hepatitis B immunization were included in this study to evaluate the efficacy of HepB. Blood samples were obtained from 80 children before, and 41 children after, a single HepB booster dose. Children with low anti-HBsAg (HBs) titers of <100 mIU/mL received a booster dose after giving their informed consent. Anti-HBsAg, T-cell response and percentage of B-cell subsets were assayed before and after the booster. Results Of the 80 children, 81.36% had positive T cell and anti-HBsAg responses at baseline. After the booster dose, the anti-HBsAg titer (p<0.0001), positive HBsAg-specific T-cell response (p=0.0036), and spot-forming cells (p=0.0003) increased significantly. Compared with pre-existing anti-HBsAg titer <10 mIU/mL, the anti-HBsAg (p=0.0005) and HBsAg-specific T-cell responses (p<0.0001) increased significantly in preexisting anti-HBsAg titer between 10 and 100 mIU/mL group. Change of the HBV-specific humoral response was the reverse of the T-cell response with age. Peripheral blood lymphocytes, B cells, and subset frequency decreased. Conclusions HBV immunization protection persisted at least 13 years after primary immunization because of the complementary presence of HBV-specific humoral antibodies and a T-cell immune response. One dose of a HepB booster induced protective anti-HBsAg and promoted an HBsAg-specific T-cell response. In HBV endemic regions, a HepB booster is recommended to children without anti-HBsAg because of effectiveness in HBV prevention. Introduction Hepatitis B vaccination is the most cost effective way to prevent acute or chronic HBV infection and reduce complications of hepatitis B infection. The Chinese government has adopted routine hepatitis B vaccination, and importation of a yeast-derived hepatitis B vaccine (HepB) began in the late 1980s. Routine immunization began in 1992, and HepB was integrated into the Expanded Program of Immunization in 2002. 1 Following the implementation of routine HBV vaccination China has successfully changed from a highly endemic to a moderately endemic country. Serosurveys found that HBsAg seropositivity decreased by 52%, from 9.8% to 4.7% in the general population; by 97%, from 9.7% to 0.3% in children <5 years of age; and by 92.4%, from 10.5% to 0.8% in children <15 years of age from 1992 to 2014. [2][3][4] It is estimated that 80 million acute HBV infections and 20 million chronic HBV infections have been prevented since 1992. 5 The need for a HepB booster in children after neonatal immunization is controversial. Many studies have not identified a need of booster immunization in healthy children. The protection afforded by primary HepB immunization can last 30 years; only 0.7% of vaccinees had HBV breakthrough infections in the 5-20 years after neonatal HBV vaccination. 
[6][7][8] Immune memory for HepB persists in children with waning or undetectable anti-HBsAg concentrations, 9 but the loss of HepB immune memory has been reported in 25-50% of vaccinees after 15 years of age, 10,11 and 10.1% had no immune response to a HepB booster after the initial vaccination. 12 A HepB booster has been recommended for at-risk youths who with a history of primary immunization. We previously reported that anti-HBsAg declined with age in children from 93.7% at 1 year of age to 42.3% at 9 years of age. 13,14 Whether a protective immune response is elicited in children without anti-HBsAg is no known, and the need for booster doses has not been resolved. This study investigated the protective humoral and cellular immunity responses following primary immunization and a HepB booster for children who had lost protective antibodies. The efficacy of a HepB booster in children with low baseline anti-HBsAg levels between 10 and 100 mIU/ mL was evaluated. Design and trial participants This prospective single-center cohort study was performed at Clinical Research Center of Children's Hospital of Chongqing Medical University, a general children hospital with patients from all over the country. The study was approved by the institutional ethics review committee of and registered at ClinicalTrials.gov (NCT03867643). All children and their legal guardians provided written informed consent. All procedures were conducted following the ethical principles of the Declaration of Helsinki. Children born after January 1, 2005 in Chongqing, China who completed primary vaccination with a series of three doses of HepB containing 10 µg HBsAg each beginning at birth, and not receiving a booster dose were eligible for inclusion. Children with a history of allergy or adverse reaction to the vaccine, immunosuppressive treatment or immunodeficiency, any vaccination in the previous 4 weeks, with an acute disease or anti-infective therapy in the past 4 weeks, fever (axillary temperature ≥38°C) in the previous week, history of blood transfusion, history of infectious diseases (e.g., hepatitis, AIDS, syphilis, gonorrhea, etc.), family history of HBV in three generations of lineal relatives, or abnormalities on physical examination were excluded. Figure 1 is a flowchart of participant selection. A group of 100 children aged 1-13-year were included via the hospital's official website. Blood samples were obtained from 80 children before the HepB booster, which contained 20 µg HBsAg (Huabei Pharmaceutical Co., Hebei, China), and from 41 children 1 month after the booster. HBV seromarkers Blood samples were collected for determination of HBV seromarkers by chemiluminescent microparticle immunoassay (CMIA) with the Architect system (Abbott Laboratories). HBsAg seropositivity was >0.05 IU/mL and anti-HBsAg titers ≥10 mIU/mL were considered seroprotective. Sample cutoff values of anti-hepatitis B e antigen ≥1.0 and antihepatitis B core antigen of ≥1.0 were considered positive. Detection of interferon (IFN)-γ-secreting HBsAg-specific T cells Peripheral blood mononuclear cells (PBMCs) were isolated by density gradient centrifugation, and HBsAg-specific cytokine-secreting T cells were identified with a human IFN-γ ELISpot PLUS assay (Mabtech, Stockholm, Sweden). IFN-γ precoated 96-well plates were preincubated with Roswell Park Memorial Institute (RPMI) 1640 medium (Gibco, Invitrogen, USA) for 30 minutes at room temperature before adding 5×10 5 PBMCs/well in 200 µL RPMI 1640. 
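The serology cutoffs above translate directly into the grouping used throughout the paper: HBsAg positive above 0.05 IU/mL, anti-HBsAg protective at ≥10 mIU/mL, and a booster offered below 100 mIU/mL. A minimal sketch of that grouping follows; the function names are illustrative.

```python
def hbsag_positive(hbsag_iu_ml: float) -> bool:
    """HBsAg seropositivity cutoff used with the CMIA assay."""
    return hbsag_iu_ml > 0.05

def anti_hbs_group(titer_miu_ml: float) -> str:
    """Baseline anti-HBsAg strata used in the study."""
    if titer_miu_ml < 10:
        return "<10 mIU/mL (non-protective)"
    if titer_miu_ml < 100:
        return "10-<100 mIU/mL"
    return ">=100 mIU/mL"

def booster_offered(titer_miu_ml: float) -> bool:
    """Children with anti-HBsAg < 100 mIU/mL were offered one booster dose."""
    return titer_miu_ml < 100

if __name__ == "__main__":
    for t in (4.0, 35.0, 250.0):
        print(f"{t} mIU/mL: {anti_hbs_group(t)}, booster offered: {booster_offered(t)}")
```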
PB-MCs were stimulated with 10 μg/ml of recombinant HBsAg (Bersee, Beijing, China). Wells containing PBMCs and RPMI 1640 with anti-CD3 mAb (Mabtech, Stockholm, Sweden) were positive controls and wells without any stimulant were negative controls. The culture plates were incubated for 48 h at 37°C in a 5% CO 2 atmosphere and were then read with an ELISpot reader (AID, Strassberg, Germany). A response two-fold greater than that of the negative control was considered positive. 15 Statistical analysis The data were analyzed and compared by SPSS version 20.0 (IBM Corp. Armonk, NY, USA); graphs were drawn by Graphpad prism (version 8.0). Continuous variables were compared with the Student t-test. Comparisons of categorical variables were performed by χ 2 or Fisher's exact tests. The Spearman rank correlation was used to evaluate the associations between ELISpot results and anti-HBsAg titers. P-values <0.05 were considered statistically significant. Participant baseline characteristics The characteristics of the 80 available subjects are shown in Table 1 and the characteristics of the 51 participants with prebooster anti-HBsAg titers <100 mIU/mL are shown in Supplementary Table 1. All participants had received a three-dose primary neonatal HBV vaccination and were grouped by anti-HBsAg titer. Twenty-one had baseline titers of <10 mIU/mL, 30 had titers ≥10 and <100 mIU/mL, and 29 had titers ≥100 mIU/mL. Between-group comparisons of sex, age, weeks of pregnancy week, birth weight, and disease history among each group found that boys were at a lower titers than girls (p=0.049) and more preterm infants than normal term infants were anti-HBsAg negative at baseline (p=0.039). HBsAg-specific T-cell responses are frequent in antibody-negative participants The ELISpot assay results of HBsAg-specific T-cell responses in children without anti-HBsAg and the distribution of positive and negative humoral and cellular immunity is shown in Figure 2. Of the antibody-negative participants, 85.71% had positive HBsAg-specific T-cell responses, 18.64% of the antibody-positive subjects had negative responses, and 96.25% of the children had positive HBsAg-specific T cell or anti-HBsAg responses. Of 41 children given a booster dose, all antibody-negative subjects became positive and their HBsAg-specific T-cell responses were enhanced. Children with high prebooster anti-HBsAg titers had high post-booster humoral responses Pre-and post-booster anti-HBsAg titers are shown in Figure 3. The titers increased after booster administration to >100 mIU/mL in all children (p<0.0001) and to >1,000 mIU/mL in 56.25% those with prebooster titers of 0-10 mIU/mL and 100% of children with prebooster titers from 10-100 mIU/ mL (p=0.0005). Children with high prebooster anti-HBsAg titers had higher humoral responses than those with low prebooster titers. Post-booster IFN-γ-secreting HBsAg-specific T-cell response depended on the prebooster anti-HBsAg titer The pre-and post-booster HBsAg-specific T-cell responses are shown in Figure 4. The ELISpot results indicated significant increases of the percentage (p=0.0036) and the magnitude of response (p=0.0003) of spot-forming cells (SFCs) following booster administration. The magnitude of the response in IFN-γ-secreting HBsAg-specific T cells was not associated with the anti-HBsAg titer after neonatal immunization (p=0.1140). 
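The ELISpot scoring rule stated above is simple: a well counts as a positive HBsAg-specific T-cell response when its spot count exceeds twice that of the unstimulated negative control. The sketch below encodes that rule; the background-subtraction helper is a common convention rather than something stated in the text, and the example counts are invented.

```python
def elispot_positive(sfc_stimulated: float, sfc_negative_control: float,
                     fold_threshold: float = 2.0) -> bool:
    """Positive HBsAg-specific T-cell response: > fold_threshold x negative control."""
    return sfc_stimulated > fold_threshold * sfc_negative_control

def background_subtracted_sfc(sfc_stimulated: float, sfc_negative_control: float) -> float:
    """Net spot-forming cells attributable to HBsAg stimulation (floored at zero).

    Background subtraction is a common ELISpot convention, assumed here, not quoted
    from the study methods.
    """
    return max(sfc_stimulated - sfc_negative_control, 0.0)

if __name__ == "__main__":
    wells = [(45, 10), (18, 12), (120, 15)]  # (stimulated, negative control) SFC counts per well
    for stim, neg in wells:
        print(stim, neg, elispot_positive(stim, neg), background_subtracted_sfc(stim, neg))
```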
The post-booster T-cell response showed the same change trend as the humoral response, with a significant increase (p=0.0004) in the number of IFN-γ-secreting HBsAg-specific T cells. Compared with children with low anti-HBsAg prebooster titers (0-10 mIU/ mL), those with pre-existing titers between 10 and 100 mIU/mL had significantly stronger HBsAg-specific T-cell responses to the booster vaccination. The intensity of Tcell immunoreactivity depended on the pre-existing anti-HBsAg titer. Association between HepB humoral and T-cell response and age The association between humoral and T-cell response and age is shown in Figure 5, which shows the changes of anti-HBsAg titers and HBV-specific T-cell response in children of different ages. Post-booster anti-HBsAg titers increased significantly in all four age groups of the 41 children who were vaccinated. The anti-HBsAg titers were higher in 1-to 3-year-old than in 10-to 13-year-old children (p=0.031) and pre-and post-booster ELISpot assay results were significantly different in 10-to 13-year-old children (p=0.0172). As shown in Figure 5C, differences in prebooster anti-HBsAg titers were the opposite of T-cell values in each age group. The percentage of protective antibodies initially decreased with age and then increased in children 1-13 years of age. The T-cell response increased initially and then decreased. After the booster dose (Fig. 5D) the percentage of protective antibody titers decreased in each age group from 1 to 13 years of age, but the positive T-cell response increased in each age group. The overall positive anti-HBsAg and T-cell response rates in each age group were similar. The results show that the change of the HBV-specific humoral response was associated differences in the T-cell response in the four age groups. Changes of immune B-cell subsets after booster administration The gating strategy for definition of the B-cell subsets is depicted in Figure 6A. Changes in the immune B-cell subsets before and after booster vaccination (Fig. 6) included a decrease in B-cell frequency in peripheral lymphocytes (p=0.0002), and antibody-secreting cells included plasmablasts. Changes in the numbers of CD19 + B-cells were similar in participants with low and with high baseline anti-HBsAg titers. One month after booster vaccination, class-switched and unswitched memory B-cell frequencies decreased significantly. In children with low pre-existing anti-HBsAg titers (0-10 mIU/mL), there were a significant rise in naïve B cells (p=0.0387) and DN B cells (p=0.0134). In children with high baseline anti-HBsAg titers (10-100 mIU/mL), the percentages of both naïve B cells (p<0.0001) and DN B cells (p=0.0013) increased. Discussion The Chinese Center for Disease Control and prevention (CDC) reported that three-dose HBV vaccination coverage before 1 year of age was 83-99.53% between 2001 and 2017. 16 Our previous serosurvey found that 46.03-72.29% of children from 1 to 14 years of age were seroprotected, and that 3.33-25.79% of all age groups had anti-HBsAg titers of <10 mIU/ml, 13 which was consistent with a CDC sur-vey of HBV seroprevalence in various age groups in China. 4 HepB is one of the safest available vaccines. It prevents HBV infection and reduces the occurrence of liver cancer. 17 All the children in this study had completed the threedose primary vaccination series that begins with a dose at birth. 
We analyzed the immune response to a HepB booster dose after completion of neonatal immunization to determine whether children without detectable anti-HBsAg (i.e., titers <10 mIU/mL) were still protected and whether or not children without anti-HBsAg need a HepB booster vaccination. This is the first study to show that protective immunity from neonatal immunization exists in children because of the complementary presence of HBV-specific humoral and T-cell immune responses. A detectable T-cell response to HBsAg was found in 85.71% of children with anti-HBsAg titers of <10 mIU/mL. The presence of HBsAg-specific IFN-γ in children up to 13 years of age suggests that protection may be long lasting. A study by Wang et al. reported that most anti-HBsAg-negative vaccinees had positive HBsAg-specific immune-cell responses. 18 Leuridan et al. reported activation of immune cells in vaccinees based on the cell proliferative response. 19 Long-lasting cellular immunity has also been shown by detection of cytokine secretion by Th1 and Th2 lymphocytes after stimulation by HBsAg. 7 These previous results confirm that T-cell immunity persists regardless of anti-HBsAg, which is consistent with our results. HBsAg-specific T-cell responses initially increased and then decreased with age, which was the reverse of the changes in anti-HBsAg titers. In neonates, adaptive immune responses to pathogens are relatively weak and narrowly focused, causing T-cell hyporesponsiveness. 20 In younger children, HBV-specific T cells are lacking and fail to produce adequate amounts of IFN-γ, but the response gradually improves with age. 21 Before and after the HepB booster, the direction of change in anti-HBsAg titer in this study was opposite that of the HBsAg-specific T-cell response in each age group. In vaccine development, determining the balance between humoral and cellular responses is the key challenge. 22 The complementary existence of anti-HBsAg and T-cell responses is important for the persistence of protection following vaccination. There is no need to worry about the decline in anti-HBsAg in populations. It is precisely because of this dynamic balance that screening for HBsAg-specific T-cell immunity is not recommended for the general population. Routine screening for anti-HBsAg in vaccinees is sufficient to evaluate the protection afforded by HepB. One dose of HepB booster was effective in children without anti-HBsAg. All those given a HepB booster dose produced protective anti-HBsAg and an enhanced HBV-specific T-cell response 4 weeks after the vaccination. All children with anti-HBsAg <10 mIU/mL produced anti-HBsAg with titers >100 mIU/mL, demonstrating an anamnestic response to the booster dose, 23 even when detectable anti-HBsAg was absent at the time of exposure. We found that humoral and T-cell responses to the HepB booster depended on the pre-existing anti-HBsAg titer. Only 56.25% of children with prebooster anti-HBsAg <10 mIU/mL had anti-HBsAg ≥1,000 mIU/mL 4 weeks post-vaccination. Those with prebooster anti-HBsAg <10 mIU/mL were less likely to produce high titers of anti-HBsAg compared with children who had anti-HBsAg titers from 10 to 100 mIU/mL. Equally, the intensity of the T-cell booster response also depended on the prebooster anti-HBsAg titer. After the booster, the numbers of IFN-γ-secreting HBsAg-specific T cells in children with prebooster anti-HBsAg titers of 10-100 mIU/mL were significantly increased compared with children with anti-HBsAg titers <10 mIU/mL. That has been previously reported. 24
Establishment of immune memory by routine vaccination against HBV at birth is key for the effectiveness of the HepB booster and for long-term immunity. Although immune memory for HepB is persistent in children, a booster is recommended for children without anti-HBsAg in HBV-endemic regions. The available evidence does not provide a compelling basis for recommending a booster dose of HepB. 6,25 Moreover, chronic HBV infection is on the decrease after primary immunization even in children without detectable (<10 mIU/mL) anti-HBsAg. 6,26 More attention should be paid to children over 10 years of age. According to our previous study, the prevalence of HBsAg and anti-HBc increased from 0.46% to 1.40% between 11 and 16 years of age compared with 5.69% to 7.8% between 1 and 10 years of age. 14 That suggests that the risk of exposure to HBV is increased in children who are older than 10 years of age. A HepB booster should be given at that age to reduce the risk of breakthrough infection. In this study, a significant number of vaccinees with low anti-HBsAg following neonatal vaccination had large increases in anti-HBsAg titer within 4 weeks of a single booster dose. HepB has been continuously improved since its launch in 1986. The safety of HepB has been confirmed and vaccination coverage in China has continuously improved. 1 Serosurveys show that the prevalence of HBV has significantly decreased, 2-4,27 and that the change is closely associated with hepatitis B vaccination. Some individuals with anti-HBsAg <10 mIU/mL and at high risk of HBV exposure will require only one HepB booster to achieve seroprotective anti-HBsAg titers. In addition to a T-cell response, B cells respond to HepB by generating a protective anti-HBsAg titer. Our results showed that total B cells, including some antibody-secreting cells, significantly decreased after booster vaccination. Decreases in plasmablasts, memory B cells, and unswitched memory B cells were observed in children with pre-existing anti-HBsAg titers of 10-100 mIU/mL. Immunization is known to be followed by rapid activation of circulating memory cells to terminally differentiate into low-affinity plasma cells or to form germinal centers, which mediate further proliferation and selection for antigen binding later. 28,29 In this study, there were declines in memory B cells, unswitched memory B cells and plasmablasts at 4 weeks post-booster. However, the children did show a rise in anti-HBsAg in the peripheral blood, so they may have produced high-affinity antibody-secreting cells before blood collection. 30 The main limitation of this study is the limited sample size. This study was a clinical trial, and it was difficult to include a large number of children in each age group given the particular difficulties of enrolling children. In addition, the evaluation of the efficacy of a single HepB booster dose in this study may be insufficient. We were unable to assess the expression of additional activation markers or cytokines. Further study is warranted to evaluate more biomarkers of the cellular response to HepB. More doses and long-term follow-up may also be required in a follow-up study. In conclusion, this study comprehensively analyzed the humoral and cellular immune responses to a HepB booster in children after neonatal vaccination. Protection from primary HBV immunization persists for at least 13 years after primary immunization because of the complementary presence of HBV-specific humoral and T-cell immune responses.
In addition, we demonstrated that one dose of HepB booster is sufficient to produce protective anti-HBsAg and enhance HBsAg-specific T-cell responses. HepB booster immunization can therefore be recommended for children without anti-HBsAg in endemic areas to prevent HBV infection.
for sample collection, Lina Zhou, Lu Huang and Linlin Niu for helpful discussion, and all nurses for drawing blood. We also thank the Biobank Center of Children's Hospital of Chongqing Medical University for sample storage. We especially thank all the children and their parents who participated, and thank them for their courage and kindness.
Funding
This study was supported by the National Clinical Research Center for Child Health and Disorders General project (No. NCRCCHD-2019-GP-04), Central Government Guides Local Science and Technology Development projects-demonstration of Science and Technology Innovation projects, National Natural Science Foundation of China (No. 81371876), and the Outstanding Youth Foundation of Children's Hospital of Chongqing Medical University.
Conflict of interest
The authors have no conflicts of interest related to this publication.
Author contributions
YZ, HX, AH and YH were responsible for the study concept and design; YH, YY, TW and ZL collected participant samples; YH, YY, TW and ZL performed study procedures; YH and YZ performed the statistical analysis and drafted the manuscript.
Data sharing statement
The datasets in this study are available from the corresponding author upon reasonable request.
2022-01-06T16:25:18.803Z
2022-01-04T00:00:00.000
{ "year": 2022, "sha1": "6045630ee807c09404b0d6a246d80b4d6858985a", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.14218/jcth.2021.00272", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e477e7747d11e4f1e7d0d39c82d9083650e35460", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
246298549
pes2o/s2orc
v3-fos-license
Serum Metabolomic Profiling Reveals Biomarkers for Early Detection and Prognosis of Esophageal Squamous Cell Carcinoma
Esophageal squamous cell carcinoma (ESCC) is one of the most common aggressive malignancies worldwide, particularly in northern China. The absence of specific early symptoms and biomarkers leads to late-stage diagnosis, while early diagnosis and risk stratification are crucial for improving overall prognosis. We performed UPLC-MS/MS on 450 ESCC patients and 588 controls consisting of a discovery group and two validation groups to identify biomarkers for early detection and prognosis. Bioinformatics and clinical statistical methods were used for profiling metabolites and evaluating potential biomarkers. A total of 105 differential metabolites were identified as reliable biomarker candidates for ESCC with the same tendency in three cohorts, mainly including amino acids and fatty acyls. A predictive model of 15 metabolites [all-trans-13,14-dihydroretinol, (±)-myristylcarnitine, (2S,3S)-3-methylphenylalanine, 3-(pyrazol-1-yl)-L-alanine, carnitine C10:1, carnitine C10:1 isomer1, carnitine C14-OH, carnitine C16:2-OH, carnitine C9:1, formononetin, hyodeoxycholic acid, indole-3-carboxylic acid, PysoPE 20:3, PysoPE 20:3(2n isomer1), and resolvin E1] was developed by logistic regression after LASSO and random forest analysis. This model held high predictive accuracies in distinguishing ESCC from controls in the discovery and validation groups (accuracies > 89%). In addition, the levels of four downregulated metabolites [hyodeoxycholic acid, (2S,3S)-3-methylphenylalanine, carnitine C9:1, and indole-3-carboxylic acid] were significantly higher in early cancer than in advanced cancer. Furthermore, three independent prognostic markers were identified by multivariate Cox regression analyses with and without clinical indicators: a high level of MG(20:4)isomer and low levels of 9,12-octadecadienoic acid and L-isoleucine correlated with an unfavorable prognosis; the risk score based on these three metabolites was able to stratify patients into low or high risk. Moreover, pathway analysis indicated that retinol metabolism and linoleic acid metabolism were prominent perturbed pathways in ESCC. In conclusion, metabolic profiling revealed that perturbed amino acid and lipid metabolism were crucial metabolic signatures of ESCC. Both panels of diagnostic and prognostic markers showed excellent predictive performance. Targeting the retinol and linoleic acid metabolism pathways may offer promising new mechanism-based therapeutic approaches. Thus, this study provides novel insights into early detection and risk stratification for the clinical management of ESCC and could potentially improve the outcomes of ESCC.
INTRODUCTION
Esophageal cancer is the eighth most common form of cancer and the sixth leading cause of cancer death in the world (1). Esophageal squamous cell carcinoma (ESCC) remains the predominant histological type globally (accounting for 90%), especially in northern China (2). Due to occult symptoms in the early stage, 80% of ESCC patients are in the middle or advanced stage at the time of diagnosis, with a 5-year survival of only 20% (3). Therefore, identifying phenotypic characteristics and predictive biomarkers is of great significance for the early detection and improvement of the prognosis of ESCC.
Metabolomics has emerged as a new high-throughput "omics" technology for screening low molecular weight metabolites (<1,000 Da) in biological samples, which can directly reflect the pathological state after gene mutation and/or protein variations (4,5). It has been generally accepted that cancer is a metabolic disease with metabolic reprogramming (6). Hitherto, metabolomics has been used to examine global metabolite profiles and screen biomarkers for early warning and monitoring of multiple cancers (7-10), as well as to further gain insight into the potential mechanisms of tumorigenesis and progression of cancers (11). In recent years, the exploration of metabolic characteristics and diagnostic and prognostic markers for esophageal cancer has attracted much attention. For instance, Wang et al. showed that 16 biomarkers as ESCC-related metabolic signatures could be used for diagnosis, among which dodecanoic acid, LysoPA (18:1), and LysoPC (14:0) could be markers of disease progression (12). Another study constructed an effective diagnostic model based on eight metabolites consisting of hypoxanthine, proline betaine, indoleacrylic acid, inosine, 9-decenoylcarnitine, tetracosahexaenoic acid, LPE (20:4), and LPC (20:5) and found that indoleacrylic acid, LPC (20:5), and LPE (20:4) were associated with ESCC progression (13). One study by Chen et al. showed that four circulating metabolites, kynurenine, 1-myristoyl-glycero-3-phosphocholine [LPC (14:0) sn-1], 2-piperidinone, and hippuric acid, acted as potential ESCC prognostic biomarkers (14). However, owing to the dynamic and sensitive features of the metabolome, these studies may be limited by small sample size or lack of validation groups; thus, our understanding of metabolism-related changes in esophageal cancer remains limited. We therefore conducted this multicenter, large-scale cohort study to determine global alterations of metabolites and screen biomarkers. We performed widely targeted metabolomics by UPLC-MS/MS on serum samples of 450 ESCC patients and 588 controls consisting of the discovery group (training set and test set) and two validation groups to identify biomarkers for early detection. The relationships between these biomarkers and tumor stage were further explored. In addition, clinicopathological indicators and metabolites were integrated to identify molecular markers with prognostic value. Differentially expressed metabolites in tissues provided further validation of these markers from serum. Biomarkers for early detection will facilitate and supplement the criteria for identifying high-risk populations for esophageal cancer, improving the efficiency of non-invasive detection and monitoring. Meanwhile, prognostic biomarkers potentially shed new light on risk stratification and management for patients with esophageal cancer. Finally, pathway analysis based on these findings could contribute to uncovering pathogenetic mechanisms and identifying potential therapeutic targets.
Participants and Sample Collection
A total of 1,038 cases (450 ESCC patients and 588 healthy controls) were recruited from multicenter esophageal and gastric cardia carcinoma databases. Three hundred and seventy-one patients were enrolled from September 2013 to January 2020, divided into two groups: 225 ESCC patients (November 2019 to February 2020) as the discovery set and 146 cases (September 2013 to October 2019) as verification set 1.
All clinicopathological features of patients were extracted from medical records, including age, gender, family history, tumor site, T stage, N stage, and TNM stage. An independent group of 79 patients as another external verification set was obtained from high-incidence areas of esophageal cancer in China during clinical epidemiological investigation. Clinical and pathological data were collected retrospectively by questionnaires and from the hospital information system. For healthy controls, 588 individuals were enrolled after excluding any upper gastrointestinal tumors via gastroscopic biopsy between 2012 and 2020, and they were randomly selected and matched into the discovery group and verification groups with 363, 165, and 60 individuals, respectively. Fasting blood samples of patients in the discovery set and verification set 1 were collected before surgery during hospitalization, and those of verification set 2 and healthy controls were taken during site investigation and gastrointestinal endoscopy, respectively. All samples of cases and controls were drawn into non-anticoagulant blood collection tubes and left standing at room temperature for 30 min to allow natural coagulation. After centrifugation at 12,000 r/min for 3 min, the supernatant fraction was collected, divided into equal parts (0.5 ml), and then stored in a refrigerator at −80°C until further analysis. Three pairs of tissues (tumor and adjacent normal samples) obtained from the same patients in the discovery group were used for further validation of serum biomarkers. Tissue samples were collected within 30 min after operation, frozen in liquid nitrogen, and stored at −80°C. The study design and procedures are presented in Figure S1. All patients were confirmed as having esophageal cancer by two pathologists independently. Familial history was considered positive if the proband had one or more cancer-affected relatives in three consecutive generations. Regions with ESCC incidence over 60/10 million were classified as high-incidence areas of esophageal cancer, and low-incidence areas otherwise. TNM staging was performed according to the sixth Union for International Cancer Control (UICC) TNM classification system owing to the long time span of diagnoses. Stages I and IIA were defined as early cancer, and stages IIB, III, and IV as advanced cancer. Follow-up for overall survival was via telephone or home investigation every 3-6 months. Each participant signed the informed consent form, and ethical approval for this study was obtained from the Medical Ethics Committee of the First Affiliated Hospital of Zhengzhou University.
Serum Pretreatment
Serum samples were removed from −80°C and thawed completely on ice. After vortexing for 10 s, 50 μl of each sample was transferred to a centrifuge tube with the corresponding number, mixed with 300 μl of pure methanol, vortexed for 3 min, and centrifuged at 12,000 r/min at 4°C for 10 min. The supernatant (200 μl) was transferred to a new centrifuge tube, followed by standing at −20°C for 30 min. After centrifugation at 12,000 r/min for 3 min at 4°C, 150 μl of the supernatant was transferred to the corresponding injection vial for metabolomic analysis.
Tissue Pretreatment
Tissue was taken out from −80°C and kept on ice throughout the process. Thawed tissue was minced, and 20 mg of sample was weighed by multi-point sampling, transferred into a centrifuge tube, and homogenized (30 Hz) for 20 s with a steel ball.
After centrifugation at 3,000 r/min, 4°C for 30 s, the pellet was added into 400 μl of 70% methanol-water internal standard extractant with shaking (1,500 r/min) for 5 min and then kept on ice for 15 min. The supernatant (200 μl) was recovered after centrifugation (12,000 r/min, 10 min, 4°C) and then allowed to stand at −20°C for 30 min. After centrifugation (12,000 r/min, 4°C) for 3 min, 200 μl of supernatant was collected for analysis. Sample extract mixtures as quality controls (QCs) were inserted after every 10 test samples to monitor the repeatability of the analysis process.
Data Processing, Quality Control, and Statistical Analyses
After MS data were analyzed with the software Analyst 1.6.3 (AB SCIEX, Ontario, Canada), qualitative analysis of metabolites was done according to retention time (RT), ion pair, and secondary spectra, based on the MetWare database (http://www.metware.cn/) and publicly available metabolite databases, such as HMDB (http://hmdb.ca/) and MassBank (http://www.massbank.jp/). The quantitative analysis steps were as follows: first, by screening the characteristic ions of each substance with the triple quadrupole, we obtained the signal strength of characteristic ions in the detector. Then, by integrating and correcting chromatographic peaks with the software MultiQuant, we obtained the relative content of the corresponding substance represented by the area of each chromatographic peak. Metabolite annotation was performed based on the KEGG compound database (http://www.kegg.jp/kegg/compound/). Data quality was assessed using principal component analysis (PCA, princomp function in R) and the coefficient of variation (CV: ratio of standard deviation to mean; Microsoft Excel 2016 and ggplot2 in R). These results supported the reliability of the data: QC samples clustered together clearly in the PCA plots of the three groups, and more than 85% of metabolites had CV values less than 15% (Figure S2). Multivariate statistical investigations were performed, including PCA and orthogonal partial least squares-discriminant analysis (OPLS-DA, ropls package in R). Model quality of OPLS-DA was estimated by R2Y and Q2 values. Volcano plots were generated based on log2 fold changes and −log10 (p-values) in R. Heatmaps were generated and visualized with the pheatmap package. Statistically significant metabolites were selected by p <0.05 (Student's t-test or Wilcoxon test), and the variable importance in the projection (VIP) generated from the OPLS-DA model was used as a supplementary criterion. A least absolute shrinkage and selection operator (LASSO) regression model and random forest were used to screen potential biomarkers for diagnosing ESCC (glmnet package and randomForest package). The logistic regression model was trained using the function glm in R. Receiver operating characteristic (ROC) curves were plotted to evaluate the predictive accuracy of the metabolite-based diagnostic model with GraphPad software. Violin plots produced with the ggviolin package were used to visualize differences in metabolites between early and advanced cancer patients based on the Wilcoxon test. Kaplan-Meier survival curves and the log-rank test were used to calculate survival rates and compare survival curves between groups, respectively (survival and survminer packages). Cox proportional hazards regression was carried out to analyze prognostic factors for overall survival and to compute the hazard ratio (HR) and 95% confidence interval (CI) of the multivariate survival analysis (survival package).
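The differential-metabolite screening criteria described in this section (univariate p-value, fold change, and OPLS-DA VIP) can be sketched as follows. This is an illustrative R fragment rather than the authors' exact code; the intensity matrix X (samples x metabolites) and the group vector are hypothetical placeholders, and the sketch assumes the Bioconductor ropls package for OPLS-DA.

# Illustrative sketch of the differential-metabolite screen: Wilcoxon test,
# log2 fold change, and OPLS-DA VIP. 'X' and 'group' are hypothetical objects.
library(ropls)   # provides opls() and getVipVn()

screen_metabolites <- function(X, group, p_cut = 0.05, vip_cut = 1) {
  pvals <- apply(X, 2, function(m)
    wilcox.test(m[group == "ESCC"], m[group == "Control"])$p.value)
  log2fc <- apply(X, 2, function(m)
    log2(mean(m[group == "ESCC"]) / mean(m[group == "Control"])))
  oplsda <- opls(X, group, predI = 1, orthoI = NA)  # OPLS-DA model; R2Y and Q2 are reported
  vip <- getVipVn(oplsda)                           # variable importance in the projection
  out <- data.frame(metabolite = colnames(X), p = pvals, log2FC = log2fc, VIP = vip)
  out[out$p < p_cut & out$VIP > vip_cut, ]          # candidate differential metabolites
}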
Moreover, forest plots were generated with the forestplot package based on the results of the Cox regression analyses. Correlations between serum and tissue metabolites were assessed by Pearson's correlation coefficients using the R function cor, and the network diagram was visualized in Cytoscape (version 3.8.0). Pathway analysis was undertaken with the MetaboAnalyst 5.0 online software using the KEGG pathway database. Statistical analyses were conducted using R software (version 4.0.4) and Prism 8 (GraphPad) software, and p <0.05 was considered statistically significant.
Clinical Characteristics and Metabolomic Study of ESCC
To explore the metabolomic profiles and biomarker candidates for ESCC, 1,038 participants (450 patients and 588 normal individuals) were enrolled: the discovery cohort consisted of 588 (225 patients and 363 controls), while the two validation groups consisted of 311 and 139 (146 patients and 165 controls, and 79 patients and 60 controls, respectively). Their clinical characteristics are shown in Table S1. We performed UPLC-MS/MS analysis on serum samples of all subjects to profile the entire metabolome of esophageal cancer. Qualitative and quantitative analyses of metabolite levels were performed based on metabolic databases. A total of 963 metabolites were finally annotated (Table S2). In the discovery set, 524 compounds (155 upregulated and 369 downregulated metabolites) showed significant differences between ESCC and controls (p < 0.05) (Table S3). PCA and OPLS-DA were applied to characterize the metabolic patterns of ESCC, exhibiting a clear separation between ESCC and normal (Figures 1A, B). Ultimately, a total of 105 differential metabolites (21 upregulated and 84 downregulated metabolites) were validated as reliable biomarker candidates maintaining the same tendency as the discovery set (Figures 2A, B and Table S6). The 21 upregulated metabolites mainly included four glycerophospholipids, three sugar alcohols, three benzene and substituted derivatives, two aldehydes, two oxidized lipids, and two nucleotides and their metabolites. Meanwhile, the major categories of the 84 downregulated metabolites were fatty acyls, amino acids, indoles, bile acids, organic acids, oxidized lipids, and glycerophospholipids. We noted that almost all serum amino acid-related (alanine, histidine, glycine, serine, phenylalanine, cysteine, isoleucine) and lipid-related metabolites (acylcarnitines, fatty acids) were decreased, lysophosphatidylcholine (LPC) was upregulated, and lysophosphatidylethanolamine (LPE) was downregulated, while benzene and substituted derivatives, as external environmental factors, were upregulated. Metabolomic data were also visualized using heatmaps with metabolites arranged by major classes (Figure 2C). In summary, the global alterations of the metabolic profile for ESCC were significantly different compared with normal groups.
Metabolite Diagnostic Biomarkers for ESCC
LASSO regression and random forest were performed in the discovery group to further identify metabolite biomarkers for distinguishing ESCC patients from the healthy population. All quantitative metabolite data were analyzed after normalization and logarithmic transformation. Individuals in the discovery group, including ESCC patients and healthy controls, were divided randomly into a training set (150 patients and 242 controls) and a test set (75 patients and 121 controls) at a ratio of 2:1.
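The biomarker-selection and model-training workflow summarized here can be sketched as follows. This is an illustrative R fragment, not the authors' exact implementation; X_train, y_train, X_test, and y_test are hypothetical placeholders for the log-transformed, normalized metabolite matrices and case/control labels of the training and test sets.

# Illustrative sketch of the diagnostic workflow: LASSO selection (10-fold CV),
# random-forest importance, logistic regression, and ROC evaluation.
library(glmnet)        # cv.glmnet for LASSO
library(randomForest)  # randomForest for importance ranking
library(pROC)          # roc/auc for ROC analysis

# LASSO with 10-fold cross-validation to choose lambda and a sparse metabolite subset
cvfit <- cv.glmnet(X_train, y_train, family = "binomial", alpha = 1, nfolds = 10)
cf <- as.matrix(coef(cvfit, s = "lambda.min"))
lasso_sel <- setdiff(rownames(cf)[cf[, 1] != 0], "(Intercept)")

# Random-forest importance as a second filter
rf <- randomForest(x = X_train, y = as.factor(y_train), ntree = 500, importance = TRUE)
rf_top <- names(sort(importance(rf)[, "MeanDecreaseGini"], decreasing = TRUE))[1:30]

panel <- intersect(lasso_sel, rf_top)   # metabolites retained by both methods

# Logistic regression on the selected panel, evaluated by ROC/AUC on the test set
train_df <- data.frame(y = y_train, X_train[, panel, drop = FALSE])
fit  <- glm(y ~ ., data = train_df, family = binomial)
pred <- predict(fit, newdata = data.frame(X_test[, panel, drop = FALSE]), type = "response")
roc_obj <- roc(y_test, pred)
auc(roc_obj)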
A 10-fold cross-validation was used to estimate the optimal parameter (lambda) of the model and select the optimal combination of variables. Fifteen metabolites were selected as the most important predictors by the combination of the two methods, including all-trans-13,14-dihydroretinol (upregulated) and (±)-myristylcarnitine, (2S,3S)-3-methylphenylalanine, 3-(pyrazol-1-yl)-L-alanine, carnitine C10:1, carnitine C10:1 isomer1, carnitine C14-OH, carnitine C16:2-OH, carnitine C9:1, formononetin, hyodeoxycholic acid, indole-3-carboxylic acid, PysoPE 20:3, PysoPE 20:3(2n isomer 1), and resolvin E1 (downregulated) (Table S7). We constructed and trained this metabolite-based model using logistic regression analysis, which showed high diagnostic performance for ESCC: an accuracy of 94.9% in the training set, 92.57% in the test set, and 94.22% in the entire discovery group, with areas under the ROC curves (AUC) greater than 0.98 (Figures 3A-C and Table 1). To test the generalizability of this panel, we performed the same analyses in the two independent validation cohorts and obtained results similar to those of the discovery set (Figures 3D, E). Additionally, the levels of four downregulated metabolites [hyodeoxycholic acid, (2S,3S)-3-methylphenylalanine, carnitine C9:1, and indole-3-carboxylic acid] were significantly higher in stage I-IIA (early cancer) than in stage IIB-IV (advanced cancer) (p < 0.05, Wilcoxon test) (Figures 4A-D). In conclusion, this combination of 15 serum metabolites could be used in different populations as reliable potential biomarkers for the early detection of esophageal cancer.
Prognostic Metabolic Biomarkers for ESCC
We performed Kaplan-Meier log-rank tests and univariate and multivariate Cox regression analyses on all patients in the three groups to identify prognostic factors. Results revealed that 19 metabolites had statistical significance in both survival analysis by a median split (p < 0.05) and univariate Cox regression analyses (p < 0.2) (Table S8). After excluding four metabolites whose effects on survival varied with time, violating the proportional hazards (PH) assumption, each metabolite was further subjected separately to multivariate Cox regression analysis adjusted for clinical covariates (age, TNM stage, N stage, family history, and high or low incidence areas). These observations demonstrated that 10 metabolites were significantly associated with overall survival. Then, multivariate stepwise Cox regression analyses with backward elimination were conducted for the 10 metabolites with and without clinical indicators to further assess the prognostic value of the metabolites. Finally, three metabolites remained independent prognostic factors for overall survival. The correlations between the three metabolites and survival were similar in the combination model with or without clinical factors (Figure 5A). As we observed, a high level of MG(20:4)isomer (HR = 1.62) and low levels of 9,12-octadecadienoic acid (HR = 0.67) and L-isoleucine (HR = 0.56) correlated with poor overall survival in the single-biomarker model. We then calculated risk scores for all patients using the coefficient values of the three metabolites in the single-biomarker model. Each patient was stratified into the low- or high-risk group by the median of the risk scores. As expected, patients in the high-risk group had worse survival than those in the low-risk group, as shown in Figure 5B.
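A compact sketch of the risk-score construction described above is given below, assuming a survival data frame surv_df with overall-survival time (os_time), event indicator (os_event), and the three metabolite levels as columns; all object and column names are hypothetical, and this is an illustration rather than the authors' code.

# Illustrative sketch of the prognostic risk score: multivariate Cox model on the
# three metabolites, a linear-predictor risk score, and a median split compared
# by Kaplan-Meier / log-rank. 'surv_df' and its columns are hypothetical.
library(survival)
library(survminer)

cox_fit <- coxph(Surv(os_time, os_event) ~ mg_20_4_isomer +
                   octadecadienoic_9_12 + l_isoleucine, data = surv_df)
cox.zph(cox_fit)                 # check the proportional hazards assumption

surv_df$risk_score <- predict(cox_fit, type = "lp")   # linear predictor as risk score
surv_df$risk_group <- ifelse(surv_df$risk_score > median(surv_df$risk_score),
                             "high", "low")

km_fit <- survfit(Surv(os_time, os_event) ~ risk_group, data = surv_df)
survdiff(Surv(os_time, os_event) ~ risk_group, data = surv_df)  # log-rank test
ggsurvplot(km_fit, pval = TRUE)  # Kaplan-Meier curves by risk group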
Furthermore, there was a trend toward more deaths, a higher level of MG(20:4)isomer, and lower levels of 9,12-octadecadienoic acid and L-isoleucine in the high-risk group compared with the low-risk group (Figure 5C).
Retinol Metabolism and Linoleic Acid Metabolism as the Most Perturbed Metabolic Pathways
Differential metabolites were mapped to KEGG pathways using the pathway analysis module of MetaboAnalyst 5.0. Pathway analysis of the commonly altered 105 metabolites from patients versus controls revealed 23 tumor-related metabolic pathways, including metabolism of cofactors and vitamins, amino acid metabolism, nucleotide metabolism, lipid metabolism, and carbohydrate metabolism. Among them, retinol metabolism was the most perturbed pathway (p < 0.05, Figure 6A). Pathway analysis of the 18 biomarkers identified for early detection and prognosis revealed that linoleic acid metabolism was the most significant pathway (p < 0.05, Figure 6B), mapped by 9,12-octadecadienoic acid.
DISCUSSION
We conducted a multicenter, large-scale metabonomic study of esophageal cancer through UPLC-MS/MS. Our analysis revealed that patients with esophageal cancer exhibited distinctive metabolic characteristics compared with healthy controls, verified in different cohorts. Such metabolic alterations were likely attributable to dysregulation of multiple metabolic pathways. Significantly, we identified robust serum biomarkers for early detection and prognosis of esophageal cancer, offering an opportunity for the targeted screening of high-risk groups and individualized management. We identified 105 serum metabolites as reliable metabolic profiles for patients with ESCC in the discovery and two validation cohorts, mainly relating to changes of amino acids and fatty acyls (acylcarnitines and glycerophospholipids). First, almost all amino acid-related metabolites were reduced, including glycine, serine, leucine, isoleucine, phenylalanine, tryptophan, glutamic acid, aspartic acid, alanine, arginine, histidine, and cysteine, except methionine metabolites. Similar findings have been observed in previous studies on both ESCC and EAC (esophageal adenocarcinoma) (15,16). Amino acids are the most frequently reported altered metabolites in cancer (17), relating to increased oxidative metabolism, gluconeogenesis, and energy production in cancer patients (18). Specifically, methionine cycle flux influences the epigenetic state of cancer cells and drives tumor initiation (19), and the cross-talk between glucose and methionine regulates life span (20). Serine metabolism supports the methionine cycle and DNA/RNA methylation through de novo ATP synthesis in cancer cells (21). Serine/glycine biosynthesis affects cellular antioxidative capacity, thus supporting tumor homeostasis (22). Altered branched-chain amino acid (BCAA: valine, leucine, and isoleucine) metabolism has been implicated in cancer progression, and the key proteins in the BCAA metabolic pathway serve as possible prognostic and diagnostic biomarkers in human cancers (23). The aromatic amino acids tyrosine, phenylalanine, and tryptophan represent potential biomarkers and relate to gastroesophageal cancer (24). Tryptophan metabolism through the kynurenine pathway (KP) is involved in the regulation of immunity, neuronal function, and intestinal homeostasis (25). Our observations that disorders of amino acid metabolism were common alterations in ESCC have important implications for further investigation into the relationship between metabolic alterations and carcinogenesis.
Another major class of disordered metabolites in our study was the acylcarnitines: medium- to long-chain acylcarnitines (octanoylcarnitine, nonanoylcarnitine, decanoylcarnitine, undecanoylcarnitine, dodecylcarnitine, tetradecanoylcarnitine, hexadecadienoylcarnitine, stearidonyl carnitine) were significantly decreased in ESCC patients compared with controls. A previous study by Xu et al. also reported the downregulation of acylcarnitines (octanoylcarnitine, nonanoylcarnitine, decanoylcarnitine, and undecanoylcarnitine) in ESCC patients (26). Given that these acylcarnitines, as the main substrates of mitochondrial lipid oxidation, regulate energy balance by promoting ketogenesis and reducing protein consumption (27), the low levels can reflect alterations of tricarboxylic acid cycle (TCA cycle) activity and β-oxidation in patients with ESCC in the present study. We also observed alterations in the two groups of glycerophospholipid metabolites, LPE and LPC. Previous data demonstrated significant alterations of LPE and LPC in the serum of patients with esophageal squamous cell carcinoma, pancreatic ductal adenocarcinoma, liver cancer, and ovarian cancer (13, 28-31). These findings substantiated the diagnostic value of LPE and LPC. Of interest, benzene derivatives (xylene, ethylbenzene, and o-xylene) were relatively elevated in the serum of patients with ESCC in our study. Benzene overexposure strongly elevates the incidence of cancer and risks of mortality through increasing oxidative damage and cytogenetic changes (32,33). While understanding of the carcinogenicity of benzene derivatives in ESCC is limited, this finding likely resulted from the interaction between environmental and genetic factors. One of our overarching goals was to identify metabolite-based biomarkers for early detection. Here, we trained diagnostic models using logistic regression after LASSO and random forests. A set of 15 metabolites was screened as novel diagnostic markers, including all-trans-13,14-dihydroretinol, (±)-myristylcarnitine, (2S,3S)-3-methylphenylalanine, 3-(pyrazol-1-yl)-L-alanine, carnitine C10:1, carnitine C10:1 isomer1, carnitine C14-OH, carnitine C16:2-OH, carnitine C9:1, formononetin, hyodeoxycholic acid, indole-3-carboxylic acid, PysoPE 20:3, PysoPE 20:3(2n isomer1), and resolvin E1. Importantly, our diagnostic biomarker panel performed excellently in the training and validation groups (test set, the entire discovery group, and two independent validation cohorts) with accuracies of more than 90%, which would be a very meaningful subject of our further study. Intriguingly, almost all of these have been previously associated with cancers. For instance, increased levels of all-trans-13,14-dihydroretinol, a metabolite of vitamin A (all-trans-retinol) produced by retinol saturase (RetSat) (34,35), result in accelerated apoptosis induction through reduction of all-trans-retinoic acid (atRA) (36). Acylcarnitine, generated by mitochondrial metabolism of amino acids and fatty acids, has been implicated in mitochondria-mediated inflammation and promotion of cellular stress (37). Medium-chain acylcarnitines (C6-C12) are positively associated with the risk of prostate cancer progression, while long-chain acylcarnitines (C14-C18) are inversely associated with advanced stages (38,39). Past research found that octanoylcarnitine and decanoylcarnitine were closely correlated with the treatment effect of ESCC (26).
(2S,3S)-3-Methylphenylalanine can prevent mitochondrial damage and reduce apoptosis of cells (40). The level of 3-(pyrazol-1-yl)-L-alanine is closely related to gastric cancers (41). Indole-3-carboxylic acid, a microbial tryptophan metabolite, can enhance tumor malignancy and suppress antitumor immunity by activating the aryl hydrocarbon receptor (AHR) (42,43). Formononetin (FMNT), an isoflavonoid, possesses anti-inflammatory, antioxidant, and antitumoral properties (44-46), and supplementation of isoflavonoids can reduce the incidence and mortality of cancers (47,48). Hyodeoxycholic acid (HDCA) can suppress intestinal epithelial cell proliferation through the FXR-PI3K/AKT pathway (49). The reduced levels of lysophosphatidylethanolamines (LPEs), key components of cellular membranes, can explain the rapid cellular proliferation of malignancies (50). Resolvin E1 inhibits oxidative stress, autophagy, and apoptosis by targeting Akt/mTOR signaling (51), suppresses tumor growth, and enhances cancer therapy (52). Furthermore, we detected higher levels of four downregulated metabolites [hyodeoxycholic acid, (2S,3S)-3-methylphenylalanine, carnitine C9:1, and indole-3-carboxylic acid] in stage I-IIA (early cancer) than in stage IIB-IV (advanced cancer); these could serve as predictive biomarkers of early cancer, in which decreasing levels were associated with increased tumor burden. In summary, this metabolite-based diagnostic panel would be an effective tool for early screening and diagnosis of ESCC patients from high-risk populations in China. Here, potential prognostic predictors were explored by Kaplan-Meier log-rank tests and univariate and multivariate Cox regression analyses. Our findings indicated that MG(20:4)isomer, 9,12-octadecadienoic acid, and L-isoleucine remained prognostic biomarkers of overall survival for ESCC, with similar results regardless of whether prognostic clinical factors were incorporated or not. This suggests that the model based on these three indicators, which does not involve the clinicopathologic information of patients, could be easily used in clinical practice. MG(20:4)isomer was upregulated and correlated with poor prognosis in ESCC patients, while 9,12-octadecadienoic acid and L-isoleucine did the opposite. When patients were stratified into low- and high-risk groups based on this model, patients in the high-risk group tended to have lower rates of survival and more deaths. Similarly, previous studies have also reported associations of the three metabolites with different malignancies. MG(20:4)isomer, namely eicosanoic acid monoglyceride, an arachidonic acid derivative and canonical endocannabinoid, is an isomer of 2-arachidonoylglycerol (2-AG) and 1-arachidonoylglycerol (1-AG). Canonical endocannabinoids have anti-inflammatory and anticancer properties by activating the cannabinoid receptors CB1 and CB2. One study reported that both 2-AG and the activity of 2-AG-decomposing enzymes [the catabolic enzyme monoacylglycerol lipase (MAGL)] were elevated in lung squamous cell carcinoma tissue compared with normal adjacent lung tissue (53). Another study suggested that treatment with the mixed CB1/CB2 agonist WIN-55,212-2 resulted in inhibition of skin tumor growth (54). A study showed that 9,12-octadecadienoic acid, belonging to linoleic acid metabolism, was significantly increased in preoperative lung cancer patients compared with healthy volunteers and postoperative lung cancer patients (55).
Significant alterations of linoleic acid metabolism have been observed in many other cancer types associated with inflammatory-mediated damage, immune response, and cell proliferation (colorectal cancer, bladder cancer, and renal cell carcinoma) (56,57). Linoleic acid has also been reported as one of the biomarkers for the diagnosis (12,13,58) and therapeutic efficacy (59) of patients with ESCC. Interestingly, a model constructed from linoleic acid and 11 other differentiating metabolites showed good predictive value for distinguishing EAC, high-risk (BE and HGD), and control groups (16). L-Isoleucine affects cancer cell state as well as systemic metabolism in individuals with malignancy (60). Deficiency of L-isoleucine is one metabolic characteristic of patients with gastric cancer after chemotherapy, and correction of this metabolic deficiency improves patients' quality of life (61). Together, our findings have potentially important implications for therapeutic decision-making and risk stratification in the management of patients with ESCC. The pathway analysis of the 105 metabolites from patients versus controls also confirmed that dysregulation of amino acid metabolism, lipid metabolism, vitamin metabolism, nucleotide metabolism, and carbohydrate metabolism was related to ESCC. We identified retinol metabolism as the most perturbed pathway, with elevation of all-trans-13,14-dihydroretinol and reduction of 11-cis-retinol and 4-hydroxyretinoic acid in cancer. Under normal circumstances, atRA is the most biologically active retinol metabolite, binding to retinoic acid receptor α (RARα) and playing important roles in cell differentiation, proliferation, and apoptosis (62). Normal RetSat catalyzes all-trans-retinol to atRA, otherwise to all-trans-13,14-dihydroretinol. Deficiency of atRA has been proven to contribute to colon carcinogenesis, while returning to normal levels reduces the risk of cancer (36). In addition, atRA not only inhibits angiogenesis and metastasis of ESCC through the angiopoietin receptor Tie2 (63) but also induces apoptosis of metaplastic Barrett's cells via p38 and caspase pathways (64). In research using a Barrett's esophagus organotypic model, atRA alters the squamous cytokeratin profile of EPC2 toward a more columnar expression pattern (65). This suggests that RetSat in the retinol metabolism pathway is emerging as a promising therapeutic target for ESCC, and that atRA has a potential role in the therapy and chemoprevention of patients with ESCC and Barrett's esophagus. Importantly, linoleic acid metabolism was also identified as the most significant pathway in the pathway analysis of the 18 biomarkers, which validated a similar observation from previous smaller studies of patients with ESCC (12,13,58) and EAC (16). Linoleic acid metabolism is mediated by cytochrome P450 enzymes (CYP1A2, CYP2C, CYP2J, CYP2E1, and CYP3A4) to produce proinflammatory and proangiogenic oxylipins, resulting in tumor growth or metastasis (66). Previous studies suggested that 12,13-epoxyoctadecenoic acid (EpOME), a metabolite of linoleic acid produced by CYP monooxygenases, increased cytokine production and JNK phosphorylation in vitro and exacerbated AOM/DSS-induced colon tumorigenesis in vivo, revealing CYP2C enzymes as a novel therapeutic target for patients with colon cancer (67). Together with previous work, it is therefore speculated that cytochrome P450 will be a novel therapeutic target for ESCC and deserves further investigation.
Correlation analysis of serum and tissue metabolites in different samples from the same patient demonstrated that these molecules mainly belonged to amino acid and lipid metabolism. Although differences were observed, there were significant positive correlations between serum biomarkers and tissue differential metabolites. The decreases of PysoPE 20:3 and PysoPE 20:3(2n isomer 1) in serum were mainly due to the increasing consumption of LPEs for constituting cell membranes, consistent with the increase of N-acetyl-D-glucosamine in tissue caused by the high activity of nucleotide synthesis and cellular proliferation. (2S,3S)-3-Methylphenylalanine reacts with 2-oxoglutarate to form L-glutamate and (3S)-2-oxo-3-phenylbutanoate via 2-oxoglutarate aminotransferase, in the process of glutamate providing 2-oxoglutarate for the TCA cycle (68). For its correlated tissue metabolites, for instance, DL-stachydrine, as a derivative of proline, can be both degraded to and synthesized from glutamate (69,70). Increased transport of choline into cancer cells results in a high level of phosphocholine in tissues (a substance converted from choline via phosphorylation by choline kinase), thereby promoting cell growth and proliferation (71). Bis(1-inositol)-3,1′-phosphate 1-phosphate and CMP convert into CDP-1L-myo-inositol and inositol 3-phosphate in inositol phosphate metabolism, providing second messengers in cellular signal transduction (72). Elevation of 2,2,2-trichloroethanol in tissue may be related to hyperactive metabolism of xenobiotics by cytochrome P450. Both the high level of 13-HOTrE in tissue and the low level of (±)-myristylcarnitine in serum were the result of dysregulation of lipid metabolism in cancer. In conclusion, although serum and tissue metabolites were quite different, they maintained strong metabolite correlations arising from tumor-derived metabolic disorders. This further suggests that these serum metabolites could serve as non-invasive biomarkers for patients. The representativeness of the study population was ensured by multicenter, large-scale data for ESCC patients and normal controls. Detailed clinicopathological information and long-term follow-up minimized the influence of confounding factors on biomarker screening. Validation in two independent cohorts could support further extension and application of these diagnostic biomarkers. Different combinations in multivariate Cox regression analyses confirmed the reliability and clinical utility of the prognostic biomarkers. Nevertheless, some limitations should also be acknowledged. Targeted metabolomics analysis is necessary to further verify these serum metabolite biomarkers, and prospective larger cohorts are needed to validate the prognostic biomarkers given the retrospective design and non-uniform follow-up. Altogether, we revealed serum metabolic profiles for patients with ESCC using UPLC-MS/MS-based metabolomics technology. Novel serum metabolic diagnostic biomarkers could effectively distinguish esophageal cancer patients from healthy controls, offering an opportunity for the early detection and diagnosis of esophageal cancer patients in the asymptomatic population. Moreover, the prognostic biomarkers would provide a new direction for risk-stratified management and individualized therapeutic decision-making for both patients and doctors. Significant metabolic pathways provide mechanistic insight for future targeted therapies.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
LDW conceived and designed this study and obtained financial support. Subject recruitment and biological material collection in Henan Province were supervised by LDW and carried out by PPW, XS, XNH, RHX, RW, ZMF, MMY, KZ, LLL, LYL, YC, JJJ, and YZY. The following authors from the various collaborating groups undertook the collection of samples and data in their respective regions: SGG in the First Affiliated Hospital of Henan University of Science and Technology, FYZ in Anyang Tumor Hospital, JLR in the Second Affiliated Hospital of Zhengzhou University, XML in Hebei Provincial Cixian People's Hospital, and XZW in Linzhou People's Hospital. PPW, MXW, and LDW participated in the study design, discussion of results, and manuscript preparation. PPW, XS, XKZ, MXW, JFH, KZ, YJC, and JL performed data curation, statistical and bioinformatic analyses, and original draft preparation. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
We thank the individuals who participated in this study for making this work possible; the study teams for sample and clinical data collection; our colleagues for sample handling, clinical and pathological data collation, and follow-up data collection; Jing Li Ren and Xue Min Li for assistance in the pathological diagnosis of cancer patients; and Hui Zheng for bioinformatic analyses.
2022-01-28T14:26:58.947Z
2022-01-28T00:00:00.000
{ "year": 2022, "sha1": "7132fd8c5d2f860e141da2f6dacbfa5b6b1021f4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "7132fd8c5d2f860e141da2f6dacbfa5b6b1021f4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269720051
pes2o/s2orc
v3-fos-license
Do women have a say? A moderated mediation model's influence on the leverage policy toward corporate sustainable growth
Purpose – Examining the role of women on board (WoB) toward corporate sustainable growth (CSG) through leverage policy (LP). This research also investigates the interaction effect of WoB and LP on improving CSG.
Design/methodology/approach – This study uses a moderated mediation model to examine the impact of WoB on CSG, mediated by LP. Data from 48 KEHATI IDX ESG Sector Leaders Index companies observed from 2015 to 2021 were analyzed using structural equation modeling-partial least squares (SEM-PLS) with WarpPLS 8.0. The research applies instrumental variables (IV) to test and control for endogeneity due to nonrandom sample selection.
Findings – We found evidence that LP acts as a full mediator between the presence of WoB and CSG. The presence of WoB plays a moderating role by slightly weakening the influence of LP on CSG. Furthermore, we obtained evidence showing that the relationship between WoB and CSG is J-curve-shaped, a nonlinear relationship related to critical mass. Where the WoB ratio is at least 8.35% or higher, it will increase CSG in companies that have implemented the concept of environmental, social and governance (ESG) in Indonesia.
Originality/value – This model uses a moderated mediation model and J-curve analysis; there is an interaction between WoB and LP on different paths of the mediator to CSG. This model examines the role of WoB as a moderator of the effect of LP on CSG. A nonlinear J-curve test was conducted to determine the minimum level of WoB that can influence the increase of CSG.
Introduction
Gender serves as the primary dividing line where development is unevenly distributed (Chakraborty and Sengupta, 2023). Sustainable Development Goal (SDG) 5 on Gender Equality aims to attain gender equality and empower all women, as proclaimed in the 2030 Agenda. SDG 5 strives to guarantee that women can fully and effectively participate, with equal opportunities for leadership at all levels of decision-making in political, economic, and public life (WEF, 2015). Subsequent studies have acknowledged the importance of women's financial independence for economic development and gender equality (Roy and Xiaoling, 2022). The lack of gender diversity in senior management stands out as a significant challenge for contemporary companies (Garcia-Blandon et al., 2022). A key benchmark for the success of SDG 5 targets is the proportion of women in managerial positions, contributing to gender diversity in corporate governance and the sustainable growth of the company (Binder, 2019; Paoloni et al., 2019). The SDGs advocate for changes in business strategy, urging companies aiming at long-term sustainability to integrate a human-centric approach (Kelan and Wratil, 2021). Sustainable growth involves aligning corporate growth goals with internal and external resources to formulate a cohesive strategic plan (Patel et al., 2020). Corporate governance geared towards sustainable growth can ensure alignment of interests among companies, shareholders, managers, and stakeholders (Ludwig and Sassen, 2022).
Women are figures who respect competitiveness, organizational skills, and teamwork (Nicolò et al., 2021). Women have superior skills in terms of internal monitoring mechanisms and improving the quality of financial reporting (Gull et al., 2018; Usman et al., 2021). The female leadership style will ultimately improve business and corporate culture and foster a more prosperous and peaceful work environment (Akkaya and Üstgörül, 2020). Women are a unique resource regarding ideas, business solutions and corporate decision-making (Javaid et al., 2021). Female directors are also more sensitive to social, environmental and ethical issues and pay more attention to stakeholder interests than male directors. For this reason, women on the board of directors can become agents of change and sustainability (Kelan and Wratil, 2021). Women often bring backgrounds more closely tied to nonfinancial matters, displaying a heightened ethical inclination and a tendency to steer clear of breaches in social and environmental policies, rendering them more effective (Li et al., 2022). The presence of gender diversity encourages companies to embrace socially responsible behavior and sustainable practices, positively influencing accountability and transparency (Paoloni et al., 2019). The inclusion of women on corporate boards aims to bridge the gender gap in economic sector participation and foster the advancement of women within organizations (Neschen and Hügelschäfer, 2021). The correlation between female directors and financial performance remains a topic of debate (Ciappei et al., 2023). Women continue to face diverse challenges in pursuing a career and attaining leadership roles. The underrepresentation of women in leadership positions perpetuates the perception of their perceived lack of capability (Öberg, 2021). The presence of women in managerial roles can serve as an indication that they are capable of full participation and have equal leadership opportunities in the workplace (Martínez-Jiménez et al., 2020). On average, women occupy only 20.2% of board seats in the top 100 companies listed on the leading stock exchanges in G20 member countries. The background of this research conducted in Indonesia is shaped by several phenomena. According to data from the Ministry of Women's Empowerment and Child Protection 2022, the proportion of women in managerial positions is projected to reach only 25.84% between 2021 and 2022, marking a significant decrease of 40.54% from 2020. Data from the World Economic Forum Global Gender Gap (WEFGG) report in 2022 revealed that Indonesia scored 69.7 on the Social Institutions and Gender Index, ranking 92nd among 146 countries. This score is below the global average for Economic Participation and Opportunity in East Asia and the Pacific at 72.2%, with Indonesia securing the 9th position. The share of women in senior managerial roles in the professional sphere stands at 32.4% (World Economic Forum, 2022).
Regulation Number 33/POJK.04/2014 from the Financial Services Authority regarding Directors and Board of Commissioners of Issuers or Public Companies in Indonesia does not address the proportion of women on company boards. This stands in contrast to several European Union countries like Spain, Italy, France, Belgium, Sweden, Austria, and the Netherlands, which have established a 40% threshold as the highest level to achieve gender diversity on boards, often referred to as a "critical mass" (Lefley and Janeček, 2023; Sousa and Santos, 2022; Tapver et al., 2020). A gender-balanced configuration, where the proportion of women in boardrooms ranges between 40 and 60%, is noted to impact economic and risk-oriented performance in financial firms (Lafuente and Vaillant, 2019). Evidence indicates a connection between corporate governance's internal control mechanisms led by women's boards and capital structure in developed countries, with these efforts still relatively minimal in developing markets (Zaid et al., 2020). Women directors' influence extends to social performance (Öberg, 2021), with positive impacts noted in family companies in France (Mnif and Cherif, 2020), Japanese companies (Kubo and Nguyen, 2021), Italian banking firms (Mazzotta and Ferraro, 2020), China (Alkebsee et al., 2021), and Korea (Kim and Kim, 2023). Sraieb and Akin (2021) find evidence linking female directors to positive correlations with corporate environmental performance. The board of directors serves as an internal oversight mechanism for corporate governance, acting as a source of sustainable competitive advantage, particularly in monitoring financial policy reporting (Bhardwaj, 2022). Women on the board positively impact sustainable company performance (Guizani and Abdalkrim, 2022). The positive effect of women on board extends to leverage policy (Fernandes et al., 2023; Schopohl et al., 2021). Board effectiveness in overseeing management performance increases with women on board (Nicolò et al., 2021). Results from contrary studies indicate that women on board have a negative effect on the sustainable performance of Malaysian companies (Ahmad et al., 2020) and those in West Africa (Boubacar, 2020). The presence of female directors is associated with a negative and significant relationship between board gender diversity and earnings management (Yami et al., 2023). Women's boards do not have a direct impact on corporate sustainability. The optimal proportion of women on a company's board of directors, necessary for maximizing sustainable performance, remains unidentified (Bhardwaj, 2022; Conde-Ruiz et al., 2020). The relationship between leverage and corporate sustainability, analyzed by Ludwig and Sassen (2022), highlights that capital structure is not extensively discussed, and there remains insufficient research information about its relationship with corporate sustainability.
This research focuses on the impact of women's representation on corporate boards in Indonesia, aligned with SDG 5 on gender equality. It examines how women on boards influence funding policies and sustainable growth. This study is crucial as Indonesia lacks regulations specifying female board representation. Using a unique research design, including moderated mediation analysis, the research reveals that companies in Indonesia implementing ESG need at least 8.35% female board members for sustainable growth. The findings also emphasize the positive interaction between women on boards, leverage policies, and sustainable growth. This research contributes to the literature by exploring the internal control mechanism of corporate governance and highlighting the WoB quota's moderating role as a decision-maker for ESG-oriented companies, promoting sustainable growth.

Given the premise above, this research is presented in the following order: Section 2 discusses the theoretical background regarding the relevance of women's presence on boards and the continued growth of corporations that implement ESG. As part of the literature review, this research develops hypotheses to answer the research questions. Section 3 presents information about data sources, measurements, descriptive statistics and statistical models. Section 4 discusses the main empirical findings based on panel data regression. In the end, Section 5 conveys the conclusions of the research results, complemented by practical implications and future research.

Literature review and hypothesis development

2.1 The critical mass, shareholders theory and sustainable growth

The critical mass theory, initially introduced by Granovetter (1978), posits that a certain quantity of women on company boards can lead to a turning point, affecting group interactions and behavior. This theory suggests a nonlinear relationship between the number of women on boards and the impact of board gender diversity (Torchia et al., 2011). The shift typically occurs when women exceed 30% representation in decision-making positions (Lafuente and Vaillant, 2019; Lefley and Janeček, 2023). Women are assuming leadership roles based on merit and internal control mechanisms, enhancing corporate control for long-term financial and nonfinancial goals, including environmental and social governance (Husted and Sousa-Filho, 2019). Improved corporate governance quality, influenced by the board of directors, reduces financial leverage (Yosra and Sioud, 2011). Shareholder theory emphasizes the role of generating profits for sustainable company growth, requiring a stable operating environment. Economic policy changes create uncertainty and increase risk (Ahsan et al., 2021). Female directors enhance investment efficiency, mitigate agency problems, and reduce conflicts of interest (Shaheen et al., 2022). Agency problems can be addressed by increasing control over the company, often achieved through debt financing as a bonding mechanism (Indah Lestari et al., 2020). Evidence suggests that female directors or gender diversity on boards are associated with lower self-esteem and increased risk aversion (Menicucci and Paolucci, 2022; Nadeem et al., 2019). Sustainable growth refers to a company's ability to maintain a growth rate over time. The company's Sustainable Growth Rate (SGR) serves as a key indicator, integrating growth objectives with internal and external resource empowerment to formulate a consistent strategic plan. SGR gauges the alignment between internal
resources and inherent industry growth opportunities, aiming for higher returns. This method estimates a firm's earnings growth rate, with a faster SGR being advantageous in high-return industries. SGR is equivalent to Return on Equity (ROE), where ROE represents the Net Income to Equity ratio (Escalante et al., 2009; Patel et al., 2020). SGR breaks down the return on equity into four components (Higgins, 1981): profit margin (income/sales), retention (1 − owner withdrawals), asset turnover (sales/assets), and leverage (1 + debt/equity). This model, closely linked to DuPont analysis, combines these factors to determine ROE (a small numerical sketch of this decomposition is given below). A decrease in any of these ratios diminishes sustainable growth and increases the likelihood of requiring financial leverage to sustain the company. The board of directors, as strategic planners, considers these components to adjust and achieve a higher or lower ROE.

Hypothesis development

Female directors play a significant role in predicting sustainability, contributing to the reduction of agency conflicts between shareholders and managers through enhanced monitoring (Amin et al., 2021; Fernandes et al., 2023). Internal corporate governance mechanisms, including diverse boards, guide companies toward sustainability and integrated success. Implementing robust internal corporate governance mechanisms is essential for supporting effective and sustainable management within organizations (Franczak and Margolis, 2022). Board gender diversity is recognized as a positive contributor to sustainable growth. Gender diversity, measured by the percentage of women on the board, including female independent directors, is acknowledged as crucial for effective governance (Guizani and Abdalkrim, 2022).

H1. The existence of women on the board, as a manifestation of SDG 5, has a positive effect on corporate sustainable growth.

The sustainable growth rate represents a company's long-term target growth rate, aligning with internal resources to avoid straining limited assets. Both excessively fast and slow growth pose challenges, affecting investor confidence and creating missed opportunities. Faster growth makes the capital structure riskier. Warmana et al. (2020) introduce the "dynamic trade-off theory," evaluating trade-off theory and pecking order theory simultaneously in capital structure decisions. Leverage structure choices significantly impact a company's profitability, even without an ideal leverage ratio. Optimal leverage decisions contribute to sustainability by reducing agency problems and default risk, with managerial perspectives focusing on stakeholders and the company's capabilities (Zhou et al., 2021).
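As a numerical illustration of the Higgins (1981) decomposition described in Section 2.1, the sketch below combines the four components into an ROE-based sustainable growth rate. All input figures are hypothetical and are not drawn from the study's sample.

```python
# Illustrative sketch of the Higgins (1981) SGR decomposition described above.
# All input figures are hypothetical; they are not taken from the study's data.

def sustainable_growth_rate(net_income, sales, assets, debt, equity, owner_withdrawals):
    """Combine the four DuPont-style components into an ROE-based growth rate."""
    profit_margin = net_income / sales          # income / sales
    retention = 1 - owner_withdrawals           # share of earnings retained
    asset_turnover = sales / assets             # sales / assets
    leverage = 1 + debt / equity                # 1 + debt/equity (assets / equity)
    roe = profit_margin * asset_turnover * leverage   # DuPont-style return on equity
    return roe * retention                      # growth sustainable from retained earnings

# Hypothetical firm: 12% margin, 1.25x turnover, 0.6 debt/equity, 40% payout
print(sustainable_growth_rate(net_income=120, sales=1000, assets=800,
                              debt=300, equity=500, owner_withdrawals=0.4))
```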
The board of directors, as a relevant internal corporate governance mechanism, plays a crucial role in financial reporting through the advisory and monitoring roles of female directors. Female characteristics are often associated with risk aversion, conservative attitudes, and financial caution, and tend to reduce excess funds depending on the percentage of female directors and female independent directors. Companies led by female boards with greater cash holdings prefer leveraging, calculated as the total debt-to-total assets ratio. Women are inclined to avoid risk, making them less likely to choose risky policy options. Gender diversity and cash holdings depend on the role of female directors on the board. An increased number of female directors significantly and positively influences leverage risk (Birindelli et al., 2020). Specifically, evidence suggests that women in supervisory roles, led by female directors, lead to a decrease in cash reserves (Cambrea et al., 2020).

H2. The existence of women on the board has a positive effect on increasing leverage policies.

Appropriate debt policies can enhance a company's financial efficiency, thereby supporting sustainable growth. The Sustainable Growth Rate (SGR) calculation requires companies to maximize Return on Equity (ROE) and income to improve leverage ratios (Adams et al., 2015). Companies consistently manage debt financing to achieve sustainable growth and use equity financing for long-term sustainable growth. Leverage, the total liabilities-to-total assets ratio, is used as the measure of debt policy.

H3. Leverage policies can increase corporate sustainable growth.

Mediating variables are dependent variables expected to be influenced by independent variables. Applying mediating variables aims to demonstrate their role in the causal relationship between independent and dependent variables. Mediating variables depict conditions such as activities, behaviors, or processes in progress (Wu and Zumbo, 2008). Sound debt policies can positively contribute to a company's financial performance, thereby supporting sustainable growth. The presence of female directors actively involved in designing and overseeing debt policies can be a crucial factor in achieving sustainable growth.

H4. Leverage policies may mediate the effect of women on board presence on corporate sustainable growth.

The application of moderating variables is employed to test whether the strength of the causal relationship between independent and dependent variables depends on the moderator variable. The WoB variable may alter the strength of the relationship between the leverage policy variable and a company's sustainable growth from strong to moderate, or eliminate the connection altogether. In this study, we suspect that there is an interaction between WoB and LP concerning CSG, providing a reason to investigate its role as a moderating variable. A moderating variable possesses inherent characteristics and attributes (Wu and Zumbo, 2008), prompting us to test the role of WoB as a moderating variable.

H5. The existence of women on the board can moderate the effect of leverage policies on corporate sustainable growth.

The research model displayed in Figure A1 [1] connects the hypotheses that have been generated and indicates how the research has been conceptualized.
Methodology

The study covers all companies listed on the Indonesia Stock Exchange, totaling 833 issuers. Following the explanation of Wu and Zumbo (2008), moderation concerns when, or under what conditions, an independent variable's influence on the dependent variable becomes stronger or weaker; the moderator adjusts the strength or direction of the causal relationship. Mediation, explaining the "why" and "how" of cause and effect, seeks to identify intermediate processes between the independent and dependent variables. Accordingly, we examine the funding (debt) policy variable as a mediator of the effect of women's board presence on corporate sustainable growth, and we apply a moderated mediation model to test which variables influence corporate sustainable growth most.

The regression analysis can be summarized in two equations, with intercepts a0 and b0, coefficients a1 and b1 for Women on Board (WoB) and Leverage Policy (LP), the moderation coefficient b2 for the interaction of LP with WoB, and regression residual r:

LP = a0 + a1·WoB + r (1)

CSG = b0 + b1·LP + b2·(LP × WoB) + r (2)

Equation (2) clarifies how the conditional regression of Corporate Sustainable Growth (CSG) on LP can be considered contingent on Women on Board (WoB). It introduces the concept of the conditional indirect effect of WoB on CSG through LP, expressed as f(WoB) = a1·(b1 + b2·WoB).

The PLS-SEM method was applied using the WarpPLS Version 8.0 software to perform the analysis. Recent methodological research has been conducted to accommodate more complex model structures or address data deficiencies such as heterogeneity in indicator variables and structural paths, without imposing normal distribution assumptions on the data. The indicator approach to moderated mediation is implemented within the PLS structural equation model (Kock, 2021). The primary challenge in using the PLS method for SEM equations lies in its inadequate consideration of measurement errors, leading to underestimated path coefficient estimates.

Recent research emphasizes the importance of addressing endogeneity issues in econometrics by proposing the use of instrumental variables (IV) for causal relationship estimation with observational data. The two-step Heckman procedure is a suitable method, accompanied by various tests, techniques, and considerations outlined by researchers when employing IV (Bascle, 2008). Non-random sample selection introduces specific endogeneity challenges (Antonakis et al., 2010). Endogeneity arises when researchers misidentify the causal relationship between independent and dependent variables, attributing the observed relationship to a third factor (mediating or moderating variables). IV is instrumental in testing and controlling endogeneity, especially when structural error terms for an endogenous variable correlate with predictor variables. An IV selectively shares variability with another variable. Utilizing the "Explore analytic composites and instrumental variables" menu in WarpPLS 8.0 with the "Single stochastic variation sharing" sub-option facilitates the creation and testing of IVs for endogeneity control (Kock and Sexton, 2017). If the IV link's path coefficient to the dependent variable is small and nonsignificant, it indicates successful implementation of the Heckman procedure, signifying no significant endogeneity in the model (Certo et al., 2016).
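To make the moderated mediation structure of Equations (1) and (2) concrete, the following sketch estimates both stages with ordinary least squares and evaluates the conditional indirect effect a1·(b1 + b2·WoB) at two moderator levels. It is only a schematic stand-in for the analysis: the study itself uses PLS-SEM in WarpPLS 8.0, and the simulated data and coefficient values here are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 250                                                  # hypothetical firm-year observations
wob = rng.uniform(0, 0.5, n)                             # share of women on the board
lp = 0.3 + 0.2 * wob + rng.normal(0, 0.1, n)             # Eq. (1): LP = a0 + a1*WoB + r
csg = 5 + 4 * lp + 6 * lp * wob + rng.normal(0, 1, n)    # Eq. (2): CSG = b0 + b1*LP + b2*(LP x WoB) + r

# First stage: WoB -> LP
m1 = sm.OLS(lp, sm.add_constant(wob)).fit()
a1 = m1.params[1]

# Second stage: LP and LP x WoB -> CSG
X2 = sm.add_constant(np.column_stack([lp, lp * wob]))
m2 = sm.OLS(csg, X2).fit()
b1, b2 = m2.params[1], m2.params[2]

# Conditional indirect effect of WoB on CSG through LP: f(WoB) = a1 * (b1 + b2 * WoB)
for w in (0.1, 0.3):
    print(f"WoB = {w:.1f}: conditional indirect effect = {a1 * (b1 + b2 * w):.3f}")
```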
Result

4.1 Result

Table A1 [1] presents the descriptive statistics for the research variables. The WoB variable has an average of 0.176 with a standard deviation of 0.147, indicating that the presence of women on boards in the ESG Sector Leaders IDX KEHATI index is 17.6%, still below the G20 average of 20.2%. The LP variable, measured by loans to total assets, has an average of 0.617, and the CSG variable has an average of 15.073, representing the Return on Equity ratio. The construct variables LP and CSG use formative indicators, and Table A2 [1] displays Cronbach's Alpha, Composite Reliability (CR), and Average Variance Extracted (AVE) results. Both Cronbach's alpha and CR values are satisfactory, exceeding 0.7, indicating internal consistency. The AVE surpasses 0.50, meeting convergent validity requirements and explaining at least 50% of item variance. Table A3 [1] presents the correlations among the latent variables together with the square roots of the AVEs for assessing discriminant validity; the square roots of the AVEs appear on the table's diagonal, with the correlations between the latent variables off the diagonal. Table A4 [1] confirms the absence of vertical multicollinearity in the latent variable block, as the block VIF is lower than 3.3. Table A5 [1] indicates an acceptable model fit, with the Standardized Root Mean Square Residual (SRMR) below 0.08 and the Chi-square (X2) significant at 5%.

Findings and discussion

Figure A2 [1] presents the results of hypothesis testing, and Table A6 [1] displays path coefficients, standard errors, effect sizes, p-values, and decisions regarding hypothesis acceptance or rejection. The R-square value is 0.10, indicating that WoB and LP collectively account for 10% of the variation in CSG, while the remaining 90% is explained by unexamined variables; CSG is therefore not solely dependent on WoB and LP. In comparison, the R-square for the effect of WoB on LP is 0.04, suggesting that variation in WoB contributes 4% of the variation in LP. Consequently, the influence of WoB is more substantial on LP. An R-squared or adjusted R-squared coefficient above 0.02 signifies reasonable explanatory power within the sub-model.

The path coefficient values presented in Table A6 [1] are divided into two sections. Part A indicates the direct effect of each variable. WoB does not have a significant effect on CSG (β = 0.066, p = 0.148), thus rejecting hypothesis 1. WoB has a positive and significant impact on LP (β = 0.193, p-value = 0.005), supporting the acceptance of hypothesis 2. Furthermore, another direct path reveals that LP significantly and positively influences CSG (β = 0.226, p-value < 0.001), confirming hypothesis 3. To reinforce these findings, we conducted a robustness test, dividing the sample into two based on the number of WoB. Table A6 [1] displays two subsamples: companies with only one woman on the board and companies with at least two. The results indicate that when there is only one woman on the board, the impact of WoB on LP is significantly negative (β = −0.255, p-value = 0.022). Adding another woman changes the direction of the relationship to a significant positive one (β = 0.459, p-value = 0.019). The study aligns with research by Gull et al. (2023), asserting the need for two or more female directors to impact corporate decisions. These findings are consistent with Uddin (2021), who stated that implementing corporate governance and increasing the number of female directors leads to managing more debt in leverage structure decisions and enhances profitability.
In this study, no direct influence was found between the presence of WoB and the enhancement of CSG, for both the entire sample and the subsamples. These results differ from previous research by Gold and Taib (2022), which states that gender diversity on the board plays a role in guiding companies towards sustainability and achieving sustainable integration. Table A6 [1] Part B illustrates a positive and significant indirect effect of WoB on CSG after being mediated by LP. These results are sufficient to accept hypothesis 4. Full mediation occurs because the coefficient of the influence of the independent variable on the dependent variable, initially nonsignificant as a direct effect (β = 0.066, p-value = 0.148), becomes significantly positive (β = 0.044, p-value = 0.020) after going through the mediating variable LP for the full sample. Thus, LP acts as a full mediator in the relationship between WoB and CSG. Moderation testing was applied in this study to interpret whether WoB weakens or strengthens the relationship between LP and CSG. WoB interacts significantly with CSG (β = 0.193, p-value < 0.001), meaning WoB acts as a quasi-moderator. This occurs because the moderator variable also acts as an independent variable. The weakening role of WoB as a moderator is indicated by a slight decrease in the coefficient value of the direct relationship between LP and CSG (β = 0.226, p-value < 0.001). Thus, hypothesis 5 is accepted. This study aligns with Alkebsee et al. (2021), stating that the presence of women in the "board composition" of corporate governance results in an effective monitoring mechanism.

There is no direct influence from the WoB variable on the CSG variable; thus, WoB affects CSG only indirectly through LP. In this study, the relationship between latent variables becomes WoB → LP → CSG, with LP serving as the mediating variable. This model presents endogeneity concerning CSG, as the variation flows from WoB to CSG through LP, causing bias in the path estimation for the LP → CSG link through ordinary least squares regression. Figure A2 [1] presents the solution to this issue, which involves creating the instrumental variable iCSG, which only incorporates the WoB variation that ends in CSG and nothing else, and revising the model to have the following relationships: WoB → LP, LP → CSG, and iCSG → CSG. The iCSG → CSG link can be used to test endogeneity through its p-value and effect size. The link can also be utilized to control endogeneity, eliminating bias when estimating the path coefficient for the LP → CSG link through ordinary least squares regression. This is addressed by including the instrumental variable iCSG, which incorporates the variation from WoB that ends in CSG. An IV is a variable that selectively shares its variance with another variable, exclusively with that variable. Instrumental variables can be used to test and control endogeneity, a situation where the structural error term for an endogenous variable is correlated with one or more predictor variables. The path coefficient for the iCSG → CSG link is small, at 0.098, and not significant (p-value 0.57). As an implementation of the Heckman procedure for assessing and controlling endogeneity (Bascle, 2008; Certo et al., 2016), this indicates no significant endogeneity affecting CSG in our model.
Figure A3 [1] shows evidence of a J-curve for the relationship between WoB and CSG, indicating that WoB levels above 8.35% will increase CSG. The results indicate that the average WoB level in the 36 companies listed on the KEHATI IDX ESG Sector Leaders Index, observed for seven years from 2015 to 2021, is 17.6%, positioning it to the right of the minimum point at 8.35%. This means that WoB has a positive influence and can enhance CSG. These results differ from studies in the European Union and from critical mass theory, which state that a change occurs when companies have between 30 and 40% women on the board of directors.

The moderating effect is evident in the hypothesis that Women on Board (WoB) moderates the relationship between Leverage Policy (LP) and corporate sustainable growth (CSG). Figure A4 [1], a three-dimensional (3D) graph (Kock, 2021), illustrates this moderating relationship, connecting WoB, LP, and CSG through a direct link. As the path for the direct relationship traverses the range of the moderating variable from low to high, the path coefficient's sign becomes positive and its magnitude increases. This indicates that WoB slightly weakens the direct relationship between LP and CSG, and the effect is statistically significant. The study establishes a dynamic interaction between the moderator variable (WoB) and the mediator variable (LP), demonstrating changes in the indirect effect based on the varying values of the moderator, highlighting the occurrence of moderation in the mediation process.

Implication

The managerial implications of this research suggest that WoB has a positive and significant impact on LP when there are two or more women on the boards of companies that have implemented ESG in Indonesia. The findings indicate that the critical point is at a WoB level of 8.35%, above which CSG can be enhanced, contrasting with suggestions from some previous studies that proposed levels above 30%. Specifically, the authors support recommendations regarding the need to increase the number of WoB to improve CSG.

The social implications of this paper highlight the importance of considering the presence of WoB when analyzing their influence on LP and CSG. The presence of WoB will affect CSG through LP, meaning that an increasing number of WoB constituents will enhance CSG through the implementation of LP. With their tendency to be more compliant with regulations, cautious, vigilant, and risk-averse compared to men, WoB will effectively manage financing derived from debt.
Conclusion and limitation

The research findings reveal that WoB significantly and positively impacts LP when there are two or more women on the company's board. However, WoB had no direct effect on CSG in either the overall sample or the sub-samples. The J-curve graph illustrates the relationship between WoB and CSG, indicating that a WoB level above 8.35% contributes to increased CSG. The mediation test shows that LP fully mediates the influence of WoB on CSG, demonstrating a positive and significant indirect effect. This suggests that the company's debt policy plays a crucial role in determining the effectiveness of WoB's presence. The cautious approach of WoB, prioritizing internal funding sources, has been shown to reduce company risk. The interaction between WoB and LP slightly weakens the impact of LP on CSG. WoB's cautious funding decisions limit access to external funding, affecting profit generation and, consequently, CSG. A limitation of the study is that the CSG measure considers only financial performance and governance, leaving room for future research on the influence of WoB on CSG using environmental and social proxies.
2024-05-12T15:22:15.107Z
2024-05-13T00:00:00.000
{ "year": 2024, "sha1": "97cd57b319c72869423fb4f92034cc22da1a08bc", "oa_license": "CCBY", "oa_url": "https://www.emerald.com/insight/content/doi/10.1108/JABES-02-2023-0049/full/pdf?title=do-women-have-a-say-a-moderated-mediation-models-influence-on-the-leverage-policy-toward-corporate-sustainable-growth", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "e7a61c5a77ae897f9f2be0ecda7c9247b509328e", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
225858894
pes2o/s2orc
v3-fos-license
The effect of problem-based learning model to students' cognitive achievement on high and low students' problem-solving abilities

This study aimed to determine the effect of the PBL model on the cognitive achievement of students with high and low problem-solving abilities. This was a quasi-experimental study using a posttest-only non-equivalent control group design with 22 students of class X(3) science as the control class (conventional model) and 21 students of class X(4) as the experimental class (PBL model) at a senior high school in Surakarta. Students' problem-solving abilities were described in four aspects: identifying the problem, identifying the cause of the problem, proposing a method of solution, and examining the result of problem solving, measured using 40 multiple-choice items based on the Problem Solving Skill Test [12]. Students' problem-solving abilities were measured before the research was conducted and divided into high and low categories. Students' cognitive achievement, described as students' concept understanding, was measured using an essay test given after the PBL model had been used. The instrument had been validated by expert judgment and students. Data were analyzed using a t-test. The results showed: 1) Sig. (0.00 < 0.05), H0 rejected: there was a difference in students' achievement between the experimental class and the control class; 2) Sig. (0.683 > 0.05), H0 accepted: there was no difference in achievement between students with high and low problem-solving abilities.

Introduction

The future education system is designed to prepare young generations to use their thinking and problem-solving skills to deal with complicated problems. In the information technology era, the ability to think and solve problems is very important for students to have world-class knowledge to build the country [1,2]. However, students' problem-solving abilities are generally still low. The trend of PISA test performance based on rank and achievement score over time is relatively low, i.e., 38 [3,4]. Classroom observations and in-depth interviews with teachers indicate that teachers are confused about how to develop problem-solving-based learning [5]. The problem-based learning (PBL) model has been introduced for use in learning through the 2013 curriculum. However, in practice it has not delivered the desired results. In addition, the application of the PBL model generally gives little consideration to important factors such as students' problem-solving abilities. Departing from this, the present research aims to determine the effect of applying the PBL model on the cognitive achievement of students with high and low problem-solving abilities.

Theoretical framework

The PBL model has the following syntax: orientation to the problem, organizing students, independent and group investigations, developing and presenting artifacts and exhibiting them, and analyzing and evaluating problem-solving processes [6]. The characteristics of the PBL model are to expose students to practical problems in their daily lives as stimuli in learning [7], to help students think through problem solving and construct knowledge [8], and to allow a variety of solutions viewed from various aspects, authentic inquiry, product yield, and cooperation [9]. There are three basic elements in the PBL model: initiating triggers, examining previously identified issues, and utilizing knowledge to understand the problem in depth [10].
The advantages of the PBL model are that it facilitates students in building knowledge and makes students interested in learning by involving them actively in the lesson [10]. Through the PBL model, students are able to identify complex problems in the real world, manage conflict, make decisions independently [11,12], and develop higher-order thinking skills [13,14]. The weakness of the PBL model is that it is less effective when applied in large classes [15]. The higher the students' ability to solve problems, the greater the chance of teachers' success in applying the PBL model; hence the importance of considering students' problem-solving skills when applying the PBL model. Problem-solving skills can be measured with the Problem Solving Skill Test (PSST) instrument in the form of 40 multiple-choice questions covering four aspects: identifying problems, identifying causes of problems, proposing problem-solving methods, and testing problem-solving results [4].

Methods

This research is a quasi-experimental study with a pretest-posttest non-equivalent control group design involving 43 students of grade X science classes in a high school in Surakarta. Students' problem-solving abilities were measured before the research was conducted using the Problem Solving Skill Test (PSST) instrument in the form of 40 multiple-choice questions covering four aspects: identifying problems, identifying causes of problems, proposing problem-solving methods, and testing problem-solving results [4]. Problem-solving skills were grouped into two categories, high and low. Data on cognitive learning outcomes were measured using a test of three questions. The research instrument has been validated by expert judgment and is eligible for use.

Results and Discussion

Descriptive data on students' cognitive achievement are shown in Table 1 and Table 2. The result of the t-test on the learning model shows sig. 0.010 < 0.05, so H0 is rejected. This means that there is a difference in students' cognitive achievement between the experimental class (using the PBL model) and the control class (using the conventional model). This is related to the characteristics of the PBL model syntax, which includes five phases: problem orientation, organizing student learning, investigation (individual and group), presentation and artifact exhibition, and analyzing and evaluating the process of problem solving [6]. In the problem-orientation phase, students are faced with a problem from real life and are required to identify and find the problem in order to seek its solution from various aspects. Here, students are trained in problem-solving skills, including identifying the problem, identifying the cause of the problem, and solving the problem, the components presented in the Problem Solving Skill Test [4], so that students' ability to solve problems is improved [2,8]. In conventional learning, students also solve problems, but these are less challenging and thus less encouraging for high-level thinking. With the PBL model, students are required to analyze the presented phenomena, formulate problems, develop hypotheses, and design investigations [9,13]. Performing this series of PBL syntax steps leads to more meaningful learning. The PBL model has the characteristic of training students to understand the subject matter in depth in order to develop problem-solving skills [12]. Student-centered biology learning is the right way to increase student motivation and problem solving, providing opportunities for students to help overcome problems [16].
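A minimal sketch of the independent-samples t-test underlying this analysis is shown below; the achievement scores are simulated placeholders, not the study's data, and the actual analysis may additionally have used a two-way design for the interaction test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical cognitive-achievement scores (0-100) for the two classes
pbl_class = rng.normal(78, 8, 21)           # experimental class, n = 21
conventional_class = rng.normal(70, 8, 22)  # control class, n = 22

t, p = stats.ttest_ind(pbl_class, conventional_class)
print(f"t = {t:.3f}, p = {p:.3f}")
# Reject H0 at alpha = 0.05 when p < 0.05, i.e. the two classes differ in mean achievement
```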
Judging from students' problem-solving abilities, the t-test result shows sig. 0.683 > 0.05, so H0 is accepted, meaning there is no significant difference in students' cognitive achievement between students with high and low problem-solving abilities. This is because the students' abilities were relatively homogeneous, so the distinction between high and low problem-solving abilities was weak. The test of the interaction between the PBL model and students' problem-solving abilities yielded sig. 0.889, so H0 is accepted, meaning there is no interaction between the PBL model and students' problem-solving abilities [6]. This is related to the characteristics of the PBL model, which has the potential to empower students' problem-solving abilities.

Conclusion

Based on the results of the research, it can be concluded that the problem-based learning model has a significant effect on students' cognitive achievement, but there is no significant difference between students with high and low problem-solving abilities.
2020-07-09T09:12:08.306Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "b4892c6d74c44f0a519d745fb0a65e29f12cbbe5", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1567/4/042040", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "73c4ef5ee20afffd49ab8778ff9777b3178c0b32", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology", "Physics" ] }
105961
pes2o/s2orc
v3-fos-license
Beyond the Cortical Column: Abundance and Physiology of Horizontal Connections Imply a Strong Role for Inputs from the Surround Current concepts of cortical information processing and most cortical network models largely rest on the assumption that well-studied properties of local synaptic connectivity are sufficient to understand the generic properties of cortical networks. This view seems to be justified by the observation that the vertical connectivity within local volumes is strong, whereas horizontally, the connection probability between pairs of neurons drops sharply with distance. Recent neuroanatomical studies, however, have emphasized that a substantial fraction of synapses onto neocortical pyramidal neurons stems from cells outside the local volume. Here, we discuss recent findings on the signal integration from horizontal inputs, showing that they could serve as a substrate for reliable and temporally precise signal propagation. Quantification of connection probabilities and parameters of synaptic physiology as a function of lateral distance indicates that horizontal projections constitute a considerable fraction, if not the majority, of inputs from within the cortical network. Taking these non-local horizontal inputs into account may dramatically change our current view on cortical information processing. Temporal precision and iTs possible role for corTical processing Triggered by early theories on coding in neural networks (for an overview, see Perkel and Bullock, 1968), it has been hypothesized that temporal precision of neuronal spiking activity may play an important role for cortical information processing. However, data from early neurophysiological experiments recording responses to stimuli in primary sensory areas suggested that information is contained in the graded elevation of firing rates of cells responding to certain features of the stimulus (e.g., Adrian, 1928;Barlow, 1972). The idea of a rate code henceforth dominated the conceptual thinking about cortical coding and influenced the experimental designs. Experimental evidence supporting more intricate theories based on temporally precise spiking remained, for a long time, relatively rare. More recently, doubts have been raised whether the above mentioned recordings from strongly responding units are representative for the majority of neocortical cells (Shoham et al., 2006). Both, the refinement of recording techniques and the application of more sophisticated sensory stimuli have provided new insights concerning firing rates and activity dynamics of single neurons in primary sensory areas. Examples include intracellular recordings from anesthetized (Brecht et al., 2003) and awake, behaving mice (Margrie et al., 2002), revealing surprisingly low spike rates in the barrel cortex, even during free exploratory activity. When mice changed from quiet wakefulness to active whisking, excitatory cells in that area displayed a clear reduction of firing rates (Crochet and Petersen, 2006) and phase locking to whisker movements (Poulet and Petersen, 2008). In the auditory cortex, firing rates are particularly low (DeWeese et al., 2003;Hromádka et al., 2008) and decreased even when weak tones were presented against a slowly fluctuating noise background (Las et al., 2005). In fact, neurons in the auditory cortex suppress their spike responses when the animal is engaged in an auditory task (Otazu et al., 2009). 
In this brain area, neurons have been suggested to operate far away from firing threshold, requiring strongly correlated, transient input for spike generation (DeWeese and Zador, 2006). Functionally, the auditory system seems to be in a position to exploit timing differences as small as 3 ms between two artificially introduced action potentials for decision making (Yang et al., 2008). In the visual system, traditionally known for high firing rates during presentation of optimal stimuli, careful considerations of aspects like energy constraints, representation of high numbers of stimulus features, and measurement biases have led to the notion that primary visual cortex may, in fact, use a sparse code Field, 1996, 2005). A sparse population code had already been implicated earlier in inferotemporal cortex (Young and Yamane, 1992). Support for this view also came from studies showing that responses of single cells become sparser and more reliable when stimulated with natural scenes, especially if the surround of a cell's classical receptive field is included in the stimulation (Vinje and Gallant, 2000;Yen et al., 2006;Haider et al., 2010). Together, these findings revived the discussion about sparseness of the cortical code and, as a closely related issue, about the possible importance of timing of individual action potentials for information processing (for a review see Wolfe et al., 2010). The concept behind sparse coding is that information is represented with the minimum number of tokens. For populations of spiking neurons, this implies that only very few active neurons code for a specific state, e.g., a particular stimulus configuration (population sparseness), and that each neuron represents information over time with only a small number of spikes per unit time (lifetime sparseness). This, in turn, leads to low firing rates and low noise levels, as were described in the above mentioned experimental studies. In theoretical work, sparse coding has, for instance, been suggested to underlie the processing of complex natural scenes (Field, 1987;Levy and Baxter, 1996;Field, 1996, 2005;Simoncelli, 2003) and in the neural implementation of associative memory (Palm, 1982). Temporal precision of single spikes, on the other hand, is a pre-requisite for concepts like latency coding, with information thought to be contained in differences between timings of action potentials in a population of cells (van Rullen and Thorpe, 2002;Gollisch and Meister, 2008;Jacobs et al., 2009), or theories based on assemblies of synchronized cells Abeles, 1991; for recent reviews see Harris, 2005;Kumar et al., 2010). Here, precise timing relates to the millisecond or even sub-millisecond range, i.e., a precision in the order of the action potential duration or even higher. It can be argued whether the experimentally observed temporal spike locking to time-varying stimuli in primary sensory areas with a precision that merely reflects the stimulus dynamics should be considered a substrate for temporal coding at all, or whether it might rather be a pre-requisite for it at later processing stages (Aertsen et al., 1979;Harris, 2005;Tiesinga et al., 2008). Taken together, these considerations have triggered the experimental search for precisely correlated activity of pairs or groups of neurons in higher brain areas. The initially weak evidence was restricted to pairwise correlations (e.g., Aertsen et al., 1989;Vaadia et al., 1995;Alonso et al., 1996), but improved with advances in recording techniques and analysis methods. 
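To make the two notions of sparseness introduced above concrete, the following sketch computes simple population and lifetime sparseness values (here just the fraction of silent entries) from a hypothetical spike-count matrix; both the measure and the numbers are illustrative and are not taken from the studies cited.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical spike counts: rows = neurons, columns = repeated stimuli (or time bins).
# Most entries are zero, mimicking a sparse population response.
counts = rng.poisson(0.2, size=(100, 50))

# Population sparseness: for each stimulus, the fraction of neurons that stay silent.
population_sparseness = (counts == 0).mean(axis=0)

# Lifetime sparseness: for each neuron, the fraction of stimuli to which it does not respond.
lifetime_sparseness = (counts == 0).mean(axis=1)

print(f"mean population sparseness: {population_sparseness.mean():.2f}")
print(f"mean lifetime sparseness:  {lifetime_sparseness.mean():.2f}")
```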
In particular, a number of studies in awake, behaving monkeys provided strong evidence for a possible relation between spike synchronization and cognitive function (Riehle et al., 1997;Super et al., 2003;Samonds et al., 2004;Maldonado et al., 2008). At the same time, however, it is not clear whether neocortical networks can operate with such precision, also in view of physiological findings on ion channel noise, synaptic variability and non-linear properties of dendritic integration (Häusser et al., 2000;Gulledge et al., 2005). While the necessary steps for precise information transmission in neocortical networks (see Figure 1) have been studied separately, it has not been demonstrated if, and under which conditions, reliable and precise signal propagation in cortical networks is at all possible (Kumar et al., 2010). HorizonTal corTical neTworks can work Temporally precise and reliable Experimental assessment of the precision and reliability of neocortical network activity is a difficult task. One possible approach is to record membrane potential fluctuations or spiking output of cells in the intact animal, preferably in response to repeated stimulus presentation or in Precision In temporal coding schemes, this parameter often describes the ability of a neuron to translate synaptic input into precisely timed spike output. Here, we refer to the precision of synaptic transmission, that is, how strong EPSC onset jitters with reference to repeated presynaptic action potentials. This measure relates to temporal coding precision, because precise connections are a pre-requisite for precise output spike timing. Reliability This refers to the reliability of synaptic transmission and describes the probability that a presynaptic AP leads to a faithful transmission at the synaptic terminal, resulting in a postsynaptic current in the target cell. Experimentally collected values can vary significantly and depend strongly on the pre-and postsynaptic cell-types. Inverse of the failure rate. , 1991;Feldmeyer et al., 1999Feldmeyer et al., , 2006Frick et al., 2008). But even compared with these latter studies, quantification of the physiological properties of synaptic connections probed in our study revealed a strikingly high temporal precision with a temporal jitter of less than 1 ms ( Figure 3C) and close to 100% reliability in almost all synaptic connections studied ( Figure 3A). At the same time, the amplitude variability was moderate ( Figure 3B), and accounted for most of the variability observed during postsynaptic signal integration, as shown by a simple model of subthreshold signal integration . Taken together, these findings suggested that synaptic physiology, not action potential propagation or dendritic integration, is the key factor determining amplitude variability and temporal precision in this cortical sub-network of converging excitatory inputs. What could be the reason for the high precision and reliability observed in this system? Potentially, the method used to find the connections within the acute slice, namely functional mapping with the help of laser-induced glutamate uncaging (Callaway and Katz, 1993;Dodt et al., 2003;Kötter et al., 2005) could have introduced a bias toward exposing especially reliable connections. 
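The two synaptic measures used throughout this discussion, reliability (one minus the failure rate) and temporal precision (the trial-to-trial jitter of EPSC onset latencies), can be illustrated with a short sketch; the latencies below are invented for illustration and are not the recorded data.

```python
import numpy as np

# Hypothetical EPSC onset latencies (ms) across repeated presynaptic stimulations;
# NaN marks a transmission failure (no EPSC detected on that trial).
latencies_ms = np.array([6.1, 6.3, np.nan, 6.2, 6.0, 6.4, 6.2, 6.1, np.nan, 6.3])

successes = latencies_ms[~np.isnan(latencies_ms)]
reliability = successes.size / latencies_ms.size   # 1 - failure rate
jitter_ms = successes.std(ddof=1)                  # SD of EPSC onset times

print(f"reliability = {reliability:.2f}, temporal jitter = {jitter_ms:.2f} ms")
```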
Performing mapping experiments, covering large areas of the slice, is time consuming, accounting for a low number of trials per location during the phase where putative presynaptic sites are identified (up to four in our experiments; for a discussion, see Boucsein et al., 2005). Thus, unreliable connections might be overlooked. Another striking and important difference to earlier studies on the physiology of synaptic connections is that in relation to identical repetitions of a behavioral task. Even though the above-mentioned studies show that under certain conditions responses can be sparse, highly precise and reliable, others have stressed the high variability, presumably caused by activity unrelated to the stimulus, the so-called ongoing activity (Arieli et al., 1996;Tsodyks, 1999;Ohl et al., 2001;Nawrot, 2010). A novel experimental approach to precision and reliability in the neocortex, albeit somewhat reduced in terms of complexity of the network involved and regarding the possible sources of variability, was recently established in our lab (Boucsein et al., 2005). This method, dynamic photo stimulation, is especially suited to study reliability and precision of neuronal responses, because it enables tight control of timing and amount of synaptic input to a single cell. Providing repeated, "frozen noise"-type spatiotemporal sequences of synaptic input to a postsynaptic pyramidal neuron in an acute slice, we probed a reduced sub-network of converging excitatory inputs (Figure 1A), which can be considered a basic building block of neocortical networks and models thereof (Abeles, 1991;Diesmann et al., 1999;Kumar et al., 2010). In these experiments, we found that neocortical layer V pyramidal neurons possess remarkably precise integration capabilities (Figure 2; Nawrot et al., 2009). At first, these results seemed puzzling since a number of previous studies reported unreliable synaptic transmission in the neocortex: Different classes of connections can exhibit high variability in PSP amplitude and high failure rates of up to 70% (Koester and Johnston, 2005;Bremaud et al., 2007). Other authors reported more reliable synapses with less amplitude variability (Mason Temporal jitter, measured as the standard deviation of EPSC threshold crossing times after stimulation onset, scales with the delay between stimulation onset and EPSC onset. Quantification of timing of action potential generation for a set of directly stimulated cells (putative presynaptic cells) revealed that most of the jitter in EPSC timing was due to the variability in spike generation. Comparison of regression lines for presynaptic (gray) and postsynaptic (black) jitter suggests that only about 0.5 ms jitter is actually due to synaptic physiology. (D-g) Lateral distance from the stimulation site to the soma of the postsynaptic cells was extracted for each tested connection to evaluate possible distance dependence of physiological connection parameters. Failure rate (D), amplitude variability (e), and synaptic jitter (normalized to the total delay) (F) did not show any distance dependence, whereas amplitude (g) scaled negatively with distance. Colors correspond to cylindrical volumes, sketched in Figure 4 (H). To evaluate the lateral distance dependence of connection probability, we re-analyzed 17 mapping experiments, which were initially performed to find presynaptic sites for dynamic stimulation (compare to Figure 2B). 
Width of the scanning raster was 100 μm, and for each horizontal distance, we collected the number of sites, stimulation of which resulted in a postsynaptic EPSC. The ratio of this number relative to the total number of stimulations at the corresponding distance was taken as the estimated connection probability at that distance (n = 674 EPSCs in total). When stimulation sites were close to the soma or apical dendrite, EPSCs were often masked by large currents from uncaged glutamate impinging directly on the postsynaptic cell (direct responses). Probed distances where more than 20% of stimulated sites showed such direct responses were excluded from the analysis. Since it remains unknown how many neurons we stimulated at each target site and it was, thus, only possible to extract relative connection probabilities, we defined P 100 = 0.1 at a distance of 100 μm, as suggested by paired recording studies (see Table 1). Our model of exponential decay is, thus, constrained as P(d) = P 0 ⋅ exp(−d/λ). Single fits were performed for each experiment, and length constants λ were extracted. The panel shows an exponential decay with a length constant equal to the median of all extracted λ-values (black trace), upper and lower shaded regions mark the 75 and 25% quantile, respectively. (i) Accumulated number of connected cells as a function of lateral distance d from the soma: we estimated the number of connected neurons within a cylindric volume with radius d as N d f h P s sds , with ρ = 60,000/mm 3 (black trace) defining the cell density per cortex volume, f E = 0.85 representing the relative fraction of excitatory connections (Braitenberg and Schüz, 1998), and h = 1.3 mm defining the thickness of the gray matter. The estimated total number of presynaptic cells then amounts to N total∞ ~ 6,100 with λ of 330 μm (black trace; shaded regions mark total cell numbers for respective λ taken from the shaded region in H). Frontiers in Neuroscience www.frontiersin.org April 2011 | Volume 5 | Article 32 | 6 similarly tuned cells is not apparent (Ohki et al., 2005). In contrast to these various considerations on the vertical organization of cortical circuits, horizontal connections received much less attention in experimental and theoretical work (for a recent review see Voges et al., 2010b). The experimental confinement to a projection range of approximately 250 μm around the somato-dendritic axis of the pyramidal neurons (Deuchars et al., 1994;Markram et al., 1997;Lefort et al., 2009) and to vertical projections across laminae (for a review see Thomson and Lamy, 2007) is not solely due to a conceptual focus on local connections. In addition, the strong drop in connection probability with increasing somatic distance between cell pairs (Hellwig, 2000;Thomson and Bannister, 2003) imposes experimental constraints that make the physiological characterization of synaptic connections with paired recordings increasingly difficult at longer distances (however, see Yoshimura et al., 2000). This effect is augmented by the fact that paired recordings are usually conducted in acute brain slices, where sizeable portions of axonal arbors are cut, lowering the chances of finding distant pairs of connected cells even further (Stepanyants et al., 2009). At first glance, low connection probabilities to distant neurons might imply that connections from more distant cells are rare and, thus, might not play a major role in cortical processing. 
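The calculation sketched in this legend can be written explicitly as N(d) = ρ · f_E · h · ∫ from 0 to d of P(s) · 2πs ds, with P(s) = P0 · exp(−s/λ). The snippet below is a minimal numerical sketch of this estimate (not the original analysis code), using the parameter values quoted in the text: ρ = 60,000 mm⁻³, f_E = 0.85, h = 1.3 mm, λ = 330 μm, and P0 constrained so that P(100 μm) = 0.1.

```python
import numpy as np
from scipy.integrate import quad

# Parameters quoted in the text (units converted to mm)
rho = 60_000        # neurons per mm^3
f_E = 0.85          # fraction of excitatory neurons
h = 1.3             # gray-matter thickness, mm
lam = 0.330         # space constant of connection probability, mm
P0 = 0.1 * np.exp(0.100 / lam)   # constrain P(0.1 mm) = 0.1, giving P0 ~ 0.135

def P(s):
    """Connection probability as a function of lateral distance s (mm)."""
    return P0 * np.exp(-s / lam)

def n_presynaptic(d):
    """Expected number of presynaptic cells within lateral distance d (mm)."""
    integral, _ = quad(lambda s: P(s) * 2 * np.pi * s, 0, d)
    return rho * f_E * h * integral

for d in (0.25, 0.5, 2.0):
    print(f"within {d*1000:.0f} um: {n_presynaptic(d):.0f} cells "
          f"({n_presynaptic(d) / n_presynaptic(2.0):.0%} of total within 2 mm)")
```

With these values the total count comes out near 6,100 cells, and the volumes within 250 μm and 500 μm contain well under 25% and 50% of the presynaptic partners, respectively, matching the fractions discussed in the text.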
However, in their extensive study comparing neuronal subtypes in cat V1 with respect to their laminar distributions of dendrites and synapses, Binzegger et al. (2004) reported large numbers of excitatory synapses, especially in layers I (93%) and VI (70%), that remained "unaccounted for." Presumably, these synapses reflect the bulk of extra-columnar input which could not be allocated to any of the cells within their model, as the anatomical reconstructions used in their study only contained the local axonal arborization (diameter: ∼1000 μm), whereas projections of longer distance were cut. Other studies recognized that, despite low connection probabilities, the total number of potential presynaptic cells still might increase for larger distances, simply due to the quadratic increase in the number of potential partners (Hellwig, 2000;Holmgren et al., 2003). In another recent study, estimates of the fractions of local (columnar) and non-local connections suggested that up to 82% of the synapses a neuron receives may originate from cells outside the cortical column, i.e., a cylindrical volume with a diameter of ∼500 μm (Stepanyants et al., 2009; see also Figure 4). The same study suggested that even if the cortical volume covered by the dense our study we focused on horizontal connections (∼200-1500 μm distance in the direction parallel to the pia), whereas classical paired recording experiments were almost exclusively performed within the local range around the postsynaptic neuron (<250 μm distance), where strong vertical connectivity across laminae dominates (Lorento De Nó, 1949;Thomson and Bannister, 2003). To clarify this issue, we re-analyzed the lateral distance dependence of physiological parameters of the synaptic connections studied in our experimental paradigm. We found that the only parameter showing a systematic decrease with lateral distance was the excitatory postsynaptic current (EPSC) amplitude (Figure 3G), whereas other parameters of synaptic physiology did not show such systematic dependence at all (Figures 3D-F). As a result, our findings raised two main questions: Can horizontal connections be considered to play an important role in cortical processing and, if so, what is actually known about their physiological properties (apart from the limited data available from our previous study)? Here we will undertake first steps to answer these questions. THe fracTion of HorizonTal projecTions onTo pyramidal cells Since the seminal work of Mountcastle (1955Mountcastle ( , 1957 and Hubel and Wiesel (1962), showing that neurons along a path perpendicular to the cortical surface share functional properties of stimulus selectivity, the idea of a columnar structure defining generic building blocks for larger cortical networks has attracted many scientists (for recent reviews see da Costa and Martin, 2010;Rockland, 2010; and other contributions to the special issue on "The Neocortical Column" in Frontiers in Neuroanatomy). In the attempt to understand information processing in local cortical circuits, a well-established and ever more detailed description of the underlying functional architecture has been generated (for reviews see Thomson and Bannister, 2003;Markram, 2006;Douglas and Martin, 2007). In the course of these attempts, the original concept of columnar organization has been interpreted in quite different ways. 
While initially, common functional tuning properties in cat visual cortex were used to assign cells to the same column, later the intricate anatomical structures in layer IV of the somatosensory cortex in rodents (so-called barrels) were also utilized to define column boundaries. It has been debated whether both assignments refer to the same conceptual idea (for a review see Horton and Adams, 2005), especially since in visual areas of rat cortex the clustering of Horizontal connections In contrast to vertical projections across layers, horizontal connections in neocortical networks can span up to several millimeters and connect different areas or sensory modalities in a non-trivial fashion. They have been implicated in feed-forward and feedback circuits throughout cortex, as well as in binding of different information streams during associative processes and higher brain functions. Column This slightly ambiguous term loosely describes the concept of vertically arranged groups of cells that share certain functional and/or anatomical properties and could represent a "basic functional unit" in cortical processing. They are ubiquitous in the brain but in no way obligatory, and a comprehensive description of the various forms of "columns" in the brain is still lacking. Photostimulation Photostimulation is a technique using photolabile or "caged" precursors of neurotransmitters such as glutamate, which can be rapidly activated by short pulses of light. In acute brain slices, one can map the functional connectivity within the tissue by activating presynaptic cells to fire APs while monitoring the membrane potential of a postsynaptic target cell. Frontiers in Neuroscience www.frontiersin.org April 2011 | Volume 5 | Article 32 | 7 neurons projecting onto a single postsynaptic cell . For each horizontal distance from the cell soma, probed in bin-wise intervals of 100 μm, we measured the number of sites, the stimulation of which resulted in an EPSC measured at the soma. Then, for each of these distances, we estimated the connection probability as the ratio of axon plexus around the somato-dendritic axis is considered (a cylinder with diameter ∼1000 μm), the fraction of synapses originating from outside this volume may amount to 75%. To test these various predictions on our data, we carefully re-analyzed the photostimulation experiments in which we scanned acute brain slices for Figure 4 | Distribution of presynaptic cells within the cortical volume. (A) Graphical representation of connection probability as a function of lateral distance, comparing results from different studies. Methodological problems prevent direct numerical comparison. We, thus, normalized the maximum P con found in each study to unity and plotted one representative curve from each study into a single summary plot. Clearly, the length constant of the spatial decay of P con (d) derived from our data (black trace, cf. Figure 3H) fits well within the range reported by these previous studies. (B) Morphological reconstruction of a layer V pyramidal cell from a recording in an acute slice of 300 μm thickness with dendritic (blue) and axonal (red) arborizations. Following earlier work (Stepanyants et al., 2009), two definitions of locality can be derived from the neuronal morphology: either the volume covered by the dendrites (diameter of approximately 500 μm, gray), or, alternatively, by the dense axonal plexus around the somato-dendritic axis (diameter of approximately 1000 μm, blue). 
(C) The number of possible presynaptic partners (N cells ) increases substantially with distance, due to the quadratic increase of the volume covered by cylinders with increasing radius. This implies that the number of connected cells does not necessarily decrease with increasing distance, even if connection probability drops substantially. (D) To emphasize the consequences of the described distance dependence of P con for the total number of actually connected presynaptic cells within a certain distance, we calculated the numbers of these synaptically connected cells for the three different ranges depicted in (C). For all volumes, we used our exponential decay model with λ = 330 μm and P 0 = 0.135 and, again, assumed a thickness of cortical gray matter of 1.3 mm. Surprisingly, even with a strong decay in P con (d) with increasing distance, the majority of presynaptic cells are located outside the local volume. Depending on the definition of locality, at least half of the synapses on each cell (local = diameter of 1000 μm), or more than 80% (local = diameter of 500 μm) originate from cells not considered to be within the local volume. The total number of presynaptic cells is slightly higher than what can be expected to be contained in a cylinder of 4000 μm diameter [cf. extent of the bar in (D) does not account for 100%]. Frontiers in Neuroscience www.frontiersin.org April 2011 | Volume 5 | Article 32 | 8 itself, and the respective connection probability ( Figure 3I). Integration over the distance from 0 (soma) outward to a certain distance d from the soma then leads to an estimate of the total number of presynaptic neurons within that distance, that is, the number of neurons that provide synaptic input to the recorded cell ( Figure 3I). Interestingly, from this calculation it follows that the local volume around a layer V pyramidal cell (r = 250 μm) contains less than 25% of its presynaptic partners. Even if a larger distance of 500 μm is considered (as in Stepanyants et al., 2009), this fraction still amounts to less than 50% ( Figure 4D). These results are somewhat difficult to relate to other studies reporting on distance dependence of connection probability in neocortical networks, mostly because in the experimental studies on horizontal connectivity available to date, very different methods were employed, ranging from paired recordings over laser scanning approaches to modeling studies using morphological reconstructions (for references, see Table 1). In addition, most studies were restricted to distances of less than 250 μm, while our data span a range almost ten times as big. One exception is the study of Shepherd et al. (2005), where the authors this number of effective stimulation sites relative to the total number of stimulations at this distance. To the collection of these connection probability estimates as a function of lateral distance we then fitted an exponential decay function for each mapping experiment separately. From this fit, we determined the associated space constant for each experiment. The resulting space constants of connectivity decay with distance varied between 165 and 665 μm. Using the median value of 330 μm ( Figure 3H) and constraining our model of exponential decay by introducing a fixed value for the local connection probability of 0.1 at 100 μm (derived from the literature on paired recordings), we then calculated the potential numbers of presynaptic neurons as a function of somatic distance (Figure 3I). 
In fact, the concept that information processing within the neocortex is based on columnar microcircuits has been challenged by studies showing that the lack of a columnar architecture does not impede the functionality of cortical neurons (Purves et al., 1992), and that the extent to which a columnar structure can be observed may vary substantially, even between individuals of the same species (Horton and Adams, 2005; Rockland, 2010). In addition, many studies on local connectivity have been performed in acute slices of rats and mice, where the concept of columns consisting of similarly tuned cells is, at least in the visual cortex, not easily applicable (Ohki et al., 2005). Thus, it is of key importance to gain more insight into the nature (both physiological and anatomical) of horizontal connections in neocortical networks. The investigation of these connections is merely beginning.

Physiological characterization of horizontal projections

So far, details about horizontal connections in cortex have mainly been revealed by tracer injection studies (Burkhalter, 1989; Kisvárday et al., 1989; Kisvárday and Eysel, 1992; Lund et al., 1993; Van Hooser et al., 2006; Aronoff et al., 2010; reviewed in Voges et al., 2010b). These anatomical studies, however, did not deliver any information about the physiological properties of these connections. More recently, voltage-sensitive dyes (Laaris and Keller, 2002; Petersen et al., 2003; Tucker and Katz, 2003) and Ca²⁺ imaging (Göbel et al., 2007) have been used to study the role of long-range connections in vitro and in vivo, again providing only limited information on the physiology of single connections. Only a few physiological studies, using electrical stimulation, report on selected physiological aspects of horizontal connections (Chagnac-Amitai and Connors, 1989; Ichinose and Murakoshi, 1996). The only studies that focused on the physiological properties of putative horizontal projections in the neocortex were performed either in vivo with the help of intracellular recordings in combination with extracellular stimulation (Matsumura et al., 1996) or in acute slices with a combination of intracellular recordings and juxtacellular stimulation (Yoshimura et al., 2000).
In both these studies, synaptic connections between sites with a distance of up to 2 mm were characterized by spike-triggered averaging or paired recordings, and, in line with our findings (Figure 3G), they reported that synaptic PSP amplitude dropped slightly with increasing distance between connected cells. However, their finding of a decrease in synaptic reliability with increasing distance is in contrast with our finding of reliable horizontal connections (Figure 3D).
Moreover, the results of the in vivo studies need to be interpreted with caution, because it is difficult to extract information about mono-synaptic connections in a recurrent, densely connected, active network from spike-triggered averages. Here, network effects can lead to responses that resemble PSPs from mono-synaptic connections, even though the respective cells are not directly connected. Indeed, such PSP-shaped responses, sometimes with negative latency with respect to the trigger unit, have been experimentally observed (Matsumura et al., 1996), as well as theoretically predicted (Aertsen et al., 1994; Kumar et al., 2008). Yoshimura et al. (2000) focused on short-term plasticity of long-distance synaptic connections and reported only limited data on other parameters, like PSP amplitudes or synaptic reliability, and no information at all on connection probability. Photostimulation of selected subsets of neurons in acute brain slices (Callaway and Katz, 1993; Dodt et al., 2003; Kötter et al., 2005; Fino et al., 2009), as used in our study, seems to be one of the few methods available today that has been employed successfully to study the physiology of horizontal connections in greater detail (Shepherd et al., 2005; Matsuzaki et al., 2008). The methodological limitations of this technique are mainly that it is technically challenging to acquire maps with cellular resolution, that the number of stimulations of the same presynaptic site is limited due to detrimental side-effects of the short-wavelength light used for uncaging, and that the identity of the presynaptic cell is usually not recovered, except for its coarse location and the inhibitory or excitatory nature of its synapses (for a discussion see Nawrot et al., 2009). More quantitative experimental data will be necessary to judge whether horizontal connections can also show high failure rates, or if the comparatively high reliability found in our data may indeed be due to special physiological properties, such as multiple synaptic contacts for these connections, or extraordinarily strong single synapses, which would render these connections a suitable substrate for reliable and precise signal propagation in neocortical networks.

Functional implications

Horizontal connections within the neocortex are likely to play a pivotal role in cortical processing, purely due to their relative abundance. As is evident from the small number of studies thus far concerned with the physiological properties of horizontal connections, the interest in this large fraction of intra-cortical projections is just beginning. In recent modeling studies, it has been demonstrated that the implementation of horizontal connections can dramatically reduce wiring costs (Voges et al., 2010a) and has a strong impact on network dynamics (Kriener et al., 2009). These studies, however, focused on a related class of connections, the so-called long-range patchy connections (for a review see Voges et al., 2010b), which recently received more attention compared to non-patchy, long-distance horizontal connections. Patchy connections can be observed after bulk-loading of small volumes, preferentially in primary visual areas of higher mammals, as petal-like clusters of cells presumably receiving functional synaptic input from the injection site. It could be argued that the horizontal connections described in our study might serve the same purpose as patchy connections in higher mammals, but on a different spatial scale.
However, detailed comparison of long-range patchy and non-patchy connections has cast doubt on this idea: while correlation strength was found to be high in pairs of similarly tuned neurons connected via long-range patchy connections in cat visual cortex, this was not true for the non-columnar cortex of gray squirrels (Van Hooser et al., 2006; Van Hooser, 2007). Similarly, it was described that the subthreshold membrane potential fluctuations display tuning similar to that of the spiking response in cells within orientation columns, while this has not been found in cells of non-columnar tissue.

What could be the functional relevance of excitatory horizontal connections in the neocortex? A recent study using light-activated cells in vivo suggested that they might be the substrate for a competition between neighboring cortical domains, where strong activity in superficial layers inhibits neighboring domains within the same layer, while spreading excitation to a wide spatial range in deeper layers (Adesnik and Scanziani, 2010). Our findings emphasize that these connections might be especially reliable, strong, and temporally precise (Figure 3). Moreover, they indicate that precisely timed horizontal inputs are faithfully represented in the temporal dynamics of the integrated membrane potential of postsynaptic cells (Figure 2; Nawrot et al., 2009). Thus, horizontal connections and precise postsynaptic signal integration can subserve the fast and reliable propagation of correlated spiking over larger intra-cortical distances, at least in a low-firing regime. This combination of anatomy and physiology provides a potential neuronal substrate for neural computations with high temporal precision in the millisecond range, as is required for the realization of various temporal coding schemes (Abeles, 1991; van Rullen and Thorpe, 2002; Gollisch and Meister, 2008; Jacobs et al., 2009; for recent reviews see Harris, 2005; Kumar et al., 2010). Even though conclusive evidence for the presence of such coding strategies in the brain is still lacking, several studies demonstrated the brain's ability of remarkably fast and precise information processing. When confronted with a classification task, monkeys can signal their correct choice within 140-160 ms after stimulus onset (Fabre-Thorpe et al., 1998), and this short reaction time includes not only cortical processing, but also peripheral vision as well as motor execution. In the same line of evidence, it has been shown that already 150-250 ms after stimulus presentation, intended movements for the trained direction can be extracted from signals from the motor cortex (Rickert et al., 2009).
It is unknown to what extent horizontal connections are involved in this kind of rapid sensory-motor transformation and decision making, but it seems rather unlikely that the neocortex could process and transfer information in such short times over such large distances based mainly on local connectivity. In conclusion, therefore, it seems important to incorporate horizontal connections in our considerations of concepts of cortical processing and in cortical neural network models.

Acknowledgments

This project received funding from the German Federal Ministry of Education and Research (BMBF grants 01GQ0420 to BCCN Freiburg, 01GQ0830 to BFNT Freiburg/Tübingen, and 01GQ0413 to BCCN Berlin), from the European Union (EU Grant 15879, FACETS) and from the German Research Council (DFG-SFB 780 and DFG-GRK 1589). We thank Claudia Bachmann for help with the reconstruction of neuronal morphologies.
Off-fault damage characterisation during and after experimental quasi-static and dynamic rupture in crustal rock from laboratory P-wave tomography and microstructures

Elastic strain energy released during shear failure in rock is partially spent as fracture energy Γ to propagate the rupture further. Γ is dissipated within the rupture tip process zone, and includes energy dissipated as off-fault damage, Γ off. Quantifying off-fault damage formed during rupture is crucial to understand its effect on rupture dynamics and slip-weakening processes behind the rupture tip, and its contribution to seismic radiation. Here, we quantify Γ off and the associated change in off-fault mechanical properties during and after quasi-static and dynamic rupture. We do so by performing dynamic and quasi-static shear failure experiments on intact Lanhélin granite under triaxial conditions. We quantify the change in elastic moduli around the fault from time-resolved 3D P-wave velocity tomography obtained during and after failure. We measure the off-fault microfracture damage after failure. From the tomography, we observe a localised maximum 25% drop in P-wave velocity around the shear failure interface for both quasi-static and dynamic failure. Microfracture density data reveal a damage zone width of around 10 mm after quasi-static failure, and 20 mm after dynamic failure. Microfracture densities obtained from P-wave velocity tomography models using an effective medium approach are in good agreement with the measured off-fault microfracture damage. Γ off obtained from off-fault microfracture measurements is around 3 kJ m⁻² for quasi-static rupture, and 5.5 kJ m⁻² for dynamic rupture. We argue that rupture velocity determines damage zone width for slip up to a few mm, and that shear fracture energy Γ increases with increasing rupture velocity.

Introduction

During shear failure in rock, stored elastic strain energy is partly released as radiated energy E r (i.e., seismic waves) and mostly dissipated on and around the fault interface as latent heat and new fracture surface area through a plethora of dissipative processes. Dissipated energy is typically partitioned into frictional work and breakdown work, where frictional work E f is the work done to overcome the residual friction on the fault interface during sliding. Breakdown work W b is a collective term for the energies dissipated in addition to E f, and primarily includes dissipative processes that reduce the strength of the fault interface towards the residual friction. This includes comminution, flash heating [Brantut and Viesca, 2017], and thermal pressurisation [Viesca and Garagash, 2015], but also includes energy dissipated towards propagating the rupture tip, and energy dissipated by deformation outside the principal slip zone (off-fault deformation). For earthquakes, E r and W b can be determined from seismological data [Tinti et al., 2005; Kanamori and Rivera, 2006], where W b varies from 10² to 10⁸ J m⁻² as a function of total coseismic slip [Abercrombie and Rice, 2005; Viesca and Garagash, 2015]. As the strength evolution of the fault during failure cannot be determined directly from seismological data, a slip-weakening law is typically assumed to determine a slip-weakening distance δ 0, at which the fault has reached its residual frictional strength.
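The energy partitioning just introduced can be made concrete with a small worked example. The sketch below assumes a linear slip-weakening law, one common idealisation rather than the law governing any particular experiment, and uses illustrative stress and slip values; breakdown work is the area between the strength curve and the residual strength, and frictional work is the work done against the residual friction itself.

```python
import numpy as np

# Illustrative (assumed) values; only tau_r is of the order of the residual strength
# reported later in this study, the rest are placeholders.
tau_p = 200e6      # peak shear strength on the fault plane (Pa), assumed for illustration
tau_r = 120e6      # residual frictional strength (Pa)
delta_0 = 1e-3     # slip-weakening distance (m), assumed for illustration
delta_tot = 3e-3   # total slip (m)

def strength(delta):
    """Linear slip-weakening law: strength decays from tau_p to tau_r over delta_0."""
    return np.where(delta < delta_0,
                    tau_p - (tau_p - tau_r) * delta / delta_0,
                    tau_r)

delta = np.linspace(0.0, delta_tot, 10_001)
tau = strength(delta)

# Breakdown work W_b: work done above the residual strength (J per m^2 of fault)
W_b = np.trapz(tau - tau_r, delta)
# Frictional work E_f: work done against the residual friction over the total slip
E_f = tau_r * delta_tot

print(f"W_b ~ {W_b/1e3:.1f} kJ/m^2, E_f ~ {E_f/1e3:.0f} kJ/m^2")
# For a linear law, W_b = 0.5*(tau_p - tau_r)*delta_0, i.e. 40 kJ/m^2 with these numbers.
```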
Seismological estimates for W b do not discriminate between energy dissipated to propagate the rupture, Γ, and the remaining breakdown work (W b − Γ). Γ is called the shear fracture energy and is the energy dissipated within a process zone surrounding the rupture tip to overcome cohesion of the material and propagate the rupture by a unit area [Freund, 1990]. Γ is dissipated in a volume around the rupture tip, and may therefore include an off-fault component Γ off in addition to the component of Γ dissipated to form the fault interface or principal slip zone. Measurements of the material parameter Γ are of the order of 10⁴ J m⁻² for initially intact crystalline low porosity rock under upper crustal conditions [Wong, 1982, 1986; Lockner et al., 2001; Aben et al., 2019], which may be considered an upper bound for pre-existing fault zones often comprised of damaged and altered rock. As Γ is dissipated earliest during shear failure [Barras et al., 2020], its constituent dissipative processes may affect slip-weakening processes in the wake of the rupture tip process zone, and may affect the remainder of W b. We here aim to quantify the off-fault component of the fracture energy, Γ off. Off-fault deformation during shear failure, mainly fracturing and subsidiary slip, is created by transient off-fault stresses near the rupture tip [Andrews, 1976; Poliakov et al., 2002; Rice et al., 2005] and by increasingly larger off-fault stresses arising from progressive slip along rough faults [Chester and Chester, 2000; Dieterich and Smith, 2009]. Energy dissipated by off-fault deformation in the rupture tip process zone, Γ off, is one of Γ's constituent energy sinks. During shear failure, off-fault deformation caused directly by the stress concentration around the rupture tip as part of Γ off precedes most of the off-fault deformation from slip on a rough fault, since the amount of slip within the rupture tip process zone is negligible. Off-fault deformation, and particularly off-fault fracturing, changes the mechanical and hydraulic properties of fault damage zone rock, and thus the constituent dissipative processes of Γ off affect fault damage zone properties at an early stage during shear failure. This can have a feedback on rupture, slip, and ground motion; rupture simulations show that reduced mechanical properties in the fault damage zone affect fault slip [Cappa et al., 2014] and slip velocity [Andrews, 1976, 2005; Dunham et al., 2011]. Due to fracturing near the rupture tip the pore volume increases and causes, under partially undrained conditions, a local pore fluid pressure drop and an increase in effective pressure on the fault [Brantut, 2020]. This can stabilise dynamic rupture [Martin, 1980] and slip [Segall and Rice, 1995; Segall et al., 2010]. Changes in hydraulic properties from off-fault fracture damage close to the fault interface have an effect on slip-weakening mechanisms that act in the wake of the rupture tip, such as thermal pressurisation [Brantut and Mitchell, 2018]. The dynamic reduction of elastic moduli in the fault damage zone causes high frequency content in the radiated ground motion [Thomas et al., 2017], and can be a substantial additional source of seismic radiation [Ben-Zion and Ampuero, 2009]. It is therefore crucial to 1) quantify Γ off, and 2) quantify the changes it imposes on off-fault mechanical properties.
A measurement of total off-fault fracture surface area created in the rupture tip process zone gives an estimate for the cumulative fracture surface energy necessary to create them. This gives a lower bound for Γ off, as energy dissipated as latent heat during off-fault fracturing (i.e., slip on the fractures) remains unknown. Along strike-slip faults, this approach has yielded estimates for the total off-fault dissipated energy [Chester et al., 2005; Rockwell et al., 2009]. However, fractures observed in exhumed fault damage zones originate from either rupture tip stress concentrations, stresses generated by slip on a rough fault during shear failure, or quasi-static stresses [Mitchell and Faulkner, 2009], and were healed and overprinted by numerous shear failure events. This complicates quantification of Γ off from the geological record. Off-fault fracture damage induced by shear failure under controlled conditions in the laboratory circumvents some of these complications, allowing for a microstructural description [Wawersik and Brace, 1971; Reches and Lockner, 1994] and quantification of fracture damage zones [Moore and Lockner, 1995; Zang et al., 2000] associated to a single failure event. Moore and Lockner [1995] estimated the cumulative surface energy in the fracture damage zone around a 'frozen' quasi-static rupture front in granite, where slip on the fault was negligible, yielding a lower bound for Γ off. A dynamically propagating rupture tip is expected to create a larger area of fracture damage, as the stress field around a propagating rupture tip is distorted with increasing rupture velocity [Poliakov et al., 2002], and we therefore expect Γ off to increase as well. Γ off can also be obtained from the change in stored elastic strain energy in the rupture tip process zone, with the underlying assumption that the change in elastic compliance is caused by off-fault fracturing. A reduction in elastic compliance is measured as a drop in seismic wave speeds, making them an attractive and cost-efficient proxy for large scale monitoring of fracture damage structures in fault zones [Mooney and Ginzburg, 1986; Rempe et al., 2013; Hillers et al., 2016; Qiu et al., 2017]. To date, high resolution geophysical measurements of wave speeds from dense arrays have given static snapshots of the fault damage zone structure, but not the coseismic velocity drop necessary to obtain the total coseismic off-fault dissipated energy, let alone Γ off. Laboratory-scale seismic tomography of the P-wave velocity structure [Brantut, 2018] obtained from ultrasonic data measured during quasi-static shear failure experiments does give the change in effective elastic moduli during rupture needed to calculate Γ off [Aben et al., 2019], yielding a similar value for Γ off to that calculated from fracture surface area by Moore and Lockner [1995]. There are, to our knowledge, no measurements of Γ off for dynamic shear ruptures yet, either from microstructures or from a change in elastic moduli. The changes in off-fault mechanical properties induced by shear rupture cannot be assessed directly from the scalar quantity Γ off, but the two approaches outlined above to estimate Γ off also provide the changes in elastic moduli and the microfracture density.
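The fracture-surface-area route to a lower bound on Γ off can be sketched as follows. The snippet assumes a standard stereological conversion from 2D trace length per area to 3D crack surface area per volume (the isotropic factor 4/π, an approximation for the strongly oriented cracks discussed later), a specific surface energy of the order of 1 J m⁻² (an assumed value; latent heat and slip on the microcracks are not counted), and a placeholder above-background density profile; none of these numbers are measurements from this study.

```python
import numpy as np

gamma_s = 1.0            # specific surface energy of new crack faces (J/m^2), assumed
rho_background = 6.2e3   # background trace density (m per m^2), i.e. 6.2 mm/mm^2

# Placeholder above-background trace-density profile rho_frac(x) on one side of the fault,
# sampled at fault-perpendicular distances x (m); values are illustrative, not measured data.
x = np.array([0.5e-3, 1.5e-3, 3e-3, 5e-3, 8e-3, 12e-3])
rho_frac = np.array([50e3, 20e3, 15e3, 10e3, 8e3, 6.2e3])   # m/m^2

# Stereology: for randomly oriented traces, surface area per volume S_V = (4/pi) * L_A.
excess = np.clip(rho_frac - rho_background, 0.0, None)
S_V = (4.0 / np.pi) * excess                # m^2 of crack plane per m^3 of rock

# Each crack plane exposes two new faces; integrate across the damage zone and double
# the result to account (roughly) for both sides of the fault.
gamma_off = 2.0 * np.trapz(2.0 * gamma_s * S_V, x)
print(f"lower-bound Gamma_off ~ {gamma_off/1e3:.2f} kJ/m^2")
```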
These two physical properties can be reconciled using effective-medium theory models for cracked solids [e.g., Guéguen and Kachanov, 2011], which are an important tool for obtaining information on physical and hydraulic properties such as fracture density [Sayers and Kachanov, 1995], porosity, and permeability [Gavrilenko and Guéguen, 1989]. These physical parameters are key in studying the feedback between rupture and slip. Effective-medium approaches have been tested in the laboratory on deformed samples, where effective elastic moduli were measured by active ultrasonic surveys [e.g., Schubnel et al., 2003]. The path-averaged wave velocities obtained from these surveys are representative for fracture damage only when fractures are homogeneously spread throughout the sample. In laboratory shear failure experiments, fracture damage is localised around the fault interface and so path-averaged velocities cannot be used. Instead, recent advances in syn-deformation laboratory tomography techniques [Brantut, 2018;Stanchits et al., 2003] can be employed for the use of effective-medium models, so that changes in physical properties can be quantified in situ. Here, we assess Γ off for dynamic and quasi-static rupture in granite following the two approaches outlined above. To do so, we perform three types of shear failure experiments in the laboratory: Shear failure by quasi-static rupture, by dynamic rupture, and by partly quasi-static and partly dynamic rupture (from here on referred to as 'mixed rupture'). We quantify the change in mechanical properties around the fault caused by shear failure from timeresolved 3D P-wave velocity tomography models. These were obtained during and after quasi-static rupture and after dynamic rupture and mixed rupture. We also quantify the off-fault microfracture damage after dynamic and quasi-static shear failure from microstructural observations. An effective-medium approach is used to obtain microfracture densities from the 3D P-wave velocity models, which are compared with the measured microfracture densities. We then determine a damage zone width for the quasi-statically and dynamically failed samples. These estimates for damage zone width are compared to the expected damage zone width from the stress field around a propagating rupture tip [Poliakov et al., 2002] and from the off-fault stresses induced by slip along a rough fault [Chester and Chester, 2000]. We then obtain Γ off from measuring the cumulative off-fault fracture surface energy within the damage zone. These measurements are complementary to Γ off derived from changes in effective elastic moduli by Aben et al. [2019] for a quasistatic rupture. Last, we discuss the implications of our results to the energetics of earthquake rupture. Experiments Three different types of failure experiments were performed on intact 100 mm by 40 mm diameter Lanhélin granite cylinders (from Brittany, France) at 100 MPa confining pressure (Table 1): Failure by dynamic rupture, failure by quasi-static rupture, and failure by part quasi-static rupture and part dynamic rupture named mixed rupture. The experiments were performed at nominally dry conditions in a conventional oil-medium triaxial loading apparatus at University College London [Eccles et al., 2005]. Axial load was measured by an external load cell corrected for friction at the piston seal. 
Axial shortening was measured by a pair of Linear Variable Differential Transducers (LVDTs) outside the confining pressure vessel, corrected for the elastic shortening of the piston. The samples were equipped with two pairs of axial-radial strain gauges. The samples were placed in a rubber jacket equipped with 16 piezoelectric P-wave (V P) transducers. Ultrasonic signals were amplified by 40 dB before being recorded by a digital oscilloscope (50 MHz sampling frequency). All signals consisted of 4096 data points, equivalent to an 82 µs time interval. Active ultrasonic velocity surveys were performed every 5 minutes, where all 16 piezoelectric transducers were sequentially used as a source, while the other transducers recorded the resulting waveforms. 1 MHz pulses were produced by exciting the source transducer with a 250 V signal. The signal-to-noise ratio was improved by stacking the recorded waveforms from six of these pulses per transducer. Between surveys, acoustic emissions (AE) were recorded on 16 channels, provided that the AE signal amplitude was above 250 mV on at least two channels within a 50 µs time interval. The digital oscilloscope stored up to four sets of AE waveforms per second. Dynamic rupture was achieved by setting a constant shortening rate equivalent to an axial strain rate of 10⁻⁵ s⁻¹ until dynamic shear failure. Quasi-static rupture was achieved by suppressing dynamic rupture via monitoring the AE rate, following the approach of Lockner et al. [2001]. When the acoustic emission rate showed a marked increase, a precursor to dynamic rupture, the axial load on the sample was decreased by reversing the displacement direction of the piston. For mixed rupture experiments, the rupture was controlled for about half the stress drop between the sample's peak stress and its residual frictional strength. The rupture was allowed to propagate dynamically for the remainder of the stress drop. After failure, one sample that failed by dynamic rupture and one sample that failed by mixed rupture were reloaded up to their residual frictional strength (Table 1), which resulted in some additional stable sliding along the fault. Poisson's ratio of the intact rock ν 0 was determined from the ratio of the axial and radial strain during axial loading in the elastic regime. The intact Young's modulus E was derived from the differential stress versus axial displacement curves measured during axial loading in the elastic regime.

Analysis of ultrasonic data and P-wave tomography

The FaATSO code by Brantut [2018] was used for tomographic inversion of the active ultrasonic surveys and AE arrival times. Prior to tomographic inversion, the ultrasonic waveforms recorded during the experiments were processed. Times of flight for all sensor combinations were picked for the first active ultrasonic survey of the experiment, and arrival times for subsequent surveys were extracted using an automated cross-correlation technique [e.g., Brantut et al., 2014] with a precision of about 0.05 µs. From these, path-averaged velocities were calculated between sensor pairs. These ray paths are oriented at 90° (i.e., horizontal), 58°, 39°, and 28° angles to the loading axis of the sample. AE arrival times and source locations were obtained in four steps: 1) The first arrivals of the AE waveforms were automatically picked, and AE source locations were calculated using their arrivals in conjunction with a transverse isotropic velocity model based on the most recent ultrasonic survey.
2) The AE events were subjected to a quality test, where AEs with a source location error above 5 mm were discarded. 3) The automatically picked arrival times of the remaining AEs were subjected to an interactive visual check: arrival times were improved or removed when the difference between the automatically picked arrival time and the theoretical arrival time for the calculated source location was too large. 4) The AE source locations were recalculated based on the inspected arrival time dataset and the same source location error criterion was applied. The FaATSO code treats the arrival times of the ultrasonic surveys and the AE arrival times as the observed data. The model parameters are the AE source locations and origin times, and the horizontal P-wave velocity and anisotropy in voxels of 5 × 5 × 5 mm that cover the sample volume. The algorithm allows for vertical transverse isotropy for each voxel (i.e., the vertical velocity is independent from the horizontal velocity). V P anisotropy is expressed as the ratio (V P,vertical − V P,horizontal)/V P,horizontal, where V P,horizontal and V P,vertical are the horizontal and vertical P-wave velocities, respectively. To make predictions of the observed data based on the model parameters, a 3D anisotropic ray tracer (i.e., Eikonal solver) is used [Brantut, 2018]. The inverse problem is solved using a quasi-Newton inversion algorithm [Tarantola, 2005], and is constrained by a set of standard deviations that describe Gaussian variances on the observed data (Table 1). The variances on the model parameters (AE source locations, velocity, and anisotropy) are also Gaussian, expressed by standard deviations (Table 1). For the velocity and anisotropy, there is a covariance between voxels that is a function of the variance and a correlation length [Brantut, 2018]. Through the covariance for velocity and anisotropy, the correlation length smooths heterogeneities in the inversion results. The observed data were divided into a number of time intervals (Table 1) with varying duration, each containing roughly 300 AEs, for which we performed the inversion. The AE source locations were used as an a priori model parameter. For the remaining a priori model parameters, the horizontal P-wave velocity and anisotropy, we used two structures: 1) a homogeneous vertical transverse isotropic (VTI) a priori velocity structure derived from the most recent ultrasonic survey in each time interval, and 2) an inherited a priori velocity structure from the inversion results of the preceding time interval, except for the first time interval where a homogeneous VTI a priori structure was used. The quality of the inversion results was tested by comparing both sets of inversion results (see Text S1). A clear tomographic image during dynamic rupture could not be achieved by inversion of pre- and syn-rupture AE events, because the number of recorded syn-rupture events is too low due to the limited recording capacities of the acquisition system, and pre-rupture events occur at a stage where deformation is not yet localised. We therefore use AE events recorded during reloading of a dynamically failed sample and a sample failed by mixed rupture.

Microstructural analysis

Polished thin sections oriented perpendicular to the main fault interface were cut from epoxied post-mortem samples that failed by dynamic rupture and by quasi-static rupture.
The thin sections were studied by optical microscopy and by scanning electron microscope (SEM); from the latter we obtained back-scatter electron (BSE) grayscale images along three transects through the centre of the sample (Figure 1a, b; Table 1). The images were taken at 100× magnification and cover a 1.0 mm² area. The pixel dimension is 0.5 by 0.5 µm. We obtained the traces of microfractures as follows. Microfractures are revealed as low grayscale value features in the SEM pictures, because they are empty or filled with low density epoxy. The microfractures may be traced by hand, but given the large number of SEM images, we elected to use a semi-automated image analysis technique. Both methods are prone to user errors, but the errors from semi-automated image analysis are more consistent across all images, so that analysis within the dataset itself is more reliable. The microfractures can be isolated by using a grayscale threshold, but this approach will isolate pores in addition to open fractures, and will exclude pixels of low aperture fractures because they partly overlap with higher density wall rock, which increases the absolute grayscale value. Fracture recognition from sharp grayscale contrasts (i.e., edge detection) is more sensitive to low aperture fractures, but will also recognise pores and sharp grain boundaries between different minerals. Here, we isolate microfractures based on fracture aperture, so that larger aperture pores can be excluded. To do so, we use the median filter technique used by Griffiths et al. [2017], and incorporate their approach in the newly developed fracture tracing code Giles (fracture tracinG by median filter, skeletonisation, and targeted closure, freely available at https://github.com/FransMossel/Giles fracturetracing.git). The median filter obtains a median grayscale value for a predefined window of pixels around a target pixel, and assigns this median value to the target pixel. The entire image is subjected to this action. If the predefined window is larger than the fracture aperture and smaller than the aperture of pores, it ascribes a median grayscale value to a pixel in the fracture that is much higher than the original value, but pixels that represent pores or grains do not significantly change [Griffiths et al., 2017]. The difference between the original grayscale values and the median filtered values is thus much higher in microfractures than in surrounding grains and pores. The image of this difference is therefore binarised. Small gaps between fracture traces in the binarised image are closed with a dilation-erosion action. The binary image is skeletonised, reducing the width of the trace to a single pixel, followed by targeted closure of gaps between traces that have the same orientation. Small residual branches on the fracture traces are an artefact of the skeletonisation process, and are removed by a pruning algorithm similar to that used by Griffiths et al. [2017]. A visual check and, when necessary, adjustment of the user-defined parameters is imperative to ensure reasonable results from the fracture tracing code. See Text S1 for more details on the image analysis steps.

Figure 1c: Trace of a fracture segment, and the minor and major axes of an ellipse fitted around the segment. The fracture segment length used to calculate ρ frac is given by the number of its constituent pixels. The angle θ between the major ellipse axis and the loading axis gives the fracture orientation.
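The tracing sequence just described (median filter, difference image, binarisation, gap closure, skeletonisation, pruning) can be outlined with standard image-processing tools. The sketch below is not the Giles code itself: the window size and threshold are placeholder values that would need tuning per image set, and the targeted, orientation-aware closure and pruning steps are approximated here by a generic morphological closing.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.io import imread
from skimage.morphology import binary_closing, skeletonize, disk

def trace_fractures(path, window=15, threshold=20):
    """Return a one-pixel-wide binary skeleton of dark, thin features (microfractures)."""
    img = imread(path).astype(float)        # 8-bit BSE grayscale image assumed

    # Median filter with a window larger than the fracture aperture but smaller than pores:
    # pixels inside thin fractures are replaced by much brighter wall-rock values.
    med = median_filter(img, size=window)

    # Fractures are darker than their median-filtered surroundings, so the difference image
    # is large inside thin fractures and small in grains and in large pores.
    diff = med - img

    binary = diff > threshold                      # binarise the difference image
    closed = binary_closing(binary, disk(2))       # close small gaps between trace fragments
    return skeletonize(closed)                     # reduce traces to single-pixel width

def fracture_density(skeleton, pixel_size_um=0.5):
    """Trace length per unit area (mm per mm^2), with length counted as skeleton pixels."""
    length_mm = skeleton.sum() * pixel_size_um / 1000.0
    area_mm2 = skeleton.size * (pixel_size_um / 1000.0) ** 2
    return length_mm / area_mm2
```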
The end result of image processing using Giles is a binary image with microfracture traces of single pixel width. Fractures below 3 to 9 µm in length (depending on the size of the median filter window) were not traced. Since we are primarily interested in off-fault damage, we manually removed fracture traces in gouge-filled zones and zones of cataclasite. We do not define individual fractures, because this requires manual unravelling of the microfracture network, which would give arbitrary results for a well-connected fracture network where a clear fracture hierarchy is missing. Instead, we analyse fracture segments, which are defined as pixels connected to only two neighbours. Fracture segments are separated by fracture intersections, which are pixels with three or more neighbours. We obtained 2D fracture orientations for each fracture segment by fitting an ellipse around a segment and measuring the angle θ between the major axis of the ellipse and the sample axis (Figure 1c). The absolute cumulative fracture length in an image is given by the number of pixels used for the fracture traces. Off-fault fracture density ρ frac (in mm/mm²) was obtained for each image by dividing the total fracture length in an image by the surface area of that image. The SEM image transects span both sides of the fault zone, which experienced different transient stresses in the rupture tip process zone. The transient off-fault stresses are tensile on the side of the fault where the direction of slip is opposite to the rupture propagation direction, and compressive on the other side of the fault. Based on the migration of AE source locations over time, which indicates the rupture propagation direction, we identified the tensile and compressive sides of the fault (Figure 1a, b).

Experiments

The samples reached a peak differential stress of 660 to 700 MPa, followed by the onset of fault localisation and rupture propagation (Figure 2a). Frictional sliding, and thus the completion of rupture, commenced between 360 and 350 MPa, based on the flattening of the stress-displacement curve and the spread of the AE source mechanisms across the entire slip surface during quasi-static rupture. The post-failure residual strength is around 300 MPa, as shown by the converging stress-displacement curves of the quasi-static, dynamic, and mixed experiments, of which the latter two approached the residual frictional strength from a lower differential stress by reloading of the sample. Visual inspection of the samples after the deformation experiment revealed a single shear failure zone (Figure 2c), except for sample LN8, which includes an incipient secondary fault plane without noticeable displacement in addition to the through-going shear failure zone (Figure 2d). All through-going failure zones are oriented approximately at 30° relative to the compression axis. Using this fault angle, we resolved the average shear stress on the fault plane from the differential stress and confining pressure (Figure 2b).

Figure 2d: Polished section of mixed ruptured sample LN8, oriented parallel to the main failure zone. A perpendicular incipient failure plane is visible. The surface has been epoxied prior to polishing, so that the damage zone is less apparent compared to (c).
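Resolving the average stresses on the fault plane is a direct application of the standard stress-transformation relations. The short sketch below assumes the 30° fault angle and 100 MPa confining pressure quoted above and treats the axial stress as the confining pressure plus the differential stress; it reproduces the ~155 MPa shear stress quoted in the next paragraph for a ~350 MPa differential stress.

```python
import numpy as np

def fault_stresses(differential_stress_mpa, confining_mpa=100.0, fault_angle_deg=30.0):
    """Average normal and shear stress on a plane inclined at fault_angle_deg to the compression axis."""
    sigma1 = confining_mpa + differential_stress_mpa   # axial stress (MPa)
    sigma3 = confining_mpa                             # radial (confining) stress (MPa)
    # The normal of the fault plane makes an angle of (90 - fault_angle) with sigma1.
    theta = np.radians(90.0 - fault_angle_deg)
    sigma_n = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * np.cos(2 * theta)
    tau = 0.5 * (sigma1 - sigma3) * np.sin(2 * theta)
    return sigma_n, tau

for dsigma in (680.0, 350.0, 300.0):   # near peak, end of rupture, residual (MPa)
    sigma_n, tau = fault_stresses(dsigma)
    print(f"differential stress {dsigma:5.0f} MPa -> sigma_n = {sigma_n:5.0f} MPa, tau = {tau:5.0f} MPa")
# At 350 MPa differential stress this gives tau ~ 152 MPa, matching the ~155 MPa quoted for
# rupture completion; at 300 MPa it gives tau ~ 130 MPa, near the ~120 MPa residual strength.
```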
The rupture fully traversed the sample and completed the failure zone at about 155 MPa shear stress (Figure 2b), as measured from the quasi-static rupture, and the residual frictional strength τ residual is around 120 MPa, as shown by the converging stress-strain curves for quasi-static, dynamic, and mixed ruptures (Figure 2b). The slip on the fault δ, calculated from the axial displacement data corrected for machine stiffness and for the stiffness of the intact rock, was 0.83 mm at the end of the quasi-static rupture experiment, 2.88 to 3.22 mm after dynamic failure, and 1.93 to 2.44 mm after dynamic failure in mixed rupture experiments (Table 1). Additional slip of 0.19 mm and 0.29 mm was accumulated by reloading samples LG1 and LN8, respectively, after dynamic failure. A Young's modulus E = 88 GPa, averaged over all experiments, was measured for the intact rock during axial loading above 100 MPa and below about 400 MPa differential stress. Averaged over all experiments, a Poisson's ratio ν 0 = 0.20 was estimated for the intact rock.

Ultrasonic velocity surveys

Path-averaged ultrasonic P-wave velocities were routinely calculated from the time of flight between two sensors, assuming a straight ray path (i.e., shortest distance) between the sensors. We present the P-wave velocity change during deformation with respect to the initial P-wave velocity at hydrostatic conditions along 5 straight ray paths at key orientations with respect to the fault plane during a quasi-static failure experiment (Figure 3a) and a dynamic failure experiment (Figure 3b). In both samples, ray path A is perpendicular to the loading axis and located well outside the eventual failure zone. Ray path B is oriented at 39° to the loading axis, and nearly its entire length is located within the failure zone. Ray paths C and D, both at 58° to the loading direction, intersect the two extremities of the fault zone and run sub-parallel to it. Ray path E, oriented perpendicular to the loading direction, intersects the fault zone in the centre of the sample. Before the onset of quasi-static rupture, path-averaged P-wave velocities along all 5 ray paths increase slightly by 30 m/s up to 6.2 km/s from 0 to 400 MPa differential stress (Figure 3a), followed by a strong decrease as peak stress is approached. At the peak differential stress, V P along ray path A shows the smallest velocity reduction of about 13% down to 5.3 km/s. During quasi-static rupture, when differential stress drops from peak stress to about 350 MPa, V P along ray path A recovers by 6%, and remains stable during sliding between 350-300 MPa differential stress. Ray path B reveals a drop in P-wave velocity of about 13% at the peak differential stress. Between the peak stress and 600 MPa differential stress, V P along ray path B decreases by an additional 4%, and remains stable at a total reduction of 17% for the remainder of the stress drop. After rupture completion at 350 MPa differential stress and the onset of sliding, the P-wave velocity along ray path B recovers by 1-2%. At the peak stress, P-wave velocity along ray paths C and D dropped by 17%. V P continues to decrease, along C down to 21% at 625 MPa differential stress, and along D down to 19% at 470 MPa (Figure 3a, asterisks).
Figure 3: (a) Normalised path-averaged P-wave velocity measured during a quasi-static failure experiment (sample LN5) versus differential stress. P-wave velocities were obtained from the first P-wave arrival of active ultrasonic surveys. The shaded area indicates the transition to frictional sliding, and the asterisks highlight the lowest velocities of three ray paths (see main text). The curves are coloured similar to their locations shown in the cross-section through the centre of the sample (inset), where the fault plane, delineated by AE source locations within 2 mm of the cross-section, intersects the cross-section and ray paths at a 45° angle. (b) Normalised P-wave velocity measured before and after a dynamic failure experiment (sample LG1) versus differential stress. The dynamic stress drop during failure is dashed. The curves are coloured similar to their locations shown in the cross-section through the centre of the sample (inset), where the fault plane, delineated by AE source locations within 3 mm of the cross-section, intersects the cross-section and ray paths at a 40° angle.

At the end of the experiment, after frictional sliding, the overall velocity drop along ray paths C and D is 14% and 15% respectively, which is a velocity recovery of 7% and 4% with respect to the minimum observed V P. The velocity drop along ray path E was 22% at the peak stress. Along this ray path, we observe the strongest reduction in P-wave velocity of about 26%, down to 4.6 km/s, at a differential stress of 590 MPa during quasi-static failure (Figure 3a, asterisks). As failure progresses and differential stress drops further, the velocity recovers so that a 19% reduction in V P is measured at the end of the experiment. Path-averaged P-wave velocities before dynamic failure (Figure 3b) are similar to those before quasi-static rupture. Ultrasonic surveys could not be obtained during dynamic rupture, but were obtained during reloading of the sample after failure. P-wave velocity outside the fault zone along ray path A was reduced by 15% down to 5.3 km/s prior to dynamic failure from an initial velocity of 6.2 km/s. The dynamic stress drop during failure caused an increase in V P of 8 to 11%, followed by a small decrease during reloading down to 5.9 km/s at 270 MPa differential stress (Figure 3b). P-wave velocity along ray path B drops from 5.7 km/s (9% drop) at peak stress to 5.4 km/s (13% drop) after dynamic failure (Figure 3b). During reloading, the P-wave velocity does not change along ray path B. Pre-failure P-wave velocities along ray paths C and D drop by 13-14% along both wave paths, and recover by 1% up to 5.4-5.5 km/s after the dynamic stress drop (Figure 3b). Within the fault zone along ray path E, the P-wave velocity drops during the dynamic stress drop from 5.1 km/s down to 4.8 km/s (a 22% reduction, Figure 3b). V P decreases slightly more during reloading of the sample (down to a 23% reduction). Path-averaged P-wave velocity changes measured during quasi-static rupture and before and after dynamic rupture are of similar magnitude and show a wide variation in velocity reductions within a single sample, with V P reduced by 5 to 24% at the end of the experiment relative to the intact rock. These variations indicate strong localisation of damage, and the difference between horizontal V P (measured perpendicular to the loading axis) and V P measured at an angle indicates damage-induced anisotropy. Overall, P-wave velocity tends to increase with decreasing differential stress, except for the ray paths located entirely within the fault zone (ray paths B in Figure 3a and b). The above analysis assuming straight ray paths reveals very precise changes in path-averaged V P thanks to the cross-correlation technique used to extract arrival times. However, changes in path-averaged V P do not reveal where along the ray path the V P has changed. Therefore, we perform a 3D tomographic inversion, which will lack the precision of the path-averaged velocity changes, but will reveal the location of greatest change in V P.
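The very precise path-averaged changes quoted above come from tracking arrival-time shifts between repeated surveys by cross-correlation. A minimal sketch of that step, assuming a hand-picked reference arrival and the 50 MHz sampling rate given in the methods (windowing, sub-sample interpolation, and the sensor-pair bookkeeping of the actual processing are omitted), is:

```python
import numpy as np

FS = 50e6   # sampling frequency of the recorded waveforms (Hz)

def arrival_shift(ref_trace, new_trace):
    """Time shift (s) of new_trace relative to ref_trace from the cross-correlation peak.

    A positive shift means the arrival in new_trace is later, i.e. the path-averaged
    velocity has decreased relative to the reference survey.
    """
    ref = ref_trace - ref_trace.mean()
    new = new_trace - new_trace.mean()
    xcorr = np.correlate(new, ref, mode="full")
    lag = np.argmax(xcorr) - (len(ref) - 1)     # lag in samples
    return lag / FS

def path_velocity(distance_m, ref_pick_s, shift_s):
    """Path-averaged velocity from the hand-picked reference arrival time plus the shift."""
    return distance_m / (ref_pick_s + shift_s)

# Example with synthetic traces: a pulse delayed by 12 samples (0.24 us)
t = np.arange(4096) / FS
ref = np.exp(-((t - 20e-6) / 1e-6) ** 2)
new = np.exp(-((t - 20.24e-6) / 1e-6) ** 2)
print(f"recovered shift: {arrival_shift(ref, new) * 1e6:.2f} us")   # ~0.24 us
```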
However, changes in path-averaged V P do not reveal where along the ray path the V P has changed. Therefore, we perform a 3D tomographic inversion, which will lack the precision of the path-averaged velocity changes, but will reveal the location of greatest change in V P . P-wave tomography We first present changes in the horizontal P-wave velocity structure introduced by quasi-static, dynamic, and mixed rupture. The results were obtained using an inherited a priori velocity model (see section 2.2 and Text S2). We then present the P-wave anisotropy inversion results. We detail the difference between the inherited and the homogeneous VTI a priori velocity models and the effect of different standard deviations on the model parameters in Supplementary information Text S2. Dynamic rupture propagation: The horizontal P-wave velocity before localised dynamic failure drops throughout the entire sample, from 6.2 km/s down to 5.4 km/s ( Figure 4a). The V P structure obtained immediately after dynamic failure shows a strong localised low velocity zone (V P drops by 22% down to 4.8 km/s) around the fault zone ( Figure 4b). The localised zone of low V P decreases in width from about 35 mm to 20 mm as the sample is reloaded. V P recovers throughout the sample during reloading, most notably in the low velocity zone where the minimum P-wave velocity increases by 600 m/s to 5.4 km/s (Figure 4c, d). Quasi-static rupture propagation: Before the onset of quasistatic rupture, horizontal V P decreases from around 6 km/s down to 5 km/s ( Figure 5a). The rupture nucleates near the bottom of the sample and propagates upwards (delineated by the AE source locations), during which a low P-wave velocity zone forms around the fault zone (Figure 5b, c). V P within the low velocity zone is as low as 4.6 km/s (a 25% drop relative to unaffected areas outside of the zone). In the wake of the rupture tip, the P-wave velocity at some distance from the fault recovers by at most 5%. After rupture completion at the onset of frictional sliding, V P recovers throughout the sample (the minimum V P rises by about 100 m/s, Figure 5d). Mixed rupture propagation: The horizontal V P before failure decreases throughout the sample (Figure 6a), similar to the velocity drop observed before dynamic failure and quasi-static rupture (Figures 4a and 5a). The deformation history of this particular sample becomes somewhat complicated after the peak stress: We observe a faint localisation zone, delineated by AE source locations and visible in a polished section (Figure d), that is oblique to the final failure surface. The V P structures of the first few time intervals after the peak stress show a low velocity zone around this aborted nascent rupture plane (Figure 6b). The eventual fault forms after a 50 MPa drop relative to the peak stress, and is embedded in a zone with velocities as low as 4.6 km/s, from an initial velocity of 5.9 km/s -a 22% drop (Figure 6c). The rupture was allowed to propagate dynamically at about 520 MPa. The V P structure after failure shows two elongated low velocity zones, one around the main fault zone and one around the 'failed' fault zone with velocity reductions of 20% and 18%, respectively, a recovery by 200 m/s up to 4.8 km/s ( Figure 6d). A localised low P-wave velocity zone is observed for the three rupture types (quasi-static, dynamic, and mixed). The minimum horizontal velocities within these zones are of similar order of magnitude: 4.6-4.8 km/s, equal to a 22-25% drop relative to the initial P-wave velocity. 
These velocity drops are in accordance with the largest drops observed in the path-averaged horizontal V P (Figure 3). The largest velocity decrease for quasi-static and mixed rupture is observed during the propagation of the rupture itself (Figures 4c and 6c). For dynamic rupture, the lowest velocities were observed directly after failure and thus provide only an upper bound for the lowest horizontal P-wave velocities during dynamic rupture.

P-wave tomography: Anisotropy

During axial loading up to failure, V P anisotropy in dynamic and mixed rupture samples is fairly homogeneous and varies between 10-11% (i.e., the vertical P-wave velocity is 10-11% higher than the horizontal P-wave speed). Some variation in anisotropy near the sample extremities may be caused by lateral confinement from the coupling with the loading column. The anisotropy for the quasi-statically ruptured sample is somewhat higher at 13-15%, although a 1-2% variance within the sample is similar to the dynamic and mixed rupture samples. The V P anisotropy adjacent to the ruptured zone increases up to 20% during quasi-static rupture. The anisotropy outside the ruptured zone remains at 15%, similar to the pre-rupture anisotropy. Anisotropy measured in the mixed rupture sample, during the quasi-static rupture interval, increases to 19% around the ruptured zone, and anisotropy outside the ruptured zone remains more or less constant at 12%. Thus, during rupture the vertical P-wave velocity decreases less than the horizontal P-wave velocity. The lowest anisotropy after rupture completion (at residual shear stress) is observed in the dynamically ruptured sample, with a maximum anisotropy of 14% near the ruptured zone and an 11% anisotropy outside this zone (Figure 7a). The maximum anisotropy near the ruptured zone after completion of quasi-static rupture is 19% (Figure 7b), which is a small recovery relative to the maximum anisotropy during rupture. The anisotropy in the volume unaffected by rupture remains similar to the pre-rupture anisotropy. The anisotropy after dynamic failure in the mixed rupture sample is 20% (Figure 7b), and the minimum anisotropy outside the ruptured zone is 12%. We thus see in all three rupture experiments an increase in anisotropy around the ruptured zone during and after failure, with the smallest increase after dynamic rupture. We can infer from the horizontal P-wave velocity decrease and anisotropy increase in the ruptured zone that the vertical P-wave velocity during rupture does not change much. Outside the ruptured zones, the anisotropy during and after rupture remains constant relative to the initial anisotropy just prior to reaching the peak differential stress.

Microstructural observations

Study of thin sections by optical microscopy reveals a zone of microfractures of around 1 mm in length and oriented parallel to the loading direction enveloping the shear failure zone (Figure 8a). For quasi-static shear failure, the extent of this damaged zone is appraised at roughly 8 to 10 mm on the tensile side of the fault, and 2 to 3 mm on the opposite compressive side. Several grains outside this off-fault damage zone have been subjected to extensive fracturing as well (Figure 8a). We will attempt to quantify our qualitatively assessed order-of-magnitude damage zone width from fracture density data obtained from SEM images hereafter. First, we describe the microstructures observed at smaller scale in the SEM images, followed by measurements of off-fault microfracture orientation and density.
The SEM images show that the main failure plane resulting from dynamic rupture is surrounded by patches of gouge and cataclasite (Figure 8b, c), which were not preserved everywhere in the sample during the post-mortem treatment. Whereas the individual particles in patches of gouge cannot be clearly distinguished on the images, the fragments in the cataclasite zones are clearly visible and angular, and show rotation relative to their neighbouring fragments. At 100-500 µm distance from the main failure zone, the rock contains abundant mode I microfractures oriented parallel to the main loading direction with little to no shear or rotation of fragments (Figure 8b). Some of these mode I fractures tend to deflect towards the main slip zone (Figure 8b). Qualitatively, the amount of microfractures decreases with increasing distance from the fault (Figure 8b, c; Figure 9a, b), and variation in microfracture density on the scale of individual SEM images is linked to mineral type (for instance, the biotite grain at the top of Figure 9b is more heavily fractured relative to the feldspar below it). The microstructural damage observed near a quasi-statically formed failure zone is qualitatively similar to that observed after dynamic rupture: patches of gouge and cataclasite zones (Figure 8d).

Off-fault microfracture orientations: The dominant fracture orientation for off-fault microfractures was obtained from the cumulative length of the major ellipse axis of all the fracture segments in all SEM images that fall within 5° intervals measured relative to the loading axis. All intervals are normalised by the interval with the largest cumulative length. The overall dominant off-fault microfracture orientation is parallel to the loading axis for both dynamic rupture (Figure 9c) and quasi-static rupture (Figure 10c). The angle of the off-fault microfractures with respect to the fault plane is somewhat larger for the dynamic rupture case relative to the quasi-static one.

Off-fault fracture density: The surface area of, and fracture traces in, gouge and cataclasite zones and empty fault space have not been used in the calculation of the off-fault microfracture density. The off-fault microfracture density is presented as a function of fault-perpendicular distance. We set the origin of each SEM transect (i.e., 0 mm fault-perpendicular distance) at the centre of the main failure plane, whose width varies along the fault but remains less than 1 mm (Figure 8); thus none of the SEM images is entirely located in the main failure plane. After dynamic rupture, off-fault fracture density, ρ frac, is around 80 mm/mm² directly adjacent to the failure zone, and drops to 30-40 mm/mm² at about 1 mm distance from the failure zone (Figure 9d). Further from the fault, we observe an overall cm-scale trend of decreasing ρ frac with increasing distance from the fault, superimposed on a mm-scale variation. This variation is between 5-30 mm/mm² and also decreases with distance (Figure 9d). ρ frac is around 50 mm/mm² directly adjacent to the quasi-statically formed failure zone (Figure 10d). Within 1 mm distance, ρ frac drops to about 10-20 mm/mm². After this initially steep drop, ρ frac decreases to below 10 mm/mm² over 1.5 cm fault-perpendicular distance. A mm-scale variation is restricted to about 10 mm/mm² magnitude, and decreases with distance.
ρ frac measured across the quasi-statically ruptured failure zone is lower and has a lower variance relative to ρ frac measured across the dynamically ruptured failure zone. Damage zone width We shall now attempt to summarise the off-fault microfracture density data in an informative and simple measure. For this, we elect the measure of a damage zone width, allowing a direct comparison with fault damage zones studies in the field and with models that predict the extent of off-fault fracture damage from fault rupture and fault slip. The order of magnitude estimate for damage zone width from optical microscopy will be an independent indicator in determining damage zone widths from microfracture density data. The fracture damage that was measured in the SEM images was created during three stages, according to AE activity: i) pre-failure microfracturing throughout the volume of the sample during yield, leading to ii) fracture coalescence in the nucleation patch and process zone of the propagating rupture, which is followed by iii) slipinduced damage [Tapponnier and Brace, 1976]. Here, we are only interested in microfractures formed during stage ii), as it provides a measure for Γ off . Although we suppress clearly slip-induced stage iii) damage by removing gouge and cataclasite layers from the traced images, we cannot rule out that some of the off-fault microfracture damage has a slip-related origin. The damage zone width is defined as the fault perpendicular distance where the fault-related fracture density trend (stage ii) and iii) damage) intersects the background fracture density. We define the background fracture density ρ frac 0 as the sum of yield-related damage and initial damage already present in the samples prior to the experiment. It is reasonable to assume that stage i) introduced equal amounts of damage in the dynamic and quasi-static failed samples, and so ρ frac 0 estimated for the quasi-statically ruptured sample represents ρ frac 0 of the dynamically ruptured sample as well. We note that in the first place, our definition of background fracture damage gives a reference value for ρ frac 0 for our experiments, and is not, but may approach, the background fracture density as encountered in field studies. We obtain a measure for the background fracture density ρ frac 0 for the quasi-static failed sample from the average fracture density of 19 SEM images. The conditions to assume that these SEM images were outside the damage zone were: 1) They are located at 12 mm distance or more from the failure zone, 2) they lack open microfractures more than half the image in length, and 3) they do not contain heavily fractured zones (some example images are shown in SI Text S3). Of these 19 SEM images, the images with the lowest fracture densities (around 2-4 mm/mm 2 ) are qualitatively similar to the initial undeformed state of Lanhélin granite [Siratovich et al., 2105] -but ρ frac 0 based on these images excludes yield-related fracture damage and would give a lower bound only. To obtain a more realistic ρ frac 0 , the 19 SEM images also include some with a higher background fracture damage (up to 10 mm/mm 2 ), but without clear stage ii) or iii) related fracture damage. We find that ρ frac 0 = 6.2 mm/mm 2 with a standard error of 2.8 mm/mm 2 (Figure 9d, 10d). 
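A minimal sketch of the background-density estimate described in the preceding paragraph follows. The per-image values and the boolean flags encoding the two qualitative selection criteria are hypothetical; only the 12 mm distance threshold and the use of the mean and standard error follow the text.

```python
import numpy as np

def background_density(rho_frac, distance_mm, long_open_fracture,
                       heavily_fractured_zone, min_distance_mm=12.0):
    """Background microfracture density rho_frac0 (mean and standard error)
    from SEM images judged to lie outside the damage zone, using the three
    selection criteria given in the text."""
    rho_frac = np.asarray(rho_frac, dtype=float)
    keep = (np.asarray(distance_mm) >= min_distance_mm) \
        & ~np.asarray(long_open_fracture) \
        & ~np.asarray(heavily_fractured_zone)
    selected = rho_frac[keep]
    mean = selected.mean()
    sem = selected.std(ddof=1) / np.sqrt(selected.size)
    return mean, sem, selected.size

# Hypothetical per-image values (mm/mm^2, mm, and manual-inspection flags)
rho = [3.1, 2.5, 9.8, 4.0, 7.2, 6.0, 12.5]
dist = [13, 15, 12, 14, 16, 18, 9]
long_frac = [False, False, False, False, False, True, False]
heavy = [False, False, False, False, False, False, False]
print(background_density(rho, dist, long_frac, heavy))
```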
A visual check makes it clear that the trend of decreasing fracture density after quasi-static failure intersects with the background fracture density at around 5 to 10 mm fault-perpendicular distance for most transects (Figure 10d), which matches the damage zone width first estimated from the optical microscopy images (Figure 8a). Fracture density data after dynamic failure seem to intersect with the established background fracture density at larger fault-perpendicular distances between 10 to 20 mm, suggesting a wider damage zone (Figure 9d). The decrease in fracture density with distance may be described by a power law function or an exponential function, as is often done for damage zone studies in the field and laboratory [e.g., Faulkner et al., 2011; Mitchell and Faulkner, 2009; Savage and Brodsky, 2011; Moore and Lockner, 1995; Ostermeijer et al., 2020]. The intersection of such a fitted function with the background fracture density threshold provides a damage zone width (a minimal sketch of this procedure is given below). This approach, applied to a high resolution off-fault damage dataset with a large natural variance, results in very large uncertainties on the damage zone width [Ostermeijer et al., 2020], and may thus oversimplify or misrepresent the actual damage distribution. We nonetheless pursued this approach for each transect and the results are presented in Text S3. For most transects, on both the quasi-statically and dynamically failed samples, a power law decay fits the data best. The resulting damage zone widths are not always sensible or consistent with our primary observations (Figure 8a, Figure 9d, and Figure 10d); for instance, some damage zone widths after quasi-static rupture are much larger than 20 mm (i.e., outside the sample). We therefore use this method merely as additional guidance, and resort to manually picking damage zone widths for all transects in both samples. Note that our approach for obtaining damage zone width may differ from that used in field studies: for instance, we may have used a different definition for background fracture density, and we combined additional constraints with the results of fitting a damage decay function. The damage zone widths determined for the quasi-statically ruptured sample are between 7 and 13 mm on the tensile side of the fault, and between 4 and 13 mm on the compressional side of the fault (Figure 11c). These values are in accordance with our simple estimates from optical microscopy. The damage zone width may exceed the measured transect length or the sample width for a number of transects in the dynamically ruptured sample, in which case we assign a lower bound value of 20 mm. On both sides of the fault, the damage zone width is between 11 to 20 mm (Figure 11c). The damage zone after dynamic failure is thus wider by about a factor of two relative to the damage zone created during quasi-static failure. We did not observe a clear trend between damage zone width and distance from rupture nucleation for either dynamic or quasi-static rupture (Figure 11c). The power law exponents of the highest quality power law fits vary between −0.37 and −0.49 for transects in both samples; these values are similar to those obtained for fault damage decay profiles in crystalline rock in the field [Savage and Brodsky, 2011; Ostermeijer et al., 2020].
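The fit-and-intersect procedure can be sketched as follows. The transect values, the initial guesses, and the use of the residual sum of squares to choose between the power law and exponential forms are assumptions made for illustration; in the analysis above such fits are combined with a manual pick.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, n):
    return a * x**n

def exponential(x, a, L):
    return a * np.exp(-x / L)

def damage_zone_width(distance_mm, rho_frac, rho0):
    """Fit power-law and exponential decays to a fracture-density transect and
    return the distance at which the better-fitting curve drops to the
    background density rho0."""
    x = np.asarray(distance_mm, dtype=float)
    y = np.asarray(rho_frac, dtype=float)
    fits = {}
    for name, f, p0 in (("power law", power_law, (50.0, -0.4)),
                        ("exponential", exponential, (50.0, 5.0))):
        popt, _ = curve_fit(f, x, y, p0=p0, maxfev=10000)
        rss = np.sum((y - f(x, *popt))**2)
        fits[name] = (f, popt, rss)
    name, (f, popt, _) = min(fits.items(), key=lambda kv: kv[1][2])
    # Numerical intersection with the background level on a fine grid
    xx = np.linspace(x.min(), 25.0, 2500)
    above = f(xx, *popt) > rho0
    width = xx[above][-1] if above.any() else x.min()
    return name, popt, width

# Hypothetical transect: fault-perpendicular distance (mm), density (mm/mm^2)
d = np.array([0.3, 0.8, 1.5, 3.0, 5.0, 8.0, 12.0, 16.0])
rho = np.array([55.0, 30.0, 18.0, 12.0, 9.0, 7.5, 6.5, 6.0])
print(damage_zone_width(d, rho, rho0=6.2))
```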
Ultrasonics and tomography

P-wave velocity variations during quasi-static and dynamic failure experiments can be ascribed to two effects: 1) Variations in differential stress, where an increase results in closing of pre-existing horizontal microfractures (i.e., perpendicular to the loading axis) and opening of pre-existing vertical microfractures. Closing of horizontal microfractures mostly affects the vertical V P , and opening of vertical microfractures has a greater effect on the horizontal V P [Paterson and Wong, 2005, Chapter 5]. 2) Microfracture formation and growth during quasi-static or dynamic failure reduce the P-wave velocity locally. These microfractures are subject to opening or closing as well. These two effects are recognised during and after our shear failure experiments, where predominantly vertical microfractures are formed within the damage zone (Figures 9c and 10c). These fractures decrease path-averaged P-wave velocities along horizontal ray paths more than those along angled ray paths, causing the observed anisotropy (Figure 3). The closing of the vertical microfractures, caused by the syn-failure differential stress drop, results in partial P-wave recovery [Passelègue et al., 2018]. The lowest V P along ray paths intersecting the failure zone is thus observed when the contribution of fracture opening in the rupture process zone dominates over the contribution of fracture closure due to decreasing differential stress (Figure 3a, asterisks). We see a similar evolution in V P in the time-resolved 3D P-wave structure during the stress drop associated with quasi-static failure (Figure 5b, c): a recovery of V P throughout the sample, except near the rupture front where V P decreases.

After dynamic failure, at the onset of reloading, the path-averaged V P along most ray paths rises by a few percent. This is followed by a slight decrease for the remainder of the reloading interval (Figure 3b). In the 3D velocity structure, the post-rupture V P initially increases, in particular within the low velocity zone (Figure 4b, c). V P then decreases with progressive reloading (Figure 4c, d). This suggests that horizontal microfractures are closed immediately after the dynamic stress drop, resulting in a P-wave velocity increase. After horizontal microfracture closure, the opening of vertical microfractures dominates and the horizontal P-wave velocity decreases. Such a progression is typically observed at the onset of loading of crystalline rock [Paterson and Wong, 2005, Chapter 5]. The path-averaged P-wave velocities after quasi-static failure are very similar in magnitude to the P-wave velocities measured along the same ray paths after dynamic failure and reloading (Figure 3). The V P drop during reloading of the dynamically failed sample is a near perfect extension of the V P increase during the transition to frictional sliding: the V P -stress curves of each pair of matching ray paths can be connected fairly well, except for ray path E. This horizontal ray path crosses the fault zone and shows a much larger velocity drop near 300 MPa differential stress in the dynamic case than it does near the same differential stress for the quasi-static case.
This may reflect the wider damage zone that was created during dynamic rupture, as this wave path is more sensitive to vertically oriented microfracture damage than the other wave paths that intersect the fault.

Relationship between microfracture damage and physical properties

P-wave velocity and anisotropy changes are a direct result of changes in the effective elastic moduli of the material. Under the assumption that effective elastic moduli changes are primarily induced by the formation of microfractures, the P-wave tomography data contain information about the microfracture density. We use an effective medium approach to relate the seismic velocity from the tomographic data to effective elastic moduli, and obtain a fracture density tensor from these effective elastic moduli. We then compare the obtained microfracture density tensor with the microfracture densities measured on thin sections.

4.2.1. Fracture density computation following an effective medium approach

We adopt the effective medium approach of Sayers and Kachanov [1995] for a solid containing non-interacting penny-shaped cracks. The overall strain in a cracked solid is the sum of the elastic strain in the matrix (i.e., the constituent minerals of the rock) and the additional strain due to the presence of cracks, ε_ij = (S^0_ijkl + ΔS_ijkl) σ_kl, where S^0_ijkl is the elastic compliance tensor of the matrix, σ_kl is the stress tensor, and ΔS_ijkl is the change in elastic compliance resulting from cracks, given by Sayers and Kachanov [1995]. Here, δ_ij is the Kronecker delta, and α_ij is the second-rank crack density tensor for r penny-shaped cracks with radii a_r and unit normal vectors n^r_i in a volume of rock V, with intact matrix elastic parameters E_0 and ν_0 (Young's modulus and Poisson's ratio, respectively). β_ijkl is a fourth-rank crack density tensor, the contribution of which can be neglected in the case of a dry rock with low Poisson's ratio [Sayers and Kachanov, 1995].

The tomographic inversion algorithm for the P-wave velocity allows for a vertical transverse isotropy in each voxel, which would result from a transversely isotropic orientation distribution of cracks. The orientation distribution of off-fault fracture segments observed across the quasi-statically and dynamically formed failure zones (Figures 9c and 10c) shows a dominant orientation that is near-parallel to the (vertical) loading axis. We did not measure fracture orientations in sections parallel to the main failure zone, but we assume these are similar to those measured perpendicular to the failure zone, so that the microstructural data are consistent with the tomographic models. For a vertically transverse isotropic distribution of cracks, α_11 = α_22 are the horizontal components and α_33 is the vertical component of the crack density tensor. Sayers and Kachanov [1995] give the elastic stiffness tensor C_ijkl for vertical transverse isotropy in Voigt notation as:

C_11 + C_12 = (S^0_11 + α_33)/D,
C_11 − C_12 = 1/(S^0_11 − S^0_12 + α_11),
C_33 = (S^0_11 + S^0_12 + α_11)/D,
C_44 = 1/(2S^0_11 − 2S^0_12 + α_11 + α_33),
C_13 = −S^0_12/D,
C_66 = 1/(2S^0_11 − 2S^0_12 + 2α_11),
D = (S^0_11 + α_33)(S^0_11 + S^0_12 + α_11) − 2(S^0_12)². (4)

Within the frame of reference of the sample, the vertical and horizontal crack densities ρ_v and ρ_h follow from the crack density tensor components α_11 and α_33 (equation [5]). To find values for ρ_v and ρ_h, we use an inversion protocol similar to that of Brantut [2015], in which we calculate the theoretical stiffness tensor C_ij for a range of possible values of ρ_v and ρ_h.
From C_ij, we obtain synthetic values for V_P^v and V_P^h:

V_P(θ) = √[(C_11 sin²θ + C_33 cos²θ + C_44 + √M)/(2ρ)],

where ρ is the density of the intact rock matrix, θ is the angle with respect to the loading axis (0° for V_P^v and 90° for V_P^h), and M is a function of the stiffnesses and θ. We then use a least-absolute criterion to obtain the best fit between synthetic V_P and measured V_P, assuming a Laplacian probability density function, so that we obtain the most likely values for ρ_v and ρ_h [Tarantola, 2005; Brantut et al., 2011]. For this, we assume an uncertainty on the measured V_P of 200 m s−1. The P-wave tomography models obtained after quasi-static, dynamic, and mixed failure provide the observed values for V_P^h and V_P^v. We took these velocities along a fault-perpendicular transect through the centre of each sample. S^0_ij and h were calculated from ν_0 = 0.20 and E_0. We used a value for E_0 derived from the path-averaged V_P measured at peak stress, and took ρ = 2660 kg m−3.

Computed vertical crack densities after dynamic failure increase from ρ_v = 0.09−0.10 at the edge of the sample to ρ_v = 0.18 near the failure zone (Figure 12a). The horizontal crack density ρ_h increases from 0 near the edge of the sample to about 0.04 near the failure zone. These increasing crack densities suggest a damage zone width of about 20 mm on both sides of the fault, but we exercise caution with this measure as it is near the resolution of the tomography imposed by the correlation length. After quasi-static failure, computed crack densities near the edge of the sample are somewhat lower compared to those computed for dynamic failure (ρ_v = 0.04−0.07 and ρ_h is negative, Figure 12b), but show a stronger increase near the failure zone, where they reach the same peak values (ρ_v = 0.18 and ρ_h = 0.03). The negative fracture densities near the edge of the sample result from a slight underestimate of the value for E_0, which was obtained from path-averaged V_P measurements. Such an error is expected in the absolute values of the path-averaged V_P, but has minor consequences for the change in crack densities. The crack densities after mixed failure are similar to those computed after quasi-static and dynamic failure, but show a strongly asymmetric distribution across the failure zone (Figure 12c). The higher crack densities on one side of the fault (positive fault-perpendicular distance in Figure 12c) coincide with the nascent secondary failure zone (Figure 2d).

Off-fault microfracture density from microstructures compared to crack density

The horizontal and vertical crack densities, ρ_h and ρ_v, computed from ultrasonic wave velocities have units of m3/m3 and are directly derived from the crack tensor components α_11/h and α_33/h (equation [5]). The off-fault microfracture density ρ frac obtained from microstructures is measured in m/m2 and is a scalar quantity. In order to compare the two methods, we convert the off-fault microfracture traces to tensor components α_11/h and α_33/h. We remain in the spirit of the effective medium approach by assuming that the sample contains a transversely isotropic orientation distribution of penny-shaped fractures, so that α_11 = α_22 and all cracks have the same radius. We treat each traced fracture segment as a trace through an individual penny-shaped fracture. The centre of a traced fracture does not necessarily lie on the SEM image plane, which means that the fracture trace length t is equal to or smaller than the true fracture diameter 2a. Before describing this conversion in detail, the velocity-based inversion introduced above is illustrated in the short sketch below.
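A minimal sketch of the grid-search inversion, built on the compliance-to-stiffness relations of equation (4). The conversion factor h between dimensionless crack density and extra compliance, the form of M in the quasi-P phase velocity, the grid ranges, and the example velocities and E_0 value are assumptions made for illustration and are not taken from the text.

```python
import numpy as np

def qp_velocity(theta, C11, C33, C44, C13, rho):
    """Quasi-P phase velocity in a VTI medium (textbook Christoffel solution;
    the form of M is assumed here, as it is not written out in the text)."""
    s2, c2 = np.sin(theta)**2, np.cos(theta)**2
    M = ((C11 - C44) * s2 - (C33 - C44) * c2)**2 + (C13 + C44)**2 * np.sin(2.0 * theta)**2
    return np.sqrt((C11 * s2 + C33 * c2 + C44 + np.sqrt(M)) / (2.0 * rho))

def stiffness_from_crack_densities(rho_v, rho_h, E0, nu0):
    """Crack-modified VTI stiffnesses from the compliance expressions of
    equation (4). The factor h (non-interacting dry penny-shaped cracks) and
    the mapping rho_v -> alpha_11, rho_h -> alpha_33 are assumptions about the
    paper's exact convention."""
    S11, S12 = 1.0 / E0, -nu0 / E0
    h = 32.0 * (1.0 - nu0**2) / (3.0 * (2.0 - nu0) * E0)
    a11, a33 = h * rho_v, h * rho_h
    D = (S11 + a33) * (S11 + S12 + a11) - 2.0 * S12**2
    C11 = 0.5 * ((S11 + a33) / D + 1.0 / (S11 - S12 + a11))
    C33 = (S11 + S12 + a11) / D
    C44 = 1.0 / (2.0 * S11 - 2.0 * S12 + a11 + a33)
    C13 = -S12 / D
    return C11, C33, C44, C13

def invert_crack_densities(vp_h_obs, vp_v_obs, E0=80e9, nu0=0.20,
                           rho=2660.0, sigma=200.0):
    """Grid search for (rho_v, rho_h) minimising the summed absolute misfit of
    horizontal and vertical V_P (Laplacian likelihood, least-absolute fit)."""
    best = (None, None, np.inf)
    for rv in np.linspace(0.0, 0.4, 81):
        for rh in np.linspace(-0.05, 0.2, 51):
            C11, C33, C44, C13 = stiffness_from_crack_densities(rv, rh, E0, nu0)
            vph = qp_velocity(np.pi / 2.0, C11, C33, C44, C13, rho)
            vpv = qp_velocity(0.0, C11, C33, C44, C13, rho)
            misfit = (abs(vph - vp_h_obs) + abs(vpv - vp_v_obs)) / sigma
            if misfit < best[2]:
                best = (rv, rh, misfit)
    return best

# Hypothetical horizontal and vertical velocities for one voxel column
print(invert_crack_densities(vp_h_obs=5000.0, vp_v_obs=5600.0))
```

A denser grid or a gradient-based optimiser would refine the estimate; the plain grid search is used here only because it mirrors the range-scanning protocol described above.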
The mean fracture radius ā was obtained from the mean measured trace length t̄, so that 2ā = (π/2)t̄ [Oda, 1983]. To determine t̄, we first determined the probability distribution of trace lengths. We followed a two-step approach: 1) the trace length data were fitted with power law, exponential, and log-normal distributions, and 2) the goodness-of-fit for all three types of distributions was tested using the Kolmogorov-Smirnov test, giving a probability for each distribution. We obtained a > 90% probability for a log-normal distribution of trace lengths for a majority of the images, and a power law distribution for the remaining images. For log-normally distributed trace lengths, t̄ was calculated from the first moment of the distribution. t̄ cannot be determined from the first moment of a power law distribution, and we therefore took the mean of the measured traces. The unit vectors in equation [3] for each individual fracture segment are given by cos(θ) and sin(θ) for the tensor components α_11 and α_33, respectively (Figure 1c). Each image intersects only those fractures that have their centre within a distance ā perpendicular to the image plane [Oda, 1983], under the assumption that the average out-of-plane fracture orientations (i.e., rotation with respect to the sample axis) are perpendicular to the image. This assumption agrees with the assumption of a transversely isotropic fracture orientation distribution. The volume V associated with the fracture traces on each SEM image is then given as V = Sā, where S is the surface area of the image. With the parameters ā, V, and θ, and equations [3] and [5], we obtain ρ_h and ρ_v for each image (a compact sketch of this conversion is given at the end of this section).

Values for ρ_h and ρ_v obtained from the microstructures agree well with those computed from the tomography models, for both the dynamically and quasi-statically failed samples (Figure 12a, b), except near the failure zone. Here, between 0 and 2 mm distance from the fault, the microfracture densities from microstructures are up to an order of magnitude higher than those obtained from V P . This can be ascribed to the difference in spatial resolution of the two methods. Nonetheless, the primary (i.e., cm-scale) features of the damage around the failure zone are captured by both direct observation of microfractures and by P-wave tomography. Our results show that P-wave tomography combined with an effective medium theory can quantify localised zones of fracture damage. The use of a normalised fracture density, such as the one presented here, has the advantage of direct applicability to other effective medium models, for instance to predict hydraulic properties [Gavrilenko and Guéguen, 1989; Guéguen and Schubnel, 2003]. We therefore propose that high resolution geophysical measurements of wave speeds from dense arrays, combined with microstructural characteristics measured in the field [Rempe et al., 2013, 2018] or from borehole data [Jeppson et al., 2010], can reveal the physical properties around fault zones. Such data can be used to calibrate the findings of rupture simulations that allow for off-fault energy dissipation [Bhat et al., 2012; Thomas and Bhat, 2018; Okubo et al., 2019], and can be compared to laboratory failure experiments such as those presented here.

Rupture energetics

We first provide estimates for Γ and W b for all shear failure experiments from the mechanical stress and strain data. We then show that the damage zone width established after quasi-static and dynamic failure is the result of stresses induced by rupture and not by slip, so that we can calculate Γ off thereafter.
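The trace-to-crack-density conversion described above can be sketched as follows. The crack density tensor definition α_ij = (1/V) Σ a³ n_i n_j used here is the standard second-rank form that the text cites as equation [3]; the mapping of α_11 and α_33 onto ρ_v and ρ_h, the example segment data, and the image size are assumptions for illustration.

```python
import numpy as np

def crack_densities_from_traces(trace_lengths_m, angles_deg, image_area_m2):
    """Dimensionless crack density tensor components for one SEM image,
    treating every traced segment as a penny-shaped crack of the mean radius.

    The mean radius uses 2*a_bar = (pi/2)*t_bar, the sampling volume is
    V = S*a_bar, and the in-plane normal components are cos(theta) and
    sin(theta) for alpha_11 and alpha_33, as described in the text."""
    t = np.asarray(trace_lengths_m, dtype=float)
    theta = np.radians(np.asarray(angles_deg, dtype=float))  # angle to the loading axis
    # Mean trace length from the first moment of a fitted log-normal distribution
    mu, sig = np.mean(np.log(t)), np.std(np.log(t))
    t_bar = np.exp(mu + 0.5 * sig**2)
    a_bar = 0.25 * np.pi * t_bar
    V = image_area_m2 * a_bar
    # A trace parallel to the loading axis (theta ~ 0) has a horizontal normal,
    # so it contributes to alpha_11 (taken here as the vertical crack density).
    n1, n3 = np.cos(theta), np.sin(theta)
    alpha11 = np.sum(a_bar**3 * n1**2) / V
    alpha33 = np.sum(a_bar**3 * n3**2) / V
    return alpha11, alpha33

# Hypothetical traced segments (lengths in metres, angles from the loading axis)
lengths = np.array([120e-6, 45e-6, 310e-6, 80e-6, 60e-6])
angles = np.array([3.0, 170.0, 10.0, 85.0, 6.0])
rho_v, rho_h = crack_densities_from_traces(lengths, angles,
                                           image_area_m2=700e-6 * 500e-6)
print(rho_v, rho_h)
```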
We provide estimates for Γ off for quasi-static and dynamic ruptures based on microstructural observations, and discuss the implications. A similar estimate may be obtained for Γ dissipated on the fault by quantifying the cumulative fracture surface in gouge and cataclasites that make up the main shear failure zone, but this is beyond the scope of this study. We leave such an endeavour to future studies, as difficulties need to be tackled regarding gouge preservation and resolution limits on identifying the smallest gouge grain sizes. Fracture energy and breakdown work from mechanical data We calculated breakdown work W b by converting the measured stress and axial strain data to shear stress and slip along the failure zone, following the steps described by Wong [1982Wong [ , 1986. The area under the shear stress versus slip curve in excess of the residual shear stress gives a measure for W b . We measured a residual shear stress of 140 MPa at the end of the quasi-static rupture experiment after 0.83 mm slip (Figure 2b), whereas the residual shear stress measured after reloading the samples after dynamic and mixed failure was 120 MPa (Figure 2b). This suggests that the quasi-statically created failure zone had not yet reached its residual frictional strength yet, supported by the convergence of the quasistatic failure stress-strain curve towards this value. Residual shear stresses of 140 MPa and 120 MPa after a slip distance of 0.83 mm give us quasi-static values for W b of 37 kJm −2 and 53 kJm −2 , respectively. These are lower bounds for quasi-static W b , as more slip would have been accrued towards continued weakening down to 120 MPa. Γ for quasi-static failure may be calculated in the same manner, up to the shear stress and slip distance at which the failure zone through the sample was completed. We thereby assume that all breakdown work done to drop the strength of the failure zone from the peak stress down to shear stress of rupture completion was dissipated to create the failure zone, including formation of off-fault microfractures and gouge associated to the rupture. The rupture was completed and the failure zone fully formed at a shear stress of 155 MPa and a slip distance of 0.44 mm, as established by Aben et al. [2019] using AE source locations. From the shear stress and slip data we then obtain Γ = 27 kJm −2 for quasi-static failure [Aben et al., 2019]. W b cannot be established directly from the mechanical data measured during dynamic failure, as the elastic unloading of the loading column is measured rather than the drop in shear stress of the fault. Order of magnitude estimates for dynamic W b and Γ may be obtained by approximating the loading system as a simple springslider model, similar to Beeler [2001], and solve the force balance by assuming some slip-weakening law. We tried this approach, but found results that are too erroneous to be useful. This is most likely because the model assumes a constant piston mass during dynamic failure, which is violated at short failure time scales by inertia of the piston. Lockner et al. [2001] pointed out that it is not strictly correct to use the average shear stress and average slip measured on the sample during failure for calculating W b (and Γ), since the size of the rupture tip process zone is smaller than the sample size. 
The assumption in using the average shear stress and average slip for the analysis of W b [Rice and Rudnicki, 1980;Wong, 1986] is that the fault is created in the entire sample at the peak stress (i.e., the sample is a point on the trajectory of a propagating rupture) -which we show is not the case. It is however encouraging that by using this approach, similar order of magnitude values for W b have been found on granitic rock samples with different diameters: 16 mm [Wong, 1982], 40 mm [Aben et al., 2019], and 76 mm [Lockner et al., 2001]. Damage zone width Microfracture damage observed after quasi-static and dynamic failure were induced by both rupture and slip. We observe a wider damage zone after dynamic rupture relative to that observed after quasi-static rupture. Part of the slip-related damage was corrected for by removing gouge-and cataclasite patches from the traced images prior to establishing off-fault microfracture densities and damage zone widths ( Figure 11). Nonetheless, some of the remaining off-fault microfractures may be induced by slip rather than rupture. The sample subjected to dynamic failure (sample LN7) accumulated 3.22 mm slip, whereas the sample subjected to quasistatic failure (sample LN5) accumulated 0.83 mm slip. The microstructural record after dynamic failure may thus contain more slip-related off-fault microfracture damage. Before we provide an estimate for Γ surf off , achieved by combining the damage zone width and microfracture density, we assess whether the difference in damage zone width obtained for quasi-static and dynamic rupture is an effect of rupture velocity or an effect of the difference in accumulated fault slip. Off-fault damage during rupture can be caused by stresses around the rupture tip. The geometry of the rupture tip stress field changes with rupture velocity so that damage is created in a larger area around the rupture tip at higher rupture velocity [Poliakov et al., 2002;Rice et al., 2005], increasing the damage zone width. Slip along a rough fault (i.e., asperities slipping past each other) causes additional stresses in the host rock around the asperities, and these stress heterogeneities result in off-fault damage. With progressive slip along rough faults, progressively larger asperities are dragged past each other and the additional off-fault stresses act over an increasingly larger area [Chester and Chester, 2000]. The damage zone width is thus also expected to increase with increasing slip. The sample subjected to dynamic rupture has experienced both a larger rupture velocity and a larger amount of slip relative to the quasi-statically ruptured sample. Here, we compute the damage zone width as a function of rupture velocity by adopting the analytical solution by Poliakov et al. [2002] for the elasto-dynamic stress field in a rupture tip process zone for a non-singular slipweakening rupture. We make the assumption of small scale yielding: The fracture energy is dissipated before the remainder of the breakdown work is done. This means that the initial drop in shear strength in the rock from peak strength down to 155 MPa is solely ascribed to dissipation of Γ, and further reduction in shear stress is caused by other slip-weakening processes. This assumption seems justified, based on the quasi-static and mixed failure experiments: The initially steep slope in shear stress versus slip during rupture (Figure 2b) causes stronger stress concentrations relative to the less steep slope of the curve after rupture completion. 
We therefore expect that the initial steep stress drop determines the damage zone width. We also predict the damage zone width as a function of slip by using the analytical solution by Chester and Chester [2000] for the stress field along a rough frictional fault in an elastic material. Using these two models, and realistic input parameters obtained from the rupture experiments, we then asses which parameter (rupture velocity or slip) is responsible for the observed difference in damage zone width between our quasi-static and dynamic rupture experiments. Rupture tip process zone model: We consider the 2D case of a mode II rupture that propagates parallel to the x-direction at z = 0. The stress in the rupture tip process zone is given by: where σ 0 i j is the far-field stress state on the sample and ∆σ i j are the additional stress components caused by the rupture that are given by equations [A3] and [A11] in Poliakov et al. [2002]. To remove the stress singularity at the rupture tip, the shear stress drops linearly from peak stress τ p to residual strength τ r over a slip-weakening zone of size R. For an infinite elastic medium, R decreases in size with increasing rupture velocity v [Rice, 1980]: where R 0 is the quasi-static limit of R, and µ and ν are the shear modulus and poissons ratio of the host rock respectively. The function g depends on v, and on the Pand S-wave velocities of the material [Poliakov et al., 2002]. The model parameters were obtained from the mechanical and ultrasonic data measured during the quasi-static rupture experiment on sample LN5 (Table 2), where we calculated µ from E and ν. These parameters yield R 0 = 0.14 m ,and R decreases down to 0.02 m at v = 0.9 × c s . The normal stress on the fault σ 0 zz , stress ratio k = σ 0 xx /σ 0 zz , and τ p were calculated from the mechanical data at the onset of rupture. The residual shear stress τ r = 155 MPa after δ c = 0.44 mm slip at rupture completion. Note that in the experiments, the shear stress along the failure zone drops further from 155 MPa to 120 MPa, but this occurs after rupture completion and any off-fault damage accrued during this stress drop is not part of off-fault damage related to the rupture. Rough fault model: We consider the 2D case of a strike-slip fault parallel to the x-direction at z = 0. We consider a uniform displacement U along a fault with coefficient of friction µ that has a sinusoidal perturbation: where L is the wavelength and A = γL/2π is the amplitude of the perturbation. γ is a dimensionless roughness factor. The stress around the rough or wavy fault is given by: where the stress perturbations caused by the sinusoidal fault ∆σ i j are (equations [10a-10c] in Chester and Chester [2000]): Figure 13: Damage zone width as a function of normalised rupture velocity (black) and fault slip (gray), based on the stresses around a propagating rupture tip [Poliakov et al., 2002] and the stresses caused by a wavy perturbation of a frictional interface [Chester and Chester, 2000]. The rupture velocity of the quasi-static rupture, and the total slip of the quasi-static and dynamic rupture experiments are highlighted. The approximate rupture velocity of the dynamic rupture experiment is indicated where the damage zone width is double that off the quasi-static rupture, in accordance with the microstructural results. 
with l = 2π/L. The normal stress on the fault σ 0 zz , the stress ratio k = σ 0 xx /σ 0 zz , and the coefficient of friction f were calculated from the mechanical data at the onset of frictional sliding in the quasi-static experiment (Table 2). The surface roughness was estimated from a post-mortem cross section perpendicular to the fault plane (Figure 2c), where the main fault interface (length of the order of 100 mm) shows a waviness of the order of 1 mm, giving γ = 10 −2 . During rupture the elastic constants around the fault interface drop by around 50%, as observed in the P-wave tomography results during and after rupture (Figures 4, 5, and 6). To take into account this elastic weakening by the rupture process zone prior to significant slip, Young's modulus E is taken as half that of the intact rock.

Failure criterion and damage zone width: The 2D off-fault stress tensor around a rupture tip was calculated for a range of rupture velocities from 10 −6 × V S (i.e., quasi-static rupture) up to 0.9 × V S (i.e., rupture velocity near the Rayleigh wave speed). The stress tensor for a wavy fault was calculated for a range of slip distances between 1 mm and 10 mm, for a range of perturbation wavelengths between 0.2 mm and 150 mm. We used a Coulomb failure criterion to assess the damage zone width that can be expected from both mechanisms, based on the maximum shear stress τ max and the Coulomb shear stress τ coulomb , with φ = tan −1 ( f DZ ), where f DZ is the coefficient of friction within the damage zone, for which we take f DZ = 1 to represent intact rock [Chester and Chester, 2000]. We expect damage where τ max /τ coulomb > 1 or τ max /τ coulomb < 0. The damage zone width is the largest fault-perpendicular distance at which the failure criterion is satisfied. The expected fault damage zone width for ruptures propagating below the Rayleigh wave speed increases from about 8 mm at normalised rupture velocities between 0 and 0.75 up to 200 mm or more as the rupture velocity approaches the Rayleigh wave speed (Figure 13). The damage zone width predicted for increasing slip along a rough fault shows a linear increase with slip (Figure 13). The damage zone width observed after quasi-static rupture is around 10 mm (Figure 11c), which is similar to that predicted for a low velocity rupture, whereas the damage zone width predicted to result from slip is less than 3 mm for the 0.8 mm of slip accumulated during the experiment. Dynamic rupture resulted in 3 mm total slip, giving a damage zone width of 8.6 mm according to the wavy fault model (Figure 13). This does not match the observed damage zone width of around 15 to 20 mm (Figure 11c). Although the rupture velocity for this experiment was not measured, a damage zone width of 20 mm can result from a dynamic rupture velocity of about 0.8 × V S . These results are first order estimates only: the parameters for the rupture model were taken from the quasi-static rupture data, whereas for the dynamic rupture the stress drop may be larger, the coefficient of friction of the fault may be lower, and the breakdown work larger. The rupture model strictly applies to an infinite medium, which may explain why the calculated process zone size R 0 is larger than the actual sample size. For the stresses resulting from slip on a rough fault, the background stresses are assumed to be constant, whereas in our experiments initial slip is accumulated within the rupture process zone, where the shear stress and the coefficient of friction are not constant.
However, this slip-weakening distance for the quasi-static case is less than 1 mm, after which the applied stresses in the experiment remain more or less constant. The simulated off-fault damage at lower velocity ruptures (< 0.7 normalised velocity) is mostly on the tensile side of the fault plane, similar to the results of Poliakov et al. [2002]. Other simulations for rupture-induced off-fault damage also predict a strong asymmetry in off-fault damage distribution, with most damage occurring on the tensile side of the fault [e.g., Rice et al., 2005;Xu et al., 2015;Thomas and Bhat, 2018]. In our experiments however, fracture damage occurs in equal amounts on both sides of the fault, and some microstructural studies on a off-fault damage surrounding an experimentally formed shear fracture did not observe a clear damage asymmetry either [Moore and Lockner, 1995;Zang et al., 2000]. This may be due to several reasons: 1) Rupture simulations are performed in a large continuum with constant far-field stresses, whereas our experiments were performed on a 100 mm by 40 mm cylinder where boundary effects may alter the off-fault stress fields as described by the models. 2) The orientation of the principal stresses (i.e., stress ratio k) change during failure, which may change the region where the off-fault failure criterion is satisfied [Poliakov et al., 2002;Rice et al., 2005]. In the simulations, the stress ratio k was kept constant whereas it actually changed from k = 2.2 to k = 1.8 during quasi-static rupture. 3) The trajectory of the propagating rupture is not linear so that the principal stresses with respect to the trajectory of the rupture process zone change locally. 4) The applied stresses change orientation due to already formed fracture damage in and behind the rupture process zone [Faulkner et al., 2006]. Some of the reasons above may be alleviated with a different loading geometry for better control on the principal stress orientations, using for instance a direct shear setup. More advanced rupture simulations with a 'rough' rupture trajectory may yield additional insights into the lack of damage asymmetry. Nonetheless, the models suggest that the damage zone width for quasi-static and dynamic rupture in our experiments is controlled primarily by the rupture tip process zone. This is supported by the 3D P-wave velocity structure of the mixed rupture, where a low velocity zone is associated with the incipient secondary fault ( Figure 6). This added structural complexity provides a unique opportunity to compare damage in a fault zone without slip, with that in a fault zone that has accumulated 2.4 mm of slip (the main fault in the same sample). The lowest V P in the zone around the incipient fault is only slightly higher than the lowest V P around the fully developed fault (Figure 6d), suggesting that rupture rather than slip caused the most off-fault damage. Estimates for Γ off A measure for Γ off has been obtained in a previous study from in situ V P tomography measurements on quasi-statically ruptured sample LN5 [Aben et al., 2019]. We now use a second and independent method to obtain Γ off from microstructural data for the same sample, and for a dynamically ruptured sample. The energy dissipated by creating new fracture surface in the volume around the fault gives an estimate for the off-fault dissipated fracture energy Γ off . 
We assume that the microfractures are mostly tensile - little to no slip and some opening of the microfractures observed in the thin sections testify to this - so that the energy needed to form them is the mode I fracture energy. The cumulative mode I fracture energy gives us Γ off , which was calculated from the fracture density data by summing, over all images i in a transect, the product of the fracture density in image i, the fault-perpendicular width x i of image i, and the mode I fracture energy Γ I for quartz and feldspar, which ranges from 2 to 10 Jm −2 [Atkinson, 1987]. A factor 2 is included to account for the two new surfaces that comprise each fracture (a compact sketch of this summation is given at the end of this subsection). We calculated Γ off for all the fracture density transects obtained on both the quasi-statically ruptured and dynamically ruptured samples, for Γ I = 2 and Γ I = 10 Jm −2 . Γ off averaged over all transects for a quasi-static rupture ranges between 2 and 10 kJm −2 ; that for a dynamic rupture is between 3 and 15 kJm −2 (Figure 14). Γ off for dynamic failure may be higher, considering that the fault damage zone width in the dynamically failed samples may be a lower bound only. Γ off increases nearly linearly with damage zone width, based on the transects through the tensile side of the fault (Figure 14). For the quasi-static case, Γ off on the tensile side of the fault is less than on the compressional side of the fault.

The range of values for Γ off established by the microstructural approach depends mainly on the mode I fracture energy, for which we use values that vary by nearly an order of magnitude (2 and 10 Jm −2 ), although the true value is usually expected to be at the lower end of this range. We see a good agreement between the results from two independent methods to determine Γ off : Γ off from P-wave tomography falls well within the estimated range for Γ off from microstructures, and is similar to it when the mode I fracture energy is 3 Jm −2 (Figure 11c). We note that not all of the fracture surface area may have been traced from the SEM images, as we elected not to trace fractures below 9 µm, since this would have increased noise (for instance, from intrinsic flaws in the grains and artefacts from thin section preparation) more than it would have increased the actual fracture surface area. A close inspection of the SEM images (Figure 8b-e) reveals that individual fractures shorter than 9 µm nearly all reside in zones of cataclasite and gouge close to the main failure zone, which have been excluded from further analysis. The microfractures further away from the fault zone are generally longer than 30 µm, and so the proportion of short microfractures not included as fracture surface area is small. This is confirmed by the good agreement between the two independent measures for Γ off at a realistic value for the mode I fracture energy.

Γ off measured after dynamic failure is about 1.5 to 2 times higher than Γ off for quasi-static failure. This higher value for Γ off results from a wider damage zone and a higher overall fracture density. The first order estimate for damage zone width in section 4.3.2 suggests that the damage zone width is controlled by rupture velocity. A similar quantitative estimate for the cause of the difference in damage intensity cannot be achieved so easily. First, elastodynamic rupture models predict that with increasing rupture velocity, the state of stress in the rupture tip process zone exceeds the strength of the damage zone rock by an increasing amount [Poliakov et al., 2002; Rice et al., 2005], which may result in the formation of more microfractures.
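A minimal sketch of this summation, assuming the straightforward form Γ off = 2 Γ I Σ_i ρ frac,i x i implied by the description above. The transect values are hypothetical, and whether the densities should first be corrected for the background level is not specified in the text.

```python
import numpy as np

def gamma_off(rho_frac_mm_per_mm2, widths_mm, gamma_I=3.0):
    """Off-fault dissipated fracture energy per unit fault area (J/m^2):
    fracture trace length per unit area in each image, times that image's
    fault-perpendicular width, doubled for the two new surfaces per fracture,
    times the mode I fracture energy gamma_I (J/m^2)."""
    rho = np.asarray(rho_frac_mm_per_mm2, dtype=float) * 1e3   # mm/mm^2 -> m/m^2
    x = np.asarray(widths_mm, dtype=float) * 1e-3              # mm -> m
    return 2.0 * gamma_I * np.sum(rho * x)

# Hypothetical transect: per-image densities and fault-perpendicular widths
rho = [55.0, 30.0, 18.0, 12.0, 9.0, 7.5]
dx = [0.7, 0.7, 0.7, 0.7, 0.7, 0.7]
for gI in (2.0, 10.0):
    print(gI, gamma_off(rho, dx, gamma_I=gI))
```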
An increasing rupture velocity also increases off-fault strain rates that, when sufficiently high, give rise to a higher microfracture density due to inertia effects [Glenn and Chudnovsky, 1986;Liu et al., 1998;Bhat et al., 2012;Aben et al., 2017]. Second, off-fault stresses arising from slip along a rough fault will cause additional microfracturing and slip along off-fault microfractures formed during rupture. This effect could be represented in the rough fault model by decreasing the off-fault coefficient of friction, which also increases the distance at which slip along a rough fault interacts with rupture-induced off-fault damage. The microstructures of the dynamically failed sample indeed show a few small patches of fine material along secondary fractures at over 10 mm distance from the main fault, indicating that some slip occurred along this secondary fault. Energy dissipation by slip along off-fault microfractures is not considered in the above calculation of Γ off . Moore and Lockner [1995] observed peak fracture densities of the order of 40-80 mm/mm 2 in the microstructures of a quasistatic rupture propagation experiment on intact Westerly granite at 50 MPa confining pressure. Fracture densities dropped to a background density of around 14 mm/mm 2 at the damage zone boundary defined at 40 mm from the fault. Continuous microstructural observations were limited to 10 mm from the failure zone, except for one measurement at 40 mm distance. Moore and Lockner [1995] report values for Γ off that range from 1.7 to 8.6 kJm 2 , which is similar to the values reported here (2 to 10 kJm 2 , Figure 11c). However, our results show a damage zone width after quasi-static failure of around 10 mm. The similarity in Γ off and the difference in damage zone width is not caused by a difference in resolution; both this study as well as Moore and Lockner [1995] have a cut-off for fractures smaller than 3 µm. Possible explanations for the difference in damage zone widths are: 1) Field and laboratory studies describe the evolution of fracture density in the fault damage zone by an exponential decay [Mitchell and Faulkner, 2009;Faulkner et al., 2011;Moore and Lockner, 1995], a logarithmic decay [Zang et al., 2000], or a powerlaw decay [Savage and Brodsky, 2011;Mayolle et al., 2019;Ostermeijer et al., 2020] with increasing faultperpendicular distance. This may result in different damage zone widths, but does not affect Γ off much as the 'tail' of the damage zone does not contribute significant amounts of additional fracture damage. 2) This study and Moore and Lockner [1995] used different granitic samples. 3) The confining pressure used by Moore and Lockner [1995] is half of that used in this study. Earthquake rupture simulations show that the damage zone width decreases with increasing confining pressure, while the relative damage intensity within the damage zone increases [Okubo et al., 2019]. The results from this study and Moore and Lockner [1995] comply with these findings: An increase in confining pressure reduces the damage zone width while Γ off remains the same, which equals a higher microfracture intensity in a narrower damage zone. 4.3.4. Off-fault dissipated energy and rupture energetics Is Γ off a significant energy sink for all preexisting fault in the brittle crust? 
The primary prerequisite for dissipation of fracture energy in the off-fault volume is that the imposed far-field stresses plus extraneous transient stresses in the rupture tip process zone are sufficiently high to damage the host material. In the experiments presented here, the failure zone material consists of the same material (intact granite) as the host rock, so that strength of the fault interface is the same as the strength of the surrounding material. The imposed stress state during rupture is thus high relative to the strength of the host rock. The magnitude of the additional transient stress field of the rupture tip process zone is proportional to ∆σ i j ∝ Γ 1/2 for the limiting case of a singular shear crack [Freund, 1990], where Γ is relatively high for intact granite. From these two arguments it follows that stresses around the experimental ruptures are high enough to induce off-fault damage, but should be considered an upper bound for pre-existing fault zones in terms of Γ and strength. We can establish a lower bound scenario for a strong host rock and a weak interface, comprised off two bare granite slabs pressed together. Values of Γ = 0.01 − 3.5 Jm −2 have been published for such an experimental setup [Ke et al., 2018;Kammer and McLaskey, 2019]. These values are 5 to 7 orders of magnitude lower than for intact granite and were measured at 6 MPa normal stress, two orders of magnitude lower than our experiment. Thus both imposed far-field stress and the transient stress field are much lower than in our experiment, whereas the off-fault host material remains the same. We therefore expect no off-fault damage and a negligible value for Γ off in these experiments. These two cases mark the extremes for pre-existing faults, were our experiments are more illustrative for faults below 3 km depth where fault core materials likely experience rapid recovery of cohesion by sealing and healing processes, so that the fracture energy Γ of the material increases sufficiently to entice damage in the host rock. Fracture energy Γ is a material parameter that is independent of fault slip and increases slightly with rupture velocity in most materials (i.e., the change in Γ remains within the same order of magnitude) [Green and Pratt, 1974;Freund, 1990]. Γ determined from mode I rupture experiments performed in PMMA and glass provide analogue results for mode II shear rupture experiments performed here. During mode I rupture in PMMA and glass, Γ remains more or less constant below a critical velocity that is 0.36 (PMMA) or 0.42 (glass) of the Rayleigh wave speed, but increases by up to a factor 10 at higher rupture velocities up to the Rayleigh wave speed [Sharon et al., 1996]. This increase in Γ is an apparent one caused by microbranching instabilities along the main crack that creates additional fracture surface and accounts for the increase in Γ [Sharon et al., 1996]. At these rupture velocities, the single cracks still obey the initial Γ measured at low rupture velocity [Sharon and Fineberg, 1999]. Here, we show that part of the dependence of Γ on rupture velocity is caused by an increasing amount of off-fault dissipated energy Γ off . Γ off itself increases because the off-fault area in which energy is dissipated by microfracturing increases, and the amount of fractures within this area increases as well. 
What we measure as Γ off in our experiment is qualitatively similar to the additional energy dissipated by microbranching instabilities measured in PMMA during mode I rupture -with the main difference that microbranching around a shear rupture in granite occurs already at quasi-static conditions as evidenced by the off-fault microfractures after quasi-static rupture. Fracture energy on the main failure plane (Γ − Γ off ) is partly invested as surface energy to create gouge and cataclasites, and partly dissipated as heat. We assume that fracture energy spent on the main failure plane does not change with increasing rupture velocity. Γ thus only increases with rupture velocity if Γ off increases. In our experiments, Γ off doubles from around 3 kJm −2 for quasistatic rupture to at least 5.5 kJm −2 for dynamic rupture, and so Γ increases by 10%. An increase in Γ means that ruptures will consume more energy to propagate, and a propagating rupture in a material with a velocity-dependent fracture energy will have a decreasing acceleration rate with increasing rupture velocity [Freund, 1990]. Although the rupture velocity for the dynamic failure experiment is unknown, we can make a prediction for the evolution of Γ if we adopt the simple relation that Γ off increases linearly with rupture-induced damage zone width. We observe this in our experiments ( Figure 11). We then take the relation between rupture velocity and damage zone width (Figure 13), so that we can predict Γ off . Γ off for rupture velocities near the Rayleigh wave speed increases by up to a factor of 10-20 relative to Γ off at low rupture velocity. Near the Rayleigh wave speed, we then expect that Γ = 54 − 80 kJM −2 . The factor 10-20 increase in Γ is similar to that measured for PMMA. The critical velocity for a strong increase in Γ for shear failure in granite under confinement is concurrent with the strong increase in damage zone width, at 0.81 of the Rayleigh wave speed (0.75 V S ) whereas the critical branching speed for PMMA is 0.36 in mode I rupture. Field observations show that damage zone width scales linearly with total fault displacement below 1.5-4 km [Shipton et al., 2006;Savage and Brodsky, 2011;Faulkner et al., 2011] (Figure 15). These studies argue that this relation is mainly due to slip-related off-fault damage by fault zone roughness and secondary faulting. By approximating off-fault stresses during rupture and during slip along rough faults, we show that for small displacements the rupture tip process zone determines the damage zone width ( Figure 13). Our observed damage zone widths after quasi-static (around 10 mm wide) and dynamic rupture (around 20 mm wide) confirm this: They are an order of magnitude larger than the slip that was accumulated during failure (0.83 mm slip for quasi-static rupture and around 3 mm slip for dynamic rupture), and thus do not fit with the linear scaling relation between damage zone width and slip ( Figure 15). This is in contrast with what is argued by Faulkner et al. [2011], where it was suggested that the scaling relation goes through the origin (i.e., a zero displacement shear crack has no damage zone). Even at smaller negligible displacements, such as the failed secondary rupture in the mixed rupture experiment, a damage zone width is visible in the P-wave velocity structure ( Figure 6). We propose that the lower bound for the scaling relation observed in the field is determined by the stress field around a propagating rupture tip. 
The absolute value of this lower bound depends on the material properties, far-field stresses, and most importantly the rupture velocity. Aben et al. [2019] argued that the ratio between breakdown work W b and its off-fault dissipated energy component is proportional to δ 1−λ . λ ≈ 2 for small earthquake slip below 10 cm, and λ < 1 for larger slip [Viesca and Garagash, 2015], so that this ratio initially decreases with earthquake slip, to then stabilise or slightly increase with earthquake slip. For quasi-static failure and small slip (< 1 mm for our quasi-static rupture experiment), all the breakdown work is spend as fracture energy, and so Γ off /Γ = 0.1 [Aben et al., 2019]. However, the scaling proposed by Aben et al. [2019] is based on the assumption that damage zone width increases linearly with fault slip as seen in the field [Faulkner et al., 2011;Savage and Brodsky, 2011], whereas our results suggest that at very small amounts of slip the damage zone width is determined by rupture velocity ( Figure 15 and 13). The scaling relation between breakdown work and total off-fault dissipated energy is thus only valid when the damage zone width is determined by slip, i.e., fault roughness. Conclusions We performed dynamic, quasi-static, and mixed shear failure experiments on Lanhélin granite to quantify the off-fault damage in the rupture tip process zones. The in situ P-wave structure and evolution was revealed by laboratory-scale seismic tomography during and after quasi-static failure and after dynamic failure. In both quasi-static and dynamic cases a localised low velocity zone formed around the fault interface, where a maximum reduction in P-wave velocity of about 25% was observed. The low velocity zone around a fault created by dynamic rupture has a similar drop in Pwave velocities. The low velocity zones are caused by off-fault microfractures within the host rock around the fault during rupture and slip. Using an effective medium approach, we computed microfracture densities from the P-wave tomography across the quasi-static and dynamic failure zones. The resulting theoretical microfracture densities are in good agreement with microfracture densities measured from thin sections, indicating that the P-wave tomography reveals realistic near-fault changes in elastic properties. We propose that a similar exercise using high resolution geophysical measurements combined with microstructural measurements from the field can reveal the physical properties around larger fault zones. The damage zone width established from microstructural analysis corresponds to the width of the low P-wave velocity zones, and is around 1 cm wide in the quasi-statically failed sample and 2 cm in the dynamically failed sample. Comparison with a previous microstructural study on quasi-static failed samples suggests that the damage zone width is depth dependent. We argue that the damage zone width in our experiment is controlled by rupture velocity and not by the slip up to a few mm. We propose that at larger slip the damage zone width is determined by fault roughness. Hence, in our experiments the increase in off-fault dissipated energy is mostly caused by an increase in rupture velocity. The off-fault dissipated energy Γ off that we measure is therefore associated to the fracture energy Γ, and was calculated from microstructural observations. 
Γ off increases from around 3 kJm −2 for quasi-static rupture to at least 5.5 kJm −2 for dynamic rupture, and shows that shear fracture energy in crystalline material increases with increasing rupture velocity.
2020-07-21T01:01:05.591Z
2020-07-19T00:00:00.000
{ "year": 2020, "sha1": "3069071b596de350365ec14ac940d41e9ffecefb", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2020JB019860", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "3069071b596de350365ec14ac940d41e9ffecefb", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Materials Science", "Geology", "Physics" ] }
120531146
pes2o/s2orc
v3-fos-license
Reduced model prediction of electron temperature profiles in microtearing-dominated National Spherical Torus eXperiment plasmas

In the publication "Reduced model prediction of electron temperature profiles in microtearing-dominated NSTX plasmas" by Kaye et al., 1 some of the values of the RMS Deviation and Offset in Table 1 were incorrect. Given here is the table with the corrected values. There is no impact of the corrected values on either the general results or the related discussion.

Table 1: Te profile fit metrics for the six times of interest.

A representative H-mode discharge from the National Spherical Torus eXperiment is studied in detail to utilize it as a basis for a time-evolving prediction of the electron temperature profile using an appropriate reduced transport model. The time evolution of characteristic plasma variables such as βe, νe∗, the MHD α parameter, and the gradient scale lengths of Te, Ti, and ne were examined as a prelude to performing linear gyrokinetic calculations to determine the fastest growing microinstability at various times and locations throughout the discharge. The inferences from the parameter evolutions and the linear stability calculations were consistent. Early in the discharge, when βe and νe∗ were relatively low, ballooning parity modes were dominant. As time progressed and both βe and νe∗ increased, microtearing became the dominant low-kθ mode, especially in the outer half of the plasma. There are instances in time and radius, however, where other modes, at higher-kθ, may, in addition to microtearing, be important for driving electron transport. Given these results, the Rebut-Lallia-Watkins (RLW) electron thermal diffusivity model, which is based on microtearing-induced transport, was used to predict the time-evolving electron temperature across most of the profile. The results indicate that RLW does a good job of predicting Te for times and locations where microtearing was determined to be important, but not as well when microtearing was predicted to be stable or subdominant. © 2014 AIP Publishing LLC.

The use of reduced transport models in the plasma core is critical to rapid assessment and prediction of operational scenarios in present and future devices. The necessary validation studies of these models, as precursors to their use in predicting performance, aid in developing an understanding of the processes controlling transport in present-day devices.
Fundamental non-linear gyrokinetic calculations that determine the microturbulence driving the plasma transport and the transport levels [1][2][3][4][5][6] are time and computer intensive, especially if they span multi-scales (from electron to ion-scale turbulence) and realistic mass ratios. 7 Therefore, using reduced models, either analytic or numerical-based, is the desired option for the aforementioned studies. The discussion in this paper will focus on models in the plasma core (r/a 0.8). Reduced transport models of one form or another have been around for years, if not decades. Some representative examples include the Coppi-Tang-Redi model based on electron temperature profile "resiliency." 8 This model, with ad-hoc electron thermal diffusivity, v e profile adjustments, has been benchmarked against Alcator C-Mod data 9 and was the basis for assessing ITER performance under a range of heating and current drive scenarios. 10 Other reduced models used for predicting ITER performance include Bohm-gyroBohm 11 and the Multi-Mode model. 12 The latter model combines a number of individual models covering ion to electron-scale turbulence-driven transport. The GLF23 (Ref. 13) model and its successor TGLF, 14 are numerically based and were developed from fits to the parameter variations of transport levels calculated through non-linear GYRO runs of a standard DIII-D discharge. Implicit in these models is a treatment of the ExB shear suppression of the turbulence that drives transport in both the ion and electron channels. These models, especially TGLF, have been well-validated with respect to DIII-D H-mode and Hybrid discharges, 15 although the model does lead to a significant underestimate of both the turbulence and transport in the outer regions of DIII-D L-mode discharges. 16 The source of this difference is presently under investigation. A key feature of GLF23 and TGLF is the ability to predict not only the energy transport but also the transport of particles and momentum as well. This capability has allowed for first generation full simulations of ITER performance in H-mode and Hybrid scenarios. 17,18 The GLF23 and, even while validation studies are ongoing, the TGLF models have been found to predict ion and electron transport and temperatures accurately in conventional aspect ratio (R/a ¼ 2.5-3, where R is major radius and a is minor radius) tokamaks where electrostatic instabilities such as the Ion Temperature Gradient/Trapped Electron Mode (ITG/TEM) or Electron Temperature Gradient (ETG) modes are dominant. 15 The Spherical Torus or Tokamak (ST) presents a greater challenge. For one, the aspect ratio in STs such as National Spherical Torus eXperiment (NSTX) or Maga-Amp Spherical Tokamak (MAST) is approximately one-half that in DIII-D, with the operational a) Author to whom correspondence should be addressed. Electronic mail: skaye@pppl.gov regime in DIII-D being the basis for the development of GLF23 and TGLF. Second, STs operate at low toroidal magnetic field, B T ($1/10 that in conventional aspect ratio devices), and thus higher volume-averaged toroidal beta, hb T i, where electromagnetic effects are important. Here, b T / P=B 2 T , where P is the plasma pressure. 
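Several of the reduced models mentioned above are built around Bohm and gyro-Bohm diffusivity scales. As a point of reference only, the sketch below evaluates the common textbook forms of those scales; the coefficients in any particular published Bohm/gyro-Bohm model may differ, so this is an assumption-laden illustration rather than a statement of the models cited above.

```python
import math

E_CHARGE = 1.602e-19   # elementary charge [C]
M_D      = 3.344e-27   # deuterium mass [kg]

def bohm_gyrobohm_scales(T_e_eV, B, a):
    """Common textbook Bohm and gyro-Bohm diffusivity scales:
       chi_Bohm ~ T_e / (16 e B),   chi_gyroBohm ~ (rho_s / a) * chi_Bohm,
    with rho_s = c_s / Omega_i the ion sound gyroradius.  Coefficients in
    published Bohm/gyro-Bohm transport models differ; these are scales only."""
    T_J = T_e_eV * E_CHARGE
    chi_bohm = T_J / (16.0 * E_CHARGE * B)     # m^2/s
    c_s = math.sqrt(T_J / M_D)                  # ion sound speed [m/s]
    rho_s = c_s * M_D / (E_CHARGE * B)          # sound gyroradius [m]
    return chi_bohm, (rho_s / a) * chi_bohm

# Illustrative, made-up ST-like inputs (not values from the paper):
print(bohm_gyrobohm_scales(T_e_eV=500.0, B=0.35, a=0.6))
```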
In ST plasmas at high hb T i and relatively high collisionality, the microtearing mode 19 has been predicted to be unstable in the plasma core, 20-23 and non-linear GYRO calculations of NSTX plasmas show that the resulting v e due to microtearing turbulence both agrees with that inferred at a particular experimental condition and varies strongly with collisionality in a manner consistent with the strong collisionality dependence of normalized confinement observed in NSTX. [24][25][26] The challenge for these microtearing unstable plasmas, then, is to identify a reduced transport model that is able to predict electron temperatures in this unique ST parameter regime in order to have confidence in extrapolation to future, lower collisionality STs such as NSTX-Upgrade 27 and a Fusion Nuclear Science Facility (FNSF), 28 where microtearing may be unstable. This paper is organized in the following manner. Temporal parameter variations of a representative NSTX H-mode discharge are presented in Sec. II, with the associated discussion motivated by what these variations indicate in terms of expected plasma stability to microtearing and other microturbulence. In Sec. III, linear growth rates, real frequencies, and mode structures are determined using the GYRO code. Not surprisingly, the stability characteristics are consistent with the expectations from the parameter variations studied in Sec. II. The examination of parameter variations and determination of the unstable modes are critical to understanding the electron temperature, T e , profile evolution predictions based on a reduced model of microtearing transport 29,30 performed in Sec. IV. It is important to establish the guidelines for where and when agreement with such a reduced model is expected, and where and when it is not. Model predictions show good agreement with the experimentally measured T e profiles for locations and times where microtearing is predicted to be unstable, but agreement is not good for times and locations when microtearing is predicted to be stable or subdominant. A comparison is also made between the measured electron stored energy and that given by the model predictions. A summary and discussion of future work are given in Sec. V. II. DISCHARGE DESCRIPTION AND PARAMETER EVOLUTION Time traces for the representative discharge used for this study are plotted in Fig. 1. The discharge, NSTX shot number 120967, is an H-mode which was taken from a confinement study consisting of I p and B T scans in plasmas using helium glow discharge cleaning plus boronization for wall conditioning. No lithium wall conditioning was used during this period of operation. It was from this collection of discharges that the strong increase in normalized confinement with decreasing collisionality, Bs e $ à À0:97 e was first identified. 25 This particular discharge had a plasma current I p of 0.7 MA, a toroidal field B T of 0.35 T, a deuterium neutral beam (NB) heating power of $4 MW into a Lower Single Null (LSN) deuterium plasma with elongation, j, $2.2 and plasma density up to 6  10 19 m À3 . All of these discharges exhibited small, high frequency Edge Localized Modes (ELMs). This discharge was one of the highest collisionality discharges in the scans, which consisted of varying I p from 0.7 to 1.1 MA and B T from 0.35 to 0.55 T in the same geometry and with the same heating power and density. 
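The confinement-collisionality trend quoted above (Bτ_E ~ ν*_e^(-0.97)) is an empirical power-law fit. Purely as an illustration of how such an exponent is extracted (the data values below are invented, not NSTX measurements), a linear fit in log-log space suffices:

```python
import numpy as np

# Hypothetical (made-up) normalized-confinement data versus collisionality.
nu_star_e = np.array([0.05, 0.08, 0.12, 0.20, 0.35])
B_tau_E   = np.array([0.42, 0.27, 0.18, 0.11, 0.065])   # arbitrary units

# Fit B*tau_E = C * nu_star_e**alpha  ->  log(B*tau_E) = log C + alpha*log(nu_star_e)
alpha, logC = np.polyfit(np.log(nu_star_e), np.log(B_tau_E), 1)
print(f"fitted exponent alpha = {alpha:.2f}")   # a value near -1 mirrors the ~ -0.97 scaling
```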
In the discharge shown in the figure, the electron temperature and density were measured by a 20-point Thomson scattering diagnostic every 16 ms, the (carbon) ion temperature, toroidal rotation, and impurity density by a 51-point charge exchange recombination spectroscopy diagnostic every 10 ms, and the magnetic field pitch (i.e., B p /B T , which yielded local q via magnetic equilibrium reconstructions) by a 12-point motional Stark effect diagnostic also every 10 ms. Since the time of this discharge (2006), the radial resolution of the motional Stark effect and Thomson scattering diagnostics has increased. The discharge plasma current and injected neutral beam power are shown in the top two panels (a) and (b) in Fig. 1, with the injected power reaching its maximum of $4 MW by t ¼ 0.14 s. The plasma current remained constant essentially out to 0.7 s. As can be seen in the lower divertor D a trace ( Fig. 1(c)), the discharge transitioned from L-to H-mode at t ¼ 0.16 s, and exhibited small, type V ELMs until t ¼ 0.68 s. The line-integral electron density ( Fig. 1(d)) increased throughout the discharge, primarily from beam fueling, and the stored energy ( Fig. 1(e)) remained constant until just after 0.58 s, when it started a slow decrease with time in response to an increase in low-frequency n ¼ 1 oscillations at that time ( Fig. 1(f)). The spikes in the D a trace at t ¼ 0.68 s represented a possible transition back to the L-mode, and the discharge terminated in response to MHD mode locking starting at t ¼ 0.74 s. The behavior of this discharge is typical of most NSTX plasmas during this operational period. Times of interest for the global confinement studies 25 were taken during periods of low MHD activity (t $ 0.5 s), although for the study to be presented here, data and results will be shown at the following times: t ¼ 0.2, 0.3, 0.4, 0.5, 0.6, and 0.7 s. The time evolution of the electron temperature T e , the ion temperature T i , and the electron density n e profiles is shown in Figs. 2(a)-2(c), where the respective profiles are plotted every 0.1 s from t ¼ 0.2 to 0.7 s. The profiles are plotted as a function of x ¼ [U/U a ] 0.5 , where U is local toroidal flux and U a is toroidal flux at the boundary. These profiles are taken from a TRANSP run into which the spline-fitted measured data were input, symmetrized, and in the case of n e , in-out averaged about the magnetic axis. For the analysis to be shown here, single time slices from the TRANSP output were used; however, the input spline fits were timeaveraged over 625 ms. The information in the TRANSP run was used as a basis for the examination of the local parameter variations and input to the linear stability GYRO calculations to be presented later in this work. A more comprehensive discussion of the inputs to and treatment in TRANSP can be found in Kaye et al. 24,25 and references therein. The T e profiles shown in Fig. 2(a) are remarkably selfsimilar from x ¼ 0.4 to the boundary for all times shown. Some variation in the profile shape for x 0.4 is seen at early times (0.2-0.4 s), but outside of that radius the profile is seen to retain its shape but secularly decrease in magnitude over time as the density in the discharge increases (see Fig. 1(d)). The T i profiles ( Fig. 2(b)) show self-similarity, decreasing in magnitude from 0.2 to 0.5 s, but then exhibit a significant drop inside of x ¼ 0.4 for 0.6 and 0.7 s. The T i data beyond x ' 0.85 are not shown due to large uncertainties. 
Inside of x ¼ 0.85, the data typically have uncertainties in the 2%-3% range, while outside of this radius (and below 100 eV), the uncertainties are between 5% and 10%. An additional uncertainty for T i is due to the deviation of the data from the spline fit. This is typically another 1%-3%, except for the earliest two times when it was from 4%-10%. Representative error bars combining estimates from both sources of error are shown in Fig. 2(b). The density profiles ( Fig. 2(c)) show the typical ear associated with the H-mode at x ¼ 0.6-0.8. The ear, caused by carbon fueling in NSTX, moves slightly inward with time. The ear disappears in the last profile at t ¼ 0.7 s, reverting back to an L-mode profile shape in the outer region in response to the probable H-L back transition at t ¼ 0.68 s. The uncertainties in the T e and n e profiles are approximately 2%-3% across the profile. The deviation of the data from the spline fit for T e and n e is another 1%-3%, and representative error bars from the combined uncertainties are shown in the figure. It is of interest to note that even after this time, the T e profile retains its selfsimilar shape. The typical electron energy confinement time during this period (t ¼ 0.68-0.7 s) is 12-16 ms, so if there were a completely different transport mechanism controlling the T e profile after the H-L transition than before, profile changes would most likely have been observable by t ¼ 0.7 s. Much can be surmised concerning the controlling microturbulence-driven electron transport over the entire time range by tracking the time evolution of parameters believed to represent the importance of various turbulent modes. For tracking these parameters, three radial locations were chosen, x ¼ 0.35, 0.50, and 0.65, and the parameter evolution was studied along the same lines as that shown in Fig. 1 in Guttenfelder et al. 26 The first set of parameters that will be examined are the electron beta, b e , and collisionality, à e , which are plotted against each other in Fig. 3(a). Here, b e is the electron pressure normalized to the magnetic pressure and the collisionality is / n e Z ef f =T 2 e . Each line in the figure is color-coded according to radius, and the lines represent the temporal evolution of the parameters from 0. The MHD parameter a / Àðq 2 R 0 =BÞdP=dr is plotted against b e R=L T e in Fig. 3(b). The parameter a reflects the importance of the KBM, while b e R=L T e is used as an identifier for microtearing, which depends both on b e and R=L T e . Microtearing modes generally occur at high values of this parameter, while KBMs occur at high values of a. As can be FIG. 2. Electron temperature, ion temperature, and electron density profiles taken every 0.1 s from 0.2 to 0.7 s plotted as a function of the radial coordinate where U is local toroidal flux, and U a is toroidal flux at the boundary. seen in Fig. 3(b), the b e R=L T e values for x ¼ 0.35 and 0.50 increase in time at relative constant a into a possible microtearing regime, while values at x ¼ 0.65 mostly reside at the highest b e R=L T e , also in the possible microtearing regime. (Note that the word "possible" is being used, since these parameter variations are used as guides; more definitive conclusions about which modes exist cannot be made until at least linear gyrokinetic stability estimates are made, and these are presented in Sec. III.) For x ¼ 0.65, the t ¼ 0.5 s point also resides at its maximum a value, indicating the potential for the existence of mixed modes at this time and location. 
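As a minimal illustration of how the regime indicators discussed above can be formed from local quantities, the sketch below follows the proportionalities quoted in the text; the numerical inputs are invented and any constants added beyond the quoted proportionalities are flagged in the comments.

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability [H/m]
QE  = 1.602e-19           # elementary charge [C] (1 eV in J)

def beta_e(n_e, T_e_eV, B):
    """Electron beta: electron pressure normalized to the magnetic pressure."""
    return 2.0 * MU0 * n_e * T_e_eV * QE / B**2

def nu_star_like(n_e, T_e_eV, z_eff):
    """Collisionality-like quantity, proportional to n_e*Z_eff/T_e^2 as quoted
    in the text (overall constants and geometric factors omitted)."""
    return n_e * z_eff / T_e_eV**2

def ballooning_alpha(q, R0, dP_dr, B):
    """MHD alpha parameter.  The text quotes only a proportionality,
    alpha ~ -(q^2 R0 / B) dP/dr; the conventional dimensionless normalization
    with 2*mu0/B^2 is used here (an assumption)."""
    return -2.0 * MU0 * q**2 * R0 * dP_dr / B**2

# Illustrative (made-up) mid-radius values, loosely NSTX-like:
n_e, T_e, B, z_eff, q, R0, dP_dr = 4e19, 500.0, 0.35, 2.5, 2.0, 0.85, -2.0e4
print("beta_e           ~", beta_e(n_e, T_e, B))
print("nu*_e-like       ~", nu_star_like(n_e, T_e, z_eff))
print("ballooning alpha ~", ballooning_alpha(q, R0, dP_dr, B))
```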
Finally, the evolution of the normalized profile gradients are plotted in Figs. 4(a) and 4(b). R=L n e is plotted against R=L T e in Fig. 4(a), while it is plotted against R=L T i in Fig. 4(b). The normalized temperature gradients especially reflect the drive terms for the microtearing/ETG and ITG modes, respectively. Note that the density profile can be inverted, as indicated in Fig. 2(c) and by the negative values of R=L n e . Relatively low values of R=L n e and R=L T e are seen for x ¼ 0.35 and x ¼ 0.5 at the earliest times, while R=L T e is larger for x ¼ 0.65 then, indicating the possibility of electrostatic rT e -driven modes (b e is low at this time as is seen in Fig. 3(a)). The rT e drive at x ¼ 0.35 and 0.5 increases with advancing time, as it does at x ¼ 0.65, where, for this radius, rn e also increases. The highest R=L n e points at x ¼ 0.65 also exhibit relatively large values of a (see Fig. 3(b)), supporting the speculation that mixed modes (microtearing and KBM) may co-exist at these locations at these times (especially for t ¼ 0.5 s). Fig. 4(b) shows very large R=L T i at the earliest times for x ¼ 0.50 and 0.65, coupled with large inverted density gradients, indicating the potential for rT i -driven electrostatic (low b) modes. The rT i drive becomes smaller as the discharge evolves in time. To summarize these trends, the following picture emerges. At the earliest times, low-b electrostatic modes driven by rT e and especially by rT i are expected to be important at x ¼ 0.50 and 0.65. As time advances, and both b e and à e increase, microtearing is expected to become important at all three radii. At x ¼ 0.65, R=L n e and a become large at later times, suggesting the possibility for microtearing and KBMs to co-exist at these times. Also at later times, the rT e drive becomes important at x ¼ 0.65, while b e drops at t ¼ 0.7 s at this radius, suggesting the potential for low-b, electrostatic ETG modes to exist. III. LINEARLY UNSTABLE MODES In this section, the existence of various modes will be investigated from results of linear stability calculations using the GYRO code. 1 Plasma profiles and equilibria taken from TRANSP interpretive analysis were input into GYRO, and calculations were performed across a large range of k h q s for the six times and all three radial locations. The k h q s ranged from 0.2 to 40 in a relatively coarse grid, with the following values evaluated: k h q s ¼ 0. The results for only this radius will be presented graphically, although the results for x ¼ 0.35 and 0.50 will be summarized at the end of the section. In each panel, the real frequency (dashed line, open points, left ordinate) and growth rate (solid line, solid points, right ordinate), normalized to the local value of c s /a, are shown as a function of k h q s for the fastest growing mode. The results are color-coded with red indicating ballooning parity (e.g., ITG/TEM/ETG/KBM) and blue indicating tearing parity (e.g., microtearing). Normalized real frequencies > 0 correspond to the electron direction while those <0 correspond to the ion direction. The horizontal shaded region represents the absolute value of the ExB shearing rate given by c E ¼ À(r/q)dX/dr normalized to the local value of c s /a. Here, X is the toroidal rotation frequency, which is the overwhelmingly dominant contribution to the ExB shear in these NSTX plasmas. 
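The normalized gradient drive terms and the E×B shearing rate defined above can be built numerically from radial profiles. The following sketch uses invented profile shapes; R/L_X = -R d(ln X)/dr is the assumed convention for the normalized inverse gradient scale length, and the shearing rate is normalized to a local c_s/a as stated in the text.

```python
import numpy as np

def inv_scale_length(r, profile, R):
    """Normalized inverse gradient scale length R/L_X = -R d(ln X)/dr."""
    return -R * np.gradient(np.log(profile), r)

def exb_shear_rate(r, q, omega):
    """E x B shearing rate gamma_E = -(r/q) dOmega/dr (rotation dominated)."""
    return -(r / q) * np.gradient(omega, r)

# Made-up profiles on a minor-radius grid (purely illustrative values).
r  = np.linspace(0.05, 0.60, 50)          # minor radius [m]
R0 = 0.85                                  # major radius [m]
Te = 800.0 * np.exp(-(r / 0.35)**2)        # electron temperature [eV]
q  = 1.2 + 4.0 * (r / 0.60)**2             # safety factor
Om = 4.0e4 * (1.0 - (r / 0.65)**2)         # toroidal rotation frequency [rad/s]

R_over_LTe = inv_scale_length(r, Te, R0)
gamma_E    = exb_shear_rate(r, q, Om)

# Normalize gamma_E to the local c_s/a (c_s from Te, deuterium mass).
cs = np.sqrt(Te * 1.602e-19 / (2.0 * 1.673e-27))   # ion sound speed [m/s]
a  = 0.60                                           # minor radius [m]
mid = len(r) // 2
print("R/L_Te at mid-radius:", R_over_LTe[mid])
print("|gamma_E|/(c_s/a) at mid-radius:", abs(gamma_E[mid]) / (cs[mid] / a))
```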
A sharp switch from tearing to ballooning parity modes is seen to occur over a narrow range in k h q s in some cases to be shown, and this reflects a close competition between the two modes for dominance. Thus, while the fastest growing mode is the dominant mode, the subdominant mode may be strong enough as well to influence the transport level. Only non-linear calculations would include the influence of all unstable modes, and while these are underway, a comprehensive non-linear-based study is beyond the scope of this paper. Ballooning parity modes in both the ion and electron directions are seen at the earliest time, t ¼ 0.2 s (Fig. 5(a)), with the linear growth rates of the ion modes (k h q s 4) exceeding the ExB shearing rate by over a factor of two. Electron direction modes at higher k h q s have linear growth rates that are approximately one-half the ExB shearing rate. The effect of the ExB shearing on the modes cannot practically be estimated until, again, full non-linear calculations are run, but it is probable that the ExB shearing suppresses the k h q s ! 4 turbulence to some extent. That the electrostatic ballooning parity modes are the dominant ones at this time is consistent with the low b e and with the large R=L T i and R=L T e drive terms. With higher b e ; b e R=L T e and collisionality, microtearing modes are seen to dominate at t ¼ 0.3 s for k h q s 1, with growth rates much greater than the ExB shearing rate (Fig. 5(b)). An ion direction ballooning parity mode (ITG) is seen to exist at k h q s ¼ 2, with a growth rate of comparable amplitude to the ExB shearing rate. No modes are calculated to be unstable for k h q s > 2. Similar behavior of modes is seen at t ¼ 0.4 s (Fig. 5(c)), with the microtearing dominant for k h q s 1, and all modes are stable for k h q s > 1. Again, the linear growth rates exceed the ExB shearing rate. A scan of the input T e gradient to the linear GYRO calculation was performed for this case to determine how far from threshold the experimental R=L T e is. For the fastest growing mode at k h q s ¼ 1, the microtearing linear threshold was determined to be at approximately 75% of the experimental value, which is outside the experimental uncertainty of R=L T e . The a and R=L n e values at t ¼ 0.5 s, coupled with the higher b e ; b e R=L T e and à e values (Figs. 3(b) and 4(a)) suggested the possibility that a mixture of modes could exist at this time. This is indeed seen in Fig. 5(d), which gives the linear growth rates and real frequencies at this time. Microtearing is calculated to be the dominant mode for k h q s 2, except for one wavenumber, k h q s ¼ 0.6, where an electron direction ballooning parity mode is calculated to be the fastest growing mode. This mixture of modes within this k h q s range reflects the competition between modes for dominance, as discussed earlier in this section. For k h q s ! 4, it is this ballooning parity mode that is dominant with modes predicted to be unstable up to the maximum k h q s value, 40, that was studied. Parametric GYRO scans were carried out for selected k h q s to identify the ballooning-parity modes at this time. At k h q s ¼ 0.6, the ballooning-parity mode was found to scale strongly with b, indicating a KBM. The experimental b was approximately 25% above the threshold beta for the KBM at this time. At higher k h q s , k h q s ¼ 4, the scaling of the linear growth rate with b was weak, but c increased strongly with the temperature gradient, suggesting an ETG mode. 
For this k h q s the experimental R=L T e was 25% greater than the threshold value. At k h q s ¼ 20, the mode is also identified as an ETG, with the experimental R=L T e 35%-40% above threshold. Microtearing dominates across a wide range of k h q s at t ¼ 0.6 s (Fig. 5(e)) with no ballooning parity modes predicted to be unstable. The mode is relatively far from marginal, with the experimental R=L T e for k h q s ¼ 2 being a factor to two greater than the linear threshold value. Consistent with the drop in b e and increase in both R=L T e and especially R=L T i at t ¼ 0.7 s, the range of unstable microtearing modes shrinks, and that for electron direction ballooning parity modes increases, the latter covering the range of k h q s from 0.6 to 40. This ballooning parity branch consists of different modes, similar to what was determined for t ¼ 0.5 s. At k h q s ¼ 0.6, the mode is a KBM which is robustly unstable for a wide range of b. The mode at k h q s ¼ 20 is an ETG, with the experimental R=L T e over a factor of two above threshold. As a consistency check, the R=L T e at this time and at 0.5 s exceeds the analytic threshold value for ETG to be unstable. 32 In all, the results of the linear growth rate calculations at x ¼ 0.65 are consistent with the expectations inferred from the parameter values and variations seen in Figs. 3 and 4, with electrostatic ballooning parity modes seen at the earliest and latest times, microtearing for the times in between, and some mixture of modes especially at t ¼ 0.5 s. At x ¼ 0.35, microtearing modes are predicted to be unstable at low k h q s ( 1) for t ¼ 0.2-0.5 s, although the linear growth rates are significantly lower (by 30%-50%) than the ExB shearing rate. At t ¼ 0.6 and 0.7 s, the microtearing mode at k h q s 1 is supplanted by an ion-directed ballooning parity mode with growth rates of order 60%-70% of the ExB shearing rate. The results for x ¼ 0.5 are similar to those discussed in detail for x ¼ 0.65. ITG-like modes are predicted to be dominant at t ¼ 0.2 s, with growth rates comparable to the ExB shearing rate, and microtearing is predicted to exist for t ! 0.3 s at k h q s 1, with at least some of the growth rates over the k h q s range exceeding the shearing rate, except for t ¼ 0.4 and 0.5 s, where they are of order 50% of the ExB shearing rate. As for x ¼ 0.65, a mixture of microtearing and ballooning modes with significant linear growth rates is predicted to exist at t ¼ 0.7 s. IV. REDUCED MODEL PREDICTIONS OF T e In Secs. II and III, it was established that while microtearing may not have an exclusive role in setting transport levels in the plasmas being studied, it certainly can have a very important and often dominant one. This was seen in both the parameter variations and the results of the linear growth rate calculations for the later times in the study. A complementary method of establishing the importance of microtearing in setting transport in this NSTX parameter regime is to compare measured temperature profiles with those predicted by a microtearing-based transport model. This is most easily done if there exists a reduced microtearing transport model that could be implemented in a predictive transport solver. Such a model does exist. The Rebut-Lallia-Watkins (RLW) critical temperature gradient model 29,30 is based on a scenario of magnetic turbulence affecting magnetic topology, which in turn drives plasma transport, and, in particular, electron transport. 
Specifically in this model, magnetic islands form and overlap when the electron temperature gradient exceeds a critical (threshold) value. It is the island overlap and resulting field line stochasticity (i.e., microtearing) that enhances the electron transport. In the model, the microtearing transport is reduced by high magnetic shear, which, according to the authors, leads to magnetic island sizes that are too small to be self-sustaining. The critical electron temperature gradient for island overlap in this model, (∇T_e)_crit, is a function of the local plasma resistivity η, the current density J, and the local q-value (Eq. (1); the explicit expression is given in Refs. 29 and 30). For the NSTX discharge studied here, the measured electron temperature gradient exceeds the threshold value for all times and radial locations by between one and two orders of magnitude. Thus, for the case studied here, the electron temperature profiles are far from marginality and the transport is not stiff. This was also found for conventional aspect ratio tokamaks, specifically the OH and EC-heated TCV tokamak.33 The electron thermal diffusivity for the RLW model is given by

χ_e,RLW ∝ (∇T_e/T_e + 2 ∇n_e/n_e) (T_e/T_i)^0.5 (R/r) [q²/(∇q B R)]^0.5.   (2)

Note that while the microtearing transport level in this model is strongly dependent on n_e, T_e, T_i, q, and their gradients, there is no explicit or implicit dependence on either collisionality or β, which, from non-linear gyrokinetic calculations, are known to affect the microtearing-induced transport. These dependences are somewhat contained implicitly in the expression for the critical gradient given above, but this has virtually no effect on the transport since the measured gradient is so much higher than the threshold value. The lack of ν*_e and β_e dependences can certainly be viewed as a shortcoming of the model, and they should be taken into account in any future revision of this, or development of a new, microtearing-based reduced transport model. For the study being presented here, the RLW model will be used to predict the time-evolving T_e profile, which will be compared to the measured profiles at the six times of interest. The RLW model has been implemented in the TRANSP code within the framework of the recently developed PT_SOLVER stiff transport solver. PT_SOLVER was implemented in TRANSP as an engine for predicting transport with stiff models such as TGLF. While RLW is not stiff, the PT_SOLVER option is still used for its prediction. The reason for this is that PT_SOLVER is a multi-region solver that offers the capability of employing different models, or user-defined input, in different regions of the plasma. The boundaries of the different regions can be defined by the user. For the case being presented here, the RLW model was used in the region from x = 0.2 to 0.8. Inside x = 0.2, a user-defined χ_e value is used to reflect the enhanced electron transport associated with high-frequency Compressional/Global Alfvén eigenmode activity which has been inferred for this region.34 So far, no reduced model for this CAE/GAE-related transport has been developed. The x = 0.8 location has been chosen as the outer boundary; beyond this location there are large uncertainties in the T_i profile data, as previously discussed. For this calculation, only the T_e profile was predicted. At the early stages of any model testing, it is important to be able to isolate the effects of the model by studying predictions in as few transport channels as possible.
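Taking Eq. (2) at face value (as reconstructed above, and only up to an undetermined overall constant), the radial shape of the RLW diffusivity can be evaluated pointwise from profiles. This is an illustrative sketch with invented profiles, not the TRANSP/PT_SOLVER implementation.

```python
import numpy as np

def chi_e_rlw_shape(r, Te, Ti, ne, q, B, R0):
    """Radial shape of the RLW electron thermal diffusivity, Eq. (2),
    up to an undetermined overall constant:
      chi_e ~ (|grad Te|/Te + 2|grad ne|/ne) * (Te/Ti)^0.5
              * (R/r) * (q^2 / (|grad q| * B * R))^0.5
    The threshold factor (Eq. (1)) is omitted because the measured gradient
    is stated to be far above threshold for this discharge."""
    dTe = np.abs(np.gradient(Te, r))
    dne = np.abs(np.gradient(ne, r))
    dq  = np.abs(np.gradient(q, r)) + 1e-12        # avoid division by zero
    return (dTe / Te + 2.0 * dne / ne) * np.sqrt(Te / Ti) \
           * (R0 / r) * np.sqrt(q**2 / (dq * B * R0))

# Made-up smooth profiles, purely to exercise the function:
r  = np.linspace(0.1, 0.55, 40)
Te = 900.0 * np.exp(-(r / 0.4)**2)         # eV
Ti = 1000.0 * np.exp(-(r / 0.45)**2)       # eV
ne = 5e19 * (1.0 - 0.5 * (r / 0.6)**2)     # m^-3
q  = 1.3 + 5.0 * (r / 0.6)**2
chi_shape = chi_e_rlw_shape(r, Te, Ti, ne, q, B=0.35, R0=0.85)
print(chi_shape[::10])                      # relative shape only, arbitrary units
```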
Restricting the prediction to a single channel avoids any non-linear propagation of prediction uncertainties that would occur when transport in multiple channels is being predicted, and thus could lead to misleading conclusions, either positive or negative. For instance, in this case study, the ions are governed primarily, but not exclusively, by neoclassical transport.24 A prediction of the ion temperature profile using either neoclassical transport only or the RLW anomalous plus neoclassical models does not give precise agreement with the measured profiles. This difference propagates to and potentially confuses the comparison between predicted and measured T_e, since the ion-electron coupling term is important for this case. Consequently, until there is a combined reduced anomalous plus neoclassical transport model that is valid for the parameter range of interest (validation studies of TGLF as applied to NSTX discharges are underway), and which can give an accurate prediction, the T_i profile for this calculation is taken to be the measured one. Similarly, the electron density, impurity density, and rotation profiles are also taken to be the measured ones.

Using the RLW model in the fashion described above, the comparisons between the predicted and measured electron temperature profiles are shown in Figs. 6(a)-6(f). In each of these figures, the solid line is the measured T_e while the dashed line is the predicted one. The shaded region represents the region in which the user-defined electron thermal diffusivity is applied. Here, χ_e is assumed to be 20 m²/s at x = 0 and linearly interpolated to match the χ_e,RLW value at x = 0.2, the innermost radius where the RLW model is used. As can be seen in the figure at early times, where microtearing is not predicted to be unstable and/or is just becoming important ((a) and (b)), the model over-predicts the T_e profile inside x = 0.5 and under-predicts it outside that radius. The agreement becomes much better as time progresses and as microtearing becomes dominant at low k (k_θρ_s ≲ 1), with or without higher-k ballooning parity modes. By t = 0.5 s, the agreement between the measured and predicted profiles can be considered to be quite good, except inside x = 0.2, where the user-defined value is used. It is noted that even at t = 0.6 s, where the MHD activity increases, and at t = 0.7 s, after the possible H-L back transition at t = 0.68 s, RLW is still doing a good job of predicting the measured T_e. The former implies that the MHD activity may be localized to near the plasma edge, while the latter implies that there is no significant change in the underlying transport levels on the time scale of 0.02 s, which is of order the electron confinement time (12-15 ms).

The time evolution of the measured and predicted T_e values at x = 0.35, 0.50, and 0.65 is shown in Fig. 7. From this figure also, the large differences between the measured and predicted T_e profiles at early times shrink as time progresses. The predicted T_e value at x = 0.65 is typically 10% lower than the measured value at t = 0.5 s. As mentioned previously, the uncertainty in the measured T_e value is typically between 2% and 3%. The standard profile fit metrics between the predicted and the experimental profiles, the RMS deviation and the offset, are given in Table I for the six times of interest. For the calculation of the metrics, only profile values between x = 0.2 and 0.8 were used; this was the range in which the RLW model was applied.
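The sketch below uses conventional definitions of the two fit metrics (RMS deviation as the normalized root-mean-square of the prediction-measurement difference and offset as its normalized mean, quoted in percent); these are assumed forms and may differ in detail from those used in the paper.

```python
import numpy as np

def profile_fit_metrics(te_pred, te_exp):
    """Conventional profile-comparison metrics (assumed definitions):
    RMS deviation and offset of (prediction - measurement), normalized to
    the mean experimental value and returned as percentages."""
    te_pred, te_exp = np.asarray(te_pred, float), np.asarray(te_exp, float)
    norm = np.mean(te_exp)
    diff = te_pred - te_exp
    rms_dev = 100.0 * np.sqrt(np.mean(diff**2)) / norm
    offset  = 100.0 * np.mean(diff) / norm
    return rms_dev, offset

# Example on made-up values restricted to the x = 0.2-0.8 window used in the paper:
te_exp  = np.array([0.95, 0.90, 0.82, 0.70, 0.55, 0.40, 0.28])   # keV
te_pred = np.array([1.00, 0.93, 0.83, 0.68, 0.52, 0.38, 0.27])   # keV
print(profile_fit_metrics(te_pred, te_exp))
```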
It is clearly seen in the table that the RMS deviation decreases dramatically with time, a quantitative indication of the improvement in the goodness of fit of the RLW model. For the last four times of interest, σ = 6% to 11%. This compares favorably with the RMS deviation of 6%-14% for the TGLF model over a range of DIII-D discharge types.35 The RLW model does a reasonably good job in predicting the T_e profile for this high-ν*_e plasma where microtearing is expected and calculated to be important; thus the use of this model is justified, at least for the later times. It is useful, however, to make a similar prediction for comparison using RLW for a plasma where microtearing is not believed to be important. The discharge used for this comparison had a large amount of lithium applied pre-shot, close to 1 g, and was at the lower range of collisionality: ν*_e ~ 0.06 at x = 0.5 in this discharge, as compared to ≥ 0.20 in 120967. Fig. 8(a) shows the linear growth rate (in units of c_s/a) of the fastest growing mode as calculated by GYRO across the outer radii of the plasma; here, the radial coordinate is ρ = r/a. The dominant mode for this case during the "steady-state" portion of the plasma is a ballooning parity hybrid mode, with characteristics of both TEM and KBM. The growth rate exceeds the E×B shearing rate at all radii for which the mode is calculated to be unstable. Not surprisingly, when the RLW model is used to predict the T_e profile in this plasma, in which microtearing is subdominant or stable, the agreement is poor, as can be seen in Fig. 8(b). The measured T_e profile is much broader than that predicted by the model, which under-predicts T_e outside of x = 0.3 by up to a factor of two.

The region of applicability of RLW can be assessed further by examining the variation of the total electron stored energy. The electron stored energy, as given by RLW,30 goes as

W_e = 0.026 n_e^0.75 Z_eff^0.25 B_T^0.5 I_p^0.5 (R a² κ)^(11/12) + 0.012 I_p (R a² κ)^0.5 P_tot / Z_eff^0.5,

in units of MJ, 10^19 m^-3, T, MA, m, and MW. As admonished even by the RLW authors, this scaling should be used only as a guide. To this end, the ratio of the experimental electron stored energy to that predicted by RLW is plotted in Fig. 9 as a function of electron collisionality at x = 0.5. The data points are taken from a collection of H-mode discharges consisting of those using either HeGDC + boronization or lithium evaporation for wall conditioning. The details of the datasets can be found in Kaye et al.25 For the plot shown here, the data are constrained so that q(x = 0.5) = 1.75-3.0 to avoid any implicit dependence on q that had not been identified.30 The data were divided into three categories based on ELM activity: ELM-free (blue), giant ELMs (green), and small ELMs (red). The ratio W_e/W_e,RLW varies from ~2.0 at the lowest collisionality to ~1.0 at the highest. Note that the discharge 120967 used for this case study had a q(x = 0.5) value that was greater than the range constraint presented in the figure, so this discharge is not included in this collection of data. While there certainly are dependences in the overall ratio due to the different effects of the various ELM types, the overall trend is that the ratio tends towards ~1.0 as collisionality increases. This is consistent with the trends predicted by the non-linear gyrokinetic analysis,21,25 indicating the greater role of microtearing at the higher collisionalities in this collection of discharges.
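A direct transcription of the stored-energy scaling above, usable for forming the W_e/W_e,RLW ratio plotted in Fig. 9. The input values here are invented, not the actual discharge parameters, and the scaling itself should be treated only as a guide, as the text notes.

```python
def w_e_rlw(ne19, z_eff, b_t, i_p, R, a, kappa, p_tot):
    """RLW electron stored energy scaling (units: 10^19 m^-3, T, MA, m, MW -> MJ),
    as reconstructed from the text; intended only as a rough guide."""
    geom = R * a**2 * kappa
    return (0.026 * ne19**0.75 * z_eff**0.25 * b_t**0.5 * i_p**0.5 * geom**(11.0 / 12.0)
            + 0.012 * i_p * geom**0.5 * p_tot / z_eff**0.5)

# Illustrative NSTX-like inputs (made up, not the case-study discharge values):
we_model = w_e_rlw(ne19=5.0, z_eff=2.5, b_t=0.35, i_p=0.7,
                   R=0.85, a=0.6, kappa=2.2, p_tot=4.0)
we_exp = 0.12   # hypothetical measured electron stored energy [MJ]
print(f"W_e,RLW = {we_model:.3f} MJ, W_e/W_e,RLW = {we_exp / we_model:.2f}")
```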
The trend and energy ratio values at the higher collisionalities indicate the relevance of using the RLW model in this parameter regime. V. SUMMARY AND DISCUSSION In this paper, a particular discharge was used as a case study for predicting the time evolution of the electron temperature using a reduced model that reflects microtearing-driven transport. The evolution of various discharge parameters representing the possible importance of various microinstabilities was first tracked, and this was followed by a determination of the linear (micro-)stability properties using the GYRO gyrokinetic code. Not surprisingly, the implications of the parameter evolution and the linear stability calculations were consistent. These results suggested the importance of rT e and/or rT i -driven electrostatic modes in the plasma core early in the discharge when b e and à e are relatively low, but a growing importance of microtearing-dominated transport with advancing time as b e and à e increased. It was also found that ballooning parity modes (ITG/TEM/KBM/ETG) can also play a role, even at later times in the discharge. A microtearing based reduced transport model developed in the mid-1980s, the Rebut-Lallia-Watkins model, was then used as a basis for predicting the electron temperature profile evolution in the plasma, using the measurements for all other profiles. This model was found to do a reasonable job of predicting the measured T e profiles in the regions and at the times where microtearing is expected to be important. Shortcomings of the model include a lack of any collisionality or b dependence, which are known to affect microtearing stability, but nevertheless the model does well even during discharge times where the edge MHD activity is slightly enhanced and shortly after an H-L back transition. The value of v e predicted by the model at a particular time and location at which the non-linear transport level was calculated by GYRO agrees with the non-linear prediction. Another microtearing-based transport model that could potentially be tested is that given by Wong et al. 23 Calculations of the electron thermal diffusivity from this model indicate not as good agreement with the experimentally inferred v e profile as does the RLW v e for the specific discharge being studied. The Wong model showed significant departures from the experimentally inferred v e except in the mid radius (x $ 0.5) region. Consequently, the predictions were restricted to use of the RLW model, which showed better agreement with the experimental v e . There are two coupled key points of emphasis that stem from this work. The first is that the prediction, and the agreement that was obtained, is by no means universal or extrapolatable in a simple manner. The discharge used for this case study was comprehensively analyzed through previous gyrokinetic simulations, and the importance of microtearing was established. This first point leads to the second, which is that use of any reduced model has to be justified. It is essential to know whether the physics represented by the model is valid for the regime being explored. This can be done, for instance, by independent calculations such as the linear gyrokinetic result presented in Sec. III. While good agreement was found in the case study discharge and other discharges where microtearing was predicted to be unstable and important, the agreement was poor for plasmas where microtearing was predicted to be stable or subdominant. 
Consequently, it would not be justifiable to simply apply the RLW model for predicting T e in, for instance, NSTX-U scenarios, or regimes in other devices, where collisionality may be lower and microtearing may be subdominant. Model validation in one regime does not necessarily imply validity in another, and this presents a certain paradox. Using a simple, reduced predictive tool is justifiable if it is known a priori that the physics of the predicted scenario is compatible with that of the model, but the microstability characteristics of the future scenario depends on the results from use of the predictive models. Some justification of the model use can be grounded in an a posteriori assessment of the stability properties of the predicted regime, although not with certainty. If the predicted regime is found to be stable to the modes driving the prediction, an inconsistency and lack of justification for use of the model are easily identified. However, any "self-consistency" found in the predicted parameter regime may merely be a "self-fulfilling prophecy." Predicted profiles may indeed suggest the validity of a model in that gyrokinetic calculations might support the underlying physics assumptions of the model used to produce them, but that self-consistency does not necessarily imply accuracy. While this leaves this particular predictive methodology in somewhat of an uncertain state, it is, nevertheless, a reality that must be recognized. The best approach appears to be to develop a reduced model from detailed gyrokinetic calculations that can encompass as much of the fundamental physics as possible, in the manner that TGLF was developed. Work is underway to explore the validity of this particular model for the range of presently accessible ST parameter regimes.
A Coarse-Grained Molecular Model for Simulating Self-Healing of Bitumen : The longevity of asphalt pavements is a key focus of road engineering, which closely relates to the self-healing ability of bitumen. Our work aims to establish a CGMD model and matched force field for bitumen and break through the limitations of the research scale to further explore the microscopic mechanism of bitumen self-healing. In this study, a CGMD mapping scheme containing 16 kinds of beads is proposed, and the non-bond potential energy function and bond potential energy function are calculated based on all-atom simulation to construct and validate a coarse-grained model for bitumen. On this basis, a micro-crack model with a width of 36.6nm is simulated, and the variation laws of potential energy, density, diffusion coefficient, relative concentration and temperature in the process of bitumen self-healing are analyzed with the cracking rate parameter proposed to characterize the degree of bitumen crack healing. The results show that the computational size of the coarse-grained simulation is much larger than that of the all-atom, which can explain the self-healing mechanism at the molecular level. In the self-healing process, non-bonded interactions dominate the molecular movement, and differences in the decreased rate of diffusion among the components indicate that saturates and aromatics play a major role in self-healing. Meanwhile, the variations in crack rates reveal that healing time is inversely proportional to temperature. The impact of increasing temperature on reducing healing time is most obvious when the temperature approaches the glass transition temperature (300 K). Introduction Bitumen has the ability to self-heal.Microcracks produced by low temperatures and traffic load can be gradually repaired by surface reconstruction, surface proximity, surface wetting, and mutual diffusion.At present, the majority of bitumen self-healing behavior research is based on macroscopic tests, where the healing degree is evaluated by testing the physical properties of the bitumen.However, the self-healing behavior of bitumen is essentially determined by its microscopic properties.Therefore, to really comprehend the self-healing mechanism, emphasis must be paid to the molecular structure of bitumen.For this reason, bitumen research is increasingly using Molecular Dynamics (MD).Initially, Zhang and Greenfield [1] proposed a three-component molecular model of bitumen and, subsequently, Li and Greenfield [2,3] a four-component model.Researchers have investigated the mechanism of self-healing using these molecular models.Scanning electron microscope imaging of bitumen microcracks in combination with MD simulations by Shen et al. [4] demonstrated that temperature, crack width, and the degree of aging are major factors affecting self-healing.Sun et al. [5] determined an optimal self-healing temperature, while Qu et al. [6] proposed a six-component model of a bitumen healing agent.It was found that crack width, temperature, state of molecular aggregation, and aggregate influence self-healing.He et al. 
[7,8] performed MD simulations using the fourcomponent molecular model with short-term aging and a healing agent.It was seen that as the bitumen microcracks entirely vanish, the sample volume begins to diminish.According to this study, healing depends on the diffusion of bitumen and healing agent into the cracks.Wei Sun [9] described the self-healing process as a combination of surface wetting and surface diffusion.Based on MD, Hu et al. [10] studied the self-healing of crumb rubbermodified bitumen and found that self-healing decreases with increased rubber content.Gong et al. [11] simulated the self-healing process in a 3D microcrack model and studied the effect of carbon-based nanomaterials as a healing agent.It was found that adding carbon nanotubes can effectively improve the self-healing of ordinary bitumen.Zhang et al. [12] used the residual oil extracted from soybeans to regenerate aged bitumen, enhancing its rheological properties.The activation energy of self-healing in aged bitumen was estimated by MD simulations. Presently, research on bitumen self-healing is limited to all-atom MD simulations, which reduces the scale of the computational sample and makes it impossible to analyze self-healing at larger scales.In this paper, we propose to bridge the gap between the micro and the macro scales by doing larger-scale simulations using Coarse-Grained Molecular Dynamics (CGMD).CGMD is routinely used in the study of petroleum and hydrocarbon compounds, which are the main chemical components of bitumen.For instance, Madhusmita et al. [13] proposed a coarse-grained (CG) potential for polycyclic aromatic hydrocarbons, while Khashayar et al. [14], an energy-based CG potential for carbon and silicon nanomaterials.Based on experimental thermodynamic data, Eichenberger et al. [15] used the GROMOS 45A3 force field for biomolecules to parameterize a CG potential for liquid alkanes.Bejagam et al. [16] developed a transferable CG hydrocarbon model by combining MD with particle swarm optimization.Garrid et al. [17] used the square gradient theory to describe the change in interfacial tension with pressure and molecular chain length and the effect of nitrogen interfacial adsorption on interfacial tension for four binary systems composed of nitrogen pressurized n-alkanes (n-pentane, n-hexane, n-heptane, and n-octane).Guadalupe et al. [18] applied the CG force field developed by the statistical correlation fluid theory to the asphaltene model and carried out MD simulations to verify the CG representation of a benchmark system of 27 asphaltenes in pure solvents (toluene or heptane).Guannan Li et al. [19] improved the CG Martini force field through the Flory-Huggins theory to analyze the aggregation of bitumen molecules. However, CGMD is not frequently employed in bitumen research; to apply other force fields, such as Martini to bitumen systems, some adjustments need to be made for the specificity of the bitumen molecular structure and there is no unique CG force field for bitumen systems.We developed a CG bitumen model and used it to simulate crack healing for the first time.The CG bitumen model is about 112 times larger than the MD model we previously used [8].The crack's volume is roughly 481 times bigger than that of the MD model.In this way, the simulated crack width approaches the actual crack width. 
All-Atom Molecular Model (MD) Our coarse-grained molecular model is derived from the all-atom MD simulation.In the all-atom MD process, we used the four-components with twelve molecules model of bitumen (Table 1) proposed by Li and Greenfield [20], who characterized AAA-1, AAK-1 and AAM-1 studied in the Strategic Highway Research Program (SHRP) by varying the ratio of the number of each type of molecule.This model contains four components: asphaltene, aromatics, saturate and polar aromatic, with N, O and S introduced, which gives a more complete picture of the diversity and complexity of bitumen than the earlier three-component model containing only C and H. Table 1.Composition of the all-atom molecular model [20].O 5 The force field is the basis of MD simulation which, as an approximate treatment of quantum mechanical methods, ignore computationally resource-intensive electronic behavior and can be used for molecular systems with a large number of atoms.There are several suitable force fields for organic matter available for MD simulations of bitumen, such as Universal, Compass, Compass II, OPLS-aa and Gromos96, etc.In our all-atom simulations, the Compass II force field is chosen, which is a molecular force field for studying condensed matter optimization from the atomic level to simulate both organic molecular systems as well as inorganic ones, it is suitable for the simulations of bitumen. Molecular All simulations in this paper were completed using Materials Studio 2017.The all-atom molecular model of bitumen is simulated based on the following steps: (1) Twelve different bitumen molecules are drawn and geometrically optimized to eliminate errors in bond lengths, bond angles, etc.; (2) Molecules are placed into a computational box with periodic boundary conditions with an initial density of 0.1 g/cm 3 ; (3) The system is relaxed for 300 ps in the NVT ensemble, the temperature is set to 298 K. The NVT ensemble refers to a constant temperature, constant volume ensemble where, through NVT relaxation, the bitumen molecules will fully move in the calculation box, thus mixing uniformly and reaching a relatively low energy position, with no change in density during this process; (4) It is annealed five times in the temperature range of 263-433 K, which corresponds to the working temperature of road bitumen.Annealing means that the temperature of the system is repeatedly raised and lowered, allowing some of the molecules to break through the energy barrier to reach a less energetic and more stable structure upon return to normal temperature; (5) It is simulated for 300 ps in the NPT ensemble, T = 298 K, P = 1 atm.The NPT ensemble refers to a constant temperature and pressure ensemble where the total energy and system volume can be freely varied, and the density of the bitumen system will stabilize through NPT relaxation. For the control of the NVT and NPT ensembles described above, temperature control and pressure regulation operations are required.We adopt the Berendsen method [21,22], the basic idea of which is to couple the system to a constant temperature external heat bath and to regulate the temperature and pressure of the system by absorbing or releasing energy from the heat bath. Figure 1 shows the final conformation of the all-atom model (volume = 54.9 nm 3 ) after the above steps, where the colors of the elements C, H, O, N and S are grey, white, red, blue and yellowish, respectively. 
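The Berendsen weak-coupling scheme adopted above nudges the instantaneous temperature toward the bath temperature by rescaling velocities each step. The sketch below shows only the textbook rescaling factor, as an illustration of the idea rather than the Materials Studio implementation.

```python
import math

def berendsen_velocity_scale(T_current, T_target, dt, tau):
    """Berendsen weak-coupling factor: velocities are multiplied by lambda each
    step so that the instantaneous temperature relaxes toward T_target with
    time constant tau:  lambda = sqrt(1 + (dt/tau) * (T_target/T_current - 1))."""
    return math.sqrt(1.0 + (dt / tau) * (T_target / T_current - 1.0))

# Example: system at 350 K coupled to a 298 K bath, 1 fs step, 0.1 ps coupling time.
lam = berendsen_velocity_scale(T_current=350.0, T_target=298.0, dt=1.0, tau=100.0)  # fs
print(f"velocity scaling factor this step: {lam:.5f}")   # slightly < 1, cooling the system
```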
For the control of the NVT and NPT ensembles described above, and pressure regulation operations are required.We adopt the Beren basic idea of which is to couple the system to a constant temperatur and to regulate the temperature and pressure of the system by absorb ergy from the heat bath. Figure 1 shows the final conformation of the all-atom model (volu the above steps, where the colors of the elements C, H, O, N and S a blue and yellowish, respectively.Density is the most intuitive parameter of bitumen, indirectly static structural properties of the MD model; the diffusion coefficient resent the rate of movement of the molecules in the bitumen system; bi at low temperatures, it appears in a glassy state, with the temperatu softening to show viscoelasticity, the glass transition temperature ca this property of bitumen.The points in the temperature intervals 198 K were fitted linearly, and the horizontal coordinate of the intersectio the glass transition temperature of the all-atom molecular model, wh dition, bitumen is often used with modifiers, healing agents and other molecular point of view, the solubility parameter characterizes the in of the bitumen.Therefore, density, the diffusion coefficient, glass tra and solubility parameters were chosen to validate the all-atom mode the calculated values for the four parameters with reference values fr general, the calculated values agree well with the reference values. Properties Calculation Density (g/cm 3 ) (at 288 K) 0.996 Diffusion coefficient(cm 2 /s) (at 298 K) 0.4 × 10 −6 Glass transition temperature (K) 262.9 Solubility parameter (J/cm 3 ) 0. 5 17.6-18.2Density is the most intuitive parameter of bitumen, indirectly characterizing the static structural properties of the MD model; the diffusion coefficient can be used to represent the rate of movement of the molecules in the bitumen system; bitumen is a polymer, at low temperatures, it appears in a glassy state, with the temperature rising, gradually softening to show viscoelasticity, the glass transition temperature can be used to verify this property of bitumen.The points in the temperature intervals 198-258 K and 268-338 K were fitted linearly, and the horizontal coordinate of the intersection of the two lines is the glass transition temperature of the all-atom molecular model, which is 262.9K, in addition, bitumen is often used with modifiers, healing agents and other solutions, from the molecular point of view, the solubility parameter characterizes the intermolecular forces of the bitumen.Therefore, density, the diffusion coefficient, glass transition temperature and solubility parameters were chosen to validate the all-atom model.Table 2 compares the calculated values for the four parameters with reference values from the literature.In general, the calculated values agree well with the reference values. Coarse-Grained Model The core of CGMD is to determine the distribution structure and number of atoms corresponding to a coarse-grained bead, i.e., to determine the mapping scheme.If the number of atoms in the CG bead is too small, the simulation is limited in scale and the calculation efficiency is low.If too many atoms are included, important molecular information (e.g., molecular topology structure and branched chain form) will be lost with insufficient accuracy of the model. 
Considering that bitumen molecules are mainly composed of polycyclic aromatic hydrocarbons, unsaturated cyclic aromatic compounds, non-polar hydrocarbons with straight and branched chains and aliphatic cycloalkanes, we considered 16 kinds of coarse beads classified in Table 3 after repeated calculation.During the division of the coarsegrained beads, the DMol3 module was used to minimize the energy of the functional groups to be coarse-grained, and the GGA-PBE generalization with the DNP group was selected to calculate the optimized structure with the lowest energy.All hydrogen atoms are not abstracted by the structure (including functional groups of hydrogen atoms), and all the chemical formulas on the number of hydrogen atoms are expressed in n, this is due to the fact that the relative atomic mass of hydrogen atoms is small compared with other heavy atoms, besides, the internal forces of the bitumen system are not dominated by hydrogen bonds; the differences in the number of hydrogen atoms within the same CG bead have less influence on the properties of the beads, thus this effect is ignored in this work.calculation efficiency is low.If too many atoms are included, important molecular information (e.g., molecular topology structure and branched chain form) will be lost with insufficient accuracy of the model.Considering that bitumen molecules are mainly composed of polycyclic aromatic hydrocarbons, unsaturated cyclic aromatic compounds, non-polar hydrocarbons with straight and branched chains and aliphatic cycloalkanes, we considered 16 kinds of coarse beads classified in Table 3 after repeated calculation.During the division of the coarsegrained beads, the DMol3 module was used to minimize the energy of the functional groups to be coarse-grained, and the GGA-PBE generalization with the DNP group was selected to calculate the optimized structure with the lowest energy.All hydrogen atoms are not abstracted by the structure (including functional groups of hydrogen atoms), and all the chemical formulas on the number of hydrogen atoms are expressed in n, this is due to the fact that the relative atomic mass of hydrogen atoms is small compared with other heavy atoms, besides, the internal forces of the bitumen system are not dominated by hydrogen bonds; the differences in the number of hydrogen atoms within the same CG bead have less influence on the properties of the beads, thus this effect is ignored in this work.calculation efficiency is low.If too many atoms are included, important molecular information (e.g., molecular topology structure and branched chain form) will be lost with insufficient accuracy of the model.Considering that bitumen molecules are mainly composed of polycyclic aromatic hydrocarbons, unsaturated cyclic aromatic compounds, non-polar hydrocarbons with straight and branched chains and aliphatic cycloalkanes, we considered 16 kinds of coarse beads classified in Table 3 after repeated calculation.During the division of the coarsegrained beads, the DMol3 module was used to minimize the energy of the functional groups to be coarse-grained, and the GGA-PBE generalization with the DNP group was selected to calculate the optimized structure with the lowest energy.All hydrogen atoms are not abstracted by the structure (including functional groups of hydrogen atoms), and all the chemical formulas on the number of hydrogen atoms are expressed in n, this is due to the fact that the relative atomic mass of hydrogen atoms is small compared with other heavy atoms, besides, the 
In the model, a bead usually groups three to seven atoms. C3, C4a, C4b, C5a, C5b and C6 are hydrocarbon structures without a benzene ring; N1, N2 and S1 are non-hydrocarbon structures without a benzene ring; B1, B2, B3, B4, B5, B6 and B7 are chemical structures containing a benzene ring. In order to maintain the integrity of the benzene ring and to preserve as much as possible the characteristics of the benzene functional group, the benzene ring structures were treated specially: a benzene ring that exists alone is not split and is mapped onto one of the CG beads B1, B4, B5 or B6; for compound benzene rings, i.e., structures in which multiple benzene rings share carbon atoms, the mapping is completed with one of B2, B3 or B7 (the red circle in Table 3) attached to B1. For example, naphthalene (C10H8) is mapped onto two beads, one B1 and the other B2. The CG molecule models were built with the building block approach [26], and the CG structure is kept consistent with the all-atom structure by the confining effect of the force field. The CG models of all bitumen molecules are shown in Figure 2.
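To make the mapping rules above concrete, the short sketch below encodes a molecule as an ordered list of bead labels and checks the three-to-seven-atom coordination rule. The heavy-atom count assigned to each bead label is an illustrative placeholder (the actual bead structures are defined in Table 3); only the naphthalene example is taken directly from the text.

```python
# Illustrative encoding of the mapping scheme. The heavy-atom counts per bead
# are placeholders; the real bead structures are defined in Table 3.
HEAVY_ATOMS_PER_BEAD = {
    "B1": 6,   # isolated benzene ring
    "B2": 4,   # fused-ring fragment attached to B1 (placeholder count)
    "C3": 3, "C4a": 4, "C4b": 4, "C5a": 5, "C5b": 5, "C6": 6,
}

# Example from the text: naphthalene (C10H8) is mapped onto two beads, B1 + B2.
MAPPING = {"naphthalene": ["B1", "B2"]}

def check_coordination(mapping, atoms_per_bead=HEAVY_ATOMS_PER_BEAD):
    """Flag any bead that groups fewer than 3 or more than 7 atoms."""
    for molecule, beads in mapping.items():
        for bead in beads:
            n = atoms_per_bead[bead]
            if not 3 <= n <= 7:
                print(f"{molecule}: bead {bead} groups {n} atoms, outside 3-7")

check_coordination(MAPPING)
```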
Coarse-Grained Force Field

The force field determines the laws of motion of the simulated system; it describes the inter-particle interactions, including the bonding interactions between particles, the van der Waals forces between molecules, hydrogen-bond interactions, charge forces, etc. The basic computational particles of MD are atoms, whereas CGMD is based on coarse-grained beads, so the all-atom force field is not applicable to the simulation of coarse-grained molecular models.

The CG force field describes all the interactions of the coarse-grained beads in the form of potential energy functions, comprising non-bond potential energy functions and bond potential energy functions:

(1) The non-bond potential is the energy that mainly determines the macroscopic properties of the bitumen system; it includes the van der Waals potential, the hydrogen-bond potential and the charge energy. Among the non-bonded contributions, the van der Waals potential plays the major role in the bitumen system. Hydrogen bonds only exist between hydrogen atoms and oxygen, fluorine or nitrogen atoms and must conform to a certain structural formula; the bitumen molecular model in this study has few oxygen and nitrogen atoms and hydrogen bonds have little influence, so the hydrogen-bond potential is not considered in this study. The CG beads defined in this study are all neutral particles without charge, so the charge energy is not considered either.

(2) The bond potential energy function includes the simple harmonic vibration potential energy function, the plane bending potential energy function and the out-of-plane rocking potential energy function. The bond potential energy plays a minor role in determining the macroscopic properties of the bitumen system. In this study, the out-of-plane rocking potential energy function, which has the least influence on the bitumen system, is not considered.

The assumed interaction potential energy is:

U_CG = U_bond(r) + U_angle(θ) + U_non(r)    (1)

where U_CG is the CG potential, U_bond(r) is the bond energy (assumed harmonic), U_angle(θ) is the angle potential, U_non(r) is the non-bond potential, r is the distance between beads, and θ is the plane angle between three coarse-grained beads.
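As a minimal illustration of Equation (1), the sketch below sums harmonic bond and angle terms with a 12-6 Lennard-Jones non-bond term. The exact functional forms (in particular whether a factor of 1/2 is used in the harmonic terms) are assumptions for illustration; the actual parameters are given in the following subsections and in Tables 4-7.

```python
def u_bond(r, k_bond, r0):
    # Simple harmonic vibration potential (bond stretching)
    return k_bond * (r - r0) ** 2

def u_angle(theta, k_angle, theta0):
    # Plane bending (angle) potential, theta in radians
    return k_angle * (theta - theta0) ** 2

def u_non(r, eps, sigma):
    # 12-6 Lennard-Jones non-bond potential
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def u_cg(bond_terms, angle_terms, pair_terms):
    # Equation (1): total CG potential as the sum of bond, angle and non-bond terms.
    # Each argument is a list of parameter tuples for the corresponding function.
    return (sum(u_bond(*b) for b in bond_terms)
            + sum(u_angle(*a) for a in angle_terms)
            + sum(u_non(*p) for p in pair_terms))
```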
Non-Bond Potential

Since the van der Waals potential dominates the non-bonded potential of the bitumen system, the van der Waals force is described by the Lennard-Jones potential [27] in this work. Following the "bottom-up" coarse-graining method developed by Voth [28] and Shinoda et al. [29], and starting from the microscopic molecular model, the radial distribution function g(r) of every pair of coarse bead types interacting through van der Waals forces is obtained from all-atom simulations. The Boltzmann inversion formula [30] is then used to calculate the potential from the radial distribution function, as shown in Formula (2), and this potential is recast into the coarse-grained Lennard-Jones form of Formula (3), which defines the non-bonded potential function:

U(r) = -k_B T ln g(r)    (2)

U_non(r) = 4ε[(σ/r)^12 - (σ/r)^6]    (3)

where k_B is the Boltzmann constant, T is the temperature, and g(r) is the radial distribution function. ε and σ are the parameters of the force field; they are estimated by comparing Equation (3) to the all-atom potential: ε corresponds to the lowest energy point of the Lennard-Jones function, while σ is the value of r at which the potential is zero. Table 4 lists ε, and Table 5 lists σ.
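The following sketch shows one way the Boltzmann inversion and the ε/σ read-off described above could be carried out, assuming the radial distribution function from the all-atom run is available as arrays. The Boltzmann constant is expressed in kJ/(mol·K) so the resulting energies are molar, and the zero-crossing search for σ is a simple approximation.

```python
import numpy as np

KB = 0.0083145  # Boltzmann constant in kJ/(mol*K), giving molar energies

def boltzmann_inversion(r, g_r, temperature):
    # Formula (2): U(r) = -k_B * T * ln g(r); only defined where g(r) > 0
    r = np.asarray(r, dtype=float)
    g_r = np.asarray(g_r, dtype=float)
    mask = g_r > 0
    return r[mask], -KB * temperature * np.log(g_r[mask])

def estimate_lj_parameters(r, u_r):
    # epsilon: depth of the potential well (lowest energy point of U(r))
    eps = -u_r.min()
    # sigma: the first distance at which U(r) crosses zero (i.e., g(r) = 1)
    crossings = np.where(np.diff(np.sign(u_r)) != 0)[0]
    sigma = r[crossings[0]] if crossings.size else float("nan")
    return eps, sigma
```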
Bond Potential

When the CG molecular model is constructed, the non-bond potential function is used to calculate the interaction between beads that are not directly connected, while the bond potential function is used to calculate the interaction between directly connected beads. In the bond potential, the simple harmonic vibration potential energy is expressed as Formula (4) and the plane bending potential energy as Formula (5):

U_bond(r) = K_bond (r - r_0)^2    (4)

U_angle(θ) = K_angle (θ - θ_0)^2    (5)

The coarse-grained bond potential is considered harmonic, i.e., K_bond and K_angle are energy constants, r_0 is the equilibrium distance between two beads, and θ_0 is the equilibrium angle among three beads.

To determine r_0, the all-atom structure connected by every pair of bonded coarse-grained beads was simulated, the centroid position of each of the two beads in the all-atom structure was marked, and the distance between the two centroids was taken as r_0, as listed in Table 6. Some combinations of two coarse beads are not linked together in the bitumen molecular model of this study; the corresponding entries in Table 6 are therefore blank. For the bond angle, the all-atom structure connected by every three coarse-grained beads was simulated, the centroid positions of the three beads in the all-atom structure were marked, and the angle between the lines joining the centroids was measured as θ_0, as listed in Table 7. Most combinations of three coarse-grained beads do not form bonded plane angles in the bitumen molecular model of this study, so although 16 different coarse-grained beads can theoretically form 2176 plane-angle combinations, only 73 combinations appear in Table 7. The data in Tables 6 and 7 were corrected by repeated trial calculations. We use K_bond = 1250 kJ/(mol·nm^2) and K_angle = 25 kJ/mol for all bonds and angles, which is consistent with the MARTINI force field [31].
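A minimal sketch of how r_0 and θ_0 could be measured from the all-atom coordinates, assuming the atoms belonging to each bead are already identified. Whether the centroid is mass-weighted is an assumption, so an optional mass argument is included.

```python
import numpy as np

def centroid(coords, masses=None):
    # Centroid of the atoms assigned to one CG bead (mass-weighted if masses given)
    coords = np.asarray(coords, dtype=float)
    if masses is None:
        return coords.mean(axis=0)
    m = np.asarray(masses, dtype=float)
    return (coords * m[:, None]).sum(axis=0) / m.sum()

def equilibrium_bond_length(bead_a, bead_b):
    # r_0: distance between the centroids of two bonded beads (Table 6)
    return np.linalg.norm(centroid(bead_a) - centroid(bead_b))

def equilibrium_angle(bead_a, bead_b, bead_c):
    # theta_0: angle (degrees) between the centroid-centroid vectors of three
    # connected beads, with the middle bead at the vertex (Table 7)
    a, b, c = centroid(bead_a), centroid(bead_b), centroid(bead_c)
    v1, v2 = a - b, c - b
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
```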
Coarse-Grained Molecular Dynamics

The equilibration steps for the coarse-grained model are comparable to those of the all-atom one, as follows: (1) Color the CG molecule models of each component of the bitumen separately for subsequent studies on the effect of the components on the properties of the bitumen system; the asphaltenes are colored red, the aromatics green, the polar aromatics yellow and the saturates blue. (2) Construct a cubic calculation box with periodic boundary conditions, side lengths of 30 nm and an initial density of 0.15 g/cm^3, then insert the CG molecular model of each component into the box according to the mass ratio. (3) Conduct geometric optimization (energy minimization), which should be as thorough as possible since the CG molecular model is larger and contains more particles than the all-atom one; the number of iterations is set to 40,000. (4) Relax the model for 2000 ps at 533 K in the NVT ensemble to reach the energy-minimum conformation. (5) Simulate the model for 3000 ps at 533 K and 10 atm in the NPT ensemble, then for 1000 ps at 1 atm; the 10 atm run accelerates the compression of the model, and the 1 atm run achieves the desired final pressure.

The final configuration is shown in Figure 3. The volume of the simulation box is 6128.5 nm^3, which is about 112 times that of the all-atom molecular model.

Validation of the Bitumen CG Molecular Model

Similar to the validation of the all-atom molecular model, the density, the diffusion coefficient and the molecular aggregation behavior were chosen as metrics to validate the CG molecular model.

The average density of the model is 0.878 g/cm^3 at 10 standard atmospheres and T = 533 K, while at 1 standard atmosphere and T = 533 K the average density is 0.876 g/cm^3. The small difference in density between the two pressures indicates a stable system. The density of the model at 1 standard atmosphere and T = 288 K is 1.023 g/cm^3, which is close to the reference value of 1.025 g/cm^3 in Table 2.

Simulating the CG molecular model at a temperature of 298 K for 200 ps in an NPT ensemble, the diffusion coefficient was calculated to be 2.522 × 10^-6 cm^2/s, which is close to the 3.5 × 10^-6 cm^2/s measured by Andrews et al. [32] using fluorescence correlation spectroscopy.

The change in the trajectory of the CG molecules is shown in Figure 4, where the molecules exhibit molecular aggregation behavior in the NVT ensemble simulation, forming agglomerates and further forming nanoaggregates [33].
Based on the above validation, the CG bitumen molecular model is considered reasonable and can be used for further research.

Bitumen Healing

The calculation box for the microcrack simulations is 73.2 nm in length and 18.3 nm in height. The CG molecular model of Figure 3 is placed on both sides of the box to represent the bitumen before healing, and the microcrack is represented by the empty area in the center of the box. The system is geometrically optimized for 50,000 iterations. The obtained bitumen microcrack model is shown in Figure 5. The volume of the microcrack in this model is approximately 481 times that of the all-atom microcrack [8], and the microcrack width (36.6 nm) is about 18 times larger, which will be helpful for studying the self-healing behavior and other mesoscopic properties of bitumen on a larger scale.
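As a rough geometric sketch of how such a pre-cracked configuration could be assembled, the code below places two copies of an equilibrated bitumen block at the two ends of the 73.2 nm box. Taking the block length as 18.3 nm (the approximate edge length of the cubic CG box, since 18.3^3 ≈ 6128.5 nm^3) leaves a 36.6 nm gap in the centre, consistent with the stated crack width. The coordinate handling is illustrative and is not the procedure used in the simulation software.

```python
import numpy as np

BOX_LENGTH = 73.2     # nm, length of the microcrack calculation box
BLOCK_LENGTH = 18.3   # nm, assumed length of one equilibrated bitumen block

def build_microcrack(block_coords):
    """Duplicate a bitumen block and shift the copies to both ends of the box.

    block_coords: (n_beads, 3) array with x in [0, BLOCK_LENGTH].
    Returns the combined coordinates; the region between the two blocks
    (width BOX_LENGTH - 2 * BLOCK_LENGTH = 36.6 nm) is the empty microcrack.
    """
    block = np.asarray(block_coords, dtype=float)
    left = block.copy()
    right = block.copy()
    right[:, 0] += BOX_LENGTH - BLOCK_LENGTH   # move the second block to the far end
    return np.vstack([left, right])
```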
NPT ensemble simulations of the bitumen microcrack models were carried out at 283, 293, 303, 313, 323, 333, 343, 353, 363 and 373 K for 500 ps. Figure 6 depicts the energy development of the system at 333 K during the healing procedure. At around 300 ps, the microcrack's two sides come into contact.
The bond energy remains constant while the other two components diminish as the two sides of the crack get closer to one another. This indicates that the forces driving healing at the molecular level are the angle and van der Waals forces, not the bond energy. The molecules do not stretch to fill the empty space but 'unfold', assuming a straighter shape, while the two sides are attracted to each other by van der Waals forces. After the two sides touch, the van der Waals attraction presses one side against the other, compressing the molecules and increasing the bond potential. This evolution is interesting because it shows that the molecular structure of bitumen changes during healing. Therefore, even if the bitumen may appear the same under a microscope, there is a clear difference before and after healing.
Density
The change in density of the system reflects the healing of the asphalt over time. The density of the box calculated during the simulation at 333 K is shown in Figure 7. As healing progresses, the density increases (the total mass is constant, but the volume changes due to the NPT ensemble). The density of the model after healing was about 1.002 g/cm³, which is close to the density of the CG molecular model of bitumen, 1.023 g/cm³.
The rate of density increase slows at about 80 ps after the start of the simulation and accelerates at about 110 ps, which is consistent with the changing trend of the length, width, and height curves of the calculation box. The overall form of the bitumen fluctuates dramatically in the first 80 ps and thereafter stabilizes, in accordance with the molecular motion trajectory. It is speculated that before 80 ps the bitumen molecular motion is mainly self-balancing and internal molecular recombination. From 80 ps until the crack is repaired, the molecular motion is largely diffusion, reaching its peak rate at about 110 ps.
Diffusion Coefficient
From the microscopic perspective, the speed of molecular motion can be represented by the diffusion coefficient, and there is a defined relationship between the mean square displacement (MSD) and the diffusion coefficient [34]. Theoretically, if the time is long enough, the slope of the function of lg(MSD) versus lg(time) should be 1, at which point Einstein diffusion occurs; the self-diffusion coefficient of a bitumen component can be calculated only when this condition is met [35]. However, because molecular dynamics simulation times are short, the diffusion coefficient can be calculated from the slope as long as the function of MSD versus time is linear.
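To make the MSD-to-diffusion-coefficient step concrete, the short sketch below fits the slope of MSD versus time over a chosen linear window and applies the Einstein relation D = slope/(2d) with d = 3 dimensions. It is only an illustration: the trajectory array, the window bounds, and the units (nm and ps, converted to cm²/s) are assumptions, not values or code from the study.

```python
import numpy as np

def msd(positions):
    """MSD(t) averaged over particles, using frame 0 as the origin.
    positions: unwrapped coordinates, shape (n_frames, n_particles, 3)."""
    disp = positions - positions[0]
    return (disp ** 2).sum(axis=2).mean(axis=1)

def diffusion_coefficient(t, msd_t, t_min, t_max, dim=3):
    """Fit MSD = 2*dim*D*t + b over a window where MSD is linear in t."""
    w = (t >= t_min) & (t <= t_max)
    slope, _ = np.polyfit(t[w], msd_t[w], 1)
    return slope / (2.0 * dim)

# Synthetic stand-in for a CG trajectory: 500 frames, 200 beads, 1 ps per frame.
rng = np.random.default_rng(0)
pos = np.cumsum(rng.normal(scale=0.05, size=(500, 200, 3)), axis=0)   # nm
t = np.arange(500, dtype=float)                                        # ps
D_nm2_per_ps = diffusion_coefficient(t, msd(pos), t_min=150, t_max=295)
print(D_nm2_per_ps * 1e-2, "cm^2/s")   # 1 nm^2/ps = 1e-2 cm^2/s
```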
The healing time is about 300 ps. For each component, we can calculate the slope of the MSD versus time function in the ranges of 150-295 ps and 300-410 ps (in these two time periods, the function of MSD and time is almost linear, as shown in Figure 8) and convert it into the diffusion coefficient of each component molecule before and after healing. The diffusion rates of the components are ordered asphaltenes < polar aromatics < saturates < aromatics. The diffusion rate of saturates is relatively close to that of aromatics, and both are much greater than those of asphaltenes and polar aromatics. This phenomenon is consistent with colloidal theory [33]. According to this theory, asphaltene, which is at the core of the colloidal bitumen system, should have the lowest diffusivity. Saturates and aromatics, on the other hand, are in the shell and should have higher diffusivity.
The diffusion coefficients of the four components decrease during healing. As shown in Figure 9, asphaltenes decreased by about 3.1%, polar aromatics by roughly 49.0%, saturates by about 38.3%, and aromatics by about 40.8%. Healing affects the bitumen molecular diffusion coefficients because, as the bitumen heals, the free movement space of the bitumen molecules becomes smaller and smaller. This effect is more significant for polar aromatics, saturates, and aromatics. This indicates that the self-healing behavior of bitumen mainly affects the molecular diffusion of the polar aromatics, saturates, and aromatics fractions, while it hardly affects the molecular diffusion of asphaltenes. Saturates and aromatics appear to be the primary contributors to the self-healing behavior of bitumen since their diffusion coefficients are substantially greater than those of asphaltenes and polar aromatics.
Relative Concentration and Crack Ratio
The relative concentration represents the fraction of molecules along a given direction. For a 3D box with XYZ coordinates, the analysis of relative concentration is akin to slicing the box: the larger the relative concentration at a point along the Z direction, the more molecules there are in the XY slice at that point. The relative concentration of molecules will be close to 0 at positions where voids completely occupy the box cross-section. The relative concentration can be calculated as RC = (n_s/V_s)/(n/V), where RC is the relative concentration of molecules, n_s is the number of particles in a cross-section perpendicular to the coordinate axis of the box, V_s is the volume of that cross-section, n is the number of particles in the calculation box, and V is the volume of the calculation box.
Taking 333 K as an example, the relative concentration of molecules along the length of the box during the simulation of the model is shown in Figure 10. Here, 40 ps roughly corresponds to the moment the model achieves initial self-balance, 80 ps roughly corresponds to the moment the bitumen's overall shape changes, 110 ps roughly corresponds to the moment the model's healing rate starts to accelerate, and 300 ps roughly corresponds to the moment the model achieves healing.
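As a worked illustration of the relative concentration defined above, the sketch below bins particle coordinates along the box length and divides the local number density of each slab by the average density of the whole box, so RC is close to 0 inside the crack and above 1 in the bulk bitumen. The box dimensions, the assumption of a square cross-section, and the synthetic coordinates are illustrative only.

```python
import numpy as np

def relative_concentration(x, box_length, cross_section_area, n_bins=100):
    """RC profile along one axis: (n_s / V_s) / (n / V) for each slab."""
    counts, edges = np.histogram(x, bins=n_bins, range=(0.0, box_length))
    slab_volume = (box_length / n_bins) * cross_section_area
    local_density = counts / slab_volume
    mean_density = len(x) / (box_length * cross_section_area)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, local_density / mean_density

# Illustrative frame: bitumen slabs on both sides of a 36.6 nm central crack.
rng = np.random.default_rng(1)
L = 73.2                                   # box length, nm
A = 18.3 * 18.3                            # assumed square cross-section, nm^2
x = np.concatenate([rng.uniform(0.0, 18.3, 5000),
                    rng.uniform(54.9, L, 5000)])
z, rc = relative_concentration(x, L, A)
print(rc[:3], rc[48:52])                   # bulk values vs. near-zero crack values
```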
The change in relative concentration is consistent with the molecular trajectory. As the simulation progresses, the crack in the center of the model becomes narrower, and the interval with a value of 0 in the center of the relative concentration diagram becomes shorter. The box length steadily decreases as a result of the NPT ensemble simulation, and the relative concentration broken-line diagram gradually gets shorter. As the bitumen at both ends of the model gradually moves towards the central crack, the relative concentration of bitumen molecules at both ends gradually decreases.
For the relative concentration along the length direction at a specific time, the maximum value of the abscissa represents the box length at that time, and the length of the interval with a value of 0 in the center represents the crack width at that time. Due to the NPT ensemble simulation, the box shrinks as the simulation proceeds, and part of the reduction in crack width simply reflects the reduction in box length, so the crack width alone cannot directly represent the degree of crack healing. Therefore, the parameter crack rate is defined in this paper to characterize the degree of crack healing: c = w/l, where c is the crack rate at a certain time, w is the crack width at that time, and l is the box length at that time.
Taking 333 K as an example, the crack width, box length, and corresponding crack rate of the model in the process of healing are shown in Figure 11. The model enters a state of self-balancing during the first 40 ps, during which the box length, crack width, and crack rate all increase. The box length essentially maintains a constant rate of reduction after 40 ps, which is in line with the features of NPT ensemble simulation. The crack width basically maintained a constant rate of decline within 40-80 ps, accelerated from 80 ps, slowed down at about 290 ps, and decreased to 0 at about 300 ps. The crack rate remained basically unchanged from 40 ps to 80 ps, accelerated from 80 ps, slowed down at about 290 ps, and decreased to 0 at about 300 ps.
According to the molecular trajectory, 80 ps is the time when the model completes the internal molecular reorganization, and 290 ps is the time when the bitumen molecules at both ends of the model begin to contact. The above analysis of the crack rate shows that at the initial stage of the self-healing behavior of bitumen, the bitumen molecules undergo internal molecular reorganization, and the cracks almost do not heal at this time; after the internal molecular recombination, the bitumen cracks begin to heal, and the healing rate continues to rise; when the bitumen molecules at both ends of the crack begin to contact, the crack healing rate decreases until healing is complete.
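The crack rate c = w/l can be read off such a concentration profile automatically. The sketch below takes the longest contiguous run of near-empty slabs as the crack width and divides it by the current box length; the emptiness threshold and the reuse of the profile from the previous sketch are assumptions made for illustration.

```python
def crack_rate(rc, bin_width, box_length, threshold=0.05):
    """c = w / l, with w taken as the longest contiguous stretch of slabs whose
    relative concentration falls below `threshold` (treated as empty)."""
    run = longest = 0
    for value in rc:
        run = run + 1 if value < threshold else 0
        longest = max(longest, run)
    crack_width = longest * bin_width
    return crack_width / box_length

# Continuing the previous sketch: about half the box is empty before healing.
c = crack_rate(rc, bin_width=L / 100, box_length=L)
print(round(c, 2))        # ~0.5 here; it decays towards 0 as the crack closes
```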
Temperature
Figure 12 displays the model's healing time at various temperatures. The molecular diffusion process is more vigorous, and healing is faster, at higher temperatures. As a result, the relationship between temperature and crack healing time is inverse. According to the slope in Figure 12, when the temperature is lower than 303 K, the effect of temperature is more pronounced. A possible explanation of this behavior is that 303 K is close to the glass transition temperature of bitumen, and an increase in temperature has a greater effect on molecular diffusion if the material is in the glass phase.
Conclusions
In this work, the CG model was established based on the data of the MD simulation of bitumen, the force field parameters were calculated, and a coarse-grained force field suitable for the CG molecular model of bitumen was developed and verified. At the same time, the model was used in the study of self-healing bitumen. The main research conclusions are as follows:
According to the structure characteristics of the four-component bitumen model with twelve molecules, 16 mapping schemes of CG beads were devised; the integrity of the benzene ring functional groups was preserved as far as possible, while the thickened benzene ring structure was mapped by using two CG beads connected to each other. Boltzmann inversion was used to calculate the non-bonded potential energy, and the harmonic potential was used to calculate the bonded potential energy, yielding the force field parameters. Compared with the all-atom model, the CG model allows a longer simulation time and a larger scale, which will be helpful in studying the self-healing behavior of healing-agent-doped and aged asphalt at the mesoscopic scale.
By studying CG models of microcracks in bitumen, it was found at the molecular level that angle and van der Waals forces are the driving forces for self-healing; the molecules "unfold" into a straighter shape as the van der Waals forces pull the two sides of the crack together, rather than stretching to fill the vacant space. The variation in the crack rate showed that increasing the temperature near the glass transition temperature of bitumen accelerates the molecular diffusion movement more significantly. At the same time, the diffusivity of saturates and aromatics decreased greatly during the healing process, indicating that the components mainly contributing to the self-healing behavior of bitumen are the saturates and aromatics.
Figure 2. The mapping scheme of the bitumen molecule model. (a) Mapping scheme of asphaltenes. (b) Mapping scheme of saturates. (c) Mapping scheme of aromatics. (d) Mapping scheme of polar aromatics.
Figure 6. Potential energy of the model in the self-healing process. (a) Bond energy and angle energy; (b) van der Waals energy.
Figure 7. The density of the model in the process of self-healing.
Figure 9. Diffusion coefficient of each component before and after healing.
Figure 10. Relative concentration at each time during the self-healing process of the model.
Figure 11. Healing degree parameters of the model in the process of self-healing. (a) Crack width and box length; (b) Crack ratio.
Table 2. Calculation and reference values of properties of the bitumen all-atom molecular model.
Table 3. Beads of the bitumen CG molecular model (columns: Bead Name, Structural Formula, Chemical Formula, Mapping Ratio, Legend). If too many atoms are included, important molecular information (e.g., molecular topology structure and branched chain form) will be lost, with insufficient accuracy of the model.
2022-10-19T15:25:36.775Z
2022-10-14T00:00:00.000
{ "year": 2022, "sha1": "5d61b0ac7ccd6b673062b710654e572e18732136", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/20/10360/pdf?version=1666610911", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "9093e67365112f149e5b8d54de4ca3de8074ad0c", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
13601876
pes2o/s2orc
v3-fos-license
Spatial Distribution of Deep Sulcal Landmarks and Hemispherical Asymmetry on the Cortical Surface The locally deepest regions of major sulci, the sulcal pits, are thought to be the first cortical folds to develop and are closely related to functional areas. We examined the spatial distribution of sulcal pits across the entire cortical region, and assessed the hemispheric asymmetry in their frequency and distribution in a large group of normal adult brains. We automatically extracted sulcal pits from magnetic resonance imaging data using surface-based methods and constructed a group map from 148 subjects. The spatial distribution of the sulcal pits was relatively invariant between individuals, showing high frequency and density in specific focal areas. The left and right sulcal pits were spatially covariant in the regions of the earliest developed sulci. The sulcal pits with great spatial invariance appear to be useful as stable anatomical landmarks. We showed the most significant asymmetry in the frequency and spatial variance of sulcal pits in the superior temporal sulcus, which might be related to the lateralization of language function to the left hemisphere, developing more consistently and strongly than for the right. Our analyses support previous empirical and theoretical studies, and provide additional insights concerning the anatomical and functional development of the brain. Introduction The pattern of sulcal and gyral folds, the principal anatomical landmarks of the human cerebral cortex, exhibits structural complexity and large intersubject variability. Understanding the spatial relationships between structure and functional features, and the precise anatomical correspondence across different brains, remains a challenge. Although the origin and meaning of this variability are still unclear, the first cortical folds to develop appear to be stable in number, position, and orientation (Regis et al. 1995(Regis et al. , 2005Cachia et al. 2003;Lohmann et al. 2008). These may therefore be used as stable anatomical landmarks for matching different cortical folding patterns between subjects. The formation of the first sulci occurs during the early stage of radial growth of the cerebral cortex, and their formation may be closely related to functional areas and the protomap of cytoarchitectonic areas (Rakic 1988;Hasnain et al. 2001;Regis et al. 2005;Lohmann et al. 2008). The sulci that form later during the tangential growth of the cerebral cortex appear to be more variable, both in appearance and in their relationship to functional areas (Hasnain et al. 2001(Hasnain et al. , 2006. It is therefore important to identify those sulcal landmarks (the putative first cortical folds) that form early and that retain their identity during development in the human brain. Moreover, their spatial distribution may be important for understanding the anatomical and functional development of the human brain. Brain asymmetry has been observed in humans in terms of structure, function, and behavior. The left hemisphere is normally dominant for language and logical processing, whereas the right hemisphere is involved in spatial recognition (Riss 1984;Geschwind and Miller 2001). The distribution of human brain function between the left and right hemispheres is associated with asymmetries in anatomical structures such as the Sylvian fissures (SFs), planum temporale, and superior temporal regions (Good et al. 2001;Sowell et al. 2002;Ochiai et al. 2004;Van Essen 2005). 
This lateralized specialization may arise from evolutionary, developmental, genetic, experiential, or pathological factors (Toga and Thompson 2003). Asymmetrical gene expression in the human embryonic cortex appears as early as 12 weeks (Sun et al. 2005;Sun and Walsh 2006). If the sulci that form early are related to functional areas and the formation is under genetic control (Lohmann et al. 1999(Lohmann et al. , 2008, their appearance and spatial distribution should be asymmetrical and associated with lateralized brain function. The locally deepest regions of sulci, called the sulcal pits, are the first to develop and subsequently change the least as the cortex expands (Lohmann et al. 2008). The sulcal pits previously described had a regular spatial arrangement and showed less variation between individuals than the more superficial cortical regions. However, that study used a volume image and was confined to the lateral brain areas because of limitations in the methods used (Lohmann et al. 2008). The first cortical folds were called sulcal roots, and a multiscale-based representation of the sulcal folding patterns was used to identify putative sulcal roots on a cortical surface model of the adult brain (Cachia et al. 2003). These putative sulcal roots were recovered from the curvature of the surface in the sulcal fundic regions. However, these results were confined to one of the simplest sulci, the central sulcus (CS), in a few subjects, and the spatial distribution pattern and intersubject variability were not described. To the best of our knowledge, there has not been an examination of the hemispheric asymmetry of these sulcal landmarks. We examined the spatial distribution of the deep sulcal landmarks (assumed to be the first cortical folds in the adult brain) across the entire cortical region, and assessed the hemispheric asymmetry in their frequency and distribution in a large group of normal adult brains. We considered that the sulcal pits, the locally deepest points of a sulcal fundus (Lohmann et al. 2008), are the significant sulcal landmarks. We identified sulcal pits using the cortical surface obtained from magnetic resonance imaging (MRI) data. Data Acquisition This study used the data set of the International Consortium for Brain Mapping that has been used in many previous studies (Mazziotta et al. 1995;Watkins et al. 2001;Im, Lee, Lyttelton, et al. 2008). The subjects scanned were 152 unselected normal volunteers. Each subject gave written informed consent and the Research Ethics Committee of the Montreal Neurological Institute (MNI) and Hospital approved the study. Each subject was scanned using a Phillips Gyroscan 1.5 T superconducting magnet system. The sequence that was used yielded T 1weighted images (3-dimensional [3D] fast field echo scan with 140--160 slices, 1-mm isotropic resolution, time repetition [TR] = 8 ms, time echo [TE] = 0 ms, flip angle = 30°). We excluded 4 subjects from the 152 because of surface modeling errors. The final sample consisted of 83 men and 65 women. Their ages ranged from 18 to 44 years (mean ± standard deviation: 25.0 ± 4.9 years). As determined from a short questionnaire, 15 subjects were left-handed on a number of tasks and 124 subjects preferred to use their right hand. The hand-dominance for 9 subjects was not known. Image Processing and Cortical Surface Extraction Images were processed using a standard MNI anatomical pipeline. 
The native images were normalized to a standardized stereotaxic space using a linear transformation and corrected for intensity nonuniformity (Collins et al. 1994;Sled et al. 1998). The registered and corrected volumes were classified into white matter, gray matter, cerebrospinal fluid, and background using an advanced neural-net classifier (Zijdenbos et al. 1996). The hemispherical surfaces of the inner and outer cortex, consisting of 40 962 vertices, were automatically extracted using the Constrained Laplacian-Based Automated Segmentation with Proximities algorithm (MacDonald et al. 2000;Kim et al. 2005). We used the inner cortical surface to extract the sulcal landmarks. Sulcal Depth Sulcal depth maps were generated by measuring the 3D Euclidean distance from each vertex in the inner cortical surface to the nearest voxel on the cerebral hull. This approach has been employed and described in our previous studies (Im, Lee, Lyttelton, et al. 2008;Im, Lee, Seo, et al. 2008). We masked the surfaces to the images, isolated inner voxels of the surfaces and binarized the images to extract the cerebral hull volume. We performed a 3D morphological closing operation on the binarized image using a structuring element of spherical shape. The radius of the structuring element was 10 mm, which is larger than the maximum radius of the sulcus. We detected the edge of the image with the Laplacian of the Gaussian mask and constructed a cerebral hull volume that wrapped around the hemisphere, but did not encroach into the sulci. Extraction of Sulcal Pits on the Cortical Surface A sulcal pit is the deepest point in a sulcal catchment basin, and it can be identified by using the structural information of small gyri buried in depths of sulci called plis de passage (the focal elevation of the sulcal bottom) (Gratiolet 1854;Regis et al. 2005). The plis de passage, which was described as the remnant of the development of separate sulcal segments (Cunningham 1905), is located between 2 sulcal pits within a sulus. We used a watershed algorithm to extract the locally deepest points of the sulci on triangular meshes. The concept underlying our watershed processing has been previously described (Rettmann et al. 2002;Yang and Kruggel 2008). The algorithm initially sorts the depth values and creates a list of vertices that are ordered by their depth. The vertex of the largest depth was first defined as the sulcal pit, the initial vertex of a catchment basin. If the next vertex in the list was the neighbor of the previously identified catchment basin, it was added to this catchment basin. If all of its neighbors were unlabeled, we created a new sulcal pit as a seed vertex for the next catchment basin. A vertex can touch more than one existing catchment basin, and that vertex is a ridge point where growing catchment basins join. In this case, the vertex would be assigned to the closest catchment basin. This processing was terminated when the depth of the vertex in the list was less than a threshold value of 7 mm. This threshold was chosen to avoid detecting unimportant pits that belonged to small, shallow sulci. The original cortical surfaces had noisy features and small geometric variations in shape, and therefore their sulcal depth maps were not smooth. The watershed algorithm applied to the original sulcal depth map overextracted sulcal pits because of small, noisy ridges and catchment basins (Fig. 1). 
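As an aside on the depth measure used throughout this procedure, the sketch below shows one way such a hull-based depth map could be assembled: a binary hemisphere mask is closed with a spherical structuring element, and each surface vertex is assigned the Euclidean distance to the nearest voxel on the hull boundary. It is a simplified stand-in for the pipeline described above (no Laplacian-of-Gaussian edge step), assuming 1 mm isotropic voxels and surface coordinates already expressed in the same space as the mask; the function and variable names are illustrative.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def cerebral_hull(brain_mask, voxel_size=1.0, radius_mm=10.0):
    """Morphological closing with a sphere large enough to bridge the sulci."""
    r = int(round(radius_mm / voxel_size))
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    sphere = (xx ** 2 + yy ** 2 + zz ** 2) <= r ** 2
    return ndimage.binary_closing(brain_mask, structure=sphere)

def sulcal_depth(vertices_mm, hull, voxel_size=1.0):
    """Depth of each vertex = distance to the nearest voxel on the hull surface."""
    boundary = hull & ~ndimage.binary_erosion(hull)
    hull_points_mm = np.argwhere(boundary) * voxel_size
    depth, _ = cKDTree(hull_points_mm).query(vertices_mm)
    return depth
```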
Surface-based diffusion smoothing with a full-width half-maximum (FWHM) value of 10 mm was used to smooth the depth maps (Chung et al. 2003). Although smoothing with the 10-mm kernel was not enough to eliminate all noise, larger kernels may remove not only noise, but also true sulcal pits. The results of the watershed algorithm using different smoothing kernels are shown in Figure 1. Our methods resulted in a slight overextraction of the pits rather than underextraction. Instead of using a larger smoothing kernel, we removed noisy sulcal pits by merging processing as described following. (a) Our first merging criterion was based on the areas of the catchment basins. Noisy catchment basins for minor local variations have small areas. If 1 of the areas of 2 or more catchment basins was smaller than a threshold when they met at a ridge point, the smaller catchment basin below the threshold was merged into the adjacent catchment basin with the deepest pit and its sulcal pit was removed. We empirically set the threshold at 30 mm 2 . We calculated the Voronoi region area of each vertex for local surface area (Meyer et al. 2002). The area of any arbitrary region on a triangular mesh can be measured as the sum of the Voronoi areas of the vertices making up that region ( Fig. 2A). Figure 1. Sulcal pit extraction using the watershed algorithm applied to sulcal depth maps with different kernels of diffusion smoothing for an individual cortical surface. The top row in the figure shows the depth maps of the left temporal area with increases in the smoothing kernel from 0 to 30 mm. The sulcal-pit maps based on these depth maps are shown in the second row. Sulcal pits were heavily overextracted in the original sulcal depth map. In the smoothed map at FWHM 20 mm, noisy pits were removed, but they were still present in the middle region of the STS. Some true sulcal pits seem absent on the map with smoothing at FWHM 30 mm. (b) We merged some sulcal pits using a second criterion, the distance between the pits. We computed the geodesic distance along the surface from all sulcal pits as seed vertices (Lanthier et al. 2001;Robbins 2003;Robbins et al. 2004). If the distance between any 2 pits was less than a 15 mm threshold, the shallower pit was merged into the deeper one. This criterion is based on the assumption that early, deep sulcal points may not be very close to each other. Previous studies describing the clusters of sulcal pits and the generic model of sulcal roots demonstrated that the distance from one location to another is larger than the 15 mm threshold that we used (Regis et al. 2005;Lohmann et al. 2008). (c) The final criterion was that the height of the ridge (the depth of the sulcal pit minus the depth of the ridge) should be less than the threshold for merging. Although 1 of the 2 criteria described above was met, merging was not executed and the sulcal pit was considered to be present if the ridge was higher than a threshold of 2.5 mm. Thus the logic for merging required ((a OR b) AND c) (Fig. 2B). The same threshold value of ridge height for merging was previously used in a watershed algorithm to segment anatomically different sulci (Yang and Kruggel 2008). Cluster Segmentation in the Group Map Each individual map of sulcal pits was transformed to a surface group template using a 2-dimensional (2D) surface-based registration that aligns variable sulcal folding patterns through sphere-to-sphere warping (Robbins et al. 2004;Lyttelton et al. 2007). 
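A minimal version of the depth-ordered watershed described above is sketched below: vertices are visited from deepest to shallowest, a vertex with no labelled neighbour seeds a new basin and is recorded as a sulcal pit, and other vertices join an adjacent basin. The area, distance, and ridge-height merging criteria are left out for brevity, and ridge vertices are assigned to the basin of their deepest labelled neighbour rather than the geodesically closest basin, so this is an approximation of the procedure rather than a reimplementation of it.

```python
import numpy as np

def mesh_watershed(depth, neighbors, depth_threshold=7.0):
    """Depth-ordered watershed on a triangular mesh.

    depth: per-vertex sulcal depth in mm (1D array).
    neighbors: list of neighbouring vertex indices for every vertex.
    Returns the pit vertices and a per-vertex basin label (-1 = unassigned).
    """
    depth = np.asarray(depth, dtype=float)
    order = np.argsort(-depth)                    # deepest vertex first
    label = np.full(len(depth), -1, dtype=int)
    pits = []
    for v in order:
        if depth[v] < depth_threshold:            # ignore shallow cortex
            break
        labelled = [n for n in neighbors[v] if label[n] >= 0]
        if not labelled:
            label[v] = len(pits)                  # new catchment basin; v is its pit
            pits.append(v)
        else:                                     # ridge or interior basin vertex
            deepest = max(labelled, key=lambda n: depth[n])
            label[v] = label[deepest]
    return np.asarray(pits), label
```

In a fuller implementation, the merging pass described above would then remove pits whose basin area falls below 30 mm², or whose geodesic distance to a deeper pit is under 15 mm, provided the intervening ridge is shallower than 2.5 mm.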
The surface group template is an unbiased, high-resolution iterative registration template from a group of 222 subjects' hemispheres (Lyttelton et al. 2007). The transformed pits from many subjects were located on the common template space and clustered at several specific regions. We segmented and defined clusters from the distribution of the pits, and classified each pit into the one of the clusters. First, the pits were smoothed with an FWHM of 10 mm, maintaining a peak value of 1 ( Fig. 3A) (Chung et al. 2003). The smoothed pits from all subjects were overlaid on the template and their values were summed. Several regions of high density were shown in the group map of the smoothed pits, indicating that the sulcal pits from many subjects were clustered into these small regions (Fig. 3B). We segmented the cluster regions and detected the densest points (local maxima) in the clusters by using the watershed algorithm with the merging criterion of the area of the catchment basin described above. The area threshold was set as 30 mm 2 to avoid oversegmentation. Regions of low density whose values were less than 3 were excluded from running the watershed segmentation process. The improved surface registration algorithm and the group template with its increased anatomical detail (Lyttelton et al. 2007) allowed us to include some well-matched minor sulcal folds, and the sulcal pits of such folds were clustered in the group map. We manually selected the clusters located in the deep major sulci and excluded others because we were interested in the deep sulcal pits of major sulci that are related to early brain development. Pits occupying the same cluster had the same identity. If more than one pit was present in a cluster in one subject, the pit that was closest to the densest point was chosen for analysis. Construction of a Local Coordinate System Because the cortical surface starts from a spherical polygon model, the vertices can be inversely transformed to the spherical model (Kim et al. 2005). The sulcal pits, previously transformed to the common space of the surface template, can be distributed in a 2D spherical coordinate system. However, the spherical coordinates (h, u) of the pits cannot be used to calculate the shape distribution directly (Tao et al. 2002). We constructed a tangent plane as a local coordinate system for each cluster. On the tangent plane, we defined an orthogonal coordinate system (u, v) whose origin was the average position of the pits on the sphere. The pits of all subjects were then projected onto the tangent plane (Fig. 4). The methodological details have been described previously (Tao et al. 2002). We performed all data analyses based on the local coordinate system. Data Analysis The Frequency and Density of Sulcal Pits We examined the frequency of sulcal pits. The percentage of subjects that had a pit in each cluster region was calculated (the number of pits (N p )/148 3 100). This reflects the consistency of their appearance between subjects. We measured the density to assess the consistency of spatial localization of the pits within a sulcus. This density was defined as the percentage of pits present within a radius of 5 mm geodesic distance from each point of maximal density (the number of pits in the area of radius 5 mm (N p5 )/N p 3 100). Statistical Tests to Measure Hemispheric Asymmetry We analyzed the hemispheric differences in data distribution. First, pit frequency asymmetry between left and right hemispheres was estimated using a v 2 test. 
The presence of pits was modeled in the form of binary categorical variables taken to be 1 when the pits were present and 0 when absent. The variables for the left and right sides are also categorical. We generated crosstab table, relating 2 categorical variables with each other. A v 2 test was used to see if there is a relationship between 2 categorical variables, or they appear to be independent. If the values in the cells of the table are not balanced, it means that the frequency of sulcal pits is asymmetric and dependent on the side. In those cases where the assumption of the v 2 test (expected frequency of 5 or more in each cell) was not met, we adopted Fisher's exact test. Second, we tested the differences in spatial variance of the pits between the left and right hemispheres. An equality of variance was assessed using Levene's test (Levene 1960). Third, we examined the differences in the spatial distribution of the pits between the hemispheres. Because measurements made in the left and right hemispheres are dependent on each other, the differences in the multiple variables (u, v) were tested using a repeated-measures multivariate analysis of variance (MANOVA), with side (left and right) entered as the within-subject variable. Spatial Covariance of Sulcal Pits between Left and Right Hemispheres We investigated covariance in the location of sulcal pits on the template surface between the hemispheres. The spatial covariance between the left and right pits was assessed using a canonical correlation analysis. This is a multivariate statistical technique for describing and quantifying correlated variation between sets of vector variables (the u and v in our study). A canonical correlation analysis has been used to quantify the correlated behavior of different subcortical structures (Rao et al. 2008). Results We applied our method of sulcal pit extraction to MRI data from 148 normal adult brains. The final individual maps of sulcal pits and sulcal catchment basins are shown in Figure 5. The data in the first column on the left in Figure 5 show the same subject as in Figure 1. Any noisy pits remaining after applying diffusion smoothing were eliminated using merging, leaving only the likely pits on the left temporal lobe. Figure 6 shows the final map of the segmented cluster regions extracted from Figure 3B, including the distribution of sulcal pits in the major sulci. Over the entire cortical region, 48 clusters were detected on the left hemisphere and 47 on the right. The frequency of pits was too low to form a significant cluster in the right middle region of the superior temporal sulcus (STS). A neuroanatomist assigned anatomical labels to the clusters based on the literature (Duvernoy 1991) (Fig. 6). Most major sulci contained 2--4 sulcal pit clusters. The inferior temporal sulcus (ITS) and the cingulate sulcus (CiS) have long, variable shapes and contained more than 4 sulcal pit clusters. There was only one cluster for some of the sulci (orbital sulcus [OrS], olfactory sulcus, subparietal sulcus [SPS], lateral occipital sulcus, and SF). The areas of sulcal pit clusters were nearly symmetrical in both hemispheres, except for one region. Note that the cluster in STS b was not present in the right hemisphere. The region of this cluster in the left hemisphere was masked and applied to the homologous area in the right hemisphere for asymmetry analyses. The results of the frequency calculation showed that a high percentage of subjects had a sulcal pit within each cluster. 
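The first two asymmetry tests can be illustrated with standard SciPy routines. In the sketch below, the contingency table crosses hemisphere with pit presence or absence (the counts echo the PoCS a example reported in the Results, 114 of 148 subjects on the left versus 87 of 148 on the right), with a fallback to Fisher's exact test when an expected count is below 5; Levene's test is then run on synthetic tangent-plane coordinates. The repeated-measures MANOVA and canonical correlation analysis are not shown, and all data here are stand-ins rather than the study's measurements.

```python
import numpy as np
from scipy import stats

n_subjects = 148
present = np.array([114, 87])                          # left, right (illustrative counts)
table = np.array([present, n_subjects - present]).T    # rows: side; cols: present/absent

chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
if (expected < 5).any():                               # chi-square assumption not met
    _, p = stats.fisher_exact(table)
print(round(chi2, 2), round(p, 4))                     # ~11.3, in line with the PoCS a value

# Asymmetry of spatial variance: Levene's test on one tangent-plane coordinate.
rng = np.random.default_rng(0)
v_left, v_right = rng.normal(0, 2.0, 128), rng.normal(0, 3.0, 81)
lev_stat, lev_p = stats.levene(v_left, v_right)
print(round(lev_stat, 2), round(lev_p, 4))
```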
Almost every subject possessed a sulcal pit within the SF cluster region with 99% and 100% for the left and right hemispheres, respectively. Other cluster regions with a frequency of more than 90% were the right middle frontal sulcus (MFS) a, the superior frontal sulcus (SFS) and postcentral sulcus (PoCS) b, both (left and right) MFS b, the junction between SFS and precentral sulcus (PrCS), the junction between PrCS and inferior frontal sulcus (IFS), CS a and b, intraparietal sulcus (IPS) b, STS c, ITS e, collateral sulcus (CoS) b, OrS, CiS f, SPS, and parieto-occipital sulcus (POS) a. The density of pits in some of these clusters was high, with a high proportion of pits lying within a 5 mm radius. The densities were higher than 70% in the clusters in the left IPS b, right MFS a and PoCS b, and both (left and right) the junction between SFS and PrCS, the junction between PrCS and IFS, CS b, CoS b, and SPS. The frequency and density of all pits are shown in supplementary material. Statistical tests were performed for each cluster region to determine asymmetry. The statistics were conservatively set to a significance threshold of P = 0.001 (O0.05/48) using Bonferroni correction. A v 2 test showed that the number of the pits in the left hemisphere was significantly larger than in the right hemisphere for the regions in PoCS a (left/right: 114/ 87, v 2 = 11.30, P = 0.0008), STS b (60/19, v 2 = 29.03, P < 0.0001) and d (128/81, v 2 = 35.96, P < 0.0001), and calcarine sulcus (CaS) a (115/83, v 2 = 15.62, P < 0.0001). The POS b region had a significantly larger number of pits in the right hemisphere than in the left (35/72, v 2 = 20.04, P < 0.0001). The most significant difference in pit frequency between the hemispheres was seen in the STS. The clusters in the STS also showed significant hemispheric differences in the spatial distribution of the pits. In the STS d, there was statistically significant asymmetry in the spatial variance of the pits in the cluster in the v variable of the local coordinate system (F 1,207 = 14.08, P = 0.0002) and a trend toward significance was shown in the u variable (F 1,207 = 7.52, P = 0.006). The spatial variance of the pits in the right hemisphere was greater than in the left. The distribution of the pits on the local coordinate system is shown in the scatter plot (Fig. 7A). There was no statistically significant asymmetry in the spatial variance in other clusters. The difference in the position of pit distribution between the 2 hemispheres for the STS c region (Fig. 8A) had the highest statistical significance (F 2,133 = 30.03, P < 0.0001) when tested using a repeated-measures MANOVA. This difference between the hemispheres was also statistically significant for the clusters in OrS (F 2,139 = 15.27, P < 0.0001), CiS f (F 2,144 = 7.75, P = 0.001), and POS a (F 2,125 = 23.34, P < 0.0001). A canonical correlation analysis showed that the sulcal pits in the left and right hemispheres were spatially covariant in some of the major sulci. Highly significant spatial correlations were found for the clusters in the CS b (R = 0.430, P < 0.0001), SF (R = 0.569, P < 0.0001), CaS a (R = 0.534, P = 0.0005), and POS a (R = 0.349, P < 0.0001). Discussion We are the first to automatically extract and map sulcal pits across the entire cortical region using the surface model. Our method provides a surface-mesh--based procedure that is not prone to overextraction and therefore appears to detect true sulcal pits more reliably. 
The smoothing and merging criteria operate cooperatively and are important in detecting appropriate sulcal pits. We observed the intrinsic variability of sulcal pits along the sulcal valley in a 2D surface-based coordinate system, reducing the variability due to the 3D position, direction, and length of the sulcus. Methodological considerations are more explained in supplementary material. Spatial Distribution, Frequency, and Density of Sulcal Pits We constructed a group map of sulcal pits from 148 MRI data sets from normal adults and examined their spatial distribution. The major pattern of the distribution and organization of sulcal pits was analyzed by segmenting and identifying sulcal pit clusters associated with the major sulci. Forty-eight clusters were identified in the left hemisphere and 47 in the right hemisphere with most major sulci containing 2 or more clusters. Our results support previous studies in terms of the number and location of such clusters. Based on the anatomical and embryological literature, the CS is considered to consist of 2 primitive folds (Cachia et al. 2003;Regis et al. 2005). Imaging studies extracting putative sulcal roots and volume-based sulcal pits and substructures have described 2 regions in the CS (Lohmann and von Cramon 2000; Cachia et al. 2003;Lohmann et al. 2008). We found more than 95% of sulcal pits in the superior (CS a) and middle (CS b) part of the CS with positions corresponding to the positions of the sulcal roots (Cachia et al. 2003). The sulcal pits were clustered in the inferior part (CS c) of the CS in our map. This may be a consequence of using an advanced surface registration algorithm that matched individual sulcal folding patterns with high accuracy. However, the frequency in the CS c was quite low (less than 30%) and the area of this cluster was small in both hemispheres. The sulcal pit in the inferior CS is not major and may appear as a small fold in a few subjects. The presence of 4 sulcal roots and basins has been reported in the STS (Lohmann and von Cramon 2000;Cachia et al. 2003). Four sulcal pit clusters were present in the left STS, consistent with previous reports. However, we observed only 3 clusters in the right STS. The hemispheric asymmetry in the pattern of sulcal pits is discussed in detail in the following section. Our results are consistent with a previous sulcal root analysis in which the ITS contained 5 sulcal roots (Regis et al. 2005). We found that the PrCS and PoCS contained 3 and 2 pit clusters, respectively, in both hemispheres. This is also consistent with a previous study of sulcal basin extraction (Lohmann and von Cramon 2000). These observations confirm that our sulcal pit extraction and cluster maps are meaningful and interesting. Our pit cluster map can be used to automatically label sulcal pits extracted from other individual brains by matching them with a surface registration template. The frequency and density of sulcal pits showed a consistency between subjects in the appearance and spatial location in each cluster. We found that there were several specific cluster regions showing a high frequency of sulcal pits. According to the radial unit hypothesis, the ventricular zone consists of proliferative units that form a protomap of cytoarchitectonic areas (Rakic 1988). The protomap model proposes that the cells in the embryonic cerebral vesicle carry intrinsic programs for species-specific cortical regionalization (Rakic 1988(Rakic , 2001Miyashita-Lin et al. 1999;Fukuchi-Shimogori and Grove 2001). 
Genetic control has an effect on the protomap and cortical regionalization, and is important in the development and distribution of cortical convolutions (Rubenstein and Rakic 1999;Piao et al. 2004;Rakic 2004). The gyrogenesis theory suggests that areas of rapid growth form gyri at the center of a functional zone, and boundaries between functional areas will tend to lie along the sulcal fundi (Welker 1990). The process of gyrogenesis may underlie the formation of the first major folds during the early stage of radial growth of the cerebral cortex (Hasnain et al. 2001(Hasnain et al. , 2006. Neurons migrate tangentially at later stages of corticogenesis (Rakic 1990). A mechanical folding model based on the differential tangential growth of the inner and outer cortical layers has been proposed for these stages (Richman et al. 1975). The formation of secondary and tertiary sulci may be influenced by this mechanical folding and other chaotic events (Hasnain et al. 2001(Hasnain et al. , 2006. In summary, the early major folds appear to show greater spatial invariance during development as they deepen and have a stronger spatial covariance with functional areas under closer genetic control than later developing sulci. The apparent immobility of the sulcal fundi has been reported (Smart and McSherry 1986) and reproduced using a simulated morphogenetic model (Toro and Burnod 2005). With regard to the high-frequency sulcal pit regions in our study, it appears that the first major folds develop and deepen at similar positions between individuals and retain their identity without merging into other folds. They develop from consistently predetermined and spatially stable functional areas arising from the protomap and powerful gyrogenesis in those areas at an early stage. We suggest that the ontogenetic protomaps of high-frequency regions might generally resemble each other more than those for other regions in human brains. Most sulci contained just 1 or 2 sulcal pit clusters showing a high frequency of more than 90%. We suggest that many sulci develop with 1 or 2 of the early major folds. For example, the major pit cluster in the CiS was present only in the posterior region (CiS f, with 99% frequency in both hemispheres). Other regions that have a low frequency may have variations in regard to the number and location of the first sulcal folds. Alternatively, sulcal pits may disappear during brain development because of the intricate merging of the first folds. There is a possibility that imperfections in our method may miss true sulcal pits. More than 70% of sulcal pits were localized within a 5 mm radius of the densest point in some of the highfrequency clusters. The sulcal pits in the clusters with high density and focal spatial localization (left IPS b, right MFS a and PoCS b, and both (left and right) the junction between SFS and PrCS, the junction between PrCS and IFS, CS b, CoS b, and SPS) appear to be useful as stable anatomical landmarks for improving current brain-image registration and cortical pattern matching. Hemispheric Asymmetries in the Frequency and the Spatial Distribution of Sulcal Pits We found hemispheric asymmetries in the frequency and distribution of sulcal pits. Because deep, early sulcal pits appear to be closely related to functional areas and under genetic control, the asymmetric distribution of sulcal pits may be associated with asymmetric genetic programs and functional hemispheric lateralization. 
The most statistically significant and interesting asymmetries were shown in the STS. For a better insight, we performed a principal component analysis (PCA) of the sulcal pit data on the local coordinate system and plotted them using the 2 axes of the PCA. The first component explained almost all variation of the sulcal pits in the STS c and d because the STS is an elongate sulcus and the distribution of sulcal pits is along the sulcal line (Figs 7B and 8B). We constructed histograms using the data projected onto the first component and confirmed the difference between the left and right hemispheres in the spatial variance of the pits in the STS d (Fig. 7B). This difference was statistically significant when an equality of variance was assessed using Levene's test (F 1,207 = 8.59, P = 0.0038). The sulcal pits in the left STS c distributed in a more anterior region along the sulcal line compared with those in the right (Fig. 8B). A paired t test using the data of the first component showed that the positional asymmetry of the pits in the STS c is statistically significant (t = 7.66, P < 0.0001). The specialization of the left hemisphere for language is one of the earliest and the best-known functional asymmetries. Cortical activation associated with language processing was strongly lateralized to the left superior temporal gyrus (STG) in functional MRI and positron emission topography studies (Karbe et al. 1995;Binder et al. 1997;Schlosser et al. 1998;Tzourio et al. 1998;Balsamo et al. 2002;Bleich-Cohen et al. 2009). In relation to functional asymmetry, structural asymmetries in the STS and its nearby areas (Heschl's gyrus and planum temporale) have been reported (Witelson and Pallie 1973;Penhune et al. 1996;Steinmetz 1996;Good et al. 2001;Watkins et al. 2001;Sowell et al. 2002;Emmorey et al. 2003). The regions of the STS c and d clusters are part of Wernicke's area and the STG is their neighboring gyrus. We suggest that the higher frequency and smaller spatial variance of sulcal pits in the left STS d may be related to the lateralization of language function to the left hemisphere, developing more consistently and strongly than for the right hemisphere. We showed that sulcal pits in the left STS c were located in a more anterior region. Larger functional area and cortical structure for language processing in the left hemisphere may result in the different position of first folds between the left and right hemispheres. The present study cannot fully address whether the asymmetry of sulcal pits is a key biomarker of the lateralization of the language function. The relationship between the positional and frequency asymmetry of sulcal pits, and functional and structural asymmetries should be further investigated in future work. The sulcal pattern analysis in the STS showed that the plis de passage in the intermediate region of the STS was never superficial on the right side and was much more visible on the left (Ochiai et al. 2004). Because the region of intermediate plis de passage looks like the boundary between the STS b and c, the significantly higher frequency in the left STS b that we found is directly related to the previous result of more superficial left intermediate plis de passage. The difference in pit frequency between the hemispheres may be related to the asymmetry in sulcal development. The STS b cluster in the right hemisphere was not included in the original cluster map because of the low frequency and density of sulcal pits. 
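The variance and position comparisons described above reduce to a simple computation: project each cluster's pit coordinates onto their first principal component and compare hemispheres. The sketch below illustrates this with made-up pit coordinates rather than the study's data; the function and variable names are ours, and scipy's `levene` and `ttest_rel` stand in for the statistics reported in the text.

```python
# Illustrative sketch only: synthetic pit coordinates, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def first_pc_scores(points):
    """Project 2D pit coordinates onto their first principal component (PCA via SVD)."""
    centered = points - points.mean(axis=0)
    # The first right singular vector is the axis of largest variance; for an
    # elongated sulcus such as the STS it runs along the sulcal line.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

# Placeholder pit positions for one cluster (e.g., STS d), one row per subject.
left_pits = rng.normal(loc=[0.0, 0.0], scale=[4.0, 1.0], size=(100, 2))
right_pits = rng.normal(loc=[0.0, 0.0], scale=[6.0, 1.0], size=(100, 2))

left_scores = first_pc_scores(left_pits)
right_scores = first_pc_scores(right_pits)

# Equality of spatial variance along the sulcal line (Levene's test).
w_stat, p_var = stats.levene(left_scores, right_scores)
print(f"Levene: W = {w_stat:.2f}, p = {p_var:.4f}")

# Positional shift along the sulcal line, paired across hemispheres per subject.
t_stat, p_pos = stats.ttest_rel(left_scores, right_scores)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_pos:.4f}")
```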
The STS had 3 pit clusters in the right hemisphere, but 4 clusters in the left (Fig. 6). In a previous atlas of the sulci (Ono et al. 1990), the STS was described as being continuous in a third of cases (36% on the left and 28% on the right). The STS was divided into 2 segments in 48% on the right and 32% on the left, into 3 segments in 16% on the right and 16% on the left, and into 4 segments but in only 24% of cases on the left. Our observation of only 3 pit clusters in the right STS is consistent with the lack of division of the STS into more than 4 segments in the right hemisphere. Up to the end of the first postnatal year, the left language regions, including the STS, lag behind the right side in development, perhaps to await speech development (Chi et al. 1977;Toga and Thompson 2003). A recent neuroimaging study in preterm newborns demonstrated that gyral complexity is present in the right STS earlier than in the left (Dubois et al. 2008). In adults, the STS is significantly deeper in the right hemisphere than in the left (Ochiai et al. 2004;Van Essen 2005). We suggest that the first folds formed in the right STS usually appear earlier near the pit of STS c, and then develop into a large sulcal segment or perhaps merge with neighboring folds, resulting in greater depth than for the left STS. Finally, there are fewer than 4 sulcal segments in the right STS (Ono et al. 1990) because of prominent and deep folding in the region of STS c and variations in the development of the STS b and d areas. Although the development of the STS in the left hemisphere lags behind that of the right, fold formation in the STS c region may proceed with neighboring folds, preserving their identity more consistently than on the right side, resulting in the frequent appearance of 4 segments in the left STS (Ono et al. 1990). As in the STS, the frequency of sulcal pits in the PoCS a was significantly greater in the left than in the right hemisphere. Previous structural analysis of the somatosensory area showed the same trend in leftward asymmetry. The area of the left postcentral gyrus significantly exceeded that of the right (Jung et al. 2003). The maturation of the somatosensory fibers was analyzed in infants, and the asymmetry in myelination favoring the left side has been described (Dubois et al. 2009). Higher fractional anisotropy in the left anterior part of the CaS was also shown compared with the right (Dubois et al. 2009). We found that the left anterior CaS (CaS a) had a statistically significant higher frequency of sulcal pits. Hemispheric Spatial Covariance of Sulcal Pits Our finding that the left and right sulcal pits are spatially covariant in the clusters in the 4 major sulci (CS b, SF, CaS a, and POS a) is highly significant. In a previous study investigating the early sulcal emergence in vivo, the interhemispheric fissure, SF, callosal sulcus, POS, CaS, CiS, and CS were the first to be identified in preterm newborns (Dubois et al. 2008). It is worthy of note that all the significant clusters in our spatial covariance analysis belong to the earliest developed sulci. It might provide clues for the developmental process of sulcal folds in the human brain. However, further studies should be performed to clarify its biological meaning. Conclusions We generated a group map on a surface template showing the intrinsic variability of sulcal pits clustered in specific focal areas, with a relatively consistent pattern of spatial distribution between subjects. 
The sulcal pits were automatically extracted from the cortical surface using a robust methodological procedure. Our analyses of sulcal pit distribution and its asymmetric pattern support previous empirical and theoretical studies, and provide additional insights concerning the anatomical and functional development of the brain.
Thermal and Mechanical Analysis of a 72/48 Switched Reluctance Motor for Low-Speed Direct-Drive Mining Applications: In the process of electric motor design, it is essential to predict and provide an accurate thermal and mechanical model. The aim of this research is to improve the thermal and mechanical performance of a 75 kW, 72/48 switched reluctance motor (SRM) for a low-speed direct-drive mining system (pulverizer). Thermal analysis of the SRM requires a deep understanding of the coolant behavior and the thermal mechanisms in the motor. Computational fluid dynamics (CFD) based finite element analysis (FEA) was carried out in order to precisely visualize and estimate the fluid state and temperature distribution inside the motor. Several different coolant configurations were evaluated, with the purpose of determining an appropriate one for uniform temperature distribution in the SRM. The natural frequencies are presented with the developed finite element mechanical and structural model. To suit the mining application, the cooling jacket configuration with 17 channels and the spoke shaft was found to be optimal for the SRM, which may raise the natural frequency and reduce the weight and temperature of the motor. The simulation results showed good agreement with experimental results regarding temperature distribution within the motor. Introduction To optimize the design of electrical machines in mining applications, it has become fundamental to carry out mechanical and thermal analysis along with electromagnetic analysis. Application of a direct drive system has become a strong trend in the last decade as awareness of energy consumption optimization has increased. The switched reluctance motor (SRM) used for pulverizing in mining applications is usually required to deliver high torque at low speed, which typically causes a high temperature rise and unpredictable vibration [1]; if these are not accounted for, they may create the conditions for catastrophic failures and human casualties. The SRM contains no rare-earth permanent magnet material. Consequently, higher currents are required in order to produce high torque. Moreover, these higher currents cause a large amount of losses in the winding. In addition, the magnetically saturated operation of the SRM also leads to higher magnetic losses, which typically increase the temperature of the SRM. This temperature rise may increase the winding resistance and decrease both the efficiency and the insulation lifespan of the motor winding [2][3][4][5][6][7]. Therefore, predicting the temperature field and hotspot positions inside the SRM precisely is needed. In order to keep the operating temperature of the SRM within the permitted limit, a cooling system should be developed [8][9][10][11][12][13][14]. Moreover, mechanical characteristics, including acoustic noise and vibration, are significant factors that impact the operating reliability and service life of the SRM. They are induced by the electromagnetic force within the air-gap between the stator and the rotor. Mechanical vibration is one of the major mechanical faults that cause breakdowns of electric machines.
It is thus exceptionally significant to evaluate the mechanical vibration performance of the SRM to escape mechanical failures [15][16][17][18]. Hence, many researchers have investigated thermal analysis, vibration and acoustic noise analysis of the SRM [12,15,19]. In References [20][21][22] the lumped parameter thermal network (LPTN) analysis model and the finite element method (FEM) were established to compute the temperature of the SRM. LPTN has the advantages of fast computation speed even for thermal transients. However, the accuracy of LPTN is limited by the appropriateness of how the thermal resistance components are set [20]. Convective heat transfer is still the most complex issue which needs a better understanding of fluid flow within the motor [23]. The coupled thermal analysis is made through CFD and FEM gives enough valuable results on the motor temperature rise. Thus, the boundary condition of the equivalent thermal model can be obtained according to the CFD theory [24,25]. The radial force between rotor and stator is one of the significant sources of the vibration in the SRM. The study of vibration can be basically partitioned into two categories of time and frequency domain techniques [26]. In Reference [27] five approaches were employed to determine motor natural frequencies. The calculation of motor natural frequencies is necessary for the vibration of SRM [17]. Using a simulation environment ANSYS-Workbench, the acoustic noise and vibration created by the transient electromagnetic forces on the stator of an electric machine were presented previously [28]. However, the water jacket (WJ) optimization design has not been intensely studied and requires further research [2]. High temperatures caution the designer to minimize the losses, modify the cooling type, or optimize the machine. SRM thermal performance is generally influenced by two components. First is the sum of heat sources inside the SRM produced from losses. The second is how well the produced heat can be dissipated out from the SRM. In this paper, the losses of the SRM are evaluated utilizing the FEM to assist thermal analysis. The water cooling system is a very efficient method of transferring heat away from the 75 kW SRM for a high-torque low-speed on mining applications in harsh conditions. Meanwhile, the WJ of SRM not only undertakes the responsibility of heat dissipation but also has benefits of the structure of mechanical protection. Commonly in SRM, both the stator and the rotor employ the doubly salient teeth structure, leading to electromagnetic forces in the air gap when the phase current is excited. The rotor and stator are under electromagnetic forces that can change the air gap between them. The change of the gap affects the electromagnetic fields between a rotor and a stator; so, the calculation of vibration amplitudes is important and considered in this paper. So, it is essential to anticipate the natural frequencies in order to design a quiet SRM or to escape working the motor near the resonant frequency during drive operation [15][16][17][18][19]. For the goal of the reliable and stable operation of the SRM in a mining application, the thermal and mechanical design of the WJ of the SRM is discussed in this paper. The temperature rise in the SRM is analyzed in this paper. Three-dimensional CFD was established using ANSYS-Workbench to determine the temperature of each point of the SRM, which helps to find the hot spot. The motor temperature distribution under rated load was determined. 
Moreover, the natural frequencies of the prototype were computed using 3D finite element analysis (FEA). The tangential and radial forces from the air gap flux densities were calculated. Then, the radial forces were applied to the 72/48 SRM for a harmonic analysis using 3D FEA so that the total deformation of the machine could be visualized and analyzed. The temperature rise experiment was performed on a motor prototype. 3D Thermal Model The thermal distribution of the SRM provides knowledge to the designer. The SRM considered in this paper has an outer water-cooled peripheral casing structure. Being water-cooled, the SRM for mining applications utilizes a closed structure. When the motor is working, most of the heat generated within the stator and rotor is passed to the housing through heat conduction, and the water flowing in the waterways then dissipates the heat from the housing. The following assumptions are made to simplify the model: 1. The insulating material completely covers the windings; 2. The stator and the rotor iron losses remain constant with uniform heat generation; 3. Thermal conductivities of motor materials are isotropic. Based on these assumptions, the structure of the 72/48 SRM considered in this paper is shown in Figure 1. The properties of the isotropic material for each motor part are given in Table 1. The list of the geometrical dimensions is described in Table 2. Thermal studies of electric motors hold great significance as all of the materials employed are sensitive to heat. There are three methods by which heat can transfer: conduction, convection, and radiation. Heat transfer by conduction is heat transferring through direct contact of materials, whereas heat transfer by convection is heat transferred by a fluid or gas. Heat transfer by radiation is heat energy traveling as electromagnetic waves. For isotropic media, the steady-state heat transfer governing equation and its boundary condition can be presented as [15] ∇·(k∇T) + q = 0, with −k ∂T/∂n = h_cv (T − T_amb) on convective boundaries, where k is the thermal conductivity [W/(m·°C)], T_amb and T are the ambient and unknown temperatures [°C], q is the volumetric heat generation rate, and h_cv is the convective heat transfer coefficient. To estimate the temperature distribution, the heat transfer coefficient and the total losses in the SRM are required inputs. The heat sources leading to the temperature rise of the motor come from all losses of the motor. These losses consist of the copper, core, and mechanical losses. Convective heat transfer helps to cool the motor. Investigation of Losses (Heat Sources) Understanding the power losses associated with an electrical machine, their behavior, and their distribution is of paramount importance when evaluating machine performance and thermal distribution. The losses in electrical machines can be classified by the spatial distribution of heat sources. The calculations of the copper, core, and mechanical losses for the SRM are presented in References [29][30][31]. The specific core loss P_Fe in [W/m³] can be expressed by the Bertotti equation [31]. The core loss is separated into three terms, P_Fe = P_h + P_e + P_ex, where P_h, P_e, and P_ex are the hysteresis loss, eddy current loss, and excess loss, respectively. The manufacturer of the magnetic sheets provides the value of iron loss in [W/kg] for given values of magnetic flux density (B) and frequency (f). Based on curve fitting techniques, the loss coefficients (k_h, α, and k_ex) can be identified. These coefficients, which are used to compute the iron losses, are described in Table 3 for the ferromagnetic material M19-29 gauge steel.
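As a rough illustration of the Bertotti loss separation described above, the sketch below evaluates the three loss terms at the fundamental electrical frequency. The coefficient values are placeholders for illustration only, not the fitted M19 values of Table 3, and the exact exponent conventions vary between references.

```python
def bertotti_core_loss(b_peak, f, k_h, alpha, k_c, k_ex):
    """Specific core loss [W/kg] at peak flux density b_peak [T] and frequency f [Hz].

    Hysteresis + classical eddy-current + excess terms (one common form of the
    Bertotti separation; coefficient units follow from this form).
    """
    p_hyst = k_h * f * b_peak ** alpha        # hysteresis loss
    p_eddy = k_c * (f * b_peak) ** 2          # classical eddy-current loss
    p_excess = k_ex * (f * b_peak) ** 1.5     # excess (anomalous) loss
    return p_hyst + p_eddy + p_excess

# Example: 84 Hz fundamental (105 rpm, 48 rotor poles), B = 1.5 T, placeholder coefficients.
loss = bertotti_core_loss(b_peak=1.5, f=84.0, k_h=0.018, alpha=1.8, k_c=8.0e-5, k_ex=4.0e-4)
print(f"specific core loss ~ {loss:.2f} W/kg")
```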
Moreover, the B-H curve and the measured manufacturer's core loss curves of M19-29 gauge steel are shown in Figure 2. The ANSYS-Maxwell package was utilised to develop a loss model for the prediction of the different loss components of the 72/48 SRM. To obtain the electromagnetic characteristics of the SRM, a 2D FEM model was developed. It evaluates the core losses in the SRM fed by an asymmetrical half-bridge power converter circuit and control circuit, which is modelled in SIMPLORER as presented previously [1]. The converter is powered by a 510 Vdc power supply. The current chopper control of phase A is performed with turn-on and turn-off angles of 3.87° and 6.37°, respectively; the sequence of phases B and C has been reported previously [1]. The average electromagnetic torque is 7.28 kNm, obtained at a DC current of 174 A. The core loss distribution in the stator and rotor at a rotor position of 6.28°, rated speed of 105 rpm, and rated load can be seen in Figure 3a. The copper losses were specified as functions of the stator DC current with a winding resistance per phase of 0.217 Ω, as shown in Figure 3b. In summary, the main losses in different portions of the 72/48 SRM are listed in Table 4. Heat Transfer Coefficient Boundary Conditions In References [6,15,32,33], natural and forced convection were predicted from dimensionless empirical heat transfer correlations of the form Nu = f(Gr, Pr) and Nu = f(Re, Pr) (Equations (3) and (4), respectively), where Nu, Gr, Pr, and Re refer to the Nusselt, Grashof, Prandtl, and Reynolds numbers, respectively. The heat transfer coefficient (h_cv) of the boundary conditions is presented in Equation (1); the following boundary conditions were applied to the SRM model. All coefficients λ_ef, λ_eq, h_o, h_f, and h_cv are specified as boundary conditions. The steady-state and transient thermal characterization was then implemented. The Effective Thermal Conductivity of the Air Gap Rotor rotation draws outside air into the air-gap between the rotor and the stator from the end side of the motor, resulting in convection between the rotor and the stator. The thermal field is coupled to the fluid field by the heat transfer between the rotor and the stator through the air-gap. This heat transfer is further influenced by the slotting and the surface roughness of the rotor and the stator. All of these make calculating the temperature field even more difficult. Recognizing that, the effective thermal conductivity λ_ef has been investigated [34][35][36]. The Reynolds number Re_g in the air gap and the critical Reynolds number Re_c are Re_g = ω_r δ / υ and Re_c = 41.2 √(R_si / δ), where ω_r is the peripheral speed of the rotor, 4.387 m/s; n is the motor speed, 105 rpm; δ is the air-gap length, 0.001 m; υ is the kinematic viscosity of air, 1.5 × 10⁻⁵ m²/s; and R_si is the inner radius of the stator, 0.4 m. We then calculated Re_g and Re_c as 292.4 and 824, respectively, for our model. Since the air flow is laminar, the heat transfer mode is mainly conduction, and λ_ef can be treated as being equal to that of air, λ_air. This means heat transfer from the stator to the rotor occurs as conduction. However, we propose using a shaft structure with spokes (ducts) to cool the rotor. Equivalent Model of Stator Windings The complex geometry of the stator winding and insulation material in the stator slot makes it practically difficult to model the actual physical geometry. This difficulty can be addressed by treating all insulating materials in the slots as a single insulating layer attached to the slot wall [5].
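Before moving on to the slot model, the air-gap flow-regime check described above can be reproduced numerically. The sketch below uses the values quoted in the text; the variable names are ours and the rotor radius is approximated as the stator bore radius minus the gap.

```python
from math import pi, sqrt

n_rpm = 105.0           # rated speed [rpm]
delta = 0.001           # radial air-gap length [m]
nu_air = 1.5e-5         # kinematic viscosity of air [m^2/s]
R_si = 0.4              # stator inner radius [m]
R_rotor = R_si - delta  # approximate rotor outer radius [m]

omega = 2.0 * pi * n_rpm / 60.0   # rotor angular speed [rad/s]
v_peripheral = omega * R_rotor    # rotor peripheral speed [m/s], ~4.39 m/s

re_gap = v_peripheral * delta / nu_air   # gap Reynolds number, ~292
re_crit = 41.2 * sqrt(R_si / delta)      # critical Reynolds number, ~824

print(f"peripheral speed = {v_peripheral:.3f} m/s")
print(f"Re_g = {re_gap:.1f}, Re_c = {re_crit:.1f}, laminar: {re_gap < re_crit}")
```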
Thus, an equivalent model of the stator slot can be shown in Figure 4. The equivalent thermal conductivity λ eq of the slot insulation can be represented as: where λ i (i = 1, 2, 3, ...) is the average thermal conductivity of each insulated layer, and δ i (i = 1, 2, 3 , ...) is the equivalent thickness of each insulated layer. Heat Transfer Coefficient between External Frame and Ambient The thermal resistance between the housing and ambient of the totally enclosed WJ with no fan is often the largest single resistance between winding and ambient. It hence represents a vital role in the precision of the thermal performance of the motor. It can be described as [32]: where A s is the surface area, and h 0 is the effective heat transfer coefficient due to natural convection and radiation. Forced Convection Heat Transfer Coefficient between End Winding and End-Caps In Reference [32], the heat transfer coefficient (h f ) was obtained between end winding and end-caps as a performance of the inside air speed, as: The Heat Transfer Coefficient of Water Jacket Considering the larger of the motor losses produced in the stator portion, the cooling type of totally-enclosed water-cooled is used in the SRM for better cooling. The stator core of the SRM is enfolded by the WJ with a spiral type water channel, as shown in Figure 5. A partial sectional view of the 3D thermal model is shown as Figure 5 including stator and rotor core, shaft spoke, stator windings and housing with water channels. The cooling system removes heat in order to maintain SRM operation within the desired temperature range. The cooling system has a direct influence on the process and service life of the SRM. Presume all the losses produced in the SRM can be dissipated by the cooling water according to Newton's law of cooling [2]. Hence, the maximum wall temperature of the WJ can be given as: Moreover, the expression of the total quantity of heat generated in the SRM, that is lost due to the cooling water as given as: The product of h cv and A s surface heat transfer area can be determined as h cv A s convection heat transfer factor for estimating the cooling capability. It is obvious that the bigger h cv A s , the smaller the temperature-rise occuring in the SRM under the condition that the total losses of the SRM are constant. where H, W are the height and width of the water channel. The h cv strongly depends on Re. Thus, the h cv A s value of WJ can be estimated based on empirical formulations for laminar Re < 2300, for transition 2300 < Re < 4000 and for turbulent Re > 4000 flow respectively as [32,36,37]: The surface heat transfer area for rectangular channels is defined as: where N is series number of the water channel in the WJ. The thermal entrance length can be estimated from [38]: When fluid flows inside a pipeline, the friction occurs between the moving fluid and the stationary pipe wall. Fluid head losses are commonly the outcome of two mechanisms: friction along the pipe walls, head losses alongside the channel wall are called friction losses due to friction, and losses due to turbulence within the bulk fluid are called minor losses, which can be calculated by the Darcy-Weisbach equation respectively. where ξ and ζ are the frictional and local head loss coefficient, which depend on Re and relative roughness. They are very important since the V velocity and h cv of the WJ is influenced by these design parameters. 
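A minimal sketch of the channel-side estimate outlined above is given below: the hydraulic diameter of a rectangular duct, the channel Reynolds number at a fixed volumetric flow rate, and a Nusselt correlation for the convection coefficient. Dittus-Boelter is used here as a generic stand-in for the empirical correlations cited in the text, the laminar branch uses a constant-Nu approximation, the transition regime is not treated, and the channel dimensions are placeholders rather than the values of Table 6.

```python
def channel_hcv(flow_lps, height_m, width_m,
                rho=997.0, mu=8.9e-4, k_water=0.6, pr=6.1):
    """Return (Re, h_cv [W/m^2.K]) for water in one rectangular cooling channel."""
    area = height_m * width_m                              # cross-section [m^2]
    d_h = 2.0 * height_m * width_m / (height_m + width_m)  # hydraulic diameter [m]
    velocity = (flow_lps / 1000.0) / area                  # mean velocity [m/s]
    re = rho * velocity * d_h / mu
    if re < 2300.0:
        nu = 4.36                        # fully developed laminar, constant heat flux
    else:
        nu = 0.023 * re**0.8 * pr**0.4   # Dittus-Boelter, turbulent
    return re, nu * k_water / d_h

# Example: the full 0.6 L/s passes through a spiral channel 40 mm high x 50 mm wide.
re, hcv = channel_hcv(flow_lps=0.6, height_m=0.040, width_m=0.050)
print(f"Re = {re:.0f}, h_cv = {hcv:.0f} W/m^2.K")
```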
Figure 6 shows the investigation into the influence of the series number of the water channel on the hydraulic diameter of the WJ, for N in the range (1 to 21), at a constant height of the water channel. It is evident that the hydraulic diameter of the WJ decreases, when the N of the water channel increases. Moreover, the width of the water channel is defined as: where S is space between channels, W t is the total width of WJ. As can be seen in Figure 8d, the WJ has a height of more than 40 mm and (N) may be less than 21 channels. The total water head loss can exceed the constraint (10 kPa) at the volumetric flow rate of 0.6 L/s, when (N) is greater than 17. Furthermore, as shown in Figure 8c, the range of h cv for a number of water channels (1 to 21) can be identified. The proposed structure of the 72/48 SRM in Figure 1 shows the WJ has one inlet and one outlet configuration, on two different sides. This means that the WJ must have a configuration with odd channels. Thus, the approximate range of h cv for odd channel (N) is presented in Table 5. From Table 5, the convection heat transfer coefficient (h cv ) value for 17 channels is the highest realistic value, whilst in 19 channels the fluid loss would be significantly higher. For comparison, it can be seen that the minimum of h cv for 17 channels is approximately twice and quarter of the value of 9 and 5 channels, respectively. For this reason, 5, 9 and 17 channels are chosen, also 1 channel is also chosen as a benchmark. Based on previous descriptions and calculations, the limitations of configuration dimensions of WJ are proposed and presented in Table 6. The four proposal structures are simulated using ANSYS-CFD. The (h cv ) at various convective surfaces inside the motor for the four proposed structures of WJ were obtained using the CFD method. Thermal Analysis Methods Thermal analysis methods of electrical machines can be classified into two essential types: numerical and analytical lumped-circuit techniques. Numerical analysis is an attractive approach, as very complex geometries can be modelled, hence, the heat transfer can be accurately determined. There are two types of numerical analysis: the FEA and the CFD. The CFD has the benefit that it can be utilized to predict flow in multiple regions, as around the motor end windings. Numerical Analysis-CFD Analysis for Water Jacket The CFD model was applied to studying the convective cooling inside the 72/48 SRM due to the flowing water, therefore calculating h cv for the water jacket (WJ). The 3D CFD is the best alternative for the flow process in an electrical machine. The 3D numerical simulations of CFD mode for four WJ configurations with different numbers of channels were performed. The geometrical modelling was carried out on an ANSYS-Design-Modeler, the unstructured, and structured grids are used in the paper. The SRM was meshed by a structured grid due to complex structures. There were 27,265,153 mesh cells in the simulations. With the use of the spiral WJ type, the coolant flows along the cylinder vertically, the coolant enters the cylinder from the bottom side and flow out from the head of the opposite side. The coolant used in this calculation was water at a temperature of 26 • C and the ambient temperature was 26 • C. The convection heat transfer coefficient between the motor's outside surface and the air is 5 W/m 2 • C. 
The inlet and outlet boundary conditions of the coolant are defined as a mass flow inlet (0.6 L/s) at a temperature of 26 °C and a pressure outlet, respectively. The wall boundary conditions of the solid regions, including the stator, rotor, and winding, were set to heat flux boundary conditions in our model. The winding's heating power was 6560 W, and the stator and rotor core heating powers were 1445 W and 650 W, respectively. The losses of the stator and the rotor, together with the winding copper loss, were mapped directly to the model. For the application described in this research, the k-ε model was used for its robustness and simplicity. The WJ is widely used for high-power induction motors (IM) and interior permanent magnet synchronous motors (IPMSM) [39,40]. Based on the same principle, an analytical method and a numerical method were used in this work to obtain the machine's temperature rise. The temperature rise for the 72/48 SRM was obtained under the rated load and the rated speed. Figure 9 shows the 3D model structure of the WJ configurations with four different numbers of channels: 1, 5, 9, and 17. Moreover, the direction of the arrow shows the direction of water flow. Figures 10 and 11 show the 3D models' steady-state temperature profiles of the WJ configurations with 1, 5, 9, and 17 channels for a water flow rate of 0.6 L/s. The highest temperature zone inside the SRM was at the stator winding close to the outlet, towards where the fluid exits the cooling jacket. The rate of heat transfer is directly proportional to the temperature difference, and as the water flows from the inlet towards the outlet it absorbs heat, which increases its temperature. This leads to smaller temperature differences towards the end and hence a decreasing amount of heat transfer as the water flows toward the outlet, resulting in the highest temperature zone towards the end of the coolant flow loop. Figures 10a and 11a show that the outer portion of the stator winding displays an asymmetric temperature distribution. The temperatures of the end-winding sides near the inlet and the outlet are 78 °C and 94 °C, respectively. This situation can be explained by the water flow behavior in the casing of the water jacket with 1 channel. However, the temperature field is still non-uniform and asymmetric in the rotor. There is little difference between 5 and 9 channels; the temperature field is still asymmetric in the stator winding, as shown in Figures 10b,c and 11b,c. The high temperatures in the stator winding decrease as N increases; when N becomes greater than 17, the lowering of the temperature becomes less effective. When the number of channels is 17, the temperature field is uniform and symmetric in the stator, rotor, and winding; at the same time, the heat dissipation of the cooling system is improved. The result is shown in Figures 10d and 11d. Table 7 includes a comparison between the heat transfer coefficients h_cv obtained by the numerical and analytical solutions. In this case, the numerical coefficients are lower than the analytical results. Clearly, agreement is found between the two sets of results, confirming that the numerical models are effective; the former (numerical) results are more accurate than the latter. The increase in the number of channels at a fixed water flow rate increases the average velocity of the water over the SRM surface, resulting in an increase of h_cv, as shown in Figure 12.
This results in a lowering of the overall temperature field and the maximum temperature inside the SRM. For a constant flow rate, the mean velocity of the coolant is directly proportional to the number of channels, resulting in an increase in the jacket surface friction losses. The downside of increasing the number of channels is that it increases the SRM pumping power. The economic aspects of the pump capacity required for coolant circulation should be considered as well [39]. Therefore, we recommend the WJ configuration with 17 channels. The structure of the shaft with spokes (ducts) for cooling the rotor is presented in Figures 10 and 11. The boundary condition of the part of the shaft that is affected by the hole in the 3D model is applied with a convection heat transfer coefficient of 5 W/(m²·°C) on its surfaces. Figure 13 shows the 3D temperature profile of the WJ configurations with two different shaft structures, a spoke shaft and a solid shaft. The benefit of using the spoke shaft is that it reduces the temperature rise of the shaft and rotor. Moreover, it lowers the overall temperature field inside the SRM, as shown in Figure 13. Mechanical Analysis The vibration performance of the SRM depends on two features: the radial force and the SRM structural characteristics. The frequency of a natural vibration in an elastic body is called the natural frequency. There are many reasons to calculate the natural frequencies and mode shapes of a prototype SRM structure. One reason is to avoid the resonance problem. In the SRM, resonance occurs if the phase frequency or its harmonics coincide with an SRM natural frequency. The phase frequency is given as F_p = ω_m P_r / (2π), where ω_m is the speed [rad/s] and P_r is the number of rotor poles. Vibration is at a maximum when any harmonic frequency F_n = nF_p coincides with an SRM natural frequency. In References [1,15], an expression is given for the first mode frequency ω²_fm of the SRM in terms of D_o and b_sy, the outer diameter of the stator lamination stack and the stator yoke thickness, γ, the Poisson ratio, E, the modulus of elasticity, and ρ, the mass density of the lamination material. The radial and tangential forces are the components of the air-gap electromagnetic force. The useful forces are the tangential ones; they produce the torque. The radial force, caused by the interaction of the doubly salient teeth while a phase winding is excited, leads to vibration of the motor [17,18,41]. Based on the Maxwell stress tensor, the tangential force density F_t and the radial force density F_r, the latter being the major cause of mechanical deformation in the motor, can be written as F_t = B_r B_t / μ_0 and F_r = (B_r² − B_t²) / (2μ_0), where B is the flux density, n is the unit normal vector, B_r and B_t are the radial and tangential components of the flux density in the air gap, respectively, and μ_0 is the permeability of free space. A 2D finite element model of the SRM was developed to obtain the waveforms and the Fast Fourier Transform (FFT) of the radial force, as shown in Figure 14. The developed SRM has a 72/48 pole combination; thus, the fundamental component of the phase current is 84 Hz at 105 rpm. The SRM was simulated with a 174 A DC current at 105 rpm, producing a torque of 7.28 kNm; the peak magnitude of the radial force was 1.46 kN around the fundamental operating frequency. The numerical model of the 3-phase 72/48 SRM was analyzed using the 3D FEA model to obtain the natural frequencies of the motor with two different shaft structures. The comparison between the results of these two structures is presented in Table 8.
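The resonance condition described above (a harmonic of the phase frequency landing on a natural frequency) is easy to screen for. The sketch below uses the 84 Hz phase frequency of the 72/48 machine and the spoke-shaft mode frequencies quoted in the text as example inputs; the tolerance value and helper names are ours.

```python
def phase_frequency(n_rpm, rotor_poles):
    """Fundamental phase excitation frequency F_p [Hz] of an SRM."""
    return n_rpm / 60.0 * rotor_poles

def near_resonance(natural_freqs, f_phase, n_harmonics=12, tol=0.05):
    """Return (harmonic, excitation frequency, natural frequency) triples within tol."""
    hits = []
    for n in range(1, n_harmonics + 1):
        f_exc = n * f_phase
        for f_nat in natural_freqs:
            if abs(f_exc - f_nat) / f_nat <= tol:
                hits.append((n, f_exc, f_nat))
    return hits

f_p = phase_frequency(105.0, 48)           # 84 Hz for the 72/48 SRM at 105 rpm
modes_hz = [516.0, 748.64, 859.0]          # example natural frequencies from the text
print(f"phase frequency = {f_p:.0f} Hz")
for n, f_exc, f_nat in near_resonance(modes_hz, f_p):
    print(f"harmonic {n}: {f_exc:.0f} Hz lies within 5% of the mode at {f_nat:.1f} Hz")
```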
It must be mentioned here that these natural frequencies vary depending on the boundary condition and the structure of the housing. Besides, the spoke shaft is much more weight-efficient than a solid shaft. This explains why the natural frequencies of a spoke shaft are greater than that of a solid shaft, as can be seen in Table 8. Figure 15 shows the total deformation of the 72/48 SRM with two different structures of the shaft at different frequencies. It can be observed that the minimum deformation is at the support points of the housing, and it increases gradually to reach a maximum at the top side of the housing. Figure 15 also shows that the total deformation of the SRM with a spoke shaft is larger than the deformation of SRM with a solid shaft. The minimum\maximum total deformations for the two shaft structures at various natural frequencies are clearly given in Table 9. Table 9. A comparison of the total deformation for the SRM with spoke and solid shaft at various natural frequencies. The radial forces determined from the electromagnetic FEA were entered into the mechanical FEA. Full mode harmonic analysis methods were applied to perform the analysis. The frequency range was specified as 0 to 1 kHz with 25 solution intervals for the frequency resolution of 41.7 Hz. The directional profile of deformation and acceleration were carried out with two different structures of the shaft and the results were shown in Figures 16-19, respectively. It is worth mentioning that the peaks directional deformation and acceleration emerged as natural frequencies mode-2 (516 Hz) and mode-5 (859 Hz) for the SRM with spoke shaft as shown in Figures 16 and 17. However, mode-3, which is 748.64 Hz, given in Table 8, is damped by the stator structure. Similarly, Figures 18 and 19 show the prototype of the SRM with the solid shaft; it had peak directional deformation and acceleration emerges from natural frequencies mode-2 (412 Hz) and mode-5 (463 Hz). The peak values of directional deformation and acceleration for a spoke shaft were greater than that of a solid shaft. However, those values occurred to a higher frequency, about six times that of the phase frequency (84 Hz). Experimental Setup To validate this proposed study, the target experiment with a suitable approach was carried out. The measurement results were compared with the CFD results to verify the previous calculations. The water jacket with 17 channels of SRM prototype was manufactured, to confirm the CFD computation methods. Thermal testing involved inserting platinum resistance thermometer sensors (PT100) into the selected parts of the machine. The temperature rise of the stator and three ends winding were measured using five temperature sensors PT100. A PT100 sensor with an accuracy of type A ± (0.15 + 0.002T) was used. Figure 20 shows the positions of PT100 in the prototype SRM. The location of each PT100 sensor is described in Table 10. The five measuring points were connected to the temperature monitor for temperature recordings. An electric pump pushed the coolant through the water jacket. The flow meter was used to monitor the coolant flow rate of the inlet port, and the flow rate of water was adjusted to 0.6 L/s at all times. The temperature rise of the outlet port was measured using an infrared radiation thermometer. The SRM was operated under a loaded condition at the rated speed of 105 rpm. 
Furthermore, the continuous output phase (DC) current rating of the power converter was 174 A, and a DC bus voltage of 510 V was used to energize the SRM. The experimental platform for 72/48 SRM is developed and shown in Figure 21 The load was carried by the disc, which was coupled to the SRM shaft and adjusted by dual brake calipers. Furthermore, by using a three-phase power quality analyzer (Fluke 434), a force sensor, an optical digital tachometer, and an oscilloscope with a Hall sensor, it was possible to monitor the SRM parameters, such as output power, load, current and speed, as shown in Figure 21b. The ambient temperature was 26 • C during the experiment. The temperatures of five measuring points were recorded every 5 min using a temperature monitor until it reached a steady state. The temperature curves for various motor parts, underrated operating, are shown in Figure 22. The temperature of the components inside the motor was stable after 55 min. The temperature of the sensor measured points is listed in Table 11, and CFD values were compared. From the data in Table 11, it is clear that the simulation temperatures results are in close agreement with the experimental results and the accuracy of the simulation results is proven. Measurement records show that the highest temperature of the SRM winding ends was 78 • C. Compared with the simulation results of 77.5 • C, the temperature difference was 0.5 • C, so the simulation results were credible. The measurement of the natural frequencies was done by a hammer impact test. Figure 23 shows the diagram of the 72/48 SRM test bench for the impulse hammer excitation. The experiments used an impact hammer (PCB 086C01) combination with the accelerometers. The first six natural frequencies of the SRM are shown in Table 12, and FEM values are compared. Of all the errors in the simulation, the natural frequencies are within 4%. The errors may be due to the measurement instrument and the complexity of the prototype structure. The accuracy of the FEM is verified by experimental test. Conclusions A thermal and mechanical analysis of the switched reluctance motor (SRM) for mining applications was discussed in this paper. Several case studies of water jacket configurations were carried out to determine an appropriate method for ensuring uniform temperature distribution in the proposed model. A 2D finite element model for the SRM was developed to obtain the losses and radial force. The cooling jacket configurations with 17 channels at a flow rate of 0.6 L/s were found to be optimal for a 75 kW, 72/48 SRM. The simulation results showed that the highest temperature point was 77.5 • C, located inside the SRM stator winding ends. The natural frequencies were calculated with the developed finite element mechanical, structural model. The 3D geometry of the SRM was modeled to obtain the vibration characteristics of the motor under free vibration for modal analysis, as well as the forced vibration response as radial forces for harmonic analysis. The structure of the shaft with a spoke was proposed for a 72/48 SRM; the advantages of the structure caused the natural frequency to rise and reduced the weight and temperature of the SRM prototype. The experimental and computational fluid dynamics (CFD) results were compared and analyzed. The simulation results were in close agreement with the experimental results and the accuracy of the simulation results was proved. Conflicts of Interest: The authors declare no conflict of interest.
B6eGFPChAT mice overexpressing the vesicular acetylcholine transporter exhibit spontaneous hypoactivity and enhanced exploration in novel environments Cholinergic innervation is extensive throughout the central and peripheral nervous systems. Among its many roles, the neurotransmitter acetylcholine (ACh) contributes to the regulation of motor function, locomotion, and exploration. Cholinergic deficits and replacement strategies have been investigated in neurodegenerative disorders, particularly in cases of Alzheimer's disease (AD). Focus has been on blocking acetylcholinesterase (AChE) and enhancing ACh synthesis to improve cholinergic neurotransmission. As a first step in evaluating the physiological effects of enhanced cholinergic function through the upregulation of the vesicular acetylcholine transporter (VAChT), we used the hypercholinergic B6eGFPChAT congenic mouse model that has been shown to contain multiple VAChT gene copies. Analysis of biochemical and behavioral paradigms suggest that modest increases in VAChT expression can have a significant effect on spontaneous locomotion, reaction to novel stimuli, and the adaptation to novel environments. These observations support the potential of VAChT as a therapeutic target to enhance cholinergic tone, thereby decreasing spontaneous hyperactivity and increasing exploration in novel environments. Introduction Cholinergic neurotransmission plays key roles in the central and peripheral nervous systems (Woolf and Butcher 2011). Cholinergic impairments in neurodegenerative diseases, especially in Alzheimer's disease (AD), have led to the development of several cholinergic-based therapeutic strategies. Aside from the well-characterized role of acetylcholine (ACh) in cognitive functions such as learning and memory, ACh availability has been shown to contribute to a number of physiological and behavioral functions including peripheral motor function (Ribeiro et al. 2006;Woolf and Butcher 2011) and locomotor activity (Di Chiara et al. 1994;Martins-Silva et al. 2011;Woolf and Butcher 2011). Specifically, clinical assessments and experimental models of AD revealed that decreased cholinergic tone can cause spontaneous hyperactivity including increased restlessness, coupled with increased anxiety in novel environments (Ognibene et al. 2005;Piccininni et al. 2005;McGuinness et al. 2010;Sterniczuk et al. 2010b;Bedrosian et al. 2011;Walker et al. 2011). Therefore, strategies to modify cholinergic tone may provide a means to regulate both spontaneous and novelty-induced locomotion (Mega et al. 1999). Cholinergic neurotransmission is maintained through the appropriate synthesis, vesicular packaging, and release of ACh. Choline, sequestered through the high-affinity choline transporter (CHT), is transacetylated via the enzymatic activity of choline acetyltransferase (ChAT) and the precursor acetyl-coenzyme A (reviewed in Blusztajn and Wurtman 1983). Newly synthesized ACh is packaged into synaptic vesicles by vesicular acetylcholine transporter (VAChT) prior to its release to the synaptic cleft (Parsons 2000). Genetic targeting has been used to create mouse models presenting deficiency in one or more cholinergic compo-nents, including VAChT de Castro et al. 2009a;Guzman et al. 2011;Martins-Silva et al. 2011), ChAT (Misgeld et al. 2002;Brandon et al. 2004), CHT (Bazalakova et al. 2007), acetylcholinesterase (AChE) (Volpicelli-Daley et al. 2003) or through the modified expression of ACh receptors (Picciotto et al. 2000;Wess et al. 2007;Drenan et al. 
2008, 2010). Until recently, most animal models of cholinergic enhancement have been limited to the pharmacological inhibition of ACh degradation in the synaptic cleft. The previously characterized B6eGFPChAT mouse model (Tallini et al. 2006; Nagy and Aubert 2012) allows for the evaluation of whether increasing the vesicular storage and release of ACh is sufficient to elicit changes in behavioral activity. B6eGFPChAT mice have four genomic copies of the cholinergic gene locus, which contains the VAChT and ChAT promoter and coding regions (Eiden 1998; Tallini et al. 2006; Nagy and Aubert 2012). In these mice, the transcription of transgenic ChAT is terminated and replaced by the enhanced green fluorescent protein (eGFP), while the transcription of the VAChT transgene remains operational. As such, VAChT is overexpressed in cholinergic neurons, while levels of ChAT, CHT, and AChE are maintained (Nagy and Aubert 2012). Here, the behavior of B6eGFPChAT mice was assessed in a panel of tests designed to elicit a variety of central and peripheral responses. We found that B6eGFPChAT mice have enhanced spontaneous activity and novelty-induced exploration. The results of this study support the notion that modulating VAChT levels modifies behavioral activity, highlighting the importance of ACh vesicular storage in the regulation of cholinergic neurotransmission and function. Animals For all studies, congenic male B6.Cg-Tg(RP23-268L19-EGFP)2Mik/J (B6eGFPChAT; Jackson Laboratories, Bar Harbor, ME) mice homozygous for the RP23-268L19-EGFP transgene were compared with sex- and age-matched B6 controls. Separate cohorts of animals were used for the biochemical, immunohistological, and behavioral studies. For the behavioral panel, B6eGFPChAT (N = 11) and B6 (N = 9) mice were between 124 and 126 days of age upon entry to this study, housed under identical conditions, and exposed to regular handling prior to and during the study. The behavioral panel was conducted sequentially in the following order: open field (Days 1-5), peripheral motor function (Day 9), rotarod (Days 10-11), dark/light box (Day 18), and elevated plus maze (Day 48). A subset of this cohort (N = 8 per genotype) was used for calorimetry. Presence of the transgene was confirmed using conventional polymerase chain reaction (PCR) and primers as previously described (Tallini et al. 2006), and by the expression of eGFP observed during immunofluorescence microscopy protocols. All animal protocols were approved by the Animal Care Committees of Sunnybrook Research Institute and the University of Western Ontario, and experiments were performed according to the guidelines set by the Canadian Council on Animal Care and the Animals for Research Act of Ontario. To quantify the relative amount of protein expression, blots were stripped and reprobed with antibodies against GAPDH (H86504M, Meridian Life Science, Memphis, TN) for 1 h, followed by a horseradish peroxidase-conjugated secondary antibody for an additional hour. Signal intensities were analyzed using GeneTools software (Syngene, Frederick, MD) and normalized to GAPDH. The relative amounts of VAChT, ChAT, and CHT protein in B6eGFPChAT tissue homogenates were expressed as a percent of the protein present in B6 control tissue. Mean normalized densitometry values were analyzed by Student's t-test to compare genotypes.
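The densitometry workflow described above amounts to a loading-control normalization followed by a two-sample comparison. The sketch below shows one way to reproduce it; the band intensities are invented numbers, and scipy's `ttest_ind` stands in for the Student's t-test mentioned in the text (this is not the GeneTools output).

```python
import numpy as np
from scipy import stats

# Invented band intensities for one region (e.g., cortex), N = 3 mice per genotype.
vacht_b6 = np.array([1020.0, 980.0, 1005.0])
gapdh_b6 = np.array([1500.0, 1480.0, 1510.0])
vacht_tg = np.array([2350.0, 2480.0, 2290.0])   # B6eGFPChAT
gapdh_tg = np.array([1490.0, 1520.0, 1470.0])

# Normalize each VAChT band to its GAPDH loading control.
norm_b6 = vacht_b6 / gapdh_b6
norm_tg = vacht_tg / gapdh_tg

# Express as a percentage of the B6 control mean.
pct_b6 = 100.0 * norm_b6 / norm_b6.mean()
pct_tg = 100.0 * norm_tg / norm_b6.mean()

t_stat, p_val = stats.ttest_ind(pct_tg, pct_b6)
print(f"B6eGFPChAT ~ {pct_tg.mean():.0f}% of control; t = {t_stat:.2f}, p = {p_val:.4f}")
```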
Spontaneous activity and indirect calorimetry B6eGFPChAT (N = 8) and B6 (N = 8) mice were placed in comprehensive lab animal monitoring system (CLAMS) metabolic cages (Columbus Instruments, Columbus, OH). These metabolic chambers monitor activity and metabolic performance. Following entry into the cages, the mice were allowed to acclimatize to the environment for 14-17 h prior to data collection. High-resolution real-time activity data along with metabolic measurements collected every 10 min were acquired during the 12 h light cycle (0700-1900 h) and the 12 h dark cycle (1900-0700 h). The metabolic measurements included the volume of carbon dioxide produced (VCO2), the volume of oxygen consumed (VO2), the respiratory exchange ratio (RER = VCO2/VO2), and the caloric (heat) value (([(3.815 + 1.232 × RER) × VO2] × 1000)/mouse weight). Sleep analysis was conducted using the Oxymax software (Columbus Instruments, Columbus, OH) as previously described and validated (Pack et al. 2007). The sleep threshold was set to 180 sec of 10 activity counts. The data are represented in ~30 min intervals and analyzed using repeated measures two-way analysis of variance (ANOVA), or as the mean values over each 12 h period and analyzed using Student's t-test. Dark/light box Each mouse was placed into an automated activity monitor (Accuscan Instruments, Inc., Columbus, OH) that was separated into an enclosed dark region (20 × 40 cm) and an open light region (20 × 40 cm). The two regions were separated by an opening (10 × 15 cm) where mice were placed facing the dark region and allowed to explore for 10 min between 2000 and 2200 h. Activity (converted from infrared beam breaks to cm) in each of the two regions, along with transitions between the regions, was measured over the trial duration. Mean distance values were analyzed by Student's t-test. The proportional time and distance spent in the light field and the number of transitions were analyzed by the Mann-Whitney U test. Novel environment locomotion Locomotor activity was measured using an automated activity monitor (Accuscan Instruments, Inc., Columbus, OH). Experiments were performed between 1000 and 1600 h. Mice were allowed to explore the locomotor activity chamber (20 × 20 cm) for 2 h. Activity (converted from infrared beam breaks to cm) was measured at 5 min intervals. Measurements of activity were analyzed using repeated measures two-way ANOVA, while cumulative means were assessed by Student's t-test. Elevated plus maze Anxiety-like and exploratory behavior were evaluated using an elevated plus maze 50 cm above the floor with four arms 30 cm long and 5 cm wide (two darkened and enclosed with 40 cm walls). Mice were placed into the center of the maze facing one of the open arms. The accumulated time and distance spent on the open and closed arms, along with the entries into each of the arms, were recorded over a single trial of 5 min using an automated tracking system (AnyMaze, Stoelting, Wood Dale, IL). The percentage of time spent on each of the arms and the number of entries into the arms were analyzed using Student's t-test or the Mann-Whitney U test as parameters measuring anxiety-like behavior. Rotarod Mice were placed on a stationary rod of an automated rotarod apparatus (SD Instruments, San Diego, CA). The rotation of the rod was then initiated at a speed of 5 rpm, which accelerated at a rate of 10 rpm/min to 35 rpm over the course of 3 min.
Latency to fall was automatically recorded using infrared beam break as the animal fell from the rod. Mice were tested on 10 trials during the first day, and four trials the next day, each with 15 min intertrial intervals. Results were analyzed by repeated measures ANOVA. Grip strength Forelimb grip strength was measured using a horizontally mounted digital force gauge (Chatillon, Largo, FL). Mice held by the base of their tails were slowly lowered and allowed to grasp a triangular bar attached to the gauge. The mice were then pulled backwards along the horizontal plane of the gauge. The peak tension of 10 successive trials was collected. Mean peak tension results for each genotype were analyzed by Student's t-test. Hanging wire Each mouse was placed on a wire cage top (square ½ inch mesh) which was gently shaken once to encourage the mice to grasp. The wire cage top was slowly inverted and suspended 40 cm above the base of a padded Plexiglas box. The mice were given three trials up to 300 sec with an intersession interval of 30 sec. The time it took each mouse to fall from the cage top was recorded. The mean trial hanging time results for each genotype were analyzed using repeated measures ANOVA and mean cumulative hang time over each of the trials were analyzed by Student's t-test. Results VAChT is overexpressed in the B6eGFPChAT mouse In the cortex, striatum, and hippocampus, VAChT staining presented as punctate fluorescence along ChAT positive fibers and in cell bodies of the striatum (Fig. 1A). Our previous observations in 3-month-old B6eGFPChAT mice (Nagy and Aubert 2012) revealed enhanced VAChT protein expression and here, we confirm that at 6 months of age, VAChT overexpression is sustained. The expression of VAChT in B6eGFPChAT mice was compared with B6 controls using Western blot analysis to detect cholinergic immunoreactivity in various regions of the central nervous system. Western blot targeting VAChT revealed a diffuse doublet at the predicted size of 70 kDa (Fig. 1B). Quantification of the VAChT band intensity revealed a significant two-to threefold increase of VAChT protein in B6eGFPChAT compared with B6 control mice (Fig. 1C). The enhanced level of VAChT protein was found in the cortex (t (4) = 8.752; P = 0.001), striatum (t (4) = 4.494; P = 0.046), and hippocampal formation (t (4) = 5.323; P = 0.006) (Fig. 1D). Western blots and quantification of ChAT ( Fig. 1D and E) and CHT ( Fig. 1F and G) revealed no significant change in protein expression in any of the regions that were analyzed. B6eGFPChAT mice exhibit unaltered motor function and coordination To measure the effect of increased VAChT on peripheral motor function, we first assessed forelimb grip strength using a digital tension gauge. B6eGFPChAT mice produced a peak tension of 0.268 kg which was not found to be significantly different from B6 control mice that produced 0.260 kg of peak tension (t (18) = 0.416; P = 0.682) ( Fig. 2A). In addition, no statistical difference was found between B6eGFPChAT and B6 control mice when measuring wire hang fatigue (two-way repeated measures ANOVA revealed no significant genotype factor, F (1,36) = 0.052; P = 0.822, and the expected trial factor, F (2,36) = 11.04; P < 0.001) (Fig. 2B) or total hanging time performance (t (18) = 0.229; P = 0.822) (Fig. 2C). We considered that the effect of VAChT overexpression might only be detectable during activities combining endurance, fine motor coordination, and balance. 
As such, performance on the rotarod was assessed through the latency to fall off the rotating cylinder. Both B6eGFPChAT and B6 control mice improved significantly from trial 1 to trial 10 (two-way repeated measures ANOVA trial factor, F(9,162) = 8.653; P < 0.001), demonstrating that both strains significantly improved motor coordination over time (Fig. 2D). However, no significant effect of genotype (F(1,162) = 0.013; P = 0.910) or interaction (F(9,162) = 1.273; P = 0.256) was found. Motor skill retention was assessed using four additional trials 24 h following the training sessions. In this paradigm, no significant differences were found between the latency to fall during Trial 10 (Day 1) and Trial 11 (Day 2) for B6eGFPChAT or B6 control mice (two-way repeated measures ANOVA, F(1,18) = 0.201; P = 0.659). Similarly, no genotype effect on performance was observed during the four trials performed on Day 2 (F(1,54) = 0.366; P = 0.553) (Fig. 2D). Taken together, these data suggest that B6eGFPChAT mice have maintained motor function and learning compared with B6 control mice, and that the elevated VAChT-mediated ACh vesicular packaging observed in B6eGFPChAT mice is not sufficient to improve these normal motor functions.
B6eGFPChAT mice display spontaneous hypoactivity in a home cage environment
Given the role of cholinergic neurons in the regulation of muscle activity through central and peripheral innervation, we sought to determine whether increased VAChT expression influences spontaneous locomotor activity. Through the monitoring of locomotor activity over a 24 h period, B6eGFPChAT mice were found to exhibit hypoactivity during both their light (t(14) = 2.205; P = 0.045) and dark cycles (t(14) = 3.823; P = 0.002) (Fig. 3A). High-resolution analysis of the locomotor activity exposed a significant genotype factor when analyzed by repeated measures two-way ANOVA (F(1,658) = 4.660; P = 0.049) (Fig. 3B). Bonferroni post-tests revealed that the B6eGFPChAT mice displayed significantly less activity during the biphasic diurnal activity peaks typically exhibited by rodents at ~2100 and 0430 h (Fig. 3B). We further evaluated physiological function and tone through the assessment of respiratory characteristics that are associated with physical activity. Using two-way repeated measures ANOVA, we found that there was no significant genotype effect during the assessment of RER (F(1,658) = 2.105; P = 0.169).
(Figure 1 legend: (A) punctate VAChT immunoreactivity in cholinergic cell bodies and processes; (B, C) VAChT immunoblot and densitometry quantification showing a significant two- to threefold overexpression in the cortex, striatum, and hippocampus of B6eGFPChAT mice compared with B6 control mice; (D, E) ChAT and (F, G) CHT immunoblots and quantification showing no significant genotype differences in the analyzed regions; *P < 0.05, **P < 0.01, ***P < 0.005 compared with B6 control mice.)
B6eGFPChAT mice display increased activity and exhibit impaired habituation in novel environments
To evaluate the behavioral response to a novel environment, we placed B6eGFPChAT and B6 control mice into open field arenas for 2 h.
To establish the instantaneous response to novelty, we first considered the data collected during the initial 5 min of exposure, which has previously been established as a predictive time window for this effect (Crawley 2007). Using this criterion, B6eGFPChAT mice exhibited a significant increase in total distance (t(18) = 3.199; P = 0.005) (Fig. 5A) and rearing activity (t(18) = 2.570; P = 0.019) (Fig. 5C) compared with B6 controls. In addition to a single exposure, we tested intersession habituation by repeating the exposure of mice to the boxes on three consecutive days (Fig. 5B and D). B6 mice exhibited a decrease in total activity, which reached statistical significance on day 3 when compared with day 1 (F(2,26) = 5.232; P = 0.013) (Fig. 5B; light bars). In contrast, B6eGFPChAT mice did not show statistically significant changes in total distance between exposures (Fig. 5B; dark bars). Notably, B6eGFPChAT mice showed significantly higher locomotion than B6 control mice during the day 3 exposure (Bonferroni post hoc test between B6eGFPChAT and B6 control on day 3, t = 2.884; P = 0.013) (Fig. 5B). No significant difference was observed for habituation of rearing events (no genotype effect, F(1,36) = 1.405; P = 0.251; expected time effect, F(2,36) = 17.25; P < 0.001) (Fig. 5D). From these data, we show that B6eGFPChAT mice exhibit increased locomotor activity upon initial exposure to open field environments, which decreases to B6 levels by 10 min and is followed by maintained intrasession habituation. In addition, B6eGFPChAT mice were found to have increased locomotor activity compared with B6 controls during the day 3 exposure.
Thigmotactic behavior is maintained in B6eGFPChAT mice
We considered that the brief increase in locomotor behavior exhibited in the open field environment might be due to differences in anxiety between B6eGFPChAT and B6 mice. We therefore sought to evaluate the thigmotactic behavior of the B6eGFPChAT mice (i.e., the proportion of time spent along the periphery of the open field) during a novel exposure to the environment. No significant difference was observed during the first 5 min (t(18) = 0.3479; P = 0.732) or during the 2 h duration with regard to the proportion of time spent in the center between the B6eGFPChAT and B6 genotypes (two-way repeated measures ANOVA did not reveal a significant genotype factor, F(1,414) = 0.5771; P = 0.457) (Fig. 6A). We did observe, however, a significant interaction in the proportion of center time between B6eGFPChAT and B6 control mice (F(1,414) = 4.000; P < 0.001). Through visual inspection of the data in Figure 6A, we hypothesized that the interaction effect was due to increased unbiased activity during the last hour of the trial. As such, we generated activity maps for the first and second hours of the exposure to compare the activity patterns between genotypes (Fig. 6B). During the first hour of the open field exposure, the B6eGFPChAT and B6 genotypes each exhibited unbiased exploration of the open field (Fig. 6B; top row). During the last hour of analysis, B6 mice were found almost exclusively in the peripheral regions of the arena (Fig. 6B; bottom row). In contrast, B6eGFPChAT mice exhibited activity that was unbiased toward either the peripheral or center regions of the open field. The pattern of activity and exploration by B6eGFPChAT mice was particularly evident during the last 20 min interval (Fig. 6B; bottom row; purple).
These data suggest that enhanced ACh vesicular packaging may contribute to altered thigmotactic behavior through increased activity and exploration in the novel environment.
B6eGFPChAT mice show increased activity in the dark/light box
To further characterize anxiety levels in B6eGFPChAT compared with B6 mice, the dark/light box paradigm was employed. The dark/light task is based on the innate aversion of mice to brightly lit areas and on the spontaneous exploration of mice in response to mild stressors, in this case novel open environments and light (Crawley 2007). The aversion to the environment is measured by the time and total distance accumulated in each compartment. In this test, B6eGFPChAT and B6 control mice spent ~40% of their total distance (Fig. 7A) and time (Fig. 7B) in the light compartment and were found not to be significantly different from each other (Mann-Whitney U test = 42.00, P = 0.649 and Mann-Whitney U test = 39.00, P = 0.447, respectively). Transitions between the light and dark compartments are considered an index of activity and exploration. In this study, the number of transitions was significantly greater for B6eGFPChAT compared with B6 control mice (Mann-Whitney U test = 21.50, P = 0.036) (Fig. 7D). Similarly, B6eGFPChAT mice accumulated a significantly greater total distance over the 10 min duration than B6 controls (t(18) = 2.740; P = 0.013) (Fig. 7C). These results reiterate that B6eGFPChAT mice do not exhibit perturbed anxiety to open environments and light per se; however, B6eGFPChAT mice are more active and display increased exploration of the novel environment of the dark/light box.
(Figure 5 legend, in part: rearing data were collected for B6eGFPChAT (N = 11) and B6 control mice (N = 9); once a rearing event is registered, the mouse must go below the level of the vertical sensor for 1 sec before the next rearing event can be recorded; (D) habituation to the novel open field measured as cumulative 2 h rearing events; *P < 0.05, **P < 0.01 compared with B6 controls; #P < 0.05, ##P < 0.01 compared with day 1.)
In the elevated plus maze, B6eGFPChAT mice showed a significant bias toward the open arms compared with B6 control mice (Fig. 8A and B). Parameters reflecting changes in locomotor activity in this model were also found to be significantly different, including the number of closed arm entries (Mann-Whitney U test = 16.50; P = 0.013) and total distance (t(14) = 2.150; P = 0.029) (Fig. 8B and C). These data revealed another aspect of the exploratory phenotype to novel environments in B6eGFPChAT mice, as these mice accumulated greater total distance and showed an increased preference for the open arms. The latency to enter the open arm was not used as an outcome measure here, as mice were placed into the center of the maze facing one of the open arms.

Discussion

Here, we present biochemical and behavioral characteristics of B6eGFPChAT mice that delineate the role of VAChT overexpression in cholinergic function, focusing on peripheral motor function, locomotion, and anxiety. Our data provide evidence that modest increases in VAChT expression, previously associated with increased ACh release (Nagy and Aubert 2012), elicit physiological consequences, including altered spontaneous and novelty-induced locomotor activity. Collectively, these results provide insights into the importance of ACh storage and release for behavior, and this may have implications for human neurodegenerative disorders that exhibit cholinergic dysfunction.
Biochemical analysis
We previously described that 3-month-old B6eGFPChAT mice have increased VAChT gene and protein expression that results from increased genomic copies of the cholinergic gene locus (Nagy and Aubert 2012). These events are a consequence of the modified RP23-268L19 bacterial artificial chromosome (BAC), containing the VAChT genomic sequence, that was used to initially generate the transgenic mice (Tallini et al. 2006; Nagy and Aubert 2012). Increased VAChT expression enhanced ACh release in the hippocampus (Nagy and Aubert 2012), and likely enhanced cholinergic function in all brain regions where cholinergic terminals are found. Here, we found that VAChT overexpression is maintained at 6 months of age, spanning the age of the animals used in this study. In contrast, no significant differences were found for ChAT and CHT protein expression, consistent with our and others' previous findings that alteration in VAChT does not affect other presynaptic cholinergic proteins (Nagy and Aubert 2012). VAChT overexpression is therefore maintained at least up to 6 months in B6eGFPChAT mice without affecting ChAT and CHT expression.
Motor strength and coordination
Spontaneous and evoked release of ACh at the neuromuscular junction is responsible for peripheral muscle contraction in response to motor neuron activation. As such, VAChT-knockdown mice are significantly impaired in their ability to sustain prolonged physical activity, specifically with regard to forelimb grip strength, hanging endurance, and rotarod performance (de Castro et al. 2009b). An assessment of cholinergic tone at the neuromuscular junction has not been performed in B6eGFPChAT mice. The peripheral expression of the BAC transgene has been previously characterized in B6eGFPChAT mice (Tallini et al. 2006). Using the same mouse model, we found that VAChT is overexpressed in the central nervous system (Fig. 1; Nagy and Aubert 2012) and in peripheral regions of the autonomic nervous system (Fig. S1). Our analysis of neuromuscular function in B6eGFPChAT mice reveals that forelimb grip strength and the ability to freely support their body weight in an endurance paradigm were maintained. Furthermore, rotarod performance using an accelerating rod to assess coordination, motor learning, and endurance was essentially identical between genotypes. The maintenance of motor function in VAChT-overexpressing mice may reflect the tolerance that exists within the neuromuscular junction to withstand changes in cholinergic transmission. Under normal physiological conditions, peripheral cholinergic neurons maintain cholinergic function through readily releasable pools of ACh-containing synaptic vesicles. During prolonged stimulation, large storage reserves of ACh-containing vesicles can be localized within peripheral cholinergic neurons and used for synaptic release (Rizzoli and Betz 2005). For these reasons, the impact of VAChT overexpression on neuromuscular function may require more demanding physical conditions to be revealed. Indeed, previous studies have identified that CHT overexpression improves performance during endurance treadmill paradigms, while CHT deficiency impairs treadmill performance (Bazalakova et al. 2007; Lund et al. 2010). It remains to be determined whether similar paradigms would elicit an effect in B6eGFPChAT mice. In contrast to peripheral neurons, central cholinergic neurons have smaller pools of readily releasable vesicles, and as such may be more dependent on the rapid recycling of vesicles.
Under certain physiological scenarios, such as when synaptic vesicles cycle faster than they can be filled (Prado et al. 2002), neurotransmitter transporters may be rate limiting for neurotransmitter release. During these events, the rate of ACh release may be enhanced by VAChT overexpression, and as such, central cholinergic functions may be more sensitive to modified levels of VAChT.
Spontaneous activity and circadian rhythms
ACh is known to play a complex role in the regulation of locomotor control, including acting as a modulator of the dopaminergic system (Rice and Cragg 2004; Drenan et al. 2010; Lester et al. 2010; Threlfell et al. 2010). In particular, ACh within the laterodorsal and pedunculopontine tegmental nuclei of the pons has been shown to mediate dopaminergic activity along the nigrostriatal pathways, whose innervation of the dorsal striatum is responsible for voluntary motor control (Lester et al. 2010). In addition, striatal cholinergic interneurons regulate dopamine release via beta2 subunit-containing nicotinic acetylcholine receptors (β2*-nAChR) present on dopaminergic axons in the striatum (Threlfell et al. 2010). Several reports show that pharmacological or genetic alteration of cholinergic or dopaminergic function leads to increased striatal dopamine release and increased spontaneous locomotion (Giros et al. 1996; Gomeza et al. 1999; Rice and Cragg 2004; Drenan et al. 2010; Threlfell et al. 2010). In addition to dopaminergic modulation of locomotion in the striatum, the contribution of forebrain cholinergic tone to spontaneous locomotion has recently been revealed. Mice with VAChT deficiency throughout the central and peripheral nervous system (Martins-Silva et al. 2011) or specifically in basal forebrain neurons (Martyn et al. 2012) display hyperactivity. Interestingly, the cholinergic contribution to locomotion appears to be independent of cholinergic striatal interneurons, because selective removal of VAChT in the striatum does not induce hyperactivity. It is therefore plausible that cholinergic innervation of other central regions, including the cortex and hippocampal formation, plays an important role in the regulation of this behavior. Our findings that B6eGFPChAT mice exhibit hypoactive spontaneous activity are consistent with the notion that ACh "turns down" neuronal circuits controlling spontaneous locomotion (Martins-Silva et al. 2011; Martyn et al. 2012). The observed hypoactivity in B6eGFPChAT mice was most evident during the activity peaks occurring over the dark phase of the light/dark cycle. In addition, the metabolic parameters of heat, VO2, and CO2 appear to correspond to daily rhythmic patterns of locomotion, with significant and corresponding decreases in VO2 during the periods of significant hypoactivity. The transient decrease in VO2 likely reflects the inherent decrease in respiration requirements associated with decreased activity. Taken together, these data suggest that the change in spontaneous activity is closely associated with the activity-rest pattern of B6eGFPChAT mice. These data are consistent with previous findings showing that normal activity-rest patterns are regulated by cholinergic neurotransmission, potentially through β2*-nAChR of the suprachiasmatic nucleus (Liu and Gillette 1996; Yang et al. 2010; Xu et al. 2011).
This is because cholinergic neurotransmission is generally associated with a series of characteristic sleep changes, including decreased rapid eye movement sleep (REM) latency and increased REM density (Sarter and Bruno 1999; Vazquez and Baghdoyan 2001). As such, we considered that the sleeping patterns of B6eGFPChAT mice could have contributed to the observed patterns of activity in this study. However, this was found not to be the case: when activity and inactivity were analyzed by determining movement by infrared beam break (Pack et al. 2007), no significant differences were found in the patterns of sleep time, sleep bout number, or sleep bout duration. Collectively, these data suggest that VAChT overexpression induces generalized locomotor hypoactivity that is unrelated to circadian sleep regulation. VAChT overexpression in B6eGFPChAT mice has not been targeted to specific brain regions, limiting the identification of the specific brain areas responsible for the observed hypoactivity. However, based on the discussion above, we postulate that VAChT overexpression enhances the inhibitory effect of ACh via cholinergic basal forebrain or dopaminergic striatal networks. Indeed, the decreased spontaneous activity exhibited by B6eGFPChAT mice is reminiscent of mouse models with increased ACh (via AChE inhibition) or decreased dopamine neurotransmission (Kobayashi et al. 1995; Zhou and Palmiter 1995). Confirmation of these potential mechanisms awaits region-specific VAChT overexpression models.
Exploratory behavior
Novel stimuli, including new or modified environments, generate approach/avoidance conflicts in mice. The conflict tests the balance between exploring the novelty to gain information and the anxiety-related cautiousness to avoid danger or harm. Exposure to novel stimuli has been extensively associated with cholinergic activation. Studies using exposure to novel environments and sensory stimulation as experimental paradigms have shown increased ACh release in the nucleus accumbens, hippocampal formation, and cortical structures (Thiel et al. 1998; Schildein et al. 2000; Giovannini et al. 2001). Furthermore, a number of studies have demonstrated that cortical (Day et al. 1991), striatal (Cohen et al. 2012), and hippocampal (Dudar et al. 1979; Day et al. 1991; Mizuno et al. 1991) ACh release is positively correlated with behavioral arousal in novel environments as defined by locomotor activity. We therefore investigated the exploratory behavior of B6eGFPChAT mice in novel environments to evaluate the contribution of VAChT overexpression. The results from the open field experiments indicate that B6eGFPChAT mice display transient increases in activity upon initial exposure to the novel environment compared with B6 control mice, including increased horizontal activity and rearing. These increased levels of exploration return to normal following the first 10 min of the open field exposure, after which B6eGFPChAT mice begin to exhibit normal intrasession patterns of habituation. Upon repeated exposure to the novel environment, B6eGFPChAT mice displayed only a modest decrease in locomotion, which did not reach significance, and their locomotion was significantly different from that of B6 control mice by day 3. The intrasession and intersession habituation patterns of B6 control mice were found to be consistent with previous reports (Bolivar et al. 2000; Bolivar and Flaherty 2003).
While the intrasession habituation of B6eGFPChAT mice was unchanged, the impaired intersession habituation observed in this study was unexpected. This is because earlier studies have shown that deficits in habituation are attributed to ACh deficiency (Ukai et al. 1994; Schildein et al. 2000, 2002), as ACh levels in the hippocampus (Giovannini et al. 2001) or cortex (Sarter and Bruno 1999; Sarter and Parikh 2005) may contribute to memory consolidation or attention processes following exposure to the novel environment. We considered that intersession habituation to novel environments may be the result of two components, one related to memory and anxiety and one related to motor activity. Indeed, previous experiments involving repeated exposures to novel environments reveal that during initial exposures, elevated ACh release from cortical and hippocampal regions may be associated with fear, stress, and motor activity (Giovannini et al. 2001). Subsequent habituated exposures have a limited component of memory and anxiety, as the inherent fear elicited by the novelty of the environment is minimized, and as such cortical and hippocampal cholinergic activation is related primarily to motor activity (Giovannini et al. 2001). As such, we propose that the observed intersession activity in B6eGFPChAT mice is attributable to the increased exploration-associated locomotion of B6eGFPChAT mice exposed to novel environments rather than to impaired habituation per se. This speculation is supported by the observed rearing habituation, which suggests that, to a certain extent, habituation behavior is maintained in B6eGFPChAT mice. Furthermore, the observed locomotor arousal is consistent with the mechanism whereby instantaneous release of ACh positively correlates with increased activity in novel environments (Dudar et al. 1979; Day et al. 1991; Mizuno et al. 1991; Cohen et al. 2012), and suggests that VAChT overexpression may potentiate this response.
Anxiety-like behavior
Endogenous cholinergic tone has been associated with anxiety-like behavior in mice. The effect of ACh is complex in that increased ACh release has been associated with both anxiolytic and anxiogenic actions (File et al. 1998, 2000). For this reason, the relationship between ACh and anxiety may be related to regional subunit configurations of ACh receptors in the central nervous system (File et al. 2000; Labarca et al. 2001; Salas et al. 2003; Gotti and Clementi 2004; McGranahan et al. 2011). In this study, we utilized multiple experimental paradigms (open field, dark/light box, and elevated plus maze) known to elicit behavioral responses in mice to assess the role of VAChT overexpression in anxiety-like behavior. When exposed to a novel open field, B6eGFPChAT mice did not show any center versus peripheral exploratory bias during the first 5 min of analysis, the time that has been previously shown to elicit the most robust anxiety behavior, or over the entire duration of the assay. The strongly significant interaction that was observed during the open field exposure is clarified by considering the activity traces for the test. Whereas each genotype exhibits unbiased exploration of the open field during the first 60 min of analysis, B6eGFPChAT mice show dramatically more exploration of the open field compared with B6 control mice during the final 60 min of analysis.
Consistent with these findings, the dark/light box did not differentiate between genotypes with respect to the primary outcomes of time and distance accumulated in the light field. However, an unbiased increase in total distance was revealed for B6eGFPChAT mice, which is reflected by an increase in the total transitions between the dark and light fields. The open field and dark/light box did not detect significant anxiety-like differences between B6eGFPChAT and B6 control mice. However, B6eGFPChAT mice showed a moderate but significant bias toward the open arms, suggesting that VAChT overexpression decreased anxiety-like behavior in the elevated plus maze. The decreased anxiety-like behavior observed in the elevated plus maze, in the context of the released exploratory inhibition observed during each of the anxiety-like behavioral tasks, suggests that the genetic modifications in the B6eGFPChAT mouse have an anxiolytic effect. The divergent findings in the primary outcomes of the open field and dark/light box (no change in anxiety) and the elevated plus maze (decreased anxiety) can be reconciled, as the former tasks may not provide the same sensitivity as the elevated plus maze, which delivers a more complex anxiogenic insult (Crawley 2007). Alternatively, changes in the primary outcome of the elevated plus maze during VAChT overexpression may be based solely on the modified exploratory locomotion of the B6eGFPChAT mouse.
Implications and concluding remarks
In this study, we used congenic B6eGFPChAT mice that are homozygous for the RP23-268L19-EGFP transgene and have been previously characterized as having increased VAChT gene and protein expression (Nagy and Aubert 2012). These commercially available mice have recently been utilized in the investigation of multiple cholinergic pathways, primarily for the identification and functional characterization of cholinergic neurons (Ade et al. 2011; Krasteva et al. 2011; Ogura et al. 2011; Rosas-Ballina et al. 2011). Here, we identified that B6eGFPChAT mice present a unique behavioral phenotype compared with B6 controls. While it remains possible that the observed phenotype is confounded by positional effects related to the random insertion of the BAC transgene, only a single commercially available B6eGFPChAT founder line exists, precluding our examination of multiple founders with independent insertion sites. Keeping these limitations in mind, a cholinergic rationale related to the observed increase in VAChT protein and the previously defined enhancement in ACh release (Nagy and Aubert 2012) is congruent with the data, and it provides a plausible explanation for the observed behavior in B6eGFPChAT mice. The utility of the B6eGFPChAT mouse as an experimental model for VAChT overexpression could be of significance for future studies related to neurodegeneration. Significant decreases in VAChT expression have been associated with various neurodegenerative conditions (Kuhl et al. 1996; Efange et al. 1997; Bell and Cuello 2006; Bohnen and Albin 2011; Chen et al. 2011). Most notably, progressive VAChT deficiency is observed during AD progression (Bell and Cuello 2006; Chen et al. 2011) and in postmortem AD brains (Efange et al. 1997; Chen et al. 2011). Interestingly, the disease pathology of AD is also marked by abnormal motor behavior, including spontaneous hyperactivity and restlessness (Mega et al. 1999; Ognibene et al. 2005; Sterniczuk et al. 2010b; Bedrosian et al. 2011; Walker et al. 2011), as well as enhanced anxiety to novelty (Sterniczuk et al. 2010a; Bedrosian et al. 2011).
The series of experiments described in this study suggests that the increased VAChT expression observed in B6eGFPChAT mice contributes to spontaneous hypoactive behavior and increased exploration in novel environments. In cases of cholinergic deficiency and impaired locomotor-related behavior, identifying approaches to upregulate VAChT may be of therapeutic significance.
Efficacy of Compound Therapy by Ginseng and Ciprofloxacin on Bacterial Prostatitis.
OBJECTIVE Genitourinary tract infections play a significant role in male infertility. Infections of reproductive sex glands, such as the prostate, impair function and indirectly affect male fertility. The general aim of this study is to investigate the protective effect of Korean red ginseng (KRG) on prostatitis in male rats treated with ciprofloxacin (CIPX).
MATERIALS AND METHODS In this experimental study, we randomly divided 72 male Wistar rats into 9 groups. The groups were treated as follows for 10 days: i. Control (no medication), ii. Sham (normal saline injection into the vas deferens and oral administration of phosphate-buffered saline [PBS]), iii. Ginseng, iv. CIPX, v. CIPX+ginseng, vi. Uropathogenic Escherichia coli (E. coli) (UPEC), vii. UPEC+ginseng, viii. UPEC+CIPX, and ix. UPEC+ginseng+CIPX. The rats were killed 14 days after the last injection and the prostate glands were removed. After sample preparation, routine histology was performed using hematoxylin and eosin staining. The terminal deoxynucleotidyl transferase mediated dUTP-biotin nick end labeling (TUNEL) method was used to determine the presence of apoptotic cells.
RESULTS The severity score for acinar changes and inflammatory cell infiltration in the UPEC+CIPX group did not differ significantly from that of the UPEC group. However, this score significantly decreased in the UPEC+CIPX+ginseng group compared to the UPEC group. The apoptotic index of all ginseng-treated groups significantly decreased compared to the UPEC and CIPX groups.
CONCLUSION These results suggest that ginseng might be an effective adjunct in CIPX treatment of prostatitis. The combined use of ginseng and CIPX was more effective than ginseng or CIPX alone.

Introduction

Infertility is an important concern, affecting approximately 20% of couples, and approximately 50% of infertility is attributed to males (1,2). Male infertility encompasses spermatogenesis disorders, defects of sperm transportation, impotence, hypogonadism, and urinary tract infections (UTI) (3,4). Urogenital infections are responsible for approximately 35% of male infertility. These infections may impair accessory gland functions, such as those of the prostate, and lead to changes in seminal plasma composition (5,6). Therefore, male accessory sex gland infection is a major risk factor for infertility (6). Uropathogenic Escherichia coli (E. coli) (UPEC) is an important causative agent in more than 70% of urogenital tract infections (7). Antibiotics have long been considered the most effective treatment for bacterial infections. Ciprofloxacin (CIPX), belonging to the family of fluoroquinolones, has a broad spectrum of efficacy against bacterial infections (8). This drug can be transported into the seminal fluid and directly affect sperm cells by reducing sperm concentration, motility, and viability (9). In one study, CIPX administration (5 mg/kg body weight) to 70 adult male Wistar rats resulted in acinar changes, lymphocytic infiltration, and fibrosis in the interstitial space of the prostate gland (10). A decrease was observed in testis, epididymis, and seminal vesicle weights after administration of CIPX to rats for over 60 days (9). This antibiotic induces oxidative damage in rats and increases reproductive toxicity (11). It can activate caspase 3 and increase apoptosis in male germ cells (8). Both infections and fluoroquinolones can induce the generation of a tremendous amount of reactive oxygen species (ROS) (12)(13)(14).
Excessive generation of free radicals can damage proteins, lipids, and nucleic acid structures, in addition to contributing to cellular dysfunction and death (15). According to previous reports, antioxidants can protect against CIPX toxicity by eliminating the ROS generated during administration of this antibiotic (14). In another study, the removal of accessory reproductive glands from hamsters led to increased DNA damage in spermatozoa, suggesting that these glands are a main source of antioxidants in seminal fluid (16). Collectively, because E. coli and CIPX affect the histological structure of the male accessory glands, the function of these glands as a main source of antioxidants will be altered. Under these circumstances, administration of antioxidants can be useful. Korean red ginseng (KRG), a derivative of Panax ginseng, is considered a very powerful antioxidant that can be used to eliminate free radicals. Kim et al. (17) have shown that KRG improved rat testis dysfunction by suppression of superoxide production. In addition to the anti-stress and antioxidant activities of KRG, it also has potent pharmacologic actions against cancer and diabetes. Choi et al. (18) have shown that combination therapy of ginsenoside with CIPX is an effective treatment for chronic bacterial prostatitis (CBP). The use of Panax ginseng extract can promote spermatogenesis and increase serum testosterone, follicle stimulating hormone (FSH), and luteinizing hormone (LH) levels. This normalization of hormone levels may be due to the effect of Panax ginseng on the hypothalamic-pituitary axis (3,19). However, less is known about the protective effect of KRG on the apoptosis process and structural changes in prostate gland infections. This study seeks to determine the effective role of KRG in UPEC infection of the prostate in a rat model under treatment with CIPX.
Animals
In this experimental study, a total of 72 adult male Wistar rats were obtained from Shahid Beheshti University, Tehran, Iran. This study was approved by the Medical Ethics Committee of Qazvin University of Medical Sciences. Rats ranged in age from 2 to 3 months and weighed 200 to 250 g. Before the experiment, animals were maintained for one week under controlled environmental conditions (23˚C and a 12 hour/12 hour dark-light cycle). Food and water were available ad libitum. The hygienic conditions were kept constant throughout the experimental period.
Preparation of Korean red ginseng
We initially prepared a suspension of white ginseng root powder in 50% ethanol. The prepared suspension was boiled, condensed under vacuum, and dried by speed vac. The resultant material was dissolved in PBS (26).
Tissue and sample collection
At 14 days after the end of the experiment, all animals were anesthetized using ketamine (50 mg/kg) and xylazine (12 mg/kg), and their prostate glands were carefully removed (10,23).
Histological analysis
All samples were fixed in 10% formalin and embedded in paraffin. Samples were sectioned by rotary microtome into 5 µm thick slices, stained with hematoxylin and eosin, and examined by light microscopy (Olympus DP25, Japan) using Image Analyzer software (ImageJ 1.43u). The severity of inflammatory cell infiltration, interstitial fibrosis, and acinar changes, as indications of prostate inflammation, was measured and graded on a scale from 0 to 5 (Table 1) (10,27).
Terminal deoxynucleotidyl transferase mediated dUTP-biotin nick end labeling assay
TUNEL was used to quantify apoptotic cells in the prostate epithelium.
The procedure was performed according to recent studies (9). At the end of the staining, we evaluated the apoptotic index by counting the number of cells that showed TUNEL positivity among 100 cells each in 10 random slides from all groups by light microscopy at ×400 magnification (28,29).
Statistical analysis
Statistical analysis was performed using one-way ANOVA followed by Tukey's post hoc comparison test. The significance level was considered to be P<0.05.
Apoptosis
The number of TUNEL-positive cells in the prostate epithelium greatly increased following antibiotic treatment (Fig. 1). Figure 2 shows the mean apoptotic index in 1000 epithelial cells per group. The apoptotic index of both the CIPX and UPEC groups significantly increased in contrast with all other groups (P<0.05). The number of apoptotic cells in the UPEC+CIPX+ginseng group decreased compared to the UPEC+CIPX group. There was no significant difference between the UPEC+ginseng and control groups (P>0.05).
Prostate histopathology
Histopathological investigation of the prostate was performed to evaluate acinar changes, inflammatory cell infiltration, and interstitial fibrosis. Severity scores for these items in each group are given in Figures 3-5. Lymphocytes, monocytes, and neutrophils comprised the majority of inflammatory cells. Severity scores for acinar changes and inflammatory cell infiltration in the UPEC and UPEC+ginseng groups did not differ significantly (P>0.05), whereas the scores of both groups increased significantly compared to the control group (Figs. 3, 4). A comparison between the CIPX, CIPX+ginseng, and control groups showed no significant differences in scores. This result was also in line with the interstitial fibrosis evaluation (Fig. 5). There was no significant difference between the UPEC+CIPX+ginseng group and the control group for all inflammatory items (Fig. 6). The severity score of the UPEC+CIPX+ginseng group, in contrast to the UPEC group, showed a significant difference (P<0.05). Interstitial fibrosis evaluation showed significant increases in the UPEC, UPEC+ginseng, and UPEC+CIPX groups compared with all other groups (P<0.05).
Discussion
This study evaluated the protective effect of KRG on UPEC-induced prostatitis in a rat model treated with CIPX. Our findings in the histopathological evaluations demonstrated no significant differences between the UPEC and UPEC+ginseng groups. In addition, the combined use of ginseng and CIPX was more effective than their single use. CIPX significantly increased the number of apoptotic cells. The anti-apoptotic effect of ginseng caused significant reductions in severity scores in the UPEC+CIPX+ginseng, UPEC+ginseng, and CIPX+ginseng groups compared to the UPEC and CIPX groups. More than 50% of couples' infertility problems are related to male factors (1,2). Among these, infections of the accessory sex glands, such as prostatitis, play an important role (6). For prostate infections, the use of antibiotics such as CIPX is the gold standard of treatment (10). However, recent studies report adverse effects of testicular dysfunction, DNA damage and chromatin abnormalities of sperm cells, and increased numbers of apoptotic germ cells in seminiferous tubules in males treated with CIPX (8,9,30). In addition, CIPX can impair the histological structure of the epididymis, testicles, seminal vesicles, and prostate (11). These adverse effects can be related to the increased amount of ROS produced during CIPX treatment (14). Therefore, under this condition the use of a potent antioxidant can be helpful.
KRG is one of the most widely used herbal medicines in East Asia (31). Ernst (32) reported its possible mechanisms of action as anti-apoptotic, anti-inflammatory, and antioxidant effects, reduction of platelet adhesion, and vasodilation. Anti-aging, anti-diabetic, and anti-cancer effects of ginseng have also been reported (33). Elkahwaji et al. (34) have shown interstitial edema, acute inflammatory cell infiltration, and acinar shrinkage in prostates infected by E. coli. This finding agrees with our results, which showed that the UPEC group significantly differed from the other groups in all inflammatory items. The observed increase in the UPEC group score could be attributed to excessive ROS production by leukocytes present under inflammatory conditions. According to our findings, the interstitial fibrosis evaluation of CIPX treatment of UPEC-induced prostatitis showed a significant difference compared to the UPEC group. This result confirmed the findings of Kim et al. (27). On the other hand, there was no significant difference between the UPEC+CIPX and UPEC groups in inflammatory cell infiltration and acinar changes. Demir et al. (23) reported that CIPX used for treatment of E. coli-infected testicles degenerates the germinal epithelium. A possible explanation for these differences might be CIPX suppression of E. coli by blocking bacterial DNA synthesis (8). Therefore, CIPX reduces the outcomes of infection on histological structures. In addition, this antibiotic decreases serum testosterone levels (35,36) and indirectly affects male reproductive organs (23,37). Kim et al. (17) reported a protective effect of KRG on rat testicular dysfunction by suppression of ROS production. Choi et al. (18) suggested ginseng+CIPX to be an effective treatment in rats. Kim et al. (27) reported a beneficial effect of ginseng combined with CIPX under inflammatory conditions. Our study supports these recent reports regarding the protective effect of ginseng. In the current study, there was no significant difference between the UPEC and UPEC+ginseng groups. Of note, the severity score of the UPEC+CIPX+ginseng group showed a significant decrease compared to the UPEC group in all inflammatory items. Ginseng could not suppress E. coli, but it could eliminate the excess ROS produced during antibiotic treatment. The UPEC and CIPX groups showed significantly increased apoptosis in prostate epithelial cells compared to the control group. Dwyer et al. (38), in a molecular assessment, showed that E. coli exhibited characteristic markers of apoptosis. Khaki et al. (9) reported an increased number of apoptotic germ cells per seminiferous tubule in the CIPX group compared to the control group. Nguyen et al. (39) and Kim et al. (40) demonstrated the anti-apoptotic effect of KRG in neuroblastoma cells. To the best of our knowledge, there has been no specific study of the anti-apoptotic effect of ginseng on infected prostate epithelium. We observed a significant difference between the UPEC and UPEC+ginseng groups, as well as between the CIPX and CIPX+ginseng groups. CIPX suppresses E. coli by blocking bacterial DNA synthesis; however, it increases the number of oxidants produced (8,14). According to our results, it can be concluded that ginseng is a good antioxidant that can be used to eliminate excess oxidants. Therefore, use of this antioxidant under oxidative stress conditions may be helpful.
Conclusion
These findings enhance our understanding of the anti-inflammatory and anti-apoptotic effects of KRG.
Our experimental results suggest that the combined use of CIPX and KRG in UPEC-infected rats can be helpful in treating male UTI and may improve fertility. These results are subject to certain limitations, such as the lack of serum testosterone measurements and microbiological analyses. Further studies should be carried out in humans.
Thermo-mechanical coupling numerical simulation method under high temperature heterogeneous rock and application in underground coal gasification
The heterogeneity of a rock mass under high temperature and its thermo-mechanical coupling characteristics are difficult problems to investigate. This situation brings considerable difficulties to the study of underground coal gasification under thermo-mechanical coupling. The development of a numerical simulation method for the thermo-mechanical coupling of a heterogeneous rock mass under high-temperature burnt conditions can provide an important foundation for related research. On the basis of the variation of mechanical properties of rock mass with temperature, a thermo-mechanical coupling simulation method, which considers the heterogeneity of a rock mass under high temperature, is proposed in this study. A test block experiment is implemented, and the method is then applied to the strata movement and failure of underground coal gasification. The results are as follows: (1) The proposed method can truly reflect the heterogeneity of a rock mass in a high-temperature environment, providing an effective method for the numerical simulation of geotechnical engineering under high-temperature conditions. (2) The variation of the mechanical properties of rock mass after an increase in temperature is the main reason for the change law of strata movement and failure in underground coal gasification. These factors should be considered in the investigation of underground gasification strata movement and failure. The present study can provide an important means for research on geotechnical engineering in high-temperature environments.
Introduction
In the numerical simulation of geotechnical engineering, rock mass is generally considered homogeneous and isotropic, even when it is disturbed by temperature, with only the influence of thermal stress considered (Elahi et al., 2017; Laouafa, 2016; Peter-Borie et al., 2018). However, under high-temperature conditions, not only is high-temperature thermal stress generated inside the rock mass; the structure and physical-chemical properties of the rock are also considerably affected by burning, thus changing the internal stress distribution and the deformation and failure law of the rock mass. Therefore, a numerical simulation method for thermo-mechanical coupling that considers high-temperature burnt conditions urgently needs to be developed. Underground coal gasification (UCG), as an advanced method of coal chemical mining, has been proposed since the last century. However, although UCG technology has achieved remarkable progress, its large-scale commercial production is far from ideal (Heinberg and Fridley, 2010; Khadse et al., 2007). The two main reasons are as follows: (a) To ensure the regular, controlled combustion of coal in the process of UCG, the roof should not collapse over a large area and groundwater should not break into the burning surface, i.e., the shape design of the gasifier must be relatively conservative (Jiang et al., 2018). (b) A connection between the gasifier and an aquifer would cause groundwater pollution, and a connection between the gasifier and the ground surface would bring air pollution, indicating that UCG entails environmental risk (Kotyrba and Stańczyk, 2017; Verma et al., 2015).
The main reason for these problems is that the main factors affecting the movement and failure of surrounding rocks in a burnt-out zone under high temperature are hard to grasp and control in the design of gasifiers, and this situation has affected the popularization and application of the UCG technology (Liu et al., 2007; Stuermer et al., 1982; Yang et al., 2008). Researchers have conducted many studies and achieved fruitful results in ensuring the stability of the gasifier in the UCG process. Elahi et al. (2017) considered the stress change and deformation of surrounding rocks in a burnt-out area under thermo-dynamic coupling. They found that various constitutive models lead to different deformations of the surrounding rocks. On the basis of elastic foundation beam theory, Xin et al. (2016) established the theory of the thermo-elastic foundation beam in UCG under high temperature and analyzed the movement and deformation of the roof. Liu (2014) studied roof stability during gasification channel extension. Shahbazi et al. (2019) considered the change in mechanical properties of rock mass under high temperature, studied the stability of an underground gasification shaft under thermo-mechanical coupling, and proposed a method to solve the problem. Li et al. (2017) investigated the stability of hyperbolic pillars in UCG under the coupling of high temperature and in-situ stress. Najafi et al. (2014) established a design method to analyze UCG pillar stability based on CRIP technology and built a numerical model in FLAC3D to verify the reliability of the method. Tang (2013) investigated the failure law of the overlying strata above the roof under UCG conditions. Yang et al. (2014) established a thermo-mechanical coupling model with the ABAQUS finite element software to analyze the temperature field conduction of surrounding rocks and their stress distribution and surface subsidence in a burnt-out zone. Laouafa et al. (2016) adopted the infinite element method to study the influence of the high-temperature effect of surrounding rocks in a burnt-out zone on the movement and destruction of surrounding rocks and on surface subsidence. Li et al. (2016) compared the different surface subsidence laws caused by UCG mining and conventional strip mining and then proposed a prediction method for UCG mining subsidence. Mellors et al. (2016) investigated the surface subsidence caused by UCG using SAR technology. These past studies strongly promoted the development of UCG, but the heterogeneity of rock mass after high-temperature burning was not considered, indicating limitations in the application of their research results. In this study, a thermo-mechanical coupling numerical simulation method is proposed on the basis of the characterization of how the mechanical properties of a rock mass vary dramatically with temperature. The block test results show that the method can reflect the heterogeneity of the rock mass in a high-temperature environment. Our study provides an effective method to numerically simulate geotechnical engineering in high-temperature environments. In the UCG simulation experiments, we found that the variation of the rock's mechanical properties after an increase in temperature is the main reason for the change law of movement and failure of the UCG rock strata. These factors should be considered accordingly in the study of movement and failure control of UCG rock strata.
The research results provide an important means to accurately and effectively study geotechnical engineering under high-temperature conditions, and they offer an important reference for subsequent research on UCG.
Numerical simulation for the thermo-mechanical coupling of high-temperature heterogeneous rock
Methods and procedures
In geotechnical engineering, the change in the mechanical properties of materials with temperature has often been neglected in past studies on the numerical simulation of thermo-mechanical coupling. The main reason is that the change in rock mechanical properties with temperature and the propagation of the temperature field are extremely complex. This study utilizes FLAC3D as an example to investigate the failure characteristics of high-temperature heterogeneous rocks. FLAC3D is a finite difference program based on a continuum model, and it is widely used in geotechnical engineering. The numerical simulation method, which considers the thermo-mechanical coupling of high-temperature burnt rock, is implemented in four steps (Figure 1). 1. Establish an appropriate model according to the requirements. (a) Determine the shape and size of each part of the model and of each unit. (b) Determine the initial properties, stress field, temperature field, initial thermodynamic parameters, and boundary conditions of the model.
Simulated analysis of test block
A rectangular test block model with a width of 50 mm and a height of 100 mm is established. The model consists of 2000 regular hexahedral elements, and the edge length of each zone is 5 mm. A Mohr-Coulomb model is used as the constitutive model. The initial temperature of the model is 20 °C. Loading plates are added to both ends of the model, and the load is applied at a fixed speed on top of the model to determine the compressive strength (Figure 2). Sandstone is selected as the model material, and its initial thermal and mechanical properties are shown in Table 1. The variation of the mechanical properties of sandstone with temperature is shown in Table 2. Before experiencing high temperature, the model was isotropic (Figure 3(a)). A surface heat source of 300 °C is applied to the bottom of the model. After heating for 2 h, the temperature field distribution is determined (Figure 3(b)). Given that the variation of rock mechanical properties with temperature is highly complex, parameters cannot be assigned to each unit one by one using the property-assignment functions built into FLAC3D. Moreover, even if parametric precision is reduced, the efficiency of assigning parameters is extremely low. The FISH program developed in this study can assign parameters to the model based on the non-homogeneous rock mass after an increase in temperature (Figure 3(c)), and thus three defects of current mainstream simulation software are solved as follows: (a) the limitation on the number of materials in a single model is overcome; (b) the efficiency of assigning parameters to non-homogeneous materials is increased; and (c) the program can be used to study the orderly/random variation of unit parameters, indicating improved simulation accuracy for non-homogeneous materials. Figure 4 shows the variation of internal temperature and mechanical properties with height in the model. The temperature decreases with the increase in height, from ~300 °C to ~20 °C. The mechanical properties also change with height. Where the temperature drops to room temperature, the properties also return to their initial values. The variation law with temperature is similar to that of sandstone in Table 2.
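To illustrate the zone-by-zone parameter-assignment logic described above, the following sketch maps each zone's average temperature to a set of mechanical properties before the mechanical calculation. The original study implements this as a FISH script inside FLAC3D; the Python version below is only a minimal sketch of the mapping idea, and the piecewise factors and baseline values are placeholders standing in for the empirical fits of Table 2, which are not reproduced here.

```python
# Illustrative sketch (not the authors' FISH code): assign temperature-dependent
# mechanical properties zone by zone. All numerical relations below are assumed.
from dataclasses import dataclass

@dataclass
class ZoneProps:
    temperature: float    # average zone temperature, °C
    young_modulus: float  # Pa
    cohesion: float       # Pa
    tension: float        # Pa

def modulus_factor(t_c: float) -> float:
    """Placeholder for an empirical E(T)/E(20 °C) relation."""
    if t_c < 100.0:
        return 1.0
    if t_c < 300.0:
        return 1.0 + 0.002 * (t_c - 100.0)            # assumed mild stiffening
    return max(0.4, 1.4 - 0.002 * (t_c - 300.0))      # assumed softening at higher T

def strength_factor(t_c: float) -> float:
    """Placeholder for a monotonically decreasing c(T)/c(20 °C) relation."""
    return max(0.2, 1.0 - 0.0015 * (t_c - 20.0))

def assign_properties(zone_temps, e0=20e9, c0=2e6, t0=1e6):
    """Map each zone's average temperature to a full property set."""
    zones = []
    for t in zone_temps:
        zones.append(ZoneProps(
            temperature=t,
            young_modulus=e0 * modulus_factor(t),
            cohesion=c0 * strength_factor(t),
            tension=t0 * strength_factor(t),
        ))
    return zones

# Example: a column of zones from the heated bottom (~300 °C) to the top (~20 °C)
for z in assign_properties([300, 240, 180, 120, 60, 20]):
    print(f"{z.temperature:5.0f} °C  E = {z.young_modulus:.2e} Pa  c = {z.cohesion:.2e} Pa")
```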
The variation law with temperature is similar to sandstone in Table 2. As shown in Figure 5, the mechanical strength of the model varies considerably under high temperature. At the elastic stage, the stress-strain curves of the specimen at room and high temperatures almost coincide; the reason is that the elastic modulus of sandstone does not change remarkably below 300 C. After entering the Source: reproduced with permission from Zhu et al., 2006;Su et al., 2008;Liet al., 2014;Wan, 2006;Zhang, 2012;Min, 1982;and Shoemaker et al., 1977. plastic stage, the compressive strength of the model decreases from 6.7 to 5.1 MPa. The change can be explained by strength properties, such as cohesion, tensile strength, and internal friction angle, which decrease sharply under high temperature. Thus, the hightemperature burnt state of rocks has a considerable influence on the simulation results. Given that the mechanical properties of the mesh in the model are based on the average temperature of the mesh, the size of the mesh may affect the strength of the model. In this study, the block models with mesh sizes of 2.5, 3.3, 5, and 10 mm are established, and the compressive strength is studied ( Figure 6). The compressive strength of the model decreases with the increase in mesh size. When the mesh size is 2.5 mm, its compressive strength is 5.4 MPa. When the mesh size increases to 10 mm, its compressive strength decreases to 5.0 MPa and is gradually stabilized. The reason is that when grid size is increased, the number of zones forming the test block decreases. As a result, the block tends to be uniform, and its compressive strength tends to be stable. Overview of investigated area The latitude. This area is rich in coal. The gasification experiment is performed on the #2 coal seam in the strata of the area allocated for UCG research. Since the ignition experiment on 5 October 2007, a large amount of high-quality gas was obtained, and the experiment was also completed successfully. Rock mechanics properties under high-temperature This study summarizes the variations of physical and mechanical properties of sandstone, mudstone, and coal with temperature according to the literature and then establishes the empirical formula of the mechanical properties of rock mass with temperature by regression analysis (Table 2). Sandstone: According to Figure 7(a), the elastic modulus of sandstone increases, decreases and increases again with the rise in temperature. The maximum value is 1.4 times, and the minimum value is 0.4 times, relative to that at room temperature. Source: reproduced with permission from Zhu et al., 2006;Su et al., 2008;Li et al., 2014;Wan, 2006;Zhang, 2012;Min, 1982;and Shoemaker et al., 1977. Poisson ratio, tensile strength, cohesion, and internal friction angle all decrease with the increase in temperature (Figure 7(a)) (Li et al., 2014;Su et al., 2008;Zhu et al., 2006). Mudstone: Figure 7(b) shows the variation laws of the physical properties of mudstone with temperature. The elastic modulus shows an increasing-decreasing-increasing trend with temperature. The maximum value is 2.2 times, and the minimum value is 0.4 times, relative to that at room temperature. The Poisson ratio of mudstone varies with temperature. When the Poisson ratio decreases gradually from 20 C to 300 C, the minimum value is 0.3 times relative to that at room temperature; afterwards, it becomes consistent. 
The variation laws of the tensile strength, cohesive force, and internal friction angle of mudstone with temperature are similar, i.e. they gradually decrease as the temperature increases (Wan, 2006; Zhang, 2012). Coal: According to Figure 7(c), the elastic modulus of coal is stable at temperatures lower than 150 C and drops rapidly beyond 150 C. The elastic modulus of the coal sample drops to ~1% of that at normal temperature and remains constant beyond 400 C. Tensile strength, cohesion, and internal friction angle follow similar laws. The Poisson ratio of coal increases as the temperature increases, reaches about 1.6 times that at normal temperature, and then becomes stable beyond 400 C (Min, 1982; Shoemaker et al., 1977). Therefore, the isolated coal pillar becomes invalid and loses its bearing capacity beyond 400 C during UCG. Introduction of gasification technology and process The experiment adopts the underground gasification technology of a "strip mining-surface mining" gasifier with retreating controlled gas injection. The arrangement of the working surface is similar to that of strip mining, as shown in Figure 8. During the gasification process, the ignition channel is first processed, and then the gasification operation is carried out at the first gasification working face. The ignition channel is used as the starting point of gasification, and gasification continues until the end point of the gasification surface is reached. After 90 d, a burnt-out zone with a width of 16 m and a length of 170 m is formed. Each subsequent working face requires 90 d to complete the gasification work, and four burnt-out zones are finally formed when the gasification of the stope is completed. Field measurement In this study, a high-precision GPS dynamic monitoring point called CG05 is established in the middle of the test area (Figures 8 and 9). The monitoring point is located between the second and third gasification working faces, in the middle of the gasification stope. The surface subsidence measured at this monitoring station therefore likely corresponds to the maximum subsidence. According to the monitoring results, the maximum subsidence of the monitoring point is 36 mm three months after the four gasification surfaces complete gasification. Thus, gasification does not lead to serious surface subsidence. Model establishment and boundary conditions The shape of the burnt-out zone after the UCG process is similar to a rectangle (Li et al., 2016). A total of four strips are gasified in the experimental station. The length of the gasification area in the strike direction is 136 m, and that in the inclined direction is 170 m. The width of each gasification strip is 16 m, and that of each isolation coal pillar is 24 m. The goaf modeling method and its corresponding grid size are the same as those of the coal seam. The model excavation is realized by means of the null model. The influence of the model boundary is minimized by establishing a model with a length of 600 m, a width of 600 m, and a height of 317 m. The floor thickness is 50 m, the coal seam thickness is 5 m, and the roof of the coal seam is 262 m from the surface. The model is composed of regular hexahedrons with grid sizes varying from 2 to 8 m. The model contains 2,843,850 nodes and 2,373,750 grids. The grids near the burnt-out zone are fine, whereas those far from the burnt-out zone are coarse (Figure 10).
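As a quick consistency check on the stope geometry described above, four 16 m strips separated by three 24 m isolation pillars reproduce the quoted 136 m strike length. A minimal sketch (variable names are illustrative):

```python
# Plan-view geometry of the gasification stope described above.
strip_width_m = 16.0        # width of each gasified strip (burnt-out zone)
pillar_width_m = 24.0       # width of each isolation coal pillar
inclined_length_m = 170.0   # length of each strip in the inclined direction
n_strips = 4

strike_extent_m = n_strips * strip_width_m + (n_strips - 1) * pillar_width_m
print(strike_extent_m)      # 136.0, matching the stated strike length

gasified_plan_area_m2 = n_strips * strip_width_m * inclined_length_m
print(gasified_plan_area_m2)   # 10880.0 m^2 of gasified seam in plan view
```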
Many studies have proved that the high-temperature impact of UCG does not exceed 20 m (Li et al., 2018); thus, thermal coupling calculations are performed only within the high-temperature range. The Mohr-Coulomb model is applied as the constitutive model of rock failure. The initial thermo-mechanical properties used to calculate the model's thermal coupling are shown in Tables 3 and 4. After gasification, the model is parameterized to simulate the heterogeneity of the surrounding rocks in the burnt-out zone. The burnt-out zone itself is set as the null model to simulate coal seam gasification. The bottom of the model has a fixed boundary that disallows movement. The top of the model has a free boundary that can move freely in any direction. The boundaries on the two sides in the x and y directions are fixed parallel to the X and Y axes correspondingly, and movement across them is disallowed. During gasification, a gasified surface temperature of 1400 C is applied to the surface of the burnt-out zone. The initial temperature of the model is 20 C, and the model boundary is adiabatic. After the model is established, it is imported into the FLAC3D software for the numerical calculation. Simulation scheme and method The two main factors behind the change in the movement law of the strata during the UCG process are (a) thermal stress and (b) the thermal burning of the surrounding rock. Four numerical models are established to analyze the main controlling factors affecting rock movement. 1. In conventional mining conditions, no heat effect exists in the mining process. 2. The surrounding rocks are affected by thermal stress only in the burnt-out zone. 3. The surrounding rocks are subjected to thermal burning only in the burnt-out zone. 4. The thermal stress acts synergistically with the thermal burning of the coal-rock. Results Distribution of temperature field around the burnt-out zone. Heat conduction is the main mode of heat transfer in UCG. The heat conduction model of the temperature field in the continuous medium is shown as formula (1): q·k·∂T/∂t = a·(∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²), (1) where k is the specific heat capacity, a is the heat conduction coefficient, and q is the material density. The profile range of the high-temperature influence on the gasifier after simulation for 90 d is shown in Figure 11(a). When the temperature field propagates in the rock materials, the closer the temperature is to the original rock temperature, the slower the convergence rate becomes. Here, 20.5 C can be regarded as the boundary of the temperature field's spread range. As shown in Figure 11(a), the propagation range of the temperature field for the floor is ~8.7 m and that for the roof is 12 m. The variation speed of temperature within the rock gradually decreases as it approaches the burnt-out zone, and the temperature of the roof drops more slowly than that of the floor. The propagation range of the high temperature in the coal pillars is ~8.5 m, and the temperature fields are symmetrically distributed on both sides of each coal pillar. Inside the coal pillar, the range in which the temperature exceeds 400 C is ~2.2 m. According to the variation laws of the mechanical properties of coal with temperature, the coal within this ~2.2 m range is considered to have failed under the influence of high temperature, as it has completely lost its bearing capacity. Figure 11(b) shows the distribution of the elastic modulus of the surrounding rocks of the burnt-out zone after the increase in temperature.
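As a rough, order-of-magnitude cross-check of the propagation ranges quoted above, formula (1) can be evaluated for an idealized one-dimensional case using the classical semi-infinite-solid solution. This is only an illustration: the thermal diffusivity below is an assumed typical value, not the property set of Tables 3 and 4, and the coupled 3D FLAC3D model is what produces the reported 8-12 m ranges.

```python
import math

alpha = 1.0e-6                       # assumed thermal diffusivity of rock, m^2/s
t = 90 * 24 * 3600.0                 # 90 days of gasification, in seconds
T_surface, T_rock = 1400.0, 20.0     # gasifier surface and original rock temperature, C

def temperature(depth_m: float) -> float:
    """T(x, t) for a semi-infinite solid held at a constant surface temperature."""
    return T_rock + (T_surface - T_rock) * math.erfc(depth_m / (2.0 * math.sqrt(alpha * t)))

for depth in (1, 2, 5, 10, 15):
    print(f"depth {depth:2d} m : {temperature(depth):7.1f} C")
```

With this assumed diffusivity, the 400 C isotherm sits a few metres into the rock and the field is essentially back to the original rock temperature beyond roughly 10-15 m, the same order as the simulated ranges.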
The surrounding rock mass of the burnt-out zone presents high heterogeneity in its elastic modulus, and the other mechanical parameters show similar heterogeneity; they are not individually detailed in this paper. Distribution of vertical stress around the burnt-out zone. The thermal stress generated during UCG and the variation of rock properties have a major influence on the change in internal stress in the surrounding rocks of the burnt-out zone. Taking into account thermal stress and thermal burning, the elastic constitutive model of the internal stress-strain of the stratum is given by formula (2): σij = Cij·εij − βij·ΔT, (2) where σij is the internal micro-element stress, εij is the internal micro-element strain, Cij is the material stiffness matrix, βij is the material thermal modulus, ΔT is the difference between the current temperature and the original temperature of the rock, and T is the current temperature of the model. Figure 12 shows the similarities in the vertical stress distributions of the roof, floor, and coal pillars. Stress is concentrated on the coal pillars and on the roof and floor on both sides of the coal pillars. The stress extremes near each coal pillar decrease gradually from the middle to the two sides of the mining area; the stress extreme of the coal pillar in the middle of the mining area is the maximum, while the stress extremes on the two sides of the coal wall are the minimum. The stress distribution is symmetrical for the coal pillars in the middle of the mining area. According to Figure 12(a), the vertical stress extreme of the roof is located at the top of the central coal pillars. When the rock is under the collaborative influence of thermal stress and thermal burning, the extreme is −14 MPa. When the rock is under the influence of thermal burning only, the extreme is similar at −13 MPa. The extreme is −9 MPa without the thermal effect, and it increases slightly to −9.3 MPa when the only influence is thermal stress. Meanwhile, according to Figure 12(b), the vertical stress of the floor is slightly smaller than that of the roof overall. The vertical stress extreme under the collaborative effect of thermal stress and thermal burning is the maximum at −11 MPa, which is the same as that under the influence of thermal burning only. The vertical stress is the minimum at −9 MPa without the thermal effect and increases slightly to −9.3 MPa under the influence of thermal stress only. Figure 12(c) shows the vertical stress distribution of the coal pillars. The maximum vertical stress is −15 MPa under the collaborative effect of thermal stress and thermal burning, and the minimum vertical stress is −12 MPa without thermal action. When affected by thermal burning only, the vertical stress of the coal pillar decreases slightly to −14.8 MPa; when affected by thermal stress only, the vertical stress increases slightly to −13 MPa. Additionally, the vertical stress in the coal pillar first increases and then decreases from both sides toward the middle part. The distance between the stress extreme and the coal pillar edge increases gradually from the two sides of the mining area to the middle part of the coal pillar. The distance reaches a maximum of 8 m under the collaborative effect of thermal stress and the thermal burning of the surrounding rocks; this is the same as when affected by high-temperature burning only. The distance is a minimum of 2 m without the thermal effect, which is the same as when affected by thermal stress only.
Distribution of plastic regions of coal-rock around the burnt-out zone. In this simulation, the Mohr-Coulomb failure constitutive model is adopted as the failure criterion. With the thermally burnt state of the coal-rock taken into account, the shear failure criterion is as follows: fs = σ1 − σ3·(1 + sin φ)/(1 − sin φ) + 2c·√[(1 + sin φ)/(1 − sin φ)], with shear failure occurring when fs < 0, where σ1 is the minimum principal stress, σ3 is the maximum principal stress, c is the cohesive force, and φ is the internal friction angle. The tensile failure criterion is ft = σ3 − σt, with tensile failure occurring when ft > 0, where σt is the tensile strength of the material. According to Figure 13(a) and (b) (Figure 13 shows the plastic area distribution of the surrounding rocks in the burnt-out zone after 90 simulated days), the developmental ranges of the surrounding rock failure areas of the gasifier are the same under normal temperature and under gasification when affected by thermal stress only. The roof failure height is 8 m, and the floor failure height is 4 m. The failure area width on the two sides of the coal pillar is 2 m, and the width of the elastic area of the coal pillars is 20 m. The failures of the surrounding rocks under normal temperature are all shear failures, and a small area of tensile failure appears on the roof and the floor under thermal stress. According to Figure 13(c) and (d), the developmental range of surrounding rock failure of the gasifier under thermal burning only is the same as that under the synergistic effect of thermal stress and thermal burning. The roof failure height is 16 m, and the floor failure height is 6 m. The failure area width of the coal pillars on the two sides of the burnt-out zone in the central mining area is 8 m, while the failure area of the coal pillars on the two sides of the burnt-out zone in the marginal mining area is somewhat narrower at 6 m. The width of the elastic area of the central coal pillars is 8 m, and the width of the core area of the coal pillars on both sides is 10 m. The tensile failure ranges of the surrounding rocks in the burnt-out zone under the collaborative effect of thermal stress and thermal burning are larger than those under the action of thermal burning only, and tensile failure areas are observed on the floor. Distribution of vertical deformation around the burnt-out zone. After gasification for 90 d, the deformations of the roof, floor, and coal pillars generally decrease from the middle of the mining area to the two sides. The deformation curves are symmetrical for the coal pillar in the middle of the mining area, and the deformation extreme appears near the middle of the coal pillar. According to Figure 14(a), the subsidence of the roof is mainly concentrated in the burnt-out zone. A certain amount of subsidence is observed in the isolated coal pillars and in the roof above both sides of the coal walls, but the amount of subsidence is relatively small. In conventional mining conditions, the maximum subsidence of the roof is 0.13 m, which increases only slightly to 0.16 m under the action of thermal stress. The maximum roof subsidence increases sharply to 0.4 m after coal-rock thermal burning, and that under the synergistic action of thermal stress and the thermal burning of the surrounding rocks is 0.46 m. Figure 14(b) shows the vertical deformation laws of the floor. A slight uplift of the floor occurs at the bottom of the burnt-out zone, whereas the bottom of the coal pillars is in a state of sinking.
This phenomenon is due to the stress of the floor that is released after the coal seam is burned, thus enabling the floor rock to rebound and cause a heaving of the floor in the burnt-out zone. Meanwhile, the load of the overlying rock is concentrated under the coal pillar and causes a compression of the floor. The figure also shows that the floor bulge is closely related to thermal stress. The maximum value of the floor bulge is 0.047 m without the thermal effect and increases to 0.062 m under the thermal stress effect only. The floor bulge slightly increases to 0.069 m under thermal burning and then rapidly increases to 0.098 m under the collaborative effect of thermal stress and thermal burning. According to Figure 14(c), the compression of coal pillars is gradually decreased from the two sides of the coal pillars to the middle part. This phenomenon is attributed to the two damaged sides of the coal pillars, indicating that these parts have lost their bearing capacity, further resulting in a sharp increase in deformation. The compression amount of both sides of the coal pillar is at least 0.037 m in conventional mining conditions, and this value is the same under thermal stress only. When the coal pillar is only subjected to thermal burning, the compression amount of both sides increases sharply to 0.2 m, and the maximum compression amount is 0.22 m under thermal stress and high-temperature burning. The compression amount in the middle of the coal pillars is somewhat small. The minimum compression amount is 0.01 m in conventional mining conditions and under the thermal stress effect only. The compression increases sharply to 0.036 m after the thermal burning of the coal pillars. The compression amount increases to 0.037 m under the collaborative effect of thermal stress and thermal burning. Subsidence distribution on the surface. In the process of UCG, the deformation of surrounding rocks in the burnt-out zone is gradually transferred upward and finally subsides on the surface to form a subsidence basin. The subsidence of the surface will not only threaten the safe use of building structures but also cause an ecological degradation on the surface. When the 10 mm surface subsidence is taken as the boundary of the subsidence basin (Figure 15), the maximum surface subsidence is 0.011 m in conventional mining conditions, and the subsidence area is 9432 m 2 , which is similar to the maximum subsidence at 0.0109 m under thermal stress effect only. At this time, the subsidence area is slightly reduced to 7893 m 2 . The maximum surface subsidence under the thermally burnt effect of the surrounding rocks in the burnt-out zone is sharply increased to 0.03 m, and the subsidence area is increased to 134,441 m 2 , which is similar to the maximum subsidence at 0.031 m under the collaborative effect of thermal burning and thermal stress (i.e. the corresponding subsidence area is 137,324 m 2 ). As for the existing UCG process, the UCG has a minimal influence on surface subsidence; considering that the process is expected to improve, the surface subsidence caused by UCG has to be further investigated. Discussion In geotechnical engineering, the heterogeneity of rocks is one of the popular topics in current research. Most especially in high temperatures, the change in mechanical properties of a rock mass causes a change in deformation and a failure mode, and these factors bring challenges to UCG and geothermal resource exploitation. 
In this study, geotechnical engineering problems are resolved by numerical simulations, and their results are obtained. Past scholars considered the influence of thermal stress on the stability of gasification channels. Many of them believed that the thermal stress of roofs increases the bending moment and subsequently the threat of roof collapse (Li et al., 2017;Xin et al., 2016). Other scholars considered the change in mechanical properties of rocks under high temperature, but the average value method is often used at a certain range in their calculations, and this approach reduces the accuracy of their calculations. As for the method in this study, each zone in the model is separately assigned with parameters, and the role of thermal stress and high-temperature burning of rocks on the movement of the UCG strata is accurately studied (Najafi et al., 2014;Otto and Kempka, 2015). The simulation results show that the high-temperature denaturation of coals and rocks is the key factor affecting the movement and destruction of the rock strata. Although certain problems caused by the high-temperature heterogeneity of rocks are discussed in this study, the research results are based on the partial simplification of the model, and its conclusions still have some limitations. In the numerical simulation of geotechnical engineering, the mesh size of the model often has a major impact on the simulation results. Most especially for high-temperature heterogeneous problems, the accurate selection of the mesh determines the correctness of the problem-solving approach. In this study, the minimum mesh of the model near the goaf is 2 m, which has occupied 20 GB of memory and consumed substantial computing time. Therefore, the traditional PC has presented some limitations in the study. With the progress of computer technology, this problem will be solved. The Mohr-Coulomb model in this study is used as the failure model of rocks. However, several studies have proven that the failure mode of a rock mass gradually changes from brittleness to toughness under temperatures exceeding 600 C. These factors should be considered when studying the failure of hightemperature heterogeneous materials. The temperature field in this research is in the range of 2-3 m inside the rock mass and set to rapidly decrease from over 1000 C to $400 C. In other words, the problem is partly simplified. In the future, the heterogeneity of a constitutive model for rock mass failure can be studied more effectively. The three main modes of heat diffusion are conduction, convection, and radiation. Many studies have shown that the main form of heat diffusion in the surrounding rocks of UCG is heat conduction. However, with the failure of surrounding rocks, the high-temperature convection of gas should also be taken into account. The results of the current research on the propagation of gas around the surrounding rocks after failure and the thermal convection characteristics of high-temperature gas have yet to be comprehensively determined; such an investigation is not carried out in the present study. From the above analysis, several problems can be solved in the numerical simulation of a high-temperature heterogeneous rock mass. With the deepening of research, the above problems will be the focus of future research. Conclusions 1. In high-temperature conditions, not only high-temperature thermal stress is produced in the rock mass, but the mechanical properties of materials are also substantively changed. 
On the basis of this problem, we establish a numerical simulation method for the thermo-mechanical coupling of a non-homogeneous rock mass under high temperature, thus solving the numerical simulation problem of a high-temperature burnt rock mass. 2. The numerical simulation method is used to study the internal temperature field and the mechanical properties after the expansion of the temperature field. The uniaxial compression test results show that failure strength changes considerably under high temperature. The experimental results are accurate and reasonable, and they prove that the method can be applied to the thermo-mechanical coupling study of a high-temperature non-homogeneous rock mass. 3. The simulation method is applied to the research on the strata movement and the failure of UCG. Strata movement and failure are separately studied under non-thermal action, thermal stress action only, high-temperature burnt action only, and thermal burning and thermal stress synergistic action. Variations of mechanical properties of the rock mass under high temperature are the main reason for the change law of strata movement and failure of UCG. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
2020-01-23T09:19:48.495Z
2020-01-17T00:00:00.000
{ "year": 2020, "sha1": "8bf2ed775d55a4561325914ab4611bcc92b2733f", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0144598719888981", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "193a6a097166aa0aec8c389b3225190e6eab6ee0", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Materials Science" ] }
237380457
pes2o/s2orc
v3-fos-license
Comparison of Microbial Detection of Hemodialysis Water in Reasoner’s 2A Agar (R2A) and Trypticase Soy Agar (TSA) ƒThis is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/ license/by-nc/3.0/). The quality management of dialysis water used as dialysis fluid is important for patients exposed to large amounts of water. The treatment of dialysis water causes chemical and microbiological contamination. Dialysis water contaminated with bacteria causes various diseases and inflammatory reactions due to the inflow of toxins into the body. Consequently, the aim of this study was to understand the sensitivity of agar for the detection of bacteria in dialysis water, the seasonal characteristics of bacterial culture, and bacterial identification. In all, 420 samples of dialysis water collected from a hospital between September 2017 and August 2018 were cultured at clinical laboratories. The bacterial growth rate of R2A was 99 cases (23.5%), and that of TSA was 47 cases (11.1%). R2A was more sensitive than TSA for samples incubated above 1 CFU/ml in hemodialysis, and TSA was more sensitive than R2A for samples incubated above 50 CFU/ml. The morphological characteristics of the microorganisms were confirmed by gram staining 188 strains of 30 isolates from the specimens. In R2A, Gram-positive bacteria were isolated in 33.3% (n = 42), Gram-negative bacteria were isolated in 56.3% (n = 71), and fungal strains were isolated in 10.3% (n = 13). In TSA, Gram-positive bacteria were isolated in 33.8% (n = 21), Gram-negative bacteria were isolated in 64.5% (n = 40), and fungal strains were isolated in 1.6% (n = 1). In addition, seasonal distinctions were observed in microbial cultures. INTRODUCTION Healthcare-associated infection (HAI) includes nosocomial infections occurring in hospitals, out-of-hospital infections resulting from discharge, and outbreaks in the community (1). According to a study in the United States, HAI caused by multidrug-resistant bacteria infections occur in more than two million people each year. It was confirmed to account for 1% of all deaths (2). HAI includes surgical wound infections, bloodstream infections, urinary tract infections, and respiratory infections. Patients with significantly reduced immunity, such as the elderly, chronically ill, and organ transplant patients, are particularly vulnerable to HAI. Among the various treatments of patients, renal patients undergoing hemodialysis are treated with reduced immunity and are easily exposed to HAI, including injection needle accidents and multidrug-resistant bacteria, so infection control for hemodialysis is important (3,4). According to the Korean Nephrological Association's report "2018 Korea's Renal Replacement Therapy," the number of patients undergoing renal replacement therapy in Korea is 98,746, of whom 73,059 are undergoing hemodialysis. In particular, the number of renal failure patients per one million population was reported as 1,907. The number of people with renal failure is increasing every year, and the mortality rate is high, so many academics and the media are interested. Chronic kidney disease is generally associated with old age, diabetes, hypertension, obesity, and cardiovascular disease, and hemodialysis is a major treatment for these kidney diseases (5). Most patients with CKD receive hemodialysis three times a week, and the dialysis regimen is targeted to eliminate about two-thirds of urea during each treatment (6). 
Waste products, such as urea, diffuse through a thin membrane that separates the dialysate flowing in the opposite direction from the blood. Most bacteria and viruses present in dialysate have a high molecular weight and cannot pass through a thin membrane. However, toxins and the like have a small molecular weight, so they can pass through the membrane, requiring special attention in the use of the dialysate (7). The quality management of dialysis water used as dialysate is important for patients exposed to large amounts of water (8). Abnormalities in the water purification facility of dialysis water cause chemical and microbiological contamination, and dialysis water contaminated with bacteria causes not only inflammatory reactions but also various diseases due to the influx of toxins into the body (9). Fever from bacterial infection is caused by bacterial inflammatory substances and stimulates the secretion of interleukin-1 (IL-1) and cytokines by peripheral blood mononuclear cells (PBMC) (10). IL-1 causes an increase in C-reactive protein (CRP), and cytokine mediates the host response to infection and is involved in acute and chronic inflammation of bacterial ions (11). Therefore, even if there is no sign of fever, increased cytokine production due to the contamination of dialysis water causes chronic inflammatory conditions, resulting in dialysis-related amyloidosis, hypoalbuminemia, atherosclerosis, hypotension, and coma (12). In medical institutions that perform dialysis, HAI should be prevented through the regular monitoring of dialysis water. Detailed guidelines for managing dialysis water are provided by organizations such as the Association for the Advancement of Medical Instrumentation (AAMI), the American National Standards Institute (ANSI), the International Organization for Standardization (ISO), and the European Best Practice Guidelines (EBPG). According to the AAMI guidelines (13), the quality control of dialysis water includes a chemical test ( of endotoxins and microorganisms. According to the AAMI guidelines, the acceptance criteria for dialysis water are less than 100 CFU/ml of bacteria and less than 0.25 EU/ml of endotoxins. In dialysis water, the action level (the concentration of bacterial contamination that must be taken before the maximum limit) is 50 CFU/ml of bacteria and 0.125 EU/ml of endotoxins, and corrective action must be taken quickly to drop below this level. As for the enforcement standards for each test, chemical tests should be conducted at least once a year, endotoxin tests must be conducted once a quarter, and microbial culture tests must be conducted once a month. For microbial culture tests, Trypticase Soy Agar (TSA) is recommended by the AAMI in the United States, and Reasoner's 2a Agar (R2A) is recommended by the International Organization for Standardization (ISO) (14). R2A and TSA studies reported that microbial cultures using R2A showed higher microbial detection compared to TSA cultures (12,15). However, according to a study published in 2016, there was no significant distinction in microbial cultures using TSA and R2A (16). Looking at the above foreign cases, differences were confirmed in R2A and TSA. In addition, foreign research data did not include studies on environmental factors such as climate, season, and water quality in the country. 
In Korea, studies related to microbial culture tests and environmental factors for R2A and TSA in hemodialysis water are insufficient, and studies suitable for the domestic environment have been judged necessary to control infection and prevent HAI in hemodialysis water. Therefore, this study aims to analyze the microbial culture and sensitivity of R2A, and TSA used in microbiological tests of hemodialysis water, identify seasonal microorganisms, and use them as basic data for improving hemodialysis infection control. Study sample The hemodialysis water used in this study was a sample requested for microbial culture from September 2017 to August 2018, and 35 samples per month, a total of 420 samples were studied. According to the type of specimen, 396 cases of dialysis water and 24 cases of reverse osmosis water are classified. Sample Transport The samples were collected in a sterile, endotoxin-free container, and immediately transported to the laboratory for testing within 30 minutes. Inoculation Media Specimens requested from the laboratory were inoculated evenly on the surface of the medium by the spread plate method using a pipette in a TSA (Micromedia Co. Ltd., Korea) and an R2A (Asan Co. Ltd., Korea) medium, respectively. All tests were conducted in a biological safety cabinet (BSC class II), and the same specimen was repeated three times. Culture Methods As the culture media, TSA (Micromedia Co. Ltd., Korea) and R2A (Asan Co. Ltd., Korea) were selected, and an oxygen culture environment was selected as the culture condition. Regarding the incubation period, the TSA medium was cultured at 37°C for 48 hours, and the R2A medium was cultured at 27°C for one week (Table 3). Gram Stain The pure cultured independent colonies were smeared on slides, dried, and stained according to the manufacturer's standard usage guidelines using a Gram stain reagent (YD Diagnostics Co. Ltd., Korea). The staining and morphological characteristics of the microorganisms were observed using a microscope (x1,000). Microorganisms were identified by dividing them into gram-positive bacteria, gram-negative bacteria, and fungi through microscope checks. Statistical Analysis All experimental results were expressed as mean and standard deviation through a total of three experiments, and the statistical analysis tested for the mean value and significance using SPSS Version 25.0 for the window software program. The significance test was performed at the p < 0.05 level. RESULTS Comparing the number of dialysis water samples with bacterial culture by R2A and TSA Of the total 420 hemodialysis water samples, 28 cases (6.7%) showed positive results for both R2A and TSA, and 71 cases (16.9%) were positive for R2A and negative for TSA. There were 19 samples (4.5%) that showed negative results in R2A and positive results in TSA, and 302 cases (71.9%) were confirmed as negative results in both R2A and TSA (Table 4, Fig. 1). Comparison of the number of samples of dialysis water with bacterial positive cultures by R2A and TSA In confirming the positive rate of each medium in all samples, the microorganisms detected in R2A were higher than those detected in TSA, with 99 cases (23.5%) in R2A and 47 cases (11.1%) in TSA (Table 5). Comparison of bacteria growth on R2A and TSA media in identifying samples ≥50 CFU/ml Among the 420 specimens, 12 (2.8%) in R2A and 18 (4.2%) in TSA were found to be cultured with more than 50 CFU/ml, and TSA was more sensitive than R2A (Table 6). 
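The paired positive/negative calls in Table 4 can also be summarized with a McNemar-type test on the discordant pairs. The study's statistics were computed in SPSS; the snippet below is only an illustrative re-computation from the counts reported above (28 positive on both media, 71 positive on R2A only, 19 positive on TSA only, 302 negative on both), using the chi-square approximation with continuity correction rather than the exact procedure applied in SPSS.

```python
import math

both_pos, r2a_only, tsa_only, both_neg = 28, 71, 19, 302
total = both_pos + r2a_only + tsa_only + both_neg            # 420 samples

# Positive rate of each medium (99/420 for R2A, 47/420 for TSA).
print(f"R2A positive rate: {(both_pos + r2a_only) / total:.3f}")
print(f"TSA positive rate: {(both_pos + tsa_only) / total:.3f}")

# McNemar chi-square (continuity-corrected) on the discordant pairs.
chi2 = (abs(r2a_only - tsa_only) - 1) ** 2 / (r2a_only + tsa_only)
p_value = math.erfc(math.sqrt(chi2 / 2))                     # upper tail of chi-square, 1 df
print(f"McNemar chi-square = {chi2:.1f}, p = {p_value:.1e}")
```

The discordant pairs strongly favour R2A, which is consistent with the higher overall positive rate reported for that medium.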
Comparison of bacteria growth on R2A and TSA media in identifying samples <50 CFU/ml Among the 420 specimens, 87 (20.7%) in R2A and 29 (6.9%) in TSA were found to be cultured with less than 50 CFU/ml, and R2A was more sensitive than TSA ( Table 7). Identification of heterotrophic bacteria isolated from dialysis water from September 2017 to August 2018 Of the 30 strains isolated from a total of 420 specimens, 188 strains were subcultured and studied. The isolated strains for each medium were identified as 126 in R2A and 62 in TSA, and the morphological characteristics of the microorganisms were confirmed through Gram staining. Seasonal distribution of bacterial populations isolated from dialysis water The species of microorganisms separated from hemodialysis water were analyzed seasonally according to environmental factors, such as season and climate, in Korea (Table 9). Seasonal comparison of the number of samples of dialysis water with bacterial positive culture by R2A and TSA Because of the seasonal analysis of the one-year study, the positive cases of microorganisms isolated from R2A were confirmed to be 24 cases in spring, 19 cases in summer, 30 cases in autumn, and 26 cases in winter. The positive cases of microorganisms isolated from TSA were 10 cases in spring, nine cases in summer, 17 cases in autumn, and 11 cases in winter. It was confirmed that the positive rate of microorganisms cultured in R2A medium was higher in all four seasons than that of microorganisms cultivated in TSA, and a high positive rate was confirmed in autumn in both R2A and TSA mediums, showing characteristic results (Table 10). DISCUSSION The influx of microorganisms into the body is a risk factor that threatens human health by causing inflammatory reactions and various diseases. The rapid cultivation and identification of microorganisms is required for the treatment and prognosis of diseases. Consequently, various media have been developed, and there have been changes in the culture environment and conditions. In particular, the selection of the medium is the most important factor in increasing the detection rate of microorganisms. In this study, microbiological tests of hemodialysis water were performed using R2A and TSA to confirm the positive rate for each medium, the positive rate for each season, and the results of microbial identification. This contributed greatly to the prevention of medical-related infections and the management of artificial kidney center infections implemented in medical institutions. It is the first in the world to reflect climate change and seasonal specificity, and it is a meaningful study conducted over a long time period. The positive rates of microorganisms in hemodialysis water were confirmed in 99 cases (23.5%) in the R2A medium and 47 cases (11.1%) in the TSA medium. Therefore, the R2A medium was found to be more sensitive than the TSA medium. In the Netherlands (15), in a study conducted on 229 samples of hemodialysis water and reverse osmosis water, the R2A medium showed a higher positive rate than the TSA medium, confirming results similar to this study. In Thailand (12), a study conducted on 143 samples of reverse osmosis water, also confirmed that the R2A medium was more sensitive than the TSA medium. The positive rate of bacteria counts of 50 CFU/mL or higher was found to be 2.8% in R2A and 4.2% in TSA in this study, which was higher than that of 1.5% in R2A and 1.3% in TSA, the results of the US study. 
When comparing the results with those of advanced countries, it is thought that a more active management of hemodialysis water is necessary. R2A is a low-nutrient agar used with lower incubation temperatures and longer incubation times (17). However, TSA is a universal medium containing two peptones to support the growth of various microorganisms (18). Hence, the identification and positive rates of microorganisms cultured in the two media are considered different. The 188 strains of 30 species isolated in this study were classified into 33.5% Gram-positive microorganisms, 59.0% Gram-negative microorganisms, and 7.4% yeast-like fungi. Aeromonas sp., rarely cultured in TSA, is a bacterium that exists in fresh water and has been reported as a zoonotic pathogen that causes corneal inflammation in a recent domestic study (19). Moraxella sp. is also known as the causative agent of infectious diseases of otolaryngology (20). In Japan, bloodstream infections caused by Methylobacterium sp. have been reported in patients undergoing hemodialysis (21). Especially Sphingomonas paucimobilis, Acinetobacter lwoffii, and Oligella ureolytica showed high separation rates among Gram-negative microorganisms. In a Thai study, Pseudomonas spp. was 40%, Moraxella spp. 23%, Acinetobacter spp. 16%, Staphylococcus spp. 16%, Alcaligenase spp. 14%, Gram-negative rod 7%, Corynebacterium spp. 3%, Micrococcus spp. 3%, Bacillus spp. 1%, Chromobacterium spp. 1%, Gram-positive rod 1%, Rhodococcus spp. 1%, and Streptococcus spp. 1% was isolated, which was like this study in that the separation rate of Gram-negative microorganisms was high (12). The species of microorganisms isolated from hemodialysis water differed from season to season, and Acinetobacter lwoffii and Sphingomonas paucimobilis were identified as bacteria isolated throughout the year from R2A. Sphingomonas paucimobilis is an opportunistic pathogen that causes meningitis, sepsis, bacteremia, and peritonitis in people with reduced immunity (22). Acinetobacter lwoffii is a potential opportunistic pathogen in patients with impaired immune systems, and it has been identified as a cause of healthcare-associated infections (23). It is necessary to manage infections more effectively for hemodialysis water by monitoring bacteria that are separated year-round and bacteria that are separated by season. In foreign countries, there have been no studies on the detection of microorganisms in hemodialysis water related to season or climate. However, in Korea, the four seasons are distinct, and the temperature change varies greatly from season to season. The cultivation of microorganisms is closely related to the temperature and growth of bacteria, as well as the components of the medium and cultivation time. Therefore, if the R2A medium and the TSA medium are used together for the microbiological testing of hemodialysis water, it is believed that they can contribute to patient safety and public health safety infections.
2021-09-01T15:30:12.016Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "31e27a16bac28c116b17ad12d8ef208f96af1bee", "oa_license": "CCBYNC", "oa_url": "https://journal-jbv.apub.kr/articles/pdf/YqGy/jbv-2021-051-02-5.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d62f2d186ccb9d26255aff87851d530891e50211", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
254497456
pes2o/s2orc
v3-fos-license
Light People: Prof. John Bowers spoke about silicon photonics Editorial Silicon photonics is advancing rapidly with many scientific and engineering advances and many new applications for photonics. To highlight the topic, Light: Science & Applications invited John Bowers, director of the Institute for Energy Efficiency and distinguished professor from the University of California, Santa Barbara, to talk about the fundamentals and industries, and give a future perspective of silicon photonics. The below is summarized from the video interview of Prof. Bowers. The original interview can be accessed in Supplementary video. are all working on this. There are remaining issues of high yields, high reliability, cost reduction, and fiber attach. The packaging of electronics and photonics together is a challenge, in particular, the temperature limits placed by the fiber attachment step. But again, progress is very rapid. Q3: Silicon is not an ideal platform for light emitters, but your group developed a unique approach to create active optical components on silicon and achieved mass production in just the last few years. How did you come up with this great idea? A3: Silicon is incredibly bad as a light emitter. Its internal quantum efficiency is about one part in a million, whereas a direct bandgap III-V material's efficiency is essentially 100%. I knew from the beginning that we need to have a direct bandgap semiconductor, and I ignored most of the work on silicon, GeSn, and so forth. It was also clear that reliability would require high-quality materials. Defects are our big problems, particularly for lasers. Previously, we had developed bonded LEDs by putting GaAs on GaP back in the 1990s, and it was widely used in high-brightness LED by many manufacturers. So, it was natural to me to try bonding direct bandgap III-V to Si. And indeed, it worked very well. Our collaboration with Intel was essential to solving manufacturing issues and bringing it to high-volume production. Q4: After years of endeavor, what are the new challenges of light sources in silicon photonics? Your recent work also involves monolithic integration using quantum dot materials on silicon, how did you move into this research direction? A4: Heterogeneous integration has the advantage of being able to combine multiple materials together, bonded, and processed at one time. Conventional InP PICs have 5 or 6 regrowth steps, which is expensive, has problems, and limits the yield. With heterogeneous integration, one can bond the modulator, laser, and detector epi side by side at one time, and process them together. That's always going to be the advantage of heterogeneous integration. But the cost of the substrate is not insignificant. The size of III-V substrates is far smaller than 300 mm. This drives our interest in monolithic integration. Fortunately, the work we did together has now resulted in high-quality, high-yield lasers on epitaxially grown 300 mm substrates. That's very exciting, I initially never thought that would happen. It's remarkable. A big issue in monolithic GaAs on Si devices was whether they are quantum wells or quantum dots. After 40 years of research, quantum well devices still have a lifetime of only 1000 h at high temperatures, but quantum dots have over 1,000,000 h of lifetime at high temperatures. The detailed science of dislocation flow is interesting and it's important to solve these fundamental material issues. I'm continuing to be astounded by the differences. 
For the last 50 years, it has been difficult to make reliable GaAs pump lasers. The facet damage issues require delicate coating and processing. With silicon photonics, we can naturally get a reliable GaAs laser because we'll never expose III-V facets. It's been very lucky and fruitful. Q5: In addition to light sources, photodetectors and modulators are also important components in photonic integrated circuits. What poses the main challenges for high-speed and low-loss on-chip modulators? What about the noise of photodetectors for silicon photonics? A5: Typically, the main problem of modulators is microwave loss, especially at high speeds. For a conventional Lithium niobate modulator with a centimeter length, microwave loss is a big problem. When going to ring modulators with short lengths, people quickly achieved beyond 100 Gbit/s up to 180 Gbit/s. We'll see the same thing with Mach-Zehnder modulators'. With the high confinement in silicon cladded by silicon dioxide, and III-V cladded by silicon dioxide, we can make the devices quite short and have very high performance. Similarly, for photodetectors, there are two really good solutions to the noise issue. One is that with gain on silicon, preamplified PIN detectors can be quite efficient: 10 dB better than just a PIN by itself. Secondly, silicon is an excellent avalanche material. We worked with Intel a decade ago and demonstrated III-V on silicon APDs with gain bandwidth products of 800 GHz. We will see that sort of technology become commercialized because silicon is such a superb avalanche material. Similarly, there are a lot of advances in germanium and silicon APDs. There is a lot of progress yet to be done there. Q6: Compared to integrated circuits whose scale has gone down to several nanometers, will the scale limit in silicon photonic imparts its potential? A6: There is certainly a big difference in scale. I doubt there will ever be a need for silicon photonics to use 3 nm lithography. 45 nm technology is sufficient to make high-performance, high-quality silicon photonics devices. That is good because working in an older foundry at a lower lithography level is much cheaper. By 3D bonding the PIC to electronics that may be 3 nm or beyond, it allows us to get the best of both worlds. So, I do not think it makes sense to integrate photonics and electronics onto the same wafer in the same process flow. It makes both process flows more expensive and longer. It makes much more sense to do 3D integration of the most advanced electronics with the most advanced photonics. Today, a 5 μm diameter ring modulator can achieve high-capacity interconnects compatible with the best processors. So, the sizes are not going to get that small for photonics devices. Q7: To process silicon photonics in CMOS foundries, how do people control the contamination issues, especially when heterogeneous integrated III-V materials are introduced? A7: I think what everyone does is to include the III-V materials, at the dirty end of the process, so they're back in the copper end of the cycle. Most of these materials, indium, gallium, arsenic, and phosphorus, are already in the foundry. Therefore, operating in the dirty end of the foundry is not a big issue. For optical gyroscopes where low-loss optical waveguide is needed, high-temperature processing needs to be done prior to the introduction of III-V materials. The order of photonic integration steps does matter to solve that problem. 
Q8: Silicon photonics enables a wide range of applications, what's your perspective on this? A8: Silicon photonics is ideal for any applications that have high volume needs to scale rapidly. Data centers are the biggest and the most immediate application. Telecommunications, as Acacia has demonstrated, is a second high-volume application, where the uniformity and superior performance from silicon processing really helps. The lithography in a 65 nm process is far better than in a typical InP foundry. Gratings and everything else can be directly written with high performance. A third general application is optical LIDARs, though the cost must be small, and the complexity might be large for an optical phase array. Scanning the beam in 2D requires matching to the electronic drivers to control the optical phase array. So, it becomes important to have both chips on the same substrate (silicon) for 3D integration. Optical gyroscopes are another example where the chips may be quite large. To get a sensitive rotation sensor, the chip is going to be at least a centimeter on the side. This is benefited from the large area substrates, high-volume, and low-cost processing of silicon. Today, the best gyroscopes are fiber optics based. They work very well, but they're not integrated and are expensive. I think we can make better gyroscopes with silicon substrates and SiN waveguides. Datacom Self-driving AI AR/VR IoT Optical interface Silicon photonics enables a wide range of applications Q9: What chances can microcombs bring to silicon photonics? A9: The progress of microcombs is just phenomenal. If you look where we were a decade ago, and where we are today, it's just astounding. A decade ago, combs were noisy and now it's simple to make a single soliton and very quiet combs. I think the whole turnkey approach of generating solitons is the key. Every time you turn it on, you get a single soliton and low noise. Hundreds of lines can be generated, which are useful for applications including not only DWDM (dense wavelength division multiplexing), stable multi-wavelength sources, but also octave bandwidth generation for extremely quiet clocks. We've seen the ability to build an optical synthesizer where you can shift the frequency in steps of just 1 Hz across many THz of frequencies. I hope there will be a revolution analogous to what has happened when electronics developed synthesizers. Much better frequency control enabled a whole host of new applications. The ability to control laser frequencies to 1 Hz will open a lot of applications as well. Q10: How about quantum computing? A10: The whole artificial intelligence and machine learning field is quickly driving the need for extensive and more efficient computing. Optical computing is a natural way to do things like vector-matrix multiplications where you may not need 16-bit resolution, but you do need an answer efficiently. There are a whole host of companies using silicon photonics for optical computing, and I think it will have a big impact and work very well. Quantum computing is another step with a lot of inherent advantages. For instance, GaAs can be used to make entangled photons at high rates because of the large nonlinearity. This is very promising for future quantum applications and there's a host of other ways to do it as well. A11: Thin film LiNbO 3 on silicon has moved quickly. With thin film LiNbO 3 , the tight mode confinement helps to make high-speed modulators, comb generators, and a wild variety of devices. 
Lončar's group at Harvard has really driven this with great success. Tightly confined modes work much better than the previous indiffused waveguides. GaAs is another good example. For GaAs resonators with waveguides that are tenths of microns high by eight-tenths of microns wide, you can get efficient second harmonic generation, photon entanglement, and comb generation. We've seen a comb generator with just 20 uW of power, which was inconceivable just ten years ago. There remain integration challenges. LiNbO 3 has a very different coefficient of thermal expansion from silicon, which limits the processing when combined with the other process steps. GaAs materials have a closer range of thermal expansion coefficients with silicon, and you can certainly go to typically 400 or 500°C. Oftentimes, when you include LiNbO 3 with the rest of the silicon photonics, you're limited to 200 or 250°C. Those can be solved in a back-end process. Q12: What would be your next research focus? A12: Just to expand the range of applications. The use of silicon photonics in data centers is well-established. Intel has a billion-dollar business doing that today and many other companies are also actively involved. With the foundry efforts from Tower, TSMC, Global Foundries, and AIM, there will be tons of opportunities open for other applications as well, LIDARs, gyroscopes, and spectroscopy, for example. Dual comb spectroscopy is one really exciting advance that will allow us to sense a variety of pollutants, greenhouse gases, and medical sensing of things in our blood and so forth. That is what I am excited about in the future. In addition, most silicon photonics is still in the infrared, but going into the visible is important. The scope of silicon photonics expands to those using silicon substrate and silicon processing. The waveguides are not limited to silicon, but can cover a wide range of materials, including LiNbO 3 waveguides, compound semiconductors, CSOI, SiN, etc. Nexus photonics is a company integrating lasers with silicon nitride waveguides. Using SiN waveguides, they make 980 nm tunable lasers operate up to 185°C, which is phenomenal. I also expect to see a lot of atomic clock applications, display applications, headsup displays on glasses and contact lenses, and so forth, with the SiN-based silicon photonics. Q13: Beyond a leading scientist, you are also known for commercializing several products and incubating several corporations successfully. What are the key factors from research to commercialization? A13: The biggest key factor is certainly hiring smart people. I have been very lucky to have a lot of smart students and postdocs that left the group and started companies. I am glad I was able to help them. It makes the research real for students if they achieve success in the university lab and later are able to commercialize it. It's very satisfying for all of us. In particular, Alex Fang with Aurrion, Jon Geske with Aerius Photonics, Tin Komljenovic with Nexus, and Alan Liu with Quintessent have successfully applied research they've done for real products and hopefully changing the world. Q14: The commercialization of silicon photonics has attracted enormous venture capital. Can the development of silicon photonics keep pace with the expansion of its startup companies? A14: Historically, hardware companies required a lot of money to be successful because you must build a fab. 
If you look at Infinera, the first thing they did was to build a clean room which was very expensive and took a long time. Now we have the capability of accessing wellestablished, well-funded advanced foundries. Those foundries are paid for by electronics but can now be used for photonics. This allows startup companies to get established and get a product out without much capital, and then to scale rapidly. Intel went from making its first 100 GB transceiver to making 3 million transceivers a year in just 3 years. That's the strength of consumable electronics and conventional CMOS foundries. Q15: What was the influence of your work experience at Bell Lab on your choice of silicon photonics as a research focus at UCSB? A15: I was lucky to go to Bell Labs at the very beginning of fiber optics. I arrived in 1982, and the first fiber-optic systems were just being deployed with relatively modest bit rates, one hundred Mbit/s sorts of systems. The problems were very clear. We had to move to longer wavelengths, to 1.3 and 1.55 μm, and move to single-frequency lasers. There were very clear research directions, and you could go down the hallway and find an expert in whatever you needed, whether it was about fibers, epitaxy or processing. We learned to work together and moved rapidly to the first 2, 4, 8, and 16 Gb/s systems, and then the whole development of DWDM, and the optically amplified systems. When I went to UCSB, that was all established, and the Internet became widely available. We first worked on VCSELs, particularly using bonding to make long-wavelength VCSELs, LEDs, and mode-locked lasers for comb sources. Then, when silicon photonics started exploding, the need for a laser on silicon was obvious. Every photonic integrated circuit had to have an integrated laser. It really did not make sense for me to do anything else. You can always have a fiber-coupled laser, but that limits the complexity with hundreds of sources. Electronics can succeed because you have cheap gain everywhere and complex integrated circuits can't be made without gain. The same thing is true for fiber optics. It was the development of the EDFA that enabled low-cost, high-capacity fiber-optic systems, which allowed the explosion of fiber optics to occur. The same thing is true for photonic integrated circuits. You need to have gain on chips, whether it's the laser or just the amplifier, to make bigger and more complex chips. We are just at the very beginning of that field, and we will see much better performance in the future. Q16: Why did you choose academia as your career path? A16: I really enjoy teaching. I enjoy seeing students become successful. Some arrive without even knowing how to use an oscilloscope, and they graduate as a world leader in their field. That is something they should be very proud of, and I am also very proud of what they have accomplished. Being able to help them succeed, I feel very happy, and I still do. Early on, I was quite involved in running the startups that came out of my group, Terabit or Calient, for example. But now, students are the leading people in running the company and my role is just to help them become successful. I'm very proud of the many students, who have gone on to become tenured professors, and do a better job at teaching and research than I do. Prof. Bowers and his students Q17: What part of your career makes you most excited? A17: I think it is changing the world. 
If we can make photonics on silicon rather than on InP or GaAs, if we can solve medical problems, and make very efficient quantum computers using silicon photonics, that will make me very happy. Changing the world is always a team effort and a team sport. The big roles are typically played by others, like Alex Fang or others that lead a team at a company. I am happy to support the effort. Q18: You have supervised more than 80 PhD students and postdocs and most of them still work in integrated photonics. How did you inspire and unite your students to work in this field? A18: This is a great field to be in and it's still exploding. Different fields explode at different times, then it sorts of saturates, and then becomes very mature. Telecommunications is becoming mature. A lot of great work at Acacia led to advanced silicon photonics coherent systems and demonstrate the quality of what you can do in a CMOS foundry. That is an important, yet a mature field. Data centers are still exploding with a lot of innovation and we are still in the early stage. In the other areas, optical computing, quantum computing, sensing, etc., we are just at the very beginning. There will be many students making very sophisticated devices in these areas, far beyond what we see today. I think it is important to pick a field that is expanding. The other big advantage of silicon photonics lies in its big economic driver, namely, the need to marry electronics and photonics together. We can make better photonics because we have 3D-integrated electronics to drive it intimately. We can make better electronics because we have photonics to do the optical interconnects. That's a big economic driver and everything else can succeed and flow from that. Q19: Silicon photonic is now the key technology for next-generation data communication. But 20 years back, how did you persuade your students to work in this new field? A19: Like Wayne Gretzky says, go to where the puck is going, not to where it is. When I first came to UCSB, we had a bunch of big laser systems, but we rapidly moved to semiconductors and fiber optics. Photonic integrated circuits are cost-effective and can scale rapidly. That's the direction I have always taken, and I encouraged my students to do the same. Q20: You are the role model for many researchers. Who was your role model when you started your career? A20: My advisor, Gordon Kino was my biggest role model. He was a brilliant man, originally a mathematician, and really led the theory of what we were doing in the lab. From him, I learned to always combine experiments and theory together. My recent role models include people like Rod Alferness, who has been the dean at UCSB and was the department head and chief scientist at Bell Lab before that. He is sort of the premier leader of research, always been able to encourage researchers and give them good advice and steer them in the right direction with the resources they need. That has also been my goal. If I can make sure the students have the resources to do the research they want to do, and not be limited by the equipment, they can be more self-motivated and make a difference. Q21: What motivated you to pursue a PhD in the early days? What kind of advice would you like to share with students who were just starting their academic career? A21: Certainly, one thing is just to get in the lab and do real research. 
When I was an undergraduate, I was fortunate to work with the high-energy physics group that Marvin Marshak led at the University of Minnesota, with experiments at Argonne National Lab and Fermi Lab. It was exciting. I was a physics major, and I was convinced that's what I wanted to do. But it also became clear that not many high-energy physics graduates became tenured professors, probably one in a hundred, I suspect. So that drove me to shift to solid-state physics for my PhD. Almost everyone I knew stayed in the field of solid-state physics later on, whereas too many of the high-energy physicists became computer scientists after ten years. I went to graduate school in the late 70 s, and that's when fiber optics was just beginning. It was clear that it was a good field to go into. My advice to students is to find your passion and pursue your thesis passionately. Grad school is a great opportunity to do great research. Q22: You have taken a lot of responsibilities in both the university and the industry business. How did you manage to multi-task? A22: Multi-tasking is required for all of us. Prioritizing work and being organized are the key. I often tell graduate students that getting a thesis is frustrating. You must have reasonable expectations that it's going to be hard work. When you hit that low point that nothing is working, when you're failing and everyone else seems to be successful, you must continue. You can not just fail around and jump to some other area, you should stick with it. So be prepared there'll be great, exciting days, but also be prepared that there might be very low days when nothing's working. You must be determined and pursue it despite difficulties. And indeed, the harder the problem, the more prominent your success will be. Q23: How did you make a balance between work and life? A23: I think it is important to get exercise. It is hard when there is a lot of work demands in your job and family demands in your life. For me, I get up most days at about 6:00 am and I bike for the first hour. Regular exercises help me manage stress and stay healthy. Once you have kids, engaging in their activities, whether it's soccer or something else, is important. Q24: What kind of exercise do you most suggest? A24: At Bell Labs, we had a group of about ten of us who went running every day at lunchtime, that was great. My knees are not what they used to be, so I can't run long distances anymore. Biking, skiing, and playing pickleball is much more of my preference now. Prof. Bowers and his group members on a ski trip in Colorado Q25: The past decades witnessed a boom in journals, and we are very lucky to have your support for our three journals, LSA, eLight, and Light: Advanced Manufacturing. What do you think constitutes a good journal? A25: Well, I think LSA, eLight, and LAM are very highquality journals, and I have been lucky to publish in all three. I think high standards and rapid handling of the processing of the manuscripts are key, and the three journals do both very well.
The effect of Zimmer twins as digital storytelling on students' writing of narrative text

Introduction
In Indonesia, English is learned as a foreign language (EFL). Learning English effectively means mastering the language skills, which are now commonly combined with technology. Laborda & Royo (2007, p. 321) note that although ICT is an important part of the language curriculum, teachers may distrust technology or hesitate to include computer-based activities in their classrooms. In recent years many technologies have been adopted in education, especially by professional educators teaching English. Many students have low proficiency and are afraid to write because they are convinced they will make mistakes, and they often lack confidence in their writing ability. Teaching and learning in schools is guided by Kemendikbud (2014) for secondary education, which explains that, in order to reach the quality planned in the syllabus design, several principles serve as references for learning activities: (1) students are facilitated to find things out; (2) students learn from various learning sources; (3) students' creativity is developed in the learning process; and (4) the learning atmosphere is fun and challenging. Writing is one of the skills students need to develop fluently (Oshima & Hogue, 2007). Based on the Standard Competence of the 2013 English Curriculum, students are expected to be able to write simple narrative texts with attention to the purpose, structure, and grammatical features of the text. The teacher shows how to compose a narrative text by drawing on students' prior knowledge of plot, characters, and point of view. The researcher wanted to offer learners a digital-story method, a multimedia approach intended to reduce learners' anxiety about writing on a topic; since learners already know the technology, the learning process in the classroom can run easily and smoothly. Zanz (2015) explains that digital storytelling is an educational tool that develops students' creativity and helps them learn by doing. In general, a digital story is a short form of movie-making that allows students to bring storytelling into their study. Storytelling is not only powerful in children's education but also effective in all areas of higher education (Wang & Zhan, 2010, p. 78), and education also needs storytelling to improve writing knowledge. This study therefore investigates the use of digital storytelling as an innovative technology in teaching and learning. The online platform used in this study, the Zimmer twins, is a website designed for children to share their creative stories through animation. The research focuses on digital media in a class with internet access in a computer lab or on laptops, so that during the teaching and learning process students can create a short story, be creative, and write text dialogues between characters. The research also attends to students' reactions as they use digital tools to create a story and a movie (Schmoelz, 2018). The teacher gives each student a worksheet for a narrative text after the students have watched the topic stories in the teaching media.
It found the explanation of narrative in a curriculum guide in the teacher gives materials can help students that have difficulty feels to write. Based on the problem above, the teacher can make an enthusiast-class using storytelling is significant through its potential to contribute to language learning. The digital stories have found their way into language learning contexts. The activities of making and tell digital stories, which integrate sound and video images with dialog text to deliver the meaning content of stories. Therefore storytelling is the most traditional for teaching but the teacher still using books story in communication with the students so that it is a new way in the modern era. The researcher has a problem with the following questions: Do the students taught by Zimmer twins' media digital storytelling have better writing narrative story text achievement than those taught by non-digital in storytelling?. Method This research is quantitative research. The experimental research has several designs including preexperimental design, true experimental design, factorial design, and quasi-experimental design. The researcher used a quasi-experimental to design this study. Quasi-experiment design involves in an educational setting, it is impossible choosing sample randomly of the population and assigns to a different class, and the researcher only assigns at random different treatments to two different classes (Charles, 1995) in (Latief,p. 96). This quasi-experimental research was used to know differences in the ability of the treatment class and the non -treatment class. The design is represented by a nonequivalent control group design. Islam since most of them were seemed totally matchto join this research because of their condition. Based on the design of this research, the researcher took decision to point two classes of the eleventh class out: 20 students in each XI social 1 and XI social 2. The sampling techniques chosen by the researcher was a nonprobability sampling technique. The researcher determines the type of sampling that is purposive sampling. The technique of deciding this sample was selected of judgment by the teacher or researcher to takes 20 students. This research uses three variables; the independent variable is a variable that influences the dependent variable and variable covariate to determine the effect of covariate with the dependent variable. In this research, the independent variable was Zimmer twins as animation movies of digital storytelling. The dependent variable is the students' writing ability, and narrative text before giving treatment (pre-test) as covariate variable to find out the influence of ability of the initial score students who differ significantly. The instrument of this research was pre-test and post-test. The test is the form of a narrative essay after the students watch an animated movie in Zimmer twins website. The test will be used as source information on what to extend students initial ability to write a narrative story. All students already know that narrative is a text which is commonly used to tell an imaginary story. The test gave in the beginning pre-test and in the end post-test, the pre-test was a form of essay used to measure initial ability in writing a narrative story. Then post-test can be used to measure ability in writing the narrative story after students give a treatment of learning Zimmer twins. 
Reliability is also needed to create a good test because a test must be reliable as a measuring instrument; the researcher used the product-moment to measure the reliability of the research using testretest because in this class the researcher gives the same test to the students at a different time. To measure whether the test was reliable or not, the researcher used the formula Cronbach's Alpha is .880 that the test has very high reliability. Validity is when a test must measure what it is intended to measure and what has been taught. It means that students have to write the story that they have already learned. To valid, the researcher uses content validity because content validity is the extent to which a measuring instrument. Before the instrument is applied to students, the author consults with experts judgments is lecturer whether the instrument is appropriate or not to measure the variable of research. The data was taken by doing pretest, treatment, and post-test in experimemntal group and pre-test and post-test in control group. After collecting data, the researcher analyzed the result of this quasi-experiment research by using ANCOVA which stood for analysis of Covariance as it was applied to compare the post-test scores of the two groups and pre-test before treatment. The result of students of both class score was low scores, they got a bad score. And the researcher know based on the total number of pre-test score was 1.207 for experimental class and 1.342 for control class and that mean was 60.35 for experimental class with the total number of students was 20 because they understood about the topic discussed and some words that used in writing a paragraph. But, some students still get difficulties in vocabulary so the researcher gave more attention and 67.10 for control class with a total number of students was 20. Based on the results of the total number of post-test above, the experimental group students score higher than the pre-test, this happened because students already understood more about a narrative text. The score of the control group 1.410 with a mean score of 70.50 and the score of experimental group 1.531 with a mean score 76.55. Based on the table above, the control group scores lower than the experimental group, this occurs because the control group does not get treatment. The control group was only taught used conventional methods. Based on the table of ANCOVA model above, the score of media is successful it is a lower than alpha (0.05) is .000 ≤ 0.05 and the significant score pre-test lower than alpha (0.05) is 025 ≤ 0.05. This ANCOVA model, the pre-test as a covariate was influence because the initial ability of the students scores between the two groups in pre-test was significantly different and this affect on the significantly difference mean in writing skill. The difference mean of writing skill by using media variable were significantly can influence the difference mean in writing skill, so that pre-test and media have significantly difference mean scores of students in writting narrative text. Hypothesis Testing Based on the purpose above, the result of ANCOVA from P-value (.000) was lower than alpha ≤ 0,05 it is means that H1 was accepted and H0 was rejected of the hypothesis. If P-value ≤ alpha (α), it means that H1 was accepted and H0 was rejected. It showed that there is was an influence of Zimmer twins as media digital storytelling on the students' writing narrative text at the eleventh grade of MA Yayasan Sirojul Islam. 
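The comparison reported above is a one-way ANCOVA: the post-test writing score is the outcome, the teaching method (Zimmer twins versus conventional) is the factor, and the pre-test score is the covariate. The sketch below shows how such a model can be fitted with pandas and statsmodels; the file name scores.csv and the column names group, pretest, and posttest are hypothetical placeholders rather than the study's actual data files.

```python
# Minimal ANCOVA sketch (hypothetical file and column names; not the authors' actual script).
# Outcome: post-test writing score; factor: teaching method; covariate: pre-test score.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("scores.csv")   # columns: group ("zimmer" / "conventional"), pretest, posttest

model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)   # Type II sums of squares
print(anova_table)

# The media effect is judged by comparing the p-value for C(group) with alpha = 0.05;
# p <= 0.05 means the adjusted group means differ after controlling for the pre-test.
print(model.params)   # adjusted group effect and covariate slope
```

In this layout the p-value for C(group) plays the role of the "media" effect reported above, while the pretest term adjusts for the initial difference between the two classes.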
From the data above, the researcher analyses that there was a significant difference between the students who are taught used Zimmer twins and those who are taught not used Zimmer twins in writing class. Writing is communication into written form. Cheung (2016) emphasized to teach writing effectively, the teacher must be clear knowledge of the skill and process that are involved, the teacher must know of a part writing process.The author gave a writing test about the narrative text to know the result of the score by using five-components or aspect of writing skills. Those aspects were content, organization, vocabulary, grammar, and mechanic. Those aspect adapted from Weigle (2002). After scoring those aspects, the researcher used an analytical scoring, it means that the material of narrative text is clearly of researcher uses Zimmer twins to created animated and tell the story to the students understood about generic structure of a narrative text and they can increase the aspect to be better. From the total score the mean scores pre-test as the initial score in writing skill, the control group got a score is 67.10 and in experimental group the score was 60.35 in the pre-test. The result of the pretest score showed low on all aspect, because the students have low scores on pre-test that can influence the writing of narrative text, this happens because they have difficulty to processing words especially vocabulary and grammar errors to write narrative text. After that, the author gave a three times treatment for experimental group to help the students become more enthusiastic, active in learning English by using media, in contrast to the control group by using conventional method. The researcher gave post-test to students in the control and experimental groups. From the data shows that there are differences from the mean score post-test of the control and experimental groups on post-test it has mean 76.55 while, the posttest score of students who are taught by conventional method has mean 70.50. It means that the average of English Teaching Journal ISSN: 2338-2678  The effect of zimmer twins, Dyah Wulandari 57 students taught by using Zimmer twins as media on writing narrative text was high than the students taught by conventional method. From each score the experimental group post-test results were higher than the control group. The research result was in line with Anderson, et. al (2015) Explain that Zimmer twins are an easy way to be applied by the students as a story starter in the classroom. A teacher must present the site and shows a short animation movie to students are well-known with the Zimmer twins. Then the students could be seen short movies and students make the story until finished from the students created a movie. The majority of students at the Yayasan Sirojul Islamic are not learning English through learning media such as Zimmer twins because they are still learning from the textbook so that use of Zimmer twins as a media for website animation gets better results compared to the conventional technique. noted that zimmer twins as digital storytelling they can encourage to use the vocabulary they had learn in the classroom contribute to creating a movie making tool to learn many vocabularies from dialog text and emotion to be able completed the videos movie. This is because in the Zimmer twins media animation has a variety of visual images, dialog texts, feelings and expressions of various characters of the stories made by the students themselves. 
In the animated movies made by students that have advantages among others can improve the students understanding ability on the material delivered by the teacher, so that they can write narrative texts became more coherent in the animated movie also contain dialogues text that uses varied expression so students can increase their vocabulary in writing narrative text. Prins, et. al (2017) argues that the narrative is an easy text to write because of the context following a chronological of the story through an expression of different media animation. The students had big enthusiasts of made short animated movies based on their own each story in Zimmer twins then they wrote of narrative text. It happened because Zimmer twins website as a digital storytelling it is unique media which easy to implement the ideas, imagination and can make a creative short movie by using many scene expression, etc. Then, it can be helpful good media to the students writing ability effectively and interested in the teaching-learning. In this research, the researcher concluded that teaching writing by using Zimmer twins as media digital storytelling make the students have interested and motivation to write a narrative text. The researcher gave treatment in a group for increase the students motivation. It could be shown when the researcher was teaching narrative text by using Zimmer twins as digital storytelling on the website. This experimental research was declared successful, easily to be applied and enjoyed in following a teaching-learning process. And they can write English well and it could be proved by a good result in doing their assignment. Conclusion Based on the result of data analysis, the researcher summarizes that writing ability is an issue of students at MA Yayasan Sirojul Islam on this research. The research problem on this research is to find out the differences between students who learned used Zimmer twins as digital storytelling and those who are not using nondigital storytelling.This design experimental use quasi-experimental consisting of two groups are experimental and control group. The result data showed that the students of the experimental class were a significant difference which used Zimmer twins in learning narrative writing got a better score than the control group. Teaching narrative writing with Zimmer twins as digital storytelling did had better in writing at the eleventh class of MA Yayasan Sirojul Islam. This research was successful because this method gives a good contribution to the students' in writing then it could become a stimulation to write the narrative text also the students understood of the part of the story after they see the animated movies. The students can express their creative thought about making animated stories from Zimmer twins. Therefore, it will make the teaching and learning process would be more fun and interesting. The researcher hopes that the teachers are expected to be completed with a guidebook that steps to make animated movies from websites or software as digital storytelling and able to apply teaching media from the Zimmer twins media in English learning because of the effective results compared to conventional learning methods. This is also expected to be a motivator for teachers to continue to innovate and enhance students' creativity in the learning process. The researcher hopes that the student in writing narrative text is expected to make the process of learning English more enjoyable. 
In addition, students are expected to become more active so that their ability to write narrative texts improves. Future researchers can use the Zimmer twins media to teach English at different levels, grades, subjects, and topics to develop the teaching and learning process. The Zimmer twins can also be used and developed as a web-based animation medium to teach students not only writing but also other skills, such as speaking.
A new desorption method of polyurethane foam for the determination of gold and a comparative study on four desorption methods based on meta-analysis Accurate determination of gold in geological samples is an important prerequisite and guarantee for studying geological problems. There are many methods for digestion and enrichment of gold among which polyurethane foam (PUF) enrichment after aqua regia digestion is the most commonly used in the experiments. A new method to help the relief of gold from the PUF was put forward in this study, and it was applied to four certified reference materials (CRMs) together with three previously used methods, and the optimal extraction and enrichment conditions were determined through experiments. The four methods were compared by meta-analysis, and the thiourea liberation method was superior to the other three methods because of its simple operation and high accuracy. Out of consideration for the incomplete adsorption of gold in the solution by only one piece of PUF, repetitive adsorption of gold with a second and a third piece of PUF in the solution was proposed in this study. Results show that the gold content obtained by secondary and tertiary adsorption accounts for 11.03% of the total content, and the highest can reach 20.74%. When the third adsorption was carried out, the gold content in several samples was below the detection limit. Therefore, repeated adsorption of gold in the solution is necessary, and three times of adsorption is necessary. Introduction Gold is a noble metal with high ductility, corrosion resistance and chemical stability. It has important applications in geology, materials, chemistry, biology, medicine and other elds because of its special properties. Studies of the source, enrichment process, and state of gold are of great guiding signicance in gold exploration. However, the distribution of gold in various geological samples is uneven and oen exists in a low content which makes the separation, enrichment, 1 and accurate determination of gold a difficult problem in geological analysis. The treatment methods before gold determination include dry ashing, 2-4 and wet acid digestion. Enrichment, such as precipitation, 5 ion exchange resin, 6 solvent extraction, 7-9 PUF adsorption, 10,11 extraction chromatography, 12 is followed. There are many testing methods for gold, including atomic absorption spectrometry, 13 inductively coupled plasma mass spectrometry 14,15 and so on. However, the uncertainty of gold determination is oen high due to incomplete sample digestion, incomplete adsorption and desorption process of gold, high detection limit of instruments, and interference of instrument matrix. PUF was widely used in various eld [16][17][18] and was rst applied to adsorb gold in acidic medium in 1970 by Bowen. 19 PUF is made of toluene diisocyanate and polyether polyol. The carbon dioxide generated in the synthesis process is le in the polymer to form spongy foam plastic with a certain degree of crosschain. When the gold in the solution meets the foam, adsorption and exchange occur on the foam membrane potential. The coordination anions of gold would be bonded with the active groups of -CH 2 -O-CH 2 and -O-C-NHR on the foam skeleton, so that the two would combine to form a stable ionic association. [20][21][22][23] In recent years, PUF has been widely used in the determination of trace gold in geological samples because of its strong adsorption performance and low price. Desorption is required before determination. 
There are several methods of desorption of gold, so it is of great signicance to choose the best way among them. [24][25][26][27] Meta-analysis is widely used in medicine, psychology and other elds to conduct comprehensive quantitative analysis of many research results of the same subject with specic conditions, which is superior to the comprehensive analysis ability of conventional literature review. 28 However, it is rare in the study of geological phenomena. [29][30][31][32] Meta-analysis is applied to the correlation between a specic exposure factor and a specic outcome in medical analysis. [33][34][35] It is feasible to transform the original geochemical data into valid data which can be identi-ed by meta-analysis, and to obtain the mean differences of four different methods so as to compare different methods. The desorption methods of PUF were discussed and a new method of desorption of PUF was proposed. Meta-analysis was applied to compare the four desorption methods of PUF. Considering the incomplete adsorption of one piece of PUF, the necessity of repeated adsorption was put forward and veried. This study aims to establish an accurate, simple, and rapid process for the determination of gold in geological samples through a comparative study of sample digestion method, the time of PUF adsorption, the way of PUF desorption. Reagents and standards Hydrochloric acid (HCl), nitric acid (HNO 3 ) of analytical grade and deionized water were used in sample digestion. 250 g L −1 ferric chloride solution with 1% HCl was used during adsorption. 10 g L −1 thiourea solution, 200 g L −1 potassium chloride solution and 500 g L −1 potassium bromide solution, methyl isobutyl ketone (MIBK), and potassium chlorate of analytical grade were employed in the desorption process. National standard gold single element solution (GSB-1715-2004, 1000 mg L −1 ) was acquired from the National Center of Analysis and Testing for Nonferrous Metals and Electronic Materials. CRMs (GBW07248a, GBW07808b, GBW07809b) were obtained from the Institute of Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Langfang, China). CRM GBW07192 was obtained from the Central South Institute of Metallurgical Geology (Yichang, China). Instrumentation Samples were triturated by a planetary ball mill (QM-3SP4, Laibu, China) and dried in an air-dry oven (DHG-9423A, Shanghai Jinghong, China) at 105°C. Cooled sample powder was weighed with an electronic balance (ATY124, Shimadzu, Japan) and ashed in a muffle furnace (SXL-1008, Shanghai Jinghong, China). The electric heating plate (SB-1.8-4, Shanghai Shiyan, China) was used in sample digestion. The cyclotron oscillator (HY-8A, Jintan Jingda, China) was used to ensure the gold was absorbed by PUF completely. The constant temperature water bath (HH-S26S, Jintan Instruments, China) was employed in the desorption of gold. Gold determination was performed on an inductively coupled plasma mass spectrometry (ICP-MS) (Nexion 350D, PerkinElmer, USA). The operating parameters of ICP-MS were listed in Table 1. Pretreatment of PUF The internal residues le during the production of PUF such as carbamido, allophanate, biuret, and isocyanate will greatly reduce the adsorption performance of the PUF which leads to the degradation of the adsorption performance. Hence, it is necessary to pretreat the PUF before the adsorption experiment. 
First, the PUF was cut into small squares of about 0.2 g (3 × 2 × 1 cm3), soaked in sodium hydroxide solution and boiled for 30 minutes, then washed with deionized water until neutral. It was then transferred into 10% HCl and soaked for 2 h, washed again, dipped in deionized water for 2 h, squeezed free of water, and dried naturally for later use. 36

Sample digestion
Herein, high-temperature ashing, acid digestion and PUF adsorption were performed to determine the gold content of rock samples according to the geological and mineral industry standard DZ/T 0279.4-2016 of the People's Republic of China. 10.0000 g of sample was accurately weighed into a porcelain crucible and placed in a muffle furnace for 2 h at 700 °C, during which the furnace was opened twice to supply oxygen and ensure that the sample powder was ashed completely. After cooling, the sample was transferred to a conical flask, several drops of water were added to moisten the powder, 30 mL of 50% aqua regia was added to dissolve the sample, and the conical flask was placed on the electric heating plate covered with a watch glass. The watch glass was removed one hour later and evaporation was continued until about 10 mL of solution remained. 70 mL of water and 3 mL of FeCl3 solution were added, together with a piece of PUF, to the cooled solution. The conical flask was then placed on the cyclotron oscillator for 30 minutes. 10,11 The PUF was taken out, washed free of residue, and squeezed for later use.

Desorption of gold
Four methods of desorption of gold from the PUF were performed in the experiment, among which method III was a new method proposed in this study.

3.3.1. Method I (HNO3-KClO3 decomposition). A new method proposed in this study: PUF can be decomposed by an inorganic acid and an oxidizing agent, among which nitric acid and potassium chlorate give the best decomposition effect. 37 The specific operations are as follows: the PUF, after adsorption of gold, was placed into a glass beaker to which 0.1 g KClO3 and 10 mL HNO3 were added. The beaker was then placed on a hot plate until the solution had evaporated to dryness.

3.3.2. Method II (thiourea liberation). Au(III) can be reduced to Au(I) by hot thiourea solution and combined to form the Au(I)-thiourea complex, a process in which the gold ions are liberated from the PUF. The specific operation is as follows: the PUF was transferred into a colorimetric tube to which 10 mL of thiourea solution was added, and the tube was placed in a boiling water bath for 30 minutes, during which the PUF was continuously squeezed with a rubber-tipped glass rod so that the gold could be liberated completely. Squeeze the PUF as many times as possible, but be careful not to break the bottom of the colorimetric tube, which would lead to a loss of solution. Squeeze and remove the PUF immediately while it is still hot. The solution was cooled for determination.

3.3.3. Method III (ashing burning). The PUF was placed in a porcelain crucible and ashed in a muffle furnace for 2 h at 600 °C after two drops of anhydrous ethanol had been added to improve ashing efficiency. After cooling, the residue was transferred into a beaker, two drops of potassium chloride solution and 3 mL of aqua regia were added, and the beaker was placed on a boiling water bath until the solution had evaporated thoroughly. Ten drops of HCl were added to remove the HNO3, and the solution was transferred to a colorimetric tube for determination. 38

3.3.4. Method IV (organic solvent extraction). The PUF was placed into a glass beaker to which 0.1 g KClO3 and 10 mL HNO3 were added.
The beaker was then placed on a hot plate until the solution had evaporated to dryness. 10 mL of 50% HCl was added to dissolve the solids, the solution was transferred into a 25 mL colorimetric tube, 1 mL of 50% fresh potassium bromide solution was added, the mixture was diluted to 20 mL with deionized water, and 5 mL of MIBK was added; gold was determined in the oil layer after shaking and phase separation. 39

Validation of methods
Four CRMs with different gold contents were digested and adsorbed under the same conditions, and the gold was liberated with the four different methods. The logarithmic deviation (ΔlgC) between the measured mean value and the certified value of the CRMs, and the relative standard deviation (RSD) of the measured values, were calculated to assess the accuracy and precision of the methods according to the geological and mineral industry standard DZ/T 0011-2015 of the People's Republic of China. ΔlgC and RSD can be evaluated by the following equations:

ΔlgC = |lg C̄ − lg C_s|,   RSD = (1/C̄) × sqrt[ Σ_{i=1..n} (C_i − C̄)² / (n − 1) ] × 100%,

where C̄ is the mean of the parallel measurements, C_i is an individual parallel measurement, C_s is the certified value of the CRM, and n = 5 is the number of parallel experiments. The ΔlgC and RSD values obtained are all less than 0.11 and 10%, respectively, indicating that the accuracy and precision of the methods are qualified.

Comparison of methods
The gold in the sample was adsorbed with three rounds of oscillation, and desorption from the PUF was performed with the four methods described above. Parallel experiments were carried out five times under the same conditions, and the results are shown in Table 2. All four methods meet the test requirements. Method I is highly accurate and efficient and is suitable for laboratory testing, except that it requires a large amount of acid and is hazardous to operate. Method II has been widely applied in the determination of geological samples, especially in laboratory analysis and testing, and has high accuracy; however, it requires squeezing the PUF with a glass rod, which can fracture the bottom of the colorimetric tube and cause loss of sample, so it is recommended to use a glass rod with a rubber tip on one end and to squeeze the PUF more than 200 times. Method III is relatively simple and convenient, with low reagent consumption, and is especially suitable for processing large batches of samples in industrial work, but it has the disadvantages of cumbersome transfer steps, a long procedure, and the high energy consumption of the muffle furnace. Method IV is prone to unqualified accuracy and precision because the absorbance of organic solvents is unstable on the instrument; moreover, the organic solvents used are not environmentally friendly, and solutions containing organic solvent are poorly suited to determination by ICP-MS.

Meta-analysis
In a meta-analysis, when the probability of the heterogeneity test is P > 0.05, multiple independent studies can be considered homogeneous. The diamond at the bottom of the forest plot represents the combined result of multiple randomized controlled trials (RCTs). The figure is divided into left and right halves to judge whether the difference is statistically significant: the left side of the vertical line is the experimental (treatment) group and the right side is the control group. When the diamond intersects the vertical line, there is no statistically significant difference between the methods in the RCT.
The position and meaning of the diamond are as follows: for adverse outcomes such as disease events and death, when the diamond lies entirely to the left of the vertical line the treatment group is more effective, and when it lies entirely to the right the control group is more effective; for favorable outcomes such as remission and cure, the positions have the opposite meaning. The farther the diamond is from the vertical line, the more pronounced the difference in effectiveness between the two schemes. In this study, the relative errors and relative standard deviations obtained for the four samples with the four methods were used for the meta-analysis. The forest plot obtained is shown in Fig. 1, and the analysis results are shown in Table 3 (a small numerical sketch of the pooling and validation arithmetic is given at the end of this article). The results indicated that the four independent methods were homogeneous, and the probability of the overall effect test was less than 0.05, which is statistically significant. All three mean differences were greater than 0; the mean differences of method II and method III were equal in value and both smaller than that of method IV, which indicated that the effect of method I (the control group) was more pronounced than that of the experimental groups, and that method II and method III are more accurate and effective than method IV under the experimental conditions of this study. Of course, the experimental conditions employed here, including the time and temperature of adsorption, desorption, and ashing, and the amount of oxidant, may have a certain influence on the measurement results, and they deserve to be explored and optimized in subsequent work.

Repetitive adsorption
To avoid incomplete single adsorption by the PUF, which would lead to inaccurate determination of gold, the gold in the solution was adsorbed three times under the same PUF pretreatment, adsorption, and handling conditions, and the sum of the three adsorptions was taken as the total gold content. As seen in Table 4, the gold determined after the second and third adsorptions accounted for more than 3.26% of the total gold content, with an average of 11.03% and a maximum of 20.74%. Among these, the gold determined after the second adsorption accounted for more than 3.14% of the total, with an average of 9.23% and a maximum of 16.19%. After the third adsorption the determined gold content decreased markedly, with an average value of 1.79%, and for some samples it was already below the detection limit.

Conclusions
The new desorption method for gold proposed in this study has the advantages of simple operation and time saving. The accuracy and precision of all four methods meet the test requirements. The reagent dosages, proportions, and detailed operating procedures of each method have been given, which can provide a basis for the determination of gold in geological samples. The four PUF desorption methods were compared by meta-analysis, and the thiourea liberation method was determined to be the best way to liberate gold from the PUF; its advantages are high accuracy, high precision, and simple operation. The ashing burning method and the HNO3-KClO3 decomposition method followed, while the organic solvent extraction method is not stable enough and the organic solvents it introduces are not environmentally friendly.
The thiourea liberation method and the HNO3-KClO3 decomposition method, with their high accuracy and efficiency, are suitable for laboratory analysis and determination, while the ashing burning method can be employed in industrial work. Meta-analysis can quantitatively compare the results of different methods and has considerable application potential in the geochemical field. In this study, repeated adsorption of the gold remaining in solution was proposed, and repeated adsorption experiments were carried out for each desorption method. The results showed that the gold recovered by the second and third adsorptions accounted for 11.03% of the total content on average, with a maximum of 20.74%. Moreover, the lower the gold content of the sample, the higher the proportion of the total gold recovered by repeated adsorption. It is therefore necessary to carry out repeated adsorption with PUF under the adsorption and desorption conditions established in this experiment, and three adsorptions are sufficient.

Conflicts of interest
There are no conflicts to declare.
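As a brief numerical illustration of the quality checks and the meta-analytic pooling used above, the sketch below first evaluates ΔlgC and RSD for one set of replicate determinations and then combines per-sample mean differences with fixed-effect (inverse-variance) weights, which is the calculation behind the "diamond" of a forest plot. All numerical values, and the choice of a fixed-effect pooling, are illustrative assumptions rather than the study's actual data or software.

```python
# Illustrative sketch only: invented replicate values, certified value, and mean differences.
import math

# --- accuracy / precision checks in the style of DZ/T 0011-2015 -------------------
replicates = [1.02, 0.98, 1.05, 0.99, 1.01]   # five parallel determinations, ng/g (hypothetical)
certified = 1.00                               # certified CRM value (hypothetical)

n = len(replicates)
mean = sum(replicates) / n
delta_lgC = abs(math.log10(mean) - math.log10(certified))
rsd = math.sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1)) / mean * 100

print(f"dlgC = {delta_lgC:.4f} (require <= 0.11), RSD = {rsd:.2f}% (require <= 10%)")

# --- fixed-effect (inverse-variance) pooling of mean differences ------------------
# (mean difference, standard error) for each CRM comparison (hypothetical values)
studies = [(0.8, 0.30), (1.1, 0.40), (0.6, 0.25), (0.9, 0.35)]
weights = [1.0 / se ** 2 for _, se in studies]
pooled_md = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
z = pooled_md / pooled_se

print(f"pooled MD = {pooled_md:.2f} +/- {1.96 * pooled_se:.2f} (95% CI half-width), z = {z:.2f}")
# A 95% CI that excludes 0 corresponds to a diamond that does not cross the vertical line.
```

Dedicated meta-analysis software produces the same pooled estimate together with the forest plot itself; the arithmetic above only makes the weighting explicit.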
Evaluation of Antibiotic Tolerance in Pseudomonas aeruginosa for Aminoglycosides and its Predicted Gene Regulations Through In-silico Transcriptomic Analysis Pseudomonas aeruginosa causes chronic infections like cystic fibrosis, endocarditis, bacteremia and sepsis, which are life-threatening and difficult to treat. The lack of antibiotic response in P. aeruginosa is due to adaptive resistance mechanism, which prevents the entry of antibiotics into cytosol of the cell to achieve tolerance. Among different groups of antibiotics, aminoglycosides are used as a parental antibiotic for treatment of P. aeruginosa . This study aims to determine the kinetics of antibiotic tolerance and gene expression changes in P. aeruginosa exposed to amikacin, gentamicin, and tobramycin. These antibiotics were exposed to P. aeruginosa at their MICs and the experimental setup was monitored till 72 hours, followed by the measurement of optical density in the interval of every 12 hours. The growth of P. aeruginosa in MICs of antibiotics represents the kinetics of antibiotic tolerance in amikacin, gentamicin, and tobramycin. Transcriptomic profile of antibiotic exposed P. aeruginosa PA14 was taken from Gene Expression Omnibus (GEO), NCBI as microarray datasets. The gene expressions of two datasets were compared by test versus control. Tobramycin exposed P. aeruginosa failed to develop tolerance in MICs 0.5µg/mL, 1µg/mL and 1.5µg/mL. Whereas amikacin and gentamicin treated P. aeruginosa developed tolerance in MICs. This depicts the superior in vitro response of tobramycin over the gentamicin and amikacin . Further, in silico transcriptomic analysis of tobramycin treated P. aeruginosa resulted in low expression of 16s rRNA Methyltransferase E, B & L, alginate biosynthesis genes and several proteins of Type 2 Secretory System (T2SS) and Type 3 Secretory System (T3SS). Introduction P. aeruginosa is an opportunistic pathogen, causes chronic infections which are difficult to treat because of the limited response to antimicrobials and emergence of antibiotic resistance during therapy [1].Multi-drug resistance (MDR) in P. aeruginosa is increasing due to over-exposure to antibiotics [2].It is developed by various physiological and genetic mechanisms, which includes multidrug efflux pumps, beta-lactamase production, outer membrane protein (porin) loss and target mutations.In hospitals, MRD P. aeruginosa are concurrently resistant to ciprofloxacin, imipenem, ceftazidime and piperacillin-tazobactam in most of the cases, which limits the treatment options [3]. Aminoglycosides are major group of antibiotics with potential bacteriocidic effect for the treatment of Pseudomonas infections.They are either used alone or in combination to treat various infections to overcome drug resistance, particularly in cystic fibrosis patients [4], and infective endocarditis [5].In the face of systemic infection with shock/sepsis, antimicrobial therapy should consist of two antimicrobial agents, with one of these being an aminoglycoside [6], because it exhibits concentration-dependent bactericidal activity and produce prolonged post-antibiotic effects [7]. 
β-lactam antibiotic plus an aminoglycoside is the commonly used synergistic combinations for treatment of clinical infections.Other combinations are fluoroquinolone & aminoglycosides and tetracycline & aminoglycosides.Clinical isolates show high percent of susceptibility to aminoglycosides than the other first-line antibiotics [8].Despite high susceptibility to aminoglycosides in clinical isolates, P. aeruginosa exhibits physiological adaptations to the antibiotics which results in less response to its synergistic combinations. P. aeruginosa thrives in the inhibitory concentration of antibiotics gradually and acquires adaptive resistance, which makes the treatment more complicated [9].Adaptive resistance mechanism was characterized by modification of the cytoplasmic membrane, condensation of membrane proteins and reduction of phospholipid content [10] which reduces penetration of antibiotics into the plasma membrane.Studies shows that the adaptive resistance can also develop by up-regulation of efflux pumps especially, MexXY-OprM [11]. Among immunocompromised patients, P. aeruginosa are favored to adapt the administered antibiotics and enable better survival of the bacterial generation by emerging as physiologically resistant groups [12].Adaptive resistance is developed due to rapid transcriptomic alteration in response to antibiotic [13].Better understanding of the kinetics and transcriptomic changes during antibiotic exposure can develop scientific insight on the adaptive resistance mechanism in P. aeruginosa [13,14].On this background our study was designed for better understanding of adaptive resistance in P. aeruginosa for gentamicin, amikacin and tobramycin which are commonly used aminoglycosides as first line antibiotics. Broth Dilution Method Minimum Inhibitory Concentrations (MICs) of P. aeruginosa ATCC 27853 was determined by broth dilution method as per the Clinical and Laboratory Standards Institute (CLSI) guidelines. MIC assay was performed for gentamicin, amikacin, and tobramycin (purchased from Sigma Aldrich) with log phase culture (5×10 8 CFU/mL) in Mueller Hinton Broth (MHB) using 96-well microtiter plate.The final optical density (OD) was determined in Epoch ™ Microplate spectrophotometer at 600nm. In vitro exposure of antibiotics to P. aeruginosa From the recorded MIC values, P. aeruginosa was inoculated in 10 mL MHB in a Tarsons tube with corresponding antibiotic concentrations after adjusting the cell density to OD600 0.26 (in log phase).The experiment set up was observed for 72 hours.P. aeruginosa was inoculated in antibiotic concentrations 0.5μg/mL, 1μg/mL & 1.5μg/mL for gentamicin & tobramycin.For amikacin, 1μg/mL, 2μg/mL & 3μg/mL antibiotic concentrations were taken.All antibiotic concentrations taken were based on MICs determined for P. aeruginosa ATCC 27853.The experimental condition was incubated at 37 o C with optimal shaking of 74rpm.At every 12 hours the turbidity was monitored by measuring OD600 in Epoch ™ microtiter plate reader [14] [15].The tube with growth were sub-cultured in nutrient agar.The colonies were conformed for P. aeruginosa by Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) automated identification system (VITEK® MS, BioMérieux) by following the standard procedure for sample preparation [16]. 
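The tolerance read-out described above reduces to a simple rule on the 12-hourly OD600 series: growth first declines under the antibiotic and is scored as regrowth (tolerance) once turbidity recovers to the starting density of about 0.26. The sketch below shows one way such series could be screened; the numerical readings are invented for illustration and are not the measured data.

```python
# Sketch: flag regrowth (tolerance) from 12-hourly OD600 readings (invented example data).
OD_START = 0.26   # initial cell density used in the experiment

# time course per tube: {condition: [OD600 at 0, 12, 24, 36, 48, 60, 72 h]}
readings = {
    "gentamicin 0.5 ug/mL": [0.26, 0.13, 0.18, 0.24, 0.27, 0.30, 0.33],
    "tobramycin 0.5 ug/mL": [0.26, 0.13, 0.11, 0.10, 0.09, 0.09, 0.08],
}

for condition, series in readings.items():
    # tolerance = OD falls after exposure but later recovers to the starting density
    declined = min(series[1:]) < OD_START
    recovered = any(od >= OD_START for od in series[2:])
    status = "tolerant (regrowth)" if declined and recovered else "no regrowth"
    print(f"{condition}: {status}")
```

Applied to the real readings, this rule reproduces the qualitative pattern reported below: regrowth in the lower gentamicin and amikacin concentrations, and no regrowth in any tobramycin tube.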
Retrieval of microarray datasets The differential gene expression analysis was performed by exploring microarray datasets published in Gene Expression Omnibus (GEO) NCBI, available as accessible series no.GSE9991 and GSE9989.Two datasets 1.) Tobramycin treated planktonic culture of P. aeruginosa (GSE9991) and 2.) Tobramycin treated P. aeruginosa biofilm (GSE9989) were analyzed.In GSE9991, planktonic culture of PA14 was exposed to 5 µg/mL tobramycin for 30 minutes at 37 0 C. Two samples GSM252561 and GSM252562 of tobramycin treated PA14 planktonic culture was taken as test, which was compared to 2 samples GSM252559 and GSM252560 of unexposed PA14 planktonic culture as control.GSE9989 consist of 6 samples, in which 3 samples GSM252496, GSM252501 and GSM252505 of unexposed P. aeruginosa biofilm were taken as control and 3 samples GSM252506, GSM252507 and GSM252508 of Tobramycin exposed P. aeruginosa biofilm was taken as test.Biofilms were grown on CFBE41o-cells in culture for 9 hours in MEM/0.4% arginine.Replicate samples were then incubated in the presence or absence of 500 μg/mL tobramycin for 30 minutes [17]. In the published datasets, before RNA harvesting, cells were washed several times in 2 ml PBS to remove antibiotics.The RNA was extracted using RNeasy RNA isolation kit.Bacterial RNA was then purified using MicrobEnrich kit, to exclude mammalian RNA.The purified RNA was subjected to cDNA synthesis followed by microarray preparation according to Affymetrix -Genechip P. aeruginosa Genome Array Expression Analysis Protocol [17]. Differential gene expression analysis In the microarray datasets, raw data available as platform file was extracted, formatted, and uploaded as input data in NetworkAnalyst 3.0 (https://www.networkanalyst.ca/).In NetworkAnalyst 3.0, organism ID -P.aeruginosa, data typeintensity table (Microarray data) options were selected, and probe summarization was performed by multi-array average algorithm. The raw data was preprocessed by removing unannotated genes, and datasets were normalized by log2 transformation.By Linear Models for Microarray Analysis (Limma) statistical package, datasets were subjected to specific comparison by its test versus control samples to determine log2 fold change (LogFc) between two groups.To identify significant DEGs, the datasets were filtered by significant threshold cutoff of P-value ≤0.05 [18]. Gene Ontology (GO) and functional enrichment analysis PANTHER Classification System (http://pantherdb.org/) [19] and DAVID Bioinformatics Resources 6.8 (https://david.ncifcrf.gov/)[20] computational tools were used for further downstream analysis of significant DEGs.Gene ontology was performed in PANTHER for functional classification of DEGs under three categories which includes, molecular functions, biological process, and protein class.For enrichment analysis of significant DEGs, DAVID Bioinformatics Resources 6.8 tool was used.The databases selected for enrichment analysis in DAVID were KEGG Pathway, InterPro, UniProtKB, and SMART.Minimum threshold gene counts of ≥ 2 and EASE Score of 0.1 were set as cut off to filter and enrichments of P-value ≤ 0.05 and false discovery rate (FDR) ≤ 0.05 were only considered for the study.In the functional annotation clusters, enrichment score of 2.0 were set as threshold to select enriched clusters. 3.2.In vitro exposure of antibiotics to P. 
aeruginosa The initial OD600 of the bacterial culture was ~0.26 in all the tubes and after 12 hours the OD600 dropped to ~0.13.In 0.5 μg/mL & 1 μg/mL of gentamicin (Figure 1) and 1 μg/mL & 2 μg/mL of amikacin (Figure 3) tubes, OD600 increased exponentially after 24 hours.After 48 hours, the bacterial growth attained the initial OD600 ~0.26 (Figures 1 and 3) and the cells resumed active growth after the post-antibiotic effect.In tobramycin and higher concentrations of gentamicin and amikacin tubes, after 12 hours OD600 further declined, and no growth were observed till 72 hours (Figures 1, 2 and 3).The growth in 1 μg/mL & 2 μg/mL of amikacin and 0.5 μg/mL & 1 μg/mL of gentamicin tubes were confirmed as P. aeruginosa by MALDI-TOF automated identification system, based on the peptide mass fingerprint matching. 3.3.Differential gene expression analysis In GSE9991, among 125 of DEGs 53 genes were upregulated and 72 were downregulated.In GSE9989, a total of 307 genes were differentially expressed in which 52 genes were upregulated and 255 genes were downregulated.Distribution of DEGs in both the datasets were represented in volcano plot (Figure 4).Targeted DEGs in the study were 17 from GSE9991 and 22 from GSE9989 (Table 1&2). 3.5.Functional enrichment analysis The DEGs enriched in the functional pathways is represented in (Figure 5).DEGs enriched in the similar pathways were clustered into groups as functional annotation clusters with significant enrichment score (Table 3).The targeted DEGs were enriched in RNA Methyltransferases, 16S rRNA 7-methylguanosine methyltransferase, alginate biosynthesis, repressors for alginate synthesis, transcriptional repressor of SOS response, type II secretary proteins, type II transport domains, translocation protein in type III secretion and type III export protein. 4.Discussion The experimental setup of in vitro exposure of antibiotic to P. aeruginosa reflects the antibiotic tolerance in chronic infections, where the response to antimicrobials diminishes due to development of adaptive resistance.Among the antibiotics evaluated for tolerance in P. aeruginosa, tobramycin exhibited a superior post-antibiotic effect in all MICs and was observed to be more effective by suppressing the antibiotic tolerance mechanism.Further, in silico analysis of DEGs in microarray datasets mimicking our experimental condition exposed the possible effectiveness of tobramycin in antibiotic tolerance. In the microarray datasets, the Gene Ontological classification enabled preliminary classification of DEGs in key categories The genes of RNA Methyltransferase and methylation metabolism was enriched in catalytic activity (GO:0003824) and nucleic acid metabolism (PC00171).Regulatory genes for alginate biosynthesis pathway were observed in transcriptional regulator (GO:0098772). Among the functional enrichments observed in GSE9991 and GSE9989 datasets, the following enrichments play a significant role in antibiotic tolerance and virulence of P. aeruginosa. 
Methylation of 16S rRNA by methyltransferases is a common mechanism of resistance to aminoglycosides, leading to loss of affinity of the drug for its target [23]. In GSE9991 (RNA methyltransferases), PA0419, a ribosomal RNA small subunit methyltransferase E that methylates 16S rRNA bases in the 30S subunit, was downregulated in the test samples. The PA0017 and PA3680 genes, encoding class B and class J methyltransferases, also showed reduced expression. In GSE9989 (16S rRNA 7-methylguanosine methyltransferase), gidB belongs to methyltransferase G, which is involved in 7-methylguanosine modification of 16S rRNA and confers resistance to aminoglycosides by decreasing the binding affinity of the drug to its target [24]. The low expression of gidB (LogFC −3.32) and the expression profile of the methyltransferases suggest a low incidence of resistance development during tobramycin exposure. These observations also extend our insight into the adaptive resistance mechanism and the possibility that P. aeruginosa regulates RNA methyltransferases during antibiotic exposure.

In chronic infections caused by P. aeruginosa, biofilm formation is common during the course of infection [25] and confers additional protection against host defenses and antibiotics [26]. Some antibiotics are involved in the up-regulation of genes responsible for inducing alginate production (a mucopolysaccharide associated with altered LPS and lipid A), which results in reduced antigen presentation to the immune system [27]. In addition, P. aeruginosa biofilms contribute to antibiotic tolerance, and the regulation of several biofilm-forming genes affects the persistence of the cells in the presence of antibiotics [28]. In GSE9991 (alginate biosynthesis), algL, a lyase precursor that participates in the catabolism of alginic acid and leads to deconstruction of the alginate complex [29], was highly expressed. algR, a regulatory protein of the alginate biosynthesis genes [30], declined in expression. pslF, a member of the glycosyltransferase family involved in the extracellular polysaccharide biosynthetic pathway [31], showed low expression (LogFC −1.03). In GSE9989 (repressors of alginate synthesis), algU is the sigma factor for the alginate biosynthesis genes, mucA encodes its anti-sigma factor, and mucB is a negative regulator of algU [32]. The high expression of mucA (LogFC 4.09) and mucB (LogFC 1.15) may downregulate alginate biosynthesis. The transcriptional changes observed would likely affect alginate production to a significant extent in the presence of tobramycin.

Previous studies suggest that the toxin-antitoxin system mediates persister cell formation during antibiotic exposure. Although studies in E. coli provide evidence for this mechanism [33], it remains unclear in P. aeruginosa. Some of the DEGs were linked to suppression of proteins of the type II and type III secretion systems, which participate in the virulence of P. aeruginosa. In GSE9991 (type II secretory proteins), PA2677, PA2672 and PA0687, which engage in catalytic and transporter protein activity, were downregulated. The other transporter domains xcpR, xcpU, xcpV, xcpX, xcpY, xcpZ, tadB and tadD were also expressed at low levels [34], affecting the T2SS. In GSE9989 (type II transport domains), xcpQ, xcpS, xcpT, xcpU, xcpV, xcpW, xcpX and xcpY, which are involved in toxin efflux from xcpR, a cytosolic domain, were downregulated, affecting export through the T2SS [35].
Translocation proteins in type III secretion: pscQ, pscP and pscR are translocation proteins of the type III secretion system that translocate the toxin across the host cell cytoplasmic membrane. Downregulation of pscQ negatively impacts toxin delivery to the host cell cytosol. Type III export proteins: pscE, pscF, pscG, pscH, pscI, pscJ and pscK are export proteins of the type III secretion system, present in the cytoplasmic membrane, that transfer the toxin from the cytosolic domain to the MS ring of the basal body [35]. Low expression of all these proteins prevents toxins from reaching the filament, from where they would be translocated into the host cell. These transcriptomic changes suggest suppression of the toxin secretion systems (T2SS and T3SS), which may reduce the virulence of the organism during tobramycin treatment.

Overuse of antibiotics drives up transcriptional regulation favoring adaptive resistance, which results in a decline in antibiotic activity over time [36]. This study supports the use of tobramycin for the treatment of chronic Pseudomonas infection, as P. aeruginosa failed to develop adaptive resistance to tobramycin and exhibited transcriptomic regulation favorable to the antibiotic response. Tobramycin is restricted for systemic use because of the rise in creatinine levels during the initial days of therapy. The relative intensity of nephrotoxicity among the aminoglycosides is poorly understood; a recent cohort study on nephrotoxicity suggested that tobramycin has lower comparative toxicity than gentamicin [37]. Considering the in vivo drug response and predisposing factors, tobramycin, as one of the available options, might enable a better treatment alternative to the current drug combinations.

Transcriptional alterations in microbes are dynamic events triggered by environmental changes, which result in increased adaptive resistance. Although adaptive resistance raises the baseline MIC of the bacteria over time, genetic resistance is a function of time: it takes several generations of the bacteria to achieve genotypic resistance. The methodology of constantly switching antibiotics through in vitro antibiotic exposure would enable us to decipher which clinical isolates are physiologically resistant, pointing toward alternative aminoglycoside treatments for combating chronic infections.

Figure 1: In vitro exposure of MICs 0.5 μg/mL, 1 μg/mL, and 1.5 μg/mL of gentamicin to P. aeruginosa. The OD values depict the kinetics of adaptive resistance in the cells.

Figure 2: In vitro exposure of MICs 0.5 μg/mL, 1 μg/mL, and 1.5 μg/mL of tobramycin to P. aeruginosa. The OD values depict the kinetics of adaptive resistance in the cells.

Figure 3: In vitro exposure of MICs 1 μg/mL, 2 μg/mL, and 3 μg/mL of amikacin to P. aeruginosa. The OD values depict the kinetics of adaptive resistance in the cells.

Figure 4: Volcano plot representing the distribution of DEGs in tobramycin-treated P. aeruginosa. The figure represents the degree of variation in the gene expression of P. aeruginosa after exposure to tobramycin.

Figure 5: Pathway enrichments of DEGs. The functional annotations of the target DEGs were performed in DAVID Bioinformatics Resources 6.8 and are represented in the graph as the percentage of DEGs enriched in each pathway.

Table 1: List of target DEGs and their log fold changes of expression in tobramycin-treated planktonic P. aeruginosa (GSE9991).

Table 2: List of target DEGs and their log fold changes of expression in tobramycin-treated P. aeruginosa biofilm (GSE9989).
Table 3: Functional annotations of the DEGs clustered into various groups with significant enrichment scores. Derived from DAVID Bioinformatics Resources 6.8.