CLSTN3 gene variant associates with obesity risk and contributes to dysfunction in white adipose tissue
Objective: White adipose tissue (WAT) possesses a remarkable remodeling capacity, and maladaptation of this ability contributes to the development of obesity and associated comorbidities. Calsyntenin-3 (CLSTN3) is a transmembrane protein that promotes synapse development in the brain. Even though this gene has been reported to be associated with adipose tissue, its role in the regulation of WAT function remains unknown. We aimed to further assess the expression pattern of the CLSTN3 gene in human adipose tissue and investigate its regulatory impact on WAT function.
Methods: We examined the expression pattern of the Clstn3/CLSTN3 gene in mouse and human WAT. A genetic association study and expression quantitative trait loci analysis were combined to identify the phenotypic effect of a CLSTN3 gene variant in humans. This was followed by mouse experiments using adeno-associated virus-mediated human CLSTN3 overexpression in inguinal WAT. We investigated the effect of CLSTN3 on WAT function and overall metabolic homeostasis, as well as the possible underlying molecular mechanism.
Results: We observed that the CLSTN3 gene was routinely expressed in human WAT and predominantly enriched in the adipocyte fraction. Furthermore, we identified that the variant rs7296261 in the CLSTN3 locus was associated with a high risk of obesity, and its risk allele was linked to an increase in CLSTN3 expression in human WAT. Overexpression of CLSTN3 in inguinal WAT of mice resulted in diet-induced local dysfunctional expansion, liver steatosis, and systemic metabolic deficiency. In vivo and ex vivo lipolysis assays demonstrated that CLSTN3 overexpression attenuated catecholamine-stimulated lipolysis. Mechanistically, CLSTN3 could interact with amyloid precursor protein (APP) in WAT and increase APP accumulation in mitochondria, which in turn impaired adipose mitochondrial function and promoted obesity.
Conclusion: Taken together, we provide evidence for a novel role of CLSTN3 in modulating WAT function, suggesting that targeting CLSTN3 may be a potential approach for the treatment of obesity and associated metabolic diseases.
INTRODUCTION
Understanding the processes that lead to excess adiposity is necessary to elucidate the pathophysiology of obesity, and it facilitates the identification of novel avenues for preventing and treating obesity-related disorders [1]. In adults, an increase in adipocyte number (hyperplasia) may lead to a gain of lower-body fat, while an increase in adipocyte size (hypertrophy) may cause expansion of abdominal white adipose tissue (WAT) [2]. Adipocyte hypertrophy contributes to an increased risk of hypoxia, which consistently induces stress signaling that initiates chronic low-grade inflammation and triggers a fibrotic program in WAT. The integrated response promotes adipocyte dysfunction as well as whole-body metabolic defects [3,4]. Moreover, increased subcutaneous adipocyte size, particularly in the abdominal region, is a predictor of obesity-associated comorbidities, such as type 2 diabetes [5]. Accordingly, targeting adipocyte hypertrophy may help improve metabolic dysfunction in people with obesity. However, the factors driving adipocyte enlargement and fat accumulation have not yet been completely identified.
Despite the environmental factors that drive fat accumulation, a twin study clearly demonstrated a substantial degree of genetic control over human adiposity [6]. Genome-wide association studies (GWAS), based on high-throughput genotyping technology, have remarkably increased the speed of gene discovery at loci associated with common traits and diseases, including obesity [7-9]. The fat mass and obesity-associated (FTO) gene was the first obesity susceptibility gene to be identified by GWAS, and this locus has the largest effect on body mass index (BMI) and obesity risk [10,11]. Furthermore, genome-wide polygenic scores integrate all available variants to quantify the inherited susceptibility to obesity from birth to adulthood [12]. A list of candidate genes has been discovered that may be responsible for the development of obesity in different populations [13]. Subsequent functional studies of these genes have elucidated the biological pathways involved in adipose biology [14,15]. However, there is still great scope for discovering obesity-associated genes using human genomic datasets and analyzing their functional relevance to fat accumulation in the context of obesity.
Calsyntenins (CLSTNs) are evolutionarily conserved proteins that were originally identified in the central nervous system (CNS) and play critical roles during neural development [16,17]. Calsyntenin-3 (CLSTN3) is localized to the postsynaptic membrane, where it acts as a synaptogenic adhesion molecule via interaction with neurexin-1α [18]. However, few studies have investigated the roles of CLSTN3 in peripheral tissues of mice and humans. Amyloid precursor protein (APP) is also a transmembrane protein that is widely expressed in the CNS as well as peripheral tissues, including liver and adipose tissue. Abnormal expression of APP in peripheral tissues is associated with metabolic diseases, such as type 2 diabetes and nonalcoholic fatty liver disease [19,20]. Interestingly, CLSTN3 can interact with APP and the neural adaptor protein X11-like to form a tripartite complex in the brain that stabilizes APP metabolism [21,22]. The similar expression pattern of CLSTN3 and APP in the CNS, as well as the interaction between them, suggests that they may share common or coordinated regulatory roles in peripheral tissues.
In this study, we examined the transcript of the Clstn3/CLSTN3 gene in transcriptomic data of WAT from mice and humans, respectively. Thereafter, we explored the phenotypic effect of a CLSTN3 gene variant using a human genetic association study and expression quantitative trait loci (eQTL) analysis. We propose that the obesity risk conferred by CLSTN3 rs7296261 is mediated by high CLSTN3 expression in human adipose tissue. Additionally, we aimed to assess the effect of CLSTN3 overexpression in inguinal WAT (iWAT) of mice on adipose tissue function, and to evaluate CLSTN3 as a potential target for preventing and treating obesity and associated comorbidities.
Subjects
A total of 2,386 individuals were recruited from the Shanghai Obesity Study (SHOS) [23-25]. Detailed study methods for recruitment and clinical data collection have been previously described [23]. Briefly, SHOS is a prospective cohort study investigating the development of obesity and associated diseases. Genomic DNA was isolated from whole blood samples and then subjected to exome genotyping. The associations of single nucleotide polymorphisms (SNPs) in the CLSTN3 locus with obesity-associated traits were assessed. These individuals were grouped by SNP rs7296261, and their clinical characteristics are presented in Table S1. The eQTL analysis of CLSTN3 rs7296261 on gene expression was performed in paired abdominal subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) from 81 obese participants who underwent bariatric surgery at Shanghai Jiao Tong University Affiliated Sixth People's Hospital. Patients with severe conditions, such as generalized inflammation or malignant diseases, were excluded. The clinical characteristics of these participants, grouped by CLSTN3 rs7296261 genotype, are presented in Table S2. Lastly, we analyzed abdominal SAT from 10 lean (BMI, 17.9-24.4 kg/m²) and 10 obese (BMI, 31.1-42.3 kg/m²) participants who underwent laparoscopic cholecystectomy or bariatric surgery at Shanghai Jiao Tong University Affiliated Sixth People's Hospital. All human studies were approved by the Ethics Committee of Shanghai Jiao Tong University Affiliated Sixth People's Hospital. All participants provided written informed consent.
Genotyping and quality control
Genomic DNA was extracted from peripheral blood leukocytes using the QIAamp DNA Blood Mini Kit (Qiagen), according to the manufacturer's instructions. Genome-wide genotyping and quality control of the extracted DNA were performed according to previously published protocols [24]. In brief, genome-wide genotyping of all subjects was performed using the Infinium Exome-24 v1.0 BeadChip (Illumina). DNA quality control was evaluated at the individual as well as the SNP level. We screened three variants in the CLSTN3 locus, including rs145190321 (MAF = 0.0002), rs189282788 (MAF = 0.0027), and rs7296261 (MAF = 0.4765). The first two were excluded due to their low frequency in the population. CLSTN3 rs7296261 was further genotyped in the genomic DNA samples of the 81 obese participants who underwent bariatric surgery, by Sanger sequencing following polymerase chain reaction (PCR) amplification. For quality control, resequencing validation of randomly chosen individuals was performed to confirm the genotyping results.
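As a minimal illustration of the MAF-based screening step described above, the sketch below filters the three CLSTN3-locus variants by minor allele frequency. The 1% cutoff is an assumption chosen for illustration; the study does not state an explicit threshold.

```python
# Hypothetical sketch of the variant screening step: drop low-frequency
# variants, keep common ones. The 0.01 MAF cutoff is an assumption.
variants = {
    "rs145190321": 0.0002,
    "rs189282788": 0.0027,
    "rs7296261": 0.4765,
}

MAF_CUTOFF = 0.01  # assumed common-variant threshold

retained = {rsid: maf for rsid, maf in variants.items() if maf >= MAF_CUTOFF}
excluded = {rsid: maf for rsid, maf in variants.items() if maf < MAF_CUTOFF}

print("retained:", retained)  # only rs7296261 passes
print("excluded:", excluded)  # the two low-frequency variants
```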
Animals
Male C57BL/6 mice, aged 4-5 weeks, were purchased from Shanghai SLAC Laboratory Animal Company and housed under a 12-hour light/dark cycle with free access to water and food. Adeno-associated virus (AAV) was administered into the iWAT pads of 6-week-old mice. To induce the WAT browning model, 9-week-old mice were individually caged at 4 °C for 1 week. To induce the obesity model, 7-week-old mice were fed a high-fat diet (HFD) for 12-15 weeks, in which 60% of kcal was obtained from fat, 20% from carbohydrate, and 20% from protein (Research Diets, D12492). For cold-induced lipolysis, mice were exposed to 4-hour cold stress. For in vivo lipolysis, the β3-adrenoceptor agonist CL-316,243 (MilliporeSigma) was intraperitoneally injected into the mice at a dose of 1 mg/kg body weight. All animal studies were reviewed and approved by the Animal Care Committee of Shanghai Jiao Tong University Affiliated Sixth People's Hospital.
Production and injection of AAV
The coding sequence of the human CLSTN3 transcript (GenBank, NM_014718.4) was a generous gift from Jiahuai Han's laboratory (School of Life Sciences, Xiamen University, China). It was cloned into the pAAV-CMV-3FLAG-WPRE vector (Obio Technology, Shanghai, China) to induce overexpression of CLSTN3 (AAV-CLSTN3, Figure S1A). A viral vector without the CLSTN3 insertion was used as the control (AAV-CON, Figure S1A). The full sequence maps of the pAAV-CMV-CLSTN3-3FLAG-WPRE vector and the CLSTN3 coding sequence are shown in Figure S1B and Figure S1C, respectively. The vectors were packaged into AAV9 (GenBank, AY530579.1) by Obio Technology. Approximately 4 × 10¹⁰ AAV particles were administered into both sides of the iWAT pads of each mouse. After 4 weeks, mice were sacrificed to examine the efficiency of AAV-mediated CLSTN3 overexpression in iWAT.
Human primary adipocytes
Stromal vascular fraction (SVF) and mature adipocytes were isolated from human abdominal SAT as previously described [26]. Fresh adipose tissue was collected in Dulbecco's Modified Eagle Medium (DMEM, MilliporeSigma) and kept on ice during transport. Biopsies were minced and digested with DMEM containing 0.2% type II collagenase (MilliporeSigma) and 1.5% bovine serum albumin (BSA) for 30 min at 37 °C with gentle shaking. After being kept on ice for 10 min, the digested sample was filtered through a 70-μm cell strainer, and the cell suspension was centrifuged for 10 min at 800×g. The cell precipitate was washed with phosphate-buffered saline (PBS) and centrifuged again to obtain the SVF. The floating mature adipocyte fraction was further washed with PBS and collected for further use.
2.6. Extraction of membrane and cytosol protein
Membrane and cytosolic protein fractions were extracted from human primary adipocytes using the Membrane and Cytosol Protein Extraction Kit (Beyotime) according to the manufacturer's protocol [27]. After efficient homogenization, the sample was centrifuged at 600×g to remove nuclei and intact cells. The supernatant was then centrifuged at 14,000×g for 30 min at 4 °C to obtain the precipitate comprising the membrane fraction. Membrane protein was extracted with the dedicated reagent. The remaining supernatant was used to obtain the cytosolic protein.
Histology analysis
Adipose tissue or liver samples were fixed in 4% paraformaldehyde for 24 h at 4 °C, embedded in paraffin, and cut into 4-μm-thick sections for histological analysis. Sections were stained with hematoxylin and eosin (H&E) to assess morphological changes, and images were viewed and photographed using a PANNORAMIC Digital Slide Scanner (3DHISTECH). Image-Pro Plus 6.0 was used to calculate adipocyte area and number from the sections of six individual mice and two fields per mouse in each group. In addition, some histological images were acquired with a Nikon microscope, and adipocyte area was quantified from the sections of three mice. For immunofluorescence analysis, deparaffinized tissue sections were blocked with goat serum and incubated with anti-CLSTN3 antibody (Proteintech), followed by Alexa Fluor 488-labeled Goat Anti-Rabbit IgG (Beyotime) and DAPI staining solution. Images were captured using a Leica fluorescence microscope.
2.10. Insulin signaling analysis
Insulin (Lilly) at a dose of 1.0 U/kg body weight was intraperitoneally injected into HFD-fed mice following a 6-hour fasting period. Mice were sacrificed 15 min after insulin injection, and iWAT pads were collected for further measurement.
Systemic metabolic tests
Glucose tolerance test (GTT) and insulin tolerance test (ITT) were performed in mice fed the HFD for 13 and 14 weeks, respectively. For GTT, mice were subjected to overnight fasting followed by intraperitoneal injection of D-glucose solution (MilliporeSigma) at a dose of 1.25 g/kg body weight. For ITT, insulin (Lilly) at a dose of 1.0 U/kg body weight was intraperitoneally administered to the mice after a 6-hour fasting period. Blood glucose levels were measured with a glucometer (Roche) at each time point: before and 15, 30, 60, 90, and 120 min after the glucose or insulin challenge.
Serum and liver chemistry
After mice were euthanized, blood samples were obtained and centrifuged to collect the serum. Serum insulin level was measured using a commercially available enzyme-linked immunosorbent assay (ELISA) kit (ImmunoDiagnostics Limited). Serum levels of total cholesterol and triglyceride were measured using commercially available kits from Siemens Healthcare Diagnostics Inc. with an ADVIA 2400 Chemistry System [28]. Hepatic triglyceride was determined with the Triglyceride Quantification Colorimetric/Fluorometric Kit (BioVision) following the manufacturer's protocol [29]. Briefly, after homogenizing approximately 50 mg of liver tissue, the lysate was centrifuged to obtain the supernatant for measurements of hepatic triglyceride content and total protein concentration.
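The hepatic triglyceride measurement implies a simple normalization of triglyceride content to total protein in the same supernatant. A minimal sketch with illustrative numbers (not data from the study):

```python
# Normalize measured triglyceride to total protein in the same supernatant.
# Both input values are illustrative placeholders.
tg_nmol = 120.0    # triglyceride in the assayed supernatant aliquot (nmol)
protein_mg = 1.5   # total protein in the same aliquot (mg)

tg_per_protein = tg_nmol / protein_mg
print(f"hepatic TG content: {tg_per_protein:.1f} nmol/mg protein")
```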
Lipolysis assay
For cold-induced and in vivo lipolysis, blood samples were collected at different time points and centrifuged at 4,000 rpm for 10 min to obtain mouse serum. For ex vivo lipolysis, approximately 20 mg of iWAT explant was weighed, placed into a 24-well plate with 200 μL of DMEM containing 2% BSA, and incubated with 10 μmol/L isoproterenol (MilliporeSigma), a β-adrenoceptor agonist, in a 37 °C incubator for 60 min. Medium was collected at different time intervals for further measurement. The levels of free glycerol and non-esterified fatty acid (NEFA) in mouse serum or culture medium were measured using commercial kits (free glycerol, MilliporeSigma; NEFA, Wako).
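Ex vivo lipolysis data of this kind are typically expressed as analyte released per milligram of explant per hour. A minimal sketch of that normalization, with placeholder values rather than study data:

```python
# Convert a measured glycerol amount in the medium into a release rate
# normalized to explant weight and incubation time. Values are placeholders.
glycerol_umol = 0.045   # free glycerol measured in the collected medium (umol)
explant_mg = 20.0       # weight of the iWAT explant (mg)
incubation_h = 1.0      # isoproterenol incubation time (h)

release_rate_nmol = glycerol_umol * 1000 / explant_mg / incubation_h
print(f"glycerol release: {release_rate_nmol:.2f} nmol/mg/h")  # -> 2.25
```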
2.14. Immunoprecipitation assay
Immunoprecipitation assays were performed according to a previously described protocol [30]. For in vitro immunoprecipitation, CLSTN3-FLAG and APP-HA plasmids were co-transfected into 293T cells (ATCC) in 6-well plates using Lipofectamine (Invitrogen) following the manufacturer's instructions. Cells were harvested 48 h later and resuspended in lysis buffer (Beyotime) containing protease inhibitor (Roche). For FLAG-tagged immunoprecipitation in mice, iWAT samples were homogenized in lysis buffer. The lysate from cultured cells or tissues was centrifuged at 12,000 rpm for 10 min at 4 °C. The supernatant was harvested and incubated with FLAG beads (MilliporeSigma) for 4 h at 4 °C. For endogenous immunoprecipitation in human SAT, the supernatant from the homogenized lysate was incubated with anti-CLSTN3 antibody (Proteintech)- or IgG (Beyotime)-pretreated beads (MedChemExpress) overnight at 4 °C. All samples were analyzed by immunoblotting with the indicated antibodies.
Mitochondrial isolation
Mitochondria were isolated from iWAT biopsies using the Tissue Mitochondria Isolation Kit (Beyotime, C3606). After mice were euthanized, iWAT pads were removed rapidly and minced into small pieces. A glass homogenizer was used to grind the tissue pieces on ice in mitochondria isolation solution containing protease inhibitor (Roche). The lysate was centrifuged at 600×g for 5 min at 4 °C, and the supernatant was further centrifuged at 11,000×g for 10 min at 4 °C. The precipitate containing mitochondria was resuspended in mitochondrial lysis solution. Mitochondrial protein concentration was measured by the BCA method.
Mitochondrial respiration measurement
Ex vivo mitochondrial respiration was measured using a Seahorse XF24 Extracellular Flux Analyzer (Agilent) as previously described [31]. First, 5-10 mg of iWAT explants were placed into XF24 Islet Capture Microplates (Agilent) and incubated with XF base medium containing 25 mmol/L glucose, 2 mmol/L glutamine, and 2 mmol/L sodium pyruvate (pH 7.4) in a 37 °C non-CO₂ incubator (Agilent) for 1 h. Then, tissue explants were sequentially injected with 10 μmol/L oligomycin, 8 μmol/L FCCP, and 12 μmol/L antimycin A plus rotenone to perform the mitochondrial stress test. The initial oxygen consumption rate (OCR) value was assessed with the Seahorse XF24 software, and the final OCR result was normalized to the protein content of the tissue explant in each well. The primary data for the initial OCR values and protein content are annotated in Table S4.
2.17. Mitochondrial DNA copy number
Mitochondrial DNA (mtDNA) copy number was measured according to a previously described protocol [32]. Genomic DNA was extracted from mouse iWAT using the QIAamp Fast DNA Tissue Kit (Qiagen). The ratio of the mitochondrial gene mt-Nd1 to the nuclear gene Rbm15 was determined by quantitative PCR. The primer sequences are listed in Table S3.
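The mt-Nd1/Rbm15 ratio is commonly derived from qPCR Ct values under the assumption of roughly 100% amplification efficiency, as in the sketch below; the Ct values are illustrative, not study data.

```python
# Relative mtDNA copy number from qPCR threshold cycles:
# ratio = 2^(Ct_nuclear - Ct_mitochondrial), assuming ~100% efficiency.
ct_mt_nd1 = 14.2   # Ct of the mitochondrial target (mt-Nd1), illustrative
ct_rbm15 = 22.8    # Ct of the nuclear single-copy reference (Rbm15)

mtdna_ratio = 2 ** (ct_rbm15 - ct_mt_nd1)
print(f"relative mtDNA copy number: {mtdna_ratio:.0f}")  # ~388 here
```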
Statistical analysis
We used GraphPad Prism 7.0 and SAS 9.4 software to perform the statistical analyses. All data are presented as mean ± standard error of the mean (SEM), mean ± standard deviation (SD), or median (interquartile range, 25-75%). Two-tailed paired or unpaired Student's t tests and the Wilcoxon signed-rank test were used for comparisons between two groups. Two-way ANOVA followed by Bonferroni correction was applied for multiple comparisons with two independent factors. Linear regression analysis was performed to examine the correlation between two variables. Statistical significance was set at p < 0.05.
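A minimal sketch of the two-way ANOVA with Bonferroni correction described above, using Python's statsmodels on a synthetic data frame; the column names and values are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests

# Synthetic 2x2 design (group x time point), two observations per cell.
df = pd.DataFrame({
    "value": [1.0, 1.2, 0.9, 1.1, 1.8, 2.1, 1.9, 2.2],
    "group": ["CON", "CON", "CLSTN3", "CLSTN3"] * 2,
    "time":  ["t0"] * 4 + ["t30"] * 4,
})

# Two-way ANOVA with interaction term.
model = smf.ols("value ~ C(group) * C(time)", data=df).fit()
print(anova_lm(model, typ=2))

# Bonferroni adjustment of a set of pairwise p-values.
raw_p = [0.012, 0.034, 0.200]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
print(adj_p)  # each raw p multiplied by 3, capped at 1
```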
RESULTS
3.1. CLSTN3 gene is routinely expressed in human adipose tissue
To identify the expression pattern of the Clstn3/CLSTN3 gene in WAT, we analyzed the distribution of RNA sequencing reads in the Clstn3/CLSTN3 locus in iWAT of mice and SAT of humans, respectively. The Integrative Genomics Viewer demonstrated that the transcript of the Clstn3 gene was expressed at a very low level in mouse iWAT; the locus was mainly expressed as the novel Clstn3b gene, which shares its last two exons with the mouse Clstn3 gene [33] (Figure 1A). In contrast, we observed that the CLSTN3 transcript was routinely expressed in human SAT (Figure 1B). To further understand the expression of CLSTN3 in human adipose tissue, we isolated the SVF and adipocyte fractions from SAT biopsies. Interestingly, mRNA expression of CLSTN3 was predominantly enriched in the adipocyte fraction, consistent with the adipocyte marker genes PPARG and LEP (Figure 1C,D). CLSTN3 acts as a synaptogenic protein that is abundantly localized at the cell surface. Here, cell membrane localization of CLSTN3 in human primary adipocytes was further supported by evidence from subcellular fractionation and immunoblot analysis; its distribution was identical to that of the membrane marker CAV1 (Figure 1E). Together, the discrepant expression patterns of the Clstn3/CLSTN3 gene in mice and humans suggest that CLSTN3 may play a specific role in human adipose biology.
3.2. The variant rs7296261 in the human CLSTN3 locus is associated with obesity risk
To test this hypothesis, we summarized the associations of CLSTN3 variants with obesity risk in a human cohort of 2,386 individuals from SHOS. We found that only SNP rs7296261 in the CLSTN3 locus was significantly associated with metabolic traits. Genotyping for rs7296261 divided the individuals into three groups, namely 659 GG, 1,180 GA, and 547 AA genotype carriers. The clinical characteristics of the three groups are presented in Table S1. Of note, there were significant differences in metabolic traits between the homozygous GG and AA genotype carriers, including BMI, body fat, total cholesterol, and low-density lipoprotein cholesterol (LDL-c). In particular, compared to GG genotype carriers, AA genotype carriers exhibited a higher BMI (mean, 24.28 vs. 24.74 kg/m²) and body fat (mean, 28.4% vs. 29.4%) (Figure 2A,B). Since individuals with modest BMIs in the SHOS cohort retained a compensatory insulin response under glucose challenge, there was no significant difference in glucose metabolism between the two groups (Table S1) [34]. To further understand the relationship between CLSTN3 rs7296261 and increased adiposity, we performed eQTL analysis on paired abdominal SAT and VAT of 81 obese participants. SNP rs7296261 is located in an intronic region of CLSTN3, and the presence of this variant may alter the transcriptional expression of the gene [35]. The Genotype-Tissue Expression (GTEx) database reports an eQTL for rs7296261 [36]: individuals carrying the AA genotype exhibited higher CLSTN3 expression in both subcutaneous and visceral fat (Figure S2A and S2B). Our data confirmed that the CLSTN3 mRNA level was higher in both SAT and VAT of individuals carrying the AA genotype than in GG genotype carriers (Figure 2C,D). SNP rs7296261 is also located in the first exon of the novel CLSTN3B gene, but there was no difference in CLSTN3B mRNA level between the GG and AA groups (Figure S2C and S2D). Meanwhile, we performed an additional analysis of the metabolic traits of the 81 obese participants with severe BMIs according to CLSTN3 rs7296261 genotype. Surprisingly, we observed that fasting plasma glucose (mean, 6.38 vs. 8.6 mmol/L) and HbA1c (mean, 6.39% vs. 8.27%) were higher in AA genotype carriers than in GG genotype carriers (Figure 2E,F); 2-hour plasma glucose after a 75 g oral glucose tolerance test (OGTT) showed an increasing trend in the AA genotype group compared to the GG genotype group (mean, 9.14 vs. 11.97 mmol/L, p = 0.071) (Table S2). Furthermore, we observed that the CLSTN3 mRNA level in SAT of obese subjects was significantly elevated compared to that in lean participants (Figure 2G). SAT CLSTN3 expression was positively correlated with BMI (r = 0.4834, p = 0.031) and body fat (r = 0.4677, p = 0.038) (Figure 2H,I). Together, these results demonstrate that CLSTN3 rs7296261, which is associated with obesity risk, leads to an increase in CLSTN3 expression in human adipose tissue; moreover, high CLSTN3 expression may be associated with increased adiposity.
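The reported correlations (r and p) follow from an ordinary linear regression of expression against the trait. A minimal sketch with synthetic arrays standing in for the study data:

```python
from scipy import stats

# Placeholder arrays: SAT CLSTN3 expression and BMI for a small cohort.
clstn3_expr = [1.0, 1.4, 1.1, 2.0, 1.7, 2.4, 2.1, 2.9]
bmi = [22.5, 24.0, 23.1, 29.8, 27.5, 33.2, 30.9, 36.0]

res = stats.linregress(clstn3_expr, bmi)
print(f"r = {res.rvalue:.3f}, p = {res.pvalue:.4f}")
```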
3.3. CLSTN3 has no effect on adipose thermogenesis in mice
To test the hypothesis that CLSTN3 may play a role in the regulation of adipose biology in mice, we first generated an AAV carrying the human CLSTN3 coding sequence (AAV-CLSTN3) and the control virus (AAV-CON). AAV particles were subcutaneously injected into both sides of the iWAT pads of C57BL/6J male mice (Figure 3A). Immunoblotting and immunofluorescence analyses confirmed that CLSTN3 was locally overexpressed in iWAT; meanwhile, we did not observe endogenous Clstn3 protein expression in iWAT, epididymal WAT (eWAT), or liver (Figure 3B,C). The novel form of the Clstn3 gene, Clstn3b, regulates whole-body metabolism by controlling adipose thermogenesis in mice [33]. Therefore, we examined the effect of CLSTN3 overexpression on the regulation of adipose thermogenesis. However, there was no apparent difference between AAV-CON and AAV-CLSTN3 mice in the expression of thermogenic genes (Ucp1, Pgc1a, Cidea, Cox7a1, and Clstn3b) in iWAT (Figure 3D), irrespective of whether they were exposed to room temperature or chronic cold. Likewise, the UCP1 protein level remained unaltered in iWAT upon CLSTN3 overexpression (Figure 3E).
Overexpression of CLSTN3 causes diet-induced dysfunctional iWAT and liver steatosis
To understand whether CLSTN3 overexpression in iWAT drives diet-induced obesity, we subjected the mice to HFD feeding (Figure 4A). During the 12-week HFD course, the body weight of AAV-CLSTN3 mice remained similar to that of AAV-CON mice (Figure 4B). Furthermore, food intake appeared comparable in the two groups (Figure S3A). However, the tissue weight of iWAT, where CLSTN3 was overexpressed, increased significantly, whereas the tissue weight of eWAT, where CLSTN3 was not overexpressed, did not exhibit a significant change (Figure 4C). Moreover, H&E staining revealed larger adipocytes, without a change in adipocyte number, in iWAT of AAV-CLSTN3 mice compared to AAV-CON mice (Figure 4D); eWAT adipocytes did not differ in size or number between the two groups (Figure S3B). Gene expression analysis showed an increase in the expression of generic macrophage genes (Adgre1 and Il6) and the pro-inflammatory M1 macrophage marker (Nos2) in iWAT of AAV-CLSTN3 mice (Figure 4E). Obesity is frequently associated with chronic adipose inflammation, which is also linked to adipose fibrosis [4]. Indeed, we found that the expression of the major driver of adipose fibrosis, Hif1a, was upregulated in CLSTN3-overexpressed iWAT. This was further highlighted by the increased expression of its target genes, including Tgfb1, Col1a1, Col3a1, and Col6a1 (Figure 4E). Furthermore, AAV-CLSTN3 mice showed an attenuated P-AKT signal in iWAT upon insulin stimulation (Figure S3C). The expression of genes involved in glucose clearance (Slc2a1, Slc2a4, and Hk2) showed a decreasing trend in CLSTN3-overexpressed iWAT (Figure S3D). In conclusion, these data demonstrate that iWAT of AAV-CLSTN3 mice displays local dysfunctional expansion, which may further pose a risk to systemic health in mice. Thereafter, we attempted to relate this phenomenon to whole-body homeostasis. After HFD feeding, AAV-CLSTN3 mice displayed increased glucose intolerance compared to control mice in the GTT, although insulin tolerance changed minimally in the ITT (Figure 4F,G). Meanwhile, we observed that fasting insulin concentration was comparable between the two groups (Figure 4H). This moderate change in GTT and the lack of a substantial change in overall ITT response may be explained by the AAV-mediated local overexpression of CLSTN3 in iWAT. Highly inflammatory and fibrotic adipose tissue is usually associated with adverse changes in the liver [37]. Therefore, we examined the liver of HFD-fed AAV-CLSTN3 mice for histological changes and noticed a pronounced fatty liver phenotype (Figure 4I); liver chemistry showed an elevated triglyceride level (Figure 4J). Furthermore, serum concentrations of total cholesterol and triglyceride were higher in AAV-CLSTN3 mice (Figure 4K). Therefore, these data indicate that iWAT-specific CLSTN3 overexpression is associated with local dysfunctional WAT expansion and liver steatosis, which in turn leads to moderate impairment of whole-body metabolism.
CLSTN3 attenuates catecholamine-stimulated lipolysis in vivo and ex vivo
The driving force for adipocyte hypertrophy after CLSTN3 induction was unclear. Surprisingly, we noted that CLSTN3-overexpressing iWAT displayed rapid adipocyte enlargement after 4 weeks of AAV administration (Figure 5A). This demonstrated that CLSTN3 overexpression is sufficient to drive adipocyte hypertrophy, even in the absence of HFD. Mechanistically, adipocyte size is regulated by the balance between lipogenesis and lipolysis [38]. Initially, we did not find any differences in the expression of genes that are critical for adipogenesis (Pparg,
Cebpa, and Fabp4) in iWAT upon CLSTN3 overexpression, while there was a nonsignificant reduction in the expression of genes involved in lipogenesis (Acaca, Fasn, and Scd1) and lipolysis (Lipe and Pnpla2) (Figure 5B). HSL is activated via phosphorylation, and the P-HSL level is used to assess the degree of lipolysis [39]. Hence, we further explored the effect of CLSTN3 overexpression on HSL phosphorylation. Physiologically, lipolysis is coordinately controlled by the fasting/feeding cycle and cold stress [40]. First, we observed that CLSTN3 overexpression did not alter blood glucose or serum insulin levels in either the fasted or fed state (Figure S4A and S4B). It also had no effect on fasting-induced lipolysis (Figure S4C and S4D). However, in response to cold stress, the levels of free glycerol and NEFA in the serum of CLSTN3-overexpressing mice were lower than those in control mice (Figure 5C,D), and the P-HSL level in iWAT of CLSTN3-overexpressing mice was lower (Figure 5E). Moreover, we tested in vivo lipolytic capacity. We noticed that AAV-CON mice exhibited enhanced serum glycerol and NEFA levels at different time points in response to CL-316,243, while AAV-CLSTN3 mice failed to trigger marked glycerol and NEFA release at the 30-min time point (Figure 5F,G). At the protein level, CLSTN3 overexpression attenuated the phosphorylation of HSL in iWAT (Figure 5H). Furthermore, our ex vivo lipolysis assay confirmed that iWAT explants obtained from AAV-CLSTN3 mice displayed an impaired response to isoproterenol-induced glycerol release from the 30-min time point and NEFA release from the 15-min time point (Figure 5I,J). Likewise, CLSTN3 overexpression reduced the P-HSL/T-HSL protein ratio in iWAT explants (Figure 5K). Therefore, we conclude that CLSTN3 overexpression in iWAT causes adipocyte hypertrophy, at least in part, through impaired catecholamine-stimulated lipolysis.
3.6. CLSTN3 modulates adipose mitochondrial function via its interaction with APP
Our observations raised a critical question regarding the mechanism by which CLSTN3 controls lipolysis in iWAT. Previous studies have established that CLSTN3 can interact with APP to form a complex that stabilizes APP metabolism in the brain [21,22]. Initially, we examined the interaction between CLSTN3 and APP using immunoprecipitation assays. The interaction between overexpressed CLSTN3-FLAG and APP-HA was observed in 293T cells (Figure 6A). Endogenous APP could be immunoprecipitated with FLAG-tagged CLSTN3 in iWAT of AAV-CLSTN3 mice (Figure 6B). Moreover, endogenous CLSTN3 and APP exhibited a tight interaction in human SAT (Figure 6C). To examine whether this interaction affects the APP level, we assessed it in whole lysates of mouse iWAT. Notably, we observed that the APP level was greatly upregulated in iWAT of AAV-CLSTN3 mice (Figure 6D). Obesity induces abnormal APP expression in WAT of mice, and excess APP is mis-targeted to mitochondria [41]. Consistent with this, we observed that APP was substantially enriched in mitochondria isolated from CLSTN3-overexpressed iWAT (Figure 6E). Mitochondrial mis-localization of APP in WAT disrupts mitochondrial function and impairs lipolysis in mice [41]. This prompted us to examine whether the adverse effect of CLSTN3 on lipolysis is mediated through mitochondrial dysfunction caused by APP accumulation. Interestingly, we observed a substantial decline in the maximal OCR of iWAT explants of AAV-CLSTN3 mice (Figure 6F). We additionally observed that CLSTN3 overexpression did not alter mtDNA copy number (Figure 6G) or the gene expression levels of mitochondrial biogenesis regulators (Pgc1a, Tfam, Nrf1, and Nrf2) (Figure 6H). Furthermore, upon CLSTN3 overexpression, the majority of genes associated with fatty acid oxidation and the respiratory chain remained unaffected, while Hadh and Cpt1b were altered (Figure 6I). Finally, we evaluated the protein levels of crucial OXPHOS components in mitochondria isolated from CLSTN3-overexpressed iWAT. Among these, we found that SDHB (CII), UQCRC2 (CIII), and ATP5A (CV) were significantly suppressed (Figure 6J). Hence, these observations indicate that CLSTN3 may cause mitochondrial dysfunction by interacting with APP and promoting APP enrichment in mitochondria.
DISCUSSION
In the present study, we have established that CLSTN3 has a vital role in regulating lipolysis in adipose tissue, apart from its fundamental action in the CNS. We combined a human genomic dataset and eQTL analysis to provide evidence that high CLSTN3 expression in adipose tissue correlates with the risk of human obesity. Our in vivo and ex vivo studies demonstrate that CLSTN3 overexpression causes iWAT dysfunction, partly due to impaired catecholamine-stimulated lipolysis.
Similar to the other CLSTNs (CLSTN1 and CLSTN2), CLSTN3 is a type I transmembrane protein of the cadherin superfamily. Although the three members have similar molecular structures, CLSTN3 differs in function from the other two, because this protein has a unique C-terminus and shows a more prominent surface localization [16]. It promotes synapse development in the CNS by interacting with α-neurexin [42,43]. Kim et al. showed a novel physiological function of neuronal CLSTN3 in regulating energy homeostasis and bone metabolism in mice [44]. However, the relevance of CLSTN3 in peripheral tissues has not yet been studied thoroughly. Zeng et al. revealed a novel gene in mice, Clstn3b, which shares the last two exons of the Clstn3 gene and regulates systemic energy expenditure by controlling the innervation of thermogenic adipose tissue. Clstn3b mRNA expression is specifically restricted to adipose tissue: it is most highly expressed in BAT, followed by iWAT and eWAT [33]. Information on Clstn3 expression in adipose tissue is lacking. Our data showed that Clstn3 expression in mouse iWAT and eWAT is extremely low, while CLSTN3 was routinely expressed in human adipose tissue. Moreover, due to the sequence homology shared between the Clstn3b and Clstn3 genes, their total transcripts have been identified in transcriptomic datasets of mouse and human fat [26,45]. Furthermore, the adipose specificity of both the Clstn3 and Clstn3b transcripts has been assessed separately in adipose depots of mice and humans under environmental cues [46]. In the current study, we observed that the expression pattern of the CLSTN3 gene in human adipose tissue is different from that in mice. The CLSTN3 transcript is routinely expressed in human adipose tissue and predominantly enriched in the mature adipocyte fraction. Therefore, the overall function of CLSTN3 in adipose biology is worth exploring in depth.
Obesity is heritable, and it predisposes individuals to many metabolic diseases. GWAS have been performed to date to understand the genetic basis of the biological processes underlying obesity. It is worth noting that genetic loci associated with BMI overlap with genes involved in neurodevelopment, indicating a role of the CNS, particularly the hypothalamus, in the regulation of body mass [47]. Adjusted-BMI loci are enriched for genes expressed in adipose depots and putative regulatory elements in adipocytes, and eQTL analysis provides insight into the potential pathophysiological mechanisms [48]. Non-coding variants may influence gene expression via chromatin modification, DNA accessibility, and transcription factor binding in specific cell types and tissues [35]. For example, RIPK1 gene variants have been demonstrated to associate with human obesity; SNP rs5873855 in the RIPK1 intronic region disrupts a binding site for the transcriptional repressor E4BP4 and increases RIPK1 promoter activity and gene expression in adipose tissue [49]. SNP rs7296261 is located in an intron of the CLSTN3 gene, and the variant may govern its transcriptional expression.
However, the underlying gene regulatory elements and transcription factors that SNP rs7296261 influences need to be further defined. Here, we have demonstrated that rs7296261 is associated with high CLSTN3 expression in human adipose tissue and with obesity risk. This implies that participants who are genetically susceptible to increased expression of CLSTN3 tend to have a high risk of obesity. The data obtained from HFD-induced obese mice further support the role of CLSTN3 in adipose biology, wherein CLSTN3 overexpression results in a deterioration of WAT function and the onset of liver steatosis.
Body fat mass is determined by the balance between lipid storage and mobilization in adipocytes [50]. The physiological release of fatty acids from triglyceride is stimulated by fasting or cold stress, and it occurs via the release of catecholamines from the sympathetic nerves [50,51]. Impaired lipolytic capacity commonly results in improved metabolic function, as reduced FFA liberation from adipose depots is thought to alleviate lipotoxicity in peripheral tissues, including the liver [52,53]. An et al. revealed that the mitochondrial dicarboxylate carrier mDIC prevents hepatic lipotoxicity by inhibiting white adipocyte lipolysis [53]. They also revealed that mitochondrial APP enrichment impairs catecholamine-induced lipolysis, thereby resulting in rapid adipocyte hypertrophy and liver steatosis [41]. Therefore, a simple assessment of lipolysis in WAT may not determine ectopic fat deposition and metabolic dysfunction; rather, which tissue becomes dysfunctional first should be considered. Inefficient subcutaneous adipocyte lipolysis at an early stage predicts further weight gain and glucose intolerance in women [54]. Our data demonstrate that CLSTN3 overexpression rapidly leads to adipocyte hypertrophy and dysfunction due to mitochondrial dysfunction and lower catecholamine-stimulated lipolysis in iWAT depots, which in turn leads to the subsequent development of liver steatosis and whole-body metabolic deficiency. In addition, alterations in some adipokines directly contribute to the progression of metabolic liver diseases. For example, adiponectin is secreted exclusively from adipose tissue and has been shown to reduce hepatic lipogenesis and increase β-oxidation, promoting systemic energy homeostasis [55]. In the present study, the CLSTN3-driven adipokines participating in adipocyte-liver crosstalk remain unknown.
It is well known that mitochondrial dynamics regulate lipid storage and utilization [56]. However, the mediators between mitochondrial dysfunction after CLSTN3 overexpression and decreased catecholamine-stimulated lipolysis are currently unclear. Several modulators link mitochondrial function and lipolysis. LINC00473 is shuttled to the mitochondria-lipid droplet interface and modulates mitochondrial responsiveness and lipolysis under catecholamine activation [57]. Beclin1 is the core molecule of the macroautophagy machinery in adipose tissue, and it has critical roles in the maintenance of mitochondrial homeostasis and lipolysis in relation to β-adrenergic stimulation [58]. Therefore, the underlying inter-organelle communications after CLSTN3 overexpression are worth exploring further. Therapeutic silencing of CLSTN3 in adipose tissue would help confirm its function; however, Clstn3 expression is extremely low in mouse WAT, and interventions in human adipose tissue are challenging.
APP can be cleaved by proteases via amyloidogenic and non-amyloidogenic pathways to produce a variety of short peptides, among which the role of amyloid β peptides in Alzheimer's disease has been intensively investigated [59]. Abnormal expression of full-length APP in peripheral tissues is associated with metabolic diseases [19]. Mitochondrial mis-localization of APP in adipocytes disrupts mitochondrial function, inhibits lipolysis, and promotes the occurrence of obesity in mice [41]. APP knockdown in adipocytes enhances mitochondrial respiration [60]. In our study, the similarities observed in the expression and function of CLSTN3 and APP in WAT suggest that they may share common signaling pathways in the regulation of adipose biology and systemic homeostasis. Therefore, we propose that the interaction between CLSTN3 and APP forms one of the vital mechanisms underlying the pathological role of CLSTN3 in controlling adipocyte mitochondrial function.
There are several limitations to our study. First, the effect of CLSTN3 on WAT dysfunction was demonstrated through AAV-mediated overexpression; future work should address the comprehensive role of CLSTN3 using adipocyte-specific CLSTN3 transgenic mice. Second, there is a correlation between CLSTN3 expression and APP localization to mitochondria, but the role of CLSTN3 in modulating WAT function through the APP translocation pathway needs to be thoroughly investigated. Last, the associations of the CLSTN3 variant rs7296261 with metabolic traits and with its expression in adipose tissue need to be strengthened in a human cohort including lean individuals and varying degrees of obesity in future studies.
In conclusion, our work suggests that the presence of the CLSTN3 gene variant is correlated with high CLSTN3 expression in human adipose tissue, which in turn is associated with unfavorable phenotypes. In the context of obesity, we have demonstrated a novel role of CLSTN3 in adipose mitochondrial function, catecholamine-induced lipolysis, and adipocyte hypertrophy. We believe that CLSTN3 may be a therapeutic target for the treatment of obesity and associated metabolic diseases.
AUTHOR CONTRIBUTIONS
YY, CH, and JH conceived and designed the study. NB, XL, and LJ performed the experiments, collected and analyzed the results. MA, JM, FH, YX, JS, JX, and RZ assisted with experiments and data analysis. NB and YY wrote the paper. CH and JH reviewed the manuscript. All authors approved the final version of the manuscript to be published.
Isolation and molecular characterization of Toxoplasma gondii from placental tissues of pregnant women who received toxoplasmosis treatment during an outbreak in southern Brazil
Toxoplasma gondii is a protozoan with great genetic diversity that is prevalent worldwide. In 2018, an outbreak of toxoplasmosis occurred in Santa Maria, Brazil, considered the largest such outbreak ever described in the world. This paper describes the isolation and molecular characterization of Toxoplasma gondii from the placentas of two pregnant women with acute toxoplasmosis who had live births and were receiving treatment for toxoplasmosis during the outbreak. Placental tissue samples from the two patients were subjected to isolation by mouse bioassay, conventional PCR, and genotyping using PCR-RFLP with twelve markers. Both samples were positive upon isolation in mice. The isolates were lethal to mice, suggesting high virulence. In addition, the samples were positive by conventional PCR, and the isolates submitted to PCR-RFLP genotyping presented an atypical genotype that had never been described before. This research contributes to the elucidation of this large outbreak in Brazil.
Introduction
Toxoplasma gondii is a tissue cyst-forming protozoan capable of infecting warm-blooded animals, including humans, and is prevalent in most parts of the world [1]. It is one of the most studied coccidians due to its importance in animal and human health [2,3], as well as its suitability as a model in molecular studies [1].
Although T. gondii is the only species of the genus Toxoplasma [1,4], there are various genotypes [1]. The first genotyping studies of T. gondii led to the description of a clonal population structure with three main lineages, designated types I, II, and III [5,6]. Currently, there are many known genotypes that do not belong to these three clonal lineages and are called atypical. They are generally considered more virulent [7] and are formed by sexual reproduction between gametes of different genotypes, which occurs in the intestine of felids [1]. In Brazil, these atypical genotypes have been widely described [8]. There are studies showing the prevalence of T. gondii in animals and humans [9,10,11], and some studies have performed isolation and genetic characterization from cases of congenital toxoplasmosis [12,13].
T. gondii infection is generally asymptomatic in humans. However, it is potentially serious when acquired during pregnancy in immunocompetent individuals, as it carries the risk of fetal transmission [14]. When congenital toxoplasmosis occurs, the protozoan can cause fetal lesions ranging from subclinical findings to neurological damage, and even fetal death or miscarriage [15,16]. The clinical manifestation varies according to the stage of pregnancy, time of infection [17], and genotype [16]. The latter makes congenital toxoplasmosis more serious in Brazil, due to infection with more virulent genotypes [18].
In 2018, an outbreak of toxoplasmosis occurred in Santa Maria, Rio Grande do Sul, with 809 confirmed cases. Of these, 114 were pregnant women, among whom there were 3 fetal deaths, 10 abortions, and 22 live births with congenital toxoplasmosis [19]. The objective of this study was to describe the isolation and molecular characterization of T. gondii from the placentas of two pregnant women with acute toxoplasmosis who delivered live children and were receiving treatment for toxoplasmosis.
Samples and clinical history
Placental tissue samples from two patients (patient 1 and patient 2), who delivered their babies at the University Hospital of Santa Maria during the toxoplasmosis outbreak in 2018, were referred to the Laboratory of Parasitic Diseases of the Federal University of Santa Maria (UFSM) for diagnostic purposes. Part of the tissue was used for protozoan isolation, and another part for molecular tests.
According to their clinical history, both patients were positive for acute toxoplasmosis, based on the detection of anti-T. gondii IgM by enzyme-linked immunosorbent assay (ELISA). Both pregnant women were diagnosed in the final trimester of gestation, received treatment, and had live births. The treatment protocol consisted of a combination of sulfadiazine, pyrimethamine, and folinic acid (SPAF). Patient 1 started treatment at 35 weeks of gestation, while patient 2 started treatment at the 36th week. Both patients received treatment for four weeks and thereafter gave birth.
Isolation through bioassay in mice
The placental tissues were subjected to peptic digestion individually, according to the technique described by Dubey, 1998 [20], using 50 g of placental tissue per sample. The digested material was resuspended in 5 mL of saline, and immediately after digestion, mice were inoculated intraperitoneally with 1 mL of the peptic digestion solution. For each sample tested, four female Swiss mice were used, with a fifth maintained as a negative control. The animals were obtained from the Central Bioterium of the UFSM. Mice were monitored daily for possible clinical signs of acute toxoplasmosis. When the disease led to death, brain, heart, lung, and intraperitoneal fluid samples were collected from all mice. The tissues were subjected to molecular analysis, and the intraperitoneal fluid was also examined under a microscope at 40× magnification.
All procedures were approved by the Committee of Ethics in the Use of Animals of the Federal University of Santa Maria, under protocol 7150250419.
DNA extraction
DNA extraction was performed from the placental tissue samples of both patients and from mouse tissues using the Wizard Genomic DNA Purification Kit (Promega), following the manufacturer's instructions. In all cases, 20 mg of tissue was used for DNA extraction.
Polymerase Chain Reaction (PCR)
PCR amplification was performed with the specific primers TOX4 (CGCTGCAGGGAGGAAGACGAAAGTTG) and TOX5 (CGCTGCAGACACAGTGCATCTGGATT), which amplify a 529-bp fragment of the T. gondii genome. The PCR was performed as described by Homan et al. 2000 [21]. Tachyzoite DNA from the RH strain was used as a positive control, and DNase-free water was used as a negative control. A 100-bp molecular marker (Ludwig Biotec) was used as the size standard. Amplified products were visualized on a UV transilluminator after electrophoresis on a 1.5% agarose gel stained with SYBR Safe DNA gel stain (Invitrogen).
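As a sanity check of the expected amplicon size, the sketch below performs a naive in-silico PCR: it locates the forward primer and the reverse complement of the reverse primer in a template string and reports the product length. The template is a synthetic placeholder, not the real REP529 sequence.

```python
# Naive in-silico PCR for the TOX4/TOX5 primer pair (expected product: 529 bp).
TOX4 = "CGCTGCAGGGAGGAAGACGAAAGTTG"
TOX5 = "CGCTGCAGACACAGTGCATCTGGATT"

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplicon_length(template: str):
    fwd = template.find(TOX4)
    rev = template.find(revcomp(TOX5))
    if fwd == -1 or rev == -1 or rev < fwd:
        return None  # one or both primer sites absent
    return rev + len(TOX5) - fwd

# Toy template: forward site + 477-bp spacer + reverse-complement site.
template = TOX4 + "A" * 477 + revcomp(TOX5)
print(amplicon_length(template))  # -> 529
```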
Analysis of restriction fragment length polymorphism (RFLP)
Genotypic characterization was performed on mouse tissues that were positive for the 529-bp TOX fragment, using twelve markers (SAG1, 5'SAG2, 3'SAG2, Alt.SAG2, SAG3, BTUB, GRA6, C22-8, C29-2, L358, PK1, APICO), according to the technique described by Su et al. 2010 [22]. To do so, the extracted DNA was amplified by the nested PCR (n-PCR) technique followed by PCR-RFLP analysis. DNA target sequences were first amplified by multiplex PCR using the external primers for all markers, followed by nested PCR using the internal primers for each marker. DNA samples from the standard strains RH, ME49, and VEG were used as controls for genotypes I, II, and III, respectively.
The polymorphism at each locus was analyzed by comparison with standard RFLP band patterns, which were used to distinguish strain types. For this, nested PCR products were digested with the appropriate restriction enzymes for each marker, according to Su et al. 2010 [22]. The controls were digested using the same restriction enzymes. The negative control consisted of DNase-free water. The results obtained were compared and classified according to the genotypes present in ToxoDB (http://toxodb.org/toxo/).
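Conceptually, genotype calling reduces to comparing the twelve allele calls of an isolate against reference patterns. The sketch below encodes only the three clonal controls (whose alleles define the type I/II/III designations); a full implementation would compare against the ToxoDB catalogue. The dictionary layout and function are assumptions for illustration.

```python
# Toy genotype caller for PCR-RFLP profiles at the twelve markers.
MARKERS = ["SAG1", "5'SAG2", "3'SAG2", "Alt.SAG2", "SAG3", "BTUB",
           "GRA6", "C22-8", "C29-2", "L358", "PK1", "APICO"]

# The clonal reference strains define the allele labels at every marker.
REFERENCE = {
    "Type I (RH)": ["I"] * 12,
    "Type II (ME49)": ["II"] * 12,
    "Type III (VEG)": ["III"] * 12,
}

def classify(profile):
    for name, pattern in REFERENCE.items():
        if profile == pattern:
            return name
    return "atypical (no match in reference set)"

# Profile of the patients' isolate, as reported in Table 2.
isolate = ["I", "I", "I", "I", "III", "III", "II", "III", "III", "I", "I", "I"]
print(classify(isolate))  # -> atypical (no match in reference set)
```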
Results
T. gondii was isolated from the placental tissues of both patients. Within two weeks, the mice presented signs indicative of acute toxoplasmosis, such as apathy, bristly hair, photophobia, ascites, and death (Table 1). In addition, it was possible to identify large numbers of tachyzoites in the intraperitoneal fluid collected from the animals. As expected, the placental tissue samples, as well as the mouse tissue samples (brain, heart, and lung), submitted to conventional PCR showed an amplified product of 529 base pairs, confirming the presence of T. gondii DNA in the placentas of the evaluated patients and in the bioassay mice.
In the genotypic characterization by the RFLP technique, the T. gondii DNA amplified from the tissues of mice submitted to the bioassay presented an atypical genotype not yet described in ToxoDB. This result is compared with other genotypes in Table 2.
Discussion
Samples from animals have been widely used for the isolation and genetic characterization of T. gondii [8]. In humans, however, this diagnosis is restricted [23], which makes it difficult to clarify the virulence and genetic identity of the strains that infect humans. In the present study, conducted during the toxoplasmosis outbreak in Santa Maria, T. gondii was isolated from the placental tissues of two patients with acute toxoplasmosis who received specific treatment in the third trimester of gestation. This result is interesting, since the success of T. gondii isolation is lower in pregnant women receiving treatment [16,24,25]. The isolation of the protozoan from the two patients in this study suggests that, in both cases, the established treatment protocol did not prevent the protozoan from reaching the placenta, or that congenital infection occurred before the start of treatment.
In addition to confirming the presence of T. gondii in the placenta, isolation in mice allows evaluation of the virulence of the genotypes present in the samples [26], since virulent strains usually cause acute infection with clinical signs in mice [3]. Signs characteristic of acute toxoplasmosis, such as ascites, bristly hair, and photophobia, were seen in all mice inoculated with placental samples from the patients in this study. In addition, the mice died within a maximum of 15 days, suggesting that the genotype present in the samples was highly virulent, although the amount of inoculated tachyzoites, which was not estimated, may also have influenced this outcome.
The genotype found in these samples was characterized as atypical and is related to more severe forms of toxoplasmosis [27]. Atypical genotypes are not uncommon in Brazil, where the genetic diversity of T. gondii is large [8], but the genotype present in the samples of this research had not yet been described in ToxoDB. Recently, in southern Brazil, Vielmo et al., 2019 [28], also described an atypical genotype very similar to that found in the current study, capable of causing an outbreak in chickens on a small rural property, suggesting that these two closely related genotypes are virulent to humans and animals. Although very similar to each other, both genotypes differ from the Brazilian clonal lineages BrI, BrII, BrIII, and BrIV [9], as shown in Table 2.
Table 2. PCR-RFLP allele calls at the twelve markers (SAG1, 5'SAG2, 3'SAG2, Alt.SAG2, SAG3, BTUB, GRA6, C22-8, C29-2, L358, PK1, APICO):
Patient 1: I, I, I, I, III, III, II, III, III, I, I, I
Patient 2: I, I, I, I, III, III, II, III, III, I, I, I
[28]: I, I, I, I, III, III, II, III, III, I, I, III
BrI: I, I, I, III, II, I, u-1, I, I, I, I, I
BrII: I, I, I, III, III, III, I, III, I, II, II, III
BrIII: I, III, III, III, III, III, II, III, III, III, III, III
BrIV: I, III, III, III, III, III, II, I, III, III, III, …
In addition, it should be considered that the evaluated patients were diagnosed in the last gestational trimester and started receiving treatment after 30 weeks of gestation. This reaffirms that the serological diagnosis of pregnant women is essential for fast and efficient treatment to reduce cases of congenital toxoplasmosis [29,30].
This study was funded by the Higher Education Personnel Improvement Coordination.
Conclusion
Through isolation and genotyping, it was possible to identify a new atypical T. gondii genotype, never described before and with high-virulence characteristics. This research contributes to the elucidation of the toxoplasmosis outbreak in Santa Maria, Brazil.
Supporting information
S1 File. Patient Bioassay 1, Patient Bioassay 2, and mouse images from the bioassay showing some clinical signs. (DOCX)
Unveiling Inertia Constants by Exploring Mass Distribution in Wind Turbine Blades and Review of the Drive Train Parameters
In studies of dynamic stability and power quality, it is necessary to know the values of the mechanical parameters determining the transient response of wind turbines. Their exact values are not as decisive as the power curve, but an inaccurate estimate can distort or even invalidate the simulation results. From a review of the literature, it has been found that, despite their importance, the values of inertia, stiffness and damping are hardly available for any turbine model. Another detected problem is the lack of confidence in the data origin. This article aims to solve the issue of the scarcity and unreliability of data on inertia, and gathers the information found on the remaining mechanical parameters. Available blade inertia values in kg·m² are presented. Special treatment has been given to those providing the mass distribution along the blade span, for which the provided values of inertia have been compared with those obtained numerically, showing good matching. With this, different reliable relations are obtained that allow for the calculation of the turbine rotor inertia, based on the mass and length of the blade. When the center of gravity is also available, a very correlated expression (r² = 0.975) is provided to obtain the inertia. The references to the stiffness and damping constants of the drive train, which are even more rare, will also be presented. In addition, the study includes a revision of gearboxes, generators and blade weight, according to their IEC class and material.
Introduction
Context and Purpose of the Work
In power system stability studies, it is important to have an appropriate model for the characterization of the physical phenomenon of interest. Depending on the analysis to be carried out, the corresponding models may be different. According to the time scale of interest for stability studies, the models can be classified as electromagnetic models (for short time frames) or electromechanical models (to investigate slower events). The influence that modeling can have on the results of the study can be seen in [1].
With regard to the electromechanical models, the most important components to take into account are the turbine inertia, the generator inertia and the coupling between both moving masses. In general, the inertia in AC electrical power systems represents the kinetic energy stored in large rotating generators and synchronous motors, providing them with the tendency to maintain continuous rotation. In the case of additional rotating masses (a steam turbine in thermal units, a Francis/Kaplan/Pelton turbine in hydro units or the blades in wind turbines), the corresponding inertia must be added to the generator value.
A system that possesses sufficient natural rotational inertia is able to maintain the grid frequency at its rated value, since this frequency is really an electromechanical variable that is linked to the mechanical speed of rotation of the generators. When a sudden power imbalance between generation and demand occurs in the grid, a frequency deviation typically appears. If the contingency is not serious, this stored kinetic energy allows for a rapid response, permitting a transfer of energy to balance generation and load before the frequency deviation exceeds permitted values. On the other hand, in normal operation, small variations in the electrical frequency are permitted in real time, to ensure an adequate balance of active generation and load. In both cases, abrupt or slow power imbalance, inertia plays a determining role during the balancing transient.
This kinetic energy is E_k = (1/2) J Ω_G², where J is the system inertia and Ω_G is the rotational speed of the electrical generator [2]. It allows us to define the inertia time constant H of a generator as the ratio between the stored kinetic energy E_k and its rated power. It determines the time interval during which an electrical generator can supply its rated power by using solely the kinetic energy stored in its rotating masses [2]. Additionally, the current paradigm shift, with the incorporation of wind power and photovoltaics into the power system, relies on power-electronic converters that do not possess natural rotational inertia, unlike a synchronous generator (SG). Consequently, renewable generators displace synchronous generation and reduce the amount of rotating mass in the system. Therefore, system operators need to model these electrical systems with low rotational inertia, which presents significant challenges to controlling the stability of the system [3,4].
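As a minimal numerical illustration of these two definitions (all values below are hypothetical and not taken from any turbine catalog), E_k and H can be computed directly:

```python
import math

def kinetic_energy(J, omega):
    """E_k = 1/2 * J * omega^2 (J in kg*m^2, omega in rad/s, result in joules)."""
    return 0.5 * J * omega**2

def inertia_constant(J, omega, S_rated):
    """H = E_k / S_rated, in seconds: the time the machine could deliver
    its rated power using only the stored kinetic energy."""
    return kinetic_energy(J, omega) / S_rated

# Hypothetical 5 MW generator on a 50 Hz grid with n_pp = 2 pole pairs.
f, n_pp = 50.0, 2
omega_G = 2 * math.pi * f / n_pp        # 157.1 rad/s (1500 rpm)
J, S = 2000.0, 5.0e6                    # inertia [kg*m^2], rated power [W]

print(f"E_k = {kinetic_energy(J, omega_G):.3e} J")
print(f"H   = {inertia_constant(J, omega_G, S):.2f} s")   # about 4.9 s
```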
The availability of precise inertia data permits the system operator to make appropriate decisions and facilitates the improved planning and operation of the system. This ensures the stability and resilience of the network and enables the efficient integration of larger amounts of renewable generators, without compromising system security or power quality.
Literature Review
The importance of inertia in stability studies is addressed by many authors. Ekanayake in [5] shows how, with the correct control of the power electronics connected to the rotor of a doubly fed induction generator (DFIG), inertia allows the recovery of a significant amount of kinetic energy. This alleviates the effect of a frequency drop in the network system.
In the case of fixed speed induction generators (squirrel-cage type), the recovered kinetic energy is much lower, since the range of operating rotation speeds is reduced to +1÷3% above the synchronous speed. At the other extreme, SGs allow a much wider speed range, and there is a decoupling between the mechanical part and the transient phenomena that occur on the network side [6], due to the IGBT converters. Initially, this decoupling could lead to a reduction in the inertia seen by the network, and therefore, less damping against abrupt changes in generation and charging patterns [7,8]. This has given rise to a multitude of articles on frequency control strategies to effectively integrate wind energy systems into the grid. In any case, for the evaluation of the kinetic storage capacity, it is necessary to know, as precisely as possible, the inertia value for the turbine and the generator. It is also mandatory to be familiar with the mechanism for the transmission of kinetic energy between both components through the coupling, for which it is necessary to have an understanding of its characteristics of stiffness and damping.
Guillamon in [9] performs a review of inertia data and, although providing its dimensionless value, for some references it is difficult to trace whether they have been reliably obtained. Other authors, such as Morren in [6], assume, in the absence of reliable data, that the geometry of the blade is very complicated, and approximate it as a bar of constant chord and uniform density, with which they obtain a simple expression to calculate inertia as a function of power.
Research Gap and Motivation
Despite their importance in system stability and power quality studies, these parameters, especially that of inertia, but also the stiffness and damping constants of the drive train, are rarely available for the turbine model under investigation. Therefore, the researcher must often carry out a study on which parameters to use in order to faithfully reproduce the actual situation. The alternative is to rely on the values used by other researchers, without any assurance that these can be adapted to the characteristics of power, length or weight of their own system.
As a result, it is highly desirable that future studies on stability and, in general, on the dynamic behavior of wind turbines contain reliable data, so that the modeling of the system is as realistic as possible. These parameters logically depend on the size of the turbine. Even using a dimensionless formulation, which converts the magnitudes into per unit (p.u.), these parameters have a certain dependence on power and, as power grows over the years, it is necessary to update the available databases with the values that can be extracted from modern machines.
Contribution
In this article, in addition to reviewing realistic data on inertia, mainly from multimegawatt turbines, it is intended to trace the references that provide data on the density distribution (DD) of real blades, preferentially with designs based on realistic dynamic and structural analyses. From these, the calculation of the inertia of the blade will be made, and a relationship with some specific magnitude of the geometry will be deduced, such as the position of the center of gravity (CoG).
This method of obtaining inertia is novel, and is intended to be the initial stage of a more extensive database to provide reliable data on inertia, unlike inertia data obtained by transient analysis or other non-direct methods.
Although minor, other components also influence the dynamics of the mechanical system, the precision of which will depend on the number of masses with which the mechanical part of the turbine is modeled [10,11]: generator inertia, hub inertia and stiffness/damping of the power transmission shafts between the turbine and generator. The relationship between the different mechanical variables will be reviewed through these parameters, and the corresponding expressions converted into dimensionless values. In [11], it is stated that it is possible to reduce the number of masses of the drive train model without significant deviations.
The theoretical and technical contributions of this paper are to: • Provide general expressions that allow the weight of the blade to be estimated based not only on its length, but also on the IEC wind class of the turbine and the material of the blade.
• Provide an expression that accurately estimates the blade inertia starting from the position of the CoG, its weight and its length.
• Formulate in depth a complete dimensionless framework to relate magnitudes and parameters (such as inertia, damping, stiffness and friction) with respect to their base references, valid for systems of three masses and two masses.
Paper Organization
The rest of the article is explained below. In Section 2, the mechanical equations that govern the rotation movement of each component are reviewed, the expressions that transform each variable and parameter into its value in p.u. are shown, and certain general, distinguishing characteristics of each parameter are described. In Section 3, all the data retrieved on the DD of 21 blade models are organized, and a relationship is identified that links the inertia with the position of the CoG of the blade. The values obtained for the other parameters reviewed are also organized. Finally, in Section 4, the results are evaluated, highlighting the achievements obtained as well as their limitations.
At the end of the paper, a list of abbreviations and variables is included.
Methods
This section formulates the expressions governing the dynamics of the interaction between rotor and generator and how they can be converted into p.u. It also anticipates certain issues to be taken into account prior to listing the revised data.
Hereinafter, the set of blades plus hub will be named as the turbine rotor or simply turbine, and this will appear referenced in the expressions as T. A blade will be designated as B, the generator as G and the gearbox as GB.Magnitudes and parameters with dimensions will appear in uppercase, and their conversion to p.u. will be displayed in lowercase.
Dynamics Rotor-Generator
Once the blades extract the aerodynamic energy from the wind and transform it into a rotation torque T_W exerted at the turbine speed Ω_T, these magnitudes will drive the dynamics of the system until they produce a torque on the generator, T_G, which will rotate at speed Ω_G. This speed is set by the grid frequency, f, divided by the number of pole pairs, n_pp, and will be exactly equal to this value in the case of SGs or slightly higher (due to slip) in the case of IGs in normal operation.
With regard to IGs, a high number of poles would increase the magnetization losses, so this value is reduced to no more than n_pp = 2 or n_pp = 3. Since the rotational speed in rpm is equal (or very close) to 60f/n_pp, with f being the grid frequency, this yields a speed that is impermissibly high for blade rotation. It is, therefore, necessary to include a gearbox that accommodates both speeds. This new element introduces new equations into the dynamics of the system.
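As a quick sanity check of this speed mismatch (the rotor speed below is a hypothetical but typical value for a multimegawatt turbine):

```python
# Generator synchronous speed vs. blade speed, and the gearbox ratio
# needed to bridge them.
f, n_pp = 50.0, 2              # grid frequency [Hz], pole pairs (IG example)
n_gen = 60 * f / n_pp          # generator speed: 1500 rpm
n_rotor = 15.0                 # hypothetical rotor speed [rpm]
n_GB = n_gen / n_rotor         # required gearbox ratio: 100
print(f"generator {n_gen:.0f} rpm, rotor {n_rotor:.0f} rpm, ratio {n_GB:.0f}:1")
```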
Permanent Magnet Synchronous Generators (PMSGs), which are the most frequent type of SGs in wind turbines (WTs), allow for a high number of poles that considerably reduces the rotational speed. In the latter case, it would not be necessary to use a gearbox. This setup is referred to as direct drive. In other schemes with PMSGs, the number of pole pairs is not high enough to obtain the adequate speed, although in any case the required transmission ratio decreases and, with it, the dimensions and stages of the gearbox.
Other elements that form part of this transmission are the brake and the couplings that account for the misalignment between the rotation axes, although they are generally not included in the dynamic analysis of the system.
Figure 1 represents the interior of the nacelle, with the transmission chain from the turbine rotor to the generator. The scheme corresponds to an IG or a PMSG with a reduced number of poles. In this case, to accommodate the high speed of the generator shaft with the low speed of the blades, it is necessary to include a gearbox. Its ratio increases with the turbine rotor diameter and decreases with the number of poles. Figure 2 corresponds to this dynamic scheme, called three masses, where:
• J_T is the inertia of the turbine rotor due to the distribution of masses in the blades and, to a lesser extent, in the hub.
• D_T is the coefficient of friction due to the aerodynamic resistance offered by the blades.
• K_HGB is the stiffness constant of the slow axis that joins the hub and the gearbox.
• C_HGB is the damping constant of the torsional movement of the slow axis.
• J_GB is the inertia of the gearbox discs, measured from the slow shaft.
• D_GB is the coefficient of friction due to friction in the gearbox, measured from the slow shaft.
• J_G is the inertia of the rotor of the electric generator and the brake.
• D_G is the coefficient of friction due to friction in the generator and ventilation losses.
• K_GBG is the stiffness constant of the fast axis that joins the gearbox and the generator.
• C_GBG is the damping constant of the torsion motion of the fast axis.
Mechanical Equations
The equations that govern the dynamics in a system formed by the turbine rotor, the slow shaft, the gearbox, the fast shaft and the generator are those that correspond to the left side of expressions (2)–(5), where superscripts LSS and HSS stand for low-speed shaft and high-speed shaft, respectively (as an example, T^LSS is the torque available in the LSS), Θ is the twist angle in the LSS or HSS and n_GB is the gearbox ratio. The variables and magnitudes on the right-hand side are the same expressions, although referred to their base magnitudes; they are explained in the following section.
Referring to Base Magnitudes
On many occasions, the value of a magnitude expressed in a measurement system (e.g., MKS) does not provide information on the operating range in which it is working, and it must be compared with its rated value. For this reason, it is customary to refer magnitudes, electrical or mechanical in our case, to their rated values, so that when working below rated conditions, the value will be between 0 and 1. This usually entails the adimensionalization of the parameters that intervene in the relationships between magnitudes. In most cases, the new magnitudes and parameters lose their dimensions and we then work in p.u. This facilitates computational manipulation, makes the variables independent of the capacity of the turbine to a certain extent and offers a better view of how close the operation is to overloading or idling. In the case of an ideal gearbox, its ratio disappears when operating in p.u. The expressions, once transformed into p.u., are those found on the right side of (2)–(10), where uppercase has been used for values with dimension and lowercase for values in p.u.
In the following subsections, a set of dimensionless values for the different parameters of these equations (inertia, damping coefficients and friction coefficients) will be listed, and it will be easy to identify outlier values and mark them as unreliable. If the adimensionalization is not carried out, the normality range for each parameter will depend on other variables (power, speed, n_GB), so its identification is more difficult.
The relationships between variables with dimensions and in p.u. are the following, where P is the turbine rated capacity. This determines the transformation to p.u. of several parameters (inertias, stiffness, friction or damping), the values of which will be different according to whether they are given in the LSS or the HSS.
It should be mentioned that there are cases, such as that of the inertia constant J, in which a complete adimensionalization does not make practical sense, and although it is transformed to a value referred to rated conditions, it has units of seconds (s). In any case, and although it would be more accurate to speak of magnitudes referred to base values, the term p.u. will continue to be used, despite not being strictly correct for some parameters. Its reference to the rated values (H) also appears, as an exception, in capital letters.
For the parameters in the LSS and in the HSS, the transformations yield the corresponding sets of expressions. It is worth mentioning that some authors use the turbine rotation speed provided by the manufacturer as Ω_B^LSS to transform the dimensioned parameters to p.u. parameters. This is true for turbines driving PMSGs. However, for turbines with IGs, the catalogs generally do not show the exact speed value at rated conditions, but rather Ω_B^LSS, so once again it is correct to use the value obtained from the catalogs as Ω_B^LSS. However, certain other sources have also been identified in which these two values differ, although the deviation is minimal, less than 5%. This deviation is due to the slip s, of reduced value in high-power generators, since this slip determines the rotor losses due to the Joule effect. Consequently, there is a certain error in turbines with IGs when considering the rotational speed of the turbine as the base of the slow-shaft speed. This error, although small, can be avoided by adopting Ω_B^LSS = 2πf/(n_GB·n_pp) as the base speed.
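A sketch of these base-value conversions under one common convention (H in seconds, stiffness as p.u. torque per mechanical radian of twist, base torque T_B = P/Ω_B); the paper's exact base definitions appear in its numbered expressions, so the formulas below, like all numeric values, are assumptions:

```python
import math

def inertia_constant(J, omega_B, P):
    """H = J*omega_B^2 / (2P) [s], assumed convention."""
    return J * omega_B**2 / (2 * P)

def stiffness_pu(K, omega_B, P):
    """k = K / T_B = K*omega_B / P [p.u. torque per mech. rad], assumed."""
    return K * omega_B / P

def damping_pu(C, omega_B, P):
    """c = C*omega_B^2 / P [p.u. torque per p.u. speed], assumed."""
    return C * omega_B**2 / P

def hss_to_lss(param, n_GB):
    """Refer an HSS parameter (J, K, C or D) to the LSS: multiply by n_GB^2."""
    return param * n_GB**2

# Hypothetical 2 MW turbine: 18 rpm rotor speed as LSS base.
P = 2.0e6
omega_LSS = 18.0 * 2 * math.pi / 60                        # rad/s
print(inertia_constant(J=4.5e6, omega_B=omega_LSS, P=P))   # H_T in s
print(stiffness_pu(K=8.0e7, omega_B=omega_LSS, P=P))       # k_LSS in p.u./rad
```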
System of Two Masses
In the case of multipole PMSGs with direct drive, there would be no gearbox and all the measurements would refer to a single axis. In this case, it is usual to group the different constants in such a way that one works with a model of two masses, as in Figure 2b. The error committed is not usually significant in stability studies [11].
Even when there is a gear ratio, all constants tend to be referred to a single axis, thus simplifying the analysis of the dynamics and making it possible to compare parameters between turbines with different gearbox ratios. In addition, the system is simplified if, as indicated in [11], the coupling on the fast axis is considered infinitely rigid, with which Ω_GB^HSS = Ω_G. As shown in Figure 2b, the parameters measured in the HSS can be converted to the LSS (or vice versa) simply by multiplying (or dividing) by n_GB²; that is, Param^LSS = n_GB² · Param^HSS, where Param is a coefficient of inertia, stiffness, friction or damping. This is coherent with expressions (22)–(29).
The different constants could be grouped into equivalent values according to [10], which, in the case of using variables in p.u., is more simplified, where m2 refers to the equivalent value that includes all the components to the right of the rotor. This would constitute mass 2, against mass 1, which is that of the turbine.
From the grouped values of J_m2, J_T and K_eq, the critical damping of the torsional dynamics can be obtained as C_c = 2·sqrt(K_eq·J_eq), where J_eq = J_T·J_m2/(J_T + J_m2) is the equivalent inertia of the torsional (relative) motion; this is the expression that must be used for the study of damping in torsional dynamics [10,12].
From (22) and (23), C_c can be obtained as a dimensionless value. Assuming, without loss of generality, that the values refer to the LSS, this yields c_c = 2·sqrt(k_eq·H_eq/(π·f)), with H_eq = H_T·H_m2/(H_T + H_m2) and where f is the network frequency. In the event that the values referred to the fast axis, it would be necessary to multiply J_eq and K_eq by n_GB², but at the same time the damping would also need to be divided by n_GB², so the expression would still be valid. The resonant frequency is ω_res = sqrt(π·f·k_eq/H_eq), similar to that obtained in [13].
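A numerical sketch of these relations for a two-mass model follows; all parameter values are hypothetical, and the ζ = 0.05 relative damping anticipates the estimate used later for the NREL-5MW model:

```python
import math

# Hypothetical two-mass drive-train parameters, referred to the LSS.
J_T, J_m2 = 4.0e7, 2.5e6   # turbine and "mass 2" inertia [kg*m^2]
K_eq = 8.7e8               # equivalent shaft stiffness [Nm/rad]
zeta = 0.05                # assumed relative damping

J_eq = J_T * J_m2 / (J_T + J_m2)                 # equivalent torsional inertia
C_c = 2 * math.sqrt(K_eq * J_eq)                 # critical damping [Nm*s/rad]
C = zeta * C_c                                   # estimated shaft damping
f_res = math.sqrt(K_eq / J_eq) / (2 * math.pi)   # torsional resonance [Hz]

print(f"J_eq = {J_eq:.3e} kg m^2, C_c = {C_c:.3e} Nm s/rad")
print(f"C (zeta=0.05) = {C:.3e} Nm s/rad, f_res = {f_res:.2f} Hz")
```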
Evaluation of the Blade Inertia
Figure 3 represents the typical blade aspect, composed of sections of different airfoils. Different regions can be established: the root, a transition zone and the aerodynamic zone, within which the tip can also be distinguished. The root region is the zone that is connected to the hub through a bolted joint. It is cylindrical for structural reasons and to facilitate pitch control; hence, it has lower aerodynamic efficiency. It consists of thick aerofoil profiles to provide greater structural integrity, since this region supports the largest edgewise moments (due to the weight) and flapwise bending moments (due to aerodynamic forces). A transition region adapts the geometry from the cylindrical shape to the aerodynamic profile. This geometry is not as structurally efficient as the circular variant, but it has to withstand practically the same moments. Consequently, it is the most structurally demanding region and carries the highest loads, especially on the high-pressure side [14,15]. Next is the aerodynamic zone, where the geometry is designed to resist the design loads according to the IEC class in which it is framed. Once this restriction is overcome, the design focus is on maximizing the lift-to-drag ratio. In general, a larger chord length close to the root would increase energy capture, but this results in higher moments when the turbine is parked in extreme wind conditions. The tip is a compromise between aerodynamics, aeroacoustics and deflection control. A less tapered tip can increase the lift but also the noise and thrust forces, which will give larger tip deflections and bending moments.
In addition to these design requisites, manufacturers also introduce modifications into the design and skin material to reduce the effect of leading-edge soiling on airfoil performance. In order to obtain reliable blade inertia data, and also to be able to deduce realistic expressions for their estimation, the repositories and the literature have been reviewed in search of blade models that provide the mass distribution along the blade span. This has been interpolated to produce uniform distributions across 101 positions of each blade, and the result has been uploaded to [16] as a csv file. It is also accompanied by the Matlab/Octave code that allows the data to be recovered. Section 3 shows the result of the analysis performed.
In the event that the moment of inertia is provided with respect to the root of the blade, J_root, instead of the axis of rotation (AoR), J_AoR, this inertia has to be transferred by the amount equivalent to the radius of the hub, R_H (see Figure 3). Applying Steiner's theorem, and passing first through the CoG, we have J_CoG = J_root − M_B·L_CoG² and J_AoR = J_CoG + M_B·(L_CoG + R_H)², where J_CoG is the inertia around the CoG, M_B is the blade mass and L_CoG is the distance from the CoG to the blade root.
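The following sketch reproduces this pipeline numerically: mass, CoG and inertia from a linear density distribution ρ(r), then the Steiner transfer to the axis of rotation. The density profile is synthetic, standing in for the interpolated 101-point distributions of the repository:

```python
import numpy as np

# Synthetic linear density [kg/m] sampled at 101 span positions from the root.
r = np.linspace(0.0, 60.0, 101)            # span coordinate [m]
rho = 400.0 * np.exp(-r / 15.0) + 20.0     # hypothetical mass-per-length profile

M_B    = np.trapz(rho, r)                  # blade mass [kg]
L_CoG  = np.trapz(rho * r, r) / M_B        # CoG distance from the root [m]
J_root = np.trapz(rho * r**2, r)           # inertia about the blade root [kg*m^2]

# Steiner transfer to the axis of rotation, offset by the hub radius R_H.
R_H   = 1.5
J_CoG = J_root - M_B * L_CoG**2
J_AoR = J_CoG + M_B * (L_CoG + R_H)**2

J_rotor = 3 * J_AoR                        # three blades; hub contribution neglected
print(f"M_B = {M_B:.0f} kg, CoG = {L_CoG:.1f} m, J_AoR = {J_AoR:.3e} kg m^2")
```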
Modelling the Drive Train
A precise study of the behavior of the drive-train dynamics requires a model of five masses [10], the minimum requirements of which are outlined in IEC 61400-4. However, as mentioned in Section 2.4, a two-mass model (Figure 2b) is preferentially used in stability studies. Even for this simplified model, it is very difficult to find data for the stiffness, or even for the damping, of the coupling between the rotor and the generator. In fact, the drive-train torsional damping is often estimated, as in [17] for the NREL-5MW reference model, by assuming a relative damping ζ = 0.05; hence c = ζ·c_c, with c_c deduced from (38).
Generator Inertia
The different types of generator used in megawatt WTs are listed in [18], along with their generic characteristics, and are summarized in Table 1. Typical speeds, in concordance with [19], appear in the third column. Relationships between the mass and the power of the generator have also been extracted from this reference (fifth column). The data of the remaining columns have been extracted from [20], from [21] and from other scattered catalogs. The expressions in the last column of Table 1 can also be reached with a constructive and functional analysis of the generators. It is worth mentioning that what appears in this column is the mass of the entire machine when, in fact, only the rotor mass and geometry are required for the inertia. Therefore, the mass of the frame and that of the stator should have been subtracted. As a rule of thumb, many authors assume the same mass for the three components, although the ratio between the mass of the rotor and that of the stator increases with the number of pole pairs.
In general, for IGs, the torque and speed are determined, respectively, by the stator current and the voltage. The first magnitude determines the section of Cu and the second, the number of turns; hence, the power increases with the mass of Cu. On the other hand, to achieve a certain torque, Lorentz's law indicates that the force is proportional to the length of the conductor (that is, the length of the machine and the number of windings that fit in the periphery), while the arm is related to the radius. Consequently, the power will be roughly proportional to the volume of the machine. Assuming that diameter and depth are scaled equally, we have P ∝ D³, i.e., D ∝ P^(1/3), since the speed of the generator rotor depends on its number of pole pairs, which is independent of the power. For slow synchronous generators, the speed is given by the inverse of n_pp, and this number, in turn, determines the circumference length of the generator rotor.
On the other hand, an increase in the power of an electric machine implies a roughly proportional increase in weight. Consequently, as far as inertia is concerned, we have J_G ∝ M·D² ∝ P·P^(2/3) = P^(5/3).
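A hedged back-of-envelope sketch of this scaling argument; the proportionality constant k is purely hypothetical and would have to be fitted to catalog data:

```python
# If P ~ D^3 (volume scaling at fixed speed) and M ~ P, then
# J_G ~ M * D^2 ~ P^(5/3). k is a hypothetical fitting constant.
def generator_inertia_estimate(P, k=1.0e-8, exponent=5.0 / 3.0):
    """Rough generator inertia estimate [kg*m^2] from rated power P [W]."""
    return k * P**exponent

for P in (1.0e6, 2.0e6, 5.0e6):
    print(f"P = {P/1e6:.0f} MW -> J_G ~ {generator_inertia_estimate(P):.0f} kg m^2")
```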
Results
In this section, various expressions and relationships will be presented that link the different aspects relating to blade design: the power of the turbine that would mount the blade, the rotor diameter, the blade length, the blade mass, and so on. Subsequently, the results relative to the different elements of the turbine dynamics will be shown, especially the inertia of the blades.
Expressions Relating to Weight and Blade Length
Each blade model is designed for certain wind conditions, which determine its geometry and mass, and therefore its inertia. The different conditions are included in the IEC 61400 standard, which distinguishes between several classes depending on the average wind speed (IEC Class I, II and III) and turbulence (subclasses A and B). Class IA is the most demanding in terms of design requirements. Table 2 shows a comparison of the average mass of a blade as a function of wind class and power. An additional class, T, has been created for areas which experience typhoons. Sometimes, Class S is also indicated, independently or in combination with another class, in the case of non-standard conditions specified by the manufacturer. Different versions are often based on one blade model, with each one adapted to a certain class. The left-hand plot of Figure 4 shows the dependence of the mass and the length of the blade on the different design classes. Some weight reduction was observed for Class III turbines but, contrary to what is shown in Table 2, there was no significant difference between Class I and Class II turbines. On the other hand, the experience and trajectory of each blade manufacturer usually leads to the use of a different material and method for its structure. At first, it was mainly polyester resin reinforced with glass fibers, whose matrix has been displaced by epoxy resin, whose composites exhibit better properties than those of polyester resin. Both of these are considered to be glass fiber reinforced polymer (GFRP). Later, carbon fiber reinforced polymer (CFRP) was introduced to substitute glass or to be combined with it to form hybrid glass-carbon blades (Gl-C/Ep) [22,23].
In the right-hand plot of Figure 4, the blades have been classified according to the essential material employed in their manufacture. In this set of study cases, it was observed that composite materials, based on fibers and polymers, have displaced metals and wood, with a clear predominance of blades using GFRP. For this group, it is difficult to deduce from catalogs whether the matrix is polyester or epoxy. Blades based on CFRP appear less frequently and are usually hybrid composites (CFRP/GFRP). For this material, a certain decrease in the weight of the blade is observed. The opposite occurs for steel blades, which are significantly heavier, although it should be mentioned that these are two-blade turbines.
Table 3 lists the relationship between M and L found for every case of study, with M expressed in t and L in m. The rows are divided into three groups according to capacity, IEC class and material. The relatively poor correlation (0.865) found for turbines larger than 3 MW, disregarding IEC class or material, indicates that there is great variability due to design conditions and material; therefore, it is preferable to know the IEC class and the material in order to have a good estimate of the weight, if this is unknown.
Inertia Obtained from Density Distribution
The most distinctive aspect of this work is to offer a set of inertia values obtained from the DDs of different blade models found in the literature. This distribution is not arbitrary, but a consequence of a detailed structural and aerodynamic analysis. The resulting data are organized in the Mendeley repository [16], along with a Matlab script to retrieve the data. From them, the inertia has been calculated for each of the blade models presented, as well as the position of the CoG with respect to the center of rotation.
Figure 5 represents the mass distribution along the blade for 21 blade models.The data have been interpolated for each model in order to provide 101 values, uniformly separated between 0 and 1, where 0 is the blade root and 1 is its tip.
The result of operating with these data is shown in Table 4. The meaning of each column is the following:
1. Capacity of the turbine for which the blade is designed.
2. Maximum rotational speed. This matches the rated rotor speed of the turbine.
3. Blade mass, as extracted from the reference (up) and calculated by the integration (down).
4. Inertia of the blade, as extracted from the reference (up) and calculated by the integration (down). A value of CoG or inertia appears in cursive when it refers to the blade root instead of the rotation axis.
5. Inertia of the three blades, as extracted from the reference (up) and calculated by the integration (down). The extracted data have been moved, where necessary, to the rotation axis.
6. Position of the CoG with respect to the rotation axis, as extracted from the reference (up) and calculated by the integration (down).
7. Calculated position of the CoG, divided by the rotor radius.
8. Value of the coefficient k_J.
9. Time constant of inertia H.
10. Reference where the data have been obtained.
In many cases, the inertia value does not appear as such, but as the "first mass moment of inertia", as in [24]. In this case, it must be divided by M_B. Attention must be paid as to whether the CoG value refers to the axis of rotation or to the blade root. In the latter case, for the purposes of the inertia of the rotor as a whole, R_hub must be added. The latter also applies to inertia (also designated as the "second mass moment of inertia"), and (40) should be applied. Other inertia values obtained from the literature, although without providing the mass distribution, are included in Table 5. Only data originally provided in kg·m², not in s, are included.
As discussed previously, there are different airfoils along the blade span, grouped into four regions. The first region, the root, has the assembly structure with the hub and must withstand the greatest moments. It is in this zone that a higher density, in kg/m, is observed, and this difference is notable from one model to another. In any case, since this region is close to the AoR, its influence on the inertia is minimal. If one tries to find some type of relationship for the inertia J of the blade as a function of the mass and the length of the blade, expressions are obtained with an apparently good correlation. Thus, a relationship of the type J_B/M_B = F(L_B) yields (45). A relationship of the type J_B = F(M_B·L_B²) has also been tested, yielding (46). Figure 6 (in blue) represents the inertia values obtained from the mass distributions extracted from the references in Table 4, which are taken as real values. In purple and green, the direct estimates of inertia from expressions (45) and (46), respectively, are recorded. It has been observed that, in the case of these estimates, obtained without taking geometry into account, there is a deviation of the estimated values (purple and green points) with respect to the value taken as a reference (blue). Figure 6. Comparison of several estimates to obtain J with respect to the value calculated from the DD (blue). In purple, the estimate from (45). In green, the estimate from (46). In red, the estimate from k_J obtained from the CoG (50).
In the following, the blade geometry will be included in the inertia estimation through the CoG position. Accordingly, a more precise estimation of the blade inertia is obtained by comparing columns 7 (CoG/L) and 8 (k_J) of Table 4. This is represented in Figure 7, with a good correlation between them (r² = 0.975). From Figure 7, it can be seen that they are related through a linear law, leading to expression (50). The results of applying (50) are shown in Figure 6 as the items with the red marker, and are compared to the results of applying (45) and (46). As can be seen, the values obtained from the CoG (red marker) are very similar to those calculated from the DD (in blue). Consequently, expression (50) allows for a precise estimate of the inertia once the mass, length and CoG position are known.
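A sketch of the fitting procedure behind this relation: for each blade with a known DD, compute k_J = J_AoR/(M_B·L²) and the normalized CoG position, then fit the linear law. The five data points below are hypothetical stand-ins for the 21 blades of Table 4, so the fitted coefficients are not those of expression (50):

```python
import numpy as np

# Hypothetical (CoG/R, k_J) pairs standing in for the blades of Table 4.
cog_over_R = np.array([0.30, 0.32, 0.34, 0.36, 0.38])
k_J        = np.array([0.185, 0.205, 0.228, 0.247, 0.271])

a, b = np.polyfit(cog_over_R, k_J, 1)              # linear law k_J = a*x + b
r2 = np.corrcoef(cog_over_R, k_J)[0, 1] ** 2

def blade_inertia(M_B, L, cog_over_R):
    """Inertia estimate from mass, rotor radius L and normalized CoG."""
    return (a * cog_over_R + b) * M_B * L**2

print(f"k_J ~= {a:.3f}*(CoG/R) + {b:+.3f},  r^2 = {r2:.3f}")
print(f"J ~= {blade_inertia(17500, 60.0, 0.33):.3e} kg m^2")
```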
It should be mentioned that when the value of (CoG − R_H)/L_B is used as the abscissa instead of CoG/R_rotor, a similar relationship is obtained, although with a somewhat lower correlation (r² = 0.972). However, no dependence is observed between the turbine capacity, identified through the marker colour in Figure 7, and k_J.
Drive Train
Based on data obtained from [20], it can be seen that medium-speed wind turbines use planetary gears, with one or two stages. Clipper wind turbines, with four synchronous generators per turbine, and those that mount the WinDrive system are practically the only high-speed models with two stages. All the others with a speed above 1000 rpm have three stages, mainly combining spur and planetary gears; these comprise 181 models out of 241.
There are some other models with planetary gears along with helical gears (37/241), with spur gears only (7/241) or with just planetary gears (6/241). Each type has its own transmission structure that gives different stiffness constants [43].
The values found are very scarce and, in many cases, the reliability of their origin is questionable. They are presented in Table 6, where the dimensional values are in uppercase and the p.u. values are in lowercase. The parameters are those indicated in Figure 2. Most of these values are in the ranges specified by Gonzalez-Longatt in [11]: LSS stiffness, k_LSS (p.u./el. rad), 0.35–0.70; HSS stiffness, k_HSS (p.u./el. rad), 2.00–4.00. In general, the coupling on the slow shaft is less rigid than that on the fast shaft. In addition, within their wide ranges, the turbines with active-stall control typically occupy the upper values and those with pitch control occupy the lower values. ¹ The values K_LSS and K_HSS in [46] have been divided by 2πf = 314 rad/s, because the values were given in p.u. instead of p.u./el. rad. ² The value of C_GB in [50] has been multiplied by 2πf = 377 rad/s, because the values were given in p.u.·s/el. rad instead of p.u., although these are still very low.
Hub Inertia
The hub is a weighty rotating component and, therefore, its inertia must be taken into account, or at least analyzed. Table 7 lists certain values found in the literature. Other values, supplied directly in s, are: H = 0.685 s for a 2 MW DFIG [53]; H = 0.142 s for a 0.6 MW DFIG [47]; H = 2 s for a 0.75 MW DFIG [10]; H = 0.5 s for a 1.5 MW PMSG [49]; H = 0.75 s for a 1.67 MW DFIG [13]; H = 1.150 s for a 2 MW PMSG [54]; H = 0.4 s for a 5 MW DFIG [50].
These values are more significant than those of the hub, and should be considered in dynamic studies. It was also observed that there is great variability in the value of H, which will make it difficult to extract a generic law, as the inertia will depend on the constructive characteristics of the rotor, and probably on whether the generator is an IG or a PMSG.
Article Contribution
The main focus of this article has been to provide reliable inertia data for the wind turbine. For this reason, H_T values recovered from the bibliographic search have not been included.
An expression has been proposed that quite accurately links inertia to the mass of the blade, its length and the position of the center of gravity, although the latter is only slightly easier to find than the inertia itself.Two expressions have also been proposed, which are very similar in terms of their results and which do not require CoG data, although there is a deviation for some blade models with less conventional geometry.
In order to model the mechanical power transmission dynamics, this work has two additional purposes. The first is to establish an end-to-end dimensionless framework of the mechanical magnitudes that come into play in the turbine dynamics. The second is to collect existing data on the inertia of the remainder of the components and on other mechanical parameters, such as stiffness, friction and damping. Since the data found are scarcer, the values in p.u. found in the literature have been incorporated into the review. However, many are not entirely reliable or have not been precisely defined with respect to the base magnitudes of the adimensionalization.
Limitations and Benefits of the Proposed Work
As already mentioned, the present work provides an expression that presents the designer with a very accurate value of the blade inertia.Its limitation is that one of the arguments required is the position of the CoG.This value is easier to find than the inertia value, but even so, it is not often found for each turbine model.
Future Work
The aim of future work is to study in depth the characteristics of the coupling between the hub and the generator, especially on the slow shaft for non-direct-drive couplings. This will make it possible to link the values of stiffness and damping more precisely. The constructive characteristics of squirrel-cage, wound-rotor and PM synchronous generators will also be studied, in order to deduce in each case the expressions that link the rated capacity with the rotor weight and, if possible, with its inertia.
Figure 1. Typical components in a wind turbine drive train.
Figure 2. Dynamic scheme of the drive train: (a) three-mass model; (b) two-mass model.
Figure 3. The position of the CoG with respect to the AoR is the position with respect to the blade root summed to the hub radius.
Figure 4. Blade mass as a function of the length: (a) for different IEC classes; (b) for different materials.
Figure 6. Comparison of estimations for J: from the M distribution (reference), from CoG/R_T, from J/M = f(R_T) and from J = f(M·R_T²).
Figure 7. Correlation between the position of the CoG and the inertia, obtained from the mass distributions found in the literature. The value L is half the rotor diameter.
Table 1. Types of generators and the corresponding coupling to the turbine rotor. Manufacturers such as Clipper, Eno Energy or Catum mount fast synchronous machines up to 1600 rpm.
² The exponent 0.92 for induction generators is coherent with the statement that M ∝ P.
Table 2. Comparison of blade mass per length depending on the IEC wind class.
Table 3. Expressions linking M in kg and L in m.
Table 4. Values relative to the mass, length and inertia of several blade models extracted from the literature, and compared with the values obtained from the DD appearing in the corresponding reference. A value of CoG or inertia appears in cursive when it refers to the blade root instead of the rotation axis.
Table 5. Other values relating to the mass, length and inertia of several blade models extracted from the literature. ¹ A direct drive is claimed in the article, but this would lead to a tip speed of 227 m/s; H has been obtained assuming a tip speed of 76 m/s. ² The number of poles found in the reference is 200, but with this value the tip speed is 196 m/s; it has been assumed that 200 is the number of pole pairs.
Table 6. Values for the drive train components of Figure 2, obtained from the literature. Values in cursive are in p.u. or p.u./el. rad.
Table 7. Values for hub inertia obtained from the literature. | 2023-09-16T15:06:01.942Z | 2023-09-13T00:00:00.000 | {
"year": 2023,
"sha1": "903466bbd54c6aea51de4c0b1f1aa08d2d4608d9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1702/11/9/908/pdf?version=1694611198",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8af8c33dec66e07e3d88eba24d8454496f790ace",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
21744588 | pes2o/s2orc | v3-fos-license | African Shea Butter as a Staple and Renewable Bioproduct
The world is turning to natural and bio-derived chemicals/products, in order to enhance the sustainability of socioeconomic growth, the conservation of the environment and human safety. Shea butter is a renewable bioproduct which is traditionally and industrially used for many medical, personal care and cosmetic applications, among others. Moreover, shea butter can be a great solution to skin diseases in these days of acute weather and climatic conditions because of its effectiveness and nontoxic nature. The shea tree, from which shea butter is derived, is cultivated or grows widely and naturally in West and Central Africa in the semiarid Sahel. This tree is an important natural resource which should be seriously protected to sustain the shea butter derived from it. In addition, there is a need for more participation in the shea butter business to maximize its beneficial values. The extraction of the butter can alternatively be carried out using greener solvents, preferably supercritical carbon dioxide (scCO2), instead of hexane, to enhance the quality of the butter. Thus, shea butter is an essentially valuable and nontoxic bio-renewable product. The benefits and applications of shea butter cannot be overemphasised and are indispensable and inexhaustible.
Introduction
The shea tree is cultivated or grows widely and naturally in West and Central Africa in the semi-arid Sahel [1], [2]. It starts to bear fruit after about 20 years and continues to produce nuts for up to 200 years [1]. The common varieties of this nut are Vitellaria paradoxa and Vitellaria nilotica, where Vitellaria paradoxa is more largely grown and marketed. Vitellaria nilotica is predominantly produced in northern Uganda and southern Sudan. The species Vitellaria paradoxa grows extensively in Senegal and Uganda, where it is protected and managed [3]. The solid fat (butter or stearin) and the liquid oil (olein) products are obtained from the shea nut [1]. The shea tree fruit has a sweet edible pulp and a nut. The shea tree was first described by the Moroccan traveller and scholar Battuta as far back as 1348 [2], while the botanical characteristics of the shea tree and the derivation of butter from it were described by the first European to visit the Niger River, the Scot Mungo Park [4]. Shea butter has long been a staple of African pharmacology. Benin, Ghana, Burkina Faso and Côte d'Ivoire are major producers of the peculiar species (V. paradoxa, V. nilotica) [5]. The shea fruits are known as Chamen, Kandayi/Makande, Osisi/Okwuma and Emi/Orioyo in the Tiv, Hausa, Igbo and Yoruba languages of Nigeria, respectively [5]. Maranz et al. (2004) demonstrated the existence of three kinds of shea butter based on fat profile. The one with high stearin (St:O > 1) is classified as hard butter, found in Burkina Faso and Ghana. The soft butter, with mid-range stearin (St:O 0.7-1.0), is grown to the west and east of the hard-butter zone. There is also a very soft butter, or liquid oil, with a high olein content (St:O < 0.7), along northern Uganda. This provides a choice of locations where a particular butter for specific applications can be sourced [3]. The shea kernel from Northern Ghana and Burkina Faso is particularly valued for its high stearin and total fat content [3]. The main importance of the shea tree (Vitellaria paradoxa) is due to the oil or fat (shea butter) that can be extracted from the dried kernels [3], which has not been well maximized. Moreover, the harvesting of the shea fruits is most times performed locally by women without enough knowledge of maintaining the quality of these nuts for the extraction of the butter. Worse still, a lot of these nuts are abandoned in fields due to poor investment in this business. Considering the numerous values of shea butter, it is important to invest in the business so as to improve the quality and quantity of shea butter for our use. Moreover, there is a dearth of quality literature on the holistic processing of shea fruits and nuts. Thus, this review emphasises the need for maximising the production of this indispensable and bio-renewable product, shea butter.
Extraction of Shea Butter from the Shea Nuts
The mature shea fruits are collected between July and August, followed by grading, sorting and cleaning to obtain high-quality fruits. Thereafter, the fruits are de-pulped: the fleshy mesocarp is removed, which is facilitated by fermentation. Burying the fruits in pits causes the pulp to ferment and disintegrate and produces enough heat to prevent germination of the fruits. The nuts are then traditionally sun-dried for 5-10 days to bring the moisture content down to about 15-30%. Alternatively, the nuts are subjected to oven drying at a temperature of 50 °C for 4-5 days, reducing the moisture content to 4-5%. This process helps to enhance the de-husking, i.e., removing the husk by pounding the nut in a mortar with a pestle and then subsequently roasting and cracking it between two stones [3], [5]. Pre-extraction stages, like the accumulation of fresh shea nuts, heating the fresh nuts and drying the kernel, may affect the quality of the kernel by increasing the free fatty acid (FFA), peroxide value (PV) and fungal levels [3]. In addition, polycyclic aromatic hydrocarbons are produced in the course of smoking or roasting with open wood heating. Their presence will hinder entry into the "edible marketplace" in Europe and the US because of their carcinogenic properties [3]. Unfortunately, local producers often do not have access to the requisite information or facilities for maintaining the required quality standard. In the traditional method of butter extraction, the kernels are dried, crushed, ground and kneaded to form a paste [1], [2]. The paste is poured into hot water (85 °C) and left to stand for 8 h. A grey material is formed during the separation of the oily fat from the oil-water emulsion. This oily fat is then skimmed off the surface, clarified and heated, then left to solidify (shea butter). In the solvent extraction method, about 250 g of paste is poured into a beaker; 120 mL of hot water at 85 °C is then added, followed by 230 mL of n-hexane. This is allowed to stand for 48 h to enable the oil to separate. The oil is then decanted, allowed to solidify and packaged in glass vessels or aluminium foil [5]. The solvent extraction method gives a higher yield and better chemical and physical qualities of the butter than the traditional extraction method [5]. Ikya et al. [5] have recommended the use of the solvent extraction method since it gives a higher yield of shea butter; notwithstanding, the sensory qualities of the butter from the traditional extraction method are better than those of the butter from solvent extraction [5]. It is also worth using greener solvents (importantly, scCO2), which are environmentally friendly, for the extraction of better-quality shea butter that will be attractive to buyers. Buyers prefer kernels with the following qualities: free fatty acids (FFA) <6%, kernel fat content 45-55%, water content <7% and impurities <1%; or shea butter with a white to yellow appearance, low impurities, low odour, a low melting point and a high unsaponifiable fraction (the portion with therapeutic properties, 3-12% of the total extract). In addition, manufacturers in the chocolate and other food industries prefer to buy the shea nuts whole rather than the butter, so that they can dictate the processing and quality of the final product, as well as store them for a longer time, because shea butter deteriorates more rapidly [1].
Composition/Properties of Shea Butter
Shea butter contains many fatty acids: palmitic, margaric, stearic, oleic, linoleic, arachidic, eicosenoic, docosanoic and tetracosanoic acids [5]. The fatty acids in shea are mainly stearic, oleic, palmitic, linoleic and arachidic. The olein butter is the low-melting fraction (triacylglycerols high in oleic acid, e.g. O-St-O), while the stearin butter is the high-melting fat fraction (high in stearic acid, e.g. St-O-St). The long fatty chains in the butter can degrade through "autoxidation" into peroxides (measured as the peroxide value) that can later break down into other chemicals, including malodorous ketones and aldehydes. This is catalysed by heat, certain metals (e.g. iron and copper) and ultraviolet light. Shea butter has unsaponifiable compounds (3-12%) that are responsible for its therapeutic properties, e.g. antioxidants (oil-soluble tocopherols and water-soluble catechins), triterpenes (butyrospermol), phenols, sterols, karitene and allantoin. Triterpenes and vanillin are the main constituents of the unsaponifiable fraction of the shea butter. The main terpene in shea butter is the triterpene alpha-amyrin [2]. Shea butter may be hydrogenated to increase its shelf life, because it becomes more stable and easier to handle [2]. However, hydrogenation can break down a lot of the unsaponifiables that give shea butter so many unique benefits. Most big companies produce hydrogenated shea butter. Hence, locally made shea butter that is not hydrogenated is better in terms of these unique benefits. Different grades of shea butter have peculiar melting points, which are related to their applications [2]. The butter could be fractionated and each fraction tested for specific applications to maximize the potential of its chemicals.
Uses of Shea Butter and Shea Tree
The shea trees themselves are used as shade for other crops in dry seasons. Also, the shea tree wood is hard, heavy and resistant to termites, which makes it useful in building construction and in the manufacture of mortars, craft goods and charcoal. About 150,000 tonnes of shea tree kernels are consumed annually for various applications [3]. Locally, shea butter finds application in making soap [4]. It enhances cicatrisation of the umbilical cord after circumcision. The oil from shea seeds is used locally for frying and making sauces [1], [3]. Shea butter has nutritive qualities, including vitamins A, D, E and F. Since the 19th century, Africans have traded shea butter as a source of stearin (vegetable fat), particularly for the European chocolate industry, and as a beneficial component of personal care products. Shea butter is used in the production of cocoa butter equivalents (CBEs) and elsewhere in the confectionery industry [1], or as an improver; up to 5% content by weight is allowed under European Union (EU) regulations on chocolate [1]. Countries that allow the manufacture of CBEs include the UK, Denmark, Sweden, Portugal, Ireland, Russia and Japan [1]. Industrially, it is a feedstock for producing detergents [1], lubricants, candles and paints [5]. Shea butter contains 90% triglycerides (saponifiable fraction) and 10% non-triglycerides (unsaponifiable fraction); hence, it can be used in making soap [1], premium creams, lotions, skin care products [3] and margarine. Due to its unique blend of unsaponifiables (with UV-B absorbing properties) and essential fatty acid triglycerides, the butter is a prime active ingredient [1], [2], [4]. It is effectively used in hair care products (shampoos and conditioners) [4]. Lupeol in the butter is being considered for a potential anti-cancer effect. Shea butter has 1% tocopherols, making it one of the most antioxidant vegetable oils. Some sterols similar to those found in shea butter can reduce damage to cells in arthritis. It has anti-inflammatory effects, calming redness and itching, due to the alpha-amyrin it contains. Its anti-inflammatory effect is comparable to that of the popular drug dexamethasone [2]. It is used in the treatment of leprosy and other ailments in Nigeria [5]. Shea butter is good for the treatment of eczema [2], [3]. It is useful against skin irritants and as a soothing agent for sprains and strains. It protects the skin from UV light damage by absorbing the UV, thanks to the presence of compounds like cinnamic acid esters, which have strong UV-absorbing properties [2], [3]. The unsaponifiable (3-12%) fraction of shea butter has therapeutic benefits, such as UV protection, moisturising, regenerative, anti-wrinkle and anti-aging effects, and prevents wrinkle formation [3], [4]. These have resulted in a worldwide growing demand for shea butter in personal care products, as recognised by the cosmetic industry [3]. People in Africa have long applied shea butter to protect their skin from dryness and sunburn and to treat chapped lips and feet, skin abrasions and blemishes. Thus far, it is a natural moisturizer and healer of skin [2]. Its qualities defy those of any conventional lipid. As far back as 1940, there was evidence that those using shea butter experienced scarce skin disease and had smooth skin. Importantly, it goes without saying that there is a lot more to be discovered about the butter in terms of its benefits. Thus, the uses of shea butter, an essentially bio-renewable product, are unlimited.
Conclusion
Shea butter is one of the world's most sustainable bio-renewable products. It comes from shea trees that grow naturally in the grasslands of West and Central Africa, do not need any irrigation, fertilizer or pesticides, and can produce for up to 180 years. It is environmentally sound and good for the current ecologically sensitive market [4]. In these days of acute weather and climatic conditions affecting people's skin [6], shea butter will help in curing skin diseases, in addition to its other unlimited benefits. In order to make the butter readily available and sustainable, there should be more globalised protection and planting of the tree in West and Central Africa. Governments and NGOs should support local people in processing the butter [3], [7]. The butter can be maximized by fractionating it and appropriating these fractions to specific applications. Greener solvents like supercritical carbon dioxide (scCO2) can be used for shea butter extraction to obtain a better-quality product that meets the needs of western markets like the US and Europe. | 2018-05-20T02:53:09.529Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "268eec57b8d11cb476999cb48089e3869bb1323f",
"oa_license": null,
"oa_url": "https://doi.org/10.21275/v4i12.nov152415",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "14bccbea0965893bc01bcc2e3b64520704d6974d",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
248674792 | pes2o/s2orc | v3-fos-license | A Safety Study of Local Injection of Two Concentrations of Pituitrin in Laparoscopic Uterine Fibroid Surgery: A Randomized Clinical Trial
Background: The purpose of the study was to compare the safety of local injection of 6 units of pituitrin diluted to 20 mL vs 6 units of pituitrin diluted to 10 mL for laparoscopic uterine fibroid (UF) surgery. Methods: This was a randomized clinical trial of patients scheduled for laparoscopic UF surgery at Fu Xing Hospital, Capital Medical University, Beijing, China. Ninety-six patients were divided into two groups according to the concentration of pituitrin utilized: Group 1 (6 units of pituitrin diluted to 20 mL for all injections), 48 cases; Group 2 (6 units of pituitrin diluted to 10 mL for all injections), 48 cases. The observation indicators were mean arterial pressure (MAP1) and heart rate (HR1) upon entering the operating room; the lowest mean arterial pressure (MAP2) and the highest heart rate (HR2) within 5 minutes after injecting pituitrin; the highest mean arterial pressure (MAP3) and the lowest heart rate (HR3) within 30 minutes after injecting pituitrin; hemoglobin (Hb1) and hematocrit (Hct1) within one week before surgery; hemoglobin (Hb2) and hematocrit (Hct2) within one day after surgery; and the time for the mean arterial pressure to return to the level recorded on entering the operating room after using pituitrin (Recovery Time). Results: All baseline and observation data showed no statistical difference between the two groups. Conclusions: The safety profiles of local injection of pituitrin at 6 units diluted to 20 mL and 6 units diluted to 10 mL are the same when used for laparoscopic UF surgery.
Introduction
Approximately 20%-50% of patients with uterine fibroids (UFs) require surgery [1][2][3][4][5][6][7][8]. Laparoscopic UF surgery can reduce bleeding and facilitate postoperative recovery. It is currently a common surgical method for the treatment of UFs. During laparoscopic UF surgery, to reduce surgical bleeding, the surgeon injects pituitrin into the myometrium surrounding the target myomas. Pituitrin is widely used in gynecologic surgery to constrict blood vessels and reduce blood flow [9][10][11]. At present, there are not many studies on the effect of pituitrin on circulation. A small number of studies have focused on the optimal dose of local injection of pituitrin during laparoscopic UF surgery, but there are few studies on its safe and effective concentrations. The present study aimed to investigate the safety and effects of local injection of pituitrin at 6 units diluted to 20 mL and 6 units diluted to 10 mL in laparoscopic UF surgery. The results might provide evidence for the improvement of safety during laparoscopic UF surgery.
Study Design and Participants
This was a randomized clinical trial that included patients scheduled for laparoscopic UF surgery at Fu Xing Hospital, Capital Medical University. This study was approved by the ethics review committee of Fu Xing Hospital, Capital Medical University. All patients signed the informed consent form.
Ninety-six patients were divided into two groups according to the concentration of pituitrin: Group1 (6 units of pituitrin diluted to 20 mL for all injections), 48 cases; Group2 (6 units of pituitrin diluted to 10 mL for all injections), 48 cases.
Randomization and Blinding
The patients were randomly divided into two groups via the random number table method: Group1 (6 units of pituitrin diluted to 20 mL for all injections) and Group2 (6 units of pituitrin diluted to 10 mL for all injections). The patients and surgeons were blinded to the assigned group.
Intervention
Laparoscopic UF surgery is a minimally invasive procedure. Three ports were made in the patient's abdomen, one of which was in the navel. The lens with an external display was placed into the abdominal cavity through the navel port, and carbon dioxide pneumoperitoneum was induced. The complete uterus could be observed through the display. The two other ports were utilized for surgery. To reduce bleeding from the surgical wound, the surgeon injected pituitrin at one of the two concentrations studied into the surgical site and then performed surgery to remove the UFs.
Observation Indexes
Demographic information, including age, height, weight, number of UFs, diameter of the largest UF, and location of the largest UF, was obtained from the medical records. Surgery-related indicators were also obtained from the medical records: MAP1, HR1, MAP2, HR2, MAP3, HR3, and Recovery Time were all recorded by the anesthesiologist. Hb1, Hct1, Hb2, and Hct2 were obtained from blood tests performed before and after surgery.
Statistical Analysis
SPSS v26 (IBM Corp., Armonk, NY, USA) was used for statistical analysis. Measurement data with a normal distribution were expressed as mean ± standard deviation and analyzed using the independent-samples t-test. Count data were expressed as proportions and compared with the chi-square test. p-values < 0.05 were considered statistically significant.
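As an open-source illustration of the two tests described above (the study itself used SPSS v26), a minimal R sketch follows; all values are hypothetical and are not the study data:

# Minimal sketch of the analyses described above; R is used for
# illustration only, and all values below are hypothetical.

# Normally distributed measurement (e.g., MAP2, mmHg) in each group
group1 <- c(78, 82, 75, 80, 79, 81, 77, 83)  # 6 units / 20 mL
group2 <- c(76, 84, 74, 81, 78, 80, 79, 82)  # 6 units / 10 mL

# Independent-samples t-test on the normally distributed measurements
t.test(group1, group2, var.equal = TRUE)

# Count data (e.g., location of the largest UF) compared by chi-square
counts <- matrix(c(30, 18,   # Group1: intramural, subserosal (hypothetical)
                   28, 20),  # Group2: intramural, subserosal (hypothetical)
                 nrow = 2, byrow = TRUE)
chisq.test(counts)           # p < 0.05 considered statistically significant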
Characteristics
A total of 100 patients were enrolled, and 96 were included in the evaluation. The 4 excluded patients had undergone laparotomy, had incomplete data records, or received repeated injections of pituitrin. There were 48 cases in each group (Fig. 1). All patients underwent surgery successfully. There were no significant differences in age, height, weight, number of UFs, diameter of the largest UF, or location of the largest UF between the two groups. Baseline characteristics are shown in Table 1.
Discussion
The present study evaluated local injection of two concentrations of pituitrin and found that they have similar effects on hemodynamic fluctuations during surgery.
Currently, vasopressin is widely used in gynecologic surgery to reduce bleeding, keep the operative field clear, and shorten the operative time. As not all hospitals have access to vasopressin, pituitrin is a viable alternative. However, there are some differences between pituitrin and vasopressin.
Pituitrin contains oxytocin and vasopressin and is commonly used in laparoscopic UF surgery. Oxytocin activates vascular endothelial receptors [12], increases intracellular calcium ion concentration, promotes the release of nitric oxide, causes vasodilation, and reduces blood flow in UFs [13][14][15]. The vasodilation caused by oxytocin can lead to a decrease in blood pressure and an increase in heart rate. Vasopressin acts on vasopressin V1 receptors to cause contraction of vascular smooth muscle and the myometrium, thereby reducing bleeding during surgery [16][17][18] and shortening operative time. Contraction of vascular smooth muscle and the myometrium can cause increased blood pressure and a slower heart rate. Pituitrin therefore has a bidirectional effect on circulation. Since the half-life of oxytocin is 3-4 minutes [9,19] and the half-life of vasopressin is 4-20 minutes [20], the observation time points set in this study were the lowest blood pressure and highest heart rate within 5 minutes after local injection of pituitrin, as well as the highest blood pressure and lowest heart rate within 30 minutes after injection.
The study by Cohen et al. [21] indicates that both dilute and concentrated vasopressin solutions using the same drug dose demonstrate comparable safety and tolerability when used for minimally invasive myomectomy; however, higher-volume administration of vasopressin does not reduce blood loss. These results were consistent with the results of our study.
Our study concluded that the safety profiles of local injection of pituitrin at 6 units diluted to 20 mL and 6 units diluted to 10 mL are the same when used for laparoscopic UF surgery. Pituitrin can reduce bleeding during surgery, theoretically keep the surgical field clear, and shorten the operative time. However, pituitrin also has adverse reactions and affects circulation [20]. Therefore, the dosage should be strictly controlled.
The present study had several limitations. First, it was a single-center study; multi-center studies covering different regions, races, and populations could be carried out in the future. Second, the sample size could be expanded in future research.
Conclusions
The safety profiles of local injection of pituitrin at 6 units diluted to 20 mL and 6 units diluted to 10 mL are the same when used for laparoscopic UF surgery. | 2022-05-11T15:08:05.004Z | 2022-05-05T00:00:00.000 | {
"year": 2022,
"sha1": "a08aa6a856a31a4b71e12dce95102b3f73f23593",
"oa_license": "CCBY",
"oa_url": "https://www.imrpress.com/journal/CEOG/49/5/10.31083/j.ceog4905103/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "65f55511805be3c08bbddcb8d95d9127989c3c5c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
14590870 | pes2o/s2orc | v3-fos-license | Distribution, diversity, mesonotal morphology, gallery architecture, and queen physogastry of the termite genus Calcaritermes (Isoptera, Kalotermitidae)
Abstract An updated New World distribution of the genus Calcaritermes is given along with photographs and a key to the New World species outside Mexico. Calcaritermes recessifrons is found to be a junior synonym of Calcaritermes nigriceps. Except for Calcaritermes temnocephalus, pseudergates of the other seven studied Calcaritermes species possess a mesonotal rasp. The rasps suggest a role in propagation of microbes on gallery surfaces and microbial infusion below the wood surface. Calcaritermes temnocephalus is shown to have unusually large physogastric queens for a kalotermitid, and several species produce large eggs.
Introduction
In his monumental revision of the family Kalotermitidae, Krishna (1961) formed the current taxonomic definition of the termite genus Calcaritermes (Snyder 1925). Krishna (1961) separated Calcaritermes from all other genera by the diagnostic enlargement of the outer spine ("spur" sensu Snyder 1925b) of the fore tibia relative to the other two tibial spines. Soldiers also possess a dark, rather smooth, and cylindrical head capsule. Calcaritermes is a basal group within the Kalotermitidae (Legendre et al. 2008) and is not closely related to the sympatric Cryptotermes, which also possess phragmotic dark-headed soldiers. Krishna (1961), however, could not morphologically distinguish Calcaritermes alates from those of the genus Glyptotermes. Even so, Emerson (1969) described C. vetus from a fossilized alate in amber collected in the Simojovel region of Chiapas, Mexico. Emerson based his generic assignment on the similarity of the fossil to C. temnocephalus and its range in southeastern Mexico. The most recent review of Calcaritermes distribution was also provided by Emerson (1969).
As with most non-pest termite genera, details of the ecology and bionomics of the Calcaritermes are completely unknown. Almost all that is published about Calcaritermes relates to identification of preserved specimens for faunal surveys (e.g. Scheffrahn et al. 2005 and part of this paper). The only research involving Calcaritermes biology stems from two studies: one of their protist gut fauna (Gile et al. 2010) and the other of alate flight in forest canopy (Bourguignon et al. 2009).
In the current paper, the New World distribution and diversity of Calcaritermes are revised based on material in the University of Florida collection. I use field photography to show the live habitus of castes of seven Calcaritermes species and depict eight soldiers using montage photography of preserved material. I also reassess the mesonotal "rasp" of the pseudergate castes of Calcaritermes and provide an example of extreme queen physogastry in the Kalotermitidae. Finally, I describe the atypical feeding galleries of this genus and hypothesize a relationship between gallery architecture and the mesonotal rasp in terms of microbial symbiosis.
Material and methods
A total of 214 colony samples of Calcaritermes from 122 localities (Fig. 1) were collected between 1996 and 2010 and identified by the author from original descriptions and comparisons. These samples are included in the University of Florida (UF) Termite Collection, Fort Lauderdale Research and Education Center, Davie, Florida. This collection houses over 34,000 samples, mostly from the Caribbean Basin, which the author and his colleagues have amassed since 1986. The findings herein are a direct result of field observations made while collecting Calcaritermes during various survey expeditions.
Field photographs (Figs 2, 4E, 4F, and 5) were taken with a Nikon Coolpix S7c digital camera set to macro and flash mode. Specimens were usually photographed in a 5.5 cm dia. plastic Petri dish bottom lined with manila folder cardboard, although natural substrate (Figs 3D and 3I) was sometimes suitable. Figures 3 and 4C were taken as multilayer montages using a Leica M205C stereomicroscope controlled by Leica Application Suite version 3 software. Montage specimens were taken from 85% ethanol and suspended in a pool of Purell® Hand Sanitizer to position the specimens in a transparent plastic Petri dish. Mesonotal rasps (Figs 4A and 4B) were slide-mounted with PVA mounting medium (BioQuip Products, Inc.) and photographed with an Olympus BH-2 compound microscope fitted with phase contrast optics. Figure 4D was taken of a pseudergate that was freshly killed by desiccation and photographed with a Hitachi 4700 FESEM scanning electron microscope at 3-5 kV.
Distribution
Calcaritermes is primarily a neotropical genus with the exceptions of a relic nearctic species, C. nearcticus Snyder, 1933, found from central and northeastern Florida to southeastern Georgia (Scheffrahn et al. 2001), and an anomalous indomalaysian congener, C. krishnai (Maiti and Chakraborty), known from Great Nicobar Island (Roonwal and Chhotani 1989) and Papua New Guinea (Y. Roisin, unpublished data). The current New World distribution of Calcaritermes is given in Fig. 1. Literature localities in Fig. 1 include C. colei Krishna from San Luis Potosi, Mexico and C. snyderi Krishna from El Salvador (Krishna 1962), C. imminens from Colombia (Snyder 1925b), C. parvinotus Light from Colima, Mexico (Light 1933) and from Chamela, Mexico (Nickle and Collins 1988), and C. rioensis from Brazil (Krishna 1962, Reis and Cancello 2007). Emerson's 1969 localities for C. guatemalae (Tabasco region of Mexico) and C. nigriceps (central Colombia) are not mapped in Fig. 1 because they were deemed too vague. Krishna (1962) redescribed C. temnocephalus (Silvestri 1901) from types collected in Venezuela (Silvestri 1903) and additional material from Trinidad. The type locality, Las Trincheras (10.31, -68.09), Carabobo State, is in the vicinity of Caracas, where Frederik Vilhelm August Meinert collected insects in 1891 (Reuter 1904), of which the termites were studied by Silvestri (1901, 1903). In 2008, we collected all castes of a Calcaritermes sp. at P.N. San Sebastián, Carabobo, Venezuela (10.402, -68.000, elev. 105 m). Our material matched Krishna's 1962 redescription of C. temnocephalus and substantiates our earlier synonymy that C. fairchildi is a junior synonym of C. temnocephalus (Scheffrahn et al. in press). Specifically, the description of C. fairchildi (=thompsonae) (Snyder 1926b, 1926c) from Costa Rica (Fig. 1) also compares favorably with our Venezuela sample. I have compared 50 colony series of C. temnocephalus from Guadeloupe to Ecuador and Belize. C. temnocephalus is unique among congeners in the UF collection (species shown in Fig. 3) because pseudergate castes do not have a mesonotal rasp and also lack the concavity of the posterior margin of the pronotum (Figs 4E, 5C). Imagos of C. temnocephalus are unique among those described in the genus in that they are orange-brown in body coloration and have hyaline wings (Fig. 2B, 5C). The next lightest imago is C. brevicollis, with a medium brown dorsal coloration and lightly pigmented wings. All eight other described Calcaritermes imagos are dark brown to blackish and have smoky wings (e.g., Fig. 2D, 2I). Snyder (1925b) described C. recessifrons from one soldier and a series of alates (type locality Cincinnati, 11.10, -74.08, Fig. 1) collected by W. M. Mann during his expedition to Colombia. In 2009, we surveyed termites near the type locality of C. recessifrons ("above" Minca, 11.126, -74.120, elev. 712 m) and collected several colony samples of Calcaritermes there. Our material matched Snyder's description of C. recessifrons. The description of C. nigriceps (Emerson 1925) from British Guiana (Fig. 1, now Guyana) also compared favorably with our sample. Further comparison of Calcaritermes specimens collected from Grenada to Panama (Fig. 1) confirmed that C. recessifrons is a junior synonym of C. nigriceps, as previously reported by Scheffrahn et al. (in press). C. nigriceps soldiers are unique among congeners in the UF collection as the frontal furrow is shallow and unsculptured (Fig. 3C). Cincinnati, Colombia, is also the type locality of C. imminens
(Snyder 1925b); however, we were unable to collect this distinctive medium-sized species, in which the soldier has an overhanging frons. Table 1 lists the current New World species of Calcaritermes and their type localities.
Mesonotum morphology
Mandible dentition of pseudergate or nymphal castes has been used for generic grouping of some Kalotermitidae (Krishna 1961), but for most genera, these weak and often overlapping characters by themselves lead to tenuous or uncertain identifications. In describing the imago and immature forms of C. emarginicollis from Costa Rica, Snyder (1925a) was the first to observe and depict (Snyder's Figs 2, 4) that the mesonotum of the brachypterous nymph had an "aspirate or rugose area", while in the presoldier caste, he noted that the aspirate area of the mesonotum was elevated. Snyder (1925b), Light (1933), and Krishna (1962) described eight more Calcaritermes species, but the mesonotal rugosity was not mentioned again for any caste until Miller (1943) reported that nymphs of C. nearcticus had a "slightly raised median mesonotal area upon which appear numerous aspirities". In Krishna's 1961 revision of the Kalotermitidae, the mesonotal character was not mentioned. These mesonotal "rasps" were found on all apterous pseudergates, early stage brachypterous nymphs, and most soldiers of the Calcaritermes species in the UF Collection (Table 1), with the exception of C. temnocephalus, in which the rasp is absent. Under magnification, it was observed that each of these rasps actually consists of a single layer of slightly overlapping spatulate scales with basal attachments at their anterior ends (Figs 4A, 4B, 4D). The mesonotal rasps have a midline divide and form an elevated mound raised above the remainder of the dorsum (Fig. 4C, right). The posterior margin of the pronotum of all ergatoid/nymphoid castes, except again C. temnocephalus, has a posterior marginal concavity that partially surrounds the anterior of the rasp (Fig. 4C, arrow). The pronotum is steeply angled toward the head anterior to the rasp (Fig. 4C, right). The scale patterns and lateral profile of the rasps vary somewhat among species (e.g., Figs 4A, 4B), but no species-specific morphology was investigated in this study. No rasp was found on any mature reproductive, and the robustness of the rasp was inversely proportional to wing bud size, disappearing when the nymphs were one molt from adulthood. The mesonotal rasp is the first external character to provide a diagnostic, generic identification of an immature kalotermitid.
Microscopic examination of the mesonotal rasps from ethanol-preserved specimens did not reveal microbial material around the scales. However, when live specimens of C. nearcticus were prepared for SEM without cleaning or rinsing, an organic (microbial?) paste was observed between the scales (Fig. 4D).
Queen physogastry

Over the years, I have observed hundreds of mature queens in kalotermitid nests but was struck by the extreme queen physogastry in C. temnocephalus. On 26 May 2008, two colonies of C. temnocephalus were collected by the UF survey team at Silva Seco de Capadare, Guiermo, Venezuela (11.154, elev. 58 m). Both colonies were large and occupied rather sound wood, from which a mature primary queen was removed (Fig. 4E). The extent of physogastry of these queens is what is typically observed in the Rhinotermitidae or Termitidae, in which the intersegmental membrane stretches well beyond the width of the tergites or sternites. Typically, the extended intersegmental membrane in primary queens of the Kalotermitidae is narrower than the width of adjacent abdominal sclerites, but in the C. temnocephalus queens, the membrane is much wider than the sclerites. Eggs from one of the C. temnocephalus colonies (Fig. 4E, arrow) and from a C. brevicollis colony in Panama (Fig. 4F, arrows) also appeared disproportionately large compared to other kalotermitids.

Figure 2. Photographs of live Calcaritermes specimens taken during collection. A Soldier of C. guatemalae, Honduras. B Alate of C. temnocephalus, Venezuela. C Soldiers of C. snyderi, Honduras. D Alate of C. guatemalae, Belize (arrow denotes fungal hyphae growing in gallery). E Soldier of C. temnocephalus, Venezuela. F Soldier of C. brevicollis, Colombia. G Two soldiers from the same colony of C. nigriceps, Colombia. H Soldier and pseudergates of C. rioensis, Venezuela (arrow denotes mesonotal rasp of pseudergate). I Soldier, dealate, and pseudergates of C. nearcticus, Florida (arrow denotes fungal staining around gallery). All images to the same scale.
Nests

Calcaritermes colonies infest damp or wet wood, usually in the shade of forest canopy. At ground level, populations are never plentiful in a given area. However, Roisin et al. (2006) found that the preponderance of C. brevicollis colonies in a Panamanian rain forest occupied dead branches 10 m or higher above the ground. Workers and soldiers move rather slowly compared to most other kalotermitids, but in contrast, the alates flutter in hyperkinetic fashion as soon as their galleries are opened. Bourguignon et al. (2009) collected all dispersing C. brevicollis alates during March to June in flight intercept traps. No alates were attracted to light traps, indicating that C. brevicollis, and probably the other dark-colored species, are daytime flyers.
The gallery system of Calcaritermes differs from other kalotermitids in several distinct ways. First, the galleries are narrow and tubular, perhaps allowing only two termites to pass at one time. The galleries are spaced rather far apart in the wood matrix, thus occupying a relatively small volume of the colonized member (Figs 5A, 5B). Secondly, the galleries contain very few loose fecal pellets, but gallery surfaces are generously lined with what appears to be a moist fecal/microbial? paste (Figs 5A, 5B, 5C). Miller (1949) noted that C. nearcticus "lines some of its galleries with a coating of brownish material". Thirdly, the peripheries of the galleries are stained or exhibit halos suggesting fungal infection emanating from the gallery surfaces into the wood at varying depths (Figs 5A, 5B). Again, the exception is C. temnocephalus, which infests wood in open, often drier conditions and had less fecal coating and microbial growth (Fig. 5C) in its galleries.

Figure 5. A C. nigriceps galleries exposed in Colombia. B C. nearcticus galleries in oak wood, Florida. C C. temnocephalus galleries in Venezuela. Images not to the same scale.
Given the mesonotal rasp, the low volume of wood excavated, the gallery coating, and the peripheral gallery staining, one can hypothesize that Calcaritermes derives some nutrition via a symbiotic relationship with microbes growing on the surface of their galleries. The rasp may be used by foragers to inoculate gallery surfaces with fungal or bacterial spores, analogous to the mycangium (Stone et al. 2007) found in bark beetles (Scolytinae). Unlike the mycangia of adult beetles, Calcaritermes adults (alates) show no obvious external structure for horizontal transfer of spores to new nesting sites, although alimentary storage is a possibility. So whether Calcaritermes, like bark beetles, have some form of external association with microorganisms (Gilbertson 1984) or actually rely on symbiotic mycophagy (Harrington 2005) remains to be studied. | 2017-06-18T05:18:27.576Z | 2011-11-21T00:00:00.000 | {
"year": 2011,
"sha1": "bd0daeedc0daf90d6441e16a8f347e691cf02806",
"oa_license": "CCBY",
"oa_url": "https://zookeys.pensoft.net/lib/ajax_srv/article_elements_srv.php?action=download_pdf&item_id=3006",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bd0daeedc0daf90d6441e16a8f347e691cf02806",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
218823633 | pes2o/s2orc | v3-fos-license | The Establishment of the Hydraulic Structure Optimal Size in the Conditions of Underground Kimberlite Mines
The article is devoted to establishing the optimal size of hydraulic structures in underground kimberlite mines. Based on the results of parametric tests, the authors obtained a mathematical formula that allows the optimal dimensions of clarifying tanks and water collectors to be calculated with a high degree of accuracy, in terms of settling mine water, for the needs of the underground kimberlite mines of the Russian Federation.
Introduction
A characteristic feature of the sectional pumps used in the main, district, and auxiliary drainage systems of underground kimberlite mines of the Russian Federation is the low durability of their units and parts (Table 1). A major contributing factor is the high concentration of solid particles in the pumped mine water; close contact with these particles leads to premature failure of the flow-path parts of the pumping equipment operating at underground kimberlite mines due to active hydroabrasive wear (Fig. 1). Surveys of the workers responsible for mine drainage at the underground kimberlite mines of the Russian Federation indicate that the main reason for the high contamination of the pumped mine water is the low efficiency of settling in the existing clarifying tanks and water collectors. One obvious reason for this low settling efficiency in these underground hydraulic structures is their poorly chosen dimensions.
Methods and materials
As a toolkit for establishing the optimal size of the clarifying tanks and water collectors of drainage installations of underground kimberlite mines of the Russian Federation, methods of mathematical statistics were used.
Results and discussion
The studies conducted by the authors indicate that, in the conditions of the underground kimberlite mines of the Russian Federation, the weighted average frequency T of cleaning the water tanks of the various drainage installations from sludge strongly correlates with their weighted average filling level, i.e., the working depth h (Fig. 2). Since the frequency of cleaning underground hydraulic structures from sludge and the effectiveness of settling mine water in them are interrelated, it can be stated that the effectiveness of settling mine water increases as the working depth of the clarifying tank or sump decreases.
Based on the foregoing, we conclude that, when calculating the optimal dimensions of the clarifying tanks and catchment basins, it is necessary first of all to focus on their working depth.
As is known, the working volume V of the clarifying tanks and catchment basins is calculated using expression (1) [3]. Through studies of the physical properties of mine water taken from the clarifying tanks of the main drainage plant at the Udachny underground kimberlite mine, a linear regression equation was derived that allows the deposition time t of most of the solid particles to be calculated with a high degree of accuracy as a function of the working depth h of the underground hydraulic structure (Fig. 3) [3]. The derived linear regression equation is universal in its application, since the physical characteristics of the mine water pumped out from various underground kimberlite mines of the Russian Federation (the particle size distribution of the solid particles) affect the sedimentation rate of the solid particles.
After combining expression (1) with the linear regression equation (see Fig. 3), the authors obtained a mathematical formula that allows the optimal sizes of clarifying tanks and water collectors to be calculated with a high degree of confidence, in terms of settling mine water, for the needs of the underground kimberlite mines of the Russian Federation.
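Since neither expression (1) nor the fitted regression coefficients are reproduced here, the following R sketch only illustrates the general shape of the combined calculation; the coefficients a and b and the sizing relation V = Q·t (design inflow × settling time) are assumptions for illustration, not values taken from the study:

# Illustrative sketch only: coefficients a, b and the relation V = Q * t
# are assumptions, not values taken from the article.
settling_time <- function(h, a = 0.5, b = 0.2) {
  # Hypothetical linear regression of deposition time t (hours) on working
  # depth h (m), of the general form derived by the authors (Fig. 3)
  a * h + b
}
working_volume <- function(Q, h) {
  # Assumed sizing relation: volume (m^3) = inflow (m^3/h) x settling time (h)
  Q * settling_time(h)
}
# Example: 400 m^3/h design inflow over candidate working depths of 1-4 m
h <- seq(1, 4, by = 0.5)
data.frame(depth_m = h, volume_m3 = working_volume(400, h))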
Conclusion
Based on the results of the research conducted, the authors proposed and sufficiently substantiated one possible way to reduce the rate of hydroabrasive wear of the parts and units of the sectional pumps used in the drainage installations of the underground kimberlite mines of the Russian Federation. | 2020-04-16T09:11:57.895Z | 2020-04-15T00:00:00.000 | {
"year": 2020,
"sha1": "1bd4bc1574b4e4b32f5fadc60a0ebee691589d57",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/459/4/042054",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "dfe0861afc914ea271e77904c24698ab6f0beb52",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Physics",
"Geology"
]
} |
237866235 | pes2o/s2orc | v3-fos-license | Roadside Car Surveys: Methodological Constraints and Solutions for Estimating Parrot Abundances across the World
Parrots stand out among birds because of their poor conservation status and the lack of available information on their population sizes and trends. Estimating parrot abundance is complicated by the high mobility, gregariousness, patchy distributions, and rarity of many species. Roadside car surveys can be useful to cover large areas and increase the probability of detecting spatially aggregated species or those occurring at very low densities. However, such surveys may be biased due to their inability to handle differences in detectability among species and habitats. We conducted 98 roadside surveys, covering >57,000 km across 20 countries and the main world biomes, recording ca. 120,000 parrots from 137 species. We found that larger and more gregarious species are more easily visually detected and at greater distances, with variations among biomes. However, raw estimates of relative parrot abundances (individuals/km) were strongly correlated (r = 0.86–0.93) with parrot densities (individuals/km²) estimated through distance sampling (DS) models, showing that variability in abundances among species (spanning several orders of magnitude) overcomes any potential detectability bias. While both methods provide similar results, DS cannot be used to study parrot communities or monitor the population trends of all parrot species, as it requires a minimum number of encounters that is not reached for most species (64% in our case), mainly the rarest and most threatened. However, DS may be the most suitable choice for some species-specific studies of common species. We summarize the strengths and weaknesses of both methods to guide researchers in choosing the best-fitting option for their particular research hypotheses, characteristics of the species studied, and logistical constraints.
Introduction
Parrots (Order Psittaciformes) stand out among birds because of their poor conservation status [1,2] and the lack of knowledge on their population sizes and trends. According to the most recent IUCN evaluation, almost 30% of the 402 extant parrot species are threatened with extinction, while accurate information on their population numbers and changes in abundances is lacking for most species [3]. The paucity of information on population sizes, densities, and changes in the abundance of parrots across the world was highlighted six years ago [4], calling for further development and application of monitoring methods to better understand how parrot populations are responding to the variety of threats they face [1]. In fact, a recent review relating conservation threats to population trends in the Neotropics, the realm with the highest richness of parrot species [1], revealed the scarcity of data on actual abundances and population trends [5]. The situation is similar for the other realms, even for the Afrotropics [6] where parrot species richness is the lowest [1].
Estimating parrot abundance is challenging because many species naturally occur at very low densities [4], while others have highly patchy distributions or very restricted ranges [3]. Moreover, widespread threats such as habitat loss, illegal trade, and persecution [7][8][9] may be drastically reducing parrot population sizes and ranges, making the design of monitoring programs even more difficult. Furthermore, some parrot species are highly gregarious and aggregate in large communal roosts, and thus estimates of overall population size can be obtained when all roosts are located and can be properly surveyed [10]. However, this is not feasible for most parrot species, as roost sites may often change [11], they cannot be located in large, inaccessible areas, or simply because not all species gather in large communal roosts. Researchers are then forced to use alternative methodologies, such as point counts and line transects traditionally used for many avian taxa, to obtain estimates of relative abundances and densities [10]. A recent review has compiled different sampling and analytical methods for estimating parrot abundances [10]. Although the efficiency of walk line transects and point counts for estimating parrot abundances may differ among studies [11][12][13], both methods are constrained by the small geographic scale at which they can be conducted. Therefore, they may not be logistically affordable for surveying parrot species that are patchily distributed and occur at very low densities, as a very large number of sampling sites (e.g., up to 2000) are required for surveying uncommon species [12]. Conversely, roadside car surveys allow the coverage of very large areas, thus accounting for the large home ranges and mobility of many common parrot species and increasing the probability of detecting individuals of species occurring at very low densities and/or those that are spatially aggregated [10].
Roadside car surveys have been largely used to survey conspicuous species (mostly raptors, e.g., [14][15][16]), providing an easy-to-obtain measure of relative abundance (number of individuals recorded/km surveyed). Recently, roadside car surveys have also been used to relate the relative abundances of parrot species to habitat changes [17,18], the role of parrots as seed dispersers [19,20] or their roles in other ecological functions [21], or to assess how parrots are selectively poached for their use as pets [22]. Their gregariousness and especially their loud vocalization behavior [10] make this method even more appropriate for parrots, because vocalizations facilitate their detection compared to other taxa such as raptors, which are mostly detected only visually and are thus more difficult to record when perched hidden by the vegetation. The easier aural than visual detection of parrots was revealed by Lee & Marsden [23], showing that only 4% of 2,681 parrot detections obtained through walk line transects were of silent, seen-only groups. However, as for point counts [24] and walk line transects [23], many parrot encounters correspond to aural-only detections, and thus the number of unobserved individuals cannot be recorded for estimating abundances [22][23][24]. A proposed solution to this problem, for point counts as well as walking and car transects, is to substitute missing count data (i.e., aural-only encounters) with the average flock size obtained for the species during the survey [22][23][24]. However, there is no evaluation of how this methodological approach may affect the estimates of abundance. Another obvious problem for all three methods is that the probability of detection decreases with the distance of encountered birds from the observer, and that this distance-dependent probability of detection may vary among species and habitats [10]. This problem is easily solved through distance sampling (DS) modeling, currently implemented in accessible statistical packages, which allows the calculation of probabilities of detection to estimate densities (individuals/km²) of the studied species [10]. However, this much more desirable approach comes with the caveat that robust DS modeling requires a minimum number of visual encounters [10], from which distance measurements can be taken to inform models, which in some cases could reach 40-50 contacts [13]. Unfortunately, this analytical constraint makes it impossible to estimate abundances for rare parrot species occurring at very low densities [20,25] or for those relatively abundant but highly gregarious species recorded in high numbers of individuals in a few very large flocks [26], with numbers of encounters that are insufficient for DS modeling. Nonetheless, recent work showed a strong correlation between distance-uncorrected relative parrot abundances obtained through roadside car surveys and distance-corrected densities for a sample of species with enough visual encounters for DS modeling [22] (see also unpublished results offered by [10]). These results support the idea that distance-uncorrected relative abundances of parrots obtained through roadside car surveys are good proxies of their actual abundances, especially when the high variability in abundance among species overcomes the main sources of sampling error, i.e., differences in detectability [22]. Nonetheless, further research embracing different parrot communities and biomes is needed before generalizing these conclusions.
Here, we take advantage of an unprecedented data set that compiles our roadside car surveys conducted over 10 years, covering 20 countries and all continents and biomes inhabited by parrots across the world. We first assessed sources of variability related to the percentage of aural-only encounters and the distance at which parrots were detected. We hypothesized that parrot detectability in roadside surveys is a function of species size and gregariousness, and of the openness of the surveyed habitat. We predicted that larger and more gregarious species should be more easily detected visually and at greater distances, and that detection should also vary among biomes, since they range from very open (e.g., Deserts and Xeric Scrublands) to highly concealing forested habitats (e.g., Tropical and Subtropical Broadleaf Moist Forests). We then correlated distance-uncorrected relative abundances (individuals/km) with density estimates (individuals/km²) obtained through DS modeling, using different thresholds for a minimum number of visual contacts. We evaluated how adding an estimation of the number of only-heard (unseen) individuals [22][23][24] affects these correlations. We found a strong correlation between these estimates of parrot abundances and discuss the pros and cons of both methods, including the loss of whole surveys and the traits of the species that are excluded when using DS because they do not reach the minimum number of visual encounters needed for statistical modeling. We aim to guide researchers in choosing the best-suited methodology given their research objectives and study species.
Study Areas and Field Work
We selected several countries from each of the main parrot-inhabited realms (Neotropic, Afrotropic, Indomalayan, and Australasia). These regions represent the richest to the poorest parrot communities worldwide [1]. This work was embedded within different research projects, all of which shared the need to estimate the relative abundance of each species within each parrot community. We used these estimates to answer different questions, such as those related to the relative contribution of parrots to ecological functions [19][20][21], assessing poaching pressure [22], or the effects of habitat transformations on parrot abundances [17,18]. Therefore, for each country, we designed road itineraries to cover the main biomes and ecoregions occupied by parrots (obtained from https://ecoregions2017.appspot.com/; accessed 15 January 2021) and the distributions of as many parrot species as possible (obtained from [3] and a variety of regional bird field guides). Using satellite maps, we selected unpaved and low-transit paved roads that crossed from pristine to highly humanized habitats (e.g., agricultural and urbanized areas), thus maximizing the chances of finding a variety of parrot species, from those intolerant to habitat transformations to those benefitting from anthropogenic changes (e.g., [17,18,[27][28][29]).
Most of the fieldwork was done between 2011 and 2020 (Supplementary S1), through expeditions that typically lasted 3-5 weeks. Some small countries were well surveyed through a single expedition (e.g., Costa Rica), while some of the largest (e.g., Brazil) required many expeditions to cover the greater variety of biomes, ecoregions, and parrot communities. In such cases, results obtained from a single ecoregion/biome/country in different expeditions (usually conducted in different years) were pooled to increase sample sizes (number of km surveyed and number of parrots recorded) and thus better represent the whole parrot community and increase the precision of estimates [12]. Only Australia, Colombia, and India were partially surveyed due to logistical constraints (Supplementary S1). Surveys were conducted in different seasons and across the annual cycles of parrots. However, this should not be problematic for the objectives of this paper, since our analyses compare results of two parrot abundance estimates simultaneously obtained within each ecoregion/biome/country surveyed (see below). Rather, the large geographic and temporal scales of our surveys reinforced our results and allowed their generalization.
Roadside Surveys
Typically, and similarly to other roadside parrot surveys [17][18][19][20][21][22], the driver and two experienced observers drove a 4 × 4 vehicle at low speed (10-40 km/h) following previously designed itineraries from dawn to dusk, avoiding rainy and hot middays when parrot activity declines [30,31]. All parrots detected were recorded, briefly stopping when needed to identify species and/or count the number of individuals in flocks. Observers were familiar with the parrot species surveyed, as surveys were combined with behavioral and foraging studies across all study areas (see e.g., [32][33][34][35]), so they were able to visually and aurally identify them. Moreover, several authors participated in different surveys, and the first author participated in 91% of all surveys, so each survey included researchers with accumulated experience in identifying parrots. For a subsample of surveys (those conducted since 2018), we also recorded the mode of detection (i.e., whether parrots were first detected aurally, visually, or both) and their behavior at first detection (i.e., resting, feeding, or flying). Following previous recommendations [13] and studies [17][18][19][20][21][22], we considered both perched and flying individuals for estimating parrot abundances (see Discussion for pros and cons of including flying birds), thus also making distance-corrected and uncorrected estimates (see below) comparable. We paid special attention to the flying direction and group size of parrots in flight to avoid double counting of flocks [13].
Distances of detection (i.e., the perpendicular distance from parrots to the road when they were first detected) were recorded to compare two estimates of parrot abundance (see below). Detection distance was estimated visually for short distances or using a laser rangefinder incorporated into binoculars for large distances (Leica Geovid 10 × 42, range: 8-1500 m), measuring the distance to the closest tree for flying flocks. In the case of loose flocks, we measured the distance to the closest individual in the flock. In many instances, parrots were only heard and the species identified through their vocalizations because they were concealed by vegetation. Therefore, we could not record the distance of detection nor the number of individuals. Thus, we classified detections as aural (only heard) or visual (seen or both seen and heard).
Since 2018, all roadside surveys and parrot counts were recorded using the ObsMapp application for smartphones, which uploads the observations to the citizen science platform Observation (www.observation.org; accessed 15 January 2021). Therefore, all records, exact location, and associated information can be viewed and downloaded from this web platform (searching for the observers Pedro Romero-Vidal, Dailos Hernández-Brito, and José Luis Tella) by any researcher in the future.
Distance Sampling Modeling
Distance sampling (DS) models were fit for each combination of country, ecoregion, and species (henceforth study case). The maximum detection distance was fixed at 500 m for all species. While this value may not be optimal for some species and/or habitat types, it encompasses most of the detections (see Results Section 3.3.2). More importantly, having a single maximum distance allows straightforward comparisons among study cases. We restricted DS modeling to those study cases with at least 10 visual contacts within 500 m of distance. We conservatively used this encounter threshold as it was the minimum required for DS modeling in a previous whole-parrot community study [22], thus allowing us to include as many species and study cases as possible. In fact, a minimum of 10 contacts of the target species was suggested to obtain useful, if imprecise, parrot density estimates [4]. Nonetheless, we also tested how results could change by gradually increasing the threshold up to 50 visual contacts per species (see below). Because the number of individuals in a group can influence detection, we evaluated the potential correlation between group size and detection distance using Spearman correlation tests. We binned distance data for each study case to facilitate the fitting of detection functions, using breaks every 25 (a), 50 (b), and 100 (c) m (i.e., a: 0-25, 25-50, . . . , 475-500 m; b: 0-50, 50-100, . . . , 450-500 m; c: 0-100, 100-200, . . . , 400-500 m). For each binning setup (a, b, and c), we fitted DS models with a half-normal key function as previously recommended after visual inspection of the histograms of distances [36,37], but also using the hazard rate and the uniform key as alternative functions. We compared models with no adjustment terms and with cosine, Hermite polynomial, and simple polynomial adjustments, up to order 5. For models where group size was correlated with detection distances, we also fitted a DS model with group size as a covariate. Akaike's Information Criterion (AIC) was used to compare models within a distance break set [38], but it cannot be used to compare models fit to data with different binning setups [36]. Thus, we performed chi-square goodness-of-fit tests to compare the best models from each binning setup and identify the best fitting model (highest chi-square test p-value) for each study case. To allow visual inspection of our DS models and chi-square tests, we provide, for each study case, a histogram of detection distances (with Sturges's breaks), the plot of group size x detection distances (with Spearman correlation test p-value), and the estimated detection functions from the best DS models for each binning setup, overlaid on the histogram of detection distances with the respective distance breaks (Supplementary S3).
Detection probability (P) was obtained from the best model for each study case. Then, abundance (N) was calculated by dividing the number of observed individuals by the estimated P within the 500 m maximum distance (or a 1 km-wide strip centered on the road). Density was calculated by dividing N by the length (in km) surveyed for each case, providing an estimate of individuals/km² (the width surveyed was 1 km). Analyses were done in R using the "Distance" package [39,40].
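A minimal sketch of this workflow with the R "Distance" package is shown below; the distances, flock sizes, and survey length are hypothetical placeholders rather than our data, and the accessor for the average detection probability may vary slightly across package versions:

library(Distance)

# Hypothetical perpendicular detection distances (m) for one species in
# one study case; 'size' is the flock size at each visual encounter
obs <- data.frame(
  distance = c(10, 25, 40, 60, 80, 120, 150, 200, 260, 340, 410, 480),
  size     = c( 2,  4,  3,  6,  2,   8,   5,  12,   3,   7,   4,  10)
)

# Half-normal key with cosine adjustments, distances binned every 50 m
# and truncated at the fixed 500 m maximum used for all species
fit_hn <- ds(obs, truncation = 500, key = "hn", adjustment = "cos",
             cutpoints = seq(0, 500, by = 50))

# Alternative key function, compared by AIC within the same binning setup
fit_hr <- ds(obs, truncation = 500, key = "hr", adjustment = NULL,
             cutpoints = seq(0, 500, by = 50))
AIC(fit_hn)
AIC(fit_hr)

# Chi-square goodness of fit, used to compare best models across binnings
gof_ds(fit_hn)

# Density: N = observed individuals / P over a 1 km-wide strip, divided
# by the km surveyed (here a hypothetical 120 km)
P <- summary(fit_hn)$ds$average.p
sum(obs$size) / P / 120   # individuals per km^2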
Traits of Parrot Species
We obtained two measures of parrot size, body length (in cm) and body mass (in g), from [41]. As a proxy of the gregariousness of a species, we used our own data on flock sizes. For analyses based on study cases, we used the average flock size of the species recorded within each study case. For analyses at the species level, we used the overall average flock size after pooling data when a species was surveyed in more than one study case. Average flock sizes were unrelated to the body length (Spearman correlation, rs = −0.02, p = 0.84) and body mass (rs = −0.09, p = 0.28) of the 131 species visually recorded in our study. However, body length and body mass were strongly correlated (rs = 0.88, p < 0.001), so both variables were alternatively fitted in models accounting for the relationship between detectability and body size (see below). Results were nearly identical, but the effect of body mass was always slightly stronger than that of body length, so the latter results are not shown for simplicity. The global conservation status of each species was obtained from the 2020 IUCN Red List [3].
Statistical Analyses
We used Generalized Linear Models (GLM) to assess how the number of parrot encounters and the number of parrot species recorded (negative binomial error distribution, log link function) varied among realms and with the lengths of surveys. Moreover, we evaluated how the percentage of aural encounters, distances of detection, and probabilities of detection (P) (log-transformed; normal error distribution, identity link function) were affected by the body mass and flock size of the species and the biomes they occupied. For the proportion of aural encounters, we restricted analyses to species with at least 15 encounters to reduce error biases in the estimation of proportions [42].
The relationship between relative abundances (individuals observed/km; response variable) and densities of parrots (individuals estimated/km²) obtained through DS modeling was assessed with non-parametric Spearman correlations and linear regressions on raw and log-transformed data, respectively. As the robustness of DS models, and thus the precision of their estimated densities, may increase with sample size (i.e., number of contacts [12]), we performed five regressions by restricting data to cases with at least 10, 20, 30, 40, and 50 visual encounters at distances ≤ 500 m. Following previous recommendations to avoid the underestimation of secretive species [22][23][24], we also estimated relative abundances by adding to the number of observed individuals an estimate of those not observed (number of aural-only contacts × average flock size obtained in each study case), divided by the km surveyed, and repeated the same regression on densities obtained from DS models. Finally, we assessed whether these relationships are influenced by body mass, flock size, biome, and the number of visual encounters through a GLM (response variable: log-transformed relative abundance; normal error distribution, identity link function).
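As a minimal R illustration of these comparisons (made-up values stand in for our 208 study cases):

# Hypothetical paired estimates per study case: relative abundance
# (individuals/km) and DS-estimated density (individuals/km^2)
rel_abund <- c(0.05, 0.12, 0.30, 0.55, 1.2, 2.4, 4.8, 7.3)
density   <- c(0.20, 0.50, 1.40, 2.00, 6.5, 11,  30,  95)

cor.test(rel_abund, density, method = "spearman")   # raw data
summary(lm(log(rel_abund) ~ log(density)))          # log-transformed data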
The characteristics of case studies and species (body mass, flock size, relative abundance, conservation status) not available for estimating their densities through DS modeling due to the low number of visual contacts were identified using GLMs (response variable: available/not available; binomial error distribution, logistic link function).
Our data set included species that were surveyed in different case studies (mean = 4.2, median = 2 case studies per species), thus providing replicates that allow for testing the relative contribution of species traits and biomes to detectability and abundance estimates through the multivariate models described above. These models would require controlling for species identity to account for pseudoreplication. However, models fitting species identity as random or fixed effects together with species traits confounded their individual effects, as species had unique values of body size and almost-unique values of flock size. As our research goal was not simply to assess whether species differ among themselves but to know what species traits explain these differences, we show models including species traits without controlling for their identity. Models did not show data overdispersion, and the percentage of deviance explained by GLMs and the adjusted R² for linear regressions are provided to show the variability in the data captured by our models. Statistical analyses were performed using SPSS v. 27.
Results

Surveys averaged 584.1 km in length (range: 35.88-6899.48 km, N = 98), and 75% of them were longer than 150 km (Supplementary S1). The number of parrot encounters varied between 0 and 1263 per survey (mean 162.1 ± 199.4 SD, median 98, Supplementary S1), and a GLM revealed it was unrelated to survey length (χ² = 1.58, p = 0.21) but varied among realms (χ² = 64.44, p < 0.001), with no significant interaction between survey length and realm (χ² = 2.07, p = 0.56). The average number of parrot encounters per survey decreased as follows: Neotropic > Indomalayan > Australasia > Afrotropic. The number of parrot species recorded per survey ranged from 0 to 25 (mean 5.87 ± 5.27 SD, median 4 species, Supplementary S1). Similarly to the number of encounters, a GLM showed that the number of species recorded was unrelated to survey length (χ² = 0.36, p = 0.85) and varied among realms (χ² = 21.71, p < 0.001), with no significant interaction between survey length and realm (χ² = 0.63, p = 0.89). The average number of species recorded per survey decreased as follows: Australasia > Neotropic > Indomalayan > Afrotropic (Supplementary S1). As each of the 98 surveys covered different combinations of biomes, ecoregions, and countries (Supplementary S1), and up to 25 species were recorded per survey, we obtained a total of 575 estimates of species-specific parrot abundances (i.e., study cases).
Aural and Visual Encounter Rates
Considering the smaller data set of parrot encounters in which we recorded the mode of detection (N = 9617 encounters), 15.6% were detected visually, 46% were detected aurally, and 38.4% were simultaneously seen and heard. Parrot detections summing those exclusively heard plus those heard and seen accounted for 84.4% of the encounters.
Using the whole data set, we recorded a total of 15,072 parrot encounters, of which 5325 (35.33%) were aural only, thus yielding records of 119,797 observed individuals plus an unknown number of unseen individuals that were identified to the species level through their vocalizations. The proportion of aural encounters differed among species, ranging from 0% to 100% (mean = 23.9%, median = 16.5%; 6 species were only aurally registered, see Supplementary S2). Considering those study cases with at least 15 encounters (N = 191), a GLM showed that the proportion of aural encounters decreased with body mass (χ² = 69.04, p < 0.001, Figure 2a) and, to a lesser extent, with average flock size (χ² = 6.09, p = 0.014, Figure 2b) of the species, meaning that the larger and more gregarious species were more easily recorded visually, with no statistically significant variation among biomes (χ² = 17.58, p = 0.063, Figure 2c). This model explained 34.48% of the deviance.
As proposed in previous works, a method to avoid underestimating the number of parrots due to aural-only encounters is to multiply these encounters by the average flock size recorded within each species-specific study case and to add this estimate of unseen (but heard) individuals to the number of visually recorded individuals, thus obtaining a more reliable estimate per species. By applying this correction factor to our whole data set (575 study cases), the total number of parrots recorded increased by 22.6% (i.e., from 119,797 observed individuals to 154,759 estimated individuals). Importantly, this increment varied largely among species, ranging from 0 to 73.8% (mean = 20.4 ± 19.8 SD, median = 14.3, N = 131 species; the increment could not be calculated for the six species that were only aurally encountered).
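The correction itself reduces to one line of arithmetic; a sketch in R with hypothetical numbers follows:

# Hypothetical study case: unseen individuals from aural-only encounters
# are estimated as aural-only encounters x mean flock size for that case
observed_individuals  <- 420
aural_only_encounters <- 35
mean_flock_size       <- 4.6
km_surveyed           <- 310

estimated_individuals <- observed_individuals +
  aural_only_encounters * mean_flock_size
estimated_individuals / km_surveyed  # estimated relative abundance (ind/km)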
Distance-Dependent Detectability
The distance at which parrots were detected was influenced by several factors. When analyzing the smaller data set in which both the type of detection and the behavior of birds were recorded, a GLM showed that distances (range 4-1400 m, mean = 89.8 ± 102.7 SD, median = 60.0, N = 4783) were lower for aural than for visual detections (χ² = 82.83, p < 0.001) and for perching than for flying birds (χ² = 349.68, p < 0.0001), while they were larger for larger flocks (χ² = 37.39, p < 0.001) and for species with larger body mass (χ² = 449.75, p < 0.0001), with significant variation among biomes (χ² = 125.42, p < 0.0001) (deviance explained by the model: 22.24%). These and probably other unmeasured sources of variation indicate the need to model distance-dependent probabilities of detection for unbiased estimation of parrot abundances.
Using the whole data set, we could calculate distance-dependent probabilities of detection (P) through DS modeling for 208 study cases with at least 10 visual encounters within 500 m of the transect line per species. Distances ranged between 0 and 1498 m (mean = 76.1 ± 95.8 SD, median = 50, N = 8491), while 99.3% of the distances were ≤500 m.
The half-normal key function was the detection function best fitting the data in most of the study cases (51.4%), followed by the hazard rate (42.8%) and the uniform (5.8%) functions. The best-fitted models included different cosine adjustments in 24 (11.5%) of the cases, and only in 10 cases (4.8%) included group size as a covariate. The resulting P ranged from 0.01 to 1 (mean: 0.22 ± 0.15 SD, median = 0.19). It is worth noting that the extremely low values of P (ranging from 0.01 to 0.05, in 18 study cases obtained through the hazard rate function and in one case obtained through the half-normal function) may be attributable to cases where parrots were attracted by feeding/nesting resources available close to the roads, thus violating a key assumption of DS modeling and making these values questionable (see Discussion Section 4.1). A GLM showed that P was positively related to the body mass (χ² = 25.02, p < 0.001, Figure 3a) and the average flock size (χ² = 25.31, p < 0.001, Figure 3b) of the species, meaning that the larger and more gregarious species were detected farther from the road than the smaller and less gregarious species, with significant differences among biomes (χ² = 30.38, p < 0.001) despite the large overlap shown by biomes in univariate plots (Figure 3c). This model explained 35.7% of the deviance. When excluding the 19 questionable P values (black dots in Figure 3a,b) from the GLM, the results were similar (body mass: χ² = 62.68, p < 0.001; flock size: χ² = 31.83, p < 0.001; biomes: χ² = 27.19, p < 0.001; deviance explained by the model: 35.13%).
Relationships between Densities and Relative Abundances
Parrot densities (individuals estimated/km²) were obtained by correcting the number of individuals observed by their P obtained through DS modeling, for the 208 study cases with at least 10 visual encounters at distances < 500 m per parrot species. Densities ranged from 0.04 to 97.4 individuals/km² (mean = 5.1 ± 11.2 SD, median = 1.8). We also calculated the relative abundances (number of observed individuals/km) for the same dataset, which ranged from 0.02 to 7.31 individuals/km (mean = 0.57 ± 0.86 SD, median = 0.30). The relative abundances of the species were uncorrelated with their probabilities of detection (Spearman correlation, rs = −0.10, p = 0.15, N = 208).
Despite the large differences in P among study cases, the fact that both densities and relative abundances of parrots varied over >4 orders of magnitude leads to a strong positive correlation between these two estimates of abundance (Spearman correlation of raw data: r_s = 0.83, p < 0.001; linear regression of log-transformed values: r = 0.83, estimate: 0.659 ± 0.031 SE, p < 0.001, adjusted R² = 0.69, N = 208; Figure 4a). This correlation becomes stronger when excluding the 19 densities obtained from the extremely low, questionable P values (linear regression of log-transformed values: r = 0.92, estimate: 0.799 ± 0.025 SE, p < 0.001, adjusted R² = 0.84, N = 189; Figure 4b).
Nearly identical results were obtained when restricting the dataset to study cases with at least 20, 30, 40, and 50 visual encounters at distances < 500 m per species to increase the robustness of DS modeling (r = 0.86-0.91, all p < 0.001), even though the number of study cases was reduced to 120, 74, 65, and 52, respectively. Therefore, estimates of parrot abundances are equivalent whether or not differences in detectability are controlled for.
As suggested in previous works, a way to avoid underestimating parrot species with varying percentages of aural encounters is to estimate the number of unobserved individuals by multiplying the aural encounters by the average flock size of the species obtained in the same survey. This estimated relative abundance index (i.e., (number of observed individuals + number of estimated heard individuals)/km) correlates equally well with densities obtained through DS modeling (linear regression of log-transformed values: r = 0.83, p < 0.001, estimate: 0.725 ± 0.033 SE, adjusted R² = 0.70, N = 208; Figure 4c); thus, its use is recommended to avoid underestimating parrot numbers. As before, the correlation becomes stronger when excluding the 19 questionable densities (r = 0.93, p < 0.001, estimate: 0.881 ± 0.026 SE, adjusted R² = 0.86, N = 189; Figure 4d). This relationship remains similar in a GLM (estimate: 0.825 ± 0.034 SE, χ² = 565.89, p < 0.0001) when controlling for a much smaller effect of flock size (estimate: 0.006, SE: 0.002, χ² = 14.28, p < 0.001), with no significant effects of body mass (χ² = 0.05, p = 0.82), biomes (χ² = 16.93, p = 0.06), or number of visual encounters (χ² = 0.07, p = 0.79). This model explained 87.4% of the deviance.
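The correlations reported above amount to a Spearman test on the raw estimates plus an ordinary regression of the log10-transformed values; a minimal sketch, with hypothetical per-study-case vectors rel_abund and density:

# Spearman correlation of the raw estimates and linear regression of the
# log10-transformed values, as used in the analyses above.
cor.test(rel_abund, density, method = "spearman")
fit <- lm(log10(density) ~ log10(rel_abund))
summary(fit)   # slope (estimate +/- SE), adjusted R^2; r = sqrt(R^2)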
Characteristics of the Species and Surveys Lost When Using Distance Sampling
From the 575 study cases obtained, DS modeling was not possible in 367 (63.8%) because the number of visual contacts was <10. The study cases lost when using DS mostly corresponded to those showing lower relative abundances (individuals/km; χ² = 82.13, p < 0.0001, Figure 5a), with a smaller positive effect of average flock size (χ² = 20.80, p < 0.001). This may be explained by the fact that some common species are highly gregarious and thus can be recorded in high numbers (see the large data dispersion in Figure 5a) but with a low number of flocks encountered, thus not allowing DS modeling. The loss of cases from DS modeling was unrelated to the body mass of the species (χ² = 0.45, p = 0.50) (deviance explained by the model: 30.35%).
Figure 4. Relationship between (a) the relative abundance (individuals/km) and density (individuals/km²) of parrots when including densities obtained from questionable probabilities of detection (black dots) and (b) excluding them, and between (c) the estimated relative abundance (i.e., (number of observed individuals + number of estimated heard individuals)/km) and density (individuals/km²) of parrots when including densities obtained from questionable probabilities of detection (black dots) and (d) excluding them. Densities were obtained through distance sampling modeling for 208 study cases with at least 10 visual encounters at distances < 500 m. Red lines represent the 95% CI of the regression lines.
DS modeling could not be applied to 64 (46.7%) of the 137 species surveyed even when pooling all surveys across world ecoregions, as they did not reach a minimum of 10 visual encounters. The percentage of species excluded varied among realms, the highest being in the Afrotropics (100%, N = 6 species), followed by the Neotropic (33.6%, N = 110 spp), Australasia (25%, N = 16 spp), and Indomalayan (16.7%, N = 6 spp) realms. The species excluded from DS modeling showed a significantly poorer global conservation status (χ² = 7.51, p < 0.01; 72.32% of deviance explained, Figure 5b). DS modeling could not be applied to 34 (34.7%) of the 98 surveys conducted, as they did not include a single species reaching a minimum of 10 visual encounters. The percentage of surveys excluded from modeling also varied among realms, with the highest being in the Afrotropics (100%, N = 16 surveys), followed by the Neotropic (23.3%, N = 73), Indomalayan (16.7%, N = 6), and Australasia (0%, N = 3) realms.
Discussion
Roadside car surveys have been widely recommended for estimating the abundances of large and conspicuous birds that occur at low densities, such as raptors [43]. Recently, this methodology has been applied to parrots, although its strengths and weaknesses have not been properly evaluated [10]. After our experience conducting roadside raptor surveys in a variety of tropical biomes [15,16], we considered this method to be even more adequate for parrots, given that their frequent and loud vocal activity makes them more easily detectable than the more silent raptors. In fact, 85% of our parrot encounters were aurally detected. The loud behavior of parrots largely reduces the detection problems found for raptors in forested biomes [15]. Supporting this, we found that the proportion of aural detections was related to the body mass and gregariousness of the species but not to the biomes they inhabit, which included habitats differing widely in openness, from steppes to rainforests. Therefore, through our large-scale roadside surveys, we were able to record c. 35% of the extant parrot species across the world biomes, from the commonest to the rarest and even Critically Endangered species. The rarest species, as well as those that are common but highly gregarious or patchily distributed, are difficult to survey through walked line transects and point counts because of their low encounter rates [10]. Moreover, we have demonstrated that distance-uncorrected estimates of parrot abundances are strongly correlated with those obtained when using DS modeling, thus providing a good proxy of the actual relative densities of the species. Nonetheless, roadside parrot surveys have several limitations regarding the design and length of surveys and the detectability of the species, which can be addressed as discussed below.
Roadside Parrot Surveys: Caveats, Solutions, and Prospects
As for raptors and other avian taxa [15,43], parrot abundances obtained through roadside surveys can be biased by the spatial distribution of roads and the response of the species to them. Recent studies have shown that coexisting bird species may respond differently to roads, some decreasing but others increasing their abundances close to them, and also differing in their responses between major and minor roads [44,45]. Just as some scavengers and birds of prey may be attracted by roadkills and by the greater availability of prey and perching sites (e.g., power lines, poles) close to roads [15,46], some parrots can be attracted by feeding resources, large trees, and perching sites available close to roads. In fact, we could confirm that most of the extremely low probabilities of detection we obtained corresponded to study cases where parrots were attracted by feeding resources most often available in the gutters of the roads, such as fruiting trees (e.g., Burrowing parrots Cyanoliseus patagonus in Argentina, [35]) or herb seeds (e.g., Galahs Eolophus roseicapilla in Australia, [47]), or by lines of eucalyptus trees and power lines running parallel to roads in deforested areas of Argentina, Paraguay, Uruguay, and Brazil, substrates where Monk parakeets Myiopsitta monachus build their large communal nests [48]. In these few cases, extremely low probabilities of detection did not result from parrots being hard to detect at large distances from roads, but from the fact that they were aggregated around them.
These particular circumstances violate a key assumption of DS modeling, i.e., that animal locations are independent of the line-transect position [38], thus questioning its use, as it may yield inflated densities (see Results Section 3.4 and Figure 4a,c).
On the other hand, some parrots may avoid roads because of human disturbance. This so-called "disturbance effect" may affect even bird abundances obtained from point counts because of the presence of observers [49], so traffic could likewise affect the behavior of parrots. We tried to minimize this disturbance effect by selecting a priori, using recent satellite images, minor paved roads and unpaved roads with little or no traffic, often only accessible using 4 × 4 vehicles. The fact that the relative abundances (individuals/km) of the species were uncorrelated with their distance-dependent probabilities of detection suggests that the less encountered species are actually uncommon (as is also supported by their IUCN Red List evaluations [3]), rather than their abundances being underestimated because they avoid roads and thus remain undetected. Moreover, throughout this work we found that parrots, from the smallest to the largest species, were largely undisturbed by the vehicle, allowing us to approach them at short distances and even take detailed photographs (e.g., [34,35]). This agrees with the perception of high behavioral flexibility of parrots when facing human disturbance (e.g., [18,27,50]). In fact, recent studies have shown that the inter-individual variability of birds in their tolerance to sources of human disturbance such as roads [51] and human presence [52] is related to the relative brain size of the species, and parrots are among the large-brained birds showing the least fear of humans [52]. Nonetheless, further well-designed studies are needed to delve more deeply into these aspects and to evaluate how parrots respond to roads with high traffic intensity.
Another problem of roadside surveys is that habitat composition and configuration near roads may differ from those of the surrounding areas, thus leading to biases in bird abundances [10,53]. The occurrence of these potential biases can be assessed a posteriori by comparing habitat composition along the roads surveyed with the surrounding areas [54] but, ideally, can be largely avoided by carefully selecting the roads a priori using satellite images. In our case, within each survey, we intentionally selected roads crossing both protected and unprotected habitats with different degrees of transformation, as we were interested in surveying whole parrot communities that included habitat-sensitive species but also those that are favored by low-intensity agricultural and urban habitats [17,18,27-29]. In other cases, however, researchers may be interested in surveying a particular species, in which case they should ensure that the selected roads cover and represent the habitats used by this species and not others. Alternatively, they may be interested in species responses to habitat transformation. Road transects can be divided into small sections whose habitats can be measured [43], and thus long surveys crossing fragments of habitats with different degrees of transformation, from pristine to urban areas within the same study area, allow testing changes in parrot abundances in relation to changes in land use [17,18]. The length of the section can be used as a proxy for the size of the habitat patch crossed when acquiring large data sets, thus allowing the effects of habitat transformation to be tested together with patchiness on single-species parrot abundances [18]. The same approach can be translated to multi-species studies, obtaining estimates of total abundance, diversity, and species richness (by simply recording the presence/absence of each species) for each roadside habitat section [15,55]. Another approach is to compare the habitat composition within a buffer centered on each detected parrot with that around random points selected from the same roadside survey, combining field data with remote-sensing tools [55]. These approaches have still been little explored; they have the potential to increase our knowledge of the responses of different parrot species and communities to very large-scale changes in land use and habitat fragmentation, and are urgently needed given the further habitat loss predicted for parrots worldwide [7].
Do We Need to Account for Parrot Detectability?
As for other avian taxa, it is widely assumed that detectability varies among parrot species [10]. However, differences in distance-dependent detectability among parrot species have been little reported [13], and even less is known about which parrot traits explain these differences. Observations of flying parrots recorded from Amazonian rainforest canopy points showed that larger-bodied species were detected at greater distances, and that average flock sizes were negatively related to body mass [56]. Here, analyzing a large data set that includes a variety of species and biomes, we show that not only the distances of detection but also the probabilities of visual detection are positively related to the body mass and gregariousness of a species. Moreover, there are other potential sources of variation in parrot detectability that we could not assess through our large-scale approach. For example, visual (but not aural) detectability may vary within species and biomes due to habitat transformations (it could be higher in agricultural than in forest habitats) and seasonal changes in vegetation structure (it could be higher during the dry season in deciduous tropical dry forests, when most trees lose their leaves).
Breeding phenology may also affect parrot detectability since the gregariousness of some species decreases during the nesting period [10,11] and nesting pairs may be more tied to their nesting sites and thus less mobile and detectable. Therefore, it is important to consider potential seasonal changes in parrot behavior and to account for variation in parrot detectability when performing censuses.
Accordingly, our distance-dependent probabilities of detection (P) were positively related to the body mass and gregariousness of the species and varied among biomes. Even though we relied on a minimum of 10 visual encounters, which can lead to useful but imprecise density estimates [4], the densities obtained were within the ranges obtained for the same parrot genera through DS modeling using walked line transects and point counts [4]. As highlighted in the same review, parrot densities obtained through different methods, even including roost counts, are quite similar when looking at differences among species [4]. This is likely because differences in natural (and/or human-induced) abundances among parrot species [3] are so large (in our study spanning >4 orders of magnitude) that any biases due to differential detectability or other methodological issues are overridden in interspecific comparisons. Thus, perhaps not surprisingly, our results allow us to confirm and generalize previous findings [22], showing a strong correlation between detectability-corrected and uncorrected estimates of parrot abundance at a global scale. Notably, the same correlation holds when increasing the minimum threshold of encounters to increase the robustness of DS modeling and when including estimates of the number of unseen (only-heard) individuals, while it is not affected by the body mass of the species, the biomes, or the number of encounters per species. Therefore, simple estimates of relative parrot abundance (individuals/km) can be used as good proxies of detectability-corrected densities. This does not mean, however, that one method is better than the other, nor that distance sampling is not needed for roadside parrot surveys. The choice should balance different methodological constraints and research objectives, as further discussed below.
Pros and Cons of Distance Sampling
A major challenge for estimating parrot densities is obtaining enough encounters of all species for DS modeling [4]. For example, density estimates could be obtained for only 9 of 17 parrot species after substantial effort conducting walked line transects (accumulating 2,412 km surveyed over 3 years) in two small Amazonian study areas [23]. In our study, 64% of the study cases, 47% of the parrot species, and 35% of all surveys had to be excluded from DS modeling. This occurred despite pooling data from the same ecoregions/countries obtained in different seasons and years, when available, to increase sample sizes, better represent the whole parrot community, and increase the precision of estimates [12], and even though we used the lowest number of visual encounters required for DS modeling [4]. Concerningly, most of the species excluded are threatened or uncommon in the wild, but there are also some common but highly gregarious species, varying among realms. The extreme case is exemplified by the Afrotropic realm, where all study cases, species, and surveys were excluded despite the high survey effort invested (Supplementary S1).
Obviously, the percentage of exclusions from DS modeling would be much higher if we had separated surveys by years or seasons, split them into habitat-category sections [17], or increased the minimum number of encounters to obtain more precise density estimates [13], as many researchers may require to meet their research objectives.
Some procedures have been proposed to solve the problem of insufficient detections for parrot DS modeling. One is to use the records of a coexisting common species to model its probability of detection and use it for estimating the density of a congeneric, similar-sized rare species for which insufficient encounters were obtained [20]. However, in our experience, all species from the same genus (e.g., large macaws Ara, amazon parrots Amazona) are often equally scarce within the same survey, and thus all are unavoidably excluded from DS modeling. Another solution applied is to pool all records from rare species (even from different genera) to estimate a common probability of detection and derive species-specific density estimates [57]. However, these estimates must be taken with caution, as the assumption that the detectability of different species is equivalent may be violated [53].
Rather than forcing somewhat questionable density estimates when species-specific data are lacking, we recommend relying on simple relative abundances (individuals/km) when roadside surveys focus on whole parrot communities that include uncommon species, as they offer abundance estimates equivalent to detectability-corrected densities. Moreover, not recording distances has some advantages. On the one hand, calculating relative abundances is very simple and does not require the statistical skills needed for DS modeling. On the other hand, the fieldwork time saved by not recording distances (in surveys of rich and abundant parrot communities, researchers often must stop the car every few minutes to record them) can be invested in conducting longer roadside surveys, thus better representing the areas and parrot communities surveyed. This may be an important advantage, as parrot surveys are often logistically constrained by climatic conditions and by the time and funds available. Conversely, we recommend DS modeling when researchers focus on one or a few common species, as they can then obtain more precise estimates of abundance by increasing the number of encounters (without paying attention to the rest of the species) and selecting the best-fitting detection functions, as is done with point counts and line transects [13]. Even more importantly, DS modeling allows the calculation of densities that can be carefully extrapolated to the extent of suitable habitat, thus estimating the size of parrot populations, as has been done using point counts on islands [24,58]. A stratified design of large-scale roadside surveys could allow the estimation of population sizes for common parrot species with country- and even continental-level distributions, something that would be logistically unaffordable through point counts and walked line transects.
Finally, as a word of caution, researchers must keep in mind that distance sampling modeling was developed to correct for the imperfect detection of species in census surveys, but that the violation of some assumptions may also generate imperfect results. In the case of parrots, some assumptions of DS modeling are often violated: that all individuals encountered are accurately counted and their distances of detection exactly measured, and that encountered birds do not move while the survey is conducted [10,38,53]. We have shown that the first assumption is violated not only in walked line transects and point counts [23,34] but also in roadside car surveys (see also [22]). In our surveys, 24% of the encounters corresponded to aural contacts of an unknown number of unseen individuals. Concerningly, the proportion of aural contacts was not randomly distributed but varied from 0 to 100% among species, being related to their body mass and gregariousness. As a solution following previous works [12,22-24], we estimated the number of unseen birds by substituting aural contacts with the average flock size of the species obtained from the same survey (this is important, as average flock sizes may vary among seasons and regions). We used the average flock size for consistency with previous works that adopted this solution [22-24] and because it is often reported as a measure of gregariousness (e.g., [13,56]). Given the often right-skewed distribution of flock sizes, researchers could use the median instead of the mean, although the results should not differ markedly. In any case, we recommend incorporating this procedure to avoid the underestimation of parrot numbers in roadside surveys (in our case reaching 23% on average), resulting in relative parrot abundances that strongly correlate with distance-corrected densities. However, incorporating these estimates of unseen individuals into DS modeling is challenging given the difficulties of estimating their distances of detection. Some solutions have been proposed when conducting parrot walked line transects and point counts, such as measuring distances to other objects at a similar distance if the heard parrot/flock was not visible [13,24] or categorizing these estimated distances to unseen parrots into intervals [58]. These estimations require expert observer skills, and researchers must therefore be careful not to introduce distance biases that would affect DS density estimates [38].
Regarding bird movements, DS modeling was conceptually developed as a 'snapshot' method in which animals are ideally 'frozen' while the survey is conducted, but in practice animals often make non-responsive movements (i.e., not disturbed by the observer) [38]. Buckland et al. [53] suggested that this assumption must be relaxed to include flying individuals in avian taxa that spend large proportions of their time in flight, such as seabirds and raptors. This is also the case for parrots. Except for a few low-mobility forest species (e.g., genus Pionites), most parrot species make long daily trips looking for food and moving between foraging, breeding, and roosting sites [10]. In fact, 36% of our parrot encounters corresponded to birds/flocks detected in non-responsive flight. Excluding these records would underestimate parrot abundances, with non-random biases according to the different flight propensities among species. Using walked line transects, Legault et al. [13] found that excluding flying birds caused an underestimation of parrot densities that varied between 7% and 67% among species. In their review of distance sampling approaches and assumptions, Thomas et al. [38] indicated that, in practice, non-responsive movement in walked line-transect surveys is not problematic provided it is slow relative to the speed of the observer, and thus it should be even less problematic for the faster-speed roadside car surveys. Therefore, we support the inclusion of flying parrots in roadside car surveys, as for walked line transects [13], but also suggest that researchers record the behavior of the parrots (perching, foraging, flying) encountered. This may later allow researchers to decide whether to include flying birds in DS estimates [13] and to assess, for example, foraging habitat preferences by restricting records to foraging birds [17].
Researchers should not be discouraged by the limitations of DS modeling applied to roadside car surveys. Rather, they should be aware of how and when its application is feasible for their study species. On the other hand, some analytical advances for estimating parrot abundances [10], such as the use of hierarchical (N-mixture) models [59], have recently been applied to parrot roost counts [60], walked transects, and point counts [61], and have the potential to be used in roadside parrot surveys, as has been done for raptors [16].
Conclusions
While roost counts may allow estimating regional and even global population sizes of some parrot species [11,60,62,63], they are not affordable for most parrot populations and species, and thus estimates of densities are often obtained using point counts or walked line transects [10]. However, these methodologies may fail to record rare and patchily distributed species, a problem that could be solved using large-scale roadside car surveys [10]. Here, compiling roadside car surveys conducted across the world biomes and continents inhabited by parrots, we have assessed how the aural- and distance-dependent probabilities of detection are affected by species traits and biomes, as well as the pros and cons of roadside car surveys with or without DS modeling, providing potential solutions for the problems encountered. We have demonstrated that distance-uncorrected estimates of parrot abundances are strongly correlated with those obtained using DS modeling, thus offering a good proxy for the actual relative densities of the species. This, however, does not mean that one method is better than the other. While DS modeling generally cannot be used when dealing with whole parrot communities, because it results in the exclusion of a high percentage of surveys and species (mostly the uncommon and threatened ones), it may be useful for species-specific studies of common species. As learned from comparisons of other survey methodologies [10,49,59], the choice of the most suitable method is context-dependent. We summarize in Table 1 the strengths and weaknesses of using or not using DS, attending to sampling effectiveness (understood here as the ability of either method to record birds that are present), to methodological constraints, and to the output variables required to reach different research goals. We hope this comprehensive summary will help guide researchers in choosing the best-fitting option for their particular research hypotheses, characteristics of the species studied, and logistical constraints.
Table 1. Comparison of strengths (+) and weaknesses (-) when using distance sampling modeling (DS Yes) or not (DS No) for estimating parrot abundances through roadside surveys, attending to the shortcomings of both methods and the objectives of studies. Equal signs (=) denote similar performance.
"year": 2021,
"sha1": "b02b62dfb5b1b091ec0d9441f8de33ab1afc36cc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-2818/13/7/300/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "488bf870e9af4e8d0f20fc888771990bf18db759",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
Appendix S1
Specimen voucher numbers, identification, and geographic location of samples for this study.
STEP 1 Relationships between variables
Compute correlations between species diversity, SoK and PA size (S1.1)
STEP 2 Negative binomial models
Calculate negative binomial models to determine the effect of SoK on species diversity (S1.2). Use the standardized residuals as site-level relative species diversity for each taxon.
STEP 3 Taxon-specific data
Combine taxon-specific data (SoK and relative species diversity) across all taxa into site-specific values in two ways: 1. Estimate the first principal component of a PCA to generate site-specific PC1 scores (S1.3 and S1.4). 2. Calculate site-level unweighted mean values across taxonomic groups to generate site-specific mean values.
STEP 4 Model selection
Assess the differences between the PCA-generated values and the unweighted mean values to decide what to use when generating individual priority ranks (S1.5)
S1.1 Determining the relationship between variables used to build priority ranks
There was a tendency for the known species diversity to increase as the state of knowledge (SoK) increased (Fig. S1.2). Species diversity also showed a positive association with protected area (PA) size for most taxa, though SoK was weakly related to PA size (Figs S1.3 and S1.4), suggesting that surveying effort by experts was evenly distributed among PAs of various sizes.
S1.2 Negative binomial models
Several negative binomial models (one for each taxon) were fitted to determine the effect of SoK on species diversity while taking protected area (PA) size into account. Species diversity was modelled as dependent on SoK (a categorical variable) and log10-transformed PA size, as well as their interaction. Since there were no significant interaction effects (p > 0.05) between log10 of PA size and SoK, the results below (R output) are shown only for the individual effects of the predictor variables. SoK was a significant predictor (p < 0.05) of species diversity for all taxa, while PA size was significant for most taxa (excluding amphibians and carnivorans). A significant association with PA size likely reflects biological reality (i.e., more species in larger areas), while a significant association with SoK likely reflects bias due to sampling effort. Negative binomial models were fitted in R (R Development Core Team 2019) with the function glm.nb from the MASS package (Venables & Ripley 2002).
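A minimal sketch of this step, assuming a data frame df with hypothetical columns richness, sok (categorical) and pa_size; the interaction term is dropped here, as in the reported final models:

library(MASS)
# Negative binomial model of species diversity on SoK and log10(PA size).
fit <- glm.nb(richness ~ sok + log10(pa_size), data = df)
summary(fit)
rel_div <- rstandard(fit)  # standardized residuals = relative species diversity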
S1.3 Species diversity by state of knowledge (SoK)
In order to account for bias due to sampling effort (i.e., differences in SoK), we recalculated negative binomial models for each taxon in which species diversity was dependent solely on SoK. We then used the standardized residuals from those models to represent relative species diversity (Fig. S1.5). We performed principal components analysis (PCA) on the standardized residuals of species diversity vs. SoK. The first two principal components captured 40.7% and 15.2% of the variation, respectively. All taxa loaded in the same direction, though not to the same degree, as reptiles and bats showed particularly low loadings (Fig. S1.6). Sites that tended to be more diverse than expected for their SoK for one taxon also tended to be more diverse for other taxa.
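Combining the taxon-specific residuals into site-specific scores (Step 3) then reduces to a PCA or a row mean; resid_mat below is a hypothetical sites-by-taxa matrix of the standardized residuals:

# PCA on the standardized residuals; PC1 scores give a site-level
# synthesis of relative species diversity across taxa.
pca <- prcomp(resid_mat, center = TRUE, scale. = TRUE)
summary(pca)                 # proportion of variance per component
pc1_scores <- pca$x[, 1]
# Alternative: unweighted mean across taxonomic groups
mean_scores <- rowMeans(resid_mat)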
Partitioning the variance among variables eliminated the need for a value judgement about which taxa were more important at the cost of others.
State of knowledge (SoK)
The first and second principal components captured 61.4% and 12.7% of the variation, respectively, with all taxa loading in the same direction, though not to the same degree (Fig. S1.7). The first principal component alone could be used to build a ranking based on SoK, but since it is not justified to prefer one taxon over another, it may be more desirable to weight all taxa equally. [...] The second component accounted for 15.8%, with forest types loading in several different directions (Fig. S1.9). This was probably because the forests varied in their makeup. Since they all showed non-negative rates of loss, we simply looked at total forest loss between the two time periods (1996-2006 and 2006-2016). The first principal component captured 52.3% of the variation, with the remainder in the second principal component.
Phase-locking matter-wave interferometer of vortex states
Matter-wave interferometers of ultracold atoms with different linear momenta have been extensively studied in theory and experiment. A vortex matter-wave interferometer with different angular momenta is applicable as a quantum sensor for measuring magnetic fields, rotation, geometric phases, etc. Here we report the experimental realization of a vortex matter-wave interferometer by coherently transferring optical angular momentum to an ultracold Bose condensate. We use the angular interference technique to measure the relative phase of two vortex states. For a lossless interferometer with atoms populating only two spin states, the difference between the relative phases in the two spin states is locked to π. We also prove the robustness of this out-of-phase relation, which is not sensitive to the angular-momentum difference between the two vortex states, the constituents of the Raman optical fields, or the expansion of the condensate. The experimental results agree well with the calculation from the unitary evolution of the wave packet in quantum mechanics. This work opens a new way to build a quantum sensor based on vortex matter-wave interference.
Introduction
Interference is fundamental to wave dynamics and quantum mechanics. Interferometry represents a unique way to probe subtle changes of physical parameters by precisely measuring the resultant tiny relative phase shifts. Matter-wave interferometry, especially that realized in ultracold atomic gases, opens a pathway to extract the relative phase between coherent constituents traversing different paths, and holds promise for applications in both practical precision measurement and fundamental quantum research [1]. As in the original optical interferometer, where the beam splitter (BS) plays a central role, a variety of matter-wave interferometers have emerged employing different mechanisms to realize coherent splitting and recombination of the wave packets, including the double-well potential [2,3], Bragg scattering [4], optical lattices [5] and Stern-Gerlach separation [6,7], to name a few. As mentioned above, matter-wave interferometers of ultracold atoms with different linear momenta have been extensively studied in theory and experiment. Another fundamental type of matter-wave interferometer would be a vortex matter-wave interferometer with different orbital angular momenta (OAMs). This proposal could be implemented by transferring the OAM of a laser beam to cold atoms through an optical transition, thanks to recent developments both in vortex light beams carrying definite OAM [8-10] and in their coherent interaction with cold atoms [11-14].
Recently, the vortex matter-wave interferometer has been theoretically proposed as applicable to measuring rotation, magnetic fields, interatomic interactions, geometric phases, etc. [15-18]. Two different vortex states accumulate different phases with respect to an external rotation of the system, so a compact and stable quantum gyroscope without BS and mirror is feasible using this scheme [16,17]. The two interfering vortex states can overlap at zero relative velocity; the resulting long interrogation time can then enhance the measurement precision to extract the subtle interatomic interaction [15,18]. Furthermore, OAM states offer a high-dimensional Hilbert space to obtain extra security and dense coding of quantum information [19,20], providing the possibility of using vortex-state superpositions in quantum gases as qubits [21,22]. Vortex interference patterns in ultracold atoms have been used to measure the winding number of the vortex state [11,12,23]. However, the vortex matter-wave interferometer is yet to be explored, where the relative phase between the two interfering vortex states should be quantitatively determined.
Here we report the first experimental realization of a vortex matter-wave interferometer in a two-spin Bose-Einstein condensate. We measure the relative phase between the two interfering vortex states by analyzing the angular interference fringes. A pair of Raman laser beams with an OAM difference, as one BS, produces the spin-dependent vortex states, and a radio-frequency (RF) pulse, as another BS, combines two different vortex states into one spin state. After producing two lossless BSs with atoms populating only the two spin states, even though the phase of the interference in each spin state fluctuates, the interferences in the two spin states exhibit a constant π-phase difference. This out-of-phase relation is robust: it is independent of the angular-momentum difference between the two interfering vortex states, the constituents of the Raman optical fields, and the expansion of the condensate. The experimental results agree well with the calculation from the unitary evolution of the wave packet passing through the lossless BSs in quantum mechanics.
Results
Scheme of the interferometer. The vortex matter-wave interferometer is schematically presented in Fig. 1. The input vortex state with an OAM number of l₁ is in the spin state |↓⟩ of the condensate, which can be denoted as |↓⟩|l₁⟩. A two-photon Raman pulse composed of a pair of vortex laser beams (with OAM numbers L₁ and L₂, respectively), as BS1, produces the vortex state |l₂⟩ in the spin state |↑⟩, where l₂ = l₁ + (L₁ − L₂). After BS1, the quantum state can be written as |↓⟩|l₁⟩ + |↑⟩|l₂⟩. An RF pulse, as BS2, transfers atoms back and forth between the two spin states, generating the interference between the two vortex states |l₁⟩ and |l₂⟩ in each spin state. After BS2, the quantum state becomes |↑⟩(|l₁⟩ + e^{iφ↑}|l₂⟩) + |↓⟩(|l₁⟩ + e^{iφ↓}|l₂⟩). Here we neglect the global phase in each spin state. φ↑ and φ↓ are the relative phases between the two vortex states in the spin states |↑⟩ and |↓⟩, respectively. Finally, a Stern-Gerlach magnetic field pulse projects the output quantum state onto the two spin states, i.e., the superposition state |l₁⟩ + e^{iφ↑}|l₂⟩ in |↑⟩ and |l₁⟩ + e^{iφ↓}|l₂⟩ in |↓⟩. The value of φ↑,↓ can subsequently be extracted from the interference pattern in the two spin states. Suppose that the two-photon Raman and RF induced transitions are lossless, i.e., atoms populate only the two spin states; then the two BSs can be written as the unitary operators U_Raman (Eq. 1) and U_RF (Eq. 2) of quantum mechanics [21,22,24], with pulse areas θ_R = Ω_R T_R/2, where Ω_R and T_R are the Rabi frequency and pulse period of the optical Raman lights, and θ_RF = Ω_RF T_RF/2, where Ω_RF and T_RF are the Rabi frequency and pulse period of the RF field. Δφ_R and Δφ_RF are the phases acquired during the Raman and RF induced transitions, respectively. U_Raman transfers the OAM as well as the atomic population between the two spin states, while U_RF transfers only the atomic population. As seen in Fig. 1, the input state is (|↓⟩, |↑⟩)ᵀ = (u₁₀|l₁⟩, 0)ᵀ, where u₁₀ is the spatial wave function of the initial state. Using Eqs. (1) and (2), we can obtain the output state of the interferometer (Eq. 3), where Δφ = Δφ_R − Δφ_RF. In the following, we set Δφ and φ↑,↓ in the range (−π, π). Then φ↑ = Δφ and φ↓ = Δφ + (2n + 1)π, where n = −1, 0 (Eq. 4). We obtain the characteristic properties of the lossless vortex matter-wave interferometer: (1) the two BSs can be considered as one lossless BS represented by a unitary operator U = U_RF U_Raman; (2) the interferences in the two spin states have a π-phase difference. Eq. (4) indicates that, although the relative phase (φ↑ or φ↓) in each spin state can fluctuate due to external coupling with the optical Raman and RF pulses, the interferences in the two spin states have a constant π-phase difference.
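The explicit matrices of Eqs. (1)-(4) did not survive extraction; the block below is a reconstruction consistent with the definitions above, written under one standard phase convention for a lossless two-level beam splitter. The sign and phase conventions are an assumption rather than taken from the source, but the π-phase-difference conclusion does not depend on them.

U(\theta,\Delta\phi)=\begin{pmatrix}\cos\theta & -i\,e^{-i\Delta\phi}\sin\theta\\ -i\,e^{i\Delta\phi}\sin\theta & \cos\theta\end{pmatrix},\qquad \theta_{R}=\tfrac{1}{2}\Omega_{R}T_{R},\;\;\theta_{RF}=\tfrac{1}{2}\Omega_{RF}T_{RF},

with U_Raman = U(θ_R, Δφ_R) (also shifting |l₁⟩ → |l₂⟩ on the spin flip) and U_RF = U(θ_RF, Δφ_RF). Acting with U_RF U_Raman on (u₁₀|l₁⟩, 0)ᵀ gives, up to a global phase in each spin component,

|\psi_{\rm out}\rangle\propto|{\uparrow}\rangle\big(\sin\theta_{RF}\cos\theta_{R}\,|l_1\rangle+e^{i\Delta\phi}\cos\theta_{RF}\sin\theta_{R}\,|l_2\rangle\big)+|{\downarrow}\rangle\big(\cos\theta_{R}\cos\theta_{RF}\,|l_1\rangle-e^{i\Delta\phi}\sin\theta_{R}\sin\theta_{RF}\,|l_2\rangle\big),

so that φ↑ = Δφ while φ↓ = Δφ ± π for any pulse areas, reproducing Eq. (4).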
The relative phase can be extracted by analyzing the angular interference pattern (see related calculations in Methods). θ↑ and θ↓ are defined as the azimuthal angles of the maximum interference fringes in the two spin states (see Fig. 3), respectively. With Δl = l₁ − l₂, the azimuthal angle between two adjacent maximum interference fringes is equal to 2π/|Δl|. Setting θ↑,↓ ∈ (−π/|Δl|, π/|Δl|), we obtain the relation of the interference fringes, Δθ = |θ↑ − θ↓| = π/|Δl| (Eq. 5). Considering φ↑,↓ = −Δl θ↑,↓ (see Methods), Eq. (5) exhibits the out-of-phase interferences of Eq. (4). In the following, we experimentally demonstrate the relation of Eq. (5). It is noted that, compared to linear-momentum states, two vortex states can overlap at zero relative velocity and interfere with high stability. This unique character provides a convenient way to probe the interference pattern in each spin state.
Preparation of a lossless interferometer. We prepare two lossless BSs with atoms populating only the two spin states, as shown in Fig. 2. A ⁸⁷Rb condensate with an atom number of N = 1.2(1) × 10⁵ is produced in a nearly spherical optical dipole trap with a trapping frequency ω = 2π × 77.5 Hz [12,25]. Atoms initially populate the spin state |↓⟩ = |F = 1, m_F = −1⟩ with zero OAM, l₁ = 0. Two laser beams copropagate across the ultracold atoms, transferring the relative OAM of the two laser beams (L₁ = −2 and L₂ = 0) to the condensate in the two-photon Raman induced transition and producing the vortex state |l₂ = −2⟩ in the spin state |↑⟩ = |F = 1, m_F = 0⟩ (Fig. 2(a)). In a previous work [12], we obtained spin-orbital-angular-momentum coupling (SOAMC) with an adiabatic process in the trap. Here we apply the optical coupling during the expansion of the condensate after a time of 8 ms (see the time sequence in Fig. 2(b)), enlarging the atomic cloud to increase the coupling strength. The period as well as the power of the Raman and RF pulses are selected to obtain a high interference visibility (see Methods).
The ⁸⁷Rb atom has three spin states |m_F = −1, 0, 1⟩ in the ground state. To produce the lossless Raman and RF induced transitions in which atoms populate only the two spin states |↓⟩ and |↑⟩, a bias magnetic field introduces a large quadratic Zeeman shift ω_q = 2π × 5.52 kHz and a blue detuning δ is applied. This induces a large detuning (δ + 2ω_q/2π) of the spin state |m_F = 1⟩, resulting in a negligible population in it. In Fig. 2(c), we measure the atom numbers in the three spin states versus the two-photon detuning δ of the Raman pulse. The atom ratio N₊₁/N decreases with increasing blue detuning and reaches a minimum at δ ≈ 9 kHz, where N is the total atom number and N₊₁ is the atom number in the spin state |m_F = 1⟩. In the range δ ∈ (8 kHz, 12 kHz), N₊₁/N < 0.03. Considering that the atom number in the spin state |↑⟩ also decreases with increased detuning, we generally select a detuning at δ ∈ (8 kHz, 10 kHz). In Fig. 2(d), we measure the atom ratio N₊₁/N versus the detuning δ of the RF pulse. N₊₁/N is smaller than 0.05 in the range δ ∈ (9 kHz, 11 kHz). So we can prepare two lossless BSs by choosing the Raman detuning in the range δ ∈ (8 kHz, 10 kHz) and the RF detuning in the range δ ∈ (9 kHz, 11 kHz).
Measurement of the out-of-phase interferences. In Fig. 3, we analyze the angular interference fringe between the two vortex states |l₁ = 0⟩ and |l₂ = −2⟩ in the two spin states. As one exemplary measurement, the interference patterns in the two spin states are imaged as shown in Fig. 3(a) and (b), respectively. In cylindrical coordinates (r, φ, z), r = √(x² + y²) is the radius and φ is the azimuthal angle in the x-y plane. The cold atoms are confined at z = 0. θ↑ as well as θ↓ is defined as the azimuthal angle of the maximum interference fringe in each spin state. To extract θ↑,↓, we use a cosine function of the form OD(φ) = OD₀ + A cos[Δl(φ − θ↑,↓)] to fit the angular interference fringe in Fig. 3(c) and (d), where Δl = l₁ − l₂ = 2, OD is the optical density with a bias OD₀, and A is the amplitude. It is noted that, to optimize the fitting, two circles with φ ∈ (0, 4π) are plotted. Because the azimuthal angle between two adjacent maximum interference fringes is equal to 2π/|Δl|, θ↑,↓ is limited to the range (−π/|Δl|, π/|Δl|). We then obtain θ↑ = 0.7495, θ↓ = −0.7831 and Δθ = 1.5326 ≈ π/2. The black solid circle in Fig. 3(a) or (b) schematically presents the interference path. In order to accurately determine the values of θ↑,↓, we plot 15 interference fringes with a separation of 1 pixel (1 pixel ≈ 6.4 µm) along the radius r. Then θ↑,↓ as well as A are plotted versus r. We take the values of θ↑,↓ at which A is maximum as the final values. See Supplementary Note 1 for details. Despite the shot-to-shot fluctuation of the azimuthal angle θ↑ (or θ↓) in each spin state (Fig. 3(e)), the difference between the two azimuthal angles remains constant, i.e., Δθ = |θ↑ − θ↓| ≈ π/2 (Fig. 3(f)). This is evidence of the out-of-phase interferences in the two spin states. The Raman and RF pulses transfer their phases (Δφ_R and Δφ_RF) to the two spin states during their interactions with the atoms. The shot-to-shot fluctuation of Δφ_R and Δφ_RF results in the variation of the relative phase Δφ (see Eq. 3), thus causing the randomness of θ↑ (or θ↓). We can calculate Δφ as well as φ↑,↓ from each measurement of θ↑,↓ according to the relations φ↑ = Δφ = −Δl θ↑ and φ↓ = −Δl θ↓ (see Methods). For the measurement in Fig.
3(a, b), φ↑ = Δφ = −1.4990, φ↓ = 1.5662 and φ↓ − φ↑ = 3.0652. Then |φ↓ − φ↑| ≈ π holds for the six measurements in Fig. 3(e, f), which also demonstrates the out-of-phase relation. We can calculate the interference patterns with the measured value of Δφ. The measured values of Δφ and the calculated interference patterns are shown in Supplementary Note 2.
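The cosine fit described above can be reproduced with a nonlinear least-squares call; phi and od below are hypothetical vectors of azimuthal angle and optical density sampled along one fringe circle, and the starting values are illustrative.

# Fit OD(phi) = OD0 + A*cos(dl*(phi - theta)) with dl fixed (here 2),
# then read theta, the azimuthal angle of the maximum fringe.
dl  <- 2
fit <- nls(od ~ od0 + A * cos(dl * (phi - theta)),
           start = list(od0 = mean(od), A = diff(range(od)) / 2, theta = 0))
coef(fit)["theta"]   # wrap into (-pi/dl, pi/dl) if needed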
In Fig. 4, we demonstrate the out-of-phase interferences for different values of Δl. We obtain various values of Δl by constructing different configurations of the optical Raman pulse, i.e., (...) for Δl = 2, and (L₁ = −2, L₂ = 1) for Δl = 3. The winding number L₁,₂ of the optical OAM is controlled by a spatial light modulator (SLM). θ↑,↓ as well as Δθ are measured using the same method as in Fig. 3. A collection of the measurements is shown in Fig. 4. The theoretical calculation of Eq. (5), Δθ = π/|Δl|, agrees well with the experimental results. The error bar is the standard deviation of six measurements.
In Fig. 5, we demonstrate the out-of-phase interferences for different constituents of the Raman pulse and during the expansion of the condensate. Eq. (5) indicates that Δθ depends only on |Δl|, not on the constituents of the Raman pulse. This is analogous to a BS whose properties do not depend on its composition. In Fig. 5(a), different constituents of the Raman pulse are applied, i.e., (...) and (L₁ = 0, L₂ = −2). Δθ remains constant, Δθ = π/2. This constant value also holds for different expansion times of the condensate, as indicated in Fig. 5(b).
Discussion
In conclusion, we report the first experimental realization of a vortex matter-wave interferometer in ultracold quantum gases. By producing a lossless interferometer, we demonstrate the out-of-phase relation for the interferences in the two spin states. We further demonstrate the robustness of this out-of-phase relation, which is independent of the angular-momentum difference between the two interfering vortex states, the constituents of the Raman optical fields, and the expansion of the condensate. The experimental results agree well with the calculation from the unitary evolution of the wave packet in quantum mechanics. In the current experiments, despite the fluctuation of the interference phase in each spin state, we show the ability to measure the relative phase between the two vortex states for each measurement. In the future, we will obtain a controllable and stable relative phase using optical and RF phase-locking techniques, which is critical to realizing a phase-preserving interferometer [3,6]. The vortex matter-wave interferometer is a good candidate for building a quantum sensor to measure rotation, magnetic fields, interatomic interactions, and geometric phases [15-18].
Methods
Experimental setup. We produce a spherical ⁸⁷Rb condensate using the combination of optical forces and gravity, as in Refs. [12,25]. The trapping frequency is ω = 2π × 77.5 Hz. The OAM number is a good quantum number in a system with rotational symmetry. The atom number is N = 1.2(1) × 10⁵ and the temperature is T ≈ 50 nK. The cold atoms initially populate the spin state |↓⟩ = |F = 1, m_F = −1⟩ with zero OAM, l₁ = 0. Two laser beams with different OAMs (L₁ and L₂) copropagate across the ultracold atoms, transferring the relative OAM of the two laser beams (ΔL = L₁ − L₂) to the condensate in the two-photon Raman induced transition while suppressing the transfer of linear momentum. A pair of Helmholtz coils produces a bias magnetic field B₀, which provides the quantization axis and a large quadratic Zeeman shift ω_q = 2π × 5.52 kHz of the Rb ground spin states. A pair of anti-Helmholtz coils produces a pulse of a gradient magnetic field ∂B/∂r to spatially separate the different spin states. The probe beam counterpropagates with the Raman beams, detecting the density distribution of the condensate in the r-φ plane.
After a TOF of 8 ms, the condensate is coupled by a pair of two-photon Raman lights (L₁ and L₂) followed by an RF pulse. The atom size is about 10 µm. For the Raman laser beams, the waist is about 70 µm and the power is about 30 mW. We use the tune-out wavelength λ = 790.02 nm for the two Raman beams, at which the ground spin manifold of the Rb atom experiences no scalar ac Stark shift. This ensures that any vortex structure observed in the condensate is produced by the OAM transfer, not by the trapping effect of the vortex laser beam. The period of the Raman as well as the RF pulse is 60 µs. The Rabi frequency of the optical Raman fields is spatially dependent. The Rabi frequency of the RF field is 2π × 1.67 kHz, which is determined by measuring the Rabi oscillation of the atom numbers in the two spin states. The period as well as the power of the Raman and RF pulses are selected to obtain a high interference visibility. A gradient magnetic field (∂B/∂r) is applied to spatially separate the different spin states. The spin-resolved density is probed with a TOF of 20 ms.
Magnetic field calibration. The absolute value and stability of the detuning δ are mainly determined by the bias magnetic field B₀. We calibrate the magnetic field by adiabatically coupling the ground spin states of F = 1 with an RF passage. We scan the RF signal from 6.090 MHz to different values in 50 ms and simultaneously record the populations of the three spin states. The Hamiltonian of the system dressed by the RF signal is (...), where (...)/2 is the effective resonant position, which is set by the bias magnetic field B₀, and Ω_RF is the coupling strength of the RF signal.
FIG. 1. Scheme of the vortex matter-wave interferometer.
Fig. 1(b) shows an exemplary interferometer with l₁ = 0, L₁ = −2 and L₂ = 0. The vortex states in the two spin states are imaged at three steps of the interferometer: at the input, between the two BSs, and at the output. The spin-resolved atomic images are taken with the help of a Stern-Gerlach magnetic field and a time of flight (TOF) of 20 ms. The petal-like interferences between |l₁ = 0⟩ and |l₂ = −2⟩ are present in the two output spin states.
FIG. 2. Preparation of two lossless BSs (the two-photon Raman pulse and the RF pulse). (a) Energy level diagram. Two laser beams couple the two spin states |↑⟩ = |F = 1, m_F = 0⟩ and |↓⟩ = |F = 1, m_F = −1⟩. δ is the two-photon detuning. ω_q = 2π × 5.52 kHz is the quadratic Zeeman shift. An RF pulse also couples the two spin states |↓⟩ and |↑⟩. (b) The time sequence of the interferometer. After a TOF of 8 ms, the condensate is coupled by a pair of two-photon Raman lights (L₁ and L₂) followed by an RF pulse. Then, 1 ms later, a gradient magnetic field (∂B/∂r) is applied to spatially separate the different spin states. The spin-resolved atomic density is probed with a TOF of 20 ms. (c) Atom ratio N₊₁/N versus the detuning δ of the Raman pulse. N is the total atom number and N₊₁ is the atom number in the spin state |m_F = 1⟩. The error bar is the standard deviation of several measurements. Here the RF pulse is absent. The insets show exemplary atomic images with δ = 0, 4, 8 kHz, respectively. The solid curve is the theoretical calculation for three-level atoms coupled with a Raman pulse. (d) Atom ratio N₊₁/N versus the detuning δ of the RF pulse. The insets show exemplary atomic images with δ = 7, 9, 11 kHz, respectively. The solid curve is the theoretical calculation for three-level atoms coupled with an RF pulse.
FIG. 3. Out-of-phase interferences in the two spin states for Δl = 2. (a) and (b) show exemplary interference patterns between the two vortex states |l₁ = 0⟩ and |l₂ = −2⟩ in the two spin states, respectively. r is the radius, φ is the azimuthal angle, and θ↑ as well as θ↓ is the azimuthal angle of the maximum interference fringe. (c) and (d) indicate the angular interference fringes along the paths denoted by the black solid circles in (a) and (b), respectively. A cosine function (the red solid curve) is used to fit the interference fringe to extract θ↑,↓, giving θ↑ = 0.7495, θ↓ = −0.7831 and Δθ = 1.5326. (e) shows the values of θ↑ (black squares) and θ↓ (red circles) for six measurements. θ↑,↓ fluctuates shot to shot. (f) shows a constant difference between the two azimuthal angles, Δθ = |θ↑ − θ↓| ≈ π/2. The inset shows the corresponding images of the interference patterns in the two spin states. The first measurement is taken as the example in (a) and (b).
FIG. S2. Determination of the azimuthal angle θ↓ of the maximum interference fringe in the spin state |↓⟩. (a) shows the interference pattern. 15 black circles schematically indicate the interference paths with a separation of 1 pixel. (b) shows an exemplary interference fringe with r = 10 pixels. The red solid curve indicates the numerical fitting with a cosine function of Eq. (S1), which gives θ↓ = −0.7831 at r = 10 pixels. (c) and (d) show θ↓ and A as a function of r, respectively. As indicated by the vertical dashed line, when r = 10 pixels, A = A_max = 0.39747 and θ↓ = −0.7831. So we set θ↓ = −0.7831.
"year": 2022,
"sha1": "8eec0835f38b91ee4480a1bc491cd6983e82d1f8",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41534-022-00585-5.pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "56e633230614fb264544d17fff81ae0016889952",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Identification of certain bioactive compounds with anthelmintic properties in Azadirachta indica and Clerodendrum viscosum
This study was undertaken to detect certain bioactive compounds with anthelmintic properties in Azadirachta indica and Clerodendrum viscosum using high-performance liquid chromatography (HPLC). In the HPLC analysis, A. indica showed peak retention times similar to those of standard phenolic compounds, including tannic acid (A. indica retention time 3.270 min, STD retention time 3.271 min) and pyrogallol (A. indica retention time 3.948 min, STD retention time 3.795 min). Benzoic acid (C. viscosum retention time 6.092 min, STD retention time 6.067 min), tannic acid (C. viscosum retention time 3.322 min, STD retention time 3.271 min) and quercetin (C. viscosum retention time 4.967 min, STD retention time 4.222 min) were detected in the leaf of C. viscosum. Most of these constituents have well-known anthelmintic roles. Thus, it can be concluded that Azadirachta indica and Clerodendrum viscosum leaves contain bioactive compounds with anthelmintic properties.
Introduction
For many years, the use of herbal drugs has been popular due to their easy availability, therapeutic potential, few side effects and low cost. At present, nearly 80% of the world population relies on plant-based drugs for their health care needs (Sermakkani and Thangapandian, 2012). Various bioactive amines play an important role in the development of novel compounds, which may be crucial for maintaining a healthy society. Human civilization has maintained an intimate relationship with plants from time immemorial, depending on plants and other natural sources for well-being and survival (Shil et al., 2014). Many plants available in nature are yet to be explored for their medicinal potential (De et al., 2013). The continuing development of resistance underscores the importance of developing new semi-synthetic and synthetic compounds. Novel molecules from plant sources have been instrumental in the development of structurally modified compounds, which assist greatly in the development of the modern therapeutic system. The screening of plant extracts is an innovative strategy to find therapeutically active compounds in many plant species (Zhang et al., 2013). Hence, high-performance liquid chromatography (HPLC) associated with particular detection techniques has become a sophisticated means for the analysis of various compounds. Clerodendrum viscosum Linn. (Family: Verbenaceae) is a shrub having a quadrangular stem and large ovate leaves with acuminate apex, entire or denticulate margin, cylindrical petiole and hairy surface (Vinodh et al., 2013). The plant is 0.9-2.4 m in height; the flowers are whitish-pink with long pubescent pedicels in stalked cymes, and the fruits are four-lobed drupes of 8 mm in diameter. This plant is common throughout India, Bangladesh, Myanmar, Thailand and Indonesia. The leaf and root have been used in Indian traditional medicine for the treatment of asthma, fever, bronchitis, skin diseases, epilepsy, inflammation, tumors, worm infestation and snake bite (Kirtikar, 1971). The fresh leaf juice is used as a vermifuge, bitter tonic and febrifuge in malarial fever, especially in children (Bhattacharjee et al., 2011; Prakash et al., 2011). The various parts of the plant are reported to have many biological activities, such as antimicrobial (Kirtikar et al., 1971), cytotoxic (Oly et al., 2011), anthelmintic, antioxidant (Rahman et al., 2011) and antinociceptive activities. Neem has been extensively used in Ayurvedic, Unani and Homoeopathic medicine and has become a cynosure of modern medicine. Neem elaborates a vast array of biologically active compounds that are chemically diverse and structurally complex. More than 140 compounds have been isolated from different parts of neem. All parts of the neem tree (leaves, flowers, seeds, fruits, roots and bark) have been used traditionally for the treatment of inflammation, infections, fever, skin diseases and dental disorders. Medicinal utilities have been described especially for neem leaf. Neem leaf and its constituents have been demonstrated to exhibit immunomodulatory, anti-inflammatory, antihyperglycaemic, antiulcer, antimalarial, antifungal, antibacterial, antiviral, antioxidant, antimutagenic and anticarcinogenic properties. A literature survey revealed that, to date, no work has been reported on the HPLC analysis of methanol extracts of Clerodendrum viscosum and Azadirachta indica leaves. Therefore, in the present study, it was thought worthwhile to isolate and characterize the bioactive phytochemical compounds from methanol extracts of
the plant with the help High Performance Liquid Chromatography (HPLC) technique.
Collection of plants
The plants of A. indica and Clerodendrum viscosum were collected from different areas of Savar upazila, Bangladesh, during November-December 2016.
Extraction of the plant
The collected plant leaves were sun dried and then oven-dried at below 40ºC for two days until fully dry. The fully dried leaves were ground to a powder with a suitable grinder. The whole powders were subjected to cold extraction with methanol and kept for a period of three (03) days with occasional shaking and stirring. The whole mixture then underwent a coarse filtration through a piece of clean, white cotton material, followed by filtration through Whatman filter paper. The filtrate (methanol extract) obtained was evaporated in a rotary evaporator (Bibby RE-200, Sterilin Ltd., UK) at 5 to 6 rpm and 68ºC, which rendered a gummy concentrate of greenish-black color. This gummy concentrate was designated as the crude extract. The crude extract was then dried in a freeze drier and preserved at 4ºC (Shobana et al., 2009).
Phytochemical analysis
All the extracts were subjected to qualitative phytochemical screening to identify the presence of alkaloids, flavonoids, carbohydrates, gums, reducing sugars, saponins, steroids, tannins and terpenoids using established methods. For HPLC, plant powder (5 g) was dissolved in 50 mL acidified deionized water (pH 2, achieved by the addition of 0.2 M HCl). The solution was passed through preconditioned C18 cartridges (3 mL × 500 mg) purchased from Agilent Technologies. The cartridges were preconditioned by sequentially passing through 3 mL each of methanol and acidified water (pH 2) at a drop-wise flow rate. The aqueous extract solution (10 mL) was then applied to the preconditioned cartridges at a drop-wise flow rate to ensure efficient adsorption of the phenolic compounds. The adsorbed phenolics were then eluted from the cartridges with 1.5 mL of 90% v/v methanol/water solution at a drop-wise flow rate. The entire extraction procedure was repeated three times. The eluent was collected and stored at 22 °C before analysis on an HPLC system.
Results and Discussion
In the present study, methanol extracts were made from the leaves of A. indica and Clerodendrum viscosum, in which various bioactive compounds were detected by HPLC (Table 1). A. indica and Clerodendrum viscosum contain tannic acid, pyrogallol, benzoic acid and quercetin. Azadirachta indica (neem) leaves contained phenolic compounds such as tannic acid (A. indica retention time 3.270 min, standard retention time 3.271 min) and pyrogallol (A. indica retention time 3.948 min, standard retention time 3.795 min) (Figure 1). On the other hand, Clerodendrum viscosum leaves contained benzoic acid (C. viscosum retention time 6.092 min, standard retention time 6.067 min), tannic acid (C. viscosum retention time 3.322 min, standard retention time 3.271 min) and quercetin (C. viscosum retention time 4.967 min, standard retention time 4.222 min) (Figure 2). Tannic acid, pyrogallol and quercetin have anthelmintic properties, acting against earthworms (Pheretima posthuma), tapeworms (Raillietina spiralis) and roundworms (Ascaridia galli) (Haque Rabiu and Mondal Subhasish, 2011). The preliminary phytochemical screening of Clerodendrum viscosum using generally accepted laboratory techniques for qualitative determination showed the presence of steroids, saponins, phenolic compounds and tannins (Chandrashekar and Rao, 2018), which were almost similar to our study.
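The identifications above rest on matching each sample peak's retention time to that of a co-run standard. The short Python sketch below illustrates this matching logic; the tolerance window and the function name are illustrative assumptions, not part of the original method (a wide window is used here because the quercetin peak differs from its standard by about 0.75 min, as reported above).

```python
# Reference standard retention times (min), taken from the comparisons above
STANDARDS_RT = {
    "tannic acid": 3.271,
    "pyrogallol": 3.795,
    "quercetin": 4.222,
    "benzoic acid": 6.067,
}

def identify_peaks(sample_rts, tolerance=0.8):
    """Assign each detected peak to the nearest standard within `tolerance` min."""
    assignments = {}
    for rt in sample_rts:
        name, std_rt = min(STANDARDS_RT.items(), key=lambda kv: abs(kv[1] - rt))
        assignments[rt] = name if abs(std_rt - rt) <= tolerance else "unassigned"
    return assignments

# Peaks detected in the C. viscosum chromatogram (values reported above)
print(identify_peaks([3.322, 4.967, 6.092]))
```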
Conclusions
It can be concluded that Azadirachta indica and Clerodendrum viscosum leaves possess bioactive compounds with anthelmintic properties, which were accurately detected by HPLC.
Table 1. Bioactive compounds detected in methanol extracts of Azadirachta indica and Clerodendrum viscosum leaves.
Study of the Stability of Wine Samples for 1H-NMR Metabolomic Profile Analysis through Chemometrics Methods
Wine is a temperature-, light-, and oxygen-sensitive product, so its physicochemical characteristics can be modified by variations in temperature and time when samples are sampled, transported, and/or analyzed. These changes can alter its metabolomic fingerprint, impacting further classification tasks and quality/quantitative analyses. For these reasons, the aim of this work is to compare and analyze the information obtained by different chemometric methods used in a complementary form (PCA, ASCA, and PARAFAC) to study 1H-NMR spectra variations of four red wine samples kept at different temperatures and time lapses. In conjunction, distinctive changes in the spectra are satisfactorily tracked with each chemometric method. The chemometric analyses reveal variations related to the wine sample, temperature, and time, as well as the interactions among these factors. Moreover, the magnitude and statistical significance of the effects are satisfactorily accounted for by ASCA, while the time-related variations are captured by PARAFAC modeling. Acetaldehyde, formic acid, polyphenols, carbohydrates, lactic acid, ethyl lactate, methanol, choline, succinic acid, proline, acetoin, acetic acid, 1,3-propanediol, isopentanol, and some amino acids are identified as some of the metabolites that present the most important variations.
Introduction
Wine is a fermented alcoholic beverage extensively distributed, appreciated, and investigated around the world. Chemical characterization of wine has evidenced the complexity of this matrix, since it is composed of a high quantity and diversity of metabolites (also known as the metabolome). Metabolomics is one of the most recent omics sciences, and it comprises the study of the metabolome of organisms, biological systems, or products in a specific condition. Liquid chromatography-mass spectrometry (LC-MS) and proton nuclear magnetic resonance (1H-NMR) are two analytical platforms used for metabolomics. Although LC-MS is very sensitive and can quantify a greater number of compounds, 1H-NMR is advantageous since it is non-destructive, robust, and highly reproducible, requires short analysis times, and needs no laborious sample preparation steps. In addition, it can detect diverse organic compounds and supplies very specific structural information. For this reason, it has been widely used for wine fingerprinting and metabolomics [1][2][3]. Correlation of the 1H-NMR spectral data to some properties of wine using chemometric methods can help to find patterns of similarity or differences according to vineyard [2], origin [3], variety [4], vintage and ageing [5][6][7], bottle aging [8], quality control [2], authentication [9], color evolution and stability [10], and cultivation practices [5,9,11], among other factors or their combination [8,[12][13][14]. Chemometric methods make use of multivariate analysis (MVA) techniques.
In recent years, many studies have been conducted using unsupervised methods such as principal component analysis (PCA) and supervised methods such as partial least squares discriminant analysis (PLS-DA) and/or its orthogonal form (OPLS-DA) to analyze the sources of variation and other underlying factors, as previously mentioned. In addition, techniques such as hierarchical cluster analysis (HCA) and linear discriminant analysis (LDA) have also been used to predict geographical origin [3,15] and variety [16][17][18]. Nevertheless, these chemometric methods do not consider the explicit inclusion of temporal variation during modeling.
Although wine is a relatively stable product while it is in the bottle, as soon as it is opened and poured into different recipients, either for drinking or chemical analysis, air promotes reactions that affect its characteristics, giving a continuously changing dynamic system. Samples taken from wineries and transported at room temperature are subjected to strong variation as well, especially if long transport distances are considered. In this regard, a chemometric investigation of gas chromatography (GC) data and enological parameters (absorbance at 420 nm, free SO2, total SO2, total phenol, and total aldehyde) of a bag-in-box white wine stored at different temperatures and times showed a grouping trend that was influenced by both variables. In addition, the maximum storage time could be predicted accurately by partial least squares (PLS) regression of the GC data [19]. In another study, hierarchical and non-hierarchical cluster analyses were employed to monitor the level of biogenic amines in opened bottles against time and other conditions, such as different temperatures, stopper type (screw cap or cork), and use of vacuum devices, by dispersive liquid-liquid microextraction-gas chromatography-mass spectrometry (ME-GC-MS) to reveal latent relationships between wine brands, the conditions for their storage, and their amine content [20]. Furthermore, wine evolution during bottle aging has also been studied by 1H-NMR spectroscopy and PCA [8]. The authors found that metabolite variations due to wine aging were minimal compared to those that resulted from a different wine type and wine geographical origin. Storage at a low and controlled temperature for 2 or 4 years allowed for identifying a decrease in organic acid (lactic acid, succinic acid, and tartaric acid) content and an increase in ester (ethyl acetate and ethyl lactate) content for most wines. Catechin and epicatechin decreased during aging in all wines, while gallic acid increased in almost all red wines [8]. The influence of transport temperature profiles on wine quality was also studied by simulating transport conditions in a climate chamber for five wines representing international wine styles: one sparkling, two white, and two red still wines. Analytical and sensory results demonstrated that a significant temperature influence, calculated as a temperature-time equivalence, needs to be reached before quality effects are observed. The wines of a fruity and lighter style were more sensitive than the wines with higher alcohol and wooden aging. Furthermore, except for Champagne, sensory examinations showed a regular linear trend with exposure severity starting at the time equivalence at 15 °C. The sensory results correlated well with the ultraviolet-visible (UV-vis) and head-space solid-phase microextraction gas-chromatography mass-spectrometry (HS-SPME-GC-MS) results [21].
Metabolomic analysis of wines requires well-traceable samples; however, these samples may have different origins and could be sampled in different forms, e.g., they can be taken directly from cellars using either plastic or glass containers, or they can come from the sampling of commercial bottles from restaurants/bars, etc., by regulatory authorities in the interest of public health and safety, to be transported to laboratories. No matter the procedure, the storage and transport conditions may impact the future analysis of the samples and, consequently, the results and conclusions of the investigation. As no previous work has been found that evaluates the effects of temperature, time, and their interaction using complementary chemometric methods, nor that considers the experimental design information in the chemometric evaluation, this work aims to compare and analyze the information obtained by different chemometric methods used in a complementary form (PCA, ANOVA simultaneous component analysis (ASCA), and parallel factor analysis (PARAFAC)) to study 1H-NMR spectra variations of red wine samples kept at different temperatures and time lapses after the bottles were opened. Through these multivariate analyses, the identification of relevant characteristics and changes that permit deciding about the sample conditions before further metabolomic studies is attained.
NMR Fingerprint
Representative 1H-NMR spectra of wine taken as soon as the bottle was opened and after long-time storage at extreme temperature (67 days at 40 °C) are shown in Figure 1. Thirty-five compounds were identified, and changes in the intensity of some signals were detected as the main differences between both spectra; e.g., the intensity of the signals from acetaldehyde (δ 9.67, q), formic acid (δ 8.33, s), succinic acid (δ 2.63, s), and acetic acid (δ 2.07, s) was increased. All the metabolites identified in the spectra, with their chemical shift (δ, ppm), multiplicity, and coupling constant (J, Hz), are shown in Table 1. It is important to note that, as no buffer was used in sample preparation, the principal consequence may be changes in the chemical shifts of some signals. For example, the succinic acid singlet presented variations of around 1.24 Hz after 1 day of storage at 40 °C, indicating possible pH changes in the sample.
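As an aside, such a drift converts between Hz and ppm through the spectrometer frequency; a one-line check of the arithmetic (the 700 MHz value comes from the instrument described in the 1H-NMR Analysis section):

```python
# A 1.24 Hz drift observed on a 700 MHz spectrometer, expressed in ppm
shift_hz, spectrometer_mhz = 1.24, 700.0
print(f"{shift_hz / spectrometer_mhz:.4f} ppm")   # ~0.0018 ppm
```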
Figure 1. Overlayed 1H-NMR spectra (700 MHz, D2O, 300 K) of S2 wine sample at "zero-time" (black) and stored at 40 °C for 67 days (blue). Identified compounds are numbered and listed in Table 1.

The 1H-NMR spectra of three control samples of each wine were randomly acquired in duplicate over a maximum time of 10 h to simulate normal variations during workday conditions. Then, the 1H-NMR data were subjected to an exploratory PCA analysis showing time-course changes between controls and replicas (Figure 2A). A three-component model provided a root mean square error of calibration (RMSEC) of 2.81 × 10-4 and a root mean square error of cross-validation (RMSECV) of 4.49 × 10-4. As some dispersion of the data was observed, the PCA loading plot was analyzed, revealing that the regions in the spectra contributing to this phenomenon were those associated with signals near δ 1.38, δ 4.32, and δ 4.54 ppm. It was noticed that these regions, corresponding to broad signals, appeared even twenty minutes after the initial (zero-time) measurement. For this reason, a complementary PCA was run removing the regions 4.60-4.46, 4.34-4.24, and 1.41-1.36 ppm. As a result, a reduction of the dispersion between samples was observed (Figure 2B), although the difference between samples was conserved. To make these signals clearer, the expansions of the spectra regions with those chemical shifts for all samples (stored at all temperatures and times) are shown in Figure 3. These signals could not be assigned, but their analysis over all stored samples showed that they increased and became less wide through time and could be attributed to degradation and oxidation processes. For this reason, further analyses were performed keeping these regions.

The spectra variations of wine samples subjected to different temperatures and times were then analyzed by PCA. Briefly, models with 5-8 components satisfactorily explained 97.64 to 99.56% of the spectra variation, with RMSEC and RMSECV values between 0.0042-0.0068 and 0.0116-0.0145, respectively. No outliers were observed in the Hotelling T2 reduced vs. Q residuals reduced plots, indicating that all samples could be considered in the analysis.
In Figure 4, the PCA score plots of the models are shown. Clearly, spectra variations related to temperature and time are evidenced. In addition, some time-ordering of the samples is observed, indicating a gradual change of the spectral features through the elapsed time. The differences in the observed patterns comparing each case (C1, X1, S1, and S2) indicate that temperature and time effects are case-dependent, and the effect of temperature is not equal at each level of the time variable. In addition, it was observed that two components accounted for 85.46 to 93.39% of the spectra variations. To provide insight into the effects of time and temperature in each case, the time series plots of the score values of the first principal components are plotted in Figure 5, as ultimately these terms account for 75.25-88.24% of the spectra variations in the data. As observed, PC1 accounts simultaneously for both effects, making the isolation of each source of variation indiscernible. Furthermore, as previously discussed, the effects clearly vary within the cases because the score values present different profiles among them, with case S2 showing the most different behavior. In addition, samples at 40 °C show the most significant variations in all cases, and the scores at 4 °C are very close to those at 20 °C in three cases, except for S2.

Considering the similarity of the loadings for the components, it was decided to analyze the four wines at once in one PCA analysis (Figure 6). The results of this procedure showed that some samples lay in the region of high leverage and outside the model's plane in the Hotelling T2 reduced vs. Q residuals reduced plots; however, as they correspond to those with the longest times (67 days), it was decided to keep them during modeling. A model with nine components satisfactorily explained 98.42% of the spectra variation, with RMSEC and RMSECV values of 0.0082 and 0.0151, respectively.

The PCA scores plot of the model showing the data according to characteristic information, i.e., case (C1, X1, S1, S2), temperature (with colors), and time (with numbers), is shown in Figure 6. Changes with temperature and time are observed, where the wines at the lowest temperature vary the least. The differences in the observed patterns indicate that the temperature and time effects are case-dependent and that the effect of temperature is not equal at each level of the time variable, i.e., interactions between both variables are relevant.
Interestingly, comparing all data, the C1 case samples are present as a well-defined group in the graph (Figure 6A), indicating a well-differentiated pattern. More importantly, the effect of time seems to be more discernable than that of temperature (Figure 6B), and as time elapses, the data are more dispersed. This was confirmed in the Hotelling T2 reduced vs. Q residuals reduced plot, where samples at 57 and 67 days appear anomalous with respect to the rest. Because PCA is not able to disclose both effects independently, since no information related to the experimental design is included in the analysis, further studies were performed using the ASCA method, taking advantage of the full-factorial structure of the experimental data matrix.
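For readers who want to reproduce this kind of exploratory step, the sketch below shows a minimal PCA on a binned-spectra matrix with the mean centering and Pareto scaling described in the Multivariate Data Analysis section. NumPy and scikit-learn are assumed tools here (the authors used PLS-Toolbox), and the data matrix is a random placeholder, so this is a functional analogue rather than the actual analysis code.

```python
import numpy as np
from sklearn.decomposition import PCA

def pareto_scale(X):
    """Mean-center each bucket and divide by the square root of its standard deviation."""
    Xc = X - X.mean(axis=0)
    sd = X.std(axis=0, ddof=1)
    sd[sd == 0] = 1.0              # guard against constant buckets
    return Xc / np.sqrt(sd)

X = np.random.rand(96, 240)        # placeholder: samples x 0.04 ppm buckets
Xp = pareto_scale(X)
pca = PCA(n_components=9).fit(Xp)  # nine components, as in the joint model above
scores = pca.transform(Xp)
print(pca.explained_variance_ratio_.cumsum())  # compare with ~98.42% at 9 PCs
```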
ANOVA Simultaneous Component Analysis (ASCA)
Since the experiments were planned as a full-factorial design, an improvement in data interpretation with respect to PCA can be achieved by segmenting the information of the data matrix through ANOVA analysis prior to PCA. As the results in the previous section clearly indicated interactions among temperature, time, and case, a model contemplating up to two-variable interactions in a three-way ANOVA was considered. During modeling, p-values were obtained via 1000 permutation tests. Not surprisingly, the results of the analyses indicated that all variables play an important role as main effects and as interaction terms in all possible binary combinations at the 99.9% confidence level. Almost half (53.40%) of the variability is due to differences among cases; temperature and time as main effects have similar contributions (13.48 and 10.97%), while the contributions of the interactions follow the order: temperature × time (10.03%) > time × case (6.07%) > temperature × case (1.88%).
In Figure 7, the score plots of the factors are displayed. Clearly, the separation of the different contributions by ANOVA allowed a better definition of groups (temperature, time, and case (C1, X1, S1, and S2)) in comparison to PCA. In Figure 7A, it is noted that temperature clusters the wines in practically two well-defined groups along PC1, since the scores at 40 °C are well separated from those at 4 and 20 °C, which are less different. Additionally, some distinction is observed along PC2 for the latter samples. This is slightly observed for time (Figure 7B), where some trend is perceived as time increases, ongoing from the first to the last days. On day 67, the spectra profiles differ considerably. More interestingly, a strong distinction according to the case (C1, X1, S1, and S2) is attained by the ASCA algorithm, which clearly isolates this contribution (Figure 7C), as was previously observed in PCA but not in such a well-defined form. According to this, these four wine samples can be differentiated using only the PC1 and PC2 score values. Further confirmation of this result may allow us to define a characteristic signature according to the case, which may be helpful for identification purposes in conditions where the samples have been subject to temperature and temporal variations.
In summary, the ASCA analysis clearly confirmed: (i) the importance of the interaction between temperature and time in a quantitative way, denoting that the main and interaction effects have comparable magnitudes; (ii) that samples at 4 and 20 °C are more similar to each other than to those at 40 °C; and (iii) that a characteristic profile is present in each case. However, it was still difficult to isolate the contribution of the temporal variable to characterize the pattern followed by the spectra variations associated with this effect (Figure 7B). For this reason, the PARAFAC chemometric method was applied in the following section.
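The core of the ASCA approach can be sketched as below: the scaled data matrix is partitioned into ANOVA effect matrices by level means, the contribution of each effect is quantified from sums of squares, and each effect matrix is then subjected to PCA. The factor labels and data are placeholders, and the permutation test used above for p-values is omitted for brevity; this is an illustrative reduction, not the PLS-Toolbox implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def effect_matrix(X, labels):
    """Replace each row by its factor-level mean, relative to the overall mean."""
    E = np.zeros_like(X)
    for lvl in np.unique(labels):
        idx = labels == lvl
        E[idx] = X[idx].mean(axis=0)
    return E - X.mean(axis=0)

X = np.random.rand(96, 240)                    # placeholder scaled bucket matrix
temp = np.repeat([4, 20, 40], 32)              # placeholder temperature labels

E_temp = effect_matrix(X, temp)
ss_total = ((X - X.mean(axis=0)) ** 2).sum()
print(f"temperature effect: {100 * (E_temp ** 2).sum() / ss_total:.1f}% of variability")

scores = PCA(n_components=2).fit_transform(E_temp)   # per-effect score plot (cf. Figure 7A)
```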
Parallel Factor Analysis (PARAFAC)
Due to the impossibility of disclosing the time effect in a clear way, because of the interactions this variable has with temperature according to the previous sections, and taking advantage of the 3-way data structure, a trilinear modeling strategy was employed using the PARAFAC algorithm. For the analysis, mode 1 corresponded to time (days), mode 2 to the 1H-NMR spectra, and mode 3 to temperature. A two-factor model reached 99.66, 99.50, 99.66, and 99.46% of explained variance, with residual errors of 4.51, 4.23, 3.32, and 4.26 and core consistencies of 94, 98, 96, and 93% for X1, C1, S1, and S2, respectively. The ratio of the mode 3 (temperature) loadings (Figure 8) clearly shows similar ratio values at 4 °C and 20 °C for all cases except S2, which significantly differ from those at 40 °C, in accordance with the PCA and ASCA modeling. Wines at 40 °C varied the most compared to the lower temperatures. The mode 1 (time) loadings ratio (Figure 9) reveals characteristic temporal variations for each case (C1, X1, S1, and S2), as previously inferred in the PCA and ASCA modeling, where it was not possible to identify time variation patterns along the cases. However, it is clear from the plot that, on average, after the seventh day the sample changes became more abrupt, except for S2, for which the limiting day was the fourth. However, as discussed in the ASCA section, the effect of temperature is not independent; it is case-dependent. As it may be interesting to identify the lapse time in which the integrity of the wine samples is compromised as a function of time and temperature, the mode 2 data were normalized by the others. The results for case S1 are shown in Figure 10, in which each of the normalized factors of the model is plotted at the different studied temperatures. In a sense, this representation is the reconstruction of the spectra in terms of the two factors required for modeling; consequently, it has the advantage that it is easy to track the changes in the spectra due to simultaneous time and temperature variations. In general, the first factor in the PARAFAC model (Factor 1) does not appreciably vary with temperature and can be viewed as a starting point from which further changes can be followed. In contrast, the second factor (Factor 2) has high variation, representing the spectra modification along time for each temperature. From Figure 10, it is observed that, in general, at 4 °C samples remained stable for up to 7 days, at 20 °C for 2 days, and at 40 °C changes were already notorious on the first day. This indicates that, in general, samples at 20 °C were comparable to those at 4 °C if the lapse time did not exceed 2 days. As expected from the previous PCA and ASCA analyses, each case (C1, X1, S1, and S2) presented a characteristic decomposition pattern in this reconstruction. However, the C1, X1, and S2 cases showed similar behavior. These results are in good agreement with those of Jung et al. [21], who demonstrated that a significant temperature influence, calculated as a temperature-time equivalence, needs to be reached to observe the effects.

Figure 8. Temperature-mode PARAFAC loadings for the first to the second-factor ratio for the X1, C1, S1, and S2 cases at 4, 20, and 40 °C.

Figure 9. Time-mode PARAFAC loadings of the second to the first-factor ratio for the X1, C1, S1, and S2 cases.
Finally, the mode 2 loadings plot (Figure 11) was used to identify the buckets corresponding to the chemical shifts associated with the observed variations. In general, the buckets listed in Table 2 were identified as those with the major changes. Clearly, each case (C1, X1, S1, and S2) shows a distinctive change pattern in the appearance and disappearance of signals, indicating that the changes in metabolites are case-dependent. This behavior was previously reported by Cassino et al. [8], where wine variations due to aging were observed to depend on the type of sample. In addition, practically all listed buckets were relevant within each chemometric method, as expected from the fact that the information is the same but processed in a different manner by each method.

Figure 11. Spectra-mode PARAFAC loadings for the first and second factors for the X1, C1, S1, and S2 cases.

Table 2. Bucket list (p < 0.05) with metabolites and unassigned signals related to the major changes in the studied wines.
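The trilinear decomposition itself can be sketched as follows, using the tensorly library as an assumed tool (the authors used PLS-Toolbox). The tensor shape is a placeholder, and the spectra-mode non-negativity constraint applied in the actual analysis is omitted here for simplicity.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Placeholder tensor: 8 time points x 240 buckets x 3 temperatures
T = tl.tensor(np.random.rand(8, 240, 3))

weights, factors = parafac(T, rank=2, normalize_factors=True)
time_mode, spectra_mode, temp_mode = factors

# Factor-loading ratios per mode, analogous to Figures 8 and 9 above
print(temp_mode[:, 0] / temp_mode[:, 1])   # temperature mode
print(time_mode[:, 1] / time_mode[:, 0])   # time mode
```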
Metabolites Identification
The 1H-NMR signals contained in the buckets that significantly contribute to describing the changes observed during wine transformation (p < 0.05) were assigned to their corresponding metabolites (Table 2). To observe the changes in the signals, 1H-NMR spectra expansions of these regions are presented in Figure 12. As shown, acetaldehyde and formic acid increase in concentration as temperature and time increase. This change could be explained by the oxidation of ethanol and methanol. Acetaldehyde is the major oxidation product in wine due to the presence of ethanol. Additionally, this compound could react with anthocyanins and other phenolic compounds [22], which explains the diminishing of the broad signals at δ 7.70-5.82 ppm. Acetaldehyde can also be oxidized to acetic acid, increasing its concentration [23]. Furthermore, in the 2.74-5.3 ppm region, some other signals augmented their intensity; however, no metabolite could be assigned to them. The metabolites that diminished in concentration were polyphenols, higher alcohols, a couple of organic acids (lactic and succinic acids), and other non-identified compounds. Higher alcohols can react with organic acids such as lactic and succinic acids, resulting in the formation of new esters [8,24,25]. The reduction in some amino acids (such as alanine, threonine, leucine, and valine) could be due to reactions with the remaining carbohydrates in the wine [26].
Wine Samples
Four red Cabernet Sauvignon commercial wines from different brands, obtained from Baja California, Mexico, were analyzed. Their ethanol content varied from 12 to 14% (v/v). Two were from Valle de Guadalupe, with production years 2016 (C1) and 2017 (X1), and two were from Valle de Santo Tomas, with production year 2015 (S1 and S2).
Sample Preparation and Storage Conditions
After the bottles were opened, three aliquots were immediately analyzed to have samples at "zero" time for each wine. At the same time, the rest of the wine samples were transferred to fully-filled conical tubes, covered from any light source, and stored at three different temperatures (4, 20, and 40 °C) for 67 days. The last condition simulated accelerated storage [27]. One fully-filled conical tube was sampled once for each time and temperature condition.
Control Samples
Duplicate acquisitions of 3 samples of each wine were randomized over a period of 24 min up to 10 h to simulate the normal time between preparation and acquisition of a set of samples. An average RSD of the sample spectra of 1.4% was determined.
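The repeatability figure quoted above can be sketched as follows: the relative standard deviation (RSD) is computed per bucket across replicate control spectra and then averaged. The array is a placeholder, not the study data.

```python
import numpy as np

# Placeholder: 3 control samples x 2 replicates of binned spectral intensities
controls = np.random.rand(6, 240) + 10.0

rsd_per_bucket = controls.std(axis=0, ddof=1) / controls.mean(axis=0) * 100
print(f"average RSD: {rsd_per_bucket.mean():.2f}%")   # the paper reports 1.4%
```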
1H-NMR Analysis
For sample preparation, an internal standard solution containing 5.75 mM of 3-(trimethylsilyl)propionic-2,2,3,3-d4 acid sodium salt (TSP, 98% D atom, Sigma-Aldrich, Burlington, MA, USA) in deuterated water (99.9% D2O, Cambridge Isotope Laboratories, Andover, MA, USA) was prepared. Then, 900 µL of wine and 100 µL of internal standard solution were transferred into a cryovial and vortex-stirred for 30 s; 600 µL of this solution were transferred into a 5 mm NMR tube. No buffer solution was added in order to study wine transformations without the restriction of fixed pH values. The 1H-NMR experiments were performed at 300 K on a 700 MHz Avance III HD spectrometer (Bruker, Billerica, MA, USA) equipped with a 5 mm z-axis gradient TCI cryoprobe and a SampleJet autosampler. Data were recorded automatically under ICON-NMR (Bruker, Billerica, MA, USA) control. 1H-NMR spectra were acquired with the Bruker sequence noesygppr1d; water and ethanol signals were suppressed by applying a modulated shaped pulse during a relaxation delay (D1) of 4.0 s and a mixing time (D8) of 0.01 s. Each spectrum was acquired with 4 dummy scans (DS), a total acquisition time (AQ) of 3.99 s, and 32 scans (NS), and was collected into 65,536 complex data points (TD) using a spectral width (SW) of 13,992.5 Hz.
NMR Spectra Processing
Free induction decays (FIDs) were Fourier transformed, phased, baseline-corrected, and aligned by shifting the TSP signal to zero, all in automatic mode with 0.3 Hz apodization using TopSpin software v.3.5.6 (Bruker, Billerica, MA, USA).
Multivariate Data Analysis
The 1H-NMR data set was reduced by generating homogeneous boxes (binning) of 0.04 ppm over the chemical shift range 10.00-0.2 ppm, excluding the water (5.02-4.70 ppm) and ethanol (3.77-3.53, 1.28-1.07 ppm) signals, with alignment and normalization to TSP. Mean centering and Pareto scaling were employed for the PCA and ASCA analyses. Cross-validation was performed by Venetian blinds with 10 splits and a blind thickness of one for both techniques. As for PARAFAC, the spectra were Pareto-scaled before three-way folding, and the spectra mode was non-negativity constrained in the analyses. PLS-Toolbox 9.0 software (Eigenvector Research, Inc., Wenatchee, WA, USA) was employed for all chemometric analyses.
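The bucketing and TSP normalization just described can be sketched as below; the spectrum arrays are placeholders and the function is a hypothetical helper, not the PLS-Toolbox routine actually used.

```python
import numpy as np

EXCLUDE = [(4.70, 5.02), (3.53, 3.77), (1.07, 1.28)]   # water and ethanol regions (ppm)

def bucket_spectrum(ppm, intensity, width=0.04, lo=0.2, hi=10.00):
    """Integrate the spectrum into fixed-width ppm bins, skipping excluded regions."""
    edges = np.arange(lo, hi + width, width)
    centers, areas = [], []
    for a, b in zip(edges[:-1], edges[1:]):
        center = 0.5 * (a + b)
        if any(lo_ex <= center <= hi_ex for lo_ex, hi_ex in EXCLUDE):
            continue
        mask = (ppm >= a) & (ppm < b)
        centers.append(center)
        areas.append(intensity[mask].sum())
    return np.array(centers), np.array(areas)

ppm = np.linspace(10.0, -0.5, 65536)       # placeholder chemical-shift axis
spec = np.abs(np.random.randn(65536))      # placeholder real spectrum
centers, areas = bucket_spectrum(ppm, spec)
tsp = spec[np.abs(ppm) < 0.02].sum()       # TSP reference integral near 0 ppm
areas_norm = areas / tsp                   # normalization to TSP
```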
Conclusions
Chemometric analyses using the PCA, ASCA, and PARAFAC methods were successfully employed in a complementary form to identify relevant characteristics and changes in the 1H-NMR spectra of opened red wine bottle samples exposed to different temperatures (4, 20, and 40 °C) and time lapses (0 to 67 days), in order to decide about the sample conditions before further metabolomic studies. The results indicated that changes related to temperature and time can satisfactorily be tracked with each chemometric method. PCA analysis revealed variations related to the sample, temperature, and time. The differences in patterns observed comparing each sample indicated that temperature and time effects were case-dependent and that the effect of temperature was not equal at each level of the time variable, i.e., interactions between both variables were relevant. In addition, it showed that the effect of temperature was more discernable than that of time. ASCA analysis was used to take advantage of the full-factorial structure of the experimental data matrix. This chemometric method clearly confirmed: (i) the importance of the interaction between temperature and time in a quantitative way, denoting that the main and interaction effects had comparable magnitudes; (ii) that samples at 4 and 20 °C were more similar to each other than to those at 40 °C; and (iii) that a characteristic profile was present in the samples, defining a signature which may be helpful for sample identification purposes in conditions where the wine has been subjected to temperature and temporal variations. PARAFAC analyses confirmed the results from PCA and ASCA but additionally indicated that, in general, samples at 20 °C were comparable to those at 4 °C if the lapse time did not exceed 2 days. From the buckets associated with their corresponding spectra loadings, identification of the related metabolites was performed. Practically all the listed buckets were relevant to all chemometric methods, as expected from the fact that the information was the same but processed in a different manner by each chemometric algorithm. In general, acetaldehyde, formic acid, acetic acid, polyphenols, lactic acid, methanol, choline, succinic acid, proline, acetoin, 1,3-propanediol, isopentanol, alanine, and other higher alcohols and amino acids were the metabolites responsible for the main changes. It is expected that this information may be useful to evaluate wine quality and integrity when the conditions for the storage and analysis of red wine samples cannot be completely under control due to different field or laboratory situations.
Do we need tailored training and development plans for European Union respiratory nurses?
Respiratory diseases inflict a massive health burden worldwide, affecting >1 billion people. COPD, asthma, acute lower respiratory tract infections, tuberculosis and lung cancer are among the most common causes of severe illness and death globally [1]. Respiratory nurses are key members of the pulmonary healthcare team caring for people in acute settings, as well as in primary care, providing a wide range of interventions from ventilation to palliative care. Their specialised roles deliver both autonomous and prescribed interventions [2–6].
Worldwide, the recognition of respiratory nurses has been effective in improving the quality of care and patients' outcomes. In Australia and the USA, the scope and role of respiratory nurses has been well defined and established for 30 years [7]. In 2017, the European Respiratory Society (ERS) documented that allied respiratory professionals (ARPs) "are involved in the prevention, diagnosis, evaluation, treatment and management of respiratory diseases" [8]; however, the role of respiratory nurses within the ARPs was not clearly delineated. In the European Union (EU), only Denmark, Finland, Iceland, Norway, Portugal, Spain, Sweden and the UK have a formal respiratory specialisation for nurses, and the competences and education levels of respiratory nurses vary from one European country to another. Currently, there is a lack of consensus on the definition, role and activities of respiratory nurses. Thus, it is challenging to understand which specialist care would be best provided by respiratory nurses, and respiratory nursing roles in joint research projects and educational programmes remain unclear. It is imperative that respiratory nurses themselves define the scope of respiratory nursing and replace a general description such as "nurses taking care of people with pulmonary diseases" with the clarity needed for harmonised, tailored training and development plans.
Defining and outlining the scope of practice, role and activities of respiratory nurses is of utmost importance to establish the components of advanced education for respiratory nurses. The specialisation, competences and responsibilities of respiratory nurses are still either non-existent or not clearly defined in the majority of European countries (table 1) [9]. A unique curriculum for the specialisation of respiratory nurses needs to be created and implemented. Improving higher education, including basic nursing education, was the focus for the development and implementation of the Bologna declaration, considered the most important reform to enable comparability in educational standards and quality that has occurred in Europe in the past 30 years. It aimed to create a more coherent, compatible, comparable and competitive European higher education area to promote governmental inter-cooperation [10]. All countries that have signed the Bologna declaration agreed to strive for the consistency of educational systems across Europe. This agreement is especially important in respiratory nursing. Setting minimum standards of training and competence for specialties, such as respiratory care, as well as for basic nursing education, could help all European nursing schools to implement a coherent higher nursing education curriculum [11]. Initially, a consensus must be found on whether respiratory nurses' education should be an advanced course (<1 year full-time post-graduate), a diploma (1 year full-time post-graduate) or a master's degree (2 years full-time post-graduate). Respiratory nursing specialisations and specialised courses are available, but not consistently across Europe. Additionally, the effectiveness of programmes that improve graduate nurse transition to the workforce, in terms of contributing to staff retention, has not been proven [9,12]. The transition of nurses to advanced practice has, however, been shown to be a critical strategy in attracting nurses to specialty areas [9,13]. Current Transition to Specialty Practice (TSP) programmes do provide a structured, supported practice for nurses entering a specific nursing area [9], but these TSP programmes have no consistent framework available to guide their development or delivery. As a result, there are significant variations in the way TSP programmes are delivered [14]. Moreover, upon completion of the specialisation, levels of competencies also differ from one country to another [9,[15][16][17].
Recognisably, higher levels of nursing education and training are influenced by national education systems, statutory and regulatory processes, and by professional groups in each country [18]. Therefore, addressing these issues could best be accomplished by bringing together academic and clinical expert nurses from the respiratory field to form a working group capable of tackling these challenges. The main goal of this group would be to define a scope of practice, a curricular framework and a platform to share educational resources with other EU countries [9,19,20]. This group could provide opportunities for nurses to establish closer links with their European colleagues across a spectrum of clinical practice, management, and research to raise the awareness and benefits of the respiratory nurse specialist role [21].
The ERS Nurses Group 09.03 should lead this working group, and provide support and links to all nurses, stakeholders and patients who wish to contribute to the recognition and advancement of respiratory nurses. The role of specialised respiratory nurses needs to be recognised by healthcare systems in all EU countries. The EU and national regulators must be led to recognise that an evolution of quality standards and harmonisation of respiratory specialist nurse education is critical to improving care for patients with respiratory diseases. Thanks to advances in diagnosis and treatment, people with respiratory diseases potentially have a longer life expectancy with better quality of life. Tailored training and development of respiratory specialist nurses could be the solution to realising this potential. Other allied respiratory professions have documented how harmonised roles and education are at the forefront of high-quality and safe care provision as well as enhanced quality of life for patients [8]. Nurses are critical players in healthcare and should be the next profession to standardise levels of education, preparing them for an active partnership with other healthcare professionals ready to tackle the chronic disease problem in Europe.
Biomechanical Comparison of Cannulated Screw Osteosynthesis With Tension-Band Wiring for Proximal Fractures of the Fifth Metatarsal (Jones Fracture)
Jones fractures, which lie at the junction of the diaphysis and the metaphysis of the fifth metatarsal, are a well-described clinical issue. There are various surgical approaches, including the commonly performed cannulated screw osteosynthesis and the less frequently used tension-band approach. The aim is to compare the biomechanical stability of these osteosyntheses. We performed an osteotomy on 16 fresh frozen fifth metatarsal bones from body donors, representing a Jones fracture. The fractures were treated pairwise with screw osteosynthesis or tension-band wiring. This was followed by cyclic axial bending until osteosynthesis failure. Stability under axial bending force was higher in the screw osteosynthesis group (mean: 70.0 ± 66.5 N) compared to the tension-band wiring group (mean: 35.7 ± 23.3 N), although not reaching statistical significance (p = .116). The study shows no statistically significant difference in biomechanical stability under axial loading between screw osteosynthesis and tension-band wiring. Based on the data obtained, no differences can be observed from a biomechanical point of view. The study supports the established method of treating Jones fractures primarily with screw osteosynthesis. In addition, the data suggest that tension-band wiring may be a good alternative osteosynthesis, for example,
Metatarsal fractures are common in adults, accounting for 3.2% to 6.8% of all fractures (1). Of the metatarsal bones, the fifth is the most commonly affected, with 56% to 68% of all metatarsal injuries (1,2). Jones fractures occur at the metaphyseal-diaphyseal junction at the base of the fifth metatarsal.
Poor blood supply, as shown by Smith et al. (3), is a reason for prolonged healing time in nonoperative treatment of this fracture type, leading to delayed union or non-union. Furthermore, activation of the peroneal tendons inserting proximally to the fracture site during gait contributes to dislocation of Jones fractures (4). Therefore, surgical treatment has been recommended to avoid this complication and the associated more complicated course of treatment.
In a literature review by Dean et al., it was shown that acute Jones fractures located approximately 1.5 cm distal to the proximal tuberosity (Dameron & Quill type II) are associated with a non-union incidence of 20%, with mean radiographic union times of 15.9 weeks under nonsurgical treatment (5). Surgical treatment showed a decreased time to radiographic union of 7.1 weeks, accompanied by a surgical complication rate of 22.6%, mostly screw-related complications and rarely non-unions.
There is consensus that displaced type II fractures require operative treatment (6)(7)(8)(9), but the management of non-displaced type II fractures is debatable. Many studies have shown a benefit of early fixation when compared to conservative cast therapy, resulting in earlier radiographic union, weightbearing, and return to normal activity, including sports (10,11).
There are several surgical procedures to treat type II fifth metatarsal fractures. The most used is cannulated screw osteosynthesis. Nevertheless, refracture following surgical treatment occurs at rates of 10% to 15% (5,7,(12)(13)(14). As recently suggested, the use of headless compression screws may add a greater amount of stiffness than conventional, partially threaded screws (15). Plate osteosynthesis, which provides higher biomechanical stability compared to screw osteosynthesis, is a viable proposition (16). A shorter time to fracture-zone union was described for the plate in a clinical study (17). An alternative to these osteosyntheses is tension-band wiring, as suggested by Sarimo et al. (18). They treated delayed or non-union Jones fractures after cast therapy and described postoperative radiographic union at around 12.8 weeks and a return to activity in about 14.7 weeks.
To the best of our knowledge there is no study evaluating the biomechanical stability of tension-band wiring in Jones fractures.
In our study, we compare the biomechanical stability of tension-band wiring to that of cannulated screw osteosynthesis in fifth metatarsal fractures using a bending-stress cadaver bone model.
Specimens
Sixteen fresh frozen human fifth metatarsals (8 pairs) were isolated from body donors while preserving the insertion of the peroneus brevis tendon. Bone mineral density (BMD) of all cadavers was determined in the associated calcanei (QDR 4500 Elite Densitometer). The specimens were thawed at room temperature. The study was approved by the local ethics committee.
Fracture Model
Osteotomy was performed 1.5 cm distal to the styloid process in the metaphyseal zone, involving the joint between the fourth and fifth metatarsals. The distance was measured with a ruler aligned with the shaft axis of the bone.
Osteosynthesis
Following the osteotomies, screw osteosynthesis was performed on the right metatarsal bone and tension-band wiring on the left. The screw osteosynthesis was performed using a 4.5 mm cannulated titanium partial-thread screw (Stryker GmbH & Co. KG), as suggested in the user manual. After measuring the possible screw length, the longest possible screw was used, as shown in Fig. 1A. The tension-band wiring was performed as described by Sarimo et al., using 1.25 mm K-wires and 1.0 mm cerclage wire, as shown in Fig. 1B. All osteosyntheses were performed by 2 experienced traumatological foot consultants. To standardize the procedure, one surgeon performed all screw osteosyntheses and the other all tension-band wirings.
Prior to embedding, the osteotomy gap and all segments of K-wires and screws were capped with modeling clay to avoid contact with the embedding liquid (Fig. 2). Both ends of the specimen were embedded in liquid methylmethacrylate (Technovit 3040) in a cylindrical form matching the size of the mounting device of the test machine.
Test Design and Procedure
Biomechanical testing was performed using a universal material testing machine (TC-FR 1.0TH.D09, Zwick Z1.0). During the measurement, the distal enclosing cylinder could slide freely in the longitudinal direction of the specimen axis using a 2-dimensional free-swinging table (Fig. 3).
The sufficiency of the osteosynthesis was assessed using an ultrasound measurement system (ZEBRIS CMS 70PV5, ZEBRIS Medical) applied to the specimen. One ultrasound sensor each was attached to the metatarsal base and the metatarsal diaphysis. The ultrasound system registers movement in 3-dimensional space: any movement of each attached marker relative to the sensing camera is registered every 33 ms, resulting in a change in the coordinate system with a sensitivity of 0.1 mm. The displacement distance is calculated afterwards. The axial loading protocol was determined by preliminary tests using a fifth metatarsal bone model (SYNBONE). After 5 setting cycles, axial bending force was determined in 10 measuring cycles. Each specimen then underwent 10 cycles of a 10 N dorsally directed bending force at a rate of 1 mm/s and unloading to a tensile force of −5 N, simulating unloading. After each cycle, the force was increased by 10 N until osteosynthesis failure. As the definition of failed osteosynthesis is widely discussed and not conclusively defined, we assumed a fracture gap widening larger than 2 mm as osteosynthesis failure. This threshold is commonly accepted for describing foot and ankle fractures as displaced (7,19).
Fig. 3. Shown is a prepared specimen with the added markers for ultrasound measurement (green and yellow). The proximal end of the fifth metatarsal bone undergoes 10 cycles per measurement of dorsal bending, starting at 10 N, with unloading to −5 N. The dorsal bending force is increased by 10 N per measurement until osteosynthesis failure.
Statistical Analysis
Statistical analysis was performed using the Wilcoxon test for comparison of medians and rank sums of the two study groups. BMD values between groups were also compared using the Wilcoxon test. The relation between BMD and osteosynthesis failure was analyzed by the Spearman correlation coefficient. A p ≤ .05 was considered statistically significant for all tests. SPSS Statistics (version 21, IBM, Armonk) and GraphPad Prism (version 6, GraphPad, San Diego) were used for all calculations.
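A minimal sketch of the described analysis in Python (instead of SPSS/Prism), using hypothetical failure loads and BMD values rather than the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon, spearmanr

# Hypothetical paired failure loads (N) per cadaver -- illustrative only.
screw_loads = np.array([60.0, 45.0, 190.0, 30.0, 55.0, 80.0, 40.0, 60.0])
tension_loads = np.array([30.0, 25.0, 70.0, 20.0, 35.0, 55.0, 30.0, 15.0])
bmd = np.array([110.0, 130.0, 220.0, 95.0, 150.0, 180.0, 140.0, 120.0])

# Paired Wilcoxon signed-rank test between the two osteosyntheses.
stat, p = wilcoxon(screw_loads, tension_loads)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")

# Spearman correlation between BMD and failure load (methods pooled).
rho, p_rho = spearmanr(np.tile(bmd, 2),
                       np.concatenate([screw_loads, tension_loads]))
print(f"Spearman: rho = {rho:.3f}, p = {p_rho:.3f}")
```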
Results
There was no significant difference in bone density between the two osteosynthesis groups: cannulated screw osteosynthesis 154.9 ± 45.3 g/cm² and tension-band wiring 156 ± 43.3 g/cm² (p = .779).
There was no statistically significant correlation between the BMD and the axial bending force which led to osteosynthesis failure (r = 0.566, p = .148).
Discussion
In this study, we show that there is no difference in maximum failure load between osteosynthesis with cannulated screws and tension-band wiring. Furthermore, no influence of bone density values could be seen.
Because the force in our test was introduced via a dorsally directed vector, the rotational moments exerted through the tendon of the m. peroneus brevis were underestimated (4,20). The role of rotational forces in Jones fractures has not been clearly proven, as the natural joint surfaces and the surrounding soft tissue allow only slight rotational movements. A possibly higher stability against these rotational movements could not be assessed with our test set-up. Improved rotational stability and compression of the fracture site may favor tension-band wiring. The displaced avulsion fracture, with its slightly oblique fracture line, is a more established indication for tension-band wiring (7). The strictly transverse fracture line is not ideal for tension-band wiring and requires an orthogonal entry of the K-wires.
To ensure the best possible stiffness of the screw osteosynthesis, conventional partially threaded screws were used; in biomechanical tests, these showed superiority over variable-pitch fully threaded headless compression screws (21). Screws of the same size, with a diameter of 4.5 mm, were used for all cadavers; a superiority of 5.5 mm screws could not be demonstrated in other studies (22,23). Due to the anatomical curvature of the fifth metatarsal, correct screw length should be taken into consideration to avoid fracture gapping (24,25).
A load of 25 N was estimated as the physiologic load on the fifth metatarsal head during ambulation (26). This load could be carried by fractured bones treated with cannulated screws (mean: 70.0 ± 66.5 N) and almost by those treated with tension-band wiring (mean: 35.0 ± 23.3 N). Thus, complete avoidance of weight bearing may not be necessary, and at least partial weight bearing after surgical treatment can be considered.
Even though the outcomes with screw osteosynthesis have been satisfactory, in 10% to 15% of cases the osteosynthesis failed and refractures occurred (5,27). In such cases revisional osteosynthesis may be indicated. Due to a lack of alternatives, re-osteosynthesis is mostly performed using a larger screw (27). Tension-band wiring could be considered as an alternative. Taking into consideration that screw osteosynthesis is minimally invasive and related to satisfactory clinical outcomes, as shown by an early return to normal activity, there should be no change in the primary management of acute Jones fractures. This is supported by the experience of Sarimo et al. with tension-band wiring, which also resulted in early weight bearing and an early return to activity, comparable to screw osteosynthesis (18).
The main potential clinical advantage of tension-band wiring is greater torsional stability. The impact of torsional loading on fifth metatarsal fractures has yet to be solidly proven in a further study.
We cannot make a statement about the maximum stability of the osteosyntheses in patients with surgically treated fifth metatarsal fractures, because we did not load the specimens to ultimate failure but only to the 2-mm displacement criterion. Additional cyclic testing would be useful. In addition, it is difficult to extrapolate in vivo loads to ex vivo fracture models; our biomechanical setup approximates the in vivo forces as closely as possible to the clinical setting. A recommendation on weight bearing after osteosynthetic treatment of a fracture of the fifth metatarsal cannot be made on the basis of biomechanical results alone. Further clinical studies are necessary to verify the statements made.
The study shows no statistically significant difference in biomechanical stability under axial loading between screw osteosynthesis and tension-band wiring. The data therefore tend to support the established practice of treating Jones fractures with screw osteosynthesis, not for biomechanical reasons, since in this respect the methods are equivalent, but because of its minimally invasive surgical technique and the complications that can thereby be minimized.
We believe that tension-band wiring is a viable option for stabilizing acute Jones fractures, especially when rotational stability is a concern or in cases of operative revision. We therefore aim to test this hypothesis in a future study.
Statement of Consent
Informed consent was obtained during their lifetime from all human subjects involved in the study. Approval was granted by the Ethics Committee of the Friedrich-Schiller-University (University Hospital Jena, 2020-2016-Material).
Assessors
The first author (MU) and senior author (FK) were involved in planning the study, performing the experiments, data analysis and interpretation, writing the manuscript, and revising the manuscript. The coauthors were involved in planning the study (IG, JH, RL), performing the experiments (IG, JH), supervision (GH), and revising the manuscript (IG, JH, RL, GH). | 2022-08-17T15:03:40.114Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "9dce1b5af898e0068b07aec19b2160a1d21ff0ab",
"oa_license": "CCBY",
"oa_url": "http://www.jfas.org/article/S1067251622002344/pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ecb1a3afb555a8af63e4e07ab4e55bdaaaa7a9db",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
159041618 | pes2o/s2orc | v3-fos-license | On Selecting Stable Predictors in Time Series Models
We extend feature selection methodology to dependent data and propose a novel time series predictor selection scheme that accommodates statistical dependence in a more typical i.i.d. sub-sampling based framework. Furthermore, the machinery of mixing stationary processes allows us to quantify the improvements of our approach over any base predictor selection method (such as the lasso), even in a finite sample setting. Using the lasso as a base procedure, we demonstrate the applicability of our methods on simulated and several real time series datasets.
Introduction
Variable or predictor selection is an important problem in machine learning and statistics and has received a lot of attention in the literature. These methods are valuable in various applications in biology and finance, for both the very large n (samples) and the high-dimensional (p ≫ n) settings. In many cases of interest the objective of variable selection is to uncover an underlying causal mechanism, and hence the method in question must exhibit robustness to noise in the data.
Most predictor selection methods discussed in the literature hold only for i.i.d. data. The topic is relatively unexplored in the case of time series problems, where variables correspond to the choice of exogenous predictors or even the endogenous lags to include in the regression model. As time series data become pervasive with the advent of sensor-enabled devices and networks of these devices (e.g., Yu et al. [2017]), such predictor selection is poised to become widely applicable. Some recent applications already employ external time series of web searches to predict disease prevalence such as influenza (GFT of Ginsberg et al. [2009], ARGO of Yang et al. [2015]) and twitter-based sentiment time series for stock prediction (Si et al. [2013]). These approaches attempt to mitigate the problem of searching through millions of candidate exogenous time series that may be predictive of the underlying metric. In such applications selecting the correct time series is particularly crucial, as with incorrect predictors, disease or other forecasts can deviate significantly from reality (see Lazer et al. [2014] for a discussion). Another important application lies in the domain of causal inference (Brodersen et al. [2015]), wherein the impact of an intervention on a metric is evaluated using a Bayesian structural time series model. This is achieved by forecasting, using the metric's history and exogenous predictors (pseudo-controls), beyond the time of intervention. Finally, by comparing the difference between the forecasts and what really happened, the event's impact is estimated. It is evident that the quality of the forecast is crucial in this task, that the choice of the pseudo-controls or the correct lags is very important, and so we would like to have sufficient confidence in this choice.
Contribution
In this paper we propose and analyze novel and efficient stable procedures for predictor selection in time series, inspired by the framework developed in Shah and Samworth [2013]. To that end we first describe block sampling for time series and then propose stability measures called block pair average (BPA) and simultaneous block selection (SBS), with corresponding error control bounds. We empirically validate our procedures on several real time series and show both qualitative and quantitative improvements over competing methods. To the best of our knowledge these are the first predictor selection methods with finite sample guarantees that have applications to interpretable forecasting, classification, and other time series domains.
Related Work
The foundation of our work is based on extensions of sub-sampling based methods for i.i.d. data. In this setting Meinshausen and Bühlmann [2010] proposed Stability Selection (SS) as a repeated sub-sampling based methodology to improve the performance of any variable selection technique, and also provided bounds on the number of selected false positives. This method is an improvement over standard variable selection techniques, as it is usually non-trivial to provide error control bounds for methods run on the complete data. More recently Shah and Samworth [2013] proposed Complementary Pairs Stability Selection (CPSS) procedures that provide significant improvements over SS. Their performance bounds do not explicitly depend on signal and noise variables (which are usually unknown) but instead depend on the number of low selection probability variables that are included and on the number of high selection probability variables that are excluded by the CPSS procedures. This approach does not rely on restrictive and unverifiable assumptions such as exchangeability, which is required for the analysis in Meinshausen and Bühlmann [2010]. The central idea of stability selection using both SS and CPSS procedures is repeated execution of a base procedure (e.g., the lasso; Tibshirani [1996]) on subsamples of n/2 data points to identify variables that show up often in the selected set. In time series applications, the error control yielded by these stability procedures does not hold, as the sub-sampling does not account for the underlying dependence.
Most existing predictor selection methods in time series are largely based on heuristics (Ng et al. [2013]) or simply use the plain lasso (Yang et al. [2015], Buncic and Tischhauser [2017]) on the entire data, and it is non-trivial to provide guarantees for such methods. For the specific case of vector autoregression (VAR) models, Song and Bickel [2011] propose a grouped penalty based approach that provably identifies relevant lags and predictors in the asymptotic d (number of time series) and p̄ (number of lags) regime. Our method is of a fundamentally distinct flavor in that we provide quantifiable improvement over any base predictor selection method, including the method in Song and Bickel [2011], even in the finite data (sample or dimension) setting. Moreover, our approach also works for the more general VAR-X (VAR with exogenous variables) model and in general is independent of the base predictor selection mechanism.
Preliminaries
In this section we establish notation and preliminary details for the models we work with. Let y_t ∈ R^d and x_t ∈ R^m denote strictly stationary sequences. Before we present an example of a variable selection procedure in the time series domain, first consider the general VAR-X model (Lütkepohl [2007])

y_t = \sum_{i=1}^{p̄} A_i y_{t-i} + \sum_{j=1}^{s} C_j x_{t-j} + ε_t,   (1)

where each A_i ∈ R^{d×d} and C_j ∈ R^{d×m}. Model (1) captures the effect of endogenous and exogenous variables at different lags on the response variable and has been a staple in econometrics, with more recent applications in genetics (Larvie et al. [2016]) and renewable energy forecasting (Cavalcante et al. [2016]). This model can be ℓ1-regularized and thus employed as a variable selection procedure, for example to determine stable cross-sectional relations in y_t or to identify predictive exogenous series in x_t.
The regularized estimation problem is

\min_B \frac{1}{T} \sum_t L(y_t, B Z_t) + λ \|B\|_1,   (2)

where L is a convex loss and each Z_t ∈ R^{dp̄+ms} and B ∈ R^{d×(dp̄+ms)} are stacked such that Z_t = (y_{t-1}^⊤, . . . , y_{t-p̄}^⊤, x_{t-1}^⊤, . . . , x_{t-s}^⊤)^⊤ and B = (A_1, . . . , A_p̄, C_1, . . . , C_s). Since y_t and x_t are stationary mixing sequences, it follows that Z_t is also stationary (see Bradley [2005]). Unless otherwise stated, we are not assuming Model 2 as the underlying data generating process (DGP); the model and its sparse estimation only serve as a means for predictor selection, as described in the later sections.

Figure 1: We construct a blockwise independent sequence (right) from a dependent sequence (left) such that the points in the block are dependent but the blocks are independent. The odd blocks in both sequences have the same distribution.
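To make the stacking concrete, here is a small sketch that builds Z_t from lagged endogenous and exogenous observations; the function name and argument names are ours, and a univariate response (d = 1) is assumed for simplicity.

```python
import numpy as np

def varx_design(y, x, p_lags, s_lags):
    """Build the stacked design matrix for Model (2).

    y: (T,) endogenous series, x: (T, m) exogenous series.
    Row t of Z is (y_{t-1}, ..., y_{t-p}, x_{t-1}, ..., x_{t-s}),
    so a sparse linear fit of y on Z doubles as predictor selection.
    """
    T, t0 = len(y), max(p_lags, s_lags)
    rows = []
    for t in range(t0, T):
        lagged_y = [y[t - i] for i in range(1, p_lags + 1)]
        lagged_x = [x[t - j] for j in range(1, s_lags + 1)]
        rows.append(np.concatenate([np.atleast_1d(v) for v in lagged_y + lagged_x]))
    return np.asarray(rows), y[t0:]
```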
Stable Predictor Selection
Consider the stationary sequence Z_1, . . . , Z_T in R^p. In the case of Model 2, p = dp̄ + ms. Let us define a variable selection procedure Ŝ_{l_T} = Ŝ_{l_T}(Z_{i_1}, . . . , Z_{i_{l_T}}), an estimator of the set of signal variables S ⊂ {1, . . . , p}, that takes as input a dependent random sample of length l_T and takes values in all subsets of {1, . . . , p}. Define the selection probability of a variable index k ∈ {1, . . . , p} as p_{k,l_T} = P(k ∈ Ŝ_{l_T}). The improved stability framework as presented in Shah and Samworth [2013] executes the base selection procedure Ŝ_{n/2} on B i.i.d. random samples of size n/2 and then combines the variable selection estimates. In our setting, because the stationary sequence is dependent, standard stability sub-sampling does not apply; instead, using the independent block technique from Yu [1994], we create "almost" independent blocks and then transfer the stability error control to i.i.d. blocks that have the same distribution as the original blocks.
Divide the sequence Z_1, . . . , Z_T into 2µ_T blocks of length a_T, and assume that T = 2µ_T a_T without loss of generality. Let O and E be the sets that denote the indices in the odd and even blocks respectively, with O_j = {i : 2(j-1)a_T < i ≤ (2j-1)a_T} and E_j = {i : (2j-1)a_T < i ≤ 2j a_T} for j = 1, . . . , µ_T. The intuition behind this approach is that for an appropriately mixing sequence the odd blocks are roughly independent, provided a_T is large enough (see Figure 3). And as we show later in the analysis, if we create "independent" blocks, with each new block having the same distribution as its dependent counterpart, we can use this duality to work with the original dependent time series.
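A short sketch of the odd/even block construction (index conventions are ours; 0-based indexing):

```python
import numpy as np

def odd_even_blocks(T, a_T):
    """Split range(T) into 2*mu_T consecutive length-a_T blocks and
    return the odd blocks O_j and even blocks E_j (0-based indices).
    Assumes T = 2 * mu_T * a_T as in the text."""
    mu_T = T // (2 * a_T)
    O = [np.arange(2 * j * a_T, 2 * j * a_T + a_T) for j in range(mu_T)]
    E = [np.arange((2 * j + 1) * a_T, (2 * j + 2) * a_T) for j in range(mu_T)]
    return O, E
```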
The block construction described leads to measures of stability for predictors, which we define next.
Definitions
Definition 1. Block Pair Average (BPA): for b = 1, . . . , B, draw uniformly at random a pair of disjoint subsets (O^{(2b-1)}, O^{(2b)}) of the odd blocks, each consisting of ⌊µ_T/2⌋ blocks, and set

Π^av_B(k) = \frac{1}{2B} \sum_{b=1}^{B} [1{k ∈ Ŝ(O^{(2b-1)})} + 1{k ∈ Ŝ(O^{(2b)})}],   (6)

with the BPA stable set Ŝ_av = {k : Π^av_B(k) ≥ φ} for a threshold φ ∈ (1/2, 1]. Algorithm 1 shows the use of the BPA measure in the context of a lasso base predictor.
Simultaneous Block Selection (SBS)
Definition 2. Simultaneous Block Selection (SBS): with the same random pairs as in Definition 1, set

Π^sim_B(k) = \frac{1}{B} \sum_{b=1}^{B} 1{k ∈ Ŝ(O^{(2b-1)}) ∩ Ŝ(O^{(2b)})}.   (7)

Note that measures (6) and (7) are dependent-data analogues of the i.i.d. measures presented in Shah and Samworth [2013] and serve the same purpose of reducing variance (via averaging) as estimators of p_{k,l_T}. For these we have l_T = ⌊µ_T/2⌋ a_T.
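Given the selection results of the base procedure on B pairs of disjoint odd-block subsets, the two measures can be computed as below; this sketch mirrors the complementary-pairs statistics of Shah and Samworth [2013], and the function name and input format are our own.

```python
import numpy as np

def bpa_sbs(selected_pairs, p):
    """selected_pairs: list of B pairs (S1, S2) of selected-variable sets,
    one pair per draw of two disjoint odd-block subsets.
    Returns (Pi_av, Pi_sim): the BPA and SBS proportions per variable."""
    B = len(selected_pairs)
    pi_av = np.zeros(p)
    pi_sim = np.zeros(p)
    for S1, S2 in selected_pairs:
        for k in S1:
            pi_av[k] += 1.0
        for k in S2:
            pi_av[k] += 1.0
        for k in set(S1) & set(S2):
            pi_sim[k] += 1.0
    return pi_av / (2 * B), pi_sim / B
```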
Finally, to analyze these measures we need the following definitions. Definition 3. For some θ ∈ (0, 1], let L_θ = {k : p_{k,l_T} ≤ θ} be the set of low selection probability features under Ŝ_{l_T}, and let H_θ = {k : p_{k,l_T} > θ} denote the set of features that have high selection probability; the complement N_av = {1, . . . , p} \ Ŝ_av serves as an estimator of the set of noise variables. The quantities E|Ŝ_av ∩ L_θ| and E|N_av ∩ H_θ| denote the expected number of low (noise) and high (signal) probability predictors that are included and excluded, respectively, by our procedure. We analyze these measures against a base predictor to quantify the improvements that our procedures yield.
Assumptions
We will refer to the following assumptions when necessary.
1. Assumption 1: The probability of selection under Ŝ_{l_T} is better than random; thus p_{k,l_T} ≥ p_0/p, where p_0 is the number of signal variables.
Assumption 4: The distribution of the sequence Π^sim_B(k) is unimodal for each k ∈ L_θ. This assumption is discussed in Section 3.2 of Shah and Samworth [2013] and reflects the empirical observation that for many different DGPs the distribution of Π^sim_B(k) is consistently unimodal, which leads to a sharper version of the Markov inequality employed in the analysis.
There are several equivalent definitions of β-mixing in the literature; we refer to Definition 2.2 in Yu [1994].
Block Pair Averaging - Analysis
For the BPA measure we have the following theorem.

Theorem 1. If φ ∈ {1/2 + 1/B, 1/2 + 3/(2B), . . . , 1} and θ < 1/√3, then under Assumptions 1, 2 and 4, and when µ_T(c_β/a_T^r) is sufficiently small, the block average procedure (6) with l_T = ⌊µ_T/2⌋ a_T = T/4 satisfies error control bounds on E|Ŝ_av ∩ L_θ| and E|N_av ∩ H_θ| relative to the base procedure.

Remark 4. The improved empirical performance obtained using the r-concavity assumption also carries over to our case, but for simplicity and generality we retain only the unimodality assumption. See Shah and Samworth [2013] for more details.
Remark 5. We can quite easily extend the BPA measure to the case of d-BPA, wherein we sample d blocks without replacement to get more conservative measures. This becomes obvious in the steps leading up to (15) in the proof of Theorem 1. However, for simplicity we retain only the discussion for d = 2. The i.i.d. analogue of this approach seems non-trivial.
Remark 6. The SBS measure is used in the proof of Theorem 1 and may also be used in practice besides BPA; its analysis is almost identical.
Empirical Results
To convince the reader of the utility of our methods we evaluate them on simulated and real time series data using the lasso as a base predictor. For the real time series we evaluate forecast errors on a held-out test period using the predictors selected by Algorithm 1 and compare them to other predictor selection methods such as the standard lasso, the elastic net (Zou and Hastie [2005]), the adaptive lasso (Zou [2006]), and AIC-based maximum lag selection for both p̄ and s (see Model 1). For these methods we use the standard method of cross-validation.
Algorithm 1 BPA Stable Selection
1: Input: Y ∈ R^T, Z ∈ R^{T×p}, φ ∈ (0.5, 0.9], q, B, a_T, λ = {λ_1, . . . , λ_100} a sequence of regularizers.
2: Initialize: Π^av_B(k) = 0, ∀k ∈ {1, . . . , p}.
3: q-Estimate: Solve min_B ||Y − BZ|| + λ||B||_1 and set λ_q ∈ λ to be the smallest λ that returns q active entries of B.
4: for n = 1 to B do

Note that the standard method of CV works for auto-regressive models as long as the errors are assumed to be uncorrelated; see Bergmeir et al. [2018] for a discussion. The post-selection training is restricted to least squares regression. The train-test split is 67%/33%, and the predictions are made in a rolling fashion, with each test data point added to the training set and the model re-trained to forecast the next step. On the simulated data we test the robustness of our method by adding varying degrees of noise to the simulated data and reporting true positive and false positive rates (TPR/FPR). We de-trend the real data in case of any obvious violations of stationarity.
Before we get into the details of the empirical results, we first discuss the choice of parameters and how to use Theorem 1 in the case of the lasso as a base selection method.
Theory in Practice
Since E|Ŝ_{l_T} ∩ L_θ| is not known in practice, we approximate it by q = E|Ŝ_{l_T}|. Setting q is equivalent to choosing the regularization parameter λ: we can vary λ until the lasso selects q predictors. Once q is set, we use θ = q/p to denote the irrelevant variables as those having less than average selection probability.
The appropriate threshold can then be determined based on Theorem 1 by solving for φ, where l ∈ (0, 1] is pre-specified. In general we can specify any two of q, φ and l to get the stable predictor set. We present results with different values of q for some threshold φ ∈ (0.5, 0.9]. B is set to 50, and a_T is set to some multiple of the seasonality in the data for all experiments; this reflects the empirical assumption that across seasons the data points have a weak dependence. In the absence of seasonality we did not see a significant difference against other choices of a_T such as √T or log T. See Algorithm 1 for a complete recipe of our method using the lasso base predictor. In the algorithm, Y(O_l) and Z(O_l) correspond to the data points from the corresponding block sequence (see also the proof of Theorem 1). The regularizer sequence is the one commonly used in the glmnet package (Friedman et al. [2010]). Note that in the proofs we require T/4 of the data to be used for the comparison against the base procedure; however, in Algorithm 1 and empirically we use all the data for determining the q-estimated active set.
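A compact, self-contained sketch of Algorithm 1 using scikit-learn's lasso path as the base procedure (scikit-learn in place of glmnet; variable names and the path-discretized choice of λ_q are our own simplifications):

```python
import numpy as np
from sklearn.linear_model import lasso_path

def bpa_stable_selection(Y, Z, q, B=50, a_T=24, phi=0.8, seed=0):
    """BPA stable selection with a lasso base predictor (sketch).

    Y: (T,) response; Z: (T, p) design matrix of candidate predictors.
    Returns (stable_set, Pi_av)."""
    rng = np.random.default_rng(seed)
    T, p = Z.shape

    # q-estimate: first penalty on a 100-point decreasing path with
    # at least q active entries (a path-based stand-in for the
    # "smallest lambda returning q actives" in step 3).
    alphas, coefs, _ = lasso_path(Z, Y, n_alphas=100)
    n_active = (np.abs(coefs) > 0).sum(axis=0)
    lam_q = alphas[np.argmax(n_active >= q)]

    mu_T = T // (2 * a_T)
    odd = [np.arange(2 * j * a_T, 2 * j * a_T + a_T) for j in range(mu_T)]
    half = mu_T // 2

    pi_av = np.zeros(p)
    for _ in range(B):
        perm = rng.permutation(mu_T)
        for blocks in (perm[:half], perm[half:2 * half]):  # disjoint pair
            idx = np.concatenate([odd[j] for j in blocks])
            _, c, _ = lasso_path(Z[idx], Y[idx], alphas=[lam_q])
            pi_av += np.abs(c[:, 0]) > 0
    pi_av /= 2 * B
    return np.flatnonzero(pi_av >= phi), pi_av
```

The returned stable set is {k : Π^av_B(k) ≥ φ}, as in Definition 1.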
Simulated Data
We simulate a synthetic time series with T = 10000 from an auto-regressive model with exogenous variables (AR(3)-X), Model (10), where the exogenous time series are simulated from a simple AR(2) model.

Figure 2: a) The TPR/FPR reveals that Algorithm 1 is much less sensitive to added noise and hence more robust. The competing procedures are either too conservative (adaptive lasso) or too accepting. b) BPA-ranked predictors. Dew point is the top predictor, and there is a causal mechanism for this: defined as the temperature at which dew droplets start to form, it not surprisingly impacts the PM2.5 measurement significantly.
Table 1 (column headers): Method, BeijingPM2.5, ShanghaiPM2.5, IndoorTemp, AirQuality; the rows report forecast errors for q1-BPA and the competing methods.

We also simulate 100 time series (AR(3)) for the dataset, along with 2 i.i.d. noise time series, to create a more realistic setting of several candidates with varying possibility of being included in the stable set. Our objective is to recover a stable set (lags and exogenous predictors) using Algorithm 1 for an AR(3)-X model with 104 exogenous time series. The parameters are normally distributed, β_i ∼ N(0, 0.05) for all i = 1, ..., 50.
We vary the amount of noise (N(0, σ²)) added to the predictors and the response, and plot the TPR/FPR with 1-standard-deviation error bars. To estimate the stable predictors using the BPA measure we set q = 40 and φ = 0.8. Figure 2a shows that the stable set is more robust to growing noise levels compared to the competing approaches. The TPR for the lasso and the elastic net is higher since they select many more predictors and consequently also have a much higher FPR. In contrast, the adaptive lasso is too conservative; Algorithm 1 has an FPR comparable to it but a much better TPR. Note that for σ² = 0 fewer than half of the original predictors in Model (10) show up in the selected set, since a pure least squares estimation of the model reveals that only a fraction of the 50 predictors are statistically significant.
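For readers who wish to reproduce the flavor of this experiment, the sketch below generates a toy AR(3)-X dataset and scores a selected set; the AR coefficients are illustrative stand-ins, since the paper's exact Model (10) coefficients are not reproduced here.

```python
import numpy as np

def simulate_arx(T=10000, n_exo=104, n_signal=50, sigma=0.5, seed=0):
    """Toy AR(3)-X generator in the spirit of Section 5 (illustrative)."""
    rng = np.random.default_rng(seed)
    x = np.zeros((T, n_exo))
    for t in range(2, T):                 # exogenous AR(2) series
        x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(size=n_exo)
    beta = np.zeros(n_exo)
    beta[:n_signal] = rng.normal(0.0, 0.05, n_signal)  # beta_i ~ N(0, 0.05)
    y = np.zeros(T)
    for t in range(3, T):                 # AR(3) response with exogenous part
        y[t] = (0.4 * y[t - 1] - 0.2 * y[t - 2] + 0.1 * y[t - 3]
                + x[t - 1] @ beta + rng.normal(0.0, sigma))
    return y, x, np.flatnonzero(beta)

def tpr_fpr(selected, true_set, p):
    """True/false positive rates of a selected predictor set."""
    sel, true = set(selected), set(true_set)
    tpr = len(sel & true) / max(len(true), 1)
    fpr = len(sel - true) / max(p - len(true), 1)
    return tpr, fpr
```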
Real Data

Chinese Cities PM2.5

The Chinese cities PM2.5 dataset (PM2) is an hourly sampled measurement of particulate matter concentration alongside several other exogenous meteorological time series. Our goal is to forecast the PM2.5 concentration of Beijing and Shanghai using exogenous and endogenous signals. The data-specific parameter settings are T = 52854, a_T = 3 × 24 (thrice the daily seasonality), p̄_max = 24 and s_max = 3 (the maximum endogenous and exogenous lags to select from).
Indoor Temperature
The indoor temperature dataset (SML) is a one-minute sampled (15-minute smoothed) set of measurements from a monitoring system that captures various attributes such as CO2 concentration, humidity, lighting, outdoor conditions, etc. The idea behind this dataset is to aid in accurate forecasting of the indoor temperature to better regulate the energy consumption of the HVAC system. For this analysis we only consider the impact of the exogenous signals on the temperature, so that p̄_max = 0 and s_max = 2. Other parameters are T = 4137, a_T = 96.
Air Quality - Temperature Prediction
The air quality dataset (air) is an hourly sampled set of measurements by chemical sensors that capture various attributes such as CO, NO2 and O3 concentrations. While the original purpose of this dataset is to estimate sensor estimation quality, we repurpose it to model temperature as a function of these concentrations. For this analysis we set p̄_max = 24 and s_max = 3. Other parameters are T = 9358, a_T = 24.
Results and Discussion
Table 1 reveals that the BPA measure performs better on all but one of the datasets. For the BeijingPM2.5 dataset it appears that all variables contribute to the improved RMSE. Note that the choice of q does make a difference to the results, as too small a value would exclude signal variables (this appears to be the case for BeijingPM2.5) and a large q would add noisy variables to the selected set. Some prior knowledge of the domain can be useful here, but in its absence a largish q (for example 0.5p) seems to be a safe choice (due to the guaranteed error control for the BPA measure), as the results indicate that for most datasets the RMSE only degrades a little. From a causality perspective, Figure 2b reveals that dew point (the temperature at which dew droplets form) and precipitation are important variables. This is corroborated by meteorological studies such as Liang et al. [2015].
Conclusion
Filling the gap in the feature selection literature for dependent data, we proposed novel stable predictor selection techniques in the time series setting. For robustness scores that can be used with any selection method, we provided theoretical results guaranteeing error control. The stable predictors selected by our method were shown to have superior predictive performance on several real datasets. In the future we intend to explore such a scheme and measures for nonstationary and heteroscedastic time series.
Appendix
We gather all the proofs in this section.
Proof of Theorem 1
Proof. Denote the sequences of random variables that correspond to the O_j and E_j indices as Z(O_j) and Z(E_j), respectively. Consider a sequence of i.i.d. blocks {Z̃(O_j) : j = 1, . . . , µ_T}, where Z̃(O_j) = {Z̃_i : i ∈ O_j}, such that the sequence is independent of Z_1, . . . , Z_T and each block has the same distribution as the corresponding block Z(O_j) of the original sequence.
Let h(O) be a bounded measurable function on the set of selected blocks, and let Õ be the corresponding i.i.d. sequence of blocks from {Z̃(O_j)}; the expectations below are taken with respect to the distributions of the original and constructed sequences. For a measurable and bounded function h with |h| ≤ M, we have from Lemma 4.1 of Yu [1994]

|E h(O) − E h(Õ)| ≤ M (µ_T − 1) β(a_T).   (13)

Using (13) for h(O) = 1{Π^sim_B(k) ≥ 2φ−1}, applying the unimodal Markov inequality for the simultaneous selector on the constructed independent blocks {Õ} (from Theorem 3 of Shah and Samworth [2013]), and finally using the fact that the sequence is stationary (all blocks have the same distribution), we obtain the selection-probability bound (15). Using the assumption p_0/p ≤ p_{k,a_T} (Assumption 1), it is easy to see that when µ_T(c_β/a_T^r) ≤ C(φ, B) p^2, the bound (15) implies the claimed error control (17). For the second part, replace Ŝ_av by N_av as an estimator of N, the set of noise variables, and let Π^av_{B,N} and Π^sim_{B,N} denote the corresponding noise variable estimators. With h(Õ) = 1{Π^sim_{B,N}(k) ≥ 2φ−1} and P(k ∈ N_av) = P(k ∉ Ŝ_av), the same argument bounds the terms (1 − p_{k,l_T}) 1{p_{k,l_T} > θ} and yields E|N_av ∩ H_θ| ≤ 2(1 − θ) C(φ, B) E|N_{l_T} ∩ H_θ|. | 2019-05-18T23:45:32.000Z | 2019-05-18T00:00:00.000 | {
"year": 2019,
"sha1": "527c90a80a802d51d19e327d4dc9b31dcac98b04",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "527c90a80a802d51d19e327d4dc9b31dcac98b04",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
50773272 | pes2o/s2orc | v3-fos-license | Fumarase: From the TCA Cycle to DNA Damage Response and Tumor Suppression
Fumarase is an enzyme of the tricarboxylic acid (TCA) cycle in mitochondria, but in recent years it has emerged as a participant in the response to DNA double-strand breaks (DSBs) in the nucleus. In fact, this enzyme is dual-targeted and can also be readily detected in the mitochondrial and cytosolic/nuclear compartments of all the eukaryotic organisms examined. Intriguingly, this evolutionarily conserved cytosolic population of fumarase, its enzymatic activity and the associated metabolite fumarate are required for the cellular DNA damage response (DDR) to double-strand breaks. Here we review findings from yeast and human cells regarding how fumarase and fumarate may precisely participate in the DNA damage response. In yeast, cytosolic fumarase is involved in the homologous recombination (HR) repair pathway, through its function in the DSB resection process. One target of this regulation is the resection enzyme Sae2. In human cells, fumarase is involved in the non-homologous end joining (NHEJ) repair pathway. Fumarase is phosphorylated by the DNA-dependent protein kinase (DNA-PK) complex, which induces the recruitment of fumarase to the DSB and local generation of fumarate. Fumarate inhibits the lysine demethylase 2B (KDM2B), thereby facilitating the dimethylation of histone H3, which leads to the repair of the break by the NHEJ pathway. Finally, we discuss the question of how fumarase may function as a tumor suppressor via its metabolite substrate fumarate. We offer a number of models which can explain an apparent contradiction regarding how fumarate absence/accumulation, as a function of subcellular location and stage, can determine tumorigenesis. Fumarate is, on the one hand, a positive regulator of genome stability (its absence supports genome instability and tumorigenesis) while, on the other hand, its accumulation drives angiogenesis and proliferation (thereby supporting tumor establishment).
INTRODUCTION
The maintenance of genome integrity is one of the most important problems of all living organisms. An average human cell suffers approximately one hundred thousand different DNA lesions each day (Lindahl and Barnes, 2000; Alberts et al., 2004; Jackson and Bartek, 2009). Failure to repair the damaged DNA can lead to disease, the most prominent of which is cancer (Hanahan and Weinberg, 2011; O'Driscoll, 2012). DNA double-strand breaks (DSBs) are among the most cytotoxic damages that can be inflicted on our genetic material. Defective repair of these lesions can lead to gross chromosomal rearrangements, such as large deletions, translocations and insertions. Such rearrangements can lead to loss of tumor suppressor genes and oncogene misexpression, both of which have been implicated in cancer induction and progression (Lengauer et al., 1998; Richardson and Jasin, 2000; van Gent et al., 2001; Shiloh and Lehmann, 2004; Hanahan and Weinberg, 2011; O'Driscoll, 2012). Thus, identifying and characterizing unknown factors that play a role in the response to DSBs is extremely important.
FUMARASE, ITS CANONICAL FUNCTION AND SUBCELLULAR LOCATIONS
Fumarase is a member of the class II fumarase enzymes, which are conserved from prokaryotes to humans. In the yeast S. cerevisiae, the enzyme fumarase is encoded by the FUM1 gene, whose product is a homotetramer with a molecular weight of about 200 kDa (Wu and Tzagoloff, 1987; Woods et al., 1988; Burak et al., 2013). Fumarase catalyzes the hydration of fumarate to L-malate and the reverse dehydration reaction (Mann and Woolf, 1930; Woods et al., 1988). Fumarase is found in mitochondria, where it participates in the tricarboxylic acid (TCA) cycle.
In addition to the mitochondrial fumarase, the enzyme can also be found in the cytosolic compartment. The cytosolic localization of fumarase is highly conserved, as the enzyme can be found in the cytosol of most eukaryotes, extending from yeast to human (Tolley and Craig, 1975; Edwards and Hopkinson, 1979; Kobayashi and Tuboi, 1983; Akiba et al., 1984; O'Hare and Doonan, 1985; Wu and Tzagoloff, 1987). These dually localized proteins are coined "echoforms," indicating repetitious forms of the same protein distinctly placed in the cell. There are a number of known mechanisms that regulate the subcellular distribution of fumarase in eukaryotes (Figure 1). In S. cerevisiae, both the cytosolic and mitochondrial fumarase echoforms are encoded by the FUM1 gene (Wu and Tzagoloff, 1987). In the course of translation, a subset of FUM1 translation products, which are partially translocated, fold outside mitochondria and are blocked for full mitochondrial import by a mechanism termed reverse translocation (Figure 1, right, S. cerevisiae). Upon translation termination, these folded translation products remain in the cytosol, constituting the cytosolic fumarase population (Stein et al., 1994; Sass et al., 2001, 2003; Yogev and Pines, 2011; Kalderon and Pines, 2014). In human cells, the human homolog of fumarase, termed fumarate hydratase (FH), is expressed from a single gene (Figure 1, top middle) (van Someren et al., 1974; Craig et al., 1976). The fumarase gene promoter was shown to contain multiple transcription start sites from which two groups of fumarase mRNAs are transcribed. The first group includes transcripts which are translated into proteins that contain the fumarase mitochondrial targeting sequence (MTS), while the second group translates into fumarase proteins which lack this sequence. Following translation, these two versions of the protein constitute the mitochondrial and cytosolic echoforms of fumarase, respectively (Dik et al., 2016). In rat liver, it has been suggested that the two translation products (as above in human), one containing and one lacking the MTS, are formed by alternative translation initiation (Figure 1, top left) (Suzuki et al., 1989; Tuboi et al., 1990). This same situation of two translation products is achieved in Arabidopsis thaliana by two nearly identical genes, one that encodes fumarase with an MTS and one that lacks the MTS (Figure 1, bottom) (Pracharoenwattana et al., 2010).
FIGURE 1 | Mechanisms of fumarase dual targeting in different organisms. In S. cerevisiae all fumarase molecules are first targeted to mitochondria, begin their translocation and are processed by the mitochondrial processing peptidase (MPP). Some of the molecules move back to the cytosol in a process termed "reverse translocation": if folding of the fumarase protein molecule starts in mitochondria it will be localized to the mitochondrial matrix; however, if folding of the protein molecule starts outside mitochondria it will reside in the cytosol. In other words, the folding of fumarase is the driving force for its localization. In human, a single fumarase gene encodes two groups of mRNAs, either encoding a full-length mitochondrial precursor that harbors an MTS or a shorter cytoplasmic polypeptide that lacks it. In rat, a single fumarase gene encodes a single mRNA, which by differential translation initiation produces a full-length mitochondrial precursor that harbors an MTS and a shorter cytoplasmic polypeptide that lacks it. A. thaliana harbors two highly homologous fumarase genes that encode a mitochondrial or a cytosolic protein, containing or lacking an MTS, respectively. Sequences encoding or indicating the mature fumarases are in green lines for DNA, purple for mRNA and light blue for protein. The MTS sequences are indicated by yellow lines and ribosomes are colored red.
CYTOSOLIC FUMARASE PLAYS A ROLE IN THE DNA DAMAGE RESPONSE (DDR) TO DNA DOUBLE STRAND BREAKS (DSBs)
With the canonical role of fumarase being in the TCA cycle and mitochondria, it was unclear what the function of the enzyme in the cytosol is. To address the question of the cytosolic fumarase function in S. cerevisiae, Yogev et al. constructed a strain termed Fum M. The FUM1 gene in this strain was deleted from its original location on chromosome 16 and inserted into the mitochondrial DNA. This resulted in the depletion of cytosolic fumarase, while the mitochondrial population of the enzyme was retained, thus presenting an opportunity to determine the cytosolic function of fumarase (Yogev et al., 2010).
The Fum M strain exhibited significant sensitivity to HO-induced DSBs, γ-irradiation and DSB-inducing chemicals. As a consequence of DSB induction in wild type (WT) yeast, fumarase expression levels increased and the enzyme was now also found in the cell nucleus. Expression of cytosolic fumarase, or exposure of the cells to fumarate, suppressed the DSB sensitivity of the Fum M strain (Yogev et al., 2010). We conclude that the enzymatic activity of cytosolic fumarase is important for the DNA damage response (DDR) to DSBs. Yogev et al. also showed that fumarase is required for the DSB DDR in human cell lines. Following DSB induction, the cellular levels of fumarase increased and localization of the protein to the nucleus was observed. In addition, fumarase knockdown has been shown to increase cell susceptibility to ionizing radiation and hydroxyurea (HU) induced DSBs (Yogev et al., 2010). These results from human and yeast cells were the basis for our original model of fumarase function in the DDR (Figure 2A). Worth mentioning here, as will be referred to in the "Concluding remarks," is the finding that a bacterial fumarase (of Bacillus subtilis, Fum-bc) is induced upon DNA damage, co-localizes with the bacterial DNA and participates in the DDR (Figure 2B). Thus, the dual function of fumarase in the TCA cycle and the DDR may be an ancient feature of prokaryotes and eukaryotes.

FIGURE 2 | (A) Fumarase is a TCA cycle enzyme which catalyzes the conversion of fumarate to L-malate in the mitochondria. Upon DNA damage the cytosolic echoform of fumarase is localized to the nucleus; there, its enzymatic activity catalyzes the reverse conversion of malate to fumarate, causing local accumulation of fumarate. This accumulation of fumarate (by fumarase) is required for the proper function of the DNA damage response (DDR) to double strand breaks (DSBs), in both human (FH) and yeast (Fum1) cells, via targets such as KDM2B and Sae2, respectively. (B) Bacterium, Bacillus subtilis. Fumarase of Bacillus subtilis (Fum-bc) is also a TCA cycle enzyme and is induced upon DNA damage. Fum-bc is co-localized with the bacterial DNA. Fum-bc dependent intracellular signaling of the B. subtilis DNA damage response is achieved via production of L-malate, which affects the translation of RecN, the first protein recruited to DNA damage sites (Singer et al., 2017). Blue circles indicate fumarase in the different organisms (human, yeast, and bacteria).

YEAST FUMARASE IS INVOLVED IN DSB RESECTION

Leshets et al. have found that yeast cytosolic fumarase is important for the HR repair pathway, through its function in the initial step of the DSB resection process (Figure 3, right) (Leshets et al., 2018). Supporting this notion, no genetic interactions were detected with the extensive resection factors Exo1 and Sgs1 (Leshets et al., 2018). Moreover, previous publications indicated that during the initial step of resection 50 to 1,600 bases of DSB-flanking DNA can be processed (Mimitou and Symington, 2008; Zhu et al., 2008; Garcia et al., 2011). In that study, the resection assay measures resection 0.29 kbp upstream of the HO cut site. If, following the depletion of cytosolic fumarase, only the extensive step of resection were affected, one would detect at least some level of initial resection 0.29 kbp from the DSB. In fact, resection was not detected, suggesting that cytosolic fumarase is involved in the initial step of the DSB resection process (Leshets et al., 2018).
The functional interaction between Sae2 and cytosolic fumarase further supports the role of fumarase in the initial step of resection. The interaction was first suggested by the similar phenotypes of the Fum M and sae2 mutant strains. Both strains exhibit postponed dissociation of Mre11 from DSBs, decreased resection and impaired kinetics of DSB repair (Lisby et al., 2004; Clerici et al., 2005; Ferrari et al., 2015). The DSB susceptibility of the Fum M strain was partially suppressed by overexpression of Sae2, and this reconstituted its resection capacity. A split-ubiquitin assay indicated that these proteins physically interact in vivo, and a direct interaction in vitro was shown by a column retention assay. We still did not know whether cytosolic fumarase acts upstream of Sae2 and, if so, how it regulates this endonuclease. One hint was the reduced protein level of Sae2 in cytosolic fumarase depleted cells, suggesting that fumarase acts upstream of Sae2 and regulates its protein abundance. In this regard, cytosolic fumarase regulation of Sae2 is at the protein level, and not at the Sae2 mRNA level (Leshets et al., 2018). It is possible that cytosolic fumarase may enhance the translation of Sae2 or have a negative effect on its degradation.
Mre11 nuclease activity has been shown to be part of the initial step in the resection process in both human and yeast cells (Cannavo and Cejka, 2014; Anand et al., 2016). Exposure to the metabolite fumarate can suppress the DSB sensitivity of the Mre11 nuclease-dead (mre11-nd) mutant cells (Leshets et al., 2018), suggesting that fumarate is involved in the resection process. Nevertheless, how fumarate affects Sae2 is still unclear.
FIGURE 3 | Fumarase functions in the human and yeast DNA damage response (DDR) to double-strand breaks (DSBs). Two DSB repair pathways in human and yeast are non-homologous end joining (NHEJ) and homologous recombination (HR). Left panel: Upon DSB formation in human cells, fumarase (FH) is phosphorylated on Thr236 by the DNA-dependent protein kinase (DNA-PK) complex. This modification induces the recruitment of fumarase to the DSB and local generation of fumarate. Fumarate inhibits the lysine demethylase 2B (KDM2B), thereby facilitating the dimethylation of histone H3 on lysine 36 (H3K36me2) by the SET domain and mariner transposase fusion protein (SETMAR). This leads to the repair of the break by the NHEJ pathway. Right panel: The repair of a DSB by the HR pathway requires that the DNA flanking the DSB undergo resection. In yeast, the resection process is orchestrated by the Mre11-Rad50-Xrs2 complex, Sae2, Exo1, and the Dna2-STR complex. The yeast cytosolic fumarase and the metabolite fumarate affect the DSB resection process by regulating the protein level of the resection factor Sae2.
DOES YEAST CYTOSOLIC FUMARASE HAVE ADDITIONAL ROLES IN THE DSB DDR PATHWAY?
We assume that cytosolic fumarase may be important for the DDR not only through its functional relationship with Sae2. Supporting this assumption is the fact that fumarase and Sae2 are not epistatic (Leshets, thesis, 2018). It has been previously shown that the depletion of Sae2 only partially impairs the resection process (Clerici et al., 2005; Ferrari et al., 2015). In comparison, the inhibition of resection is much more profound upon cytosolic fumarase depletion (Leshets et al., 2018). These observations suggest that cytosolic fumarase may be involved with additional resection factors. In this regard, no genetic interactions have been detected with Exo1 or Sgs1 (Leshets et al., 2018).
HUMAN FUMARASE AND THE NHEJ PATHWAY
A consequence of DSB formation is the phosphorylation of fumarase on Thr 236 by the DNA-dependent protein kinase (DNA-PK) (Jiang et al., 2015). This phosphorylation is required for the recruitment of fumarase to the DSB. Following its recruitment, fumarase-mediated fumarate production inhibits the α-ketoglutarate-dependent lysine demethylase 2B (KDM2B). KDM2B inhibition increases histone H3 dimethylation on lysine 36 (H3K36me2), which leads to accumulation of the DNA-PK complex and subsequent repair of the break by NHEJ (Figure 3, left). The phosphorylation of histone H2AX (γ-H2AX) is a central event during the DSB DDR. One of the kinases shown to induce H2AX phosphorylation is the DNA-PK complex. Interestingly, the mutation of Thr 236 of fumarase does not affect γ-H2AX levels, even though it would be expected to do so due to the reduced DNA-PK accumulation at the DSB (Stiff et al., 2004; An et al., 2010; Jiang et al., 2015). The aberrant kinetics of H2AX phosphorylation upon fumarase knockdown were previously described by Yogev et al. and confirmed by Jiang et al.; nevertheless, the Thr 236 mutation did not affect γ-H2AX (Yogev et al., 2010; Jiang et al., 2015). This observation suggests that fumarase's role in the DSB DDR is not restricted to the NHEJ pathway (e.g., HR as in yeast).
IS HUMAN FUMARASE ALSO INVOLVED IN THE HR PATHWAY?
A functional relationship between fumarase and α-ketoglutarate-dependent histone demethylases is very intriguing due to the emerging importance of histone methylation for the DDR. Indeed, fumarase has been shown to influence the global histone methylation pattern, and fumarate was shown to inhibit several members of the KDM family, including KDM4A (Xiao et al., 2012). KDM4A is a demethylase capable of converting H3K36me3 to H3K36me2, while the SET domain-containing protein 2 (Setd2) methyltransferase is responsible for the generation of H3K36me3 (Whetstine et al., 2006; Edmunds et al., 2008). These observations suggest that fumarase-dependent fumarate production may inhibit KDM4A, thus facilitating the generation of H3K36me3 by Setd2. This histone modification is especially intriguing due to the fact that H3K36me3 has been shown to be important for the repair of DSBs by the HR pathway (Carvalho et al., 2014; Pfister et al., 2014). This is supported by the observation that transcriptionally active chromatin, which is marked by H3K36me3, is preferentially repaired by HR.
It has been proposed that H3K36me3 is important for HR repair due to its involvement in the DSB resection process. Two comprehensive studies proposed that Setd2-dependent H3K36me3 is required for recruitment of the lens epithelium-derived growth factor p75 splice variant (LEDGF) to the chromatin. Following DSB induction, LEDGF has been shown to recruit CtIP, which facilitates the initiation of resection (Sartori et al., 2007; Daugaard et al., 2012; Pfister et al., 2014; Anand et al., 2016). In concert, these results may imply that fumarase-facilitated H3K36 tri-methylation can induce DSB resection, thus committing the cell to the repair of the DSB by HR.
The deduction above and the results presented by Jiang et al. propose a complex model of fumarase involvement in the response to DSBs in human cells. Fumarase-mediated fumarate production may facilitate the generation of H3K36me2 or H3K36me3 by the inhibition of KDM2B or KDM4A, respectively (Xiao et al., 2012; Jiang et al., 2015). These suggested capacities of fumarase support its importance in both the NHEJ and HR repair pathways.
HUMAN FUMARASE FUNCTIONS AS A TUMOR SUPPRESSOR
Fumarase was shown to be a tumor suppressor. Heterozygous mutations in the fumarase gene are associated with the hereditary leiomyomatosis and renal cell cancer (HLRCC) syndrome. Patients with HLRCC can suffer from multiple uterine and cutaneous leiomyomas and tend to develop type II papillary renal cell carcinoma. HLRCC is a dominantly inherited syndrome and is considered a two-hit condition. Essentially all HLRCC tumors of patients exhibit inactivation of both fumarase alleles. These findings emphasize that the complete loss of fumarase activity is required for the tumorigenesis process (Reed et al., 1973; Kiuru et al., 2001; Launonen et al., 2001; Tomlinson et al., 2002). While there is a well-known involvement of fumarase in HLRCC, mutations in it are rarely detected in sporadic tumors. Nonetheless, biallelic inactivation of fumarase has been reported in some cases of uterine leiomyomas, soft tissue sarcoma, and type II papillary renal cell carcinomas (Barker et al., 2002; Kiuru et al., 2002; Lehtonen et al., 2004; Gardie et al., 2011).
Much effort has been put into determining the mechanism by which fumarase functions as a tumor suppressor. The sole leading model for some years was that the loss of fumarase activity and the buildup of fumarate concentrations inhibit PHD1, 2 and 3, which are α-ketoglutarate-dependent prolyl hydroxylase enzymes. PHD inhibition stabilizes the α subunit of HIF (hypoxia-inducible transcription factor), which leads to the establishment of an active HIF transcription complex. High levels of HIF have been shown to enhance angiogenesis and glucose metabolism, both of which are known to be essential for tumorigenesis (Isaacs et al., 2005; Pollard et al., 2005; Selak et al., 2005; Vanharanta et al., 2006; Hanahan and Weinberg, 2011). Nevertheless, the recent data discussed above suggest that fumarase is also important for maintaining genomic stability (Yogev et al., 2010; Jiang et al., 2015). According to this scenario, the loss of fumarase, as a guardian of genome integrity, can also contribute to the development of cancer.
THE PARADOXICAL NATURE OF THE MECHANISM BY WHICH FUMARASE ACTS AS A TUMOR SUPPRESSOR
There are two proposed models for the activity of fumarase as a tumor suppressor. In the first, the loss of fumarase is suggested to block the TCA cycle in mitochondria, causing the accumulation of fumarate, which subsequently leads to the stabilization of HIF (Isaacs et al., 2005; Pollard et al., 2005; Selak et al., 2005; Vanharanta et al., 2006). The second model suggests that the loss of fumarase diminishes the ability of the cell to generate fumarate, thereby compromising genomic stability (Yogev et al., 2010; Jiang et al., 2015). In both models, fumarate is the key effector molecule, but in the first model it induces an oncogenic effect, while in the second, fumarate acts as an inhibitor of oncogenicity. Considering both models, the problem is that inactivation of fumarase in a cell is proposed to lead both to accumulation of fumarate, due to the blockage of the TCA cycle, and to an inability to generate fumarate for the DDR. This discrepancy raises the question of how these two apparently contradictory models can be reconciled.
The first plausible answer to this question may rely on the possibility that, in order to function in the DDR, fumarase must generate high concentrations of fumarate near the cellular DNA, in proximity to the DSB (Figure 4A). This possibility is supported by the fact that upon DSB induction fumarase was shown to localize to the cell nucleus and even form nuclear foci (Yogev et al., 2010; Jiang et al., 2015). Considering this, it is plausible that upon biallelic inactivation of the enzyme, the increase in the cellular concentration of fumarate is sufficient for the stabilization of HIF, but not high enough to compensate for the lack of fumarase near the cellular DNA. The second possibility, which is an extension of the first, argues that the two mechanisms by which fumarase acts as a tumor suppressor occur at different stages of tumor development (Figure 4B). In the first stage, biallelic inactivation of fumarase in a single tumor cell may abolish the ability of the cell to generate fumarate in the proximity of the cellular DNA, thereby decreasing genomic stability. Nevertheless, at this stage the fumarate concentration in the single tumor cell may not be sufficient for HIF stabilization. In the second stage, proliferation of the fumarase-deficient cells may form a closely positioned cell population, in which the fumarate concentration required for HIF stabilization can be achieved. According to this scenario, the loss of fumarase first reduces the genomic stability of the cell, and only later causes HIF stabilization (Figure 4B). Unfortunately, while fumarate levels have been determined in established FH-deficient tumors, there are no data regarding the levels of this metabolite at the stages at which the loss of FH occurs. Until such measurements of fumarate levels at different stages of tumor development are available, the second scenario above, although plausible, remains speculative.
CONCLUDING REMARKS
Fumarase is a highly conserved metabolic enzyme of the TCA cycle which is involved in the two main DSB repair pathways in eukaryotes: NHEJ in human and HR in yeast. The enzyme and its associated metabolite fumarate interact with and affect different components of the DDR pathways (e.g., KDM2B, Sae2; Figures 2A, 3). To this setting we can add a recent study by Singer et al. which shows that fumarase in prokaryotes already possessed both TCA cycle and DDR functions (Singer et al., 2017). Fumarase of Bacillus subtilis (Fum-bc) is induced upon DNA damage, co-localizes with the bacterial DNA and participates in the DDR (Figure 2B). Intriguingly, Fum-bc can complement both eukaryotic functions (TCA cycle and DDR) when expressed in yeast. Fumarase-dependent intracellular signaling of the B. subtilis DDR is achieved via production of L-malic acid, which affects the translation of RecN, the first protein recruited to DNA damage sites (Singer et al., 2017). Thus, different fumarase-related metabolites function in the DDR of different organisms. One take-home message is that the fumarase-related metabolites are the active molecules in the DDR, but they must be delivered at specific locations in the cell, and that is why the enzyme that produces these molecules must be localized to the right place in the cell.
With respect to evolution, it appears that for fumarase the two functions came first, already in the prokaryote, thereby creating the driving force for dual localization of the protein in the eukaryotic cell. The notion that during evolution cellular functions such as the DDR can recruit different primary metabolites as signaling molecules is exciting.
Recent studies have extended our understanding of the possible functions of fumarase in the DDR. Nevertheless, additional inquiries are needed in order to decipher the complex role of this enzyme and its associated metabolites in the different DDR pathways. A deeper comprehension of this role will help us fully understand the function of fumarase in health and disease, and in particular its function as a tumor suppressor (Figure 4).
AUTHOR CONTRIBUTIONS
ML and YS are Ph.D. students, NL is a collaborator, and OP is an expert on protein dual targeting and dual function, in particular regarding the enzyme fumarase.
FUNDING
This work was supported by grants to OP from the Israel Science Foundation (ISF grant number 1455/17) and the German Israeli Project Cooperation (DIP grant number P17516). NL and OP were supported by The CREATE Project of the National Research Foundation of Singapore. | 2018-07-25T13:03:24.626Z | 2018-07-25T00:00:00.000 | {
"year": 2018,
"sha1": "0ef55e38261835076539efe0e52f44b1870eb796",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmolb.2018.00068/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ef55e38261835076539efe0e52f44b1870eb796",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
118457785 | pes2o/s2orc | v3-fos-license | Classification of Floquet Statistical Distribution for Time-Periodic Open Systems
How to understand the ordering of Floquet stationary states in the presence of external bath coupling, and their statistical mechanics, is challenging; the answers are important for the preparation and control of those Floquet states. Here, we propose a scheme to classify the statistical distribution of Floquet states for time-periodic systems coupled to an external heat bath. If an effective Hamiltonian and a system-bath coupling operator, both time-independent, can be simultaneously obtained via a time-periodic unitary transformation, the statistical mechanics of the Floquet states is equivalent to the equilibrium statistical mechanics of the effective Hamiltonian. In the large driving frequency case, we also show that the conditions of this theorem can be weakened to: the time-periodic part of the system Hamiltonian commutes with the system-bath coupling operator. A Floquet-Markov approach is applied to numerically compute the Floquet state occupation distribution of a bosonic chain, and the results agree with the theoretical predictions.
Introduction. It has been proposed that certain exotic quantum phenomena, e.g. topological insulators, quantum Hall states, Majorana fermions, and quantum phase transitions, may be generated by using time-periodic external fields [1-15]. Some related experimental signatures have been reported recently [16,17]. The knowledge of those phenomena mostly comes from the Floquet theorem for time-periodic quantum mechanics, and the states in those systems belong to a certain type of non-equilibrium stationary states, the Floquet states [18,19]. In reality, interactions or heat-bath couplings always result in relaxation and re-distribution among the Floquet states. In that case, one has to understand the statistical mechanics of those Floquet systems.
Thermodynamics and statistical mechanics of time-periodic quantum systems have long been an elusive topic. Recent studies of time-periodic isolated interacting quantum systems show that a periodically driven ergodic system relaxes to an infinite-temperature distribution in the thermodynamic limit [20-22]. For time-periodic open quantum systems (i.e. with an external bath), studies based on a Floquet-Markov approach show that the occupation distributions of Floquet states have nontrivial behaviors [23-28]: the occupation distribution has a Boltzmann-like weight in some regimes, and almost equal probabilities in others. Those behaviors are also related to the regular and chaotic regions of the Poincaré section of the classical phase space [28]. It is clear that the statistical properties of Floquet states do not obey the standard equilibrium statistical mechanics. When and how the statistical properties start to deviate from standard equilibrium statistical mechanics is still unclear for time-periodic open systems. We also want to ask whether those non-trivial behaviors depend on the form of the time-periodic modulation and the system-bath coupling, and when the open Floquet systems resemble equilibrium systems. Understanding those questions would be helpful for the experimental preparation and control of Floquet states.
Summary of the main results. We study the statistical mechanics of a generic open Floquet system, modeled by a time-periodic system (whose time-independent part has a finite energy band) coupled to a heat bath. We find a general classification theorem for the statistical properties of Floquet states: if both the time-periodic system Hamiltonian and the system-bath coupling operator can be simultaneously transformed to time-independent forms via a time-periodic unitary transformation, the statistical mechanics of the Floquet states, e.g. the concept of temperature and the Floquet occupation distribution, is exactly the same as the standard equilibrium statistical mechanics of the corresponding time-independent effective Hamiltonian. The order of the Floquet states (quasi-energies) can only be understood from the order of the corresponding effective Hamiltonian eigenvalues. For large driving frequency $\omega > D$ (where $D$ is the largest energy scale, e.g. the band width), the effective Hamiltonian can be obtained perturbatively up to a given order in $D/\omega$ [29]. Up to the leading order, the conditions of the theorem (when equilibrium statistical mechanics works) can be weakened to $[H_D, A_S] = 0$, where $H_D$ is the time-periodic part of the system Hamiltonian and $A_S$ is the system-bath coupling operator. On the other hand, if those conditions are not satisfied, the Floquet occupation distribution does not follow the Boltzmann distribution, and one cannot define a temperature. To test the theorem, we consider a one-dimensional tight-binding chain of bosons in the presence of a heat bath, and numerically compute the occupation distribution of the Floquet states.
Theoretical model: open Floquet systems. We consider a time-periodic quantum system with an external heat bath; the whole system can be modeled in a standard way [30-32] as $H = H_S(t) + H_B + H_{SB}$, where the quantum system is periodic in time, $H_S(t) = H_S(t+T)$, with time period $T$. The heat bath is modeled by an ensemble of harmonic oscillators, $H_B = \sum_n \left[ p_n^2/(2m_n) + m_n \omega_n^2 x_n^2/2 \right]$. One usually assumes that the coupling between the system and bath is bilinear, $H_{SB} = \gamma A_S \otimes \sum_n c_n x_n$, where $\gamma$ is the coupling strength and $A_S$ is a system-bath coupling operator. Without the heat bath, the solution of the Schrödinger equation for a time-periodic Hamiltonian can be obtained from the Floquet theorem [18,19]: the wavefunction can be factorized as $|\psi_\alpha(t)\rangle = e^{-i\epsilon_\alpha t}|\phi_\alpha(t)\rangle$ (hereafter $\hbar = 1$), where $\epsilon_\alpha$ is called the quasi-energy and $|\phi_\alpha(t)\rangle = |\phi_\alpha(t+T)\rangle$ is called a Floquet state. Therefore, one reaches a time-independent eigenvalue problem for the Floquet operator, $[H_S(t) - i\partial_t]\,|\phi_\alpha(t)\rangle = \epsilon_\alpha |\phi_\alpha(t)\rangle$. With the heat bath, the system-bath coupling induces transitions, and thus relaxation, between different Floquet states, so we have to consider their statistical properties, e.g. the occupation distribution. The occupation distribution of some Floquet model systems was studied using the Floquet-Markov approach [23,25-28]; those studies show that the concepts of equilibrium statistical mechanics are not generally applicable. Therefore, one may ask: can we find a way to classify the statistical distribution via the time-periodic Hamiltonian and the system-bath coupling operator?
A general classification theorem for Floquet statistical mechanics. Since both the system Hamiltonian $H_S$ and the Floquet states $|\phi_\alpha(t)\rangle$ are periodic in time, one can apply the Fourier expansion $H_S(t) = \sum_n H_{S,n} e^{-in\omega t}$, with $\omega = 2\pi/T$, and rewrite the Floquet operator $H_{S,F} = H_S(t) - i\partial_t$ in an extended matrix form whose block matrix elements are the Fourier coefficients $H_{S,n}$, with the $-i\partial_t$ term contributing $n\omega$ on the diagonal blocks. If we can find a unitary transformation $U_F$ that block-diagonalizes this matrix, $U_F^\dagger H_{S,F} U_F = \mathrm{Diag}[\cdots, H_{S,\mathrm{eff}} - \omega, H_{S,\mathrm{eff}}, H_{S,\mathrm{eff}} + \omega, \cdots]$, the quasi-energies $\epsilon_\alpha$ are related to the eigenvalues $E_{\mathrm{eff},\alpha}$ of $H_{S,\mathrm{eff}}$ via the relation $\epsilon_\alpha = \mathrm{mod}(E_{\mathrm{eff},\alpha}, \omega)$. Note that the unitary transformation $U_F$ of the matrix Hamiltonian Eq. (3) is equivalent to a time-periodic unitary transformation $\hat{U}_F(t)$ acting on the original time-dependent problem. Just like $H_{S,F}$, the system-bath coupling can also be written in an extended matrix form, $H_{SB} = \gamma\,\mathrm{Diag}[\cdots, A_S, A_S, A_S, \cdots] \otimes \sum_n c_n x_n$. To study the system-bath coupling in the Floquet picture, we can apply the same unitary transformation $U_F$, which block-diagonalizes $H_{S,F}$, to the system coupling operator; we then obtain transformed coupling blocks $A^{(n)}_{S,\mathrm{eff}}$. First of all, if $A^{(n)}_{S,\mathrm{eff}} = 0$ for all $n \neq 0$, the system-bath coupling operator is also block-diagonal. This condition is equivalent to requiring that $\hat{U}_F(t)^\dagger A_S \hat{U}_F(t)$ does not depend on time. In that case, we obtain a series of identical decoupled Hamiltonians ($n = \cdots, -2, -1, 0, 1, 2, \cdots$) in the block-diagonal parts. Therefore, the statistical mechanics of the Floquet states becomes the standard equilibrium statistical mechanics of the effective Hamiltonian $H_{S,\mathrm{eff}}$. The occupation of Floquet states follows the standard Boltzmann distribution (considering a high bath temperature), and one can define the concept of temperature for the system in the standard way. The order of the Floquet states and quasi-energies should then be read off from the effective Hamiltonian eigenvalues. If we have a weak system-bath coupling and Markovian conditions, the corresponding equilibrium statistical properties do not depend on the detailed form of the coupling operator $A_S$. If, on the other hand, the blocks $A^{(n)}_{S,\mathrm{eff}}$ with $n \neq 0$ do not vanish, these off-diagonal blocks can induce transitions between different Floquet-equivalent sectors (e.g. between $H_{S,\mathrm{eff}} + n\omega$ and $H_{S,\mathrm{eff}} + m\omega$); therefore, the Floquet picture is not exactly correct in this case. If we still consider the statistical properties in the Floquet picture, the distribution becomes complicated [23,25-28]. It is unclear whether one can find a universal theory in this regime.
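As a concrete illustration of this construction, the following minimal sketch (not code from this work) assembles a truncated extended-space Floquet matrix for an illustrative two-level system with a cosine drive, where the Fourier blocks are $H_{S,0} = H_0$ and $H_{S,\pm 1} = H_D/2$, and folds the eigenvalues back into one zone of width $\omega$. The Hamiltonians, frequency, and cutoff are all assumed values.

```python
import numpy as np

# Minimal sketch: truncated extended-space Floquet matrix H_{S,F}.
# Block (m, m') holds the Fourier coefficient H_{S, m-m'}, and the -i d/dt
# term contributes m*omega on the diagonal blocks. Eigenvalues, folded into
# one zone of width omega, approximate the quasi-energies.

omega, M = 5.0, 15                          # driving frequency, Fourier cutoff
H0 = np.array([[1.0, 0.5], [0.5, -1.0]])    # static part (illustrative)
HD = np.array([[1.0, 0.0], [0.0, -1.0]])    # driven part (illustrative)
d = H0.shape[0]

HF = np.zeros((d * (2 * M + 1),) * 2, dtype=complex)
for m in range(-M, M + 1):
    i = (m + M) * d
    HF[i:i + d, i:i + d] = H0 + m * omega * np.eye(d)   # diagonal blocks
    if m < M:                                           # H_{S,+-1} = HD/2 for a cosine drive
        j = (m + 1 + M) * d
        HF[i:i + d, j:j + d] = HD / 2
        HF[j:j + d, i:i + d] = HD / 2

eps = np.linalg.eigvalsh(HF)
print(np.sort(np.mod(eps, omega))[: 2 * d])   # quasi-energies repeat mod omega
```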
A weak version of the classification theorem. The classification scheme and its conditions in the last section are too abstract to be directly useful; we therefore first look at an example and then extract some practical conditions. We consider a non-interacting one-dimensional bosonic tight-binding chain with a heat bath, and focus on the case of a periodic modulation of the potential energy. The system Hamiltonian is $H_S(t) = -J\sum_j (c^\dagger_j c_{j+1} + \mathrm{h.c.}) + \sum_j V(j-\delta)^2 n_j + F(t)\sum_j j\, n_j$, where the operator $c^\dagger_j$ ($c_j$) creates (annihilates) a boson on the $j$-th site of the chain, $n_j = c^\dagger_j c_j$ is the number operator, and the term $V(j-\delta)^2$ describes a quadratic potential on the chain ($\delta$ is an arbitrary constant). The last term is a time-periodic tilt potential, where $F(t) = F(t+T)$ and $H_D = \sum_j j\, n_j$ (we define $H_S(t) = H_0 + F(t)H_D$). In the continuum limit, the model above is equivalent to a particle in a one-dimensional potential well, $H_S(t) = P^2/(2m) + U(X,t)$, where $P$ and $X$ are the momentum and position variables and $U(X,t)$ is a time-periodic potential energy. For such a case, one usually assumes a linear system-bath coupling, i.e. $A_S = X$; this corresponds to $A_S = a\sum_j j\, n_j$ in the tight-binding model (where $a = 1$ is the lattice constant). In the large-driving-frequency limit, the Floquet matrix $H_{S,F}$ can be approximately block-diagonalized by a time-periodic unitary transformation $\hat{U}_F(t)$. Then, under the same transformation, we ask whether the system-bath coupling operator can also be block-diagonalized, i.e. whether $\hat{U}_F(t)^\dagger A_S \hat{U}_F(t)$ is independent of time. This condition is equivalent to $[H_D, A_S] = 0$. Now, the classification theorem for Floquet statistical mechanics can be weakened to: "Assume a time-periodic system with a heat bath can be modeled by Eq. (1). In the large-driving-frequency limit, i.e. $\omega \gg D$ ($D$ is the largest energy scale, e.g. the band width), if $[H_D, A_S] = 0$, the statistical mechanics of the Floquet states is equivalent to the standard equilibrium statistical mechanics of the corresponding effective Hamiltonian $H_{S,\mathrm{eff}}$." For example, if the system-bath coupling is a function of the local densities, e.g. $A_S = \sum_j g(j)\, n_j$, where $g(j)$ is an arbitrary function of site $j$ (for instance a power law $j^\kappa$ with exponent $\kappa$), any type of potential modulation (e.g. $H_D = \sum_j j\, n_j$) satisfies the condition $[H_D, A_S] = 0$. However, some types of modulation, e.g. a periodic modulation of the hopping strength $J$, do not meet the condition.
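The weakened condition is easy to check operator by operator. Here is a minimal sketch in the single-particle sector of a short chain (chain length, hopping strength, and matrix representations are illustrative assumptions, not the paper's code), contrasting the density coupling, which commutes with $H_D$, with the hopping coupling, which does not:

```python
import numpy as np

# Minimal sketch: check [H_D, A_S] = 0 in the single-particle sector.
M, J = 6, 2.5
sites = np.arange(M, dtype=float)
HD = np.diag(sites)                              # tilt: sum_j j n_j
A_density = np.diag(sites)                       # bath couples to density
A_hopping = np.eye(M, k=1) + np.eye(M, k=-1)     # bath couples to hopping

comm = lambda X, Y: X @ Y - Y @ X
print(np.allclose(comm(HD, A_density), 0))   # True  -> Boltzmann-like statistics
print(np.allclose(comm(HD, A_hopping), 0))   # False -> non-equilibrium distribution
```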
Numerical results from a Floquet-Markov approach. Here, we calculate the occupation distribution of the Floquet states for the model in Eq. (7) and test the classification theorem. With the heat bath, the occupation distribution is described by the reduced density operator $\rho_S(t) = \mathrm{Tr}_B\,\rho(t)$, where $\rho(t)$ is the density operator for the whole system-bath model and $\mathrm{Tr}_B$ denotes a partial trace over the bath. For a time-periodic system, one can adopt the Floquet-Markov approach [23,25-28]: 1) the density-matrix equation is simplified by the Markov approximation, which requires a bath correlation time small compared to the relaxation time characterizing the evolution of the system; 2) the master equation for the reduced density operator is further projected onto the space of Floquet states; 3) we consider the regime where the system-bath coupling strength is sufficiently small compared to all the quasi-energy level spacings, so that all the off-diagonal density-matrix elements in the master equation can be neglected. With those approximations, the system has a stationary solution for the occupation probability $P_\alpha = \rho_{S,\alpha\alpha}$ of the Floquet state $|\phi_\alpha(t)\rangle$, which obeys the rate equation [23,25-28] $\sum_\beta \left[ R_{\alpha\beta} P_\beta - R_{\beta\alpha} P_\alpha \right] = 0$ and the normalization condition $\sum_\beta P_\beta = 1$. Here, we show the main steps in solving the rate equation Eq. (10). The rates describing the bath-induced transitions between Floquet states are defined as $R_{\alpha\beta} = \sum_m g(\epsilon_\alpha - \epsilon_\beta + m\omega)\,|A_{S(\alpha\beta)}(m)|^2$, where $A_{S(\alpha\beta)}(m) = \int_0^T dt\, e^{-im\omega t} A_{S(\alpha\beta)}(t)$. The function $g(\epsilon) = n_B(\epsilon)J(\epsilon)/\pi$ is the correlation function of the bath coupling operator $\sum_n c_n x_n$, where $n_B(\epsilon) = 1/(e^{\beta\epsilon} - 1)$ is the Planck distribution for the bath at temperature $1/\beta$. The spectral density of the bath is $J(\epsilon) = (\pi/2)\sum_n (c_n^2/m_n\omega_n)\left[\delta(\epsilon - \omega_n) - \delta(\epsilon + \omega_n)\right]$. In the continuum limit, for an ohmic bath with exponential cutoff, the spectral density becomes $J(\epsilon) \propto \epsilon\, e^{-|\epsilon|/\epsilon_c}$.
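Given the rates, the stationary occupations follow from a small linear solve. In the sketch below the rate matrix is random and purely illustrative; in an actual calculation it would be assembled from $g(\epsilon)$ and the Fourier components $A_{S(\alpha\beta)}(m)$ as defined above.

```python
import numpy as np

# Minimal sketch: stationary solution of the rate equation
#   sum_beta [R_{alpha beta} P_beta - R_{beta alpha} P_alpha] = 0
# with the normalization sum_beta P_beta = 1.
rng = np.random.default_rng(0)
N = 5
R = rng.random((N, N))
np.fill_diagonal(R, 0.0)            # R[a, b]: transition rate b -> a

W = R - np.diag(R.sum(axis=0))      # master-equation generator, W @ P = 0
A = np.vstack([W, np.ones(N)])      # append the normalization row
b = np.zeros(N + 1); b[-1] = 1.0
P, *_ = np.linalg.lstsq(A, b, rcond=None)
print(P, P.sum())
```

The same linear system applies regardless of how the physical rates were built, so swapping in the true $R_{\alpha\beta}$ is straightforward.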
We numerically study the Floquet occupation distribution of a finite 1D chain ($M = 40$) modeled by Eq. (7) with a square-wave modulation $F(t)$. The quasi-energies and Floquet states can be obtained by solving $U(T,0)|\phi_\alpha(0)\rangle = e^{-i\epsilon_\alpha T}|\phi_\alpha(0)\rangle$ and $|\phi_\alpha(t)\rangle = e^{i\epsilon_\alpha t}\, U(t,0)|\phi_\alpha(0)\rangle$, where $U(t,0)$ is the time-evolution operator of the system. First, we consider the case where the heat bath couples to the particle density in the chain in the form $A_S = \sum_j j\, n_j$, such that $[H_D, A_S] = 0$. In evaluating the transition rates $R_{\alpha\beta}$, it is necessary to truncate the summations: we sum from $m = -250$ to $250$ in the numerics. By solving the rate equations in Eq. (10) together with the normalization condition, one obtains the Floquet occupation distribution $P_\alpha$. In Fig. 1, we plot the Floquet occupation distribution as a function of the corresponding eigenvalues $E_{\mathrm{eff},\alpha}$ of $H_{S,\mathrm{eff}}$. As mentioned before, the relation between $E_{\mathrm{eff},\alpha}$ and the quasi-energies is $\epsilon_\alpha = \mathrm{mod}(E_{\mathrm{eff},\alpha}, \omega)$, where some non-zero off-diagonal terms in $U_F^\dagger H_{S,F} U_F$ are neglected. For large driving frequencies $\omega = 20.0$ and $\omega = 10.0$ (compared to the bandwidth $D = 4J = 10.0$), the Floquet occupation distribution is almost the same as the Boltzmann distribution. As the driving frequency decreases, the distribution starts to deviate from the Boltzmann distribution. The reason is as follows: for smaller $\omega$, the energy separation between different Floquet sectors is not large enough to prevent bath-induced transitions among those sectors, due to the presence of the off-diagonal blocks in $U_F^\dagger H_{S,F} U_F$. Alternatively, one may say that the proposed unitary transformation $\hat{U}_F(t)$ is poor in those cases, and the correct transformation (which block-diagonalizes $H_{S,F}$) cannot block-diagonalize the system-bath coupling operator, i.e. $\hat{U}_F(t)^\dagger A_S \hat{U}_F(t)$ remains time-dependent.
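For a square-wave drive, $U(T,0)$ reduces to a product of two matrix exponentials, one per half period, so the diagonalization step described above is short. A minimal single-particle sketch follows; $J$ and $\omega$ are chosen to match the quoted bandwidth $D = 4J = 10$ and $\omega = 20$, while the potential strength $V$, offset $\delta$, and drive amplitude are assumptions, not the values used for the figures.

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch: Floquet states of the chain from the one-period propagator,
# U(T,0)|phi(0)> = exp(-i eps T)|phi(0)>, for a square-wave tilt F(t) = +-F0.
M, J, V, delta, F0, omega = 40, 2.5, 0.01, 19.5, 1.0, 20.0
T = 2 * np.pi / omega
sites = np.arange(M, dtype=float)
H0 = -J * (np.eye(M, k=1) + np.eye(M, k=-1)) + np.diag(V * (sites - delta) ** 2)
HD = np.diag(sites)

# Second half-period acts last, hence it stands on the left of the product.
U = expm(-1j * (H0 - F0 * HD) * T / 2) @ expm(-1j * (H0 + F0 * HD) * T / 2)
ev, evec = np.linalg.eig(U)
eps = np.mod(-np.angle(ev) / T, omega)    # quasi-energies, folded into [0, omega)
print(np.sort(eps)[:5])
```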
Second, we consider the system-bath coupling $A_S = \sum_j c^\dagger_{j+1} c_j + \mathrm{h.c.}$, i.e. the bath couples to the particle hopping processes, so that $[H_D, A_S] \neq 0$. In that case, the Floquet occupation distribution, shown in Fig. 2, does not follow the Boltzmann distribution even for large driving frequency. Interestingly, the distribution first undergoes an exponential decay and then turns into an exponential growth. It may seem that one has reached an example of negative temperature. However, in the presence of the off-diagonal blocks $A^{(n)}_{S,\mathrm{eff}}$, the equilibrium statistical mechanics of $H_{S,\mathrm{eff}}$ is meaningless for the Floquet states. These Floquet occupation distribution functions provide a different way to understand the ordering of the quasi-energies: larger Floquet occupation values correspond to quasi-energies at "lower positions". A general theory that correctly captures the statistical mechanics in this regime is still waiting to be discovered.
Discussion. We have considered Floquet systems coupled to a thermal bath and discovered a classification theorem for the statistical mechanics of open Floquet systems. At large driving frequency, if the time-periodic part of the system Hamiltonian commutes with the system-bath coupling operator, i.e. $[H_D, A_S] = 0$, the statistical mechanics of the Floquet system can be described by the standard equilibrium statistical mechanics of an effective Hamiltonian $H_{S,\mathrm{eff}}$. The Floquet "ground states" correspond to the ground states of $H_{S,\mathrm{eff}}$, and the order of the Floquet states corresponds to the order of the eigenstates of $H_{S,\mathrm{eff}}$. However, if $[H_D, A_S] \neq 0$, the statistical distribution of the Floquet states becomes uncontrolled, and the Floquet picture may not be the right way to understand the statistical mechanics of time-periodic systems. Therefore, our results have important implications for engineering Floquet systems and for the preparation of exotic Floquet states in realistic experimental systems.
D.E.L. is grateful to A.Levchenko for valuable discussions. D.E.L. acknowledges the support from Michigan State University in the problem formulation stage of the work.
Spatial Fluctuations of Loose Spin Coupling in CuMn/Co Multilayers
T. Saerbeck,* N. Loh, D. Lott, B. P. Toperverg, A. M. Mulders, A. Fraile Rodríguez, J. W. Freeland, M. Ali, B. J. Hickey, A. P. J. Stampfl, F. Klose, and R. L. Stamps
School of Physics, University of Western Australia, Crawley, WA 6009, Australia; Australian Nuclear Science and Technology Organisation, Menai, NSW 2234, Australia; Institute for Materials Research, Helmholtz Zentrum Geesthacht, 21502 Geesthacht, Germany; Department of Physics, Ruhr-University Bochum, 44780 Bochum, Germany; Petersburg Nuclear Physics Institute, 188350 Gatchina, Russia; School of Physical, Environmental and Mathematical Sciences, UNSW in Canberra, ACT 2600, Australia; Department of Imaging and Applied Physics, Curtin University of Technology, Perth, WA 6845, Australia; Dept. Física Fonamental and Institut de Nanociència i Nanotecnologia (IN2UB), Universitat de Barcelona, 08028 Barcelona, Spain; Swiss Light Source, Paul Scherrer Institut, CH-5232 Villigen, Switzerland; Advanced Photon Source, Argonne National Laboratory, Argonne, Illinois 60439, USA; School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT, United Kingdom; School of Chemistry, The University of Sydney, NSW 2006, Australia; SUPA, School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, Scotland, United Kingdom
(Received 14 March 2011; published 12 September 2011)
Magnetic exchange coupling in the form of Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions in magnetic thin films or multilayers is a fascinating and important phenomenon that has found great utility in magneto- and spin electronics [1]. Novel systems incorporating dilute magnetic impurities in metallic multilayers [2], semiconductors [3], oxides [4], or insulators [5] exhibit a range of interesting phenomena based on exchange coupling through dilute impurity centers, the understanding of which is crucial for any future search for applicability or for the tuning of physical properties [6].
Early observations of orthogonal magnetic coupling in metallic multilayers [7] have been described with the loose spin coupling (LSC) model proposed by Slonczewski [8], which treats the impurities as paramagnetic (PM) moments that interact with the magnetic layers. While LSC was able to account for the temperature dependence of the biquadratic (BQ) coupling, unphysically large values of the exchange fields were required to account for the observed coupling strength [2,7,8]. We show that properly accounting for the spatial distribution of the impurities throughout the multilayer removes this limitation. Further, we find that the first-order, bilinear LSC terms do not cancel out when averaged over different spin locations, as suggested previously [2,8]. Instead, we demonstrate how the random distribution of spins leads to a lateral fluctuation which contributes to the net BQ coupling. Our results are therefore of consequence for the problem of RKKY coupling in general, especially since several instances of magnetic impurity-mediated coupling in novel materials are examined in view of LSC [9,10]. By accounting for the natural random distribution, experimental observations of temperature-dependent coupling angles and saturation fields in strongly BQ-coupled Cu$_{0.94}$Mn$_{0.06}$/Co multilayers are reproduced using a single set of underlying RKKY exchange parameters.
The magnetic structures of the ferromagnetic (FM) Co layers have been resolved using polarized neutron reflectometry (PNR) as a function of external field and temperature, using NERO at the Helmholtz Zentrum Geesthacht (HZG), Germany. Full polarization analysis, recording the non-spin-flip (NSF), $R^{++}$ and $R^{--}$, and spin-flip (SF), $R^{+-}$ and $R^{-+}$, reflectivities, is used to determine the magnitude and direction of the magnetization vector in successive layers [12]. In such measurements $R^{++}$ and $R^{--}$ probe the projection $M_\parallel$ of the magnetization vector onto the polarization direction (within the surface plane of the sample), while $R^{+-}$ and $R^{-+}$ are due to the component $M_\perp$ normal to this direction. Figure 1(a) shows reflectivities along the momentum transfer $Q_Z$, normal to the surface, at 300 and 30 K, as well as the corresponding simulations. The refined structural parameters, such as mass densities, individual thicknesses, and interface roughness, agree well with values determined by x-ray reflectometry and high-angle neutron and x-ray diffraction, which confirm coherent growth along the Cu(111) direction. The first-order Bragg peak at $Q_Z = 0.158$ Å$^{-1}$, observed at 300 K, corresponds to a bilayer periodicity of 40 Å. The difference between $R^{++}$ and $R^{--}$ indicates FM alignment of subsequent Co layers with an averaged magnetization of $1.46 \pm 0.05\ \mu_B$/atom (bulk: $1.53\ \mu_B$/atom). Upon cooling the system below 100 K, a magnetic half-order peak is observed in $R^{+-}$ and $R^{-+}$ at $Q_Z = 0.076$ Å$^{-1}$, suggesting a canted magnetic structure with finite $M_\perp$ and a doubling of the periodicity due to alternating alignment of subsequent magnetizations.
While specular PNR as a function of $Q_Z$ resolves depth profiles of the nuclear and magnetic structures, the lateral wave-vector transfer $Q_X$ probes magnetic domains in the sample plane via off-specular scattering [12]. As shown in Fig. 1(b), the SF signal is an intense off-specular scattering associated with the presence of lateral magnetic domains. Simulations of both the specular and off-specular data have been performed using a distorted-wave Born approximation [Fig. 1(b), right] [12,13]. The increase in the experimental background signal towards $Q_Z = 0$ is not included in the simulations. The off-specular scattering related to the half-order peak at 30 K is well described by magnetic domains with a lateral size of $0.43\ \mu$m and magnetizations alternately canted in neighboring Co layers throughout the multilayer [Fig. 1(a), inset]. We find a canting of $\phi_{1,2} = \pm 30^{\circ}$ with respect to the external guide field. The temperature and field dependence of the coupling angle $\Delta\phi = \phi_1 - \phi_2$ is shown in Fig. 2(a). A more detailed description of the specular and off-specular analysis will be presented elsewhere [14].
From the temperature and field dependence of the Co coupling angle $\Delta\phi$, the interlayer exchange energies $J_1$ and $J_2$ are determined through the magnetic areal energy density $E(\Delta\phi)$ [2]. Assuming negligible in-plane anisotropy, $E = -J_1\cos(\phi_1 - \phi_2) - J_2\cos^2(\phi_1 - \phi_2) - HMd\,(\cos\phi_1 + \cos\phi_2)$, where $H$ is the external field, $M$ the volume magnetization, and $d$ the thickness of the FM layers. $J_1$ and $J_2$ are deduced by energy minimization of Eq. (1) [15] and are found to increase below the temperature where the canted magnetization appears [Fig. 2(b)]. In order to identify correlations between the BQ coupling and the magnetic state of the dilute magnetic impurities, we now turn to an element-specific investigation using polarized x-ray absorption spectroscopy (XAS). Figure 3 shows XAS at the Mn $L_{2,3}$ edges for left ($\sigma^-$) and right ($\sigma^+$) circular polarization in the biquadratically coupled state ($T = 70$ K, $H = 50$ mT) [16]. Several multiplet features, as well as a large branching ratio $I(L_3)/[I(L_3) + I(L_2)] = 0.75$, are characteristic of a high-spin state [17]. The qualitative shape of the multiplet features has been simulated with CTM4XAS [18] using a predominantly $3d^5$ electron configuration with $M_{\mathrm{Mn}} = (4.4 \pm 0.4)\ \mu_B$. The finite x-ray magnetic circular dichroism (XMCD), i.e., the difference between the $\sigma^+$ and $\sigma^-$ absorption cross sections, indicates an increase in the net Mn magnetization towards lower temperature (Fig. 3). Both the Co and Mn XMCD have the same sign, demonstrating that the net Mn moment is oriented collinear with the Co magnetization, consistent with PM Mn spins polarized by exchange interactions with the nearby FM layers. The observed similarity of element-specific hysteresis loops of Co and Mn, recorded by x-ray resonant magnetic scattering (XRMS) in reflectivity at the $L_3$ edges at the Advanced Photon Source, further supports this [19].
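To extract $J_1$ and $J_2$, this minimization is repeated across fields and temperatures. The following minimal sketch (all numerical values are illustrative placeholders, not the fitted parameters of this work) finds the equilibrium angles for one parameter set by brute-force minimization of the reconstructed Eq. (1):

```python
import numpy as np

# Minimal sketch: equilibrium magnetization angles (phi1, phi2) from the
# areal energy density E = -J1 cos(dphi) - J2 cos^2(dphi) - HMd (cos p1 + cos p2).
J1, J2 = -0.05e-3, -0.12e-3     # trial couplings in J/m^2 (assumed)
HMd = 0.02e-3                   # Zeeman prefactor H*M*d (assumed, J/m^2)

def E(p1, p2):
    d = p1 - p2
    return -J1 * np.cos(d) - J2 * np.cos(d) ** 2 - HMd * (np.cos(p1) + np.cos(p2))

phi = np.linspace(-np.pi, np.pi, 721)
P1, P2 = np.meshgrid(phi, phi, indexing="ij")
i, j = np.unravel_index(np.argmin(E(P1, P2)), P1.shape)
print(np.degrees(phi[i]), np.degrees(phi[j]))   # equilibrium canting angles
```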
A model of the magnetic interaction between the FM Co layers must include the PM Mn spins as well as the Cu conduction electrons, defining the total coupling energies $J_1 = J_1^{\mathrm{RKKY}} + J_1^{\mathrm{LSC}}$ and $J_2 = J_2^{\mathrm{LSC}} + J_2^{\mathrm{fluct}}$. Here $J_1^{\mathrm{RKKY}}$ is the RKKY exchange between the FM Co layers [20], and $J_1^{\mathrm{LSC}}$ and $J_2^{\mathrm{LSC}}$ are the loose spin couplings via the Mn spins [8]. In order to fully describe the coupling situation, we introduce a new contribution to the overall interlayer exchange coupling, $J_2^{\mathrm{fluct}}$, arising from the random lateral distribution of the Mn impurities. Beyond the model of LSC, we will discuss lateral variations in $J_1^{\mathrm{LSC}}$, derived by convolution with a random Mn distribution, which lead to additional BQ coupling via a fluctuation mechanism [21]. A schematic of the interlayer coupling situation is shown in Fig. 4(a). To include the lateral disorder in calculating $J_1^{\mathrm{LSC}}$, one needs to consider the three-dimensional form of the RKKY interaction between individual Co and Mn spins, $J^{\mathrm{RKKY}} = C\left[2kr\cos(2kr + \Phi) - \sin(2kr + \Phi)\right]/(2kr)^4$ [20], where $k$ represents the frequency and $\Phi$ the phase of the oscillation. For a single Mn spin, this interaction varies in the plane of the ferromagnets as a function of the distance $s(x,y)$ [Fig. 4(a)]. The RKKY exchange fields $\tilde{U}_j$ of the ferromagnets are defined by $J^{\mathrm{RKKY}}$ integrated over the plane of the ferromagnets, with $\tilde{U}_A(z) = \tilde{U}_B(t - z)$; they are plotted in Fig. 4(c), with amplitude $|B_{\mathrm{eff}}|$, oriented parallel to the corresponding ferromagnet, extremal spanning vector $q_z$ for a lattice spacing $d_{\mathrm{Cu}}$, and characteristic temperatures $T_0$ and $T_0^i$ [22]. Since the orientation of the PM Mn spins is isotropic, the vector sum $U(z) = |\tilde{U}_A(z) + \tilde{U}_B(z)|$ can be used to calculate the total exchange energies $J_1^{\mathrm{LSC}}$ and $J_2^{\mathrm{LSC}}$ according to the model of Slonczewski [8]: $J_1^{\mathrm{LSC}} = \sum_{z_i} \frac{1}{2}\left[F(z_i,\pi) - F(z_i,0)\right]$ and $J_2^{\mathrm{LSC}} = \sum_{z_i} \left\{\frac{1}{2}\left[F(z_i,\pi) + F(z_i,0)\right] - F(z_i,\pi/2)\right\}$, where $F[U(z,\theta)]$ is the free energy of the system for relative FM alignment angle $\theta$. In order to account for the 3D structure of the exchange coupling, a summation over all impurity locations $z_i$ in the Cu layer has been performed for collinear ($\theta = \pi, 0$) and orthogonal ($\theta = \pi/2$) alignment.
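A minimal numerical sketch of these sums is given below. The free energy of a spin $S$ in a field $U$ is taken from the standard paramagnetic partition function (an assumption here, since the paper does not reproduce the explicit form), and the field profiles, spin value, and temperature are illustrative placeholders rather than the fitted profiles of Fig. 4(c):

```python
import numpy as np

# Minimal sketch of the loose-spin sums J1_LSC and J2_LSC over impurity
# planes z_i, using the total field U(z, theta) = |U_A(z) + U_B(z)|.
S, kT = 5.0 / 2, 0.05                        # spin and temperature (assumed)

def F(u):                                    # free energy of a spin S in field u
    x = u / kT
    return -kT * np.log(np.sinh((2 * S + 1) * x / 2) / np.sinh(x / 2))

z = np.linspace(0.1, 0.9, 9)                 # impurity planes across the spacer
UA = 0.020 * np.exp(-3 * z)                  # assumed exchange-field profiles
UB = 0.015 * np.exp(-3 * (1 - z))

def U(theta):                                # vector sum; tiny offset avoids U = 0
    return np.sqrt(UA**2 + UB**2 + 2 * UA * UB * np.cos(theta)) + 1e-12

J1 = np.sum(0.5 * (F(U(np.pi)) - F(U(0.0))))
J2 = np.sum(0.5 * (F(U(np.pi)) + F(U(0.0))) - F(U(np.pi / 2)))
print(J1, J2)
```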
FIG. 3 (color online). Mn $L_{2,3}$-edge absorption and x-ray magnetic circular dichroism as a function of temperature.
Going beyond the LSC model, we now consider the exchange interaction $J_1^{\mathrm{LSC}}$ calculated for collinear and orthogonal alignment of the interaction through a single Mn spin, $J^{\mathrm{RKKY}}_{\mathrm{Co\text{-}Mn}}(x,y,z)$, and the right-hand ferromagnet $U_B(z)$ [23]. A comprehensive 2D form of $J_1^{\mathrm{LSC}}(x,y)$ is obtained by a convolution with a random 6 at.% site occupancy of spins in the lateral dimension of the spacer, in which each Mn position is represented by a $\delta$ function. Figure 4(b) shows the result of the lateral convolution, summed over $z$ positions corresponding to the (111) lattice planes of the Cu spacer. In order to derive the new BQ coupling contribution, $J_1^{\mathrm{LSC}}(x,y)$ is decomposed into 2D Fourier components $J_F(x,y) = a\sin(x/l)\sin(y/l)$, where the amplitude $a$ and length scale $l$ are chosen to match the length and energy fluctuations in $J_1^{\mathrm{LSC}}(x,y)$ [Fig. 4(b)]. The resulting expression for the additional BQ coupling follows from [19,21], with $A = 1.2 \times 10^{-8}$ J m$^{-1}$ the exchange stiffness of the Co layers. The magnetic interaction energies $J_1^{\mathrm{RKKY}}$, $J_1^{\mathrm{LSC}}$, $J_2^{\mathrm{LSC}}$, and $J_2^{\mathrm{fluct}}$ [Fig. 2(b)] are solely determined by the RKKY exchange parameters [Eq. (2)] and the Mn moment. The phase and period of the RKKY oscillation are estimated from results on Co/Cu(111) multilayers [24]. A substantial $J_1^{\mathrm{LSC}}$ contribution survives even after the summation over Mn positions, but the opposite signs of $J_1^{\mathrm{RKKY}}$ and $J_1^{\mathrm{LSC}}$, together with the new contribution $J_2^{\mathrm{fluct}}$, describe our experimental findings very well. The best fit gives $B_{\mathrm{eff}} = 540$ T, $T_0 = 1000$ K, $T_0^i = 300$ K, and $J_{\mathrm{Co}}/J_{\mathrm{Mn}} = 0.62$, where $J_{\mathrm{Co}}/J_{\mathrm{Mn}}$ describes an energy scaling to account for the different hybridization of the Cu conduction electrons with the Co and Mn $d$ electrons. These values are consistent with previous experimental work on Co/Cu(111) systems [22,24]. The Mn spins at the center of the Cu layer are aligned opposite to the Co moments [Fig. 4(c)], while the net Mn polarization is parallel to the Co moments, in agreement with the XMCD and XRMS results. Investigations on a range of CuMn thicknesses all gave similar results, indicating a phenomenon of greater generality rather than a special case of well-matched coupling terms [25].
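The convolution step described above can be sketched in a few lines. The following is an illustrative toy version (the amplitude $C$, RKKY frequency $k$, phase, grid, and impurity-plane height are all assumed values, and only a single impurity plane is kept for brevity):

```python
import numpy as np

# Minimal sketch: lateral map J1_LSC(x, y) from convolving the single-impurity
# RKKY kernel J(r) = C [2kr cos(2kr + Phi) - sin(2kr + Phi)] / (2kr)^4 with a
# random 6 at.% Mn site occupancy (delta functions on a square grid).
rng = np.random.default_rng(1)
C, k, Phi = 1.0, 1.2, 0.0            # assumed RKKY amplitude, frequency, phase
L, a = 128, 1.0                      # lateral grid size and lattice constant
z = 5.0 * a                          # impurity-to-ferromagnet distance (assumed)

x = (np.arange(L) - L // 2) * a
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2 + z**2)
kr2 = 2 * k * r
kernel = C * (kr2 * np.cos(kr2 + Phi) - np.sin(kr2 + Phi)) / kr2**4

occupancy = (rng.random((L, L)) < 0.06).astype(float)   # random Mn sites
J1_map = np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(occupancy)))
print(J1_map.mean(), J1_map.std())   # mean coupling vs. lateral fluctuation
```

The standard deviation of this map is the quantity that feeds the fluctuation mechanism, via the Fourier amplitude $a$ and length scale $l$.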
In conclusion, we have shown how random positional disorder of dilute magnetic impurity atoms enhances biquadratic coupling by creating lateral variations in exchange. Our results highlight the influence of dilute magnetic impurities on the host system, which is of consequence for the problem of RKKY coupling in general. We demonstrate that, by taking into account all three dimensions of the interaction and the positional disorder, a consistent model of conduction-spin-mediated interlayer coupling is obtained. Next to the tailoring of artificial magnetic properties in metallic multilayers, the applicability of our model to the general problem of impurity-mediated exchange coupling will improve the understanding of biquadratic coupling in Heusler alloys [26] and dilute magnetic semiconductors [27], and aid in identifying the role of magnetic impurities in doped insulators showing ferromagnetic properties [28].
We acknowledge financial support from the Access to Major research Facilities Programme and Australian Research Council.
FIG. 1 (color online). (a) NSF (red circles, blue squares) and SF (green triangles, cyan stars) PNR data fitted to the model sketched in the inset. (b) Off-specular scattering at 30 K and 7 mT: data (left) and simulations (right).
FIG. 4 (color online). (a) Schematic coupling of a PM Mn spin between two FM layers. (b) Calculated lateral variation of $J_1^{\mathrm{LSC}}$ at 30 K. (c) RKKY exchange fields acting on loose Mn atoms, derived from fits to the experimental data. Black dots indicate (111) planes in the fcc lattice. The extended plateau on either side of the plot indicates Mn impurities in the direct vicinity of the Co layers experiencing direct magnetic exchange.
"year": 2011,
"sha1": "f2202143f4c7ba2483955a7b2b869f48548b3f00",
"oa_license": "CC0",
"oa_url": "http://diposit.ub.edu/dspace/bitstream/2445/135997/1/598950.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "ffdc529d012dec91aaf6f03d755e3e866a38b458",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Review on Smart Electro-Clothing Systems (SeCSs)
This review paper presents an overview of the smart electro-clothing systems (SeCSs) targeted at health monitoring, sports benefits, fitness tracking, and social activities. Technical features of the available SeCSs, covering both textile and electronic components, are thoroughly discussed and their applications in the industry and research purposes are highlighted. In addition, it also presents the developments in the associated areas of wearable sensor systems and textile-based dry sensors. As became evident during the literature research, such a review on SeCSs covering all relevant issues has not been presented before. This paper will be particularly helpful for new generation researchers who are and will be investigating the design, development, function, and comforts of the sensor integrated clothing materials.
Introduction
The electrical, chemical, and mechanical activities that take place in the human body during any biological event, such as the beating of the heart and the contraction of muscles, produce different biomedical signals [1]. On the basis of the physiological origins of these biosignals, they can be grouped as bioelectrical, biomagnetic, biochemical, biomechanical, bioacoustic, bio-optical, and biothermal signals. They can be further classified based on their nature of existence, that is, permanent or induced biosignals [2]. Permanent signals exist at all times within the body and are generated without any artificial trigger, impact, or excitation from outside of the body, for example, the electrocardiogram (ECG) signal. Induced biosignals are artificially triggered, excited, or induced, and they exist roughly for the duration of the excitation, for example, the electroretinogram (ERG). Sensors that can sense biosignals or biopotentials can be categorised as physical, electrical, or chemical depending on their specific applications [1]. Different kinds of specialised electrodes are used for capturing biosignals. These electrodes can be either non-invasive (placed on the skin surface) or invasive (e.g., microelectrodes or wire electrodes). Adding electrodes and sensors onto textiles and garments is a non-invasive way of capturing and measuring biosignals.
Thanks to the advancement of technology in producing microelectromechanical systems (MEMSs), wearable electronics have become very common consumables on the market nowadays. Wrist-worn wearable devices (smart watches and fitness trackers) experienced growth of 18% and 7% in the United Kingdom during the periods 2016-2017 and 2017-2018, respectively [3]. With the advent of conductive threads, textile structures either woven or knitted from conductive yarns, and conductive print-inks including those from graphene, it is now possible to produce or integrate lightweight sensors onto textiles to monitor health, fitness, and performance in a non-clinical environment, in daily life, and in sport-training conditions [4-7]. An overview of the recent developments in wearable sensors for remote health monitoring is presented by Majumder et al. [8], while the smart sensors and fusion systems for sports and biomedical applications are reviewed by Mendes Jr. et al. [9]. In some cases, smart sensors are worn directly on the body using belts, straps, and adhesives; in other cases, they are integrated or pocketed within textiles. The concept of Wearable 2.0 [10] envisages a full integration of wearable electronics within clothing, as presented in Figure 1. Traditionally, such systems are known as smart garments, e-textiles, and e-garments. In the literature, they have also been mentioned as the IoT (Internet of Things) smart garments system [11]. For ease of understanding across all disciplines, we have referred to them as smart electro-clothing systems (SeCSs) in this review.
A good number of SeCSs have emerged onto the market. This paper reviews the state-of-the-art development in design, construction, functionality, and application of such systems. As far as is known, such a review on SeCSs covering these relevant issues has not been presented before. However, it is important for researchers and product developers to have a complete review of those before initiating new research and attempting new product development in this and associated fields.
Types of SeCSs Based on Applications
On the basis of their areas of application, SeCSs can be classified into the following four groups (Figure 2): (1) SeCS for health; (2) SeCS for sports; (3) SeCS for fitness; and (4) SeCS for social.
The textile-based systems that can measure biosignals, for example, ECG, body temperature, and so on, can be used for detecting and monitoring medical conditions, and can support recovery and rehabilitation; those that are promoted by their suppliers for medical applications are identified as 'SeCSs for health' in this paper. The systems which are promoted by their suppliers for sport applications, including monitoring players' and athletes' physical conditions and performance, and helping players/athletes and their coaches in training and coaching, are considered as 'SeCSs for sports'. The systems that help general consumers with their daily fitness activities, such as walking, jogging, running, doing yoga, and physical exercises, are reported as 'SeCSs for fitness' in this review. The systems that do not fall into any of the above-mentioned categories, but facilitate users' social activities such as communication, entertainment, and leisure activities, are identified as 'SeCSs for social'.
Design Criteria for SeCSs
Most of the clothing materials we wear day-to-day are made of flexible textile fabrics that are either woven or knitted out of linear textile structures known as threads and yarns, consisting of one or more types of natural or man-made fibres [12]. The basic characteristic of textile materials is that they are soft and flexible materials able to drape the curves of our body nicely. The fundamental requirement for them is their capability to ensure the physiological (or thermo-physiological), sensorial (or tactile), and psychological comfort of the wearer [12]. At the same time, they should be washable to meet users' desire for reuse. However, these requirements are negatively interfered with when hard and non-washable electronic and electric materials are assembled with textile materials. This is the biggest challenge in designing SeCSs. Until now, it has not been technically possible to have fully flexible and washable electronic components that can be assembled seamlessly with textile materials. Therefore, the trend is to make MEMSs in such a way that they minimally interfere with wearers' comfort and can be detached from the clothing component before washing. The next challenge of designing SeCSs is to place appropriate electrodes and sensors in appropriate places on clothing so that they come into sufficient contact with the wearer's body parts to be able to sense the targeted biosignals as purely as possible. For example, ECG sensors are usually positioned at the chest and ribs area, and a blood oxygen sensor is placed at the triceps of the left or right arm [10,13]. In addition to accurate positioning, it is also important to ensure no or minimal movement of them, to avoid any noise in the signals, also known as the motion artefact.
System Architecture
Every SeCS consists of both hardware and software items. The generic architecture of SeCS is presented in the Figure 3. The system architecture generally includes eight working subsystems and two supporting subsystems. The common working subsystems included in a SeCS are as follows: (1) control subsystem, (2) sensing subsystem, (3) actuator subsystem, (4) communication subsystem, (5) location subsystem, (6) power subsystem, (7) storage subsystem, and (8) display subsystem [11]. Two supporting subsystems included in SeCSs are interconnection and software subsystems. Most of the hardware items in a SeCS construction, such as the control subsystem, certain types of sensing and actuator subsystems, location subsystem, power subsystem, storage subsystem, and display subsystem, are electronics and non-textile materials. Except the display subsystems, in most of the cases, the rest of these subsystems are accumulated within an electronic board in as miniaturised a form as possible to finally connect to textile components [13,14].
Construction of SeCS
Textile fabrics work as the basic platform for integrating different subsystems in and on them to construct a SeCS. Figure 4 represents the interaction between different textile-based and non-textilebased subsystems of a SeCS. The interconnects transfer power and biosignal between the sensor point and the data processing unit (electronic board). The sensor units are linked to a rigid electronics board by connectors. The quality and reliability of the sensor integrated into smart garments are fundamentally dependent on these constituent components of SeCSs. Any failure of any of these three components will cause the device to malfunction. Any sensor needs to be highly and selectively Different sensing units that potentially form sensing subsystem of an SeCS can be motion, gesture, and position sensors, temperature and other bio-vital sensors, location sensor, interaction and environmental sensors, and sensors for detecting surrounding objects [4][5][6][7][8][9][10][11][12][13][14]. The common sensors for motion, gesture, and positions are accelerometer, magnetometer, and gyroscope. A combined package of accelerometer, magnetometer, and gyroscope is common in use owing to their volatile application and prices. This combination is termed as a nine-axis intertial motion unit (IMU) sensor. STMicroelectronic's LSM9DS1 and Bosch's BMF055 are two examples of such IMU sensors. Potential bio-vital sensors that may be integrated within a SeCS are for sensing heart rate, respiration rate, blood pressure, pulse oxygenation, glucose levels, and galvanic skin response, or electromygraphy (EMG), ECG, electroencephalogram (EEG), and so on [8][9][10][11][12][13][14]. Actuators for SeCSs include visual indicators, sound, movement and vibration, and heating and cooling [8][9][10][11][12][13][14].
Construction of SeCS
Textile fabrics work as the basic platform for integrating different subsystems in and on them to construct a SeCS. Figure 4 represents the interaction between different textile-based and non-textile-based subsystems of a SeCS. The interconnects transfer power and biosignal between the Sensors 2020, 20, 587 5 of 23 sensor point and the data processing unit (electronic board). The sensor units are linked to a rigid electronics board by connectors. The quality and reliability of the sensor integrated into smart garments are fundamentally dependent on these constituent components of SeCSs. Any failure of any of these three components will cause the device to malfunction. Any sensor needs to be highly and selectively sensitive to biopotential (e.g., ECG, EMG, and EEG) or other targeted markers. The interconnect and the connector should have very low resistance and high durability, so that the resistance of these components does not change after repeated mechanical agitation during washing, bending, and stretching of the device. Traditionally, metallic wires and components act as an interconnector for electric items; however, textile-based interconnects are becoming popular for SeCSs to comply with the requirements of wearability and washability.
Sensors 2020, 20, x FOR PEER REVIEW 5 of 23 the connector should have very low resistance and high durability, so that the resistance of these components does not change after repeated mechanical agitation during washing, bending, and stretching of the device. Traditionally, metallic wires and components act as an interconnector for electric items; however, textile-based interconnects are becoming popular for SeCSs to comply with the requirements of wearability and washability. As mentioned before, the connector connects the soft textile electronic parts with the rigid electronics board. Snap buttons, low-melting temperature soldering, and conductive epoxy bonding are the common ways to connect the hardware piece onto the textile. The limitation such a joint is that any stress concentration on the connector when the fabric is strained may cause the joint to crack between the soft and hard segments of the device. Unfortunately, there has not been much research conducted to solve the connector problem for the electronic-textile industry. It is shown that utilisation of non-conductive epoxy-glue encapsulation on top of the conductive and brittle connectors improves the durability and the lifetime of the device [15,16].
Textile Fabrics
For SeCSs, the electronic components are either attached or developed directly on textiles. This section reviews the variety of textile materials available for potential use as base materials for constructing SeCSs. Textile materials come in any of the following geometrical structures and physical appearances: (a) fibre (including filament), (b) yarn and thread, (c) fabric, and (d) assembled product (clothing and non-clothing). Fibre is a hair-like pliable material, which is considered as the basic and starting unit of a textile material. Yarn, an intermediate product between fibre and fabric, is made of fibres twisted together [12]. A fabric is either made by interlacing of two sets of yarns or by interlocking of loops formed by one set of yarn. The former process is known as weaving and the latter one is known as knitting, from which the resultant materials got their names as woven and knitted fabrics, respectively.
Woven fabrics commonly come in any of the following design groups: plain, twill, and satin fabrics, where each of them may have different derivatives. Owing to difference in weave design, the fabrics of these three classes will have different textures and different properties, for example, tensile strength, even if they are made from exactly the same type of yarns. The machine used for weaving is commonly known as a loom, and there are varieties of looms available such as shuttle looms including tappet, dobby, and jacquard looms, and shuttle-less looms including projectile, rapier, water-jet, air jet, and so on [17].
The varieties of knitted fabrics produced commercially are either weft or warp knitted fabrics. A weft-knitting machine can produce three basic knit designs, such as plain, rib, and purl [18], and the fabrics produced are known by the design they contain. Each of the basic weft knit designs may have many derivatives within each design group. Warp knitted fabrics can be of seven different types: namely, tricot, raschel, ketten raschel, milanese, simplex, crochet, and weft-insertion warp [19].
There is another category of fabric, which is made directly from fibres using different bonding technologies, including chemical, mechanical, and thermal bonding, among others. These fabrics are known as non-wovens and can be classified into three groups based on the techniques used to lay the fibres together: drylaid, wetlaid, and polymer-laid (or spunmelt) [20]. This type of material is commonly used as padding and filler within clothing.
Clothing materials made for our use, or sometimes for animal use, are assembled products consisting mostly of fabrics combined with threads and minor non-textile materials such as buttons, zippers, hook-and-loop fasteners, rivets, and so on.
Textile-Based Sensors and Electrodes
Although several types of sensors are incorporated within SeCSs, only a few of them are actually developed directly on textile surfaces, such as ECG, EMG, and temperature sensors. As ECG and EMG sensors detect electrical signals from the skin surface, the fundamental principle of developing such sensors directly on textiles is to make the textile surface conductive. Traditionally, disposable wet electrodes containing conductive silver/silver-chloride (Ag/AgCl) ink, printed on an adhesive paper and coated with an ionically conductive gel (typically hydrogel), are used to measure the ECG signal from heart activity. The ionic gel creates an ionic bridge between the body and the electrodes and lowers the skin-to-electrode impedance. Additionally, the AgCl salt in the conductive ink also helps to maintain the ionic bridge network between the skin and the electrode. The skin-like soft gel material can enhance the adhesion of the electrodes to the skin, and thus minimise the motion artifact in the signal. However, these sticky sensors can cause discomfort and a noticeable body rash if used for a long time [21]. Therefore, textile-based dry electrodes are heavily studied as an alternative to commercial wet electrodes for long-term monitoring of vital signs, even in non-hospital conditions.
Screen printing of conductive ink directly on substrates like film, textile, and nonwoven materials is a simple and common technique to develop sensors and electrodes for measuring electrical signals from the skin surface. Increasing the surface area of the electrode can potentially decrease the skin-to-electrode impedance and provide a reasonable signal with a signal-to-noise ratio (SNR) comparable to commercial wet electrodes [22]. Ag/AgCl ink is dominantly used for screen-printing dry electrodes to enhance the ionic conductivity and lower the skin-to-electrode impedance, although this requires the generation of sweat on the skin. Dry electrodes show promise in the literature as durable sensor electrodes for long-term monitoring; however, the signal quality deteriorates drastically when the wearer is in an active mode such as walking or running, as dry electrodes cannot create as good an adhesion to the skin as wet electrodes. Integrating dry electrodes at strategic locations (where the body muscles do not move much during active modes) in compression garments enhances the signal quality [23]. Other than conductive Ag ink and Ag/AgCl ink, functional materials including carbon [24] and conductive polymers such as PEDOT/PSS [poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate)] [25] are used to measure signals like ECG. These active materials are directly screen-printed, inkjet-printed [26], or dip-coated [25] on textiles to develop wearable sensor electrodes.
Electrically conductive yarns can be integrated into the fabric structure to develop a conductive patch that can also be used as a textile sensor to measure human physiological vital signs. Whole-garment knitting technology enables the development of garments with diverse designs without the need for cutting and sewing. This platform technology can integrate conductive yarns and knit sensor patches at designated locations of a garment. Knitted sensors improve wearers' comfort owing to their breathability; however, they require high compression to detect a quality signal [27]. Additionally, technologies such as embroidery of conductive yarn on a textile can create dense conductive patterns on a textile surface and create a cushiony structure that imparts compression at the sensor location to improve signal quality [28].
Beyond these common materials and manufacturing processes, recently developed ionically conductive, inkjet-printable materials show great promise for manufacturing biosensors on different substrates, including textiles. The combination of inkjet-printed conductive polymer electrodes with an ionically conductive coating on top lowers the skin-to-electrode impedance and improves the SNR of the signal [26]. As already mentioned, an ionically conductive and tacky hydrogel material is used in almost all commercial electrodes to lower the skin-to-electrode impedance, improve ionic conductivity, and reduce motion artifacts. However, these gel materials are not durable and dry out over time. The recent development of durable, tough, and conductive hydrogels opens a new avenue of electronics called "ionotronics". These materials have already shown superior results in sensing bio-signals (including ECG) from human skin over long periods of time [29]. However, the multiple manufacturing steps and the challenge of integrating this material with other soft materials, such as textiles, still need to be resolved for commercial applications.
The concept of creating a secondary skin-like material that acts as a sensor and feels like skin poses a unique idea for building biosensors. The development of gecko-like dry adhesives with conductive functionality shows promising results for monitoring bio-signals from the skin in real time during periods of heavy activity of the wearer. The literature shows that conductive soft silicone materials with a micro-patterned top surface can adhere to the human skin, mimicking the gecko feature [30]. This sensor shows a significant reduction of motion artefacts, which remain a great challenge for the class of dry electrodes.
Textile-Based Interconnects
Interconnects are developed on textiles using different techniques, such as screen printing of conductive thick paste directly on the fabric [31] or on a transferable thermoplastic film [32], stitching or embroidery of conductive yarns, direct knitting or weaving of conductive yarns as an interconnect pattern, and so on [33,34]. Screen printing of conductive thick-paste ink has been a common practice to print interconnects on non-stretch plastic film for different printed-electronics applications. However, the screen-printed metal film layer delaminates or cracks when the interconnect is subjected to stretching, owing to the mismatch between the mechanical properties of the printed film and the substrate [32,35]. On the other hand, interconnects stitched, knitted, or woven with metal-filament-integrated or electroless-plated metallised conductive yarns are more durable under stretching [36]. Recent progress in the inkjet printing of particle-free reactive metal-salt solutions directly on textile fabric can metallise the yarn at the molecular level. Such a method could potentially solve the problems of durability, patternability, and scalability of textile interconnects [36].

Table 1 provides a list of SeCSs that are on the market or in the offing, together with their suppliers. Among the twenty-two companies listed there, the USA and Canada dominate with the largest number of suppliers offering SeCSs (see Figure 5). In contrast, only a few companies are active on the European market. Interestingly, no Chinese company has been identified as being active on the global market. It is also noticeable from Table 1 that more than half of the suppliers offer health-monitoring SeCSs, while SeCSs for sports are the next leading product category. Most of the companies offer complete garment solutions to consumers, and only one company supplies compression sleeves. While most of the companies target adults, only a few focus on babies with their product offers. Pricewise, the available SeCSs fall within the price range of luxury products, except those that come from India.
SeCS for Health
Twelve companies have been identified as suppliers of SeCSs that can capture biosignals from the human body, with proprietary software systems that analyse those signals to report on the well-being of the wearers. Tables 2 and 3 summarise the features of these products, and the following subsections discuss them briefly. Of these companies, as can be seen in Table 2, OMsignal, Myant, and Smartlife offer technology rather than ready-made products directly to consumers. A common feature of the SeCSs in this category is the use of knitted fabrics as the base clothing material upon which the electronic components are attached and integrated. The products from Neopenda and Mimo are dedicated to newborns and babies, and the rest are for adult consumers.

Vital Jacket

Vital Jacket (VJ) [37] is claimed to be the first SeCS certified as a medical device under the European Union's (EU) Medical Device Directive (MDD) 93/42/EEC [64] for collecting ECG data. The hardware system consists of a t-shirt or a vest as the carrier of the conductive pathway (covered electric wire), a digital recorder, an SD card, a battery charger, and disposable electrodes. To capture biosignals, one or more electrodes first need to be placed on the recommended areas of the wearer's body; the t-shirt is then donned so that the electrodes can be connected to its cables. The recording device is carried inside a pocket at the side-waist level. The system can collect ECG data from a wearer using commercial wet electrodes over long periods, transmit them remotely, and store all data for later analysis. It can measure a patient's movement using a three-axis accelerometer. Vital Jacket is available in two versions (with 1 or 5 leads) for babies, children, and adults, and both can perform an ambulatory ECG. It has analysis software specific to rhythm alterations, whose output can only be read by a health professional. Experimental applications of VJ include stress detection in bus drivers [65] and firefighters [66] through analysis of ECG data and heart rate variability (HRV), identification of physiological responses to stress in musicians [67] through monitoring heart rate as beats per minute (bpm), and studying stress and fatigue of first responders through ECG and continuous blood pressure monitoring in laboratory conditions [68].
Hexoskin
Hexoskin from Montreal, QC, Canada is a wearable health monitoring system that includes ECG electrodes integrated into clothing and an e-module including breathing and movement sensors [41]. The system can measure heart rate (HR), heart rate variability (HRV), heart rate recovery (HR2), breathing rate and volume, movement, step count, cadence, stride, activity level, calories burned, and sleep quality. According to the supplier, the system has found research applications in the areas of cardiac, respiratory, and activity analysis (such as steps, cadence and calories, stress, cognition, and sleep). Abdallah et al. [69] applied the Hexoskin biometric vest to measure ventilation (VE), tidal volume (VT), breathing frequency (Bf), inspiratory capacity (IC), and inspiratory reserve volume (IRV) of a small cohort of adults with chronic obstructive pulmonary disease (COPD) at rest and during exercise, and found the measures to be valid when compared against data collected by a pneumotachograph (Ptach). Al-Sayed et al. [70] compared the heart rate monitoring capacity of the Hexoskin biometric shirt and the Polar H7 heart rate sensor in a study involving twelve volunteers and reported no significant difference between the two systems. Banerjee et al. [71] employed the Hexoskin vest to estimate physiological measures such as heart rate, breathing rate, lung volume, step count, and activity level of thirty-one participants aged 65 and older and compared the collected data against clinically accepted gold-standard values. They concluded that the heart rate, breathing rate, and step count collected by Hexoskin showed a strong correlation with the gold-standard measures, but the lung volume and activity level measures did not.
OM Signal
OMsignal from Canada offers a SeCS with embedded ECG, respiration, and physical-activity sensors [39]. The system contains a printed ECG sensor on the inner surface of the clothing. The e-module is attached on the left side under the chest area of the clothing (shirt, camisole, or bra). It records the consumer's biosignals and streams them wirelessly in real time to the consumer's smartphone via Bluetooth. The data are also automatically sent to a cloud platform, where they can be further analysed using advanced algorithms and artificial intelligence (AI). Porbabee [72,73] applied this system to ECG-based human identification and to mental stress prediction using heart rate variability. However, the system is yet to be offered commercially.
Emglare
Emglare from California, USA offers a SeCS with fully integrated heart rate and ECG sensors, a non-detachable battery with wireless charging, and a Bluetooth antenna [40]. The intelligently designed vest hosts two ECG sensors and one heart rate monitor at the front chest area and an automatic power switch near the left armhole. The Bluetooth antenna, battery, and wireless charging component are hidden at the centre back of the clothing. Once a consumer wears the vest and turns on the Emglare mobile app, the smart vest automatically starts sending heart rate and ECG data to the app. The system stores health statistics on a daily and weekly basis, which can be shared with others. The application automatically sends a notification if the heart rate is higher than usual and can inform connected doctors, relatives, or friends about it. Although the design and functionality look very smart, the system is yet to reach consumers' hands; however, the company is currently accepting online pre-orders.
Master Caution ® from Healthwatch
Master Caution® from Healthwatch Technologies, Israel offers a 12-lead ECG monitoring garment with Food and Drug Administration (FDA) clearance and the European Union's CE approval [41]. The system can monitor heart activity, respiration, fall detection, movement, and temperature. The design is based on wearable textile electrodes and heart sensors and includes digital health diagnostic services such as mobile cardiac telemetry, patient monitoring tele-health services, and other services that allow for in-home medical care. Master Caution® continuous monitoring solutions assist clinicians in remotely monitoring their elderly or bed-ridden patients. The system can alert to cardiac events, such as ischemia and arrhythmias, in near real time using an automatic analysis (AI) system, thus securing personal health around the clock for improved patient safety. According to the supplier, the garment is machine washable, withstanding at least 50 washing cycles, and is available in a full size range for men and women.
Siren Diabetic Socks
Siren from San Francisco offers socks with embedded temperature sensors that can help detect foot ulcers early in diabetic patients [42]. It uses temperature micro-sensors integrated into textiles that can detect changes in temperature at the bottom of the feet. A small tag attached to the sock reads this temperature gradient data and wirelessly transmits it via Bluetooth to a specific app. A study by Armstrong et al. [74] shows that self-monitoring of foot temperature may reduce the risk of ulceration in diabetic patients.
Neopenda Baby Hat
The New York based company Neopenda aims to fight sudden infant death syndrome (SIDS) in developing countries, namely Uganda. It offers a baby-monitoring hat that makes it possible for nurses to monitor several infants continuously and simultaneously, thus reducing newborn mortality [43,44]. The baby hat has an e-module embedded at the front and is able to measure the temperature, heart rate, respiratory rate, and blood oxygen saturation of the infants. The device can transfer vital signs to a central monitoring system via a Bluetooth transmitter. The system is designed to monitor up to twenty-four babies through one monitor.
Mimo from Rest Devices

Similar to Neopenda, Mimo from Rest Devices (USA) is a baby breathing and activity monitoring kit that includes machine-washable kimonos in specific sizes (0-3, 3-6, and 6-12 months), one Lilypad (charging and WiFi base station), one low-power Bluetooth transmitter called the turtle, and charging and power cables [48]. The e-module is in the shape of a green turtle and snaps onto the front of the onesie; it can monitor the baby's breathing, body position, sleep activity, and skin temperature. The Mimo data strips pick up subtle movements in the baby's breathing and activity and transmit them to the Lilypad, which sits near the baby while plugged into a wall. The Lilypad picks up the baby's coos and cries through an embedded microphone and sends that live audio, along with all other data, securely to a server and then straight to the parents' smart devices, where they can see in real time how their little ones are doing.
Bioman+ from AiQ
The Taiwanese company AiQ Smart Clothing offers a variety of smart garments, under the general name Bioman+, with an integrated 1-3-lead ECG monitoring system for health monitoring of patients, elderly people, and sportspersons [47]. It is an upper-body garment solution that consists of conductive-fibre-based textile electrodes for acquiring the electrical activity of the human body and conductive thread to carry the electrical signals to the processing and transmission module, which is snapped onto the garment. It is available in several styles (vests, t-shirts, and sports bras) with five different types of electrode structures suitable for different user scenarios and three fabric variants with different levels of compression. The company claims to have used stainless steel fibres, yarns, and threads, omitting the need for an additional copper or silver coating, to simplify manufacturing [75].

Skiin from Myant

The Canadian company Myant Inc. offers smart fabrics under the brand "Skiin" that are claimed to be comfortable and washable, and able to monitor ECG, HRV, breathing patterns, stress levels, sleep quality, steps, distance, calories burned, active minutes, and stationary time all day and night. For female consumers, it can also identify the fertility window by monitoring changes in skin temperature and resting HRV to maximise the chances of getting pregnant [48]. The company has presented underwear designs in classic cuts with varying fits for both men and women. Each undergarment has a slit in the waistband where the smart device can be inserted to track the health of the wearer [76]. The device can be charged wirelessly. The company offers smart fabrics and smart solutions to retailers; therefore, the final product is yet to become available commercially.
Neuronaute ® from BioSerenity
The French company BioSerenity offers a SeCS called Neuronaute® for the diagnosis and monitoring of patients with epilepsy in their own home [50,61]. The system consists of a smart t-shirt and a smart cap containing EEG, ECG, and EMG sensors and a nine-axis accelerometer. This top-and-cap outfit can detect electrical activity from the brain, heart, and muscles of its wearer and send it to a smartphone or to doctors via the cloud [62,63]. The system obtained CE marking in 2016 after a six-month trial at the Brain and Spine Institute at the Pitié-Salpêtrière Hospital in Paris [53,64].
Others
The British company Smartlife offers a textile sensor technology that can be integrated into comfortable active wear [49]. The company claims their device, called the Brain, to be small and discreet, allowing communication with third-party apps. The textile sensors and smart device they offer are claimed to be able to monitor ECG signals, impedance pneumography, impedance plethysmography, surface electromyogram, and accelerometry for 12 h.
The American company Sensoria offers SeCSs that can help people suffering from gait impairments, short stride lengths, and slow walking speeds. The Sensoria® Walk app works in conjunction with an electronic anklet and textile-sensor-infused smart socks to help the wearer set goals and track daily activities, including steps, cadence, and distance, during rehabilitation after a stroke or post-surgery, with the ultimate goal of speeding up overall recovery time. As reported by Gaibizzi et al. [77], the Sensoria smart t-shirt, which is compatible with the Heart Sentinel™ smartphone app, could potentially be a promising component for building a system that detects and alerts on cardiac arrest caused by life-threatening arrhythmias, such as ventricular fibrillation (VF), during outdoor sports. A study by D'Addio et al. [78] on posturographic assessment with a small group of patients with Parkinson's disease identified Sensoria fitness e-textile socks as a low-cost alternative for evaluating variations in the centre of pressure (CoP) signal when compared with the gold-standard stabilometric Zebris platform (ZP).
SeCS for Sports
Five suppliers of SeCSs were found to be active in the sports industry (see Tables 4 and 5). Except for Komodotec, all of them offer clothing items for sportspersons; Komodotec offers a compression sleeve for the arm with an e-module encaged in it. Again, knitted fabric is the common feature of the textile components of these products.

Athos from Mad Apparel

The Athos system from Mad Apparel Inc. (USA) includes a compression shirt and a detachable e-module, which offers real-time biometric tracking, including muscle activity, heart rate, calorie expenditure, and active time versus rest time [51]. It tracks exertion of the major upper-body muscle groups: pecs, biceps, triceps, deltoids, lats, and traps. When snapped onto the Athos apparel, the e-module collects and analyses data from the garment's sensors and delivers those data to the user's mobile app via Bluetooth. The proprietary software can display which muscles are firing and how much they are being exerted; it shows the distribution of work by muscle group, from left to right, to detect whether the user is overworking or compensating as a result of poor form; and it helps the user understand how the muscles contribute to the movement. The supplier reports that athletes from different professional leagues in the USA, including the Philadelphia Phillies (MLB), LA Clippers (NBA), FC Dallas (MLS), and Ohio State (Collegiate Division 1), use this system for training purposes. Lynn et al. [82] studied surface electromyography (sEMG) measurements from twelve healthy subjects taken by Athos compression garments with built-in EMG electrodes and research-grade Biopac bipolar electrodes (Biopac Systems Inc., CA, USA). Their findings showed no significant differences between Athos and Biopac in normalized EMG amplitude or in the strength of the relationship between sEMG and torque output.
Zephyr from Medtronic
Zephyr™ performance system from Medtronic (USA) [52] is a SeCS designed to support the training of athletes, military personnel, and first responders. The system can read six parameters (ECG, respiration, estimated core body temperature, accelerometry, time, and location) of its wearers and can process them to report twenty-one biometrics (heart rate, breathing rate, heart rate variability, HR confidence, estimated core temperature, impact, activity, posture, caloric burn, % heart rate, % heart rate anaerobic threshold (AT), accelerometry, physiological and mechanical intensity loads, training loads and intensity, jump, explosiveness, peak force, peak acceleration, GPS speed, GPS distance, and GPS elevation). The combination of these biometrics can yield nine biomarkers of a wearer, as follows: (1) fatigue (HR recovery), (2) readiness (HRV), (3) safety (maximum HR, core body temperature, location), (4) over-training and under-training (intensity and load), (5) fitness improvement (VO2 max, HR @AT), (6) caloric expenditure and burn, (7) agility (accelerometry, speed, and distance), (8) athlete management (intensity and load), and (9) stress (HRV). Its sensor module, known as BioModule™, can be worn via a compression shirt, a sports bra, or a strap. Nazari et al. [83], through a systematic review of the literature, identified ten research studies focusing on the reliability and validity of heart rate measurements taken by the Zephyr device and concluded that the device displayed good agreement with gold-standard measurements.
Polar Team Pro
Polar Team Pro offers a team-based solution for athletes and their trainers [53,81]. The performance tracking sensor embedded in the garments is able to track motion, heart rate, and location through GPS. All information gathered by the garment is then sent to a tab, allowing the coach of a sports team to evaluate all their players at once and from a distance of up to 200 m [84].
Komodotec
Komodotec offers a smart compression sleeve, which can be paired with a separate sensor device to track heart rate, analyse sleep patterns, and provide full-time ECG monitoring [54]. The company claims the sleeve is easy to wear and does not interfere with everyday life. On the basis of heart rate variability, the sleeve can give information about the body's reaction to alcohol or drugs, recovery status, the wearer's stress level, and their reaction to food.
Sensoria
The running system from Sensoria includes a smart t-shirt or sports bra and smart socks, and supports professional runners in their training and coaching [55]. The Sensoria Run mobile app allows them to tailor their goals and track their progress, and the Sensoria Virtual Coach monitors every step and provides actionable audio and video feedback during running. It can help professional runners improve their running mechanics by telling them when they are in correct and incorrect running positions.
SeCS for Fitness
Although several wrist-worn wearable systems that support fitness activities and tracking are available on the market, only a handful of SeCSs serve this sector, as can be seen in Tables 6 and 7. All of them are based on knitted platforms, and the product range covers t-shirts, vests, sports bras, yoga pants, and socks.

Sensoria Smart Socks

In addition to the smart shirts described in Section 6.2, Sensoria offers smart socks with integrated textile pressure sensor technology [58]; when paired with a Bluetooth-enabled anklet, they can track the user's steps, walking time, and distance on a daily basis. The accompanying application allows independent goals to be set for each metric that a user wants to track. The anklet is detachable, while the socks are infused with proprietary textile sensors. This allows the socks to monitor not only step count, speed, calories, altitude, and distance, but also cadence and foot-landing technique while exercising.
Wearable X
Wearable X, an Australian American company, offers leggings, branded as "Nadi X", with knitted-in accelerometer and haptic feedback technology for yoga training. The system can track the wearer's goals, performance, and progression to support personalised training in a real-time yoga session [56]. In conjunction with its electronics component, which integrates a battery and a Bluetooth data transmitter, the yoga pants can generate gentle vibrations to guide the wearer through yoga poses and can act as a yoga coach when paired with the Nadi X iOS app.
Supa
The brand Supa from the USA offers a sports bra with integrated textile heart rate sensors [60]. The e-module, called the SUPA Reactor, can be attached to the sports bra and is then connected to a proprietary app (SUPA.AI). As this system is made for active wear, the smart device is water resistant and can track workouts by monitoring the heart rate of the wearer, similar to a sport-monitoring chest belt. It is also supported by artificial intelligence within the application.
Syngal T-Shirt from Broadcast Wearables
This t-shirt by the Indian company Broadcast Wearables is offered for use during exercise, everyday life, and in traffic [57]. The garment is able to track steps and floors climbed. It also reports how many calories are burnt and the distance covered during exercise. Additionally, the t-shirt can help the wearer navigate in traffic: the company claims that the t-shirt vibrates slightly on the wearer's shoulders to indicate the direction to turn. Compared with other garments in this category, this t-shirt does not include a heart rate monitor.
SeCSs for Social
Other than health, sport, and fitness sectors, there are a few SeCSs that can assist in communication, entertainment, and leisure activities of their users. This category of SeCSs includes woven fabrics in addition to knitted ones as the base platform onto which to attach electronics.
Trucker Jacket by Levi's & Google
The American companies Levi's and Google jointly presented a smart jacket that facilitates a smooth commute for cyclists in big cities. With conductive yarns woven into the sleeve of the jacket, it works as an electronic platform. Digital connectivity is provided through a snap tag attached to the jacket's cuff. The snap tag, positioned at the cuff of the left sleeve, can communicate with the wearer through light and haptic feedback. The companies claim that the battery of the tag lasts up to two weeks and can be charged via USB. It is also claimed that, wearing the trucker jacket, consumers can connect to their digital life instantly and effortlessly. With a lateral brush of the cuff, the wearer can handle calls and texts without touching the mobile device, as well as navigate and play, pause, and skip through their favourite music [78,79].
Spinali Design
The French company Spinali Design offers different clothing ranges and swimsuits with embedded sensors for intelligent functions [60]. One of those functions is a UV warning sent to the wearer via a smartphone app, reminding them to apply sunscreen. Its associated iOS/Android system, "the Neviano UV Protection", comes with functions like "weather", "pics", "suntanning tips", and "sunscreen alert". The application integrates a function called "Valentine" that alerts users' partners when to apply sunscreen to the users while sunbathing.
Research Gaps and Conclusions
It is evident from Section 6 that only a few companies are offering SeCSs around the world. With the expansion of IoT applications in various fields, this number is expected to grow gradually. However, no study has so far been presented about consumers' perceptions and demands of SeCSs. The trend in wearable technology is to have the electronics embedded within clothing, known as Wearable 2.0, which also envisages being convenient, comfortable, washable, highly reliable, and durable. At present, wearable electronics are mounted on textiles, but not fully embedded into them, and the available SeCSs are only washable once the electronic components are detached from them. There has so far been no study reporting how the rigid electronic components influence consumers' comfort perception. Another significant research problem on the way towards true Wearable 2.0 is energy sustainability and battery size. So far, only the sensor subsystem, out of the eight subsystems of SeCSs, can be developed directly on textiles; the next step will be to attempt to develop the other subsystems on textiles as well. Developing a waterproof enclosure for e-components on textiles is the prevailing challenge that needs to be addressed through research and development (R&D).
"year": 2020,
"sha1": "98622f879d3c8b5963a723dd0cc05cd0145e4ef4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/20/3/587/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "924ad8fdaa0a8f4e67235a5f8b1ccc871d211e4c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
An Adaptive Learning Model for Multiscale Texture Features in Polyp Classification via Computed Tomographic Colonography
Objective: As an effective depiction of lesion heterogeneity, texture information extracted from computed tomography has become increasingly important in polyp classification. However, variation and redundancy among multiple texture descriptors make it challenging to integrate them into a general characterization. Considering these two problems, this work proposes an adaptive learning model to integrate multi-scale texture features. Methods: To mitigate feature variation, the whole feature set is geometrically split into several independent subsets that are ranked by a learning evaluation measure after preliminary classifications. To reduce feature redundancy, a bottom-up hierarchical learning framework is proposed to ensure a monotonic increase of classification performance while integrating these ranked sets selectively. Two types of classifiers, traditional (random forest + support vector machine) and convolutional neural network (CNN)-based, are employed to perform the polyp classification under the proposed framework, with extended Haralick measures and gray-level co-occurrence matrices (GLCMs) as inputs, respectively. Experimental results are based on a retrospective dataset of 63 polyp masses (defined as greater than 3 cm in largest diameter), including 32 adenocarcinomas and 31 benign adenomas, from adult patients undergoing first-time computed tomography colonography who had corresponding histopathology of the detected masses. Results: We evaluate the performance of the proposed models by the area under the curve (AUC) of the receiver operating characteristic curve. The proposed models show encouraging performances, with an AUC score of 0.925 for the traditional classification method and an AUC score of 0.902 for the CNN. The proposed adaptive learning framework significantly outperforms nine well-established classification methods, including six traditional methods and three deep learning ones, by a large margin. Conclusions: The proposed adaptive learning model can combat the challenges of feature variation through multiscale grouping of feature inputs, and of feature redundancy through hierarchical sorting of these feature groups. The improved classification performance against comparative models demonstrates the feasibility and utility of this adaptive learning procedure for feature integration.
Introduction
Colorectal cancer (CRC) is one of the top fatal diseases in the United States. The American Cancer Society ranks CRC as the third most common cancer and the third leading cause of cancer-related deaths.
Multiscale Sampling of GLCMs for Multiscale Features
Gray level co-occurrence matrix, or GLCM, is a typical texture pattern descriptor widely used in medical imaging [9][10][11]. In two-dimensional (2D) representation, for a displacement vector d and an image I quantized to N_g gray levels, its computation can be expressed as:

P_d(i, j) = #{(p, p + d) | I(p) = i, I(p + d) = j}, for i, j = 1, . . . , N_g (1)

where # denotes the number of voxel pairs satisfying the condition. In a digital image array, the first- and second-order neighbors, which comprise the first ring around the center image voxel, are most frequently used for vector calculation. A voxel in 3D volumetric data generally has 26 neighbors, which could produce 26 vectors, i.e., 13 vectors and their 13 negative counterparts. From Equation (1), it is easy to prove that the GLCM of one vector is equal to the transposed GLCM of its negative vector. Therefore, only 13 directions are preserved, while their negative vectors are all neglected in the GLCM calculation due to redundant information, as shown in Figure 1b. Moreover, only the first-ring neighbors around a concerned voxel are used, and the gray level is set to 32 in the calculation. In this article, only 28 of the 30 measures from the extended Haralick measures (eHMs) [11] are used to construct the texture descriptors (two of the 30 were proved to have limited new information and are ignored [38]), and they are generated using in-house software. Therefore, the GLCM descriptor contains 364 variables from 28 eHMs over 13 directions, expressed by:

D = (d_1, . . . , d_364) (2)

Geometrically, the distance between the cubic center (of the first- and second-order voxel array) and the center of one neighbor voxel is not constant and varies between 1 and √3 in terms of the voxel side unit. For example, d(·) = 1 for the directions along the x, y, and z axes, d(·) = √2 for the diagonal directions in the 2D planes of the 3D x-y-z array coordinates, and d(·) = √3 for the diagonal directions in the 3D x-y-z array coordinates. In other words, in the discrete volumetric data, the twenty-six neighbors around one voxel produce three distances of 1, √2, and √3, i.e., a multi-scale data sampling nature. The 13 directions used to compute the GLCMs can thus be divided into 3 subgroups, i.e., G_1, G_2, and G_3, according to their geometric distances, so that every direction within a subgroup shares the same geometric sampling distance. Figure 1b gives the geometric interpretation: G_1 (green) contains three directions, G_2 (red) contains six directions, and G_3 (blue) contains four directions. These three GLCM groups produce three descriptors, with 84 (28 × 3 eHMs from G_1), 168 (28 × 6 eHMs from G_2), and 112 (28 × 4 eHMs from G_3) variables, respectively. In this manuscript, the groups of GLCMs are given the notation G_i and the groups of texture descriptors the notation D_i. These descriptors can further be written as:

D = D_1 ∪ D_2 ∪ D_3 (3)

The traditional Haralick texture feature calculation considered these three direction groups as one scale by computing the average and range across all 13 directions for each of the 14 traditional HMs, resulting in a total of 28 traditional Haralick texture features (HFs).
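As an illustration of Equation (1), the following minimal sketch computes a single-direction GLCM of a 3D ROI; the function and variable names are illustrative only, and the 32-level quantization is assumed to have been applied to the volume beforehand.

```python
import numpy as np

def glcm_3d(volume, direction, levels=32):
    """Single-direction GLCM of a 3D volume already quantized to
    integer gray levels 0..levels-1 (illustrative sketch)."""
    dx, dy, dz = direction
    nx, ny, nz = volume.shape
    # Source voxels: those whose displaced partner stays inside the volume.
    src = volume[max(0, -dx):nx - max(0, dx),
                 max(0, -dy):ny - max(0, dy),
                 max(0, -dz):nz - max(0, dz)]
    # Partner voxels displaced by the direction vector.
    dst = volume[max(0, dx):nx + min(0, dx),
                 max(0, dy):ny + min(0, dy),
                 max(0, dz):nz + min(0, dz)]
    glcm = np.zeros((levels, levels), dtype=np.int64)
    # Accumulate co-occurring gray-level pairs (i, j), as in Equation (1).
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)
    return glcm
```

Consistent with the transposition property noted above, `glcm_3d(v, (1, 0, 0))` equals the transpose of `glcm_3d(v, (-1, 0, 0))`, which is why only 13 of the 26 directions need to be computed.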
For the 28 eHMs, the average and range across all 13 directions result in a total of 56 extended HFs, called eHFs. These Haralick texture features are used as the baseline reference in this work to show the gain from considering the multi-scale data sampling nature in the following. The GLCMs are then calculated at three different scales, i.e., 1, √2 ≈ 1.414, and √3 ≈ 1.732, as shown in Figure 1b. Essentially, this multi-scale feature extraction operation is not only a direction subgrouping but also a feature subdivision. Therefore, this method generates three GLCM subgroups and three texture descriptor subdivisions, each with a different scale, as shown in Table 1. In the following, the variables in each direction group are treated as a set of data sampled from the polyp object, and the three direction-group datasets are treated as three differently sampled datasets from the same subject. Then, an adaptive machine learning strategy is developed to integrate these different datasets for improved CADx performance by circumventing the two problems of (1) variation in polyp texture descriptor computation and (2) redundancy in multi-scale computed features.
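The 3/6/4 subdivision of the 13 directions by sampling distance can be reproduced programmatically; a brief sketch (variable names assumed) is given below.

```python
import itertools

# Keep one direction of each antipodal pair among the 26 neighbor offsets.
directions = []
for d in itertools.product((-1, 0, 1), repeat=3):
    if d != (0, 0, 0) and tuple(-c for c in d) not in directions:
        directions.append(d)

# Group by squared sampling distance: 1, 2, and 3 (i.e., 1, sqrt(2), sqrt(3)).
groups = {1: [], 2: [], 3: []}
for d in directions:
    groups[sum(c * c for c in d)].append(d)

# G1, G2, G3 contain 3, 6, and 4 directions, respectively.
assert [len(groups[k]) for k in (1, 2, 3)] == [3, 6, 4]
```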
Analyze Group-Specific Information
To analyze and compare the differences among the three data subsets, or multi-scale groups, the information provided by each group is then investigated. To understand these differences, the information that can be learnt by a CNN on each individual group is first visually analyzed. Next, CNN models based on the three GLCM subgroups are trained. Then, the features learnt by the CNNs are interpreted by examining how the final decision is made for a given input.
To accomplish this, a game-theory-based method called SHAP was adopted to explain the output of the machine learning models [39]. Each model was trained on the polyps' corresponding GLCM subgroup and is similar to GLCM-CNN, with the network design optimized for the subgroup [40]. After each CNN model was trained, its decision criteria were visualized on the testing dataset using SHAP. Figure 2 demonstrates the features learnt from the three subgroups by explaining the decision result for one representative polyp. The first column is the original GLCM; the corresponding label (0 for benign and 1 for malignant) and the model score of the malignancy risk are listed on top. The remaining two columns show the interpretation of the model prediction for the two classes. Given a class, red cells indicate entries that pushed the model's decision toward that class, while blue cells pulled the prediction away. Based on this visualization, it can be observed that the information provided by the three subgroups had both shared patterns and unique patterns. The visualization of these patterns from deep learning showed the potential for the proposed adaptive learning model to learn both group-specific and groupwise-shared features.
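A minimal sketch of how such a per-group visualization could be produced with the SHAP library is shown below; the trained model `glcm_cnn` and the tensors `background` and `test_glcms` are assumed to exist and are named here for illustration only.

```python
import numpy as np
import shap
import torch

# Assumed: a trained per-group CNN `glcm_cnn`, background GLCMs `background`,
# and test GLCMs `test_glcms`, both torch tensors of shape (N, c, 32, 32).
explainer = shap.DeepExplainer(glcm_cnn, background)
shap_values = explainer.shap_values(test_glcms)  # one array per output class

# Transpose to channels-last for plotting; red entries push the prediction
# toward a class, blue entries pull it away.
sv = [np.transpose(s, (0, 2, 3, 1)) for s in shap_values]
shap.image_plot(sv, np.transpose(test_glcms.numpy(), (0, 2, 3, 1)))
```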
Adaptive Learning Model for Fusing Multi-Scale Features
As the variable number grows, simply combining all the input variables for classification carries a high risk of clustering degradation, which is caused by counteractions among their variations [20,22]. In practice, not all variables of the descriptor are useful for classification; much redundant information remains across the three scales. Inspired by [38], an adaptive learning model is designed to hierarchically circumvent the variation and reduce the redundant information from the multi-scale feature sets.
Problem Formulation: The problem is formulated as follows: Given a set S = {D_i | i ∈ [1, n]} containing n feature groups D_i, the task is to find an optimal subset S* ⊂ S that maximizes the polyp classification performance in terms of AUC. This is the well-known curse-of-dimensionality problem, which is NP-hard in general [41]. To avoid this problem, a greedy algorithm is introduced as a suboptimal scheme.
As shown in Figure 3, the proposed adaptive learning method works in two stages: baseline selection and hierarchical feature integration. The goal of the baseline stage is to select the best individual group, i.e., the one that achieves the highest performance. After ranking the remaining feature groups in descending order of their individual performance, the multi-level integration method integrates the new groups one by one following the forward step feature selection (FSFS) method. Given a new feature group D_j, FSFS adds new variables from the most significant to the least and keeps only the ones that improve performance.
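In pseudocode form, the two-stage greedy procedure can be sketched as follows, assuming an `evaluate` function that returns the cross-validated AUC of a variable set; the within-group subset selection and the final complementary-set pass described below are omitted for brevity.

```python
def fsfs(baseline, candidate_vars, evaluate):
    """Forward step feature selection: add ranked variables one at a time
    and keep only those that improve the AUC (sketch)."""
    current, best_auc = list(baseline), evaluate(baseline)
    for var in candidate_vars:            # most to least significant
        auc = evaluate(current + [var])
        if auc > best_auc:                # keep only improving variables
            current, best_auc = current + [var], auc
    return current, best_auc

def adaptive_integration(groups, evaluate):
    # Stage 1: baseline selection -- the best individual group.
    ranked = sorted(groups, key=evaluate, reverse=True)
    baseline = list(ranked[0])
    # Stage 2: hierarchical integration of the remaining groups via FSFS.
    for candidate in ranked[1:]:
        baseline, _ = fsfs(baseline, candidate, evaluate)
    return baseline
```

Because a variable is retained only when it raises the AUC, the classification performance is non-decreasing across integration levels, which is the monotonicity property stated in the abstract.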
Multigroup hybrid Method: The multigroup hybrid model (MGHM) was designed with random forest for priority calculations and a support vector machine (SVM) for final classification.
For the baseline selection, as each group contained several descriptors, each group was compared by its best performance after feature selection. Separate random forest models were trained on each group; the importance of each feature was based on the GINI index [42], i.e., the information gain it provided for each splitting it was involved in. Then, within each group, an optimal subset with the highest AUC was found via SVM, and the left-over variables formed the complementary set. D_i^0 denotes the baseline set and D_i^1 the left-over set for group D_i. The optimal set with the highest AUC was selected as the initial baseline; the proposed multi-level feature integration was then performed on the rest of the groups. The integration sequence was in descending order of the pre-evaluated AUC at the whole-group level. This ranked set of descriptor groups is hereafter referred to as the descriptor pool (DP).
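A sketch of this within-group step with scikit-learn is given below, using the classifier settings reported later in the Settings section; the epsilon parameter mentioned there applies to regression-type SVMs and is therefore omitted here, and the helper name is assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def rank_and_select(X, y, random_state=0):
    """Rank one group's variables by GINI importance, then sweep subset
    sizes with an SVM to find the baseline set D_i^0 (sketch only)."""
    rf = RandomForestClassifier(n_estimators=5000, criterion="gini",
                                random_state=random_state).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]  # descending importance
    best_auc, best_k = -1.0, 1
    for k in range(1, X.shape[1] + 1):
        svm = SVC(kernel="poly", degree=3, coef0=0.0, tol=1e-3,
                  gamma="auto")  # gamma = 1 / (variable number)
        auc = cross_val_score(svm, X[:, order[:k]], y,
                              scoring="roc_auc", cv=2).mean()
        if auc > best_auc:
            best_auc, best_k = auc, k
    # D_i^0 (baseline set), D_i^1 (left-over set), and the achieved AUC.
    return order[:best_k], order[best_k:], best_auc
```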
Since there were three descriptor groups, the proposed hierarchical feature integration contained 4 levels. FSFS was performed on each level to find the optimal feature subset as output, with the support vector machine (SVM) as the classifier and the AUC as the metric, evaluated by cross-validation. Level i in the hierarchy is denoted as L_i, the current baseline as Baseline_i, and the next candidate descriptor group in L_i as Candidate_i. The output of L_i, denoted Baseline_{i+1}, serves as the baseline of L_{i+1}. The flow chart is plotted in Figure 4. After all candidate sets were integrated, FSFS was run once more to integrate the complementary set of the initial baseline.
As this method was designed to iteratively evaluate every variable, it served as the upper-bound of the performance that can be achieved on the dataset.
Multi-group CNN: In the second model, a CNN was adapted to perform adaptive learning on each group, as shown in Figure 5. For the baseline selection, the CNN was designed to take a whole GLCM group as input, and the group with the highest AUC was selected. Then, the integration was performed by iteratively adding the group with the next highest AUC following FSFS. The entire evaluation was based on a CNN network, whose detailed structure is listed in Table 2 and whose backbone is plotted in Figure 5. For each level, the network input size was 32 × 32 × c, where 32 is the gray-scale level and c is the number of channels/GLCMs of the input. The convolution network contained two convolution layers, each followed by a batch normalization layer, a max-pooling layer with stride 2, and ReLU as the activation function. After the convolution part, three fully connected layers made the final prediction. For different group combinations, the number of input channels was modified to fit the current input data. This multi-group CNN method is denoted as MG-CNN in the rest of the paper.
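A PyTorch sketch of this backbone is given below; the exact layer widths are listed in the paper's Table 2, so the filter and hidden sizes used here are placeholders.

```python
import torch.nn as nn

class MGCNN(nn.Module):
    """Backbone sketch of MG-CNN for input of shape (N, c, 32, 32); layer
    widths are illustrative, not the paper's exact Table 2 values."""

    def __init__(self, in_channels, n_filters=32, hidden=(128, 32)):
        super().__init__()
        self.features = nn.Sequential(
            # Two conv blocks: conv -> batch norm -> max-pool (stride 2) -> ReLU.
            nn.Conv2d(in_channels, n_filters, kernel_size=3, padding=1),
            nn.BatchNorm2d(n_filters),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_filters, 2 * n_filters, kernel_size=3, padding=1),
            nn.BatchNorm2d(2 * n_filters),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
        )
        # Three fully connected layers produce the final two-class prediction.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * n_filters * 8 * 8, hidden[0]), nn.ReLU(inplace=True),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(inplace=True),
            nn.Linear(hidden[1], 2),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

Instantiating `MGCNN(in_channels=c)` with the channel count of the current group combination mirrors the channel adaptation described above.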
Results
In this section, the polyp mass dataset used for all experimental results is discussed in detail. The classification results of the multi-scale descriptor sets are presented with the proposed multi-level adaptive learning model. Finally, the proposed models are compared to similar classification methods which input all the multi-scale descriptor sets at once and ignore the differences among the data sets.
Polyp Dataset
The polyp dataset used for these experiments consisted of 59 patients with a total number of 63 polyp masses found through virtual colonoscopy and confirmed by clinical colonoscopy. A flowchart of the dataset acquisition and preparation is shown in Figure 6 and described below. The polyp dataset used for these experiments was obtained from a retrospective study carried out at the University of Wisconsin Hospital and Clinics, Madison, WI, USA. Over 8000 patients were screened via CTC with the inclusion criteria that the patients were at least 50 years of age (normal screening age without family history of colorectal cancer), a polyp with a size of at least 30 mm in largest diameter was detected during CTC, and corresponding histopathology was available for those polyps. The CTC imaging was carried out according to the procedures described within [43]. Of those screened patients, only 59 patients, with a total of 63 polyp masses, fit the inclusion criteria. For classification discussed below, the dataset was divided into binary categories of 32 malignant adenocarcinomas, and 31 benign polyps including 3 serrated adenomas, 2 tubular adenomas, 21 tubulovillous adenomas, and 5 villous adenomas. All polyps had bulky mass morphology, except for six (four tubulovillous and two villous adenomas), which were designated as flat or carpet polyps. The patient demographics for this polyp dataset are presented in Table 3.
The clinical value of CADx models on CTC polyp mass images stems from the fact that such masses require surgical removal due to their size. Unlike endoscopic colonoscopy, CTC is noninvasive and cannot resect polyps during the procedure. Polyp masses that are 30 mm or larger, however, require surgical removal and are not treated via colonoscopy. Therefore, the clinical value of examining this dataset is to provide physicians with diagnostic information on the polyp masses before their surgical removal without requiring expensive biopsy procedures. For example, surgeons may decide to be more aggressive in how much tissue they remove if the mass is malignant, to ensure that any microscopic disease which may have invaded surrounding tissues can also be removed.
Regions of Interest
The area around the polyp region was manually selected and segmented on each CTC image slice containing the polyp. For each polyp, a volume was constructed by combining the segmentations on each slice to form the region of interest (ROI), which was confirmed by radiologists to ensure the accuracy of the manual procedure. It is noted that a cleansing step was used to discard all voxels below −450 HU within these ROIs as being predominantly air from the lumen of the colon [44]. The information encoded in these voxels from partial volume effects (above the range of pure-air HU values) is minimal, if any, and contributes more noise to the features for classification. The ROIs were used to compute the multi-scale texture features described above. Sample polyp CT slices and their contours are shown in Figure 7.
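The cleansing step described above amounts to a simple threshold mask; a minimal sketch (names assumed) is given below.

```python
import numpy as np

def cleanse_roi(ct_volume, roi_mask, hu_threshold=-450):
    """Exclude predominantly-air voxels (below -450 HU) from an ROI mask
    before texture feature computation (illustrative sketch)."""
    return roi_mask & (ct_volume >= hu_threshold)
```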
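A minimal sketch of the repeated two-fold evaluation is given below; it assumes a feature matrix X, binary labels y, and a classifier factory, approximates the fixed 16/16 test split by stratified sampling, and uses illustrative names rather than the authors' implementation:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def repeated_twofold_auc(X, y, make_classifier, n_repeats=100, seed=0):
    """Average AUC and STD over repeated random ~50/50 splits
    (31 training / 32 testing polyps, as in the paper)."""
    rng = np.random.RandomState(seed)
    aucs = []
    for _ in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=32, stratify=y, random_state=rng)
        clf = make_classifier().fit(X_tr, y_tr)
        scores = clf.predict_proba(X_te)[:, 1]
        aucs.append(roc_auc_score(y_te, scores))
    return float(np.mean(aucs)), float(np.std(aucs))
```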
Settings
For the traditional method, three multi-scale descriptors were calculated using the three groups in Table 1, corresponding to the three scales. Then, these descriptors were used to generate the 100 training and testing datasets according to the splitting scheme described above.
The Random Forest classifier contains 5000 trees with the Gini index as the importance metric. The SVM classifier adopts a cubic polynomial kernel, with gamma set to 1/(number of variables), coef0 set to 0, tolerance set to 0.001, and epsilon set to 0.1.
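These settings map onto scikit-learn roughly as follows (a sketch under the assumption that the SVM is used as a classifier; the quoted epsilon value belongs to the regression variant of the SVM and is therefore omitted here):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Random Forest: 5000 trees with the Gini index as the importance metric.
rf = RandomForestClassifier(n_estimators=5000, criterion="gini")

# SVM: cubic polynomial kernel; gamma = 1/(number of variables) corresponds
# to scikit-learn's "auto" setting, with coef0 = 0 and tolerance 0.001.
svm = SVC(kernel="poly", degree=3, gamma="auto", coef0=0.0, tol=1e-3,
          probability=True)  # probability=True enables predict_proba
```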
For each learning method, the $i$-th candidate group is denoted as $D_i^x$, where $i \in [1, 3]$ and $x \in \{\cdot, b, c\}$ denote the whole group, base group, and complementary group, respectively. $C_i$, with $i \in [1, 3]$, denotes the learned best set from stage $i$.
The CNN model is trained with the cross-entropy loss between the predicted score and the label. Adam [45] was used for optimization. The learning rate was initialized at 0.001 and decayed by a factor of 0.01 every 10 epochs. Since the dataset was relatively small, the training ended after 40 epochs to prevent overfitting of the model.
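A PyTorch sketch of this training configuration follows; the model and data loader are assumed to exist, and this illustrates the stated hyperparameters rather than reproducing the authors' code:

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=40):
    """Cross-entropy loss, Adam at lr 0.001 decayed by a factor of 0.01
    every 10 epochs, stopped at 40 epochs to limit overfitting."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.01)
    for _ in range(epochs):
        for glcm_batch, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(glcm_batch), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```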
The Outcomes of the Proposed Method
First, the contribution of the descriptors from each group to the model trained on all descriptors is analyzed. A statistical summary of the descriptors is listed in Table 1.
After acquiring the optimal subset of descriptors, the contribution of each group is analyzed by comparing how many variables contribute to the best AUC score and the importance of each descriptor. Figure 8 shows the different trends of the AUC scores as a function of the number of variables; the non-monotonic trend typically arises from redundancy, which results in parameter overtraining and clustering degradation. In addition, the differences among the multi-scale texture descriptors are also clearly seen.
Based on the observation above, it is necessary to evaluate each descriptor group first before combining them all together, in order to avoid deterioration of the overall performance. Besides, this proves the feasibility of the proposed learning framework.
The performance of the three groups of descriptors using the hybrid model was analyzed first. Among all, as shown in Table 4, the highest AUC is achieved by $D_3$, where 6 variables were chosen for this preliminary classification result. Following the proposed method, every ranked descriptor was divided into two parts, a baseline and a complementary set. The six generated subgroups, i.e., the baseline and the complement for each of the three descriptor groups, are shown in Table 5.
Table 5. Two parts of each descriptor divided by the forward stepwise feature selection method via the SVM classifier.
Table 5 reports the descriptor IDs and the number of variables in each of the six subgroups: 65, 19, 3, 165, 6, and 106. After the first step, DP was initialized based on the AUC scores. Then, DP was fed into MGHL to remove the redundant variables and to improve the classification performance via the proposed bottom-up hierarchical integration. Finally, 17 out of 364 variables were extracted to form the final descriptor. In terms of classification results, the AUC score increased from 0.892 to 0.925, while its standard deviation dropped from 0.098 to 0.035. The changes in the AUC score and the chosen variables are listed in Table 6, which illustrates that the hybrid model has a monotonic learning process. The preliminary classification performances of the MG-CNN are also listed in Table 4. When compared to the results of using all 13 directions, the results indicated that multiple directions of GLCM contribute to the classification performance, which means that GLCMs with different directions provide additional information.
Then, $G_1$ with 3 GLCMs was chosen as the baseline, with the remaining two groups iteratively tested for whether they should be included. Finally, three subgroups were selected and contributed to a final AUC score of 0.909. In addition, the classification performance from two scales already exceeded that obtained using all the directions without the multi-scale concept. The hierarchical learning process is shown in Table 7 and illustrates that the feature-integration scheme was indeed useful to further optimize the classification performance.
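The forward stepwise selection applied on each layer can be sketched as the greedy loop below, assuming a scoring callback cv_auc that returns the cross-validated AUC of a candidate variable subset (both names are hypothetical):

```python
def forward_stepwise_selection(baseline, candidates, cv_auc):
    """Greedily move variables from the complementary set into the baseline,
    keeping an addition only if it improves the cross-validated AUC.
    The accepted AUC is therefore monotonically non-decreasing."""
    selected = list(baseline)
    remaining = list(candidates)
    best_auc = cv_auc(selected)
    improved = True
    while improved and remaining:
        improved = False
        # Score every remaining variable and keep the single best improvement.
        scored = [(cv_auc(selected + [v]), v) for v in remaining]
        top_auc, top_var = max(scored, key=lambda t: t[0])
        if top_auc > best_auc:
            selected.append(top_var)
            remaining.remove(top_var)
            best_auc, improved = top_auc, True
    return selected, best_auc
```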
Comparisons with State-of-the-Art Models
In addition to the above presentation of the performance details of the adaptive learning model for the integration of multi-scale texture features, comparisons to several typical state-of-the-art models are also detailed, including the GLCM-CNN [40], post-KLT eHMs, VGG-16, and AlexNet, among others; the network structures were optimized to fit the polyp dataset used. Table 8 lists the classification performance of all the methods on the polyp mass dataset, where the AUC, accuracy, sensitivity, and specificity of each model are reported. The AUC score and accuracy of the proposed method exceed those of the post-KLT eHMs (the best result of the six typical methods) by 2% and 3%, respectively. Against VGG-16, the proposed model improves the AUC score by 10%. Moreover, all ROC curves are also plotted in Figure 9, where the proposed model's ROC curve is the top one among the seven. These ROC curves further demonstrate the advantage of the proposed method over the others. Based on the graphical judgement in Figure 9 and the quantitative measurements in Table 8, both results demonstrate the advantages of the two adaptive learning models over the rest of the methods by a large margin. Moreover, a significance test was performed, as shown in Table 9, by comparing the prediction probabilities of the proposed methods with eight state-of-the-art methods. All the p-values are less than 0.05, which indicates that the proposed methods have significant differences from the comparative methods.
Figure 9. ROC curves of the proposed and comparative methods.
Table 9. p-values from statistical significance analysis over the ten methods using the Wilcoxon signed-rank test between the predicted probabilities of these methods.
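The significance analysis reported in Table 9 corresponds to paired Wilcoxon signed-rank tests on the per-polyp predicted probabilities, which can be sketched with scipy as follows (names are illustrative):

```python
from scipy.stats import wilcoxon

def significance_vs_baselines(proposed_probs, baseline_probs_by_method):
    """Paired Wilcoxon signed-rank test between the proposed model's
    per-polyp predicted probabilities and each comparative method's."""
    return {name: wilcoxon(proposed_probs, probs).pvalue
            for name, probs in baseline_probs_by_method.items()}
```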
Discussion
In this paper, a multi-layer adaptive learning model architecture is proposed. Instead of simply concatenating all the multi-scale texture features together for classification, the proposed architecture not only integrates multi-scale texture descriptors in an adaptive manner to account for the associated variation among multiple datasets, but also provides an effective solution for information redundancy. The primary novelty of this work lies in the weighted grouping of the texture patterns and in assigning greater contributions to the higher-weighted groups, instead of entering all features into the classifier at the same time. Two schemes, i.e., traditional machine learning-based and CNN-based, were designed to demonstrate this idea. The proposed design contains two stages. In the first stage, the GLCMs were divided into three groups by their individual scales. A baseline was selected, with the remaining groups ordered by their individual performance. In the second stage, the three groups were integrated into one enhanced descriptor in a hierarchical architecture by a multi-layer learning scheme. On each layer, a forward stepwise feature selection method was introduced to selectively add patterns or variables from the complementary subgroups into the baseline to produce better performance. The greedy procedure guarantees a monotonically increasing AUC score from the initial descriptor groups at the first layer and reduces redundant information. By accounting for the variation among multiple datasets or multi-scale descriptors, the proposed adaptive learning model increased the AUC score from 0.886 to 0.925 via MGHM and from 0.895 to 0.909 via MG-CNN.
When comparing against the deep learning state-of-the-art methods, the following observations were noted. The VGG16 and AlexNet models performed quite poorly, with AUC values of 0.823 and 0.779, respectively. These results were expected because deep learning methods tend to have much higher data requirements to fully train the high-level features from that methodology, and the dataset used for these experiments is relatively small. However, the proposed MG-CNN model still attained a significantly higher AUC value of 0.909. This showed that the GLCM input for the model already provided some higher-level texture information, so that the deep learning architecture did not have the same steep data requirements as the other methods. On a much larger dataset, it is expected that the VGG16 and AlexNet models will provide closer comparisons to the proposed models. Against the GLCM-CNN method, which was originally used on the same dataset as these experiments [40], the value of the proposed weighted grouping was demonstrated by the higher AUC value. Since the GLCM-CNN model similarly outperformed the VGG16 and AlexNet models, this further reinforced the value of the GLCM as inputs.
When comparing against the other state-of-the-art methods using traditional features and classifiers, the proposed MGHM still outperforms them significantly. In this category, the post-KLT eHMs obtained the best classification performance among the comparative methods, likely because the KL transform reduces the redundancy of the texture features through a change of basis representation. Against the other traditional feature selection methods, the value of the proposed model in further reducing variation and redundancy to achieve better classification performance is demonstrated even more clearly by the AUC values.
Although the presented adaptive learning model is implemented for the integration of multi-scale texture features, the integration strategy can be applied to fuse multimodal datasets, such as the polyp intensity images, the first-derivative gradient images, and the second-order curvature images that were investigated in Song et al. [6] and Hu et al. [11]. While this work investigated spatial variations through the GLCM, this method may help expand upon those other models that integrated multiple feature sets. Future studies will look to expand on the multi-scale texture descriptors to include other types of descriptors and patterns in a study with a larger dataset.
Funding: This research was partially supported by the NIH/NCI grants #CA206171 and #CA220004.
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board. The most recent approval date is 18 May 2021. The study was assigned an ID number 93995_MODCR005 and has the title of "Integrating virtual and optical colonoscopies with pathological analysis to map the highly heterogeneous features of colorectal polyp biomarkers".
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: All data used for these experiments can be made available from the contact author upon reasonable request. | 2022-01-28T16:11:42.106Z | 2022-01-25T00:00:00.000 | {
"year": 2022,
"sha1": "09aee0a85c9e80c0ce4af27f68abb6eb5ac57259",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/22/3/907/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "160e5764e445927b23c544e5bd8b63a04aad918a",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
117843998 | pes2o/s2orc | v3-fos-license | Kochen-Specker Sets with Thirty Rank-Two Projectors in Three-Qubit System
A simple scheme of three rules supplemented by five steps is proposed to produce Kochen-Specker (KS) sets with 30 rank-2 projectors that each occur twice. The KS sets provide a state-independent proof of the KS theorem based on a system of three qubits. A small adjustment of the scheme enables us to manually generate a large number of KS sets with a mixture of rank-1 and rank-2 projectors.
Introduction
The Kochen-Specker (KS) theorem demonstrates the inconsistency between the predictions of quantum mechanics (QM) and noncontextual hidden-variable (NCHV) theories. Contextuality is one of the classically unattainable features of QM: the results of measurements in QM do not reveal preexisting values but depend on the context, i.e., on the choice of other compatible measurements that are carried out previously or simultaneously. A context is a maximal set of compatible observables. The simplest system that can be used to prove the KS theorem is a single qutrit. As a qutrit does not refer to nonlocality, this shows that the KS theorem is more general than the Bell theorem, which rules out local hidden-variable models of QM.
The possibility of testing the KS theorem experimentally was once doubted due to the finiteness of measurement times and precision [1,2]. Cabello [3] and others [4] suggested how the KS theorem might be experimentally tested by deriving a set of noncontextual inequalities that are violated by QM for any quantum state but are satisfied by any NCHV theory. Recently, there have been many successful experiments that show the violation of noncontextual inequalities, for example the experiments on a pair of trapped ions [5], neutrons [6], single photons [7], two photonic qubits [8] and nuclear spins [9].
The original proof of the KS theorem involves 117 directions in three-dimensional real Hilbert space [10]. Peres [11] found a simpler proof with 33 and 24 rays for three- and four-dimensional systems, respectively. Mermin [12] used an array of nine observables for two spin-$\frac{1}{2}$ particles to show quantum contextuality. Similar mathematical simplicity is also shown in the KS theorem proof for the three-qubit eight-dimensional system using ten observables [12]. Up to now, the smallest numbers of rays required in the proof of the KS theorem are 31 [13], 18 [14] and 36 [15] in three-, four- and eight-dimensional systems, respectively.
The KS sets used to prove the KS theorem were previously difficult to obtain. For example, there is only one KS set reported in [16] and [15], with 20 and 36 rays in four- and eight-dimensional real Hilbert spaces, respectively. Recently, with the aid of computers, the number of available KS sets has increased tremendously. For instance, the number of KS sets with 36 rays in the three-qubit system is 320 according to [17]. In this Letter, we adopt a set of simple rules supplemented by a few steps to construct KS sets that consist of 30 rank-2 projectors, without relying on computer computation. In Sec. 2, a brief introduction to the 25 bases formed by the 40 rays of Kernaghan and Peres is given [15]. An example is given in Sec. 3 to explicitly show the steps to obtain KS sets involving 30 rank-2 projectors from KS sets formed by 40 rank-1 projectors provided in [17]. We generalize the steps in Sec. 4 and conclude in Sec. 5.
Kochen-Specker sets with 15 bases formed by 40 rays
For the sake of completeness, we furnish in this section some necessary basic facts prior to a detailed discussion of the procedure for constructing rank-2 projector (or plane) KS sets.
Based on the Mermin pentagram that consists of five sets of four mutually commuting operators, Kernaghan and Peres [15] derived 40 rank-1 projectors (or rays) to form 25 bases, where each of the bases is a set of mutually orthogonal projectors that spans an eight-dimensional real Hilbert space. Table 1 lists the 40 rank-1 projectors $R_i$ with $i = 1, 2, 3, \ldots, 40$, and Table 2, which is taken from [17], lists the 25 bases. The first five bases in Table 2 are called pure bases ($PB_i$, $i = 1, 2, \ldots, 5$) [17], and their mixture gives rise to the remaining hybrid bases ($HB_i$, $i = 6, 7, 8, \ldots, 25$). Each of the rank-1 projectors occurs once in the $PB$s and four times in the $HB$s.
As a result of a computer search, Waegell and Aravind [17] found 64 KS sets that are composed of 40 rays and 15 bases.
Table 1: The 40 rays derived by Kernaghan and Peres for the KS proof in the three-qubit system. The symbol $\bar{1}$ is used to denote −1.
A manual construction of these 64 KS sets can be found in [18]. Since these KS sets have 20 rays that occur twice each and 20 rays that occur four times each among their 15 bases, and each basis contains 8 rays, they are labeled as $20_2 20_4$-$15_8$ [17]. The 15 bases are contributed by 5 $PB$s and 10 $HB$s. An example of a $20_2 20_4$-$15_8$ KS set is given in Table 3. KS sets of the form $20_2 20_4$-$15_8$ are constructed entirely from rank-1 projectors. However, they can easily be transformed into KS sets composed merely of rank-2 projectors; see Section 3.
A Concrete Example: Steps of Construction
The example given in Table 3 is a KS set that involves 40 rank-1 projectors. We propose in this section steps to transform it into a KS set that involves 30 rank-2 projectors, where each of the projectors occurs twice among the 15 bases, as shown in Table 4.
The rank-1 projectors in italic for a specific $PB_i$ form the set $\Gamma_i$, and the remaining rank-1 projectors form the set $\neg\Gamma_i$. Our steps of construction are guided by the following three rules.
Table 4: KS set consisting of 30 rank-2 projectors obtained from the KS set given in Table 3.
Rank-1 projectors from $\Gamma_i$ must be coupled with rank-1 projectors from $\neg\Gamma_i$ to form 4 rank-2 projectors in a $PB$, and each of these rank-2 projectors repeats itself once in an $HB$.
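To make the coupling rule concrete, the following numpy sketch forms a rank-2 projector from two mutually orthogonal rays and verifies its defining properties; the example rays are placeholders, not entries of Table 1:

```python
import numpy as np

def rank2_projector(r1, r2):
    """Combine two mutually orthogonal rank-1 projectors (rays) into a
    single rank-2 projector."""
    r1 = np.asarray(r1, float); r1 /= np.linalg.norm(r1)
    r2 = np.asarray(r2, float); r2 /= np.linalg.norm(r2)
    assert abs(r1 @ r2) < 1e-12, "rays must be orthogonal"
    return np.outer(r1, r1) + np.outer(r2, r2)

# Placeholder 8-component rays with entries in {-1, 0, 1}, as in Table 1.
P = rank2_projector([1, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0])
assert np.allclose(P @ P, P) and np.isclose(np.trace(P), 2)  # idempotent, rank 2
```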
Note that the order of the above rules matters: it is important to apply them in the given order, i.e., ℜ1 first, followed by ℜ2, and lastly ℜ3. Now, let us apply them to our example.
Discussion
The scheme proposed in Sec. 3 is conceived based on the properties shared by all KS sets of the type $20_2 20_4$-$15_8$. Apart from the features reflected by the symbol $20_2 20_4$-$15_8$, we would like to stress that these 15 bases must be composed of 5 $PB$s and 10 $HB$s. Most importantly, the 20 rays that repeat four times each provide us with clues for forming the rank-2 projectors. Due to the common features shared, the steps S1 to S5 used to construct the KS set of 30 rank-2 projectors in Sec. 3 can be generalized and applied to all 64 KS sets of the form $20_2 20_4$-$15_8$, as follows. Step 1 (S1′): Apply ℜ1, ℜ2 and ℜ3 to $\Gamma_1$.
Each application of ℜ2 and ℜ3 produces 4 and 2 rank-2 projectors, respectively. This clearly explains why there are in total 30 rank-2 projectors formed upon the completion of S1′ to S5′. However, there are various combinations of invalidating or removing ℜ2 or ℜ3 throughout the process of construction, in order to obtain various numbers, ranging from two to thirty, of rank-2 projectors. Let us now consider one of the scenarios and investigate how, without ℜ3, the number of rank-2 projectors is affected. The aforementioned scheme needs to be further generalized as follows. Step 1 (S1′′): Apply ℜ1 and ℜ2 to $\Gamma_1$. Check if ℜ3 is applicable.
Note that if ℜ3 is applicable, it increases the number of rank-2 projectors formed by two every time we apply it.
In S2 of our example (cf. Sec. 3), the choice of rank-2 projectors for base 2 shown in Table 6 guarantees the applicability of ℜ3. There are two more ways that make ℜ3 applicable in S2. However, we can, for example, choose (9,10), (13,16), (14,12) and (15,11) for base 2 instead, but this will then make ℜ3 inapplicable. There are in total six ways of forming rank-2 projectors for base 2 that make ℜ3 inapplicable. Table 10 lists all nine ways of forming rank-2 projectors for base 2. The same situation occurs in S3 to S5 as well.
Table 10: Each of the nine rows shows a different way of forming rank-2 projectors for base 2 as a result of applying ℜ2. The first three ways make ℜ3 applicable, while the other six ways render ℜ3 inapplicable. The first way, shown in the first row, is the one adopted in Table 6.
In the scenarios where ℜ2 and ℜ3 are both applicable, we always have the freedom to choose not to apply ℜ3 after the execution of ℜ2, depending on how many rank-2 projectors we aim to obtain in the transformed KS sets. However, in S1, as mentioned before, there are three ways of applying ℜ2 on base 1 that guarantee the applicability of ℜ3, and none of the cases satisfies ℜ2 while making ℜ3 inapplicable. Again, our analysis of the example in Sec. 3 can be generalized to S1′′ to S5′′. In short, there are three (six) ways of forming 4 rank-2 projectors in S1′′ (each of S2′′ to S5′′) by applying ℜ2 and not executing ℜ3 although it is applicable, three ways of forming 6 (4+2) rank-2 projectors in each of S1′′ to S5′′ by applying both ℜ2 and ℜ3, and six ways of forming 4 rank-2 projectors in each of S2′′ to S5′′ by applying only ℜ2 due to the inapplicability of ℜ3. Table 11 shows the numbers of KS sets with various numbers of rank-1 and rank-2 projectors that can be generated by adjusting the number of times ℜ3 is applied throughout S2′′ to S5′′ (we always apply ℜ3 on S1′′ for ease of computation in Table 11). Note that, as $N_{\Re 3}$ does not specify at which step ℜ3 is inapplicable or not executed (in the case where ℜ3 is applicable), the result of $N_{KS}$ shown is for only one case.
So far we have considered only one example of KS sets of the form $20_2 20_4$-$15_8$; it is obvious that the number of KS sets with a mixture of rank-1 and rank-2 projectors that can be generated from our scheme is indeed huge. Finally, note that when $N_{\Re 3} = 0$, S1′′ to S5′′ reduce to S1 to S5, and $N_{KS} = 243$ is the same as the number of KS sets we deduced before in our example.
Table 11: The number of KS sets generated by applying ℜ1 and ℜ2 while invalidating or not executing ℜ3 throughout S2′′ to S5′′. Note that ℜ3 is always executed on S1′′ here. The symbols $N_{\Re 3}$, $N_{KS}$, $N_2$ and $N_1$ denote the number of times ℜ3 is invalidated or not executed, the number of KS sets generated, the number of rank-2 projectors formed, and the number of the remaining rank-1 projectors, respectively.
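As a cross-check of the counting, the sketch below multiplies the per-step choices under one reading of the counts quoted above (3 ways in S1′′ with ℜ3 applied; for a step of S2′′–S5′′, 3 ways with ℜ3 applied and 3 + 6 = 9 ways when ℜ3 is skipped or inapplicable); the per-step interpretation is our assumption, and only the $N_{\Re 3} = 0$ value is confirmed by the text:

```python
def n_ks(n_r3_skipped: int) -> int:
    """Number of generated KS sets for one fixed choice of which of the
    four steps S2''-S5'' do not execute R3 (R3 is always applied in S1'')."""
    assert 0 <= n_r3_skipped <= 4
    ways_s1 = 3                        # assumption: 3 ways with R3 applied
    ways_applied, ways_skipped = 3, 9  # assumption: per-step counts, see text
    return ways_s1 * ways_skipped**n_r3_skipped * ways_applied**(4 - n_r3_skipped)

print(n_ks(0))  # 243, matching the N_KS quoted for N_R3 = 0
```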
Conclusion
We proposed a simple scheme of three rules supplemented by five steps to transform the $20_2 20_4$-$15_8$ Kochen-Specker (KS) sets into KS sets that involve a mixture of rank-1 and rank-2 projectors. A concrete example is provided as an illustration. By manipulating the rules throughout the five steps, we can determine the number of rank-2 projectors formed in the resultant KS sets. The simplest result obtained is the KS sets with 30 rank-2 projectors that each occur twice among the 15 bases. To our knowledge, these are the first rank-2 projector KS sets produced for the three-qubit system based on Mermin's pentagram. They can be cast in the form of a testable inequality proposed by Cabello (see the first inequality in [3]). It is also noteworthy that a considerable number of KS sets can be generated by our scheme without resorting to any computer calculation.
"year": 2013,
"sha1": "d6d26b3275bf756f7c2f2b07e7ff132e411e7afc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d6d26b3275bf756f7c2f2b07e7ff132e411e7afc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
247223064 | pes2o/s2orc | v3-fos-license | BPS and near-BPS black holes in $AdS_5$ and their spectrum in $\mathcal{N}=4$ SYM
We study quantum corrections in the gravitational path integral around nearly $1/16$-BPS black holes in asymptotically $AdS_5 \times S^5$ space, dual to heavy states in 4D $\mathcal{N}=4$ super Yang-Mills. The analysis provides a gravitational explanation of why $1/16$-BPS black holes exhibit an exact degeneracy at large $N$ and why all such states have the same charges, confirming the belief that the superconformal index precisely counts the entropy of extremal black holes. We show the presence of a gap of order $N^{-2}$ between the $1/16$-BPS black holes and the lightest near-BPS black holes within the same charge sector. This is the first example of such a gap for black holes states within the context of $AdS_5$ holography. We also derive the spectrum of near-BPS states that lie above this gap. Our computation relies on finding the correct version of the $\mathcal{N}=2$ super-Schwarzian theory which captures the breaking of the $SU(1, 1|1)$ symmetry when the black hole has finite temperature and non-zero chemical potential. Finally, we comment on possible stringy and non-perturbative corrections that can affect the black hole spectrum.
1 Introduction
1/16-BPS states have recently played a crucial role in analyzing the duality between Type-IIB string theory in AdS 5 × S 5 and 4D N = 4 super-Yang-Mills. Such states can be accurately counted by computing the superconformal index, a grand canonical partition function with multiple chemical potentials for the black hole angular momenta and R-charges turned on.
On the boundary side, the superconformal index can be obtained exactly, since it is independent of the coupling [24,25]. Expanding the exact answer in the large-N limit, most contributions to the superconformal index were matched with corresponding Euclidean gravity saddles, with the dominant contributions given by well-known supersymmetric black hole solutions [23]. The computation of the superconformal index, on both the boundary and the bulk side, has provided a detailed check of holography and a detailed counting of black hole micro-states.
A lot less is known about black hole states that are not protected by supersymmetry. In particular, an important issue in classical black hole thermodynamics is the breakdown of the statistical description of black holes [26,27], occurring at low temperatures whose scale is power-law suppressed in the number of degrees of freedom describing the black hole. When quantum effects are included, two resolutions to this thermodynamic breakdown were found. First, for non-supersymmetric black holes in flatspace or AdS, a recent computation [28], which accounted for quantum effects occurring in the near-horizon region of near-extremal black holes, showed an effective continuum of states with a strongly modified spectrum at the scale identified in [26].
In the second resolution, due to the addition of fermionic degrees of freedom in the computation of quantum effects, the spectrum of nearly-supersymmetric black holes in supergravity in 4D flatspace or (4, 4) supergravity in AdS 3 was shown to be drastically different [29]: there is an exact degeneracy of supersymmetric black holes at extremality followed by a gap precisely at the energy scale at which a failure of black hole thermodynamics is predicted. Thus, quantum effects in the near-horizon region proved to be important in both cases. However, exact degeneracies at extremality and gaps between extremal and near-extremal states which were predicted in stringy constructions [30][31][32] were only found in supersymmetric theories.
In the case of supersymmetric black holes in AdS 5 , we want to ask whether there is a gap (with an appropriate power-law suppression in N) in the spectrum of masses between the 1/16-BPS black holes and the lightest unprotected black hole state (which we call near-BPS) in a sector with the same angular momenta and R-charge quantum numbers. These states preserve less supersymmetry than their flatspace counterparts, and a rigorous understanding of whether a gap is truly present has, up to the point of this paper, not been achieved. 1 Even when it comes to the 1/16-BPS states themselves, not all of their properties are known. Firstly, the superconformal index cannot distinguish whether such states are purely bosonic or a combination of bosonic and fermionic states, which would yield cancellations in the index. In other words, it is unclear whether the entropy associated to the index precisely matches the actual entropy of BPS black holes. Secondly, from the gravitational perspective, the degeneracy of the extremal 1/16-BPS states has not been rigorously understood due to the presence of an infinite number of zero modes observed when computing the one-loop determinant for black holes at extremality. 2 In this paper, we address all these questions by computing the low-temperature expansion of the partition function of near-BPS black holes. At zero temperature, the leading contribution to the partition function comes from 1/16-BPS black holes that exhibit an SU(1, 1|1) isometry in their near-horizon region [33]. As the temperature is turned on, the SU(1, 1|1) isometry is broken, and we explain how the gravitational modes associated to this breaking are effectively captured by the N = 2 super-Schwarzian theory [34-38, 29]. 3 At small temperatures, the super-Schwarzian theory and the associated modes in supergravity become strongly coupled. Luckily, the partition function of the super-Schwarzian can be found exactly, and this consequently allows us to reliably compute the low-temperature corrections to the free energy of such black holes, appearing at linear order in T (from classically evaluating the super-Schwarzian action and the black hole action) and logarithmic order in T (from evaluating quantum corrections in the super-Schwarzian or, equivalently, a specific set of modes of the graviton, gravitino and gauge fields in the black hole background). This calculation is valid up to temperatures that are much smaller than the one identified in [26].
Studying these quantum corrections to the black hole spectrum leads to the main results of our paper. We find that there is indeed a gap between the 1/16-BPS state and the lightest near-BPS black hole states precisely at the energy scale identified in [26] (once again, in contrast to the non-supersymmetric case of black holes in AdS 5 ). Above this gap there is a continuum of states whose density we can predict (a precise understanding of the discreteness of the spectrum in this sector would require a better non-perturbative understanding of type IIB string theory).
Additionally, we find that the 1/16-BPS black hole states all have the same charges, to leading order in N, confirming that there is no cancellation in the superconformal index. This improves a previous argument for black holes with an emergent SU(1, 1|1) symmetry described in [40], which did not take into account the physics from the Schwarzian mode. This is important since there are also theories with an emergent SU(1, 1|1) symmetry and vanishing index, and we show this does not happen for the 1/16-BPS states of N = 4 Yang-Mills.
Footnote 1: In the supersymmetric flatspace and (4, 4) AdS 3 cases, the presence of a gap in stringy examples was explained in [31,32]. We could not find a string theory argument about the existence of the gap in the literature.
Footnote 2: As we will explain shortly, these zero-modes get lifted when studying the partition function at small but non-zero temperature. This regularizes the one-loop determinant in the finite-temperature partition function, and this regularization will be responsible for the observed degeneracy among supersymmetric black holes.
Footnote 3: The relation between these black holes and the N = 2 Schwarzian theory was explored at the classical level in [38,39].
Before diving into a more quantitative description of our results, it is useful to first describe some of the properties of the black hole solutions whose spectrum we determine in this paper.
When viewed from the 10D perspective, such black hole solutions have five angular momenta: two angular momenta parametrizing rotations in AdS 5 as well as three angular momenta parametrizing rotations on the S 5 . On the boundary side, the former are the angular momenta of the dual state within the conformal group, which we will denote by $J_{1,2}$, while the latter are three Cartans of the SO(6) R-symmetry group, which for simplicity we will set to all be equal and denote by R. The mass of the black hole can be fixed in terms of the temperature in addition to these five charges. For extremal BPS black holes, their mass can be determined in terms of four (of the five) angular momenta, since there is an additional non-linear relation between these charges necessary in order for Killing spinor solutions to exist in the geometry at zero temperature. The resulting density of states, given in (1.1), consists of two contributions. The first line represents the degenerate contribution of extremal states when the R-charge is fixed to its BPS value $R_*$ by the angular momenta $J_{1,2}$. The degeneracy is precisely found to be $e^{S_*}$, where $S_*$ is the Bekenstein-Hawking entropy of the extremal supersymmetric black hole. 4 The second line represents the contribution of the continuum of states (with non-perturbative gaps expected to be exponentially small in N) starting at the scaling dimension $\Delta_{\rm extremal}$, which then represents the extremal black hole state. This gives a gap above the BPS scaling dimension $\Delta_{\rm BPS}$ (defined for $R = R_*$) that scales as $\Delta_{\rm gap} = \tilde{\Delta}(\tilde{J}_1, \tilde{J}_2)/N^2$, (1.2), where N is related to the 5D Newton constant by $N^2 = \pi \ell_{AdS_5}^3/(2G_5)$, where we rescale the two angular momenta as $J_{1,2} = N^2 \tilde{J}_{1,2}$, 5 and where $\tilde{\Delta}(\tilde{J}_1, \tilde{J}_2)$ is a function that we determine exactly.
Since the states that are part of the continuum are unprotected, they come in super-multiplets with charges R and R + 1. Now we can see the meaning of the gap scale $\Delta_{\rm gap}$. For any charge $R \neq R_*$, the spectrum starts at $\Delta_{\rm extremal}(R)$ with no gap. But for $R = R_*$, we have the BPS states at scaling dimension $\Delta_{\rm BPS}$, and the first excited states above them in this charge sector start at $\Delta_{\rm BPS} + \Delta_{\rm gap}$ (the continuum sector at $R = R_*$ comes from supermultiplets with highest R-charge $R_*$ and $R_* + 1$). This is a large-N analysis, and therefore we cannot rule out the possibility of an O(1) number of states between $\Delta_{\rm BPS}$ and $\Delta_{\rm BPS} + \Delta_{\rm gap}$. We conjecture that $\Delta_{\rm gap}$ is the true gap of the theory at large N.
When the chemical potential of the R-symmetry is fixed appropriately, the grand canonical partition function can also be used to compute the superconformal index of such black holes, and the only contribution comes from the first line of (1.1). This result can be obtained by directly computing the corresponding supersymmetric index in the N = 2 super-Schwarzian theory. This matches the result from the leading gravity saddle found in [8,23] and, additionally, shows that the one-loop determinant from the gravitational theory matches the computation of the index in the boundary theory. An interesting feature that comes out of our analysis is that the IR R-charge of the N = 2 super-Schwarzian theory is shifted by a known value $R_*$ compared to the UV R-charge of N = 4 Yang-Mills.
We would like to emphasize that since we work in a mostly canonical ensemble where almost all charges are fixed, we avoid the subtleties encountered in the grand canonical ensemble regarding which sheet the superconformal index is computed [22]. In our formalism, these issues would appear from attempting to sum over charges to construct the grand canonical answer.
Additionally, we compute the leading (in $\alpha'$) non-zero stringy correction to the black hole spectrum. While, as expected, the scaling dimension and charges of BPS states remain unaffected, $\Delta_{\rm gap}$ and the scaling dimension of the extremal state $\Delta_{\rm extremal}(R)$ are affected, and are pushed towards lower energies. Nevertheless, because the N = 2 super-Schwarzian theory remains a good effective description even in the presence of stringy corrections, the overall dependence of the density of states on $\Delta_{\rm gap}$ and $\Delta_{\rm extremal}(R)$ remains as in (1.1).
Footnote 5: $J_{1,2}$ do not need to scale with any parameter in the theory for the black holes discussed in this paper.
A technical difficulty in our computation comes from identifying the correct parameters of the N = 2 super-Schwarzian that capture the correct breaking of the SU(1, 1|1) isometry as the temperature of the near-1/16-BPS black holes is increased. In particular, there are multiple versions of the N = 2 super-Schwarzian, since the R-symmetry group associated to the breaking of the near-horizon isometry is U(1). One can change the radius of the associated U(1) mode in the super-Schwarzian by choosing a different value for the fundamental charge in the theory, and, additionally, one can add a topological θ-angle term associated to this U(1) mode in the super-Schwarzian action. Depending on this fundamental charge and on the value of θ, the N = 2 super-Schwarzian density of states can have widely different behaviors: 6 for instance, in some cases there is a gap and the BPS states are purely bosonic, while in others the theory has no such gap and has degenerate bosonic and fermionic BPS states (leading to possible cancellations in the superconformal index). For black holes in AdS 5 , we explain how each parameter is fixed from the perspective of the N = 4 SYM boundary theory and of the bulk supergravity theory. In particular, we find that the fundamental R-charge in the super-Schwarzian is fixed to be 1 from the quantization of the SO(6) R-symmetry charges and the two angular momenta along S 3 , and we find that θ = 0 by carefully evaluating the 10D supergravity action. Fixing these two parameters determines the value of the gap and the density of states that we described above. Moreover, we also determine the Schwarzian energy and R-charge in the IR, in terms of the N = 4 SYM scaling dimensions ∆ and R-charge R.
One might however wonder whether there are near-BPS black holes in AdS/CFT whose density of states is also controlled by the N = 2 super-Schwarzian but with different values for the fundamental charge and θ-angle. For instance, one might ponder whether there are situations in which there is no thermodynamic mass gap above the BPS state. We construct such near-BPS black holes in AdS 3 for N = (2, 2) and N = (2, 0) supergravity. We explain how, in such a case, the density of states predicted by the super-Schwarzian can also be obtained from modular invariance in the boundary 2D N = (2, 2) superconformal field theory by making mild assumptions about the boundary theory (i.e. that it has large central charge and a non-vanishing twist gap). 7
Footnote 6: This is closely related to how the spectrum of a particle moving on a circle is also controlled by the radius of the circle and by the θ-angle term that one can also add to the action [41].
Footnote 7: As noticed in [36,42] in theories with other amounts of supersymmetry, this connection originates from the equality between the partition function of the (super)Schwarzian and the semiclassical limit (large central charge) of the vacuum (super)Virasoro character.
The rest of this paper is organized as follows. In section 2 we first describe the connection between the Schwarzian theory and the spectrum of near-extremal black holes more broadly. We then describe the various features of the N = 2 super-Schwarzian theories which will be needed to extract the spectrum of near-BPS black holes. In section 3, we give a detailed description of the BPS and near-BPS black hole solutions in AdS 5 . We work in a mixed ensemble with some charges fixed, and argue that in this ensemble the index is always in a deconfined phase. We follow this by an analysis focused on the steps necessary to determine the coupling, fundamental charge and θ-angle in the corresponding N = 2 super-Schwarzian theory by analyzing the low-temperature expansion of the action and the super-algebra of the near-horizon isometry present for BPS black holes. Putting these results together, we give a detailed discussion of the spectrum of near-BPS black holes. We conclude with a discussion about the leading stringy corrections to the spectrum of such black holes. We discuss further appearances of the N = 2 super-Schwarzian within holography in section 4 for black holes in (2, 2) supergravity in AdS 3 , dual to heavy states in 2D (2, 2) SCFTs. Finally, in section 5 we discuss possible non-perturbative corrections to the spectrum of N = 4 SYM and 2D (2, 2) SCFTs and conjecture that the gap persists in the spectrum even when including states that exhibit string excitations in the black hole background.
From the Schwarzian to near-extremal black holes
The connection between the dynamics of near-extremal black holes and the one-dimensional Schwarzian theory has been extensively studied [43-57, 28, 29]. Through this connection, a better understanding of the spectrum of near-extremal black holes has been achieved, and the previously open question about the existence of a thermodynamic mass gap has been resolved for a variety of near-extremal black holes [28,29]. Nevertheless, there are still near-extremal black holes whose connection to the Schwarzian has not yet been completely understood and, consequently, their spectrum near extremality remains unknown. One such example are the near-BPS black holes in AdS 5 , whose connection to the Schwarzian will be extensively discussed in section 3. It is useful to first review the general mechanism through which the spectrum of near-extremal black holes is related to that of the Schwarzian with various amounts of supersymmetry.
The clear-cut way to understand the relation between the two spectra is by performing a dimensional reduction of the full gravitational theory in the near-horizon region to $AdS_2$ and isolating the low-energy modes whose one-loop determinant alone depends on the temperature T at leading order. In the limit in which the inverse temperature β is much larger than the horizon size of the black hole at extremality $r_0$ ($\beta \gg r_0$), and under the assumption that the extremal entropy $S_0$ is the largest dimensionless parameter in the problem, the theory that results from the dimensional reduction is a two-dimensional theory of gravity coupled to a dilaton field (which parametrizes the area of the sphere on which the reduction is performed). In addition, the two-dimensional theory includes gauge fields that result either from the s-wave reduction of gauge fields (whose gauge group we will denote by G) in the original theory or as Kaluza-Klein modes that capture the isometry group of the space on which the dimensional reduction was performed (whose gauge group we will denote by $G_{\rm iso}$). In theories of supergravity which include fermions in higher dimensions, an analogous procedure leads to a coupling between a dilatino field and the gravitino in the 2D theory. Expanding this action in $S_0$, one finds that the theory can be approximated by JT gravity plus a BF theory whose gauge group is $G_{BF} = G \times G_{\rm iso}$. For higher-dimensional theories of supergravity, the dilaton and metric as well as the gauge field entering in the BF theory all couple to the gravitino and dilatino. Fluctuations in the region outside of the near-horizon region can be captured by a boundary term for the JT gravity action and for the BF action, computed along the curve that separates the near-horizon region from the asymptotic region.
For concreteness, we will briefly review the example of large near-extremal Reissner-Nordstrom black holes in (bosonic) Einstein-Maxwell theory in AdS 5 (i.e. whose horizon size at extremality $r_0$ is much larger than the AdS 5 size, $r_0 \gg \ell_{AdS_5}$, and whose inverse temperature satisfies $\beta \gg r_0$). The spectrum of such black holes was extensively studied in [28]. Following the steps outlined above, the canonical partition function of such black holes can be expressed as in (2.1), whose terms capture the extremal entropy, the extremal energy, and the Schwarzian ∼ 1/β correction to the action; here the extremal mass $M_0(Q)$, the "extremal entropy" $S_0(Q)$ and the Schwarzian coupling, denoted by $\phi_b(Q)$ (or by the more physically meaningful notation $M^{-1}_{SL(2)}$, which we shall explain shortly), are fixed functions of the charge determined by the classical solution, 8 where $\ell_{AdS_5}$ is the radius of AdS 5 and $G_5$ is the Newton constant. To obtain the first line of (2.1), we perform a dimensional reduction on $S^3$ in the $AdS_2 \times S^3$ near-horizon region and then expand the resulting action at large $S_0$. For large black holes, the radius of AdS 2 is the same as the radius of AdS 5 , which fixes $\Lambda_{AdS_2} = 6/\ell^2_{AdS_5}$. The leading-order result yields the action of JT gravity, whose degrees of freedom are the 2D metric of the near-horizon region and the dilaton φ which parametrizes the size of $S^3$. Above, the BF terms associated to the gauge group $G_{BF} = U(1) \times SO(4)$ do not give a non-trivial contribution, since we are fixing both the U(1) and SO(4) fluxes when fixing the charge of the black hole to Q and its angular momenta $J_1 = J_2 = 0$ (this will be contrasted with the case of the grand-canonical partition function below) [58]. There is additionally a Gibbons-Hawking-York (GHY) boundary term at the edge of the near-horizon region, where the dilaton (fixed to $\phi|_{\partial AdS_2} = \phi_b/\varepsilon$) as well as the induced boundary metric are fixed (to have a proper boundary length $\beta/\varepsilon$, with ε the holographic cutoff).
In addition to these modes, there are also KK modes obtained from the dimensional reduction on $S^3$. The one-loop determinant of most KK modes corresponding to massless fields in the original theory yields logarithmic corrections to the extremal entropy, given by $S_0^{\#}$, which can in principle be computed using the methods in [59-62, 28]. 9 Here, the exact value of # depends on the massless field content in the original gravitational theory and is unimportant in the analysis of this paper, since it does not affect the energy dependence of the density of states but only its overall scaling. The remaining modes which are not taken into account are precisely the JT gravity modes which we have separated in the first line of (2.1). At zero temperature, these are the zero-modes that are ubiquitous when computing one-loop determinants in black hole backgrounds, given by the set of large diffeomorphisms which do not vanish close to the boundary of the near-horizon region. 10 However, when working at finite temperature, these zero-modes are lifted and become the boundary modes of the near-horizon region, weighed by the Schwarzian action. This can be seen when going from the first to the second line of (2.1) by integrating out the dilaton and rewriting the GHY term in terms of a field f(τ) which parametrizes the set of possible large diffeomorphisms. Equivalently, f(τ) parametrizes the shape of the boundary of the near-horizon region. The Schwarzian theory is weakly coupled when $T \gg M_{SL(2)}$ and becomes strongly coupled when $T \sim M_{SL(2)}$. Finally, to go from the second to the third line and compute the partition function for any coupling, one can use the fact that the path integral of the Schwarzian theory is exactly solvable and is in fact one-loop exact. This one-loop determinant can be obtained by accounting for the three remaining bosonic zero-modes that survive even at finite temperature and are due to the near-horizon SL(2, R) isometry.
Footnote 8: Here, Q can be viewed as the charge associated to the supergravity gauge field A, whose conventions we set in (3.1) and thereafter. For the reader's convenience, in this section we leave the radius of AdS 5 arbitrary and set it to $\ell_{AdS_5}$.
Footnote 9: For fields that are not massless in the original gravitational theory, the one-loop determinant yields an answer that, to leading order, is independent of the extremal entropy $S_0$.
Footnote 10: See for instance [61] for a treatment of these modes.
The low-temperature expansion of the action is sufficient to obtain the full partition function in (2.1). This is because the Schwarzian theory can be viewed as the effective theory for the breaking of the near-horizon SL(2, R) isometry as the temperature is turned on. This motivates denoting the Schwarzian coupling by $M_{SL(2)}$.
The result in (2.1) determines the density of states of these black holes, obtained by Laplace transforming (2.1). For the purposes of this paper, it is also useful to study the partition function in the grand-canonical ensemble, imposing that the holonomy of the U(1) gauge field is fixed to $e^{\oint A} = e^{-\beta\mu}$ at the boundary of the spacetime, along the thermal circle. The partition function can then be instructively rewritten as in (2.4). The grand-canonical partition function can be obtained from (2.1) by summing over fixed charges. This sum is dominated by the charge $Q_*$, which is fixed in terms of the chemical potential by the saddle-point condition. The sum over n in (2.4) accounts for large U(1) gauge transformations that do not vanish at the boundary of the near-horizon region. Such transformations can be parametrized by a U(1) mode that captures the breaking of the U(1) symmetry as the chemical potential is turned on. Like the Schwarzian, this theory is one-loop exact, and its on-shell action and one-loop determinant are captured in the sum over n in (2.4). Once again, the full partition function of this effective theory can be read off from the low-temperature expansion of the on-shell action.
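The interplay between the winding-saddle and fixed-charge representations is the same as for a quantum particle on a circle with a θ-angle (cf. footnote 6). The self-contained Python sketch below, with illustrative values of the coupling C and θ that are placeholders rather than black hole data, checks this equivalence numerically via Poisson resummation:

```python
import numpy as np

def z_windings(beta, theta, C, n_max=50):
    """Partition function as a sum over winding saddles n, each weighted by
    a theta-angle phase exp(i*theta*n) and a Gaussian on-shell action."""
    n = np.arange(-n_max, n_max + 1)
    return np.sqrt(2*np.pi*C/beta) * np.sum(
        np.exp(-C*(2*np.pi*n)**2/(2*beta) + 1j*theta*n))

def z_charges(beta, theta, C, q_max=50):
    """Equivalent representation as a sum over integer charge sectors,
    related to z_windings by Poisson resummation."""
    q = np.arange(-q_max, q_max + 1)
    return np.sum(np.exp(-beta*(q + theta/(2*np.pi))**2/(2*C)))

beta, theta, C = 2.0, 0.7, 1.3
assert np.isclose(z_windings(beta, theta, C).real,
                  z_charges(beta, theta, C))
```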
We can additionally fix the angular velocities of the black hole instead of its angular momenta.
This can be done by setting the holonomy of the SO(4) gauge field that is obtained from the dimensional reduction to $e^{i\oint B} = e^{-\beta\omega_1 J_1 - \beta\omega_2 J_2}$, where $\mathcal{J}_1$ and $\mathcal{J}_2$ are two Cartans of SO(4) whose eigenvalues are $J_1$ and $J_2$. Similar to the U(1) mode, excitations of the SO(4) gauge field can be captured by an SO(4)-valued field which parametrizes non-vanishing gauge transformations at the boundary of the near-horizon region. As above, there are multiple saddles for this SO(4) mode, which is once again due to the fact that the partition function does not explicitly depend on $\omega_1$ or $\omega_2$, but rather on the holonomy $e^{-\beta\omega_1 J_1 - \beta\omega_2 J_2}$.
The black holes described above should also appear in the bosonic truncation of a supergravity theory. Nevertheless, black holes which at extremality preserve some amount of supersymmetry are instead described by different versions of the super-Schwarzian theory. This is because the effective theory which computes the low-temperature expansion of the partition function does not only capture the breaking of the near-horizon bosonic isometry group SL(2, R) × G BF ; instead, it captures the breaking of a super-group. As mentioned above, in such a case there are additional modes that remain massless in the near-horizon region, which contribute in the low-temperature expansion of the partition function -these are the dilatino and the gravitino.
These modes do not decouple from the bosonic modes and, consequently, the sum over n in (2.4) has a more complicated dependence on β and n. Nevertheless, as we will explain in section 2.3, the couplings determining the low-temperature expansion of the partition function can still be read off from the on-shell action, as was the case in (2.1) or (2.4).
Thus, instead of performing the full dimensional reduction, in this paper we will take the simpler approach of reading off the effective action which captures the low-temperature expansion of the partition function. As discussed above, we will identify this effective theory by (a) the near-horizon isometry that gets broken when the temperature, chemical potential and angular velocities are turned on, and (b) the low-temperature expansion of the on-shell action. This will determine the correct version of the super-Schwarzian theory needed to describe the near-BPS black holes discussed in this paper. Before diving into that identification, it is useful to first discuss the properties of the effective theory that will end up being important in our analysis, the N = 2 super-Schwarzian theory.
The model
The N = 2 super-Schwarzian theory was described in detail in [34]. It is a theory described by N = 2 super-reparametrizations. Just like in the bosonic case, the bulk origin of these super-reparametrizations is the action of super-diffeomorphisms on the metric, gravitino and gauge field at the boundary of the near-horizon region. After imposing the appropriate chirality constraints, these super-reparametrizations can be described in super-space coordinates, $(\tau, \theta, \bar\theta) \to (\tau', \theta', \bar\theta')$, in terms of two time-dependent bosonic fields $f(\tau)$ and $e^{ir\sigma(\tau)} \in U(1)$, where r is a normalization constant we will discuss below, as well as two fermionic fields $\eta(\tau)$ and $\bar\eta(\tau)$; the remaining components, denoted by dots in the explicit transformation, can be obtained by explicitly solving the super-reparametrization constraints.
The fields entering these super-reparametrizations become the degrees of freedom of the N = 2 super-Schwarzian theory. The action, written in terms of the super-coordinates, is built from the corresponding super-Schwarzian derivative and is quoted in (2.11). Configurations related by a bulk super-isometry are physically equivalent; for this reason, we should quotient the path integral over these modes by whatever super-isometry is present in the bulk. In N = 2 JT gravity this is SU(1, 1|1). These super-isometries can be explicitly identified in (2.11) as a global SU(1, 1|1) acting on the fields f, σ, η, $\bar\eta$ [34]. Consequently, we should quotient the space of configurations of f, σ, η, $\bar\eta$ by such transformations.
In addition, one can add a topological term in the bulk associated to the U(1) R-symmetry gauge field. On the boundary, this corresponds to adding to the action a topological θ-term for the U(1) mode σ, which weighs the winding number of $e^{ir\sigma}$ around the thermal circle. Due to its topological nature this term is also invariant under the SU(1,1|1) transformations.
Because $e^{ir\sigma(\tau)} \in U(1)$ and thus $\sigma \sim \sigma + 2\pi/r$, we can identify theories with $\theta \sim \theta + 2\pi$ and will therefore restrict to $\theta \in [0, 2\pi)$. From the bulk perspective, since the boundary holonomy of the U(1) R-symmetry gauge field is given by $e^{i(\sigma(\beta+\tau)-\sigma(\tau))}$, r is determined by the smallest R-charge among the fields that could possibly be coupled to the U(1) R-symmetry gauge field in N = 2 JT.12 As in (2.1), we will next review the features of the resulting partition function, which depends on the inverse temperature β, setting the length of the thermal circle, and on a U(1) chemical potential α associated to the mode σ, under which the fermionic fields are also charged. Thus, the periodicity conditions for the resulting path integral become $f(\tau+\beta) = f(\tau)$, $e^{i\sigma(\tau+\beta)} = e^{2\pi i\alpha} e^{i\sigma(\tau)}$, $\eta(\tau+\beta) = -e^{2\pi i r\alpha}\eta(\tau)$, and similarly for $\bar\eta$.
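To make the origin of the sum over saddles below concrete, here is a minimal sketch (our notation; it uses only the boundary conditions just stated). Winding configurations of the U(1) mode,
\[
\sigma_n(\tau)=\sigma(\tau)+\frac{2\pi n}{r}\,\frac{\tau}{\beta}\,,\qquad n\in\mathbb{Z}\,,
\]
wind the U(1)-valued field $e^{ir\sigma}$ an extra n times around the thermal circle while respecting the same gauge-invariant boundary data. The path integral therefore splits into winding sectors labeled by n, which is the origin of the sum over saddles in the exact partition function (2.14) and of the analogous sums in section 3.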
Exact partition function and its spectrum
The partition function of the N = 2 super-Schwarzian theory can be computed exactly [35,36].
We work in conventions where the U(1) R-charge of the complex supercharge is one and the minimal R-charge of fundamental fields is fractional and given by r. Consistency of the spectrum with N = 2 supersymmetry requires that 1/r is an integer, since states related by applying a supercharge should both be in the spectrum. The partition function is one-loop exact and given in (2.14) [35,36]. The sum over n in (2.14) is a sum over saddles, analogous to the previous result (2.4).

Supersymmetry in the on-shell action. The contribution of the $SL(2,\mathbb{R})$ Schwarzian mode to the action is independent of the winding mode n. Thus, the equality of couplings seen in (2.11) translates into a relation between the α-dependent and the α-independent terms in the on-shell action, in the convention in which $\alpha \sim \alpha + 1/r$.
Density of states within each supermultiplet. We can Fourier transform the above result to obtain the decomposition of $Z_{\mathcal{N}=2\,\rm Schw}(\beta,\alpha)$ as a sum over U(1) R-charges Z, with the smallest charge equal to r. Because of supersymmetry, the spectrum should organize itself into supermultiplets $(Z) \oplus (Z-1)$ for states with $E \neq 0$, and into states solely of charge Z for states with $E = 0$.
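A hedged sketch of the decomposition this implies (our labels $d_Z$ and $\rho_Z$; the precise normalizations are those of (2.15)):
\[
Z_{\mathcal{N}=2\,\rm Schw}(\beta,\alpha)\;=\;\sum_{Z}e^{2\pi i\alpha Z}\,d_Z\;+\;\sum_{Z}\int_0^\infty\! dE\; e^{-\beta E}\left(e^{2\pi i\alpha Z}+e^{2\pi i\alpha(Z-1)}\right)\rho_Z(E)\,,
\]
where $d_Z$ counts the BPS states of charge Z at E = 0 and $\rho_Z(E)$ is the continuous density of states within the supermultiplet $(Z)\oplus(Z-1)$.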
Based on symmetry principles the partition function should thus be decomposed as in (2.15). In contrast to Schwarzian theories with smaller amounts of supersymmetry (N = 0 and N = 1), the density of states for N = 2 splits into an extremal piece and a continuous piece. Since the first term in (2.15) is temperature independent, those states are extremal and yield an exact Dirac delta-function in the density of states at E = 0. Rewriting (2.14) in the form (2.15) gives the explicit densities, (2.16) [36].

The supersymmetric index. For particular values of the U(1) R chemical potential α, the grand canonical partition function computes a supersymmetric index. In particular, for any α such that $e^{2\pi i\alpha} = -1$ the fermionic degrees of freedom η and $\bar\eta$ become periodic, and consequently turning on this fugacity is equivalent to the insertion of $(-1)^F$ in the super-Schwarzian path integral. This works for any $\alpha_k = k - 1/2$ with $k = 1, \dots, r^{-1}$. For r = 1 there is only one index (with α = 1/2); if we fix $-\pi \leq \theta \leq \pi$, its value follows from (2.14). In such a case, note that $Z(\beta \to \infty, \alpha \to 0) = Z(\beta, \alpha \to \tfrac{1}{2})$, which implies that the ground states of the N = 2 super-Schwarzian are then purely bosonic.
The 't Hooft anomaly. Finally, we comment on the symmetries of the particular theories with θ = 0 and θ = π, for which charge conjugation invariance appears, at least classically. For a related toy model see Appendix D of [41]. When θ = 0 and $1/r \in \mathbb{Z}$ the theory presents both charge conjugation and U(1) R symmetry. This is evident from the fact that the partition function (2.14) is real and that the charges are integer multiples of r. Instead, the theory with θ = π has a 't Hooft anomaly between the charge conjugation and U(1) R symmetries. For example, the localization calculation leading to (2.14) is manifestly charge conjugation invariant, since it is real. Nevertheless, the spectrum in (2.16) is shifted by a half-integer unit of charge, inconsistent with the U(1) R symmetry. This issue can be fixed by shifting the charge by a constant, $Z \to Z + r/2$, so that the charge operator still commutes with the Hamiltonian; the price to pay is breaking charge conjugation invariance. For this reason the theory with θ = π has a 't Hooft anomaly, while θ = 0 preserves the symmetries at the quantum level.
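To see the clash concretely (a schematic restatement of the argument above, in our notation): at θ = π the localization result assigns charges
\[
Z\in r\left(\mathbb{Z}+\tfrac12\right)\,,
\]
a set that is symmetric under charge conjugation $Z \to -Z$ but violates the quantization $Z \in r\,\mathbb{Z}$. After the shift $Z \to Z + r/2$ the charges do lie in $r\,\mathbb{Z}$, but the resulting degeneracies satisfy $\rho_Z \neq \rho_{-Z}$, so charge conjugation is broken. One symmetry can be preserved, but not both.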
BPS and near-1/16 BPS black holes in AdS 5
In the previous section, we have reviewed the role Jackiw-Teitelboim gravity and the Schwarzian theory play in determining the form of the spectrum of near-extremal black holes in general. The Schwarzian mode can be thought of as the soft mode coming from the broken symmetries that emerge in the near-extremal limit. We also considered in particular the case of a broken SU(1,1|1) symmetry, which is described by the N = 2 super-Schwarzian theory.
In this section we apply this to 1/16-BPS black holes in AdS$_5 \times S^5$, which AdS/CFT predicts are dual to 1/16-BPS states in N = 4 super Yang-Mills. We begin by reviewing general properties of these black holes in sections 3.1 and 3.2, taken from [6]; see also [63,4,5,64,7]. Black holes in AdS$_5$ have a priori several related but distinct limits:
• The near-extremal limit, in which T → 0.
• The extremal black holes in which T = 0.
• The supersymmetric limit in which the geometry has Killing spinors.
• The BPS limit which has T = 0 and an enhanced set of Killing spinors.
By adjusting the parameters of the solution, one may move independently off of extremal and supersymmetric surfaces in parameter space [65,8]. We adopt the convention of these references to refer to the intersection of these surfaces as the BPS limit.
Recently the Bekenstein-Hawking entropy of exactly 1/16-BPS black holes was reproduced from the superconformal index of N = 4 Yang Mills [24,25,8-10]. The goal of this section is to reproduce this result from the point of view of the gravitational path integral, and in the process predict the spectrum of excited black hole states above the BPS ones; these states are not protected by exact supersymmetry. In section 3.3 we specify a mixed ensemble, in which some charges are fixed, that will allow us to use the results of section 3.1 without requiring the most general black hole solution. In section 3.4 we analyze the BPS limit and show the black holes have an AdS$_2 \times S^3 \times S^5$ throat with an emergent SU(1,1|1) symmetry. We then identify the N = 2 Schwarzian theory controlling the near-BPS spectrum from the explicit breaking of this superconformal group. In section 3.5 we verify this at the level of the classical action of the black hole saddle, and in section 3.6 we put everything together and give a picture of the spectrum of nearly 1/16-BPS states in N = 4 Yang Mills.
General black hole solutions in AdS 5
The goal of this paper is to extract information about the spectrum of nearly 1/16-BPS states in N = 4 Yang Mills. According to AdS/CFT, these states are dual to black holes, and we will study their spectrum using the gravitational path integral. Since we want to know the energy and charges of these states, we will compute the spectrum from the dual field theory partition function on $S^1_\beta \times S^3$ with angular velocities and chemical potentials conjugate to the SO(6) R-charges turned on. From the bulk perspective, the gravitational path integral instructs us to sum over all geometries satisfying boundary conditions appropriate for our choice of ensemble in asymptotically AdS$_5$. Therefore we need to know the black hole solutions carrying these charges, which we briefly review next, mostly to establish our conventions.
The theory of gravity in asymptotically AdS 5 space dual to four dimensional N = 4 Yang Mills arises from a dimensional reduction of ten dimensional type IIB supergravity on AdS 5 ×S 5 .
From this perspective the SO(6) gauge symmetry in the bulk is identified with the isometries of $S^5$, with the SO(6) charges associated to rotations along the $S^5$. Since SO(6) has rank three, it leads to three gauge fields $A_{1,2,3}$ and charges $R_{1,2,3}$. The resulting theory in AdS$_5$ is quite complicated, but simplifies drastically when the rotation happens symmetrically along the three Cartan directions. We restrict to that case in this paper; the reduction then leads to a simple theory of minimal gauged supergravity in AdS$_5$ with a single gauge field.13 The bosonic part of the action is given in (3.1) (e.g. [66,67]). We work in units in which the AdS$_5$ radius is one and the gauge coupling g has been set to 1. When this effective action is obtained from string theory, the dimensionful parameters appear in the full action through the 5D Newton constant $G_5$. The Chern-Simons term affects both the definition of the asymptotic electric charges and the value of the on-shell action.
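For orientation, a common presentation of this bosonic action is the following (in the conventions of [6], up to overall normalization; the precise coefficients in (3.1) may differ by rescalings of the gauge field):
\[
I=-\frac{1}{16\pi G_5}\int\left[(R+12)\star 1-\frac{1}{2}\,F\wedge\star F+\frac{1}{3\sqrt{3}}\,F\wedge F\wedge A\right],
\]
in units where the AdS$_5$ radius and the gauge coupling are one. The last, Chern-Simons, term is metric independent, which is why it can affect the charges and the on-shell action without contributing to the stress tensor.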
Before writing the solutions we should determine the boundary conditions. We parametrize the bulk by coordinates $(\tau, r, \theta, \phi, \psi)$, where $(\phi, \psi)$ are $2\pi$-periodic and $\theta \in [0, \pi/2]$. The coordinates $(\tau, \theta, \phi, \psi)$ also parametrize the boundary $S^1 \times S^3$ at large radius r. This means that the metric should behave asymptotically, after a rescaling of r, as
\[
ds^2 = \frac{dr^2}{r^2} + r^2\left(d\tau^2 + d\Omega_3^2\right) + O(r^0)\,,
\]
with the unit $S^3$ element $d\Omega_3^2 = d\theta^2 + \sin^2\theta\, d\phi^2 + \cos^2\theta\, d\psi^2$. In order to fix the R-charge chemical potential to be Φ, we fix the gauge field holonomy at large r through $A = -\Phi\, d\tau + O(r^{-1})$; this electric potential is related to the R-charge chemical potentials of the three Cartans. Following [68], we fix the angular velocities along the $S^3$ and the temperature through the identifications of the boundary coordinates. A general solution with this asymptotic behavior is a charged rotating black hole with unequal angular momenta. The solution, which we write in Lorentzian signature for simplicity, defining $t = -i\tau$, was found in [6], but we follow the conventions of [23]; in asymptotically static coordinates the metric is given in (3.3). This metric gives a four-parameter family of solutions for charged and rotating black holes in five dimensions, parametrized by $(m, q, a, b)$, where we restrict to $0 < a, b < 1$. These parameters are related to $(\beta, \Phi, \Omega_1, \Omega_2)$ by imposing that the solution is smooth at the Euclidean horizon, located at the radius $r_+$ defined as the largest positive root of $\Delta_r(r_+) = 0$.
From now on we trade the variable m for $r_+$ through (3.9). One can show the solution is smooth with an appropriate choice of parameters following the methods in [68]. The cycle becoming contractible at the horizon is generated by the vector field $V = \partial_t + \Omega_1^H \partial_\phi + \Omega_2^H \partial_\psi$, where $\Omega^H_{1,2}$ denote the angular velocities at the horizon. Smoothness also determines the temperature, (3.11). The chemical potential at the boundary is fixed by the asymptotic value of $A \to -\Phi\, dt$ as $r \to \infty$. The relation between Φ and q is determined by demanding that the solution is regular at the Euclidean horizon. Finally, smoothness of the gauge potential A at the horizon, $A_\mu V^\mu|_{r_+} = 0$, gives the final relation, (3.12). Altogether, smoothness gives us the four relations needed to solve for $(r_+, q, a, b)$ in terms of $(\beta, \Phi, \Omega_1, \Omega_2)$. Interestingly, in addition to the solution we have just discussed, there are other distinct solutions with the same boundary conditions, obtained by integer shifts of the chemical potentials and angular velocities; these were first analyzed in the context of AdS$_5$ black holes in [23]. We will comment on these solutions in section 3.3, where they play an important role in our analysis.
Alternatively, these black holes can be identified by specifying the values of their four conserved charges $(E, R, J_1, J_2)$, canonically conjugate to $(\beta, \Phi, \Omega_1, \Omega_2)$. The charges can be defined by the ADM procedure, which gives (3.14). The energy of vacuum AdS$_5$ is $E_0 = \frac{3\pi}{32 G_5}$. We will denote the black hole mass by $M \equiv E - E_0$. To reiterate, R is proportional to the angular momenta along the $S^5$, distributed symmetrically along the Cartan directions.
Having found the solution of the equations of motion satisfying the boundary conditions, we can approximate the partition function in this grand-canonical ensemble by the exponential of the classical action, (3.15). As mentioned above, there are other saddles which are relevant in the near-extremal limit; we discuss them in section 3.3 but ignore them for now. The action $I_{\rm GCE}(\beta, \Omega_{1,2}, \Phi_{1,2,3})$ includes the GHY boundary term and the holographic counterterm [69] appropriate to this ensemble, in which we fix the metric and gauge potential at infinity. Defining the Bekenstein-Hawking entropy as the area of the horizon over $4G_5$, the on-shell action satisfies the so-called quantum statistical relation, which can be checked explicitly using the expressions for the charges and potentials. Finally, if we want to compute the partition function in a fixed charge sector, we need not only to write $(r_+, q, a, b)$ in terms of the charges but also to add the appropriate boundary terms to the action to make the variational principle well-defined [72]. For example, if we want to fix the charge R, this amounts to adding an extra term, $I \to I + \beta\Phi R$, understanding that Φ should be written as a function of the charges, and similarly for the angular momenta.
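For reference, the quantum statistical relation takes the standard Gibbs form (our sign conventions for the Euclidean action; normalizations as in the renormalized on-shell action above):
\[
I_{\rm GCE}\;=\;\beta E - S - \beta\,\Omega_1 J_1 - \beta\,\Omega_2 J_2 - \beta\,\Phi R\,,
\]
so that fixing a charge, e.g. via $I \to I + \beta\Phi R$, simply implements the Legendre transform to the corresponding mixed ensemble.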
The 1/16-BPS black hole solution
As discussed in the introduction of section 3, the general AdS 5 black hole solution has different limits corresponding to extremality (T → 0) and supersymmetry (existence of a Killing spinor).
Written in terms of charges, the supersymmetric (but not yet extremal) condition is [73]
\[
M - \tfrac{3}{2}R - J_1 - J_2 = 0\,,
\]
which, after inserting the explicit expressions for M, R, and $J_{1,2}$ valid for both BPS and non-BPS black holes, takes the form (3.21). We next want to impose extremality, meaning that the solution has zero temperature. This can be achieved by imposing the further condition (3.22). The size of the horizon for these BPS black holes is given by a simple expression, $r_*$, and in terms of $r_*$ the constraint (3.21) becomes $q = q_* = (a+b)(1+a)(1+b)$. It is easy to check that (3.21) together with (3.22) implies that T = 0. This means that, while in general one could have supersymmetric non-extremal ($T \neq 0$) black holes, imposing extremality leads to the BPS condition, which is the intersection of the supersymmetric and extremal surfaces. Note that this also means that BPS black holes are now labeled by only two parameters $(a, b)$, through the relations $(r_*(a,b), q_*(a,b))$. In the above, following [8], we introduced the $(\,\cdot\,)_*$-notation, which from now on denotes quantities evaluated after imposing supersymmetry and extremality.
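A compact summary of the BPS data used below (the expression for $r_*$ is the standard one for the equal-charge solution in units where the AdS radius is one; it is stated here for convenience and should be checked against the expressions above):
\[
r_*^2 = a + b + ab\,,\qquad q_* = (a+b)(1+a)(1+b)\,.
\]
As a consistency check, at $a = b$ this gives $r_*^2 = a(a+2)$, which is precisely the horizon radius appearing in the near-horizon limit of section 3.4.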
The black hole solutions that are both extremal and supersymmetric are the ones dual to the 1/16-BPS states, with Bekenstein-Hawking entropy $S_*$.
Importantly, we cannot yet determine from gravity whether this is the true entropy of BPS states without including quantum effects from the gravity path integral, as explained in section 2. The field theory calculation of the index supports this interpretation of $S_*$, which we will verify from gravity as well below. The BPS values for the energy and charges follow from the expressions above, and it is easy to verify the relation $M_* - \tfrac{3}{2}R_* - J_1^* - J_2^* = 0$. Thus, due to the BPS constraints, given the angular momenta $J_1^*$ and $J_2^*$ we uniquely determine the equal $U(1)^3 \subset SO(6)$ charge $R_*$ of the BPS black hole. A field theory explanation of this constraint from the point of view of N = 4 Yang Mills was suggested in [74,75]. The chemical potentials associated to the BPS black holes are given by $\Phi_* = \tfrac{3}{2}$ and $\Omega^*_{1,2} = 1$. This restriction on parameters by the BPS condition is not unfamiliar. In the case of black holes in four-dimensional ungauged supergravity, supersymmetry only implies that the mass is equal to the charge, while BPS states satisfy the further condition that the angular momentum vanishes. A motivation for this is obvious in that example: supersymmetric black holes with real, nonvanishing angular momentum have a naked singularity in Lorentzian signature.14
The issue with supersymmetric non-extremal AdS 5 black holes is instead the presence of closed timelike curves (CTC) in Lorentzian signature outside the horizon [6], and imposing extremality together with supersymmetry removes this pathology.
What we want to compute
In this section we clarify two issues that arise from the discussion so far. The first is that in the grand-canonical ensemble the solution reviewed in section 3.1 is not the only one. Take for example the gauge potential A. While we demanded that $A_\mu V^\mu|_{\rm horizon} = 0$, the gauge-invariant statement that the holonomy is trivial along a contractible cycle allows for more general values of the potential at the horizon, which can be removed by a large gauge transformation but lead to physically distinct solutions. The other saddles are mostly subleading, but they become important at low temperatures, which is the regime of interest of this paper. They were considered in the near-extremal limit in the context of a different black hole solution in [28,29], and we will discuss them in the context of AdS$_3$ in section 4.
The second issue is related to the fact that the supergravity theory considered above and the corresponding electrically charged black holes arise from the dimensional reduction of Type IIB supergravity solutions on S 5 in the sector in which the solutions have equal Kaluza-Klein momenta R ≡ R 1 = R 2 = R 3 on the S 5 factor. In a grand canonical ensemble, even if we fix the chemical potentials conjugate to the R-charges R i to be equal, these new solutions discussed in the previous paragraph will inevitably involve configurations that do not respect this symmetry.
This would require dealing with the much more complicated solution of type IIB with scalar fields turned on [77], which we will not attempt in this paper.
We will address these issues in the following way. First, we will use information about the UV completion of the theory, given by N = 4 Yang Mills, to determine the full space of saddle-point configurations. Second, we will choose an ensemble in which only configurations with equal R-charges, $R_1 = R_2 = R_3$, contribute. Our conventions in this subsection closely follow those of [23].
We begin by identifying the new solutions of the equations of motion. The most obvious way to generate new solutions is the following. Looking at the boundary conditions specified in section 3.1, it is clear that any geometry obtained by an integer shift $\Omega_i \to \Omega_i + \frac{2\pi i n}{\beta}$, $n \in \mathbb{Z}$, solves the same equations with the same boundary conditions while being physically distinct. Therefore the gravitational path integral instructs us to sum over all of them. Something similar is true for the gauge potential: two configurations related by $A \to A + \frac{2\pi i n}{\beta}\, dt$ satisfy the same gauge-invariant boundary condition at infinity while remaining smooth, since the extra integer flux can be undone by a large gauge transformation. This is not surprising, since the R-charge chemical potential in AdS$_5$ corresponds to an angular velocity along the $S^5$ directions in ten dimensions.
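A quick check that such shifts preserve the boundary data (schematic, for a probe of integer charge q in our normalization): the Wilson line of A around the thermal circle changes only by an integer phase,
\[
W_q=\exp\Big(iq\oint_{S^1_\beta} A\Big)\;\longrightarrow\; W_q\, e^{2\pi i n q}=W_q\,,\qquad q\in\mathbb{Z}\,,
\]
so any probe with properly quantized charge sees identical boundary conditions, even though the bulk saddles are physically distinct. The correlation between R-charges and spins of the fields of N = 4 Yang Mills refines this argument, leading to the constraint (3.28) below.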
The partition function with all chemical potentials fixed is given in the large N limit by a sum over saddles, (3.27), where each exponent is the action of the black hole considered in [77] with Dirichlet boundary conditions. When $\Phi_1 = \Phi_2 = \Phi_3 = \tfrac{2}{3}\Phi$ (which, along with the factors of $\tfrac{1}{2}$ in (3.27), defines our normalization for $\Phi_i$), this action becomes equivalent to the action computed in section 3.1. We will discuss the one-loop contribution later. At this point we need to make a choice regarding the range of integers allowed in the sum (3.27). One option is to consider more carefully the global nature of the gauge groups in five dimensions from the ten-dimensional perspective given by Type IIB. A simpler option is to use AdS/CFT and analyze the properties of this partition function as given by N = 4 Yang Mills, which we turn to next.
The N = 4 Yang Mills theory has superconformal symmetry PSU(2,2|4), and parallel to the gravity analysis we decompose the Lorentz and R-symmetry factors into their Cartan subalgebras, with R-charges $R_i$, $i = 1, 2, 3$. In N = 1 language, this theory is a particular instance of a vector multiplet coupled to a triplet of adjoint chiral multiplets $X_i$, where we normalize the R-charges such that $R_i$ assigns charge 2 to the corresponding chiral multiplet. This means that scalars have even integer R-charge and fermions have odd integer R-charge. States of the theory are also labeled by the two angular momenta $J_1, J_2$, which as usual are integer or half-integer and satisfy spin-statistics. With these conventions, the fermion number operator satisfies $F = 2J_{1,2} = R_i \;({\rm mod}\ 2)$. This constraint implies a particular periodicity of the grand canonical partition function under shifts of the chemical potentials [23], which we can use to deduce the allowed saddles we need to include. Comparing with the gravitational answer (3.27), the constraint implies the restriction (3.28) on the allowed integer shifts. This determines the saddles that should in principle be included based on smoothness and the global properties of the gauge groups.15 The need to sum over saddles was pointed out in this context in [23] for supersymmetric configurations only, but it is true more broadly.
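A minimal sketch of the periodicity argument (ours; the precise form of (3.28) follows from tracking all five potentials with the $\tfrac12$ normalization of (3.27)): since $(-1)^F = e^{2\pi i J_{1,2}} = e^{i\pi R_i}$ on every state, a combined integer shift of the potentials inserts
\[
e^{2\pi i\, n_1 J_1}\, e^{2\pi i\, n_2 J_2}\, e^{i\pi\left(m_1 R_1 + m_2 R_2 + m_3 R_3\right)} \;=\; \left[(-1)^F\right]^{\,n_1 + n_2 + m_1 + m_2 + m_3}
\]
into the trace, which leaves the grand canonical partition function invariant precisely when $n_1 + n_2 + m_1 + m_2 + m_3$ is even. The saddles allowed in (3.27) are constrained accordingly.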
This brings us to our second issue. We see clearly now that even if we restrict to $\Phi_1 = \Phi_2 = \Phi_3 = \tfrac{2}{3}\Phi$, the sum over integers will necessarily involve configurations outside the scope of section 3.1. In order to resolve this we will go to a mixed ensemble where some charges are fixed. First we introduce some notation, writing the grand-canonical partition function as a trace in which Q, Q† is the supercharge preserved by the 1/16-BPS states at zero temperature. This will allow us later to make a more direct connection with the superconformal index [25]. The definition of {Q, Q†} is chosen to match the supersymmetry condition, (3.20), when all $R_i$ are equal. We redefine the chemical potentials to track their temperature-scaled deviations from their BPS values [8], and introduce new flavor charges which commute with the N = 1 subalgebra. In terms of these, the grand canonical partition function becomes a sum over unconstrained integers $(m, m_1, m_2, n_1, n_2)$. This in part motivates the redefinitions: if we expect to obtain an emergent N = 2 Schwarzian mode in the nearly 1/16-BPS limit, it should come with a sum over saddles involving an unconstrained integer.
Avoiding black hole solutions with unequal R-charges suggests we use a mixed ensemble in which we fix the inverse temperature β and the BPS chemical potential α, but Laplace transform with respect to the other potentials to obtain a trace over charges. For simplicity we also work in an ensemble of fixed $j_{1,2}$ charges. To ease the notation we use the same letters to denote the generators $j_{1,2}, q_{1,2}$ and their fixed numerical values in the ensemble. To work with equal charges $R_1 = R_2 = R_3$, we set $q_1 = q_2 = 0$, which leaves us with one free U(1) charge, previously called R in the gravity solution. As mentioned above, solutions with more general charges are increasingly difficult to construct [79,73,64,7,77], because one must include scalar moduli corresponding to deformations of the $S^5$ metric, and they are outside the scope of the present work. We can now put everything together and write an expression for the semiclassical limit of the partition function in the mixed ensemble we are interested in, with fixed α, $j_{1,2}$ and $q_{1,2} = 0$. The answer is a sum over a single integer-valued family of saddles, (3.40). In the first line of (3.40) we write the action in this mixed ensemble as the grand-canonical action $I_{\rm GCE}(q_1 = 0, q_2 = 0, j_1, j_2, \alpha + n)$ plus the evaluation of the boundary terms needed to make the variational problem well-defined (adapting [72] to our problem), resulting in the extra terms $2\pi i\omega_1 j_1 + 2\pi i\omega_2 j_2$. In these last two terms $\omega_1$ and $\omega_2$ should be written in terms of the quantities that are fixed in this ensemble. In the second line we defined the total value of the action accordingly. We now have only the sum over the integer shifts of the holonomy corresponding to the BPS potential in (3.40). This sum over saddles has the schematic form of (2.14), in which the integer n shifts α. The superconformal index was defined and computed for N = 4 Yang Mills in [24,25]. In our ensemble, the index obtained from evaluating $\mathcal{I}(\beta, j_1, j_2) \equiv Z(\beta, j_1, j_2, \alpha = 1/2)$ is related to the superconformal index of [24,25] by an exact relation.16 While the superconformal index $\mathcal{I}(\beta, \omega_1, \omega_2, \Delta_1, \Delta_2)$ can be in a confined phase for some range of parameters, hiding the black hole behavior [21,22,19], the index $\mathcal{I}(\beta, j_1, j_2)$ defined above (with some charges fixed) is always dominated by the black hole saddle. We leave verifying this from a boundary perspective for future work.
In section 3.5, we will see that not only does (3.40) share the same sum over U (1) saddles of the N = 2 super-Schwarzian partition function, but it also has the same classical action as a function of β and α in the limit of low temperatures and fixed α. From this we will determine the one-loop determinant around each saddle in section 3.6.
Near horizon geometry and supersymmetries
Having discussed the general black hole solution, the mixed ensemble partition function, and the saddles that contribute to it, we turn our attention to the near-extremal (and specifically near-BPS) limits. For the AdS 5 black hole (3.3), the exact supersymmetric and extremal limits have a near-horizon geometry described by AdS 2 , and upon uplifting to 10D, the solution is a specific fibration of AdS 2 × S 3 × S 5 which we describe in detail.
In the strict BPS near-horizon limit, we show the solution develops emergent superconformal isometries. On the bosonic side, there is both an emergent SL(2, R) for the AdS 2 as well as a U (1) which rotates the solution along a particular combination of angles on S 3 × S 5 . For the fermions, we find as expected a doubling of the number of supersymmetries corresponding to the appearance of a conformal Killing spinor.
Altogether, the BPS gravity theory has a local SU (1, 1|1) superconformal algebra of symmetries in the near horizon region, but finite temperature quantum corrections explicitly break this local symmetry to a global SU (1, 1|1) which acts as isometries of the near-AdS 2 saddles.
The N = 2 Schwarzian theory captures the breaking of these symmetries. In order to demonstrate that we have identified the correct pattern of symmetry breaking (and thus the correct Schwarzian), we consider the BPS limit of the leading saddle and compute explicitly the global super-isometries of the 10D solution explained above.
In contrast to other parts of section 3, in this subsection we will set the rotation parameters a = b for simplicity. This will not change the general structure of the symmetry algebra, but it will impact the specific form of the Killing vectors and Killing spinors. Much of this section is based on the analysis of [33], but see also [80]. Details of the supergravity analysis are given in Appendix B.
In the original coordinates $(t, r, \theta, \phi, \psi)$ the metric is asymptotically static and the Killing horizon is generated by $V = \partial_t + \Omega_1 \partial_\phi + \Omega_2 \partial_\psi$. For the purpose of determining the near-horizon geometry, it is convenient to switch to corotating coordinates, in which $V = \partial_t$ becomes null at the horizon. This amounts to changing the angles $(\phi, \psi)$ to $(\bar\phi, \bar\psi)$, such that $\phi = \bar\phi + \Omega_1 t$ and $\psi = \bar\psi + \Omega_2 t$. In these coordinates the horizon becomes static and the asymptotic metric is now rotating. In the BPS limit, going to corotating coordinates amounts to $\phi = \bar\phi + \Omega^*_1 t = \bar\phi + t$, $\psi = \bar\psi + \Omega^*_2 t = \bar\psi + t$ (3.46). The near-horizon limit is now taken by shifting r by the BPS horizon radius, $r \to \sqrt{a(a+2)} + r$ (3.47), and expanding for $r \ll r_*$. In what follows, we introduce convenient parameters and make a further general coordinate transformation in which we rescale the near-horizon coordinates, then drop the tildes for notational convenience. This leads to a particular fibration of AdS$_2 \times S^3$, with $d\Omega_3^2 = d\theta^2 + \cos^2\theta\, d\psi^2 + \sin^2\theta\, d\phi^2$ the metric on $S^3$, and $\sigma^L_3 = 2(\cos^2\theta\, d\psi + \sin^2\theta\, d\phi)$.
To determine the Killing spinors and Killing vectors in this geometry, we follow [33] and work with a 10D lift of the above metric. While it is not strictly necessary to work in 10D, this presentation serves to set our conventions and gives a geometrical origin for all the symmetries of the solution. Working with the leading-order Type IIB supergravity theory, the massless fields are the NS-NS sector fields $(G_{MN}, B_{MN}, \Phi)$, the RR form fields $(C^{(0)}, C^{(2)}, C^{(4)})$, as well as complex Weyl spinors and gravitinos $(\lambda, \Psi_M)$ of the same chirality. The field strength $F^{(5)} = dC^{(4)}$ is self-dual, $F^{(5)} = \star_{10} F^{(5)}$. In the empty AdS$_5 \times S^5$ as well as the 1/16-BPS AdS backgrounds of IIB supergravity, we may set the axio-dilaton $(\Phi, C^{(0)})$ to a constant, all fermions $(\lambda, \Psi_M)$ to zero, and only turn on a supersymmetric background for the metric and the 5-form flux $(G_{MN}, F_{MNPQR})$. In Einstein frame, there is no explicit dependence on the scalars, there are no Chern-Simons terms, and the action takes a simple Einstein-Hilbert plus 5-form form. As is standard for theories with self-dual gauge fields, one writes a kinetic term for $F^{(5)}$ but must impose the $F = \star F$ equation of motion by hand.17
In the 10D background, setting the supersymmetry transformation of the gravitino to zero yields the Killing spinor equation. Using this, the independent Killing spinors are found explicitly, where $\epsilon^+_0$ and $\epsilon^-_0$ are constant Majorana-Weyl spinors determined from the integrability condition for Killing spinors (B.22). They are parametrized by two arbitrary real parameters each, giving four real parameters in total. This means that the near-horizon region preserves four supersymmetries, in contrast to the full geometry of the 1/16-BPS black hole we started with [85,86]. Knowing the Killing spinors of the near-horizon metric, we can immediately find its Killing vectors by computing linearly independent Killing spinor bilinears $\bar\epsilon_I \Gamma^a \epsilon_J$. After appropriately normalizing the constant spinors $\epsilon^\pm_0$, we find the Killing vectors $D$, $E^\pm$ and $Z$. We can thus identify $D, E^\pm$ as the generators of $SL(2,\mathbb{R})$ and Z as a U(1) R-symmetry generator.
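The bilinear construction relies on a standard fact, which we record schematically (in our conventions; the analogous statement with the $F^{(5)}$-dependent connection holds in the full IIB background): if $\epsilon_I$ and $\epsilon_J$ satisfy the Killing spinor equation, then
\[
K^M \equiv \bar\epsilon_I \Gamma^M \epsilon_J \quad\Rightarrow\quad \nabla_{(M} K_{N)} = 0\,,
\]
i.e. the bilinear is a Killing vector. This is why the bosonic isometries in (3.65) can be generated directly from the four supersymmetries.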
Using a standard procedure for determining spacetime isometry superalgebras [87-90], we can interpret the above commutation relations as giving the bosonic part of the isometry superalgebra, (3.65), with $Q_B(k_i)$ denoting the bosonic generator associated to the Killing vector $k_i$. To determine the rest of the superalgebra we follow the prescription in which $Q_F(\epsilon_I)$ denotes the fermionic generator associated to the Killing spinor $\epsilon_I$, and the Killing vectors act on the Killing spinors through the spinorial Lie derivative [90], shown explicitly in (B.52). With that, we obtain the remaining (anti)commutation relations, which together with (3.65) can be identified as the SU(1,1|1) superalgebra that the extremal black holes exhibit in their near-horizon region.
It is worth mentioning, as explained in [33] (which specialized to the case $J_1 = J_2$), that the solution has additional bosonic isometries SU(2) × U(3) which do not mix with SU(1,1|1). In principle we could introduce more general charges or chemical potentials corresponding to these symmetries, but as we have already explained in the discussion surrounding (3.39), we are working in a mixed ensemble in which some combination of the rotations and R-charges is held fixed, so these extra symmetries play no role in the thermodynamics.
In the standard example of AdS$_2$ at low but non-zero temperature, the boundary modes of the metric become strongly coupled, and the gravitational path integral reduces to that of JT gravity at leading order in $e^{S_0}$ and β. JT gravity may itself be written as an $SL(2,\mathbb{R})$ BF theory in the first-order formulation of gravity, subject to the correct boundary conditions from the second-order formalism [91]. In the present example, superconformal symmetry of the 10D solution implies that both the boundary modes of the metric and their superpartners become strongly coupled, and the effective two-dimensional theory in the AdS$_2$ throat should instead be based on an SU(1,1|1) BF theory. In the present work we have not attempted to derive the full two-dimensional dilaton supergravity; however, this point of view is not essential for our argument. A reader interested in dimensional reductions to JT supergravity, as well as the relation to super-BF theory, may consult [29,37,92-96].
The low-temperature expansion of the action
The purpose of this section is to show the emergence of the N = 2 Jackiw-Teitelboim mode in the near 1/16-BPS limit. This was anticipated in the previous section based on the pattern of symmetry breaking of nearly 1/16-BPS states. We will work in an ensemble of fixed $j_1, j_2$ and $q_1 = q_2 = 0$. For simplicity we focus first on saddles with integer n = 0, since the general case can be obtained by a simple shift of the chemical potential, $\alpha \to \alpha + n$, with $n \in \mathbb{Z}$. Instead of implementing a dimensional reduction of type IIB supergravity near the horizon to obtain JT gravity, we will take a shortcut and match the classical action. We leave a full treatment of the reduction for future work.
We now turn to the expansion of the on-shell action around the 1/16-BPS values of the parameters, T = 0 and $\varphi = 0$. We turn on non-zero values of T and of the chemical potential $\varphi$, assume $T, \varphi \ll 1$, and keep $\alpha = \frac{\beta\varphi}{4\pi i}$ fixed. As explained above, in the fixed $(\beta, j_{1,2}, q_{1,2} = 0, \alpha)$ ensemble the classical action involves new boundary terms compared with the grand canonical ensemble: the GCE action is evaluated at the chemical potentials and angular velocities for which the ensemble is dominated by the quantum numbers $q_1 = 0$, $q_2 = 0$, $j_1$, and $j_2$. To set up the calculation, we begin by picking $a_*, b_*$ to correspond to the zero-temperature 1/16-BPS black hole. The parameters $a_*, b_*$ determine the fixed values of $j_{1,2}$ in the mixed ensemble, where we restored the N dependence by replacing $G_5 = \frac{\pi}{2N^2}$ to make the comparison with N = 4 Yang Mills more transparent. Next we move away from the 1/16-BPS black hole by expanding around these values, $(r_*, q_*, a_*, b_*) \to (r_* + \delta r, q_* + \delta q, a_* + \delta a, b_* + \delta b)$, where $(r_*, q_*, a_*, b_*)$ are determined in terms of $j_1$ and $j_2$ through the nonlinear relations imposed for BPS black holes.
Imposing that we work at fixed $(T, \varphi, j_1, j_2)$ then fixes the expansion parameters $\delta r, \delta q, \delta a, \delta b$ in terms of T and $\varphi$, using the expressions given in section 3.1. We carry this out in detail in Appendix A. It is convenient to write all formulas in terms of $a_*$ and $b_*$, understanding that they should be thought of as functions of $j_1$ and $j_2$.
The final answer for the action in the low-temperature expansion in the mixed ensemble takes the form dictated by the N = 2 Schwarzian theory, where we defined the BPS Bekenstein-Hawking entropy $S_*$ and R-charge $R_*$ by the expressions in (3.75), and, as in (2.1), we defined the parameter $M_{SU(1,1|1)}$ explicitly in (3.76).
Other evidence that the dynamics in the near-extremal limit is controlled by the breaking of SU(1,1|1) was given in [38], which also identified (3.76) as the relevant energy scale. Exponentiating the low-temperature action yields the semiclassical partition function, (3.77), where for now we ignore the one-loop corrections. This is precisely the classical partition function computed by the N = 2 Schwarzian action, as given in (2.14). In this identification $S_*$ corresponds to the topological term $S_0$ in the Schwarzian theory. The only difference is the prefactor $e^{2\pi i\alpha R_*}$. Since for the theory under consideration the R-charge $R_*$ is an integer, that phase is not affected by the sum over saddles and can be pulled in front of the sum in (3.77).
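Explicitly (a one-line restatement of the argument, using only that $R_* \in \mathbb{Z}$ and that the saddles shift $\alpha \to \alpha + n$):
\[
e^{2\pi i(\alpha+n)R_*} = e^{2\pi i\alpha R_*}\, e^{2\pi i n R_*} = e^{2\pi i\alpha R_*}\,,\qquad n \in \mathbb{Z}\,,
\]
so the phase is common to every term and factors out of the sum in (3.77).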
This term comes from the fact that there is a mismatch between R (the R-charge in the UV) and the Schwarzian U(1) R-charge Z, given by $Z = R - R_*$. This is analogous to how, in section 2.1, the grand-canonical ensemble was dominated by some charge $Q_*$, around which excitations were captured by a U(1) mode whose contribution once again consisted of a sum over saddles (2.4).
Additionally, we observe that the R-charge of the supercharge is the same as the fundamental charge, so we have r = 1 compared to (2.14). Finally, there are no Schwarzian θ terms arising from our UV theory which would have appeared through an imaginary contribution to the action, iθn for each saddle in (3.77). Such a topological term might have been present due to the Chern-Simons term in the 5d action (3.1), but turns out to be absent here.
In the general case, the equations above depend non-trivially on the angular momenta $j_{1,2}$ through the complicated dependence on $a_*$ and $b_*$. This dependence simplifies in the limit where, for example, $a_* \to 1$ and $b_* \to 1$. In this limit $J_{1,2} \equiv j_{1,2}/N^2 \to \infty$, and the expressions for $S_*$, $R_*$ and $M_{SU(1,1|1)}$ reduce to simple scalings. Since in this limit $R_* \sim N^2\, O(J^{2/3})$, we see that in defining $j_{1,2}$ the contribution from $R_*$ is subleading compared with $J_{1,2}$. Therefore in this limit we can also identify $J_{1,2}$ with the angular momentum along $S^3 \subset$ AdS$_5$ directly.
The black hole spectrum including quantum corrections
The classical analysis above determined that the relevant effective theory describing the near-BPS limit is the N = 2 super-Schwarzian. Combining the classical analysis with the quantum corrections from the Schwarzian mode gives the prediction (3.80) for the mixed ensemble partition function from gravity. As explained in section 2.1, if we work in an ensemble where we fix all charges except the one that couples to the Schwarzian mode, the one-loop determinant coming from all additional fields is a constant proportional to $N^\#$. These corrections were recently studied in the context of AdS$_5$ black holes in, for example, [97]. They appear in the exponential as log N corrections. They will not be the focus of our discussion, and therefore in the equation above we absorbed them into $S_*$ itself. Following section 2.3, the quantum-corrected partition function can also be written as a trace over the spectrum, (3.81). In that equation we introduce the variables $E_{\rm Sch}$ and $Z_{\rm Sch}$ when performing the Laplace transform. These are to be interpreted as the energy and charge, respectively, of the effective N = 2 Schwarzian mode arising in the IR.
We can relate the Schwarzian charge and energy defined above to N = 4 Yang Mills data in a simple way. First, taking into account the $e^{2\pi i\alpha R_*}$ prefactor in (3.81), and remembering from the discussion in section 3.3 that α couples to R, we see the N = 4 Yang Mills R-charge is shifted by $R_*$ with respect to the Schwarzian charge, $R = R_* + Z_{\rm Sch}$. Second, we observe that the temperature in (3.81) couples to $E_0 + E_{\rm Sch}$. We can use $E = E_0 + \Delta$ to relate the Schwarzian energy to the N = 4 Yang Mills scaling dimension. It is useful to first define $\Delta_{\rm BPS}$, the scaling dimension of the 1/16-BPS extremal states. It is also useful to define the scaling dimension obtained from imposing the supersymmetric constraints, $\Delta_{\rm SUSY}(R)$, with $\Delta_{\rm SUSY}(R_*) = \Delta_{\rm BPS}$. Combining these definitions with the way the temperature enters (3.81) allows us to find ∆ in terms of $E_{\rm Sch}$. Using this identification, we can interpret both lines of (3.81). The first line counts the 1/16-BPS states, whose number is, as usually assumed, given by the exponential of the BPS entropy, up to an overall sign depending on whether $R_*$ is even or odd. Consequently, if $R_*$ is even we find that the BPS states are bosonic, while when $R_*$ is odd we find that the BPS states are fermionic. In either case, there is no cancellation (at least to this order in N) in the superconformal index, and thus the matching between the index and the entropy of BPS black holes is accurate. The change in sign of the index as $R_*$ goes from even to odd was also observed explicitly in the boundary theory, in the calculations of [15].
Going beyond the BPS states, the second line of (3.81) gives the spectrum of non-supersymmetric black holes. These can be extremal or not. For example, when $R \neq R_*$ the spectrum contains extremal black hole states, with scaling dimension $\Delta_{\rm extremal}(R)$, that have zero temperature but are not supersymmetric; only when $R = R_*$ do the two notions match, $\Delta_{\rm extremal}(R = R_*) = \Delta_{\rm BPS}$. Here the minimum is taken between the two supermultiplets containing states with charge R. The degeneracy of these black holes predicted by (3.81) goes to zero instead of being exponential in $N^2$.18 The result for the energy of the extremal states should be compared to the naive classical answer, which is given in an $(R - R_*)/R_*$ expansion in (3.89). Interestingly, for instance when $R \geq R_*$, the quantum-corrected extremal energy in (3.88) is lower than the naive classical value in (3.89).19 Having a quantum-corrected ground-state energy below the extremality bound has a possible interpretation in the context of the weak gravity conjecture [98]. While usually such corrections are seen to come from higher-derivative terms in the action20 (which we will in fact discuss in section 3.7), in this case the correction comes from the temperature dependence of the one-loop determinant in the gravitational path integral. We summarize these results about the spectrum of extremal black hole states in figure 1(a).
With the expression above we can derive the result quoted in the introduction regarding the spectrum of black hole states in N = 4 Yang Mills with charge $R = R_*$. These come from the 1/16-BPS states counted by the first line of (3.81), but also from supermultiplets in the second line with maximal R-charge $R_* + 1$. The latter states appear in a range of scaling dimensions bounded from below, and from this we can determine the gap between the 1/16-BPS black hole and the lowest black hole state with charge $R_*$, which we denote by $\Delta_{\rm gap}$. Using (3.76), the quantum corrections at low energies in the gravitational path integral thus give an explicit formula for $\Delta_{\rm gap}$, where $a_*, b_*$ are functions of $J_{1,2} \equiv j_{1,2}/N^2$ given above. Moreover, the density of states at charge $R_*$ above this gap is given by (3.92), where $S_*$, $\Delta_{\rm gap}$ and $\Delta_{\rm BPS}$ are the functions of $j_1$ and $j_2$ given above. A similar formula, (3.93), can be written for states with $R \neq R_*$, although the delta function is not present for those states; there we have suppressed the dependence of all parameters on $j_1$ and $j_2$. For both cases, we plot the results in figures 1(b) and (c). As in section 2.1, the continuum part of the density of states in (3.92) and (3.93) receives perturbative and non-perturbative corrections in N. We expect that in the continuum region these corrections could lead to non-perturbative gaps in the spectrum, which we expect to be exponentially small in N.
We can also compare the quantum corrected entropies and energies at fixed R with corresponding semiclassical results. We show this in figure 2.
As a final comment, it is reasonable to ask whether there are other BPS states with general charge R, not necessarily equal to $R_*$. There is evidence that there are BPS black hole solutions with generic charges [100,101]. These are found by incorporating scalar hair, and can be interpreted as a black hole with a condensate of BPS particles outside the horizon. The entropy of these solutions is expected to be subleading, since they do not dominate the superconformal index, but this deserves further study. (In the next section, with $L_{{\rm AdS}_5} = 1$ and $\ell_{\rm string} = \sqrt{\alpha'}$, we have $\alpha' = \lambda^{-1/2}$.)
Type IIB string corrections to the Schwarzian result
In the preceding sections, we have analyzed the Euclidean gravity partition function of black holes only in the limit $N \to \infty$, $\lambda \to \infty$. In the bulk, this is the limit of classical supergravity, with all higher-derivative terms due to string corrections and loop corrections suppressed. To our knowledge, this is the only limit in which the rotating black hole solutions such as those in section 3.1 are known. The spectrum of these black holes in this limit is dual to the spectrum of N = 4 SYM at large N and strong coupling. However, at first order in the string ($\alpha'$) expansion, we can actually evaluate the first correction to the Schwarzian result using only the leading gravity solution and knowledge of the corrected 10-dimensional action.
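For reference, the dictionary we use here is the standard one (stated for orientation; conventions as above): with $L_{{\rm AdS}_5} = 1$,
\[
\lambda = \frac{L_{{\rm AdS}_5}^4}{\alpha'^2} \;\Rightarrow\; \alpha' = \lambda^{-1/2}\,,
\]
so the leading eight-derivative corrections, which enter at order $\alpha'^3$, correct the on-shell action at order $\lambda^{-3/2}$.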
In the 10D uplift of the black hole solution (3.53), the only IIB fields turned on are the metric and the five-form; therefore we may use the $\alpha'$-corrected action, in which, in a particular scheme, the eight-derivative term W can be written in terms of the Weyl tensor C and the five-form. In [102], an action of this form was used to compute corrections to the free energy of the near-extremal D3-brane, but the full supersymmetric completion of W was not known. Following earlier work of [103,104], [105] found W to be a sum of 20 monomials built from the Weyl tensor $C_{MNPQ}$ and a certain polynomial of derivatives of $F_5$ denoted $T_{MNPQRS}$. In the application to AdS$_5$ black holes, [106] carefully computed the effective action and its evaluation on the AdS$_5$ black hole. In perturbation theory, the shift in the on-shell action due to the new terms comes entirely from evaluating the leading solution on the perturbation [107], and [106] found the final result in the case of equal rotation to be relatively simple.21 Evaluating this correction in the near-BPS limit in terms of N = 4 Yang Mills data, we find a correction to $M_{SU(1,1|1)} \sim N^{-2}$, the coupling found above in (3.76) evaluated at $b_* = a_*$. From this expression it is easy to find the correction to the gap, $\delta\Delta_{\rm gap} = \Delta_{\rm gap}\, \delta M_{SU(1,1|1)}/M_{SU(1,1|1)}$. To write a more explicit expression, we can take the limit of small $1 - a_*$. In particular, we find that the gap above the supersymmetric bound in figure 1(a) becomes smaller due to $\alpha'$ corrections, and the extremal states deviate even further from their naive classical value.
As mentioned in the previous section, stringy corrections to the extremality bound, in particular their sign (which in this case proved to be negative) are typically important in the context of the weak gravity conjecture program (once again, see [99] for a review).
To get a sense of the importance of stringy corrections, it is useful to compare their magnitude to that of the quantum corrections discussed in section 3.6 when it comes to the energy of the extremal black hole states. Writing the corresponding correction for $R > R_*$,22 we see that both the quantum corrections and the stringy corrections push the extremal black hole states below the classical extremality bound. Nevertheless, when it comes to the energy at extremality, the stringy corrections can become more important than the Schwarzian quantum corrections. This can occur when we scale $R \to \infty$ while keeping $R - R_* \gg 1$ with $(R - R_*)/R_* \to 0$.

21 The final expression given in equation 16 of [106] v2 is off by a factor of $N^{-4}$. We thank J. Melo and J. Santos for confirming it.
22 There is a similar expression for $R < R_*$ which leads to similar conclusions.
Thus, the contributions of the Schwarzian quantum corrections and the stringy corrections to the extremal energy can exchange dominance when $R - R_* \sim \lambda^{3/2}$.
4 Further examples: black holes in N = (2,2) supergravity in AdS$_3$

In this section we consider a simpler theory with properties similar to those of the one studied above. We consider (2,2) supergravity in an asymptotically AdS$_3$ spacetime. Regardless of the matter content, it has rotating and charged black hole solutions which at low temperatures, in a certain charge sector, also present an emergent SU(1,1|1) symmetry in the IR. We will show this using the Virasoro symmetry, following the approach of [108,29].
We will focus on the contribution from supergravity only. The reason is that the pure gravity answer is universal and an approximation of the full spectrum for holographic theories, either at large central charge [109] or in the large angular momentum sector [110]. We will focus mostly on the role that integer spectral flow has on the spectrum, using the technical results derived in [111].
States and representations of 2D N = (2, 2) CFT
We consider a theory with a gravity sector in asymptotically AdS 3 given by Chern-Simons theory with group SU (1, 1|1) L × SU (1, 1|1) R , and an asymptotic symmetry algebra given by N = (2, 2) Virasoro symmetry. The left-and right-mover generators contain a bosonic Virasoro algebra generated by stress tensor T L and T R , the complex supercurrent G ± L and G ± R and a U (1) current J L and J R . We denote the charges associated to the U (1) currents by Q L and Q R respectively.
The Virasoro central charges $c_L = c_R = c$ are parametrized by $\hat c \equiv c/3 = 1 + 2b^{-2}$, which defines the parameters $\hat c$ and b.
We will consider this theory when the boundary is a complexified torus with moduli τ and $\bar\tau$, and define $q = e^{2\pi i\tau}$ and $\bar q = e^{-2\pi i\bar\tau}$. We also introduce chemical potentials z and $\bar z$ conjugate to the U(1) charges $Q_L$ and $Q_R$. A state with charge $(Q_L, Q_R)$ then contributes with a weight $y^{Q_L}\bar y^{Q_R}$, where we define $y = e^{2\pi iz}$.
Before analyzing the black hole spectrum in this theory, we first quickly review the different types of representations of the (2,2) Virasoro algebra that can appear. We describe the left-moving case in the NS sector for concreteness. A general representation is labeled by a scaling dimension ∆ and charge Q. Unitarity implies $\Delta > |Q|/2$ and $0 \leq |Q| < \hat c - 1$. The character of this representation is obtained after summing over descendants. When the bound on the dimension is saturated, $\Delta = |Q|/2$ with $0 < |Q| < \hat c$, the representations are BPS and preserve one supercharge; the denominator of their character comes from the preserved supercharge. Finally, we have the vacuum representation with $\Delta = Q = 0$, preserving two supercharges. The denominator of its character comes from the preserved supersymmetry, while the numerator comes from the preserved $SL(2,\mathbb{R})$ symmetry.
This can be extended to other sectors by simple transformations. First of all, we can insert a (−1) F by simply shifting y → −y. We can also go to the Ramond sector by shifting z → z + τ /2.
In the next section, we will be interested in computing the partition function with Ramond boundary conditions such that fermions are periodic along the spatial circle. Since we will study black holes, this circle is not contractible.
Naive Spectrum
We now move to computing the contribution to the black hole partition function from the supergravity sector. We will study it in the RR sector, as explained above. This can be related to the partition function in vacuum AdS 3 by a modular transformation τ → −1/τ and z → z/τ .
After this transformation, the spatial direction becomes time-like and therefore it involves a $(-1)^F$ insertion. On the other hand, time becomes space and the boundary conditions are antiperiodic. Therefore the black hole partition function is given by the vacuum character in the NS sector with a $(-1)^F$ insertion,
\[
Z_{\rm BTZ,RR} = \left| e^{-i\pi\hat c\, z^2/\tau}\, {\rm ch}_{\rm NS}\!\left(-1/\tau,\, z/\tau + 1/2\right)\right|^2 .
\]
The origin of the prefactor is explained in [112]. After replacing the explicit form of this character, we obtain an expression in terms of $q = e^{2\pi i(-1/\tau)}$ and $y = e^{2\pi i(z/\tau)}$. The first term is the exponential of the classical action on the black hole background. The second term is the one-loop determinant around this solution, where we wrote first the graviton contribution, then the U(1) R Chern-Simons one, and finally that from the gravitini. This is exact for pure gravity, but it also reproduces a universal feature of the spectrum at large central charge for holographic theories [109], or at large angular momentum for theories with a twist gap [110].
From this expression for the partition function we can extract the black hole spectrum by expanding it in a sum over the characters introduced above. Explicitly, we define the density of states through the expansion (4.5). The density of states as a function of charge and energy can be extracted from the modular transformation of the vacuum character, (4.6), where, following [113], we defined the modular S-matrix, (4.7). Taking the square of this expression we can compute $\rho_{Q_L,Q_R}(E_L, E_R)$ by comparing with (4.5).
The answer for the fully degenerate states is given by a product of left- and right-moving modular S-matrices, together with the Jacobian that appears when trading the integral over P for an integral over energies. The full answer, (4.8), holds whenever $E_{L/R} > Q^2_{L/R}/(2(\hat c - 1))$ and vanishes otherwise. The energy spectrum is continuous, but this is fine, since we believe that non-perturbative corrections could fix that.
However, the first issue is that the spectrum involves an integral over Q L/R ∈ (−∞, +∞).
This corresponds to the R-symmetry group being $\mathbb{R}$ instead of U(1). This will be resolved in the next section by including additional saddles we have so far ignored; this is the problem that [111] addresses. A second issue is the contribution from the BPS states. We see that it involves a sum over integer spectral-flowed characters, and it is not clear how to interpret that sector of the spectrum. The sum over saddles in the next section will correct both problems.
Before explaining the resolution of these issues, we take the near-extremal limit of the continuum part and show that it matches the N = 2 Schwarzian answer. For theories with large $\hat c$, at large $E_R$ and fixed $E_L \sim O(1/\hat c)$, the density of states reduces to the Schwarzian form, where we defined
\[
S_0 = 2\pi\sqrt{\frac{\hat c - 1}{2}\left(E_R - \frac{Q_R^2}{2(\hat c - 1)}\right)}\qquad\text{and}\qquad M_{SU(1,1|1)} = \frac{4}{\hat c - 1}
\]
to make a direct comparison with the N = 2 JT gravity answer. Notice that, given the standard 2D CFT conventions, the N = 2 supermultiplet has charge assignment $(Q + 1/2) \oplus (Q - 1/2)$, and therefore a shift of the charge, $Z_{\rm Sch} = Q_{CFT} + 1/2$, is needed in order to match the Schwarzian answer.
Corrected Spectrum
As emphasized in [111], this cannot be the whole story, since the spectrum derived from this partition function has a continuum of charges. From the bulk perspective, we need to sum over saddles that recover the discreteness of the charge. From the boundary perspective, we need to include the integer spectral flow generator as part of the algebra.
We will begin by describing the boundary perspective. The spectral flow generators $U_\eta$ are defined more generally, in terms of a real parameter η, by the following transformation of the currents:
\[
U_\eta^{-1} L_n U_\eta = L_n + \eta J_n + \frac{\hat c}{2}\,\eta^2\,\delta_{n,0}\,. \tag{4.10}
\]
The extended algebra we consider includes, besides the stress tensor, the U(1) current and the supercurrents, the integer spectral flow generator $U_{\pm 1/r}$ for integer $1/r \in \mathbb{Z}$. The reason for parametrizing this integer by r is to make a connection later with the spectrum described in section 2.3. It was shown in [111] that the modular properties of representations of this extended algebra are only consistent when the central charge takes the fractional form $\hat c = 1 + 2kr$, where k is a positive integer that can be chosen independently of the integer 1/r. We will see below that when this generator is included in the algebra, the spectrum of charges is fractional, $Q \in r\cdot\mathbb{Z}$. Before showing that, we present the new set of extended characters, written in the NS sector for concreteness. In each, the first term is the exponential of the classical action, while the second term is the one-loop determinant in the new saddles.
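For completeness, the companion transformation of the U(1) current in the same conventions (a standard N = 2 spectral flow relation, with our normalization $\hat c = c/3$):
\[
U_\eta^{-1} J_n U_\eta = J_n + \hat c\,\eta\,\delta_{n,0}\,,
\]
so flowing by $\eta = \pm 1/r$ shifts the charge by $\hat c/r = 1/r + 2k$, which one can check lies in the fractional charge lattice $Q \in r\cdot\mathbb{Z}$ described below.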
As mentioned above, it was shown in [111] that the extended characters have consistent modular transformations when the central charge is fractional. For the vacuum representation case that we need, their result, (4.18), expresses the transformed character as an integral of the non-BPS characters against the same function S(P, Q) defined above, plus a sum of BPS characters $\chi_R(Q; \tau, z)$ over $Q \in r\cdot\mathbb{Z}$ weighted by factors $2r\sin(\pi Q)$. Now we see that both problems are resolved.
The charge spectrum is discrete, with fractional unit of charge r, and the modular transformation involves the same representations that are allowed by the extended algebra. The overall factor of r on the right-hand side is required to reproduce (4.6) in the r → 0 limit.
Having the modular properties of the vacuum character of the extended algebra, we can extract the improved spectrum at finite fractional charge r. First of all, it is easy to see that, other than the fact that the charge is now discrete, the continuum density of states $\rho_{Q_L,Q_R}(E_L, E_R)$ is exactly the same as in the previous case, (4.8), up to an extra factor of r from (4.18); the final answer holds whenever $E_{L/R} > Q^2_{L/R}/(2(\hat c - 1))$ and $Q_{L/R} \in r\cdot\mathbb{Z}$, and vanishes otherwise. In the near-extremal limit, at large $\hat c$, this is exactly the same as the N = 2 Schwarzian answer with coupling $M_{SU(1,1|1)} = 2/(rk)$, for large k. Moreover, the answer is still universal, either for holographic theories at large $\hat c$ [109] or for any theory with a twist gap at large angular momentum [110]. We can also extract the density of BPS states, with either $E_R = 0$ or $E_L = 0$, from the black hole spectrum; the two cases are completely analogous. In the near-extremal limit with large $E_R$, we reproduce the Schwarzian theory answer
\[
\rho_{Q_L,Q_R}(E_R) \approx e^{S_0}\, 2r\sin(\pi Q_L)\,,
\]
where $S_0$ is the same function of $E_R$ and $Q_R$ defined in the previous section. Finally, we can look at fully BPS states with $E_L = E_R = 0$. The corresponding answer for pure N = (2,2) gravity lies beyond the near-extremal regime, and is therefore not universal for N = (2,2) CFTs, either in the regime considered by [109] or in that of [110].
The elliptic genus of these theories vanishes. This can be resolved by defining a refined elliptic genus analogous to [34], assuming an exact $\mathbb{Z}_{1/r}$ symmetry. The situation simplifies when $r = 1$, since in that case BPS states come only with zero charge and the usual elliptic genus no longer vanishes.
Finally, even though this discussion has been carried out for the case of N = (2,2) supersymmetry, it can be generalized to N = (0,2) using the results of [108], or to N = (4,2) using [29].
Discussion and future work
In this paper, we have argued that the spectrum of 1/16-BPS black holes and their near-BPS excitations is captured by an N = 2 super-Schwarzian theory. The same reasoning should apply to black holes in AdS5 × M5 for Sasaki-Einstein manifolds M5 [114,115]; however, in contrast to the case discussed in 3.1, the bulk does not have an SO(6) R-symmetry gauge field with the three Cartans whose eigenvalues we labeled by $R_{1,2,3}$, but instead generically has a single U(1) R-symmetry gauge field. Since in sections 3.3-3.5 we have solely focused on the case $R_{1,2,3} = R$, the analysis of the isometry of the near-horizon region, as well as that of the low-temperature expansion of the on-shell action, should similarly hold for theories in AdS5 × M5. Consequently, we expect that the spectrum of near-BPS black holes is also controlled by the N = 2 super-Schwarzian theory. Nevertheless, a detailed check of the correct value of $r$ and the $\theta$-angle needs to be performed in order to be certain that the spectrum is described by precisely the same super-Schwarzian theory for all Sasaki-Einstein spaces.
Non-perturbative corrections
In this paper we have focused on the leading quantum corrections around the leading near-BPS black hole geometry in AdS5 × S5 with a nearly-AdS2 throat. When computing quantities not protected by supersymmetry, such as the partition function with all chemical potentials turned on, a variety of non-perturbative geometries can contribute, including, for example, spacetime wormholes.
The situation from the perspective of the gravitational path integral is different when computing protected quantities, such as the index [76], or even some quantities that are not protected by supersymmetry, such as the zero-temperature partition function, which yields the degeneracy of 1/16-BPS states. In that case, we expect a reduced number of geometries to contribute, namely those that preserve supersymmetry. These have been considered in [76] in the case of throats with emergent PSU(1,1|2) symmetry (in contrast to the SU(1,1|1) near-horizon geometry considered in this paper), where it was shown that quantum corrections are described by a deformation of the N = 4 super-Schwarzian theory. This result is consistent with examples of black holes in type IIA, where the exact index was shown to be reproduced by a sum over these orbifolds [116].
Nevertheless, similar results should hold for the black holes discussed in this paper, due to similarities between the effective N = 2 JT gravity theory found in this paper and the N = 4 JT gravity analyzed in [76]. Both theories have vacuum BPS states, all with the same spin-statistics, that are separated by a gap from the lightest near-BPS states. In both cases, even though the boundary conditions are supersymmetric for α = 1/2, the only geometry which preserves supersymmetry in the bulk is the one with n = 0 (for the case discussed in this paper, see (3.80)). A calculation similar to that in [76] shows that, from the perspective of the near-horizon geometry, the only other geometries preserving supersymmetry in N = 2 JT gravity are particular orbifolds of AdS2 (these were considered previously in bosonic gravity in [117][118][119]). In particular, the contribution of higher-genus geometries or, more broadly, any near-horizon geometry that involves a spacetime wormhole can be seen to vanish in N = 2 JT gravity. Consequently, in the full gravitational path integral only orbifolds of AdS2 that can be uplifted to smooth AdS2 × S3 × S5 geometries contribute.
Have these geometries already been observed from the boundary side? When expanding the N = 4 Yang-Mills superconformal index at large N, while the leading contribution is given by the black hole considered in section 3.1, there are subleading contributions which correspond to supersymmetric orbifolds of the black hole geometry [23]. We leave a detailed comparison of these contributions, including the effect of the N = 2 super-JT one-loop determinant, for future work. We should, however, stress that as opposed to the case in [116], which has a PSU(1,1|2) near-horizon isometry, for the superconformal index in N = 4 Yang-Mills there are contributions at large N which cannot be interpreted as orbifolds of the black hole [23], such as contributions from wrapped D-branes or from black hole solutions which are as yet unknown. It would be interesting to investigate whether these non-perturbative corrections can also be analyzed in the context of the near-BPS limit. In such a case, the angle of the defect, the rotation on S3, and that on S5 are all related in order for supersymmetry to be preserved and for the spacetime to be smooth. These geometries were also explicitly seen in [23] as subleading corrections to the superconformal index.
Beyond computing the index, the effect of geometries of higher topology, or with a larger number of defects in the near-horizon region (from higher-dimensional generalizations of Seifert geometries [118]), on the N = 2 super-JT path integral shows that a gap is present in the spectrum associated to each specific geometry. (The gap was also found to persist when accounting for other topologies in the N = 4 super-JT path integral [76].) While such geometries are off-shell (for example, the equation of motion for the dilaton cannot be satisfied on them [91]), they can nevertheless be systematically accounted for using the sum over topologies in the N = 2 super-JT path integral. Thus, when summing over all geometries, we expect that they might yield non-perturbative corrections (in N²) to the value of the gap, but will not affect its existence by inserting additional states between the extremal BPS state and the lightest near-BPS black hole state coming from the original black hole saddle.
One can additionally discuss non-perturbative effects coming from different (possibly supersymmetric) black hole solutions which have not yet been understood analytically. For instance, as previously mentioned, [100,101] found numerical evidence for black hole solutions that support scalar hair and can be supersymmetric even away from $R_{1,2,3} = R$. Such black holes would yield corrections to the spectrum found in figure 1 that would, once again, be non-perturbatively suppressed in N². Nevertheless, one might hope to analyze such solutions at low temperatures in Euclidean signature, where we expect there to still be an AdS2 × S3 × S5 near-horizon region. If at extremality the near-horizon super-isometry is still SU(1,1|1), we expect the temperature dependence of quantum corrections around these new saddles to still be captured by the N = 2 super-Schwarzian. Thus, we expect that even if sectors away from $R_{1,2,3} = R$ are populated by new BPS solutions, energy gaps should still be present within each charge sector; moreover, even if the degeneracy of the 1/16-BPS states that we have found in the $R_{1,2,3} = R$ sector is affected by such non-perturbative corrections coming from hairy black holes, the gap should be unaffected within that charge sector.
Thus, this (albeit incomplete) accounting of possible gravitational non-perturbative effects, all of which preserve the existence of the mass gap, along with the fact that the leading-order stringy corrections also do not affect its presence, prompts us to conjecture that the gap persists within each large-charge sector of N = 4 Yang-Mills.
An effective field theory for near-BPS states from the boundary side
We derived the spectrum of nearly 1/16-BPS black hole states from a bulk AdS5 × S5 calculation. Due to AdS/CFT, this makes a prediction for the spectrum of N = 4 Yang-Mills, raising the question of how to derive the same spectrum from an independent boundary CFT argument.
This would be an extremely non-trivial check of holography that would help us better understand quantum aspects of gravity in these black hole backgrounds. At weak 't Hooft coupling, some of the states with the black hole quantum numbers were constructed in [74,75] in the limit of large spin, $J/N^2 \gg 1$, and some interaction effects were considered. We leave for future work identifying the most relevant interactions in this limit, and deriving the emergence of a softly broken SU(1,1|1) symmetry. This would give the first example of a quantum theory describing a local nearly-AdS2 background, as opposed to SYK, which describes a highly non-local bulk.

A Details on the low temperature expansion

In this section we explain in more detail how to expand the action
$$I_{ME}(\beta, j_1, j_2, \alpha) = I_{GCE}(q_1 = 0,\, q_2 = 0,\, j_1, j_2, \alpha) + 2\pi i \omega_1 j_1 + 2\pi i \omega_2 j_2 \qquad (A.1)$$
around its BPS limit [38]. We start by introducing four expansion parameters $(\epsilon_r, \epsilon_q, \epsilon_a, \epsilon_b)$ such that
$$r_+^2 = r_*^2 + \epsilon_r, \qquad q = q_* + \epsilon_q, \qquad a = a_* + \epsilon_a, \qquad b = b_* + \epsilon_b, \qquad (A.2)$$
where the parameters $(a_*, b_*)$ are defined through the BPS configuration of fixed charges,
$$j_1(a_*, b_*) = \frac{\pi\,(a_* + b_*)(a_* + 1)(b_* + 1)}{4 G_5\,(a_* - 1)^2 (1 - b_*)}, \qquad j_2(a_*, b_*) = \frac{\pi\,(a_* + b_*)(a_* + 1)(b_* + 1)}{4 G_5\,(1 - a_*)(b_* - 1)^2}.$$
We want to perform the expansion in such a way that the variables $(T, \varphi, j_1, j_2)$ are fixed. This imposes the following four relations, since the fixed parameters in the PCE ensemble are
$$\varphi = \varphi(r_+, q, a, b), \quad T = T(r_+, q, a, b), \quad j_1(a_*, b_*) = j_1(r_+, q, a, b), \quad j_2(a_*, b_*) = j_2(r_+, q, a, b).$$
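To make the logic of the expansion concrete, the following is a schematic linearization of these four relations; it is a minimal sketch in the notation above, with all partial derivatives evaluated at the BPS point, and is not the full expansion carried out in the original computation:

```latex
% Schematic first-order form of the four fixed-(T, \varphi, j_1, j_2) relations.
% Partial derivatives are evaluated at (r_*^2, q_*, a_*, b_*); the epsilons
% are the expansion parameters introduced in (A.2).
\begin{aligned}
0 &= \partial_{r_+^2} j_k \,\epsilon_r + \partial_q j_k \,\epsilon_q
   + \partial_a j_k \,\epsilon_a + \partial_b j_k \,\epsilon_b + O(\epsilon^2),
   \qquad k = 1, 2, \\[4pt]
T &= \partial_{r_+^2} T \,\epsilon_r + \partial_q T \,\epsilon_q
   + \partial_a T \,\epsilon_a + \partial_b T \,\epsilon_b + O(\epsilon^2), \\[4pt]
\varphi - \varphi_{\rm BPS} &= \partial_{r_+^2} \varphi \,\epsilon_r
   + \partial_q \varphi \,\epsilon_q + \partial_a \varphi \,\epsilon_a
   + \partial_b \varphi \,\epsilon_b + O(\epsilon^2).
\end{aligned}
```

Solving these four linear relations expresses $(\epsilon_r, \epsilon_q, \epsilon_a, \epsilon_b)$ in terms of the small temperature $T$ and the potential $\varphi$ at fixed $(j_1, j_2)$, which is what the low-temperature expansion of the on-shell action requires.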
B Details on Killing spinors
In this section we verify explicitly the Killing spinors (3.57) and Killing vectors (3.58)-(3.62) of the near-horizon geometry lifted to ten dimensions (3.53). This was first derived in [33].
Here we review the construction in order to account for differences in conventions and to provide the reader with the necessary formulas.
The 5d tetrads of the near-horizon metric are given in (B.16). The relation between the near-horizon gauge field used here and the one used in [33] is $A_{\rm here} = -\sqrt{3}\, A_{\rm there}$.
In this basis, the electromagnetic fields can be written down explicitly. Our goal is to verify the solutions to the Killing spinor equation. The equation implies four projection conditions on the spinor. Let us now choose a general constant spinor $\epsilon_0 = \epsilon_{0,R} + i\,\epsilon_{0,I}$, subject to all four of the above projection conditions. Such a spinor is labeled by four independent real parameters. We can split it into two chiralities under the projector $P_\pm = \frac{1}{2}\left(1 \pm \Gamma^{09}\right)$, and both chiralities satisfy the Killing spinor equation. Because the above spinors are labeled by four independent real parameters, we conclude that the 10d lift of the near-horizon geometry preserves four supersymmetries. Note that this is in contrast with the full black hole geometry, which preserves only two supersymmetries [85]. This means that there is a supersymmetry enhancement in the near-horizon region.
Knowing the Killing spinors of the near-horizon geometry, we can now find its Killing vectors by computing independent Killing spinor bilinears, expressed in the dual tetrad basis $\tilde e_a$. After choosing a specific normalization for the constant spinors $\epsilon_0^\pm$, the bilinears define four independent Killing vectors $(Z, D, E_+, E_-)$. We are now in a position to fully determine the isometry superalgebra of the near-horizon geometry [87][88][89][90]. A standard prescription is to associate bosonic generators $Q_B(k_i)$ and fermionic generators $Q_F(\epsilon_I)$ to Killing vectors and Killing spinors, respectively; the superalgebra then follows from the brackets of these generators.
"year": 2022,
"sha1": "33b8765b1e1275459fe85da726844c7e0856c605",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4cff8d4174e0110907b82ff7362a9ddf82ae9e94",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Local fiscal multipliers of different government spending categories
This paper compares multipliers of different categories of US federal government spending, and in doing so provides a new insight as to why fiscal multipliers may differ across countries and time. We identify exogenous federal government spending shocks at the state level for defense and non-defense spending. Using a projection-based approach, we estimate the cumulative multiplier due to shocks in either of these spending categories. Our results indicate that defense spending yields lower multipliers than non-defense spending. Thus, focusing only on defense spending may result in underestimating the multiplier for government spending.
Introduction
A central question in macroeconomics is whether the government can effectively stimulate the economy by increasing spending or decreasing taxes. The effectiveness of a short-run spending stimulus can be assessed by estimating the fiscal multiplier, i.e., the additional income generated per unit of additional government spending. However, the existing estimates of the fiscal multiplier vary substantially across studies. These differences are not only due to the empirical approaches used. Economic factors, such as the state of the business cycle, the exchange rate regime, and the openness of a country, also affect the size of the multiplier (Ramey 2019). In this paper, we focus on another factor that may contribute to differences in fiscal multipliers, namely the category (or function) of government spending that dominates a fiscal policy stimulus. Our investigation of the role of government spending categories starts with a simple comparison: defense spending versus non-defense spending. Indeed, defense spending has often been used in the fiscal policy literature due to its acyclical property, allowing researchers to characterize defense spending shocks as events exogenous to the business cycle (Ramey 2019). However, the way defense spending impacts the local economy may be quite indirect, and depends on the reaction of a few defense contractors. Instead, non-defense spending, which includes social transfers, education, health, or infrastructure, is more aligned with the typical counter-cyclical stimulus tool. The mechanisms by which defense and non-defense spending affect the local economy may differ, and we hypothesize that their economic impact might also depend on the category of spending. From a policy perspective, it is important to know whether there is a difference in effectiveness across government spending categories in order to use specific counter-cyclical policy measures effectively to dampen economic slowdowns.
Previous studies have estimated fiscal multipliers for defense and for non-defense spending, either at the national level using time series data, or at the sub-national level using panel data. 1 Regarding defense spending, there are estimates at both the aggregate (Ramey and Zubairy 2018) and the state level (Nakamura and Steinsson 2014; Dupor and Guerrero 2017). Similarly, there are studies investigating the effects of non-defense spending at the aggregate (Alesina et al. 2018) and the state level (Leduc and Wilson 2013; Clemens and Miran 2012; Shoag 2016). Comparing the fiscal multipliers in these different studies points to higher multipliers for non-defense spending at the state level. However, such a gap may also be driven by other differences between the papers. Indeed, no study has attempted to directly compare the multipliers across government spending categories in a unified methodological framework, relying on the same identification method. This is because the identification method often relies on a specific spending grant or event, while a comparison requires devising a method that (i) is applicable in a similar way to defense and non-defense spending and (ii) allows for causal inference in order to quantify fiscal multipliers.
In this paper we contribute to the literature by systematically comparing fiscal multipliers of different government spending categories. To this aim we identify exogenous federal government spending shocks at the state level for defense and non-defense spending. We isolate the exogenous part of federal spending allocated to states by removing the common component of spending across states and the endogenous state-level component. This identification strategy for recovering state-level spending shocks for different categories exploits the cross-sectional variation in federal spending. Specifically, we use the fact that part of federal spending to states is allocated independently of the state-level economic conditions. Our estimated shocks, for both defense and non-defense spending, predict federal spending at the state level more accurately than many existing instruments, which typically relate only to a small (exogenous) component of state-level spending.
Investigating the dynamic effects of the estimated spending shocks, we find that defense spending yields lower income multipliers than non-defense spending. 2 This is in line with previous findings on non-defense spending (state) multipliers (Leduc and Wilson 2013;Clemens and Miran 2012;Shoag 2016). We conclude that by focusing only on defense spending, one may underestimate the government spending multiplier.
The outline of this paper is as follows. Section 2 presents the structure of fiscal spending in the USA, which will be used for our empirical strategy. Section 3 places our research question and approach in the related literature. Section 4 explains the method and data that we use to estimate fiscal multipliers for defense and non-defense spending. Section 5 discusses the baseline results, robustness checks, and extensions. We also test the effect of accounting for spillovers across states and study a further disaggregation of the non-defense spending category. Finally, Sect. 6 concludes the paper.
The structure of government spending in the USA
In a fiscal union or federally organized state, multiple levels of government are engaged in fiscal policy. Typically, there exists a common structure of fiscal flows between the federal government and state governments. These fiscal flows include transfer payments that aim to achieve short-run economic stabilization of a state, or promote long-run convergence between states. Therefore, both the federal and state-level governments are involved in most spending categories. The US federal government is often not actively involved in the implementation of a specific policy, but it finances the policies implemented by lower-level governments through transfers. An exception is defense spending, which usually only takes place at the federal level.
The structure of fiscal flows in the case of the USA is summarized in the non-hierarchical overview in Fig. 1. We distinguish three different fiscal channels. Channel A is the direct federal spending channel, where the federal government interacts with the state-level economy without cooperation with the state government. Instead, in channel B the federal government interacts with the state economy only indirectly, via the state government. Finally, in channel C the state government directly interacts with the state economy, without any involvement of the federal government.
Total government spending affecting the local economy is the sum of channels A, B, and C. Our aim is then to evaluate how different categories of such spending affect the local economy, by estimating the corresponding local fiscal multipliers. To do so, we therefore need to break down total spending by category. Those categories can be related, for example, to government functions (education, health, infrastructure, and defense) or types of spending (consumption, investment, interest payments). Although the latter could make more sense from an economic perspective, only the former disaggregation is allowed given the way the data is constructed. Indeed, education spending could affect the development of human capital, but would be expected to have a rather long-term effect on income, while infrastructure spending, such as a new bridge or highway, may increase the productivity of firms in the area in the short run.

[Fig. 1: Overview of fiscal flows between the federal government, the state government, and the state economy (channels A, B, and C)]

To better complement large parts of the literature, we focus in this paper on defense versus non-defense spending (cf. Barro and Redlick 2011). We classify all non-military spending automatically as non-defense spending. The categories of spending are linked to the structure of fiscal flows too. Indeed, we know that only the federal government is involved in defense spending. The state economy is therefore only affected by defense spending through channel A. On the other hand, non-defense spending does not belong to a specific channel, although channels B and C dominate. 3 Hence, we have to take into account the specific structure of US federal and state-level fiscal flows when analyzing the effect of defense versus non-defense spending on state economies. The disadvantage of using the specific structure of government spending is that we cannot disentangle the effect of different spending channels versus different categories. However, we believe that there is no reason to assume that the channel plays an important role in the local effects of spending. Instead, if we find differences in the estimated multipliers, then these are more likely driven by the different spending categories. In this study, we limit ourselves to federally financed spending at the state level (channels A and B) because it is easier to identify the exogenous spending component for these channels (see our identification strategy).
Local fiscal multipliers in the related literature
The aim of this paper is to estimate and compare the fiscal multipliers of state-level defense versus non-defense spending in the US context. In this section, we first outline crucial empirical challenges when estimating local multipliers and how they have been tackled in the related literature. We then highlight how we complement these approaches. Our method is further discussed in detail in Sect. 4.
Estimating local fiscal multipliers: objectives and challenges
The estimated fiscal multiplier based on state-level or sub-national data is often labeled the 'local fiscal multiplier'. Nakamura and Steinsson (2014) argue that such a local fiscal multiplier should be interpreted as an 'open economy relative multiplier', because federal states can be considered open economies that form a currency union. Therefore, the local fiscal multiplier is different from the closed-economy aggregate multiplier that can be estimated with time series data (Blanchard and Perotti 2002; Barro and Redlick 2011; Ramey 2011; Ramey and Zubairy 2018). Furthermore, positive spillovers and general equilibrium effects (i.e., across states) may contribute to a difference between the local and the aggregate multipliers (Auerbach et al. 2019; Dupor and Guerrero 2017).
Estimating the local fiscal multiplier for different categories of government spending in a fiscal union is difficult because local and federal government spending shocks are partially endogenous to the state-level business cycle for two reasons. First, government spending by the state government is partially endogenous to the state-level business cycle because the state government relies on its own revenues to finance spending. In times of economic slack, revenues decrease and therefore the state has to decrease spending, especially because state governments often face balanced budget requirements (Conti-Brown and Skeel 2012). Second, many federal spending programs intend to provide economic support to states. Hence, the federal government spending stimulus to a state is also endogenous to the state-level business cycle. If we fail to take this into account, then the multiplier estimates are biased upward.
Estimating local fiscal multipliers: possible solutions
To solve the problems highlighted above, it is common to instrument state or federal spending with variables that are uncorrelated with the state-level business cycle. Following Chodorow-Reich (2019), we can decompose state-level spending for a specific category into three components: a common component across states, an endogenous state-level component, and an exogenous (random) state-level component. The typical approach in the related literature is to find a variable that proxies the exogenous state-level component to instrument the endogenous state-level component. The common component can be controlled for using time fixed effects. Previous works can be further separated into those studying defense spending shocks and those focusing on non-defense spending shocks. As we detail in what follows, the instrument chosen is often specific to the chosen spending category and therefore cannot be used to compare fiscal multipliers across categories (i.e., defense versus non-defense spending).
The papers that study defense spending focus on the effect of direct federal government spending on state-level income, according to channel A (see also Fig. 1). Nakamura and Steinsson (2014), Dupor and Guerrero (2017), and Auerbach et al. (2019) construct an instrument to exploit the heterogeneity in the response of local defense contracts to national variation in defense spending. The identifying assumption is that the federal government does not increase national defense spending to give a disproportionate economic stimulus because a particular state is doing economically poorly compared to other states. These studies use the approach by Bartik (1991) to instrument military spending per state with the change in national defense spending, scaled by the ratio of the state's average military spending to its output in previous years. Nakamura and Steinsson (2014) estimate a local fiscal multiplier of 1.5 for a two-year horizon, but Dupor and Guerrero (2017) show that the multiplier is probably lower, depending on the estimation period. Finally, Auerbach et al. (2019) construct a defense contract dataset at the city level for different industries. They confirm the multiplier found by Nakamura and Steinsson (2014) once they sum over positive within-city and rest-of-state spillover effects. Next to these studies, Biolsi (2015) and Hausman (2016) use historical policy examples from the 1930s to illustrate the effect of direct federal spending at the local level without estimating multipliers.
On the other hand, papers that study non-defense spending use specific instruments that relate to spending from the local government, either indirectly funded by the federal state or by states' own resources, which correspond to our channels B and C respectively (see Fig. 1). 4 In the first case, the federal government provides state governments with grants to carry out a particular spending program. The effectiveness of an indirect federal fiscal shock depends on how the shock is transmitted by state governments and whether the grants crowd out spending by state governments. To avoid this, the federal government attaches conditions to grants. In total, 77% of the granted amount involves either matching or maintenance-of-effort requirements in the USA (FFIS 2016). Matching requirements mandate a state government to use its own revenues to match the amount granted by the federal government, while maintenance-of-effort requirements ensure that state governments do not cut spending in the program tied to the grant when they receive federal funds. As a result, empirical studies have found that federal grants do not crowd out state spending, which is the so-called flypaper effect (Gamkar and Oates 1996; Nesbit and Kreft 2009). 5 Many studies that use indirect federal spending focus on exogenous drivers of the grants to state governments, such as specific rules in grant mechanisms (Chodorow-Reich et al. 2012; Wilson 2012; Fishback and Kachanovskaya 2015; Suárez Serrato and Wingender 2016; Chodorow-Reich 2019). These rules are used to instrument grants from the federal to the state government. Alternatively, Leduc and Wilson (2013) do not use an IV setup. Instead, they use the difference between the expected amount of grants and the actual amount received to estimate the indirect federal government spending shock. All fiscal multiplier estimates for this spending channel range between 1.4 and 2.0. 6 Finally, there are papers that use internally financed state government (non-defense) spending (channel C in Fig. 1). 7 In this case, state governments finance the spending with state taxation, deficits, or other funds, such as rainy-day funds, pension funds, or lottery funds. Clemens and Miran (2012) analyze deficit-financed spending by state governments. They use differences in states' balanced-budget requirements and estimate a multiplier of 0.3. Shoag (2016) analyzes an external shock to state government spending by exploiting windfalls in state pension funds and estimates a multiplier close to 2.0.
The specific instruments used in previous studies do not allow for a comparison across government spending categories. In the next section we describe our method, which, similar to previous studies, removes the endogenous component of state-level spending, but does so in a unified methodological framework that is valid for defense as well as non-defense spending. Note, however, that the impacts of defense and non-defense spending shocks are still estimated separately.
Empirical approach
In this section, we discuss the methodology we use to estimate fiscal multipliers for different spending categories. We first present the method used to estimate state-specific federal government spending shocks, and then include these shocks in a dynamic panel data model to estimate impulse responses and construct cumulative multipliers.
Data
We use state-level panel data for 50 US states from 1963 to 2014. In the next paragraphs, we discuss the main variables that we use.
Income
The main dependent variable is personal income at the state level, available in the Regional Economic Accounts from the US Bureau of Economic Analysis (BEA). Personal income sums the income of all residents, including income received from other states. 8 We convert these data into real per capita terms, using the population estimates and deflators from Dupor and Guerrero (2017). In line with Shoag (2016), we use personal income instead of output, due to limitations in data availability. 9 In addition, income closely follows fluctuations in output.
Defense spending
In the case of defense spending, there is no involvement from the state government (only channel A). In order to estimate the effect of defense spending, we use data on direct federal spending on defense contracts from Dupor and Guerrero (2017). 10 The database was constructed using annual reports on defense contracts. 11 The amounts are aggregated to estimate military procurement actions per state. We use the variable in real per capita terms. It captures the total real amount spent on defense procurement by the federal government in a state and year.
There are two possible concerns when using this variable. First, it is possible that contractors rely on subcontractors in other states. However, Nakamura and Steinsson (2014) have shown that controlling for subcontracting does not change their results.
Second, the actual work related to the defense contract could take place in a different year than the announcement (i.e., the signing) of the contract. This time inconsistency could create biased estimates of the effect that government spending has on income. Dupor and Guerrero (2017) have shown that their results are robust to this problem.
Non-defense spending
To estimate the effect of non-defense spending, we use data on intergovernmental transfers from the federal government to the state government (channel B1) 12 and state government spending (channel B2). 13 Both variables are available in the Annual Survey of State and Local Government Finances from the US Census Bureau. The series are transformed into real per capita terms. By focusing on intergovernmental transfers, (indirect) federal non-defense spending can be captured in the most complete sense. Instead, in the related literature, studies often focus on specific grants, such as American Recovery and Reinvestment Act (ARRA) grants (Chodorow-Reich 2019) or the Federal-Aid Highway Program (Leduc and Wilson 2013). The variable intergovernmental transfers consists of all categorical and block grants, awarded on either a competitive or a formula basis. 14 These grants are an important source of revenue for state governments. 15 A possible concern is that although intergovernmental transfers measure revenue for state governments, these are not one-to-one linked with state government spending. 16 The funds from the federal government could crowd out other parts of the state government budget, such as own-revenue financed spending (channel C). However, 77% of the total grant amount from the federal government to state governments contains either matching or maintenance-of-effort requirements (FFIS 2016). Hence, it is unlikely that grants crowd out state spending significantly.
In addition, there is a close link between intergovernmental grants and state spending. Indeed, when evaluating the impact of intergovernmental transfers on state spending below (see the next subsection, Step 2), the F-test indicates strong explanatory power.
Econometric model
Our main goal is to compare fiscal multipliers for defense spending and non-defense spending. To do this, we need to estimate the dynamic responses of both income and the government spending categories to a shock in one of these spending categories. We propose to use a dynamic approach, which directly takes into account the underlying structure of the government spending categories to extract the exogenous component of state-level federal government spending. Instead of relying on instruments, we can remove the common and the endogenous state-level component in the estimation steps. In appendix A, we discuss why using the static IV approach from Nakamura and Steinsson (2014) is not preferable: the constructed instruments are very weak, in particular at longer horizons.
Specifically, the shocks are based on forecast errors for the federal government spending categories at the state level, denoted as $x_{i,t}$, i.e., defense contracts or intergovernmental transfers. By using state-level shocks of federal spending (channels A and B), we avoid the endogeneity problems associated with federal spending at the state level. Since the spending decisions are made at the federal level, we identify 'external shocks' from the federal government for every state. Formally, the shocks $s_{i,t}$ are defined as the one-step-ahead forecast error, i.e., the difference between the realized value and the forecast:
$$s_{i,t+1} = x_{i,t+1} - E[x_{i,t+1} \mid I_{i,t}], \qquad (1)$$
where $E[x_{i,t+1} \mid I_{i,t}]$ is the one-step-ahead forecast, using the available information at the state level. The shocks are unforecastable with the available information set of economic agents, $I_{i,t}$. This information set can be used to extract the exogenous component of $x_{i,t}$ if it contains information on aggregate government spending, to remove the common component, and confounding business cycle variables, to remove the endogenous state-level component. The analysis consists of three steps. First, we obtain government spending shocks for each state from the one-step-ahead out-of-sample forecast error of federal government spending at the state level for each considered spending category. Second, we use local projections to estimate impulse responses for income and each government spending category considered. Third, we construct the cumulative multiplier over an eight-year horizon.
Step 1: estimating the shocks
We start by estimating the shocks for the different government spending categories: defense contracts for defense spending and intergovernmental transfers for non-defense spending. Ideally, we would have used the official forecasts for these variables that are available to the government; this is the approach of Clemens and Miran (2012) and Leduc and Wilson (2013). However, no official state-level forecasts are available for the variables that we use. Therefore, we estimate state-level forecasts with a panel data model, using a rolling 10-year window to obtain an out-of-sample, one-step-ahead forecast.
To obtain the forecast, we use a panel regression for every category of government spending separately, including state fixed effects. We do not include year fixed effects in this regression because the model is used for out-of-sample forecasting, and it should therefore not be optimized for in-sample fit. 17 However, we add independent variables that control for the national business cycle and aggregate shocks. The panel data fixed effects model that we use can be written down as follows:
$$x_{i,t} = \alpha_i + \sum_{l=1}^{L} \beta_l\, x_{i,t-l} + \gamma' V_{i,t-1} + u_{i,t}, \qquad (2)$$
where $x_{i,t}$ is either defense contracts (for defense spending) or intergovernmental transfers (for non-defense spending). We include state fixed effects $\alpha_i$ and two lags of the dependent variable (i.e., $L = 2$). The vector of lagged control variables $V_{i,t-1}$ includes state personal income, state government spending, state government tax revenue, federal government spending, the oil price, and the real interest rate. All variables but the real interest rate have been converted into logs and are expressed in real per capita terms.
Prior to estimation, we de-trend the variables using a state-specific linear trend. 18 Using the out-of-sample prediction of defense (resp. non-defense) spending at time $t$ in state $i$, denoted $\hat x_{i,t}$, we define the prediction error as
$$\hat s_{i,t} = x_{i,t} - \hat x_{i,t}. \qquad (3)$$
We interpret $\hat s_{i,t}$ as the government spending shock at time $t$ in state $i$ for defense and non-defense spending, respectively. 19 Note that the above approach to estimating government spending shocks shares important similarities with the identification strategy in Blanchard and Perotti (2002): we assume that economic conditions (at the state level) have no contemporaneous impact on (exogenous) state-level spending. We formulate the local projection regressions below such that a change in exogenous state-level spending may have instantaneous effects on state-level income/output. Even though we do not estimate the defense and non-defense shocks jointly and do not impose orthogonality between the shocks, the correlation between the shocks is small once we control for year fixed effects.
17 Furthermore, we include time fixed effects in the second step anyway to remove common cycles.
Step 2: estimating the impulse responses
The (estimated) shocks from the first step ($\hat s_{i,t}$) allow us to estimate the effect of federal government spending on state-level income. We use the local projection method by Jordà (2005) to obtain impulse response functions tracing the dynamic local effect of the shocks, without explicitly modeling the dynamic interaction between the variables. Note that this does not allow for dynamic interaction between states, and therefore we cannot investigate general equilibrium effects as in, for example, Auerbach et al. (2019). In Sect. 5.2, we investigate whether the shocks also have cross-sectional spillovers.
We are interested in estimating the effect of the defense contracts shock on defense contracts, and of the non-defense (intergovernmental transfers) shock on state government spending. 20 To that aim, we estimate the following regression:
$$g_{i,t+h} = \zeta_{i,h} + \varphi_{t,h} + \omega_h^{D}\, \hat s_{i,t}^{D} + \omega_h^{ND}\, \hat s_{i,t}^{ND} + \sum_{l=1}^{L} \left( \delta_{l,h}^{D}\, g_{i,t-l}^{D} + \delta_{l,h}^{ND}\, g_{i,t-l}^{ND} \right) + \Gamma_h' W_{i,t-1} + e_{i,t+h}. \qquad (4)$$
For horizon $h$, the government spending variable $g_{i,t}$ (either defense or non-defense spending) is regressed on both (estimated) shocks, the defense spending shock $\hat s_{i,t}^{D}$ and the non-defense spending shock $\hat s_{i,t}^{ND}$. This is done to rule out potential omitted variable bias. For the same reason, all local projection regressions include two lags of both defense and non-defense spending as regressors. The vector of controls, $W_{i,t}$, contains state-level personal income, the state-level unemployment rate, 21 and the state government deficit ratio. 22 Following our identification assumption that state-level economic conditions have no contemporaneous effects on (exogenous) state-level government spending, we only include lagged realizations of the variables in $W_{i,t}$. We set $L = 2$. Finally, state and year fixed effects ($\zeta_{i,h}$ and $\varphi_{t,h}$) are included. The latter remove the common component across states. 23 The local projection regression in Eq. 4 is separately estimated for each spending category and each horizon $h = 0, 1, 2, \ldots, 8$. When the dependent variable is defense contracts, the estimate of $\omega_h^{D}$ is the estimated impulse response of defense spending at horizon $h$ to a defense spending shock. Similarly, when the dependent variable is state government spending, the estimate of $\omega_h^{ND}$ is the estimated impulse response of state government spending at horizon $h$ to a non-defense spending shock. These coefficients should be interpreted as the percentage change in defense contracts or state spending as a result of a one percent increase in either defense contracts or intergovernmental transfers. Alternatively, the significance of the estimated coefficients can be interpreted as a measure of how well the shock explains the variation in the government spending variable at different horizons, which is comparable to a first-stage (weak instrument) F-test in IV. As expected, when we perform a marginal F-test on $\hat\omega_h^{D}$ (resp. $\hat\omega_h^{ND}$), we obtain F-statistics of 114 and 101 at $h = 1$ for defense contracts and intergovernmental transfers, respectively, which are well above the common rule-of-thumb critical value of 10 to reject the null hypothesis of a weak instrument. 24
20 State government spending also includes spending that is financed with non-federal funds (channel C). As a robustness check, we use only federally financed state spending (channel B2). See the robustness test entitled 'NASBO vs. Census data' discussed in Section 5, 'Robustness tests'. 21 The source of the unemployment rate is the Bureau of Labor Statistics (BLS), Local Area Unemployment Statistics (LAUS). These data are available from 1976 until 2014. 22 Since many states have balanced-budget requirements, state government budgets are highly pro-cyclical. We calculate the deficit ratio using US Census Bureau data on state expenditures and revenues. 23 By using year fixed effects, we assume that states react homogeneously to common shocks. It is difficult to control for all aggregate business cycle dynamics and common shocks without time fixed effects. Not including time fixed effects may create upward bias in the response of income. 24 The marginal F-test is separately performed for each horizon on the coefficient for the shock in Eq. 4, using 1790 and 1791 degrees of freedom, respectively.
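The following sketch illustrates how the horizon-by-horizon regressions in Eq. 4 could be run. It is illustrative only: the column names (`shock_D`, `shock_ND`, `defense`, `nondefense`, and the lagged controls) are hypothetical, fixed effects are absorbed through dummy variables, and no standard errors are computed.

```python
# A sketch of the Jorda (2005) local projections in Eq. 4, with two-way
# fixed effects absorbed via dummies; column names are illustrative.
import numpy as np
import pandas as pd

def local_projection(panel, dep, horizons=range(9), lags=2,
                     shocks=("shock_D", "shock_ND"),
                     controls=("income_l1", "unemp_l1", "deficit_l1")):
    """Estimated impulse responses: one coefficient per shock per horizon."""
    irf = {s: [] for s in shocks}
    for h in horizons:
        d = panel.sort_values(["state", "year"]).copy()
        d["lead"] = d.groupby("state")[dep].shift(-h)          # g_{i,t+h}
        for l in range(1, lags + 1):                           # lags of both series
            d[f"def_l{l}"] = d.groupby("state")["defense"].shift(l)
            d[f"nd_l{l}"] = d.groupby("state")["nondefense"].shift(l)
        rhs = list(shocks) + [f"def_l{l}" for l in range(1, lags + 1)] \
              + [f"nd_l{l}" for l in range(1, lags + 1)] + list(controls)
        d = d.dropna(subset=rhs + ["lead"])
        X = pd.concat([pd.Series(1.0, index=d.index, name="const"), d[rhs],
                       pd.get_dummies(d["state"], prefix="s", drop_first=True, dtype=float),
                       pd.get_dummies(d["year"], prefix="t", drop_first=True, dtype=float)],
                      axis=1).to_numpy()
        beta, *_ = np.linalg.lstsq(X, d["lead"].to_numpy(), rcond=None)
        for k, s in enumerate(shocks):
            irf[s].append(beta[1 + k])      # skip the constant column
    return irf
```

The same routine, with income as the dependent variable and the corresponding controls, would produce the responses in Eq. 5.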
By construction, the explanatory power should be high at short horizons, since the shocks contain the unexplained variation in federal spending one step ahead. In fact, the F-statistics for the shocks gradually decrease and drop below the value of 10 after 4 years for defense contracts and after 6 years for intergovernmental transfers.
The position of the impulse response function depends on the scaling (or normalization) of the shocks. We scaled the shocks such that the integral of the estimated impulse response function of government spending, i.e., the sum of the coefficients, equals one. The motivation is that we require total defense and non-defense spending (i.e., state government spending) to equal each other over the horizon that we analyze.
We proceed similarly when estimating the response of income, using the following panel regression for state-level income $y_{i,t}$, with state and year fixed effects ($\eta_{i,h}$ and $\theta_{t,h}$):
$$y_{i,t+h} = \eta_{i,h} + \theta_{t,h} + \rho_h^{D}\, \hat s_{i,t}^{D} + \rho_h^{ND}\, \hat s_{i,t}^{ND} + \sum_{l=0}^{L} \Lambda_{l,h}' W_{i,t-l} + e_{i,t+h}, \qquad (5)$$
where this time $\hat\rho_h^{D}$ and $\hat\rho_h^{ND}$ correspond to the estimated responses of income to a defense and a non-defense spending shock at horizon $h$, respectively. As above, we set $L = 2$. The vector of controls, $W_{i,t}$, contains state government spending, state-level defense spending, the state-level unemployment rate, and the state government deficit ratio. Our identification assumption allows for contemporaneous effects of the variables included in $W_{i,t}$ on state-level income; thus, contemporaneous controls are included as well. 25
Confidence intervals are constructed using a nonparametric bootstrap approach. We implement the moving-block bootstrap proposed in Gonçalves (2011) and Gonçalves and Kaffo (2015) to re-sample both the regressand and the regressors (including the estimated shock) in the local projection regressions (4) and (5). This allows us to take (estimation) uncertainty about the shock into account when constructing inference. Furthermore, the chosen bootstrap approach is particularly suited to our case (macro panel data) due to its robustness to serial and cross-sectional dependence.
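As a rough illustration of the resampling scheme, the sketch below draws overlapping time blocks and keeps all states within each sampled year together, which is the feature that makes the moving-block bootstrap robust to serial and cross-sectional dependence; the block length, function names, and `fit_fn` interface are all hypothetical.

```python
# A rough illustration of the moving-block bootstrap (in the spirit of
# Goncalves 2011); `reg_df` is assumed to be the fully constructed regression
# dataset (leads, lags, shocks, controls), and `fit_fn` any function mapping
# such a panel to the statistics of interest (e.g., the IRF coefficients).
import numpy as np
import pandas as pd

def moving_block_indices(n_periods, block_len, rng):
    """Overlapping moving-block draw of time indices."""
    n_blocks = int(np.ceil(n_periods / block_len))
    starts = rng.integers(0, n_periods - block_len + 1, size=n_blocks)
    idx = np.concatenate([np.arange(s, s + block_len) for s in starts])
    return idx[:n_periods]

def panel_block_bootstrap(reg_df, fit_fn, n_boot=999, block_len=4, seed=0):
    """Resample whole cross-sections (all states within a year) in time blocks,
    keeping regressand and regressors -- including the estimated shocks --
    together, which preserves serial and cross-sectional dependence."""
    rng = np.random.default_rng(seed)
    years = np.sort(reg_df["year"].unique())
    stats = []
    for _ in range(n_boot):
        picked = years[moving_block_indices(len(years), block_len, rng)]
        boot = pd.concat([reg_df[reg_df["year"] == y] for y in picked],
                         ignore_index=True)
        stats.append(fit_fn(boot))
    return np.asarray(stats)   # e.g., np.percentile(stats, [5, 95], axis=0)
```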
Step 3: constructing cumulative multipliers
[Fig. 2: Impulse response functions and bootstrapped 90% confidence intervals, using a state-specific linear trend and state and time fixed effects. The top two panels show the effect of a non-defense shock on state income and on state spending. The bottom two panels show the effect of a defense shock on state income and on defense contracts.]

The final step consists in assessing the overall effect of one unit of government spending on the outcome variable, personal income. We define the cumulative multiplier at horizon $H$ as the ratio of the sum of responses of personal income over $h = 0, 1, 2, \ldots, H$ to the sum of responses of government spending over the same horizon, 26 scaled by the sample income-spending average:
$$M_H = \frac{\sum_{h=0}^{H} \hat\rho_h}{\sum_{h=0}^{H} \hat\omega_h} \cdot \frac{\bar y}{\bar g}. \qquad (6)$$
Note that the coefficient $\omega_h$ (the effect on the government spending variable) is scaled such that it sums up to one after $H$ periods. As a consequence, the cumulative multiplier at horizon $H$ is equal to the normalized sum of the point estimates $\rho_h$, i.e., of the effect of the shock on income.
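In code, the construction of Eq. 6 amounts to a few lines; the sketch below assumes the responses are expressed in log points and that the income-to-spending ratio is supplied by the user (the numerical value in the comment is purely illustrative).

```python
# A sketch of Eq. 6: log-point impulse responses are converted into a dollar
# multiplier with the sample income-to-spending ratio (the example value
# below is illustrative, not taken from the paper).
import numpy as np

def cumulative_multiplier(rho, omega, income_spending_ratio):
    """rho, omega: income and spending responses for h = 0..H; omega is
    normalized so that its elements sum to one at the final horizon."""
    rho = np.asarray(rho, dtype=float)
    omega = np.asarray(omega, dtype=float)
    omega = omega / omega.sum()                  # enforce the normalization
    return np.cumsum(rho) / np.cumsum(omega) * income_spending_ratio

# Example (hypothetical numbers):
# m = cumulative_multiplier(rho_hat, omega_hat, income_spending_ratio=100.0)
```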
Results
Next we present and discuss our results. First, we present the results obtained with the approach described in Sect. 4; then we test their robustness to changes in some of our assumptions. Finally, we analyze potential spillover effects across states.
[Fig. 3: Cumulative multipliers and bootstrapped 90% confidence intervals of a non-defense spending shock (blue dots) and a defense spending shock (red diamonds)]
Baseline results
In this section we report the results of the analysis following the approach discussed in Sect. 4. Figure 2 shows the estimated impulse responses, and 90% confidence intervals in shaded areas, for a shock in non-defense spending (first row) and defense spending (second row), referring to Eqs. 4 and 5. The figure shows both the response of income (left column) and of the government spending variable, i.e., state spending or defense contracts (right column). The plots in the right column show that both shocks have a strong initial positive effect on the spending variable. Over time, this effect slowly decreases. The response for defense contracts shows a stronger initial effect, whereas the response for state spending appears to be more persistent. This is possibly due to the fact that non-defense spending programs are more long-term oriented, and therefore a one-time shock in non-defense spending has a longer-lasting effect on spending. Instead, defense spending is more volatile, and a one-time increase in spending does not have a persistent effect on defense spending at the state level.
The figure, furthermore, shows the response of income to a defense or to a non-defense shock (left column). The response of income to a non-defense shock remains positive up to the 8-year horizon, but is only significant in years 2 and 6. For defense spending, coefficients are also all positive but only turn significant after the third year. Indeed, the defense shock has a small positive effect on income, which accumulates over time. Figure 3 shows estimates of the cumulative multiplier and the 90% bootstrapped confidence intervals as shaded areas, referring to Eq. 6. We find that the multiplier for a non-defense shock starts at 0.5 and increases to 1.1 after 8 years. The multiplier for a defense shock starts around zero on impact and increases toward 0.6 after 8 years.
From year 2 onward, we find a significant difference between the two multipliers. The cumulative multiplier for a non-defense shock is significantly higher than for a defense shock.
Discussion
The lower local fiscal multiplier for defense spending compared to non-defense spending is in line with the existing literature. Several studies confirmed that local fiscal multipliers are higher for non-defense spending (Clemens and Miran 2012;Leduc and Wilson 2013;Shoag 2016) than for defense spending (Nakamura and Steinsson 2014;Dupor and Guerrero 2017). 27 We will discuss three channels that could explain why defense spending results in lower fiscal multipliers. First, firms play an important role in the transmission mechanism of fiscal policy. The characteristics of the firms that receive contracts determine how the economy is affected by defense spending. For example, the connectivity and competition levels of the firm within the industry affect the positive and negative spillovers of such a firm-specific shock. More specifically, when an industry is concentrated around a few large firms, then firm-specific shocks can affect the whole industry. 28 Nakamura and Steinsson (2014) indicate that there are limited positive spillovers from defense contractors to other firms. In contrast, non-defense spending affects a broader set of firms and, hence, may create stronger positive spillover effects within the state-level economy. 29 Second, the effectiveness of government spending relies on how news about a policy change is received. Clemens and Miran (2012) claim that military build-ups increase future tax expectations more than increases in non-defense spending. The strong 'Ricardian' response of consumers could explain why the multiplier for defense spending is lower. If consumers are aware that non-defense spending stimulates productivity and labor supply more than defense spending (Barro and Redlick 2011;Clemens and Miran 2012), they might believe that an increase in defense spending does not stimulate future income enough to finance government borrowings for military build-ups.
Third, it is likely that defense spending has a different effect on the supply side of the economy. When firms in a state receive more defense contracts, this attracts production factors (capital, labor, etc.) from other industries. Since the reallocation of production factors is costly (Ramey and Shapiro 1998), this limits the positive effect of an increase in defense contracts on state-level income. Meanwhile, non-defense spending raises household demand for products and services across a wider range of industries. Since the demand is less concentrated in a specific industry, the reallocation effect is likely less pronounced.
Robustness checks
We investigate the robustness of our results in several ways.
First, we control for political party dominance in states, by using political party dummies (i.e., Republican or Democrat). When a state is dominated by one political party, it may be easier to gain influence at the federal level, especially if that party also controls the federal administration. In that case, the federal government might be willing to spend more in a state that is in need for federal spending. Not controlling for this might create upward bias in the multipliers. We use data on the political dominance of the Democratic Party versus the Republican Party in the state senate, state house, and governorship. We updated the data by Klarner (2013) for the last 4 years using the State Partisan Composition tables from NCSL. 30 Including variables on the political party dominance (of the Democratic party) has no effect on the multiplier (see appendix C).
Second, we investigate whether using fiscal years vs. calendar years has an impact on our results. The government spending variables are reported in fiscal years (July 1-June 30), whereas the economic variables income and unemployment are reported in calendar years (January 1-December 31). Using calendar years instead of fiscal years may create an upward bias if one does not control for business cycles. The estimation results in appendix C confirm this hypothesis: We find slightly lower multipliers when we match all economic variables to fiscal years.
Finally, we check whether the responses to non-defense spending shocks are affected by using an alternative definition of state spending. Instead of the data from the US Census Bureau, we use the data on federally financed state spending from the National Association of State Budget Officers (NASBO). These data correspond exactly to channel B2 in Fig. 1. The estimation results for the sample 1991-2014 are very similar when using the NASBO and Census data (see appendix C), although the multiplier is slightly lower when the NASBO data are used.
To conclude, the results seem to be robust to alternative specifications. Although the exact multiplier estimates can slightly differ across specifications, our main finding that defense spending results in a lower multiplier is robust.
Spillovers across states
So far, we have analyzed the effects of shocks to defense and non-defense spending on the states themselves. However, it is possible that a shock to one state also affects other states. We therefore estimate the cross-state spillover effect in state $i$ from other states by weighting the shocks from all other states, using the bilateral Commodity Flow Survey 2012 data on total shipments between states $i$ and $j$. 31 The weights $w_{i,j}$ from state $j$ to state $i$ are constructed such that they sum up to 1 for every state $i$. The 'partner shock' $p_{i,t}$ (in either defense or non-defense spending 32 ) can be written down as follows:
$$p_{i,t} = \sum_{j \neq i} w_{i,j}\, \hat s_{j,t}. \qquad (7)$$
We cannot simply analyze the spillover effect of a shock in state $j$ on income in state $i$, because it is likely that the shocks (in a specific spending category) are correlated between states $i$ and $j$. To make sure that we can interpret the effect of the partner shock as the cross-state spillover effect, we remove the effect of the own shock by regressing, state by state, the estimate $\hat p_{i,t}$ on the own shock $\hat s_{i,t}$ and using the residual from this regression, which we denote as $\tilde p_{i,t}$. Afterward, we include both filtered defense and non-defense partner shocks $\tilde p_{i,t}$, together with the own shocks $\hat s_{i,t}$, in the local projection regressions as before to estimate impulse responses.
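A compact sketch of the two operations just described (trade-weighted partner shocks and their residualization on the own shock) is given below; the array layout and the `flows` matrix are hypothetical conventions, not the authors' code.

```python
# Sketch of the partner shock in Eq. 7 and its residualization; `flows[i, j]`
# is assumed to hold total shipments from state j to state i.
import numpy as np

def partner_shocks(shocks, flows):
    """shocks: (T, N) array of own shocks; flows: (N, N) shipment matrix.
    Weights from j to i sum to one over j for each i, with w[i, i] = 0."""
    W = flows.astype(float).copy()
    np.fill_diagonal(W, 0.0)
    W = W / W.sum(axis=1, keepdims=True)         # row-normalize the weights
    return shocks @ W.T                           # p[t, i] = sum_j w[i, j] s[t, j]

def residualize_on_own(partner, own):
    """State-by-state regression of the partner shock on the own shock;
    the residual is the filtered partner shock used in the projections."""
    out = np.empty_like(partner)
    for i in range(partner.shape[1]):
        X = np.column_stack([np.ones(len(own)), own[:, i]])
        beta, *_ = np.linalg.lstsq(X, partner[:, i], rcond=None)
        out[:, i] = partner[:, i] - X @ beta
    return out
```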
The results are shown in Fig. 4. The two figures on the left show the response of income to the partner shock. We find that non-defense spending clearly results in significant positive spillover effects all throughout the 8-year horizon. However, for defense spending, we find a significant negative impact from year 2 onward. These results show that beyond the spillover effects within a state that are found in Auerbach et al. (2019), there are also significant cross-state spillover effects. This could indicate that for non-defense spending the positive demand effects outweigh negative supply effects. However, the opposite holds for defense contracts. An increase in production by contractors in other states seems to increase competition for production factors in state i, which cancels out the positive demand effect.
The two right-hand side figures show the effect of a partner shock on state spending and on defense contracts. On the one hand, we find that state spending increases after one year. Since state government spending is largely pro-cyclical, it increases when there is an increase in income due to the partner shock. On the other hand, we mostly do not see a significant effect on defense contracts in state i when other states experience an increase in defense contracts, which is in line with our intuition. (It is only slightly significant and positive in years 2, 3, and 7.)
Conclusion
In this paper we estimate local fiscal multipliers for different categories (or types) of government spending in the USA. We focus in particular on the difference between defense and non-defense spending. Using a dynamic approach, we isolate the part of federal defense or non-defense spending allocated to state governments that is exogenous to the state-specific economic conditions. We estimate this exogenous component of federal spending at the state level by removing the common component of spending across states, and the endogenous state-level spending component. The estimated shocks are included in a dynamic model, which avoids the weak instrument problem in instrumental variables (IV) analysis. We use US data on state-level defense contracts and federal intergovernmental transfers to state governments to estimate (state-level) shocks to federal defense and non-defense spending and their implied (local) multipliers.
We find that non-defense spending multipliers are higher than those for defense spending. This finding is robust across different model specifications, controls, constructions of the measure we use to estimate the shocks, and other factors. Moreover, beyond the within-state spillover effects found by Auerbach et al. (2019), we find significant cross-state spillovers as well, which are particularly strong for non-defense spending.
Our point is not that the federal government should substitute defense spending with more effective non-defense spending, since defense spending does not directly aim at stimulating the economy in the short to medium run. However, our findings may imply that multipliers estimated based on defense expenditures understate the effectiveness of other types of spending policies. Policymakers should therefore be more confident in using these tools to stimulate the economy, at least at the local level. Yet, this paper only scratches the surface, and a deeper understanding of how different types of spending affect the economy is crucial for more effective policy design.
Declarations
Conflicts of interest All authors declare that they have no conflict of interest.
Human and animal rights This article does not contain any studies with human participants or animals performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A The static IV approach

(Table notes: robust and clustered SEs in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01.)

In the first stage, either defense contracts (for defense spending) or state government spending (for non-defense spending) is instrumented using the instrument z_{i,t} and state and year fixed effects (μ_i and ν_t):

X_{i,t} = δ z_{i,t} + μ_i + ν_t + u_{i,t}.

We use the instrumented value in the second stage to estimate the effect on the two-year growth rate of income. The second-stage regression can be written down as follows:

(y_{i,t+1} − y_{i,t−1}) / y_{i,t−1} = α_i + γ_t + β X̂_{i,t} + ε_{i,t},

where y_{i,t} is personal income and the third term on the right-hand side is the instrumented value from the first stage of either defense contracts or state government spending (for non-defense spending). This regression also includes state and year fixed effects (α_i and γ_t). We can compute the short-horizon two-year multiplier directly through the coefficient β (footnote 34). Table 1 shows the estimated multipliers from the second-stage regression. We estimate a 2-year multiplier around 1.2 for defense spending, with an F-statistic of 7.24 (footnote 35). This result is close to the estimate from Nakamura and Steinsson (2014), even though we use a longer period for estimation. For non-defense spending we estimate a 2-year multiplier of 1.4 with an F-statistic of 8.53. However, the large standard errors prevent us from claiming that there is a significant difference between the estimated multipliers.
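For replication purposes, a minimal manual two-stage least squares sketch is shown below (our own illustration; the column names and panel layout are hypothetical, and proper IV standard errors are omitted for brevity):

```python
import numpy as np
import pandas as pd

def tsls_multiplier(df):
    """Manual 2SLS for the static specification: instrument the spending
    variable X with the Bartik instrument z, absorbing state and year
    fixed effects, then regress two-year income growth g on fitted X.
    Expected (hypothetical) columns: 'state', 'year', 'X', 'z', 'g'."""
    fe = pd.get_dummies(df[["state", "year"]].astype(str), drop_first=True)
    controls = np.column_stack([np.ones(len(df)), fe.to_numpy(float)])
    # First stage: X on z, state dummies, and year dummies.
    Z1 = np.column_stack([df["z"].to_numpy(float), controls])
    b1, *_ = np.linalg.lstsq(Z1, df["X"].to_numpy(float), rcond=None)
    X_hat = Z1 @ b1
    # Second stage: g on the fitted values and the same fixed effects.
    Z2 = np.column_stack([X_hat, controls])
    b2, *_ = np.linalg.lstsq(Z2, df["g"].to_numpy(float), rcond=None)
    return b2[0]   # beta: the 2-year multiplier
```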
Our finding that the above defense spending multiplier is much larger than the one we estimated using our (dynamic) local projection approach is consistent with the observations in Ramey (2020). Investigating the approach in Nakamura and Steinsson (2014), she finds that using annual instead of biennial data and addressing the serial correlation in the instrument by estimating a dynamic model leads to substantially smaller multiplier estimates. The first-stage F-statistics indicate that both instruments suffer from the weak instrument problem because both F-statistics are below 10, which is the common rule-of-thumb critical value to reject the null hypothesis of a weak instrument. (We do not use the critical values from Stock and Yogo (2005) because they assume i.i.d. standard errors.) The low relevance of the instruments indicates that they do not capture the main fluctuations in spending. The underlying reason is probably that federal spending at the state level is largely driven by aggregate dynamics. However, since the Bartik (1991) approach uses mainly aggregate information to construct the instrument, the relevance of the instrument decreases once we take into account state and year fixed effects to control for aggregate dynamics.
Furthermore, the results are sensitive to the choice of the horizon for the dependent variable. To illustrate this, consider Table 2, which shows the second stage coefficients and the first stage F-statistics for longer horizons, where we shift the dependent variable one period ahead for each horizon. We would expect to see a peak in the F-statistic at a short horizon and then a gradual decrease in the statistic. However, for both instruments this is not the case. The estimated second-stage coefficients are very erratic.
To conclude, the results indicate that the multiplier for non-defense spending is higher than for defense spending, although the large standard errors prevent us from concluding that there is a significant difference between the multipliers. However, we do not think that the static approach produces reliable measures of the fiscal multipliers since the instruments are weak and the estimation result is very sensitive to the chosen horizon.
Appendix B Forecasting model for the dynamic approach
We construct a forecast for the federal spending variables defense contracts and (the categories of) intergovernmental transfers for non-defense spending. This forecast is based on a 10-year rolling-window panel regression with fixed effects using two lags of the following variables: personal income, state-government spending, state-government tax revenue, federal-government spending, the oil price, and the real interest rate. Based on this regression, we compute a one-step-ahead out-of-sample forecast X̂_{i,t}. The out-of-sample forecast precision of the model can be evaluated using the root mean squared forecast error (RMSFE). We calculate the mean over the cross-sectional as well as the time series dimension. Below we compare forecast errors for different trend specifications, using a linear and a quadratic trend polynomial. In Table 3, we compare these to the naive forecasts, where we use last year's value as a forecast.
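As an illustration, the sketch below (our own, with simulated stand-in data) computes the RMSFE of a model forecast and of the naive last-year's-value benchmark over a pooled state-year panel:

```python
import numpy as np

def rmsfe(actual, forecast):
    """Root mean squared forecast error, pooled over states and years."""
    return np.sqrt(np.nanmean((actual - forecast) ** 2))

rng = np.random.default_rng(1)
actual = np.cumsum(rng.standard_normal((50, 30)), axis=1)   # state x year panel
model_fc = actual + 0.5 * rng.standard_normal(actual.shape) # stand-in forecast
naive_fc = actual[:, :-1]                                   # last year's value
print(rmsfe(actual[:, 1:], model_fc[:, 1:]))
print(rmsfe(actual[:, 1:], naive_fc))
```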
The results in the table show that including a quadratic trend does not improve the forecast quality. The model performs better in forecasting IGT than defense contracts and, in particular, education and Medicaid grants. However, for some spending categories, the naive forecast performs even better: especially for total IGT, Highways, and Medicaid, the RMSFE of the naive forecast is lower.
Fig. 5: Estimates and bootstrapped 90% confidence intervals of cumulative multipliers using the benchmark specification, additionally controlling for party dominance in each state using a political party dummy
Fig. 6: Estimates and bootstrapped 90% confidence intervals of cumulative multipliers using the benchmark specification but using government spending variables sampled at fiscal years (instead of calendar years)
Fig. 7: Estimates and bootstrapped 90% confidence intervals of cumulative multipliers to a non-defense spending shock using the benchmark specification and state spending measured using NASBO and Census data, respectively | 2022-02-28T16:08:19.561Z | 2022-02-26T00:00:00.000 | {
"year": 2022,
"sha1": "3399ae69ca2117e699c0ea3cc49bf05a4229c011",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00181-022-02217-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "e8f0ac8a1fab4399501a16545e7b57a7f9b19f14",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
250678803 | pes2o/s2orc | v3-fos-license | Reconstruction of surface impedance of an object located over a planar PEC surface
A method for the determination of the inhomogeneous surface impedance of an arbitrarily shaped cylindrical object located over a perfectly conducting (PEC) plane is presented. The problem is reduced to the solution of an ill-posed integral equation through a single-layer representation, which is handled by Truncated Singular Value Decomposition (TSVD). The total field and its normal derivative on the boundary of the object, which are required for the evaluation of the surface impedance, are obtained through the Nyström method. The method can also be used in shape reconstruction by using the relation between the shape of a PEC object and its equivalent one in terms of the surface impedance. The numerical implementations yield quite satisfactory results.
1. Introduction
One of the effective approaches used in the solution of electromagnetic scattering problems is to establish an equivalent problem to the original one through the impedance boundary condition (IBC), which aims to reduce its mathematical and numerical complexities. IBC gives a relation between the electric and magnetic field vectors on a certain boundary in terms of a coefficient called surface impedance. IBC can directly be defined either on the surface of the actual scatterer or on a fictitious boundary [1][2][3][4]. On the other hand, due to the equivalence principle the surface impedance is related to the geometrical and physical properties of the actual scatterer. In [5] it is shown that there is an explicit relation between the geometrical variations of an arbitrary perfectly conducting object and equivalent surface impedance placed on a circular one covering the original scatterer. Thus as long as the surface impedance is known, one can extract the geometrical properties of the PEC scatterer from this information. This property can be used in the inverse scattering problems whose aim is to determine the shape of an inaccessible PEC object from the measured scattered field. For that reason, the reconstruction of the inhomogeneous surface impedance of a known boundary is of importance from both theoretical and application points of view.
The main objective of this paper is to extend the method developed in [6] to the reconstruction of the surface impedance of a cylindrical object located above a perfectly conducting planar surface and its use in the shape reconstruction of PEC targets. The results of such problems may have applications in the inverse scattering problems related to objects located in the atmosphere. In such a case, the earth surface can be modeled by the perfectly conducting plane. In the present study, the scattered field in the half space above the PEC plane is represented by a single-layer potential and the density of the single layer potential is obtained by solving the resulting Fredholm integral equation of the first kind.
Since the latter is ill-posed, a regularized solution is given via the truncated singular value decomposition scheme (TSVD). The use of the jump relations for the single-layer potential leads to explicit expressions of the scattered field and its normal derivative on the impedance surface. Then the least-squares reconstruction of the surface impedance is achieved by using the SIBC itself. The surface impedance reconstruction algorithm mentioned above is also extended to the shape reconstruction of perfectly conducting objects. To this aim, the unknown object is equivalently represented in terms of a known fictitious surface having a surface impedance on its boundary. Then the surface impedance is reconstructed via the method described above from the measured values of the scattered field due to the object to be reconstructed. Finally, using the explicit expression between the surface impedance and the shape of the actual object given in [5], the determination of the shape is achieved. The method is very effective for the reconstruction of smooth and slightly varying impedances. A similar observation is valid for the application of the method to shape reconstruction problems, i.e., it is capable of determining the shape of PEC objects whose surfaces are relatively simple.
In section 2 the general formulation of the inverse impedance problem is given and a solution is presented. Section 3 is devoted to the application of the method for the shape reconstruction of PEC objects while in section 4 the numerical simulations are given. Finally, conclusions and concluding remarks are given in section 5.
A time factor exp(−iωt) is omitted throughout the paper.
2. Solution of the Inverse Problem
Consider the two-dimensional (2D) electromagnetic scattering problem illustrated in figure 1. In this configuration a body D having an inhomogeneous surface impedance Z(x) on its boundary ∂D is located in a homogeneous half space bounded by the perfectly conducting plane, where x = (x₁, x₂) ∈ R² is the position vector. It is assumed that ∂D is a smooth boundary that can also be represented in parametric form. The constitutive parameters of the background medium are ε, μ and σ = 0. On the boundary ∂D the applicable boundary condition is the standard impedance boundary condition, given by

E − (n · E) n = Z(x) n × H on ∂D,   (2.1)

where E and H are the total electric and magnetic field vectors, respectively, and n is the outward unit normal vector of ∂D. The inverse scattering problem considered here is to reconstruct the inhomogeneous surface impedance Z(x) through the measured values of the scattered field on a certain limited domain denoted by Γ (see figure 1). To this aim the body is illuminated by a time-harmonic TM polarized plane wave whose electric field vector is E^i = (0, 0, u^i), with

u^i(x) = e^{−ik(x₁ cos φ₀ + x₂ sin φ₀)},   (2.2)

where φ₀ is the incidence angle and k = ω√(εμ) stands for the wave number of the background medium. Note that in such a case the total electric vector will be in the form E = (0, 0, u) and hence the problem can be formulated in terms of the scalar field function u. In order to formulate the problem in an appropriate way, we first introduce the field u₀, which is the total field in the half space in the absence of the body D; its explicit expression can be found in any ordinary textbook. Then the difference u^s = u − u₀ is the field scattered by the body D, and in terms of the scalar field the boundary condition (2.1) reads

∂u/∂n + (ik Z₀ / Z(x)) u = 0 on ∂D,   (2.3)

where Z₀ = √(μ/ε) denotes the intrinsic impedance of the background medium.
In view of (2.3), the surface impedance can be obtained from the values of the total field u and its normal derivative ∂u/∂n on ∂D via

Z(x) = −ik Z₀ u(x) / (∂u(x)/∂n), x ∈ ∂D.   (2.4)

In what follows, a method similar to the one given in [6] is described for reconstructing the required field values on the boundary ∂D from the measured scattered field data. To this aim, by the use of the image theorem, the scattered field is represented as a single-layer potential of the form

u^s(x) = ∫_{∂D} φ(y) G(x, y) ds(y),

with an unknown density function φ, where G(x, y) denotes the Green's function of the half space above the PEC plane, built from the free-space fundamental solution and its image with respect to the plane. Matching this representation to the measured scattered field on Γ yields a Fredholm integral equation of the first kind for φ, which is solved in the regularized sense via TSVD. Once the density is known, the values u and ∂u/∂n of the total field on the boundary ∂D can be recovered through the jump relations for the single-layer potential [7]. For the numerical evaluation of these singular integrals over ∂D, we make use of the same quadrature formulae in connection with the Nyström method [8].
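Since the first-kind integral equation is solved here by TSVD, a minimal numerical sketch of the regularized solve may be helpful (our own illustration; the discretized kernel matrix, data vector, and truncation level r are assumed given):

```python
import numpy as np

def tsvd_solve(K, b, r):
    """Truncated SVD solution of the ill-posed system K @ phi = b.

    K : (m, n) discretized single-layer operator (complex allowed).
    b : (m,) measured scattered field on the measurement curve.
    r : number of singular values to keep (truncation level).
    """
    U, s, Vh = np.linalg.svd(K, full_matrices=False)
    # Keep only the r largest singular values; the small ones amplify noise.
    coeffs = (U[:, :r].conj().T @ b) / s[:r]
    return Vh[:r].conj().T @ coeffs

# Toy usage: a smooth (hence ill-conditioned) kernel with noisy data.
m, n = 60, 40
x = np.linspace(0, 1, m)[:, None]; y = np.linspace(0, 1, n)[None, :]
K = np.exp(-(x - y) ** 2 / 0.1)              # smooth kernel -> fast SV decay
phi_true = np.sin(2 * np.pi * np.linspace(0, 1, n))
b = K @ phi_true + 1e-3 * np.random.default_rng(2).standard_normal(m)
phi_rec = tsvd_solve(K, b, r=10)
```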
3. Application to shape reconstruction of conducting objects
In [5] it is shown that there is an explicit relation between the shape of a PEC object and its equivalent surface impedance defined on a fictitious boundary which is assumed to cover the actual one. By considering this property, the reconstructed surface impedance can be used for the shape determination of PEC objects. To this aim, a circular domain is considered which covers the perfectly conducting object to be reconstructed, and an equivalent problem is set by imposing an inhomogeneous surface impedance Z(φ) on the new circular boundary (see figure 3). If the object has a slightly varying and smooth boundary, it can be represented in terms of a standard impedance boundary condition [5], and an explicit relation (3.1) holds between the surface function of the PEC object and the parametric form of the equivalent surface impedance Z(t) on the circular boundary ρ = a. Then, for the given measured scattered data related to the actual object, we first reconstruct the surface impedance Z(t) via the method given in the previous section and then obtain the surface function through (3.1).
4. Numerical Results
In this section, some illustrative examples are presented both for surface impedance determination and for its use in shape reconstruction. The data, which should be collected by real measurements, are created synthetically by solving the associated scattering problem through the mixed-layer potential approach [9]. In all numerical examples, the frequency of the incident wave is chosen as f = 300 MHz and the background as free space, corresponding to a wavelength λ = 1 m. A random noise term of level 1% is added to the simulated data for each example. The scattered field measurements due to the test object are assumed to be performed on the semicircle R = 5, φ ∈ (0, π). The equivalent impedance boundary is chosen as the circle with radius a = 0.42 and center (0, 2), and the surface impedance is reconstructed on this circle from the above given data. By using the relation (3.1) the shape of the object is then reconstructed. In figure 4 the variation of the exact and reconstructed values of the surface impedance on the circle a = 0.42 m is given. The exact and reconstructed shapes of the object are illustrated in figure 5. As can easily be observed, the reconstructed shape is very close to the actual one. Note that in this example the variation of the surface is small compared to the wavelength.
5. Conclusions
The inverse scattering problem whose aim is to reconstruct the surface impedance of a cylindrical object of arbitrary shape over a PEC plane is solved through an extension of the method in [6]. On the other hand, by means of the equivalence principle, it is possible to show that a PEC object can be represented in terms of a surface impedance defined on a known surface, and that this surface impedance is explicitly related to the shape of the actual object. By the use of this result, the method can also be applied to shape reconstruction problems related to PEC bodies.
The method yields quite satisfactory surface impedance reconstructions for a single illumination, even in the case of aspect-limited data. This is due to the fact that the planar PEC boundary reflects all the field which carries the information about the non-illuminated part of the object. In other words, the reflection of the incident field from the PEC plane behaves like a second excitation and interacts with the shadow part of the object. When a PEC object is equivalently represented by a surface impedance, the method yields quite accurate shape reconstructions for objects with slightly varying boundaries. This is a consequence of using only the standard impedance boundary condition for the equivalent problem; by using higher-order IBCs, it may be possible to reconstruct more complex shapes. Future studies will be devoted to this direction. | 2022-06-28T00:59:52.617Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "a5e57e9176bc9c9e0f4ab46c57dae58f54325dab",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/135/1/012099",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a5e57e9176bc9c9e0f4ab46c57dae58f54325dab",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
13906975 | pes2o/s2orc | v3-fos-license | Pose Graph Optimization in the Complex Domain: Lagrangian Duality, Conditions For Zero Duality Gap, and Optimal Solutions
Pose Graph Optimization (PGO) is the problem of estimating a set of poses from pairwise relative measurements. PGO is a nonconvex problem, and currently no known technique can guarantee the computation of an optimal solution. In this paper, we show that Lagrangian duality allows computing a globally optimal solution, under certain conditions that are satisfied in many practical cases. Our first contribution is to frame the PGO problem in the complex domain. This makes analysis easier and allows drawing connections with the recent literature on unit gain graphs. Exploiting this connection we prove non-trivial results about the spectrum of the matrix underlying the problem. The second contribution is to formulate and analyze the dual problem in the complex domain. Our analysis shows that the duality gap is connected to the number of zero eigenvalues of the penalized pose graph matrix, which arises from the solution of the dual. We prove that if this matrix has a single eigenvalue in zero, then (i) the duality gap is zero, (ii) the primal PGO problem has a unique solution, and (iii) the primal solution can be computed by scaling an eigenvector of the penalized pose graph matrix. The third contribution is algorithmic: we exploit the dual problem and propose an algorithm that computes a guaranteed optimal solution for PGO when the penalized pose graph matrix satisfies the Single Zero Eigenvalue Property (SZEP). We also propose a variant that deals with the case in which the SZEP is not satisfied. The fourth contribution is a numerical analysis. Empirical evidence shows that in the vast majority of cases (100% of the tests under noise regimes of practical robotics applications) the penalized pose graph matrix does satisfy the SZEP, hence our approach allows computing the global optimal solution. Finally, we report simple counterexamples in which the duality gap is nonzero, and discuss open problems.
1 Introduction
Pose graph optimization (PGO) consists in the estimation of the poses (positions and orientations) of a mobile robot, from relative pose measurements. The problem can be formulated as the minimization of a nonconvex cost, and can be conveniently visualized as a graph, in which a (to-be-estimated) pose is attached to each vertex, and a given relative pose measurement is associated to each edge.
A motivating example in robotics is the one pictured in Fig. 1(a). A mobile robot is deployed in an unknown environment at time t = 0. The robot traverses the environment and at each discrete time step acquires a sensor measurement (e.g., distances from obstacles within the sensing radius). From wheel rotation, the robot is capable of measuring the relative motion between two consecutive poses (say, at time i and j). Moreover, comparing sensor measurements acquired at different times, the robot can also extrapolate relative measurements between non-consecutive poses (e.g., between i and k in the figure). PGO uses these measurements to estimate robot poses. The graph underlying the problem is shown in Fig. 1(b), where we draw in different colors the edges due to relative motion measurements (the odometric edges, in black) and the edges connecting non-consecutive poses (the loop closures, in red). The importance of estimating the robot poses is two-fold. First, the knowledge of the current robot pose is often needed for performing high-level tasks within the environment. Second, from the knowledge of all past poses, the robot can register all sensor footprints in a common frame, and obtain a map of the environment, which is needed for model-based navigation and path planning.

Figure 1: (a) Pose graph optimization in robotics. A mobile robot is deployed in an unknown environment at time t = 0. At each time step the robot measures distances from obstacles within the sensing radius (red circle). The sensor footprint (i.e., the set of measurements) at time T is visualized in orange. By matching sensor footprints acquired at different time steps, the robot establishes relative measurements between poses along its trajectory. PGO consists in the estimation of robot poses from these relative measurements. (b) Directed graph underlying the problem.
Related work in robotics. Since the seminal paper [53], PGO attracted large attention from the robotics community. Most state-of-the-art techniques currently rely on iterative nonlinear optimization, which refines a given initial guess. The Gauss-Newton method is a popular choice [51,45,44], as it converges quickly when the initialization is close to a minimum of the cost function. Trust region methods (e.g., the Levenberg-Marquardt method, or Powell's Dog-Leg method [52]) have also been applied successfully to PGO [61,62]; the gradient method has been shown to have a large convergence basin, while suffering from long convergence tails [57,37,48]. A large body of literature focuses on speeding up computation. This includes exploiting sparsity [45,32], using reduction schemes to limit the number of poses [50,9], faster linear solvers [34,22], or approximate solutions [24,12].
PGO is a nonconvex problem and iterative optimization techniques can only guarantee local convergence. State-of-the-art iterative solvers fail to converge to a global minimum of the cost for relatively small noise levels [13,15]. This fact recently triggered efforts towards the design of more robust techniques, together with a theoretical analysis of PGO. Huang et al. [41] discuss the number of local minima in small PGO problems. Knuth and Barooah [49] investigate the growth of the error in absence of loop closures. Carlone [10] provides conservative approximations of the basin of convergence for the Gauss-Newton method. Huang et al. [40] and Wang et al. [78] discuss the nonlinearities in PGO. In order to improve global convergence, a successful strategy consists in solving for the rotations first, and then using the resulting estimate to bootstrap iterative methods for PGO [11,12,13,15]. This is convenient because the rotation subproblem can be solved in closed-form in 2D [13], and many heuristic algorithms for rotation estimation also perform well in 3D [55,35,33,15]. Despite the empirical success of state-of-the-art techniques, no approach can guarantee global convergence. It is not even known if the global optimizer itself is unique in general instances (while it is known that the minimizer is unique with probability one in the rotation subproblem [13]). The lack of guarantees promoted a recent interest in verification techniques for PGO. Carlone and Dellaert [14] use duality to evaluate the quality of a candidate solution in planar PGO. The work [14] also provides empirical evidence that in many problem instances the duality gap, i.e., the mismatch between the optimal cost of the primal and the dual problem, is zero.
Related work in other fields. Variations of the PGO problem appear in different research fields. In computer vision, a somehow more difficult variant of the problem is known as bundle adjustment [35,55,2,36,68,38,33]. Contrarily to PGO, in bundle adjustment the relative measurements between the (camera) poses are only known up to scale. While no closedform solution is known for bundle adjustment, many authors focused on the solution of the rotation subproblem [35,55,2,36,68,33,38]. The corresponding algorithms have excellent performance in practice, but they come with little guarantees, as they are based on relaxation. Fredriksson and Olsson [33] use duality theory to design a verification technique for quaternion-based rotation estimation.
Related work in multi-robot systems and sensor networks also includes contributions on rotation estimation (also known as attitude synchronization [73,39,56,17,79]). Borra et al. [6] propose a distributed algorithm for planar rotation estimation. Tron and Vidal [77,74] provide convergence results for distributed attitude consensus using gradient descent; distributed consensus on manifold [64] is related to estimation from relative measurements, as discussed in [75]. A problem that is formally equivalent to PGO is discussed in [59,58] with application to sensor network localization. Piovan et al. [59] provide observability conditions and discuss iterative algorithms that reduce the effect of noise. Peters et al. [58] study pose estimation in graphs with a single loop (related closed-form solutions also appear in other literatures [68,25]), and provide an estimation algorithm over general graphs, based on the limit of a set of continuous-time differential equations, proving its effectiveness through numerical simulations. We only mention that a large literature in sensor network localization also deals with other types of relative measurements [54], including relative positions (with known rotations) [4,63], relative distances [23,30,19,5,69,8,20,21] and relative bearing measurements [72,31,76].
A less trivial connection can be established with related work in molecular structure determination from cryo-electron microscopy [70,71], which offers a very lucid and mature treatment of rotation estimation. Singer and Shkolnisky [70,71] provide two approaches for rotation estimation, based on relaxation and semidefinite programming (SDP). Another merit of [70] is to draw connections between planar rotation estimation and the "max-2-lin mod l" problem in combinatorial optimization, and the "max-k-cut" problem in graph theory. Bandeira et al. [3] provide a Cheeger-like inequality that establishes performance bounds for the SDP relaxation. Saunderson et al. [66,65] propose a tighter SDP relaxation, based on a spectrahedral representation of the convex hull of the rotation group.
Contribution. This paper shows that the use of Lagrangian duality allows computing a guaranteed globally optimal solution for PGO in many practical cases, and proves that in those cases the solution is unique. Section 2 recalls preliminary concepts, and discusses the properties of a particular set of 2×2 matrices, which are scalar multiples of a planar rotation matrix. These matrices are omnipresent in planar PGO and acknowledging this fact allows reformulating the problem over complex variables. Section 3 frames PGO as a problem in complex variables. This makes analysis easier and allows drawing connections with the recent literature on unit gain graphs [60]. Exploiting this connection we prove non-trivial results about the spectrum of the matrix underlying the problem (the pose graph matrix), such as the number of zero eigenvalues in particular graphs. Section 4 formulates the Lagrangian dual problem in the complex domain. Moreover it presents an SDP relaxation of PGO, interpreting the relaxation as the dual of the dual problem. Our SDP relaxation is related to the one of [70,33], but we deal with 2D poses, rather than rotations; moreover, we only use the SDP relaxation to complement our discussion on duality and to support some of the proofs. Section 4.3 contains key results that relate the solution of the dual problem to the primal PGO problem. We show that the duality gap is connected to the zero eigenvalues of the penalized pose graph matrix, which arises from the solution of the dual problem. We prove that if this matrix has a single eigenvalue in zero, then (i) the duality gap is zero, (ii) the primal PGO problem has a unique solution (up to an arbitrary roto-translation), and (iii) the primal solution can be computed by scaling the eigenvector of the penalized pose graph matrix corresponding to the zero eigenvalue. To the best of our knowledge, this is the first work to discuss the uniqueness of the PGO solution for general graphs and to provide a provably optimal solution. Section 5 exploits our analysis of the dual problem to devise computational approaches for PGO. We propose an algorithm that computes a guaranteed optimal solution for PGO when the penalized pose graph matrix satisfies the Single Zero Eigenvalue Property (SZEP). We also propose a variant that deals with the case in which the SZEP is not satisfied. This variant, while possibly suboptimal, is shown to perform well in practice, outperforming related approaches. Section 6 elucidates our theoretical results with numerical tests. In practical regimes of operation (rotation noise < 0.3 rad and translation noise < 0.5 m), our Monte Carlo runs always produced a penalized pose graph matrix satisfying the SZEP. Hence, in all tests with reasonable noise our approach enables the computation of the optimal solution. For larger noise levels (e.g., 1 rad standard deviation for rotation measurements), we observed cases in which the penalized pose graph matrix has multiple eigenvalues in zero. To stimulate further investigation towards structural results on duality (e.g., maximum level of noise for which the duality gap is provably zero) we report simple examples in which the duality gap is nonzero.
2 Notation and preliminary concepts
Section 2.1 introduces our notation. Section 2.2 recalls standard concepts from graph theory, and can be safely skipped by the expert reader. Section 2.3, instead, discusses the properties of the set of 2 × 2 matrices that are multiples of a planar rotation matrix. We denote this set with the symbol αSO (2). The set αSO(2) is of interest in this paper since the action of any matrix Z ∈ αSO(2) can be conveniently represented as a multiplication between complex numbers, as discussed in Section 3.3. Table 1 summarizes the main symbols used in this paper.
2.1 Notation
The cardinality of a set V is written as |V|. The sets of real and complex numbers are denoted with R and C, respectively. I_n denotes the n × n identity matrix, 1_n denotes the (column) vector of all ones of dimension n, and 0_{n×m} denotes the n × m matrix of all zeros (we also use the shorthand 0_n ≐ 0_{n×1}). For a matrix M, M_ij denotes the element of M in row i and column j. For matrices with a block structure we use [M]_ij to denote the d × d block of M at block row i and block column j. In this paper we only deal with matrices that have 2 × 2 blocks, i.e., d = 2, hence the notation [M]_ij is unambiguous.
2.2 Graph terminology
A directed graph G is a pair (V, E), where the vertices or nodes V are a finite set of elements, and E ⊂ V × V is the set of edges. Each edge is an ordered pair e = (i, j). We say that e is incident on nodes i and j, leaves node i, called tail, and is directed towards node j, called head. The number of nodes is denoted with n . = |V|, while the number of edges is m . = |E|. The incidence matrix A of a directed graph is a m × n matrix with elements in {−1, 0, +1} that exhaustively describes the graph topology. Each row of A corresponds to an edge and has exactly two non-zero elements. For the row corresponding to edge e = (i, j), there is a −1 on the i-th column and a +1 on the j-th column.
The set of outgoing neighbors of node i is N_i^out ≐ {j : (i, j) ∈ E}, and the set of incoming neighbors of node i is N_i^in ≐ {j : (j, i) ∈ E}. The set of neighbors of node i is the union of outgoing and incoming neighbors, N_i ≐ N_i^out ∪ N_i^in.
2.3 The set αSO(2)
The set αSO(2) is defined as

αSO(2) ≐ {αR : α ∈ R, R ∈ SO(2)},

where SO(2) is the set of 2D rotation matrices. Recall that SO(2) can be parametrized by an angle θ ∈ (−π, +π], and any matrix R ∈ SO(2) is in the form:

R(θ) = [cos θ, −sin θ; sin θ, cos θ].   (1)

Clearly, SO(2) ⊂ αSO(2). The set αSO(2) is closed under standard matrix multiplication, i.e., for any Z₁, Z₂ ∈ αSO(2), also the product Z₁Z₂ ∈ αSO(2). In full analogy with SO(2), it is also trivial to show that the multiplication is commutative, i.e., for any Z₁, Z₂ ∈ αSO(2) it holds that Z₁Z₂ = Z₂Z₁. The set αSO(2) is also closed under matrix addition, since for R₁ = R(θ₁), R₂ = R(θ₂) ∈ SO(2) and α₁, α₂ ∈ R, we have that

α₁R₁ + α₂R₂ = [a, −b; b, a] = α₃R₃,   (2)

where we used the shorthands c_i and s_i for cos(θ_i) and sin(θ_i), and we defined a ≐ α₁c₁ + α₂c₂, b ≐ α₁s₁ + α₂s₂, and α₃ ≐ √(a² + b²). If α₃ ≠ 0, then R₃ ≐ (1/α₃)[a, −b; b, a] is a rotation matrix; if α₃ = 0, then α₁R₁ + α₂R₂ = 0_{2×2}, which also falls in our definition of αSO(2). From this reasoning, it is clear that an alternative definition of αSO(2) is

αSO(2) = { [a, −b; b, a] : a, b ∈ R }.   (3)

αSO(2) is tightly coupled with the set of complex numbers C. Indeed, a matrix in the form (3) is also known as a matrix representation of a complex number [29]. We explore the implications of this fact for PGO in Section 3.3.
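The correspondence between αSO(2) and C can be checked numerically; the short sketch below (our own illustration, not from the paper) verifies that matrix sum, matrix product, and the action on a 2-vector all match the corresponding complex operations:

```python
import numpy as np

def alpha_so2(alpha, theta):
    """Matrix alpha * R(theta) in alphaSO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return alpha * np.array([[c, -s], [s, c]])

def vee(Z):
    """The 'vee' map alphaSO(2) -> C: [[a,-b],[b,a]] -> a + ib."""
    return Z[0, 0] + 1j * Z[1, 0]

Z1, Z2 = alpha_so2(2.0, 0.3), alpha_so2(0.5, -1.1)
assert np.isclose(vee(Z1 @ Z2), vee(Z1) * vee(Z2))   # product <-> product
assert np.isclose(vee(Z1 + Z2), vee(Z1) + vee(Z2))   # sum <-> sum
v = np.array([1.0, 2.0])
w = Z1 @ v                                           # action <-> scalar mult.
assert np.isclose(w[0] + 1j * w[1], vee(Z1) * (v[0] + 1j * v[1]))
```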
3 Pose graph optimization in the complex domain

3.1 Standard PGO

PGO estimates n poses from m relative pose measurements. We focus on the planar case, in which the i-th pose x_i is described by the pair x_i ≐ (p_i, R_i), where p_i ∈ R² is a position in the plane, and R_i ∈ SO(2) is a planar rotation. The pose measurement between two nodes, say i and j, is described by the pair (Δ_ij, R_ij), where Δ_ij ∈ R² and R_ij ∈ SO(2) are the relative position and rotation measurements, respectively.
The problem can be visualized as a directed graph G(V, E), where an unknown pose is attached to each node in the set V, and each edge (i, j) ∈ E corresponds to a relative pose measurement between nodes i and j (Fig. 2).
In a noiseless case, the measurements satisfy:

p_j = p_i + R_i Δ_ij,  R_j = R_i R_ij,  ∀(i, j) ∈ E.   (4)

Table 1: Main symbols used in this paper.
Real PGO formulation — W: real pose graph matrix; W (anchored): real anchored pose graph matrix; p ∈ R^{2n}: node positions; ρ ∈ R^{2(n−1)}: anchored node positions; r ∈ R^{2n}: node rotations.
Complex PGO formulation — W̃ ∈ C^{(2n−1)×(2n−1)}: complex anchored pose graph matrix; ρ̃ ∈ C^{n−1}: anchored complex node positions; r̃ ∈ C^n: complex node rotations.
Miscellanea — SO(2): 2D rotation matrices; αSO(2): scalar multiples of a 2D rotation matrix; |V|: cardinality of the set V; I_n: n × n identity matrix; 0_n (1_n): column vector of zeros (ones) of dimension n; Tr(X): trace of the matrix X.

In absence of noise, the problem admits a unique solution as long as one fixes the pose of a node (say p₁ = 0₂ and R₁ = I₂) and the underlying graph is connected.
In this work we focus on connected graphs, as these are the ones of practical interest in PGO (a graph with k connected components can be split in k subproblems, which can be solved and analyzed independently).
Assumption 1 (Connected Pose Graph)
The graph G underlying the pose graph optimization problem is connected.
In presence of noise, the relations (4) cannot be met exactly and pose graph optimization looks for a set of positions {p₁, ..., p_n} and rotations {R₁, ..., R_n} that minimize the mismatch with respect to the measurements. This mismatch can be quantified by different cost functions. We adopt the formulation proposed in [14]:

f({p_i}, {R_i}) ≐ Σ_{(i,j)∈E} ‖Δ_ij − R_iᵀ(p_j − p_i)‖₂² + (1/2)‖R_ij − R_iᵀR_j‖_F²,   (5)

where ‖·‖₂ is the standard Euclidean distance and ‖·‖_F is the Frobenius norm. The Frobenius norm ‖R_a − R_b‖_F is a standard measure of distance between two rotations R_a and R_b, and it is commonly referred to as the chordal distance, see, e.g., [38]. In (5), we used the short-hand notation {p_i} (resp. {R_i}) to denote the set of unknown positions {p₁, ..., p_n} (resp. rotations).
Rearranging the terms, problem (5) can be rewritten as:

f({p_i}, {R_i}) = Σ_{(i,j)∈E} ‖p_j − p_i − R_iΔ_ij‖₂² + (1/2)‖R_j − R_iR_ij‖_F²,   (6)

where we exploited the fact that the 2-norm is invariant to rotation, i.e., for any vector v and any rotation matrix R it holds ‖Rv‖₂ = ‖v‖₂. Eq. (6) highlights that the objective is a quadratic function of the unknowns. The complexity of the problem stems from the fact that the constraint R_i ∈ SO(2) is nonconvex, see, e.g., [65]. To make this more explicit, we follow the line of [14], and use a more convenient representation for nodes' rotations. Every planar rotation R_i can be written as in (1), and is fully defined by the vector

r_i ≐ [cos θ_i, sin θ_i]ᵀ.   (7)

Using this parametrization and with simple matrix manipulation, Eq. (6) becomes (cf. with Eq. (11) in [14]):

min_{{p_i},{r_i}} Σ_{(i,j)∈E} ‖p_j − p_i − D_ij r_i‖₂² + ‖r_j − R_ij r_i‖₂²  s.t. ‖r_i‖₂² = 1, i = 1, ..., n,   (8)

where D_ij ≐ [Δ^x_ij, −Δ^y_ij; Δ^y_ij, Δ^x_ij] ∈ αSO(2) is built from the relative position measurement Δ_ij = [Δ^x_ij, Δ^y_ij]ᵀ (so that D_ij r_i = R_i Δ_ij), and where the constraints ‖r_i‖₂² = 1 specify that we look for vectors r_i that represent admissible rotations (i.e., such that cos(θ_i)² + sin(θ_i)² = 1).
Problem (8) is a quadratic problem with quadratic equality constraints. The latter are nonconvex, hence computing a global minimum of (8) is hard in general. There are two problem instances, however, for which it is easy to compute a global minimizer, which attains zero optimal cost. These two cases are recalled in Propositions 1-2.
Proposition 1 (Zero cost in trees) An optimal solution for a PGO problem in the form (8) whose underlying graph is a tree attains zero cost.
The proof is given in Appendix 8.1. Roughly speaking, in a tree, we can build an optimal solution by concatenating the relative pose measurements, and this solution annihilates the cost function; a sketch of this construction is given below. This comes as no surprise, as the chords (i.e., the extra edges, added to a spanning tree) are indeed the elements that create redundancy and improve the pose estimate. However, also for graphs with chords, it is possible to attain the zero cost in problem (8).
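The construction is simply the propagation of eq. (4) along the tree; a minimal sketch (our own illustration, assuming the edge list is topologically ordered away from the root) follows:

```python
import numpy as np

def poses_from_tree(n, tree_edges):
    """Concatenate relative measurements along a tree rooted at node 0.

    tree_edges : list of (i, j, delta, theta), ordered so that the pose
                 of i is already known when edge (i, j) is processed.
    Returns positions p (n x 2) and angles th (n,) attaining zero cost.
    """
    p = np.zeros((n, 2)); th = np.zeros(n)
    for i, j, delta, theta in tree_edges:
        c, s = np.cos(th[i]), np.sin(th[i])
        R_i = np.array([[c, -s], [s, c]])
        p[j] = p[i] + R_i @ np.asarray(delta)   # p_j = p_i + R_i * Delta_ij
        th[j] = th[i] + theta                   # R_j = R_i * R_ij
    return p, th
```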
Definition 1 (Balanced pose graph) A pose graph is balanced if the pose measurements compose to the identity along each cycle in the graph. (We use the somewhat standard term "composition" to denote the group operation for SE(2): for two poses T₁ ≐ (p₁, R₁) and T₂ ≐ (p₂, R₂), the composition is T₁ · T₂ = (p₁ + R₁p₂, R₁R₂) [16], and the identity element is (0₂, I₂). When composing measurements along a loop, edge direction is important: for two consecutive edges (i, k) and (k, j) along the loop, the composition is T_ij = T_ik · T_kj, while if the second edge is in the form (j, k), the composition becomes T_ij = T_ik · T_jk^{−1}.)
In a balanced pose graph, there exists a configuration that explains exactly the measurements, as formalized in the following proposition.
Proposition 2 (Zero cost in balanced pose graphs) An optimal solution for a balanced pose graph optimization problem attains zero cost.
The proof is given in Appendix 8.2. The concept of balanced graph describes a noiseless setup, while in real problem instances the measurements do not compose to the identity along cycles, because of the presence of noise.
We note the following fact, which will be useful in Section 3.2: all the matrices appearing in problem (8), namely the matrices D_ij and the rotation measurements R_ij, belong to αSO(2).
3.2 Matrix formulation and anchoring
In this section we rewrite the cost function (8) in a more convenient matrix form. The original cost is:

f(p, r) = Σ_{(i,j)∈E} ‖p_j − p_i − D_ij r_i‖₂² + ‖r_j − R_ij r_i‖₂²,   (10)

where we denote with p ∈ R^{2n} and r ∈ R^{2n} the vectors stacking all nodes' positions and rotations, respectively. Now, let A ∈ R^{m×n} denote the incidence matrix of the graph underlying the problem: if (i, j) is the k-th edge, the k-th row of A has a −1 on the i-th column and a +1 on the j-th column. Define Ā ≐ A ⊗ I₂ ∈ R^{2m×2n}, and denote with Ā_k ∈ R^{2×2n} the k-th block row of Ā. From the structure of Ā, it follows that Ā_k p = p_j − p_i. Also, we define D̄ ∈ R^{2m×2n} as a block matrix where the k-th block row D̄_k ∈ R^{2×2n} corresponding to the k-th edge (i, j) is all zeros, except for a 2 × 2 block −D_ij in the i-th block column. Using the matrices Ā and D̄, the first sum in (10) can be written as:

Σ_{(i,j)∈E} ‖p_j − p_i − D_ij r_i‖₂² = ‖Āp + D̄r‖₂².   (11)

Similarly, we define Ū ∈ R^{2m×2n} as a block matrix where the k-th block row Ū_k ∈ R^{2×2n} corresponding to the k-th edge (i, j) is all zeros, except for 2 × 2 blocks in the i-th and j-th block columns, which are equal to −R_ij and I₂, respectively. Using Ū, the second sum in (10) becomes:

Σ_{(i,j)∈E} ‖r_j − R_ij r_i‖₂² = ‖Ūr‖₂².   (12)

Combining (11) and (12), the cost in (10) becomes:

f(p, r) = ‖Āp + D̄r‖₂² + ‖Ūr‖₂² = [pᵀ rᵀ] [ L̄, ĀᵀD̄ ; D̄ᵀĀ, Q̄ ] [p; r],   (13)

where we defined Q̄ ≐ D̄ᵀD̄ + ŪᵀŪ and L̄ ≐ ĀᵀĀ, to simplify notation. Note that, since Ā ≐ A ⊗ I₂, it is easy to show that L̄ = L ⊗ I₂, where L ≐ AᵀA is the Laplacian matrix of the graph underlying the problem. A pose graph optimization instance is thus completely defined by the matrix

W ≐ [ L̄, ĀᵀD̄ ; D̄ᵀĀ, Q̄ ] ∈ R^{4n×4n}.   (14)

From (13), W can be easily seen to be symmetric and positive semidefinite. Other useful properties of W are stated in the next proposition.
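The construction of W from the edge list is mechanical; the sketch below (our own illustration, with hypothetical helper names) assembles Ā, D̄, Ū and W for a given list of edges and measurements:

```python
import numpy as np

def build_W(n, edges):
    """Assemble the (real) pose graph matrix W of eq. (14).

    n     : number of poses.
    edges : list of (i, j, delta, theta), where delta = (dx, dy) is the
            relative position measurement and theta the relative rotation.
    """
    m = len(edges)
    A_bar = np.zeros((2 * m, 2 * n))    # A (kron) I2
    D_bar = np.zeros((2 * m, 2 * n))    # blocks -D_ij
    U_bar = np.zeros((2 * m, 2 * n))    # blocks -R_ij and I2
    for k, (i, j, (dx, dy), theta) in enumerate(edges):
        D_ij = np.array([[dx, -dy], [dy, dx]])              # in alphaSO(2)
        c, s = np.cos(theta), np.sin(theta)
        R_ij = np.array([[c, -s], [s, c]])
        rk, ci, cj = slice(2*k, 2*k+2), slice(2*i, 2*i+2), slice(2*j, 2*j+2)
        A_bar[rk, ci] = -np.eye(2); A_bar[rk, cj] = np.eye(2)
        D_bar[rk, ci] = -D_ij
        U_bar[rk, ci] = -R_ij;      U_bar[rk, cj] = np.eye(2)
    top = np.hstack([A_bar.T @ A_bar, A_bar.T @ D_bar])
    bottom = np.hstack([D_bar.T @ A_bar, D_bar.T @ D_bar + U_bar.T @ U_bar])
    return np.vstack([top, bottom])     # 4n x 4n, symmetric PSD

W = build_W(3, [(0, 1, (1.0, 0.0), 0.1), (1, 2, (1.0, 0.2), -0.3),
                (0, 2, (2.0, 0.1), -0.2)])
assert np.all(np.linalg.eigvalsh(W) > -1e-9)   # positive semidefinite
```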
Proposition 4 (Properties of W) Matrix W in (14): 1. is positive semidefinite, and has at least two eigenvalues in zero; 2. is composed by 2 × 2 blocks [W]_ij, and each block is a multiple of a rotation matrix, i.e., [W]_ij ∈ αSO(2), ∀i, j = 1, ..., 2n. Moreover, the diagonal blocks of W are nonnegative multiples of the identity matrix, i.e., [W]_ii = α_ii I₂ with α_ii ≥ 0.

A formal proof of Proposition 4 is given in Appendix 8.3. An intuitive explanation of the second claim follows from the fact that (i) W contains sums and products of the matrices in the original formulation (8) (which are in αSO(2), as noted at the end of Section 3.1), and (ii) the set αSO(2) is closed under matrix sum and product (Section 2.3).
The presence of two eigenvalues in zero has a very natural geometric interpretation: the cost function encodes inter-nodal measurements, hence it is invariant to global translations of node positions, i.e., f(p + p_a, r) = f(p, r) for p_a ≐ [aᵀ, ..., aᵀ]ᵀ (n copies of a), with a ∈ R². Algebraically, this translates to the fact that the matrix (1_n ⊗ I₂) ∈ R^{2n×2} is in the null space of the augmented incidence matrix Ā, which also implies a two-dimensional null space for W.
Position anchoring In this paper we show that the duality properties in pose graph optimization are tightly coupled with the spectrum of the matrix W. We are particularly interested in the eigenvalues at zero, and from this perspective it is not convenient to carry on the two null eigenvalues of W (claim 1 of Proposition 4), which are always present, and are due to an intrinsic observability issue.
We remove the translation ambiguity by fixing the position of an arbitrary node. Without loss of generality, we fix the position p₁ of the first node to the origin, i.e., p₁ = 0₂. This process is commonly called anchoring. Setting p₁ = 0₂ is equivalent to removing the corresponding columns and rows from W, leading to the following "anchored" quadratic form:

f(ρ, r) = [ρᵀ rᵀ] W [ρ; r],   (15)

where ρ is the vector p without its first two-element block p₁, and (with slight abuse of notation) W now denotes the matrix obtained from (14) by removing the rows and the columns corresponding to p₁. The structure of W is as follows:

W = [ L̄, S̄ ; S̄ᵀ, Q̄ ],   (16)

where Ā = A ⊗ I₂, and A is the anchored (or reduced) incidence matrix, obtained by removing the first column from the original incidence matrix, see, e.g., [12]. On the right-hand side of (16) we defined S̄ ≐ ĀᵀD̄ and L̄ ≐ ĀᵀĀ. We call W the real (anchored) pose graph matrix. W is still symmetric and positive semidefinite (it is a principal submatrix of a positive semidefinite matrix). Moreover, since W is obtained by removing a 2 × 4n block row and a 4n × 2 block column from the original matrix, it is still composed by 2 × 2 matrices in αSO(2), as specified in the following remark.
Remark 1 (Blocks of the anchored matrix) The anchored pose graph matrix W is composed by 2 × 2 blocks in αSO(2); in particular, its diagonal blocks are nonnegative multiples of the identity matrix, [W]_ii = α_ii I₂ with α_ii ≥ 0.

After anchoring, our PGO problem becomes:

f* = min_{ρ, r} [ρᵀ rᵀ] W [ρ; r]  s.t. ‖r_i‖₂² = 1, i = 1, ..., n.   (17)
3.3 To the complex domain
In this section we reformulate problem (17), in which the decision variables are real vectors, into a problem in complex variables. The main motivation for this choice is that the real representation (17) is somehow redundant: as we will show in Proposition 7, each eigenvalue of W is repeated twice (multiplicity 2), while the complex representation does not have this redundancy, making analysis easier. In the rest of this paper, any quantity marked with a tilde (~) lives in the complex domain C. Any real vector v ∈ R² can be represented by a complex number ṽ = η e^{ιφ}, where ι denotes the imaginary unit (ι² = −1), η = ‖v‖₂ and φ is the angle that v forms with the horizontal axis. We use the operator (·)^∨ to map a 2-vector to the corresponding complex number, ṽ = v^∨. When convenient, we adopt the notation v ∼ ṽ, meaning that v and ṽ are the vector and the complex representation of the same number. The action of a real 2 × 2 matrix Z on a vector v ∈ R² cannot be represented, in general, as a scalar multiplication between complex numbers. However, if Z ∈ αSO(2), this is possible. To show this, assume that Z = αR(θ), where R(θ) is a counter-clockwise rotation of angle θ. Then,

Zv ∼ α e^{ιθ} ṽ.   (18)
The action of a real 2 × 2 matrix Z on a vector v ∈ R 2 cannot be represented, in general, as a scalar multiplication between complex numbers. However, if Z ∈ αSO(2), this is possible. To show this, assume that Z = αR(θ), where R(θ) is a counter-clockwise rotation of angle θ. Then, With slight abuse of notation we extend the operator (·) ∨ to αSO (2), such that, given Z = αR(θ) ∈ αSO (2), then Z ∨ = αe θ ∈ C. By inspection, one can also verify the following relations between the sum and product of two matrices Z 1 , Z 2 ∈ αSO(2) and their complex representations Z ∨ 1 , Z ∨ 2 ∈ C: We next discuss how to apply the machinery introduced so far to reformulate problem (17) in the complex domain. The variables in problem (17) are the vectors ρ ∈ R 2(n−1) and r ∈ R 2n that are composed by 2-vectors, i.e., ρ = [ρ 1 , . . . , ρ n−1 ] and r = [r 1 , . . . , r n ] , where ρ i , r i ∈ R 2 . Therefore, we define the complex positions and the complex rotations: Using the complex parametrization (20), the constraints in (17) become: Similarly, we would like to rewrite the objective as a function ofρ andr. This re-parametrization is formalized in the following proposition, whose proof is given in Appendix 8.4.
Proposition 5 (Cost in the complex domain) For any pair (ρ, r), the cost function in (17) is such that:

[ρᵀ rᵀ] W [ρ; r] = [ρ̃* r̃*] W̃ [ρ̃; r̃],   (22)

where the vectors ρ̃ and r̃ are built from ρ and r as in (20), and the matrix W̃ ∈ C^{(2n−1)×(2n−1)} is obtained from the 2 × 2 blocks of W as [W̃]_ij = ([W]_ij)^∨.

Remark 2 (Real diagonal entries for W̃) According to Remark 1, the diagonal blocks of W are multiples of the identity matrix, [W]_ii = α_ii I₂; therefore the diagonal entries of W̃ are real, W̃_ii = α_ii ∈ R.

Proposition 5 enables us to rewrite problem (17) as:

f* = min_{ρ̃, r̃} [ρ̃* r̃*] W̃ [ρ̃; r̃]  s.t. |r̃_i|² = 1, i = 1, ..., n.   (23)
= A A where A is the anchored incidence matrix. In Section 4 we apply Lagrangian duality to the problem (23). Before that, we provide results to characterize the spectrum of the matrices W andW , drawing connections with the recent literature on unit gain graphs, [60].
3.4 Analysis of the real and complex pose graph matrices
In this section we take a closer look at the structure and the properties of the real and the complex pose graph matrices W and W̃. In analogy with (13) and (16), we write W̃ as

W̃ = [ AᵀA, AᵀD̃ ; D̃*A, D̃*D̃ + Ũ*Ũ ],   (25)

where Ũ ∈ C^{m×n} and D̃ ∈ C^{m×n} are the "complex versions" of Ū and D̄ in (13), i.e., they are obtained blockwise as Ũ_ij = ([Ū]_ij)^∨ and D̃_ij = ([D̄]_ij)^∨. The factorization (25) is interesting, as it allows to identify two important matrices that compose W̃: the first is A, the anchored incidence matrix that we introduced earlier; the second is Ũ, which is a generalization of the incidence matrix, as specified by Definition 2 and Lemma 1 in the following. Fig. 3 reports the matrices A and Ũ for a toy example with four poses.
Definition 2 (Unit gain graphs) A unit gain graph (see, e.g., [60]) is a graph in which to each orientation of an edge (i, j) is assigned a complex number z̃_ij (with |z̃_ij| = 1), which is the inverse of the complex number z̃_ji = 1/z̃_ij assigned to the opposite orientation (j, i). Moreover, a complex incidence matrix of a unit gain graph is a matrix in which each row corresponds to an edge, and the row corresponding to edge e = (i, j) has −z̃_ij on the i-th column, +1 on the j-th column, and zero elsewhere.
Roughly speaking, a unit gain graph describes a problem in which we can "flip" the orientation of an edge by inverting the corresponding complex weight. To understand what this property means in our context, recall the definition (12), and consider the following chain of equalities:

‖r_j − R_ij r_i‖₂ = ‖R_ijᵀ(r_j − R_ij r_i)‖₂ = ‖R_ijᵀ r_j − r_i‖₂,   (26)

which, written in the complex domain, become:

|r̃_j − e^{ιθ_ij} r̃_i| = |e^{−ιθ_ij} r̃_j − r̃_i|.   (27)

Eq. (27) essentially says that the term ‖Ũr̃‖₂² does not change if we flip the orientation of an edge and invert the relative rotation measurement. The proof of the following lemma is straightforward from (27).

Figure 3: Example of incidence matrix, anchored incidence matrix, and complex incidence matrix, for the toy PGO problem on the top left. If R_ij = R(θ_ij) is the relative rotation measurement associated to edge (i, j), then the matrix Ũ can be seen as the incidence matrix of a unit gain graph with gain e^{ιθ_ij} associated to each edge (i, j).
Lemma 1 (Properties ofŨ )
MatrixŨ is a complex incidence matrix of a unit gain graph with weights Rij ∨ = e θ ji associated to each edge (i, j).
Our interest towards unit gain graphs is motivated by the recent results in [60] on the spectrum of the incidence matrix of those graphs. Using these results, we can characterize the presence of eigenvalues in zero for the matrix W̃, as specified in the following proposition (proof in Appendix 8.5).
Proposition 6 (Zero eigenvalues in W̃) The complex anchored pose graph matrix W̃ has a single eigenvalue in zero if and only if the pose graph is balanced or is a tree.
Besides analyzing the spectrum of W̃, it is of interest to understand how the complex matrix W̃ relates to the real matrix W. The following proposition states that there is a tight correspondence between the eigenvalues of the real pose graph matrix W and its complex counterpart W̃.

Proposition 7 (Spectrum of W and W̃) Every eigenvalue of W̃ is an eigenvalue of W with double multiplicity; hence the spectrum of W is obtained by repeating twice each eigenvalue of W̃.
4 Lagrangian duality in PGO
In the previous section we wrote the PGO problem in complex variables as per eq. (23). In the following, we refer to this problem as the primal PGO problem, which, defining x̃ ≐ [ρ̃ᵀ r̃ᵀ]ᵀ, can be written in compact form as

f* = min_{x̃} x̃* W̃ x̃  s.t. |x̃_i|² = 1, i = n, ..., 2n−1.   (28)

In this section we derive the Lagrangian dual of (28), which is given in Section 4.1. Then, in Section 4.2, we discuss an SDP relaxation of (28), which can be interpreted as the dual of the dual problem. Finally, in Section 4.3 we analyze the properties of the dual problem, and discuss how it relates with the primal PGO problem.
4.1 The dual problem
The Lagrangian of the primal problem (28) is

L(x̃, λ) = x̃* W̃ x̃ + Σ_{i=1}^{n} λ_i (1 − |r̃_i|²),

where λ_i ∈ R, i = 1, ..., n, are the Lagrange multipliers (or dual variables).
Recalling the structure of W̃ from (24), the Lagrangian becomes:

L(x̃, λ) = x̃* W̃(λ) x̃ + Σ_{i=1}^{n} λ_i,

where for notational convenience we defined

W̃(λ) ≐ [ L, S̃ ; S̃*, Q̃(λ) ],  with Q̃(λ) ≐ Q̃ − diag(λ₁, ..., λ_n).

The dual function d : Rⁿ → R is the infimum of the Lagrangian with respect to x̃:

d(λ) ≐ inf_{x̃} L(x̃, λ).   (29)

For any choice of λ the dual function provides a lower bound on the optimal value of the primal problem [7, Section 5.1.3]. Therefore, the Lagrangian dual problem looks for a maximum of the dual function over λ:

d* = max_λ d(λ).   (30)

The infimum over x̃ of L(x̃, λ) drifts to −∞ unless W̃(λ) ⪰ 0. Therefore we can safely restrict the maximization to vectors λ that are such that W̃(λ) ⪰ 0; these are called dual-feasible. Moreover, at any dual-feasible λ, the x̃ minimizing the Lagrangian are those that make x̃* W̃(λ) x̃ = 0. Therefore, (30) reduces to the following dual problem

d* = max_λ Σ_{i=1}^{n} λ_i  s.t. W̃(λ) ⪰ 0.   (31)

The importance of the dual problem is twofold. First, it holds that

d* ≤ f*.   (32)

This property is called weak duality, see, e.g., [7, Section 5.2.2]. For particular problems the inequality (32) becomes an equality, and in such cases we say that strong duality holds. Second, since d(λ) is concave (minimum of affine functions), the dual problem (31) is always convex in λ, regardless of the convexity properties of the primal problem. The dual PGO problem (31) is a semidefinite program (SDP). For a given λ, we denote by X(λ) the set of x̃ that attain the optimal value in problem (29), if any. Since we already observed that for any dual-feasible λ the points x̃ that minimize the Lagrangian are such that x̃* W̃(λ) x̃ = 0, it follows that:

X(λ) = Kernel(W̃(λ)).   (33)

The following result ensures that if a vector in X(λ) is feasible for the primal problem, then it is also an optimal solution for the PGO problem.

Theorem 1 (Primal optimality from the dual) Let λ be dual feasible and let x̃ ∈ X(λ). If x̃ is primal feasible, i.e., |x̃_i| = 1 for i = n, ..., 2n−1, then x̃ is an optimal solution of the primal problem (28), and the duality gap is zero.
A proof of this theorem is given in Appendix 8.7.
4.2 SDP relaxation and the dual of the dual
We have seen that a lower bound d* on the optimal value f* of the primal (28) can be obtained by solving the Lagrangian dual problem (31). Here, we outline another, direct, relaxation method to obtain such a bound.
Observing that x̃* W̃ x̃ = Tr(W̃ x̃ x̃*), we rewrite (28) equivalently as

f* = min_{X̃, x̃} Tr(W̃ X̃)  s.t. Tr(E_i X̃) = 1, i = n, ..., 2n−1,  X̃ = x̃ x̃*,   (34)

where E_i is a matrix that is zero everywhere, except for the i-th diagonal element, which is one. The condition X̃ = x̃ x̃* is equivalent to (i) X̃ ⪰ 0 and (ii) X̃ has rank one. Thus, (34) is rewritten by eliminating x̃ as

f* = min_{X̃} Tr(W̃ X̃)  s.t. Tr(E_i X̃) = 1, i = n, ..., 2n−1,  X̃ ⪰ 0,  rank(X̃) = 1.   (35)

Dropping the rank constraint, which is non-convex, we obtain the following SDP relaxation (see, e.g., [80]) of the primal problem:

s* = min_{X̃} Tr(W̃ X̃)  s.t. Tr(E_i X̃) = 1, i = n, ..., 2n−1,  X̃ ⪰ 0,   (36)

which we can also rewrite as

s* = min_{X̃} Tr(W̃ X̃)  s.t. X̃_ii = 1, i = n, ..., 2n−1,  X̃ ⪰ 0,   (37)

where X̃_ii denotes the i-th diagonal entry in X̃. Obviously, s* ≤ f*, since the feasible set of (37) contains that of (35). One may then ask what is the relation between the Lagrangian dual and the SDP relaxation (37): the answer is that the former is the dual of the latter, hence, under constraint qualification, it holds that s* = d*, i.e., the SDP relaxation and the Lagrangian dual approach yield the same lower bound on f*. This is formalized in the following proposition.
Proposition 8 The Lagrangian dual of problem (37) is problem (31), and vice-versa. Strong duality holds between these two problems, i.e., d* = s*. Moreover, if the optimal solution X̃* of (37) has rank one, then s* = f*, and hence d* = f*.

Proof. The fact that the SDPs (37) and (31) are related by duality can be found in standard textbooks (e.g. [7, Example 5.13]); moreover, since these are convex programs, under constraint qualification, the duality gap is zero, i.e., d* = s*. To prove that rank(X̃*) = 1 ⇒ s* = d* = f*, we observe that if rank(X̃*) = 1, then X̃* = x̃ x̃* for some x̃; hence (i) Tr(W̃ X̃*) = x̃* W̃ x̃, and (ii) x̃ is primal feasible, since X̃*_ii = |x̃_i|² = 1 for i = n, ..., 2n−1. Therefore s* ≥ f*, which, combined with s* ≤ f*, proves the claim. □
To the best of our knowledge this is the first time in which the SDP relaxation has been proposed to solve PGO. For the rotation subproblem, SDP relaxations have been proposed in [71,66,33]. According to Proposition 8, one advantage of the SDP relaxation approach is that we can check a posteriori if the duality (or, in this case, the relaxation) gap is zero, from the optimal solution X̃*. Indeed, if one solves (37) and finds that the optimal X̃* has rank one, then we actually solved (28), hence the relaxation gap is zero. Moreover, in this case, from the spectral decomposition of X̃* we can get a vector x̃* such that X̃* = (x̃*)(x̃*)*, and this vector is an optimal solution to the primal problem.
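As an illustration, the relaxation (37) and the a-posteriori rank-one check translate directly into code. The sketch below is our own illustration (not the paper's implementation); it assumes CVXPY with its complex/Hermitian variable support and a W_tilde matrix built, e.g., with the complexify helper above:

```python
import numpy as np
import cvxpy as cp

def sdp_relaxation(W_tilde, n):
    """Solve the SDP relaxation (37): min Tr(W X) s.t. X is Hermitian PSD
    and the diagonal entries corresponding to rotations equal one."""
    N = W_tilde.shape[0]                         # N = 2n - 1
    X = cp.Variable((N, N), hermitian=True)
    constraints = [X >> 0]
    constraints += [cp.real(X[i, i]) == 1 for i in range(n - 1, N)]
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(W_tilde @ X))), constraints)
    prob.solve()
    # If X has (numerical) rank one, the relaxation is tight and a primal
    # optimal x is the scaled leading eigenvector of X.
    vals, vecs = np.linalg.eigh(X.value)
    x = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))
    return prob.value, vals, x
```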
In the following section we derive similar a-posteriori conditions for the dual problem (31). These conditions enable the computation of a primal optimal solution. Moreover, they allow discussing the uniqueness of such a solution. Furthermore, we prove that in special cases we can provide a-priori conditions that guarantee that the duality gap is zero.
4.3 Analysis of the dual problem
In this section we provide conditions under which the duality gap is zero. These conditions depend on the spectrum of W̃(λ*), which arises from the solution of (31). We refer to W̃(λ*) as the penalized pose graph matrix. A first proposition establishes that (31) attains an optimal solution.
Proposition 9 The optimal value d* in (31) is attained at a finite λ*. Moreover, the penalized pose graph matrix W̃(λ*) has an eigenvalue in 0.
Proof. Since $\tilde{W}(\lambda) \succeq 0$ implies that the diagonal entries are nonnegative, the feasible set of (31) is contained in the set $\{\lambda : \tilde{W}_{ii} - \lambda_i \geq 0,\ i = n, \ldots, 2n-1\}$ (recall that the $\tilde{W}_{ii}$ are real according to Remark 2). On the other hand, $\lambda^l = 0_n$ is feasible, and all feasible points in the set $\{\lambda : \lambda_i \geq 0\}$ yield an objective that is at least as good as the objective value at $\lambda^l$. Therefore, the problem is equivalent to maximizing $\sum_i \lambda_i$ subject to the original constraint, plus a box constraint $0 \leq \lambda_i \leq \tilde{W}_{ii}$. Thus we maximize a linear function over a compact set, hence a finite optimal solution $\lambda^\star$ must be attained. Now let us prove that $\tilde{W}(\lambda^\star)$ has an eigenvalue in zero. Assume by contradiction that $\tilde{W}(\lambda^\star) \succ 0$. From the Schur complement rule we know that $\tilde{W}(\lambda) \succeq 0$ if and only if its top-left block $L$ is positive semidefinite and the Schur complement of $L$ in $\tilde{W}(\lambda)$ is positive semidefinite. The condition $L \succeq 0$ is always satisfied for a connected graph, since $L = A^* A$, and the anchored incidence matrix $A$, obtained by removing a node from the original incidence matrix, is always full-rank for connected graphs [67, Section 19.3]. Therefore, our assumption $\tilde{W}(\lambda^\star) \succ 0$ implies that the Schur complement is positive definite. Consider then $\bar{\lambda} = \lambda^\star + \varepsilon \mathbf{1}_n$ for a sufficiently small $\varepsilon > 0$: by continuity of the eigenvalues, $\tilde{W}(\bar{\lambda}) \succeq 0$, thus $\bar{\lambda}$ is dual feasible, and $\sum_i \bar{\lambda}_i > \sum_i \lambda^\star_i$, which would contradict the optimality of $\lambda^\star$. We thus proved that $\tilde{W}(\lambda^\star)$ must have a zero eigenvalue.
Proposition 10 (No duality gap) If the zero eigenvalue of the penalized pose graph matrix $\tilde{W}(\lambda^\star)$ is simple, then the duality gap is zero, i.e., $d^\star = f^\star$.
In the following we say that $\tilde{W}(\lambda^\star)$ satisfies the single zero eigenvalue property (SZEP) if its zero eigenvalue is simple. The following corollary provides a more explicit relation between the solution of the primal and the dual problem when $\tilde{W}(\lambda^\star)$ satisfies the SZEP.
Corollary 1 (SZEP $\Rightarrow \tilde{x}^\star \in X(\lambda^\star)$) If the zero eigenvalue of $\tilde{W}(\lambda^\star)$ is simple, then the set $X(\lambda^\star)$ contains a primal optimal solution. Moreover, the primal optimal solution is unique, up to an arbitrary rotation.
Proof. Let $\tilde{x}^\star$ be a primal optimal solution, and let $f^\star = (\tilde{x}^\star)^* \tilde{W} (\tilde{x}^\star)$ be the corresponding optimal value. From Proposition 10 we know that the SZEP implies that the duality gap is zero, i.e., $d^\star = f^\star$, hence

$$\sum_{i=n}^{2n-1} \lambda^\star_i = (\tilde{x}^\star)^* \tilde{W}\, \tilde{x}^\star. \qquad (41)$$

Since $\tilde{x}^\star$ is a solution of the primal, it must be feasible, hence $|\tilde{x}^\star_i|^2 = 1$, $i = n, \ldots, 2n-1$. Therefore, the following equalities hold:

$$\sum_{i=n}^{2n-1} \lambda^\star_i = \sum_{i=n}^{2n-1} \lambda^\star_i\, |\tilde{x}^\star_i|^2. \qquad (42)$$

Plugging (42) back into (41):

$$(\tilde{x}^\star)^* \tilde{W}(\lambda^\star)\, \tilde{x}^\star = 0, \qquad (43)$$

which proves that $\tilde{x}^\star$ belongs to the null space of $\tilde{W}(\lambda^\star)$, which coincides with our definition of $X(\lambda^\star)$ in (33), proving the first claim. Let us prove the second claim. From the first claim we know that the SZEP implies that any primal optimal solution is in $X(\lambda^\star)$. Moreover, when $\tilde{W}(\lambda^\star)$ has a single eigenvalue in zero, then $X(\lambda^\star) = \mathrm{Kernel}(\tilde{W}(\lambda^\star))$ is 1-dimensional and can be written as $X(\lambda^\star) = \{\tilde{\gamma}\, \tilde{x}^\star : \tilde{\gamma} \in \mathbb{C}\}$, or, using the polar form $\tilde{\gamma} = \eta\, e^{i\phi}$:

$$X(\lambda^\star) = \{\eta\, e^{i\phi}\, \tilde{x}^\star : \eta \geq 0,\ \phi \in (-\pi, \pi]\}. \qquad (44)$$

From (44) it is easy to see that any $\eta \neq 1$ would alter the norm of $\tilde{x}^\star$, leading to a solution that is not primal feasible. On the other hand, any $e^{i\phi}\, \tilde{x}^\star$ belongs to $X(\lambda^\star)$ and is primal feasible ($|e^{i\phi}\, \tilde{x}^\star_i| = |\tilde{x}^\star_i|$), hence by Theorem 1 any $e^{i\phi}\, \tilde{x}^\star$ is primal optimal. We conclude the proof by noting that the multiplication by $e^{i\phi}$ corresponds to a global rotation of the pose estimate $\tilde{x}^\star$: this can be easily understood from the relation (18).
Proposition 10 provides an a-posteriori condition on the duality gap that requires solving the dual problem; while Section 6 will show that this condition is very useful in practice, it is also interesting to devise a-priori conditions that can be assessed from the pose graph matrix $\tilde{W}$, without solving the dual problem. A first step in this direction is the following proposition.
Proposition 11 (Strong duality in trees and balanced pose graphs) Strong duality holds for any balanced pose graph optimization problem, and for any pose graph whose underlying graph is a tree.
Proof. Balanced pose graphs and trees have in common the fact that they attain $f^\star = 0$ (Propositions 1-2). By weak duality we know that $d^\star \leq f^\star = 0$. However, $\lambda = 0_n$ is feasible (as $\tilde{W} \succeq 0$) and attains $d(\lambda) = 0$, hence $\lambda = 0_n$ is dual optimal, proving $d^\star = f^\star$.
Algorithms
In this section we exploit the results presented so far to devise an algorithm to solve PGO. The idea is to solve the dual problem, and use $\lambda^\star$ and $\tilde{W}(\lambda^\star)$ to compute a solution for the primal PGO problem. We split the presentation into two sections: Section 5.1 discusses the case in which $\tilde{W}(\lambda^\star)$ satisfies the SZEP, while Section 5.2 discusses the case in which $\tilde{W}(\lambda^\star)$ has multiple eigenvalues in zero. This distinction is important, as in the former case (which is the most common in practice) we can compute a provably optimal solution for PGO, while in the latter case our algorithm returns an estimate that is not necessarily optimal. Finally, in Section 5.3 we summarize our algorithm and present the corresponding pseudocode.
Case 1: $\tilde{W}(\lambda^\star)$ satisfies the SZEP
According to Corollary 1, if $\tilde{W}(\lambda^\star)$ has a single zero eigenvalue, then the optimal solution of the primal problem $\tilde{x}^\star$ is in $X(\lambda^\star)$, where $X(\lambda^\star)$ coincides with the null space of $\tilde{W}(\lambda^\star)$, as per (33). Moreover, this null space is 1-dimensional, hence it can be written explicitly as:

$$\mathrm{Kernel}(\tilde{W}(\lambda^\star)) = \{\gamma\, \tilde{x}^\star : \gamma \in \mathbb{C}\}, \qquad (45)$$

which means that any vector in the null space is a scalar multiple of the primal optimal solution $\tilde{x}^\star$. This observation suggests a computational approach to compute $\tilde{x}^\star$. We can first compute an eigenvector $\tilde{v}$ corresponding to the single zero eigenvalue of $\tilde{W}(\lambda^\star)$ (this is a vector in the null space of $\tilde{W}(\lambda^\star)$). Then, since $\tilde{x}^\star$ must be primal feasible (i.e., $|\tilde{x}^\star_n| = \ldots = |\tilde{x}^\star_{2n-1}| = 1$), we compute a suitable scalar $\gamma$ that makes $\frac{1}{\gamma}\tilde{v}$ primal feasible. This scalar is clearly $\gamma = |\tilde{v}_n| = \ldots = |\tilde{v}_{2n-1}|$ (we essentially need to normalize the norm of the last $n$ entries of $\tilde{v}$). The existence of a suitable $\gamma$, and hence the fact that $|\tilde{v}_n| = \ldots = |\tilde{v}_{2n-1}| > 0$, is guaranteed by Corollary 1. As a result we get the optimal solution $\tilde{x}^\star = \frac{1}{\gamma}\tilde{v}$. The pseudocode of our approach is given in Algorithm 1, and further discussed in Section 5.3.
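A minimal numpy sketch of this recovery step (the eigenvalue tolerance is an arbitrary choice; `W_lam` stands for a precomputed $\tilde{W}(\lambda^\star)$):

```python
def primal_from_szep(W_lam, n, tol=1e-8):
    """Case 1 recovery: W(lam*) satisfies the SZEP. Scale the eigenvector
    of the single zero eigenvalue so that the last n entries (the
    rotations) have unit modulus, per Corollary 1."""
    evals, evecs = np.linalg.eigh(W_lam)
    assert evals[0] < tol < evals[1], "SZEP does not hold"
    v = evecs[:, 0]                 # spans the 1-dimensional null space
    # Corollary 1 guarantees |v_n| = ... = |v_{2n-1}| > 0; the mean only
    # averages out numerical noise across the rotation entries.
    gamma = np.abs(v[-n:]).mean()
    return v / gamma                # the optimal primal solution x*
```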
Case 2: $\tilde{W}(\lambda^\star)$ does not satisfy the SZEP
Currently we are not able to compute a guaranteed optimal solution for PGO when $\tilde{W}(\lambda^\star)$ has multiple eigenvalues in zero. Nevertheless, it is interesting to exploit the solution of the dual problem for finding a (possibly suboptimal) estimate, which can be used, for instance, as an initial guess for an iterative technique.
Eigenvector method. One idea to compute a suboptimal solution from the dual problem is to follow the same approach of Section 5.1: we compute an eigenvector of $\tilde{W}(\lambda^\star)$, corresponding to one of the zero eigenvalues, and we normalize it to make it feasible. In this case, we are not guaranteed that $|\tilde{v}_n| = \ldots = |\tilde{v}_{2n-1}| > 0$ (as in the previous section), hence the normalization has to be done component-wise, for each of the last $n$ entries of $\tilde{v}$. In the following, we consider an alternative approach, which we have seen to perform better in practice (see experiments in Section 6).
Null space method. This approach is based on the insight of Theorem 1: if there is a primal feasible $\tilde{x} \in X(\lambda^\star)$, then $\tilde{x}$ must be primal optimal. Therefore we look for a vector $\tilde{x} \in X(\lambda^\star)$ that is "close" to the feasible set. According to (33), $X(\lambda^\star)$ coincides with the null space of $\tilde{W}(\lambda^\star)$. Let us denote with $\tilde{V} \in \mathbb{C}^{(2n-1) \times q}$ a basis of the null space of $\tilde{W}(\lambda^\star)$, where $q$ is the number of zero eigenvalues of $\tilde{W}(\lambda^\star)$ ($\tilde{V}$ can be computed from the singular value decomposition of $\tilde{W}(\lambda^\star)$). Any vector $\tilde{x}$ in the null space of $\tilde{W}(\lambda^\star)$ can be written as $\tilde{x} = \tilde{V}\tilde{z}$, for some vector $\tilde{z} \in \mathbb{C}^q$. Therefore we propose to compute a possibly suboptimal estimate $\tilde{x} = \tilde{V}\tilde{z}^\star$, where $\tilde{z}^\star$ solves the following optimization problem:

$$\max_{\tilde{z}}\ \sum_{i=n}^{2n-1} \mathrm{real}(\tilde{V}_i \tilde{z}) + \mathrm{imag}(\tilde{V}_i \tilde{z}) \quad \text{s.t.:}\ |\tilde{V}_i \tilde{z}|^2 \leq 1,\ i = n, \ldots, 2n-1, \qquad (46)$$

where $\tilde{V}_i$ denotes the $i$-th row of $\tilde{V}$, and $\mathrm{real}(\cdot)$ and $\mathrm{imag}(\cdot)$ return the real and the imaginary part of a complex number, respectively. For an intuitive explanation of problem (46), we notice that the feasible set of the primal problem (28) is described by $|\tilde{x}_i|^2 = 1$, for $i = n, \ldots, 2n-1$. In problem (46) we relax the equality constraints to convex inequality constraints $|\tilde{x}_i|^2 \leq 1$, for $i = n, \ldots, 2n-1$; these can be written as $|\tilde{V}_i \tilde{z}|^2 \leq 1$, recalling that we are searching in the null space of $\tilde{W}(\lambda^\star)$, which is spanned by $\tilde{V}\tilde{z}$. Then, the objective function in (46) encourages "large" elements $\tilde{V}_i \tilde{z}$, hence pushing the inequality $|\tilde{V}_i \tilde{z}|^2 \leq 1$ to be tight. While other metrics can force large entries $\tilde{V}_i \tilde{z}$, we preferred the linear metric in (46) to preserve convexity. Note that $\tilde{x} = \tilde{V}\tilde{z}^\star$, in general, is neither optimal nor feasible for our PGO problem (28), hence we need to normalize it to get a feasible estimate. The experimental section provides empirical evidence that, despite being heuristic in nature, this method performs well in practice, outperforming, among the others, the eigenvector method presented earlier in this section.
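A compact sketch of the null space method, under the same numpy/cvxpy assumptions as above; the reading of the linear objective in (46) as the sum of real and imaginary parts of $\tilde{V}_i \tilde{z}$ follows the description in the text:

```python
def null_space_estimate(W_lam, n, tol=1e-8):
    """Case 2 estimate: solve (46) over the null space of W(lam*), then
    normalize the last n entries to make the estimate primal feasible."""
    evals, evecs = np.linalg.eigh(W_lam)
    V = evecs[:, evals < tol]            # basis of the null space (q cols)
    z = cp.Variable(V.shape[1], complex=True)
    rows = V[-n:, :] @ z                 # the entries V_i z, i = n..2n-1
    obj = cp.Maximize(cp.sum(cp.real(rows) + cp.imag(rows)))
    prob = cp.Problem(obj, [cp.abs(rows) <= 1])  # relaxed unit circles
    prob.solve()
    x = V @ z.value
    x[-n:] /= np.abs(x[-n:])             # component-wise normalization
    return x
```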
Pseudocode and implementation details
The pseudocode of our algorithm is given in Algorithm 1. The first step is to solve the dual problem, and check the a-posteriori condition of Proposition 10. If the SZEP is satisfied, then we can compute the optimal solution by scaling the eigenvector of $\tilde{W}(\lambda^\star)$ corresponding to the zero eigenvalue $\mu_1$. This is the case described in Section 5.1 and is the most relevant in practice, since the vast majority of robotics problems falls in this case.
The "else" condition corresponds to the case in whichW (λ ) has multiple eigenvalue in zero. The pseudocode implements the null space approach of Section 5.2. The algorithm computes a basis for the null space ofW (λ ) and solves (46) to find a vector belonging to the null space (i.e, in the form x =Ṽz) that is close to the feasible set. Since such vector is not guaranteed to be primal feasible (and it is not in general), the algorithm normalizes the last n entries ofx =Ṽz , so to satisfy the unit norm constraints in (28). Besides returning the estimatex , the algorithm also provides an optimality certificate whenW (λ ) has a single eigenvalue in zero.
Numerical Analysis and Discussion
The objective of this section is four-fold. First, we validate our theoretical derivation, providing experimental evidence that supports the claims. Second, we show that the duality gap is zero in a vast amount of practical problems. Third, we confirm the effectiveness of Algorithm 1 to solve PGO. Fourth, we provide toy examples in which the duality gap is greater than zero, hoping that this can stimulate further investigation towards a-priori conditions that ensure zero duality gap.
Simulation setup. For each run we generate a random graph with n = 10 nodes, unless specified otherwise. We draw the position of each pose from a uniform distribution in a 10m × 10m square. Similarly, ground truth node orientations are randomly selected in (−π, +π]. Then we create a set of edges defining a spanning path of the graph (these are usually called odometric edges); moreover, we add further edges to the edge set, by connecting random pairs of nodes with probability $P_c = 0.1$ (these are usually called loop closures). From the randomly selected true poses, and for each edge (i, j) in the edge set, we generate the relative pose measurement using the following model:

$$\hat{\Delta}_{ij} = R_i^\top (p_j - p_i) + \epsilon_\Delta, \qquad \hat{R}_{ij} = R_i^\top R_j\, R(\epsilon_R), \qquad (47)$$

where $\epsilon_\Delta \in \mathbb{R}^2$ and $\epsilon_R \in \mathbb{R}$ are zero-mean Normally distributed random variables, with standard deviations $\sigma_\Delta$ and $\sigma_R$, respectively, and $R(\epsilon_R)$ is a random planar rotation of an angle $\epsilon_R$. Unless specified otherwise, all statistics are computed over 100 runs.
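For concreteness, the setup can be reproduced with a few lines of numpy; the measurement model below implements (47) as written above, and the function name and seed handling are illustrative:

```python
def random_pose_graph(n=10, Pc=0.1, sigma_d=0.1, sigma_r=0.1, seed=0):
    """Random planar pose graph: positions uniform in a 10m x 10m square,
    orientations uniform in (-pi, pi], a spanning (odometric) path plus
    loop closures added independently with probability Pc."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.0, 10.0, size=(n, 2))
    th = rng.uniform(-np.pi, np.pi, size=n)
    edges = [(i, i + 1) for i in range(n - 1)]               # odometry
    edges += [(i, j) for i in range(n) for j in range(i + 2, n)
              if rng.random() < Pc]                          # loop closures
    R = lambda a: np.array([[np.cos(a), -np.sin(a)],
                            [np.sin(a),  np.cos(a)]])
    meas = {}
    for (i, j) in edges:                                     # model (47)
        d_ij = R(th[i]).T @ (p[j] - p[i]) + sigma_d * rng.standard_normal(2)
        R_ij = R(th[i]).T @ R(th[j]) @ R(sigma_r * rng.standard_normal())
        meas[(i, j)] = (d_ij, R_ij)
    return p, th, edges, meas
```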
Spectrum of $\tilde{W}$. In Proposition 6, we showed that the complex anchored pose graph matrix $\tilde{W}$ has at most one eigenvalue in zero, and the zero eigenvalue only appears when the pose graph is balanced or is a tree. Fig. 4(a) reports the value of the smallest eigenvalue of $\tilde{W}$ (in log scale) for different $\sigma_R$, with fixed $\sigma_\Delta = 0$ m. When also $\sigma_R$ is zero, the pose graph is balanced, hence the smallest eigenvalue of $\tilde{W}$ is (numerically) zero. For increasing levels of noise, the smallest eigenvalue increases and stays away from zero. Similarly, Fig. 4(b) reports the value of the smallest observed eigenvalue of $\tilde{W}$ (in log scale) for different $\sigma_\Delta$, with fixed $\sigma_R = 0$ rad.
Duality gap is zero in many cases. This section shows that for the levels of measurement noise of practical interest, the matrix $\tilde{W}(\lambda^\star)$ satisfies the single zero eigenvalue property (SZEP), hence the duality gap is zero (Proposition 10). We consider the same measurement model of Eq. (47), and we analyze the percentage of tests in which $\tilde{W}(\lambda^\star)$ satisfies the SZEP. Fig. 5(a) shows the percentage of the experiments in which the penalized pose graph matrix $\tilde{W}(\lambda^\star)$ has a single zero eigenvalue, for different values of rotation noise $\sigma_R$, and keeping the translation noise fixed to $\sigma_\Delta = 0.1$ m (this is a typical value in mobile robotics applications). For $\sigma_R \leq 0.5$ rad, $\tilde{W}(\lambda^\star)$ satisfies the SZEP in all tests. This means that in this range of operation, Algorithm 1 is guaranteed to compute a globally-optimal solution for PGO. For $\sigma_R = 1$ rad, the percentage of successful experiments drops, while still remaining larger than 90%. Note that $\sigma_R = 1$ rad is a very large rotation noise (in robotics, typically $\sigma_R \leq 0.3$ rad [13]), and it is not far from the case in which rotation measurements are uninformative (uniformly distributed in (−π, +π]). To push our evaluation further we also tested this extreme case. When rotation noise is uniformly distributed in (−π, +π], we obtained a percentage of successful tests (single zero eigenvalue) of 69%, which confirms that the number of cases in which we can compute a globally optimal solution drops gracefully when increasing the noise levels. Fig. 5(b) shows the percentage of the experiments in which $\tilde{W}(\lambda^\star)$ has a single zero eigenvalue, for different values of translation noise $\sigma_\Delta$, and keeping the rotation noise fixed to $\sigma_R = 0.1$ rad. Also in this case, for practical noise regimes, our approach can compute a global solution in all cases. The percentage of successful tests drops to 98% when the translation noise has standard deviation 1 m. We also tested the case of uniform noise on translation measurements. When we draw the measurement noise from a uniform distribution in $[-5, 5]^2$ (recall that the poses are deployed in a 10 × 10 square), the percentage of successful experiments is 68%.
We also tested the percentage of experiments satisfying the SZEP for different levels of connectivity of the graph, controlled by the parameter $P_c$. We observed 100% successful experiments, independently of the choice of $P_c$, for $\sigma_R = \sigma_\Delta = 0.1$ and $\sigma_R = \sigma_\Delta = 0.5$. A more interesting case is shown in Fig. 5(c) and corresponds to the case $\sigma_R = \sigma_\Delta = 1$. The SZEP is always satisfied for $P_c = 0$: this is natural, as $P_c = 0$ always produces trees, for which we are guaranteed to satisfy the SZEP (Proposition 11). For $P_c = 0.2$ the SZEP fails in a few runs. Finally, increasing the connectivity beyond $P_c = 0.4$ re-establishes 100% of successful tests. This suggests that the connectivity level of the graph influences the duality gap, and better connected graphs have more chances of having zero duality gap.
Finally, we tested the percentage of experiments satisfying the SZEP for different numbers of nodes n. We tested the following numbers of nodes: n = {10, 20, 30, 40, 50}. For $\sigma_R = \sigma_\Delta = 0.1$ and $\sigma_R = \sigma_\Delta = 0.5$ the SZEP was satisfied in 100% of the tests, and we omit the results for brevity. The more challenging case $\sigma_R = \sigma_\Delta = 1$ is shown in Fig. 5(d). The percentage of successful tests increases for larger numbers of poses. We remark that current SDP solvers do not scale well to large problems, hence a Monte Carlo analysis over larger problems becomes prohibitive. We refer the reader to [14] for single-run experiments on larger PGO problems, which confirm that the duality gap is zero in problems arising in real-world robotics applications.
Performance of Algorithm 1. In this section we show that Algorithm 1 provides an effective solution for PGO. When $\tilde{W}(\lambda^\star)$ satisfies the SZEP, the algorithm is provably optimal, and it enables solving problems that are already challenging for iterative solvers. When $\tilde{W}(\lambda^\star)$ does not satisfy the SZEP, we show that the proposed approach, while not providing performance guarantees, largely outperforms competitors.
Case 1: $\tilde{W}(\lambda^\star)$ satisfies the SZEP. When $\tilde{W}(\lambda^\star)$ satisfies the SZEP, Algorithm 1 is guaranteed to produce a globally optimal solution. However, one may argue that in the regime of operation in which the SZEP holds, PGO problem instances are sufficiently "easy" that commonly used iterative techniques also perform well. In this paragraph we briefly show that the SZEP is satisfied in many instances that are hard to solve. For this purpose, we focus on the most challenging cases we discussed so far, i.e., problem instances with large rotation and translation noise. Then we consider the problems in which the SZEP is satisfied and we compare the solution of Algorithm 1, which is proven to attain $f^\star$, versus the solution of a Gauss-Newton method initialized at the true poses. Ground truth poses are an ideal initial guess (which is unfortunately available only in simulation): intuitively, the global minimum of the cost should be close to the ground truth poses (this is one of the motivations for maximum likelihood estimation). Fig. 6 shows the gap between the objective attained by the Gauss-Newton method (denoted as $f_{GN}$) and the optimal objective obtained from Algorithm 1. The figure confirms that our algorithm provides a guaranteed optimal solution in a regime that is already challenging, and in which iterative approaches may fail to converge even from a good initialization.

Case 2: $\tilde{W}(\lambda^\star)$ does not satisfy the SZEP. In this case, Algorithm 1 computes an estimate, according to the null space approach proposed in Section 5.2; we denote this approach with the label NS. To evaluate the performance of the proposed approach, we considered 100 instances in which the SZEP was not satisfied and we compared our approach against the following methods: a Gauss-Newton method initialized at the ground truth poses (GN), the eigenvector method described at the beginning of Section 5.2 (Eig), and the SDP relaxation of Section 4.2 (SDP). For the SDP approach, we compute the solution $\tilde{X}^\star$ of the relaxed problem (37). If $\tilde{X}^\star$ has rank larger than 1, we find the closest rank-1 matrix $\tilde{X}_{\text{rank-1}}$ from singular value decomposition [28]. Then we factorize $\tilde{X}_{\text{rank-1}}$ as $\tilde{X}_{\text{rank-1}} = \tilde{x}\tilde{x}^*$ ($\tilde{x}$ can be computed via Cholesky factorization of $\tilde{X}_{\text{rank-1}}$ [70]). We report the results of our comparison in the first row of Fig. 7, where we show, for different noise setups (sub-figures (a1) to (a4)), the cost of the estimate produced by the four approaches. The proposed null space approach (NS) largely outperforms the Eig and the SDP approaches, and has comparable performance with an "oracle" GN approach which knows the ground truth poses. One may also compare the performance of the approaches NS, Eig, SDP after refining the corresponding estimates with a Gauss-Newton method, which removes residual errors. The costs obtained by the different techniques, with the Gauss-Newton refinement, are shown in the second row of Fig. 7. For this case we also added one more initialization technique in the comparison: an approach (denoted EigR) that solves for rotations first, using the eigenvalue method in [70], and then applies the Gauss-Newton method from the rotation guess. Fig. 7(b1) to Fig. 7(b4) show smaller differences (on average) among the techniques, as in most cases the Gauss-Newton refinement is able to converge starting from all the compared initializations. However, for the techniques Eig, SDP, and EigR we see many red sample points, which denote cases in which the error is larger than the 75th percentile; these are the cases in which the techniques failed to converge and produced a large cost. On the other hand, the proposed NS approach is less prone to converge to a bad minimum (fewer and lower red samples).

Figure 8: (a) Toy example of a chain pose graph in which the SZEP fails. Each plot also reports the four smallest eigenvalues of the penalized pose graph matrix $\tilde{W}(\lambda^\star)$ for the corresponding PGO problem. Removing a node from the original graph may change the duality properties of the graph. In (b), (c), (d), (e), (f) nodes 1, 2, 3, 4, 5 are removed, respectively. Removing any node, except node 3, leads to a graph that satisfies the SZEP.
Chain graph counterexample and discussion. In this section we consider a simple graph topology: the chain graph. A chain graph is a graph with edges (1, 2), (2, 3), . . . , (n − 1, n), (n, 1). Removing the last edge we obtain a tree (or, more specifically, a path), for which the SZEP is always satisfied. Therefore the question is: is the SZEP always satisfied in PGO whose underlying graph is a chain? The answer, unfortunately, is no. Fig. 8(a) provides an example of a very simple chain graph with 5 nodes that fails to meet the SZEP. The figure reports the 4 smallest eigenvalues of $\tilde{W}(\lambda^\star)$ ($\mu_1, \ldots, \mu_4$), and the first two are numerically zero. If the chain graph were balanced, Proposition 11 says that the SZEP would need to be satisfied. Therefore, one may argue that failure to meet the SZEP depends on the amount of error accumulated along the loop in the graph. Surprisingly, also this intuition fails. In Fig. 8(b-f) we show the pose graphs obtained by removing a single node from the pose graph in Fig. 8(a). When removing a node, say k, we introduce a relative measurement between nodes k − 1 and k + 1 that is equal to the composition of the relative measurements associated to the edges (k − 1, k) and (k, k + 1) in the original graph. By construction, the resulting graphs have the same accumulated errors (along each loop) as the original graph. However, interestingly, they do not necessarily share the same duality properties of the original graph. The graphs obtained by removing nodes 1, 2, 4, 5 (shown in figures b, c, e, f, respectively), in fact, satisfy the SZEP. On the other hand, the graph in Fig. 8(d), obtained by removing node 3, still has 2 eigenvalues in zero. The data to reproduce these toy examples are reported in Appendix 8.8.
We conclude with a test showing that the SZEP is not only dictated by the underlying rotation subproblem, but also depends heavily on the translation part of the optimization problem. To show this we consider variations of the PGO problem in Fig. 8(a), in which we "scale" all translation measurements by a constant factor. When the scale factor is smaller than one we obtain a PGO problem in which nodes are closer to each other; for scale > 1 we obtain larger inter-nodal measurements; scale equal to 1 coincides with the original problem. Fig. 9 shows the second eigenvalue of $\tilde{W}(\lambda^\star)$ for different scalings of the original graph. Scaling down the measurements in the graph of Fig. 8(a) can re-establish the SZEP. Interestingly, this is in agreement with the convergence analysis of [10], which shows that the basin of convergence becomes larger when scaling down the inter-nodal distances.
Conclusion
We show that the application of Lagrangian duality in PGO provides an appealing approach to compute a globally optimal solution. More specifically, we propose four contributions. First, we rephrase PGO as a problem in complex variables. This allows drawing connections with the recent literature on unit gain graphs, and enables results on the spectrum of the pose graph matrix. Second, we formulate the Lagrangian dual problem and we analyze the relations between the primal and the dual solutions. Our key result proves that the duality gap is connected to the number of zero eigenvalues of the penalized pose graph matrix, which arises from the solution of the dual problem. In particular, if this matrix has a single eigenvalue in zero (SZEP), then (i) the duality gap is zero, (ii) the primal PGO problem has a unique solution (up to an arbitrary roto-translation), and (iii) the primal solution can be computed by scaling an eigenvector of the penalized pose graph matrix. The third contribution is an algorithm that returns a guaranteed optimal solution when the SZEP is satisfied, and (empirically) provides a very good estimate when the SZEP fails. Finally, we report numerical results, which show that (i) the SZEP holds for noise levels of practical robotics applications, (ii) the proposed algorithm outperforms several existing approaches, and (iii) the satisfaction of the SZEP depends on multiple factors, including graph connectivity, number of poses, and measurement noise.
Proof of Proposition 1: Zero Cost in Trees

We prove Proposition 1 by showing that, when the underlying graph is a tree, one can always build a solution attaining zero cost, via the following procedure (a numpy sketch of the propagation is given right after the procedure):

1. Set the root node i of the tree at the origin: $p_i = 0_2$, $r_i = [1\ 0]^\top$;

2. For each neighbor j of the root i: if j is an outgoing neighbor, set $r_j = R_{ij}\, r_i$ and $p_j = p_i + D_{ij}\, r_i$; otherwise set $r_j = R_{ji}^\top r_i$ and $p_j = p_i - D_{ji}\, r_j$;

3. Repeat point 2 for the unknown neighbors of every node that has been computed so far, and continue until all poses have been computed.
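A sketch of this propagation in numpy (the dictionary-based bookkeeping is an illustrative choice; `tree_meas` is assumed to map each directed tree edge (i, j) to its measurement pair (D_ij, R_ij)):

```python
import numpy as np

def propagate_tree(tree_meas, root=0):
    """Propagate poses from the root along a spanning tree so that every
    tree measurement is satisfied exactly, as in steps 1-3 above."""
    p = {root: np.zeros(2)}
    r = {root: np.array([1.0, 0.0])}
    frontier = [root]
    while frontier:
        i = frontier.pop()
        for (a, b), (D, R) in tree_meas.items():
            if a == i and b not in p:        # outgoing neighbor
                r[b] = R @ r[i]
                p[b] = p[i] + D @ r[i]
                frontier.append(b)
            elif b == i and a not in p:      # incoming neighbor
                r[a] = R.T @ r[i]
                p[a] = p[i] - D @ r[a]
                frontier.append(a)
    return p, r
```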
Let us now show that this procedure produces a set of poses that annihilates the objective in (8). According to the procedure, we set the first node (the root) to the origin: $p_1 = 0_2$, $r_1 = [1\ 0]^\top$; then, before moving to the second step of the procedure, we rearrange the terms in (8): we separate the edges into two sets $E = E_1 \cup \bar{E}_1$, where $E_1$ is the set of edges incident on node 1 (the root), and $\bar{E}_1$ are the remaining edges. Then the cost can be written as:

$$f = \sum_{(i,j) \in E_1} \left( \|p_j - p_i - D_{ij} r_i\|^2 + \|r_j - R_{ij} r_i\|^2 \right) + \sum_{(i,j) \in \bar{E}_1} \left( \|p_j - p_i - D_{ij} r_i\|^2 + \|r_j - R_{ij} r_i\|^2 \right). \qquad (48)$$

We can further split the set $E_1$ into edges that have node 1 as a tail (i.e., edges in the form (1, j)) and edges that have node 1 as head (i.e., (j, 1)):

$$f = \sum_{(1,j) \in E_1} \left( \|p_j - p_1 - D_{1j} r_1\|^2 + \|r_j - R_{1j} r_1\|^2 \right) + \sum_{(j,1) \in E_1} \left( \|p_1 - p_j - D_{j1} r_j\|^2 + \|r_1 - R_{j1} r_j\|^2 \right) + \sum_{(i,j) \in \bar{E}_1} \left( \|p_j - p_i - D_{ij} r_i\|^2 + \|r_j - R_{ij} r_i\|^2 \right). \qquad (49)$$

Now, we set each node j in the first two summands as prescribed in step 2 of the procedure. By inspection one can verify that this choice annihilates the first two summands and the cost becomes:

$$f = \sum_{(i,j) \in \bar{E}_1} \left( \|p_j - p_i - D_{ij} r_i\|^2 + \|r_j - R_{ij} r_i\|^2 \right). \qquad (50)$$

Now we select a node k that has been computed at the previous step, but has some neighbor that is still unknown. As done previously, we split the set $\bar{E}_1$ into two disjoint subsets: $\bar{E}_1 = E_k \cup \bar{E}_k$, where the set $E_k$ contains the edges in $\bar{E}_1$ that are incident on k, and $\bar{E}_k$ contains the remaining edges:

$$f = \sum_{(k,j) \in E_k} \left( \|p_j - p_k - D_{kj} r_k\|^2 + \|r_j - R_{kj} r_k\|^2 \right) + \sum_{(j,k) \in E_k} \left( \|p_k - p_j - D_{jk} r_j\|^2 + \|r_k - R_{jk} r_j\|^2 \right) + \sum_{(i,j) \in \bar{E}_k} \left( \|p_j - p_i - D_{ij} r_i\|^2 + \|r_j - R_{ij} r_i\|^2 \right). \qquad (51)$$

Again, setting neighbors j as prescribed in step 2 of the procedure annihilates the first two summands in (51). Repeating the same reasoning for all nodes that have been computed, but still have unknown neighbors, we can easily show that all terms in (51) become zero (the assumption of graph connectivity ensures that we can reach all nodes), proving the claim.
Proof of Proposition 2: Zero Cost in Balanced Graphs
Similarly to Appendix 8.1, we prove Proposition 2 by showing that in balanced graphs one can always build a solution that attains zero cost. By the assumption of connectivity, we can find a spanning tree $T$ of the graph, and split the terms in the cost function accordingly:

$$f = \sum_{(i,j) \in T} \left( \|p_j - p_i - D_{ij} r_i\|^2 + \|r_j - R_{ij} r_i\|^2 \right) + \sum_{(i,j) \in \bar{T}} \left( \|p_j - p_i - D_{ij} r_i\|^2 + \|r_j - R_{ij} r_i\|^2 \right), \qquad (52)$$

where $\bar{T} \doteq E \setminus T$ are the chords of the graph w.r.t. $T$. Then, using the procedure in Appendix 8.1 we construct a solution $\{r_i, p_i\}$ that attains zero cost for the measurements in the spanning tree $T$. Therefore, our claim only requires us to demonstrate that the solution built from the spanning tree also annihilates the terms in $\bar{T}$:

$$\sum_{(i,j) \in \bar{T}} \|p_j - p_i - D_{ij} r_i\|^2 + \|r_j - R_{ij} r_i\|^2 = 0. \qquad (53)$$

To prove the claim, we consider one of the chords in $\bar{T}$ and we show that the cost at $\{r_i, p_i\}$ is zero. The cost associated to a chord $(i,j) \in \bar{T}$ is:

$$\|p_j - p_i - D_{ij} r_i\|^2 + \|r_j - R_{ij} r_i\|^2. \qquad (54)$$

Now consider the unique path $P_{ij}$ in the spanning tree $T$ that connects $i$ to $j$, and number the nodes along this path as $i, i+1, \ldots, j-1, j$.
Let us start by analyzing the second summand in (54), which corresponds to the rotation measurements. According to the procedure in Appendix 8.1 used to build the solution for $T$, we propagate the estimate from the root of the tree. Then it is easy to see that:

$$r_j = R_{j-1\,j} \cdots R_{i\,i+1}\, r_i, \qquad (55)$$

where $R_{k\,k+1}$ is the rotation associated to the edge $(k, k+1)$, or its transpose if the edge is in the form $(k+1, k)$ (i.e., it is traversed backwards along $P_{ij}$). Now we notice that the assumption of balanced graph implies that the measurements compose to the identity along every cycle in the graph. Since the chord $(i, j)$ and the path $P_{ij}$ form a cycle in the graph, it holds:

$$R_{ij} = R_{j-1\,j} \cdots R_{i\,i+1}. \qquad (56)$$

Substituting (56) back into (55) we get:

$$r_j = R_{ij}\, r_i, \qquad (57)$$

which can be easily seen to annihilate the second summand in (54). Now we only need to demonstrate that also the first summand in (54) is zero. The procedure in Appendix 8.1 leads to the following estimate for the position of node $j$:

$$p_j = p_i + \sum_{k=i}^{j-1} D_{k\,k+1}\, r_k. \qquad (58)$$

The assumption of balanced graph implies that position measurements compose to zero along every cycle, hence:

$$\sum_{k=i}^{j-1} D_{k\,k+1}\, r_k - D_{ij}\, r_i = 0, \qquad (59)$$

or equivalently:

$$D_{ij}\, r_i = \sum_{k=i}^{j-1} D_{k\,k+1}\, r_k. \qquad (60)$$

Substituting (60) back into (58) we obtain:

$$p_j = p_i + D_{ij}\, r_i, \qquad (61)$$

which annihilates the first summand in (54), concluding the proof.
Proof of Proposition 4: Properties of W
Let us prove that W has (at least) two eigenvalues in zero. We already observed that the top-left block of W is $\bar{L} = L \otimes I_2$, where $L$ is the Laplacian matrix of the graph underlying the PGO problem. The Laplacian $L$ of a connected graph has a single eigenvalue in zero, and the corresponding eigenvector is $\mathbf{1}_n$ (see, e.g., [18, Sections 1.2-1.3]), i.e., $L \cdot \mathbf{1}_n = 0$. Using this property, it is easy to show that the matrix $N \doteq [\mathbf{0}_n\ \mathbf{1}_n] \otimes I_2$ is in the nullspace of W, i.e., $WN = 0$. Since $N$ has rank 2, this implies that the nullspace of W has at least dimension 2, which proves the first claim.
where we defined $\beta_i$. Clearly, $Q$ has blocks in $\alpha SO(2)$, and the diagonal blocks are nonnegative multiples of $I_2$.
Now, it only remains to inspect the structure of $\bar{A}_D$. The matrix $\bar{A}_D$ has the following structure: $[\bar{A}_D]_{ii} = \sum_{j \in N_i^{\mathrm{out}}} D_{ij}$, $i = 1, \ldots, n$. Note that $\sum_{j \in N_i^{\mathrm{out}}} D_{ij}$ is a sum of matrices in $\alpha SO(2)$, hence it also belongs to $\alpha SO(2)$. Therefore, all blocks of $\bar{A}_D$ are in $\alpha SO(2)$, thus concluding the proof.
Proof of Proposition 5: Cost in the Complex Domain
Let us prove the equivalence between the complex cost and its real counterpart, as stated in Proposition 5. We first observe that the dot product between two 2-vectors $x_1, x_2 \in \mathbb{R}^2$ can be written in terms of their complex representations $\tilde{x}_1 \doteq x_1^\vee$ and $\tilde{x}_2 \doteq x_2^\vee$, as follows:

$$x_1^\top x_2 = \mathrm{real}(\tilde{x}_1^*\, \tilde{x}_2). \qquad (65)$$

Moreover, we know that the action of a matrix $Z \in \alpha SO(2)$ can be written as the product of complex numbers, see (18). Combining (65) and (18) we get:

$$x_1^\top Z\, x_2 = \mathrm{real}(\tilde{x}_1^*\, \tilde{z}\, \tilde{x}_2), \qquad (66)$$

where $\tilde{z} = Z^\vee$. Furthermore, when $Z$ is a multiple of the identity matrix, it is easy to see that $\tilde{z} = Z^\vee$ is actually a real number, and Eq. (66) becomes:

$$x_1^\top Z\, x_2 = \tilde{z}\; \mathrm{real}(\tilde{x}_1^*\, \tilde{x}_2). \qquad (67)$$

With the machinery introduced so far, we are ready to rewrite the cost $x^\top W x$ in complex form. Since $W$ is symmetric, the product becomes:

$$x^\top W x = \sum_{i} x_i^\top [W]_{ii}\, x_i + 2 \sum_{i < j} x_i^\top [W]_{ij}\, x_j. \qquad (68)$$

Using the fact that $[W]_{ii}$ is a multiple of the identity matrix, $\tilde{W}_{ii} \doteq [W]_{ii}^\vee \in \mathbb{R}$, and using (67), we conclude $x_i^\top [W]_{ii}\, x_i = \tilde{x}_i^*\, \tilde{W}_{ii}\, \tilde{x}_i$. Moreover, defining $\tilde{W}_{ij} \doteq [W]_{ij}^\vee$ (these will be complex numbers, in general), and using (66), eq. (68) becomes:

$$x^\top W x = \tilde{x}^* \tilde{W} \tilde{x}, \qquad (69)$$

where we completed the lower triangular part of $\tilde{W}$ as $\tilde{W}_{ji} = \tilde{W}_{ij}^*$.
Proof of Proposition 6: Zero Eigenvalues in $\tilde{W}$
Let us denote with $N_0$ the number of zero eigenvalues of the pose graph matrix $\tilde{W}$. $N_0$ can be written in terms of the dimension of the matrix ($\tilde{W} \in \mathbb{C}^{(2n-1) \times (2n-1)}$) and the rank of the matrix:

$$N_0 = (2n-1) - \mathrm{rank}(\tilde{W}). \qquad (70)$$

Now, recalling the factorization of $\tilde{W}$ given in (25), we note that:

$$\mathrm{rank}(\tilde{W}) = \mathrm{rank}\begin{bmatrix} A & D \\ 0 & \tilde{U} \end{bmatrix} = \mathrm{rank}(A) + \mathrm{rank}(\tilde{U}), \qquad (71)$$

where the second relation follows from the upper triangular structure of the matrix. Now, we know from [67, Section 19.3] that the anchored incidence matrix $A$, obtained by removing a row from the incidence matrix of a connected graph, is full rank:

$$\mathrm{rank}(A) = n - 1. \qquad (72)$$

Therefore:

$$N_0 = (2n-1) - (n-1) - \mathrm{rank}(\tilde{U}) = n - \mathrm{rank}(\tilde{U}). \qquad (73)$$

Now, since we recognized that $\tilde{U}$ is the complex incidence matrix of a unit gain graph (Lemma 1), we can use the result of Lemma 2.3 in [60], which says that:

$$\mathrm{rank}(\tilde{U}) = n - b, \qquad (74)$$

where $b$ is the number of connected components in the graph that are balanced. Since we are working on a connected graph (Assumption 1), $b$ can be either one (balanced graph or tree) or zero otherwise. Using (73) and (74), we obtain $N_0 = b$, which implies that $N_0 = 1$ for balanced graphs or trees, and $N_0 = 0$ otherwise.
Proof of Proposition 7: Spectrum of Complex and Real Pose Graph Matrices
Recall that any Hermitian matrix has real eigenvalues, and possibly complex eigenvectors. Let $\mu \in \mathbb{R}$ be an eigenvalue of $\tilde{W}$, associated with an eigenvector $\tilde{v} \in \mathbb{C}^{2n-1}$, i.e.,

$$\tilde{W} \tilde{v} = \mu \tilde{v}. \qquad (75)$$

From equation (75) we have, for $i = 1, \ldots, 2n-1$,

$$\sum_{j} [W]_{ij}\, v_j = \mu\, v_i, \qquad (76)$$

where $v_i$ is such that $v_i^\vee = \tilde{v}_i$. Since eq. (76) holds for all $i = 1, \ldots, 2n-1$, it can be written in compact form as:

$$W v = \mu v, \qquad (77)$$

hence $v$ is an eigenvector of the real anchored pose graph matrix $W$, associated with the eigenvalue $\mu$. This proves that any eigenvalue of $\tilde{W}$ is also an eigenvalue of $W$. To prove that the eigenvalue $\mu$ is actually repeated twice in $W$, consider now equation (75) and multiply both members by the complex number $e^{i\pi/2}$:

$$\tilde{W} (\tilde{v}\, e^{i\pi/2}) = \mu\, (\tilde{v}\, e^{i\pi/2}). \qquad (78)$$

For $i = 1, \ldots, 2n-1$, we have:

$$\sum_{j} [W]_{ij}\, w_j = \mu\, w_i, \qquad (79)$$

where $w_i$ is such that $w_i^\vee = \tilde{v}_i\, e^{i\pi/2}$. Since eq. (79) holds for all $i = 1, \ldots, 2n-1$, it can be written in compact form as:

$$W w = \mu w, \qquad (80)$$

hence also $w$ is an eigenvector of $W$ associated with the eigenvalue $\mu$.
Now it only remains to demonstrate that $v$ and $w$ are linearly independent. One can readily check that, if $\tilde{v}_i$ is in the form $\tilde{v}_i = \eta_i\, e^{i \theta_i}$, then

$$v_i = \eta_i \begin{bmatrix} \cos(\theta_i) \\ \sin(\theta_i) \end{bmatrix}. \qquad (81)$$

Moreover, observing that $\tilde{v}_i\, e^{i \pi/2} = \eta_i\, e^{i(\theta_i + \pi/2)}$, then

$$w_i = \eta_i \begin{bmatrix} \cos(\theta_i + \pi/2) \\ \sin(\theta_i + \pi/2) \end{bmatrix} = \eta_i \begin{bmatrix} -\sin(\theta_i) \\ \cos(\theta_i) \end{bmatrix}. \qquad (82)$$

From (81) and (82) it is easy to see that $v^\top w = 0$, thus $v, w$ are orthogonal, hence independent. To each eigenvalue $\mu$ of $\tilde{W}$ there thus corresponds an identical eigenvalue of $W$, of geometric multiplicity at least two. Since $\tilde{W}$ has $2n-1$ eigenvalues and $W$ has $2(2n-1)$ eigenvalues, we conclude that to each eigenvalue $\mu$ of $\tilde{W}$ there correspond exactly two eigenvalues of $W$ in $\mu$. The previous proof also shows how the set of orthogonal eigenvectors of $W$ is related to the set of eigenvectors of $\tilde{W}$.
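Proposition 7 is easy to verify numerically: realify a random Hermitian matrix by replacing each complex entry $a + ib$ with the 2x2 block [[a, -b], [b, a]] (the block action underlying the $\vee$ map) and compare spectra. A sketch:

```python
def realify(Wt):
    """Real representation of a complex Hermitian matrix: each entry
    a + ib becomes the 2x2 block [[a, -b], [b, a]]."""
    m = Wt.shape[0]
    W = np.zeros((2 * m, 2 * m))
    for i in range(m):
        for j in range(m):
            a, b = Wt[i, j].real, Wt[i, j].imag
            W[2 * i:2 * i + 2, 2 * j:2 * j + 2] = [[a, -b], [b, a]]
    return W

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
Wt = (A + A.conj().T) / 2                   # random Hermitian matrix
mu = np.linalg.eigvalsh(Wt)                 # 5 real eigenvalues
mu2 = np.linalg.eigvalsh(realify(Wt))       # the same values, each doubled
assert np.allclose(np.repeat(mu, 2), mu2)
```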
Proof of Theorem 1: Primal-dual Optimal Pairs
We prove that, given $\lambda \in \mathbb{R}^n$, if an $\tilde{x}_\lambda \in X(\lambda)$ is primal feasible, then $\tilde{x}_\lambda$ is primal optimal; moreover, $\lambda$ is dual optimal, and the duality gap is zero. By weak duality we know that for any $\lambda$:

$$L(\tilde{x}_\lambda, \lambda) = \min_{\tilde{x}} L(\tilde{x}, \lambda) = d(\lambda) \leq f^\star. \qquad (83)$$

However, if $\tilde{x}_\lambda$ is primal feasible, by optimality of $f^\star$ it must also hold

$$f(\tilde{x}_\lambda) \geq f^\star. \qquad (84)$$

Now we observe that for a feasible $\tilde{x}_\lambda$, the terms in the Lagrangian associated to the constraints disappear, and $L(\tilde{x}_\lambda, \lambda) = f(\tilde{x}_\lambda)$. Using the latter equality and the inequalities (83) and (84) we get:

$$f^\star \leq f(\tilde{x}_\lambda) = L(\tilde{x}_\lambda, \lambda) \leq f^\star, \qquad (85)$$

which implies $f(\tilde{x}_\lambda) = f^\star$, i.e., $\tilde{x}_\lambda$ is primal optimal. Further, we have that $d^\star \geq \min_{\tilde{x}} L(\tilde{x}, \lambda) = L(\tilde{x}_\lambda, \lambda) = f(\tilde{x}_\lambda) = f^\star$, which, combined with weak duality ($d^\star \leq f^\star$), implies that $d^\star = f^\star$ and that $\lambda$ attains the dual optimal value.
Numerical Data For the Toy Examples in Section 6
Ground truth node poses, written as $x_i = [p_i, \theta_i]$:
"year": 2015,
"sha1": "180f54cd249c86ce05d1df1b5e3689b8ff1fd3b2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "180f54cd249c86ce05d1df1b5e3689b8ff1fd3b2",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Highlights in the Study of Exoplanet Atmospheres
Exoplanets are now being discovered in profusion. However, to understand their character requires spectral models and data. These elements of remote sensing can yield temperatures, compositions, and even weather patterns, but only if significant improvements in both the parameter retrieval process and measurements are achieved. Despite heroic efforts to garner constraining data on exoplanet atmospheres and dynamics, reliable interpretation has oftimes lagged ambition. I summarize the most productive, and at times novel, methods employed to probe exoplanet atmospheres, highlight some of the most interesting results obtained, and suggest various broad theoretical topics in which further work could pay significant dividends.
Introduction
The modern era of exoplanet research started in 1995 with the discovery of the planet 51 Peg b [1] due to the detection of the periodic radial-velocity (RV) Doppler wobble in its star (51 Peg) induced by the planet's nearly circular orbit. With these data, and knowledge of the star, one could derive orbital period (P) and semi-major axis (a), and constrain the planet's mass. However, the inclination of the planet's orbit was unknown and, therefore, only a lower limit to its mass could be determined. With a lower limit of 0.47 $M_J$ (where $M_J$ is the mass of Jupiter), and given its proximity to its primary (a is ∼0.052 A.U.; one hundred times closer to its star than Jupiter is to the Sun), the induced Doppler wobble is optimal for detection by the RV technique. The question was how such a "hot Jupiter" could exist and survive. While its survival is now understood (see section below on "Winds from Planets"), the reason for its close orbital position is still a subject of vigorous debate. Nevertheless, such close-in giants are selected for using the RV technique, and soon scores, then hundreds, of such gas giants were discovered in this manner. However, aside from a limit on planet mass, and the inference that proximity to its star leads to a hot (∼1000-2000 Kelvin (K)) irradiated atmosphere, no useful physical information on such planets was available with which to study planet structure, their atmospheres, or composition. A breakthrough along the path to characterization and the establishment of a mature field of exoplanet science occurred with the discovery of giant planets, still close-in, that transit the disk of their parent star. The chance of a transit is larger if the planet is close, and HD 209458b at a ∼ 0.05 A.U. was the first found [2]. Optical measurements yielded a radius ($R_p$) for HD 209458b of ∼1.36 $R_J$, where $R_J$ is the radius of Jupiter. Jupiter is roughly ten times, and Neptune is roughly four times, the radius of Earth ($R_E$). Since then, hundreds of transiting giants have been discovered using ground-based facilities. The magnitude of the attendant diminution of a star's light during such a primary transit (eclipse) by a planet is the ratio of their areas ($(R_p/R_\star)^2$, where $R_p$ and $R_\star$ are the planet's and star's radius, respectively), so with knowledge of the star's radius the planet's radius can be determined. Along with RV data, since the orbital inclination of a planet in transit is known, one then has a radius-mass pair with which to do some science.
The transit depth for a giant passing in front of a solar-like star is ∼1%, and such a large magnitude can easily be measured with small telescopes from the ground. A smaller Earth-like planet requires the ability to measure transit depths one hundred times more precisely. Soon, many hundreds of gas giants were detected both in transit and via the RV method, the former requiring modest equipment and the latter requiring larger telescopes with state-of-the-art spectrometers with which to measure the small stellar wobbles. Both techniques favor close-in giants, so for many years these objects dominated the bestiary of known exoplanets.
Better photometric precision near or below one part in $10^{4-5}$, achievable only from space, is necessary to detect the transits of Earth-like and Neptune-like exoplanets across Sun-like stars, and, with the advent of the Kepler [3] and CoRoT [4] satellites, astronomers have now discovered a few thousand exoplanet candidates. Kepler in particular revealed that most planets are smaller than ∼2.5 $R_E$ (four times smaller than Jupiter), but fewer than ∼100 of the Kepler candidates are close enough to us to be measured with state-of-the-art RV techniques. Without masses, structural and bulk compositional inferences are problematic. Moreover, the majority of these finds are too distant for photometric or spectroscopic follow-up from the ground or space to provide thermal and compositional information.
A handful of the Kepler and CoRoT exoplanets, and many of the transiting giants and "sub-Neptunes" discovered using ground-based techniques are not very distant and have been followed up photometrically and spectroscopically using both ground-based and space-based assets to help constrain their atmospheric properties. In this way, and with enough photons, some information on atmospheric compositions and temperatures has been revealed for ∼50 exoplanets, mostly giants. However, even these data are often sparse and ambiguous, rendering most such hard-won results provisional [5]. The nearby systems hosting larger transiting planets around smaller stars are the best targets for a program of remote sensing to be undertaken, but such systems are a small subset of the thousands of exoplanets currently in the catalogues.
One method with which astronomers are performing such studies is to measure the transit radius as a function of wavelength [6,7,8]. Since the opacity of molecules and atoms in a planet's atmosphere is a function of wavelength, the apparent size of the planet is a function of wavelength as well, in a manner characteristic of atmospheric composition. Such a "radius spectrum" can reveal the atmosphere's composition near the planet terminators, but the magnitude of the associated variation is down from the average transit depth by a factor of $\sim 2H/R_p$, where $H$ is the atmospheric scale height (a function of average temperature and gravity). This ratio can be ∼0.1 to 0.01, making it correspondingly more difficult to determine a transit radius spectrum. Only space telescopes such as Spitzer [9] and the Hubble Space Telescope (HST), and the largest ground-based telescopes with advanced spectrometers, are up to the task, and even then the results can be difficult to interpret.
Another method probes the atmospheres of transiting exoplanets at secondary eclipse, when the star occults the planet ∼180° out of phase with the primary transit. The abrupt difference between the summed spectrum of planet and star just before and during the eclipse of the planet by the star is the planet's spectrum at full face. Secondary eclipse spectra include reflected (mostly in the optical and near-ultraviolet) and thermally emitted (mostly in the near- and mid-infrared) light, and models are necessary to distinguish (if possible) the two components. Note that separate images of the planet and star are not obtained via this technique, and a planet must be transiting. With few exceptions, when the planet does not transit, the summed light of a planet and star varies too slowly and smoothly for such a variation to be easily distinguished from the systematic uncertainties of the instruments to reveal the planet's emissions as a function of orbital phase. For the close-in transiting "hot Jupiters," the planet flux in the near-infrared is $\sim 10^{-3}$ times the stellar flux, much higher than the ratio expected for the class of planet in a wide orbit that can be separated from its primary star by high-contrast imaging techniques. In cases when such "high-contrast" direct imaging is feasible, the planet is farther away from the star (hence, dim) and difficult to discern from under the stellar glare. However, hot, young giants can be self-luminous enough to be captured by current high-contrast imaging techniques, and a handful of young giant planets have been discovered and characterized by this technique. More are expected as the technology matures [10,11,12,13].
The secondary eclipse and primary transit methods used to determine or constrain atmospheric compositions and temperatures (as well as other properties) generally involve low-resolution spectra with large systematic and statistical errors. These methods are complementary in that transit spectra reliably reveal the presence of molecular and atomic features and are an indirect measure of temperature through the pressure scale height, while the flux levels of secondary eclipse spectra scale directly with temperature, but could in fact be featureless for an isothermal atmosphere. The theoretical spectra with which they are compared to extract parameter values are imperfect as well, and this results in less trustworthy information than one would like. Giant planets (and "Neptunes") orbiting closely around nearby stars are the easiest targets, and are the stepping stones to the Earths. Secondary and primary transit spectral measurements of Earth-like planets around Sun-like stars, as well as direct high-contrast imaging of such small planets, are not currently feasible. However, measurements of exo-Earths around smaller M dwarf stars might be, if suitable systems can be found. Nevertheless, with a few score transit and secondary eclipse spectra, some planetary phase light curves, a few high-contrast campaigns and measurements, and some narrow-band, but very high spectral resolution measurements using large telescopes, the first generation of exoplanet atmosphere studies has begun.
There are several helpful reviews of the theory of exoplanet atmospheres [14,15,16,17,18,19,20,21,22]. To these can be added informed discussions on the molecular spectroscopy and opacities central to model building [23,24,25,26,27]. Monographs on the relevant thermochemistry and abundances have been published over the years [28,29,30,31,32]. In this paper, I will not attempt to review the literature of detections and claims, nor will I attempt to review the thermochemical, spectroscopic, or dynamical modeling efforts to date. Rather, I will focus on those few results concerning exoplanet atmospheres that to my mind stand out, that seem most robust, and that collectively serve to summarize what we have truly learned. This will of necessity be a small subset of the published literature, and, if only for lack of space, some compelling results will no doubt be neglected. In addition, I will touch on only the basics of the atmosphere theory applied to date, preferring to focus where possible on the progress in theory necessary for the next generation of exoplanet atmosphere studies to evolve productively. I now embark upon a discussion of what I deem a few of the milestone observational papers in core topics. These might be considered to constitute the spine of progress in recent exoplanet atmosphere studies. I accompany each with a short discussion of the associated theoretical challenges posed by the data.
Transit Detection of Atoms and Molecules
The apparent transit radius of a planet with a gaseous atmosphere is that impact parameter of a ray of stellar light for which the optical depth at that wavelength (λ) is of order unity. Note that at that level the corresponding radial optical depth, which if in absorption is relevant to emission spectra at secondary eclipse, will be much smaller. Since an atmosphere has a thickness (extent), and absorption and scattering cross sections are functions of photon wavelength that in product with the air column constitute optical depth, the measured transit radius is a function of wavelength. Therefore, measurements of a planet's transit depths at many wavelengths of light reveal its atomic and molecular composition. To good approximation [33]:

$$R_p(\lambda) \simeq R_p(\lambda_0) + H \ln\!\left[\frac{\sigma(\lambda)}{\sigma(\lambda_0)}\right], \qquad (1)$$

where σ(λ) is the composition-weighted total cross-section and the scale height, $H$, is $kT/\mu g$, where $g$ is the planet's surface gravity, µ is the mean molecular weight, $T$ is an average atmospheric temperature, and $k$ is Boltzmann's constant. $H$ sets the scale of the magnitude of potential fluctuations of $R_p$ with λ, and σ(λ) is determined mostly by the atomic and molecular species in the atmosphere. Charbonneau et al. [34] were the first to successfully employ this technique with the ∼4-σ measurement of atomic sodium (Na) in the atmosphere of HD 209458b. Along with HD 189733b, this nearby giant planet has been the most scrutinized photometrically and spectroscopically. Since then, Sing et al. [35] have detected potassium (K) in XO-2b and Pont et al. [36] have detected both sodium and potassium in HD 189733b. These are all optical measurements at and around the Na D doublet (∼0.589 µm) and the potassium resonance doublet (∼0.77 µm), and the measurements revealed the telltale differential transit depths in and out of the associated lines.
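To get a feel for the numbers behind (1), the short sketch below evaluates $H = kT/\mu g$ and the modulation factor $2H/R_p$ for round, illustrative hot-Jupiter parameters; the specific values of T, µ, and g are assumptions for the sake of the example, not measurements of any particular planet.

```python
import numpy as np

k_B  = 1.38e-23           # Boltzmann constant [J/K]
T    = 1300.0             # representative atmospheric temperature [K]
mu   = 2.3 * 1.66e-27     # mean molecular weight, H2-dominated air [kg]
g    = 9.4                # assumed surface gravity [m/s^2]
R_p  = 1.36 * 7.15e7      # planet radius, ~1.36 R_J [m]

H = k_B * T / (mu * g)    # pressure scale height, ~500 km
print(f"H = {H/1e3:.0f} km, 2H/Rp = {2*H/R_p:.3f}")
# 2H/Rp ~ 0.01: the size of the wavelength-dependent transit signal
# relative to the mean transit depth, consistent with the range in the text.
```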
From experience with brown dwarfs, the presence of neutral alkali metals in the atmospheres of irradiated exoplanets with similar atmospheric temperatures (∼1000−1500 K) was expected and their detection was gratifying. Indeed, there is a qualitative correspondence between the atmospheres of close-in/irradiated or young giant planets (of order Jupiter's mass) and older brown dwarfs (with masses of tens of MJ). Alkalis persist to lower temperatures (∼800-1000 K) to be revealed in close-in exoplanet transit and emission spectra and in older brown dwarf emission spectra because the silicon and aluminum with which they would otherwise combine to form feldspars are sequestered at higher temperatures and depths into more refractory species and rained out. Had the elements with which Na and K would have combined persisted in the atmosphere at altitude, these alkalis would have combined and their atomic form would not have been detected [38]. The more refractory silicates (and condensed iron) reside in giant exoplanets (and in Jupiter and Saturn), but at great depths. In L dwarf brown dwarfs they are at the surface, reddening the emergent spectra significantly.
However, the strength in transiting giant exoplanets of the contrast in and out of these atomic alkali lines is generally less than expected [8]. Subsolar elemental Na and K abundances, ionization by stellar light, and hazes have been invoked to explain the diminished strength of their associated lines, but the haze hypothesis is gaining ground. The definition of a haze can merge with that of a cloud, but generally hazes are clouds of small particulates at altitude that may be condensates of trace species or products of photolysis by stellar UV light and polymerization. They are generally not condensates of common or abundant molecular species (such as water, ammonia, iron, or silicates, none of which fit the bill here). Though what this haze is is not at all clear, hazes at altitude (≤0.01 bars) can provide a nearly featureless continuum opacity to light and easily mute atomic and molecular line strengths. Indeed, hazes are emerging as central and ubiquitous features in exoplanet atmospheres. Annoyingly, not much mass is necessary to have an effect on transit spectra, making quantitative interpretation all the more difficult. The fact that the red color of Jupiter itself is produced by a trace species (perhaps a haze) that as of yet has not been identified is a sobering testament to the difficulties that lie ahead in completely determining exoplanet atmospheric compositions.
The multi-frequency transit measurements of HD 189733b performed by Pont et al. [36,37] from the near-ultraviolet to the mid-infrared are the clearest and most dramatic indications that some exoplanets have haze layers (Figure 1). Curiously, no water or other molecular features are identified by Pont et al. [36,37] in transit. Aside from the aforementioned Na and K atomic features in the optical, the transit spectrum of HD 189733b is consistent with a featureless continuum. Water features in a H2 atmosphere are very difficult to completely suppress, so this is strange. What is more, the transit radius increases below ∼1.0 µm with decreasing wavelength in a manner reminiscent of Rayleigh scattering. However, due to the large cross sections implied, the culprit can only be a haze or cloud. Note that these transit data can't distinguish between absorption and scattering, though scattering is the more likely for most plausible haze materials and particle sizes. Scattering is also indicated by the near lack of evidence for absorbing particulates in its secondary eclipse emission spectrum [39]. Together, these data suggest that a scattering haze layer at altitude is obscuring the otherwise distinctive spectral features of the spectroscopically active atmospheric constituents.
Transit spectra for the mini-Neptune GJ 1214b have been taken by many groups, but the results until recently have been quite ambiguous concerning possible distinguishing spectral features [40]. In principle, there are diagnostic water features at ∼1.15 and 1.4 µm. However, Kreidberg et al. [41], using the WFC3 on the Hubble Space Telescope, have demonstrated that from ∼1.1 to 1.6 µm its transit spectrum is ∼5-10 times flatter than a water-dominated atmosphere or the canonical molecular-hydrogen(H2)-dominated atmosphere with a solar abundance of water (oxygen), respectively (Figure 2). Flatness could indicate that the atmosphere has no scale height (eq. 1) (due, for example, to a high mean molecular weight, µ), or herald the presence yet again of a thick haze layer obscuring the molecular features. Not surprisingly, a pan-chromatically obscuring haze layer is currently the front runner.
Lest one think that hazes completely mask the molecules of exoplanet atmospheres, Deming et al. [42] have published transit spectra of HD 209458b (Figure 3) and XO-1b that clearly show the water feature at ∼1.4 µm. However, the expected accompanying water feature at ∼1.15 µm is absent. The best interpretation is that this feature is suppressed by the presence of a haze with a continuum, though wavelength-dependent, interaction cross section that trails off at longer wavelengths. The weaker apparent degree of suppression in these exoplanet atmospheres might suggest that their hazes are thinner or deeper (at higher pressures) than in HD 189733b. Physical models explaining this behavior are lacking.
So, the only atmospheric species that have clearly been identified in transit are H2O, Na, K, and a "haze". Molecular hydrogen is the only gas with a low enough µ to provide a scale height great enough to explain the detection in transit of any molecular features (eq. 1) in a hot, irradiated atmosphere, and I would include it as indirectly indicated. However, carbon monoxide (CO), carbon dioxide (CO2), ammonia (NH3), nitrogen gas (N2), acetylene (C2H2), ethylene (C2H4), phosphine (PH3), hydrogen sulfide (H2S), oxygen (O2), ozone (O3), nitrous oxide (N2O), and hydrogen cyanide (HCN) have all been proffered as exoplanet atmosphere gases. Clearly, the field is in its spectroscopic infancy. Facilities such as next-generation ground-based telescopes (Extremely Large Telescopes, ELTs) and space-based telescopes such as the James Webb Space Telescope (JWST) [22], or a dedicated exoplanet space-based spectrometer, will be vital if transit spectroscopy is to realize its true potential for exoplanet atmospheric characterization. JWST in particular will have spectroscopic capability from ∼0.6 to ∼25 µm and will be sensitive to most of the useful atmospheric features expected in giant, neptune, and sub-neptune exoplanets. It may also be able to detect and characterize a close-in earth or super-earth around a small nearby M star.
There are a number of theoretical challenges that must be met before transit data can be converted into reliable knowledge. Such spectra probe the terminator region of the planet that separates the day and night sides. They sample the transitional region between the hotter day and cooler night of the planet where the compositions may be changing and condensates may be forming. Hence, the compositions extracted may not be representative even of the bulk atmosphere. Ideally, one would want to construct dynamical 3D atmospheric circulation models that couple non-equilibrium chemistry and detailed molecular opacity databases with multi-angle 3D radiation transfer. Given the emergence of hazes and clouds as potentially important features of exoplanet atmospheres, a meteorologically credible condensate model is also desired. We are far from the latter [43], and the former capabilities are only now being constructed, with limited success [44]. The dependence of transit spectra on species abundance is weak, making it difficult now to derive mixing ratios from transit spectra to better than a factor of ten to one hundred. Though the magnitude of the variation of apparent radius with wavelength depends upon atmospheric scale height, and, hence, temperature, the temperature−pressure profile and the variation of abundance with altitude are not easily constrained. To obtain even zeroth-order information, one frequently creates isothermal atmospheres with chemical equilibrium or uniform composition. Current haze models are ad hoc, and adjusted a posteriori to fit the all-too-sparse and at times ambiguous data. To justify doing better will require much better, and higher-resolution measured spectra [5].
Data at secondary eclipse require a similar modeling effort, but probe the integrated flux of the entire dayside. Hence, a model that correctly incorporates the effects of stellar irradiation ("instellation") and limb effects is necessary. Moreover, the flux from the cooling planetary core, its longitudinal/latitudinal variation, and a circulation model that redistributes energy and composition are needed. Most models employed to date use a representative 1D (planar) approximation and radiative and chemical equilibrium for what is a hemispherical region that might be out of chemical equilibrium (and slightly out of radiative equilibrium). The emission spectra of the dayside depend more on the absorptive opacities, whereas transit spectra depend on both scattering and absorption opacities. Hence, if the haze inferred in some transit spectra is due predominantly to scattering, its effect on secondary eclipse spectra will be minimal, making it a bit more difficult to use insights gained from one to inform the modeling of the other.
Many giant exoplanets, and a few sub-Neptunes, have been observed at secondary eclipse, but the vast bulk of these data comprise a few photometric points per planet. The lion's share has been garnered using Spitzer, the HST, or large-aperture ground-based telescopes, and pioneering attempts to inaugurate this science were carried out by Deming et al. [45] and Charbonneau et al. [46]. Photometry, particularly if derived using techniques subject to systematic errors, is ill-suited to delivering solid information on composition, thermal profiles, or atmospheric dynamics. The most one can do with photometry at secondary eclipse is to determine rough average emission temperatures, and perhaps reflection albedos in the optical. Temperatures for close-in giant exoplanet atmospheres from ∼1000 to ∼3000 K have in this way been determined. Of course, the mere detection of an exoplanet is a victory, and the efforts that have gone into winning these data should not be discounted. Nevertheless, with nearly fifty such campaigns and detections "in the can", one has learned that it is only with next-generation spectra using improved (perhaps dedicated) spectroscopic capabilities that the desired thermal and compositional information will be forthcoming.
One of the few reliable compositional determinations at secondary eclipse obtained so far is for the dayside atmosphere of HD 189733b using the now-defunct IRS spectrometer onboard Spitzer [39]. This very-low-resolution spectrum nevertheless provided a ∼3-σ detection of water at ∼6.2 µm. There are other claims in the literature to have detected molecules at secondary eclipse, but many are less compelling, and previous claims to have detected water using photometry alone at secondary eclipse are very model-dependent [47]. It is only with well-calibrated spectra that one can determine with confidence the presence in any exoplanet atmosphere of any molecule or atom.
Winds from Planets
The existence of what are now somewhat contradictorily called "hot Jupiters" has since the discovery of 51 Peg b in 1995 been somewhat of a puzzle. They likely cannot form so close to their parent star and must migrate in by some process from beyond the so-called ice line. In such cold regions, ices can form and accumulate to nucleate gas giant planet formation. Subsequent inward migration could be driven early in the planet's life by gravitational torquing by the protostellar/protoplanetary disk or by planet-planet scattering, followed by tidal dissipation in the planet (which circularizes its orbit). However, once parked at between ∼0.01 and 0.1 A.U. from the star, how does the gaseous planet, or a gaseous atmosphere of a smaller planet, survive evaporation by the star's intense irradiation during perhaps billions of years seemingly in extremis? The answer is that for sub-Neptunes and rocky planets their atmospheres or gaseous envelopes may indeed not survive, but for more massive gas giants the gravitational well at their surfaces may be sufficiently deep. Nevertheless, since the first discoveries evaporation has been an issue [48]. The atmospheres of Earth and Jupiter are known to be evaporating, though at a very low rate. But what of a hot Jupiter under ∼10^4 × the instellation experienced by Jupiter?
The answer came with the detection by Vidal-Madjar et al. [49] of a wind from HD 209458b. Using the transit method, but in the ultra-violet around the Lyman-α line of atomic hydrogen at ∼0.12 µm, these authors measured a transit depth of ∼15%! Such a large depth implies a planet radius greater than 4 R_J, which is not only far greater than what was inferred in the optical, but beyond the tidal Roche radius. Matter at such a distance is not bound to the planet and the only plausible explanation is that a wind was being blown off the planet. The absorption cross sections in the ultraviolet are huge, so the matter densities necessary to generate a transverse/chord optical depth of one are very low, too low to affect the optical and infrared measurements. The upshot is the presence of a quasi-steady planetary wind with a mass loss rate of ∼10^10–10^11 g s^−1. At that rate, HD 209458b will lose no more than ∼10% of its mass in a Hubble time.
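The quoted bound is easy to verify; the arithmetic below assumes a literature mass of ∼0.69 M_J for HD 209458b, which is not stated in the text.

```python
# Back-of-envelope check of the ~10% upper bound quoted above.
M_jup = 1.898e30            # g
M_p   = 0.69 * M_jup        # g, HD 209458b (assumed literature value)
t_hub = 13.8e9 * 3.156e7    # s, ~one Hubble time
for mdot in (1e10, 1e11):   # g/s, range quoted in the text
    frac = mdot * t_hub / M_p
    print(f"Mdot = {mdot:.0e} g/s -> fraction lost ~ {100*frac:.1f}%")
# ~0.3% and ~3%: comfortably below the ~10% bound, so the wind, while real,
# does not threaten the survival of a Jupiter-mass planet.
```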
Since this initial discovery, winds from the hot Jupiters HD 189733b [50] and WASP-12b [51] and from the hot Neptune GJ 436b [52] have been discovered by the UV transit method and partially characterized. In all cases, the tell-tale indicator was in atomic hydrogen. Mass loss rates have been estimated [53], and in the case of WASP-12b might be sufficient to completely evaporate the giant within as little as ∼1 gigayear. The presence of atomic hydrogen implies the photolytic or thermal breakup of the molecular hydrogen, so these data simultaneously suggest the presence of both H and H2. Linsky et al. [54] detected ionized carbon and silicon in HD 209458b's wind and Fossati et al. [51] detected ionized magnesium in WASP-12b's wind, but the interpretation of the various ionized species detected in these transit campaigns is ongoing.
The theoretical challenges posed by planetary winds revolve in part around the driver. Is it energy-limited UV and X-ray flux from the parent star, or heating by the integral intercepted stellar light? In addition, in the rotating system of the orbiting planet, what ingress/egress asymmetries in the morphology of the wind are there? There are indications that Coriolis forces on planet winds are indeed shifting the times of ingress and egress. What is the effect of planet-star wind interactions? There are suggestions of Doppler shifts of lines of the UV transit data that arise from planet wind speeds, but how can we be sure? How is the material for the wind replenished from the planet atmosphere and interior? And finally, what is the correspondence between the UV photolytic chemistry in the upper reaches of the atmosphere that modifies its composition there and wind dynamics? This is a rich subject tied to many sub-fields of science, and is one of the important topics to emerge from transit spectroscopy.
Phase Light Curves and Planet Maps
As a planet traverses its orbit, its brightness as measured at the Earth at a given wavelength varies with orbital phase. A phase light curve comprises 1) a reflected component that is a stiff function of star-planet-Earth angle and is most prominent in the optical and UV and 2) a thermal component that more directly depends upon the temperature and composition of the planet's atmosphere and their longitudinal variation around the planet and is most prominent in the near- and mid-infrared. Hence, a phase light curve is sensitive to the day-night contrast and is a useful probe of planetary atmospheres [55,56,57,58,59]. Note that the planet/star contrast ratio is largest for large exoplanets in the closest orbits, so hot Jupiters currently provide the best targets.
In the optical, there has been some work to derive the albedo [55,56], or reflectivity, of close-in exoplanets, which is largest when there are reflecting clouds and smallest when the atmosphere is absorbing. In the latter case, thermal emission at high atmospheric temperatures can be mistaken for reflection, so detailed modeling is required. In any case, Kepler, with its superb photometric sensitivity, has been used to determine optical phase curves [60] of a few exogiants in the Kepler field and the MOST microsatellite has put a low upper limit on the optical albedo of HD 209458b [61,62], but much remains to be done to extract diagnostic optical phase curves and albedos for exoplanets.
Interesting progress has been made, however, in the thermal infrared. Using Spitzer at 8 µm, Knutson et al. [63] not only derived a phase light curve for HD 189733b, but derived a crude thermal map of its surface. By assuming that the thermal emission pattern over the planet surface was fixed during the observations, they derived the day-night brightness contrast (translated into a brightness temperature at 8 µm) and a longitudinal brightness temperature distribution. In particular, they measured the position of the "hot spot." If the planet is in synchronous rotation (spin period is the same as the orbital period), and there are no equatorial winds to advect heat around the planet, one would expect the hot spot to be at the substellar point. The light curve would phase up with the orbit and the peak brightness would occur at the center of secondary eclipse. However, what they observed was a shift "downwind" to the east by ∼16° ± 6°. The most straightforward interpretation is that the stellar heat absorbed by the planet is advected downstream by superrotational flows such as are observed on Jupiter itself before being reradiated. Moreover, these data indicate that since the measured day-night brightness temperature contrast was only ∼240 K the zonal wind flows driven by stellar irradiation indeed carry heat to the night side, where it is radiated at a detectable level. Hence, these data point to atmospheric dynamics on the exoplanet HD 189733b qualitatively (though not quantitatively) in line with theoretical expectations [44].
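A toy model makes the geometry of the offset explicit; the contrast amplitude below is arbitrary, and a real map would require proper limb and viewing-geometry weighting.

```python
import numpy as np
# Toy thermal phase curve: a sinusoidal brightness map whose peak is shifted
# "downwind" of the substellar point by delta degrees (HD 189733b: ~16 deg).
def phase_curve(phi_orb, contrast=0.1, delta_deg=16.0):
    """Relative flux vs orbital phase (0 = transit, 0.5 = secondary eclipse)."""
    delta = np.radians(delta_deg)
    # With delta = 0 the maximum falls exactly at mid-eclipse (phi = 0.5);
    # an eastward hot spot makes the peak arrive *before* eclipse.
    return 1.0 + contrast * np.cos(2 * np.pi * (phi_orb - 0.5) + delta)

phi = np.linspace(0.0, 1.0, 9)
print(np.round(phase_curve(phi), 3))
```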
For HD 189733b, this work has been followed up using Spitzer at 3.6 and 4.5 µm [64] and in a competing effort a more refined map has been produced [65]. Infrared phase curves for the giants HD 149026b [66], HAT-P-2b [67], and WASP-12b [68], among other exoplanets, have been obtained. However, one of the most intriguing phase curves was obtained by Crossfield et al. [69] using Spitzer at 24 µm for the non-transiting planet υ And b (Figure 4). These authors found a huge phase offset of ∼80°, for which a cogent explanation is still lacking. The closeness of this planet to Earth compensates in part for the fact that it does not transit, allowing sufficient photometric accuracy without eclipse calibration and yielding one of the few non-transiting light curves. All these efforts collectively demonstrate the multiple, at times unanticipated and creative, methods being employed by observers seeking to squeeze whatever information they can from exoplanets.
Theoretical models for light curves have become sophisticated, but theory and measurement have not yet meshed well. Both need to be improved. Models need to 1) improve their treatment of hazes and clouds that could reside in exoplanet atmospheres and will boost reflection albedos significantly; 2) incorporate polarization to realize its diagnostic potential [59,70]; 3) constrain the possible range of phase functions to aid in retrievals; 4) embed the effects of variations in planet latitude and longitude in the analysis protocols; 5) provide observational diagnostics with which to probe atmospheric pressure depths, particularly using multi-frequency data; 6) be constructed as a function of orbital eccentricity, semi-major axis, and inclination; and 7) span the wide range of masses and compositions the heterogeneous class of exoplanets is likely to occupy. Accurate spectral data with good time coverage from the optical to the mid-infrared could be game-changing, but theory needs to be ready with useful physical diagnostics.
High Spectral Resolution Techniques
The intrinsic dimness of planets under the glare of stars renders high-resolution, pan-chromatic spectral measurements difficult, however desirable. However, ultra-high spectral resolution measurements using large-aperture ground-based telescopes, but over a very narrow spectral range and targeting molecular band features in a planet's atmosphere otherwise jumbled together at lower resolutions, have recently been demonstrated. Snellen et al. [71] have detected the Doppler variation due to HD 209458b's orbital motion of carbon monoxide features near ∼2.3 µm. The required spectral resolution (λ/∆λ) was ∼10^5 and the planet's projected radial velocity just before and just after primary transit changed from +15 km s^−1 to −15 km s^−1. This is consistent with the expected circular orbital speed of ∼140 km s^−1 and provides an unambiguous detection of CO. What is more, this team was almost able to measure the zonal wind speeds of air around the planet, estimated theoretically to be near ∼1 km s^−1, thereby demonstrating the potential of such a novel technique to extract weather features on giant exoplanets. The same basic method has been applied near primary transit to detect CO [72] and H2O [73] in HD 189733b. Carbon monoxide is detected in Jupiter and was thermochemically predicted to exist in abundance in the atmospheres of hot Jupiters [31], but its actual detection by this method is impressive.
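The quoted velocity swing follows directly from the orbital geometry; the phase window below corresponds to an assumed ∼±1.4 hours for a 3.5-day period.

```python
import numpy as np
# Why a +/-15 km/s swing across transit is consistent with a ~140 km/s orbit:
# near mid-transit the line-of-sight velocity of the planet is approximately
# v_r ~ v_orb * sin(2*pi*phi), with phi the orbital phase from mid-transit.
v_orb = 140.0                       # km/s, from the text
for phi in (-0.017, 0.0, 0.017):    # ~ +/-1.4 h for a 3.5-day period
    v_r = v_orb * np.sin(2 * np.pi * phi)
    print(f"phi = {phi:+.3f} -> v_r ~ {v_r:+.1f} km/s")
# sin is ~linear here, so the CO lines sweep from ~-15 to ~+15 km/s during
# the transit window, which is the Doppler slide Snellen et al. resolved.
```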
In fact, the same technique has been successfully applied in the CO band to the non-transiting planet τ Boo b [74] and for the wide-separation giant planet/brown dwarf β Pictoris b [75], verifying the presence of CO in both their atmospheres. Finally, using a related technique Crossfield et al. [76] have been able to conduct high-resolution "Doppler imaging" of the closest brown dwarf known (Luhman 16B). By assuming that the brown dwarf's surface features are frozen during the observations and that it is in solid-body rotation, tiling its surface in latitude and longitude, they were able to back out surface brightness variations from the variations of its flux and Doppler-shift time series. By this means, they have mapped surface spotting that may reflect broken cloud structures (Figure 5).
In support of such measurements, theory needs to refine its modeling of planet surfaces, zonal flows and weather features, three-dimensional heat redistribution and velocity fields, and temporal variability. Currently, most 3D general circulation models do not properly treat high Mach number flows, yet they predict zonal wind Mach numbers of order unity. There are suggestions that magnetic fields affect the wind dynamics and heating in the atmosphere, but self-consistent multi-dimensional radiation magnetohydrodynamic models have not yet been constructed.
This series of measurements of giant exoplanets and brown dwarfs using high-resolution spectroscopy focused on narrow molecular features emphasizes two important aspects of exoplanet research. The first is that observers can be clever and develop methods unanticipated in Roadmap documents and Decadal Surveys. The second is that with the next generation of ground-based ELTs equipped with impressive spectrometers, astronomers may be able to measure and map some exoplanets without employing the high-contrast imaging techniques that are now emerging to compete and to which I now turn.
High-Contrast Imaging
Before the successful emergence of the RV and transit methods, astronomers expected that high-contrast direct imaging, which separates out the light of the planet from that of the star and provides photometric and spectroscopic data for each, would be the leading means of exoplanet discovery and characterization. A few wide-separation brown dwarfs and/or super-Jupiter planets were detected by this means, but the yield was meager. The fundamental problem is two-fold: 1) the planets are intrinsically dim, and 2) it is difficult to separate out the light of the planet from under the glare of the star for planet-star separations like those of the solar system. Imaging systems need to suppress the stellar light scattered in the optics that would otherwise swamp the planet's signature. The planet/star contrast ratio for Jupiter is ∼10^−9 in the optical and ∼10^−7 in the mid-infrared. For Earth, the corresponding numbers are ∼10^−10 and ∼10^−9. These numbers are age, mass, orbital distance, and star dependent, but demonstrate the challenge. What is more, contrast capabilities are functions of planet-star angular separation, restricting the orbital space accessible.
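These contrast ratios can be recovered from a one-line estimate, F_p/F_* ≈ A_g (R_p/a)² f_phase, with rough assumed geometric albedos and a ∼1/4 quadrature phase factor:

```python
# Order-of-magnitude reflected-light contrasts. The albedos (A_g) and the
# quadrature phase factor (~1/4) are rough assumptions, not values from
# the text.
AU = 1.496e11  # m
cases = {                      # (radius [m], orbital distance [m], A_g)
    "Jupiter": (7.149e7, 5.2 * AU, 0.5),
    "Earth":   (6.371e6, 1.0 * AU, 0.3),
}
for name, (Rp, a, Ag) in cases.items():
    contrast = Ag * (Rp / a) ** 2 * 0.25   # ~quadrature illumination
    print(f"{name:8s} optical contrast ~ {contrast:.1e}")
# ~1e-9 for Jupiter and ~1e-10 for Earth, matching the figures quoted above
# and showing why starlight suppression is the central engineering problem.
```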
However, high-contrast imaging is finally emerging to complement other methods. It is most sensitive to wider-separation (∼10–200 AU), younger, giant exoplanets (and brown dwarfs), but technologies are coming online with which to detect older and less massive exoplanets down to ∼1 AU separations for nearby stars (≤10 parsecs) [10,11,13,12,77]. Super-Neptunes around M dwarfs might soon be within reach. Using direct imaging, Marois et al. [78,79] have detected four giant planets orbiting the A star HR 8799 (HR 8799b,c,d,e) and Lagrange et al. [80] have detected a planet around the A star β Pictoris. The contrast ratios in the near-infrared are ∼10^−4, but capabilities near 10^−5 have been achieved and performance near 10^−7 is soon anticipated [10,11]. One of the results to emerge from the measurements of both the HR 8799 and β-Pic planets is that to fit their photometry in the near-infrared from ∼1.0 to ∼3.0 µm, thick clouds, even thicker than those seen in L dwarf brown dwarf atmospheres, are necessary [81]. This (re)emphasizes the theme that the study of hazes and clouds (nephelometry) has emerged as a core topic in exoplanet studies.
One of the most exciting recent measurements via direct imaging was by Konopacky et al. [82] of HR 8799c. Using the OSIRIS spectrometer on the 10-meter Keck II telescope, they obtained unambiguous detections between ∼1.95 and ∼2.4 µm of both water and carbon monoxide in its ∼1000 K atmosphere (Figure 6). This λ/∆λ = 4000 spectrum is one of the best obtained so far, but was enabled by the youth (∼30 million years), wide angular separation, and large mass (∼5–10 M_J) of the planet. Improvements in theory needed to support direct imaging campaigns mirror those needed for light curves, but are augmented to include planet evolution modeling to account for age, metallicity/composition, and mass variations. Most high-contrast instruments are focused on the near-infrared, so cloud physics and near-infrared line lists for likely atmospheric constituents will require further work. The reader will note that the vast majority of observations and measurements of exoplanet atmospheres have been done for giants. There are a few for sub-Neptunes and super-Earths, but high-contrast measurements of earths around G stars like the Sun are not likely in the near future [83,84]. The planet/star contrast ratios are just too low, though earths around M stars might be within reach if we get lucky. For now, giants and Neptunes are the focus, as astronomers hone their skills for an even more challenging future.
What We Know about Atmospheric Compositions
To summarize, the species we have, without ambiguity, discovered to date in exoplanet atmospheres are: H2O, CO, Na, K, and H (H2), with various ionized metals indicated in exoplanet winds. Expected species, but as yet undetected, include: NH3, CH4, N2, CO2, H2S, PH3, HCN, C2H2, C2H4, O2, O3, and N2O. The nature of the hazes and clouds inferred is as yet unknown. The atmospheres probed have temperatures from ∼600 K to ∼3000 K. Good spectra are the essential requirements for unambiguous detection and identification of molecules in exoplanet atmospheres, and these have been rare. Determining abundances is also difficult, since to do so requires not only good spectra, but reliable models. Errors in abundance retrievals of more than an order of magnitude are likely, and this fact has limited the discussion of abundances in this paper.
Nevertheless, with the construction of ground-based ELTs, the various campaigns of direct imaging [10,11,12], the launch of JWST, the possible launch of WFIRST-2.4/AFTA [13], and the various ongoing campaigns with HST and Spitzer and extant ground-based facilities, the near-term future of exoplanet atmospheric characterization promises to be even more exciting than its past.
[Figure caption] The presence of water is demonstrated by the feature at 1.4 µm, but the corresponding ∼1.15 µm feature is absent. The best explanation is that the latter is suppressed by haze scattering. Not obvious here is the fact that even the 1.4-µm feature is muted with respect to non-haze models. The two colored curves are representative model spectra with different levels of haze. Reprinted with permission from reference [42].
[Figure caption] Reprinted with permission from reference [76]. | 2014-09-25T16:37:22.000Z | 2014-09-17T00:00:00.000 | {
"year": 2014,
"sha1": "33d34cadc1231f7313b252f6ec9662d61b3fd58b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1409.7320",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "33d34cadc1231f7313b252f6ec9662d61b3fd58b",
"s2fieldsofstudy": [
"Geology",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
158162847 | pes2o/s2orc | v3-fos-license | Study of the Right of Foreign Ship Against State Sovereignty (Case Study Indonesia)
Recognition of the archipelagic concept is accommodated in Chapter IV of the United Nations Convention on the Law of the Sea (LOSC) 1982. The implication of this recognition is that archipelagic states have sovereignty over their marine space. There are 3 zones of sea sovereignty: inland waters, territorial waters, and archipelagic waters. However, only in inland waters does an archipelagic state have full sovereignty as on land; in the other zones sovereignty is accompanied by other states' rights, one of which is the right of passage. The right of passage of other states consists of the right of innocent passage, the right of archipelagic sea lanes passage, and the right of transit passage, each of which applies depending on the zone that is crossed. As to archipelagic sea lanes passage (ASLP), Indonesia has determined 3 archipelagic sea lanes, with the consequence that all foreign ships should pass along those routes. For that reason, this paper discusses the Indonesian sea sovereignty zones and the rights of other states within them, along with the implications for Indonesia after the determination of the archipelagic sea lanes passage.
Based on these conventions, the archipelagic state has sovereignty over the whole territory of the sea and all regimes inside the archipelagic baselines. The sea sovereignty of the archipelagic state can be divided (based on the archipelagic baselines) into several zones, comprising sovereignty in the inland waters regime, the archipelagic waters regime, and the territorial waters regime. 4 Each zone has a different regime; this zoning concept distinguishes the concept of sovereignty at sea from sovereignty on land, and results in a different navigation regime in each zone of the waters of an archipelagic state.
Recognition of archipelagic sovereignty carried consequences for international interests that already existed. The archipelagic state is obliged to ensure the rights of user states to pass through its sea zones. Those rights include the right of innocent passage in the territorial sea, archipelagic sea lanes passage (ASLP) in archipelagic waters, and the right of transit passage in straits that have been established. Even when using archipelagic sea lanes passage, any passing ship can use normal mode, in the sense of sailing normally with unrestricted and unobstructed passage; for example, a submarine could cross an archipelagic sea lane without being disturbed and in a fixed diving position in the water, just as when sailing on the high seas. This is certainly different from the right of innocent passage in the territorial sea, where submarines are required to navigate on the surface of the water and show the flag. 10 Indonesia is the one and only archipelagic state in the world which has assigned archipelagic sea lanes (ASL). Although Indonesia has set three (3) archipelagic sea lanes, known as archipelagic sea lanes passage (ASLP), which connect the north and south, concern and debate about sovereignty nonetheless appear. Use of the concepts of routes normally used and normal mode in archipelagic sea lanes currently remains a dilemma for the archipelagic state. This occurs because an archipelagic state cannot prohibit, interfere with, or even close the sea lanes to foreign ships crossing its territory, so the concept is regarded as potentially interfering with the implementation of an archipelagic state's sovereignty. The normal mode applicable in ASLP makes Indonesia's archipelagic sea lanes, at the level of practical implementation, seem open and freely navigable; it "seems" that a regime of the high seas still operates in the Indonesian archipelagic sea lanes passage, especially since under ASLP the archipelagic state is not allowed to interfere with or ban navigation. Thus, this paper will focus on how real archipelagic water sovereignty is after the determination of the Indonesian ASL, in terms of the international law of the sea.
II. ARCHIPELAGIC STATE AS A RESULT OF CONSENSUS
The Third United Nations Conference on the Law of the Sea (UNCLOS III) is an important conference in the history of international law because of the scope and substance of the issues with which it was concerned, and it represents a major international experiment in decision making by consensus. This conference is therefore regarded as a unique event; most of the attention it has attracted so far has been focused on the problems, progress, and prospects of the conference, 11 which proceeded without formal votes. The archipelagic state concept was pioneered by Indonesia and the Philippines, at a time when it was still unknown in international law and the international law of the sea. The archipelagic state concept is a geographical concept of a state which consists of islands and oceans, so that an archipelagic state will use straight archipelagic baselines around all the outer islands, thus making all the waters inside the baselines into inland waters. This certainly met disagreement from developed states and other states that want freedom of navigation at sea which no one can obstruct.
As a result of a long and drawn-out negotiation process, one of the accepted consensuses is the principle of the archipelagic state as a new legal regime at sea.
A. ARCHIPELAGIC STATE AND SEA SOVEREIGNTY
Article 49 LOSC is a form of political recognition of, and unity over, the territory of the archipelagic state by the international community under the LOSC. An archipelagic state has sovereignty over the air space above the waters, the seabed and subsoil thereof, and all natural resources contained therein. An archipelagic state's delimitation of sovereignty must be based on baselines called archipelagic baselines, defined as: "straight archipelagic baselines joining the outermost points of the outermost islands and drying reefs of the archipelago provided that within such baselines are included the main islands and an area in which the ratio of the area of the water to the area of the land, including atolls, is between 1 to 1 and 9 to 1." 22 Based on the archipelagic baselines, the archipelagic state's territorial waters at sea can be divided into zones by looking at their position: the zoning of the territorial sovereignty of the sea includes all internal waters and archipelagic waters, plus the territorial waters extending 12 miles outward of the baselines. In every zone of territorial sea sovereignty the archipelagic state applies its own regime, and each zone of the territory has its own navigation regime: innocent passage, transit passage, and archipelagic sea lanes passage (ASLP).
B. SEA SOVEREIGNTY ZONE AND RIGHT OF PASSAGE (CASE: ARCHIPELAGIC STATE)
Internal Waters and No Right of Navigation
Internal waters are a zone of state sovereignty which lies on the landward side of the normal baseline; each island will have its own baseline, drawn according to the normal principles. Internal waters are measured to include lakes, rivers, canals, ports, bays, and historic bays. A coastal state has complete authority to control access of vessels, both private and governmental, over its internal waters. International law grants states this authority because internal waters adjoin the land territory of the state. This authority is nearly as comprehensive as sovereignty over the landmasses; in other words, the coastal state enjoys full territorial sovereignty over them.
The effect of determining internal waters is the closing of an entire sea area previously not considered so, so that there is no right of innocent passage for foreign vessels as there is through the territorial sea. However, when a foreign ship enters a port or other internal waters, the ship submits itself to the territorial sovereignty of the coastal state. Additionally, in this zone an archipelagic state is able to close, prohibit, or allow navigation calling at a port or entering inland waters whenever it wants; 30 in this zone the waters are absolutely owned by the coastal state, and their regulation is subject to its national law.
The Territorial Sea and Innocent Passage
"Passage is innocent so long as it is not prejudicial to the peace, good order or security of the coastal state. Such passage shall take place in conformity with this convention and with other rules of international law". Navigation carried out in the territorial waters must necessarily just traversing that sea without entering internal waters or calling at a roadstead or port facility outside internal waters; or proceeding to or from internal waters or a call at such roadstead or port facility. 34 Another requirement when ships through innocent passage that the cruise must necessarily done continuously, submarines and other underwater vehi-FOHV DUH UHTXLUHG WR QDYLJDWH RQ WKH VXUIDFH DQG WR VKRZ WKHLU ÀDJ other than that, every passing ship must be subject to coastal state laws and regulations and also international law for the security and safety of navigation.
Even though other states have the right of innocent passage, because the territorial sea regime is still within a coastal state's sovereignty zone, the coastal state is still able to suspend, or even ban or restrict, passage for ships in the territorial waters. Suspension can be applied when the coastal state is concerned to conduct combat training or for a cause that is definite and clear, such as the protection of safety.
Archipelagic Waters as a Sui Generis and ASLP
Archipelagic water is a zone of sovereignty that is solely owned by the archipelagic state, enclosed by the archipelagic baselines drawn. This zone is a new concept in international law. It is called sui generis because the waters are neither like the inland waters regime nor the regime of the territorial waters. 43 This zone is a special zone contained in the sovereign sea territory of an archipelagic state only. Sovereignty in this zone is relative, because sovereignty in the archipelagic waters is subject to some third-nation rights. According to Churchill, there are four third-nation rights over archipelagic waters that must be respected by the archipelagic state, 44 which are the result of a consensus between the archipelagic state and the user states. As to the right of passage, in archipelagic waters it is referred to as Archipelagic Sea Lanes Passage (ASLP). Every state enjoys ASLP in the archipelagic sea lanes (ASL). The advantage of ASLP is that a ship can use normal mode when crossing an ASL. Basically, archipelagic sea lanes passage is equal to transit passage in the straits regime, where the rights and obligations of user states and the archipelagic state are the same "mutatis mutandis" as the rights and obligations of user states and the coastal state in the straits regime. Moreover, as in the straits regime, ASLP cannot be deferred by the archipelagic state (unobstructed passage). Another specialty of this regime is that when the archipelagic state does not specify ASL, normal routes apply: any sea lane that is considered normally used by each of the user states.
C. INTERNATIONAL STRAITS FOR INTERNATIONAL NAVIGATION AND TRANSIT PASSAGE
The strait regime is not included in the zoning of the territorial sea. The strait regime is closely related to the passage regime of international navigation which passes through a sea area called a strait. A strait for international navigation may lie between two territorial states or in an exclusive economic zone. Navigation through straits used for international passage is called transit passage. According to art. 38 of the LOSC, transit passage is:
the exercise in accordance with this Part of the freedom of navigation and overflight solely for the purpose of continuous and expeditious transit
of the strait between one part of the high seas or an exclusive economic zone and another part of the high seas or an exclusive economic zone. However, the requirement of continuous and expeditious transit does not preclude passage through the strait for the purpose of entering, leaving or returning from a State bordering the strait, subject to the conditions of entry to that State.
When ships use the right of transit passage, there shall be no suspension by coastal states, despite the fact that the strait is in their territory (such as the Malacca Strait); the applicable navigation regime is the transit passage regime. On the other hand, all ships using transit passage are obliged not to interfere with, or take actions that are considered to affect, the coastal state's sovereignty.
Indonesia's ASLP I, for navigation from:
South China Sea – Natuna Sea – Karimata Strait – Java Sea and Sunda Strait to the Indian Ocean (or the reverse).
ASLP III-D, for navigation from:
Pacific Ocean – Maluku Sea – Seram Sea – Banda Sea – Ombai Strait – Sawu Sea – east of Sawu Island – Indian Ocean, or the reverse. With the determination of the Indonesian ASLP, all international navigation, whether using innocent passage or archipelagic passage, shall pass "only" along the Indonesian ASLP and not along another sea lane. The problem is that Indonesia has not yet designated an east–west ASLP, giving rise to varying interpretations of the routes normally used, which user states always cite as a reason when passing from east to west or the reverse. However, according to the Indonesian version, because Indonesia has provided the ASLP, there is no other ASLP. This caused the Bawean dispute between Indonesia and the United States in 2003. The Bawean incident is a case that happened because of differing interpretations between the concept of routes normally used and the interpretation of Indonesia, which had already provided archipelagic sea lanes passage. The Americans assumed that because Indonesia does not provide an east–west lane, the LOSC provision applies: If an archipelagic State does not designate sea lanes or air routes, the right of archipelagic sea lanes passage may be exercised through the routes normally used for international navigation.
Thus the fleet of ships and American war planes passed and maneuvered over Bawean Island, because the US assumed they were exercising their freedom of navigation as an effect of the implementation of the normal routes as mentioned in LOSC art. 53(12). Whereas in the opinion of Indonesia, based on Article 3 paragraph 1 of the Government Regulation, there are no more routes normally used after the determination of the 3 North–South ASLP. So the American fighter maneuvers over Bawean Island constitute a violation of Indonesian territorial sovereignty and can interfere with the safety and security of civil aviation.
IV. DELIMITATION AND REGULATION ON SEA SOVEREIGNTY ZONATION
The privilege of sovereignty at sea, which differs from the concept on land, must be responded to by the Indonesian government to prevent violations of sovereignty at sea. In the sovereignty zones of the territorial sea and archipelagic waters other states' rights persist, and only in the inland water zone does the state have full sovereignty. Therefore the Indonesian government should immediately establish and announce the delimitation of the inland water zone of each of the islands, the territorial zone, and the territorial water zone. Moreover, announcements and socialization should be carried out for the Indonesian people and user states. This inland water delimitation is very important to prevent violations of sovereignty at sea (inland waters) arising from the existence and exercise of foreign states' passage rights.
Determination of the straits that serve as entrances and exits for the archipelagic sea lanes is indispensable for establishing that a passing ship has entered the archipelagic sea regime of the state, so that it can be determined whether the user state will use the right of innocent passage or apply the right of archipelagic sea lanes passage. This is crucial, because different navigation right concepts carry different legal liabilities. Even though the foreign ships' passage right has been awarded by the LOSC to all user states, this does not mean they can sail freely. There are liabilities that must be obeyed by a passing foreign ship: the passage must necessarily be innocent and must not interfere with the territorial sovereignty and legal sovereignty of Indonesia as the archipelagic state.
Laws or regulations must be prepared to govern the procedures and terms for ships exercising the foreign-ship passage right in the archipelagic water zone, territorial waters, and strait zones, as well as special rules governing ships trying to enter inland water zones as a regime of absolute sovereignty (where there is no foreign-ship passage right). Other regulations that need to be established are provisions concerning foreign ship passage through crowded straits that are not straits for international navigation. Indonesia has two strait regimes: straits for innocent passage / archipelagic sea lanes, such as the Sunda Strait, Karimata Strait, Lombok Strait, etc., and the strait regime for international navigation, like the Malacca Strait.
V. CONCLUSION
The above discussion has firmly stated that there are indeed no more problems with archipelagic state sovereignty. It is obvious and real in LOSC Chapter IV (arts. 46–54), which recognizes and regulates sovereignty together with the rights and obligations of the archipelagic state and user states: the archipelagic state's sovereignty over its sea area consists of 3 zones, namely the zone of inland waters, the territorial sea, and the archipelagic sea. Each zone of the territorial waters' sovereignty has its own regime, whereas navigational passage consists of three forms of navigational rights: innocent passage, transit passage, and archipelagic sea lanes passage.
Indonesia has indeed correctly assigned its ASLP. By assigning the ASLP, all navigation passing through Indonesian territorial waters is required to traverse the lanes determined by the Indonesian government. Determination of the ASLP also closes off any user-state justification for using routes normally used in Indonesian straits and seas. This designation will make it easier to monitor all passing ships, because user states also have an obligation to comply with the rules and ordinances governing passage through the ASLP.
The real ASLP problems in Indonesia lie not in territorial sovereignty but in the inability to maintain and show sovereignty (to exercise sovereignty). The normal mode concept, under which ships pass through the ASLP, will require equipment and highly advanced technologies, because when vessels use the right of archipelagic sea lanes passage there is no liability to give a report; it therefore takes high-technology radar at every ASLP choke point in order to monitor the traffic of passing ships in the ASLP. The Indonesian government should hasten the determination of the zoning covering inland waters, territorial waters, and archipelagic waters, so that the boundaries of the regions where other states hold marine navigational rights are known. | 2018-12-15T05:38:04.617Z | 2017-07-30T00:00:00.000 | {
"year": 2017,
"sha1": "b54a1a3ce2829b87244daaab56dbeeab080a59c9",
"oa_license": null,
"oa_url": "https://doi.org/10.17304/ijil.vol14.4.704",
"oa_status": "GOLD",
"pdf_src": "Neliti",
"pdf_hash": "42c5ead803dc04408f2060300512f94636563d28",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Political Science"
]
} |
225009769 | pes2o/s2orc | v3-fos-license | Protecting Compressive Ghost Imaging with Hyperchaotic System and DNA Encoding
As computational ghost imaging is widely used in the military, radar, and other fields, its security and efficiency have become more and more important. In this paper, we propose a compressive ghost imaging encryption scheme based on the hyper-chaotic system, DNA encoding, and the KSVD algorithm for the first time. First, a 4-dimensional hyper-chaotic system is used to generate four long pseudorandom sequences, which are diffused with DNA operations to get the phase mask sequence, and then N phase mask matrices are generated from the sequences. Second, in order to improve the reconstruction efficiency, the KSVD algorithm is used to generate a dictionary D to sparsify the image. The transmission key of the proposed scheme includes the initial values of the hyper-chaotic system and the dictionary D, which gives plaintext correlation and a big key space. Compared with the existing compressive ghost imaging encryption schemes, the proposed scheme is more sensitive to initial values, more complex, and has a smaller transmission key, which makes the encryption scheme more secure, and the reconstruction efficiency is higher too. Simulation results and security analysis demonstrate the good performance of the proposed scheme.
Introduction
In recent years, with the rapid development of computer network and communication technology, information security issues have become more and more important. As an emerging optical imaging technology [1][2][3], CGI (computational ghost imaging) has attracted the attention of researchers since it appeared and has been widely used in military, encryption, radar, and other fields [4,5]. Therefore, the security of CGI is especially vital.
CGI is developed based on ghost imaging technology [6], which can transmit image information through one optical path, with simple structure, strong anti-interference ability, and good imaging effect. In 2010, Clemente proposed an image encryption technology based on CGI [7]; as shown in Figure 1, this solution can encrypt plain-images into light intensity values and only requires a bucket detector without spatial resolution to receive the light intensity, which indicates a new research direction for optical information security [8]. To achieve high image reconstruction efficiency, Katz proposed a compressive ghost imaging (CSGI) scheme, which combines CGI with a compressive sensing (CS) algorithm to reduce the number of measurements required for image recovery by an order of magnitude [9][10][11].
Then, Durfin et al. proposed a CSGI encryption scheme [12]. Zhao et al. further improved the security of optical encryption by utilizing the high fault tolerance of QR encoding, which reduces the size of the transmitted images and enhances robustness [13]. Wu et al. proposed an optical multiple-image encryption scheme based on CGI; this method can transmit multiple images at the same time, but with distances as the keys it is vulnerable to brute-force attacks [14]. Zhu et al. used fingerprint technology to produce a phase modulation matrix; a fingerprint is unique, but it is easy to obtain, and as a transmitted key a fingerprint is too big [15]. Li et al. proposed a multiple-image CSGI encryption method based on the LWT and XOR operations [16]. Most works in the literature fail to associate the key with the plaintext image and have big transmission keys. This motivates us to look for a novel CSGI encryption method with plaintext correlation, smaller transmission keys, larger key spaces, and high image reconstruction efficiency.
The DNA encoding and decoding technology [46] is a kind of biological method for processing information, which has the characteristics of large-scale parallelism, high storage density, ultra-low power consumption, a unique molecular structure, and an intermolecular recognition mechanism. DNA has great development prospects in the field of information encryption [47][48][49][50][51][52]. In this paper, a CSGI encryption scheme based on the hyper-chaotic system, DNA, and KSVD technologies is proposed. First, given four transmission keys, input the four keys as initial values to the hyper-chaotic system; second, 4 long chaotic sequences are generated by the hyper-chaotic system, three of them are arranged into a phase sequence, and the other sequence is used to produce a DNA sequence. Third, diffuse the phase sequence with the DNA sequence by DNA operations and then obtain the phase modulation matrices, which are used as the input of the spatial light modulator (SLM). At the same time, obtain the dictionary matrix D from the original image by KSVD and achieve a sparse representation of the original signal through D. Finally, complete the encryption of the scheme. Compared with the existing CSGI encryption schemes, the proposed scheme has a smaller double transmission key, a larger key space, high key sensitivity, plaintext correlation, and unpredictability. The use of DNA further increases the complexity and randomness of the encryption scheme. The rest of this paper is organized as follows. In Section 2, the basic theories of CGI, the hyper-chaotic system, DNA technology, compressed sensing, and singular value decomposition are described. In Section 3, the system framework of our proposed scheme and the generation process of the phase mask matrices are described in detail. The simulation results and security analysis are presented in Section 4. The paper is summarized in Section 5.
CGI.
In CGI, as shown in Figure 2, a spatial laser beam transmits through a spatial light modulator (SLM), which introduces an arbitrary phase mask matrix φ(x, y), generating a spatially incoherent beam. Knowing the random phase and the distribution of the laser light field U_in(x, y), one can evaluate the light field U_i(x, y) right after the SLM:

U_i(x, y) = U_in(x, y) exp[iφ(x, y)]. (1)

Through Fresnel diffraction, the light field distribution of the signal light in front of the object plane is the same as that of the reference light; the light travels to the object plane, which is a distance z away from the SLM, and the speckle field I_i(x, y) can be calculated as

I_i(x, y) = |U_i(x, y) ⊗ h_z(x, y)|², (2)

where h_z(x, y) is the transfer function in the spatial domain at a distance z, ⊗ represents the convolution operation, and I_i(x, y) is defined as the reference light. The signal light intensity detected by a bucket detector placed behind the object can be represented with the transmission function of the object T(x, y) and written as

B_i = ∫∫ I_i(x, y) T(x, y) dx dy. (3)

To reconstruct the object's transmission function T(x, y), the reference light speckle field I_i(x, y) is cross-correlated with the signal light intensities B_i:

G(x, y) = ⟨B_i I_i(x, y)⟩ − ⟨B⟩⟨I_i(x, y)⟩, (4)

where G(x, y) denotes the recovered object information, ⟨·⟩ = (1/N) Σ_i (·) is an ensemble average over N measurements, I_i(x, y) is calculated by the receiver according to equation (2), and ⟨B⟩ is the average value of the measured components B_i [53].
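A minimal numerical sketch of the correlation recovery in equation (4); it skips the Fresnel propagation of equations (1)-(2) and uses random patterns directly as the reference intensities, so the sizes and the toy object are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 32, 5000                               # image size, measurements
T = np.zeros((M, M)); T[10:22, 10:22] = 1.0   # toy transmissive object

I = rng.random((N, M, M))                     # reference speckles I_i
B = np.tensordot(I, T, axes=([1, 2], [0, 1])) # bucket values B_i (eq. 3)

# G = <B_i I_i> - <B><I_i>, the ensemble correlation of eq. (4)
G = np.tensordot(B, I, axes=(0, 0)) / N - B.mean() * I.mean(axis=0)
print("corr(G, T) =", round(np.corrcoef(G.ravel(), T.ravel())[0, 1], 2))
# The correlation climbs toward 1 as N grows, mirroring Figure 9's trend.
```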
Hyper-Chaotic System.
In our proposed CSGI encryption scheme, the phase mask matrix required on the SLM is generated by the four-dimensional hyper-chaotic system of equation (5). By setting the parameters a = 35, b = 3/8, c = 55, and d = 1.3, we obtain four Lyapunov exponents, including two positive Lyapunov exponents, λ1 = 1.4164 and λ2 = 0.5318, a zero Lyapunov exponent λ3 = 0, and a negative Lyapunov exponent λ4 = −39.1015 [54]. By this means, the system exhibits hyper-chaotic behavior. Figure 3 depicts the phase portraits of the hyper-chaotic system. Here, we take the fourth-order Runge-Kutta method to solve (5) and obtain the four hyper-chaotic sequences.
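For concreteness, here is a fourth-order Runge-Kutta driver of the kind described; since equation (5) itself is not reproduced above, the right-hand side below is a placeholder Lorenz-type system with a fourth driven state, not the authors' exact model.

```python
import numpy as np
# Generic RK4 integrator producing four sequences Cx, Cy, Cz, Cw.
# rhs() is a stand-in; substitute the paper's eq. (5) to match its results.
def rhs(s, a=10.0, b=8/3, c=28.0, d=1.3):
    x, y, z, w = s
    return np.array([a*(y - x) + w, c*x - y - x*z, x*y - b*z, -d*w + y])

def rk4(s0, h=0.001, n=20000):
    s, out = np.asarray(s0, float), np.empty((n, 4))
    for i in range(n):
        k1 = rhs(s); k2 = rhs(s + h/2*k1)
        k3 = rhs(s + h/2*k2); k4 = rhs(s + h*k3)
        s = s + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
        out[i] = s
    return out                        # columns play the roles of Cx..Cw

seq = rk4([1.0, 0.949, 1.0, 1.0])     # the paper's initial values
print(seq.shape, seq[-1].round(3))
```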
DNA.
DNA is a long-chain polymer, and the basic elements are four nucleic acid bases, namely, A (adenine), C (cytosine), G (guanine), and T (thymine), where A and T, C and G are complementary, respectively. In a binary system, 0 and 1 are complementary. It can be concluded that 00 and 11 are complementary, and 10 and 01 are complementary. Encoding the four bases A, C, G, and T with 0 and 1, eight encoding methods can be obtained, as shown in Table 1. Each of the DNA coding rules corresponds to an operation rule, and the following algorithm is based on the encoding rule 1 and rule 2. According to the binary calculation rule, we can get the corresponding rules of DNA addition, subtraction, and complement rule listed in Tables 2 and 3.
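As an illustration of the encoding and algebra just described, the sketch below assumes one common assignment for rule 1 (A = 00, C = 01, G = 10, T = 11) and implements base-wise addition modulo 4; the paper's actual Tables 1-3 may differ, so treat the mapping as an assumption.

```python
# DNA encoding and base-wise modular addition under an assumed "rule 1".
B2D = {"00": "A", "01": "C", "10": "G", "11": "T"}
D2B = {v: k for k, v in B2D.items()}

def encode(byte):
    """One byte -> four DNA bases (two bits per base)."""
    bits = f"{byte:08b}"
    return "".join(B2D[bits[i:i+2]] for i in range(0, 8, 2))

def dna_add(s1, s2):
    """Base-wise addition modulo 4, the DNA analogue of binary addition."""
    val = lambda b: int(D2B[b], 2)
    return "".join(B2D[f"{(val(a) + val(b)) % 4:02b}"] for a, b in zip(s1, s2))

m, k = encode(0b10110100), encode(0b01011100)
print(m, k, dna_add(m, k))   # GTCA CCTA TAAA
```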
Compressed Sensing.
Compressed sensing technology uses sparse basis such as DCT or DFT to represent the signal sparsely, measures the signal based on Gaussian random matrix, and then reconstructs the signal based on L1 norm and other algorithms.
Suppose a signal is x ∈ R^{N×1}. Before sampling the signal x, select a suitable orthogonal sparse basis Ψ ∈ R^{N×N} to sparsely represent the signal x as

x = Ψs, (6)

where s is the sparse representation of x on the sparse basis Ψ. s has K nonzero elements, and the other N − K (N ≫ K) elements' values are 0. The sparse operation and the measurement matrix must satisfy the restricted isometry property (RIP). The discrete cosine transform, the fast Fourier transform, etc. are common sparse operations.
During the measurement of x, in order to reduce the number of measurements and ensure that the measurement result contains as much information of x as possible, we need an appropriate measurement matrix Φ ∈ R^{M×N} (M < N). The Bernoulli matrix, Gaussian random matrix, Hadamard matrix, Toeplitz matrix, etc. are often used in compressed sensing. The measurement of the signal x can be expressed as

y = Φx = ΦΨs = Θs, (7)

where ΦΨ = Θ is the sensing matrix and y ∈ R^{M×1} is the measurement result.
In the end, a compressed sensing reconstruction algorithm is used to reconstruct s from y:

ŝ = arg min ‖s‖₁ subject to y = Θs. (8)

The approximate solution vector can then be obtained by applying the inverse transformation to ŝ:

x̂ = Ψŝ. (9)
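A toy end-to-end round trip through equations (6)-(9), with a hand-rolled orthogonal matching pursuit (OMP) in place of the L1 solver; OMP is the greedy reconstruction the paper itself uses for sparse coding. Dimensions and seeds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 64, 32, 4
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
Theta = rng.normal(size=(M, N)) / np.sqrt(M)   # Gaussian sensing matrix
y = Theta @ s_true                             # measurements (eq. 7)

def omp(Theta, y, K):
    """Greedy sparse recovery: pick the most correlated atom, re-fit, repeat."""
    r, supp = y.copy(), []
    for _ in range(K):
        supp.append(int(np.argmax(np.abs(Theta.T @ r))))
        s_sub, *_ = np.linalg.lstsq(Theta[:, supp], y, rcond=None)
        r = y - Theta[:, supp] @ s_sub
    s = np.zeros(Theta.shape[1]); s[supp] = s_sub
    return s

s_hat = omp(Theta, y, K)
print("max error:", np.max(np.abs(s_hat - s_true)))  # ~1e-15 for easy cases
```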
Singular Value Decomposition (SVD)
Suppose a real matrix E′_K ∈ R^{m×n}. It can be decomposed into

E′_K = U E V^T, (10)

where E ∈ R^{m×n} is a singular value matrix whose nonzero elements are only located on the diagonal, and U ∈ R^{m×m} and V ∈ R^{n×n} are both unit orthogonal matrices, U being the left singular matrix and V the right singular matrix, respectively. Generally, E is represented as

E = diag(σ₁, σ₂, …, σ_r, 0, …, 0), σ₁ ≥ σ₂ ≥ … ≥ σ_r > 0. (11)

Decomposing E′_K by equation (10), we can then get

E′_K = Σ_{i=1}^{r} σ_i u_i v_i^T. (12)

With the eigen-decomposition of E′_K E′^T_K and E′^T_K E′_K, the left singular matrix U and the right singular matrix V can be obtained.
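A quick numerical check of the decomposition and of the rank-1 truncation that the KSVD update in the next section relies on:

```python
import numpy as np
# Verify E'_K = U Sigma V^T and extract the best rank-1 approximation,
# sigma_1 * u_1 v_1^T, used by the KSVD atom update.
rng = np.random.default_rng(2)
E = rng.normal(size=(6, 4))
U, sig, Vt = np.linalg.svd(E, full_matrices=False)
assert np.allclose(U * sig @ Vt, E)            # exact reconstruction
rank1 = sig[0] * np.outer(U[:, 0], Vt[0])      # sigma_1, u_1, v_1^T
print("rank-1 residual:", np.linalg.norm(E - rank1))
```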
Proposed Encryption Scheme of CSGI
The proposed encryption scheme of CSGI includes three main parts: the generation of the phase masks φ, the sparse representation of the original image, and the CSGI encryption. Next, the implementation processes are introduced in detail.
Generation of the Phase Mask Matrix.
Chaotic systems have some significant features, such as determinism, pseudorandomness, and ergodicity, and they are sensitive to initial points and parameters. Supposing that the original image is denoted as T, whose size is M × M, and the initial values are x₀, y₀, z₀, and w₀, the N phase mask matrices φ(x, y) can be realized as follows: (1) Use the initial values x₀, y₀, z₀, and w₀ to produce four pseudorandom sequences Cx, Cy, Cz, and Cw by iterating equation (5); after the subsequent DNA diffusion steps, the new sequence P_st can be obtained.
(7) Set x₀ = Cx(L − 1), y₀ = Cy(L − 1), z₀ = Cz(L − 1), and w₀ = Cw(L − 1), and repeat step 1 to step 6 N − 1 times; then we can get the N phase mask matrices φ(x, y).
Original Image Sparse Presentation with KSVD.
Suppose the original signal (image) is a matrix X ∈ R^{m×n}, D ∈ R^{m×K} is the dictionary matrix, and each column of the dictionary is called an atomic vector d_k. S is the sparse matrix. Ideally, X = DS, with the original signal X sparsely represented through D. Therefore, solving for the dictionary matrix and sparse matrix can be converted into an optimization problem as follows:

min_{D,S} ‖X − DS‖²_F subject to ‖s_i‖₀ ≤ T₀,

where s_i (i = 1, 2, …, K) is the row vector of the sparse matrix S and ‖s_i‖₀ ≤ T₀ is the limitation, namely, each row of the sparse matrix has as few nonzero elements as possible. This problem can be converted to an unconstrained optimization problem by using the Lagrangian multiplier method:

min_{D,S} ‖X − DS‖²_F + λ Σ_i ‖s_i‖₀.

In order to simplify the optimization problem, ‖s_i‖₀ is replaced by ‖s_i‖₁.
So, the main problem is converted into two objective optimization problems over D and S. The common method of optimizing S is the orthogonal matching pursuit (OMP) algorithm, which has been discussed in [55]. The optimization of D can be described as follows. Suppose the sparse matrix S is known; we can implement a column-wise update of the dictionary matrix D. Let s^T_k be the k-th row vector of S and E_k denote the residual, so that

‖X − DS‖²_F = ‖E_k − d_k s^T_k‖²_F,

where E_k = X − Σ_{j≠k} d_j s^T_j. The aforementioned optimization question can be converted into

min_{d_k, s^T_k} ‖E_k − d_k s^T_k‖²_F, (20)

where d_k and s^T_k become the variables to optimize, and equation (20) can be described as a least squares problem which can be solved by using SVD. Extract all nonzero terms of E_k and then rebuild the new matrix E′_k. Hence, the optimization turns into

min_{d_k, s′^T_k} ‖E′_k − d_k s′^T_k‖²_F.

Through SVD, we can get E′_k = U E V^T. Replace d_k with the first column vector u₁ of the left singular matrix to get one column of D. Multiply the first row of the right singular matrix by the largest singular value; afterwards, we obtain s′^T_k. Replace s^T_k of the sparse matrix S with this new result. The singular values of E should be ordered from largest to smallest.
Repeat the above steps to update each column of the dictionary; then we can obtain the final dictionary matrix D and sparse matrix S from the original signal.
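A compact sketch of the single-atom update just described, restricting the residual to the signals that actually use atom k; the smoke test at the end uses random data rather than a full training loop.

```python
import numpy as np

def ksvd_atom_update(X, D, S, k):
    """One KSVD atom update: leading singular pair of the restricted residual."""
    omega = np.nonzero(S[k, :])[0]          # signals that use atom k
    if omega.size == 0:
        return D, S
    E_k = X - D @ S + np.outer(D[:, k], S[k, :])   # add atom k's part back
    U, sig, Vt = np.linalg.svd(E_k[:, omega], full_matrices=False)
    D[:, k] = U[:, 0]                       # new atom = leading left vector
    S[k, omega] = sig[0] * Vt[0]            # new coefficients on the support
    return D, S

rng = np.random.default_rng(3)
X, D = rng.normal(size=(8, 30)), rng.normal(size=(8, 5))
D /= np.linalg.norm(D, axis=0)
S = rng.normal(size=(5, 30)) * (rng.random((5, 30)) < 0.3)
err0 = np.linalg.norm(X - D @ S)
D, S = ksvd_atom_update(X, D, S, 2)
print(err0, "->", np.linalg.norm(X - D @ S))   # error should not increase
```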
CSGI Encryption.
Figure 4 shows the process of CSGI encryption. The detailed steps are as follows: (1) The phase mask matrices φ(x, y) generated in Section 3.1 are uploaded to the SLM, and the laser beam is phase modulated by the SLM according to equation (1). (2) The sparse matrix S and dictionary matrix D are obtained by sparse representation of the original image according to Section 3.2. (3) The sparse matrix S is placed at a distance z from the SLM. According to Fresnel diffraction, we can get the light field distribution in front of the image, and the light field intensity I_i(x, y) can be further obtained according to equation (2). (4) The total light intensity B_i(x, y) can be calculated by a bucket detector behind the image according to equation (3). (5) The initial values of the chaotic system and the dictionary matrix D are transmitted through the private channel as the transmission key, and B_i(x, y) is transmitted through the public channel.
Decryption Process.
Figure 5 shows the decryption process, and the detailed steps are as follows: (1) The transmission key x₀, y₀, z₀, w₀, and D is received through the private channel, and the random phase mask matrices are calculated from the received transmission key according to the method in Section 3.1.
Simulation Results and Security Analysis
In this part, the proposed scheme is simulated with MATLAB R2016a to verify its feasibility. As shown in Figure 6(a), a grayscale image of size 128 × 128 is used as the original image. The initial values of the hyper-chaotic system are set as (x₀ = 1, y₀ = 0.949, z₀ = 1, and w₀ = 1), and then, following the method in Section 3, we obtain N different random phase mask matrices and the sparse representation of the original image, which are shown in Figures 6(b) and 6(c). The following is a brief description of the computational complexity of our algorithm and a comparison with other algorithms.
In the generation of the phase mask matrices, the size of the gray image is m × m, the main operations are "addition", "multiplication", and "mod", and the operand count is N(40m² + 19M₀), where N is the number of measurements. In the sparse representation step, the main operation is the computation of the sparse matrix S and dictionary matrix D; the operand count is 5m³ when the number of iterations is 10, and for projection and image reconstruction, the operand counts are 4Nm² and 4m³, respectively. In order to guarantee the quality of the results, we set N ≫ m in our experiments. The total operand count is N(44m² + 19M₀) + 4m³, because the generation of the phase mask matrices and the sparse representation can be performed simultaneously. Therefore, the computational complexity of CSGI can be expressed as Θ(Nm²). Compared with other algorithms, the computational complexity of using QR code and compressed sensing to encrypt the image (QR-CGI-OE) [13] is Θ(Nm²), and the method based on the LWT and XOR operations (XOR-LWT-OE) [16] spends most of its time on the measurement process, whose computational complexity can also be expressed as Θ(Nm²). This illustrates that the computational complexity of the method used in this paper is identical to that of some relatively new algorithms, whereas our algorithm has better performance and needs fewer measurements, as stated below.
During the encryption of the CSGI, the wavelength of the plane wave is selected as 0.532 μm. The image is placed at a distance z = 200 mm from the SLM, and the transmitted light is collected by a bucket detector. Then, the image can be reconstructed according to Section 3.4 and Figure 6(c).
Key Space Analysis.
If a cryptographic scheme has a sufficiently large key space, it can resist brute-force attacks. Here, the transmission keys are x_0, y_0, z_0, w_0, and D; the contribution of D is comparatively small and can be ignored. The operational precision of the computer is $10^{16} \approx 2^{52}$, so the key space of our proposed scheme is $2^{52} \times 2^{52} \times 2^{52} \times 2^{52} = 2^{208}$, which is much larger than the security requirement of $2^{100}$. Thus, the key space of our proposed scheme is large enough to effectively resist brute-force attacks.
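The key-space arithmetic can be checked directly; the snippet below simply reproduces the count used in the text, with four keys at $2^{52}$ possibilities each.

import math

bits_per_key = 52                     # 10**16 distinguishable values ~ 2**52, per the text
n_keys = 4                            # x0, y0, z0, w0 (D is ignored)
print(2 ** (n_keys * bits_per_key))   # key space = 2**208
print(math.log2(1e16))                # ~53.15, the precision estimate behind the 2**52 figure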
Key Sensitivity Analysis.
A highly secure computational ghost imaging system must be sensitive to its key. To verify the security performance of our proposed scheme, a security test was carried out. The private keys were set as (x_0 = 1, y_0 = 0.949, z_0 = 1, and w_0 = 1). In the decryption process, we changed the private keys to (x_0' = 1 + 10^{-15}, y_0' = 0.949, z_0' = 1, and w_0' = 1) and then used them to reconstruct the image. The sampled object is shown in Figure 7. Clearly, no information related to the plaintext image can be obtained when the private key is changed even slightly.
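Key sensitivity of this kind is a generic property of chaotic maps. The toy demonstration below uses a logistic map as a stand-in for the hyper-chaotic system (whose equations are defined earlier in the paper), showing how a 10^-15 perturbation of the initial value makes the trajectories diverge completely.

def logistic_trajectory(x0, n=80, r=3.99):
    # iterate x -> r x (1 - x), a simple chaotic map used only for illustration
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-15)          # perturbation analogous to x0' = x0 + 10**-15
print(max(abs(u - v) for u, v in zip(a, b)))  # order-one divergence despite the tiny perturbation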
Correlation Analysis.
To evaluate the quality of the reconstructed image, the correlation coefficient between the reconstructed image G and the original image T can be calculated as

$r_{TG} = \frac{\sum_i (T_i - \bar{T})(G_i - \bar{G})}{\sqrt{\sum_i (T_i - \bar{T})^2 \sum_i (G_i - \bar{G})^2}}$,

where $\bar{T}$ and $\bar{G}$ are the mean pixel values of the original and reconstructed images. A comparison with XOR-LWT-OE was also conducted in this experiment. Figure 9 shows the curves of the correlation coefficient as a function of the number of measurements for several different methods; the abscissa represents the number of measurements, and the ordinate represents the correlation coefficient between the decrypted image and the original image. As shown in Figure 9, the correlation coefficient increases with the number of measurements, and a reconstructed image of high quality can be obtained as the measurement number grows. In addition, the result suggests that, based on KSVD sparse representation, a high-quality image can be reconstructed with fewer measurements than with the other methods.
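In NumPy the same coefficient can be computed in one call; T and G are the original and decrypted images as 2-D arrays.

import numpy as np

def correlation_coefficient(T, G):
    # Pearson correlation r_TG between original image T and reconstruction G
    return np.corrcoef(T.ravel().astype(float), G.ravel().astype(float))[0, 1]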
Lena was measured 3,000 times, and the reconstructed images are shown in Figures 10(b)-10(f). The $r_{TG}$ values of DCT, DFT, QR-CGI-OE, XOR-LWT-OE, and KSVD are 0.7353, 0.7821, 0.4540, 0.6248, and 0.9729, respectively. Then, we compare the maximum $r_{TG}$ and the number of measurements of KSVD with those of other sparse bases and algorithms, as shown in Table 4. Incidentally, in QR-CGI-OE the decrypted QR code can only just be recognized, and the original image restored, after 7,100 measurements.
NIST Statistical Test.
In this paper, the NIST SP 800-22 test suite [56] is used to analyse randomness and discover potential defects in the structure of the pseudorandom sequence generator. During the test, we used the default values provided with the NIST suite. The test result is expressed as a p value; according to the NIST test rules, a sequence passes the test when p is greater than 0.01. The pseudorandom sequences generated by the hyper-chaotic map successfully passed the NIST SP 800-22 statistical test. The test results are listed in Table 5.
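For reference, the simplest member of the SP 800-22 suite, the frequency (monobit) test, can be written in a few lines; a sequence passes when the resulting p value exceeds 0.01. This sketch covers only the first of the fifteen tests in the suite.

import math

def monobit_p_value(bits):
    # NIST SP 800-22 frequency test: S_n = sum of +/-1 bits, p = erfc(|S_n| / sqrt(2 n))
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

# a sequence passes this particular test when monobit_p_value(seq) > 0.01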
Noise Addition.
As the phase mask matrices may be attacked by noise, we add Gaussian noise, salt-and-pepper noise, and speckle noise to the phase mask matrices to test the robustness of this scheme. The results are shown in Figure 11. In Figure 11(a), Gaussian noise with zero mean and variance 0.005 is added, giving $r_{TG} = 0.9549$; in Figure 11(b), salt-and-pepper noise with density 0.005 is added, giving $r_{TG} = 0.9306$; and in Figure 11(c), speckle noise with zero mean and variance 0.01 is added, giving $r_{TG} = 0.9119$. Clearly, the proposed scheme resists noise attacks well.
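The three perturbations can be reproduced with the helper functions below, written to mirror MATLAB's imnoise conventions (Gaussian and speckle noise parameterized by mean and variance, salt-and-pepper noise by density); they are illustrative rather than the authors' exact code.

import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(phi, mean=0.0, var=0.005):
    # additive Gaussian noise on the phase mask
    return phi + rng.normal(mean, np.sqrt(var), phi.shape)

def add_salt_pepper(phi, density=0.005):
    # replace a fraction `density` of entries with the extreme values
    out = phi.copy()
    u = rng.random(phi.shape)
    out[u < density / 2] = phi.min()          # pepper
    out[u > 1 - density / 2] = phi.max()      # salt
    return out

def add_speckle(phi, mean=0.0, var=0.01):
    # multiplicative speckle noise
    return phi * (1.0 + rng.normal(mean, np.sqrt(var), phi.shape))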
Conclusion
In this paper, a CSGI encryption scheme based on a hyper-chaotic system, DNA coding, and KSVD technology is proposed for the first time. The hyper-chaotic system is used to generate four long pseudorandom sequences, the sequences are diffused with DNA operations, and the phase mask matrices for encryption are then obtained. The original image is sparsely represented using the dictionary D generated by KSVD. The transmission key of the scheme consists of the initial values of the hyper-chaotic system and the dictionary D. Compared with existing schemes, the proposed scheme has a small transmission key (which protects the private key), a large key space, high key sensitivity, high complexity, and strong plaintext correlation, all of which ensure the security of the scheme. Simulation results and security analysis show that the proposed scheme resists most known attacks well and offers high security and strong performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
All authors declare that they have no conflicts of interest. | 2020-10-19T18:06:11.823Z | 2020-10-05T00:00:00.000 | {
"year": 2020,
"sha1": "72e69589323d97e254cd5288dc7e1db775e867d4",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/complexity/2020/8815315.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c546eeef423a663cc83928bcf3bdc9e937ae7872",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
221882490 | pes2o/s2orc | v3-fos-license | Genome-Wide Identification and Characterization of the bHLH Transcription Factor Family in Pepper (Capsicum annuum L.)
Plant basic helix–loop–helix (bHLH) transcription factors are involved in the regulation of various biological processes in plant growth, development, and stress response. However, members of this important transcription factor family have not been systematically identified and analyzed in pepper (Capsicum annuum L.). In this study, we identified 122 CabHLH genes in the pepper genome and renamed them based on their chromosomal locations. CabHLHs were divided into 21 subfamilies according to their phylogenetic relationships, and genes from the same subfamily had similar motif compositions and gene structures. Sixteen pairs of tandemly and segmentally duplicated genes were detected in the CabHLH family. Cis-element identification and expression analysis of the CabHLHs revealed that they may be involved in plant development and stress responses. This study is the first comprehensive analysis of the CabHLH genes and will serve as a reference for further characterization of their molecular functions.
INTRODUCTION
The bHLH transcription factor (TF) family, named for its basic helix-loop-helix (bHLH) structure, is the second largest class of TFs and is widely distributed in animals, plants, and microorganisms (Guo et al., 2008). The bHLH domain consists of approximately 60 amino acids and is divided into a basic amino acid region and a helix-loop-helix region (Toledo-Ortiz et al., 2003). The basic region is located on the N-terminal side of the bHLH domain and is approximately 15 amino acids in length. These amino acids are mainly responsible for binding to cis-elements in DNA. The HLH region is located on the C-terminal side of the domain, consists of approximately 40 amino acids, and promotes the formation of homo-and heterodimer complexes (Murre et al., 1989;Ferre-D'Amare et al., 1994).
According to their evolutionary relationships, DNA binding abilities, and functional characteristics, bHLH proteins in animals have been divided into six groups, A-F (Atchley and Fitch, 1997). Many of the plant bHLH proteins that have been identified belong to Group B (Sailsbery and Dean, 2012). According to classification criteria developed in animals, the 133 bHLH genes found in Arabidopsis thaliana have been divided into 12 subfamilies based on conserved amino acids at specific positions and on the presence or absence of additional conserved domains (Heim et al., 2003). Fourteen new bHLH TFs were subsequently discovered and further divided among 21 subfamilies, but this classification was limited to higher terrestrial plants (Toledo-Ortiz et al., 2003). As more family members were identified in species such as moss and seaweed, bHLH TFs were subdivided into 32 subfamilies (Carretero-Paulet et al., 2010). At present, the classification of the plant bHLH TF family is not clearly defined, and there are no corresponding names for each subfamily across species.
Pepper (Capsicum annuum L.) is an economically important vegetable and one of the most widely grown cooking ingredients in the world. With the completion of the pepper genome sequence (Qin et al., 2014), genome-wide identification and classification of gene families can be performed to study genes that are critical for pepper growth and development. To date, a number of gene families have been characterized in pepper, such as the Dof (Wu et al., 2016) and Hsp70 (Guo et al., 2016) families. However, the pepper bHLH family has not been characterized previously. Here, we use a bioinformatics approach to identify and characterize members of the bHLH family in pepper. We report basic information about each gene, including its conserved domains, evolutionary relationships, chromosomal location, expression in various pepper tissues, and response to abiotic stress. These data provide a reference for further exploration of the molecular functions of bHLH genes in regulating pepper growth and stress responses.
Identification of the bHLH Gene Family
Annotated sequences of pepper and tomato genes were downloaded from the Solanaceae Genomics Network 1 , and annotated sequences of Arabidopsis bHLHs were obtained from TAIR 2 . We used HMMER 3.0 (Eddy, 1998) to identify Arabidopsis, tomato and pepper sequences that contained the complete bHLH domain (PF00010), using an E-value < 1e −5 threshold. Candidate sequences were verified using the SMART 3 and NCBI databases 5 . Sequences with confirmed bHLH domains were retained for further analysis.
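A minimal sketch of this screening step is shown below. The file names are placeholders, but hmmsearch and its --domtblout output format are standard HMMER 3 features, with the full-sequence E-value in the seventh whitespace-separated column.

import subprocess

# hypothetical input/output file names
subprocess.run(["hmmsearch", "--domtblout", "pepper_pf00010.dom",
                "PF00010.hmm", "pepper_proteins.fa"], check=True)

candidates = set()
with open("pepper_pf00010.dom") as fh:
    for line in fh:
        if line.startswith("#"):
            continue
        fields = line.split()
        if float(fields[6]) < 1e-5:   # full-sequence E-value threshold used in the text
            candidates.add(fields[0])
print(len(candidates), "candidate bHLH proteins before SMART/NCBI verification")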
Phylogenetic Analysis and Classification of the CabHLH Gene Family
The sequences of the CabHLH and AtbHLH proteins were extracted, and a multiple alignment of the sequences was performed using ClustalW 2.0 (Larkin et al., 2007). A phylogenetic tree was constructed in MEGA 7.0 using the neighbor joining (NJ) method (Tamura et al., 2007) with the following parameters: 1,000 bootstrap replicates, Poisson model, and pairwise deletion. CabHLHs were placed into subfamilies based on the classification of closely related AtbHLHs and the bootstrap support values at relevant nodes.
Protein Properties, Conserved Motifs and Gene Structures
CabHLH protein sequences were uploaded to the ExPASy website 6 to calculate their molecular weights (MW) and isoelectric points (pI). MEME tools 7 v5.1.1 (Bailey et al., 2009) were used to identify up to ten conserved motifs in each CabHLH protein with an optimal motif width of 10-200 residues and all other parameters set to their default values. Intron locations were determined based on the GFF3 files of Arabidopsis, pepper and tomato sequences. Gene structures were drawn using TBtools v0.66833 (Chen et al., 2020).
Chromosomal Mapping and Gene Duplication Analysis
The chromosomal positions of the CabHLH genes were obtained from the gene annotation file and visualized using MapGene2Chromosome v2. Within a genome, homologous gene pairs located within 100 kb on the same chromosome were considered to be tandem duplicates, whereas blocks of genes copied from one region to another were considered to be segmental duplications (Tang et al., 2008; Liu et al., 2011). Segmental and tandem duplicated gene pairs within the pepper genome and collinear gene pairs among the pepper, tomato and Arabidopsis genomes were identified using MCScanX with a match score of 50, a match size of 5, a gap score of -3, and an E-value of 1e−05 (Wang et al., 2012). The nonsynonymous substitution rate (Ka) and synonymous substitution rate (Ks) were calculated using KaKs_Calculator 2.0 (Wang et al., 2010), and a collinearity map was drawn with Circos software (Krzywinski et al., 2009).
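The tandem/segmental distinction described above can be expressed as a simple rule. The sketch below is a simplification of the MCScanX workflow (which additionally requires segmental pairs to lie in collinear blocks), using hypothetical inputs.

def classify_duplicates(pairs, loc):
    # pairs: iterable of homologous gene-ID pairs
    # loc: gene ID -> (chromosome, position in bp)
    tandem, other = [], []
    for a, b in pairs:
        chr_a, pos_a = loc[a]
        chr_b, pos_b = loc[b]
        if chr_a == chr_b and abs(pos_a - pos_b) <= 100_000:
            tandem.append((a, b))      # within 100 kb on the same chromosome
        else:
            other.append((a, b))       # candidates for segmental duplication
    return tandem, other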
Analysis of cis-Regulatory Elements
SeqKit v0.13.0 (Shen et al., 2016) was used to extract the promoter sequences of each CabHLH gene from the pepper genome file, 2000 bp upstream of the ATG start codon. Promoters were uploaded to the PlantCARE website 9 (Lescot et al., 2002) to predict their cis-elements.
Expression Analysis of the CabHLH Genes
RNA-Seq data were used to examine the expression of CabHLH genes in multiple tissues and in response to various abiotic stress treatments (Liu et al., 2017). The expression level of each gene was calculated as FPKM (fragments per kilobase of transcript per million mapped reads), transformed as log2(FPKM + 1). Finally, expression heatmaps were generated in R v3.6.1.
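The filtering and transformation used for the heatmaps amount to two lines of NumPy; here fpkm is a genes-by-samples matrix, and the function is a sketch rather than the original R code.

import numpy as np

def heatmap_matrix(fpkm):
    keep = (fpkm >= 1).any(axis=1)       # drop genes with FPKM < 1 in every sample
    return np.log2(fpkm[keep] + 1.0)     # log2(FPKM + 1) transform used in the text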
Seeds of the pepper cultivar "6421," which exhibits good heat, drought, and disease tolerance, were obtained from the Vegetable Institute of the Hunan Academy of Agricultural Sciences. Plants were grown using the substrate floating seedling method at 24/16 °C with a 16 h light/8 h dark photoperiod. Following a previously published treatment protocol (Liu et al., 2017), 40-day-old replicate pepper seedlings were exposed to 200 mM NaCl (salt stress), 400 mM mannitol (drought), 10 °C (cold stress), or 42 °C (heat stress). Salt stress was imposed by adding NaCl to a final concentration of 200 mM in the nutrient solution, and drought stress was applied by adding D-mannitol to a final concentration of 400 mM. For heat and cold stress treatments, the seedlings were transferred to a growth chamber at 42 or 10 °C, and the illumination, photoperiod, and relative humidity were identical to those in the control treatment. Leaf tissue of treated and control plants was sampled at four time points: 1, 6, 12, and 24 h after treatment initiation. Samples of treated and control plants were harvested at 7:00, 12:00, and 18:00 h on the first day and at 6:00 h on the following day. Three seedlings were randomly selected and combined to create one biological replicate, and three biological replicates were collected for each treatment and time point. Samples were frozen in liquid nitrogen and stored at -80 °C until further use.
Total RNA was extracted from frozen leaf samples using an RNA kit (TaKaRa, Dalian, China) and reverse transcribed into cDNA with a PrimeScript RT reagent kit (TaKaRa). The SYBR Premix Ex Taq kit (TaKaRa) was used to measure relative gene expression levels following the manufacturer's instructions, on a real-time PCR detection system (Applied Biosystems, Foster City, CA, United States). The cycling steps were 94 °C for 30 s, followed by 40 cycles of 94 °C for 10 s and 58 °C for 30 s, and then melting curve analysis at 65 °C for 10 s over 61 cycles. The relative expression levels of selected genes were calculated using the 2^−ΔΔCt method (Schmittgen and Livak, 2008).
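The 2^−ΔΔCt calculation reduces to a one-line formula once the Ct values are in hand. The function below assumes a single reference gene and a calibrator (control) sample; the numbers in the example are hypothetical.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    # 2**(-ddCt) method of Schmittgen and Livak (2008)
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# hypothetical example: target Ct 24.1 vs reference Ct 18.0 in treated leaves,
# and 25.6 vs 18.2 in the control, gives 2**1.3, i.e. ~2.5-fold upregulation
print(relative_expression(24.1, 18.0, 25.6, 18.2))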
Identification, Phylogenetic Analysis, Classification and Protein Properties of the CabHLH Genes
We used HMMER 3.0 to search for bHLH domains (PF00010) in the pepper and tomato proteins using an E-value threshold of <1e−5. All candidate sequences were filtered with NCBI and SMART to further confirm that they contained complete bHLH domains. A total of 122 CabHLHs and 140 SlbHLHs were identified (Supplementary Table S1); the pepper bHLH genes were named CabHLH1 to CabHLH122 based on their arrangement on the pepper chromosomes. We constructed a phylogenetic tree of CabHLH and AtbHLH proteins in order to investigate their evolutionary relationships and to classify the CabHLHs into 21 established subfamilies according to the classifications of their Arabidopsis homologs (Li et al., 2006). Subfamily VII had the largest number of members in pepper (14 genes), whereas subfamilies IIIf and VIIIa had the fewest (one gene each) (Figure 1). Compared with Arabidopsis, pepper had no members of the XV subfamily but contained a unique X subfamily. In many cases, Arabidopsis and pepper had different numbers of genes in a given subfamily.
Comprehensive information on the CabHLH genes, including locus names, gene positions, protein lengths, exon numbers, molecular weights (MW), and isoelectric points (pI), is provided in Supplementary Table S2. The CabHLH proteins range in size from 117 (CabHLH21) to 633 (CabHLH71) amino acids, with an average length of approximately 344 amino acids. The MWs of the CabHLH proteins range from 12.9 kDa (CabHLH31) to 69.3 kDa (CabHLH2), and their pIs range from 4.6 (CabHLH48) to 10.32 (CabHLH108). The CabHLH genes contain 1 to 10 exons, highlighting the diversity of their structures.
To further explore the evolutionary relationships among bHLH TFs from different species, we constructed a collinearity plot of the pepper, tomato, and Arabidopsis bHLH gene families (Figure 3). A total of 117, 64, and 105 collinear gene pairs were identified between pepper and tomato, pepper and Arabidopsis, and tomato and Arabidopsis, respectively, indicating that significant expansion of the gene family had occurred before divergence of the three species (Supplementary Table S3). For example, 44 CabHLHs and 54 AtbHLHs had a collinear relationship, and most such relationships were one-to-one matches such as CabHLH2/AtbHLH2 and CabHLH12/AtbHLH45. There were also one-to-many matches, such as CabHLH17/(AtbHLH4, AtbHLH5, AtbHLH6) and CabHLH23/(AtbHLH18, AtbHLH25). Many-to-one cases also existed, such as (CabHLH6, CabHLH8, CabHLH17)/AtbHLH4 and (CabHLH27, CaHLH108, CabHLH44)/AtbHLH88. These results indicate that bHLHs are relatively conserved and that collinear bHLHs between species may originate from the same ancestor.
Gene Structure and Motif Analysis of CabHLH Family
Conserved motifs of the CabHLH proteins were analyzed using MEME tools, and ten conserved motifs from 26 to 154 amino acids in length were identified (Supplementary Figure S1). The number of conserved motifs in each CabHLH protein varied from one to five (Figure 4). Each subfamily contained several common motifs, while few subfamilies possessed unique motifs. For example, motifs 1 and 2 were present in almost all CabHLH proteins and represented the position of the bHLH domain, whereas motifs 9 and 10 were only found in subfamilies III (a + c) and VII, respectively, and may be related to unique functions of individual subfamilies. CabHLH proteins from the same subfamily exhibited similar motifs, suggesting that they may also share a degree of functional similarity. The diversity of motifs in different subfamilies suggests that CabHLH functions have tended to diversify during evolution.
We used TBtools to map the structures of the pepper, tomato and Arabidopsis bHLH genes (Figure 4, Supplementary Figure S2) and found that most bHLHs from the same subfamily shared similar gene structures. For example, subfamily III (d + e) contains 0-2 introns, while subfamily IX has 4-6 introns. Intron gain and loss is a frequent phenomenon during evolution and can increase the complexity of gene structures (Roy and Gilbert, 2005). In the CabHLHs, most tandem duplicates (5/6) had different numbers of introns, whereas most segmental duplicates (7/10) had the same number of introns, suggesting that tandem duplicates may have undergone greater divergence in gene function over the course of evolution. In addition, we also analyzed the introns of collinear bHLH pairs. There were 53, 31, and 46 collinear bHLH pairs with different numbers of introns between pepper and tomato, pepper and Arabidopsis, and tomato and Arabidopsis, respectively, indicating that the functions of these collinear genes may have undergone a degree of differentiation (Supplementary Table S3).
Cis-Element Analysis of the CabHLH Genes
We extracted the 2,000 bp upstream promoter sequences of the CabHLH genes for cis-element analysis using the PlantCARE database. Ten common cis-elements were identified (Supplementary Table S4), and 119 CabHLHs contained at least one cis-element. The ABRE, CGTCA-motif, and GARE-motif elements respond to ABA, JA, and GA stimulation; these motifs were present in the promoters of 86, 77, and 27 CabHLHs, respectively, suggesting that the expression of these genes responds to the levels of the corresponding hormones. Two light-responsive elements (G-box and Sp1) that are ubiquitous in plants were identified in 90 and 12 CabHLHs, respectively. Stress-responsive cis-elements, including those associated with low temperature (LTR), defense and stress (TC-rich repeats), drought (MBS), and anaerobic induction (ARE), were identified in the promoters of 33, 42, 52, and 27 CabHLHs, respectively. These diverse response elements indicate the importance of CabHLHs in stress responses.

FIGURE 3 | Collinear analysis of bHLH genes among pepper (Ca), tomato (Sl), and Arabidopsis (At). Green, black and yellow lines represent the collinear gene pairs between the pepper and tomato, pepper and Arabidopsis, and tomato and Arabidopsis chromosomes, respectively. Blue lines indicate the segmentally duplicated bHLH genes in pepper.
Expression Patterns of CabHLH Genes in Various Tissues
We obtained the expression data of CabHLH genes from previous research (Liu et al., 2017) and removed 22 CabHLH genes with FPKM values of less than one in all tissues (Zhuo et al., 2018). An expression heatmap was created using the remaining 100 genes (Figure 5, Supplementary Table S5). Most CabHLHs differed in their expression patterns, although a few showed similar expression patterns. Some CabHLHs (such as CabHLH100, CabHLH11, CabHLH8, and CabHLH43) showed high expression levels (FPKM > 10) in most tissues analyzed, whereas other CabHLHs (such as CabHLH23, CabHLH85, CabHLH39, CabHLH105, and CabHLH108) were not expressed in any tissues. In addition, several CabHLHs showed extremely high expression in specific tissues, such as CabHLH33/CabHLH100 in flower buds, CabHLH33 in petals, and CabHLH42 in the placenta. We obtained transcriptome data for CabHLH33 and CabHLH100 from another study and found that their expression was also significantly higher in flowers or flower buds than in any other tissues (Figure 6) (Qin et al., 2014). These genes may therefore have specific roles in flower development. We also analyzed the expression of duplicated genes in various tissues and found
Expression Analysis of CabHLH Genes Under Abiotic Stresses
To analyze the response of the CabHLH genes to abiotic stress, we extracted transcriptome data for CabHLH gene expression after 6 h of cold, heat, salt, and drought stress. We used genes with FPKM values greater than one in at least one group to create a clustered heatmap and found that many CabHLHs responded to abiotic stress (Figure 7, Supplementary Table S6). We also analyzed the relationship between the transcriptome data and cis-elements and found that the gene expression results were not clearly correlated with the presence or absence of specific cis-elements. For example, some CabHLHs with LTR promoter elements, such as CabHLH5/17/32/65/90, were upregulated under low-temperature treatment. However, other CabHLHs with LTR elements, such as CabHLH16/36/48/114, were downregulated or remained unchanged (Figure 7). This result indicates that the expression of these CabHLHs might be controlled by several cis-elements and that unidentified cis-elements might contribute to regulating their expression under abiotic stress.
To further validate the effects of abiotic stress on the expression of CabHLH genes, we selected eight genes that responded to abiotic stress (Supplementary Table S6) and verified the expression patterns of these genes using qRT-PCR (Figure 8). The specific primers used are listed in Supplementary Table S7. After cold stress treatment, the expression levels of CabHLH30, CabHLH37, CabHLH42, CabHLH71, and CabHLH111 were upregulated, CabHLH11 was downregulated, CabHLH28 was first upregulated and then downregulated, and CabHLH41 remained unchanged. After high-temperature treatment, the expression levels of CabHLH37 and CabHLH42 were downregulated, and the expression levels of the remaining genes were upregulated. After drought treatment, CabHLH30, CabHLH71, and CabHLH111 were upregulated, CabHLH41 and CabHLH37 were downregulated, and CabHLH42 was first upregulated and then downregulated. Under salt stress, the expression levels of CabHLH30, CabHLH37, CabHLH71, and CabHLH111 were upregulated, CabHLH11 and CabHLH28 were first upregulated and then downregulated, and CabHLH41 and CabHLH42 were downregulated. In general, there was good correspondence between the RNA-seq data and the qRT-PCR results. However, a few exceptions existed. For example, in the qRT-PCR experiment, the expression level of CabHLH11 decreased after 6 h of cold treatment, but its expression was unchanged in the RNA-seq analysis, perhaps due to different sampling time points (qRT-PCR at 12:00, RNA-seq at 14:00).
DISCUSSION
A growing body of evidence suggests that plant bHLH genes are involved in physiological and biochemical processes such as stress resistance, growth and development, biosynthesis, and signaling (Duek and Fankhauser, 2003;Hernandez et al., 2004;Castillon et al., 2007). Members of the bHLH TF family have been identified in Arabidopsis (Toledo-Ortiz et al., 2003), rice (Li et al., 2006), apple (Yang et al., 2017), cabbage (Song et al., 2014), tomato (Sun et al., 2015), ginseng (Chu et al., 2018), and other species by comparative genomics. Until now, this family had not been characterized in pepper. In this study, we systematically analyzed the pepper bHLH TF family and provided a reference for further exploration of the roles of bHLH genes in regulation of pepper growth and stress responses.
A total of 122 CabHLH genes were identified and classified into 21 subfamilies according to their phylogenetic relationships with known bHLH genes from Arabidopsis (Li et al., 2006). Compared with Arabidopsis, pepper lacks members of the XV subfamily but contains a unique X subfamily. The acquired genes may counter gene losses, or even evolve novel functions (Qian et al., 2010). The functions of some AtbHLHs have been identified in previous studies. For example, AtbHLH15 and AtbHLH8 from subfamily VII (a + b) can combine with active phytochromes and mediate light signaling responses (Castillon et al., 2007). AtbHLH44, AtbHLH58, and AtbHLH50 in subfamily VII are early response BR signaling components required for full BR response (Friedrichsen et al., 2002). Overexpression of AtbHLH116 from subfamily IIIb in wild-type plants improves the expression of the CBF regulon in the cold and enhances freezing tolerance of transgenic plants (Chinnusamy et al., 2003). AtbHLH1 from subfamily IIIf encodes a bHLH protein that regulates trichome development in Arabidopsis through interaction with GLABRA3 and TESTA GLABRA1 (Payne et al., 2000). CabHLHs and AtbHLHs from the same subfamilies may have similar functions, although this will require further experimental verification.
Gene duplication, including tandem duplication and segmental duplication, is the most important pathway for the evolution and expansion of gene families (Vision et al., 2000). We identified six tandem duplicated CabHLHs and ten segmental duplicated CabHLHs in the pepper genome. Collinear genes derive from a common ancestor and are present in the same relative positions in the genomes of two or more species. We identified 117, 64 and 105 collinear bHLH pairs between pepper and tomato, Arabidopsis and pepper, and Arabidopsis and tomato, respectively. In the process of evolution, collinear blocks may be disrupted by various factors. The greater the evolutionary distance, the fewer collinear gene pairs will be identified between species, and collinearity can therefore be used as a measure of the evolutionary distance between species (Wicker et al., 2010). There were more collinear gene pairs between tomato and pepper, consistent with the fact that both are members of the Solanaceae family (Qin et al., 2014). Previous studies have shown that the amplification of transposable elements has eroded collinearity in the pepper genome (Wicker et al., 2010;Qin et al., 2014), which may explain why the number of collinear gene pairs between pepper and Arabidopsis is much lower than that between Arabidopsis and tomato.
FIGURE 6 | Heatmap of expression profiles [in log2(RPKM + 1)] of CabHLH33 and CabHLH100 in two pepper cultivars, "Zunla-1" (Capsicum annuum L.) and "Chiltepin" (C. annuum var. glabriusculum). The expression levels are displayed by the color bar. F-Dev-1, F-Dev-2, F-Dev-3, F-Dev-4, and F-Dev-5 (0-1 cm, 1-3 cm, 3-4 cm, 4-5 cm, and mature green fruit), F-Dev-6 (fruit turning red), F-Dev-7, F-Dev-8, and F-Dev-9 (3, 5, and 7 days after turning red). RPKM, reads per kilobase million.

We identified ten highly conserved motifs in the CabHLH proteins. Similar to the bHLHs of potato, lotus and Arabidopsis (Wang et al., 2018; Mao et al., 2019), motif 1 and motif 2 were present in almost all CabHLH proteins and represented the position of the bHLH domain, which is highly conserved among species. However, motifs 9 and 10 were only present in subfamilies III (a + c) and VII, respectively. Variation in conserved motifs permits the classification of proteins into subfamilies and reflects each subfamily's specific functions (Jiang et al., 2019). Gene structure can also provide information for the study of gene family evolution (Guo et al., 2013). The number of introns varied from 0 to 9, indicating that gain and loss of introns had occurred, which may be another reason for the differences among CabHLH subfamilies (Paquette et al., 2000).
We analyzed the expression profiles of CabHLHs in different tissues and found a large variety of expression patterns. Some CabHLHs (such as CabHLH100, CabHLH11, CabHLH8, and CabHLH43) were highly expressed (FPKM > 10) in most tissues analyzed and may participate in various developmental processes of pepper. Several CabHLHs were highly expressed in specific tissues, suggesting that they may have a role in those tissues' development. For example, CabHLH33, a homolog of AtbHLH31, was highly expressed in flower buds and petals. Previous studies suggest that AtbHLH31 regulates petal growth by controlling cell expansion (Varaud et al., 2011), and CabHLH33 may have a similar function in pepper. However, CabHLHs that were not expressed in any tissues (such as CabHLH23, CabHLH85, CabHLH39, CabHLH105, and CabHLH108) may have lost their functions during evolution and become pseudogenes, as has been demonstrated in the evolution of other plant genomes (Innan and Kondrashov, 2010; Xie et al., 2019). In addition, several duplicated pairs (such as CabHLH34/CabHLH58 and CabHLH8/CabHLH17) had significantly different expression patterns, indicating that functional diversification of duplicated CabHLH pairs had occurred during the course of evolution (Blanc and Wolfe, 2004).

FIGURE 8 | qRT-PCR analysis of CabHLH genes under cold, heat, salt and drought treatments following a 24 h time course. y-axis: relative expression levels; x-axis: the time course (hours) of the stress treatments. t-test: one asterisk denotes significant differences (P < 0.05) between the treatment group and the control group (CK); two asterisks denote extremely significant differences (P < 0.01).
DATA AVAILABILITY STATEMENT
All datasets presented in this study are included in the article/Supplementary Material.
AUTHOR CONTRIBUTIONS
ZZ, FL, XH, and XZ conceived and designed the experiments. ZZ, JC, and CL performed the experiments. ZZ, CL, FL, and XH analyzed data. ZZ and XZ wrote the manuscript. All authors read and approved the manuscript.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fgene.2020.570156/full#supplementary-material

FIGURE S1 | Sequence logos of conserved motifs of CabHLHs.
FIGURE S2 | Exon-intron structures of AtbHLHs and SlbHLHs. The phylogenetic tree of AtbHLHs and SlbHLH proteins was constructed by MEGA7 using the neighbor-joining (NJ) method (1,000 bootstrap). Green boxes represent exons and black lines indicate introns. | 2020-09-25T13:03:03.369Z | 2020-09-25T00:00:00.000 | {
"year": 2020,
"sha1": "4361f54176316da153e056b3c178894835139d98",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2020.570156/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4361f54176316da153e056b3c178894835139d98",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
245283069 | pes2o/s2orc | v3-fos-license | Experimental Analyses of the Additive Effect of TiO2 Nanoparticles on the Tribological Properties of Lubricating Oil
Titanium dioxide (TiO2) is a promising lubricant additive for enhanced engine efficiency. In this study, pure base engine oil 10W-30 was improved with titanium dioxide (TiO2) nanoparticles at different concentrations and experimentally evaluated with the aim of improving its tribological behavior. The tribological tests were performed at ambient temperature as well as at 75 °C using a four-ball tribometer for 30 minutes. Due to their small particle size (approx. 21 nm), the TiO2 nanoparticles were properly dispersed in the oil, based on optical microscopy evaluation. The tribological results indicate that the friction coefficient of engine oil with 0.075 wt.% TiO2 reached 0.05 at 75 °C, much lower than that of pure oil (1.20); at room temperature (23 °C), it decreased from 1.8 for pure oil to 0.4 for oil with 0.075 wt.% TiO2 due to the formation of a stable tribofilm formed by the MoS2, MoO3, FeS, and FeSO4 composite within the wear track. The lowest wear volume was measured on samples tested at 75 °C for the oil with 0.075 wt.% TiO2. The effect of the TiO2 additive on the tribofilm properties led to a decrease in friction and wear at an operating temperature of 75 °C. The main objective of the paper is to present the recent progress and, consequently, to develop a comprehensive understanding of the tribological behavior of engine oil mixed with TiO2 nanoparticles.
Introduction
Research conducted in recent years on the addition of nanoparticles (NPs) to oils used in industry has focused on friction modifiers and anti-wear additives that increase the load-bearing capacity of friction parts in mechanical systems. Moreover, the increasing severity of loading and speed conditions in machines is a constant challenge for tribologists to develop improved solutions that increase the performance properties of the oil used. Nanoparticles, due to their small size, are able to access areas with extremely small surface roughness and therefore have great potential for improving the tribological properties of lubricants and contact surfaces. Numerous studies have been conducted in the last two decades regarding the use of nanoparticles as lubricating additives [1-3]. The addition of nanoparticles to the base oil can reduce friction and wear, and nanoparticles can be beneficial lubricating additives, although some may be hard and abrasive [4-7]. In recent years, various nanoparticles have been investigated [4,6-8]. The nanoparticles used are generally metals such as Cu, Ni, Mo, Ag, and Pd; metal oxides including TiO2, SiO2, ZnO, ZrO2, and CuO; and sulfides such as WS2, MoS2, and PbS [2].
Several hypotheses emerge from the open literature about how nanoparticles contribute to the reduction of friction and wear under different laboratory testing conditions. However, useful information can be extracted only if prototypical materials are used together with a balance between the applied mechanical parameters (loads, speeds, temperatures, contact pressures) and the surface conditions.
The transfer and adhesion of the nanoparticles lead to a change in the surface condition, self-reduction, and the formation of a thin TiO2 tribofilm, which in turn decreases the coefficient of friction, the pressure, and the temperature in the contact area, and therefore the wear.
The addition of TiO2 nanoparticles to the lubricating oil showed stable friction due to the formation of protective films on the worn surfaces [9]. Shenoy et al. [10] analysed the influence of TiO2 nanoparticles in lubricating oil; the results showed a load-bearing capacity approximately 35% higher than that of lubricating oil without the nanoparticle additive. The experiments performed by Kao and Lin [11], using a reciprocating sliding tester to analyse friction and wear in the presence of additivated rapeseed oil, showed an 80.84% reduction in mean surface roughness. The average diameter of the TiO2 was 50 nm, and the particle concentration was 5 weight percent (wt.%). Using a low concentration of TiO2 nanoparticles is enough to improve the tribological characteristics: the coefficient of friction and the wear scars decreased by approximately 15.2% and 11%, respectively [12].
Several studies have been conducted on nanoparticles as oil additives [10-13]. The particle size of TiO2 affected the wear behavior of the composite material [14]: microscale TiO2 particles damaged the surface through severe adhesion and abrasion, whereas the surface damage due to nanoscale TiO2 particles was caused only by slight abrasion. Further investigations have been carried out on TiO2 as a coating material [12-15] or as a reinforcement in composites [10] for better tribological performance [13].
The lubrication mechanism in the presence of nanoparticles involves the following (Figure 1): (a) the surface properties are modified and the two friction surfaces are separated by tribofilm formation, offering promising tribological performance; (b) the nanoparticles roll between the friction surfaces, reducing friction and wear; and (c) the heat and pressure generated during operation lead to the compaction of the nanoparticles produced by wear, which is considered a mending effect of the surface and a polishing effect [7].
In this paper, the antifriction and antiwear behaviors of TiO2 nanoparticle suspensions in 10W-30 oil at different percentages (0.010, 0.025, 0.050 and 0.075 wt.%/v) were evaluated using a four-ball tribometer. The results present the influence of the nanoparticle percentage on the tribological behaviour of the mixed oil, the evaluation of the worn surface structure under the various operating conditions tested by scanning electron microscopy (SEM), and the depth of the wear marks on the spheres, measured using an Alicona Infinite Focus G5 microscope. The friction coefficient of the samples was continuously recorded as a function of the normal load and the elapsed time. Four nano-oil samples were prepared and tested repeatedly with various specimens on the four-ball tribometer to evaluate the direct effect and the surface-enhancing effect of nanoparticles in the lubricating oil.
Therefore, the purpose of this paper is to investigate the possibility of improving the tribological performance of conventional engine oil using TiO2 nanoparticle additives. This promising technology has a large impact on fuel consumption and engine durability for a greener future.
Samples Description
Experimental studies involve the formulation of stable TiO2-based nanolubricant samples at different concentrations. The physicochemical properties, such as density, viscosity and viscosity index, were measured using an automated SVM 3000 Anton-Paar rotational Stabinger viscometer. Density was measured using a vibrating U-tube densimeter, and for the determination of viscosity, a Peltier element was used to thermoregulate the samples.
Materials
Because TiO2 is well suited to many tribological applications (including solid lubricants) owing to its excellent tribological behavior, these nanoparticles were chosen for the current investigation. Commercial TiO2 Degussa P25 nanoparticles supplied by Sigma-Aldrich and Motul 5100 4T 10W-30 engine oil were used in the formulation of the nanolubricant samples. The average size of the TiO2 nanoparticles was 18-21 nm.
Nanolubricants preparation
The mixing of nanoparticles with engine oil is an important step towards improving engine oil quality. Different amounts of TiO2 nanoparticles (Table 2) were dispersed into the engine oil, and 0.2 mM Triton X surfactant was added to increase the stability of the nanolubricants. The surfactant plays an important role in blending the nanoparticles so that they remain dispersed in the engine oil, providing stability and preventing agglomeration. Moreover, to increase the stability over time, the prepared suspensions were subjected to magnetic stirring, followed by ultrasound treatment for 30 minutes.
The physicochemical properties of the engine oil and the TiO2 nanolubricants are presented in Table 3. It can be observed that with an increase in the TiO2 amount in the engine oil, the density tends to increase by 0.2% (from 862 kg/m³ for the base oil up to 864 kg/m³ for the nanolubricant containing 0.075 wt.%/v) due to the high density of the nanoparticles (3900 kg/m³), even though the concentration of TiO2 is very small.
Additionally, the kinematic viscosity at both 40 °C and 100 °C tends to increase with the TiO2 concentration in the nanolubricants, by 3% and 4%, respectively (from 89.7 mm²/s to 92.3 mm²/s and from 13.6 mm²/s to 14.2 mm²/s). According to Ali et al. [16], this behavior is related to the fact that the nanoparticles act as catalysts in a cracking reaction and modify the heat transfer properties. The higher viscosity of the nanolubricant oils improves the lubricating property because it reduces friction and prevents rapid wear. The viscosity index increases up to 158 when TiO2 nanoparticles are added to the oil, which means that the variation of viscosity with temperature is lower for the nanolubricant oils than for the base oil.
Ball materials
The standard test balls were made of 52100 grade 25 chrome alloy steel, 12.7 mm in diameter, with a surface roughness (Ra) of 0.1 µm, extra polish (EP) grade 25, and a hardness of 54 to 58 HRC. Four new balls were used for each test. Before starting each new test, the balls were cleaned with a technical cleaner (an isoparaffinic-based solvent cleaner) and wiped dry using tissues. The chemical composition and mechanical properties of the ball material are listed in Table 4.
Experimental Method And Device
The American Society of Lubrication Engineers has published a catalog of friction and wear testing devices that describes in detail over 230 different tribometers [7]. Each device has its strengths and weaknesses. For this work, we used the four-ball wear test geometry, which was selected to provide the following test conditions: high contact pressures that ensure operation in the boundary lubrication regime; good control of the operating parameters (load, temperature, speed, operating time, atmosphere); high sensitivity of wear and friction measurements; and simple test samples with small dimensions that are easy to manufacture.
The four-ball tester is an excellent tool for checking and determining the quality of lubricants and additives. This tribometer can be used to determine the wear preventive properties (WP), extreme pressure properties (EP), and friction behaviour of lubricants. The widespread acceptance of four-ball test results makes it an excellent tribological device.
This tribometer consists of a device in which an upper ball is rotated in contact with three fixed balls immersed in the oil sample. Different loads are applied to the balls by weights acting on a load-lever system. The upper rotating ball is held in a special chuck located at the lower end of the vertical axis of an electric motor and rotates at a constant speed. The lower fixed balls are held in contact with each other in a steel pot by a clamping ring and a locking nut. The arrangement is illustrated in Figure 2. The basic sample configuration (Fig. 3) consists of a tetrahedral arrangement of four balls.
These tests were carried out under American Society for Testing and Materials (ASTM) conditions, following ASTM D 4172 method test B. The tests were conducted under dry conditions, first at a room temperature (RT) of (23 ± 2) °C and then at (75 ± 2) °C, with a relative humidity (RH) of (35 ± 2)%, a speed of (1200 ± 60) rpm, a test time of (30 ± 1) minutes, and a load of (396 ± 4) N. The tests were repeated at least three times for every measurement. All results presented in this work are the arithmetic mean of the measured values, unless otherwise stated. The corresponding standard deviation is indicated by the error bars, describing the scatter of the results.
The experimental procedure involves the following steps: a. Before starting the experiment, all the balls were cleaned with acetone and then wiped; b. The four-ball machine was set up with the correct speed, temperature and time; c. Three clean balls were inserted into the ball pot, and then the ball lock ring was placed inside the ball pot and around the three balls; d. A lock nut was fitted on the ball pot, and a torque wrench was used to fasten it with a torque of 68 Nm.
e. One clean ball was placed in a collet and inserted into the taper at the end of the motor spindle; f. Approximately 10 ml of test lubricant was added to the ball pot assembly, so that the lubricant level was 3 mm above the top of the balls.
g. The ball pot assembly was placed on the antifriction disk inside the machine, under the spindle.
h. The thermocouple wire was connected to the ball pot assembly; i. Load was added to or removed from the loading arm until the digital display showed the correct load for the experiment; j. In ASTM D 4172 method test B, the applied load is 396 N, and in the four-ball test machine, the load cell is fitted at a distance of 80 mm from the center of the spindle; k. The last step was to measure the wear scars on the three lower balls with the help of an image acquisition system. The wear was measured as the average of the horizontal and vertical scar diameters using SEM, and the depth of the wear marks on the spheres was measured using an Alicona Infinite Focus G5 microscope.
Viscosity is a very important parameter for a lubricant, as it affects the film thickness and the wear rate of the sliding surface. It is used for the identification of individual grades of oil and for monitoring the changes occurring in the oil while in service. An increase in viscosity normally shows that the used oil has deteriorated through contamination or oxidation, while a decrease in viscosity usually indicates dilution of the oil. The oil viscosity was measured with a viscometer at the experimental temperatures of 23 °C and 75 °C. In ASTM D 4172 method test B, the load was 396 N, and in the four-ball test machine, the load cell was mounted 80 mm from the center of the shaft. In addition, the wear was measured as the average of the horizontal and vertical scars using SEM, and the depth of the wear marks on the spheres was measured using an Alicona Infinite Focus G5 microscope. The duration of the test was selected to ensure that the running-in period was less than 30% of the total duration of the test. The results show that the wear of the samples containing TiO2 was clearly reduced compared with that of the base oil. When the base oil was used as the lubricant, the worn surface of the ball presented black sediment, confirming that its main component was Fe resulting from ball-on-ball milling. The 0.01 wt.% and 0.025 wt.% samples also presented black sediment, but the wear radii were visibly reduced. In particular, the radius for the 0.075 wt.% oil sample in the stabilized state after rubbing was smaller than that of the other samples, showing the superior anti-wear performance of this sample.
Results
To further investigate the wear mechanism, SEM images of the worn surfaces of the lower balls lubricated by the base engine oil and by the engine oil samples containing 0.01, 0.025, 0.050 and 0.075 wt.% TiO2 nanoparticles were examined. The tests were performed at two different working temperatures: 23 °C (RT) and with the oil heated to 75 °C.
Following the evaluation of the wear traces with SEM, differences in the radius of the circular wear trace are observed; the radius difference is approximately 5-10%. This should not greatly influence the mechanism of wear (generation of wear debris, growth of the tribo-layer, etc.). Figure 4 shows that for samples LS and L0, there was obvious plowing and there were some pits on the worn track, due to the secondary local rupture of debris during the sliding process; moreover, the friction of debris at the sliding interface clearly produced furrows on the worn surface. Figure 4 (L2 and L3) shows that the worn surface presented finer mesh-like grooves, in agreement with the COF result in Figure 11. For the 0.05 wt.% sample, the worn surface was covered by fine grooves and detachments after the rolling test, as shown in Figure 4 (L1), and the mechanism of wear was dominated by microplowing. Furthermore, Figure 4 (L2 and L3) shows that the wear scratches were almost invisible; through a local zoom of the images in Figure 4, slight wear furrows could be observed. This is mostly attributed to the increase in TiO2, which could reduce the wear of the frictional interfaces. This indicates that TiO2 played a lubricating role and prevented wear during the rolling process. The above results are in agreement with the tribological results showing the decrease in the friction coefficient in Figure 11.
The wear scar diameter of each of the three bottom test balls was measured to determine the lubricity performance of the test lubricant. In general, the larger the wear scar diameter is, the more severe the wear, but we also consider the depth of the wear trace. The wear scar diameter was determined for each of the three fixed balls.
The temperature of the test lubricants was measured by a thermocouple attached to the four-ball tester to record the temperature changes throughout the duration of the experiment.
The base oil and nano-oils were tested at temperatures of 23 °C and 75 °C (close to the service temperature for engine oil). Increasing the temperature results in nanoparticle movement, which is associated with reduced fluid resistance to flow; therefore, the viscosity is reduced. With respect to viscosity, it is clear that both the base oils and the nano-oils are non-Newtonian fluids. However, the temperature of the contact point of the balls is also influenced by the sliding speed. A sliding speed of 0.80 m/s (calculated based on the input parameters) was selected to provide minimal rather than extreme heating due to sliding.
Wear Scar Depth on the Balls
The depth of the wear scar on the spheres was measured using an Alicona Infinite Focus G5 microscope. The surfaces were scanned with the microscope using 50× magnification, with the light source coaxial with the eyepiece (lenses) and supplemented with a light ring. Scanning was performed in Image Field mode with a vertical resolution between 0.003 and 0.032 µm and a horizontal resolution of 2.13 µm. The duration of a scan was between 1.5 and 3 minutes. The average scan height was 0.150 mm, giving a vertical dynamic of 150/0.032 = 4687.5.
The evaluation of the wear depth was performed by measuring the distance from the ideal circle, constructed using a fixed 6.35 mm radius with the Measure Circle function. The traces of the intersection between the scanned surface and the plane in which the depth measurement was performed were used to orient and position the ideal circle. The depth of wear (the difference between the ideal circle and the trace on the sphere) was measured using the measure height step function or the maximum distance. The 2D profiles of the worn surfaces for the different lubricant oils after the wear test are shown in Figs. 6 and 7.
Compared with pure oil, TiO2 produced a significant improvement in the wear surface. After the wear test at RT, only small grooves could be observed, and the wear mechanism was mainly microplowing.
Although the wear track of the 0.01 wt.% sample was much smoother than that of the base oil, microplowing still existed, with corresponding wear depths of 1.2 µm at RT and 0.41 µm at 75 °C. Moreover, the anti-wear property improved with the amount of TiO2. This indicates that TiO2 in engine oil prevents the plowing wear that existed in the control sample. In the current study, the authors tried to avoid overloading the tribocouple because of the high risk of layer deformation and a change in the wear mechanism. Considering the wear rates of the balls studied by the authors (33-70 × 10³ µm³) at RT, it can be concluded that the wear of the balls is a few orders of magnitude larger for balls lubricated at 75 °C. This can be explained by the fact that the contacting bodies are permanently in contact, and therefore the phenomenon of continuous overheating occurs.
No transfer film was observed on the balls at a sliding velocity of 0.80 m/s. However, the tests were accompanied by vibrations and unwanted noise.
Friction Properties of Lubricating Oils with TiO2 Additives
The tribological performance of the engine oil (Motul 5100 4T 10W-30) with TiO2 additive loading is shown in Figure 10. Clearly, the nanolubricant with more TiO2 nanoparticles had the lowest coefficient of friction (Figure 9).
These results indicate that the TiO2 nanoparticles decrease the ball-to-ball contact friction compared with the base lubricant.
The lowest mean COF, 0.01, was obtained for the oil sample with 0.075 wt.% TiO2 at a lubricant temperature of 75 °C. Moreover, Figure 10 shows the influence of the particle concentration on the COF of the oil suspensions, indicating that the average COF was influenced by the particle concentration; at RT, the higher concentrations presented a lower antifrictional property, as shown in Figures 9 and 10. The relevant tribological mechanism at RT is that TiO2 particles at higher concentrations accumulate at the inlet of the ball-on-ball contact area, which causes an insufficient supply of lubricant and starvation in the contact area.
The running-in period is of great significance to the regulation of the tribological performance to a certain extent, and reducing it is beneficial for improving the antifrictional property. The formation of a boundary lubrication film is the main reason for the stability of the friction coefficient. The coefficients of friction stabilized in the second part of the test time. The rubbing period of the nano-oil with a 0.1 wt.% concentration lasted longer than that of the others, with a time of 780 s. In addition, it is noteworthy that the rubbing time obviously decreased with increasing concentration; the 0.05 and 0.075 wt.% samples had the shortest rubbing times in terms of friction properties.
The friction coefficient was calculated according to IP-239 and is expressed as follows:
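The equation itself is missing from the extracted text; the relation commonly used for four-ball testers (and given in IP 239) is reproduced below, where T is the measured friction torque, W the applied load, and r ≈ 3.67 mm the distance from the centre of the contact surfaces on the lower balls to the axis of rotation.

$\mu = \dfrac{T\sqrt{6}}{3\,W\,r}$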
Flash Temperature Parameter
The flash temperature parameter is a single number that indicates the critical flash temperature above which the lubricant used will fail [16]. For the working conditions of the four-ball tribometer, the flash temperature parameter is given by Eq. 2,

$FTP = \dfrac{F}{d^{1.4}}$,  (2)

where F is the normal load in kilograms and d is the mean wear scar diameter in millimeters at the particular load. A flash temperature parameter (FTP) was calculated for all of the experimental conditions according to Eq. 2. A detailed explanation of the parameter is given by Lane [16,17].
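As a worked illustration of Eq. 2, the FTP for a given load and mean wear scar diameter can be computed as follows; the numbers in the example are hypothetical and are not taken from the paper's tables.

def flash_temperature_parameter(load_kg, wsd_mm):
    # FTP = F / d**1.4, F in kilograms-force, d = mean wear scar diameter in mm
    return load_kg / wsd_mm ** 1.4

# hypothetical example: 40 kg load, 0.6 mm mean wear scar diameter
print(flash_temperature_parameter(40.0, 0.6))   # ~81.8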
High values of the flash temperature parameter indicate that the lubricant performs well, with a reduced possibility of lubricant breakdown [15]. Figure 11 shows the plot of TiO2 percentage versus flash temperature parameter (FTP) for the different testing temperatures, namely room temperature (RT) and 75 °C. From the figure, it can be seen that the maximum and minimum FTPs were obtained for the lubricant with 0.075 wt.% TiO2 and the pure lubricant, respectively. The maximum FTP value means that good lubricating performance occurred, indicating a lower possibility of lubricant film breakdown. This phenomenon has also been observed by other researchers [11] and seems to indicate that TiO2 nanoadditives are a potential anti-wear additive for lubricating oil. The 0.075 wt.% TiO2 in this investigation improved the lubricant performance, based on the higher value of FTP observed compared with the pure lubricant. The graphs also show the effect of temperature on the FTP of the lubricants.
Conclusions
For the tests performed on the four-ball wear machine with different percentages of TiO2 additive in the lubricant, the conclusions drawn are as follows. [Figure: ball wear volume for the oils used in this work at different temperatures] | 2021-12-19T16:11:05.585Z | 2021-12-17T00:00:00.000 | {
"year": 2021,
"sha1": "ff10a0ba4f0029b88f0d65c72d9307f014123d1b",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1123984/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "1ea733506ce249841ab29a199cff6dbd9ab36e17",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
989899 | pes2o/s2orc | v3-fos-license | Method Validation: a Complex Concept
Method validation: A complex concept
Editorial I welcome all to the current issue of Pharmaceutical Methods, and thank all our contributors.
The reader will find that many of the articles that are published in the current issue refer to method validation and in particular to the International Conference on Harmonization (ICH) guidelines as the basis for their approach. The ICH was launched in 1990, with the objective of harmonizing technical and regulatory approaches in the Pharma-Chem industry between international markets, with the ultimate objective being the protection of human health.
Every analytical chemist and process owner knows the sequence of analytical validation requirements [linearity, limit of detection (LOD), limit of quantification (LOQ), precision, accuracy, robustness, etc.]. The requirements are universal; a measure to assure the quality of the results. However, the ICH, along with organizations like the International Organization for Standardization (ISO) and the Food and Drug Administration (FDA), view their role as progressing and evolving the concept of analytical quality assurance into a multidimensional approach.
Consider for example the 'simple' concept of 'precision'. In the Pharma-Chem industry there are no simple concepts, only 'far reaching' concepts. Determining the precision of the quantitation of a particular drug component using a given method demands the evaluation of system precision, sample precision, intermediate precision, and long-term and intra-laboratory precision; refer to the onion-like diagram below.
Precision (and of course accuracy) is also linked with the determination of process capability, process stability, and process improvement measures. Along with these considerations, regulators demand extensive investigative procedures to be deployed in the case of out-of-specification results.
Additionally, the analytical chemist and process operator must of course display a high level of competence, and along with the competence comes a personal responsibility. A veritable sword of Damocles is suspended above the head of each analyst, which extends right up along the hierarchical chain of the organization! Naturally these measures are in place for a very good reason: to prevent harm to the consumer, sometimes a very vulnerable customer.
In my opinion, one very innovative and enabling approach to the quality assurance of analytical methods is the process approach, advocated in recent years by ISO and a stalwart of process engineers. In its simple form, this involves producing a comprehensive and interlinked flow diagram of the process (the process may be a single analytical method using this definition) and sub-processes, and using this map as a basis for comprehending the process, controlling it, identifying the regions where data must be collected and analyzed, where SOPs must be written, and so on.
The other advantage of the process approach is that it facilitates a logical approach to quality risk analysis, and thus management, by expert teams. Quality risk management is indispensable in the analytical environment of the pharmaceutical sector. Risk management is a systematic and team-driven approach that tries to anticipate possible quality aberrations, their probability, their severity, and their consequences for the consumer.
Ultimately the point of all of this is that method validation is the tip of the iceberg when it comes to the control of quality of analytical methods in the pharmaceutical environment, where events of the past have taught us that there can be no room for complacency.
Ambrose Furey
Department of Chemistry, Cork Institute of Technology Rossa Ave., Bishopstown, Cork, Ireland | 2017-04-26T02:48:18.582Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "ae5bfe9a09fc01e3a41660a794e588e447d0c5ec",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc3658027",
"oa_status": "GREEN",
"pdf_src": "Grobid",
"pdf_hash": "ae5bfe9a09fc01e3a41660a794e588e447d0c5ec",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Business"
]
} |
214394040 | pes2o/s2orc | v3-fos-license | SMART HEALTH CARE SYSTEM USING SENSORS, IOT DEVICE AND WEB PORTAL
Smart health care devices are slowly gaining popularity because of their many advantages over the conventional health care system. In the conventional approach, a patient approaches a doctor either in a clinic or a hospital. Much of the time is spent in the patient's travel and in the wait period before he gets approval to meet the doctor. This is much worse for a patient who lives far away and has to spend a lot of time travelling. In general, when a patient first meets the doctor for treatment, he needs to register and then get diagnosed, followed by some prescription. After that, the patient routinely meets the doctor again, leading to further travel and wait periods. This builds up a lot of stress in the patient, especially if he has become weak or is quite old. The doctor maintains a record of diagnoses and prescriptions for each patient, and this record gets updated on every visit by the patient. It may also happen that the doctor is not available for consultation on certain days due to some emergency or other reasons. This paper suggests a method of handling these issues faced by the patient by developing a device and a web portal. The device consists of a microcontroller connected to some biomedical sensors like temperature, pulse-oximeter, ECG, etc. This device can be used to read the patient's health data on a regular basis and then send it to the web server via a Wi-Fi module. A web portal is also being developed for viewing the patient's data regularly.
I. Introduction
The advent of the Internet of Things (IoT) has drastically improved the efficiency of many routine processes like health care, safe driving, logistics tracking, irrigation, industrial control, etc. The IoT incorporates many technologies which have evolved over a period of a few decades and will mature rapidly in the next few years. The IoT is becoming a vast network around the globe which will connect billions of people, objects, and devices to sensors and actuators, hence enabling many activities to be performed faster, more easily, and with much less effort and expense. This is becoming possible because of the dropping prices of sensors, actuators, electronic devices, and network connectivity. A great paradigm shift is being witnessed from the decades-old internet connecting end-user devices to the new internet connecting everything. The IoT will enable a wide range of interactions within this highly interconnected and networked world and will lead to a highly smart world. Every object and device that connects to the IoT requires a unique address or identification, which can be accomplished with the help of the IPv6 internet protocol.
The IoT can enable object linking to the Internet using unique markers which are attached to or integrated with the object [IV]. Some of these unique markers are RFID tags, Bluetooth beacons, and barcodes [IX], [XIII], [XXI], [XXII]. These object tags or markers can be read by the corresponding wireless modules, and the information about the object can be displayed [III].
A variety of sensors which are attached to the body of a patient can be used to get health data securely, and the collected data can be analysed and sent to a server using different transmission media connected to the Internet [XXIV]. The medical professionals can access and view the data and take decisions accordingly to provide services remotely.
We can construct systems which continuously monitor patients and perform remote consultation and health care management. These platforms use different techniques and equipment which can sense, capture, measure, and transmit information about the body [VII].
With sensors and a microcontroller we can take accurate measurements and then monitor and analyse the health condition of a patient. This, combined with IoT, will significantly increase the contribution of IoT to healthcare. The sensors can include temperature, heart rate, blood pressure, oxygen saturation in blood, glucose levels, and body motion [VIII].
An architecture of Smart Community and IoT was developed, which consisted of a Neighbourhood Watch Application and a Pervasive Healthcare Application [XII]. This smart community architecture has three domains: (i) Home Domain, (ii) Community Domain, and (iii) Service Domain. It also explains the model of Pervasive Healthcare in normal, emergency, and critical situations.
A 6LoWPAN-based IoT architecture was developed for connecting a real-time glucose sensor with their IoT (called m-IoT) in diabetic patients [XVIII]. They implemented and tested the system performance using Java, with the help of 6LoWPAN and TelosB sensors.
A 6LoWPAN-based ubiquitous healthcare system called U-healthcare was developed, which performs health monitoring in both indoor and outdoor conditions [XIX]. The system uses a live streaming platform for the reading of remote monitoring sensors for ECG and temperature. The designed system can store the sensed data at a remote server and use a free cloud service like Ubuntu One.
A Remote Monitoring Information System based on IoT was developed which can collect and process information intelligently with the help of human monitoring sensors and WSN technology [XI]. It can monitor the user's physical information like temperature, heart rate, oxygen, blood pressure, etc. The system also monitors motion information like physical exertion, speed, respiration rate, etc.
Different applications of IoT in e-Healthcare, in particular sleep studies and elderly care with respect to remote monitoring applications, have been implemented [XIV]. Here the authors explain the concepts of Remote Sleep Monitoring and Elderly Monitoring in the context of IoT. They also discuss the privacy and security issues related to electronic medical data.
Introduction and comparison of different IoT paradigms and applications of IoT in medical like identification and authentication, tracking of patient flow or moment, data collection of patients, and sensing for diagnosing patient conditions are discussed by the authors in reference [X].
In reference [V], the authors present the implementation and testing of an application called CardioNet, which is a distributed medical system linking different medical entities and systems like hospitals, emergency units, general practitioner cabinets, laboratories, personnel and patients. The implemented system is web based using ontology and can provide different services such as remote monitoring, online consultations, and hospital activity administration.
IoT supports different and latest technologies like RFID, WSN, 3G, 4G networks etc. Using these technologies, one can obtain data related to patient's health and send it to a remote server for further processing and storage [XXI].
An IoT based Smart Health Care Kit was developed by Punit Gupta et al [XVII], which provides support for emergency medical services like intensive care unit (ICU).
This paper presents a Smart Healthcare Device which can be used regularly at home or any other place. Any individual can read all his health data, like body temperature, pulse rate, oximetry, ECG graph, etc., using this device and the associated sensors. The data can then be sent to the server/cloud using Wi-Fi and an Internet connection. A Web Application (www.eiotlab.com) has been developed, through which the data can be downloaded from the server and examined. This server can receive data coming from many devices/individuals. The doctor can log into the Web Application and constantly monitor his patients' health status. The doctor can use this application to register new patients. Once registered, a patient can regularly send his health data to the server. There is no limit on the number of doctors who can use this application, or on the number of individuals registered under each doctor. The application also has the provision of sending medical prescriptions to each patient.
II. Block Diagram
The DOIT ESP32 microcontroller is used to capture the health data from the various sensors like LM35 Temperature Sensor, MAX 30102 Pulse Oximeter, Mikroe ECG Click Module-Cable-Electrodes. An LCD is used for interacting with the user and also for displaying the various data and the connection and internet transfer status. The ESP32 has a Wi-Fi module which can be used to make connection to any local Wi-Fi hotspot and then to Internet.
III. Methodology
There are mainly two subsystems in the smart healthcare system: (1) the ESP32 microcontroller subsystem, along with the associated sensors and Wi-Fi module. This subsystem reads all the sensor data and transmits it to the server using the in-built Wi-Fi module. This is the IoT device which can be used by the patient regularly to send his health data to the server. The server holds data for each doctor registered on the web app; each doctor's database has a list of patient data in the form of a table. (2) The Web Application, based on a Linux Apache server system with the domain name www.eiotlab.com. The main start page is an HTML file used for logging into the healthcare page. The healthcare login page is implemented using PHP. Each doctor can log in using a unique User Name and Password. After login, the doctor can view the table consisting of the list of his patients and their health data. The server uses a MySQL database for storing the health data.
ESP32 Microcontroller Software
The development of ESP32 software is carried out using the ARDUINO IDE. The following are the various steps performed by the software code.
In the setup function:
Set the serial baud rate at 115200. After sending the data to the server, the data can be viewed using any browser via the web app www.eiotlab.com.
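The remaining steps of the device loop did not survive extraction. Purely as an illustration of the read-and-transmit cycle described above, here is a minimal Python sketch (the authors used the Arduino IDE on the ESP32, so this is not their code; the endpoint path, field names, and sensor-reading stub below are hypothetical):

import time
import random          # stands in for real sensor drivers (LM35, MAX30102, ECG)
import requests        # third-party HTTP library (pip install requests)

SERVER_URL = "http://www.eiotlab.com/upload.php"   # hypothetical endpoint
PATIENT_ID = "P001"                                # assigned at registration

def read_sensors():
    """Stub for the biomedical sensor readings; returns simulated values."""
    return {
        "patient_id": PATIENT_ID,
        "temperature_c": round(random.uniform(36.0, 37.5), 1),
        "pulse_bpm": random.randint(60, 100),
        "spo2_pct": random.randint(95, 99),
    }

def main():
    while True:
        data = read_sensors()
        try:
            # an HTTP POST mirrors what the ESP32 Wi-Fi module would send
            r = requests.post(SERVER_URL, data=data, timeout=10)
            print("server replied:", r.status_code)
        except requests.RequestException as e:
            print("transmission failed:", e)
        time.sleep(60)  # send one reading per minute

if __name__ == "__main__":
    main()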
Web Application
The application is available by entering the website www.eiotlab.com in any browser. On the main page, the application name HCARE needs to be entered for logging in. After this, the "Health Care Login Form" appears. This page can be used by the doctors for login. After entering the USERNAME and PASSWORD and logging in, the "Health Care Main Page" appears. Now, by pressing the Healthcare Menu, a page which shows the table consisting of the list of patients and their health data will be displayed. Any new data sent by a patient's IoT device will update this table on the server. Table 1 below is just an illustration. There are three additional features, listed below: (1) Adding a new patient. By clicking the "SUBMIT NEW PATIENT" button, a form appears for entering/registering a new patient. Once registered, the patient can use his IoT device for sending his health data to the server. (2) The doctor can view the patient's ECG graph/plot by clicking the "Show" link under the ECG column. (3) The doctor can send a prescription to the patient's email address by clicking the "Edit" link under the Prescription column. Provision can also be made to send prescriptions through SMS.
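The portal itself is written in PHP with a MySQL database; purely as an illustration of the insert-and-list flow it implements, a minimal Python/Flask equivalent might look like the sketch below (the table and column names are hypothetical, and SQLite stands in for MySQL):

import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "hcare.db"

def init_db():
    with sqlite3.connect(DB) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS readings (
            patient_id TEXT, temperature_c REAL,
            pulse_bpm INTEGER, spo2_pct INTEGER,
            received_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

@app.route("/upload.php", methods=["POST"])
def upload():
    f = request.form
    with sqlite3.connect(DB) as con:
        # parameterized query, as the PHP/MySQL version should also use
        con.execute(
            "INSERT INTO readings (patient_id, temperature_c, pulse_bpm, spo2_pct) "
            "VALUES (?, ?, ?, ?)",
            (f["patient_id"], f["temperature_c"], f["pulse_bpm"], f["spo2_pct"]))
    return "OK"

@app.route("/patients/<pid>")
def list_readings(pid):
    # what the doctor's table view would query, newest readings first
    with sqlite3.connect(DB) as con:
        rows = con.execute(
            "SELECT * FROM readings WHERE patient_id=? ORDER BY received_at DESC",
            (pid,)).fetchall()
    return jsonify(rows)

if __name__ == "__main__":
    init_db()
    app.run()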
IV. Conclusion and Future Scope
The proposed IoT device can be used for regular health check-ups by patients. The device will be inexpensive if it can be mass-produced. It will be very useful for patients who need to perform regular check-ups with doctors. The doctors can also monitor their patients on a continuous basis, as they receive the patients' data regularly. The device can be further enhanced by adding additional sensors for measuring the patient's blood pressure, sugar levels, etc. The device can also be enhanced to send the exact location of patients and any other additional information. | 2020-01-16T09:05:13.139Z | 2019-12-28T00:00:00.000 | {
"year": 2019,
"sha1": "30995c5d67b8b00cdaa40c17b313468c43dae8e3",
"oa_license": null,
"oa_url": "https://doi.org/10.26782/jmcms.2019.12.00001",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "30995c5d67b8b00cdaa40c17b313468c43dae8e3",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
229371221 | pes2o/s2orc | v3-fos-license | Slow decay of infection in the inhomogeneous SIR model
The SIR model with spatially inhomogeneous infection rate is studied with numerical simulations in one, two, and three dimensions, considering the case that the infection spreads inhomogeneously in densely populated regions or hot spots. We find that the total population of infection decays very slowly in the inhomogeneous systems in some cases, in contrast to the exponential decay of the infected population I(t) in the SIR model of the ordinary differential equation. The slow decay of the infected population suggests that the infection is locally maintained for long and it is difficult for the disease to disappear completely.
the first-order phase transition, where both the diffusion and coarsening processes are important [30,31]. We think that the diffusion, coarsening, and quenched randomness are important for the slow decay in our model, however, the theoretical understanding is not sufficient yet and the details are left to future study.
II. SIR MODEL
We make a very brief review of the dynamics of the Kermack-McKendrick model as an ordinary differential equation [3,4]. The Kermack-McKendrick model is expressed as

dS/dt = −βSI, (1)
dI/dt = βSI − γI, (2)

where S and I denote the susceptible and infected populations. The parameters β and γ denote the infection and recovering rates. The recovered population R is calculated from dR/dt = γI. Since the three variables S, I, and R are used, the Kermack-McKendrick model is called the SIR model. If the population E of exposed persons before the appearance of symptoms is included, the SIR model is generalized to the SEIR model:

dE/dt = βSI − δE, (3)
dI/dt = δE − γI, (4)

where δ denotes the incidence rate and dS/dt obeys Eq. (1). In the SIR model, recovered persons are assumed not to be infected again owing to immunity. However, there are diseases such as malaria for which there is a possibility that recovered persons are infected again. For such diseases, the SIS model, where dS/dt = −βSI + γI is used instead of Eq. (1), applies. In the SIR and SEIR models, I finally decays to zero, but I does not decay to zero in the SIS model. The contact process is a stochastic version of the SIS model. In this paper, we consider mainly the SIR model; however, slow dynamics is observed even in the SEIR model, as shown in Sec. IV.

There is a stationary solution S = S_0 and I = 0 to Eqs. (1) and (2). The stationary state is unstable for S_0 β > γ. Figure 1(a) shows trajectories in (S, I) space starting from the initial conditions S(0) = 4, 2.5, and 1.3 at β = γ = 1. The initial condition for I is fixed to be 0.00001. Figure 1(b) shows the time evolution of I(t) for S(0) = 4 and 2.5 at β = γ = 1 on a semi-logarithmic scale. The infection I(t) spreads initially and then decays to zero exponentially once the susceptible population S(t) falls below γ/β. This is a state in which herd immunity is attained. The final state is (S, I) = (S_∞, 0), where S_∞ < γ/β depends on the initial value S(0). The uninfected population S_∞ takes a smaller value for a larger initial value S(0) of S(t). S_∞ can be calculated from the conserved quantity Q of this equation:

Q = S + I − (γ/β) ln S.

If I(0) is sufficiently small, S_∞ is a solution of

S(0) − S_∞ = (γ/β) ln{S(0)/S_∞}.

Figure 1(c) shows S_∞ as a function of S(0) at γ = β = 1. The ratio S_∞/S_0 takes any value between 0 and 1, depending on the initial value S_0.
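As a quick numerical check of these relations, the following short Python sketch integrates Eqs. (1) and (2) at β = γ = 1 and compares the simulated S_∞ against the final-size relation above (the initial values are taken from Fig. 1; the tolerances and integration horizon are illustrative choices):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

beta, gamma = 1.0, 1.0

def sir(t, y):
    S, I = y
    return [-beta * S * I, beta * S * I - gamma * I]

for S0 in (4.0, 2.5, 1.3):
    sol = solve_ivp(sir, (0, 200), [S0, 1e-5], rtol=1e-10, atol=1e-12)
    S_inf_num = sol.y[0, -1]
    # final-size relation: S0 - S_inf = (gamma/beta) * ln(S0/S_inf),
    # with the relevant root lying below the threshold gamma/beta
    f = lambda s: S0 - s - (gamma / beta) * np.log(S0 / s)
    S_inf_th = brentq(f, 1e-12, gamma / beta)
    print(f"S0={S0}: simulated S_inf={S_inf_num:.4f}, relation={S_inf_th:.4f}")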
III. SLOW DECAY OF INFECTION IN SIR MODELS WITH ONE HOT SPOT
Hereafter, we consider spatially extended systems. The spread of infection occurs in densely populated regions. In this paper, we consider mainly SIR models with inhomogeneous infection rate on one-, two-, and three-dimensional lattices. In one dimension, the model equation is written as

dS_i/dt = −β_i S_i I_i + D_S (S_{i+1} + S_{i−1} − 2S_i), (5)
dI_i/dt = β_i S_i I_i − γ I_i + D_I (I_{i+1} + I_{i−1} − 2I_i), (6)

where D_S and D_I are diffusion constants. Periodic boundary conditions are imposed and the system size is N. The initial conditions were set to be S_i = 1 and I_i = 0.00001 for the numerical simulations shown in Fig. 2(a). The infection occurs locally near i = N/2; S_i diffuses into the infection region and is infected near i = N/2. Outside of the infection region, the profile of S_i has a nearly constant slope. Figure 4(c) shows the relationship between β_{N/2} and S_{N/2}. The localized structure disappears for β_{N/2} < 1.5, and S_i = S_{N/2} = S_B = 1 is satisfied.

Next, we make an analysis of the localized state and the power-law decay of exponent 1/2. If the continuum approximation is taken, Eqs. (5) and (6) are rewritten as

∂S/∂t = −β(x)SI + D_S ∇²S, (7)
∂I/∂t = β(x)SI − γI + D_I ∇²I, (8)

where ∇² = ∂²/∂x² in one dimension; the model will later be extended to two and three dimensions. For the stationary state in one dimension, S(x) and I(x) satisfy

0 = −β(x)SI + D_S d²S/dx², (9)
0 = β(x)SI − γI + D_I d²I/dx². (10)

If I(x) and β(x) are assumed to take forms localized at x = N/2 [Eqs. (11) and (12)], Equation (10) yields a relation at x = N/2, which leads to Eq. (13). Equations (11) and (13) then yield Eq. (14). The dashed line in Fig. 4(c) shows the relationship between β_{N/2} = β_0 + β_1 and S(N/2) given by Eq. (14); fairly good agreement with the direct numerical results is seen. If the infection region is sufficiently small, Eq. (9) gives Eq. (15), from which I(N/2) is approximated by Eq. (16).

If the boundary conditions are not fixed to S_B, the power-law decay is observed. A similar power-law decay is observed even for D_I = 0. In the continuum approximation, the solution satisfies the free diffusion equation for x ≠ N/2, since I(x, t) is almost zero for x ≠ N/2, and 2D_S ∂S/∂x = β_{N/2} S(N/2) I(N/2) at x = (N/2)⁺. Because S(N/2) is fixed to be γ/β_{N/2} in the case of D_I = 0, I(N/2) is determined to be 2D_S (∂S/∂x)/γ. That is, the population of infection is determined by the diffusion process of the susceptible population for large t. The slope ∂S/∂x at x = (N/2)⁺ can be calculated from the solution S(x, t) of the diffusion equation, which can be solved by the Fourier transform [Eq. (17)]. I(N/2) is then evaluated as Eq. (18). Since the summation is negligible for n satisfying 4D_S π²n²t/N² ≫ 1, or n ≫ N/(4D_S π²t)^{1/2}, I(N/2) is approximated as decaying in proportion to 1/t^{1/2}, if the contribution (1 − S(N/2))/(N/2) in the second term of Eq. (18) is neglected for large N/2. This is the reason for the power law of exponent 1/2. The dashed line in Fig. 5(a) denotes this relation, which is a good approximation to the direct numerical simulation.

Two- and three-dimensional models are expressed by Eqs. (7) and (8) if ∇² is rewritten as ∂²/∂x² + ∂²/∂y² in two dimensions and ∂²/∂x² + ∂²/∂y² + ∂²/∂z² in three dimensions. If S, I, and β depend only on the radius r, Eqs. (7) and (8) become

∂S/∂t = −β(r)SI + D_S r^{1−d} ∂/∂r (r^{d−1} ∂S/∂r), (19)
∂I/∂t = β(r)SI − γI + D_I r^{1−d} ∂/∂r (r^{d−1} ∂I/∂r), (20)

where d denotes the dimension, 2 or 3. We have performed numerical simulations of Eqs. (19) and (20) as a one-dimensional discrete system similar to Eqs. (5) and (6).
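A minimal sketch of the one-dimensional hot-spot simulation of Eqs. (5) and (6) is given below (simple explicit Euler stepping; the system size, time step, number of steps, and the value of β at the hot spot are illustrative choices, not necessarily those of the paper):

import numpy as np

N, dt, steps = 1000, 0.01, 1_000_000   # total time t = 10,000
DS = DI = 1.0
gamma = 1.0
beta = np.ones(N)
beta[N // 2] = 4.0                     # single hot spot, illustrative value

S = np.ones(N)
I = np.zeros(N); I[N // 2] = 1e-5

def lap(u):
    # periodic 1D discrete Laplacian: u_{i+1} + u_{i-1} - 2 u_i
    return np.roll(u, 1) + np.roll(u, -1) - 2 * u

for n in range(steps):
    inf = beta * S * I
    S += dt * (-inf + DS * lap(S))
    I += dt * (inf - gamma * I + DI * lap(I))
    if n % 100_000 == 0:
        print(f"t={n*dt:9.1f}  I_hotspot={I[N//2]:.3e}")
# at long times I[N//2] should cross over to a slow ~t^(-1/2) decay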
IV. SLOW DECAY OF INFECTION IN RANDOM SIR MODELS
In this section, we study random SIR models in one, two, and three dimensions. The one-dimensional model is again expressed as Eqs. (5) and (6), where β_i is a uniform random number between 0 and β_m (Fig. 7(a)). Figure 7(c) shows four snapshot profiles of S_i at t = 200, 300, 2000, and 10000 in the same frame. The infection occurs at many hot spots at t = 200, and the spatial profile is intermittent, as shown in Fig. 7(b). The number of hot spots decreases with time. The cusp points in Fig. 7(c) correspond to the hot spots. S_i decreases with time by diffusion toward the hot spots and infection at the hot spots. Figure 7(d) shows the local average of β_i (red dotted line), that is, the average of β_j over j = i − 10 to i + 10, together with 100 I_i (blue dashed line) at t = 1000, and the maximum value of I_i(t) (green solid line) for 0 < t < 10000, in the range 2000 < i < 3000. In most cases, hot spots appear near the points where the locally averaged infection rate is large. Infection is stamped out at some hot spots; however, strong hot spots survive for long. The lifetime of a hot spot with an interval L is estimated as O(L²), because the width of the diffusion field increases as t^{1/2}, as shown in Fig. 5(b), and the power-law decay changes to an exponential decay when the width reaches the size of the interval. If the intervals between strong hot spots become longer after the burnout of some hot spots, the lifetime of the surviving hot spots becomes even longer. The power-law decay of t^{−1/2} by the diffusion effect and the increase of lifetime by the coarsening process might be the origin of the slow decay in one dimension.

We study the SEIR model to check the generality of the slow dynamics. The model equation is

dS_i/dt = −β_i S_i I_i + D_S (S_{i+1} + S_{i−1} − 2S_i),
dE_i/dt = β_i S_i I_i − δE_i + D_E (E_{i+1} + E_{i−1} − 2E_i),
dI_i/dt = δE_i − γI_i + D_I (I_{i+1} + I_{i−1} − 2I_i),

where S_i, E_i, and I_i denote respectively the susceptible, exposed, and infected populations, and δ denotes the incidence rate. The parameters are set to be D_S = D_E = 1, D_I = 0.5, γ = 1, and δ = 2 (Figure 8).
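The random model can reuse the hot-spot sketch above: only the β array changes. A quenched random infection rate can be drawn as, for example (the seed and β_m = 3.2 are illustrative; 3.2 is the value the paper quotes for the two-dimensional case):

rng = np.random.default_rng(0)
beta = rng.uniform(0.0, 3.2, size=N)   # quenched uniform beta_i in [0, beta_m]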
We study the SEIR model to check the generality of the slow dynamics: The model equation is where S i , E i and I i denote respectively the susceptible, exposed, and infected populations, and δ i denotes the incidence rate. The parameters are set to be D S = D E = 1, D I = 0.5, γ = 1, and δ = 2. Figure 8 The two-dimensional random SIR model is expressed as The dotted line is SI ∝ 1/t 0.85 . The system size is 600 × 600. The initial condition is S i,j = 1 and I i,j = 0.00001. The total population of infection seems to decay in a power law in these numerical simulations. Figure 9(b) shows some snapshots of I i,j at a section of j = N/2 when β i,j takes a uniform random value between 0 and 3.2 at t = 50, 100, · · · , 500. Localized clusters of infection survive for long. The number of hot spots decreases with time, or a coarsening occurs also in two dimensions.
The three-dimensional model is expressed as

dS_{i,j,k}/dt = −β_{i,j,k} S_{i,j,k} I_{i,j,k} + D_S (S_{i+1,j,k} + S_{i−1,j,k} + S_{i,j+1,k} + S_{i,j−1,k} + S_{i,j,k+1} + S_{i,j,k−1} − 6S_{i,j,k}), (23)
dI_{i,j,k}/dt = β_{i,j,k} S_{i,j,k} I_{i,j,k} − γ I_{i,j,k} + D_I (I_{i+1,j,k} + I_{i−1,j,k} + I_{i,j+1,k} + I_{i,j−1,k} + I_{i,j,k+1} + I_{i,j,k−1} − 6I_{i,j,k}), (24)

similar to the case shown in Fig. 6(d). I_{N/2+1,N/2+1,N/2} is almost constant after t > 50; that is, the localized infection is maintained for very long (Figure 10).
V. SUMMARY
We have found slow decay of infection in the Kermack-McKendrick model with spatially inhomogeneous infection rate in some parameter range. First, we have studied the Kermack-McKendrick model with one hot spot where the infection rate is locally higher than the surrounding region. We have shown theoretically a power-law decay of exponent 1/t 1/2 in the one-dimensional system with a spatially localized hot shot. The slow decay occurs as 1/ log t in the two-dimensional system with one hot spot, and the decay seems to be even slower in three dimensions. Next, we have studied the random Kermack-McKendrick model, and found the power-law type slow decay in one, two, and three dimensions. We found that the infection occurs locally at hot spots. The uninfected persons in the surrounding area around the hot spots diffuse into the hot spots and are infected at the hot spots. The number of hot spots decreases in time and the lifetime of the survived hot spots become even longer because the surrounding areas of the survived hot spots become larger. The mechanism of the slow decay in our system is unique in that the diffusion, coarsening, and quenched randomness are important, although there is some similarity to the slows dynamics in the Griffiths phase in the contact process [18,19] and the phase transition dynamics [30,31]. However, the slower decay | 2020-12-25T02:15:28.880Z | 2020-12-24T00:00:00.000 | {
"year": 2020,
"sha1": "e2bd6905d50212cc9ce55ccd6cd273aece62e38f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e2bd6905d50212cc9ce55ccd6cd273aece62e38f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
266723390 | pes2o/s2orc | v3-fos-license | Evolutionary genomics of three agricultural pest moths reveals rapid evolution of host adaptation and immune-related genes
Abstract Background Understanding the genotype of pest species provides an important baseline for designing integrated pest management (IPM) strategies. Recently developed long-read sequence technologies make it possible to compare genomic features of nonmodel pest species to disclose the evolutionary path underlying the pest species profiles. Here we sequenced and assembled genomes for 3 agricultural pest gelechiid moths: Phthorimaea absoluta (tomato leafminer), Keiferia lycopersicella (tomato pinworm), and Scrobipalpa atriplicella (goosefoot groundling moth). We also compared genomes of tomato leafminer and tomato pinworm with published genomes of Phthorimaea operculella and Pectinophora gossypiella to investigate the gene family evolution related to the pest species profiles. Results We found that the 3 solanaceous feeding species, P. absoluta, K. lycopersicella, and P. operculella, are clustered together. Gene family evolution analyses with the 4 species show clear gene family expansions on host plant–associated genes for the 3 solanaceous feeding species. These genes are involved in host compound sensing (e.g., gustatory receptors), detoxification (e.g., ABC transporter C family, cytochrome P450, glucose-methanol-choline oxidoreductase, insect cuticle proteins, and UDP-glucuronosyl), and digestion (e.g., serine proteases and peptidase family S1). A gene ontology enrichment analysis of rapid evolving genes also suggests enriched functions in host sensing and immunity. Conclusions Our results of family evolution analyses indicate that host plant adaptation and pathogen defense could be important drivers in species diversification among gelechiid moths.
These species, especially the two Phthorimaea species, are found invading many non-native regions, including Asia, Europe, and Africa. Research on these moths has focused largely on their host preference, identification, and management. Despite their importance as major global pests of agriculture, their genomic framework and the evolutionary process of host plant preference in insect pests are still poorly understood (but see [11]).
Host selection and host use in insects are determined by a series of physiological processes, including host plant compound sensing, detoxification, and nutrient digestion. Several genes are thought to be involved in these processes that affect host selection [12]. Genes associated with sensing phytocompounds include olfactory receptors (OR), gustatory receptors (GR), ionotropic receptors (IR), odorant-binding proteins (OBP), and chemosensory proteins (CSP). Genes associated with detoxification include cytochrome P450 (P450), ATP-binding cassette transporters (ABC), and glutathione S-transferases (GST), and genes associated with digestion include serine proteases (SP) and beta-fructo-furanosidases (BFF) [13-21]. A crucial question in understanding pest evolution is how these genes evolved among pest species and their relatives. Whole genome sequencing of pest species has shown great promise for revealing the evolutionary processes that led to the formation of a pestiferous species. For example, recent studies on the genomic evolution of agricultural pests, with subsequent comparative genomic analyses such as orthology, gene family evolution, selected region detection, and structural variant analyses, have identified putative genetic bases of their ecological features or pest species profiles [22-25].
Despite the diversity of gelechiid moths, the many studies on the impact of gelechiids on agriculture, and the release of nearly a thousand Lepidoptera genome assemblies in GenBank thus far [26], only a few gelechiid genome assemblies are publicly available [11,27,28]. Considering its high species diversity and economic importance, more attention and effort on genomic data accumulation and exploration are required for further understanding the evolution of this moth family. In this study, we sequenced and assembled the genomes of three gelechiid moth pests, Keiferia lycopersicella, Phthorimaea absoluta, and Scrobipalpa atriplicella, to examine their genomic features and how they relate to host preference. Specifically, we investigate how rapidly evolving genes are correlated with host preference and life history.
Sample information and sequencing
Three gelechiid moth species (K. lycopersicella [NCBI:txid1511203], P. absoluta [NCBI:txid702717], and S. atriplicella [NCBI:txid687131]) were collected from laboratory colonies at the University of California, Davis, USA; Khumaltar, Lalitpur, Nepal; and the Saskatoon Research and Development Centre of Agriculture and Agri-Food, Canada, respectively. Genomic DNA from one moth of each species was extracted from the whole moth (larva) using the DNA isolation protocol of the OmniPrep Genomic DNA Extraction Kit (G-Biosciences, St. Louis, MO). For S. atriplicella, we encountered sequencing interference for several library samples. Therefore, we amplified genomic DNA with illustra™ GenomiPhi V2 DNA Amplification Kits (Cytiva), and the amplified DNA was used to replace the native DNA extracted from the tissue. The genomic and amplified DNA samples were subsequently used to perform fragment size selection and sample purification with the DNeasy PowerClean CleanUp Kit before library preparation. Libraries were sequenced with a single SMRT cell on the PacBio Sequel IIe system. The DNA clean-up, library construction, and sequencing steps were performed at the Interdisciplinary Center for Biotechnology Research (ICBR) at the University of Florida. The HiFi sequences are deposited in NCBI (BioProject accession number: PRJNA932016; SRA sample accessions: SRR23497930, SRR23497929, and SRR23497928).
Genome size and sequence coverage estimations
To verify read quality, we first assessed the HiFi sequence quality using FASTQC v0.11.7 (RRID:SCR_014583) to summarize read profiles [29]. The genome size and sequence coverage were estimated with two methods. First, we counted k-mers and calculated the k-mer density distribution for the HiFi reads using K-Mer Counter (KMC) v3.2.1 (RRID:SCR_001245) with a k-mer size of 31 nucleotides. Density distributions were subsequently submitted to the GENOMESCOPE v2.0 online tool (RRID:SCR_017014) [30] with default settings for diploid species to estimate the genome size, heterozygosity, sequence coverage, and other genomic profiles (Supplementary Figure S1). Second, we mapped HiFi reads to the final assemblies to estimate genome size and sequence coverage. This process was conducted in the program MODEST (backmap.pl v0.5) [31].
Estimated genome sizes and read coverages from GENOMESCOPE were used to verify the auto-detected estimates from the HIFIASM assembler (see next section) to ensure the accuracy of the auto-detected assembly assumptions [32].
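Both estimators ultimately rest on the same back-of-the-envelope relation: genome size ≈ total counted bases (or k-mers) divided by the modal coverage. A toy Python sketch of that relation, assuming a coverage histogram is already at hand (the histogram values below are synthetic, not from these assemblies):

import numpy as np

def estimate_genome_size(depths, counts):
    """depths: coverage bins; counts: positions (or k-mers) per bin.
    Returns (peak_depth, estimated_genome_size)."""
    depths = np.asarray(depths, dtype=float)
    counts = np.asarray(counts, dtype=float)
    keep = depths >= 2                   # skip the sequencing-error peak
    peak = depths[keep][np.argmax(counts[keep])]
    total = np.sum(depths * counts)      # total observed bases (or k-mers)
    return peak, total / peak

# synthetic histogram: error peak at depth 1, homozygous peak near 49x
d = np.arange(1, 100)
c = 5e6 * np.exp(-d) + 4e6 * np.exp(-0.5 * ((d - 49) / 6) ** 2)
print(estimate_genome_size(d, c))        # peak ~49, size ~ total/49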
Genome assembly, quality assessment, and non-target sequence removal
We used HIFIASM v0.16.1 (RRID:SCR_021069) to assemble the genome from HiFi reads using default settings, except for the reads of P. absoluta, for which we applied a purging level of 2 (-l 2) to keep a greater number of haplotigs for downstream purging. We kept more haplotigs because the sequence coverage for this species was low, and this setting generated the best assembly as evaluated by N50 and BUSCO v5.3.0 (RRID:SCR_015008) completeness (based on the lepidoptera_odb10 database) [33,34]. We also applied the haplotig purging pipeline to remove duplicated haplotigs [35]. For K. lycopersicella and S. atriplicella, we first mapped the HiFi reads to the assemblies with MINIMAP v2.21 (RRID:SCR_008103) and sorted them using SAMTOOLS v1.15 (RRID:SCR_002105) [36,47]. The sorted mapped reads were subsequently used to draw the density distribution histogram of coverage using the "hist" function in the purge_haplotigs pipeline (RRID:SCR_017616) [34]. The histogram was used to identify the peaks of homozygous and heterozygous reads and the low point between the peaks. These values were then used to define the aggressiveness of the purging. Finally, we used low and high coverage cutoffs to purge the duplicated contigs. Other parameters were kept at the defaults suggested by the purge_haplotigs pipeline.
For P. absoluta, since its sequence depth is relatively low (see Results), we used the Illumina short reads published by [28] to perform the purge_haplotigs step. Specifically, we ran HIFIASM with less aggressive purging (-l 2) to allow more duplicated haplotypes in the assembly for the haplotig purging pipeline, and used the short-read coverage histogram to define the peaks. To identify potential non-target sequences in assemblies, we created blobplots using BLOBTOOLS (RRID:SCR_017618) to visualize the distribution of GC content and read coverage for contigs [37]. To determine read coverage, we aligned HiFi reads to the assembly using MINIMAP2 [36]. To assign taxonomy to reads, we used BLASTN (RRID:SCR_001598) to blast contigs against the NCBI nt database with an e-value cutoff of 1e-25. Contigs assigned to non-arthropods with deviating GC content and sequence coverage were determined to be non-target sequences and removed from assemblies (Supplementary Figure S2). A BUSCO score using the lepidoptera_odb10 database was calculated to evaluate the completeness of each assembly (Table 1). Genome assemblies of the three species are available through NCBI (BioProject accession number: PRJNA932016).
Gene models and annotations
In the genome annotation pipeline, we first identified repeat regions using REPEATMODELER2 (RRID:SCR_015027) [38]. The genome assemblies were soft-masked with repeats from three lines of evidence, including simple and short repeats, the repeats identified by REPEATMODELER2, and the lepidopteran repeat database in Repbase, using REPEATMASKER (RRID:SCR_012954) with the blast tool RMBLAST (RRID:SCR_022710) [39,40]. The BRAKER2 gene prediction pipeline (RRID:SCR_018964) was applied to the soft-masked genomes [41-47]. For K. lycopersicella and S. atriplicella, we used arthropod protein sequences from OrthoDB (RRID:SCR_011980) (odb10_arthropoda) in the PROTHINT pipeline (RRID:SCR_021167) to generate hints to train GENEMARK-EP+ (RRID:SCR_011930) [48] and predict gene models along with AUGUSTUS (RRID:SCR_008417). For P. absoluta, we also included published RNA sequences to train the gene model [49]. Specifically, we ran the BRAKER2 pipeline twice (once with protein evidence and once with RNA evidence) and used TSEBRA [50] with default settings to integrate the two models. To further refine the models for the three species, we removed genes identified solely by AUGUSTUS ab initio prediction, without hint support (e.g., introns, start and stop codons), from the protein database using the python script "selectSupportedSubsets.py" provided by BRAKER2. Final gene models were evaluated using the BUSCO protein mode with the lepidoptera_odb10 database. Gene model profiles, including the monoexonic rate and the sequence lengths of genes, introns, and exons, were summarized using GFACS v1.0.0 (RRID:SCR_022017) [51] (Supplemental Table S1).
For functional annotations, we first annotated gene function by blasting transcript sequences against the RefSeq non-redundant protein database and the Swiss-Prot arthropod protein database (reviewed UniProt database) using the blastp function in DIAMOND v2.0.9 (RRID:SCR_016071) [43]. Additionally, we performed a default INTERPROSCAN (RRID:SCR_005829) annotation, which integrates 14 member databases including PFAM and PANTHER [52]. For gene ontology (GO) terms and KEGG pathway annotations (RRID:SCR_012773), we queried transcript sequences against the PANNZER webserver (protein annotation with z-score) [53] and the KEGG Automatic Annotation Server (KAAS) [54] with bidirectional best hits.
Phylogeny and Gene evolution
To explore the evolution of the three gelechiid moths and their genes, we created a phylogeny using two additional published genomes of Gelechiidae: Phthorimaea operculella and Pectinophora gossypiella. We used the genome of Hyposmocoma kahamanoa as an outgroup, as this species belongs to a moth family closely related to Gelechiidae (Cosmopterigidae) [55,56].
The published genome assembly of P. operculella (GCA_024500475.1) was downloaded from NCBI GenBank, while those of P. gossypiella (GCF_024362695.1) and H. kahamanoa (GCF_003589595.1) were downloaded from the NCBI Reference Sequence (RefSeq) Database (O'Leary et al., 2016). We performed the same BUSCO approach using the lepidoptera_odb10 database to obtain compatible single-copy amino acid orthologs for these three species [32]. The final data matrix contained 4,876 single-copy orthologs that contained at least two ingroup species and the H. kahamanoa outgroup (385 orthologs did not fit these parameters and were removed).
The phylogeny of these five gelechiid moth species was constructed using the concatenated sequences from the BUSCO single-copy gene alignments. We assigned a single substitution model (Q.insect+FO+G4, the Q matrix estimated for insects) to the alignment and built a maximum likelihood tree in IQ-TREE v2.1.3 (RRID:SCR_017254) [58-61]. Branch supports were calculated using ultrafast bootstrap [62] and SH-aLRT [63,64]. Since the genome assembly of S. atriplicella is less complete (see Results), we also constructed a phylogeny with the four other gelechiid moth species (excluding S. atriplicella) and H. kahamanoa (outgroup) for the gene family analysis (see below), using the same tree-building approach, to avoid noise from the incomplete gene model of S. atriplicella.
To investigate gene family evolution, we inferred an ultrametric tree from the concatenated-sequence species tree using TREEPL with default settings [65]. For gene family identification, we employed ORTHOFINDER v2.5.2 (RRID:SCR_017118) using the primary isoform of the annotated gene models from each of the five species (four gelechiid species and one outgroup) [66]. Gene models of K. lycopersicella and P. absoluta were predicted with the BRAKER2 pipeline, while the other three gene models were directly downloaded from the appropriate databases. In ORTHOFINDER, we chose gene families as defined by phylogenetic hierarchical orthogroups (HOGs), an approach which is thought to be more accurate than similarity-based methods [66]. For each gene family, the HOGs gene-count matrix and the ultrametric tree were used to estimate repertoire size changes in CAFE v5.0.0 (RRID:SCR_018924) [67]. We extracted HOGs under rapid repertoire size expansion and contraction, with the significance level set to 0.01 and branch lengths calculated from the ultrametric tree. For each gene associated with these HOGs, we used the top annotated function (lowest e-value) from INTERPROSCAN to represent the gene function.
For the HOGs with significant rapid expansion and contraction, we assessed their gene functions and GO terms using INTERPROSCAN annotations. To standardize annotations, we reannotated gene functions for the three downloaded gene models (P. operculella, P. gossypiella, and H. kahamanoa) using default settings in INTERPROSCAN (Jones et al., 2014). For the associated GO terms, we performed enrichment analysis using the R package TOPGO 2.40.0 (RRID:SCR_014798) [68] with a significance level of 0.05 for both the Fisher classic and weight01 algorithms.
After haplotig removal, considerable reductions in the number of contigs were found, while the assembled sizes and BUSCO completeness remained nearly constant, indicating that smaller duplicated contigs were removed. From the assemblies of K. lycopersicella and S. atriplicella, we identified non-target sequences contributing a small portion of the assemblies. In K. lycopersicella, a 10-kbp contig was blasted to Streptophyta, while in S. atriplicella, 15 small contigs (380 kbp in total) were blasted to Proteobacteria. After removing these non-target contigs, 443.65 Mb in 61 contigs, 652.7 Mb in 688 contigs, and 301.15 Mb in 7,092 contigs were retained in the assemblies of K. lycopersicella, P. absoluta, and S. atriplicella, respectively. BUSCO scores for these assemblies are shown in Table 1. Estimated genome sizes from GENOMESCOPE for K. lycopersicella, P. absoluta, and S. atriplicella were 396, 514, and 276 million base pairs (Mb), much smaller than our assemblies, likely due to the exclusion of an extremely high number of k-mers from the repeats in the genomes. Estimating genome size with MODEST resulted in 422, 1,040, and 344 Mb, with peak coverages at 49X, 10X, and 55X for K. lycopersicella, P. absoluta, and S. atriplicella, respectively (Supplemental Table S2). The estimated genome and assembly sizes of P. absoluta based on MODEST were nearly twice those of GENOMESCOPE; this discrepancy was likely due to the different estimated sequence coverages.
For gene annotation, we first annotated repeats using REPEATMODELER2 [38]; P. absoluta showed the highest proportion of repeats (54.4%), followed by K. lycopersicella (48.22%) and S. atriplicella (32.83%). The soft-masked genomes were used to run the BRAKER2 pipeline with protein evidence for K. lycopersicella and S. atriplicella, resulting in 15,405 and 14,647 genes, respectively [42]. For P. absoluta, we used both protein and RNA sequence evidence to predict the gene model. After removing genes without hint support, the gene model with 19,106 genes was used for functional annotation. BUSCO scores for these gene models reflect their assembly features, including the higher duplication rate in P. absoluta and the higher missing rate in S. atriplicella (Table 1).
Phylogeny and gene family evolution
The maximum likelihood tree, derived from the concatenated supermatrix, shows that K. lycopersicella and P. operculella are most closely related (Figure 1). Phthorimaea absoluta, another species feeding on solanaceous hosts, is the sister species to K. lycopersicella and P. operculella. It is noteworthy that P. absoluta was previously and widely recognized as Tuta absoluta, indicating the need for a comprehensive phylogenomic analysis of this group. Scrobipalpa atriplicella, an amaranthaceous feeder, is recovered as the sister taxon to the other three members of subfamily Gelechiinae in the six-species phylogeny (Supplementary Figure S3). Finally, Pectinophora gossypiella, a member of subfamily Apatetrinae, is the sister taxon to all four Gelechiinae species in the phylogeny, supporting the current taxonomic arrangement [69] (Figure 1).
We identified 14,384 HOGs in the protein sequences of the six species, and 809 HOGs were found to evolve rapidly in at least one branch along the ultrametric tree (Supplemental Table S3). Gene family evolution analyses showed a general pattern of rapid expansions at the tips of the tree and rapid contractions along internal branches (Figure 1). Specifically, 85 and 52 HOGs were identified in K. lycopersicella with rapid repertoire size expansion and contraction, respectively. Among these HOGs, 67 are annotated with gene functions, of which 5 are putatively involved in host plant adaptation (glucose-methanol-choline oxidoreductase, cytochrome P450 superfamily, insect cuticle proteins, and trypsin family serine proteases) and 37 in immunity (PiggyBac transposable elements, retrotransposon Pao-related genes, serpin superfamilies, and Toll-like receptors) (Supplemental Tables S4 and S5).
We also found that 68
Gene ontology enrichment analyses
From the genes that were identified as rapidly evolving, we found a handful of enriched biological function terms (Table 2). These include sensory perception of taste (GO:0050909), toll-like receptor signaling pathway (GO:0002224), immune response (GO:0006955), DNA integration (GO:0015074), and plasma membrane phospholipid scrambling (GO:0017121). All the enriched terms, including those passing the Fisher classic but not the weight01 threshold, are listed in Supplemental Table S6.
Genome assembly quality and its implications for gene family evolution
Long-read sequencing technologies such as Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (ONT) have provided a promising future for de novo assemblies of high-quality genomes for non-model species [70,71]. These recent advancements have the potential to significantly expand our understanding of the evolutionary mechanisms underlying plant-insect interactions and to help prevent future catastrophic crop damage.
In this study, we used HiFi long reads to assemble genomes for three gelechiid moth species (Table 1). Although the BUSCO completeness of the S. atriplicella genome was relatively low (73.3%), 3,246 of its BUSCO genes could be used to reconstruct a phylogeny with four other gelechiid species (Supplemental Figure S3). However, we note that the robustness of gene family evolution analyses relies heavily on the quality of the genome assembly and the subsequent gene model predictions. An incomplete genome assembly with higher heterozygosity or lower sequence coverage could confound the results by providing missing, incomplete, or duplicated gene predictions. Therefore, we used the phylogeny with the four more complete genome assemblies and an outgroup to detect gene family evolution with rapid repertoire size changes. We note that the assembly of P. absoluta has a higher BUSCO duplication rate, likely the result of shallower sequencing depth and higher heterozygosity, which could possibly lead to overestimated gene copy numbers.
Although programs such as CAFE were designed to cope with such issues [72], the result of repertoire size changes for species with lower assembly completeness should be interpreted with some caution.
It should also be noted that sequencing interference was encountered for the S. atriplicella library samples, which were prepared together with those of P. absoluta and K. lycopersicella using the same DNA extraction, clean-up, library preparation, and sequencing protocols. We therefore sequenced the amplified DNA library by trial and error. According to the BUSCO completeness score, only part of the genome was covered by the HiFi reads despite the high sequencing depth (37X and 55X from GENOMESCOPE and MODEST, respectively). This is likely due to replication bias or errors during the amplification process. Based on these experiences, we would recommend optimizing DNA extraction to enable native DNA to be sequenced, rather than the whole-genome amplification route we followed with this challenging sample.
Finally, the different methods and assembled sizes resulted in considerable differences in estimated genome sizes. This discrepancy seems to be more prominent in taxa with lower sequence coverage and higher heterozygosity (e.g., P. absoluta). We therefore recommend using multiple approaches to better estimate the true genome size and understand the quality of the assembly.
Genomic adaption of the solanaceous feeding gelechiid moths
Moths use a combination of olfactory (smell) and gustatory (taste or contact) chemoreception to find oviposition sites. Olfactory and gustatory cues are often thought to function in long- and short-range detection of suitable hosts, respectively, but volatile cues at the host surface may also stimulate olfactory sensilla and determine oviposition choice in some species [73]. Indeed, for P. absoluta, olfactory cues in the form of tomato leaf volatiles result in oviposition rates indistinguishable from those involving direct contact with the leaf surface [74]. This is not the case for K. lycopersicella and P. operculella, where contact chemoreception appears to play a more important role in oviposition choice, with surface compounds of host plants shown to stimulate egg-laying in both species [75-77], and those of non-hosts shown to act as deterrents in P. operculella [76]. It is notable that our gene family evolution analyses show an increase in rapidly evolving genes associated with host plant sensing, particularly gustatory receptors [78], coincident with a shift to solanaceous feeding in gelechiid moths. This could serve as an indication of selective pressure on host plant association through female oviposition or larval feeding choice. It is likely that caterpillars of leaf-rolling and leaf-mining species are confined to the plant where they hatch [79,80] and therefore do not search extensively for a new host plant, but the role of contact chemoreception in oviposition choice in gelechiids is better characterized than that of caterpillar host searching.
While an increase in host plant association genes correlates with a shift to feeding on Solanaceae, we do not observe a directional shift in terms of gains/losses. Thus, while we detected gains in gustatory receptor genes in P. absoluta and P. operculella, K. lycopersicella shows losses in this gene family. While a number of studies have shown a correlation between host range and chemosensory receptor gene repertoire size or specific losses [81-83], the gelechiids studied here appear to have experienced a host shift from one plant family (Amaranthaceae in S. atriplicella) to another (Solanaceae in P. absoluta and its relatives), instead of an expansion or contraction of host range. Thus, we might not expect directional changes in chemosensory receptor repertoire size in instances of host shifts in the same manner that has been observed after expansion or contraction of host range.
For many lepidopteran species, detoxification of plant secondary metabolites is essential in host adaptation [84]. Several genes, including ABC transporters, P450, GMC oxidoreductase, UGT, and insect cuticle proteins, play important roles in detoxifying the defensive compounds of their host plants [85,86]. Our gene family evolution analysis reveals that these detoxification genes also rapidly evolve in the focal gelechiid moths, while most expansions are found in the two solanaceous-feeding species (i.e., P. absoluta and P. operculella) (Figure 1). It is noteworthy that K. lycopersicella, the sister species of P. operculella in our tree, shows only four detoxification genes expanding. This result may be explained partially by the different annotation pipelines that were used for these two genomes. However, since the gene model of K. lycopersicella covers 93.2% of the BUSCO single-copy genes, and CAFE was designed to control for such confounding factors arising from incomplete or biased annotation, it is fair to conclude that detoxification gene expansion is not a general feature of solanaceous-feeding species [72]. Interestingly, K. lycopersicella and P. absoluta are found to prefer tomato over potato, while P. operculella feeds mainly on potato, implying that the feeding and oviposition preferences are not directly related to the evolution of detoxification genes. One other possible explanation is that the rapid expansion of detoxification genes in P. absoluta and P. operculella has resulted from frequent exposure to pesticides, as these two species are well-known agricultural pests with many pesticide resistances reported [87-89]. Although K. lycopersicella is also considered an agricultural pest, the damage it causes is not comparable to that of the two Phthorimaea species [7]. Further studies using population genomic approaches to determine the relationship between detoxification-gene evolution and pesticide resistance might provide more evidence supporting or opposing this hypothesis.
One other important mechanism in host adaptation involves digesting nutrients from plant tissue. For many phytophagous insects, coping with host plant protein peptidase inhibitors and efficiently breaking down these complex molecules is an essential first step in digestion [20]. By comparing genomes of four gelechiid species, we identified many serine protease genes, which are known for their function in host-plant protein digestion. According to the gene evolution results, serine protease genes rapidly expanded in P. absoluta, P. operculella, and P. gossypiella (Figure 1). These genes not only digest proteins; they also act as species-specific antagonists interfering with the function of host plant peptidase inhibitors [94-96]. The expansion of trypsin and chymotrypsin in two global pests of solanaceous crops implies their underlying contribution to important pest species features, such as shorter life spans relative to K. lycopersicella, a species that has fewer copies of these genes [7,97-100]. In general, our gene family evolution analysis reveals indirect but important signals of genome evolution underlying host adaptation in these agricultural pests.
Rapid evolution of retrotransposable elements and other immune related genes in gelechiid moths
We found that many of the rapidly evolving genes present in all four gelechiid species are associated with retrotransposons and reverse transcription. For example, HOGs annotated with Pao, a retrotransposable element involved in antiviral defense, were found to be rapidly evolving in all four gelechiid species (Supplemental Table S3). This element usually contains five protein domains, of which reverse transcriptase (RTase), the retrotransposon gag domain, aspartic protease (or aspartic peptidase), and the Ribonuclease H superfamily (RNase H) are repeatedly found to evolve rapidly [101,102]. The RTase in this retrovirus-like element reverse-transcribes invading viral RNA into DNA (stored in retrotransposon sequences or forming a viral circular DNA), and the infection is suppressed by RNase H through cleavage of the DNA-RNA hybrids or by the downstream RNAi pathway [103][104][105][106][107]. Many other significant HOGs found in these gelechiid species may also have similar antiviral mechanisms, including the PiggyBac transposable element, Ty3 transposon capsid-like protein, and Transposase L1 [108,109] (Supplemental Table S3). However, rapid repertoire size changes of these retrotransposable elements could be the result of transposon activity rather than gene copy accumulation through recombination.
We also found many rapidly evolving HOGs annotated with immune-related genes, such as those involved in the Toll-like receptor (TLR) pathway. These genes (e.g., Toll-like receptor, Leucine-rich repeat domain superfamily, and NF-kappa-B inhibitor-interacting Ras-like protein) showed rapid size changes in all tested gelechiid species. Unlike retrotransposable elements, the TLR pathway targets a wider range of pathogens, including bacteria, fungi, and viruses. Finally, many other genes that we identified have putative functions in immunity, including the Serpin superfamily, Immunoglobulin, Pacifastin domain, and Gamma interferon inducible lysosomal thiol reductase GILT. The presence of many rapidly evolving, immune-related genes suggests that managing potential threats from pathogens is also a significant selection pressure. This finding is supported by comparative genomic studies on other moths, in which viral defense genes (RNase H, RTase, retrotransposon Pao, Toll-like receptor, Leucine-rich repeat domain) were identified as rapidly evolving [11,110]. In sum, our gene family evolution approach highlights the importance of host adaptation and immune-related genes in these closely related gelechiid species.

Table 1. Assembly statistics of the three newly sequenced gelechiid moth species, compared to statistics of the published Phthorimaea absoluta v1 assembly. BUSCO results from the Phthorimaea absoluta v1 assembly have been re-analyzed using BUSCO v5.
References supporting categorizations of gene function are provided in Supplemental Table S5.
Specimens ([NCBI:txid1511203], P. absoluta [NCBI:txid702717], and S. atriplicella [NCBI:txid687131]) were collected from laboratory colonies at the University of California, Davis, USA; Khumaltar, Lalitpur, Nepal; and the Saskatoon Research and Development Centre of Agriculture and Agri-Food, Canada, respectively. Genomic DNA from one moth of each species was extracted from the whole moth (larva) using the DNA isolation protocol of the OmniPrep Genomic DNA Extraction Kit (G-Biosciences, St. Louis, MO). For S. atriplicella, we encountered sequencing interference for several library samples; therefore, we amplified genomic DNA with the illustra GenomiPhi V2 DNA Amplification Kit (Cytiva), and the amplified DNA was used in place of the native DNA extracted from the tissue. The genomic and amplified DNA samples were subsequently used for fragment size selection and sample purification with the DNeasy PowerClean CleanUp Kit before library preparation. Libraries were sequenced with a single SMRT cell on the PacBio Sequel IIe system. The DNA clean-up, library construction, and sequencing steps were performed at the Interdisciplinary Center for Biotechnology Research (ICBR) at the University of Florida. The HiFi sequences are deposited in NCBI (BioProject accession number: PRJNA932016; SRA sample accessions: SRR23497930, SRR23497929, and SRR23497928).
Figure 1. (left) Maximum likelihood tree of four gelechiid species from a concatenated supermatrix analysis of 4,876 single-copy genes, presented alongside a color-coded number of rapidly evolving gene families (red: expanding, blue: contracting). The tree is rooted with Hyposmocoma kahamanoa (Gelechioidea: Cosmopterigidae). Nodes are labelled with branch supports (ultrafast bootstrap/SH-aLRT). (right) The list of rapidly evolving gene families associated with host plants includes a host compound-sensing gene family, 20 detoxification genes, and seven digestion-related genes. Numbers in color-coded cells represent repertoire size changes on the corresponding branches of the tree in (left), and gene-family functions are shown at the top of the columns. Significant repertoire size changes are marked with borders around the cells.
Table 2. Enriched GO terms from the rapidly evolving genes of the five gelechiid species in this study. | 2024-01-03T06:17:27.242Z | 2024-01-02T00:00:00.000 | {
"year": 2024,
"sha1": "000792f17c9ae1fd969d719bc8ab56be3e834162",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a38f5bcc4e782df5a053ba365ecc630d964c4c40",
"s2fieldsofstudy": [
"Biology",
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3526816 | pes2o/s2orc | v3-fos-license | The Risk Factors Influencing between the Early and Late Recurrence in Systemic Recurrent Breast Cancer
Purpose Patients with recurrent breast cancer usually die of their disease, even after radical surgery and adjuvant therapies that reduce the odds of dying. Many studies have compared patients who died of recurrent disease with those who died without recurrent disease; however, less attention has been paid to evaluating factors associated with the timing of recurrence. Thus, the objective of this study was to investigate the correlation between various factors and the timing of recurrence. Methods We retrospectively reviewed the data of 95 recurrent breast cancer patients who underwent curative surgery to determine prognostic factors such as menopausal status, operation method, stage, nodal status, histologic grade, nuclear grade, extensive intraductal carcinoma component, hormone receptor, p53, c-erbB-2, Ki-67, and molecular subtype. We compared patients who recurred within 2 years of operation and adjuvant chemotherapy (early recurrence) with those who recurred after 2 years (late recurrence). Results Histologic grade (p=0.005), nuclear grade (p<0.001), p53 (p=0.022), and Ki-67 (p<0.001) differed significantly between early and late systemic recurrence. In stage I/II, histologic grade (p=0.001), nuclear grade (p<0.001), and Ki-67 (p=0.005) were significant factors that influenced systemic early recurrence. In stage III, nuclear grade (p=0.024) and Ki-67 (p=0.001) were significant factors that influenced systemic early recurrence. However, subtypes (p=0.189, p=0.132, p=0.593, p=0.083) were not associated with the timing of recurrence. Conclusion In systemic recurrent breast cancer patients, risk factors such as histologic grade, nuclear grade, p53, and Ki-67 are also associated with the timing of recurrence. We suggest that these patients should be properly treated and closely followed up.
INTRODUCTION
According to the 2009 Annual Report of the Korea Central Cancer Registry, breast cancer accounted for 15.1% of cancer occurrences among Korean women in 2007, the second highest proportion after thyroid cancer [1].
Recently, the treatment outcome of breast cancer has improved owing to developments in curative surgery, chemotherapy, and endocrine therapy; nevertheless, about 25% to 30% of patients without axillary lymph node metastasis and about 75% to 80% of patients with axillary lymph node metastasis experience recurrence within 10 years, and most of them die of metastatic breast cancer [2,3]. As a result, studies on the factors that influence prognosis have been conducted. The factors known to date are axillary lymph node metastasis, tumor size, the histologic type and differentiation degree of tumors, estrogen and progesterone receptor status, and overexpression of the genes p53 and c-erbB-2 [4].
However, these studies focused on comparing patients who died of recurrence with patients who survived, and they did not evaluate the indexes associated with the timing of recurrence, even though 70% of breast cancer patients experience recurrence within 3 years [5,6] and patients with early recurrence have a shorter median survival than patients with late recurrence [7].
In this context, this study was performed to evaluate the factors influencing the recurrence period in patients who experienced recurrence after the first treatment of breast cancer.
Subject
Methods
We retrospectively evaluated the recurrence features with regard to patient age, menopausal status, method of surgery, stage, nodal status, histologic differentiation, estrogen receptor status, progesterone receptor status, presence of an extensive intraductal carcinoma component (EIC), overexpression of the genes p53 and c-erbB-2, degree of Ki-67 expression, molecular subtype (luminal A type, luminal B type, HER2 positive type, triple negative type), and adjuvant treatment such as chemotherapy, radiation therapy, or endocrine therapy after curative surgery.
We evaluated the factors influencing the recurrence period for patients who had recurrence after the first treatment of breast cancer by classifying them into two groups: an early recurrence group, with recurrence within 2 years of the completion of curative surgery and chemotherapy, and a late recurrence group, with recurrence more than 2 years after the completion of treatment. We adopted this criterion because 70% of breast cancer patients experience recurrence within 3 years. Some of these patients also received radiation therapy or endocrine therapy when needed.
The data were processed with SPSS version 18.0 (SPSS Inc., Chicago, USA). The correlation between each clinicopathologic factor and the recurrence period was analyzed with the chi-square test, and the statistical significance of the patient distribution between the groups was analyzed with the nonparametric chi-square test. A p-value < 0.05 was considered statistically significant.
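As a minimal sketch of the chi-square comparison described above, the following Python snippet tests whether a clinicopathologic factor is distributed differently between the early and late recurrence groups. The counts are hypothetical and purely illustrative, not the study's data.

```python
# Chi-square test of independence on a 2x2 contingency table, mirroring the
# comparison of a factor (e.g., high vs. low Ki-67) across recurrence groups.
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = factor level, columns = (early, late) recurrence.
table = [[30, 10],   # high Ki-67: early, late
         [18, 37]]   # low Ki-67:  early, late

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
if p < 0.05:  # the study's significance threshold
    print("Statistically significant difference between groups")
```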
General characteristics of patients
The mean follow-up period was 53.5 months (range, 6.4-116.5 months). The general characteristics of the patients are presented in Table 1. Each patient was classified according to nodal status and stage. After curative surgery, chemotherapy, radiation therapy, and endocrine therapy were given as adjuvant treatment. All patients received chemotherapy: lymph node-negative breast cancer patients received adjuvant systemic treatment with cyclophosphamide, methotrexate, and 5-fluorouracil (CMF) chemotherapy, and lymph node-positive breast cancer patients received adjuvant systemic treatment with an anthracycline-based regimen.
Comparison according to recurrence period
We checked the statistical significance between two groups: the early recurrence group (recurrence within 2 years of the completion of curative surgery and chemotherapy) and the late recurrence group (recurrence more than 2 years after the completion of treatment).
As for stage, there was a statistically significant difference (p = 0.019) between early recurrence and late recurrence, but in nodal status there was no statistically significant difference between them (p = 0.365) (Table 3).
In histologic differentiation, there was a statistically significant difference (p = 0.005) between early recurrence and late recurrence. Nucleus differentiation also showed a statistically significant difference (p < 0.001), and statistically significant differences were found for p53 (p = 0.022) and Ki-67 (p < 0.001) (Table 3).
Systemic recurrence sites
Systemic recurrence sites are presented in Table 2. When the bone scan was positive, bone metastasis was confirmed by magnetic resonance imaging (MRI). Lung metastasis was confirmed by lung biopsy, pleural cytology, or serial follow-up when the chest computed tomography (CT) was positive. When the brain CT was positive, brain metastasis was confirmed by MRI; when the liver CT was positive, liver metastasis was confirmed by MRI or serial follow-up; and metastases at other sites (mediastinal lymph node metastasis, supraclavicular lymph node metastasis) were confirmed by biopsy.
Comparison according to recurrence period at each stage
We checked the statistical significance between two groups at each stage.
For stage I/II, there was a statistically significant difference in histologic differentiation (p = 0.001) between early recurrence and late recurrence, and nucleus differentiation also showed a statistically significant difference (p < 0.001). A statistically significant difference was also found for Ki-67 (p = 0.005) (Table 4).
Comparison according to recurrence period by subtype
As for subtype, the luminal A type had many instances of late recurrence, but there was no statistically significant difference (p = 0.189). The luminal B, HER2 positive, and triple negative types had many instances of early recurrence, but there were likewise no statistically significant differences (p = 0.132, p = 0.593, p = 0.083) (Table 6).
DISCUSSION
In general, metastasis after breast cancer treatment is distinctive in that it concentrates on specific body sites. The most frequent sites of systemic metastasis were reported by Kamby et al. [8] as bone 31%, lung 19%, and liver 15%, and similar results have been reported in Korea. In this study, metastasis was found in the bones in 38 patients (40.0%), lungs in 14 patients (14.7%), brain in 6 patients (6.3%), liver in 5 patients (5.3%), and other sites in 2 patients (2.1%), and 30 patients (31.6%) had recurrence at two or more sites. In the early recurrence group, 17 patients (32.7%) had recurrence at two or more sites, and in the late recurrence group, 13 patients (34.2%) did; there was no statistically significant difference. Hence, it appears that there is no relationship between the sites of recurrence and the timing of recurrence.
We studied the factors influencing the recurrence period for patients who had recurrence after the first treatment of breast cancer by classifying them into two groups: 1) early recurrence, in which recurrence occurred within 2 years of the completion of curative surgery and chemotherapy, and 2) late recurrence, in which recurrence occurred more than 2 years after the completion of treatment.
Debonis et al. [9] reported that metastatic sites, combined chemotherapy, and disease-free interval influence the survival rate, and Pater et al. [10] reported that disease-free interval, metastatic sites, pathologic stage at the time of diagnosis, histologic subtype, and primary tumor size influence the survival rate. In this study, positive axillary lymph node metastasis predominated in both the early and late recurrence groups, but there was no statistically significant difference between the groups; on the other hand, stage differed significantly between early and late recurrence. This suggests that nodal status is not a useful risk factor for early recurrence. Breast cancers in Korea are diagnosed mostly in women in their 40s; in this study, patients between the ages of 35 and 50 accounted for 52.6%, the largest proportion. When breast cancer occurs at a young age, it was generally thought to be more invasive and to have a worse prognosis, but it is now known that there is no relationship between prognosis and age [11]. In contrast, Retsky et al. [12] reported a significant difference in the risk of early recurrence between premenopausal and postmenopausal women. In this study, premenopausal women predominated in both the early and late recurrence groups, but there was no statistically significant difference between the groups, nor within each stage.
The most widely used standard to grade the differentiation of breast cancer is the Scarff-Bloom-Richardson classification, in which grades from I to III are assigned after considering cell differentiation, the degree of nuclear pleomorphism, and the frequency of nuclear division. Kute et al. [13] reported that histologic differentiation is a prognostic factor for breast cancer, whereas Younes and Laucirica [14] reported that it is not worth consideration as a prognostic factor. In this study, histologic differentiation showed a statistically significant difference between early recurrence and late recurrence, as well as within stage I/II; within stage III, however, there was no statistically significant difference. This suggests that histologic differentiation is worth including as a risk factor for predicting early recurrence in early-stage breast cancer. Nucleus differentiation also showed a statistically significant difference between early recurrence and late recurrence, and this held within both stage groups, suggesting that nucleus differentiation is a risk factor for early recurrence irrespective of stage.
Noh et al. [15] reported that EIC can be a cause of local recurrence, because EIC can exist around invasive cancer and around primary cancer that appears visually sound. Park et al. [16] reported that EIC is related to systemic recurrence as well as local recurrence. In this study, no statistically significant difference was found for EIC in any group.
Many studies have shown that a positive hormone receptor confers a better prognosis than a negative one [17,18]. In this study, there were many patients with positive hormone receptors in both the early and late recurrence groups, but there was no statistically significant difference between the groups; thus, hormone receptor status showed no relationship with early recurrence.
It has been reported that c-erbB-2 is negatively correlated with the estrogen and progesterone receptors, is more frequently expressed in tumors with high nuclear grade, and is more frequently expressed in invasive cancer [19,20]. Among the molecular biological factors studied, it is the only indicator of a poor prognosis in breast cancer [21,22]. In this study, no statistically significant difference was found for c-erbB-2 in any group.
The p53 gene mutation appears in the early stage of breast cancer. Barnes et al. [23] argued that p53 is related to patient prognosis regardless of lymph node metastasis. In this study, there was a statistically significant difference between early recurrence and late recurrence overall, but not at each stage. Thus, p53 is related to early recurrence, but it appears to be a stage-dependent factor.
Among the predictive and prognostic factors under molecular biological study, Ki-67 is an antigen, detected by a monoclonal antibody, that appears in all phases of the cell cycle except G0 and is a good indicator of cell proliferation. Whether Ki-67 can be regarded as an independent predictive and prognostic factor is under dispute, but many studies have reported its effectiveness as a predictive and prognostic factor for breast cancer [24][25][26]. In this study, there was a statistically significant difference between early recurrence and late recurrence, and this held within both stage groups. Thus, Ki-67 is a risk factor for early recurrence irrespective of stage; it is an independent factor not associated with stage.
Many studies have shown that both the triple negative and HER2 positive subtypes have poorer clinical, pathologic, and molecular prognoses, with the triple negative group having the worst overall and disease-free survival [27][28][29]. In this study, however, the luminal A type had many late recurrences but showed no statistically significant difference, and the luminal B, HER2 positive, and triple negative types had many early recurrences but likewise showed no statistically significant difference. Therefore, subtype is not associated with the timing of recurrence.
In systemic recurrent breast cancer patients, there were statistically significant differences between early recurrence and late recurrence in histologic differentiation, nucleus differentiation, p53, and Ki-67. When the recurrence periods were compared by stage, there were statistically significant differences between early recurrence and late recurrence in histologic differentiation, nucleus differentiation, and Ki-67 for stage I/II, and in nucleus differentiation and Ki-67 for stage III. However, subtypes were not associated with the timing of recurrence.
Hence, tailored therapy and detailed follow-up are thought to be necessary for these patients. Additional future predictive risk factor focused large scale investigations for molecular bio- | 2016-05-12T22:15:10.714Z | 2012-06-01T00:00:00.000 | {
"year": 2012,
"sha1": "f8671a33760a1f67a2c3c8567cc1ac8929bb0e73",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4048/jbc.2012.15.2.218",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8671a33760a1f67a2c3c8567cc1ac8929bb0e73",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247209224 | pes2o/s2orc | v3-fos-license | Effects of Chemical Cleaning on the Ageing of Polyvinylidene Fluoride Microfiltration and Ultrafiltration Membranes Fouled with Organic and Inorganic Matter
Herein, the effects of cleaning with sodium hydroxide and citric acid solutions on the changes in the properties of two hollow-fiber PVDF microfiltration (MF) and ultrafiltration (UF) membranes fouled with organic and inorganic matter were investigated. Accelerated membrane ageing was induced by using high concentrations of tannic acid and iron oxide (Fe2O3) particles in the feed water; these conditions were maintained with different membrane soaking times to observe temporal effects. It was found that tannic acid molecules adsorb onto the membrane surface, resulting in changes in surface characteristics, particularly the surface functional groups responsible for enhancing the membrane's hydrophilicity. Experimental results demonstrate that NaOH had a stronger effect on the tensile strength and surface chemistry of the fouled MF and UF membranes than citric acid. Lifetime prediction with an exponential (decay) model indicated that, with cleaning every 15 days and 2 h of downtime per cleaning, the UF membrane cleaned with NaOH would age within about 1.8 years and the MF membrane after about 5 years, when a 10% tensile strength decrease relative to the original membrane is allowed.
Introduction
The processing of agricultural products produces processed waters with different characteristics that depend on a variety of raw materials, crop growing locations, technological mechanization, and the availability of a secure energy supply. In cashew nut processing, either the roasting process or the steam cooking process is widely used. In the former process, processed water is generated during a quenching step of the roasted cashew nuts. In the boiling process, on the other hand, the water is drained from the pressure cooker. Cashew nut shell liquid (CNSL) contains recalcitrant organic compounds that are not easily biodegradable, such as long-chain phenolic compounds [1]. Direct disposal of CNSL can cause irreparable damage to ecosystems and threaten aquatic and terrestrial fauna and flora.
The microfiltration (MF) and ultrafiltration (UF) processes ensure the removal of bacteria, germs, viruses, and colloidal particles from processed waters. In addition, the quality of the treated water remains consistent, independent of the pollution level of the feed water. Nonetheless, the MF and UF processes have relatively low efficacy in removing organics and ions compared to nanofiltration and reverse osmosis. Different integrations of these pressure-driven membrane processes have been applied in a variety of wastewater treatment settings, particularly MF and UF as pretreatment to other unit processes [2]. Compared to the alternatives of electrodialysis and reverse electrodialysis, which are promising desalination technologies that can transform salinity gradient energy into electricity [2][3][4], electrodialysis shows a relatively higher total organic carbon rejection than nanofiltration [3], but it is still prone to organic and inorganic fouling. Meanwhile, pretreatment with coagulation can enhance organic removal efficacy in the UF process [2].
Recent studies have reported the use of lab-prepared polymeric membranes, made of cellulose acetate and polyethylene glycol, in removing iron and zinc ions from synthetic wastewaters through an electrodialysis process [5,6]. The authors found that membrane enrichment with chitosan-silver ions increased electrical conductivity and hydrophilicity [5], while adding SiO 2 nanoparticles to the polyvinyl alcohol matrix improved proton conductivity [6]. As for commercially available polymeric MF and UF membranes, hollow-fiber membrane modules are widely used for water treatment and reuse due to their high packing density and relatively low manufacturing cost [2]. Hollow-fiber membranes are usually pressurized in a housing or immerged in a basin and are mainly made of polyvinylidene fluoride (PVDF) or polyethersulfone (PES) [7]. PVDF is widely used as a polymeric membrane material due to its outstanding properties: i.e., high mechanical strength, flexibility, thermal stability, and chemical resistance [8].
Membrane fouling is a major impediment to the long-term operation of small water treatment plants such as CNSL plants. It causes a decrease in permeate flux, a rapid increase in transmembrane pressure (TMP), and probable degradation of the mechanical properties of the membranes [2,7]. Fouling by organic compounds, known as "organic fouling", may require extensive cleaning with chemicals such as chlorine or a strong alkali to restore membrane permeability. This is because organic molecules can adsorb on the inner walls of membrane pores and cause membrane pore plugging that cannot be effectively removed by physical cleaning [2]. In electrodialysis processes, the removal of anionic organic and inorganic foulants from ion exchange membranes can also be performed through "cleaningin-place" methods using acidic and alkaline solutions [3].
Unfortunately, chemical cleaning can not only remove contaminants, but also attack the membrane materials, resulting in polymer degradation. Chemical cleaning has a relatively greater impact on the chemical and physical properties of polymeric membranes than physical cleaning. Polymers are broken down into monomers during hydrolysis [9], a reaction that involves the consumption of a water molecule while breaking the covalent bond that holds two components of a polymer together (such as an ionized and an unionized component). Various studies have confirmed that sodium hydroxide and citric acid, which are commonly used membrane cleaning solutions, affect the properties and performance of membranes. These chemicals create a net negative charge on the membrane surface, which increases hydrophilicity and leads to a decrease in membrane permeability [10,11]. Elevated temperatures, bleach solution concentration, and extreme pH environments can exacerbate polymer degradation [12].
Increasing attention is being paid to the integrity of membranes, as membrane damage will result in deterioration of permeate water quality. In the case of hollow fiber membranes, previous studies have focused on their mechanical properties, which are directly related to a faulty structural arrangement of the membrane module or the external pressure exerted on the fibers during operation [7]. On the other hand, chemical attack (or oxidation) leading to membrane fiber failure may be caused by incompatibility between chemicals in the feedwater with the membrane material. Ravereau et al. [8] studied the degradation of two PVDF hollow fiber membranes in chlorine solutions with different pH and found that prolonged exposure of PVDF membranes to chlorine leads to the formation of double bonds, chain scission, and crosslinking, especially in acidic solutions. In addition, strong reactivity of non-fluorinated aliphatic or cycloaliphatic additives with bleach accelerates chain scission.
Currently, a study examining the effects of fouling by CNSL components (i.e., phenolic compounds and iron oxide particles) on membrane integrity is still pending. A recent study in full-scale water treatment plants revealed that aged membranes may be more susceptible to the loss of mechanical properties than newer membranes [13], yet the relationship between fouling effects and the changes in surface functional groups associated with chemical cleaning has not been studied in detail. Unfortunately, most existing studies on membrane ageing by chemical treatments focus only on new membranes. To the best of our knowledge, this study is the first to investigate the effects of sodium hydroxide and citric acid solutions on the physico-chemical properties of PVDF MF and UF membranes fouled by simulated CNSL. Since the degree of polymer damage is time-dependent, this paper aims to establish a better understanding of how degradation of the membranes' additives and backbone structures progresses under chemical attack of varying exposure duration. The main objective of this study is to establish a relationship between cleaning procedures and changes in the mechanical and physicochemical properties of two PVDF membranes, as well as to estimate the lifetimes of the two membranes as affected by cleaning procedures.
Hydrophilized PVDF Hollow-Fiber MF and UF Membranes
An outside-in PVDF hollow-fiber MF membrane (model no. LE-US02-125, Kuraray Co., Ltd., Tokyo, Japan) and a UF membrane obtained from the National Nanotechnology Center (NANOTEC, Pathumthani, Thailand) were used in this study. These membranes were packed in a cartridge-like configuration and designed for use in a variety of applications, e.g., water reuse. The nominal pore diameters of the MF and UF membranes are 0.02 µm and approximately 15 nm, respectively (molecular-weight cut-off (MWCO) of 100-200 kDa). The pressurized module has an effective filtration area of 0.0864 m² (effective length of 27 cm, 102 fibers) and an operating flux of 75-100 L/m²·h for MF at a transmembrane pressure (TMP) of <0.1 MPa (or 1 bar). The effective filtration area of the UF membrane is 0.0143 m² with 14 fibers that can be used at a TMP < 0.2 MPa, giving a filtrate flux of 50-70 L/m²·h.
The properties of the UF and MF membranes are summarized in Table A1 (Appendix A). The functional groups and chemical structures of the UF and MF membranes analyzed by FTIR spectroscopy confirm the characteristic peaks of PVDF (e.g., -CF and -CF2 peaks) and additives (i.e., polyvinylpyrrolidone (PVP) for MF and polyethylene glycol (PEG) for UF), as shown in Figure A1. Thermogravimetric analysis (TGA) of the PVDF membranes showed that the PVDF content was 69% for MF and 64% for UF, based on the weight loss from PVDF decomposition at elevated temperatures of 385-510 °C (Figures A2 and A3).
Synthetic Wastewater
The cashew nut processing industry typically produces processed water containing high concentrations of organic matter, usually refractory, and iron oxide [1]. To reflect this, two synthetic feedwaters were prepared: Water A, which represents organically contaminated water; and Water B, which represents organic-inorganic contaminated water to which iron oxide (Fe2O3) was added. Both waters contained tannic acid (Wako Pure Chemical Industries, Ltd., Osaka, Japan) as a surrogate for the organics in cashew nut processing wastewater, at a higher concentration than would be expected in real wastewater in order to accelerate membrane fouling. Iron oxide particles (CAS No. 458700010, iron (III) oxide, 95% pure, Acros Organics, NJ, USA) were used as model iron oxide in the processed water. The characteristics of the synthetic cashew nut waters used in this study are shown in Table 1. Based on a laser scattering particle size distribution analyzer (LA-960, Horiba), the mean sizes of iron oxide particles dispersed in MilliQ water and in tannic acid solution were 3.2 and 13.84 µm, respectively.
Membrane Filtration Processes
The bench-scale MF and UF system was operated with dead-end filtration in an outside-in mode (Figure 1). Briefly, the filtration cycle comprised filtration, air scouring, physical cleaning/chemical cleaning, drainage, and refill. During filtration, membrane fouling would lead to a gradual increase in TMP. The membranes were subjected to physical cleaning when the TMP reached nearly 100 kPa. Physical cleaning comprised several steps as follows: first, pressurized air at 200 kPa was supplied from the filtrate side for 10 s; next, pressurized air was supplied from the lower end of the membrane housing for 60 s to exfoliate particles adhered to the membrane surface; finally, the solution (containing detached particles) inside the membrane housing was drained out. Inevitably, after numerous filtration cycles, physical cleaning became inefficient at removing fouling, at which point the fouling was termed "irremovable fouling". At this point, the initial TMP of the next filtration cycle, even immediately after backwashing, was approximately 25 kPa (~25% of the maximum TMP) due to the accumulation of non-removable impurities; to resolve this, cleaning with reagents was performed. During the filtration experiments with either of the two prepared feed waters, TMP trends and permeate flow rates were recorded. A summary of the filtration procedure is shown in Figure A4. The total filtration volume during MF-membrane filtration of feed water A (tannic acid only) was 2546 L/m², and of feed water B (tannic acid with iron oxide) was 3241 L/m².
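The cycle logic above can be summarized in a short conceptual sketch. The thresholds match the text (physical cleaning at ~100 kPa TMP, chemical cleaning when the post-backwash TMP stays near 25 kPa); the sensor and actuator functions are hypothetical placeholders, not part of the actual rig's control software.

```python
# Conceptual sketch of one dead-end filtration cycle with the cleaning logic
# described in the text; all callables are assumed/hypothetical.
TMP_MAX_KPA = 100          # trigger for physical cleaning
TMP_IRREMOVABLE_KPA = 25   # post-backwash TMP indicating irremovable fouling

def run_cycle(read_tmp, filter_step, physical_clean, chemical_clean):
    """Filter until TMP hits the limit, then clean physically; escalate to
    chemical cleaning if backwashing no longer restores the initial TMP."""
    while read_tmp() < TMP_MAX_KPA:
        filter_step()                      # permeate production
    physical_clean()                       # air scouring (10 s + 60 s) and drain
    if read_tmp() >= TMP_IRREMOVABLE_KPA:  # physical cleaning no longer effective
        chemical_clean()                   # NaOH or citric acid soak
```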
Chemical Cleaning Regime
Chemical cleaning is usually used to remove contaminants that are strongly adsorbed on membrane surfaces and cannot be removed by physical cleaning alone. The performance of chemical cleaning must be evaluated not only by its ability to restore membrane flux, but also by its ability to maintain product water quality, which depends on several factors, including the concentration of the cleaning agents, temperature, cleaning time, and hydrodynamic conditions [2,8]. In this study, sodium hydroxide (NaOH) and citric acid (CA) were used as membrane cleaning agents.
Amounts of 2.5 wt.% NaOH and 2.0 wt.% citric acid were used at 25 °C to clean the membranes fouled by tannic acid and by deposits of multivalent cations and metal oxides, respectively. The selected concentrations of NaOH and citric acid were in accordance with the concentrations usually used for chemical cleaning in real pilot- and large-scale wastewater treatment plants. The alkaline cleaning conditions allowed hydrolysis of organic compounds such as polysaccharides and proteins. The cleaning time was extended in order to accelerate the ageing of the membrane. Therefore, membrane lifetime was tested by cutting the membranes (UF and MF) into small segments, about 10 cm long, from which two or more fiber samples were taken and immersed in a beaker containing NaOH or citric acid for 30 min to 2 weeks.
Membrane Characterization
Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) spectroscopy, used to follow the changes in the organic and inorganic functional groups of the membrane, was performed at a scan resolution of 4 cm⁻¹ over wavenumbers from 6000 to 400 cm⁻¹ using a Thermo Scientific Nicolet 6700 FTIR spectrometer (Waltham, MA, USA). The contact angle was analyzed using an OCA 15 Plus (DataPhysics Instruments GmbH, Filderstadt, Germany) by pico-droplet measurement, dispensing a 0.5 µL droplet of water. To obtain representative values for the entire membrane sample of virgin and chemically cleaned membranes, at least three points on the membrane surface were randomly selected for the ATR-FTIR and contact angle measurements (Figure A5). If the resultant contact angle was less than 50°, the membrane was considered hydrophilic; any higher value was considered hydrophobic [14,15].
Tensile tests were performed to determine the tensile strength and elongation (maximum elongation without permanent deformation) of the MF and UF membranes according to the ASTM D882 standard. All membrane samples (12-13 cm long) were rinsed and immersed in distilled water for approximately 1 h to remove any detergent residue remaining on the membrane surface. The membrane fibers were dried overnight in a desiccator and stored in a stainless-steel box until analysis. Thermogravimetric/differential thermal analysis (TG and DTA) was performed by measuring the thermal stability of the membrane samples with a DTG-60AH analyzer (Shimadzu Corp., Kyoto, Japan) at a heating rate of 10 °C/min from room temperature to 800 °C under a nitrogen gas flow of 50 mL/min.
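For clarity, the following is a minimal sketch of how tensile strength and elongation at break can be computed from a load-extension record of the kind produced in an ASTM D882-style test. All numbers, including the fiber wall cross-sectional area and gauge length, are hypothetical and for illustration only.

```python
# Tensile strength = peak load / cross-sectional area; elongation at break =
# extension at failure / gauge length. Data points are invented.
import numpy as np

force_N = np.array([0.0, 0.8, 1.5, 2.1, 2.4, 2.2])   # recorded load (N)
extension_mm = np.array([0., 2., 4., 6., 8., 9.])     # crosshead extension (mm)
gauge_length_mm = 100.0                               # assumed gauge length
cross_section_mm2 = 0.35                              # assumed fiber wall area

tensile_strength_MPa = force_N.max() / cross_section_mm2       # N/mm² = MPa
elongation_at_break_pct = 100 * extension_mm[-1] / gauge_length_mm
print(f"tensile strength = {tensile_strength_MPa:.2f} MPa, "
      f"elongation = {elongation_at_break_pct:.1f}%")
```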
Lifetime Prediction
The Weibull distribution was used as the lifetime model because it is robust, can be applied to many types of lifetime data, and has only two model parameters (i.e., β: shape parameter; and η: scale parameter) [16]. The probability density function (PDF) and the cumulative distribution function (CDF) are important statistical functions used to describe a lifetime distribution. The two-parameter Weibull PDF is defined as follows [17]:

f(t) = (β/η)(t/η)^(β−1) exp[−(t/η)^β]    (1)

where β is the shape parameter and η is the scale parameter or characteristic life. The Weibull reliability function R(t) is described as follows:

R(t) = exp[−(t/η)^β]    (2)

The Weibull failure rate function, or hazard function λ(t), can be described as follows:

λ(t) = f(t)/R(t) = (β/η)(t/η)^(β−1)    (3)

Accordingly:
• populations with β < 1 exhibit a failure rate that decreases with time,
• populations with β = 1 have a constant failure rate, and
• populations with β > 1 have a failure rate that increases with time.
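The following short sketch implements the three Weibull functions reconstructed above, with illustrative (not fitted) values for β and η; it demonstrates the rising failure rate for β > 1.

```python
# Two-parameter Weibull PDF, reliability, and hazard functions; beta (shape)
# and eta (scale) values below are illustrative only.
import numpy as np

def weibull_pdf(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1) * np.exp(-(t / eta) ** beta)

def weibull_reliability(t, beta, eta):
    return np.exp(-(t / eta) ** beta)

def weibull_hazard(t, beta, eta):
    # lambda(t) = f(t) / R(t); increases with t when beta > 1
    return (beta / eta) * (t / eta) ** (beta - 1)

t = np.linspace(0.1, 10, 5)
print(weibull_hazard(t, beta=1.5, eta=5.0))  # monotonically rising failure rate
```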
The procedure for predicting the lifetime of the hollow-fiber membranes (UF and MF) after chemical cleaning at different soaking times is described in Figure A6. Briefly, the measured physical properties of the membranes, i.e., tensile strength and soaking time, were the input parameters for Equation (3), which was used to simulate the lifetime. According to Arkhangelsky et al. [18], tensile strength decreased by 5% to 30% after ageing, while Le-Clech [19] reported that the breaking stress of hollow fibers decreased by more than 0.004%, a significant change between virgin and aged membranes. The criteria for assessing whether the membrane had aged combined tensile strength, effluent quality, and FTIR results. Therefore, to predict the lifetime of the membrane, ageing was assumed to have occurred when the tensile strength had decreased by 10% relative to the unaged membrane. In addition, the ATR-FTIR results help to confirm the ageing of the polymeric membranes, as seen from shifts in the characteristic peaks of PVDF and its additives.
Membrane Filtration Performance
The MF and UF membranes were used to filter two different types of feed water containing TA and iron oxide, which are important organic and inorganic constituents causing membrane fouling, to simulate low-pressure filtration treatment of processed water from the cashew nut processing industry. Membrane fouling can be observed as an increase in TMP or a decrease in permeate flux and permeate water quality. Our previous study indicates that ultrafiltration of an aqueous suspension of iron oxide particles might suffer from pore blocking and cake fouling [20]. In addition, iron oxide particles can adsorb organic matter via ligand exchange and electrostatic interaction [21].
The performance of the MF membrane, evaluated from its flux decline behavior, shows that tannic acid with iron oxide (TA + Fe2O3) and tannic acid (TA) alone fouled the membrane significantly. J/J0 decreased to about 0.3 at a filtered volume of 200 L/m² of the TA-containing water (Figure A7), which corresponds to a COD loading of 0.8 kg-COD/m². It is worth noting that the iron oxide particles were completely retained, as their particle size was larger than the pore diameter of the MF membrane (~160 to 700 times larger). In the presence of iron oxide, the flux of the membrane was restored to some extent after backwashing; however, when tannic acid was filtered alone, no recovery of flux was observed. The removal efficiency of total organic carbon (TOC) was about 5% of the original TOC in the feed water for feed water A and about 6% for feed water B (Table A2), indicating that a small amount of retained organic molecules causes severe fouling of the MF membrane. The small difference between the two feed waters indicates that the presence of iron oxide particles did not affect the removal of organics.
Previous studies show that fouling depends on hydrodynamic conditions and foulant properties [2,20,22]. Cake filtration was distinguished from other fouling mechanisms by plotting t/V versus V (see Figure 2), where a linear relationship identifies cake filtration as the dominant fouling mechanism [22,23]. TA filtration with the MF membrane shows a curve from the initial phase up to a filtrate volume of about 50 L, indicating pore-clogging processes by low-molecular-weight TA. In the presence of iron oxide (TA + Fe2O3), t/V was linearly correlated with V from a filtrate volume of about 35 L, indicating that the fouling mode shifted to cake filtration. Thereafter, a decrease in t/V with increasing V was observed, probably caused by air scouring during filtration, which peeled off the loose cake of iron oxide.
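As a minimal sketch of this diagnostic, the snippet below computes t/V from a time-volume record and checks how linear it is in V; a correlation close to 1 would suggest cake filtration dominates. The time and volume values are hypothetical, not data from this study.

```python
# Fouling-mechanism diagnostic: cake filtration is indicated when t/V is linear
# in V. Data values below are invented for illustration.
import numpy as np

t = np.array([2.0, 5.0, 9.0, 14.0, 20.0])   # filtration time (min), hypothetical
V = np.array([10., 20., 30., 40., 50.])      # cumulative filtrate volume (L)

tV = t / V
slope, intercept = np.polyfit(V, tV, 1)      # linear fit of t/V against V
r = np.corrcoef(V, tV)[0, 1]
print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, r = {r:.3f}")
# r near 1 suggests cake filtration; curvature at low V would instead point to
# pore blocking, as observed for TA-only MF filtration.
```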
ATR-FTIR spectroscopy was used to study the chemical properties of the f membranes, as shown in Figure 3. The characteristic peaks of the virgin UF memb at wavenumbers of 1180 cm −1 [25], 1234 cm −1 [26], and 1275 cm −1 [27], which represe -CF out-of-plane stretching [28], disappeared from the spectra of the TA-filtered and Fe2O3-filtered UF membranes. This phenomenon was not observed for MF memb suggesting that organic impurities cover the surface of the UF membrane more tha of the MF membrane. This was also confirmed by the appearance of a peak at 1000 cm −1 for only the UF membranes. In addition, the carbonyl group (i.e., C=O), which be derived from a hydrophilic additive of PVP, was observed on both virgin and f MF membranes. The carbonyl group, which is derived from PEG, and the meth group (i.e., -CH2) on virgin UF membrane disappeared from the spectra of the foul membrane (Figure 3), suggesting surface coverage by fouling materials. In contrast to the MF membrane, the UF membrane exhibited a more rapid and steeper flux decrease in the presence of iron oxide particles (feed water B) compared to the filtration of tannic acid alone ( Figure A7). This was also confirmed by the steep increase in the plot of t/V versus V for the UF membrane in Figure 2b. Previous studies reported that coating membrane surfaces with iron oxide particles led to a reduction in fouling [24]. This discrepancy in membrane fouling by iron oxide could be due to the different relative sizes of the iron oxide particles in their study.
ATR-FTIR spectroscopy was used to study the chemical properties of the fouled membranes, as shown in Figure 3. The characteristic peaks of the virgin UF membranes at wavenumbers of 1180 cm⁻¹ [25], 1234 cm⁻¹ [26], and 1275 cm⁻¹ [27], which represent the -CF out-of-plane stretching [28], disappeared from the spectra of the TA-filtered and TA-Fe2O3-filtered UF membranes. This phenomenon was not observed for the MF membranes, suggesting that organic impurities cover the surface of the UF membrane more than that of the MF membrane. This was also confirmed by the appearance of a peak at 1000-1100 cm⁻¹ for only the UF membranes. In addition, the carbonyl group (i.e., C=O), which may be derived from the hydrophilic additive PVP, was observed on both the virgin and fouled MF membranes. The carbonyl group derived from PEG and the methylene group (i.e., -CH2) on the virgin UF membrane disappeared from the spectra of the fouled UF membrane (Figure 3), suggesting surface coverage by fouling materials.
Overall, both membranes were able to adequately retain iron oxide particles, but these particles did not have a strong effect on the chemical properties of the membranes. Tannic acid, on the other hand, was an important influencing factor that could change the chemical properties of the membrane by adsorption on the membrane surface, although only a small amount was removed. This aligns with our previous study, which found that tannic acid coating on PVDF membranes reduced the membrane's water contact angle by 17-27%, depending on the acid's molar concentration [15].
NaOH and Citric Acid Cleaning
Cleaning the membranes with 2.5 wt.% NaOH affected the physical and chemical properties of the PVDF hollow-fiber membranes. The physicochemical properties considered in the cleaning and ageing studies included the contact angle and the chemical functional groups of the membrane surface [13,29], whereas the physical property considered was the commonly used parameter of tensile strength. Changes to the membranes' tensile strength with increasing cumulative cleaning time are shown in Figure 4. The tensile strength of the UF membrane clearly decreased with longer durations of cleaning with NaOH, indicating that the mechanical strength of the membranes deteriorated with increasing cleaning time. The hydrophobicity of the membrane surface after intensive cleaning at different exposure times was evaluated by contact angle measurements. The fouled MF and UF membranes, after filtration of TA solution and cleaning with NaOH, showed a gradual decrease in contact angle up to a soaking time of 5 days (Figure A7). Subsequently, the contact angle increased significantly after a soaking time of 2 weeks, indicating that prolonged exposure to NaOH changed the chemical structure of the membrane. Caustic cleaning agents have been reported to lead to dehydrofluorination, in which C=C bonds are formed when H-F units are eliminated from the polymer [8,19].
FTIR analysis characterized the fouling materials deposited on the membrane surface after filtration (Figure 3). The peak at 1700 cm⁻¹ belongs to the carbonyl group (C=O), associated with the PVP additive commonly added to increase the hydrophilicity of the membrane [25]. This characteristic peak of the carbonyl group (C=O) disappeared from the IR spectra of the membrane after cleaning with NaOH for 2 weeks. There were also peaks representing CH2-OH deformation and C=C bonding on the fouled membrane after filtration of TA solution. After soaking in NaOH for 30 min to 5 days, the CH2-OH and C=C peaks gradually decreased, and they disappeared after a soaking period of 2 weeks, indicating that cleaning with NaOH must be sufficiently long to achieve complete removal of organic contaminants.
Citric acid is normally used to remove inorganic particles (e.g., iron oxide) and debris from the membrane surface. Analysis of the contact angle shows that the MF and UF membranes tend to have lower contact angles after cleaning with citric acid (Figure A8) than the virgin membranes. This indicates a decrease in hydrophobicity and shows an opposite trend to that observed when the membranes were cleaned with NaOH.
Polymer Hydrolysis and Ageing
To combat the hydraulically irremovable fouling that occurs after long-term filtration, NaOH and citric acid were used to clean the fouled membranes. A previous study indicated that the support material of commercial UF membranes made of polyethylene terephthalate broke down into its monomers under strong alkaline conditions through the hydrolysis reaction [30]. The results of FTIR and contact angle measurements (Figures 5 and A8) indicated that the membrane cleaned with NaOH became increasingly hydrophobic after extensive cleaning. PVP is added to PVDF to improve the hydrophilic property of the MF membrane, and the extensive cleaning with NaOH probably caused a loss of this hydrophilic additive, since it has previously been found that soaking a fouled membrane in NaOH for a long period of time can result in a crosslinking reaction on the membrane surface, causing the surface to become hydrophobic [31]. Similarly, hydrolysis of the membrane may have occurred, as evident from the loss of the characteristic C=O absorbance band in the 2-week-soaked membrane (see Figure 5). This is consistent with the study of Hashim et al. [32], which found that soaking virgin PVDF hollow-fiber membranes in NaOH solutions initiated a reaction between NaOH and PVDF even at low NaOH concentrations, and that the reaction was aggravated at the 24-hour treatment time. Other studies have concluded, based on FTIR results, that additives in PVDF membranes degraded during hypochlorite soaking [19,29]. Returning to the results of our study, the hollow-fiber membrane's tensile strength changed only slightly after hydrolysis, while elongation decreased by 23%, as shown in Table 2. Similar results have been reported in a previous study, in which commercial PVDF hollow-fiber membranes showed changes to their elongation of between 84% and 183%, with moderate elongation reduction occurring when the membranes were treated with 1 and 4 wt.% NaOH solutions [32]. In the same study, the authors also reported a loss of mechanical integrity in the membrane after treatment with 10 wt.% NaOH. Moreover, Young's modulus is a measure of the stiffness of a material, or its resistance to elastic loading. The modulus of elasticity of PVDF is generally 145,000-333,500 psi [33]. It is often assumed that the mechanical properties of a membrane are not important because it is held in place by a support material. This is not the case for hollow-fiber membranes, which are self-supporting, so their mechanical strength becomes very important [8,19]. For example, fibers with a high modulus of elasticity can easily withstand higher operating pressures.
Lifetime Estimation
Membrane ageing is complex [8,13]. A previous study suggested that the effects of chemical cleaning are not limited to changes in membrane characteristics and are only weakly linked to the fouling rate [34]. In this study, accelerated membrane ageing was induced by using high concentrations of tannic acid and iron oxide particles in the feed water; these conditions were kept constant while the membrane soaking time was varied in order to observe temporal effects. In accelerated ageing tests, the fibers need a long soaking time to fail. The fouled membranes were soaked in 2.5 wt% NaOH and 2 wt% citric acid for cumulative soaking times ranging from 30 min to 2 weeks, using seven samples. NaOH was chosen for predicting membrane ageing because the FTIR and contact angle results showed that NaOH, a strong base, hydrolyzes the membrane and changes its structure after prolonged soaking. The membrane fouled by filtration of tannic acid solution was chosen for the ageing tests, and the NaOH solution was used to remove organic contaminants from the fouled membrane.
The two most important ageing indicators were the chemical properties of the membranes (analyzed by FTIR and contact angle) and their physical properties (i.e., tensile strength and elongation). The tensile strength data were used to analyze the ageing of the membranes using the Weibull model; the data were fitted to an exponential model, y(t) = y₀ + a·e^(−bt), which is also a special case of the Weibull distribution. To predict the time of membrane ageing, two scenarios were defined: a minimum tensile strength decrease of 10% and a maximum decrease of 30% relative to the virgin membrane were taken as the criteria for when the membrane had deteriorated and ageing had occurred [19]. The membrane ageing results based on the exponential equation are shown in Table 3. According to Judd [35], membranes should typically be cleaned twice per month (every 15 days) with chemical reagents, with each cleaning taking 2 h (2 h of downtime). Figure 6 shows that the tensile strength data were well fitted by the exponential model, with acceptable R-squared and standard error values. The results of the exponential model show that the UF membrane may reach its ageing limit 1.8 years after filtration with tannic acid solution and cleaning with NaOH when ageing is defined as a 10% decrease in tensile strength relative to the original membrane, and seven years when a decrease of up to 30% is allowed (Table 3). The ageing of the MF membrane was estimated at about 5.1 years for a 10% decrease in tensile strength, and would be longer than 5.1 years if a tensile drop of more than 10% is allowed (i.e., 30% of the initial tensile strength, Table 3), based on the equation indicated in Figure 6. This estimation is consistent with a recent finding in full-scale water treatment plants, where the mechanical properties of PVDF deteriorated and filtration performance dropped after 5 years of operation [13]. Table 3. Membrane ageing predicted by the data obtained from cleaning with NaOH solution after filtration with TA solution, using the exponential models presented in Figure 6.
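As a rough illustration of this lifetime-estimation procedure, the sketch below fits the exponential model to tensile-strength measurements and then inverts it to find the 10% and 30% loss times. The data arrays are invented placeholders, not the measured values of this study; only the model form and the threshold definitions come from the text above.

```python
# Fit y(t) = y0 + a*exp(-b*t) to tensile-strength data and solve for the
# soaking time at which strength drops 10% or 30% below the virgin value.
import numpy as np
from scipy.optimize import curve_fit

def strength(t, y0, a, b):
    return y0 + a * np.exp(-b * t)

t_days = np.array([0, 1, 7, 14, 30, 60, 90], dtype=float)  # cumulative soaking time
y_mpa = np.array([6.0, 5.8, 5.3, 4.9, 4.3, 3.8, 3.6])      # placeholder strengths

(y0, a, b), _ = curve_fit(strength, t_days, y_mpa, p0=(3.5, 2.5, 0.05))

def time_to_loss(frac, virgin=y_mpa[0]):
    """Invert the fit: time at which strength = (1 - frac) * virgin.
    Returns nan if the target lies below the fitted asymptote y0."""
    target = (1.0 - frac) * virgin
    return -np.log((target - y0) / a) / b

for frac in (0.10, 0.30):
    print(f"{frac:.0%} strength loss reached after ~{time_to_loss(frac):.0f} days")
```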
Conclusions
The following conclusions can be drawn from this study: • Both UF and MF membranes completely retained suspended solids but retained only small amounts of tannic acid. The UF membrane retained more TOC than the MF membrane. Tannic acid molecules adsorb onto the membrane surface, which changes the surface characteristics, especially the surface functional groups responsible for enhancing the membrane's hydrophilicity. • NaOH could remove tannic acid, and citric acid could remove inorganic matter (Fe₂O₃) that fouled the membranes. However, NaOH was confirmed to have a stronger effect on the tensile strength and surface chemistry of the fouled MF and UF membranes than citric acid. The results suggest a relationship between the fouling effects and the changes in surface functional groups associated with chemical cleaning.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A Figure A1. FTIR spectra of virgin MF and UF membranes. | 2022-03-03T16:26:24.075Z | 2022-02-28T00:00:00.000 | {
"year": 2022,
"sha1": "421bfa704913757892ce84bc4405735a36b2aa54",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0375/12/3/280/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "506dcb2248d60a37c44f359e416cae9f0eb7fa91",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247346003 | pes2o/s2orc | v3-fos-license | CONTINUING CLASSROOM ACTION RESEARCH TOWARD SUSTAINABLE HIGH SCHOOL TEACHERS IN INDONESIA: A COMMUNITY SERVICE PROGRAM
[classroom action research, senior high school teachers, TPD, workshop] Abstract: This paper reports on a workshop series delivered as community service in a senior high school in Indonesia. The workshop series focused on developing the school teachers' understanding of classroom action research in order to promote their instructional professional development. For data gathering, conferences were the main technique for exploring and discovering their viewpoints on the workshops. The results show that, despite barriers to being directly observed in classroom practice, the school teachers' awareness of professional teaching improved. They became more motivated to attend the conferences and were enthusiastically involved in the discussions. Limitations encountered during the workshops and services are also reported.
Introduction
School teachers are required to be able to conduct and write up classroom action research, as mandated in the Undang-undang Guru dan Dosen No. 14 tahun 2005 (Law No. 14 of 2005 on Teachers and Lecturers) in Indonesia. Having passed the professional certification program (Kusumawardhani, 2017), they have an obligation to keep improving and developing their expertise toward sustainable teacher professional development (Widodo & Allamnakhrah, 2020).
Teacher professional development provides better pedagogic skills and awareness. Professional teachers become reflective school educators (Saito, Hawe, Hadiprawiroc, & Empedhe, 2008) who keep improving and maintain a sense of service in the face of 21st-century instructional demands. Research reveals challenges faced in this era, such as technological barriers and limited access (Falkner, Vivian, & Williams, 2018; Misdi, Hartini, Kusriandi, & Tambunan, 2020).
Efforts have been made, but school teachers face barriers, e.g., rush hours in their dual roles as administrators and subject teachers, overloaded teaching hours, and district regulations. As a result, little time is accommodated for teacher professional development (TPD). This community service was therefore conducted in the teachers' spare time, as the school teachers managed their own slots of work time.
The school
The school site is a state senior high school located in West Java, Indonesia. The school is well known in the town for its academic achievement, but it is crowded, and no sufficient playgrounds are available. However, the school has easy access to public sport centers for physical exercise.
Since the school, as a public school, has large classes, many teachers and administrative staff are employed as civil servants. Most teachers and staff have attended local professional trainings officially administered by the district educational agency. However, the impacts of these trainings are still poorly reported, especially with respect to the teachers' capabilities in conducting classroom action research.
Technique of the community services
The community service program was conducted through after-school projects. The workshop on classroom action research, which gave the community service program its name, was mostly held after school at around 2 p.m. Evaluation of the program was conducted in the form of onsite conferences: face-to-face interviews and note-taking.
Participant recruitment
All participants were recruited voluntarily. There were six school teachers, whose subjects ranged from social to science disciplines. All participants were female teachers aged 35-45 years, and all had already been certified as professional school teachers as mandated in the Undang-undang Guru dan Dosen No. 14 tahun 2005.
Procedure
The initial workshop began after a letter of assignment was issued by the faculty and approved by the school principal. The workshop project was valid for two semesters, from September 2019 to August 2020. Before the program ran, the participants and allotted times were negotiated, and rapport and ethics clearance were established. Finally, six voluntary participants joined the workshop.
The workshop
The workshop included an overview of classroom action research (CAR), the procedure for conducting CAR, and reporting CAR. The materials were presented and discussed in several meetings, most of which were conducted on Fridays after school. In some cases, meetings were skipped due to immediate school meetings and projects such as try-outs, tests, and student fair weeks. The following images show the workshop atmosphere.
Source: Author's document
Enthusiasm for more professional, reflective teaching through CAR
The participants enthusiastically joined and participated throughout the program. Even though interruptions frequently occurred during the period, they managed the workshop well, which indicates that the workshop project was successful. Self-driven participants tend to show the ability to motivate and coach other members of the school (Zwart, Korthagen, & Attema-Noordewier, 2015). This finding indicates a sense of self-drive toward continuous learning and development (Owen, Palekahelu, Sumakul, Sekiyono, & White, 2018) among the school members.
Issues arising in the workshops
The current series of workshops on CAR under the community service scheme highlighted several initial issues, which are presented in Table 2. The problems are clearly typical of public schools' difficulties in improving their teachers' professionalism, e.g., in conducting CAR. Overloaded teaching credits (24 hours) leave the school teachers little opportunity for reflective teaching, while administrative work and other barriers make it worse. However, the findings show that the success of developing TPD depends largely on the school teachers themselves. The findings revealed that the participants were able to free up slots of time to join the workshop series after school, an indication of empowered classroom teachers (Imants & Van der Wal, 2020; Soine & Lumpe, 2014), e.g., the ability to establish active learning. The participants' perceived values and voices were as follows: (1) Self-evaluated: "For us, it is not a matter of having a publication by an instant process. For us, process-based learning will be more fruitful than instant results without any impact." (2) Skill-oriented: "Instead of direct results, being thoroughly involved and engaged in the workshop, which gradually has an impact on our TPD, is the main objective of joining the workshop." Participants often expressed their willingness and motivation to learn how to conduct classroom-based research (Irnidayanti, Maulana, Helms-Lorenz, & Fadhilah, 2020). According to them, taking improper steps would lead to meaningless impact on their sense of being professional. Self-evaluated teachers tend to be aware of student-centered learning (Misdi, Hartini, Farijanti, & Wirabhakti, 2013), which distinguishes them from those who are not (Soine & Lumpe, 2014).
As participants in the workshop on CAR, the teachers perceived the impact of the program to be more important than instant results, e.g., finishing with a CAR article. As empowered participants whose academic writing and practice are affected, they come to know what to do and how to do it. This critical-thinking perspective (Misdi, Hartini, & Farijanti, 2014) is fundamental to being professional school teacher-scholars who act as agents of change in the school context (Cheng & Li, 2020).
In summary, there are several findings related to the "Workshop" community service program conducted in the current school. Positive impacts were clearly observed during the workshops and conferences. At the same time, emerging issues show that there are challenging situations and atmospheres at the school (Harjanto, Lie, Wihardini, Pryor, & Wilson, 2018; McChesney & Aldridge, 2018; van Griethuijsen, Kunst, van Woerkom, Wesselink, & Poell, 2019; Widodo & Allamnakhrah, 2020). This is a normal situation, as the school is large and heterogeneously populated. Overall, the workshop succeeded in fostering the school teachers' sense of empowerment (Misdi, 2017), and this should be a starting point for the school to improve (Zwart et al., 2015) toward community-based professionalism development (Harjanto et al., 2018).
Conclusion
A series of workshops on classroom action research and writing the report was conducted successfully and addressed its objectives. The school teachers successfully improved their perspective on and comprehension of CAR. It is concluded that the workshops were perceived positively and fostered the teachers' sense of self-efficacy as professional, reflective educators. | 2022-03-09T18:57:01.366Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "22edaac061541b08f4c44d90a78d737c87fa3a0b",
"oa_license": "CCBY",
"oa_url": "https://dmi-journals.org/jai/article/download/167/151",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1fe6b05de7238b7b302ad4856a729f9d0dc48813",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
51728634 | pes2o/s2orc | v3-fos-license | Relapsing Polychondritis following Treatment with Secukinumab for Ankylosing Spondylitis: Case Report and Review of the Literature
Relapsing polychondritis (RP) is an autoimmune disorder that often occurs concomitantly with other autoimmune diseases, though RP has been infrequently associated with ankylosing spondylitis (AS). There is a small but growing body of literature of case reports describing RP arising in AS patients treated with tumor necrosis factor alpha inhibitors (TNFi's). We present the first case in which RP developed in AS during treatment with an interleukin 17A inhibitor (IL-17Ai), secukinumab. With this case report, we hope to raise physician awareness of the possible autoimmune disorders that may arise subsequent to novel immunomodulation therapies, particularly that RP may develop subsequent to inhibition of IL-17A.
Introduction
Relapsing polychondritis (RP) is classified as a rare disease by the National Organization for Rare Disorders, with an incidence of 3.5 per 1,000,000, and there is a paucity of research elucidating a clear etiology [1]. RP is an autoimmune disease that targets the cartilaginous framework of multiple organ systems but demonstrates a predilection for the ears, nose, and larynx [2]. RP is documented in patients with other autoimmune diseases as frequently as 30-37% of the time [3]. The association between RP and AS has been reported in only a few cases over many decades of study. However, recent work estimates that as many as 12% of patients with AS have concomitant RP, with a lifetime comorbidity risk of 67% for developing RP [4]. Historical treatment of AS focused on nonsteroidal anti-inflammatory drugs (NSAIDs), with more recent therapeutic regimens including TNFi's. Recently, a human monoclonal antibody targeting IL-17A, secukinumab, has demonstrated efficacy in AS patients who failed a trial of TNF-alpha inhibitors, as well as in those who have never utilized any other biologic drugs [5]. In the major clinical trials examining secukinumab, side effects were found to be similar to those in the placebo group, with nasopharyngitis and inflammatory bowel disease (IBD) being the most common adverse events [5]. Given the recent availability of secukinumab, no research has yet associated the IL-17A inhibitors with antibody formation and newly developed autoimmune disease. The current case is the first report of RP developing secondary to the use of an IL-17Ai for treatment of AS.
Case Report
Our patient, M.J., is a 56-year-old male who has had inflammatory back pain since his twenties but was diagnosed with AS at age 53 while hospitalized for small bowel obstruction. He was found to have sacroiliitis, enthesitis, inflammatory arthritis, positive HLA-B27, and elevated C-reactive protein (CRP) at 2.1 mg/dl (normal < 0.6 mg/dl).
At the time of diagnosis, M.J. was started on adalimumab 40 mg subcutaneously once every 14 days and celecoxib as needed. Despite an initial positive symptomatic response, his axial manifestations persisted, and he developed peripheral inflammatory arthritis in ankles, feet, wrists, and metacarpophalangeal (MCP) joints. At 18 months after initiation of adalimumab, the patient developed leukopenia and neutropenia, associated with mild infections such as cellulitis and gastroenteritis. Adalimumab was held for 6 months, and etanercept was initiated due to AS flares. After 3 months of symptomatic relief and adequate disease control, he developed leukopenia and etanercept was subsequently discontinued. At the time the leukopenia occurred, the patient did not have clinical manifestations of drug-induced SLE (rash, arthritis, hypocomplementemia, or proteinuria/hematuria); he was found to have +ANA (1 : 160, homogeneous pattern) and negative double-stranded DNA. A thorough hematological workup ruled out any other causes of leukopenia, and a decision was made to avoid TNFi's and to start the patient on secukinumab (complete clinical course is shown in Figure 1).
Secukinumab was started with an initial loading dose of 150 mg subcutaneously weekly for five weeks, followed by monthly doses. Following the last loading dose, the patient had an episode of gastroenteritis, which was treated with 7 days of ciprofloxacin, and then developed swelling, erythema, and throbbing pain of his bilateral ears and the tip of the nose. He started a 17-day course of intravenous daptomycin and ertapenem, as his symptoms were thought to be secondary to a neutropenic infection. However, his symptoms did not abate. The patient developed periorbital edema and uveitis, which resolved with topical steroids. Additionally, he was started on 60 mg of daily prednisone by his primary care provider, tapered to 20 mg daily within a week, with resolution of his swelling and pain.
Upon physical examination in our rheumatology clinic (four days after the steroids were stopped), there was diffuse nasal erythema with mild tenderness to palpation. Bilateral auricular chondritis was present, with moderate hyperemia of the right ear (Figure 2). Some cartilaginous collapse was noted as well. Additionally, the patient had mild anterior uveitis on the lateral aspect. At this time, the patient had a positive Schober's test of 13.5 centimeters, and no synovitis of the appendicular joints was noted. On the basis of the bilateral auricular chondritis, nasal chondritis, and recent ocular inflammation, RP was diagnosed from the clinical presentation and history. The patient was started on oral prednisone 20 mg for seven days, with a reduction of 5 mg per week as tolerated, along with 20 mg of methotrexate once a week with a folic acid supplement of 1 mg. Following initiation of prednisone, there was resolution of the clinical manifestations of RP (auricular and nasal hyperemia and chondritis, as well as uveitis), along with improvement in the inflammatory markers.
Discussion
The Assessment of Spondyloarthritis treatment guidelines include a trial of NSAIDs for four weeks, followed by TNFi's in the event that NSAIDs do not provide clinical improvement [6]. However, numerous patients do not achieve remission with initial TNFi therapy [7].
This poses a problem in treatment, as a trial of a different TNFi is suggested following the failure of a first TNFi. Recent clinical trials examining
new IL-17Ai's have demonstrated their utility in achieving clinically significant improvement in patients with AS, and they were approved by the FDA in January 2016 [8,9]. IL-17Ai has good tolerability, with a side-effect distribution similar to that of placebo. Notable side effects found in the MEASURE 1 and MEASURE 2 trials were nasopharyngitis, headache, viral infection, dyslipidemia, nausea, influenza, and mouth ulcers [5]. RP is considered an autoimmune disease involving a reaction to endogenous type II collagen [10]. Chondrocytes are targeted by antibodies and become necrotic before being replaced by fibrotic cell lineages [11]. The current paradigm for the pathogenesis of RP involves cytokine-mediated immunologic activity via IL-17A and TNF-alpha leading to the production of matrix-degrading proteinases by chondrocytes [12]. Additionally, antibodies to type II collagen and CD4+ cells have been implicated in the disease pathogenesis, though the exact relationship is not clearly known [13].
These underlying biochemical processes lead to the clinical manifestations of auricular chondritis, nasal chondritis, laryngeal chondritis, nondeforming or erosive arthritis, and various ocular manifestations, including uveitis.
Various treatment modalities have been found to be effective in treating RP. Patients with mild inflammation can be treated with NSAIDs and low-dose prednisone. Dapsone or higher prednisone doses can be utilized in patients with more severe symptoms. In patients for whom an effective dose of steroids is not an option, methotrexate or azathioprine may be used to reduce the necessary burden of steroid therapy [2].
RP has been sparingly diagnosed in the context of TNFi therapy for patients with AS. In 2014, Azevedo et al. described a case of etanercept-induced RP in the treatment of an adult man with AS. The patient was HLA-B27 positive and was diagnosed on clinical suspicion two months after initiating etanercept therapy. This patient was taken off the TNF-alpha inhibitor and, with the addition of corticosteroids, saw improvement in his RP within five months [14]. Two similar cases were described by Hernández et al. in 2011, in which two HLA-B27-positive patients were diagnosed with newly developed RP as a consequence of TNF-alpha inhibitor therapy [15]. These cases similarly presented with RP after approximately two months on TNFi. Steroids were started in each patient in combination with TNFi cessation, with resolution of symptoms after five to six months. TNFi's were successfully restarted without documented recurrence of RP [14,15].
To our knowledge, this is the first described case of RP induced by IL-17Ai therapy. Our patient never presented with symptoms of RP prior to initiation of the IL-17Ai.
Though a different agent was involved, the clinical course is similar to cases in the literature documenting RP following TNF-alpha inhibitors. The autoantibody development seen in patients treated with TNF-alpha inhibitors may follow a pathogenesis similar to that of the clinical case documented in the current report. TNF-alpha and IL-17A are proinflammatory cytokines involved in the same pathway [16]. This pathway is a particularly exciting target for AS treatment. When compared to healthy controls, patients diagnosed with AS exhibit significantly higher levels of serum IL-17A [17].
Designing therapies to inhibit IL-17A may be of even more benefit in HLA-B27-positive patients. Misfolded HLA-B27 has been proposed as an important factor in the upregulation of Th17 cytokines, including IL-17A [18]. The expression of IL-17A in a pathogenic context arises predominantly from a subset of cells, Th17 cells. These cells also express TNF [19]. IL-17A and TNF have been suggested to act as synergistic inflammatory factors [20]. The dual effect of these cytokines can cause damage, particularly to cartilage and bone [21]. These two cytokines augment inflammation via an increase in endothelial selectins for neutrophil chemotaxis as well as the expression of neutrophil chemokines [22]. The balance of these cytokines is altered by biologic drugs. Patients who respond to TNFi show decreased endogenous levels of IL-17A and TNF-alpha. However, in nonresponders to TNFi, there is a paradoxical elevation of Th17 and IL-17 [23]. The novel therapeutic targeting of IL-17A is attractive due to the relationship between AS and poorly modulated IL-17A production, and the ability of IL-17 inhibitors to diminish the overexpression of IL-17 [8]. Additionally, the effectiveness studied in the MEASURE 1 and 2 trials demonstrates the powerful utility of this class of drugs in achieving clinical remission [5]. The development of RP and the paradoxical increase in cartilage inflammation secondary to IL-17A inhibitors are likely the result of a disturbance in the equilibrium of these cytokines. Blocking IL-17A allows for the potential increase of other inflammatory cytokines, such as IL-17F, which can act at the same receptor, and TNF-alpha [24].
There is still an incomplete understanding of the causal relationship of TNFi's and IL-17A inhibitors with RP development, necessitating further study to elucidate the relationship.
Conclusion
This is the first reported case of RP following treatment with an IL-17A inhibitor, adding to a growing body of evidence emphasizing new onset of autoimmune diseases in a subset of patients. Prior cases have been found subsequent to TNFi use and have increased clinician awareness of the potential development of RP in patients with AS. The clinical diagnosis of RP in the current case is supported by clinical evidence of polychondritis and elevated inflammatory markers, as well as by symptom resolution following the discontinuation of secukinumab and initiation of prednisone therapy. Given the novelty of IL-17Ai and the restricted treatment options for AS, it is important that physicians be wary of the potential development of RP following IL-17Ai use in patients with AS.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2018-08-14T10:54:24.980Z | 2018-07-02T00:00:00.000 | {
"year": 2018,
"sha1": "00584ab12ffc7838afb802cbc8eac107bc165435",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/crirh/2018/6760806.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aea225ea441cf503eb6b1566064d427ab8bc2f9b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
90979907 | pes2o/s2orc | v3-fos-license | Effect of Storage Period on Physio-Chemical Properties of Guava Fruit Leather
Guava (Psidium guajava L.) is a quite hardy, prolific bearer with a sweet aroma and a pleasant sour-sweet taste. It is a dicotyledonous member of the large Myrtaceae (Myrtle) family and is believed to have originated in Central America and the southern part of Mexico (Somogyi et al., 1996). It is a small tree or shrub of 2 to 8 m in height with wide-spreading branches (Singh, 1988). It is claimed to be the fourth most important cultivated fruit in area and production after mango, banana, and citrus. India leads the world in guava production (Singhal, 1996). The crop in India occupies an area of 2.20 lakh ha, with an annual production of 25.72 lakh MT and a productivity of 11.70 MT/ha (2010).
Major guava-producing states are Uttar Pradesh, Maharashtra, Bihar, Andhra Pradesh, Gujarat, Madhya Pradesh, and Karnataka. In Maharashtra, guava is an important commercial horticultural crop; the state stands second in production, with an area of 33,469 ha, a production of 2.58 lakh MT, and a productivity of 7.80 MT/ha (Bijay Kumar, 2011). The quality and nutritional value of guava fruits are influenced by the physical and biochemical changes that occur during maturation through photosynthesis and accumulation. Fully mature guava fruits have a very strong flavour and are therefore unsuitable for table purposes. The fruit has about 83% moisture and is an excellent source of ascorbic acid (100-260 mg/100 g pulp) and pectin (0.5-1.8%) (Verma and Shrivastava, 1965), but has low energy (66 Cal/100 g) and protein content (1%) (Bose et al., 1999). The fruit is rich in minerals such as phosphorus (23-37 mg/100 g), calcium (14-30 mg/100 g), and iron (0.6-1.4 mg/100 g), as well as vitamins such as niacin, pantothenic acid, thiamine, riboflavin, and vitamin A (Bose et al., 1999). The whole fruit is edible along with the skin; it is considered one of the most delicious and luxurious fruits and is often marketed as a "super fruit", with considerable nutritional importance in terms of vitamins A and C, seeds rich in omega-3 and omega-6 polyunsaturated fatty acids, and especially dietary fiber, riboflavin, proteins, and mineral salts such as calcium. Guava is normally consumed fresh as a dessert fruit and can be processed into juice, nectar, pulp, jam, jelly, slices in syrup, fruit bars, or dehydrated products, as well as being used as an additive to other fruit juices or pulps (Leite et al., 2006). Excellent salads, puddings, jams, jellies, cheese, canned fruit, RTS, nectar, squash, ice cream, and toffees are made from guava (Jain and Asati, 2004). However, guava is highly perishable and cannot be stored for a long period. Moreover, a considerable proportion of the produce is lost along the post-harvest chain (Ahire, 1989). It is therefore imperative to develop suitable technology for the preservation and processing of such surplus produce. With changing consumer attitudes and demands and the emergence of new market products, it has become imperative for producers to develop products that have nutritional as well as health benefits. In this context, guava has excellent digestive and nutritive value, pleasant flavor, high palatability, and abundant availability at a moderate price, and there has been a great increase in the production of these fruits over the years, possibly due to their increased consumption in the tropics (FAO, 1983). Fruit leathers are dehydrated fruit-based products made by pouring pureed fruit onto a flat surface for drying. Owing to their novel and attractive structure and because they do not require refrigeration, they constitute a practical way to incorporate fruit solids into the diet, especially for children and adolescents. Fruit leathers also allow leftover ripe fruits to be preserved. Therefore, guava leather was prepared from two kinds of fully ripened guava fruits, the white-fleshed Sardar variety and the pink-fleshed Lalith variety, to study the effect of storage period on the physio-chemical properties of guava leather.
Raw materials
Well-matured, healthy, uniform-sized, over-ripe fruits of the pink-fleshed local Lalith and the white-fleshed Sardar (Lucknow-49) cultivars were collected from the Department of Horticulture and from progressive farmers of the Rahuri, Nasik, and Yeola tahsils.
Ingredients
Citric acid, salt, sugar, and hydrogenated fat were obtained from the local market and used as ingredients for the preparation of guava leather.
Chemicals
Most of the chemicals used in this investigation were of analytical grade, obtained from M/s. British Drug House, Mumbai, M/s. Sarabhai M. Chemicals, Baroda, M/s. S.D. Fine Chemical Ltd., Mumbai, and E. Merck (India), Mumbai.
Preparation of guava leathers
The guava fruit pulp was used for the preparation of fruit leather. Sugar and salt were added to the pulp as per the formula and mixed well, and the mixture was smeared onto aluminium or stainless-steel trays in a thin layer (0.5 to 1.0 cm thick). The pulp was then dried in a hot-air oven at 50°C for 8-10 h. The dried pulp sheets were cut to the desired size and dried again for 8-10 h. After drying, three layers of sheets were stacked together and pressed properly to form one sheet. The sheet was then cut to the desired size (3 × 4 cm), dried under a fan for 2-3 h, wrapped in a metalized polyester wrapper, and kept in a plastic bag for the storage study.
Standardization of ingredient levels for guava leather
Preliminary experiments were conducted to select the optimum level of each ingredient (sugar, salt, and citric acid). The optimum levels of the ingredients were finalized by sensory evaluation of the guava leather by a panel of at least ten semi-trained judges using the 9-point Hedonic scale (Amerine et al., 1965).
Packaging
The prepared leathers were packed in butter paper and stored in the laboratory at both ambient (25 ± 2°C) and refrigerated (7 ± 2°C, middle compartment of the refrigerator) temperatures for a 3-month storage study. Chemical analysis, organoleptic evaluation, and microbial analysis of the stored guava leathers were carried out at intervals of 0, 30, 60, and 90 days of storage.
Physicochemical analysis
The over-ripe guava fruit pulp was analyzed for moisture, TSS, titratable acidity, reducing sugars, total sugars, and vitamin C using the standard methods of AOAC (2005).
Statistical analysis
The experiments were planned and the results analyzed using a Factorial Completely Randomized Design (FCRD) with three to ten replications, according to the procedure given by Panse and Sukhatme (1967).
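As a minimal sketch of the kind of two-factor analysis an FCRD implies (variety × treatment, completely randomized), the following Python snippet runs a factorial ANOVA with statsmodels; the data frame is a hypothetical stand-in, not the study's measurements.

```python
# Two-factor factorial ANOVA (variety x treatment) on illustrative data.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "variety":   ["Sardar"] * 6 + ["Lalith"] * 6,
    "treatment": (["T1"] * 3 + ["T2"] * 3) * 2,
    "acidity":   [0.48, 0.49, 0.47, 0.42, 0.43, 0.41,   # hypothetical values
                  0.50, 0.51, 0.49, 0.44, 0.45, 0.43],
})

model = smf.ols("acidity ~ C(variety) * C(treatment)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects plus variety x treatment interaction
```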
Physio-chemical characteristics of Sardar guava fruit and pulp
The physio-chemical composition of the fruit plays a very important role in guava processing technology as well as in the final quality of the product. The physio-chemical composition of the Sardar cultivar of guava is presented in Table 2. The over-ripe fruits were round and yellowish in color, with an average weight of 139 g/fruit. The average values for pulp recovery and processing losses were 92.60 and 7.40 per cent, respectively.
Physio-chemical characteristics of Lalith guava fruit and its pulp
Lalith fruits were attractive, saffron yellow with an occasional red blush, and medium sized, with firm, pink-colored flesh. The fruit has a good blend of sugar and acid and is suitable for both processing and table purposes. Its yield was more than 24 per cent higher than that of the Allahabad Safeda variety (Yadav, 2007). The over-ripe Lalith fruits were round and yellowish in color, with an average weight of 126 g/fruit. The average values for pulp recovery and processing losses were 91.0 and 9.0 per cent, respectively.
Changes in chemical composition of guava leathers during storage
Guava leather prepared from the selected treatments of both varieties was kept for the storage study at ambient (27 ± 2°C) and refrigerated (7 ± 2°C) temperatures. The storage study results for the guava leathers are presented in Tables 4 to 7.
Chemical properties of guava leathers
The chemical properties of the guava leathers are given in Table 3. There was slight variation in chemical properties, which might be due to the difference in variety; the pink-fleshed guava leather had a lower ascorbic acid content than the Sardar guava leather.
Moisture (%)
The moisture content was reduced from 15.85 to 14.67 per cent at ambient temperature and from 15.85 to 15.07 per cent at refrigerated temperature when stored for three months. Mean values of moisture content decreased with the advancement of the storage period, as shown in Tables 4 to 7. The moisture content of the guava leathers stored under ambient conditions decreased at a higher rate than under refrigerated conditions, which might be due to the higher temperature of the ambient condition being responsible for the removal of moisture from the guava leather samples. Treatment V₂T₁ was found more suitable for maintaining the moisture level at a higher value in the guava leathers than the other treatments. Consistent with these results, a decrease in moisture content during storage was reported in mango leather (Rao and Roy, 1980a), sweet potato leather (Collins and Hutsell, 1987), dried fig (Chandeshwar et al., 2004), mango leather (Gill et al., 2004), fig leather (Kotlawar, 2008), and tamarind leather (Kharche, 2012). The results obtained in the present investigation parallel the literature.
Total soluble solids, TSS (°Brix)
Due to the decrease in moisture content, there was an increase in the TSS content of the guava leathers, from 75.95 to 77.20 per cent at ambient temperature and from 75.95 to 76.81 per cent at refrigerated temperature.
Mean TSS values increased with the advancement of the storage period, as shown in Tables 4 to 7. The increase in TSS content was greater under ambient than under refrigerated conditions. Sample V₁T₁ stored at ambient temperature had the highest total soluble solids content.
An increase in TSS content during storage was reported in fig (Mali, 1997; Palve, 2002; Gawade and Waskar, 2003; Chandeshwar et al., 2004), dried fig leather (Kotlawar, 2008), guava leather packed in different packaging materials and stored under different conditions (Muhammad, 2014), and mixed toffee from guava and strawberry (Chavan, 2015), in all cases attributed to the reduction in moisture content. The results of the present investigation show a similar trend to the literature.
Titratable acidity (%)
The titratable acidity of the guava leathers increased in all samples. Mean values of titratable acidity increased from 0.476 to 0.518 per cent at ambient temperature and from 0.476 to 0.506 per cent at refrigerated temperature during the 3-month storage period. Acidity was at a higher level in treatments V₁T₁ and V₂T₁ than in V₁T₂ and V₂T₂, which may be due to the addition of citric acid in treatments V₁T₁ and V₂T₁, whereas citric acid was not added in the other two treatments. The changes in titratable acidity of the guava leathers are presented in Tables 4 to 7. Changes in titratable acidity were statistically non-significant up to 30 days, but significant changes occurred thereafter. An increase in titratable acid content was reported in mango leather (Rao and Roy, 1980), fig leather (Kotlawar, 2008), high-protein tamarind leather (Kharche, 2012), and guava leather packed in different packaging materials and stored under different conditions (Muhammad, 2014). The results obtained in the present investigation parallel these earlier reports.
Reducing sugars (%)
A significant variation in the reducing sugar content of the guava leathers was observed during storage, owing to greater inversion of the added sugars in the guava leather samples. The reducing sugar content of the guava leathers increased with the progress of the storage period.
[Table fragment: average length (cm) 6.20, 4.10; chemical constituents of pulp — 1. moisture (%) 82.56, 83.60]
The mean values of reducing sugar content increased from 13.88 to 16.35 per cent at ambient temperature and from 13.88 to 16.02 per cent at refrigerated temperature during the 3-month storage period. The increase in reducing sugars at ambient temperature was greater than at refrigerated temperature.
The changes in the reducing sugar content of the guava leather samples are presented in Tables 4 to 7. These results indicate that higher storage temperature is the factor responsible for the greater increase in reducing sugars when the guava leathers were stored under the two different temperature conditions.
Similar increases in reducing sugars were also reported in mango leather (Rao and Roy, 1980), mango fruit bars (Mir and Nirankarnath, 1993), jackfruit bar (Krishnaveni et al., 1999), papaya-guava fruit bar (Vennilla et al., 2004), fig leather (Kotlawar, 2008), and mixed fruit toffee from fig and guava fruits (Chavan, 2012); Muhammad (2014) likewise reported increased reducing sugar levels in guava leather packed in different packaging materials and stored under different conditions.
Total sugars (%)
There was a gradual increase in the total sugar content of the guava leathers with increasing storage time. This may be due to the higher storage temperature under ambient conditions and the reduction in moisture content of the guava leather samples. The total sugars of the guava leather samples ranged from 68.42 to 69.02 per cent at ambient temperature and from 68.42 to 68.73 per cent at refrigerated temperature during the 3-month storage period. The results on changes in the total sugar content of the guava leathers during storage are presented in Tables 4 to 7.
Similar increases in total sugar content were reported in sweet potato leather (Collins and Hutsell, 1987), jackfruit leather (Che Man and Taufik, 1995), fig and other fruit products (Doreyappa Gowda et al., 1995), mango fruit bar with respect to storage temperature (Doreyappa Gowda et al., 1995), guava-papaya fruit bar (Vennilla et al., 2004), guava leather packed in different packaging materials and stored under different temperature conditions (Muhammad, 2014), and mixed toffee from guava and strawberry (Chavan, 2015). The results obtained in the present investigation are comparable to those reported in the literature.
Ascorbic acid (mg/100 g)
A significant difference in ascorbic acid content was observed in the guava leather samples stored under the two different temperature conditions over the 3-month storage period. The ascorbic acid content of the guava leather samples gradually decreased with the advancement of the storage period. It decreased from 99.36 to 73.79 mg/100 g at ambient temperature and from 99.36 to 60.16 mg/100 g at refrigerated temperature.
It was observed that the ascorbic acid content of the guava leather samples remained at a higher level when stored at refrigerated temperature than at ambient temperature; the ascorbic acid content was thus better maintained under refrigerated storage. The decrease in ascorbic acid content under ambient conditions might be due to oxidation of ascorbic acid at the higher storage temperature. The results on changes in the ascorbic acid content of the guava leathers during storage are presented in Tables 4 to 7.
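If the ascorbic acid loss is assumed to follow first-order kinetics, which is a common assumption for vitamin C degradation that the study itself does not test, the rate constant and half-life follow directly from the endpoint values reported above for ambient storage.

```python
# First-order decay C(t) = C0 * exp(-k*t) fitted to the two reported points.
import math

c0, ct, t_days = 99.36, 73.79, 90     # ambient-storage values from the text

k = math.log(c0 / ct) / t_days        # rate constant, per day
half_life = math.log(2) / k           # time to lose half the vitamin C

print(f"k = {k:.5f} /day, half-life ~ {half_life:.0f} days")
# k ≈ 0.00331 /day, half-life ≈ 210 days under ambient storage
```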
From the results of this research, it is concluded that the guava leather prepared with treatment T₁ showed better organoleptic properties as well as good storage stability under both storage conditions (ambient and refrigerated) over the 3-month storage period.
Recommendations
Studies should be carried out on the effect of different packaging materials and of different drying methods. Further studies are needed on the preparation of guava leather and its preservation using other preservatives. Pilot-scale preparation of guava leather should be undertaken for its better utilization. | 2019-04-02T13:14:10.246Z | 2018-04-20T00:00:00.000 | {
"year": 2018,
"sha1": "cb57fc98070b94e10ff14cd9781c35a88261c57c",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/7-4-2018/Shaik%20Jakeer%20Basha,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fce19c9bba0b801c859bc7b2d37ae9befc30184e",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
29054026 | pes2o/s2orc | v3-fos-license | How does ethanol induce apoptotic cell death of SK-N-SH neuroblastoma cells
A body of evidence suggests that ethanol can lead to damage of neuronal cells. However, the mechanism underlying ethanol-induced damage of neuronal cells remains unclear. The role of mitogen-activated protein kinases in ethanol-induced damage was investigated in SK-N-SH neuroblastoma cells. 3-[4,5-Dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide cell viability assay, DNA fragmentation detection, and flow cytometric analysis showed that ethanol induced apoptotic cell death and cell cycle arrest, characterized by increased caspase-3 activity, DNA fragmentation, nuclear disruption, and G₁ arrest of the cell cycle in the SK-N-SH neuroblastoma cells. In addition, western blot analysis indicated that ethanol induced a lasting increase in c-Jun N-terminal protein kinase activity and a transient increase in p38 kinase activity in the neuroblastoma cells. c-Jun N-terminal protein kinase or p38 kinase inhibitors significantly reduced the ethanol-induced cell death. Ethanol also increased p53 phosphorylation, followed by an increase in p21 tumor suppressor protein and a decrease in phospho-Rb (retinoblastoma) protein, leading to alterations in the expression and activity of cyclin-dependent protein kinases. Our results suggest that ethanol mediates apoptosis of SK-N-SH neuroblastoma cells by activating p53-related cell cycle arrest, possibly through activation of the c-Jun N-terminal protein kinase-related cell death pathway.
INTRODUCTION
Alcohol is a toxic and dependence-producing substance that can damage most organs in the body, including the liver [1][2][3], pancreas [4][5][6], skeletal and cardiac muscle [7], and brain [8][9]. The brain is particularly sensitive to the toxic effects of alcohol. The model of alcohol-related brain damage can be used to investigate the effects of chronic alcohol consumption on human brain structure and function in the absence of well-characterized neurological concomitants of alcoholism [10][11][12][13][14][15]. For example, structural imaging techniques have revealed that chronic alcohol use is accompanied by volume reductions of gray and white matter, microstructural disruption of various white matter tracts, and enlargement of the cerebral ventricles and sulci [16][17]. Postmortem studies of brain tissue in both humans and animals confirm the imaging observations, showing significant reductions in the weight of the cerebral hemispheres and the cerebellum in severe alcoholics [18][19]. Beyond the structure of the normal adult brain, alcohol also damages neurogenesis in the developing brain as well as neuroregeneration in the adult brain. Ethanol has been shown to disrupt numerous events in the developing brain, including neurogenesis, cell migration, cell adhesion, neuron survival, axon outgrowth, synapse formation, and neurotransmitter function [20][21][22]. Since similar events also occur during neuroregeneration of the adult brain, it is possible that ethanol also affects adult neuroregeneration. Inhibition of neuroregeneration by ethanol has been demonstrated in animal models receiving low doses of ethanol.
The structural alterations due to alcohol abuse clearly lead to changes in brain function, with the degree of dysfunction dependent upon the duration and amount of alcohol consumed. These changes include neuropsychological deficits in alcoholics, particularly in those with Korsakoff's psychosis [22][23][24]. Alcoholics also show abnormalities of executive cognitive function, the ability to use higher mental processes to adaptively shape future behavior [25][26]. Long-term follow-up of the fetal effects of ethanol demonstrates that mental retardation, abnormal behavior, and facial dysmorphism persist into adulthood [27]. In rodents exposed in utero to ethanol, the hippocampi display a reduced number of neurons and reduced dendritic spine density, correlating with the animals' impaired learning and memory [27].
A large amount of work has been done to unveil the mechanisms of ethanol toxicity to the brain. Although the exact mechanism behind alcoholic neuropathy is not well understood, several explanations have been proposed. It is believed that chronic alcohol use can damage the brain by inducing malnutrition and thiamine deficiency, leading to Wernicke-Korsakoff syndrome. This indirect toxic effect of ethanol results from the compromised absorption and abnormal metabolism of thiamine and other vitamins induced by ethanol [28]. In addition, reduced availability of neurotrophins, increased levels of homocysteine, and activated microglia have also been proposed to be responsible for the neurodegeneration induced by ethanol [28]. Beyond the indirect toxic effect, studies support a direct toxic effect of ethanol on neurons, since a dose-dependent relationship has been observed between the severity of neuropathy and the total lifetime dose of ethanol [29][30]. For example, axonal degeneration has been documented in rats receiving ethanol while maintaining normal thiamine status [31]. The direct toxic effect of ethanol on nerve cells has been observed directly in cultured cells. For example, moderate or high concentrations of ethanol can lead to changes in the morphology and cytoskeletal organization of cultured neurons [32][33]. Ethanol can affect the differentiation of neural stem cells [34]. Numerous recent in vitro and in vivo studies provide evidence that ethanol can directly induce apoptotic cell death of neurons [35][36][37][38]. However, the signaling mechanism of neuronal apoptosis induced by ethanol remains unclear. It is known that the initiation and execution of apoptosis depend on activation of the extrinsic and/or intrinsic death pathways. Mitogen-activated protein kinases (MAPKs) are protein Ser/Thr kinases that convert extracellular stimuli into a wide range of cellular responses [39][40]. MAPKs are among the most ancient signal transduction pathways and are widely used throughout evolution in many physiological processes [39][40][41]. In mammals, there are more than a dozen MAPK enzymes that coordinately regulate cell proliferation, differentiation, motility, survival, and apoptosis. The best known are the conventional MAPKs, which include the extracellular signal-regulated kinases (ERK), c-Jun amino-terminal kinases (JNK), and p38 MAP kinases (p38K). While ERKs are key transducers of proliferation signals and are often activated by mitogens, the JNKs and p38K are poorly activated by mitogens but strongly activated by cellular stress inducers [39][40][41]. It has been shown that both JNK and p38K can be activated by ethanol exposure [42][43][44]. However, how their activation initiates neuronal apoptosis has yet to be identified. The p53 tumor suppressor protein exerts its growth-inhibitory activity by activating and interacting with diverse signaling pathways. As a downstream target, p53 protein is phosphorylated and activated by a number of protein kinases, including JNK and p38K, in response to stressful stimuli [45]. As an upstream activator, activated p53 acts as a transcription factor to induce and/or suppress a number of genes whose expression leads to the activation of diverse signaling pathways and many outcomes in cells, including cell cycle arrest and apoptosis [46]. SK-N-SH neuroblastoma cells are hybrid cells of neurons and blastomas that are phenotypically similar to neurons but able to proliferate.
Therefore, this cell line has been extensively used to study the effect of ethanol on neuronal cells. By using SK-N-SH neuroblastoma cells, the current study was designed to investigate the effect of ethanol on the JNK and p38K pathways and their roles in ethanol-induced cell death of neuronal cells. In addition, the expression levels of p53 protein and various proteins associated with cell cycle arrest and apoptosis were measured after ethanol exposure in order to unveil the signaling mechanisms in the ethanol-induced cell death.
RESULTS
Ethanol reduced cell viability of SK-N-SH neuroblastoma cells

SK-N-SH neuroblastoma cells were divided into a control (C) group and four ethanol treatment groups and received PBS or various concentrations (25, 50, 100, 200 mmol/L) of ethanol for 24 hours. Phase contrast photomicrographs showed that most of the ethanol-treated SK-N-SH neuroblastoma cells shrank into a spherical shape and only a few exhibited the normal spindle shape (Figure 1A). The 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide (MTT) assay indicated that ethanol induced a concentration- and exposure time-dependent increase in the cell death rate of the SK-N-SH neuroblastoma cells (P < 0.01; Figure 1B).
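As context for the viability figures, percent viability in an MTT assay is typically computed by background-subtracting the absorbance readings and normalizing to the untreated control; the sketch below illustrates this with hypothetical absorbance values, not the study's raw data.

```python
# Percent viability from raw MTT absorbances (blank-corrected, control-normalized).
import numpy as np

a_blank = 0.05                            # medium-only wells
a_control = np.array([1.20, 1.18, 1.22])  # PBS-treated control wells
a_treated = np.array([0.78, 0.74, 0.80])  # e.g., 100 mmol/L ethanol wells

viability = (a_treated - a_blank).mean() / (a_control - a_blank).mean() * 100
print(f"viability ~ {viability:.1f}% of control")
# the cell death rate is then 100 - viability
```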
Ethanol induced apoptotic alterations and cell cycle arrest in SK-N-SH neuroblastoma cells
After treatment with 100 mmol/L ethanol, the levels of caspase-3 increased after 16 hours and remained high at 24 hours (P < 0.01; Figure 2A). In addition, DNA fragmentation analysis showed that fragmented DNA was present in the ethanol-treated cells at 24 hours and became more apparent when the treatment time was prolonged from 24 to 72 hours (Figure 2B). We stained the cells with 4',6-diamidino-2-phenylindole (DAPI), a sensitive assay for apoptosis. Without ethanol treatment, the nuclei of control cells showed uniform staining, indicating that these cells were healthy and their nuclei intact. In contrast, after 24-hour treatment with 100 mmol/L ethanol, the SK-N-SH neuroblastoma cells exhibited typical alterations of apoptosis, such as nuclear condensation and disruption (Figure 2C). Flow cytometric analysis of cellular DNA stained with propidium iodide revealed that the percentage of M1 cells, representing cells in the sub-G₁ stage of the cell cycle, increased from 0.84% at 0 hours to 15.82% at 36 hours. In contrast, the cells in the M3 and M4 gates, representing the S phase and G₂/M phase, decreased from 8.33% and 27.52% at 0 hours to 4.98% and 17.21% at 36 hours, respectively (Figure 3).
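For illustration, converting gated flow-cytometry event counts into cell-cycle phase percentages is a simple normalization; the event counts below are invented and chosen only to echo the 36-hour distribution described above.

```python
# Convert gated DNA-content event counts into phase percentages.
def phase_percentages(counts: dict) -> dict:
    total = sum(counts.values())
    return {phase: 100.0 * n / total for phase, n in counts.items()}

events_36h = {"sub-G1 (M1)": 1582, "G1 (M2)": 6199, "S (M3)": 498, "G2/M (M4)": 1721}
for phase, pct in phase_percentages(events_36h).items():
    print(f"{phase:>12}: {pct:5.2f}%")
# sub-G1 15.82%, G1 61.99%, S 4.98%, G2/M 17.21% of 10,000 gated events
```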
Ethanol increased the levels of phosphorylated JNK (p-JNK) and p38K (p-p38K)
To determine the induction of JNK activation after ethanol exposure in SK-N-SH neuroblastoma cells, JNK protein levels were determined by immunoblot analysis after exposure to 100 mmol/L ethanol at different time points. As shown in Figure 4, ethanol increased p-JNK levels in a time- and concentration-dependent manner. Within 1 hour of ethanol exposure, p-JNK levels increased, and the elevated p-JNK persisted until 16 hours after ethanol exposure. In contrast to JNK, p-p38K levels increased transiently at 1-4 hours after ethanol treatment before returning to control levels.
Inhibition of JNK and p38K phosphorylation reduced ethanol-induced cell death

As described above, ethanol treatment led to remarkable increases in the levels of p-JNK and p-p38K. To examine the specific roles of JNK and p38K phosphorylation in ethanol-induced cell death, the cells were pretreated for 3 hours with SP600125 (a JNK inhibitor) or SB203580 (a p38K inhibitor). As shown in Figure 5, the inhibitors significantly reduced ethanol-induced cell death as well as p-JNK and p-p38K levels in SK-N-SH cells, suggesting that JNK and p38K phosphorylation are important during ethanol-mediated cell death.
Ethanol induced p53 phosphorylation in SK-N-SH neuroblastoma cells
To determine the involvement of p53 in ethanol-mediated SK-N-SH neuroblastoma cell death, the level of p53 was assayed by western blot analysis in SK-N-SH neuroblastoma cells treated with 100 mmol/L ethanol. Ethanol induced the phosphorylation of p53, which led to accumulation of p53 protein at 1 hour after ethanol exposure. Furthermore, this p53 activation was followed by an increase in the p21 tumor suppressor protein and a gradual decrease in phosphorylated Rb protein ( Figure 6).
Ethanol reduced expression and activity of cyclin dependent protein kinases
To investigate the effect of ethanol on the cell cycle, the expression and activity of the cyclin-dependent protein kinases were examined. As shown in Figure 7A, the levels of Cdk2 and Cdk4 decreased in a time-dependent manner in SK-N-SH neuroblastoma cells treated with 100 mmol/L ethanol. In addition, the protein kinase activity associated with the immunoprecipitated CDKs (Cdk2 and Cdk4) and cyclin proteins (cyclin D1 and cyclin E) also decreased in a time-dependent manner in SK-N-SH neuroblastoma cells treated with 100 mmol/L ethanol (Figure 7B).
[Figure 5 legend: (A) SK-N-SH neuroblastoma cells grown in microtiter plates were pretreated with dimethyl sulfoxide (DMSO; control), 10 µmol/L SB203580 (p38K inhibitor), or 500 nmol/L SP600125 (JNK inhibitor) for 3 hours before exposure to 100 mmol/L ethanol for an additional 24 hours; cell viability was then determined using the MTT assay. Data are expressed as mean ± SEM, n = 5, one-way analysis of variance; aP < 0.01 vs. ethanol-untreated control, bP < 0.05 vs. ethanol-treated samples. (B) Cells pretreated with the inhibitors as above and then treated with 100 mmol/L ethanol for 24 hours were collected for immunoblot analyses of p-p38K and p-JNK; the ethanol-induced increases in p-p38K and p-JNK were inhibited by SB203580 and SP600125.]

[Figure 6 legend: SK-N-SH cells were treated with 100 mmol/L ethanol for the indicated time periods (0-24 hours). The soluble fraction from each sample was separated by 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) followed by western blot analysis; each antigenic protein was detected using antibodies against p53, phosphorylated p53 (p-p53), p21, or phosphorylated Rb (p-Rb). h: hour; min: minute.]

DISCUSSION

Long-term alcohol exposure has been shown to be toxic to nerve cells in both the developing and the adult brain [21]. However, whether the toxic effect of ethanol on nerve cells arises through an indirect or a direct mechanism remains unclear. While some studies show that ethanol may induce apoptotic neurodegeneration through indirect mechanisms such as increased oxidative stress, induction of proinflammatory cytokines, thiamine deficiency, and accumulation of GM2 ganglioside and sphingosine 1-phosphate [28,47,48], others suggest that a direct mechanism may play a role in the ethanol-induced neuronal cell death [35][36][37][38]. By using cultured SK-N-SH neuroblastoma cells treated for various periods of time with different concentrations of ethanol, we showed that ethanol could significantly reduce the viability of the SK-N-SH neuroblastoma cells.
The reduction of cell viability induced by ethanol may result from increased apoptotic cell death and decreased cell proliferation. That the ethanol-treated SK-N-SH neuroblastoma cells underwent apoptotic cell death was evidenced by several typical apoptotic changes after ethanol exposure, including caspase-3 activation, DNA fragmentation, and nuclear condensation and disruption. In addition, flow cytometric analysis showed that the percentage of cells in the G1 phase of the cell cycle increased dramatically, indicating that ethanol also induced cell cycle arrest in SK-N-SH neuroblastoma cells.
To further understand the potential mechanism underlying ethanol-induced apoptotic cell death and cell cycle arrest in SK-N-SH neuroblastoma cells, we sought to identify the signal transduction pathways involved. As demonstrated previously, MAPKs appear to participate in ethanol-induced cell death. Both JNK and p38K, the two subfamilies of MAPKs that are usually activated by stressful stimuli, were shown to be activated by ethanol exposure [42-44]. However, how their activation initiates neuronal apoptosis and cell cycle arrest has yet to be elucidated. Our results showed that the levels of both phosphorylated JNK and phosphorylated p38K were increased by ethanol treatment, indicating that these two pathways were activated during ethanol exposure. To demonstrate that the ethanol-induced activation of JNK and p38K is associated with the ethanol-induced cell death, we treated the cells with the JNK inhibitor SP600125 or the p38K inhibitor SB203580 before ethanol exposure. The inhibitors significantly reduced the ethanol-induced cell death as well as the levels of phosphorylated JNK and p38K in SK-N-SH neuroblastoma cells, suggesting that ethanol-induced cell death depends on JNK and p38K activation.
p53 is the most commonly mutated gene in human cancer. The p53 tumor suppressor protein is a nuclear phosphoprotein with a short half-life that is regulated mainly through post-translational modifications. Upon stressful stimuli, p53 protein is modified through multiple post-translational events, including phosphorylation and acetylation, which stabilize and activate it. Once activated, p53 acts as a transcription factor for many genes that contain consensus p53-binding sites in their promoters or intronic sequences. It is accepted that activation of p53 triggers a number of signaling pathways that lead to cell cycle arrest, apoptosis, senescence, DNA repair and anti-angiogenesis [45]. It has been shown that the MAP kinases, including p38 and the JNKs, can phosphorylate p53 in response to different stressful stimuli, and such phosphorylation can initiate the p53 response, leading to cell cycle arrest and apoptosis [45,46].
To determine the involvement of p53 in ethanol-mediated SK-N-SH cell death and cell cycle arrest, the level of p53 was assayed by immunoblot in the SK-N-SH cells treated with ethanol. We found that ethanol induced the phosphorylation of p53, which led to accumulation of p53 protein at 1 hour after ethanol exposure. This result indicates that p53 protein is involved in the apoptotic cell death and cell cycle arrest after modification by activated p38K and JNK in the ethanol-treated SK-N-SH neuroblastoma cells.
It is known that cell cycle progression is controlled by a set of cyclin-dependent kinases (CDKs), which are activated by their associated cyclins but inhibited by two classes of CDK inhibitors. One of the CDK inhibitors is p21, a small 165-amino-acid protein also known as p21WAF1/Cip1, which has been shown to be an important mediator of p53-dependent cell cycle arrest and apoptosis [49,50]. Another key regulator is the retinoblastoma protein (pRb), which acts in late G1; phosphorylation of pRb is essential for the G1/S transition [51,52]. It is established that the p53 protein can enhance the transcription of p21 [53]. Binding of p21 to the cyclin-CDK complex therefore inhibits its kinase activity, thereby interfering with phosphorylation of pRb and inducing arrest of cell growth [54-56].

[Figure 7 legend] (A) The soluble fractions from SK-N-SH neuroblastoma cells treated with 100 mmol/L ethanol for different time periods were subjected to 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) followed by western blot analysis using specific antibodies against each target protein. (B) Protein kinase activity associated with the immunoprecipitated CDK and cyclin proteins was determined with an immunocomplex kinase activity assay using histone H1 as the substrate. CDK: cyclin-dependent kinase.
In accordance with the above theory, we showed that p53 activation by ethanol was followed by an increase in the CDK inhibitor p21 and a gradual decrease in phosphorylated Rb protein.
In addition, we showed that the levels of Cdk2 and Cdk4, as well as the protein kinase activity associated with the immunoprecipitated CDKs (Cdk2 and Cdk4) and cyclins (cyclin D1 and cyclin E), decreased in a time-dependent manner in SK-N-SH neuroblastoma cells treated with ethanol. Since the cyclin D1/Cdk4 complex activates cell cycle progression early in the G1 phase by phosphorylating pRb, while the Cdk2/cyclin E complex acts in the transition from G1 to S phase [57-59], these results explain our flow cytometric finding that ethanol-treated cells arrested in the G1 phase. p53-mediated cell cycle arrest can further lead to apoptosis if the DNA cannot be repaired effectively. This may be one of the mechanisms of ethanol-induced apoptotic cell death in SK-N-SH neuroblastoma cells. However, since p53 can also induce apoptosis through the caspase cascade [60], and caspase-3 is activated in ethanol-treated cells, it is also possible that ethanol-induced apoptotic cell death is partially mediated through p53-dependent activation of caspase-3.
In conclusion, the present study strongly indicates that ethanol can directly induce cell cycle arrest and apoptosis in SK-N-SH neuroblastoma cells. Ethanol may first activate JNK and p38K, whose phosphorylation of p53 activates the protein, thereby initiating the cell death pathways that lead to cell cycle arrest and apoptosis.
MATERIALS AND METHODS
Design
A randomized, controlled, in vitro experimental study.
Time and setting
The experiment was performed at Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, Kangdong Sacred Heart Hospital, Hallym University, Seoul, Korea from March 2011 to April 2012.
Materials
SK-N-SH cells were obtained from the American Type Culture Collection (Rockville, MD, USA).
Detection of ethanol-treated SK-N-SH neuroblastoma cell viability by MTT assay
SK-N-SH neuroblastoma cell viability was measured after ethanol exposure using the MTT assay (Sigma-Aldrich, St. Louis, MO, USA). Briefly, the medium was removed and replaced with 20 μL of MTT (5 mg/mL; Sigma-Aldrich) in PBS. The plates were incubated at 37°C for 4 hours, followed by addition of 100 μL of dimethyl sulfoxide (DMSO). The multi-well plates were then shaken for 15 seconds, and the signals were detected with a microplate reader at a wavelength of 595 nm. Cell viability was expressed as the percentage of the absorbance in ethanol-treated cells relative to that in vehicle-treated control cells.
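A minimal sketch of this readout calculation is given below; the absorbance values, the blank-correction step, and all variable names are illustrative assumptions rather than details taken from the original protocol:

```python
import numpy as np

# Hypothetical A595 readings from the microplate reader (n = 5 wells per group)
blank = np.array([0.05, 0.06, 0.05, 0.05, 0.06])      # medium + MTT, no cells
control = np.array([1.21, 1.18, 1.25, 1.19, 1.22])    # vehicle-treated cells
ethanol = np.array([0.71, 0.68, 0.74, 0.70, 0.69])    # 100 mmol/L ethanol

# Subtract the mean blank signal, then express viability as % of control
control_corr = control - blank.mean()
ethanol_corr = ethanol - blank.mean()
viability = ethanol_corr / control_corr.mean() * 100

sem = viability.std(ddof=1) / np.sqrt(viability.size)
print(f"viability: {viability.mean():.1f} ± {sem:.1f} % (mean ± SEM)")
```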
Detection of ethanol-treated SK-N-SH neuroblastoma cell apoptosis by DAPI staining
The SK-N-SH neuroblastoma cells were fixed at room temperature with 4% paraformaldehyde and stained with DAPI using the Cell Apoptosis DAPI Detection Kit (GenScript USA Inc., Piscataway, NJ, USA) according to the instructions provided. The stained cell nuclei were examined under a fluorescence microscope (Nikon TE2000, Tokyo, Japan).
Analysis of DNA fragmentation in ethanol-treated SK-N-SH neuroblastoma cells
DNA fragmentation in the SK-N-SH neuroblastoma cells was measured using a previously published method [61]. Briefly, genomic DNA isolated from ethanol-treated and untreated cells was mixed with unphosphorylated oligonucleotides in T4 DNA ligase buffer (Boehringer Mannheim, Stuttgart, Germany). After the oligonucleotides were annealed, 3 U of T4 DNA ligase (Boehringer Mannheim) were added for ligation. The reactions were then diluted with TE buffer to a final concentration of 5 ng/mL. Samples were stored at −20°C until PCR. The ligated DNA was amplified by PCR using a specific linker primer. The PCR products were analyzed by electrophoresis through 1.2% agarose gels. After electrophoresis, the gels were stained with ethidium bromide and photographed on a UV transilluminator (JUNYI, Beijing, China).
Detection of ethanol-treated SK-N-SH neuroblastoma cells by flow cytometry
After trypsin digestion, approximately 10^6 cells were collected by centrifugation at 1,000 × g for 5 minutes. The cells were then washed in PBS, followed by re-suspension and fixation in 70% ethanol for approximately 2 hours. The cells were washed once with PBS, re-suspended in 0.5 mL PBS containing 0.1 mg RNase, and incubated for 30 minutes at 37°C. Cellular DNA was then stained with 10 µg of propidium iodide. The stained cells were subsequently analyzed on a FACScan with CellQuest software (Becton Dickinson, Franklin Lakes, NJ, USA).
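The readout of interest in these experiments is the fraction of cells in each cell cycle phase. The sketch below illustrates how such fractions can be estimated from a propidium iodide (DNA content) histogram; the simulated fluorescence values and the fixed gating thresholds are hypothetical stand-ins for what dedicated acquisition software such as CellQuest does with real data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated PI fluorescence (arbitrary units): G1 peak near 200, G2/M near 400
pi_signal = np.concatenate([
    rng.normal(200, 15, 7000),    # G1 cells (2N DNA content)
    rng.uniform(230, 370, 2000),  # S-phase cells (between 2N and 4N)
    rng.normal(400, 20, 1000),    # G2/M cells (4N DNA content)
])

# Simple fixed gates around the 2N and 4N peaks (assumed, not from the paper)
g1 = np.mean((pi_signal > 160) & (pi_signal < 240)) * 100
g2m = np.mean((pi_signal > 360) & (pi_signal < 460)) * 100
s = 100 - g1 - g2m

print(f"G1: {g1:.1f}%  S: {s:.1f}%  G2/M: {g2m:.1f}%")
```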
Immunocomplex kinase activity assay
SK-N-SH neuroblastoma cells treated with ethanol for the indicated times were harvested, homogenized in ice-cold lysis buffer, and used to determine the activities of Cdk2, Cdk4, cyclin D1, and cyclin E in the soluble fraction (300 µg per reaction) according to the published method [63]. Briefly, cells were washed twice in cold PBS and lysed by the addition of RIPA buffer (1% Triton X-100, 1% sodium deoxycholate, 0.1% SDS, 0.15 mol/L NaCl and 0.01 mol/L Tris, pH 7.4). Lysates were clarified by centrifugation at 15,000 × g for 30 minutes at 4°C. Samples were incubated with each rabbit polyclonal antibody against Cdk2, Cdk4, cyclin D1, and cyclin E overnight at 4°C, followed by incubation for another 60 minutes with Protein A-Sepharose CL-4B (Pharmacia LKB Biotechnology, Piscataway, NJ, USA). Immune complexes were centrifuged, and the pellets were washed in RIPA buffer, resuspended, boiled in SDS sample buffer and analyzed on discontinuous 12.5% SDS-polyacrylamide slab gels followed by fluorography. The protein kinase activity associated with the immunoprecipitated CDKs (Cdk2 and Cdk4) and cyclin proteins (cyclin D1 and cyclin E) was measured using purified histone H1 as the substrate.
Statistical analysis
All experiments were repeated five times unless otherwise indicated. The results are expressed as mean ± SEM. One-way analysis of variance was performed to determine significance among groups using SPSS 11.5 software (SPSS, Chicago, IL, USA). P < 0.05 was considered statistically significant.
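As an illustration, the analysis described here (one-way ANOVA across treatment groups, with results as mean ± SEM) corresponds to the following sketch; the group values are hypothetical and SciPy stands in for the SPSS software the authors used:

```python
import numpy as np
from scipy import stats

# Hypothetical cell viability (%) for five replicates per group
groups = {
    "control":          [100.2, 98.7, 101.5, 99.4, 100.1],
    "ethanol":          [58.3, 61.0, 57.2, 60.4, 59.1],
    "ethanol+SP600125": [78.9, 81.2, 80.0, 77.5, 79.3],
    "ethanol+SB203580": [75.4, 77.8, 76.1, 78.2, 74.9],
}

# One-way analysis of variance across all groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4g}")

# Mean ± SEM per group, as reported in the figures
for name, values in groups.items():
    v = np.asarray(values)
    sem = v.std(ddof=1) / np.sqrt(v.size)
    print(f"{name}: {v.mean():.1f} ± {sem:.1f}")
```
| 2017-06-17T06:02:07.301Z | 2013-07-15T00:00:00.000 | {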
"year": 2013,
"sha1": "f8a782462cbcfdf8853bf5c984eed2b213432c8b",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "71fe96356e3a7a157e631ef6dc4de0ef14ed711e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
119737320 | pes2o/s2orc | v3-fos-license | A note on the Matlis dual of a certain injective hull
Let $(R,\mathfrak{m})$ denote a local ring with $E = E_R(R/\mathfrak{m})$ the injective hull of the residue field. Let $\mathfrak{p} \in \Spec R$ denote a prime ideal with $\dim R/\mathfrak{p} = 1$, and let $E_R(R/\mathfrak{p})$ be the injective hull of $R/\mathfrak{p}$. As the main result we prove that the Matlis dual $\Hom_R(E_R(R/\mathfrak{p}), E)$ is isomorphic to $\hat{R_{\mathfrak{p}}}$, the completion of $R_{\mathfrak{p}}$, if and only if $R/\mathfrak{p}$ is complete. In the case of $R$ a one dimensional domain there is a complete description of $Q \otimes_R \hat{R}$ in terms of the completion $\hat{R}$.
INTRODUCTION
Let $R$ denote a commutative Noetherian ring. For injective $R$-modules $I, J$ it is well-known that $\Hom_R(I, J)$ is a flat $R$-module. In order to understand such modules, the first case of interest is when $I, J$ are indecomposable (as follows by Matlis' structure theory, see e.g. [7] or [3]).
Let $(R, \mathfrak{m})$ denote a local ring with the injective hull $E = E_R(R/\mathfrak{m})$ of the residue field $k = R/\mathfrak{m}$. In this situation it comes down to understanding the Matlis dual $\Hom_R(I, E)$ of an injective $R$-module, in particular for $I = E_R(R/\mathfrak{p})$, the injective hull of $R/\mathfrak{p}$ for $\mathfrak{p} \in \Spec R$. It was shown (see [3, 3.3.14] and [3, 3.4
Therefore $\Hom_R(E_R(R/\mathfrak{p}), E)$ is the completion of a free $R_{\mathfrak{p}}$-module of rank $\mu_{\mathfrak{p}}$.
Here we shall prove, as the main result of the paper, the following result on the Matlis dual of a certain $E_R(R/\mathfrak{p})$. Theorem 1.1. Let $(R, \mathfrak{m})$ denote a local ring. Let $\mathfrak{p}$ denote a one-dimensional prime ideal. Then $\Hom_R(E_R(R/\mathfrak{p}), E) \simeq \hat{R_{\mathfrak{p}}}$ (i.e. it is the completion of a free $R_{\mathfrak{p}}$-module of rank one) if and only if $R/\mathfrak{p}$ is complete.
Let $\mathfrak{p}$ denote a one-dimensional prime ideal in a local ring $(R, \mathfrak{m})$. The equality $\mu_{\mathfrak{p}} = 1$ was proved in [4] resp. in [5] in the case of $R$ a complete Gorenstein domain resp. in the case of $R$ a complete Cohen-Macaulay domain. The proofs are based on the use of the dualizing module of a complete Cohen-Macaulay domain. Note that the dualizing module is isomorphic to $R$ in the case of a complete Gorenstein domain.
Here we use Matlis duality as a basic ingredient and, as a main step, the reduction to the case of $\dim R = 1$ suggested by one of the reviewers. In the case of a one-dimensional domain there is a complete description of $\Hom_R(E_R(R), E)$ and $Q \otimes_R \hat{R}$ in terms of the completion $\hat{R}$ (see Theorem 2.5 for the precise formulation).
PROOFS
In the following $(R, \mathfrak{m})$ always denotes a local ring with $E = E_R(R/\mathfrak{m})$ the injective hull of the residue field $R/\mathfrak{m}$. Then $D_R(\cdot) = \Hom_R(\cdot, E)$ denotes the Matlis duality functor.

Remark 2.1. Recall the natural homomorphism $X \to D_R(D_R(X))$ that is always injective. If $(R, \mathfrak{m})$ is complete it is an isomorphism whenever $X$ is an Artinian $R$-module resp. a finitely generated $R$-module (see [7, p. 528] and [7, Corollary 4.3]). Moreover it follows that the map is an isomorphism if and only if there is a finitely generated $R$-submodule $Y \subset X$ such that $X/Y$ is an Artinian $R$-module. For the proof we refer to [8] and also to [1] for a generalization.
Let $M$ denote an Artinian $R$-module. Then $M$ admits the structure of an $\hat{R}$-module compatible with its $R$-module structure such that $M \otimes_R \hat{R} \to M$ is an isomorphism (see e.g. [8, (2.1)]). Let $M$ denote an $R$-module and $N$ an $\hat{R}$-module. Then $\Ext^i_R(M, N)$, $i \in \mathbb{Z}$, has the structure of an $\hat{R}$-module; moreover, there are natural isomorphisms relating these Ext modules over $R$ and over $\hat{R}$.

As a technical tool we shall need the short exact sequence of the following trivial lemma.

Lemma 2.2. Let $(R, \mathfrak{m})$ denote a one-dimensional local domain with quotient field $Q$. Then there is a short exact sequence $0 \to R \to Q \to H^1_{\mathfrak{m}}(R) \to 0$.
Proof. We start with the short exact sequence $0 \to R \to Q \to Q/R \to 0$. The long exact local cohomology sequence provides an isomorphism $Q/R \simeq H^1_{\mathfrak{m}}(R)$. To this end recall that $H^i_{\mathfrak{m}}(Q) = 0$ for all $i \in \mathbb{Z}$. This proves the statement.
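For the reader's convenience, the local cohomology step can be spelled out as follows; this is our own expansion of the argument the proof appeals to, not text from the source:

```latex
% Long exact local cohomology sequence of 0 -> R -> Q -> Q/R -> 0,
% using H^i_m(Q) = 0 for all i (Q is the quotient field of R):
\begin{align*}
0 \to H^0_{\mathfrak{m}}(R) \to \underbrace{H^0_{\mathfrak{m}}(Q)}_{=0}
  \to H^0_{\mathfrak{m}}(Q/R) \to H^1_{\mathfrak{m}}(R)
  \to \underbrace{H^1_{\mathfrak{m}}(Q)}_{=0}
\end{align*}
% Since R is a one-dimensional domain, Q/R is m-torsion, so
% H^0_m(Q/R) = Q/R and H^0_m(R) = 0, whence Q/R \simeq H^1_m(R).
```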
As one of the main ingredients of the proof we start with a reduction to the one-dimensional case suggested by one of the reviewers.
Lemma 2.3. Let $\mathfrak{p}$ be a prime ideal in a local ring $(R, \mathfrak{m})$. Let
Proof. Since $k(\mathfrak{p})$ is an $R/\mathfrak{p}$-module, the adjunction formula gives a chain of isomorphisms; for the last isomorphism note that $\Hom_R(R/\mathfrak{p}, E) \simeq E_{R/\mathfrak{p}}(k)$.

Proof. Consider the short exact sequence of Lemma 2.2 and apply $\cdot \otimes_R \hat{R}$. It induces a commutative diagram with exact rows. The vertical homomorphism at the right is an isomorphism since $H^1_{\mathfrak{m}}(R)$ is an Artinian $R$-module (see Remark 2.1). Because the vertical homomorphisms are injective, the snake lemma implies an isomorphism $\hat{R}/R \simeq (Q \otimes_R \hat{R})/Q$. Whence there is the short exact sequence $0 \to Q \to Q \otimes_R \hat{R} \to \hat{R}/R \to 0$.
By virtue of Remark 2.1 this proves the first isomorphism. Moreover $\hat{R}/R \simeq \Hom_R(Q, \hat{R}/R)$ since $\hat{R}/R$ admits the structure of a $Q$-vector space.
Next we claim that $\Ext^i_R(Q, \hat{R}) = 0$ for all $i \in \mathbb{Z}$. By Matlis duality and adjointness it will be enough to show that $\Tor^R_i(Q, E) = 0$ for all $i \in \mathbb{Z}$. This follows since $Q$ is a flat $R$-module and $Q \otimes_R E = 0$.
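The chain of isomorphisms alluded to here can plausibly be written out as follows; this is our reconstruction from the two facts named in the text (Matlis duality and adjointness), not a quotation of the original display:

```latex
\begin{align*}
\Ext^i_R(Q, \hat{R})
  &\simeq \Ext^i_R\bigl(Q, D_R(E)\bigr)
    && \text{since } \hat{R} \simeq D_R(E), \\
  &\simeq D_R\bigl(\Tor^R_i(Q, E)\bigr)
    && \text{by Hom-tensor adjointness ($E$ injective)}, \\
  &= 0
    && \text{since $Q$ is flat and } Q \otimes_R E = 0.
\end{align*}
```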
With this in mind, the long exact cohomology sequence of $\Ext^i_R(Q, \cdot)$ applied to the short exact sequence $0 \to R \to \hat{R} \to \hat{R}/R \to 0$ induces the isomorphism $\Hom_R(Q, \hat{R}/R) \simeq \Ext^1_R(Q, R)$. This provides the second isomorphism of the statement and finishes the proof of the first equivalence. For the second equivalence note that $\dim_Q \Hom_R(Q, E) = 1$ implies $D_R(D_R(Q)) \simeq Q$ and therefore $\hat{R} = R$ in view of the short exact sequence of Lemma 2.2 and $D_R(D_R(R)) \simeq \hat{R}$.
In the following we consider the general case of a one-dimensional domain. To this end let $\Ass \hat{R} = \{\mathfrak{q}_1, \ldots, \mathfrak{q}_r\}$ denote the set of associated prime ideals of the completion $\hat{R}$ of the domain $R$. Then $\mathfrak{q}_i \cap R = (0)$ and $\dim \hat{R}/\mathfrak{q}_i = 1$ for $i = 1, \ldots, r$.
Theorem 2.5. Let $(R, \mathfrak{m})$ denote a one-dimensional domain. Then there are isomorphisms
$$\Hom_R(Q, E) \simeq \bigoplus_{i=1}^{r} E_{\hat{R}}(\hat{R}/\mathfrak{q}_i) \quad \text{and} \quad Q \otimes_R \hat{R} \simeq \bigoplus_{i=1}^{r} \widehat{\hat{R}_{\mathfrak{q}_i}},$$
where $\widehat{\hat{R}_{\mathfrak{q}_i}}$ denotes the completion of $\hat{R}_{\mathfrak{q}_i}$, $i = 1, \ldots, r$.
Proof. It is known that $\Hom_R(H^1_{\mathfrak{m}}(R), E) \simeq \Hom_{\hat{R}}(H^1_{\hat{\mathfrak{m}}}(\hat{R}), E)$ is the dualizing module $\omega_{\hat{R}}$ of $\hat{R}$. Its minimal injective resolution as an $\hat{R}$-module has the following form: $0 \to \omega_{\hat{R}} \to \bigoplus_{i=1}^{r} E_{\hat{R}}(\hat{R}/\mathfrak{q}_i) \to E \to 0$ (for these results on the dualizing module see e.g. [2, Section 3.3]). Applying the Matlis dual functor $D_R(\cdot)$ to the short exact sequence of Lemma 2.2 provides a short exact sequence of $\hat{R}$-modules $0 \to \omega_{\hat{R}} \to \Hom_{\hat{R}}(Q \otimes_R \hat{R}, E) \to E \to 0$. Whence there is a commutative diagram with exact rows. By the snake lemma it yields an isomorphism $\bigoplus_{i=1}^{r} E_{\hat{R}}(\hat{R}/\mathfrak{q}_i) \simeq \Hom_{\hat{R}}(Q \otimes_R \hat{R}, E)$ and therefore $\Hom_R(Q, E) \simeq \bigoplus_{i=1}^{r} E_{\hat{R}}(\hat{R}/\mathfrak{q}_i)$. By Matlis duality (see Remark 2.1) one obtains the remaining isomorphisms.
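The final Matlis duality step is only referred to above; a plausible way to spell it out, combining Remark 2.1 with Theorem 1.1 applied over $\hat{R}$, is the following sketch (our reconstruction, not the original display):

```latex
\begin{align*}
Q \otimes_R \hat{R}
  &\simeq D_R\bigl(D_R(Q)\bigr)
    && \text{(Remark 2.1, via the sequence of Lemma 2.2)} \\
  &\simeq D_R\Bigl(\bigoplus_{i=1}^{r} E_{\hat{R}}(\hat{R}/\mathfrak{q}_i)\Bigr)
    && \text{(decomposition of } \Hom_R(Q, E) \text{ above)} \\
  &\simeq \bigoplus_{i=1}^{r} \widehat{\hat{R}_{\mathfrak{q}_i}}
    && \text{(Theorem 1.1 over } \hat{R}\text{, each } \hat{R}/\mathfrak{q}_i \text{ complete)}
\end{align*}
```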
REMARKS
We conclude with a few remarks on the previous results.
Remark 3.1. Theorem 1.1 does not hold for a prime ideal $\mathfrak{p} \subset R$ in a complete local ring $R$ with $\dim R/\mathfrak{p} > 1$. To this end let $(R, \mathfrak{m})$ be a complete two-dimensional local domain. Then $E_R(R) = Q$, the quotient field of $R$. But now $\Hom_R(Q, E) \simeq Q$ cannot be true. Assume that it holds. Then the natural map $Q \to D(D(Q))$ is an isomorphism too. This cannot be the case, as follows in view of Remark 2.1.
Let $k$ denote a field and $x$ an indeterminate over $k$. Consider the situation of $k[x]_{(x)}$ and its completion $k[[x]]$. Their quotient fields are $k(x)$ and $k((x))$ resp. Then $Q \otimes_R \hat{R} \simeq k((x))$, as follows by Theorems 2.4 and 2.5.
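Concretely, the computation behind this example can be written out as follows (a sketch under the identifications just made):

```latex
% R = k[x]_{(x)}, \hat{R} = k[[x]], Q = k(x); \hat{R} is a domain,
% so r = 1 and \mathfrak{q}_1 = (0):
\begin{align*}
Q \otimes_R \hat{R}
  \simeq k(x) \otimes_{k[x]_{(x)}} k[[x]]
  \simeq k[[x]][x^{-1}]
  \simeq k((x)),
\end{align*}
% the completion of \hat{R}_{\mathfrak{q}_1} = k((x)) (a field),
% in accordance with Theorems 2.4 and 2.5.
```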
Problem 3.2.
Let $\mathfrak{p}$ denote a one-dimensional prime ideal in a local ring $(R, \mathfrak{m})$. Suppose that $R/\mathfrak{p}$ is complete. We know that $\Hom_R(\hat{R_{\mathfrak{p}}}, E)$ is again an injective $R$-module. Since the natural homomorphism $E_R(R/\mathfrak{p}) \to I = D(D(E_R(R/\mathfrak{p})))$ is injective it turns out that $E_R(R/\mathfrak{p})$ is a direct summand of $I$. In view of Remark 2.1 it cannot be an isomorphism.
By the Matlis structure theorem it follows that $I \simeq \bigoplus_{\mathfrak{q} \in \Spec R} E_R(R/\mathfrak{q})^{\mu(\mathfrak{q}, I)}$, where $\mu(\mathfrak{q}, I) = \dim_{k(\mathfrak{q})} \Hom_{R_{\mathfrak{q}}}(k(\mathfrak{q}), I_{\mathfrak{q}})$ denotes the multiplicity of the occurrence of $E_R(R/\mathfrak{q})$ in $I$ (see e.g. [3, Section 3.3]). We know that $\mu(\mathfrak{q}, I) = 0$ for all $\mathfrak{q} \not\subseteq \mathfrak{p}$ and $\mu(\mathfrak{p}, I) \geq 1$. It is not clear to us whether $\mu(\mathfrak{p}, I)$ is finite or even equal to 1, nor which of the $\mu(\mathfrak{q}, I)$ are not zero. Remark 3.3. Let $(R, \mathfrak{m})$ denote a one-dimensional domain. One might ask whether the $Q$-rank of $Q \otimes_R \hat{R}$ is finite only if $\hat{R} = R$. This is not true, as follows from Nagata's Example (E3.3) (see [6, page 207]). | 2013-06-14T07:09:52.000Z | 2013-06-14T00:00:00.000 | {
"year": 2013,
"sha1": "10e9b39cb00adeee60e1291913887b75f3cf86fa",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1306.3311",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "10e9b39cb00adeee60e1291913887b75f3cf86fa",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
2238114 | pes2o/s2orc | v3-fos-license | Birth weight and breast cancer risk
Exploring whether the positive association between birth weight and breast cancer risk differs by other breast cancer risk factors may help inform speculation about biological mechanism. In these data, high birth weight was associated with breast cancer risk in younger and in more educated women, but was not associated overall.
Many, but not all, studies of birth weight and subsequent breast cancer risk suggest a positive association, with the most consistent finding being an association in younger or premenopausal women, often with either no or a reduced association among postmenopausal women (Ekbom et al, 1992; Michels et al, 1996; Sanderson et al, 1996; De Stavola et al, 2000; Innes et al, 2000; Andersson et al, 2001; Hilakivi-Clarke et al, 2001; Titus-Ernstoff et al, 2002; Vatten et al, 2002, 2005; Ahlgren et al, 2003; Kaijser et al, 2003; McCormack et al, 2003; Mellemkjaer et al, 2003; dos Santos Silva et al, 2004; Lahmann et al, 2004).
We evaluated the association of birth weight and breast cancer risk in the National Cancer Institute's (NCI) Combined Diethylstilbestrol (DES) Cohorts Follow-up Study. The strengths of this resource are the availability of weight from birth records, adult breast cancer risk factor data from three phases of questionnaire follow-up, and a subset of the population receiving very high pharmacologic doses of oestrogen, which could inform some of the speculation about possible hormonal mechanisms.
MATERIALS AND METHODS
Approvals for the study were obtained from the committees for the review of research involving human subjects at the field centres and the NCI.
The NCI DES Combined Cohort Study started in 1992 with the aggregation of prior US cohorts of individuals with medical record documentation of DES exposure and a comparable cohort of unexposed women (Bibbo et al, 1977;Labarthe et al, 1978;Greenberg et al, 1984). Questionnaires were mailed to participants in 1994, 1997, and 2001, and the National Death Index (NDI)-Plus was used to identify women whose whereabouts were unknown. Of the 5847 eligible subjects with birth weight data who were free of breast cancer at the start of follow-up, 97 developed breast cancer and 1245 were lost before the end of follow-up in 2001; the remaining 4505 were followed through the 2001 data collection phase. Incident cases of breast cancer were identified through questionnaire self-reports and searches of the NDI-Plus. Pathology reports or death certificates were obtained for 91% of the reported breast cancer cases eligible for analysis, confirming invasive disease in 88% and in situ disease in an additional 11%. Only primary invasive cases were analysed.
Data on birth weight and gestational age were available from obstetrical charts for 80% of the women. For the remaining 20%, these data were ascertained from the mothers at the time of their daughter's original enrollment in the study (the average age of the daughters = 24 years). Information on covariates was obtained from the study questionnaires, obstetrical records or interviews, or from earlier questionnaires from the original cohort studies.
Follow-up began on 1 January 1978 (or the date of first enrollment if it occurred later). Person-years accrued until the earliest of the following dates: first breast cancer diagnosis, last known follow-up, death, or return of the 2001 questionnaire. The median number of follow-up years was 23.5 (range 0.1-25.9 years) for a total of 118 985 person-years.
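The person-time bookkeeping described here amounts to the following toy sketch; the records, dates, and field names are invented for illustration:

```python
from datetime import date

# Hypothetical records: (entry, diagnosis, lost_to_follow_up, death, 2001_return)
women = [
    (date(1978, 1, 1), date(1995, 6, 1), None, None, date(2001, 8, 1)),
    (date(1980, 3, 15), None, date(1992, 2, 1), None, None),
    (date(1978, 1, 1), None, None, None, date(2001, 5, 20)),
]

total_pyears = 0.0
for entry, dx, lost, died, q2001 in women:
    # Person-years accrue until the earliest applicable end-of-follow-up date
    exit_date = min(d for d in (dx, lost, died, q2001) if d is not None)
    total_pyears += (exit_date - entry).days / 365.25

print(f"total person-years: {total_pyears:.1f}")
```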
Poisson regression analysis was used to estimate the age-adjusted incidence rate ratios of breast cancer for each category of birth weight and gestational age. A test for trend was performed using an ordinal variable for the birth weight categories. To assess confounding, estimates were individually adjusted for each of the covariates. As a hypothesis-generating exercise, interactions of birth weight with the collected covariates were assessed.
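To make the modelling step concrete, a minimal sketch of such a Poisson rate-ratio analysis is given below; the aggregated counts, the category coding, the omission of the age adjustment, and the use of statsmodels are all illustrative assumptions of ours:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical aggregated data: one row per birth weight category,
# with 3000-3499 g as the reference (both indicators zero)
df = pd.DataFrame({
    "bw_lt3000": [1, 0, 0],            # indicator for <3000 g
    "bw_gt3500": [0, 0, 1],            # indicator for >3500 g
    "cases":     [25, 40, 32],         # breast cancer cases
    "pyears":    [31000.0, 45000.0, 34000.0],
})

X = sm.add_constant(df[["bw_lt3000", "bw_gt3500"]])
model = sm.GLM(df["cases"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["pyears"]))   # log person-years as offset
fit = model.fit()

# Exponentiated coefficients are the incidence rate ratios vs the reference
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))                 # 95% confidence intervals
```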
RESULTS
Birth weight was not associated with attained age, age at first birth/parity, menopausal status, or family history of breast cancer, but was inversely associated with mother's smoking status and use of DES during pregnancy (Table 1). An inverse association between birth weight and age at menarche was also suggested. Birth weight tended to be positively associated with adult height (r = 0.25, P < 0.0001), BMI (r = 0.03, P = 0.06), and BMI at age 20 (r = 0.04). Overall, there was no association between birth weight and breast cancer risk comparing women who weighed <3000 g (rate ratio (RR) = 0.93) or >3500 g (RR = 1.09) with women who weighed 3000-3499 g at birth (P for trend = 0.69) (Table 2), and there was no obvious pattern in the association of gestational age with breast cancer incidence (P for trend = 0.66). These results were similar with simultaneous adjustment for age and gestational age in the birth weight models, or age and birth weight in the gestational age models (data not shown). Estimates changed less than 10% with further adjustment individually for calendar year and the variables listed in Table 1 (data not shown). With a more detailed examination of birth weight, the RRs were 1.3, 0.82, 1.1, and 1.1 for <2500 g, 2500-3000 g, 3500-3999 g, and >4000 g compared with 3000-3499 g.
Among women under the age of 40 years, the RR for women who weighed >3500 g at birth was 2.19 (95% confidence interval (CI) 0.83-5.7) compared with those who weighed 3000-3500 g (Table 3). As the CI indicates, this was an imprecise estimate, based on only 10 cases. High birth weight was associated with an elevated breast cancer risk in highly educated women but a reduction in risk in less-educated women (P for interaction = 0.004). However, neither of the estimates was statistically significant and the latter was based on only two exposed cases. There was no evidence of interaction in the association of birth weight with breast cancer incidence by any other breast cancer risk factor, including in utero DES exposure, although there were few cases in many of these subanalyses, reflected by unstable estimates and wide CIs (data not shown). In analyses restricted to the DES-exposed women, the risk estimates for birth weight and breast cancer by education and age strata were similar to those observed in the combined group of exposed and unexposed women (data not shown).
DISCUSSION
Most studies find evidence of a positive association between birth weight and breast cancer risk, but several have not (Ekbom et al, 1997; Sanderson et al, 1998, 2002; Titus-Ernstoff et al, 2002; Hodgson et al, 2004). Although there was no overall association in our data, risk was elevated, albeit not statistically significantly, with high birth weight in younger women, consistent with previous observations (Michels et al, 1996; Sanderson et al, 1996; De Stavola et al, 2000; Innes et al, 2000; Mellemkjaer et al, 2003; McCormack et al, 2005).
The effect of birth weight varied by level of education, with an increased risk for high birth weight in more educated women and an apparent risk reduction in less-educated women. While earlier studies controlled for social class (Ekbom et al, 1997; De Stavola et al, 2000; Sanderson et al, 2002; Vatten et al, 2002, 2005; Titus-Ernstoff et al, 2002; McCormack et al, 2003, 2005; Lahmann et al, 2004; dos Santos Silva et al, 2004), none found evidence of confounding of the birth weight and breast cancer association. Only one investigated the interaction of birth weight and education (Titus-Ernstoff et al, 2002), reporting a stronger association of high birth weight with breast cancer risk in women whose fathers were the most educated. As discussed elsewhere (Hodgson et al, 2004), most studies have been conducted in Caucasians from high-risk populations. Results from studies in a relatively disadvantaged population in the US (Hodgson et al, 2004) and in Chinese women with limited education (Sanderson et al, 2002) suggest an inverse association of birth weight and breast cancer. If the association of birth weight with breast cancer differs by social class, this might explain some of the heterogeneity of findings reported in the literature on birth weight and breast cancer risk. It would be useful to know whether any of the other studies with information on socioeconomic status have similar findings.
If the positive association of birth weight and breast cancer risk observed among younger women and those with more education is real and reflects differences in biology, our observation argues against the hypothesis that the operable mechanism is mediated through higher levels of oestrogen. Most of these women (and all in the analyses restricted to DES-exposed women), regardless of their birth weight, received pharmacologic doses of oestrogen during prenatal breast development. Recent observations that cord blood oestrogen levels (reflecting fetal exposure) are not associated with birth weight (Troisi et al, 2003) also undermine the proposed oestrogen mechanism.
In conclusion, while there was no overall association, we found an elevated risk of breast cancer with high birth weight among younger women and those of higher educational attainment, findings consistent with several other observations. If true, these subgroup differences might explain some of the inconsistencies between existing studies of this relationship. In addition, the presence of the association in our DES-exposed population argues against the popular hypothesis that such a mechanism is oestrogen mediated. | 2017-09-02T02:27:38.084Z | 2006-04-25T00:00:00.000 | {
"year": 2006,
"sha1": "e30799f613b47c9fdfef70d520d89d0389c6b43f",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/6603122.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e30799f613b47c9fdfef70d520d89d0389c6b43f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252007729 | pes2o/s2orc | v3-fos-license | Multi-stakeholder Engagement for the Sustainable Development Goals: Introduction to the Special Issue
The world is not on track to achieve Agenda 2030, the approach chosen in 2015 by all UN member states to engage multiple stakeholders for the common goal of sustainable development. The creation of the 17 Sustainable Development Goals (SDGs) arguably offered a new take on sustainable development by adopting hybrid and principle-based governance approaches, in which public, private, not-for-profit and knowledge institutions were invited to engage around achieving common medium-term targets. Cross-sector partnerships and multi-stakeholder engagement for sustainability have consequently taken shape. But the call for collaboration has also come with fundamental challenges to meaningful engagement strategies, particularly when private enterprises try to establish elaborate multi-stakeholder configurations. How can the purposes of businesses be aligned, through principle-based multi-stakeholder partnerships, with the purpose of a common sustainability agenda? In selecting nine scholarly contributions, this special issue aims at advancing this discourse. To stimulate further progress in business studies, this introductory essay furthermore identifies three pathways for research on multi-stakeholder engagement processes in support of the Decade of Action along three coupling lines: multi-sector alignment (relational coupling), operational perception alignment (cognitive coupling) and goal and strategic alignment (material coupling).
The Context: The Growing Importance of Principle-Based Approaches
The adoption of the 17 Sustainable Development Goals (SDGs) in September 2015 by all 193 United Nations member countries created a common framework for sustainability: Agenda 2030. The SDGs were adopted as a universal call to action to end poverty, protect the planet, and ensure that by 2030 all people enjoy peace and prosperity. Almost all countries agreed on "a shared blueprint for peace and prosperity for all people and the planet" (https://sdgs.un.org/goals). The process of creating the goals also represents the most inclusive and participatory global approach to strategy formulation and development to date. The goals were established following a massive, three-year process of global multi-stakeholder consultation in which hundreds of big and small corporations, governments, civil society groups, knowledge institutes and other organizations participated. The SDGs can therefore arguably be considered the most all-encompassing, ambitious, and action-oriented agenda for progress on a global scale ever agreed upon by humankind. This does not mean, however, that the SDGs are free of flaws or criticism, but they do point to a way forward for addressing humanity's sustainability challenges in the future (Van Tulder & Van Mil, 2023).
The common aspiration of the SDGs also signaled the increased recognition of the urgency to take a 'systems approach' to sustainable development ('grand') challenges. These challenges include ending poverty and other deprivations while developing strategies to improve health and education, reduce inequality, support economic development, and address the urgency of the climate crisis, as well as the critical need to protect and regenerate lands, waters and the biosphere, almost all at the same time. In an increasingly complex world facing major challenges, the adoption of the 17 Goals should help build capacity, strategy, commitment, and engagement to address humanity's most threatening challenges in an inclusive manner.
The SDGs introduced a 'principles-based', 'governing through goals' approach to sustainable development (Kanie and Biermann 2017). The 17 goals and 169 targets selected are guided by five basic principles (People, Planet, Prosperity, Peace and Partnering) and one overarching principle, 'no one left behind'.
To realize the SDGs, the active participation of corporations was considered vital. This recognition also triggered a form of 'hybrid' or 'transition' governance for development, guided by the common good on a global scale, that has not been tried before. A 'hybrid governance' approach can be designed with representatives of the public, private, and not-for-profit sectors as well as knowledge institutions working together to achieve goals through targeted action (van Tulder 2021). As such, the hybrid governance structure of the SDG approach was intended to channel progress in several concrete areas by means of goal prioritization, improved narratives to facilitate broad awareness and commitment, better data development, and the instilling of active participation in, for instance, joint research or the creation of new platforms and partnerships around the implementation of the common agenda. This set-up signaled a considerable shift away from traditional ways of thinking about sustainable development as the prime responsibility of governments. Companies, whether big or small, local or multinational, in all sectors of society, thereby have a crucial role to play, according to the UN: "We acknowledge the role of the diverse private sector, ranging from micro-enterprises to cooperatives to multinationals […] in the implementation of the new Agenda" (United Nations, 2015: 10).
Companies took up the challenge for a variety of ethical and strategic reasons. By 2016, 87% of CEOs believed that the SDGs provide an opportunity to rethink approaches to sustainable value creation, while 78% already recognized opportunities to contribute through integrating the SDGs into their core business (UNGC and Accenture Strategy 2016). However, as van der Waal and Thijssens (2020) point out, there seems to be a tendency to subscribe to the SDGs rather symbolically and strategically, primarily for legitimacy reasons. In a similar vein, Gneiting and Mhlanga (2021) suggest that companies aim at realizing reputational gains from their SDG-related activities rather than significantly contributing to them. Notwithstanding these early critical observations, company representatives replied that the SDGs promised to unlock an estimated annual US$12 trillion in business investment opportunities (Business & Sustainable Development Commission, 2017) for companies able to come up with innovative sustainable solutions and inclusive business models. The alliance between the UN and major companies also reiterated the importance of a distinct dimension that had not been embraced earlier in initiatives around sustainability issues: rather than a 'rule-based' approach to sustainability (stressing laws, codes of conduct and treaties), the SDG approach embraced 'principles-based' management practices as agreed upon in the decade before the SDGs materialized. These included frameworks partly initiated by companies themselves or in coalitions with other stakeholders, like the Ten Principles of the UN Global Compact, the Guiding Principles on Multinational Enterprises by the OECD, principles of responsible organizing as initiated by ISO (26000), or reporting principles as initiated by the Global Reporting Initiative (GRI) (Van Tulder & Van Mil, 2023). The extent to which the SDG agenda can lead to impact on sustainable development can thus become a test of whether a 'principles-based' approach is better than, worse than, complementary to, or equal to a 'rules-based' approach towards corporate action on sustainable development.
Slow Progress, and What is Needed?
The results of the first years of the SDG principles-based governance experiment show a mixed picture. Despite widespread support for the goals by almost all leading companies, implementation trails behind expectations and ambitions at the macro-level of analysis (van Tulder & Van Mil, 2023). Since 2015, progress on the SDGs has been slow and, since 2020, even negative in many areas. A 2018 UN global checkup report already warned that progress on the SDGs proved uneven "across regions, between sexes and among people of different ages, wealth and locales, including urban and rural dwellers", and was not moving fast enough on almost all accounts (UN 2018). Conflict, war and violence were identified as significant exogenous barriers to poverty eradication and sustainability, while progress had not yet reached the people who need it most. But could this situation be attributed to corporate behavior? One particularly relevant finding for assessing the potential role played by companies was that they were struggling to integrate the SDGs into their core activities and, most importantly, to engage in meaningful collaboration with other societal actors. The UN Global Compact Progress Report 2019, for instance, found that while 71% of CEOs recognized the critical role that business could play in contributing to delivery of the SDGs, a mere 21% believed that business is indeed playing that role (UN Global Compact 2019).
Despite these findings, the SDGs can still be considered the most sophisticated principles/goals-based approach available. In a unique confluence of circumstances, one that would probably not have arisen one or two years later, the SDGs in 2015 provided a common agenda. This agenda is supported by a coalition of stakeholders willing to coordinate action and to collect and harmonize relevant databases around each of the 230 indicators constituting the SDG framework. The latter ambition was organized through the creation of so-called 'custodian agencies' like the World Bank, the OECD, or specialized organizations like the International Organization for Migration (IOM) or UNESCO. They all promised to work on a unifying scheme of common indicator development for each SDG and the organization of voluntary reporting to keep track of stages of transition.
These efforts have seriously increased the analytical intelligence of the world by producing a large variety of interim evaluation studies, corporate reports, NGO studies, SDG benchmarks, SDG repositories and the like (cf. Van Tulder & Van Mil, 2023). But they also showed the existence and nature of a sizable gap between intention and realization, which prompts the need for further studies on root causes and ways to proceed at multiple levels of analysis. Some observers contend that these findings illustrate the failure of the whole SDG exercise (Deacon, 2016; Buhmann, Jonsson and Fisker, 2018; cf. Waage et al., 2015). Others have taken the opposite view that the SDGs are more needed than ever: thus, the plea for a Decade of Action by the UN.
Whatever perspective is taken, however, its holders would probably agree that one of the biggest normative and strategic challenges lies in the middle: how to link micro-level actions with macro-level demands and outcomes through multi-stakeholder and partnering strategies.
The Case for Collaboration in Principles-Based Approaches
The critical importance of stakeholder engagement and partnerships was already acknowledged in the basic set-up of the SDGs. The United Nations Department of Economic and Social Affairs (UNDESA), which acts as the Secretariat for the SDGs, highlighted this as follows: "Sustainable development decision making requires broad participation of all. The Division therefore aims to support the effective participation of Major groups (as defined in Agenda 21) and other stakeholders in the UN political process, including through efforts to build their capacity, knowledge and skills base" (UN 2015a).
Agenda 2030, in these terms, has brought clarity about the common purpose that humankind needs to pursue in collaboration to achieve the goals of the agenda, and a request for all, including business organizations, to respond to the threats to sustainability. This approach builds on cross-sector partnerships as a key enabler and the principal way forward to serve sustainable development goals and/or address wicked problems and common good challenges (Austin & Seitanidi, 2014; Waddock, Meszoely, Waddell and Dentoni, 2015). 'Wicked problem' is a term that reflects the true nature of the SDGs: they are complex and interconnected issues (van Tulder, 2018; Van Tulder & Van Mil, 2023). As highlighted in previous publications, multi-stakeholder initiatives and partnerships are complex and dynamic arrangements which may be subject to specific contextual circumstances (Utting & Zammit, 2009), and additional research has been suggested to study multi-stakeholder arrangements (Eweje, Sajjad, Deba Nath and Kobayashi, 2021). While the need for collaboration has been widely acknowledged (Albrectsen, 2017), further research is needed to reveal the uniqueness of each industry, sector, and partnership. Several publications have highlighted the challenges of implementing the SDGs in the strategies and operations of business organizations, as well as the financial, reputational and organizational consequences thereof (cf. MacDonald et al., 2018; Rashed & Shah, 2021; Soberón et al., 2020). However, prior research has not yet sufficiently answered the question of how far businesses need to change and adapt in order to incorporate and achieve the SDGs. The SDGs, formulated as global goals, ask for actions on the micro level (organizations) and the meso level (networks, industries) to achieve results on the macro level (countries, continents, global sustainability). This demand for action affects how organizations strategize and organize their approach to these goals. This special issue contributes to the multi-stakeholder literature by further studying the role of partnerships and multi-sector alignment, operational perception alignment, and, finally, goal and strategic alignment to facilitate change.
One of the biggest bottlenecks, therefore, appears in the practical elaboration of the SDG agenda in concrete partnering strategies, in particular when partnerships are organized across societal sectors (bringing together state, civil society and market actors in varying constellations). Effective cross-sector partnerships (CSPs; cf. Bryson et al., 2015; Selsky & Parker, 2005; van Tulder et al., 2016) prove difficult to operationalize and to sustain. A global survey on 'external engagement' (McKinsey, 2020), for instance, found that nearly 60 percent of CEOs rank stakeholder engagement among their top three priorities. However, the survey also indicates vast 'intention-realization' gaps, with merely 7% of respondents stating that their organization regularly aligns the interests of stakeholders and their business. Moving on from external stakeholder engagement to a committed, formal, and embedded CSP strategy proves even more challenging, which probably explains why many companies still underutilize the potential of partnerships. The UN Global Compact (2020) assessed that only 52% of their signatories are engaging in partnership projects with public or private organizations, whilst endorsing that multi-stakeholder collaboration is key to achieving the transformations and systemic effects needed.
Cross-sector collaboration is complex and 'collaborative advantage' challenging to create and harness (Bryson et al., 2016), which is among the prime reasons why progress on the SDGs has been slow. In 2019, UN DESA concluded that despite the strong rhetoric and overwhelming efforts put into partnering around the world, "the reality is that we are still only scratching the surface in terms of the number, and quality, of partnerships required to deliver the SDGs" (UNDESA, 2019). Undoubtedly, Agenda 2030 has triggered business research into understanding the obligations, contributions and impact of business engagement in sustainability goals. The academic literature on SDGs and company involvement therein reveals (cf. Montiel et al., 2021; Pizzi et al., 2020) a tendency to focus on certain aspects of company involvement in the SDGs and to apply a macro-perspective or "global view" (ibid. 13). Other aspects that are prominent in business research are strategic and performance management questions (ibid.). However, so far, the relationship between micro-level initiatives of businesses and meso-level factors and aspects of collaboration, on the one hand, and consequences for sustainability on the macro level, on the other, is under-researched and needs attention to understand the processes that take place in multi-stakeholder engagement.
Principles-Based Approaches to Complex Challenges: the Need for Stakeholder Engagement
At face value, the SDG agenda seems to be a normative agenda, but can it also serve as a strategic agenda for companies? An important aspect of business strategy is survival and business success, also in terms of financial and organizational viability. Business sustainability is not the same as social sustainability. The important question in this context is therefore why business organizations should engage with social sustainability goals. That also raises questions about how business perceives its role in society. And taking on social responsibilities may imply adopting an ethical perspective on business and its goals. What perspective does the SDG agenda imply?
As the political orientation of the UN demands, and as the choice of words in the preamble underlines, the Agenda 2030 is foremost a normative agenda whose goals are not to be understood as instrumental goals but as absolute goals; that is, the goals of the Agenda 2030 are directly related to universal human rights. And they go beyond that to describe intact nature and animal welfare as desirable.
The achievement of Agenda 2030 is partly impeded by its overall complexity and all the issues related to it (Waddock, Meszoely, Waddell, Dentoni, 2015). Organizations are limited in the support they can offer to innovate towards accomplishing the SDGs. The intricacy of the 17 goals and the diversity of stakeholders related to the issues addressed by organizations can be seen as a positive force as well as a threat. The diversity of stakeholders affected forces organizations to pursue multi-stakeholder engagement and collaboration in order to achieve the SDGs (Rotheroe et al., 2003). On the other hand, pre-existing institutional arrangements and procedures for facilitating or fostering collaboration between multiple stakeholders are not available and may not even be possible. In many cases, collaboration for the SDGs can even be typified as a "[…] result of emergent and unforeseen interorganizational dynamics […]" (Whiteman and Parker, 2019: 367). The United Nations acknowledges that progress on the SDG agenda is too slow and asks how the pace of change can be accelerated. The voluntary nature of the SDGs, the need to clarify organizational and individual moral and ethical obligations, the absence of legal enforcement and sanctions, and the lack of formal processes to ensure the accomplishment of the goals (Biermann et al., 2017; Bowen et al., 2017) mean that the SDGs are often perceived as mere recommendations whose targets lack a common legal agenda for enforcement (Persson, Weitz and Nilsson, 2016; Van Tulder & Keen, 2018).
There are several reasons why the SDGs can be considered strategic and instrumental as well as normative and absolute.
(a) At least for the UN, the goals of the 2030 Agenda are SMART in the sense of Peter Drucker (1954) and thus follow the operational logic of business organizations, which are to a large extent guided by clearly formulated goals (Veggeland, 2014). The goals are specific because each goal describes a clear object; they are measurable because in some cases the target is concretely specified (e.g., zero poverty, zero hunger), and in other cases a benchmark is available in the form of the status quo. The goals are achievable, even if their attainment is not guaranteed; they are relevant because they contribute directly to the realization of human rights. Finally, they are time-bound, which is already given by the Agenda's deadline of 2030. However, it is questionable whether the goals can be translated directly into SMART business goals.

(b) Moreover, the goals also remain vague in some important respects. It is not clearly defined who has to do what, how the measures to achieve the respective goals are to be financed, and who can assess whether the goals have been achieved. In connection with this, no sanctions are named, either positive or negative.

(c) Furthermore, there are two tensions in the setting of the goals. First, the goals are about social and environmental sustainability concerns; at the same time, the goals call for cooperation between different sectors and forms of organization. These goals differ very clearly from typical business goals (greater sales, greater market share, increased efficiency, customer satisfaction, or employee satisfaction), but at the same time they are formulated in such a way that companies should commit to them and collaborate in their implementation. Second, these goals are formulated as normative goals, but they are written for actors with a strategic agenda.

(d) Finally, the SDGs explicitly refer to "multi-stakeholder partnerships and voluntary commitments". This refers to two terms that are discussed very prominently but also controversially in the CSR and business ethics literature. The stakeholder concept was first introduced as a strategic management term by Freeman (1984). Freeman (1999) himself refers to the distinction between a descriptive, normative, and instrumental theory, and the discussion of whether stakeholder theory is normative, instrumental, or even strategic is not settled (cf. Freeman et al., 2020; Reed, 1999). The second concept that is contested in the CSR literature is that of voluntarism (cf. Gatti et al., 2019a, 2019b; Gössling, 2011). Until 2011, for example, the European Commission held on to the notion of voluntariness in its definition of CSR and then dropped it (cf. Gatti, Seele and Rademacher, 2019).
The notion of stakeholders and the necessity to engage them in common ethical and responsible actions to help achieve the SDGs has been extensively discussed in the literature. Most stakeholders seem to agree on the importance of the 17 SDGs and their 169 sub-targets. That said, the effectiveness of any specific intervention will depend on the way the goals and targets are interrelated and on their complexity; on the difficulty of deciding what should be done, which may also require legal reinforcement; on determining what contributes to moral and/or ethical value at the societal level (rather than just the profit motive or business operations); and on the ability of involved or affected stakeholders to work together towards a common vision (Van Tulder, 2018). Due to their intertwined, high level of complexity, the SDGs are described as wicked problems requiring cross-sector partnerships, the inclusion of multi-stakeholder perspectives, and the involvement of different partnerships to create systemic change (Van Tulder & Keen, 2018). Working with the SDGs reveals the importance of partnerships to help address wicked problems.
Even though the logics of the two sectors differ (the UN on the one hand as an international organization representing governments, not-for-profits, and civil society; business enterprises on the other hand as market-oriented actors), the SDGs aim to provide a translation service or function to increase the willingness of companies to collaborate, including across sectors, on solutions to the most pressing problems facing the world's population. In doing so, the SDGs offer important assistance in substantiating what sustainability means; at the same time, however, companies are offered room for maneuver, e.g., regarding which of the 17 goals they will contribute to. Moreover, the way in which business enterprises contribute is also not specified in terms of content. The 2030 Agenda offers important recommendations for implementing the idea of cooperation.
But this recommendation is at the same time a major challenge and management task. For it also means that management should understand and appreciate the relationship between partnership and multi-stakeholder engagement and attach significance to it for decision-making. This also formulates a task for management that has little to do with intra-organizational optimization and more to do with cross-level, cross-organizational, and cross-communication skills, and a willingness to commit to sustainability. In addition, there is the task of recognizing the importance of dialog and communication between different groups of stakeholders to ensure transparency and successful results. This includes dealing with the complexity of defining stakeholders, since stakeholders are a socially constructed phenomenon (Fineman & Clarke, 1996; Winn, 2001). Individuals cannot be assumed to belong to only one group; they often belong to more than one group, and stakeholder groups are heterogeneous (Crane & Livesey, 2003; Gao & Zhang, 2001; Winn, 2001). Identifying stakeholder groups and describing their characteristics and what they mean for relationships (Bryson, 2004; Mitchell et al., 1997) is a highly complex managerial task. This poses the interesting challenge of addressing the empirical question of which organizational and strategic management methods business enterprises do or should use to successfully work on the implementation of the SDGs, whereby the question of success also refers to the two references mentioned, namely the success of the respective enterprise on the one hand and the achievement of the societal goals of the Agenda 2030 on the other.
Contributions to This Special Issue
The call for papers for this special issue was published in 2020. The nine articles presented in this issue contribute to understanding the conditions and meaning of MSEs for the SDGs. To document these contributions, we analyzed the research papers in terms of the following points, which are also listed in Table 1:
- What challenge to sustainability does the article address?
- What are the main results?
- What data were used and how did the authors approach the research problem?
- What theory and method were used to analyze the data and answer the research question?
- What context-specific information was provided about the research setting?
The contributions selected for this special issue reflect the desire of international researchers to add empirical substance to claims that multi-stakeholder engagement offers relevant approaches to addressing sustainability challenges. The contributions also reveal that multi-stakeholder engagement 'with a common purpose' such as the SDGs allows for more focused and directed thinking about complexity.
To navigate the challenges of all the issues involved in achieving Agenda 2030 (Waddock et al., 2015), principles-based collaborative strategies in support of the SDGs are recommended by all authors. Three strategic levels are discussed: the role of partnerships and multi-sector alignment, operational perception alignments, and, finally, goal and strategic alignments to facilitate change.
The Role of Partnerships and Multi-Sector Alignment
The role of partnerships and multi-sector alignment is discussed extensively in the submissions, as it has the potential to move beyond the fragmented configurations that currently prevail and to help increase and align the organizational fit between the complexity of the issue and the potential partners. Previous research highlighted the critical importance of ecosystem management (Dietz 2003) and described the necessity of societal cross-sector collaboration in support of sustainability (Heuer 2011). Similarly, the importance of ecosystems and their relevance in facilitating multi-stakeholder collaborations is discussed in this issue (Stubbs, Dahlmann and Raven 2022, THIS ISSUE).
All contributions highlight the importance of multi-stakeholder relationships. Examples are diverse in terms of geography and industries. An interesting example is the relational coupling of stakeholders in Ethiopia, designed to facilitate the achievement of sustainable prosperity that benefits local and international communities in a context of severe poverty and liquidity constraints. It reveals the importance of cooperation-facilitating agencies, dialogues, and collaboration across businesses, governments, and NGOs (Legesse Segaro and Haag 2022, THIS ISSUE).
Additionally, to prevent risks associated with power asymmetries among sectors (Waddell, 2000), differing notions of trust (Selsky & Parker, 2005) and turbulent cross-sector relationships (Trist, 1983), the importance of non-profit organizations potentially acting as meta-governors of collaborative innovation for sustainability is highlighted in several contributions (Martini, Rivellato, Martini and Marafioti 2022, THIS ISSUE). The role of the non-profit sector has already been highlighted in past contributions as a potential incubator of sustainability and social movements (Heuer 2011).
The People Who Facilitate These New Strategies Will Need Specific Cognitive Competencies

"Most of all, we need to understand that humankind and the natural environment are both part of the ecosystem" (Heuer 2011, p. 219). The literature has previously studied the main characteristics of an ecosystem management approach: it should be holistic, interdisciplinary, goal-oriented, participatory and designed to help people realize that they are part of the ecosystem, not separate from it (Slocombe, 1993). To support such participative and collaborative ecosystems for Agenda 2030, the choice of the facilitators' profile is therefore highly relevant. In addition, the type of communication used by these agents of change is essential to engage and facilitate any collaboration. To be successful, multi-stakeholder engagement must be participatory and requires a thorough understanding of processes of inter-organizational decision-making that integrate emotions and the role of ethical values (Alexander et al. 2022, THIS ISSUE).
Research shows how specific individual profiles can help overcome the difficulties of implementing strategic responses to Agenda 2030 challenges. When faced with changes, cognitive requirements are high in order to create coalitions and engage across industries, supply chains, and sectors. Specific agents' profiles can help create interconnectedness and inclusiveness (Fiandrino, Scarpa and Torelli 2022, THIS ISSUE). The cognitive roles played by key actors who can take multiple roles and engage with diverse stakeholders are critical. Some of these roles are described as sponsors, who have the needed influence to get organizations involved; others are champions, who are there on a day-to-day basis to make sure good things happen (see also Bryson et al., 2015). Others are 'influencers', 'facilitators', 'pioneers' and 'critical friends' (Stubbs et al. 2022, THIS ISSUE). Those playing these roles will, jointly with others, help foster dialogue, sharing, and the move toward more shared leadership (see also Quick 2015). They will help navigate the complexities by helping gain access to information, knowledge, resources and skills. Together they will help maintain a focused agenda and deliver the needed results. When engaging with several actors who sometimes have no experience of working together and do not share the same organizational priorities (Gray & Purdy, 2018; Innes & Booher, 2010), a common and collaborative approach can be very hard to create. To help cope with the complexity of the process, communication is key, and actors will benefit from clear messages rather than mixed messages (Karakulak and Stadtler 2022, THIS ISSUE).
Engaging in the Alignment of Operations and Perceptions Requires a Clarified Strategic Plan
Ecosystems are complex, dynamic and subject to an immense number of internal and external relationships (Heuer 2011). Creating coalitions in which diverse stakeholders share resources and engage together is extremely uncertain and presents unique challenges. Clarifying their strategic plans and helping each organization develop a sense of strategic belonging by engaging them on a common strategic agenda and targets is a way to avoid fragmentation and improve interagency collaboration: a common vision to adhere to, one that will help them pool their resources and work on a similar journey. To this end, Agenda 2030 and the 17 SDGs, along with their 169 sub-goals, provide an essential framework to help define a common global agenda for nations, organizations and civil societies (Williams and Blasberg 2022, THIS ISSUE). The UN SDGs help frame the tools, discussions, and interactions in a global discourse on humanity's challenges (Stubbs et al. 2022, THIS ISSUE).
To ensure the success of a new strategy designed around Agenda 2030, many implementation challenges must be addressed. To facilitate stakeholder engagement, Gutierrez, Montiel, Surroca and Tribo (2022, THIS ISSUE) have developed a typology of strategies for engaging with stakeholder groups, including opportunity exploration, uncommitted diversification, rainbow war, rainbow washing and progressive learning. When facing difficult issues, it is of utmost importance to clarify the common purpose and the benefits to be gained from collaboration (Huxham and Vangen 2005; Bryson et al., 2016) in order to facilitate the steering and navigation that is needed in the new landscape of SDG challenges (Sebhatu and Enquist 2022, THIS ISSUE).
Conclusion and Areas for Further Research
The call for proposals not only triggered a large number of potential contributions, but also showed that there are still major areas of research to be covered in the coming years. The aim of a special issue is not only to showcase present research, but also to stimulate future research. What can we learn from the present 'harvest' of special issue papers? Most of the work published in this special issue builds further on findings and discussions known from the multi-stakeholder and partnership alliance literature. The impact of multi-stakeholder processes is more difficult to research and thus is not yet well covered. The present research mostly focuses on interaction between partners, but not really on the ethical principles (like procedural justice; cf. Page et al., 2015) or governance principles (like hybrid governance or 'governing through goals') that might provide answers regarding the ultimate impact that effective partnering can have on a number of focused goals (cf. Van Tulder et al., 2016). Extant research on multi-stakeholder processes for the SDGs seems to favor governance over ethics, pragmatics over principles, and reactive (negative duty) approaches over proactive (positive duty) approaches. This tends to underestimate the principles-based potential of the SDGs agenda: common goals and principles that require a more pragmatic angle towards reaching goals. A future research agenda in support of the 'Decade of Action' (which can improve the contribution of business research to the much-needed acceleration of the SDGs) can then be as much strategic as ethical and normative, while the engagement of multi-stakeholders can be as much practical as principled.
Navigating research around relevant themes can then be guided by the following analytical scheme, which summarizes the state of research in business ethics on the SDGs, as shown in Fig. 1.

[Fig. 1. Navigating principles-based collaborative strategies in support of the SDGs]
Furthering the SDG agenda and contributing to the Decade of Action presents a number of challenges to researchers: (1) Neither the necessity for cross-sector collaboration to achieve the SDGs nor the importance of private sector contributions presents a particularly major issue for further ethical research. The relevance of collaboration for the creation of common goods is undisputed.
(2) The need for private engagement and collaborative (multi-sector) efforts in dealing with complex/wicked problems in a 'fair' and 'equitable' manner is also widely acknowledged. Work on so-called 'second generation' wicked problems and complexity theory (Head 2015; Termeer et al., 2019) shows that wicked problems cannot be solved per se, but can be addressed by multi-stakeholder arrangements and by the involvement of private actors.

(3) Effective MSPs for the SDGs partly depend on the translation of sustainability strategies into effective network and collaborative strategies aimed at achieving longer-term impact on complex issues such as the SDGs and contributing effectively to the Decade of Action. This translation poses a number of challenges: for example, how to use the nexus potential of the SDG agenda (cf. Stockholm Resilience Centre) while translating it into individual corporate action (cf. Van Zanten & Van Tulder, 2020); how to create the proper coalitions ex ante and design a proper theory of change and 'developmental evaluation' principles that also leave room to learn from the experience of the partnership to improve its impact along the way (Patton, 2021); and how to overcome the gap between strategic intent (and normative absolute principles) and operational realization (and operational instrumental business principles).

(4) These concerns boil down to the translation of corporate strategies into effective partnering strategies that can reap 'collaborative advantage' for common goals such as the SDGs. Here we see the biggest gaps in our understanding of relevant management practices in support of the SDGs. For further research we propose to focus on three types of alignment (and related coupling) questions, which also concern ways in which business ethics research on principles-based initiatives such as the SDGs can profit from interdisciplinary crossovers from a variety of scientific disciplines, in particular (1) strategic management (Gond et al., 2018), (2) human resources management, (3) political economy and governance studies, and (4) organization sciences. The three couplings may be described as follows:

- Multi-sector alignment and relational coupling: linking the relationships among partners to highlight, embrace and commit to the SDGs addressed;
- Operational perception alignment and cognitive coupling: linking implementation challenges related to translating intentions into realizations and the concrete cognitive requirements of effective managers;
- Goal and strategic alignment and material coupling: linking present materiality questions of strategic action, such as strategic plans, KPIs, and business models, to fully integrate the SDGs into corporate strategies.
[ad. 4a] Multi-sector alignment and relational coupling: The cross-sector partnering literature already shows great potential in addressing ways to look at the alignment between complex issues and corporate strategies. The extent to which partnerships can create sufficient 'complementary' value by aligning 'coalitions of the needed' (instead of the relatively fragmented 'coalitions of the willing' that presently prevail) can increase the organizational 'fit' between the complexity of the issue and the partnering configuration. This presents a promising area of further research (cf. Austin & Seitanidi, 2014; Van Tulder & Keen, 2018). More interdisciplinary work is required. In particular, business management literature and scholarship would also benefit from paying more attention to the public and nonprofit management and political science literatures, where collaboration, governance, and social movements have been important research topics for decades. To the extent that governments, government agencies, and nonprofits are necessary for the achievement of the SDGs, the collaborative advantage of building on these literatures should be pursued. Given the urgency of the challenges, time should not be wasted on reinventing the wheel.
[ad. 4b] Operational cognitive coupling: The strategic management literature talks about strategic 'tinkering' (Mintzberg, 1987) as a relevant frame to assess more or less 'salient' implementation strategies. The gap between 'intention' and 'realization' is not necessarily a 'moral gap.' It may simply be a part of day-to-day practice, especially in the case of organizations trying to address complex issues in collaborative efforts. Thus, despite good intentions, unintended outcomes are likely to emerge out of these innovative processes designed to address complex situations. An adequate cognitive (collaborative) mindset is needed to manage these processes. Clarification of the process, the animating vision, and dialogue to discuss hurdles and strategies are key factors supporting intended outcomes (Legesse Segaro et al. 2022, THIS ISSUE).
[ad. 4c] Strategic and material coupling: Addressing the SDGs effectively also requires the adaptation of more traditional strategic management approaches to situations that go beyond what any organization can accomplish by itself, and where no organization is wholly in charge. Strategic management of a single organization involves a fairly well-known set of tasks and often involves the development of a strategic management system to ensure direction, alignment, and commitment across the organization (Drath et al., 2008; Whittington & Yakis-Douglas, 2020). Strategic management of collaborations and even social movements, though less studied, is becoming more common and necessary, given the boundary-crossing challenges facing the world. Leading several organizations to achieve a common purpose has been called leading strategy management-at-scale, meaning at the scale of the challenge to be addressed (Bryson et al., 2021). Such cross-boundary issues include the global COVID-19 pandemic and how to achieve the SDGs. Such issues occur within a shared-power, no-one-wholly-in-charge environment and demand a response from multiple organizations. Various strands of reasonably aligned, if not directly coordinated, effort are required. Two complementary approaches to strategy management-at-scale are collaboration itself and, beyond that, community organizing, coalition building, and advocacy.
One popular approach to collaboration in the US is called Collective Impact (CI), which became quite popular after a now widely cited article by John Kania and Mark Kramer with a catchy title, "Collective Impact," in a 2011 issue of the Stanford Social Innovation Review. The authors asserted that achieving CI requires a disciplined cross-organizational and cross-sector approach on a scale that matches the challenge. They argued that "five conditions" were necessary to achieve collective impact (pp. 39-40): a common agenda, a shared measurement system, mutually reinforcing activities, frequent and structured communications, and a "backbone organization." The approach has been modified since, but the basic idea still has merit (Bryson et al., 2021; cf. Kania et al., 2022).
The most serious critique of the CI approach is that it has great difficulty achieving deep-seated system change, equity, and justice (e.g., Christens & Inzeo, 2015; Wolff, 2016). This critique draws limits around the situations in which CI is likely to be helpful. Specifically, truly addressing issues of equity, social justice, and system change requires community organizing, coalition building, and advocacy (Wolff et al., 2016; Almeida, 2019). CI initiatives and community organizing efforts can of course be mutually reinforcing. System changes that require better alignment and interorganizational service coordination may be achieved relatively quickly using a CI approach. When "changes require concessions from entrenched interests, or reorganization and reorientation of existing institutions," community organizing, coalition building, and advocacy are "likely the more effective approach" (Christens & Inzeo, 2015, p. 431). When both kinds of changes are needed, the two approaches can be complementary.
The nine contributions selected for this special issue help address the question of how collaboration and communication in multi-stakeholder contexts can contribute to effectively addressing global sustainability challenges as defined by the SDGs. The articles show that while multi-stakeholder approaches can produce significant gains, the approaches are never particularly easy to pursue. We have outlined three major research pathways that address the role of members of a partnership, including alignment across sectors, operational perception alignment, and goal and strategic alignment. In addition, our model discusses the connections between the grand challenges behind Agenda 2030 on the one hand, and the contributions of businesses to these goals on the other. Finally, all contributions in this issue contribute to the unfolding of a research agenda that centers on the question of the possible impact of multi-partner collaboration on sustainability goals.
Funding Open access funding provided by Jönköping University.
Conflict of interest
There is no conflict of interest.
Research Involving Humans and Animals
The research does not involve humans and/or animals.
Consent to Participate Informed consent was not required.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2022,
"sha1": "4324291cfe8f4fc0ea98f33538d3b3dd95926bcd",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10551-022-05192-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4324291cfe8f4fc0ea98f33538d3b3dd95926bcd",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
The Generalized Gielis Geometric Equation and Its Application
Many natural shapes exhibit surprising symmetry and can be described by the Gielis equation, which has several classical geometric equations (for example, the circle, ellipse and superellipse) as special cases. However, the original Gielis equation cannot reflect some diverse shapes due to the limitations of its power-law hypothesis. In the present study, we propose a generalized version by introducing a link function. Thus, the original Gielis equation can be deemed a special case of the generalized Gielis equation (GGE) with a power-law link function. The link function can be based on the morphological features of different objects, so the GGE is more flexible in fitting shape data than its original version. The GGE is shown to be valid in depicting the shapes of some starfish and plant leaves.
Introduction
In geometry, the equation of a circle in the Euclidean plane is usually expressed as

$$x^2 + y^2 = r^2 \quad (1)$$

where x and y are the coordinates of the circle on the x- and y-axes, respectively, with r the radius. The circle is a special case of the ellipse:

$$\frac{x^2}{A^2} + \frac{y^2}{B^2} = 1 \quad (2)$$

where A and B (A ≥ B > 0) represent the major and minor axis semi-diameters, respectively. Interestingly, circles and ellipses, as well as squares and rectangles, can be regarded as special cases of Lamé curves [1,2], whose mathematical expression is

$$\left|\frac{x}{A}\right|^{n} + \left|\frac{y}{B}\right|^{n} = 1 \quad (3)$$

where n is a real number. This equation has been shown to be valid for describing the actual cross-sections of tree rings and bamboo shoots [2-5]. Equation (3) in polar coordinates can be rewritten as

$$r(\varphi) = \left(\left|\frac{\cos\varphi}{A}\right|^{n} + \left|\frac{\sin\varphi}{B}\right|^{n}\right)^{-1/n} \quad (4)$$

where r and $\varphi$ are the polar radius and the angle between the straight line on which the polar radius lies and the x-axis, respectively. Gielis proposed a more general polar equation that can reflect more complex natural shapes [1,2]:

$$r(\varphi) = \left(\left|\frac{1}{A}\cos\left(\frac{m}{4}\varphi\right)\right|^{n_2} + \left|\frac{1}{B}\sin\left(\frac{m}{4}\varphi\right)\right|^{n_3}\right)^{-1/n_1} \quad (5)$$

where $n_1$, $n_2$ and $n_3$ are real constants; the positive integer m was introduced to make the curve generate arbitrary polygons (with m angles), consequently enhancing the flexibility of Lamé curves. We refer to Equation (5) as the original Gielis equation (OGE) in the following text for simplicity. OGE has been used to simulate many natural shapes, e.g., diatoms, eggs, cross-sections of plants, snowflakes and starfish [1,2]. OGE has also shown its validity in describing several actual natural shapes, e.g., the leaf shapes of Hydrocotyle vulgaris L. and Polygonum perfoliatum L. and the seed planar projections of Ginkgo biloba L. [6,7]. Furthermore, OGE can produce regular, or at least very approximately regular, polygons [8,9]. When m = 1, A = B, $n_1$ = n, and $n_2 = n_3 = 1$, OGE has the special case

$$r(\varphi) = a\left(\left|\cos\frac{\varphi}{4}\right| + \left|\sin\frac{\varphi}{4}\right|\right)^{-1/n} \quad (6)$$

where $a = A^{1/n}$. This simplified version has been used to describe the actual leaf shapes of 46 bamboo species [2,10,11].
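Since the analyses later in the paper are carried out in R, a brief R sketch may help fix the notation of Equation (5); the function name and all parameter values below are our own illustrations, not values taken from the paper.

```r
# Polar radius of the original Gielis equation (Equation (5)).
# A, B: semi-diameters; m: number of angles; n1, n2, n3: shape exponents.
gielis_r <- function(phi, A, B, m, n1, n2, n3) {
  (abs(cos(m * phi / 4) / A)^n2 + abs(sin(m * phi / 4) / B)^n3)^(-1 / n1)
}

# An illustrative five-fold Gielis curve (arbitrary parameters):
phi <- seq(0, 2 * pi, length.out = 1000)
r   <- gielis_r(phi, A = 1, B = 1, m = 5, n1 = 6, n2 = 10, n3 = 10)
plot(r * cos(phi), r * sin(phi), type = "l", asp = 1, xlab = "x", ylab = "y")
```

Setting m = 1, A = B and n2 = n3 = 1 in this function reproduces the special case of Equation (6).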
Although OGE is rather flexible in fitting the edge data of many natural shapes, it sometimes fails to describe some symmetrical natural shapes accurately. To further strengthen its flexibility in data fitting, we attempt to build a more generalized equation based on OGE, motivated by the study of starfish, which display a wide diversity of shapes beyond the archetypical ones. In the Plateau problem of minimal surfaces, one of the constant mean curvature solutions for a soap film is a sphere, but such solutions hold for isotropic energy distributions only.
In crystallography, Wulff shapes describe anisotropic distributions of energy, and can take many forms, with their corresponding constant anisotropic mean curvature surfaces [12]. Extending this principle to biological species, starfish can be considered as spheres for specific anisotropic energy distributions. Remarkably, pincushion starfish of the genus Culcita are close to spherical, intermediate between classical spheres and the archetypical shapes of five-armed starfish. Another notable group of starfish are biscuit starfish, almost pentagonal and flat. They belong to the genus Tosia. In order to apply the above methods to these groups, a modification of the original Gielis equation is necessary.
The Generalized Gielis Equation (GGE) and Its Two New Special Cases
OGE can be rewritten as [6]

$$r(\varphi) = a\left(\left|\cos\left(\frac{m}{4}\varphi\right)\right|^{n_2} + k\left|\sin\left(\frac{m}{4}\varphi\right)\right|^{n_3}\right)^{-n} \quad (7)$$

where $a = A^{n_2/n_1}$, $k = A^{n_2}/B^{n_3}$ and $n = 1/n_1$. Consider the formula inside the parentheses of Equation (7), which we define as follows:

$$r_e(\varphi) = \left|\cos\left(\frac{m}{4}\varphi\right)\right|^{n_2} + k\left|\sin\left(\frac{m}{4}\varphi\right)\right|^{n_3} \quad (8)$$

We refer to this as the elementary Gielis equation (EGE) for convenience. OGE hypothesizes the existence of a power-law relationship between r and $r_e$. We refer to this relationship as the link function, f. As this link function can take on other forms, we use the following more general expression to replace OGE:

$$r(\varphi) = f\big(r_e(\varphi)\big) \quad (9)$$

which we refer to as the generalized Gielis equation (GGE). In actuality, $r_e$ is the polar radius of the elementary Gielis curve generated by EGE, and r is the polar radius of the generalized Gielis curve generated by GGE. Therefore, OGE is actually a special case of GGE with a power-law link function.
In the current study, we propose the following two candidate forms of the link function:

$$r = \exp\!\left(\gamma_0 + \frac{\gamma_1}{\ln(r_e)}\right) \quad (10)$$

and

$$r = \exp\!\left(\delta_0 + \delta_1 \ln(r_e) + \delta_2 \left(\ln(r_e)\right)^2\right) \quad (11)$$

If we use the log transformation for the left- and right-hand sides of the above two equations, we obtain, respectively:

$$y = \gamma_0 + \frac{\gamma_1}{x} \quad (12)$$

and

$$y = \delta_0 + \delta_1 x + \delta_2 x^2 \quad (13)$$

where $y = \ln(r)$ and $x = \ln(r_e)$. The first equation is actually a hyperbolic equation, and the second one is a quadratic equation. If we also use the log transformation for both sides of the power-law link function in OGE, a linear equation is obtained; we can express that by letting the coefficient $\delta_2$ in Equation (13) be zero. Of course, in nature there are forms other than the above link functions; however, these other forms also belong to the scope of GGE if there exists a clear functional expression between r and $r_e$. Figure 1 provides a simulation example for Equation (10).
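To make the link-function idea concrete, the following R sketch evaluates the EGE radius of Equation (8) and the two candidate link functions of Equations (10) and (11); the function names and coefficient values are our own illustrations, not code from the paper's appendices.

```r
# Elementary Gielis radius r_e (Equation (8)).
r_elem <- function(phi, m, k, n2, n3) {
  abs(cos(m * phi / 4))^n2 + k * abs(sin(m * phi / 4))^n3
}

# Candidate link functions r = f(r_e):
f_hyper <- function(re, g0, g1) exp(g0 + g1 / log(re))  # Eq. (10); needs re != 1
f_quad  <- function(re, d0, d1, d2) {                   # Eq. (11); power law if d2 = 0
  exp(d0 + d1 * log(re) + d2 * log(re)^2)
}

# Illustrative five-fold shape via the log-quadratic link (arbitrary coefficients):
phi <- seq(0, 2 * pi, length.out = 2000)
re  <- r_elem(phi, m = 5, k = 1, n2 = 6, n3 = 6)
r   <- f_quad(re, d0 = 0, d1 = -0.5, d2 = -0.1)
plot(r * cos(phi), r * sin(phi), type = "l", asp = 1, xlab = "x", ylab = "y")
```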
Application of the Generalized Gielis Equation
In this section, we mainly illustrate the application of GGE to several starfish species of the families Goniasteridae and Oreasteridae. Culcita and Tosia are the target genera, and Anthenoides tenuis and Stellaster equestris are used as long-armed reference species. Furthermore, we test this on the leaves of four plant species; in particular, we test whether this extension is applicable to elliptical leaves with a broad basis. Table A1 in Appendix A shows the source of the material and species information.
Considering that the five arms of the starfish in Goniasteridae and Oreasteridae are approximately equal, we further simplify Equation (8) to

$$r_e(\varphi) = \left|\cos\left(\frac{m}{4}\varphi\right)\right|^{n_2} + \left|\sin\left(\frac{m}{4}\varphi\right)\right|^{n_2} \quad (14)$$

For starfish, we fix m to be 5; for leaves, we fix m to be 1. To measure the goodness of fit, the root-mean-square error (RMSE) is used:

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N}\left(r_i - \hat{r}_i\right)^2}{N}} \quad (15)$$

where $r_i$ and $\hat{r}_i$ represent the observed and predicted polar radii of a starfish or a leaf described by GGE, and N represents the number of data points on the edge of that starfish or leaf. In fact, $\sum_{i=1}^{N}(r_i - \hat{r}_i)^2 = \mathrm{RSS}$, where RSS is the residual sum of squares. However, RMSE is not suitable for comparing the goodness of fit among different samples. The reason is that a large object usually has a larger RMSE than a small object even when the fit to the former is just as good. Thus, we use the following adjusted RMSE ($\mathrm{RMSE}_{\mathrm{adj}}$) that can reduce the influence of the object's size [13]:

$$\mathrm{RMSE}_{\mathrm{adj}} = \frac{\mathrm{RMSE}}{\sqrt{A/\pi}} \quad (16)$$

where A represents the area of the object of interest. We do not use the coefficient of determination (i.e., $r^2$) as an indicator because it has been considered problematic for reflecting the goodness of fit of a nonlinear regression [14,15].
We developed a group of R scripts for extracting the planar coordinates of shapes of interest and for fitting GGE, based on R (version 3.6.2) [16] (see Appendices S1 and S2 in the online supplementary materials). In Appendix S2, we minimize the residual sum of squares (RSS) between the observed and predicted polar radii of a starfish or a leaf described by GGE to estimate the parameters in GGE. The number of data points on an image edge ranged between 1200 and 2700, depending on the original image size and resolution, which is sufficient for describing the profile of the image. Figure 2 shows the original images and the fitted results for the edge data of eight starfish using GGE, and Figure 3 shows the fitted functional relationships (i.e., link functions) between r and $r_e$ of the eight starfish on a log-log plot. Figure 4 exhibits the fitted leaf shapes and corresponding link functions for the four leaves on a log-log plot. Table A2 in Appendix A tabulates the estimated parameters and indicators of goodness of fit.

[Figure 2 caption: Eight starfish (see Table A1) and fitted generalized Gielis curves. The number in the upper left corner of each black-background image panel represents its sample code. The panel below each black-background image panel shows the scanned edge (gray curve) and the fitted edge using GGE (red curve). The intersection of the blue vertical and horizontal dashed lines represents the polar point; the black inclined dashed line represents the previously used horizontal line of a standard GGE without an angle transformation.]

[Figure 3 caption: In each panel, the small open circles represent the actual values, and the red curve represents the fitted link function based on Equation (12). RMSE values shown in the figure were calculated based on the log-transformed data, while RMSE values in Table A2 were based on the untransformed data of r vs. $r_e$.]

The results demonstrate that the adjusted RMSE is more valid than RMSE for comparing the goodness of fit when there is a large difference in size between any two objects.
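The actual extraction and fitting code is given in Appendices S1 and S2 and is not reproduced here. The following self-contained R sketch only illustrates the fitting step described above, minimizing RSS with optim over the parameters of the simplified EGE and a power-law link; the "observed" data are synthesized for the demo, and the estimation of the polar point and rotation angle performed by the real scripts is omitted.

```r
# Schematic GGE fit by minimizing RSS; synthetic "observed" edge data.
set.seed(1)
phi   <- seq(0, 2 * pi, length.out = 1500)
re_0  <- abs(cos(5 * phi / 4))^3 + abs(sin(5 * phi / 4))^3   # true EGE, m = 5
r_obs <- exp(0.2 - 0.4 * log(re_0)) + rnorm(length(phi), sd = 0.01)

rss_gge <- function(par, phi, r) {
  n2 <- par[1]; d0 <- par[2]; d1 <- par[3]
  re    <- abs(cos(5 * phi / 4))^n2 + abs(sin(5 * phi / 4))^n2  # Eq. (14), m = 5
  r_hat <- exp(d0 + d1 * log(re))                               # power-law link
  sum((r - r_hat)^2)                                            # RSS
}

fit <- optim(par = c(2, 0, -0.5), fn = rss_gge, phi = phi, r = r_obs)

N        <- length(r_obs)
rmse     <- sqrt(fit$value / N)   # Equation (15): sqrt(RSS / N)
A        <- 100                   # hypothetical scanned area of the object
rmse_adj <- rmse / sqrt(A / pi)   # Equation (16): size-adjusted RMSE
```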
Discussion
In Table A2, samples 1, 5 and 12 have larger RMSE values (> 0.10) than the others. For the first two samples, the two starfish have relatively longer arms than the other seven starfish. $\mathrm{RMSE} = \sqrt{\sum_{i=1}^{N}(r_i - \hat{r}_i)^2 / N}$ can be considered an «average absolute deviation», that is, an average difference, ignoring sign, between the observed and predicted radii. According to Taylor's power law, there is a power-law relationship with an exponent > 0 (usually falling within a range of 1 to 3) between the variance and mean of a non-negative random variable [17,18]. In other words, the variance (and its square root) is an increasing function of the mean. Similarly, the RMSE of radii is positively related to the size of the object and to the extent of variation in the polar radii. The bigger the object, or the larger the extent of variation in polar radii, the larger its RMSE value. That is why samples 1 and 5 have large RMSE values. However, for sample 12, the main reason for its large RMSE value is the fitting approach. We minimized RSS as the target function of convergence. For the generalized Gielis curve that we used to depict the blade with a power-law function, the polar point is very close to the leaf base (i.e., the connection point of the blade and the petiole) if the leaf is narrow [10,11; also see Figure 5 below]. The ratio of leaf width to length has a big effect on the goodness of fit. A broad leaf shape ensures that the radii for the data points on the edge of a leaf close to the polar point are not too small, which enhances the goodness of fit when RSS is minimized as the target function. However, a narrow leaf shape gives rise to many small radii for the data points on the edge of the leaf near the polar point, and that results in a large deviation between the actual and predicted radii (Figure 5). There are many data points that are far away from the polar point for a narrow leaf shape. Minimizing RSS will tend to reduce the deviations for these data points because their radii are larger than those of the data points close to the polar point. Unfortunately, most bamboo leaves are narrow [19]. Thus, the minimization of RSS has resulted in a large deviation for the data points close to the polar point. That is why sample 12 fits the data worse than the other samples. The quadratic function given by Equation (13) with $\delta_2 \neq 0$ did not improve the goodness of fit.

[Figure 5 caption: Illustration of the comparison between a broad blade (a) and a narrow blade (b). In each panel, the red point represents the polar point; the blue points represent the data points on the edge of the blade; the gray curve represents the blade edge; the segments between the polar point and the data points on the edge represent radii. Each leaf shape was generated by GGE with a power-law link function (i.e., Equation (13) with $\delta_2$ = 0). The horizontal axis of the generalized Gielis curve is rotated counterclockwise by π/4 to conveniently show the image. When $\delta_1$ decreases towards 0, the curve approximates a circle with radius exp($\delta_0$) and the polar point becomes the center of this circle at the point (0, 0). In contrast, when $\delta_1$ increases towards a large value, the curve approximates a line segment with length exp($\delta_0$) and the polar point approaches the left endpoint of the segment.]
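As a visual check of this argument, the short sketch below (our own illustration, using the power-law link, i.e., Equation (13) with δ2 = 0, and arbitrary coefficients) draws a broad and a narrow blade; the larger the magnitude of δ1, the narrower the blade and the smaller the radii of the edge points near the polar point.

```r
# Broad vs. narrow blade from a power-law GGE (m = 1, k = 1, n2 = n3 = 1);
# coefficient values are illustrative only (cf. the Figure 5 caption).
leaf_r <- function(phi, d0, d1) {
  re <- abs(cos(phi / 4)) + abs(sin(phi / 4))
  exp(d0 + d1 * log(re))
}
phi <- seq(0, 2 * pi, length.out = 1500)
op  <- par(mfrow = c(1, 2))
for (d1 in c(-2, -12)) {   # small |d1|: broad blade; large |d1|: narrow blade
  r <- leaf_r(phi, d0 = 0, d1 = d1)
  plot(r * cos(phi + pi / 4), r * sin(phi + pi / 4), type = "l", asp = 1,
       xlab = "x", ylab = "y", main = paste("delta1 =", d1))
}
par(op)
```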
For leaves which are flat, the 2D representations are of immediate value to quantify shape and area. For starfish, the 2D representations can serve as one of the two sections to build a 3D starfish. Whether the proposed two functions (i.e., the hyperbolic and quadratic equations) can apply to more shapes is still unknown, meriting further investigation. We believe that other forms of link functions can be found for shapes that cannot be adequately fitted by Equations (12) and (13).
Conclusions
In the present study, we propose a generalized Gielis equation (GGE) by introducing a link function for the polar radius of the elementary Gielis equation (EGE). The original Gielis equation (OGE) can then be regarded as a special case of GGE with a power-law link function between the polar radius of EGE and that of GGE. Although OGE can produce a lot of shapes, the power-law link function has limited validity for describing shapes such as the planar projection of starfish. In that case, we put forward two candidate link functions (a log-hyperbolic function and a log-quadratic function) to make OGE applicable to these shapes. We found that these two functions describe the shapes of the investigated starfish and leaves well, showing that in nature, not all ontogenetic radial growth follows the power-law relationship.
Supplementary Materials: The following are available online at www.mdpi.com/2073-8994/12/4/645/s1, Appendix S1: R script for extracting the planar coordinates of an image, Appendix S2: R script for fitting the edge data using the generalized Gielis equation.
Author Contributions: P.S. and J.G. conceived and designed the experiment together; P.S. and D.A.R. analyzed the data; P.S., D.A.R. and J.G. wrote the manuscript. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
There are two tables, tabulating the sample collection information and the fitted results using the generalized Gielis equation, respectively. In Table A2, Code represents the sample code (see Table A1 for details); $(x_0, y_0)$ represents the estimated planar coordinates of the polar point; $\theta$ represents the estimated angle by which the horizontal line (i.e., the original x-axis) of the generalized Gielis curve was rotated; $\gamma_0$, $\gamma_1$, $\delta_0$, $\delta_1$ and $\delta_2$ are the estimated parameters in Equations (12) and (13); Sample size represents the number of data points on the edge of an image; Area represents the scanned (actual) area of a starfish or leaf image; RSS is the residual sum of squares; RMSE is the root-mean-square error; RMSEadj is the adjusted root-mean-square error.
"year": 2020,
"sha1": "530dadd0296b344425f982dc870917da8d9b79da",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/12/4/645/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6babcb27135dd6e34e42c2f6d2800c28640af2b1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Writing as a technology of the self in Kierkegaard and Foucault
Writing is a very important means by which we can work on ourselves. Yet as a «technology of the self» writing has changed substantially at different times during European history. This essay sketches some of the crucial characteristics of writing as a technology of the self for Plato's contemporaries, for the early church fathers, and then for Peter Abelard. The changes exemplified in the confessional writing of Abelard became the platform for writing as a technology of the self in European modernism. The characteristics of modernist writing as a technology of the self are examined in some detail in the work of Kierkegaard, particularly with respect to his aesthetic writings and his use of multiple narrative voices. Kierkegaard's uses of writing are compared and contrasted with those of Baudelaire and Foucault.
Introduction
Kierkegaard used writing for a variety of purposes: to communicate (indirectly); to explore (subjective) truth; to push discursive reason to its limits; to engage in «conversation» with the writings of philosophers, theologians and literary figures present and past. He also used writing as a technology for working upon himself. The use he made of this technology in working upon himself is significantly different from writing as a technology of the self in other eras. Its use shares substantial features with that of Baudelaire, and subsequently that of Michel Foucault. Kierkegaard's use of writing as a technology of the self also diverges significantly from that of Baudelaire and Foucault.
In this paper I will situate Kierkegaard's writing as a technology of the self by contrasting it with other significant uses in European history. But I will confine my remarks about Kierkegaard's writing mainly to his practices in Either/Or.
Part I: Previous Uses of Writing as a Technology of the Self
I. Plato's Greece
Plato's discussion in the Phaedrus is one of the earliest problematizations of writing in European philosophy. As Derrida's discussion in «Plato's Pharmacy»[1] has shown us, Plato's apparent privileging of speech over writing in this dialogue is not as straightforward as it seems. Derrida's strategy is to deconstruct the hierarchy Plato has established between speech and writing by exposing the ambiguity of the pivotal term «pharmakon».
Derrida seeks to make a general point about the relations between speech and writing, in pursuit of his critique of the metaphysics of presence. What he overlooks is the historical context of Plato's discussion. Why was Plato even concerned with the question of writing? Why did he problematize writing with respect to memory?
The answer is that Plato was responding polemically to a new use of writing which was much in vogue in contemporary Greek society. This was the use of hypomnemata or notebooks, both in personal life and in business and administration. As the name of these notebooks suggests, they were conceived primarily as a mnemonic aid. Civic officials, heads of households, merchants, etc., used notebooks to jot down ideas, appointments, things to be remembered. These notes then formed the raw data for reorganizing the enterprise, for improving management, for increasing efficiency or productivity. The raw data might come from ideas spontaneously occurring to the notetaker, or from things seen, heard, or read about.
Much the same use was made of the notebooks in personal life. The private individual noted things he[2] thought worth remembering. These might simply be things that he needed or wanted to remember to do, or they might be ideas that would aid with self-management. For example, if some observed behaviour were seen to be effective for the management of envy or greed or suffering, or to help someone be more energetic or happy, it could be noted in writing for future reference. The notebook then became a data bank which could be meditated on, organized, systematically reconstructed, so that it could form the basis of a program of self-management.
Self-mastery, by subordinating the unruly appetites to reason, was already a widespread aspiration among aristocratic Greek men. It was articulated strongly by Plato. For example, the Phaedrus contains an extended discussion of desire and erotic love, and the need to control these powerful forces with reason. To illustrate, Plato used the image of the soul as a chariot drawn by two horses: the horses are honour and appetite, the charioteer reason. Ethics in this context is largely a matter of prudential self-management, using a hierarchical organization of faculties and drives. The manner of organization was modelled on the domestic economy, with reason in control.
To summarize, writing was used as a technology of the self in this context to serve the goal of self-mastery. Its relevant features for this purpose were that it acted as a memory bank, whose resources could be drawn upon as required. Furthermore, it rendered observations and memories in a form which allowed for their rational reorganization. Personal data in the memory bank could be used for self-transformation by modelling one's life on the rationally reconstructed writing. The aim of self-mastery was to augment pleasure, or to satisfy interests, at least insofar as these were compatible with an honourable life.
II. Early Church Fathers
Many of the early church fathers also practised writing as a means of working on themselves. But there were significant differences between their use of writing and that of Plato's contemporaries. According to Foucault, the most significant difference was that the early church fathers used writing primarily as a means of self-interpretation[3].
Christianity, for Foucault, is characterized by a «hermeneutics of suspicion» (the phrase is Paul Ricoeur's). Thoughts are not always what they seem to be: they often contain indirect evidence of sinful attitudes and satanic influences. The inner is not the outer. It is necessary to be vigilant with respect to one's innermost self, to see that it is not wandering off the true path to salvation. The technology of writing about oneself is aimed not at self-mastery for its own sake, but at purity, so that one conforms to the word of God.
It is not a prudential self-management so that one attains a maximum of pleasure or satisfaction of interests, but a self-purification in order to purge oneself of the unholy and become closer to God.
Autobiographical writing at this time was primarily a means of confessing one's innermost thoughts, so as to expose them to objective scrutiny (whether another's, or one's own at another time, or God's). Greek writing about the self had confined itself to phenomena, things as they appeared on the surface: thoughts observed, or acquired, or spontaneously occurring. These phenomena were then rearranged. But Christian writing was suspicious; it sought to delve beneath the surface appearances and uncover the innermost secrets of the self. It assumed the self was open to subtle deceptions. Only the most rigorous pursuit of truth, guided by the firmest religious faith, could cut through the layers of deception.
This use of writing about the self was ultimately appropriated by the church in the practice of confession. Confession even became a compulsory annual practice for all parishioners by the decree of the Fourth Lateran Council in 1215. It formed a crucial stage in the development of a self who seeks its truth rather than a self who seeks to style itself.
An important additional feature of this early Christian confessional writing is that the subject's life is presented as an exemplar offered up to God. The confessio, as a form of writing, was always mediated by a relationship with God. In Augustine's Confessions, for example, the aim is to praise divine goodness and mercy rather than to reveal idiosyncratic episodes from the life of Augustine as a unique individual. Personal revelations are only made insofar as they relate to the fortunes of Christianity as a whole[4]. While suspicious self-scrutiny was set in train by this practice (in speech and in writing), its domain was restricted to spiritual self-scrutiny with respect to sinful, or potentially sinful, thoughts and behaviour[5].
III. Abelard's Autobiographical Writing
In his autobiography, The History of My Calamities, Peter Abelard made a substantial departure from this mode of confessional writing. Abelard wished to reveal himself as a unique individual whose biography could not be confused with anyone else's. He revelled in his idiosyncrasies. He revealed facts about his life which could not be socially approved.
[5] Cf. Augustine: «O Lord, my Helper and my Redeemer, I shall now tell and confess to the glory of your name how you released me from the fetters of lust which held me so tightly shackled and from my slavery to the things of this world»; Guibert of Nogent: «I confess to Thy Majesty, O God, my endless wanderings from Thy paths, and my turning back so often to the bosom of Thy Mercy, directed by Thee in spite of all. I confess the wickedness I did in childhood and in youth, wickedness that yet boils up in my mature years, and my ingrained love of crookedness, which still lives on in the sluggishness of my worn body.» Both quoted in Philip Barker, Michel Foucault: Subversions of the Subject, Allen & Unwin, 1994: 134-135.
Abelard's writing about himself can be distinguished from that of his predecessors in the following ways. There is no mediation by God; there is only the self-reflection of the writer. This self-reflection is used in a project of self-construction, like the Greek use of hypomnemata, but it is directed by a quest for the truth about himself rather than by prudential self-management. Abelard's ethics has as a theme, know thyself, so that his autobiography poses two questions: (I) Who is responsible for the life of the subject that I am? and (II) How did I become the subject that I am? Abelard sought to be perfectly honest about himself, in order that he appear as a transparent subject. This transparent subject was to be the foundation for his reasoning.
Rather than exercising a hermeneutics of suspicion, Abelard pursued a rigorous methodological scepticism. This raises a problem, since the scepticism requires a grounding in a transparent subject, but if it is so grounded the scepticism evaporates. If there is no such grounding, then the subject of doubt itself becomes open to doubt, and madness threatens. Descartes was faced with the same problem, but evaded it by excluding by fiat the possibility of himself being mad[6]. Abelard adopted a similar solution, which has become associated with the continued use of this methodological doubt, viz. he built a system. He created a world dominated by his own philosophical and methodological system, so that the inner/outer distinction collapsed and reality was made to agree with his individual subjective philosophical perspective[7]. But the whole system was potentially unstable due to the ambivalence between the opacity and transparency of the subject at its origin. This is the point of departure I wish to take for a discussion of the use Kierkegaard makes of confessional writing as a technology of the self. Although the first volume of Either/Or is quite explicitly a critique of the self-constructive techniques of the German Romantics, it both draws on and undermines Abelard's autobiographical technology for creating the self-reflexive, transparent subject of reason.
Part II: Kierkegaard's Either/Or
The first volume of Either/Or is presented as a collection of papers written by an aesthete. They have been published by the pseudonymous editor Victor Eremita, who has taken the liberty of using a phrase from two of the scraps of paper to serve as an epigraph for the first section of the volume. This phrase, ad se ipsum (to himself), might just as well have been used as an epigraph to the whole volume. The writings of the aesthete are to himself, for himself.
The aesthete uses writing as a means of producing himself. He reflects on things in the world, and then reflects on his reflections. He jots down notes on bits of paper as they occur to him, perhaps to meditate on them later. In fact the phrase «ad se ipsum» is the Latin translation of the Greek title of Marcus Aurelius' Meditations. In one of the diapsalmata, the aphoristic fragments which form the first section of the aesthete's papers, the aesthete adverts explicitly to the diary as an instrument for aiding memory: «If any man needs to keep a diary, I do, and that for the purpose of assisting my memory» (EO, 1: 32). It seems at first glance that we have a resurrection of the hypomnemata as a technology of the self.
But it is clear from the preface by Victor Eremita, and from numerous passages in the aesthete's own papers, that these writings also share features with the Christian tradition of hermeneutic suspicion. The very first line of Eremita's preface introduces the distinction between the internal and the external. When he notices contradictions between what he hears and what he sees in a person, he suspects inner secrets. The pursuit of these secrets is one of the passions of Eremita's life. He also uses the image of the confessional on the first page of his preface. But the priest in the confessional is not in a position to observe the telling contradictions which reveal concealed inwardness, since the priest only hears a voice. On the basis of the heard voice the priest «constructs an outward appearance which corresponds to the voice he hears» (EO, 1: 3). In this way the confessing subject becomes «transparent» to the priest in the same manner as Abelard's self becomes «transparent» to himself, i.e. by building a system consistent with what is projected to be the true subject. This in effect collapses the inner/outer distinction, and hence any opportunity for doubting the coherence and transparency of that subject.
The aesthetic papers are all written in the first person. They appear to be confessional. Unlike those of the early church fathers, the aesthete's confessions are not mediated by God. They are more like Abelard's confessions in revealing even socially shocking things about the individual. But they are unlike Abelard's confessions in that they do not directly reveal facts about the individual. Rather, they reveal values, attitudes, and psychological perspectives. But they do more than this. The voice of the aesthete does not appear in monologue, as the only basis for constructing a picture of the aesthete. It is embedded within the editorial voice of Eremita, it is contrasted with the voice of Judge William (the ethicist of Volume 2), and even has other voices embedded within it (e.g. the voices of Johannes the seducer and his victim Cordelia).
It is this nesting of narrative voices within Either/Or which allows it to make a distinctive break as a technology of the self. Such a multiplication of narrative voices within a literary text was not new. In fact it was in vogue in the arabesque novel, characterized by Friedrich Schlegel as having fragmentary form and a mixture of genres. Either/Or is subtitled «A Fragment of Life».
Its mixture of literary genres includes aphorisms, essays, letters, diaries, even a sermon. The arabesque novel was also frequently elided with the Bildungsroman, or novel of self-cultivation, where the protagonist's consciousness evolves with the narrative point of view.
But Kierkegaard made new use of these conventions. Like Plato, he took issue with a prevailing use of writing as a technology of the self. Kierkegaard's tactic was to redouble the technology in parody, then to bracket the aesthetic practice of self-writing in such a way as to expose its limitations. The parodic transgression of the limits of aesthetic self-writing, however, does not abolish the aesthete's strategies altogether. Rather, the aesthetic is aufgehoben (sublated), i.e. negated by being preserved in a higher sphere. In Kierkegaard's case, the aesthetic is to be sublated by being preserved as a transfiguring vision of reality, but operating within the context of an ethico-religious life.
Kierkegaard's tactic is to write the aesthetic point of view as if from the inside. Once the aesthetic self has been objectified in material artefacts, then the author can step back from it and appraise it. This is precisely how the slave manages to reverse the master/slave dialectic in Hegel's Phenomenology of Spirit in the quest for self-recognition. As a technology of the self, writing is here an objectification of oneself in the world which allows one to see oneself as if from the outside. The inner becomes outer; but in Kierkegaard's case, this is to allow one to become truly inner.
The first volume of Either/Or gives us a portrayal of aesthetic self-construction in the uses the aesthete, A, makes of his own writing. But the distance afforded by the multiplicity of narrative voices allows us as readers, and Kierkegaard as a writer, to make an existential evaluation of the aesthetic life.
II. Romantic Irony
What are the characteristics of writing as a technology of the self for the aesthete as presented in Either/Or Volume 1? Since the aesthete is modelled on the German romantic ironist, we can expect a high degree of overlap in their uses of writing. As we shall see, Kierkegaard's critique of romantic irony also applies to the aesthete.
Writing for the aesthete is a space for spontaneous self-expression and play. This expression is governed by the mood of the moment, at least when writing aphorisms. No regard is paid to consistency or to an overall telos. Eremita even thinks the order of the aesthete's papers is arbitrary; they follow no apparent narrative plan.
While the aesthete has no overall telos, he does pursue limited goals. One of the main motivations for aesthetic action is to escape boredom. The aesthete uses a combination of imagination and accident to «poeticize» the mundane. Irony, caprice, and reversal are tactics used to transfigure the actual world to render it interesting. For example, A says he knew a man «whose chatter certain circumstances made it necessary for me to listen to. At every opportunity he was ready with a little philosophical lecture, a very tiresome harangue. Almost in despair, I suddenly discovered that he perspired copiously when talking. I saw the pearls of sweat gather on his brow, unite to form a stream, glide down his nose in a drop-shaped body. From the moment of making this discovery, all was changed. I even took pleasure in inciting him to begin his philosophical instruction, merely to observe the perspiration on his brow and at the end of his nose» (EO, 1: 295). Similar transfigurations through injections of arbitrariness can be achieved by seeing the middle of a play, or by reading the third part of a book. Another tactic for escaping boredom, for the aesthete, is the «rotation method». This amounts to the prudential management of one's moods and desires, in a libidinal economy. This requires a certain degree of self-knowledge, so that one can predict when a desire will be satiated, when a particular mood needs to lie fallow, what succession of psychological states would be most titillating.
Aesthetic transfiguration of experience to escape boredom corresponds to the romantic ironist's aspiration to invest the mundane with infinite significance. This requires selective memory and forgetfulness. But this is not the same as the use of hypomnemata as a mnemonic aid. Memory and forgetfulness are conceived as Nietzsche later conceived them: not as the brute presence or absence of sense impressions of the facts, but as the principles which organize our observations, and which preselect what we notice and overlook. In short, memory and forgetfulness are our principles of interpretation. It is by means of memory and forgetfulness that the aesthete poeticizes actuality.
Because the primary negative motivation for the aesthete is to escape boredom, the interesting is the primary positive motivation. The aesthete is a sensualist in the realm of reflection. While he adores im-mediate, i.e. unmediated, experience, he cannot attain it directly himself. Instead he has to enjoy im-mediate experience vicariously, or transfigured by his own poetic activity.
In his unpublished work De Omnibus Dubitandum Est, Kierkegaard (or his pseudonym Johannes Climacus) points out that etymologically the word «interest» breaks down into the Latin «inter» (between) and «esse» (being). Interest is therefore a being-between. Aesthetes and ironists throughout Kierkegaard's oeuvre are fascinated by young women. These are metaphors of immediacy: experiential virgins, unviolated by reflection. They are spontaneous, naive, and pure. Language is glossed frequently in Kierkegaard's work as that which mediates between world and consciousness. The greater the linguistic reflectiveness, the greater the degree of mediation. The aesthete uses language to get between (inter) the being (esse) of the immediate. Language is the interesting. By prising apart the im-mediately given with language, the aesthete transfigures it.
A metaphor for the investment of the im-mediate with inter-est is tautology.
The aesthete devotes one of the diapsalmata to the topic: «Tautology is and remains still the supreme principle, the highest law of thought. What wonder then that most men use it? Nor is it so entirely empty that it may well serve to fill out an entire life» (EO, 1: 37). That is, a tautology seems to be saying nothing at all, but just the repetition in language of the subject by the predicate is a mediation. It is an analogue of fictional language. Its meaning does not derive from reference to the actual world, but is produced by the interplay of signs within a language.
It is just this insertion of language into the self through the technology of writing, in an effort to invest im-mediately given experience with interest, that constitutes aesthetic self-creation. But this act of getting between experience to expand it into something inter-esting requires a starting point. That is why the aesthete needs an occasion. The spontaneous jottings in a diary, such as those found in the diapsalmata, can provide occasions for aesthetic expansion. A young woman, too, can be an occasion for the differential work of language (an idea explored at length in «Diary of the Seducer»).
The aesthete conceives of language and consciousness as systems of différance⁹. It is only in naive, spontaneous im-mediacy that consciousness is present to its object. When language or reflection intervene, there is a deferral and displacement of both the subject and the object. There is only the interplay of signs. The subject is no longer transparent and self-present, but lost in a labyrinth of «unlimited semiosis». For the romantic ironists, this is infinitely interesting; for Kierkegaard, this is a condition of despair.
Kierkegaard had already criticized romantic irony as theorized and practised in the work of Fichte, Friedrich Schlegel, Tieck and Solger in The Concept of Irony. The main thrust of the critique is that romantic irony loses touch with actuality. It turns everything into a poetic dream. For example, in Tieck, «Animals talk like human beings, human beings talk like asses, chairs and tables become conscious of their meaning in existence, human beings find existence meaningless. Nothing becomes everything, and everything becomes nothing; everything is possible, even the impossible; everything is probable, even the improbable» (CI, 318).
The main characteristic of this form of aestheticism is that it transfigures actuality, but in such a way that it loses touch with actuality. Kierkegaard wants to retain the transfiguration of everyday life by means of an inward infinity, but he wants to reject the extreme subjective idealism of romantic irony.
Kierkegaard and Modernism
The solution Kierkegaard has to the problems of aesthetic self-writing has a lot in common with the model of modernism proposed by Baudelaire in his figure of the dandy. This in turn was used by Foucault as a model for using writing as a technology in the pursuit of individual freedom. All three depart from romantic irony in requiring the transfiguration of actuality to be complemented with an exacting respect for actuality. In fact, all three require the transfiguration to occur by simultaneously respecting and transgressing actuality. All three engage in a relentless Socratic interrogation of given actuality in pursuit of an ethic of truthfulness. This uncompromising pursuit of truth, no matter how dangerous the social context, is what Foucault dubbed parrhésia. In all three cases truth is not straightforwardly a matter of correspondence, nor language simply representational. Truthfulness is performative, and transgressive uses of language create the conditions for new experiences of actuality.
According to Kierkegaard's analysis in The Concept of Irony, Socratic irony is the midwife at the birth of subjectivity. That is, Socrates' unremitting interrogation and ironic subversion distanced his interlocutors from all received opinion and given actuality. This resulted initially in aporia, a state of utter bewilderment and disorientation. Ultimately it forced each individual to take responsibility for their own thoughts and actions. They could no longer rely on what they had learned from tradition or from their peers. Each individual had to become responsible for themselves in the face of truth; Socrates did not allow them to turn away in bad faith and forgetfulness. Romantic irony also distances the individual from received opinion and given actuality. But this distancing is performed by the ironist on him or herself. There is no other voice to perform the deconstruction or ironic subversion of one's bad faith; Socratic irony was always performed by another upon the subject. Romantic irony is autodidactic, and like Abelard's doubting subject threatens to become purely subjective. It is not that romantic irony fails to have a multiplicity of narrative voices, but these are not used to gain critical perspective on one another. They do not allow us to experience each voice as a «limited whole», to use Wittgenstein's expression. From within the perspective of romantic irony we cannot redraw the limits of the world of romantic irony.
The modernist tactic is to use multiple voices within the text. In modernism Literature emerges as self-reflexive writing that folds back on itself to create a virtual space of meaning. Within this space characters can comment on themselves and on one another at a critical distance. Foucault cites Cervantes' Don Quixote as one of the earliest instances of this form of writing, where language is no longer simply continuous with «the great chain of being», nor transparently representational of the actual¹⁰. This can be used as a technology for working on oneself, but has various potential outcomes. One possible outcome is to lose oneself in the mise en abyme of language and to lose touch with actuality (as does subjective idealism, romantic irony, and the nihilistic relativism of some forms of postmodernism and neo-pragmatism). Another possible outcome is to use the virtual space of Literature as a realm for experimentation, self-objectification, and testing of the limits of language (critique).
10. Cf. Michel Foucault, The Order of Things: An Archaeology of the Human Sciences, translator unnamed, Tavistock, 1970.
It is by means of a combination of poetic transfiguration and critique that Kierkegaard, Baudelaire and Foucault work on themselves through Literature.
Here is how Foucault characterizes the project of modernity: «For the attitude of modernity, the high value of the present is indissociable from a desperate eagerness to imagine it, to imagine it otherwise than it is, and to transform it not by destroying it but by grasping it in what it is. Baudelairean modernity is an exercise in which extreme attention to what is real is confronted with the practice of a liberty that simultaneously respects this reality and violates it».
But this is very close to Kierkegaard's notion that the aesthetic relation to actuality must be preserved within the context of an ethico-religious life. The pitfalls of romantic aestheticism must be avoided, but the positive values of irony and poetic transfiguration must be preserved. This is achieved when the poetic transfiguration is accomplished by means of «extreme attention to what is real». As it happens, the Danish word for transfiguration (Forklarelse) also means «clarification».
The poetic transfiguration of actuality (given reality) is achieved by becoming clear about its limits, then transgressing them in such a way that those limits are both exceeded and preserved. In Literature this is achieved by writing the self from the point of view of contemporary consciousness, then enfolding that point of view in another which exceeds it. Kierkegaard portrays the aestheticism of the romantics, then exceeds it with the point of view of the ethicist (and later the religious point of view). Foucault explores in minute detail the epistemic conditions for the production of knowledge in the renaissance, and exceeds it with a description of the epistemic conditions in the classical age (and later the modern age).
Differences Between Kierkegaard's Writing as a Technology of the Self and Foucault's
Kierkegaard retained the idea from the church fathers that all self-writing be mediated by God. Foucault the atheist repudiated any such idea. For Kierkegaard «the art of existing is a skill that must be acquired and cultivated via a relation to the infinite, rather than performed simply on the basis of natural talents and …». Yet there are hints in Foucault's later work that retrospective recuperations of his life's writings had been oriented by the Kierkegaardian and Nietzschean principles to «become who he was».
Conclusion
Kierkegaard's use of writing to work upon himself is continuous with aesthetic self-writing insofar as it helps to transfigure given actuality (the appearance of the world we have inherited). But this is not a licence to create the world ex nihilo. We are constrained by the way the world is taken to be. Our first task (the ethical) is to acknowledge these limits; our second task (the religious) is to transfigure our epistemic limits, or the limits of the «universal» in Kierkegaard's terminology. Within the shifting, relativistic world of interaction between subject and object, Kierkegaard thinks we need a constant to orient our transgressive inventions. This he finds in the single life goal and in the practice of faith.
17. Michel Foucault, The Use of Pleasure, translated by Robert Hurley, Vintage, 1985: 8-9.
9. Cf. Jacques Derrida, «Différance», in Speech and Phenomena and Other Essays on Husserl's Theory of Signs, translated by David B. Allison, Northwestern University Press, 1973: 129-160.
11. Michel Foucault, «What Is Enlightenment?», in Paul Rabinow (ed.), The Foucault Reader, Penguin, 1984: 41.
…pose. But, then, what is philosophy today (philosophical activity, I mean) if it is not the critical work that thought brings to bear on itself? In what does it consist, if not in the endeavour to know how and to what extent it might be possible to think differently, instead of legitimating what is already known?¹⁷ | 2018-12-18T14:30:40.708Z | 1996-01-01T00:00:00.000 | {
"year": 1996,
"sha1": "38e38bfcaefa38c1d65f59dd60d86d133bc8879e",
"oa_license": "CCBYNC",
"oa_url": "https://revistes.uab.cat/enrahonar/article/download/v25-mcdonald/591-pdf-en",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "38e38bfcaefa38c1d65f59dd60d86d133bc8879e",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Philosophy"
]
} |
201115637 | pes2o/s2orc | v3-fos-license | Extensive Cryptic Diversity Within the Physalaemus cuvieri–Physalaemus ephippifer Species Complex (Amphibia, Anura) Revealed by Cytogenetic, Mitochondrial, and Genomic Markers
Previous cytogenetic and phylogenetic analyses showed a high variability in the frog taxa Physalaemus cuvieri and Physalaemus ephippifer and suggested the presence of undescribed diversity in this species complex. Here, by 1) adding specimens from the Brazilian Amazon region, 2) employing sequence-based species delimitation approaches, and 3) including RADseq-style markers, we demonstrate that the diversity in the P. cuvieri–P. ephippifer species complex is even greater than previously suspected. Specimens from Viruá and Western Pará, located at the Guiana Amazonian area of endemism, were recovered as distinct from all previously identified lineages by the phylogenetic analyses based on mitochondrial DNA and RAD markers, a PCA from RAD data, and cytogenetic analysis. The sequence-based species delimitation analyses supported the recognition of one or two undescribed species among these Amazonian specimens and also supported the recognition of at least three other species in the P. cuvieri–P. ephippifer species complex. These new results reinforce the need for a comprehensive taxonomic revision.
INTRODUCTION
The neotropical region is known for its high species richness (Myers et al., 2000), although the processes responsible for this richness remain under debate (see Haffer, 1997; Hoorn et al., 2010; Álvarez-Presas et al., 2014; Fouquet et al., 2015; Garzón-Orduña et al., 2015), and a large part of this diversity is still undescribed (Myers et al., 2000; Fouquet et al., 2007; Giam et al., 2012). Giam et al. (2012), based on the analysis of the taxonomic effort dedicated to the description of species over time and the geographic distribution of the described species, estimated that approximately 33% of the species of amphibians were not described at that time, including those in the neotropical forests, a biome supposedly hosting a great part of these unknown species. Delimiting valid species, however, can be a complicated task, and DNA sequence data sets are useful in this matter, as they enable the identification of historical lineages in phylogenetic (or tree-based) analyses and inferences of genetic distances and gene flow statistics in non-tree-based methods (see Wiens and Penkrot, 2002; Camargo et al., 2013; examples in Elmer et al., 2007; Funk et al., 2012; Fouquet et al., 2012; Ortega-Andrade et al., 2015). These methods are especially useful for cryptic species for which morphological characters provide insufficient or misleading evidence for species delimitation.
One such example includes the South American frogs assigned to Physalaemus cuvieri or Physalaemus ephippifer (Anura, Leptodactylidae). A previous phylogenetic study recognized two major clades in the genus Physalaemus, which were informally referred to as the Physalaemus signifer Clade and Physalaemus cuvieri Clade (Lourenço et al., 2015). That study also tested the monophyly of the species groups previously proposed based on phenetic analyses (Lynch, 1970; Nascimento et al., 2005) and recognized five groups in the P. cuvieri Clade: the P. biligonigerus species group, the P. cuvieri species group, the P. henselii species group, the P. gracilis species group, and the P. olfersii species group. Currently, the P. cuvieri species group encompasses nine species, among them P. cuvieri and P. ephippifer (see list in Frost, 2018).
Using DNA sequence data, Lourenço et al. (2015) recovered four distinct lineages among specimens first identified as either Physalaemus cuvieri or Physalaemus ephippifer. These four lineages correspond to karyological groups recognized previously by Quinderé et al. (2009), which were distinguishable particularly by the location of nucleolus organizer regions (NORs). Thus, it appears that there is undescribed, cryptic diversity within these two species. Miranda et al. (2019) also described deep phylogenetic structure among populations identified as P. cuvieri, corroborating the presence of undescribed species in this group. Here, we follow the naming conventions of Lourenço et al. (2015) and refer to these four lineages individually as P. ephippifer and lineages 1 to 3 of "P. cuvieri," and collectively as the P. cuvieri-P. ephippifer species complex.
The lineages 1 to 3 of "Physalaemus cuvieri" (L1-L3) have primarily allopatric distributions; L1 occurs in northern and northeastern Brazil, L3 was recognized based on specimens from just one locality (i.e., Porto Nacional, in central Brazil), and L2 occupies a broader area, which extends from the central state of Bahia to southern Brazil and northern Argentina (Lourenço et al., 2015) (Figure 1, inset).
Physalaemus ephippifer occurs near the mouth of the Amazon River, with the type locality in the Brazilian municipality of Belém (Figure 1). Although P. ephippifer has also been reported in the Guianas, the Bolívar region of Venezuela, and Suriname (Frost, 2018), the true extent of the geographic distribution of this species is still unclear (see comment in Frost, 2018). The specimens of P. ephippifer previously included in the cytogenetic (Nascimento et al., 2010) and phylogenetic analyses (Lourenço et al., 2015) were all from Belém, which is located in eastern Amazonia. Specimens from central and western Amazonia have not been included in any of the studies of the P. cuvieri-P. ephippifer species complex hitherto conducted.
Given the considerable genetic and cytogenetic variation found among populations of the P. cuvieri-P. ephippifer species complex and the paucity of data from the Amazon region, we improve the analysis of this group by 1) adding localities of the Brazilian Amazon region not sampled before by Lourenço et al. (2015) or Miranda et al. (2019), 2) employing sequence-based species delimitation approaches, and 3) including RADseq-style markers.
Specimens
Six specimens of Physalaemus ephippifer from Santa Bárbara, a locality in the Brazilian State of Pará situated near the type locality of this species, were analyzed cytogenetically. We also karyotyped 28 specimens of Physalaemus (Table 1) from different localities of the Amazon region, situated in the Brazilian States of Pará and Roraima (Figure 1). Considering the taxonomic uncertainties surrounding P. ephippifer, we refer to these specimens as Physalaemus sp. throughout this manuscript. Sixteen of the specimens analyzed cytogenetically (Table 1) and five additional specimens of Physalaemus sp. (SMRP 252.100, …) from Óbidos municipality, State of Pará, Brazil, were included in the analyses performed with mitochondrial DNA sequences (Supplementary Table S1). All mitochondrial nucleotide sequences available at GenBank for the P. cuvieri-P. ephippifer species complex were also included, as well as representatives of the remaining eight species currently assigned to the P. cuvieri species group (i.e., P. albifrons, P. albonotatus, P. atim, P. centralis, P. cuqui, P. erikae, P. fischeri, and P. kroyeri) (Supplementary Table S1). One representative of each of the four other species groups previously recognized in the P. cuvieri Clade and P. nattereri, a species of the P. signifer Clade (which is the sister clade of the P. cuvieri Clade; Lourenço et al., 2015), were included to represent groups distantly related to the P. cuvieri-P. ephippifer species complex (Supplementary Table S1). Physalaemus nattereri was used to root the mitochondrial cladograms.
For the analysis based on RADseq-style markers, we used 14 of the 28 specimens of Physalaemus sp. analyzed cytogenetically, two exemplars of P. ephippifer, and five, two, and three individuals of the lineages 1, 2, and 3 of "P. cuvieri," respectively (Table 1 and Supplementary Table S1). We did not have a tissue sample for P. fischeri, and because we expected high locus dropout among distantly related species, we did not include a more distant outgroup when generating this data set.
The specimens were collected under a permit issued by the Instituto Chico Mendes de Conservação da Biodiversidade/ Sistema de Autorização e Informação em Biodiversidade (ICMBio/SISBIO) (permit number 32483), which also includes the authorization for extracting tissue samples. The animal vouchers were deposited at the amphibian collection of the Museu de Zoologia "Prof. Adão José Cardoso" at the Institute of Biology, University of Campinas (ZUEC).
Cytogenetic Analyses
Frogs were injected intraperitoneally with 2% colchicine (0.02 ml/g body weight). After 4 h, they were euthanized with an overdose of 2% lidocaine (50 mg/g body weight, cutaneous administration) and had the intestines and testes removed. Chromosome preparations were obtained from these tissue samples following King and Rofe (1976), with modifications described in Gatto et al. (2018), or following Schmid (1978). This protocol was approved by the Committee for Ethics in Animal Use of the University of Campinas (CEUA/UNICAMP) (permit number 3454-1).
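Since the colchicine dose above is weight-based, the injection volume follows directly from the 0.02 ml/g figure. The sketch below is a hypothetical helper for that arithmetic only; it is not part of the published protocol.

```python
# Minimal sketch, not part of the published protocol: convert the weight-based
# colchicine dose described above (0.02 ml of 2% solution per gram of body
# weight) into an injection volume for a frog of a given mass.
def colchicine_volume_ml(body_mass_g, dose_ml_per_g=0.02):
    return body_mass_g * dose_ml_per_g

print(colchicine_volume_ml(25.0))  # a 25 g frog would receive 0.5 ml
```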
The metaphases were observed through conventional 10% Giemsa staining, and then C-banded following the method described by King (1980). Once the images were obtained, the Giemsa stain was removed using 70% ethanol, and the C-banded metaphases were stained with DAPI (4′,6-diamidino-2-phenylindole) at 0.5 µg/ml. Finally, the material was subjected to the Ag-NOR method (Howell and Black, 1980). The images were obtained using a BX60 Olympus microscope attached to a Q-Color3 digital camera and were edited in Adobe Photoshop CS3 and/or Image-ProPlus 4.0 (Media Cybernetics, Bethesda, MD, USA). The classification of the chromosomes in relation to the position of the centromere was based on the criterion proposed by Green and Sessions (1991).
Extraction of DNA and Sequencing of Mitochondrial Genes
Liver samples were obtained from animals anesthetized with 2% lidocaine (protocol approved by CEUA/UNICAMP, permit number 3454-1). Genomic DNA was obtained from these samples as reported by Medeiros et al. (2013). A region of approximately 2,300 bp spanning the mitochondrial ribosomal 12S and 16S genes and the tRNA-Val gene was isolated by PCR using the primer pairs MVZ 59 (Graybeal, 1997), Titus I (Titus, 1992), 12L13 (Feller and Hedges, 1998), and 16Sbr (Palumbi et al., 2002). The products of these PCR reactions were purified using the Wizard SV Gel and PCR Clean-up System (Promega, USA). The samples were sequenced using the BigDye Terminator kit (Applied Biosystems), with the primers mentioned above, together with MVZ50 (Graybeal, 1997), 16SL2a (Hedges, 1994), 16H10 (Hedges, 1994), and 16Sar (Palumbi et al., 2002), in an ABI 3730xL DNA Analyzer automatic sequencer (Applied Biosystems). The sequences obtained were edited in the BioEdit Sequence Alignment Editor software, version 7.2.5 (Hall, 1999).
Phylogenetic Inferences
The sequences of the mitochondrial 12S, tRNA-Val, and 16S genes composed a matrix of 84 terminals (for details, see Supplementary Table S1) and 2,311 characters. The sequences were aligned using Muscle (Edgar, 2004). The phylogenetic inferences were generated under the maximum parsimony (MP) criterion in TNT v. 1.1 (Goloboff et al., 2003) and by Bayesian analysis in MrBayes v. 3.2.5 (Ronquist et al., 2011).
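As a hedged illustration of the alignment step (this is not the authors' script; the file names are hypothetical and classic MUSCLE v3-style command-line flags are assumed), the sequences could be aligned and loaded as follows:

```python
# Sketch only: align the mitochondrial sequences with MUSCLE (assuming the
# v3-style -in/-out flags and that 'muscle' is on PATH), then read the
# resulting alignment with Biopython.
import subprocess
from Bio import AlignIO

subprocess.run(
    ["muscle", "-in", "physalaemus_mtdna.fasta", "-out", "aligned.fasta"],
    check=True,
)
alignment = AlignIO.read("aligned.fasta", "fasta")
# The matrix described above had 84 terminals and 2,311 aligned characters.
print(len(alignment), alignment.get_alignment_length())
```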
Maximum parsimony trees were obtained by a heuristic search (best length was hit 100 times), using the new technology search option, which included sectorial searches, ratchet, tree drifting, and tree fusing. Gaps were treated as a fifth state. Branch support was evaluated by bootstrap analysis with 1,000 pseudoreplicates, using a traditional search.
For the Bayesian analyses, the GTR+I+G model of DNA evolution was used as inferred in MrModeltest v. 2.3 (Nylander, 2004). Two simultaneous analyses were run, each with four chains (three heated and one cold) and 2 million generations. One tree was sampled every 100 generations. Consensus topology and posterior probabilities were produced after discarding the first 25% of the trees generated. The average standard deviation of split frequencies (ASDSF) value was below 0.01 and the Potential Scale Reduction Factor values were approximately 1.000. The stabilization of posterior probabilities was checked using Tracer v. 1.6 (Rambaut et al., 2014).
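A hedged sketch of a MrBayes 3.2 command block consistent with these settings is shown below; it is written as a Python string for convenience and is not the authors' actual input file (the data file name is hypothetical).

```python
# Sketch of a MrBayes command block matching the settings described above:
# GTR+I+G (nst=6, rates=invgamma), two runs of four chains, 2 million
# generations, sampling every 100 generations, and a 25% burn-in.
mrbayes_block = """
begin mrbayes;
    lset nst=6 rates=invgamma;                       [GTR + I + G]
    mcmc ngen=2000000 nruns=2 nchains=4 samplefreq=100;
    sump relburnin=yes burninfrac=0.25;
    sumt relburnin=yes burninfrac=0.25;
end;
"""
# The block would be appended to the NEXUS data file before running MrBayes.
with open("physalaemus_mtdna.nex", "a") as handle:
    handle.write(mrbayes_block)
```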
Mitochondrial Sequence-Based Species Delimitation Analyses
Two distinct approaches were employed to evaluate the diversity within the Physalaemus cuvieri-Physalaemus ephippifer species complex. First, we used the Poisson Tree Processes (PTP) method, which infers putative species boundaries on a given phylogenetic input tree based on the fundamental assumption that the number of substitutions between species is significantly higher than the number of substitutions within species (Zhang et al., 2013). Second, we used a distance-based approach employing the Automatic Barcode Gap Discovery (ABGD) method (Puillandre et al., 2012). The PTP analysis was conducted on the bPTP webserver (http://species.h-its.org/ptp), with the tree inferred in the Bayesian analysis and using 500,000 MCMC generations, thinning the set to 100 and a burn-in of 25%. The ABGD analysis was performed at the ABGD webserver (http://wwwabi.snv.jussieu.fr/public/abgd/abgdweb.html), using simple distances and setting the minimum and maximum values of prior intraspecific divergence (P) to 0.001 and 0.1, respectively, and the minimum gap width to 1.0. The data matrix used for the ABGD analysis differed from that used in the phylogenetic inferences by the number of sequences (only the clades belonging to the P. cuvieri-P. ephippifer species complex were included, to avoid species represented by only one sequence) and number of characters (only 2,173 bp were analyzed, to avoid the inclusion of missing data).
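Conceptually, ABGD searches for a "barcode gap" separating intraspecific from interspecific divergences. The toy sketch below illustrates that core idea only; it is not the ABGD algorithm, and the distances are invented.

```python
# Toy illustration of the barcode-gap idea behind ABGD: find the widest gap
# in the sorted pairwise distances. The distances here are made up.
def widest_gap(distances):
    d = sorted(distances)
    gaps = [(d[i + 1] - d[i], d[i], d[i + 1]) for i in range(len(d) - 1)]
    return max(gaps)  # (gap width, lower edge, upper edge)

pairwise = [0.002, 0.004, 0.005, 0.007, 0.031, 0.034, 0.040]
width, low, high = widest_gap(pairwise)
print(f"candidate barcode gap of {width:.3f} between {low:.3f} and {high:.3f}")
```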
Because 16S is a powerful marker for DNA barcoding of anurans (Vences et al., 2005a; Vences et al., 2005b; Fouquet et al., 2007), we also used a 1,381-bp fragment of the 16S mitochondrial gene to provide the genetic distances between and within clades inferred in the phylogenetic analyses. Uncorrected p distances were calculated in MEGA 6 (Tamura et al., 2013), treating gaps and missing data as pairwise deletions.
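For reference, an uncorrected p distance with pairwise deletion can be sketched in a few lines. This is a minimal illustration of the MEGA setting described above, using toy sequences rather than the study's data.

```python
# Minimal sketch of an uncorrected p distance with pairwise deletion: for each
# pair of sequences, sites with a gap or missing data in either sequence are
# skipped, and the distance is the proportion of differing remaining sites.
def p_distance(seq1, seq2, skip=set("-Nn?")):
    kept = [(a, b) for a, b in zip(seq1, seq2) if a not in skip and b not in skip]
    if not kept:
        return float("nan")
    return sum(a != b for a, b in kept) / len(kept)

# Toy fragments: one mismatch among eight comparable sites -> 0.125
print(p_distance("ACGT-ACGTN", "ACGTTACGAA"))
```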
RADseq-Style Data Analyses
Preparation and Sequencing of 3RAD Libraries
Liver samples were obtained from animals anesthetized with 2% lidocaine (protocol approved by CEUA/UNICAMP, permit number 3454-1). Genomic DNA was obtained from these samples as reported by Medeiros et al. (2013) or using the DNeasy Blood and Tissue Kit (Qiagen). RADseq-style data were generated with the 3RAD (triple-digest RADseq) protocol proposed by Bayona-Vásquez et al. (2019), as briefly described below.
Approximately 100 ng of genomic DNA from each specimen (for details on specimens, see Supplementary Table S1) was digested with the restriction enzymes MspI, ClaI, and BamHI-HF (New England BioLabs; 10 U each) for 1 h at 37°C. Without disabling the restriction enzyme, the digested DNA was ligated to iTru adapters specific to MspI and BamHI-HF cutsites (Supplementary Table S2) using T4 DNA ligase (New England BioLabs; 100 U). In the digestion and ligation, ClaI functions as the third restriction enzyme, which is designed to cleave dimers of the phosphorylated adapter and leave only fragments cut by both MspI and BamHI-HF. Samples were incubated for two cycles of 22°C for 20 min and 37°C for 10 min, followed by a final incubation at 80°C for 20 min to inactivate the enzymes. The resulting samples were cleaned with NaCl-PEG diluted SpeedBeads (Rohland and Reich, 2012) (in a 1.2:1 SpeedBeads to DNA volume ratio), washed with 80% EtOH and resuspended in TLE (10 mM Tris pH 8; 0.2 mM EDTA). Full-length 3RAD libraries were made using PCR with iTru5 and iTru7 primers (Supplementary Table S2) and KAPA HiFi Hotstart DNA Polymerase (KAPA Biosciences). For PCR, samples were incubated at 95°C for 2 min, followed by 16 cycles of 98°C for 20 s, 60°C for 15 s, and 72°C for 30 s, with a final elongation step of 72°C for 5 min. The PCR product was purified using SpeedBeads, washed with 80% EtOH and resuspended in TLE. The samples were quantified using BioSpectrometer (Eppendorf) and pooled by combining 150 ng of each sample. This pool was concentrated using SpeedBeads and electrophoresed on a Pippin Prep system (Sage Science) to size-select for 500 bp fragments (+/− 10%). The resulting libraries were pooled with samples from unrelated projects and sequenced by Georgia Genomics Facility on an Illumina HiSeq platform to obtain paired-end 150 nt (PE150) reads.
3RAD Data Filtering, Assembly, and Phylogenetic Analysis
Sequence reads were filtered and assembled using ipyrad v. 0.7.28 (Eaton, 2014; Eaton and Overcast, 2018). Internal indexes were removed, and reads were trimmed to 120 bases. The clustering threshold was set at 85%, the minimum depth for statistical base calling was set to 6, the minimum depth for majority-rule base calling was set to 4, and the minimum number of individuals per locus was 10. Up to two alleles per site in consensus sequences and 20 SNPs per read per locus were allowed. All the parameters used in this analysis are presented in Data Sheet S1. The resulting loci were concatenated in a Phylip file (i.e., the .u.snps.phy output file from ipyrad) and used for phylogenetic inferences in RAxML v. 0.4.1b (Stamatakis, 2014) under the GTR+G model. Because we lacked a tissue for a suitable outgroup (and thus, lacked data for an outgroup), this phylogeny was not rooted. All 3RAD sequence data are available from the NCBI SRA (PRJNA527881).
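To make one of these filters concrete, the sketch below mimics the "minimum number of individuals per locus" step (set to 10 above) on a toy data structure. It illustrates the parameter's meaning only; it is not ipyrad's implementation, and the dict layout is a hypothetical stand-in.

```python
# Sketch of the min-samples-per-locus filter used above (min = 10): any locus
# recovered in fewer than 10 individuals is discarded.
def filter_loci(loci, min_samples=10):
    """loci: {locus_name: {individual_name: sequence}}"""
    return {name: inds for name, inds in loci.items() if len(inds) >= min_samples}

toy = {
    "locus_001": {f"ind{i}": "ACGTACGT" for i in range(12)},  # kept
    "locus_002": {f"ind{i}": "ACGTACGT" for i in range(4)},   # dropped
}
print(sorted(filter_loci(toy)))  # ['locus_001']
```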
3RAD Data-Based Species Delimitation Analyses
To further assess species boundaries, two additional analyses were performed with the 3RAD data set. First, a principal components analysis (PCA) was conducted using the package "adegenet" in R v3.5.1 (Jombart, 2008; R Core Team, 2018). One random SNP per locus (i.e., the .u.str output file from ipyrad) was used, and variables were centered, but not scaled. The first two principal components (PC1 and PC2) were plotted. Second, Bayesian species delimitation analyses were conducted using the program BPP (Yang and Rannala, 2010). Informed by the results of our phylogenetic analyses (see Results), individuals were binned into six groups: Western Pará, Viruá, P. ephippifer, lineage 1 of "P. cuvieri," lineage 2 of "P. cuvieri," and lineage 3 of "P. cuvieri." Because the 3RAD data set did not include a true outgroup and thus we did not have a rooted species tree, two different species trees were used for these species delimitation analyses: the topology we recovered in our rooted mtDNA phylogeny (which is also the topology of our 3RAD phylogeny if rooted using lineage 3 of "P. cuvieri" as an outgroup) and an alternative topology created by (speculatively) rooting our unrooted 3RAD phylogeny using the Western Pará and Viruá clades as outgroups. Following the recommendation of Rannala and Yang (2013), separate analyses were conducted with the following parameters: ϵ = (2, 5, 10, 20), α = (1, 1.5, 2), and m = (1, 1.5, 2). A θ prior from 2 to 2000 and a τ prior from 2 to 200 were used, and sampling occurred every 10 MCMC iterations for 10,000 iterations, with the first 1,000 iterations discarded as burn-in. All analyses were conducted using data from 500 loci derived from the .loci output file from ipyrad. All input files for BPP were created using ipyrad, and all analyses were conducted on an Amazon EC2 instance.
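The original PCA was run with the R package "adegenet"; the NumPy sketch below mirrors the stated settings (one random SNP per locus, centering without scaling) on simulated genotypes, so the numbers it prints are illustrative only.

```python
# Hedged re-sketch of the PCA step in NumPy (the paper used R/adegenet):
# center the genotype matrix without scaling and take the first two PCs.
import numpy as np

def pca_scores(genotypes, n_components=2):
    """genotypes: individuals x SNPs matrix (one random SNP per locus)."""
    X = genotypes - genotypes.mean(axis=0)          # center, do not scale
    u, s, _ = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / (s**2).sum()                 # variance proportions
    return (u * s)[:, :n_components], explained[:n_components]

rng = np.random.default_rng(0)
toy_genotypes = rng.integers(0, 3, size=(26, 1000)).astype(float)
scores, var_explained = pca_scores(toy_genotypes)
print(var_explained)  # cf. the paper's 15.4% (PC1) and 10.1% (PC2)
```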
Cytogenetic Analyses of the Physalaemus Specimens from the Brazilian Amazon
All of the specimens analyzed cytogenetically had a diploid complement of 22 chromosomes. The Physalaemus ephippifer specimens from Santa Bárbara have the same karyotype described previously by Nascimento et al. (2010) (Supplementary Figure S1). The karyotypes found in the remaining specimens were similar to each other but diverged with respect to the NOR sites, allowing for the recognition of two cytotypes (I-II). Cytotype I was present in the specimens of Physalaemus sp. from Alenquer, Monte Alegre, Óbidos, and Prainha, localities from Western Pará. This karyotype has metacentric (1, 2, 5, 6, and 8-11) and submetacentric (3, 4, and 7) chromosomes (Figures 2A-C). The Ag-NOR method revealed two NORs in chromosomes 8, one pericentromerically located in the short arm and one terminally located in the long arm (Figure 2A). C-banding strongly detected the centromeres of all of the chromosomes and an interstitial band in the short arm of chromosomes 5, the pericentromeric band in the short arm of chromosomes 3, the terminal NOR in chromosomes 8, and a segment that included the pericentromeric NOR in the short arm of chromosomes 8 and its adjacent region (Figure 2B). These C-bands, except for those coincident with the NORs, were strongly stained with DAPI (Figure 2C). In addition, the DAPI staining also revealed a proximal C-band in the long arm of chromosomes 4 and terminal bands on chromosomes 7 (short arm) and 9 to 11 (both arms) (Figure 2C), all of them hardly seen in C-banded metaphases not stained with DAPI. In all of the metaphases from the specimen SMRP 252.88, chromosome pair 8 was heteromorphic in size because of the presence of a very large pericentromeric NOR in one homologue, whereas its partner had no evident pericentromeric NOR (Figure 2D).
Cytotype II was found in all seven specimens of Physalaemus sp. from Viruá National Park, State of Roraima, Brazil. This cytotype differed from cytotype I by the absence of the terminal NOR in chromosomes 8 (Figure 2E).
Because no female of Physalaemus sp. was analyzed cytogenetically, the presence of sex-related variations could not be investigated in the cytotypical groups I and II.
Phylogenetic Analyses of Mitochondrial Sequences
The Bayesian and Maximum Parsimony inferences from the mtDNA data set were congruent in recovering the specimens of Physalaemus sp. from Alenquer, Monte Alegre, Óbidos, Prainha, and Viruá in a highly supported clade (Physalaemus sp. clade) sister to the clade composed of P. ephippifer and lineage 1 of "P. cuvieri" (Figure 3; Supplementary Figure S2). Additionally, in all mtDNA analyses, lineage 2 of "P. cuvieri" was recovered as sister to the clade including Physalaemus sp., P. ephippifer, and the lineage 1 of "P. cuvieri." Physalaemus fischeri was inferred as sister to the clade composed of all of the aforementioned groups and lineage 3 of "P. cuvieri" in all mtDNA analyses (Figure 3; Supplementary Figure S2).
The Bayesian and Maximum Parsimony analyses recovered two clades of Physalaemus sp.: the Western Pará clade, composed of the specimens from Alenquer, Monte Alegre, Óbidos, and Prainha, which show the cytotype I described above; and the Viruá clade, which comprises the specimens from Viruá, which have the cytotype II described above.
Species Delimitation Analyses and Genetic Variation Within and Between Groups Based on Mitochondrial DNA
The bPTP analysis suggested between 16 and 32 species in our whole sample (outgroup included), with 18 species estimated in the maximum likelihood solution. According to this maximum likelihood solution of bPTP, the Physalaemus cuvieri-Physalaemus ephippifer species complex consists of the following five species: species 1, the Western Pará clade (posterior delimitation probability: 0.59); species 2, the Viruá clade (posterior delimitation probability: 0.85); species 3, lineage 2 of "P. cuvieri" (posterior delimitation probability: 0.45); species 4, lineage 3 of "P. cuvieri" (posterior delimitation probability: 0.87); and species 5, P. ephippifer and the lineage 1 of "P. cuvieri" (posterior delimitation probability: 0.56) (Figure 4). In some of the species delimitation solutions, P. ephippifer was recognized as a distinct species (posterior delimitation probability: 0.31), separate from lineage 1 of "P. cuvieri" (posterior delimitation probability: 0.10). It is also noteworthy that lineage 1 of "P. cuvieri" was split into two estimated species in some of the MCMC samples, one representing the lineage 1A of "P. cuvieri" (cluster of specimens from Alagoinhas and Caruaru; see Lourenço et al., 2015) (posterior delimitation probability: 0.33) and another corresponding to the lineage 1B of "P. cuvieri" recognized by Lourenço et al. (2015) (posterior delimitation probability: 0.32). The bPTP analysis also showed some support for the recognition of Western Pará clade + Viruá clade as a single species (Physalaemus sp. in Figures 3 and 4), as this delimitation hypothesis was recovered in some of the MCMC solutions (posterior delimitation probability: 0.14) (Figure 4).
FIGURE 4 | Species delimitation estimated in the maximum likelihood solution of bPTP analysis. Numbers on the branches indicate posterior delimitation probabilities. The outgroup species are not shown. The five putative species inferred by bPTP analysis to compose the Physalaemus cuvieri-Physalaemus ephippifer species complex are shown in different colors. In the clade shown in red, the posterior probability (0.56) that supports this group as a single species is highlighted, as well as the probabilities that support the recognition of three species in this group (see text for details).
The same five species recovered in the maximum likelihood solution of bPTP were recovered in the recursive partition of the ABGD analysis when the intraspecific variation (P) is 0.77%. The primary partition of the ABGD analysis recognized four entities when P ≤ 0.77%, differing from the aforementioned result by identifying Western Pará clade + Viruá clade as a single entity instead of recognizing the Western Pará clade and Viruá clade as different groups. In the recursive partition when P = 0.48%, an increased number of entities was found (n = 8), because Western Pará clade was split into two, and P. ephippifer as well as the lineages 1A and 1B of "P. cuvieri" were recognized.
Analyses of 3RAD Data Set
A total of 319,878 loci were recovered from the 3RAD data set, and 23,911 loci were retained after filtering, with the number of loci per individual varying from 3,210 to 20,793 (Supplementary Tables S1, S3). The unrooted maximum likelihood phylogenetic analysis of the 3RAD data set recovered the same major groups inferred from the mitochondrial DNA sequences for the Physalaemus cuvieri-P. ephippifer species complex, including Physalaemus sp. (composed of Western Pará and Viruá clades), P. ephippifer, and lineages 1 to 3 of "P. cuvieri." A long branch was recovered between Physalaemus sp. and the remaining groups (Figure 5). Although we recovered lineage 3 as sister to the remainder of the Physalaemus cuvieri-Physalaemus ephippifer species complex in our analysis of mtDNA sequence data, our 3RAD data set did not include a true outgroup (see the section Specimens), and we were thus unable to root this phylogeny. In the PCA, PC1 (15.4% of variation explained) separated P. ephippifer and lineages 1 to 3 of "P. cuvieri" from the Western Pará clade, and PC2 (10.1% of variation) separated the Viruá clade from all other samples (Figure 6). In BPP, across all combinations of priors and both species tree topologies, posterior probabilities that designated groups represent distinct species were high. Physalaemus ephippifer, lineage 1 of "P. cuvieri," lineage 2 of "P. cuvieri," and lineage 3 of "P. cuvieri" were each recovered as distinct species with posterior probabilities of 1.00. Likewise, the Western Pará clade (posterior probability = 1.00) and Viruá clade (posterior probability = 0.93-1.00) were each recovered as distinct species, although there was some support for the recognition of these two groups together as a single species (posterior probability = 0.00-0.07).
DISCUSSION
Previous, independent studies by Lourenço et al. (2015) and Miranda et al. (2019) both recovered high diversity and deep genetic structure in Physalaemus cuvieri. Miranda et al. (2019) provided dense sampling in central Brazil, especially from State of Goiás, but did not include topotypes of P. ephippifer, a species that was shown to render P. cuvieri paraphyletic by Lourenço et al. (2015). Because Miranda et al. (2019) did not include DNA sequences previously generated by Lourenço et al. (2015) and did not make their own sequence data publicly available, we could not include them here, and we cannot make strong conclusions about the correspondence of major groups recovered in each study. However, based on the geographic distribution of the major clades recognized in each study, we can tentatively recognize a correspondence between populations A, B, and D from Miranda et al. (2019) and lineages 2, 3, and 1 of "P. cuvieri" from Lourenço et al. (2015), respectively.
Although the samples analyzed by both Lourenço et al. (2015) and Miranda et al. (2019) cover a large geographical area, the Amazon region remained under-sampled in each study. Here, the inclusion of specimens from Viruá and Western Pará, which are located in the mid-northern Amazon, revealed that the diversity within the Physalaemus cuvieri-Physalaemus ephippifer species complex is even higher than previously described or suspected (Lourenço et al., 2015; Miranda et al., 2019). Specimens from Viruá and Western Pará were recovered in a well-supported clade (Physalaemus sp.; Figure 3) in the mtDNA phylogenetic analyses, distinct from P. ephippifer and lineages 1 to 3 of "P. cuvieri" previously identified by Lourenço et al. (2015). This clade most likely represents one or two unnamed species, according to bPTP and ABGD analyses. Genetic distances (measured from partial 16S gene sequences) between Physalaemus sp. and lineages 1 to 3 of "P. cuvieri" are consistent with interspecific variation, following the general guideline of a 3% divergence threshold between intraspecific and interspecific divergences among Neotropical anurans (Fouquet et al., 2007; Lyra et al., 2017; see further discussion about this threshold value below). The maximum likelihood phylogeny, PCA, and BPP analyses from 3RAD data likewise demonstrate the distinctiveness of Physalaemus sp. from other members of the P. cuvieri-P. ephippifer species complex, and cytogenetic data reveal that these populations are readily distinguished from other members of the group, primarily by NOR patterns.
All mtDNA phylogenetic analyses recovered the Physalaemus sp. clade as sister to a clade composed of P. ephippifer and lineage 1 of "P. cuvieri." These analyses also corroborated the paraphyly of P. cuvieri with respect to P. ephippifer (Lourenço et al., 2015). In addition, the mtDNA phylogenetic analyses support P. fischeri as the sister taxon of the P. ephippifer-P. cuvieri species complex (including Physalaemus sp.), a relationship that remained unresolved in previous studies. The maximum-likelihood phylogeny inferred from 3RAD did not include P. fischeri and is thus unrooted, but it likewise recovered the monophyly of each of the major lineages described above.
The mtDNA sequence-based species delimitation analyses (ABGD and bPTP) support the recognition of at least four species in the Physalaemus cuvieri-Physalaemus ephippifer species complex. Both analyses support the recognition of lineages 2 and 3 of "P. cuvieri" as distinct species and also support the recognition of at least two additional species, with ambiguity remaining regarding two cases: 1) the existence of a total of one to three species in the clade that contains P. ephippifer and lineage 1 of "P. cuvieri"; and 2) the existence of a total of one or two species within Physalaemus sp. We discuss these two cases in greater detail below.
Species delimitation analyses based on our mtDNA data set recovered only partial support for the recognition of P. ephippifer as a species distinct from lineage 1 of "P. cuvieri." However, the BPP analyses using the 3RAD data set recovered P. ephippifer as a distinct species with a posterior probability of 1.00. These analyses should be interpreted with caution, as they may identify population structure rather than true species boundaries (e.g., Sukumaran and Knowles, 2017). Corroborative evidence for the recognition of two distinct species comes from our cytogenetic data. Heteromorphic sex chromosomes are present in Physalaemus ephippifer (Nascimento et al., 2010), but sex chromosome heteromorphism was not observed in lineage 1 of "P. cuvieri" (Quinderé et al., 2009). Sex chromosomes are known to play important roles in the evolution of intrinsic postzygotic isolation and consequently in speciation processes (Saether et al., 2007; Masly and Presgraves, 2007; Presgraves, 2008; Graves, 2016). Based on the analysis of crosses between species from distinct taxonomic groups, Lima (2014) concluded that given a similar amount of genetic divergence, taxa with homomorphic sex chromosomes show intermediate levels of postzygotic isolation compared to taxa with heteromorphic sex chromosomes and taxa without sex chromosomes. Thus, it is reasonable to suspect that the cytogenetic divergence observed between P. ephippifer and the lineage 1 of "P. cuvieri" may create a reproductive barrier between these lineages, and an incipient speciation may be in progress. Therefore, further study of these sex chromosomes and contemporary gene flow between these genetic lineages are still necessary to assess whether P. ephippifer and lineage 1 of "P. cuvieri" should be considered distinct species.
Another ambiguity regarding lineage 1 of "P. cuvieri" refers to the recognition of samples from Alagoinhas and Caruaru as a distinct species. The bPTP and ABGD analyses of mtDNA data, which included samples from both sites, provided some support for the recognition of two species within lineage 1 of "P. cuvieri" (referred to as lineages 1A and 1B of "P. cuvieri"). Although samples from Alagoinhas and Caruaru were not included in our 3RAD data set, the available data suggest the diversity inside the lineage 1 of "P. cuvieri" should be evaluated with caution in further taxonomic studies.
There is likewise some ambiguity regarding species boundaries within the Amazonian populations we refer to as Physalaemus sp. Although we demonstrate their distinctiveness from P. ephippifer and lineages 1 to 3 of "P. cuvieri," the existence of one or two species within this lineage remains unresolved. All phylogenetic analyses recovered two reciprocally monophyletic groups within Physalaemus sp. (i.e., Western Pará and Viruá clades), and the PCA from 3RAD data revealed substantial variation between these two groups. Although these two groups were recovered as distinct species with high posterior probabilities in BPP analyses, these probabilities were lower than for any other proposed species. The genetic distance in the 16S rDNA marker between the Western Pará and Viruá clades (i.e., 2%) is near the lowest value of interspecific distance found in the analysis of Fouquet et al. (2007), which was 1.9%. Although the divergence threshold value of 3% originally proposed by Fouquet et al. (2007) based on 16S gene partial sequences of 60 frog species is useful for preliminary suspicion of cryptic species, this guideline should be followed with caution, as distinct groups may present very different levels of interspecific variation. For example, in the genera Pristimantis (Padial et al., 2009) and Oreobates (Pereyra et al., 2014), interspecific distances over 3% are observed, while values lower than 1% are found between species of Alsodes (Blotto et al., 2013) or Rhinella (Pereyra et al., 2015). Therefore, the genetic distance found between the Western Pará and Viruá clades of Physalaemus sp. may be consistent either with interspecific or intraspecific variation.
Additional evidence for the distinctiveness of the Western Pará and Viruá clades of Physalaemus sp. can be found among cytogenetic differences. Although very similar, the cytotype I of Physalaemus sp., presented by the specimens of the Western Pará clade, diverges in NOR pattern from the cytotype II, which is found in the specimens from Viruá. Cytotype I shows a terminal NOR in the long arm of chromosome 8 that was absent in all of the specimens from Viruá. Although this cytogenetic variation may be consistent with interspecific divergence, it may also be interpreted as an interpopulational variation in Physalaemus sp.
Therefore, the available molecular and cytogenetic data are inconclusive with respect to the interpretation of the diversity within Physalaemus sp. and further studies, which should include morphological and acoustic data, are still necessary. Because we have a large geographic gap in our data set, the additional sampling of animals found in the region between Viruá and the sites we sampled in Pará will be particularly helpful to evaluate contemporary gene flow in Physalaemus sp. and assist in further taxonomic decisions.
Although necessary, the taxonomic revision of the species complex Physalaemus cuvieri-Physalaemus ephippifer will not be a trivial task. The P. cuvieri species group has a complex and confusing taxonomic history due to a combination of factors, which include overlapping species descriptions, highly polymorphic taxa, and cryptic species. This problem is most evident in its namesake species, Physalaemus cuvieri. The type locality of P. cuvieri is imprecise ("America, Brasilia"), the type specimens are not noted in recent type specimen lists (although they were presumably deposited in the NHMW collection; NHMW: Naturhistorisches Museum Wien), and no illustration or collector name was given in the original description. Additionally, several available names are included in the synonymy of P. cuvieri (i.e., Paludicola neglecta Ahl, 1927 and Gomphobates notatus Reinhardt and Lütken, 1862 "1861"), and other names included as synonyms of other species of the P. cuvieri species group must be carefully reviewed (for example, Paludicola bischoffi Boulenger, 1887). Future taxonomic revisions will need to grapple with these challenges to resolve the taxonomy of the group.
Cytogenetic Comparisons
In specimens assigned to the lineage 1 of "Physalaemus cuvieri," two NOR-bearing submetacentric chromosome pairs, classified as pairs 8 and 9, were detected (see the cytogenetic study by Quinderé et al., 2009 and the phylogenetic inferences of Lourenço et al., 2015). The NOR in chromosome 8 was interstitially located in the long arm, adjacent to faint heterochromatic bands, and polymorphic in size, whereas chromosome 9 was highly polymorphic with respect to NOR number and size, with the most frequent NOR being distally located in the long arm and coincident with a C-band (Quinderé et al., 2009). A similar NOR-bearing chromosome 8 is present in several specimens assigned to the lineage 2 of "P. cuvieri," although in specimens from Argentina, which clustered within this lineage, the principal NOR is terminally located in the short arm of the metacentric chromosome pair 11 (see Quinderé et al., 2009 and the phylogenetic inferences of Lourenço et al., 2015).
The metacentric NOR-bearing chromosomes 8 of the cytotypes I and II of Physalaemus sp. described here differ from all the aforementioned NOR-bearing chromosomes 8, 9, and 11 of "P. cuvieri," especially by the presence of a fixed pericentromeric NOR. Also, among the specimens assigned to the lineage 3 of "P. cuvieri," which showed high intrapopulational variation in NOR pattern and several NOR-bearing chromosomes, no unambiguous similarity was found with respect to the NOR-bearing chromosomes 8 of cytotypes I and II, although a pericentromeric NOR had been found in a chromosome classified as number 10 in that sample (see Quinderé et al., 2009 and the phylogenetic inferences of Lourenço et al., 2015). It is also interesting to note that the NORs found in chromosome 9 of specimens from lineage 1 of "P. cuvieri" coincide with C-bands (Quinderé et al., 2009), as do the NORs of cytotypes I and II of Physalaemus sp. analyzed here, and that a pericentromeric NOR was additionally found in one chromosome 9 of an individual analyzed by Quinderé et al. (2009) and subsequently assigned to the lineage 1 of "P. cuvieri" by Lourenço et al. (2015).
In Physalaemus ephippifer, the NORs are found in chromosome pair 8, which corresponds to the sex chromosomes Z and W of this species. Chromosomes Z and W of P. ephippifer are heteromorphic in morphology, C-banding pattern, and NOR number (Nascimento et al., 2010) and both sex chromosomes differ from the NOR-bearing chromosomes found in the specimens of Physalaemus sp. analyzed here.
Despite the conspicuous differences discussed above, the cytotypes I and II of Physalaemus sp. share with P. ephippifer (Nascimento et al., 2010) and lineages 1 to 3 of "P. cuvieri" (Silva et al., 1999; Quinderé et al., 2009; Vittorazzi et al., 2014) the interstitial C-band in the metacentric chromosome 5, which was inferred as a synapomorphy of the Physalaemus cuvieri species group (see Vittorazzi et al., 2014 and Lourenço et al., 2015). Another remarkable characteristic observed in the cytotypes I and II of Physalaemus sp. and also in the karyotype of P. ephippifer (Nascimento et al., 2010) is the DAPI-positive pericentromeric C-band of the short arm of chromosome pair 3.
Although very similar, the cytotypes I (Western Pará clade) and II (Viruá clade) of Physalaemus sp. diverge from each other with respect to the NOR pattern, as the terminal NOR found in chromosome 8 of cytotype I was absent in all of the specimens from Viruá. Therefore, cytogenetic signatures may be assigned to the Western Pará and Viruá clades, which may be interpreted as either interspecific or intraspecific variation, as discussed above.
Phylogeographic Implications
The Physalaemus cuvieri-Physalaemus ephippifer species complex is widely distributed and occurs in diverse morphoclimatic domains of South America, including the Atlantic Forest, the Amazon Forest, and regions characterized by open vegetation areas such as the Caatinga of north-eastern Brazil and the Cerrado of central Brazil. The Western Pará and Viruá clades of Physalaemus sp. we describe here occur in the Guiana Amazonian area of endemism, which is one of the eight areas of endemism recognized in Amazonia (Silva et al., 2005; López-Osorio and Miranda-Esquivel, 2010). None of the remaining lineages/clades previously assigned to the P. cuvieri-P. ephippifer species complex (Lourenço et al., 2015; Miranda et al., 2019) are distributed in this area. The sister clade of the group composed of Western Pará and Viruá clades, which encompasses the lineage 1 of "P. cuvieri" and P. ephippifer, is distributed from the Belém Amazonian area of endemism to the Atlantic Forest, including the intervening region of the Caatinga. Finally, lineage 2 of "P. cuvieri" occurs in the Atlantic Forest, the Cerrado, and the southern Caatinga, while lineage 3 of "P. cuvieri" includes specimens from the Cerrado (Supplementary Figure S3).
In a recent phylogeographic study, Miranda et al. (2019) used a variety of methods to discuss processes potentially responsible for the diversification of some populations currently recognized as P. cuvieri. However, as mentioned earlier, this study did not include samples from P. ephippifer (which was already suggested by Lourenço et al. (2015) as a member of this group), nor samples from the Amazonian regions we included here. It is likely that the inclusion of these populations would dramatically influence phylogeographic inferences, but because we also have incomplete geographic sampling, we refrain from drawing any further conclusions. Future phylogeographic studies of the P. cuvieri-P. ephippifer species complex are certainly warranted; these should include topotypes of P. ephippifer and the Amazonian lineages described here, and effort should be made to locate and obtain samples from other regions that may be home to previously unsampled populations of the group.
CONCLUSION
In conclusion, our cytogenetic and molecular data demonstrate that the species-level diversity in the Physalaemus cuvieri-Physalaemus ephippifer species complex is much higher than currently described. In particular, we demonstrate the distinctiveness of frogs (Physalaemus sp.) from the Amazon, a geographic region that deserves greater attention. A comprehensive taxonomic revision of these frogs is warranted and should include a review of specimens in collections and in the literature, analysis of advertisement calls, and a focus on contact zones between putative species. We encourage future studies to collect and integrate genomic and cytogenetic data to unravel this intricate taxonomic situation.
ETHICS STATEMENT
The specimens were collected under a permit issued by the Instituto Chico Mendes de Conservação da Biodiversidade/ Sistema de Autorização e Informação em Biodiversidade (ICMBio/SISBIO) (permit number 32483), which also includes the authorization for extracting tissue samples. Chromosome preparations were obtained using a protocol that was approved by the Committee for Ethics in Animal Use of the University of Campinas (CEUA/UNICAMP) (permit number 3454-1).
AUTHOR CONTRIBUTIONS
LL conceived the study and conducted the analyses of the mtDNA sequences. JN obtained the cytogenetic data and mitochondrial sequences. PS collected the frogs from Santa Bárbara, obtained some of the chromosomal preparations, and assisted in figure editing. JL collected the frogs from Western Pará. TP and LL prepared the 3RAD libraries and analyzed the 3RAD data set. | 2019-08-22T13:13:08.211Z | 2019-08-14T00:00:00.000 | {
"year": 2019,
"sha1": "c567fac3339f3a520ff9fef23d35bf68945ef4f9",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2019.00719/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5588ab0c3b4a0534131f0bab6324901610d9916c",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
148566224 | pes2o/s2orc | v3-fos-license | Adaptation of Perfectionism Cognitions Inventory into Turkish *
The aim of this study was to translate the Perfectionism Cognitions Inventory (PCI; Flett, Hewitt, Blankstein & Gray, 1998) into Turkish and to conduct its validity and reliability studies with a sample of university students. The PCI measures perfectionistic cognitions by focusing on automatic thoughts about perfectionism. The inventory is composed of 25 Likert-type items rated on a 4-point scale. The study was conducted with participants from a public university in Ankara in two phases. The first phase of the study included 418 students (238 female and 180 male). In the second phase, 715 students (351 female and 364 male) participated in the study. Results provided evidence for the reliability and validity of the Turkish version of the PCI in a sample of university students.
Introduction
Perfectionism has been described as setting high standards for one's own performance and striving to achieve those standards (Flett & Hewitt, 2002). The first signs of a theoretical framework for perfectionism can be traced back to psychodynamic theory, in which Adler pointed out the adaptive and maladaptive perfectionism influencing psychological health (Akay-Sullivan, Sullivan, & Bratton, 2016). Adler stated: "the striving for perfection is innate in the sense that it is a part of life, a striving, an urge, a something without which life would be unthinkable" (Ansbacher & Ansbacher, 1956, p. 104, cited in Stoeber, 2018). However, an excessive focus on perfectionism might turn into maladaptive behavior, which is why perfectionism appears in DSM-5 (American Psychiatric Association, 2013) under obsessive-compulsive personality disorder (Stoeber, 2014). Hewitt and Flett (1991) argued that perfectionism was a multidimensional construct by distinguishing between self-oriented perfectionism, other-oriented perfectionism and socially-prescribed perfectionism. In self-oriented perfectionism, individuals strive to be perfect by reaching the highest standards they set for their own behaviors. In other-oriented perfectionism, individuals set high standards for others to achieve (Stoeber, Feast & Hayward, 2009). Socially-prescribed perfectionism, on the other hand, describes the situation in which individuals believe that other people set high standards for them and they try to reach those standards.
While the source of self-oriented perfectionism mostly comes from within, the source of socially-prescribed perfectionism lies outside. Enns and Cox (2002) implied that socially-prescribed perfectionism was associated with psychological maladjustment, while self-oriented perfectionism represented both negative and positive characteristics, such as ruminative brooding and task-oriented coping, respectively. Similarly, Stoeber (2014) stated that other-oriented perfectionism was positively related to narcissistic and antisocial personality disorder, and socially-prescribed perfectionism was positively associated with obsessive-compulsive and antisocial personality disorder.
As a multidimensional construct, perfectionism has been studied widely, and several instruments have been developed to measure its dimensions. For example, the Multidimensional Perfectionism Scale (Hewitt & Flett, 1991) measures self-oriented, other-oriented and socially-prescribed perfectionism; the Frost Multidimensional Perfectionism Scale (Frost, Marten, Lahart, & Rosenblate, 1990) was developed to assess students' perfectionism tendencies; and the Almost Perfect Scale Revised (Slaney, Rice, Mobley, Trippi, & Ashby, 2001) aims to differentiate the adaptive and maladaptive perfectionism people experience. Flett et al. (1998) suggest that multidimensional perfectionism can be measured in order to capture individual differences in perfectionism. Previous research indicated that multidimensional perfectionism was related to obsessive-compulsive disorder, borderline disorder, passive-aggressive behavior and narcissism (Hewitt & Flett, 1991). Other-oriented and socially-prescribed perfectionism were also found to be indicators of personality disorders (Ayearst, Flett, & Hewitt, 2012).
Multidimensional perfectionism has been studied not only in relation to disorders but also with other variables such as test anxiety and parental attitude. Some perfectionism measures have been adapted into Turkish, and new instruments have also been developed. For example, the Multidimensional Perfectionism Scale (Hewitt & Flett, 1991) was adapted into Turkish by Oral (1999). The Turkish adaptation study of the Frost Multidimensional Perfectionism Scale was conducted by Özbay and Mısırlı-Taşdemir (2003) with a sample of high school students. The Adaptive-Maladaptive Perfectionism Scale (AMPS; Rice & Preusser, 2002) was translated into Turkish by Uz Baş (2010), and the Almost Perfect Scale Revised was adapted into Turkish by Ulu, Tezer and Slaney (2012). Furthermore, the Positive Negative Perfectionism Scale (Kırdök, 2004) was developed in Turkey.
The literature in Turkey is rich in terms of research investigating perfectionism as a multidimensional construct. For example, Koydemir, Selışık and Tezer (2005) studied the association between marriage satisfaction and multidimensional perfectionism. Similarly, Erözkan (2009) focused on the link between depression and the multidimensional subscales of perfectionism in eighth-grade students. Dilmaç and Aydoğan reported positive relations with the attention to errors, distrust in behaviors, family expectations and parental criticism dimensions of perfectionism. Başol and Zabun (2014) investigated the relationship between academic success and the role of multidimensional perfectionism, test anxiety, parental attitude and private academic course attendance among middle school students; the results of their study indicated that the order dimension of perfectionism was negatively related to student success. In addition, Özgüngör (2003) examined the multidimensional aspects of perfectionism in predicting students' academic goal orientation.
In the last two decades, perfectionism studies have been extended to include cognitions, or automatic thoughts, regarding the attempt to be perfect (Flett et al., 1998). Flett, Hewitt, Whelan, and Martin (2007) argue that people who perceive discrepancies between their own actions and their ideal goals show signs of perfectionist thinking based on automatic "should" thoughts regarding expectations. In this regard, irrational thinking has been related to perfectionist thinking (Ellis, 2002). Stoeber, Kobori and Brown (2014) pointed to the importance of perfectionism cognitions in explaining maladjustment as much as trait perfectionism. The difference between perfectionism cognitions and trait perfectionism is that while trait perfectionism asks for statements of beliefs, feelings and behaviors (Hewitt & Flett, 1991), perfectionism cognitions "focuses on the way perfectionists think, what thoughts they have, and how frequently they have these thoughts" (Stoeber et al., 2014, p. 648).
Parallel to the studies indicating the importance of cognitions in perfectionism, scale development efforts aiming to measure perfectionist cognitions have emerged. In this regard, the Perfectionism Cognitions Inventory (PCI) was developed by Flett et al. (1998) to measure the frequency of automatic thoughts related to perfectionism. As described by Enns and Cox (2002), the scale was designed entirely from a cognitive perspective, including both perfectionism and imperfectionism thoughts, and it measures the frequency of thoughts during the past week. The PCI consists of 25 items rated on a 4-point Likert-type scale from 0 (never) to 4 (always). Additionally, the Perfectionistic Self-Presentation Scale (Hewitt et al., 2003) was developed to assess one's desire to be considered perfect by others; it consists of 27 items on a 7-point scale and three subscales: perfectionistic self-promotion, non-display of imperfection and nondisclosure of imperfection. Finally, Rice and Preusser (2002) developed the Adaptive-Maladaptive Perfectionism Scale to measure adaptive and maladaptive features of perfectionism in elementary-level children. The scale consists of 27 Likert-type items rated on a 4-point scale, with four subscales: sensitivity to mistakes, contingent self-esteem, compulsiveness and need for admiration.
Among others, the PCI (Flett et al., 1998) has not been adapted into Turkish yet, and currently there is no perfectionism scale in Turkish that measures cognitive aspects, including perfection and imperfection thoughts and the frequency of those thoughts. Thus, the aim of the present study was to adapt the PCI into Turkish and test the reliability and validity of the measure. The PCI has not been adapted into any other language yet, either.
Therefore, this is the first study regarding the translation of the PCI into another language.
It is hoped that the findings of the current study can contribute to measuring the cognitive aspects of perfectionism in Turkey and to future studies investigating perfectionism and related variables.
Participants
The participants of the first phase of the study were 418 English language preparatory school students at a public university in Ankara, Turkey. Data were collected via an online survey system, and convenience sampling was used. Among the participants, 238 (56.9%) were female and 180 (43.1%) were male. The age of the participants in the first phase ranged from 17 to 48, with a mean of 19.69. The participants of the second phase were 715 (351 female and 364 male) English language preparatory school students. Data were collected in paper-pencil format, and stratified sampling was used. The age of these participants ranged from 17 to 27, with a mean of 18.57.
Instruments
A demographic information form and the translated version of the PCI were used to collect data. The demographic form included three questions about gender, language level and age. The Perfectionism Cognitions Inventory (PCI), developed by Flett et al. (1998) to measure the frequency of automatic thoughts related to perfectionism, consists of 25 items rated on a 4-point Likert-type scale from 0 (never) to 4 (always); the items loaded on one factor with an eigenvalue of 9.39, explaining 37.6% of the variance (Flett et al., 1998). Higher scores indicate a higher level of perfectionistic thoughts, and the total score ranges from 0 to 100. Cronbach's alpha of the measure was .96, and the test-retest reliability was reported as .67 (Flett et al., 1998). Validity studies also showed that the PCI correlated with the Attitudes Toward Self Scale (r = .55; self-criticism, r = .57; overgeneralization, r = .43; Flett et al., 1998) and with anxiety (Beck Anxiety Inventory, r = .42) and depression (Beck Depression Inventory, r = .48) (Flett et al., 2007). Some sample items from the scale are: "I expect to be perfect" and "My work has to be superior".
Procedure
Prior to data collection, the researchers received permission from the Human Subjects Ethics Committee of the university where the study was conducted. The adaptation process of the PCI into Turkish followed the steps suggested by Sousa and Rojjanasrirat (2011): a) translation of the measure into the target language, b) comparison of the translated forms of the scale by experts, c) cognitive debriefing and d) testing the psychometric properties with the target population.
In the current study, the necessary permission to translate the PCI into Turkish was first obtained from the author of the scale, G. L. Flett. Secondly, the scale was translated from English into Turkish by five experts independently. Three of the experts were advanced PhD students in the field of psychological counseling and guidance, and two were instructors of English as a foreign language in public high schools. After the five experts completed the translation of the measure, the researchers examined each item regarding the clarity and objectivity of the translation.
In the next step, the researchers consulted an English language expert for final feedback on the accuracy of the translation. The necessary wording and grammar changes were made based on this feedback. Later, in the cognitive debriefing, the Turkish translated items of the PCI were discussed with five English preparatory school students to check the clarity of the items and to assess whether the translations led to any misunderstanding. The students stated that the indefinite pronoun at the beginning of one sentence caused uncertainty: they had difficulty understanding whether the pronoun referred to academic tasks or everyday tasks. The language expert's opinion was therefore sought for this item; the expert stated that there was no alternative wording that reflected the intended meaning. After all these steps, the scale was finalized for administration.
The reliability and validity analyses of the Turkish version of the PCI were then conducted in two phases. In the first phase of the study, Exploratory Factor Analysis (EFA) was conducted to test the underlying factor structure of the instrument. In the second phase, Confirmatory Factor Analysis (CFA) was applied to test the factor structure established in the first phase. Using a different sample for the CFA was required (Costello & Osborne, 2005) in order to provide strong evidence for the measurement model and to obtain similar results across different samples (MacCallum, Widaman, Zhang, & Hong, 1999). Both groups of participants were university students attending the English language preparatory school of a public university. In the first phase, data were collected via the online survey system of the university, and it took participants ten minutes to fill out the instrument. In the second phase, data were collected during class hours, and the students were asked to fill in the scales in paper-pencil format. The first phase of the study was conducted in the spring semester and, after the necessary analyses, the second phase was conducted in the fall semester.
Data Analyses
The descriptive statistics and exploratory factor analyses were conducted via the SPSS 24 (Statistical Package for the Social Sciences) program, and the confirmatory factor analyses were carried out with LISREL 8.80. The results of the confirmatory factor analysis were evaluated based on the following fit indices: the chi-square/df ratio, the goodness of fit index (GFI), the comparative fit index (CFI) and the root mean square error of approximation (RMSEA). The criteria of GFI and CFI at .90 or above, RMSEA at .08 or below and a chi-square/df ratio of 5 or lower offered by Schumacker and Lomax (2010) were considered as the reference points in reporting the results of the present study.
Results
In order to examine the previously established unidimensional factor structure of the PCI, Exploratory Factor Analysis (EFA) was conducted with the participants of the first phase: the factor structure of the Turkish version of the PCI was tested with 418 English language preparatory school students. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy (.92) and Bartlett's Test of Sphericity (p = .00) indicated good factorability of the data. The eigenvalues and the scree test showed a single-factor solution, and the unidimensional structure of the scale accounted for 34.62% of the variance in the data set. The factor loadings are given in Table 1.

The results of the second phase of the study also verified the unidimensional factor structure of the PCI (shown in Figure 1). The Confirmatory Factor Analysis of the Turkish version of the PCI indicated an adequate model fit for the unidimensional factor structure [Satorra-Bentler χ²(265) = 1285.96, p = .00; χ²/df ratio = 4.85; GFI = .89; CFI = .96; RMSEA = .07; SRMR = .06] with some modifications between the error terms: item 5-item 7, item 2-item 7, item 9-item 12, item 3-item 15. As GFI is sensitive to sample size and the other fit indices were in accordance with the cut-off values, it was concluded that the results confirmed the single-factor structure of the Turkish version of the Perfectionism Cognitions Inventory with slight modifications. Further analysis examined the one-factor structure of the Turkish version of the PCI with unstandardized and standardized parameter estimates, t values and explained variance; the results are summarized in Table 2. The Cronbach alpha value of the PCI was .94. In line with their low loadings in the EFA, item 22 and item 24 had standardized factor loadings below .30; however, there was no need to remove these items considering the significance of their t values. The standardized estimates, t values and explained variance also supported the one-factor structure of the PCI. In conclusion, the results provided evidence for the reliability and validity of the Turkish version of the PCI in a sample of university students.
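For readers who wish to replicate the first-phase checks, the following is a minimal Python sketch of the EFA procedure reported above, using the factor_analyzer package. The data file name and its loading are illustrative assumptions; the actual analyses in this study were run in SPSS 24 and LISREL 8.80.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Hypothetical file holding the 25 PCI item responses, one column per item.
items = pd.read_csv("pci_phase1.csv")

# Factorability checks: Bartlett's sphericity and the KMO measure.
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_model = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_square:.2f}, p = {p_value:.3f}; KMO = {kmo_model:.2f}")

# Single-factor EFA, mirroring the unidimensional solution reported above.
efa = FactorAnalyzer(n_factors=1, rotation=None)
efa.fit(items)
variance_explained = efa.get_factor_variance()[1][0]
print(f"Variance explained by factor 1: {variance_explained:.1%}")
print(efa.loadings_.round(2))
```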
Discussion
The current study aimed to adapt the Perfectionism Cognitions Inventory (Flett et al., 1998) into Turkish and to test its psychometric properties. Perfectionism has been extensively studied as a multidimensional construct, and the measures previously adapted or developed in Turkey were also multidimensional (e.g., Kırdök, 2004; Oral, 1999; Özbay & Mısırlı-Taşdemir, 2003; Uz Baş, 2010). However, no instrument aiming to measure perfectionism cognitions had been developed or adapted. The limited amount of scale-development research on perfectionism and the absence of any Turkish measure of perfectionism cognitions therefore increase the importance of the present research.
Within the scope of current study, the unidimensional factor structure and reliability of Perfectionism Cognitions Inventory were tested.
In the first phase of the study, the EFA results indicated that the Turkish PCI had the same unidimensional factor structure as the original English form. Although items 22 and 24 had factor loadings of .32, all other items had factor loadings above .32. The total variance accounted for was 34.62% in the present study, quite close to the 37.6% explained in the original study (Flett et al., 1998), and it can be concluded that the scale had construct validity. Similarly, the CFA results in the second phase supported the one-factor structure of the PCI, with acceptable model fit indices. In particular, the value of chi-square divided by degrees of freedom was below five, indicating an acceptable model fit according to the criteria offered by Schumacker and Lomax (2010). In the current study, the explained variance in the CFA was low for items 22 and 24 due to their low factor loadings; however, it should be noted that these two items also had the lowest factor loadings in the original study (Flett et al., 1998). Overall, the findings indicated a one-factor structure, as in the original inventory proposed by Flett et al. (1998). The Cronbach alpha coefficient in the current study was .94, quite similar to the value of .95 in the original study (Flett et al., 1998); this high internal consistency coefficient indicated high reliability for the scale. Moreover, the test-retest reliability of the scale was .89, which was higher than the value of .67 reported in the original scale development study (Flett et al., 1998).
No published study regarding the translation of the PCI into other languages exists yet. The results of this study thus support the psychometric properties of the original scale and provide a point of comparison for future adaptations of the scale into other languages. Considering that the PCI has been used with a variety of samples, from clinical patients and adults to students (Hewitt et al., 2003), the Turkish version of the PCI can also be used with other samples, such as teenagers in high school settings, adults or older people, in relation to other psychological variables, because the items are not restricted to the present sample.
Despite these strengths, some limitations should be acknowledged when discussing the results. First of all, the sample consisted of English language preparatory school students at a single university; therefore, the results cannot be generalized to college students at other class levels. In future studies, the Turkish version of the PCI should be tested in a representative sample of university students from different class levels. Additionally, further studies could provide more evidence for the convergent validity of the PCI by calculating the correlations between PCI scores and scores on other related scales.
Figure 1. The Coefficients in Standardized Values for the Turkish Version of PCI
Table 1. Factor Loadings and Communalities of the Turkish Version of PCI | 2019-05-05T20:18:43.403Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "580b3387af9b335be2a1f91632623d7f8d3a24af",
"oa_license": "CCBYNC",
"oa_url": "https://yjer.yildiz.edu.tr/storage/upload/pdfs/1672403236-en.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "580b3387af9b335be2a1f91632623d7f8d3a24af",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
236486516 | pes2o/s2orc | v3-fos-license | The exploration of PBL mixed teaching mode in secondary vocational classes
The large-area coverage of networks and the upgrading of intelligent software promote the further reform of network teaching. Amid the discussion of the hot topics of online teaching and network teaching, teachers are also exploring how to adopt appropriate teaching methods to solve the teaching problem of teachers and students "being apart from each other". The rise of platform technology and technical resources also promotes the innovation of teaching. This paper studies a mixed teaching mode based on the PBL learning method, using platform tools to explore the practical significance of this teaching mode in secondary vocational classes; it provides a thinking framework for teachers' reference.
Introduction
In April 2018, the Ministry of Education issued the Education Informationization 2.0 Action Plan, asking for active promotion of "Internet + education" and insisting on the core concept of deep integration of information technology with education and teaching. Vocational education, as a type of education, has certain particularities in professional courses, teaching methods, the learning process and other aspects [1]. The wider choice of courses, the more complex teaching methods and the more flexible learning process all promote the application of education informationization in secondary vocational classes.
Mycos' research data show that most teachers regard the flexibility of online teaching, in both course resources and learning time, as its main advantage. The application of MOOCs, the construction of National Excellent Online Courses, Blue Ink Cloud Class, Rain Classroom, Learning Pass and so on all provide a solid foundation for online teaching [2]. These technical resources and platform tools have come into the classroom, which has a great influence on teachers' "teaching", students' "learning", schools' "management" and other educational forms [3]. This paper integrates the PBL learning mode, the mixed teaching mode and platform tools so that their advantages are complementary and mutually reinforcing.
PBL learning mode
PBL (Problem-Based Learning) is a learning mode based on problems; it is also a learning mode based on reality and centered on students. PBL connects learning to real problems, designing the problems to be solved around the concepts and basic principles to be mastered.
Introduction to Platform Tools
In recent years, the application of platform tools has laid a foundation for online teaching, and their application has promoted teaching innovation. Mycos' data show that 55.4% of teachers agree to students bringing electronic devices into the classroom, which can enrich teaching methods.
Blue Ink Cloud Class is an educational software application that is free to use on both mobile phones and computers. It offers resource sharing, Q&A discussion, brainstorming, peer assessment of assignments, data export and other functions, and it provides the conditions for teachers and students to carry out remote synchronous online learning. Teachers can learn online with students after class through the teaching platform, without the limitations of space and time [6]. The teacher-directed teaching process is thus transformed into a process in which teachers and students communicate and learn together.
Rain Classroom is an educational learning software application launched by Tsinghua University. It has many functions, such as PPT making, real-time answering, multi-screen interaction and a Q&A bullet screen. The functions "you can point to where you do not understand" and "submit questions in class" not only promote interaction between teachers and students but also allow the teaching schedule and rhythm to be adjusted according to the needs of students [7].
The functions of Blue Ink Cloud Class and Rain Classroom partly overlap, while each also has unique features. Comparing the two, Blue Ink Cloud Class has more obvious advantages before and after class, while the advantage of Rain Classroom is more obvious in class. Therefore, a combination of the two is used as the platform tools: Blue Ink Cloud Class is adopted to monitor students' preview before class and their homework completion, and Rain Classroom is used to record students' learning process and participation in class.
Application of platform tools
4.2.1. Before class. The PBL learning model requires a problem-oriented approach to train students' ability to raise, research and solve problems, to stimulate students' enthusiasm for the classroom, and to cultivate students' ability for independent learning and their awareness of using knowledge to solve problems.
Before class, students need to be given the relevant knowledge and guided to understand and prepare the course content in advance. Teachers upload courseware and other course resources to the Blue Ink Cloud Class platform, let students study independently and ask questions based on what they have learned. Students are divided into groups of about 4-6 to explore and research the questions raised during their study, and the groups work together to prepare materials.
4.2.2. During class.
Team members collect and organize data based on the questions they have designed. In class, each group's results are displayed on the Rain Classroom platform: students become the teachers, and the teacher becomes an assistant. Rain Classroom can push courseware, conduct bullet-screen interaction, collect submitted questions, run unit tests and so on, making the classroom the students' home court and arousing students' enthusiasm for study.
4.2.3. After class.
The learning process is not episodic but continuous, and it should not be confined to the classroom [8]. After class, students need to broaden their knowledge. Teachers assign tasks that help students think outside the box and develop their creativity, let students succeed in a particular field of study and use what they have learned flexibly, and cultivate students' ability to solve practical problems with what they have learned.
PBL blended teaching mode architecture
The thinking structure of the PBL blended teaching mode is divided into three modules: before, during and after class (see Figure 1). It also includes three parts: the teaching environment, teaching activities and teaching evaluation (see Figure 2). The teaching environment requires abundant resources and a teaching platform for conveniently uploading materials. It also requires centralized teaching in digital classrooms; in the course of teaching, platform tools are used to enrich teaching methods and guide students' learning attention. Teaching activities mainly focus on students' independent inquiry learning, in which groups cooperate to discuss, research and summarize their findings on the problems designed by teachers. Students are guided to expand their knowledge structure and present their learning achievements in centralized teaching. Finally, process evaluation, summative evaluation, peer evaluation and self-evaluation are adopted to evaluate the teaching effect of the PBL blended teaching mode. Most secondary vocational students are addicted to video games and have weak self-control with electronic devices. Secondary vocational schools usually ask students to hand in their electronic devices in class; students are not interested or enthusiastic about lessons, and very few students listen attentively. Applying the PBL blended teaching mode converts teacher instruction into student learning: the teacher is no longer the center of the classroom, and the classroom atmosphere with students as the principal part is much better. Letting electronic devices into the classroom as part of this new teaching method will greatly arouse students' interest in learning; there is more interaction between teachers and students, and the participation of students in class is also greatly increased. However, given students' weak self-control with electronic devices in class, teachers should design the teaching process around the devices carefully, so that the devices are used reasonably and help students manage and control themselves.
6.1. Integrating information technology deeply into the classroom and promoting information-based teaching reform
It is clearly proposed in the "20 Articles" on vocational education to promote the further development of "vocational education + Internet". The construction of technical resources and platform tools lays a foundation for the implementation of the PBL blended teaching mode. In the course of teaching, teachers should use the teaching platform to change the teaching method, enrich the course content and enliven the classroom atmosphere. Information-based teaching can break through the limitations of time and space; with more abundant teaching resources, it broadens students' knowledge horizons and renews their knowledge system. In the implementation of teaching, the deep integration of information technology and the classroom promotes information-based teaching reform in vocational education; under special circumstances, learning can even continue when classes are suspended [9].
6.2. Improving the teaching ability and quality of teachers

The PBL blended teaching mode requires teachers to design problems according to reality, so that students master the concepts and theoretical basis of knowledge in the process of solving problems. This requires teachers to design the teaching process carefully and improve their instructional design ability. Teachers use the teaching platform and network resources to enrich the classroom, which requires them to operate the network skillfully and keep pace with the times. The PBL blended teaching model breaks the limitations of the closed learning model and brings network resources and information technology into the classroom [10]. Teachers can use the teaching platform to supervise students throughout the process and monitor students' learning behavior, which helps them give targeted guidance, better control the teaching schedule and guide students' learning. By analyzing the data from the teaching platform, teachers can understand the teaching effect and improve the teaching quality.
6.3. Changing students' learning style and improving their independent learning ability
One essence of learning is learning how to learn. The PBL blended teaching mode is active learning in the real sense: the classroom becomes the students' home court. Teachers cannot stand by and watch; they should be facilitators who assist students in their study, setting the conceptual theory to be learned within real, complex and meaningful practical problems and letting students work in groups to solve those problems so as to learn the underlying concepts, theories and skills. Team members are required to present their results in class, which exercises students' ability to express themselves. The questions raised by students while listening to the lectures come from their own thinking, which trains their ability to think. The introduction of network resources makes students' learning scope broader and learning resources more accessible [11]. These conditions can change the way students learn, stimulate students' interest in learning and promote independent learning.
Conclusion
The establishment of the PBL blended teaching mode provides a reference for teaching reform. The problems should be based on students' existing foundation, which requires teachers to understand the students fully. The introduction of these teaching methods requires teachers to take the initiative to guide, to learn together with students, and to accept students' opinions actively. The PBL blended teaching mode is conducive to the cultivation of students' comprehensive ability, but efforts should be made in all aspects to keep improving the teaching mode. | 2021-07-29T20:06:55.622Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "afc090ffe50201ceb8be3cd4012eeab53288eac9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1976/1/012076",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "afc090ffe50201ceb8be3cd4012eeab53288eac9",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Physics"
]
} |
239082850 | pes2o/s2orc | v3-fos-license | Concealed Object Detection and Recognition System Based on Millimeter Wave FMCW Radar
: At present, millimeter wave radar imaging technology has become a recognized human security solution in the field. The millimeter wave radar imaging system can be used to detect concealed objects; multiple-input multiple-output radar antennas and synthetic aperture radar techniques are used to obtain the raw data, and the analytical Fourier transform algorithm is used for image reconstruction. When imaging a target 90 mm from the radar, which belongs to the near-field imaging scene, the image resolution can reach 1.90 mm in the X-direction and 1.73 mm in the Y-direction. Since the error caused by the distance between radar and target leads to noise, the original reconstructed image is processed by a gamma transform, which eliminates image noise; the image is then enhanced by a linear stretching transform to improve visual recognition, which lays a good foundation for supervised learning. In order to deploy the machine learning algorithm flexibly in various application scenarios, ShuffleNetV2, MobileNetV3 and GhostNet, representatives of lightweight convolutional neural networks with redefined convolutions, branch structures and optimized network layer structures, are used to distinguish multi-category SAR images. Through the fusion of squeeze-and-excitation and selective kernel attention mechanisms, more precise features are extracted for classification; the proposed GhostNet_SEResNet56 realizes the best classification accuracy on SAR images within limited resources, with a prediction accuracy of 98.18% and 0.45 M parameters.
Introduction
In recent years, terrorist activities have occurred frequently, mostly in crowded public places such as airports, railway stations and subways [1]. At present, there are publicity and security measures to prohibit the carrying of dangerous goods in relevant areas, but the existing security mode cannot meet the demand of real-time security in peak passenger flow [2]. Therefore, it is necessary to carry out non-contact human safety inspection for people who may carry dangerous substances. The current security imaging technology mainly consists of X-ray imaging, infrared imaging, millimeter wave imaging and so on.
Currently, millimeter wave radar is widely used in human vital signs measurement, aerial imaging and non-injury detection by analyzing the amplitude and phase information of the received signal [3]. For near-field imaging systems, the millimeter wave can penetrate all kinds of optically opaque and dielectric materials, such as composites, ceramics and clothing, and can therefore image hidden targets beneath the surface. Millimeter wave radar detection and imaging technology has great potential in various application markets, such as ground penetrating radar, non-destructive testing and medical imaging, and has become one of the most important imaging technologies of the last ten years. Millimeter wave radar has the advantages of high resolution and no harm to the human body [4]. However, many millimeter wave imaging studies involve highly complex and expensive customized systems. In 2020, MIMO-ISAR technology was used to reduce scanning time in a near-field millimeter wave imaging system [5]. In 2021, dual-polarization antennas were employed to improve the millimeter wave imaging system [6]. This makes it possible to design low-cost and low-power millimeter wave imagers based on the latest development of frequency modulated continuous wave (FMCW) millimeter wave radar with synthetic aperture radar (SAR) [7] and multiple-input multiple-output (MIMO) [8] radar antenna technology. This paper uses the MIMO-SAR radar moving along a zigzag route: the radar starts three transmit antennas and four receive antennas and transmits the FMCW signal at each position, then receives and stores the radar echo signal at the corresponding transmitting position, which generates an equivalent long antenna aperture, so that the image's longitudinal and horizontal resolutions are guaranteed.
Test Object Distance
IWR1443 mm wave radar is used in this system to judge whether there is an object in the detection direction. The millimeter wave radar emits continuous frequency modulated waves (FMCW), the obtained intermediate frequency (IF) signal is transformed by fast Fourier transform (FFT), which is analyzed in the frequency domain, then the frequency of the corresponding point at the spectrum peak is obtained [13], as shown in Figure 2.
Test Object Distance
IWR1443 mm wave radar is used in this system to judge whether there is an object in the detection direction. The millimeter wave radar emits continuous frequency modulated waves (FMCW), the obtained intermediate frequency (IF) signal is transformed by fast Fourier transform (FFT), which is analyzed in the frequency domain, then the frequency of the corresponding point at the spectrum peak is obtained [13], as shown in Figure 2. According to the Formulas (1) and (2), the distance results of metal objects with high reflectivity are shown in Table 1. (1)
2
(2) If the reflectivity of the object is high, the intensity of the IF signal obtained by the radar will be correspondently large. The signal is transformed from the time domain to the frequency domain, where the frequency corresponds to the distance of the object, and the peak value in the frequency domain after the signal transformation indicates that the object exists at the distance.
The two round measurements were measured at different time points. According to the analysis of the experimental results, target distances calculated by the algorithm are consistent with the true values of 0.35 m, 0.50 m and 0.75 m, the relative error is less than 5%. This experiment shows that the existence of a point at a certain distance of an object can be observed statically through the IF signal generated by the radar. This idea is According to the Formulas (1) and (2), the distance results of metal objects with high reflectivity are shown in Table 1. If the reflectivity of the object is high, the intensity of the IF signal obtained by the radar will be correspondently large. The signal is transformed from the time domain to the frequency domain, where the frequency corresponds to the distance of the object, and the peak value in the frequency domain after the signal transformation indicates that the object exists at the distance.
Synthetic Aperture Radar (SAR) and Multiple-Input Multiple-Output (MIMO) Radar Antennas Technique
Using a single radiation unit, the radar moves continuously along a straight line. After receiving the echo signal of the target at different positions, the intermediate frequency (IF) signal is obtained by radar correlative demodulation and stored; the raw data is then uploaded to the host. In this way, the aperture of the antenna can be increased, which can be regarded as a column of the horizontal antenna array [14]. In the course of a radar Z scan, the MIMO-SAR radar is used to improve image resolution and reduce imaging cost compared to using a multi-radar imaging system. In this paper, GUI in MATLAB is used to control the synchronization of radar transceiver signal and mechanical slide motion. X and Y axis linkage Z scanning as shown in Figure 3. extended to a two-dimensional imaging process, the reflectivity of each point of the target can be obtained by the IF signal.
Radar Enabled Three Transmitting Antennas and Four Receiving Antennas
In the first version, the radar uses a single transmitter and single receiver mode, and the sampling interval needs to be controlled at 0.9495 mm in the Y direction, requiring multiple scans, which will increase the error of longitudinal movement of the mechanical slide rail, and it is very time-consuming.
In the second version, in order to ensure the sampling interval and improve the resolution of the image, this paper started with three transmitting antennas and four receiving antennas enabled [15]. Therefore, the concept of the virtual channel can be constructed. A total of 12 virtual channels are arranged linearly in the Y-direction.
In the actual test, it was found that 12 virtual channels are used for data analysis at the same time to generate image blur, which will lead to the decline of resolution. In order to improve the quality of information carried by pixels in the longitudinal direction of the image, this paper removes the virtual channel with a higher interference on the upper edge and lower edge, selecting 8 virtual channels to construct 3D data blocks; the scan length on the Y-axis is estimated to be 1 where 8, is the number of scans in the Y direction. After using the MIMO-SAR radar antennas technology, the mechanical slide moves 2λ 7.590 mm each time in the longitudinal direction, the image resolution between each virtual channel is λ/4, as shown in the Figure 4a. A comparison of scanning time and equivalent antenna aperture of the single Y-direction scan between the second version and the first version are shown in Figure 4b.
Radar Enabled Three Transmitting Antennas and Four Receiving Antennas
In the first version, the radar uses a single transmitter and single receiver mode, and the sampling interval needs to be controlled at 0.9495 mm in the Y direction, requiring multiple scans, which will increase the error of longitudinal movement of the mechanical slide rail, and it is very time-consuming.
In the second version, in order to ensure the sampling interval and improve the resolution of the image, this paper started with three transmitting antennas and four receiving antennas enabled [15]. Therefore, the concept of the virtual channel can be constructed. A total of 12 virtual channels are arranged linearly in the Y-direction.
In the actual test, it was found that 12 virtual channels are used for data analysis at the same time to generate image blur, which will lead to the decline of resolution. In order to improve the quality of information carried by pixels in the longitudinal direction of the image, this paper removes the virtual channel with a higher interference on the upper edge and lower edge, selecting 8 virtual channels to construct 3D data blocks; the scan length on the Y-axis is estimated to be D y ≈ N y (M − 1) λ 4 where M = 8, N y is the number of scans in the Y direction. After using the MIMO-SAR radar antennas technology, the mechanical slide moves 2λ = 7.590 mm each time in the longitudinal direction, the image resolution between each virtual channel is λ/4, as shown in the Figure 4a
Actual Measurement Parameter Setting
By using the MIMO-SAR radar, the horizontal equivalent antenna aperture is extended when the mechanical slide rail moves at a uniform speed of 20 mm/s. The radar starts with three transmitting antennas and four receiving antennas; 8 virtual channels are used in this paper, and they are arranged linearly; each step in the longitudinal direction is 7.590 mm, and the longitudinal equivalent antenna aperture is extended. The parameters set in this paper can measure the target with a distance of 90 mm from the radar. The number of sampling points in the horizontal direction is 180, and the number of sampling points in the longitudinal direction is 104. Scanning time and image resolution can be well guaranteed. Detailed parameters are shown in Tables 2 and 3.
Actual Measurement Parameter Setting
By using the MIMO-SAR radar, the horizontal equivalent antenna aperture is extended as the mechanical slide rail moves at a uniform speed of 20 mm/s. The radar starts three transmitting antennas and four receiving antennas; the 8 virtual channels used in this paper are arranged linearly, and each step in the longitudinal direction is 7.590 mm, which extends the longitudinal equivalent antenna aperture. The parameters set in this paper allow measuring a target at a distance of 90 mm from the radar. The number of sampling points is 180 in the horizontal direction and 104 in the longitudinal direction, so the scanning time and image resolution are both well guaranteed. Detailed parameters are shown in Tables 2 and 3.
Image Resolution
The resolution of reconstructed image depends on wavelength, scan length and target distance. For two-dimensional imaging, the horizontal (X-axis) and longitudinal (Y-axis) resolutions are estimated to be [2,16]: where D x and D y are the physical lengths of the two-dimensional scan length. According to Z 0 = 90 mm, D x = 90 mm, D y = 98.67 mm, λ = 3.798 mm. The image resolution in X and Y directions are δ x = 1.90 mm and δ y = 1.73 mm.
Building 3D Data Block
After analyzing the bin data returned by radar, a one-dimensional array is obtained, which is converted into two-dimensional data blocks according to the number of IF signal sampling points, and then it is converted into a three-dimensional data block according to the number of sampling points in the horizontal direction and longitudinal direction. Each virtual channel is phase compensated [17], and IF signals of the 8 virtual channels are obtained simultaneously, each virtual channel corresponds to a definite longitudinal scale at a definite X-coordinate. Take a 2D data block with a fixed Y-axis value in the 3D data block, as shown in Figure 5. The resolution of reconstructed image depends on wavelength, scan length and target distance. For two-dimensional imaging, the horizontal (X-axis) and longitudinal (Yaxis) resolutions are estimated to be [2,16]: where and are the physical lengths of the two-dimensional scan length. According to 90 mm, 90 mm, 98.67 mm, λ 3.798 mm. The image resolution in X and Y directions are 1.90 mm and 1.73 mm.
Building 3D Data Block
After parsing the bin data returned by the radar, a one-dimensional array is obtained, which is converted into a two-dimensional data block according to the number of IF-signal sampling points, and then into a three-dimensional data block according to the numbers of sampling points in the horizontal and longitudinal directions. Each virtual channel is phase compensated [17], and the IF signals of the 8 virtual channels are obtained simultaneously; each virtual channel corresponds to a definite longitudinal scale at a definite X-coordinate. A 2D data block taken at a fixed Y-axis value of the 3D data block is shown in Figure 5.
Reconstruction Image
In the millimeter wave radar imaging process, the radar transmits an FMCW signal and irradiates the target through a synthetic aperture. The received signal at different space points is interferometric demodulated and recorded, and then the IF signal of target to host after scanning is uploaded. Since the purpose of this paper is to generate SAR images, we chose the analytical Fourier transform, which is an existing image reconstruction algorithm [18], according to the dispersion relation of the plane wave in free space, the wave number is divided into three components in a Cartesian coordinate system: The values of the Fourier transform variables and are 2 to 2 , which satisfy the visible region:
Reconstruction Image
In the millimeter wave radar imaging process, the radar transmits an FMCW signal and irradiates the target through the synthetic aperture. The received signals at the different spatial positions are interferometrically demodulated and recorded, and the IF signal of the target is uploaded to the host after scanning. Since the purpose of this paper is to generate SAR images, we chose the analytical Fourier transform, an existing image reconstruction algorithm [18]. According to the dispersion relation of a plane wave in free space, the wave number k is divided into three components in a Cartesian coordinate system:

k_x^2 + k_y^2 + k_z^2 = (2k)^2 (3)

The values of the Fourier transform variables k_x and k_y range from −2k to 2k and satisfy the visible region

k_x^2 + k_y^2 ≤ (2k)^2 (4)

so that

k_z = sqrt(4k^2 − k_x^2 − k_y^2) (5)

The two-dimensional plane reflectivity of a target at a distance z_0 from the radar can then be expressed as

o(x, y) = FT⁻¹_2D{ FT_2D{u(x, y, n)} e^{j k_z z_0} } (6)

where u(x, y, n) is the three-dimensional data block. FT_2D and FT⁻¹_2D in Formula (6) denote the 2D Fourier and inverse Fourier transform operations, respectively.
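A compact sketch of Formula (6) under a single-frequency (monochromatic) assumption is shown below; the sample spacings follow the scan parameters above, and a full implementation would combine the result over the chirp's wave numbers.

```python
import numpy as np

WAVELENGTH = 3.798e-3   # m
Z0 = 0.090              # m, target plane distance
DX = 0.0005             # m, horizontal sample spacing (90 mm / 180 samples)
DY = WAVELENGTH / 4     # m, virtual-channel spacing in Y

def reconstruct(u: np.ndarray) -> np.ndarray:
    """u: complex 2D array (Ny, Nx) of demodulated samples at one wave number."""
    k = 2 * np.pi / WAVELENGTH
    ny, nx = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=DX)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=DY)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = 4 * k**2 - KX**2 - KY**2          # dispersion relation, Formula (3)
    mask = kz2 > 0                          # visible region, Formula (4)
    kz = np.sqrt(np.where(mask, kz2, 0.0))  # Formula (5)
    U = np.fft.fft2(u)
    U = np.where(mask, U * np.exp(1j * kz * Z0), 0)  # back-propagation, Formula (6)
    return np.fft.ifft2(U)

image = np.abs(reconstruct(np.ones((104, 180), dtype=complex)))
print(image.shape)
```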
The image reconstruction of an actual object is shown in Figure 6. In this paper, the millimeter wave radar is used to detect hidden objects, so the target is placed in a cardboard box at a distance of 90 mm from the radar. The simultaneous activation of the MIMO-SAR radar and the mechanical slide ensured the resolution of the image.
The target used in the test is a pair of scissors, opened and placed in the paper box. After the image reconstruction algorithm, the details of the scissors can be seen clearly, with high object identifiability; the result of the image reconstruction is shown in Figure 7. The scissors placed in the paper box can thus be detected by the millimeter wave radar, and the SAR image is clearly visible, which verifies the effectiveness and reliability of the analytic Fourier imaging algorithm.
Image Preprocessing
The data set consists of 250 SAR images in 10 categories (wrench, wire stripper, hammer, rasp, ax, scissors, key, disc, pliers and gun), with 25 SAR images per category. Photos of the test objects and the corresponding SAR radar images are shown in Figure 8. The experimental setting places each item in the carton, and the effect is the same as when clothing covers the object.
In the reconstruction algorithm, the distance parameter z_0 is given in advance, so that the target can be imaged near this range. The radar original reconstruction image may contain noise caused by the distance error between the target and the radar. In the actual security check process, the relative distance between the object and the radar cannot be guaranteed to be very accurate, so image preprocessing is very important: it can eliminate noise, enhance the image features, and improve visual recognition. The radar original reconstruction image is first processed with the gamma transform algorithm, and linear stretching is then carried out, as shown in Figure 9.
The gamma transform follows the power law s = c·r^γ, where c = 1 and γ = 2.4; the algorithm operates on the normalized brightness and then reverse-transforms the result to real pixel gray values. The gray value of each pixel in the image represents the energy of a point on the target at a certain distance, so the radar original reconstruction image has the characteristic that the reflected energy of the target is higher than the noise energy. Since the energy values containing the object information are concentrated in the bright region, the gamma transform parameter is set to 2.4 to increase the contrast in the bright areas and decrease the contrast in the dark areas [19]. The pixel gray values are then handled by a linear stretching algorithm, a piecewise function of four segments (defined on intervals such as 30 ≤ x < 60) that (1) eliminates the noise; (2) preserves the information of the low-gray pixel area; (3) maps the original pixel values to a higher and wider brightness region, which increases the contrast and brightness of the image; and (4) prevents extremely bright pixels, which ensures the integrity of the image.
By using the gamma transform algorithm, the effective information of the radar original reconstruction image is retained while the noise is reduced. After the linear stretch, the image is enhanced and visual recognition improves. Image preprocessing lays a foundation for the subsequent supervised learning. The results are shown in Figure 10.
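The two preprocessing steps can be sketched as follows. The gamma transform uses the stated c = 1 and γ = 2.4 on 8-bit images; the breakpoints and slopes of the four-segment linear stretch are illustrative assumptions chosen to satisfy the four stated goals, not the paper's exact function.

```python
import numpy as np

def gamma_transform(img, c=1.0, gamma=2.4):
    """Power-law transform s = c * r**gamma on brightness normalized to [0, 1],
    then mapped back to 8-bit gray values (c = 1, gamma = 2.4 as stated)."""
    r = img.astype(np.float64) / 255.0
    s = c * r ** gamma  # gamma > 1 stretches bright areas, compresses dark ones
    return (s * 255.0).round().astype(np.uint8)

def linear_stretch(img):
    """Four-segment piecewise linear stretch; breakpoints/slopes are
    illustrative assumptions matching the four stated goals."""
    x = img.astype(np.float64)
    y = np.piecewise(
        x,
        [x < 30, (x >= 30) & (x < 60), (x >= 60) & (x < 160), x >= 160],
        [lambda v: 0.0 * v,               # (1) eliminate noise
         lambda v: v - 30,                # (2) preserve low-gray information
         lambda v: 30 + 2.0 * (v - 60),   # (3) brighten/widen the object band
         lambda v: 230 + 0.25 * (v - 160)])  # (4) avoid extreme brightness
    return np.clip(y, 0, 255).astype(np.uint8)
```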
Lightweight Convolutional Neural Networks
Whereas a traditional convolutional neural network requires a training process that occupies a great deal of time and memory, a lightweight convolutional neural network, with the advantages of small model volume, high accuracy, and low computation, can be used to construct an object recognition algorithm. The software can be integrated into resource-limited embedded and mobile devices, which meets the actual needs of the security scene.
Lightweight convolutional neural networks include MobileNet, ShuffleNet, GhostNet, and other lightweight models. MobileNet and ShuffleNet use point-wise convolution and channel shuffle, respectively, to achieve feature communication, realizing the fusion of features between different groups. GhostNet adopts a different approach: starting from a group of original feature maps, it uses linear transformations to obtain more features that excavate the useful information in the original features. The original features and the linearly transformed features are spliced together to enlarge the feature map. By redefining the convolution rules, a lightweight model can extract image features efficiently with a shallow network structure and few parameters.
The active millimeter wave imaging system obtains single-channel images, which contain less information than optical images, and the contrast between the target contour and the background is not obvious. More importantly, active millimeter wave images exhibit varying degrees of ghosting artifacts due to the imaging principle, which greatly affects the classification performance. Based on these characteristics, this paper combines convolutional neural network modules with attention mechanisms and evaluates them experimentally. On the one hand, the convolutional neural network has strong feature extraction ability; on the other hand, the attention mechanism obtains more details of the target of interest, suppressing interference information in millimeter wave images and improving the efficiency and accuracy of feature extraction.
The data set consists of 250 SAR images in 10 categories, all scanned at a distance of 90 mm from the radar, and the training and validation sets are divided in a ratio of 3 to 2. Three representative lightweight networks are evaluated in this paper: (1) ShuffleNetV2; (2) MobileNetV3; (3) GhostNet_ResNet56, based on GhostNet. Each was verified over five repeated rounds, and the reported prediction accuracy is the average of the five rounds. The images are first normalized, and the number of image channels input to the neural network is adjusted to match the characteristics of the grayscale images. During training, the learning rate of all networks is set to 0.01, the batch size is 16, and the number of epochs is 30.
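A reproduction sketch of this training setup is shown below. The dataset path is hypothetical, and a stock torchvision MobileNetV3 stands in for the paper's networks; the grayscale SAR images are replicated to three channels to fit the stock model, whereas the paper instead adapts the input layer.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
from torchvision.models import mobilenet_v3_small

tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # replicate gray channel for the stock model
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # image normalization
])
full = datasets.ImageFolder("sar_dataset/", transform=tf)  # hypothetical path
train_set, val_set = random_split(full, [150, 100])        # 3:2 split of 250 images

train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
val_loader = DataLoader(val_set, batch_size=16)

model = mobilenet_v3_small(num_classes=10)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # lr = 0.01 as stated

for epoch in range(30):                                    # 30 epochs as stated
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```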
• ShuffleNetV2
The ShuffleNetV2 network improves on ShuffleNetV1 and distinguishes its blocks by convolution stride. For the bottleneck block with a stride of 1, the input features are first split into two parts along the channel dimension and fed into two branches: one branch applies no operation, which reduces the number of parameters and the computational complexity, while the other branch avoids grouped convolution, which reduces the memory access cost. For the downsampling building block with a stride of 2, the number of feature channels is doubled. In ShuffleNetV2, a 1 × 1 convolution layer is added before the average pooling layer to further mix features. A concat module replaces the original element-wise addition to reduce computational complexity, and a channel shuffle module is added to increase the information communication between channels [20]. The ShuffleNetV2 convolutional neural network flowchart is shown in Figure 11.
The SAR images are recognized with the ShuffleNetV2 network, and the accuracy on the validation set is 84.55%. This low accuracy may be due to the slow convergence of the network within the limited number of epochs; as the number of epochs increases, the accuracy would improve to a certain extent.
Figure 11. ShuffleNetV2 network structure.
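A compact PyTorch sketch of the stride-1 unit just described (channel split, identity plus convolution branches, concatenation, channel shuffle) follows; the class and function names are illustrative rather than the authors' code.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    """Interleave channels across groups so the two branches exchange information."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class ShuffleUnitS1(nn.Module):
    """ShuffleNetV2 basic unit (stride 1): channel split -> identity branch +
    conv branch -> concat -> channel shuffle."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False), nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        left, right = x.chunk(2, dim=1)                      # split input channels in two
        out = torch.cat([left, self.branch(right)], dim=1)   # concat instead of element-wise add
        return channel_shuffle(out, 2)
```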
• MobileNetV3
MobileNetV3 combines the advantages of MobileNetV1 and MobileNetV2. At the convolution level, MobileNetV1 introduces the depthwise separable convolution, decomposing the standard convolution into a depthwise convolution and a point-by-point (1 × 1) convolution, and MobileNetV2 introduces the linear bottleneck and inverted residual structure. On this basis, MobileNetV3 introduces a squeeze-and-excitation (SE) attention mechanism in the bottleneck structure. The SE module automatically learns the importance of each feature channel, enhancing useful features according to their importance and suppressing features that are less useful for the current task [21].
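As a point of reference, the depthwise separable factorization contributed by MobileNetV1 can be sketched as a small PyTorch module; the class name and layer hyperparameters are illustrative.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """MobileNet-style factorization: a depthwise 3x3 conv filters each channel
    independently, then a pointwise 1x1 conv mixes channels, replacing one
    standard 3x3 convolution at a fraction of the parameters and FLOPs."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))   # per-channel spatial filtering
        return self.act(self.bn2(self.pointwise(x)))  # cross-channel mixing
```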
The SAR images are recognized with the MobileNetV3 (SE) network, and the accuracy on the validation set is 98.18%.
• GhostNet
GhostNet proposes a novel Ghost module that replaces ordinary convolution and can generate more feature maps with fewer parameters. Unlike ordinary convolution, the Ghost module consists of two steps. In the first step, the input feature map is convolved to obtain a feature map with half the channel number of an ordinary convolution operation. In the second step, a linear transformation of the feature map generated in the first step produces the other half. Finally, the two groups of feature maps are stitched together to generate the final feature map. The Ghost module can replace ordinary convolution to reduce the computational cost of the convolution layer [22]. The GhostNet convolutional neural network flowchart is shown in Figure 12. The SAR images are recognized with the GhostNet_ResNet56 network, and the accuracy on the validation set is 95.45%. This accuracy is significantly higher than that of ShuffleNetV2 but slightly lower than that of MobileNetV3. However, in terms of model parameters and memory usage, GhostNet_ResNet56 is better than MobileNetV3. Thus, GhostNet_ResNet56 is suitable for the classification task of millimeter wave images.
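A minimal PyTorch sketch of the two-step Ghost module follows, assuming the common configuration in which the primary convolution produces half of the output channels and a depthwise convolution serves as the cheap linear transformation; all names are illustrative.

```python
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module: a primary conv produces half the channels of an ordinary
    conv; a cheap depthwise 'linear transformation' generates the other half;
    the two groups are concatenated into the final feature map."""
    def __init__(self, in_ch, out_ch, kernel=1, cheap_kernel=3):
        super().__init__()
        primary = math.ceil(out_ch / 2)
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary, primary, cheap_kernel, padding=cheap_kernel // 2,
                      groups=primary, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        self.out_ch = out_ch

    def forward(self, x):
        y1 = self.primary(x)   # step 1: ordinary conv, half the channels
        y2 = self.cheap(y1)    # step 2: cheap linear transform of step 1
        return torch.cat([y1, y2], dim=1)[:, :self.out_ch]  # splice the two groups
```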
In order to further improve the accuracy of the networks, a confusion matrix is used to reflect the accuracy of image classification more clearly, as shown in Figure 13. It can be seen from the confusion matrix that the GhostNet_ResNet56 network is not good at distinguishing between categories such as key, pliers, knife, and ax, which leads to lower prediction accuracy. Among the above three basic network models, the MobileNetV3 convolutional neural network, which introduces the SE attention mechanism, has the highest prediction accuracy. Therefore, the squeeze-and-excitation (SE) and selective-kernel (SK) attention mechanism modules are used to improve the existing classification networks.
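The confusion matrix can be computed from validation predictions as sketched below, reusing the hypothetical model and val_loader from the training sketch; scikit-learn's confusion-matrix utilities are one possible tool, not necessarily the one used in the paper.

```python
import torch
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

model.eval()
y_true, y_pred = [], []
with torch.no_grad():
    for images, labels in val_loader:
        y_pred.extend(model(images).argmax(dim=1).tolist())
        y_true.extend(labels.tolist())

classes = ["wrench", "wire stripper", "hammer", "rasp", "ax",
           "scissors", "key", "disc", "pliers", "gun"]
cm = confusion_matrix(y_true, y_pred)
ConfusionMatrixDisplay(cm, display_labels=classes).plot()
per_class_acc = cm.diagonal() / cm.sum(axis=1)  # rows correspond to true classes
```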
Two Optimization Algorithms of Attention Mechanism
The squeeze-and-excitation (SE) attention mechanism uses squeeze, excitation, and scale operations to recalibrate the preceding features. The squeeze operation compresses the features along the spatial dimension, turning each two-dimensional feature channel into a real number that has a global receptive field and represents the global distribution of responses over the channel; the output dimension matches the number of input feature channels. Next is the excitation operation, a mechanism similar to the gates in a recurrent neural network, in which the parameter w is used to generate a weight for each feature channel. Finally, through the scale operation, the output weight is treated as the importance of each feature channel after feature selection and is weighted onto the previous features, completing the recalibration of the original features in the channel dimension [23].
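The three SE operations map directly onto a small module, sketched below with illustrative names and a conventional reduction ratio of 16.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global average pooling squeezes each channel to
    a scalar, two FC layers (the learned weights w) produce a per-channel
    gate, and the scale step reweights the original features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        n, c, _, _ = x.size()
        w = self.excite(self.squeeze(x).view(n, c)).view(n, c, 1, 1)
        return x * w  # scale: recalibrate channels by learned importance
```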
The selective-kernel (SK) attention mechanism uses a non-linear approach that fuses features from kernels of different sizes to adjust the size of the receptive field, and it consists of split, fuse, and select operations. The split operation generates multiple branches with different kernel sizes, corresponding to different receptive field sizes of neurons. The fuse operation combines information from the multiple branches to obtain a global representation for weight selection. The select operation fuses the feature maps of different kernel sizes according to the selected weights [24]. In this paper, the SE and SK attention mechanisms are used to optimize the neural network algorithms of the ShuffleNet, MobileNet, and GhostNet series.
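A two-branch sketch of the SK split-fuse-select pipeline is given below; the 3 × 3/5 × 5 kernel pair, the reduction ratio, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SKBlock(nn.Module):
    """Selective-kernel attention with two branches: split into 3x3 and 5x5
    paths, fuse their sum into a compact global descriptor, then select a
    soft per-channel mixture of the two kernel sizes."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.conv5 = nn.Conv2d(channels, channels, 5, padding=2, bias=False)
        d = max(channels // reduction, 8)
        self.fc = nn.Linear(channels, d)
        self.fcs = nn.ModuleList([nn.Linear(d, channels) for _ in range(2)])

    def forward(self, x):
        u3, u5 = self.conv3(x), self.conv5(x)         # split: two receptive fields
        s = (u3 + u5).mean(dim=(2, 3))                # fuse: global average pooling
        z = torch.relu(self.fc(s))
        att = torch.stack([fc(z) for fc in self.fcs], dim=1)     # (n, 2, c)
        att = torch.softmax(att, dim=1).unsqueeze(-1).unsqueeze(-1)
        return att[:, 0] * u3 + att[:, 1] * u5        # select: weighted mixture
```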
• GhostNet_SEResNet56
The squeeze-and-excitation (SE) attention mechanism is introduced into the GhostNet_SEResNet56 lightweight convolutional neural network to optimize its network structure [25]. The process is shown in Figure 14.
Figure 14. SE attention mechanism optimizes the GhostNet_SEResNet56 network.
Results and Discussion
According to the results of the confusion matrix, this paper uses the SE and SK attention mechanisms to optimize the MobileNetV3, ShuffleNetV2, and GhostNet lightweight convolutional neural networks. The results are shown in Table 4, where Madd represents the number of multiply-then-add operations, FLOPs represents the number of floating-point operations, and MemR + W represents the total memory space occupied by the model. Table 4 shows that the prediction accuracy of the three network series improved significantly after optimization with the SE and SK attention mechanisms. Although introducing the attention module into the SAR image recognition algorithm slightly increases the network load, the increase is within a tolerable range.
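One plausible way to obtain the Table 4 columns is sketched below: the metric names (Madd, FLOPs, MemR + W) match the per-model summary printed by the torchstat profiling package. The paper does not name its profiler, so this tool choice is an assumption, illustrated here on a stock ResNet-18.

```python
from torchstat import stat
from torchvision.models import resnet18

model = resnet18(num_classes=10)
stat(model, (3, 224, 224))   # prints params, Madd, FLOPs, MemR+W per layer and in total
```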
The Madd, parameters, FLOPs, and MemR + W of the ShuffleNet series are all higher than those of the other two models, indicating that its computation is the largest and it occupies the most memory, yet its prediction performance on the SAR image data set is worse than that of the MobileNetV3 and GhostNet series.
Comparing GhostNet_SEResNet56 and MobileNetV3_SK, the prediction accuracy of GhostNet_SEResNet56 is the same as that of MobileNetV3_SK; the Madd and FLOPs of GhostNet_SEResNet56 are slightly higher than those of MobileNetV3_SK, but its parameters and MemR + W are significantly lower, indicating that GhostNet_SEResNet56 optimized by the SE attention mechanism offers the greatest advantage under the most limited resources. The confusion matrix of the GhostNet_SEResNet56 algorithm is shown in Figure 15a.
By comparing GhostNet_SEResNet56 and MobileNetV3_SK, the prediction accuracy of GhostNet_SEResNet56 is the same as MobileNetV3_SK; the Madd and FLOPs of Ghost-Net_SEResNet56 are slightly higher than MobileNetV3_SK, but parameters and MemR + W are significantly lower than MobileNetV3_SK, indicating that GhostNet_SEResNet56 optimized by SE attention mechanism can play the greatest advantages within the most limited resources. The confusion matrix of GhostNet_SEResNet56 algorithm is shown in Figure 15a. GhostNet_ResNet56 is optimized by the SE attention mechanism. Compared with the network without attention mechanism, the network with attention mechanism can significantly improve the accuracy in few epochs. In addition, its convergence speed is significantly accelerated, and the oscillation effect of the tail is effectively weakened, as shown in Figure 15b. Comprehensively consider the classification accuracy of the neural network and its memory occupation, GhostNet_SEResNet56 are used as the object recognition network in this paper.
In this paper, the millimeter wave imaging system obtains the target SAR image at 90 mm. The number of virtual channels can be increased by enlarging the antenna array, which increases the longitudinal antenna aperture, and the horizontal synthetic aperture can be widened by increasing the travel of the horizontal slide. Such hardware improvements can extend the measured distance while maintaining the image resolution.
In a realistic scenario, target containers and humans carrying targets can sway and move by more than a wavelength, which blurs the image. To solve this problem, the speed of a moving object can be measured first, and the influence of the motion can then be compensated in the imaging algorithm.
A lightweight deep learning neural network is used for target recognition. Unlike the previous manual inspection mode, dangerous objects are identified by machine learning, which can greatly improve the efficiency of security inspection and reduce the uncertainty of manual identification. A limitation of the system at the present stage is that only ten categories of objects can be identified, which does not cover all dangerous goods. In addition, the prediction accuracy after optimization with the SE and SK attention mechanisms has not improved greatly, and the lightweight convolutional neural network tends to overfit and fall into local optima, so the data set needs to be expanded.
Conclusions
In this paper, a detection and recognition system for concealed objects based on the MIMO-SAR radar is proposed. The contributions made in this paper are as follows:
1. By using the MIMO-SAR radar, the aperture of the radar antenna is expanded to 90 mm in the X-axis direction. Eight virtual channels are established in the Y-direction, so that the longitudinal aperture in each transverse scan is equivalent to 4λ. The image resolution reaches 1.90 mm in the X-direction and 1.73 mm in the Y-direction when the object is 90 mm away from the radar. The MIMO-SAR imaging system effectively reduces the scanning time cost and the system economic cost while improving the image resolution.
2. Gamma transform with a coefficient of 2.4 and linear stretch processing are innovatively applied to the SAR images to remove the noise caused by distance error and improve visual recognition, which lays a good foundation for the subsequent supervised learning network.
3. The lightweight convolutional neural network is small in size and occupies few resources, but its prediction accuracy is not high. After optimization with the SE and SK attention mechanisms, the prediction accuracy improves at the cost of a small increase in resource occupancy. Considering the prediction accuracy, the computational complexity (Madd, FLOPs), and the memory occupancy (MemR + W, parameters), GhostNet_SEResNet56 is the optimal prediction algorithm for the SAR data set, whose prediction accuracy on the validation set is 98. | 2021-10-19T15:20:16.364Z | 2021-09-24T00:00:00.000 | {
"year": 2021,
"sha1": "4edc4e3693e1114a2a3a63f74062c87164519907",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/19/8926/pdf?version=1632738052",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6e4f622a5b244ef47e0b1cbd614e3c9c9093e9e9",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": []
} |
271408807 | pes2o/s2orc | v3-fos-license | Mycobacterium tuberculosis Essential Gene Thymidylate Synthase Is Involved in Immune Modulation and Survival inside the Host
A Mycobacterium tuberculosis essential gene, ThyX (Rv2754c), plays a key role in intermediate metabolism and respiration by catalyzing the formation of dTMP and tetrahydrofolate from dUMP and methylenetetrahydrofolate. ThyX is present in the M.tb complex and in M. smegmatis, a nonpathogenic strain of Mycobacteria. In this study, we identified a novel function of ThyX as an enzyme with immune-modulating properties. We have shown that ThyX can activate host macrophages toward an M1 response. Overexpression of ThyX stimulates the production of nitric oxide (NO) and induces apoptosis in macrophages; both responses help the host control the growth of M.tb. ThyX was also found to play a role in the recombinant bacterium's ability to survive oxidative and hypoxic stress imposed by macrophages. These findings demonstrate the protein's functional importance in M.tb and present ThyX as a potential candidate for future research and as a therapeutic target.
■ INTRODUCTION
Nearly one-third of the world's population is infected with tuberculosis (TB), a potentially fatal illness caused by Mycobacterium tuberculosis (M.tb) [1,2]. A quarter of the world's population has latent TB infection [3]. Over the past two decades, the situation has worsened due to the emergence of strains that are multidrug-resistant (MDR) and extensively drug-resistant (XDR). With 450,000 incident cases of rifampicin-resistant tuberculosis recorded in 2021, the burden of drug-resistant tuberculosis also increased by 3% between 2020 and 2021. The largest percentages (>50%) of MDR or rifampicin-resistant tuberculosis were found in Russia and other eastern European and Central Asian nations [2]. The rapid development of drug-resistant strains has brought attention to the need to investigate the virulence mechanisms that have enabled M.tb to evolve into one of the most effective pathogens known to humans. The ability of M.tb, an intracellular pathogen, to survive in host macrophages is a crucial component of its pathogenicity. The intricate strategy used by M.tb to thrive in the highly microbicidal environment of macrophages is extremely complex and remains a mystery.
M.tb has developed defense mechanisms that let it infect and persist in the host environment while evading the immune system. This necessitates the interplay of many virulence factors that allow M.tb to adjust to host immunological challenges [4-6]. A total of 121 methyltransferases (MTases) have been found in M.tb H37Rv, which is significantly more than in other pathogenic, nonpathogenic, and opportunistic species of mycobacteria [7]. The methylome of the M.tb complex has not been extensively studied. It might be necessary for this pathogen to endure challenging circumstances, including a hypoxic environment, which might increase its virulence and lead to the development of treatment resistance [8,9]. Recent research has identified a flavin-dependent thymidylate synthase (FDTS) called ThyX as a potential target for the repurposing of existing antibacterial medications [10,11]. De novo 2′-deoxythymidine-5′-monophosphate (dTMP) synthesis depends on the enzyme thymidylate synthase (ThyA), which is a member of the methyltransferase family. Additionally, ThyX is essential for DNA synthesis because it catalyzes the conversion of dUMP to dTMP, acting as a crucial component for DNA synthesis to continue and, as a result, for cell survival and replication [12]. Given that ThyX has only rarely been identified in eukaryotes and is absent in humans, it is an especially popular target for antibacterial drugs [13]. In addition to other significant human pathogens, the gene is found in Bacillus anthracis, Helicobacter pylori, and Mycobacterium. The ThyX gene has been shown in numerous studies to be essential for bacterial survival, and MDR strains of M.tb have been shown to overexpress this gene [14]. In this work, we examined the functional role of ThyX of M.tb, investigating its impact on cellular characteristics, including stress resistance and immunological response. To study the responses to this protein in vivo, macrophage surface markers, cytokine ELISA, reactive oxygen species, nitric oxide, and apoptosis were measured. The impact of ThyX overexpression on immune modulation and pathogenesis was evaluated by assessing the gain of function after insertion into the nonpathogenic bacterium M. smegmatis.
Our research sheds important new light on the function of ThyX in host−pathogen interactions during TB pathogenesis.
In-Silico Analysis of ThyX and Generation of an Overexpression Strain in M. smegmatis.
The ThyX protein sequence consists of 250 amino acids and has a molecular weight of 27.5 kDa. We performed in-silico analysis of the enzyme using different computational tools. First, the VaxiJen tool was used to predict the antigenic nature of ThyX, i.e., whether it can trigger an immune response in humans (Figure S1A). B-cell and T-cell epitope analysis using the IEDB server confirmed ThyX's immunogenicity and supported its role in immune modulation (Figure S1B, C, and D). Next, protein BLAST analysis showed that ThyX has no homology with human proteins and is rarely seen in eukaryotes; hence, ThyX is a highly preferred target for antibacterial drugs.
To further characterize ThyX, it was cloned and expressed in a pET28a vector (Figure S2A, B, and C), and the recombinant protein so obtained was purified by affinity chromatography (Figure S2D). To explore the effects of overexpression of ThyX, it was subcloned into the pVV16 expression vector and electroporated into M. smegmatis (Figure S2E). Positive overexpression constructs of M. smegmatis harboring His-tagged ThyX (M.s_ThyX) or the vector control pVV16 (M.s_Vc) were cultured for further use.
Overexpression of ThyX in M. smegmatis Enhances Bacterial Survival.
As described above, we generated the overexpression strains M.s_ThyX and M.s_Vc. First, Western blotting with anti-His antibodies confirmed the expression of the ThyX gene in the M.s_ThyX strain, whereas a band of the expected size was absent in M.s_Vc (Figure 1A.a and A.b). Next, to assess the effects of ThyX overexpression on bacterial growth in vitro, we performed a growth curve analysis and observed that M.s_ThyX grows faster than M.s_Vc, suggesting that ThyX provides a growth advantage (Figure 1B). Furthermore, following 30 h of incubation, culture aliquots from M.s_ThyX and M.s_Vc were plated to observe colony size and number; M.s_ThyX colonies were larger and fewer than the more numerous, smaller colonies of M.s_Vc (Figure 1C).
Further, to understand the impact of this overexpression inside the host, we used THP-1 macrophages. M.s_ThyX or M.s_Vc strains were grown until mid-log phase, and single-cell suspensions were prepared for infection. Colony-forming unit (CFU) analysis was performed at different time points to determine the impact of ThyX overexpression on the survival of bacteria inside macrophages. Consistent with the in-vitro assays, we found more CFUs in M.s_ThyX-infected cells than in M.s_Vc-infected cells at all time points, including 24, 48, and 72 h (Figure 1D). Together these results suggest that ThyX overexpression aids bacterial growth under both in-vitro and ex-vivo conditions.
M.s_ThyX Leads to the Production of NO and Apoptosis in Infected Host Cells.
By halting the release of intracellular pathogens and the propagation of mycobacterial infection, apoptosis is essential to the host's defense against intracellular infections such as M.tb [15]. Innate and adaptive immune responses are triggered by macrophage apoptosis, which can reduce mycobacterial infection [16]. Apoptotic bodies that contain bacteria, other cellular organelles, and cell cytoplasm are picked up by dendritic cells and macrophages through receptor-mediated phagocytosis [17]. In the early stage of infection, apoptosis, a programmed cell death that protects the host cells, usually eliminates the pathogen; however, it can favor the bacterium in later stages of infection by disseminating the disease via apoptotic bodies [18]. Accordingly, we investigated the effect of ThyX in macrophages infected with M.s_ThyX or M.s_Vc. The recombinant strains' potential to cause apoptosis was examined by checking apoptotic markers using flow cytometry (Figure 2A). We analyzed apoptosis 48 h postinfection and observed that overexpression of ThyX effectively enhanced apoptosis in infected macrophages compared to the vector control (Figure 2B).
Free radicals play an important role in controlling bacterial infection [19]. Hence, we next checked the levels of NO in M.s_ThyX- and M.s_Vc-infected cells. Interestingly, we found increased NO levels in M.s_ThyX-infected cells compared to the vector control (Figure 2C). Together, these results suggest that overexpression of ThyX stimulates multiple host defense mechanisms and indicate that M.s_ThyX confers on the nonpathogenic bacterium the capability to stimulate NO production by host macrophages, followed by macrophage cell death through apoptosis.
ThyX Confers Resistance to Oxidative and Hypoxic Stress Conditions.
An infection spreads as apoptotic bodies transfer bacteria to nearby cells. The hypoxic and acidic environments created by infected macrophages kill mycobacteria; M.tb survives in macrophages by establishing tolerance to acidic and hypoxic stress environments, and many proteins secreted by M.tb can provide defense against oxidative and hypoxic stress. H₂O₂ and CoCl₂ are known to induce oxidative stress and hypoxic stress, respectively. Here, we were interested in checking whether ThyX overexpression affects bacterial survival under oxidative and hypoxic stress conditions. Bacterial cells of both strains were seeded independently, and oxidative stress was generated through the addition of H₂O₂ at concentrations ranging from 1 to 10 mM; survival of bacteria was assessed after 24 h by the Alamar blue assay. We observed better survival of M.s_ThyX compared to M.s_Vc (Figure 3A). Similar results were observed for hypoxic stress induced by the addition of CoCl₂ (1 to 10 mM) (Figure 3B).
M.tb ThyX Upregulates Macrophage Activation.
A functional CD4+ T-cell response depends on the controlled expression of CD80/CD86 and MHC-II (major histocompatibility complex class II). Effective T-cell activation and cytokine generation are achieved through the co-stimulatory molecules CD80 and CD86 and the macrophage activation marker MHC-II [20,21]. As observed above, ThyX shows antigenic properties; hence, we checked the expression of macrophage activation markers in the presence of ThyX. To do that, we performed ex-vivo experiments using RAW264.7 macrophages. Cells were exposed to different concentrations (0.5 to 5 μg/mL) of purified ThyX protein, with lipopolysaccharide (LPS) as a positive control and heat-inactivated (HI) protein as a negative control. First, we checked cell survival by the MTT assay and found no significant cell death up to 5 μg/mL ThyX (Figure 4A); hence, this concentration range was used for further experiments. At 48 h, fluorescence-activated cell sorting (FACS) analysis showed that increasing concentrations of ThyX significantly enhance the expression of CD80, CD86, and MHC-II (Figure 4B, C, and D). It is possible that M.tb ThyX modulates T-cell activity through increased expression of MHC-II, CD80, and CD86.
Macrophage activation leads to T-cell activation and the generation of protective cytokines to clear the infection. Therefore, we were interested in whether ThyX protein stimulates proinflammatory cytokines. We observed upregulated levels of IL-12 and TNF-α in the presence of ThyX protein (Figure 4E and F).
2.6. Exposure to M.tb ThyX in Vivo Also Causes Apoptosis and Increases NO and ROS.
To eradicate infection, macrophages create increased amounts of ROS and NO [22]. If they cannot eradicate the pathogen, they may also undergo apoptosis [23]. These findings offer a molecular explanation for the activation of apoptosis in macrophages harboring ThyX. Virulence and the cell death that bacteria use to cause disease are indeed correlated [24]. To assess ThyX's significance in the protection of bacteria residing within macrophages, THP-1 cells were treated with purified ThyX. Macrophages treated with ThyX showed elevated NO levels (Figure 5A and B). The ROS level was quantified by flow cytometry using the CellROX Green reagent assay and increased with increasing concentrations of ThyX in treated macrophages at 48 h; ThyX-treated macrophages produced significantly higher levels of ROS with increasing protein concentrations (Figure 5C). The ThyX gene helps the pathogen increase the levels of ROS produced by the host while allowing the bacteria to live inside macrophages.
When infected macrophages undergo apoptosis, innate control over early bacterial growth is established. Additionally, the antigen-containing reservoir serves as a bridge for dendritic cells to initiate acquired T-cell immunity [24,25]. Apoptosis serves as a last-resort host defense mechanism. Controlled bacterial survival is accomplished by enclosing pathogens within apoptotic cells [26]. Additionally, bacterial antigens that can activate M.tb-specific T-cell immunity are largely obtained from apoptotic macrophages [27]. However, a growing body of evidence indicates that pathogenic M.tb generates bacterial molecules that prevent apoptosis and instead cause necrosis in macrophages [28]. The percentage of apoptotic cells was estimated after 48 h using a FITC Annexin V apoptosis detection kit. We observed a significant, concentration-dependent increase in apoptosis in macrophages treated with ThyX in the late apoptotic phase (Figure 5D and E).
CONCLUSION
In this research, we delve into the multifaceted role of the M.tb ThyX protein in the context of tuberculosis infection. The study investigates how ThyX influences the delicate interplay between the host's immune response and the pathogen's survival strategies.
Our findings reveal a complex picture. M.tb ThyX modulates the antigen processing pathway, stimulating macrophages and enhancing the expression of co-stimulatory markers on antigen-presenting cells. The increased expression of MHC-II, CD80, and CD86 molecules on macrophages treated with ThyX suggests a role in enhanced antigen presentation that shapes immune responses and pathogen clearance, although ThyX may also hinder the macrophages' ability to recognize and eliminate the invading M.tb bacteria. ThyX appears to trigger the release of pro-inflammatory cytokines such as TNF-α and IL-12, enhancing the host's immune responses against M.tb and potentially bolstering the host's immune defenses.
Furthermore, the results suggest that ThyX may equip M.tb with enhanced resistance to the harsh environment within macrophages, including increased tolerance to stressors such as reactive oxygen species (ROS) and hypoxia, conditions typically employed by macrophages to combat bacterial threats. ThyX induces stress responses in macrophages, leading to upregulated ROS production and an unfavorable environment for the pathogen. Moreover, ThyX promotes NO generation by host macrophages, leading to apoptosis-induced macrophage cell death and potentially limiting bacterial multiplication.
ThyX contributes to the resistance of M.tb to macrophage stress conditions, allowing the bacteria to survive and grow in hostile environments.Our results have shown that M. smegmatis strains overexpressing ThyX exhibit higher survival rates and persistence within macrophages than control strains.This ability of ThyX to enhance bacterial survival and persistence underscores its significance in M.tb pathogenesis.
In conclusion, this study sheds light on the multifaceted role of M.tb ThyX in TB pathogenesis by influencing immune modulation, antigen presentation, cytokine secretion, and bacterial survival within host macrophages.While it may activate the host's immune response, it also equips M.tb with mechanisms to evade and potentially manipulate these defenses.Considering these findings, targeting ThyX holds the potential for developing novel therapeutic strategies to combat TB.However, further research is needed to fully understand the complex interplay between ThyX and the host immune response.
Reagents, Chemicals, Vectors, and Bacterial Strains.
Analytical grade reagents and chemicals were used in this study. The cell culture growth media (DMEM and RPMI), antibiotics (antibiotic−antimycotic), and fetal bovine serum (FBS) were purchased from Gibco Life Technologies. Hi-Media Laboratories (India) supplied the LB broth used to grow the bacterial cultures. The PCR reagents were purchased from Fermentas (Thermo Fisher Scientific, Inc., CA, USA). Gel extraction and plasmid isolation kits were purchased from Qiagen. The gene-specific primers were purchased from Sigma.
M. smegmatis mc2 155 was obtained from the National Institute of Pathology (NIOP), India. Gene sequences were retrieved from Mycobrowser. Both strains were grown in LB broth at 37 °C under continuous shaking at 180 rpm. Kanamycin was added at a final concentration of 50 μg/mL.
Molecular Cloning, Expression, and Purification of ThyX.
To produce the ThyX protein, the ThyX gene was PCR amplified, cloned into a pET28a expression vector, and expressed in Escherichia coli Rosetta cells. For protein purification, the IPTG-induced culture pellet was resuspended in 30 mL of chilled 1× PBS (pH 7.4) with 150 mM KCl, mixed properly, and kept on ice for 5 min. The cell suspension was sonicated for 5 to 10 min at 40% amplitude with regular on and off cycles of 10 and 5 s, respectively. The sonicated product was centrifuged at 9000 rpm for 45 min, and the supernatant containing the solubilized protein was collected and loaded onto a Ni-NTA (Qiagen/Genetix) column. The column was washed with 50 mL of 40 mM imidazole in 1× PBS, and protein was eluted with a 200 mM imidazole-containing buffer. Fractions containing recombinant protein were analyzed on 15% SDS-PAGE. The Bradford assay was used to determine the concentration of the dialyzed proteins. Polymyxin B (Sigma) was added to the protein at 4 °C for 2 h to remove lipopolysaccharides.
Cloning of M.tb ThyX in the Mycobacterial Expression Vector pVV16 and Generation of Recombinant M. smegmatis.
The ThyX-encoding gene was subcloned into the mycobacterial integrative expression vector pVV16 to produce the pVV16_ThyX plasmid [29]. This construct, along with the empty vector pVV16, was electroporated (Bio-Rad Laboratories, CA, USA) into competent M. smegmatis to generate recombinant strains termed M.s_ThyX and M.s_Vc. To confirm the expression of recombinant ThyX, M.s_ThyX and M.s_Vc were grown for 24 h in LB broth supplemented with 50 μg/mL kanamycin. The cell pellet was obtained by centrifugation (5000 rpm, 10 min) and washed with PBS. The pellet was dissolved in SDS-PAGE loading dye and heated at 95 °C for 30 min. The lysed fractions were separated by electrophoresis on 10% SDS-PAGE, and ThyX protein expression was confirmed by Western blotting with an anti-His antibody.
4.4. Macrophage Cell Culture and Growth Conditions.
The human macrophage cell line THP-1 and murine macrophage RAW264.7 cells were cultured in Dulbecco's modified Eagle's medium (DMEM) and Roswell Park Memorial Institute medium (RPMI 1640), supplemented with 1% antibiotic solution and 10% FBS. Depending on the experiment, the necessary number of cells was seeded in six-well or 96-well plates. The cells were treated with various concentrations of recombinant ThyX protein or with M.s_ThyX and M.s_Vc. Cells were grown and maintained under standard tissue culture conditions of 37 °C and 5% CO₂ [30]. All experiments in the different cell lines were completed within eight passages after the initial frozen stocks were seeded [30].
4.5. In-Vitro Survival of M.s_Vc and M.s_ThyX under Normal Growth Conditions.
Log phase cultures (OD600 of 0.8−1.0) of M. smegmatis mc2 155 vector control (M.s_Vc) and M.s_ThyX were diluted 1:100 into LB medium and grown for about 12 h until the OD600 reached 0.05 [30]. The cells were then reinoculated and allowed to grow for 30 h, with the OD600 measured every 3 h up to 30 h.
4.6. In-Vitro Stress Response Assay.
M.s_ThyX and M.s_Vc were grown to an OD of 1.0 and diluted to an OD of 0.2 in fresh LB medium. The bacterial cells were seeded into 96-well plates and allowed to grow for 24 h, after which 1−10 mM H₂O₂ and 1−10 mM CoCl₂ were used to induce oxidative and hypoxic stress, respectively. Cell viability was evaluated after a further 24 h with 0.3% resazurin sodium salt by monitoring the readings at 570 and 600 nm in a spectrophotometer and calculating the survival percentage.
4.7. Bacterial Survivability Assessment in Infected Macrophages.
Recombinant M. smegmatis expressing ThyX, along with the vector control, grown to an OD of 0.1, was added to PMA-differentiated THP-1 macrophages at an MOI (multiplicity of infection) of 1:10 in the BSL2 facility [30]. The macrophages were lysed after 0, 24, 48, and 72 h, serially diluted, and plated on Luria agar plates to allow the bacterial colonies to develop. After incubation at 37 °C, the CFUs of the bacterial colonies were counted to determine the viability of the bacteria.
4.8. MTT Assay.
The MTT assay was performed to check the cytotoxicity of the M.tb protein ThyX. RAW264.7 cells (1 × 10⁴/well) were seeded in 96-well plates in complete DMEM and treated with proteins at different concentrations for 24 h. After the supernatant was harvested, 200 μL of fresh medium was added. MTT was added to a final volume of 20 μL and incubated at 37 °C for 4 h. After the medium was completely removed, 100 μL of DMSO was added to each well and mixed thoroughly. Absorbance was measured at 595 nm.
4.9. Surface Expression of Macrophage Activation Markers.
RAW264.7 macrophages were treated with various concentrations of ThyX protein (0.5, 2, and 5 μg/mL), and the surface markers of macrophage activation, MHC-II, CD80, and CD86, were determined. Cells were seeded in 24-well culture plates and treated with recombinant ThyX protein 4 h after seeding. After 48 h of incubation, cells were stained with anti-mouse Alexa Fluor 488-MHC-II, PE-CD80, and APC-CD86. The samples were handled according to the supplier's protocol. LPS (100 ng/mL) was used as a positive control for the expression of TLR4.
4.10. Estimation of Cytokine Levels.
Murine macrophage cells were seeded in a 12-well culture plate (∼1 × 10⁶ cells per well) and left to adhere overnight at 37 °C. After adhesion, cells were treated with recombinant ThyX protein at varying concentrations (0.5, 2, and 5 μg/mL) or with LPS (100 ng/mL; Sigma, USA) as a positive control. The protein treatment concentrations were prestandardized for the release of cytokines and other cellular markers, and heat-inactivated proteins were used as a negative control for cytokine estimation [30]. After 24 h of treatment, the supernatant was collected and stored at −80 °C until required. The proinflammatory cytokines TNF-α and IL-12 and the anti-inflammatory cytokine IL-10 were measured at 450 nm using BD Biosciences mouse ELISA kits following the manufacturer's instructions.
4.11. Detection of ROS in Macrophages.
THP-1 cells (2 × 10⁵ cells/well) were seeded overnight in a 12-well plate and treated with recombinant ThyX at concentrations of 0.5, 2, and 5 μg/mL at 37 °C. Cells were collected and washed with 1× PBS after 48 h of treatment. The treated cells were stained with about 5 mM CellROX Green reagent and incubated at 37 °C for 30 min. Stained cells were acquired on a FACS Lyric (BD Biosciences), and FlowJo software (Tree Star) was used to analyze the data.
4.12. Quantification of Nitrite (NO) in Macrophages.
THP-1 cells were treated with the recombinant strains M.s_Vc and M.s_ThyX. Around 150 μL of cell-free supernatant was collected after 30 h of treatment and mixed with 50 μL of Griess reagent. The reaction was carried out in 96-well plates and incubated for 30 min at 37 °C, 5% CO₂. Untreated macrophage cells were used as a control. The cells were harvested 24 and 48 h after infection. Nitrite concentration was measured using sodium nitrite as a standard, and plates were read at 540 nm.
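A hypothetical worked example of the Griess readout is sketched below: a linear sodium nitrite standard curve is fitted and a sample absorbance is interpolated. All concentrations and absorbance values are illustrative, not the paper's data.

```python
import numpy as np

std_conc = np.array([0, 6.25, 12.5, 25, 50, 100])   # microM sodium nitrite standards
std_a540 = np.array([0.05, 0.11, 0.18, 0.32, 0.60, 1.15])  # illustrative A540 readings

slope, intercept = np.polyfit(std_conc, std_a540, 1)  # linear standard curve fit

def nitrite_uM(a540):
    """Convert an A540 reading to nitrite concentration (microM) via the curve."""
    return (a540 - intercept) / slope

print(nitrite_uM(0.45))   # a sample absorbance of 0.45 maps to roughly 36 microM
```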
4.13. Annexin V/PI Apoptosis Assay.
THP-1 cells seeded in 24-well plates were incubated with 0.5, 2, and 5 μg/mL of recombinant ThyX protein. THP-1 cells were also treated with M.s_Vc and M.s_ThyX at an MOI of 1:10 and incubated for 4 h. In the case of the M. smegmatis strains, cells were then washed with PBS [30] and treated with complete medium containing gentamycin to kill extracellular bacteria. In both cases, cells were harvested 48 h after treatment and stained following the Annexin V-FITC and propidium iodide staining protocol (BD) to analyze apoptosis. Cells were washed, collected in PBS, and resuspended in 1× binding buffer. Approximately 1 × 10⁵ treated cells were transferred into fresh tubes, and 5 μL each of FITC Annexin V and PI were added. Cells were gently vortexed and incubated at room temperature for 15 min. Following the addition of 400 μL of binding buffer to each tube, cells were examined by flow cytometry at the appropriate machine settings. LPS-treated cells served as the positive control. Samples were analyzed on a FACS Lyric (BD Biosciences, San Jose, CA, USA), and FlowJo software was used to process the data.
4.14. Statistical Analysis.
All data were obtained from three independent experiments, expressed as mean ± standard deviation (SD), and analyzed with GraphPad Prism 6.0 software. One-way analysis of variance (ANOVA) was used to determine statistical significance at p < 0.05.
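A minimal sketch of the described test is shown below, using SciPy's one-way ANOVA on illustrative triplicate values; the group names and numbers are assumptions for demonstration only.

```python
from scipy import stats

# Illustrative triplicate measurements for three treatment groups.
control   = [12.1, 11.8, 12.5]
low_dose  = [18.3, 17.9, 19.1]   # e.g. 0.5 microg/mL ThyX (hypothetical values)
high_dose = [27.6, 28.4, 26.9]   # e.g. 5 microg/mL ThyX (hypothetical values)

f_stat, p_value = stats.f_oneway(control, low_dose, high_dose)
significant = p_value < 0.05     # significance threshold stated in the paper
```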
Figure 1. (A) To assess the role of M.tb ThyX in pathogenicity, the gene from the pathogenic strain of M.tb was introduced into nonpathogenic M. smegmatis. (A.a) Confirmation of the M. smegmatis strains by Western blotting using an anti-His antibody. (A.b) Graphical representation of the M.s_ThyX and M.s_Vc Western blot quantified with the ImageJ tool. (B) Log phase cultures (OD600 of 0.8−1.0) of M. smegmatis mc2 155 vector control (M.s_Vc) and M.s_ThyX were diluted 1:100 into 7H9 media and cultured for approximately 12 h until the OD600 reached 0.05. Reinoculated cells were then allowed to grow for 30 h, and the surviving cells were grown on LB media after every 3 h in culture; the OD600 was also measured every 3 h up to 30 h. (C) Plates were inoculated with equivalent amounts of cultures harboring (a) M.s_ThyX or (b) M.s_Vc from panel B, or (c) cells only, at the 30 h time point. Colonies were visible after 3 days. (D) THP-1 cells were incubated with equal numbers of M.s_Vc and M.s_ThyX at an MOI of 1:10 for 4 h. THP-1 cells were lysed at 0, 24, 48, and 72 h postreseeding to extract the surviving intracellular bacteria. Bacteria were plated on LB agar for the CFU assay. For **, the corresponding P value is <0.01.
Figure 2. (A) Representative scatter plot of the apoptosis assay at 48 h. (B) Percentage of apoptotic cells assessed by Annexin-PI staining and flow cytometry, as described in the Materials and Methodology. (C) NO production by THP-1 cells upon infection with M.s_ThyX and M.s_Vc for 24 and 48 h. LPS (100 ng/mL) was used as the positive control. Data are plotted as NO concentrations (in micromolar). The treated and untreated groups were statistically compared. All statistical analyses were performed using two-way ANOVA. The P values for *, **, ***, and ns are <0.05, <0.01, <0.001, and >0.05, respectively.
Figure 3. Mycobacterium tuberculosis ThyX protects the bacteria against oxidative and hypoxic stress conditions. Recombinant M.s_Vc (white bars) and M.s_ThyX (red bars) bacterial cells were grown under oxidative (H₂O₂) (A) and hypoxic (CoCl₂) (B) stress. Cell viability was assessed spectrophotometrically using 0.3% resazurin sodium salt for 4 h. Data are plotted as percent survivability.
Figure 4. RAW264.7 cells were treated with ThyX, and (A) cell viability was assessed spectrophotometrically by the MTT assay. ThyX enhances the expression of macrophage activation markers: quantitative representation of the surface expression of (B) MHC-II, (C) CD80, and (D) CD86 at 48 h. Culture supernatants were collected at 24 h postinfection, and the concentrations of (E) TNF-α and (F) IL-12 were determined by ELISA. Representative data from three independent experiments show means ± SEM of duplicate wells; P value of <0.01.
Figure 5. THP-1 cells were cultured in the absence or presence of ThyX. NO production by THP-1 cells upon treatment with recombinant ThyX for (A) 24 and (B) 48 h. Data are plotted as NO concentrations (μM). (C) ROS levels analyzed by flow cytometry; the mean fluorescence intensity (MFI) of intracellular ROS production in treated macrophages was measured at 48 h. (D) Percentage of apoptotic cells assessed by Annexin-PI staining and flow cytometry, as described in the Materials and Methodology. (E) Representative bar graph of the apoptosis assay at 48 h. LPS (100 ng/mL) was used as the positive control. | 2024-07-25T15:07:35.574Z | 2024-07-23T00:00:00.000 | {
"year": 2024,
"sha1": "53b26cdb9efcd452c5f07121ed62aeb41e547dcb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1021/acsomega.4c02919",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef95844f7014b6123fdbd854f13945b1868b2574",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17311522 | pes2o/s2orc | v3-fos-license | Immunogenicity of Yellow Fever Vaccine Coadministered With MenAfriVac in Healthy Infants in Ghana and Mali
Background. Yellow fever (YF) is still a major public health problem in endemic regions of Africa and South America. In Africa, one of the main control strategies is routine vaccination within the Expanded Programme on Immunization (EPI). A new meningococcal A conjugate vaccine (PsA-TT) is about to be introduced in the EPI of countries in the African meningitis belt, and this study reports on the immunogenicity of the YF-17D vaccines in infants when administered concomitantly with measles vaccine and PsA-TT. Methods. Two clinical studies were conducted in Ghana and in Mali among infants who received PsA-TT concomitantly with measles and YF vaccines at 9 months of age. YF neutralizing antibody titers were measured using a microneutralization assay. Results. In both studies, the PsA-TT did not adversely affect the immune response to the concomitantly administered YF vaccine at the age of 9 months. The magnitude of the immune response was different between the 2 studies, with higher seroconversion and seroprotection rates found in Mali vs Ghana. Conclusions. Immunogenicity to YF vaccine is unaffected when coadministered with PsA-TT at 9 months of age. Further studies are warranted to better understand the determinants of the immune response to YF vaccine in infancy. Clinical Trials Registration. ISRCTN82484612 (PsA-TT-004); PACTR201110000328305 (PsA-TT-007).
Yellow fever (YF), an acute viral hemorrhagic fever caused by yellow fever virus, remains among the most feared diseases. Primary endemic regions for YF are sub-Saharan Africa, Central America, and South America. At present, the disease still affects approximately 200 000 persons with 30 000 deaths annually, despite the availability of YF vaccines. YF vaccines are live attenuated vaccines based on the 17D attenuation variant and are considered among the most effective and safe vaccines in use today, with >400 million people vaccinated [1][2][3][4][5][6].
One of the main public health concerns is the maintenance of high levels of population immunity in endemic regions through routine childhood immunization. The YF-17D vaccine was added to the Expanded Programme on Immunization (EPI) in YF-endemic countries in Africa in 1991, in concomitant administration with measles vaccine in infants at 9 months of age [7]. However, there are still only limited data available on safety and immunogenicity when YF-17D vaccine is coadministered with other vaccines. The immune response is usually not affected when coadministered with other vaccines such as measles [8,9]; however, some reports have described significantly lower immune responses to YF, mumps, and rubella following coadministration of YF and measles, mumps, and rubella vaccine [10].
A monovalent group A meningococcal conjugate vaccine (PsA-TT, MenAfriVac), developed through the Meningitis Vaccine Project (MVP), is about to be introduced in routine EPI in countries of the African "meningitis belt," with a single dose at 9 months of age concomitantly administered with YF and measles vaccines [11]. We report here the immune response to YF vaccine following coadministration with PsA-TT in 2 infant clinical trials conducted in Ghana and in Mali.
METHODS
The studies were designed and conducted in accordance with the Good Clinical Practice guidelines established by the International Conference on Harmonisation, and with the Declaration of Helsinki, and approved by the competent ethics committees and regulatory authorities. Both studies were coordinated by MVP, a partnership between the World Health Organization (WHO) and PATH, aiming to develop an affordable, monovalent, group A meningococcal conjugate vaccine through a public-private partnership with the vaccine manufacturer Serum Institute of India, Ltd.
Study A
The first study (PsA-TT-004) was a phase 2, double-blind, randomized, controlled, dose-ranging study to evaluate the safety, immunogenicity, dose response, and schedule response of PsA-TT administered concomitantly with local EPI vaccines in healthy infants. The study was conducted in rural northern Ghana from November 2008 to May 2012 and the main study results are reported by Hodgson et al (unpublished data). A total of 1200 infants were randomized to receive primary vaccination into 6 study groups of 200 subjects each. Subjects' group allocation during the study is presented in Table 1. Subjects in all groups received EPI vaccines (measles and YF) at 9 months of age. The EPI vaccines were administered alone in groups 3 and 4, and concomitantly with a second dose of PsA-TT with different dosages in groups 1A (10 µg), 1B (5 µg), and 1C (2.5 µg) and with a single dose of PsA-TT (10 µg) in group 2. Group 4 was the control group for this vaccine period (no blood draw was performed in group 3 at this time point).
Study B
The second study (PsA-TT-007) was a phase 3, double-blind, randomized controlled study to evaluate the immunogenicity and safety of different schedules and formulations of PsA-TT administered concomitantly with local EPI vaccines in healthy infants and toddlers. The study was conducted in urban Mali from March 2012 to September 2013, and the main study results are reported by Hodgson et al (unpublished data). A total of 1500 infants were randomized to receive primary vaccination into 5 study groups of 300 subjects each. Subjects' group allocation during the study is presented in Table 1. Subjects in all groups received EPI vaccines (measles and YF) at 9 months of age. The EPI vaccines were administered alone in group 3, and concomitantly with PsA-TT vaccine with different dosages in groups 1A (10 µg), 1B (5 µg), 2A (10 µg), and 2B (5 µg). Group 3 was the control group for this vaccine period.
Yellow Fever Vaccines
Study A
The live attenuated YF virus vaccine strain 17D, substrain 17DD (Fiocruz Yellow Fever Vaccine, manufactured by Bio-Manguinhos/Fiocruz) was used. The vaccine contained ≥1000 LD50 (lethal dose, 50%) units per dose (0.5 mL); that is, the vaccine concentration per dose was between 4.34 log10 plaque-forming units (PFU) and 4.56 log10 PFU (2 batches, No. 085VFA051Z and No. 085UFC011Z, were used). The presentation was in 10-dose vials of freeze-dried vaccine to be reconstituted with diluent.
Study B
The live attenuated YF virus vaccine strain 17D (manufactured by Federal State Unitary Enterprise of Chumakov Institute of Poliomyelitis and Viral Encephalitis, Russian Academy of Medical Sciences) was used. This vaccine contains ≥1000 LD50 units per dose (0.5 mL); that is, the vaccine concentration per dose was between 4.5 log10 PFU and 4.7 log10 PFU (a single batch was used, No. 090). The presentation was in 5-dose vials of freeze-dried vaccine to be reconstituted with diluent.
Immunogenicity
Blood samples obtained before and 4 weeks after YF vaccination were tested for neutralizing antibodies against YF virus in the Robert Koch Institute (RKI) microneutralization assay using the YF-17D target virus strain produced at the RKI at a concentration of 100 TCID50 (tissue culture infectious dose, 50%) per well (ie, per 100 µL) [12]. Neutralization titers (NTs) were expressed as the reciprocal serum dilutions yielding ≥50% neutralization after 5 days, that is, blocking at least 1 of 2 duplicate infections. All serum samples were first heat-inactivated at 56°C for 30 minutes; then, 2-fold dilutions of each serum sample were prepared in 96-well plates to obtain dilutions of 1:4 to 1:256. To each serum dilution the same volume of YF-17D virus was added. The serum-virus mixture, along with positive and negative control sera, was incubated for 1 hour at 37°C in a 5% carbon dioxide, 90% humidity atmosphere. Meanwhile, porcine kidney epithelial (PS) cells (10 mL of 6 × 10^5 cells/mL) were prepared. PS cells were washed with phosphate-buffered saline, detached by the addition of HyQTase (incubation for 10 minutes at 37°C), and then diluted in Dulbecco's modified Eagle medium to the required concentration. A volume of 100 µL of the adjusted cell suspension was then added to each well of a new 96-well plate. After 1 hour of incubation, 100 µL of each serum-virus solution was transferred in duplicate to the wells with the cells. For each serum sample, cytotoxicity (to exclude possible cytotoxic effects of the serum on the cells), cell, and virus controls were used. The plates were incubated for 5 days at 37°C in a 5% carbon dioxide, 90% humidity atmosphere, and then cells were fixed with 3.7% formaldehyde and stained with naphthalene black solution. Plates were evaluated and each well was observed under a microscope for signs of cytopathic effects in the infected cells. The serum dilution that prevented infection in 1 of the 2 duplicate wells (ie, 50% of replicate inoculations) was determined as the NT. Whenever infection was prevented in both duplicate wells (100%) at a particular dilution and present in both duplicates (100%) at the next dilution, the NT was determined as the geometric mean of the 2 dilutions. If complete infection was observed at all serum dilutions, the NT was recorded as <1:4, the starting serum dilution.
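The titer read-out rules above reduce to a small decision procedure. A minimal Python sketch, with our own function and variable names, assuming the duplicate wells at each dilution are scored as 0, 1, or 2 blocked infections:

```python
import math

def neutralization_titer(blocked):
    """blocked: dict mapping reciprocal dilution (4, 8, ..., 256) to the
    number of duplicate wells (0-2) in which infection was prevented."""
    dilutions = sorted(blocked)
    nt = None
    for i, d in enumerate(dilutions):
        if blocked[d] >= 1:                       # >=50% neutralization
            nt = d
            nxt = dilutions[i + 1] if i + 1 < len(dilutions) else None
            # both duplicates blocked here, both infected at the next
            # dilution: geometric mean of the two dilutions
            if blocked[d] == 2 and nxt is not None and blocked[nxt] == 0:
                nt = math.sqrt(d * nxt)
    # complete infection at all dilutions is reported as <1:4 in the
    # paper; 2.0 is our placeholder for a below-detection titer
    return nt if nt is not None else 2.0

print(neutralization_titer({4: 2, 8: 2, 16: 0, 32: 0, 64: 0, 128: 0, 256: 0}))
# -> ~11.3 (geometric mean of 8 and 16)
```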
Seroconversion was defined as an NT at least twice as high as that at baseline (≥2-fold rise) 28 days after immunization. Seroprotection was defined as an NT ≥1:8.
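Applied per subject, these two endpoint definitions are simple threshold checks; a sketch (function names ours):

```python
def seroconverted(baseline_nt, day28_nt):
    return day28_nt >= 2 * baseline_nt      # >=2-fold rise over baseline

def seroprotected(day28_nt):
    return day28_nt >= 8                    # NT >= 1:8
```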
Statistical Analysis
The neutralizing geometric mean titers (GMTs) between the vaccine groups at baseline and 4 weeks after vaccination were compared using analysis of variance (ANOVA) adjusted for baseline titers, age, and sex. Percentages of subjects with NTs ≥2-fold rise and with NTs ≥1:8, along with their exact binomial 95% confidence interval (CI), were calculated. The 95% CI for the difference in the proportions of subjects with these responses between the control group and a particular study vaccine group where subjects received PsA-TT was computed using the Miettinen-Nurminen method [13]. If the upper limit of the CI was <10%, the response in the study vaccine group was considered to be noninferior to that of the control group. Reverse cumulative distribution curves of YF NTs were generated at baseline prior to vaccination and 4 weeks after vaccination. All immunogenicity analyses were conducted in the intention-to-treat population. Missing values were treated as missing at random. All tests were 2-sided with a significance level of .05. Data analysis was performed using SAS, version 9.1.3.
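A hedged Python sketch of the decision rule follows. The exact (Clopper-Pearson) interval for a single group's response rate comes from scipy, and statsmodels' score-method interval for a difference of proportions is used here as a stand-in for the Miettinen-Nurminen interval (the two are closely related but not necessarily identical to the SAS implementation used in the paper); all counts are hypothetical:

```python
from scipy.stats import binomtest
from statsmodels.stats.proportion import confint_proportions_2indep

def noninferior(resp_ctrl, n_ctrl, resp_test, n_test, margin=0.10):
    """Noninferior if the upper 95% CI limit of (p_ctrl - p_test) < margin."""
    lo, hi = confint_proportions_2indep(resp_ctrl, n_ctrl, resp_test, n_test,
                                        method="score", compare="diff")
    return hi < margin, (lo, hi)

# Exact binomial 95% CI for one group's response rate (hypothetical counts).
ci = binomtest(150, 190).proportion_ci(confidence_level=0.95, method="exact")
print(f"{150 / 190:.1%} ({ci.low:.1%}-{ci.high:.1%})")
print(noninferior(140, 190, 150, 200))
```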
Study Population
A total of 1153 subjects (96% of the 1200 subjects enrolled at age 14 weeks) received YF vaccination at a median age of 9 months with a sex ratio (F/M) of 0.99. The immune response to YF vaccine was assessed in all study subjects with sufficient volumes of sera (Table 2).
YF Serum Neutralizing Antibody Titers
Reverse cumulative distribution curves for YF NTs at baseline prior to vaccination and 4 weeks after vaccination, according to study groups, are shown in Figures 1A and 2A. The proportions of subjects with YF NTs ≥1:8 and with ≥2-fold YF NT rises compared to baseline, as well as the GMTs of YF NTs, at 28 days after vaccination for each study group are presented in Table 2.
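A reverse cumulative distribution curve simply plots, for each titer value, the fraction of subjects at or above it; a minimal sketch with hypothetical titers:

```python
import numpy as np

def rcdc(titers):
    """Fraction of subjects with NT >= x, for each observed titer x."""
    t = np.asarray(titers, dtype=float)
    xs = np.sort(np.unique(t))
    ys = [(t >= x).mean() for x in xs]
    return xs, ys

xs, ys = rcdc([4, 4, 8, 16, 16, 32, 64])   # hypothetical titers
print(list(zip(xs, ys)))
```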
At 28 Days After Vaccination. The percentages of subjects with YF titers ≥1:8 ranged from 67.8% to 79.3% and were lower than the anticipated response rate (95%) in all groups. The noninferiority of the immune response elicited by YF vaccine administered concomitantly with the second dose of PsA-TT vaccine at different dosages (10 µg and 5 µg) to that elicited by YF vaccine alone was demonstrated; that is, the upper limits of the 95% CI for the differences were <10% between group 4 and each of groups 1A and 1B. In contrast, the same noninferiority was not confirmed when YF vaccine was administered concomitantly with the second dose of PsA-TT 2.5 µg vaccine or with the 1-dose 10 µg PsA-TT vaccine, with an upper limit of the CI of the difference of 13.3% between group 4 and group 1C and of 11.3% between group 4 and group 2, respectively (Table 2).
The percentages of subjects with a ≥2-fold response in YF titers with respect to baseline ranged from 64.8% to 71.0%. The noninferiority of the immune response elicited by YF vaccine administered concomitantly with the second dose of PsA-TT vaccine at different dosages (10 µg and 5 µg) to that elicited by EPI vaccines alone at 28 days after vaccine administration was demonstrated for this endpoint as well; the upper limit of the 95% CI for the differences was <10% between group 4 and each of groups 1A and 1B. In contrast, the same noninferiority was not confirmed when YF vaccine was administered concomitantly with the second dose of PsA-TT 2.5 µg vaccine or with the 1-dose 10 µg PsA-TT vaccine; that is, the upper limit of the 95% CI for the differences was 12.3% between group 4 and group 1C and 10.2% between group 4 and group 2, respectively (Table 2). YF neutralizing GMTs were similar in all groups, ranging from 12.1 to 16.6, with no statistically significant difference when groups were compared using ANOVA after adjusting for age, sex, and baseline titer (Table 2), and the distribution of YF NTs was consistently similar in all study groups (Figure 2A).
Four weeks postvaccination, analysis stratified by sex did not show any difference in the overall proportion of subjects with YF NTs ≥1:8.
Study Population
All 1500 subjects enrolled received YF vaccination at a median age of 9 months, with a sex ratio (F/M) of 0.94. The immune response to YF vaccine was assessed in a random subsample of 300 subjects with equal distribution in all study groups (60 subjects per group).
YF Serum Neutralizing Antibody Titers
Reverse cumulative distribution curves for YF NTs at baseline prior to vaccination and 4 weeks after vaccination, according to study groups, are shown in Figures 1B and 2B. The proportions of subjects with YF NTs ≥1:8 and with ≥2-fold YF NT rises compared to baseline, as well as the GMTs of YF NTs, at 28 days after vaccination for each study group are presented in Table 2.
At 28 Days After Vaccination. The percentages of subjects with YF titers ≥1:8 were similar in all groups, ranging from 95.1% to 98.3%, and above the anticipated response rate (95%). The noninferiority of the immune response elicited by YF vaccine administered concomitantly with PsA-TT at different dosages (10 µg and 5 µg) to that elicited by YF vaccine alone was demonstrated; that is, the upper limits of the 95% CI for the differences were <10% between group 3 and each of groups 1A, 1B, and 2B. However, the same noninferiority of group 2A (YF and PsA-TT 10 µg vaccines) to group 3 (YF vaccine alone) was not confirmed with respect to the same endpoint; that is, the upper limit of the 95% CI for the difference was ≥10% (10.7% between group 3 and group 2A) (Table 2).
The percentages of subjects with a ≥2-fold response in YF titer with respect to baseline ranged from 89.8% to 98.3%. The noninferiority of the immune response elicited by YF vaccine administered concomitantly with the first dose of PsA-TT at different dosages (10 µg and 5 µg) to that elicited by YF vaccine alone was demonstrated for this endpoint as well; that is, the upper limit of the 95% CI for the differences was <10% for each comparison of group 3 with groups 1A, 1B, and 2A. However, the same noninferiority of group 2B (YF and PsA-TT 5-µg vaccines) to group 3 (YF vaccine alone) was not confirmed with respect to the same endpoint; that is, the upper limit of the 95% CI for the difference was ≥10% (12.0% between group 3 and group 2B; Table 2).
YF neutralizing GMTs were similar in all groups, ranging from 29.1 to 33.9, with no statistically significant difference when groups were compared using ANOVA after adjusting for age, sex, and baseline titer (Table 2); in addition, the distribution of YF NTs was consistently similar in all study groups (Figure 2B).
Four weeks postvaccination, analysis stratified by sex did not show any difference in the overall proportion of subjects with YF titers ≥1:8.
DISCUSSION
In both studies, PsA-TT (at 10-µg, 5-µg, and 2.5-µg dosages) did not adversely affect the immune response to the concomitantly administered YF vaccine at the age of 9 months.
In both studies, the noninferiority of each PsA-TT vaccine group to the control group (YF/measles vaccines alone) was demonstrated for the majority of pairwise comparisons of percentages of subjects achieving seroconversion and seroprotection 4 weeks after immunization. In a few instances, such noninferiority was not confirmed, likely due to low statistical power, resulting from low seroconversion rates in study A or from small sample size in study B. In study A, 68%-79% of subjects reached YF seroprotection (NT ≥1:8) at 4 weeks after immunization (ie, significantly less than the expected 95%), resulting in a low power in testing noninferiority. In study B, YF endpoints were measured only in a random subsample of subjects (300/1500, 60 subjects per study group), also resulting in limited power. However, there was no statistically significant difference among all study groups in each study in YF virus neutralizing antibody GMTs 4 weeks after immunization after adjusting for age, sex, and prevaccination titer.
The immune response to YF, as measured by NTs 4 weeks after immunization, was different between the 2 studies, with a higher seroconversion rate, seroprotection rate, and GMTs (93%, 97% and 32, respectively, in study B conducted in Mali, vs 68%, 73%, and 14, respectively, in study A conducted in Ghana). Several determinants could explain this difference, such as vaccine substrain, vaccine concentration, presence of maternal antibodies, and interference of other vaccines [14].
Two different vaccine substrains of YF-17D were used in the 2 studies: the 17DD substrain in study A (Ghana) and the 17D-213/77 substrain (a derivate of the 17D-204 substrain) in study B (Mali). The difference between these 2 vaccine substrains is the passage level (17D-204: 235-240; 17DD: 286-287) [15], but there are only minor differences when comparing nucleotide sequences of both the substrains [16]. Camacho et al and Nascimento Silva et al performed studies in which both vaccine substrains, 17DD and 17D-213, were tested for immunogenicity in adults and infants [10,17], with no significant difference in immune response. Seroconversion rates were 98% in one study and 70%-88% in the other study. Immunogenicity studies of YF-17D vaccines in infants show that immune responses tend to be lower in infants than in adults, with seroconversion rates ranging from 70% to 88.8% [8,10,17], consistent with our findings in study A. Interestingly, Nascimento Silva et al have also reported a significant difference in seroconversion rates when administering YF-17D alone or simultaneously with measles, mumps, and rubella vaccine (86.5% vs 69.5%) [10]. Similar rates were reported in another study, where 9-to 11-month-old infants received the YF-17D vaccine concomitantly with measles vaccine [14] with a seroconversion rate of 72%, consistent with that found in study A.
The difference in immune response between the 2 studies could also be related to a differential amount of viral particles in the vaccines. In 2009, a WHO expert committee defined a minimum amount of viral particles per dose as 3.0 log10 international units, that is, approximately equivalent to 3.73 log10 PFU. In 2013, the latter concentration was supported by a dose-response study of the YF-17DD vaccine conducted by Martins et al, who demonstrated that this minimal dose was as immunogenic as higher doses, with little difference in response rates [15]. The concentrations of the vaccines used in both our studies were above this minimum (study A: 4.34-4.56 log10 PFU; study B: 4.5-4.7 log10 PFU). However, viral concentrations are determined by titrating the virus on susceptible cells. Most commonly, Vero or PS cells are used for this purpose, with titers being higher when performing the titration on Vero cells vs PS cells (a difference ranging from 0.5 to 1 log10). The method for the determination of concentrations is not published, and it would be valuable to test both vaccines together on the same cell system. WHO indicated in 2013 that a single dose of the YF-17D vaccine provides lifelong protective immunity against YF disease and that a booster dose is no longer necessary [18]. This is consistent with the systematic review of the efficacy and duration of immunity after YF vaccination conducted by Gotuzzo et al to assess the need for a booster dose every 10 years [19]. Their findings indicate that, in most studies, seroconversion rates following YF vaccination were >90% and remained >75% several years after immunization. Furthermore, they found some indications that a YF booster dose would only lead to a minor or short-lived increase in neutralizing antibodies due to preexisting antibodies from primary vaccination [19], and they concluded that a YF booster dose would not be needed. Given the rather low neutralizing GMTs found after vaccination in study A, the question may arise whether these titer values are maintained throughout life. Conducting a serosurvey in these infants in 3-5 years would be warranted to evaluate whether titers are maintained or decline with time [18,19].
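To make the log10 dose statements above concrete, a one-line conversion to absolute PFU per dose (values copied from this paragraph):

```python
# log10 PFU values quoted above, converted to absolute PFU per dose.
minimum = 10 ** 3.73                     # WHO minimum, ~5,400 PFU
study_a = (10 ** 4.34, 10 ** 4.56)       # ~21,900 to ~36,300 PFU
study_b = (10 ** 4.5, 10 ** 4.7)         # ~31,600 to ~50,100 PFU
print(f"{minimum:.0f}", study_a, study_b)
```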
The presence of maternal antibodies also plays a role in the immunologic response in infants [20,21]. The median age at vaccination (9 months) and the prevaccination titers were similar and consistently low in both studies, with 4.5% and 4.0% of the infants with titers ≥1:8 prior to vaccination. Therefore, presence of maternal antibodies cannot explain the different levels of response observed in our 2 studies.
Sex differences in response to YF vaccine have been reported, with contradictory reports of higher responses either among adult males or among adult females [22,23]. No difference in the immune response to YF vaccine according to sex was found in our studies, so more studies are required to analyze sex differences in immune responses to the YF-17D vaccine in infants.
In conclusion, concomitant administration of the PsA-TT does not affect the response to YF vaccine in African infants. Differences in the postvaccination seroconversion and seroprotection rates in the 2 studies were observed, confirming the need to further document the immune response to YF-17D vaccine in infants.
Notes
Acknowledgments. The authors thank Manisha Ginde, Arati Borkar, and Nija Sasidharan from Diagno Search Life Sciences Pvt, Ltd, for their ambitious engagement in the continuous evaluation of the quality of the YF NT assay according to international standards and supervision of the accreditation process, and the Meningitis Vaccine Project team, in particular, Julie Chaumont, Kajsa Hultgren, and Lionel Martellet for their support. Our gratitude also goes to Anette Teichmann and Katharina Holschbach-Bussian for facilitating the accreditation of the YF NT according to national regulations. For providing further details on the YF vaccines, we also thank Alexander Lukashev from the Russian Academy of Medical Sciences and Maria Fernandes from Bio-Manguinhos/Fiocruz.
Disclaimers. 1) The authors and editors alone are responsible for the views expressed in this publication and they do not necessarily represent the views, decisions, or policies of the institutions with which they are affiliated; 2) The designations employed and the presentation of the material in this publication do not imply the expression of any opinion whatsoever on the part of PATH or the World Health Organization (WHO) concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. Dotted and dashed lines on maps represent approximate border lines for which there may not yet be full agreement; 3) The mention of specific companies or of certain manufacturers' products does not imply that they are endorsed or recommended by PATH or WHO in preference to others of a similar nature that are not mentioned. Errors and omissions excepted, the names of proprietary products are distinguished by initial capital letters.
RNA silencing suppressor-influenced performance of a virus vector delivering both guide RNA and Cas9 for CRISPR gene editing
We report on further development of the agroinfiltratable Tobacco mosaic virus (TMV)-based overexpression (TRBO) vector to deliver CRISPR/Cas9 components into plants. First, production of a Cas9 (HcoCas9) protein from a binary plasmid increased when co-expressed in the presence of suppressors of gene silencing, such as the TMV 126-kDa replicase or the Tomato bushy stunt virus P19 protein. Such suppressor-generated elevated levels of Cas9 expression translated to efficient gene editing mediated by TRBO-G-3′gGFP expressing GFP and also a single guide RNA targeting the mgfp5 gene in the Nicotiana benthamiana GFP-expressing line 16c. Furthermore, HcoCas9 encoding RNA, a large cargo insert of 4.2 kb, was expressed from TRBO-HcoCas9 to yield Cas9 protein again at higher levels upon co-expression with P19. Likewise, co-delivery of TRBO-HcoCas9 and TRBO-G-3′gGFP in the presence of P19 also resulted in elevated percentages of indels (insertions and deletions). These data also revealed an age-related phenomenon in plants whereby the RNA suppressor P19 had more of an effect in older plants. Lastly, we used a single TRBO vector to express both Cas9 and a sgRNA. Taken together, we suggest that viral RNA suppressors could be used for further optimization of single viral vector delivery of CRISPR gene editing parts.
In previous work, a progenitor gGFP was placed under the control of the CP subgenomic RNA promoter and located downstream of a GFP coding region to create the TRBO-G-3′gGFP vector (see "Results"). It should be emphasized that the gGFP used does not target the GFP insert in the TRBO vector, and the elongated progenitor gGFP expressed from this construct is processed by endogenous catalytic events in plants that result in proper programming of Cas9 19. This engineered vector exhibited high expression of GFP, which implies relatively high expression of the adjoining sgRNA. Additional co-infiltration of pHcoCas9 and TRBO-G-3′gGFP induced double-stranded breaks (DSBs) resulting in indels of up to 60% 17. Using the pHcoCas9 and TRBO-sgRNA delivery system, it was hypothesized that while in previous systems sgRNA delivery was the limiting factor for producing indels, this system's rate-limiting step could be Cas9 expression 17. Through the use of the TRBO vector to deliver Cas9, we aimed to achieve more effective Cas9 expression and create a highly efficient and rapid method of knocking out plant genes that can be used across multiple applications and plant species. The Tomato bushy stunt virus (TBSV) P19 protein functions as a suppressor of RNA interference (RNAi) by forming homodimers that bind short interfering RNAs (siRNAs) produced by the DICER-like nuclease 20. The sequestering of siRNAs by P19 prevents the RNA-induced silencing complex (RISC) from being programmed with these molecules 21. This inhibits the endonuclease activity of RISC and thus interferes with degradation of any RNA corresponding to the siRNA 21,22. Since the demonstration of the P19 protein as a strong RNAi suppressor 23, P19 has been used to enhance the expression of recombinant proteins in plants. For instance, the ectopic expression of p19 enhanced agroinfectivity of a TMV expression vector harboring the gfp gene in Nicotiana benthamiana, leading to a significant increase of cells expressing GFP 9. Similarly, the transient and stable introduction of P19 into transgenic sugarcane enhanced and stabilized the expression of the reporter gene 24. In another example, the delivery of P19 from a Potato virus X vector enhanced transgenic GFP expression in previously silenced GFP-transgenic N. benthamiana (16c) 23. In a similar experimental design, GFP expressed using a P19-defective TBSV vector in N. benthamiana resulted in an antiviral silencing response and a low level of GFP expression, whereas GFP expression recovered significantly when a separate P19 construct was infiltrated in the same leaves 25. Another viral suppressor of RNA silencing (VSR) is the P126 replicase subunit of TMV, which is produced when translation of the full replicase prematurely terminates at an amber stop codon. The P126 protein encodes multiple domains, including a methyltransferase, a helicase, and the non-conserved region II, that are crucial for the silencing suppression activity 26,27. Experiments support the notion that P126 interferes with the RNAi pathway by binding to siRNA duplexes and physically blocking the vital HEN1-dependent methylation process, ultimately inhibiting their incorporation into RISC 28,29.
In the present study we indeed observed a beneficial effect of the endogenous TRBO-expressed P126 suppressor on gene expression, but also demonstrate a substantial additive effect of separately adding the P19 suppressor protein on Cas9 expression for gene editing. Nonetheless, the increase in Cas9 expression did not necessarily result in higher indel percentages in treated plants. Rather it was only in older N. benthamiana plants (5-week old plants) compared to younger (3-week old plants) where P19 noticeably aided indel accumulation, indicating that the host antiviral RNAi mechanism may affect the expression and functionality of gene editing components in a developmentally controlled manner. We also provide pioneering evidence for an RNA virus vector, that the Cas9 protein and the sgRNA can be expressed from the same virus backbone, towards the implementation of single delivery platforms. Overall, it is evident that the integration of virus vector technology, suppressors, and CRISPR/Cas9 can be adapted for the development of alternative transient expression systems in plants for rapid screening of gene function.
Results
P19 increases transient expression of Cas9 protein.
Previously, we developed a binary vector, pHcoCas9, for the expression of Cas9 protein in plants (Fig. 1a) 17. As mentioned earlier, the TBSV P19 suppressor protein is widely used to enhance expression of foreign gene inserts. Therefore, we aimed to test whether P19 would increase the expression of Cas9 by co-delivering P19 with the pHcoCas9 vector into four-week-old N. benthamiana plants. The effect of P19 on Cas9 protein expression was examined by western blot analyses monitoring Cas9 expression over nine days. Coomassie blue staining was carried out to ensure equal protein loading across the tissue samples.
Immunoblot analysis of plants infiltrated with pHcoCas9 showed that the Cas9 protein was first detected at 6 days post infiltration (dpi), followed by a decrease in expression through 9 dpi. For N. benthamiana leaves co-infiltrated with the pHcoCas9 and P19 vectors (Fig. 1a), Cas9 expression was observed at 3 dpi, accumulating to its highest level at 6 dpi and maintained through 9 dpi (Fig. 1b). These results indicate that in the presence of P19, Cas9 protein levels were noticeably higher and the expression window was extended, as compared to levels obtained in the absence of P19 (Fig. 1b).
P19 expression increases activity of Cas9/sgRNA complexes. Our previous study showed that the GFP-expressing TMV vector (TRBO-G) could be used to visually track virus infection while at the same time delivering high amounts of sgRNAs (TRBO-G-3′gGFP) 17 . This autocatalytically driven TRBO-G-3′gGFP system (Fig. 2a) was used in the present study to examine the effect of P19 on gene editing events in 16c plants in conjunction with the increase of Cas9 protein expression shown above.
To compare the efficiency of catalytic events, pHcoCas9 and TRBO-G-3′gGFP were co-delivered into 16c plants in the presence or absence of P19. The occurrence of indels was analyzed by quantifying the intensity of amplicon bands that had lost the BsgI site (undigested) relative to those containing the wild-type sequence (readily digestible). Furthermore, the presence of indels was assayed over the course of ten days. Prior to 4 dpi, treatments with and without P19 resulted in similar indel quantities. Based on the effect on Cas9 expression levels (Fig. 1b), it was expected that P19 would also drastically elevate gene editing levels. However, plants co-infiltrated with the P19 expression vector showed only slightly, albeit consistently, higher percentages of indels from 4 dpi through 10 dpi. This will be elaborated on in the Discussion; at present, given the small differences, we could not conclude that P19 greatly improves gene editing, but it is clear that the addition of P19 does not substantially alter the timing or incidence of indel production under these conditions.
TMV replicase-associated RNA silencing suppressor activity. The initial objective was to investigate the effect of Cas9 protein levels on indel production. Even though, unlike P19, the TMV P126 replicase subunit is not a widely used suppressor protein in gene expression studies, serendipitous results revealed additional insight into the role and utility of the P126 suppressor in Cas9 protein expression and gene editing events.
To illustrate this, an Agrobacterium culture containing the pHcoCas9 vector was serially diluted to produce four sample concentrations (OD600 of 0.5, 0.25, 0.125, and 0.063) for agroinfiltration. Four-week-old 16c plants were co-infiltrated with the pHcoCas9 dilutions along with TRBO-G-3′gGFP (OD600 of 0.5). Indels were calculated using the BsgI restriction digest assay, and Cas9 protein levels were monitored by western blot assays. The BsgI assay showed little or no difference in indel percentages, ranging from 24 to 28%, between the different concentrations of Cas9-expressing constructs (Fig. 3a). Similarly, Cas9 protein expression remained at a constant level despite the decreasing vector infiltration concentration (Fig. 3b). Even the lowest amount of Cas9 delivery (OD600 of 0.063) was nearly indistinguishable from the indel percentages associated with the highest amount of the Cas9 protein-expressing construct (OD600 of 0.5). From this serendipitous finding, we hypothesized that the P126 replicase expressed from the TRBO vector may have beneficial effects through its suppressor mode of action, possibly explaining the similar Cas9 expression level across different concentrations of pHcoCas9.
To test this, serial dilutions of pHcoCas9 at OD600 of 0.5, 0.25, 0.125, and 0.063 were co-infiltrated with a TRBO-gGFP replicase-deficient mutant vector (RM-gGFP) 17 at an OD600 of 0.5. The RM-gGFP vector has a large deletion in the replicase gene that abolishes TMV replication and synthesis of any functional proteins or genes, including gGFP for gene editing, but perhaps more importantly it removes the ability of TRBO to produce P126 17. The expression profile of Cas9 with RM-gGFP (absence of P126) showed a decrease in protein levels across the serially diluted pHcoCas9 cultures, most dramatically from OD600 of 0.5 to 0.25 (Fig. 3c). The results confirmed that virus replication, presumably through the expression of the suppressor-active P126 replicase protein, contributes to the stable and high expression of proteins that are expressed from a separate binary vector. Moreover, these results showed that Agrobacterium cultures can be used at a very low concentration while maintaining a high agroinfection rate for co-infiltrations involving a TRBO-based vector, with OD600 as low as 0.0625.
Co-delivery of Cas9 and sgRNA using two separate TRBO vectors. The advantage of developing a TRBO/Cas9 viral delivery tool over a non-viral vector system, such as the binary pHcoCas9 vector, is that the virus can move cell to cell due to the viral MP, enabling Cas9 expression in cells adjacent to those exposed to Agrobacterium T-DNA. However, as has been demonstrated to occur with large nucleotide inserts 30, the expression of TRBO-delivered Cas9 was anticipated to be very low or to result in recombination as the virus replicates. Hence, the P19-expressing vector was supplied to the co-infiltrations to aid in the expression of TRBO-delivered Cas9. In order to test TRBO as a Cas9 delivery tool, the GFP gene in the TRBO-G vector was replaced with the HcoCas9 coding sequence to produce TRBO-HcoCas9 (Fig. 4a). A time course assay was used to analyze the Cas9 protein expression profile in four-week-old wild-type N. benthamiana plants. In these experiments, half of the leaf was infiltrated with TRBO-HcoCas9 alone and the other half co-infiltrated with TRBO-HcoCas9 and P19. As anticipated, Cas9 protein expression from TRBO-HcoCas9 by itself was undetectable throughout the nine-day time course using these lysate concentrations (Fig. 4b). However, co-expression of P19 resulted in detectable levels of Cas9, with increased protein expression from 4 to 6 dpi, followed by a slow decrease in expression through 9 dpi. These results firmly demonstrate the additive advantage of delivering P19, in addition to the endogenous P126 suppressor encoded by the TMV vector itself, to obtain readily detectable levels of Cas9 expression from TRBO under our experimental conditions.
Following the detection of Cas9 protein production from the TRBO vector, we aimed to test the functionality of TRBO-delivered Cas9, using indel formation at the gGFP-targeted locus in 16c N. benthamiana plants as a proxy for Cas9 functionality. Additionally, we aimed to deliver both Cas9 and a sgRNA using TRBO vectors. For this, the TRBO-HcoCas9 vector was co-delivered with TRBO-G-3′gGFP to express both the Cas9 protein and gGFP, respectively, with and without P19 in four-week-old N. benthamiana 16c plants. A BsgI restriction digest assay was conducted on DNA from tissue sampled at 10 dpi. Restriction digestion-resistant bands were observed (Fig. 4c) as an indication of generated indels, representing 10% and 14% for samples without and with P19, respectively. This indicated that the TRBO vector can deliver the large Cas9 endonuclease (encoded by a ~4.2-kb insert) in its bioactive form, as shown by indel formation in treated tissues.
Host age-dependent gene editing efficiency. During these studies, the functionality of the system was also examined over a spectrum of differently aged N. benthamiana plants. For instance, N. benthamiana 16c plants that were three-or five-week-old were co-infiltrated with TRBO-G-3′gGFP and Cas9, the latter delivered using either TRBO-HcoCas9 or pHcoCas9, in the presence or absence of P19. Leaf tissues were sampled at 7 dpi to examine Cas9 protein levels and the P19 protein expression profile. DNA was sampled from tissues collected at 10 dpi to determine indel percentages through the BsgI restriction assay (Fig. 5).
The three-week-old infiltrated plants showed similar percentages of mgfp5 indels in the presence or absence of P19, 33% and 29%, respectively, with TRBO-HcoCas9-delivered Cas9. Consistently, the binary vector pHcoCas9-delivered Cas9 overall demonstrated higher levels of indel efficiency (Fig. 5b). The western blot used to verify Cas9 protein levels in three-week-old 16c plants in relation to P19 protein expression showed a similar expression profile as previous blots in this study, with higher Cas9 protein levels in the presence of P19 and lower expression levels in the absence of P19 (Fig. 5c). Collectively, these results showed that in plants inoculated at three weeks of age, P19 and pHcoCas9 expression follows expected patterns, but the presence of P19 may slightly compromise gene editing in plants at a younger physiological state.
Conversely, lower levels of gene editing were consistently observed when P19 was not supplied in five-week-old 16c plants, evident from the calculated band intensities. For instance, the TRBO-HcoCas9 and gGFP co-infiltration yielded 5% indels, as compared to 18% upon the addition of P19 to the mix. Similarly, the co-infiltration of pHcoCas9 and gGFP produced 30% indels, as opposed to 59% upon the addition of P19 (Fig. 5d). The western blot for the five-week-old plant protein samples also showed higher Cas9 protein levels in the presence of P19 (Fig. 5e). As seen previously (Figs. 2b, 4c), these effects of P19 on indel percentages were consistent.
For unknown technical reasons, a biologically viable construction of a single TRBO vector carrying both Cas9 and the sense-orientation gGFP was not successful despite multiple attempts. As an alternative, we elected to clone the reverse-complement of gGFP (RCgGFP) 3′ proximal to Cas9 in the TRBO-HcoCas9 vector to produce TRBOCas9-RCgGFP (Fig. 6a). In this case, the gGFP would only be expressed on the negative-sense RNA through TRBO replication. To test this new Cas9 and sgRNA expression vector, 16c plants were infiltrated with TRBOCas9-RCgGFP, either in the presence or absence of P19. Western blot analysis of the Cas9 protein was conducted using proteins sampled at 7 dpi. Indel induction percentages were analyzed through the BsgI restriction assay at 10 dpi to determine functionality of the expressed Cas9 and gGFP. As expected, western blot analysis of TRBOCas9-RCgGFP-delivered Cas9 showed protein expression that was enhanced in the presence of P19. However, protein expression was lower compared to TRBO-HcoCas9 (not carrying RCgGFP) plus the P19 vector (Fig. 6b). Furthermore, the BsgI restriction assay of the single delivery vector showed undigested DNA bands (Fig. 6c,d) with calculated indel percentages of approximately 3 ± 0.5% and 6 ± 1.5% in the absence or presence of P19, respectively (Fig. 6c). To further validate the biological functionality of Cas9 and sgRNA, the resistant DNA fragment from the TRBOCas9-RCgGFP and P19 co-infiltration was cloned for sequencing to confirm the presence of indels. The results included nucleotide insertions or deletions at the genomic BsgI restriction site targeted by gGFP, as expected from NHEJ (Fig. 6d).
In conclusion, the addition of the 100-bp progenitor gGFP into the TRBO-HcoCas9 vector did not disrupt the genome replication capability of TMV nor impair Cas9 protein synthesis and function when expressed on the antisense RNA. Even though the expression level of Cas9 was lower in the RCgGFP-carrying construct compared to TRBO-HcoCas9, indels were still detected.
Discussion
P19 and gene editing. The extensive study of the TBSV P19 protein as a strong RNAi suppressor has led to efforts to utilize P19 to increase the production of valuable human therapeutic proteins in plants [31][32][33]. However, this idea had not been implemented or reported in a gene editing context. Until recently, methods of using the CRISPR/Cas9 gene editing tool focused on optimizing the delivery and expression of sgRNAs, with less attention to the levels of the Cas9 endonuclease. Here, we present a new approach of using VSRs to boost the overall performance of the CRISPR/Cas9 system and developed a TMV-based delivery method to transiently and rapidly synthesize Cas9 and sgRNA. Our findings demonstrate a significant increase of Cas9 protein levels and an enhancement of gene editing events upon introduction of the P19-expressing vector, specifically in more mature plants. Furthermore, presumably due to the suppressing activity of the homologously expressed TMV P126, the co-infiltration of a TRBO vector in itself already positively influences the accumulation of recombinant proteins using lower-density Agrobacterium cultures. The results also showed that the TRBO vector can replicate with the large Cas9 insert, and that the protein is catalytically active, as demonstrated through the formation of indels in treated tissue. Lastly, expression of Cas9 and sgRNA was integrated into a single TRBO delivery vector that is functionally active to induce indels at DNA targets, again with better results in the presence of the P19 VSR.
Plant age effects.
Our study showed that plant age can also influence the effect of P19 on indel production.
One possible explanation for the somewhat reduced indel percentages in the younger three-week-old plants co-infiltrated with P19 is that in early stages, the silencing machinery is heavily focused on regulating expression of genes involved in cellular developmental processes, while the machinery simultaneously undergoes fine-tuning of itself; silencing of invading RNA is therefore less active and suppression less noticeable. Another possibility is that P19 dimers could have bound some of the processed gGFPs, hindering proper binding with Cas9. In older plants, cellular development slows down or halts and the RNA silencing mechanism can target other regulatory processes, such as defending against pathogens and transgenes. Subsequently, the virus-induced and -targeted RNA silencing causes the boost in the P19 suppressing effect in older plants to protect the virus from degradation. Another possibility is that the silencing machinery is involved in proper processing of gRNAs, and thus supplying P19 might have conflicting effects depending on plant age. Therefore, the stimulating effect of P19 on Cas9 accumulation might, under certain circumstances, be camouflaged by less efficient editing due to effects of P19 on sgRNA processing. Regardless of the mechanism(s) involved, the results agree with the observation that RNA silencing against a GFP-expressing TBSV construct (not expressing P19) is much weaker in younger than older plants (HBS, personal communication).
Expressing Cas9 and sgRNA using TRBO. The TRBO vector has great potential for the expression of larger foreign proteins with better stability and efficiency 8. Within that context, the initial goal of using the TRBO vector to deliver Cas9 stemmed from the pioneering study that reported rapid expression and high accumulation of GFP protein delivered using the same TRBO vector that harbored the gRNA 17. Initially, the ability of TRBO to successfully deliver the large Cas9 protein (164-kDa) into plants was uncertain, since the largest reported gene insertions in a TMV-based vector were the ~56-kDa Human papillomavirus type 16 major CP L1 34 and the ~58-kDa Norwalk virus CP 35. In the current study the insert size in the TRBO vector is increased to ~4.2-kb for expression of the ~164-kDa Cas9 protein, which to our knowledge reflects one of the largest foreign inserts expressed from a TMV-based vector reported to date. Furthermore, we were able to use two TRBO constructs, as well as a single construct, to deliver the Cas9 endonuclease and sgRNA into plants, whereby the editing performance is again enhanced by the presence of P19.
The indel formation induced by the co-infiltration of the two TRBO vectors, TRBO-HcoCas9 and TRBO-G-3′gGFP (Figs. 4c, 5a), can perhaps be further improved when the sgRNA is delivered using an unrelated virus vector, such as TBSV or the satellite virus of TMV (STMV) (unpublished data). Previous work has demonstrated that co-infection of TBSV and TMV gene vectors can be used in the same plant cells to produce similar amounts of the respective recombinant proteins in the tested hosts 36. This can be explained by the well-documented antagonistic and synergistic interactions among related and unrelated viruses, respectively. For instance, it has been established that co-infection of multiple vectors based on a similar TMV backbone, comparable to using different strains, creates a competing environment in the host that manifests itself as cross-protection 37. In the context of virus vector technology, the result is lower production of the recombinant proteins compared to the use of different, non-competitive viruses 38. Or the solution might be as simple as using a more stable viral genome background, like that of Citrus tristeza virus 39.
Another novel finding of this study is that a single TRBO delivery vector could be used for simultaneous expression of Cas9 and sgRNA. Many currently used delivery methods utilize co-infection of independent binary vectors to deliver and express Cas9 and sgRNA. As demonstrated, this could lead to varying levels of transient virus-mediated expression in each cell at different time points, resulting in poor gene editing efficiency 40. Our single TRBO delivery vector ensures co-expression of Cas9 and sgRNA in agroinfiltrated cells to provide a versatile and simple approach to gene editing with the potential of multiplexing. Prior to application, the system needs improvement, because the editing efficiencies obtained with delivery by a single TRBO vector were low. However, the aim here was to show proof-of-concept. The present findings also seem to unveil some important aspects regarding the basics of TMV replication. The RCgGFP positioning places the guide RNA sequence in the reverse-complementary, negative orientation on the (+) sense genomic (and subgenomic) RNA of TMV. Consequently, the proper sense gGFP orientation is only present on the negative-strand RNA of TMV that is produced during replication. It is thought that the negative-strand RNA exists as fully or partially double-stranded replicative forms or intermediates as the virus replicates 41,42. In our case the gGFP produced from TRBOCas9-RCgGFP would reside on the full-length negative-strand RNA instead of the (+) sense short subgenomic mRNA transcribed from the CP subgenomic promoter, such as the gGFP synthesized from the TRBO-G-3′gGFP vector. Thus, the gGFP is only present on genomic (−) sense RNA that is associated with positive-sense RNA to exist as dsRNA. However, the ability of TRBOCas9-RCgGFP to functionally induce indels must mean that there is a population of free negative single-stranded RNAs containing gGFP that is processed for Cas9 programming. Albeit indirect, this provides novel information that the free minus-strand is important for TMV replication.
Lastly, the use of Agrobacterium-mediated transient expression could be limiting, especially in pathogen-host interaction studies or in-depth studies of viral suppressor effects, due to pathogenic effects on the plant. In this instance, agroinfiltration could activate defensive mechanisms in both the plant and the bacterium 43. The development of a fully viral-dependent vector would allow for direct inoculation of vector transcripts, such as those that can be transcribed from the TRBO constructs used here for gene editing, onto plants without reliance on Agrobacterium for vector delivery into plants.
Conclusion
In summary, the present study shows that viral suppressors of RNA silencing may represent useful tools to be implemented together with viral-mediated delivery of CRISPR/Cas9 to obtain rapid and efficient transient methods of gene editing in plants. During these studies we also obtained new information on factors affecting virus-editing performance, the functionality of TMV expressing large inserts, the effects of plant age on suppressor activity, and how the behavior of the constructs offers potential new insights into basic virus replication. In addition to these fundamental revelations, the developed platform may form the basis for optimization to acquire tools that may be attractive in future practical settings as alternatives for rapid transient screening of gene editing effects prior to proceeding to transgenic approaches.
Data presentation. For comparisons, all gel-based data used in the Figures are provided as originals in Supplementary Figures S1-S6.
Construct design. The pJL-TRBO and pJL-TRBO-G (TRBO-G) vectors were previously constructed and provided by John Lindbo 8. The P19 plasmid used in this study, known as pKYLX7-p19, was previously made and described in Saxena et al. 2011. The pHcoCas9, TRBO-G-3′gGFP, and RM-gGFP vectors were previously designed and constructed 17. The TRBO-HcoCas9 vector was constructed using a restriction enzyme cloning approach to introduce the HcoCas9 fragment from pHcoCas9 into the PacI and NotI restriction sites in pJL-TRBO-G 8, replacing the truncated gfpc3 gene. The TRBOCas9-RCgGFP vector was designed using TRBO-HcoCas9 as the cloning backbone. The reverse-complement fragment of gGFP was PCR amplified from TRBO-G-3′gGFP, using overlapping forward and reverse primers containing 5′- and 3′-overhangs flanking the TRBO-HcoCas9 cloning site located directly downstream of the Cas9 stop codon. This PCR product was inserted into NotI-linearized TRBO-HcoCas9 using NEBuilder HiFi DNA Assembly master mix (New England Biolabs) according to the manufacturer's instructions and transformed into E. coli and later into Agrobacterium GV3101.
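The RCgGFP insert is simply the reverse complement of the guide fragment; a minimal sketch of the derivation (the example sequence is a placeholder, not the actual gGFP):

```python
# Reverse-complement a DNA sequence: complement each base, then reverse.
COMP = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq):
    return seq.translate(COMP)[::-1]

print(reverse_complement("ATGCGGTTA"))  # -> TAACCGCAT
```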
Agroinfiltration of plants.
Constructs were transformed into Agrobacterium strain GV3101 via electroporation. Agrobacterium cultures of the constructs were grown in LB liquid media containing 50 mg/L of kanamycin and incubated overnight at 28 °C on a shaker set at 250 rpm. Bacterial cells were harvested by centrifugation at 3900×g for 20 min at room temperature, resuspended in infiltration buffer (IB; 10 mM MgCl2, 10 mM MES pH 5.6, and 200 µM acetosyringone), and incubated at room temperature for 2 to 4 h. Prior to agroinfiltration, cultures of all the constructs were adjusted to a final concentration of OD600 0.5 with IB, except for the P19 cultures, which were adjusted to OD600 0.4. For each infiltration, three N. benthamiana leaves were agroinfiltrated on the abaxial side using a needleless syringe and returned to the growth chamber (16/8 h light/dark cycle at 25/23 °C and 60% relative humidity).
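The OD adjustments follow the usual C1V1 = C2V2 dilution arithmetic; a small sketch with illustrative numbers (function name ours):

```python
# Volume of infiltration buffer (IB) needed to bring a culture down to a
# target OD600, from C1*V1 = C2*(V1 + V_buffer).
def buffer_volume_to_add(od_culture, od_target, v_culture_ml):
    assert od_target <= od_culture
    return v_culture_ml * (od_culture / od_target - 1.0)

# 1 mL of culture at OD600 2.0 diluted to OD600 0.5 needs 3 mL of IB.
print(buffer_volume_to_add(2.0, 0.5, 1.0))  # -> 3.0
```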
Protein extraction and western blot analysis. The expression profiles of the Cas9 and P19 proteins were determined using western blot analysis of proteins extracted from infiltrated leaf tissues. Proteins were extracted from 50 mg of leaf tissue in 500 µl of 5× cracking buffer (645 mM Tris pH 6.8, 10% (w/v) SDS, 715 mM 2-mercaptoethanol, 40% (v/v) glycerol, and 0.005% (w/v) bromophenol blue), boiled for 5 min, and centrifuged at 10,000×g for 2 min. Then, 20 µl of the supernatant were loaded and separated on 7.5% or 12% polyacrylamide-SDS gels for Cas9 and P19, respectively. Electrophoresis was performed at 80 V for 20 min and then 150 V for 80 min in 1× Laemmli running buffer (25 mM Tris, 192 mM glycine, and 0.1% (w/v) SDS). The separated proteins were transferred to a nitrocellulose membrane (Bio-Rad, Hercules, CA) in Tris-glycine transfer buffer (25 mM Tris, 192 mM glycine, and 20% methanol, pH 7) at 270 mA for 90 min. The membranes were blocked in TBST buffer (0.2 M NaCl, 50 mM Tris, and 0.05% (v/v) Tween 20, pH 7.4) with 5% non-fat milk for 1 h, followed by overnight incubation at 4 °C with mouse IgG anti-CRISPR (Cas9) primary antibody (BioLegend) or P19 antibodies 21. After incubation, the membranes were subjected to three 5-min washes with TBST. Secondary IgG anti-mouse (goat) or IgG anti-rabbit antibodies conjugated to alkaline phosphatase (Sigma Aldrich) were added to the membrane at 1:10,000 dilution and incubated for 1 h at room temperature. The wash steps were then repeated following the incubation. Colorimetric detection of the Cas9 protein was achieved by adding 33 μl of 5-bromo-4-chloro-3-indolyl phosphate (100 mg/ml) and 66 μl of nitro blue tetrazolium (20 mg/ml) to 10 ml of 1× alkaline phosphatase buffer (100 mM Tris, pH 9.5, 1 M NaCl, and 0.5 M MgCl2). The Cas9 protein can be identified by its molecular mass of ~164 kDa. Coomassie Brilliant Blue R-250 staining was used to ensure equal loading of protein for each sample. Rapid staining was achieved by microwaving the SDS-PAGE gel of separated proteins in water twice for 2 min, discarding the water, followed by microwaving the gel in stain solution (40% methanol, 10% acetic acid, 50% water, and 0.1% (w/v) Coomassie Brilliant Blue R-250) for 1 min, and incubating with gentle swirling for 2 min at room temperature. The stained gels were transferred to a square petri dish containing de-stain solution (40% methanol, 10% acetic acid, and 50% water) alongside two Kim Wipes bordering the top and bottom of the petri dish to absorb the unbound blue stain. The petri dishes were rocked gently on a rotator. The de-staining steps were repeated until the 55-kDa Rubisco band could be clearly observed on the gel.
DNA and indel induction analysis. Leaf tissues were collected and pooled from three infiltrated leaves, totaling 50 mg of fresh tissue for each plant. DNA was extracted from these tissue samples using the Quick-DNA Miniprep Kit (ZYMO Research) according to the manufacturer's instructions. The BsgI restriction digest assay begins with PCR amplification of the region of the mgfp5 gene spanning the aforementioned unique BsgI restriction site, using 100 ng of genomic DNA. The PCR product was purified using the DNA Clean & Concentrator-5 kit (ZYMO Research). Subsequently, 200 ng of the purified PCR product was digested with BsgI restriction enzyme overnight at 37 °C. The digested mgfp5 DNA was separated by electrophoresis in an ethidium bromide-stained 1.5% agarose gel. Gel images were uploaded to and analyzed in ImageJ software (NIH) to measure the indels induced by NHEJ events. The band intensity of undigested mgfp5 DNA fragments (indel-containing) was quantified and compared to that of digested mgfp5 DNA (non-indel-containing) to yield indel percentages. In the presence of Cas9 protein, the gGFP is designed to target the BsgI restriction site of the mgfp5 gene. In some cases, digestion-resistant (gene-edited) bands were excised, cloned into the pGEM-T-Easy vector (Promega), and transformed into E. coli cells for sequence analysis of plasmids from several colonies.
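As a minimal illustration of the quantification step, the sketch below computes an indel percentage from densitometry readings of the kind ImageJ produces; the function name and the example intensity values are hypothetical, and the plain intensity ratio assumes no correction for amplicon-size-dependent staining.

```python
def indel_percentage(undigested_intensity: float, digested_intensities: list[float]) -> float:
    """Estimate the fraction of indel-containing amplicons from gel band intensities.

    Undigested (BsgI-resistant) bands correspond to indel-containing DNA;
    digested bands correspond to unedited DNA cut at the intact BsgI site.
    """
    total = undigested_intensity + sum(digested_intensities)
    if total == 0:
        raise ValueError("No band signal measured")
    return 100.0 * undigested_intensity / total

# Hypothetical ImageJ measurements (arbitrary densitometry units):
# one resistant full-length band and two BsgI cleavage products.
print(f"indels: {indel_percentage(1250.0, [2100.0, 1980.0]):.1f}%")
```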
Data availability
Upon request, materials, data and associated protocols will be made available. | 2021-03-26T06:16:44.130Z | 2021-03-24T00:00:00.000 | {
"year": 2021,
"sha1": "5232164e6bd9206c5c9ebedfd1a29f2ddeab4251",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-85366-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb2d37bc90036a7dd35abef3182d21d6801a1c64",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209414748 | pes2o/s2orc | v3-fos-license | Enhanced photon emission from a double-layer target at moderate laser intensities
In this paper we study photon emission in the interaction of a laser beam with an under-dense target and an attached reflecting plasma mirror. Photons are emitted due to inverse Compton scattering when accelerated electrons interact with the reflected part of the laser pulse. The enhancement of photon generation in this configuration lies in using a laser pulse with a steep rising edge. Such a laser pulse can be obtained by the preceding interaction of the incoming laser pulse with a thin solid-density foil. Using numerical simulations we study how such a laser pulse affects photon emission. As a result of employing a laser pulse with a steep rising edge, accelerated electrons can interact directly with the most intense part of the laser pulse, which enhances photon emission. This approach increases the number of created photons and improves the photon beam divergence.
Photon emission is characterized by the electron quantum nonlinearity parameter $\chi_e$, which compares the electromagnetic field experienced by the electron in its rest frame with the Schwinger field $E_S = m_e^2 c^3/(e\hbar) \approx 1.3 \times 10^{18}\,\mathrm{V/m}$, where $m_e$ is the electron rest mass, $e$ is the elementary (positive) charge and $\hbar$ is the reduced Planck constant 7,8. This parameter is maximized when the electron collides head-on with the laser pulse. In such a case, the value of $\chi_e$ can be approximated as $\chi_e \approx 2\gamma E_0/E_S$, where $E_0$ is the amplitude of the laser field 9. The photon emission probability is then controlled only by the energy of the incoming electron and the amplitude of the laser field.
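As a quick worked check of this scaling (a sketch; the electron energy and intensity below are illustrative values, not the simulation parameters used later), one can evaluate $\chi_e \approx 2\gamma E_0/E_S$ numerically:

```python
import math

C = 2.99792458e8          # speed of light, m/s
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E_S = 1.3e18              # Schwinger field, V/m

def chi_e(gamma: float, intensity_W_cm2: float) -> float:
    """Quantum nonlinearity parameter for a head-on electron-laser collision."""
    intensity = intensity_W_cm2 * 1e4             # W/cm^2 -> W/m^2
    e0 = math.sqrt(2.0 * intensity / (C * EPS0))  # peak field of a linearly polarized wave
    return 2.0 * gamma * e0 / E_S

# e.g. a 100 MeV electron (gamma ~ 196) colliding with a 10^22 W/cm^2 pulse
print(f"chi_e ~ {chi_e(100 / 0.511, 1e22):.2f}")  # ~0.08
```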
Electrons in plasma can be accelerated by the Laser Wake-Field Acceleration (LWFA) or Direct Laser Acceleration (DLA) mechanisms [10][11][12]. The latter becomes more important for plasma densities higher than 10^20 cm^-3 and intensities going beyond today's world record (>10^22 W/cm^2) [13][14][15][16][17]. To achieve such a high intensity, the laser pulse has to be tightly focused, which results in rapid diffraction of the laser field. Thus, a higher plasma density is required to compensate for diffraction in this case.
Nevertheless, a head-on collision remains an issue from the experimental point of view due to the required spatio-temporal alignment of the interaction 2. This can be overcome by employing a plasma mirror. As the laser pulse impinges on the over-dense plasma mirror, it is reflected, and thus previously accelerated electrons can interact with a counter-propagating laser field, which leads to efficient photon emission 5,18. This double-layer interaction setup can be further optimized by tuning the target properties (density, thickness) with respect to the laser intensity and focal spot radius to create the highest number of high-energy photons [19][20][21][22][23][24][25][26][27].
In this paper, we study photon emission in such an interaction scheme when various temporal profiles of the incoming laser pulse are assumed. For efficient photon production in a laser-electron collision it is crucial for the electron to reach the highest-intensity region. As the electron enters the laser field, it starts losing energy and thus can be expelled by the ponderomotive force before reaching the laser field amplitude. This effect, which acts against efficient photon emission, can be overcome by employing an appropriately tailored temporal profile of the laser pulse. A laser pulse with a steep front edge ensures that accelerated electrons will interact directly with the most intense part of the laser pulse, which consequently enhances photon emission, see Fig. 1. To our knowledge, a technique allowing direct shaping of the temporal profile of a femtosecond intense laser pulse while preserving its frequency and intensity has not yet been developed. Therefore, in the case of the current multi-petawatt laser systems, the laser pulse with a steep rising edge can only be realized by the preceding interaction with a dense and thin plasma foil 28. Using numerical simulations we therefore present how the laser pulse that acquires a steep rising edge affects photon emission in the double-layer interaction setup.
Results
To analyze photon emission in this interaction setup, we have performed 2D Particle-In-Cell (PIC) simulations with the code EPOCH 29,30. At first, we considered the interaction of the laser pulse with a 24 μm-thick under-dense target containing electrons and protons at a density of 0.1 $n_c$, where $n_c = \varepsilon_0 m_e \omega_0^2/e^2$ is the critical electron density, $\varepsilon_0$ is the vacuum permittivity and $\omega_0$ is the laser angular frequency. At the rear side of the under-dense target, a 1 μm-thick Al 11+ foil of electron density 385 $n_c$ is attached. The density is lower than the real density of solid aluminum due to computational constraints; nevertheless, this does not have any significant influence on our results. This part of the double-layer target serves as a reflecting mirror for the laser pulse. The incoming laser pulse has a wavelength of 805 nm and a Full-Width-At-Half-Maximum duration of τ = 30 fs. These laser parameters, including the peak intensity, are well within the capabilities of today's laser systems such as J-KAREN-P 17.
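For reference, a short numerical check (a sketch with textbook constants; the 805 nm wavelength is the one quoted above) shows the critical density implied by $n_c = \varepsilon_0 m_e \omega_0^2/e^2$:

```python
import math

C, EPS0 = 2.99792458e8, 8.8541878128e-12  # SI units
M_E, Q_E = 9.1093837e-31, 1.602176634e-19

def critical_density_cm3(wavelength_m: float) -> float:
    """Critical electron density n_c = eps0 * m_e * omega0^2 / e^2, in cm^-3."""
    omega0 = 2.0 * math.pi * C / wavelength_m
    n_c_m3 = EPS0 * M_E * omega0**2 / Q_E**2
    return n_c_m3 * 1e-6  # m^-3 -> cm^-3

n_c = critical_density_cm3(805e-9)
print(f"n_c ~ {n_c:.2e} cm^-3")            # ~1.7e21 cm^-3
print(f"0.1 n_c ~ {0.1 * n_c:.2e} cm^-3")  # under-dense target density
```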
At first, we have compared the interactions in which the laser pulse has either a Gaussian (Setup I) or a perfectly tailored (Setup II) temporal profile, as shown in the snapshots from PIC simulations in Fig. 2a,b, respectively. The latter case was modelled by cutting the front edge of the laser pulse so that the electric field was equal to zero up to one-quarter of the laser period before the peak amplitude. Such a beam therefore delivers almost 50% less energy onto the target than in the previous case.
Setup I, i.e. when the laser pulse has the Gaussian temporal envelope, see Fig. 2a, represents the interaction of an intense laser pulse with under-dense plasma. Such an intense laser pulse can propagate through the plasma with minimal losses of its energy. As can be seen from Fig. 2a, for such a configuration the bubble for the LWFA scheme of electron acceleration is not efficiently developed. However, some of the target electrons are accelerated by the DLA scheme as they are trapped inside the field structure of the laser pulse.
By contrast, employing a tailored laser beam leads to a considerable enhancement of electron acceleration. This is represented by Setup II, shown in Fig. 2b. In such a case, the laser beam has both a shorter duration and a steeper rise of the front edge. As the laser pulse enters the plasma, the electrons are immediately expelled sideways by the strong ponderomotive force. Since the protons are not expelled so rapidly, they form a positively charged bubble behind the laser pulse. The created electrostatic field pulls the electrons back towards the laser axis. These electrons therefore exhibit betatron oscillations, as can be seen in Fig. 2b near x = 18 μm and y = −1 μm. As a result, compared to the previous case, the bubble behind the laser pulse can fully develop in Setup II as the laser pulse propagates through the under-dense plasma.
The temporal profile of the laser pulse affects the motion of electrons in the plasma during the interaction and thus has an impact on their acceleration. We have seen in Fig. 2 that using the tailored laser beam profile enables more efficient acceleration of electrons via the LWFA mechanism. This is confirmed in Fig. 3a, showing electron energy spectra at the time when the laser pulse reaches the end of the under-dense target. Setups I and II are represented by lines I and II, respectively. From their comparison it is evident that many more electrons with higher energies are produced when the laser beam has a tailored temporal profile.
Photon emission can only be enhanced when these high-energy electrons collide with the laser field reflected from the aluminium foil attached at the end of the under-dense target, see Fig. 1. Therefore, the position of the accelerated electrons with respect to the laser pulse, as well as their energy, are the key factors that affect photon emission. In Fig. 3b we present the energy spectra of the photons generated during the interaction. The cut-off energy of generated photons in Setup II is below 50 MeV even though electrons can be accelerated up to 130 MeV in this case, see Fig. 3a. However, the electrons with the highest energy are trapped in the front part of the tailored laser pulse, and thus these DLA electrons cannot collide with a sufficiently long part of the reflected laser pulse. High-energy photons are more likely generated by DLA electrons locked in the rear part of the laser pulse as well as by LWFA electrons dragged behind the laser pulse. From the experimental point of view, the laser beam with a steep front edge can be realized by the interaction of the laser pulse with an ultra-thin solid-density foil, a so-called plasma shutter, e.g. a Diamond-Like-Carbon (DLC) foil [31][32][33][34][35][36]. Since the foil is over-dense for the incoming laser pulse, the front part of the laser pulse is reflected. As the peak of the laser pulse impinges upon the foil surface, the relativistic mass of the electrons suddenly increases, causing the foil to become relativistically transparent for the rest of the laser pulse. Therefore, the laser pulse gains a steep front edge after passing through the foil.
In the following text, we present the results of electron acceleration and photon emission in PIC simulations in which the preceding interaction of the laser pulse with the foil is taken into account. At first, we have performed a simulation for Setup III, in which a 10 nm-thin DLC foil is attached at the front side of the target 35,36. The DLC electrons are depicted by orange color in Fig. 4a. The fully ionized DLC foil has an electron density of 384 $n_c$. As the laser pulse, initially having the Gaussian temporal profile, passes through the DLC foil, it acquires a steep front edge, as shown in Fig. 5. By cutting the front part of the laser pulse, it loses about 15% of its initial energy.
After passing through the foil, the tailored laser pulse interacts with the double-layer target. The dynamics of the DLC electrons negatively affects the acceleration of electrons originating in the under-dense plasma. Electrons from the under-dense plasma (blue) are immediately expelled by the laser pulse, while the DLC ones (orange) are attracted by the protons to compensate for the charge separation field created behind the laser pulse. For this reason, the electrons which are expelled sideways by the ponderomotive force cannot form the bubble and thus are not trapped at the back of this structure, see Fig. 4a. The acceleration of electrons in the under-dense plasma is therefore significantly reduced. This agrees with the electron spectrum represented by line III in Fig. 3a, which shows that the electrons belonging to the under-dense target (dotted line) have a much lower cut-off energy than the DLC ones. In this configuration, the cut-off energy for photons is 80 MeV. As shown in Fig. 3a, there are many more DLC electrons with energy higher than 50 MeV. Therefore, mainly the DLC electrons located in the rear part of the laser pulse are responsible for the generation of high-energy photons.
As the main disadvantage of the previous interaction setup is that the laser wake-field structure cannot fully develop in the under-dense target, we propose another configuration, Setup IV, in which the DLC layer is initially detached from the double-layer target by an 8 μm vacuum gap. This is illustrated in Fig. 4b. Due to the sufficiently large vacuum gap between the DLC layer (orange) and the under-dense target (blue), the DLC electrons do not have enough energy to overcome the potential induced at the surface of the foil and to enter the under-dense target, and thus are not attracted by the protons. As a result, the DLC electrons do not prevent the development of the bubble in the plasma. Therefore, detaching the DLC layer from the under-dense target leads to a more efficient LWFA of electrons originating in the under-dense plasma. These electrons are trapped behind the laser pulse and thus have a favourable position for emitting photons when they interact with the reflected laser pulse. As a result, more photons are emitted when the DLC layer is detached from the double-layer target, see Fig. 3b, where lines III and IV represent the corresponding setups. However, the cut-off in the photon energy spectrum for Setup IV is still about 80 MeV despite the improvement in the electron energy spectrum cut-off. This is due to the fact that the most energetic electrons are the DLC ones locked in the front part of the laser pulse, which do not have a chance to significantly contribute to photon emission. Nevertheless, employing the detached DLC layer allows creating the highest number of high-energy photons in comparison with all the Setups presented above, see Fig. 3b.
Even though employing the detached DLC layer causes faster diffraction of the laser pulse, compare the laser field structure in Figs. 2a and 4b, it does not considerably affect photon emission. The efficiency of photon emission depends on the electron energy and the experienced laser intensity. Since Setup IV employing the DLC foil can provide electrons that are accelerated to higher energies compared to Setup I and these can experience the higher laser intensity due to the steep rising edge of the laser pulse, the photon emission is more pronounced in this case.
Moreover, the angular characteristics of the emitted photon beam are also improved in Setup IV, see Fig. 6. As described above, when the DLC layer is employed and detached from the target (Setup IV), the LWFA mechanism can develop and more electrons are accelerated via this mechanism. Since the bunch of LWFA electrons is collimated, such a configuration results in a narrower angular distribution of emitted photons compared to Setup III, see Fig. 6(c,d) and Table 1; here $p_{x,i}$ and $p_{y,i}$ denote the components of the momentum of the i-th photon, from which the emission angle is computed. Employing the laser pulse with a steep front edge (Setup II) results in the generation of 5× more photons than for a laser pulse with the Gaussian temporal envelope (Setup I). The mean photon energy in such a case is about 40% higher and the conversion efficiency is increased by a factor of six.
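A minimal sketch of how such angular characteristics can be extracted from PIC output (the momentum arrays below are illustrative, and defining the divergence as the standard deviation of the per-photon angle is our assumption, not necessarily the definition used in the paper):

```python
import numpy as np

def photon_angles_deg(px: np.ndarray, py: np.ndarray) -> np.ndarray:
    """Per-photon emission angle relative to the laser (x) axis, in degrees."""
    return np.degrees(np.arctan2(py, px))

# Hypothetical momenta (arbitrary units) for a forward-directed photon beam
rng = np.random.default_rng(0)
px = np.abs(rng.normal(10.0, 1.0, 10_000))
py = rng.normal(0.0, 1.0, 10_000)

theta = photon_angles_deg(px, py)
print(f"mean angle {theta.mean():+.2f} deg, divergence (std) {theta.std():.2f} deg")
```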
In Setup III, the number and mean energy of created photons are much lower than in Setup II, as the present DLC foil is not dense enough to create an ideally tailored laser beam. However, the number and mean energy of created photons are still higher compared to Setup I.
For Setup IV the theory predicts photons with a typical energy around 2 MeV 37. This agrees with our results from PIC simulations. The conversion of the laser energy to photons is increased by a factor of 3.4 compared to Setup I. Optimizing the distance between the DLC layer and the target with respect to the target density can further enhance the conversion efficiency. The vacuum gap allows the DLC electrons to expand, thus reducing the number that enters the under-dense target. The optimal length of the vacuum gap is given by the parameters of the laser pulse (intensity, focal spot radius, temporal duration) and of the foil (density, thickness). If the gap is too small, DLC electrons can enter the under-dense target and prevent the formation of the wake-field structure. On the other hand, when the vacuum gap is extremely wide, the laser pulse may considerably diffract and thus the acceleration of target electrons becomes less efficient. For example, by performing a set of PIC simulations, we have found that the optimal length of the vacuum gap is about 4 μm for the above-mentioned parameters. The conversion efficiency of laser energy to photons in such a case is four times higher compared to Setup I.
Discussion
Up to this point, we have assumed a fixed target density while the distance from the DLC layer was varied. Increasing the plasma density leads to the creation of a higher number of photons by electrons from the under-dense target and thus to more efficient laser energy conversion, provided that the relativistic critical density $\gamma n_c$ is not reached. However, if the plasma density is too high, the laser pulse can be rapidly depleted. On the other hand, if an intense laser pulse propagates in near-critical-density plasma for a sufficiently long distance, it may undergo relativistic self-focusing, which increases the laser intensity and reduces diffraction 38,39. That, in turn, can lead to the emission of photons with higher energy. The optimal propagation distance with respect to a given plasma density is therefore limited by these two effects 23,40,41.
To assess the role of a higher plasma density, we have performed additional simulations in which the target density was increased by a factor of 10, from 0.1 $n_c$ to 1 $n_c$. Targets of such a density have already been demonstrated, e.g. refs 42,43. Due to self-focusing, the laser pulse gains a smaller transverse profile and a higher peak intensity. In our case, the peak intensity of the laser field in the 1 $n_c$ target is 25% higher than in the 0.1 $n_c$ one. Moreover, DLA is more efficient at such a plasma density as it enables the acceleration of a higher number of electrons. As a result, the laser energy conversion to photons is higher by a factor of 15 compared to Setup I, i.e. when the DLC layer is not considered. Employing the DLC layer is still beneficial for such a dense target as it enhances photon production by a factor of 1.3. This confirms the applicability of our setup even for near-critical-density plasma targets.
The efficiency of laser energy conversion to photons in Setup IV is approximately the same as in the case when a 20 pC LWFA electron bunch with energy 0.5 GeV collides with a laser pulse of the same properties as described above. In such a case we obtain $\eta_\gamma \approx 0.10\%$ according to ref. 44. Although it is possible to achieve higher electron energies using LWFA compared to our setup, it might be complicated to reflect and focus the driving laser pulse to initiate photon emission 45,46. The presented setup is therefore more robust as it encompasses both the acceleration and photon-emission stages, while the latter does not rely on a focusing mirror.
It has been shown that photon emission in the interaction of a laser pulse with the under-dense target and reflecting plasma mirror can be enhanced by employing a laser pulse with a steep front edge. Such a beam can be created by the preceding interaction of the laser pulse with a thin solid-density foil, the plasma shutter. The shaped laser pulse then propagates through the vacuum into the under-dense target, in which electrons are accelerated via the LWFA and DLA mechanisms. The vacuum gap between the foil and the target ensures that electrons dragged from this foil will not counteract the acceleration of electrons in the under-dense target. The accelerated electrons then interact with the most intense part of the laser pulse reflected from the plasma mirror. Therefore, employing the solid-density foil results in a more efficient conversion of the laser energy to photons. For the parameters described above we obtained three times higher conversion efficiency and a narrower angular distribution of emitted photons compared to the interaction without the thin solid-density foil. This can be further improved by adjusting the density and thickness of the foil to provide the optimal temporal profile of the laser pulse. As the laser pulse loses its energy in the under-dense target very slowly, the length of the electron acceleration stage could be optimized with respect to the laser intensity to obtain the highest number of accelerated electrons.

Table 1. The number of photons $N_\gamma$, their mean energy $\langle E_\gamma \rangle$, the conversion efficiency $\eta_\gamma$ of laser energy to photons and the photon beam divergence angle θ relative to the laser propagation direction for simulation Setups I-IV. Energy is normalized to the energy of the Gaussian laser pulse.
Methods
Numerical modelling. To analyze the presented laser-plasma interaction we used the PIC code EPOCH, in which photon emission is treated as a step-like quantum process 30. For details about the implementation of photon emission in this code the reader is referred to ref. 29.
In the 2D simulations of Setups I-IV, the box spanned from 0 to 50 μm in the x-direction and from −15 μm to 15 μm in the y-direction. Such a simulation domain was resolved with 22,320 × 13,392 cells. This is sufficient, as for a density of 385 $n_c$ the plasma skin depth is about 6.5 nm. The spatial resolution remained unchanged for all other performed 2D simulations (e.g. the parameter scan for the optimal length of the vacuum gap), while the size of the simulation box was enlarged. The laser pulse enters the box at the boundary x = 0 μm. The DLC layer of thickness 10 nm was located at x = 11.99 μm (Setups III and IV), while the under-dense target spanned from 12 μm to 36 μm (Setups I-III) or from 20 μm to 44 μm (Setup IV). At the rear side of the under-dense target a 1 μm-thick Al 11+ foil was attached. The laser pulse, having the Gaussian temporal envelope, propagates in the positive x-direction while being polarized along the y-axis. It is focused to a focal spot of radius $w_0$ = 1.5 μm located at x = 12 μm in the simulation box.
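The resolution claim can be checked in a few lines (a sketch using textbook constants; the non-relativistic collisionless skin depth $c/\omega_p$ is assumed):

```python
import math

C, EPS0 = 2.99792458e8, 8.8541878128e-12
M_E, Q_E = 9.1093837e-31, 1.602176634e-19

def skin_depth_nm(n_e_m3: float) -> float:
    """Collisionless plasma skin depth c/omega_p in nanometres."""
    omega_p = math.sqrt(n_e_m3 * Q_E**2 / (EPS0 * M_E))
    return C / omega_p * 1e9

n_c = EPS0 * M_E * (2 * math.pi * C / 805e-9) ** 2 / Q_E**2  # critical density for 805 nm
print(f"skin depth at 385 n_c: {skin_depth_nm(385 * n_c):.1f} nm")          # ~6.5 nm
print(f"cell size: {50e-6 / 22_320 * 1e9:.2f} nm (x), {30e-6 / 13_392 * 1e9:.2f} nm (y)")
```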
Data availability
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request. | 2019-12-19T08:48:09.000Z | 2019-12-19T00:00:00.000 | {
"year": 2020,
"sha1": "0a9bdf748f8442136f3c27e7e42b8b6ed62fe9d8",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-65778-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7bd38d04bccaa1f6fe8c974bca364242bc5934b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
3314849 | pes2o/s2orc | v3-fos-license | Use of Microarray Datasets to generate Caco-2-dedicated Networks and to identify Reporter Genes of Specific Pathway Activity
Intestinal epithelial cells, like Caco-2, are commonly used to study the interaction between food, other luminal factors and the host, often supported by microarray analysis to study the changes in gene expression as a result of the exposure. However, no compiled dataset for Caco-2 has ever been initiated and Caco-2-dedicated gene expression networks are barely available. Here, 341 Caco-2-specific microarray samples were collected from public databases and from in-house experiments pertaining to Caco-2 cells exposed to pathogens, probiotics and several food compounds. Using these datasets, a gene functional association network specific for Caco-2 was generated, containing 8937 nodes and 129711 edges. Two in silico methods, a modified version of biclustering and the new Differential Expression Correlation Analysis, were developed to identify Caco-2-specific gene targets within a pathway of interest. These methods were subsequently applied to the AhR and Nrf2 signalling pathways, and altered expression of the predicted target genes was validated by qPCR in Caco-2 cells exposed to coffee extracts, known to activate both AhR and Nrf2 pathways. The datasets and in silico method(s) to identify and predict responsive target genes can be used to more efficiently design experiments to study Caco-2/intestinal epithelial-relevant biological processes.
Biological networks are representations of the interactions between genes, proteins, and other biomolecules. Different kinds of biological networks (e.g. protein-protein interaction or signalling networks) represent different features of a cell 1. Such networks can be usefully exploited to gain key insights into biological systems 2,3. Exploration of tissue and cell-type-specific networks has demonstrated the effects of tissue-specific regulation on the remodelling of biological networks 4. Differential network analysis has also been used to compare topological characteristics of networks corresponding to normal or tumorous cells and to isolate characteristics of distinct cancer subtypes, which in turn has led to the prediction of cancer subtype-specific drug targets 5. One important biological system is the epithelium lining the small and large intestine. The role of diet and the host response to diet and its compounds are challenging to study in vivo due to the complexity of biological systems and inter-individual variability. Thus, a reductionist approach using the human Caco-2 intestinal epithelial cell line is a widely accepted laboratory model to understand the response of intestinal enterocytes exposed to nutrition and microbes [6][7][8]. Although Caco-2 cells were derived from a colon carcinoma, when cultured as confluent monolayers for 2-3 weeks they functionally resemble the enterocytes lining the small intestine 9. Caco-2 cells have been used in numerous microarray experiments to study the effects of food products and compounds 6,7,[10][11][12][13], probiotics 8,14, pathogens [15][16][17] and other stimuli [18][19][20]. Comparative proteomic analysis of Caco-2 cells and scrapings of the human intestinal epithelium supports the usability of this in vitro model 21, although Caco-2 cells appear to over-express as well as under-express certain proteins, which needs to be considered in the interpretation of in vitro data and the translation of results to the in vivo situation 21.
A compendium of Caco-2 gene expression profiles under a broad range of conditions can be instrumental in building dedicated network models describing gene interactions in human intestinal enterocytes and in providing new insights into their functioning. Although gene expression profiles tuned for selected tissues are available [22][23][24], to the best of our knowledge no broad compendium of Caco-2 microarray experiments has been initiated, only limited data on metabolic networks are available 25,26, and no gene/protein association networks are available for Caco-2/intestinal enterocytes. Another commonly faced problem is the identification of Genes Of Interest (GOI) in the pathways investigated for a specific cell type. The identification of candidate sets of GOI could thus help study the impact of treatments on specific pathways of interest in a given cell type.
Intestinal epithelial cells, apart from major functions like digestion and absorption of nutrients, minerals and water 27,28, play an important role in the exclusion or detoxification of xenobiotics and in regulating oxidative stress. The AhR and Nrf2 pathways are involved in the metabolism of xenobiotics and protection against oxidative stress 29,30. AhR is an important regulator of Phase I and Phase II enzymes and of other enzymes that metabolize compounds such as dioxins, polycyclic aromatic hydrocarbons, plant polyphenols and tryptophan photoproducts 31. Nrf2 has been designated the "master regulator" of the adaptive response to oxidative stress 29 and regulates the expression of antioxidant proteins that protect against oxidative damage triggered by injury and inflammation.
In this study, we aim i) to exploit the knowledge accumulated in the publicly available datasets on Caco-2 cells exposed to different treatments in order to generate a dedicated network model accounting for gene associations specific to intestinal enterocytes and ii) to develop workflows to reliably select genes for studying intestinal enterocyte-specific pathways. The proposed strategies were experimentally validated by focussing on GOI in the Nrf2 and AhR pathways using Caco-2 cells exposed to coffee to induce the gene responses within these pathways. The obtained networks are provided as supplementary files (Caco2_Network) and R scripts for the identification of GOI are made available at http://semantics.systemsbiology.nl/index.php/download-page/ with a working example.
Results
Cell/Tissue-specific gene expression profiles aid the identification of reporter genes for specific pathway activity. In this study, we develop strategies to generate dedicated gene network models for Caco-2 and to identify specific gene responses to nutrition-related exposures. This was illustrated using the AhR and Nrf2 pathways. We have independently validated our results through a new experimental setup in which Caco-2 cells were exposed to coffee extracts, which have previously been shown to induce the AhR and Nrf2 pathways 32. Coffee extracts have a great chemical diversity and their components vary according to cultivar, treatment, processing, storage and other factors [33][34][35][36]. We have tested the induction of these pathways using four coffee types.
To identify reporter genes for the AhR and Nrf2 pathways, the scientific literature was searched and we investigated whether these genes were also responsive to oxidative stress in our Caco-2 model after exposure to TCDD (2,3,7,8-tetrachlorodibenzo-p-dioxin) or coffee. Sixteen genes that are frequently used as indicators of AhR and Nrf2 signalling were selected from the literature (Table 1) for validation. Caco-2 cells were exposed to coffee extracts (Turkish coffee, Brasil Espirito, Java Preanger, Nescafe©) and TCDD, and the relative expression of the selected genes was measured by qPCR. Out of the 16 genes tested, 3 genes were not detectable (CT values ≥ 35) and 5 genes showed no differential expression (DE), defined as a fold-change threshold of 1.5-fold up or down in at least two of the coffee samples, indicating that 50% of the genes selected from the literature are not useful for studying the activities of the AhR and Nrf2 pathways in enterocytes.
Compendium of Caco-2 experimental data supports cell-specific gene selection. A data compendium was generated using Affymetrix expression profiles of 341 arrays from 85 Caco-2 exposure experiments (Table 2). The UPC filtering procedure was used to identify genes that are actively expressed in Caco-2, and 12849 genes were identified as expressed. These genes were then used to generate a cell-specific network dedicated to Caco-2 intestinal epithelial cells.
Supplementary Table S1 presents the comparison between the network topological properties of the full interaction network retrieved from STRING (converted to Entrez Ids) and the Caco-2-specific network. The same cut-off (≥700) on the reliability of the interactions (STRING combined score) was selected for both networks. The Caco-2 network is composed of 8937 nodes and 129711 edges and can be explored using common network visualization tools such as Cytoscape 37. Note the differences in the number of nodes and edges between the two networks.
Out of the 16 genes that we previously selected based on the literature, ABCC1, ABCG2 and TIPARP are absent from the network of functional associations. This indicates that in the overall network they are connected only to nodes that show no (active) expression in our compendium. However, even after this reduction, a large number of genes remains to probe for each pathway (77 nodes for the Nrf2 pathway and 42 nodes for the AhR pathway), and therefore we wanted to optimize our approach to identify GOI.
Biclustering analysis improves gene selection. The biclustering method works by identifying genes that are co-expressed with seed genes (i.e. genes well known to be responsive in Caco-2 cells to a specific perturbation). In order to identify Caco-2-responsive genes within the Nrf2 pathway, we used a full list of genes that are involved in this pathway (derived from the generic IPA consensus pathway). SQSTM1, HMOX1, NRF2, ABCC1, DNAJB1 and ENC1 were selected as seed genes. The seed genes were used to identify co-expressed genes within the compendium of microarrays. The initial average correlation threshold for array selection was set at 0.75 (default value). In this way, only arrays in which the seed genes showed a high degree of correlation were included for GOI identification.
The biclustering analysis reduced the 341 arrays (the initial number of arrays) to 229 arrays and the following genes were obtained as GOI: CDC34, DNAJC4, GTR, ATF4, GSTA2 and GSTM4. Together with the seed genes this resulted in a total of 12 potential responsive genes for the Nrf2 pathway (Table 3). These genes had an average correlation of 0.79 in the arrays included in this analysis.
Similarly, CYP1A1, TIPARP, AHR, ARNT and PRKCA were chosen as seed genes for the AhR pathway. Owing to the small number of seed genes, the mean correlation threshold for array selection was set at a more stringent value of 0.8. The biclustering analysis reduced the initial 341 arrays to 274 arrays and predicted GSTA2, GSTM4, MAPK8, MED1, NCOR2 and NFIA as GOI for the AhR pathway. This procedure reduced the number of potential responsive genes to 11 for the AhR pathway (Table 3), including the seed genes.
We selected 14 genes for experimental verification using Caco-2 cells exposed to coffee extracts (Figs 1 and 2). Of these, 6 genes were specific to the AhR pathway, 6 specific to the Nrf2 pathway and 2 common to both pathways. Four of these genes had been predicted by the algorithm ("Biclustering", see Table 3). All 4 genes were found to be expressed in Caco-2 cells, of which 3 showed substantial changes in expression (Fold Change > 1.5) between control and treatment (Figs 1 and 2).
Based on these results, we concluded that this strategy constitutes a useful addition to the literature data for gene selection. Genes extracted from the literature can be combined with the ones selected using the proposed approach. In those cases where the literature provides an ample list of genes for experimental validation, our approach serves to further refine the selection of genes that are differentially expressed by Caco-2 cells in a chosen pathway.

Table 3. Expression changes upon coffee exposure of genes selected using the biclustering algorithm. '-' indicates genes that were not the target of experimental validation. Genes were considered responsive if they were differentially expressed in at least two coffee samples.

Differential Expression Correlation Analysis (DECA) further enhances gene selection. An assessment of the DECA algorithm was performed using 10 pathways from the KEGG database 38 that are of interest to intestinal epithelia. For each pathway, 10 runs were performed using three randomly selected genes from the pathway as seed genes. Genes known to be in the target pathways were found to be significantly better ranked than genes not in the pathway, as indicated by the enrichment p-values. On average, ~9% of the genes related to each pathway could be predicted as target genes when analysing the top 10% of ranked genes with the DECA algorithm. The performance of the algorithm varied from 6% to 15% according to the pathway. This result indicates that, without any further literature considerations, DECA is able to retrieve genes associated with the pathway. In this assessment seed genes were chosen at random; however, careful selection of seed genes is required to obtain more reliable prediction of target genes. As in the previous case, this approach works best when combined with pre-existing knowledge. The results of the in silico assessment are provided in Supplementary Table S2. The DECA method was applied to find a global set of genes (amongst all genes expressed in Caco-2) associated with the Nrf2 and AhR pathways which are responsive to altered pathway activity. SQSTM1, NQO1 and HMOX1, involved in the Nrf2 pathway, were used as seed genes for the DECA algorithm. 2834 genes were found to have correlation values or significance fractions above the 0.6 threshold against each seed gene. The genes were ranked as described in the Materials and Methods section and the top-ranked genes were considered for further analysis. From this list, GCLM 39, TXNRD1 40, SOX9 and KCTD5 41 were selected for further experimental validation via qPCR, as there is some evidence of their involvement in this pathway. In addition, the BAG3 42 gene, which did not belong to the top-ranking genes, was randomly chosen as a negative control (Table 4).
A similar approach was used to predict the GOI in the AhR pathway. Only two genes, CYP1A1 and TIPARP, were chosen as the seed genes for the DECA algorithm, which resulted in a list of 398 ranked genes. From this list, UGCG 43, EREG 44, RND3 and CHMP1B were chosen for experimental verification, as evidence from the scientific literature associated some of them with the AhR pathway. ATP9A was randomly selected as a negative control (Table 4).
The above-mentioned 10 genes, along with a seed gene for each pathway, were experimentally verified using qPCR analysis in Caco-2 cells exposed to coffee samples (Fig. 3). The results indicate that 75% of the selected GOI showed a substantial relative difference in expression (absolute fold change > 1.5) in all tested samples, 2 genes (SOX9 and KCTD5) were differentially expressed upon exposure to two of the coffee extracts (Turkish and Nescafe, absolute fold change > 1.5), while the control genes showed no significant change in expression in most coffee extracts, as expected.
These results indicate that DECA is a substantially improved strategy for identifying GOI compared to the other methods discussed in this paper and, moreover, does not require prior knowledge of the genes within the pathway except for the seed genes.
Discussion
Initially we focussed on developing an intestinal enterocyte-specific association network using expression data from Caco-2 cells exposed to different nutrients and stimuli. The network was constructed by selecting 12849 genes (actively) expressed in Caco-2 based on UPC filtering. This is consistent with previous observations of 11559 26 and 14113 genes 24 based on RNAseq data (Caco-2 cells grown under control conditions). Differences could be attributed to different selection procedures or experimental approaches. Additionally, the gene list and network provided in this paper are based on a compendium of transcriptomics data from exposures of Caco-2 cells to different nutrients and stimuli.
When applying our Caco-2-specific selection to the STRING network, the number of edges and nodes was reduced considerably (~50%). The number of connected components is reduced by over 60% and the local network structure is preserved, with similar values of the clustering coefficient, which suggests a more compact network, as expected for genes that are functionally closely related. The degree assortativity decreases, indicating less redundancy in gene associations when the network is restricted to Caco-2. Incidentally, STRING could support dedicated data analysis by enabling seamless tissue-specific gene selection.
Biclustering simultaneously clusters both genes and samples to identify genes with similar expression profiles in a subset of the samples. Existing biclustering algorithms do not allow targeting a particular pathway 45,46; instead, they generally try to find biclusters that cover either a broad range of genes or conditions. Similarly, WGCNA-based clustering does not focus on a particular pathway but looks for modules of co-expressed genes that may belong to more than one pathway. Here we present a biclustering approach, a modification of that in van Dam et al., that allows the user to select or pre-select the seed genes and thus a pathway 47. Nevertheless, biclustering performed poorly, as the identified GOI did not show significant DE, indicating little responsiveness of Caco-2 cells to coffee exposures. Therefore, the DECA algorithm was used, resulting in a list of responsive gene candidates and a set of criteria to further rank them. From the ranked list, genes were selected for experimental verification in Caco-2 cells exposed to coffee, and we found associations with the AhR and Nrf2 pathways. The verified genes were not in these pathways as defined in IPA. It might be that some of these genes have an indirect association with these pathways. The DECA ranking can be combined with existing knowledge, for instance by adding weight to genes on the basis of literature evidence. Of the 5 genes predicted for the Nrf2 pathway, GCLM and TXNRD1 are previously known downstream gene targets of NRF2 39,40. KCTD5 is likely to have an indirect interaction mediated by CUL3 41, and BAG3 (negative control gene) has been associated with the Nrf2 pathway 42, although we find that only Turkish coffee induces this gene. Similarly, for the genes predicted for the AhR pathway, UGCG is indirectly linked to the AhR pathway via ARNT 43 and EREG is reported as a target gene for AHR 44.
Seed genes play a critical role in predicting responsive genes in a certain pathway and should be carefully considered and accurately selected. As an example, the Nrf2 gene was initially included among the seed genes for the biclustering algorithm. However, experimental verification showed transcript levels of this gene not to be responsive to coffee exposure. It was therefore not used as a seed gene for the DECA algorithm and was replaced with NQO1. One optimal way to select seed genes is to take two or three highly differentially expressed genes (fold change > 3) associated with the pathway of interest from the literature (e.g. CYP1A1 and TIPARP for the AhR pathway), verify their altered expression in response to activation or repression of the pathway, and use these as seed genes.

Table 4. Expression changes upon coffee exposure of genes identified using the DECA algorithm in the AhR and Nrf2 pathways. '*' indicates genes found to be significantly differentially expressed (fold change > ±1.5) in Turkish coffee only. '^' indicates genes found to be significantly differentially expressed (fold change > ±1.5) in Nescafe only. N/A indicates genes that were not the target of experimental validation. Genes were considered responsive if they were expressed in at least two coffee samples.

The biclustering algorithm requires a further selection of genes to be considered, the gene pool set. This selection was performed by aggregating non-cell-type-specific pathway-level information. On the other hand, DECA has no such constraint and the whole set of expressed genes is considered. Therefore DECA is our method of choice to identify GOI in pathways for which little information is available. One could also argue that, when combining such a large set of array data collected over different batches, batch correction techniques should be applied. However, here each experiment has its own control in the same batch. As a result, batch effects and experimental effects might be confounded, and commonly applied correction methods such as ComBat and SVA are not effective 48,49. Instead, we have used a higher-level integration approach, in which data from each study are compared with the corresponding control. This way we bypass the need for additional batch corrections, as we study only correlations between changes in gene expression.
In addition to predicting GOI, the compendium presented in this paper can be used for other purposes. For instance, it allows a systematic categorization of the treatments based on expression profiles, similar to the approach taken in the Connectivity Map 50, and could thus be used to select food components that have effects on certain genes and pathways. Such datasets can also be used to predict key regulators and/or gene hubs 2. Additionally, the database can be expanded further by adding data from future experiments, even from technologies like RNAseq. The provided Caco-2-specific network also serves as a platform to understand future experiments. Gene expression data from a new experiment could be integrated with this network by using algorithms for network mining and active module identification 3. The Caco-2 cell-type-specific network can also be used to develop networks associated with different conditions, such as Caco-2 exposure to pathogens or pathogenic toxins; these networks can then be used to identify potential drug targets by applying statistical methods and identifying hub genes, using strategies similar to those successfully used in cancer research 51. This paper can therefore be seen as a first important step to improve current analysis tools for Caco-2 and thereby elicit a better understanding of the interaction between our intestinal epithelium and luminal (nutritional) compounds.
Conclusion
Caco-2 cell lines are increasingly used as model systems to study the interaction of food and other luminal factors with the intestinal system of the host, which is difficult to study in vivo. As the availability of experimental datasets grows further, we believe that this work is a first step in the generation of a Caco-2-specific database and of tissue-specific research tools and strategies to extract more knowledge from these data. One of the research tools for which we make an important step is the dedicated protein-protein association network using gene expression data for Caco-2. The network provided in this paper could be the basis for implementation in other software tools like IPA and STRING and can be further updated when more data become available in the future. The modified biclustering and DECA methods additionally provide the necessary tools to extract genes of a desired pathway and can be applied, using the code provided, to a similar dataset of any cell type of interest.
In the future, a comprehensive Caco-2 transcriptome database should include microarray data from other platforms such as Agilent, Illumina, etc., but more importantly should include RNAseq data, which will provide additional information on splice isoforms. We believe that such a cohesive database would provide finer results regarding the genes of interest in Caco-2 and can support the analysis and understanding of future Caco-2 cell-based experiments. The dataset can additionally be used for building classifiers based on genetic profiling and for finding therapeutic food solutions.
Materials and Methods
Data Processing. Caco-2 microarray gene expression data were obtained from the public repository ArrayExpress (www.ebi.ac.uk/arrayexpress) and from in-house experiments performed using the Affymetrix© 1.1 ST array platform. In-house data were obtained by exposure of Caco-2 cells grown on transwells to different preparations of food-related compounds in experiments conducted over several years. Publicly available data were restricted to experiments on the Affymetrix platform. Data and associated metadata were manually curated using the following inclusion criteria: i) experiments that did not induce genetic mutations, ii) experiments performed on Caco-2 cell monolayers that were grown for at least seven days, and iii) arrays probing for at least 17000 genes (annotated in Chip Definition Files), thereby leaving out old arrays. Based on these criteria, 341 arrays were selected, corresponding to 22 experimental batches encompassing 85 different treatments (Table 2). GSE accession numbers of publicly available datasets and other relevant descriptions are given in Supplementary Table 3.
The consolidated data of 341 arrays were normalized using the SCAN algorithm before network construction and biclustering analysis, as this method performs well for cross-comparison 52. RMA normalization was used for differential expression (DE) analysis, as it is considered the standard for this calculation 53. All normalization procedures were performed using the R Bioconductor packages SCAN.UPC 54 and affy 55. Microarray probes were matched to gene identifiers using the CDF array annotation (version 18) provided by the University of Michigan Microarray Lab 56. After both normalization procedures, a combined set of 21996 genes was obtained. All statistical programming was performed using the statistical language R (version 3.2.3).
Identification of genes expressed in Caco-2 cells.
The Universal exPression Code (UPC) was used to obtain a standardized score describing the active/inactive state of each gene in each array of our data compendium 54. Genes with a UPC value greater than 0.5 in at least one array were considered to be expressed in Caco-2 cells and were therefore used in the analysis. This step was applied to the matrix of 21996 genes and 341 arrays, reducing it to a matrix of 12849 genes and 341 arrays. In this matrix, some genes had missing values, likely due to platform differences. Therefore, genes with missing values in more than half the total number of arrays (i.e. 170 arrays) were discarded. The remaining missing values were imputed using the KNN algorithm from the 'impute' R package 57,58 with default parameters. The final data matrix contained values for 10831 genes over 341 arrays.
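A compact sketch of this filtering-and-imputation pipeline (in Python rather than the authors' R; the matrix layout — genes as rows, arrays as columns — follows the text, while the variable names and toy data are ours):

```python
import numpy as np
from sklearn.impute import KNNImputer

def filter_and_impute(expr: np.ndarray, upc: np.ndarray) -> np.ndarray:
    """Keep genes called expressed by UPC, drop gene rows that are mostly
    missing, and KNN-impute the remaining gaps (genes x arrays)."""
    expressed = (upc > 0.5).any(axis=1)           # UPC > 0.5 in at least one array
    expr = expr[expressed]
    n_arrays = expr.shape[1]
    enough_data = np.isnan(expr).sum(axis=1) <= n_arrays // 2
    expr = expr[enough_data]
    return KNNImputer(n_neighbors=10).fit_transform(expr)

# Hypothetical toy matrices (the paper's real matrix is 21996 genes x 341 arrays)
rng = np.random.default_rng(1)
expr = rng.normal(size=(200, 30))
expr[rng.random(expr.shape) < 0.05] = np.nan      # sprinkle missing values
upc = rng.random((200, 30))
print(filter_and_impute(expr, upc).shape)
```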
Caco-2 cell specific network generation. The database STRING (version 10) 59 was used for the retrieval of high-confidence human protein associations, and a combined score cut-off value of 700 was used, as recommended by STRING. Nodes representing genes identified as not being expressed by Caco-2 cells were removed from the network. The network (in edgelist format) is available as a supplementary file (Caco2_Network). The edgelist contains pairs of interacting genes (first two columns), with genes denoted by their Entrez Ids. The third column refers to the weight of each edge; it is empty in the given file, as the edges have no weights. The networkx Python package was used for network topological analysis 60.
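The corresponding construction step can be sketched as follows (the file name and the set of expressed genes are placeholders; only standard networkx calls are used):

```python
import networkx as nx

def build_caco2_network(edgelist_path: str, expressed_genes: set[str]) -> nx.Graph:
    """Load a STRING-derived edgelist and keep only edges between expressed genes."""
    full = nx.read_edgelist(edgelist_path)      # whitespace-separated gene pairs
    g = full.subgraph(expressed_genes).copy()   # restrict to Caco-2-expressed nodes
    g.remove_nodes_from(list(nx.isolates(g)))   # drop nodes left without edges
    return g

# Hypothetical usage with the supplementary edgelist and an 'expressed' gene set:
# g = build_caco2_network("Caco2_Network.txt", expressed)
# print(g.number_of_nodes(), g.number_of_edges())
# print(nx.average_clustering(g), nx.degree_assortativity_coefficient(g))
```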
Biclustering Algorithm. The biclustering algorithm of cMonkey 45, adapted by van Dam et al. 47, was used to find biclusters (i.e. groups of co-expressed genes in a subset of conditions 61,62). In our implementation, a pre-defined set of genes, called seed genes, together with additional genes from a second list called the gene pool, were used to find biclusters. Seed genes were selected using the following two approaches: i) from literature on Caco-2 expression in response to different types of coffee (SQSTM1, HMOX1, NRF2 and ABCC1 for the Nrf2 pathway and CYP1A1, TIPARP and AHR for the AhR pathway); ii) from Weighted Gene Correlation Network Analysis 63 (WGCNA). The WGCNA method partitions genes expressed in Caco-2 cell lines into groups enriched for topological overlap based on their expression profiles. These groups were then assessed for enrichment in genes belonging to the selected pathways using Ingenuity Pathways Analysis (IPA) (http://www.ingenuity.com, release March 2014). Genes assigned to the selected pathways in the enriched modules (FDR < 0.05) were further included in the seed gene list (DNAJB1 and ENC1 for the Nrf2 pathway and ARNT and PRKCA for the AhR pathway). To build the gene pool, genes expected to be in the pathway of interest were retrieved from the pathway database IPA (AhR and Nrf2 consensus pathways). The gene pool list contained 87 genes for the Nrf2 pathway and 48 genes for the AhR pathway.
Biclustering was performed in R, implementing the iterative procedure depicted in Fig. 4. In the first step, the data compendium is explored to select arrays for which the seed genes show a high mean pairwise correlation with each other. This selection is performed by iteratively removing one array from the list and comparing the average pairwise correlation between seed genes computed over the full array list and over the array list without the selected array. If removal of the considered array leads to an increase of this correlation, the array is permanently removed from the array list. This process is iterated until either the average correlation between seed genes is greater than or equal to a threshold value, C_T = 0.75, or half of the initial arrays have been removed.
Once the reduced array set has been established, an additional iterative procedure to search for candidate genes is performed. In the initialisation step, a new list of genes is built containing the seed genes. Then a new gene is selected from the gene pool and the mean correlation between this new gene and the genes in the current list is calculated. If this correlation value is greater than the previous one, the new gene is added. This procedure is iterated until no new genes remain. The full procedure of array reduction and gene addition is continued until a bicluster with the desired properties is obtained; a minimal sketch of the two stages is given below.
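A minimal Python sketch of the two-stage procedure (the paper's implementation is in R; the function names are ours, the array-reduction loop is written as a greedy best-removal variant of the one-at-a-time scheme described above, and degenerate cases such as zero-variance genes are not handled):

```python
import numpy as np

def mean_pairwise_corr(expr: np.ndarray) -> float:
    """Mean pairwise Pearson correlation between rows (genes) of expr."""
    c = np.corrcoef(expr)
    iu = np.triu_indices_from(c, k=1)
    return float(c[iu].mean())

def reduce_arrays(expr: np.ndarray, c_t: float = 0.75) -> list[int]:
    """Greedily drop arrays (columns) whose removal raises the mean seed-gene
    correlation, until c_t is reached or half of the arrays are gone."""
    cols = list(range(expr.shape[1]))
    while mean_pairwise_corr(expr[:, cols]) < c_t and len(cols) > expr.shape[1] // 2:
        best = max(cols, key=lambda j: mean_pairwise_corr(expr[:, [c for c in cols if c != j]]))
        if mean_pairwise_corr(expr[:, [c for c in cols if c != best]]) <= mean_pairwise_corr(expr[:, cols]):
            break  # no single removal improves the correlation any further
        cols.remove(best)
    return cols

def grow_bicluster(expr: np.ndarray, seeds: list[int], pool: list[int], cols: list[int]) -> list[int]:
    """Add pool genes (rows) that increase the bicluster's mean correlation."""
    members = list(seeds)
    for g in pool:
        if mean_pairwise_corr(expr[np.ix_(members + [g], cols)]) > mean_pairwise_corr(expr[np.ix_(members, cols)]):
            members.append(g)
    return members
```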
Differential Expression Correlation Analysis (DECA). We implemented a new algorithm, Differential Expression Correlation Analysis (DECA), to find GOI using DE values from microarray datasets. The DECA algorithm works by calculating correlation values between seed genes and other DE genes identified using the UPC algorithm. DE values were calculated for 85 experimental setups (3 of which could not be used as they lacked sufficient replicates or controls), giving a total of 21996 genes. For each of these genes, the treatments were compared to their respective controls using the Bioconductor package limma 64. Following this, UPC filtering was applied and the DE matrix (a matrix containing the DE values, with genes along the rows and experimental comparisons along the columns) was reduced to 12849 genes. Genes missing expression values for more than 56 conditions (roughly two thirds of the conditions) were excluded, and the remaining missing data were imputed using KNN impute as mentioned above. This resulted in a matrix of DE values for 12462 genes and 85 conditions. All corresponding missing p-values were substituted with 1.
The next step in DECA is the selection of seed genes from the literature. Seed genes were chosen in such a way that they showed strong and significant (absolute fold change ≥ 2 and p-value < 0.01) DE in stimulations associated with the chosen pathway (SQSTM1, NQO1 and HMOX1 for the Nrf2 pathway and CYP1A1 and TIPARP for the AhR pathway).
The workflow of the procedure is described in Fig. 5 and was implemented in R; a sketch is also given below. Seed genes are considered one at a time, in random order. The DE matrix is reduced by the algorithm to contain only the comparisons in which the seed gene under consideration shows significant DE. Correlation values are then calculated between the seed gene and each gene in the gene pool using the reduced DE matrix. The fraction of the reduced comparisons in which each gene has significant DE (p-value < 0.01) is recorded and is termed the significance fraction. Finally, the correlations and fractions for each seed gene are combined in matrix format, and a selection threshold of 0.6 was set for both the absolute correlation value and the significance fraction. A list of genes that have either an absolute correlation value or a significance fraction above the threshold for any of the seed genes is selected. Subsequently, this new list of genes is ranked by their individual absolute correlation values and significance fractions for each seed gene, thereby providing 2n ranks (where n is the number of seed genes). A final rank is calculated as the geometric mean of the 2n ranks for each gene.
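The ranking step can be sketched as follows (in Python rather than the authors' R; `de`, `pvals` and the seed indices are placeholders, non-selected genes are simply zeroed out before ranking as a simplification, and scipy's rankdata is used so that rank 1 is best):

```python
import numpy as np
from scipy.stats import rankdata, gmean

def deca_rank(de: np.ndarray, pvals: np.ndarray, seeds: list[int], thr: float = 0.6) -> np.ndarray:
    """Rank genes by correlation with seed genes and by significance fraction.

    de, pvals: genes x comparisons matrices of DE values and p-values.
    Returns the geometric mean of the 2n per-seed ranks (lower = better).
    """
    per_seed_ranks = []
    for s in seeds:
        cols = pvals[s] < 0.01                       # comparisons where the seed is significant
        corr = np.array([np.corrcoef(de[g, cols], de[s, cols])[0, 1] for g in range(de.shape[0])])
        frac = (pvals[:, cols] < 0.01).mean(axis=1)  # significance fraction per gene
        keep = (np.abs(corr) >= thr) | (frac >= thr) # selection criterion from the text
        corr[~keep], frac[~keep] = 0.0, 0.0
        per_seed_ranks.append(rankdata(-np.abs(corr)))
        per_seed_ranks.append(rankdata(-frac))
    return gmean(np.vstack(per_seed_ranks), axis=0)
```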
All R scripts used in this paper are available at http://semantics.systemsbiology.nl/index.php/download-page/.
DECA comprehensive in silico assessment. Ten pathways were chosen at random for assessment of the DECA algorithm: the ABC transporters, Adherens junction, Fat absorption, Gap junction, Glycerolipid metabolism, Glycerophospholipid metabolism, NF-κB signalling, p53 signalling, PPAR signalling and TLR signalling pathways. Some of these pathways are known to be associated with intestinal epithelia [65][66][67] . The genes associated with each of the 10 pathways were selected from the KEGG pathway database 38 . For each of these pathways, 3 seed genes were chosen at random; the chosen seed genes were required to show significant differential expression in at least 15 experiments. The seed genes were then used in DECA and the resulting gene list was ranked as described above. The number of pathway genes present in the top 10% of the ranked list was calculated. In addition, a Welch two-sample t-test was performed to assess whether the pathway-related genes had a better average rank than the rest of the genes in the ranked list. The protocol was iterated 10 times for each pathway. The results are provided in Supplementary Table S2.
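This assessment can be expressed in a few lines of R; here ranked_genes (the DECA output, best first) and pathway_genes (the KEGG gene set) are assumed to be available under those illustrative names:

in_path <- names(ranked_genes) %in% pathway_genes
top10 <- head(names(ranked_genes), ceiling(0.1 * length(ranked_genes)))
hits <- sum(top10 %in% pathway_genes)          # pathway genes in the top 10%
pos <- seq_along(ranked_genes)                 # rank positions
tt <- t.test(pos[in_path], pos[!in_path])      # Welch two-sample t-test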
"year": 2017,
"sha1": "3b627b33e0c3c3aa01f6a911f3932608feaa888e",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-06355-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3b627b33e0c3c3aa01f6a911f3932608feaa888e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Late Holocene echinoderm assemblages can serve as paleoenvironmental tracers in an Antarctic fjord
High-latitude fjords can serve as sediment traps, bearing different types of proxies, from geochemical to micropaleontological ones, making them exceptional tools for paleoenvironmental reconstruction. However, some unconventional proxies can also be present and can be used to build a more comprehensive and exhaustive interpretation of past changes. Here, studying a sediment core in Edisto Inlet (Ross Sea, Antarctica), we used irregular echinoid spines and ophiuroid (Ophionotus victoriae) ossicles to trace environmental changes throughout the last 3.6 kyrs BP. Irregular echinoids can serve as a proxy for organic matter content, while O. victoriae ossicles can be used as a proxy for a steady sea-ice cycle along with organic deposition events. O. victoriae releases a high number of ossicles, making population estimates quite challenging; still, presence data can be easily collected. By applying Generalized Additive Models to the stratigraphical distribution of these data, we detected an environmental phase that was previously unnoticed by other traditional proxies: the Ophiuroid Optimum (2–1.5 kyrs BP). In conclusion, we demonstrate how echinoderm presence can be used as a valuable source of information, while proving the potential of modelling binary data to detect long-term trends in Holocene stratigraphical records.
Fjords are one of the most important transitional environments of the high latitudes. Recently they have been identified as carbon-cycle hotspots, making them an important factor in Earth's climate system, being the marine environment with the highest carbon burial rate per unit area 1,2 . However, most of the studied fjords are in the Northern Hemisphere, especially in the Arctic, whilst the southern counterpart remains less studied, with the Antarctic Peninsula being the only exception 3 . In their role as sedimentation traps, fjords have been extensively used in paleoenvironmental reconstruction, especially to disentangle abrupt changes in the environment over the Holocene. This has been of great importance in the Antarctic Peninsula to recognize glacial advance or retreat, meltwater pulses, and changes in hydrographic conditions and sedimentation regime [3][4][5][6][7] . A plethora of methods have been used to investigate these aspects: from micropaleontological analyses of foraminifera and diatoms 8 , to geochemical proxies and biomarkers 9 .
In addition, these highly dynamic systems have been studied for their modern ecosystem characteristics. Bae et al. 10 showed that different species of diatoms, located in different parts of Marian Cove, can support different macrofauna communities. Lagger et al. 11 , studying Potter Cove in 2010, found that ascidians and bryozoans constitute more than 90% of the total benthic community of a newly ice-free area.
Despite the increased attention towards the environmental evolution of fjords, little attention has been paid to the use of the macrofaunal component in paleoenvironmental reconstruction, despite its importance as a tool for gaining insights into community and environmental dynamics 12,13 .
Within this context, our study focuses on echinoderm remains in the Edisto Inlet, a small fjord located in the Ross Sea and characterized by a seasonal sea-ice cover, to assess whether the macrofaunal component can be used to build a more comprehensive view of the environmental evolution of this area. The fjord has been previously studied for paleoenvironmental dynamics, seafloor characteristics and tephra content [14][15][16][17][18] . The bottom of the Inlet is covered by a 110 m thick expanded laminated Holocene sedimentary sequence, reflecting different diatom communities through distinct lamina colours 17,18 . Although echinoderm remains (ophiuroids and echinoids) were previously observed in these sediments, they have not yet been used for paleoenvironmental reconstruction.
Study area
Edisto Inlet, situated in the northwestern part of the Ross Sea, is an elongated and narrow fjord with an average depth of 500 m and a minimum depth of 100 m at the fjord mouth 14,17 (Fig. 1). The fjord has four different glacial inputs: the Edisto glacier, the Manhaul glacier, the Arneb glacier, and a small unnamed glacier near the mouth (Fig. 1). Geomorphological evidence, combined with geochemical and micropaleontological analyses, was previously used to study the environmental evolution of the Inlet over the last 11 kyrs BP 15,17,18 .
From 11 kyrs BP, the glaciers started to retreat, leading to open-water conditions, along with the reinvigoration of the circulation. The period from 9 to 2.6 kyrs BP witnessed a deglaciation phase with the establishment of a seasonal sea-ice regime 17 . Subsequently, from 2.6 to 0.7 kyrs BP, geochemical and foraminiferal analyses suggested the presence of a seasonal sea-ice cover, and around 1.48 kyrs BP, a conspicuous glacial meltwater flow attributed to the retreat of three in-situ glaciers was linked with the onset of the Medieval Climate Anomaly (MCA) 15,18 . Lastly, from 0.7 kyrs BP to the present day, after the onset of the Little Ice Age (LIA), a prolonged period of sea-ice cover has been suggested by the sudden decrease of the sedimentation rate by one order of magnitude, probably due to the persisting presence of sea ice during the thawing season 15,16,18 .
Core TR17-08
The marine sediment core TR17-08 (14.6 m long) was retrieved in January 2017 at the entrance of the fjord at a depth of 462 m below sea level (Fig. 1). The core consists of diatomaceous ooze and shows a lamination defined by the alternation of olive-green to brownish laminae (dark) and white ones (light). In Edisto, lamina colour reflects diatom assemblages: dark laminae are dominated by Fragilariopsis curta and F. obliquecostata and have a low biovolume content of Corethron pennatum, indicative of a first sea-ice break-up during the early austral summer, whilst light laminae are mainly composed of C. pennatum and are deposited during the later part of the summer, when oligotrophic conditions are present 18 (Fig. S1). The age-depth model was constructed using 10 radiocarbon dates and 1 tephra layer associated with the Mount Rittmann eruption 15,16 (Fig. S2). The core covers a period of 3.6 kyrs BP.
Micropaleontological samples
Samples with 1 cm thickness were taken every 10 cm from core TR17-08, resulting in a total of 152 samples. Samples were washed with a 63 µm sieve and dried overnight at 40 °C. Ice-Rafted Debris (IRD) content was determined by counting sharp clastic remains in the > 1 mm fraction. Foraminifera and echinoderm remains in the > 150 µm fraction were picked exhaustively. Since autotomy and arm regeneration can exaggerate the number of ossicles released into the environment by ophiuroid individuals 30 , we used the distribution of ossicles to evaluate the presence/absence of the echinoderm taxa. Benthic Foraminifera Accumulation Rate (BFAR, specimens cm −2 yr −1 ), Planktic Foraminifera Accumulation Rate (PFAR, specimens cm −2 yr −1 ) and IRD fluxes (counts cm −2 yr −1 ) were measured as described in Herguera and Berger 31 . Foraminifera were recognised at the species level (for an extensive list of the species see Table S1) using the taxonomic tables from [32][33][34][35][36] . Ophiuroid remains were analysed using Scanning Electron Microscopy (SEM) to evaluate the diagnostic features of the LAPs 29,37 . Echinoid spines were collected and recognised at the functional level.
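The sketch below assumes the standard formulation behind accumulation-rate calculations of this kind (counts per gram of dry sediment multiplied by the linear sedimentation rate and the dry bulk density); the variable names are illustrative rather than those of the original workflow.

# Accumulation rate in specimens (or counts) cm^-2 yr^-1
accum_rate <- function(counts_per_g, lsr_cm_yr, dbd_g_cm3) {
  counts_per_g * lsr_cm_yr * dbd_g_cm3
}
# e.g. BFAR for one sample, using that sample's count, dry weight,
# sedimentation rate (0.49 cm/yr on average in this record) and density:
# bfar <- accum_rate(benthic_count / dry_weight_g, 0.49, dbd)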
Statistical analysis
Presence/absence data are difficult to interpret due to their binomial nature. Generalized Additive Models (GAM) were used to model the probability of occurrence over time, following the method for detecting trends in temporal series described by Simpson 38 . GAM are analogous to a LOESS curve but rely on fewer assumptions, giving these models an advantage in detecting significant trends in temporal series 38,39 . The age (in yrs BP) of the layer was used as the predictor of the model. GAM were applied to both the echinoid and the ophiuroid distributions. Statistical analyses were performed in the software RStudio 40 (v4.3.1). GAM were calculated using the mgcv package 41 . The fit criterion used was restricted maximum likelihood (REML) 38 . For the stratigraphical analysis, a stratigraphically constrained cluster analysis with Euclidean distance (CONISS) was computed using BFAR, PFAR, IRD, Margalef (M), Evenness (J) and the probability of occurrence of the echinoids (P(E)) and ophiuroids (P(O)). We used the package tidypaleo for the CONISS, which uses a broken-stick approach to define significant clusters 42 . Foraminifera diversity indexes, M and J, were calculated using the software PAST 43 (v4.14). The M index is calculated as M = (S − 1)/ln(n), where S is the number of species and n is the total number of individuals. J is calculated as J = e^H/S, where H is the Shannon index, corresponding to H = −Σ p_i ln(p_i), with p_i the proportional abundance of species i.
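A minimal mgcv sketch of the occurrence model, assuming a data frame df with columns age_bp and presence (0/1); the object names are illustrative:

library(mgcv)
mod <- gam(presence ~ s(age_bp), data = df,
           family = binomial, method = "REML")   # REML fit criterion
newd <- data.frame(age_bp = seq(min(df$age_bp), max(df$age_bp), length.out = 200))
pr <- predict(mod, newd, type = "link", se.fit = TRUE)
p_occ <- plogis(pr$fit)                          # probability of occurrence
band <- plogis(cbind(pr$fit - 1.96 * pr$se.fit,  # approximate 95% band
                     pr$fit + 1.96 * pr$se.fit))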
Ophiuroids and echinoids
In our record, echinoids were more common throughout (104/152 samples, 69%) than ophiuroids (27/152 samples, 18%), but were consistently absent from 1 to 0.7 kyrs BP (Fig. 2). Ophiuroid presence is more frequent during the period from 3.6 to 1 kyrs BP. GAM applied to the distributions show significant trends (p < 0.05), with values increasing as the density of the presence points increases (Fig. 2; Table S2). P(E) increases from 3.6 to 1.5 kyrs BP, and tends to decrease after 1.5 kyrs, where the echinoderms are absent (Fig. 2a). Ophiuroids show a different trend: P(O) increases steadily from the bottom and peaks around 2 kyrs BP, in concomitance with the interval with most presence points (Fig. 2b). Although the peak of P(O) is low (< 0.5), we argue that these low values arise from the low number of presence points in the ophiuroid distribution (Fig. 2b). Despite the low probability values, P(O) still gives important clues on the long-term trend, as it can define the frequent presence of ophiuroids, as seen in the bulging of the curve in the interval 2–1.5 kyrs BP (Fig. 2b). This application shows the strength of utilizing GAM even for binomial distributions, unlocking possibilities for the use of this type of model to understand long-term macrofauna dynamics from a presence/absence matrix derived from a temporal series.
Micromorphological analysis of the LAPs retrieved from our samples allowed us to recognize the presence of the species Ophionotus victoriae Bell, 1902 29,37,44 . Since there were no other types of LAP throughout the core, we assume that all other ophiuroid ossicles belonged to the same species (Figs. 3 and 4).
Ophionotus victoriae is a well-known and widely distributed species across Antarctica, living at almost every depth and within different types of environments 20 . Like other ophiuroids in Antarctica, O. victoriae is an opportunistic species with high dietary plasticity, even showing cannibalistic behaviour in high-density populations 45 . Cannibalistic behaviour is typically found among organisms that live in areas affected by strong seasonal fluctuations, like fjords or high-latitude enclosed basins, generally characterized by a seasonal cycle of the sea-ice cover 26,45,46 . In Deception Island, O. victoriae spatial distribution relates to the sedimentation regime and a low ice-related disturbance 21,47 . In addition, O. victoriae has a peculiar reproductive biology involving synchronous annual spawning in November–December, long oocyte development periods, and the dependence of the reproductive effort upon the previous year's sedimentation event 48 . Considering these premises, we hypothesize that the presence of O. victoriae could be a valuable proxy for low interannual variability of the environmental cycle of the study area, corresponding to a seasonal sea-ice cycle along with constant organic deposition events. Given the crucial role of O. victoriae in the bentho-pelagic coupling of the community, its presence could also be used as a proxy for a mature benthic community, implying that an energy flux was present from the primary producers (e.g., diatoms) to the secondary consumers 24 .
Since the skeleton of O. victoriae is made of high-Mg calcite, the ossicles can be prone to dissolution after decay, with complete disappearance estimated to occur within 6–105 years 49 . However, in high-sedimentation regimes like the Edisto Inlet, rapid burial could have removed the ossicles from corrosive water masses, enabling their preservation 50 . Considering the similar chemical composition of O. victoriae ossicles and echinoid spines, and considering the coupled presence of both along the core, we infer that there was no taphonomic filter on the distribution of the echinoderm remains 51 .
Echinoid spines were identified as belonging to irregular echinoids (infraclass Irregularia Latreille, 1825; Fig. S3) on the basis of their morphology, although the material available precluded a species-level identification 52,53 . Irregular echinoids are bottom dwellers feeding on organic matter on and within the seafloor 54 . This type of ecology hints at their possible use as a proxy for organic matter content on the seafloor.
Paleoenvironmental reconstruction over the Late Holocene of Edisto Inlet
To test whether the echinoderms could be used as a proxy for past environmental conditions, we compared P(E) and P(O) with other proxies derived from core TR17-08 and with biogeochemical proxies derived from a nearby core inside the Inlet, HLF17-01 (Figs. 5 and 6). BFAR and PFAR have been used extensively as proxies for paleoproductivity 31,55 . Likewise, IRD fluxes have been used to reconstruct the location of the polar front, iceberg-discharging events and the extension of the sea-ice cover at both poles [56][57][58][59] . In Barilari Bay (Antarctic Peninsula), a fjord with environmental features similar to Edisto, high abundances of IRD suggest the onset of a discharging event driven by calving of an iceberg from a marine-terminating glacier, and a decrease in IRD content marks the onset of seasonally open marine conditions 59 .
Changes in J and M have been related to changes in the benthic environment as well as to the presence of stressful periods or events [60][61][62] . In the Arctic, changes in the diversity and densities of the calcareous foraminiferal fauna are related to changes in the macrofaunal component, paralleling the response of macrofaunal diversity 62 . In our study, the diversity indexes calculated on the foraminiferal content are used as a proxy for the patchiness of the environmental conditions. Foraminiferal fauna composition depends upon the physical and chemical characteristics of the environment; thus, an increase in the number of species could be evidence of an increase in the number of habitats or ecological niches 61 . On the other hand, J is used to evaluate how well the species are partitioned in a community, with lower values indicating that few species dominate the community 63 . This can be interpreted as the prominent presence of the ecological characteristics of the dominant group. In other words, if J decreases and M increases, we can suppose that the decrease in equitability (increase in the dominance of a species or group) and the contemporary increase in the number of species reflect the prominent presence of one environment type (indicated by the low values of J), while new rare species are added (indicated by M). In turn, this implies the steadiness of the ecological conditions, since one species (or group) dominates the assemblages, without taking into consideration the environmental condition itself.
Considering these premises, the values of P(O) and P(E) should be higher in concomitance with relatively high primary productivity (high BFAR and PFAR), a seasonal sea-ice cycle (low IRD content), and an increase in environmental steadiness (low J, high M).
The CONISS analysis divided our record into 4 clusters (Fig. 5, dashed lines). Starting from the oldest, the first cluster (Fig. 5, yellow band) goes from 3.6 to 2.5 kyrs BP. Within this interval the BFAR and PFAR show similar patterns, indicating a period with explosive increases of primary productivity 55 . IRD fluxes are relatively high with respect to the subsequent zone (Fig. 5, red band), while J and M do not show any trend. This can be interpreted as a climatic phase where primary productivity was high but inconsistent from year to year, as indicated by the peaks of IRD (Fig. 5, yellow band). The same conclusion can be inferred from the peaks in the BFAR and PFAR content and from the variability of J 59,64 (Fig. 5, yellow band). The increase in P(E) and P(O) corroborates this view: the increase in P(E) suggests an increase in the organic matter content on the seafloor, probably derived from superficial algal blooms, while the increase in P(O) can be interpreted as a gradual increase in the stability of the seasonal sea-ice cycle.
The subsequent interval (2.5–1.4 kyrs BP, Fig. 5, red band) is characterized by low values of BFAR, PFAR and IRD. During this time, J decreases in the middle part along with an increase in M. The low values of BFAR and PFAR can be interpreted as a decrease in primary productivity alongside a decrease in the variability of the nutrient input, indicated by the lower values of the peaks (Fig. 5, red band). The IRD content is lower than in the previous interval (Fig. 5, yellow band) and shows less variability, indicating the presence of a seasonal sea-ice cover 59 . During this period, P(O) and P(E) peak at around 2–1.8 kyrs BP, validating our hypothesis that ophiuroid presence/absence can be used as a paleoenvironmental proxy, since the premises stated previously are met: low IRD content, relatively high BFAR and PFAR, and a decrease in J along with an increase in M.
In addition, this interval can be compared with the geochemical data from core HLF17-01 and with the foraminiferal analysis from core TR17-08 15,18 . In the study of Tesi et al. 18 , the IPSO 25 index (Ice Proxy for the Southern Ocean, a C 25 highly branched isoprenoid 65 ) was used to track the absence of sea-ice cover during the austral summer, revealing a prominent ice-free summer season from 2.6 to 0.7 kyrs BP (Fig. 6). However, the interval with the lowest average values and the most frequent under-threshold values of the IPSO 25 spans from 2 to 1.5 kyrs BP (Fig. 6). This period also corresponds to the peak in both P(E) and P(O) values, corroborating the hypothesis on the use of Ophionotus victoriae as a proxy for a seasonal sea-ice cycle (Fig. 6). Foraminiferal analysis also supported the view of a prominent seasonal phase during this period 15 . By combining the results from the geochemical and foraminiferal analyses with our echinoderm distribution, we can identify a time interval characterized by an interannually stable sea-ice cycle with conspicuous organic sedimentation events, along with an energy flux from primary producers (diatoms) to secondary consumers (ophiuroids and echinoids), implying the presence of a developed benthic community. We call this period the Ophiuroid Optimum, due to the relationship between the ophiuroid presence and the IPSO 25 values (Figs. 5 and 6).
From 1.4 to 0.7 kyrs BP a transitional phase took place (Fig. 5, purple and blue bands). BFAR and PFAR drop to near-zero values, while IRD content suddenly increases at 1.2 kyrs BP. J increases, while M decreases in concomitance with P(E) and P(O) over the same period (Fig. 5, purple band). The period after 0.7 kyrs BP (Fig. 5, white band) experiences a drop in the sedimentation rate from an average value of 0.49 to 0.07 cm yr −1 ; we decided to separate it from the previous cluster manually, since it yields lower resolution, suggesting a closed environment with a sea-ice cover that does not thaw during the summer, or experiences only very incipient opening 15,18,19 . Foraminiferal analysis over 1.4–0.7 kyrs BP suggests the presence of a conspicuous meltwater flow, probably derived from the retreat of in-situ glaciers with the onset of the MCA, a warm phase recognised in the Northern Hemisphere as well as in the Antarctic Peninsula and along the Victoria Land Coast 15,66 . The transitional phase has been interpreted as a deterioration of the seasonal sea-ice cycle, with an increase in the residence time of water masses inside the fjord 15 . The increase in J in concomitance with a decrease in M of the foraminiferal community also suggests a transitional state, where species vanish from the community while the remaining ones thrive.
In addition, in another nearby core, BAY05-20 (Fig. 1), the increase over the same period in the content of Fragilariopsis curta, a sea-ice indicator diatom, is consistent with our interpretation 67 . Thus, our results suggest a period of less stable seasonality of the sea-ice cover, with a prolonged season of winter cover, as suggested by the low BFAR and PFAR values. IRD values are highest over this period, and the steady decreases of P(E) and P(O), along with the increase in J and the decrease in M, corroborate the hypothesis of a reduction in seasonality as well as a decrease in productivity, culminating around 0.7 kyrs BP 15,18 . In addition, the absence of both echinoids and ophiuroids (Fig. 2) suggests the absence of an energy flux from the primary producers to the secondary consumers, reflected in the absence of a mature benthic macrofaunal community.
The period from 0.7 kyrs BP to recent times (Fig. 5, white band) has been associated with the onset of the LIA, a Northern Hemisphere cooling period that has also been identified in Antarctic ice cores through a sudden decrease of 2 °C in the reconstructed air temperature 68,69 . Over this period, the sedimentation rate in Edisto Inlet is stable but low, suggesting the presence of a prolonged period of sea-ice cover with low-to-no productivity 15,18 . Results from P(E) and P(O) are difficult to interpret due to the low number of samples with respect to the previous zones, and the increase in their values and in the amplitude of the confidence interval could derive from the low resolution of this period compared to the other ones (Fig. 5).
Conclusion
In this study we evaluated the use of microfossils of macrofaunal organisms as paleoenvironmental proxies in high-sedimentation settings by studying the marine sediment core TR17-08 from the Edisto Inlet, Ross Sea (Antarctica). In the record, spanning 3.6 kyrs BP, we successfully identified the presence of an ophiuroid species, Ophionotus victoriae, by analysing lateral arm plate (LAP) morphology, along with the presence of irregular echinoid spines. To detect significant trends in the stratigraphical distribution of the latter, a presence/absence matrix was constructed. Generalized Additive Models (GAM) were used to convert a binomial distribution into a continuous one, serving as a new way of analysing macrofaunal presence in paleoenvironmental studies. By comparing the presence of echinoids (P(E)) and ophiuroids (P(O)) with other biogeochemical and micropaleontological proxies, we were able to demonstrate the use of O. victoriae as a proxy for the interannual steadiness of the seasonal sea-ice cycle and of organic sedimentation events, as well as for a functional bentho-pelagic coupling. Although the irregular echinoid spines could not be identified in detail, their presence can be used to infer the existence of organic matter at the seafloor.
By utilizing geochemical, micropaleontological and macrofaunal proxies, we were able to identify four different phases in accordance with previous studies, as well as a new seasonally stable phase called the "Ophiuroid Optimum" 15,18 .
This study demonstrates that macrofaunal components can be used in micropaleontological studies and in multiproxy approaches to reconstruct paleoenvironmental settings, opening new ways to describe and interpret past climatic and environmental changes.
Figure 1. (a) Ross Sea and the studied area; (b) marine sediment core locations in Edisto Inlet. The red triangle highlights the core used in the study. Black points indicate marine cores retrieved from the area and used for comparison: HLF17-01 18 and BAY05-20 67 . Map from Galli et al. 15 .
Figure 2. GAM of the echinoderm distributions. (a) GAM of the irregular echinoids; (b) GAM of the ophiuroids. P(E) refers to the probability of occurrence of the echinoids, while P(O) is the probability of occurrence of the ophiuroids. The trend line is displayed in black, and the confidence band (95%) is indicated by the light blue ribbon. Black points represent the presence (1)/absence (0) distribution along the core.
Figure 5. Comparison between BFAR, PFAR, IRD, J, M, P(O) and P(E) over 3.6 kyrs BP from core TR17-08. The CONISS cladogram is also displayed; the dashed lines represent the CONISS cluster division. Note the different scale on every graph. BFAR, PFAR and IRD are given in counts cm −2 yr −1 . The grey ribbon in the P(O) and P(E) graphs represents the 95% confidence interval.
Figure 6. IPSO 25 values over the last 2.6 kyrs BP in core HLF17-01 (Tesi et al. 18 ). Blue dots indicate summer sea-ice cover, while red dots indicate sea-ice-free conditions during the summer; the red shaded area highlights the period where P(E) and P(O) reach their maximum. The y-axis is on a logarithmic scale.
"year": 2024,
"sha1": "7943eaf4cfc5e784e507d364b335f61ba7eecbe1",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "bf713459a7a4655401cc0e26f08739deacb3fc8e",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Data-driven lemma synthesis for interactive proofs
Interactive proofs of theorems often require auxiliary helper lemmas to prove the desired theorem. Existing approaches for automatically synthesizing helper lemmas fall into two broad categories. Some approaches are goal-directed, producing lemmas specifically to help a user make progress from a given proof state, but they have limited expressiveness in terms of the lemmas that can be produced. Other approaches are highly expressive, able to generate arbitrary lemmas from a given grammar, but they are completely undirected and hence not amenable to interactive usage. In this paper, we develop an approach to lemma synthesis that is both goal-directed and expressive. The key novelty is a technique for reducing lemma synthesis to a data-driven program synthesis problem, whereby examples for synthesis are generated from the current proof state. We also describe a technique to systematically introduce new variables for lemma synthesis, as well as techniques for filtering and ranking candidate lemmas for presentation to the user. We implement these ideas in a tool called lfind, which can be run as a Coq tactic. In an evaluation on four benchmark suites, lfind produces useful lemmas in 68% of the cases where a human prover used a lemma to make progress. In these cases lfind synthesizes a lemma that either enables a fully automated proof of the original goal or that matches the human-provided lemma.
INTRODUCTION
Interactive proof assistants [de Moura et al. 2015;Filliâtre et al. 1997;Paulson 1993] are powerful frameworks for writing code with strong guarantees. While various tools exist to perform automated proof search [Bansal et al. 2019;First et al. 2020;Gauthier et al. 2017;Paliwal et al. 2020;Sanchez-Stern et al. 2020;Sekiyama et al. 2017;Whalen 2016;Yang and Deng 2019] and to integrate external automated solvers [Blanchette et al. 2011;Czajka and Kaliszyk 2018;Kaliszyk and Urban 2015a,b], the manual proof burden remains high. One particular challenge is the need to identify auxiliary lemmas that are required to prove a desired theorem. For example, the theorem's induction hypothesis may be too weak, thereby necessitating a stronger lemma that is amenable to an inductive proof. As another example, a lemma may be required to rewrite a subgoal at a particular point in the proof into a form that allows the induction hypothesis to be applied.
Existing approaches to address this problem through a form of lemma synthesis fall into two categories [Johansson 2019]. In the first category, heuristic rewrites are performed on the proof state at the point where the user is stuck to identify potentially useful lemmas [Aubin 1976;Bundy et al. 1993;Castaing 1985;Dixon and Fleuriot 2003;Hesketh 1992;Hummel 1990;Johansson et al. 2010;Kapur and Subramaniam 1996;Kaufmann and Moore 1997;Sonnex et al. 2012]. For example, the generalization technique [Boyer and Moore 1979; Kaufmann and Moore 1997] from ACL2 heuristically replaces one or more terms in the current subgoal with fresh variables. In the second category of approaches, candidate lemmas are generated from a grammar through a form of enumeration-based synthesis [Claessen et al. 2013[Claessen et al. , 2010Heras et al. 2013;Johansson et al. 2011Johansson et al. , 2014Montaño-Rivas et al. 2012;Reynolds and Kuncak 2015;. For example, HipSpec [Claessen et al. 2013] uses QuickSpec [Claessen et al. 2010] to generate many candidate equational lemmas and then proves as many as possible using an automated prover.
The strength of the heuristic rewriting approach is that it is goal-directed, producing candidate lemmas that are directly related to the current proof state. However, the approach has limited expressiveness, as the space of possible candidates is dependent on a particular set of rewrite rules. The enumeration approach has the opposite strengths and weaknesses. Because candidate lemmas are enumerated from a grammar, they can be highly expressive. However, candidate lemmas are generated in an undirected fashion, independent of the particular state where the user is stuck. Hence this approach will generate many irrelevant lemmas and so is ill-suited for an interactive setting. Indeed none of the enumeration-based tools cited above support interactive usage.
In this paper, we propose a new approach to lemma synthesis that combines the strengths of the existing approaches. We show how to reduce lemma synthesis to a data-driven program synthesis problem, which aims to synthesize an expression that meets a given set of input-output examples. The examples for synthesis are generated directly from the current proof state, ensuring that lemma candidates are targeted at the goal. At the same time, the approach enables the usage of off-the-shelf data-driven program synthesizers that generate expressions in a user-provided grammar [Albarghouthi et al. 2013;Feser et al. 2015;Frankle et al. 2016;Lubin et al. 2020;Miltner et al. 2022;Osera and Zdancewic 2015]. This new approach allows us to successfully synthesize helper lemmas for more stuck proofs than ever before.
Reducing lemma synthesis to data-driven program synthesis requires us to solve several technical challenges. While data-driven synthesis is a common approach to generating other kinds of program invariants [Ezudheen et al. 2018;Garg et al. 2014Garg et al. , 2016Miltner et al. 2020;Padhi et al. 2016;Zhu et al. 2018], for instance, loop invariants, these prior settings have several advantages that our setting lacks. In prior settings, the desired invariant is a predicate over a fixed set of variables, for example, the variables that are in scope at a loop. In contrast, it's common for auxiliary lemmas to require new variables that do not appear in the current proof state. Further, prior approaches employ counterexample-guided inductive synthesis (CEGIS) [Solar-Lezama 2009], because there exists a clear behavioral specification for the desired invariant: each candidate invariant is verified against the specification, and counterexamples become new input-output examples for synthesis. In our setting, we lack such a specification since a proof state can require an auxiliary lemma for many different reasons. Further, a counterexample to validity applies to the current lemma candidate, but that same valuation of variables is not necessarily a counterexample to the validity of a different candidate lemma. Hence we cannot generate input-output examples using CEGIS. Finally, the lack of a specification also makes it difficult to determine whether any particular candidate lemma is useful.
To address the problem of lemmas that require variables not appearing in the proof state, we observe that the generalization technique [Boyer and Moore 1979; Kaufmann and Moore 1997] described above can be used not only to produce candidate lemmas but also as a systematic way to "lift" the current proof state to new variables for lemma synthesis. Hence our approach starts by producing all generalizations of the proof state, each formed by replacing one or more terms with fresh variables.
To generate examples for synthesis without counterexamples, we leverage the implicit observation underlying the heuristic rewriting approaches described earlier, that the necessary lemma often has a similar structure to the goal in the current proof state. We produce a set of lemma sketches for each generalized goal, each sketch consisting of a version of that goal but with one expression replaced by a hole to be synthesized. We sample valuations of the variables in the current goal to generate input examples, and the expected output value for each example is determined by the value of the hole's original expression. In this way, we require that the synthesized expression's behavior be consistent with that of the expression that it is replacing.
Finally, to address the lack of clear criteria for candidate lemmas to satisfy, we have developed techniques to filter candidate lemmas that are not useful and to rank the remaining candidates based on their likely utility to the user. Filtering removes lemmas that are determined to be either trivial, redundant, or invalid, the latter using existing tools for automated counterexample search [Claessen and Hughes 2000;Paraskevopoulou et al. 2015]. Since the ultimate utility of a lemma is based on whether it is provable and allows the user to complete the current proof, our ranking approach employs existing tools for automated proof search to categorize lemmas for user inspection.
Our example-based approach to lemma synthesis is targeted for use in proving properties of programs. In that setting we can leverage existing randomized testing tools to generate the necessary examples. Generating examples for arbitrary Coq propositions and types is difficult in general and an active area of research [Paraskevopoulou et al. 2022]. Therefore our approach is less applicable to other uses of Coq, for example to prove mathematical theorems.
We have implemented our approach as a tactic for Coq and call the resulting tool lfind 1 . Coq users can invoke lfind as a tactic at any point in their proof, and it will produce a set of ranked lemma candidates. Our approach is parameterized by a data-driven program synthesizer (for candidate lemma generation), counterexample searcher (for candidate filtering), and proof searcher (for candidate ranking). Our implementation uses the Myth [Osera and Zdancewic 2015] data-driven program synthesizer for OCaml, the Quickchick [Paraskevopoulou et al. 2015] tool for counterexample search, and the state-of-the-art Proverbot9001 [Sanchez-Stern et al. 2020] tool for proof script search. Note that our approach is agnostic to the specific toolset we use for implementation; in fact, future improvements in data-driven program synthesis, counterexample search, and proof search can be directly leveraged to improve lemma synthesis.
We evaluate our approach on two benchmark suites from prior work on lemma synthesis, clam [Ireland and Bundy 1996] and lia, as well as two new benchmarks from diverse domains, full adder [cir 1995] and compiler correctness [Chlipala 2013]. Together, there are 222 evaluation locations from these benchmarks, where a human prover used an auxiliary lemma to progress. lfind synthesizes a useful lemma for 150/222 of these locations, with a median runtime of 3.3 minutes (see §5.3). At 117 of these locations lfind provides a full automated proof of the synthesized lemma and the goal; at the other 33 locations lfind produces a ranked list of lemma candidates where the human-written lemma is in the top 10. We also show that our approach significantly outperforms the prior technique of generalization as well as a version of lia that employs enumerative synthesis without examples (§5.4). Finally, in §5.5 we evaluate lfind's sensitivity to different hyperparameters and timeouts.
In summary, this paper makes the following contributions: (1) We present the first approach that reduces the general lemma synthesis problem to a data-driven program synthesis problem. The approach derives both lemma sketches as well as examples for synthesis from a given stuck proof state, and it uses the existing generalization technique to lift the proof state to new variables for synthesis. (2) We describe a suite of filtering and ranking strategies for candidate lemmas, which is necessary for an interactive verification setting. (3) We have instantiated our approach in a tactic called lfind for Coq. (4) Our experimental evaluation demonstrates the practical utility of our approach and tool, quantifies the benefits over multiple alternative approaches to lemma synthesis, and investigates the sensitivity of lfind to different parameter values.
Motivating Example
To illustrate how lfind works, we will start with an example. Figure 1 shows Coq code that tries to prove a simple theorem: that reversing a list twice returns the same list. It starts by defining lists of nats along with definitions for appending and reversing lists. Following that is an attempt to prove the theorem, named rev_rev.
The proof proceeds by induction on the list l. The Nil case is easily proven, but the Cons case is trickier. After simplification, the user is stuck because the goal is not in a form that enables direct use of the induction hypothesis. Figure 2 shows the proof state at that point, including the current assumptions and goal.
To get unstuck, the user can invoke our tool lfind as a tactic at this point. In this example, the top three lemmas that lfind produces are as follows:

(Λ1) Lemma lem1 : forall l1 n, rev (app l1 (Cons n Nil)) = Cons n (rev l1).
(Λ2) Lemma lem2 : forall l1 l2, rev (app l1 l2) = app (rev l1) (rev l2).
(Λ2) Lemma lem3 : forall l1 l2, rev (app (rev l1) l2) = app (rev l2) l1.

Each lemma is bucketed into one of three categories (Λ1, Λ2, or Λ3), and the categories are presented to the user in that order. Λ1 lemmas are those for which lfind can automatically find a complete proof of the original goal using the generated lemma and Proverbot9001, a state-of-the-art automated prover. In other words, lfind has successfully generated an appropriate auxiliary lemma, proven that lemma, and used the lemma to complete the original proof. The lemma lem1 is such a Λ1 lemma; the full proof of the theorem rev_rev using lem1 is shown in Figure 3.
Λ2 lemmas are those that are not disprovable by Quickchick and are sufficient to automatically prove the original goal, but for which Proverbot9001 cannot automatically prove the auxiliary lemma. lfind indicates that the second and third lemmas in the above listing are Λ2 lemmas; indeed, each of them in turn depends on its own auxiliary lemmas, for example, the associativity of app. However, both lemmas are still good options for the user: the lemma lem2 is a more general version of lem1, while lemma lem3 reduces to the original rev_rev lemma when l2 is Nil. Λ3 lemmas are ones that are not disprovable by a tester like Quickchick, but automation using Proverbot9001 cannot prove either the goal or the auxiliary lemma; since they are similar to the goal and not disprovable, they might still be useful to the user.
In the rest of this section we explain how lfind produces these results.
Approach
As mentioned in Section 1, the generality of our setting induces several technical challenges. Lemma synthesis in lfind has four steps that are targeted at these challenges, as shown in Figure 4. We start by generalizing the goal state, in order to systematically introduce new variables that can be used in candidate lemmas. From each generalization, we create sketches and sample variable valuations from the current goal in order to reduce lemma synthesis to data-driven program synthesis. Finally, we filter the resulting lemma candidates to remove those that cannot be useful and rank and categorize the remaining candidates for user inspection.
Generalization. In Coq, helper lemmas are generally used as arguments to the apply and rewrite tactics. To use the apply tactic, the consequent of the lemma must structurally match the goal state to which it is applied. Similarly, to use the rewrite tactic, the lemma needs to be an equality or similar relation, one of whose operands structurally matches a portion of the goal state. It is for these reasons that prior techniques for lemma synthesis in the interactive setting [Bundy et al. 1993;Kaufmann and Moore 1997] work by making heuristic rewrites to the goal state. The most common such technique is generalization, which replaces terms in the goal state with fresh variables [Kaufmann and Moore 1997].
Our approach starts from the same intuition but aims to use data-driven synthesis instead of heuristic rewrites in order to transform the goal state. However, we observe that generalization provides a systematic way to introduce new variables for the synthesis process. Since we are not sure in advance how many and which variables a useful lemma might need, we exhaustively generate generalizations, one per subset of terms within the goal state. In our example, there are six non-variable terms in the goal (Figure 2). While in principle there are 2^6 possible generalizations using these terms, there are only 16 unique ones, since some terms are only present as subterms of other ones.
For example, replacing rev l with a fresh variable l1 of type lst produces the following generalization: forall l n l1, rev (app l1 (Cons n Nil)) = Cons n l.
Alone, this generalization does not produce a valid lemma, as it does not hold when l1 is not the reverse of l. Typically generalization is only applied on terms that appear more than once in a goal [Kaufmann and Moore 1997], to avoid these cases. In our example, there are no such terms, and in fact, all lemmas generated by generalization alone are easily disprovable.
Nonetheless, these generalizations play a crucial role in our approach. In addition to being treated as candidate lemmas themselves, we use each generalization as a starting point from which to produce many more candidate lemmas via data-driven program synthesis. Each generalization introduces new variables that can be leveraged as part of that synthesis process.
Synthesis. From each generalization, we create a set of sketches, where each sketch is a version of that generalization with one term replaced by a hole. For example, if we replace the term Cons n l in the generalization above with a hole, then we end up with the following sketch (note that we remove variable l from the quantifier since it is no longer used): forall l1 n, rev (app l1 (Cons n Nil)) = □.
Intuitively, we would like the expression that fills the hole to behave consistently with the expression that it is replacing. To that end, we generate concrete examples of the original goal in the stuck state and then map them to input-output examples for data-driven synthesis. In our running example, the original goal has two variables, l and n, so suppose we randomly generate (l, n) pairs such as ([1, 2], 3) and ([4, 5], 6) (illustrative values, using regular list notation for clarity). Since l1 was introduced as a generalization of rev l, each sampled pair is mapped to a valuation for the variables in the sketch by computing l1 = rev l. As a result of this mapping, we can now produce a set of input-output examples that act as a specification for synthesis, each mapping an (l1, n) pair to the expected output value of the term to be synthesized; for instance, the sample ([1, 2], 3) yields the example (([2, 1], 3), [3, 1, 2]). Finally, we pass these input-output examples to a data-driven synthesizer. In addition to the examples, we provide the type of the function to be synthesized (which in this case is lst * nat → lst) and a grammar to use for term generation. lfind automatically creates a grammar consisting of the definitions that appear in the stuck proof state along with definitions that they recursively depend upon. In our example the grammar includes the constructors Nil and Cons and the functions app and rev. One term that the synthesizer generates from these inputs is Cons n (rev l1). Substituting this expression into the hole in our sketch yields exactly the lemma lem1 shown earlier, which enables a fully automated proof of the original lemma.
Note that synthesis is much more expensive than the generalization process described above, which simply replaces some terms with variables. Furthermore, the need for lemmas to structurally match the goal state limits how many parts of that state can be usefully rewritten. For these reasons, we consider only one hole per lemma sketch, but lfind's algorithm conceptually does not limit the number of holes per sketch. Our technique can be extended to synthesize terms for each hole one at a time and then induce candidate lemmas from their combinations.
In summary, we have shown how to generate candidate lemmas in a targeted way, based on the current proof state, using a novel combination of generalization and data-driven program synthesis. While the expressions that are generated by synthesis can make use of a general grammar, the form of the lemmas that we generate is still limited by the structure of the sketches that we produce. As we demonstrate in §5, our approach can generate useful lemmas for a variety of interesting benchmarks.
Filtering. As described above, our approach induces many generalizations of each goal, multiple sketches for each generalization, and multiple synthesis results for filling each sketch's hole. Hence, the set of candidate lemmas that are generated is quite large. In our running example, with default settings for the number of sketches to produce for each generalization and the number of synthesis results to produce for each sketch (see Section 5.2), lfind generates 276 candidate lemmas. While the ability to explore a large space of candidates is a strength of the approach, we must organize these candidates in a manner that is understandable and beneficial to users.
To that end, we filter out extraneous candidates in multiple ways. First, we filter out candidates for which we can find a counterexample; we search for counterexamples using Quickchick, an existing counterexample-generating tool [Paraskevopoulou et al. 2015]. Second, we filter out candidates representing trivial facts, for example forall l, rev l = rev l. We identify such cases using Coq's trivial tactic.
Finally, we filter out candidates that "follow directly" from the user's original lemma, a notion we explore in more detail in §3.4. For instance, in our running example, one candidate lemma is forall n l, rev (rev (Cons n l)) = Cons n l, which is a special case of the original rev_rev lemma and hence is discarded in this step.
Ranking. After filtering, there are 21 candidate lemmas remaining in our running example. While that constitutes a 92.4% reduction, it is still too many candidates to require the user to examine. Hence, we rank candidates based on their likely utility to the user and present them in ranked order. Since ultimately the utility of a lemma is based on whether it allows the user to prove the original goal, our ranking leverages a state-of-the-art automatic prover for Coq, Proverbot9001, which searches the space of Coq tactics to try to prove a given goal [Sanchez-Stern et al. 2020].
Specifically, we use the automatic prover to partition the candidate lemmas into the three groups introduced in Section 2.1: Λ1 lemmas that are automatically provable and enable automatic proof of the user's stuck proof state; Λ2 lemmas that are not automatically provable but enable automatic proof of the user's stuck proof state; and the remaining Λ3 lemmas. Next, we sort each group in order of size from least to greatest, since we expect smaller lemmas to be easier for users to understand and evaluate. Finally, we concatenate these sorted groups to form the final ranked list.
In our running example, there are 2, 2, and 17 lemmas respectively in each of these three categories. The first lemma in category Λ1, which yields a fully automated proof, is lem1 shown earlier, so it is ranked first. Lemmas lem2 and lem3 are the smallest lemmas in category Λ2 and hence are ranked next in our results.
ALGORITHMS
In this section we formally describe the core algorithms that make up our approach.
Preliminaries
Our approach synthesizes lemmas for a given proof state Ψ, which is a tuple ⟨H, g, Γ, D⟩, where H is a set of logical formulas that are the current hypotheses, g is a logical formula that is the current goal, Γ is a type environment for all free variables in H and g, and D is a set of type and term definitions that are recursively referred to in H and g. We require that the goal be unquantified, which in practice typically means that the original lemma/theorem should have all variables universally quantified at the front.
We use φ to denote logical formulas, x for variables, v for values, t for terms of sort Type, and τ for the types of terms. A sample for a proof state Ψ = ⟨H, g, ⟨x1 : τ1, . . . , xn : τn⟩, D⟩ is an environment σ = ⟨x1 : v1, . . . , xn : vn⟩ such that σ is a model of H → g, denoted σ ⊨D H → g. We also use the notation σ(t) ⇓ v to denote the evaluation of term t to value v under environment σ.
Finally, we assume the existence of several black-box functions that have been created by others in prior work. We assume the availability of a black-box synthesizer that takes as input a grammar G, consisting of typed constants and functions; a type signature τ1 → τ2; and input-output examples of the form (v1, v2), where v1 has type τ1 and v2 has type τ2. This synthesizer returns a list of functions f of type τ1 → τ2 in the grammar G such that f(v1) = v2 for all examples; or it fails after some time limit. We also assume the existence of a function Sample(Ψ) that produces a set of samples. Last, we assume the existence of both automated theorem provers and disprovers. A prover R(φ, Λ, D) attempts to prove a given formula φ in the context of a set of auxiliary lemmas Λ as well as a set of definitions D, returning either Valid or Don't Know. A disprover C(φ, D) searches for concrete counterexamples to φ and returns either Invalid or Don't Know.
Lemma Synthesis
First, we describe how we reduce lemma synthesis to data-driven program synthesis. As described in the previous section, the first step is to produce generalizations of the current goal g, by replacing some set of terms in g with fresh variables. The following definition formalizes this notion of generalization.
Definition 3.1. (Generalization: G) Given a goal g, a type environment Γ, and a set T = {t1, . . . , tn} whereby no term in T is a subterm of any other term in the set, we define the generalization of g with respect to Γ and T, denoted G(g, Γ, T), as the tuple ⟨g′, Σ⟩, where Σ = ⟨x1 ↦ t1, . . . , xn ↦ tn⟩ records the mapping from each new variable to the term that it replaces, the variables x1, . . . , xn are not in the domain of Γ, and g′ = g[t1 ↦ x1, . . . , tn ↦ xn].
As in lfind, the definition of generalization above replaces all occurrences of a particular subterm with the same variable, though it is possible to relax this requirement. The restriction that no term in T can be a subterm of any other term in the set ensures that their simultaneous substitution is well defined. In lfind any term that is a subterm of another term to be generalized is simply ignored.
For example, a goal state containing the term 2 * (atan2(x, y) - z) can be generalized by replacing atan2(x, y) with a fresh variable a, yielding the term 2 * (a - z) and the mapping Σ = ⟨a ↦ atan2(x, y)⟩. In order to produce a data-driven program synthesis problem, we must generate input-output examples. The following definition shows how we extend an environment σ to an input-output example, given a set of terms T (which are used for generalization) and a term t (used for creating a sketch). Intuitively, the new variables created by generalization become additional input variables, and the term used to create a sketch defines the expected output.
For example, given (1) the environment σ = ⟨x ↦ 2, y ↦ 3, z ↦ 4⟩, which gives values for the variables in our original example goal state, (2) the term mapping Σ = ⟨a ↦ atan2(x, y)⟩ from our example generalization, and (3) the term 2 * (a - z) that was replaced with a hole in our example sketch, we produce an input-output example whose input tuple extends the environment with a mapping from a to the value of atan2(2, 3) and whose expected output is the value of 2 * (atan2(2, 3) - 4).
Finally, we can put all of this together to specify how to reduce lemma synthesis to data-driven program synthesis.
Definition 3.4. (Lemma synthesis as data-driven program synthesis) Given a proof state Ψ = ⟨H, g, Γ, D⟩, a set of terms T = {t1, . . . , tn} for generalization, and a sketch term t, we produce a data-driven program synthesis problem as follows. Let G(g, Γ, T) = ⟨g′, Σ⟩.
• The grammar G for synthesis is defined by the type and term definitions in D.
• The input-output examples are obtained by extending each sample produced by Sample(Ψ), as described above, using Σ to compute values for the new variables and t to compute the expected output.
• The output type for the function to be synthesized is τ, where Γ ⊢ t : τ.
We invoke the synthesizer with these inputs and ask for the k smallest functions (§5 reports sensitivity analysis for k) that meet this specification. For each such function f, with body expression e, the induced candidate lemma is created by universally quantifying all free variables in the term g′[t ↦ e]. Above we have formalized the process of lemma candidate generation from a single set of terms to be generalized and a single term to be used for creating a sketch. lfind performs this process many times, for many different generalizations and many different sketch terms. Various approaches to exploring this space are possible. lfind exhaustively explores the generalization space, producing one generalization for each subset of terms in the goal g. For each such generalization, lfind employs terms that have sort Type for creating sketches. There are several ways to pick a synthesis term for a sketch, and in §5 we carry out sensitivity analysis for two natural approaches to choosing such terms.
Filtering
The approach described so far generates a lot of candidate lemmas. If there are n subterms in a given goal to use for generalization, s sketches per generalization, and we ask the synthesizer for k results, then without any filtering lfind would produce a maximum of 2^n (s·k + 1) candidates, including all generalizations and the lemmas derived from them using data-driven synthesis. Exploring a large space of candidates is advantageous, but clearly we require techniques to filter out candidates that are not going to help the user.
We employ four different filtering techniques. First, duplicates among candidate lemmas are common. For example, it's possible for synthesis from two different sketches to produce the same result. It's also possible for synthesis from a single sketch to produce syntactically distinct results that are behaviorally equivalent. We identify and filter duplicates by applying Coq's simpl tactic and then comparing the results for syntactic equivalence. Second, we use the disprover C to search for counterexamples, filtering out any candidate λ such that C(λ, D) = Invalid. Third, we remove lemmas that can be solved using Coq's trivial tactic, since they are self-evident and hence never needed as explicit auxiliary lemmas.
Finally, we filter lemmas that "follow directly" from the original lemma, as they will not help in proving that lemma. This is a subtle notion. For example, it is not a form of logical implication, since if the candidate lemma is valid then any other lemma implies it. Instead, we formalize this filter via a binary relation ⪯, which says that one lemma is an instantiation of (or equivalent to) another, defined as follows:

Definition 3.5. (⪯-operator) Given lemmas l1 and l2, we say l1 ⪯ l2 if we can prove l1 using any of the following proof scripts:
(1) intros. apply l2. Qed.
(2) intros. rewrite <- l2. reflexivity. Qed.
(3) intros. rewrite -> l2. reflexivity. Qed.
We then filter out any candidate lemma that is ⪯ the original lemma; a concrete instance of the ⪯ check is sketched below.
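As a hedged, hypothetical illustration (both lemma statements below are ours, chosen only to exercise the first proof script of Definition 3.5): l1_inst is an instantiation of l2_comm, so "intros. apply l2_comm." proves it and l1_inst ⪯ l2_comm holds.

Require Import PeanoNat.

(* The more general lemma, playing the role of l2. *)
Lemma l2_comm : forall (n m : nat), n + m = m + n.
Proof. intros. apply Nat.add_comm. Qed.

(* A candidate that merely instantiates l2_comm (m := 1), playing
   the role of l1; the first script of Definition 3.5 succeeds. *)
Lemma l1_inst : forall (n : nat), n + 1 = 1 + n.
Proof. intros. apply l2_comm. Qed.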
Ranking
We rank the remaining candidate lemmas using the automated prover R we introduced earlier.
For each candidate l we use the prover to determine whether the candidate enables an automatic proof of the goal, R(g, {H, l}, D), and whether the candidate itself is automatically provable, R(l, ∅, D). Based on the results we partition the lemmas into three groups, Λ1, Λ2, and Λ3. The Λ1 group contains the lemmas for which both calls to R return Valid, meaning that we have obtained a fully automated proof of the user's original goal. The Λ2 group contains the lemmas for which only the first call to R returns Valid, meaning that the lemma enables the goal to be automatically proven but the lemma is not itself automatically provable. The remaining lemmas go in the Λ3 group. We sort the lemmas in each group by size, from smallest to largest, since we expect smaller lemmas to be easier for users to understand and evaluate. Finally, we concatenate these sorted groups to form the ranked list.
Discussion
We note that lfind's approach to candidate lemma generation imposes some important restrictions on its usage. We have already mentioned that the goal in the proof state must be unquantified. Further, the approach relies on the ability to generate examples for the stuck state, which limits it to the capabilities of current test-generation techniques. Because we reduce lemma synthesis to program synthesis, we require the ability to extract the necessary definitions as code and translate code back to Coq. Finally, because sketches for synthesis are derived from a generalization of the original goal, the generated lemmas will always have the same top-level structure as the goal. For example, if the original goal has the form A = B then the candidate lemmas will also have this form. §5.3 shows that despite these limitations, lfind can successfully identify non-trivial helper lemmas for a variety of examples. In addition, all of these limitations represent useful avenues of investigation in future work.

IMPLEMENTATION

Figure 5 illustrates the overall architecture of lfind, which leverages three black-box components: a data-driven synthesizer for candidate lemma generation; an automatic disprover for candidate filtering; and an automatic prover for candidate ranking.

Fig. 5. Given a stuck goal, lfind implements generalization, synthesis, filtering, and ranking in conjunction with existing tools to generate candidate lemmas.
Example Generation
To synthesize candidate lemmas, our approach relies on a Sample function that can produce samples for the variables in the stuck state. We leverage Quickchick [Paraskevopoulou et al. 2015], a state-of-the-art property-based randomized tester for Coq, for this purpose. While Quickchick is intended as a testing tool, we log all of the test inputs that it generates and use them as the samples from which to produce examples for synthesis.
Specifically, for each user-defined type T in the stuck goal, lfind first generates Coq code that enables the usage of Quickchick for that type (a sketch appears below). The Show typeclass is required for printing test cases, and the Arbitrary typeclass is required to combine test-case generation with an operation for shrinking test inputs. Quickchick supports automatic derivation of instances of these typeclasses for simple types. Quickchick also requires that types have decidable equality, so we derive an instance of the Dec_Eq typeclass for T.
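As a hedged sketch of what such derivations typically look like with Quickchick, for a hypothetical type tree (exact incantations can vary across Quickchick versions):

From QuickChick Require Import QuickChick.

Inductive tree : Type :=
| Leaf : tree
| Node : tree -> nat -> tree -> tree.

(* Printing of test cases. *)
Derive Show for tree.
(* Random generation combined with shrinking of test inputs. *)
Derive Arbitrary for tree.
(* Decidable equality, also required by Quickchick. *)
Instance Dec_Eq_tree : Dec_Eq tree.
Proof. dec_eq. Defined.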
Next, to produce examples for the stuck proof state, we create a Coq lemma for that state, defined as Lemma stuckState: H → g. We also create a function collect_data whose input type V is the tuple of the types of all free variables in H → g. The function logs the input values to a file and returns the valuation of H → g on those input values (a sketch appears below). Finally, we run Quickchick on this function, thereby logging samples to use for data-driven synthesis and also searching for counterexamples to the stuck proof state. If Quickchick returns any such counterexamples, then there is no way to complete the proof, so we report this to the user and halt lfind. Otherwise, we proceed with synthesis.
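A minimal, hedged sketch of such a collect_data function, assuming a hypothetical stuck state rev (rev l) = l with a single free variable l : list nat (so V is just list nat); the file-logging side effect is performed by the surrounding harness rather than in pure Gallina, so it is only indicated in a comment:

Require Import List PeanoNat.

(* Boolean valuation of the stuck state on one sampled input l.
   lfind additionally logs l to a file as a synthesis sample. *)
Definition collect_data (l : list nat) : bool :=
  if list_eq_dec Nat.eq_dec (rev (rev l)) l then true else false.

(* QuickChick collect_data.
   (* running this both logs samples and hunts for counterexamples *) *)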
Synthesis
To our knowledge, there are no data-driven synthesizers that work directly on Coq. We chose Myth [Osera and Zdancewic 2015] as our synthesizer because it accepts and generates OCaml code, for which tools exist to convert to/from Coq's language Gallina; it has a simple interface that is easy to use; and it has worked well for us in the past. Myth requires an input grammar in OCaml, so we use Coq's Extraction feature to recursively extract reachable definitions and types from the stuck goal to OCaml (a small example appears below). Additionally, we adapt Myth slightly in two ways. First, Myth supports only a subset of OCaml and does not support common syntactic sugar. For example, Myth does not support the function keyword. To get around these limitations, we wrote a translator that desugars the definitions extracted from Coq into a form acceptable to Myth. Second, we modified Myth to return a set of candidate functions sorted by size, instead of just one. This enables the generation of multiple candidate lemmas as described earlier. Finally, to substitute the synthesized OCaml function body back into our lemma sketch, we use an open-source tool, coq-of-ocaml [coq 2003].
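For illustration, a small sketch of the extraction step on a single definition; the commands are standard Coq, while lfind automates this over all definitions reachable from the stuck goal:

Require Import List.
From Coq Require Extraction.

Extraction Language OCaml.

(* Emits OCaml code for rev together with the list type and any
   auxiliary definitions it depends on. *)
Recursive Extraction rev.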
Filtering and Ranking
In §3.3, we described multiple filters to remove extraneous candidate lemmas. To implement these filters, we declare each candidate as a Coq lemma and use Quickchick to remove lemmas that have counterexamples (a sketch appears below). The remaining filters are implemented by running proof tactics using SerAPI [Gallego Arias et al. 2020], a library for machine-to-machine interaction with Coq. To rank the filtered lemmas, we use Proverbot9001 [Sanchez-Stern et al. 2020], a state-of-the-art proof-synthesis tool that uses machine learning to produce proofs of Coq theorems. Proverbot9001 takes as input definitions, a theorem that needs to be proven, and a set of axioms that can be assumed, and returns a proof script or Don't Know.
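As a hedged illustration with a made-up false candidate (rev does distribute over ++, but only with the operands swapped), the Quickchick-based counterexample filter would look roughly like:

From QuickChick Require Import QuickChick.
Require Import List PeanoNat.

(* Deliberately false candidate lemma, stated as a boolean property. *)
Definition bogus_cand (l1 l2 : list nat) : bool :=
  if list_eq_dec Nat.eq_dec (rev (l1 ++ l2)) (rev l1 ++ rev l2)
  then true else false.

(* QuickChick bogus_cand.
   (* reports a counterexample such as l1 = [0], l2 = [1],
      so this candidate is filtered out *) *)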
Discussion
In our implementation, we try to disprove each generalization eagerly, and we only carry out synthesis from generalizations for which the disprover finds a counterexample. Intuitively, if a generalization is not disprovable then it is itself a candidate lemma, and so we would rather spend our synthesis resources elsewhere. Candidate lemmas are produced incrementally, as generalization and synthesis proceed. Hence the algorithm is any-time: we can stop at any point, collect up the current set of candidates, and filter and rank them. Furthermore, we stop synthesis as soon as we get a category Λ 1 lemma since we will have a fully automated proof of the user's original goal.
Our implementation inherits the limitations of the black-box tools we rely on. Notably, Myth only supports a small subset of OCaml. As described above, we mitigate this limitation by implementing a translator, but this is not a solution that works for the full OCaml language, and so in some cases lfind can fail to produce code that Myth accepts. Myth also does not support polymorphic types.
EXPERIMENTAL RESULTS
In this section we perform experiments to answer the following research questions:
RQ1. (§5.3) How effective is lfind in synthesizing useful helper lemmas? How fast can the tool synthesize these helper lemmas? What is the impact of its filtering and ranking techniques?
RQ2. (§5.4) How does lfind's data-driven approach compare in effectiveness to prior approaches to lemma synthesis?
RQ3. (§5.5) How sensitive is lfind to hyperparameters and timeouts?
Benchmark Suite
Our approach generates candidate helper lemmas from a given proof context, and our tool is implemented as a tactic. Hence, to evaluate lfind, we invoke it at each point in the proofs where a user-provided helper lemma was used. We call these evaluation locations. Concretely, a proof state is an evaluation location if a human prover has used either the apply or rewrite tactic with a helper lemma that they created. We evaluate lfind on a total of 222 evaluation locations, drawn from the following sources.
• CLAM (140): This benchmark suite consists of 86 theorems about natural numbers as well as various data structures, including lists, queues, and trees, and it has been used to evaluate prior forms of lemma synthesis [Ireland and Bundy 1996]. These benchmarks lack associated proofs, so we converted them to Coq and manually proved each theorem (more details on this process below). Out of the 86 clam theorems, 67 require at least one helper lemma, with many requiring multiple lemmas. In total, the clam suite contains 184 unique evaluation locations that employ a helper lemma. Implementation limitations mentioned in §4.4 preclude 44 clam locations from being used for evaluation, leaving 140 remaining evaluation locations.
• Full Adder (62): This project from the coq-contribs collection formalizes a full adder and proves it correct [cir 1995]. The program first builds a half-adder circuit (which takes two binary digits and outputs two binary digits) and proves properties about it. Then the half-adder circuit is used to build a full-adder circuit (which takes two binary digits plus a "carry" digit and outputs two binary digits). Finally, the program chains together a sequence of full adders to create an adder circuit, which is proven correct. All of the 40 theorems in this project require at least one helper lemma, and the project contains 62 evaluation locations in total.
• Compiler (1): This benchmark is the compiler example from Chapter 2 of Chlipala's CPDT textbook [Chlipala 2013], which is a certified compiler from a source language of expressions to a target language of a stack machine. The final theorem formalizes the correctness of the compiler. This benchmark contains one theorem, which uses one helper lemma, which is the evaluation location. Though it contains only a single evaluation location, we chose this example as a benchmark because it showcases a different application and the required helper lemma is relatively large and complex.
• LIA (19): This benchmark suite consists of 9 theorems about data structures that require linear integer arithmetic, drawn from prior work on lemma synthesis for fully automated proofs about data structures (see Table 1 in that work). As with the CLAM benchmarks, we converted them to Coq and manually proved each theorem. Each proof requires at least one helper lemma, and there are a total of 19 evaluation locations.
The Full Adder and Compiler benchmark suites already contain full Coq proofs written by others, which in turn determine our evaluation locations. The theorems in the CLAM and LIA benchmark suites lack proofs, so each theorem was manually proven by one of three of us, with experience ranging from novice to expert in interactive theorem proving. Specifically, one person had previously done only a small class project with Coq, one has been using Coq for the past few years on a research project, and one has used it on and off for a decade. The proofs were completed independently of lfind's evaluation, and helper lemmas were used wherever the human prover deemed necessary. In §5.4, we show that the vast majority of these helper lemmas are indeed necessary, in the sense that a state-of-the-art automated prover cannot complete the proof of the theorem without a helper lemma. We have provided all the benchmarks as part of the anonymous supplementary material.
Experimental Setup
For each evaluation location, lfind generates 50 input-output examples from the current proof state and is allowed to generate candidate lemmas with a maximum timeout of 120 minutes. Despite the large search space, in §5.3 we show that the tool is performant, with a median runtime of 3.3 min. The tool uses a 12s timeout for each call to Myth and a 30s timeout for each call to Proverbot9001. In addition to the timeout parameters, two key hyperparameters of our algorithm are the choice of subterms to use for generating sketches and the number of synthesis results to obtain per sketch.
In our experiments, we generate sketches from all subterms of sort Type, and we ask for 5 synthesis terms per sketch. Empirically we have found these choices to provide good results, but we also present a sensitivity analysis of other choices for timeouts and hyperparameters in §5.5. All evaluations were performed on a machine running macOS (10.15.6) with a 2.3 GHz quad-core Intel Core i7 and 32GB of memory. Table 1 summarizes the results for all our benchmarks. We consider the use of lfind at an evaluation location to be successful in three scenarios. First, we say that lfind is successful if it can produce a candidate helper lemma that is automatically proven by Proverbot9001 and this helper lemma enables Proverbot9001 to automatically prove the user's goal. This is the best-case scenario, as lfind has produced a complete proof for the user. Second, we say that lfind is successful if a lemma that matches the human-provided lemma is ranked highly (top-10) by the tool. Third, we say that lfind is successful if a lemma that is more general than the human-provided lemma is ranked highly by the tool. We use the ⪯ operator defined in §3.3 to automatically identify whether a candidate lemma c matches or is more general than the human-provided lemma h. Specifically, we say that c matches h if both c ⪯ h and h ⪯ c, and we say that c is more general than h if h ⪯ c but not vice versa. These are reasonable success metrics for our tool, as we expect versions of the human-provided lemma to be "natural" for people to understand, and we also know that the human-provided lemma does indeed lead to a full proof of the goal. Note, however, that the metrics are conservative, as there could be other lemmas produced by lfind that are natural and appropriate but do not fall into one of the above three categories.
Synthesized Helper Lemmas
In total, based on our evaluation metrics we see that lfind succeeds in 150 (67.5%) of the 222 evaluation locations across all benchmarks. Further, as shown in the third row of the table, in 117 (78%) of these successful 150 locations, lfind was able to synthesize a lemma that led to a fully automated proof of the user's goal. Rows 4-7 of Table 1 show a breakdown of the remaining 33 successful locations. Notably, for 19 of these evaluation locations, the top-ranked candidate lemma produced by lfind matches the helper lemma provided by the human prover. These results demonstrate the effectiveness of our filtering and ranking strategies in surfacing relevant lemmas toward the top, and often as the top result. Further, 83.3% of the successful locations employed data-driven synthesis, rather than solely using generalization. In those cases, we found that on average 43.4% of the candidate lemma comes from the synthesized term, and the rest comes from the lemma sketch.
Examples. Table 2 shows examples of lemmas synthesized by lfind along with their rank and category (see §3.4 for the category notation). We describe the first four of them in detail.
The first example, from the Compiler benchmark, formalizes the correctness of a compiler from a source language of expressions to a target language of a stack machine. The type exp defines the source language of arithmetic expressions, and the evalExp function evaluates programs in this language. The target language's instructions are of type instr and are executed on a stack machine. The function execI takes an instruction and a stack (represented as a list of nats) and returns an updated stack, and execIs uses this function to execute a list of instructions. Finally, the compiler function translates source programs to lists of instructions. The theorem itself is not inductive, necessitating an inductive helper lemma that implies the theorem [Chlipala 2013]. lfind was not able to identify a helper lemma that leads to a fully automated proof of the theorem. However, it produces candidate lemmas in categories Λ2 and Λ3, and the top-ranked candidate in category Λ2, shown in the table, exactly matches the human-provided lemma. The lemma is non-trivial, as it involves multiple calls to execIs, an arbitrary list of stack instructions l, and an arbitrary stack s.
The second example is from the Full Adder benchmark. The theorem says that converting the result of the binary addition to a natural number yields the same natural number as performing the addition on the natural-number conversions of the inputs. We present a synthesized helper lemma in Table 2; it belongs to category Λ1 and hence led to a full proof of the theorem.
The third example in the table is from the clam benchmark suite and proves the equivalence of two functions for converting a binary tree into a list. For this example, lfind produced candidate helper lemmas in both categories Λ 2 and Λ 3 . The tenth-ranked candidate, shown in the table, matches the human-provided lemma.
The fourth example in the table is from the LIA benchmark suite and reasons about how pushing onto a queue affects its length. This is a case in which our evaluation does not deem lfind to have succeeded, since it does not produce a fully automated proof and does not produce a match for the human-provided lemma in the top ten results. However, the top-ranked result, shown in the table, is very close to the human-provided lemma: replacing its term (rev l1) with l1 yields the human lemma. Further, this lemma is itself equally useful in completing the proof, despite being slightly more complex.
Runtime Performance. Figure 6 plots the runtime distribution of lfind across all 222 evaluation locations. The tool ran to completion on each of these benchmarks with a median runtime of 3.3 min (shown in the plot where the curve labeled Total Time reaches a CDF of 0.50). Recall that lfind produces a fully automated proof (category Λ1) in 78% (see Table 1) of the successful evaluation locations. As shown by the curve labeled Time to Category 1 in Figure 6, the median and 75th percentile runtimes of the tool were only 1.2 min and 3.0 min, respectively. These runtimes indicate the viability of our approach and its instantiation in lfind to support interactive usage.
Impact of Filtering and Ranking. Figure 7 provides a detailed view of how many candidate lemmas were generated and filtered for the results presented in Table 1. As explained in §3.3, our approach indeed generates a large number of candidate lemmas. For example, lfind generates a median of 150 candidate lemmas per evaluation location across the benchmarks (shown where the solid curve reaches a CDF of 0.50). However, our filtering techniques are very effective in removing useless lemmas. As mentioned in §4, we filter Invalid candidates (labeled Filter 1 in the figure) as we generate candidate lemmas. We then filter lemmas (labeled Filter 2) that are either syntactically similar to each other, or trivial, or restatements or special versions of the theorem statement. After Filter 1, the median number of lemmas is reduced to 100. Further, after Filter 2 there is a median of 16 candidate lemmas. Hence, at the median, Filter 1 reduced the candidate lemmas by 33%, and Filter 2 reduced the remaining candidates by 84%.
Finally, as mentioned above, even after filtering we are left with a median of 16 lemmas per evaluation location. This highlights the importance of our ranking strategy, which was already shown to be effective in the results of Table 1.

Failure Analysis. First, there are 23 cases where lfind failed to identify a candidate lemma due to the specific choice of tool hyperparameters (rows 1-2 of Table 3). Recall that lfind is successful if it can produce a category Λ1 lemma or if a lemma ranked in the top 10 matches the human-provided lemma. As shown in the first row of Table 3, in 8 cases lfind identifies a candidate lemma that matches a human lemma, but it is not ranked in the top 10. The second row contains 15 cases where Myth synthesizes a term that leads to the required candidate lemma, but that term is not among the k (5 in our case) smallest terms that Myth produces, and so we do not use it (see §5.2). Second, there are 49 cases where lfind failed due to algorithmic limitations (rows 3-5 of Table 3), which represent useful avenues of investigation in future work. lfind produces generalizations by replacing non-variable terms in the goal state with fresh variables (see §3.2). However, in four cases (row 3), the generation of a successful candidate lemma requires a generalization that is formed by replacing repeated variables in the goal state with fresh variables. Next, in four cases (row 4), the human-provided helper lemma contains functions that are not part of the goal state, and therefore the grammar generated by lfind for synthesis does not contain those functions. Finally, recall that lfind produces lemma sketches from a generalization of the original goal, so candidate lemmas will always have the same top-level structure as the entire goal. However, 41 cases (row 5) require a helper lemma that is used to rewrite a particular subterm of the goal, and lfind is unable to generate such lemmas.
Comparison with Other Approaches
In this section, we compare lfind against the three most relevant approaches. First, we compare against a state-of-the-art automated prover attempting to complete the proof from the evaluation location (proof context). Second, we compare against generalization [Boyer and Moore 1979; Kaufmann and Moore 1997], which is the most common lemma synthesis technique in interactive theorem provers. Finally, we compare against ADTInd, which is a state-of-the-art lemma synthesis technique in a non-interactive setting.
No Synthesis. In this study, we ran Proverbot9001 on each evaluation location across all benchmarks, without providing any synthesized lemma from lfind. Proverbot9001 can automatically prove only 27.5% of the evaluation locations. In contrast, with a lemma synthesized by lfind, Proverbot9001 can automatically prove 52.7% of the evaluation locations (117 out of 222), and as shown earlier overall lfind provides a useful lemma in 67.5% of the cases. This experiment highlights the need for lemma synthesis and shows how our work complements existing work on automated proofs. These results also serve as a measure of the quality of the human proofs, as the human-provided lemmas are required in the vast majority of cases. Situations where a lemma is used but not needed could arise due to the inexperience of the human prover or simply for readability purposes.
Generalization. To our knowledge, there is no existing implementation of generalization for Coq. As part of lfind, however, we have implemented generalization in Coq, and furthermore, we have implemented an exhaustive version of it: each subset of terms in the current goal state induces a candidate lemma through generalization. This implementation allows us to perform an apples-to-apples comparison between generalization and our approach. Hence, for this comparison, we disable lfind's synthesis process, so Myth is not used at all, but all other parts of lfind work as described earlier. This version of lfind can be seen as a best-case version of the generalization technique [Boyer and Moore 1979; Kaufmann and Moore 1997], since we exhaustively consider all possible generalizations, while prior tools typically choose only one or a small number of generalizations heuristically [Chamarthi et al. 2011]. According to our success metrics defined in §5.3, a generalization is deemed useful in only 11.3% of all evaluation locations, as compared with 67.5% of locations for lfind.
Enumerative Synthesis. We compare against ADTInd, a state-of-the-art lemma synthesis technique for fully automated theorem proving. Like lfind, ADTInd employs generalization as well as lemma sketches, but ADTInd fills these sketches via grammar-based term enumeration. Unfortunately, a direct tool comparison against ADTInd would not be informative due to several important differences between the tools: (1) Unlike lfind, which automatically generates lemma sketches from the current proof state, ADTInd requires user-provided lemma sketches. The choice of sketches has a large impact on which theorems can be successfully proven.
(2) Since lfind is intended for an interactive setting, the user indicates the proof state from which to carry out lemma synthesis, whereas ADTInd automatically decides where to invoke lemma synthesis. (3) ADTInd is a fully automated prover, so in addition to lemma synthesis, it also has its own proof search algorithm. Hence whether ADTInd can prove a theorem or not depends in large part on the power of that proof search algorithm. Further, that algorithm is very different from the neural-based prover that lfind uses for filtering and ranking. To avoid these confounding differences and enable a fair comparison of ADTInd's and lfind's lemma synthesis techniques, we substitute ADTInd's enumeration-based synthesis approach for our data-driven approach but keep everything else the same in lfind: the same automatically generated lemma sketches, filtering and ranking, and success metrics.
To perform this comparison, we have created a version of lfind that does not provide any examples to Myth whenever it is invoked, but is otherwise identical to lfind. Without examples, all terms of the desired type will be considered by Myth to meet the given specification, so the effect is that Myth will perform a type-guided enumeration through the given grammar. This ADTInd-like version of lfind synthesizes a successful helper lemma according to our success metric in 79 evaluation locations, whereas the unmodified lfind does so in 125 evaluation locations. Note that these results exclude cases where generalization produces the useful lemma for an evaluation location since the two versions of lfind are identical in those cases. These results demonstrate the benefits of data-driven synthesis: the examples act as a specification that allows for early filtering of candidate lemmas, which in turn enables the synthesizer to provide higher-quality candidates.
Sensitivity
As described in §3, lfind has two hyperparameters: (1) the number of synthesis results per sketch, and (2) which terms to select for generating sketches. Further, as described in §4, lfind uses Proverbot9001 to rank candidate lemmas and the Myth synthesis engine for term generation. We limit the time spent in each of these tools to efficiently search over the large space of candidate lemmas using available resources. We carry out four separate experiments on the largest benchmark suite (clam, with 140 evaluation locations) to understand lfind's sensitivity to each of these parameters. To quantify the sensitivity of a parameter, in each experiment we vary one parameter while fixing all others.

Number of Synthesis Terms. In the first experiment, we vary the number of synthesis results (k) that we ask of Myth per sketch. We generate sketches from maximal subterms, and use 10s and 12s timeouts for Proverbot9001 and Myth, respectively. We study the sensitivity to this parameter by varying k to be 5, 15, and 25. Respectively for these settings, lfind is successful in 85, 89, and 80 clam evaluation locations. There is a modest 4.7% increase in effectiveness from k = 5 to k = 15, since more of the large search space of candidate lemmas is explored as k increases. However, there is a significant drop in effectiveness from k = 15 to k = 25: as the search space increases, the useful candidates can more easily fail to be highly ranked. Figure 8 plots the total runtime for different values of k, and as expected, the median total time increases with increasing k. The median total time for k = 5 is 4.4 min (labeled Top 5), while it is 8.0 min and 10.9 min for k = 15 and k = 25, respectively (labeled Top 15, Top 25). We pick k = 5 as the optimal number of synthesis terms for the remaining experiments, since the increase to k = 15 has a large time cost and only a modest effectiveness benefit.

Fig. 9. Median runtime of lfind decreases with an increase in the Proverbot9001 timeout. While this is unintuitive, it occurs because the prover is allocated more time per call, enabling it to prove a candidate lemma that was otherwise not provable using a smaller timeout.
Proverbot Timeout. In the second experiment, we vary the Proverbot9001 timeout to be 5s, 10s, 15s, 30s, and 60s, setting k = 5 and keeping other parameters the same as in the previous experiment. Respectively for these settings, lfind is successful in 50, 85, 94, 97, and 102 clam evaluation locations. The tool performs poorly with a 5s timeout, since Proverbot9001 spends the first few seconds in setup, leaving too little time for the actual proof search. Figure 9 plots the runtimes for the 10s, 15s, 30s, and 60s timeout cases. The median total runtime for 10s (labeled 10 seconds) is 4.4 min, while it is only 3.4 min for 15s (labeled 15 seconds), 2.6 min for 30s (labeled 30 seconds), and 3.2 min for 60s (labeled 60 seconds). It is perhaps unintuitive that allowing Proverbot9001 more time leads to lower total time, but the additional time for Proverbot9001 can allow it to complete a proof that would otherwise not be possible, thereby finding a category Λ1 lemma sooner. Therefore, we pick 30s as the optimal timeout parameter for Proverbot9001 for the experiments in §5.3.
Myth Timeout. The third experiment varies the Myth timeout to be 8s, 12s, and 16s, updating the Proverbot9001 timeout to 15s and keeping other parameters the same as in the second experiment. Respectively for these settings, lfind is successful in 87, 94, and 94 clam evaluation locations. Figure 10 plots the total runtime for these timeout values. Despite the increasing timeouts, the total runtime is very similar among the three settings, with median runtimes of 3.1 min, 3.4 min, and 3.5 min for 8s, 12s, and 16s, respectively. Therefore, we pick 12s as the optimal timeout parameter for Myth.

Sketch Generation. In this final experiment, we explore two choices for sketch generation, using k = 5, a 15s timeout for Proverbot9001, and a 12s timeout for Myth. We generate synthesis sketches from (1) all subterms of sort Type or (2) only maximal subterms of sort Type. To make the use of maximal terms more feasible, for that setting we also use a heuristic that requires the synthesized expression to refer to all generalized variables from the sketch. The use of all terms is successful in 98 evaluation locations, while the use of maximal terms is successful in 94 locations. Figure 11 plots the total runtime for these settings, and as expected, the total runtime is higher when generating sketches from all subterms compared to only maximal subterms. However, the difference in the median runtime is only one minute. Therefore, we pick all subterms as the optimal parameter for sketch generation.

Fig. 11. There is a modest increase in the median runtime of lfind, from 3.4 min to 4.5 min, when generating synthesis sketches from all subterms compared to maximal subterms.

RELATED WORK

Lemma Synthesis

As described in §1, there are a variety of existing approaches to lemma synthesis [Johansson 2019], and they broadly fall into two categories. Many techniques perform rewrites on the target theorem or the current proof state in order to identify stronger induction hypotheses and helper lemmas. Most common among these is the generalization technique [Aubin 1976; Boyer and Moore 1979; Castaing 1985; Dixon and Fleuriot 2003; Hesketh 1992; Hummel 1990; Kaufmann and Moore 1997], whereby selected terms are replaced by fresh variables. Other works go beyond generalizing variables to a broader set of rewrites [Bundy et al. 1993; Johansson et al. 2010; Kapur and Subramaniam 1996; Sonnex et al. 2012]. For example, the rippling technique [Bundy et al. 1993] employs a set of rewrite rules in order to make the current goal match the induction hypothesis.
The other category synthesizes candidate lemmas from a grammar using bottom-up enumeration [Claessen et al. 2010, 2013; Heras et al. 2013; Johansson et al. 2011, 2014; Montaño-Rivas et al. 2012; Reynolds and Kuncak 2015]. Candidate lemmas are typically filtered by searching for counterexamples [Claessen and Hughes 2000], and in many systems an automated prover is used to try to prove the remaining candidates. Closest to our work is AdtInd, which employs bottom-up enumeration in order to search for candidate lemmas in the context of an automated prover for abstract datatypes. Like lfind, AdtInd leverages both generalization and sketches (which they call templates) for synthesis, but it is unclear how generalizations are chosen, and the sketches are user-provided. Heras et al. also combine enumeration with lemma sketches [Heras et al. 2013], but there the sketches are automatically learned from a set of existing theorems.
lfind's key innovation over these prior works is showing how to reduce the problem of lemma synthesis to a form of data-driven program synthesis. Versus the first category of approaches, lfind explores a wider space of potential lemmas via grammar-based synthesis and can leverage off-the-shelf program synthesizers. Versus the second category of approaches, lfind generates candidates that are directly targeted toward the current goal, which is critical in an interactive setting. However, our approach borrows several techniques from these prior works. First, lfind also employs generalization, but it is used not only to directly produce candidate lemmas but also as the basis for producing sketches for program synthesis. Second, lfind employs counterexample search to filter candidates, which has been previously used for filtering in both of the earlier approaches [Chamarthi et al. 2011; Claessen et al. 2010]. Third, lfind also employs automated provers, though due to the interactive setting we use them to rank rather than verify candidates.
Data-Driven Invariant Inference
Data-driven invariant inference has been widely used for various software engineering tasks, at least since Ernst's dissertation on inferring likely program invariants from data [Ernst 2000]. In this approach, data about concrete program executions is used to generate positive and/or negative examples, and the goal is to synthesize a predicate that separates these two sets of examples. Recently these techniques have become state of the art for automated program specification and verification [Astorga et al. 2019; Ezudheen et al. 2018; Garg et al. 2014, 2016; Padhi et al. 2016; Zhu et al. 2018]. For example, prior work has shown how to generate examples for data-driven synthesis of loop invariants that are sufficient to prove that a function meets its specification [Garg et al. 2014, 2016; Padhi et al. 2016]. Closest to our work is the Hanoi tool [Miltner et al. 2020], which infers likely representation invariants to aid users of interactive theorem provers in proving that a data structure implementation meets its specification.
As described in Section 1, the existing data-driven verification techniques fundamentally exploit the specific kind of invariant being targeted, which has a clear logical specification over a fixed set of variables. This enables a natural approach based on CEGIS [Solar-Lezama 2009] for both generating examples and verifying candidate invariants. Our setting of lemma synthesis is more general and poses a challenge for data-driven inference, as we lack both a fixed set of variables for the lemma and clear criteria upon which to classify examples as positive or negative. Hence, we have devised a new reduction to data-driven program synthesis: lfind produces sketches from generalizations of the goal state and generates examples for synthesis using the heuristic that a synthesized term should behave consistently with the term that it replaces. We have also developed new approaches to filtering and ranking lemma candidates, to address the lack of clear success criteria in our setting.
Automated Proofs for Interactive Theorem Provers
A variety of tools exist for automatically generating proofs in interactive settings, both in Coq and other languages. Recent techniques use a form of machine learning, for example a neural network, to guide a heuristic proof search, given a set of proof tactics as well as a set of existing lemmas/theorems [Bansal et al. 2019; First et al. 2020; Gauthier et al. 2017; Huang et al. 2019; Paliwal et al. 2020; Sanchez-Stern et al. 2020; Yang and Deng 2019]. Another class of techniques serializes the proof context into a format for input to an external automated solver and then serializes the resulting proof back into the interactive theorem prover [Blanchette et al. 2011; Czajka and Kaliszyk 2018; Kaliszyk and Urban 2015a,b].
Our contribution is orthogonal to these works, which do not perform lemma synthesis. For example, while the machine-learning-based approaches leverage existing lemmas as part of the proof search, they will fail if the existing lemmas are not sufficient. As we showed in §5.3, lfind can improve the capabilities of Proverbot9001 [Sanchez-Stern et al. 2020], a state-of-the-art automated prover for Coq based on neural networks, by synthesizing lemmas that allow it to prove goals that it otherwise could not. lfind uses Proverbot9001 to rank candidate lemmas and produce proofs for those that are fully automatable. However, our approach is independent of the particular prover used, and so, for example, it could instead employ a solver-based prover like CoqHammer [Czajka and Kaliszyk 2018] or even multiple provers to leverage their relative strengths.
CONCLUSION
In this paper, we developed a new approach to lemma synthesis for interactive proofs that is both goal-directed and expressive. The key technical contribution is a new reduction from the general lemma synthesis problem to a data-driven program synthesis problem. The approach leverages the information available in a given stuck proof state in multiple ways: sampling variable valuations for example generation, generalizing the state to systematically introduce new variables for synthesis, and deriving synthesis sketches from the current goal. We also describe several techniques for filtering and ranking candidate lemmas, which are critical in an interactive setting. While the problem of lemma synthesis is hard in general, the experimental evaluation of our resulting tool lfind demonstrates the promise of the approach and quantifies the benefits over other approaches. | 2022-11-01T13:16:15.163Z | 2022-10-31T00:00:00.000 | {
"year": 2022,
"sha1": "cfa086d20ee4bc540d6ca1d2f5321019be6e067d",
"oa_license": null,
"oa_url": "https://dl.acm.org/doi/pdf/10.1145/3563306",
"oa_status": "GOLD",
"pdf_src": "ACM",
"pdf_hash": "cfa086d20ee4bc540d6ca1d2f5321019be6e067d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
1427603 | pes2o/s2orc | v3-fos-license | Hepatitis B Seroprotection and the Response to a Challenging Dose among Vaccinated Children in Red Sea Governorate
AIM: To assess the long-term effectiveness of the hepatitis B virus vaccine and the need for a booster dose among children who received three doses of the vaccine during infancy in Red Sea Governorate. METHODS: A cross-sectional study was performed. Interviews with children (9 months to 16 years) and their parents were conducted. Blood samples were collected and tested for hepatitis B markers. Children showing no seroprotection received a booster dose, and their anamnestic response was assessed four weeks and one year later. RESULTS: None of the participants had evidence of chronic hepatitis B. The seroprotection rate was 23.3%, and it significantly decreased with age. Multivariate logistic analysis revealed that older age was the significant predictor of having no seroprotective level, while a baseline anti-HBs level < 3.3 IU/L was the predictor of not developing an early anamnestic response or of losing the late anamnestic response. CONCLUSION: Long-term immunity persists among children who received the complete series of hepatitis B vaccination during infancy, even in the absence or reduction of anti-HBs over time. Therefore, a booster dose is not necessary to maintain immunity until the age of sixteen.
Introduction
HBV infection is a leading cause of acute and chronic liver disease, cirrhosis, and hepatocellular carcinoma worldwide. The WHO estimates that, globally, about 2 billion people have been infected with the hepatitis B virus, more than 350 million are chronically infected, and nearly 1 million die every year from acute or chronic sequelae of primary HBV infection [1,2]. In Egypt, 75%-85% of patients with chronic liver disease have HBV or HCV infection as a contributing cause [3].
Mass vaccination of neonates and preschool children has been strongly recommended by the WHO [4,5]. Hepatitis B vaccines are highly effective and safe and the public should be educated about the importance and necessity of hepatitis B prevention by vaccination [6,7].
It has been demonstrated that 90-99% of healthy neonates, children, adolescents, and adults develop protective levels of anti-HBs following a standard vaccination course with hepatitis B vaccine. Serum levels of anti-HBs ≥ 10 IU/L are considered to be protective against HBV infection [7][8][9]. Loss of detectable concentrations of antibodies (anti-HBs) to hepatitis B surface antigen (HBsAg) does not necessarily indicate loss of immunity. The presence of HBV-specific immune memory can be demonstrated by administering a challenging (booster) dose of vaccine and measuring anti-HBs response. A rapid increase in anti-HBs represents an anamnestic response and is considered to indicate the presence of HBV-specific immune memory [10].
In Egypt, HBV vaccination was included in the Expanded Program on Immunization (EPI) in 1992, with injections at 2, 4, and 6 months of age using the recombinant vaccine. The Ministry of Health and Population has implemented a wide range of prophylactic strategies to control viral hepatitis. It was reported that the prevalence of hepatitis B surface antigen (HBsAg) positivity among healthy individuals decreased from 10.1% in 1985 to 1.18% in 2008, and that the frequency of acute HBV infection as a cause of symptomatic hepatitis decreased significantly from 43.4% in 1983 to 28.5% in 2002 [11,12].
The aim of the present study is to assess the long-term effectiveness of the HBV vaccine and the need for a booster dose among children in Red Sea Governorate who were fully vaccinated during infancy.
Subjects and Methods
The present study is part of a national community-based survey with a multi-stage cluster sampling design, carried out in the period from July 2010 to June 2013 in 6 governorates representing all geographic regions of Egypt. For the sampling process and cluster selection, probability-proportional-to-size sampling was used. The sample frame for the survey was based on the most recent population census (2006). According to the population size of each governorate, the number of participating clusters was identified. First, for implicit stratification by geographic location within each governorate, lists of shakhas, medinas, and villages were arranged in serpentine geographic order. This stratification was done independently for urban and rural areas. A sampling interval was calculated, and accordingly a random number was selected using a table of random numbers. From these lists, areas such as villages or city blocks were selected. In each selected area, lists of maternal and child health (MCH) centers, kindergartens, and school facilities were identified, and 5 facility clusters were randomly selected.
The current work presents the part of the project results concerning Red Sea Governorate. From this governorate, 189 children from Hurghada were recruited. The study was conducted at the facility level: 5 facilities (one maternal and child health (MCH) center or health unit, one kindergarten, and 3 schools (1 primary, 1 preparatory, and 1 secondary)) were randomly selected from each area according to the age of the targeted children.
The study protocol was approved by the ethical committees of the Ministry of Health, the National Research Center, and the Ministry of Education. A close-ended questionnaire was designed and tested. For quality assurance, training sessions for supervisors, interviewers, and Ministry of Health staff in Red Sea Governorate were carried out. Peel-off barcode sheets were prepared and used for easy tracking of blood samples and laboratory results. Inclusion criteria were age from 9 months to 16 years and receipt of the full 3 compulsory doses of HBV vaccine during infancy. Signed written consent was obtained from each guardian. Face-to-face interviews were carried out with the parents or caretakers of the children. Adolescents above 10 years were also interviewed after their verbal assent, which was obtained in the presence of their parents or class teachers. The questionnaire was used to collect data about the child's age, sex, date of birth, and other demographic and socioeconomic variables. Data were also collected concerning the children's HBV vaccination, and available vaccination cards were reviewed for the dates and dose intervals of HBV vaccination. Socioeconomic status (SES) was determined according to Fahmy and El Sherbiny [13]; it depends on parents' education, maternal working status, water source, sewage disposal, and availability of electricity, with some modification to include family income. Non-responders to the booster dose were further given 2 additional doses of HBV vaccine with a 2-month interval in between.
Out of the 88 children found to have non-seroprotective levels, 45 participated and accepted a booster dose of 10 μg monovalent Euvax HB vaccine, given intramuscularly in the front of the left thigh for children < 3 years and in the deltoid muscle for older children. A blood sample was withdrawn from each child for quantitative detection of anti-HBs one month and one year post-booster, in order to assess their early and late anamnestic responses, respectively. An anamnestic response was defined as a rise in anti-HBs to ≥ 10 IU/L [14]. Three children who did not develop a post-booster anti-HBs level ≥ 10 IU/L were then offered two additional doses of Euvax HB vaccine (a second vaccination series) 1 and 6 months post-booster. One month later, a blood sample was withdrawn from only 2 of these children for quantitative detection of anti-HBs to assess their immune response.
Blood sampling and Laboratory Analysis
A 3-5 ml blood sample was withdrawn aseptically from each participant. Serum samples were aliquoted into two labeled sterile cryotubes and stored at -20°C until laboratory examination. HBV marker detection was carried out in the Virology Lab of the Microbiology Department, Faculty of Medicine (for Girls), Al-Azhar University, Cairo, Egypt, using commercially available enzyme-linked immunoassays (ELISA, DiaSorin, Italy) according to the manufacturer's instructions. Quantitative detection of serum anti-HBs and qualitative determination of serum total HBV core antibody (anti-HBc) and hepatitis B surface antigen (HBsAg) were performed. According to international standards, anti-HBs ≥ 10 IU/L was considered protective against HBV infection [14,15]. Vaccinees who developed an anti-HBs level between 10 and 100 IU/L after the full vaccination course are referred to as low responders, while those above 100 IU/L are good responders [16]. Seropositivity was defined as anti-HBs above a cutoff of 3.3 IU/L [17]. Breakthrough infection was defined as anti-HBc seropositivity in vaccinated subjects who were not chronically infected [18].
Data analysis and presentation
Data entry was carried out in an Excel sheet, and statistical analysis was performed using the SPSS software program version 18.0 (SPSS Inc., Chicago). The geometric mean titer (GMT) was calculated to indicate the central tendency of anti-HBs titers, in consideration of the skewed distribution of anti-HBs levels. For the calculation of the GMT, children who had an undetectable anti-HBs titer were assigned a nominal serum anti-HBs titer of 0.05 IU/l [13]. The χ²-test was performed for qualitative data, which are presented as numbers and percentages. The t-test was performed for comparisons between two means, and one-way analysis of variance for more than two means. When data were not normally distributed, the Mann-Whitney U test was used. Multivariate logistic analysis was performed to identify the risk factors significantly associated with non-seroprotection and with failure to develop early and late anamnestic responses. A P value less than 0.05 was considered statistically significant, and a P value less than 0.01 highly significant.
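For reference, the GMT of n titers t_1, ..., t_n (with 0.05 IU/l substituted for undetectable titers, as described above) is the standard geometric mean:

\[ \mathrm{GMT} \;=\; \Big(\prod_{i=1}^{n} t_i\Big)^{1/n} \;=\; \exp\Big(\frac{1}{n}\sum_{i=1}^{n} \ln t_i\Big) \]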
Results
The total number of studied children was 189 (100 boys and 89 girls), aged 9 months to 16 years, with a mean age of 9.6 years. None of the participants had evidence of chronic HBV or breakthrough infection. Table 1 shows that the non-seroprotection rate significantly increased with age (17.2% for the age group < 5 years versus 66% for the age group > 10 years, P < 0.05). There was no significant difference in the seroprotection rate with regard to gender, socioeconomic status, or the other studied variables.
Only 45 children with non-seroprotective levels received the booster dose, and 93.4% of them developed an anamnestic response. Figure 1 presents the baseline seroprotection rate and the post-booster anamnestic response rate among the studied children. At baseline, out of the 189 studied children, only 44 (23.3%) had a good level of seroprotection (anti-HBs ≥ 100 IU/L) and 88 (46.6%) had a non-seroprotective level (< 10 IU/L). As regards the early post-booster anamnestic response, 75.6% developed a good anamnestic response, 17.8% developed a low anamnestic response (10-99 IU/L), and 6.7% were non-responders. Out of the 42 children who developed an early anamnestic response, 41 (97.6%) were followed up one year later to assess the persistence of the response (late anamnestic response). It was found that 31 children (75.6%) still retained their anamnestic response (anti-HBs ≥ 10 IU/L), while 10 children (24.4%) had lost their protective level. Table 2 presents the baseline, early, and late anamnestic response anti-HBs GMTs in relation to gender, age, and pre-booster level. There was no difference between the sexes as regards pre-booster and post-booster (early and late anamnestic response) anti-HBs GMT. While children aged ≤ 10 years had a significantly higher pre-booster anti-HBs GMT than older children, the post-booster (early and late) anti-HBs GMT did not differ between the two age groups.
When pre-booster non-seroprotective anti-HBs levels were divided into seronegative (< 3.3 IU/L) and seropositive (≥ 3.3 IU/L), the post-booster early anamnestic response GMT was significantly higher among children with a seropositive level of anti-HBs than among those with a seronegative level (848.6 ± 1.3 and 64.3 ± 18.0, respectively; P = 0.004). However, there was no statistically significant difference between the two groups in the late anamnestic response GMT. Multivariate logistic analysis revealed that older age was the only significant predictor of having no seroprotective level, with AOR = 3.6 and 8.5 among children aged 5-10 years and older, respectively, compared with those < 5 years (P < 0.01). The significant predictor of not developing an early anamnestic response, or of losing the late anamnestic response, was a baseline anti-HBs level < 3.3 IU/L, with AOR = 4.7 and 2.0, respectively (P < 0.001 for both) (Table 3).
Discussion
Universal infant vaccination will be the key to the elimination and subsequent eradication of hepatitis B, which will require a long-term worldwide commitment to continued vaccination. Further progress towards the elimination of HBV transmission will require sustainable vaccination programs with improved vaccination coverage [6,19].
In the current study, the 189 studied children had received three doses of HB vaccine during infancy, and none of them had evidence of chronic HBV or breakthrough infection. The overall seroprotection rate among all studied children was 53.4%. This rate is similar to that of a study carried out in Dakahlya, Egypt, where the seroprotection rate was 57.7% [20], and in Turkey [21]. However, higher percentages were reported among Chinese, Italian, and Iranian children (66.4%, 64%, and 81%, respectively) [9,22,23]. The present study also found that 82.8% of participants aged less than 5 years were HBV-seroprotected and that the seroprotection rate significantly decreased with age, which is in agreement with Yazdanpanah et al. [24]. Lower percentages (68.1%) were reported by Zhu et al. for the same age group [9]. For older children (≥ 6 years), Liop et al. and Behjati et al. reported seroprotection rates > 70% [25,26].
The seroprotection rate was higher among boys (54.0%) than girls (52.8%) and among children of higher socioeconomic status (60.0%) compared with those of very low socioeconomic status (39.4%), but with no statistically significant difference. Similarly, no statistically significant differences were reported as regards gender [5,21,22,24]. However, Sami et al. found that in Gharbeya Governorate in Lower Egypt, among 762 children, the non-seroprotection rate was significantly higher among girls (50.1%) compared with boys (33.6%) (P < 0.0005), while the seroprotection rate was significantly higher among individuals of high socioeconomic level (61.2%) compared with those of very low class (49.6%) (P < 0.05) [27].
Another Egyptian study carried out on 64 children found that the non-seroprotective rate of anti-HBs was significantly higher among girls than boys aged > 6 years, while no statistical difference was found among children < 6 years [28].
The present study showed that 46.6% of the studied children had non-seroprotective levels of anti-HBs at baseline and that 93.4% of those receiving the booster dose developed an anamnestic response. Seventy-five percent of them developed a good anamnestic response (anti-HBs ≥ 100 IU/L) and 6.7% were non-responders. Similarly, in Egypt, it was found that 91.7% of non-seroprotected children developed an anamnestic response, and 77.8% of them showed a good response [27]. Moreover, Eldesoky et al. found that 97.2% of non-seroprotected children developed a good anamnestic response [7]. A Turkish and an Italian study also reported that over 95% of non-seroprotected children developed an anamnestic response post-booster [21,22]. A lower percentage of children developed a post-booster anamnestic response in Iran (78.1%) and in Taiwan [24,29]. Booster doses of hepatitis B vaccine after primary vaccination in immunocompetent individuals are currently not recommended by either the WHO or the European Consensus Group on Hepatitis B Immunity. A decrease of antibody concentrations below the seroprotection level, or even below detection levels, is not considered an indicator of loss of protection [30]. The real threat may emerge when vaccinated subjects begin to engage in high-risk behaviors for HBV transmission in areas of high endemicity. Boosters for certain high-risk groups or for individuals living in areas of high endemicity may be a reasonable alternative [31][32][33].
In this study, a post-booster anamnestic response was detected among 100% of non-seroprotected children having a pre-booster level ≥ 3.3 IU/l, compared with only 90.3% of those having a pre-booster level < 3.3 IU/l (P > 0.05). Similarly, but to a lesser extent, Steiner et al. reported that 90.5% of children who had anti-HBs antibody concentrations > 3.3 IU/l reached the seroprotection level after receiving the booster dose, compared with 81.0% of seronegative children [34].
In the current study, girls had higher pre-booster and post-booster anti-HBs GMTs than boys, but with no statistically significant difference between the sexes (P > 0.05). Children aged ≤ 10 years had a significantly higher pre-booster anti-HBs GMT than those aged > 10 years (P < 0.05), but there was no difference between the two age groups in the post-booster anti-HBs GMT. These results are similar to those of other studies in Alaska, Turkey, and Iran, which found that the pre-booster anti-HBs GMT significantly decreased with age [10,21,24]. However, in two Iranian studies, it was found that the post-booster anti-HBs GMT among girls was significantly higher than among boys (P < 0.05) [23,24]. The statistically significant difference between the mean antibody concentrations before and after the booster dose shows the vigorous response of the immune system to the booster and suggests that immunologic memory is good [24]. In the current study, 100% of the children ≤ 10 years and 91.1% of adolescents developed an anamnestic response (P > 0.05). Similarly, in Alaska, 99% of children and 83% of adolescents [10], and 100% of children in Taiwan [32], developed an anamnestic response to a booster dose. Similar results were also reported in [21,24], which also found no significant difference in the post-booster anamnestic response rate with regard to age. In contrast, the anamnestic response rate in Alaskan children decreased with increasing age [10].
In the current study, the anamnestic response rate to the booster dose did not vary between boys and girls (91.3% and 95.5% respectively, P > 0.05). Similarly, in Gharbeya governorate, the anamnestic response rate was 90.1% among boys and 92.5% among girls (P > 0.05) [27]. On the contrary, Su et al. found that Taiwanese adolescent females consistently had a greater anamnestic response rate than males (P < 0.001) [35].
In the present study, 75.6% of children still retained their anamnestic response one year post-booster. This was also found in a study in Gambia, where 80% of boosted participants still had detectable antibodies [8], and in Canada, where 99.3% of subjects in the Engerix-B 10 µg (EB) group and 100% in the Recombivax-HB 2.5 µg (RB) group had detectable anti-HBs one year after the booster [31].
In this study, 6.7% of children were non-responders after the booster dose, and two of the three children who failed to generate an early anamnestic response were offered another two doses of recombinant HBV vaccine. Similarly, the 3% of Italian children and 4% of Italian recruits who remained negative for anti-HBs were offered a second course of vaccination, with two additional vaccine doses given at 1 and 6 months after the first booster injection [22]. Also, 3% of boosted participants in Gambia did not mount a detectable antibody response following a booster dose of HBV vaccine [8]. Samandari et al. and Su et al. reported that 28.7% and 31% of Alaskan and Taiwanese adolescents, respectively, did not respond to a booster dose [10,35]. Logistic regression analysis of predictors of non-seroprotection revealed that older age was the only significant predictor of lacking a seroprotective level, while a baseline anti-HBs level < 3.3 IU/L was the only significant predictor of failing to develop an early anamnestic response or of losing the late anamnestic response. On the contrary, there was no significant difference in HBV booster non-response between subjects below and above 16 years of age in a study by Lu et al. [32]. In another study, by Wang and Lin, the risk of non-response to booster vaccination was highest among adolescents who smoked cigarettes and chewed betel-quid [36].
The study concluded that long-term immunity exists among children who completed the hepatitis B vaccination series during infancy, even when anti-HBs levels decline or become undetectable over time. The HBV vaccine is highly protective against HBV infection, as evidenced by the absence of HBV infection in the vaccinated groups. The longer the time elapsed after vaccination, the lower the seroprotection rate and the lower the mean anti-HBs. More than 93% of vaccinated individuals who lost protective antibody levels developed a rapid anamnestic response when boosted, indicating the presence of immune memory. Therefore, routine booster doses of hepatitis B vaccine are not necessary to sustain at least 16 years of protection in immunized individuals.
"year": 2016,
"sha1": "eb48256df401009d6bc7e334691cc459498d04e2",
"oa_license": "CCBY",
"oa_url": "http://www.id-press.eu/mjms/article/download/oamjms.2016.043/1014",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb48256df401009d6bc7e334691cc459498d04e2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Hepatitis of unknown etiology in children: What we know and what we can do?
Acute hepatitis of unknown etiology in children has attracted great concern since March 2022. The disease was first reported by Public Health Scotland. Cases increased rapidly and have now been reported in 33 countries worldwide. Cases are predominantly aged under 5 years. Most patients presented with jaundice, and, remarkably, some cases progressed to acute liver failure. The etiology has not yet been fully elucidated, and investigations are ongoing. Adenovirus infection seems to be an important factor. Several hypotheses on the etiology have been proposed. This review aims to summarize current research progress and put forward some suggestions.
Introduction
On 31 March 2022, Public Health Scotland was first alerted to five children aged 3-5 years admitted to the Royal Hospital with severe acute hepatitis of unknown etiology within 3 weeks (Marsh et al., 2022). In the following weeks, more cases were reported in the United Kingdom, the United States, and European countries (Baker et al., 2022; European Centre for Disease Prevention and Control, 2022a; UK Health Security Agency, 2022a). Most cases are aged under 5 years. Jaundice and gastrointestinal symptoms are the main clinical manifestations. All cases presented with markedly elevated transaminases, and, remarkably, some progressed to acute liver failure and even died. So far, the common hepatotropic viruses have been ruled out and no definite cause has been identified. In epidemiological terms, cases are largely sporadic but widely distributed (World Health Organization, 2022a). Given the severity of the disease and the unascertained pathogenesis, this is an urgent issue that warrants more attention worldwide. Here, we discuss what we know about acute hepatitis of unknown etiology so far and what we can do next.
Clinical characteristics
The definition of acute hepatitis of unknown etiology in children has changed as investigations have progressed. The definitions used in different regions and periods are summarized in Table 1 (UK Health Security Agency, 2022a). Most of the initial cases reported in Scotland presented with serum transaminase levels greater than 2,000 international units per liter (IU/L), and hepatitis virus infection (A to E) had been excluded (Marsh et al., 2022). So far, the investigation of trawling questionnaires has found no common environmental exposures in travel, family structure, parental occupation, diet, water source, or potential exposure to toxicants (UK Health Security Agency, 2022b). The etiology needs further investigation. Coincidentally, clinicians had noticed similar cases in Alabama from October to November 2021, and the report on these cases was published in April 2022 (Baker et al., 2022). Given the uneven medical conditions in different countries around the world, we can speculate that such cases may have occurred even before October 2021 but were not recognized as such.
According to the technical briefing 3 published by the UKHSA, 50% of the English cases were female and the majority were of white ethnicity (86.3%). They were predominantly aged between 3 and 5 years (53.5%), with a median age of 3 years (interquartile range 2-4 years). The main symptoms in these children are shown in Table 2. Notably, they were reported to be immunocompetent before this admission (UK Health Security Agency, 2022b). As of 26 May 2022, 14 cases had undergone liver transplantation (World Health Organization, 2022b).
Histopathologic characteristics
The hepatotropic viruses (hepatitis A, B, C, D, and E viruses) have been excluded; therefore, the non-hepatotropic viruses, such as cytomegalovirus (CMV), Epstein-Barr virus (EBV), herpes simplex virus (HSV), human herpesvirus-6 (HHV-6), adenovirus, and echovirus, should be considered. Generally, the histological features of acute hepatitis in children mainly manifest as lobular hepatocellular injury, inflammatory infiltrate with lobular predominance, and portal or periportal infiltrate; viral inclusions are specific features of hepatitis related to non-hepatotropic viruses (White and Dehner, 2004). Certain histologic characteristics of liver injury may point to a specific agent. For instance, in CMV hepatitis, a paucity of bile ducts may be present. Nonzonal confluent necrosis and nuclear target-like inclusions in hepatocytes are biopsy features of adenovirus-related hepatitis (White and Dehner, 2004).
To further clarify the etiology, some patients underwent hepatic histopathologic assessment. In the UK, liver specimens included 8 biopsies and 6 liver tissue samples obtained from children receiving liver transplantation. The histopathologic evaluation found variable severity, ranging from mild hepatocellular injury to massive hepatic necrosis, and the histopathologic pattern showed non-specific changes (UK Health Security Agency, 2022a). In Alabama, United States, liver biopsies from six children showed various degrees of hepatitis. Moreover, immunohistochemical (IHC) evidence of adenovirus could not be observed and viral particles could not be identified by electron microscopy (Baker et al., 2022). However, viral inclusions may not be obvious in all cases. At present, laboratory tests and histological examination cannot ascertain the etiology.
Etiological findings
Considering the epidemiological and clinical features, an infectious etiology is the most plausible cause, and all patients were therefore tested for pathogens at or around the time of admission. The possible infection-related factors are presented in Table 3. Adenovirus is the most frequent pathogen detected in the various samples. Among 197 UK cases, 68% (116 of 179 cases with available results) tested positive for adenovirus (UK Health Security Agency, 2022b). Adenovirus viral loads in blood or serum samples from those who received liver transplantation were approximately 12-fold higher than in those who did not (UK Health Security Agency, 2022b). Adenovirus was detected more frequently in blood or serum samples (79.4%) than in stool (43.9%) or respiratory (27.3%) samples (UK Health Security Agency, 2022b). Thus, cases that were not tested on whole blood or serum samples may have returned negative results, which could lead to an underestimation of adenovirus positivity. By partial hexon gene sequencing, 35 cases were successfully subtyped, of which 27 (77%) were adenovirus type 41 (UK Health Security Agency, 2022b). Among 9 American cases, adenovirus type 41 was detected in all patients (Baker et al., 2022).
SARS-CoV-2 was also frequently detected in reported cases. In the United Kingdom, SARS-CoV-2 was detected in 25 of 169 cases with available results (15%) (UK Health Security Agency, 2022b). According to a report from Israel, 11 of 12 patients had had SARS-CoV-2 infection in recent months (Haaretz, 2022). From whole-genome sequencing (WGS) of SARS-CoV-2 in English cases, 5 sequences were classified as VOC-22JAN-01 (lineage BA.2) (UK Health Security Agency, 2022a). Seven cases were co-infected with adenovirus and SARS-CoV-2 (PCR or lateral flow device), and serological testing is underway (UK Health Security Agency, 2022a).
Other pathogens, such as enterovirus, parechovirus, human herpesviruses 6 and 7 (HHV-6, HHV-7), and varicella-zoster virus, have been detected in some UK cases (UK Health Security Agency, 2022a). Of 9 Alabama cases, 6 showed positive test results for EBV by PCR but negative results for EBV immunoglobulin M (IgM) antibodies, suggesting low-level reactivation of previous infection rather than acute infection. Enterovirus/rhinovirus, metapneumovirus, respiratory syncytial virus, and human coronavirus OC43 have also been detected (Baker et al., 2022). In addition, preliminary metagenomic findings, published on 19 May 2022, showed that adeno-associated virus 2 (AAV2), Adeno-associated dependoparvovirus A, human herpesvirus, and human polyomavirus were detected in samples from cases in England and Scotland (UK Health Security Agency, 2022b).
Possible mechanisms
Based on the working hypotheses put forward by the UKHSA and the views of experts, we summarize the possible mechanisms as follows.
Human adenovirus infection
Adenoviruses are a group of double-stranded, non-enveloped DNA viruses. They comprise 7 species (A-G) and are currently classified into 51 serologically distinct types (Lynch and Kajon, 2016). Different types of adenovirus show various tissue tropisms, and a given type is often associated with a particular clinical manifestation. Acute respiratory symptoms are the most common manifestation (serotypes 1-5, 7, 14, and 21). In addition, keratoconjunctivitis (serotypes 8, 19, and 37), urethritis (serotypes 11, 34, 35, 3, 7, and 21), and gastroenteritis (serotypes 40 and 41) are important presentations (Ronan et al., 2014; Lynch and Kajon, 2016). Type 41 adenovirus infection often presents with gastrointestinal disease, for example vomiting and diarrhea, and it is a common cause of pediatric acute gastroenteritis (Lynch and Kajon, 2016). Acute hepatitis is an uncommon manifestation of adenovirus infection. However, a few cases have been reported in both immunocompetent and immunosuppressed individuals (Lion, 2014). In addition, adenovirus infection was identified as the causative factor of acute liver failure in several case reports (Ozbay Hosnut et al., 2008; Gu et al., 2021).
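The serotype-to-syndrome associations listed above lend themselves to a simple lookup table; the sketch below is illustrative only and deliberately not exhaustive.

```python
# Adenovirus serotype groupings by typical clinical presentation,
# transcribed from the associations listed above (illustrative only).
TROPISM = {
    "acute respiratory disease": {1, 2, 3, 4, 5, 7, 14, 21},
    "keratoconjunctivitis": {8, 19, 37},
    "urethritis": {3, 7, 11, 21, 34, 35},
    "gastroenteritis": {40, 41},
}

def syndromes_for(serotype: int) -> list[str]:
    """Return the typical presentations associated with a serotype."""
    return [s for s, types in TROPISM.items() if serotype in types]

print(syndromes_for(41))  # ['gastroenteritis']
```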
In previous studies, pediatric acute liver failure (PALF) was a rare but rapidly progressive and life-threatening disorder, and approximately 22-49% of cases were diagnosed with indeterminate etiology (Squires et al., 2006; Berardi et al., 2020). The PALF study, a multicenter observational cohort study, demonstrated that 46% of cases with acute liver failure of unknown etiology underwent liver transplantation, a higher proportion than in the group with a definite etiology (Squires et al., 2006). As of 26 May 2022, 14 (2%) of the recently reported cases of acute hepatitis of unknown etiology had undergone liver transplantation and at least 24 (4%) were awaiting liver transplantation (World Health Organization, 2022b).
Table 1. Case definitions of acute hepatitis of unknown etiology in children, by region and date.

Scotland (technical briefing 1, 23 April 2022)
Confirmed: A person presenting with a serum transaminase greater than 500 IU/L (AST or ALT) without any known cause, who is 10 years of age and under, or a contact of any age of a possible or confirmed case, since 1 January 2022.
Possible: A person presenting with jaundice without any known cause, who is 10 years and under, or a contact of any age of a possible or confirmed case, since 1 January 2022.

England, Wales, Northern Ireland (technical briefing 1, 23 April 2022)
Confirmed: A person presenting with an acute hepatitis (non-hep A-E*) with serum transaminase > 500 IU/L (aspartate transaminase, AST, or alanine transaminase, ALT), who is 10 years old and under, since 1 January 2022.
Possible: A person presenting with an acute hepatitis (non-hep A-E*) with serum transaminase > 500 IU/L (AST or ALT), who is 11-16 years old, since 1 January 2022.
Epi-linked: A person presenting with an acute hepatitis (non-hep A-E*) of any age who is a close contact of a confirmed case, since 1 January 2022.

Scotland (technical briefing 2, 6 May 2022)
Confirmed: A person presenting with a serum transaminase greater than 500 IU/L (AST or ALT) without any known cause (excluding hepatitis A-E, cytomegalovirus and Epstein-Barr virus), who is 10 years of age and under, or a contact of any age of a confirmed case, since 1 January 2022.

England, Wales, Northern Ireland (technical briefing 2, 6 May 2022)
Confirmed: A person presenting since 1 January 2022 with an acute hepatitis which is not due to hepatitis A-E viruses or an expected presentation of a metabolic, inherited or genetic, congenital or mechanical cause**, with serum transaminase greater than 500 IU/L (AST or ALT), who is 10 years old and under.
Possible: A person presenting since 1 January 2022 with an acute hepatitis which is not due to hepatitis A-E viruses or an expected presentation of a metabolic, inherited or genetic, congenital or mechanical cause**, with serum transaminase greater than 500 IU/L (AST or ALT), who is 11-15 years old.
Epi-linked: A person presenting since 1 January 2022 with an acute hepatitis (non-hepatitis A-E) who is a close contact of a confirmed case. (A person who is epi-linked but also meets the confirmed or possible case definition is recorded as a confirmed or possible case, with the epi-link noted in the record; this prevents double-counting of cases.)

WHO and ECDC
Confirmed: N/A
Probable: A person presenting with an acute hepatitis (non-hepatitis viruses A, B, C, D, and E*) with aspartate transaminase (AST) or alanine transaminase (ALT) over 500 IU/L, who is 16 years old or younger, since 1 October 2021.
Epi-linked: A person presenting with an acute hepatitis (non-hepatitis viruses A, B, C, D, and E*) of any age who is a close contact of a probable case, since 1 October 2021.
*If hepatitis A-E serology results are awaited but other criteria are met, cases are classified as "pending classification." **Confirmed and possible cases should be reported based on clinical judgment if some hepatitis A-E virus results are awaited, or if there is an acute-on-chronic hepatic presentation with a metabolic, inherited or genetic, congenital, mechanical, or other underlying cause.

Acute hepatitis of unknown etiology in children is not a newly emerging disease. However, the higher-than-expected numbers in 2022 require more attention, especially during the current COVID-19 pandemic. COVID-19 prevention and control measures, such as physical distancing restrictions, may have deprived younger children of opportunities for exposure to common pathogens. These children may be relatively immunologically naïve and more susceptible to adenovirus infection, with unexpectedly severe effects (Samarasekera, 2022).
With the decline of pandemic restrictions, more social mixing has led to children's first exposure to pathogens, allowing rarer outcomes of adenovirus infection to be detected. Retrospective analysis of the trends of adenovirus infection in England from 1 January 2017 to 1 May 2022 showed a reduction in the number of adenovirus cases between March 2020 and May 2021 (the COVID-19 pandemic period) and a marked excess of adenovirus infections in children under 10 years old since the end of 2021 (UK Health Security Agency, 2022a). After a period of low prevalence of adenovirus infection during the previous 2 years of the COVID-19 pandemic, a massive wave of adenovirus infections may emerge within a short time and a serious outbreak may occur. According to the clinical manifestations and outcomes of the present cases, although most cases recovered after supportive treatment, the level of transaminases in reported cases was higher than in typical hepatitis. Moreover, approximately 6 percent of cases progressed to acute liver failure (Marsh et al., 2022). In addition, re-infection with adenovirus after SARS-CoV-2 infection, or coinfection with SARS-CoV-2 and adenovirus, may worsen the clinical outcome (The Lancet Infectious Diseases, 2022). However, only seven cases (5.6%, 7/125) in England were co-infected with adenovirus and SARS-CoV-2, and this proportion is not high. This hypothesis needs further verification. Another hypothesis is adenovirus mutation. Molecular evolution of adenovirus by homologous recombination can result in novel viruses showing altered tissue tropisms and increased virulence (Lion, 2014; Tian et al., 2019). WGS of adenovirus facilitates the discovery of viral recombinants; the virulence and clinical significance of any such recombinant remain to be established. However, the evidence that adenovirus is the decisive causative factor is still limited. Firstly, the viral load of adenovirus in blood samples is not high (UK Health Security Agency, 2022a). Secondly, neither viral inclusions, IHC evidence of adenovirus, nor viral particles have so far been identified in the liver biopsies of related cases. One case successfully underwent adenovirus PCR of liver tissue, but the result was negative (Baker et al., 2022; UK Health Security Agency, 2022a). It remains uncertain whether adenovirus is an incidental finding or plays an important role in this acute hepatitis.
SARS-CoV-2 infection or multisystem inflammatory syndrome
Firstly, acute hepatitis in children may be a post-infectious sequela of SARS-CoV-2 infection. Studies in adults showed that elevated serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) levels during SARS-CoV-2 infection are common (Marjot et al., 2021). The levels of ALT and AST vary between patients, and the underlying causes of elevated liver enzymes in SARS-CoV-2 infection are not yet clear. Direct infection of hepatocytes and an immune-mediated inflammatory response may both be important in the progression of liver injury. After SARS-CoV-2 infection, the SARS-CoV-2 spike protein binds to the human angiotensin-converting enzyme 2 (ACE2) receptor for entry into the target cell and then triggers an immune response in the host. The innate immune system works together with T and B cells to control SARS-CoV-2. Patients exhibit enhanced pro-inflammatory responses with corresponding organ dysfunction and symptoms. Differences in antibodies, CD4+ and CD8+ T-cell populations, genetic polymorphism in MHC, and cytokine production result in different clinical outcomes (Shah et al., 2020; Fergie and Srivastava, 2021). In a recent study preprinted in medRxiv, Kendall et al. (2022) reported that children infected with SARS-CoV-2 are at significantly increased risk of elevated AST or ALT (hazard ratio, HR: 2.52; 95% confidence interval, CI: 2.03-3.12) and total bilirubin (HR: 3.35; 95% CI: 2.16-5.18), compared with children with other respiratory infections. Liver manifestations and pathophysiological aspects of SARS-CoV-2 infection in patients without liver disease have been summarized in a recently published review. Even though transaminase elevation is common during COVID-19, severe acute liver injury is relatively rare. A "cytokine storm," characterized by the release of large amounts of inflammatory factors, plays a vital part in the pathophysiology of SARS-CoV-2 infection (Dufour et al., 2022).
(Table 3 column headings: Country; Adenovirus (%, cases); SARS-CoV-2 (%, cases); Other positive results (cases or samples).)
With deeper investigation and understanding of COVID-19, several studies have found that the activation and proliferation of a memory T-cell pool against SARS-CoV-2 contribute to rapid viral clearance during reinfection (Shah et al., 2020; Jung et al., 2021; Wang et al., 2021). SARS-CoV-2-specific CD4+ and CD8+ memory T-cell immunity can be detected not only in patients who have recovered from COVID-19 but also in close contacts who were SARS-CoV-2-exposed but negative on nucleic acid and antibody screening (Wang et al., 2021). Moreover, SARS-CoV-2-specific memory T-cell responses can persist for up to 10 months after infection (Jung et al., 2021). In indeterminate PALF patients, IHC staining of liver tissue sections showed that tissue-resident memory CD8+ T cells predominate among the infiltrating leukocytes and were responsible for the aberrant immune activation during liver failure (Chapin et al., 2018). Therefore, we can speculate that children with acute hepatitis of unknown etiology may have been in close contact with SARS-CoV-2 cases, resulting in the generation of memory T-cell responses. Upon SARS-CoV-2 reinfection, dysregulation of the immune response could cause rapid liver damage, even acute liver failure (Bogunovic and Merad, 2021). Additionally, Osborn et al. (2022) reported a previously healthy 3-year-old female who developed acute liver failure secondary to type 2 autoimmune hepatitis (AIH), preceded by mild infection with SARS-CoV-2. This case suggested a possible association between SARS-CoV-2 infection and the subsequent development of autoimmune liver disease presenting with acute liver failure (Osborn et al., 2022). Although the mechanisms of AIH and acute hepatitis of unknown origin may differ, SARS-CoV-2 infection may play an important role in both. Secondly, staphylococcal enterotoxin B (SEB) is one of the most severe and typical bacterial superantigens; it stimulates T cells non-specifically and triggers the release of large amounts of cytokines, leading to multi-organ system failure and death (Fries and Varshney, 2013). Yarovinsky et al. (2005) reported that exposure of adenovirus-infected mice to SEB increases the severity of liver injury, in which IFN-γ signaling and apoptosis may play important roles. Interestingly, structure-based computational models showed that an epitope of the SARS-CoV-2 spike protein contains a sequence motif unique to SARS-CoV-2 that highly resembles the sequence and structure of SEB (Cheng et al., 2020). Brodin (2022) therefore recently proposed a SARS-CoV-2 superantigen hypothesis: SARS-CoV-2 infection can lead to viral reservoir formation, and when SARS-CoV-2 persistently stimulates the gastrointestinal tract, the SEB-like superantigen motif within the spike protein mediates immune activation. This immune activation may result in multisystem inflammatory syndrome in children (MIS-C) (Cheng et al., 2020; Porritt et al., 2021; Brodin, 2022). Exposure to adenovirus after SARS-CoV-2 infection would then lead to broad, non-specific T-cell activation, resulting in apoptosis of hepatocytes. Immunomodulatory therapies would be suggested if superantigen-mediated immune activation were verified. Furthermore, several cases reported in Israel were treated with steroids and recovered (Haaretz, 2022), which indirectly suggests a role for an immune-mediated inflammatory response.
Meanwhile, whether SARS-CoV-2 infection is vital in the progression of this hepatitis remains controversial. The number of cases with SARS-CoV-2 coinfection is relatively small, and most cases have no known history of SARS-CoV-2 infection. Serological investigations are ongoing to explore prior infection. Moreover, in the Alabama report, two of the cases that developed acute liver failure were treated with steroids, with limited effect (Baker et al., 2022). The hypothesis of SARS-CoV-2-mediated MIS-C is difficult to reconcile with these cases.
There is no evidence of any correlation between this hepatitis and the COVID-19 vaccine: the COVID-19 vaccine is not recommended for children under 5 years old, and nearly 75% of children with acute hepatitis of unknown etiology are under 5 years old and thus too young to have received the vaccination (UK Health Security Agency, 2022a).
Host immune deficiency
In general, despite the increasing number of new cases, the prevalence of this unexplained hepatitis is relatively low. All cases presented with liver injury, but most recovered with supportive treatment, and only a small proportion progressed to acute liver failure or death. Professor Kelly put forward the hypothesis that there may be genetic or immunologic differences between patients and healthy children that make certain individuals more susceptible to a given pathogen or trigger a more potent immune response (Samarasekera, 2022). Pathogen investigations are ongoing, and the immune state of the host is also of great importance. Under the UKHSA's investigation plan, research on host characterization, including harmonized clinical data collation and analysis, host genetic characterization, immunological characterization including T-cell activation studies, and transcriptomics, will be launched. These studies will lead to a deeper understanding of the etiology.
What we can do

Diagnosis
The diagnosis of acute hepatitis of unknown etiology in children includes early recognition of symptoms and laboratory testing. In most cases, the onset of jaundice was preceded by gastrointestinal manifestations, such as diarrhea, nausea, and vomiting. Parents should be alert to these symptoms in their children, and general practitioners or other medical specialists should raise awareness of this hepatitis of unknown etiology among young children. A person presenting with acute hepatitis with AST or ALT over 500 IU/L, who is under 16 years old, needs further investigation. The clinical and epidemiologic characteristics of cases should be surveyed. The history of potential exposure to toxicants and drugs, food and water consumption, and the history of SARS-CoV-2 and other pathogen infections should be collected. Routine laboratory tests, including blood cell analysis, biochemical tests, coagulation tests, and plasma ammonia, testing for the hepatotropic viruses (hepatitis A, B, C, D, and E), and abdominal imaging should be performed. According to the definition of acute hepatitis of unknown etiology in children, a probable or epidemiologically linked (epi-linked) case can then be identified.
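As an illustration of how the WHO/ECDC working definition summarized in Table 1 can be applied, a minimal sketch follows. The field names and the contact flag are assumptions made for the example, and the onset-date criterion (since 1 October 2021) is omitted for brevity.

```python
# Minimal sketch of the WHO/ECDC working case definition (see Table 1).
# Field names are illustrative assumptions, not an official schema.
from dataclasses import dataclass

@dataclass
class Case:
    age_years: int
    ast_iu_l: float
    alt_iu_l: float
    hepatitis_a_e_positive: bool
    close_contact_of_probable_case: bool = False

def classify(case: Case) -> str:
    non_a_e = not case.hepatitis_a_e_positive
    high_transaminases = max(case.ast_iu_l, case.alt_iu_l) > 500
    if non_a_e and high_transaminases and case.age_years <= 16:
        return "probable"
    if non_a_e and case.close_contact_of_probable_case:
        return "epi-linked"
    return "not a case"

print(classify(Case(age_years=3, ast_iu_l=820, alt_iu_l=650,
                    hepatitis_a_e_positive=False)))  # probable
```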
Monitoring and testing
A panel of tests is suggested when a probable or epi-linked case is identified. Adenovirus, enteroviruses, CMV, EBV, HSV, HHV-6, HHV-7, and parechovirus should be tested in blood. Respiratory viruses (including influenza, adenovirus, parainfluenza, rhinovirus, respiratory syncytial virus, and human bocavirus 1-3), SARS-CoV-2, enteroviruses, and human metapneumovirus (hMPV) should be tested in throat swab specimens. Enteric viruses (including norovirus, enteroviruses, rotavirus, astrovirus, and sapovirus) should be screened in stool specimens. In addition, according to the advice of the ECDC, serology for Brucella spp., Bartonella henselae, and Borrelia burgdorferi (if epidemiologically appropriate) should be tested. Culture of bacterial pathogens and viruses is another valuable tool. Metagenomic analysis of blood and liver specimens should be performed, as it is useful and meaningful. Other, non-infectious causes (autoimmune hepatitis, Wilson disease, bacteremia, etc.) should also be carefully excluded. If necessary, a liver biopsy is required.
Treatment
Treatment of hepatitis depends on the underlying etiology. Unfortunately, the etiology of this novel hepatitis remains unknown; hence, supportive treatment is the foundation. Current research suggests that adenovirus infection is an important factor, but there is no specific, proven effective treatment for adenovirus hepatitis. In terms of antiviral therapy, cidofovir (CDV) may be a preferred agent. It is a cytosine nucleotide analog that inhibits DNA polymerase; it works well in vitro, but its efficacy varies in clinical use (Cheng et al., 2020; Porritt et al., 2021; Brodin, 2022). In the Alabama report, two of the cases that developed acute liver failure were treated with CDV, and the clinical effect was unsatisfactory (Baker et al., 2022). The efficacy of CDV is therefore unclear. In addition, corticosteroid therapy has been studied in indeterminate PALF: some cases improved, but survival may remain relatively low (Molleston et al., 2008; Devictor et al., 2011; Chapin et al., 2019). Glucocorticoids produce broad anti-inflammatory effects and suppress the excess immune activation that is considered the key disorder in acute severe hepatitis of unknown etiology. Therefore, glucocorticoids may be useful in certain patients, particularly those who deteriorate rapidly; however, the risks and benefits of corticosteroid therapy should be balanced. Moreover, pediatric studies have shown that plasmapheresis, at times in combination with other therapies, may serve as a bridge to liver transplantation (Squires et al., 2022). Finally, liver transplantation should be considered for patients with a poor prognosis. Nevertheless, different countries and regions have discrepant capacities for liver transplantation. Thus, identifying the etiology early and treating the pathogen specifically are crucial and may improve the survival rate and reduce the need for liver transplantation.
Prevention
Infection with adenovirus or other pathogens cannot currently be excluded. Considering that adenovirus is transmitted through fecal-oral and respiratory routes, good hygienic practices (including careful hand hygiene and the cleaning and disinfection of surfaces) and social distancing among children will help reduce the spread of the disease. Attention to food and water hygiene is also very effective and necessary.
Conclusion
The etiology of acute hepatitis in children remains unknown. Markers of hepatotropic virus infection have not been detected in any of the cases worldwide. Adenovirus was detected in most cases and is regarded as one of the candidate underlying causes. Several hypotheses, including multisystem inflammatory syndrome and superantigen activation, have been proposed. An immune deficit in children resulting from lack of exposure to pathogens during social distancing may have rendered them more susceptible to ordinary adenovirus infection, and an exceptionally large wave of ordinary adenovirus infections, or a novel adenovirus variant, could produce rarer and more severe outcomes. Other infectious causes, including SARS-CoV-2, are being investigated; acute hepatitis in children may be a post-infectious sequela of SARS-CoV-2. Metagenomic sequencing of cases in England and Scotland showed that other pathogens, such as AAV2, HHV-6, HHV-7, and human polyomavirus, have been detected; however, their significance is under investigation. There are some limitations in the case reports published worldwide to date. Firstly, the different definitions used at the beginning led to some bias in the diagnosis of patients. Secondly, owing to uneven medical conditions in different countries, some tests cannot be launched everywhere, making determination of the etiology challenging. Although several countries have reported no cases, precautions should be prepared in advance. Clinicians should be trained to raise awareness of this severe hepatitis of unknown etiology among young children. These case reports prompt syndrome-based surveillance at local or national scale, which can provide a more precise and continuous picture of the epidemiological and clinical features of infectious diseases. So far, there is no specific and effective treatment, as the underlying cause is unclear. The priority is to identify the etiology of this novel epidemic and to further refine control and prevention measures for the syndrome.
Author contributions
MZ: drafting of the manuscript. LC: design and critical revision of the manuscript. Both authors contributed to the article and approved the submitted version.
Funding
This work was sponsored by the Shanghai Sailing Program (grant no. 20YF1428500) and the National Natural Science Foundation of China (grant no. 82002185).
"year": 2022,
"sha1": "7170c42cc32f03fa301d108a2865cf700ceb0ad1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "7170c42cc32f03fa301d108a2865cf700ceb0ad1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Early Stage Lymphedema in Breast Cancer Patient Detected by Indocyanine Green Lymphography but not by Lymphoscintigraphy: A Case Report
Jin A Yoon, M.D., Ph.D., Myung Jun Shin, M.D., Ph.D., Young Jin Choi, M.D., Ph.D., Joo Hyoung Kim, M.D., Ph.D., Taewoo Kang, M.D., Ph.D., Heeseung Park, M.D., Ph.D.

Department of Rehabilitation Medicine, Pusan National University School of Medicine and Biomedical Research Institute, Pusan National University Hospital, Busan; Department of Hemato-oncology, Pusan National University Hospital and Pusan National University School of Medicine, Busan; Department of Plastic and Reconstructive Surgery and Biomedical Research Institute, Pusan National University Hospital, Busan; Department of Surgery (Busan Cancer Center) and Biomedical Research Institute, Pusan National University Hospital, Busan, Korea
INTRODUCTION
The detection of secondary lymphedema has increased with the advancement of breast cancer treatment. Early diagnosis and precise evaluation are therefore necessary to minimize the progression of edema and prevent lymphatic drainage failure [1]. Currently, lymphoscintigraphy is mainly used for the detection of abnormal lymphatic flow, but early diagnosis with it remains challenging [2]. Indocyanine green (ICG) lymphography, a new technique for imaging lymph vessels, can reveal lymphatic abnormalities at an early stage, possibly even before swelling becomes apparent [3]. Despite these advantages, it is currently available at only very few rehabilitation centers. In this case report, we present a patient in whom secondary lymphedema was diagnosed early by ICG lymphography.
CASE REPORT
A 44-year-old woman visited our hospital with swelling of the left upper extremity that had persisted for 2 weeks, following a total left mastectomy and lymphatic dissection performed one year earlier. On her first visit, physical examination revealed enlargement of the circumference of the left upper extremity by +1.6 cm above the elbow, +1.2 cm below the elbow, and +0.1 cm at the wrist and hand compared with the right side. At the initial visit, her neutrophil count, C-reactive protein titer, and D-dimer were normal on blood tests, making infection very unlikely. On ultrasonography, a diffuse subcutaneous edema was observed. After confirming abnormal lymphatic flow and dermal backflow patterns using ICG lymphography, the condition was established as clinically evident lymphedema, indicating a critical time point at which to initiate treatment. We were able to explain the need for active complex decongestive physiotherapy to the patient, and treatment comprising education in manual lymphatic drainage, wearing of compression stockings, and pneumatic compression was applied as early intervention. One month after the commencement of treatment, the difference in circumference of the upper extremity had decreased to 0.5 cm above and below the elbow, and the patient was able to continue treatment with good compliance. This case study was approved by our Institutional Review Board, and the requirement for written consent was waived (IRB No. 1909-023-083).
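Circumference measurements like those above are often converted into limb-volume estimates with the truncated-cone (frustum) formula; the sketch below is illustrative, and the circumference values in it are hypothetical rather than taken from this patient.

```python
import math

def frustum_volume(c1_cm: float, c2_cm: float, h_cm: float) -> float:
    """Volume (ml) of one limb segment from two circumferences (cm):
    V = h * (C1^2 + C1*C2 + C2^2) / (12 * pi)."""
    return h_cm * (c1_cm**2 + c1_cm * c2_cm + c2_cm**2) / (12 * math.pi)

# Hypothetical circumferences (cm) measured at 10 cm intervals
left = [16.1, 24.2, 27.6]    # affected limb
right = [16.0, 23.0, 26.0]   # unaffected limb

def limb_volume(circumferences, segment_height_cm=10.0):
    return sum(frustum_volume(a, b, segment_height_cm)
               for a, b in zip(circumferences, circumferences[1:]))

diff = limb_volume(left) - limb_volume(right)
print(f"inter-limb volume difference: {diff:.0f} ml")
```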
DISCUSSION
In this case study, early stage lymphedema was diagnosed by ICG lymphography, which allowed management to be selected and initiated as soon as it became necessary. As lymphedema can cause not only pathologic problems such as recurrent cellulitis but also cosmetic concerns and severe functional disabilities in breast cancer patients, early diagnosis of secondary lymphedema is vital for the treatment prognosis [4].
However, detecting lymphedema at an early stage is challenging, as its manifestations vary greatly among patients and the edema usually begins gradually, long after lymphatic vessel disruption.
Patient history, physical examination, imaging of soft tissue, lymph vessels, and lymph nodes, and volume measurement are among the commonly used tools to evaluate lymphedema. For clinical lymphedema staging, Campisi reported the importance of early diagnosis at stage 1A or 1B, that is, no edema, or edema that regresses with elevation, despite the presence of lymphatic dysfunction [5]. To identify the critical point for initiating treatment, repetitive lymphoscintigraphy is not suitable for routine follow-up because of its radiation exposure.
ICG lymphography is a non-invasive, non-radioactive tool to evaluate lymphatic function. Although it cannot be used to observe lymph vessels deeper than 2-3 cm, ICG lymphography is useful to evaluate the lymphatic circulatory condition by dermal backflow stage and lymph pump function by quantification of lymph transportation [6]. Specific dermal backflow patterns are observed: linear patterns progress to splash, stardust, and diffuse patterns, appearing in this order with increasing severity of edema. In particular, ICG lymphography can detect abnormal conditions of the lymphatic circulation before edema becomes clinically evident. As seen in our patient, splash patterns are indicative of a reversible lymphatic disorder and indicate a critical time point at which to start appropriate management [7].
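The ordered progression of dermal backflow patterns can be represented as an ordered scale; in the sketch below, the numeric ordering and the idea of taking the most advanced pattern as the overall severity are illustrative assumptions, not a published scoring rule.

```python
from enum import IntEnum

class BackflowPattern(IntEnum):
    LINEAR = 0    # normal lymphatic flow, no dermal backflow
    SPLASH = 1    # earliest abnormal pattern, considered reversible
    STARDUST = 2
    DIFFUSE = 3   # most severe pattern

def overall_severity(patterns):
    """Assumed rule: overall severity is the most advanced pattern seen."""
    return max(patterns)

# Example: splash distally, stardust and diffuse proximally
observed = [BackflowPattern.SPLASH, BackflowPattern.STARDUST,
            BackflowPattern.DIFFUSE]
print(overall_severity(observed).name)  # DIFFUSE
```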
Mihara et al. [8] reported ICG lymphography to be superior to lymphoscintigraphy for the diagnostic imaging of early lymphedema, as the diagnostic sensitivity was 1 by ICG lymphography and 0.62 by lymphoscintigraphy. There are also marked differences in the direct observation of the lymphatic system between these two methods. In ICG lymphography, retained fluid is detectable without background noise because there is no endogenous fluorescence in the near infrared band used for the detection of ICG, thereby increasing the accuracy [8]. In addition, when there is no definite lymphatic flow obstruction or dermal backflow, it is possible to overlook decreased lymphatic flow with qualitative lymphoscintigraphy [9].
Likewise, our case showed normal lymph flow and axillary lymph node activity on lymphoscintigraphy but various abnormal dermal backflow patterns on ICG lymphography, which proved to be an indispensable diagnostic tool for our patient. As all diagnostic methods have both strengths and weaknesses, combining diagnostic tools for evaluation of the lymphatic system might be advantageous. Regarding the distribution of dermal backflow, our patient showed a more severe pattern at the proximal part of the limb (stardust and diffuse patterns) than at the distal part (splash pattern). This distribution has been described previously: the linear pattern was found less frequently and the dermal backflow pattern progressed to stardust or diffuse patterns in the proximal part, as in our case [7]. In addition, confirming the various dermal backflow patterns in relation to body part is another advantage of ICG lymphography, as it provides additional information about which body part to focus on during lymphedema management, such as manual lymphatic drainage.
Although standardized performance metrics are necessary, ICG lymphography has various advantages. As the incidence of mild adverse reactions was reported to be 0.05% for intravenous administration [10], it would likely be lower, and the technique therefore safer, for subcutaneous injection. However, it is currently not available as a diagnostic tool owing to the absence of an appropriate medical charge and because the equipment has not yet been introduced in Korean healthcare settings. In this case report, we were able to make an early diagnosis of lymphedema and guide decisions to adjust lymphedema treatment by using ICG lymphography. With greater implementation efforts, ICG lymphography could become an essential clinical tool for the early diagnosis and staging of lymphedema. Considering its usefulness in the early diagnosis of lymphedema, appropriate clinical settings are required for ICG lymphography to be implemented in Korea.
"year": 2019,
"sha1": "4b7e47116c35875d783dd47a7dc415ac2ec87788",
"oa_license": "CCBYNC",
"oa_url": "https://www.jbd.or.kr/upload/jbd-7-2-117.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "14f5214b22c980a60581ccd3cb2fe83a6a973fb7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
How do parents experience support after the death of their child?
Background A child’s death is an enormous tragedy for both the parents and other family members. Support for the parents can be important in helping them to cope with the loss of their child. In the Netherlands little is known about parents’ experiences of the support they receive after the death of their child. The purpose of this study is to determine what support parents in the Netherlands receive after the death of their child and whether the type of care they receive meets their needs. Method Parents who lost a child during pregnancy, labour or after birth (up to the age of two) were eligible for participation. They were recruited from three parents’ associations. Sixty-four parents participated in four online focus group discussions. Data on background characteristics were gathered through an online questionnaire. SPSS was used to analyse the questionnaires and Atlas ti. was used for the focus group discussions. Results Of the 64 participating parents, 97% mentioned the emotional support they received after the death of their child. This kind of support was generally provided by family, primary care professionals and their social network. Instrumental and informational support, which respectively 80% and 61% of the parents reported receiving, was mainly provided by secondary care professionals. Fifty-two per cent of the parents in this study reported having received insufficient emotional support. Shortcomings in instrumental and informational support were experienced by 25% and 19% of the parents respectively. Parental recommendations were directed at ongoing support and the provision of more information. Conclusion To optimise the way Dutch professionals respond to a child’s death, support initiated by the professional should be provided repeatedly after the death of a child. Parents appreciated follow-up contacts with professionals at key moments in which they were asked whether they needed support and what kind of support they would like to receive. Electronic supplementary material The online version of this article (doi:10.1186/s12887-016-0749-9) contains supplementary material, which is available to authorized users.
Background
The death of a child is an enormous tragedy for both the parents and other family members. Parents experience intense feelings of loss after their child's death [1]. The death of the child influences not only the family system, which is internally disrupted, [2,3] but also others: neighbours, friends, relatives (i.e., the social network) and other acquaintances. Everyone needs to deal with his or her own grief. While parents try to pick up the pieces, support that meets their needs is important for them to cope with the loss of the child [3].
The period of mourning and the way people mourn differ from person to person. There is no "right" way of grieving [4,5]. Some authors describe different stages in the grieving process, which may overlap [4]. Others state that grief is a complex process without stages and consider it to be more like a fingerprint: unique and erratic [6]. The dual process model [7], in which an effective way of mourning is finding a balance between 'loss orientation' and 'restoration orientation', fits well with this view. Although people mourn in their own way, with different intensities and time courses, complicated forms of grief have been reported [8]. As many as 58% of parents who lost a child suddenly and unexpectedly show "complicated grief reactions" 18 months after the death of their child, if the definition of Prigerson & Jacobs is used [9]. But given the nature of the parent-child relationship, this may not necessarily indicate pathological processes. Bereavement outcome depends on a complex interaction between situational, personal and coping factors [10]. It is known that grief rumination [11] leads to more symptoms of depression and complicated mourning [10]. Complicated grief, such as yearning persisting longer than six months post-loss [12,13], in turn increases the risk of psychosocial and psychiatric problems and of death from natural and external causes [14][15][16]. To prevent psychosocial and psychiatric problems after the death of a child, it is important that professionals understand the complex emotional grieving process and identify symptoms of possible complicated grief in parents and other family members at an early stage, in order to provide adequate family support.
The intensity of parental grief is related to a number of factors, such as gender and coping strategies of parents, the child's age, and circumstances surrounding the death. Cultural and ethnic differences must be taken into account in assessing the extent of expressions of grief and mourning. What is considered normal in one culture may not be in another [17]. Mothers experience intensive grief reactions more often than fathers [14,[18][19][20]. Gender differences are also observed in the use of coping strategies in relation to death. It seems that women confront their emotions, while men use avoidance coping strategies more often. The intensity of grief among parents generally increases when the child dies at an older age [14]. Furthermore, parents experience more grief reactions when the death is due to an external cause and is unexpected. Features of grief and coping styles differ between individuals, different ethnic groups and cultural backgrounds [10]. This implies that the need for support also varies.
When their child has died, parents receive support from family, friends, colleagues and other people, for example from day care, school or sport clubs, and from (health) professionals. There are different types of support described in literature, [21,22] which can be divided into emotional, instrumental and informational support. Emotional support is any behaviour in which empathy, love, trust and care is provided to parents. Instrumental support is the provision of tangible assistance or services that directly help parents. Informational support is the provision of advice and information, which empowers parents to make informed decisions about the care offered to their child, such as withdrawal of treatment, as well as other issues pertaining to family life [21]. Health professionals and others involved in a child's death are confronted with their own emotions and fears. This may influence the way they approach the parents of a deceased child [23]. The care, or the lack of care, that parents receive around the time of death has a great impact on the adjustment process and well-being of the parents in the long-term [24]. In case of a sudden and unexpected death in particular, the initial care largely determines the course of bereavement. In this context professionals should realise that parents want to say goodbye to their child, receive information about the cause of death and feel supported by professionals [25]. Parents value health professionals and others who approach them with empathy, kindness and respect. They also value professionals when they listen and communicate well and offer support before and after the death of a child [15,[24][25][26]. According to parents, support should be offered on an individual basis and may vary in intensity depending on the family needs [15]. Support should not be focused solely on parents but also on any surviving siblings [23].
In the Netherlands, professionals from different organisations are involved when children die. In protocols, guidelines or other working agreements, supporting the family after a child's death receives relatively little attention [27]. The Dutch Preventive Child Healthcare has a guideline particularly directed at counselling families after the death of a child [28]. For professionals in palliative care, a national guideline, 'Grief', is available and it describes how surviving relatives can be supported [29]. The Dutch Association of Pediatrics developed a guideline in collaboration with the Dutch College of General Practitioners specifically directed at the organisation of care for children in the palliative phase [30]. Other professionals have generic aspects of family support included in their guidelines.
Although there is a lot of knowledge on bereavement and increasing interest in the support of the family, information is lacking about parents' experiences of the support they received after the death of their child. In this study we answer the research question: what bereavement care did parents in the Netherlands receive after the death of their child and did this care meet their needs? The answers to these questions can help professionals to optimise the support they offer after a child's death.
Study design
Online focus groups and a questionnaire were used to explore what bereavement care parents in the Netherlands received after the death of their child. The METC Twente (Medical Ethical Committee Twente) reviewed the project plan for ethical permission, but decided the study was not subject to the Medical Research Involving Human Subjects Act (WMO) (METC/11011.boe) [31].
Study sample
The target population consisted of parents who have lost their child during pregnancy and labour or after birth, up to the age of two. To recruit these parents we contacted the chairs of three parents' associations by email: the Association of Parents of Cot Death Children (in Dutch: Vereniging Ouders van Wiegendoodkinderen), the Association of Parents of a Deceased Child (in Dutch: Vereniging van Ouders van een Overleden Kind) and the online Sweet Angel Foundation (in Dutch: Stichting Lieve Engeltjes). The Association of Parents of Cot Death Children is a support group that consists of fellow sufferers. Its aim is "to support parents and others who are closely involved, to give information, to gather knowledge on cot death and to stimulate research to optimally support families and to put research to prevent Sudden Infant Death Syndrome (SIDS) on the agenda" [32]. The Association of Parents of a Deceased Child is an organisation which consists of parents of a deceased child (of any age) that aims "to offer understanding and compassion to fellow sufferers" [33]. The Sweet Angel Foundation is an association for parents of a child that died during pregnancy, birth or at an older age, and other persons who are confronted with a child's death in or outside the family. This association "provides fellow sufferers the opportunity to get in touch with each other by email" [34].
The chairs of the three parents' associations agreed to invite their members to participate in the study by means of an invitation letter, which contained information about the objectives and procedure of the study. The 256 members of the Association of Parents of Cot Death Children received the invitation letter by post. The Association of Parents of a Deceased Child published the invitation letter in their newsletter, which is delivered to all members, including the 200 who lost their child when he or she was under two. The Sweet Angel Foundation placed the invitation letter in their newsletter, which all members received by email. Respectively, 33, 1 and 38 parents signed up via e-mail.
Data collection
Data were gathered through four asynchronous online focus group discussions in February and March 2013. Participating parents from the Association of Parents of Cot Death Children were divided into two focus groups of 16 and 17 persons. Participating parents from the Association of Parents of a Deceased Child and the Sweet Angel Foundation were divided into two focus groups of 20 and 19 persons.
Background characteristics of the participating parents were gathered by means of a questionnaire.
A semi-structured questionnaire was used to guide the focus group discussions. To conduct these online group discussions a secure forum licensed by TNO Child Health [35] was used. Each session was guided by two moderators (first and second authors). Parents gave their consent to participate at the beginning of the online focus group discussion, after they received a document by email that described the procedure of logging in on the secure forum. This document also contained communication rules. Anonymity for participants was ensured through the use of nicknames. The secure forum was accessible to the participants for one week. Each day, the first moderator posted a question on the forum, to which participants could respond at any time of day. Participants could also respond to each other if they wished. In total, seven questions were posted about the support parents had received in the period around and after the death of their child, and whether this care met their needs. Parents were asked to describe who was involved around the time of death of their child and whether they had received support from professionals or other people. If parents reported receiving support, they were asked to describe who supported them, what kind of support they had received and what their experiences were in relation to the support (see Additional file 1). The two moderators followed the discussion on a daily basis, in order to stimulate the exchange of information and experiences by answering participants' questions when something was unclear. The second author (a psychotherapist) also referred some parents to a form of trauma therapy or a website for information when she felt this was appropriate.
Data analysis
First, the background characteristics of the participants were categorised. Second, the input given in the online focus groups, saved on the secure forum, was analysed using ATLAS.ti [36]. A codebook was created based on the time period of support in relation to the death, the type of support (emotional, instrumental or informational) parents had received or lacked from a certain person, and wishes or recommendations from parents with regard to support. Support that was in line with the parents' needs or expectations was identified as good practice when parents explicitly valued it in words. The first author coded all four online focus groups, and the third and fourth authors each independently coded two of the four online focus groups, to minimise the introduction of researcher selection bias into the results. Relevant text fragments related to the topics of the seven questions in this study were selected and given codes. The codes and the corresponding fragments coded by the different coders were compared. The differences were discussed between the three researchers. Ultimately, consensus was reached about the definitive set of codes and the fragments that corresponded to these codes. Next, the first author removed duplicates in codes and sorted the remaining codes by the kind of support that parents reported they had received or lacked after their child's death.
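Where code assignments are exported (e.g., from ATLAS.ti) for the comparison step described above, the disagreement check reduces to set operations per fragment. The sketch below is illustrative only, not the authors' procedure; the fragment IDs and code names are hypothetical.

```python
# Illustrative sketch (not the authors' procedure): flag text fragments
# where two independent coders disagree, so they can be discussed to
# consensus. Fragment IDs and code names are hypothetical.

coder_a = {
    "fragment_01": {"emotional_support_received", "support_from_gp"},
    "fragment_02": {"instrumental_support_lacked"},
    "fragment_03": {"informational_support_received"},
}
coder_b = {
    "fragment_01": {"emotional_support_received"},
    "fragment_02": {"instrumental_support_lacked"},
    "fragment_03": {"informational_support_lacked"},
}

for fragment in sorted(set(coder_a) | set(coder_b)):
    codes_a = coder_a.get(fragment, set())
    codes_b = coder_b.get(fragment, set())
    if codes_a != codes_b:
        # Symmetric difference: codes applied by one coder but not the other
        print(f"{fragment}: discuss {sorted(codes_a ^ codes_b)}")
```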
Background characteristics of the participants
Of the 72 parents who had signed up for participation, 29 from the Association of Parents of Cot Death Children, one from the Association of Parents of a Deceased Child and 34 from the Sweet Angel Foundation actually participated in the online focus group discussions. Fifty-seven of these 64 participants completed the questionnaire on background characteristics (Table 1).
Most of the 64 participants were mothers (83%). Their mean age was 42.4 years, ranging from 24 to 65 years. All of them were Dutch. Their children died between 1970 and 2012; more than half of the children died after the year 2000. Sixteen per cent of the children died during pregnancy; 39% died between the ages of two months and twelve months. Sixty-four per cent of all deaths were unexpected. Forty-one per cent of the deaths were categorised as Sudden Infant Death Syndrome (SIDS); other causes of death were pregnancy and childbirth related conditions and congenital malformations, deformations and chromosomal abnormalities. Most children died at home (38%) or in the hospital (23%); eight children died elsewhere: three with family, friends or neighbours; four at the crèche, nursery or child-minder's and one in a car seat.
Parents' experiences with support
The kind of support parents reported having received or lacked after their child's death is shown in Table 2. An overview of the professionals who did or did not provide support, for each type of support, as reported by parents is given in Tables 3 and 4.
Emotional support
Of the 64 parents, 62 (97%) mentioned the emotional support they received after their child's death (Table 2). Emotional support was mainly provided by family, primary care professionals (i.e., general practitioner, social worker and home care professional) and the parents' social network (Table 3). Examples of good practices are illustrated in the following quotes: "We were very satisfied with the support of the general practitioner who did everything for us to sort out everything around the death of our child." (Year of death, 1997) "The general practitioner often visited us or called us sometimes to see how we coped. We knew that we could always contact her for questions and that thought was comforting." (Year of death, 2010) "Our parents and the rest of the family were there for us to provide a shoulder to cry on, to listen to us and ask how we were coping. This kind of support is priceless and has been very crucial for us." (Year of death, 2010) Despite the fact that most parents received emotional support, 33 out of the 64 parents (52%) reported lacking this kind of support (Table 2). Parents reported a lack of emotional support in particular from other (not specified) persons and family (Table 4). The following quotes illustrate the kind of emotional support two parents had missed: "Like my mother-in-law subtly noted after 6 weeks "Are you still crying? You have to stop doing that now, because for us it is very annoying". And yet she was a very sweet woman who did not know better." (Year of death, 1985) "Although we received a lot of support from our family, they do not know how it feels when you have lost a child. They completely miss the point in giving well-intentioned advice." (Year of death, 1997)
Instrumental support
Fifty-one of the 64 parents (80%) mentioned the instrumental support they received after their child's death (Table 2). Instrumental support was particularly provided by primary and secondary care professionals (paediatrician, gynaecologist, other medical specialist, nurse, personnel of the Accident and Emergency department) and family (Table 3). Examples of instrumental support are reflected in the following quotes: "We received a lot of support from our family, who took over our household and made dinner for us. I have experienced this as pleasant." (Year of death, 1999) "The forensic physician allowed us to bring our daughter to the hospital ourselves without police or hearse. The hospital was informed about our arrival. A special room was prepared for us where we could stay. They offered us the opportunity to be present during the first examination, which we did not want to. After the examination we could take our daughter in our arms until she was taken away for the complete autopsy. Afterwards we put her in her own bed underneath a blanket as if she was going to sleep. We experienced this as a very warm gesture to our daughter and ourselves." (Year of death, 2005) "The hospital had organised a memorial service 5 months after the death of our daughter for all the parents of children that died at the neonatology department that year. The memorial service was followed by a get together with fellow sufferers. I am positive about this kind of support (as far as you could speak in those terms)." (Year of death, 2005) Sixteen of the 64 parents (25%) mentioned a lack of instrumental support after the death of their child (Table 2). Parents reported a lack of instrumental support in particular from other (not specified) persons (Table 4). The following quote illustrates the kind of instrumental support one parent reported lacking: "After the death of our child we have had to struggle to get the help we needed. A psychologist with experience in bereavement was hard to find." (Year of death, 2011)
Informational support
Of the 64 parents, 39 (61%) mentioned the informational support they received after the death of their child (Table 2). Informational support was particularly provided by secondary care professionals (Table 3). The following quotes illustrate the informational support received from secondary care professionals: "We experienced the counseling for a future pregnancy in the hospital as very valuable. You are no longer the 'unconcerned' parent." (Year of death, 1993) "Both hospitals where I stayed were very supportive, especially one physician: the gynaecologist. The talks, the time, the personal advice. It was all well meant and direct. Although I did not want to hear it, he gave advice anyway. But I appreciated (and I still do appreciate) the support, the honesty and sincerity of this man." (Year of death, 2012) Twelve of the 64 parents (19%) mentioned a lack of informational support after their child's death (Table 2). Parents reported a lack of informational support in particular from other (not specified) persons and secondary care professionals (Table 4). The informational support that parents lacked is reflected in the following quotes:
Recommendations of parents
Twenty of the 64 parents (31%) responded to the question about the ways in which support could be improved and what kind of support they had appreciated from which person. The recommendations they provided are directed at emotional, instrumental and informational support after the death of a child, as presented in Table 5.
Discussion
When a child has died, many people are involved and provide some form of support to parents. Through the use of online focus group discussions we explored parents' experiences with support after the death of their child aged two or younger.
Most parents mentioned the emotional support they received after the death of their child. This kind of support was particularly provided by family, primary care professionals and the parents' social network. Instrumental and informational support was mainly provided by secondary care professionals. As described in other research, physicians arrange follow-up meetings, usually after 6 weeks, with parents to inform them about the autopsy findings, cause of death and genetic risk, to answer questions and to offer and provide support in the following pregnancy if needed [37].
An important finding is that slightly more than half of the parents reported a lack of emotional support, particularly from family. Furthermore, informational support from secondary care professionals was evaluated as insufficient, and many parents experienced shortcomings in the instrumental and informational support of other, non-professionals. Bereavement care has changed over time. In the postwar years parents were not allowed to talk about their deceased child, to see their child after death or to show their grief [38,39]. Nowadays, there is a greater understanding of the loss and pain parents experience after the death of their child. Although this has changed the way in which support is provided to the family, parents in this study have made some recommendations to optimise family support. Parents emphasise that they would like to be approached with empathy and be acknowledged in their bereavement. Alongside this, health care workers should offer support repeatedly and provide parents with information about the grieving process and options for support. Parents appreciate contact with professionals six to twelve months after their child's death, to check whether the family needs any extra care or support. This contact should be initiated by the professional. In line with the results of other studies, parents indicate that they would appreciate the provision of more support and follow-up appointments or contacts with a professional after the death of their child [25,26].
Strengths and weaknesses of this study
For our target population, the use of online group forums proved to be a comfortable form of group discussion. This may have helped with recruitment, because participants were confident that anonymity was guaranteed and they could decide when and where they wanted to answer the questions. We were able to recruit 64 respondents living throughout the country, of whom 57 provided information about the time, place and cause of death, the extent to which the death was expected, and the age of the child. However, parents were only recruited from support groups, which creates bias. It could be that parents who are members of support groups experience less support from family, or have fewer or more coping skills, than bereaved parents who do not participate in such a group. Recruitment through an invitation letter in the organisation's newsletter seemed to be less effective than a letter sent by post. The low participation rate for parents from the Association of Parents of a Deceased Child might relate to the fact that this association includes parents of children who died at any age, while this study focusses only on young children. Furthermore, in interpreting the number of members of the parents' associations it should be taken into account that membership lists usually include many dormant members. The distribution of the background characteristics of participants (mostly mothers of Dutch ethnicity) limits the generalisability of the results to fathers or other ethnicities. In addition, we were not able to observe gender differences in grief reactions and the way professionals should respond to these. With regard to church membership, the numbers are not remarkably different from the current Dutch population [40,41]. The number of participants prohibits analysing subgroups according to the circumstances of the child's death or parents' characteristics. In addition to the small number of participants, the heterogeneity of time and circumstances of loss, as well as the range of professionals likely to be involved in providing support, make it difficult to assess the internal validity of conclusions drawn from parents' reports. The findings of this study shed light on Dutch practice over decades and do not provide a clear picture of current practice. Although participants provided valuable recommendations with regard to the way in which support should be improved, some of these have already been implemented in practice. We therefore recommend repeating this study with a larger sample size covering a short time span, for example the past five years, arranged by age of the deceased child and manner of death. An advantage of online focus groups is that data do not need to be transcribed. This improves the accuracy of data and eliminates transcript bias, thereby increasing the quality of data [42]. A limitation of the online method is the varying response rate and length of responses to each individual question posted on the forum. Not every participant answered every question or was specific enough, which is understandable because it calls for a high degree of discipline. If we had been able to ask each parent to respond to each question posted on the forum, this would probably have resulted in a higher response rate and a more complete overview of the support parents received or lacked after the death of their child.
Conclusion and recommendations
Different types of support are provided to parents after the death of their child. Although increasing attention has been paid to supporting families after the loss of a child, one-fifth to slightly more than half of the parents in this study lacked some sort of support or experienced support that was not in line with their needs or wishes. According to the results of this study, support initiated by professionals should always include listening to parents and asking them at key moments after their child's death whether they need (extra) support and what kind of support they would like to receive. Parents should also be asked specifically about the emotional support they receive from their family and their social network. When they lack this type of support, caregivers should explore with them how to reach out and receive more support. Furthermore, adequate communication skills and a respectful attitude are necessary in approaching the parents of a deceased child. The results of this study may not apply to every parent who has lost a child, because participants were a self-selected group. A future study is necessary in which parents are contacted through hospitals or government registries of death, in order to compare the responses of those who participate in support groups and those who do not. In addition, further research using online focus groups is desirable, because the scope to reach parents and include them in research seems much wider than with traditional focus groups.
"year": 2016,
"sha1": "b17e178f733d521f79dcae5b27b18916e97c59db",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12887-016-0749-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b17e178f733d521f79dcae5b27b18916e97c59db",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effects of intravitreal injection of siRNA against caspase-2 on retinal and optic nerve degeneration in air blast induced ocular trauma
Repeated ocular air blast injuries result from exposure to low-overpressure blast waves, often delivered in quick succession. We have shown that caspase-2 caused the death of retinal ganglion cells (RGC) after blunt ocular trauma. Here, we investigated whether caspase-2 also mediates RGC apoptosis in a mouse model of air blast induced indirect traumatic optic neuropathy (b-ITON). C57BL/6 mice were exposed to repeated blasts of overpressure air (3 × 2 × 15 psi) and received intravitreal injections of siRNA against caspase-2 (siCASP2) or against a control enhanced green fluorescent protein (siEGFP), at either 5 h after the first 2 × 15 psi exposure ("post-blast") or 48 h before the first blast exposure ("pre-blast"), repeated every 7 days. RGC counts were unaffected by the b-ITON or intravitreal injections, despite increased numbers of degenerating ON axons, even in siCASP2 "post-blast" injection groups. Degenerating ON axons remained at sham levels after b-ITON and intravitreal siCASP2 "pre-blast" injections, with fewer degenerating axons in siCASP2- than in siEGFP-treated eyes. Intravitreal injections "post-blast" caused greater vitreous inflammation, potentiated by siCASP2, with less inflammation in "pre-blast" injected eyes, which was abrogated by siCASP2. We conclude that intravitreal injection timing after ocular trauma induced variable retinal and ON pathology, undermining our candidate neuroprotective therapy, siCASP2.
Materials and methods
Experimental design. This study investigates whether caspase-2 promotes RGC death and ON axonal pathology in a b-ITON mouse model. We also considered the importance of intravitreal (invit) injection timing into an injured eye. Caspase-2 was knocked down by intravitreal injections of siCASP2 (2 μl of a 1 μg/μl solution in sterile PBS) or an equal concentration of siRNA against enhanced green fluorescent protein (siEGFP) as a control. In our first experiment, siCASP2 and siEGFP were intravitreally injected 5 h after the initial 2 × 15 psi blast and followed by two further blast waves; this is referred to as the "post-blast" injection study (Fig. 1A,B). Mice in this group were euthanized at 28 days post injury (dpi). We also performed injections 48 h before b-ITON; this is referred to as the "pre-blast" injection study (Fig. 1C,D). Mice in this group were euthanized at 14 dpi. Optical coherence tomography (OCT) imaging was performed bilaterally at baseline and immediately before mice were euthanized (Fig. 1E). Mice in the "post-blast" study were euthanized at 28 dpi, eyes were processed for immunohistochemistry (IHC), and RGC positive for RNA-binding protein with multiple splicing (RBPMS), a specific cytoplasmic RGC marker 51,52, were quantified on retinal wholemounts (Fig. 1E). Mice in the "pre-blast" injection study were euthanized at 14 dpi, and eyes were processed for IHC and RBPMS+ RGC quantified in retinal cryosections, as previously described 48. In both groups, the far proximal ON tissue was processed for resin semi-thin cross-sections, and PPD-stained intact and degenerating ON axons were quantified. The remaining ON was processed as longitudinal cryosections for IHC analysis (Fig. 1E).
Animal care and procedures. Twelve-week-old male C57BL/6 mice purchased from Jackson Laboratory (Bar Harbor, Maine, USA) were used in this study. Animal procedures were approved by the Institutional Animal Care and Use Committee of Vanderbilt University, conformed to the Association for Assessment and Accreditation of Laboratory Animal Care guidelines, and were conducted in accordance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. The study was carried out in compliance with the ARRIVE guidelines. Animals were randomly assigned to groups, and the experimenters were masked to the treatment and procedural conditions. All procedures and investigations were performed between 07:00 and 12:00.
Air blast-induced indirect optic neuropathy (b-ITON) injury mouse model. Animals were anaesthetised using 3% isoflurane in 1.8 l/min of O2, and male 12-week-old C57BL/6 mice were exposed to a blast overpressure wave produced by a device modified to produce an air blast wave, as previously described 40-42,47,48. Repeated air blast wave injury was chosen to more closely approximate real-world blast injury from linked mines, or to mimic the multiple blast wave exposures and blunt-force injuries that occur during a single large explosive blast event. We used a 15 psi air blast, which alone does not cause pathology 42. The average of 8 air blast tests shows that the system induces peak pressure at the location of the eye, when the mouse is in the system, at 4 ms post-trigger; the pressure stays elevated for 1 ms and returns to baseline by 9 ms. This repeated paradigm causes greater RGC axonal degeneration than a single 26 psi air blast wave exposure. The left eye of mice was exposed to two 15 psi air blast waves with an interblast interval of ~0.5 s, repeated for 3 consecutive days for a total of six exposures 42. The mouse eye was positioned 162 mm from the end of the device. Separate mice were exposed to equivalent procedures excluding the air blast wave, which was blocked and verified to deliver a pressure of < 2 psi, and did not receive intravitreal injections. A pressure transducer recorded the air blast overpressure wave, which was viewed using LabVIEW software (National Instruments, Austin, TX, USA); an illustrative analysis of such a trace is sketched after this section. GenTeal Tears (Alcon, Novartis, Fort Worth, Texas, USA) eye drops were applied after the air blast waves to prevent corneal dehydration from anaesthetic exposure, and the mice were allowed to recover fully.
Figure 1. (A) Timeline for the "post-blast" injection study: bilateral siCASP2 or siEGFP control were intravitreally injected 5 h after the initial 2 × 15 psi blast wave, and injections were repeated every 7 days until perfusion and tissue collection at 28 dpi. (B) Experimental groups for the "post-blast" injection study. (C) Timeline for the "pre-blast" study: bilateral siCASP2 or siEGFP controls were intravitreally injected 48 h before the initial 2 × 15 psi blast exposure, and injections were repeated every 7 days until perfusion and tissue collection at 14 dpi. (D) Experimental groups for the "pre-blast" study. (E) Measured endpoints and eyes analysed.
Intravitreal injection of siCASP2 and siEGFP. Intravitreal injections were performed using a 31-gauge needle with a bevelled tip attached to a 10 μl Gastight Syringe (Hamilton, Reno, NV, USA), under inhalational anaesthesia of 2-3% isoflurane, at a 45° angle 1 mm peripheral to the limbus, avoiding the lens. Unilateral b-ITON was performed, and 2 µl of 1 μg/μl siCASP2 or siEGFP (provided by Quark Pharmaceuticals Inc. under a Material Transfer Agreement) as control was administered by bilateral intravitreal injection. Full details of siCASP2 and all modifications to its sequence have been described previously 16,26. Briefly, siCASP2 is a naked RNA duplex with chemical modifications to prevent degradation by vitreal and serum nucleases. In the first, "post-blast", injection study, siRNA injections were performed 5 h after the initial blast wave and repeated every 7 days until killing and tissue collection at 28 dpi (n = 10 per group) (Fig. 1B). In the second, "pre-blast", injection study, injections were performed 48 h before the b-ITON (n = 5 per group) and repeated every 7 days until euthanasia and tissue collection at 14 dpi (Fig. 1D). Animals were perfused under terminal anaesthesia as described below.
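As a companion to the blast-wave characterisation above (peak at ~4 ms post-trigger, elevated for ~1 ms, baseline by ~9 ms), the following is a minimal sketch of how an exported transducer trace might be summarised. It is illustrative only: the file name, column layout and half-peak criterion are assumptions, not part of the authors' LabVIEW workflow.

```python
# Illustrative sketch: summarise a blast overpressure trace exported as
# comma-separated time (ms) and pressure (psi) samples with no header.
# File name and layout are assumptions.
import numpy as np

t_ms, p_psi = np.loadtxt("blast_trace.csv", delimiter=",", unpack=True)

# Baseline from pre-trigger samples, if present
baseline = p_psi[t_ms < 0].mean() if (t_ms < 0).any() else 0.0

peak_idx = int(np.argmax(p_psi))
peak_time = t_ms[peak_idx]        # expected ~4 ms post-trigger
peak_pressure = p_psi[peak_idx]   # expected ~15 psi per blast

# Time the wave stays above half of its peak overpressure
above = (p_psi - baseline) > 0.5 * (peak_pressure - baseline)
duration = t_ms[above][-1] - t_ms[above][0] if above.any() else 0.0

print(f"peak {peak_pressure:.1f} psi at {peak_time:.1f} ms, "
      f"above half-peak for {duration:.1f} ms")
```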
Optical coherence tomography (OCT) imaging, retinal thickness and vitreous haze analysis. OCT scans were performed under anaesthesia (3% isoflurane in O2) at 27 dpi in the "post-blast" group and at 13 dpi in the "pre-blast" group, to construct a high-resolution cross-sectional retinal image using a Bioptigen ultra-high-resolution SD-OCT system with a mouse retinal bore (Bioptigen, North Carolina, USA). Pupils were dilated using 1% tropicamide, and GenTeal™ lubricant gel was used to maintain corneal clarity and prevent drying. All images were acquired with the same level of A-scan averaging (100 averages per A-scan) and with the retina positioned centrally in the image. A total of 2 B-scans were analysed per eye, either side of the ON head. Whole retinal thickness and ganglion cell complex (GCC) thickness (ganglion cell layer, GCL, and inner plexiform layer, IPL) were measured in OCT images in line with the optic nerve head (ONH). ImageJ was used to manually segment the layers and measure the area, which was divided by the length of the retinal segment measured to calculate the layer thickness. Analysis to quantify vitreous inflammation was performed using ImageJ (http://rsbweb.nih.gov/ij), based on the method previously described 53: two images either side of the ONH were analysed per eye, and the pixel intensity in five regions of interest in the vitreous was measured and then displayed as a percentage of the average retinal pigment epithelium (RPE) intensity. Results are displayed for the "pre-blast" and "post-blast" studies; we also grouped siCASP2 and siEGFP injections from the "pre-blast" and "post-blast" intravitreal injection groups and compared them to sham and no-invit eyes, to determine whether intravitreal injections cause changes in vitreal inflammation.
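To make the haze quantification concrete, here is a minimal sketch, assuming a grayscale B-scan exported as an image, of expressing mean vitreous intensity as a percentage of mean RPE intensity, together with the area-over-length thickness calculation. ROI coordinates, the file name and the placeholder measurements are assumptions, not the authors' ImageJ workflow.

```python
# Illustrative sketch: vitreous intensity as % of RPE intensity, plus a
# layer thickness from a segmented area. ROI coordinates, file name and
# placeholder measurements are assumptions.
import numpy as np
from PIL import Image

b_scan = np.asarray(Image.open("oct_bscan.png").convert("L"), dtype=float)

def roi_mean(img, y0, y1, x0, x1):
    """Mean pixel intensity within a rectangular region of interest."""
    return img[y0:y1, x0:x1].mean()

# Five vitreous ROIs and one RPE ROI (coordinates are placeholders)
vitreous_rois = [(10, 40, 50 + 80 * i, 110 + 80 * i) for i in range(5)]
rpe_roi = (300, 320, 50, 450)

vitreous_mean = np.mean([roi_mean(b_scan, *roi) for roi in vitreous_rois])
haze_pct = 100.0 * vitreous_mean / roi_mean(b_scan, *rpe_roi)

# Layer thickness = segmented layer area / length of the retinal segment
layer_area_um2 = 52_000.0      # from manual segmentation (placeholder)
segment_length_um = 800.0      # length of the measured segment (placeholder)
thickness_um = layer_area_um2 / segment_length_um

print(f"vitreous haze: {haze_pct:.1f}% of RPE; thickness: {thickness_um:.1f} um")
```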
Tissue preparation for IHC. Animals were euthanized by overdose of anaesthetic (isoflurane) and intracardially perfused with 4% EM-grade paraformaldehyde (PFA; Electron Microscopy Sciences, Hatfield, Pennsylvania, USA) dissolved in phosphate buffered saline (PBS). Eyes were then cryoprotected in ascending concentrations of sucrose (10%, 20%, 30%) and embedded in optimal cutting temperature compound. Sections were cut at 15 µm thickness using a cryostat (Bright Instruments, Huntingdon, UK), collected onto SuperFrost (Fisher Scientific, Loughborough, UK) coated glass slides, and stored at −20 °C until required for IHC. Whole retinae were dissected out of the eyes and IHC staining was performed in 24-well plates, as described below.
Immunohistochemistry (IHC) in retinal frozen sections.
Frozen cryosections were thawed for 20 min, washed in several changes of PBS, permeabilised, and non-specific binding sites blocked by incubation in PBS containing 1% Triton X-100 (Sigma) and 3% bovine serum albumin (BSA; Sigma). Sections were then incubated overnight at 4 °C with the appropriate primary antibody (Table 1) before washing in several changes of PBS and incubating with Alexa Fluor 488/594- or HRP-labelled secondary antibodies (Table 1) for 1 h at room temperature (RT). Finally, sections were washed in PBS and coverslips mounted in Vectashield antifade aqueous mounting medium with 4′,6-diamidino-2-phenylindole (DAPI) nuclear stain (Vector Laboratories, Peterborough, UK). Controls, which included omission of the primary antibody, were included in each run and were used to set the background threshold levels prior to image capture.
IHC in retinal wholemounts.
Retinal wholemounts were stained with appropriate primary antibodies after permeabilization in PBS containing 0.5% Triton X-100 (Sigma). To aid permeabilization, this step was first performed at −80 °C for 15 min, followed by thawing in 0.5% Triton X-100 for 15 min at RT. Primary antibodies were diluted in PBS containing 2% Triton X-100 and 2% BSA (all from Sigma), added to wholemounts, and incubated overnight at 4 °C. Wholemounts were washed in several changes of PBS, incubated with the appropriate secondary antibody for 2 h at RT, washed in PBS, and coverslips mounted using Vectashield aqueous mounting medium containing DAPI. Primary antibodies were omitted in controls, which were used to set the background threshold levels. Wholemounts were viewed under an Axioplan 2 fluorescent microscope equipped with an AxioCam HRc and running Axiovision Software (all from Zeiss, Hertfordshire, UK). An experimenter masked to the treatment conditions captured representative images from each wholemount for analysis.
Assessment of RGC survival. The number of RBPMS+ immunostained RGC was quantified. In the "post-blast" injection study, RBPMS+ RGC were counted in the middle portion of the retinal wholemounts (Supplementary Fig. S1A) and displayed as mean RBPMS+ RGC per mm². In the "pre-blast" injection study, RBPMS+ RGC in the GCL were counted on retinal cryosections in line with the ONH, in pre-determined areas (Supplementary Fig. S1B), and displayed as mean RBPMS+ RGC per mm of retina, as previously described 48. Four images were analysed from two cryosections per animal.
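The two normalisations above reduce to simple density arithmetic; a minimal sketch with placeholder counts (not study data) follows.

```python
# Illustrative sketch with placeholder values: express wholemount counts
# per mm2 of retina and cryosection counts per mm of retinal length.
counts_wholemount = [112, 98, 105, 121]   # RBPMS+ RGC per sampled field
field_area_mm2 = 0.15                     # area of each sampled field (mm2)

rgc_per_mm2 = sum(counts_wholemount) / (len(counts_wholemount) * field_area_mm2)

counts_sections = [41, 38, 44, 40]        # RBPMS+ RGC per imaged segment
segment_length_mm = 0.5                   # retinal length per image (mm)

rgc_per_mm = sum(counts_sections) / (len(counts_sections) * segment_length_mm)

print(f"{rgc_per_mm2:.0f} RGC/mm2 (wholemounts); {rgc_per_mm:.0f} RGC/mm (sections)")
```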
Counting of intact and degenerative ON axonal profiles. The total ON area and the number of ON axons with intact and degenerative profiles were quantified, as described by us previously 48. Briefly, the circumference of the ON was measured and the total ON area calculated using ImageJ (NIH, Bethesda, Maryland, USA). The numbers of intact and degenerating (arrows, Fig. 2D) ON axons were counted manually by assessing the morphological characteristics of PPD-stained axons. Intact axons had uniform myelin, whereas degenerating axons had unravelling or collapsed myelin. ON axons in 9 boxes of the ImageJ Counting Grid plug-in were analysed, and the total number of axons in the whole ON was extrapolated.
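The extrapolation from sampled grid boxes to the whole nerve is a density-times-area calculation; the sketch below illustrates it with placeholder numbers (not study data).

```python
# Illustrative sketch with placeholder values: extrapolate axon counts from
# 9 sampled counting-grid boxes to the whole optic nerve cross-section.
intact_counts = [310, 295, 322, 301, 289, 315, 330, 298, 307]   # per box
degen_counts = [4, 6, 3, 5, 7, 4, 2, 6, 5]                      # per box
box_area_um2 = 900.0           # area of one counting-grid box (placeholder)
total_on_area_um2 = 82_000.0   # from the measured ON circumference (placeholder)

sampled_area = len(intact_counts) * box_area_um2
total_intact = sum(intact_counts) / sampled_area * total_on_area_um2
total_degen = sum(degen_counts) / sampled_area * total_on_area_um2

print(f"estimated intact axons: {total_intact:.0f}; degenerating: {total_degen:.0f}")
```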
Statistical analysis.
Statistical analysis was carried out using SPSS version 26 (IBM Corp, Armonk, NY, USA) and GraphPad Prism version 7.00 (GraphPad Software, La Jolla, CA, USA). The data were tested for normality using the Shapiro-Wilk test. Normally distributed data without linkage were analysed using one-way ANOVA and post-hoc Tukey tests, with P values corrected for multiple comparisons (RBPMS+ RGC quantification, ON axon counts, ED1+ cells in ON, retinal thickness in OCT scans). Data including measurements from the two eyes of each animal (vitreal intensity in OCT scans) were modelled using generalised estimating equations (GEE; normal distribution with identity link function and independent correlation matrix). To construct a model to fit the data, all available factors were included in the initial model with 2-way interaction terms, and terms with P > 0.05 were then serially removed from the model, starting with the least significant and the interaction terms. In experiments where multiple comparisons were made, P values were corrected with the Bonferroni correction. Data are reported as mean ± standard error of the mean (SEM). Sample size was based on previous studies demonstrating that n = 5 animals per group detected treatment effects on axon counts 40,48. No animals were excluded or euthanized due to reaching humane end points before study completion.
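For readers working outside SPSS and Prism, the pipeline above translates roughly as follows. This is a hedged sketch, not the authors' scripts; the data file and column names are assumptions.

```python
# Illustrative Python translation of the analysis pipeline described above.
# File and column names are assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import shapiro, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("outcomes.csv")  # columns: animal, group, axons, vitreal_intensity

# Normality per group (Shapiro-Wilk), then one-way ANOVA with post-hoc Tukey
for name, grp in df.groupby("group"):
    stat, p = shapiro(grp["axons"])
    print(f"{name}: Shapiro-Wilk P = {p:.3f}")
samples = [grp["axons"].values for _, grp in df.groupby("group")]
print("one-way ANOVA:", f_oneway(*samples))
print(pairwise_tukeyhsd(df["axons"], df["group"]))

# Two eyes per animal (vitreal intensity): GEE with identity link and an
# independence working correlation, mirroring the description above
gee = smf.gee(
    "vitreal_intensity ~ group",
    groups="animal",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Independence(),
)
print(gee.fit().summary())
```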
Results
The number of degenerative ON axons after b-ITON was not affected by "post-blast" siCASP2 intravitreal injections. We first determined the effects of caspase-2 knockdown in b-ITON on RGC degeneration, ON axonal pathology and vitreal inflammation. To do this, we performed "post-blast" intravitreal injections of our well-characterised siCASP2 16,17,22,23 or a control siEGFP, and then assessed ON axonal morphology and quantified the number of degenerating and intact axons, as well as axon density, on PPD-stained semi-thin proximal ON cross-sections (Fig. 2A-D). There was no difference in total axonal numbers between sham and b-ITON without injections at 28 dpi (38,193
Infiltration of ED1+ cells in the ON by 28 dpi. To assess infiltrating inflammatory cells in the ON after b-ITON, we counted ED1+ cells in longitudinal ON cryosections (Fig. 3E,F). ED1 is a widely used monoclonal antibody clone directed against CD68, used to identify macrophages, monocytes and activated microglia in rat tissues. While there appeared to be more ED1+ cells in the ON after b-ITON, the numbers did not reach statistical significance in any group. For example, whereas 2.09 ± 0.68 ED1+ cells per mm² were present in sham ONs, we detected 10.11 ± 3.69, 21.79 ± 13.21, and 28.73 ± 12.21 ED1+ cells per mm² in the uninjected b-ITON, "post-blast" siEGFP-injected, and siCASP2-injected b-ITON ONs, respectively (P = 0.2255, ANOVA). These results demonstrate that markers of ON and retinal inflammation, including ED1+ macrophage infiltration and vitreous haze, increased in concert after b-ITON followed by intravitreal injection, with a possible additive effect when the intravitreal injection of siCASP2 knocked down caspase-2 levels.
ON axonal degeneration and RBPMS+ RGC after "pre-blast" siCASP2 and siEGFP intravitreal injections. The study detailed thus far was performed with siCASP2 or siEGFP intravitreal injections at 5 h after the first 2 × 15 psi blast exposure, into a potentially vulnerable and traumatically inflamed eye, which we have demonstrated caused vitreous and retinal inflammation. To determine whether intraocular injections at this time point were potentially detrimental to the eye, we next performed a further study in which we injected siRNA at 48 h before the b-ITON ("pre-blast" injection). Pre-injury siCASP2 intravitreal injection is unlikely to be clinically translatable; however, we aimed to derive information on pre-injury caspase-2 knockdown and the mechanistic role of caspase-2 in the b-ITON model. Therefore, we injected siCASP2 and siEGFP at 48 h before b-ITON and continued the study for 14 dpi (n = 5 mice per group).
Immunostained ED1+ cells in the retina were more frequently localised in the outer plexiform layer (OPL), IPL and GCL in b-ITON with siEGFP and siCASP2 injections compared to sham and b-ITON with no injections, suggesting that increased inflammatory cell infiltration was caused by "pre-blast" intravitreal injections (Fig. 5F). GFAP immunostaining levels remained constant between all groups (Fig. 5F). Consistent with our findings from "post-blast" intravitreal injections, there was a trend towards an increase in the number of ED1+ cells in the ON of mice receiving b-ITON compared to sham controls, but it did not reach statistical significance (8.40 ± 2.20 vs 0.75 ± 0.17 ED1+ cells per mm²; P = 0.075, ANOVA; Fig. 5G,H), owing to the low numbers and considerable variation. The number of infiltrating ED1+ cells was also high in b-ITON eyes injected with siEGFP and in those injected with siCASP2 (12.96 ± 3.82 vs 9.77 ± 3.37 ED1+ cells per mm²; P = 0.5586 and P = 0.9742, post-hoc Tukey, both compared to b-ITON with no intravitreal injections), suggesting that increased inflammatory cell infiltration was caused by blast in these groups (Fig. 5F-H). These results suggest that intravitreal injections administered 48 h before b-ITON caused less retinal and vitreous inflammation than when given 5 h after b-ITON.
Discussion
In this study we show differential responses to intravitreal injections given 48 h before compared with 5 h after b-ITON, with the latter causing retinal and ON inflammation compared to sham controls and b-ITON alone. Receiving "post-blast" siCASP2 intravitreal injections did not affect the numbers of degenerating ON axons or the number of intact axons. In contrast, "pre-blast" siCASP2 intravitreal injections resulted in fewer degenerating ON axons compared to b-ITON with no intravitreal injections. Unexpectedly, "pre-blast" intravitreal injections of control siEGFP resulted in similar reductions in degenerating ON axons as siCASP2. Also surprisingly, the extent of axon loss at 28 dpi and the extent of axon degeneration at 14 dpi were much less in this study than in other studies using this model 40,42, which could potentially be due to the effect of the injections and the portion of the optic nerve that was examined 58. Furthermore, there was no effect on RBPMS+ RGC survival after b-ITON injury, with or without intravitreal injections of siCASP2 or siEGFP, in both the "pre-blast" and "post-blast" studies at the time points assessed.
Compared to intravitreal injection of siEGFP, "post-blast" intravitreal injections of siCASP2 increased vitreous inflammation, assessed by OCT vitreous intensity, while siCASP2 intravitreally injected 48 h before b-ITON decreased vitreous inflammation. The hyper-reflective dots observed qualitatively were similar to previous studies which correlated these cells on OCT with histological mononuclear cells 59 , suggesting this vitreal haze could represent macrophage infiltration in response to the combination of b-ITON and intravitreal injections. These results suggest complex regulation of RGC survival and inflammation after b-ITON treated with intravitreal injections, which is dependent on timing of intravitreal injection. In addition, the intravitreal injection itself induced differential responses in b-ITON-treated mice, dependent on the timing of injection, with "pre-blast" siCASP2 reducing inflammation and "post-blast" treatment increasing inflammation. We also observed strong evidence for a reduction in ON axon density when siCASP2 was injected after the initial blast wave compared to siEGFP, and a trend towards a reduction in axon density when it was injected 48 h before, possibly due to ON gliosis known to be associated with this model 41 .
The delivery of siCASP2 by intravitreal injection at different time points around the blast wave exposures had varied effects on retinal and ON degeneration. In our first, "post-blast", injection study, siCASP2 or siEGFP was intravitreally injected five hours after the initial 2 × 15 psi blast wave exposure, which was followed by two further 2 × 15 psi blast waves on consecutive days. We chose this time point to ensure that we still captured caspase-2 activation occurring < 24 h after injury, allowing time for retinal siCASP2 penetration and knockdown (16 h) 16, while remaining acceptable for clinical translation, as an injured soldier or civilian may receive specialist treatment within this time frame. Notably, we observed increased vitreous inflammation when an intravitreal injection was given after the initial blast wave exposure, which was independent of the compound injected and comparable to our previous observation with the necroptosis inhibitor Necrostatin-1s 48. There was greater vitreous intensity detected in eyes with siCASP2 injected after b-ITON, but not in eyes receiving "pre-blast" injections, compared to siEGFP control, possibly due to caspase-2 knockdown preventing apoptosis of infiltrating macrophages 60,61, with greater macrophage infiltration and higher persistent vitreous levels of siCASP2 in the group injected after b-ITON.
In the two different treatment paradigms (injection before and after injury), the number of RBPMS+ RGC 62-64 at 14 and 28 dpi was not different, suggesting that b-ITON alone, within the 14-28 day time frame of our experiments, is not enough to cause RGC death. This is perhaps not surprising, as we and others detect axon degeneration prior to RGC death in models of ITON; in fact, in other models RGC death is delayed for months 41,49. For example, others have reported delayed and progressive RGC degeneration, resulting in reduction of GCL thickness, between 4 and 10 months after a single 26 psi blast-wave exposure 49,50. In support of this assertion, a decrease in DAPI+ cells in the GCL was observed at 2 days after b-ITON, which remained low at 28 days 41. We have, however, previously demonstrated RGC degeneration caused by intravitreal injection 5 h after b-ITON 48, with comparable vitreous inflammation. In our current study, intraocular injection after b-ITON did not affect the number of RBPMS+ RGC, which may reflect either neuroprotection induced by both siCASP2 and off-target effects of the siEGFP, or greater macrophage infiltration with neuroprotective effects. Although the same siCASP2 had little effect on RGC survival in this model, we previously reported site-specific caspase-2-mediated RGC death peripheral, but not central, to the impact site after blunt ocular trauma, while caspase-2 did not drive photoreceptor death, suggesting that caspase-2-mediated apoptosis is both cell and site specific 17,65.
The number of degenerating ON axons was increased 28 days after b-ITON alone and was unaffected by siCASP2 and siEGFP intravitreal injections administered after b-ITON. In contrast, intravitreal injection of either siCASP2 or siEGFP given 48 h before b-ITON reduced both the number of degenerating ON axon profiles and the axon density, with a lesser effect on total axon number, suggesting some interference with the process of axonal degeneration likely due to ON gliosis associated with this model 41 . Again, despite injecting equivalent treatments, the time point of intraocular injection surrounding the blast caused different ON responses. We have previously shown degenerating ON and reduced electroretinography recordings and elevated levels of pro-inflammatory cytokines when administering an intravitreal injection at 1 day after the b-ITON, indicating that intravitreal injections may be injurious to the ON when delivered at this acute stage of ON injury 58 . The "pre-blast" study ended at 14 dpi and the "post-blast" study ended at 28 dpi, which could be viewed as a limitation of our study, but we did not intend to compare the two protocols to each other. These timepoints of analysis were chosen since we previously showed that 2 weeks was the peak of axon degeneration in this model and at 4 weeks significant axon loss was detected 41 . Thus, the 2-week time point was used as the most robust anatomical assessment of protection by quantification of axon degenerative profiles, whilst at the 4-week time point the most robust anatomical assessment of protection was quantification of total axons.
Long-lasting morphological and functional consequences in the eye have also been observed in models of repetitive mild traumatic brain injury (r-mTBI) 66 . For example, r-mTBI in a mouse model caused a decrease in ON diameters, increased cellularity and areas of demyelination in the ON. This was consistent with areas of decreased cellularity in the GCL and 67% reduction in Brn3a + RGC. Furthermore, SD-OCT demonstrated thinning of the inner retina whilst ERG demonstrated a decrease in the amplitude of the photopic negative response without changes in a-or b-wave amplitudes. In a separate single blast TBI model, the authors also found decreases in RNFL thickness and reduced cellularity in the GCL at 3-months with accompanying changes in retinal function 50 . However, r-mTBI led to more profound and widespread damage to the RNFL. These studies suggest that visual system dysfunction might be a common feature after blast and repeated blunt mTBI.
As we have previously reported 48, there were infiltrating ED1+ cells in the ON at 28 days after b-ITON alone and in eyes receiving pre-blast and post-blast injections, and we have now shown infiltrating ED1+ cells at 14 days. ED1+ cells are likely to be infiltrating inflammatory macrophages clearing myelin debris from degenerating ON axons 67, consistent with previous findings of CD68+ cells infiltrating the brain after blast injury 68 and of macrophage ON infiltration after ultrasonic injury 69. However, macrophages recruited into the ON may exert polarised effects, since they are not only toxic to neurons and glia but can also promote CNS axon regeneration 70. Vitreal inflammation, induced by lens injury or injection of zymosan, has long been known to cause release of oncomodulin, promoting RGC neuroprotection and axon regeneration 71-74. In this study, RGC and ON degeneration may reflect a complex balance between pro-degenerative ON macrophage infiltration and neuroprotective retinal and vitreous macrophage infiltration, with less vitreous inflammation and less neurodegeneration in the "pre-blast" injection group than in the "post-blast" injection group, which had more vitreous inflammation.
Intravitreal drug delivery has become routine for the delivery of drugs, suspensions and intraocular implants into the vitreous cavity. It is the main route to deliver macromolecules to the posterior segment of the eye. Although the technique leads to targeted delivery of therapeutics, it is invasive, since it requires penetration of the globe and is associated with complications such as endophthalmitis, retinal detachment, cataracts and intraocular haemorrhage 75. Intraocular injections can be uncomfortable, may have limited patient compliance, and often require multiple injections, which can increase the risk of side effects such as infectious endophthalmitis and retinal detachment 76. However, repeated intravitreal injections of anti-angiogenic agents such as vascular endothelial growth factor (VEGF) inhibitors have become the first-line treatment for exudative age-related macular degeneration (AMD) 77. Our study, which detected additive effects of combined b-ITON and intraocular injections 48, suggests that intraocular injection may not be the optimal therapeutic delivery method for treating ocular blast injury, and that this should be considered in the development of treatments for humans. However, caution needs to be exercised when projecting adverse effects of intraocular injections in mouse eyes to substantially larger human eyes, since an intraocular injection in a mouse eye may inflict a greater degree of injection-related ocular injury than in the larger human eye, with its ~1000-fold larger vitreous volume 78,79. Hence, the overall effects of intraocular injections into the human eye may be negligible and need further investigation.
In conclusion, despite evidence of caspase-2 activation after b-ITON, we did not detect a neuroprotective effect of caspase-2 knockdown in this model, possibly because of the limited loss of RGC soma. We demonstrate that intravitreal injection and b-ITON combined cause retinal and vitreal inflammation, with a greater effect when the injection was administered after b-ITON compared to before the injury. Depending on the timing of injection, intravitreal injection may also induce RGC axonal loss, although this was minimal. Similarly, the timing of the intravitreal injections with respect to b-ITON also determined whether siCASP2 reduced or increased vitreous inflammation. The time point of intravitreal injection surrounding ocular trauma can thus induce varied retinal and ON pathology.
"year": 2021,
"sha1": "57121122798b22aaf99c4d06d7c4f1bc359cf15d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-96107-y.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e77dad370efec24784922660be3fe536b5daebf2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Emotional and Behavioral Health among Portuguese Toddlers during the COVID-19 Crisis: The Impact of Social Isolation and Caregiving Distress
The COVID-19 pandemic has led to significant changes in the lives of families with young children. The present study aimed to explore whether child social isolation due to the COVID-19 crisis was associated with toddlers' emotional and behavioral health (EBH), and whether this association was moderated by caregiving distress, during the second mandatory lockdown in Portugal. Participants included 315 toddlers and their primary caregivers. Caregivers were invited to complete a set of questionnaires in order to report on toddlers' social isolation from other significant family members, other children, and activities outside the house, and to provide ratings of caregiving distress and toddlers' EBH. Family socioeconomic factors, including stressors resulting from the pandemic, were also measured. Significant interaction effects, independent of child sex and sociodemographic factors, between COVID-19-related social isolation and caregiving distress emerged in the prediction of toddlers' EBH: COVID-19-related social isolation was a significant predictor of both emotional/behavioral competencies and problems, but only among toddlers exposed to higher levels of caregiving distress. This study evidences the negative impact of the COVID-19 crisis on the functioning of Portuguese families and toddlers' EBH. It emphasizes the importance for policies to consider the implications of the COVID-19 crisis for young children, and to provide psychosocial support to families in order to reduce caregiving distress and, thus, prevent children's mental health problems.
Such changes may have resulted in significant stressors for children, with an impact on their developmental trajectories. In fact, evidence on the effects of the COVID-19 pandemic on children around the world has suggested a negative impact on the emotional and behavioral health (EBH) of children aged 3-18 years old, such as higher irritability, fear, anxiety, and depression (Feinberg et al., 2021; Jiao et al., 2020; Loades et al., 2021; Orgilés et al., 2020). Despite such findings, less is known about the impact of COVID-19 on younger children, particularly in Portugal, the locale where the research reported herein was conducted. This is quite surprising, given that the pandemic experience and related stressors (e.g., social isolation) might be threatening to toddlers' mental health, particularly in the case of families experiencing higher levels of caregiving distress, which has reportedly increased during the pandemic (Spinelli et al., 2021) and is a strong predictor of child well-being (Sher-Censor et al., 2018). In an attempt to shed light on how the pandemic affected the development of Portuguese toddlers and their caregivers, we investigated the relations between COVID-19-related social isolation, caregivers' perceived caregiving distress, and toddlers' EBH during the second year of COVID-19.
COVID-19 Pandemic in Portugal
In March 2020, COVID-19 was characterized as a pandemic (World Health Organization, 2020) and the first cases of infection in Portugal were reported (Direção Geral de Saúde, n.d.). In order to slow down the rate of transmission, the Portuguese government rapidly declared a state of emergency with a mandatory lockdown of the population, which started being alleviated in May 2020 (Diário da República Eletrónico [DRE], 2020). Some studies showed the negative impact of this lockdown on the psychological well-being and mental health of Portuguese adults, such as increased anxiety, sadness, and anger (Dias et al., 2020), and on the functioning of children across various domains (Pombo et al., 2021; Poppe et al., 2021). In January 2021, the second year of the COVID-19 pandemic, Portugal registered its third, and worst to date, peak of infections. The country was near collapse and reported the world's highest rate of daily new confirmed cases and deaths per million people (European Centre for Disease Prevention and Control, 2021). There was an increase of 300% in deaths and 140% in hospital admissions (Direção Geral de Saúde, n.d.). The government once again declared a mandatory lockdown of the population. People were confined to home, educational settings and businesses in general closed, social interaction and movement between locations were restricted, a curfew was imposed, and working from home was required whenever possible. These restrictions would only start to be eased in April 2021 (DRE, 2021a, 2021b), accompanied by a strong investment in vaccination.
COVID-19, Child EBH and Caregiving Distress
The lockdown restrictions presented a particularly big challenge for families with young children. In addition to producing numerous family-related socioeconomic stressors, such as financial loss, reduction of income due to loss of job or lay-off measures, and difficult access to essential goods (Achterberg et al., 2021; Calvano et al., 2021), the lockdown created relational stressors for toddlers that are potentially threatening to their EBH (Feinberg et al., 2021). Accordingly, some studies found a deterioration in the EBH of children and adolescents during the pandemic, reporting higher levels of internalizing and externalizing problems (Feinberg et al., 2021), depression, anxiety (Loades et al., 2021), irritability, difficulty concentrating, nervousness, feelings of loneliness, and worries (Orgilés et al., 2020), in comparison to before the pandemic.
An important example of the relational stressors produced by the lockdown is the social isolation that has limited toddlers' interaction with other children and extended family, who are significant agents in their development, and which might impact children's mental health (Loades et al., 2020). In fact, a significant body of research has shown the crucial and unique role of toddlers' early socialization with these agents for the development of a wide range of socioemotional and behavioral competencies, such as empathy, prosocial behaviors, and compliance (Brownell & Brown, 1992; Vandell, 2000). This can be particularly concerning in the context of Portuguese society, because the participation of extended family, namely grandparents, in children's lives is highly valued and relied on (Glaser et al., 2013). COVID-19 restrictions also limited toddlers' opportunities to engage in activities outside the home, which are important for toddlers' well-being and are associated with multiple positive outcomes, such as social skills, confidence, and emotional control (Burdette & Whitaker, 2005).
However, decreased opportunities for socialization due to the pandemic do not necessarily mean that young children will be harmed for life, as interactions with others at home remained. Here, we argue that difficulties may arise when social isolation is coupled with a family environment that is unable to respond to the child's needs, including those that emerge from the challenges posed by social isolation. Confined at home, and with the loss of opportunities for learning and interacting with other significant figures, young children had to rely heavily and almost exclusively on the supportive role of their parents, who were themselves facing a myriad of stressors as a consequence of the pandemic (e.g., financial loss, lack of social support). These stressors, in turn, may have exacerbated parents' stress levels and emotional difficulties, leaving them less available to respond adequately to their children's needs (Bate et al., 2021).
Consistent with the above, studies have reported higher levels of parenting stress (i.e., the discrepancy between the situational demands of the parental role and parents' perceived personal resources to cope with them; e.g., Abidin, 1992), as well as more difficulties in the parent-child relationship, in the context of the COVID-19 crisis (Achterberg et al., 2021; Aguiar et al., 2021; Calvano et al., 2021), which are important and well-known determinants of EBH in children (Abidin, 1992). A few studies showed that increased parenting stress due to the pandemic was associated with more socioemotional problems and poorer emotional regulation in preschoolers, school-age children, and adolescents (Achterberg et al., 2020; Spinelli et al., 2021). Also, and importantly for the study reported herein, authors have documented a significant association between COVID-19-related stressors (e.g., physical distancing) and youth psychopathology, highlighting that this association appears to be particularly exacerbated among children exposed to disruptions in the caregiving environment, including high levels of parenting stress, limited parenting availability, and parents' emotional difficulties due to the pandemic (Cohodes et al., 2021). The stressors of COVID-19 can therefore impact the EBH of children differently, depending on the quality of care experienced by the child. Notwithstanding the relevance of those results, it remains to be explored whether similar mechanisms operate in younger children. This is the focus of the present report.
The current study
To our knowledge, the few studies available to date exploring the impact of the COVID-19 crisis on children's well-being have focused on preschool-aged or older children. The impact of COVID-19 on the lives of toddlers and their families is less explored, calling for the urgent need of data. Toddlerhood is a period marked by critical social and emotional development that is crucial for all aspects of functioning throughout the life span (Sroufe, 1995). Furthermore, the examination of the impact of COVID-19 has mostly relied on stressors related to the family, such as financial impact, resource impact, and/or psychological impact on parents, and has considered to a lesser extent the putative role of stressors related to the toddler, most notably social isolation. The consideration of such an experience is crucial given evidence of the negative impact of COVID-19-related social isolation among older children (Loades et al., 2020), but also considering the role of toddlers' socialization with various agents for optimal EBH (Sroufe, 1995; Vandell, 2000). Also poorly explored is whether the impact of COVID-19-related social isolation on toddlers' EBH may be buffered (or exacerbated) by the caregiving environment, which acts, in the early years of life, as the primary context for the child's development.
In light of the research gaps identified, this study sought to examine the impact of child social isolation and caregiving distress on Portuguese toddlers' EBH, while also controlling for the potential influence of COVID-19 economic hardship and other relevant sociodemographic factors (e.g., child sex, maternal education), during the 2021 mandatory lockdown in the country. Most notably, we examined whether child social isolation due to the COVID-19 crisis was associated with toddlers' EBH and whether this association was moderated by caregiving distress. On the basis of previous research, we hypothesized that social isolation would predict more difficulties in toddlers' EBH, especially among children exposed to high levels of caregiving distress.
Participants and Procedure
Participants included 315 healthy toddlers (169 boys) aged 18 to 36 months old (M = 26.73 months, SD = 5.71) and their primary caregivers (279 mothers, 35 fathers, 1 grandmother) recruited from childcare centers and social media platforms in Portugal. Exclusion criteria included toddlers' diagnosis of chromosomic disorders, intellectual disability, and neurodevelopmental disorders.
Primary caregivers ranged in age from 21 to 55 years (M = 35.15 years, SD = 5.61). Of the caregivers, 74% had completed a university degree, 21.3% had completed high school, and the remaining 4.7% had completed nine or fewer years of education. Eleven percent were unemployed and 2% were on lay-off due to the pandemic. The majority of the caregivers were married or living in a civil union (86%), 11% were single, 1% were widowed, and 2% were divorced or separated. Twenty-eight toddlers were born prematurely (i.e., before 37 weeks of gestation). The great majority (92%) of toddlers were attending childcare center services. Most of them had no sibling (54.4%) or had one sibling (39.2%). Twenty-eight toddlers lived with other adults in the household (e.g., a grandmother), besides their parents. Seventeen toddlers were separated from at least one parent due to COVID-19 infection of the parents or the child.
The study was disseminated in online parent groups and childcare centers, inviting the primary caregivers of children within the target age range to participate. An online link was provided to caregivers to complete a set of questionnaires between January 2021 and May 2021, corresponding to the second lockdown in the country, which occurred during the third wave of the pandemic in Europe. Participants received no financial compensation for taking part in the study. The study was approved by the institutional review board of the University [blinded for review], and informed consent was obtained from all participants included in the study.
Measures
Child Emotional and Behavioral Health. The well-known Brief Infant-Toddler Social and Emotional Assessment (BITSEA; Briggs-Gowan et al., 2004) was used to assess the caregiver's perception of child EBH. The BITSEA includes 42 items rated on a 3-point Likert scale (1 = Not true/Rarely to 3 = Very True/Often) and yields two scales: (i) the Competence Scale (11 items; e.g., Item 15, "Is affectionate with loved ones"), assessing emotional and behavioral abilities (e.g., symbolic and imitative play, empathy); and (ii) the Problem Scale (31 items; e.g., Item 9, "Has less fun than other children"), assessing emotional and behavioral difficulties (e.g., aggression, defiance, overactivity, negative emotionality). Greater scores on the Competence Scale indicate greater emotional/behavioral competence, and greater scores on the Problem Scale indicate greater emotional/behavioral impairment. In this study, the internal consistency was α = 0.60 for the Competence Scale and α = 0.70 for the Problem Scale.
Child COVID-19-related social isolation. A social isolation scale was developed to screen for stressors related to social isolation experienced by children due to COVID-19. Caregivers were asked to indicate whether the child experienced the following items due to the pandemic: (1) the child had limited opportunities to interact with other significant family members, (2) the child had limited opportunities to interact with other children, and (3) the child had limited opportunities to participate in activities outside the house. Participants were instructed to rate each item on a 3-point scale (1 = don't agree at all/not at all to 3 = agree completely/a lot). All items proved to be significantly intercorrelated (all p < .001). Scores were summed to create a child COVID-19-related social isolation score (α = 0.69); higher scores reflect greater social isolation.
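To make the scoring concrete, below is a minimal sketch of how the summed isolation score and its internal consistency could be computed. The column names and toy values are hypothetical, not the authors' variable names, and pandas is assumed; the reported value in the study was α = 0.69.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

# Toy responses to the three isolation items, each rated 1-3.
df = pd.DataFrame({
    "iso_family":     [3, 2, 3, 1, 2],   # limited contact with family members
    "iso_children":   [3, 2, 2, 1, 3],   # limited contact with other children
    "iso_activities": [2, 3, 3, 1, 2],   # limited activities outside the house
})
df["social_isolation"] = df.sum(axis=1)  # possible range 3-9; higher = more isolated
print(cronbach_alpha(df[["iso_family", "iso_children", "iso_activities"]]))
```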
Caregiving distress. For the assessment of caregiving distress, participants reported on perceived daily parenting stress, on their availability to care for the child, and on the psychological impact of COVID-19. Regarding parenting stress, caregivers completed the parenting stress subscale of the Daily Hassles Questionnaire (Kanner et al., 1981). This subscale includes nine items concerning stressors related to the daily activities of parenthood, rated on a 5-point scale (0 = no hassle to 4 = big hassle). Items were summed to obtain a final score (α = 0.87; M = 6.21, SD = 5.59); greater scores indicate elevated parenting stress. Caregivers also rated their availability to care for the child during the pandemic based on two items ("Due to the COVID-19 pandemic, I have less time to be with my child"; "Due to the COVID-19 pandemic, I have been less emotionally available to interact with my child") using a 3-point scale (don't agree at all/not at all to agree completely/a lot). The two items were found to be significantly associated (r_s = 0.39, p < .001) and were then summed (M = 2.76, SD = 0.98, range 1-6). Finally, the psychological impact of COVID-19 on caregivers was measured with the single item "I have become depressed because of the Coronavirus (COVID-19)" (M = 1.71, SD = 0.83, range 1-4) from the psychological impacts scale of the Coronavirus Impacts Questionnaire (Conway, Woodard, & Zubrod, 2020), following previous studies (Kerr et al., 2021), rated on a 4-point scale (not true of me at all to very true of me). Parenting stress, parental availability, and the psychological impact of COVID-19 all proved to be significantly correlated (p < .001). Scores were standardized and then combined, resulting in a caregiving distress composite. Higher scores indicate increased caregiving distress.
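The composite described above (standardize each indicator, then combine) can be sketched as follows. The column names and data are illustrative assumptions; beyond z-scoring and summing, the authors' exact combination procedure is not detailed in the text.

```python
import pandas as pd

def zscore(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std(ddof=1)

# Toy data; columns stand in for the three distress indicators described above.
df = pd.DataFrame({
    "parenting_stress": [4, 12, 6, 2, 9],  # Daily Hassles parenting subscale (0-36)
    "low_availability": [2, 5, 3, 2, 4],   # two summed availability items
    "covid_depression": [1, 3, 2, 1, 2],   # single Coronavirus Impacts item (1-4)
})
# Standardize each indicator, then sum: higher = greater caregiving distress.
df["caregiving_distress"] = (zscore(df["parenting_stress"])
                             + zscore(df["low_availability"])
                             + zscore(df["covid_depression"]))
print(df["caregiving_distress"])
```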
Demographics and COVID-19 economic hardship in the family. Participants answered questions about several sociodemographic factors, including their age and sex, years of education, marital status, number of children, and the age and sex of the target child, as well as COVID-19 economic hardships in the family. Regarding COVID-19 economic hardships, caregivers were asked to rate on a 4-point scale (not true at all to totally true) whether the family experienced the following economic stressors due to the pandemic: (1) negative impact on the household finances, (2) reduction of the household income due to job loss, and (3) difficulty in acquiring essential goods such as food and toilet paper. Scores on each item were summed to create a COVID-19 economic hardship score (α = 0.70; M = 4.62, SD = 2.01). Higher scores reflect increased socioeconomic risk.
Data Analysis
Data were analyzed using IBM SPSS Statistics version 27. Bivariate correlations among the study variables were examined, as well as between toddlers' EBH and sociodemographic variables, to identify potential covariates. Main analyses were then performed to examine the relations between toddlers' EBH, social isolation, and caregiving distress. Two moderation analyses were performed, separately for toddlers' emotional/behavioral problems and toddlers' emotional/behavioral competencies as dependent variables, using the PROCESS macro for SPSS (Hayes, 2012). For each of the models, COVID-19 economic hardship and the sociodemographic variables selected based on the preliminary analyses' findings were entered as covariates, COVID-19-related social isolation was entered as the independent variable, and caregiving distress was entered as the moderator variable. The PROCESS macro provides regression estimates of the independent and moderator variables as well as their interaction, using bootstrapping methods (5,000 bootstrapped samples) that are robust to non-normality.
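As a rough illustration of the analysis just described, the following sketch fits an ordinary least squares moderation model with an interaction term and covariates, then bootstraps the interaction coefficient. It approximates, rather than reproduces, the PROCESS macro; all variable names and simulated values are hypothetical, and statsmodels is assumed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 315                                   # sample size reported above
df = pd.DataFrame({                       # toy stand-in for the survey data
    "problems": rng.normal(10, 3, n),
    "isolation": rng.normal(7, 1.5, n),
    "distress": rng.normal(0, 1, n),
    "sex": rng.integers(1, 3, n),         # 1 = female, 2 = male
    "mat_edu": rng.integers(1, 4, n),
    "econ": rng.normal(4.6, 2, n),
})

# Mean-center predictor and moderator so lower-order terms stay interpretable.
df["iso_c"] = df["isolation"] - df["isolation"].mean()
df["dis_c"] = df["distress"] - df["distress"].mean()

formula = "problems ~ iso_c * dis_c + sex + mat_edu + econ"
fit = smf.ols(formula, data=df).fit()

# Percentile bootstrap (5,000 resamples) of the interaction coefficient,
# mirroring PROCESS's robustness to non-normal data.
boot = np.array([
    smf.ols(formula, data=df.sample(n, replace=True, random_state=i))
       .fit().params["iso_c:dis_c"]
    for i in range(5000)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```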
Preliminary Analysis: Descriptive Statistics and Bivariate Correlations
Descriptive statistics for the study variables can be found in Table 1.
Regarding stressors related to COVID-19, most children were reported to have somewhat or very limited opportunities (score of 2 or higher) to interact with other family members (n = 279, 88.5%), to interact with other children (n = 274, 87%), or to participate in outdoor activities (n = 280, 88.8%). Moreover, 19.6% (n = 62) and 44.4% (n = 140) of caregivers somewhat or completely agreed (score of 2 or higher) that they had less time or were less emotionally available, respectively, to interact with their child due to the pandemic. In addition, 57.7% (n = 182) of caregivers agreed (score of 2 or higher) with the statement "I have become depressed because of COVID-19". Finally, for 56.2% (n = 178), 7% (n = 22), and 33% (n = 105) of families, respectively, the COVID-19 pandemic had a somewhat to very negative impact (score of 2 or higher) on overall family finances, on the acquisition of essential goods, or had led to a reduction in family income due to job loss.
Correlations Between Child EBH and Covariates
Examination of potential demographic covariates revealed no significant correlations between toddlers' EBH and child age or caregiver's age, sex, marital status, number of children, employment status, or area of residence (all p > .05). Girls were reported to have significantly fewer emotional/behavioral problems (t(342) = -2.66, p = .008) and higher emotional/behavioral competencies (t(313) = 4.47, p < .001) than boys. Lower maternal education (r_s = -0.14, p = .014) and greater COVID-19 economic hardship (r = .18, p = .001) were significantly associated with more emotional/behavioral problems. Maternal education was also significantly correlated with child emotional/behavioral competencies (r_s = 0.19, p < .001). No other statistically significant correlations were observed. Based on these results, in the subsequent analyses we controlled for the effect of child sex, maternal education, and COVID-19 economic hardship on toddlers' EBH.
Correlations Between Child EBH, Predictor and Moderator
Regarding the relations between child EBH and the main study variables, COVID-19-related social isolation was found to be significantly correlated with toddlers' emotional/behavioral problems (r = .15, p = .010), but not with child competencies. Caregiving distress was significantly correlated with more emotional/behavioral problems (r = .25, p < .001) and with fewer emotional/behavioral competencies (r = -.13, p = .025). Greater social isolation was significantly correlated with higher levels of caregiving distress (r = .28, p < .001).

Moderation Analyses

Table 2 presents the moderation analyses assessing the main and interaction effects of COVID-19-related social isolation and caregiving distress on toddlers' emotional/behavioral competencies and problems, while controlling for the effect of child sex, maternal education, and COVID-19 economic hardship (Table 2 notes: CI = confidence interval; child sex was coded as 1 = female, 2 = male). Social isolation was associated with more emotional/behavioral problems among toddlers exposed to high, but not low, levels of caregiving distress (see Fig. 2). The final regression model accounted for 14% of the variance.
Discussion
The present cross-sectional study explored the impact of the second mandatory lockdown in Portugal, which occurred during the third peak of the COVID-19 pandemic in Europe, on the well-being of toddlers and their families, by examining the putative associations between COVID-19-related social isolation experienced by the child, caregiving distress, and toddlers' EBH (including competencies and problems), while controlling for the effect of COVID-19 economic hardship and other sociodemographic factors (i.e., child sex, maternal education). Results revealed no significant main effect of COVID-19-related social isolation or caregiving distress on toddlers' emotional/behavioral competencies. However, social isolation was found to be a significant individual predictor of toddlers' emotional/behavioral problems. This result may suggest that although the COVID-19 crisis may not have an impact on the normative development of toddlers' emotional/behavioral abilities, it may represent a disruptive experience triggering the manifestation of emotional/behavioral problems. This possibility is certainly in line with recent COVID-19 literature showing that stressors related to the pandemic, most notably isolation and quarantine, negatively impact child and youth mental health (Cohodes et al., 2021; Feinberg et al., 2021; Jiao et al., 2020; Loades et al., 2020; Orgilés et al., 2020), as well as with an extensive body of literature showing the importance of toddlers' early socialization with adults and peers (Brownell & Brown, 1992; Vandell, 2000) and participation in outdoor activities (Burdette & Whitaker, 2005) for their EBH. However, it also points to the need for more research, anchored in a longitudinal design, to better understand the impact of COVID-19 on the EBH of very young children, including both competencies and problems. After all, to date, most studies have focused exclusively on the examination of emotional and behavioral problems - and not competencies - in the context of the pandemic.

Fig. 1 Caregiving distress moderates the effects of COVID-19-related social isolation on toddlers' emotional/behavioral competencies
In line with our hypothesis, caregiving distress was found to be a significant moderator of the relation between COVID-19-related social isolation and toddlers' emotional/behavioral competencies and problems. As a result of the pandemic, caregivers experienced many changes to their routines: parents had to work from home while struggling with childcare, or had to work at essential jobs, had to face daily concerns about the health and well-being of the family, and some had to deal with job insecurity. Not surprisingly, studies have pointed to an increase in parenting stress levels, burnout and emotional distancing from the child, and other negative outcomes for parents (e.g., anxiety, depression) in the context of COVID-19 (Achterberg et al., 2021; Aguiar et al., 2021; Calvano et al., 2021). This might be particularly concerning in the case of Portuguese families, where both parents usually work long hours (Torres et al., 2014) and often count on the support of extended family members - particularly grandparents - for childcare, which may hamper their capacity to manage the demands of both childcare and work in a situation of lockdown. It is well established that such factors may impair parents' ability to understand and respond properly to their child's needs and to engage in supportive and adaptive parenting behavior (Abidin, 1992), thus having a pervasive impact on child EBH (Lim & Shim, 2021). In fact, recent studies have shown that higher levels of parenting stress due to the COVID-19 crisis were associated with more negative parenting practices such as punitiveness (Wolf et al., 2021) and emotional abuse (Calvano et al., 2021), less parent-child closeness (Chung et al., 2022), and less involvement in children's activities (Spinelli et al., 2021). In line with such findings, recall that, in the present study, most of the caregivers agreed that lockdown had an impact on their emotional well-being, and almost half reported being less emotionally available to interact with their child due to the pandemic.

Fig. 2 Caregiving distress moderates the effects of COVID-19-related social isolation on toddlers' emotional/behavioral problems
The significant moderations observed in this report suggest that in times of crisis, such as a pandemic, caring for the well-being of caregivers may serve as an important strategy to protect young children from the negative impact of stressors, such as social isolation, on their EBH. This is supported by theoretical considerations highlighting that early supportive relationships with caregiving adults, during the first years of life, are fundamental to children's adaptive emotional and behavioral functioning (Bowlby, 1969, 1973). It is also consistent with years of research suggesting that sensitive and responsive parenting can mitigate the detrimental impact of early negative events (Ruberry et al., 2018), as well as with recent work pointing to a multifinality in the effect of COVID-19 on child functioning and development - i.e., not all children experienced difficulties due to the pandemic - which is likely explained by multiple factors, including family ones (Chung et al., 2022; Cohodes et al., 2021; Conway et al., 2020).
Maternal education and COVID-19 economic hardship were also significantly associated with toddlers' EBH. It is well established that parental education and other family socioeconomic factors (e.g., employment, income) are, per se, strong predictors of toddlers' EBH (Conger & Donnellan, 2007). The pandemic significantly increased families' exposure to socioeconomic challenges, such as job or income loss, that represented a major threat to parents' and toddlers' well-being. Furthermore, for parents who continued their job activities from home, the balance between work and family demands within the household was a major challenge, especially for those who lived in poor housing conditions or in situations of vulnerability (Craig & Churchill, 2021; Usher et al., 2020). In fact, previous studies showed that families exposed to higher socioeconomic disadvantage during the COVID-19 crisis, including job loss, were at particular risk for maladaptive parenting practices (Calvano et al., 2021) and deterioration in parent and child well-being (Feinberg et al., 2021). Moreover, child sex emerged as a significant predictor of toddlers' EBH. These findings are in line with previous research suggesting that boys tend to experience emotional/behavioral difficulties more often than girls (Briggs-Gowan et al., 2004).
The present study extends a growing body of research showing the negative impact of the COVID-19 pandemic on children's and caregivers' functioning. Results highlight the importance of providing support to caregivers during (current and future) public health crises, such as a pandemic, to prevent and reduce their experienced stress and toddlers' emotional/behavioral problems. Such support could be provided, for example, through online and/or telehealth interventions including stress management, parenting counseling, promotion of coping skills, capitalization on protective factors, access to social support, and/or support for dealing with specific lockdown measures such as home schooling. Feinberg et al. (2021) suggested that positive coparenting (i.e., parents' mutual support and childrearing coordination) could also be a powerful target to support families in a period of public health crisis, following a strong body of evidence showing that coparenting quality plays a significant role in promoting parents' mental health, positive parenting, and children's adjustment (Feinberg & Jones, 2018). Policies should take into consideration the implications of the COVID-19 crisis for children's and parents' emotional well-being and promote psychosocial support interventions not only in the immediate term but also in the future. Such support may be particularly relevant for families exposed to socioeconomic disadvantages.
Limitations and Future Directions
Some limitations of the present study should be acknowledged. First, the cross-sectional nature of the data did not allow us to determine causality. For a better understanding of the long-term impact of the COVID-19 crisis, future research should include the longitudinal examination of COVID-19 effects on toddlers' EBH, associated mechanisms and pathways, as well as potential risk and resilience factors. Second, the assessment of toddlers' EBH was conducted through parental report, which may be more vulnerable to bias (Bornstein et al., 2015). In future studies assessing toddlers' EBH in the context of the COVID-19 crisis, it would be relevant to use direct observational measures whenever possible. Third, 93% of toddlers in our sample were attending childcare center services, which is a strong protective factor for toddlers' EBH. Therefore, generalization of results to toddlers who do not attend childcare services should be made carefully. Furthermore, most of the caregivers in our sample were highly educated. This limitation, combined with the fact that the data were collected through an online survey that limited the reach of families with no access to a web connection, might limit the generalization of results to families living in extreme socioeconomic risk conditions. Furthermore, 86% of the children were raised in two-parent households. Therefore, our results are not generalizable to single-parent families, who are more vulnerable to COVID-19 stressors according to previous research (Hertz et al., 2021). The internal consistency of the BITSEA Competence Scale was found to be below 0.70, very similar to the results obtained in other studies (e.g., α = 0.65, Briggs-Gowan et al., 2004), and therefore findings should be interpreted carefully. Other aspects that were not considered in the present study and should be addressed by future studies include the examination of both parents' stress experiences and their impact on toddlers' functioning, the examination of high-risk groups such as toddlers with an intellectual disability or neurodevelopmental disorders, and the cross-cultural comparison of the effects of the COVID-19 crisis on families between different countries.
Conclusion
Our study provides novel and preliminary evidence of the negative impact of the COVID-19 crisis on the functioning of Portuguese families and toddlers' emotional/behavioral problems. During public health crises such as COVID-19, it might be relevant to provide and strengthen psychosocial support to parents and toddlers; reducing caregiving distress may be an important way of promoting toddlers' EBH. Additional research should explore the long-term impact of COVID-19 stressors on caregiver adjustment and child development while identifying resilience factors. This is critical to preventing and minimizing the negative consequences of future pandemics on the well-being of children and their parents.
"year": 2022,
"sha1": "67b78c3696e295ec2f62a687ad499c9f6cc6fc1c",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12187-022-09964-y.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "3eb4d67505f1dae8d7fa8b2b816e8911acfe4f0f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
ERRATUM FOR "NONHOLONOMIC AND CONSTRAINED VARIATIONAL MECHANICS"
There is an error in the statement of Theorem 4.25 in [1], a somewhat related typographical error in Remark 4.26, and an error in Remark 4.27 following directly from that in Theorem 4.25. Footnote 8 is also now obsolete. In order to ensure that the errors are unambiguously fixed, what appears below should replace the original text starting from just before the statement of Theorem 4.25 and ending at the end of Section 4.
Next we consider the case of affine vector fields. Here we wish to obtain conditions on a defining subbundle ∆ that ensure that its corresponding affine subbundle variety A(∆) remains in a given subbundle F. However, because A(∆) may be empty, we would like instead to make the problem into one that always has a solution, and then leave the matter of checking whether A(∆) is nonempty to something one can do afterwards. To this end, we note that, if A(∆) ⊆ E is flow-invariant under the affine vector field X^aff, then is flow-invariant under X^aff, according to Lemma 4.18. Clearly Therefore, we seek conditions on a defining bundle ∆ ⊆ E* ⊕ R_M that is flow-invariant under X^aff (meaning, by definition, that it is flow-invariant under X^{aff,*}) and satisfies Λ(∆) ∩ (E × {1}) ⊆ F. The following result gives conditions to this end, recalling from (10) the definition ∆_1 = pr_1(∆).
By letting λ = 0 and g be arbitrary, we see that this implies that a = 1 (of course).
Thus we arrive at the conclusion that condition (i) is equivalent to .
With the preceding observations in place, we can prove the theorem. Finally, we have by part (iii) of the lemma and noting that ∆_1 = pr_1(∆). By 3, we have .
Now let U ⊆ M be open, and let λ ∈ G^r_{Λ(F)}(U) and g ∈ C^r_M(U), and compute Applying pr_1 to this inclusion gives ∇_{X_0}λ ∈ G^r_{∆_1}(U), which is part (iic) of the theorem.
We conclude from Proposition 4.13 that, when r = ω, or when r = ∞ and F is a subbundle, all integral curves of X^aff with initial conditions in Λ(∆) remain in F. Since Λ(∆) is flow-invariant under X^aff (as we pointed out in the preamble to the proof), this implies (i).
One can combine the previous results with Proposition 4.13 to obtain the following procedure for finding invariant affine subbundles contained in a given subbundle.
We first consider the linear case.
Remark 4.26 (Finding invariant cogeneralised subbundles contained in a cogeneralised subbundle). Let r ∈ {∞, ω}, let π : E → M be a C^r-vector bundle, let ∇ be a C^r-linear connection in E, let F ⊆ E be a C^r-cogeneralised subbundle, let X_0 ∈ Γ^r(M) be complete, and let A ∈ Γ^r(End(E)). Denote Find a flow-invariant cogeneralised subbundle L ⊆ F satisfying the following algebraic/differential conditions: We shall say that an L satisfying these conditions is (X^lin, F)-admissible. The resulting cogeneralised subbundle L is then flow-invariant under X^lin and is contained in F. •
Remark 4.27 (Finding invariant affine subbundle varieties contained in a cogeneralised subbundle). Let r ∈ {∞, ω}, let π : E → M be a C^r-vector bundle, let ∇ be a C^r-linear connection in E, let F ⊆ E be a C^r-cogeneralised subbundle, let X_0 ∈ Γ^r(M) be complete, let b ∈ Γ^r(E), and let A ∈ Γ^r(End(E)). Denote Find a flow-invariant defining subbundle ∆ ⊆ E* ⊕ R_M satisfying the following algebraic/differential conditions: We shall say that a ∆ satisfying these conditions is (X^aff, F)-admissible. Having found such a ∆, check the following: 4. the set S(A(∆)) = {x ∈ M | (0, 1) ∈ ∆_x} is nonempty. The resulting affine subbundle variety A(∆) is then flow-invariant under X^aff and is contained in F. • The methodology outlined in the preceding constructions involves some interesting partial differential equations with algebraic constraints. With some effort, it might be possible to apply the integrability theory for partial differential equations [23,24] to arrive at the obstructions to solving these equations. An application of the resulting conditions to the setup of Section 7 would doubtless lead to some interesting answers to the central questions of this paper.
"year": 2020,
"sha1": "c3497857b5c057b9e141c67f99036917d6736730",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=4566c206-2ac6-443a-a43e-099298cb7269",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "102bdc8b5ede1bc201e90c1bca682594df5c9833",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Pathogenic mitochondrial DNA mutations inhibit melanoma metastasis
Mitochondrial DNA (mtDNA) mutations are frequently observed in cancer, but their contribution to tumor progression is controversial. To evaluate the impact of mtDNA variants on tumor growth and metastasis, we created human melanoma cytoplasmic hybrid (cybrid) cell lines transplanted with wildtype mtDNA or pathogenic mtDNA encoding variants that partially or completely inhibit oxidative phosphorylation. Homoplasmic pathogenic mtDNA cybrids reliably established tumors despite dysfunctional oxidative phosphorylation. However, pathogenic mtDNA variants disrupted spontaneous metastasis of subcutaneous tumors and decreased the abundance of circulating melanoma cells in the blood. Pathogenic mtDNA did not induce anoikis or inhibit organ colonization of melanoma cells following intravenous injections. Instead, migration and invasion were reduced, indicating that limited circulation entry functions as a metastatic bottleneck amidst mtDNA dysfunction. Furthermore, analysis of selective pressure exerted on the mitochondrial genomes of heteroplasmic cybrid lines revealed a suppression of pathogenic mtDNA allelic frequency during melanoma growth. Collectively, these findings demonstrate that functional mtDNA is favored during melanoma growth and enables metastatic entry into the blood.
Introduction
Pathogenic mutations within the mitochondrial genome (mtDNA) are widely recognized as causative for inherited diseases, yet their role in the pathology of acquired diseases is largely unknown [1-3]. Somatic mtDNA mutations commonly occur in human tumors, with an incidence rate greater than 50% [4-6]. Certain tumor types such as colorectal, thyroid, and renal cancers exhibit a disproportionately high incidence and allelic burden of deleterious mtDNA mutations [5-9], and are typically associated with "oncocytic" changes secondary to excessive mitochondrial accumulation [10]. However, high allelic deleterious mtDNA mutations are atypical in most tumors, and the majority of cancers maintain somatic mtDNA mutations at a low allelic frequency [4-6]. Additionally, most cancer types appear to favor functional mtDNA, as indicated by a lower allelic frequency of deleterious mutations relative to nonpathogenic variants [4-6]. These correlation studies have yet to be functionally investigated in an experimental model that rigorously evaluates the impact of mtDNA variants on tumor progression in otherwise isogenic backgrounds. In particular, in vivo studies with nuclear isogenic tumors are needed to test whether selective pressures maintain functioning mtDNA in most tumor types.

A number of studies have suggested that healthy mitochondria promote cancer progression. For example, disseminated cancer cells in melanoma, breast, and renal cancers display increased expression of the mitochondrial biogenesis transcription factor PGC-1α, which promotes mitochondrial mass and oxygen consumption [11-13]. In oral squamous cell carcinoma, metastatic cells are observed to enhance mtDNA translation through mt-tRNA modifications [14].
Generation of melanoma cybrid models with loss-of-function mtDNA variants
In the immortalized human melanoma cell line A375, we established isogenic cytoplasmic hybrid (cybrid) models, each harboring distinct mitochondrial genomes. For cybrid model generation, the endogenous mtDNA needed to be depleted so that exogenous mtDNA sources could repopulate the mtDNA pool. Following a two-week treatment with 5 μM or 10 μM dideoxycytidine (ddC), an irreversible inhibitor of mtDNA replication [22], we established multiple A375 clones in which mtDNA was reduced to undetectable levels (Extended Data Fig. 1a).

To evaluate the impact of mtDNA variants on tumor growth and progression, we generated a panel of A375 homoplasmic cybrids (variant allele frequency (VAF) = 1) (Fig. 1a). These cybrids carried either wildtype (WT) mtDNA (with no pathogenic variants) or pathogenic variants associated with human disease (Table 1). We established and validated multiple independent clonal lines for each cybrid model, ensuring the retention of the A375 nuclear genome (through STR analysis) and successful transplantation of the mtDNA genome by Sanger sequencing (Extended Data Fig. 3a-c). To precisely verify homoplasmic allelic frequency for the partial loss-of-function models, ATP6 and ND1, we utilized digital droplet PCR (ddPCR) (Fig. 1b,c). For the CO1 cybrids, western blot analysis revealed a loss of mt.CO1 protein expression, while expression of another mtDNA-encoded protein, mt.ATP8, remained intact (Fig. 1d).

All cybrid lines displayed a restoration of mitochondrial genome content relative to the A375 ρ0 clone, albeit at levels lower than the parental line (Fig. 1e). The mitochondrial mass (based on MTGreen staining) of the cybrid lines was generally elevated relative to the parental line (Fig. 1f). The influence of the mitochondrial genome on oxygen consumption reflected the anticipated functional consequences of the respective pathogenic mtDNA alleles [26,27]. Specifically, the mitochondrial oxygen consumption rate was unchanged between the parental line and the WT cybrids, partially reduced in the ATP6 and ND1 cybrids, and completely ablated in the CO1 cybrids (Fig. 1g).

Tumor growth is sustained in mtDNA dysfunctional cybrids

We subcutaneously xenografted the A375 cybrids into the hind flank of immunocompromised NOD.CB17-Prkdc^scid Il2rg^tm1Wjl/SzJ (NSG) mice and monitored tumor growth over time (Fig. 2a). Despite variations in ETC capacity, all cybrid models reliably established tumors at either 100 or 10,000 cell injections (Fig. 2b-c). Upon tumor harvesting, we confirmed the in vivo stability of the homoplasmic pathogenic mtDNA variants (Extended Data Fig. 4a-c). Observed growth rates among the subcutaneous tumors were heterogeneous, with a general reduction in comparison to the parental line - an effect further accentuated in the models with mtDNA dysfunction (Fig. 2c). Assessment of Ki67 staining across the tumors revealed comparable levels of proliferation amongst the cybrid models (Fig. 2d, Extended Data Fig. 5a-d). Histological analysis indicated substantial areas of tumor necrosis within the WT, ND1, and ATP6 cybrid tumors (Fig. 2e,f, Extended Data Fig. 6a). Conversely, the CO1 tumors exhibited negligible tumor necrosis and displayed an increased prevalence of disorganized, or discohesive, regions. Pimonidazole staining revealed that tumors with WT, ATP6, and ND1 mtDNA contained comparable levels of hypoxia (Fig. 2g,h). In contrast, no detectable hypoxic regions were observed in tumors with the CO1 variant (Fig. 2g,h). Interestingly, we failed to find significant differences in mitochondrial biomass, assessed by measuring mitochondrial protein expression (TOMM20 on the outer membrane, and HSP60 in the matrix), and mitochondrial genome content among the cybrid tumors (Fig. 2i,j), indicating a lack of oncocytic transformation. Collectively, the pathogenic mtDNA variants did not preclude the growth of subcutaneous melanoma xenografts, as evidenced by 100% of implants forming tumors, but generally reduced tumor growth rates. While the WT, ATP6, and ND1 tumors presented with comparable morphology, the most severe loss-of-function model, CO1, presented with histological, hypoxic, and necrotic variations.
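This excerpt does not detail how mitochondrial genome content was quantified. A common approach, shown below as a hedged sketch rather than the authors' protocol, is qPCR of a mitochondrial amplicon against a single-copy nuclear amplicon with delta-delta-Ct normalization to the parental line; the function name and Ct values are illustrative, and ~100% primer efficiency is assumed.

```python
import numpy as np

def mtdna_copy_number(ct_mt: np.ndarray, ct_nuc: np.ndarray,
                      ref_delta_ct: float) -> np.ndarray:
    """Relative mtDNA content by the delta-delta-Ct method.

    ct_mt / ct_nuc: qPCR cycle thresholds for a mitochondrial and a
    single-copy nuclear amplicon; ref_delta_ct: mean (Ct_mt - Ct_nuc)
    of the reference (parental) line.
    """
    delta_ct = ct_mt - ct_nuc
    return 2.0 ** (-(delta_ct - ref_delta_ct))

# Toy values: a sample whose Ct_mt sits one cycle above the parental
# reference reports ~0.5x the parental mtDNA content.
print(mtdna_copy_number(np.array([16.0]), np.array([22.0]), ref_delta_ct=-7.0))
```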
Pathogenic mtDNA variants suppress spontaneous metastasis
The xenografted lines were engineered to express luciferase, enabling quantitation of spontaneous metastatic disease burden via bioluminescence imaging of organs. Specifically, the total spontaneous metastatic burden, as measured by bioluminescence of dissected organs, was analyzed when primary subcutaneous tumors attained a size of 20-25 mm in diameter (Fig. 3a). Both the WT and partial loss-of-function ATP6 cybrid tumors exhibited spontaneous metastasis at levels comparable to the parental line (Fig. 3b,c). In contrast, the ND1 cybrid tumors exhibited a substantial decrease in spontaneous metastatic burden, and no metastasis was detected from the CO1 cybrid tumors (Fig. 3b,c). Correspondingly, there was a substantial reduction in the frequency of circulating melanoma cells in the blood of all mtDNA mutant cybrid lines (Fig. 3d, Extended Data Fig. 7a,b).

To extend these findings, we investigated whether pharmacologic inhibition of mitochondrial ETC function in tumors with functional mtDNA suppresses the emergence of circulating melanoma cells in the blood. Mice bearing advanced-stage melanoma patient-derived xenograft (PDX) UT10 [28,29] tumors were subjected to an acute 5-day oral gavage of either 10 mg/kg IACS-010759 (an established bioavailable complex I inhibitor [30]) or 0.5% methylcellulose vehicle control (Fig. 3e). The short-term IACS-010759 treatment did not induce changes in primary tumor size or organ metastatic burden (Fig. 3f-h). However, IACS-010759 treatment led to a significant decrease in the number of circulating melanoma cells in the blood (Fig. 3i). These findings indicate that either genetic or pharmacologic impairment of mitochondrial ETC activity can inhibit the appearance of melanoma cells in the blood.
Pathogenic mtDNA variants inhibit tumor cell motility and invasion
Considering that the onset of anoikis may limit metastasis [31], we investigated whether the cybrid models exhibited differential detachment survival potentials. In line with observations of metabolic perturbation induced by detachment [32-35], detached culture of the A375 cybrid models increased reactive oxygen species (ROS), reduced glucose consumption, and reduced lactate excretion (Fig. 4a-c). Relative to the WT line, these effects were exacerbated in the complete loss-of-function CO1 model. Despite these detachment-induced stresses, the mtDNA mutant cybrid lines exhibited significantly elevated cell counts following 24 hours of detached culture relative to the WT line (Fig. 4d). Detachment resulted in only marginal reductions in viability and minimal increases in apoptosis for all cybrid lines (Fig. 4e,f). To directly investigate metastatic seeding from the blood stream, we injected 1,000 cells of each cybrid line into the tail vein of NSG mice (Fig. 4g). Live BLI imaging showed significant bioluminescence signal in all cybrid lines, irrespective of ETC capacity (Fig. 4h). Further analysis of dissected organs indicated no significant difference in the total metastatic disease burden (Fig. 4i). These results demonstrate that mtDNA mutations do not significantly inhibit the ability of melanoma cells to survive detachment or seed metastatic sites following direct bloodstream injections.

We therefore hypothesized that mtDNA mutations might impede metastatic entry into the blood. The migratory potential of cybrids was examined under various glucose concentrations that correspond to high (25 mM), plasma (5 mM) and tumor interstitial fluid (TIF) (1 mM) levels [36]. Under conditions of TIF glucose availability, continuous oxygen consumption analysis revealed a significant increase in oxygen consumption for the WT, ATP6, and ND1 cybrid lines, indicating that TIF conditions stimulate mitochondrial oxidative activity (Extended Data Fig. 8a,b). Notably, these differences were not a consequence of altered cellular viability (Extended Data Fig. 8c-e).

Although sequence analysis of human tumors has suggested that cancers select for wildtype mitochondrial genomes, our homoplasmic cybrid experiments demonstrate that loss-of-function mitochondrial mutations do not abolish growth of subcutaneous melanoma xenografts. However, these experiments did not address pressures that might restrict the expansion of dysfunctional mtDNA genomes. To probe tumor evolution and potential selective pressures on mtDNA, we generated heteroplasmic cybrid clones by fusing cytoplasts derived from mtDNA dysfunctional donors into the homoplasmic WT cybrid line (Fig. 5a).
Immediately following heteroplasmic cybrid fusion, the ATP6, ND1, and CO1 alleles presented as heteroplasmic with their respective wildtype alleles (Extended Data Fig. 9a-c). However, the ND1 and CO1 heteroplasmic models consistently shifted toward increased VAF of the wildtype allele, indicating that these heteroplasmic pathogenic alleles were not stable during clonal expansion (Extended Data Fig. 9d,e). Notably, the ATP6 allele maintained heteroplasmy in culture, as assessed by ddPCR analysis (Fig. 5b). We chose four heteroplasmic ATP6/WT clones for further analysis (Fig. 5b, red arrows). Single-cell analysis of the ATP6 heteroplasmic frequencies in individual clones demonstrated that these clonal lines contained a distribution of single cells with allelic frequencies centered around the calculated allelic frequencies from bulk analysis (Fig. 5c). We noted that the cybrid lines with a higher ATP6 allelic frequency (~50%, clone 1 and clone 2) exhibited a lower oxygen consumption rate than clones with a lower ATP6 allelic frequency (~30%, clone 3 and clone 4) (Fig. 5d).

These four heteroplasmic ATP6/WT clones, two at ~50% ATP6 VAF and two at ~30% ATP6 VAF, were used for concurrent passage in culture and subcutaneous xenografting (Fig. 5e). After subcutaneous xenograft of 100 cells, all clones reached maximal tumor size in ~40 days (Fig. 5g). In culture, there were minimal shifts in the single-cell allelic frequency in these four clones (Fig. 5f). In contrast, we observed that subcutaneous tumors consistently shifted toward increased VAF of the wildtype allele (Fig. 5h). We observed similar results following subcutaneous injection of 10,000 cells per mouse (Extended Data Fig. 10a-e). Further, intravenously injected heteroplasmic cell lines also shifted toward the wildtype allele in metastatic nodules of visceral organs (Fig. 5i-l). These results indicate that A375 melanoma growth exhibits selection for wildtype mitochondrial genomes when implanted in mice, irrespective of growth in subcutaneous or visceral space.
Discussion
We identified that isogenic melanoma cybrids transplanted with dysfunctional mitochondrial genomes are capable of sustaining tumor proliferation.

Prior to this report, the impact of complete loss of complex IV function on tumor progression was unknown. The mt.6692del CO1 mutation in this paper was derived from a human melanoma patient-derived xenograft model (M405) [27] and has also been reported in human colonic crypts [37,38], myopathy [39], the peripheral blood of a breast cancer patient [40], and within the PCAWG/TCGA dataset for the following cancer types: bone, breast cancer, prostatic adenocarcinoma, esophageal adenocarcinoma, renal cell carcinoma, glioma, hepatobiliary cancer, and non-small cell lung cancer [5,6]. We previously reported metabolic tracing in the mt.6692del M405 PDX model and demonstrated a lack of TCA cycle metabolic activity, as well as minimal metabolic perturbation upon treatment with IACS-010759 [27]. Here, we build on the effects of mt.6692del (the CO1 mutant) and demonstrate that these tumors do not histologically present regions of tumor necrosis or hypoxia, yet exhibit a high proportion of discohesive regions. These results indicate that mitochondrial respiration can contribute to tumor necrotic processes, and future studies expanding to more cancer types will be needed to establish the relationship between severe mtDNA impairment and necrosis.

Lastly, the analyzed partial loss-of-function cybrid lines, ATP6 and ND1, were derived from well-characterized human mitochondrial disease models. The influence of mitochondrial disease on cancer progression is largely uncharacterized, but will grow in importance as patient survival improves. Within the scope of the studied alleles, these findings suggest that mitochondrial disease may not preclude melanoma development but rather attenuate its severity. Similar studies in other cancer types will be needed to establish the generality of these results.
Experimental models
Immortalized human melanoma cell line A375 (CRL-1619) was obtained from ATCC. Melanoma patient-derived xenograft model UT10 was obtained with informed consent. Cell culture models were trypsin (T4049, Sigma-Aldrich) digested for 5 minutes at 37°C to dissociate them from adherent cultures, followed by room-temperature centrifugation at 200g for 3 minutes. Cells were resuspended, at the desired cell count for injection, in staining media (L15).
Bioluminescence imaging
Metastatic disease burden was monitored using bioluminescence imaging (all melanomas were tagged with a bicistronic lentiviral construct (FUW lentiviral expression vector) carrying dsRed2 and luciferase (dsRed2-P2A-Luc)). Five minutes before performing luminescence imaging, mice were injected intraperitoneally with 100 μl of PBS containing D-luciferin monopotassium salt (40 mg/ml) (Biosynth), and mice were anaesthetized with isoflurane 2 minutes prior to imaging.
Single cell digital droplet PCR quantification of mtDNA
Single-cell digital droplet PCR analysis was performed as previously described [44].

For transwell assays, non-migrated cells were removed from the upper surface of the insert with a Q-tip. The inserts were allowed to dry for several hours, after which the membrane was cut and imaged with a Primovert ZEISS microscope using a ×10 objective. All images were recorded with ZEN software.
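Although the full ddPCR protocol is abbreviated in this copy, variant allele frequency from a duplexed mutant/wildtype ddPCR assay is conventionally derived from Poisson-corrected droplet counts. The sketch below illustrates that calculation; the droplet counts are hypothetical, and the duplex probe design is an assumption based on standard ddPCR practice rather than the authors' stated protocol.

```python
import numpy as np

def ddpcr_concentration(positive: int, total: int) -> float:
    """Poisson-corrected target copies per droplet: lambda = -ln(fraction negative)."""
    return -np.log(1.0 - positive / total)

def variant_allele_frequency(mut_pos: int, wt_pos: int, total: int) -> float:
    """VAF from a duplexed ddPCR assay with mutant- and wildtype-specific probes."""
    lam_mut = ddpcr_concentration(mut_pos, total)
    lam_wt = ddpcr_concentration(wt_pos, total)
    return lam_mut / (lam_mut + lam_wt)

# Hypothetical droplet counts corresponding to roughly 50% heteroplasmy.
print(variant_allele_frequency(mut_pos=4000, wt_pos=4100, total=15000))
```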
Analysis of mitochondrial mass
To assess intracellular mitochondrial mass, adherent cells were washed with PBS and incubated for 30 minutes at 5% CO2 and 37°C in staining medium with 20 nM MitoTracker™ Green FM (Thermo Fisher Scientific, M7514). Cells were then washed with staining medium before analysis.
Statistical analysis
Mice were allocated to experiments randomly and samples processed in an arbitrary order, but formal randomization techniques were not used. Sample sizes were not pre-determined based on statistical power calculations but were based on our experience with these assays. For assays in which variability is commonly high, we typically used n > 10. For assays in which variability is commonly low, we typically used n < 10. All data representation is indicated in the figure legend of each figure. No blinding or masking of samples was performed. All represented data are unique biological replicates.

Prior to analyzing the statistical significance of differences among treatments, we tested whether the data were normally distributed and whether variance was similar among treatments. To test for normal distribution, we performed the Shapiro-Wilk test. To test whether variability significantly differed among treatments, we performed F-tests. When the data significantly deviated from normality or variability significantly differed among treatments, we log2-transformed the data and tested again for normality and variability. Fold-change data were log2-transformed. If the transformed data no longer significantly deviated from normality and equal variability, we performed parametric tests on the transformed data. If the transformed data remained significantly deviated from normality or equal variability, we performed non-parametric tests on the non-transformed data. For normally distributed data, groups were compared using the two-tailed Student's t-test (for two groups), or one-way ANOVA or two-way ANOVA (>2 groups), followed by Dunnett's or Tukey's test for multiple comparisons. For data that were not normally distributed, we used non-parametric testing (Kruskal-Wallis test for multiple groups), followed by Dunn's multiple comparisons adjustment. All statistical analyses were performed with GraphPad Prism.

All data and materials are available from the corresponding author upon request.
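The decision flow described above lends itself to a compact implementation. The following sketch, for the two-group case, applies Shapiro-Wilk and F-tests, retries after log2 transformation, and otherwise falls back to a non-parametric test; the authors specify Kruskal-Wallis for multiple groups, so the Mann-Whitney U test used here for two groups is an assumed analogue, and SciPy is assumed.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a: np.ndarray, b: np.ndarray, alpha: float = 0.05):
    """Normality (Shapiro-Wilk) and equal-variance (two-sided F-test) checks;
    log2-transform and retest on violation; non-parametric fallback otherwise."""
    def assumptions_ok(x, y):
        normal = (stats.shapiro(x).pvalue > alpha and
                  stats.shapiro(y).pvalue > alpha)
        f = np.var(x, ddof=1) / np.var(y, ddof=1)
        p_var = 2 * min(stats.f.cdf(f, len(x) - 1, len(y) - 1),
                        stats.f.sf(f, len(x) - 1, len(y) - 1))
        return normal and p_var > alpha

    if assumptions_ok(a, b):
        return "t-test", stats.ttest_ind(a, b).pvalue
    la, lb = np.log2(a), np.log2(b)          # assumes strictly positive data
    if assumptions_ok(la, lb):
        return "t-test (log2)", stats.ttest_ind(la, lb).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue
```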
Figure legend excerpts (figure panels not reproduced in this copy): Homoplasmic growth data (black circle) is repeated as a reference in all panels; P values reflect comparison with the wildtype (WT) group. h,i, Representative ddPCR two-dimensional plots with probes specific to ND1 G3460A. b-e, Single-cell ddPCR analysis of heteroplasmy at mt.T8993G for ATP6/WT heteroplasmic clones following subcutaneous xenograft of 10,000 cells and tumor growth of the indicated clones; P values reflect comparisons with the initial passage. The number of cells analyzed per treatment is indicated. Data are median ± interquartile range (b-e); statistical significance was assessed using the non-parametric Kruskal-Wallis test with Dunn's multiple comparison adjustment (b-e).
"year": 2023,
"sha1": "6b144db8099934de80c6c1e4e09b863371b87f4e",
"oa_license": "CCBYNCND",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/09/05/2023.09.01.555986.full.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "fcdcc2aabec83a8e74eb44d03af51ad7137ceb40",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Design, synthesis, molecular docking study, and biological evaluation of salicylaldimine derivatives as potential histone deacetylase inhibitors (HDACi) and anticancer agents
Despite the increased success rates of histone deacetylase inhibitors (HDACi) as potent anticancer agents, hydroxamic acid-based HDAC inhibitors face many metabolic obstacles, which inspired us to develop non-hydroxamate HDAC inhibitors. Based on the established knowledge of the SAR of reported HDAC inhibitors and on the knowledge that the salicylaldimine moiety is an established chelating agent, a series of salicylaldimine-based HDAC inhibitors was designed, synthesized, and biologically evaluated. Compound 14 in the present study showed considerable HDAC inhibition and potential antiproliferative activity on NCI cell lines, rendering it a good starting point for optimization that introduces a new class of non-hydroxamate HDAC inhibitors as potential anticancer agents.
INTRODUCTION
Histone deacetylase inhibitors (HDACi) have proven their potent cytotoxic effects and their capability to reverse resistance in many cancer cell lines by regulating the expression of a number of tumor suppressor genes involved in the apoptosis of cancer cells [1]. DNA (deoxyribonucleic acid) is wrapped around histones so that it can be packed inside the nucleus in the form of chromatin, forming chromosomes, as shown in Fig. 1 [2]. HDACs catalyze the cleavage of the N-acetyl group from acetylated lysine residues located on the tails of the nucleosomal histones; thus, HDACs, together with histone acetyltransferases (HATs), regulate the degree of acetylation and deacetylation of the histones and consequently the level of expression of a given gene [3]. The overexpression of HDACs is directly linked to poor prognosis in cancer patients because they are responsible for the removal of acetyl groups from histones, allowing interaction between negatively charged DNA and positively charged histone proteins, which leads to transcriptional silencing of tumor suppressor genes and apoptotic genes. Furthermore, many non-histone proteins, such as heat shock protein-90 (HSP-90) and tubulin, are substrates for HDACs as well [4]. As a result, HDACs control the expression levels of many oncogenes and apoptotic genes and thereby control many cancer cellular processes, such as cell proliferation, cell migration, cell death, and angiogenesis. In addition, many studies have shown that HDAC inhibitors can restore the transcription of the p53 protein, which induces apoptosis in resistant cancer cells [5]. HDAC inhibitors also control the self-renewal of cancer stem cells by downregulating the expression of Nanog, one of the key transcription factors shown to promote cancer progression by regulating cancer stem cells [6,7]. It was found that HDAC knockdown in cisplatin-resistant cell lines (CisR) markedly downregulated Nanog and reversed the pluripotency of the cancer stem cells [7,8]. In addition, HDAC inhibition with SAHA decreased survivin levels in CisR cell lines. Survivin is an important cell survival protein responsible for the inhibition of apoptosis in cancer cells [7]. HDACs comprise a family of 18 enzymes divided into four classes according to their sequence homology and domain organization. The class I HDACs (HDAC1, 2, 3, and 8) are located in the nucleus, are expressed widely in various tissues, and are involved in gene expression. Class II HDACs are divided into two sub-groups, class IIa (HDAC4, 5, 7, and 9) and class IIb (HDAC6 and 10), and they are associated with cellular differentiation [9]. Class IIa HDACs shuttle between the cytoplasm and nucleus. Class IIb HDACs are situated in the cytoplasm, and HDAC-6 is involved in α-tubulin deacetylation, which may influence the mitotic process and other processes that depend on the acetylation pattern of the microtubular network [10]. In addition to all of the preceding Zn2+-dependent HDACs, class III HDACs, called sirtuins (SIRT1-SIRT7), are NAD+-dependent [11]. Class IV comprises HDAC11, a unique member owing to its specific structure.
Up until now, the FDA has approved just four HDAC inhibitors, shown in Fig. 2. The first approved HDAC inhibitor was suberoylanilide hydroxamic acid, SAHA (1), or vorinostat, under the trade name Zolinza®, developed by Merck for the treatment of refractory cutaneous T-cell lymphoma (CTCL). The second is romidepsin (2), marketed as Istodax®, developed by the Celgene pharmaceutical company for the treatment of CTCL and peripheral T-cell lymphoma (PTCL). In addition, in early 2015, the third agent approved as an HDAC inhibitor was panobinostat (3), marketed as Farydak®, developed by Novartis for oral use in combination with bortezomib and dexamethasone in patients with recurrent multiple myeloma. The fourth is belinostat (4), marketed as Beleodaq®, developed by Spectrum Pharmaceuticals for the treatment of PTCL [12].
Fig. 2. FDA approved HDAC inhibitors
It is clear from Fig. 3 that HDAC inhibitors, except for romidepsin (2), share essential pharmacophoric structural features required for activity [13]. The first feature is the hydrophobic capping group, which interacts with the surface of the enzyme and aids in recognition and binding of the compound to the enzyme. The second feature is the linker, which is essential for interaction with the enzymatic tunnel. The third is the zinc-binding group (ZBG), most commonly a bidentate chelating group such as the hydroxamic acid (HA) group, as in the potent HDAC inhibitor SAHA (1) [14]. Despite the high potency of hydroxamic acid as a ZBG for HDAC inhibition, it suffers from in vivo instability: it is extensively metabolized in the body, giving this potent HDAC inhibitor a two-hour half-life before it is glucuronylated, so it requires continuous injection [15]. This may lead to toxicity and off-target side effects, including interaction with the cardiac potassium channel and mutagenic effects; moreover, HA-based HDAC inhibitors show pan-HDAC inhibition, which increases the risk of cardiotoxicity. Therefore, there is increased interest in developing non-hydroxamate HDAC inhibitors that contain a ZBG other than HA and are capable of HDAC inhibition with greater selectivity and fewer side effects [16]. Many research groups are working on the discovery of new non-hydroxamate HDAC inhibitors, such as entinostat (5) and mocetinostat (6), which bear 2-aminobenzamide as the ZBG. Moreover, many ZBGs have been reported for the design of non-hydroxamate HDAC inhibitors, such as thiols as in compound (7), electrophilic ketones as in compound (8), mercaptoamides (9), sulfones (10), and phosphonates (11), shown in Fig. 4.
Fig. 4. Some examples of non-hydroxamate HDAC inhibitors [16]
In our study, we have explored the reported chelating agent salicylaldimine [17] and other imine analogs, 2-thiophene imine and 2-pyridine imine, as the ZBGs [18,19], connected via a phenyl linker to different surface-recognition caps containing urea, sulfonamide, piperazine, and benzothiazole moieties to form novel non-hydroxamate HDAC inhibitors targeting cancer cells. In addition, we report the synthesis, the assay of percent inhibition of the HDAC-6 isoenzyme at 10 µM, and the percent growth inhibition of cancer cells on the NCI cell line panel at a single dose (Table 2 & Fig. 11).
Design and molecular docking study
On the basis of the well-established knowledge of the structural parameters (cap, linker, ZBG) that are essential for HDAC inhibitory activity, we chose to investigate the design of some new salicylaldimine-based HDACi that might retain the desired zinc-chelating activity in place of the hydroxamic acid chelating group. Salicylaldimine is reported as a popular bidentate ligand that chelates different metals, such as iron and zinc. In addition, we investigated other imines as analogs of salicylaldimine by replacing the ortho-hydroxyl group with either a cyclic nitrogen or a cyclic sulfur, testing the ability of thiophen-2-ylmethanimine, as in compounds 17 and 28, and pyridin-2-ylmethanimine, as in compounds 16 and 27, to act as novel zinc-chelating groups. Furthermore, we chose versatile linkers for the designed compounds, such as sulfonamide, urea, or piperazine, connecting the ZBG to different surface-recognition caps: phenyl, as in compounds 25, 27, and 28; 4-isopropylphenyl, as in compound 14; diphenyl, as in compounds 24, 26, and 28; a benzothiazole cap, as in compound 21; and naphthyl, as in compounds 15, 16, and 17.
These designed compounds were further evaluated by molecular docking using the AutoDock Vina software against the HDAC-6 isoenzyme co-crystallized with trichostatin A (TSA) (PDB code: 5EDU). The pose-generation process was validated by calculating the root-mean-square deviation (RMSD) between the co-crystallized TSA and the re-docked one. The resulting RMSD was 1.293 Å, which reflects the ability of this docking engine to dock the compounds inside the active site of HDAC-6 with the right pose, as shown in Fig. 5.
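For reference, the heavy-atom RMSD used in such redocking validations (not spelled out in the text) is the standard quantity

$$\mathrm{RMSD}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\lVert \mathbf{r}_i^{\mathrm{docked}}-\mathbf{r}_i^{\mathrm{crystal}}\right\rVert^{2}}$$

where N is the number of matched heavy atoms and the r_i are their Cartesian coordinates. The obtained value of 1.293 Å falls below the conventional 2 Å acceptance cutoff for successful pose reproduction.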
The co-crystallized TSA showed essential interactions with the amino acids His611, Phe620, and Tyr782 inside the HDAC-6 binding site, as shown in Fig. 6. Most of our designed compounds showed a very similar binding mode to that of TSA, with docking scores comparable to it, especially the salicylaldimine derivatives.
The designed compound 14 showed a Vina binding score of 28, which is comparable to the lead TSA binding score of 31. Moreover, it retained the essential binding features of TSA, such as zinc binding, hydrogen bonding with His611, and pi-pi interactions with Phe620 and, additionally, with Phe680, as shown in Fig. 7. The second promising salicylaldimine derivative, with a docking score of 27.56, was compound 27 (Fig. 8), which showed binding modes very similar to those of the lead TSA, with zinc and Phe620, and formed extra interactions with Asp649 and His610. Furthermore, compound 24 (Fig. 9) showed a docking score of 27.2 while retaining the essential key interactions and forming an extra hydrogen bond with Tyr782. The use of hydrophobic moieties as capping groups, such as naphthyl in compounds 15, 16, and 17 and 6-methoxybenzothiazole as in compound 21 (Fig. 10), rendered the caps better surface binders, with an extra interaction with His651. These findings from molecular docking with AutoDock Vina directed us to proceed with the chemical synthesis of the designed compounds and their further biological evaluation.
Chemistry
Following the promising findings of the molecular docking study for our designed compounds, four schemes were designed for their synthesis. The first scheme was for the synthesis of biaryl urea-based caps and imine-based ZBGs. The synthesis of the biaryl ureas was performed by reacting 4-nitroaniline with either 1-naphthyl isocyanate or 4-isopropylphenyl isocyanate [20]. The resultant nitro-containing ureas were reduced to the corresponding amines via catalytic hydrogenation using 10% Pd loaded on charcoal [21], and the resulting amines were condensed with salicylaldehyde, thiophene-2-carboxaldehyde, or pyridine-2-carboxaldehyde under a Dean-Stark apparatus in toluene to afford the corresponding imines [22].
The second scheme was for the synthesis of the 6-methoxybenzothiazole-based cap from 4-methoxyaniline using potassium thiocyanate and bromine for oxidative ring closure in acetic acid [23]; the resultant amine intermediate was then reacted with nitrobenzenesulfonyl chloride in pyridine to afford the corresponding sulfonamide derivatives. The third scheme was designed to synthesize compound 24, which contains a diphenyl cap, a sulfonamide linker, and a salicylaldimine ZBG. This scheme begins with diphenylamine, which was reacted with nitrobenzenesulfonyl chloride in pyridine to afford the intermediate compound 22; this was then reduced to afford the corresponding amine, which was reacted with salicylaldehyde to produce the imine derivative. The fourth scheme was designed to synthesize the piperazine derivatives bearing salicylaldimine, thiophen-2-ylmethanimine, or pyridin-2-ylmethanimine as the ZBG. This scheme begins with the nucleophilic aromatic substitution of phenylpiperazine on 4-fluoronitrobenzene under basic conditions in DMF as the solvent [25]. The resultant nitro derivative was then reduced via catalytic hydrogenation to afford the aniline derivative, which was reacted with salicylaldehyde to give compound 27, with thiophene-2-carboxaldehyde to afford compound 29, and with pyridine-2-carboxaldehyde to yield compound 30.
In vitro HDAC inhibition assay
The synthesized compounds were tested for their inhibitory activity toward the HDAC-6 isoform by Bioscience Company (Table 1), and they were selected by the NCI for determination of their antiproliferative activities against the NCI panel of 60 tumor cell lines. Against the HDAC-6 isoenzyme, compound 14 showed the most potent inhibition, with 63% inhibition at a 10 µM concentration. Replacing the isopropylphenyl cap with a naphthyl cap in compound 15 diminished the percent inhibition to 5%, which suggests that the 1-naphthyl moiety may interfere with the entrance of the HDAC tunnel. Furthermore, replacing the salicylaldimine chelating group with thiophen-2-ylmethanimine in compound 16 or pyridin-2-ylmethanimine in compound 17 reduced the HDAC percent inhibition to 0% for both compounds. In addition, the 5-methoxybenzothiazole capping group may also interfere with the entrance of the HDAC tunnel, as the resulting HDAC percent inhibition was 12% for compound 21. Compound 24, which contains a biphenyl sulfonamide cap, showed a considerable HDAC percent inhibition of 52%. Interestingly, compound 27, which contains piperazine as the linker, showed 58% HDAC inhibition, but replacing the salicylaldimine chelating group with pyridin-2-ylmethanimine in compound 29 and thiophen-2-ylmethanimine in compound 30 caused a severe decline in HDAC inhibitory activity, to 3% and 0%, respectively. Modifying the phenylpiperazine moiety in compound 28 decreased the percent inhibition to 29%, instead of the 58% observed for compound 27.
Antiproliferative activity
The designed compounds were selected by the NCI to be tested on the 60-cell panel, and the results are represented in either tabular form or a one-dose graph. The compound that showed the most potent antiproliferative activity at a 10 µM single dose was compound 14, which induced 60% inhibition of the HL-60 leukemia cell line, which overexpresses HDACs. Moreover, compound 14 showed 40% inhibition of the CCRF-CEM and MOLT-4 leukemia cell lines and the UO-31 renal cancer cell line, rendering it a promising start for further optimization of its antiproliferative and HDAC inhibitory activities. Interestingly, compound 24 showed 70% inhibition of the UACC-62 melanoma cell line. Furthermore, compound 27 showed 49% inhibition of the HL-60 leukemia cell line, which overexpresses HDACs, and 47% inhibition of the BT-549 breast cancer cell line, rendering it interesting for further antiproliferative optimization.
Conclusion
In conclusion, we have designed and synthesized a new class of non-hydroxamate HDAC inhibitors, exploring the salicylaldimine moiety and other imines as new zinc-chelating groups. The idea was first explored via molecular docking into HDAC-6 before the synthesis and the in vitro assay. The good correlation between the docking results and the biological assay results renders AutoDock Vina a useful tool for the design of further HDAC-6 analogs. Moreover, the synthesized compounds were selected for testing of their antiproliferative activities against the NCI panel of 60 tumor cell lines. Compound 14, which possesses an isopropylphenyl moiety as the capping group, urea as the linker, and salicylaldimine as the ZBG, exhibited considerable HDAC-6 percent inhibition and the potential to become an effective anticancer agent after further optimization. In addition, compounds 24 and 27, which have the salicylaldimine moiety as the chelating group, also exhibited considerable HDAC inhibition and antiproliferative activities, strongly supporting the idea of using salicylaldimine as a ZBG. We suggest modifying either the linker length, to be longer than that used in compound 14, or inverting the imine to place the nitrogen beside the hydroxyl with no carbon spacer, which may improve the chelating power and result in more potent HDAC inhibition and anticancer activity.
Materials and instrumentation
Starting materials and reagents were purchased from Sigma-Aldrich or Alfa Aesar Organics and used without further purification. Solvents were purchased from Fisher Scientific or Sigma-Aldrich and used without further purification. Reactions were monitored by analytical TLC, performed on silica gel 60 F254 packed on aluminum sheets, purchased from Merck, with visualization under UV light (254 nm). Flash column chromatography was performed using silica gel (230-400 mesh) purchased from Sigma-Aldrich. Melting points were recorded on a Stuart Scientific apparatus and are uncorrected. 1H NMR spectra were recorded on the δ scale in ppm on a Bruker 400 MHz spectrometer, referenced to TMS, at the Center for Drug Discovery Research and Development, Ain Shams University. Hydrogenation was carried out using a Parr shaker hydrogenator at Ain Shams University.
General procedure for the synthesis of biaryl urea derivatives (12a-13b) (method A)
To a solution of p-nitroaniline (1 g, 7.5 mmol; 1 equiv.) in dry dioxane (10 mL), the appropriate isocyanate (8.6 mmol; 1.2 equiv.) was added, and the mixture was stirred at room temperature overnight. The formed solid was collected by filtration, washed with dioxane, and allowed to dry.
General procedure for the synthesis of imine derivatives (...-30)
To a solution of the appropriate amine (1.32 mmol) in dried toluene (10 mL), in the presence of 1 mL glacial acetic acid, 0.2 mL (1.32 mmol) of the appropriate aldehyde was added. The mixture was refluxed under a Dean-Stark apparatus until a precipitate formed. The formed solid was collected by filtration and washed with toluene.
General Method for the synthesis of sulfonamides (Method D) (19, 22)
To a solution of the appropriate amine (3.5 mmol) in pyridine (5 mL) placed in an ice bath, 1.2 equivalents of 4-nitrobenzenesulfonyl chloride (4 mmol) was added portionwise, and the mixture was left stirring for 48 h at room temperature. The solution was then poured onto ice/HCl, and the formed solid was collected by filtration and washed with diethyl ether.
6-Methoxybenzo[d]thiazol-2-amine (18)
To a stirred solution of 4-methoxyaniline (8 mmol) and potassium thiocyanate (3.75 g, 7 equivalents) in glacial acetic acid (10 mL) placed in an ice bath, bromine solution (80 mmol) was added portionwise, and the mixture was left stirring overnight. The solution was then poured onto ice, and the formed solid was collected by filtration and washed with diethyl ether (DEE) to afford a gray solid in 62% yield, m.p. 161-163 °C.
4-Nitro-N,N-diphenylbenzenesulfonamide (22)
To a solution of diphenylamine (3.5 mmol) in pyridine (5 mL) placed in an ice bath, 1.2 equivalents of 4-nitrobenzenesulfonyl chloride (4 mmol) was added, and the procedure continued as in method D to afford a buff solid in 88% yield, m.p. 165-168 °C.
1-(4-Nitrophenyl)-4-phenylpiperazine (25a)
To a mixture of phenylpiperazine (3.5 mmol) and 4-fluoronitrobenzene (3.5 mmol) in DMF (5 mL), dried potassium carbonate (0.65 g, 3 equivalents) was added, and the mixture was refluxed for 12 h. The solution was then poured onto ice to afford an orange solid, which was collected by filtration and washed with diethyl ether, giving an 88% yield, m.p. 108 °C.
Molecular modelling
The molecular docking study was performed using the AutoDock Vina docking software interface, while protein and ligand preparation prior to the docking process was conducted in Accelrys Discovery Studio 2.5 via the Prepare Protein and Prepare Ligands protocols.
Preparation of protein
The X-ray crystal structure of HDAC-6 co-crystallized with trichostatin A (TSA) was obtained from the Protein Data Bank at the Research Collaboratory for Structural Bioinformatics (RCSB) website [www.rcsb.org] (PDB code: 5EDU) and loaded into Accelrys Discovery Studio 2.5. The protein structure was prepared using the default protein preparation tools integrated into the software. This was accomplished by adding hydrogen atoms to the amino acid residues, completing the missing residues, and applying force field parameters using the CHARMm force field. The protein structure was minimized for 500 steps using the Smart Minimizer algorithm. Atoms other than hydrogen were kept fixed during the minimization process. The whole enzyme was defined as the receptor. In addition, the binding pocket, together with the surrounding amino acid residues, was identified. The ligand structure was removed from the binding site.
Ligand preparation for docking
Ligand structures were constructed using the default sketching tools of Accelrys Discovery Studio 2.5, and the ligands were then prepared using the Ligand Preparation protocol of Accelrys Discovery Studio. The ionization pH was adjusted to 7.4, hydrogen atoms were added, and no isomers or tautomers were generated from the ligands.
Docking process
The AutoDock Vina protocol performs random searching for poses via a genetic algorithm to find poses close to the bioactive form. The scoring step is then performed via force field-based scoring functions that apply the CHARMm force field to calculate the energy of the pose-protein complex, enabling the program to rank the poses based on their scores. The program provides the results in PDBQT files that contain the poses alone, without the protein active site, so both the protein and pose files can be opened in Discovery Studio and analyzed to obtain the best score and the best pose.
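As a side note, native AutoDock Vina reports predicted binding affinities as negative values in kcal/mol on "REMARK VINA RESULT" lines of the output PDBQT, so the positive scores quoted above presumably come from the Discovery Studio interface used here. A minimal Python sketch for extracting and ranking native Vina pose scores follows; the file name is hypothetical.

```python
# Hedged sketch: parse pose affinities from a native AutoDock Vina output
# PDBQT file; the "REMARK VINA RESULT" line format is standard Vina output.
def vina_scores(pdbqt_path):
    scores = []
    with open(pdbqt_path) as fh:
        for line in fh:
            if line.startswith("REMARK VINA RESULT:"):
                scores.append(float(line.split()[3]))  # affinity in kcal/mol
    return sorted(scores)  # most negative first = best predicted pose

best_pose = vina_scores("compound14_out.pdbqt")[0]  # hypothetical file name
```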
HDAC-6 assay
The compounds were resuspended in 100% DMSO. A series of dilutions of the compounds was prepared with 10% DMSO in HDAC assay buffer, and 5 µL of each dilution was added to a 50 µL reaction so that the final concentration of DMSO was 1% in all reactions. The enzymatic reactions for the HDAC enzymes were conducted in duplicate at 37 °C for 30 min in a 50 µL mixture containing HDAC assay buffer, 5 µg BSA, an HDAC substrate, HDAC enzyme, and a test compound. After the enzymatic reactions, 50 µL of 2× HDAC Developer was added to each well, and the plate was incubated at room temperature for an additional 15 min. Fluorescence intensity was measured at an excitation of 360 nm and an emission of 460 nm using a Tecan Infinite M1000 microplate reader.
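The percent inhibition values reported at 10 µM are commonly derived from such a fluorogenic endpoint relative to a DMSO control after subtracting a no-enzyme blank; a hedged sketch follows, with illustrative readings that are not from the actual assay.

```python
# Hedged sketch of a single-dose percent-inhibition calculation; all
# variable names and readings are illustrative, not from the assay kit.
def percent_inhibition(f_compound, f_dmso_control, f_no_enzyme_blank):
    signal = f_compound - f_no_enzyme_blank       # blank-corrected test signal
    control = f_dmso_control - f_no_enzyme_blank  # blank-corrected full activity
    return 100.0 * (1.0 - signal / control)

print(percent_inhibition(f_compound=5200, f_dmso_control=12400,
                         f_no_enzyme_blank=800))  # ~62% inhibition
```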
Anti-proliferative activity against NCI 60 cell line panel
The human tumor cell lines of the cancer screening panel were grown in RPMI 1640 medium containing 5% fetal bovine serum and 2 mM L-glutamine. For a typical screening experiment, cells are inoculated into 96-well microtiter plates in 100 µL at plating densities ranging from 5,000 to 40,000 cells/well, depending on the doubling time of the individual cell lines. After cell inoculation, the microtiter plates are incubated at 37 °C, 5% CO2, 95% air, and 100% relative humidity for 24 h prior to the addition of the experimental drugs. After 24 h, two plates of each cell line are fixed in situ with trichloroacetic acid (TCA) to represent a measurement of the cell population for each cell line at the time of compound addition (Tz). Experimental drugs are solubilized in dimethyl sulfoxide at 400-fold the desired final maximum test concentration and stored frozen prior to use. At the time of drug addition, an aliquot of frozen concentrate is thawed and diluted to twice the desired final maximum test concentration with complete medium containing 50 µg/mL gentamicin. An additional four 10-fold or ½-log serial dilutions are made to provide a total of five drug concentrations plus a control. Aliquots of 100 µL of these different drug dilutions are added to the appropriate microtiter wells already containing 100 µL of medium, resulting in the required final drug concentrations.
Following drug addition, the plates are incubated for an additional 48 h at 37 °C, 5% CO2, 95% air, and 100% relative humidity. For adherent cells, the assay is terminated by the addition of cold TCA. Cells are fixed in situ by the gentle addition of 50 µL of cold 50% (w/v) TCA (final concentration, 10% TCA) and incubated for 60 min at 4 °C. The supernatant is discarded, and the plates are washed five times with tap water and air dried. Sulforhodamine B (SRB) solution (100 µL) at 0.4% (w/v) in 1% acetic acid is added to each well, and the plates are incubated for 10 min at room temperature. After staining, the unbound dye is removed by washing five times with 1% acetic acid, and the plates are air dried. The bound stain is subsequently solubilized with 10 mM Trizma base, and the absorbance is read on an automated plate reader at a wavelength of 515 nm. For suspension cells, the methodology is the same, except that the assay is terminated by fixing the settled cells at the bottom of the wells by gently adding 50 µL of 80% TCA (final concentration, 16% TCA).
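The growth calculation itself is not stated in the text; the standard NCI-60 methodology derives it from the time-zero (Tz), control (C), and test (Ti) absorbances defined above as

$$\text{Growth percent}=\begin{cases}100\times\dfrac{T_i-T_z}{C-T_z}, & T_i\ge T_z\\[6pt]100\times\dfrac{T_i-T_z}{T_z}, & T_i<T_z\end{cases}$$

where values below zero indicate net cell killing; the single-dose percent growth inhibition figures quoted in the results follow from this quantity.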
Declaration of interest
The authors have declared no conflict of interest. | 2019-07-26T15:41:30.180Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "06366e92a49dae5ea71a4f2e337bb9211942ee30",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21608/aps.2018.18729",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "462faf32ecfc9fc8123cb73c52c6c00f758e5e9d",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
256398717 | pes2o/s2orc | v3-fos-license | Investigation of the effects of P1 on HC-Pro-mediated gene silencing suppression through genetics and omics approaches
Posttranscriptional gene silencing (PTGS) is one of the most important defense mechanisms for plants during viral infection. However, viruses have also developed viral suppressors to negatively control PTGS by inhibiting microRNA (miRNA) and short-interfering RNA (siRNA) regulation in plants. The first identified viral suppressor, P1/HC-Pro, is a fusion protein translated from potyviral RNA. Upon infection of plants, the P1 protein itself is released from HC-Pro by the self-cleaving activity of P1. P1 has an as-yet-unknown function in enhancing HC-Pro-mediated PTGS suppression. We performed proteomics to identify P1-interacting proteins, and transcriptomics generated from Col-0 and various P1/HC-Pro-related transgenic plants to identify novel genes. The results identified several novel genes through comparative network analysis that might be involved in P1/HC-Pro-mediated PTGS suppression. First, we demonstrated that P1 enhances HC-Pro function and that the mechanism might work through P1 binding to VERNALIZATION INDEPENDENCE 3/SUPERKILLER 8 (VIP3/SKI8), a subunit of the exosome, to interfere with degradation of the 5′ fragment of the PTGS-cleaved RNA product. Second, AGO1 was specifically posttranslationally degraded in transgenic Arabidopsis expressing P1/HC-Pro of turnip mosaic virus (TuMV) (P1/HC Tu plant). Third, the comparative network highlighted potentially critical genes in PTGS, including miRNA targets, calcium signaling, hormone (JA, ET, and ABA) signaling, and defense-response genes. Through these genetic and omics approaches, we provide an overall perspective identifying many critical genes involved in PTGS. These new findings significantly advance our understanding of P1/HC-Pro-mediated PTGS suppression.
However, different species of viruses have developed various suppressors to counteract the DCL2/4-mediated siRNA defense system, a process known as PTGS suppression, making these viruses capable of surviving and multiplying in infected plants. Viral suppressors of PTGS not only suppress the siRNA defense system but also inhibit miRNA regulation, resulting in symptom development. Symptoms reflect the misregulation of miRNA phenomena, whereas a mutant virus with a defective suppressor causes mild symptoms and has a limited inhibitory effect on miRNA regulation (Kung et al. 2014; Wu et al. 2010).
Viral suppressors of PTGS use various approaches to interfere with miRNA biogenesis or miRNA regulation. For instance, 2b of the cucumber mosaic virus (CMV) Q strain and p19 of tomato bushy stunt virus (TBSV) bind miRNAs and siRNAs to prevent these small RNAs from loading into AGO1 (Silhavy et al. 2002; Zhang et al. 2006). P0 of polerovirus has an F-box-like domain that triggers AGO1 degradation (Michaeli et al. 2019). P1/HC-Pro was the first identified viral suppressor of PTGS (Anandalakshmi et al. 1998; Kasschau and Carrington 1998). HC-Pro is a highly conserved protein in potyviruses that plays a major role in PTGS suppression (Kasschau and Carrington 1998; Kasschau et al. 2003; Kung et al. 2014; Valli et al. 2006). In contrast, P1 is a highly divergent protein with variable sequences in each potyvirus. P1 of tobacco etch virus (TEV) can enhance HC-Pro-mediated PTGS suppression; however, the mechanism is still unclear (Kasschau and Carrington 1998; Martínez and Daròs 2014; Valli et al. 2006). Martínez and Daròs (2014) demonstrated that P1 of TEV interacts with the 60S ribosomal subunit and enhances in vitro translation.
Previous studies demonstrated that the P1/HC-Pro genes of zucchini yellow mosaic virus (ZYMV) and turnip mosaic virus (TuMV) suppress miRNA regulation (Kung et al. 2014; Wu et al. 2010). Transgenic Arabidopsis expressing P1/HC-Pro of ZYMV (P1/HC Zy plant) or P1/HC-Pro of TuMV (P1/HC Tu plant) showed severe serrated and curling leaf phenotypes related to miRNA misregulation and viral symptom development (Kung et al. 2014; Wu et al. 2010). Moreover, the FRNK motif (a highly conserved amino acid sequence) of HC-Pro in TuMV and ZYMV is necessary and sufficient for PTGS suppression (Kung et al. 2014; Wu et al. 2010). The miRNA misregulation in transgenic plants expressing a viral suppressor gene, such as 2b, P1/HC-Pro, or P19, occurs through abnormal miRNA/miRNA* accumulation via an unknown mechanism, resulting in target RNA accumulation (Kasschau et al. 2003; Kung et al. 2014). Therefore, abnormal miRNA/miRNA* and target RNA accumulation are the molecular phenotypes of PTGS suppression.
In this study, we demonstrated that the P1 proteins of various potyviruses are necessary and sufficient to enhance HC-Pro-mediated PTGS suppression. Through high-throughput omics approaches, several critical genes that interact with P1 or are involved in PTGS were identified from immunoprecipitation (IP) and transcriptomic profiles. We also found that the P1/HC-Pro of TuMV triggers Argonaute protein 1 (AGO1) posttranslational degradation. These critical genes offer new directions for further investigation of PTGS and P1/HC-Pro-mediated suppression.
Plant material and transgenic plants
Arabidopsis thaliana ecotype Col-0 and the transgenic P1/HC Tu and P1/HC Zy plants (Wu et al. 2010) were used in this study. Arabidopsis seeds were surface-sterilized, chilled at 4 °C for 2 days, and then sown on Murashige and Skoog (MS) medium with or without suitable antibiotics. The seedlings were transferred into soil after 1 week of germination. All plants were grown at 24 °C in a growth room with 16 h of light/8 h of dark.
For P1 Tu plant construction, the TuMV infectious clone was used as a template to amplify the P1 Tu gene with the primer set PtuP1/MTuP1 (5′-TCA AAA GTG CAC AAT CTT-3′), and the gene was then cloned into the pENTR and pBCo-DC vectors following the above procedures to generate pBCo-P1. For the Basta-resistant HC Tu plant, the TuMV infectious clone was used as a template to amplify the HC Tu gene with the primer set PTuHC (5′-CAC CAT GAG TGC AGC AGG AGCC-3′)/MTuHC, and it was then cloned into the pENTR and pBCo-DC vectors following the above procedures to generate pBCo-HC Tu. An NheI site was introduced into the fusion form of the P1HC-Pro gene (P1HC Tu-FA) to generate the F362A substitution. Furthermore, the P1 and HC-Pro genes were amplified from the TuMV infectious clone (Niu et al. 2006) and placed under the 35S promoter to create the P1 Tu and HC Tu plants, respectively. The pBCo-P1/HC Te, pBCo-P1 Tu, pBCo-HC Tu, and pBCo-P1HC Tu-FA binary vectors were transferred into Col-0 by the floral-dipping method with the Agrobacterium tumefaciens ABI strain to generate the P1/HC Te, P1 Tu, HC Tu, and P1HC Tu-FA plants, respectively.
For recombined P1/HC-Pro transgenic plant construction, the infectious clones of TuMV, ZYMV, and TEV were used as templates to generate the recombinant P1/HC-Pro constructs. The P1 cleavage site had to be preserved in the recombined genes, and the constructs were cloned into the pBCo binary vector (Kung et al. 2014) for Agrobacterium-mediated floral-dipping transformation.
All of the PCR fragments were digested with NheI and XhoI and then ligated into the similarly digested pET-28a vector to generate pET-P1 Tu, pET-P1 Zy, pET-P1 Te, pET-HC Tu, pET-HC Zy, and pET-HC Te. All pET-28a plasmids were transformed into the E. coli BL21 strain for recombinant protein expression. All recombinant proteins were purified by fast protein liquid chromatography (FPLC) (AKTApurifier, GE Healthcare). One milligram of recombinant protein with a 1× volume of complete Freund's adjuvant was injected into New Zealand White rabbits for the first injection. The following three injections consisted of 1 mg of protein mixed with a 1× volume of incomplete Freund's adjuvant. IgG purification was performed according to the protocol of Chiu et al. (2013). The IgG was collected after 4 injections for western blot detection.
Immunoprecipitation and in-solution protein digestion
To identify the P1-interacting proteins, 250 mg of 10-day-old seedlings (n = 6) were homogenized with 1 mL IP buffer [25 mM Tris-HCl, pH 7.0, 150 mM NaCl, 1 mM EDTA, 5% glycerol, and a protease inhibitor (Roche)], followed by centrifugation for 10 min at 4 °C. IgG of α-P1 Tu, α-P1 Zy, and α-P1 Te was used for the in vivo IP. IP was performed by mixing 30 µL of washed Protein A Mag Sepharose Xtra ferrite beads (GE), IgG (30 µL per IP reaction), and lysate. The IP reaction was carried out at 4 °C with gentle mixing for 3 h. The tube was then centrifuged at 300 × g to pull down the beads, which were washed three times with 0.3 mL wash buffer (25 mM Tris, 150 mM NaCl, 1 mM EDTA, 5% glycerol, 0.1% Triton X-100, and a protease inhibitor) to remove nonspecific binding. Finally, the beads were resuspended in 50 µL elution buffer (0.1 M glycine, pH 2.0), and the reaction was mixed on a rotator at 4 °C for 10 min. A total of 10 µL of neutralization buffer (Tris-HCl, pH 8.0) was added to neutralize the reaction.
The proteins were dissolved in 6 M urea. A total of 15 µg of protein from each time point was used for in-solution digestion. Proteins were reduced by incubation with 10 mM dithiothreitol (DTT) for 1 h at 29 °C and alkylated with 55 mM iodoacetamide (IAA) at room temperature for 1 h. This step was quenched with 55 mM DTT at 29 °C for 45 min. The concentration of urea was diluted to 1 M before the sample was subjected to proteolysis. Protein digestion was performed overnight at 29 °C using mass spectrometry-grade modified trypsin (Promega) at a 1:50 trypsin/protein ratio. After overnight incubation, 0.1% TFA was added to stop the digestion. Finally, all remaining reagents from the in-solution digestion procedure were removed using a C18 StageTip.
LC-MS/MS analysis
High-performance liquid chromatography with tandem mass spectrometry (LC-MS/MS) was performed on an Orbitrap Fusion Lumos Tribrid quadrupole-ion trap mass spectrometer (Thermo Fisher Scientific) at the Instrumentation Center of National Taiwan University. Peptides were separated on an Ultimate 3000 NanoLC System (Thermo Fisher Scientific). Peptide mixtures were loaded onto a 75 µm inner diameter (ID), 25 cm long C18 Acclaim PepMap NanoLC column (Thermo Scientific) packed with 2 µm particles with a pore size of 100 Å. Mobile phase A was 0.1% formic acid in water, and mobile phase B was 100% acetonitrile with 0.1% formic acid. A segmented gradient was run over 90 min from 2% to 35% solvent B at a flow rate of 300 nL/min. Mass spectrometry analysis was performed in data-dependent mode with a full MS scan (externally calibrated to a mass accuracy of < 5 ppm and a resolution of 120,000 at m/z 200), followed by higher-energy collisional dissociation (HCD) MS/MS of the most intense ions within 3 s. HCD-MS/MS (resolution of 15,000) was used to fragment multiply charged ions within a 1.4 Da isolation window at a normalized collision energy of 32. Automatic gain control (AGC) targets of 5e5 and 5e4 were set for MS and MS/MS analysis, respectively, with previously selected ions dynamically excluded for 180 s. The maximum injection time was 50 ms.
Identification and quantitation of the proteome by label-free methods
Quantitative proteomics was performed by label-free quantitative proteomic analysis. The raw MS/MS data were searched against the UniProt Knowledgebase/Swiss-Prot Arabidopsis thaliana protein database (March 2019 version) using the Mascot 2.3 search algorithm via the Proteome Discoverer (PD) package (version 2.2, Thermo Scientific). The search parameters were set as follows: peptide mass tolerance, 10 ppm; MS/MS ion mass tolerance, 0.02 Da; enzyme set as trypsin with up to two missed cleavages allowed; and variable modifications including oxidation of methionine, deamidation of asparagine and glutamine residues, and carbamidomethylation of cysteine residues. Peptides were filtered at a 1% FDR. Protein quantification was computed from the abundance of ions extracted from the MS spectra of the corresponding peptides. The normalization method was set to total peptide amount.
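In Proteome Discoverer, normalization to "total peptide amount" rescales each sample so that summed abundances match across runs; a minimal pandas sketch of that rescaling follows, with an illustrative two-protein, three-sample table (the values and row labels are hypothetical).

```python
# Hedged sketch of "total peptide amount" normalization: each sample
# column is scaled so all columns share the same summed abundance.
import pandas as pd

def normalize_total(abund: pd.DataFrame) -> pd.DataFrame:
    totals = abund.sum(axis=0)                # summed abundance per sample
    return abund * (totals.mean() / totals)   # per-column scaling factors

# toy example: two proteins x three IP samples (values are illustrative)
df = pd.DataFrame({"IP1": [100.0, 300.0], "IP2": [80.0, 240.0],
                   "IP3": [120.0, 360.0]}, index=["VIP3/SKI8", "TSN1"])
print(normalize_total(df).sum(axis=0))        # equal totals after scaling
```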
Whole-transcriptome analysis
Total RNAs isolated from 10-day-old seedlings of Col-0, P1 Tu, HC Tu, and P1/HC Tu plants (n = 3) were used for whole-transcriptome deep sequencing by the High Throughput Sequencing Core of Academia Sinica. The sequencing was performed as paired-end (2 × 125) strand-specific HiSeq sequencing (Illumina). The transcriptome was analyzed with the ContigViews system (www.contigviews.bioagri.ntu.edu.tw) of the NGS core of National Taiwan University. For the ContigViews network analysis in this study, the twofold differentially expressed genes (DEGs) between Col-0 and P1/HC Tu plants (n = 3) with an 80% passing rate were selected for the assay. Genes whose log10 FPKM values were under 1.14 were trimmed. At least 10 samples from the Col-0, P1 Tu, HC Tu, and P1/HC Tu profiles (n = 3) were selected to calculate Pearson correlations, with a 0.95 threshold for positive relations and a −0.9 threshold for negative relations. Notably, the parameters were determined according to the highlighted genes and the network complexity; these parameters generate the best network for data mining in ContigViews.
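ContigViews' internal pipeline is not described further, but the thresholded Pearson correlation network described above can be approximated directly from the FPKM matrix. A hedged sketch follows, assuming rows are the twofold DEGs and columns are the ≥ 10 selected samples, with a +1 pseudocount before the log10 step as an added assumption.

```python
# Hedged approximation of the thresholded Pearson correlation network
# (+0.95 positive / -0.90 negative edges between twofold DEGs).
import numpy as np
import pandas as pd

def correlation_edges(fpkm: pd.DataFrame, pos=0.95, neg=-0.90):
    """fpkm: rows = DEGs, columns = samples (at least 10 per the text)."""
    corr = np.corrcoef(np.log10(fpkm.values + 1.0))  # gene-by-gene Pearson
    genes = list(fpkm.index)
    edges = []
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            r = corr[i, j]
            if r > pos or r < neg:                   # red or green edge
                edges.append((genes[i], genes[j], round(float(r), 3)))
    return edges
```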
Ethylene detection
Three-week-old Col-0 and P1/HC Tu plants (n = 3) were individually sealed in 1.5 L chambers at 24 °C with 16 h light/8 h dark. Ethylene gas samples (1 mL) were withdrawn at 4, 24, 48, and 72 h and analyzed by GC-8A gas chromatography (Shimadzu) equipped with a flame ionization detector (FID).
P1 enhances the severity of the HC-Pro-mediated serrated leaf phenotype and PTGS suppression
To dissect the functions of P1 Tu and P1/HC Tu in PTGS suppression, we generated Arabidopsis transgenic lines expressing P1 Tu and P1/HC Tu in combination or individually (Fig. 1a, b). The P1/HC Tu plants showed a severe serrated and curled leaf phenotype (Fig. 1b, panel ii). The translated P1/HC-Pro protein contains an F362/S363 cleavage site (Fig. 1a), which can generate separate P1 and HC-Pro proteins through P1 cleavage (Fig. 1c). The P1 Tu plant showed normal development similar to that of the Col-0 plants, whereas the HC Tu plant showed mildly serrated leaves (Fig. 1b, panels iii and iv). In addition to the difference in the severity of the leaf phenotype, the HC Tu plant was larger than the P1/HC Tu plant (Fig. 1b, panels ii and iv).
In addition, an F362A substitution at the F362/S363 P1 cleavage site produced a non-cleavable P1HC-Pro fusion protein (P1HC Tu-FA) (Fig. 1a, c). This transgenic P1HC Tu-FA plant showed a normal phenotype (Fig. 1b, panel v). Furthermore, a kanamycin-resistant HC Tu plant [HC Tu (kan) plant] was generated for crossing with the P1 Tu plant (Basta resistant) (Fig. 1a). Similar to the HC Tu plant, the HC Tu (kan) plant showed mildly serrated leaves (Fig. 1b, panel vi). Interestingly, the P1 Tu × HC Tu (Kan) offspring showed severely serrated and curled leaves, but the P1 Tu × HC Tu (Kan) plant was larger than the P1/HC Tu plant (Fig. 1b, panel vii). In addition, only the P1/HC Tu plant showed high levels of the P1 and HC-Pro proteins, while the other lines, even the P1 Tu × HC Tu (Kan) plant, showed low levels of P1 and HC-Pro (Fig. 1c).
We compared the amino acid sequences of P1/HC-Pro from 57 potyviruses (Fig. 2a). The alignment results showed that the sequence and length of the P1 protein in different potyviruses are highly diverse (Fig. 2a); only the C-terminal protease activity site (black boxes) is conserved (Fig. 2a). In contrast, several conserved domains of HC-Pro were found across species (Fig. 2a). To test whether P1/HC-Pro from other potyviruses also induces a serrated leaf phenotype, we generated Arabidopsis transgenic lines expressing P1/HC-Pro from ZYMV and TEV. P1/HC Zy plants showed a severe serrated and curled leaf phenotype, whereas P1/HC Te plants showed a minor serrated leaf phenotype (Fig. 2b). However, both plants had high levels of P1 and HC-Pro (Fig. 2c). These results indicated that the P1/HC-Pro genes of ZYMV and TEV can also trigger a serrated leaf phenotype.
The next question was whether the function of the HC-Pro from each virus requires the P1 from the same species. We generated 6 recombinant P1/HC-Pro plants in which HC-Pro was fused with a heterologous P1, namely, P1 Zy /HC Tu, P1 Te /HC Tu, P1 Tu /HC Zy, P1 Te /HC Zy, P1 Tu /HC Te, and P1 Zy /HC Te (Fig. 2d). Except for the P1 Tu /HC Te plants, which showed serrated leaves, the other 5 recombinant transgenic plants showed a severe serrated and curled leaf phenotype (Fig. 2e). The representative plants, P1 Zy /HC Tu and P1 Te /HC Tu, showed detectable P1 and HC-Pro expression (Fig. 2c). These results suggest that multiple P1 genes have a conserved function in enhancing the HC-Pro-mediated serrated leaf phenotype.
HC-Pro-mediated PTGS suppression
Previous studies demonstrated that abnormal accumulation of miRNA and miRNA* occurs in several transgenic viral suppressor plants because the suppressors interfere with miRNA biogenesis (Kasschau et al. 2003; Kung et al. 2014; Wu et al. 2010). Indeed, the P1/HC Tu, P1/HC Zy, P1/HC Te, and 6 recombinant P1/HC-Pro plants showed abnormal miRNA/miRNA* accumulation (Fig. 2f). These data suggest that the P1/HC-Pro of 3 virus species and the recombinant P1/HC-Pro constructs interfered with miRNA biogenesis. In addition, except for the P1HC Tu-FA plant, all transgenic lines containing HC Tu showed abnormal miRNA and miRNA* accumulation (Fig. 3a), confirming that HC Tu is the dominant player in PTGS suppression. Surprisingly, the P1 Tu plant also showed miRNA and miRNA* accumulation through an unknown mechanism (Fig. 3a). In addition to miRNA/miRNA* accumulation, miRNA targets were also upregulated (Kasschau et al. 2003; Kung et al. 2014; Wu et al. 2010). Transcriptome profiles also indicated that miRNA targets were upregulated in HC Tu, HC Tu (kan), P1 Tu × HC Tu, and P1/HC Tu plants (Fig. 3b), suggesting that miRNA regulation was blocked by HC-Pro. However, DICER-LIKE 1 (DCL1; miR162 target) and two translation-inhibition targets, APETALA 2 (AP2; miR172 target) and SHORT VEGETATIVE PHASE (SVP; miR396 target), showed no change in their transcript levels (Fig. 3b). Except for the DCL1, AP2, and SVP genes, the P1/HC Tu plant suppressed most miRNA-target regulation (Fig. 3b). We conclude that the P1/HC Tu plant has a stronger suppressive effect than the HC Tu plants.
Host P1-interacting proteins are involved in PTGS
Because the recombinant P1/HC-Pro plants showed identical serrated leaf phenotypes and heterologous P1s could enhance HC-Pro-mediated PTGS suppression, we hypothesized that the various P1 proteins have one or more conserved interacting proteins in Arabidopsis that enhance HC-Pro-mediated PTGS suppression. To identify the host P1-interacting proteins, the P1/HC Tu, P1/HC Zy, and P1/HC Te plants were used for IP with α-P1 Tu, α-P1 Zy, and α-P1 Te antibodies, respectively. The IP eluates were analyzed by LC-MS/MS. We identified 101 cytoplasmic P1 of TuMV (P1 Tu)-interacting proteins (Additional file 1: Data). Furthermore, we identified 56 cytoplasmic P1 of ZYMV (P1 Zy)-interacting proteins and 20 cytoplasmic P1 of TEV (P1 Te)-interacting proteins (Additional file 1: Data). Importantly, only one consensus cytoplasmic protein, VERNALIZATION INDEPENDENCE 3/SUPERKILLER8 (VIP3/SKI8; AT4G29830), was found in the IP profiles of all 3 viral P1s (Table 1). VIP3/SKI8 is a subunit of the RNA exosome complex that is required for degradation of the RISC 5′-cleavage fragment (Branscheid et al. 2015; Orban and Izaurralde 2005). In contrast, 12 consensus cytoplasmic proteins were identified in the P1 Tu and P1 Zy IP profiles, whereas 10 consensus proteins were identified in the P1 Tu and P1 Te IP profiles (Table 1). Moreover, 5 consensus cytoplasmic proteins were found in the P1 Zy and P1 Te IP profiles (Table 1).
Next, we focused on the P1 Tu-interacting proteins because the P1/HC Tu plant was the model used in this study. In the P1 Tu IP profile, two TUDOR-SN ribonucleases [TSN1 (AT5G07350) and TSN2 (AT5G61780)] were uniquely identified 5 to 6 times in a total of 6 IP experiments with P1/HC Tu plants (Table 1 and Additional file 2: Table S1). TSN1 and TSN2 have been suggested to be involved in the regulation of uncapped mRNA and to localize to processing bodies (P-bodies) and stress granules (Yan et al. 2014). Therefore, whether P1 Tu could alter the function of TSN1 and TSN2 is an interesting question for further investigation. Moreover, VARICOSE (VSC; AT3G13300) and MODIFIER OF SNC1,4 (MOS4; AT3G18165), which are involved in RNA regulation, were identified in the P1 Tu IP profile (Table 1 and Additional file 2: Table S1). We also identified the NUCLEAR-PORE ANCHOR (NUA; AT1G79280), two IMPORTIN subunits (AT5G53480 and AT4G16143), and BREFELDIN A-INHIBITED GUANINE NUCLEOTIDE-EXCHANGE PROTEIN 5 (BIG5; AT3G43300), which are involved in protein or nucleic acid transport between the nucleus and cytosol (Table 1 and Additional file 2: Table S1) (Xue et al. 2019). Moreover, VACUOLAR PROTEIN SORTING-ASSOCIATED PROTEIN 29 (VSP29; AT3G47810) was identified, which participates in vacuolar protein trafficking and vacuolar sorting receptor recycling (Table 1 and Additional file 2: Table S1) (Kang et al. 2012).
The posttranscriptional and posttranslational regulation of miRNA targets in P1/HC Tu plants
CCS1 is involved in copper delivery, and SOD1 and SOD2 participate in Cu/Zn superoxide dismutase activities. The transcripts of these three genes are regulated by miR398 (Bouché 2010; Sunkar et al. 2006). However, there were high levels of CCS1, SOD1, and SOD2 accumulation in the HC Tu and P1/HC Tu plants, corresponding to their transcript levels and indicating P1/HC-Pro-mediated PTGS suppression (Fig. 4d-f, panel ii). Indeed, the transcript level of miR168-regulated AGO1 (AT1G48410) was increased in HC Tu and P1/HC Tu plants compared with Col-0 (Fig. 4r, panel ii). Surprisingly, the level of AGO1 protein was decreased via an unknown mechanism in HC Tu and P1/HC Tu plants (Fig. 4r, panel i). The western blot data also indicated that the level of AGO1 was lower in P1/HC Tu plants than in Col-0 plants, whereas it was similar among the Col-0, P1/HC Zy, and P1/HC Te plants (Fig. 5). These data suggest that the P1/HC-Pro of TuMV has a specific ability to trigger the posttranslational degradation of AGO1.
Comparative gene-to-gene network and transcriptome analysis
In the transcriptome analysis, we constructed a gene-to-gene correlation network to study PTGS suppression from a different perspective. First, we constructed a network for Col-0 vs. P1/HC Tu plants in the ContigViews system. A list of twofold DEGs between Col-0 and P1/HC Tu plants was used to generate a Pearson correlation network (Fig. 6). A group of positive correlations (red lines) and a group of negative correlations (green lines) were highlighted in the network (Fig. 6). The output of the network showed that AGO1, AGO2 (AT1G31280), and AGO3 (AT1G31290) were present in the group of negative correlations (Fig. 6). AGO2 and AGO3 were positively correlated with each other (red line) but had an indirect correlation with AGO1 through XYLOGLUCAN ENDOTRANSGLUCOSYLASE/HYDROLASE 7 (XTH7; AT4G37800) (Fig. 6). Notably, the transcripts of AGO1, AGO2, and AGO3 were upregulated in the HC Tu and P1/HC Tu plants, but the XTH7 transcripts were downregulated, suggesting that the AGOs and XTH7 might have opposite functions in PTGS (Fig. 4r, panel ii; Fig. 8a-c).
Fig. 5 legend (partial): Two asterisks (**) indicate cross-reaction of the α-HC Zy or α-HC Te antibodies; three asterisks (***) indicate common bands; the @ symbol indicates RUBISCO as an internal control.
Next, we constructed two comparative networks, generated from lists of twofold DEGs between Col-0 and HC Tu plants or between Col-0 and P1 Tu plants (Fig. 7a, b). The gene positions in the comparative networks followed those of the Col-0 vs. P1/HC Tu network for comparison (Figs. 6 and 7). There were 97 genes in the Col-0 vs. P1/HC Tu network (Fig. 6); however, only 36 genes appeared when we applied the same parameters to the Col-0 vs. HC Tu network (Figs. 6 and 7a). In addition, the main genes involved in PTGS, such as AGO1, AGO2, AGO3, and XTH7, remained in the Col-0 vs. HC Tu network (Fig. 7a). This suggests the presence of a basic network backbone in HC Tu-mediated PTGS suppression that exists without the effects of P1 Tu. In contrast, the Col-0 vs. P1 Tu network had only 7 genes in 2 small groups, which were also present in parts of the Col-0 vs. HC Tu or Col-0 vs. P1/HC Tu networks (Figs. 6 and 7). Moreover, XTH7 had fewer connected genes (49) in the Col-0 vs. HC Tu network, whereas it had 61 connections in the Col-0 vs. P1/HC Tu network (Figs. 6 and 7a). These data indicate that the connectivity of XTH7 varies between networks and that XTH7 might play an important role in PTGS suppression. Overall, the comparative network analysis highlights the effects of P1 Tu on HC Tu-mediated PTGS suppression. This also explains why the P1/HC Tu plant has a severe phenotype, given how many pathways were interfered with.
Critical genes in the Col-0 vs. P1/HC Tu network that are involved in PTGS
The importance of XTH7 lies not only in the number of gene connections it has or in its connection with AGO1 and AGO2; XTH7 also had a negative correlation with several miRNA targets in the Col-0 vs. P1/HC Tu network, such as 2 auxin response transcription factor genes [ARF3 (AT2G33860) and ARF8 (AT5G37020)], PHOSPHATE 2 (PHO2; AT2G33770), GROWTH-REGULATING FACTOR 1 (GRF1; AT2G22840), CCS1, SOD1, and SOD2 (Fig. 6). However, ARF3, ARF8, PHO2, GRF1, CCS1, SOD1, and SOD2 formed positive correlations with each other in the network (Fig. 6). These miRNA target transcripts were upregulated in HC Tu and P1/HC Tu plants because of PTGS suppression (Fig. 4e-f, panel ii; Fig. 8d-g). Moreover, SEP3 (AT1G24260) showed negative correlations with XTH7, ARF3, ARF8, and SOD1 (Fig. 7). In addition, SEP3 transcript levels were lower in P1 Tu, HC Tu, and P1/HC Tu plants than in Col-0 plants (Fig. 8h). Notably, SOD1 was shown to have a physical interaction with P1 Tu and P1 Te (Table 1) and was also highlighted in the network, suggesting the importance of SOD1 in PTGS suppression.
Fig. 6 The gene-to-gene network of Col-0 vs. P1/HC Tu plants. The gene profiles of twofold DEGs between Col-0 and P1/HC Tu plants were used to generate the Pearson correlation network. The different circle sizes indicate the numbers of correlated genes. A positive correlation (> 0.95) between two genes is indicated by a red line, whereas a green line indicates a negative correlation (< −0.9). The red circles indicate the genes involved in calcium signaling, grouped with a red background. The blue circles indicate the genes involved in the defense response, grouped with a blue background. The green circles indicate the genes involved in the PTGS pathway, grouped with a green background. The yellow circles indicate the genes that are miRNA targets, grouped with a yellow background. The gray circles indicate the genes involved in the JA, ABA, and ethylene biosynthesis pathways, grouped with a gray background.
P1 enhances HC-Pro-mediated PTGS suppression
In potyviruses, P1 is a hypervariable protein whose function is poorly understood. Previous studies suggested that P1 modulates virus replication, determines pathogenicity in a host-dependent manner, and triggers the host defense response (Maliogka et al. 2012; Pasin et al. 2014).
In this study, we demonstrated that 3 viral P1s have a conserved function in enhancing HC-Pro-mediated PTGS suppression. From the perspective of P1-host protein interactions, VIP3/SKI8 turns over the 5′-fragment of RISC-cleaved target RNA, whereas TSN1, TSN2, and VSC are involved in mRNA decapping in stress granules and P-bodies (Branscheid et al. 2015; Deyholos et al. 2003; Gutierrez-Beltran et al. 2015; Sorenson et al. 2018; Xu and Chua 2009). Moreover, the MOS4 modifier, 2 IMPORTINs, and BIG5, which are involved in RNA splicing and RNA transport, respectively, also interact with P1 Tu (Helizon et al. 2018; Kitakura et al. 2017; Luo et al. 2013; Xu et al. 2012). In addition, EMA1/SAD2 contains an importin-beta domain that negatively regulates miRNA activity (Wang et al. 2011). EMA1/SAD2 protein levels were upregulated in P1 Tu, HC Tu, and P1/HC Tu plants, but their transcript levels did not differ from those in Col-0, suggesting that P1 stabilizes or increases EMA1/SAD2 levels to help inhibit miRNA regulation (Fig. 4g). Moreover, transcriptome data mining also showed that the CAF1A/B deadenylases and the CDF2 zinc finger protein had a strong correlation with PTGS suppression. To summarize these findings, posttranscriptional RNA regulation occurs in stress granules and P-bodies, and many RNA regulatory components were identified among the proteins that interact with P1 or were highlighted in the PTGS suppression network, which suggests that P1 is extremely vital for enhancing HC-Pro-mediated suppression.
Although P1 functions in regulating PTGS, it seems that P1 needs to be generated from the P1/HC-Pro fusion protein to better enhance HC-Pro suppression. There is a cyclic effect in which lower levels of HC-Pro cause less efficient PTGS suppression, resulting in lower levels of HC-Pro. Indeed, the HC-Pro levels of P1 Tu × HC Tu (Kan) plants were similar to those of HC Tu plants (Fig. 1c), suggesting that ectopically expressed P1 does not have the same enhancing effect on HC-Pro as P1 released from the fusion protein. Why P1 must be released from the fusion form to enhance HC-Pro activity remains unclear.
P1/HC-Pro of TuMV specifically primes posttranslational AGO1 degradation
AGO1 degradation has been reported to be controlled by selective autophagy (Kobayashi et al. 2019; Li et al. 2019; Michaeli et al. 2019). The P0 viral suppressor of polerovirus is thought to trigger autophagic AGO1 degradation (Michaeli et al. 2019). In our study, the P1/HC-Pro of TuMV specifically triggered AGO1 posttranslational degradation, but the same effect was not observed in P1/HC Zy and P1/HC Te plants, suggesting that P1/HC-Pro-triggered AGO1 degradation does not occur in all potyviruses. In other words, AGO1 degradation might not be essential for P1/HC-Pro-mediated PTGS suppression.
Autophagy works with vacuoles to allow the degradation of large protein complexes. VSP29 is involved in the trafficking of vacuolar proteins and in the recycling of vacuolar sorting receptors and specifically interacts with P1 Tu (Table 1) (Kang et al. 2012). Moreover, CML24 interacts with AUTOPHAGY GENE 4b (ATG4b), which primes AUTOPHAGY GENE 8 (ATG8) by removing its C-terminus and exposing a glycine residue during autophagy (Tsai et al. 2013). CML24 was present in the group of positive correlations of the PTGS network. Therefore, we infer that the P1/HC-Pro of TuMV might also trigger AGO1 posttranslational degradation through autophagy.
Network of HC-Pro-mediated PTGS suppression
The comparative gene correlation network provides a four-dimensional perspective encompassing gene expression, gene correlation, position, and time course. This information helps to interpret and identify critical genes in pathways of interest. In the Col-0 vs. HC Tu network, we identified a basic backbone network of HC Tu -mediated PTGS suppression. Comparing the two networks showed that the effects of P1-enhanced HC-Pro suppression were highlighted in the Col-0 vs. P1/HC Tu network, which specifically emphasized the relationship between AGOs and viral resistance. Previous studies demonstrated that AGO2 and AGO3 are upregulated to enhance viral resistance (Alazem et al. 2017; Harvey et al. 2011; Zheng et al. 2019). AGO2 is a target of miR403, which is negatively regulated by AGO1 (Harvey et al. 2011), suggesting that AGO2 was upregulated in response to AGO1 degradation in P1/HC Tu plants. Although we have no explanation for the AGO3 upregulation, we assume that the AGO2/AGO3 antiviral system was activated and compensated for AGO1 degradation. Indeed, AGO2 and AGO3 are directly and positively correlated in the network.
Surprisingly, several miRNA targets, such as CCS1, SOD1, SOD2, PHO2, ARF3, and GFR1, showed a positive correlation in the network. These genes were indirectly and negatively correlated with AGO1 through XTH7. In addition, other miRNA targets, such as TOE2, SPL13A/B, ATHB-15, and PHB, were also present in the network. SPL13A and SPL13B had direct positive correlations. ATHB-15 and PHB, which belong to the homeodomain-leucine zipper (HD-ZIP) transcription factor (TF) family, were also positively correlated. Taken together, these observations indicate that the gene correlation network is highly accurate for data mining.
Calcium signaling has been demonstrated to be involved in the suppression of gene silencing (Anandalakshmi et al. 2000; Nakahara et al. 2012). Anandalakshmi et al. (2000) demonstrated that a calmodulin-related protein (rgs-CaM) in tobacco interacts with HC-Pro and suppresses gene silencing in a manner similar to HC-Pro. Nakahara et al. (2012) demonstrated that tobacco rgs-CaM counterattacks various viral suppressors by binding to their RNA-binding domains. In addition, rgs-CaM triggers autophagic degradation of viral suppressors (Nakahara et al. 2012). Indeed, CML24 physically interacts with ATG4b, suggesting crosstalk between calcium signaling and autophagy (Tsai et al. 2013). CML24 was present in the positively correlated group, opposite to the AGOs in the negatively correlated group, suggesting that calcium signaling might counteract gene silencing.
We noted that several genes, such as XTH7, FUM2, and BAM2, had a considerable number of connections (>50 connected genes) (Fig. 6). XTH7 is annotated as a xyloglucan endotransglucosylase/hydrolase; however, little is known about its function in PTGS suppression. In addition, the cytosolic fumarase FUM2 is essential for Arabidopsis acclimation to low temperatures (Dyson et al. 2016). BAM2 is a CLAVATA1-related receptor kinase involved in anther and meristem development (DeYoung et al. 2006; Hord et al. 2006). Although the functions of these genes have not been explicitly linked with PTGS or defense, they occupied critical positions within the network with a large number of connected genes, which points to future research directions for investigating PTGS.
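To illustrate how such highly connected hub genes can be flagged, the minimal Python sketch below builds a gene co-expression network from an expression matrix and ranks nodes by degree. It is a hypothetical illustration only: the input file name, the |r| ≥ 0.9 edge threshold, and the >50-connection hub cutoff are assumptions for demonstration, not parameters taken from this study.

```python
# Hypothetical sketch of hub-gene detection in a gene co-expression network.
# Assumes an expression matrix with rows = samples and columns = genes.
import pandas as pd
import networkx as nx

expr = pd.read_csv("expression_matrix.csv", index_col=0)  # assumed input file

corr = expr.corr(method="pearson")  # gene-by-gene Pearson correlation matrix

G = nx.Graph()
G.add_nodes_from(corr.columns)
for i, g1 in enumerate(corr.columns):
    for g2 in corr.columns[i + 1:]:
        r = corr.loc[g1, g2]
        if abs(r) >= 0.9:  # keep only strong correlations (illustrative cutoff)
            # edge sign distinguishes positively vs. negatively correlated groups
            G.add_edge(g1, g2, weight=r, sign="+" if r > 0 else "-")

# Hub genes: nodes with more than 50 connections, as noted for XTH7/FUM2/BAM2
hubs = [(gene, deg) for gene, deg in G.degree() if deg > 50]
print(sorted(hubs, key=lambda x: -x[1]))
```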
Auxin and ethylene signaling in the serrated leaf phenotype
Current studies have indicated that treatment with a high dose of auxin elicits endogenous ethylene production. In P1/HC Tu plants, three auxin signaling genes (ARF3, ARF8, and SUTR3;1) were upregulated; therefore, we assume that ethylene accumulated along with the increased expression of ethylene signaling genes. In addition, Hay et al. (2006) demonstrated that auxin can initiate marginal serrations in leaves, suggesting that the serrated leaf phenotype of P1/HC Tu plants might be related to endogenous auxin accumulation.
Conclusion
In this study, we used a transgenic plant approach to investigate the functions of P1 and HC-Pro. By mining high-throughput data from proteomic and transcriptomic profiles, P1-interacting proteins and critical genes in PTGS suppression were identified. Instead of traditional DEG identification, the comparative gene correlation network provides a four-dimensional perspective for identifying critical genes and offers new ideas and directions for further investigation. We believe that plant molecular virology and plant molecular biology, like two hands, can be used together to efficiently investigate the PTGS mechanism. | 2023-01-31T14:57:01.367Z | 2020-08-03T00:00:00.000 | {
"year": 2020,
"sha1": "29749224dbfd593817419c99c02e49f6da2e762b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s40529-020-00299-x",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "29749224dbfd593817419c99c02e49f6da2e762b",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": []
} |
259188852 | pes2o/s2orc | v3-fos-license | Snowflake-inspired and blink-driven flexible piezoelectric contact lenses for effective corneal injury repair
The cornea is a tissue susceptible to various injuries and traumas with a complicated cascade repair process, in which conserving its integrity and clarity is critical to restoring visual function. Enhancing the endogenous electric field is recognized as an effective method of accelerating corneal injury repair. However, current equipment limitations and implementation complexities hinder its widespread adoption. Here, we propose a snowflake-inspired, blink-driven flexible piezoelectric contact lens that can convert mechanical blink motions into a unidirectional pulsed electric field for direct application to moderate corneal injury repair. The device is validated on mouse and rabbit models with different relative corneal alkali burn ratios to modulate the microenvironment, alleviate stromal fibrosis, promote orderly epithelial arrangement and differentiation, and restore corneal clarity. Within an 8-day intervention, the corneal clarity of mice and rabbits improves by more than 50%, and the repair rate of mouse and rabbit corneas increases by over 52%. Mechanistically, the device intervention is advantageous in blocking growth factors’ signaling pathways specifically involved in stromal fibrosis whilst preserving and harnessing the signaling pathways required for indispensable epithelial metabolism. This work puts forward an efficient and orderly corneal therapeutic technology utilizing artificial endogenous-strengthened signals generated by spontaneous body activities.
Statistics
For all statistical analyses, confirm that the following items are present in the figure legend, table legend, main text, or Methods section.
n/a Confirmed
The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement
A statement on whether measurements were taken from distinct samples or whether the same sample was measured repeatedly
The statistical test(s) used AND whether they are one- or two-sided. Only common tests should be described solely by name; describe more complex techniques in the Methods section.
A description of all covariates tested
A description of any assumptions or corrections, such as tests of normality and adjustment for multiple comparisons
A full description of the statistical parameters including central tendency (e.g. means) or other basic estimates (e.g. regression coefficient) AND variation (e.g. standard deviation) or associated estimates of uncertainty (e.g. confidence intervals)
For null hypothesis testing, the test statistic (e.g. F, t, r) with confidence intervals, effect sizes, degrees of freedom and P value noted. Give P values as exact values whenever suitable.
For Bayesian analysis, information on the choice of priors and Markov chain Monte Carlo settings
For hierarchical and complex designs, identification of the appropriate level for tests and full reporting of outcomes
Estimates of effect sizes (e.g. Cohen's d, Pearson's r), indicating how they were calculated
Our web collection on statistics for biologists contains articles on many of the points above.
Software and code
Policy information about availability of computer code
Data collection
Data analysis
For manuscripts utilizing custom algorithms or software that are central to the research but not yet described in published literature, software must be made available to editors and reviewers. We strongly encourage code deposition in a community repository (e.g. GitHub). See the Nature Portfolio guidelines for submitting code & software for further information.
Data
Policy information about availability of data
All manuscripts must include a data availability statement. This statement should provide the following information, where applicable:
- Accession codes, unique identifiers, or web links for publicly available datasets
- A description of any restrictions on data availability
- For clinical datasets or third party data, please ensure that the statement adheres to our policy
Yuan Lin, Jun 3, 2023
The electrical performance of all devices was measured by a Keithley 6514 electrometer and a portable DSO-X2012A oscilloscope. No commercial, open-source, nor custom code was used for data collection.
The authors declare that all data supporting the findings of this study are available within the Article and its Supplementary Information. Source data are provided with this paper.
Human research participants
Policy information about studies involving human research participants and Sex and Gender in Research.
Reporting on sex and gender
Population characteristics
Recruitment
Ethics oversight
Note that full information on the approval of the study protocol must also be provided in the manuscript.
Field-specific reporting
Please select the one below that is the best fit for your research. If you are not sure, read the appropriate sections before making your selection.
Life sciences Behavioural & social sciences Ecological, evolutionary & environmental sciences
For a reference copy of the document with all sections, see nature.com/documents/nr-reporting-summary-flat.pdf
Life sciences study design
All studies must disclose on these points even when the disclosure is negative.
Sample size
Data exclusions
Replication
Randomization
Three healthy subjects participated in voltage monitoring, including two men (Participant I: 23 years old; Participant II: 26 years old) and one woman (24 years old).
The voltage output in the human-worn state was monitored for proof-of-principle testing, and the non-invasive test did not involve tissue damage or biological characterization.
Participation of all volunteers in the study was entirely voluntary. The voltage output in the human-worn state was monitored for proof-of-principle testing, and the non-invasive test did not involve tissue damage or biological characterization. All volunteers were recruited within the school, with the duration (30 minutes) and compensation amount ($50) stated. This recruitment excluded individuals with low tolerance to wearing contact lenses. During the experiment, a proof-of-principle test was conducted to monitor the voltage output of daily blinking while the volunteers wore the device. Subjects read the relevant research materials and received satisfactory answers to all questions. Subjects fully understood the relevant medical research materials and the potential risks and benefits of the research. The subjects knew that participation in the study was voluntary and that they had the right to withdraw at any time. The subjects agreed to the review of research materials by the drug regulatory department, ethics committee, or applicant and expressed their willingness to participate in the study. All collected data and information are for research purposes only. The subjects agreed to the publication of research results (including age, sex, and Supplementary Movie 3) in scientific journals or their presentation at scientific conferences.
All experiments performed with animal and human participants were conducted under a standard protocol (1061420210617007).
Our current sample size (n = 10) is sufficient, exceeding most experiments related to corneal injury repair. Mice in the intervention (MI) group received an electric field (EF) generated by the BPCL. Mice in the sham (MS) group were stimulated with deactivated devices in which the electrodes were disconnected from the BPCL. The mice in the blank (MB) control group had no wearable electrodes. Normal mice without cornea injury were labeled as MNn (n = 10, Male: n = 1-5, Female: n = 6-10). Mice in the MI, MS, and MB groups were subjected to the same corneal injury surgery procedure and labeled as MIn, MSn, and MBn (n = 10, Male: n = 1-5, Female: n = 6-10). Identical to the grouping of mice, the rabbits were divided into four groups (RN, RI, RS, and RB) and labeled as RNn, RIn, RSn, and RBn (n = 10, Male: n = 1-5, Female: n = 6-10). Both male and female animals were employed to increase statistical robustness.
No data was excluded.
The sample size of experimental animals (mice and rabbits) was 10 to ensure replicability and statistical robustness. The measured data showed high similarity within each testing group, and replicability was good in every group. Attempts at replication were successful. For representative experiments (Figs. 1c,h; 3b,h; 4f; Supplementary Figs. 2d; 5b; 11c,d; 19c; 22a,b,c; 23a,b), each experiment was repeated independently at least three times with similar results, demonstrating good data reproducibility.
Mice and rabbits were randomly allocated into experimental groups. For mice and rabbits, males were labeled as 1-20 and females as 21-40. We then randomly divided animals numbered 1-20 into four groups and animals numbered 21-40 into four groups, and randomly combined the grouped males and females. Mice in the sham (MS) group were stimulated with deactivated devices in which the electrodes were disconnected from the BPCL. The mice in the blank (MB) control group had no wearable electrodes. Normal mice without cornea injury were labeled as MNn (n = 10, Male: n = 1-5, Female: n = 6-10). Mice in the MI, MS, and MB groups were subjected to the same corneal injury surgery procedure and labeled as MIn, MSn, and MBn (n = 10, Male: n = 1-5, Female: n = 6-10). Identical to the grouping of mice, the rabbits were divided into four groups (RN, RI, RS, and RB) and labeled as RNn, RIn, RSn, and RBn (n = 10, Male: n = 1-5, Female: n = 6-10). Both male and female animals were employed to increase statistical robustness.
Blinding
Reporting for specific materials, systems and methods
We require information from authors about some types of materials, experimental systems and methods used in many studies. Here, indicate whether each material, system or method listed is relevant to your study. If you are not sure if a list item applies to your research, read the appropriate section before selecting a response.
N/A. Only the condition in which the devices work properly (the BPCL intervention group) could affect the results in the mouse and rabbit models; there were no significant differences between the other control groups (sham and blank). Therefore, the absence of blinding would not influence the results.
The identity of the cell lines was regularly checked by their morphological features but has not been authenticated by short tandem repeat profiling.
The cell line was tested for mycoplasma contamination. No mycoplasma contamination was found.
No commonly misidentified cell lines are used in this study. | 2023-06-19T06:17:47.958Z | 2023-06-17T00:00:00.000 | {
"year": 2023,
"sha1": "57936192fcefa9cf6ac911fb09896239e1c6448a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-023-39315-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc01cb7323b8793c133785915817467b8586ed22",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52513100 | pes2o/s2orc | v3-fos-license | Endophytic bacteria affect sugarcane physiology without changing plant growth
The aim of this study was to evaluate if endophytic bacteria inoculants would be beneficial to the sugarcane varieties IACSP94-2094 and IACSP95-5000, promoting changes in photosynthesis and plant growth. The plants, obtained from mini stalks with one bud, were treated with two bacteria mixtures (inoculum I or II) or did not receive any inoculum (control plants). The inocula did not affect shoot and root dry matter accumulation as compared to the control condition (plants with native endophytic bacteria). However, photosynthesis and electron transport rate (ETR) increased in IACSP94-2094 treated with the inoculum II, whereas the inoculum I enhanced photosynthesis and stomatal conductance in IACSP95-5000. The inoculum II caused increase in leaf sucrose concentration of IACSP94-2094 and decrease in IACSP95-5000 leaves. Leaf nitrogen concentration was not affected by treatments, but bacteria inoculation increased nitrate reductase activity in IACSP95-5000, and the highest activity was found in plants treated with the inoculum II. We can conclude that bacteria inoculation changed sugarcane physiology, improving photosynthesis and nitrate reduction in a genotype-dependent manner, without promoting plant growth under non-limiting conditions.
INTRODUCTION
The interaction between plants and microorganisms is quite complex and depends on the organisms involved and environmental conditions, being affected by plant physiological status and nutrition (Oliveira et al. 2006; Moutia et al. 2010). The abundance and diversity of bacteria are huge under field (or non-disinfested soil/substrate) conditions, reducing or masking the effects of bacterial inoculation (Rosenblueth and Martínez-Romero 2006).
In order to assess the effects of bacterial isolates on plant physiology, most studies use plants free of microorganisms, such as micropropagated ones, and evaluate the inoculation of only one bacterium species in the plant material (Singh et al. 2011). Under such conditions, some specific changes due to plant-microorganism interaction have been revealed (Oliveira et al. 2006); however, such responses may differ from those found in plants when more than one bacterium is present (such as in mini stalks with one bud) or when the inoculum is confronted with native soil microorganisms. For instance, the inoculation of bacterial mixtures of different species or strains (as usual in commercial inoculants) caused increases in growth and yield of tomato as compared to single inoculation, which was justified by improvements in nitrogen (N) and phosphorus (P) nutrition (Botta et al. 2013). Endophytic bacteria are microorganisms that live within the plant and are isolated from tissues whose surface was disinfested (Hallmann et al. 1997). They are able to colonize different plant tissues, from roots to flowers (Compant et al. 2005), without causing any visible damage to plants. While the interaction between plants and endophytic bacteria has been studied taking into account plant growth promotion (Ryan et al. 2007) and plant dry matter production (Botta et al. 2013), little is known about the physiological basis of the process of improving plant growth. Sugar beet plants treated with endophytic bacteria showed higher potential quantum efficiency of photosystem II, electron transport in the thylakoid membranes, leaf CO₂ assimilation and carbohydrate content than untreated ones (Shi et al. 2010). In general, growth promotion is usually attributed to improved plant nutrient acquisition (Barretti et al. 2008) and to the production of phytohormones such as indoleacetic acid by bacteria (Shi et al. 2010). Biological nitrogen fixation is another advantage of the association between diazotrophic bacteria and plants, and it has been shown that sugarcane varieties can obtain at least 40 kg N•ha⁻¹•yr⁻¹ through this process (Urquiaga et al. 2012). However, the underlying physiological changes related to the improved nutrition in plants treated with bacterial inoculum remain unclear.
As there is genotypic variation in sugarcane yield and in physiological responses to constraining environmental conditions (Landell et al. 2005a,b; Machado et al. 2009), it would be reasonable to assume that sugarcane varieties present differential ability in establishing a beneficial interaction with microorganisms. Understanding how plants respond to bacterial inoculation and which mechanisms are stimulated is important to optimize the use of bacteria as an alternative technology to improve plant production from mini stalks and to increase crop yield. Herein, we evaluated whether mixtures of endophytic bacteria would be beneficial to the two sugarcane varieties IACSP94-2094 and IACSP95-5000, which differ in stalk yield (Landell et al. 2005a,b), aiming to answer the following question: does the inoculation of endophytic bacteria promote changes in sugarcane physiology and growth in a genotype-dependent manner?
Plant material and growth conditions
Sugarcane (Saccharum spp.) plants cvs. IACSP94-2094 and IACSP95-5000 (Landell et al. 2005a,b) were propagated by planting stalk segments containing one bud. Plants were grown under greenhouse conditions, where air temperature varied between 37.4 ± 2.8 °C (maximum) and 18.2 ± 1.5 °C (minimum) and the average air relative humidity was 73 ± 7%. Thirty-two days after germination, plantlets were transferred to 5-L pots containing a sterile mixture of sand, soil and substrate (Carolina Soil of Brazil, Vera Cruz SC, Brazil, composed of sphagnum peat, expanded vermiculite, dolomitic limestone, agricultural gypsum and NPK fertilizer traces) 1:1:1 (v/v/v) sterilized in autoclave. They were irrigated three times a week with a nutrient solution with low N concentration (57 mg N•L⁻¹). Such solution was composed of 2 mL of solution A [200 g•L⁻¹ of Ca(NO₃)₂, 250 g•L⁻¹ of CaCl₂, 20 g•L⁻¹ of ConMicros Standard (commercial product) with 1.45 g•L⁻¹ of Fe-EDTA, 0.25 g•L⁻¹ of Cu-EDTA, 0.15 g•L⁻¹ of Zn-EDTA, 0.36 g•L⁻¹ of Mn-EDTA, 0.36 g•L⁻¹ of B, 0.072 g•L⁻¹ of Mo and 0.07 g•L⁻¹ of Ni] and 3 mL of solution B [200 g•L⁻¹ of KNO₃, 150 g•L⁻¹ of KH₂PO₄, 300 g•L⁻¹ of MgSO₄•7H₂O and 100 g•L⁻¹ of KCl] per liter. Sugarcane tillers were removed and each plant was kept with only the main stalk. The physiological measurements and plant sampling were taken 72 days after planting (dap).
Inoculants and bacterial counting
Ten bacterial isolates from stems and roots, belonging to the Soil Microorganisms Collection held by the Instituto Agronômico (Campinas, SP), were previously selected as growth-promoting bacteria (PGPB) for micropropagated sugarcane plants and chosen to prepare the inocula.
The ten selected strains were grown separately in flasks with Dygs liquid medium (Döbereiner et al. 1995) until a concentration of 10⁸ CFU•mL⁻¹ was reached. Two inocula were prepared by mixing five bacterial strains each. The composition of each inoculum, as well as its characteristics regarding indole production, the presence of the nifH gene and plant-growth promotion, is shown in Table 1.
For inoculation, small stalk segments (4 cm length) of both varieties were immersed in culture medium with inoculum (I or II) or without inoculum (control) for one hour before planting. Then, they were placed in plastic cups (200 mL) with sterilized substrate. At 32 and 69 dap, an additional application was carried out with 10 mL of inoculum (I or II) solution at a concentration of 10⁸ CFU•mL⁻¹ or with sterilized culture medium (control).
The quantification of native endophytic bacteria was done in small stalk segments at planting and also in roots at the end of the experiment in both control and treated plants, following the procedure described by Döbereiner et al. (1995).
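As a rough illustration of how counts in the CFU•g⁻¹ units reported below are obtained, the sketch applies standard serial-dilution arithmetic. All numbers are invented for demonstration, and the actual Döbereiner et al. (1995) protocol relies on most-probable-number estimation rather than direct colony counts, so this is a generic example only.

```python
# Generic sketch: converting plate counts from a serial dilution into CFU per
# gram of tissue. Every value below is illustrative, not from this study.
def cfu_per_gram(colonies: int, dilution_factor: float,
                 plated_volume_ml: float, sample_mass_g: float,
                 extract_volume_ml: float) -> float:
    """CFU/g = colonies / (plated volume x dilution), scaled to the extract."""
    cfu_per_ml = colonies / (plated_volume_ml * dilution_factor)
    return cfu_per_ml * extract_volume_ml / sample_mass_g

# e.g. 44 colonies from 0.1 mL of a 10^-4 dilution of 1 g tissue in 10 mL buffer
print(f"{cfu_per_gram(44, 1e-4, 0.1, 1.0, 10.0):.2e} CFU/g")  # -> 4.40e+07
```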
Leaf gas exchange, photochemistry and chlorophyll content
The leaf gas exchange and chlorophyll fluorescence emission were measured using an infrared gas analyzer model LI-6400 (Licor, Lincoln NE, USA) equipped with a modulated fluorometer (6400-40, Licor, Lincoln NE, USA). The evaluations were taken under constant air CO₂ concentration (380 μmol•mol⁻¹), photosynthetic active radiation (Q) of 1,600 μmol•m⁻²•s⁻¹ and natural variation of air temperature and humidity. The evaluations were performed between 14h00 and 15h00 in the first fully expanded leaf with visible ligule (leaf +1), at the middle third of the leaf blade. Leaf CO₂ assimilation (Pₙ), stomatal conductance (gₛ) and intercellular CO₂ concentration (Cᵢ) were evaluated. The instantaneous carboxylation efficiency (k) was estimated as Pₙ/Cᵢ (Machado et al. 2009). Chlorophyll fluorescence was evaluated simultaneously with leaf gas exchange, measuring the steady-state (Fₛ), maximum (Fₘ′) and variable (ΔF = Fₘ′ − Fₛ) fluorescence emission under light conditions. As a general index of photochemical activity, we calculated the apparent electron transport rate as ETR = Q × (ΔF/Fₘ′) × 0.4 × 0.85, where ΔF/Fₘ′ is the actual quantum efficiency of photosystem II (PSII), 0.4 is the distribution of electrons between photosystems I and II in C₄ plants (Edwards and Baker 1993), and the light absorption by leaves was considered to be 0.85 (McCormick et al. 2008).
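For clarity, the ETR calculation described above can be expressed as a short function. Q, the 0.4 PSII electron fraction and the 0.85 leaf absorptance follow the text, while the fluorescence values in the example are hypothetical.

```python
# Sketch of the apparent electron transport rate used above:
# ETR = Q x (dF/Fm') x 0.4 x 0.85
def etr(q: float, f_s: float, f_m_prime: float,
        psii_fraction: float = 0.4, absorptance: float = 0.85) -> float:
    delta_f = f_m_prime - f_s           # variable fluorescence under light
    phi_psii = delta_f / f_m_prime      # actual PSII quantum efficiency
    return q * phi_psii * psii_fraction * absorptance

# Q = 1,600 umol m-2 s-1 as in the measurements; Fs and Fm' are made up
print(f"ETR = {etr(1600, f_s=500, f_m_prime=900):.1f} umol m-2 s-1")  # 241.8
```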
The chlorophyll content was measured with the chlorophyll meter clorofiLOG (CFL1030, Falker, Porto Alegre RS, Brazil). The device provides indirect readings of chlorophyll a, b and a + b contents, and data are shown as the Falker Chlorophyll Index (FCI).
Carbohydrate and total free amino acids
Carbohydrates and total free amino acids were evaluated in samples of the third fully expanded leaf with visible ligule (leaf +3), collected and kept at −80 °C. Such metabolites were extracted from lyophilized samples (75 mg) with a methanol:chloroform:water solution (12:5:3, v:v:v), according to Bieleski and Turner (1966). The concentrations of soluble sugars (SS) and sucrose (Suc) were quantified according to Dubois et al. (1956) and Van Handel (1968), respectively. Starch (Sta) was quantified by the enzymatic method proposed by Amaral et al. (2007). The concentration of non-structural carbohydrates (NSC) was calculated as the sum of SS and Sta. Amino acids were determined quantitatively using the colorimetric method of Yemm et al. (1955).
Nitrogen content and activities of nitrate reductase and glutamine synthetase
The total nitrogen concentration in leaves was determined by the Kjeldahl method and expressed in mol•kg⁻¹ of dry matter (Bremner 1965). To obtain the enzymatic extract and estimate the activities of nitrate reductase (NR) and glutamine synthetase (GS), we used the procedure described by Silveira et al. (2010), with modifications. Extracts were obtained from 2 g of leaves macerated to a fine powder in a mortar with liquid nitrogen and polyvinylpolypyrrolidone (PVPP). The extraction buffer (100 mM Tris-HCl buffer, pH 7.5, containing 10 mM FAD + 20 mM EDTA + 5 mM DTT + 0.5% BSA + a mixture of inhibitors: 0.1 mM PMSF + 10 mM leupeptin + 1 mM benzidine) was then added to the sample (2.5 mL•g⁻¹ of fresh weight). The homogenate was filtered through two layers of cheesecloth and centrifuged at 4 °C for 20 min at 3,000 g. Enzyme activity was measured in the supernatant. The activity of nitrate reductase (NR, EC 1.7.1.1) was determined by adding 200 µL of enzymatic extract to a mixture of 500 µL of buffer (100 mM Tris-HCl, pH 7.5 + 10 mM EDTA + 5 mM KNO₃ + 5 mM DTT + 10 µM FAD) and 15 µL of 1 mM NADH. The reaction was carried out in a water bath at 30 °C for 30 minutes and stopped at 100 °C for 10 minutes. Then, 750 µL of sulfanilamide reagent (1% sulfanilamide [w/v] + naphthylethylenediamine dihydrochloride in 2.4 N HCl) were added to the reaction mixture, and the absorbance was measured at 540 nm. The activity was expressed in nmol•h⁻¹•g⁻¹ FW.
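As a worked illustration of how the A540 readout translates into the reported activity units, the sketch below combines a hypothetical nitrite standard-curve slope with the assay volumes from the text. The slope, absorbance values and blank are invented; only the 200 µL extract volume, the 2.5 mL•g⁻¹ extraction ratio and the 30-minute incubation follow the methods.

```python
# Sketch: nitrate reductase activity (nmol NO2- h-1 g-1 FW) from A540 readings
def nr_activity(a540, a540_blank, slope_abs_per_nmol,
                extract_vol_ml, fw_per_ml_extract_g, time_h):
    nmol_no2 = (a540 - a540_blank) / slope_abs_per_nmol  # nitrite formed
    fw_g = extract_vol_ml * fw_per_ml_extract_g          # fresh weight assayed
    return nmol_no2 / (fw_g * time_h)

# 200 uL extract; 2.5 mL buffer per g FW -> 0.4 g FW per mL extract; 0.5 h;
# the standard-curve slope (0.02 A540 per nmol nitrite) is made up.
print(nr_activity(0.35, 0.05, 0.02, extract_vol_ml=0.2,
                  fw_per_ml_extract_g=0.4, time_h=0.5))  # -> 375.0
```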
Biometry
After 72 days of planting, leaves were counted and plant height was measured with a tape measure. At this time, shoot and root dry matter were evaluated after drying samples in a forced-air oven at 60 °C.
Statistical analysis
The experiment was arranged in a randomized design, testing a 3 × 2 factorial. One cause of variation was the inoculation (control; inoculum I and II) and the other was the sugarcane genotype (IACSP94-2094 and IACSP95-5000). The data were subjected to the ANOVA procedure and mean values were compared by the Tukey test at the 5% probability level.
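A minimal sketch of this 3 × 2 factorial analysis in Python is shown below, assuming the data are in long format with one row per plant; the file name and column names are hypothetical.

```python
# Two-way (3 x 2 factorial) ANOVA followed by Tukey's test at alpha = 0.05,
# on hypothetical long-format data with columns "inoculum", "genotype", "PN".
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("gas_exchange.csv")  # assumed file: one row per plant

# ANOVA with inoculation x genotype interaction
model = smf.ols("PN ~ C(inoculum) * C(genotype)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD between inoculation treatments within one genotype
sub = df[df["genotype"] == "IACSP94-2094"]
print(pairwise_tukeyhsd(sub["PN"], sub["inoculum"], alpha=0.05))
```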
RESULTS AND DISCUSSION
The counting of endophytic bacteria present in stalk segments before inoculation indicated a concentration seven times higher in IACSP95-5000 (22.0 × 10⁵ CFU•g⁻¹) than in IACSP94-2094 (3.3 × 10⁵ CFU•g⁻¹) at planting. After 72 days of planting, the number of endophytic bacteria found in roots of inoculated plants was higher than that found in control plants, which had only the native bacteria. IACSP94-2094 roots presented 66.7 × 10⁵ CFU•g⁻¹ when inoculated with the inoculum I and 74.8 × 10⁵ CFU•g⁻¹ when inoculated with the inoculum II, whereas the control plants had 16.7 × 10⁵ CFU•g⁻¹. Bacterial counting in IACSP95-5000 ranged from 11.3 × 10⁵ CFU•g⁻¹ in plants treated with the inoculum I and 5.3 × 10⁵ CFU•g⁻¹ in plants treated with the inoculum II to 3.2 × 10⁵ CFU•g⁻¹ in control plants. There was an increase in endophytic bacteria counts in the roots of IACSP94-2094, even in control plants, which may be related to specific plant compounds that stimulate bacterial growth (Rosenblueth and Martínez-Romero 2006). Instead, the bacterial community was inhibited in IACSP95-5000. Such differences in colonization are likely due to the complexity of endophyte ecology, with endophyte-endophyte and endophyte-plant interactions being affected by biotic and abiotic factors. In fact, a single plant species hosts thousands of epiphytic and endophytic microbial species, and the interactions between those microorganisms may regulate several physiological processes in the host (Andreote et al. 2014).
The application of the inoculants I and II did not affect the dry matter accumulation of shoots or roots as compared to the control (Table 2). In addition, treated plants did not differ from the control plants in height, ranging from 37 to 39 cm for IACSP94-2094 and from 25 to 27 cm for IACSP95-5000. Regardless of genotype, inoculations did not change the number of leaves, with plants showing on average 7 ± 2 leaves.
Herein, shoot and root growth were not enhanced in treated plants (Table 2), indicating that any beneficial effect of bacterial inoculation can be hidden by species-specific interactions between bacteria and also between bacterium and plant. Native bacteria in plant tissues and the bacteria introduced by inoculation can compete for space, carbon and nutrients, a condition quite different from the application of individual bacterial species. Such competition could prevent plant growth promotion, as already reported by Ögüt et al. (2005) in bean and wheat plants. On the other hand, these results could indicate that the native endophytic bacterial community was well adapted inside the plant and that the plant-bacteria interaction balance was quite stable. Despite the absence of plant growth promotion (Table 2), physiological changes due to inoculation were noticed in both genotypes and suggest that growth promotion is a consequence of several mechanisms by which endophytic bacteria may influence plant development. Leaf CO₂ assimilation was stimulated in inoculated plants (Figure 1a), but the underlying processes causing improved photosynthesis differed between genotypes. Increased photosynthesis in IACSP94-2094 treated with the inoculum II was associated with increases in chlorophyll a content (Table 3) and in ETR (Figure 1c). These changes suggest an improved absorption of light energy and its use in photochemistry, a key physiological process responsible for ATP and NADPH production in chloroplasts. Shi et al. (2010) also reported that bacterial inoculation caused an increase in ETR and improved photosynthesis, suggesting the presence of unknown compounds produced by bacteria that could increase ETR and chlorophyll metabolism. The presence of bacteria in leaves may also upregulate photosynthetic genes related to ferredoxin and NADPH ferredoxin (Bilgin et al. 2010). On the other hand, increased photosynthesis in IACSP95-5000 treated with the inoculum I was caused by higher stomatal aperture (Figure 1b). Stomatal regulation is affected by endophytic bacteria (Ryan et al. 2007), which may be present in stomatal cells (Compant et al. 2005). Such regulation was previously associated with compounds produced by bacteria such as coronatine, with action similar to jasmonate (Brader et al. 2014). Chlorophyll b content was also increased in IACSP95-5000 treated with the inoculum II (Table 3). Carboxylation efficiency (k) was not affected by inoculation, varying around 3.84 ± 0.78 µmol•m⁻²•s⁻¹•Pa⁻¹ in IACSP94-2094 and 3.85 ± 0.54 µmol•m⁻²•s⁻¹•Pa⁻¹ in IACSP95-5000.
IACSP94-2094 treated with the inoculum II presented higher leaf sucrose content than the control plants (Figure 2a), a response associated with improved photosynthesis (Figure 1a). However, photosynthesis stimulation in IACSP95-5000 treated with the inoculum I did not increase leaf carbohydrate content (Figures 1a and 2), and plants that received the inoculum II presented a large reduction in leaf sucrose content (Figure 2a). As such reduction did not result in low soluble sugars content (Figure 2b), our data indicate the presence of other sugars derived from sucrose hydrolysis. In fact, Shi et al. (2010) found higher fructose concentration in leaves of plants treated with endophytic bacteria. The inocula did not change Sta and NSC concentrations, regardless of sugarcane genotype (Figures 2c,d).
There was no significant difference in leaf nitrogen concentration between treatments, and the mean value was 1.64 ± 0.03 mol•kg⁻¹. However, bacterial inoculation increased nitrate reductase activity in IACSP95-5000, with the highest activity being found in plants treated with the inoculum II (Figure 3a). Regardless of genotype, glutamine synthetase activity was not affected by inoculation (Figure 3b). There was no difference in total free amino acids among treatments, and the mean values were around 2.32 ± 0.16 mg•g⁻¹ in IACSP94-2094 and 2.58 ± 0.32 mg•g⁻¹ in IACSP95-5000.
Decreases in leaf sucrose content may be related to plant-bacteria interactions (Fuentes-Ramírez et al. 1999) in IACSP95-5000, which also showed higher nitrate reductase activity (Figure 3a). According to Donato et al. (2004), bacteria can affect N metabolism through nitrate reductase activity, increasing the uptake of nitrate and then leaf nitrogen content. As the activities of glutamine synthetase (unaffected) and nitrate reductase (increased) were differently affected by inoculation, one may argue that the bacterial treatment improved the absorption and translocation of nitrate to the leaves. Such an assumption is based on leaf nitrogen concentration, which was not changed by inoculation. Regarding the enzymatic activity, one may consider that in vitro assays do not necessarily reflect the in vivo activity of enzymes, as temperature and substrate concentrations would not be the same as those found in planta. Thus, the activities of nitrate reductase and glutamine synthetase reported herein would be indicative that the inocula changed sugarcane N metabolism.
Is bacterial inoculation advantageous to sugarcane if there are metabolic costs without promotion of plant growth? Plants were under non-limiting conditions in this study, and we should take into account that any advantage associated with inoculation may occur under stress conditions (Vargas et al. 2014). For instance, increases in the carbohydrate status of IACSP94-2094 may benefit plant metabolism under constraining conditions, maintaining energy and carbon supply and thus plant homeostasis. The increases in stomatal conductance of IACSP95-5000 due to inoculation (Figure 1b) may be a positive change under short-term water deficit, favoring CO₂ supply to photosynthesis. Accordingly, positive effects of bacterial inoculation on plant nutrition were observed only in low-fertility soils (Oliveira et al. 2006). Time after inoculation is another aspect to be considered when the interaction between plants and bacteria is studied. Chauhan et al. (2013) reported positive effects of bacterial inoculation in sugarcane plants grown under field conditions after six months, with plants showing improvements in chlorophyll content, N content and yield. On the other hand, studies have shown that the most pronounced effects could occur at the beginning of growth after inoculation.
CONCLUSION
Our data demonstrate that bacterial mixtures affect sugarcane physiology, improving photosynthesis and nitrate reduction in a genotype-dependent manner.However, such physiological changes are not associated with biomass production in sugarcane plantlets, obtained from mini stalks with one bud, already colonized by native endophytic bacteria and grown under non-limiting conditions.
Figure 1. Leaf CO₂ assimilation (Pₙ, in a), stomatal conductance (gₛ, in b) and apparent electron transport rate (ETR, in c) in IACSP94-2094 and IACSP95-5000 treated with the inoculum I, II or untreated (control). Mean values ± standard deviation (n = 3). Different capital letters indicate statistical differences between genotypes in a given treatment, whereas lower case letters indicate differences between treatments in a given genotype by Tukey's test (p < 0.05).
Figure 2. Leaf carbohydrate concentrations in IACSP94-2094 and IACSP95-5000 treated with the inoculum I, II or untreated (control): sucrose (a); soluble sugars (b); starch (c); and total non-structural carbohydrates (d). Mean values ± standard deviation (n = 3). Different capital letters indicate statistical differences between genotypes in a given treatment, whereas lower case letters indicate differences between treatments in a given genotype by Tukey's test (p < 0.05).
Figure 3. Activities of nitrate reductase (a) and glutamine synthetase (b) in leaves of IACSP94-2094 and IACSP95-5000 treated with the inoculum I, II or untreated (control). Mean values ± standard deviation (n = 3). Different capital letters indicate statistical differences between genotypes in a given treatment, whereas lower case letters indicate differences between treatments in a given genotype by Tukey's test (p < 0.05). GGH = glutamyl hydroxamate.
Table 1. Inoculum composition, presence of the nifH gene and bacterial ability to promote root and shoot growth as well as to produce indole substances* (Soil Microorganisms Collection). + and − indicate promotion/presence or non-promotion/absence, respectively. ID = identification in GenBank.
Table 2. Shoot and root dry matter of IACSP94-2094 and IACSP95-5000 plants treated with the inoculum I and II or untreated (control)*.
*Mean values ± standard deviation (n = 3). Different capital letters indicate statistical differences between genotypes in a given treatment, whereas lower case letters indicate statistical differences between treatments in a given genotype by Tukey's test (p < 0.05). | 2018-09-18T23:28:23.505Z | 2015-11-24T00:00:00.000 | {
"year": 2015,
"sha1": "90533df98e428e75db262fd5832cd25766624641",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/brag/v75n1/0006-8705-brag-1678-4499256.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "90533df98e428e75db262fd5832cd25766624641",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
235315471 | pes2o/s2orc | v3-fos-license | Vascular Metabolism as Driver of Atherosclerosis: Linking Endothelial Metabolism to Inflammation
The endothelium is a crucial regulator of vascular homeostasis, controlling barrier integrity as well as acting as an important signal transducer, thereby illustrating that endothelial cells are not inert cells. In the context of atherosclerosis, this barrier function is impaired and endothelial cells become activated, resulting in the upregulation of adhesion molecules, secretion of cytokines and chemokines, and internalization of integrins. Finally, this leads to increased vessel permeability, thereby facilitating leukocyte extravasation as well as fostering a pro-inflammatory environment. Additionally, activated endothelial cells can form migrating tip cells and proliferative stalk cells, resulting in the formation of new blood vessels. Emerging evidence indicates that cellular metabolism is crucial in fueling these pro-atherosclerotic processes, including neovascularization and inflammation, thereby contributing to plaque progression and altering plaque stability. Therefore, further research is necessary to unravel the complex mechanisms underlying endothelial cell metabolic changes, and to exploit this knowledge for finding and developing potential future therapeutic strategies. In this review we discuss the metabolic alterations endothelial cells undergo in the context of inflammation and atherosclerosis and how these relate to changes in endothelial functioning. Finally, we describe several metabolic targets that are currently being used for therapeutic interventions.
INTRODUCTION
The endothelium is a crucial barrier between blood and tissue and is essential for maintaining vascular homeostasis [1]. The monolayer of endothelial cells (ECs) covering the vascular wall is exposed to several mechanical (stretch, shear stress, pressure) and circulating factors (cytokines, chemokines, humoral agents, chemical factors, lipoproteins) that can all affect EC phenotype. During vascular homeostasis ECs are in a quiescent state, characterized by the formation of nitric oxide (NO) by endothelial nitric oxide synthase (eNOS). NO has been considered atheroprotective due to its anti-inflammatory role, regulating vasodilatation and inhibiting thrombosis and the adhesion of leukocytes and platelets [2]. However, when ECs are exposed to disturbed or low flow conditions, they exhibit a loss in eNOS activity and an enhanced activated phenotype [3]. Among other effects, this EC activation can result in the recruitment of immune cells to the site of infection [3][4][5][6]. Furthermore, during the advanced stages of atherosclerosis, the hypoxic regions in atherosclerotic plaques can lead to local production of the pro-angiogenic factor VEGF, resulting in plaque neovascularization [7][8][9]. Overall, these processes can contribute to further progression of atherosclerosis, thereby aggravating clinical outcome.
To sustain these pro-atherogenic processes, a certain amount of energy and biomass is necessary. In the field of cancer biology, rewiring of cellular metabolism has been extensively explored as a way for cancer cells to gain a substantial amount of energy and biomass required for proliferation, invasion and metastasis [10,11]. Cancer cells switch from oxidative phosphorylation to aerobic glycolysis for the production of ATP [12,13]. This metabolic switch is referred to as the Warburg effect.
Interestingly, this rewiring of cellular metabolism has also been observed during atherosclerosis, where it has predominantly been described in macrophages [14,15]. Recently, it has been established that vulnerable human atherosclerotic lesions exhibit an enhanced expression of glycolytic markers compared to stable plaques [16,17]. Several landmark studies from the group of Carmeliet have shown that ECs are highly glycolytic, even under quiescent conditions [18].
However, to date, the role of EC metabolism in atherosclerosis has been studied to a lesser extent. In this review we will discuss the intricate role of EC metabolism in fueling vascular inflammation and atherogenesis.
Lastly, we will address various signaling routes targeting microRNA-124, the mitochondria, the glycolytic enzyme 6-phosphofructo-2-kinase/fructose-2,6-biphosphatase 3 (PFKFB3), lipoprotein(a) [Lp(a)] and oxidized phospholipids as potential interventions that target endothelial metabolism. It is important to note that all the metabolic states described in this manuscript are a reflection of the 'activation state' of the cells (i.e., proliferating cells, inflammatory cells) and should therefore be extrapolated into different contexts and disease pathologies with caution.
ALTERED METABOLISM AS A MARKER FOR PLAQUE VULNERABILITY
Although the molecular mechanisms that underlie cellular metabolic changes or rewiring are still being unraveled, the concept of an altered vascular metabolism has already been exploited for years. The tracer 18F-fluorodeoxyglucose (18F-FDG), a glucose analogue preferentially taken up by highly glycolytic cells, has long been used in positron emission tomography (PET) imaging to visualize metabolically active, inflamed regions such as atherosclerotic plaques [24][25][26][27]. In contrast to other cell types, ECs carry a low mitochondrial content accounting for approximately 2-5% of their cytoplasm, which suggests that, in this context, mitochondrial respiration is not the preferred route for ATP generation in ECs [28].
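For readers unfamiliar with how 18F-FDG signal is quantified, the standardized uptake value (SUV) is the conventional metric; the sketch below uses invented numbers and, as a simplification, ignores decay correction and other corrections applied in practice.

```python
# Standardized uptake value (SUV), the usual normalization of 18F-FDG signal
# in PET; a generic illustration with invented numbers, ignoring decay
# correction. Assumes tissue density of ~1 g/mL.
def suv(tissue_kbq_per_ml: float, injected_dose_mbq: float,
        body_weight_kg: float) -> float:
    """SUV = tissue activity concentration / (injected dose / body weight)."""
    dose_kbq_per_g = injected_dose_mbq * 1000 / (body_weight_kg * 1000)
    return tissue_kbq_per_ml / dose_kbq_per_g

print(f"SUV = {suv(5.2, injected_dose_mbq=370, body_weight_kg=75):.2f}")  # 1.05
```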
In the field of cancer biology, it has been established that ECs undergo a metabolic switch towards glycolysis to promote neovascularization, which facilitates tumor growth and metastasis [10,11]. Equivalently, neovascularization is pivotal in atherosclerotic lesions. These newly formed unstable and leaky vessels provide novel routes for the influx of pro-atherogenic lipoproteins, red blood cells, inflammatory cells and mediators, and thereby contribute to plaque instability by forming thin-cap fibroatheromas that are more prone to rupture [8,9,29]. The formation of these new blood vessels rests on ECs specializing into leading tip cells, which extend their filopodia, and trailing stalk cells, which support extension of the sprouts by proliferation [30][31][32]. To date, a collection of studies describes the changes in EC metabolism that are essential in driving angiogenesis (neovascularization); these are extensively reviewed elsewhere [26,33,34]. In this review we aim to provide an overview of the candidates that are of interest in the context of atherosclerosis (a schematic overview can be found in Figure 1).
Glycolysis
De Bock et al. demonstrated that in human umbilical venous endothelial cells (HUVECs), as well as arterial, lymphatic, and microvascular ECs glycolysis is the predominant bioenergetic pathway [35]. To investigate the role of glycolysis in ECs, they focused on the glycolytic enzyme 6-phosphofructo-2-kinase/fructose-2,6-biphosphatase 3 (PFKFB3). Upon knock-down of PFKFB3, an in-vitro sprouting assay showed a marked decrease in the number and length of sprouts (p < 0.05) [35].
Mitochondrial Respiration
Yetkin-Arik and colleagues demonstrated that silencing the mitochondrial respiration enzyme pyruvate dehydrogenase E1 subunit alpha 1 (PDHA1) in HUVECs resulted in an increased number of apoptotic tip cells and a decrease in proliferating cells [38]. These data underpin that, besides glycolysis, mitochondrial respiration is also important in driving angiogenesis. Similarly, blocking pyruvate transport into mitochondria using 2-cyano-3-(1-phenyl-1H-indol-3-yl)-2-propenoic acid (UK5099), which targets the mitochondrial pyruvate carrier, resulted in a 30% reduction in the number of tip cells, indicating that mitochondrial respiration is essential for tip cell survival and EC proliferation. Similar effects were observed by the group of Diebold in HUVECs upon inhibition of mitochondrial complex III using antimycin A, and were attributed to decreased NAD⁺/NADH ratios [39]. The importance of mitochondrial respiration in angiogenesis is further highlighted by the observation that silencing Pdha1 expression resulted in a 2.3-fold reduction in sprout length (p < 0.05) in in-vitro spheroid assays, followed by a decrease in branching points (p < 0.01) and total sprout length (p < 0.05) in the in-vivo chicken chorioallantoic-membrane photodynamic therapy (CAM-PDT) assay [38].
Furthermore, the group of Lapel reported diminished tubular formation of vasa vasorum ECs (VVECs) upon exposure to the OXPHOS inhibitors rotenone, oligomycin, and FCCP [40]. Collectively, these studies indicate a significant role for mitochondrial respiration in neovascularization. In conclusion, although the expression of glycolysis-related markers has been associated with increased plaque vulnerability, the importance of mitochondrial respiration in driving angiogenesis is becoming increasingly clear. It would be of interest to extrapolate these findings and assess the role of mitochondrial respiration in driving atherosclerosis in order to combat its progression.
Fatty Acid Oxidation
Lastly, the function of mitochondrial fatty acid oxidation (FAO) in angiogenesis was studied by Schoors and colleagues [41]. Carnitine palmitoyltransferase 1 (CPT1) is a rate-limiting enzyme in FAO and is essential for beta oxidation of long chain fatty acids in the mitochondria.
Silencing of the CPT1 isoform CPT1A in HUVECs resulted in a decrease in vessel sprout numbers and length (p < 0.0001), which was attributed to reduced EC proliferation, in line with FAO-derived carbons feeding de novo nucleotide synthesis in proliferating ECs [41].
KLF2 AND FOXO1; GATEKEEPERS OF EC QUIESCENCE
Since endothelial cells form the inner lining of blood vessels, they are exposed to the force of laminar blood flow and shear stress. Disturbance of this blood flow can lead to disturbed shear stress, resulting in NF-κB-induced hypoxia-inducible factor 1α (HIF1α) transcription, promoting EC proliferation, activation and inflammation [45]. Besides the NF-κB-HIF1α signaling pathway, the AMPK/mTOR/ULK1 axis has also been demonstrated to be induced by shear stress [46]. This axis induces autophagy and thereby modulates the vascular smooth muscle cell (VSMC) phenotype. Similarly to VSMCs, autophagy is also essential in ECs for maintaining alignment [47].
KLF2
In ECs the transcription factor Krüppel-like factor 2 (KLF2) promotes endothelial quiescence by upregulating anti-inflammatory and anti-thrombotic proteins and by downregulating pro-inflammatory and pro-thrombotic proteins. Upon exposure to laminar shear stress for 72 h, HUVECs induced KLF2 expression, which was accompanied by decreased glucose uptake and a lower number of mitochondria per EC compared to static conditions [48]. This reduction in glycolysis was mediated by KLF2-induced downregulation of PFKFB3, which furthermore resulted in increased intracellular hyaluronan (HA) substrate availability and HA synthesis [49].
These results suggest that the KLF2-PFKFB3 axis has an important role in regulating EC metabolism and thereby altering the quiescent or activated state of the endothelium. It is therefore tempting to speculate that at sites of disturbed flow, where KLF2 expression is low, unrestrained PFKFB3-driven glycolysis contributes to endothelial activation.
FOXO1
In addition to KLF2 as a gatekeeper of endothelial quiescence, the transcription factor forkhead box O1 (FOXO1) has been described as a metabolic checkpoint. Similarly to KLF2, FOXO1 is essential in regulating neovascularization [50]. Upon endothelial-selective Foxo1 deletion, ECs have been reported to become hyperproliferative, causing vessel enlargement, an effect linked to de-repression of MYC-driven glycolytic and mitochondrial metabolism [50].
YAP-TAZ Signaling
The YAP/TAZ signaling pathway is also of importance in EC quiescence and homeostasis. Emerging evidence is accumulating that the YAP/TAZ pathway is intertwined with cellular metabolism [52][53][54]. The group of Enzo demonstrated that YAP/TAZ, similarly to KLF2 and FOXO1, is regulated by glycolysis in MDA-MB-231 breast cancer cells [55]. In turn, activation of the YAP/TAZ pathway in pulmonary arterial ECs has also been shown to modulate the metabolic enzyme glutaminase (GLS1), involved in glutaminolysis and glycolysis [56]. Collectively, these studies imply the existence of a YAP/TAZ-metabolism positive feedback loop that could contribute to the progression of atherosclerosis.
THE RISE OF MIRNAS
Over the years microRNAs (miRNAs) have been emerging as significant regulators in atherosclerosis, with novel functions being discovered regularly. The current status of miRNAs and their therapeutic potential in atherosclerosis has been extensively discussed by Feinberg and Moore [58]. Here we focus on miRNAs that are specifically involved in cellular metabolism in the context of atherosclerosis.
Inflammation
Oxidized phospholipids (oxPLs) are also known as danger-associated molecular patterns (DAMPs) that can be carried by lipoprotein(a) and oxLDL, resulting in their accumulation in atherosclerotic lesions [59]. Here oxPLs can induce an inflammatory response and thereby aggravate disease progression [60]. Exposure of HUVECs to 30 μg/mL oxidized phospholipids has been linked to altered miR-93 signaling; however, further research is necessary to investigate the intricate connection between miR-93, inflammation and glycolysis.
Pulmonary Arterial Hypertension
Lastly, the effect of anomalous miRNA expression in cardiovascular disease was highlighted in vivo by Caruso and colleagues, who linked reduced miR-124 expression in pulmonary arterial hypertension to a glycolytic, hyperproliferative endothelial phenotype.
CELLULAR METABOLISM
Atherosclerosis is a multifactorial process that drives cardiovascular disease, and has been associated with several risk factors, including age.
Especially in the Western world, we are confronted with a growing aging population, which increases the risk of major adverse cardiovascular events (MACE) [63,64]. To be able to treat this expanding patient population, it is necessary to understand how aging affects the atherosclerotic process.
One of the key hallmarks of aging is the growing number of cells that turn senescent, i.e., an increase in cells that are in proliferative arrest. Recently, Sabbatinelli and colleagues extensively reviewed the metabolic rewiring that senescent ECs undergo in order to sustain their activities. This rewiring is characterized by an even higher dependency on glycolysis, the production of ROS, a decrease in nitric oxide (NO) production and the induction of pro-inflammatory processes [65], demonstrating a metabolism-senescence-inflammation axis in aging individuals.
During aging, mitochondrial function declines, thereby contributing to the acceleration of atherosclerosis [66]. Using wild-type mice in a low-cholesterol environment, Tyrrell and colleagues demonstrated that, along with an increase in mitochondrial dysfunction, aged mice also exhibit elevated levels of IL-6 within the aorta [66]. Mitochondrial damage-associated molecular patterns expressed by dysfunctional mitochondria activate the TLR9-MyD88 axis, resulting in the production of pro-inflammatory cytokines including IL-6. In turn, IL-6 can further aggravate mitochondrial dysfunction, suggesting a positive feedback loop within the aorta of aging mice. Furthermore, this enhanced mitochondrial dysfunction was also characterized by an increase in mitophagy, the degradation of mitochondria by autophagy.
Recently, encouraging evidence has been accumulating regarding the potential of targeting mitochondrial function as a strategy for maintaining vascular function [67]. By supplementing the mitochondria-targeted antioxidant MitoQ in aged (approximately 27 months old) mice, it has been demonstrated that targeting mitochondrial fitness reduced the production of mitochondria-derived ROS and restored endothelium-dependent dilation [68]. These promising pre-clinical data were confirmed in a randomized, placebo-controlled, double-blind, crossover-design study with healthy older adults [69]. Besides changes on a cellular level, aging individuals are also confronted with increased stiffness of the large arteries. There are several mechanisms underlying this arterial stiffness, which have been extensively reviewed by several groups [70][71][72]. Vessel stiffening has been shown to increase endothelial permeability [73]. In tumor vasculature, PFKFB3 has been implicated in vessel destabilization due to VE-cadherin internalization [11]. Inhibiting PFKFB3 led to increased barrier integrity due to increased expression of VE-cadherin at the membrane. This suggests that increased PFKFB3 activity can lead to vessel destabilization and increased endothelial permeability. Additionally, vessel stiffness and shear stress-mediated EC alignment are also linked: bovine aortic ECs cultured on hydrogels mimicking older, stiffer vessels form fewer tight junctions after 24-hour exposure to fluid shear stress compared to ECs cultured on hydrogels mimicking younger vessels [74]. In this regard, loss of eNOS activity due to disturbed shear stress results in lower NO production, an increase in blood pressure and thereby an increased likelihood of vessel wall injury [2], a process that is accelerated in aging individuals and hallmarked by increased blood vessel stiffening.
Trained Immunity
In the context of trained immunity, the influence of inflammation on rewiring metabolism in monocytes and the subsequent sustained inflammatory effects have been well documented [59,75,76].
Accumulating evidence suggests that circulating lipoproteins elicit trained immunity in monocytes [59,77]. The pro-atherogenic lipoprotein oxLDL has been recognized to induce trained immunity in primary monocytes, as demonstrated by enhanced secretion of IL-6, TNFα, IL-8 and MCP-1, thereby contributing to the persistent low-grade inflammation observed in atherosclerosis [77]. Recently, it has been demonstrated that metabolic reprogramming is required for oxLDL-induced trained immunity [78].
Using extracellular flux analysis, Keating and colleagues observed an increased ECAR (p < 0.05) in oxLDL-trained macrophages, which was accompanied by an upregulation of the glycolytic enzyme PFKFB3 (p < 0.05). Along with this increase in ECAR, there was also an enhanced OCR (p < 0.05). Overall, these results indicate a metabolic switch upon oxLDL-induced training. This was further validated by demonstrating that the susceptibility of individuals to trained immunity was associated with genetic variations in glycolytic genes, including PFKFB3, PFKP and HK1.
Endothelial Cells
Increasing evidence suggests that a similar metabolism-inflammation axis may also exist in endothelial cells [4,11,57]. As previously described,
A Novel Approach: Directly Targeting Vascular Metabolism
Altered endothelial metabolism is inextricably linked to atherosclerosis; in particular, PFKFB3 has been identified as a key regulator of glycolysis in ECs and could therefore be a potential drug target (Table 1). In cancer research, PFKFB3 has already been extensively studied as a drug target.
Along with restoring vascular homeostasis, 3PO has also been shown to be effective in reducing pathological angiogenesis in ocular and inflammatory models [87]. Previous studies have also shown that neovascularization in atherosclerotic plaques contributes to increased plaque instability [8,9]. The observations that 3PO can reduce pathological angiogenesis are therefore also promising in the context of plaque neovascularization.
Plaques from the PFK158-treated group had a lower incidence of fibrous cap atheroma (p < 0.05), accompanied by a significant reduction in necrotic core area (p < 0.05) and apoptotic cell (TUNEL) staining area (p < 0.005) [16]. Moreover, there was an increase in vascular smooth muscle content (p < 0.005) and thickening of the fibrous cap area (p < 0.05). Altogether, these aspects contribute to plaque stability, as indicated by the significant increase in stability index area (p < 0.05) in the PFK158-treated group.
Taken together, pharmacological therapeutic interventions directly or indirectly targeting vascular metabolism appear to be beneficial by increasing plaque stability, diminishing inflammation and reducing neovascularization in in-vitro and/or in-vivo models, as summarized in Table 1.
Indirectly Targeting Vascular Metabolism
In parallel to directly targeting vascular metabolism, it would be advantageous to reduce the atherogenic stimuli that induce metabolic reprogramming in ECs in the first place. As described previously, Lp(a) induces vascular glycolysis, thereby initiating a pro-inflammatory endothelial phenotype that facilitates leukocyte extravasation [4]. This
Conclusions
In this review, we discussed the impact of vascular metabolism on atherosclerosis and its progression, and shed some light on the potential of targeting these altered metabolic pathways. Although there are several treatment options on the market for slowing the progression of atherosclerosis, CVD remains the number one cause of death worldwide and is still increasing, in part due to our rapidly growing aging population [63,64]. Aging as well as exposure to atherosclerotic stimuli are able to rewire cellular metabolism in the vasculature [2,4,11,65,66,78]. This metabolic rewiring in ECs results in endothelial activation, consequently inducing neovascularization and creating a pro-inflammatory environment that facilitates leukocyte extravasation [3][4][5][6][7][8][9]. Both processes drive the progression of atherosclerosis and contribute to plaque instability, illustrating the importance of these pathways [8,9,29].
Limitations and Future Perspectives
As stated previously, multiple studies showed the beneficial therapeutic effect of targeting altered EC metabolism in several atherosclerosis models (Table 1). However, most of these interventions have not yet entered clinical trials. In order to translate these experimental findings into the clinical arena, new scientific advances in the field of vascular and immunometabolism are warranted. For instance, most in-vitro studies discussed in this review were performed with HUVECs. HUVECs are a preferred endothelial model, since they are easy to retrieve and have a high proliferation rate [90,91]. Additionally, HUVECs can migrate and invade, making them suitable for several angiogenesis and transmigration assays [90,92]. However, HUVECs do not fully recapitulate the vascular bed affected in atherosclerosis [91]. It is therefore important to take this into account when extrapolating the data into the context of the respective disease etiology. Future studies should therefore take the different disease pathologies as well as the tissue of interest into account and adapt their cell lines accordingly. To illustrate, HAECs could be one of the preferred cell types when studying atherogenesis [93,94]. Alongside utilizing the appropriate cell lines, the field of vascular metabolism could also benefit from the use of advanced in-vitro models, such as organ-on-a-chip technologies, co-culture systems and human induced pluripotent stem cells. These in-vitro models provide a platform to mimic the complex multifactorial aspects of the vasculature, making the results easier to translate towards the clinic [95].
Besides exposure to atherogenic stimuli, aging contributes significantly to the elevated dependency on glycolysis, increased mitochondrial dysfunction, ROS production and inflammation, as well as the decrease in NO production [65,66]. However, the plasticity of EC metabolism in aging individuals has been discussed to a lesser extent. This raises the question of whether novel therapeutic interventions targeting metabolism can switch EC metabolism back to its original state, thereby restoring EC phenotype and consequently vascular homeostasis. Addressing these outstanding questions in future research in the field of vascular metabolism will help move the field forward.
In this review we described various metabolic pathways that can be altered in ECs, with the glycolytic pathway being the one that has been most extensively investigated and therefore most discussed. PFKFB3 inhibition has been described in the context of cancer in several landmark studies by the group of Carmeliet [11,26,78]. In the context of atherogenesis, inhibition of PFKFB3 showed promising results in the first in-vitro as well as in-vivo studies, demonstrating the potential of targeting vascular metabolism as a therapeutic strategy to combat atherosclerosis [4,16].
However, it is important to realize that, just like any other cellular process, the adaptation of metabolism depends, amongst others, on time, spatial localization and 'cellular state' (i.e., quiescent, proliferative, activated/inflamed), but also on the available energy supply and demand [96]. This makes extrapolation of the metabolic state of ECs from one disease to another extremely challenging. While some data suggest that EC activation and inflammatory responses precede the observed increase in glycolysis [18], the opposite could also be true, for example in diabetic patients. Here, the sustained glucose supply and increased glycolytic flux may themselves cause EC activation and inflammation [97,98]. Therefore, further unraveling of the metabolic-inflammatory axis in ECs in the proper (patho)physiological context is necessary to provide this and other exciting fields with detailed insight into which metabolic regulators could be targeted to reduce the atherosclerotic burden.
CONFLICT OF INTERESTS
JK has received a research grant from Oxitope Pharma BV. The other authors declared they do not have anything to disclose regarding conflict of interest with respect to this manuscript.
FUNDING
This work was supported by the Netherlands Organization for Scientific Research. JK received a VENI grant from ZonMW (91619098).
"year": 2021,
"sha1": "43357ca81dfc0f96578ebd15138ede847b0e7c0d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.20900/immunometab20210020",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43357ca81dfc0f96578ebd15138ede847b0e7c0d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Glomerular abundance of complement proteins characterized by proteomic analysis of laser-captured microdissected glomeruli associates with progressive disease in IgA nephropathy
Background The clinical course of IgA nephropathy (IgAN) is variable and complement activation may predict prognosis. The present study investigated whether glomerular abundance of complement proteins associates with progression to end-stage renal disease (ESRD) in patients for whom prognosis could not be predicted based on clinical variables. Methods Based on data from the Norwegian Kidney Biopsy Registry and the Norwegian Renal Registry, three groups were included: IgAN patients with (n = 9) or without (n = 16) progression to ESRD during 10 years, and controls (n = 15) with a normal kidney biopsy. IgAN patients had eGFR > 45 ml/min/1.73 m2 and non-nephrotic proteinuria at time of biopsy. Using stored formalin-fixed paraffin-embedded kidney biopsy tissue, about 100 glomerular cross sections were microdissected for each patient. Samples were analyzed by liquid chromatography–tandem mass spectrometry and relative abundances of complement proteins were compared between groups. Results Proteomic analyses quantified 2018 proteins, of which 28 belong to the complement system. As compared to IgAN patients without progressive disease, glomeruli from patients with progressive IgAN had significantly higher abundance of components of the classical and the terminal complement pathways, and of inhibitory factors such as factor H and factor H-related proteins. Abundance of complement proteins classified progressors from non-progressors with an area under the ROC curve of 0.91 (p = 0.001). Clinical and morphological data were similar between the two patient groups and could not predict progressive IgAN. Conclusions In conclusion, higher glomerular abundance of complement proteins was associated with a progressive clinical course in IgAN, and these proteins are candidate biomarkers to predict prognosis. Electronic supplementary material The online version of this article (doi:10.1186/s12014-017-9165-x) contains supplementary material, which is available to authorized users.
Background The clinical course of IgA nephropathy (IgAN) is highly variable and difficult to predict, some patients have a stable clinical course, while others progress to end-stage renal disease (ESRD). Several clinical and histological factors at time of diagnosis have been shown to indicate worse prognosis. These include low estimated glomerular filtration rate (eGFR), hypertension, proteinuria, mesangial hypercellularity, segmental glomerulosclerosis or adhesion, tubular atrophy and interstitial fibrosis [1][2][3]. There is however a large group with moderate risk in which individual prognostication based on these factors is difficult and there is a clear need for better prognostic markers in this group [4].
It has long been suggested that complement has an important role in the pathogenesis of IgAN as complement components C3, properdin and factor H have been commonly co-detected with IgA deposits in renal biopsy specimens [5][6][7]. Complement activation can occur through the classical, lectin or alternative pathways [8][9][10][11], that ultimately result in activation of the terminal complement pathway. Previous studies have shown that the lectin pathway [12,13] and the alternative pathway [14] likely are involved in the pathophysiology of IgAN.
In the present study we investigated markers of progressive IgAN in patients with medium risk of progression based on eGFR and proteinuria. Patients were included in a case-control design comparing patients with progressive IgAN to patients with non-progressive IgAN as well as to control patients. Glomerular cross sections were microdissected and glomerular protein abundances were compared between groups. Initial findings indicated that complement-related proteins were important, and we therefore compared the abundances of these proteins in progressive versus non-progressive IgAN and in non-progressive IgAN versus healthy control patients, and describe associations with clinical and morphological parameters. Lastly, we investigated whether complement-related proteins showed potential for prediction of progressive IgAN.
Methods
The study was approved by the Regional Committee for Medical and Health Research Ethics.
Registries used in the study
Data from the Norwegian Kidney Biopsy Registry were used for selection of patients. The registry has recorded clinical, biochemical and histopathological data at time of biopsy from nearly all patients who have undergone a non-neoplastic kidney biopsy in Norway since 1988. Serum creatinine, systolic blood pressure and urinary protein excretion at time of biopsy were used as reported to the registry. Creatinine was measured at the local hospital laboratories using the kinetic Jaffe method until about 2005, when there was a switch to the IDMS-traceable enzymatic test; the switch was made at slightly different time points at different hospitals. In the present study, creatinine values measured before 2005 were recalculated based on a formula used by Hallan et al. to recalibrate creatinine to IDMS-traceable values [15]. We calculated eGFR based on the CKD-EPI equation [16]. Urinary protein was quantified as g/24 h either from directly measured values, by calculation from the reported urinary protein to creatinine ratio, or, if only reported by urinary dipstick, a negative dipstick was set to 0 g/24 h, 1+ was set to 0.5 g/24 h, 2+ was set to 1.0 g/24 h and 3+ was set to 3.0 g/24 h [4]. By using the 11-digit national identity number, data from the Norwegian Kidney Biopsy Registry were linked with the Norwegian Renal Registry, which has registered all cases of ESRD in Norway since 1980. At the time of linkage, data on ESRD were available until 2013.
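To make the registry-based derivations above concrete, the following minimal Python sketch implements the CKD-EPI creatinine equation (assuming the 2009 version, which the paper does not state explicitly) together with the dipstick-to-g/24 h mapping described above. The function names and the example call are illustrative only and not part of the study's actual pipeline.

```python
def ckd_epi_egfr(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """eGFR (ml/min/1.73 m2) from serum creatinine (mg/dl), 2009 CKD-EPI equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Dipstick-to-g/24 h mapping used when only a dipstick reading was reported
DIPSTICK_TO_G24H = {"negative": 0.0, "1+": 0.5, "2+": 1.0, "3+": 3.0}

print(round(ckd_epi_egfr(1.1, 45, female=False), 1))  # roughly 81 ml/min/1.73 m2
```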
Study population
Based on data from the described registries, patients were selected for three subgroups. (1) Non-progressive IgAN, criteria: diagnosis of IgAN at kidney biopsy, eGFR > 45 ml/min/1.73 m2, urinary protein >1 g/24 h and no development of ESRD during a follow-up period of at least 10 years. (2) Progressive IgAN, criteria: diagnosis of IgAN at kidney biopsy, eGFR > 45 ml/min/1.73 m2, urinary protein <3.5 g/24 h and development of ESRD during the first 10 years after kidney biopsy. (3) Control patients, criteria: normal or minimal morphological changes in the kidney biopsy, eGFR > 60 ml/min/1.73 m2, urinary protein <0.5 g/24 h and no development of ESRD during a follow-up period of at least 10 years. All biopsies had been performed as part of a standard clinical workup where glomerular disease was suspected. By review of medical records, data on steroid treatment were retrieved for all IgAN patients, and the last available serum creatinine and urinary protein were also retrieved for patients who had not developed ESRD.
Laser capture microdissection and sample preparation
The remaining part of the kidney biopsy core that was not used for diagnostic examination has been stored as formalin-fixed paraffin-embedded tissue and was used for the present study. Ten micrometer thick FFPE sections were deparaffinized, rehydrated and stained with haematoxylin-eosin. Glomeruli with global sclerosis, more than minimal segmental sclerosis, crescents or fibrinoid necrosis were excluded. Based on these criteria, eligible glomeruli were laser microdissected (PALM MicroBeam, Zeiss) and pressure catapulted into a tube cap (AdhesiveCap 500 clear, Zeiss). For each patient, we aimed to microdissect about 100 glomerular cross sections.
Microdissected FFPE glomeruli were suspended in 10 µL lysis solution and stored at −20 °C until peptide extraction. Protein extraction and trypsinization of microdissected glomeruli were performed as previously described [17].
Liquid chromatography and tandem mass spectrometry
The samples were analyzed on a Q-Exactive HF (Thermo Scientific) connected to a Dionex Ultimate NCR-3500RS LC system. The MS instrument was equipped with an EASY-spray ion source (Thermo Scientific) and MS spectra were acquired as described in detail in the supplemental information documenting the detailed methods.
Label free quantification
The raw data was analyzed with the Progenesis LC-MS software (version 4.0, Nonlinear Dynamics, UK) using default settings. Features were exported from Progenesis and imported into Proteome Discoverer (version 1.4, Thermo Scientific) for protein identification using the SwissProt human database (downloaded from UniProt August 2015, 20,197 sequences).
Histology and immunohistochemistry
The biopsies were reclassified in a blinded manner by an experienced nephropathologist (SL) using the Oxford classification scoring system, and M, E, S and T scores were assigned [3]. Immunohistochemistry was performed on 3 µm thick sections from FFPE tissue after antigen retrieval with proteinase digestion. The following antibodies were used: polyclonal rabbit anti-human C3c (Dako, Glostrup, Denmark; A0062), polyclonal rabbit anti-human C1q (Dako, Glostrup, Denmark; A0136) and monoclonal mouse anti-human C5b-9, clone aE11 (Dako, Glostrup, Denmark; M0777). The aE11 antibody detects a neoepitope exposed in C9 after C9 is incorporated into the C5b-9 complex and is not present in native C9, thus specifically detecting activation of the whole complement cascade [18]. Nearly all biopsies had been stained for C3c and C1q at the time of diagnostic evaluation and these sections were used for evaluation. For C5b-9 staining, new sections were used. Glomerular positivity for complement factors was evaluated by semiquantitative scoring ranging from 0 to 3+.
Statistics and bioinformatics
Clinical and morphological variables are described either as mean ± standard deviation or as percentages. Tests of statistical significance were performed with t-tests or Chi-square statistics. Normalized protein abundances were compared between groups with t-tests and considered differentially abundant if the protein was identified by at least two unique peptides and the p-value was <0.05. Fold change is given for relative quantification of protein abundance between groups. Mean ± standard deviation is given where appropriate. Linear regression was performed to explore the relationship between complement proteins and the clinical variables GFR, proteinuria and blood pressure.
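For illustration, the quantification rule above (at least two unique peptides, two-sample t-test, p < 0.05, fold change between groups) can be expressed as a short pandas/SciPy sketch. The data layout, variable names and use of these libraries are assumptions for demonstration; the study itself used Progenesis and Proteome Discoverer.

```python
import numpy as np
import pandas as pd
from scipy import stats

def differential_abundance(abund: pd.DataFrame, unique_peptides: pd.Series,
                           is_progressive: np.ndarray,
                           min_peptides: int = 2, alpha: float = 0.05) -> pd.DataFrame:
    """abund: proteins x samples matrix of normalized abundances;
    unique_peptides: unique-peptide count per protein;
    is_progressive: boolean mask over the sample columns."""
    quantifiable = abund.loc[unique_peptides >= min_peptides]
    prog = quantifiable.loc[:, is_progressive].to_numpy()
    nonprog = quantifiable.loc[:, ~is_progressive].to_numpy()
    _, p = stats.ttest_ind(prog, nonprog, axis=1)       # per-protein t-test
    fold_change = prog.mean(axis=1) / nonprog.mean(axis=1)
    out = pd.DataFrame({"p_value": p, "fold_change": fold_change},
                       index=quantifiable.index)
    return out[out["p_value"] < alpha].sort_values("p_value")
```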
A complement score was calculated for each IgAN patient by multiplying the scores of all included complement proteins (the score for each protein was calculated as the protein abundance for the patient divided by the mean protein abundance for all patients with IgAN; for proteins with fold change <1 in the comparison of IgAN with progression versus IgAN without progression, the score was exponentiated by −1). The complement score was logarithmically transformed. Receiver operating characteristic (ROC) curves were used to evaluate the performance of the complement protein score, and the area under the curve (AUC) was calculated. Two complement scores were calculated, one including all significant complement proteins and one including only proteins of the MAC complex (complement factors C5, C6, C7, C8 and C9). ROC curves were also created for systolic blood pressure, complement component C7 and 1/eGFR for comparison.
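The score construction described above translates directly into code. The sketch below assumes an abundance matrix restricted to the included complement proteins and the corresponding fold changes; taking the sum of logs is mathematically identical to the log of the product but numerically safer. The names and the use of scikit-learn for the AUC are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def complement_score(abund: np.ndarray, fold_change: np.ndarray) -> np.ndarray:
    """abund: proteins x patients abundances (IgAN patients only);
    fold_change: per-protein progressive/non-progressive fold change."""
    ratio = abund / abund.mean(axis=1, keepdims=True)               # patient / cohort mean
    ratio = np.where(fold_change[:, None] < 1, 1.0 / ratio, ratio)  # exponentiate by -1
    return np.log(ratio).sum(axis=0)                                # log-transformed product

# Hypothetical usage:
# scores = complement_score(complement_abund, fc)
# auc = roc_auc_score(is_progressive, scores)  # the reported AUC was about 0.91
```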
Results
Three groups were included; kidney biopsy tissue could be retrieved and enough glomeruli microdissected for 16 patients with non-progressive IgAN, 9 patients with progressive IgAN and 15 controls with normal biopsies. The clinical and morphological characteristics of the three groups are summarized in Table 1. There was no statistically significant difference in clinical characteristics between IgAN patients with progressive versus non-progressive disease. Oxford classification showed no difference in M, E or S score between patients with versus without progression; T score was, however, more often positive in patients with progressive disease (44 vs 0%) (p = 0.004).
Glomerular proteome analysis
A total of 3274 proteins were identified, of which 2018 were identified with two or more unique peptides and could thus be used in quantitative analyses. Of these, 231 proteins showed significantly different abundance between progressive and non-progressive IgAN. The 25 most significantly changed proteins in progressive versus non-progressive IgAN are listed in Table 2. Notably, 10 (40%) of these were complement proteins, and we therefore chose to focus further studies on complement proteins. Of all quantified proteins, 28 were complement proteins.
Complement proteins in non-progressive IgAN versus controls
In the comparison between patients with non-progressive IgAN and controls, 19 proteins were significantly altered (Table 3), a similar pattern to that observed in the comparison between progressive and non-progressive IgAN.
Immunohistochemistry
Representative images illustrating immunohistochemistry staining for the three groups are shown in Fig. 2.
Prediction of progressive versus non-progressive IgAN
As shown above, glomerular protein abundances of complement proteins were higher in patients with progressive IgAN than in patients with non-progressive IgAN. We further analysed whether glomerular abundance of these proteins could classify IgAN patients as progressive versus non-progressive. Unsupervised hierarchical clustering including only the significantly abundant complement-related proteins of Table 3 separated most patients with progressive and non-progressive disease (Fig. 3).
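A minimal SciPy sketch of such unsupervised hierarchical clustering is given below; the Ward linkage and per-protein standardization are assumptions, since the paper does not specify the linkage method or preprocessing.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

def cluster_patients(complement_abund: np.ndarray) -> np.ndarray:
    """complement_abund: patients x proteins matrix of the significant
    complement-related proteins; returns a two-cluster label per patient."""
    z = zscore(complement_abund, axis=0)   # standardize each protein across patients
    tree = linkage(z, method="ward")       # agglomerative clustering
    return fcluster(tree, t=2, criterion="maxclust")
```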
A complement score was calculated for each patient based on the abundance of complement-related proteins (for details, see "Methods" section). Patients with progressive IgAN had significantly higher scores than patients with non-progressive IgAN, and controls had lower scores than non-progressive IgAN. We further tested whether these scores could be used to classify patients with progressive versus non-progressive IgAN. In ROC analyses, AUC values were 0.91 (p = 0.001) for a complement score using all significant proteins, 0.91 (p = 0.001) for the complement score including complement components C5, C6, C7, C8 and C9, and 0.90 (p = 0.001) when only including the protein abundance of complement factor C7, the rate-limiting factor of the terminal pathway (Fig. 4). In comparison, the AUC value for the clinical variable systolic blood pressure was 0.580 (p = 0.5) and for the variable 1/eGFR it was 0.74 (p = 0.054). Other clinical or morphological variables could not be used to classify progressive from non-progressive IgAN either.
Fig. 2: Representative immunohistochemistry staining images for complement factors C1q, C3 and membrane attack complex (C5b-9) for a representative control, a representative patient with non-progressive IgAN and a representative patient with progressive IgAN.
Associations between complement proteins and clinical variables
For patients with IgAN, linear associations between the complement proteins and clinical variables were investigated. These analyses showed that C1r, C1s, C5, C6, C8, C9 and clusterin had higher abundance with lower eGFR (Table 4). There were no significant associations with urinary protein, but abundances of C1r, C1s, C4, C5, C8, C9, factor H, factor H-related protein 3 and C4b-binding protein alpha increased with increasing systolic blood pressure.
Associations between complement proteins and MEST score
Distribution of MEST scores is shown in Table 1. Complement levels were compared between IgAN patients with positive and negative scores for the 4 different MEST characteristics (Table 5). Positive M score was associated with higher abundance of complement proteins C5, C6, C7, C8 and clusterin, and lower abundance of complement receptor type 1. Positive E score was associated with higher abundance of C5, C7, C9 and complement factor H-related protein 5. Positive S score was associated with higher abundance of C1r and C1s. Positive T score was associated with higher abundance of C1q, C1r, C1s, C4, C5, C6, C7, C8, clusterin, complement factor H and C4b-binding protein, and lower abundance of complement receptor type 1.
Discussion
In the current study we have shown that patients with progressive IgAN had higher glomerular abundance of complement proteins as compared to patients with non-progressive IgAN. Interestingly, both ordinary complement components and most of the complement inhibitors showed higher abundance, indicating compensatory mechanisms taking place during activation. IgAN patients selected for the present study had medium risk of progression, and prognosis could not be predicted based on accepted risk factors such as eGFR, proteinuria, blood pressure or the Oxford classification. Glomerular abundance of all significant complement proteins, in particular those of the terminal pathway, did however show predictive performance with an area under the ROC curve of about 0.9. Similar findings for complement proteins were made when comparing non-progressive IgAN patients to controls, indicating a dose-response relationship.
In the present study we were able to quantify 28 complement proteins. We found increased abundance of proteins related to the classical and terminal pathways. Members of the terminal pathway (complement factors C5-C9), which constitute the MAC, showed the strongest increase in progressive versus non-progressive IgAN as well as in non-progressive IgAN versus controls. Previous studies have shown increased glomerular MAC deposition [19] and increased urinary MAC levels [20] in IgA nephropathy. The prognostic importance has however not been shown before. Local expression of terminal pathway components in renal cells has not been described [21], indicating that our findings are suggestive of complement activation and not just local synthesis. In Fig. 2 we show mesangial localization of the membrane attack complex with an antibody against a neoepitope in C9 that only stains positive for the assembled complex, indicating activation of the complex and not just deposition of the native component.
In our study, components of the classical pathway, C1q, C1r and C1s, were significantly increased in patients with progressive IgAN as compared with non-progressive IgAN, suggesting involvement of the classical pathway in the progression of the disease. We could not detect MASPs (mannose-binding lectin associated serine proteases), MBL (mannose-binding lectin) or ficolins, and we could thus not find evidence for activation of the lectin pathway. We therefore suggest that the increased abundance of complement component C4 in progressive IgAN may argue for a contribution of the classical pathway in IgAN patients with progressive disease.
Furthermore, complement C3 mesangial deposition was also significantly increased in progressive IgAN. The alternative pathway has been suggested to be activated in IgAN, as complement C3 mesangial deposition is present in >90% of patients and immunoglobulin A has been shown to activate the alternative pathway in vitro [7,22]. As C3 is present both upon activation of the classical and the lectin pathways through the amplification loop, it is not possible to know with certainty whether or not the alternative pathway was activated primarily in IgAN. Interestingly, analyses of the subcomponents of C3 showed a stronger increase of C3dg than the other peptides in progressive IgAN. C3dg is an inactive product of degraded C3b, and our findings thus indicate increased opsonization by C3b in patients with progressive IgAN. Similar findings of accumulation of C3dg were recently also shown for C3 glomerulopathy [23]. Other regulators of the complement system, such as factor H, which is one of the most important regulators of C3 and the alternative pathway, were also mostly significantly increased in progressive IgAN. These findings suggest that compensatory mechanisms are active in IgAN in order to control the increased complement activation. One inhibitor of the complement system, complement receptor 1 (CR1), which acts by inactivating C3b and is localized on the podocytes [24], was however present in lower abundance in progressive IgAN. Previous studies have shown reduced CR1 in injured podocytes from patients with different types of glomerulopathies [25] and one study also showed reduced CR1 expression in lupus nephritis [26]. The decrease in CR1 may contribute to a disturbed balance with increased activation and reduced inhibition, enhancing the detrimental effects of complement activation in IgAN. The exact mechanisms of complement activation and regulation in IgAN cannot, however, be mapped by the present study, but the clear evidence of its prognostic role points to a need for further studies.
Table 4: Linear associations between complement-related proteins significantly altered in Table 3 (progressive versus non-progressive IgAN) and clinical variables at time of biopsy. Only IgAN patients; the direction of association is shown, where + means higher intensity with a higher value of the clinical marker and − means lower intensity with a higher value of the clinical marker.
In the selection of IgAN patients for the present study, we aimed to include patients with medium risk of progression and a progressive versus non-progressive disease course. The rationale for the selection criteria based on eGFR and proteinuria was to select patients in whom prediction of prognosis was difficult based on traditional risk factors, and indeed, prognosis could not be predicted based on classical risk factors. Initially, we planned to include only patients with proteinuria of 1-3.5 g/24 h, but due to a limited number of patients with these characteristics, we chose to add 3 patients with proteinuria less than 1 g/24 h who progressed to ESRD and 1 patient with proteinuria above 3.5 g/24 h who did not progress to ESRD. In our opinion this approach yielded two groups with progressive versus non-progressive disease for whom prediction of prognosis was very difficult, in strong line with the rationale described above. A complement score, based either on the abundance of all significant complement proteins or, in particular, on the components of the membrane attack complex, could however predict prognosis with an area under the ROC curve of about 0.9. Unsupervised hierarchical clustering also showed the same, confirming and strengthening these findings. Two important reservations should however be made.
First, the predictive capacity could not be reproduced with immunohistochemistry staining for C5b-9, and staining for C3 was only moderately increased in patients with progressive IgAN; the direct clinical significance should therefore be interpreted with caution. Second, we investigated the predictive ability of the complement scores in the same cohort in which we demonstrated their importance and not in a separate cohort. Our results therefore need confirmation in a new cohort. A previous study also showed prognostic importance of C4d staining; this staining was not tested in our study [27].
The most important strengths of the present study are the relevant study population with IgAN in whom the prognosis was difficult to predict, microdissection and analysis of the relevant glomerular tissue, the large number of quantified proteins and the dose-response relationships that were seen for progressive IgAN versus non-progressive IgAN versus controls.
Conclusions
In conclusion, the present study has shown increased abundance of complement factors and inhibitors in progressive IgAN as compared to non-progressive IgAN. Increased abundance of proteins of the terminal complement pathway argue for complement-mediated damage in progressive IgAN. One inhibitor of the complement system, CR1, had lower abundance in progressive IgAN and may represent a mechanism that reduces complement inhibitory control in IgAN. | 2017-08-15T05:32:26.896Z | 2017-08-14T00:00:00.000 | {
"year": 2017,
"sha1": "db833192419b61cf11e1ae5ddf3e902b7f35c57f",
"oa_license": "CCBY",
"oa_url": "https://clinicalproteomicsjournal.biomedcentral.com/track/pdf/10.1186/s12014-017-9165-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db833192419b61cf11e1ae5ddf3e902b7f35c57f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Major cardiovascular events and associated factors among routine hemodialysis patients with end-stage renal disease at tertiary care hospital in Somalia
Introduction Cardiovascular complications are the most significant cause of death in patients with end-stage renal disease (ESRD) undergoing routine hemodialysis (HD). The main objective of this study is to determine the significant cardiac events and risk factors in patients undergoing routine hemodialysis in Somalia. Methods We carried out a cross-sectional retrospective study in a single dialysis center in Somalia. Two hundred out of 224 patients were included. All of them had ESRD and were on hemodialysis during the study period between May and October 2021. The records of all patients were reviewed, and the following parameters were analyzed: socio-demographic factors, risk factors for cardiovascular disease, and the presence of cardiovascular diseases. Results The mean age was 54 ± 17.5 years (range 18–88 years), and 106 (53%) patients were males. The prevalence of cardiovascular disease among hemodialysis patients was 29.5%. Moreover, the distribution of cardiovascular diseases varied; heart failure was the most common, at about 27.1%, followed by coronary artery disease (17%), pericarditis and pericardial effusion (13.6%), dysrhythmia (10.2%), cerebrovascular accident (8.5%), and peripheral vascular disease (3.4%). About 176 (88%) participants had at least one modifiable cardiovascular risk factor. The most common modifiable cardiovascular risk factor was hypertension (n = 45, 25.1%), followed by anemia (n = 28, 15.6%) and diabetes (n = 26, 14.5%). Younger (18–30) participants were six times less likely to have cardiovascular events on hemodialysis than older participants (0.4; 0.11–1.12). Conclusion A low prevalence rate of cardiovascular complications was confirmed in ESRD patients receiving hemodialysis in the main HD center in Somalia. Diabetes, anemia, and hypertension were the most significant risk factors for CVD in HD patients with ESRD in Somalia.
Keywords: hemodialysis, end-stage renal disease, cardiovascular disease, heart failure, diabetes, hypertension
Introduction
CKD is becoming more common in Sub-Saharan Africa (SSA), primarily affecting young individuals in their prime years of economic productivity. Additionally, many patients receive nephrologist referrals too late, are vulnerable to acute complications from dialysis, and struggle with infrastructure and financial issues that make it challenging to provide adequate dialysis to those with end-stage renal disease (1,2). Although the prevalence of chronic kidney disease (CKD) in Somalia has not previously been studied, Muiru et al. reported the prevalence of CKD in sub-Saharan Africa to be 8% in 2020 (3).
Renal replacement therapy (RRT) as a whole, and hemodialysis (HD) in particular, remains a lifesaving intervention for many patients whose kidneys no longer function adequately (4). Nevertheless, patients undergoing routine hemodialysis (HD) are at increased risk of cardiovascular morbidity and mortality (5).
Cardiovascular diseases are thought to account for more than 50% of deaths among HD patients (6). Cardiovascular mortality among HD patients is also believed to be 10-20 times greater than in the general population (7). Sudden cardiac death is the leading cardiovascular cause of death among HD patients, accounting for 25% of all cardiovascular deaths (8).
Some researchers have suggested that the HD process can activate the complement system and induce prothrombotic and proinflammatory responses in patients, thus predisposing them to the development of cardiovascular events (9). Others suggest that cardiovascular lesions may even appear before the initiation of the HD process in chronic renal failure patients, as chronic renal failure or end-stage renal disease (ESRD) is an independent risk factor for cardiovascular diseases (10). In addition, HD can lead to anemia and cause alterations in calcium and phosphate metabolism, which could be significant risk factors for developing cardiovascular events (11).
To the best of our knowledge, the prevalence and risk factors of major cardiac events among patients undergoing HD in Somalia remain unknown. The main objective of this study is to determine the significant cardiac events and risk factors among patients undergoing routine hemodialysis in Somalia.
Methods
This retrospective study included all patients who have received the diagnostic code of ESRD in accordance with the International Classification of Diseases (ICD-10) system and underwent routine hemodialysis between May 2021 and October 2021 using the electronic hospital information system (HIS). Two hundred out of 224 patients who had ESRD and underwent routine hemodialysis (HD) were included in our study. Patients with renal transplantation, peritoneal dialysis, and those with incomplete data were excluded from the study.
The following parameters were analyzed: socio-demographic and clinical parameters, including age and gender; risk factors for cardiovascular disease (family history of coronary artery disease, diabetes mellitus, arterial hypertension, and dyslipidemia); anemia; duration on HD; and number of hemodialysis sessions per week. The presence of cardiovascular diseases such as heart failure, coronary artery disease, dysrhythmia, pericarditis, pericardial effusion, cerebrovascular disease, and peripheral vascular disease was ascertained from the hospital information system (FONET) record using electrocardiogram, two-dimensional and Doppler echocardiography, Doppler ultrasound, brain CT scan, and brain MRI records.
Echocardiography was performed using a Toshiba Aplio™ ultrasound system (TUS-A500, Shimoishigami, Japan), licensed in Turkey, in accordance with the American Society of Echocardiography guidelines.
The presence of any of the following was considered a cardiovascular disease (CVD); a minimal encoding of the heart failure thresholds is sketched after this list: • Coronary heart disease: myocardial infarction, stable angina or unstable angina (normal values for cardiac dimensions and EKG diagnostic criteria were obtained from standard references), or coronary artery bypass graft or percutaneous coronary intervention (12). • Heart failure: defined as (a) an aberrant left ventricular filling pattern and/or a mitral E/A ratio on echocardiography outside the range of 0.7-3.1 if under 64 years old or 0.5-1.7 if over 64 years old; (b) systolic dysfunction, defined as an ejection fraction of <50%. • Cerebrovascular disease: includes atherothrombotic cerebral infarction and transient ischemic attack with brain CT scan or MRI confirmation.
• Peripheral vascular disease: a chronic progressive atherosclerotic disease leading to partial or total peripheral vascular occlusion. PAD typically affects the abdominal aorta, iliac arteries, lower limbs, and occasionally the upper extremities. • Dysrhythmias: evidence of ventricular tachycardia, fibrillation, or any other type of dysrhythmia by electrocardiographic criteria. • Pericarditis and pericardial effusion: diffuse ST-segment elevation on ECG, a stiff or thickened pericardium constricting the heart's normal movement, or free fluid around the heart on echocardiography.
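As flagged before the list above, the heart failure thresholds can be encoded in a small decision function. This is an illustrative sketch only; treating age 64 itself with the older reference range is an assumption, since the criteria state 'under' and 'over' 64.

```python
def abnormal_ea_ratio(ea_ratio: float, age: float) -> bool:
    """Mitral E/A ratio outside the age-specific echocardiographic range."""
    lo, hi = (0.7, 3.1) if age < 64 else (0.5, 1.7)
    return not (lo <= ea_ratio <= hi)

def heart_failure(ea_ratio: float, age: float, ejection_fraction: float,
                  abnormal_filling: bool = False) -> bool:
    """Study definition: abnormal LV filling and/or out-of-range E/A ratio
    (diastolic), or ejection fraction < 50% (systolic dysfunction)."""
    diastolic = abnormal_filling or abnormal_ea_ratio(ea_ratio, age)
    systolic = ejection_fraction < 50.0
    return diastolic or systolic

print(heart_failure(ea_ratio=0.4, age=70, ejection_fraction=55))  # True (diastolic)
```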
The study was carried out after receiving ethical approval and being granted permission by the research and ethical committee of the Mogadishu Somali Turkish Training and Research Hospital (Ref: MSTH/6384). This study was carried out in accordance with the Helsinki Declaration's contents. The information obtained from the medical records was kept strictly confidential and utilized only for research purposes. Furthermore, study participants are not recognized by name to ensure confidentiality.
Microsoft Excel and SPSS software version 23 were used to create the database. Continuous variables are presented as mean ± standard deviation, and categorical variables as the observed number of patients (percentage). Fisher's exact test was used for categorical variables to compare patient characteristics between groups (cardiac and non-cardiac events). For correlations, a correlation coefficient test was applied; binary logistic regression was also used, and a p-value of <0.05 was considered statistically significant.
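For illustration, the two main tests named above are available in standard Python libraries; the 2x2 table below contains hypothetical counts, not the study data.

```python
import numpy as np
from scipy.stats import fisher_exact

# 2x2 contingency table (hypothetical counts for illustration):
# rows = risk factor present / absent; columns = CVD event yes / no
table = np.array([[20, 25],
                  [39, 116]])
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")

# Binary logistic regression could be fitted analogously, e.g. with statsmodels:
# import statsmodels.api as sm
# fit = sm.Logit(y, sm.add_constant(X)).fit()
# np.exp(fit.params), np.exp(fit.conf_int())  # odds ratios with 95% CIs
```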
Results
In this retrospective observational study, 200 out of 224 routine hemodialysis patients at Mogadishu Somali Turkish Training and Research Hospital from May 1, 2021, to October 31, 2021, fulfilled the inclusion criteria and were enrolled in the study. Table 1 shows the socio-demographic characteristics of the 200 HD patients with ESRD. The mean age was 54 ± 17.5 years (range 18-88 years), and 106 (53%) patients were males.
Based on the frequency of hemodialysis, most of the patients (78%) underwent hemodialysis twice a week, while 29 (14.5%) patients underwent hemodialysis once a week and 15 (7.5%) patients three times per week.
This study revealed that about 176 (88%) of the study participants (hemodialysis patients with end-stage renal disease) had at least one modifiable cardiovascular risk factor. As shown in Figure 1, the most common modifiable cardiovascular risk factors among hemodialysis patients with end-stage renal disease were hypertension in 45 patients (25.1%), followed by anemia in 28 patients (15.6%), and diabetes mellitus in 26 patients (14.5%).
The prevalence of cardiovascular disease among hemodialysis patients with ESRD was 29.5%, as shown in Figure 2. About 27.1% of the hemodialysis patients with ESRD had heart failure, 17% had coronary artery disease, 13.6% had pericarditis and pericardial effusion, 10.2% had dysrhythmia, 8.5% had cerebrovascular accident, and 3.4% had peripheral vascular disease (Figure 3).
The data also showed that, among the 59 respondents diagnosed with at least one cardiovascular disease, about half (30 out of 59; 50.8%) were males, while females comprised the remaining 29 out of 59 respondents (49.2%) (p = 0.757).
Of the 59 respondents who were diagnosed with at least one cardiovascular disease, 23 (40%) had been on hemodialysis for 1 year or less and 22 (37.3%) for 2-5 years, while the remaining 14 respondents (23.7%) had been on hemodialysis for more than 5 years (p = 0.222).
Regarding the 59 respondents in the study population who were diagnosed with at least one cardiovascular disease, most (84.7%) had undergone hemodialysis twice per week, while five (8.5%) had undergone hemodialysis once weekly. Only four (6.8%) participants had undergone hemodialysis three times or more per week (p = 0.259) (Table 2). Table 2 shows that younger (18-30) participants were six times less likely to have cardiovascular events on hemodialysis than older participants (0.4; 0.11-1.12). Cardiovascular events were less frequent in participants with previous risk factors than in those without.
Discussion
Chronic kidney disease (CKD) implies various degrees of declined renal function. The most severe and final stage of CKD is end-stage renal disease (ESRD), which occurs when the kidneys can no longer properly perform their essential functions. Regular hemodialysis or a kidney transplant is the only option available to individuals with ESRD to survive (13,14). Cardiovascular complications are the most significant cause of death in patients with end-stage renal disease (ESRD) on hemodialysis treatment (15). As early as 1836, Richard Bright first suggested that cardiovascular disease (CVD) can originate from renal disease (16).
The mechanism underlying the increased risk of cardiovascular events in patients with ESRD has not been well defined. In fact, a broad spectrum of risk factors influences cardiac function and structure in hemodialysis patients with ESRD.
Lindner et al. (17) discovered the significant burden of cardiovascular disease (CVD) in chronic renal disease (CRD) more than 40 years ago.
In general medical practice, patients in stages 3-4 CKD, who have reduced renal function but are not in ESRD, have a prevalence of ischemic heart disease of 25%, more than double the prevalence in patients without CKD, according to the NEOERICA (New Opportunities for Early Renal Intervention by Computerised Assessment) study (18). Cardiovascular disease is a common cause of death in hemodialysis patients, with a ratio 10-20 times greater than in people with normal renal function (19).
Regarding the duration of HD, most patients (38.5%) had dialysis duration between 1 and 5 years, similar to study findings from Sudan (20).
Our findings showed that most of the participants (84.7%) had undergone hemodialysis twice per week, while five (8.5%) had undergone hemodialysis once per week. Only four (6.8%) had undergone hemodialysis three times or more weekly. In contrast to our report, a study from Ethiopia found that only 10.8% had undergone hemodialysis twice per week, while 89.2% had undergone hemodialysis three times per week (21).
Our study found no association between cardiovascular disease and either the duration of HD or the number of HD sessions per week.
A hemodialysis study in the United States reported that 40% of dialysis patients had cardiovascular disease at admission, and coronary artery disease was the cause of 63% of hospital admissions for cardiovascular causes (22).
In the present study, the prevalence of cardiovascular disease among hemodialysis patients with ESRD was 29.5%, lower than that reported in previous studies (22,23). A study from Cameroon found that 84% of hemodialysis patients with ESRD had cardiovascular illness, a considerably higher prevalence than our figure (2). The variation in the prevalence of cardiovascular events in our study compared with other reports could be due to several reasons: the diagnostic criteria for cardiovascular events in ESRD patients were not uniform, the populations were genetically different, or the inclusion criteria in the various studies may have differed.
The increased prevalence of heart failure could be related to the higher prevalence of hypertension in this study sample, which is the leading etiological factor of the underlying renal disease. Another risk factor for heart failure is the high incidence of anemia caused by the low use of erythropoiesis-stimulating drugs. High cardiac output, a large stroke volume, increased heart rate, and deteriorating left ventricular dilatation are all associated with anemia (24).
A study from Spain on cardiovascular disease among hemodialysis patients reported that 16.7% had coronary disease, 13.9% had different degrees of heart failure, and 11.6% had arrhythmia (25). Rostand and colleagues reported that 73% of hemodialysis patients have coronary artery disease, representing the highest cardiovascular disease prevalence among ESRD patients (26).
In our study, 88% of dialysis patients had at least one pre-existing comorbidity before beginning dialysis therapy, significantly higher than the study published in Malaysia, which found that just 31.6% had such conditions (23). The most prevalent comorbidities in the current study were hypertension (21.5%), anemia (15.6%), and diabetes (14.4%). In bivariate or multivariate analyses, pre-existing comorbidities were also not statistically related to cardiovascular events. According to Lim et al. (23), hypertension (96.5% of all cases), diabetes (66.2%), and hyperlipidemia (58.1%) were the most prevalent comorbidities identified throughout their analysis.
Cardiovascular events were less in participants with previous risk factors than in those without. This may be due to progress in both prevention and treatment of CVD, including precipitous declines in cigarette smoking, improvements in hypertension and diabetic treatment and control, and widespread use of statins to lower circulating cholesterol levels.
Regarding the study population, most of the participants (84.7%) had undergone hemodialysis twice per week, while five (8.5%) participants had undergone hemodialysis once per week. Only four (6.8%) participants had undergone hemodialysis three times or more per week (p = 0.259). Inadequate or missed hemodialysis sessions also significantly affected cardiovascular disease among hemodialysis patients with ESRD. The overcrowding of our center, which receives many ESRD patients needing regular renal replacement therapy, lack of public awareness of the disease and of hemodialysis itself, and discrimination and social pressure on the patients were the leading factors behind inadequate or missed hemodialysis sessions. In addition, insufficient skills of dialysis providers, the high cost of each dialysis session, which most patients cannot afford (low socioeconomic status), and lack of access to the center because of the rural and distant distribution of cases also played a role.
The limitations of our study include a limited sample size, the retrospective design, and the single-center setting, which may not be representative of the country. Risk factors such as smoking, alcoholism, sedentary lifestyle, and obesity were not evaluated because, given the retrospective design, they could not be obtained from the system. Several novel risk factors have yet to be explored due to technical drawbacks and the high cost of laboratory-based tests.
Despite the growing population of patients on maintenance hemodialysis in Somalia, there has been a relative lack of large clinical databases describing the specific cardiac diseases among HD patients with ESRD.
Although this study has several limitations, it is the first study to assess the prevalence, risk factors, and extent of cardiovascular disease, a significant condition, in adult chronic hemodialysis patients in Somalia. This issue has otherwise been well addressed in adult ESRD patients elsewhere.
Conclusion
Cardiovascular events are less prevalent among hemodialysis patients with ESRD in Somalia than in other countries. The cardiovascular events confirmed in our HD patients were significantly more frequent in older patients and in those with diabetes, anemia, and hypertension.
Data availability statement
We declare that we had full access to all of the data in this study and we take complete responsibility for the integrity of the data. All original data are available at the Mogadishu Somali Turkish Training and Research Hospital in Mogadishu, Somalia. Data used to support the findings of this study are available from the corresponding author upon request.
Ethics statement
The study was carried out after receiving ethical approval and being granted permission by the Research and Ethical Committee of the Mogadishu Somali Turkish Training and Research Hospital (Ref: MSTH/6384). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
"year": 2023,
"sha1": "1a83df969ad4816b2392705a2a12a895a2ad3b94",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "1a83df969ad4816b2392705a2a12a895a2ad3b94",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
Methylcobalamin Facilitates Collateral Sprouting of Donor Axons and Innervation of Recipient Muscle in End-to-Side Neurorrhaphy in Rats
Using ulnar nerve as donor and musculocutaneous nerve as recipient, we found earlier that end-to-side neurorrhaphy resulted in weak functional reinnervation after lengthy survival. End-to-side neurorrhaphy, however, is at times the sole choice of nerve repair and has the advantage of conserving donor nerve function. Here, we investigated whether the myelination-enhancing agent methylcobalamin and the motoneuron trophic factor pleiotrophin enhance recovery after end-to-side neurorrhaphy. Methylcobalamin significantly increased the expression of growth-associated protein 43, S100 protein and βIII tubulin in the musculocutaneous nerve 1 month after neurorrhaphy, suggesting the ingrowth of ulnar axonal sprouts in a reactive Schwann cell environment. Upper limb functional tests, compound muscle action potential measurements, motor end plate counts, and axon and myelin analyses showed that methylcobalamin treatment, alone or with pleiotrophin, improved recovery significantly at 3 and 6 months post-surgery. There were fewer axons, closer in number to that of the intact recipient nerve, in the distal repaired nerve of methylcobalamin-treated animals than in that of the vehicle control, suggesting that methylcobalamin facilitates axonal maturation and eliminates supernumerary sprouts. In conclusion, our results showed that methylcobalamin does indeed enhance the recovery of peripheral nerve repaired in an end-to-side configuration.
Introduction
Peripheral nerve injury involving a segmental loss of a particular nerve is a potential indication for nerve grafting and neurorrhaphy. Among the strategies devised over the years, end-to-side neurorrhaphy (ESN), which has the advantage of saving donor functions, is a common strategy following the non-maleficence principle for treating nerve lesions. The indication for ESN is injuries with long nerve gaps, and this has been widely explored [1][2][3][4][5][6][7][8][9]. ESN however induces slow collateral sprouting and requires long postoperative survival for recovery. In animals, this remains unsatisfactory as compared to end-to-end neurorrhaphy (EEN) in a six-month study [10] and thus remains debatable [10,11]. Hence, strategies to enhance the outcome of ESN are eagerly awaited.
Vitamin B12 (cobalamin) is important to hematopoietic and nervous tissues [12][13][14][15]. The methylated analogue methylcobalamin (MeB12) provides a basis for transmethylation that promotes conversion of homocysteine to methionine and has been shown to have a stronger affinity for nervous tissues than other analogues including cyanocobalamin [12]. MeB12 is prescribed to ameliorate various neuropathies [16,17] and ease the progression of amyotrophic lateral sclerosis [18]. Mechanistically, it has been shown to act on downstream mechanisms of nerve growth factor and brain-derived neurotrophic factor to promote neurite outgrowth as well as the regeneration and conduction of nerves [19]. It has also been shown to have a special affinity for nerve tissues to promote myelination and transport of axonal cytoskeleton [20]. In addition to enhancing axonal regeneration, MeB12 also promotes Schwann cell proliferation and migration [21], which is essential in providing a permissive environment for axonal growth [16]. These make MeB12 a good candidate for facilitating the recovery of peripheral nerve repaired with ESN.
Pleiotrophin (PTN) is a heparin-binding growth factor expressed in developing nervous system and is reported to facilitate peripheral nerve regeneration [22,23]. In rats, PTN had been shown to significantly increase the number of axons growing into the recipient nerve in the EEN of musculocutaneous nerve (McN) to ulnar nerve (UN) [24], therefore we hypothesize that the combination of MeB12 and PTN will enhance the sprouting of intact UN following coaptation of the severed McN.
Here we used the ESN paradigm of coapting the severed McN to the UN through an epineurial window [10] to test the effects of MeB12 alone and in combination with PTN. We examined the expression of several axonal growth-related markers (the Schwann cell marker S100 [24,25], the axonal growth marker growth-associated protein 43 (Gap43), and the neuron-specific cytoskeleton marker βIII tubulin) in the nerve proximal and distal to the neurorrhaphy site to assess the readiness of reinnervation of the target muscle one month after surgery. Electrophysiological, morphological and behavioral measures were used to assess the outcome of the drug combination up to 6 months following coaptation.
Materials and Methods
A total of 94 young adult male Wistar rats (Charles River strain, Animal Center of the Medical College of National Taiwan University) aged 6-8 weeks (200-300 g) were studied (Table 1).
Ethics statement
Animal experiments were approved by the Animal Care and Use Committee of Tzu-Chi University under the guidelines of the National Science Council of Taiwan. All efforts were made to minimize animal suffering during and after surgery.
End-to-side neurorrhaphy (ESN)
Rats (n = 81) were deeply anaesthetized (8 mg ketamine and 1 mg xylazine per 100 g body weight) and prepared for microsurgery. An incision was made along the left midclavicular line to expose the UN and McN in the left brachial plexus. The McN was transected at the margin of the pectoralis major muscle. An epineurial window matching the cross-sectional size of the McN was then made on the UN while minimizing damage to the axons. The cut end of the McN was attached to the window with 10-0 nylon sutures under a surgical microscope. This configuration of nerve repair could be achieved reliably, as shown in detail in our previous report [10]. The wound was then closed with 5-0 silk. Animals were monitored 1, 3 and 6 months following surgery (Table 1). Thirteen sham-operated normal animals were used as controls (Table 1).
Sticker removal grooming test
The agility of the affected forelimb was tested using a sticker removal grooming test [24] by placing a small sticker, 1 cm in diameter, on the ipsilateral ear [29]. Grooming was then initiated with a spray of water over the face, and the time required to remove the sticker was recorded. The test was repeated 5 times and an average score for each animal was derived. A trial was terminated if the rat failed to remove the sticker within 5 minutes. All behavioral studies were performed by the same blinded observer and analyzed in a double-blind arrangement.
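For illustration only, the scoring rule just described (the mean of five trials, with failed trials censored at the 5-minute cut-off) can be written in a few lines of Python; the helper function and the example times below are hypothetical, not the authors' analysis code.

```python
# Hypothetical scoring helper for the sticker removal grooming test.
# Times are in seconds; trials hitting the 5-minute cut-off are censored at 300 s.

def grooming_score(trial_times_s, cutoff_s=300.0):
    """Average sticker-removal time over the trials, censoring at the cut-off."""
    capped = [min(t, cutoff_s) for t in trial_times_s]
    return sum(capped) / len(capped)

# Example: a well-recovered rat removing the sticker in about 1 s per trial
print(grooming_score([0.9, 1.1, 1.0, 0.8, 1.2]))  # -> 1.0
```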
Compound muscle action potential recording
Compound muscle action potentials (CMAPs) of the repaired nerve and target muscle were recorded with a Viking Quest electromyogram (VQ EMG, Nicolet Biomedical, Madison, WI). The repaired nerve and biceps brachii muscle were exposed after anaesthetization. A hand-held nerve locator (Vari-stim®, Medtronic, Minneapolis, MN) was used to confirm the innervation of the target muscle before CMAP recording. The stimulating electrode was placed above the reconnection site and the recording electrode was placed in the biceps brachii muscle at the mid-humerus level. The recording electrode was kept 1 cm from the stimulating electrode with a piece of 5-0 nylon suture. The rat's tail was connected to the signal ground. The nerve was then stimulated with 0.2-msec square-pulse currents of increasing magnitude (from 5 to 13 mA) at a repetition rate of 0.2 Hz. In sham-operated rats, electrodes were placed at the corresponding locations on the McN and biceps brachii muscle. Data were then digitized and analyzed accordingly.
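As a sketch of how onset latency and peak-to-peak amplitude could be read out from such a digitized sweep, the fragment below detects the first threshold crossing after the stimulus and the response amplitude; the sampling rate, threshold and synthetic trace are assumptions for illustration, not the Viking Quest system's actual algorithm.

```python
# Illustrative CMAP metrics from a digitized sweep (all values hypothetical).
import numpy as np

def cmap_metrics(trace_mv, fs_hz, stim_idx, threshold_mv=0.1):
    """Return (onset latency in ms, peak-to-peak amplitude in mV) after the stimulus."""
    post = trace_mv[stim_idx:]
    above = np.flatnonzero(np.abs(post) > threshold_mv)
    latency_ms = above[0] / fs_hz * 1000.0 if above.size else np.nan
    return latency_ms, post.max() - post.min()

# Synthetic 20 kHz sweep with a damped response starting 2 ms post-stimulus
fs = 20_000
t = np.arange(0, 0.02, 1 / fs)
trace = np.where(t > 0.002,
                 np.sin(2 * np.pi * 250 * (t - 0.002)) * np.exp(-(t - 0.002) * 200),
                 0.0)
print(cmap_metrics(trace, fs, stim_idx=0))
```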
Z-stacked confocal images of the muscle were captured with a Leica TCS SP5 confocal microscope (Leica Microsystems) to analyze the MEP clusters and the innervation of the biceps brachii muscle. The numbers of innervated MEPs, i.e., those with PGP 9.5-immunostained axons, in each of the 15 reacted sections of each muscle were counted, and the mean for each muscle was then derived.
Plastic embedding of the repaired nerve and subsequent evaluation
To count the number and measure the sizes of axons, the fixed repaired nerve distal to the neurorrhaphy site was processed as follows. The nerves were immersed in 2% osmic acid solution in PB for 1 hour at room temperature before being dehydrated in graded alcohol and embedded in Epon 812 resin (EMS, Fort Washington, PA). The plastic-embedded nerve was sectioned transversely at 1 µm thickness with a glass knife, stained with toluidine blue and examined with a light microscope. The numbers and diameters of the axons and the thicknesses of the myelin sheaths were measured with Image-Pro Plus (Media Cybernetics, Silver Spring, MD).
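As a minimal sketch of this morphometry, assuming per-axon inner (axon) and outer (whole fibre) diameters as input, myelin thickness follows as half the difference of the two; the g-ratio shown is a common myelination index that is not reported in this paper and is included purely for illustration.

```python
# Hypothetical per-axon measurements: (inner axon diameter, outer fibre diameter) in um.
import statistics

fibres = [(4.1, 6.0), (3.2, 4.8), (5.0, 7.4), (2.5, 3.9)]

axon_d = [a for a, _ in fibres]
myelin = [(f - a) / 2 for a, f in fibres]   # myelin sheath thickness per side
g_ratio = [a / f for a, f in fibres]        # illustrative index, not used in the paper

print(f"n = {len(fibres)}, mean axon diameter = {statistics.mean(axon_d):.2f} um, "
      f"mean myelin thickness = {statistics.mean(myelin):.2f} um, "
      f"mean g-ratio = {statistics.mean(g_ratio):.2f}")
```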
Statistics
Prism 5.0 (GraphPad Software, San Diego, CA, USA) was used to analyze the results, and the data are presented as group means ± SD. The data were first analyzed for normality with the Kolmogorov-Smirnov test. Those that qualified (p > 0.1) were then analyzed with one-way ANOVA followed by the Bonferroni post hoc test, with a significance level of 0.05 as the criterion. The Mann-Whitney U test was used only when the normality or equal variance test failed. Sample sizes were determined with G*Power 3 [30], where α = 0.05 and power (1 − β) = 0.8, based on statistics applied in a previous study [24].
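A minimal sketch of this decision flow in Python/SciPy, with invented sample values, might look as follows; it is illustrative only and not the Prism workflow actually used.

```python
# Illustrative normality -> ANOVA/Bonferroni -> Mann-Whitney fallback pipeline.
from itertools import combinations
from scipy import stats

groups = {
    "PBS":       [620, 650, 700, 680, 640],   # hypothetical axon counts
    "MeB12":     [450, 470, 430, 460, 440],
    "MeB12+PTN": [480, 500, 520, 470, 490],
}

def is_normal(xs):
    """Kolmogorov-Smirnov check on standardized values; p > 0.1 qualifies."""
    return stats.kstest(stats.zscore(xs), "norm").pvalue > 0.1

if all(is_normal(v) for v in groups.values()):
    f, p = stats.f_oneway(*groups.values())
    print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")
    pairs = list(combinations(groups, 2))
    for a, b in pairs:  # Bonferroni: compare at alpha / number of comparisons
        _, p_pair = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} vs {b}: p = {p_pair:.4f}, "
              f"significant = {p_pair < 0.05 / len(pairs)}")
else:
    u, p = stats.mannwhitneyu(groups["PBS"], groups["MeB12"])
    print(f"Mann-Whitney U: U = {u}, p = {p:.4f}")
```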
Results
End-to-side coaptation of the distal stump of the severed McN to the intact UN resulted in innervation of the biceps brachii muscle by axon collaterals originating from the latter. The recovery of the PBS-treated rats following ESN remained less than ideal compared to the intact McN, even after 6 months of survival (see below). These findings are consistent with our previous report on ESN alone without treatment.
Effects on CMAP and upper limb functional recovery
We first explored the electrophysiological evidence of reinnervation of the biceps brachii muscle by examining the recipient nerve-evoked CMAP. In normal animals, stimulating the nerve to the biceps brachii muscle, namely the McN, at 5 mA generated a short-latency CMAP comparable in amplitude to that elicited by an 11-mA stimulus (Table 2). This suggests that in normal animals most motor units can be readily activated at a moderate stimulus strength of 5 mA. However, 1 month following ESN, CMAPs could only be marginally identified at higher stimulus strengths. In the PBS-treated rats, such evoked CMAPs had long and variable latencies, suggesting the development of weak and marginally synchronized motor units. In rats 3 and 6 months after surgery, CMAPs could be consistently evoked and had slightly increased amplitudes but remained relatively long in duration (Figure 1, Table 2). This suggests that the newly established motor units were weak and poorly synchronized.
CMAPs in MeB12-treated rats a month following surgery had long and variable latencies and generally could be consistently triggered only at high stimulus strength (not shown). Similar responses were recorded in the MeB12+PTN-treated rats (not shown). Shorter-latency and higher-amplitude CMAPs were recorded in the MeB12-treated rats 3 months after surgery; the same was observed in the MeB12+PTN-treated group (Figure 1, Table 2). However, by 6 months post-surgery, the amplitudes of CMAPs recorded in the MeB12 and MeB12+PTN-treated groups with 11-mA stimuli were still slightly lower than in the sham-operated controls (Table 2). This suggests that the repaired nerve-muscle connection has weaker maximal power than that of normal muscle. Notably, in the MeB12-treated rats, CMAPs recorded with either 5- or 11-mA stimuli had a characteristically short duration (Figure 1; compare the positive phase of the MeB12 6-month traces with the others), suggesting that MeB12 enhances the maturation of the re-established motor units so that they conduct faster and are better synchronized upon activation. Behaviorally, MeB12 and MeB12+PTN-treated rats performed significantly better than PBS-treated rats in the sticker removal test as early as 3 months following ESN (Table 3). Like the sham-operated controls, both groups of rats removed the sticker in an average of 1 second, whereas the PBS-treated rats took an average of approximately 2.67 seconds 6 months following ESN (Table 3). This suggests that MeB12 and MeB12+PTN treatments enhanced the recovery following ESN such that the repaired nerve and muscle were functionally competent.
Effects on the expression of axonal growth-related markers during the early stage of reinnervation
The functional examinations above revealed that differences in recovery between treatment groups were difficult to discern during the early post-surgery period. To resolve this, we examined the expression of 3 nerve regeneration-related markers in the repaired nerve 1 month following ESN (Figure 2). The repaired nerve was divided into donor and recipient parts, corresponding to the segments proximal and distal to the neurorrhaphy site. On the donor side, Gap43, an axon-regeneration marker, had increased to around 2-fold above that of the sham-operated control in all ESN animals, regardless of whether they were MeB12, MeB12+PTN or vehicle-treated. This suggests that coaptation alone induced the collateral sprouting of donor axons. On the recipient side, Gap43 expression in the McN was slightly increased in the PBS-treated rats but nearly 4-fold higher in the MeB12-treated rats and 3-fold higher in the MeB12+PTN-treated rats (Figure 2). These findings are consistent with the proposition that MeB12 effectively facilitates the growth of axonal sprouts into the recipient nerve. Enhanced growth of axonal sprouts into the recipient nerve was also evidenced by the significantly increased expression of the neuron-specific cytoskeleton molecule βIII tubulin in the MeB12 and MeB12+PTN-treated groups and of PGP 9.5 in the MeB12-treated group over that of the PBS-treated rats (Figure 2). Expression of the Schwann cell marker S100 in the recipient nerve of the MeB12 and MeB12+PTN-treated groups was also dramatically increased, around 12- to 13-fold over that of the PBS-treated rats (Figure 2). This is consistent with a role of Schwann cells in assisting peripheral nerve growth and myelination.
Effects on Schwann cells
We also examined the expression of S100 in cross sections of the recipient nerve by immunostaining. S100 immunolabeling often appeared circular, resembling Schwann cells surrounding axons (arrows, Figure 3). As expected, brighter S100 labeling was seen in the recipient nerves of the MeB12 and MeB12+PTN-treated groups than in the PBS controls 1 month following ESN (Figure 3). Triple labeling of sections of the recipient nerves 1 month following ESN showed little or no labeling of the macrophage marker CD68 amid abundant S100 labeling surrounding the Hoechst-labeled nuclei (Figure 4). This suggests that most axonal degeneration-associated debris-removing activities were already over 1 month following ESN. S100 labeling was more intense in the MeB12 and MeB12+PTN-treated groups 3 and 6 months following surgery (Figure 3, compare the middle and right column panels to the PBS-treated column on the left).
Effects on the numbers, sizes and myelination of regenerated axons
In order to find the anatomic correlates of the treatment effects, we first counted the number of axons entering each recipient nerve. Coaptation alone resulted in significant growth of axonal sprouts into the recipient nerve within a month (Figure 5, Table 4). Each ESN-repaired recipient nerve contained one large-sized vessel a month after surgery, regardless of treatment (not shown). In the PBS-treated ESN rats, the number of axons was actually far greater than that found in the intact McNs of the sham-operated animals (Table 4). This is consistent with our finding that the donor, namely the ulnar nerve, contains more axons than the recipient nerve, and it further suggests that trimming of excess axons is required later in the reinnervation process. In the recipient nerve of the PBS-treated group, the number of axons continued to increase 3 months after ESN. Although it did decrease by a small amount, it remained high by the end of 6 months. Interestingly, MeB12-treated rats had fewer axons (21% less) than the PBS-treated rats 1 month after ESN (Table 4). The number of axons did increase moderately by the end of 3 months but was 25% less than in the PBS-treated rats (Table 4). This value decreased by the end of 6 months and was 37% less than in the PBS-treated rats (Table 4). These phenomena suggest that MeB12 promotes axonal maturation and the pruning of redundant axons. In the MeB12+PTN-treated group, however, more axons were found to enter the recipient nerve 1 month after ESN compared to the MeB12 or PBS-treated groups. This value nevertheless decreased by the end of the third month and remained steady 6 months after ESN (Table 4). The early surge of axons in the MeB12+PTN-treated group suggests that the applied PTN transiently promotes axonal sprouting.
We then quantitated the diameters and myelin thicknesses of the axons growing into the recipient nerve. In the PBS-treated rats, axons were fine in diameter in the first month and increased steadily in diameter and myelin thickness with survival (Table 5). The mean axon diameters showed that axons of the MeB12-treated rats were more than twice as thick as those of the PBS-treated rats 1 month post-surgery and gained in diameter steadily afterward. In addition, they were always larger than their PBS-treated counterparts at any given time point. Like the MeB12-treated group, MeB12+PTN-treated rats also showed a similar pattern of augmented axonal size growth over that of the PBS-treated rats, but the axon sizes were slightly smaller at each stage. Figure 6 shows a box-and-whisker plot of axons according to their diameters 6 months following ESN. The MeB12-treated group had significantly larger diameters than the PBS-treated group (P < 0.05). The upper outliers of the MeB12-treated group clustered at around 8 µm, whereas those of the PBS-treated group varied widely.
On the other hand, the mean myelin thicknesses of the MeB12 and MeB12+PTN-treated groups were comparable to each other but significantly greater than those of the PBS-treated group 3 and 6 months after ESN (Table 5). These findings are consistent with the proposition that MeB12 facilitates axonal maturation. Figure 7 plots the correlation between the diameters and myelin thicknesses of the axons growing into the recipient nerves. The PBS-treated rats had predominantly smaller-sized axons throughout the survival period examined. MeB12 treatment increased the sizes and myelin thicknesses of the axons proportionally beginning within a month, and the distribution pattern became dramatically different from that of the PBS-treated group 3 months following surgery. A similar pattern of correlation was observed in the MeB12+PTN-treated group, but the MeB12-treated group had more points scattered in the upper right sector of the plot, i.e., more large axons with thicker myelin (Figure 7, the 3-month and 6-month plots).
Effects on MEPs
In order to investigate how muscle innervation recovered over time, biceps brachii muscle sections were reacted simultaneously with fluorochrome-tagged α-bungarotoxin for MEPs (green, Figure 8) and PGP 9.5 immunohistochemistry for axons (red, Figure 8). Flower-like MEP clusters (Figure 8) were identified starting 1 month following ESN. Co-localization of nerve fibers in MEPs made most of them appear yellow to orange in color. In the MeB12 and MeB12+PTN-treated rats, some MEP clusters were often seen connected to relatively thin red-staining axon-like profiles (arrows, Figure 8) as early as 1 month after coaptation. Large bundles of apparently thicker axons (arrowheads, Figure 8) were found traveling in the muscles of the MeB12 and MeB12+PTN-treated groups 6 months post-surgery. The results showed that, 1 month following ESN, the number of MEP clusters in the MeB12-treated rats had been restored to approximately 72% of the normal control, compared to 33% in the PBS-treated group (Table 6). The numbers of clusters grew steadily with survival and reached a level equivalent to that of the sham-operated controls in the MeB12 and MeB12+PTN-treated groups 6 months following nerve repair (Table 6). Those in the vehicle-treated group, however, remained much fewer (Table 6).
Discussion
EEN and ESN are used to repair injured peripheral nerves. The two strategies are empirically different in that EEN involves the regeneration of the transected donor nerve whereas ESN deals with the collateral sprouting of the intact one [10,31,32]. ESN could in part be induced by humoral factors released by the recipient nerve, as collateral sprouts of the donor were found capable of crossing a gap between the donor and recipient nerves brought close to each other with a Y-shaped silicone tube without suturing [31]. Thus, it is not surprising that ESN results in slower recovery and weaker connection strength [10]. However, it has the advantage of saving donor function. In the present study, we found that MeB12 effectively facilitated the outcome of ESN by establishing a connection of reasonable strength in a shorter period of time compared to the vehicle-treated control.

Figure 4. S100, CD68 and Hoechst 33342 labeling in the recipient nerve 1 month following ESN. Micrographs from the PBS, MeB12 and MeB12+PTN-treated rats are illustrated. S100 immunoreactivities (green) were found to surround Hoechst 33342-labeled nuclei (blue), presumably Schwann cells. There was little or no detectable CD68 labeling (red) in the recipient nerve, demonstrating that most cells in the recipient nerve at this stage were involved in nerve regeneration rather than degeneration. Each confocal image illustrated is the stack of a series of scans of the nerve section, 8 µm in thickness. Scale bar = 50 µm for all. doi: 10.1371/journal.pone.0076302.g004
Effects of MeB12
In the rat ESN paradigm that we investigated, donor UN motoneurons have been confirmed with retrograde tracing to sprout into the recipient nerve and innervate the target muscle ([10]: figures 1 and 7). The results of the present study showed that coaptation of a severed nerve to an intact one is alone sufficient to induce the latter to sprout. This is supported by the increase of Gap43 expression in the donor nerve 1 month after surgery regardless of treatment, and is consistent with the earlier report that Schwann cells alone could induce the collateral sprouting of intact axons [31]. Although axonal counts show that the PBS-treated rats had more axons in the proximal part of the recipient nerve, more sprouted axon collaterals appear to have ventured distally in the MeB12 and MeB12+PTN-treated groups, as only the recipient nerves of these latter two groups showed large increases in Gap43 and βIII tubulin expression. This is also supported by an increase of PGP 9.5 expression, which is preferentially associated with the neuronal cytoskeleton [33], in the MeB12-treated group over that of the PBS and MeB12+PTN-treated groups 1 month after coaptation. These findings are consistent with the earlier report that MeB12 promotes the transport of axonal cytoskeleton [20]. The large increase of S100, by and large a Schwann cell marker [10,24], in the recipient nerves of the MeB12-treated group also echoed the enhancement of axonal growth, as Schwann cells can induce collateral sprouting of intact axons [31] and are critical to peripheral nerve growth after ESN [25]. Furthermore, it is consistent with earlier reports that MeB12 enhances Schwann cell proliferation and migration [21,34] as well as maturation [24]. The scarcity of expression of the macrophage marker CD68 in the recipient nerves of both the drug- and vehicle-treated groups as early as 1 month after coaptation supports that these Schwann cells were more likely involved in axonal regeneration than degeneration. Thus MeB12 could have dual effects on both axons and Schwann cells that work together to facilitate reinnervation after ESN. The communication between regenerating growth cones and Schwann cells, via the release of acetylcholine from the former and the expression of the corresponding receptors in the latter during regeneration [35], could play an important role in establishing the effective innervation seen in the MeB12-treated group. MeB12 treatment enhanced the final outcome of ESN, but not by increasing the number of invading collaterals. The recipient nerves of the MeB12-treated group contained about as many axons as the intact recipient nerve 1 month after coaptation, which is much fewer than in the PBS-treated group. Analyses of axonal numbers, sizes and the diameter-myelin thickness relationship suggested that MeB12 boosts the maturation of ingrowing axons to establish effective connections, so that larger axons prevailed as the rats' survival lengthened. This maturation effect is likely to involve the elimination and/or pruning of axons that developed weak to no neuromuscular connections. This is consistent with our earlier finding that the distal part of the recipient nerve consistently contained 30-40% fewer axons than its proximal part from 2 to 6 months following ESN without treatment [10]. Elimination and/or pruning of axons is also supported by the early transient increase and later reduction in axonal numbers in the MeB12 and MeB12+PTN-treated groups in the present study. This effect of MeB12 is likely critical to enhancing the outcome of neurorrhaphy, as hyperinnervation and/or polyinnervation is detrimental to functional recovery [36].

Table 5 note: Values are means ± SD. * P < 0.05 as compared to the corresponding PBS-treated group (one-way ANOVA followed by Bonferroni post hoc test). doi: 10.1371/journal.pone.0076302.t005
The remaining axons, although less numerous than those of the vehicle-treated group, developed effective and robust connections, generating sizable CMAPs upon stimulation and moving the affected upper limb to remove the sticker in the modified grooming test. MeB12 could have altered methylation-related kinase activities and/or oxidative stress-reactive cascades in both neurons and Schwann cells [19,20], resulting in such a reinnervation-enhancing effect. The precise mechanism, however, remains to be elucidated.
Effects of PTN
In the EEN of the UN and McN, PTN alone or in combination with MeB12 was found to increase the number of blood vessels in the recipient nerve to two-fold that of the PBS controls, which contained 3-4 vessels per nerve [10]. In EEN, MeB12 alone slightly increased the number of vessels while enhancing axonal size but not number. This suggests that MeB12 lacks a dominant angiogenic effect. In the present study, one central vessel was identified in all nerves repaired with ESN 1 month post-surgery, regardless of treatment. Hence, it is unlikely that MeB12 enhanced the recovery by promoting angiogenesis. Our results instead support the notion that the number of blood vessels in the nerve is determined by the blood supply needed by the amount of tissue. The lack of difference in the number of blood vessels in ESN-repaired nerves additionally treated with PTN indicates that a single dose of PTN applied to the nerve right after surgery, followed by continuous MeB12 supply, did not enhance angiogenesis, although PTN has been thought to have such an effect [37].
In the EEN of McN to UN in rats, PTN alone was found to enhance the sprouting of the axotomized donor axons for at least 3 months [24]. At the same time, however, PTN interacts with heparan sulfate or chondroitin sulfate proteoglycans to inhibit fibroblast growth factor-2 incorporation into Schwann cells [22,38], which hampers myelination and axonal maturation. Persistent sprouting is also known to impede axonal maturation, as neutralizing collateral sprouting improved facial nerve reinnervation [39]. In the present study, MeB12+PTN treatment transiently increased the sprouting of intact donor axons 1 month following ESN. The decrease in axon numbers afterward suggests subsequent domination of axonal pruning and/or elimination, likely an effect of the continuously administered MeB12. The quick onset of pruning and elimination of redundant and/or weakly connected axons is likely responsible for the development of sound reinnervation in the MeB12+PTN-treated group. In our earlier studies on the EEN of peripheral nerves, combined PTN and MeB12 treatment was found to postpone axonal maturation and was not recommended for repairing peripheral nerves [10].
Technical remarks
In this study, several measures were used to assess the recovery following ESN. The numbers of MEP clusters in the affected muscle did not linearly reflect the strength of the re-established innervation as measured by CMAP or predicted by the sticker removal grooming test. This is similar to what we reported earlier when evaluating the outcome following EEN of the same nerves [10]. Thus, counting MEP clusters is not recommended as a sole measure for assessing reinnervation. On the other hand, by the end of the 6th month, rats of the MeB12 and MeB12+PTN groups moved their upper limbs effectively to remove the sticker as quickly as the sham-operated intact controls. Nevertheless, the CMAPs generated by the reinnervated muscle upon high-strength stimuli were still somewhat less robust than those of the sham-operated controls. This suggests that the CMAP is likely a measure of the maximal power of the newly developed neuromuscular connection.
Figure 8. MEPs and innervation of the biceps brachii muscles in rats 1, 3 and 6 months following ESN.
MEPs were revealed with α-bungarotoxin tagged with Alexa Fluor 488 (green). Nerve fibers, namely intramuscular axons, were revealed with PGP 9.5 immunohistochemistry (Cy3 red fluorescence). MEP clusters appeared as flower-like structures. Most of them overlapped with nerve staining, appearing yellow or orange in color. Relatively thin red fibers (arrows) were observed occasionally in the MeB12 and MeB12+PTN-treated muscles 1 month after surgery. Thicker red-staining structures, likely bundles of thicker axons (arrowheads), were seen in the muscles of the MeB12 and MeB12+PTN-treated rats 6 months post-surgery. The fine-grained green background staining is noise from the connective tissue covering the muscle fibers. Each micrograph illustrated is the stacked confocal image of a portion of a representative muscle section. Scale bar = 30 µm for all. doi: 10.1371/journal.pone.0076302.g008
Conclusions
We demonstrated in rats that systemic MeB12 effectively enhanced the recovery of the UN-to-McN transfer in the ESN configuration. The restoration, although somewhat short of that of the sham-operated controls within the survival period we tested, was rather effective compared to the vehicle-treated controls. Since MeB12 is used in the clinic to treat peripheral neuropathy, it could be readily adapted to ESN repair. The presumed motoneuron trophic factor PTN [23], however, is not recommended, as we found no advantage in using it with MeB12. In humans, nerve repair is complicated by factors such as delay in treatment, tissue inflammation, and the long distances required for regeneration. Combinational use of an anti-inflammatory drug and MeB12 might be considered, as we have suggested earlier for enhancing the recovery of EEN [24]. The effect of combining an anti-inflammatory drug with MeB12 in ESN repair, however, remains to be explored. | 2017-04-30T11:15:48.373Z | 2013-09-30T00:00:00.000 | {
"year": 2013,
"sha1": "26119cf41ae2cea820e97a7e553070f57abd538b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0076302&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "26119cf41ae2cea820e97a7e553070f57abd538b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
212622500 | pes2o/s2orc | v3-fos-license | TLD1433 Photosensitizer Inhibits Conjunctival Melanoma Cells in Zebrafish Ectopic and Orthotopic Tumour Models
The ruthenium-based photosensitizer (PS) TLD1433 has completed a phase I clinical trial for photodynamic therapy (PDT) treatment of bladder cancer. Here, we investigated a possible repurposing of this drug for treatment of conjunctival melanoma (CM). CM is a rare but often deadly ocular cancer. The efficacy of TLD1433 was tested on several cell lines from CM (CRMM1, CRMM2 and CM2005.1), uveal melanoma (OMM1, OMM2.5, MEL270), epidermoid carcinoma (A431) and cutaneous melanoma (A375). Using 15 min green light irradiation (21 mW/cm2, 19 J/cm2, 520 nm), the highest phototherapeutic index (PI) was reached in CM cells, with cell death occurring via apoptosis and necrosis. The therapeutic potential of TLD1433 was hence further validated in zebrafish ectopic and newly-developed orthotopic CM models. Fluorescent CRMM1 and CRMM2 cells were injected into the circulation of zebrafish (ectopic model) or behind the eye (orthotopic model), and 24 h later the engrafted embryos were treated with the maximally-tolerated dose of TLD1433. The drug was administered in three ways: (i) by incubating the fish in drug-containing water (WA), (ii) by injecting the drug intravenously (IV) into the fish, or (iii) by injecting the drug retro-orbitally (RO) into the fish. Optimally, four consecutive PDT treatments were performed on engrafted embryos using 60 min drug-to-light intervals and 90 min green light irradiation (21 mW/cm2, 114 J/cm2, 520 nm). This PDT protocol was not toxic to the fish. In the ectopic tumour model, both systemic administration by IV injection and RO injection of TLD1433 significantly inhibited growth of engrafted CRMM1 and CRMM2 cells. In the orthotopic model, however, tumour growth was only attenuated by localized RO injection of TLD1433. These data unequivocally prove that the zebrafish provides a fast vertebrate cancer model that can be used to test the administration regimen, host toxicity and anti-cancer efficacy of PDT drugs against CM. Based on our results, we suggest repurposing TLD1433 for treatment of incurable CM and further testing in alternative pre-clinical models.
Zebrafish (Danio rerio) are indeed increasingly used as an in vivo model to study cancer [30]. Benefits include large clutch size, ex utero development and easy manipulability of larvae [31]. Because there is high conservation of genes between zebrafish (ZF) and human, data collected in ZF are relevant for humans [32]. Notably, the histology of ZF tumours has been shown to be highly similar to tumours found in human cancers [33]. The adaptive immune system in ZF does not reach maturity until 4 weeks post-fertilization, allowing circumvention of cell graft-host rejection by using ZF in early stages [34]. ZF larvae can absorb various small molecular weight compounds from the water they swim in, which is advantageous when screening for anti-cancer compounds. When assessing drug efficacy, ZF experiments require much less material than mouse models [35]. Routinely, 1 mL (1 nM to 20 µM) of drug solution is enough for testing drug efficacy in six individual ZF embryos. Alternatively, (pro)drugs can be injected into the animal in nL quantities, which further minimizes the amount of compound required for testing. Importantly, the use of transgenic lines with fluorescent vasculature, neutrophil granulocytes or macrophages allows live, non-invasive imaging of proliferation, migration, tumour-associated neo-angiogenesis and interaction with the microenvironment at single-cell resolution in the entire organism within 1 week [36,37]. For PDT, the transparency of the animals allows a PS to be activated in the entire organism by simple light irradiation. Overall, these combined advantages account for the increased experimental use of zebrafish cancer models in drug discovery during the last two decades [38,39].
For cutaneous melanoma, a current phase I/II clinical trial of leflunomide combined with vemurafenib is the first to arise from initial screens in zebrafish [40]. Many ZF xenograft models have been established, and the choice of the best ZF model depends on the type of disease, but also on the type of treatment. Human tumour cells can be injected for example into the yolk sac [41], the Duct of Cuvier [42], the pericardial cavity [43], the perivitelline space [44], the swimming bladder [45], or the hindbrain [46]. Here, we aimed to engage different CM xenograft models for testing the TLD1433 PS as a potential new PDT treatment strategy to combat CM growth. We hence developed a new orthotopic model for CM by RO injection of CM cells, mimicking primary tumour spread. We also investigated a previously-developed ectopic model, generated by intravenous cell injection: circulating cancer cells usually form tumour lesions in the tail of the embryo [11,42]. Using three different treatment modalities of TLD1433 in two different tumour models, we established a testing platform in which the anti-tumour efficacy of this PS can be observed.
TLD1433 Is Phototoxic in Six Eye Melanoma Cell Lines
TLD1433 is known to generate reactive oxygen species (ROS) with high quantum efficiency in many cancer cell lines. However, there is no report of the in vitro toxicity of this compound in eye melanoma cell lines. We determined the cell viability of three conjunctival melanoma cell lines (CRMM1, CRMM2 and CM2005.1) and three uveal melanoma cell lines (OMM1, OMM2.5, MEL270) in the presence of TLD1433, both in the dark and under green light irradiation (21 mW/cm2, 19 J/cm2, 520 nm, 15 min), and compared this viability to that of the epidermoid carcinoma A431 and cutaneous melanoma A375 cell lines under the same conditions. The protocol used was based on previous work from the Bonnet group [47,48] and differed slightly from the recommendations of McFarland et al. [19]. Notably, the cell seeding time was 24 h instead of 3 h, and the drug-to-light interval (DLI) was 24 h instead of 16 h. The effective concentration (EC50) values, i.e., the concentrations required to reduce cell viability by 50% compared to untreated wells, were assessed by fitting the dose-response curves with a Hill equation. The phototoxicity index (PI), defined as the ratio of the dark EC50 to the light EC50, was also calculated; it represents the amplification of TLD1433 activity by the light trigger. In the control A375 and A431 cells, the dark toxicity of TLD1433 was very low, with EC50 values higher than the highest concentration used in the assay (5 µM), and PIs greater than 100 were observed, as previously reported for other cell lines [22]. In the eye melanoma cells, the dark toxicity of TLD1433 was relatively high, with EC50 values around 1 µM. Upon light activation, TLD1433 became significantly more potent (as also observed with the A375 and A431 cell lines), with EC50 values in the nanomolar regime. The lowest EC50 values were measured for CM cells, where the PI values were also the highest (>140). Because of the higher PI values for CM cells compared to uveal or cutaneous cancer cells, CM cells were chosen for the subsequent in vivo experiments (Figure 2 and Table 1).
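A minimal sketch of this fitting procedure, using synthetic viability values rather than the measurements behind Table 1, might look as follows in Python/SciPy.

```python
# Illustrative Hill fit yielding EC50 values and the phototoxicity index (PI).
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ec50, n):
    """Viable fraction at concentration c, decreasing from 1 to 0."""
    return 1.0 / (1.0 + (c / ec50) ** n)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])            # uM (synthetic)
viab_dark = np.array([1.00, 0.98, 0.95, 0.80, 0.50, 0.15])   # no irradiation
viab_light = np.array([0.90, 0.60, 0.30, 0.10, 0.03, 0.01])  # green light arm

(ec50_dark, _), _ = curve_fit(hill, conc, viab_dark, p0=(1.0, 1.0))
(ec50_light, _), _ = curve_fit(hill, conc, viab_light, p0=(0.1, 1.0))
print(f"EC50(dark) = {ec50_dark:.2f} uM, EC50(light) = {ec50_light:.3f} uM, "
      f"PI = {ec50_dark / ec50_light:.0f}")
```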
It should be noted that the PIs determined here were somewhat lower than those reported for TLD1433 in other cell lines (>1000), which could be due to a preferential toxicity of TLD1433 toward uveal and conjunctival melanoma lines, to a difference in the in vitro PDT protocol used, or to both. Under the selected conditions, the dark toxicity observed for both uveal and CM cell lines was relatively high [29], which reduces the maximum PIs that can be obtained. In addition, the slightly lower PI could be due to the low light dose that we used compared to other studies: typically, 100 J/cm2 has been proposed by McFarland et al. [19]. Regardless, we chose CM cells for further studies given that TLD1433 had the largest PI and was most phototoxic toward these cells.
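For reference, the light doses quoted throughout follow directly from the irradiance and the exposure time:

```latex
% Radiant exposure (fluence) delivered by a constant-irradiance source: H = E t.
\[
H = E\,t, \qquad
H_{15\,\mathrm{min}} = 0.021\ \mathrm{W/cm^2} \times 900\ \mathrm{s} \approx 19\ \mathrm{J/cm^2}, \qquad
H_{90\,\mathrm{min}} = 0.021\ \mathrm{W/cm^2} \times 5400\ \mathrm{s} \approx 114\ \mathrm{J/cm^2}.
\]
```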
TLD1433 Induces Apoptosis and Necrosis in CRMM1 and CRMM2 Cells
Depending on the nature and intracellular localization of a PS, the light dose, and the cell type, PDT is known to provoke either necrosis, apoptosis, or autophagy [49]. In order to investigate the death mechanism induced by green light-activated TLD1433 in CRMM1 and CRMM2 cells, the cells were stained with Annexin V and propidium iodide, and further analysed by fluorescence-activated cell sorting (FACS).
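As an illustration of the standard four-quadrant interpretation of such Annexin V/propidium iodide plots, the sketch below gates synthetic events with hypothetical intensity thresholds; it is not the gating strategy actually used in this study.

```python
# Illustrative quadrant gating of Annexin V / PI events (synthetic data).
import numpy as np

def classify(annexin, pi, thr_a=1e3, thr_p=1e3):
    """Map one event to the standard four-quadrant interpretation."""
    if annexin < thr_a and pi < thr_p:
        return "alive"
    if annexin >= thr_a and pi < thr_p:
        return "early apoptotic"
    if annexin >= thr_a and pi >= thr_p:
        return "late apoptotic"
    return "necrotic"  # Annexin V-negative, PI-positive

rng = np.random.default_rng(0)
events = rng.lognormal(mean=6.5, sigma=1.0, size=(1000, 2))  # fake intensities
labels = [classify(a, p) for a, p in events]
for q in ("alive", "early apoptotic", "late apoptotic", "necrotic"):
    print(q, labels.count(q))
```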
In both the vehicle control and the dark TLD1433 groups, most cells were found alive. In the light-activated TLD1433 group, about half of the cells were found dead, falling in either the late apoptotic or the necrotic quadrant (Figure 3A-D); most importantly, very few early apoptotic cells were found. Overall, these results suggest that the CM cells treated with TLD1433 and light did not die via apoptosis, but probably by necrosis.
Light Toxicity and the Maximum Tolerated Dose of TLD1433 by Water, Intravenous and Retro-Orbital Administration in Zebrafish Embryos
In order to test the effectiveness of TLD1433-induced PDT in zebrafish CM cancer models, we first examined the effect of light irradiation on wild-type, non-injected embryos. Embryos at 2 days post-fertilization (dpf; 30 embryos per group) were exposed continuously to green light (21 mW/cm2, 520 nm) for 0, 3, 6, or 12 h, and cytotoxicity symptoms were monitored by stereomicroscopy.
Green light irradiation for 6 h did not induce any toxicity or developmental defects in zebrafish embryos; the percentages of mortality and malformation (i.e., bent spine and pericardial edema) and the fish length were the same as in the control group (Figure 4), indicating that green light is not toxic to ZF for at least 6 h of continued exposure.
Next, we tried three different regimens of drug administration in zebrafish larvae and determined the maximum tolerated dose (MTD) of TLD1433 in the dark and after light activation (Figure 5 and Table 2). Water administration (WA) of drugs, via skin epithelial cell absorption and drinking, is commonly used in zebrafish drug experiments [35]. Hence, different concentrations of TLD1433 were added to the egg water of embryos at 2.5, 3.5, 4.5, and 5.5 dpf, followed by a 12 h DLI and 90 min green light irradiation (21 mW/cm2, 114 J/cm2, 520 nm). In addition, we tested IV administration of TLD1433 by direct injection into the dorsal vein, as well as behind-the-eye injection for RO administration [41]. For IV and RO administration, the compound was injected four times into the embryos, at 3, 4, 5 and 6 dpf, followed by a 60 min drug-to-light interval and the same irradiation regime as for WA administration (Figure 5A). Zebrafish embryos tolerated light-activated TLD1433 without any effect on mortality, malformation or fish length at an MTD of 9.2 nM when delivered by WA and an MTD of 4.6 mM when delivered by IV or RO administration (Figure 5B-D). Considering that, in the dark, even higher concentrations of TLD1433 (23 nM by WA, 11.5 mM by IV and RO) were not toxic to embryos, we conclude that this compound is activated by green light irradiation and is very effective at low concentrations in vivo.

Figure 5 caption (partial): WA: TLD1433 (…3 nM, 4.6 nM, 9.2 nM, 11.5 nM, 23 nM) was added to the water containing 10 embryos per well at 2.5, 3.5, 4.5 and 5.5 dpf, for 12 h (yellow box). After these treatments, the drug was removed and replaced by egg water, followed by 90 min green light irradiation (21 mW/cm2, 114 J/cm2, 520 nm), depicted as a green lightning bolt. IV or RO: 1 nL of TLD1433 (1.15 mM, 2.3 mM, 4.6 mM, 9.2 mM, 11.5 mM) was injected into the embryos every morning from 3 to 6 dpf, followed by a 60 min drug-to-light interval (yellow box) and 90 min green light irradiation (21 mW/cm2, 114 J/cm2, 520 nm), depicted as a green lightning bolt. (B) WA, (C) IV, (D) RO. (B-D) Images were taken of irradiated (light) and non-irradiated (dark) embryos (n = 30) at 6 dpf, and the percentages of mortality, malformation and fish length were calculated (shown as means ± SD from three independent experiments). Representative images of embryos under dark and light conditions are shown.
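For orientation, the dosing schedules and tolerated doses described above can be condensed into a simple lookup table; this is a bookkeeping sketch of the stated protocol in non-engrafted embryos, not software used in the study.

```python
# Summary of the three TLD1433 administration regimens (values from the text).
REGIMENS = {
    "WA": {"schedule": "added to egg water at 2.5, 3.5, 4.5 and 5.5 dpf for 12 h",
           "drug_to_light_h": 12, "mtd_light": "9.2 nM", "tolerated_dark": "23 nM"},
    "IV": {"schedule": "1 nL into the dorsal vein, daily at 3-6 dpf",
           "drug_to_light_h": 1, "mtd_light": "4.6 mM", "tolerated_dark": "11.5 mM"},
    "RO": {"schedule": "1 nL behind the eye, daily at 3-6 dpf",
           "drug_to_light_h": 1, "mtd_light": "4.6 mM", "tolerated_dark": "11.5 mM"},
}
IRRADIATION = {"wavelength_nm": 520, "irradiance_mW_cm2": 21,
               "duration_min": 90, "fluence_J_cm2": 114}

for route, spec in REGIMENS.items():
    print(route, spec["mtd_light"], "-", spec["schedule"])
```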
Treatment with TLD1433 by WA, IV and RO in Zebrafish Ectopic and Orthotopic Tumour Models
The zebrafish ectopic conjunctival melanoma tumour model has been described previously [42]. In this model, around 200 fluorescent CM cells are injected into the Duct of Cuvier at 2 dpf; the cells then disseminate through the blood circulation and grow in the head and tail. To establish the orthotopic tumour model, around 100 red, tdTomato-fluorescent CRMM1 or CRMM2 cells were injected RO at 2 dpf into tg(Fli:GFP/Casper) endothelial reporter transgenic zebrafish with green fluorescent vasculature and examined by fluorescence microscopy at days 1 and 4 after engraftment (Figure 6A). Tumour expansion at the injection site was measured as fluorescence intensity and tumour area. Figure 6B shows that RO-engrafted CRMM1 and CRMM2 cells proliferated significantly at the site of injection and formed primary tumour lesions (Figure 6B,C).
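A minimal sketch of such a readout, assuming a fixed background threshold and a single red channel, is shown below; the threshold and the synthetic image are assumptions for illustration, not the actual confocal quantification pipeline.

```python
# Illustrative tumour burden readout: integrated intensity and area above background.
import numpy as np

def tumour_burden(red_channel, background=30.0):
    """Return (integrated fluorescence intensity, tumour area in pixels)."""
    mask = red_channel > background
    return float(red_channel[mask].sum()), int(mask.sum())

rng = np.random.default_rng(1)
img = rng.integers(0, 25, size=(256, 256)).astype(float)  # background noise
img[100:140, 100:150] += 120.0                            # fake tumour lesion
intensity, area = tumour_burden(img)
print(f"intensity = {intensity:.0f}, area = {area} px")
# Relative tumour burden would then be burden(light) / burden(dark), per embryo.
```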
To engage both CM models for testing the efficacy of TLD1433 as a potential new PDT treatment strategy to combat CM growth, the MTD of TLD1433 delivered into zebrafish embryos engrafted with CM cells was first measured (Table 2), following the same procedure as described for wild-type embryos (Figure 5 and Table 2). Engrafted embryos were more sensitive to light-activated TLD1433 than non-engrafted embryos (Table 2). MTD concentrations of 4.6 nM (WA) and 2.3 mM (IV and RO) were therefore used. Delivery of TLD1433 at the MTD by WA did not inhibit tumour burden in either the ectopic or the orthotopic tumour model after engraftment of CRMM1 and CRMM2 cells (Figure 7). Relative tumour burden, estimated as fluorescence intensity and tumour area, was not significantly different between the dark and light treatments (21 mW/cm2, 114 J/cm2, 520 nm), indicating that the low concentrations of TLD1433 added to the water of engrafted embryos were not sufficient to attenuate CM growth in either model (Tables 3 and 4). The TLD1433 concentration in these experiments was not increased further, as the initial treatment was already at the pre-determined MTD.
Next, the effect of IV administration of TLD1433 was determined in both CM models. Figure 8 shows that light activation of TLD1433 significantly reduced the tumour burden in the CRMM1 and CRMM2 ectopic models but not in the orthotopic models. In the ectopic model, light activation (21 mW/cm2, 114 J/cm2, 520 nm) with the MTD (2.3 mM) of TLD1433 reduced CRMM1 and CRMM2 tumour fluorescence intensity as well as tumour area (by 41%, 31%, 54% and 50%, respectively) (Figure 8B,C and Tables 3 and 4). CRMM1 and CRMM2 tumour burden was unchanged in the orthotopic model (Figure 8D,E and Tables 3 and 4). This clearly shows that CRMM1 and CRMM2 tumour cells received a sufficient amount of activated TLD1433 in the ectopic model but not in the orthotopic model, suggesting that IV administration allows the compound to reach and inhibit CM cells in the ectopic model but is not effective in attenuating localized CM growth behind the eye in the orthotopic model.
In contrast, delivery of the same concentration of TLD1433 (2.3 mM) by RO administration toward the CRMM1- and CRMM2-induced tumours diminished the fluorescence intensity and tumour area in both the ectopic (47%, 40%, 64%, 52%) and orthotopic (35%, 55%, 69%, 71%) models upon green light activation (114 J/cm2, 520 nm) (Figure 9 and Tables 3 and 4).
We propose that TLD1433 remained longer in the interstitial fluid at the injection site after RO injection, reaching a higher effective concentration to inhibit CM cells grown in the same area.
Retro-Orbital Administration of TLD1433 Induces Apoptosis of CRMM1 and CRMM2 Cells in the Zebrafish Orthotopic Model
An in situ TUNEL assay on fixed embryos was used to detect TLD1433-induced apoptosis in the zebrafish CRMM1 and CRMM2 orthotopic tumour models at 4 dpi, after light activation of 2.3 mM TLD1433 administered by retro-orbital injection. DNA strand breaks in apoptotic tumour cells were stained with fluorescein and visualized as a green signal. In the control dark, control light and TLD1433 dark groups, no green signal co-localizing with the red signal of engrafted CRMM1 and CRMM2 cells was detected (Figure 10). In contrast, light activation of TLD1433, as described above (Figure 9C,E), induced CRMM1 and CRMM2 cell apoptosis in the zebrafish orthotopic model. After light irradiation, the red signal representing engrafted CM cells was reduced; however, some of the remaining cells stained positive for apoptosis and turned green (yellow in the overlay), indicating that the PDT-driven anti-tumour efficacy of TLD1433 in this regimen is at least partially apoptosis-dependent.
Figure 10. TUNEL assay in the CRMM1 and CRMM2 orthotopic models after RO administration of TLD1433. Red fluorescent CRMM1 (A) and CRMM2 (B) cells were injected at 2 dpf behind the ZF eye and divided into four groups for drug treatment. RO administration of vehicle control and TLD1433 was performed as described in Figure 9C,E. After dark or light exposure (21 mW/cm2, 114 J/cm2, 520 nm), embryos were fixed and TUNEL staining was performed. Representative images of embryos are shown. (A,B) In the TLD1433 light groups, nuclear DNA fragmentation by nucleases is detected by co-localization of the green signal (DNA fragments) with the red signal of engrafted CM cells, depicted as a yellow signal and marked by white arrows. In the control dark, control light and TLD1433 dark groups, no green apoptotic tumour cells were observed. The background green signal in the TLD1433 light groups does not co-localize with the cytosolic red signal, which is diminished in degraded cells, as TUNEL stains only the DNA breaks in these apoptotic CM cells. The experiment was performed 3 times with a group size of 10 embryos.
Discussion
Developing new ocular PDT treatments often depends on a limited number of rabbit studies, owing to the lack of other animal models. To overcome this, we previously generated an ectopic CM model and have now developed an orthotopic CM model in zebrafish. Zebrafish xenograft models are particularly straightforward for testing compound toxicity and efficacy in vivo: owing to the small size and transparency of the embryo, one can examine, on the one hand, adverse effects on developing phenotypes or animal survival and, on the other hand, tumour burden by fluorescence microscopy. For PDT in zebrafish, one should note that the PI can be defined either as the total tumour fluorescence or as the total tumour area (as detected by confocal microscopy) in the dark group divided by that in the light-activated group. These definitions are quite different from the definition of the PI in vitro, where it is usually defined as the ratio between the EC50 values in the dark and under light irradiation. As a consequence, in vitro and in vivo PIs cannot be directly compared. For example, the PI obtained by fluorescence spectroscopy in the orthotopic CRMM1 model using RO injection of TLD1433 was 1.85, while that obtained by measuring the tumour area was 4.1; the PI measured as the ratio of EC50 values in vitro was 140. The only PIs that can be compared are the ones defined identically in the same cancer model.
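The two definitions can be written explicitly as follows, with F the total tumour fluorescence and A the tumour area:

```latex
\[
\mathrm{PI}_{\text{in vitro}} = \frac{\mathrm{EC}_{50}^{\text{dark}}}{\mathrm{EC}_{50}^{\text{light}}},
\qquad
\mathrm{PI}_{\text{in vivo}} = \frac{F_{\text{dark}}}{F_{\text{light}}}
\quad\text{or}\quad
\frac{A_{\text{dark}}}{A_{\text{light}}}.
\]
% Example from the text: orthotopic CRMM1 with RO injection gives
% PI = 1.85 (fluorescence) or 4.1 (area), versus PI = 140 in vitro.
```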
Here, our results demonstrate not only the activity of TLD1433 in a broad range of different CM and UM cells in vitro, but also its anti-tumour activity in zebrafish embryo tumour models of CM. Interestingly, the in vitro results for the 6 different eye melanoma cell lines are not significantly different, which means that TLD1433 shows a broad range of photoactivity, independent of the genetic background of the different cell lines. For the in vivo part of this work, we focussed on CM because TLD1433 induced the highest PIs in these cell lines. However, future experiments may further analyse UM, as good activity was also observed in the UM cell lines. Clearly, the excellent photodynamic properties of the Ru-based TLD1433 sensitizer make it phototoxic in most cell lines, including cutaneous melanoma and non-melanoma cell lines. When testing it in vivo, it is hence particularly important to optimise the mode of administration, compound dose, and light dose in order to minimise side effects.
In ZF embryo models for PDT, the small size of the embryo and of the tumour, together with the relative optical transparency of all tissues, allows easy light penetration into the tumour. On the other hand, local irradiation of the tumour is very difficult, so it was not investigated here. Hence, the main challenge in developing a ZF embryo model for testing PDT sensitizers is to ensure that the concentration of the photosensitizer in the tumour tissue is high enough at the moment of irradiation, while remaining as low as possible in the healthy tissues, which are irradiated as well. This is the only way to ensure, following light irradiation, a maximum dose of oxidative stress in the diseased tissue while keeping the activity in the rest of the body minimal. In larger animals such as mice or humans, achieving deep light penetration into the tumour tissue and consistent light dosimetry can be more difficult and requires specific optimization for each type of cancer, compound, and irradiation wavelength. On the other hand, light irradiation is, by the very definition of PDT, confined to the tumour area, which minimizes global toxicity issues. However, at this stage of our research we cannot exclude an inflammatory reaction, caused exclusively by the innate immune cells of the zebrafish embryo (as adaptive immunity is not yet active), elicited by tumour necrosis or apoptosis after TLD1433 treatment. This reaction may drive some negative side effects, or may even help to attenuate tumour development in this model [50].
Whether the transparency of the ZF embryo is considered an advantage or a disadvantage for testing PDT photosensitizers, the relationship between the method of implanting the tumour and the mode of administration of the compound has to be established for each particular disease. For CM, our results clearly demonstrate that when the route of administration did not fit the chosen tumour model, the activity in zebrafish was abrogated. This result is very important considering the well-known excellent ROS generation properties of TLD1433 and its excellent PDT properties in mouse tumour models [19]. In other words, activity in fish only appeared when the proper administration route was used. Independently of the model, water administration did not allow the compound to reach the tumour. For intravenous injection of TLD1433, the situation was more mixed, as the ectopic model showed good activity after light irradiation, while the orthotopic model did not. In the ectopic model, engrafted cells disseminate through the blood circulation and form small metastatic lesions in the head and tail. At this stage, we can only speculate that when TLD1433 is also injected into the blood circulation, more cells may effectively be reached by the drug before activation. In the orthotopic model, lesions are bigger and may be more compact, so that cells may be less easily reached by the drug when it is injected into the blood. When TLD1433 was locally injected behind the eye, an excellent response was found for both the ectopic and orthotopic models. Overall, the best response was obtained with the orthotopic model combined with local injection of TLD1433, i.e., injection behind the eye. In such a case, the maximum drug concentration may be achieved in the tumour right before light irradiation. This mode of administration is reminiscent of that used in bladder cancer patients, where TLD1433 is injected into the bladder and taken up very selectively by the tumour cells. Our results open the door for further zebrafish testing, not only of TLD1433 (to assess its toxicity, activity, and mode of action), but also of other phototherapeutic compounds for which no activity in vivo has ever been reported.
Last but not least, clinical PDT for intraocular melanoma has so far been limited not only by the lack of clinically approved PSs, but also by interference of the ocular and tumour pigment with light absorption. Most approved PDT sensitizers are porphyrin compounds, which offer a rather narrow (~20 nm) excitation wavelength range. If the pigment of the tumour absorbs too much of that light, PDT activity may be compromised. TLD1433, like most Ru polypyridyl compounds, shows broad absorption bands (∆λ ~150 nm) between the blue and red regions of the spectrum, thereby making it possible to fine-tune the excitation wavelength and optimise light absorption by the sensitizer versus light absorption by the pigment [51]. These effects could not be tested here, as the CM and uveal melanoma cell lines have lost their pigments. Rutherrin, a new formulation of TLD1433 with transferrin, has been proposed to improve the target specificity and water solubility of the PS [52][53][54]. Rutherrin was proven to cross the blood-brain barrier (BBB) and is now under clinical investigation for glioblastoma multiforme (GBM) and non-small-cell lung cancer (NSCLC) [19]. Nevertheless, pigment absorption should be taken into account in any further in vivo testing of Ru-based sensitizers for PDT.
Photosensitizers
For the in vitro studies, TLD1433 was first diluted to 2 mM in autoclaved PBS and further diluted in media as required. For the in vivo studies, TLD1433 was diluted directly in autoclaved 2% PVP as required.
In Vitro Cytotoxicity (SRB) Assay
At day 0, cells were detached using 1 mL of trypsin, resuspended in 4 mL of media, and transferred to a 15 mL Corning Falcon tube. Cells were counted using trypan blue and a BioRad® TC20™ automated cell counter (see Figure 11 for the assay timeline). Cell suspensions of 6000 (CRMM1), 6000 (CRMM2), 8000 (CM2005.1), 6000 (OMM1), 6000 (OMM2.5), 6000 (MEL270), 8000 (A431), and 4000 (A375) cells/well were prepared from each cell suspension at a final volume of 6 mL. The required volume of stock suspension, V_c, follows from the definitions as V_c = (10 × C × V_t)/L_c, where V_t = total volume of solution (mL), C = number of cells per well (per 100 µL), and L_c = live cell count (cells/mL). The cell suspensions were transferred to a 50 mL reservoir, and 100 µL of each cell line was seeded at the aforementioned cell densities, in triplicate, in six 96-well plates. Border wells were intentionally filled with PBS to avoid edge effects. After 24 h, the cells were treated with TLD1433 at six different concentrations ranging from 0.025 µM to 3.0 µM, followed by incubation in a normoxic incubator. 24 h after treatment, the cells were exposed to green light for 15 min (520 nm, 21 mW/cm², 19 J/cm²); the dark control plate was kept under dark conditions. Cisplatin was used as a positive control for all cell types. The cells were then incubated for another 48 h before being fixed with trichloroacetic acid (TCA, 10% w/w) solution. The fixed cells were kept at 4 °C for 48 h, after which the TCA was washed out with distilled water before adding the sulforhodamine B (SRB) dye (0.6% SRB). The SRB dye was washed out after 30 min, and the plates were air-dried overnight. The next day, the dye was dissolved using Tris base (0.25%), and the absorbance of SRB at 510 nm was recorded from each well using a Tecan plate reader. The SRB absorbance data were used to calculate the fraction of viable cells in each well (Excel and GraphPad Prism software). The absorbance data were averaged from triplicate wells per concentration. Relative cell viabilities were calculated by dividing the average absorbance of the treated wells by the average absorbance of the untreated wells. Three independent biological replicates were completed for each cell line (three different passage numbers per cell line). The average cell viability of the three biological replicates was plotted versus log(concentration) [µM], with the SD error of each point. Using the dose-response curve for each cell line under dark and irradiated conditions, the effective concentration (EC50) was calculated by fitting the curves to a non-linear regression function with a fixed maximum (100%) and minimum (0%) relative cell viability and a variable Hill slope, which results in the simplified two-parameter Hill-slope equation [Equation (1)]:

y = 100/(1 + 10^((log10(EC50) − x)·HillSlope)) (1)

where x is the logarithm of the concentration and y the relative cell viability (%). Figure 11. Time line for the SRB assay.
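The following MATLAB sketch shows one way to perform the Equation (1) fit; the concentration and viability values are hypothetical placeholders, and lsqcurvefit (Optimization Toolbox) stands in for whichever regression tool is actually used.

```matlab
% Sketch: fitting the two-parameter Hill-slope model of Equation (1) to SRB data.
x = log10([0.025 0.1 0.3 1.0 2.0 3.0]);        % hypothetical TLD1433 concentrations (uM)
y = [98 95 70 35 12 5];                        % hypothetical mean relative viabilities (%)

% p(1) = log10(EC50), p(2) = Hill slope; maximum fixed at 100%, minimum at 0%.
hill = @(p, x) 100 ./ (1 + 10.^((p(1) - x) .* p(2)));
p0   = [median(x), 1];                         % starting guess for the optimizer
pFit = lsqcurvefit(hill, p0, x, y);

EC50      = 10^pFit(1);                        % effective concentration (uM)
hillSlope = pFit(2);
fprintf('EC50 = %.3f uM, Hill slope = %.2f\n', EC50, hillSlope);
```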
Flow Cytometry
CRMM1 (10,000/well) and CRMM2 (10,000/well) cells were seeded into an 8-well chamber in Opti-MEM™ (Gibco, Reduced Serum Medium, no phenol red) with 2.5% FBS (Gibco). After 24 h of incubation, TLD1433 (0.0059 µM for CRMM1, 0.0048 µM for CRMM2) was added to the medium. 24 h later, the wells were washed and fresh drug-free medium was added. The cells were exposed to green light (520 nm, 21 mW/cm², 19 J/cm²) for 15 min and incubated for 48 h. The medium of all wells was collected, and the wells were washed with PBS and lysed with 500 µL of trypsin for 3 min. The collected medium was added back to the wells with lysed cells, mixed, and centrifuged at 2000 rpm for 3 min. After washing, cells were resuspended in 200 µL of 1× binding buffer. Next, 5 µL of Annexin-V-FITC and 5 µL of propidium iodide were added to each well and incubated for 15 min at room temperature. 200 µL of each sample was transferred to a 96-well plate and used for FACS measurement.
Zebrafish Maintenance, Tumour Cells Implantation and Tumour Analysis
Zebrafish lines were kept in compliance with local animal welfare regulations and European directives, and were maintained according to standard protocols (www.ZFIN.org). The study was approved by the local animal welfare committee (DEC) of the University of Leiden (Project: "Anticancer compound and target discovery in zebrafish xenograft model"; license number: AVD1060020172410). The zebrafish (ZF) line used was Tg(fli1:GFP/Casper) [36].
For cancer cell injection, two-days-post-fertilization (dpf), dechorionated zebrafish embryos were anaesthetized with 0.003% tricaine (Sigma) and plated on a 10 cm Petri dish covered with 1.5% solidified agarose. CRMM1 and CRMM2 cells were suspended in PBS containing 2% polyvinylpyrrolidone (PVP; Sigma-Aldrich) at a concentration of 50,000 cells/µL and loaded into borosilicate glass capillary needles (1 mm O.D. × 0.78 mm I.D.; Harvard Apparatus). In the ectopic model, 200 (td)Tomato fluorescent CM cells were injected into the Duct of Cuvier at 2 dpf, which led to dissemination through the blood circulation and outgrowth in the head and tail. In the orthotopic tumour model, 100 (td)Tomato fluorescent CRMM1 or CRMM2 cells were injected RO into 2 dpf embryos using a Pneumatic Picopump and a manipulator (WPI). After injection, the embryos were kept in a 34 °C incubator. Images were acquired at 1, 2, and 4 days post-injection (dpi) with a Leica M165 FC stereo fluorescence microscope. Tumour growth was quantified by calculating the total fluorescence intensity and area with the ZF4 pixel counting program (Leiden). Each experiment was performed at least 3 times with a group size of >30 embryos.
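For readers without access to the ZF4 program, the MATLAB sketch below illustrates the underlying idea of pixel-based tumour quantification; the file name and intensity threshold are hypothetical, and this is not the ZF4 implementation itself.

```matlab
% Sketch: pixel-based tumour burden quantification from a fluorescence image.
img  = imread('embryo_4dpi.tif');        % hypothetical RGB fluorescence image
red  = double(img(:, :, 1));             % (td)Tomato signal is in the red channel
mask = red > 50;                         % hypothetical intensity threshold

totalFluorescence = sum(red(mask));      % integrated tumour fluorescence (a.u.)
tumourAreaPixels  = nnz(mask);           % tumour area as a pixel count
fprintf('Fluorescence = %.3g a.u., area = %d px\n', totalFluorescence, tumourAreaPixels);
```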
Maximum Tolerated Dose (MTD) for Wild Type Zebrafish and Tumour Cells Injected Zebrafish
To determine the MTD of TLD1433 delivered by WA in wild-type zebrafish, solutions of 2.3 nM, 4.6 nM, 9.2 nM, 11.5 nM, and 23 nM were prepared before the experiment. At 2.5, 3.5, 4.5, and 5.5 dpf, TLD1433 was added to the fish water and maintained for 12 h. At 3, 4, 5, and 6 dpf, the fish water was refreshed and, after 1 h, embryos were exposed to green light for 90 min (520 nm, 21 mW/cm², 114 J/cm²). For IV and RO administration, TLD1433 solutions (1.15 mM, 2.3 mM, 4.6 mM, 9.2 mM, and 11.5 mM) were prepared before the experiment. At 3, 4, 5, and 6 dpf, 1 nL of TLD1433 was injected via the dorsal vein or the RO site and left for 1 h. After each of the 4 injections, the embryos were exposed to green light for 90 min (520 nm, 21 mW/cm², 114 J/cm²). Images of the treated and wild-type embryos at 6 dpf were taken using a DFC420C camera coupled to a Leica MZ16FA fluorescence microscope. To determine the MTD in tumour-cell-bearing zebrafish, TLD1433 was delivered by WA, IV, and RO administration according to the same procedures as described above for the wild-type embryos.
The Efficacy of TLD1433 by WA, IV and RO in a Zebrafish Ectopic and Orthotopic Tumour Models
Fluorescent CRMM1 and CRMM2 cells were injected at 2 dpf into the Duct of Cuvier (ectopic model) or behind the eye (orthotopic model), and TLD1433 was delivered by WA, IV, or RO administration with or without light treatment as described in Section 4.8. For WA administration, the 4.6 nM TLD1433 solution was added to the tumour-cell-injected zebrafish at 2.5, 3.5, 4.5, and 5.5 dpf and maintained for 12 h. At 3, 4, 5, and 6 dpf, the fish water was refreshed and, after 1 h, embryos were exposed to green light for 90 min (520 nm, 21 mW/cm², 114 J/cm²). For IV and RO administration, 1 nL of 2.3 mM TLD1433 solution was injected via the dorsal vein or the RO site at 3, 4, 5, and 6 dpf. After a 1 h interval, the embryos were exposed to green light for 90 min (520 nm, 21 mW/cm², 114 J/cm²). After treatment, images of the embryos were acquired with a Leica M165 FC stereo fluorescence microscope. Tumour growth was quantified by calculating the total fluorescence intensity and area with the ZF4 pixel counting program (Leiden). Each experiment was performed at least 3 times with a group size of >30 embryos.
TUNEL Assay
The zebrafish larvae were fixed overnight with 4% PFA at 4 °C. Embryos were washed in PBST for 5 min and dehydrated through a graded methanol series up to 100% methanol. Embryos were stored at −20 °C until further use. For staining, embryos were gradually rehydrated in PBST (25%, 50%, 75%), washed twice for 10 min with PBST, and digested with a proteinase K (Roche, Mannheim, Germany) solution in PBST (10 µg/mL) at 37 °C for 40 min. After two washes in PBST, embryos were post-fixed in 4% PFA for 20 min. After two further 10-min washes in PBST, 50 µL of TdT reaction mix (Roche) was added to the embryos. Embryos were incubated overnight with the TdT mix at 37 °C in the dark. The reaction was stopped by three 15-min washes with PBST at room temperature, and the embryos were then used for high-resolution imaging. Embryos were placed on glass-bottom Petri dishes and covered with 1% low-melting agarose containing 0.003% tricaine (Sigma). Imaging was performed using a Leica SP8 confocal microscope. The images were processed with ImageJ software (National Institutes of Health, Bethesda, MD, USA). Each experiment was performed three times with a group size of 10 embryos.
Statistical Analysis
Determination of the EC50 values in vitro was based on a non-linear regression analysis performed using GraphPad Prism software. Results are presented as means ± SD from three independent experiments. Significant differences were detected by one-way ANOVA followed by Dunnett's multiple comparisons test implemented in Prism 8 (GraphPad Software Inc., La Jolla, CA, USA). A p-value < 0.05 was considered statistically significant (* p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001, **** p ≤ 0.0001).
Conclusions
Our work supports three main conclusions. First, the Ru-based PDT sensitizer TLD1433 is very active in eye melanoma cell lines, where green-light activation provokes cell death via apoptosis and necrosis. Second, this paper is one of the rare examples of testing PDT in a zebrafish tumour model. It could hence serve as a basis for future in vivo PDT sensitizer screening, positioned between in vitro and mouse studies. Given the excellent ROS generation properties of this PDT sensitizer, it appears of utmost importance to fine-tune the way the prodrug is administered to the tumour model. For two different models of conjunctival melanoma, i.e., an ectopic and an orthotopic model, we tested three ways of administering TLD1433. WA, which is often chosen to test compounds in zebrafish, did not give good results: the phototoxicity to the zebrafish was high, and the anti-tumour efficacy low. When the compound was injected IV or RO, however, the toxicity became much lower and excellent anti-tumour properties were observed. We hence propose, as the third and last conclusion of this work, that TLD1433 can be repurposed as a treatment against conjunctival melanoma.
"year": 2020,
"sha1": "bd2f156428f8c30fc85399895820d883c621eaea",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/12/3/587/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "15ce9dc4a769daabbe17a01c3e53944bfdb05f24",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Efficacy of artesunate–amodiaquine in the treatment of falciparum uncomplicated malaria in Madagascar
Background Since 2006, artemisinin-based combination therapy (ACT) has been recommended to treat uncomplicated malaria, including non-Plasmodium falciparum malaria, in Madagascar. Artesunate–amodiaquine (ASAQ) and artemether–lumefantrine are the first- and second-line treatments for uncomplicated falciparum malaria, respectively. No clinical drug efficacy study has been published since 2009 to assess the efficacy of these two artemisinin-based combinations in Madagascar, although the incidence of malaria cases increased from 2010 to 2016. In this context, new data about the efficacy of the drug combinations currently used to treat malaria are needed. Methods Therapeutic efficacy studies evaluating the efficacy of ASAQ were conducted in 2012, 2013 and 2016 among falciparum malaria-infected patients aged between 6 months and 56 years, in health centres in 6 sites representing different epidemiological patterns. The 2009 World Health Organization protocol for monitoring anti-malarial drug efficacy was followed. Results A total of 348 enrolled patients met the inclusion criteria, including 108 patients in 2012 (n = 64 for Matanga, n = 44 for Ampasipotsy), 123 patients in 2013 (n = 63 for Ankazomborona, n = 60 for Anjoma Ramartina) and 117 patients in 2016 (n = 67 for Tsaratanana, n = 50 for Antanimbary). The overall cumulative PCR-corrected day 28 cure rate was 99.70% (95% CI 98.30–99.95). No significant difference in cure rates was observed over time: 99.02% (95% CI 94.65–99.83) in 2012; 100% (95% CI 96.8–100) in 2013 and 100% (95% CI 96.65–100) in 2016. Conclusion The ASAQ combination remains highly effective for the treatment of uncomplicated falciparum malaria in Madagascar.
Background
Malaria remains an important health problem in Madagascar, mainly in children under 5 years of age, as mentioned in the national strategy plan for malaria control in Madagascar 2013-2017 [1]. Over the past decade, the burden of malaria has fluctuated over time, partly due to the successes and failures of anti-malarial policy [1]. Artemisinin-based combination therapy (ACT) has been recommended in Madagascar since 2006 to treat uncomplicated malaria, including non-falciparum malaria. The National Malaria Control Programme (NMCP) replaced chloroquine with the artesunate-amodiaquine (ASAQ) combination as the first-line drug for treating uncomplicated falciparum malaria, with artemether-lumefantrine as an alternative treatment. This change was based on a study, including clinical and in vitro data, that reported the complete efficacy of the ASAQ combination in Madagascar [2,3]. An additional study performed 1 year later, which was part of a multicentre trial conducted in 2006 in 5 African sites, reported similarly high cure rates for ASAQ in Tsiroanomandidy [4]. Since then, no data about the efficacy of ASAQ or alternative anti-malarials in Madagascar have been reported.
The incidence of malaria cases in Madagascar has increased from 9.83 cases per 1000 inhabitants in 2010 to 19.52 in 2016 (NMCP data, pers. comm.). In this context, it is of utmost importance to acquire new data from recent efficacy studies. This study presents the results of studies conducted from 2012 to 2016 to assess the efficacy of the ASAQ combination for the treatment of uncomplicated malaria cases in 6 sites covering three epidemiological patterns in Madagascar.
Patient recruitment
Clinical ASAQ efficacy studies were conducted according to 2009 World Health Organization protocol for monitoring anti-malarial drug efficacy [5]. Febrile patients or patients with fever history seeking anti-malarial treatment in health centres were screened for malaria by rapid diagnostic test (SD Bioline Ag P.f/Pan, Standard Diagnostics INC, Korea). Positive cases for Plasmodium falciparum were enrolled if they met inclusion criteria and gave their written informed consent. Inclusion criteria were: (i) age > 6 months; (ii) axillary temperature ≥ 37.5 °C or history of fever in the 24 h preceding consultation; (iii) P. falciparum mono-infection with parasitaemia between 1000 and 200,000 asexual parasites per µl; and, (iv) absence of signs of severe malaria. Children were recruited after parental consent.
Finger-prick capillary blood samples were collected to prepare thin and thick blood smears for malaria microscopy examination, haemoglobin level determination (HemoCue HB 201, HemoCue AB, Ängelholm, Sweden), and dried blood spots (DBS) for molecular biology investigations.
Artesunate-amodiaquine Winthrop® (Sanofi, France) was administered daily for 3 days at the dose recommended by the manufacturer on a body-weight basis (4.5-8 kg: 25 mg/67.5 mg tablets; 9-17.9 kg: 50 mg/135 mg tablets; 18-35 kg: 100 mg/270 mg tablets; >36 kg: 100 mg/270 mg tablets). Participants were observed by the medical team for 30 min after treatment to monitor for vomiting or other adverse events; those who vomited were administered a second dose and observed for an additional 30 min. Patients who vomited both doses were excluded from the study and referred to the hospital for parenteral treatment.
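As a worked illustration of the weight-band dosing just described, the MATLAB sketch below maps a body weight to the listed tablet strength (artesunate mg/amodiaquine mg); the function name is ours, and weights falling between the listed bands return NaN rather than guessing.

```matlab
% Sketch: weight-band lookup for the ASAQ fixed-dose tablets listed above.
function dose = asaqTabletStrength(weightKg)
    % dose = [artesunate_mg, amodiaquine_mg] per tablet, NaN outside listed bands
    if weightKg >= 4.5 && weightKg <= 8
        dose = [25, 67.5];
    elseif weightKg >= 9 && weightKg <= 17.9
        dose = [50, 135];
    elseif weightKg >= 18 && weightKg <= 35
        dose = [100, 270];
    elseif weightKg > 36
        dose = [100, 270];
    else
        dose = [NaN, NaN];    % weight falls between the bands given in the text
    end
end
```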
Patient follow-up
Enrolled patients were followed daily (days 1, 2, 3) and weekly (days 7, 14, 21, and 28). At each visit, patients were clinically examined by a physician, and clinical parameters were recorded on a Case Record Form (CRF). Parasite densities estimated by microscopic examination were checked at each visit. All blood films were read by 2 qualified microscopists, and by a third independent microscopist if the discordance was >20%. Dried blood spots obtained at day 0 and on the day of failure were used for PCR genotyping. Haemoglobin concentrations were assessed on day 0 and day 28.
DNA extraction and molecular investigations
Parasite DNA was extracted from dried blood spots (DBS) with the QIAamp DNA blood kit according to the manufacturer's instructions. It was amplified to confirm the Plasmodium species, with slight modifications [6], and, in cases of recurrent parasitaemia, to distinguish recrudescence from re-infection using msp1 and msp2 polymorphisms [7].
Data analysis
Data were entered into the standard pre-programmed Excel worksheet provided by the WHO Global Malaria Programme for per-protocol analysis. The per-protocol analysis was used to assess treatment outcomes at day 28 based on the WHO 2009 criteria: ACPR (adequate clinical and parasitological response), ETF (early treatment failure), LCF (late clinical failure), and LPF (late parasitological failure).

Results

The baseline characteristics of the enrolled patients are summarized in Table 1. Ages ranged from 6 months to 56 years (median 8 years). The geometric mean of the parasite density at day 0 was 17,502 parasites/µL of blood (95% CI 13,000-22,003; range 1005-199,800). The mean parasite density varied according to site, from 7315 parasites/µL of blood in Ampasipotsy to 28,089 parasites/µL of blood in Tsaratanana.
The overall efficacy of ASAQ in the per-protocol population analysis, after PCR correction, was 99.70% (95% CI 98.30-99.95). By year, the PCR-corrected day 28 cure rates were 99.02% (95% CI 94.65-99.83) in 2012, 100% (95% CI 96.8-100) in 2013, and 100% (95% CI 96.65-100) in 2016. There was no case of ETF; 2 cases of LPF and one case of LCF were observed. The first patient classified as LPF was a 17-month-old boy with a parasitaemia of 18,613 asexual parasites/µL at day 0 and 150,000 asexual parasites/µL at day 28. This case was a true recrudescence, as determined by msp1/msp2 genotyping. The second patient classified as LPF was a 17-month-old girl with a parasitaemia of 9900 asexual parasites/µL at day 0 and 29,232 asexual parasites/µL at day 28. The patient presenting an LCF, a 10-year-old boy, had a parasitaemia of 5811 asexual parasites/µL on day 0 and 9400 asexual parasites/µL on day 28, with an axillary temperature of 38.9 °C. The latter two patients were classified as re-infections by msp1/msp2 genotyping and were excluded from the analysis (Table 2).
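The cure-rate confidence intervals above are exact binomial intervals; the MATLAB sketch below shows how such an interval can be computed with binofit (Statistics and Machine Learning Toolbox), using hypothetical counts rather than the study's exact per-protocol denominators.

```matlab
% Sketch: PCR-corrected day-28 cure rate with an exact (Clopper-Pearson) 95% CI.
nAnalysed = 337;    % hypothetical per-protocol denominator after exclusions
nACPR     = 336;    % hypothetical number of adequate clinical/parasitological responses

[cureRate, ci95] = binofit(nACPR, nAnalysed, 0.05);
fprintf('Cure rate %.2f%% (95%% CI %.2f-%.2f%%)\n', ...
        100*cureRate, 100*ci95(1), 100*ci95(2));
```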
Discussion
Amodiaquine, in combination with artesunate, is widely used in sub-Saharan African countries, including Madagascar. The aim of this study was to provide updated data about the efficacy of the ASAQ combination in Madagascar. Twelve years after its adoption as the first-line treatment for acute uncomplicated falciparum malaria, ASAQ remains highly efficacious in the 6 geographical sites, covering various eco-epidemiological facies of the island with different levels of malaria transmission. PCR-corrected cure rates estimated at day 28 were well above 90%, the WHO threshold below which a change in malaria treatment policy is recommended [5]. In addition, all patients treated with ASAQ cleared their parasites before day 3, indicating the absence of delayed parasite clearance, a marker of suspected partial resistance to artemisinin [8]. The proportion of cured patients observed in this study was not different from that reported in the two studies performed a decade earlier, suggesting that ASAQ has retained its efficacy so far. These results are in line with those of comparable studies performed recently in neighbouring countries, such as Mozambique and Kenya [9,10]. They are also in line with results obtained in several African countries, where ASAQ therapeutic failures are very rarely reported [11][12][13][14][15][16][17][18]. When documented, those therapeutic failures were not associated with the k13 mutations that were reported in P. falciparum isolates obtained during therapeutic failures in Cambodia and other Southeast Asian countries [19][20][21][22]. Treatment with ASAQ was well tolerated, and no severe adverse events were reported among the participants at the 6 sites.
The efficacy of ASAQ, demonstrated in the present study, is necessary but not sufficient to contribute to the elimination of malaria in Madagascar. Stock-out issues are recurrent on the island, emphasizing that a sustainable supply of anti-malarials is crucial.
Conclusion
ASAQ remains highly efficacious in the treatment of uncomplicated falciparum malaria in Madagascar. Thus, according to this study, the recently observed increase in the incidence of falciparum malaria does not appear to be related to clinical failure of the first-line treatment promoted since 2006. Further investigations are required to determine the reasons for this recent increase.
"year": 2018,
"sha1": "3eaac760d2c3d5f06fb2374ee75cb43c40e87165",
"oa_license": "CCBY",
"oa_url": "https://malariajournal.biomedcentral.com/track/pdf/10.1186/s12936-018-2440-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "025c6c0a980575210723387d5185a8e59a23bea3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Healthcare Application of In-Shoe Motion Sensor for Older Adults: Frailty Assessment Using Foot Motion during Gait
Frailty poses a threat to the daily lives of healthy older adults, highlighting the urgent need for technologies that can monitor and prevent its progression. Our objective is to demonstrate a method for providing long-term daily frailty monitoring using an in-shoe motion sensor (IMS). We undertook two steps to achieve this goal. Firstly, we used our previously established SPM-LOSO-LASSO (SPM: statistical parametric mapping; LOSO: leave-one-subject-out; LASSO: least absolute shrinkage and selection operator) algorithm to construct a lightweight and interpretable hand grip strength (HGS) estimation model for an IMS. This algorithm automatically identified novel and significant gait predictors from foot motion data and selected optimal features to construct the model. We also tested the robustness and effectiveness of the model by recruiting other groups of subjects. Secondly, we designed an analog frailty risk score that combined the performance of the HGS and gait speed with the aid of the distribution of HGS and gait speed of the older Asian population. We then compared the effectiveness of our designed score with the clinical expert-rated score. We discovered new gait predictors for HGS estimation via IMSs and successfully constructed a model with an “excellent” intraclass correlation coefficient and high precision. Moreover, we tested the model on separately recruited subjects, which confirmed the robustness of our model for other older individuals. The designed frailty risk score also had a large effect size correlation with clinical expert-rated scores. In conclusion, IMS technology shows promise for long-term daily frailty monitoring, which can help prevent or manage frailty for older adults.
Background
Typically, skeletal muscle mass begins to decline gradually at around age 45, after reaching its peak in the early adult years [1]. Additionally, gait speed, which has been deemed the sixth vital sign [2], significantly decreases in older adults after age 60 [3]. The decline in skeletal muscle mass and gait speed below a critical threshold may result in physical functional impairments that limit mobility, such as walking, climbing stairs, and crossing over obstacles [4]. These impairments may lead to sarcopenia or frailty in older adults [5] (see Figure 1a).
Although the relationship between sarcopenia and frailty has yet to be fully characterized, these conditions share many commonalities. Both are linked to physical functional impairment, and sarcopenia is an age-related, long-term process that involves the loss of muscle mass and strength, affecting mobility and nutritional status [6][7][8]. Additionally, physical frailty may result in sedentary behavior, cognitive impairment, and social isolation [9]. Frailty is closely associated with various detrimental outcomes for older adults, such as an increased risk of falls and fractures, an impaired ability to perform daily activities, and other adverse health outcomes.

Generally, IMSs transmit detailed waveforms wirelessly to a smartphone or server for further analysis, which consumes a significant amount of power. As a result, these IMSs need to be charged frequently, reducing their usability for practical applications. In a previous study, we developed a new type of IMS that is small and lightweight, can be attached to insoles, and has optimally designed power-saving operation sequences and modes for practical applications. Our study showed that this IMS achieved high usability for long-term daily measurement without the need for battery charging for up to one year [29]. One key feature contributing to power savings is that our IMS can perform simple data processing and calculate common spatiotemporal GPs, such as gait speed, stride length, and stance phase duration, from inertial measurement unit (IMU) signals. We have named this type of IMS A-RROWG®. These features enable A-RROWG to collect daily gait data over long periods, regardless of location and time, without the user noticing the sensor's presence. The research question for Step 1 is how to construct an HGS assessment model that is feasible for an A-RROWG-type IMS and that can be proven effective. However, to the best of our knowledge, no technology has been developed for assessing HGS performance using IMSs.
Ideas for Solving the Research Question in Step 1
Due to the characteristics of A-RROWG, the HGS assessment model must be lightweight enough to be implemented on it. Therefore, rather than applying recent machine learning methods that require a large computation capacity [31], we focused on developing a lightweight, high-precision estimation model via linear multivariate regression with a minimum number of predictors required. This development included two tasks: (1) identifying predictors that highly correlate with HGS and (2) reducing redundant predictors via feature selection.
Gait speed has been suggested to correlate with HGS [32,33], indicating that gait features might be a useful predictor for HGS assessment. However, gait speed is not a specific predictor for HGS as it can also be influenced by other factors, such as knee osteoarthritis [34] or depression [35], making it challenging to construct an accurate model.
To address this limitation, we proposed considering additional potential predictors for HGS assessment. Previous research has demonstrated that HGS correlates with knee extension muscles, specifically the quadriceps [36,37], which play a crucial role in walking. Since gait is a periodic movement, the same motions using muscles are repeated during specific gait phases in every gait cycle (GC). Although the quadriceps do not directly control foot motion, they should impact foot motion through their control of the knee joint and lower leg. Therefore, we considered predictors for HGS assessment that can be determined from foot motion signals during specific gait phases, specifically those gait phases where the quadriceps are activated.
For the second task of selecting appropriate predictors, several techniques, such as LASSO [38], Bayesian methods such as Bayesian LASSO [39], deep learning methods for sparse learning [40], and multi-objective optimization methods [41], have been proposed. However, multi-objective optimization methods are suitable for optimizing multiple conflicting objectives simultaneously, which is not within the scope of linear regression methods utilized in our study. LASSO and Bayesian LASSO are more feasible alternatives, but Bayesian LASSO may require more substantial expertise to interpret results accurately. As such, we chose to apply LASSO for feature selection.
In conventional LASSO, cross-validation approaches [42] are commonly used to select the LASSO tuning parameter value. However, these techniques typically select training and validation sets randomly, without considering variation between individuals. To ensure model robustness and account for individual differences, we combined LASSO with a leave-one-subject-out (LOSO) process. This approach involved running multiple LASSO analyses by looping the LOSO process over all subjects, conceptually similar to the jackknife resampling method [43], to approximate the nature of the population estimator and improve model robustness against individual differences.
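To make the LOSO-LASSO idea concrete, the MATLAB sketch below runs one LASSO fit per left-out subject and keeps predictors selected in most folds; the 80% stability threshold and the use of the one-standard-error model are our illustrative assumptions, not the authors' exact settings.

```matlab
% Sketch of LOSO-LASSO feature selection. X (nObs x nPred), y (nObs x 1), and
% subjID (nObs x 1) are assumed to exist; lasso() and fitlm() are from the
% Statistics and Machine Learning Toolbox.
subjects = unique(subjID);
nPred    = size(X, 2);
selected = false(numel(subjects), nPred);

for s = 1:numel(subjects)
    train = subjID ~= subjects(s);               % leave one subject out
    [B, info] = lasso(X(train, :), y(train), 'CV', 10);
    bestB = B(:, info.Index1SE);                 % sparsest model within 1 SE of best
    selected(s, :) = (bestB ~= 0)';
end

keep = mean(selected, 1) >= 0.8;                 % hypothetical stability threshold
mdl  = fitlm(X(:, keep), y);                     % final multivariate linear model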
In our previous studies, we developed an algorithm capable of automatically extracting novel significant gait predictors from foot motion, selecting optimal features, and constructing an assessment model, validated for estimating adults' foot function and older adults' balance ability measured by the outcome of a functional reach test (FRT) [44,45]. In this study, we constructed an HGS estimation model using this algorithm via the following steps:
1. Identifying significant gait phases with a statistically significant correlation with the target variable using statistical parametric mapping (SPM) [46], which has been proven effective in biomechanical studies. The significant gait phases always appeared continuously, forming clusters on the temporal axis; we therefore call them "gait phase clusters" (GPCs).
2. Constructing predictors by averaging the foot motion signals within the GPCs to obtain IMS predictors that can be implemented on the A-RROWG-type IMS. Although clustering algorithms exist, such as community detection algorithms [47], the temporal continuity of foot motion means that using the integral average of the signal within a GPC as a single predictor is sufficient and convenient for implementation on the A-RROWG-type IMS.
3. Reducing redundant predictors and selecting appropriate predictors using our original algorithm, the leave-one-subject-out least absolute shrinkage and selection operator (LOSO-LASSO).
4. Constructing a multivariate linear regression estimation model.
We refer to our approach as SPM-LOSO-LASSO, which aids in constructing a biomechanically interpretable HGS estimation model that is both lightweight enough for implementation on an edge device and precise in its predictions. In a previous study, we demonstrated the construction and operation of the IMS predictors on an A-RROWG-type IMS [44]. In this study, we have incorporated individual physical attributes (IPAs), such as age, height, weight, and body mass index (BMI), and designed GPs, including previously proposed temporal and spatial GPs (we list them in Section 2.4), as auxiliary predictors to enhance the model's precision. Considering the gait variance between biological sexes [48], we have constructed separate estimation models for males and females. Some of our findings in this report are based on the work presented at the 44th International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2022) [49].
Step 2 to Goal: Related Work on Frailty Assessment and Designing a Frailty Risk Score
Research Question in Step 2
Aside from the cardiovascular health study (CHS) criteria, there are alternative methods for diagnosing frailty in clinical practice. Examples include the phenotype model [50] and accumulated deficit model [51]. To assess frailty levels in daily living, several techniques based on wearable sensor measurements have been proposed [22]. For instance, using wearable motion sensors, Schwenk et al. [52] conducted home assessments of established gait outcomes to identify pre-frailty and frailty. Razjouyan et al. [53] utilized a pendant motion sensor to develop a composite model for discriminating three frailty categories: non-frail, pre-frail, and frail. In addition, Greene et al. [54] aimed to create an automatic, non-expert quantitative assessment of the frailty state based on wearable inertial sensors.
However, previous research studies focused solely on discriminating two or three frailty levels. The transition from non-frail to pre-frail, or from pre-frail to frail, is a gradual, long-term process. According to a previous study [55], the pooled incidence rate of pre-frailty was 15.1% and that of frailty was 4.3%, based on multiple cohort studies. Given that body performance tends to decline with age in the absence of intervention, it is reasonable to hypothesize that the higher the frailty risk of the current condition, the greater the likelihood of future deterioration. To help users adequately delay and manage frailty progression, merely classifying frailty levels is considered insufficient. Consequently, the research question in Step 2 is how to construct an analog frailty risk metric and demonstrate its effectiveness.
Ideas for Solving the Research Question in Step 2
An analog frailty risk score could prove beneficial for various reasons, such as providing users with an intuitive representation of their body condition's long-term changes, enabling a more comprehensive user rating, and demonstrating the effects of exercise. Given that HGS and gait speed are critical factors in current frailty assessment, we assert that their performance must feature significantly in frailty risk assessment. Consequently, we developed a frailty risk score in this study by merely combining the HGS and gait speed performance of the subjects. Moreover, we utilized the HGS distribution [56] and gait speed data for the Asian population aged over 60 years [57,58] to design our frailty risk score.
Testing Constructed HGS Estimation Model and Frailty Risk Score
After constructing and validating the model, we conducted two separate tests on a group of older healthy adults who were recruited independently from those used for constructing the model.
The first test involved examining the precision of the HGS assessment model on the separately recruited subjects.
The second test involved testing the effectiveness of our original frailty risk score, which was used to demonstrate the possibility of evaluating frailty via IMSs in subjects who were also recruited separately. These subjects were rated using a continuous score ranging from 0 to 100 by experts, including clinicians and physiotherapists with over 5 years of experience, who observed their gait. The score served as a reference for their risk of frailty. We tested the correlation coefficient between the designed score and the expert-rated score.
The Development Process and Main Contributions in this Study
In summary, Figure 1c presents a diagram that outlines the development process for achieving frailty risk assessment via the A-RROWG-type IMS. The main contributions of this study are as follows:
(1) We discovered novel predictors for HGS assessment obtained from foot motions.
(2) We constructed a lightweight HGS assessment model that can feasibly be implemented in the A-RROWG-type IMS, which serves as a key module for long-term frailty assessment.
(3) We tested the effectiveness and robustness of the constructed model on a group of separately recruited subjects.
(4) We designed an analog frailty risk score and evaluated its effectiveness for frailty risk assessment via an IMS.
The acronyms and symbols used in this manuscript can be referenced in Table A1 in Appendix A.
Subjects and Their Characteristics
To contribute to potential applications for frailty prevention, as well as postponing and managing its progression, we recruited healthy older adults who could participate in the experiment independently. We recruited three separate groups of healthy older subjects with different ages, heights, and weights for model construction (Group I), Test 1 (Group II+III, combining data in Group II and Group III together), and Test 2 (Group III). We successfully collected data from 62 subjects (27 males and 35 females) for Group I, 20 females for Group II, and 25 subjects (6 males and 19 females) for Group III. All subjects were able to walk independently without assistive devices, had no history of severe neuromuscular or orthopedic diseases, had normal or corrected-to-normal vision, and had no communication obstacles. After explaining the experimental procedure to the subjects, we obtained their informed consent before the experiment. This study received approval from the NEC Ethical Review Committee for Life Sciences (Approval No. LS2021-004, 2022-002) and the Ethical Review Board of Tokyo Medical and Dental University (Approval No. M2020-365). The demographic data are summarized in Table 1, with HGS and gait speed serving as reference values. The average age of both male and female subjects in all three groups was over 70 years old. Although the average BMIs indicate that most subjects had a normal body mass, we ensured that subjects with a wide range of body mass were recruited, including those with maximum and minimum BMIs. In Group I, male and female subjects had similar age characteristics (p = 0.755), and no significant sex difference in gait speed was found (p = 0.453). The data also show that the female subjects for model construction (Group I) were similar in age to those for model testing (Group II+III) (p = 0.604), as well as in terms of HGS and gait speed (HGS: p = 0.395; gait speed: p = 0.265). However, compared with the male subjects in the two groups, the age in Group II+III was higher than that in Group I (p = 0.040). Although there was no significant difference in gait speed (p = 0.052), due to age, the HGSs in Group II+III were much lower (p = 0.021). When comparing female subjects in Groups I and III, no significant differences in age, HGS, and gait speed were found between them (p = 0.058, 0.102, 0.972). According to the J-CHS scores of the subjects in Group III, 60% of the subjects self-assessed themselves as not being frail, and none of them assessed themselves as frail. Further details on how the J-CHS scores were calculated for the subjects are presented in Section 2.2.
Experiment
To achieve our final goal, we collected five types of data from subjects in an indoor environment by performing the following steps:
Step 1: At the start of the experiment, all subjects were asked to complete a questionnaire providing basic information, including age, height, and weight, from which BMI was calculated.
Step 2: The same questionnaire included four questions based on the J-CHS criteria:
Q1. Have you lost more than 2-3 kg in the past 6 months?
Q2. In the past two weeks, have you felt tired for no reason?
Q3. Do you engage in light exercise or gymnastics at least once a week?
Q4. Do you engage in regular exercise or sports at least once a week?
From the four questions, we calculated the J-CHS score for each subject as subjective frailty reference data.
Step 3: After answering the questionnaire, subjects were guided through measuring their HGS, which served as the reference HGS value in this study.
Step 4: Subjects were asked to walk in a straight line. In this step, we collected foot motion data for calculating GPs and IMS predictors, as well as reference gait speed data, for all subjects. Additionally, video recordings were made of the subjects in Group III while they walked.
Step 5: We sent the walking videos to clinical experts to obtain expert-rated frailty risk scores as objective frailty reference data.
Further details on Steps 2 through 5 are explained in the following subsections.
Step 2 of the Experiment
For Q1 and Q2, one point was added for each "yes" answer. For Q3 and Q4, one point was added if both questions were answered with "no"; if either was answered with "yes", no point was added. Finally, we checked whether the reference HGS and gait speed were below the thresholds specified in the J-CHS criteria and summed all points to obtain each subject's total J-CHS score. Subjects who scored 0, 1-2, and higher than 2 were classified as "Robust", "Pre-frail", and "Frail", respectively.
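The MATLAB sketch below implements this scoring rule; the function name is ours, and the HGS and gait-speed cut-offs are left as inputs because their exact J-CHS values are not restated in this section.

```matlab
% Sketch: J-CHS scoring as described above. q1..q4 are logical answers to Q1-Q4
% ("yes" = true); hgsCut and speedCut are the J-CHS weakness/slowness cut-offs.
function [score, category] = jchsScore(q1, q2, q3, q4, hgs, gaitSpeed, hgsCut, speedCut)
    score = double(q1) + double(q2);          % one point per "yes" for Q1 and Q2
    if ~q3 && ~q4                             % one point only if Q3 and Q4 are both "no"
        score = score + 1;
    end
    score = score + double(hgs < hgsCut);         % weakness criterion
    score = score + double(gaitSpeed < speedCut); % slowness criterion

    if score == 0
        category = "Robust";
    elseif score <= 2
        category = "Pre-frail";
    else
        category = "Frail";
    end
end
```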
Step 3 of the Experiment
To assess the HGS of the subjects, we used a Jamar hydraulic hand dynamometer (Lafayette Instrument Company, Lafayette, IN, USA). The measurement followed the method suggested in a previous study [15], as shown in Figure 2a. Subjects were asked to sit on an armchair with their elbow flexed at 90°, without touching the chair arms. The Jamar is a variable hand-span dynamometer with five handle positions; it was set to handle position "two", and both hands were measured three times with subjects exerting their best effort. To determine the representative HGS of each subject, we calculated the mean value of the six measurements. This mean value served as the reference value for HGS in this study. Figure 2. Schematic of (a) the HGS measurement: subjects sat on an armchair with the elbow in 90° flexion, without the elbow touching the chair arms; the dynamometer was set at handle position "two". (b) The structure of the IMS (left side): the IMS was embedded in an insole placed under the foot arch near the calcaneus side and then inserted into a sport shoe.
Step 4 of the Experiment
To collect foot motion data, the subjects were asked to walk straight along 16 m lines for four trials at a self-determined comfortable speed. Before data collection, they were given a 2-min practice session to familiarize themselves with the environment and procedure. While walking, their foot motions were recorded by two IMSs embedded in insoles placed under the arches of both feet near the calcaneus side (see Figure 2b). This placement ensured that the subjects could walk comfortably. Please note that during the feasibility study stage of this study, foot motion data were temporarily recorded onto the onboard memory during experiments and would later be transferred to a personal computer for data processing. The characteristics of the IMS are described in Section 2.3.
The time taken by each subject to walk 10 m along the 16 m lines was recorded using a digital stopwatch to calculate their average gait speed when walking at a uniform pace. This speed was treated as the reference value for gait speed in this study. Subjects in Group III were also recorded while walking by two video cameras placed at the side and end of the walking path. To protect their privacy, their faces were obscured.
Step 5 of the Experiment
After gait data collection was finished, the videos were sent to six clinical experts in gait evaluation. They were asked to score each subject's risk of "being diagnosed with frailty within the next 5 years" on a 100-point scale by observing their gait. The subject considered by a rater to have the highest risk was rated 100, and the subject with the lowest risk was rated 0; the relative frailty risk of the remaining subjects, compared with the highest- and lowest-risk subjects, was scored between 0 and 100. Each subject thus received six scores. Apart from the recorded videos, the raters were not given any personal information about the subjects.
Characteristics of IMS
The IMSs used in this study have the same structure as A-RROWG-type IMSs. Each IMS consists of a 6-axis IMU (BMI 160, Bosch Sensortec, Reutlingen, Germany), an ARM Cortex-M4F microcontroller unit (MCU) with Bluetooth module (nRF52832, CPU: 64 MHz, RAM: 64 KB, ROM: 512 KB, Nordic Semiconductor, Oslo, Norway), onboard memory (AT45DB641, 64 Mbit, Adesto Technologies, Santa Clara, CA, USA), a real-time clock (RTC) (RX8130CE, EPSON, Suwa, Japan), a control circuit, and a 3V coin lithium-ion battery (CLB2032 T1, 300 mAh, Maxell, Tokyo, Japan). The device is lightweight (12 g, including the coin battery) and compact (29 mm × 40 mm × 7 mm) enough to be placed at the arch of the foot. Please note that during the feasibility study stage, the IMSs were set to developer mode, which differed from A-RROWG in that all calculations were performed on the device. Under this mode, raw foot motion waveform data were first recorded on the IMSs' onboard memory and then sent to a PC via Bluetooth after the experiment. We developed dedicated software for controlling data recording start and end in the IMSs and for downloading raw data from the onboard memory of the IMSs to a PC via Microsoft Visual Studio (Microsoft, Redmond, WA, USA).
Signal Processing and GPs
For all data processing, simulation, and model construction tasks, MATLAB (MathWorks, Natick, MA, USA) was used in this study.

To construct the HGS estimation model via the SPM-LOSO-LASSO algorithm, predictors from three categories were required: IPAs, temporospatial GPs, and IMS predictors. The temporospatial GPs and IMS predictors were obtained by processing single strides of the foot motion waveform. In this section, we explain the procedures used to obtain the GP predictors. The flow chart is shown in Figure 3.
To construct the HGS estimation model via the SPM-LOSO-LASSO algorithm, predictors from three categories were required: IPAs, temporospatial GPs, and IMS predictors. Temporospatial GPs and IMS predictors were obtained by processing one stride of the foot motion waveform. In this section, we explain the procedures used to obtain GP predictors. The flow chart is shown in Figure 3. 23, 5446 11 second task focused on calculating the GPs that were extracted from each stride of th motion waveform. The third task was to obtain a set of average foot motion wavef and GPs in each trial. For the first task, to prepare the nine-dimensional foot motion signals from the for analysis, the signals were partitioned into individual strides by detecting a heel-(HS) event [60]. The IMS signal during the stance phase was then temporally norma to a 1-60% gait cycle (%GC), while the swing phase was normalized to 61-100%G create a 9 × 100 matrix. To eliminate potential biases, we subtracted the average s amplitude during 21-25%GC from each stride assuming that these phases, where th sole fully touches the ground, can be represented as a neutral posture. Additional exclude any walking velocity bias in foot motion, we normalized the amplitude of eration and angular velocity waveform of each stride using the corresponding maxi instantaneous velocity during a stride. The instantaneous walking velocity was comp by integrating Ay from a neutral posture to the end of the stride. It is worth noting th excluded the first and last three strides of each trial, as they were not uniform in s Furthermore, we removed any gait outliers from the remaining strides of each partici following the exclusion criteria outlined in [61].
023,
Before temporal normalization, we derived 20 temporal and spatial GPs [29,62] each stride of the foot motion waveform using the algorithm depicted in [29,62]. T parameters are listed in Table 2. GP01, GP05, and GP06 were normalized by subject h GP11-14, GP19, and GP20 were normalized by the duration of one stride. GP15, GP16 GP18 were normalized by the maximum instantaneous walking velocity during the s phase.
We then calculated the average foot motion and GPs for each trial on the left and During the preliminary stage, two primary tasks were completed. The first task involved processing every stride of the foot motion waveform into data matrices. The second task focused on calculating the GPs that were extracted from each stride of the foot motion waveform. The third task was to obtain a set of average foot motion waveforms and GPs in each trial.
For the first task, to prepare the nine-dimensional foot motion signals from the IMSs for analysis, the signals were partitioned into individual strides by detecting a heel-strike (HS) event [60]. The IMS signal during the stance phase was then temporally normalized to a 1-60% gait cycle (%GC), while the swing phase was normalized to 61-100%GC to create a 9 × 100 matrix. To eliminate potential biases, we subtracted the average signal amplitude during 21-25%GC from each stride assuming that these phases, where the foot sole fully touches the ground, can be represented as a neutral posture. Additionally, to exclude any walking velocity bias in foot motion, we normalized the amplitude of acceleration and angular velocity waveform of each stride using the corresponding maximum instantaneous velocity during a stride. The instantaneous walking velocity was computed by integrating A y from a neutral posture to the end of the stride. It is worth noting that we excluded the first and last three strides of each trial, as they were not uniform in speed. Furthermore, we removed any gait outliers from the remaining strides of each participant, following the exclusion criteria outlined in [61].
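A minimal MATLAB sketch of this per-stride preprocessing is shown below; the variable names and the split into stance and swing matrices are our assumptions for illustration.

```matlab
% Sketch: temporal and baseline normalization of one stride of 9-channel IMU data.
% stance (9 x Ns) and swing (9 x Nw) are assumed to hold the raw samples of the
% stance and swing phases of a single stride.
stanceN = interp1(linspace(0, 1, size(stance, 2))', stance', linspace(0, 1, 60)')';
swingN  = interp1(linspace(0, 1, size(swing, 2))',  swing',  linspace(0, 1, 40)')';
strideN = [stanceN, swingN];              % 9 x 100 matrix spanning 1-100 %GC

baseline = mean(strideN(:, 21:25), 2);    % neutral posture: foot flat on the ground
strideN  = strideN - baseline;            % remove the per-channel bias
```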
Before temporal normalization, we derived 20 temporal and spatial GPs [29,62] from each stride of the foot motion waveform using the algorithm depicted in [29,62]. These parameters are listed in Table 2. GP01, GP05, and GP06 were normalized by subject height. GP11-14, GP19, and GP20 were normalized by the duration of one stride. GP15, GP16, and GP18 were normalized by the maximum instantaneous walking velocity during the swing phase. (Table 2 notes: duration of HS to foot flat (s) and GP20, duration of foot flat (s); GP: gait parameter; Deg: degree. GP01-GP07 were calculated using the method of Fukushi et al. [29]; GP13, GP14, GP19, and GP20 using the method of Huang et al. [62].)
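As a small illustration of these normalization rules, the snippet below scales a set of raw GP values. The mapping of each parameter to its divisor follows the text above, while the function and variable names are ours.

```python
def normalize_gps(gp, height_m, stride_time_s, v_max_swing):
    """Normalize selected raw GPs; gp is a dict keyed by 'GP01'...'GP20'."""
    out = dict(gp)
    for k in ("GP01", "GP05", "GP06"):            # lengths -> by subject height
        out[k] = gp[k] / height_m
    for k in ("GP11", "GP12", "GP13", "GP14", "GP19", "GP20"):
        out[k] = gp[k] / stride_time_s            # times -> by stride duration
    for k in ("GP15", "GP16", "GP18"):            # velocities -> by max swing velocity
        out[k] = gp[k] / v_max_swing
    return out
```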
We then calculated the average foot motion and GPs for each trial on the left and right feet for each subject. The data of the left and right feet were further averaged within each trial. This resulted in each participant having four sets of average foot motions and GPs. Thus, a total of 108 and 140 datasets were generated for males and females in Group I, respectively, and 24 and 156 datasets were generated in Group II+III for males and females, respectively. These processed average waveforms were used to determine new predictors for HGS estimation.

In this section, we explain the process of constructing and selecting predictors for HGS estimation via SPM-LOSO-LASSO, following the steps depicted in Figure 4a [44]. Here, IMS predictor processing is part of SPM-LOSO-LASSO.
To construct IMS predictors from foot motion signals that are significantly correlated with HGS outcome, it is necessary to determine the %GCs that have a significant correlation. For this purpose, we used SPM, a widely used and effective method in biomechanical studies [46,63]. We performed SPM analysis to evaluate the correlation between HGS outcomes and foot motion signals at each %GC. SPM for correlation analysis is a stepwise process. First, a canonical correlation analysis (CCA) with SPM (SPM-CCA) was performed [46]. The %GCs whose test statistic in the CCA exceeded a critical test statistic threshold calculated in accordance with the random field theory (RFT) [64] were determined as significant %GCs. The level of significance was set as p < 0.05. Second, as a post hoc test, only data in significant %GCs were further investigated by Pearson's correlation (PeC) analysis with SPM (SPM-PeC) for each component of the foot motion signal. For each component, the %GCs whose test statistic in the PeC exceeded an RFT-based critical test statistic threshold were judged as the final HGS-correlated significant %GCs for each component. Because there were nine components in the foot motion signals, we conducted Šidák correction [65] at a level of correlation significance where p c < 0.0057.
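For completeness, the corrected significance level quoted above follows directly from the Šidák formula for nine simultaneous comparisons; the family-wise alpha of 0.05 comes from the text, and everything else below is generic.

```python
# Šidák correction for m = 9 foot motion signal components.
family_alpha = 0.05
m = 9
alpha_sidak = 1 - (1 - family_alpha) ** (1 / m)
print(f"per-comparison alpha: {alpha_sidak:.4f}")  # ~0.0057, matching p_c < 0.0057
```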
Based on biomechanical knowledge, we limited the predictors to the range of approximately 1-16%GC, 48-70%GC, and 92-100%GC, where the quadriceps are mostly activated. These defined quadricep-activation %GCs were used as a filter, denoted as Q t . The intersection between the %GCs judged by SPM to be HGS-correlated and the Q t was taken to exclude the %GCs not related to quadricep activities. The intersections were treated as GPCs. The integral average of the signal in GPCs was then used as an IMS predictor, as expressed by Equation (1):
$$C_i = \frac{1}{T_e - T_s} \int_{T_s}^{T_e} W(t)\,dt \qquad (1)$$

where C i means the i-th IMS predictor; T s and T e mean the start and end %GCs of the GPC, respectively; and W means the waveform of the foot motion signal.
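On the discretized 100-point gait cycle, Equation (1) reduces to a mean over the GPC samples. The sketch below makes that simplification explicit; the function and variable names are illustrative.

```python
import numpy as np

def ims_predictor(waveform, t_s, t_e):
    """Integral average of one foot motion channel over a GPC.

    waveform: (100,) array of one signal component over 1-100%GC;
    t_s, t_e: start and end of the GPC in %GC (1-indexed, inclusive).
    """
    segment = waveform[t_s - 1:t_e]   # samples inside the GPC
    return float(segment.mean())      # discrete form of Equation (1)
```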
After collecting the subjects' IPAs, GPs, and IMS predictors, we formed predictor candidates for model construction. We used our original algorithm, LOSO-LASSO [44], along with the "lasso" function in MATLAB to determine the best selection of predictors. We obtained multiple LASSO analysis results by looping the LOSO process for all subjects. By statistically analyzing these results, we can approximate the nature of the population estimator and thereby make the LASSO analysis more robust against individual differences.

The details of LOSO-LASSO are shown in Figure 4b. In the u-th LOSO process, the data of the u-th subject are first excluded, and the remaining data are then subjected to LASSO analysis. LASSO solves the following problem:

$$\left(\boldsymbol{\beta}_i,\ \beta_{i0}\right) = \underset{\boldsymbol{\beta},\,\beta_0}{\arg\min} \left[ \frac{1}{2N} \sum_{k=1}^{N} \left( y_k - \mathbf{x}_k^{T}\boldsymbol{\beta} - \beta_0 \right)^2 + \lambda_i \sum_{j=1}^{C} \left| \beta_j \right| \right] \qquad (2)$$

Here, N is the amount of data. y k is the target variable. x k is the predictor vector of length C. λ i is a non-negative regularization parameter input to LASSO, which can be set freely. β i is the set of fitted least-squares regression coefficients, and β i0 is the intercept of the linear regression y k = x k T β i + β i0 , corresponding to λ i , which is also the output of LASSO. β ij is the j-th element of β i . As λ i increases, the number of nonzero components of β i decreases. For optimizing feature selection, we set 100 different λ i 's which formed a geometric sequence to compose a 100-dimensional regularization parameter vector λ; thus, the index i here means the i-th element of λ. In each LOSO, 100 β i 's formed a coefficient matrix. Then, we substituted nonzero elements in LASSO coefficient matrices with 1 to form label matrix B u .
This process is repeated for each subject. After completion of the LOSO process, we can obtain U sets of B u 's. By summing all B u 's, we obtain a matrix with a total counter B 0 . The elements over 0.95 × U (25 for males and 33 for females) in this matrix are substituted with 1, while the remaining elements are substituted with 0, forming the final label matrix B. LOSO-LASSO generates 100 types of predictor combinations (denoted as Ω 1 -Ω 100 ) based on different regularization coefficient sets in LASSO. Using these features, 100 different candidate multivariate regression models can be obtained for the dataset. We evaluated 100 candidate models (H 1 -H 100 ) for estimating HGS using leave-one-subject-out cross-validation (LOSOCV) and the intraclass correlation coefficient (ICC) of type (2, 1) as the evaluation index, denoted as ICC(2, 1). The model with the highest ICC(2, 1) value was chosen as the optimal model (M o ).
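The voting scheme above can be mirrored in a few lines. The sketch below uses scikit-learn's lasso_path as a stand-in for MATLAB's lasso, which is what the paper actually uses; the 0.95 threshold follows the text, while the data shapes, names, and lambda range are illustrative.

```python
import numpy as np
from sklearn.linear_model import lasso_path

def loso_lasso_labels(X, y, subjects, n_lambda=100, keep_frac=0.95):
    """Final 0/1 label matrix B with shape (n_lambda, n_features).

    X: (N, C) standardized predictor candidates (lasso_path fits no intercept,
    so X and y should be centered beforehand); y: (N,) HGS targets;
    subjects: (N,) subject IDs used for leave-one-subject-out exclusion.
    """
    ids = np.unique(subjects)
    U = len(ids)
    # One geometric sequence of regularization parameters shared by all folds;
    # lasso_path sorts alphas in decreasing order internally, identically in
    # every fold, so the rows of the counter stay aligned across folds.
    lambdas = np.geomspace(1e-3, 1e1, n_lambda)    # illustrative range
    B0 = np.zeros((n_lambda, X.shape[1]))
    for u in ids:
        keep = subjects != u                        # exclude the u-th subject
        _, coefs, _ = lasso_path(X[keep], y[keep], alphas=lambdas)
        B0 += (coefs.T != 0).astype(int)            # this fold's label matrix B_u
    # Keep predictors selected in at least 95% of folds; the exact rounding
    # convention at the threshold is our assumption.
    return (B0 >= keep_frac * U).astype(int)
```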
Model Evaluation of HGS and Precision Evaluation of Gait Speed
After selecting M o , we used LOSOCV to evaluate the degree of agreement and precision between the reference and estimated HGS, using the ICC(2, 1) and mean absolute error (MAE). Additionally, we evaluated the adjusted coefficient of determination (R 2 ) for the multivariate regression models using all training data (not LOSOCV) and the Pearson's coefficient of correlation (r) between predictors and the outcome of HGS. For comparison, we derived models by optimizing three other patterns of predictor combinations in the same process: M 1 (gait speed (GP02)), M 2 (M 1 plus other GPs in one stride), and M 3 (M 2 plus IPAs) (see Figure 4c).
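For readers who want to reproduce the agreement metrics, below is a compact implementation of ICC(2,1) (two-way random effects, absolute agreement, single measurement, following Shrout and Fleiss) and MAE. Stacking the reference and estimated HGS as two "raters" is the standard construction; all names here are ours.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1), two-way random effects, absolute agreement, single measure.

    data: (n_targets, k_raters) matrix, e.g. reference and LOSOCV-estimated
    HGS stacked as two columns.
    """
    n, k = data.shape
    grand = data.mean()
    rows = data.mean(axis=1)
    cols = data.mean(axis=0)
    msr = k * ((rows - grand) ** 2).sum() / (n - 1)    # between-target MS
    msc = n * ((cols - grand) ** 2).sum() / (k - 1)    # between-rater MS
    sse = ((data - rows[:, None] - cols[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                    # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def mae(ref, est):
    return float(np.mean(np.abs(np.asarray(ref) - np.asarray(est))))
```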
We evaluated the average value of gait speed measured by the IMS in one trial and used ICC(2, 1) and MAE to assess the agreement and precision between the reference and measured values.
Designing Frailty Risk Score
We assumed that the distribution of HGS and gait speed of our subjects would follow a normal distribution similar to that of the population of older Asian adults. According to [56], the mean values of HGS for males (N = 12,190) and females (N = 14,154) over 60 years old are 34.7 and 21.9 kg, respectively, and the standard deviations are 7.1 and 4.8 kg, respectively. In [57], the baseline demographic and health characteristics of 1686 community-dwelling Japanese were demonstrated, and no significant difference in gait speed was observed between sexes. Thus, the calculated mean value and standard deviation of gait speed for all subjects were 1.29 and 0.24 m/s, respectively.
We utilized a probability-distribution-based method to design the frailty risk score. First, we calculated the Z-score of the HGS performance of males and females using Equations (3) and (4), respectively, and that of gait speed using Equation (5), using the mean value and standard deviation of HGS and gait speed for older Asian adults in [56,57]:

$$Z_{HGS\_m} = \frac{HGS_m - 34.7}{7.1} \qquad (3)$$

$$Z_{HGS\_f} = \frac{HGS_f - 21.9}{4.8} \qquad (4)$$

$$Z_{GS} = \frac{GS - 1.29}{0.24} \qquad (5)$$
Here, HGS m , HGS f , and GS are the HGS of the male subjects, the HGS of the female subjects, and the gait speed of all subjects (no sex difference). Z HGS_m and Z HGS_f denote the Z-scores of the HGS performance of males and females, and Z GS denotes the Z-score of the gait speed performance, all referenced to the standard normal distribution.

Because Z-scores can theoretically range from −∞ to +∞, to constrain the score to 0 to 100, we used the cumulative percentage of the standard normal distribution as the frailty risk score, which was calculated via the Z-scores mentioned before. Then, to ensure that the scores were still in the range of 0 to 100, we propose performance scores of HGS for males and females as Equations (6) and (7) and the performance score of gait speed as Equation (8):

$$P_{HGS\_m} = 100\,\Phi(Z_{HGS\_m}) \qquad (6)$$

$$P_{HGS\_f} = 100\,\Phi(Z_{HGS\_f}) \qquad (7)$$

$$P_{GS} = 100\,\Phi(Z_{GS}) \qquad (8)$$

where Φ is the cumulative distribution function of the standard normal distribution. P HGS_m and P HGS_f denote the designed score of the HGS performance of males and females, and P GS denotes the designed score of the gait speed performance. By following the calculation process described above, we eliminated the sex difference in the HGS distribution. Thus, the scores for males and females had the same distribution and could be discussed together. Finally, to reflect the equal weight given to HGS and gait speed in the J-CHS criteria, we propose a frailty risk score (P fr ) by combining the performance of the two, as expressed by Equation (9):

$$P_{fr} = \frac{P_{HGS} + P_{GS}}{2} \qquad (9)$$
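Under our reconstruction of Equations (3)-(9) above, the score computation reduces to a few lines with SciPy. The population constants come from [56,57]; the function names are ours.

```python
from scipy.stats import norm

# Population means and SDs for older Asian adults, taken from [56,57].
HGS_STATS = {"m": (34.7, 7.1), "f": (21.9, 4.8)}   # kg
GS_MEAN, GS_SD = 1.29, 0.24                         # m/s

def frailty_risk_score(hgs_kg, gait_speed_ms, sex):
    """P_fr in [0, 100]; higher values indicate better performance."""
    mu, sd = HGS_STATS[sex]
    p_hgs = 100 * norm.cdf((hgs_kg - mu) / sd)                # Eqs. (3)/(4), (6)/(7)
    p_gs = 100 * norm.cdf((gait_speed_ms - GS_MEAN) / GS_SD)  # Eqs. (5), (8)
    return (p_hgs + p_gs) / 2                                 # Eq. (9)

print(frailty_risk_score(30.0, 1.1, "m"))  # example male subject
```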
Test 1

In Test 1, we utilized Bland-Altman (BA) plots [68,69] to assess the limit of agreement (LoA) between IMS-assessed and reference values of gait speed and HGS. We computed both the sample-based LoA and the confidence limits of LoA in the population. To examine the existence of a fixed and proportional bias, we applied a t-test and Pearson's correlation test if the differences and averages between the two methods followed a normal distribution, initially tested by a Kolmogorov-Smirnov (KS) test. The LoA of the 95% confidence interval was established from the perfect agreement (PA) line ± 1.96 × standard deviation (σ), resulting in upper and lower LoAs (ULoA and LLoA). Additionally, the 95% confidence limits of LoA were also determined, which included the upper and lower limits of ULoA (UULoA and LULoA), as well as the upper and lower limits of LLoA (ULLoA and LLLoA). T-tests were used for comparing differences between two groups, and ANOVA was used to compare the differences among three or more groups, with all levels of significance set at p < 0.05.
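The core of the BA analysis is a handful of summary statistics. The sketch below computes the sample-based quantities from paired measurements; the confidence limits of the LoA are omitted for brevity, and all names are ours.

```python
import numpy as np
from scipy import stats

def bland_altman(ref, est):
    """Sample-based Bland-Altman statistics for paired measurements."""
    ref, est = np.asarray(ref), np.asarray(est)
    diff = est - ref
    bias = diff.mean()                  # fixed bias relative to the PA line
    sd = diff.std(ddof=1)
    # The paper centers the LoA on the perfect agreement (PA) line rather
    # than on the mean difference used in many standard BA analyses.
    uloa, lloa = 1.96 * sd, -1.96 * sd
    # Proportional bias: correlation between differences and pairwise means.
    r, p = stats.pearsonr((ref + est) / 2, diff)
    return {"bias": bias, "ULoA": uloa, "LLoA": lloa, "prop_r": r, "prop_p": p}
```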
In the model testing stage, we evaluated the validity of gait speed measurement and HGS estimation based on the ratio of test data in Group II+III, whose BA plots were within the agreement range determined by the model test data for Group I, i.e., the success rate of measurements denoted as K A . We considered the measurement to be successful by the model when the difference between IMS-measured and reference values was located inside the agreement interval, determined by the data of Group I. We used the optimistic agreement range, i.e., the range between UULoA and LLLoA. Because the test data size was limited, we utilized the probability-distribution-based method [44] to estimate K A and eliminate randomness. We set the confidence level to 95%, assuming 5% of the measurements as the outliers in this study. If over 95% of data were inside the agreement interval, K A was considered to be 100%.
In the probability-distribution-based method, we hypothesized that the residuals of the BA plots for training and test data to the PA line, denoted as R A and R T , follow normal distributions, R A ~ N(µ A , σ A ²) and R T ~ N(µ T , σ T ²). Here, the µs and σs denote the means and standard deviations, respectively. Because the model was based on multivariate regression, theoretically, µ A ≡ 0. Furthermore, because of the limited data size, we calculated the 95% confidence levels of µ T , σ A , and σ T and obtained their upper and lower limits, (µ TL , µ TU ), (σ AL , σ AU ), and (σ TL , σ TU ), respectively. Hence, if we use an optimistic agreement range, the agreement range of the residual should be fixed as −1.96σ AL to 1.96σ AU . By then, K A should be the area of N(µ T , σ T ²) inside the interval of −1.96σ AL to 1.96σ AU . Because µ T and σ T are independent of each other, the largest and smallest areas for N(µ Ti , σ Ti ²) subject to µ Ti ∈ [µ TL , µ TU ] and σ Ti ∈ [σ TL , σ TU ] would be the upper and lower limits of K A , denoted as K AU and K AL , which can be expressed by Equations (10) and (11):

$$K_{AU} = \max_{\mu_{Ti} \in [\mu_{TL}, \mu_{TU}],\; \sigma_{Ti} \in [\sigma_{TL}, \sigma_{TU}]} \left[ \Phi\!\left( \frac{1.96\,\sigma_{AU} - \mu_{Ti}}{\sigma_{Ti}} \right) - \Phi\!\left( \frac{-1.96\,\sigma_{AL} - \mu_{Ti}}{\sigma_{Ti}} \right) \right] \qquad (10)$$

$$K_{AL} = \min_{\mu_{Ti} \in [\mu_{TL}, \mu_{TU}],\; \sigma_{Ti} \in [\sigma_{TL}, \sigma_{TU}]} \left[ \Phi\!\left( \frac{1.96\,\sigma_{AU} - \mu_{Ti}}{\sigma_{Ti}} \right) - \Phi\!\left( \frac{-1.96\,\sigma_{AL} - \mu_{Ti}}{\sigma_{Ti}} \right) \right] \qquad (11)$$
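Since the extremizers in Equations (10) and (11) are not obvious in closed form, a small grid search over the confidence boxes is enough in practice. The sketch below follows our reconstruction of those equations; all parameter values shown are placeholders.

```python
import numpy as np
from scipy.stats import norm

def k_a_bounds(mu_t_lim, sigma_t_lim, sigma_a_lim, n_grid=200):
    """Upper and lower limits (K_AU, K_AL) of the measurement success rate.

    Each *_lim argument is a (lower, upper) 95% confidence limit pair for
    the test-residual mean mu_T, test-residual SD sigma_T, and
    training-residual SD sigma_A, respectively.
    """
    lo = -1.96 * sigma_a_lim[0]   # optimistic agreement range, lower end
    hi = 1.96 * sigma_a_lim[1]    # optimistic agreement range, upper end
    mus = np.linspace(mu_t_lim[0], mu_t_lim[1], n_grid)
    sigmas = np.linspace(sigma_t_lim[0], sigma_t_lim[1], n_grid)
    m, s = np.meshgrid(mus, sigmas)
    # Area of N(mu_Ti, sigma_Ti^2) inside [lo, hi] at every grid point.
    area = norm.cdf((hi - m) / s) - norm.cdf((lo - m) / s)
    return area.max(), area.min()  # (K_AU, K_AL)

# Placeholder usage: k_a_bounds((-0.01, 0.02), (0.02, 0.05), (0.02, 0.04))
```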
Test 2
After calculating the P fr s of all subjects in Group III, we compared them with the expert-rated scores and calculated the correlation (r) between them to evaluate the effectiveness of the designed score.
For each subject, we obtained a total of six expert-rated scores. We preliminarily tested the reliability of the six expert-rated scores based on the ICC values. The results showed that the ICC(2, 1) was 0.490 (fair), and ICC(2, k) was 0.850 (excellent). Moreover, the KS test indicated that the mean score of all subjects corresponding to six raters followed the normal distribution (p = 0.987). These results showed that the score indicating a diagnosis of frailty within the next 5 years for the subjects in Group III could be assessed using an average of six expert-rated scores with high reliability. Additionally, as another statistical processing method, we obtained the median values of the six expert-rated scores and the rank of subjects according to each score. For each subject, we then calculated their average rank. Thus, for the other patterns, we used the median value and averaged rank as the reference frailty risk score of the subjects. The correlation analysis between the reference frailty risk score for the other patterns and the designed frailty risk score is shown in the Supplementary Materials.
SPM Analysis in HGS Estimation Model Construction
In a comparison between the males and females, their average waveforms appeared approximately similar. In contrast, the standard deviations of the waveforms, particularly in the frontal and horizontal planes (G y , G z , E y , and E z ), differed more noticeably in shape between the sexes ( Figure 5).
According to the results of the SPM-CCA, a significant correlation was found between the foot motion signal vectors for most of the stance phase and the end of the swing phase (immediately before HS) and the HGSs for both sexes. A post hoc SPM-PeC analysis, represented by statistic SPM{t} curves, revealed the strength of the correlation between each type of foot motion signal and the HGS. Significant GC intervals, referred to as GPCs, were identified in the sections of curves that exceeded critical thresholds and correlated with the HGSs. It is worth noting that the GPCs of the acceleration signals were more fragmented due to the smaller smoothness of the acceleration waveform compared to the angular velocities and sole-to-ground Euler angles. The shape of the statistic SPM{t} curves and the location of the GPCs varied between males and females (see Figure 5).
Consequently, 20 GPCs and 17 GPCs were obtained for males and females, respectively. Filtered by quadricep-activation %GCs (Q t ), 10 GPCs and 14 GPCs ultimately remained for creating the same numbers of IMS predictors.
Feature Selection for HGS Estimation Model
To obtain the final optimal predictor combination M o , consisting of IPA and GP predictors, we inputted a total of 34 and 38 candidate predictors into LOSO-LASSO for males and females, respectively. Referring to Figure 6, we determined M o for males and females by finding the highest ICC(2, 1), which included 16 and 8 finally selected predictors, respectively. The selected predictors for constructing multivariate linear regression and their correlation analyses with the HGS are listed in Tables 3 and 4.
Regarding the IPA predictors, age and height were selected for both males and females, with medium to large effect sizes (age: r = 0.162 and 0.271; height: r = 0.428 and 0.682). In particular, the age for males and height for females had the highest correlation with HGS. These results indicate that the effect of age and body size on HGS was observed. Although the effect size was small (r = 0.209), weight was also selected for the estimation model for males.
Compared to females, more GP predictors were selected for males, with GP16 (r = 0.303, medium effect size; r = 0.199, small effect size) being present in the predictor list for both sexes. This result suggests that subjects with higher HGSs have lower maximum G x in the dorsiflexion direction during the swing phase. Except for GP03, which had a medium effect size (r = 0.338), the remaining GP predictors (GP05, GP08, GP09, GP10, GP18, and GP19) only had effect sizes classified as none or small.

For both males and females, five IMS predictors were ultimately selected (C m12 -C m16 , C f4 -C f8 ) by LOSO-LASSO. The corresponding GPCs are shown in Figure 7. Besides foot motions in the sagittal (Y-Z) plane, such as A y and A z (C m13,14 , C f7 ), those in the frontal (X-Z) and horizontal (X-Y) planes, such as A x , G y , and G z (C m12,15,16 , C f4-6,8 ), were suggested to be essential for HGS estimation. Temporally, major parts of GPCs for females appeared around HS (C f4-6,8 ), where both the rectus femoris (RF) and vastus muscles (VAs) in the quadriceps were mainly activated. In contrast, besides the GPCs (C m12,15,16 ) in the %GCs when both the RF and VAs activated, the male subjects also had more GPCs (C m13,14 ) inside the %GCs for which only the RF activated, which appeared around TO, than the female subjects (C f7 ). These results may reflect the sex differences in muscle activation patterns during gait.
By referencing the mean value and linear correlation coefficients of the selected IMS predictors with the HGS, the direction of foot motions during these phases and the changing trend as HGS increased could be determined. Male subjects with stronger HGSs had strong acceleration in the anterior and superior direction (C m13,14 ) immediately before and after TO. During the early mid-stance phase, when the foot approaches the defined neutral position, male subjects with stronger HGSs had higher angular velocities in the direction of eversion and internal rotation (C m15,16 ). Immediately after the heel rocker occurred, female subjects with stronger HGSs tended to have lower acceleration in the lateral direction and lower angular velocity in the internal rotation direction (C f4,8 ). Combining the two predictors, the results may suggest that female subjects with stronger HGSs tend to have a higher ability to land their feet stably and smoothly. After the foot has completely hit the ground, female subjects with higher HGSs tended to have less acceleration in the medial direction (or more acceleration in the lateral direction) (C f5 ). At the end of the initial swing phase when the lower limb transitioned from acceleration to deceleration, the acceleration in the anterior direction (C f7 ) of female subjects began to approach zero as HGS increased.
Furthermore, we also list the coefficients of predictors and their p-values in the multivariate regression models in Tables 3 and 4. Although the predictor sets contained predictors whose linear correlations with the HGS had effect sizes classified only as none or small, the constructed models for both males and females had large effect sizes (R 2 = 0.858, p < 0.001, and R 2 = 0.773, p < 0.001, respectively).
Precision Evaluation of Gait Speed, Model Evaluation of HGS Estimation, and Test 1

Gait Speed
For all subjects, we evaluated the agreement between the 10 m average gait speed calculated from stopwatch-measured time in one trial and that calculated by averaging all strides of gait speed in 10 m intervals in one trial (see Figure 8a). The ICC agreement reached the "excellent" level with a value of 0.978. Compared to the reference value, the IMS achieved an MAE of 0.029 m/s, which is only 2.1% of the average gait speed of all subjects in Group I.
From the BA plots of data for Group I (see Figure 8b), we observed a fixed bias indicating that the IMS-measured gait speed was on average 0.014 m/s greater than the stopwatch-measured data (p < 0.001). There was also a proportional bias between the two measurements (r = −0.173, p = 0.006), indicating that the IMS slightly overestimated the gait speed when the gait speed became slower (y = −0.034x + 0.060). The agreement interval for testing data for Group II+III was determined by the BA plots generated from the data for Group I. According to K AL and K AU calculated using Equations (10) and (11), the IMS successfully assessed 100% of gait speed data for subjects in Group II+III with an MAE precision of 0.029 m/s.

HGS

The results presented in Figure 9a suggest that gait speed alone or combined with other common GPs is not an effective predictor for estimating HGS. From the results shown in Figure 9a, it can be inferred that gait speed is significantly correlated with HGS among male and female subjects, with moderate effect sizes (r = 0.384, 0.337; p = 0.048, 0.048), but estimating HGS based solely on gait speed is not feasible due to the poor ICC agreement between the estimated and reference values. However, when additional GPs were added as predictors by using the LOSO-LASSO model (M 2 ), significant improvements were observed for ICC, MAE, and R 2 . The ICC agreement for males and females improved from poor to fair and good, respectively, while the R 2 improved from small to large. Specifically, for the males, the ICC agreement improved from fair to good with the aid of IPAs. Additionally, the optimal model (M o ) that included IMS predictors resulted in a substantial improvement in ICC agreements, MAE, and R 2 , where the ICCs reached excellent for both males and females with MAE and R 2 values improving to 2.88 and 2.57 kg and 0.86 and 0.77, respectively. Further details on M 2 and M 3 predictor combinations can be found in the Supplementary Materials.
The differences between the reference and estimated values of Group I data followed a normal distribution, as shown in Figure 9b. The Bland-Altman plots of Group I for both males and females did not reveal any proportional biases (p = 0.76, 0.09) between the measurements. In terms of the HGS model test results using Group II+III data, the HGS estimation was successful for 5/6 males and 36/39 females within the agreement interval. According to Equations (10) and (11), 48.0-100.0% of male subjects and 89.1-100.0% of female subjects were estimated successfully. However, it appeared that HGS was overestimated for males in Group II+III.
Test 2: Validity of Designed Frailty Risk Score with Estimated HGS
In Test 2, the scores of male and female subjects were evaluated together because the experts did not consider biological sex. For both males and females in Group III, there was no significant linear correlation between HGS and gait speed (r = 0.025, p = 0.963, and r = 0.170, p = 0.302, respectively). Even after calculating P HGS and P GS , there was still no significant linear correlation between the performance scores (r = 0.363, p = 0.075), possibly due to the small sample size and insufficient statistical power.
The ICC agreement between the three types of performance scores based on reference and IMS-estimated values is shown in Figure 10. P GS had an excellent level of agreement with an ICC(2,1) of 0.959 (Figure 10b), while P HGS only had a poor level with an ICC(2,1) of 0.282 (Figure 10a), possibly due to a few subjects who did not agree well with the reference data. However, when P HGS and P GS were combined into P fr , the ICC value improved to a good level at 0.727 (Figure 10c).
Figures 11 and 12 show the correlations between the expert-rated score and the three types of performance scores based on reference and IMS-estimated values. The expert-rated score had a significant negative correlation with reference data-based P GS and P fr , with large effect sizes (r = −0.555, −0.503; p = 0.004, 0.010), but not with reference data-based P HGS (r = −0.225, p = 0.280) (Figure 11). However, the P HGS based on IMS-estimated data had a significant negative correlation with the expert-rated score with a large effect size (r = −0.525, p = 0.007), and the P fr based on IMS-estimated data had a higher effect size (r = −0.676, p < 0.001) than the reference data-based one. These results indicate that the performance scores based on IMS-estimated data are more consistent with the experts' diagnostic reasoning.

We conducted a statistical analysis of the difference in expert-rated scores between subjects classified as pre-frail and robust based on the J-CHS score (Figure 13). We found no significant difference between the subject groups that scored 1 to 2 and 0, which may be due to the difficulty in precisely scoring subjects who are on the boundary of robust/pre-frail conditions based only on gait observation. Nevertheless, the average value of the robust group was lower than that of the pre-frail group.

Furthermore, we tested the three types of performance scores on the basis of IMS-estimated data for the pre-frail and robust groups (Figure 14).
Despite the t-test showing no significant difference between the two groups in either HGS or gait speed performance scores, the overall performance P fr of the robust group was significantly higher than that of the pre-frail group. This suggests that the frailty risk score was consistent with the J-CHS criteria.
Some Significant GP Predictors for HGS Estimation
Although gait speed has been suggested to be correlated with HGS in previous studies [32,33], in this study, gait speed was not selected as a predictor for the HGS estimation model for either male or female subjects. Instead, other spatiotemporal parameters were discovered to be significant for HGS estimation in our designed model. These parameters played a key role in the optimal model for HGS estimation based on the analysis of ICC agreement. After applying the LOSO-LASSO method, essential GP predictors were selected.
One of the essential GP predictors is the maximum sole-to-ground angle in the dorsiflexion direction (GP03), which has a relatively high positive correlation with HGS in males. As shown in Figure 5, GP03 occurs immediately before heel strike. During this phase of the gait cycle, the ankle joint is in a neutral status; i.e., the foot is perpendicular to the tibia. Therefore, the value of GP03 is determined by the degree of knee extension [70]. When the knee extensor, i.e., quadriceps, becomes weaker, the knee cannot be extended sufficiently, which causes GP03 to become smaller.
Another essential GP predictor for both sexes is the maximum angular velocity in the dorsiflexion direction during the swing phase (GP16). Unlike GP03, the negative correlation coefficient between HGS and GP16 suggests that subjects with a higher HGS have a lower absolute value of GP16, i.e., a value closer to zero. This can be explained as follows: According to Figure 5, GP16 is most likely to occur during the initial swing phase. During this phase, the upper leg rotates forward (blue arrow on upper leg in Figure 15), and the knee joint gradually increases flexion. Passively, the lower leg lifts behind the central line of the body (yellow arrow in Figure 15), which prevents the lower leg from rotating forward too early by overcoming the gravity force (green arrow in Figure 15). At the same time, the ankle joint spontaneously reduces plantarflexion, which rotates the foot forward (blue arrow on foot in Figure 15). The G x waveform during this phase reflects the counterbalance motion of the knee and ankle joint [70]. Furthermore, Nene et al. [71] suggested that the rectus femoris muscle controls the degree of knee flexion. Therefore, when the quadriceps, especially the rectus femoris, becomes weaker, the antagonizing muscle power that prevents the lower leg from rotating forward along with gravity also decreases. Consequently, GP16 becomes larger in the dorsiflexion direction.
Some Significant IMS Predictors for HGS Estimation
Through SPM analysis of the correlation between HGS and the foot motion waveforms, we discovered a number of effective IMS predictors, and five IMS predictors were finally selected by LOSO-LASSO. As shown in Figure 7, the major parts of GPCs for the females appeared around HS (C f4-6,8 ), where both the RF and VAs in the quadriceps were mainly activated, while the male subjects also had more GPCs (C m13,14 ) inside the %GCs for which only the RF was activated, which appeared around TO, than the female subjects (C f7 ). Di Nardo et al. [72] suggested that female subjects have more complex activation patterns in VAs. Bailey et al. [73] indicated that in older adults' gait, the activation level of RF for males is higher than that for females according to a study using electromyography. The results shown in Figure 6 may reflect sex-dependent muscle activation during gait.
Kobayashi et al. [48] demonstrated the differences in GPs between sexes. Rowe et al. [74] analyzed the sex differences in kinetics and kinematics of lower limbs in detail, which indicated that more differences were found in frontal and horizontal planes. In this study, we also observed a difference in foot motion waveforms, which also belong to the gait in the lower limbs, between male and female subjects, especially in the frontal plane and transverse plane. Our results agreed with the findings demonstrated in these previous studies.
We analyzed the correlation between the balance ability, represented by the outcome of the FRT, and foot motion with the same subjects in Group I in our previous study [45]. We found several significant GPCs by paying attention to the gait phases related to the activation of the tibialis anterior (TA) and calf muscles (gastrocnemius (GA) and soleus (SO)). The TA has two periods of activity: one is during the early stance phase (1-15%GC), and the other is from the late pre-swing to the end swing phase (55-100%GC). Partial quadricep-activated gait phases overlap with the TA at the moments before and after HS. Different from the GPCs in the HGS assessment model, there were no GPCs selected at the end of the swing phase, i.e., the second period of TA activity, in the balance ability assessment model, which may suggest that the power needed for knee extension contributes less to balance ability. In contrast, similar to the balance ability assessment model, the HGS estimation model also has GPCs in the early stance phase (the first period of TA activity). In this period, the quadriceps control the lower limb to prevent excessive knee flexion, and at the same time, the TA contributes to decelerating the passive plantarflexion and foot pronation to make the posture more stable [69]. A previous study discovered that HGS was significantly correlated with the outcome of FRT in the older Asian population [75]. We also found that the HGS was significantly correlated with the outcome of FRT in our study (male: r = 0.456, p = 0.017; female: r = 0.390, p = 0.020). We think that the correlation may be related to the common parts of GPCs in both models during the early stance phase right after HS.
Results Regarding Model Test and Designed Frailty Risk Score
The agreement between reference data and IMS-estimated data for P HGS only reached a "poor" level due to the estimated HGSs of one male and two female subjects that deviated from the reference values (Figure 10a, marked with three dashed black circles). It appears that the IMS did not estimate the HGS of these three subjects accurately compared to the hydraulic hand dynamometer, which is considered the gold standard. However, the reference HGS only reflected the static systemic muscle strength of the upper limb. Figure 11a indicates that there was no significant correlation between the reference HGS and the expert-rated score, while Figure 12a suggests that IMS-estimated HGS was significantly correlated with the expert-rated score. Furthermore, compared to the results in Figures 11c and 12c, our designed frailty risk score using IMS-estimated values agreed more with the clinical experts. These results may be due to the fact that our model focused on gait performance and reflected dynamic muscle conditions via the lower limbs. Moreover, experienced clinicians and physiotherapists tend to rely on information extracted from gait observation for making their decisions in clinical practice. The ICC(2, k) of the HGS between reference and IMS-estimated values reached 0.886 and 0.902 for males and females, respectively, indicating that the average value of the dynamometer and IMS-estimated HGS can be used in clinical practice to better approximate subjects' true systemic muscle strength.
Our designed frailty risk score was significantly correlated with the expert-rated score (r = −0.676, p < 0.001), indicating the reliability of frailty risk assessment using IMS and our designed frailty risk score. Additionally, significant differences were found in the designed frailty risk score between subjects in the group with a J-CHS score of 0 and those with a score of 1 to 2, further supporting the use of our proposed method for frailty assessment.
Outlook for this Technology
As a feasibility study, we temporarily recorded foot motion data in the onboard memory of IMSs during the experiments and transferred the data to a personal computer after the gait measurements were completed. However, in daily use, a real-time algorithm for frailty assessment is necessary. In our previous studies, we proposed an online algorithm for estimating stride parameters for daily gait analysis using an IMS [29], as well as an algorithm for integrating the process of IMS predictor construction into the online algorithm [44]. By using the same algorithms, we believe that daily frailty assessment can be performed using an IMS.
In this study, we did not diagnose whether the subjects were frail or not, as all recruited subjects were able to come to the laboratory using public transportation. Therefore, we assumed that they were in generally good health. The characteristics of the subjects can be observed from their J-CHS scores, but the frailty risk score does not directly represent the probability of an individual being diagnosed as frail. Instead, it reflects the relative degree of frailty in the population. To obtain more evidence supporting our frailty risk score, future longitudinal cohort studies should be conducted to track the frailty of subjects. Additionally, an epidemiological study regarding the frailty risk score is needed to improve its interpretability in connection with the real probability of being diagnosed as frail.
To improve the HGS estimation model's precision, future studies should focus on increasing the sample size. To improve the agreement of the frailty risk score with experts' diagnostic reasoning, IMS estimation should include three additional items in the J-CHS criteria: activity level, fatigue, and weight loss. Gokalgandhi et al. [28] suggested that daily activity and calorie consumption could be monitored by smart shoes. However, an estimation method via IMSs for the other two items is still lacking. In their study, Luo et al. [76] proposed a pilot method for assessing fatigue via wearable sensors that utilized vital signs such as heart rate, blood pressure, skin temperature, and steps, but did not include other GPs. Previous kinematic studies [77,78] have shown that fatigue and weight loss can impact kinematic patterns. Therefore, assessing fatigue and weight loss using IMSs alone is promising but requires further investigation in the future.
Conclusions
In this study, we demonstrated the potential for long-term frailty assessment using IMSs, which required two key tasks. The first task was to accurately measure gait speed using IMSs and construct an HGS estimation model via foot motion. The second task was to create a frailty risk score that can continuously assess frailty and validate its effectiveness.
For the first task, we confirmed that IMSs can measure gait speed with high accuracy, with an ICC agreement with reference data of over 0.97. By analyzing the correlation between HGS and foot motion waveforms using SPM-LOSO-LASSO, we discovered novel GPs and IMS predictors for HGS estimation. Specifically, we found that male subjects had more GPC components inside the %GCs for which only the RF was activated, while female subjects had more GPC components inside the %GCs for which both the VAs and RF were activated. We successfully constructed sex-dependent HGS estimation models, both of which achieved "excellent" ICC agreement, MAEs below 2.9 kg, and large effect sizes (R 2 over 0.77). By testing the model on a separate sample of subjects, we found that 48.0-100% of males and 89.1-100% of females were within the agreement interval, indicating the robustness of our model for other older individuals.
For the second task, we successfully designed a novel analog frailty risk score by combining the HGS performance and gait speed performance of the subjects, aided by the normal distributions of HGS and gait speed in the older Asian population. This score had a large effect size correlation with the expert-rated score, demonstrating its validity and agreement with clinical experts' diagnostic reasoning.
In the future, an epidemiological study is needed to improve the interpretability of the frailty risk score in connection with the real probability of being diagnosed with frailty. Furthermore, to better align with clinical experts' diagnostic reasoning, an IMS assessment of three other items related to activity, weight loss, and fatigue is needed.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s23125446/s1. Figure S1. Results of LOSO-LASSO analysis to determine (a) M 2 and (b) M 3 ; Table S1. Optimal predictor combination in M 2 and M 3 ; Figure S2. Correlations between expert-rated median score and three types of performance scores calculated from IMS-estimated value: (a) P HGS , (b) P GS , (c) P fr . Figure S3. Correlations between expert-rated mean rank and three types of performance scores calculated from IMS-estimated value: (a) P HGS , (b) P GS , (c) P fr . | 2023-06-30T05:11:33.887Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "3d7c3ef3c426e1c015c4a0ab5369e205223e24eb",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3d7c3ef3c426e1c015c4a0ab5369e205223e24eb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
118840785 | pes2o/s2orc | v3-fos-license | Weakening and Shifting of the Saharan Shallow Meridional Circulation During Wet Years of the West African Monsoon
The correlation between increased Sahel rainfall and reduced Saharan surface pressure is well established in observations and global climate models, and has been used to imply that increased Sahel rainfall is caused by a stronger shallow meridional circulation (SMC) over the Sahara. This study uses two atmospheric reanalyses to examine interannual variability of Sahel rainfall and the Saharan SMC, which consists of northward near-surface flow across the Sahel into the Sahara and southward flow near 700 hPa out of the Sahara. During wet Sahel years, the Saharan SMC shifts poleward, producing a drop in low-level geopotential and surface pressure over the Sahara. Statistically removing the effect of the poleward shift from the low-level geopotential eliminates significant correlations between this geopotential and Sahel precipitation. As the Saharan SMC shifts poleward, its mid-tropospheric divergent outflow decreases, indicating a weakening of its overturning mass flux. The poleward shift and weakening of the Saharan SMC during wet Sahel years is reproduced in an idealized model of West Africa; a wide range of imposed sea surface temperature and land surface albedo perturbations in this model produce a much larger range of SMC variations that nevertheless have similar quantitative associations with Sahel rainfall as in the reanalyses. These results disprove the idea that enhanced Sahel rainfall is caused by strengthening of the Saharan SMC. Instead, these results are consistent with the hypothesis that a stronger SMC inhibits Sahel rainfall, perhaps by advecting mid-tropospheric warm and dry air into the precipitation maximum.
Introduction
Over the twentieth century, large interannual and interdecadal variations in precipitation were observed in the African Sahel, producing occasional floods and sustained droughts. A variety of studies examined the cause of these variations (e.g. Charney et al., 1975;Folland et al., 1986;Eltahir and Gong, 1996;Nicholson and Grist, 2001), but a robust explanation was not established until Giannini et al. (2003) showed that much of the observed variability could be reproduced if observed global SSTs were used to drive a global climate model, implicating SST as the primary cause of historical Sahel precipitation changes.
While it is now generally agreed that SST drives interdecadal variations in Sahel precipitation (e.g. Nicholson, 2013), the Sahara desert is also known to be associated with Sahel variability on a range of time scales. Haarsma et al. (2005) found a correlation on interannual time scales between increased Sahel rainfall and decreased mean sea level pressure over the Sahara. They argued that variations in the mean sea level pressure gradient between the Sahara and its surroundings cause variations in low-level convergence of mass and moisture, and thus in rainfall, over the Sahel. They furthermore argued that the mean sea level pressure gradient is set by the land-ocean temperature contrast, which can then be viewed as a fundamental driver of Sahel rainfall. In contrast, Biasutti et al. (2009) found that land-ocean temperature contrast is poorly correlated with Sahel precipitation at interannual time scales in CMIP3 models. They argued that while variations in Sahel rainfall are indeed caused by variations in the strength of the low-level circulation over the Sahara, the meridional gradient of 925 hPa geopotential height is a better indicator of the strength of the Saharan Heat Low (SHL; e.g. Rácz and Smith, 1999), which is a dry, shallow overturning circulation centered in the Sahara desert. At interannual time scales, Biasutti et al. (2009) found that when the 925 hPa geopotential was anomalously low over the Sahara, the SHL was strong and there was greater rainfall over the Sahel. They also showed that 925 hPa geopotential height anomalies over the Sahara led Sahel rainfall anomalies by a month, suggesting that the SHL causes, and is not just correlated with, Sahel rainfall variability.
The studies just discussed treat the low-level circulation over the Sahara as an entity that can be described in terms of the distribution of mean sea level pressure or 925 hPa geopotential height. However, this circulation consists of a geopotential height minimum at 925 hPa over the Sahara and a geopotential height maximum in the lower mid-troposphere (near 700 hPa) over the Sahara, with cyclonic and anticyclonic winds rotating around the near-surface geopotential minimum and the 700 hPa geopotential maximum, respectively. Fig. 1 provides a schematic of this well-known structure of the Saharan circulation (e.g. Thorncroft et al., 2011). In addition to the balanced cyclonic and anticyclonic flow, mass converges into the near-surface low, ascends, and diverges out of the lower mid-tropospheric high in an ageostrophic overturning circulation. The thickness of the 925-700 hPa layer - the low-level atmospheric thickness (LLAT) - is greatest where the temperature of that layer is maximum, by the hypsometric relation. There is ambiguity in terminology, with some (e.g. Lavaysse et al., 2009) referring to the combination of the low pressure system near the surface and the high pressure system in the lower mid-troposphere (sometimes known as the Saharan High) as the SHL. Here we refer to the near-surface low pressure cyclone over the Sahara as the SHL, the high pressure anticyclone in the lower mid-troposphere as the Saharan High, the divergent component of the winds as the "Saharan overturning circulation", and the combination of all of these as the "SHL circulation".

Figure 1: The near-surface Saharan Heat Low cyclone is indicated by an "L" at 20°N, the mid-level Saharan High anticyclone is indicated by an "H" at 20°N, and the combination of these is collectively referred to as the SHL circulation. The divergent component of the SHL circulation, indicated by arrows in the dashed box, is the Saharan overturning. The mid-level African Easterly Jet (AEJ), upper-level Tropical Easterly Jet (TEJ), and upper-level anticyclone in the ITCZ are also shown.
Mechanistically, both Haarsma et al. (2005) and Biasutti et al. (2009) argue that an anomalously strong near-surface SHL creates increased low-level mass convergence over the Sahara, and this enhanced circulation causes more moisture convergence and rainfall over the Sahel. Other studies have echoed this view, including Lavaysse et al. (2009), who make the additional point that the near-surface SHL plays a crucial role in West African monsoon onset through its low-level cyclonic circulation. Lavaysse et al. (2010a) show enhanced deep convection over the Sahel during strong phases of the SHL, where a strong SHL is identified by LLAT being anomalously high. During weak phases, when the Saharan LLAT is anomalously low, deep convection over the Sahel is suppressed. However, in another study from the same year, Lavaysse et al. (2010b) show that when the SHL circulation interacts with the midlatitudes on intraseasonal timescales, the opposite sign of correlation can be observed, with low LLAT over the Sahara corresponding to increased deep convection over the Sahel.
The picture becomes even less clear when one considers studies describing the influence of the mid-tropospheric Saharan High on Sahel precipitation. A pair of companion studies used an idealized, zonally symmetric model of the West African monsoon to examine the influence on Sahel rainfall of the large-scale temperature and moisture advection produced by the SHL circulation. They used an atmospheric reanalysis to show that the SHL circulation produces near-surface cooling and moistening over the Sahel and Sahara, and warming and drying in the lower mid-troposphere (around 700 hPa). Furthermore, when they imposed these low- and mid-level advective tendencies individually in their idealized model, they found that the low-level cooling and moistening caused increased Sahel rainfall, while the mid-level warming and drying caused decreased Sahel rainfall. The effect of the mid-level warming and drying dominated, so that temperature and moisture advection in the SHL circulation has a net inhibitory effect on Sahel rainfall. This is consistent with the demonstrated sensitivity of precipitating convection to drying of the free troposphere above the boundary layer (Derbyshire et al., 2004; Holloway and Neelin, 2009; Sobel and Schneider, 2009), but would seem to contradict the idea that a stronger near-surface SHL would cause an increase in Sahel rainfall (e.g. Haarsma et al., 2005; Biasutti et al., 2009), unless the near-surface SHL does not strengthen with the Saharan overturning circulation.
In addition to the correlation between Sahel precipitation and near-surface geopotential, there is an association of precipitation with changes in winds as well. The low-level monsoon westerlies have a maximum at the surface during dry years but form a jet at 850 hPa during wet years. The African easterly jet (AEJ), in balance with the strong thermal contrasts between the Sahel and Sahara and maintained by both the deep and shallow circulations (Thorncroft and Blackburn, 1999), exhibits a statistically significant northward shift and weakening during wet Sahel years (Hsieh and Cook, 2005; Dezfuli and Nicholson, 2011). The upper-level tropical easterly jet (TEJ) is also strongly linked to interannual variability over the Sahel, with a significant strengthening during wet years (Dezfuli and Nicholson, 2011). There does not seem to be a clear consensus on reasons behind the changes in the AEJ and TEJ, but current understanding is reviewed in Nicholson (2013).
To the best of our knowledge, the observed association of interannual variations in Sahel rainfall with the whole SHL circulation has not been examined. It might seem reasonable to assume that the Saharan overturning circulation would strengthen as the near-surface SHL strengthens, but given the dominant effect of the mid-level warming and drying on Sahel rainfall suggested by an idealized study, this would be inconsistent with observations of a strengthening of the near-surface SHL during rainy Sahel years. Perhaps low-level cooling and moistening by the SHL circulation has a larger influence on Sahel precipitation in the real world than in that idealized model, similar to suggestions for the role of these low-level tendencies in the observed seasonal northward migration of West African rainfall (e.g. Hagos and Cook, 2007; Thorncroft et al., 2011; Peyrillé et al., 2016). Or perhaps the SHL circulation does not strengthen as the near-surface SHL strengthens. Here we seek to resolve these questions by examining the association of Sahel rainfall with the three-dimensional SHL circulation at interannual timescales in two atmospheric reanalyses and an idealized model.
The next section of this paper describes our data sources and analysis methods. Section 3 discusses the climatology and basic features of the West African monsoon and SHL circulation. Section 4 examines how the horizontal structure of the near-surface SHL and the Saharan High covary with Sahel precipitation, and is followed by sections detailing the vertical structure of the circulation changes, with emphasis on the divergent component of the flow. Section 7 compares all of these observationally based results with output from an idealized β-plane model. We close with a discussion of implications and caveats in section 8.
Methods
We obtain winds, geopotential height, temperature, and humidity from the ERA-Interim reanalysis (Dee et al., 2011), which is produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) and is used here for 1979-2015. ERA-Interim is a third-generation reanalysis with data assimilation based on 12-hourly four-dimensional variational analysis (4D-Var). The dynamics are calculated on a T255 (approximately 80 km) global grid, with 60 vertical levels from the surface to 0.1 hPa. We also use NASA's Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA2; Gelaro et al., 2016, in preparation), which is a third-generation reanalysis produced on a 0.5° × 0.625° cubed-sphere grid with 72 vertical levels from the surface to 0.1 hPa. MERRA2 is not available for 1979, so here we use 1980-2015. All climatologies and regressions shown here use ERA-Interim data unless MERRA2 is explicitly indicated.
The Global Precipitation Climatology Project (GPCP) dataset (Adler et al., 2003), which is a combination of land rain gauge and satellite-based precipitation measurements, is used as the primary precipitation dataset for this study. The Global Precipitation Climatology Centre (GPCC, Schneider et al., 2014) dataset, based on corrected gridded rain gauge data, was also examined and found to produce no substantial changes in the conclusions of this study. All data was obtained at monthly mean resolution.
When we take limited zonal averages, we do so over 10°W to 25°E, a region that encompasses most of northern hemisphere Africa but excludes coasts and the Arabian Desert. Within these bounds, the latitudes of 10°N to 20°N are defined as the Sahel, and 20°N to 30°N as the Sahara, with both regions delineated with boxes in Fig. 2a. The results in this study are not sensitive to the longitudinal bounds of the region chosen for the Sahara and Sahel, as the correlation of local precipitation with Sahel-averaged precipitation consists of a pattern that extends zonally across Africa (Fig. 2c).
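For concreteness, the box averaging used throughout this study can be sketched as follows. This is a minimal Python example with numpy, assuming monthly-mean fields on a regular latitude-longitude grid; the variable names and the stand-in data are illustrative, not taken from the paper's actual analysis code.

```python
import numpy as np

def box_mean(field, lats, lons, lat_bounds, lon_bounds):
    """Area-weighted mean of field(lat, lon) over a lat-lon box,
    weighting by cos(latitude) to account for converging meridians."""
    la = (lats >= lat_bounds[0]) & (lats <= lat_bounds[1])
    lo = (lons >= lon_bounds[0]) & (lons <= lon_bounds[1])
    sub = field[np.ix_(la, lo)]
    w = np.cos(np.deg2rad(lats[la]))[:, None] * np.ones(lo.sum())
    return np.sum(sub * w) / np.sum(w)

# Sahel (10-20N) and Sahara (20-30N) means over 10W-25E, with stand-in data
lats = np.arange(-90.0, 90.1, 1.0)
lons = np.arange(-180.0, 180.0, 1.0)
precip = np.random.rand(lats.size, lons.size)  # placeholder JJAS-mean field
sahel = box_mean(precip, lats, lons, (10, 20), (-10, 25))
sahara = box_mean(precip, lats, lons, (20, 30), (-10, 25))
```

A limited zonal average over 10°W-25°E is the same operation applied to each latitude individually.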
For the idealized model portion of this study, we analyzed the same integrations presented in Shekhar and Boos (2016). These used the Weather Research and Forecasting (WRF) model, version 3.3 (Skamarock et al., 2008), modified to run on an equatorial β-plane in a meridional channel at 15 km resolution with 41 vertical levels. The domain was 20° × 140° in the zonal and meridional directions, respectively, with periodic boundary conditions in the zonal direction and closed boundary conditions in the meridional direction. A continent was prescribed from 5°N to 32°N, divided into a grassland from 5°N-12°N and a desert from 12°N-32°N, with interactive surface temperature but prescribed soil moisture and other properties from the WRF land surface database. The remainder of the domain was ocean with a prescribed idealized SST distribution representative of that observed during boreal summer near Africa. Perpetual July 15 insolation was imposed, with the diurnal cycle retained. A total of thirteen model integrations were performed for one model year after a three-month spinup, yielding the same amount of output for each integration as four three-month summer seasons. One integration was chosen as the control, and others were forced by modifications of the specified desert surface albedo, the prescribed SST, or both. These form an ensemble of integrations in which the monsoon precipitation varies in response to the SST and surface albedo forcings. These integrations were documented more thoroughly in Shekhar and Boos (2016), where they were used to examine energy-based diagnostics of ITCZ location.
Statistical analyses were performed using Python programming language packages Iris (U.K. Met Office, 2015), Seaborn (Botvinnik et al., 2016), and Statsmodels (Seabold and Perktold, 2010). For linear regressions, we test for a nonzero slope using a two-sided Student's t test at the p < 0.05 level. For some linear regressions, we also obtain a 95% confidence interval for the slope using a bootstrapping technique on the joint probability distribution of slope and intercept, as given in the Seaborn package. We tested the sensitivity of linear regressions to outliers using the robust regression (e.g. Rousseeuw and Leroy, 2005) feature of the Statsmodels package, and although certain confidence intervals narrow and shift slightly, no substantial qualitative differences were obtained.
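A minimal sketch of these two tests, using scipy.stats.linregress for the two-sided slope t test and a pairs bootstrap for the slope confidence interval. The synthetic series below are stand-ins for the actual reanalysis time series, and the coupling between them is illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical yearly series: Sahel JJAS precipitation (mm/day) and a
# predictand such as Sahara geopotential height (m); values are synthetic.
precip = rng.normal(2.9, 0.37, 37)
dz925 = -5.0 * (precip - precip.mean()) + rng.normal(0.0, 1.5, 37)

# Two-sided t test for a nonzero slope (the p < 0.05 criterion in the text)
res = stats.linregress(precip, dz925)
print(f"slope = {res.slope:.2f}, p = {res.pvalue:.3f}")

# 95% bootstrap confidence interval for the slope (pairs bootstrap)
n = precip.size
slopes = []
for _ in range(10000):
    idx = rng.integers(0, n, n)  # resample (x, y) pairs with replacement
    slopes.append(stats.linregress(precip[idx], dz925[idx]).slope)
lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"95% CI for slope: [{lo:.2f}, {hi:.2f}]")
```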
Review of prior results on the Sahel-SHL connection
To set the stage for our study, we discuss some well-known features of the West African Monsoon and SHL circulation, most of which were reviewed by Nicholson (2013). The monsoon exists in two relevant phases: a coastal phase, in which peak precipitation lies at the coast of the Gulf of Guinea at 5°N, and a continental phase, with peak precipitation near 10°N. The transition from the coastal to the continental phase is called monsoon onset and climatologically occurs around June 20. Over the region we choose as the Sahel (10°-20°N, 10°W-25°E), the months of June and September have roughly equal amounts of precipitation (not shown), so we choose June-September (JJAS) as our boreal summer monsoon season. During this period, abundant precipitation over land in the monsoon's continental phase is clearly visible (Fig. 2a). The climatological mean JJAS precipitation averaged over the Sahel is 2.9 mm day⁻¹. The time series of JJAS precipitation averaged over the Sahel (Fig. 2b, hereafter referred to as "Sahel precipitation") is in good agreement with similar time series in other studies (Nicholson, 2005; Nicholson et al., 2011). It has an interannual standard deviation of 0.37 mm day⁻¹, indicating that there is about a 1 mm day⁻¹ difference between fairly wet and fairly dry years in the Sahel. Over the reanalysis period of 1979-2015, Sahel precipitation lacks statistically significant interannual autocorrelations (not shown), distinguishing variability in this more recent period from the persistent interdecadal droughts that characterized parts of the twentieth century and that were largely attributed to global variations in SST (Giannini et al., 2003). Losada et al. (2012) noted the non-stationarity of the relationship between Sahel precipitation and SST in the different ocean basins over the twentieth century, and showed a marked transition in SST dependence in the 1970s, with a largely stationary regime of SST dependence since then. In our post-1970s period, precipitation mostly exhibits a "monopole" spatial pattern (Fig. 2c), with single-signed precipitation anomalies extending from the Gulf of Guinea across the Sahel. This is in contrast to meridional "dipole" patterns of precipitation anomalies observed during the 1920s-1970s (Fig. 14 of Nicholson, 2013). This has implications for the generality of our results, which we discuss further in section 8.
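The seasonal averaging and autocorrelation check described here are straightforward to reproduce. The sketch below uses a synthetic stand-in for the GPCP Sahel series and computes JJAS means, the interannual standard deviation, and the lag-1 autocorrelation against an approximate 2/sqrt(N) significance threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a 1979-2015 monthly Sahel precipitation series
monthly = rng.gamma(2.0, 1.5, 37 * 12)

# JJAS mean per year: reshape to (years, months) and average Jun-Sep
jjas = monthly.reshape(-1, 12)[:, 5:9].mean(axis=1)

print("JJAS mean:", jjas.mean())
print("interannual std:", jjas.std(ddof=1))

# Lag-1 autocorrelation; values inside roughly +/- 2/sqrt(N) are
# indistinguishable from zero at about the 95% level
r1 = np.corrcoef(jjas[:-1], jjas[1:])[0, 1]
print("lag-1 autocorrelation:", r1, "threshold ~", 2.0 / np.sqrt(jjas.size))
```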
Positive anomalies of Sahel rainfall are accompanied by negative anomalies of geopotential height over the Sahara, as shown by Biasutti et al. (2009) and by our regression of ∆Z925 on Sahel precipitation (Fig. 2d). We use ∆Z to represent the difference between local geopotential height and the tropical-mean (30°S-30°N) geopotential height, ∆Z = Z − [Z]_tropics. A regression of mean sea level pressure on Sahel rainfall yields a similar pattern (not shown). Previous studies interpreted these patterns of ∆Z925 and mean sea level pressure as indicative of a strengthening of the SHL (Haarsma et al., 2005; Biasutti et al., 2009). But the near-surface SHL stretches zonally across northern Africa and is centered around 20°N, while the decrease in ∆Z925 is confined to the northern and western sides of this climatological trough. Since a strengthening of the near-surface SHL would consist of negative anomalies of ∆Z925 centered over the climatological minimum ∆Z925, this would seem to indicate that the near-surface SHL is expanding northward and westward rather than simply strengthening. Much of the northward expansion occurs over northeastern Africa, consistent with the patterns in anomalous ∆Z925 and mean sea level pressure seen in Biasutti et al. (2009) and Haarsma et al. (2005).
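The ∆Z diagnostic can be sketched in a few lines. The cos(latitude) area weighting below is our assumption about how the 30°S-30°N mean is taken; the input array layout is also assumed.

```python
import numpy as np

def delta_z(z, lats):
    """dZ = Z - [Z]_tropics: local geopotential height minus its
    area-weighted 30S-30N mean. z has shape (nlat, nlon)."""
    tropics = (lats >= -30.0) & (lats <= 30.0)
    weights = np.cos(np.deg2rad(lats[tropics]))
    z_trop = np.average(z[tropics, :].mean(axis=1), weights=weights)
    return z - z_trop

# Example usage: lats = np.arange(-90, 90.1, 1.0); dz925 = delta_z(z925, lats)
```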
The near-surface SHL is part of the three-dimensional SHL circulation, as mentioned in the introduction. The ascending branch of the overturning component of this circulation is strongest at around 20°N in the climatological mean (Fig. 3a). Low-level mass convergence occurs between the surface and 800 hPa, with peak convergence at 925 hPa associated with the near-surface SHL. Divergence occurs in the 800-550 hPa layer, with peak divergence at 700 hPa associated with the Saharan High. The 925 hPa and 700 hPa levels are taken as representative of the near-surface SHL and the Saharan High, respectively, in the next two sections. Deep ascent in the ITCZ is located much further south, around 8°N. There is weak time-mean divergence and subsidence in the near-surface layer around 10°N in the precipitating region, a likely signal of strong time-mean subsidence and sporadic ascent due to precipitation. The near-surface zonal and meridional winds change sign at 20°N (Fig. 3b), a well-known feature called the inter-tropical discontinuity (ITD). Around 700 hPa, there is a peak in the equatorward flow as air travels in the shallow Saharan overturning circulation toward the ITCZ in the time mean. The African Easterly Jet (AEJ; e.g. Thorncroft and Blackburn, 1999) exists in thermal wind balance around 600 hPa and 14°N. At upper levels, the tropical easterly jet, meridional flow in the upper branch of the Hadley circulation, and the midlatitude jet stream are also visible. In the next two sections, we examine how the horizontal and vertical structure of the entire SHL circulation covaries with Sahel precipitation.
Horizontal structure of SHL circulation changes
Since the geopotential height and divergent wind together provide a nearly complete depiction of the horizontal circulation, we start by examining the horizontal structure of geopotential and divergent wind variations at 925 and 700 hPa. The climatological ∆Z925 field (Fig. 4a) shows a trough stretching across northern Africa during JJAS around 20°N; this is accompanied by a cyclonic geostrophic circulation around the trough. Winds converge into the trough around 20°N at the ITD, and cross-equatorial southerly flow in the low-level branch of the Hadley cell is also visible over the Gulf of Guinea. When regressed on Sahel precipitation, we see a spatially heterogeneous but nearly single-signed decrease in ∆Z925 (Fig. 4b) north of 20°N over Africa, Europe, and parts of the Atlantic and Indian oceans. The climatological meridional ∆Z925 gradient weakens at the southern boundary of the Sahel: although the changes in ∆Z925 are not statistically significant there, the anomalous northerly divergent flow during wet years is statistically significant and indicates a weakening of the ageostrophic northward monsoon flow into the Sahel at that level. At the northern boundary of the Sahel, anomalous southerly wind flows across the climatological ITD, into the stronger low in the Sahara. Effectively, this moves the ITD and SHL poleward during wet years. Part of this shift involves a weakening of 925 hPa convergence over the Sahel during wet years; we show in the next section that this can occur because of compensating convergence in the lower mid-troposphere. Fig. 4c shows the horizontal structure of the climatological mean Saharan High, with gradients in ∆Z700 in geostrophic balance with an anticyclone over most of the Saharan region. There is a substantial zonal gradient in ∆Z700 over the central and eastern Sahara, and a sharp meridional gradient over the Sahel in balance with the AEJ. The peak in climatological ∆Z700 lies poleward and westward of the trough in climatological ∆Z925. Downgradient divergent flow occurs south and northwest of the 700 hPa high, constituting the divergent northerly outflow in the upper branch of the Saharan overturning circulation. The regression of ∆Z700 onto Sahel precipitation (Fig. 4d) shows a statistically significant decrease centered in the Sahel that extends over most of northern hemisphere Africa, implying a strong anomalous cyclonic circulation over the whole region during wet years. At 700 hPa, the anomalous divergent wind converges into the center of the anomalous low, around 15°N in the eastern Sahel.
We expect the 925 and 700 hPa surfaces to be affected differently by a strengthening of the shallow SHL circulation compared to a strengthening of the deep, precipitating monsoon circulation. A stronger shallow heat-low circulation is expected to consist of a reduction in ∆Z925 in the SHL, an increase in ∆Z700 in the Saharan High, and an increase in the divergent, overturning circulation that flows along the geopotential gradients at these two levels. In contrast, enhanced precipitation is expected to be accompanied by enhanced ascent in a deep circulation that can be approximated by a first-baroclinic mode; strengthening of such a first-baroclinic mode will include decreases in geopotential height in the entire lower and middle troposphere and increases in geopotential in the upper troposphere [see Neelin and Zeng (2000) for a derivation of the structure of a typical tropical first-baroclinic mode, and Zhang et al. (2008) or Nie et al. (2010) for illustration of how the shallow Saharan overturning coexists with a precipitating first-baroclinic mode structure over West Africa]. The 925 hPa and 700 hPa surfaces are thus expected to have opposite vertical displacements as the SHL intensifies, while those two surfaces are both expected to move downward as the deep, precipitating circulation strengthens. Fig. 4 (panels b and d) shows that the 700 and 925 hPa surfaces both move downward during wet Sahel years, providing no evidence for a strengthening of the shallow SHL circulation. Furthermore, when geopotential heights are averaged over our Sahel and Sahara boxes, the negative anomalies in ∆Z925 and ∆Z700 during wet years are found to be statistically significant in both regions (Fig. 5). Thus, geopotential variations at 925 and 700 hPa are inconsistent with the hypothesis that the SHL circulation strengthens during wet Sahel years.
These changes in structure can also be viewed in terms of the thickness of the lower troposphere, but one must remember that LLAT will increase both during a strengthening of the shallow heat-low circulation and during a strengthening of the deep precipitating circulation. The LLAT climatology (Fig. 4e) shows a maximum over the western Sahara, with relatively high LLAT extending east along the 20°N line, approximately following the ITD. The regression of LLAT onto Sahel precipitation (Fig. 4f) generally shows a statistically weak increase in LLAT on the poleward side of the climatological LLAT maximum (over the Sahara), and a statistically significant decrease on the equatorward side of this maximum (over the Sahel). While all parts do not achieve statistical significance, especially over the Sahara, the general structure of a meridional dipole without large zonal variations is consistent with the meridional shift we suggested earlier based on the changes in ∆Z925. The lack of zonal variation is important, as it does not show an anomalous thickening of the lower troposphere in the western Sahara, where the LLAT is climatologically highest. If the SHL circulation were strengthening, we would have expected increased LLAT at its climatological maximum. There are also substantial increases in LLAT over eastern Europe, the Mediterranean, and the Atlantic which might indicate interactions with the midlatitudes, perhaps through mechanisms proposed by Vizy and Cook (2009) and Lavaysse et al. (2010b). Here we simply note that these patterns signify an expansion of the SHL circulation toward those regions.
We now return to examination of our regressions of area-averaged geopotential on Sahel precipitation (Fig. 5). These quantitatively reproduce the result from Biasutti et al. (2009) of decreased Sahara ∆Z925 during wet years, with the ERA-Interim results being quantitatively indistinguishable from the MERRA2 results. We also see decreased ∆Z925 over the Sahel, and an inspection of the climatological values (Fig. 5) indicates that the meridional gradient of ∆Z925 between the Sahara and Sahel flattens during wet years. The Sahel LLAT decreases as a consequence of ∆Z925 decreasing less than ∆Z700 during wet years (in this case, removal of the tropical mean has little effect, with variations in ∆Z700 − ∆Z925 being nearly equal to variations in Z700 − Z925). This is inconsistent with the idea that a classic first-baroclinic mode structure intensifies over the Sahel, and we will show in the next section that the vertical profile of the anomalous convergence during wet years also differs from that of a classic first-baroclinic mode, but still provides no evidence for a strengthening of the SHL during wet years. Over the Sahara, LLAT does increase during wet years, but the confidence interval on the regression slope does not exclude zero (i.e. the slopes of the regressions of ∆Z925 and ∆Z700 are indistinguishable in that region). Furthermore, the horizontal structures discussed above indicate that this increase in LLAT is better viewed as a poleward shift in the SHL circulation rather than a strengthening. This poleward shift is clearly seen when we regress the latitude of the SHL on Sahel precipitation (Fig. 6a), with the SHL latitude defined as the latitude of minimum ∆Z925 in the limited zonal average (before finding the latitude of the minimum, we use cubic splines to interpolate zonally averaged ∆Z925 to a continuous domain). The SHL latitude exhibits a strong positive correlation with Sahel rainfall. Both reanalyses contain an influential data point in 1984, which was the driest year in the reanalysis period, with an SHL latitude about 1 degree farther south than in all other years. Removing this extreme data point or using robust regression decreases slopes to approximately 0.55 degrees mm⁻¹ day, but does not qualitatively change the relationship between SHL and Sahel precipitation. The MERRA2 data exhibit a somewhat bimodal distribution in SHL latitude, for which we do not have an explanation.
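The SHL latitude diagnostic can be sketched as follows. scipy's CubicSpline stands in for whatever spline routine was actually used, and the search bounds and synthetic profile are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def shl_latitude(dz925_zm, lats, search=(10.0, 35.0)):
    """Latitude of minimum zonally averaged dZ925, found by interpolating
    the discrete profile with a cubic spline and minimizing on a fine grid."""
    spline = CubicSpline(lats, dz925_zm)
    fine = np.linspace(search[0], search[1], 2501)
    return fine[np.argmin(spline(fine))]

# Example with a synthetic trough centered near 20.3N
lats = np.arange(0.0, 40.1, 1.0)
profile = ((lats - 20.3) / 8.0) ** 2 - 3.0
print(shl_latitude(profile, lats))   # approximately 20.3
```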
As Sahel precipitation covaries with both Sahara ∆Z925 and SHL latitude, we can ask how much of the drop in Sahara ∆Z925 is due to the SHL trough moving into the Sahara, and how well Sahara ∆Z925 correlates with Sahel precipitation when the effect of this shift is statistically removed from Sahara ∆Z925. The regression of Sahara ∆Z925 on SHL latitude produces a slope of 5.1 ± 2.1 meters per degree latitude (R=0.62) in ERA-Interim, and when this dependence is removed to create a "latitude-detrended ∆Z925", this quantity has no statistically significant relationship with Sahel precipitation (Fig. 6c; the data that have not had the SHL shift removed are shown in Fig. 6b). Essentially, the statistically significant relationship between Sahel precipitation and many dynamical and thermodynamical quantities such as geopotential, divergence, and zonal wind is due to the meridional shift of the SHL circulation. When the linear dependence on SHL latitude of quantities shown in Fig. 4 and Fig. 7 is removed, significant first-baroclinic mode geopotential changes remain over the Sahel and Sahara, but statistically significant shallow circulation changes are largely eliminated (not shown).
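The latitude-detrending procedure amounts to removing a linear fit and regressing the residual. Here is a sketch with synthetic stand-in series; the coupling coefficients below are illustrative only, not the paper's fitted values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic stand-ins for the yearly ERA-Interim diagnostics
shl_lat = rng.normal(21.0, 0.4, 37)                    # SHL latitude (deg N)
dz925 = -5.0 * shl_lat + rng.normal(0.0, 2.0, 37)      # Sahara dZ925 (m)
precip = 0.8 * (shl_lat - 21.0) + rng.normal(2.9, 0.2, 37)

# Step 1: remove the linear dependence of dZ925 on SHL latitude.
fit = stats.linregress(shl_lat, dz925)
detrended = dz925 - (fit.intercept + fit.slope * shl_lat)

# Step 2: test whether any relationship with Sahel precipitation remains.
res = stats.linregress(precip, detrended)
print(f"residual slope = {res.slope:.2f}, p = {res.pvalue:.3f}")
```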
Vertical structure of changes in the SHL circulation
We now examine the vertical structure of interannual variations in the West African monsoon circulation, with focus on the Saharan overturning circulation. We first briefly revisit the poleward shift of the SHL during wet monsoon years, then discuss variations in the strength of the divergent component of the circulation.
The climatological low-level potential temperature is maximum around 22°N (Fig. 7a), poleward of the ITD. As Sahel precipitation increases, there is substantial cooling over the Sahel below 700 hPa, likely due to evaporative cooling of the land surface and reduced surface sensible heat fluxes into the boundary layer. Above 700 hPa, warming occurs poleward of 15°N into the midlatitudes. Potential temperature changes below 700 hPa over the Sahara are not statistically significant.
The ∆Z climatology (Fig. 7b) shows the near-surface SHL centered around 18°N and the mid-tropospheric Saharan High centered around 25°N. The regression of ∆Z onto Sahel precipitation (Fig. 7b) shows that these structures expand or shift northward at every level. The cooling of the Sahel during wet years (Fig. 7a) is, by hydrostatic balance, accompanied by a thinning of the layer below 700 hPa and an anomalous low in the mid-troposphere (Fig. 7b). The cooling of the southern part of the SHL is accompanied by an increase in specific humidity (Fig. 7c) that is larger, in energy units, than the decrease in temperature, so that the low-level equivalent potential temperature, θe, is higher over the Sahel during wet years (Hurley and Boos, 2013). This enhanced low-level θe during wet years is accompanied by a warming throughout much of the middle and upper troposphere over the Sahel, as expected in convective quasi-equilibrium (Emanuel et al., 1994).
The zonal wind (Fig. 7d) closely describes the balanced component of the SHL circulation. During wet Sahel years, the near-surface monsoon westerlies strengthen and expand into the region of the climatological ITD. The AEJ weakens and shifts poleward, in balance with the temperature and geopotential anomalies. The upper-level tropical easterly jet expands poleward, modifying the northern hemisphere subtropical jet stream on its poleward boundary. These are all well-known features of Sahel wet years, but no clear consensus exists on their causal relationship with Sahel rainfall variations (Nicholson, 2013).
The poleward shift in the divergent component of the SHL circulation, the Saharan overturning, can be seen as a meridionally asymmetric quadrupole pattern in the anomalous divergence below 550 hPa (Fig. 7e; note the vertical dipole in the climatological mean fields centered at 20°N). There is also a meridional expansion of the upper-tropospheric divergence associated with the monsoonal ITCZ. This upper-tropospheric feature is also visible in the vertical velocity (Fig. 7f), with more deep ascent over the Sahel during wet years accompanying the anomalous dipole in low-level ascent that indicates a poleward shift in the Saharan overturning.
The meridional wind regression (Fig. 7g) shows an asymmetric quadrupole as well, with the southerly lobes of the quadrupole being spatially larger and of greater amplitude than the northerly lobes. However, this asymmetry in the anomalous meridional wind should not be interpreted as a weakening of the Saharan overturning circulation, because we are considering a limited zonal mean, where the non-divergent component of meridional wind does not have to equal zero, unlike in the global zonal mean. When only the divergent component of the meridional wind is considered (Fig. 7h), the asymmetry in this quadrupole is substantially reduced. Nevertheless, there is meridional asymmetry in the quadrupole of anomalous divergence that is centered over the climatological SHL (Fig. 7e), which would seem to indicate that the poleward shift of the SHL is accompanied by a weakening of the divergent component of the SHL circulation. Indeed, by mass continuity, the increased mass divergence in the upper troposphere that accompanies enhanced Sahel precipitation must be compensated by increased convergence at either low or mid levels. In the next section we examine divergence vertically integrated over lower, middle and upper tropospheric layers to more precisely quantify variations in the divergent component of the SHL circulation.
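Isolating the divergent wind component on the sphere is typically done with spherical harmonics via a velocity potential. The sketch below illustrates the same Helmholtz-type decomposition on a doubly periodic Cartesian grid with FFTs; it is a simplification for illustration, not a substitute for the spherical calculation used on reanalysis data.

```python
import numpy as np

def divergent_wind(u, v, dx, dy):
    """Divergent (irrotational) wind on a doubly periodic Cartesian grid.

    Solves lap(chi) = div(u, v) spectrally for the velocity potential chi,
    then returns (d chi/dx, d chi/dy)."""
    ny, nx = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky)
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * KX * uh + 1j * KY * vh          # spectral divergence
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                               # avoid division by zero
    chi_h = -div_h / k2
    chi_h[0, 0] = 0.0                            # mean of chi is arbitrary
    u_div = np.real(np.fft.ifft2(1j * KX * chi_h))
    v_div = np.real(np.fft.ifft2(1j * KY * chi_h))
    return u_div, v_div

# Example on a 128 x 64 periodic grid with 100 km spacing (synthetic winds)
u = np.random.randn(64, 128)
v = np.random.randn(64, 128)
u_div, v_div = divergent_wind(u, v, 1.0e5, 1.0e5)
```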
Changes in layer-integrated divergence

The vertical section of divergence (Fig. 7e) allows identification of three layers that largely capture the changes in the divergent circulation: the lower troposphere (1000-800 hPa), the middle troposphere (800-550 hPa), and the upper troposphere (350-150 hPa). Taking a mass-weighted vertical integral of divergence over each layer (and over the remaining 550-350 hPa layer, in which little divergence occurs), we see the climatological signatures of the divergent Hadley and SHL circulations (Fig. 8a). In the lower layer, the largest convergence is due to the shallow circulation, which peaks near 20°N, but there is also some convergence in the ITCZ near 8°N. The middle troposphere exhibits large divergence at 20°N and substantial convergence at 10°N. Upper-level divergence peaks around 8°N, the latitude of maximum precipitation and deep ascent, and the roughly equal magnitude and opposite signs of upper-tropospheric divergence and mid-tropospheric convergence at that latitude indicate that time-mean inflow to the deep, continental convergence zone occurs not near the surface but in the lower mid-troposphere. There is upper-level convergence poleward of about 15°N, consistent with the northern Sahel and Sahara being regions of time-mean subsidence. Divergence in the 550-350 hPa layer is comparatively small, and divergence above 150 hPa is smaller still (not shown).
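The mass-weighted layer integrals used here take the form (1/g) ∫ D dp over each pressure layer. A minimal sketch follows; the level ordering and units are assumptions about the input arrays, and the field is a synthetic stand-in.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s-2

def layer_integrated_div(div, plev_hpa, p_bot, p_top):
    """Mass-weighted vertical integral (1/g) * int D dp over a pressure
    layer, in kg m-2 s-1. div has shape (nlev, ...) on levels plev_hpa
    (hPa, any order); p_bot and p_top are the layer bounds in hPa."""
    order = np.argsort(plev_hpa)
    p = np.asarray(plev_hpa)[order] * 100.0            # Pa, ascending
    d = div[order]
    sel = (p >= p_top * 100.0) & (p <= p_bot * 100.0)
    return np.trapz(d[sel], x=p[sel], axis=0) / G

# The three layers used in the text (hPa), with a stand-in divergence field
plev = np.array([150.0, 350.0, 550.0, 700.0, 800.0, 925.0, 1000.0])
div = np.random.randn(plev.size, 4, 5) * 1e-6          # divergence, s-1
lower = layer_integrated_div(div, plev, 1000.0, 800.0)
middle = layer_integrated_div(div, plev, 800.0, 550.0)
upper = layer_integrated_div(div, plev, 350.0, 150.0)
```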
Regressing layer-integrated divergence onto Sahel precipitation (Fig. 8b) reveals a meridional dipole indicating a poleward shift of the climatological mean divergence field in the lower and mid layers, as well as a single-signed increase in upper-tropospheric divergence over the region. The asymmetry in the meridional shift is more clearly evident than in our previous depiction, with stronger changes in the equatorward lobe of the dipoles, implying that the Saharan overturning circulation weakens as it shifts poleward.
To assess the statistical robustness of this weakening of the Saharan overturning, we first note that the changes in divergence are almost entirely confined to the region between 10°N and 25°N. So we horizontally average the layer-integrated divergences over 10°N-25°N, 10°W-25°E, thereby removing the antisymmetric component of the dipole and leaving a residual that corresponds to a net strengthening or weakening of divergence in each layer over the combined Sahel-Sahara region. There is some sensitivity to the meridional bounds used in this procedure, in particular the 10°N bound. However, using an alternate bound (such as 5°N) decreases the magnitude of the net divergence variations but does not qualitatively change the result.
The result of this area-averaging of the layer-integrated divergence shows that during wet Sahel years, upper-level divergence increases, mid-level divergence decreases, and low-level divergence increases (Fig. 9a). A strengthening of the Saharan overturning circulation would consist of a decrease in low-level divergence (enhanced convergence) and an increase in mid-level divergence; our area-averaged results have the opposite sign. Furthermore, Fig. 9b shows mid-level divergence is strongly anticorrelated with upper-level divergence, indicating that the enhanced upper-tropospheric divergence during wet monsoon years is balanced, in the time-mean column mass budget, by enhanced mid-tropospheric convergence. This balance is quantitatively confirmed by the fact that the regression coefficient relating upper- and mid-level layer-integrated convergence is approximately -1. Interannual variations in the deep, precipitating monsoon circulation thus cannot be captured by a classic first-baroclinic mode that has maximum convergence near the surface and divergence at upper levels. Thorncroft et al. (2011) showed that the climatological mean moisture flux convergence in the Sahel has a complicated vertical structure with a weak maximum in our mid-tropospheric layer associated with flow in the Saharan overturning circulation, so it is perhaps not surprising that variations in the flow also do not have a simple classical structure. This issue is discussed further in the next section in the context of our idealized simulations.
Model of a weakening and shifting SHL circulation
Our idealized WRF model integrations, performed at 15 km horizontal resolution on a zonally periodic β-plane, are detailed in section 2 and in Shekhar and Boos (2016). A variety of surface albedo and SST forcings were applied individually about a control state to form an ensemble of model integrations. Instead of examining interannual variability within individual integrations, we look at the intra-ensemble variability of the long-term time-mean state and compare it to interannual variability within the reanalyses. Due to the zonally symmetric boundary conditions of the idealized model, the time-mean zonal wind is non-divergent and there are no large-scale dynamical forcings such as those associated with ENSO or the South Asian monsoon, which could produce differences between observed interannual variability and the model intra-ensemble variability. Nevertheless, we find quantitative similarities in the statistical association between simulated monsoon precipitation and multiple dynamical variables. The ensemble members are strongly forced (e.g. Saharan albedo changes of 0.1 to 0.2), so represent a wider range of ITCZ and SHL locations than is observed in the historical record. Fig. 10 shows how the mass streamfunction (obtained using the method of Döös and Nilsson, 2011) varies across ensemble members ordered from the driest to the wettest Sahel (the model Sahel is also defined as the region 10-20°N). In the lowest precipitation state, deep ascent peaks at 8°N, and ascent in the SHL is well separated with a peak at 17°N. In this state, the summer Hadley cell is strong, the cross-equatorial winter Hadley cell is relatively weak, and the SHL circulation is relatively strong. As precipitation increases over the Sahel box (e.g. Fig. 10c), the ITCZ moves poleward into the continent, the winter Hadley cell strengthens, the summer Hadley cell weakens, and the separation between the SHL ascent and the ITCZ decreases. As Sahel precipitation increases further (e.g. Fig. 10e), the winter Hadley cell continues to strengthen, the summer Hadley cell continues to weaken, and the shallow SHL ascent begins to merge with the ITCZ.
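For reference, the standard pressure-coordinate meridional mass streamfunction, which Döös and Nilsson (2011) generalize to other vertical coordinates, can be sketched as below; the array conventions are assumptions about the input data.

```python
import numpy as np

A = 6.371e6   # Earth radius, m
G = 9.81      # gravitational acceleration, m s-2

def mass_streamfunction(v_zm, lats, plev_hpa):
    """Zonal-mean meridional mass streamfunction in pressure coordinates,
    psi = (2 pi a cos(lat) / g) * int [v] dp integrated from the top of
    the atmosphere down, in kg s-1. v_zm has shape (nlev, nlat), and
    plev_hpa must be sorted ascending (top of atmosphere first)."""
    p = np.asarray(plev_hpa) * 100.0                   # Pa
    dp = np.gradient(p)
    coslat = np.cos(np.deg2rad(lats))
    integral = np.cumsum(v_zm * dp[:, None], axis=0)   # int of [v] dp
    return 2.0 * np.pi * A * coslat[None, :] / G * integral
```

For the channel model, the factor 2πa cos(lat) would be replaced by the (constant) zonal width of the domain.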
Quantitatively comparing our idealized model with observations is complicated by the task of choosing an appropriate region over which to average precipitation and divergence. In the reanalyses, the Sahel (10°N-20°N) always lies on the poleward edge of the ITCZ and the ascending branch of the SHL circulation is centered in the region over which we averaged divergence (10°N-25°N). In the idealized model, the ITCZ and SHL move over a much wider latitude band, with the ITCZ centered south of the averaging region in some integrations and squarely within it in others (Fig. 10). Nevertheless, ascent in the model SHL always lies between 10°N and 25°N, so variations in the strength of mid-tropospheric divergence produced by the SHL circulation should be well captured by averages of the layer-integrated divergence between those latitude bounds. For this reason, we average precipitation and layer-integrated divergence over the same regions chosen for the reanalyses.
As expected, the observed interannual variability of both Sahel precipitation and SHL latitude is much smaller than variability in the model ensemble (Fig. 11a). There is rough quantitative agreement between the regression coefficients based on observed and simulated variables: the 95% confidence interval for the slope of model Sahel precipitation regressed on model SHL latitude overlaps with that of ERA-Interim but does not overlap with the MERRA2 interval. The idealized model also exhibits associations between Sahel precipitation and upper-level divergence, mid-level convergence, and area-averaged, layer-integrated humidity that are quantitatively similar to those seen in observed interannual variability (Fig. 11b). This agreement is remarkable given that the model was not tuned to observed interannual variability: these simulations were performed for a different study (Shekhar and Boos, 2016) that was completed before this analysis was undertaken. However, the idealized model disagrees with observations in that it simulates enhanced low-level convergence when there is enhanced Sahel precipitation (recall that the reanalyses indicate a weak reduction of low-level convergence during anomalously rainy years). Enhanced precipitating ascent in the idealized model thus seems to be better described by a classic first-baroclinic mode vertical structure than in reanalyses. Whether this means that the model is unsuitable for representing interactions between the monsoonal ITCZ and the SHL circulation is unclear, in large part because there has been little study of the implications of deviations from a first-baroclinic mode structure for variability in monsoons.

Figure 11: 68% (thick) and 95% (thin) confidence intervals for regression slope. Slopes shown for layer- and area-averaged divergence (10⁻⁶ s⁻¹ mm⁻¹ day) and specific humidity (g kg⁻¹ mm⁻¹ day) regressed onto Sahel precipitation. For the WRF data points, due to a different number of degrees of freedom (11), critical R values are 0.552, 0.683, and 0.800 at the 0.05, 0.01, and 0.001 significance levels.
Despite some bias in the vertical structure of the ITCZ, this idealized model clearly simulates a weakening and poleward shift of the Saharan overturning circulation in states with enhanced Sahel precipitation. Furthermore, the model results suggest that the circulation over West Africa exists on a continuum. At one end of the continuum, dry states have a coastal ITCZ close to the equator, large separation between the ITCZ and ascent in the SHL, and a strong Saharan overturning circulation with abundant mid-tropospheric divergence. At the other end of the continuum, the ITCZ is positioned much further poleward in a continental location, the winter Hadley cell is stronger while the summer Hadley cell is weaker, the overturning mass flux and mid-tropospheric divergence in the SHL circulation are weaker, and the ascending branches of the SHL and ITCZ have begun to merge to produce a vertical structure closer to that of the first-baroclinic mode ascent common in deep convective regions.
Discussion
Previous work has found intriguing associations between Sahel precipitation and the SHL, but the mechanism responsible for these associations has remained unclear. A close reading of previous literature reveals contradictory results concerning even the sign of the association, with some arguing that a stronger SHL causes increased Sahel precipitation (e.g. Haarsma et al., 2005; Biasutti et al., 2009; Lavaysse et al., 2009, 2010a) while others argue that a stronger SHL circulation weakens Sahel precipitation or at least is correlated with weaker Sahel precipitation (Lavaysse et al., 2010b). Previous studies have not used a consistent definition for the term "Saharan Heat Low", which complicates comparison of prior results.
This study shows that the association of the SHL with Sahel precipitation is best described as a poleward shift and weakening of the SHL circulation during wet Sahel years. We showed that 925 hPa geopotential height over the Sahara was negatively correlated with Sahel precipitation, quantitatively reproducing the results of Biasutti et al. (2009). However, the decrease in geopotential was located north of the climatological mean geopotential minimum, suggesting a northward expansion or shift, rather than an intensification, of the low-level trough during wet monsoon years. Changes in the thickness of the lower troposphere had a meridional dipole structure centered on the zonally elongated climatological maximum in thickness, which is also indicative of a poleward shift in the heat low. When the linear relationship of 925 hPa geopotential with the latitude of the heat low was statistically removed, no statistically significant relationship remained between Sahel precipitation and Saharan 925 hPa geopotential height.
Weakening of the SHL circulation was best seen through examination of the divergent component of the flow. Upper-tropospheric divergence over the Sahel increased during wet years, as expected for a deep, precipitating monsoon circulation. Shallow ascent in the Saharan overturning circulation shifted poleward and weakened during wet years, as evidenced by the meridionally asymmetric dipole of anomalous vertical velocity in the lower troposphere over the Sahara (Fig. 7f). Asymmetric meridional dipoles were also seen in the divergence integrated over the lower and middle troposphere, confirming this weakening and poleward shifting of the shallow circulation. The increased upper-level divergence during wet years is balanced, in the column-integrated mass budget, by increased convergence in the lower mid-troposphere, indicating some departure from classic first-baroclinic mode structures that have maximum convergence near the surface. Nevertheless, these results suggest a trade-off between the shallow and deep modes of vertical ascent, where unusually wet years exhibit a strong deep circulation and weak shallow SHL circulation.
An idealized model of West Africa was used to produce an ensemble of integrations forced by applied SST and land surface albedo anomalies. This ensemble explores a variety of climatic states with a much greater range than that of interannual variations in reanalyses. Nevertheless, without any tuning, the intra-ensemble variability of the idealized model climatological means exhibits a relationship between the Saharan overturning circulation and Sahel precipitation similar to that seen in observed interannual variability. Increases in deep, precipitating ascent in the model were better described by a classic first-baroclinic mode than they were in reanalyses, but both the model and the reanalyses clearly showed a weakening of mid-tropospheric divergence in the SHL circulation as monsoon precipitation increased. This is consistent with results from another idealized model in which dry and warm outflow from the Saharan High weakened Sahel precipitation. Our observational results disprove the hypothesis that increased Sahel precipitation is caused by a strengthening of shallow divergent flow in the SHL circulation. To be clear, the underlying cause of changes in the idealized model was anomalies in SST and land surface albedo, but variations in the SHL circulation could be part of the mechanism by which those forcings influence Sahel precipitation.
One major caveat is worth noting. Fig. 2c showed that interannual variations in GPCP precipitation exhibit a "monopole" pattern over West Africa since 1979. The reanalysis precipitation fields in ERA-Interim and MERRA2 show more of a dipole pattern of interannual variability over this period, with a statistically significant decrease in precipitation over the Gulf of Guinea (5-10°N) during wet Sahel years (not shown). This dipole pattern of precipitation anomalies, which is consistent with a meridional shift of the ITCZ and which characterized rainfall variations earlier in the twentieth century (Losada et al., 2012), has not been seen in precipitation measurements after the 1970s. So it seems possible that the reanalyses are representing a biased spatial pattern of precipitation variability during the past few decades, assuming the precipitation observations are not themselves in error. This uncertainty in precipitation over the coastal Gulf of Guinea region influenced our decision to use 10°N as the southern boundary when calculating area-averaged, layer-integrated divergence, so that this uncertain region is excluded. Even if the reanalyses have substantial error in their representation of interannual variability over this region, it seems unlikely that this error would compromise the qualitative nature of our results (e.g. change the sign of correlations between Sahel precipitation and geopotential height, divergence, and ascent over the Sahara in two reanalysis products). Confirmation of some of these associations in an idealized model lends further confidence in our results. Nevertheless, it is good to bear in mind that reanalyses have bias, even while they remain a useful tool for understanding historical atmospheric variability over the last few decades.

Important questions remain. Does the association between a weak Saharan overturning circulation and increased Sahel rainfall also hold on intraseasonal or synoptic time scales? Lavaysse et al. (2010a), Lavaysse et al. (2010b), and Evan et al. (2015) examine certain features of the SHL circulation on these timescales and provide evidence for relationships of both signs. More fundamentally, what mechanism causes Sahel precipitation to weaken while the SHL circulation strengthens? Evidence from an idealized two-dimensional model indicates that Sahel rainfall can be weakened by warm and dry mid-tropospheric outflow in the SHL circulation. Another possibility is that the deep, precipitating circulation and the shallow SHL circulation are both responding independently to some external forcing. Further work is required to determine which mechanism operates in reality and whether it is relevant to variability in other monsoon regions.
"year": 2017,
"sha1": "019e0c325654b741f97b9e1c16018fe083827541",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.1175/jcli-d-16-0696.1",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "019e0c325654b741f97b9e1c16018fe083827541",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Physics"
]
} |
Assessing the investment viability of Indonesia's upstream electric vehicle (EV) sector stocks amidst the COVID-19 pandemic
Amidst the global push for sustainability, the burgeoning electric vehicle (EV) industry has driven increased demand for batteries, placing the Indonesian nickel ore sector in a pivotal position due to its vast reserves. This study examines the investment landscape of this sector, applying portfolio optimization techniques to four major nickel ore mining firms in Indonesia. Through Monte Carlo simulations, the study identifies the optimal portfolios of risky assets and compares their performance before and during the COVID-19 pandemic. Findings reveal a significant shift in portfolio composition during the pandemic, reflecting investors' response to global disruptions through greater diversification. Notably, the Sharpe ratio, a measure of risk-adjusted return, increased markedly during the pandemic. The transformation in portfolio weights and the corresponding increase in risk-adjusted returns highlight the sector's resilience and its potential as a lucrative investment avenue, especially during periods of global economic uncertainty such as the COVID-19 pandemic.
Introduction
East Indies. After Indonesia gained independence, the American Freeport Sulphur Company's attempt to establish operations failed due to security concerns. Local company NV Perto took over operations until it was eventually acquired by the Indonesian government. Following this takeover, the government restructured the company into PN Pertambangan Nickel Indonesia in 1961. Later, it merged with Aneka Tambang, which began exporting nickel ore to Japan by 1969. The International Nickel Company also initiated metallurgical operations in the 1970s, culminating in commercial nickel production starting in 1978 (Irwandy, 2018).
Despite the passage of over four decades since the initial prospecting and exploration efforts in Indonesia's nickel ore reserves, the nation's significance as a hub of nickel production has only grown, particularly in light of the burgeoning electric vehicle (EV) industry.
Reflecting this heightened importance, Indonesia is expected to contribute approximately half of the global nickel production growth anticipated between 2021 and 2025 (Huber, 2021).
Indonesia, recognized as a prominent hub for nickel ore production with abundant reserves and a strategic role in the global nickel supply chain, took a significant step by enforcing a nickel export ban, effective January 2020 under Regulation of the Minister of Energy and Mineral Resources 11/2019 and the amendment in Law Number 3 of 2020, in order to process the ore domestically and thereby add value to its nickel production.
Following the implementation of nickel export restrictions, the European Union (EU) filed a complaint with the World Trade Organization (WTO) to safeguard its export operations.
The EU contended that Indonesia's policy of banning nickel exports could potentially disrupt the global stainless-steel industry, given the crucial role of nickel ore in both stainless steel and battery production. This action by the EU resulted in several notable consequences.
Firstly, it prompted the EU to initiate proceedings before the WTO's Dispute Settlement Body (DSB), with a primary request for Indonesia to resume nickel exports. Secondly, Indonesia responded by considering an increase in the tax ratio applicable to nickel exports.
Thirdly, in response to these developments, various new policies were introduced, particularly concerning investments in smelting and downstream industries, details of which will be elaborated upon in the subsequent discussion (Meirizal et al., 2023). The significance of raw materials (nickel ore) in the global supply chain cannot be understated, as they exert a profound influence on the sustainability of global products. These materials serve as the foundation of economic activities, exerting an impact on economic growth, innovation, and overall competitiveness (Bleischwitz & Perincek, 2017).
The ban on nickel ore exports, however, has yielded positive outcomes for Indonesia's collaboration with China, primarily evident in increased Chinese investments within Indonesia, particularly in the metal processing sector, and the establishment of smelters aimed at processing existing nickel ore into ferronickel or nickel pig iron. Furthermore, effective management of derivative products like stainless steel has significantly contributed to the export value of Indonesian products (Cristina, 2022). While this partnership may appear to disproportionately benefit China's national interests, Indonesia can leverage these opportunities to develop smelter infrastructure in various locations. This exemplifies an evolving diplomatic approach, highlighting Indonesia's strategic use of diplomacy to secure protection, attain objectives, and serve national interests (Meirizal et al., 2023). The reinforcement of cooperation between Indonesia and China through smelter construction underscores how Indonesia's diplomatic efforts can be bolstered by Chinese assistance.
The Indonesian government aims to become the biggest player in the EV industry. This ambition rests on Indonesia's nickel ore reserves of 5.24 billion wet metric tons, about 27% of the world total (Sembiring, 2021; Ridwan, 2022). The development of the battery industry will also increase the attractiveness of Indonesia as an investment destination for derivative industries that use batteries, such as investment in electric motors, electric buses, and electric cars. Despite Indonesia's earnest efforts to advance its national interests, the year 2020 brought significant challenges as the nation grappled with volatile fluctuations in nickel ore prices amid the global COVID-19 pandemic. In March 2020, the outbreak of COVID-19 and the broad measures required to prevent its spread resulted in a decline in world nickel prices due to reduced demand. Nickel prices fell sharply in March 2020, declining to $11,142 per ton before gradually rebounding the next month to a peak of $19,722 per ton (Sandria, 2021). Subsequently, in April 2020, a regulation imposing a minimum nickel price was introduced, taking effect on May 13, 2020. This regulatory measure was initiated at the behest of the Indonesian Nickel Miners Association (APNI) to safeguard the interests of small-scale miners. As of January 21, 2022, the nickel price reached its highest level since February 7, 2012, at USD 24,028/ton due to reduced world nickel supplies (Andriato, 2022). World nickel prices also increased because of the Russia-Ukraine war that began on February 24, 2022; Russia is a producer that contributes 10% of the world's nickel (Pickrell, 2022). It is not surprising that this had an impact on nickel prices as well. Nickel prices experienced their largest jump on Tuesday, March 8, 2022, when prices published by the London Metal Exchange (LME) gained sharply, reaching US$101,350 per dry metric ton, around 110.80 percent above the previous trading session (Muddasir, 2022; Asmarini, 2022).
The study of electric vehicles (EVs) has garnered significant attention, driven not only by the industry's steadfast commitment to sustainability and addressing environmental challenges but also by the surge in demand for key raw materials like nickel ore. With nickel being a critical component in EV batteries, the EV industry's rapid expansion has intensified the focus on securing a sustainable and responsible supply chain for nickel ore. As EV manufacturers increasingly prioritize eco-friendly practices and technologies, their commitment to sustainability and responsibility has become a central focus, reflecting a growing trend towards environmentally conscious business practices in pursuit of a greener future. Studies by Flammer (2015) and Albuquerque et al. (2020) found that sustainability plays a pivotal role in building brand equity and fostering brand loyalty for responsible businesses. This, in turn, translates into increased profitability and reduced vulnerability to systematic risks and economic downturns. Responsible businesses also often excel due to their high-quality management, as Siddiq and Javed (2014) indicated. These businesses tend to attract ethical managers with commendable values in their business conduct, employee treatment, and societal interactions. Becchetti et al. (2015), Nakai et al. (2016), and Chiappini and Vento (2018) have discovered that responsible businesses can draw in steadfast investors. These investors, driven by nonfinancial considerations such as environmental, social, and governance (ESG) concerns, exhibit unwavering support for responsible companies, even during turbulent crisis periods when the broader investor community tends to divest its holdings.
A significant gap exists in the research landscape regarding the comparative performance of responsible investments versus conventional investments during economic downturns, with findings yielding mixed results. While some studies suggest that responsible investments exhibit superior performance (Tripathi & Bhandari, 2016; Risalvato et al., 2019; Arefeen & Shimada, 2020), others present opposing or inconclusive outcomes (Leite & Cortez, 2015; Morales et al., 2019; Lean & Pizzutilo, 2020). Moreover, a growing body of academic research has explored the repercussions of the COVID-19 pandemic on financial markets, as evidenced by studies conducted by Folger-Laronde et al. (2020), Heyden and Heyden (2020), and Sakurai and Kurosaki (2020).
Despite this growing body of academic research investigating the effects of the COVID-19 pandemic on financial markets, knowledge of the performance of investment decisions shaping portfolios of upstream EV sector (nickel ore mining) stocks during this crisis in Indonesia, a key player in the global nickel industry, remains underdeveloped. To address this gap, our study conducts a comprehensive analysis evaluating the performance of such investment decisions within the context of the COVID-19 pandemic. The study also offers a unique perspective relative to previous work on predictive analytics in finance: whereas earlier studies focused on prediction models and data analytics (Patria & Adrison, 2015; Patria, 2021; Patria, 2022), our primary focus is the practical application of these models in assessing the viability of investments in the EV sector amid the economic challenges posed by the Pandemic. Through empirical investigation, we endeavor to provide valuable insights into the relative performance of these investments, contributing to a deeper understanding of responsible investing within the realm of global crises.
Nickel Benchmark
Four years after the establishment of the coal benchmark price, the Indonesian government expanded its regulatory framework by introducing a formula for determining the monthly benchmark price for both base and precious metals. This regulatory step was formalized through Directorate General of Mineral and Coal (DGMC) Regulation No. 630.K/32/DJB/2015, issued on April 27, 2015, which specifically outlines the Formula for Determining Metal Minerals Benchmark Price (DGMC 630/2015). Notably, DGMC 630/2015 unequivocally states that this benchmark price formula is universally applicable, encompassing all mining companies operating within Indonesia, including those holding a Mining Business License and those with a Contract of Work (Kontrak Karya), and covering both domestic sales and mineral exports. DGMC 630/2015 encompasses a comprehensive range of twelve mineral types: nickel, cobalt, lead, zinc, bauxite, iron, silver, gold, tin, copper, manganese, and chromium. Because mineral types and grades vary, the benchmark price is calculated from the average mineral prices between the 20th day of the second month before a specific benchmark price period and the 19th day of the preceding month, as published in the relevant price reference. Four distinct price references are used in DGMC 630/2015: the London Metal Exchange (LME) for nickel, cobalt, lead, zinc, bauxite, and copper; the London Bullion Market Association (LBMA) for gold and silver; Asian Metal (AM) for iron, manganese, and chromium; and the Indonesia Commodity & Derivatives Exchange (ICDX) for tin. In this study, we utilize data from the Mineral Ore Benchmark Price (HPM) - Reference Mineral and Coal Price Table in ICDX and LME, aligning with Indonesian nickel regulations.
Stock Share & Prices
Shares are evidence of a person's or entity's involvement or ownership in a corporation or limited liability business. A certificate stating that the bearer of a share is an investor in the corporation that issued the securities is known as a stock. The percentage of ownership is determined by the amount of money invested in the business (Loderer & Martin, 1997). The determination of stock prices on the stock market, on the other hand, is shaped by market players involved in the demand for and supply of shares, so stock price fluctuations are a typical occurrence. Stock prices can rise or fall due to a variety of variables, which fall into two categories: internal factors, which exist within the organization, and external factors, which stem from events outside the firm. Of the two, external factors are the more difficult for a company to control. Corporate actions, prospects of future company performance, exchange rate fluctuations, and fundamental macroeconomic conditions are the main external factors influencing a stock (Utami & Nugroho, 2017). In Indonesia, nickel is also traded on the Indonesia Commodity & Derivatives Exchange (ICDX).
Monte Carlo Simulation and Sharpe Ratio Value
Modern portfolio descriptions rely on fundamental statistical measures such as expected return, asset or portfolio standard deviation, and return correlations.In general, risk can be mitigated by diversifying single assets into a portfolio, especially when returns exhibit less than perfect positive correlation.Portfolio management embraces the concept of risk reduction through the inclusion of various securities (Markowitz, 1952), recognizing the potential for enhancing risk-adjusted returns by constructing portfolios that balance the tradeoff between risk and return.
Monte Carlo simulation, a mathematical method, assesses potential outcomes of uncertain events by leveraging mathematical relationships between outputs and inputs, along with the probability distributions associated with these inputs (Robert & Casella, 2004;Truong, 2021).
According to Rubinstein and Kroese (2004), Monte Carlo simulation stands as a powerful tool for scrutinizing uncertainty, allowing us to discern how alterations in distributions or errors affect the sensitivity, performance, or reliability of the modeled system. What sets Monte Carlo simulation apart is its status as a real-world sampling method, requiring the model to select an input distribution that best mirrors the available data. Monte Carlo simulation can not only identify the least-variance portfolio, achieving a well-diversified portfolio with the lowest attainable risk for a given expected return, but also enables a thorough assessment of portfolio performance in terms of risk-adjusted returns. The Sharpe Ratio, developed by William Sharpe, quantifies this trade-off between risk and return and thereby supports informed investment decision-making. The portfolio with the highest Sharpe ratio, signifying the best-predicted return per unit of risk, is referred to as the tangency (optimal) portfolio (Cheong et al., 2017; Qudratullah, 2021; Ulfa et al., 2022; Syarif et al., 2022).
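For reference, the Sharpe ratio used below can be written explicitly as

$$ S_p = \frac{E(r_p) - r_f}{\sigma_p}, $$

where $E(r_p)$ is the expected portfolio return, $\sigma_p$ the portfolio standard deviation, and $r_f$ the risk-free rate; the tangency portfolio is the feasible weight vector that maximizes $S_p$. Note that the paper does not state its risk-free rate assumption, and some implementations simply set $r_f = 0$.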
Methods
In terms of data collection, daily price data for the top four major players in Indonesia's EV upstream sector (ANTM, HRUM, INCO, NICL) were gathered from Yahoo Finance using the 'yfinance' library in Python. This study relies on secondary data covering the period from 2017 to 2022. The study's population comprises Indonesian companies operating in the nickel ore mining sector listed on the Indonesian Stock Exchange (IDX); of this population, 4 (four) companies met the criteria. The daily price data were divided into two distinct sub-periods: Period 1 spans from March 15, 2019, to March 15, 2020 (pre-pandemic), and Period 2 extends from March 16, 2020, to April 12, 2022 (pandemic era). Worth noting, NICL (PAM Mineral), an enterprise newly listed in July 2021 during the Pandemic, had no stock data available before the COVID-19 pandemic and was therefore excluded from the pre-pandemic Monte Carlo simulation. Period 2 witnessed a significant drop in nickel prices, attributed to the rise in COVID-19 cases, followed by a gradual recovery to a peak price of 48,226 USD/T; notably, Period 2 also encompassed the Russia-Ukraine War, which commenced on February 24, 2022.
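As an illustration of this data-collection step, a minimal sketch using the 'yfinance' library is shown below. This is not the authors' code: the '.JK' ticker suffixes follow the usual Yahoo Finance convention for IDX-listed stocks and are an assumption here, as the paper does not list the exact symbols used.

```python
# Illustrative sketch of the data-collection step, not the authors' code.
# The ".JK" suffix is the usual Yahoo Finance convention for IDX-listed
# tickers and is an assumption; the paper does not list exact symbols.
import yfinance as yf

TICKERS = ["ANTM.JK", "HRUM.JK", "INCO.JK", "NICL.JK"]  # assumed Yahoo symbols

# Pandemic-era window (Period 2) as defined in this study.
prices = yf.download(
    TICKERS, start="2020-03-16", end="2022-04-12", auto_adjust=False
)["Adj Close"]

# NICL only has data from its July 2021 listing, so earlier rows are NaN for it.
prices = prices.dropna(how="all")
print(prices.tail())
```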
In order to examine the optimal portfolio model, we calculate daily returns for upstream EV battery sector stocks, particularly those related to nickel ore, using Equation (1):

$$ r_{it} = \frac{P_{it} - P_{i,t-1}}{P_{i,t-1}} \quad (1) $$

In this analysis, we focus on the daily returns $r_{it}$ of EV stock $i$, where $P_{it}$ represents the price of EV stock $i$ at time $t$, and $P_{i,t-1}$ represents the price of the same stock at time $t-1$.
Subsequently, we calculate the expected return, denoted as $E(r_p)$, and the standard deviation, represented as $\sigma_p$, for the portfolio of stocks, based on Equations (2) and (3):

$$ E(r_p) = \sum_{i=1}^{n} w_i \, E(r_i) \quad (2) $$

$$ \sigma_p = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} w_i w_j \,\mathrm{Cov}(r_i, r_j)} \quad (3) $$

To determine the portfolio weight of each stock, $w_i$, we utilize the bordered covariance matrix, following the methodology outlined by Bodie et al. (2014). Monte Carlo simulation is then employed to identify the efficient set of portfolios, commonly referred to as the Efficient Frontier of risky assets. As per Modern Portfolio Theory (MPT), the Efficient Frontier comprises the optimal portfolios along the risk-return spectrum. Hence, the portfolio combinations within the Efficient Frontier provide the highest return for a given level of risk and the lowest risk for a given level of return, offering valuable insights for portfolio optimization and risk management.
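The following minimal sketch illustrates how such a Monte Carlo search over random weight vectors can be implemented in Python. The simulation size, annualization factor, and zero risk-free rate are illustrative assumptions rather than the authors' settings.

```python
# Minimal Monte Carlo sketch of the portfolio search described above; the
# simulation size, annualization, and rf=0 are assumptions, not the authors'.
import numpy as np

def monte_carlo_frontier(daily_returns, n_portfolios=50_000, rf=0.0, seed=0):
    """daily_returns: (T, n_assets) array of r_it = (P_t - P_{t-1}) / P_{t-1}."""
    rng = np.random.default_rng(seed)
    mu = daily_returns.mean(axis=0) * 252            # annualized expected returns
    cov = np.cov(daily_returns, rowvar=False) * 252  # annualized covariance
    n = len(mu)

    weights = rng.random((n_portfolios, n))
    weights /= weights.sum(axis=1, keepdims=True)    # long-only, weights sum to 1

    exp_ret = weights @ mu                           # E(r_p), Equation (2)
    exp_std = np.sqrt(
        np.einsum("pi,ij,pj->p", weights, cov, weights)  # sigma_p, Equation (3)
    )
    sharpe = (exp_ret - rf) / exp_std

    best = sharpe.argmax()     # tangency (maximum-Sharpe) portfolio
    safest = exp_std.argmin()  # minimum-variance portfolio
    return weights[best], sharpe[best], weights[safest]
```

Plotting `exp_std` against `exp_ret` for all sampled portfolios reproduces the cloud whose upper-left boundary approximates the Efficient Frontier shown in the figures.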
The Prices of the Upstream Sector of EV (Nickel Ore Mine) Stocks
Analyzing the stock price trends of the four major players in Indonesia's upstream EV sector (nickel ore mining) from 2017 to 2021, we observe that HRUM consistently traded at significantly higher prices than the other nickel ore mining companies. However, a trend common to all key players in Indonesia's upstream EV sector was the price drop on the day of the WHO COVID-19 pandemic announcement (March 11, 2020). This observation aligns with the idea that financial markets tend to move in tandem during crises, as supported by prior research (Goodell & Goutte, 2021; Yunus, 2023; Chang et al., 2023).
Remarkably, in contrast to the backdrop of the ongoing COVID-19 pandemic since its WHO announcement, all key players in Indonesia's upstream EV sector have experienced significant price surges.Notably, HRUM, ANTM, and INCO have displayed remarkably strong stock performance during the pandemic when compared to the pre-pandemic period.
This intriguing resilience in stock performance during a period of global uncertainty highlights the investment attractiveness of upstream EV stocks, plausibly driven by factors discussed later in this paper, such as government nickel-downstreaming policies and supply disruptions from the Russia-Ukraine war. It suggests that the sector remains appealing to investors seeking opportunities even in challenging economic environments, further solidifying its allure in the investment landscape.
Optimal Portfolio Before the COVID-19 Pandemic
We proceed to explore the optimal portfolio allocation for the four key players in the upstream sector of EV stocks by seeking the portfolio with the highest Sharpe ratio, indicating the portfolio's ability to yield the highest excess return relative to its risk (standard deviation). Utilizing Monte Carlo simulation in Python, we find that HRUM commands the highest optimal portfolio weight at 95.01%, followed by ANTM at 2.66% and INCO at 2.32%, while NICL holds no weight, as it had not yet been listed (see Tables 2 and 3 for details). In Figure 4, we present the Efficient Frontier of the upstream EV stock sector before the pandemic, while Figure 5 illustrates the plot showcasing the maximum Sharpe ratio and minimum variance portfolios. The optimal portfolio within the upstream EV stock sector exhibits an expected return of -16.63% and an expected standard deviation of 27.38%. Notably, the Sharpe ratio of -0.6072 indicates that the excess return of the stock portfolio amounts to -60.72% of the portfolio risk level.
Optimal Portfolio During the COVID-19 Pandemic
Throughout the COVID-19 pandemic, there were substantial price fluctuations among the top four market-cap companies in the upstream sector of EV stocks. HRUM exhibited the highest average price at 5,168 US Dollars (USD), while NICL had the lowest at 96 USD (refer to Table 4). HRUM also recorded the highest maximum price for upstream EV (nickel ore mining) stocks at 13,750 USD, in contrast to NICL, which had the lowest maximum price at 318 USD per stock. Notably, HRUM showed the highest stock price volatility during the pandemic, with a standard deviation of 3,458 USD, while NICL had the lowest at 48 USD (see Table 4).
Consequently, the overall standard deviation of the upstream sector of EV stocks during the pandemic significantly exceeded that of the pre-pandemic period.
In Figure 6, we can observe the Efficient Frontier of the upstream sector of EV stocks during the COVID-19 pandemic, while Figure 7 provides a visualization of the maximum Sharpe ratio and minimum variance portfolios. The maximum Sharpe ratio represents the optimal portfolio, while the minimum variance portfolio embodies the lowest-risk stock portfolio. It is worth emphasizing that the optimal portfolio for nickel ore stocks during this challenging period exhibits an anticipated return of 119.26%, coupled with an expected standard deviation of 48.31%, reflecting the resilience and potential of the stock portfolio. The Sharpe ratio further underscores this by registering at 2.4687, indicating that the stock portfolio's excess return surpasses the portfolio risk level by a substantial margin, as elaborated in Table 5. HRUM remained the highest-weighted stock in the optimal portfolio, both before and during the pandemic, which underscores HRUM's consistent ability to provide an optimal risk-return trade-off throughout the research period. However, the variations in optimal portfolio weights highlight the time-varying nature of the risk-return trade-off among stocks.
ANTM and INCO stocks, for instance, experienced a significant shift in their portfolio weights, moving from the second and third highest weights before the pandemic to the third and second highest, respectively, during the pandemic. As such, investors should continually adjust their portfolio weights to optimize outcomes (Anggraeni et al., 2022; Mariana & Patria, 2022). Figures 4 and 6 show that the expected return and Sharpe ratio of the upstream EV stock portfolio were lower before the pandemic, indicating that nickel investment was then less attractive. The increasing investment in this sector holds promise for low-carbon energy development, particularly in the context of electric vehicles and their components (upstream sector - nickel ore), potentially catalyzing a bandwagon effect among other investors. In the long term, this trend could accelerate the adoption of low-carbon practices, contributing to a more sustainable economy (Mariana & Patria, 2021).
Overall Analysis
This study provides a comprehensive view of the investment landscape in Indonesia's upstream electric vehicle (EV) sector before and during the COVID-19 pandemic, especially in the context of the global push for sustainability. It aligns with the findings of Mariana and Patria (2021), who observed similar resilience in the broader ASEAN EV market. Both studies highlight the sector's adaptability and attractiveness to investors, emphasizing the notable performance of key players like HRUM, ANTM, and INCO during the Pandemic. This resilience is indicative of the sector's ability to adapt and remain appealing to investors, especially in times of economic uncertainty. The increase in the Sharpe Ratio during the Pandemic signifies a higher risk-adjusted return, which is crucial for investors during volatile periods. This finding echoes the emphasis on risk management in portfolio optimization studies such as those by Cheong et al. (2017), Qudratullah (2021), and Ulfa et al. (2022). These studies, focusing on different sectors, highlight the importance of balancing risk and return, especially in times of market instability. The enhanced performance of the EV sector, as seen in the increased Sharpe Ratio, suggests strong investment potential, even in challenging economic landscapes. The impact of global events, such as the COVID-19 pandemic and the Russia-Ukraine war, on nickel prices and the EV sector is critical to this study. This impact resonates with findings in other sectors, as explored by Folger-Laronde et al. (2020), Heyden and Heyden (2020), Sakurai and Kurosaki (2020), Shehzad et al. (2020), Agustin (2021), Syarif et al. (2022), and Truong (2021). Collectively, these studies illustrate how external shocks can lead to significant market fluctuations, necessitating dynamic portfolio strategies. The EV sector's response, particularly in the context of nickel ore prices and supply chain dynamics, highlights the interconnectedness of global events and sector-specific investment opportunities. Finally, the findings point to the long-term prospects of the EV sector, particularly in the context of low-carbon energy development. This forward-looking perspective is shared by Kapustin and Grushevenko (2020), Wen et al. (2021), and Li et al. (2022), who recognize the potential of the EV sector to contribute to a sustainable economy.
The emphasis on sustainability aligns with the broader trend towards environmentally conscious investments, a factor increasingly considered by investors worldwide.
Conclusion and Suggestion
In the wake of the WHO's official declaration of the COVID-19 pandemic on March 11, 2020, this study rigorously evaluates the investment prospects of the upstream electric vehicle (EV) stock sector in Indonesia. Employing Monte Carlo simulations to construct efficient frontiers, our analysis demonstrates the enhanced performance of a portfolio comprising four nickel enterprises integral to the upstream EV sector during the pandemic period. A confluence of factors has underpinned the appreciable increase in nickel stock prices within the Indonesian market. Notably, pivotal government policies, such as the prohibition of unprocessed nickel ore exports and the presidential inauguration of an EV battery manufacturing facility, have played instrumental roles. Additionally, the geopolitical ramifications of the Russia-Ukraine conflict have exerted a considerable influence, given Russia's substantial contribution of 10% to global nickel production. Our empirical investigation reveals that, throughout the research period, HRUM consistently commanded the largest portfolio weight among the upstream EV (nickel) stocks. Contrary to prevailing market trends, the empirical evidence indicates that the upstream EV stock sector exhibited resilience to market downturns during the Pandemic, with performance seemingly more tethered to government investment policies than to broader market fluctuations. This is further substantiated by the markedly higher Sharpe ratio of the upstream EV battery stock portfolio during the pandemic, suggesting that nickel ore stocks delivered a superior risk-adjusted return.
Figure 1 illustrates these periods in relation to Nickel price (USD/T), with a dashed line indicating the division between them.The left side of the graph represents the period before the Pandemic, denoted as Period 1, spanning from March 15, 2019, to March 15, 2020.During this time, the nickel price fluctuated between a minimum of 11,142 USD/T and a maximum of 18,102 USD/T.On the right side of the graph lies Period 2, spanning from March 16, 2020, to April 12, 2022, coinciding with the official declaration of COVID-19 transmission in Indonesia.
Figure 3. Prices of EV Selected Stocks in Indonesia from 2019-2022
Figure 6. Efficient Frontier of Upstream Sector of EV Stocks - During the COVID-19 Pandemic
Figure 7. Maximum Sharpe Ratio and Minimum Variance Portfolio - During the COVID-19 Pandemic
Table 1. List of Upstream Sector of EV (Nickel) Companies (Selected Stocks)
Table 2. Descriptive Statistics of 4 Market Caps of Upstream Sector of EV Battery Stocks - Before the COVID-19 Pandemic
Table 3. Optimal Portfolio Weight of 4 Market Caps of Upstream Sector of EV Stocks - Before the COVID-19 Pandemic
Table 4. Descriptive Statistics of 4 Market Caps of Upstream Sector of EV Battery Stocks - During the COVID-19 Pandemic
Table 5. Optimal Portfolio Weight of 4 Market Caps of Upstream Sector of EV Stocks - During the COVID-19 Pandemic
Table 6. Optimal Portfolio Weights and Performances - Full Period | 2024-01-06T16:23:48.567Z | 2023-11-15T00:00:00.000 | {
"year": 2023,
"sha1": "89e8b103e7d2fa19df75d01e3fb19d47c9b16311",
"oa_license": "CCBY",
"oa_url": "https://riset.unisma.ac.id/index.php/jema/article/download/19508/16079",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a86828fb2fd50a5626dcd6afc2ba64ec34acaea3",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Business"
],
"extfieldsofstudy": []
} |
208639462 | pes2o/s2orc | v3-fos-license | Individualized Prediction Of Stroke-Associated Pneumonia For Patients With Acute Ischemic Stroke
Background Stroke-associated pneumonia (SAP) is a serious and common complication in stroke patients. Purpose We aimed to develop and validate an easy-to-use model for predicting the risk of SAP in acute ischemic stroke (AIS) patients. Patients and methods The nomogram was established by univariate and multivariate binary logistic analyses in a training cohort of 643 AIS patients. The prediction performance was determined based on the receiver operating characteristic curve (ROC) and calibration plots in a validation cohort (N=340). Individualized clinical decision-making was conducted by weighing the net benefit in each AIS patient by decision curve analysis (DCA). Results Seven predictors, including age, NIHSS score on admission, atrial fibrillation, nasogastric tube intervention, mechanical ventilation, fibrinogen, and leukocyte count were incorporated to construct the nomogram model. The nomogram showed good predictive performance in ROC analysis [AUROC of 0.845 (95% CI: 0.814–0.872) in training cohort, and 0.897 (95% CI: 0.860–0.927) in validation cohort], and was superior to the A2DS2, ISAN, and PANTHERIS scores. Furthermore, the calibration plots showed good agreement between actual and nomogram-predicted SAP probabilities, in both training and validation cohorts. The DCA confirmed that the SAP nomogram was clinically useful. Conclusion Our nomogram may provide clinicians with a simple and reliable tool for predicting SAP based on routinely available data. It may also assist clinicians with respect to individualized treatment decision-making for patients differing in risk level.
Introduction
Stroke-associated pneumonia (SAP) is a common poststroke complication with a prevalence rate of 11.3-31.3%. [1][2][3][4] Despite remarkable advances in the care of acute stroke patients, the outcomes of SAP are still poor, including prolonged hospitalization, high incidence of severe disability, and high in-hospital mortality rate. 5,6 The majority of previous studies focused on prophylactic antibiotic treatment in cases of stroke-associated infection, including SAP, but the results indicated failure to reduce the incidence of pneumonia and improve clinical outcomes. [7][8][9] Furthermore, it is difficult to diagnose SAP because of the low sensitivity of X-ray examination and sputum culture. 10,11 Therefore, a more objective and easily applicable model for predicting the development of pneumonia in stroke patients is required.
Previous studies indicated that various risk factors, including older age, male gender, dysphagia, decreased monocytic human leukocyte antigen-DR isotype (HLA-DR) expression, stroke-induced immunodepression syndrome, and chronic obstructive pulmonary disease (COPD), were associated with the incidence of SAP. 4,6,12,13 In addition, several recent studies developed clinical scores to predict SAP in stroke patients. For example, the 10-point A2DS2 score [age ≥75 years=1, atrial fibrillation=1, dysphagia=2, male sex=1, National Institutes of Health Stroke Scale (NIHSS) score of 0-4=0, score of 5-15=3, score ≥16=5] was derived for the prediction of poststroke pneumonia in a German cohort and showed high sensitivity and specificity; 14 subsequently, it was validated in Chinese stroke patients. 2 Recently, the ISAN score was developed by Smith et al 5 to assess the risk of SAP, based on 23,199 stroke cases in the UK. This score included the parameters of sex, age, prestroke independence, and NIHSS score on admission, and exhibited good discrimination in ischemic stroke derivation and validation samples for predicting SAP. For patients with acute middle cerebral artery infarction, the PANTHERIS score is a simple method for predicting SAP based on age, Glasgow Coma Scale score on admission, systolic blood pressure (SBP), and leukocyte count within 24 hrs of admission. 1 However, high sensitivity and specificity are not sufficient for clinical prediction models; timely individualized medical decision-making, cost-effectiveness, representativeness, and comprehensiveness of data collection should also be taken into consideration in clinical practice. The purpose of this study was to establish and validate a simple, convenient, accurate, and clinically practical model for predicting the risk of SAP in stroke patients, and to compare its performance with that of other prediction models.
Patient Selection
This study was approved by the Ethics Committee of the First Affiliated Hospital of Wenzhou Medical University and was conducted in accordance with the Declaration of Helsinki. We enrolled consecutive patients who had been admitted to the Department of Neurology, First Affiliated Hospital of Wenzhou Medical University within 24 hrs after onset of ischemic stroke between January 2018 and January 2019. Inclusion criteria in the study were as follows: 1) age ≥18 years; 2) diagnosis of acute ischemic stroke (AIS) confirmed by cranial computed tomography (CT) or magnetic resonance imaging (MRI) within 24 hrs after admission; and 3) written informed consent obtained from the patient or their legal representatives. The exclusion criteria were as follows: 1) diagnosis of transient ischemic attacks (TIAs); 2) preexisting dysphagia; 3) active infection or pyrexia within 2 weeks before admission; 4) a history of hematological diseases, severe hepatic diseases, cancer, or immunosuppressant treatment; and 5) lack of complete medical records.
Of the total of 1344 patients who fulfilled the inclusion criteria, 361 were excluded, such that 983 AIS patients were included in the analysis. Using a computer random number generator, two-thirds of the patients (N = 643) were randomized into the training cohort to construct the predictive nomogram model, and the remaining 340 patients were assigned to the validation cohort to evaluate the performance of the model ( Figure 1).
Data Collection
We collected demographic and clinical data from our electronic medical records system, including age, sex, stroke classification (TOAST criteria), arterial blood pressure on admission, history of stroke, thrombolytic therapy, mechanical ventilation, current smoking status, and current drinking status. Pre-existing comorbidities, including hypertension, diabetes mellitus, atrial fibrillation, coronary heart disease, congestive heart failure, and COPD were recorded. In addition, the neurological deficit on admission was measured by well-trained neurologists using the NIHSS score. Based on previously defined cutoff points, 14,15 patients were further divided into the following categories according to baseline NIHSS score: 0-4, mild; 5-15, moderate; and ≥16, severe. The baseline laboratory examinations, including fibrinogen, fasting blood glucose, homocysteine (Hcy), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), erythrocytes, leukocytes, platelets, serum creatinine (SCr), the glomerular filtration rate (GFR), total cholesterol, and hemoglobin, were obtained within 24 hrs of admission.
Dysphagia status was determined based on baseline swallow screening. In the stroke center of our institute, all stroke patients received a full clinical swallowing examination by dysphagia-trained nursing staff within 24 hrs after admission. This assessment consisted of a standardized clinical evaluation of consciousness, oromotor function, articulating function, and a water drinking test, to quantify the severity of dysphagia. 16 The dysphagic and comatose patients then underwent nasogastric tube interventions to prevent aspiration. Therefore, the presence of a nasogastric tube could reflect both dysphagia status and coma status.
Outcome Measures
In our study, SAP was diagnosed in accordance with the modified Centers for Disease Control and Prevention criteria of hospital-acquired pneumonia, 17 based on clinical and laboratory parameters of respiratory tract infection, and was confirmed by both chest X-ray and CT. 11 Furthermore, SAP was diagnosed by two treating neurologists who were blinded to other clinical and laboratory findings during the first 7 days of hospitalization after stroke onset. This study only recorded hospital-acquired pneumonia; pneumonia before the stroke was not considered.
Statistical Analyses
Statistical analyses were performed using SAS (version 9.2; SAS Institute Inc., Cary, NC, USA), R for Windows (version 3.4.1; http://www.r-project.org/), and MedCalc software (version 13.0; MedCalc Software, Ostend, Belgium). The differences in continuous variables between SAP and non-SAP patients were assessed with Student's t-test or the nonparametric Mann-Whitney U-test, and the χ2 or Fisher's exact test was used to compare categorical variables. Receiver operating characteristic (ROC) curve analysis was used to determine the best cutoff values for continuous variables that were significantly different in the training cohort. In the training cohort, univariate logistic regression analysis was used to screen risk factors related to SAP; variables with a p-value <0.05 were considered significant factors associated with the occurrence of SAP. These significant factors were then entered into a multivariate-adjusted binary logistic regression analysis to identify independent clinical predictors of SAP. The SAP nomogram was formulated based on the clinical predictors retained in the multivariate analysis using the rms package, and was validated for discrimination and calibration. Validation of the final nomogram was conducted by a bootstrap method with 1000 resamplings. The area under the receiver operating characteristic curve (AUROC) was used to evaluate and compare the discrimination ability of the nomogram with that of other prediction models. Calibration curves were obtained by plotting observed probabilities against predicted probabilities, to evaluate the predictive accuracy of the final nomogram. In addition, to determine whether the SAP nomogram could improve patient outcomes, decision curve analysis (DCA) was performed using the rmda package. With regard to clinical usefulness, DCA can be used to quantify and compare the potential net benefits of predictive models, 18 thus showing the clinical consequences of a treatment strategy. [18][19][20] All statistical tests were two-sided, and p<0.05 was considered statistically significant.
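As a rough illustration of this modelling workflow, the sketch below shows the multivariate logistic step and the apparent AUROC in Python; the authors used SAS and the R rms/rmda packages, and the column names here are placeholders rather than the study's actual variables.

```python
# Rough Python analogue of the modelling workflow described above; the authors
# used SAS and the R 'rms' package. Column names are placeholders, not the
# study's actual variables.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

PREDICTORS = ["age", "nihss_cat", "afib", "ng_tube",
              "mech_vent", "fibrinogen_high", "leukocyte_high"]

def fit_sap_model(df: pd.DataFrame):
    """Fit the multivariate logistic model and report the apparent AUROC."""
    X = sm.add_constant(df[PREDICTORS])
    model = sm.Logit(df["sap"], X).fit(disp=0)        # binary logistic regression
    auc = roc_auc_score(df["sap"], model.predict(X))  # apparent (in-sample) AUROC
    return model, auc
```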
Clinical Characteristics Of The Study Cohort
A total of 983 patients were enrolled in this study between January 2018 and January 2019. Of these patients, 120 (12.2%) were diagnosed with SAP. The incidence rate of pneumonia after AIS was similar between the two cohorts: 70 (10.9%) of 643 patients in the training cohort, and 50 (14.7%) of 340 patients in the validation cohort. Regarding basic clinical characteristics and laboratory variables, there were no significant differences between the training and validation cohorts (Table 1).
Baseline Characteristics Of Patients In The Training Cohort Stratified By SAP
Descriptive analyses (Table 2) showed that patients diagnosed with SAP were older (73.3 vs 65.5 years old) and had higher rates of atrial fibrillation (44.3% vs 9.9%), nasogastric tube intervention (50.0% vs 7.2%), and mechanical ventilation (17.1% vs 4.2%) than their counterparts without SAP. Patients with SAP had a significantly higher NIHSS score on admission (9.0 vs 2.0), as well as higher fibrinogen levels (4.4 vs 3.6 g/L) and leukocyte counts (9.7×10⁹/L vs 7.3×10⁹/L). We selected 10.0×10⁹/L as the leukocyte cutoff point based on previous studies. 21,22 The best cutoff value for fibrinogen in the training cohort, as revealed by ROC analysis, was 3.68 g/L. As categorical variables, a high leukocyte count (55.7% vs 15.7%) and a high fibrinogen level (68.6% vs 36.0%) on admission were more frequent in patients with SAP than in subjects without SAP (both p<0.001; Table 2).
Construction Of The Predictive Nomogram For SAP
Multivariable-adjusted binary logistic regression analysis demonstrated that age, NIHSS score category on admission, atrial fibrillation, nasogastric tube intervention, mechanical ventilation, and the leukocyte and fibrinogen categories were independently associated with SAP (all p<0.05; Table 3). We therefore constructed a predictive nomogram for SAP using these seven independent predictors in R software (Figure 2). To use the nomogram, each predictor value is first marked on its axis and a vertical line is drawn up to the "Points" axis to obtain the matching points on the point scale. The points for all predictors are then summed to give the total points. Finally, the total points are located on the "Total Points" axis and a vertical line is drawn down to the "Probability of stroke-associated pneumonia" axis to read off the corresponding approximate probability of SAP. An example is given in Supplementary Figure 1 to help illustrate how the model works.
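Conceptually, the total points on a nomogram are a rescaled version of the underlying logistic model's linear predictor, so the risk read off the bottom axis is simply the inverse-logit of that predictor. A minimal sketch is shown below; the linear-predictor value used in the example is hypothetical, not derived from the published coefficients.

```python
# Hypothetical sketch of how a nomogram's "total points" map back to a risk
# probability: points rescale the linear predictor of the logistic model.
# The example value below is a placeholder, not the published model.
import math

def sap_probability(linear_predictor: float) -> float:
    """Inverse-logit: probability implied by beta0 + sum(beta_i * x_i)."""
    return 1.0 / (1.0 + math.exp(-linear_predictor))

# Example: a patient whose covariates give a linear predictor of -1.0
print(f"Predicted SAP risk: {sap_probability(-1.0):.1%}")  # ~26.9%
```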
Validation Of The Predictive Nomogram
ROC analysis indicated that the AUROC of the SAP nomogram in the training and validation cohorts was 0.845 (95% CI 0.814-0.872) ( Figure 3A) and 0.897 (95% CI: 0.860-0.927) ( Figure 3B), respectively, which suggested good discriminative capacity of this nomogram. In addition, in the training cohort, the calibration plot showed an optimal agreement between the probability of SAP predicted by the nomogram and the actual observations ( Figure 4). Regarding the predicted probability versus the actual probability, the mean absolute error in the training cohort was 0.020. Furthermore, in the validation cohort, the calibration plot of observed versus predicted probability of SAP also showed excellent concordance; the mean absolute error was 0.019.
Predictive Efficacy Assessment
To assess the predicted probability of SAP, each patient was also scored using the ISAN, A2DS2, and PANTHERIS systems. As shown in Figure 3B, the AUROC of the nomogram (0.897; 95% CI: 0.860-0.927) was greater than those of the A2DS2 (0.791; 95% CI: 0.744-0.833), ISAN (0.779; 95% CI: 0.731-0.822), and PANTHERIS scores (0.785; 95% CI: 0.736-0.829) (all p<0.05). DCA of the SAP nomogram, and of the ISAN, A2DS2, and PANTHERIS scores, was performed to determine whether these models can improve patient outcomes (Figure 5). The DCA results indicated that the developed nomogram had a marked net benefit for predicting SAP when the threshold probability was >4%. Furthermore, across this threshold range, the nomogram always had a greater net benefit than the other models for predicting SAP.
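For readers unfamiliar with DCA, the net benefit plotted in such curves has a simple closed form. The sketch below is only illustrative: the authors used the R rmda package, and the inputs here are placeholder arrays rather than the study data.

```python
# Sketch of the decision-curve calculation behind Figure 5 (the authors used
# the R 'rmda' package); inputs are placeholder arrays, not the study data.
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit = TP/N - FP/N * (pt / (1 - pt)) at threshold probability pt."""
    y_true = np.asarray(y_true)
    treat = np.asarray(y_prob) >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

def net_benefit_treat_all(y_true, threshold):
    """Reference 'treat-all' line: every patient is assumed positive."""
    prevalence = np.mean(y_true)
    return prevalence - (1 - prevalence) * threshold / (1 - threshold)
```

Sweeping `threshold` over a grid and plotting both functions (plus the treat-none line at zero) reproduces the kind of comparison shown in Figure 5.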
Discussion
In this study, we developed and validated a novel and simple nomogram model for individualized risk management of pneumonia in acute stroke patients. The nomogram incorporated routinely collected data, including age, NIHSS score, nasogastric tube intervention, mechanical ventilation, atrial fibrillation, leukocyte count, and fibrinogen level, and showed good discrimination and calibration in the training and validation cohorts, respectively. This nomogram allows the risk of poststroke pneumonia to be scored easily in daily clinical practice. In addition, it should help physicians differentiate risk management among stroke patients by weighing the net benefit of individualized clinical decisions.
Consistent with previous reports, the present study showed that age, basal NIHSS score, nasogastric tube intervention, mechanical ventilation, and atrial fibrillation contributed to the development of SAP. As shown in previous studies, older age was associated with higher risk of poststroke pneumonia. 6,23 This may have been because older people tend to have more comorbidities and impaired swallowing function. 24 Furthermore, stroke severity, as measured by the NIHSS, was associated with SAP (OR=1.159). Many studies showed similar results, i.e., that patients with high NIHSS score were more likely to develop pneumonia after stroke, [25][26][27] which were consistent with our findings. Some studies showed that atrial fibrillation was significantly more prevalent in patients who developed pneumonia in the acute stroke stage, with ORs ranging from 1.96 to 3.30. [28][29][30] Accordingly, atrial fibrillation is included as a predictor of poststroke pneumonia in the A 2 DS 2 score. 14 Furthermore, mechanical ventilation showed a strong association with the development of pneumonia after stroke. A retrospective study including critically ill ischemic stroke patients requiring invasive mechanical ventilation showed that 40% of the patients developed poststroke pneumonia. 31 Another study showed that AIS patients who received mechanical ventilation on admission had an almost fourfold greater likelihood of developing SAP compared to those without mechanical ventilation. 32 Gujjar et al explained mechanical ventilation can impair normal mucociliary clearance, allowing bacteria to colonize the airways more easily and thus increase the likelihood of pneumonia. 33 Furthermore, patients receiving mechanical ventilation have longer hospital stays, which increases their risk of exposure to pathogens. 34 Stroke can cause dysfunction in oropharyngeal and gastric regions, and in the lower esophageal sphincter. 35 Therefore, to prevent aspiration in stroke patients with reduced levels of consciousness or severe dysphagia, some studies suggested that nutrition should be provided through a nasogastric tube rather than by oral feeding. 35,36 However, although nasogastric tubes can decrease the risk of aspiration during eating, patients fed in this manner remained at a high risk of pneumonia after stroke. [37][38][39] Related studies showed that prolonged nasogastric tube insertion was associated with an increased incidence of pneumonia and worsening of the prognosis in stroke patients. 40,41 This may be because the presence of a nasogastric tube can exacerbate lower esophageal sphincter dysfunction, which may lead to reflux of gastric contents. 37 In addition, infected reflux may promote microaspiration and colonization by Gram-negative bacteria. 42 It has also been suggested that cleaning of the oral cavity by chewing and swallowing can prevent oropharyngeal colonization by pathogenic organisms in the elderly. 43 However, in tube-fed patients, the dysfunctional oropharynx loses this protective function, thereby increasing the risk of microaspiration and pneumonia due to higher bacterial load in the saliva. 38 Recently, elevated fibrinogen levels were found in patients with stroke and cardiovascular diseases. 44,45 Luyendyk et al reported that fibrinogen played an important role in intensive inflammation and chronic low-grade inflammation. 46 Therefore, we hypothesized that fibrinogen may reflect inflammatory status in stroke patients. 
Neutrophils are very important for the immune reaction against bacteria in pneumonia. 47 Fibrinogen, as a plasma protein, supports neutrophil activation by interacting with the human leukocyte adhesion glycoprotein αMβ2 integrin. 48 A previous animal study showed that fibrinogen could be synthesized and secreted by lung alveolar epithelial cells during inflammatory stimulation. 49 Therefore, although leukocyte count is an important marker of inflammation, fibrinogen may show better specificity and sensitivity for predicting poststroke pneumonia; we therefore included both fibrinogen and leukocyte count as predictors to predict SAP more accurately.

Previous studies have introduced various scoring systems to predict poststroke pneumonia. In terms of discrimination, the ISAN, A2DS2, and PANTHERIS scores performed comparably in our validation samples, but all showed poorer discrimination than the SAP nomogram. The ISAN score is simple, assessing only four clinical factors on presentation to the emergency department. A recent Chinese study 50 analyzed data from 19,333 patients in the National Stroke Registry and confirmed that the ISAN score is effective for predicting SAP in patients with ischemic stroke. However, the ISAN score does not include blood biochemical parameters that can reflect the severity of inflammation in stroke patients. Therefore, we included leukocyte count and fibrinogen level on admission in the SAP nomogram, capturing early inflammation levels in the peripheral circulation and lungs of stroke patients. Furthermore, a series of recent prospective studies explored the predictive validity of the A2DS2 scoring system. Nam et al 51 reported that a high A2DS2 score is an independent risk factor for SAP, and some Chinese studies confirmed that the A2DS2 score can be used to stratify the risk of SAP in acute stroke patients. 52,53 However, dysphagia is the main predictor in the A2DS2 score, and we consider nasogastric tube intervention a more direct and sensitive indicator than dysphagia, because it can reflect both dysphagia and consciousness disturbance. In addition, although nasogastric tube intervention and mechanical ventilation are well-known risk factors for SAP, current prediction models do not include these two key variables; an advantage of our SAP prediction model lies in incorporating them. The PANTHERIS score is a 12-point SAP assessment for patients with acute infarction admitted to the neurology intensive care unit. However, it has some limitations, including the lack of the NIHSS score and of an evaluation of swallowing function, which are important risk factors for SAP. Our SAP nomogram addresses this deficiency and may therefore be more reliable than the PANTHERIS score.
As SAP mostly occurs within the first few days after stroke onset, adoption of timely prophylactic measures is vital once stroke has occurred. To achieve early and accurate stratification of patients at high risk of SAP, predictive models should be simple, reliable, and carefully applied. 5 Therefore, seven predictors that can be obtained on the day of admission and comprehensively reflect the patient's condition were included in our model, making the SAP nomogram both quick and easy to apply.
Among the currently available prediction tools, the nomogram exhibited high accuracy and excellent ability to predict outcomes, and was confirmed as one of the most important decision-making models in modern medical practice. 54,55 To our knowledge, this is the first nomogram for prediction of SAP in stroke patients based on routinely collected data on admission. Our findings highlighted the role of nasogastric tube intervention, mechanical ventilation, and fibrinogen level in the pathogenesis of poststroke pneumonia. Our nomogram model showed better discrimination and calibration capabilities for predicting SAP among AIS patients compared to the A 2 DS 2 , ISAN, and PANTHERIS scores.
There were some limitations to our study. First, given our limited data and the lack of external validation, there may be some potential bias in our results; future multicenter studies are needed to further validate the reliability and generalizability of the final nomogram. Second, our study did not systematically document all details of the nasogastric tube interventions, such as the timing of insertion, which may influence the development of respiratory infections in stroke patients. Finally, further prospective studies are needed to confirm the reliability and stability of the nomogram.
Conclusion
In conclusion, we have established and validated a reliable nomogram, based on routinely collected data, to predict the individualized risk of poststroke pneumonia with good discrimination and accuracy. The proposed nomogram may be a simple and useful tool for clinicians in making timely, individualized clinical decisions according to each patient's risk.
Data Sharing Statement
The data supporting this study are available from the corresponding author upon reasonable request.
Acknowledgments
We thank the study participants and the clinical staff at all participating hospitals for their support and contribution to this project. This work was supported by the Projects of Provincial Natural Science Foundation of Zhejiang (no. LY19H090013).
Author Contributions
Gui-Qian Huang and Zhen Wang conceived and designed this project; Yu-Ting Lin, Qian-Qian Cheng, and Hao-Ran Cheng collected the data; Gui-Qian Huang, Yu-Ting Lin, and Yue-Min Wu conducted the data analysis; Gui-Qian Huang drafted the paper. All authors contributed to data analysis, drafting or revising the article, gave final approval of the version to be published, and agree to be accountable for all aspects of the work.

Figure 5. Decision curves of the different scoring systems for predicting SAP. The net benefit was calculated by adding the true positives and subtracting the false positives. For a threshold probability >4%, application of the SAP nomogram would add net benefit compared to either the treat-all strategy or the treat-none strategy. In addition, the SAP nomogram always showed a greater net benefit than the A2DS2, ISAN, and PANTHERIS scores for predicting SAP at a threshold probability >4%.
Disclosure
The authors report no conflicts of interest in this work. | 2019-11-14T17:08:23.945Z | 2019-11-07T00:00:00.000 | {
"year": 2019,
"sha1": "05b10e6dd94b3ef9de96ed0d296c20ebd9148df5",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=53836",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f946fb8b911ec3a4bbeb054088a23d251ece5e2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231862991 | pes2o/s2orc | v3-fos-license | DJ-1: A promising therapeutic candidate for ischemia-reperfusion injury
DJ-1 is a multifaceted protein with pleiotropic functions that has been implicated in multiple diseases, ranging from neurodegeneration to cancer and ischemia-reperfusion injury. Ischemia is a complex pathological state arising when tissues and organs do not receive adequate levels of oxygen and nutrients. When the blood flow is restored, significant damage occurs over and above that of ischemia alone, termed ischemia-reperfusion injury. Despite great efforts in the scientific community to ameliorate this pathology, its complex nature has made it challenging to obtain satisfactory treatments that translate to the clinic. In this review, we describe recent findings on the participation of the protein DJ-1 in the pathophysiology of ischemia-reperfusion injury, first introducing the features and functions of DJ-1 and subsequently highlighting the therapeutic potential of the protein.
DJ-1: A multifunctional protein against cellular stress
DJ-1 is a 189-amino-acid, approximately 20 kDa protein that functions as a homodimer and is highly conserved across phyla [1]. Alterations in the protein's functionality have been associated with different human diseases, including Parkinson's disease, cancer, male infertility, diabetes, stroke, chronic obstructive pulmonary disease, and ischemia-reperfusion (IR) injury [2,3]. The role of DJ-1 in several diseases reflects its wide pattern of expression, as it is found in most body tissues [4]. At the subcellular level, DJ-1 is localized primarily in the cytoplasm, though it has also been found to translocate to the nucleus and mitochondria, in particular under conditions of stress [5][6][7][8]. In addition, DJ-1 has been shown to be secreted into the extracellular space in several pathologies, as it has been detected in the plasma or serum of patients affected by melanoma, breast cancer, and stroke [9][10][11].
Despite the great efforts in the scientific community, the physiological function of DJ-1 is only partially understood. To date, several roles have been ascribed to the protein, including tumorigenesis, modulation of signaling cascades, preservation of ROS homeostasis, regulation of transcription, maintenance of glucose levels, and protection against protein aggregation [12,13]. Nonetheless, the most widely accepted function of DJ-1 is its protective role against excessive ROS levels [12,13]. In this regard, it has been suggested that DJ-1 may sense the cellular redox state through a highly conserved cysteine (Cys) residue, localized at position 106 [14]. Indeed, due to its low thiol pKa value, this residue exists almost exclusively as a reactive thiolate anion at physiological pH and it has been reported to be particularly sensitive to oxidation, probably regulating several functions attributed to the protein [15]. In the antioxidant defense, DJ-1 has been reported to orchestrate a large variety of responses to promote cell survival. In this frame, it has been shown that DJ-1 can indirectly affect transcription by interacting with transcription factors and their regulators, thus modulating gene expression [16]. In particular, the protein often leads to the activation of pro-survival and proliferative pathways, such as the extracellular signal-regulated protein kinases 1 and 2 (ERK1/2) [17], the nuclear factor erythroid 2-related factor 2 (Nrf2) [18], and the Akt pathways [19,20], while simultaneously dampening cell death signaling cascades, as reported for the pro-apoptotic transcription factor p53 [21,22] and the apoptosis signal-regulating kinase 1 (ASK1) [23][24][25]. In addition to the modulation of signaling cascades, different studies have reported the participation of DJ-1 in the maintenance of mitochondrial homeostasis. Indeed, DJ-1 has been described to interact with complex I [26,27] and, more recently, with the β subunit of F 1 F O ATP synthase [28], modulating their functionality. Moreover, the protein has been found to take part in the process of mitochondrial quality control, playing a role in the Parkin/PINK1-mediated mitophagy [29][30][31][32], and in the regulation of the mitochondrial dynamics [33][34][35][36]. Noteworthy, the protein has also been suggested to modulate endoplasmic reticulum-mitochondria tethering, playing a role in the modulation of calcium transients [37,38].
Besides these well-explored functions, more recent findings suggest that DJ-1 may also exert cellular protection by mitigating dicarbonyl stress. In particular, two glucose-derived metabolites, methylglyoxal (MGO) and glyoxal (GO), are highly reactive, being able to attack and modify nucleophilic macromolecules, such as proteins and DNA, eventually yielding irreversible advanced glycation end products (AGEs) [39]. In this scenario, DJ-1 has been described to convert MGO and GO into less harmful species by acting as a glutathione-independent glyoxalase [40][41][42][43][44]. Furthermore, some studies have reported that DJ-1 can deglycate both proteins and nucleic acids upon reaction with MGO [45][46][47][48][49][50], though this function has been contested by other reports [51,52]. Interestingly, DJ-1 also seems to participate in the preservation of glucose homeostasis, by regulating glucose levels in the brown adipose tissue [53] and by modulating the subcellular localization of the glycolytic enzyme hexokinase I [32]. Moreover, reduced expression levels of DJ-1 have been observed in the pancreatic islets of patients affected by type II diabetes, in contrast to healthy individuals, further supporting the role of the protein in glucose homeostasis [54].
The functions so far described clearly emphasize the protective potential of DJ-1, which appears to play a multifaceted role in orchestrating a wide range of responses to confer cellular protection. Interestingly, through the functions previously described, the protein has also recurrently been involved in the defense against IR injury, a condition that has been correlated with high levels of oxidative stress and underlies several pathologies, ranging from stroke to cancer and diabetes. The pathophysiology of IR injury will be discussed in detail in the following paragraphs, together with the recent studies that have highlighted the participation of DJ-1 in this condition.
Pathophysiology of IR injury
Ischemia results from an inadequate blood supply to tissues and organs, commonly due to the obstruction of arterial flow [55]. As a consequence, the body experiences a local shortage of oxygen and nutrients until the blood flow is restored. This condition has relevance not only to stroke, myocardial infarction, and cancer [56] but also to diabetes [57,58] and neurodegenerative pathologies [59][60][61]. At the molecular level, one of the key events occurring in response to hypoxia is the activation of the transcription factor Hypoxia Inducible Factor 1α (HIF-1α), which otherwise, under normal oxygen tension, is constantly degraded [62]. The oxygen-dependent regulation of HIF-1α is determined by the E3 ubiquitin ligase Von Hippel-Lindau (VHL), which labels HIF-1α for proteasomal degradation. The interaction with VHL is dependent on HIF-1α hydroxylation by the prolyl hydroxylase domain (PHD) protein, which uses oxygen and α-ketoglutarate as substrates [62]. Under limited oxygen availability, HIF-1α is stabilized and orchestrates the expression of multiple genes involved in the hypoxic adaptation, including those participating in vascularization and reprogramming of energy metabolism [63]. Indeed, while under normoxia ATP synthesis largely relies on oxidative phosphorylation, hypoxic ATP production is driven by glycolysis [55]. Nonetheless, the cellular energetic demand exceeds the glycolytic capacity, eventually leading to a drop in ATP levels. Since ATP is required as a regulator of different ATP-dependent ionic exchangers, the hypoxic period leads to ionic imbalance and cellular acidosis due to the accumulation of lactate [55]. In particular, due to the altered function of Na+/Ca2+ and Na+/H+ exchangers, and of Na+/K+-ATPase, ischemia drives the intracellular accumulation of sodium and calcium, affecting the entire cellular homeostasis (see Fig. 1) [55,64]. Notably, this ionic imbalance becomes even more severe upon reperfusion, when the mitochondrial respiration resumes and the physiological pH begins to be restored. Indeed, while the extracellular pH is more rapidly normalized with respect to the cytosol, the intracellular compartment remains still partially acidic upon reoxygenation. This condition entails a proton gradient able to sustain the function of the Na+/H+ and Na+/Ca2+ exchangers, further increasing the cytosolic levels of sodium and calcium [65].
In addition to this ionic dyshomeostasis, the first minutes of reperfusion are characterized by abundant ROS generation, principally of mitochondrial origin. When the oxygen concentration is reduced, the levels of the mitochondrial metabolite succinate dramatically rise. Succinate is thought to arise due to the reverse activity of complex II, converting fumarate to succinate, though there may also be a contribution from the canonical Krebs cycle supported by glutaminolysis [66][67][68]. During hypoxia, succinate plays a role in HIF-1α stabilization as it is also transported to the cytosol, where it can inhibit PHDs [69,70]. However, upon reoxygenation, succinate is oxidized back to fumarate by complex II, and this event forces electrons to flow from the co-enzyme Q pool back through complex I and onto oxygen through the process referred to as reverse electron transfer (RET), producing large amounts of superoxide anions [66,71]. This initial burst of ROS, in conjunction with the conspicuous surge of calcium, acts as a major player in the activation of cell death signaling cascades, which involve the opening of the mitochondrial permeability transition pore (mPTP) [72]. mPTP is considered a high-conductance proteinaceous pore localized in the inner mitochondrial membrane [73]. After ischemia, mPTP opening is stimulated within a few minutes of reperfusion, when the pH is normalized to pre-ischemic values. Indeed, it has been shown that mPTP opening is almost inhibited by pH values below 7.0, probably due to a highly conserved histidyl residue of the mitochondrial F1FO-ATP synthase complex [74]. Once opened, mPTP increases the permeability of the inner mitochondrial membrane, leading to mitochondrial swelling and subsequent activation of cell death pathways [72].
Following the acute phase of reperfusion injury, the release of damage-associated molecular patterns (DAMPs) from injured cells leads to the activation of an innate immune response [71,75]. Interestingly, metabolites such as lactate and succinate, effluxed into the circulation during reperfusion, have also recently been postulated to play a role in this immune activation [76,77]. The initial immune response induces the production of pro-inflammatory cytokines and chemokines, facilitating the infiltration of leukocytes and an inflammatory response that can further exacerbate the extent of tissue damage [78].

Fig. 1 legend. (A) During ischemia, to counterbalance the low pH, the Na+/H+ exchanger (NHE) exchanges protons for Na+, promoting the build-up of the ion in the cytosol. As Na+/K+-ATPase (NAK) activity is almost abolished due to the reduced ATP availability, cells exploit the reverse activity of the Na+/Ca2+ exchanger (NCX) to counterbalance the increased sodium level. In this way, NCX induces Ca2+ accumulation in the cytosol while expelling sodium. At the same time, the transcription factor HIF-1α is stabilized and enters the nucleus to promote gene expression. (B) Upon reperfusion, mitochondrial respiration resumes, re-establishing the normal pH. This rapid pH adjustment causes a large proton gradient that prompts sustained NHE activity, which further raises the cytosolic concentration of sodium. As a consequence, the NCX reverse activity is potentiated, further increasing the calcium level in the intracellular compartment. In this condition, calcium, in conjunction with the large mitochondrial ROS production, favours the opening of the mPTP, which stimulates the initiation of cell death mechanisms. In this condition, HIF-1α is constantly degraded.
Therapeutic approach
Ischemic injury represents a debilitating pathology and is often regarded as a medical emergency, characterized by high incidence and a complex clinical picture. Depending on the origin of the ischemic event, the treatment can vary, but it commonly comprises the rapid administration of antithrombotic, anticoagulant, and vasodilatory compounds, accompanied by surgical intervention if necessary [79]. Nonetheless, the intricate etiology of IR injury has made it challenging to obtain efficacious therapies for the pathology. For this reason, the use of experimental models represents an essential approach to better comprehend this pathological condition and to investigate new therapeutic approaches.
To study ischemia, both in vivo and in vitro models are usually generated by reducing oxygen concentrations or by limiting the utilization of oxygen by organs/tissues. Many animal models rely on the occlusion of the target organ's arteries, followed by the release of the suture to reperfuse the tissue [65,80]. Most in vivo studies have been performed in rodents, as their anatomy and physiology display a reasonable similarity to those of humans. Besides in vivo models, researchers also take advantage of in vitro models, in which the selected cellular model is maintained in a hypoxic chamber for the designated experimental time. Moreover, cells can be cultured in a glucose-free acidified medium, which can be supplemented with lactate to mimic its accumulation as it normally occurs during anaerobic glycolysis. Subsequently, reperfusion is promoted by exposing cells to normal oxygen tension and replacing the culture medium with a pH-adjusted one enriched in essential nutrients [65] (a schematic parameterization of such a protocol is sketched below). The utilization of experimental models has led to the development of promising therapeutic interventions, including both pharmacological and non-pharmacological treatments. In this regard, hypoxic/ischemic conditioning has emerged as a possible strategy to lessen the ischemia-associated damage [81]. Conditioning consists of the application of a series of sublethal hypoxic/ischemic cycles, which can be performed before the onset of the ischemic episode to reduce the infarct size (preconditioning) or upon reoxygenation/reperfusion to diminish the associated damage (postconditioning) [82][83][84]. In addition to pre/postconditioning, remote conditioning has also been reported to display protective effects through the application of brief hypoxic/ischemic episodes to an organ remote from the injured site [85,86]. However, the beneficial effects in human trials are less than clear, with some conflicting results on the protective effects [87]. Furthermore, hypothermia has also been observed to confer protection by lowering the metabolic rate and by delaying the activation of pro-inflammatory and pro-oxidant pathways [88][89][90]. Besides these non-pharmacological approaches, the explored treatments also include pharmacological strategies, which target molecular pathways known to be affected during hypoxia/ischemia. The investigated mechanisms encompass the inhibition of the Na+/H+ [91][92][93] and Na+/Ca2+ exchangers to modulate calcium levels [94][95][96], blockage of mPTP opening [97,98], reduction of cytokine and interleukin release [99,100], and counteraction of ROS production [101][102][103]. Recently, it has become clear that mitochondrial ROS generation contributes substantially to the major damage observed upon hypoxic/ischemic events. Therefore, there is great interest in treatments able to protect against ROS generation, principally targeting mitochondria. In this context, the reduction of mitochondrial respiration has been observed to reduce the pathology-associated damage [104]. For example, the inhibition of succinate accumulation, or the prevention of its rapid oxidation upon reoxygenation by using drugs based on the mitochondrial complex II competitive inhibitor malonate, has proven beneficial [66,101,[105][106][107]. Similarly, blockade of complex I, for example with rotenone [108,109], or prevention of complex I reactivation during reperfusion by S-nitrosylation of critical cysteines [110][111][112][113], has been demonstrated to ameliorate the detrimental response to reoxygenation.
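As an aside on the in vitro models described above, the oxygen-glucose deprivation/reperfusion protocol can be summarized as a small set of parameters. The sketch below is a generic, illustrative parameterization: the numeric values are common choices in the literature, not taken from this review.

```python
from dataclasses import dataclass

@dataclass
class OGDProtocol:
    """Generic in vitro ischemia/reperfusion (OGD) settings; values illustrative."""
    ischemia_o2_percent: float = 1.0      # hypoxic chamber setting
    ischemia_medium: str = "glucose-free, acidified"
    lactate_mM: float = 20.0              # mimics accumulation from anaerobic glycolysis
    ischemia_hours: float = 2.0
    reperfusion_o2_percent: float = 21.0  # return to normal oxygen tension
    reperfusion_medium: str = "pH-adjusted, nutrient-enriched"
    reperfusion_hours: float = 24.0

protocol = OGDProtocol()
print(protocol)
```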
While the aforementioned therapeutic approaches aim to reduce ROS generation, there is also a focus on upregulating the antioxidant capacity of the cell. The explored approaches include the use of superoxide dismutases and their mimetics [103,[114][115][116], the mitochondrially targeted antioxidant MitoQ [102,117,118], N-acetylcysteine [119,120], metformin [121,122], which also acts as a weak complex I inhibitor [123], and the administration of natural compounds, such as vitamin C [124]. In addition, recent studies have reported the cardioprotective potential of a mitochondrially targeted MGO and GO scavenger, referred to as MitoGamide, which might be a promising future candidate for IR therapy [125,126]. Besides these compounds in the antioxidant field, interesting insights have been obtained from studies involving the multifaceted protein DJ-1, which is gaining increasing attention as a protective player in the response to IR injury.
The involvement of DJ-1 in IR injury
Compelling evidence supports the participation of DJ-1 in the response to IR injury. The protein has been described to act on two levels: first, by sustaining the hypoxic response under oxygen depletion, and second, by protecting against reoxygenation-related damage.
Participation in the ischemic response
Under limited oxygen concentrations, DJ-1 has been reported to stabilize HIF-1α [127,128]. Vasseur and colleagues initially found that DJ-1 silencing in osteosarcoma-derived cells reduced HIF-1α levels by 30% under hypoxic treatment [128]. Subsequently, working with neuronal cell models, Parsanejad and co-workers identified VHL as a DJ-1-interacting protein through immunoprecipitation, proposing that DJ-1 may act as a negative regulator of VHL, boosting HIF-1α by preventing its proteasomal degradation [127]. Furthermore, in accordance with other studies [129,130], the authors did not observe any significant reduction in HIF-1α mRNA levels upon DJ-1 deficiency, suggesting that DJ-1 does not influence HIF-1α gene expression [127]. Although the modulation of the VHL-HIF-1α interaction has been proposed as a possible mechanism of action of DJ-1 in neuronal cells, it may not be a general mechanism. Indeed, a more recent study, conducted in colorectal cancer cells, has reported that DJ-1 may promote HIF-1α accumulation through the PI3K/Akt pathway [130], supporting the idea that diverse mechanisms could be activated in different tissues. Moreover, it is relevant to mention that another group has observed HIF-1α protein accumulation upon DJ-1 silencing in neuroblastoma cells [129], suggesting instead that DJ-1 could play a negative role in HIF-1α stabilization. In an attempt to interpret these contrasting findings, it should be considered that different cell types may express different levels of endogenous DJ-1 and have different susceptibilities to the hypoxic treatment. Moreover, the stabilization of HIF-1α is frequently detected via immunoblot, which may yield variable results given the labile nature of HIF-1α. Therefore, future analyses are required to clarify how DJ-1 is involved in the regulation of HIF-1α and its target genes, and whether the contrasting data reported are due to the diverse experimental models and approaches used.
Upon oxygen reintroduction, DJ-1 has been shown to protect against reoxygenation-derived ROS damage, acting at different levels. Indeed, in vivo studies have reported that the absence or down-regulation of DJ-1 results in higher sensitivity to ischemia and increased infarct size in the brain [131][132][133][134] and heart [135,136]. On the contrary, DJ-1 upregulation has been observed to elicit neuroprotection and cardioprotection, as its overexpression or injection was reported to rescue post-ischemic cerebral [131,132,137] and myocardial damage [138]. In addition, it is emerging that DJ-1 may play a role in ischemic preconditioning. During in vivo preconditioning, DJ-1 expression levels have been shown to increase, whilst knock-down of the protein seemed to impair the beneficial effects of the treatment [139]. This protective effect has also been observed in cellular models of hypoxic preconditioning, where the DJ-1 protein was found to be upregulated [140,141]. Furthermore, additional studies have emphasized that DJ-1 may play a role during ischemic postconditioning [142][143][144]. Nonetheless, it is worth mentioning that confounding results have been obtained concerning the modulation of DJ-1 expression levels upon IR. Indeed, while some studies have reported increased DJ-1 expression [24,[144][145][146], other groups have observed no change [142,147] or even a decreased expression level [148,149]. As these discrepancies may be attributable to the different models, experimental procedures, or time windows considered, more studies should be performed to elucidate whether and how DJ-1 expression is transcriptionally regulated under this condition. Moreover, to precisely evaluate its participation in the reperfusion phase, DJ-1 levels should be modulated only during reoxygenation, to avoid confounding results deriving from the combination of ischemia and reperfusion injury.
Modulation of mitochondrial function
Despite conflicting reports on its expression levels during ischemic events, the protective activity exerted by the protein is becoming increasingly evident, even though the precise mechanisms of action remain to be completely elucidated. In this regard, different in vitro reports have found that the protein could act at the mitochondrial level. Accordingly, upon reperfusion, DJ-1 was observed to translocate into the mitochondria of rat primary neural cells and isolated murine cardiomyocytes [136,150], and this process might be mediated by the mitochondrial chaperone glucose-regulated protein 75 (Grp75) [142,147]. Moreover, DJ-1 has been described to translocate to the mitochondrial compartment already in the hypoxic phase, prior to the reoxygenation process. Indeed, one study reported that the mitochondrial accumulation of DJ-1 under hypoxia occurs in both HEK cells and rat primary cortical neurons and that the translocation may rely on 14-3-3 proteins [151]. In this compartment, the protein has been reported to reduce mitochondrial fragmentation and to sustain complex I activity in cardiac cells [135,136,140,152]. In addition, DJ-1 overexpression has been observed to contribute to a delay in mPTP opening in HL-1 cells subjected to simulated IR [135] and to participate in the protective mechanisms exerted by cyclosporine A, a drug that has been described to prevent mPTP opening [153], supporting a role for DJ-1 in mPTP formation. Therefore, DJ-1 appears to regulate mitochondrial homeostasis through different mechanisms during both ischemia and reperfusion.
Modulation of ion homeostasis
Other interesting insights into the involvement of DJ-1 in IR injury have been obtained by a few studies that have highlighted the possible participation of the protein in the modulation of ionic currents. In this context, a DJ-1-based peptide, named ND-13, has been selected from a library of DJ-1-related peptides for its ability to exert protection against oxidative and toxic insults in vitro [154]. The peptide is based on a conserved portion of DJ-1 (KGAEEMETVIPVD) attached to a 7-amino-acid region (YGRKKRR) of the cell-penetrating peptide derived from the HIV trans-activator of transcription (TAT) protein [154]. Interestingly, ND-13 has been reported to influence the expression of proteins involved in the activation of Ca2+-activated K+ channels and Kv4 voltage-gated channels, which have been associated with increased resistance to IR injury by reducing cellular excitability and excitotoxicity, respectively [132]. Moreover, DJ-1-null dopaminergic neurons have been described to display greater hyperpolarization upon oxygen-glucose deprivation than control cells; this alteration might derive from the participation of DJ-1 in the modulation of Na+/K+-ATPase activity, although the underlying mechanism of action has not been investigated [155]. These studies suggest the possible participation of DJ-1 in the modulation of ion homeostasis during IR, though additional research is required to validate this activity.
Regulation of signaling pathways
Apart from the direct mitochondria-related functions, other studies have shown that DJ-1 protection could derive from the activation of antioxidant signaling pathways. For example, DJ-1 was demonstrated to induce the expression of Nrf2-dependent genes, such as SOD2, catalase, and glutathione peroxidase, in rat heart cells [138,156]. In this regard, DJ-1 has been reported to regulate Nrf2 activation also in reactive astrocytes, where the protein is abundantly expressed [133]. Interestingly, the same group has proposed that astrocytic DJ-1 could play an anti-inflammatory role by regulating the expression of inflammatory cytokines and through modulation of the interaction between TRAF6, a member of the TNF receptor-associated factor protein family, and NLRX1, a member of the NOD-like family with anti-inflammatory properties [157]. Additionally, DJ-1 has been described as a mediator of the protective effects of resveratrol against cardiac reoxygenation damage by indirectly driving the deacetylation of the pro-apoptotic transcription factor p53 [146]. Notably, DJ-1 cardioprotection has also been partially attributed to its ability to counteract glycation stress. Indeed, the loss of the protein has been found to favor AGE accumulation, with consequent activation of AGE receptors in the murine heart upon IR [158]. Along the same lines, the protein has also been suggested to contribute to cardioprotection against IR injury in diabetic conditions, possibly by upregulating the Akt cascade and autophagy in diabetic rats subjected to IR injury [143,159,160]. Moreover, the protein seems to elicit a similar protective role in the kidneys, where DJ-1 has been found to be upregulated after myocardial ischemia in diabetic rats, suggesting a possible role in this organ [53]. In accordance, DJ-1 overexpression has been reported to defend against oxidative stress in NRK-52E renal cells subjected to hypoxia and high glucose concentrations [161]. Although the study of DJ-1 in the context of diabetic ischemia is still in its infancy, it represents an interesting aspect to reflect on. As diabetic patients are considered more vulnerable to ischemic damage and present with an increased risk of morbidity [57,162], the ability of DJ-1 to protect against IR injury in hyperglycemic conditions should be further explored.
Besides its ability to act at the cytosolic level, the protein has also been suggested to be protective in the extracellular compartment. Indeed, extracellularly applied DJ-1 has been shown to rescue neuroblastoma cells exposed to oxygen-glucose deprivation, although the precise mechanism remains elusive [163]. In this regard, the secreted form of the protein has been found in oxygen-glucose-deprived human neural progenitor cells [150], supporting a role for the protein in the extracellular milieu. Even though the participation of DJ-1 in signaling cascades has been recurrently reported, the picture that emerges from these data makes it difficult to determine whether the pathways reported here represent general targets of DJ-1 activity or whether the protein exerts different functions in a tissue-specific manner. In this context, it is worth mentioning that while the role of DJ-1 has mostly been examined in the brain, due to its association with neurodegeneration, less is known about its function in the heart, kidneys, and other organs. Therefore, further studies are needed to examine the role of DJ-1 in these tissues and to clarify whether the protein displays shared or distinct mechanisms of action in different organs.
Concluding remarks
Although the protective activity of DJ-1 against IR injury is well supported in the literature, as discussed here, further investigation is required to better comprehend the mechanisms underlying this protection. In this regard, it would be interesting to explore the signals responsible for the activation of the protein. Upon reperfusion, ROS may be responsible for the activation of DJ-1's protective function, suggesting that the protein may be redox-sensitive. Therefore, it should be evaluated whether and how the highly conserved Cys-106 residue is involved. Indeed, so far, few studies have addressed this point, mainly reporting that this cysteine is important for DJ-1 to carry out its functions [131,135]. Of note, Ariga's group has performed an in silico virtual screening to find DJ-1-binding compounds that are able to stabilize the reduced and mildly oxidized states of this residue [164]. Interestingly, the results obtained with 'compounds 23, A and B' showed the beneficial effects of these molecules in rat models of cerebral ischemia, supporting the therapeutic potential of modulating the DJ-1 redox state [165][166][167]. Nonetheless, the DJ-1-based peptide ND-13, which has been shown to improve focal ischemia recovery, does not contain this amino acid, suggesting that this residue may not be the only determinant of its function [132]. Therefore, further research is essential to untangle this aspect. Moreover, while ROS may favor DJ-1 activation upon reoxygenation, under ischemic conditions, where ROS generation is largely repressed due to oxygen depletion, other signals might promote the participation of DJ-1 in HIF-1α stabilization. For example, HIF-1α can be stabilized upon succinate accumulation [69,70]. As DJ-1 has been associated with mitochondrial metabolism [168] and the ND-13 DJ-1-based peptide can regulate succinate dehydrogenase assembly factor 4 protein levels [132], this alternative mechanism of HIF-1α stabilization should be considered in order to better comprehend the participation of DJ-1 in HIF-1α regulation.
In conclusion, the findings discussed here suggest that the multifaceted behavior of DJ-1 confers the ability to protect against IR injury by orchestrating a wide spectrum of cellular responses and by acting in multiple tissues (Fig. 2). Although studies that robustly validate previously discovered roles and dissect the precise mechanisms of action of the protein in this pathological state are still essential, DJ-1 may be considered an interesting candidate to investigate for future therapeutic applications in IR injury.
Declaration of competing interest
The authors declare no competing interests.
Acknowledgements
We thank Prof. Michael P. Murphy for the helpful discussion. Figures were created in Adobe Illustrator with the support of BioRender.com.
Funding sources
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Fig. 2 legend. DJ-1 has been reported to provide protection against IR injury through multiple mechanisms. In particular, the protein has been described to sustain adaptive and pro-survival signaling pathways, such as HIF-1α stabilization and Akt and Nrf2 activation, to protect mitochondrial homeostasis, supporting normal organelle functionality, and to ameliorate glycation stress, especially in diabetic ischemia. These protective functions have been observed at the brain, cardiac, and renal levels.
"year": 2021,
"sha1": "60c8ebc0ef72c14528507bf3ecfde39f3d673d27",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.redox.2021.101884",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d81a1a9446437c1d52404bde2f570abd39e3ad97",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Acting locally, Thinking Globally in Social Work Education
The problems and challenges of social work relate to both local and global contexts, and Social Work Education needs to reflect this, although how to fully realize it is complex. The purpose of this article is to problematize and detail how Social Work Education can be seen in higher education from the perspective of internationalization, out of a Swedish context. The article should be seen as a contribution to Educational Science, where internationalization can add to the understanding of social education. Research data has been collected from two groups of respondents: social work students and lecturers in Social Work Education. The theoretical frame of reference is provided by a modified social-ecological model called the Entrecology model, which connects the individual in education to her or his surrounding context on different levels. The main conclusion is that the concept of 'acting locally, thinking globally' should be viewed as a major input for developing Social Work Education: a Glocal approach.
Introduction
Internationalization is a term widely used in higher education in Sweden. The purpose of internationalization is for the activities of those involved to stimulate certain types of exchange and knowledge development, irrespective of whether they are students, teachers, or other staff. In the context of Social Work Education, internationalization means creating the conditions for cooperation and understanding between nations, although the focus should be on meetings between individuals. Further, a distinction is made between internationalization and globalization: globalization aims for deeper cross-border integration, while internationalization aims for cooperation between nations. When referring to internationalization, it is important to make the distinction between why we are internationalizing higher education and what we mean by internationalization (De Wit, H., 2002). The purpose of this article is to problematize and detail how Social Work Education can be seen in higher education from the perspective of internationalization, out of a Swedish context. This article should be seen as a contribution to Educational Science, where internationalization can add to the understanding of social education. The article also sheds light on and widens the subject of social work education. In the majority of regions, the respondents indicated that their geographic focus for internationalization was within their own region. Europe is a strong focus area for most regions; however, limited funding is a major internal and external obstacle to advancing internationalization. The home institutions of the respondents report that they seek to promote values of equity and the sharing of benefits through their internationalization strategy and activities. This creates many challenges for the development of internationalization in education: ever-increasing competition between institutions, imbalance between incoming and outgoing student mobility, socialization between domestic and international students, integration of international perspectives in education at all levels, and the planning and preparedness of teachers. In Nilsson's (Nilsson, B., 2003) research, the two main reasons for internationalization in Swedish higher education are outlined as the expansion of Swedish companies on the global market, so that Swedes are able to fill important positions abroad, and a new sense of global concern and solidarity with developing countries. Within the European Union and its development, the internationalization of higher education has held an important role in developing aspects such as openness, understanding, and positive attitudes toward cooperation. Official EU documents point out that support for cooperation and mobility is clearly meant to promote a European, not international, dimension in higher education (ibid.). Programs for mobility in exchange have been developed, with Erasmus probably being the single most successful project within the EU. Parallel to this development, a shift from internationalization (cooperation between nations) toward Europeanization (cooperation within an integrated Europe) has occurred and can be seen as a consequence of the EU becoming a single market. As previously mentioned, the term internationalization holds different meanings in different countries. In some countries, it refers to the recruitment of overseas students; in others, it means exchange; and in some others, it means mobility. To put the concepts into practice, Nilsson (ibid.)
defines internationalization as "the process of integrating an international dimension into the research, teaching and services function of higher education." Social work might be one of the most international fields of all, and international social work is a growing field of interest. Knight's definition (Knight, J., 2008) acknowledges the various levels of internationalization and the need to address the relationship and integration between them: "The process of integrating an international, intercultural or global dimension into the purpose, functions or delivery of post-secondary education." Knight (ibid.) also states that it is now possible to see two basic aspects evolving in the internationalization of higher education. One is 'internationalization at home,' including activities to help students develop international awareness and intercultural skills. This aspect is relatively more curriculum-oriented and prepares students to be active in an increasingly globalized world. Some examples of activities that fall under this at-home category are: curriculum and programs, teaching and learning processes, extra-curricular activities, liaison with local cultural/ethnic groups, and research and scholarly activities. The second aspect is 'internationalization abroad,' which includes all forms of education across borders: the mobility of students and faculty, and the mobility of projects, programs, and providers. These components should not be considered mutually exclusive, but rather intertwined within policies and programs. Further, De Wit (2002) identifies four broad categories of rationales for internationalization: political, economic, social-cultural, and academic. These rationales are not mutually exclusive; they vary in importance by country and region, and their dominance may change over time.
Internationalization and Social Work Education
Righard (2013) discusses how the various definitions of international social work have changed over time, and she categorizes these changes into three phases: modernization, radicalization, and globalization. In the last phase, where we are now, a big challenge for the social worker is to find strategies to face the challenges that arise in a global society. Globalization affects the social policy discussion in many ways (Cousins, M., 2005), and therefore it also affects social work and Social Work Education. The need for international education in social work is clear, although achieving it may be complex (Merrill, M., Frost, C.J., 2011). Social workers are faced with new responsibilities, and it is important for the education to go beyond the national level (Healy, L., 2008). Nagy and Falk (Nagy, G., Falk, D., 2000) claim that the impact of ongoing global processes on the social work profession is dramatic and that reformulating the education to include more international and cross-border cultural content is needed. They suggest the incorporation of international issues, and comparisons between the approaches, theories, and programs of other countries, into mainstream Social Work Education, along with the creation of more specialized professional programs.
Social work is described as contextual, meaning it is bound to national traditions, laws, and local culture (Lorenz, W., 1994), and the content of Social Work Education in Sweden is, to a large extent, governed by national guidelines due to the professional title of Socionom. For example, at Malmö University, in the Bachelor's-level social work curriculum, international perspectives on social work are included as an integrated part of individual lectures during the first and second semesters. The same applies at the Master's level, with invited guest lecturers speaking about related themes, often on a comparative basis. Apart from the programs, the individual courses Social Policies in Europe and Social Work in a Local and Global Context are offered, which integrate an intercultural perspective focusing on social science, the welfare state in comparison, and social work practice. We need to understand social work in its local context by gaining a global understanding; therefore, the term Glocal (a combination of global and local) is used in this article. Parts of these individual courses are also integrated in the social work programs. The main idea behind the continuous development of internationalization in the social work curriculum at Malmö University in general is that social workers need to be prepared to address social work in a local and global context by studying internationally related cases and community problems that arise in their domestic practice. These cases contribute to a mutual exchange in solving global social problems as well as to gaining knowledge of other countries and their social systems. Nevertheless, it seems that although the term 'internationalization' in Social Work Education is well established, and although the need for further international education in social work is viewed as essential, how to fully achieve it is complex.
Theoretical frame
According to Meeuwisse and Swärd (2008), the cross-national comparison of social work is a question of assumptions and levels. The focus could be on the macro level, where comparisons are based on social policy; it could also be focused on the profession (micro-meso level) or on practice-oriented differences (micro-meso level). This makes sense, as it may be more relevant and useful to use the term 'cross-national and global social work' instead of internationalization. To understand the complexity of international social work, we must take into account how the various sub-systems interact. The macro system (such as social policy) is therefore crucial for placing this analysis within the context of education. Both the individual and the environment change over time, and Bronfenbrenner (2004) maintains that these changes are crucial to our understanding of how the different systems influence the individual and her or his development. In addition, when personal development strongly influences family relations, this in turn creates development for the family. The same is true for institutional and cultural development; for example, the presence of strong individuals in an organization strongly influences organizational development. This explains why Bronfenbrenner's development ecology model can be seen as a multi-level model (Winch, P., 2012). Resilience capacity on a mental, intra-personal level (Christensen, J., 2016) and an entrepreneurial way of building, developing, and keeping networks give the different levels in Bronfenbrenner's Development Ecology model a broader understanding of what stimulates learning processes and our understanding of internationalization, education, and the profession in a social context. Transformation in a welfare context can be understood from both individual and social perspectives (Christensen, ibid.).
Internationalization can thus be said to contain six levels of intervention: the intra-personal level (capacity of resilience), the micro-social level (person, client, focus on interaction), the meso-social level (group, institution, coherence), the exo-social level (society, institutions, educational system), the macro-social level (culture, nation, traditions, language), and the ex-macro-social level (international relations and EU influence). The supply of teacher competences, such as experience of teaching in English, coordination of international social work at the departmental/faculty level, and management awareness and priority settings at the departmental level, are some of the presumptive critical challenges one faces in developing internationalization at home. Knight's definition (Knight, J., 2008) acknowledges its various levels and the need to address the relationship and integration between them: "The process of integrating an international, intercultural, or global dimension into the purpose, function, or delivery of post-secondary education." When understanding Social Work Education and the role of pedagogy within it, we can relate to what Fayolle and Kyrö (2008) describe as the interplay between environment and education. They argue that entrepreneurship is closely connected to an education perspective in which individuals, society, and institutions are all linked to each other. This interplay is surrounded by culture, and it is in this context that entrepreneurship and pedagogy meet. Entrepreneurship is when the individual acts upon opportunities and ideas and transforms them into value for others. Ties, meetings, and networks are therefore closely linked to the individual, and learning takes place in the meetings where individuals from different contexts meet. Given this, a connection between the (extended) Development Ecology model and entrepreneurship gives us the Entrecology model. Each link in the Entrecology model should be seen as each individual's own unique and personal network created through meetings on different levels; the starting point is the individual and the interplay with the surrounding context. When analyzing social networks as a tool for linking micro and macro networks, the strength of these dyadic ties can be understood (Granovetter, M., 1973). This strength (or weakness) creates dependency as well as independency in linkages. In addition, as pointed out by Cox and Pawar (2006), international social work needs to have a local as well as a global face, and a reality of globalization is that it requires a dimension of localization. Therefore, the Entrecology model can be seen as a connector in education between the individual and her or his surrounding context on different levels.
Method
Research data has been collected from two groups of respondents: a group of 20 social work students at the Bachelor's level and a group of 13 full-time lecturers in the Social Work programme at Malmö University. The teacher group is representative of the whole group of teachers (in total around 40) teaching social work at the Department of Social Work. The group of lecturers was randomly selected, and the students were participating in an open seminar focusing on international social work at Malmö University. The same open question was put to both groups: What does the concept of internationalization in Social Work Education mean to you? The research data among the students was collected in a classroom before an ordinary lecture took place, and all students participated. The students were each given 20 minutes for a written reply. The teachers were likewise asked to give a written reply, and each of these replies was used; each teacher was given 20 minutes after working hours. The replies from both groups were collected anonymously, and the material was then analysed separately, divided into two main groups: a student group and a teacher group. Thereafter, interpretation and analysis were undertaken in each group. Each group was analyzed independently of the other, and no direct comparison was made between the two groups.
Limitations of the study
The groups of students and teachers were selected according to two different principles: random selection (teachers) and self-selection (students). A random selection strengthens the validity, as the likelihood of bias is minimized: the selection was made among all teachers, not only among the teachers who are engaged in internationalization issues. As for the student group, it is admittedly a qualitative weakness that they participated in an open seminar for social workers, as several of them likely had a high degree of interest in internationalization issues prior to participating. At the same time, the seminar was included in the study program, so participation alone does not clearly indicate individual motives, which are likely to vary. The question itself is open and allows the respondents to express themselves freely, which strengthens the reliability.
Here, I would like to stress that, independently of the methods used, there are limitations, and the key thing is to be aware of them and try to deal with them in the best possible way in line with the purpose of the study. Limitations could be lack of time, that the selected group was too small, or that the environment created stress. Concerning the time given, no additional time was requested, the transcribed material reached saturation in the analysis, and the participants did not appear stressed by lack of time. In this study, a combination of randomly selected teachers and a self-selection principle for students has been used. Other methods could have been used; however, it is essential for a researcher to be aware of the normativity which, intentionally or unintentionally, is hidden behind a statement, which makes it essential to present a broad variety. The normativity in this study has had consequences for the samples: samples which might have looked different with another pair of normative "research eyes".
Results and Discussion
After studying the empirical material, we discuss the statements the individuals made in relation to the theoretical frame, for the two groups: the student group and the teacher group. Out of this, three perspectives are outlined and discussed: a global mobility perspective, a social work local educational perspective, and a social work professional perspective.
The Student Group
When people meet, many types of exchanges take place, including social, academic, and cultural exchanges. Personal meetings on different levels are key factors. They make the assimilation of knowledge and resources possible (Christensen, J., 2016). An exchange of knowledge characterized by mutual interest may also include an element of resistance, as stated by some of the respondents: "We cannot understand that particular term in social work in that way" and "We are not doing social work in that way in our country".
In the process of change, if handled correctly, this resistance can represent a significant driving force for the reflective student to develop a local understanding. The starting point of learning is the individual and her or his interaction with the environment. This interaction occurs at different levels, and in the academic setting, these levels may be seen as individual, organizational or institutional, and societal. The following statement can be seen as macro-social, as it speaks of culture, nation, traditions, and language: "Internationalization in Social Work Education for me is about gaining a deeper insight into different cultures, differences and similarities to use in professional situations." This shows, in line with Meeuwisse and Swärd (2008), that the cross-national comparison of social work is a question of assumptions and levels. The focus could be on the macro level, where comparisons are based on social policy, or related to the profession (micro-meso level) or to practice-oriented differences (micro-meso level). On a micro-social level, the individual and the focus on interaction are central, as one respondent says: "We try to understand each other, but we create barriers instead".
A reflection on resilience capacity, in which the intra-personal level is in focus, can be seen in the following statement by one of the students: "There's a hill you have to overcome. It is a challenge and necessity for development and challenges." Statements from some of the students relate to social work occupations; they relate to the individual and the focus on interaction, wherein the micro-social level can be seen, for example: "Exchange, linguistic, cultural encounters, people, listen, be curious, ask questions, discuss and develop, learn from each other, perhaps the reduction of prejudice, abandonment, new ideas, or old ideas are confirmed, learn about social work in different countries, to learn from each other". The exo-social level can be outlined in the following statement: "So, I imagine a picture of a network that stretches across borders where it is more possible than ever to communicate and collaborate between different NGOs and authorities, and this relates to society on the whole as well as institutions". "No matter where we come from, we all want to feel good. Getting there is just as diverse as the meals and the way we eat together. Knowledge of other people's ways is vital to be able to do a good job as a social worker" is a statement like those of several students, which shows that the group, as well as the individual in the field of social work, can be identified, meaning that both the micro-social level and the meso-social level relate to each other. The ex-macro-social level could be seen in all statements, as it relates to overall global relations. According to Winch (2012), this relates to why Bronfenbrenner's development ecology model can be seen as a multi-level model; moreover, international social work needs to have a local as well as a global face, and a reality of globalization is that it requires a dimension of localization. Therefore, the Entrecology model can be seen as a connector in education between the individual and her or his surrounding context on different levels.
The Teacher Group
The statements from the teachers were highly characterized by views on the importance of exchange, mobility, and integration in local Social Work Education, as well as by linkages to professional social work practice. The statements can therefore be categorized into the following three perspectives in relation to levels: the ex-macro level (a global mobility perspective), the meso-social level (the social work local educational perspective), and the micro-social level (the social work professional perspective).
The ex-macro level: a global mobility perspective
The majority of respondents point out the importance of the possibility for mobility and that it has value in itself. One of the respondents highlights the importance of the environment, and of the opportunity for students and teachers to meet. In this new contextual learning, reflection on how contextual and structural factors affect the social work profession and its organization can be seen. One of the respondents states: "This means that students and teachers themselves live in environments in other countries and at other universities. It also means that students are influenced by visiting lecturers and visiting students. Students must, in their courses, have the opportunity to learn about social work in different contexts. Students should gain an understanding of social work's complexity by getting knowledge of contextual, structural, and traditional factors governing organizing, ethical attitudes, and professional values". Another respondent emphasizes the sense of coherence as a key driver in understanding social challenges globally: "The internationalization of Social Work Education means that we should take into account what is happening in the world around us: how it can affect us, how we affect the world around us. We should be aware that the theoretical perspectives must be thought of in both global and local contexts, and that social problems will arise in the context of both national and international levels. The transnational dimension, that we can enrich our activities by participating in international exchanges of different kinds, is very essential". Several respondents see the value of mobility in general: "Internationalization means, primarily, that there is an exchange of students and teachers at the university to study and teach at universities in Europe and other parts of the world." The importance of conceptual understanding and the common development of the subject is something that some of the respondents emphasize: "Internationalization means an increased exchange of experience and knowledge, the ability to support, and the development of joint projects in the subject and each activity. Furthermore, and perhaps most importantly, with internationalization we can reach a common understanding of Social Work Education and thereby facilitate communication in the academic exchange".
The meso-social level: the social work local educational perspective
International understanding will be made visible in various ways in teaching, which is an area that the respondents consider important. Literature in languages other than Swedish is to be encouraged for students, as is discussing social work in contexts other than the Swedish one. The integration of guest teachers is another example. Also, internship opportunities and thesis work abroad are seen as advancing internationalization by several respondents: "International perspectives on social problems, primarily through (current) literature, but also through international guests". According to the respondents, the most important aspect is that international perspectives are highlighted and illustrated in the education. Students are encouraged to "look up" and assimilate social work in other local contexts: "Most important is that education must have an international perspective on social work theory and practice, in that we relate the discussion of social problems and efforts to countries other than Sweden, as it's easy to get 'stuck' in the narrower local situation". The experiences from abroad that teachers and students carry with them and feed back into the local classroom are seen as an important support structure by some of the respondents: "The enhanced international perspective can also be given space by placement (as is already the case today), but the experience and knowledge that the international internship students gain may be carried through to the mainstream, for example as guest teachers". The development of Social Work Education in comparison is seen by several respondents as a key part of internationalization: "It may be that students can bring a comparative discussion of the official perception of problems and of social interventions that are based on the Swedish context and other contexts. But then it may also discuss the different contexts that are interesting". The role of education is highlighted by several respondents: "Students are, in the context of the teaching, given the opportunity to get deeper international perspectives in the social work disciplines. In other words, comparative learning activities can be included in most social worker training courses. The latter can be performed by, for example, different case- or PBL-based teaching modules".
A more general view takes its starting point in the responsibility of educators to provide professional knowledge, and is highlighted by one respondent: "Without international contacts, our Social Work Education is a national education. A weakness in the profession may develop. The world we live in has changed. It is essentially a local, but also a global, challenge". This shows that social networks are a tool for linking micro and macro networks, in which the strength of these dyadic ties can be understood according to Granovetter (1973). This strength (or weakness) creates dependency as well as independency in linkages.
The micro-social level: the social work professional perspective
In encounters in practice, diversity and cultural knowledge are an important part of internationalization and are seen as part of Social Work Education: "Internationalization, for me, also means having good knowledge about other countries and cultures 'at home'. That is, the ability to respond to potential clients/patients/users based on an understanding of other cultures, for example norms, and how a social worker can work with this".
Social work expertise includes a general competence to understand practical social work at the local level and to apply these skills in different contexts: "The social worker has a professional career. It is set up in higher education as 'socionomutbildningen' (a professional degree). Like other professional programs, graduates can use their degrees to work in different countries with some additional course(s). This means that we do not train for Sweden, but for international social work. Training must, therefore, contain general knowledge that can be applied in many contexts".
One view among the respondents demonstrates the challenge of defining the meaning of internationalization: "In a way, internationalization sounds most like a cliché. It sounds good, but it is unclear what it means. At the administrative level, it seems to imply greater international cooperation and exchange of education and knowledge. For me personally, it has no special meaning. I would rather talk about globalization, in the sense that social work today involves new social problems to deal with".
Keywords emerging from the student and teacher groups
The following keywords from both groups can be outlined from the written replies: inequalities, opportunities, change, boundlessness, moving, flying, freedom, motion, proximity, distance, opposites, community, people, loneliness. If we categorize these words, three groups of levels can be outlined: one focusing on the individual level (freedom, motion, loneliness, change, opportunities, moving, flying), one focusing on the organizational or institutional level (inequalities, community, proximity, distance), and thirdly, one focusing on the societal level (opposites, boundlessness, people). The internationalization of social work in a welfare context should thus be understood with the starting point in the individual perspective, adding the individual in relation to an organizational, institutional, and societal perspective.
Conclusion
The main conclusion of this article is that 'thinking globally, acting locally' should be seen as a key concept in the development of Social Work Education. Among students, it seems that the development of a reflective capacity when meeting others adds momentum to this. Among teachers, it seems that integration in the local social work education and links to professional social work practice also add momentum. This demands in-depth knowledge about individual driving forces and views on what is essential when it comes to international Social Work Education. We need to explore how we can raise our mutual understanding of social work, which assists us in developing a more global understanding while, at the same time, being aware of our different traditions and values. Therefore, we must, as Meeuwisse and Swärd (2008) point out, take into account how the various sub-systems interact in relation to the individual. Both the individual and the environment change over time, and Bronfenbrenner (2004) maintains that these changes are crucial to our understanding of how the different systems influence the individual and her or his development. In this, the Entrecology model can be seen as a connector in education between the individual and her or his surrounding context on different levels. The importance of allowing students and teachers to meet on a cross-border basis in Social Work Education, built upon internationalization at home as part of domestic local programs with global understanding (a glocalized view on social work), should not be underestimated when developing professional skills. This relates to the statement by Healy (2008) that social workers are faced with new responsibilities and that it is important for the education to go beyond the national level. This study also shows that a reformulation of the education to include more international and cross-border cultural content is needed, in line with Nagy and Falk (2000). A success factor for knowledge acquisition in international social work is the provision of continuous education, where international courses can work independently but also offer opportunities for integration within existing programs. This strengthens, stimulates, and develops internationalization at home, as well as attitudes toward it, for both students and teachers. In order to stimulate and attract students as well as teachers in developing internationalization as a dimension of the education, a knowledge channel is essential. To facilitate this and to support interaction with the community in internationalization, efforts should be put into specific communication and educational tools.
"year": 2016,
"sha1": "ae709e7353ad07cbc7641f8392a286fc69dd8551",
"oa_license": "CCBY",
"oa_url": "http://dergipark.gov.tr/download/article-file/389981",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f8561576ca4677d32d523fdac2371e69e643abf6",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Positron emission tomography/computed tomography findings of lung invasive adenocarcinoma subgroups and comparison of their short-term survivals
Background: The aim of this study was to compare the maximum standardized uptake values on positron emission tomography/computed tomography and the survival of lung invasive adenocarcinoma subgroups. Methods: Between January 2010 and January 2016, a total of 152 patients (112 males, 40 females; mean age: 64.2±8.6 years; range, 41 to 88 years) who underwent lung resection for an invasive adenocarcinoma were retrospectively analyzed. The patients were divided into subgroups as follows: acinar, lepidic, micropapillary, papillary, and solid. The maximum standardized uptake values in the imaging study and their relationship with survival were examined. Results: There were 84 acinar (55%), 31 solid (20%), 23 lepidic (15%), nine papillary (5%), and five micropapillary (3%) cases. The maximum standardized uptake values on positron emission tomography/computed tomography showed a statistically significant difference among the subgroups (p=0.004). The solid subgroup showed the highest uptake (9.76), followed by the micropapillary (8.98), acinar (8.06), papillary (5.82), and lepidic (4.23) subgroups, respectively. According to Tumor, Node, Metastasis (TNM) staging, Stage I was present in 48.68% (n=74) of the cases, Stage II in 25.0% (n=38), Stage III in 25.0% (n=38), and Stage IV in 1.31% (n=2). The one-year, three-year, and five-year survival rates were significantly different among the disease stages (p=0.01). The longest survival duration was seen in the lepidic subgroup, although the difference among the subgroups did not reach statistical significance (p=0.587). Conclusion: The evaluation of invasive adenocarcinomas based on maximum standardized uptake values provides valuable information and may guide neoadjuvant and adjuvant therapies in the future.
Lung cancer is a major public health problem worldwide, not only because it is a common cancer but also because of its high mortality rate. About 80.7% of lung cancers are non-small-cell lung cancer (NSCLC), 16.4% are small-cell lung cancer (SCLC), and 2.9% are other subtypes. [1] Adenocarcinoma (AC) is the most common subtype of lung cancer, accounting for 35 to 40% of all lung cancers. [2] It constitutes 21.8% of all lung cancers in men and 4.9% in women. [3] In 2011, the International Association for the Study of Lung Cancer (IASLC)/American Thoracic Society (ATS)/European Respiratory Society (ERS) working group proposed a new classification for lung ACs. [4] According to this classification, ACs were divided into four main groups as preinvasive lesions, micro-invasive ACs, invasive ACs, and invasive AC variants. Invasive ACs are classified as lepidic dominant AC, acinar dominant AC, papillary dominant AC, micropapillary dominant AC, and solid dominant AC. This classification includes the subtyping of invasive ACs and emphasizes the prognostic significance of these histological subtypes, as well. The differences between these identified histopathological groups in lung cancer have significantly changed the treatment, prognosis, and management of the disease. [2] The stage of the tumor is the most important prognostic factor in patients with lung cancer. One of the imaging methods to determine the disease stage is positron emission tomography/computed tomography (PET/CT), which can provide information for the distinction of benign-malignant lesions, disease staging, demonstration of distant organ involvement, identifying recurrent tumors, and evaluating the treatment response. The rate of fluorodeoxyglucose (FDG) uptake of the lesion on PET/CT is called the standardized uptake value (SUV). An SUV value over 2.5 to 3.0 is considered sensitive and specific for malignancy. Previous studies have shown that, although the SUV provides an indication for distinguishing malignant from benign lesions, it has no definitive diagnostic value and is recommended mainly for follow-up and evaluation of the treatment response. [5][6][7] The SUV, which is widely accepted as a semi-quantitative indicator of glucose metabolism, suggests malignancy at high values and benignity at low values. [5] In some studies, however, no exact relationship has been shown between the SUV value and the prognosis. [6][7] In the present study, we aimed to compare the SUV values on PET/CT and survival of the subgroups, i.e., lepidic dominant, acinar dominant, papillary dominant, micropapillary dominant, and solid dominant ACs in patients with lung invasive AC.
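As a side note for readers unfamiliar with the metric, the body-weight-normalized SUV is simply the tissue activity concentration divided by the injected activity per unit body weight, with the injected dose decay-corrected to the scan time. The short Python sketch below illustrates this standard formula; the function name, example values, and uptake-time handling are illustrative assumptions, not data from this study.

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18 (minutes)

def suv_bw(tissue_kbq_per_ml, injected_mbq, body_weight_g, uptake_min):
    """Body-weight-normalized SUV, assuming a tissue density of ~1 g/mL.

    The injected activity is decay-corrected from injection to scan time.
    """
    decayed_kbq = injected_mbq * 1000.0 * math.exp(
        -math.log(2) * uptake_min / F18_HALF_LIFE_MIN
    )
    return tissue_kbq_per_ml / (decayed_kbq / body_weight_g)

# Illustrative example: 8.5 kBq/mL lesion concentration, 300 MBq injected,
# 70 kg patient, 60 min uptake time -> SUV of about 2.9, near the
# 2.5-3.0 threshold mentioned in the text.
print(round(suv_bw(8.5, 300, 70_000, 60), 2))
```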
PATIENTS AND METHODS
This single-center, retrospective study was conducted at Dokuz Eylul University Hospital, Department of Thoracic Surgery between January 2010 and January 2016. A total of 152 patients (112 males, 40 females; mean age: 64.2±8.6 years; range, 41 to 88 years) who underwent lung resection and were diagnosed with an invasive AC were included. The patients with mediastinal involvement on PET/CT were operated on after invasive staging, while the patients without mediastinal involvement were operated on without invasive staging. The SUV values, age, sex, predominant pathological subtype, survival time, and disease stage of the patients pathologically diagnosed with an AC were evaluated. All surgeries were performed via thoracotomy. A written informed consent was obtained from each patient. The study protocol was approved by the Dokuz Eylul University, Non-Invasive Research Ethics Committee (No: 2016/32-40). The study was conducted in accordance with the principles of the Declaration of Helsinki.
The FDG-PET/CT images of the patients were obtained using the Philips Gemini TOF 16 Slice PET/CT scanner (Philips Medical Systems, Best, Netherlands) at the Nuclear Medicine Department of our institution. Seventeen patients whose preoperative PET/CT scans were missing were excluded from the study.
Sixteen patients diagnosed with an AC without subtyping were re-evaluated by the pathology team of our hospital and the subtype of the tumors was identified.
Statistical analysis
Statistical analysis was performed using the SPSS version 15.0 software (SPSS Inc., Chicago, IL, USA). Descriptive data were expressed in mean ± standard deviation (SD), median (min-max), or number and frequency. The comparison between the groups was performed using the Kruskal-Wallis test. The Kaplan-Meier test was used for survival analysis. The survival curves were compared using the log-rank test. A p value of <0.05 was considered statistically significant.
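For readers who wish to reproduce this style of analysis on their own data, the following Python sketch pairs a Kruskal-Wallis comparison of SUVmax across subgroups with a Kaplan-Meier/log-rank survival comparison. The data frame values are invented toy data, and the scipy/lifelines workflow is only one reasonable stand-in for the SPSS procedures the authors describe.

```python
import pandas as pd
from scipy.stats import kruskal
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Toy per-patient table (not study data): subgroup, SUVmax,
# follow-up in months, and whether death was observed (1) or censored (0).
df = pd.DataFrame({
    "subgroup": ["acinar", "acinar", "acinar", "solid", "solid", "solid",
                 "lepidic", "lepidic", "papillary", "papillary"],
    "suv_max":  [8.1, 7.4, 9.0, 9.8, 10.6, 8.9, 4.2, 3.9, 5.8, 6.1],
    "months":   [60, 48, 55, 30, 22, 41, 72, 70, 66, 58],
    "event":    [0, 1, 0, 1, 1, 0, 0, 0, 0, 1],
})

# Kruskal-Wallis test: do SUVmax distributions differ among subgroups?
groups = [g["suv_max"].to_numpy() for _, g in df.groupby("subgroup")]
print(kruskal(*groups))

# Kaplan-Meier estimate per subgroup ...
fitters = {name: KaplanMeierFitter().fit(g["months"], g["event"], label=name)
           for name, g in df.groupby("subgroup")}

# ... and a log-rank test across all subgroups at once.
result = multivariate_logrank_test(df["months"], df["subgroup"], df["event"])
print(result.p_value)
```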
RESULTS
Baseline demographic and histopathological characteristics of the patients are summarized in Table 1. There was no significant difference among the subgroups regarding age (p=0.404). None of the patients underwent invasive mediastinal staging before the operation, since no N2 disease was determined radiologically. Seven patients received neoadjuvant chemotherapy, while two patients received neoadjuvant chemotherapy and radiotherapy. Lobectomy was performed in 113 (87.5%) patients and pneumonectomy in 19 (12.5%) patients. A total of 55% (n=84) of the cases were in the acinar subgroup, 20% (n=31) in the solid subgroup, 15% (n=23) in the lepidic subgroup, 5% (n=9) in the papillary subgroup, and 3% (n=5) in the micropapillary subgroup. According to the Tumor, Node, Metastasis classification, 40 patients were in Stage IA, 34 were in Stage IB, seven were in Stage IIA, 31 were in Stage IIB, 28 were in Stage IIIA, 10 were in Stage IIIB, and two were in Stage IV (Table 1).
There was a statistically significant difference in the SUV values of the dominant subgroups of 135 AC patients (p= 0.004). We found the highest mean SUV value in the solid subgroup, followed by the micropapillary, acinar, papillary and the lepidic subgroups, respectively ( Table 2).
The survival rate was the lowest in the micropapillary subgroup (20%), followed by the acinar (59.5%), solid (64.5%), papillary (66.7%) and lepidic (69.6%) subgroups, respectively. There was no significant difference in the survival rates among the subgroups ( Table 3). The survival rates of the disease stages were evaluated using the Kaplan-Meier test and log-rank analysis (p= 0.015). The mean survival duration of the patients with Stage I disease was 68.4 months, 59.5 months for Stage II, 43.4 months for Stage III, and 27.2 months for Stage IV. There was a significant difference between the disease stages in terms of the mean survival duration (p= 0.009) (Figure 1).
There was a statistically significant difference among the detailed TNM stages in terms of the mean survival duration (p<0.004). There was no statistically significant difference among the subgroups in terms of survival duration (Figure 2). The five-year survival rate was the lowest in the micropapillary subgroup with 20%, followed by the acinar with 59.5%, solid with 64.5%, and papillary subgroup with 66.7%. The lepidic subgroup had the highest five-year survival rate with 69.6% (Figure 3).
DISCUSSION
Lung AC is a very heterogeneous group in many aspects such as pathological, molecular, clinical, radiological, and surgical. Therefore, the classification of the disease made in 2004 was revised in 2011. The most important reason for the revision was the unmet need for demonstrating differences by multidisciplinary approaches after remarkable developments in medical oncology, molecular biology, and radiology. [8,9] When the former and current classifications were compared in terms of subgroups, acinar ACs were 100% compatible and papillary and solid ACs were 75% compatible with each other. In previous studies, no significant relationship has been shown between lymph node metastases and subtypes according to the new classification. Besides, in cases with minimally invasive AC, the entire tumor should be sampled and reflected in the report. [8][9][10][11][12] For thoracic surgeons, AC surgery may offer different surgical alternatives compared to other histopathological types. These are sublobar or wedge resections in early-stage lung cancer, the extent of lymph node dissection, and new approaches associated with intraoperative pathological analysis. Following the new classification, Warth et al. [13] retrospectively reviewed the cases previously diagnosed with mixed-type AC and re-examined a total of 100 archived lung AC resection specimens. The most frequent subgroup in this cohort was the predominant solid pattern subgroup (37%), followed by the acinar (35%), lepidic (20%), papillary (5%), and micropapillary (3%) subgroups. In our study, the subgroups identified were acinar (55%), solid (20%), lepidic (15%), papillary (5%), and micropapillary (3%). There was no significant difference among the subgroups in terms of age and sex in our study population (p=0.404). These results are consistent with the literature. However, considering the racial genetic characteristics of ACs, complete consistency should not be expected.
Although there are studies reporting that the SUV values of ACs are lower than those of other types of lung cancer, there are also studies showing that there is no difference in the maximum SUV (SUVmax) value between all types of lung cancers. [14,15] We found that there was a statistically significant difference in PET uptake values between the subgroups (p=0.004); the highest mean SUV value was in the solid (9.8±6.2) group, followed by the micropapillary (9.0±5.8), acinar (8.1±6.4), papillary (5.8±3.3), and lepidic (4.2±3.3) subgroups. Similar to our study, Nakamura et al. [16] found that the preoperative SUVmax value was closely related to AC subtypes. The highest SUVmax value was shown in micropapillary dominant AC, followed by solid dominant AC.
In another study, to detect the relationship between PET/CT parameters and the stages of invasive ACs, the authors calculated the SUV max , metabolic tumor volume (MTV), and the total lesion glycolysis (TLG) values and found that the survival rates were significantly lower in the patients with high SUV max , MTV, and TLG values. [17] In our study, micropapillary and solid groups had the highest SUV values; however, there was no statistically significant difference between the subgroups in terms of the duration of survival in months (p= 0.587).
In their study including subgroups of ACs, Yoshizawa et al. [10] categorized the patients into three different prognostic groups. Adenocarcinoma in situ (AIS) and minimally invasive AC (MIA) were defined as the low-grade and the five-year disease-free survival was reported as 100% in this group. Lepidic, acinar, and papillary subgroups were defined as the moderate grade and the five-year disease-free survival rate was reported as 90%, 83% and 84% in these groups, respectively. The invasive mucinous AC, solid and micropapillary subgroups were defined as the highgrade and the five-year disease-free survival rates were reported as 70% and 67%, respectively (p<0.001).
In a series of 210 Australian patients with Stage I, Stage II, and Stage III lung AC, Russell et al. [11] retrospectively investigated the relationship between the new classification subgroups and the survival rates. The survival rates were significantly lower in the micropapillary and solid ACs, whereas they confirmed that the five-year survival rates were close to 100% in AIS, MIA, and lepidic ACs. Papillary and acinar ACs were subgroups with a moderate prognosis.
In our study, the mean follow-up duration of the patients was 40.2±22.0 months. The mean survival duration of the patients was 59.7±2.8 (range, 54.13 to 65.16) months, and this long follow-up period is one of the strengths of our study.
The mean survival duration was 66.0±8.1 months in the papillary group, 61.6±6.1 months in the lepidic group, 59.7±6.3 months in the solid group, 57.4±3.8 months in the acinar group, and 44.5±11.6 months in the micropapillary group; however, there was no statistically significant difference in survival duration among the subgroups for patients with the same disease stage (p=0.587). At the end of the five-year follow-up duration, 80% of the patients in the micropapillary subgroup, 40.5% of those in the acinar subgroup, 35.5% of the solid subgroup, 33.3% of the papillary subgroup, and 30.4% of the lepidic subgroup died. In addition, the highest SUV values in the micropapillary and solid groups of our study population are consistent with these previous studies. [10,11] When we evaluated the one-year, three-year, and five-year survival rates according to the stages, we found a statistically significant difference in the survival rates between the disease stages (p=0.015). The survival rates in the patients with Stage I disease were 95%, 77%, and 69%, respectively; they were 92%, 72%, and 10% in Stage II; 80%, 45%, and 9% in Stage III; and 50%, 0%, and 0% in Stage IV disease. Stage III disease was more frequent, particularly in the acinar group, suggesting that PET-CT has high false-negative rates and that invasive staging should be performed in all conditions. Warth et al. [12] retrospectively evaluated patients with Stage I-IV AC who underwent surgical resection. They reported that staging of lung ACs according to the new IASLC/ATS/ERS scheme based on structural patterns was a quick, simple, and effective distinguishing factor in terms of long-term prognosis, and showed that it could contribute to the selection of proper patients for targeted therapies.
The fact that there are differences in the number of patients in the subgroups is the main limitation of the present study, preventing us from commenting on the survival of the subgroups. Further larger-scale studies would provide more accurate results.
In conclusion, lung adenocarcinoma remains an area of active investigation for many researchers. In our study, the standardized uptake values were significantly higher in the subgroups with poorly differentiated tumors, compared to the other subgroups. The diagnostic and clinical diversity among all subgroups of lung adenocarcinomas suggests that the diagnosis and treatment protocols of lung cancers may be updated in the coming years. The identification of lung adenocarcinoma subgroups will proceed in parallel with the characterization of their molecular and genetic differences. With the development of health technologies, current cancer treatments are progressing toward targeted therapies based on these molecular and genetic differences.
"year": 2021,
"sha1": "d09d3144faee8de0e20ef117148ef31304c2b189",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc8462100?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "d09d3144faee8de0e20ef117148ef31304c2b189",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232051333 | pes2o/s2orc | v3-fos-license | Dietary Fiber Is Essential to Maintain Intestinal Size, L-Cell Secretion, and Intestinal Integrity in Mice
Dietary fiber has been linked to improved gut health, yet the mechanisms behind this association remain poorly understood. One proposed mechanism is through its influence on the secretion of gut hormones, including glucagon-like peptide-1 (GLP-1) and glucagon-like peptide-2 (GLP-2). We aimed to: 1) investigate the impact of a fiber deficient diet on the intestinal morphological homeostasis; 2) evaluate L-cell secretion; and 3) to ascertain the role of GLP-1, GLP-2 and Takeda G protein-receptor-5 (TGR5) signaling in the response using GLP-1 receptor, GLP-2 receptor and TGR5 knockout mice. Female C57BL/6JRj mice (n = 8) either received a standard chow diet or were switched to a crude fiber-deficient diet for a short (21 days) and long (112 days) study period. Subsequent identical experiments were performed in GLP-1 receptor, GLP-2 receptor and TGR5 knockout mice. The removal of fiber from the diet for 21 days resulted in a decrease in small intestinal weight (p < 0.01) and a corresponding decrease in intestinal crypt depth in the duodenum, jejunum and ileum (p < 0.001, p < 0.05, and p < 0.01, respectively). Additionally, colon weight was decreased (p < 0.01). These changes were associated with a decrease in extractable GLP-1, GLP-2 and PYY in the colon (p < 0.05, p < 0.01, and p < 0.01). However, we could not show that the fiber-dependent size decrease was dependent on GLP-1 receptor, GLP-2 receptor or TGR5 signaling. Intestinal permeability was increased following the removal of fiber for 112 days. In conclusion, our study highlights the importance of dietary fiber to maintain intestinal weight, colonic L-cell secretion and intestinal integrity.
INTRODUCTION
Non-digestible carbohydrates, termed dietary fibers, have been linked to improved health outcome, especially concerning gut health (1). Their presence in the diet can delay gastric emptying rate (2), increase fecal bulk and moisture content (3), and impact bacterial diversity (4) and fermentation (5). Despite these attributes, fiber consumption in the diets of westernized nations continues to fall below the recommended guidelines (6). This 'western diet' is characterized by calorie-rich processed foods, high in sucrose and saturated fats with reduced dietary fiber (7) and is linked to the rise in noncommunicable diseases (1) and a decrease in microbial diversity (8,9). Additionally, low fiber diets are recommended to patients to manage intestinal symptoms such as diarrhea in a range of gastrointestinal conditions including Crohn's disease, ulcerative colitis, irritable bowel syndrome, as well as following chemotherapy and radiotherapy (10). One proposed mechanism by which diet composition can influence health is through its influence on the secretion of gut hormones, including glucagon-like peptide-1 (GLP-1) and glucagon-like peptide-2 (GLP-2) (11,12). The glucagon-like peptides are cosecreted from enteroendocrine L-cells, predominantly located in the distal small intestine (SI) and colon. GLP-1 is a multifunctional incretin hormone best known for modulating glucose metabolism (13) but also has a mild trophic effect in the intestine of rodents (14,15). GLP-2 plays an important role in gut epithelium function by being the major regulator of the intestinal size as well as absorptive capacity (16)(17)(18)(19). Together GLP-1 and GLP-2, synergistically ameliorate intestinal injury and improve intestinal healing (20).
Classically, luminal nutrient delivery has been described as the stimulus for enteroendocrine L-cells (21), but more recently other stimuli such as short-chain fatty acids (SCFA) and bile acids have also been reported to be stimulators in experimental animals (22,23) and humans (24,25). SCFAs are the primary fermentation products of dietary fiber by bacteria located in the proximal colon (26) and exert their secretagogue function by binding to the free fatty acid receptors 2 and 3 (FFAR2 and FFAR3) located on L-cells (27,28). The gut microbiota metabolizes and modifies bile acids and regulates the expression of their synthesizing enzymes (29,30). Bile acids exert their secretagogue function by binding to the bile acid receptor Takeda G protein-receptor-5 (TGR5) located on intestinal L-cells (31).
Given the ability of dietary fiber to modulate the gut microbiota (5) and its fermentation products we hypothesized that the removal of dietary fiber would decrease the gut size and available absorptive capacity due to the moderation in L-cell secretion. We aimed to investigate the impact of a fiber deficient diet on the intestinal morphological homeostasis and L-cell secretion and to evaluate GLP-1 receptor (GLP-1r), GLP-2 receptor (GLP-2r) and TGR5 signaling in the response using knockout mice.
Experimental Setup
Fiber-Free Diet in C57BL/6JRj Mice
All animals from the 21 days experiment were weighed every four days, and their daily food intake was determined by hand weighing the remaining food per cage. Daily nutritional consumption per mouse was calculated by dividing the total intake (g) per cage by the number of mice in the cage, multiplied by the macronutrient distribution of the diets per g (Figure 1).
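The per-mouse intake calculation described above is simple arithmetic; the sketch below spells it out in Python. The macronutrient fractions are placeholder values for illustration only and do not reproduce the actual diet formulations.

```python
# Placeholder macronutrient composition (g of nutrient per g of diet);
# these fractions are illustrative, not the actual diet specifications.
MACRONUTRIENTS_PER_G = {"protein": 0.19, "fat": 0.04,
                        "carbohydrate": 0.40, "fiber": 0.06}

def intake_per_mouse(total_cage_intake_g, n_mice,
                     composition=MACRONUTRIENTS_PER_G):
    """Daily nutritional consumption per mouse: total cage intake divided
    by the number of mice, multiplied by each macronutrient fraction."""
    per_mouse_g = total_cage_intake_g / n_mice
    return {nutrient: per_mouse_g * fraction
            for nutrient, fraction in composition.items()}

# Example: a cage of 4 mice ate 268 g of chow over the study period.
print(intake_per_mouse(total_cage_intake_g=268.0, n_mice=4))
```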
Mice were given an intraperitoneal injection of bromodeoxyuridine (BrdU) (50 mg/kg) (Sigma-Aldrich, Missouri, US, cat.no. B5002) (only the 21 days experiment) and an oral gavage of fluorescein isothiocyanate dextran (FITC-dextran) (400 mg/kg) (Sigma-Aldrich, cat.no. 60842-46-8) 2 h before sacrifice. Mice were anesthetized with an intraperitoneal injection of ketamine (90 mg/kg) (MSD Animal Health, Madison, New Jersey, US, cat.no. 511485) and xylazine (10 mg/kg) (Rompun Vet, Bayer Animal Health, Leverkusen, Germany, cat.no. 148999). From a midline incision, a total blood draw was made from the vena cava and the blood was collected in pre-cooled 3.9 mmol/l EDTA-coated tubes (Eppendorf, Hamburg, Germany, cat.no. 20901-757) and stored on ice until centrifuged (3500 rpm, 15 min, 4°C). Plasma was transferred to pre-cooled Eppendorfs and stored at -20°C until further analysis. The small intestine (SI) was resected and partitioned into the duodenum, jejunum and ileum and the colon was resected. All the resected tissue was flushed and weighed as previously described (35). Transverse sections of the duodenum, jejunum, ileum and colon were fixed in 10% neutral formalin buffer for
Histology
Formalin-fixed tissue from the duodenum, jejunum, ileum and colon was first dehydrated and paraffin-embedded. Slices of the embedded tissue (4 μm) were cut using a microtome (pfm Slide 4005 E, pfm medical, Köln, Germany) and stained with hematoxylin/eosin. The average crypt depth and villus height were approximated by measuring these parameters in 20 well-oriented villi and crypts. Mucosa area was measured by subtracting the luminal circumference from the submucosal circumference. All measurements were made from histological photographs taken using a light microscope connected to a camera.
Protein Extraction
Intestinal tissue was subject to peptide extraction as previously described (33). In short, the tissue samples were homogenized in 1% trifluoroacetic acid (TFA; Thermo Fisher Scientific, cat.no. TS-28904) then left to stand at room temperature for 1 h and were then centrifuged for 10 min at 2000xg. After determination of the concentration of protein (Pierce BCA Protein Assay Kit, Thermo Fisher Scientific, cat.no 23225), the supernatants were fractionated using tc18 cartridges (Waters, Massachusetts, US, cat.no 036810). After evaporation, ethanol-eluted peptides were reconstituted in assay buffer (phosphate buffer 80 mM, 0.1% human serum albumin, EDTA 10 mM, pH 7.5, plus 0.01 mM of the dipeptidyl peptidase-4 inhibitor valine-pyrrolidide).
Plasma Determination of FITC-Dextran
Total intestinal permeability was indirectly measured by the determination of the non-digestible 4-kDa dextran conjugated with fluorescein isothiocyanate (FITC-dextran) in plasma (39). Following oral administration, FITC-dextran transits through the gastrointestinal tract and can passively pass through the intestinal epithelium. The concentration of FITC-dextran in plasma represents the permeability of the intestinal epithelium. Plasma was diluted in an equal volume of phosphate-buffered saline (PBS) and subjected to fluorescence analysis using an excitation wavelength of 485 nm and an emission wavelength of 528 nm in a SpectraMax iD3 multi-mode microplate reader (Molecular Devices, San Jose, US). The results of the fluorescence measurements were compared to a standard curve of known FITC-dextran concentrations.
(Figure 1 caption fragment: change in BW is expressed as percentage BW change from day 0; BW and change in BW were compared using a two-way ANOVA followed by Sidak's multiple comparisons test, and data are presented as mean ± SEM. *p < 0.05, **p < 0.01, and ****p < 0.0001.)
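Back-calculating plasma FITC-dextran from fluorescence readings against a standard curve is a routine linear interpolation; a minimal Python sketch is shown below. The standard concentrations and fluorescence values are invented for illustration, and the dilution factor of 2 only reflects the 1:1 dilution in PBS described above.

```python
import numpy as np

# Illustrative standard curve: known FITC-dextran standards (ug/mL)
# and their measured fluorescence (arbitrary units); not study data.
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
std_fluor = np.array([12.0, 410.0, 820.0, 1630.0, 3250.0, 6490.0])

# Linear fit of fluorescence against concentration.
slope, intercept = np.polyfit(std_conc, std_fluor, deg=1)

def fitc_concentration(sample_fluor, dilution_factor=2.0):
    """Interpolate a plasma FITC-dextran concentration (ug/mL) from the
    standard curve, correcting for the 1:1 dilution in PBS."""
    return (sample_fluor - intercept) / slope * dilution_factor

print(round(fitc_concentration(1500.0), 2))
```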
Calculation and Statistical Evaluation
All statistics were performed using GraphPad Prism 6 (La Jolla, California, US). Statistical evaluations of the data were carried out using two-sided, unpaired t tests when comparing two independent groups and a two-way analysis of variance (ANOVA) when comparing multiple independent groups followed by a Sidak's multiple comparisons test. Values of p < 0.05 were considered significant and all data in the text and graphs were presented as mean ± SEM.
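To make the statistical workflow concrete, the sketch below mirrors it in Python with invented body-weight data: a two-way ANOVA over diet and time followed by per-timepoint comparisons with a Sidak correction. Note that GraphPad Prism's Sidak multiple-comparisons procedure is not numerically identical to Sidak-adjusted t-tests, so this is only an approximate stand-in.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multitest import multipletests
from scipy.stats import ttest_ind

# Invented body-weight (g) data in a diet x day design (not study data).
df = pd.DataFrame({
    "bw":   [22.1, 21.8, 22.5, 21.0, 20.1, 19.8,
             23.0, 22.7, 23.4, 22.9, 22.5, 23.1],
    "diet": ["fiber_free"] * 6 + ["chow"] * 6,
    "day":  ["d8", "d8", "d8", "d21", "d21", "d21"] * 2,
})

# Two-way ANOVA with diet, day, and their interaction.
model = ols("bw ~ C(diet) * C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Per-day unpaired t-tests, then a Sidak adjustment of the p values.
pvals = []
for d in ["d8", "d21"]:
    chow = df[(df["day"] == d) & (df["diet"] == "chow")]["bw"]
    fiber_free = df[(df["day"] == d) & (df["diet"] == "fiber_free")]["bw"]
    pvals.append(ttest_ind(chow, fiber_free).pvalue)

reject, adjusted, _, _ = multipletests(pvals, alpha=0.05, method="sidak")
print(reject, adjusted)
```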
Dietary Intake and Body Weight
After the 21 days of feeding, fiber-free fed mice had a lower total intake of diet per mouse (47 vs. 67 g) ( Figure 1A). This resulted in a lower total calorie intake (171 vs. 214 kcal per mouse) ( Figure 1B). Unsurprisingly, total fiber intake after 21 days was lower in the fiber-free group (0.08 vs. 4.11 g). Total fat (2.37 vs. 2.75 g) and carbohydrate (29 vs. 27 g) consumption after 21 days was similar in both treatment groups. Fiber-free mice had a lower total protein intake (8 vs. 13 g) compared to the chow fed mice after 21 days. Body weight (BW) was significantly decreased from day 17 in the fiber-free mice ( Figure 1C) and the percentage BW change was significantly different from day 8 until the end of the study ( Figure 1D).
Weight and Morphometric Estimates in the Intestine
Fiber-free fed mice had significantly reduced intestinal weights, relative to BW, in the duodenum (11%), the jejunum (28%) and the ileum (32%) after 21 days of feeding (Figure 2A). Total small intestine (SI) weight, relative to BW, was significantly reduced by 25% (Figure 2B). Additionally, fiber-free feeding significantly decreased SI weight per length (Figure 2C). Given this, it was surprising that fiber-free mice had a significant increase in duodenal villus height (Figure 2D). Villus height in the jejunum and ileum remained unchanged (Figure 2D). Fiber-free feeding significantly reduced crypt depth in the duodenum by 13%, in the jejunum by 36%, and in the ileum by 23%; crypt depth remained unchanged in the colon, but the overall mucosa area in the colon was significantly decreased (Figures 2E, F). Images of representative hematoxylin and eosin-stained intestinal tissue are displayed in Supplementary Figures 1B, C (https://doi.org/10.6084/m9.figshare.13594058.v1). Colon weight, relative to BW, was significantly decreased by 41% (Figure 2G), and colon weight per length was significantly decreased (Figure 2H). Fiber-free feeding did not affect the number of BrdU immunopositive cells per crypt in the SI or colon (Figure 2I). Long-term fiber-free fed mice had significantly reduced intestinal weights, relative to BW, in the duodenum (13%), the jejunum (29%) and the ileum (38%) (Figure 3A). Total SI weight, relative to BW, was significantly reduced by 28% (Figure 3B), and there was a tendency (p=0.0594) for reduced SI weight per length (Figure 3C). Fiber-free feeding did not affect the villus height in the duodenum and jejunum, but in the ileum, the villus height was significantly decreased by 19% (Figure 3D). Fiber-free feeding significantly reduced crypt depth in the duodenum by 28%, in the jejunum by 35%, and in the ileum by 30%, and tended to reduce crypt depth in the colon (p=0.0558) (Figure 3E). Mucosa area was significantly decreased in the ileum and colon (Figure 3F). Images of representative hematoxylin and eosin-stained intestinal tissue are displayed in Supplementary Figure 1, panels D and E (https://doi.org/10.6084/m9.figshare.13594058.v1). Colon weight, relative to BW, was significantly decreased by 44% following a long-term fiber-free diet (Figure 3F). Colon weight per length was significantly decreased in fiber-free mice compared to chow (Figure 3G).
L-Cell Hormone Production
Fiber-free feeding for 21 days significantly decreased the concentration of total GLP-1, by 37% and 55% in the ileum and colon, respectively ( Figure 4A). The concentration of intact GLP-2 (1-33) remained unchanged in the ileum but was significantly decreased in the colon by 80% ( Figure 4B). Total PYY, remained unchanged in the ileum but was significantly decreased by 48% in the colon ( Figure 4C).
Intestinal Permeability
Fiber-free feeding did not affect the level of FITC-dextran in the plasma after 21 days ( Figure 5A). After 112 days, the concentration of plasma FITC was significantly tripled in the fiber-free mice compared to chow ( Figure 5B).
Fiber-Free Diet in Knockout Mice
In GLP-1r-/- and GLP-1r+/+ mice, genotype did not affect BW change (Figure 6A). Total SI weight, normalized to BW, was significantly increased in the chow-fed GLP-1r-/- mice compared to chow-fed GLP-1r+/+ mice, with no differences between genotypes found in the fiber-free mice (Figure 6B). Genotype did not affect colon weight normalized to BW (Figure 6C). In GLP-2r-/- and GLP-2r+/+ mice, genotype did not affect BW change, SI, or colon weight normalized to BW (Figures 6D-F). Similarly, in the TGR5-/- and TGR5+/+ mice, genotype did not affect BW change, SI, or colon weight normalized to BW (Figures 6G-I).
DISCUSSION
This study emphasizes the importance of dietary fiber for maintaining intestinal weight, colonic L-cell secretion and intestinal integrity in mice. We show that the removal of crude fiber from the diet dramatically decreased the intestinal size during both the short (21 days) and long-term (112 days) study periods and, additionally, drastically decreased colonic L-cell hormone content after 21 days. Intestinal permeability was unaffected following the 21 days of deficient fiber feeding but was significantly increased after 112 days of feeding. Traditionally, the physiological influence of dietary fiber was thought to be limited to the intestinal lumen, affecting gastric emptying rate (2) and contributing to fecal bulk and moisture content (40). These attributes made fiber an ideal candidate to maintain healthy intestinal transit, yet fiber and its derivatives have increasingly been shown to play a role beyond the lumen. Fiber has been shown to alter the physical characteristics of the rodent gut, such as increasing intestinal weight (41) and increasing intestinal epithelial cell proliferation (42), which could make fiber a potential candidate for ameliorating intestinal injury. These effects could also have implications for populations consuming low-fiber diets, such as a western diet, or for patients following advice to consume low-fiber diets to alleviate intestinal side effects such as diarrhea (10).
To assess the impact of low fiber diets on the gut size, we switched mice from their normal chow diet to a fiber-deficient diet for 21 days. Here, we show that removal of crude dietary fiber significantly decreased the weight of the SI and, to a larger extent, the colon. Additionally, removal of fiber significantly decreased crypt depth in the SI and significantly decreased the mucosa area in the colon. These findings support previous data describing a dose-dependent effect of fiber on gut size in rats (41). Similar to our study, the authors reported a fiber-mediated effect on crypt depth in the jejunum and ileum. Additionally, they reported that fiber supplementation increased villus height, whereas we paradoxically observed a significant increase in duodenal villus height upon the removal of fiber after 21 days. This difference in outcome could be attributed to the different experimental setups, with Adam et al. (41) supplementing diets with the soluble fiber pectin, compared to the total removal of crude fiber exemplified in this study. The total removal of fiber could trigger a compensatory response in the proximal SI due to a loss of absorptive capacity, as a product of reduced intestinal size.
We further investigated this possible compensatory response by repeating the experiment but with a long-term feeding schedule of 4 months (112 days). Comparable to the 21-day feeding regime, long-term fiber deficiency reduced SI and colon weights by a similar proportion, which coincided with a decrease in crypt depth in the small intestine and a decrease in mucosa area in the colon. Uniquely in the long-term study, there was a decrease in villus height and overall mucosa area in the ileum following fiber-free feeding. These results suggest that the fiber-free fed mice were not able to compensate for their loss of intestinal weight over time. Instead, the reduction in crypt depth preceded villous atrophy in the ileum. At the cell level, atrophic loss may be a product of decreased proliferation or increased apoptosis. Here, we report no changes in proliferative activity measured using BrdU immunopositivity; therefore, it could be speculated that the atrophic loss was due to an increased apoptotic rate. The observed decrease in colonic weight could be assumed to be a product of the decreased fermentation processes, yet, similar to previous findings utilizing soluble fiber in rats (41), the morphological changes were also present in the SI. This suggests that the observed growth effect was independent of the local actions of dietary fiber fermentation. Instead, we hypothesized that the growth response in the SI could be mediated by the intestinal gut hormones GLP-1 and GLP-2. GLP-2 is a well-documented intestinotrophic hormone shown to increase proliferation and reduce apoptosis (16,17,43). GLP-1 can protect against mucosal loss following intestinal injury (44). Both hormones are co-secreted upon a number of nutritional stimuli, including in response to dietary fiber fermentation products (22,27). Indeed, we show that the absence of fiber in the diet decreased the tissue levels of GLP-1, GLP-2 and, additionally, PYY. The present results agree with previous studies assessing L-cell secretion following fiber supplementation in rodents and humans (3,45,46). However, in contrast to measuring plasma hormone levels, we measured the local tissue concentrations and showed that the L-cell hormones were primarily affected in the colon and to a lesser extent in the distal SI. This ultimately suggests that the fiber-mediated stimulation of the gut growth is controlled by colonic L-cells. Targeting this colonic endocrine function has a large therapeutic potential since the largest density of L-cells is found in the colon (47). These L-cells are situated away from classical luminal nutrient stimulation, thereby attributing a different source of stimuli for this subset of L-cells that we are just beginning to characterize.
Given that a decrease in intestinal size corresponded with a decrease in L-cell content in the colon following a fiber deficient diet, we hypothesized that the intestinotrophic actions of GLP-2, and to a lesser extent GLP-1, could drive this response and investigated our hypothesis using global GLP-1r-/- and GLP-2r-/- mice. Abolition of GLP-1 receptor signaling did not affect the decreased SI or colon weight observed following deficient dietary fiber feeding; however, loss of fiber from the diet removed the significant increase in SI weight observed in GLP-1r-/- mice. This suggests an interaction between GLP-1 signaling, gut size, and the presence of fiber, whereby in the presence of fiber, GLP-1r-/- mice develop increased SI weight, while the signal is lost following the removal of fiber. Abolition of GLP-2 receptor signaling did not affect the intestinal size. To assess if the growth response could be impacted by the indirect manipulation of GLP-1 and GLP-2, we used the bile acid receptor TGR5-/- mice. TGR5 is located on the L-cell, and we have previously shown that TGR5 stimulation led to a GLP-2-mediated increase in intestinal size from colonic L-cells (33). However, in the current experiments intestinal size was not impacted by TGR5 signaling. Therefore, we were not able to show the mechanistic drivers of the fiber-mediated growth response using these knockout mouse models. Future studies should focus on assessing the contribution of other microbially modulated metabolites or on assessing receptor contributions in inducible knockout models, since germline knockout models, as used in this study, are limited by the risk of evolved compensatory mechanisms to maintain intestinal capacity.
Dietary fiber has been proposed to improve gut and overall health by helping maintain the intestinal barrier (4,48). Upon the removal of dietary fiber, bacteria switch their glycan metabolism from fiber degradation to mucus glycan degradation thereby reducing the colonic mucus layer thickness which increases microbial translocation, triggering systemic inflammation (4,48,49). Here, we investigated intestinal permeability following the removal of dietary fiber using FITC-dextran. Surprisingly, there were no differences in permeability after 21 days but a large increase after 112 days of fiber-free feeding. This suggests that long-term fiber interventions are required to moderate permeability to an extent that can be detected at the serum level but does not rule out short-term precursor modifications such as mucus layer thickness, which were not assessed in this study.
The present study is limited by a difference in the composition of the two compared diets. Both diets were purchased from the same distributor but had micro- and macronutrient differences beyond the content of fiber, including a higher kcal/kg in the fiber-free diet. Likely due to the higher kcal/kg, fiber-free fed mice consumed less diet, which at the end of the 21-day feeding period increased the disparities in macronutrient consumption and led to a significant decrease in BW from day 8. In particular, after 21 days the fiber-free mice had consumed less protein compared to the chow mice (8 vs. 13 g); therefore, we cannot exclude the influence of decreased protein on the investigated intestinal parameters. Despite this, previous studies investigating the effect of low protein diets on the intestine have shown that they correlate with a significant reduction in villus heights (50,51), which was not observed in our study. Additionally, reduced-protein diets have been shown to have little effect on GLP-2 secretion (52).
Despite the numerous studies describing the health benefits of dietary fiber consumption in humans, the doses needed to emulate the benefits are often not realized in practice due to a combination of logistical reasons affecting compliance and adverse effects such as bloating (53) and flatulence (54). Therefore, it remains important to investigate the mechanisms behind these health benefits in order to develop dietary recommendations and yield new approaches to prevent or treat a range of human diseases. Here, we show that fiber is essential to maintain intestinal size and colonic L-cell hormone levels and to maintain intestinal integrity. These findings could have important implications for populations consuming low-fiber diets, such as a western diet, who could have an increased susceptibility to intestinal disease. In particular, patients recommended to consume low-fiber diets following intestinal injury, such as in the case of pelvic radiotherapy (55), could be at particular risk since, despite the low-fiber diet improving intestinal side effects such as diarrhea, it might simultaneously prolong the radiation-induced intestinal damage (56). Despite showing here that a deficient fiber status dramatically altered the colonic endocrine function, the mechanistic drivers of this response were not elucidated. The spatiotemporal location of the colonic endocrine cells implicates alternative stimuli to classical luminal nutrients, such as microbially modulated metabolites like the SCFAs that are produced, and the bile acids that are modified, by bacteria-driven processes. Given this, future fiber-mediated metabolomics studies are necessary to define likely candidates of L-cell secretion in the colon.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author. | 2021-02-26T14:13:00.777Z | 2021-02-26T00:00:00.000 | {
"year": 2021,
"sha1": "d59a272fdf7962e7bddf802f176e9e2363fd9a47",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2021.640602/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d59a272fdf7962e7bddf802f176e9e2363fd9a47",
"s2fieldsofstudy": [
"Medicine",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
58640392 | pes2o/s2orc | v3-fos-license | KSHF Guidelines for the Management of Acute Heart Failure: Part I. Definition, Epidemiology and Diagnosis of Acute Heart Failure
The prevalence of heart failure (HF) is on the rise due to the aging of society. Furthermore, the continuous progress and widespread adoption of screening and diagnostic strategies have led to an increase in the detection rate of HF, effectively increasing the number of patients requiring monitoring and treatment. Because HF is associated with substantial rates of mortality and morbidity, as well as high socioeconomic burden, there is an increasing need for developing specific guidelines for HF management. The Korean guidelines for the diagnosis and management of chronic HF were introduced in March 2016. However, chronic and acute heart failure (AHF) represent distinct disease entities. Here, we introduce the Korean guidelines for the management of AHF with reduced or preserved ejection fraction. Part I of this guideline covers the definition, epidemiology, and diagnosis of AHF.
INTRODUCTION
The number of patients with heart failure (HF) has been on the rise as a consequence of the aging of society and the improvement in screening and diagnostic techniques. Therefore, there is an increasing need for developing guidelines for the diagnosis and treatment of HF. Although guidelines for HF management have already been issued by American and European associations, many aspects of such guidelines do not reflect the domestic reality. AHF that develops in patients without a previous diagnosis of HF is termed "de novo HF." Acute myocardial infarction (AMI) is the most representative manifestation of AHF, though some patients remain asymptomatic despite decreased cardiac function or may exhibit gradual symptoms as decompensation develops.
Classification of acute heart failure
Although there are several ways to classify AHF, the classification based on the clinical condition at the time of admission is most useful from a practical perspective because it facilitates identification of high-risk patients and initiation of treatment according to a preestablished treat-to-target strategy. In patients with AHF, systolic blood pressure is usually preserved (90-140 mmHg) or elevated (>140 mmHg; hypertensive AHF) but may also be decreased (<90 mmHg; hypotensive AHF). Hypotensive AHF generally has poor prognosis, especially if accompanied by decreased perfusion.
Regarding causes and aggravating factors of AHF, five clinical conditions require immediate treatment: acute coronary syndrome, hypertensive emergencies, tachycardia/severe bradycardia/conduction disorders, structural heart damage, and acute pulmonary embolism. The clinical condition is defined based on the findings of the bedside physical examination, focused mainly on clarifying the presence, nature, and degree of congestion ("wet" or "dry") and/or tissue perfusion ("warm" or "cold") according to the Forrester classification (Figure 1). AHF patients can be classified into four groups with increasingly poor prognosis: group A, with "warm" and "dry" HF (compensated, well perfused, without congestion); group B, with "warm" and "wet" HF (well perfused but congested); group C, "cold" and "dry" HF (hypoperfused but without congestion); and group D, "cold" and "wet" HF (hypoperfused and congested).
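Because the wet/dry and warm/cold assessment maps cleanly onto the four groups, the classification can be written as a tiny decision function. The Python sketch below is only an illustrative encoding of the grouping described above, not a validated clinical tool.

```python
def forrester_group(congested: bool, hypoperfused: bool) -> str:
    """Map bedside findings to the four groups described in the text.

    congested    -> "wet" (True) vs. "dry" (False)
    hypoperfused -> "cold" (True) vs. "warm" (False)
    Prognosis worsens from group A to group D.
    """
    if not congested and not hypoperfused:
        return "A: warm and dry (compensated)"
    if congested and not hypoperfused:
        return "B: warm and wet (congested, well perfused)"
    if not congested and hypoperfused:
        return "C: cold and dry (hypoperfused, no congestion)"
    return "D: cold and wet (hypoperfused and congested)"

print(forrester_group(congested=True, hypoperfused=False))  # group B
```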
To confirm the presence and degree of congestion, it is necessary to assess jugular venous pressure and check for ascites or peripheral edema, rales or abnormal heart sound on chest auscultation, and pulmonary edema on chest radiography. Decreased tissue perfusion can be confirmed based on decreased skin temperature, reduced urine output, and altered mentality. Pulmonary capillary wedge pressure is the best indicator of congestion, while cardiac index is the best marker of perfusion.
Epidemiology
The prevalence of AHF has been increasing in parallel with the rates of cardiovascular disease, reflecting the aging of society and widespread adoption of a westernized lifestyle. AHF is a major cause of hospitalization as well as death during hospital admission and readmission, especially among individuals aged >65 years. The prevalence of AHF continues to increase despite recent advances in medical technology and reduced in-hospital mortality rates. According to epidemiological studies conducted in the United States, more than one million hospitalizations for AHF occur annually, and hospitalization rates are expected to continue to increase over the next 20 years. 2) Approximately 15.5 million emergency room visits due to AHF were recorded between 1992 and 2001, with an average increase of 18,500 cases per year. 3) It is estimated that, by 2050, 20% of the population of the United States will consist of individuals aged >65 years, and approximately 80% of hospitalizations for AHF will likely occur in this age group. 4) While no epidemiological studies on this topic have been conducted in Korea, the prevalence of AHF is expected to increase in the future due to the recent increase in the prevalence of risk factors for cardiovascular disease, reflecting the westernization of eating habits, lack of physical activity, and rapid increase in the proportion of elderly people. AHF is also associated with a heavy socioeconomic burden. In the United States, the current annual cost of hospitalizations due to AHF is approximately $34 billion, which is similar to the figures reported in Europe. 5) In Korea, AHF-related expenses associated with an 8-day hospital stay amount to approximately 7.7 million won (about $7,000). 6) In a Korean AHF cohort, 52% of patients were identified as having de novo HF, while the remaining 48% had decompensated HF. 6) The same study concluded that, at the time of AHF diagnosis, Korean patients tended to be younger (mean age, 69 years) than those in Japan, USA, and Europe; the proportion of men (55%) was slightly higher among Korean AHF patients than among patients from AHF registries in other countries. In this Korean AHF cohort, 59% of patients had hypertension, 35% had diabetes, and 28% had atrial fibrillation. The major cause of AHF was ischemic heart disease (37%), followed by cardiomyopathies (21%), valvular heart disease (14%), and tachycardia-induced cardiomyopathy (11%). The prognosis of AHF is reportedly very poor. A study conducted in the United States reported that fewer than 1 in 3 patients hospitalized for AHF survived 5 years. 7) Studies conducted in Korea reported similarly poor outcomes: in-hospital mortality, 6.1%; short-term all-cause mortality, 1.2% and 9.2% at 1 and 6 months, respectively; 6) long-term mortality, 15%, 21%, 26%, and 30% at 1, 2, 3, and 4 years, respectively 8); re-admission rate, 6.4% within 1 month and 24% within 6 months. 6)
Symptoms and signs
The traditional clinical approach involves identifying symptoms and signs by taking a clinical history and performing a physical examination in all patients who visit the emergency room with acute symptoms suggestive of HF (Class of Recommendation, I; Level of Evidence, C). The diagnosis can also be made based on objective evidence of pulmonary edema or cardiac dysfunction in patients with signs and symptoms of AHF. [9][10][11] In general, symptoms and signs specific to HF can be divided into two categories: congestion and decreased perfusion of the peripheral tissues. AHF symptoms include orthopnea and paroxysmal nocturnal dyspnea as typical symptoms of congestion, as well as fatigue and decreased exercise capacity associated with decreased perfusion of peripheral tissues. AHF signs include pulmonary edema, jugular venous engorgement, hepatojugular reflux, third heart sound (S3), left downward deviation of the apical pulse, peripheral edema, and decreased urine output. However, it is difficult to discriminate the cause of such symptoms and signs, especially in patients with obesity, advanced age, and chronic lung disease. [9][10][11][12][13] Therefore, it is recommended that the following medical history be recorded and physical examination be conducted in patients with AHF.
Orthopnea
At the bedside, the presence and severity of orthopnea can be easily evaluated if sufficient pillow height is used to achieve a position in which the patient can breathe comfortably. Thus, orthopnea can be defined as: mild or non-existent if the patient can breathe comfortably without a pillow or with minimal elevation of the neck; moderate if the patient can breathe comfortably only when using at least one pillow, with neck elevation of up to 10 cm; and severe if the patient can breathe comfortably only when using at least two pillows.
Jugular venous distention
To evaluate jugular venous distention, the patient must sit in a relaxed posture, with the back at an angle of 30°-45° from the horizontal. The neck and chest should be exposed from the middle of the sternum to the antihelix of the ear. Upon turning the patient's neck to one side, jugular venous distention can be measured vertically with illumination. If it is difficult to distinguish the jugular venous pulse from the internal carotid pulse, the hepatojugular reflux can be measured by applying pressure to the right upper quadrant for about 10 seconds and then measuring the time it takes for jugular venous pressure to recover from the transient increase; if the elevated jugular venous pressure persists, right ventricular dysfunction may be suspected. Jugular venous distention, which is measured from the angle of Lewis to the highest point of internal jugular venous pulsation, is considered severe if it exceeds 15 cm.
Peripheral edema
Peripheral edema can be examined by applying pressure with the thumb for 5 seconds to the medial malleolus, posterior side of the tibia, or sacrum (in patients who cannot walk). Edema is confirmed if the skin takes longer than 10 seconds to return to the original shape, and is defined as mild edema if observed in only one lower limb or the sacrum, versus moderate edema if observed in the bilateral lower limbs. The patient is considered to have severe edema if a large amount of edema is noted in the lower portion of the calf, if edema extends from the leg to the sacrum (in immobile patients), if pitting edema develops easily and disappears after >30 seconds, or if acute/subacute skin changes including cutaneous distension or cleavage are noted.
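The edema grading above follows explicit bedside rules (rebound time and distribution), so it can be encoded as a simple decision function. The Python sketch below is a rough, illustrative simplification of those criteria, not a validated scoring instrument, and the site labels are made up for the example.

```python
def grade_pitting_edema(sites_with_edema, rebound_seconds,
                        severe_skin_changes=False):
    """Simplified encoding of the bedside grading described above.

    sites_with_edema: a set such as {"left_leg"}, {"left_leg", "right_leg"},
    or {"sacrum"} (labels are illustrative).
    rebound_seconds: time for the skin to return to its original shape.
    """
    if rebound_seconds <= 10:
        return "no edema"            # pitting not confirmed
    if severe_skin_changes or rebound_seconds > 30:
        return "severe"              # rebound > 30 s or acute skin changes
    if {"left_leg", "right_leg"} <= sites_with_edema:
        return "moderate"            # bilateral lower limbs
    return "mild"                    # one lower limb or the sacrum

print(grade_pitting_edema({"left_leg"}, rebound_seconds=15))   # mild
print(grade_pitting_edema({"left_leg", "right_leg"}, 20))      # moderate
```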
Rales
Rales are graded as follows: grade 1, unilateral or bilateral rales across the lower third of the lung; grade 2, bilateral rales across the lower half or two thirds of the chest; and grade 3, bilateral rales throughout all lung fields. It is also necessary to distinguish rales from wheezing (stridor) or dry crackles.
Dyspnea
Dyspnea is graded according to the New York Heart Association functional classification ( Table 3).
Aggravating factors of acute heart failure
In the initial evaluation of patients with AHF, it is important to perform an accurate hemodynamic assessment and to identify the aggravating factors. Common causes of AHF include: acute coronary syndrome, inappropriate control of hypertension, high salt intake, arrhythmia (including atrial fibrillation), medications reducing cardiac contractility, renal impairment, pulmonary embolism, alcohol abuse, worsening of underlying thyroid disease, infection, and worsening of underlying cardiovascular disease (Table 4). 14) Because acute coronary syndrome is an important factor that can cause AHF, the patients should be carefully checked for chest pain, changes in the ST segment on electrocardiography, and elevation in blood troponin levels. If necessary, coronary angiography should be performed to exclude the possibility of coronary artery disease as a cause of worsening HF. 15) Hypertension is a common cause of AHF exacerbation, especially in women and in patients with HFpEF. Patients with HF who are taking antihypertensive medications can also develop AHF if they suddenly stop their medication. 16) HF may also worsen without adequate restriction of water or salt. 18)19) Atrial fibrillation is common in patients with HF, and an excessively increased heart rate may be a cause of HF exacerbation due to increased left atrial pressure and decreased cardiac output. 14) The use of medications such as calcium channel blockers, non-steroidal anti-inflammatory drugs, corticosteroids, and thiazolidinedione diabetic medications can decrease left ventricular contractility or worsen HF symptoms. 14) Excessive long-term alcohol consumption impairs cardiac function and can worsen HF symptoms. Infections such as pneumonia and septicemia generally increase the metabolic demand or reduce myocardial contractility, which may exacerbate HF symptoms. 14)20) An exacerbation of renal dysfunction, pulmonary embolism, or abnormal production of thyroid hormones can exacerbate cardiac dysfunction. A worsening of preexisting heart disease such as valvular heart disease would also cause HF symptoms. 14) Therefore, aggravating factors should be assessed and treated in all AHF patients.
Table 3 (fragment recovered from the text). New York Heart Association functional classification:
Class III: Marked limitation of daily physical activity. Comfortable at rest, but less than ordinary activity causes symptoms.
Class IV: Unable to perform any physical activity without discomfort. Symptoms of heart failure at rest. Discomfort increases with any physical activity.
Table 4. Causes and aggravating factors of AHF:
1. Acute coronary syndrome (ACS)
2. Arrhythmias (tachycardia: atrial fibrillation, ventricular tachycardia; bradycardia: conduction disturbances)
3. Excessive rise in blood pressure
4. Noncompliance with recommendations regarding low salt intake, water restriction, or medication
5. Toxic substances (alcohol, recreational drugs)
6. Medications (e.g., non-steroidal anti-inflammatory drugs, corticosteroids, chemotherapeutics with cardiotoxicity)
7. Exacerbation of chronic obstructive pulmonary disease
8. Pulmonary embolism
9. Infection (including infective endocarditis)
10. Surgery and perioperative complications
11. Increased sympathetic tone, stress-induced cardiomyopathy
12. Metabolic/hormonal derangement (e.g., diabetic ketosis, thyroid dysfunction, adrenal dysfunction, and pregnancy- and peripartum-related problems)
13. Damage to the cerebral arteries
14. Acute mechanical cause: myocardial rupture complicating ACS (ventricular septal defect, pseudoaneurysm, free wall rupture, and acute mitral regurgitation), chest trauma or coronary artery intervention, acute native or prosthetic valve incompetence secondary to endocarditis, aortic dissection or thrombosis
Source: Ponikowski P, et al.; 2016 ESC Guidelines for the diagnosis and treatment of acute and chronic heart failure: The Task Force for the diagnosis and treatment of acute and chronic heart failure of the European Society of Cardiology (ESC), developed with the special contribution of the Heart Failure Association (HFA) of the ESC. European Heart Journal 2016;37(27).
Symptoms, physical findings, and laboratory findings in decompensated acute heart failure
The main symptoms of AHF include dyspnea, excessive body fluid retention, and fatigue, all of which are nonspecific findings that may occur in AHF or other cardiovascular diseases. The major symptoms and physical findings suggestive of AHF are summarized in Table 5. Because these symptoms and physical findings can be seen in other diseases, it is necessary to conduct differential diagnosis for pneumonia, pulmonary diseases (chronic obstructive pulmonary disease, pulmonary embolism, pulmonary arterial hypertension), renal impairment, severe infectious diseases, and acute coronary syndrome including AMI.
In patients with suspected AHF, the initial assessment should include general blood tests including biomarkers, electrocardiography, chest radiography, and echocardiography (Figure 2). Typical symptoms and signs of AHF in patients with suspected decompensated AHF include the characteristic findings of fluid overload (pulmonary edema, peripheral edema); less often, these findings are symptoms associated with reduced cardiac output and decreased peripheral circulation. Because the sensitivity and accuracy of these symptoms and physical findings are not high, the above-mentioned basic tests are often needed in addition to the clinical evaluation in order to discriminate the cause.
Recommendations:
1. The initial assessment of patients with suspected AHF should include 12-lead electrocardiography, chest X-ray, and blood tests to evaluate the levels of blood urea nitrogen, creatinine, electrolytes, and serum glucose, as well as a complete blood count, a liver function test, and a thyroid function test (class of recommendation, I; level of evidence, C).
2. Measuring serum natriuretic peptide levels is useful for making the clinical diagnosis in the presence of signs and symptoms of AHF, and especially useful for the differential diagnosis of AHF in patients with idiopathic dyspnea (class of recommendation, I; level of evidence, A).
3. Echocardiography should be performed in patients with hemodynamic instability or suspected functional or structural heart disease (class of recommendation, I; level of evidence, C).
4. Determining initial serum natriuretic peptide levels is useful for predicting in-hospital mortality, and pre-discharge follow-up levels are useful for assessing prognosis (class of recommendation, I; level of evidence, A).
5. Measuring the serum levels of troponin is helpful for predicting in-hospital mortality and for assessing prognosis (class of recommendation, I; level of evidence, A).
6. Measuring the serum levels of ST2, a biomarker of myocardial damage or fibrosis, may be helpful in predicting mortality risk in patients with acute decompensated HF (class of recommendation, IIb; level of evidence, A).
7. The usefulness of treatment based on serum natriuretic peptide levels has not been well established (class of recommendation, IIb; level of evidence, C).
Baseline evaluation
1) Chest X-ray Although pulmonary congestion, pleural effusion, and cardiomegaly are the most specific findings related to AHF, chest X-ray scans are normal in up to 20% of patients with AHF. 21) Nevertheless, chest radiography can help detect non-cardiac diseases potentially responsible for the patients' symptoms.
2) Electrocardiography
Because patients with AHF rarely have normal findings on electrocardiography, this type of assessment is very helpful for identifying the underlying cardiac disease and potential triggering factors. 22)
3) Echocardiography
An immediate echocardiographic examination is necessary only in patients with hemodynamic instability (cardiogenic shock), structural heart disease, or acute life-threatening structural or functional cardiac abnormalities (acute valvular regurgitation or acute aortic dissection). Early echocardiography should be considered in all patients with de novo AHF or for whom information on cardiac function is lacking. It is common for patients to undergo echocardiography performed by a specialist within 48 hours of admission. However, the optimal timing of echocardiographic assessment in AHF remains to be established. Repeated echocardiographic examinations are usually not needed unless a change in clinical condition becomes evident.
Laboratory tests
Routine blood tests reflect many aspects of the pathophysiology of AHF. Biomarkers of myocardial wall stress, hemodynamic abnormalities, inflammation, myocardial damage, neurohormonal changes, myocardial remodeling, myocardial extracellular matrix changes, and myocardial fibrosis are known to provide powerful additional information for the diagnosis, treatment, and prognostic assessment of AHF.
1) Natriuretic peptides: the brain natriuretic peptide and N-terminal pro-BNP
Natriuretic peptides are used in the diagnosis or differential diagnosis of AHF, especially in patients with idiopathic dyspnea. Brain natriuretic peptide (BNP) and N-terminal pro-BNP (NT-proBNP) have high diagnostic accuracy and negative predictive value in patients admitted to the emergency room for acute dyspnea, and are not affected by ejection fraction (i.e., they are useful in patients with HFrEF or HFpEF). [23][24][25][26][27][28][29] BNP levels were shown to be moderately correlated with left ventricular end-diastolic pressure, 30) and BNP levels at admission were direct predictors of in-hospital mortality, re-hospitalization within 30 days after discharge, and subsequent re-admission and mortality in patients with AHF. [31][32][33][34][35][36] Importantly, BNP levels immediately before discharge after HF treatment were good predictors of re-admission and death. 27)37)38) Increased levels of natriuretic peptides are very useful indicators of AHF in patients with AHF-related symptoms such as dyspnea (BNP >100 pg/mL or NT-proBNP >300 pg/mL). However, since various cardiac and non-cardiac conditions can result in elevated levels of natriuretic peptides, this criterion alone cannot be used to establish the diagnosis of AHF (Table 6). Thus, clinical characteristics should be considered, along with other laboratory findings.
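As a purely illustrative sketch (not part of the guideline; the function name and return convention are hypothetical), the cut-offs quoted above can be expressed as a simple screening rule:

```python
def natriuretic_peptides_support_ahf(bnp=None, nt_probnp=None):
    """Apply the cut-offs quoted in the text (pg/mL): increased levels
    (BNP > 100 or NT-proBNP > 300) support, but alone do not establish,
    a diagnosis of AHF; lower values argue against it."""
    if bnp is not None and bnp > 100:
        return True
    if nt_probnp is not None and nt_probnp > 300:
        return True
    return False
```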
The levels of natriuretic peptides decrease under treatment for HF, and the magnitude of the decrease reflects the extent of clinical improvement. 36)37)39)40) Whether optimizing drug therapy to lower natriuretic peptide levels can improve clinical outcomes beyond those attainable with standard HF therapy has been investigated, but the prospective randomized studies conducted to date have not reported consistent results. Therefore, further studies are warranted to clarify the usefulness of these therapeutic strategies.
2) Cardiac biomarkers: troponins
The levels of troponins, which are markers of myocardial necrosis, may be increased in patients with AHF in the absence of AMI or obstructive coronary artery disease, 41) suggesting that myocardial damage or necrosis persists and accumulates. It has been reported that AHF patients with elevated troponin levels are at an increased risk of mortality during hospitalization and after discharge, and that prognosis is better in patients whose troponin levels decrease during treatment, reflecting treatment response. 32)35)40)42)
3) Markers of myocardial fibrosis: soluble ST2 and galectin-3
In combination with natriuretic peptides, soluble ST2 and galectin-3, as biomarkers of myocardial fibrosis, help establish the diagnosis of HF and predict the risk of hospital admission and death. Soluble ST2 is a product of the ST2 gene, a member of the interleukin-1 receptor family. In animals with HF, increased ST2 transcription is associated with progressive myocardial fibrosis and hypertrophy induced by myocardial stretch.
In clinical studies, soluble ST2 levels were significantly elevated in patients with AHF, representing a significant predictor of 1-year mortality. 43)44) Galectin-3 plays an important role in fibroblast activation and fibrosis in animal and cell models, and its elevation is associated with poorer long-term survival in patients with HFrEF. 45) There is clinical evidence to support the future clinical use of biomarkers of myocardial damage and myocardial fibrosis in combination with already established markers such as natriuretic peptides. However, further studies are warranted to clarify the usefulness of such biomarkers for evaluating the prognosis of HF.
Right heart catheterization
Hemodynamic monitoring is used when it is clinically impossible to assess the fluid status or in patients who are unresponsive to initial therapy, especially if left ventricular filling pressure and cardiac output are unclear. Patients with clinically severe hypotension (systolic blood pressure <90 mmHg or symptomatic hypotension) and patients with renal impairment during initial treatment also undergo invasive hemodynamic monitoring. Patients indicated for heart transplantation or insertion of a mechanical circulatory assist device are also required to undergo right heart catheterization, including the measurement of pulmonary vascular resistance, an essential element for evaluating candidacy for heart transplantation. Invasive hemodynamic monitoring should be used in the following cases: i) cardiogenic shock with an increased demand for vasopressors and a mechanical circulatory assist device; ii) clinically significant decompensated HF states in which treatment is limited by uncertainty regarding left ventricular filling pressure, perfusion status, and vascular tone; iii) clinically significant dependence on vasopressors after the initial clinical improvement; and iv) persistence of severe symptoms despite adequate use of standard therapy. On the other hand, routine invasive hemodynamic monitoring is not recommended in patients with decompensated AHF and normal blood pressure who experience symptomatic improvement with diuretics and vasodilators.
Left heart catheterization
Left heart catheterization may be useful in patients with left ventricular dysfunction.
Coronary angiography
Invasive coronary angiography should be performed in patients who may need coronary artery reperfusion. Coronary angiography is indicated in patients who were previously diagnosed with coronary artery disease and angina, and in those with ventricular dysfunction and severe coronary ischemic findings on electrocardiography or other noninvasive examinations.

Recommendations:
1. Invasive hemodynamic monitoring using a pulmonary artery catheter should be performed to determine the treatment direction when it is difficult to adequately assess left ventricular filling pressure in patients with dyspnea or hypoperfusion (class of recommendation, I; level of evidence, C).
2. Invasive hemodynamic monitoring can be used in the following cases where symptoms persist despite standard therapy (class of recommendation, IIa; level of evidence, C):
a. Fluid status, perfusion, and systemic or pulmonary vascular resistance are uncertain
b. Systolic blood pressure remains low despite initial treatment
c. Renal function decreases after the start of treatment
d. An intravenous agent is required to raise blood pressure
e. Mechanical circulatory assist devices or heart transplantation are considered
3. Coronary angiography and intervention should be performed if ischemia is the suspected cause of HF (class of recommendation, IIa; level of evidence, C).
4. Endomyocardial biopsy may be useful in patients suspected of having a specific disease that may affect treatment decisions (class of recommendation, IIa; level of evidence, C).
5. Invasive hemodynamic monitoring is not recommended if pulmonary congestion symptoms improve after diuretic and vasodilator administration in acutely decompensated HF patients with normal blood pressure (class of recommendation, III; level of evidence, B).
6. Routine endomyocardial biopsy is not recommended in patients with HF (class of recommendation, III; level of evidence, B).
In patients without a previous diagnosis of obstructive coronary artery disease, the possibility of underlying obstructive coronary artery disease should be considered if left ventricular dysfunction is present. In these patients, coronary angiography may help confirm the presence and location of coronary artery occlusion. If obstructive coronary artery disease has not been identified as a cause of left ventricular dysfunction, coronary angiography is generally unnecessary unless there is a change in clinical status suggestive of the progression of myocardial ischemia.
Endomyocardial biopsy
Endomyocardial biopsy is useful for diagnosing specific diseases that may affect treatment decisions. Therefore, endomyocardial biopsy should be considered in patients with rapidly progressing HF or deteriorating left ventricular function despite adequate medication. Endomyocardial biopsy should also be considered in patients with acute rejection after heart transplantation, infiltrative myocardial diseases (including primary amyloidosis), or acute myocarditis (especially if giant cell myocarditis is suspected). However, due to the limited diagnostic yield and risk of procedural complications, routine endomyocardial biopsy is not recommended in HF.
Triage and hospitalization
AHF includes both acute de novo HF and acute exacerbations of compensated HF. In AHF, the mortality rate is high, and professional medical treatment such as mechanical ventilation is often required. Therefore, it is desirable that the patient be transferred to a specialized medical institution with an intensive care unit. Early diagnosis and appropriate treatment are critical for improving symptoms and stabilizing the patient's condition. 46)47)

Classification of acute heart failure

The hemodynamic status of patients with AHF can be assessed via history taking and physical examination. The clinical condition is classified based on the findings of bedside physical examination, aiming mainly to detect clinical signs/symptoms of congestion ("wet" or "dry") and/or tissue perfusion ("warm" or "cold") according to the Forrester classification (Figure 1). 48)49) Among these categories, "warm-dry" HF, which does not exhibit congestion and is characterized by good perfusion, corresponds to a stable compensated state. Patients with AHF most commonly exhibit congestion and good peripheral perfusion ("warm-wet" HF). Adequate classification of AHF at the initial evaluation is key to accurately predicting prognosis and choosing the optimal treatment direction.
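As an illustration of the two-axis scheme described above (a hypothetical sketch, not part of the guideline; the function name and boolean encoding are our own):

```python
def forrester_profile(congestion: bool, hypoperfusion: bool) -> str:
    """Map the two bedside axes to the four clinical profiles:
    congestion    -> "wet"  (else "dry")
    hypoperfusion -> "cold" (else "warm")"""
    wetness = "wet" if congestion else "dry"
    warmth = "cold" if hypoperfusion else "warm"
    return f"{warmth}-{wetness}"

# "warm-wet" is the most common AHF presentation; "warm-dry" is compensated.
print(forrester_profile(congestion=True, hypoperfusion=False))  # warm-wet
```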
Initial assessment algorithm
Continuous non-invasive monitoring of the patient's condition is required at the time of the initial evaluation and treatment. The patient's ventilatory status, peripheral perfusion status, and oxygen supply should be assessed through continuous monitoring of blood pressure, heart rate, oxygen saturation, and urine output. 50) In hemodynamically unstable conditions, medication or mechanically assisted circulation therapy should be considered, while mechanical ventilation therapy is required if the respiratory disturbance persists despite appropriate oxygen therapy. To prevent deterioration of the patient's condition during the initial treatment, it is necessary to identify and treat the triggering factors of HF, including acute coronary syndrome, hypertensive emergency, arrhythmia, acute structural cardiac damage, and acute pulmonary embolism (Figure 3).
1) Acute coronary syndrome
In patients with acute coronary syndrome, early diagnosis and treatment are important and can reduce the incidence of further HF. [51][52][53][54][55][56][57][58][59] Patients with congestive HF and acute coronary syndrome are at high risk and require concomitant active reperfusion therapy for acute coronary syndrome. [56][57][58][59]

2) Hypertensive emergencies

AHF can occur with a rapid and excessive increase in arterial blood pressure and mainly manifests as acute pulmonary edema. In the presence of hypertensive episodes, the treatment for HF and that for hypertension should be performed concomitantly. Blood pressure should be reduced aggressively, by up to 25%, during the first few hours, and the use of intravenous vasodilators with loop diuretics is recommended for this purpose. 60)
3) Arrhythmia (rapid arrhythmia or severe bradycardia/conduction disturbance)
Arrhythmia associated with hemodynamic instability is common in AHF. If serious arrhythmias are noted in AHF patients, they should be stabilized early using medication, electrical cardioversion, or a temporary pacemaker. If the patient has hemodynamic instability due to atrial or ventricular arrhythmias, these arrhythmias should be terminated by electrical cardioversion. If ventricular arrhythmias occur repeatedly, appropriate treatment is required. If arrhythmia is caused by myocardial ischemia, immediate reperfusion therapy is necessary and radiofrequency catheter ablation may be considered in some cases. 62)
4) Acute mechanical causes
Acute mechanical causes include i) interventricular septal perforation, ventricular free wall rupture, and acute mitral regurgitation resulting from ischemic heart disease, ii) complications due to thoracic trauma, iii) valvular regurgitation secondary to acute endocarditis, and iv) complications due to aortic dissection or thrombosis. Echocardiographic studies play an important role in diagnosis and treatment. In patients with acute structural damage, hemodynamic instability or cardiogenic shock occurs frequently and warrants the use of circulatory assist devices; in such patients, active surgical correction should be considered early. [63][64][65][66]
5) Acute pulmonary embolism
Reperfusion therapy using a thrombolytic agent, percutaneous catheter removal of the thrombus, or surgical embolectomy is recommended in patients with hypotension or shock. 67)68) Early detection of these triggering factors of HF is important, and treatment should be initiated as soon as possible (within the first 1-2 hours) (Figure 3).
Treatment plan
The initial treatment plan for patients with AHF is created according to the patient's condition, which is defined in terms of the degree of congestion and peripheral perfusion. The management strategy for AHF is illustrated in Figure 4.
Criteria for hospitalization
In patients with AHF, hospitalization should be considered if there are symptoms or signs of congestion. The indications for hospitalization are listed in Table 7. A few patients can be discharged from the emergency room within a few hours owing to a good response to the initial diuretic therapy. The following criteria should be assessed when considering discharge from the emergency room:
• Have the patient's symptoms improved sufficiently?
• Has the patient's heart rate stabilized?
• Is orthostatic hypotension absent on standing?
• Is urine output appropriate?
• Is there any deterioration of renal function?
• Is oxygen saturation maintained (>95%)?

If there is persistent dyspnea or hemodynamic instability, AHF patients should be observed in a space where cardiopulmonary resuscitation is possible. Patients with severe respiratory distress, hemodynamic instability, recurrent arrhythmia, or acute coronary syndrome are considered high risk; it is thus recommended that such patients be monitored and treated in the intensive care unit. Patients who meet one or more of the following criteria are also candidates for intensive care monitoring: 50)
• Decreased oxygen saturation despite oxygen supply (<90%)
• Endotracheal intubation
• Evidence of hypoperfusion
• Use of accessory respiratory muscles during respiration, respiratory rate >25 breaths/min
• Heart rate <40 beats/min or >130 beats/min
• Systolic blood pressure <90 mmHg
CONCLUSION
As the prevalence of AHF increases, it remains a major cause of hospitalization and mortality. In patients with symptoms suggesting AHF, the initial assessment should include general blood tests including biomarkers, electrocardiography, chest radiography, and echocardiography. Early identification and prompt treatment of aggravating factors are crucial in AHF management. The management strategy should be decided based on the patient's status of congestion and perfusion. | 2019-01-22T22:32:14.516Z | 2018-12-27T00:00:00.000 {
"year": 2018,
"sha1": "dd59b5669d7c66e549420bf6c726dec95ff2f6a8",
"oa_license": "CCBYNC",
"oa_url": "http://e-kcj.org/Synapse/Data/PDFData/0054KCJ/kcj-49-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd59b5669d7c66e549420bf6c726dec95ff2f6a8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54085395 | pes2o/s2orc | v3-fos-license | RXTE Observations of the Low-Mass X-Ray Binary 4U 1608-522 in Upper-Banana State
To investigate the physics of mass accretion onto weakly-magnetized neutron stars, 95 archival RXTE datasets of the atoll source 4U 1608-522, acquired over 1996-2004 in the so-called upper-banana state, were analyzed. During these observations, the object exhibited a 3-30 keV luminosity in the range of ≲10^35 to 4 × 10^37 erg s^-1, assuming a distance of 3.6 kpc. The 3-30 keV PCA spectra, produced one from each dataset, were represented successfully with a combination of a soft and a hard component, whose presence was revealed in a model-independent manner by studying spectral variations among the observations. The soft component is expressed by the so-called multi-color disk model with a temperature of ~1.8 keV, and is attributed to emission from an optically-thick standard accretion disk. The hard component is a blackbody emission with a temperature of ~2.7 keV, thought to be emitted from the neutron-star surface. As the total luminosity increases, a continuous decrease was observed in the ratio of the blackbody luminosity to that of the disk component. This property suggests that the matter flowing through the accretion disk finds it progressively more difficult to reach the neutron-star surface, presumably forming outflows driven by the increased radiation pressure. On time scales of hours to days, the overall source variability was found to be controlled by two independent variables: the mass accretion rate, and the innermost disk radius, which exhibits both real and apparent changes.
INTRODUCTION
It has long been known that low-mass X-ray binaries (LMXBs), namely close binaries involving neutron stars (NSs) without strong magnetic fields, emit thermal-type spectra when they are luminous (≳10^36 erg s^-1). In the 1970s, their spectra were often modeled empirically in terms of simple thermal Bremsstrahlung, without much physical basis. As observations made progress, it gradually became clear that this simple empirical modeling is in fact inadequate, and at least two components are needed to describe wide-band (typically 2-30 keV) LMXB spectra. Today, a canonical model (sometimes called the "Eastern model") interprets the X-ray spectra of a luminous LMXB as originating from two major emission regions: the surface of the NS (often called the "boundary layer") and an accretion disk around it (Mitsuda et al. 1984).
The Eastern model was constructed based on Tenma observations of luminous LMXBs (Mitsuda et al. 1984, 1989; Makishima et al. 1989), particularly considering intensity-correlated spectral changes. When an LMXB becomes brighter on a time scale of ~1000 s, its spectrum hardens in energies from ~2 to ~10 keV, but the spectral shape in the hardest range (above 10 keV) stays constant. A difference between a pair of spectra with different intensities generally takes the form of a blackbody (BB) spectrum, of which the intensity changes but the shape (i.e., the temperature) is approximately constant. This BB component has a temperature of ~2 keV, which is close to the Eddington temperature, 2.0 keV, for a NS of ~1.4 M⊙ mass (with M⊙ being the solar mass) and 10 km radius. In addition, the area of its emission region is inferred to be a fraction of the surface area of such a NS. Therefore, the BB component has been ascribed successfully to the NS surface emission. Indeed, every spectrum observed from these sources was reproduced by a sum of this BB component and an additional soft component. The soft component was approximated first by a softer BB of ~1 keV temperature, but later, much better by a particular superposition of BB spectra, called "disk blackbody" or "multi-color disk" (MCD) emission (Mitsuda et al. 1984; Makishima et al. 1989), as predicted by the optically-thick standard accretion disk model (Shakura & Sunyaev 1973).
While the Eastern model thus provides a promising ground for understanding the physics of mass accretion in luminous LMXBs, their complex spectral and intensity variations have often been described in a purely empirical manner using "color-color" diagrams and other similar methods. As a result, such empirical classifications as "Z sources" and "atoll sources" have been created, together with various "branches" on these empirical diagrams (e.g., Hasinger & van der Klis 1989). These primitive descriptions are waiting to be replaced by more physically meaningful ones, using the Eastern model.
After its launch in 1995, the Rossi X-Ray Timing Explorer (RXTE; Bradt et al. 1993) has observed many LMXBs a huge number of times. With the largest effective area and the highest timing resolution ever achieved, RXTE is indeed very suitable for the study of spectral variations of LMXBs. On time scales of milliseconds to seconds, Gilfanov et al. (2003) and Revnivtsev & Gilfanov (2006) carried out such studies using the RXTE data, and revealed that the variation is carried by a rather hard BB-like emission component with a temperature of ~2-3 keV. These results reinforce the validity of the Eastern model, and encourage us to attempt its extensive application to the RXTE data of LMXBs. We thus hope to understand the spectral variations of LMXBs as a whole using a physical model (based on the Eastern description).
As the first of a series of our planned publications, the present paper deals with the so-called upper-banana state of 4U 1608-522, which is one of the atoll sources most frequently observed by RXTE. Section 2 describes the 414 RXTE data sets of this LMXB, of which 95 are selected for use in the present paper. In § 3.1, we select four representative spectra out of the 95 data sets, and analyze their intensity-correlated spectral changes to reconstruct the Eastern model. The derived model parameter values are examined in § 3.2, followed by § 3.3, where the model is applied to all 95 energy spectra and relations among the obtained physical parameters are studied. As a reconfirmation of our approach, we calculate in § 3.4 the effective degrees of freedom involved in the variability, of which the results are physically interpreted in § 3.5. As discussed in § 4, the results give full support to the Eastern model, and lead to a finding of mass outflows as the source gets luminous.
OBSERVATIONS AND DATA REDUCTION
Our target object 4U 1608-522 is an LMXB with a recurrent transient characteristic, exhibiting occasional outbursts. As the X-ray intensity evolves through these outbursts, the source is known to take on all three spectral states of atoll sources (Hasinger & van der Klis 1989); namely, the island, lower-banana, and upper-banana states, in increasing order of source intensity (Muno et al. 2002; Gierliński & Done 2002a). In the island and lower-banana states, a hard tail appears in the spectra, presumably due to Comptonization by hot electrons, of which the temperature is of the order of several tens of keV (Gierliński & Done 2002b). In the present paper, we analyze only the data in the upper-banana state, where the spectrum takes the thermal shape characteristic of luminous LMXBs. The original paper by Mitsuda et al. (1984) also utilized 4U 1608-522, when the source was presumably in the upper-banana state. The source distance is assumed to be 3.6 kpc (Nakamura et al. 1989).
Although RXTE has observed many outbursts from 4U 1608-522, we limit the present study to those data sets which were obtained from the launch of RXTE (December 1995) to the end of August 2004, when the brightest and the second brightest outbursts were recorded. During this period, 414 pointing observations of 4U 1608-522 were conducted with the Proportional Counter Array (PCA; Jahoda et al. 1996). Although some of the five Proportional Counter Units (PCUs) of the PCA were turned off in some observations, PCU2 alone was always operational. In order to avoid systematic differences among different PCUs, we utilized the PCU2 data only; because the source in the upper-banana state is very bright (typically ~1000 counts s^-1 per PCU), statistical errors are usually negligible. We employed "Standard-2" data, of which the time resolution is 16 s, since they are the most accurately calibrated and suitable for spectral analysis. The data were filtered in the standard manner for bright sources, and were corrected for dead time in the standard way. The PCA background was estimated for each dataset with PCABACKEST version 3.0. To remove Type I bursts, we excluded those time regions where the background-subtracted source count rates (per 16 s) fell outside 50%-200% of the average over each observation.
After subtraction of the modeled background, we calculated color-color diagrams (CCDs) and hardness-intensity diagrams (HIDs) of 4U 1608-522 using data points averaged over 128 s. In this paper, the soft and hard colors refer to the 6-10 keV vs. 2.5-6 keV source count ratios and those of 10-30 keV vs. 6-10 keV, respectively. The intensity used for the HIDs refers to the 2.5-30 keV energy band in units of the Crab Nebula count rate. As described in Jahoda et al. (2006), the overall PCA operation history is divided into 5 epochs according to differences in the high-voltage settings. As a result, CCDs and HIDs need to be produced separately for different epochs. Figure 1 representatively shows the obtained CCD and HIDs of the 72 observations in the entire epoch 3. Here, we excluded faint data points of which the intensities are less than 10 counts s^-1 PCU2^-1 (≲3 × 10^35 erg s^-1 for 3.6 kpc).
On the CCD of Figure 1, the count rate increases from the top through the lower left to the lower right, with each part corresponding to the island, lower-banana, and upper-banana states, respectively. On the HIDs, the data points form many vertical line-like features, or stripes. The source traces these stripes on a time scale of days, and moves horizontally on much longer time scales. The former causes the source motion within a single state on the CCD, while the latter leads to transitions among the three states. The hard color is higher in the island state, and decreases in the banana states. The present paper focuses on the upper-banana state, which is defined here as those datasets with the 3-30 keV source intensity exceeding 0.4 Crab. We applied the same criteria to the data from the other four epochs. When summed over the 5 epochs, the selected datasets reach 95 in number, with their 3-30 keV luminosities spanning a range of (1-4) × 10^37 erg s^-1. Some of these datasets were already analyzed with the Eastern model by Gierliński & Done (2002b), Gilfanov et al. (2003), and Lin et al. (2007).
For each dataset in the upper-banana state, we accumulated the Standard-2 data into a single 3-30 keV spectrum, with a typical exposure of several ks. Then, the modeled background mentioned above was subtracted. However, this method is known to cause a slight (a few percent) over-subtraction of the background. This effect was compensated for by rescaling the background spectrum by up to 10%, so that its count rate matches that of the on-source data in the hardest 60-100 keV energy range, where the signal count is considered negligible. Since the source counts were detected significantly up to 30 keV, we hereafter analyze the total of 95 energy spectra in the energy range of 3-30 keV. The PCU2 response matrix was created for each observation using PCARSP version 10.1. In order to take calibration uncertainties into account, we added 1% and 7% systematic errors to each energy bin of the source and background spectra, respectively.
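A minimal sketch of the two cleaning steps described above (the 50%-200% burst filter and the hard-band background rescaling); the array layout and function names are our own assumptions, and the ±10% clip reflects our reading of the rescaling range quoted in the text:

```python
import numpy as np

def reject_bursts(rate_16s):
    """Keep only 16-s bins whose background-subtracted rate lies within
    50%-200% of the per-observation average (Type I burst rejection)."""
    rate_16s = np.asarray(rate_16s, dtype=float)
    mean = rate_16s.mean()
    keep = (rate_16s >= 0.5 * mean) & (rate_16s <= 2.0 * mean)
    return rate_16s[keep]

def rescale_background(src_counts, bkg_counts, hard_band):
    """Rescale the modeled background so its 60-100 keV rate matches the
    on-source rate there, where the source contribution is negligible."""
    factor = src_counts[hard_band].sum() / bkg_counts[hard_band].sum()
    return bkg_counts * np.clip(factor, 0.90, 1.10)
```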
Comparison of Energy Spectra
In comparison with the previous work by Mitsuda et al. (1984), we first examine relatively slow (≳10^3 s) spectral variations along the upper-banana state of 4U 1608-522. For this purpose, we selected four representative spectra, hereafter Spec A through Spec D, obtained at the most luminous (several times 10^37 erg s^-1) end (i.e., the right side) of the HIDs. Three of them (Spec A, Spec B, and Spec C) come from the same vertical line, while the other is located on a different line. Their observation IDs, average intensities, and soft/hard colors are summarized in Table 1, while their locations on the hard-HID are indicated in Figure 1. These spectra and their ratios to their average spectrum are shown in Figure 2.
In Figure 2, the brightest (Spec A) and the second brightest (Spec B) spectra of the four are selected from the same vertical line in the HID. By comparing them, it is clear that the intensity in the hard energy band changes significantly, while that in the soft band is kept constant. Moreover, in energies above ~15 keV, their ratio saturates as a function of energy. As originally proposed by Mitsuda et al. (1984), these results can be explained when the spectra consist of a variable hard component that dominates in the ≳15 keV energy band and a stable soft component, with the normalization (but not the shape) of the former changing between the two spectra. Indeed, the difference spectrum between them, shown in Figure 3 as Spec 1, is represented successfully by a BB model of which the temperature is kT_BB ~ 2.5 keV. The obtained best-fit parameters are summarized in Table 2. Although the derived absorption column density, ~3 × 10^23 cm^-2, is significantly higher than that reported previously (0.8 × 10^22 cm^-2; Grindlay & Liller 1978; Rutledge et al. 1999), the difference may be attributed to slight changes in the assumed stable soft component (as assessed later in this subsection), and/or those in kT_BB. These results reconfirm the achievements with Tenma (Mitsuda et al. 1984) and Ginga (Makishima et al. 1989).
From the above results, we consider that the harder part of the spectrum is carried by a "hard component" which is approximated by a kT_BB ~ 2.5 keV BB. Its variation (in normalization rather than in kT_BB; Makishima et al. 1989) causes significant changes in the hard color, but does not affect the intensity very much, because the total source counts are dominated by softer photons. This explains the formation of the vertical stripes in the hard HID of Figure 1. This hard component is most naturally attributed to optically-thick emission from the NS surface, because the measured temperature agrees with the local Eddington temperature of a 1.4 M⊙ NS, ~2.0 keV. Contrary to the above case, the other two spectra, the faintest spectrum (Spec D) and the second faintest one (Spec C), which lie on a pair of stripes adjacent to each other, differ mainly in the soft energy band, with the hard energy end kept almost constant. This suggests that the source variation in this case is carried by a soft spectral component, which is presumably identified with the stable component suggested by Spec A and Spec B. To constrain its spectral shape, we derived the difference spectrum (denoted Spec 2) between Spec C and Spec D, and show the results in Figure 4. Clearly, this difference spectrum is much softer than the previous one (Spec 1), in agreement with the inference derived from the spectral ratios. This Spec 2 can be reproduced by either an MCD model (kT_in ~ 1.7 keV) or a BB (kT_BB ~ 1.3 keV), but as shown in Table 2, the former is more successful and gives a more reasonable absorption. We hence consider that the "soft component" is approximated by an MCD model with kT_in ~ 1.7 keV.
Since the above result supports the Eastern model, the four spectra must be reproduced by a linear combination (though with different weights) of the soft component, expressed by a kT_in ~ 1.7 keV MCD, and the hard component, identifiable with a kT_BB ~ 2.5 keV BB. Then, adding a narrow Gaussian component to represent an Fe-K emission line at 6-7 keV, we fitted the four spectra with this composite model, denoted the MCD+BB+Gau model. The results are shown in Figure 5 and Table 3; there, r_in is the innermost radius of the accretion disk for an assumed inclination angle i = 0°, and r_BB is that of the BB model assuming a spherical emission region. Thus, all four spectra have been fitted successfully by the model, and the obtained values of kT_in ~ 1.8 keV and kT_BB ~ 2.7 keV indeed agree with those obtained from the two difference spectra. These two temperatures are also consistent with those found in previous works that employed the Eastern model (Mitsuda et al. 1984; Makishima et al. 1989).
Can we explain these results with alternative modeling? For example, a model proposed by White et al. (1988), often called the "Western" model, assumes that Thomson opacity dominates free-free opacity in the accretion disk and that the disk emission is significantly modified by electron scattering, while the emission from the NS surface is hardly modified and is observed as a simple BB. Then, an unsaturated Comptonized emission is expected to dominate all over the energy band, while a soft BB model is added in the intermediate range. This model can actually explain the shape of Spec 2 in terms of variations of the soft BB component. However, Spec 1 is very difficult to explain with this model, because its shape is different from that of either component of the model; we would need extreme fine-tuning among the parameters of the BB and Comptonized components.
Returning to the Eastern model, a simple simulation was performed to examine the increased-absorption issue in Figure 3. Employing the best-fit model to Spec A, we created a fake spectrum to be called Spec A'. Yet another spectrum, named Spec B', was created, in which the BB normalization was reduced to 67% of that of Spec A', and the MCD normalization was set 5% higher than that in Spec A'. These are meant to emulate Spec B, within the allowed fit errors in Table 2. Then, the difference between Spec A' and Spec B' was indeed fitted successfully by a single BB model with nearly the same BB temperature as assumed in the input, but with the absorption increased to ~3 × 10^23 cm^-2.
Absolute Values of the Physical Parameters
The Eastern model allows us to assign clear physical meanings to the absolute values of the physical parameters obtained with the MCD and BB components. For that purpose, however, we need to convert the measured color temperature kT_X (X = "in" or "BB") to the effective temperature kT_X^eff, using the so-called hardening factor κ, as

kT_X^eff = κ^-1 kT_X . (1)

Then, the true radius r_X^eff is estimated from the measured value r_X as

r_X^eff = κ^2 r_X . (2)

The value of κ is numerically estimated as 1.4-1.6 for Type I bursts (Ebisuzaki et al. 1984), and 1.7-2.0 for accretion disks (Shimura & Takahara 1995).
If we adopt κ = 1.7, the observed soft-component parameters of kT_in ~ 1.8 keV and r_in ~ 4 km yield kT_in^eff ~ 1.1 keV and r_in^eff ~ 12 km. Thus, the estimated true radius becomes larger than the NS radius of 10 km. Moreover, the value of r_in^eff is close to the radius of the last stable orbit in terms of general relativity, 3R_s = 12.4 km, where R_s ≡ 2GM/c^2 is the Schwarzschild radius, G is the gravitational constant, M is the NS mass, and c is the speed of light. As to the BB component, kT_BB ~ 2.7 keV and r_BB ~ 1.2 km yield kT_BB^eff ~ 1.8 keV and r_BB^eff ~ 2.7 km, assuming κ ~ 1.5 (London et al. 1986; Ebisuzaki 1987). The estimated kT_BB^eff is close to the local Eddington temperature (2.0 keV) of a 1.4 M⊙ NS, suggesting that the BB emission arises from a region on the NS surface. In addition, r_BB^eff is smaller than 10 km, when assuming isotropic emission from a spherical source. Therefore, as previously suggested by Mitsuda et al. (1984), the BB component can be regarded as being emitted from an equatorial zone of the NS, where the accretion disk contacts the surface.
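As a quick numerical check of equations (1) and (2), using the values quoted above (the script itself is our own illustration, not from the paper):

```python
# Color -> effective conversions of equations (1)-(2).
kappa_disk, kT_in, r_in = 1.7, 1.8, 4.0   # keV, km (MCD color values)
kappa_bb, kT_bb, r_bb = 1.5, 2.7, 1.2     # keV, km (BB color values)

print(kT_in / kappa_disk, kappa_disk**2 * r_in)  # ~1.06 keV, ~11.6 km
print(kT_bb / kappa_bb, kappa_bb**2 * r_bb)      # 1.8 keV, 2.7 km
```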
Relations among the Model Parameters
From the above spectral analysis, it has been confirmed that the Eastern (MCD+BB+Gau) model successfully reproduces the selected four spectra in the upper-banana state, and yields physically reasonable interpretations. Given these, we applied the MCD+BB+Gau model to the 95 spectra in the upper-banana state, and obtained χ^2/d.o.f. < 1.4 for all of them. Therefore, we regard the Eastern model as applicable to all the present data sets.
Figure 6 summarizes the relations among the unabsorbed disk bolometric luminosity L_disk, the unabsorbed BB bolometric luminosity for isotropic emission L_BB, and their sum L_tot ≡ L_disk + L_BB, as well as the temperatures and radii. Here, L_disk is calculated assuming a face-on disk (i.e., an inclination angle i = 0°) as

L_disk = 2 ∫_{r_in}^{∞} 2πr σT(r)^4 dr , (3)

where the first factor of 2 means emission from the two sides of the disk, σ is the Stefan-Boltzmann constant, and T(r) is the disk temperature at the radius r.
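For reference, inserting the standard-disk temperature profile T(r) = T_in (r/r_in)^(-3/4) into equation (3) gives the familiar closed form (a textbook evaluation, added here as a consistency check rather than taken from the paper):

```latex
L_{\rm disk}
 = 2\int_{r_{\rm in}}^{\infty} 2\pi r\,\sigma T(r)^{4}\,dr
 = 4\pi\sigma T_{\rm in}^{4}\, r_{\rm in}^{3}\int_{r_{\rm in}}^{\infty} r^{-2}\,dr
 = 4\pi r_{\rm in}^{2}\,\sigma T_{\rm in}^{4} .
```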
In Figure 6(a), both L_disk and L_BB are seen to increase as L_tot increases from 1 × 10^37 to 4 × 10^37 erg s^-1. However, a closer inspection reveals that L_disk varies more steeply than L_BB. Actually, the data behavior in Figure 6(a) can be approximated by power-law dependences of the two luminosities on L_tot, with L_disk rising more steeply than L_BB (equation 4). As a result, the ratio between L_BB and L_disk decreases from 0.6 to 0.4 as L_tot increases by a factor of 4, in agreement with previous reports (Gilfanov et al. 2003; Revnivtsev & Gilfanov 2006). Similarly, Figure 6(b) shows the luminosity dependences of the temperature and radius parameters, derived in the same way as in § 3.1, and presented without applying any of the correction factors mentioned in § 3.2. The relations are approximated as

kT_in ∝ L_tot^0.19, r_in ∝ L_tot^0.12, kT_BB ∝ L_tot^0.10, r_BB ∝ L_tot^0.11 . (5)

Instead of L_tot, we may utilize the mass accretion rate Ṁ, which can be estimated in the following way. Generally, the accreting matter releases energy at a rate of GMṀ/r_in as it flows through the disk down to the innermost radius r_in. According to the picture of the standard accretion disk, a half of this luminosity is radiated away in the accretion disk as L_disk, and the other half is stored in the Keplerian kinetic energy. Therefore, the mass accretion rate is estimated as

Ṁ = 2 r_in L_disk / (GM) . (6)

This relation remains valid even if r_in varies, as long as the disk stays in the standard state.
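To make equation (6) concrete, a sample evaluation in cgs units (the numbers below are representative round values, not values quoted in the paper):

```python
G, M_SUN = 6.674e-8, 1.989e33        # cgs units
M_NS = 1.4 * M_SUN
r_in = 12e5                          # 12 km in cm
L_disk = 1.0e37                      # erg/s (representative)

Mdot = 2.0 * r_in * L_disk / (G * M_NS)  # from L_disk = G M Mdot / (2 r_in)
print(f"{Mdot:.2e} g/s")                 # ~1.3e17 g/s (~2e-9 M_sun/yr)
```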
Figure 7 shows the same physical parameters as presented in Figure 6, but this time as a function of Ṁ estimated via equation (6), where the unit of Ṁ is arbitrary.
In the case of a standard accretion disk, L_disk and kT_in are expected to be proportional to Ṁ and Ṁ^0.25, respectively (e.g., Makishima et al. 1986; Ebisawa et al. 1993; Tanaka & Shibazaki 1996), as long as r_in is kept constant. Indeed, in the range of Ṁ ≲ 300 (corresponding to L_tot ≲ 1.5 × 10^37 erg s^-1), Figure 7 reveals tight scalings as

L_disk ∝ Ṁ^(0.92±0.07), kT_in ∝ Ṁ^(0.21±0.05), r_in ∝ Ṁ^(0.05±0.07), (7)

which agree, within errors, with the predictions for standard disks. This result is consistent with that reported by Lin et al. (2007).
As the accretion rate Ṁ increases beyond ~300 in Figure 7, the disk parameters start deviating from the scalings of equation (7) and follow new relations (equation 8). Thus, the inner disk radius r_in apparently starts "retreating", with increasing fluctuations, but the increase in r_in is compensated by the flattening of kT_in. As a result, L_disk starts to saturate weakly, instead of increasing in proportion to Ṁ.
Turning to the BB parameters, we find kT_BB rather stable, with a weak positive dependence on Ṁ. On the contrary, r_BB exhibits a large scatter, which causes a similar scatter in L_BB. The Ṁ-dependence of these BB parameters can be approximated by weak power laws (equation 9). This BB behavior is the same as previously reported (Mitsuda et al. 1984; Gilfanov et al. 2003; Revnivtsev & Gilfanov 2006). We also find that kT_BB is always higher than kT_in, and r_BB is smaller than r_in. These relations are consistent with the basic assumptions of the Eastern model: the BB emission comes from the NS surface, while the MCD component comes from the surrounding accretion disk.
Effective Degrees of Freedom Causing Spectral Variability
We have so far described the energy spectra and their variations in terms of the four quantities kT_in, r_in, kT_BB, and r_BB. However, it is not yet clear how many of them can be regarded as independent variables (with the rest depending on them). In other words, we need to know how many degrees of freedom are involved in the observed spectral variations in the upper-banana state. This can be done by applying "fractal dimension analysis" (e.g., Matsumoto et al. 2005) to the four model parameters.
For this purpose, let us define a 4-dimensional vector space spanned by the four variables, with vectors V(i) ≡ {Y_1(i), Y_2(i), Y_3(i), Y_4(i)}, where (Y_1, Y_2, Y_3, Y_4) = (kT_in, r_in, kT_BB, r_BB) and i = 1, 2, ..., 95 denotes the data number. Let us also define the average vector V̄ ≡ {Ȳ_1, Ȳ_2, Ȳ_3, Ȳ_4}, and the zero-mean vectors v(i) = V(i) − V̄ ≡ {y_1(i), y_2(i), y_3(i), y_4(i)}, with y_k(i) ≡ Y_k(i) − Ȳ_k. Then, a "distance" D(i) of the vector v(i), measured from the origin, is calculated as

D(i) = [ Σ_{k=1}^{4} {y_k(i)/σ_k}^2 ]^{1/2} , (10)

where σ_k is a normalization constant for the k-th component. Finally, we calculate the number N(<D) of those data points of which the distance D(i) is less than a given value D.
Figure 8 shows the normalized data point number N(<D)/95 from equation (10) as a function of D, over the range of N/95 = 0.2-0.8 (or N = 23-76). If the variations are controlled by n (1 ≤ n ≤ 4) independent parameters, we expect the vectors v(1), v(2), ..., v(95) to form an n-dimensional subspace in the vector space, so that N(<D) should increase as ∝ D^n. Indeed, Figure 8 reveals a tight power-law relation (equation 11). Since this result is consistent with n = 2, we infer that the spectral behavior in the upper-banana state of 4U 1608-522 has effectively two degrees of freedom.
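A compact sketch of this counting procedure (our own reconstruction; in particular, taking σ_k as the per-parameter standard deviation is an assumption, made here because the four quantities carry different units):

```python
import numpy as np

def effective_dimension(params):
    """params: array of shape (95, 4) holding (kT_in, r_in, kT_BB, r_BB)
    per observation.  Returns the power-law index n of N(<D) vs D,
    fitted over N/95 = 0.2-0.8 as in the text."""
    params = np.asarray(params, dtype=float)
    y = params - params.mean(axis=0)             # zero-mean vectors v(i)
    y /= y.std(axis=0)                           # assumed normalization
    D = np.sort(np.sqrt((y ** 2).sum(axis=1)))   # distances from origin
    N = np.arange(1, len(D) + 1)                 # N(<D) at each sorted D
    sel = (N >= 0.2 * len(D)) & (N <= 0.8 * len(D))
    n, _ = np.polyfit(np.log(D[sel]), np.log(N[sel]), 1)
    return n                                     # ~2 for the present data
```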
Fluctuations Independent of the Total Luminosity
Of the two independent variables describing the spectral variability (§ 3.4), one is obviously the total luminosity L_tot, or nearly equivalently, the mass accretion rate Ṁ. Actually, in Figure 6 (or Figure 7), the four quantities are all observed to depend primarily on L_tot (or Ṁ). However, they show significant scatter around the L_tot-dependent correlation, so that none of them can be regarded as a single-valued function of L_tot (or Ṁ). This is consistent with the presence of the second degree of freedom revealed in § 3.4. In order to identify what is causing this extra freedom, we remove the L_tot-dependence from the behavior of the four model parameters, and study the residual variations.
To eliminate the major L_tot-dependence from the physical parameters, we calculated their de-trended counterparts Z'(i) (Z = L_disk, L_BB, kT_in, r_in, kT_BB, and r_BB), employing equations (4) and (5); for example, kT'_BB(i) ∝ kT_BB(i)/L_tot(i)^0.10 and r'_BB(i) ∝ r_BB(i)/L_tot(i)^0.11 (equations 12-14). (As the de-trending parameter, we chose L_tot rather than Ṁ, because the latter is less directly estimated from the data.) Absolute values of the de-trended parameters are not meaningful, and we hereafter consider their relative values only. Since r'_in shows the largest variation among them, we plot in Figure 9 the de-trended parameters as a function of r'_in. Table 4 also summarizes those of Spec A to D in § 3.1. From the behavior of the de-trended luminosities, the data points can be divided into two branches: one has almost constant luminosities as r'_in varies, while the other is characterized by significant r'_in-dependent variations in both L'_disk and L'_BB. Hereafter, the two branches are denoted the "constant-luminosity branch (CLB)" and the "variable-luminosity branch (VLB)", respectively. The two branches may connect at r'_in ~ 0.85, rather than behaving independently of each other.
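A minimal de-trending sketch, assuming the equation (5) exponents listed above (the luminosities would be treated the same way using the equation (4) exponents; function and variable names are our own):

```python
import numpy as np

# L_tot trend exponents taken from equation (5).
EXPONENTS = {"kT_in": 0.19, "r_in": 0.12, "kT_BB": 0.10, "r_BB": 0.11}

def detrend(values, L_tot, exponent):
    """Compute Z'(i) = Z(i) / L_tot(i)**p, normalized to a mean of unity,
    since only relative values of the de-trended parameters matter."""
    z = np.asarray(values, float) / np.asarray(L_tot, float) ** exponent
    return z / z.mean()
```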
In the CLB, the de-trended BB luminosity stays nearly constant at L'_BB ~ 0.8, and so do the two BB parameters, kT'_BB ~ 1.6 and r'_BB ~ 0.9. Similarly, the de-trended disk luminosity remains at L'_disk ~ 0.9. As a result, the CLB data points are distributed in Figure 6(a) along the major correlation trends. Nevertheless, r'_in varies by ~±20%, accompanied by a clear decrease in the de-trended disk temperature as kT'_in ∝ r'_in^-0.5. (This scaling is a natural consequence of the constant L'_disk and the relation L_disk ∝ r_in^2 T_in^4.) There are two possibilities to explain this r'_in behavior. One is real changes in r_in, and the other is changes in the color hardening factor κ of the disk. Since L'_disk is kept constant in the CLB, the latter case may be more likely. Specifically, the observed behavior will be explained if some unspecified mechanism (possibly related to changes in the vertical structure of the disk) caused κ to increase by ~10% while somehow keeping the effective temperature of the disk and its true radius constant; the observed color temperature kT'_in would then increase by ~10% according to equation (1), and the apparent disk radius r'_in would decrease by ~20% following equation (2), thus reproducing the observation.
In the VLB, r'_in varies over a relatively small range (~±10%), whereas the two luminosities both vary significantly in an anti-correlated way: L'_disk decreases from 1.9 to 1.5 as r'_in increases, while L'_BB rises in a complementary manner. (This behavior cannot be a fitting artifact caused by strong couplings between the two spectral components, since the nominal fitting errors are not particularly larger in the VLB, and are generally smaller than or comparable to the size of the plotting symbols in Figure 6 and Figure 7.) Since kT'_BB is almost constant (or only slightly decreasing) as in the CLB, the increase of L'_BB is mainly caused by that of r'_BB from 0.9 to 1.2, scaling as ∝ r'_in^1. We again observe kT'_in to decrease as in the CLB, but with a different r'_in-dependence, which is approximated as ∝ r'_in^-0.75. This is exactly the behavior of a standard accretion disk (Shakura & Sunyaev 1973) when its radius varies under a constant Ṁ, because we then expect Ṁ ∝ r_in^3 T_in^4 (equation 6). This also explains the observed disk luminosity behavior, L'_disk ∝ r'_in^-1. Obviously, the increase in the disk radius under a constant Ṁ predicts that a larger fraction of the overall gravitational energy release should be emitted by the BB component; in fact, along the full VLB in Figure 7(b), L'_disk is seen to decrease by ~0.3 in the employed arbitrary unit, while L'_BB increases by ~0.3, thus conserving the total luminosity. In other words, the VLB is characterized by actual changes in the innermost disk radius.
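As a consistency check (our own rearrangement of the relations quoted above, not an additional result of the paper), the VLB scalings follow from holding Ṁ fixed in a standard disk:

```latex
\dot{M} \propto r_{\rm in}^{3} T_{\rm in}^{4} = \mathrm{const}
\;\Rightarrow\; T_{\rm in} \propto r_{\rm in}^{-3/4}
\;\Rightarrow\; L_{\rm disk} \propto r_{\rm in}^{2} T_{\rm in}^{4} \propto r_{\rm in}^{-1} .
```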
Based on these correlation analyses among the de-trended spectral parameters, we conclude that the second independent variable (besides Ṁ; § 3.4) causing the variability in the upper-banana state can be ascribed to sporadic changes in r'_in (or in r_in). However, this r'_in variability is likely to be further subdivided into two different mechanisms operating under a constant Ṁ: one is an apparent effect due to variations in κ of the disk, and the other is real changes in the radius.
As shown in Figure 9 and Table 4, the L'_disk parameters of Spec B, C, and D lie on the CLB line, while that of Spec A lies on the VLB one. Therefore, we consider that the hard/soft-band differences between Spec A/B and Spec C/D observed in Figure 2 are mainly attributed to the VLB and CLB variations, respectively. On the other hand, the de-trended parameters of Spec B and Spec C are similar, and the variability between them observed over the entire 3-30 keV band is thought to be dominated by the Ṁ change.
The Eastern Model
Through the analysis of the RXTE spectra of the atoll source 4U 1608-522 in its upper-banana state, we have confirmed that both time-averaged and difference energy spectra can be reproduced successfully by the Eastern (MCD+BB+Gau) model. Considering the hardening factor, the effective temperature and radius of the MCD component were obtained as kT_in^eff ~ 1.1 keV and r_in^eff ~ 12 km, respectively. The radius is larger than the representative NS radius, 10 km, and is consistent with the last stable orbit, 3R_s = 12.4 km, allowed by general relativity. This result agrees with the picture of a standard accretion disk formed around a NS. The BB parameters were found to be kT_BB^eff ~ 1.8 keV and r_BB^eff ~ 2.7 km, assuming isotropic emission from a spherical source. The temperature is close to the local Eddington temperature (2.0 keV) at the NS surface, and the radius is smaller than 10 km. Thus, the BB component can be regarded as being emitted from an equatorial zone of the NS, as previously suggested by Mitsuda et al. (1984).
Saturation of the Blackbody Luminosity
According to the picture of standard accretion disks, a half of the released gravitational energy is radiated from the accretion disk (L_disk), and the other half is stored in the Keplerian kinetic energy and then emitted (L_BB) when the matter settles onto the NS surface. Then, we expect L_BB to be proportional to L_disk, and hence the L_BB/L_disk ratio to be constant. Indeed, L_disk was found to increase almost linearly with the total luminosity L_tot (Figure 6a). However, the same figure reveals that L_BB increases less steeply, making the L_BB/L_disk ratio decrease from 0.6 to 0.4 as L_tot increases from 1 × 10^37 to 4 × 10^37 erg s^-1.
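The constant-ratio expectation can be summarized as follows (our own restatement of the energy budget described above, with R_NS the NS radius and r_in ≈ R_NS assumed):

```latex
L_{\rm disk} = \frac{GM\dot{M}}{2\,r_{\rm in}}, \qquad
L_{\rm BB} \simeq \frac{GM\dot{M}}{2\,R_{\rm NS}}
\quad\Longrightarrow\quad
\frac{L_{\rm BB}}{L_{\rm disk}} \simeq \frac{r_{\rm in}}{R_{\rm NS}} \simeq 1 .
```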
We may think of two possibilities to explain the observed relative decrease of L_BB. One is that the accreting matter reaches the NS surface and emits L_BB equivalent to the Keplerian energy (i.e., ∝ L_disk), but we cannot observe all of this emission because its spectrum shifts outside the PCA energy band (3-30 keV), or because the emission is directed away from our line of sight. However, we have not observed any hint of extra emission at the softest or hardest energy ends of the PCA spectra. In addition, other sources, which are thought to have different inclination angles, are also reported to show the same behavior of a decreasing L_BB/L_disk ratio (Revnivtsev & Gilfanov 2006). Therefore, this possibility is unlikely.
The other possibility is that the accreting matter flows all the way through the disk down to its inner radius, which is close to the NS surface, but a progressively larger fraction of it fails to accrete onto the NS surface. If the accretion-failed matter stayed around the NS away from the surface (e.g., in an expanded boundary layer), it would accumulate to become optically thick, eventually producing detectable emission. This would lead to a decrease in kT_BB, because the emission radius should increase. As shown in Figure 6 and equation (5), however, this is not the case; the kT_BB value stays almost constant (or rather increases) as the total luminosity increases. Then, a fraction of the matter must be outflowing and escaping from the system without releasing its kinetic energy as emission. Although the observed L_tot ~ 4 × 10^37 erg s^-1 is only ~20% of the Eddington luminosity for a 1.4 M⊙ NS, the observed value of kT_BB^eff ~ 1.8 keV is already close to the spherical Eddington temperature at 10 km (2.0 keV). Considering further the non-spherical geometry of the disk, and the disk radius which is possibly larger than 10 km, the putative outflow is likely to be driven by increased radiation pressure in the innermost disk region. The presence of such outflows is indeed suggested by detections of broad absorption features from LMXBs (Schulz & Brandt 2002; Ueda et al. 2004), black hole binaries (Kotani et al. 2000; Yamaoka et al. 2001; Kubota et al. 2007), and active galactic nuclei (Pounds et al. 2003a, b).
Assuming that the decrease in L_BB/L_disk from 0.6 to 0.4 is due to outflows, about (0.6 − 0.4)/0.6 ~ 30% of the accreting matter is estimated to escape from the system at a typical luminosity of ~4 × 10^37 erg s^-1. Toward lower luminosities, the ratio appears to converge to ~0.6, rather than the 1.0 that would be theoretically expected. This may be attributed to effects due, e.g., to the system inclination, general relativity, and NS rotation (e.g., Sunyaev & Shakura 1986). Detailed theoretical discussion on this point is beyond the scope of the present paper.
As indicated by equation (8), the innermost disk radius r_in is observed to increase gradually when the mass accretion rate increases beyond ~300 in Figure 7, or equivalently, when the total luminosity exceeds ~1.5 × 10^37 erg s^-1 (corresponding to 7% of the Eddington luminosity). This is unlikely to be an apparent effect caused, e.g., by changes in the hardening factor, since this quantity would increase as the luminosity increases (Gierliński & Done 2003; Davis et al. 2005), and would hence make r_in smaller. Therefore, the increase in r_in toward higher accretion rates is considered to represent a real retreat of the innermost disk radius. This interpretation may agree with the results by Popham & Sunyaev (2001), who numerically calculated the physical conditions of the boundary layer and showed that the disk inner edge retreats back under the increased radiation pressure as the luminosity increases.

Table 1 notes:
a) The soft color refers to the source count ratio of 6-10 keV / 2.5-6 keV.
b) The same as the soft color, but for the ratio of 10-30 keV / 6-10 keV counts.
Table 2 notes:
a) All errors are single-parameter 90% confidence limits.
b) The absorption column density is in units of 10^22 cm^-2.
c) Temperatures are all in units of keV.
Table 3 notes:
a) All errors are single-parameter 90% confidence limits. The absorption column density is fixed at 0.8 × 10^22 cm^-2.
b) The width of the Gaussian component is not well determined, and is constrained to be ≤ 0.2 keV.
c) Temperatures are all in units of keV.
d) The innermost radius, in units of (cos i)^-1/2 km, where i is the inclination angle.
e) The radius of the BB emission in units of km, assuming isotropic emission from a spherical region.
f) The center energy of the Gaussian model, which is limited to the range 6.4-6.9 keV.
g) The normalization of the Gaussian model in units of 10^-2 photons cm^-2 s^-1.
Table 4: De-trended parameters of Spec A to D after eliminating the L_tot dependence, as in Figure 9.
Notes:
a) See equation (14) for the L_tot-dependence eliminated here.
b) Absolute values of the de-trended parameters are not meaningful.
Fig. 1.- The CCD (panel a), soft-HID (panel b), and hard-HID (panel c) of 4U 1608-522 in epoch 3, with a time bin of 128 s. The utilized energy bands are described in the text. Four arrows indicate the datasets analyzed in § 3.1 (Table 1).
Fig. 2.- The four energy spectra (panel a) shown in Table 1 and Figure 1(c), and their intensity ratios to their average (panel b). The energy spectra are shown without removing the detector response.
Fig. 3.- The difference spectrum (Spec 1) between the brightest (Spec A) and the second brightest (Spec B) spectra in Figure 2. It is successfully represented by an absorbed BB model with a temperature kT_BB ~ 2.5 keV (lower histogram). The fit residuals are shown in the bottom panels. If the absorption is fixed at 0.8 × 10^22 cm^-2, the model over-predicts the data below 7 keV (upper histogram).
Fig. 4. - The same as Figure 3, but for the difference (Spec 2) between the faintest spectrum (Spec D) and the second faintest one (Spec C) in Figure 2. It is reproduced by either a BB model (panel a, kT_BB ∼ 1.3 keV) or an MCD model (panel b, kT_in ∼ 1.7 keV), with the absorption column density left free.
Fig. 6. - (a) The MCD luminosity (L_disk) and the BB luminosity (L_BB) as a function of L_tot ≡ L_disk + L_BB in the upper-banana state. Each data point represents one of the 95 observations. The three dashed lines show 0.7L_tot, 0.5L_tot, and 0.3L_tot, from top to bottom. (b) The same as (a), but for the temperatures and radii of the MCD and BB models.
Fig. 7. - The same as Figure 6, but shown as a function of the estimated mass accretion rate Ṁ. The three dashed lines illustrate the dependences L_disk ∝ Ṁ^1, r_in ∝ Ṁ^0, and kT_in ∝ Ṁ^0.25.
Fig. 8. - Results of the fractal dimension analysis of the four model parameters (kT_in, r_in, kT_BB, and r_BB) over the 95 data sets. See text for details.
Fig. 9. - The same as Figure 6, but for the physical parameters from which the L_tot dependence was removed via equations (12)-(14). They are plotted as a function of the de-trended innermost radius of the accretion disk, r'_in. Dashed lines indicate relations of the form (r'_in)^p, with p = −1.0, −0.75, −0.5, 1.0, and 2.0. The L'_disk parameters of Spec A to D in Table 4 are also indicated.
Table 1: The four spectra analyzed in detail in the upper-banana state.
Table 2: Model fit results to the difference spectra calculated from those in Table 1.^a
Table 3: Fitting results of the four original spectra in Table 1 with the MCD+BB+Gau model.^a
| 2019-04-18T13:08:40.909Z | 2011-07-19T00:00:00.000 | {
"year": 2011,
"sha1": "06163f386a414fbdcd74bee2a20727ce468736d7",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.1088/0004-637X/738/1/62/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "5eb00e71dec4b1776b95c846e48d5d1fd23e8b15",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
2901317 | pes2o/s2orc | v3-fos-license | Validation of candidate gene markers for marker-assisted selection of potato cultivars with improved tuber quality
Tuber yield, starch content, starch yield and chip color are complex traits that are important for industrial uses and food processing of potato. Chip color depends on the quantity of the reducing sugars glucose and fructose in the tubers, which are generated by starch degradation. Reducing sugars accumulate when tubers are stored at low temperatures. Early and efficient selection of cultivars with superior yield, starch yield and chip color is hampered by the fact that reliable phenotypic selection requires multiple year and location trials. Application of DNA-based markers early in the breeding cycle, which are diagnostic for superior alleles of genes that control natural variation of tuber quality, will reduce the number of clones to be evaluated in field trials. Association mapping using genes functional in carbohydrate metabolism as markers has discovered alleles of invertases and starch phosphorylases that are associated with tuber quality traits. Here, we report on new DNA variants at loci encoding ADP-glucose pyrophosphorylase and the invertase Pain-1, which are associated, with positive or negative effect, with chip color, tuber starch content and starch yield. Marker-assisted selection (MAS) and marker validation were performed in tetraploid breeding populations, using various combinations of 11 allele-specific markers associated with tuber quality traits. To facilitate MAS, user-friendly PCR assays were developed for specific candidate gene alleles. In a multi-parental population of advanced breeding clones, genotypes were selected for having different combinations of five positive and the corresponding negative marker alleles. Genotypes combining five positive marker alleles performed on average better than genotypes with four negative alleles and one positive allele. When tested individually, seven of eight markers showed an effect on at least one quality trait. The direction of effect was as expected. Combinations of two to three marker alleles were identified that significantly improved average chip quality after cold storage and tuber starch content. In F1 progeny of a single-cross combination, MAS with six markers did not give the expected result. Reasons and implications for MAS in potato are discussed. Electronic supplementary material The online version of this article (doi:10.1007/s00122-012-2035-z) contains supplementary material, which is available to authorized users.
Introduction
Potatoes are grown worldwide for food, feed and industrial uses. The tubers are a source of carbohydrates, high quality protein, essential vitamins, minerals and trace elements. Tuber starch is the basis of various industrial products and is used increasingly as a substitute for fossil oil in the generation of chemical compounds, for example, bioplastics. Besides starch quality, tuber starch content and starch yield (starch produced per area unit) are important traits for the production of potatoes for industrial uses. A further important quality criterion arises from the requirements of the food processing industry, which produces chips, French fries and other deep fried products from potatoes. This criterion is the tuber content of the reducing sugars fructose and glucose, which determines the culinary quality of chips and French fries (Hayes and Thill 2002; Kirkman 2007; Mackay et al. 1990; Xiong et al. 2002). At high temperatures, reducing sugars undergo a non-enzymatic Maillard reaction with amino acids, which results, depending on the amount of reducing sugars in the tubers, in unacceptably dark colored products (Talburt et al. 1975). The amount of reducing sugars increases when tubers are stored at temperatures below 10°C, which are preferred by the industry to inhibit sprouting and to extend marketability. This cold-induced sweetening is an adaptive response to osmotic stress (Sowokinos 2001). In mature dormant tubers, the sugars are produced by degradation of a small fraction of starch (Isherwood 1973). Tuber starch and sugar content are, therefore, connected and part of the same metabolic network.
To meet the demands of farmers, industry and consumers, potato breeding seeks to develop improved varieties, which combine high yield with tuber traits optimized for the various end uses and resistance to pests and diseases. Genetic variation is generated by crossing heterozygous, tetraploid parents, and selection is applied in the segregating F1 generation. F1 genotypes are vegetatively propagated and evaluated in multiple year and location trials for some 40 characters, many of them quantitative and modified by the environment (G × E interactions). The selection process from the initial cross to the release of a new variety requires 10-15 years (Milbourne et al. 2007). To assess reliably traits such as tuber yield, starch yield and chip quality, sufficient numbers of tubers are needed, which become available only after several years of vegetative multiplication. Tuber yield and starch content are conveniently evaluated by measuring tuber weight and specific gravity, respectively, which are both non-destructive methods. Chip quality is evaluated by a destructive frying test, by which chip color is rated from light yellow to dark brown. The chip color darkens with increasing amounts of reducing sugars. Due to the potato's low multiplication rate, the phenotypic selection of high yielding cultivars with good processing quality early in the selection cycle is not reliable. DNA-based markers diagnostic for tuber yield, starch yield and chip quality could circumvent this difficulty and drastically reduce the number of cultivars to be further propagated and evaluated in field trials. Ideally, diagnostic markers are derived from the gene variants (alleles) that are causal for natural variation of a character of interest. The causal variants may be located in the promoter region affecting gene expression, in the coding region affecting protein performance or in other regulatory sequences (introns, 3′ and 5′ untranslated regions). As knowledge of these causal genes and alleles is scarce for most plant agronomic traits, markers that are physically closely linked and, therefore, in linkage disequilibrium with the causal genes can be used as well for marker-assisted selection (MAS).
Tuber yield, starch and sugar content are complex traits controlled by multiple genetic and environmental factors. The prediction of trait values requires, therefore, sets of diagnostic markers, which tag the most important loci controlling the phenotypic variation. Molecular linkage mapping in experimental, mostly diploid populations derived from two parents identified a number of these loci as QTL (quantitative trait loci) (Bonierbale et al. 1993; Douches and Freyre 1994; Freyre and Douches 1994; Menendez et al. 2002; Schäfer-Pregl et al. 1998). QTL mapping and the generation of a molecular function map for carbohydrate metabolism and transport (Chen et al. 2001) provided the basis for adopting a candidate gene approach to identify genes that are causal for natural variation of tuber starch and sugar content. Genes functional in carbohydrate metabolism and transport, including invertases, ADP-glucose pyrophosphorylase, starch phosphorylases, sucrose phosphate synthase and sucrose synthase, co-localized with QTL for tuber starch and sugar content.
In a next step, functional and positional candidate genes were tested for association with tuber yield, starch content, starch yield and chip quality in populations of varieties and advanced breeding clones that were generated in commercial breeding programs. In contrast to the experimental populations used for linkage mapping, this material originated from multiple parental genotypes and represented the common allelic variation present in advanced potato germplasm in Europe. Associations with tuber quality traits were found for DNA polymorphisms at loci encoding invertases, starch phosphorylases, soluble starch synthase I, glucose-6-phosphate dehydrogenase and ribulose bisphosphate carboxylase activase. Individual markers explained up to 12 % of the total variation and, depending on the trait, six to ten markers explained between 26 % (tuber yield) and 55 % (tuber starch content) of the total variation (Li et al. 2008). Epistatic interactions between candidate gene alleles for tuber starch content and starch yield were also found.
A further step towards MAS is the validation of associated markers by using them to select genotypes with specific marker combinations. The efficiency of the genotypic selection is subsequently assessed by comparative phenotypic evaluation of groups of genotypes with different marker combinations. Markers can also be validated by testing whether the expected phenotypic effects are reproducible in new populations different from the one in which the marker-trait association was originally identified.
ADP-glucose pyrophosphorylase (AGPase) is an important candidate gene that has not been fully evaluated for association with tuber quality traits. AGPase is a key enzyme in starch biosynthesis in higher plants (Tetlow et al. 2004). Potato AGPase, like all higher plant AGPases studied so far, is a heterotetramer composed of two molecules each of two distinct subunits, a large regulatory subunit (LS) and a small catalytic subunit (SS) (Okita et al. 1990). Potato AGPase LS and SS encoding genes (referred to as AGPaseS and AGPaseB, respectively) have been cloned and characterized (Müller-Röber et al. 1990; Nakata et al. 1991; Okita et al. 1990). Molecular mapping identified three loci for the large subunit AGPaseS on potato chromosomes I, IV and VIII, and two for the small subunit AGPaseB on chromosomes VII and XII (Chen et al. 2001). The AGPaseS loci, in particular the locus AGPaseS-a on chromosome I, co-localize with QTL for tuber starch and/or sugar content in experimental mapping populations. Whereas a minor association with chip quality has been found for an AGPaseB allele (Li et al. 2008), AGPaseS loci have not been tested for association with tuber quality traits.
Pain-1 on potato chromosome III (Chen et al. 2001) encodes soluble acid invertase. Invertases cleave sucrose into glucose and fructose and are, therefore, the most direct functional candidate genes for chip quality (Sowokinos 2001). Two highly similar Pain-1 cDNA alleles have been identified, which were associated with tuber quality traits. One was more strongly associated with tuber starch content and the other with chip quality after cold storage (Draffehn et al. 2010). DNA variation in the Pain-1 promoter region has not been analyzed for association with tuber quality traits, which might help to further dissect Pain-1 alleles and their phenotypic effects.
In the present study, we report (1) novel associations of AGPaseS alleles with tuber quality traits, (2) refined marker-trait associations at the invertase locus Pain-1, (3) the development of user-friendly polymerase chain reaction (PCR) assays for candidate gene alleles associated with tuber quality traits, (4) the first results of MAS for tuber quality and (5) the validation of previously identified marker-trait associations in new germplasm.
Plant material
A population of 243 tetraploid cultivars (Li et al. 2008), consisting of 34 standard varieties and 90, 96 and 23 breeding clones from Böhm-Nordkartoffel Agrarproduktion OHG (BNA, Ebstorf, Germany), Saka Pflanzenzucht GbR (Windeby, Germany) and Nordring-Kartoffelzucht- und Vermehrungs-GmbH (NORIKA, Groß Lüsewitz, Germany), respectively, was used for association mapping of new candidate genes and development of allele-specific marker assays. This population is referred to as the 'CHIPS-ALL' population and has been evaluated in replicated field trials for chip color after harvest (CQA) as well as after cold storage at 4°C (CQS), for tuber yield (TY), starch content (TSC) and starch yield (TSY) (Li et al. 2008). All traits are correlated with each other. Tuber yield correlates negatively with tuber starch content and chip quality, whereas tuber starch content, starch yield and chip quality are positively correlated (Table 1). MAS and marker validation were performed on two types of material. First, 500 advanced tetraploid 'BNC' clones from the breeding program of BNA involving multiple parental lines were used. The BNC clones originated from the 5th to 8th year of phenotypic selection after crossing. Second, 576 'SKC' clones derived from the cross 'Diana' × 'Candella' were used. The 576 SKC clones were selected in 2007 from 746 seedlings based on general vigor and health. 'Diana' was one of the standard varieties in the CHIPS-ALL population, contained most positively associated markers and has relatively good chip quality (CQA = 7.8, CQS = 4.7). Average scores for chip quality of 'Candella' are lower (CQA = 6.0, CQS = 3.6). Tuber starch content was 17 % (average from 2002 and 2003) and 18 % (average from 2009 and 2010) for 'Diana' and 'Candella', respectively.
Chip color was obtained from rating three tubers per clone, time point and year. Trait means over the two replications in 2010 are coded as TSC-10, CQA-10, CQS8-10 and CQS4-10. Average chip quality over both years is referred to as CQA, CQS8 and CQS4.
DNA extraction
Leaf material was harvested from field-grown plants, freeze dried and stored at -20°C. DNA was extracted from 10 to 30 mg freeze dried leaf tissue per clone in racks with 2 ml safe-lock microcentrifuge tubes arranged in the 96-well format. Two 3 mm tungsten-carbide beads (Qiagen, Hilden, Germany) were added to each tube. Freeze dried leaves were ground to a fine powder with a Retsch Mixer Mill MM300 (Qiagen, Hilden, Germany). Total genomic DNA was extracted using the BioSprint DNA Plant Kit and a BioSprint workstation (Qiagen, Hilden, Germany). DNA quality and quantity were determined using a spectrophotometer. The quality of the DNA re-isolated in 2009 from BNC and SKC clones selected in 2008 was additionally checked by PCR amplification of a 250 base pair ubiquitin gene fragment, using the primers 5′-GACCATCACTCTTGAGGTTGAG-3′ (forward) and 5′-AATGGTGTCTGAGCTCTCGAC-3′ (reverse) and standard PCR conditions.
Association mapping of single strand conformation polymorphism (SSCP) markers in the CHIPS-ALL population

Amplicons were generated by PCR from genomic DNA as described (Li et al. 2008), using primers specific for ADP-glucose pyrophosphorylase S (AGPaseS, accession X61187) and a 1 kbp promoter fragment of soluble acid invertase (Pain-1, accession HQ197978). Primer sequences, annealing temperatures and amplicon sizes are specified in Table 2. SSCP analysis of the amplicons was performed as described (Orita et al. 1989). SSCP fragments were scored as present (1) or absent (0) without considering allele dosage. Association analysis with the traits CQA, CQS, TSC, TY and TSY was performed using the model detailed in Li et al. (2008) and the GLM procedure of SPSS 15.0 software (SPSS GmbH, Munich, Germany).
Markers used for marker-assisted selection and validation
Markers were chosen based on the most significant associations with tuber quality traits detected in previous association mapping experiments (Li et al. 2008; this paper). The markers GP171-a and Rca-1a were amplified and scored as direct PCR fragment length polymorphisms on agarose gels as described (Li et al. 2008) (Fig. 1). [Notes to Table 2: SCAR = sequence characterized amplified region; SSCP = single strand conformation polymorphism; a = touchdown PCR, in which the annealing temperature in the initial cycle was set 5°C above the optimal T_A of the primers and, in subsequent cycles, decreased in steps of 1°C/cycle for five cycles and then maintained for 30 cycles.] The GP171 amplicon originated from an anonymous RFLP (restriction fragment length polymorphism) marker (Gebhardt et al. 1991), whilst Rca-1a corresponds to allele 1a of ribulose bisphosphate carboxylase activase (RuBisCo activase, Rca). The marker allele InvGE-6f is diagnostic for the group a alleles of the apoplastic invertase gene InvGE (Draffehn et al. 2010) and was scored as a direct PCR fragment length polymorphism as described previously. The SSCP alleles Pain1-9a and Pain1-8c both correspond to the group a alleles of the vacuolar acid invertase gene Pain-1. However, only a subset of genotypes that possesses allele Pain1-9a also has Pain1-8c, and the two alleles show slightly different associations. Pain1-9a is equivalent to the single nucleotide polymorphism (SNP) allele Pain1-A1544, whereas Pain1-8c is equivalent to the SNP alleles Pain1-A718 and Pain1-C552 (Draffehn et al. 2010). Genotyping for Pain1-9a was performed by SSCP analysis as described (Li et al. 2008). Pain1-8c and the new Pain1 prom -d/e allele were scored using specific PCR assays (see below). The SSCP alleles Stp23-8b, StpL-3b and StpL-3e originate from two plastidic starch phosphorylase genes (Li et al. 2008).
Genotyping for these three alleles as well as for the new AGPaseS alleles AGPsS-9a and AGPsS-10a was performed using allele-specific PCR assays (see below).
Development of allele-specific PCR assays
Except for AGPsS-9a, allele-specific PCR assays were developed as follows: PCR products were generated with the same primers as used for SSCP analysis from two to three genotypes of the CHIPS-ALL population that either did or did not carry the targeted SSCP alleles. PCR products were separated on standard agarose gels, excised from the gels, purified with the QiaEx gel-extraction kit (Qiagen, Hilden, Germany), cloned into the pGEM-T vector and sequenced. Sequence comparisons revealed single nucleotide and insertion-deletion polymorphisms specific for the associated SSCP allele, which were then used for primer design (supplementary Figure 1). In some cases, allele-specific primers were designed based on the method 'PCR Amplification of Specific Alleles' (PASA) (Okimoto and Dodgson 1996; Sommer et al. 1992). The allele-specific nucleotide was placed at the 3′ end of the primer, and one mismatch was introduced at the third nucleotide position from the 3′ terminus. Primers specific for the AGPsS-9a allele were designed based on two diagnostic SNPs identified by direct amplicon sequencing in the CHIPS-ALL association panel (unpublished data). PCR amplification was performed in a 25-µl reaction mixture (20 mM Tris-HCl, pH 8.4, 1.5 mM MgCl2, 50 mM KCl) containing 50-60 ng DNA template, 0.4 µM of each primer, 0.2 mM dNTP and 1 U Taq DNA polymerase (Invitrogen, Darmstadt, Germany). Touchdown PCR was used in some cases to increase PCR specificity. The annealing temperature in the initial cycle was set 5°C above the optimal annealing temperature (T_A) of the primers (Table 4). In subsequent cycles, T_A was decreased in steps of 1°C/cycle for five cycles and maintained for 30 cycles. The extension times were adjusted according to the amplicon size as follows: 30 s for products smaller than 500 bp, 45 s for 500-750 bp products and 1 min 30 s for 1,000-1,500 bp products. The amplicons were separated on 2 % agarose gels and visualized by ethidium bromide staining.
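The two protocol rules described above (the PASA primer layout and the touchdown cycling scheme) can be expressed compactly in code. The following is a minimal sketch; the template sequence, SNP position, and helper names are hypothetical illustrations, not material from the study:

```python
# Sketch of (1) PASA allele-specific primer design and (2) the touchdown
# PCR annealing schedule described in the text. Sequences are made up.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def pasa_forward_primer(template: str, snp_index: int, allele: str, length: int = 20) -> str:
    """Allele-specific nucleotide at the 3' end, plus one deliberate mismatch
    at the third position from the 3' terminus (here realized as the
    complement of the template base, one simple choice)."""
    bases = list(template[snp_index - length + 1:snp_index]) + [allele]
    bases[-3] = COMPLEMENT[bases[-3]]
    return "".join(bases)

def touchdown_annealing_temps(t_opt: float, down_steps: int = 5, hold_cycles: int = 30):
    """Touchdown scheme from the text: start 5 degC above the optimal T_A,
    decrease 1 degC per cycle for five cycles, then hold T_A for 30 cycles."""
    temps = [t_opt + 5 - i for i in range(down_steps)]  # 5 touchdown cycles
    temps += [t_opt] * hold_cycles                      # 30 cycles at T_A
    return temps

# Hypothetical 5'->3' template with a G/A SNP at index 39 (0-based):
template = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAGG"
print(pasa_forward_primer(template, snp_index=39, allele="G"))
print(touchdown_annealing_temps(60.0)[:8])  # 65, 64, 63, 62, 61, 60, 60, 60
```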
Test statistics for marker validation experiments
All markers were scored as 1 for presence and 0 for absence in a given genotype. Single markers with two character states were tested for significant phenotypic differences between genotype groups (p < 0.05) by the t test for tuber starch content, yield and starch yield, and by the Mann-Whitney U test for chip quality. Marker combinations were analyzed using analysis of variance (ANOVA) for tuber starch content, yield and starch yield, and the Kruskal-Wallis test for chip quality. All analyses were performed with SPSS 15.0 software (SPSS GmbH Software, Munich, Germany).
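The same test battery is easy to reproduce outside SPSS; the sketch below uses SciPy with fabricated phenotype arrays (the group sizes and means are placeholders chosen only for illustration):

```python
# Sketch of the marker-validation statistics described above:
# t test / ANOVA for quantitative traits, Mann-Whitney / Kruskal-Wallis
# for ordinal chip-quality scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical phenotypes for clones carrying (1) or lacking (0) a marker.
tsc_with_marker = rng.normal(17.3, 1.7, size=40)   # tuber starch content, %
tsc_without     = rng.normal(15.5, 1.3, size=36)
cqs_with_marker = rng.integers(3, 9, size=40)      # chip score (ordinal 1-9)
cqs_without     = rng.integers(1, 7, size=36)

# Quantitative trait, two character states: two-sample t test.
t_stat, p_t = stats.ttest_ind(tsc_with_marker, tsc_without)

# Ordinal trait, two character states: Mann-Whitney U test.
u_stat, p_u = stats.mannwhitneyu(cqs_with_marker, cqs_without,
                                 alternative="two-sided")

# Marker combinations (several genotype groups): ANOVA / Kruskal-Wallis.
groups = [rng.normal(m, 1.5, size=25) for m in (15.5, 16.4, 17.3)]
f_stat, p_anova = stats.f_oneway(*groups)
h_stat, p_kw = stats.kruskal(*groups)

print(f"t test p = {p_t:.3g}; Mann-Whitney p = {p_u:.3g}; "
      f"ANOVA p = {p_anova:.3g}; Kruskal-Wallis p = {p_kw:.3g}")
```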
Results
Associations of AGPaseS and Pain-1 SSCP markers with tuber quality traits

Three primer pairs were designed in exon sequences of the AGPaseS gene (Nakata et al. 1991) (Table 2, supplementary Figure 1). One primer pair (AGPsS-7) generated directly polymorphic PCR products, which did not show association with tuber quality traits. The remaining two primer pairs produced monomorphic PCR products, which yielded three scorable, polymorphic SSCP markers after restriction with MseI. These three markers showed significant (p < 0.01) associations with TSC, TSY, CQA and CQS but not with TY (Table 3). The SSCP marker AGPsS-9a was positively associated with all four traits (presence of the marker increased on average tuber starch content, starch yield and chip quality), whilst AGPsS-10a was negatively associated (presence of the marker decreased on average tuber starch content, starch yield and chip quality). The SSCP marker AGPsS-10b showed small, positive associations with CQA, CQS and TSC. Blasting nucleotide sequences of AGPsS-9 and AGPsS-10 amplicons against the potato genome sequence (PGSC 2011) revealed that they were derived from the AGPaseS-a locus on chromosome I. Three of nine scorable SSCP markers derived from the Pain-1 promoter were associated with TSC, TSY, CQA and CQS (Table 3). The distribution of the Pain1 prom -a and Pain1-9a SSCP markers in the population CHIPS-ALL was highly similar, indicating that both markers detect the same Pain-1 alleles in homology group a (Draffehn et al. 2010). Accordingly, Pain1 prom -a showed similar associations as Pain1-9a (Li et al. 2008). The marker Pain1 prom -d/e was detected only in one quarter of the individuals that carried the Pain1-8c marker. The individuals having Pain1 prom -d/e had a mean rating of 5.4 [standard deviation (SD) 1.6] for chip color after cold storage (the trait of particular interest for MAS), whereas the individuals carrying Pain1-8c had a mean rating of 4.0 (SD 2.0). The mean ratings of individuals lacking both markers Pain1 prom -d/e and Pain1-8c were 2.3 (SD 2.1) and 2.1 (SD 2.0), respectively. The marker Pain1 prom -g, with small positive associations with CQA, CQS and TSC, represents a new Pain-1 allele.
Development of specific PCR assays for candidate gene alleles associated with tuber quality traits

To facilitate MAS for tuber quality traits, we converted the associated SSCP markers Stp23-8b, StpL-3b, StpL-3e, Pain1-8c (Li et al. 2008), Pain1 prom -d/e, AGPsS-9a and AGPsS-10a (this paper) into specific PCR assays as described in the "Materials and methods". Introns showed more allelic sequence variation than exons. Therefore, polymorphisms in introns were mostly used for the design of primers that generated amplicons in the range of 200-1,200 bp (supplementary Figure 1), an optimal size for separation on standard agarose gels (Fig. 1). Allele-specific primers, annealing temperatures and PCR product sizes are specified in Table 4. PCR protocols were optimized using standard varieties of the CHIPS-ALL population with and without the corresponding SSCP markers (Fig. 1). Specificity of the PCR product for the corresponding SSCP marker was assessed by testing for cosegregation of both marker types in the CHIPS-ALL population. The distributions of the allele-specific PCR markers and the original SSCP markers in the CHIPS-ALL population were nearly identical.
MAS and marker validation in BNC genotypes
In 2008, five hundred BNC breeding clones from the 5th to 8th year of phenotypic selection were screened for six markers either positively (+) or negatively (-) associated with chip quality, tuber starch content and starch yield: Stp23-8b (+), StpL-3e (+), Pain1-9a (+), AGPsS-10a (-), Rca-1a (-) and GP171-a (-). The Rca-1a marker was not detected in the 500 genotypes. Eleven groups of BNC clones (≥3 individuals per group) were selected based on sharing different combinations of the five remaining markers (Table 5). Group A combined all five positive marker alleles, groups B, C, D, E and F had four positive and one negative marker allele, groups G, J, L had two and groups N and O four negative marker alleles (Table 5). Only one genotype with the 'all negative' marker combination was found, which was not sufficient for comparisons of group means and was, therefore, not considered further. In 2009, the marker tests were repeated in the BNC genotypes selected in 2008, using DNA re-extracted from leaves of plants growing in the field in 2009. Seventy-six BNC clones were finally selected that had consistent scores for all markers in both years and were used for subsequent analyses. The 76 selected BNC clones were evaluated in 2009 and 2010 for chip quality after harvest (CQA) and after cold storage (CQS7, CQS5), for tuber starch content (TSC), yield (TY) and starch yield (TSY). Population means and ranges are shown in Table 6. CQA, CQS7 and CQS5 correlated with each other. TSC, TY and TSY were also correlated, TSC negatively with TY. CQA and CQS5 showed positive correlation with TSC (Table 7).
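The group construction just described is essentially a filter over binary marker profiles. A minimal sketch follows (the marker names come from the text, but the clone identifiers, scores, and group definitions are fabricated placeholders):

```python
# Sketch: group clones by combinations of 0/1 marker scores, as in Table 5.
import pandas as pd

# Hypothetical 0/1 scores; column names are the five markers from the text.
markers = ["Stp23-8b", "StpL-3e", "Pain1-9a", "AGPsS-10a", "GP171-a"]
clones = pd.DataFrame(
    [[1, 1, 1, 0, 0],   # all positive alleles present, both negatives absent
     [1, 1, 1, 0, 1],
     [1, 0, 1, 0, 0],
     [0, 0, 1, 1, 1]],
    columns=markers,
    index=["BNC001", "BNC002", "BNC003", "BNC004"],
)

# Group A: all positive markers present, both negative markers absent.
group_a = clones[
    (clones[["Stp23-8b", "StpL-3e", "Pain1-9a"]] == 1).all(axis=1)
    & (clones[["AGPsS-10a", "GP171-a"]] == 0).all(axis=1)
]
print(group_a.index.tolist())   # -> ['BNC001']

# A marker-profile group is retained for phenotyping only if it holds
# at least three clones, as in the selection scheme described above.
profiles = clones.groupby(markers).size()
valid_groups = profiles[profiles >= 3]
```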
The trait means over the years 2009 and 2010 of the 11 genotypic groups are included in Table 5. Differences between groups were significant for the traits CQS7 and TSC. Tuber starch content clearly decreased with increasing number of negative marker scores, with the best group A having a 3 % higher average starch content than the worst groups N and O. The same trend was observed for chip quality. Average ratings for chip color were always higher for group A than for groups N and O. The presence of only one or two negative markers in groups B to L did not have an observable effect on chip quality. The absence of the Stp23-8b marker in groups D and G significantly decreased the average tuber starch content when compared with group A.

Reconstructed excerpt of the allele-specific primer table (marker; forward/reverse primer; T_A in °C; product size in bp):
Stp23-8b; f-cgcatcagaaaaaacctcgg / r-acctcctcctgaccatcttt; 65-60 (a); 1236
AGPsS-9a; f-ctgctttcttgcttagttttacc / r-catttttcagaaattatatcaggtg; 63; 210
AGPsS-10a; f-gaaaatttatcctgaacaaacaccca / r-gttaataggaagctaacctcctct; 65-60 (a); 449
(a) Touchdown PCR: the annealing temperature in the initial cycle was set 5°C above the optimal T_A of the primers; in subsequent cycles, T_A was decreased in steps of 1°C/cycle for five cycles and then maintained for 30 cycles.

In addition to the five markers used for MAS, the 76 BNC clones were genotyped with the markers AGPsS-9a (+), Pain1-8c (+) and Pain1 prom -d/e (+). When tested individually for effects on the 2-year means of the phenotypic traits, seven of the eight markers were significant for one or two traits (Table 8). Results for the single traits in 2009 and 2010 are shown in supplementary Table 1. None of the markers showed significant effects on CQA, TY and TSY. The marker AGPsS-10a had no detectable effect on any trait. The seven significant markers showed the expected positive or negative direction of effect. Of three markers with a positive effect on tuber starch content (Stp23-8b, StpL-3e, Pain1-9a), Stp23-8b was most significant for TSC. Presence of this marker increased tuber starch content on average by 2 % [mean 1 = 17.3 % (SD 1.7), mean 0 = 15.5 % (SD 1.3)]. AGPsS-9a was the only marker significant for both traits CQS7 and CQS5 (chip quality after cold storage). Presence of this marker increased the score for chip quality on average by one unit [CQS7: mean 1 = 6.7 (SD 0.8), mean 0 = 5.8 (SD 1.3); CQS5: mean 1 = 4.8 (SD 1.1), mean 0 = 3.7 (SD 1.5)]. Consistent with previous results (Draffehn et al. 2010), Pain1-9a had a stronger effect on tuber starch content than on chip quality, whereas Pain1-8c and Pain1 prom -d/e, with nearly identical distribution in BNC clones, affected predominantly chip quality [mean 1 = 6.6 (SD 0.8), mean 0 = 5.7 (SD 1.4)].
To identify optimal marker combinations for chip quality and tuber starch content, marker pairs and combinations of three markers were tested for their effect on the traits (Table 8). Seven of 11 marker combinations showed highly significant effects on tuber starch content. All combinations including the marker Stp23-8b increased average tuber starch content (supplementary Table 2). Combinations of the marker AGPsS-9a with either Stp23-8b, Pain1-8c or GP171-a showed significant effects on both CQS7 and CQS5. In agreement with the expectation from previous association studies, the best average scores (5) for chip quality after 4 months storage at 5°C were obtained when the positive markers AGPsS-9a and Stp23-8b, or AGPsS-9a and StpL-3e, were combined, or when AGPsS-9a was present and the negative marker GP171-a was absent. However, when combining AGPsS-9a with Pain1-8c, the genotype class with AGPsS-9a present and Pain1-8c absent scored best (supplementary Table 2). This is contrary to expectation, as Pain1-8c is positively associated with chip quality (Li et al. 2008). This observation was corroborated by the combination of the markers Pain1-8c, AGPsS-9a and Stp23-8b. The highest average scores for CQS5 (5.8) and the highest average tuber starch content (18.2 %) were observed for the genotypic class with AGPsS-9a and Stp23-8b both present but Pain1-8c absent (supplementary Table 2). This indicates that the positive effect of the Pain1-8c marker was reversed in the presence of the AGPsS-9a marker.
MAS and marker validation in SKC genotypes
Five hundred and seventy-six F1 genotypes (SKC clones) originating from the cross 'Diana' × 'Candella' were genotyped in 2008 for the segregating markers GP171-a (-), Stp23-8b (+), StpL-3b (-), StpL-3e (+), AGPaseS-10a (-) and InvGE-6f (+). Eighteen groups of SKC clones (≥3 individuals per group) were selected for having in common various combinations of the six markers. Group A consisted of five individuals with all positive markers, whereas three individuals in group P had all negative markers. Ten groups corresponded to five pairs with complementary marker combinations (D1 and D2, F1 and F2, G1 and G2, J1 and J2, K1 and K2) (Table 9). In 2009, the selected SKC clones were propagated in the field, evaluated for chip quality and tuber starch content, and re-genotyped with the markers, similar to the BNC clones. One hundred and forty-six SKC clones with marker scores consistent with the previous year were finally selected. One hundred and twenty-one SKC clones could be evaluated a second time in 2010 for chip quality and tuber starch content. Unusually dry weather during June and July 2010 led to a strong reduction in average tuber starch content (Table 6). Marker effects were, therefore, tested separately for 2009 and 2010. Chip quality scores at three different time points and storage temperatures over 2 years were correlated with each other and with TSC-10, whereas TSC-9 was correlated with TSC-10 but not with chip quality (Table 10). Except for TSC-09, phenotypic differences between the selected genotypic groups were not significant. The group D1 had the highest tuber starch content in the year 2009 (20.6 %), which differed significantly from other groups, for example, from the complementary group D2 (Table 9). However, unlike the BNC clones, no decrease in tuber starch content with increasing number of negative markers was observed. In fact, the best group A had the same average tuber starch content as the worst group P.
The SKC clones selected based on the six markers described above were genotyped for the additional markers Pain1-8c, Pain1 prom -d/e and AGPsS-9a. Pain1 prom -d/e cosegregated with Pain1-8c in the SKC family.
The single markers and combinations of two or three markers were tested for significant effects on chip quality and tuber starch content in 2009 and 2010 (Table 11). When tested individually, the markers GP171-a, Stp23-8b, StpL-3b and InvGE-6f did not show any significant effect, and none of the eight markers or combinations thereof were significant for the chip quality traits CQA-09, CQS8-09 and CQS8-10. Interestingly, the marker StpL-3e showed an effect on tuber starch content in both years, albeit with opposite directions: positive as expected in 2009, but negative in 2010. Also AGPsS-10a, which was negatively associated with chip quality in the CHIPS-ALL population (Table 3), showed in the SKC clones a small but positive effect on the trait CQS4-09. The positive marker AGPsS-9a showed only a small positive effect on tuber starch content in 2010 (TSC-10), in contrast to the CHIPS-ALL and BNC populations, in which this marker was strongly associated with tuber starch content and chip quality (Tables 3, 8). Consistent with the BNC and CHIPS-ALL populations, the optimal single marker for chip quality was Pain1-8c, which was significant for the 2-year average of chip quality after cold storage at 4°C (CQS4). Presence of Pain1-8c improved average chip color by 0.4 units. The best pairwise marker combinations for chip quality were Pain1-8c combined with either StpL-3e (Pain1-8c present and StpL-3e absent) or InvGE-6f (both Pain1-8c and InvGE-6f present). The best combinations for tuber starch content were StpL-3e combined with either AGPsS-9a or AGPsS-10a. Also in these cases, the genotypic classes with the highest average tuber starch content differed between the years (supplementary Table 3). Combinations of three markers did not improve the effects on the tuber quality traits compared to pairwise combinations.
Discussion
Novel markers for tuber quality traits

DNA polymorphisms at the AGPaseS-a locus on potato chromosome I and in the promoter region of Pain-1 on chromosome III were associated in the CHIPS-ALL population with tuber starch content, starch yield and chip quality before and after cold storage but not with tuber yield (Table 3). The direction of effect of each associated allele was the same for all traits, either positive or negative, meaning that an allele that increased average tuber starch content also increased average chip quality (by decreasing tuber sugar content) and vice versa. These results are consistent with the strong positive correlation between chip quality and tuber starch content as well as starch yield (Table 1). The same relationship is valid for all strongly associated candidate gene alleles identified so far, which function in carbohydrate metabolism (Li et al. 2008).
The results of association genetics are in full agreement with the known biochemical links between starch and sugars and the physiology of starch-sugar interconversion (Isherwood 1973). Genes strongly associated exclusively with either tuber starch content or chip quality have not been discovered so far based on the candidate gene approach. Such genes might function in as yet unknown, for example regulatory, pathways. Despite the link between tuber starch and sugar content, the relative size of the phenotypic effect can vary between alleles of the same locus. Interesting examples are the invertase alleles Pain1-9a, Pain1-8c and the new Pain1 prom -d/e. Pain1-8c (= Pain1-A718) was in strong LD with Pain1-9a (= Pain1-A1544), but was less frequent in the CHIPS-ALL population and showed stronger association with chip quality than Pain1-9a, which is predominantly associated with tuber starch content (Draffehn et al. 2010). The genotypes having the allele Pain1 prom -d/e (12 individuals of the CHIPS-ALL population) were a subset of the genotypes having the Pain1-8c allele. These genotypes scored on average even better for chip quality after cold storage than genotypes with Pain1-8c, but showed only a small effect on starch content. The distribution of Pain1-8c and Pain1 prom -d/e was nearly identical in the selected BNC population, and the two markers co-segregated in the SKC family, indicating that they are derived from the same haplotype. Alleles with positive trait associations like Pain1-8c, Pain1 prom -d/e and AGPsS-9a had a low frequency in the CHIPS-ALL population. This indicates that there is room for improving processing quality by enriching breeding populations for these low frequency alleles.

Allele-specific, user-friendly PCR assays for MAS

Most associations with tuber quality traits were discovered based on SSCP analysis (Li et al. 2008; this paper). Although this methodology is highly efficient in detecting DNA polymorphisms, it is not suitable for high throughput screening of plant material as required in MAS. We, therefore, converted seven SSCP candidate gene alleles that showed the most promising associations with tuber quality traits in the CHIPS-ALL population into allele-specific PCR assays, which were used for MAS (Table 4; Fig. 1). Primers were designed based on allele-specific SNPs or InDels, and the specificity of the PCR was verified and optimized in individuals of the CHIPS-ALL population. A similar approach was used to develop allele-specific PCR and RT-PCR assays for discrimination of the late blight resistance gene RB from numerous RB homologs in potato (Millett and Bradeen 2007). Together with the allele-specific SCAR markers InvGE-6f, Rca-1a and GP171-a (Li et al. 2008), the seven allele-specific, user-friendly markers described in this paper constitute a first set that can be widely used for exploring MAS for tuber quality traits, for analyzing associations in new populations and for allele mining, for example, in landraces or wild potato species.
Marker validation
MAS for tuber quality traits was exercised in two different types of genetic material. The BNC clones were marker selected from advanced, multi-parental material, which had undergone several years of phenotypic selection, whereas the SKC clones were selected from F1 progeny of a single-cross combination, which had been mildly selected only for general vitality in the first year after crossing. Most markers could be validated in the BNC population, whereas this was largely not the case in the SKC F1 family. In the 76 BNC genotypes selected based on presence/absence of five allele-specific markers, a clear trend was observed from genotypic groups N and O with the worst allele combinations to group A with the best allele combination. Average tuber starch content, starch yield and chip quality increased from N/O to A as expected (Table 5). The effects of replacing one (groups B-F) or two (groups G, J, L) positive alleles by the complementary negative allele could not be dissected, likely due to the small number of individuals in each genotypic group, which was insufficient to detect small phenotypic effects. An exception was the positive marker Stp23-8b. All five groups (C, G, L, N, O) lacking this marker had a significantly lower tuber starch content compared to the six other groups having it. The strong positive effect of Stp23-8b on tuber starch content was also evident in single marker tests and in combinations with one and two other markers (Table 8). With the exception of AGPsS-10a, all markers tested individually in the 76 BNC genotypes showed a significant effect on at least one tuber quality trait. Except for Stp23-8b, significance levels were lower than in the CHIPS-ALL population, likely due to the small population size. Lack of a significant effect, for example on CQA, may be due to the limited phenotypic range. Correlations between traits were weaker due to the small number of phenotyped BNC clones but still consistent with the trait correlations observed in the CHIPS-ALL population. The directions of effects were also consistent with the original associations in the CHIPS-ALL population. A remarkable exception was the combination of the markers Pain1-8c and AGPsS-9a, which individually showed reproducible, positive effects on chip quality. However, the genotype group with Pain1-8c absent and AGPsS-9a present scored best for chip quality, particularly after cold storage (supplementary Table 2), suggesting incompatibility between certain alleles of soluble acid invertase and ADP-glucose pyrophosphorylase S. The plastidic starch phosphorylase loci Stp23 and StpL are identical to PHO1A and PHO1B, respectively, which have been tested for association with tuber starch content in a population of 205 varieties and breeding clones different from the CHIPS-ALL population. SSCP markers at both loci were associated with tuber starch content (Urbany et al. 2011). Together with the results reported here for 76 BNC clones, associations with tuber starch content of allelic variants at these two starch phosphorylase loci have now been validated in three different populations.
The 146 SKC genotypes were selected based on presence/absence of six markers, four of which were the same as used for selecting the BNC genotypes. Significant differences between the 18 genotypic groups were only observed for tuber starch content in 2009, and these differences did not correspond to the expectation of increasing starch content with increasing number of positive markers (Table 9). No differences were detected for chip quality. When tested individually, four of the eight markers for which the SKC family was genotyped showed an effect on at least one trait. However, the direction of effect of two markers (StpL-3e, AGPsS-10a) was inconsistent between years and with the original associations in the CHIPS-ALL population (Table 11).
One reason for the limited success of MAS in the SKC family could be the phenotypic evaluation, which suffered from two handicaps: the low number of tubers available for testing chip quality in 2009, and the highly unusual weather conditions during the 2010 growing season. Like tuber yield, the assessment of chip quality in the early years of multiplication might not be reliable enough to validate marker-trait associations that were identified in a population of varieties and more advanced breeding clones. Furthermore, marker-trait associations identified under normal climatic conditions might be unstable under exceptional weather circumstances (G × E interactions). In this respect, Pain1-8c was the only marker that showed a consistent positive effect on chip quality in both BNC and SKC genotypes.
Another reason may be that the markers explain only part of the phenotypic variation of polygenic traits. In this case, the markers can predict the phenotype of an individual only with a certain probability but not with certainty. The prediction of marker effect is probabilistic, not deterministic. The prediction might fail, therefore, in individual cross combinations due to additional, unknown genetic factors and epistatic interactions, which segregate in a particular F1 family. Association mapping with genome-wide SNP markers, which now becomes feasible based on the draft potato genome sequence (PGSC 2011), is the strategy to fill the knowledge gaps on how many and which loci control the natural variation of complex tuber traits.
Conclusion
Marker-assisted selection in applied potato breeding programs is so far restricted to few genes for pathogen resistance with major effects (Ortega and Lopez-Vizcon 2012; Rizza et al. 2006; Whitworth et al. 2009). The implementation of MAS for polygenic traits poses a major challenge. In this paper, we report the results of the first experiments to implement MAS for polygenic tuber quality traits in potato. From this exercise, the following lessons can be learned:
1. Incompatibilities between alleles do occur and have to be taken into account. More important than the number of markers is the choice of suitable marker combinations, for example, the combination of marker Pain1-8c absent with markers AGPsS-9a and Stp23-8b present, which was optimal for improving tuber quality in the BNC clones. In other breeding populations, other marker combinations might perform better.
2. With the current state of knowledge, the most reproducible single marker for increasing average tuber starch content, and eventually starch yield and chip quality, is Stp23-8b; for improving average chip quality after cold storage, it is Pain1-8c or Pain1 prom -d/e.
3. MAS for tuber quality should not rely on single-cross combinations but should be applied to multiple parents and their progeny. For example, MAS can first be used to purify parental populations from negative alleles and to increase the frequency and dosage of positive alleles. Pre-selection for general plant performance such as vigor will reduce the number of progeny clones to be subjected to MAS (Ortega and Lopez-Vizcon 2012). Phenotypic evaluation of tuber quality can then be performed on the remaining clones at a later stage in the breeding cycle. | 2017-05-31T11:45:19.825Z | 2013-01-09T00:00:00.000 | {
"year": 2013,
"sha1": "0ed4fad4b9ed8f7f93f3043517be8ec235329534",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00122-012-2035-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "81e833f898fdc02867c3a84815ed2611919dee46",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
41811116 | pes2o/s2orc | v3-fos-license | Formation of Hard Power-laws in the Energetic Particle Spectra Resulting from Relativistic Magnetic Reconnection
Using fully kinetic simulations, we demonstrate that magnetic reconnection in relativistic plasmas is highly efficient at accelerating particles through a first-order Fermi process resulting from the curvature drift of particles in the direction of the electric field induced by the relativistic flows. This mechanism gives rise to the formation of hard power-law spectra in parameter regimes where the energy density in the reconnecting field exceeds the rest-mass energy density, $\sigma \equiv B^2/(4 \pi n m_e c^2) > 1$, and when the system size is sufficiently large. In the limit $\sigma \gg 1$, the spectral index approaches $p = 1$ and most of the available energy is converted into non-thermal particles. A simple analytic model is proposed which explains these key features and predicts a general condition under which hard power-law spectra will be generated from magnetic reconnection.
Introduction - Magnetic reconnection is a fundamental plasma process that allows rapid changes of magnetic field topology and the conversion of magnetic energy into plasma kinetic energy. It has been extensively discussed in solar flares, Earth's magnetosphere, and laboratory applications. However, magnetic reconnection remains poorly understood in high-energy astrophysical systems [1]. Magnetic reconnection has been suggested as a mechanism for producing high-energy emission from pulsar wind nebulae, gamma-ray bursts, and jets from active galactic nuclei [2-6]. In those systems, it is often expected that the magnetization parameter σ ≡ B^2/(4πnmc^2) exceeds unity. Most previous kinetic studies focused on the non-relativistic regime σ < 1 and reported several acceleration mechanisms, such as acceleration at X-line regions [7-9] and Fermi-type acceleration within magnetic islands [8-11]. More recently, the regime σ = 1-100 has been explored using pressure-balanced current sheets, and strong particle acceleration has been found in both diffusion regions [12-15] and island regions [16,17]. However, this initial condition requires a hot plasma component inside the current sheet to maintain force balance, which may not be justified for high-σ plasmas.
For magnetically dominated systems, it has been shown [18,19] that the gradual evolution of the magnetic field can lead to the formation of intense, nearly force-free current layers where magnetic reconnection may be triggered. In this Letter, we perform large-scale two-dimensional (2D) and three-dimensional (3D) full particle-in-cell (PIC) simulations of a relativistic force-free current sheet with σ up to 1600. In the high-σ regime, the release of magnetic energy is accompanied by the energization of nonthermal particles on the same fast time scale as the reconnection process. Much of the magnetic energy is converted into the kinetic energy of nonthermal relativistic particles, and the eventual energy spectra show a power law f(γ) ∝ γ^-p over nearly two decades, with the spectral index p decreasing with σ and system size and approaching p = 1. The dominant acceleration mechanism is a first-order Fermi process through the curvature drift motion of particles along the electric field induced by relativistic reconnection outflows. The formation of the power-law distribution can be described by a simple model that includes both inflow and the Fermi acceleration. This model also appears to explain recent PIC simulations [15], which reported hard power-law distributions after subtracting the initial hot plasma population inside the current layer.
Numerical simulations - The initial condition is a force-free current layer with B = B_0 tanh(z/λ) x̂ + B_0 sech(z/λ) ŷ, which corresponds to a magnetic field of magnitude B_0 rotating by 180° across a layer of thickness 2λ. The plasma consists of electron-positron pairs with mass ratio m_i/m_e = 1. The initial distributions are Maxwellian with a uniform density n_0 and temperature T_i = T_e = 0.36 m_e c^2. Particles in the sheet have a net drift U_i = -U_e, giving a current density J = e n_0 (U_i - U_e) consistent with ∇ × B = 4πJ/c. The simulations are performed using the VPIC [20] and NPIC [21,22] codes, both of which solve the relativistic Vlasov-Maxwell system of equations.
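A quick numerical check of this initial condition (a sketch; the grid and parameter values are arbitrary choices, not the simulation's): since tanh² + sech² = 1, the field magnitude is uniform across the layer, so there is no magnetic pressure gradient and the configuration is force-free.

```python
# Sketch: the force-free rotating field B = B0 tanh(z/lam) x_hat + B0 sech(z/lam) y_hat.
import numpy as np

B0, lam = 1.0, 6.0                       # arbitrary units; lam plays the role of lambda
z = np.linspace(-50, 50, 2001)

Bx = B0 * np.tanh(z / lam)
By = B0 / np.cosh(z / lam)               # sech(z/lam)
Bmag = np.hypot(Bx, By)

# |B| = B0 everywhere: tanh^2 + sech^2 = 1, so there is no magnetic pressure
# gradient; the current for this profile is field-aligned, hence J x B = 0.
assert np.allclose(Bmag, B0)

# The field direction rotates by 180 degrees across the layer:
angle = np.degrees(np.arctan2(By, Bx))
print(angle[0], angle[-1])               # ~180 at z << -lam, ~0 at z >> +lam
```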
In the simulations, σ is adjusted by changing the ratio of the electron gyrofrequency to the plasma frequency, σ = B^2/(4πn_e m_e c^2) = (Ω_ce/ω_pe)^2. A series of 2D simulations were performed with σ = 1-1600 and a range of domain sizes.
[Fig. 1: (a) x-z cut of the current density; (b) isosurface of the current density with color-coded J · E, normalized using n_0 m_e c^2 ω_pe, at ω_pe t = 375; (c) evolution of the magnetic energy E_B, the total kinetic energy E_k, and the kinetic energy carried by relativistic particles with γ > 4; (d) evolution of particle energy spectra from 2D and 3D simulations; subpanel: energy spectrum from the 3D simulation at ω_pe t = 700.]
The half-thickness is λ = 6d_i for σ ≤ 100, 12d_i for σ = 400, and 24d_i for σ = 1600, in order to satisfy U_i < c. All simulations used more than 100 particles per cell for each species, employed periodic boundary conditions in the x- and y-directions, and in the z-direction used conducting boundaries for the fields and reflecting boundaries for the particles.
A long-wavelength perturbation [22] with B_z = 0.03B_0 is included to initiate reconnection. Simulation results - Figure 1 contrasts some key results from 2D and 3D simulations with σ = 100 and domain size L_x × L_z = 300d_i × 194d_i (L_y = 300d_i for the 3D simulation). Panel (a) shows the current density at ω_pe t = 375 in the 2D simulation. Because of the secondary tearing instability, several fast-moving secondary plasmoids develop along the central region and merge to form larger plasmoids [22]. Panel (b) shows an isosurface of the current density colored by J · E at ω_pe t = 375 from the 3D simulation. As the initial guide field is expelled outward from the central region, the kink instability [23] develops and interacts with the tearing mode, leading to a turbulent evolution [24]. Previous studies have suggested different predictions concerning the influence of σ on the reconnection rate [25-29]. In this Letter, the reconnection rate is observed to increase with σ, from E_rec ∼ 0.03B_0 for σ = 1 to E_rec ∼ 0.22B_0 for σ = 1600. Although the 2D and 3D simulations appear quite different, the energy conversion and particle energization are surprisingly similar. Panel (c) compares the evolution of the magnetic energy E_B, the plasma kinetic energy E_k, and the energy in relativistic particles with γ > 4. In both cases, about 20% of the magnetic energy is converted into kinetic energy of relativistic particles. Figure 1(d) compares the energy spectra at various times. The most striking feature is that a hard power-law spectrum with index p ∼ 1.35 forms in both 2D and 3D runs. In the subpanel, the energy spectrum for all particles in the 3D simulation at ω_pe t = 700 is shown by the red line. The low-energy portion can be fitted by a Maxwellian distribution (black), and the nonthermal part resembles a power-law distribution (blue) starting at γ ∼ 2, with an exponential cut-off apparent for γ ≳ 100. The nonthermal part contains ∼25% of the particles and ∼95% of the kinetic energy. The maximum particle energy is predicted approximately by integrating the reconnecting electric field, m_e c^2 γ_max = ∫ |q E_rec| c dt, until the gyroradius is comparable to the system size. Although we observe a strong kink instability in the 3D simulations, the energy conversion and particle energy spectra are remarkably similar to the 2D results, indicating that 3D effects are not crucial for understanding the particle acceleration. Since there is more freedom to vary the parameters in 2D simulations, in the rest of this Letter we focus on this limit.
In Figure 2, we present more analysis of the acceleration mechanism using the case with σ = 100 and L_x × L_z = 600d_i × 388d_i. Panel (a) shows the energy as a function of the x-position of four accelerated particles. The electrons gain energy by bouncing back and forth within the reconnection layer. Upon each cycle, the energy gain is ∆γ ∼ γ, which demonstrates that the acceleration mechanism is a first-order Fermi process [11,30]. To show this more rigorously, we have tracked the energy change of all the particles in the simulation and the contributions from the parallel electric field (m_e c^2 ∆γ = ∫ q v_∥ E_∥ dt) and the curvature drift acceleration (m_e c^2 ∆γ = ∫ q v_curv · E_⊥ dt), similar to [31], where v_curv = γ v_∥^2 [b × (b · ∇)b]/Ω_ce, v_∥ is the particle velocity parallel to the magnetic field, and b = B/|B|. Panel (b) shows the averaged energy gain and the contributions from the parallel electric field and the curvature drift acceleration over an interval of 25 ω_pe^-1, as a function of energy, starting at ω_pe t = 350. The energy gain follows ∆γ ∼ αγ, confirming the first-order Fermi process identified from particle trajectories. The energy gain from the parallel motion is weakly dependent on energy, whereas the energy gain from the curvature drift acceleration is roughly proportional to energy. In the early phase, the parallel electric field is strong but only accelerates a small portion of particles, and the curvature drift dominates the acceleration starting at about ω_pe t = 250. The contribution from the gradient drift was also evaluated and found to be unimportant. Panel (c) shows α = ⟨∆γ⟩/(γ∆t), measured directly from the energy gain of the particles in the perpendicular electric field (m_e c^2 ∆γ = ∫ q v_⊥ · E_⊥ dt) and estimated from the expression for the curvature drift acceleration. The close agreement demonstrates that the curvature drift term dominates the particle energization. For higher σ and larger domains, the acceleration is stronger and reconnection is sustained over a longer duration. In panel (d), a summary of the observed spectral index of all the 2D runs shows that the spectrum is harder for higher σ and larger domain sizes, and approaches the limit p = 1.
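To make the curvature-drift diagnostic concrete, here is a minimal numerical sketch of v_curv = γ v_∥² b × (b · ∇)b / Ω_ce and the energization rate q v_curv · E_⊥; the field model, grid, and particle values are arbitrary illustrations, not the simulation's data:

```python
# Sketch: curvature-drift velocity and energization rate on a 2D field snapshot.
import numpy as np

nx, nz = 128, 128
dx = dz = 0.1
x = np.arange(nx) * dx
z = np.arange(nz) * dz
X, Z = np.meshgrid(x, z, indexing="ij")

# Toy reconnected field (X-point-like pattern, plus a weak guide field so
# that |B| never vanishes on the grid); illustrative only.
Bx = np.tanh(Z - z.mean())
Bz = 0.1 * np.sin(2 * np.pi * X / x.max())
By = 0.1 * np.ones_like(Bx)

B = np.stack([Bx, By, Bz])
Bmag = np.sqrt((B**2).sum(axis=0))
b = B / Bmag                                   # unit vector along B

def ddx(f): return np.gradient(f, dx, axis=0)
def ddz(f): return np.gradient(f, dz, axis=1)

# (b . grad) b for a field with d/dy = 0:
bdotgrad_b = np.stack([b[0]*ddx(b[i]) + b[2]*ddz(b[i]) for i in range(3)])
kappa = np.cross(b, bdotgrad_b, axis=0)        # b x (b . grad) b

# Curvature drift for one test particle; in normalized units (q = m_e = c = 1)
# the gyrofrequency Omega_ce is just Bmag.
gamma, v_par = 10.0, 0.9
v_curv = gamma * v_par**2 * kappa / Bmag

# Energization rate q v_curv . E_perp for a toy out-of-plane electric field:
E = np.stack([np.zeros_like(Bx), 0.1*np.ones_like(Bx), np.zeros_like(Bx)])
E_perp = E - (E * b).sum(axis=0) * b
dgamma_dt = (v_curv * E_perp).sum(axis=0)
print(dgamma_dt.mean())
```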
New Model - It is often argued that some loss mechanism is needed to form a power-law distribution [12,30].
However, the simulation results reported here illustrate clear power-law distributions in a closed system. Here we demonstrate that these results can be understood in terms of the model illustrated in Figure 3(a). As reconnection proceeds, the ambient plasma is injected into the acceleration region at a speed $V_{in} = cE_{rec}\times B/B^2$. We consider the continuity equation for the energy distribution function $f(\varepsilon, t)$ within the acceleration region, $\partial f/\partial t + \partial(\dot{\varepsilon} f)/\partial\varepsilon = f_{inj}/\tau_{inj} - f/\tau_{esc}$ (1), with $\partial\varepsilon/\partial t = \alpha\varepsilon$, where $\alpha$ is the constant acceleration rate from the first-order Fermi process, $\varepsilon = m_e c^2(\gamma - 1)/T$ is the normalized kinetic energy, $\tau_{inj}$ is the time scale for injection of particles from the upstream region with fixed distribution $f_{inj}$, and $\tau_{esc}$ is the escape time. We assume that the initial distribution within the layer, $f_0$, and the upstream injected distribution are both Maxwellian with initial temperature $T < m_e c^2$. For simplicity, we consider the lowest-order (nonrelativistic) term in this expansion and normalize $f_0 = \frac{2N_0}{\sqrt{\pi}}\sqrt{\varepsilon}\,\exp(-\varepsilon)$ by the number of particles $N_0$ within the initial layer, and $f_{inj}$ by the number of particles injected into the layer, $N_{inj} \propto V_{in}\tau_{inj}$, during reconnection. With these assumptions, the solution to (1) can be written as a sum of two terms, Eq. (3), involving $\beta = 1/(\alpha\tau_{esc})$ and the incomplete Gamma function $\Gamma_s(x)$. The first term accounts for particles initially in the acceleration region, while the second term describes the evolution of the injected particles. In the limit of no injection or escape ($\tau_{esc}\to\infty$ and $\tau_{inj}\to\infty$), the first term in (3) remains a thermal distribution with enhanced temperature $e^{\alpha t}T$, consistent with Ref. [30]. However, as reconnection proceeds, new particles continuously enter the acceleration region, and due to the periodic boundary conditions there is no particle escape. Thus, considering the case $\tau_{esc}\to\infty$ and assuming $N_0 \ll N_{inj}$, at the time $t = \tau_{inj}$ when reconnection saturates, the second term in (3) simplifies to Eq. (4). When $\alpha\tau_{inj} > 1$, this gives the relation $f \propto 1/\varepsilon$ in the energy range $1 < \varepsilon < e^{\alpha\tau_{inj}}$, as shown in Figure 3(b) by directly evaluating (4) for different $\alpha\tau_{inj}$. Interestingly, this energy range for the power law lies below that of the heated thermal particles in the initial layer. Thus, in the limit $N_0 \sim N_{inj}$, the first term in (3) should be retained, and the power law produced is sub-thermal relative to this population. While it is straightforward to obtain the relativistic corrections arising from the injected distribution (2), we emphasize that these terms do not alter the spectral index.
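The $f \propto 1/\varepsilon$ scaling in this regime also follows from a short change-of-variables argument using only the two ingredients named above (constant-rate injection plus exponential energization); the sketch below is a heuristic companion to the full solution (3), not a replacement for it.

```latex
% Heuristic derivation of the f ~ 1/eps power law (tau_esc -> infinity).
\begin{align*}
  \varepsilon(t) &= \varepsilon_0\, e^{\alpha (t - t')}
    && \text{particle injected at time } t' \text{ with } \varepsilon_0 \sim 1,\\
  dN &\propto dt'
    && \text{constant injection rate for } 0 < t' < \tau_{inj},\\
  f(\varepsilon) = \frac{dN}{d\varepsilon}
    &= \frac{dN}{dt'}\left|\frac{dt'}{d\varepsilon}\right|
     \propto \frac{1}{\alpha\,\varepsilon}
    && \text{since } |d\varepsilon/dt'| = \alpha\varepsilon ,
\end{align*}
% valid over 1 < eps < e^{alpha tau_inj}, the range quoted in the text.
```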
To estimate the acceleration rate $\alpha$, the energy change of each particle can be approximated by a relativistic collision formula [e.g., 32], where $V$ is the outflow speed and $v_x$ is the particle velocity in the $x$-direction. The time between two collisions is about $L_{is}/v_x$, where $L_{is}$ is the typical size of the magnetic islands (or flux ropes in 3D). Assuming the relativistic particles have a nearly isotropic distribution, this yields an expression for the acceleration rate, Eq. (6). Using this expression, we measure the averaged $V$ and $L_{is}$ from the simulations and estimate the time-dependent acceleration rate $\alpha(t)$. An example is shown in Figure 2(c); it agrees reasonably well with the rates obtained from the perpendicular acceleration and from the curvature drift acceleration. Figure 3(c) shows the time-integrated value $\alpha\tau_{inj} = \int_0^{\tau_{inj}} \alpha(t)\,dt$ for various simulations with $\sigma = 6$-$400$. For cases with $\alpha\tau_{inj} > 1$, a hard power-law distribution with spectral index $p \sim 1$ forms. For higher $\sigma$ and larger system size, the magnitude of $\alpha\tau_{inj}$ increases approximately as $\propto \sigma^{1/2}$.
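Since Eq. (6) itself is not reproduced in this extraction, the sketch below uses a standard head-on Fermi estimate as an assumed stand-in, $\alpha(t) \approx 2 V(t)\,v_x^2 / (c^2 L_{is}(t))$, to show how the time-integrated $\alpha\tau_{inj}$ would be accumulated from measured time series; the arrays `t`, `V`, and `L_is` are hypothetical placeholders for the simulation diagnostics.

```python
import numpy as np

c = 1.0                                 # speeds in c, lengths in d_i, times in 1/omega_pi
t = np.linspace(0.0, 2.0e4, 400)        # hypothetical measurement times
V = 0.3 * c * np.tanh(t / 3.0e3)        # hypothetical outflow-speed history
L_is = 20.0 + 80.0 * t / t[-1]          # hypothetical island size growing in time
v_x = 0.9 * c                           # typical relativistic particle speed along x

# Assumed head-on Fermi stand-in for Eq. (6):
# fractional gain ~ 2 V v_x / c^2 per bounce, one bounce every L_is / v_x.
alpha = 2.0 * V * v_x**2 / (c**2 * L_is)

alpha_tau = np.trapz(alpha, t)          # time-integrated acceleration, alpha * tau_inj
print(f"alpha * tau_inj ~ {alpha_tau:.1f}  (power law expected if > 1)")
```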
Discussion - Considering the more realistic limit with both particle loss and injection, Equation (3) predicts a spectral index $p = 1 + 1/(\alpha\tau_{esc})$ when $\alpha\tau_{inj} > 1$, recovering the classical Fermi solution [e.g., 32]. If escape is caused by convection out of the acceleration region, $\tau_{esc} = L_x/V$, the spectral index should approach $p = 1$ when $\alpha\tau_{esc} \gg 1$ in the high-$\sigma$ regime. Although the present simulations employed periodic boundary conditions, most cases develop power-law distributions within two light-crossing times, indicating that the boundary conditions do not strongly influence the results. In preliminary 2D simulations using open boundary conditions [21], we have confirmed these general trends [Guo et al. 2014, in preparation]. For nonrelativistic reconnection, the acceleration rate is lower, and it therefore takes longer to form a power-law distribution. Taking the nonrelativistic limit of (6), if $V = 0.1c$, $v_x = 0.2c$, and $L_{is} = 100d_i$, reconnection must be sustained over a time $\tau_{inj} > 2\times 10^4\,\omega_{pi}^{-1}$ to form a power law, which significantly exceeds the simulation time of most previous studies. It has been suggested that current-sheet instabilities may strongly influence particle acceleration [13]. In contrast, the energy distributions reported here are remarkably similar in 2D and 3D, despite the broad range of secondary kink and tearing instabilities in 3D. This surprising result suggests that the underlying Fermi acceleration is rather robust and does not depend on the existence of well-defined magnetic islands. The strong similarities between the 2D and 3D acceleration spectra are also consistent with key similarities in the reconnection dynamics. In particular, the range of scales of the 2D magnetic islands is similar to that of the observed 3D flux ropes. In addition, the reconnection rate and flow speeds are quite similar between 2D and 3D, in agreement with other recent studies [33,34]. In large open systems, it remains to be seen whether 3D turbulence may affect the particle escape times. Another important factor that may influence these results is the presence of an external guide field $B_g$. Our preliminary simulations suggest that the key results of this letter will hold for $B_g < B_0$. For stronger guide fields, the energy release is slower, and the associated particle acceleration requires further study.
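The order of magnitude of the quoted threshold $\tau_{inj} > 2\times 10^4\,\omega_{pi}^{-1}$ can be checked with the same assumed head-on Fermi stand-in used above (again, Eq. (6) itself is not reproduced here); the snippet recovers the right order, with the remaining O(1) factor presumably coming from the isotropic averaging in the actual formula.

```python
# Order-of-magnitude check of the nonrelativistic estimate quoted in the text.
c = 1.0                        # speeds in c, lengths in d_i, so times come out in 1/omega_pi
V, v_x, L_is = 0.1 * c, 0.2 * c, 100.0

# Assumed head-on Fermi stand-in: fractional gain 2*V*v_x/c^2 per bounce,
# with one bounce every L_is / v_x.
alpha = (2.0 * V * v_x / c**2) / (L_is / v_x)   # per omega_pi^-1
tau_inj_min = 1.0 / alpha                       # condition alpha * tau_inj > 1

print(f"tau_inj > {tau_inj_min:.2e} / omega_pi")   # ~1.3e4, same order as the quoted 2e4
```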
We have demonstrated that in the regime $\sigma \gg 1$, magnetic reconnection is an efficient mechanism for converting the energy stored in the magnetic shear into relativistic nonthermal particles. These energetic particles contain a significant fraction of the total energy released and, quite interestingly, have a power-law energy distribution with spectral index $p \sim 1$ when $\alpha\tau_{inj} > 1$. Physically, this requires that the time scale over which particles are injected into the acceleration region be longer than the acceleration time of the first-order Fermi process. The results in this letter demonstrate that this condition is more easily achieved in regimes with $\sigma \gg 1$, but it may also be met for $\sigma \sim 1$ in sufficiently large reconnection layers. Our new findings substantiate the importance of fast magnetic reconnection in strongly magnetized plasmas and may be important for explaining the high-energy emission in systems such as pulsars, jets from black holes, and gamma-ray bursts. | 2014-10-15T05:36:27.000Z | 2014-05-16T00:00:00.000 | {
"year": 2014,
"sha1": "67674b1034d2d8dd9e6bbc53068ac55d8605619e",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevLett.113.155005",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "67674b1034d2d8dd9e6bbc53068ac55d8605619e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
232374692 | pes2o/s2orc | v3-fos-license | Professional Development of International Chinese Teachers Based on the Complex Dynamic Theory
From the perspective of complex dynamic theory, this paper analyzes the definition, characteristics, and developmental paths of the professional development of international Chinese teachers, and further explores how to improve that development in the new era. Re-examining the issue through concepts such as a global outlook, orientation to social demand, cross-cultural teaching and communication skills, big data technology, and the integration of industry and education shows that the professional development of international Chinese teachers is not a closed, static, homogeneous, self-contained system, but an open, dynamic, nonlinear, and self-adaptive one. The development of comprehensive capability is an important manifestation of this professional development. Based on the theory of complex dynamic systems and combined with China's "Belt and Road" humanistic initiative, this research gives a more comprehensive and in-depth account of the professional development of international Chinese teachers and offers a reference for related research.
INTRODUCTION
In recent years, as China's national strength has grown, international Chinese education has developed remarkably. According to data from the former Hanban (Office of the National Leading Group for Teaching Chinese as a Foreign Language), by December 2019 China had established 541 Confucius Institutes and 1,170 Confucius Classrooms in 162 countries/regions. The "Report on the Development of Chinese Language and Characters 2017" pointed out that 67 countries and regions around the world have incorporated Chinese language teaching into their national education systems through the promulgation of laws and decrees. As for international students in China, statistics from the Ministry of Education show that their number reached 489,200 in 2017, with the growth rate remaining above 10% for two consecutive years. There is thus a great demand for international Chinese teachers. Because learners differ in nationality, culture, language, race, age, and religious background, international Chinese teachers must possess interdisciplinary knowledge, intercultural competence, and related skills. The Ministry of Education document Promoting the Joint Construction of "The Belt and Road" - Educational Action puts forward the propositions of the times regarding the training of international talents in the "Belt and Road" countries. Therefore, the characteristics and paths of the professional development of international Chinese teachers, as well as their interpretation, are worthy of in-depth study.
THE DEFINITION OF THE PROFESSIONAL DEVELOPMENT OF INTERNATIONAL CHINESE TEACHERS
Teachers' professional development is a dynamic process of continuous improvement in professional quality and structure. It is also an autonomous process through which teachers gradually mature as educational and teaching professionals, characterized by initiative, effectiveness, individual independence, and inherent relativity [1]. From first entering the workplace to becoming outstanding practitioners, teachers are affected by various internal and external factors [2]. Successful professional development depends on teachers' working conditions and the way they perceive themselves within this framework, which is crucial to sustaining a teaching career [2]. Accordingly, the professional development of international Chinese teachers can be defined as "the continuous development of the individual teacher's continuous exploration of time, including the enhancement of beliefs in international Chinese education, the updating, broadening and deepening of knowledge and skills in Chinese and related subjects, and the ability to produce time knowledge and the ability to cooperate with colleagues in the international Chinese education community, and eventually grow into a learning, reflective and research-oriented teacher" [3]. Complexity science changes our worldview and way of thinking; using the complex thinking paradigm to reflect on the simple thinking paradigm can promote innovation in human concepts and methods. After re-examining existing work on teachers' professional development from the perspective of complexity theory, we find that it is not a closed, static, homogeneous, and self-sufficient system, but an open, dynamic, and complex system in which nonlinear, heterogeneous, and adaptive elements interact. It is therefore not enough simply to study teachers' learning processes; formative intervention research must also be conducted [4]. Complex dynamic theory not only helps to explain the elements of teachers' professional development objectively, comprehensively, and scientifically, but also helps us understand both the universality and the particularity of that development. Viewing the definition of the professional development of international Chinese teachers through the complex dynamic theoretical system, we hold that this development depends on complex and diverse environmental factors such as education policies, education systems, subject orientation, social needs, and economic levels. The professional development of teachers in different countries has different priorities at different stages and follows different paths, thus exhibiting country-specific and nonlinear development, and it ultimately serves to improve teachers' comprehensive ability (including language ability, body language, the human sensory system, subject knowledge, background knowledge, teaching experience, values, and other capacities). Different standards should be proposed for teachers in different periods and at different stages, guiding them to develop active learning, reflective, and research abilities. Teachers' professional development is a complex dynamic system affected by many factors.
The Professional Development of International Chinese Teachers is Complex and Dynamic
Society, culture, politics, economy, education, institutions, and the Internet together constitute the available educational environment, and every element of this system is itself an open system. The professional development of international Chinese teachers is a complex dynamic system composed of subsystems such as the available educational environment, teaching objects, teaching content, teaching goals, teaching evaluation, teaching experience, and years of experience. Within this system, the available educational environment, teaching objects, teaching content, and the other subsystems interact with, constrain, and depend on one another, forming a dynamic, interwoven network. Each element is itself a complex dynamic system composed of its own subsystems, and under the interaction of these subsystems new phenomena and new representations constantly emerge. Every subtle change in the system causes larger or smaller changes in the professional developing system of teachers. In this system, the teacher subject realizes professional development through demand, perception, learning, and frequency in the real process of education and teaching. The teacher subject includes both individual teachers and teacher groups; we do not distinguish them in detail in this article. Individual teachers and teacher groups display both diversity and unity. Teachers themselves can be regarded as complex dynamic systems whose influencing factors include social, cultural, political, educational, and family background, values, and the teaching environment and teaching experience. These factors present themselves differently and may seem unrelated, but in fact they are closely connected and interact with one another. Teachers should consciously attend to their own professional development, actively learn professional knowledge, and continuously accumulate teaching experience; indeed, the speed of professional development is often determined by the teacher subject. The effects of the systems are mutual: when one system acts on another, it is inevitably acted upon in return. The core of the professional development of international Chinese teachers is a process of continuous self-reconstruction and self-complication through learning, practice, and reflection under the influence of multiple factors. In this process, a spiral upward development model forms, allowing teachers to adapt to the teaching environments of different countries and to complete the teaching tasks of different stages. With the development and changes of society, cultural, technological, and operational capabilities have become the foundation of society's high complexity and the core of human complexity. Teachers' professional development is not a completely self-sufficient system, because its existence and development require the human brain, the human sensory system, and biologically evolved life to work together with other factors.
The Professional Development of International Chinese Teachers is Nonlinear
The professional development of international Chinese teachers is not a closed system. Its development draws not only on the internal energy of the system but also on negative entropy, that is, complex organization and information. Teachers' physiological, professional, and developmental systems are all open systems, mutually inclusive among themselves and with the external environment. The international Chinese teachers' professional developing system is both part of other systems and self-contained. Because the teacher subject has a high degree of complexity, it enjoys the greatest freedom and the greatest dependence in human society; likewise, the greater the freedom of the international Chinese teachers' professional developing system, the greater its dependence on the education and teaching system. The many individual factors that affect the professional development of international Chinese teachers do not combine by simple superposition; rather, new relationships of opposition, competition, and complementarity are randomly generated through the interaction of many individuals. Change in the professional development of international Chinese teachers is a nonlinear phenomenon: even a slight change in the motivation for professional development may cause qualitative change in the whole system, and such cause and effect are disproportionate. In a linear system, the combined effect of two different factors is simply the superposition of their individual effects; in a nonlinear system, a tiny factor can lead to dramatic results that cannot be measured by its magnitude. The professional development of international Chinese teachers is affected by factors such as society, politics, economy, values, subject knowledge, background knowledge, and teaching experience, and its changes are random and unpredictable. Far from a state of balance, professional development is affected by external factors, and one or more communicative individuals may undergo sudden change. At the same time, individual teachers also self-regulate, forming relationships of adaptation, variation, competition, or complementarity with other subjects in the same system. In short, the available environment in the professional developing system of international Chinese teachers is full of needs and requirements, opportunities and restrictions, rejections and invitations, permissions and constraints. Since each element is affected by other factors, and each teacher subject has different prior experience, even teachers with exactly the same subject knowledge and teaching experience cannot have exactly the same developmental process and results. Therefore, a simple linear method cannot capture the formative rules and characteristics of the developing system of international Chinese teachers.
Comprehensive Ability Development is the Embodiment of the Professional Development of International Chinese Teachers
Comprehensive ability development is the embodiment of the professional development of international Chinese teachers. Teachers continually accumulate experience according to different educational and teaching environments, teaching goals, and teaching contents, and they continually improve their professional development under the impetus of reflective ability. The comprehensive ability of the human body includes coordinated abilities such as language and body language, the human sensory system, subject knowledge, background knowledge, teaching experience, and values. The professional developing environment of teachers includes any information accessible to the five senses or to synaesthesia: visual and auditory channels are the main channels of information input, while touch, smell, and taste are auxiliary channels; sometimes multiple senses must cooperate to complete the input. Depending on teachers' professional backgrounds, different cognitive resources are invested, and different cognitive processes and results follow. Investigating teachers' multi-modular cognitive model helps in studying the process of professional development and summarizing its laws and characteristics. The developing process is not linear: various pulling forces make it sway up and down, or even move backwards. Professional development continues to cycle; although the direction of the cycle is the same, the trajectory of each return is slightly different. The consistency of professional development motivation and purpose depends on the frequency of system circulation, and frequency determines the degree of similarity. The more complicated the social relationship, the higher the intellectual demands on the brain and the larger the cortex. In this sense, the strength of professional development depends on the number of cognitive modules and on the type and number of communicative situations or items of knowledge mastered. Because human brain power is limited, the amount of information the five senses can capture, the number of communicative fields used in play-acts, and the number of concepts mastered are all factors that influence the professional development of international Chinese teachers.
THE PROFESSIONAL DEVELOPING PATH OF INTERNATIONAL CHINESE TEACHERS
Countries and regions around the world are at uneven levels of development; there are both developed and developing economies. A single concept cannot be used to cultivate a multi-level, multi-type, flexible, and diverse international team of Chinese language teachers. It is necessary to implement a point-to-point connection and line-to-surface strategy, making breakthroughs at key points and advancing practically, so as gradually to form a network path for teachers' professional development. Under the guidance of a reflective model of international Chinese teachers' professional development, Wang Tianmiao divides the professional development paths of teachers into the following six types:
the first is to establish a professional growth portfolio; the second is to use reflective teaching concepts and methods during teaching practice; the third is to use microteaching to promote teachers' reflection and mutual exchange; the fourth is to use action research paradigms to guide teachers' educational practice and research; the fifth is to use educational narrative research methods to prompt teachers to reflect more deeply on real situations in the educational world; and the sixth is to build a professional learning community for teachers, so that teachers' perceptual practical experience can be upgraded into rational and systematic knowledge [3]. There are six main professional developing paths for international Chinese teachers: lifelong learning, action research, teaching reflection, peer mutual assistance, professional guidance, and project research. Among them, lifelong learning is the prerequisite guarantee of teachers' professional development; action research is its basic way; teaching reflection is an indispensable means of professional growth; peer mutual assistance is an effective method; professional guidance is an important condition; and subject research is an effective carrier. The professional development of international Chinese teachers is the process by which novice teachers grow into expert teachers, as well as a process of continuous development and improvement of teachers' ideas, knowledge, and capabilities for international Chinese teaching. Mutual aid and collaboration among teachers produce benefits that significantly affect their professional lives, and thus play an important role in the strategy of teachers' professional development [5]. Against the background of the learning society, the idea of lifelong learning has come to be widely valued, and teachers are its direct practitioners. These factors have become the driving force that promotes teachers' independent learning and the continuous improvement of their professional level.
Establishing the Concept of a Global View and Accurately Positioning the Macro Goals of Teachers' Professional Development
The vision of the "Belt and Road" requires teachers' professional development to establish a global outlook. The programmatic document Promoting the Joint Construction of the "Belt and Road" -Educational Action issued by the Ministry of Education focuses on the connectivity of policy, infrastructure, trade, finance and people, known as "The Five-Connectivity Program" of the "Belt and Road" construction. The countries along the "Belt and Road" are mutually dependent, with a long history of educational exchanges and broad prospects for educational cooperation. Under the framework of the "Belt and Road", all countries along the route join hands to promote the common prosperity of education, which clarifies the global perspective of teachers' professional development from policy. As everyone knows, UNESCO actively promote "global citizenship" education and cross-cultural education, the E.U's foreign language education policy embodies the spirit of" foreign language education for intercultural citizenship", Japanese scholars put forward the concept of "global education" and carry out quality education necessary for "multicultural coexistence". We believe that the cultivation of a global vision for the professional development of international Chinese teachers in the "Belt and Road" initiative should conform to the mainstream of international values.
Guided by Social Needs, Determining the Specific Direction of Teachers' Professional Development
Social needs determine the goals and directions of the professional development of international Chinese teachers. Traditional training of Chinese language teachers has mainly focused on cultivating general Chinese talents. In the new era, interdisciplinary combinations of Chinese with other disciplines have enhanced the purposefulness and practicality of Chinese learning, such as business Chinese, medical Chinese, military Chinese, legal Chinese, energy Chinese, and sports Chinese; cultivating specialized teachers proficient in the use of Chinese in a particular field is thus a direction for teachers' professional development. With the continuous expansion of the breadth and depth of exchanges between China and other countries in politics, economy, culture, science, and technology, the demand for high-level Chinese talents around the world is increasing, and possessing interdisciplinary knowledge has become a main goal of teachers' professional development. Social demand not only requires teachers to have teaching ability, adaptability, cross-cultural communication ability, foreign language proficiency, and psychological resilience, but also promotes the cross-integration of Chinese with foreign languages, history, philosophy, management, finance, architecture, energy, and other disciplines. In short, international Chinese teachers must adhere to a professional developing concept of openness, tolerance, and sharing, and be able to serve the needs of China and the countries along the "Belt and Road" in fields such as economy, education, military affairs, law, energy, and medicine.
Improving Teachers' Professional Quality, Possessing Multilingual Intercultural Communication Skills and Teaching Ability
The construction of the "The Five-Connectivity Program" with the aim of win-win cooperation has been rooted in a multilingual and multicultural environment from the very beginning. The training of international Chinese teachers for the "Belt and Road" needs to be multilingual and multicultural. The construction of the "Belt and Road" involves different countries and different cultural traditions. The cultivation of teachers under the multicultural background must take into account multilingual and crosscultural elements, while the professional development of teachers implicated in the construction of the educational community must cultivate multilingual intercultural communication skills and teaching skills. The languages and cultures of different regions and countries collide and exchange, showing a process from conflict to acceptance and then to integration. China advocates respecting and protecting multiculturalism in the process of globalization, which has become the consensus of many international organizations such as UNESCO and the European Union.
Relying on Big Data Technology to Build a Professional Developing Platform of Teachers
Against the background of the deep integration of big data technology with economic, educational, and social development, the continuous promotion of "Internet plus teachers' professional development" is of great practical significance for enhancing the professional development of international Chinese teachers. To this end, it is necessary to integrate the latest technologies, such as the mobile Internet, cloud computing, and artificial intelligence, to build an all-round international professional developing platform for Chinese teachers. Such a platform can leverage the advantages of big data to track the professional developing trends of international Chinese language teachers worldwide in a timely and effective manner, realize the real-time transmission and sharing of teaching resources, and enhance the core competitiveness of international Chinese language teachers.
Deepening the Integration of Industry and Education, Guiding Teachers' Sustainable Development That Combines Theory and Practice
The professional development of teachers should deepen the integration of industry and education, guiding the in-depth integration of teachers' professional theoretical literacy and practical ability to achieve sustainable development. The language industry is quietly becoming a new point of economic growth. To better promote Chinese teachers' professional development, it is necessary to deepen the integration of industry and education and strengthen the combination of theory and practice. At the end of 2017, the Several Opinions of the General Office of the State Council on Deepening the Integration of Industry and Education pointed out that it is necessary to deepen the integration of industry and education and promote the organic connection of the education chain, the talent chain, and the industrial chain. It is necessary to grasp the distinctive features of Chinese language and culture and give full play to the competitive advantages of the global Chinese education market. Following the law of market development, a flexible industrial operating mechanism will gradually form, opening a new era of a language and cultural industry with Chinese characteristics. In this way, the optimal allocation of social resources can be realized, and the sustainability of teachers' professional development can be guided.
CONCLUSION
The professional development of international Chinese teachers is an interactive system of many elements. Teachers' professional ability is generated in the interaction of these elements, and professional development exists in actual teaching and learning activities. The process of these activities includes the interaction between the teacher subject and the environment, and it also reflects the various professional abilities of teachers themselves. | 2020-12-24T09:12:37.987Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "965774a8d20ec61a09f524d7baefd5371ddc8f42",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/assehr.k.201214.024",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8dee1e9e2f5706aadcba3979cc479d301b01b45f",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
11293581 | pes2o/s2orc | v3-fos-license | An immunohistochemical analysis of the neuroprotective effects of memantine, hyperbaric oxygen therapy, and brimonidine after acute ischemia reperfusion injury.
PURPOSE
This study applies treatment methods to rat retinas subjected to acute ischemia reperfusion injury and compares the efficacy of memantine, hyperbaric oxygen (HBO) therapy, and brimonidine by histopathological examination.
METHODS
Thirty adult Wistar albino rats were divided into five groups after retinal ischemia was induced by elevating the intraocular pressure to 120 mmHg. The groups were as follows: group 1: control; group 2: acute retinal ischemia (ARI) model without treatment; group 3: memantine (MEM) treatment group; group 4: HBO therapy group; and group 5: brimonidine (BRI) treatment group. In the control group, right eyes were cannulated with a 30-gauge needle and removed without causing any intraocular pressure change. The ARI group was an acute retinal ischemia model without treatment. In the MEM group, animals were given a single dose of intravenous 25 mg/kg memantine via the tail vein after induction of ARI. In the HBO group, HBO treatment was applied for nine days beginning 2 h after ARI. In the BRI group, a 0.15% brimonidine tartrate eye drop was applied twice a day (BID) for seven days before ARI. Twenty-one days after establishing ischemia reperfusion, the right eyes were enucleated after cardiac glutaraldehyde perfusion and submitted to histological evaluation.
RESULTS
On average, the total retinal ganglion cell number was 239.93 ± 8.60 in the control group, 125.14 ± 7.18 in the ARI group, 215.89 ± 8.36 in the MEM group, 208.69 ± 2.05 in the HBO group, and 172.27 ± 8.16 in the BRI group. Mean apoptotic indexes in the groups were 1.1 ± 0.35%, 57.71 ± 0.58%, 23.57 ± 1.73%, 15.63 ± 0.58%, and 29.37 ± 2.55%, respectively.
CONCLUSIONS
The present study shows that memantine, HBO, and brimonidine therapies were effective in reducing the damage induced by acute ischemia reperfusion in the rat retina. Our study suggests that these treatments had beneficial effects due to neuroprotection, and therefore may be applied in clinical practice.
Central retinal artery occlusion (CRAO) causes severe loss of vision. Treatment trials include massaging of the globe, paracentesis, antiglaucomatous eye drops, and hemodilution or lysis therapy, which in individual cases can improve the visual outcome, although in general the prognosis remains poor.
Acute retinal ischemia (ARI) is a vision-threatening condition encountered in several pathologies, including central and branch retinal artery occlusion, anterior ischemic optic neuropathy, venous occlusive disorders, ocular trauma, and acute angle closure glaucoma [1].
Reperfusion after initial ischemia paradoxically maintains the destruction process, perhaps due to increased levels of extracellular neurotransmitters, reactive oxygen species, and waste products damaging previously unharmed cells during reoxidization [1,2].
Central retinal artery occlusion is a typical example of ARI. Classical and complex treatments have not yet yielded the expected success. Hyperbaric oxygen (HBO) therapy was successfully used in the treatment of central retinal artery occlusion [3,4] and branch retinal artery occlusion, including Susac's syndrome [5,6]. HBO therapy was reported to be useful in the treatment of ocular vascular diseases [7,8]. Vasoconstriction or vascular occlusion of the retinal vessels is probably a direct response to the interaction between free oxygen radicals and nitric oxide, together with the autoregulation that occurs in this treatment. During HBO therapy, oxygen saturation rises up to 23% and the retina is not damaged [9]. However, the acclaimed HBO therapy has the drawback that it must be applied shortly after the occurrence of the ischemia to be effective. A phase called "ischemic penumbra," characterized by its reversibility, was described as occurring before definitive ischemic damage [10]. The duration of this critical period is impossible to determine in humans. It is generally admitted that an accurate and rapid therapy must be applied within 24 h. Therefore, practical and urgent therapy is needed, and determining which treatment can provide this was the goal of our study [11][12][13][14].
During ischemia, there is activation of the extracellular glutamate-bound N-methyl-D-aspartic acid (NMDA) receptors, which provokes apoptosis by means of intracellular calcium accumulation [15,16]. Memantine, known as a glutamatergic NMDA receptor antagonist, is a derivative of amantadine that inhibits excitotoxicity and neuronal cell death. The protective effect of memantine on retinal ganglion cells (RGCs) has been explained by the presence of NMDA receptors in these cells [17]. Memantine has been used in the treatment of many clinical conditions, including influenza, Parkinson disease and spasticity, Alzheimer disease, and vascular dementia [18]. The neuroprotective properties of memantine have been studied by several laboratories in a large number of in vitro and in vivo animal models, such as in the case of brain stroke, glaucoma, and retinal ischemia-related conditions [19].
Alpha-2a adrenergic receptor agonists are thought to be neuroprotective, preventing RGC death independent of pressure reduction. In vivo studies have shown that in addition to lowering intraocular pressure, alpha-2a adrenergic agonists such as brimonidine decrease RGC death subsequent to increases in intraocular pressure, retinal ischemia, or optic nerve crush. Alpha-2 receptor activation has been implicated in enhanced neuronal survival in glaucomatous in vivo models, and hence alpha-2 agonists are some of the most studied neuroprotective agents [20][21][22][23].
The aim of our research was to apply the methods mentioned above to rat retinas subjected to acute ischemia reperfusion injury and to compare the efficacy of memantine, HBO therapy, and brimonidine by histopathological examination.
Subjects:
Thirty adult Wistar albino rats, all aged 24 weeks, were used. The rats were housed in the Marmara University Experimental Animal Laboratory under a constant light-dark cycle and were fed ad libitum with normal rat chow, with free access to water. All efforts were made to minimize pain and distress. Protocols were approved by the Şişli Etfal Education and Research Hospital Ethical Animal Care Committee (Permission no. 52/2007). Acute retinal ischemia model: Rats were anesthetized intraperitoneally with ketamine hydrochloride (100 mg/kg) and chlorpromazine (25 mg/kg). Pupils were dilated with a topical application of tropicamide 1% (Tropamid; Visufarma, Milan, Italy). Local anesthesia was obtained using proparacaine 0.5% (Alcaine; Alcon Labs., Ft. Worth, TX). The anterior chamber of each right eye was cannulated with a 30-gauge needle attached to a raised saline reservoir. Retinal ischemia was induced by elevating the intraocular pressure to 120 mmHg. Intraocular pressure was measured using a TonoPen XL tonometer (Mentor, Inc., Norwell, MA) calibrated according to the manufacturer's instructions.
A hand-held ophthalmoscope was used to visually inspect the retinal blood vessels and verify ischemia. After 60 min, the saline reservoir was lowered and the intraocular pressure and retinal circulation were allowed to return to normal over a period of 10 min. The cannula was then removed from the cornea and the animals were allowed to recover.
Hyperbaric oxygen therapy: For HBO therapy, rats were placed in a small cylindrical monoplace research chamber of 0.6 m³. The chamber was flushed with 100% oxygen for 10 min to vent the air inside before compression. Each HBO therapy session consisted of 100% oxygen at 2.5 atmospheres absolute (1.5 atmospheres in addition to normal atmospheric pressure) for 80 min, including 10 min for compression and 10 min for decompression. The first HBO therapy session was performed within 2 h of retinal ischemia. Treatments were conducted three times a day for two days, and then twice a day for seven days.
The experimental groups were as follows:
1. Control group: Right eyes were cannulated with a 30-gauge needle and removed without causing any intraocular pressure change.
2. ARI group: Acute retinal ischemia model without treatment.
3. MEM group: A single dose of intravenous 25 mg/kg memantine was given via the tail vein after induction of acute retinal ischemia.
4. HBO group: Beginning 2 h after acute retinal ischemia, HBO treatment was applied for nine days at the Department of Underwater and Hyperbaric Medicine, Istanbul Faculty of Medicine.
5. BRI group: A 0.15% brimonidine tartrate eye drop was applied twice a day for seven days before acute retinal ischemia.
Twenty-one days after establishing ischemia-reperfusion, the right eyes of all rats were enucleated after cardiac glutaraldehyde perfusion and submitted to histological evaluation [24,25]. Sacrifice and cardiac perfusion tissue processing: Rats were deeply anesthetized with ketamine hydrochloride (100 mg/kg) and chlorpromazine (25 mg/kg) and intracardially perfused with 2.5% glutaraldehyde in cacodylate buffer solution (0.1 M cacodylic acid sodium salt, pH 7.4). The eyes were enucleated and immersed in 10% neutral formaldehyde in 0.1 M phosphate buffer (pH 7.4) for postfixation. Paraffin-embedded sections of 20 μm thickness were prepared from the embedded eye tissue. Stereological analysis: The optical fractionator method was used to estimate the total number of RGCs. All cells were counted by the same person. In the optical fractionator technique, the optical dissector is combined with the fractionator sampling scheme [26,27]. Equipment: For quantification, Stereo Investigator version 7.5 (MicroBrightField, Colchester, VT) was used on a PC system connected to a Leica DM 4000 microscope (Leica, Wetzlar, Germany). A motorized automatic stage controlled movement in the x,y plane via a connected joystick. Movement in the z-axis was generated manually with the focus button on the microscope, and the distance was measured using a Heidenhain electronic microcator (Heidenhain, Traunreut, Germany). Delineation of the region of interest and cell counting: After the section was placed in the microscope, the circumference of the specimen was delineated using a 4× objective lens. Counting was performed using a 63× Plan Apo objective (NA=1.40). Sampling: An unbiased estimate of the total number of ganglion cells was obtained from the eye globe by choosing every 10th section according to the systematic random sampling procedure. A counting frame of 900 µm² within a sampling grid of 40,000 µm² was found to be optimal for this study. Dissector height was 16 μm, and a 2 μm guard zone at the uppermost part of the section was excluded from the analysis at every step. Thus, a thickness sampling fraction of 16 μm/t was used, where t represents the mean section thickness. Estimate of total cell number: The total number of cells (N) in one retina was estimated as outlined by West et al. [26,27] using the equation N = ΣQ⁻ × (1/tsf) × (1/asf) × (1/ssf), where ΣQ⁻ is the total number of RGCs counted in all optically sampled fields of the retina, ssf is the section sampling fraction (1/10), asf is the area sampling fraction (900/40,000), and tsf is the thickness sampling fraction (the dissector height [16 μm] divided by the estimated mean section thickness). Immunohistochemical processing: Serial 20 μm sections of paraffin-embedded blocks were inspected, and medium sections were selected for the immunohistochemical procedure. Thirty eye sections stained with an in situ cell death detection kit, POD (Roche Diagnostics; Mannheim, Germany), were used for semiquantitative analyses.
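As a minimal sketch of the optical fractionator arithmetic described above, the snippet below computes the total-cell estimate from the paper's stated sampling fractions; the raw count `sum_Q` and the measured mean section thickness `t_mean_um` are hypothetical inputs, not values from the study.

```python
def optical_fractionator(sum_Q, t_mean_um, dissector_h_um=16.0,
                         frame_um2=900.0, grid_um2=40_000.0, ssf=1 / 10):
    """Estimate total cell number N = sum(Q-) * (1/tsf) * (1/asf) * (1/ssf)."""
    tsf = dissector_h_um / t_mean_um   # thickness sampling fraction
    asf = frame_um2 / grid_um2         # area sampling fraction
    return sum_Q * (1.0 / tsf) * (1.0 / asf) * (1.0 / ssf)

# Hypothetical example: 45 cells counted in the dissectors, 20 um mean thickness.
print(f"N ~ {optical_fractionator(45, 20.0):,.0f} ganglion cells")
```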
The staining procedure was performed according to the manufacturer's instructions. Eye sections were deparaffinized and dehydrated. After rinsing in phosphate-buffered saline (PBS; 0.1 mM, pH 7.2), the sections were pretreated. As detergent, Triton X-100 (Acros Organics; Geel, Belgium) was employed at 0.1% (v/v) in 0.1% (w/v) sodium citrate (Merck; Darmstadt, Germany) for 2 min on ice, followed by washing the slides twice in PBS at room temperature (RT). We employed a Bosch HMT 812C domestic microwave oven operating at a frequency of 2.45 GHz with five power level settings (0-900 W); the best results were obtained with a setting of 4 (600 W). After placing the slides in a plastic jar containing 200 ml of 0.01 M citrate buffer (pH 6), the samples were irradiated for 45 s, which raised the temperature from RT to 86 °C. At the end of irradiation, an extra 80 ml of distilled water at RT was added to the jar to cool the solution, and the slides were then quickly immersed in PBS at RT (rapid cooling). Background was diminished by preincubating samples with 2% BSA, 10% normal goat serum (Sigma-Aldrich, Taufkirchen, Germany), and 0.03% Triton X-100 in double-distilled water for 30 min at RT. Sections were then treated with the in situ cell death detection kit, POD, and incubated for 60 min at 37 °C in a humidified atmosphere in the dark. Subsequently, the slides were rinsed in PBS three times for 5 min each. They were then treated for 1 h at RT with a peroxidase-labeled anti-digoxigenin sheep Fab fragment (Roche Diagnostics; Mannheim, Germany), followed by washing.
Then, a mixture of 0.05% 3,3′-diaminobenzidine tetrahydrochloride chromogen concentrate and diaminobenzidine substrate buffer (SkyTek, UT) was used for the color reaction. Slides were counterstained with Mayer's hematoxylin. Counting: In each sampling frame, apoptotic, normal, and necrotic cells were marked using three different markers. Statistical analysis: All data are expressed as mean±standard error of the mean (SEM). Parametric test assumptions were met for the total RGC number and the apoptotic index of the RGC layer (RGCL). These were analyzed by one-way ANOVA, followed by multiple comparisons between pairs of groups according to Tukey's test. SPSS version 16.0 for personal computer (SPSS, Chicago, IL) was used, and a p-value <0.05 was considered statistically significant.
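For readers who want to reproduce this style of analysis outside SPSS, a minimal sketch is given below using scipy and statsmodels; the group arrays are hypothetical placeholder data shaped loosely like the reported group means, not the study's measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Hypothetical RGC counts per group (n=6 each).
groups = {
    "control": rng.normal(240, 8, 6), "ARI": rng.normal(125, 7, 6),
    "MEM": rng.normal(216, 8, 6), "HBO": rng.normal(209, 2, 6),
    "BRI": rng.normal(172, 8, 6),
}

# One-way ANOVA across the five groups.
F, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {F:.1f}, p = {p:.2g}")

# Tukey HSD post-hoc pairwise comparisons.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```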
RESULTS
The histomorphometry of the ischemic retina was assessed for retinal damage 21 days after ischemia. Severe damage due to high intraocular pressure-induced ischemia was observed in the RGCL, with approximately 52% of cells surviving (125.137±7.184 cells remained, n=6). Cell numbers in the normal retina were 239.926±8.599 in the ganglion cell layer (GCL; n=6, p<0.001). Cell numbers were 215.89±8.36 in the MEM group, 208.69±2.05 in the HBO group, and 172.27±8.16 in the BRI group. Morphometric analysis showed that the percentages of surviving RGCs in postischemic retinas treated with HBO, memantine, and brimonidine were 86.9%, 89.9%, and 71.7%, respectively (p<0.001 for all treatment methods). Considering RGC counts, there was no statistically significant difference between the control group and the MEM group (p>0.05), but there were statistically significant differences between the control group and the HBO (p<0.05) and BRI groups (p<0.001). There were also statistically significant differences between the ARI group and the MEM group (p<0.001), HBO group (p<0.001), and BRI group (p<0.01; Figure 1 and Table 1). Table 1 lists the mean total ganglion cell count ± SEM, mean dissector number, section thickness, number of sampled sections, coefficient of error, and coefficient of variation of the stereological analysis for the ischemia/reperfusion and treatment groups.
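The survival percentages quoted above follow directly from the group means; the short check below confirms the arithmetic (the values are the means reported in this section).

```python
# Survival fractions relative to the control-group mean RGC count.
control = 239.926
for name, mean in [("ARI", 125.137), ("HBO", 208.69),
                   ("MEM", 215.89), ("BRI", 172.27)]:
    print(f"{name}: {100 * mean / control:.1f}% of control")
# ARI: 52.2%, HBO: 87.0%, MEM: 90.0%, BRI: 71.8% -- matching the quoted
# ~52%, 86.9%, 89.9%, and 71.7% to rounding.
```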
Terminal uridine nick 3′ end labeling (TUNEL)-positive stained cells were observed only in the GCL. At three weeks postischemia in the ARI group (n=6), the number of TUNEL-positive cells had increased significantly in each section (57.71±0.58% in the ARI group versus 1.1±0.85% in the control group; p<0.001; n=6). In the HBO therapy group (15.6±0.58%; p<0.001; n=6), MEM treatment group (23.6±1.73%; p<0.001; n=6), and BRI treatment group (29.37±2.55%; p<0.001; n=6), the percentage of TUNEL-positive cells in the RGCL in each section was significantly reduced compared to the ischemic retina. Considering the apoptotic index, there were statistically significant differences between the control group and the MEM group (p<0.001), HBO group (p<0.001), and BRI group (p<0.001). There was also a statistically significant difference between the ARI group and the MEM group (p<0.001), but there was no statistically significant difference between the ARI group and the HBO group (p>0.05) or BRI group (p>0.05; Figure 2, Figure 3, Figure 4, and Table 2).
DISCUSSION
The present study shows that memantine and HBO therapy were effective in reducing the damage induced by acute ischemia reperfusion in the rat retina. Brimonidine was found to be the least effective therapy. Our study suggests that these treatments had beneficial effects due to neuroprotection, and may therefore be applied in clinical practice.
In this study, the ischemia reperfusion model was produced by an acute increase of intraocular pressure, because this model is widely preferred in this type of research. Moreover, central retinal artery occlusion can be directly observed by ocular fundus examination with this approach. The validity of our ischemia reperfusion model was confirmed by comparing the mean RGC count and apoptotic index between the ARI group and the control group; a statistically significant difference was found between these groups, showing that the ARI model was effective. A bolus intravenous injection of 25 mg/kg memantine was previously found to be lethal in rabbits, while smaller doses such as 1-10 mg/kg had neither side effects nor efficacy [28]. Philip et al. [29] applied 20 mg/kg bolus intravenous injections of memantine in rats, and no lethal effect was evident. In our pilot study on rats, we showed that a 25 mg/kg bolus intravenous injection of memantine was not lethal; we can conclude that rabbits are more sensitive than rats. Therefore, we suggest that further clinical studies are needed to investigate doses of intravenous memantine that are effective and not lethal. CRAO is a typical example of ARI. Since von Graefe first described CRAO in 1859 as an embolic event in the central retinal artery of a patient with endocarditis [10][11][12], classical and complex treatments have not yet yielded the expected success. The classically proposed HBO therapy has the limitation that it must be applied soon after the occurrence of the ischemia. The generally accepted period during which the therapy will be effective is 6 h, but in humans the duration of this critical period is impossible to determine. It is generally acknowledged that an accurate and rapid therapy must be applied within 24 h. Therefore, a practical, urgent treatment is needed, and our study targeted the goal of identifying this treatment [11][12][13][14]. Neuroprotection has been a matter of debate for years, since the introduction of experimental ischemia and glaucoma models in the ophthalmology literature. RGCs are a major target both for the quantification of ischemic damage and for evaluation of the therapeutic potential of the agent used. It is well established that the death of RGCs secondary to ischemia occurs via the process called apoptosis [30]. An increasing amount of evidence indicates that drugs showing antiapoptotic activity may decrease neuronal death, a major aim of neuroprotective therapy [31,32].
Consequently, in our study the percentage of cells undergoing apoptosis was accepted as a main outcome measure, together with the total number of RGCs. To be neuroprotective, a drug must have properties such as good vitreous diffusion, durability, and a high concentration in the targeted tissues. Furthermore, it must have corresponding receptor binding sites, presumably on the optic nerve or the retina.
The neuroprotective effects of memantine and topical bunazosin were investigated in an optic nerve ischemia model induced by the delivery of endothelin-1 (ET-1) [33,34]. Memantine, an NMDA antagonist, was shown to be efficacious in terms of the topographic parameters of the optic nerve (cup area, cup depth, and rim volume); in addition, it was found to be neuroprotective in a rabbit model of optic nerve ischemia [33]. Memantine is currently being used in Europe for the treatment of Parkinson disease and spasticity, and it was recently approved in Europe for the treatment of Alzheimer disease and vascular dementia [18].
The neuroprotective effect of memantine has been demonstrated in many studies, especially in terms of acute brain ischemia [34,35]. In ophthalmology, however, contrary to our ARI model, these treatments were essentially applied to chronic glaucoma models in experimental studies. To our knowledge, only a few animal studies about neuroprotective treatments in ARI have been performed [36][37][38]. The aim of one of the studies was to quantify vitreous amino acid concentrations in pressure-induced retinal ischemia, and to evaluate the neuroprotective effect of memantine administered before and at two time intervals after ischemia [38]. They indicated that memantine reduced ganglion cell loss when given systemically before or within 30 min of retinal ischemia.
Lagrèze et al. [39] induced acute ischemia reperfusion injury in rats and compared the efficacy of cerestat, memantine, and riluzole, arguing that these drugs have a neuroprotective effect that reduces the excitotoxic damage of retinal neurons. WoldeMussie et al. [38] investigated the neuroprotective effect of memantine, an NMDA receptor channel blocker, in two RGC injury models in rats. They found an approximately 80% reduction in RGC number two weeks after partial optic nerve injury; in that study, memantine (5 mg/kg) caused a twofold increase in compound action potential amplitude and a 1.7-fold increase in the survival of RGCs.
In terms of RGC count, there was no statistically significant difference in our study between the control group and the MEM group (p>0.05). These results are consistent with those of other studies [23,[36][37][38][39]]. However, a statistically significant difference was found between the ARI group and the MEM group. Therefore, we assume that a single bolus dose of memantine may have a neuroprotective effect in ARI.
In view of the apoptotic index, in contrast to the statistically significant difference found between the ARI and MEM groups, there was no statistically significant difference between the ARI and HBO groups. We think that these results may reflect cell death originating from a necrotic mechanism more than from apoptosis. In addition, our study showed that memantine has a neuroprotective effect after ARI.
The European Committee of Hyperbaric Medicine determined the indications of HBO therapy. The treatment priority of acute ocular ischemic pathologies is classified as level C, meaning optional. However, in ophthalmological practice, HBO is the first-choice therapy in ARI. In HBO treatment studies on clinical conditions implying ARI, the clinical neuroprotective effect of HBO has been discussed [40,41]. Nevertheless, the efficacy of this method has not been proven immunohistochemically by experimental studies. The major parameters of the studies were visual prognosis and the time lag from the onset of symptoms to the beginning of hyperbaric oxygenation treatment, as well as the time lag until the beginning of retinal reperfusion. Beiran et al. [3] and Weinberger et al. [4] concluded that HBO therapy appears to have a beneficial effect on visual outcome in patients with CRAO.
Our study demonstrated the neuroprotective effects of HBO by immunohistochemical evaluation of RGC counts and the apoptotic index in an ARI model. The neuroprotective effects of HBO were compared to those of memantine and were found to be similar: when the efficacy of memantine and HBO treatments is compared using the same parameters, they show equivalent therapeutic value, although in view of the apoptotic index results HBO therapy seems to be somewhat more effective. We think that the difference in apoptotic index noted between these two groups may arise from an as yet unexplained mechanism of apoptosis.
Brimonidine tartrate, an α2-receptor agonist, is an accepted modality for the medical treatment of glaucoma. The α2-receptor agonists have been shown to protect RGCs in experimental models of optic nerve degeneration [42], chronic ocular hypertension [43], transient ischemia [44], and photoreceptor degeneration [45]. Aktas et al. [46] have shown a neuroprotective effect of topically applied BRI in a rabbit model of endothelin-1-induced optic nerve ischemia. As a topical agent, brimonidine is considered to have neuroprotective potential if it can penetrate into the vitreous, be maintained there by topical application for a given period of time, and act on receptors in its target tissues, e.g., the retina and the optic nerve [47].
Based on knowledge of brimonidine receptors and pharmacological concentrations of the drug in the retina, researchers have suggested that the neuroprotective effect of brimonidine may be mediated by vascular modulation or by the upregulation of brain-derived neurotrophic factor in the RGCs [48]. Accordingly, brimonidine treatment was associated with significant prevention of RGC injury [43]. The effects of brimonidine were reported to include a dose-dependent increase in RGC survival and function in a partial crush injury model [20]. In this study, we found a statistically significant difference between the BRI and ARI groups in both RGC count and apoptotic index. When comparing the HBO and MEM groups with the BRI group, there was no statistically significant difference. In view of these data, further studies need to focus on the modality of application of brimonidine, along with its duration and dosing, in ARI treatment.
In conclusion, a single bolus dose of intravenous memantine and HBO therapy were found to be highly effective in the treatment of ARI; the topical application of brimonidine was also found to be effective as prophylactic treatment. In this manner, brimonidine seems to be a good choice of preventive therapy for patients at high risk of developing CRAO. Our study suggests that when HBO therapy is not immediately available, or when some time is needed before it can begin, as occurs in most cases, memantine and brimonidine are practical and valuable therapeutic and prophylactic agents in acute ischemia. Further and more detailed clinical studies of these two treatments are needed to determine modes of application, doses, and the maximum time after the ischemic period within which treatment remains effective. | 2016-05-04T20:20:58.661Z | 2011-04-26T00:00:00.000 | {
"year": 2011,
"sha1": "98d8bb3d47b55776464a8ff90d0654a57387a3d7",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b360b62e126b44f14b8a71cb4df67772ad28f11e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118152106 | pes2o/s2orc | v3-fos-license | The Deepest Constraints on Radio and X-ray Magnetic Activity in Ultracool Dwarfs from WISE J104915.57-531906.1
We report upper limits to the radio and X-ray emission from the newly discovered ultracool dwarf binary WISE J104915.57$-$531906.1 (Luhman 16AB). As the nearest ultracool dwarf binary (2 pc), its proximity offers a significant advantage for studying plasma processes in ultracool dwarfs, which are more similar in gross properties (radius, mass, temperature) to the solar system giant planets than to stars. The radio and X-ray emission upper limits from the Australia Telescope Compact Array (ATCA) and Chandra observations, each spanning multiple rotation periods, provide the deepest fractional radio and X-ray luminosities to date for an ultracool dwarf, with $\log{(L_{\rm r,\nu}/L_{\rm bol})\,[{\rm Hz^{-1}}]}<-18.1$ (5.5 GHz), $\log{(L_{\rm r,\nu}/L_{\rm bol})\,[{\rm Hz^{-1}}]}<-17.9$ (9 GHz), and $\log{(L_{\rm x}/L_{\rm bol})}<-5.7$. While the radio upper limits alone do not allow for a constraint on the magnetic field strength, we limit the size of any coherently emitting region in our line of sight to less than 0.2\% of the radius of one of the brown dwarfs. Any source of incoherent emission must span less than about 20\% of the brown dwarf radius, assuming magnetic field strengths of a few tens to a few hundred Gauss. The fast rotation and large amplitude photometric variability exhibited by the T dwarf in the Luhman 16AB system are not accompanied by enhanced nonthermal radio emission, nor by enhanced heating to coronal temperatures, as observed on some higher mass ultracool dwarfs, confirming the expected decoupling of matter and magnetic field in cool neutral atmospheres.
Introduction
Recently, the discovery of a brown dwarf binary only 2 parsecs from the Sun was announced, making it the third closest system after the Alpha Centauri system and Barnard's star (Luhman 2013). With L7.5 and T0.5 spectral types (Burgasser et al. 2013b), the Luhman 16AB system (also known as WISE J104915.57−531906.1) has quickly become a benchmark for the study of ultracool atmospheres. A unique feature of the Luhman 16AB system is the large amplitude photometric variability (11% in an i + z filter with a period of 4.87±0.01 hr; Gillon et al. 2013) of the T0.5 component, with rapid evolution of the global weather patterns on timescales of about a day (Crossfield et al. 2014). The two components of Luhman 16AB are separated by 1.5", or 3 AU (Luhman 2013), so they are not able to influence each other via magnetic interactions.
In principle, photospheric features causing photometric variability could be magnetic- or cloud-related; while there have been sporadic measurements of magnetic activity in mid-L and later spectral type dwarfs, no clear trends have emerged which connect photometrically variable and magnetically active ultracool dwarfs. Since the atmospheres of ultracool dwarfs are increasingly neutral, they are less likely to support cool magnetic spots (Gelino et al. 2002; Mohanty et al. 2002) than their earlier type stellar counterparts, and the observed variability in late L- and T-dwarfs has been attributed to the presence of patchy clouds (Ackerman & Marley 2001; Burgasser et al. 2002; Marley et al. 2010). Despite this, there is evidence that in at least some cases, photometric variability of ultracool dwarfs is linked to magnetic activity (Clarke et al. 2003; Lane et al. 2007; Harding et al. 2013). Magnetic activity signatures in ultracool dwarfs are rare: to date, only 5 L dwarfs and one T dwarf have been detected in the radio band, and only 1 L dwarf and no T dwarf have been detected in the X-ray band, with a wide range of behaviors displayed among the small number of detections. Studies have shown that for late-M to early-L dwarfs, faster rotation results in an increased radio detection fraction (McLean et al. 2012), while X-ray emission seems to be suppressed, leading to a sort of super-saturation (Berger et al. 2010).
Dynamo models explain the generation of magnetic fields in ultracool dwarfs by extrapolating convection-driven geodynamo models with strong density stratification (Christensen et al. 2009): part of the convected energy flux is converted to magnetic energy to balance ohmic diffusion. Such scaling laws predict quite strong magnetic fields, of order 1 kG for a 1 Gyr old, 0.05 M⊙ brown dwarf with T_eff = 1500 K and an average density of 90,000 kg m−3. These scalings do not, however, explain why only a handful of ultracool dwarfs of spectral type L and T have been detected through radio observations (implying field strengths compatible with these extrapolations) while other objects have considerably lower upper limits. Other parameters must govern the generation of radio emission and/or field strength.
Due to the proximity of the Luhman 16AB system, its magnetic activity can be probed with unprecedented sensitivity. An absence of activity signatures would support the prevailing view that large amplitude photometric variability of early T-dwarfs is not connected to magnetism, but rather is a consequence of patchy cloud coverage. We report on two epochs of radio observations of Luhman 16AB with the Australia Telescope Compact Array in March and May 2013 and on a Chandra X-ray pointing carried out in November 2013.
These observations provide the most sensitive constraints to date on the radio and X-ray emission from ultracool dwarfs.

Radio Observations

Luhman 16AB was observed twice with the ATCA: in the 6A configuration (baselines of 0.337-5.94 km) on 09 March 2013 and again in the 6C configuration (baselines of 0.153-6.0 km) on 02 May 2013 (UT). Continuum-mode observations were taken on both dates in dual-sideband mode simultaneously at 5.5 GHz and 9.0 GHz. The Compact Array Broadband Backend (CABB; Wilson et al. 2011) was used with 2 GHz bandwidth per observing frequency and 2048 channels, each 1 MHz wide. The gain calibrator QSO B1036−52 was used for both epochs, with primary flux calibrator QSO B1934−638; Luhman 16AB was tracked in 10-minute intervals for both epochs. The flux calibrator was also used as the bandpass calibrator during the March observations, but RFI early in the May observations prevented it from also serving as the bandpass calibrator; instead, a single scan of QSO B1036−52 taken at high elevation was used for bandpass calibration. All data were reduced using the AIPS package (Greisen 2003) and best practices for wide-band data reduction. Table 1 lists the beam sizes for each epoch and frequency band.

We are also aware of an observation at a wavelength of 20 cm, which took place over 7 hours on April 8, 2013 (R. Fender, private communication). The rms image noise is 4 mJy and there is no source near the expected position of the brown dwarf binary. Because of the factor of ∼1000 disparity between the upper limit of the decimeter-wavelength observations and the centimeter-wavelength upper limits, we concentrate on the ATCA results in the following discussion. Data at 5.5 and 9 GHz for each epoch were imaged separately and after being combined into a single data set per band; both Stokes I and V were searched for Luhman 16AB. We propagated the WISE position at epoch MJD 55380.018731 by the proper motions given in Luhman (2013) to obtain the expected coordinates (J2000 10 49 15.99 −53 19 04.9 for March and 10 49 16.01 −53 19 04.9 for May). There is only a 0.25" difference in position between the two radio epochs. The error budget for the expected position of Luhman 16AB comprises ±0.75" from the binary separation, an upper limit of ±0.5" from parallactic motion (likely smaller given the ∼two-month separation of the epochs), ±0.07" from position uncertainty (taken from Luhman 2013), and ±0.02" from propagating the proper motion uncertainties stated in Luhman (2013). No source was found near the expected position of the brown dwarf pair in either epoch. Figure 1 shows the 5.5 GHz radio sky around this expected position, with a box of ±1.6" encompassing the maximum of all the errors stated above. The nearest statistically detected source is 12.5" away, with a flux density at 5.5 GHz of 30 µJy. No bursty emission in Stokes I or V is evident in either band in either epoch, in light curves with bin sizes of 60, 300, and 600 seconds (see e.g., Osten & Wolk 2009). Details of the observations (and sensitivities derived from individual and combined epochs) are listed in Table 1. A 1σ upper limit of 5 (6.8) µJy/beam at an observing frequency of 5.5 (9.0) GHz, at a distance of 2 pc, translates into a 3σ radio luminosity upper limit L_ν of 7.2×10^10 (9.8×10^10) erg s^−1 Hz^−1 for Luhman 16AB.
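A quick sanity check of these luminosity limits, using only values quoted in the text:

```python
# A minimal check of the quoted luminosity limits: L_nu = 4*pi*d^2 * S_nu,
# using the 1-sigma sensitivities and the 2 pc distance given in the text.
import math

PC_CM = 3.086e18           # 1 parsec in cm
UJY_CGS = 1.0e-29          # 1 uJy in erg s^-1 cm^-2 Hz^-1
d_cm = 2.0 * PC_CM         # distance to Luhman 16AB (Luhman 2013)

for freq_ghz, rms_ujy in [(5.5, 5.0), (9.0, 6.8)]:
    s_3sigma = 3.0 * rms_ujy * UJY_CGS          # 3-sigma flux-density limit
    l_nu = 4.0 * math.pi * d_cm**2 * s_3sigma   # erg s^-1 Hz^-1
    print(f"{freq_ghz} GHz: L_nu < {l_nu:.1e} erg/s/Hz")
# prints ~7.2e10 (5.5 GHz) and ~9.8e10 (9 GHz), matching the text
```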
X-ray observations
Chandra observed Luhman 16AB on 10 Nov 2013 for 50 ks (ObsID 15705) using the ACIS-S3 detector. The data analysis was performed with the CIAO software package (version 4.6). The analysis started with the level 1 events file provided by the Chandra X-ray Center (CXC). In order to optimize the spatial resolution, pixel randomization was removed. The events file was filtered on event grades (retaining the standard grades 0, 2, 3, 4, and 6), and the standard good time interval file was used.
We determined the position of Luhman 16AB at the epoch of the Chandra observations using the proper motion given by Luhman (2013). The predicted coordinates are J2000 10 49 14.65 −53 19 05.2, and there is no evidence of a source at this location (see Figure 2). As discussed in Section 2.1, the uncertainty in the expected position is at most 1.34". For an on-axis source, 90% of the encircled energy lies within a radius of 2" (Chandra Proposer's Observatory Guide), and we used this as our guide for establishing the spatial region in which to probe for any X-ray emission. The closest detected X-ray source is separated by 57″ from this position, with a total of 17 counts.
Calculation of an upper limit proceeded with estimation of the background rate at the position of Luhman 16AB. 90% of the encircled energy lies within 2″ of the central pixel at an energy of 1.49 keV (Chandra Proposer's Observatory Guide). Using an annulus extending from 2″ to 10″ around the position of Luhman 16AB, we estimate the mean background within 2″ of the target to be 0.37 counts (0.2-2 keV) in 48.35 ks. We calculated the quantile distribution for a Poisson distribution with this intensity in the R statistical computing software (R Core Team 2012), and find an upper limit of 2 counts at a significance level of P = 0.001; this corresponds to a confidence level of 99.9%, equivalent to a Gaussian sigma level of 3.09 (Gehrels 1986). For the on-source exposure time of 48.35 ks, the upper limit count rate is then 4.1 × 10^−5 cts/s. We calculated the count-to-flux conversion factor for a one-temperature thermal plasma (APEC model) with PIMMS, and we verified that it is insensitive to reasonable assumptions about the plasma temperature, given the negligible absorption expected for the 2 pc distance of Luhman 16AB. We thus constrain the X-ray luminosity in the 0.2-2 keV band to log L_x [erg/s] < 23.0.
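The arithmetic of this limit can be sketched as follows; note that the count-to-flux conversion factor below is an assumed placeholder (back-solved from the quoted result) rather than the actual PIMMS output, which depends on the assumed spectral model:

```python
# Sketch of the X-ray limit arithmetic. The 2-count limit, 0.37-count
# background, and exposure are from the text; the count-to-flux factor
# below is an ASSUMED placeholder (back-solved from the quoted result),
# since the real value comes from PIMMS for a specific APEC model.
import math

exposure_s = 48.35e3
count_limit = 2.0                          # P = 0.001 upper limit (text)
rate_limit = count_limit / exposure_s      # ~4.1e-5 cts/s

conv = 5.1e-12        # erg cm^-2 count^-1, assumed spectral conversion
flux_limit = rate_limit * conv             # 0.2-2 keV flux limit

d_cm = 2.0 * 3.086e18
l_x = 4.0 * math.pi * d_cm**2 * flux_limit
print(f"rate < {rate_limit:.1e} cts/s; log L_x < {math.log10(l_x):.1f}")
# with this assumed conversion, log L_x < ~23.0, as quoted
```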
Magnetic Activity Constraints for Luhman 16AB
The upper limits for Luhman 16AB presented in this work are the strongest constraints obtained so far for the radio and X-ray luminosity of any ultracool dwarf. We compute the radio and X-ray activity indices, log(L_{r,ν}/L_{bol}) and log(L_x/L_{bol}), making use of the bolometric luminosities given by Faherty et al. (2014) for both components of the binary. We evaluate the activity indices separately for the L7.5 and the T0.5 component, assuming that only one of the binary components is possibly magnetically active. However, the bolometric luminosities of Luhman 16A and 16B are almost the same, and the error we make by using their average is likely smaller than the sum of all other uncertainties. We find log(L_{r,ν}/L_{bol}) < −18.1 (5.5 GHz), log(L_{r,ν}/L_{bol}) < −17.9 (9 GHz), and log(L_x/L_{bol}) < −5.7. Figure 3 puts these upper limits in the context of detections and upper limits for other ultracool dwarfs.
The upper limits at 5.5 and 9 GHz for Luhman 16AB from ATCA are a factor of 15 and 10, respectively, lower than the most sensitive upper limit for any other ultracool dwarf. The X-ray upper limit is about a factor of two deeper than previous sensitive upper limits. ε Ind Bab (T1+T6) is the next closest brown dwarf with sensitive limits on X-ray and radio emission, from Audard et al. (2005). At a distance of 3.6 pc, it is only a factor of 1.8 further away than Luhman 16AB, corresponding to about a factor of three difference in luminosity sensitivity, and measurements were made for both systems with the same radio and X-ray facilities: ATCA and the Chandra X-ray Observatory. The disparity in radio upper limits can largely be attributed to the increase in bandwidth now available with the CABB on ATCA (2 GHz) compared with what was available for Audard et al. (2005)'s observation (128 MHz).

Fig. 3.-(Top) Radio luminosity versus spectral type; data include Williams et al. (2013). Downward-pointing arrows are 3σ upper limits, while filled circles correspond to detections. Dotted vertical lines connect measurements of the same object; a star symbol connects flare measurements with measurements or limits on quiescence of the same object at the same wavelength. The radio upper limits for the Luhman 16AB system are assigned to either the L7.5 or T0.5 component of the system and connected by horizontal lines. (Bottom) X-ray luminosity versus spectral type for stellar objects with spectral types of M6 and later. Data are taken from Williams et al. (2014), Cook et al. (2014), Audard et al. (2005), and Stelzer et al. (2012). Symbols are as in the top panel.
Interpretation
We have presented the most sensitive upper limits on X-ray and radio emission for ultracool dwarfs to date. The T dwarf component has a measured rotation period of 4.87 h (Gillon et al. 2013). Our X-ray limit for Luhman 16AB extends to late-L and T dwarfs the previous evidence, gained from M/L dwarfs, for a sharp drop in X-ray activity levels despite fast rotation. This absence of X-ray activity is most likely associated with the high electrical resistivities in such cool atmospheres, which prevent the coupling of matter and magnetic field that is necessary to develop magnetic activity (Mohanty et al. 2002). Fleming et al. (1995) detected stellar coronal heating efficiencies (as measured by L_X/L_bol) down to levels approximately a factor of five lower than our upper limit. Our upper limit is also consistent with the activity levels of the Sun at the highest points of its activity cycle, which reaches a maximum of log L_X/L_bol = −5.9 in the 0.2-2.4 keV band (Peres et al. 2000).
As Figure 3 shows, there is a marked drop-off in the number of radio detections for objects later than mid-L, with a range of radio luminosities observed at a fixed spectral type. Detections of radio emission in ultracool dwarfs are often used to argue for the existence of strong magnetic fields. However, the inverse is not true: the lack of a radio detection does not allow a determination of the magnetic field strength in either of these objects, contrary to the statements made in Berger (2006) regarding upper limits on the radio emission of a sample of ultracool dwarfs. Important conclusions regarding the physical extent of any emission can nevertheless be drawn from the radio flux density upper limits by examining the conditions under which the relevant emission mechanisms operate.
The interpretation of the variable radio emission in ultracool dwarfs has centered around the action of an electron-cyclotron maser operating in a region of high magnetic field strength (Nichols et al. 2012). In this scenario, the observing frequency is tied to the electron-cyclotron frequency in the emitting region, $\nu_c = 2.8 \times 10^6\, B$ Hz (with B in Gauss), via $\nu_{\rm obs} = s\nu_c$ for harmonic number s equal to 1 or 2, implying kG fields in a radio-emitting region detected at cm wavelengths, consistent with extrapolations from convection-driven geodynamo scaling laws (Christensen et al. 2009). The intensity of radio emission expected from a coherent process such as this is not predictable based solely on the number of emitting particles or the magnetic field strength. However, the high brightness temperatures required for coherent emission (usually taken to be $T_b > 10^{12}$ K; Kellermann & Pauliny-Toth 1969), coupled with the radio flux density upper limit and observing frequency, set a stringent upper limit on the size scale of any radio-emitting region. Rewriting the standard Rayleigh-Jeans relation $S_\nu = 2kT_b\nu^2\Omega/c^2$ (Dulk 1985) with $\Omega = \pi(xR_J/d)^2$ for parameters applicable to the current case, and taking the dwarf radius to be approximately 1 Jupiter radius, in line with measurements (Sorahana et al. 2013), leads to the constraint
$$x \simeq 1.4\times10^{-3}\, S_{\mu{\rm Jy}}^{1/2}\, d_{\rm pc}\, \nu_{\rm GHz}^{-1}\, (T_b/10^{12}\,{\rm K})^{-1/2},$$
where x is the size of any radio-emitting region in units of $R_J$ ($R_J$ is one Jupiter radius, $7.1\times10^9$ cm), $d_{\rm pc}$ is the distance to the dwarf in pc, $\nu_{\rm GHz}$ is the observing frequency in GHz, $S_{\mu{\rm Jy}}$ is the flux density in $\mu$Jy, and $T_b$ is the brightness temperature in K. Evaluated for the 3σ upper limits at the two frequencies (and assuming $T_b = 10^{12}$ K), this gives $x \le 0.002$ at 5.5 GHz and $x \le 0.001$ at 9 GHz. These are upper limits, as stellar phenomena have demonstrated the existence of brightness temperatures as high as $T_b \approx 10^{18}$ K (Osten & Bastian 2008).
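A minimal numeric check of this constraint, implementing the Rayleigh-Jeans relation directly (constants and 3σ flux-density limits as given above):

```python
# Size limit on a coherent emitter from the Rayleigh-Jeans relation
# S = 2 k T_b nu^2 Omega / c^2 with Omega = pi (x R_J / d)^2, solved for x.
import math

C = 3.0e10       # speed of light, cm/s
K_B = 1.38e-16   # Boltzmann constant, erg/K
R_J = 7.1e9      # Jupiter radius, cm
PC = 3.086e18    # parsec, cm

def size_limit(s_ujy, nu_ghz, d_pc=2.0, t_b=1e12):
    """Maximum source size, in Jupiter radii, for a flux-density limit."""
    s = s_ujy * 1.0e-29
    nu = nu_ghz * 1.0e9
    d = d_pc * PC
    return (d / (nu * R_J)) * math.sqrt(s * C**2 / (2.0 * math.pi * K_B * t_b))

print(size_limit(15.0, 5.5))   # 3-sigma limit at 5.5 GHz -> ~0.002
print(size_limit(20.4, 9.0))   # 3-sigma limit at 9.0 GHz -> ~0.001
```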
The upper limit on size holds if the conditions are right to produce coherent emission. Growth rates of the cyclotron maser instability are maximized in relatively rarefied, magnetized plasmas where the dimensionless ratio of the plasma frequency to the electron-cyclotron frequency is less than a few (Lee et al. 2013). The atmosphere calculations of Mohanty et al. (2002) showed that in the lower atmosphere, the total density for a dusty atmosphere model in a cool dwarf with T_eff near 1500 K will be about 10^−9 g cm^−3, with an ionization fraction of about 10^−11. This would suggest an electron density of approximately 10^3 cm^−3. The kG field strengths derived for these objects from scaling laws, combined with these parameters, indicate that the conditions for the instability may exist, but the cyclotron maser mechanism could be inoperable for reasons still to be determined. Mutel et al. (2007) found a strong dependence of the growth rates of the cyclotron maser instability on the opening angle of the loss-cone distribution of electrons that could power the instability. Beaming effects may also explain the lack of detections, if there is a misalignment between the opening angle of the emission and the line of sight.
Another possibility that has been put forward to explain the quiescent radio emission from ultracool dwarfs is gyrosynchrotron emission, in analogy with the magnetic activity seen in higher mass dwarf stars (Güdel 2002). For this incoherent process, the strength of the emission depends not only on the magnetic field strength in the radio-emitting source, but also on the index of the distribution of accelerated particles with energy (δ) and the size of the emitting region. Figure 4 displays the values of δ, B, and the size of the emitting source that are compatible with the observed upper limits on flux density at the two radio frequencies, given the limits of applicability of the analytic expressions in Dulk (1985). While the constraints on size are not as stringent as in the coherent case, they do rule out a global gyrosynchrotron-emitting magnetosphere around one of the dwarfs in the Luhman 16AB system, as this would lead to detectable levels of gyrosynchrotron emission.
Our discussion has concentrated mainly on steady levels of emission. The observation of radio bursts in a relatively small sample of all radio-observed UCDs, combined with the short rotation periods of the known radio bursters, has given rise to a discussion of selection effects: e.g., the typical length of the radio observations (a few hours) may not have covered the (generally unknown) full rotational cycle of many UCDs (Stelzer et al. 2012). This bias can be ruled out here. Multiple rotation periods of the T0.5 dwarf were covered with our radio data, so the observations were sensitive to bursts occurring at particular rotation phases. Yet the viewing geometry and/or topology of the magnetic field may prevent the detection of such bursts, if present, on Luhman 16AB. The limits presented here are unlikely to be matched for other ultracool dwarfs in the near future, and may only be exceeded by measurements from the Athena mission (for X-rays) and by future radio telescopes such as the Square Kilometre Array or the Next Generation Very Large Array.
RAO acknowledges support from the Chandra X-ray Observatory under grant GO4-15140X. C.M. acknowledges support from the National Science Foundation under award No. AST-1003318. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University, the Eberly College of Science, and the Pennsylvania Space Grant Consortium. Thanks to Mark Wieringa for his help with the March ATCA observations and data analysis. Thanks also to Eric Feigelson for help in setting up the first ATCA observation, and for substantive comments on the statistics of upper limits.

Fig. 4.-Constraints on the size of the emitting region and magnetic field strength from the upper limits on the radio flux density at 5.5 and 9.0 GHz of Luhman 16AB. These constraints are calculated assuming that one of the dwarfs is capable of producing gyrosynchrotron emission from a power-law distribution of electrons in a high magnetic field region. The size of the emitting region is given in units of Jupiter radii, R_J. Each curve gives the upper limit of the region of parameter space allowed for the specified value of δ, the index of the distribution of accelerated particles producing the gyrosynchrotron emission. Anything below the line is compatible with the upper limits for that combination of parameters. | 2015-04-24T14:07:37.000Z | 2015-04-24T00:00:00.000 | {
"year": 2015,
"sha1": "a5fb3bd8ceb40d7342df1fc75d64b5a5f5d7c007",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1504.06514",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a5fb3bd8ceb40d7342df1fc75d64b5a5f5d7c007",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
4684763 | pes2o/s2orc | v3-fos-license | An orange fluorescent protein tagging system for real-time pollen tracking
Background Monitoring gene flow could be important for future transgenic crops, such as those producing plant-made pharmaceuticals (PMPs) in open-field production. A Nicotiana hybrid (Nicotiana tabacum × Nicotiana glauca) shows limited male fertility and could be used as a bioconfined PMP platform. Effective assessment of gene flow from these plants is augmented by methods that utilize fluorescent proteins for transgenic pollen identification. Results We report the generation of a pollen-tagging system utilizing an orange fluorescent protein to monitor pollen flow and to provide a visual assessment of transgene zygosity of the parent plant. This system was created to generate a tagged Nicotiana hybrid that could be used to monitor the incidence of gene flow. Nicotiana tabacum 'TN 90' and Nicotiana glauca were successfully transformed via Agrobacterium tumefaciens to express the orange fluorescent protein gene, tdTomato-ER, in pollen, and a green fluorescent protein gene, mgfp5-er, was expressed in vegetative structures of the plant. Hybrids were created that utilized the fluorescent proteins as a research tool for monitoring pollen movement and gene flow. Manual greenhouse crosses were used to assess hybrid sexual compatibility with N. tabacum, resulting in seed formation from hybrid pollination in 2% of crosses, which yielded non-viable seed. Pollen transfer to the hybrid formed seed in 19% of crosses, and 10 out of 12 viable progeny showed GFP expression. Conclusion The orange fluorescent protein is visible when expressed in the pollen of N. glauca, N. tabacum, and the Nicotiana hybrid, although hybrid pollen did not appear as bright as that of the parent lines. The hybrid plants, which show limited ability to outcross, could provide bioconfinement with the benefit of detectable pollen using this system. Fluorescent protein tagging could be a valuable tool for breeding and in vivo ecological monitoring.
Background
Increased use of transgenic crops has made monitoring transgene flow in agroecological systems a necessity. Previous investigations have demonstrated the utility of gene flow tracking with fluorescent proteins (FPs) [1][2][3][4]. These studies have shown that green fluorescent protein (GFP) is an effective tool for gene flow tracking and can be targeted to various organs and tissues within plants, including pollen. This technology could, in effect, be used in an environmental monitoring system, one of the many uses of FPs in plants [5]. One drawback of using native GFP as a marker in plants is the signal-to-noise ratio at GFP's maximum excitation wavelength of 395 nm, which often results in autofluorescence of plant tissue components [6]. Fluorescent proteins emitting in the red/orange spectrum, which require longer excitation wavelengths, produce lower levels of autofluorescence in plant tissues than blue or UV light [6]. One such widely used orange fluorescent protein (OFP), DsRed, is derived from Discosoma sp.; its mutant variants have higher extinction coefficients and quantum yields [7]. Coral-derived FPs should therefore be useful for monitoring gene flow.
Nicotiana tabacum (tobacco) and Brassica napus (canola) plants have been transformed to synthesize GFP in pollen, using pollen-specific promoters [1,4]. Long-range pollen tracking was conducted in canola to assay pollen movement in real time (i.e., immediate detection of tagged pollen) using traps at various distances in field and greenhouse experiments. This method is quicker and less laborious for determining pollen flow than analyzing progeny from recipient plants (e.g., antibiotic screening, PCR, FP screening) [8]. Drawing upon this previous body of work, it is logical to conceive a method to determine bioconfinement efficacy using FP tagging with an improved fluorescent protein for plants.
The Nicotiana hybrid (Nicotiana tabacum × Nicotiana glauca) is highly sterile, which prompted a further examination of bioconfinement through gene flow monitoring. Recently, we have shown that GFP tagging in vegetative plant tissues of this hybrid allows gene tracking and assists with sterility assessments [9]. Here we describe a modified system to tag pollen that is applicable to a real-time assay of pollen flow from FP-tagged plants. Our goal was to engineer each Nicotiana species for pollen-specific expression of an OFP gene while vegetative tissues expressed a GFP gene. The transgenic plants could then be crossed to obtain interspecific hybrid Nicotianas with FP genes contributed from each parent. To achieve this goal, the parent plants N. tabacum 'TN 90' and N. glauca were transformed via Agrobacterium tumefaciens to synthesize the OFP tdTomato-ER in pollen and bred to homozygosity, and then crossed to create the transgenic interspecific hybrid. Manual greenhouse crosses were performed to assess sexual compatibility and the functionality of the system.
Plants
N. tabacum 'TN 90' used for transformation was from foundation seed lot # 86-02-K-4A; N. tabacum 'MS TN 90' from foundation seed lot # 86-03-KLC-15 is a male-sterile variety of TN 90 that was used as a pollen recipient in crosses. N. tabacum 'SN 2108', a variety morphologically distinct from the TN 90 cultivar that was used as a pollen donor in greenhouse crosses, is an experimental line later developed into 'KT D4'; all N. tabacum were obtained from the Kentucky Tobacco Seed Improvement Association, Inc. in Lexington, KY, USA (38°8'N, 84°29'W). N. glauca used for transformation was from the US National Plant Germplasm System (plant introduction 307908, accession TW55 from Peru).
Vector construction
Two fluorescent proteins were used to mark the plants. The mgfp5-er gene encodes a GFP that emits green light (λmax = 509 nm) when excited by blue (465 nm) or ultraviolet (UV; 395 nm) light and is targeted to and retained in the endoplasmic reticulum (ER) within cells. GFP in transgenic plants is observable by UV illumination in the dark or by epifluorescence microscopy, and is quantifiable using fluorescence spectrometry [10,11]. tdTomato-ER is a tandem-dimer DsRed-variant OFP (λmax = 581 nm, excited by green light at 554 nm) that is also retained in the ER [7,12]. To create dual-FP marker vectors, the Gateway-compatible vector backbones pMDC99, containing a hygromycin resistance cassette, and pMDC100, containing a kanamycin resistance cassette, were utilized as Gateway destination vectors [13]. An entry vector containing a pollen-specific promoter, LAT52 [14], driving expression of the OFP tdTomato-ER followed by a nos terminator was recombined with the destination vectors, creating the intermediate vectors pMDC99-tdTomato-ER and pMDC100-tdTomato-ER. Subsequently, a GFP expression cassette (containing CaMV35S-mGFP5-ER-nosT) was amplified from pBIN19-mGFP5-ER and cloned into the intermediate vectors, creating the binary vectors TD-GFP-H (hygromycin selection) and TD-GFP-K (kanamycin selection), respectively (Figure 1). These vectors were identical except for the antibiotic resistance genes, so that dual antibiotic selection could be used after hybridization of the Nicotiana species to confirm incorporation of both constructs into the F1 hybrid.
Generation of transgenic plants
Plant transformation experiments were performed using Agrobacterium tumefaciens strain EHA 105 with the previously described leaf-disc method [15]. Sterilized leaf explants were soaked for 30 min in a mixture of Agrobacterium and liquid MS salts containing B5 vitamins (DBI). Transformed explants were then co-cultivated on solid DBI medium for 2 days before being transferred to solid DBI containing Timentin® (400 mg/L) and either kanamycin (200 mg/L) or hygromycin (50 mg/L) for selection. Shoots generated from transformed callus were transferred to MS medium containing the respective selective antibiotics [15,16]. Shoots were maintained at 24°C under 16/8 h light/dark periods until rooting, then transferred to soil in standard 1020 flats divided into 18 cells measuring 7.95 cm × 7.94 cm × 5.72 cm, with humidity domes to allow acclimation for approximately two weeks. Plant leaves were then assayed with a handheld UV light (UVP model B-100AP, 100 W, 365 nm) to differentiate between transgenic and non-transgenic plants, as previously described [11]. The presence of the TD-GFP-H and TD-GFP-K inserts was confirmed in each T0 plant by DNA extraction and PCR to amplify mgfp5-er DNA (Figure 2), as previously described [1,17]. Plants confirmed visually and by PCR to be transgenic were transferred into 4 L pots in a greenhouse under 16/8 h light/dark periods at 27°/20°C, respectively. Seeds were harvested from each plant by covering flowers with breathable mesh pollination bags (DelStar Technologies, Inc., Middleton, DE, USA); flowers were periodically shaken by hand until seed pods developed and were harvested. In all, 20 events were generated per construct per species.
Hybrid Nicotiana production
Plants were bred to obtain lines carrying both constructs for complete tracking of pollen. To ensure that multiple transgene copies were stacked into the hybrid, our goal was to produce hybrids containing one TD-GFP-K and one TD-GFP-H construct, using dual antibiotic screening to select hybrids that were transgenic for each construct.
Fluorescence measurements and observations
Brightly fluorescent T0 plants, as determined by visual observation for GFP, were selected for analysis and further breeding; ten TN 90 TD-GFP-H T1 lines and eight N. glauca TD-GFP-K T1 lines were selected. T1 seeds were germinated, and a handheld UV light was used to select the brightest GFP-expressing seedlings. GFP fluorescence was measured by a spectrofluorometer (Fluorolog®-3, HORIBA Jobin Yvon, Edison, NJ, USA) [11,18] and analyzed with its software (FluorEssence™ Version 2.5.2.0, HORIBA Jobin Yvon, Edison, NJ, USA) to quantify the average fluorescence (photon counts per second) of each T1 line (Figures 3 and 4). Individual plants were selected that had the highest measured fluorescence and were thus most likely to be homozygous for the TD-GFP-K or TD-GFP-H inserts [18]. When T1 plants flowered, pollen was taken from each plant and suspended in 200 μl of water, and 15 μl of the suspension was transferred to a microscope slide and observed under an epifluorescent BX 51 microscope (Olympus Corporation, Shinjuku, Tokyo, Japan). A Texas Red®/Cy3.5 (TxRed) filter set (Chroma Technology Corporation, Bellows Falls, VT, USA) was used to view fluorescent pollen grains. The field of view was captured by a digital camera (Olympus Q Color 3) and the QCapture imaging system (QImaging Corp., Burnaby, Canada) (Figure 5).
Transgenic line selection
Assuming a single insertion event, transgene zygosity was estimated using epifluorescence microscopy. Plants with 100% fluorescent pollen (deemed homozygous) were bagged and self-fertilized as previously described. In addition to the FP pollen assay, we used progeny assays to confirm the homozygosity of each T2 line. Germinated seeds were screened with a handheld UV light to determine the zygosity of each T2 line (using ratios of GFP to non-GFP plants). Seeds of each T2 line were also screened for inheritance of the antibiotic resistance genes by germination on MS media containing the respective selective antibiotics [15].
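A T2 segregation screen of this kind is typically evaluated with a chi-square goodness-of-fit test against the expected 3:1 ratio for a self-fertilized hemizygous parent; the sketch below uses hypothetical counts, since the raw seedling counts are not reported here:

```python
# Hypothetical example of a T2 segregation screen: a self-fertilized
# hemizygous (single-locus) parent should give GFP+ : GFP- seedlings
# near 3:1, while a homozygous parent gives ~100% GFP+. Counts are
# invented for illustration; the paper does not report raw counts.
from scipy.stats import chisquare

gfp_pos, gfp_neg = 154, 46                  # hypothetical seedling counts
total = gfp_pos + gfp_neg
expected = [0.75 * total, 0.25 * total]     # 3:1 expectation

stat, p = chisquare([gfp_pos, gfp_neg], f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
# p > 0.05 means the counts are consistent with 3:1 (hemizygous parent)
```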
Fertility assessment in hybrids
Manual crosses were conducted in a greenhouse in Lexington, Kentucky (Table 1). Hybrid OFP plants were used both as pollen donors in crosses onto male-sterile N. tabacum 'MS TN 90' and as pollen recipients in crosses with the fertile line 'SN 2108'.
Statistical analysis
All analysis of variance (ANOVA) routines were performed in SAS (Version 9.3, SAS Institute Inc., Cary, NC, USA) using the MIXED procedure with a significance level of p < 0.05. When ANOVA results were statistically significant, least significant differences were used for mean separation.
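For reference, a rough Python analogue of this analysis (the original used SAS PROC MIXED; data, line names, and column names below are hypothetical, and a full mixed-model fit would use MixedLM rather than OLS):

```python
# A rough Python analogue of the ANOVA described above (the paper used
# SAS PROC MIXED). Data, line names, and column names are hypothetical;
# a true mixed-model fit would use statsmodels MixedLM instead of OLS.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "line": ["A"] * 3 + ["B"] * 3 + ["C"] * 3,
    "fluorescence": [9.1, 8.7, 9.4, 6.2, 5.9, 6.5, 7.8, 8.1, 7.6],
})

model = smf.ols("fluorescence ~ C(line)", data=df).fit()
print(anova_lm(model))   # one-way ANOVA table; if the F-test is
# significant at p < 0.05, follow with a mean-separation procedure
```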
Results and discussion
Transformation of N. tabacum 'TN 90' and N. glauca was successful, except for N. glauca TD-GFP-H, for which multiple attempts failed to produce hygromycin-resistant plants. GFP was visible in leaves, stems, and roots (data not shown), and OFP was visible in pollen under a microscope (Figure 5) with the aforementioned filter set. GFP, regulated by the CaMV 35S promoter, was not visible in pollen, in accordance with previous findings [19,20]. Highly fluorescent individual plants from the most fluorescent N. glauca TD-GFP-K lines were crossed with highly fluorescent TN 90 TD-GFP-H lines to ensure that hybrids had both antibiotic resistance genes and would fluoresce brightly, thereby facilitating detection. Since we selected plants on the basis of green-fluorescent shoots, it is not surprising that orange pollen fluorescence was also bright in these lines. Hybrid OFP lines were 100% resistant to kanamycin and hygromycin when screened on MSO media containing both antibiotics (data not shown), indicating inclusion of both cassettes in the F1 hybrids. Manual plant crosses revealed that the hybrids were able to backcross to non-transgenic male-sterile N. tabacum 'MS TN 90' (Table 1), forming entirely non-viable seed in 2% of the crosses (98% of the crosses produced no seed), thus restricting detectable transgene transmission rates in progeny to 0%. This result contrasts with our previous findings, where a few viable seeds (2 out of 445 seeds from 96 crosses) were generated from a similar (MS TN 90 × hybrid) cross that employed a different construct using mgfp5-er [9]. In addition, we have determined that male fertility varies among hybrid lines from 0 to 3% [9]. When the fertile line SN 2108 was used to pollinate hybrid OFP plants, limited seed set (19% of crosses) was observed. Only 10 of 12 germinated seedlings expressed GFP (83% detectable transgene transmission), indicating that transgenes might be segregating out of some hybrid OFP × SN 2108 progeny.
It was unknown whether tdTomato-ER would be visible in the pollen of the Nicotiana hybrid, as the plant largely produces immature pollen in which many pollen mother cells cease to develop past the tetrad stage [21]. Many of the immature pollen grains apparently did not synthesize sufficient tdTomato-ER for visual detection. The FP was only obvious in larger, more mature hybrid OFP pollen and did not appear to fluoresce as brightly as in TN 90 TD-GFP-H and N. glauca TD-GFP-K. The pollen-specific promoter LAT52 regulates gene expression during microspore mitosis, allowing transcription until anthesis [14,22]. Our observation that few mature fluorescent pollen grains are produced in the hybrids suggests that the interspecific hybrid system could be a viable candidate for transgene bioconfinement.
Conclusions
A bright orange fluorescent protein, tdTomato-ER, can be synthesized in pollen when its gene is under the control of the LAT52 pollen promoter. Fluorescently-tagged pollen is highly distinguishable from non-tagged pollen, and shows low autofluorescence. The plants produced in this study further increase the number of tools available for gene flow studies. Crossing studies demonstrated that hybrid OFP plants had low fertility and provided bioconfinement by limiting successful crosses made to the maternal line, N. tabacum. As pollen tracking is possible with this fluorescently tagged hybrid, more research is needed to determine the efficacy of pollen detection with this system and how it relates to bioconfinement in a field setting. | 2017-06-07T22:22:47.003Z | 2013-09-27T00:00:00.000 | {
"year": 2013,
"sha1": "572c14d4dd5fd5a582809b858a1a85ee086863e7",
"oa_license": "CCBY",
"oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/1756-0500-6-383",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "944ac7eafcb16ee10f4f2615726d0a24e84acf2f",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
96429861 | pes2o/s2orc | v3-fos-license | Improved stability for analytic quasi-convex nearly integrable systems and optimal speed of Arnold diffusion
We improve the global Nekhoroshev stability for analytic quasi-convex nearly integrable Hamiltonian systems. The new stability result is optimal, as it matches the fastest speed of Arnold diffusion.
Introduction
We consider a real analytic Hamiltonian H(θ, I) = h(I) + f(θ, I), I ∈ R^n, θ ∈ T^n = (R/2πZ)^n, with $\epsilon := |f| < 1$. It is a classical result of Nekhoroshev ([9], [10]) that when h(I) satisfies a non-degeneracy condition known as steepness (see also the modern treatments of [11], [5]), the system enjoys a global stretched exponential stability of the type
$$\|I(t) - I(0)\| \le C\epsilon^b \quad \text{for} \quad |t| \le \exp(c\,\epsilon^{-a}).$$
In the case when the integrable Hamiltonian is quasi-convex (see the definition below), the system enjoys the largest stability exponents. Lochak and Neishtadt, and also Pöschel (see for example [6], [8], [12]), obtained the exponents
$$a = b = \frac{1}{2n}.$$
Lochak also discovered the remarkable phenomenon known as "stability by resonance": if the initial condition is close to a d-resonance of low order, then one expects the stability exponents $a = b = \frac{1}{2(n-d)}$. By taking advantage of this fact, and of the fact that 1-resonances divide the space, Bounemoura and Marco [4] obtained a larger stability exponent a by allowing a larger stability region (i.e. smaller b). The exponents obtained are
$$b = (n-1)\sigma, \qquad a = \frac{1}{2(n-1)} - \sigma,$$
where σ > 0 can be arbitrarily small. The exponent a can be taken to be $\frac{1}{2(n-1)}$ if one allows a stability region of order 1.
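To illustrate what the size of the exponent a means in practice, the following sketch compares the stretched-exponential stability times for the exponents discussed above; the constant c and the value of ε are arbitrary, so only the relative scaling is meaningful:

```python
# Illustration only: how the exponent 'a' changes the stretched-
# exponential stability time T(eps) ~ exp(c * eps**(-a)). The constant
# c = 1 and eps = 1e-8 are arbitrary; only the scaling is meaningful.
import math

def log10_time(eps, a, c=1.0):
    return c * eps ** (-a) / math.log(10.0)   # log10 of exp(c eps^-a)

n = 5
for a in (1.0 / (2 * n), 1.0 / (2 * (n - 1)), 1.0 / (2 * (n - 2))):
    print(f"a = {a:.4f}: log10 T ~ {log10_time(1e-8, a):.1f}")
# the larger exponent 1/(2(n-2)) yields a vastly longer stability time
```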
On the flip side, one is interested in the instability question known as Arnold diffusion. This line of research was started by the seminal work of Arnold ([1]), where he discovered the first mechanism of instability for nearly integrable Hamiltonian systems. Bessi ([2], [3]) proved that for n = 3, 4 there exist diffusion orbits (θ, I)(t), for which there exists t > 0 such that
$$\|I(t) - I(0)\| \ge C, \qquad 0 < t \le \exp\big(c\,\epsilon^{-\frac{1}{2(n-2)}}\big).$$
This result was then generalized to arbitrary n ≥ 5 by the second author of this paper ([14]; see also related work in [7]). The reason for the exponent $\frac{1}{2(n-2)}$ is a restriction of Arnold's mechanism: orbits constructed using Arnold's idea must always cross a double resonance, and therefore the exponents obtained are the best allowed in that class.
Up to now, there was still a gap between the best lower bound and upper bound for the stability exponent a:
$$\frac{1}{2(n-1)} \le a \le \frac{1}{2(n-2)}.$$
In this paper, we close this gap by improving the stability exponents to
$$a = \frac{1}{2(n-2)} - \sigma, \qquad b = (n-2)\sigma,$$
for arbitrarily small σ > 0. Thus, the stability exponent a can be arbitrarily close to $\frac{1}{2(n-2)}$, and for Arnold diffusion the exponent $\frac{1}{2(n-2)}$ is optimal. We obtain the improvement by separating the frequency space into two sets: one close to resonances of order up to $|\log \epsilon|$, and the complement, which is sufficiently non-resonant. In the non-resonant region we prove an improved stability result using first a normal form and then Nekhoroshev's theory. In the resonant region, we apply an argument similar to the one in [4] to show that a fast diffusion orbit has to pass close to a double resonance.
The paper is structured as follows: in Section 2 we introduce notation and formulate the result. We also reduce the main theorem to two stability results, one in the non-resonant and one in the resonant region. These results are proven in Sections 3 and 4.
Formulation of the main result
For D ⊂ R^n and r, s > 0, define the complex neighborhood
$$D_{r,s} = \{(\theta, I) : |\mathrm{Im}\,\theta| < s,\ \mathrm{dist}(I, D) < r\},$$
and let $\mathcal{A}_{r,s}(D)$ denote the space of functions analytic on $D_{r,s}$ with finite sup norm $\|\cdot\|_{D,r,s}$. Let R, r_0, s_0, l, m, M > 0 be parameters, and let B(0, R) ⊂ R^n be the ball of radius R. We assume the following conditions on h:
• h ∈ A_{r_0,s_0}(B(0, R));
• $\|\partial^2 h\| \le M$ on the domain of analyticity;
• h is l, m-quasi-convex: for every I and every $v \in \mathbb{R}^n$, either $|\langle \nabla h(I), v\rangle| \ge l|v|$ or $\langle \partial^2 h(I)v, v\rangle \ge m|v|^2$.
Let us denote by M = (R, r_0, s_0, l, m, M) the ensemble of parameters; we reserve the notation C = C(M) or C_k = C_k(M) for unspecified positive constants depending only on M. The following is our main theorem.
Figure 1. Resonant and non-resonant areas
The theorem is proven by dividing the I-space into two regions: a neighborhood of the low-order 1-resonances, and its complement. We prove a stability result on each region.
Let Λ ⊂ Z^n be a submodule. The space of Λ-resonant frequencies is defined by
$$\Omega_\Lambda = \{\omega \in \mathbb{R}^n : \langle k, \omega \rangle = 0 \ \text{for all } k \in \Lambda\},$$
and the associated resonance surface is
$$R_\Lambda = \{I \in B(0, R) : \nabla h(I) \in \Omega_\Lambda\}.$$
We say that Λ has rank d if there are linearly independent $k_1, \dots, k_d \in \mathbb{Z}^n$ such that $\Lambda = \mathrm{Span}_{\mathbb{Z}}\{k_1, \dots, k_d\}$; in this case, we also write $R_\Lambda = R_{k_1, \dots, k_d}$. Λ is called maximal if it is not contained in a larger module of the same rank. Following Pöschel, we say that Λ is a K-module if it is generated by vectors $k_i$ with $|k_i| \le K$ for all $i = 1, \dots, d$.
Given a parameter 0 < β < 1, we define the fully non-resonant domain D(ε) and its complement, a neighborhood of the strong 1-resonances. The main observation is that orbits in the fully non-resonant region are much more stable than expected.
Proposition 2.2. Under our standing assumptions, there is ...

Remark. Choosing L = 12s_0 implies N = 2, and the stability time is ...

Proposition 2.3. Under the same assumptions, for ... and 0 < ε ≤ ε_0, the following hold: ...

Proof of Theorem 2.1. Let T be as in (3), and assume that ε_0 is small enough so that ... Consider any orbit (θ, I)(t). If there is t* such that I(t*) ∈ D(ε), then Proposition 2.2 applies. Alternatively, if no such t* exists, then Proposition 2.3 applies, and the theorem follows.
Stability in the non-resonant region
Let Λ ⊂ Z^n be a maximal submodule. For $\varphi(\theta, I) = \sum_{k \in \mathbb{Z}^n} \varphi_k(I)\, e^{i\langle k, \theta \rangle}$, define the projection operator
$$\mathcal{T}_K \varphi = \sum_{k \in \Lambda,\ |k| \le K} \varphi_k(I)\, e^{i\langle k, \theta \rangle}.$$
We have the following resonant normal form lemma.

Lemma 3.1. Suppose ... and Ks ≥ 6. Then there exists a real analytic coordinate change Φ : ... such that ...

We apply Lemma 3.1 to the fully non-resonant case Λ = {0}; then $g_0$ and $g$ depend only on I.
Since Λ is the trivial module, $g_1$ and $g_0$ depend only on I, and $\|g_0\|_{D,r_1,s_1} \le \|f\|_{D,r_0,s_0} = \epsilon$. Define ...; using Cauchy estimates we have ... Choose $\epsilon_0$, $\beta_0$ such that ... To prove quasi-convexity, note that for each I one of the following holds for all |v| = 1: either $|\langle \nabla h(I), v \rangle| \ge l$ or $\langle \partial^2 h(I)v, v \rangle \ge m$. Our estimates imply that one of the corresponding inequalities always holds for $h_1$, implying l/2, m/2-quasi-convexity.
We then apply the following global stability theorem to the normal form system. It is important to note that $h_1$ does not satisfy our standing assumptions, and special care needs to be paid to which parameters the constants depend on. Consider ..., which includes the time interval ...
Stability near strong 1-resonances
Suppose Λ ⊂ Z^n is a maximal submodule, and let $k_1, \dots, k_d \in \mathbb{Z}^n$ be linearly independent vectors generating Λ over Z. The volume |Λ| of Λ is defined as
$$|\Lambda| = \sqrt{\det(A^T A)},$$
where A is the matrix with columns $k_1, \dots, k_d$; this definition is independent of the basis $k_1, \dots, k_d$. Λ is called a K-lattice if $|k_1|, \dots, |k_d| \le K$.

Theorem 4.1. Suppose ..., where |Λ| is the volume of Λ. Then for every orbit (θ, I)(t) such that ..., one has ...
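A numerical illustration of this volume (as reconstructed above via the Gram determinant), with hypothetical integer generators:

```python
# Numerical illustration of the lattice volume |Lambda| = sqrt(det(A^T A))
# reconstructed above; the integer generators here are hypothetical.
import numpy as np

def lattice_volume(generators):
    """Volume of the lattice spanned (over Z) by the given integer vectors."""
    A = np.array(generators, dtype=float).T    # columns k_1, ..., k_d
    return np.sqrt(np.linalg.det(A.T @ A))

k1, k2 = [1, 0, -2], [0, 3, 1]
print(lattice_volume([k1, k2]))                # sqrt(46) ~ 6.78
# replacing the basis by A @ G with an integer G of det +-1 leaves
# the value unchanged, which is the basis-independence claimed above
```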
Stability in the resonant area follows in two steps. First, by a geometric consideration, we show that any orbit which drifts a large enough distance within the neighborhood of a strong 1-resonance must come close to a 2-resonance $R_{k_1,k_2}$, with estimates on $|k_1|$ and $|k_2|$. We then apply Theorem 4.1.
First we have the following lemma, which is a modified version of Lemma 3.4 from [4].
Lemma 4.3.
Let I ⊂ [−1, 1] be a closed interval of length l > 0. Suppose $0 < K^2 < 2l^{-1}$; then there is an irreducible rational number p/q ∈ I ∩ Q such that $K < q \le 3l^{-1}$.

Proof. Let $Q = 3l^{-1} > 2l^{-1}$; then there is m ∈ Z such that $m/Q,\ (m+1)/Q \in I$. We now show that at least one of them satisfies the conclusion of the lemma. Indeed, if $p_1/q_1, p_2/q_2 \in \mathbb{Q}$ are distinct and $|q_1|, |q_2| \le K$, then $|p_1/q_1 - p_2/q_2| \ge 1/|q_1 q_2| \ge K^{-2} > Q^{-1}$; therefore at most one of m/Q and (m+1)/Q can have denominator bounded by K when reduced.
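The proof of Lemma 4.3 is constructive, and the following sketch mirrors it numerically; the interval and the value of K are hypothetical:

```python
# Constructive mirror of the proof of Lemma 4.3: build Q ~ 3/l, test the
# two consecutive fractions m/Q and (m+1)/Q, and return one whose reduced
# denominator exceeds K. Interval and K below are hypothetical.
from fractions import Fraction
from math import ceil

def find_rational(lo, hi, K):
    l = hi - lo
    assert K * K < 2.0 / l, "hypothesis K^2 < 2/l violated"
    Q = ceil(3.0 / l)
    m = ceil(lo * Q)                       # smallest m with m/Q >= lo
    for num in (m, m + 1):
        frac = Fraction(num, Q)            # reduced automatically
        if lo <= float(frac) <= hi and frac.denominator > K:
            return frac
    return None

print(find_rational(0.30, 0.35, 4))        # e.g. 3/10: denominator in (4, 60]
```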
According to our definition, $R_{k_1,k_2}$ is the resonance surface associated with the module $\mathrm{Span}_{\mathbb{Z}}\{k_1, k_2\}$, which is not necessarily maximal. In order to apply Theorem 4.1, we need the following lemma.
Proof. The lemma is non-trivial because $k_1, k_2$ do not necessarily generate Λ over Z.
We first derive a relation for an arbitrary number of generators. Suppose Λ is the maximal module containing $k_1, \dots, k_d$, and let $k'_1, \dots, k'_d$ generate Λ over Z. Let A be the matrix with columns $k_1, \dots, k_d$, and B the matrix with columns $k'_1, \dots, k'_d$. Then there exists an invertible d × d integer matrix G such that A = BG. | 2017-01-21T13:46:00.000Z | 2017-01-21T00:00:00.000 | {
"year": 2017,
"sha1": "d5510547a5a455a0062e8c9654b681635fc507f3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1701.06026",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d5510547a5a455a0062e8c9654b681635fc507f3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
118594893 | pes2o/s2orc | v3-fos-license | New formalism for selfconsistent parameters optimization of highly efficient solar cells
We self-consistently analysed the photoconversion efficiency of solar cells based on direct-gap A3B5 semiconductors and optimised their main physical characteristics. Using gallium arsenide (GaAs) as the example and a new efficient optimization formalism, we demonstrated that the commonly accepted light re-emission and re-absorption in solar cells (SC) made of technologically produced GaAs (in particular, by solid- or liquid-phase epitaxy) are not the main factors responsible for high photoconversion efficiency. As we prove instead, the doping level and doping type of the base material, as well as the Shockley-Read-Hall (SRH) and surface recombination velocities, are much more important factors for photoconversion. We found that the maximum photoconversion efficiency (about 27% under AM1.5 conditions) in GaAs with typical parameters of recombination centers can be reached for a p-type base doped at $2 \cdot 10^{17}$ cm$^{-3}$. The features of open-circuit voltage $V_{OC}$ formation are analyzed. The optimization provides a significant increase in $V_{OC}$ and a limiting photoconversion efficiency close to 30%. The approach of this research allows one to predict the expected solar cell characteristics (for both direct-gap and indirect-gap semiconductors) if the material parameters are known. The obtained formalism allows one to analyze and optimize the parameters of both mass-produced tandem solar cells (TSC) and one-junction SCs.
A point of view that there is a strong correspondence between the internal quantum yield of luminescence $q_{pl}$ and the photoconversion efficiency η (for a defined model of light absorption in the structure) is presented in these studies. The photoconversion efficiency in GaAs was calculated in papers [1,2] as a confirmation of this concept. This material was not chosen by chance. On the one hand, GaAs is the key semiconductor for creating TSCs based on direct-gap A3B5 semiconductors. On the other hand, in AlGaAs-GaAs-AlGaAs double heterostructures the internal luminescence quantum yield is close to 100% under certain conditions, as was shown experimentally [3]. Therefore, the influence of reabsorption and re-emission of light on the efficiency of solar cells based on GaAs was claimed to be very significant [1,2].
However, as will be shown below, the idealized assumptions used in [1,2] are not realized in practice, and typical semiconductor solar cell parameters make them unsuitable for calculating and optimizing solar cell performance. Let us consider these assumptions in more detail. The basic assumption is the neglect of non-radiative recombination channels: an internal quantum yield of luminescence close to 100% is assumed. In fact, such a value of $q_{pl}$ was obtained in a laser structure at a high excitation level (~10^17 cm^-3) [3]. Under standard SC operating conditions the excitation is several orders of magnitude smaller, which leads to a substantial decrease in the internal quantum efficiency of luminescence. To prove this, we derive, solve and analyse in detail an equation for $q_{pl}$ that is valid for an arbitrary excitation level and takes into account the basic recombination mechanisms.
The next approximation of [1,2] is the assumption that the absorption coefficient α of light in high-purity gallium arsenide (doping level of the semiconductor ≤ 10^14 cm^-3) can also be used for the doped material of real devices. In fact, as shown in [5], the light absorption coefficient of doped GaAs differs substantially from the absorption coefficient of high-purity GaAs. As a result, the short-circuit current in a p-type semiconductor increases, while in an n-type semiconductor it decreases.
In the calculation of the short-circuit current, expressions for the internal quantum efficiency $q_{SC}$ are used in [1,2] only for the case where the diffusion length L significantly exceeds the SC thickness d and the surface recombination velocity S is zero. In this article a much more general expression for $q_{SC}$ is used for plane-parallel structures. It is valid, in particular, for an arbitrary ratio between L and d, for arbitrary S, and for any value of the light reflectance $R_d$ of the back surface, varying from 0 to 1.
Ding et al. [4] utilize a more general approach than [1,2]. The calculation of GaAs photoconversion efficiency in [4] takes into account recombination in the space charge region (SCR) in the recombination current calculation, as well as reabsorption and re-emission of photons.
The saturation current $I_{0r}$ is introduced to implement an I-V curve non-ideality factor equal to 2. However, the calculation in [4] does not take into account the Shockley-Read-Hall recombination current in the neutral part of the base region, and the saturation current $I_{0r}$ of [4] is not expressed as a function of such physical parameters as the Shockley-Read-Hall recombination time $\tau_{SR}$, the doping level of the semiconductor, the effective densities of states in the conduction and valence bands, and the band gap. The model of [4] therefore does not allow one to calculate the $I_{0r}$ value for a particular semiconductor and the structure parameters considered, which precludes SC optimization by minimization of the recombination current.
The extended and modified analysis of GaAs SC efficiency in this paper uses some results of [5]. In particular, this analysis is valid for the case of an arbitrary injection level, i.e. for an arbitrary excess electron-hole pair concentration Δp generated by the light. The internal quantum yield of luminescence for arbitrary Δp is analyzed with regard to the main recombination mechanisms. Corrected equations for the space charge region (SCR) recombination velocity and current are determined and used to calculate the open-circuit voltage. The radiative recombination coefficient A is calculated using the van Roosbroeck-Shockley relation [6]. This work introduces a self-consistent analysis of the factors influencing photoconversion efficiency under more general assumptions than [1,2,4].
In our work the equation for the open-circuit voltage is written with regard to the recombination currents of both the neutral part of the base region and the SCR. The saturation currents take into account the ratio of radiative and non-radiative recombination probabilities together with other semiconductor parameters, so that the specific value of this ratio can lead either to an increase or to a decrease in this current. It is shown that this dependence is in some cases essential for the SC photovoltage calculation.
It is shown that the main SC parameters cannot be chosen arbitrarily, because they are interdependent. For example, when setting the doping level, it is necessary to account for the corresponding changes in the light absorption coefficient and the radiative recombination coefficient. It is found that reabsorption and re-emission of light are not the major factors determining GaAs SC efficiency.
These results can be used to select optimal parameters for SCs produced by industrial technologies from gallium arsenide as well as from other semiconductor materials (including germanium and silicon solar cells), and to achieve the maximum photoconversion efficiency.
Basic relations and the analysis of the influence of various factors on the GaAs photoconversion efficiency
Let us analyze the GaAs photoconversion efficiency with respect to the doping level and conductivity type, the ratio of deep-level carrier capture cross sections, and the ratios of radiative to non-radiative recombination probabilities for the SCR and the neutral part of the base region.
Influence of doping level and conductivity type
Let us analyze the behavior of the light absorption coefficient α of GaAs as a function of the photon energy E_ph, depending on the conductivity type and the doping level of the semiconductor. These dependences α(E_ph) are shown in Fig. 1 [5]. Using α(E_ph) and the Roosbroeck-Shockley relation [6], which in terms of the dimensionless photon energy u = E_ph/kT reads

A = [8π (kT)^3 / (c^2 h^3 n_i^2)] ∫ ε(u) α(u) u^2 e^(-u) du,

we can calculate the radiative recombination coefficient A. Here c is the speed of light, h is Planck's constant, n_i is the intrinsic concentration of the semiconductor, and ε(u) is the dielectric permittivity of the semiconductor, taking into account the dispersion over photon energy.
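A minimal numerical sketch of this integral is given below, in Python. The square-root absorption edge, the dispersionless permittivity ε = 12.9, and the GaAs intrinsic concentration are illustrative assumptions (the paper itself uses the measured α(E_ph) of Fig. 1), so the printed value is only meant to reproduce the order of magnitude of Fig. 2.

```python
import numpy as np
from scipy.integrate import quad

# Constants in eV-based units; alpha in cm^-1, c in cm/s
kT  = 8.617e-5 * 300.0   # thermal energy at 300 K, eV
c   = 3.0e10             # speed of light, cm/s
h   = 4.136e-15          # Planck constant, eV*s
n_i = 2.1e6              # intrinsic concentration of GaAs, cm^-3 (assumed)
E_g = 1.424              # GaAs bandgap at 300 K, eV
eps = 12.9               # permittivity, dispersion neglected (assumption)

def alpha_model(E_ph):
    # Illustrative sqrt absorption edge; the real alpha(E_ph) is from Fig. 1
    return 1.0e4 * np.sqrt((E_ph - E_g) / kT) if E_ph > E_g else 0.0

def integrand(u):
    # u = E_ph/kT; Boltzmann tail e^{-u} replaces 1/(e^u - 1) far above kT
    return eps * alpha_model(u * kT) * u**2 * np.exp(-u)

u_g = E_g / kT
integral, _ = quad(integrand, u_g, u_g + 60.0)
A = 8.0 * np.pi * (kT / h)**3 / c**2 * integral / n_i**2
print(f"A ~ {A:.2e} cm^3/s")   # order 1e-10..1e-9 cm^3/s, as in Fig. 2
```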
The calculated values of A are shown in Fig. 2. The figure shows that in p-type semiconductors the value of A increases with the doping level, from 7·10^-10 cm^3/s for p_0 = 10^17 cm^-3 to 1.5·10^-9 cm^3/s for p_0 = 2·10^18 cm^-3. At the same time, in n-type semiconductors the value of A decreases with the doping level, from 10^-9 cm^3/s for n_0 = 10^14 cm^-3 to 9·10^-11 cm^3/s for n_0 = 2·10^18 cm^-3. The influence of the doping level on the bulk lifetime in GaAs [8-10] is shown in Fig. 3. In this case the bulk lifetime is τ = (1/τ_SR + 1/τ_r)^-1, where τ_SR is the Shockley-Read-Hall lifetime, τ_r = 1/(A·N_d) is the radiative lifetime, and N_d is the doping level of the base. Figure 3 shows that the bulk lifetime in p-type GaAs is lower than in n-type. The data of Fig. 3 correlate with the published lifetimes for n- and p-type GaAs [9].
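Assuming the Matthiessen-type combination of the two channels written above, the lifetime trend of Fig. 3 can be sketched as follows; the τ_SR value is an assumed placeholder, and the (A, N_d) pairs are the Fig. 2 values quoted in the text.

```python
tau_SR = 1.0e-7   # Shockley-Read-Hall lifetime, s (assumed placeholder)

def bulk_lifetime(A, N_d, tau_SR=tau_SR):
    """tau^-1 = tau_SR^-1 + tau_r^-1, radiative lifetime tau_r = 1/(A*N_d)."""
    tau_r = 1.0 / (A * N_d)
    return 1.0 / (1.0 / tau_SR + 1.0 / tau_r)

# p-type values read off Fig. 2: A grows with doping level
for N_d, A in [(1e17, 7e-10), (2e18, 1.5e-9)]:
    print(f"N_d = {N_d:.0e} cm^-3 -> tau = {bulk_lifetime(A, N_d):.2e} s")
```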
Internal quantum efficiency of the short circuit current
A more general expression for the quantum yield of the short-circuit current q_sc(E_ph) can be written following [11]. In it, α is the light absorption coefficient, L_p is the diffusion length in the emitter, d_p is the thickness of the emitter, τ_p is the bulk lifetime in the emitter, R_d is the light reflection coefficient of the back surface, L is the diffusion length in the base, D is the diffusion coefficient in the base, S_0 is the effective surface recombination velocity at the surface of the emitter, and S_d is the effective surface recombination velocity at the back surface.
When the criterion L >> d is fulfilled and when S_0 and S_d are small (compared to the recombination rate in the base volume), the expression for q_sc reduces to a simple limiting form for poorly reflecting structures of type A (Fig. 4), and to another limiting form for highly reflective structures of type B (Fig. 4) [4]. However, for sufficiently short recombination times, when L ≤ d, as well as for large values of S_0 and S_d, the short-circuit current quantum yield can be significantly reduced relative to these limiting values.
In the SCR recombination velocity expression (6), L_D is the Debye screening length, ε_0 is the dielectric constant of vacuum, ε_s is the relative permittivity of the semiconductor, k is the Boltzmann constant, T is the temperature, q is the elementary charge, Δp is the excess minority-carrier concentration in the base, b is the ratio of the carrier capture cross sections, V_T is the average thermal velocity of the carriers, y_pn is the dimensionless potential at the p-n junction interface, y is the actual dimensionless electrostatic potential (normalized to kT/q), n_i is the intrinsic carrier concentration, N_c and N_v are the effective densities of states in the conduction and valence bands for T = 300 K, E_g is the bandgap, and E_r is the energy separation of the recombination level from the middle of the bandgap. For simplicity, it is assumed in (6) that the same deep recombination center is responsible for recombination in the SCR and for Shockley-Read recombination in the neutral base region of the SC. Calculation of V_SC(Δp) shows that for n_0 ≈ 10^7 cm^-3 and Δp ranging from 10^12 cm^-3 to 10^14 cm^-3, which corresponds to SC working conditions for various Shockley-Read-Hall recombination times, the recombination rate does not depend on the E_r value up to ±0.2 eV. Thus, when |E_r| is less than 0.2 eV, the terms proportional to n_i in the denominator of the integrand of expression (6) can be neglected. This means that under these conditions expression (6) for the SCR recombination velocity V_SC(Δp), written for deep recombination centers located in the middle of the gap, remains correct for a sufficiently large separation (within 0.2 eV) of the recombination level from the middle of the bandgap.
Using expression (6) for the E_r = 0 case, when the integrand has a maximum, an approximate evaluation of V_SC(Δp) can be obtained in which κ is a constant of the order of 1. Calculations using (1) and (3) show that κ increases from 1.9 to 2 as Δp increases from 10^10 cm^-3 to 10^14 cm^-3. In this case the results of the exact formula and of the approximate evaluation coincide to within less than 5% error. Figure 5 shows the recombination velocity in the SCR versus the bandgap (the parameter values are listed in the original figure caption). When illumination provides Δp >> p_0, the relationship between the photovoltage and the recombination saturation current density J_rs is described by relation (10) [13]. The generation-recombination balance equation ignoring SCR recombination was used in (10). The argument of the logarithm in (10) is much greater than 1, so the first approximation can be used. Here J_SC is the short-circuit photocurrent density. Equation (11) is valid when the diffusion length L is greater than the base thickness d.
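A hedged numerical illustration of such a logarithmic relation between the photovoltage and a saturation current is given below; the J_SC and saturation-current values are placeholders rather than the parameters of [13], and the ideality factors simply distinguish the neutral-base (n = 1) and SCR (n = 2) branches discussed in the text.

```python
import numpy as np

kT_q = 0.02585  # thermal voltage at 300 K, V

def v_oc(J_sc, J_0, n_id=1.0):
    """Open-circuit voltage from J_sc = J_0*(exp(V/(n_id*kT/q)) - 1).
    n_id = 1: neutral-base recombination; n_id = 2: SCR recombination."""
    return n_id * kT_q * np.log(J_sc / J_0 + 1.0)

J_sc = 30e-3                           # A/cm^2, one-sun scale (assumption)
print(v_oc(J_sc, 1e-19, 1.0))          # diffusion-limited branch, ~1.04 V
print(v_oc(J_sc, 1e-9, 2.0))           # SCR-limited branch (non-ideality 2)
```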
Taking into account SCR recombination, the open-circuit voltage V_OC can be found from expression (12) when the second term is not much greater than the first. If the second term is much greater than the first, the alternative expression should be used to find the open-circuit voltage V_OC.
The photoluminescence internal quantum yield
Let us write the expression for the internal quantum yield of photoluminescence taking into account the radiative recombination, the bulk Shockley-Read-Hall recombination, the Auger recombination, the surface recombination, and the SCR recombination for an arbitrary excitation level Δp. We consider a p-type semiconductor in the calculation. We take into account the dependence of both the Shockley-Read-Hall time and the surface recombination velocity on the excitation level for b ≠ 1, and consider the dependences of these non-radiative recombination parameters on the equilibrium majority-carrier concentration p_0, the minority-carrier concentration n_0, and the excitation level Δp. Fig. 6 shows the calculated dependence of the luminescence internal quantum yield on the excitation level for the following parameters: p_0 = 3·10^17 cm^-3 and S_0 = 1·10^2 cm/s; the SCR recombination velocity is calculated using (7). The following τ_0 values were used for curves 1-4, respectively: 5·10^-9, 2·10^-8, 5·10^-8 and 10^-6 s. Figure 6 shows that the luminescence internal quantum yield increases with the τ_0 value. When τ_0 = 10^-6 s, the q_pl value at the maximum equals 99.4%, which is very close to the value obtained in [3].
However, the Δp value in the SC under AM1.5 illumination is significantly lower. Δp is determined by the generation-recombination balance equation (15) taking into account all recombination components; the resulting transcendental equations were solved numerically. Thus, the q_pl value is close to 100% only at laser excitation levels (as obtained in [18]), when Δp ≥ 10^17 cm^-3. This situation does not correspond to the excitation levels of an illuminated SC. Note that the estimates above assumed no influence of surface recombination on the luminescence internal quantum yield. However, the GaAs-AlGaAs system was used [3,14] in the fabrication of structures with high back-surface reflectance, and the heterointerface recombination velocity lies in the range of 10^2-10^4 cm/s [15,16]. For τ_b = τ_r = 10^-7 s, d = 10^-4 cm and S_s = 10^3 cm/s, the luminescence internal quantum yield q_pl obtained from (14) is 33%. For S_s > 10^3 cm/s, for example S_s = 10^4 cm/s, we get an even smaller q_pl value. In the latter case, even in the absence of bulk Shockley-Read-Hall recombination, the luminescence quantum yield is essentially reduced due to recombination at the GaAs-AlGaAs heterojunction. Note that the GaAs-AlGaAs system is the best matched one, with the smallest lattice-constant mismatch. All other heterostructures have a greater lattice mismatch and therefore greater S_s values.
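The numerical solution of such a balance equation can be sketched as follows. The parameter set is chosen only to reproduce the ~33% figure quoted above (τ_r = 1/(A·p_0) = 10^-7 s, S_s = 10^3 cm/s, d = 10^-4 cm); the generation rate G is an assumed AM1.5-scale value, and the thin-base interface term S_s·Δp/d is a simplification of the full expression (14).

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative p-type parameters chosen to reproduce the ~33% estimate
p0     = 1.0e16   # equilibrium hole concentration, cm^-3 (assumption)
A      = 1.0e-9   # radiative coefficient, cm^3/s -> tau_r = 1/(A*p0) = 1e-7 s
tau_SR = 1.0e-7   # Shockley-Read-Hall lifetime, s
S_s    = 1.0e3    # heterointerface recombination velocity, cm/s
d      = 1.0e-4   # base thickness, cm
G      = 1.0e21   # photogeneration rate, cm^-3 s^-1 (assumed AM1.5 scale)

def recomb(dp):
    R_rad = A * (p0 + dp) * dp      # radiative recombination
    R_srh = dp / tau_SR             # bulk Shockley-Read-Hall
    R_s   = S_s * dp / d            # interface recombination, thin-base limit
    return R_rad + R_srh + R_s

# Generation-recombination balance G = R(dp) is transcendental; solve it
dp = brentq(lambda x: recomb(x) - G, 1e6, 1e22)
q_pl = A * (p0 + dp) * dp / recomb(dp)
print(f"dp = {dp:.2e} cm^-3,  q_pl = {q_pl:.1%}")  # ~33% for these inputs
```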
For p-n-junction GaAs solar cells the typical S_0 and S_d values are of the order of at least 10^5 cm/s [15]. The luminescence internal quantum yield q_pl for such SCs is significantly less than 1, so the effects of light re-emission and reabsorption (photon recycling) can be neglected even for large τ_SR values.
The open circuit voltage dependence on the semiconductor doping level
Using the saturation currents of [4] and considering photon recycling, the maximum obtainable value of the open-circuit voltage V_OC in direct-gap semiconductors can be calculated from (12). According to [4], the radiative recombination saturation current density is J_0r = 7·10^2 A/cm^2 for B-type structures. In conclusion, we note that the use of the value J_0r = 10^4 A/cm^2 for poorly reflecting structures and J_0r = 7·10^2 A/cm^2 for highly reflective structures leads to an overestimation of the photon-recycling influence on the open-circuit voltage V_OC when radiative and non-radiative recombination make comparable contributions. This is due to the assumption of a 100% luminescence internal quantum yield in the treatment of light re-emission and reabsorption (photon recycling) in [1,2,4].
If the luminescence internal quantum yield is reduced (for example, from 100% to 50%), only 1-2 successive reabsorption and re-emission acts will take place in highly reflective structures [1] instead of 30-40 acts. Thus the J_0r value increases, reducing the difference between A and A_eff. As a result, the influence of re-emission and reabsorption processes is decreased and will also depend on this ratio. According to (10) and (12), this in turn affects the τ value in the p-type material. Thus, the b value should be considered in selecting the material for highly efficient solar cells.
Influence of the doping type and level on the short-circuit current
An analysis of the influence of the doping type and level on the short-circuit current magnitude is carried out for the typical direct-gap semiconductor, gallium arsenide. The short-circuit current density under AM1.5 conditions, in the case of full absorption of the incident light and neglecting shading of the illuminated SC surface by the electrodes, is defined by relationship (17). Here R_s is the light reflection coefficient of the illuminated SC surface, m is the factor of illuminated-surface shading by the contact grid, and the remaining factor is the spectral photocurrent density for AM1.5 illumination. Table 2 shows that the J_SC value for the p-type semiconductor increases with the doping level. This growth is associated with the increased blurring of the fundamental absorption edge as the doping level rises (see Fig. 1). At the same time, the J_SC value for the n-type semiconductor decreases with increasing doping level due to filling of the conduction band by carriers (the Burstein-Moss effect).
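A schematic version of relationship (17) is sketched below; the flat photon flux standing in for the AM1.5 spectrum, the unit quantum yield, and the R_s and m values are assumptions chosen only to land in the ~30 mA/cm^2 range typical of GaAs.

```python
import numpy as np

q_e = 1.602e-19   # elementary charge, C

def short_circuit_current(E, phot_flux, q_sc, R_s=0.05, m=0.05):
    """Eq.(17)-style result: J_sc = q*(1-R_s)*(1-m)*integral(flux * yield).
    E: photon energies (eV); phot_flux: photons/(cm^2 s eV)."""
    return q_e * (1.0 - R_s) * (1.0 - m) * np.trapz(phot_flux * q_sc, E)

# Toy stand-in for the AM1.5 photon flux above the GaAs gap (assumption):
# flat flux integrating to ~2e17 photons/cm^2/s
E = np.linspace(1.424, 4.0, 500)
flux = np.full_like(E, 2e17 / (E[-1] - E[0]))
q_sc = np.ones_like(E)          # ideal quantum yield for illustration
print(f"J_sc ~ {short_circuit_current(E, flux, q_sc)*1e3:.1f} mA/cm^2")
```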
Direct-gap semiconductors photoconversion efficiency
The photoconversion efficiency calculation was performed for a GaAs SC as an example. The equation for the I-V curve was written in the form (19), where P_S is the illumination power density per unit area. The analysis shows that a particular doping level is optimal for achieving the maximum efficiency η in gallium arsenide. Therefore, to achieve both a high open-circuit voltage and a high short-circuit current density, i.e., to obtain the maximum photoconversion efficiency η, a p-type semiconductor should be used as the base material. Note that the diffusion length of electrons in the p-type base material is much greater than the diffusion length of holes in the n-type base material. A possible disadvantage of p-type GaAs compared to n-type is the smaller Shockley-Read-Hall lifetime τ_SR. However, according to [7,8], typical p-type lifetimes τ_SR are not so small compared to the n-type, so it is premature to draw final conclusions in favor of the n-type.
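A two-diode I-V sketch of this efficiency calculation is given below; the short-circuit current and the two saturation currents are placeholders rather than fitted GaAs parameters, with the n = 2 branch standing in for the SCR recombination current discussed earlier.

```python
import numpy as np

Vt  = 0.02585    # kT/q at 300 K, V
P_S = 0.100      # AM1.5 incident power density, W/cm^2

def current(V, J_sc=0.030, J_01=1e-19, J_02=1e-12):
    """Two-diode I-V curve: neutral-base (n=1) and SCR (n=2) branches.
    Saturation-current values are placeholders, not GaAs fits."""
    return (J_sc - J_01 * (np.exp(V / Vt) - 1.0)
                 - J_02 * (np.exp(V / (2 * Vt)) - 1.0))

V = np.linspace(0.0, 1.2, 2000)
P = V * np.clip(current(V), 0.0, None)   # output power density, W/cm^2
i = np.argmax(P)                         # maximum power point
print(f"V_m = {V[i]:.3f} V, efficiency = {P[i] / P_S:.1%}")
```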
We also note that even for the maximum achieved Shockley-Read-Hall lifetime of 10^-6 s, neglecting photon recycling leads to no more than an 8% reduction in the case of highly reflective structures, and to no more than a 3% reduction in the case of poorly reflecting structures.
These differences are estimated to be small compared to, for example, the combined influence of the surface recombination velocity and of the ratio of the electron and hole capture cross sections of the recombination level, and especially compared to the influence of the doping level on the V_OC value.
Therefore, the influence of these factors on the photoconversion efficiency must be considered in the first place.
Photoconversion efficiency analysis features for other semiconductors (direct- and indirect-gap)
The relationships above can be used to optimize solar cells based on other direct-gap semiconductors, for which the corresponding values will likewise be small.
Indirect semiconductors (silicon)
The relations obtained can in a number of cases also be used to optimize the parameters of solar cells based on indirect-gap semiconductors (for example, silicon and germanium), in which the radiative recombination probability is small. Silicon and germanium are used, in particular, as the third element in some three-junction SCs. This is especially important for monocrystalline and multicrystalline silicon solar cells, as silicon solar modules and batteries are widely used. Moreover, monocrystalline and multicrystalline silicon SCs currently produce the predominant part of the electricity generated by direct solar energy conversion.
Until recently, high-efficiency silicon solar cell parameters were modeled using the numerical solution of the drift-diffusion equations [9,12]. There have also been attempts [4,17] to model silicon solar cell efficiency in the same approximations as for gallium arsenide SCs, i.e., taking into account photon recycling (re-emission and reabsorption of photons) in highly reflective structures or structures with multiple reflection (equivalent to an increase of the optical thickness for light absorption). Photon recycling in silicon should be much weaker than in gallium arsenide, where the luminescence internal quantum yield is assumed to be near 100% (as demonstrated for the high-excitation case [3]). For silicon, for example, a maximum photoluminescence external quantum yield of just 6.1% was obtained [18] at room temperature. To estimate the luminescence internal quantum yield q_pl for silicon, we use equation (14) with the interband Auger recombination parameter from [11,19]. Parameters used: d = 200 µm, T = 298 K. For the lowest surface recombination velocities achieved in silicon [21,22], the largest q_pl value is 52.7% (curve 2).
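The competition between the radiative, Auger, Shockley-Read-Hall and surface channels behind these q_pl values can be sketched as below; all parameter values (including the very long τ_SR and the record-level S) are assumed best-case placeholders, so only the qualitative drop of q_pl above Δp ~ 10^16 cm^-3 should be read from the output.

```python
# Illustrative high-injection silicon parameters (assumed values)
A_rad  = 4.7e-15    # radiative coefficient, cm^3/s
C_aug  = 3.8e-31    # interband Auger coefficient C_n + C_p, cm^6/s
tau_SR = 0.1        # assumed very long SRH lifetime, s (best case)
S      = 0.25       # assumed record-level surface velocity, cm/s
d      = 0.02       # thickness 200 um, cm

def q_pl(dp):
    """Luminescence internal quantum yield in high injection (dp >> p0)."""
    R_rad = A_rad * dp**2        # radiative
    R_aug = C_aug * dp**3        # interband Auger
    R_srh = dp / tau_SR          # bulk Shockley-Read-Hall
    R_s   = 2.0 * S * dp / d     # front + back surfaces
    return R_rad / (R_rad + R_aug + R_srh + R_s)

for dp in [1e14, 1e15, 1e16, 1e17]:
    print(f"dp = {dp:.0e} cm^-3: q_pl = {q_pl(dp):.1%}")
```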
For the silicon solar cell parameters providing the record efficiency of 25% [23], the highest q_pl value is 4% (curve 3). At Δp ≥ 10^16 cm^-3 there is a strong drop of q_pl associated with the increased contribution of non-radiative Auger recombination. Since a typical Δp value for a silicon solar cell under illumination, as given by calculation with (15), is greater than 10^16 cm^-3, the q_pl value in this situation, as Figure 12 shows, does not exceed 25%. This means that photon recycling can be neglected in simulations of silicon solar cell parameters (including the maximum achievable ones).
Note that the formulae above are not suitable for calculating the efficiency of silicon solar cells with efficiencies above 20%. The point is that such solar cells are manufactured from high-quality monocrystalline silicon with a Shockley-Read-Hall lifetime of no less than 1 ms. Therefore, the diffusion length L for such SCs can exceed one millimeter, far exceeding the cell thickness. The record 25% photoconversion efficiency in silicon under AM1.5 conditions was set [23] precisely for this case, for which formula (21) is valid. In addition to the implementation of almost complete absorption of the incident light by the SC (which can be achieved through the use of a geometric relief), the main problem is to minimize the surface recombination velocities on the illuminated and back surfaces. In [23], Green minimized the illuminated-surface recombination velocity by growing a thermal oxide with a low density of surface states, and minimized the back-surface recombination velocity by creating an isotype junction.
Recent studies solve this problem by using α-Si:H/Si heterojunctions [24,25]. In this case, passivation of the dangling bonds by hydrogen is necessary to minimize the surface recombination velocity S. The highest efficiency of such solar cells, 24.7%, was obtained in [25]. In this case it is necessary to determine the Δp value from equation (15) and substitute it into (21) to find the V_OC value. The growth of V_OC is associated with a relative decrease of the second term in (21). Further, we find V_m from the maximum output power condition; the calculated values agree with those obtained experimentally in all cases. The matching error between the calculated and experimental η values is less than 1%. These results demonstrate the adequacy of the theoretical model proposed in this paper with respect to the experimental results obtained in [23-25]. The corresponding V_OC(d) dependences are: from [17] (curve 1), calculated using the parameters of [17] (curve 2), and calculated using the parameters of our papers [19,20] (curve 3).
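A sketch of this Δp-to-V_OC procedure is given below, using a plausible high-lifetime form of (21), V_OC = (kT/q) ln[(p_0 + Δp)Δp/n_i^2]; the doping level, surface velocity, thickness and generation rate are assumptions, while n_i = 8.5·10^9 cm^-3 and the ≥ 1 ms lifetime follow the values quoted in the text.

```python
import numpy as np
from scipy.optimize import brentq

kT_q = 0.02585        # thermal voltage, V
n_i  = 8.5e9          # intrinsic concentration of Si used here, cm^-3
p0   = 1.0e16         # base doping, cm^-3 (assumption)

A_rad, C_aug = 6e-15, 3.8e-31   # room-T radiative coeff.; assumed Auger
tau_SR, S, d = 1e-3, 1.0, 0.02  # >=1 ms SRH lifetime; assumed S, thickness

def balance(dp, G=1.3e19):      # Eq.(15)-style generation-recombination balance
    R = (A_rad * (p0 + dp) * dp + C_aug * dp**3
         + dp / tau_SR + 2.0 * S * dp / d)
    return R - G

dp = brentq(balance, 1e8, 1e20)
V_oc = kT_q * np.log((p0 + dp) * dp / n_i**2)   # plausible form of Eq. (21)
print(f"dp = {dp:.2e} cm^-3, V_oc = {V_oc:.3f} V")   # ~0.74 V scale
```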
In conclusion, we compare the results obtained in the present article with the results of [17]. Here, Fig. 8 shows the results of the open-circuit voltage calculation as a function of the silicon solar cell thickness, taking into account radiative recombination and interband Auger recombination. These dependences are calculated using expressions (21) and (15).
First, let us adjust our parameters to those of [17]. Thus, we assume that the intrinsic charge-carrier concentration in silicon (for T = 300 K) equals 1.45·10^10 cm^-3 and that the radiative recombination coefficient equals 2.5·10^-15 cm^3/s [17]. Let us also omit the term proportional to Δp in C_n, which is absent in [17]. The V_OC(d) dependences calculated using formulae (15) and (21) are compared with the curves constructed using formulae (15) and (21) of the present paper and the silicon parameters of [19,20] (curve 3). Curves (1) and (3) compare well and practically coincide for the actual silicon solar cell thickness (~100 µm), although they differ for small and large SC thicknesses. The slightly higher V_OC(d) values at small thicknesses are associated with the use of a lower equilibrium hole and electron concentration in silicon, equal to 8.5·10^9 cm^-3. The lower V_OC(d) values at large thicknesses are explained by the greater radiative recombination parameter at room temperature (6·10^-15 cm^3/s) and by taking into account the additional term in the relation for C_n.
The limiting photoconversion efficiency of silicon solar cells obtained in our article correlates with that obtained in [17] and is consistent with the conclusions of [26]. However, as our analysis shows and in contrast to [4,17], photon recycling can be neglected when calculating silicon solar cell parameters (including the maximum possible and record ones). As a result, the calculation is greatly simplified and can be performed in the traditional approximations.
Conclusions
An approach to optimize the SC based on direct and indirect-gap semiconductors in order to obtain the maximal photoconversion efficiency is proposed.
The analysis shows the secondary role of photon recycling even in highly reflective structures. The obtained formalism allows one to analyze and optimize SC parameters for other direct-gap semiconductors, particularly A3B5 semiconductors.
The features of the open-circuit voltage and photoconversion efficiency formation for monocrystalline silicon solar cells with Shockley-Read-Hall lifetimes above 1 ms were considered.
A formalism quantitatively describing the experimental results for high-efficiency silicon solar cells using various surface-recombination-velocity minimization techniques was proposed.
The approach of this research makes it possible to predict the expected characteristics of solar cells (based on both direct-gap and indirect-gap semiconductors) if the material parameters are known. | 2019-04-13T07:50:17.292Z | 2014-02-13T00:00:00.000 | {
"year": 2014,
"sha1": "00d33234b874463aa42b0e6e6afda6f7cded2acf",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "434e1223162141d74f3bf1edbad586f079bf6430",
"s2fieldsofstudy": [
"Physics",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
237967400 | pes2o/s2orc | v3-fos-license | Original Establishment of a Method for Predicting a Posed Smile from a Straight Face
: Good facial expression is an important goal of orthognathic surgery because facial expression has a considerably greater influence on humans’ aesthetic judgements than facial profile alone. However, to date, no reports have attempted to predict post-operative smiles from straight faces. The aim of this study was to evaluate the effectiveness of different techniques to create a posed smile (virtual) from a straight face (original). Twenty-five volunteers with no medical history that would interfere with a straight face or a posed smile were enrolled. After creating homologous models from the straight face and posed smile models, we assessed the ability of the principal component (PC) method and the improved Manchester (i-M) method to create a posed smile (virtual) from a straight face (original). Positive errors for the PC and i-M methods were 1.4 ± 0.5 mm and 0.9 ± 0.4 mm, respectively, and there was a significant difference between them. Although the errors differed significantly, the errors of the two methods, which incorporate homologous modeling techniques and principal component analysis, were clinically small and useful for predicting changes in facial expression.
Introduction
Orthognathic surgery represents a major portion of oral and maxillofacial surgeries. Good facial expression is an important goal of orthognathic surgery because facial expression has a considerably greater influence on humans' aesthetic judgements than facial profile alone. Therefore, many studies on pre- and post-operative facial changes and how to predict them have been published 1,2) . In our previous study, gender differences in posed smiles and the characteristics of posed smiles by female patients with Class 3 malocclusion before and after orthognathic surgery were investigated using principal component analysis (PCA) 3) . However, to date, no reports have attempted to predict post-operative smiles.
The homologous modeling technique has recently gained prominence as a method for creating a database of the face and jaw bones [4][5][6][7] . This technique can express an object shape as a polygon of the same topological structure with the same anatomical landmarks. To enhance the anatomical accuracy, facial reconstruction can be performed by adding a facial muscle model to the bone model. In the homologous model technique, the XYZ coordinates of the vertices can be calculated back from an arbitrary principal component (PC) value to create an average model [8][9][10] . Therefore, a regression equation that can predict the PC values of a "posed smile (original)" using the PC values of a "straight face (original)" of the same patient can also be used to create a "posed smile (virtual)" from the straight face (original).
In this study, we created a posed smile (virtual) from a straight face (original) using two different techniques and evaluated the accuracy of prediction for each technique.
Patients
Twenty-five volunteers with no medical history associated with a straight face or a posed smile were enrolled in the study. The experimental protocols were approved by the Ethics Committee of the Faculty of Dental Science, Kyushu University, Fukuoka, Japan (30-295) and were performed in line with the principles of the Declaration of Helsinki. All participants gave their informed consent to participate in the study.
Imaging and analysis
Facial image construction and homologous modeling were based on a previous report 5) . During imaging, the subjects were seated and the head was positioned without any head fixation. HBM-Rugle (Medic Engineering; Kyoto, Japan) image measurement software in stereolithographic format and Homologous Body Modeling software (Digital Human Technology; Tokyo, Japan) were used to construct a 3D image of the face for each subject and plot 11 landmarks on the surface of the 3D model 5) . Template models of the face consisting of approximately 6887 polygons were generated using Geomagic Studio 9 (Geomagic; NC, USA). The template model was automatically fitted to the individually scanned point cloud of the face by minimizing external and internal energy functions. The external energy function was based on the Euclidian distance between data points of the template model and those of the patient's database. The internal energy function was based on local deformation of the template model. The vertices of the template model were considered as anatomical landmarks that were fitted to landmarks, whereas the vertices generated from the subdivision surface were fitted to the measured point clouds with minimum deformation of the initial template model. The prediction of a posed smile from a straight face was then performed using each of the two methods, as follows. In the Manchester method, facial reconstruction is performed by adding an average thickness of soft tissue such as muscle, fat, and skin onto the skull using anatomical landmarks 11) . In the present study, we aimed to improve this technique and apply it to the prediction of a posed smile (virtual) from a straight face (original). First, the differences between the straight face and the posed smile were identified in the XYZ coordinates of the vertices of the polygon. The differences were added to the XYZ coordinates of the vertices of the polygon of the straight face, and this procedure was repeated 25 times. The original and virtual posed smiles were compared, and the error between them was measured.
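One plausible reading of this i-M procedure, in which the 25 repetitions are a leave-one-out loop over subjects, can be sketched in Python as follows; the synthetic arrays, the vertex count and the error metric are stand-ins for the real homologous-model data.

```python
import numpy as np

def predict_smiles_iM(straight, smile):
    """Leave-one-out i-M prediction: add the mean smile displacement field
    of the other subjects to each straight face.
    Arrays have shape (n_subjects, n_vertices, 3)."""
    n = straight.shape[0]
    diff = smile - straight                      # per-subject displacements
    virtual = np.empty_like(straight)
    for i in range(n):                           # repeated once per subject
        mean_disp = diff[np.arange(n) != i].mean(axis=0)
        virtual[i] = straight[i] + mean_disp
    return virtual

def mean_vertex_error(virtual, original):
    """Mean vertex-wise Euclidean distance per subject."""
    return np.linalg.norm(virtual - original, axis=2).mean(axis=1)

rng = np.random.default_rng(0)                   # synthetic stand-in data
straight = rng.normal(size=(25, 6887, 3))
smile = straight + rng.normal(0.1, 0.02, size=straight.shape)
err = mean_vertex_error(predict_smiles_iM(straight, smile), smile)
print(f"error: {err.mean():.2f} +/- {err.std():.2f} (mesh units)")
```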
Statistical analysis
The faces were analyzed using PCA and multiple regression analysis. All descriptive data analyses were performed with JMP 5.1.2 (SAS Institute Inc; Cary, NC, USA).
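A minimal sketch of the PC method along these lines is given below; the scikit-learn PCA/regression pipeline, the choice of 10 components, and the leave-in fitting are simplifying assumptions about the paper's procedure, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def predict_smiles_pc(straight, smile, n_pcs=10):
    """PC method sketch: regress posed-smile PC scores on straight-face PC
    scores, then back-project predicted scores to vertex coordinates.
    n_pcs ~ 10-11 explained >90% of variance in the text."""
    n = straight.shape[0]
    Xs = straight.reshape(n, -1)          # flatten (vertices, xyz) rows
    Xm = smile.reshape(n, -1)
    pca_s, pca_m = PCA(n_components=n_pcs), PCA(n_components=n_pcs)
    Zs, Zm = pca_s.fit_transform(Xs), pca_m.fit_transform(Xm)
    reg = LinearRegression().fit(Zs, Zm)  # multiple regression on PC scores
    return pca_m.inverse_transform(reg.predict(Zs)).reshape(smile.shape)

rng = np.random.default_rng(1)            # synthetic stand-in data
straight = rng.normal(size=(25, 6887, 3))
smile = straight + rng.normal(0.1, 0.02, size=straight.shape)
virtual = predict_smiles_pc(straight, smile)
print(np.linalg.norm(virtual - smile, axis=2).mean())
```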
Results
Among the straight face image sets, the contribution of the most important PC was 32.8%, and 10 PCs explained >90% of the total variance (Table 1).
Among the posed smile image sets, the contribution of the most important PC was 34.3%, and 11 PCs explained >90% of the total variance ( Table 2).
Positive errors in the PC and i-M were 1.4 ± 0.5 mm and 0.9 ± 0.4 mm, respectively (Fig. 3).
Discussion
Traditional facial reconstruction methods are based on manual procedures and produce 2D portraits or 3D sculptures. There are three common steps in these methods: (1) the raw skull (or a replica) is equipped with a sparse set of anatomical landmarks, (2) an average soft tissue thickness is applied to each skull landmark to estimate a corresponding landmark on the face, and (3) a face fitting the estimated landmarks is drawn or sculpted. Most practitioners add a facial muscle model to enrich the anatomical accuracy of the reconstruction, termed the Manchester method.
Computer animation software packages are a more recent development that use the same methodology as manual methods, and allow the user to tune some of the modeling parameters and combine human expertise with the flexibility of the software 12) .
Morphological measurement of the human body can be obtained using anatomical landmarks and their distances and angles; however, evaluation by this method is localized, and it is difficult to evaluate the overall characteristics. The homologous modeling technique is increasingly being used as a morphological measurement method in which scanned surface-shape data are represented using the same number of data points, with a topology defined based on the anatomy 13) . Recent reports have indicated the efficacy of homologous modeling for 3D morphological analysis of human teeth and faces, which are composed of relatively smooth surfaces. In addition, analysis of homologous models using multivariate statistical methods enables the extraction of shape variations that are impossible to understand using linear measurements. We have previously evaluated the difference between "straight face" and "posed smile" in healthy subjects using the homologous modeling technique 5) and that between mandibular condyles on the deviated and non-deviated sides in patients with jaw deviation 6) . The homologous modeling technique can calculate the averages of PC values obtained by PC analysis, and the XYZ coordinates can be calculated from the average values to visualize the virtual average shape.
In the present study, a homologous model with the same shape as the scanned model was created to apply morphing methodology and fabricate a 3D facial model. Because the homologous model in this study was created from the same original template, the number of vertices is the same and they show one-to-one correspondence, representing the morphology of the scanned model so that morphing methodology can be applied.
We considered that the principal component values of a posed smile could be predicted from the principal component values of a straight face using multiple regression analysis. Using the PC method, the error between the original posed smile and the virtual posed smile was 1.4 ± 0.5 mm, which was higher than that using the i-M method but can still be considered a good result clinically.
As the i-M method uses data from an average image, it may not be suitable for patients with jaw deformities. In contrast, the PC method can be used in various situations. Although there were significant differences in error, the error of PC method and i-M method, including homologous modeling techniques and principal component analysis, were clinically small and useful for predicting change in facial expression.
In the future, it will be necessary to apply and evaluate the two methods in a variety of clinical situations. | 2021-08-27T16:53:37.897Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "cb1b28eaa221211db4eec14064fb57fda821d6bf",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jhtb/30/3/30_221/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "e39434fdba10b2b702c820b4c08456a160022096",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
18923355 | pes2o/s2orc | v3-fos-license | Asymmetric dynamics and critical behavior in the Bak-Sneppen model
We investigate, using mean-field theory and simulation, the effect of asymmetry on the critical behavior and probability density of Bak-Sneppen models. Two kinds of anisotropy are investigated: (i) different numbers of sites to the left and right of the central (minimum) site are updated and (ii) sites to the left and right of the central site are renewed in different ways. Of particular interest is the crossover from symmetric to asymmetric scaling for weakly asymmetric dynamics, and the collapse of data with different numbers of updated sites but the same degree of asymmetry. All non-symmetric rules studied fall, independent of the degree of asymmetry, in the same universality class. Conversely, symmetric variants reproduce the exponents of the original model. Our results confirm the existence of two symmetry-based universality classes for extremal dynamics.
I. INTRODUCTION
Nature exhibits scale invariance in a variety of settings. Such behavior is often associated with power law distributions of the phenomenon of interest, for example earthquake sizes [1,2], rain intensities and drought durations [3][4][5] and physiological and morphological quantities [6,7]. In recent decades, physicists have sought the physical origin of such laws [2,5,7]. A concept introduced to partially explain the ubiquity of scale-invariant phenomena in nature is so-called self-organized criticality or SOC [2,8].
In the evolutionary interpretation of the BS model, each site i represents a biological species, and bears a real-valued variable x i representing its "fitness". The larger x i , the better adapted this species is to its environment and so the more likely it is to survive. At each time step, the site with the smallest x i and its nearest neighbors are replaced with randomly chosen values. The replacement of x i represents extinction of the less-fit species and the appearance of a new one, while the substitution of the neighboring variables with new random values may be interpreted as a sudden unpredictable change in fitness when an interdependent species goes extinct and a new species colonizes its niche. Selection, at each step, of the global minimum of the {x i } ("extremal dynamics") represents a highly nonlocal process, and would appear to require an external agent with complete information regarding the state of the system at each moment.
Due to the extremal dynamics, this system exhibits scale invariance in the stationary state, in which several quantities display power-law behavior [8]. Simulations show that the stationary distribution of barriers follows a step function, being zero (in the infinite-size limit) for x < x* ≈ 0.66702(3) [12]. Relaxing the extremal condition leads to a smooth probability density and loss of scale invariance [10,26,27]. Datta et al. have also shown that the scaling behavior is sensitive to the number of sites updated at each step (i.e., updating only the minimum site, or the minimum and next-to-minimum as well) [33].
In this paper we investigate the effect of symmetry in the updating rule. Section II introduces the models, which are then analyzed using mean-field like approaches in Sec. III. In Sec. IV we present simulation results; our conclusions are summarized in Sec. V.
II. MODELS
The Bak-Sneppen model [8] is defined as follows. Consider a d-dimensional lattice with L^d sites and periodic boundaries. In the evolutionary interpretation of the model, each site i represents a species, and bears a real-valued variable x_i(t) representing its "fitness" to survive, so that x_i(t) may be termed a "barrier to extinction". The initial values of the barriers are independently drawn from a uniform distribution on the interval [0,1). At time 1, the site m bearing the minimum of all the numbers {x_i(0)} is identified, and it, along with its 2d nearest neighbors, are given new random values, again drawn independently from the interval [0,1). (In the one-dimensional case considered here this amounts to: x_m(1) = η, x_{m+1}(1) = η′, and x_{m-1}(1) = η″, where η, η′, and η″ are independent and uniformly distributed on [0,1); for |j − m| > 1, x_j(1) = x_j(0).) At step 2 this process is repeated, with m representing the site with the global minimum of the variables {x_i(1)}, and so on.
We now define several one-dimensional variants of the BS model, which differ from the original in the number and/or position of neighbors which are updated at each time step, or in the way that the barriers x_i evolve. In the 'generalized' or 'BSab' variant, we replace the site m bearing the minimum of the {x_i}, plus a neighbors on the left side and b neighbors on the right, with independent random numbers. If a = b = 1 we recover the original model (BS11); if a = 0 and b = 1 we have the anisotropic BS model (BS01) studied in [31,32]; if a ≠ b we obtain modified BS models with asymmetric dynamics. These BSab variants of the Bak-Sneppen model were also studied in [31]. In the second variant, the site M bearing the maximum of the {x_i} and its two nearest neighbors are updated according to the rules: x_M(t + 1) = η, x_{M′}(t + 1) = η′ and x_{M″}(t + 1) = [x_{M″}(t)]^2. Here M′ = M + σ and M″ = M − σ, where σ is +1 with probability p and −1 otherwise. We shall refer to this variant as the peripheral square model with variable anisotropy. (For p = 1/2 this is the peripheral square model studied in [27].) The motivation for studying these variants is threefold. First, it is of interest to examine the effect of (i) the symmetry of the dynamics and (ii) replacement of barriers with a deterministic function f(x) instead of random numbers, on the critical behavior of the model. Secondly, we study corrections to the power laws, such as finite-size effects, and find that these corrections are large for small asymmetries, in agreement with [31]. Finally, since the precise form of the dynamics in a specific setting (e.g., evolution) is generally unknown, and probably is quite different from that of the original model, it is of interest to test the robustness of the results reported for the original model. It is even possible that some of the variants considered approximate a given evolutionary process more closely than the original. In particular, if certain pairs of species (i and j, say) stand in a predator-prey relationship, one would not expect the extinction of i to have the same effect on j as the extinction of j has on i; in this situation an asymmetric interaction appears more reasonable.
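A minimal simulation of the BSab rule, together with the threshold estimate used later in the text (x*_N = 1 − 1/C_N), can be sketched as follows; the step count, bin layout, and single-snapshot histogram are simplifications of the measurement protocol.

```python
import numpy as np

def run_bsab(N=2000, a=0, b=1, steps=200_000, seed=0):
    """Minimal BSab dynamics: replace the global-minimum site, a left and
    b right neighbors (periodic ring) with fresh uniform barriers."""
    rng = np.random.default_rng(seed)
    x = rng.random(N)
    for _ in range(steps):
        m = np.argmin(x)
        idx = (m + np.arange(-a, b + 1)) % N     # sites m-a ... m+b
        x[idx] = rng.random(a + b + 1)
    return x

x = run_bsab(a=0, b=1)                            # anisotropic BS01 variant
# Threshold from the step-function form: x*_N = 1 - 1/C_N, with C_N the
# plateau height of the stationary density (here a single noisy snapshot)
hist, edges = np.histogram(x, bins=100, range=(0, 1), density=True)
C = hist[90:].mean()                              # plateau well above x*
print(f"x*_N ~ {1 - 1/C:.3f}")                    # ~0.72 for BS01
```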
III. MEAN-FIELD THEORY
We develop a mean-field theory for the BSab variants, along the lines of Refs. [10,26]. To begin, we introduce a flipping rate of Γ e^(−βx_i) at site i, where Γ^(−1) is a characteristic time, irrelevant to stationary properties, which we set equal to one (Γ = 1). This regularized system is the 'finite-temperature' model [10,26,27], in which all sites have a nonzero probability of being updated at any time, in contrast to the extremal dynamics of the original model. The extremal condition is recovered in the zero-temperature limit, β → ∞; a regularized flipping rate facilitates analysis of the model.
The probability density p(x, t) satisfies

∂p(x, t)/∂t = −e^(−βx) p(x, t) − Σ_j ∫ e^(−βy) p_j(x, y, t) dy + n r(t),

where n = a + b + 1 is the number of sites that are updated at each step, p_j(x, y, t) is the joint density for sites 0 and j, and p(y, t) is the one-site marginal density. (We assume translation and reflection invariance, which is expected to hold at any time if the initial distribution possesses these properties.) Invoking the mean-field factorization p_j(x, y, t) = p(x, t) p(y, t), we find

∂p(x, t)/∂t = −[e^(−βx) + (n − 1) r(t)] p(x, t) + n r(t),

where r(t) = ∫ e^(−βy) p(y, t) dy represents the overall flipping rate. In the stationary state we have

p(x) = n r / [e^(−βx) + (n − 1) r].

Multiplying by e^(−βx) and integrating over the range of x, we find the self-consistency condition

1 = n ∫₀¹ e^(−βx) / [e^(−βx) + (n − 1) r] dx.
In the limit β → ∞ this solution becomes a step function:

p(x) = 0 for x < 1/n, and p(x) = n/(n − 1) for x ≥ 1/n.

Thus, the mean-field approach predicts a step-function singularity for the probability density, with the critical barrier at x* = 1/n. This result is independent of the symmetry of the dynamics, and we conclude that the mean-field approximation at the level of the one-site marginal density is insensitive to differences in symmetry. Define the anisotropy coefficient

k_a = (b − a)/n.

(Note that k_a = 0 for symmetric dynamics.) The mean-field threshold x* = 1/n can then be written as

x* = (1 − k_a)/(2a + 1).

For fixed a, mean-field theory predicts that the threshold x* varies linearly with the anisotropy coefficient k_a.
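The finite-β self-consistency problem behind these formulas can be solved numerically as sketched below; the equation set is the reconstruction written above, and the β and n values are illustrative choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def mft_density(n=3, beta=60.0):
    """Stationary mean-field density p(x) = n*r / (exp(-beta*x) + (n-1)*r),
    with r fixed by the self-consistency condition."""
    def constraint(r):
        val, _ = quad(lambda x: np.exp(-beta * x)
                      / (np.exp(-beta * x) + (n - 1) * r), 0.0, 1.0)
        return n * val - 1.0
    r = brentq(constraint, 1e-12, 1.0)
    return lambda x: n * r / (np.exp(-beta * x) + (n - 1) * r)

p = mft_density(n=3, beta=60.0)
for x in [0.1, 1 / 3, 0.5, 0.9]:    # step near x* = 1/n, height n/(n-1)
    print(f"p({x:.2f}) = {p(x):.3f}")
```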
A. Threshold values
We study the generalized model for various values of a and b, corresponding to k_a in the range 0-0.857. We estimate the probability density p(x) from a histogram of barrier frequencies, dividing [0,1] into 100 subintervals. After the system (a ring of N = 2000 sites) relaxes to the stationary state, histograms are accumulated at intervals of N time steps until a total of 10^8 steps are performed.
The threshold x*_N for a finite system can be calculated as follows [34]. In the limit N → ∞, the probability density p(x) is a step function, with p(x) = 0 for x < x* and p(x) = C for x > x*, where C is a constant. Normalization implies that C(1 − x*) = 1. This suggests that x*_N = 1 − 1/C_N can be regarded as the threshold of a finite system. In Ref. [34] we performed simulations for various system sizes and found that x*_N − x*_∞ = k N^(−1/ν), with x*_∞ = 0.6672(2) and ν = 1.40(1) in the original model (BS11), and x*_∞ = 0.7240(1) and ν = 1.58(1) in the anisotropic BS model (BS01). Typically x*_N − x*_∞ ≈ 0.006 for N = 2000, and therefore the threshold x*_{N=2000} is sufficiently accurate for our present purpose, which is to investigate the effect of anisotropy on the threshold values.
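A sketch of this finite-size extrapolation is given below; the data are synthetic, generated from the BS01 values just quoted, so the fit merely illustrates the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Threshold extrapolation x*_N = x*_inf + k * N**(-1/nu), fitted to
# synthetic thresholds built from the BS01 values quoted in the text
def fss(N, x_inf, k, inv_nu):
    return x_inf + k * N**(-inv_nu)

N = np.array([250, 500, 1000, 2000, 4000, 8000], dtype=float)
rng = np.random.default_rng(2)
xN = fss(N, 0.7240, 0.45, 1 / 1.58) + rng.normal(0, 2e-4, N.size)

popt, _ = curve_fit(fss, N, xN, p0=(0.7, 0.1, 0.7))
print(f"x*_inf = {popt[0]:.4f}, nu = {1/popt[2]:.2f}")
```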
Simulations confirm that the stationary probability density is a step function, in agreement with MFT. Fig. 1 shows p(x) for all variants having n = 5.
The threshold values, listed in Table I, are however always larger than the MFT prediction x* = 1/n. MFT is exact for the random-neighbor version, in which all sites are considered neighbors, and in which (n − 1) randomly selected sites are updated in addition to the site with the minimum x_i. There is better agreement between the MFT prediction for x* and the simulation result for larger anisotropies (with n constant), and for larger n (with fixed k_a). MFT predicts that x* = x*(n), whereas in fact, for fixed n, the threshold is smaller, the larger the anisotropy. This may be understood in general terms as follows. With n fixed, the larger |a − b|, the greater is the distance between updated sites and the minimum site. This makes the dynamics resemble the mean-field (random-neighbor) case more closely, causing the threshold to tend toward the MFT value 1/n. Increasing n (with k_a fixed) should have a similar effect. Figure 2 shows that if we fix a and vary b, the threshold decreases linearly with k_a, as predicted by MFT, except for a = 0. The actual threshold values are, however, quite different from the MFT results. For instance, for a = 1, MFT predicts x* = (1/3)(1 − k_a), while a fit to the simulation data yields x* = 0.664(3) − 0.668(7) k_a.
B. Critical exponents
Several quantities display power-law behavior in the Bak-Sneppen model, and can be used to characterize the associated universality class [8,11,31]. In particular, we study the probability P_J(r) that successive updated sites are separated by a distance r. In the original model, one finds [31] P_J(r) ∼ r^(−π), with π = π_S = 3.23(2) (figures in parentheses denote uncertainties). On the other hand, in the anisotropic BS model (BS01), π = π_A = 2.401(2) [32]. Based on simulations of the generalized models using 7 × 10^8 time steps on lattices of 32000 sites, we find that all variants with asymmetric dynamics belong to the universality class of the anisotropic BS model. Fig. 3 displays P_J(r) for variants with different anisotropies. The corrections to the power laws are large for small asymmetries, and therefore the asymptotic behavior is not observed in small lattices in these cases, because large values of r are not reached.
We find that P_J(r) can be fitted with an expression representing a crossover between symmetric scaling at small r and asymmetric scaling at large r,

P_J(r) = A r^(−π_A) + B r^(−π_S) + A (N − r)^(−π_A) + B (N − r)^(−π_S),   (10)

where A and B are constants and π_A and π_S are, respectively, the exponents of the anisotropic and the original models quoted above. The 'mirror' structure of Eq. (10) arises due to the periodic boundary conditions, which imply that P_J(r) = P_J(N − r). Eq. (10) provides a good fit to the simulation data for r > 10^2 for all generalized models. Since π_S ≈ π_A + 1, the expression P_J(r) = A r^(−π_A)(1 + B/r) + A (N − r)^(−π_A)(1 + B/(N − r)) also fits the data well. Nevertheless, studies of weakly asymmetric models (e.g., BS(20)(21), for which k_a = 0.024) indicate that the slope of P_J(r) (on log scales) approaches π_S, as opposed to π_A + 1, before the asymptotic behavior is attained. The functions P_J(r) for variants with the same anisotropy can be collapsed onto a master curve by plotting n P_J(r) versus r′ = r/n (Eq. 11). Figure 4 illustrates this collapse for five variants with k_a = 1/6. A simple argument provides an intuitive understanding of this scaling function. Suppose site m is the extremal site at time t. Since renewed sites receive independent random numbers, all sites in the set N(t) ≡ {m(t) − a, m(t) − a + 1, ..., m(t) + b} have the same probability of being the extremum at time t + 1, which implies (with a ≤ b) that P_J(1) = P_J(2) = ... = P_J(a), so that P_J(r) exhibits a plateau for r = 1, ..., a.
The probability of the event m(t + 1) ∈ N(t), i.e., that the extremal site belongs to the group of sites updated at the last time step, can be estimated as follows. We assume that, with probability ≈ 1, all sites outside of N(t) have x_i > x*. Then the probability that m(t + 1) ∈ N(t) equals the probability that at least one of the n freshly drawn values falls below x*. If b > a, then the plateau value of P_J(r) for r = a + 1, ..., b is approximately half of that for r ≤ a, where the symbol '≃' is appropriate because there is a small probability of one of the distances r = a + 1, ..., b being the extremal distance even if m(t + 1) ∉ N(t). This explains the second plateau seen in Fig. 4. For fixed k_a, a = [n(1 − k_a) − 1]/2 and b = [n(1 + k_a) − 1]/2 are essentially proportional to n, so that the rescaling of the argument in Eq. (11) collapses the plateaus. For r > b the probability P_J rapidly approaches a power law, which is also invariant under the rescaling of Eq. (11). Although the foregoing arguments are approximate, they provide an intuitive basis for the collapse seen in Fig. 4.
Another quantity that obeys a power law in the Bak-Sneppen model is P_r(τ), the probability that, in the stationary state, the global minimum site at time t was the extremal site most recently at time t − τ. (τ is the 'return time'.) We simulate the BSab variants on a lattice of 16000 sites, using 7 × 10^8 time steps. The results are quite similar to those for the probability P_J(r). For weak anisotropies, the asymptotic behavior is observed only for large τ, as shown in Fig. 5. In the limit τ → ∞, P_r(τ) ∝ τ^(−b), where b = 1.40(1) for all anisotropic variants and b = 1.58(1) for all isotropic cases.
C. Peripheral square model with variable anisotropy
The peripheral square model with variable anisotropy, defined in Sec. II, is a generalization of the model studied in [27], where the anisotropy now depends on a parameter p. We simulated this model (with N = 2000, 4000 and 8000 sites) for p varying from the isotropic value (p = 0.5) to the maximally anisotropic one (p = 1.0). Our analysis shows that, in the limit L → ∞, the stationary probability density consists of two regions (see Fig. 6): p_st(x) = 0 for x > x* and p_st(x) ≠ 0 for x < x*. (The rounding in the threshold region is simply a finite-size effect.) This behavior is in agreement with Ref. [27], where we found that renewing the barriers via x′ = x^α, with α = 1/2 and 2, leads to a diversity of distributions with a discontinuity at the threshold. The divergence of p_st as x → 0 in the present case can be understood on the basis of the mean-field theory developed in [27]. We conclude that the step-function distribution of the original BS model is not universal for self-organized criticality under extremal dynamics. In contrast, our results suggest that the step singularity is universal. Figure 6 suggests that, in the limit x → 0, p_st(x) does not depend on the parameter p. Over restricted intervals p_st appears to follow a power law. For example, for x ∈ [10^-6, 10^-4], p_st is well described by a power law with an exponent that increases from 0.82 for p = 0.5 to 0.85 for p = 1.0. On the other hand, the mean-field solution, which appears to capture the qualitative behavior, does not follow a power law, diverging instead as Σ_{n=0}^∞ 4^(−n) x^(2^(−n) − 1) in the limit x → 0. (We further note that although the MF solution follows the general trend of the data, it exhibits a series of step singularities in addition to the jump at x*. Several step discontinuities are in fact observed in p_st in simulations of the random-neighbor version [27], although not in the nearest-neighbor version.) The dependence of the threshold x* on p is shown in Fig. 7. Note that as the anisotropy is augmented, x* increases almost linearly. Therefore, similarly to the BSab variant, the effect of increasing the anisotropy while keeping the number of updated sites constant is to decrease the interval on which p(x) = 0. Figure 8 shows the probability density P_J(r) for various values of p. While the isotropic case (p = 0.5) preserves the exponent of the original BS model, all cases with p ≠ 0.5 exhibit the exponent of the anisotropic BS model. In the BSab variant studied above, asymmetry was due to different numbers of sites being renewed on each side of the extremal site. In the present case we have a different kind of asymmetry: sites are updated in different ways on each side of the extremal one. We nevertheless find the same critical exponents as for the anisotropic BS model. This strengthens the evidence for the existence of two symmetry-based universality classes for models under extremal dynamics.
V. CONCLUSIONS
We perform a detailed investigation of the effect of symmetry on the scaling behavior of the Bak-Sneppen model. All dynamics which preserve the reflection symmetry of the original BS model possess the same critical exponents as the original model, while asymmetric dynamics lead to the exponents of the anisotropic BS model. Therefore, our work reinforces the evidence for two symmetry-based universality classes [31].
In order to obtain these results, we define modified BS models, and study them via mean-field theory and simulation. In the generalized BS model (or BSab variant), the degree of asymmetry is quantified by the anisotropy coefficient k a . Mean-field theory provides poor predictions for the threshold x * , but correctly predicts (i) that the stationary probability density is a step function, and (ii) that x * varies linearly with the number n of updated sites, as found in simulations on a one-dimensional lattice. The simulations also lead to several conclusions that go beyond MFT analysis: (i) the threshold value decreases as the degree of anisotropy is increased; (ii) for weak anisotropy we observe a crossover between symmetric and asymmetric scaling; (iii) for fixed anisotropy we find a scaling collapse of the probability P J (r) that successive minimum sites have a separation r.
In the peripheral square model with variable anisotropy, the degree of asymmetry is quantified by the parameter p, which controls on which side of the extremal site the neighbor is updated differently. Although this model is partially deterministic and has a different kind of asymmetry, we again encounter two symmetry-based universality classes. Moreover, we find that increasing the asymmetry reduces the region where p(x) = 0, as in the BSab variant.
These results, together with evidence for finite-size scaling [34], and connections with directed percolation [22], indicate that scaling in the BS model partakes of many of the characteristics associated with critical phenomena, both in and out of equilibrium. The particular distinguishing features of the Bak-Sneppen model and its variants appear to be associated with extremal dynamics.
ACKNOWLEDGMENTS
We thank CNPq, CAPES and FAPEMIG, Brazil, for financial support. | 2014-10-01T00:00:00.000Z | 2004-08-07T00:00:00.000 | {
"year": 2004,
"sha1": "2fa31be5471be02384ba852faf20244dfa5f3e29",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0408164",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5d30c0aa2b782b31e3f13fde6dd5fb452eeff695",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
233518596 | pes2o/s2orc | v3-fos-license | Overview of Self-Management Skills and Associated Assessment Tools for Children with Inflammatory Bowel Disease
: Self-management is a multi-modal approach for managing chronic conditions that encompasses a number of different elements: knowledge, adherence, self-regulation, communication, and cognitive factors. Self-management has been shown to be beneficial for adults with inflammatory bowel disease (IBD), and for children with IBD it may help them learn to take control of their complex treatment regimens and lead to positive disease outcomes. The development of self-management skills for children with IBD is vital in order to maximize their potential for health autonomy, but it is still an emergent field in this population. This review provides an overarching view of the self-management elements specific to children with IBD, and highlights outcome measures that may be used to assess skills within each field as well as the efficacy of targeted interventions.
Introduction
Inflammatory bowel disease (IBD) is a collective term for the clinical sub-types of Crohn's disease (CD) and ulcerative colitis (UC), both of which are complex, relapsing conditions affecting the gastrointestinal tract. The global incidence of pediatric IBD is increasing and it is now considered to be one of the most important and serious chronic diseases of childhood [1][2][3][4][5].
When IBD is diagnosed during childhood, it can be associated with more extensive disease, higher disease activity, and a more complicated progression than when diagnosed as an adult [6,7]. Treatment regimens may be complex involving the use of drug therapy, nutrition, surgical intervention, multidisciplinary team (MDT) involvement, and psycho-social input. It is, therefore, vital to identify an integrative approach that addresses all of these factors simultaneously, while being cognizant of the implication that children and their families will also be responsible for adhering to these multi-component strategies [8][9][10][11].
One such multi-modal approach is self-management, which involves children learning to take control of their own treatment strategies and gain those skills and attributes necessary to self-manage their IBD independently [12,13]. Self-management is centered on the concept of health autonomy, whereby the child's increasing age sees a paradigm shift of responsibility from disease management by their parent or family, to that of the individual. Research into the benefits of self-management for children with other chronic conditions has shown positive effects on clinical, behavioral, and psychological outcomes, with evidence of improvements to items such as; self-reported health status, health outcomes, school absenteeism, recreational activities, knowledge, quality of life, health care utilization, and self-efficacy [14,15]. In addition, self-management interventions have shown benefit among adults with IBD [16][17][18][19][20], but is an emerging field of study among children with IBD. While self-management interventions developed for adults with IBD may share components that are applicable to children, the pediatric population have unique characteristics that need to be considered [21]. Their developmental and cognitive attributes, as well as their increasing age and development of health autonomy, require interventions and assessment tools to be tailored specifically to these factors. Children of different ages will have varying levels of understanding and literacy, and there is also a paradigm shift of responsibility from parents to their child as determined by age and cognitive developmental ability. Both of these factors dictate that the approach to pediatric self-management needs to be age and developmentally appropriate, a factor that will most likely not have been considered in adult targeted interventions or assessments.
Self-management is a critical component for positive disease outcomes for children with IBD, and has great potential to lessen the disease burden and its sequelae [7,21]. The health behaviors and processes that are required of the child with IBD while developing self-management skills may be cognitive, emotional, or social in nature [10,22]. In order to further define these, a number of pediatric self-management frameworks were studied [10,21,23,24] to determine the core elements of self-management for review, which were categorized as:

• Disease and treatment knowledge
• Adherence
• Self-regulation
• Communication
• Cognitive factors

This synthesis review also highlighted that while skills in these core domains may be developed or achieved in isolation, effective self-management requires these elements to be addressed concurrently.
While the importance of self-management for children with IBD is increasingly recognized, it remains an emergent field and, subsequently, there is a dearth of assessment tools to measure these skills and attributes. Therefore, integrated approaches to address any gaps in children's self-management abilities are difficult to develop. Prior to developing interventions to maximize the potential for self-management it is vital to have standardized, reliable outcome measures to identify where the support is needed, and to establish the impact of any intervention through objective evaluation [25][26][27]. In order to achieve better disease outcomes for children with IBD it is important to understand the significance of self-management skills and attributes within the previously defined categories. The following review explores each self-management domain in detail and highlights a number of assessment tools that may be used to objectively measure them.
Relevant to all of the outcome measures identified, a number of factors should be considered when selecting assessment tools for use with children. These include whether items are age appropriate for the target population (considering the subject matter of the included items), respondent burden, and readability. Respondent burden commonly refers to the number of items children are required to complete, and readability refers to the ease of comprehension a text should have so that the target reader can understand it. Readability is frequently measured using the Flesch-Kincaid reading ease and school grade equivalence formulas [28,29]. The recommended reading ease score for children is more than 70 out of a maximum of 100 [30], and the school grade score for low health literacy groups such as children should be below a grade five (age ten years) reading level [29].
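A minimal Python sketch of these two readability formulas is given below; the constants are the published Flesch reading ease and Flesch-Kincaid grade level coefficients, while the vowel-group syllable counter and the sentence splitter are crude illustrative assumptions, not part of any validated instrument.

```python
import re

def count_syllables(word: str) -> int:
    # Crude estimate: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch reading ease, Flesch-Kincaid grade level)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    wps = n_words / sentences      # average words per sentence
    spw = syllables / n_words      # average syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return ease, grade

ease, grade = readability("The nurse will check your tummy. It will not hurt.")
# For child-facing material the review's thresholds are ease > 70
# and grade level < 5.
```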
Disease and Treatment Knowledge
For children with IBD, the acquisition of health knowledge regarding their own diagnosis, disease course, and treatment is integral to their ongoing adherence and the development of self-management skills [31,32]. Acquiring knowledge of their disease and of treatment regimens is considered a first step in the process of developing health autonomy, and represents a concrete and tangible accomplishment [32,33]. Such knowledge should be gradually developed in concordance with the previous content synthesis [23,24,34,35]. Unfortunately, knowledge deficiencies among children with IBD regarding their own medical history are frequently reported, the most common concerning: disease characteristics, surgical history, medication regimens, and how to contact their health care team [4,33,35,36]. Knowledge deficiencies such as these have consequences for pediatric self-management, as adults have reported that knowledge of disease processes, the role of medications, and their treatment plan were critical to being able to self-manage their condition [37]. In addition to understanding their own individual disease and treatment, it is vital for children to have a general understanding of their condition, the implications of their diagnosis, and the treatments available. Deficiencies are also evident across the pediatric literature in this domain, and are reported regarding drug and nutrition therapies, surgery, growth, and investigations [26,[38][39][40][41].
Knowledge Outcome Measures
All children with IBD should have their disease and treatment knowledge levels assessed, as disease management may be adversely affected by gaps or misconceptions in either [42,43]. The most efficient way to evaluate knowledge is with an assessment tool that is appropriate for, and has been validated with, the target population. While the concept of knowledge is an abstract, subjective notion, questionnaires go some way towards assigning a value to what participants understand about a subject. When considering an assessment tool for use, it should be appropriate for the target population, as well as relevant to the most recent treatment modalities and knowledge.
Five studies were identified that measured general IBD knowledge in adults (Table 1): the Knowledge Questionnaire (KQ) [44], the Crohn's and Colitis Knowledge Score (CCKNOW) [25], Inflammatory Bowel Disease Knowledge (IBD-KNOW) [45], the IBD-knowledge questionnaire Catalonia (QUECOMIICAT) [46], and a short questionnaire by Keegan et al. [47]. These assessment tools may be appropriate for adolescents but have not been validated in younger children; they contain complex items, alongside topics such as pregnancy and smoking that may be inappropriate to ask young children.
There are three assessment tools specifically for measuring general disease and treatment knowledge in children with IBD [26,38], and four studies that measure individual disease and treatment information [4,33,35,48] (Table 1). Of those measuring general IBD knowledge in children, the Emma electronic quiz developed by Tung et al. [38] asked a series of twelve IBD questions automatically selected from a database of 185 items, depending on the age and disease characteristics of participants, as well as four psychosocial questions. This tool was not found to be available for clinical use or research.
Haaland et al. [26] developed a tool called the 'IBD Knowledge Inventory Device' (IBD-KID), which was then re-developed following critical analysis [49] into a shorter, up-to-date version (IBD-KID2) [41]. IBD-KID2 has been validated among a number of population groups and has been shown to have appropriate readability and items that are generalizable between IBD clinical sub-types [50,51].
Of the four tools measuring individual disease and treatment information, three were informal and had not been validated, and one had undergone a process of validation. The non-validated assessment tools were presented in two formats; two studies utilized brief questionnaires regarding individual disease diagnosis, characteristics, and treatment [33,35] and the third study asked children and adolescents to fill out their own 'health passport' which could be carried at all times to provide information when required [4]. The fourth study presented a validated assessment tool called IBD KNOW-IT that asked a series of questions aimed at establishing whether children knew about their own disease and treatment [48].
Self-Regulation
Self-regulation is considered an essential attribute for self-management and includes three skill components: self-monitoring, self-evaluation, and self-reinforcement [52,53]. These skills relate to proactive and reactive disease management, whereby actions are performed to manage a problem (example: IBD symptom self-monitoring), or respond to a change in condition (example: making lifestyle modifications or seeking medical help) [53].
Symptom Self-Monitoring
Self-monitoring is a skill that enables children with IBD to track their symptoms in a structured way in order to promote reflection and awareness, and can augment communication of their disease state with the MDT. Tracking symptoms in this way is perceived as an efficient and cost-effective way for people to develop an awareness of their health state, but is also a means of documenting therapeutic benefit [54][55][56][57]. When used as a therapeutic tool, it has benefits over recall reports as it reduces the risk of recall bias, whereby symptoms become generalized unless extreme events have occurred, which skew the memory [58]. For adults with IBD, adherence to symptom self-monitoring is high, attributed to the fact that the data being collected are of personal importance [59]. In addition, commencement of self-monitoring has been reported to transform subsequent clinical encounters among adults with IBD, who also considered that it enabled them to reflect on their disease [59].
Self-Evaluation
Self-evaluation for children with IBD is a skill that requires the assessment and reflection of their symptoms, and is a skill that should be learnt through education and reinforcement from the MDT and with parental support. When symptom self-monitoring is performed longitudinally it can provide information not just on their current disease state, but also provide retrospective data that can help them evaluate and reflect on changes in their condition. This enables them to recognize their symptom levels during remission, changes during periods of disease exacerbation, or assess treatment efficacy. Self-evaluation should have the discreet but beneficial effect of helping create an awareness of their own disease course and prompt the behavior of seeking timely medical help in the case of worsening symptoms [57]. Qualitative work carried out with adults with IBD reinforced that self-monitoring was considered a valuable way for patients to enhance consciousness of their health state, and to engage with their doctors [54].
Self-Reinforcement
The process of self-reinforcement is concerned with children with IBD learning to make decisions on what action is required when symptom changes are identified. This process emphasizes the importance of education and communication by the MDT. While a child can be taught to recognize symptom exacerbations, when these occur in the pragmatic setting the child also needs to understand actionable instructions regarding what they should do, and to have decision support information available. Depending on the severity and nature of symptoms, it may be appropriate for them, or their parents, to call the GP, make contact with the MDT, or seek emergency help. Communicating their longitudinal symptoms to the appropriate party enables a clinical evaluation to be made, and sharing the information relevant to this evaluation is vital.
Self-Regulation Outcome Measures
In order for children with IBD to carry out symptom self-monitoring, evaluation, and reinforcement, an age appropriate and disease-specific tool is required that can provide symptom reports with clinical utility. Using a structured format for monitoring subjective variables, such as pain, well-being, and stool characteristics, can also quantify the disease burden for factors that are not readily observed, and may help the MDT understand the child's perspective of their symptoms [55,60].
The clinical assessment tools used by gastroenterologists for measuring disease activity are validated measures such as the Pediatric Crohn's Disease Activity Index (PCDAI) [61] and the Pediatric Ulcerative Colitis Activity Index (PUCAI) [62], or a physician's global assessment (PGA) based on clinical acumen. While these are not suitable for child self-report, due to their need for clinical data (PCDAI) and their complexity, adapted versions have been used with children with UC [57,63] and CD [57] that produce disease activity reports with good levels of crude agreement with clinician reports. No tool was identified in the literature that produced clinically relevant data via patient report and was generalizable to both UC and CD.
A number of 'Patient Reported Outcome' (PRO) measures for children with IBD have been developed to enable them to report their symptoms. PROs are derived solely from patient input, provide feedback directly from the patient, and require no response interpretation by an observer [64]. The identified PROs were developed in conjunction with children with IBD and highlight those symptoms they consider most important. Two UC-specific PROs have been developed using similar methodologies: the TUMMY-UC [64], and the Daily Ulcerative Colitis signs and symptoms Scale (DUCS) [56]. These were developed using signs and symptoms derived from interviews with children with UC, with the purpose of providing patient reports that are intended to supplement the clinician-completed PUCAI. The TUMMY-CD [65] is currently in development for children with CD using the same methodology. These PROs differ from clinical disease activity self-reports as they do not necessarily concentrate on, or reflect, the degree of inflammation or disease activity but are designed to report perceived symptom burden [64]. These PRO measures consequently quantify a different concept to disease activity such as that measured by the PUCAI and PCDAI, and provide additional aspects of outcome measurement [64]. Once again, there was no tool universal to both CD and UC.
One clinical self-report tool for children with IBD called 'IBDnow' was identified that was developed using the same subjective symptom report sections as in the PCDAI and PUCAI [66]. This tool is presented as a series of picture and text Likert scales that are used to categorize pain, well-being, and stool variables such as blood, consistency, and frequency. IBDnow has a very simple format that enables children from a young age to provide symptom reports shown to have good agreement with clinicians and is generalizable for use among both clinical subtypes [66].
Adherence
Treatment regimens for IBD have been developed with proven efficacy and positive benefit-to-risk profiles, but in order to maximize outcomes children need to practice treatment adherence [67]. Adherence is a far more multifaceted phenomenon in childhood than in adulthood. A complex dyad exists between children and their parents over responsibility for treatment adherence [68]. Furthermore, a triadic partnership between the child, parents, and medical team must be in place to support multidimensional treatment components and the dynamic maturation of the child [69,70].
Adherence rates for children with IBD show great variability, with 16-80% reported as non-adherent to their prescribed regimen [11,[71][72][73]. This variation may be a function of assessment method, patient age, and definition of the level of non-adherence [74]. Adherence to non-prescribed (over-the-counter) medications was significantly lower than to prescribed drugs [75], as was adherence to complementary medicines, the use of which has also been shown to reduce adherence to prescribed treatments [73,76]. Adherence rates are higher for exclusive enteral nutrition (EEN) programs, reported as 84% to 90% [77,78], although up to 33% of surveyed pediatric gastroenterologists reported non-adherence as the largest barrier to their use of EEN [79]. Treatment adherence also includes attending scheduled clinic appointments, having investigations performed, and such factors as collecting new prescriptions to ensure ongoing drug adherence and performing recommended lifestyle changes. Adhering to clinic appointments has been shown to improve drug adherence, reduce the frequency of relapses, and improve remission rates [80][81][82]. Improved clinic attendance leads to stronger beliefs in the importance of medications, which in itself is a strong predictor of adherence [83].
Adherence and IBD Outcomes
For children with IBD, medication adherence rates positively correlate with remission and negatively correlate with disease severity [84,85]. Non-adherence is consistently associated with negative psychosocial outcomes such as reduced HRQoL and increased health care utilization [86][87][88][89]. A number of risk factors for non-adherence have been identified in this population group: longer disease duration, high disease activity, greater age, use of herbal medications (a consequence of having too many drugs to take), having fewer follow-up appointments, and poorer parent-reported psychosocial HRQoL and family functioning [72,73,75,76,90,91].
Adherence Outcome Measures
An accurate assessment of medication non-adherence enables clinicians to view it as a diagnosable and treatable medical condition [92], and provides opportunities for education, to identify barriers, and to provide targeted interventions [93]. However, in order to design interventions to improve and maximize adherence, there first needs to be an understanding of which children are non-adherent and why [70]. No gold standard, validated adherence measure for children exists, and all identified techniques have been proven to have limitations [94], therefore those direct, indirect, and subjective measures available should be examined for feasibility, accuracy, and limitations prior to their use.
The top three assessments reportedly used are subjective clinical interviews (with patient or parent), biological assay for drug markers (blood or urine), and a daily adherence diary, but there are many techniques available [95]:
• Patient or parent reports using interviews may be time consuming and subjective, and may overestimate adherence by up to 23% in adolescents with IBD when compared to objective measures [74,94,[96][97][98]. The Medication Adherence Measure is a validated semi-structured interview that is widely used in pediatrics, and a correction factor for child and parent self-report data has been produced that should provide more accurate adherence rates from subjective reports [99].
• Electronic monitoring devices record each opening of the medication container. This long-term monitoring method can reveal a spectrum of dosing problems; however, it relies on presumptive data on ingestion, and is costly and prone to malfunctions [97,101].
• Pill counts involve totaling tablets (or liquid quantities) at two time intervals and comparing the result with what is expected from the prescribed dosing regimen [70]. While this method is simple, feasible, and objective (see the sketch after this list), it is also prone to inaccuracy and measures removal of the drugs from the container, not actual ingestion [70].
• Validated adherence scales are structured surveys that ask specific questions regarding adherence, with responses often measured using a Likert scale. None have been developed for children yet, but the most commonly used scale with adults is the Morisky scale [102], which has also been adapted for use with adults with IBD [103]. However, this scale measures barriers to adherence instead of non-adherence frequency, and may overestimate or underestimate adherence as items only account for daily medication regimens [98].
• A simple adherence visual analogue scale (VAS) provides a self-report method that is extremely quick to comprehend and complete. Studies comparing the Morisky scale to a simple VAS showed that the VAS provides a more objective measure to quantify adherence [98].
• Pharmacy records regarding refill rates and the proportion of days covered by a filled prescription provide practical data on refill behaviors believed to correspond to medication taking. However, they do not directly estimate adherence and once again assume ingestion [31].
• The pediatric IBD disease activity indices (PUCAI and PCDAI) are frequently incorporated into adherence studies as a way of correlating symptoms with measured adherence.
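To make the pill-count and pharmacy-record items above concrete, here is a small Python sketch; the function names, the simple date arithmetic, and the example values are illustrative assumptions, not any published adherence instrument.

```python
from datetime import date

def pill_count_adherence(dispensed: int, returned: int,
                         daily_dose: int, days: int) -> float:
    """Percentage of expected doses removed from the container.
    Note this measures removal, not actual ingestion."""
    expected = daily_dose * days
    return 100.0 * (dispensed - returned) / expected

def proportion_of_days_covered(fills: list[tuple[date, int]],
                               start: date, end: date) -> float:
    """Fraction of days in the observation window covered by a
    filled prescription, from pharmacy refill records."""
    covered = set()
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date.toordinal() + offset
            if start.toordinal() <= day <= end.toordinal():
                covered.add(day)
    window = end.toordinal() - start.toordinal() + 1
    return len(covered) / window

# Example: 60 tablets dispensed, 12 returned after a 30-day interval
# of a twice-daily regimen -> 80% of expected doses removed.
adherence = pill_count_adherence(60, 12, daily_dose=2, days=30)
```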
None of these measurement techniques quantify exactly what each patient has taken and should be considered as measuring variables that are indicative of adherence rather than being measures of absolute medication use [104]. They do, however, provide an opportunity to triangulate multiple effective methods to provide accurate assessments of adherence [68,97].
Cognitive Attributes
Specific cognitive processes are inextricably linked to the development of self-management and health autonomy skills: patient activation and self-efficacy.
Patient Activation
Patient activation is defined as when an individual demonstrates the necessary skills, knowledge, and motivation needed to self-manage their own health and participate in the decision-making process [105]. The defined stages of activation mark the progress of an individual from being a passive recipient of care who may be overwhelmed by the task, to gaining knowledge and confidence in their skills, to eventually taking action and performing relevant self-management behaviors [106]. Little has been published concerning patient activation in children, but it has been associated with improved health outcomes in adults with IBD, with those with higher activation levels being less likely to experience anxiety and depression and more likely to be in remission [105]. Overall, adults who are active participants in their care are more likely to be adherent to their treatment regimen, engage in healthy behaviors, have lower health care utilization, and have higher rates of accessing preventative care [106]. Higher patient activation is followed by improvements in self-management behaviors [106].
Self-Efficacy
Self-efficacy is a prerequisite for self-management, and has been shown to mediate the relationship between physical, psychological, and social functioning and disease outcomes in a number of chronic diseases [107]. Self-efficacy relates to an individual's belief and confidence in their own capability to succeed in specific situations or to complete tasks [32,108], and high self-efficacy has been linked to successful health behavior change and better engagement with preventative health [107]. The importance of self-efficacy specifically in children with IBD is a nascent field of study, but during validation studies for self-efficacy assessment tools, higher levels were related to greater health care satisfaction, more frequent communication with the MDT, and higher IBD-related knowledge [35]. Among children with other chronic conditions, those with higher self-efficacy levels reported better HRQoL, fewer depressive symptoms, and enhanced disease coping skills [7].
Cognitive Attribute Outcome Measures
Patient Activation
The commonly used measure of patient activation among adults is the Patient Activation Measure (PAM) [109]. A shortened version developed by the same research group [110] has been used effectively in a cohort of children with IBD [111]. In addition, a parent version of the PAM (Parent-PAM) has been developed that may be utilized where an assessment is required of parents' activation concerning their child's health [112].
Self-Efficacy
Assessment tools for measuring self-efficacy in the adult IBD population were developed prior to those for children and adolescents. The IBD Self-Efficacy Scale (IBD-SES) was first developed for adults with IBD in 2011 [107] and underwent further validation in 2016 [113]. This scale was adapted for use with adolescents to become the IBD-SES-A [7], a scale validated for children with IBD aged twelve years and over [114]. The IBD-Yourself self-efficacy assessment tool has been validated for children aged fourteen and over, but is very long, with over 70 items, and may therefore be prohibitive for use in younger children [108].
Communication
Communication is integral to the development of self-management skills. When children are younger, their communication with the MDT will be part of a triadic process between themselves, their parents, and the MDT, which will continue until their cognitive and emotional development eventuates in health autonomy [13,115]. It has been demonstrated that more direct communication between physician and child during clinical encounters contributes to an improved relationship in terms of satisfaction with care, HRQoL, reduced worry, adherence to treatment, and better health outcomes [116][117][118]. However, when quantified, children have been shown to contribute only 4% of consultation time, parents 35%, and doctors 61% [119]. Adults with IBD reporting poor communication with their clinician demonstrated a 19% higher risk of non-adherence than those reporting good communication [120].
Parents and the MDT can promote children's involvement by encouraging and inviting them to express their views, ask questions, and participate in discussions regarding their own health care [115]. The MDT can also provide education on how to respond to questions on health and illness by utilizing the child's individual and general IBD knowledge, a process which leads to navigational health literacy, a necessary attribute for good decision-making [34,116,121].
Communication Outcome Measures
In order to help children with IBD communicate with their family and the MDT about their condition, structured methods of reporting may be considered beneficial. While encouraging children to communicate regarding their current and retrospective disease state is indisputably important, defining and describing these concepts can be challenging for many children. Education regarding self-regulation, and the provision of self-monitoring tools, may help clarify these issues for the child and enable their dialogue to be of value in the clinical setting [54]. There is, therefore, an ongoing need to provide practical tools to support self-regulation that may help address communication gaps, and will enable a shared-care approach to IBD management [122].
Measuring communication skills among children with IBD should be done in both subjective and objective ways. Subjectively, ongoing attention to a child's level of interaction with the MDT during consultations will identify areas for encouragement and support. Prompting children to compile a list of questions to ask during clinical appointments, and to report their own treatment, is a simple way to begin encouraging the development of self-management skills. Following education regarding self-regulation and disease knowledge, they can be encouraged to report their symptoms and join clinical discussions regarding their ongoing management. Providing children with objective measures for symptom reports, such as the IBDnow tool [66], can facilitate a structured method of communicating their disease state in terms of symptom burden, and longitudinal reflection can encourage them to establish flare recognition and evaluate treatment efficacy. Keeping a diary of their treatment, symptoms, and questions may combine such subjective and objective methods, encouraging a habit that will be beneficial throughout their disease course into adulthood.
Self-Management Skills
This review has identified those processes and behaviors considered integral to the development of self-management skills for children with IBD: knowledge, self-regulation, adherence, cognitive attributes, and communication. Concurrent development of these self-management skills contributes to optimal disease outcomes throughout childhood, and maximizes the chance of a successful transition from the pediatric to adult health care team [120]. It is therefore appropriate to identify those methods available for assessing overall self-management skills relating to all domains in order to provide targeted interventions for those skills requiring additional support.
A number of approaches to measuring self-management in children have been suggested, including measuring the allocation of responsibility for health care tasks between children and their parents, or quantifying the level of self-management skills with a numeric score that can be measured longitudinally to determine whether skills are increasing [123]. The appropriate assessment of self-management skills in the research setting can quantify the efficacy of targeted interventions, and in the clinical setting may identify children with IBD at risk of sub-optimal health autonomy who could benefit from MDT input in the form of targeted education and interventions.
Self-Management Outcome Measures
Studies addressing self-management interventions for adults with IBD used the Health Education Impact Questionnaire, which measures skills, behaviors, and cognitive aspects across a number of domains [124]. This tool is not IBD-specific and has not been validated among children. An initial search for self-management assessment tools specific to children with IBD revealed a wide range of approaches, subjects, and formats, with some tools labelled as self-management tools instead pertaining to transition readiness, knowledge, self-efficacy, or education. A number of tools have also not been validated. Given the dearth of specific tools, it was considered pertinent to perform a review of those identified to ascertain whether any were appropriate for use in the whole pediatric IBD population for the assessment of practical self-management skills.
Nine tools were identified through the literature review as being related to self-management assessment (Table 2). The two simplest tools are non-validated IBD transition checklists that have been developed from the literature, expert opinion, and anecdotal evidence, and provide a guide to age expectations for the development of particular skills [5,125]. Two further tools were presented that could be used by children with IBD to report whether self-management tasks could be performed by the participants on their own or with varying levels of help [32,126]. The tool presented by Whitfield et al. [126] was devised by the ImproveCareNow IBD network in the U.S. [127] and is included in their self-management manual [128], but has not been validated. Four tools were identified that were generic but had been validated among children with IBD or long-term gastrointestinal conditions: STARx [129,130], the UNC TRxANSITION Scale [131], Transition-Q [132], and a tool by Williams et al. [133]. Three of these generic tools contained items related specifically to transition and to adolescent concerns that could be deemed inappropriate for younger children with IBD who may be learning self-management (examples: adult clinics, smoking, drugs, pregnancy, sex, and alcohol), or were developed in the US and contained items relating to health insurance relevant only to that setting. The final tool was a practical self-management skills assessment tool called IBD-STAR, which is validated for children with IBD over the age of eight years [134]. IBD-STAR assigns scores for a number of practical self-management tasks depending on the allocation of responsibility reported by the child with IBD, and has been shown to produce reports from participants that are in line with the age expectations in the checklists previously identified [5,125].
Table 2 legend: Country of origin: US = United States, CAN = Canada, NZ = New Zealand; Topics: IBD, T = transition, SM = self-management; readability: E = reading ease score/100, Gr = grade level equivalent; * relates to items specific to health insurance, ** relates to items specific to adult clinics, pregnancy, sex, recreational drugs, alcohol.
Conclusions
This review has outlined the multi-faceted nature of self-management and the importance of providing a cohesive approach to the essential processes involved: knowledge, communication, adherence, self-regulation, and cognitive factors (Figure 1). The development or identification of interventions that address all of these inter-related components is vital in order to improve outcomes for children with IBD, and efficacy should be assessed using validated outcome measures. Clinicians and the MDT should recommend practical self-management activities to children with IBD and their parents during clinical encounters, and children should be routinely assessed with targeted outcome measures to identify where additional support may be beneficial.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-05-04T22:06:38.011Z | 2021-03-30T00:00:00.000 | {
"year": 2021,
"sha1": "ea8976755b3b5d886bb5e3f61d3be14bfabd6ab1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2624-5647/3/2/7/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5fa8949ad0c290276eec48bdb19f7b0c95329e11",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235291707 | pes2o/s2orc | v3-fos-license | Study of biogas as an alternative energy produced in landfills
This article estimates the power output, as well as the net energy, produced from landfill biogas as a form of alternative energy production; the estimate can be made theoretically for cities that already have a landfill or plan to build one.
Introduction
Currently, one of the most common problems in large cities of developing countries, such as Quito, is the final disposal of solid waste. Due to poor management, this waste is a source of contamination of both the soil and the atmosphere through the production of biogas, which can nevertheless be used as an energy source due to its fuel properties [1].
The production of electrical energy from the decomposition of waste in landfills is not a common technology in these cities, nor is alternative electricity generation in general [2]. The use of biogas as a fuel source for electric generators helps to improve the treatment and management of waste from agricultural, forestry, or urban sources [3].
Given this argument, this article theoretically analyzes power generation using landfill biogas as an alternative fuel source. To this end, the quantity and quality of the biogas that can be obtained at the Quito landfill must be determined, along with the corresponding amount of electrical energy that this biogas can produce [4].
Materials and methods
The following methods were used in this article: deductive, theoretical, and modeling methods. The research was carried out, and the data were obtained, at the El Inga landfill located in Pichincha province, outside the city of Quito, Ecuador [5].
The El Inga landfill began its activities in 2003, but the biogas flow has been calculated from 2012 onward, because the amount of biogas that the landfill emits from that year on is already considerable (Table 1).
The gases generated at landfills are products of the biological decomposition of the organic fraction of the stored waste. The source of biogas is the biodegradable fraction of the waste, on average 60-80% of the landfill mass, which includes food waste, gardening waste, waste paper, and other cellulose waste. The rate and completeness of the waste biodegradation processes depend on the morphological and chemical composition of the waste, climatic and geographical conditions, and the stage of the landfill's life cycle.
The biodegradation process includes phases of aerobic and anaerobic destruction. Anaerobic processes are the main source of pollutant emissions.
The main phases of anaerobic waste biodegradation are hydrolysis, acidogenesis, acetogenesis, and methanogenesis. The methanogenic phase includes two stages: active and stable. At the active stage, the enzymatic decomposition of the acids formed in the acetogenic phase occurs, which is accompanied by a significant evolution of gases (methane, carbon dioxide, mercaptans, ammonia, etc.).
Hydrogen sulphide is the reduced sulfur compound that predominates in biogas. The concentration of methane in biogas increases to 40-60%. The maximum biogas yield occurs after a two-year aging of the waste in the landfill and the stabilization of decomposition processes [6].
As suggested by this method, the lower end of the calculation is 0.00312 Nm³/kg and the upper one is 0.0125 Nm³/kg. These extremes correspond to the worst-case scenario (PC) and the best-case scenario (BC):

BG = scenario factor (PC/BC) [Nm³/kg] · waste [ton/year] · 1000 [kg/ton] · [year/525600 min] (1)

The feasible biogas estimate is then:

Biogas generated (feasible) = Biogas generated (BC) − Biogas generated (PC) (2)

To calculate the energy that can be obtained from the Quito landfill, some site-specific parameters must be taken into account: the methane concentration in the landfill biogas, with an average value of 56% (CH₄ concentration range 40%-65%); the calorific value of methane, which is 10 kWh/m³; and the average efficiency of the internal combustion engine, which in this case is 38%.
With the data described, we calculate the energy potential of 1 m³ of landfill biogas:

Biogas energy potential = CH₄ concentration · calorific value of methane = 0.56 · 10 kWh/m³ = 5.6 kWh/m³ (3)

Net electrical power = Biogas volume · Energy potential · motor efficiency (4)
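A minimal numerical sketch of equations (1)-(4) in Python follows; the annual waste tonnage used in the example is an illustrative assumption, while the scenario factors, methane fraction, calorific value, and engine efficiency are the values quoted above.

```python
MINUTES_PER_YEAR = 525_600

def biogas_flow(tons_per_year: float, scenario_factor: float) -> float:
    """Eq. (1): biogas flow in Nm^3/min from annual waste tonnage.
    scenario_factor is 0.00312 (PC) or 0.0125 (BC) Nm^3/kg."""
    return scenario_factor * tons_per_year * 1000 / MINUTES_PER_YEAR

def net_power_kw(biogas_m3_per_min: float,
                 ch4_fraction: float = 0.56,
                 ch4_calorific_kwh_m3: float = 10.0,
                 engine_efficiency: float = 0.38) -> float:
    """Eqs. (3)-(4): net electrical power in kW."""
    energy_potential = ch4_fraction * ch4_calorific_kwh_m3  # kWh per m^3 biogas
    return biogas_m3_per_min * 60 * energy_potential * engine_efficiency

tons = 300_000                          # hypothetical annual tonnage
worst = biogas_flow(tons, 0.00312)      # PC scenario
best = biogas_flow(tons, 0.0125)        # BC scenario
feasible = best - worst                 # Eq. (2)
power = net_power_kw(feasible)
```

With the quoted site parameters, the energy potential of 1 m³ of biogas evaluates to 5.6 kWh, and the net figure is further reduced by the 38% engine efficiency.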
Discussion
The feasible biogas calculation in Table 2 is an estimate that takes into account several factors, such as a scenario factor, approximate data, and approximate calculations, in order to later calculate the power output and net energy. Biogas production has been increasing since 2012: because the landfill has not been in operation for many years, it is in the methanogenesis stage, and the climate of the region helps the waste degrade much faster, so the biogas flow is much stronger and consequently more convenient to exploit [9]. In Tables 2 and 3, the amount of waste deposited in the landfill gradually increases each year. In 2020 it can be observed that the amount of waste is high but does not grow gradually as in previous years. This is due to several factors, but the main one is the COVID-19 pandemic: not only Quito but the whole country was under pandemic regulations, so that, for example, parties, celebrations, cultural events, and any other events with crowds of people, which always produce large amounts of waste, could not be organized.
In Tables 2 and 3 we can see how the quantity of biogas produced at the El Inga landfill is growing, but there are several periods in which the production of biogas is stronger, much faster, or becomes constant. This happens for the following reason: the El Inga landfill began operating in 2003, and the quantities of biogas emitted from 2003 onward were not significant, since the stored waste was in the acetogenesis stage. Since 2012, the waste stored in the landfill has been in the methanogenesis stage, in which the amount of biogas produced is at its maximum capacity (this lasts about 8 to 10 years). In 2020, it can be seen how the amount of biogas normalizes and becomes stable.
The calculation of the power that could technically be generated was carried out in Table 2; this theoretical value of the generated power shows that the project is technically feasible and economically viable.
In Tables 2 and 3 we can see that the amount of biogas is higher than that of methane. This is because methane makes up on average 40%-60% of the biogas; in this case it is taken as 56% for the calculation because of the geographical location, the climate, and the type of waste stored in the sanitary landfill (the El Inga sanitary landfill stores urban waste, of which 60% is organic waste).
In Tables 2 and 3 we can see that the power output is greater than the net energy produced. This is expected, since the net energy is calculated from the power output and the efficiency of four-stroke combustion engines.
Conclusion
The current situation of technological development and excessive consumption, driven by the expansion of the global economy, translates into the problem of the generation of municipal solid waste, creating a negative impact on the environment; therefore, within controlled management approaches, viability is sought for sustainable development, including non-energy purposes [10].
Producing fuel from the management of municipal solid waste for subsequent use can complement the economic and environmental profitability of this activity, with the generation of electrical energy being one option.
The use of biogas obtained from landfills in developed countries such as Norway, Sweden, or Germany is a reality that helps not only the environment but also the economy. If this technology could be used in developing countries, such as those of Latin America, it would be an incredible help to their economies, most of which are in a chaotic situation. | 2021-06-03T01:15:45.382Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "a42a74294f4d3bb99a5324576b3e0829d57217be",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/723/5/052008",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a42a74294f4d3bb99a5324576b3e0829d57217be",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
222408361 | pes2o/s2orc | v3-fos-license | Characterising neuropsychiatric disorders in patients with COVID-19
Comments on an article by Aravinthan Varatharaj et al. We commend Aravinthan Varatharaj et al for their study on neurological and neuropsychiatric complications of COVID-19, and we echo their comments on the importance of interdisciplinary work in the clinical neurosciences. However, we are concerned by their reliance on the vague term altered mental status and the use of the term encephalopathy without reference to delirium.
Psychiatry in Lebanon
Lebanon has a population of approximately 6·8 million people. The country has also been accommodating around 250 000 refugees from Palestine since the 1950s and 1·5 million refugees from Syria since 2010. Despite the prevalence of psychiatric disorders at 17% and a treatment gap of 89·1%, 1 Lebanon's mental health services remain underfunded and are usually limited to urban centres. 2 Mental health care in Lebanon faces many challenges, some of which include the absence of a mental health act, high stigma surrounding mental health, restricted government funding, a low general health budget, elevated costs of mental health care with inadequate insurance coverage, few inpatient psychiatric units, and a shortage of mental health professionals including psychiatrists, psychiatry nurses, and social care workers. These challenges have been aggravated by the COVID-19 pandemic, a major explosion in the port of Beirut on Aug 4, 2020, 3 and political unrest occurring in the country since October, 2019.
To improve mental health care in a timely manner, the Lebanese Government and international organisations should focus on allocating appropriate funding for mental health services, treatment, and training for healthcare workers; scaling up community services; promoting mental health through awareness campaigns; and providing appropriate psychological first aid.
Characterising neuropsychiatric disorders in patients with COVID-19
We commend Aravinthan Varatharaj and colleagues 1 for their study on neurological and neuropsychiatric complications of COVID-19, and we echo their comments on the importance of interdisciplinary work in the clinical neurosciences. However, we are concerned by their reliance on the vague term altered mental status and the use of the term encephalopathy without reference to delirium.
The absence of delirium in the Article's case definitions is troubling and imposes considerable constraints on the interpretation of this study, because delirium is likely to be the most frequent neuropsychiatric complication of COVID-19. 2 Consistent with the high prevalence of delirium in most serious, acute diseases, we expect delirium to be present in at least a quarter of older patients (aged ≥65 years) with COVID-19 and more than two-thirds of severe cases. However, most reports have used nonstandard terminology to describe the mental status phenotypes in COVID-19 (eg, dysexecutive syndrome, confusion, altered consciousness, or altered mental status). Of note, confusion was the fifth most common presenting feature of COVID-19 overall in the International Severe Acute Respiratory and Emerging Infection Consortium WHO study (n=20 133). 2
The absence of delirium in the Article's case definitions is troubling and imposes considerable constraints on the interpretation of this study, because delirium is likely to be the most frequent neuropsychiatric complication of COVID19. 2 Consistent with the high prevalence of delirium in most serious, acute diseases, we expect delirium to be present in at least a quarter of older patients (aged ≥65 years) with COVID19 and more than twothirds of severe cases. However, most reports have used nonstandard terminology to describe the mental status phenotypes in COVID19 (eg, dysexecutive syndrome, confusion, altered consciousness, or altered mental status). Of note, confusion was the fifth most common presenting feature of COVID19 overall in the International Severe Acute Respiratory and Emerging Infection Consortium WHO study (n=20 133). 2 In Varatharaj and colleagues' study, 1 altered mental status is defined as "an acute alteration in personality, behaviour, cognition, or consciousness". Additional, undefined terms include unspecified encephalopathy, newonset psychosis, and neurocognitive (dementialike) syndrome. Presuming acute onset, most of these cases probably would have fulfilled DSM5 criteria for delirium. The authors do acknowledge a potential reporting bias, but we suggest that a broader approach to reporting of cases, for example by geriatricians and acute physicians, would have generated a more representative sample.
The issue of the damaging consequences of inconsistent terminology was the subject of a recent consensus statement, which distinguished the clinical phenotype (delirium) from the underlying neuropathophysiology (acute encephalopathy). 3 Of note, animal models substantiate this approach. For example, peripheral inflammation in such models has been shown to provoke both a delirium-like syndrome and new neurophysiological changes in the brain. 4 The term delirium disorder aims to integrate the two previous terms and the models they represent. 5 We propose that it is inadequate to use the term delirium without specifying the underlying cause or putative neuropathophysiology, or to use the term acute encephalopathy without consistently characterising the mental status phenotype.
AJCS reports grants from Horizon2020, during the conduct of the study, and is an advisor for Prolira. CC reports grants from IONIS Pharmaceuticals, outside of the submitted work. ERLCV reports fees for travel and accommodation from MA Healthcare, fees for a course and accommodation from NHS Digital, a sponsored | 2020-10-16T13:08:32.295Z | 2020-10-15T00:00:00.000 | {
"year": 2020,
"sha1": "bba5c264c8f9d4600baf60b79492ef328d83b23b",
"oa_license": null,
"oa_url": "http://www.thelancet.com/article/S2215036620303461/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "bba5c264c8f9d4600baf60b79492ef328d83b23b",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233181625 | pes2o/s2orc | v3-fos-license | Energy-independent optical $^{1}S_{0}NN$ potential from Marchenko equation
We present a new algebraic method for solving the inverse problem of quantum scattering theory based on the Marchenko theory. We applied a triangular wave set for the Marchenko equation kernel expansion in a separable form. The separable form allows a reduction of the Marchenko equation to a system of linear equations. For the zero orbital angular momentum, a linear expression of the kernel expansion coefficients is obtained in terms of the Fourier series coefficients of a function depending on the momentum $q$ and determined by the scattering data in the finite range of $q$. It is shown that a Fourier series on a finite momentum range ($0<q<\pi/h$) of a $q(1-S)$ function ($S$ is the scattering matrix) defines the potential function of the corresponding radial Schr\"odinger equation with $h$-step accuracy. A numerical algorithm is developed for the reconstruction of the optical potential from scattering data. The developed procedure is applied to analyze the $^{1}S_{0}NN$ data up to 3 GeV. It is shown that these data are described by optical energy-independent partial potential.
I. INTRODUCTION
The inverse problem of quantum scattering is essential for various physical applications, such as the extraction of internuclear potentials from scattering data and similar problems. The main approaches to solving the problem are the Marchenko, Krein, and Gelfand-Levitan theories [1][2][3][4][5][6][7]. The ill-posedness of the inverse problem complicates its numerical solution. So far, the development of robust methods for solving the problem remains a fundamental challenge for applications. This paper considers a new algebraic method for solving the inverse problem of quantum scattering theory. We derive the method from the Marchenko theory. To this end, we propose a novel numerical solution of the Marchenko equation based on approximating the integral kernel as a separable series in triangular and rectangular wave sets. Then we show that the series coefficients may be calculated directly from the scattering data on a finite segment of the momentum axis. We offer an exact relationship between the accuracy of the potential function and the range of known scattering data.
The concept of optical potential (OP) is a useful tool in many branches of nuclear physics. There are a number of microscopically derived [8][9][10][11][12][13][14][15][16][17][18][19][20][21] as well as phenomenological [12,19,[23][24][25][26][27][28][29][30] optical potentials used for the description of nuclei and nuclear reactions. Nucleon-nucleon potentials are used as an input for (semi)microscopic descriptions of nuclei and nuclear reactions [8,9,12,17,19,21,26]. We analyzed the Marchenko theory [2,3] and found that it is applicable not only to unitary S-matrices but also to non-unitary S-matrices describing absorption. That is, the Marchenko equation and our algebraic form of the Marchenko equation allow us to reconstruct a local and energy-independent OP from an absorbing S-matrix and the characteristics of the corresponding bound states. We applied the developed formalism to analyze the 1 S 0 N N data up to 3 GeV and showed that these data are described by an energy-independent optical partial potential. Our results contradict the conclusions of [18], which states that "... the optical potential with a repulsive core exhibits a strong energy dependence whereas the optical potential with the structural core is characterized by a rather adiabatic energy dependence ..." On the contrary, we reconstructed from the scattering data a local and energy-independent N N soft-core OP.
II. MARCHENKO EQUATION IN AN ALGEBRAIC FORM
We write the radial Schrödinger equation in the form:

$$\left(\frac{d^2}{dr^2} + q^2 - \frac{l(l+1)}{r^2}\right)\psi_l(r,q) = V(r)\,\psi_l(r,q). \qquad (1)$$

Initial data for the Marchenko method [1] are:

$$\{S(q)\ (q \geq 0),\ \tilde{q}_j,\ M_j,\ j = 1,\dots,n\}, \qquad (2)$$

where $S(q) = e^{2\imath\delta(q)}$ is a scattering matrix dependent on the momentum $q$. The S-matrix defines the asymptotic behavior at $r \to +\infty$ of the solutions of Eq. (1) that are regular at $r = 0$ for $q \geq 0$; $\tilde{q}_j^2 = E_j \leq 0$, where $E_j$ is the $j$-th bound state energy ($-\imath\tilde{q}_j \geq 0$), and $M_j$ is the $j$-th bound state asymptotic constant. The Marchenko equation is a Fredholm integral equation of the second kind:

$$F(x,y) + L(x,y) + \int_x^{+\infty} L(x,t)\,F(t,y)\,dt = 0. \qquad (3)$$

We write the kernel function as

$$F(x,y) = F_S(x,y) + F_B(x,y), \qquad (4)$$

$$F_S(x,y) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} h_l^{+}(qx)\,[1 - S(q)]\,h_l^{+}(qy)\,dq, \qquad F_B(x,y) = \sum_{j=1}^{n} h_l^{+}(\tilde{q}_j x)\,M_j^{2}\,h_l^{+}(\tilde{q}_j y), \qquad (5)$$

where $h_l^{+}(z)$ are the Riccati-Hankel functions. Solution of Eq. (3) gives the potential of Eq. (1):

$$V(r) = -2\,\frac{dL(r,r)}{dr}. \qquad (6)$$

There are many computational approaches for the solution of Fredholm integral equations of the second kind.
Many of the methods use a series expansion of the equation kernel [31][32][33][34][35][36][37][38]. We also use this technique. Assuming a finite range $R$ of the bounded potential function, we approximate the kernel function as

$$F(x,y) \approx \sum_{k=0}^{N}\sum_{j=0}^{N} F_{k,j}\,\Delta_k(x)\,\Delta_j(y), \qquad (7)$$

where $F_{k,j} \equiv F(kh, jh)$, and the basis functions are

$$\Delta_k(x) = \begin{cases} 1 - |x/h - k|, & |x/h - k| \leq 1, \\ 0, & \text{otherwise}, \end{cases} \qquad (8)$$

where $h$ is some step and $R = Nh$. Decreasing the step $h$, one can approach the kernel arbitrarily closely at all points. As a result, the kernel is presented in a separable form. We solve Eq. (3) by substituting

$$L(x,y) \approx \sum_{j=0}^{N} P_j(x)\,\Delta_j(y). \qquad (9)$$

Substitution of Eqs. (7) and (9) into Eq. (3), taking into account the linear independence of the basis functions, gives

$$\sum_{k=0}^{N} F_{k,j}\,\Delta_k(x) + P_j(x) + \sum_{k=0}^{N}\sum_{k'=0}^{N} P_k(x)\,F_{k',j} \int_x^{+\infty} \Delta_k(t)\,\Delta_{k'}(t)\,dt = 0. \qquad (10)$$

We need the values $P_k(hp) \equiv P_{p,k}$ ($p, k = 0,\dots,N$). In this case the integrals in Eq. (10) may be calculated:

$$\int_{hp}^{+\infty} \Delta_k(t)\,\Delta_{k'}(t)\,dt = h\left(\tfrac{2}{3}\,\delta_k^{k'}\,\eta_{k>p} + \tfrac{1}{3}\,\delta_k^{k'}\,\delta_k^{p} + \tfrac{1}{6}\,\delta_{|k-k'|}^{1}\,\eta_{k+k'>2p}\right). \qquad (11)$$

Here, along with the Kronecker symbols $\delta_k^p$, symbols $\eta_a$ are introduced, which are equal to one if the logical expression $a$ is true, and are equal to zero otherwise. Considering also that $\Delta_k(hp) \equiv \delta_k^p$, we finally get a system of equations

$$F_{p,j} + P_{p,j} + \sum_{k=0}^{N}\sum_{k'=0}^{N} P_{p,k}\,F_{k',j} \int_{hp}^{+\infty} \Delta_k(t)\,\Delta_{k'}(t)\,dt = 0 \qquad (12)$$

for each $j, p = 0,\dots,N$. Solution of Eq. (12) gives $P_k(hp) \equiv P_{p,k}$. Potential values at the points $r = hp$ ($p = 0,\dots,N$) are then determined from Eq. (6) by some finite difference formula. Next, we consider the case $l = 0$, for which $h_0^{+}(qx) = e^{\imath qx}$ and

$$F(x,y) = F(x+y) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} [1 - S(q)]\,e^{\imath q(x+y)}\,dq + \sum_{j=1}^{n} M_j^{2}\,e^{\imath \tilde{q}_j (x+y)}. \qquad (13)$$

We approximate the kernel as follows:

$$F(x) \approx \sum_{k=0}^{2N} F_{0,k}\,H_k(x), \qquad (14)$$

where $F_{0,k} \equiv F(kh)$ as in Eq. (7) for $l = 0$, and the used basis set is

$$H_k(x) = \begin{cases} 1, & |x/h - k| \leq 1/2, \\ 0, & \text{otherwise}. \end{cases} \qquad (15)$$

The Fourier transform of the basis set Eq. (15) is

$$\tilde{H}_k(q) = \int_{-\infty}^{+\infty} H_k(x)\,e^{\imath qx}\,dx = \frac{2\sin(qh/2)}{q}\,e^{\imath qkh}. \qquad (16)$$

The function $Y(q)$, determined by the scattering data through $q\,[1 - S(q)]$, may then be presented in terms of these transforms (Eq. (17)) and rearranged (Eq. (18)) so that the left side of the expression is represented as a Fourier series on the interval $-\pi/h \leq q \leq \pi/h$. Taking into account that $Y(-q) = Y^{*}(q)$, we get a linear system for the coefficients $F_{0,k}$ in terms of the integrals

$$\int \mathrm{Im}\!\left[\,Y(q)\,e^{\imath qhk}\,\right] q\,dq, \qquad k = -2N+1,\dots,2N-1. \qquad (19)$$

The system (19) is solved recursively from $F_{0,2N}$. Thus, the range of known scattering data defines the step value $h$ and, therefore, the inversion accuracy. Calculation results for the potential function $V(r) = -3\exp(-3r/2)$ are presented in Figs. 1 and 2, where $h = 0.04$ and $R = 4$. The S-matrix was calculated at the points shown in Fig. 1 up to $q = 8$ and interpolated by a quadratic spline in the range $0 < q < 8$. For $q > 8$ the S-matrix was approximated by its asymptotic form $S(q) \approx \exp(-2\imath A/q)$, where $A$ was calculated at $q = 8$.
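As a rough illustration of the $l = 0$ pipeline in Python (assuming no bound states, a direct quadrature of Eq. (13) instead of the recursive Fourier-series solution (19), and illustrative function names throughout), the sketch below builds $F(kh)$ from a sampled S-matrix, assembles $F_{k,j} = F((k+j)h)$, solves the system (12) with the overlap integrals (11), and differentiates $L(r,r)$ as in Eq. (6).

```python
import numpy as np

def overlap(k, kp, p, h):
    # Integral over [p*h, inf) of Delta_k(t) * Delta_kp(t) for the
    # triangular basis (8); closed form behind Eq. (11).
    if k == kp:
        if k > p:
            return 2 * h / 3
        return h / 3 if k == p else 0.0
    if abs(k - kp) == 1 and k + kp > 2 * p:
        return h / 6
    return 0.0

def kernel_values(S, q_grid, h, n_points):
    # F(kh) for l = 0 from Eq. (13) by quadrature, no bound states.
    # Uses S(-q) = S*(q) of the unitary case, so the full line integral
    # is twice the real part of the q > 0 half.
    F = np.empty(n_points)
    for k in range(n_points):
        integrand = ((1 - S) * np.exp(1j * q_grid * k * h)).real
        F[k] = np.trapz(integrand, q_grid) / np.pi
    return F

def reconstruct_potential(F_line, h):
    # Solve the system (12) for P_{p,k}, then apply Eq. (6).
    N = len(F_line) // 2
    F = np.array([[F_line[k + j] for j in range(N + 1)]
                  for k in range(N + 1)])
    L_diag = np.empty(N + 1)
    for p in range(N + 1):
        I = np.array([[overlap(k, kp, p, h) for kp in range(N + 1)]
                      for k in range(N + 1)])
        A = np.eye(N + 1) + I @ F          # coefficient matrix acting on P_{p,:}
        P_p = np.linalg.solve(A.T, -F[p, :])
        L_diag[p] = P_p[p]                 # L(hp, hp)
    return -2 * np.gradient(L_diag, h)     # V at r = 0, h, ..., Nh
```

In practice the integrand decays only through $1 - S(q) \to 0$, so the momentum grid must extend far enough (up to $\pi/h$, consistent with the accuracy statement above) for the quadrature to be meaningful.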
III. ENERGY-INDEPENDENT OPTICAL $^{1}S_{0}$ NN POTENTIAL
Realistic potentials derived unambiguously from inverse theories should describe scattering data from zero to infinite energy. It seems that this is only possible if the available scattering data approach the asymptotic region below the relativistic region. This is not strictly necessary, however, because relativistic two-particle potential models may be presented in non-relativistic form [39]. Another problem is the presence of closed channels whose characteristics are not known. It is usually assumed (for example, for an NN system) that below the inelasticity threshold the effects of closed channels can be neglected, and a real NN potential may describe the interaction of nucleons. This assumption is a consequence of the ingrained misconception that a complex potential corresponds to a non-unitary S-matrix. One can only assert that the S-matrix is unitary for a real potential.
We have carefully analyzed the Marchenko theory [2,3] and found that it applies not only to unitary S-matrices but also to non-unitary S-matrices describing absorption. That is, the Marchenko theory equations allow us to reconstruct a local and energy-independent OP from an absorbing S-matrix and the corresponding bound states' characteristics. We present an absorptive single partial channel S-matrix on the $q$-axis as

$$S(q) = S_u(q) + S_n(q), \qquad S(-q) = S^{+}(q), \qquad (20)$$

where the superscript $+$ means Hermitian conjugation. For $q > 0$ we define

$$S_u(q) = e^{2\imath\delta(q)}, \qquad S_n(q) = -\sin^{2}(\rho(q))\,e^{2\imath\delta(q)}, \qquad (21)$$

where $\delta(q)$ and $\rho(q)$ are the phase shift and inelasticity parameter, correspondingly. In this case we have, instead of Eqs. (19), an analogous system (22) with kernel coefficients (23) determined by the absorptive S-matrix.
IV. RESULTS AND CONCLUSIONS
We applied the developed formalism to analyze the $^{1}S_{0}$ NN data. As input data for the reconstruction, we used modern phase shift analysis data (single-energy solutions) up to 3 GeV [41,42]. We smoothed the phase shift and inelasticity parameter data for $q > 3$ fm⁻¹ by the following functions:

$$\delta(q) \sim -54.56822/q^{3} + 57.55296/q^{2} - 15.36687/q,$$
$$\rho(q) \sim 101.89881/q^{3} - 80.13493/q^{2} + 15.88984/q, \qquad (24)$$

where we fitted the coefficients by the least-squares method. The asymptotics (24) were used to calculate the coefficients of Eqs. (19) with $h = 0.0125$ fm, corresponding to $q_{max} \approx 251.3$ fm⁻¹.
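A small Python sketch of how the smoothed asymptotics (24) feed the definitions (21) follows; treating $\delta$ and $\rho$ as radians and the choice of momentum grid are assumptions of this illustration.

```python
import numpy as np

def delta_asym(q):
    # Phase shift asymptotics for q > 3 fm^-1, Eq. (24).
    return -54.56822 / q**3 + 57.55296 / q**2 - 15.36687 / q

def rho_asym(q):
    # Inelasticity parameter asymptotics, Eq. (24).
    return 101.89881 / q**3 - 80.13493 / q**2 + 15.88984 / q

def s_matrix(q):
    # Absorptive S-matrix from Eq. (21): S = S_u + S_n.
    d, r = delta_asym(q), rho_asym(q)
    s_u = np.exp(2j * d)
    s_n = -np.sin(r) ** 2 * np.exp(2j * d)
    return s_u + s_n

h = 0.0125                                # fm
q = np.linspace(3.0, np.pi / h, 4000)     # up to q_max ~ 251.3 fm^-1
S = s_matrix(q)
```

Since $S_u + S_n = \cos^{2}(\rho)\,e^{2\imath\delta}$, the modulus $|S| = \cos^{2}(\rho)$ drops below one wherever $\rho \neq 0$, which is exactly the absorptive regime an optical potential is meant to describe.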
Results of our calculations show that these data are described by an energy-independent optical partial potential (Figs. 3, 4). Thus, we have presented a solution of the quantum scattering inverse problem for zero orbital angular momentum, the algorithm of which is as follows. We set the step value $h$, which determines the required accuracy of the potential. From the scattering data, we determine $F_{0,k}$ from Eqs. (19) for a unitary S-matrix or from Eqs. (22) for a non-unitary S-matrix. Solution of Eqs. (12) gives the values of $P_k(hp)$ ($p = 0,\dots,N$). Further, the values of the potential function (6) are determined by some finite difference formula. Expressions (7)-(14) give a method for the numerical solution of the Marchenko equation for an arbitrary orbital angular momentum $l$, and may be generalized to the case of coupled channels.
Our results contradict the conclusions of [18] claiming that an OP with a repulsive core exhibits a strong energy dependence. On the contrary, we reconstructed a local and energy-independent NN soft-core OP with Re(V(0)) ≈ 14 GeV and Im(V(0)) ≈ 19 GeV.
It may be that some local OPs lead to an unsatisfactory description of nuclear reactions [30] ((d,p) transfer reactions). Our approach assumes the inverse scattering reconstruction of a local and energy-independent OP describing all two-particle scattering data (including the high energy asymptotics) and bound states. Such OPs have not been used in nuclear calculations, though they may give an adequate description of the off-shell behaviour of the nucleon-nucleon interaction.
The reconstructed $^{1}S_{0}$ NN optical potential may be requested from the author in Fortran code. | 2021-04-09T01:15:45.192Z | 2021-04-08T00:00:00.000 | {
"year": 2021,
"sha1": "22beac867335fb337a4c1ace693463b5f7d508ed",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2104.03939",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "22beac867335fb337a4c1ace693463b5f7d508ed",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
261890496 | pes2o/s2orc | v3-fos-license | Developmental Dysplasia of the Hip (DDH): Etiology, Diagnosis, and Management
Developmental dysplasia of the hip (DDH) is a complex disorder that refers to different hip problems, ranging from neonatal instability to acetabular or femoral dysplasia, hip subluxation, and hip dislocation. It may result in structural modifications, which may lead to early coxarthrosis. Despite identifying the risk factors, the exact aetiology and pathophysiology are still unclear. Neonatal screening, along with physical examination and ultrasound, is critical for the early diagnosis of DDH to prevent the occurrence of early coxarthrosis. This review summarizes the currently practised strategies for the detection and treatment of DDH, focusing particularly on current practices for managing residual acetabular dysplasia (AD). AD may persist even after a successful hip reduction. Pelvic osteotomy is required in cases of persistent AD. It could also be undertaken simultaneously with an open hip reduction. Evaluation of the residual dysplasia (RD) of the hip and its management is still a highly active area of discussion. Recent research has opened the door to discussion on this issue and suggested treatment options for AD. But there is still room for more research to assist in managing AD.
Introduction And Background
Developmental dysplasia of the hip (DDH) is a complex disorder that refers to different hip problems, including neonatal instability, acetabular or femoral dysplasia, hip subluxation, and hip dislocation [1][2][3]. DDH has replaced the previous term 'congenital dislocation of the hip (CDH)', as several manifestations of DDH may not be detectable at the time of birth and may appear at a later stage [4]. Also, "congenital" has been replaced by "developmental" because the spectrum of the disease extends from acetabular dysplasia (AD) to complete dislocation [5].
Early detection and treatment are critical for improving pediatric quality of life. Delayed diagnosis and treatment at a later stage entail extensive surgery, which comes with greater difficulties and a worse functional outcome [6]. Untreated dysplasia may lead to severe discomfort, pain, and osteoarthritis, requiring total hip arthroplasty [7]. Management is significantly influenced by the patient's age and the severity of the dysplasia. The focus is on obtaining a concentric femoral head reduction and promoting acetabular and proximal femoral development. A Pavlik harness or a rigid abduction orthosis is used as the first step in early treatment for children. Patients who do not respond to brace treatment or present late require closed or open reduction (OR) and Spica casting. AD may persist even after a successful hip reduction. Pelvic osteotomy is required in cases of persistent AD; it can also be done simultaneously with an open hip reduction [2,5]. Evaluation of the residual dysplasia (RD) of the hip and its management is still a highly active area of debate. Recent research has provided insight into this issue and suggested treatment options for AD, but there is still room for further research to contribute to the better management of AD. This review describes the epidemiology, etiopathogenesis, and diagnosis of DDH and summarizes the current trends in managing residual AD.
Normal Hip
The hip joint is made up of the acetabulum and the proximal femur. The joint also comprises the capsule, the teres ligament, the transverse ligament, and the pulvinar. In growing children, the acetabulum is a hemispherical complex structure formed by the pubis, ischium, and ilium. The outer surface of the acetabulum is formed by horseshoe-shaped articular cartilage. The cartilage of the acetabulum continues medially as the triradiate cartilage, and together they form the acetabular cartilage complex [8]. The labrum is attached to the outer edge of the acetabulum, thereby increasing the acetabular depth and helping to keep the hip stable [9]. If the head of the femur is not directly in contact with the acetabulum, the latter does not develop properly and becomes flat in shape [10]. At birth, the proximal femur is entirely made up of cartilage. The cephalic nucleus begins to ossify at the age of six months, while the ossification of the trochanteric nucleus is initiated at the age of five to six years [11].
Dysplastic Changes in Hip
Dysplastic changes affect all structures of the hip: the acetabulum, the proximal femur, and the soft-tissue components. The aberrant pressure exerted on the labrum by a dislocated or subluxated femoral head promotes fibrocartilage hypertrophy and the formation of fibrous tissue. A labral inversion may be present in dislocated hips, which makes reduction difficult. The limbus, which can be everted or inverted, is the thickened labrum. In some cases, the hyaline cartilage in the posterosuperior region of the acetabulum thickens to form a crest, termed the neolimbus [12]. The neolimbus develops due to eccentric pressure exerted by the femoral head and divides the acetabulum into two cavities: the primary acetabulum on the medial aspect and the secondary acetabulum laterally. When the hip is reduced, the neolimbus disappears [13]. Several abnormalities are seen in the proximal femur, including a shortened femoral neck and delayed development of the secondary ossification centre. The valgus and anteversion of the dysplastic femur are exaggerated. However, there is disagreement regarding femoral anteversion between the affected and unaffected sides [14].
Natural history
The term DDH may refer to one of four clinical patterns: hip instability, AD, hip subluxation, and dislocation [3]. The Barlow and Ortolani maneuvers show that hip dysplasia produces instability in the first few months after birth. Hip instability is common in infants, with a prevalence of 1% to 1.5% and an incidence of 5 per 1,000 in boys and 13 per 1,000 in girls. A spontaneous improvement is observed in approximately 90% of children with mild instability during the first two months of life [15]. This spontaneous resolution is caused by a reduction in relaxin levels and an increase in muscle tone. Only 1.2% of neonatal hip instability occurrences necessitate surgical intervention [16]. Persistent DDH left untreated results in a sequence of anatomical alterations that change the joint biomechanics by raising tension on a reduced-contact articular surface. The maintenance of increased articular pressures for lengthy periods promotes articular cartilage degradation and early coxarthrosis; there is a well-established link between AD and coxarthrosis [17]. In the case of subluxation, coxarthrosis nearly always develops when patients are in their 30s and 40s [18]. In true dislocation, whether unilateral or bilateral, the outcome depends on whether the femoral head articulates with the ilium. In the bilateral case, where the femoral head does not articulate with the ilium, individuals have a pain-free, excellent range of motion, but they have a waddling gait, hyperlordosis, and back pain. If the femoral head articulates with the ilium at any point, these patients develop disabling degenerative joint disease and require arthroplasty very early in life. Patients with unilateral dislocation develop leg-length discrepancy, an unsteady gait, valgus deformity of the knee, lateral-compartment degenerative joint disease, and possibly secondary scoliosis.
Etiology and pathogenesis
The optimal growth of the hip joint depends upon two main factors: first, the concentric reduction of the femoral head, and second, an adequate balance of growth between the acetabular and triradiate cartilages. Any imbalance in these, whether during fetal development or postnatal growth, will result in abnormal hip development. The dynamic femoral and acetabular interactions are crucial in the development of the hip joint. The complex nature of this condition is due to a mix of genetic, environmental, and mechanical factors. Various etiological theories of DDH have been proposed in the literature, highlighting hormonal, mechanical, and genetic factors.
Risk Factors
The hormonal theory: The hormonal theory assigns a significant role to hormones in the development of hip dysplasia. It is based on an imbalanced ratio of estrogen to progesterone: a progesterone-rich environment can promote dislocation, whereas an estrogen-rich environment can inhibit it [15].
Fetal packaging deformity: The mechanical factors are usually related to restricted space in utero, resulting in fetal packaging deformities. This may be seen with a first baby, or when the baby grows inside one horn of a bicornuate uterus where space is limited; if the baby is also relatively large, a packaging deformity may follow [12].
Breech delivery: One of the most important mechanical risk factors for DDH is breech presentation at birth. A 25% risk of DDH exists for neonates born after being in the breech position, and about 30 to 50% of patients with DDH have a history of breech delivery. During breech delivery, the hips and knees are held in extension, and the subsequently increased flexion results in contraction of the iliopsoas muscle, thereby further dislocating the joint [19].
Swaddling: In a newborn infant, the normal hip posture is flexion and abduction, and the maintenance of acetabulo-femoral contact promotes hip growth. Although the majority of AD identified on neonatal hip ultrasound recovers spontaneously, swaddling may promote deformity in infants, and DDH is more likely where swaddling is common practice [20]. Swaddling has gained popularity in several developed countries in recent years due to its benefits for newborn sleep. The traditional infant wrapping with the lower limbs extended and adducted among the Saudi population has been proposed as a predisposition to hip dislocation and future progression to an unstable hip joint [21].
Familial predisposition: An inherited predisposition has been well established in the literature. First-degree relatives have a 12-fold greater risk of DDH, whereas second-degree relatives have a relative risk of only 1.7. In cases of familial aggregation of DDH, changes in genes such as CX3CR1 have been detected [22].
Hundt et al., in a meta-analysis, emphasized that only breech presentation, female sex, clicking hips on examination, and familial aggregation were found to increase the chance of developing DDH [23]. However, the majority of DDH patients, including those who require treatment, often do not exhibit any risk factor other than being female [24].
Diagnosis
Clinical Examination
In newborns: All neonates, in particular those displaying risk factors for DDH, should undergo a thorough clinical assessment. The Ortolani test and the Barlow maneuver should both be included in routine screening, and each hip should be checked separately for instability [25,26]. For the physical examination, the infant should be laid down on a flat, warm surface in a quiet environment. In the Ortolani reduction test, the newborn should be placed in the supine position with hip flexion kept at 90 degrees (Figure 1A). The examiner should then place the index and middle fingers on the lateral aspect of the baby's greater trochanter, while keeping the thumb medially at the groin crease. The pelvis is stabilized by keeping the contralateral hip steady while the other hip is being evaluated. At the same time, an upward push is exerted laterally through the greater trochanter. Sensing a clunk is considered a positive result for the Ortolani test, indicating a dislocated and reducible hip. In the Barlow dislocation test, the first step is stabilizing the pelvis. The patient's position is maintained similarly to that for the Ortolani test, with the hip adducted. Then, a gentle downward force is exerted longitudinally along the femoral axis; a palpable sensation identifies any posterior subluxation or dislocation (Figure 1B).
FIGURE 1: Tests for hip instability or dislocation in the newborn infant: (A) Ortolani's test; (B) Barlow's provocative test
In older children: Examination of the extremities of infants and toddlers involves meticulous assessment of skin folds and/or discrepancies in leg length that may occur in unilateral hip dislocation. Asymmetrical limitation of abduction may also aid in identifying children with hip dislocation. Hip dislocation can also be detected by a positive Galeazzi sign [27]: the child is laid in a supine position with the hips and knees flexed, and an unequal height of the knees indicates a positive test. In neglected cases, when children reach walking age, they limp on the affected side, with a positive Trendelenburg sign and hyperlordosis.
Imaging
Ultrasonography: Because the femoral head and acetabulum are predominantly composed of cartilage, standard radiographs have poor diagnostic value in neonates [28]. Ultrasonography is the investigation of choice for DDH in the first six months of life. It is particularly useful for evaluating subtle subtypes of the disorder when the clinical examination is inconclusive. Moreover, it is the only imaging modality that provides real-time 3D images of the hip joints of newborns. Other benefits include the avoidance of radiation, hip joint puncture, contrast medium, and sedation. It provides a detailed evaluation of the cartilaginous femoral head and demonstrates the relationship of the head to the bony as well as the cartilaginous acetabulum [28]. Graf et al. developed a strategy based on the morphological features of the hip, requiring the calculation of two angles: the alpha angle, between the ilium and the osseous acetabular wall, and the beta angle, between the ilium and the labral cartilage [29].
Radiography: The radiographic examination becomes a more useful method of evaluating hip development as the femoral head ossifies. Several classic lines on the X-ray of the immature pelvis guide the assessment of DDH (Figure 2) [30]. Hilgenreiner's line joins the two triradiate cartilages. Perkin's line extends along the lateral border of the acetabulum and, in a normal hip, is at right angles to Hilgenreiner's line. Shenton's line is a curve that starts at the lesser trochanter, extends upwards along the medial border of the femoral neck, and connects to a line along the inner margin of the pubis. In a normal hip, Shenton's line is smooth and continuous; it becomes discontinuous when the hip is subluxated or dislocated. The angle formed at the intersection of Hilgenreiner's line and the line drawn along the roof of the acetabulum is called the acetabular index. It measures how much the roof of the acetabulum is inclined, and it changes as the baby grows. This is the most frequently employed parameter in assessing the morphological features of the acetabulum. The angle is 27.5 degrees in normal newborns, 23.5 degrees at six months, and about 20 degrees by the second birthday. Generally, 30 degrees is considered the normal upper limit, and a notable increase in this value is considered a sign of AD [31].
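As an illustration of how such an angle can be measured from digitized landmarks, the sketch below computes the acetabular index as the acute angle between Hilgenreiner's line (through the two triradiate cartilages) and the line along the acetabular roof; the pixel coordinates are hypothetical values, not taken from any actual radiograph.

```python
import math

def line_angle_deg(v1, v2):
    """Acute angle in degrees between the lines spanned by two 2D vectors."""
    dot = abs(v1[0] * v2[0] + v1[1] * v2[1])
    cross = abs(v1[0] * v2[1] - v1[1] * v2[0])
    return math.degrees(math.atan2(cross, dot))

# Hypothetical landmark pixel coordinates (x, y) on an AP pelvis radiograph.
right_triradiate = (100.0, 200.0)
left_triradiate = (300.0, 200.0)           # with right_triradiate: Hilgenreiner's line
lateral_acetabular_edge = (60.0, 175.0)    # lateral bony margin of the right acetabulum

hilgenreiner = (left_triradiate[0] - right_triradiate[0],
                left_triradiate[1] - right_triradiate[1])
roof = (lateral_acetabular_edge[0] - right_triradiate[0],
        lateral_acetabular_edge[1] - right_triradiate[1])

ai = line_angle_deg(hilgenreiner, roof)
print(f"Acetabular index: {ai:.1f} degrees")   # values well above 30 suggest AD
```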
FIGURE 2: Reference lines and angles used in the evaluation of DDH
However, it is important to consider the variations in normal indices among the various research papers, as well as the method used for measuring the acetabular index. Novais et al. and Tonnis both positioned the horizontal Hilgenreiner line at the lower lateral iliac edge on the triradiate cartilage [28,32]. Novais et al. selected the lateral margin of the weight-bearing sourcil, whereas Tonnis used the lateral bony margin, as shown in Figure 3. However, disagreement on the landmark for the lateral margin persists.
FIGURE 3: Two different measuring reference points. Novais et al. used the lateral edge of the weight-bearing sourcil (point A), while Tonnis used the lateral bony edge (point B). H indicates Hilgenreiner's line.
The Wiberg centre-edge angle (CEA) is formed by Perkin's line and the line from the centre of the femoral head to the lateral edge of the acetabulum. Because of the difficulty in locating the centre of the femoral head, it exhibits significant variability in the first three years of life. In older children, it measures how much of the femoral head is covered by the acetabulum. In children aged 3 to 17 years, an angle of less than 15 degrees is abnormal. In adults, angles greater than 25 degrees are categorized as normal, while values under 20 degrees are regarded as abnormal [32]. Severin's classification evaluates the hip at maturity and correlates well with long-term radiological, clinical, and functional hip results [33].
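A similar landmark-based sketch for the CEA follows; since Perkin's line is vertical on the radiograph, the CEA reduces to the angle that the centre-to-edge line makes with the vertical. The coordinates are again hypothetical, and taking absolute values ignores the signed convention needed for a laterally subluxated head.

```python
import math

# Hypothetical landmarks (x, y) on an AP radiograph; y increases downward.
femoral_head_centre = (120.0, 260.0)
lateral_acetabular_edge = (95.0, 180.0)

# Perkin's line is vertical, so the CEA is the angle between the vertical
# direction and the line from the femoral head centre to the acetabular edge.
dx = lateral_acetabular_edge[0] - femoral_head_centre[0]
dy = lateral_acetabular_edge[1] - femoral_head_centre[1]
cea = math.degrees(math.atan2(abs(dx), abs(dy)))
print(f"Centre-edge angle: {cea:.1f} degrees")  # < 20 degrees is abnormal in adults
```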
CT and MRI: CT is among the imaging modalities used to assess reduction quality after closed or open reduction in a Spica cast [34]. CT contributes towards evaluating dysplasia in adolescents and young adults and allows better selection of the type of surgery required, such as pelvic or femoral osteotomies. A limited CT emits little ionizing radiation, although MRI is now successfully employed to eliminate radiation exposure altogether [35]. MRI is considered a predictor of avascular necrosis (AVN) after closed reduction in DDH. In addition, MRI is a useful tool for the detection and assessment of labral abnormalities [35].
Arthrography: Arthrography is beneficial in the non-ossified skeleton because it facilitates the assessment of the soft tissues and cartilage of the femoral head and acetabulum. As a result, it is frequently utilized as an intraoperative dynamic test to determine the quality of reduction and hip joint stability, and it is critical in deciding between closed and open reduction [36].
Treatment of DDH
The treatment of DDH depends on the patient's age at the time of diagnosis and aims at a concentric reduction of the femoral head into the acetabulum (Figure 4) [37,38].
Newborn to six months of age
Patients should ideally be diagnosed and managed during infancy. Hip subluxation, which usually resolves spontaneously, can be observed for three weeks without any treatment. The commencement of treatment is recommended after three weeks if evidence of subluxation persists on physical and ultrasonographic assessment [2]. When the hip joint is fully dislocated at the neonatal stage, it is advised to start treatment immediately. Hip reduction is easier at this age, and the Pavlik harness is the most often used orthosis during this period. In most cases, hip reduction enables the acetabulum to normalize itself at this age. In the 1940s, Arnold Pavlik invented the "harness with stirrups" [38,39]. When the hip and knee are flexed, with the hip in an abducted position, and dynamic hip movements are allowed, the hip adductor contracture relaxes and the hip reduces spontaneously during abduction movements [2,38]. It is recommended to wear the Pavlik harness with the hips abducted between 30° and 60°. The major objective is to achieve a spontaneous and painless realignment and to centralize the femoral head in children from the neonatal age until the age of six to 10 months, so as to obtain optimal structural and functional outcomes [2].
The Pavlik harness has a 95% success rate in cases of AD or hip subluxation and an 80% success rate in cases of hip dislocation [2]. According to various publications, it is the most widely used approach for managing pediatric DDH from birth to six months and remains the standard treatment [27,36,38]. It is safe and extremely effective. Residual AD still poses a substantial challenge after orthopaedic intervention: one study reported that after successful closed reduction with the Pavlik harness, about 30% of patients had AD [38,40]. The harness is associated with a few complications, which occur rarely if it is used appropriately. AVN of the femoral head is the most serious complication and is associated with excessive abduction of the hip. Placing the harness such that the hips are flexed excessively may dislocate the joint in a downward direction or even result in femoral nerve palsy [2].
Persistent dysplasia or instability between six and 18 months
With increasing age, hip reduction becomes more challenging and the effectiveness of the Pavlik harness decreases. If hip reduction fails or the child is older than six months, a closed or open reduction with Spica cast immobilization is indicated. Dynamic arthrography under fluoroscopy is recommended for evaluating the quality of reduction and for determining whether the reduction should be closed or open [2,5,9].
Closed Reduction
For children older than six months, closed reduction and Spica cast immobilization are performed under general anaesthesia, with the hips flexed at 90 to 100 degrees and with well-controlled abduction. Immobilization should not be performed in a position of excessive hip abduction. Serial radiographs are used to monitor hip development. It has been reported that the majority of patients who achieve successful closed reduction may require additional treatment after 18 months, as a sizable number of individuals have persistent AD necessitating future acetabular osteotomies [41]. Forced closed reduction in the presence of interposed structures leads to poor outcomes and an elevated risk of AVN [42].
Open Reduction
With age, the likelihood of requiring OR increases. OR is recommended when closed reduction has failed to bring the dislocated hip into a stable, concentric position. Although OR is challenging, concentric reduction promotes normalization of AD because of the acetabulum's growth potential [37]. Once OR is achieved, maintenance in a cast for three months facilitates hip stabilization.
Older than 18 months
When the hip dislocation is not detected early, secondary alterations take place in the soft tissues around the joint and subsequently in the proximal femur and the acetabulum. AD may still occur even if the reduction is carried out within the first few months of life, and the potential of a dysplastic acetabulum to normalize diminishes with age [2,15]. Up to 19% of patients successfully treated with the Pavlik harness develop RD. Similarly, persistent dysplasia may occur in 22% to 33% of patients who have had a closed or open reduction [43,44]. The age of the patient at the time of surgery may account for this variability [2,15]. With persistent hip dislocation, significant secondary adaptive alterations exacerbate the pathophysiology of hip dysplasia. Surgery is usually required to reconstruct the acetabulum and the femur, and release of the periarticular soft tissues is usually necessary in older children. When indicated, reconstruction may include a pelvic or femoral osteotomy [2,43].
Femoral osteotomies can facilitate reduction by shortening the femur and reorienting the femoral head through derotation [45]. A varus osteotomy of the proximal femur is intended to stabilize the hip, stimulate acetabular growth, and reduce the rate of osteonecrosis. These techniques are based on the controversial concepts of coxa valga and increased femoral anteversion. Subluxation of the hip is frequently believed to recur because of femoral anteversion, necessitating derotational osteotomy to maintain a stable hip reduction [40]. The indications for femoral derotational osteotomy remain unclear due to a lack of consensus: although earlier research suggests the common use of derotational osteotomy, recent studies do not recommend it routinely [46]. It has recently been recommended that, because of the inconsistency of femoral anteversion, derotational osteotomy be performed on a case-by-case basis [46]. Pelvic osteotomy is recommended in cases where AD persists or is detected late; it increases the coverage of the femoral head on the acetabular side. In recent years, there has been a trend to perform an acetabular intervention during primary treatment to optimize the chances of normal acetabular development [2,40,43,47]. Pelvic osteotomies can be organized into three subsets based on their intended effect on the acetabular morphology (Figure 5).
Redirectional Osteotomies
Re-directional pelvic osteotomy shifts the position of the acetabulum while leaving its shape and volume unchanged [2,36,43,38]. Because these osteotomies are performed via complete cuts of the innominate bone, they are unstable and require internal fixation. The three most commonly performed redirectional procedures are the Salter osteotomy, the triple osteotomy, and the periacetabular osteotomy (PAO) (Figure 6).
Reshaping Osteotomies
Reshaping osteotomies are ultimately aimed at achieving a congruently reduced femoral head and acetabulum [47]. These are incomplete innominate osteotomies and are associated with high correction rates of AD, as shown in Figure 7. Their objective is to restore acetabular morphology by changing the shape of a capacious and wandering acetabulum. They consist of an incomplete opening-wedge osteotomy in the periacetabular area, held open with a bone graft, which changes the acetabular slope, shape, and volume. These procedures are appropriately referred to as "acetabuloplasties" and are inherently stable; therefore, fixation is not necessary. The size, direction, and location of the opening wedge dictate the resulting change in acetabular coverage. Because these osteotomies rely on hinging through the triradiate cartilage, they are indicated only in skeletally immature patients. The three most commonly performed reshaping osteotomies are the Dega, San Diego, and Pemberton osteotomies. They are quite similar in their approach and vary slightly in the extent to which the inner table is cut and in how close the osteotomy is to the joint [43,48].
Salvage/Augmentation Procedures
Salvage osteotomies are utilized when concentric reduction is not possible; the goal is merely to increase the weight-bearing surface of the hip. Numerous factors influence the choice of pelvic osteotomy in DDH, including the surgeon's preference, the patient's age and skeletal maturity, and the congruity, morphological features, and volume of the hip joint itself [2,47].
Salvage osteotomies are recommended in cases where the femoral head and the acetabulum cannot be congruently reduced or where the hyaline cartilage is insufficient for femoral head coverage. Such an osteotomy may also be appropriate in cases of a painful subluxated hip or previous failed surgical interventions [2,36,48]. These procedures aim to increase the weight-bearing surface area of the hip by inducing metaplasia of the hip capsular tissue into fibrocartilage. Two commonly utilized salvage procedures are the Chiari and shelf osteotomies, as depicted in Figure 8.
FIGURE 4: Treatment algorithm for DDH according to age. DDH: developmental dysplasia of the hip
FIGURE 5: Treatment algorithm for residual AD. AD: acetabular dysplasia
FIGURE 7: Reshaping osteotomies: (a) Pemberton; (b) Dega; (c) San Diego, as viewed from the outer surface of the ilium | 2023-09-16T10:37:34.261Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "32e2f7a174d3e31d9dea5f676416e7add351dc3e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "32e2f7a174d3e31d9dea5f676416e7add351dc3e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244832169 | pes2o/s2orc | v3-fos-license | Risk Factors for Spoilage of Groundnut Seeds in Shops during Marketing
Post-harvest storage of oilseeds, particularly groundnut, is a real problem for farmers and traders, whose stocks are subject to attack by pests and fungal contaminants in the shops. In order to find alternative solutions to this problem, a survey was conducted in the markets of the communes of Abobo, Adjamé and Yopougon in the city of Abidjan, Côte d'Ivoire. The objective of this work is to evaluate the main risk factors for spoilage of groundnut seeds sold during storage in the Abidjan markets. To this end, a survey was conducted among 75 groundnut seed sellers in the three aforementioned communes of Abidjan, and the main risk factors favourable to spoilage of groundnut seeds sold during storage were identified. The lack of exact knowledge of the origin of the groundnut seeds sold (92 to 100%), the storage of groundnut seeds in polyethylene bags (84 to 100%), the lack of knowledge of spoilage (28 to 44%), the long periods of sale (22.2 to 86.7%), moisture (0 to 72.2%) and insect pests (5.6 to 20%) were identified as the main spoilage risk factors.
INTRODUCTION
Groundnut (Arachis hypogaea L.) is an annual legume that occupies an important place in the diet because of its protein and essential fatty acid content [1]. It is a staple food for many populations and improves the quality of diets [2]. In Côte d'Ivoire, its production increased steadily for several years, rising from 93.5 thousand tonnes in 2012 [3] to thousand tonnes in the 2018/2019 period [4]; over the last two years, however, production has not increased. Many losses are recorded, especially during storage, and the majority of these post-harvest losses are due to moulds [5]. These pathogenic fungal contaminants produce mycotoxins, a group of toxic substances with a range of adverse effects, including mutagenicity, carcinogenicity, and teratogenicity. The most frequent targets of these mycotoxin-producing fungi are cereals and oilseeds, particularly groundnuts. In Côte d'Ivoire, post-harvest groundnuts are marketed in the city of Abidjan, which is a cultural crossroads of West Africa and is characterized by rapid urbanization. Its economic activity is reflected in the presence of a multitude of markets where commercial shops are the places of supply and unloading of several foodstuffs, including groundnuts from the interior of the country. Once unloaded at the markets of Abidjan, the post-harvest groundnuts are stored in commercial shops before being used. Unfortunately, these groundnuts, which are widely consumed by the population in various forms, could pose a health problem due to their exposure to mould. Previous studies have revealed the presence of various pathogenic fungal genera and mycotoxins, notably aflatoxin B1 and ochratoxin A, in samples of groundnut paste taken from markets in the city of Abidjan [6][7]. Post-harvest prevention of these fungal contaminants, their secondary metabolites and other pests of groundnut seeds could be a better control strategy to minimize the risks of spoilage. To our knowledge, however, no study on post-harvest prevention techniques for groundnut seeds sold during storage in Côte d'Ivoire exists. This study, which aims to evaluate the main risk factors for spoilage of groundnut seeds sold during storage in the markets of Abidjan, therefore addresses a problem faced by both farmers and traders.
Survey Material
In order to carry out this study, the methodological approach adopted consisted of identifying the markets, developing the data collection tool, and collecting data through interviews and/or surveys using survey forms. Survey sheets were developed to collect information on the health risks associated with the storage and sale of groundnuts in three markets of Abidjan.
Presentation of the Study Area
The autonomous district of Abidjan, located in the south-east of Côte d'Ivoire, comprises five prefectures (departments), namely Abidjan-ville with its 10 communes (Abobo, Adjamé, Attécoubé, Cocody, Koumassi, Marcory, Plateau, Port-Bouët, Treichville and Yopougon), Anyama, Bingerville, Songon and Brofodoumé. The communes studied here are Abobo, Adjamé and Yopougon (Fig. 1), chosen for their demographic and economic importance. The economic activity of these three communes is reflected in the presence of a multitude of markets and commercial shops where food products from the interior of the country are supplied and stored before being distributed to the other towns and communes of the Abidjan district.
Sampling
A pre-survey was carried out to identify in the field the places where groundnut seeds are sold in the three communes of the city of Abidjan. It lasted one month and allowed for observation and the design of a survey questionnaire. Based on this questionnaire, a survey was conducted among groundnut sellers in the markets of the three communes under study during the period from June to July 2018. This survey concerned 75 traders, 25 per commune. The questions addressed the types of groundnut seeds sold, their delivery time, their storage mode and duration, their origin, knowledge of their spoilage, the possible causes of spoilage, and the solutions adopted.
Knowledge of the exact origin of the peanuts sold
Fig. 2 shows that, regardless of the commune, the majority of traders do not know the exact origin of the groundnuts they sell. This is the case in the communes of Yopougon and Abobo, where 94% and 92% of traders, respectively, do not know the exact origin of the groundnuts sold. In the commune of Adjamé, none of the traders interviewed (100%) knew the exact origin of the groundnuts sold.
Varieties of groundnut seeds marketed
The survey of groundnut seed traders in the shops of the three communes revealed two varieties of groundnut seeds: a small-grain and a large-grain variety. On average, 38.7% of traders sell only the small-grain variety, compared to 24% for the large-grain variety, while 37.33% of shops sell both varieties together (Table 1). Table 2 shows the supply areas, by commune, for the groundnuts sold during storage in the markets. Of all respondents, 47.45% said that they obtain the groundnut seeds they sell mainly from the northern regions, 40% from the central regions, and 12.11% from eastern Côte d'Ivoire.
Distribution of Storage Conditions and Spoilage of Groundnut Seeds
Across the three communes surveyed, 93.33% of groundnut seeds were stored in polythene bags, compared to 6.67% in other containers. In Abobo (96%) and Yopougon (84%), most groundnut seeds are stored in polythene bags during sale, and in the commune of Adjamé all of them are (100%). During the marketing of groundnuts in the shops, the duration of storage generally varies from one week to more than one month. Of the traders interviewed, 28% sell all the stored groundnuts within a fortnight, 25.33% take more than a month, and 24% take a month; 21.33% of traders sell almost all the stored groundnuts in the first week of sale.
As for the spoilage of groundnut seeds in the shops, 64% of the traders interviewed said they had problems with it, compared to 36% who did not. According to these traders, spoilage is linked to several conditions, including storage time (43.33%), moisture (37.34%), insects (13%) and other factors (2.23%). Faced with the deterioration of groundnut seeds during storage, the majority of traders rejected the spoiled seeds (81.1%), while 12.66% opted to process them into groundnut paste and 6.26% chose solar drying (Table 3).
DISCUSSION
The survey conducted among traders highlighted their ignorance of the exact origin of the groundnut seeds sold in the shops. This could be due to the illiteracy of most traders, confirming the work of [10], who showed that 73.4% of female groundnut paste sellers in the markets of the city of Abidjan are illiterate. Indeed, in Côte d'Ivoire, illiteracy affects a large share of the adult population [11]; this high illiteracy is linked to difficulties in enrolment, retention and completion of primary education [12]. This study revealed that groundnut seeds are generally sourced from the north, centre and east of Côte d'Ivoire, in agreement with the work of [10], who indicated that the groundnut seeds used for the groundnut paste sold in the markets of the city of Abidjan came from areas covering the north, centre and east of Côte d'Ivoire. These results could be explained by the fact that the main groundnut production areas in Côte d'Ivoire cover the whole of the northern and central parts of the country, as indicated by the work of [13]. The majority of traders store groundnut seeds in polythene bags. These results corroborate those of [14] and [5], who reported that in Ghana and Côte d'Ivoire, respectively, the majority of groundnut seeds or pods are stored in polythene bags. The use of polyethylene bags for the storage of groundnut seeds could be explained by the absence of technical supervision structures that would regularly sensitise producers and traders to good post-harvest practices, as well as by the low cost of the polyethylene bag compared to others such as jute bags. Polyethylene bags are generally poorly ventilated, which is conducive to mould growth [15]: the lack of ventilation can cause temperature variations leading to condensation of water from the air, which increases the moisture content of the groundnut seeds and favours mould growth and aflatoxin synthesis.
Groundnuts remain in storage for weeks or even months while being sold; the storage period can be short or long depending on whether the selling price is suitable. The long periods over which groundnut seed stocks are sold could be explained by economic reasons: in order to recover the capital invested or to make a profit, traders are compelled to sell all the groundnut seeds available, whatever the duration. This long duration of sale could be one of the critical periods of contamination. Indeed, according to [16], the risk of contamination of groundnuts and groundnut products increases with long marketing periods due to poor practices and extensive handling. The risk of infection is particularly high when groundnuts are stored for a long time at the trader's premises, where facilities are not fully adequate.
A majority of traders (64%) claimed to be unaware of the spoilage of peanut seeds during storage. This means that the risk of contamination could be high owing to ignorance of poor storage of groundnut seeds inside the shops. According to [17], practices such as improper storage of groundnut seeds in shops contribute to their contamination with moulds and aflatoxins. Groundnut seeds are among the food products most susceptible to fungal contamination in the pre- and post-harvest stages because their content of proteins, oils, fatty acids, carbohydrates and minerals provides a rich medium for fungal growth [18][19]. According to the traders, the main causes of spoilage of groundnuts are storage time, moisture and insects. Most developing countries, notably Côte d'Ivoire, do not have good storage facilities in the informal sector, as is the case here. This could lead to cross-contamination of groundnut seeds, especially by insects, which degrade the nutritional quality of the groundnut and promote the development of moulds, as already reported by [20].
The majority of traders (81.1%) faced with spoilage reject the spoiled peanuts during marketing in the shops; according to them, the main way to offer healthy groundnut seeds is to sort out and discard the spoiled ones. However, other traders (12.66%) do not discard spoiled peanut seeds and instead use them for the production of peanut paste. This practice represents a real health risk for consumers. Indeed, previous work on the storage of peanut seeds and paste during marketing has revealed the presence of various pathogenic fungal genera [21][22] as well as mycotoxins, notably aflatoxin B1 [23]. Moreover, the level of exposure to aflatoxin B1 among consumers of peanut paste made from stored peanut seeds varies between 2.072 ng/kg/day and 2.193 ng/kg/day according to the work of [7]. According to the same authors, statistical modelling of these data using @RISK software leads to an estimated 10.1 to 15.6% of the population being at risk of aflatoxin B1 exposure above the tolerable daily intake of 1 ng/kg/day. This indicates a real cancer risk, since the corresponding margin-of-exposure values for cancer are well below the threshold value of 10,000. Other work has indicated the presence of aflatoxin in samples of peanut seeds and products, including peanut butters, sold in Haiti [24-25].
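To make the margin-of-exposure (MOE) reasoning explicit, here is a minimal sketch using the exposure range quoted above; the BMDL10 reference point is an illustrative placeholder rather than a value from this study, and the 10,000 threshold is the conventional cut-off below which exposure to a genotoxic carcinogen is considered a public-health concern.

```python
# Margin of exposure (MOE) = toxicological reference point / estimated exposure.
# An MOE below 10,000 for a genotoxic carcinogen such as aflatoxin B1 is
# conventionally interpreted as a public-health concern.

BMDL10_NG_PER_KG_DAY = 170.0    # illustrative reference point (hypothetical here)
exposures = [2.072, 2.193]      # ng/kg bw/day, the range reported in the text

for e in exposures:
    moe = BMDL10_NG_PER_KG_DAY / e
    verdict = "concern" if moe < 10_000 else "low concern"
    print(f"exposure = {e} ng/kg/day -> MOE = {moe:.0f} ({verdict})")
```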
CONCLUSION
The results of this study showed that the groundnut seeds traded during storage originate from the northern, central and eastern regions of Côte d'Ivoire. The lack of knowledge of the exact origin of the groundnut seeds sold, the storage of groundnut seeds in polyethylene bags, the lack of knowledge of spoilage, the long periods of sale, humidity and insects were identified as the main risk factors for spoilage of groundnut seeds sold during storage in the shops. The poor post-harvest practices applied to groundnut seeds sold during storage could have a health-risk impact on consumers.
RECOMMENDATIONS
In order to preserve the health of consumers of food products, and particularly of marketed groundnut products, recommendations must be made at different levels.
At the level of the public authorities, the Ivorian State must:
- Carry out regular unannounced checks in foodstuff marketing outlets;
- Put in place an arsenal of regulations and oblige all actors in the sector to respect good storage practices for food products in marketing shops;
- Raise awareness among the population about the dangers of fungal contamination of groundnut seeds in order to popularise good storage practices.
At the level of traders, they should:
- Use suitable jute bags for the storage of food products;
- Register with the Ministry of Trade and Handicrafts for better supervision and monitoring;
- Observe good hygiene practices during the sale of groundnuts.
samples for analysis in the course of the study. We would also like to thank those in charge of the Biotechnology and Food Microbiology Laboratory at the University of Nangui Abrogoua as well as all those who participated in this survey. | 2021-12-03T16:06:46.874Z | 2021-11-27T00:00:00.000 | {
"year": 2021,
"sha1": "809fc76b451633a251351f9fef325574488f2355",
"oa_license": null,
"oa_url": "https://www.journalbji.com/index.php/BJI/article/download/30150/56577",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f067febf46a32623e21ce52ed639e60a98b01767",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |